url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28557/comments | https://api.github.com/repos/huggingface/transformers/issues/28557/events | https://github.com/huggingface/transformers/pull/28557 | 2,086,242,029 | PR_kwDOCUB6oc5kUNpJ | 28,557 | Fix duplicate & unnecessary flash attention warnings | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28557). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts Thank you. `logger.warning_once` works, however it does not in case we load several time models in the same python process:\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoConfig, LlamaForCausalLM\r\nimport torch\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(\"fxmarty/tiny-llama-fast-tokenizer\", use_flash_attention_2=True, torch_dtype=torch.float32)\r\n\r\nprint(\"LOAD AGAIN\")\r\nmodel = LlamaForCausalLM.from_pretrained(\"fxmarty/tiny-llama-fast-tokenizer\", use_flash_attention_2=True, torch_dtype=torch.float32)\r\n```\r\n\r\nHere the second call does not log anything at all with `logger.warning_once`.\r\n\r\n---\r\n\r\nThe thing is that there are three ways to load models in transformers: `from_pretrained`, `from_config`, and `__init__`. In order to keep the same behavior for all three w.r.t. attention implementation, we call `_autoset_attn_implementation` in these three methods. This results in `_autoset_attn_implementation` being called twice in case we initialize a model from `PreTrainedModel.from_pretrained` or `PreTrainedModel.from_config` (as they call in turn `PreTrainedModel.__init__`).\r\n\r\nFor example, from the above script without any fix, we may get this log:\r\n```\r\nFlash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained(\"openai/whisper-tiny\", attn_implementation=\"flash_attention_2\", torch_dtype=torch.float16)`\r\nYou are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.\r\nFlash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained(\"openai/whisper-tiny\", attn_implementation=\"flash_attention_2\", torch_dtype=torch.float16)`\r\nFlash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaModel is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained(\"openai/whisper-tiny\", attn_implementation=\"flash_attention_2\", torch_dtype=torch.float16)`\r\n\r\n```\r\n\r\nwhich is the second point in my original post - we want to avoid such duplicate logs (here duplicate for `LlamaForCausalLM`).\r\n\r\n@amyeroberts Currently I fixed with your suggestion of using `logger.warning_once`. Let me know if you prefer it that way despite the issue I mentioned just above.",
"> The thing is that there are three ways to load models in transformers: from_pretrained, from_config, and __init__. In order to keep the same behavior for all three w.r.t. attention implementation, we call _autoset_attn_implementation in these three methods. This results in _autoset_attn_implementation being called twice in case we initialize a model from PreTrainedModel.from_pretrained or PreTrainedModel.from_config (as they call in turn PreTrainedModel.__init__).\r\n\r\nWhy do we need `_autoset_attn_implementation` to be in `_from_config` or `from_pretrained` if it's already in `__init__`? ",
"@amyeroberts Because of the following: https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/src/transformers/modeling_utils.py#L3579-L3586\r\n\r\n`device_map`, `use_flash_attention_2`, `torch_dtype` are arguments that are not passed to `XXXModel.__init__`, `PretrainedModel.__init__`.",
"I also think that `torch_dtype` and `device_map` should be passed in here: https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/src/transformers/modeling_utils.py#L1314, as https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/src/transformers/modeling_utils.py#L1335 will use these parameters.",
"yes, that's in part what this PR is for!",
"> is an indication that we're probably not drawing the right boundaries around what is set and when.\r\n\r\nAgreed.",
"Hey folks! when will the changes of this PR going to be released? I see it got merged 3 weeks ago and the recent release is 2 weeks ago (v4.37.2) which doesn't include these changes.\r\n\r\nhttps://github.com/huggingface/transformers/blob/345b9b1a6a308a1fa6559251eb33ead2211240ac/src/transformers/modeling_utils.py#L1321",
"@rohitgr7 This was not picked in the patch release 4.37.2. Unless there are more critical fixes needed, I guess this will make it in 4.38.0."
] | 1,705 | 1,708 | 1,706 | COLLABORATOR | null | Completes the fixes from https://github.com/huggingface/transformers/pull/28142, which finally closes https://github.com/huggingface/transformers/issues/28052
With this PR:
* no log is shown when loading from `from_config` when a suitable `torch_dtype` (fp16, bf16) is set, while previously erroneous logs were shown (https://github.com/huggingface/transformers/issues/28052#issuecomment-1856811089)
* no duplicate logs are shown (duplicates previously occurred because `_autoset_attn_implementation` was called both in `from_pretrained`/`from_config` and `__init__`) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28557",
"html_url": "https://github.com/huggingface/transformers/pull/28557",
"diff_url": "https://github.com/huggingface/transformers/pull/28557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28557.patch",
"merged_at": 1706258225000
} |
https://api.github.com/repos/huggingface/transformers/issues/28556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28556/comments | https://api.github.com/repos/huggingface/transformers/issues/28556/events | https://github.com/huggingface/transformers/pull/28556 | 2,086,192,123 | PR_kwDOCUB6oc5kUCyF | 28,556 | Feature Update [added `initial_prompt` support for automatic-speech-recognition whisper pipeline] | {
"login": "Biswajit2902",
"id": 10162006,
"node_id": "MDQ6VXNlcjEwMTYyMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10162006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biswajit2902",
"html_url": "https://github.com/Biswajit2902",
"followers_url": "https://api.github.com/users/Biswajit2902/followers",
"following_url": "https://api.github.com/users/Biswajit2902/following{/other_user}",
"gists_url": "https://api.github.com/users/Biswajit2902/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Biswajit2902/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Biswajit2902/subscriptions",
"organizations_url": "https://api.github.com/users/Biswajit2902/orgs",
"repos_url": "https://api.github.com/users/Biswajit2902/repos",
"events_url": "https://api.github.com/users/Biswajit2902/events{/privacy}",
"received_events_url": "https://api.github.com/users/Biswajit2902/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi thank you your code saved my day! I think line 535 needs to modify a bit `prompt_tensor = torch.tensor(generate_kwargs[\"prompt_ids\"], dtype=out[\"tokens\"].dtype).cuda() if is_torch_cuda_available else torch.tensor(generate_kwargs[\"prompt_ids\"], dtype=out[\"tokens\"].dtype)`, and add `is_torch_cuda_available` to line 22. without cuda it'll run on cpu which is a lot slower. ",
"@kaminwong , this is just to modify the output sequence to avoid showing `inital_prompt` in transcription.\r\n\r\nActual generation has device handles in below line.\r\n``` python\r\n tokens = self.model.generate(\r\n attention_mask=attention_mask,\r\n **generate_kwargs,\r\n )\r\n```\r\n\r\nApart from this token decoding part is serialised implementation which has no effect, that can be misuse of GPU.",
"Thanks for the reply! But if I don't make that changes I get the following error, so I assume `prompt_tensor` needs to be in cuda if device is also in cuda? Or is there any other way to correct the error? Thank you for your time.\r\n\r\n`File \"/.../python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 538, in _forward\r\n if (tmp_tokens[0:nprompt_token] == prompt_tensor).sum() == nprompt_token:\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`\r\n\r\nI followed the code you posted:\r\n```from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_id = \"openai/whisper-small\"\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n chunk_length_s=15,\r\n batch_size=16,\r\n torch_dtype=torch_dtype,\r\n device=device,\r\n processor=processor\r\n)\r\n\r\n",
"@kaminwong , Thank you for addressing. I understood the issue. let me verify and reolved it.",
"@kaminwong , you can pull latest commit and install it should work now. its fixed.",
"Thank you for the elegant solution. It works now!",
"Gentle ping @sanchit-gandhi for review ",
"@amyeroberts is there any plan to close this in near future? or will it take time?\r\n",
"@Biswajit2902 Once @sanchit-gandhi has reviewed and approved, the PR will need a final review from a maintainer. Once approved, then the PR can be merged in. "
] | 1,705 | 1,707 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (feature)
- `initial_prompt` support for whisper Pipeline (automatic-speech-recognition)
## Before submitting
- [x] Added `initial_prompt` as an option for the Whisper model
- [x] To handle the initial prompt, `processor` is considered an optional parameter
- [x] The current implementation supports only the Torch version of decoding.
- [x] How to use the initial prompt:
``` python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-small"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
processor=processor
)
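# Note: per this PR, `processor` is an optional pipeline argument used to handle `initial_prompt`.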
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
audio = dataset[0]["audio"]["array"]
sampling_rate = dataset[0]["audio"]["sampling_rate"]
# including timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper", return_timestamps=True))
# without timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper"))
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. @sanchit-gandhi, @Narsil, can anyone help take this PR forward, please? Let me know if anything is needed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28556/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28556/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28556",
"html_url": "https://github.com/huggingface/transformers/pull/28556",
"diff_url": "https://github.com/huggingface/transformers/pull/28556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28556.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28555/comments | https://api.github.com/repos/huggingface/transformers/issues/28555/events | https://github.com/huggingface/transformers/pull/28555 | 2,086,171,345 | PR_kwDOCUB6oc5kT-Ur | 28,555 | Fix max_new_tokens for assistant model in assistant generation | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that it has been solved yesterday by @ofirzaf https://github.com/huggingface/transformers/pull/28508#issuecomment-1892870681"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
During assistant generation, at each iteration, the assistant model generates `num_assistant_tokens` tokens.
If the maximum number of tokens to generate is limited by `max_len`, and `max_len-cur_len` is less than `num_assistant_tokens`, it is more efficient for the assistant model to generate only `max_len-cur_len` tokens instead of `num_assistant_tokens`.
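For illustration, here is a minimal sketch of the capping logic described above (the function and variable names are illustrative, not the actual `transformers` implementation):
``` python
# Illustrative sketch only: cap how many candidate tokens the assistant model
# is asked to generate so the overall `max_len` budget is not exceeded.
def capped_assistant_tokens(num_assistant_tokens: int, max_len: int, cur_len: int) -> int:
    remaining = max_len - cur_len
    # Never request more tokens than the remaining budget (and never a negative amount).
    return max(0, min(num_assistant_tokens, remaining))
```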
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
@echarlaix
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28555/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28555",
"html_url": "https://github.com/huggingface/transformers/pull/28555",
"diff_url": "https://github.com/huggingface/transformers/pull/28555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28555.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28554/comments | https://api.github.com/repos/huggingface/transformers/issues/28554/events | https://github.com/huggingface/transformers/pull/28554 | 2,086,068,129 | PR_kwDOCUB6oc5kToJn | 28,554 | `intial_prompt` support for automatic-speech-recognition (whisper) pipeline | {
"login": "Biswajit2902",
"id": 10162006,
"node_id": "MDQ6VXNlcjEwMTYyMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10162006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biswajit2902",
"html_url": "https://github.com/Biswajit2902",
"followers_url": "https://api.github.com/users/Biswajit2902/followers",
"following_url": "https://api.github.com/users/Biswajit2902/following{/other_user}",
"gists_url": "https://api.github.com/users/Biswajit2902/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Biswajit2902/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Biswajit2902/subscriptions",
"organizations_url": "https://api.github.com/users/Biswajit2902/orgs",
"repos_url": "https://api.github.com/users/Biswajit2902/repos",
"events_url": "https://api.github.com/users/Biswajit2902/events{/privacy}",
"received_events_url": "https://api.github.com/users/Biswajit2902/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (feature)
- `initial_prompt` support for whisper Pipeline (automatic-speech-recognition)
## Before submitting
- [ ] Added `initial_prompt` as an option for the Whisper model
- [ ] To handle the initial prompt, `processor` is considered an optional parameter
- [ ] The current implementation supports only the Torch version of decoding.
- [ ] How to use the initial prompt:
``` python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "openai/whisper-small"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
processor=processor
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
audio = sample["array"]
# including timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper", return_timestamps=True))
# without timestamp
print(pipe(audio, initial_prompt = "Biswajit, Whisper"))
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. @sanchit-gandhi, @Narsil, can anyone help take this PR forward, please? Let me know if anything is needed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28554/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28554",
"html_url": "https://github.com/huggingface/transformers/pull/28554",
"diff_url": "https://github.com/huggingface/transformers/pull/28554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28554.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28553/comments | https://api.github.com/repos/huggingface/transformers/issues/28553/events | https://github.com/huggingface/transformers/issues/28553 | 2,086,028,169 | I_kwDOCUB6oc58VkOJ | 28,553 | llama 2 conversion script unknown error | {
"login": "liboliba",
"id": 51449526,
"node_id": "MDQ6VXNlcjUxNDQ5NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/51449526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liboliba",
"html_url": "https://github.com/liboliba",
"followers_url": "https://api.github.com/users/liboliba/followers",
"following_url": "https://api.github.com/users/liboliba/following{/other_user}",
"gists_url": "https://api.github.com/users/liboliba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liboliba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liboliba/subscriptions",
"organizations_url": "https://api.github.com/users/liboliba/orgs",
"repos_url": "https://api.github.com/users/liboliba/repos",
"events_url": "https://api.github.com/users/liboliba/events{/privacy}",
"received_events_url": "https://api.github.com/users/liboliba/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @liboliba, thanks for raising an issue! \r\n\r\nSo that we can best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) including: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* The full error traceback\r\n* A minimal code reproducer. Here we don't have access to the weights. Are there weights you could share which reproduce this error? ",
"Thank you for the advise!\r\ntransformers-cli env returns:\r\n- `transformers` version: 4.36.2\r\n- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.11.5\r\n- Huggingface_hub version: 0.19.4\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes & No, I am working under HPC environment, my current compute node has no GPU, but it shared hard disk with a GPU node. A GPU node has no internet connection and can only load model via the shared hard disk with the compute node I work on now. \r\n- Using distributed or parallel set-up in script?: No\r\n\r\nFor the other two bullets, sorry I am less sure how to respond because what I did was to download the official meta llama 2 into a folder, and then I git clone the transformer source code and try to run the conversion code. The error I get now is:\r\n\r\npython /scratch/ll1d19/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /scratch/ll1d19/llama/llama/llama-2-7b-chat/ --model_size 7B --output_dir /scratch/ll1d19/hf_llama2/Llama-2-7b-chat-hf/\r\nYou are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565\r\nTraceback (most recent call last):\r\n File \"/scratch/ll1d19/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 319, in <module>\r\n main()\r\n File \"/scratch/ll1d19/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 307, in main\r\n write_model(\r\n File \"/scratch/ll1d19/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 109, in write_model\r\n tokenizer = tokenizer_class(tokenizer_path)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama_fast.py\", line 124, in __init__\r\n super().__init__(\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py\", line 117, in __init__\r\n slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py\", line 178, in __init__\r\n self.sp_model = self.get_spm_processor(kwargs.pop(\"from_slow\", False))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py\", line 203, in get_spm_processor\r\n tokenizer.Load(self.vocab_file)\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ll1d19/.conda/envs/myiai/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 
310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: Not found: \"/scratch/ll1d19/llama/llama/llama-2-7b-chat/tokenizer.model\": No such file or directory Error #2\r\n\r\nThe weights are too large to share as it is about 13GB, the json file is around 100bytes.\r\nAny advise would be grateful!",
"Hi @liboliba, thanks for the update! \r\n\r\nBased on the error, I'd suggest making sure you have the latest versions of `tokenizers` and `sentencepiece` installed in your environment. \r\n\r\nThere's no need to convert the official checkpoints though - there's many already available on the hub e.g. [here](https://huggingface.co/huggyllama/llama-7b) which you can access provided you've filled out the access form; or [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) for llama 2.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
Hi,
I have downloaded llama 2 weights and installed the transformer package. I plan to use it under transformer package and applied the conversion script.
The conversion script does not work:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/tomyfilepath
File "...path/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 126
print(f"Fetching all parameters from the checkpoint at {input_base_path}.")
^
SyntaxError: invalid syntax
On Linux when I do for example:
ls /path/to/downloaded/llama/llama-2-7b-chat
I get:
checklist.chk consolidated.00.pth params.json
I assume I have the correct files. Any advice would be appreciated.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/tomyfilepath
### Expected behavior
It is expected that the tokenizer and model are converted so that they are usable with the transformers package. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28553/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28552/comments | https://api.github.com/repos/huggingface/transformers/issues/28552/events | https://github.com/huggingface/transformers/pull/28552 | 2,086,011,975 | PR_kwDOCUB6oc5kTb95 | 28,552 | Fix SDPA tests | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thank you @amyeroberts "
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | @ydshieh testing on a T4 (& cpu), all tests pass now. Most of the failing ones were due to the fact that we run the CI on a T4 GPU, that does not support bf16. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28552/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28552",
"html_url": "https://github.com/huggingface/transformers/pull/28552",
"diff_url": "https://github.com/huggingface/transformers/pull/28552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28552.patch",
"merged_at": 1705508959000
} |
https://api.github.com/repos/huggingface/transformers/issues/28551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28551/comments | https://api.github.com/repos/huggingface/transformers/issues/28551/events | https://github.com/huggingface/transformers/pull/28551 | 2,085,796,120 | PR_kwDOCUB6oc5kSswv | 28,551 | [Makefile] Exclude research projects from format | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
When running `make style` from the root, the research folder files are changed even though they are outdated. This PR makes sure we exclude the deprecated research folder altogether.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28551/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28551",
"html_url": "https://github.com/huggingface/transformers/pull/28551",
"diff_url": "https://github.com/huggingface/transformers/pull/28551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28551.patch",
"merged_at": 1705485580000
} |
https://api.github.com/repos/huggingface/transformers/issues/28550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28550/comments | https://api.github.com/repos/huggingface/transformers/issues/28550/events | https://github.com/huggingface/transformers/issues/28550 | 2,085,781,875 | I_kwDOCUB6oc58UoFz | 28,550 | Tr-OCR Large Checkpoint model diverges | {
"login": "nogifeet",
"id": 72322393,
"node_id": "MDQ6VXNlcjcyMzIyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/72322393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nogifeet",
"html_url": "https://github.com/nogifeet",
"followers_url": "https://api.github.com/users/nogifeet/followers",
"following_url": "https://api.github.com/users/nogifeet/following{/other_user}",
"gists_url": "https://api.github.com/users/nogifeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nogifeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nogifeet/subscriptions",
"organizations_url": "https://api.github.com/users/nogifeet/orgs",
"repos_url": "https://api.github.com/users/nogifeet/repos",
"events_url": "https://api.github.com/users/nogifeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/nogifeet/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @nogifeet, thanks for raising this issue! \r\n\r\nHmmm, I'm not sure what's happening here. However, I don't think there's an issue in the code itself. \r\n\r\nClearly the larger model isn't generating the eos token. However, the processing and generation configs for the models are equivalent, and the only difference in the models' configs is their encoder hidden size (expected). \r\n\r\nFollowing the paper, both models have been trained to predict eos and the only difference should be the size of the vision encoder, which matches with the difference in the hidden sizes in the model configs. \r\n\r\nThe only thing that's slightly unexpected is the models are adding a pooling layer, which is randomly initialized (but this would affect both) cc @NielsRogge \r\n\r\nIn terms of your experiments - does the large model always have this behaviour or only occasionally i.e. do we ever observe it correctly generating the eos token? ",
"Hello @amyeroberts. There are examples where the eos is generated successfully and the example mentioned is actually an outlier. I can provide more examples from the IAM dataset if you want.\n\nI would also like to point out that I don't think it's a problem isolated to the large checkpoint, I have observed them in the base checkpoints.\n",
"@nogifeet OK, in this case I'd recommend looking into [different decoding strategies](https://huggingface.co/docs/transformers/generation_strategies#decoding-strategies) when generating to try and force more sensible sequences. Some more information can be found here: https://huggingface.co/blog/how-to-generate. ",
"@amyeroberts Is the handling of decoder layer nodes with shape limitations, managed internally by Hugging Face to prevent undefined behavior?",
"> Is the handling of decoder layer nodes with shape limitations, managed internally by Hugging Face to prevent undefined behavior?\r\n\r\nSorry, I don't understand. What do you mean by \"decoder layer nodes with shape limitations\"? \r\nYou should be able to call `model.generate` and all the handling of autoregressive calls and generation logic should be handled for you. "
] | 1,705 | 1,706 | null | NONE | null | ### System Info
Transformers Version -- 4.35.2
Python Version -- 3.10.12 [GCC 11.4.0]
Environment -- Google Colab
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the large checkpoint, the initial max_token_length is 20, which is not ideal, as the number of tokens could be larger for different images. When playing with this parameter, we notice that the model starts diverging and producing repeated tokens for some of the sample images.
Please use the sample image below:
![e04-083-00](https://github.com/huggingface/transformers/assets/72322393/b8c20b9e-bc8b-441a-a417-2131e0af78c6)
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import requests
from PIL import Image
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")
image = Image.open("/content/e04-083-00.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values,use_cache=True,max_new_tokens=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
// The edges of the transoms should be bevelled to be edges to the edges of the edges of the edges of the edges of the
edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of
the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges of the edges
of the edges of the edges of the edges of the edges of the edges of the edges of the
print(len(generated_text))
// 427
### Expected behavior
I would expect the large checkpoint to behave similarly to, or even better than, the base checkpoint...
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import requests
from PIL import Image
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
# load image from the IAM dataset
image = Image.open("/content/e04-083-00.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values,use_cache=True,max_new_tokens=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
// The edges of the transoms should be bevelled to
print(len(generated_text))
// 47 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28550/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28558/comments | https://api.github.com/repos/huggingface/transformers/issues/28558/events | https://github.com/huggingface/transformers/issues/28558 | 2,086,245,243 | I_kwDOCUB6oc58WZN7 | 28,558 | Optional files still require token for `huggingface-cli` downloaded gated repo | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] | [
"@scruel it is expected that `.no_exist` is not populated when using `huggingface-cli download --resume-download meta-llama/Llama-2-7b-hf`. In fact, the \"no exists\" folder is only populated when `transformers` (or any script/library) tries to download a file that does not exist on the repo. `transformers` uses this information to default to another file when that's the case. When you use the `huggingface-cli` we only download existing files but we don't populate \"missing files\" (which wouldn't really make sense). \r\n\r\nSo I agree this is a bug but rather in `transformers` than `huggingface_hub` itself. I think `transformers` should be able to load a model when it has already been downloaded, even if the `.no_exists` folder is not populated.\r\n\r\nI'm therefore transferring this issue to `transformers` repo. Btw, thanks for providing a reproducible example of your problem. @amyeroberts @ArthurZucker @ydshieh would you have time to take a look at how `transformers` is loading files when already cached? ",
"Hi @Wauplin When I seared `no_exist` (case insensitive), there are only 4 results, and they are like `if resolved_file is not _CACHED_NO_EXIST`. \r\n\r\nI am happy to take the question you raised, but is it possible you share a bit more how `.no_exists` is related to `transformers`? I mean the logic of creating this is inside `huggingface_hub` right, and `transformers` only check if such path exist or not and perform different actions.",
"@ydshieh the `no_exist` folder is indeed populated by `huggingface_hub` each time a library calls `huggingface_hub.hf_hub_download` to download a file that doesn't exist (but the repo does!). `transformers` leverages this information when it tries to load optional files. To avoid making a HEAD call for each file each time the user do a `from_pretrained`, `transformers` looks at the no_exist folder which saves some time (actually the `no_exists` folder is only used by `transformers` as far as I know).\r\n\r\nWhat `transformers` should do is that in the case of missing `no_exists` entries, if the cache is fully downloaded then the model should be loaded correctly. This is what @scruel's PR is doing if I understand correctly. My only comment is that it shouldn't be specific to gated repos but also private/missing repos as well, as long as the files are cached.",
"Hi @Wauplin and @scruel \r\n\r\nI am looking into this, but there is one minor stuff I don't get clearly.\r\n\r\nbefore I do `until we run from_XXX method once to have those missed files created as empty files into .no_exist directory`, I checked with\r\n\r\n> !ls -l /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/snapshots/8cca527612d856d7d32bd94f8103728d614eb852\r\n\r\nand it gives \r\n\r\n```\r\ntotal 24\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 config.json -> ../../blobs/34f901200fa131819b355bc4bed876c957a77a5a\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 generation_config.json -> ../../blobs/aa1b3d3486df56a0699ce90c33283b13556fb5a3\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 LICENSE.txt -> ../../blobs/51089e27e6764fb9f72c06a0f3710699fb6c9448\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:49 model-00001-of-00002.safetensors -> ../../blobs/4ec71fd53e99766de38f24753b30c9e8942630e9e576a1ba27b0ec531e87be41\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:48 model-00002-of-00002.safetensors -> ../../blobs/41780b5dac322ac35598737e99208d90bdc632a1ba3389ebedbb46a1d8385a7f\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 model.safetensors.index.json -> ../../blobs/8b6245796e966e50960a317e4a54aa7bf73b0186\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:51 pytorch_model-00001-of-00002.bin -> ../../blobs/ee62ed2ad7ded505ae47df50bc6c52916860dfb1c009df4715148cc4bfb50d2f\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:51 pytorch_model-00002-of-00002.bin -> ../../blobs/1fd7762035b3ca4f2d6af6bf10129689a119b7c38058025f9842511532ea02fb\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 pytorch_model.bin.index.json -> ../../blobs/db7264b24cac7a39947bb5fc02fe5c2d7ac9eaf4\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 README.md -> ../../blobs/c7579dcbcbcd699270238173303a9013135a2a7d\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:47 Responsible-Use-Guide.pdf -> ../../blobs/525dc349d71fe257fce4098c146446df6fef4247174f351381e4c3214af126f0\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 special_tokens_map.json -> ../../blobs/451134b2ddc2e78555d1e857518c54b4bdc2e87d\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 tokenizer_config.json -> ../../blobs/2ef41cbc275000b29afe157ba487f0530b8c26dc\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 tokenizer.json -> ../../blobs/a6e931b92caff4c79c5c56282f1e89569a0ae558\r\nlrwxrwxrwx 1 root root 76 Jan 22 14:47 tokenizer.model -> ../../blobs/9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347\r\nlrwxrwxrwx 1 root root 52 Jan 22 14:47 USE_POLICY.md -> ../../blobs/abbcc199b2d1e4feb5d7e40c0bd67e1b0ce29e97\r\n```\r\nAnd after running \r\n\r\n```\r\nAutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=token)\r\n```\r\n(with a real token)\r\n\r\nThe same list of entries is shown, and I can't see any `.no_exist`\r\n\r\nBut then running the following\r\n\r\n```\r\nAutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)\r\n```\r\nindeed works.\r\n\r\nCould you help me to understand where the mentioned `no_exist` is?\r\n\r\n(I am aware of the PR #28566)",
"@ydshieh yes so the `.no_exists` folder is separated from the `snapshot` folder. If you do\r\n```\r\n!ls -l /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/\r\n```\r\nyou should see `refs/`, `blobs/`, `snapshots/` and `/no_exists` which should help the investigation :)",
"Thanks @Wauplin , so when I run\r\n\r\n> until we run from_XXX method once to have those missed files created as empty files into .no_exist directory,\r\n\r\n(with a real token), and checked\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/\r\n```\r\n\r\nit still shows \r\n\r\n```\r\ndrwxr-xr-x 2 root root 4096 Jan 22 15:04 blobs\r\ndrwxr-xr-x 2 root root 4096 Jan 22 15:00 refs\r\ndrwxr-xr-x 3 root root 4096 Jan 22 15:00 snapshots\r\n```\r\n\r\nbut again, the subsequent call with `token=False` works. 🤔 \r\n",
"wait, maybe let check the huggingface_hub version\r\n\r\nIt is `0.20.2` with `transformers==4.35.2`.",
"@ydshieh I can confirm, I have same as both:\r\n\r\nBoth \r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\nAutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')\r\n```\r\n\r\nand \r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\nAutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)\r\n```\r\n\r\nworks for me. \r\n\r\nCache directory only have tokenizers files:\r\n\r\n```\r\n(.venv310) ➜ huggingface_hub git:(main) ✗ tree -h ~/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf \r\n[4.0K] /home/wauplin/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf\r\n├── [4.0K] blobs\r\n│ ├── [ 776] 2ef41cbc275000b29afe157ba487f0530b8c26dc\r\n│ ├── [ 609] 34f901200fa131819b355bc4bed876c957a77a5a\r\n│ ├── [ 414] 451134b2ddc2e78555d1e857518c54b4bdc2e87d\r\n│ ├── [ 26K] 8b6245796e966e50960a317e4a54aa7bf73b0186\r\n│ ├── [488K] 9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347\r\n│ └── [1.8M] a6e931b92caff4c79c5c56282f1e89569a0ae558\r\n├── [4.0K] refs\r\n│ └── [ 40] main\r\n└── [4.0K] snapshots\r\n └── [4.0K] 8cca527612d856d7d32bd94f8103728d614eb852\r\n ├── [ 52] config.json -> ../../blobs/34f901200fa131819b355bc4bed876c957a77a5a\r\n ├── [ 52] model.safetensors.index.json -> ../../blobs/8b6245796e966e50960a317e4a54aa7bf73b0186\r\n ├── [ 52] special_tokens_map.json -> ../../blobs/451134b2ddc2e78555d1e857518c54b4bdc2e87d\r\n ├── [ 52] tokenizer_config.json -> ../../blobs/2ef41cbc275000b29afe157ba487f0530b8c26dc\r\n ├── [ 52] tokenizer.json -> ../../blobs/a6e931b92caff4c79c5c56282f1e89569a0ae558\r\n └── [ 76] tokenizer.model -> ../../blobs/9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347\r\n\r\n4 directories, 13 files\r\n```",
"@scruel are you sure, the failing command was `AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)` and not `AutoModel.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)`?",
"Yes, I'm pretty sure.\r\nRather than use `ls -l` and `tree -h`, you should use `ls -la` and `tree -ha`, the directory you created is not `no_exiss` but `.no_exist`, it will be treated as a hidden folder in the modern OS.",
"Ah yes of course :facepalm: Thanks @scruel! \r\n\r\nHere it is @ydshieh:\r\n```\r\n(.venv310) ➜ huggingface_hub git:(main) ✗ tree -ah ~/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf \r\n[4.0K] /home/wauplin/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf\r\n├── [4.0K] blobs\r\n│ ├── [ 776] 2ef41cbc275000b29afe157ba487f0530b8c26dc\r\n│ ├── [ 609] 34f901200fa131819b355bc4bed876c957a77a5a\r\n│ ├── [ 414] 451134b2ddc2e78555d1e857518c54b4bdc2e87d\r\n│ ├── [ 26K] 8b6245796e966e50960a317e4a54aa7bf73b0186\r\n│ ├── [488K] 9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347\r\n│ └── [1.8M] a6e931b92caff4c79c5c56282f1e89569a0ae558\r\n├── [4.0K] .no_exist\r\n│ └── [4.0K] 8cca527612d856d7d32bd94f8103728d614eb852\r\n│ └── [ 0] added_tokens.json\r\n├── [4.0K] refs\r\n│ └── [ 40] main\r\n└── [4.0K] snapshots\r\n └── [4.0K] 8cca527612d856d7d32bd94f8103728d614eb852\r\n ├── [ 52] config.json -> ../../blobs/34f901200fa131819b355bc4bed876c957a77a5a\r\n ├── [ 52] model.safetensors.index.json -> ../../blobs/8b6245796e966e50960a317e4a54aa7bf73b0186\r\n ├── [ 52] special_tokens_map.json -> ../../blobs/451134b2ddc2e78555d1e857518c54b4bdc2e87d\r\n ├── [ 52] tokenizer_config.json -> ../../blobs/2ef41cbc275000b29afe157ba487f0530b8c26dc\r\n ├── [ 52] tokenizer.json -> ../../blobs/a6e931b92caff4c79c5c56282f1e89569a0ae558\r\n └── [ 76] tokenizer.model -> ../../blobs/9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347\r\n\r\n6 directories, 14 files\r\n```",
"Thank you both for helping me on this! Will continue this along with a review"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | ### Describe the bug
The `from_XXX` methods create empty marker files in the `.no_exist` directory when the repo is missing some files; however, the CLI tool `huggingface-cli download` does not do so, which causes inconsistency issues.
### Reproduction
1. `export HF_TOKEN=XXX`
2. `huggingface-cli download --resume-download meta-llama/Llama-2-7b-hf`
3. `python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', token=False)"`
This produces an `OSError` because we are loading a gated repo. Even though we already requested access and downloaded the repo via the CLI tool, we cannot use the cached model (e.g., offline) until we run a `from_XXX` method once so that the missing files are recorded as empty files in the `.no_exist` directory.
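For reference, a small sketch of how the markers can be checked locally (the repo name and the long-standing `HUGGINGFACE_HUB_CACHE` constant are just my assumptions; adapt as needed):

```python
import os

from huggingface_hub.constants import HUGGINGFACE_HUB_CACHE

# Look for the hidden ".no_exist" directory that the from_XXX methods create for missing files.
repo_dir = os.path.join(HUGGINGFACE_HUB_CACHE, "models--meta-llama--Llama-2-7b-hf")
no_exist_dir = os.path.join(repo_dir, ".no_exist")

print("has .no_exist markers:", os.path.isdir(no_exist_dir))
if os.path.isdir(no_exist_dir):
    for revision in os.listdir(no_exist_dir):
        print(revision, os.listdir(os.path.join(no_exist_dir, revision)))
```

After `huggingface-cli download` the directory is absent; after a `from_XXX` call it contains one empty file per missing filename.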
### Logs
```shell
...
OSError: You are trying to access a gated repo.
Make sure to request access at https://huggingface.co/meta-llama/Llama-2-7b-hf and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.
```
### System info
```shell
- huggingface_hub version: 0.20.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/scruel/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: situqingyun
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.0.1
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: 0.1.4
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.0
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/scruel/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/scruel/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/scruel/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28558/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28549/comments | https://api.github.com/repos/huggingface/transformers/issues/28549/events | https://github.com/huggingface/transformers/issues/28549 | 2,085,743,538 | I_kwDOCUB6oc58Ueuy | 28,549 | Fine tuning whisper and whisper lora with prompts | {
"login": "kenfus",
"id": 47979198,
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfus",
"html_url": "https://github.com/kenfus",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"repos_url": "https://api.github.com/users/kenfus/repos",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @kenfus, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\ncc @sanchit-gandhi @ylacombe ",
"Hey, thank you!\r\n\r\nI have added a question to this post: https://discuss.huggingface.co/t/finetuning-whisper-with-prompts/43053/3\r\n\r\nHowever, even when looking at the code, I don't quite understand how to pass the prompt to fine tune the model, so I think it's not implemented yet. With some guidance, I am very happy to implement it myself!",
"Hi @kenfus, \r\n\r\nI'll let @sanchit-gandhi and @ylacombe answer about fine-tuning. For reference on the intended API, the PR to add prompts is #22395. There is also another open PR looking to add prompts to the pipeline #28556. Although these are not specific to the training regime, they may provide more context on how to pass prompts to the model. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### Feature request
Currently, I am successfully able to fine-tune Whisper and Whisper LoRA with timestamps, thank you for that!
Now, I would like to fine-tune Whisper with prompts, the way OpenAI trained it. There is not much documentation, so maybe it is already supported? Currently, my dataset has the following columns: `input_features`, `labels` and `prompt_ids`. The labels do not currently contain the `prompt_ids`. So my first question:
- If I add the `prompt_ids` at the beginning of the `labels`, is that already correct? Will the Hugging Face library automatically cut the labels at the correct point and pass them to the model to start the decoding? I did not understand where exactly this happens in the code. (A rough sketch of what I mean is included after this list.)
- If not, where would it be best to add this? I think it should either happen automatically from the `labels`, or the trainer could use `prompt_ids` automatically if available.
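For context, a rough sketch of the first option (prepending the prompt ids to the labels); `tokenizer.get_prompt_ids` comes from the prompt-support PR, while the helper itself is just my own preprocessing and may well not be the intended approach:

```python
def add_prompt_to_example(example, tokenizer, prompt_text):
    # Option 1 from above: prepend the prompt token ids to the label sequence.
    prompt_ids = tokenizer.get_prompt_ids(prompt_text, return_tensors="np").tolist()
    example["prompt_ids"] = prompt_ids
    example["labels"] = prompt_ids + example["labels"]
    return example
```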
### Motivation
Overall, it was a bit unclear how to fine-tune Whisper with timestamps and prompts. Maybe it is already supported, maybe not. In addition, the relevant code in the library was a bit hard to follow.
### Your contribution
Absolutely! With some guidance on where to look and what to change, I am happy to do so. I am happy to contribute back to the library, which has helped me a lot in my work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28549/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28548/comments | https://api.github.com/repos/huggingface/transformers/issues/28548/events | https://github.com/huggingface/transformers/issues/28548 | 2,085,696,927 | I_kwDOCUB6oc58UTWf | 28,548 | Pipeline batching with ZeroShotImageClassificationPipeline outputs less items per iteration than expected | {
"login": "ryan-caesar-ramos",
"id": 65334734,
"node_id": "MDQ6VXNlcjY1MzM0NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/65334734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-caesar-ramos",
"html_url": "https://github.com/ryan-caesar-ramos",
"followers_url": "https://api.github.com/users/ryan-caesar-ramos/followers",
"following_url": "https://api.github.com/users/ryan-caesar-ramos/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-caesar-ramos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-caesar-ramos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-caesar-ramos/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-caesar-ramos/orgs",
"repos_url": "https://api.github.com/users/ryan-caesar-ramos/repos",
"events_url": "https://api.github.com/users/ryan-caesar-ramos/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-caesar-ramos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Batch_size is transparent. The model will run the inference batched, but the pipeline still outputs results one by one, meaning you can modify the batch_size according to the actual hardware you are using without actually needing to unpack results manually (it also allows running next batch, before you even finished unpacking the results manually, leading to better GPU usage.",
"Thanks! That clears things up"
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
from datasets import load_dataset
import numpy as np
from transformers.pipelines.pt_utils import KeyDataset
pipe = pipeline('zero-shot-image-classification', model='openai/clip-vit-base-patch16', device=0)
dataset = load_dataset('mnist', split='test')
for out in pipe(KeyDataset(dataset, "image"), candidate_labels=range(10), batched=True, batch_size=1024):
break
print(len(out))
```
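For reference, each item yielded by the loop looks roughly like this (illustrative values, scores made up):

```python
# One dict per candidate label for a single image, so len(out) equals the number of
# candidate labels and does not depend on batch_size.
example_item = [
    {"score": 0.42, "label": 7},
    {"score": 0.21, "label": 1},
    # ... one entry for each remaining candidate label
]
```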
### Expected behavior
I would have assumed that `out` would have 10 classes * 1024 images in the batch = 10240 items in it, but it only has 10. Maybe I'm misinterpreting what batching with pipelines does? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28547/comments | https://api.github.com/repos/huggingface/transformers/issues/28547/events | https://github.com/huggingface/transformers/pull/28547 | 2,085,630,838 | PR_kwDOCUB6oc5kSJcn | 28,547 | [`PEFT`] make the trainer support resume checkpoint from a named adapter #28531 | {
"login": "chenbin11200",
"id": 5245644,
"node_id": "MDQ6VXNlcjUyNDU2NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5245644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbin11200",
"html_url": "https://github.com/chenbin11200",
"followers_url": "https://api.github.com/users/chenbin11200/followers",
"following_url": "https://api.github.com/users/chenbin11200/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbin11200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbin11200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbin11200/subscriptions",
"organizations_url": "https://api.github.com/users/chenbin11200/orgs",
"repos_url": "https://api.github.com/users/chenbin11200/repos",
"events_url": "https://api.github.com/users/chenbin11200/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbin11200/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @younesbelkada ",
"> Makes sense, thank you ! Can you also run the styling checks? `make fixup`\r\n\r\nHi @younesbelkada , I have made the styling checks. If there is still something missing, please informe me :D",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks a lot for fixing!\r\n\r\nMy pleasure :D \r\nI guess it has to be approved by @amyeroberts before being merged?",
"Yes @chenbin11200 - it will get reviewed as soon as possible and we'll merge the fix in main ! ",
"@chenbin11200 the test would go here ideally: https://github.com/huggingface/transformers/blob/main/tests/peft_integration/test_peft_integration.py let me know if you need help designing the test!",
"> @chenbin11200 the test would go here ideally: https://github.com/huggingface/transformers/blob/main/tests/peft_integration/test_peft_integration.py let me know if you need help designing the test!\r\n\r\nHi @younesbelkada @amyeroberts ,\r\nThanks for guiding me. Does this \"resuming from checkpoint\" test(or anything similar) already exist? Will save me some time if I could change on an exisiting one :)",
"Hi @chenbin11200 \r\nAfter thinking a bit, to not complexify things, you can write a new test similar than this one: https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L874 by adding the `require_peft` decorator and make sure training runs fine with resume checkpoint for a named adapter",
"> Hi @chenbin11200 After thinking a bit, to not complexify things, you can write a new test similar than this one: https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L874 by adding the `require_peft` decorator and make sure training runs fine with resume checkpoint for a named adapter\r\n\r\nThank you @younesbelkada, I will try that. "
] | 1,705 | 1,708 | null | NONE | null | # What does this PR do?
Fixes #28531
In peft>=0.5.0, when one initializes the PeftModel with an adapter name, like:
```python
from peft import get_peft_model

peft_model = get_peft_model(
    model=base_model, peft_config=peft_config, adapter_name="my_lora_model_name"
)
```
In this case, the `adapter_config.json` and `adapter_model.bin` files are saved in `/my_output_dir/checkpoint-300/my_lora_model_name` instead of directly in `/my_output_dir/checkpoint-300`. That raises a ValueError when trying to resume training from the checkpoint.
This PR fixes the issue by joining the adapter name into the checkpoint path and loading the adapter from the right subfolder when necessary (if one does not provide an adapter_name, the weight and config files are not saved into a subfolder; this case is also handled).
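Roughly, the resume logic targeted by this PR behaves like the sketch below (illustrative only, not the exact diff; the helper name is made up):

```python
import os

def find_adapter_dir(checkpoint_dir, adapter_name="default"):
    # Named adapters are saved under <checkpoint>/<adapter_name>/adapter_config.json.
    candidate = os.path.join(checkpoint_dir, adapter_name)
    if os.path.isfile(os.path.join(candidate, "adapter_config.json")):
        return candidate
    # Unnamed adapters keep their files directly in the checkpoint directory.
    if os.path.isfile(os.path.join(checkpoint_dir, "adapter_config.json")):
        return checkpoint_dir
    raise ValueError(f"No adapter_config.json found under {checkpoint_dir}")
```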
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
The bug is reported there but not yet discussed. `https://github.com/huggingface/transformers/issues/28531`
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
No unit test, only tested locally for this small change.
## Who can review?
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28547/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28547",
"html_url": "https://github.com/huggingface/transformers/pull/28547",
"diff_url": "https://github.com/huggingface/transformers/pull/28547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28547.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28546/comments | https://api.github.com/repos/huggingface/transformers/issues/28546/events | https://github.com/huggingface/transformers/issues/28546 | 2,085,555,311 | I_kwDOCUB6oc58Twxv | 28,546 | How to use fp32 and qLora to fine-tune models | {
"login": "guoyunqingyue",
"id": 77528622,
"node_id": "MDQ6VXNlcjc3NTI4NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/77528622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoyunqingyue",
"html_url": "https://github.com/guoyunqingyue",
"followers_url": "https://api.github.com/users/guoyunqingyue/followers",
"following_url": "https://api.github.com/users/guoyunqingyue/following{/other_user}",
"gists_url": "https://api.github.com/users/guoyunqingyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoyunqingyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoyunqingyue/subscriptions",
"organizations_url": "https://api.github.com/users/guoyunqingyue/orgs",
"repos_url": "https://api.github.com/users/guoyunqingyue/repos",
"events_url": "https://api.github.com/users/guoyunqingyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoyunqingyue/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @guoyunqingyue, thanks for raising an issue! \r\n\r\nMy first suggest would be to update your version of transformers, as there have been many recent updates to weight loading and quantization in the codebase: `pip install -U transformers`. \r\n\r\nSo that we can be best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet we can run to reproduce the error. We can't run the current code example as many variables are undefined. \r\n* All relevant details about the error including the full error traceback",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
I'm using transformers version 4.32.0 and I want to fine-tune the Qwen/Qwen-VL-Chat-Int4 model, but my 1080 Ti GPU doesn't support fp16. When I try to use "training_args.fp16 = False" to modify the parameters, the error "dataclasses.FrozenInstanceError: cannot assign to field fp16" is reported. I guess this parameter cannot be changed manually. What should I do, besides changing the GPU, so that it can use fp16?
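For reference, the workaround I am considering is to rebuild the arguments instead of mutating them in place (sketch only; I have not verified that this is the supported way):

```python
import dataclasses

# TrainingArguments is a dataclass, so instead of assigning training_args.fp16 = False
# (which fails on a frozen instance), create a new instance with the flag overridden.
training_args = dataclasses.replace(training_args, fp16=False, bf16=False)
```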
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the fine-tuning code given by Qwen:
```python
parser = transformers.HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments, LoraArguments)
)
(
model_args,
data_args,
training_args,
lora_args,
) = parser.parse_args_into_dataclasses()
if getattr(training_args, 'deepspeed', None) and getattr(lora_args, 'q_lora', False):
training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED
training_args.fp16 = False
compute_dtype = (
torch.float16
if training_args.fp16
else (torch.bfloat16 if training_args.bf16 else torch.float32)
)
local_rank = training_args.local_rank
device_map = None
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1
if lora_args.q_lora:
device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
if len(training_args.fsdp) > 0 or deepspeed.is_deepspeed_zero3_enabled():
logging.warning(
"FSDP or ZeRO3 are not incompatible with QLoRA."
)
# Set RoPE scaling factor
config = transformers.AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
trust_remote_code=True,
)
config.use_cache = False
# Load model and tokenizer
model = transformers.AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=training_args.cache_dir,
device_map=device_map,
trust_remote_code=True,
quantization_config=GPTQConfig(
bits=4, disable_exllama=True
)
if training_args.use_lora and lora_args.q_lora
else None,
)
```
### Expected behavior
I want a solution | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28546/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28545/comments | https://api.github.com/repos/huggingface/transformers/issues/28545/events | https://github.com/huggingface/transformers/issues/28545 | 2,085,330,571 | I_kwDOCUB6oc58S56L | 28,545 | Download reconfiguration | {
"login": "LOVE-YOURSELF-1",
"id": 71559440,
"node_id": "MDQ6VXNlcjcxNTU5NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/71559440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LOVE-YOURSELF-1",
"html_url": "https://github.com/LOVE-YOURSELF-1",
"followers_url": "https://api.github.com/users/LOVE-YOURSELF-1/followers",
"following_url": "https://api.github.com/users/LOVE-YOURSELF-1/following{/other_user}",
"gists_url": "https://api.github.com/users/LOVE-YOURSELF-1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LOVE-YOURSELF-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LOVE-YOURSELF-1/subscriptions",
"organizations_url": "https://api.github.com/users/LOVE-YOURSELF-1/orgs",
"repos_url": "https://api.github.com/users/LOVE-YOURSELF-1/repos",
"events_url": "https://api.github.com/users/LOVE-YOURSELF-1/events{/privacy}",
"received_events_url": "https://api.github.com/users/LOVE-YOURSELF-1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @LOVE-YOURSELF-1, thanks for opening this feature request! \r\n\r\nIf you want to download a specific file from the hub, you can use `hf_hub_download` from `huggingface_hub`. \r\n\r\nIf you are using the auto classes, they will only download what's necessary. For example: \r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n```\r\n\r\nwill only download the files for the tokenizer, not the model weights. ",
"If we have multiple download sources,do you think it is necessary to bulit a new class named downloader?",
"@LOVE-YOURSELF-1 Could you explain what you mean in more detail? What do you mean by 'multiple download sources'? ",
"https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3258\r\nCan we rewrite the code about download(config,tokenizer and model)?\r\nrebuilt a class named downloader, which can accomplish download, (from local and hugging hub)",
"@LOVE-YOURSELF-1 There's a plan to harmonise how e.g. loading and downloading shards is handled across libraries which will also account for e.g. local versus downloading from the hub. \r\n\r\nI don't see the reason for having a downloader class which handles downloading these all files - it's bundling together responsibilities making the scope of the class ill-defined. What problem would this be solving? ",
"As you said, i want to integrate downloading and loading. It can be rewritten in downloader.py.\r\nCan you give me some suggestion or idea?",
"@LOVE-YOURSELF-1 Any contributor is welcome to open a PR to demonstrate and new feature or refactor. As this isn't something we or the community are requesting, it's up to you to define the feature you'd like to add. Once there's a clear diff or feature description, then we'll be able to help.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### Feature request
Download reconfiguration
### Motivation
Separate out the download step of the `from_pretrained` functions for the model, configuration and tokenizer.
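Roughly what I have in mind, as a sketch (the class and method names are made up; only `hf_hub_download`/`snapshot_download` are existing APIs):

```python
from huggingface_hub import hf_hub_download, snapshot_download

class Downloader:
    """Hypothetical helper that only fetches files, without loading anything."""

    def __init__(self, repo_id, revision=None, cache_dir=None):
        self.repo_id = repo_id
        self.revision = revision
        self.cache_dir = cache_dir

    def config(self):
        return hf_hub_download(
            self.repo_id, "config.json", revision=self.revision, cache_dir=self.cache_dir
        )

    def tokenizer(self):
        return snapshot_download(
            self.repo_id,
            allow_patterns=["tokenizer*", "*.model", "special_tokens_map.json"],
            revision=self.revision,
            cache_dir=self.cache_dir,
        )

    def model_weights(self):
        return snapshot_download(
            self.repo_id,
            allow_patterns=["*.safetensors*", "*.bin*"],
            revision=self.revision,
            cache_dir=self.cache_dir,
        )
```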
### Your contribution
Do you think it is necessary? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28545/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28544/comments | https://api.github.com/repos/huggingface/transformers/issues/28544/events | https://github.com/huggingface/transformers/issues/28544 | 2,085,205,752 | I_kwDOCUB6oc58Sbb4 | 28,544 | Early stopping patience does not work when resuming from checkpoint | {
"login": "Ubadub",
"id": 1286898,
"node_id": "MDQ6VXNlcjEyODY4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1286898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ubadub",
"html_url": "https://github.com/Ubadub",
"followers_url": "https://api.github.com/users/Ubadub/followers",
"following_url": "https://api.github.com/users/Ubadub/following{/other_user}",
"gists_url": "https://api.github.com/users/Ubadub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ubadub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ubadub/subscriptions",
"organizations_url": "https://api.github.com/users/Ubadub/orgs",
"repos_url": "https://api.github.com/users/Ubadub/repos",
"events_url": "https://api.github.com/users/Ubadub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ubadub/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"IMO, the problem can be generalised through the TrainingArguments parameter `save_only_model` ([link](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/training_args.py#L307)):\r\n\r\nIts definition states: \"...when this is true, you won't be able to resume training from checkpoint.\".\r\nAs the current [TrainerState](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/trainer_callback.py#L35) implementation stands, we are not _truly_ able to resume training from a checkpoint even on setting `save_only_model=False`.\r\n\r\nAs pointed by @Ubadub here, Trainer's callbacks' states are not persisted along with models (even on setting `save_only_model=False`). To fix this issue (and the auxiliary issue #10290, pointed above), we need the capability to persist callbacks and load them when using `resume_from_checkpoint`.\r\n\r\nIn short, Trainer's callbacks should be a part of the [TrainerState](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/trainer_callback.py#L35) object.\r\n\r\nI can help in this implementation if this analysis seems reasonable.\r\n\r\n\r\nCheers!"
] | 1,705 | 1,706 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0.post101 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Fundamentally the issue is that the `early_stopping_patience_counter` is not persisted when checkpointing. Consequently, it is [always (re)set to 0](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/trainer_callback.py#L564) when initializing `Trainer`, including when resuming from checkpoint. This means that if, for example, you never train your model for `early_stopping_patience`-many evaluation steps at once before stopping and resuming from checkpoint, early stopping will never happen.
An auxiliary issue is that even if you train your model for longer than `early_stopping_patience`-many evaluation steps, and training correctly stops, if you happen to then re-initiate training from a checkpoint, training will resume [even though the run ended with `self.control.should_training_stop == True`](https://github.com/huggingface/transformers/blob/f4f57f9dfa68948a383c352a900d588f63f6290a/src/transformers/trainer_callback.py#L154). This is because this variable is also not persisted to the `trainer_state.json` file when checkpointing. This issue was reported in #10290, but was never resolved before the issue was closed as stale.
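For illustration, the kind of persistence I mean could look like the sketch below (untested; the file name is made up and restoring assumes `resume_from_checkpoint` is set on the arguments — the same idea would apply to `should_training_stop`):

```python
import json
import os

from transformers import TrainerCallback

class PersistEarlyStoppingState(TrainerCallback):
    """Hypothetical workaround: save/restore the patience counter next to each checkpoint."""

    def __init__(self, early_stopping_callback):
        self.es = early_stopping_callback

    def on_save(self, args, state, control, **kwargs):
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        os.makedirs(ckpt_dir, exist_ok=True)
        with open(os.path.join(ckpt_dir, "early_stopping_state.json"), "w") as f:
            json.dump({"patience_counter": self.es.early_stopping_patience_counter}, f)

    def on_train_begin(self, args, state, control, **kwargs):
        resume = args.resume_from_checkpoint
        if isinstance(resume, str):
            path = os.path.join(resume, "early_stopping_state.json")
            if os.path.isfile(path):
                with open(path) as f:
                    self.es.early_stopping_patience_counter = json.load(f)["patience_counter"]
```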
To reproduce the main issue, simply initiate a training run and set `early_stopping_patience` to a value of your choice, then interrupt training before the run gets there. Reinitiate training with `resume_from_checkpoint=True`. Rinse and repeat until `best_metric` increases for `early_stopping_patience`-many evaluation calls.
To reproduce the auxiliary issue, don't interrupt your run until it stops due to early stopping. When it is complete, reinitiate training with `resume_from_checkpoint=True`.
### Expected behavior
Early stopping patience should work exactly the same when stopping and resuming runs from a checkpoint as when training continuously without interruption. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28544/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28543/comments | https://api.github.com/repos/huggingface/transformers/issues/28543/events | https://github.com/huggingface/transformers/issues/28543 | 2,085,134,751 | I_kwDOCUB6oc58SKGf | 28,543 | IsADirectoryError: [Errno 21] Is a directory: 'my-company/my-llm' | {
"login": "gventuri",
"id": 15671184,
"node_id": "MDQ6VXNlcjE1NjcxMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/15671184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gventuri",
"html_url": "https://github.com/gventuri",
"followers_url": "https://api.github.com/users/gventuri/followers",
"following_url": "https://api.github.com/users/gventuri/following{/other_user}",
"gists_url": "https://api.github.com/users/gventuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gventuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gventuri/subscriptions",
"organizations_url": "https://api.github.com/users/gventuri/orgs",
"repos_url": "https://api.github.com/users/gventuri/repos",
"events_url": "https://api.github.com/users/gventuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gventuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Make sure you are logged in to the hub on `my-company` . For me this works: \r\n```python \r\n>>> from transformers import AutoModel\r\n>>> model = AutoModel.from_pretrained(\"bert-base-uncase\")\r\n>>> model.push_to_hub(\"ArthurZ/my-llm\")\r\nCommitInfo(commit_url='https://huggingface.co/ArthurZ/my-llm/commit/53cb603f0cdd1d6284f21c541f06a3b8a0ddd2d1', commit_message='Upload model', commit_description='', oid='53cb603f0cdd1d6284f21c541f06a3b8a0ddd2d1', pr_url=None, pr_revision=None, pr_num=None)\r\n```",
"@ArthurZucker I tried to login both with\r\n```\r\n!huggingface-cli login --token $hf_token\r\n```\r\nand with\r\n```\r\nhuggingface_hub.login\r\n```\r\n\r\nIn both cases I provided my personal token (I'm admin in the company). If I try to login again, it says that I'm already connected.",
"cc @Wauplin you might know better than me what this represents! ",
"@gventuri Can it be that you have a local directory called `\"my-company/my-llm\"` and that it conflicts with the push_to_hub? \r\n\r\n(to confirm that you are indeed logged in, you can do `huggingface-cli whoami` but it really doesn't seem to be the problem here)",
"@Wauplin thanks a lot, it was conflicting with the local folder, it's now working!",
"Great to know your problem's solved! :hugs: "
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.27.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- Run the following code:
```
from transformers import AutoModelForCausalLM
import torch
m = AutoModelForCausalLM.from_pretrained(
"pretrained-model",
return_dict=True,
torch_dtype=torch.bfloat16,
device_map="auto"
)
m.push_to_hub("my-company/my-llm")
```
- I get the following error
```
---------------------------------------------------------------------------
IsADirectoryError Traceback (most recent call last)
Cell In[5], line 1
----> 1 m.push_to_hub("my-company/my-llm")
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:2530, in PreTrainedModel.push_to_hub(self, *args, **kwargs)
2528 if tags:
2529 kwargs["tags"] = tags
-> 2530 return super().push_to_hub(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py:865, in PushToHubMixin.push_to_hub(self, repo_id, use_temp_dir, commit_message, private, token, max_shard_size, create_pr, safe_serialization, revision, commit_description, tags, **deprecated_kwargs)
860 repo_id = self._create_repo(
861 repo_id, private=private, token=token, repo_url=repo_url, organization=organization
862 )
864 # Create a new empty model card and eventually tag it
--> 865 model_card = create_and_tag_model_card(
866 repo_id, tags, token=token, ignore_metadata_errors=ignore_metadata_errors
867 )
869 if use_temp_dir is None:
870 use_temp_dir = not os.path.isdir(working_dir)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py:1120, in create_and_tag_model_card(repo_id, tags, token, ignore_metadata_errors)
1104 """
1105 Creates or loads an existing model card and tags it.
1106
(...)
1116 the process. Use it at your own risk.
1117 """
1118 try:
1119 # Check if the model card is present on the remote repo
-> 1120 model_card = ModelCard.load(repo_id, token=token, ignore_metadata_errors=ignore_metadata_errors)
1121 except EntryNotFoundError:
1122 # Otherwise create a simple model card from template
1123 model_description = "This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated."
File /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:185, in RepoCard.load(cls, repo_id_or_path, repo_type, token, ignore_metadata_errors)
182 raise ValueError(f"Cannot load RepoCard: path not found on disk ({repo_id_or_path}).")
184 # Preserve newlines in the existing file.
--> 185 with card_path.open(mode="r", newline="", encoding="utf-8") as f:
186 return cls(f.read(), ignore_metadata_errors=ignore_metadata_errors)
File /opt/conda/lib/python3.10/pathlib.py:1119, in Path.open(self, mode, buffering, encoding, errors, newline)
1117 if "b" not in mode:
1118 encoding = io.text_encoding(encoding)
-> 1119 return self._accessor.open(self, mode, buffering, encoding, errors,
1120 newline)
IsADirectoryError: [Errno 21] Is a directory: 'my-company/my-llm'
```
This happens every time I run `push_to_hub`, even with other configurations.
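For completeness, a quick sanity check that turned out to be relevant here (sketch; same placeholder repo id as above):

```python
import os

# A local folder with the same name as the repo id shadows the Hub repo when the
# model card is loaded, which is what produces the IsADirectoryError above.
print(os.path.isdir("my-company/my-llm"))
```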
### Expected behavior
I would expect this code to push the pretrained model to the Hugging Face Hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28542/comments | https://api.github.com/repos/huggingface/transformers/issues/28542/events | https://github.com/huggingface/transformers/pull/28542 | 2,084,992,834 | PR_kwDOCUB6oc5kQBBB | 28,542 | [docs] DeepSpeed | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"First draft wrapped up!\r\n\r\n1. It is easier to find `HfDeepSpeedConfig` now\r\n2. Prebuilding troubleshooting steps are in `debugging.md`\r\n3. Testing the DeepSpeed integration is moved to `testing.md`\r\n4. The main DeepSpeed guide is refactored to flow better. For example, installation > memory requirements > selecting a ZeRO stage. I think it's important to address these first before a user finds out they don't have the required memory or chose the wrong ZeRO stage. I've also condensed and made things more concise where appropriate without losing any context or info."
] | 1,705 | 1,706 | 1,706 | MEMBER | null | Refactors the [DeepSpeed API page](https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#deepspeed-integration) to make it easier to find and view `HfDeepSpeedConfig`, the only actual API reference on this doc. The rest of the DeepSpeed content will go to the Efficient Training Techniques section as a standalone guide. I've also moved some of the troubleshooting content with building DeepSpeed to the [Debugging guide](https://huggingface.co/docs/transformers/main/en/debugging) with a link.
todo:
- [x] discuss choosing which ZeRO stage to use
- [x] get model weights out
- [x] ZeRO-3 and inference
- [x] memory requirements
- [x] troubleshooting/filing issues
- [x] non-Trainer DeepSpeed integration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28542/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28542",
"html_url": "https://github.com/huggingface/transformers/pull/28542",
"diff_url": "https://github.com/huggingface/transformers/pull/28542.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28542.patch",
"merged_at": 1706113888000
} |
https://api.github.com/repos/huggingface/transformers/issues/28541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28541/comments | https://api.github.com/repos/huggingface/transformers/issues/28541/events | https://github.com/huggingface/transformers/issues/28541 | 2,084,552,019 | I_kwDOCUB6oc58P71T | 28,541 | LLM fine-tuning with deepspeed | {
"login": "vallabh001",
"id": 88985147,
"node_id": "MDQ6VXNlcjg4OTg1MTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/88985147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vallabh001",
"html_url": "https://github.com/vallabh001",
"followers_url": "https://api.github.com/users/vallabh001/followers",
"following_url": "https://api.github.com/users/vallabh001/following{/other_user}",
"gists_url": "https://api.github.com/users/vallabh001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vallabh001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vallabh001/subscriptions",
"organizations_url": "https://api.github.com/users/vallabh001/orgs",
"repos_url": "https://api.github.com/users/vallabh001/repos",
"events_url": "https://api.github.com/users/vallabh001/events{/privacy}",
"received_events_url": "https://api.github.com/users/vallabh001/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @vallabh001, thanks for raising an issue! \r\n\r\nSo that we can best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide:\r\n* The full error traceback\r\n* A minimal code snippet we can use to reproduce the error. In this case, we can't help without knowing what's in `llm_training.py`\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output",
"There's the Alignment Handbook which uses Deepspeed to train any decoder LLM like Llama or Mistral on multiple GPUs: https://github.com/huggingface/alignment-handbook. It includes scripts for both supervised fine-tuning (SFT) and human preference fine-tuning using DPO (direct preference optimization): https://github.com/huggingface/alignment-handbook/tree/main/scripts.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | I was trying to fine-tune llama2 by referring this [blog](https://www.philschmid.de/instruction-tune-llama-2l), but the training time is very high (in days). So, I was thinking of using deepspeed optimization for the training process. However, there is no proper documentation for fine-tuning llm's using deepspeed.
I executed the command below to start training, but encountered an error. I have a single A100 40 GB GPU.
`torchrun --num_gpus=1 --nnodes 1 --nproc_per_node 1 llm_training.py --deepspeed "ds_zero2_no_offload.json" --ddp_find_unused_parameters False
`
Error
`RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 447 with name base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28541/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28540/comments | https://api.github.com/repos/huggingface/transformers/issues/28540/events | https://github.com/huggingface/transformers/pull/28540 | 2,084,533,824 | PR_kwDOCUB6oc5kOY9P | 28,540 | Remove CaptureLogger | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Closing for now as we need some funky logs for the no logs case whilst we still support < 3.10 and the logger issue has been resolved. "
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Replace our custom CaptureLogger class with the standard-library `unittest.TestCase.assertLogs`.
Makes testing for the no-logs case more robust - it asserts that no logs are emitted rather than matching strings.
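For reference, the pattern this moves towards looks roughly like the following (illustrative test, not the exact code in the diff):

```python
import logging
import unittest

class ExampleLoggingTest(unittest.TestCase):
    def test_warning_is_logged(self):
        logger = logging.getLogger("transformers.example")
        # assertLogs fails if nothing is logged at the given level and exposes the
        # captured records for string assertions.
        with self.assertLogs(logger, level="WARNING") as cm:
            logger.warning("something happened")
        self.assertIn("something happened", cm.output[0])
```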
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28540/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28540",
"html_url": "https://github.com/huggingface/transformers/pull/28540",
"diff_url": "https://github.com/huggingface/transformers/pull/28540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28540.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28539/comments | https://api.github.com/repos/huggingface/transformers/issues/28539/events | https://github.com/huggingface/transformers/issues/28539 | 2,084,510,356 | I_kwDOCUB6oc58PxqU | 28,539 | `load_best_model_at_end` is inconsistent with evaluation (and save) logic at end of training | {
"login": "antoine-lizee",
"id": 2957716,
"node_id": "MDQ6VXNlcjI5NTc3MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2957716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoine-lizee",
"html_url": "https://github.com/antoine-lizee",
"followers_url": "https://api.github.com/users/antoine-lizee/followers",
"following_url": "https://api.github.com/users/antoine-lizee/following{/other_user}",
"gists_url": "https://api.github.com/users/antoine-lizee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoine-lizee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoine-lizee/subscriptions",
"organizations_url": "https://api.github.com/users/antoine-lizee/orgs",
"repos_url": "https://api.github.com/users/antoine-lizee/repos",
"events_url": "https://api.github.com/users/antoine-lizee/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoine-lizee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"### Notes\r\n\r\nI realize that this issue probably doesn't arise if the strategy is `epoch`.\r\n\r\nIt seems that using N + epsilon as the `num_train_epochs` would go around this problem in a very hacky way (and evaluate / save the model that corresponds to the first step after the desired epoch that is a multiple of `eval_steps`). Would that be your recommendation?\r\n\r\nedit: Ok digging a bit more, it seems that the proper way of fixing this problem would be to **add a callback** to the trainer which would enforce saving at the end of training.\r\nI will do this, but the default behaviour is still \"wrong\" I believe. (and would warrant at least some clear disclaimer in the doc?)"
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.3
- Accelerate version: 0.26.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Shortened script below:
```python
from transformers import (
    XLMRobertaForTokenClassification,
    TrainingArguments,
    Trainer,
    DataCollatorForTokenClassification,
)

# tokenizer, label_list, train_ds, test_ds, compute_metrics, output_dir_name,
# run_dir and epochs are defined elsewhere in the original (shortened) script
model_checkpoint = "xlm-roberta-large"
model_name = model_checkpoint.split("/")[-1]
model = XLMRobertaForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_list))
batch_size = 32
learning_rate = 2e-5
eval_steps = 0.1
# The data + batch size leads to having 11277 steps
training_args = TrainingArguments(
output_dir_name,
logging_dir=run_dir,
logging_strategy="steps",
logging_steps=eval_steps / 5,
evaluation_strategy="steps",
eval_steps=eval_steps,
save_strategy="steps",
save_steps=eval_steps,
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=epochs,
weight_decay=0.01,
push_to_hub=False,
save_total_limit=4,
load_best_model_at_end=True
)
data_collator = DataCollatorForTokenClassification(tokenizer)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
# Train the model
trainer.train()
```
### Expected behavior
I would expect that my model is evaluated (and saved!) at the last step.
It is not, and in most example scripts we see `trainer.evaluate()` after the `trainer.train()`.
As a result, when we set `load_best_model_at_end=True` we concretely **discard any training that happened after the last checkpoint**, which seems wrong. In my case, the last 10% of training is discarded.
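A minimal sketch of the callback-based workaround mentioned in the comments above; the class name is illustrative and this is not built-in Trainer behavior, but `TrainerCallback`, `on_step_end` and the `should_evaluate`/`should_save` control flags are the documented hooks being used:
```python
from transformers import TrainerCallback

class EvaluateAtEndCallback(TrainerCallback):
    """Force a final evaluation (and save) on the last optimization step,
    even when global_step is not a multiple of eval_steps."""

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step >= state.max_steps:
            control.should_evaluate = True
            control.should_save = True
        return control

# usage: trainer.add_callback(EvaluateAtEndCallback()) before calling trainer.train()
```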
My understanding of what's happening:
- In the trainer callback, we check ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L447)) if the `global_step` is a multiple of the `eval_steps`. If the total number of steps is not a multiple of it, this condition is not met at the last step.
- If we `load_best_model_at_end`, the last accessible evaluation does not include the performance of the latest stages of training.
- As a side note, running `trainer.evaluate()` by hand after the training only re-evaluates the past checkpoint that was selected as the best. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28539/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28538/comments | https://api.github.com/repos/huggingface/transformers/issues/28538/events | https://github.com/huggingface/transformers/pull/28538 | 2,084,408,051 | PR_kwDOCUB6oc5kN9vs | 28,538 | [`gradient_checkpointing`] default to use it for torch 2.3 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Why do we use reentrant gc by default? It said the non-reentrant gc can be more advantageous than the reentrant version: https://pytorch.org/docs/2.0/checkpoint.html#torch.utils.checkpoint.checkpoint",
"@hiyouga the use_reentrant=True is used by default in PT anyway so if you set it to `None`, `use_reentrant` will be set to `True`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28538). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Fixes #28536 in preparation for next torch release | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28538",
"html_url": "https://github.com/huggingface/transformers/pull/28538",
"diff_url": "https://github.com/huggingface/transformers/pull/28538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28538.patch",
"merged_at": 1708392205000
} |
https://api.github.com/repos/huggingface/transformers/issues/28537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28537/comments | https://api.github.com/repos/huggingface/transformers/issues/28537/events | https://github.com/huggingface/transformers/pull/28537 | 2,084,372,239 | PR_kwDOCUB6oc5kN15A | 28,537 | Fixes default value of `softmax_scale` in `PhiFlashAttention2`. | {
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `loss=0.0` error while fine-tuning with FP16 is another issue and I do have an ugly fix, but will look into it with more patience (and use a separate PR).",
"No problems! Thanks for the merge!",
"Thanks very much @gugarosa for the deep dive and the fix! ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28537). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
- Phi has never used `softmax_scale=1.0` with Flash-Attention, so the default is being moved to `None`. This tentatively fixes any issue regarding fine-tuning Phi-based checkpoints when Flash-Attention 2 is turned on.
- Documentation is also updated to reflect the official Phi checkpoints.
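For context, a minimal sketch of loading a Phi checkpoint with Flash Attention 2 enabled, which is the setup this default affects; the checkpoint name and dtype below are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",                       # illustrative checkpoint
    torch_dtype=torch.float16,               # Flash Attention 2 requires fp16/bf16
    attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
```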
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28488 (tentative)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @susnato
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28537/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28537/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28537",
"html_url": "https://github.com/huggingface/transformers/pull/28537",
"diff_url": "https://github.com/huggingface/transformers/pull/28537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28537.patch",
"merged_at": 1705497765000
} |
https://api.github.com/repos/huggingface/transformers/issues/28536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28536/comments | https://api.github.com/repos/huggingface/transformers/issues/28536/events | https://github.com/huggingface/transformers/issues/28536 | 2,084,363,783 | I_kwDOCUB6oc58PN4H | 28,536 | Gradient checkpointing throws use_reentrant warning on PyTorch 2.1 | {
"login": "rosario-purple",
"id": 123594463,
"node_id": "U_kgDOB13m3w",
"avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosario-purple",
"html_url": "https://github.com/rosario-purple",
"followers_url": "https://api.github.com/users/rosario-purple/followers",
"following_url": "https://api.github.com/users/rosario-purple/following{/other_user}",
"gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions",
"organizations_url": "https://api.github.com/users/rosario-purple/orgs",
"repos_url": "https://api.github.com/users/rosario-purple/repos",
"events_url": "https://api.github.com/users/rosario-purple/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosario-purple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for raising! given that we had #27020, this should be fairly easy to fix! cc @younesbelkada ",
"@ArthurZucker is this still outstanding?",
"Will merge the PR today"
] | 1,705 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': False, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Training any text model with gradient checkpointing enabled on PyTorch 2.1 and higher produces this warning:
```
/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: Warning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
```
This can be resolved by manually monkey-patching the model code to pass `use_reentrant=True`, e.g. like so:
```
hidden_states, self_attns, decoder_cache = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer),
hidden_states,
attention_mask,
position_ids,
None,
is_padded_inputs,
use_reentrant=True,
)
```
This is caused by an upstream change in PyTorch:
https://medium.com/pytorch/how-activation-checkpointing-enables-scaling-up-training-deep-learning-models-7a93ae01ff2d
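For reference, recent transformers versions (via the refactor in #27020, which the maintainers point to above) expose a public knob for this, so the flag can be set without monkey-patching; a short sketch, assuming a version that supports `gradient_checkpointing_kwargs`:
```python
# on the model directly
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})

# or through the Trainer
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```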
### Expected behavior
No warning should be written | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28535/comments | https://api.github.com/repos/huggingface/transformers/issues/28535/events | https://github.com/huggingface/transformers/pull/28535 | 2,084,210,365 | PR_kwDOCUB6oc5kNR55 | 28,535 | Allow add_tokens for ESM | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | The tokenizer code for ESM forces all added tokens to be special tokens, presumably because the authors felt that the list of amino acids in proteins was constant and therefore that there wouldn't be a need to actually expand the core vocabulary. However, there are definitely use-cases for expanding the vocabulary - see #28387.
This PR makes `add_tokens()` for ESM tokenizers behave like it does for other tokenizers, and doesn't force the added tokens to be special tokens.
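With this change, expanding the ESM vocabulary should work like it does for other tokenizers; a minimal usage sketch (the token strings below are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
num_added = tokenizer.add_tokens(["<mod1>", "<mod2>"])  # added as regular (non-special) tokens
print(num_added, len(tokenizer))
# the corresponding model then needs: model.resize_token_embeddings(len(tokenizer))
```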
Fixes #28387 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28535/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28535",
"html_url": "https://github.com/huggingface/transformers/pull/28535",
"diff_url": "https://github.com/huggingface/transformers/pull/28535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28535.patch",
"merged_at": 1705667526000
} |
https://api.github.com/repos/huggingface/transformers/issues/28534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28534/comments | https://api.github.com/repos/huggingface/transformers/issues/28534/events | https://github.com/huggingface/transformers/issues/28534 | 2,084,136,856 | I_kwDOCUB6oc58OWeY | 28,534 | run_glue_no_trainer.py script crashes on Mistral model due to tokenizer issue | {
"login": "rosario-purple",
"id": 123594463,
"node_id": "U_kgDOB13m3w",
"avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosario-purple",
"html_url": "https://github.com/rosario-purple",
"followers_url": "https://api.github.com/users/rosario-purple/followers",
"following_url": "https://api.github.com/users/rosario-purple/following{/other_user}",
"gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions",
"organizations_url": "https://api.github.com/users/rosario-purple/orgs",
"repos_url": "https://api.github.com/users/rosario-purple/repos",
"events_url": "https://api.github.com/users/rosario-purple/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosario-purple/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Adding these lines seems to fix it, not sure if this is the best/most general solution though:\r\n\r\n```\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n args.model_name_or_path, use_fast=not args.use_slow_tokenizer, trust_remote_code=args.trust_remote_code\r\n )\r\n tokenizer.pad_token = tokenizer.eos_token\r\n config.pad_token_id = tokenizer.pad_token_id\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in args.model_name_or_path),\r\n config=config,\r\n ignore_mismatched_sizes=args.ignore_mismatched_sizes,\r\n trust_remote_code=args.trust_remote_code,\r\n )\r\n```",
"Hi @rosario-purple, thanks for raising this issue! \r\n\r\nThe proposed fix is the recommended way to address this. Would you like to open a PR to add this to the script? This way you get the github contribution",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': False, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker @younesbelkada @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Check out the transformers repo, and run this command (on a large server with appropriately configured `accelerate`, so it won't OOM):
`python run_glue_no_trainer.py --model_name_or_path mistralai/Mistral-7B-v0.1 --task_name sst2 --per_device_train_batch_size 4 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/sst2`
It will crash with this error and stack trace:
```
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
Traceback (most recent call last):
File "/scratch/brr/run_glue.py", line 662, in <module>
main()
File "/scratch/brr/run_glue.py", line 545, in main
for step, batch in enumerate(active_dataloader):
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/accelerate/data_loader.py", line 448, in __iter__
current_batch = next(dataloader_iter)
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 674, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/data/data_collator.py", line 249, in __call__
batch = self.tokenizer.pad(
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3259, in pad
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(
File "/scratch/miniconda3/envs/brr/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2707, in _get_padding_truncation_strategies
raise ValueError(
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
/scratch/miniconda3/envs/brr/lib/python3.10/tempfile.py:860: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmprbynkmzk'>
_warnings.warn(warn_message, ResourceWarning)
```
### Expected behavior
It should train without crashing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28534/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28533/comments | https://api.github.com/repos/huggingface/transformers/issues/28533/events | https://github.com/huggingface/transformers/pull/28533 | 2,083,951,133 | PR_kwDOCUB6oc5kMYb8 | 28,533 | Fix attention mask creation for GPTNeo | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28533). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,706 | null | MEMBER | null | # What does this PR do?
It seems that #26486 broke the way the attention mask was created. It creates a causal attention mask by default, but there is already a causal attention mask in the `GPTNeoSelfAttention` modules, resulting in `NaN`s.
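A toy illustration, not the model code, of how applying two large negative mask biases on top of each other can end in NaNs: the sum overflows to -inf, and any row that ends up fully masked (e.g. a padded position) then softmaxes to NaN:
```python
import torch

scores = torch.zeros(1, 4)                      # toy attention scores
mask_value = torch.finfo(scores.dtype).min
scores = scores + mask_value + mask_value       # double-masked positions overflow to -inf
print(torch.softmax(scores, dim=-1))            # tensor([[nan, nan, nan, nan]])
```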
I am not sure the solution is perfect, so opened to suggestions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28533",
"html_url": "https://github.com/huggingface/transformers/pull/28533",
"diff_url": "https://github.com/huggingface/transformers/pull/28533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28533.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28532/comments | https://api.github.com/repos/huggingface/transformers/issues/28532/events | https://github.com/huggingface/transformers/issues/28532 | 2,083,781,693 | I_kwDOCUB6oc58M_w9 | 28,532 | Inconsistent check for is_accelerate_available() in transformers.training_args | {
"login": "faph",
"id": 8397805,
"node_id": "MDQ6VXNlcjgzOTc4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8397805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faph",
"html_url": "https://github.com/faph",
"followers_url": "https://api.github.com/users/faph/followers",
"following_url": "https://api.github.com/users/faph/following{/other_user}",
"gists_url": "https://api.github.com/users/faph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faph/subscriptions",
"organizations_url": "https://api.github.com/users/faph/orgs",
"repos_url": "https://api.github.com/users/faph/repos",
"events_url": "https://api.github.com/users/faph/events{/privacy}",
"received_events_url": "https://api.github.com/users/faph/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is already fixed here: https://github.com/huggingface/transformers/blob/002566f398cd6dbf7053a89c26646ac45be540f4/src/transformers/training_args.py#L1830",
"Hi @faph, thanks for raising this issue! \r\n\r\nYes, as you identified, a fix was added with #28171 which will be part of the next release. ",
"Thanks @amyeroberts "
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.2
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: unknown
- Using distributed or parallel set-up in script?: unknown
Output of `pip show accelerate`:
```
Name: accelerate
Version: 0.20.3
(...)
```
### Who can help?
@muellerz @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Pip install `transformers 4.36.2` and `accelerate 0.20.3`
2. Instantiate `transformers.TrainingArguments`
This raises something like this:
```
@cached_property
def _setup_devices(self) -> "torch.device":
requires_backends(self, ["torch"])
logger.info("PyTorch: setting up devices")
if not is_sagemaker_mp_enabled():
if not is_accelerate_available(min_version="0.20.1"):
raise ImportError(
"Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
)
> AcceleratorState._reset_state(reset_partial_state=True)
E NameError: name 'AcceleratorState' is not defined
```
This is because the import of `AcceleratorState` is conditional upon `accelerate` with minimum version `0.21.0`. See https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/utils/import_utils.py#L684
### Expected behavior
Consistent min version check for `accelerate` and successful `TrainingArguments` instantiation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28531/comments | https://api.github.com/repos/huggingface/transformers/issues/28531/events | https://github.com/huggingface/transformers/issues/28531 | 2,083,735,951 | I_kwDOCUB6oc58M0mP | 28,531 | A named Peft Model doesn't work with resume_from_checkpoint=True | {
"login": "chenbin11200",
"id": 5245644,
"node_id": "MDQ6VXNlcjUyNDU2NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5245644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbin11200",
"html_url": "https://github.com/chenbin11200",
"followers_url": "https://api.github.com/users/chenbin11200/followers",
"following_url": "https://api.github.com/users/chenbin11200/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbin11200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbin11200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbin11200/subscriptions",
"organizations_url": "https://api.github.com/users/chenbin11200/orgs",
"repos_url": "https://api.github.com/users/chenbin11200/repos",
"events_url": "https://api.github.com/users/chenbin11200/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbin11200/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"A PR is attached, if it is necessary: https://github.com/huggingface/transformers/pull/28547",
"Also cc @younesbelkada for PEFT",
"Thanks @amyeroberts @chenbin11200 - the PR makes sense to me ! I left a single comment ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
transformers==4.36.2
peft==0.5.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi @muellerzr and @pacman100,
It seems that if I try to resume a LoRA training by using
```
trainer.train(resume_from_checkpoint=True)
```
it fails with the following error:
```
ValueError: Can't find a valid checkpoint at /my_output_dir/checkpoint-300
```
By checking the code, I figured out that the resuming process is stopped by the following check in `Trainer._load_from_checkpoint`:
```
if not (
any(
os.path.isfile(f)
for f in [
weights_file,
safe_weights_file,
weights_index_file,
safe_weights_index_file,
adapter_weights_file,
adapter_safe_weights_file,
]
)
or is_fsdp_ckpt
):
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
```
Since I initialize the PeftModel with an adapter name, which I use to manage my adapters:
```python
peft_model = get_peft_model(
    model=base_model,
    peft_config=peft_config,
    adapter_name='my_lora_model_name',
)
```
In this case, the `adapter_config.json` and `adapter_model.bin` files will be saved in `/my_output_dir/checkpoint-300/my_lora_model_name` instead of `/my_output_dir/checkpoint-300` directly. That's why the ValueError is raised.
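Roughly, the checkpoint folder then looks like the sketch below (other files omitted), while the check above only looks for the adapter files at the top level:
```
checkpoint-300/
    my_lora_model_name/
        adapter_config.json
        adapter_model.bin
    optimizer.pt
    scheduler.pt
    trainer_state.json
```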
I am not sure whether this is a known issue, or what the proper way to fix it would be. Do I have to write my own PeftTrainer to handle this?
Thank you in advance for your support.
Best regards.
### Expected behavior
Resuming training with a named PEFT model is supported. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28531/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28530/comments | https://api.github.com/repos/huggingface/transformers/issues/28530/events | https://github.com/huggingface/transformers/issues/28530 | 2,083,730,860 | I_kwDOCUB6oc58MzWs | 28,530 | Early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled | {
"login": "ManishChandra12",
"id": 17062142,
"node_id": "MDQ6VXNlcjE3MDYyMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/17062142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManishChandra12",
"html_url": "https://github.com/ManishChandra12",
"followers_url": "https://api.github.com/users/ManishChandra12/followers",
"following_url": "https://api.github.com/users/ManishChandra12/following{/other_user}",
"gists_url": "https://api.github.com/users/ManishChandra12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManishChandra12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManishChandra12/subscriptions",
"organizations_url": "https://api.github.com/users/ManishChandra12/orgs",
"repos_url": "https://api.github.com/users/ManishChandra12/repos",
"events_url": "https://api.github.com/users/ManishChandra12/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManishChandra12/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @ManishChandra12, thanks for raising an issue! \r\n\r\nIn the provided snippet, you don't appear to be passing `compute_metrics` to the `Trainer` class, as required. ",
"Hi @amyeroberts. Thanks for the quick response.\r\n\r\nThe error persists even when I pass ```compute_metrics``` to ```Trainer```.\r\n\r\nPS: I have updated the original issue by adding ```compute_metrics```.",
"@ManishChandra12 OK, thanks for reporting back. Is the error exactly the same as before? \r\n\r\nCould you update the script to use a public dataset, or provide a sample of the dataset so we can run the code to reproduce on our end? ",
"@amyeroberts yes, the error is exactly the same as before.\r\n\r\nHere is a sample of the dataset:\r\n[val.csv](https://github.com/huggingface/transformers/files/13950970/val.csv)\r\n[test.csv](https://github.com/huggingface/transformers/files/13950971/test.csv)\r\n[train.csv](https://github.com/huggingface/transformers/files/13950972/train.csv)\r\n",
"Hi @ManishChandra12, I'm able to run the provided code and data without error on main. Could you try updating your transformers version to the latest release v4.36.2? ",
"@ManishChandra12 @amyeroberts could you figure out the issue?\r\n\r\nWhen I run the code with the provided example data, it also says\r\n\"early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled\"\r\nand when I run ```trainer.evaluate()``` it only returns the following keys ```'eval_runtime', 'eval_samples_per_second', 'eval_steps_per_second', 'epoch'``` so neither the loss or the F1 score\r\n\r\nI am on v4.36.2.\r\n\r\nI also have the same issue with my code, to recapitulate:\r\n- TrainingArguments.prediction_loss_only = False\r\n- TrainingArguments.metric_for_best_model = \"name_of_my_metric\"\r\n- TrainingArguments.greater_is_better = True\r\n- Trainer.compute_metrics = fn_eval_metric which is a function that returns {\"name_of_my_metric\": score_to_improve}\r\n\r\nI am going through the source code of the Trainer to try to figure out what happens in detail because it seems the compute_metrics function isn't called at all ... I tried to introduce a bug in the function but there is no error as it should happen if the function was called, the only error comes at the end of the evaluation loop as it doesn't find the \"eval_name_of_my_metric\" key ...\r\n\r\nI am puzzled, any hints would be great on how to fix this, thanks!",
"@adrienchaton - could you try running on the latest release v4.37? ",
"@amyeroberts thanks, I updated to v4.37 but the problem persists in the same way.\r\n\r\nAnd if I set \r\n- Trainer.compute_metrics = None\r\n- TrainingArguments.metric_for_best_model = \"loss\"\r\n- TrainingArguments.greater_is_better = False\r\n- TrainingArguments.prediction_loss_only = True\r\n\r\nThen the same error comes, stating\r\n```\r\ntransformers/trainer.py\", line 1929, in _inner_training_loop\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\ntransformers/trainer.py\", line 2300, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\ntransformers/trainer.py\", line 2389, in _save_checkpoint\r\n metric_value = metrics[metric_to_check]\r\nKeyError: 'eval_loss'```",
"this issue might be related, but setting or not the label_names in TrainingArguments didnt change my issues\r\n\r\nhttps://discuss.huggingface.co/t/why-do-i-get-no-validation-loss-and-why-are-metrics-not-calculated/32373",
"@amyeroberts @ManishChandra12 I think I figured out the problem, TrainingArguments.label_names wasn't configured properly and it would silently be ignored by the trainer which seems to run evaluation but doesnt compute any metric, neither the loss, nor the custom metrics if providing a compute_metrics function\r\n\r\nmaking sure that your data_collator returns a \"labels\" key AND setting label_names=[\"labels\"] should fix the problem\r\n\r\nI am not the only one who faced this problem .. I think it would be better to print a warning e.g. \"compute_metrics provided but no label_names, thus metrics will be ignored\" because it is misleading to see the progress bar with all eval samples being processed while the metrics arent computed because label_names keys arent found in the input",
"Thanks for updating with the fix @adrienchaton! \r\n\r\nI'll let @pacman100 and @muellerzr decide on the best way to handle / flag on the trainer side "
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer
from transformers import EarlyStoppingCallback, IntervalStrategy
import numpy as np
import evaluate
import pandas as pd
os.environ["CUDA_VISIBLE_DEVICES"]=str(gpu_id)
from datasets import Dataset, DatasetDict
train_k = pd.read_csv('train.csv', usecols=["text", "k"])
train_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
val_k = pd.read_csv('val.csv', usecols=["text", "k"])
val_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
test_k = pd.read_csv('test.csv', usecols=["text", "k"])
test_k.rename(columns={"text":"text", "k":"label"}, inplace=True)
train_k = Dataset.from_pandas(train_k)
val_k = Dataset.from_pandas(val_k)
test_k = Dataset.from_pandas(test_k)
ds = DatasetDict()
ds['train'] = train_k
ds['val'] = val_k
ds['test'] = test_k
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(str(examples['text']), padding="max_length", truncation=True)
tokenized_datasets = ds.map(tokenize_function)
tokenized_train_k = tokenized_datasets["train"]
tokenized_val_k = tokenized_datasets["val"]
model_k = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)
training_args = TrainingArguments(output_dir="trained_k_predictors", evaluation_strategy="steps", eval_steps=100, metric_for_best_model = 'f1', learning_rate=1e-3, num_train_epochs=5, weight_decay=0.01, load_best_model_at_end=True, per_device_train_batch_size = 16, per_device_eval_batch_size = 32, save_total_limit = 3, optim="adafactor", label_names=['label'], remove_unused_columns=False,)
metric = evaluate.load("f1")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return {'f1': metric.compute(predictions=predictions, references=labels)}
trainer = Trainer(model=model_k, args=training_args, train_dataset=tokenized_train_k, eval_dataset=tokenized_val_k, compute_metrics=compute_metrics, callbacks = [EarlyStoppingCallback(early_stopping_patience=3)])
trainer.train()
```
## Error message:
```
{'eval_runtime': 21.6631, 'eval_samples_per_second': 208.926, 'eval_steps_per_second': 3.277, 'epoch': 0.47}
early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled
Traceback (most recent call last):
  File "/scratch/manish/apl/apl_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/scratch/manish/apl/apl_env/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/scratch/manish/apl/src/apl.py", line 386, in <module>
    main()
  File "/scratch/manish/apl/src/apl.py", line 139, in main
    trainer.train()
  File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
    return inner_training_loop(
  File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 1922, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 2282, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/scratch/manish/apl/apl_env/lib/python3.8/site-packages/transformers/trainer.py", line 2407, in _save_checkpoint
    metric_value = metrics[metric_to_check]
KeyError: 'eval_f1'
```
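Per the resolution reached in the comments above, the evaluation metrics (including `eval_f1` and even `eval_loss`) are silently skipped when the configured `label_names` never match a key in the collated batches; a sketch of the arguments that need to agree, showing only the relevant ones:
```python
# the default data collator emits a "labels" key, so label_names must name that
# same key (or simply be left unset); otherwise compute_metrics is never called
training_args = TrainingArguments(
    output_dir="trained_k_predictors",
    evaluation_strategy="steps",
    metric_for_best_model="f1",
    load_best_model_at_end=True,
    label_names=["labels"],
)
```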
### Expected behavior
Train the model with early stopping enabled. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28530/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28529/comments | https://api.github.com/repos/huggingface/transformers/issues/28529/events | https://github.com/huggingface/transformers/issues/28529 | 2,083,589,686 | I_kwDOCUB6oc58MQ42 | 28,529 | Error while fetching adapter layer from huggingface library | {
"login": "Muskanb",
"id": 35324348,
"node_id": "MDQ6VXNlcjM1MzI0MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/35324348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muskanb",
"html_url": "https://github.com/Muskanb",
"followers_url": "https://api.github.com/users/Muskanb/followers",
"following_url": "https://api.github.com/users/Muskanb/following{/other_user}",
"gists_url": "https://api.github.com/users/Muskanb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muskanb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muskanb/subscriptions",
"organizations_url": "https://api.github.com/users/Muskanb/orgs",
"repos_url": "https://api.github.com/users/Muskanb/repos",
"events_url": "https://api.github.com/users/Muskanb/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muskanb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Muskanb, thanks for raising an issue! \r\n\r\nIs the error occurring when calling `LlamaForCausalLM.from_pretrained` or `pa_extractor.load_adapter`? ",
"Hey @amyeroberts, Thanks for responding. It occurs while calling ` pa_extractor.load_adapter`",
"hi @Muskanb what are your transformers & peft versions? ",
"These are the versions : \r\n ```\r\npeft==0.6.2\r\ntransformers==4.35.2\r\n```\r\n",
"`token` argument should be supported in the latest transformers version: https://github.com/huggingface/transformers/blob/002566f398cd6dbf7053a89c26646ac45be540f4/src/transformers/integrations/peft.py#L74 , can you try to update peft & transformers? `pip install -U transformers peft`",
"@younesbelkada It worked. Appreciate the prompt help!! :)",
"Awesome thanks @Muskanb !",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | 1,708 | NONE | null | ### System Info
```python
from transformers import LlamaForCausalLM

# LLAMA_MODEL_NAME, HF_ACCESS_TOKEN, LLAMA2_MAX_LENGTH, PEFT_MODEL_NAME,
# bnb_config and cls.tokenizer are defined elsewhere in my code
pa_extractor = LlamaForCausalLM.from_pretrained(
    LLAMA_MODEL_NAME,
    token=HF_ACCESS_TOKEN,
    max_length=LLAMA2_MAX_LENGTH,
    pad_token_id=cls.tokenizer.eos_token_id,
    device_map="auto",
    quantization_config=bnb_config,
)
pa_extractor.load_adapter(PEFT_MODEL_NAME, token=HF_ACCESS_TOKEN, device_map="auto")
```
Getting the below error while executing the snippet above:
401 client error, Repository Not Found for url: https://huggingface.co/muskan/llama2/resolve/main/adapter_model.safetensors. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.
Fetching the base model works fine, but it fails at the `load_adapter` step.
### Who can help?
@Narsil @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The model is in my private repo; this should be reproducible if you try to use `load_adapter` to fetch any adapter from a private Hugging Face repo directly.
### Expected behavior
Should be able to download the PEFT adapter layer successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28528/comments | https://api.github.com/repos/huggingface/transformers/issues/28528/events | https://github.com/huggingface/transformers/issues/28528 | 2,083,582,439 | I_kwDOCUB6oc58MPHn | 28,528 | The generation speed on NPU is too slow | {
"login": "hhllxx1121",
"id": 96508996,
"node_id": "U_kgDOBcCcRA",
"avatar_url": "https://avatars.githubusercontent.com/u/96508996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhllxx1121",
"html_url": "https://github.com/hhllxx1121",
"followers_url": "https://api.github.com/users/hhllxx1121/followers",
"following_url": "https://api.github.com/users/hhllxx1121/following{/other_user}",
"gists_url": "https://api.github.com/users/hhllxx1121/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhllxx1121/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhllxx1121/subscriptions",
"organizations_url": "https://api.github.com/users/hhllxx1121/orgs",
"repos_url": "https://api.github.com/users/hhllxx1121/repos",
"events_url": "https://api.github.com/users/hhllxx1121/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhllxx1121/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @gante ",
"Hi @hhllxx1121 👋 \r\n\r\nSee this issue and its comments: https://github.com/huggingface/transformers/issues/28075",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | The generation speed on an NPU device is too slow: the first conversation takes about 5 minutes, and later ones may be faster. Is there anything wrong with my setup? Below is my code demo:
```python
import torch
import torch_npu
from transformers import LlamaForCausalLM, LlamaTokenizer, TextStreamer
tokenizer = LlamaTokenizer.from_pretrained(
"",
device_map="npu:2"
)
llama_model = LlamaForCausalLM.from_pretrained(
"",
device_map="npu:2"
)
streamer = TextStreamer(tokenizer)
while True:
ins = input("user: ")
res = tokenizer.encode(ins, return_tensors="pt").to("npu:2")
outputs = llama_model.generate(
inputs=res,
streamer=streamer,
max_new_tokens=100,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28528/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28527/comments | https://api.github.com/repos/huggingface/transformers/issues/28527/events | https://github.com/huggingface/transformers/pull/28527 | 2,083,559,986 | PR_kwDOCUB6oc5kLAk3 | 28,527 | [`TokenizationRoformerFast`] Fix the save and loading | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fixes #28164, the pre tokenizer state was not correctly set after saving a fast tokenizer only. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28527/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28527",
"html_url": "https://github.com/huggingface/transformers/pull/28527",
"diff_url": "https://github.com/huggingface/transformers/pull/28527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28527.patch",
"merged_at": 1705419436000
} |
https://api.github.com/repos/huggingface/transformers/issues/28526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28526/comments | https://api.github.com/repos/huggingface/transformers/issues/28526/events | https://github.com/huggingface/transformers/pull/28526 | 2,083,550,366 | PR_kwDOCUB6oc5kK-cX | 28,526 | Fix labels encoding in RobertaForSequenceClassification when problem_type="multi_label_classification" | {
"login": "DamienAllonsius",
"id": 11852475,
"node_id": "MDQ6VXNlcjExODUyNDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/11852475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DamienAllonsius",
"html_url": "https://github.com/DamienAllonsius",
"followers_url": "https://api.github.com/users/DamienAllonsius/followers",
"following_url": "https://api.github.com/users/DamienAllonsius/following{/other_user}",
"gists_url": "https://api.github.com/users/DamienAllonsius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DamienAllonsius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DamienAllonsius/subscriptions",
"organizations_url": "https://api.github.com/users/DamienAllonsius/orgs",
"repos_url": "https://api.github.com/users/DamienAllonsius/repos",
"events_url": "https://api.github.com/users/DamienAllonsius/events{/privacy}",
"received_events_url": "https://api.github.com/users/DamienAllonsius/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amyeroberts thanks a lot for your answer, I'll close this PR then. \r\n\r\nFor the record, I have actually tried to pass one hot labels: \r\n```\r\ndef one_hot_label(batch):\r\n batch[\"label\"] = [torch.functional.F.one_hot(torch.tensor(label), num_classes=18).double() for label in batch[\"label\"]]\r\n return batch\r\nds = ds.map(one_hot_label, batched=True)\r\n```\r\nBut the default collate function complained (data_collator.py line 117) because `first[\"label\"]` is now a vector\r\n```\r\nlabel = first[\"label\"].item() if isinstance(first[\"label\"], torch.Tensor) else first[\"label\"]\r\n```\r\nSo I had to create a new custom collate function and I run into other problems... I hoped to make it less painful with this update.\r\n\r\n(If you have any other idea to use one hot labels in a more straightforward way, that would be great :))"
] | 1,705 | 1,705 | 1,705 | NONE | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Here is a simple script that illustrates the problem
```python
import this
from transformers import (AutoConfig,
RobertaForSequenceClassification,
RobertaTokenizerFast, Trainer, TrainingArguments)
from itertools import cycle
from datasets import Dataset
def main():
# dataset
print("dataset")
text = this.s.split("\n")
num_labels = 4
labels = [int(cycle("1234").__next__()) for _ in range(len(text))]
ds = Dataset.from_dict({"text": text, "label": labels})
ds = ds.train_test_split(test_size=0.3)
output_folder_path = "/tmp/roberta"
# model and parameters
print("model and parameters")
model_id = "distilroberta-base"
config = AutoConfig.from_pretrained(model_id)
config.problem_type = "multi_label_classification"
config.num_labels = num_labels
model = RobertaForSequenceClassification.from_pretrained(
model_id, config=config
)
args = {
"batch_size": 100,
"tokenizer_max_length": 512,
"training_args": {
"num_train_epochs": 2,
"learning_rate": 1e-5,
"warmup_steps": 500,
"report_to": "none",
},
}
# tokenizer
print("tokenizer")
tokenizer = RobertaTokenizerFast.from_pretrained(model_id)
def tokenize(batch):
return tokenizer(batch["text"], padding=True, truncation=True, max_length=args["tokenizer_max_length"])
ds = ds.map(tokenize, batched=True, batch_size=args["batch_size"])
ds.set_format("torch", columns=["input_ids", "attention_mask", "label"])
# Training
print("training")
training_args = TrainingArguments(
output_dir=output_folder_path,
**args["training_args"]
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=ds["train"],
eval_dataset=ds["test"],
)
trainer.train(resume_from_checkpoint=False)
if __name__ == "__main__":
main()
```
Output error is
```
ValueError: Target size (torch.Size([2])) must be the same as input size (torch.Size([2, 4]))
```
Because transformers/models/roberta/modeling_roberta.py L1236 expects labels to be one-hot encoded.
The code in this PR solves this issue.
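For context, a minimal sketch (not part of this PR) of preparing one-hot labels on the dataset side, assuming integer class ids in `[0, num_labels)`; with `problem_type="multi_label_classification"`, `BCEWithLogitsLoss` expects float targets with the same shape as the logits:
```python
import torch

def to_one_hot(batch, num_labels=4):
    # Convert integer class ids into float one-hot vectors of length num_labels,
    # the target shape BCEWithLogitsLoss expects for multi-label classification.
    batch["label"] = [
        torch.nn.functional.one_hot(torch.tensor(label), num_classes=num_labels).float()
        for label in batch["label"]
    ]
    return batch

# ds = ds.map(to_one_hot, batched=True)
```
As noted in the comments, the default data collator still assumes scalar labels, so a custom collator may also be needed with this approach.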
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28526/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28526",
"html_url": "https://github.com/huggingface/transformers/pull/28526",
"diff_url": "https://github.com/huggingface/transformers/pull/28526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28526.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28525/comments | https://api.github.com/repos/huggingface/transformers/issues/28525/events | https://github.com/huggingface/transformers/issues/28525 | 2,083,493,702 | I_kwDOCUB6oc58L5dG | 28,525 | [Whisper] TFWhisperFromPretrained : Can we run transcription by using the call method instead of generate from the transformers class | {
"login": "monowaranjum",
"id": 19803082,
"node_id": "MDQ6VXNlcjE5ODAzMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/19803082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monowaranjum",
"html_url": "https://github.com/monowaranjum",
"followers_url": "https://api.github.com/users/monowaranjum/followers",
"following_url": "https://api.github.com/users/monowaranjum/following{/other_user}",
"gists_url": "https://api.github.com/users/monowaranjum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monowaranjum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monowaranjum/subscriptions",
"organizations_url": "https://api.github.com/users/monowaranjum/orgs",
"repos_url": "https://api.github.com/users/monowaranjum/repos",
"events_url": "https://api.github.com/users/monowaranjum/events{/privacy}",
"received_events_url": "https://api.github.com/users/monowaranjum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @monowaranjum, thanks for raising an issue! \r\n\r\nThe supported way to save and load models is using the `model.save_pretrained(checkpoint)` and `ModelClass.from_pretrained(checkpoint)` methods. Is there a reason you're using the keras methods instead? ",
"Hi @monowaranjum, just to expand on @amyeroberts's comment - all `transformers` TF models are Keras models, but not all Keras models are `transformers` models. When you use Keras save methods, what is saved is a pure Keras model, not a `transformers` model. This is okay when you just want to export a model for inference, but you will lose access to methods like `generate()`, which are part of `transformers`, not Keras.\r\n\r\nIf you want to save and load a `transformers` model, you should use `save_pretrained` and `TFWhisperForConditionalGeneration.from_pretrained()`. We recommend saving pure TF/Keras models only when you want to export the model for inference with e.g. TFLite or something like that.",
"HI, Thank you so much for the responses. It clears up a few confusions I had. Before I proceed any farther: I would like to give some context as to why I am trying to do so. \r\n\r\nI am trying to compile the whisper model for a specific GPU. For the compilation process to complete, I am using tensorflow and OpenXLA (IREE) compilation toolchain. In order to lower the model in MLIR and then lowering it all the way to LLVM bitcode that can be processed for the specific GPU backend, I need the input model to be in tensorflow saved_model format. So, the process look like this: \r\n\r\nTF Saved model -> Import into MLIR -> Convert to LLVM -> GPU specific Kernel Generation -> GPU executable for inference. \r\n\r\nNow in order to do that, I tried to save the model using keras's save function. \r\n\r\nQ1: I see when I use keras's save function and load it there are still ```call``` methods in it. Does that perform inference on the model?\r\nQ2: If Q1 is answered no, is there a way I can run inference on the keras saved model ? Which function to call for that purpose and where can I find a documentation/worked out example for that. \r\nQ3: If Q1 is yes, can you please point me to a direction where I can find a worked out example/ documentation on how to use the call function? I am specifically looking for how to set the following parameters: ```decoder_attention_mask, decoder_input_ids```. I tried calling the call function, but could not understand the following error message. \r\n\r\n\r\n```\r\nPositional arguments (16 total):\r\n * <tf.Tensor 'input_features:0' shape=(1, 80, 3000) dtype=float64>\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * False\r\n Keyword arguments: {}\r\n\r\n Expected these arguments to match one of the following 2 option(s):\r\n\r\nOption 1:\r\n Positional arguments (16 total):\r\n * {'decoder_attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='decoder_attention_mask'),\r\n 'decoder_input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='decoder_input_ids'),\r\n 'input_features': TensorSpec(shape=(None, 80, None), dtype=tf.float32, name='input_features_input_features')}\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * True\r\n Keyword arguments: {}\r\n\r\nOption 2:\r\n Positional arguments (16 total):\r\n * {'decoder_attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='decoder_attention_mask'),\r\n 'decoder_input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='decoder_input_ids'),\r\n 'input_features': TensorSpec(shape=(None, 80, None), dtype=tf.float32, name='input_features_input_features')}\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * None\r\n * False\r\n Keyword arguments: {}\r\n```\r\n\r\nThank you so much again for the wonderful work you are doing for the community. "
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @gante @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a short script to reproduce the issue.
```
import tensorflow as tf
import numpy as np
from transformers import AutoProcessor, TFWhisperForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
import librosa
processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
base_asr_model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Read some inputs
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="tf")
input_features = inputs.input_features
# Generate some predictions
generated_ids = base_asr_model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
# Save the model
base_asr_model.save('./whisper-saved-unsigned')
# Load the model from saved state
loaded_base_asr_model = tf.keras.models.load_model('./whisper-saved-unsigned')
# Try running inference on the loaded model
new_generated_ids = loaded_base_asr_model.generate(input_features = input_features) # <-- This won't work
transcription = processor.batch_decode(new_generated_ids, skip_special_tokens=True)[0]
print(transcription)
```
The script fails for the second call of ```generate()``` function with the following error:
```
Traceback (most recent call last):
File "/home/rashik/Documents/reproduction/reproduction.py", line 31, in <module>
new_generated_ids = loaded_base_asr_model.generate(input_features = input_features) # <-- This won't work
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'TFWhisperForConditionalGeneration' object has no attribute 'generate'
```
### Expected behavior
I expected the loaded model to behave exactly like the model originally behaved. I listed all the attributes of the loaded model using ```dir(loaded_base_asr_model)```. Here is a screenshot of the output:
![image](https://github.com/huggingface/transformers/assets/19803082/a16d49a4-1c23-4e07-8cde-1a0fa9a15224)
On the other hand, I did the same for the original model. Here is the screenshot of that output:
![image](https://github.com/huggingface/transformers/assets/19803082/707710d3-fdbb-461f-9d63-044b0afa95dc)
Clearly, I am missing something about how the model is saved and how it is loaded later.
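For reference, per the maintainers' replies in this thread, a minimal sketch of the supported round trip that keeps `generate()` available (continuing from the reproduction script above; the path is a placeholder):
```python
from transformers import TFWhisperForConditionalGeneration

# Save with the transformers API instead of tf.keras saving
base_asr_model.save_pretrained("./whisper-saved-pretrained")

# Reload as a transformers model; this restores generate() and the other generation utilities
reloaded_model = TFWhisperForConditionalGeneration.from_pretrained("./whisper-saved-pretrained")
new_generated_ids = reloaded_model.generate(input_features=input_features)
transcription = processor.batch_decode(new_generated_ids, skip_special_tokens=True)[0]
```
Keras-format saving is still useful for export (e.g. TFLite), but the reloaded object is then a plain Keras model without the `transformers` generation methods.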
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28524/comments | https://api.github.com/repos/huggingface/transformers/issues/28524/events | https://github.com/huggingface/transformers/issues/28524 | 2,083,465,489 | I_kwDOCUB6oc58LykR | 28,524 | Exception in inference when using the pipeline with output_scores=True to get logits | {
"login": "andersonm-ibm",
"id": 63074550,
"node_id": "MDQ6VXNlcjYzMDc0NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/63074550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonm-ibm",
"html_url": "https://github.com/andersonm-ibm",
"followers_url": "https://api.github.com/users/andersonm-ibm/followers",
"following_url": "https://api.github.com/users/andersonm-ibm/following{/other_user}",
"gists_url": "https://api.github.com/users/andersonm-ibm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersonm-ibm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersonm-ibm/subscriptions",
"organizations_url": "https://api.github.com/users/andersonm-ibm/orgs",
"repos_url": "https://api.github.com/users/andersonm-ibm/repos",
"events_url": "https://api.github.com/users/andersonm-ibm/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersonm-ibm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amyeroberts For reference.\r\n\r\nThis is totally OK, pipelines are not supposed to support every option models do. Use the example code if you want more control over the generation.\r\n\r\nClosing for now since this is normal behavior."
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.29.1
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Easily reproduced by [using the pipeline with output_scores=True](https://gist.github.com/andersonm-ibm/d8baeea66afca89cefebc108f1ce08f3)
Results in:
```
test_transformers_bug.py:27: in <module>
for out in pipe(KeyDataset(dataset, "text")):
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:124: in __next__
item = next(self.iterator)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:125: in __next__
processed = self.infer(item, **self.params)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/base.py:1025: in forward
model_outputs = self._forward(model_inputs, **forward_params)
/home/mayaa/miniconda3/envs/fme/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:264: in _forward
out_b = generated_sequence.shape[0]
E AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'shape'
```
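As a workaround (a sketch of the approach in the linked gist rather than a pipeline fix), the scores can be obtained by calling `generate` on the model directly; the model name and prompt below are placeholders:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("translate English to German: Hello", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    output_scores=True,
    return_dict_in_generate=True,  # return a generation output object, not a plain tensor
)
text = tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)
scores = outputs.scores  # tuple of per-step logits
```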
### Expected behavior
The pipeline should handle the case where the model output is a `GreedySearchEncoderDecoderOutput` and not a simple tensor, without raising an exception, like in [the example](https://gist.github.com/andersonm-ibm/766d4892c92310a7889b2b3dfdc8ff44#file-model_generate_with_logits_output-py). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28523/comments | https://api.github.com/repos/huggingface/transformers/issues/28523/events | https://github.com/huggingface/transformers/issues/28523 | 2,083,462,217 | I_kwDOCUB6oc58LxxJ | 28,523 | Huggingface Agents Error 422: {'error': 'Input validation error: `max_new_tokens` must be <= 192 | {
"login": "dashapetr",
"id": 54349415,
"node_id": "MDQ6VXNlcjU0MzQ5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/54349415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dashapetr",
"html_url": "https://github.com/dashapetr",
"followers_url": "https://api.github.com/users/dashapetr/followers",
"following_url": "https://api.github.com/users/dashapetr/following{/other_user}",
"gists_url": "https://api.github.com/users/dashapetr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dashapetr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dashapetr/subscriptions",
"organizations_url": "https://api.github.com/users/dashapetr/orgs",
"repos_url": "https://api.github.com/users/dashapetr/repos",
"events_url": "https://api.github.com/users/dashapetr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dashapetr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey! Thanks, you are probably right. Would you like to open a PR to change this and make it more friendly? "
] | 1,705 | 1,708 | null | NONE | null | ### System Info
Transformers v4.29.0, v4.36.2
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run in Google Colab:
```
#@title Setup
transformers_version = "v4.36.2" # you can use "v4.29.0", the issue and output are the same
print(f"Setting up everything with transformers version {transformers_version}")
!pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/transformers@$transformers_version -q diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai
from huggingface_hub import notebook_login
notebook_login()
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", token='hf_my_token') # token is passed directly here to avoid the issue https://github.com/huggingface/transformers/issues/28217
agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!")
```
### Expected behavior
It should generate results, but instead, I am getting an error:
`ValueError: Error 422: {'error': 'Input validation error: `max_new_tokens` must be <= 192. Given: 200', 'error_type': 'validation'}`
![image](https://github.com/huggingface/transformers/assets/54349415/a6f7b720-befe-41b9-8973-d40d717a5b15)
To my mind, it seems like it could be related to a strict limitation on max_new_tokens [here](https://github.com/huggingface/transformers/blob/a7cab3c283312b8d4de5df3bbe719971e24f4281/src/transformers/tools/agents.py#L640C40-L640C40) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28523/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28522/comments | https://api.github.com/repos/huggingface/transformers/issues/28522/events | https://github.com/huggingface/transformers/pull/28522 | 2,083,404,188 | PR_kwDOCUB6oc5kKdzb | 28,522 | [`SpeechT5Tokenization`] Add copied from and fix the `convert_tokens_to_string` to match the fast decoding scheme | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28522). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fixes #26547 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28522/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28522",
"html_url": "https://github.com/huggingface/transformers/pull/28522",
"diff_url": "https://github.com/huggingface/transformers/pull/28522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28522.patch",
"merged_at": 1705420202000
} |
https://api.github.com/repos/huggingface/transformers/issues/28521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28521/comments | https://api.github.com/repos/huggingface/transformers/issues/28521/events | https://github.com/huggingface/transformers/pull/28521 | 2,083,219,859 | PR_kwDOCUB6oc5kJ0te | 28,521 | Add is_model_supported for fx | {
"login": "inisis",
"id": 46103969,
"node_id": "MDQ6VXNlcjQ2MTAzOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/46103969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inisis",
"html_url": "https://github.com/inisis",
"followers_url": "https://api.github.com/users/inisis/followers",
"following_url": "https://api.github.com/users/inisis/following{/other_user}",
"gists_url": "https://api.github.com/users/inisis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inisis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inisis/subscriptions",
"organizations_url": "https://api.github.com/users/inisis/orgs",
"repos_url": "https://api.github.com/users/inisis/repos",
"events_url": "https://api.github.com/users/inisis/events{/privacy}",
"received_events_url": "https://api.github.com/users/inisis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hi @inisis, thanks for opening this PR!\r\n> \r\n> A word of caution - functions like these (not importable from the top-level of transformers, not part of the public documentation) are subject to change and not guaranteed to never be renamed, have modified behaviour or deleted. In fact - this PR does just that!\r\n> \r\n> I don't think we should make this change. Semantically the function is performing a check. The equivalent, modified function would be `is_model_supported`, and users can implement that themselves easily.\r\n> \r\n> Moreover, we don't know who might already be using this function, and relying on an exception being raised.\r\n> \r\n> I do however agree it's better to have a single function return the bool and then the user can decide what to do with the output (raise etc.). If you want, we can add `is_model_supported` and have `check_if_model_is_supported` use that.\r\n\r\nHi @amyeroberts, thanks for your comment, I do agree that adding `is_model_supported` is far better than changing existing one, and I do feel this pr is needed, because I'm writing an intergrated tracer, it will choose different tracer based on the input model, this pr would be very helpful and elegant."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
`symbolic_trace` within transformers is only applicable to `PreTrainedModel`. By calling `check_if_model_is_supported` we can check whether the model to be traced is supported; however, this function raises if it is not. I think we can return True/False instead, so others can call it from outside transformers.
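For callers outside `transformers`, a boolean check can already be sketched as a thin wrapper around the existing function (treat the import path as an assumption); the PR itself instead adds `is_model_supported` inside `transformers.utils.fx` and lets `check_if_model_is_supported` build on it, as suggested in the review:
```python
from transformers.utils.fx import check_if_model_is_supported  # assumed import path


def is_model_supported(model) -> bool:
    # Boolean variant: True if the model can be symbolically traced, False otherwise,
    # so an integrated tracer can fall back to another tracer instead of handling an exception.
    try:
        check_if_model_is_supported(model)
        return True
    except Exception:
        return False
```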
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28521",
"html_url": "https://github.com/huggingface/transformers/pull/28521",
"diff_url": "https://github.com/huggingface/transformers/pull/28521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28521.patch",
"merged_at": 1705427564000
} |
https://api.github.com/repos/huggingface/transformers/issues/28520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28520/comments | https://api.github.com/repos/huggingface/transformers/issues/28520/events | https://github.com/huggingface/transformers/pull/28520 | 2,083,197,751 | PR_kwDOCUB6oc5kJvxu | 28,520 | [ `TokenizationUtils`] Fix `add_special_tokens` when the token is already there | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fixes #27888: the method was missing a check, so it was overwriting the special token list when the token was already present. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28520/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28520",
"html_url": "https://github.com/huggingface/transformers/pull/28520",
"diff_url": "https://github.com/huggingface/transformers/pull/28520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28520.patch",
"merged_at": 1705419389000
} |
https://api.github.com/repos/huggingface/transformers/issues/28519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28519/comments | https://api.github.com/repos/huggingface/transformers/issues/28519/events | https://github.com/huggingface/transformers/issues/28519 | 2,083,084,349 | I_kwDOCUB6oc58KVg9 | 28,519 | AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer' | {
"login": "pydaxing",
"id": 129026999,
"node_id": "U_kgDOB7DLtw",
"avatar_url": "https://avatars.githubusercontent.com/u/129026999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pydaxing",
"html_url": "https://github.com/pydaxing",
"followers_url": "https://api.github.com/users/pydaxing/followers",
"following_url": "https://api.github.com/users/pydaxing/following{/other_user}",
"gists_url": "https://api.github.com/users/pydaxing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pydaxing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pydaxing/subscriptions",
"organizations_url": "https://api.github.com/users/pydaxing/orgs",
"repos_url": "https://api.github.com/users/pydaxing/repos",
"events_url": "https://api.github.com/users/pydaxing/events{/privacy}",
"received_events_url": "https://api.github.com/users/pydaxing/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @pydaxing, thanks for raising an issue! \r\n\r\nSo that we can best help you, could you make sure to follow the issue template and provide: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code reproducer\r\n* The full error traceback",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
### Who can help?
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer'
Transformers==4.34.0
### Expected behavior
AttributeError: module 'transformers_modules.Qwen-72B-Chat.tokenization_qwen' has no attribute 'QWenTokenizer' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28519/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28518/comments | https://api.github.com/repos/huggingface/transformers/issues/28518/events | https://github.com/huggingface/transformers/issues/28518 | 2,082,976,347 | I_kwDOCUB6oc58J7Jb | 28,518 | KOSMOS-2 Entities giving null | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @andysingal. To use Kosmos-2 for image grounding, you have to add a special `<grounding>` token before the prompt, as they do in the [paper](https://arxiv.org/abs/2306.14824). Also you can use `<phrase>` token to get bboxes of specific phrases in the format `prev_prompt_text <phrase>the_object</phrase>`\r\n\r\n```from transformers import AutoProcessor, AutoModelForVision2Seq\r\nimport requests\r\nfrom PIL import Image\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/kosmos-2-patch14-224\")\r\nmodel = AutoModelForVision2Seq.from_pretrained(\"microsoft/kosmos-2-patch14-224\", device_map=\"cpu\")\r\n\r\nprompt_grounded = \"<grounding>An image of\"\r\nprompt_refer = \"An image of <phrase>a snowman</phrase>\"\r\n\r\nurl = \"https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nimage\r\n\r\ninputs = processor(text=prompt_refer, images=image, return_tensors=\"pt\").to(model.device)\r\n\r\n# autoregressively generate completion\r\ngenerated_ids = model.generate(**inputs, max_new_tokens=100)\r\ninput_len = inputs['input_ids'].shape[-1]\r\n\r\n# convert generated token IDs back to strings\r\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n\r\n# By default, the generated text is cleaned up and the entities are extracted.\r\nprocessed_text, entities = processor.post_process_generation(generated_text)\r\n\r\nprint(processed_text)\r\nprint(entities)\r\n>>> An image of a snowman warming himself by a campfire\r\n>>> [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (41, 51), [(0.109375, 0.640625, 0.546875, 0.984375)])]\r\n```",
"Thank you very much\r\n\r\nOn Wed, Jan 17, 2024 at 3:08 PM Raushan Turganbay ***@***.***>\r\nwrote:\r\n\r\n> Hi @andysingal <https://github.com/andysingal>. To use Kosmos-2 for image\r\n> grounding, you have to add a special <grounding> token before the prompt,\r\n> as they do in the paper <https://arxiv.org/abs/2306.14824>. Also you can\r\n> use <phrase> token to get bboxes of specific phrases in the format prev_prompt_text\r\n> <phrase>the_object</phrase>\r\n>\r\n> import requests\r\n> from PIL import Image\r\n>\r\n> processor = AutoProcessor.from_pretrained(\"microsoft/kosmos-2-patch14-224\")\r\n> model = AutoModelForVision2Seq.from_pretrained(\"microsoft/kosmos-2-patch14-224\", device_map=\"cpu\")\r\n>\r\n> prompt_grounded = \"<grounding>An image of\"\r\n> prompt_refer = \"An image of <phrase>a snowman</phrase>\"\r\n>\r\n> url = \"https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png\"\r\n> image = Image.open(requests.get(url, stream=True).raw)\r\n> image\r\n>\r\n> inputs = processor(text=prompt_refer, images=image, return_tensors=\"pt\").to(model.device)\r\n>\r\n> # autoregressively generate completion\r\n> generated_ids = model.generate(**inputs, max_new_tokens=100)\r\n> input_len = inputs['input_ids'].shape[-1]\r\n>\r\n> # convert generated token IDs back to strings\r\n> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n>\r\n> # By default, the generated text is cleaned up and the entities are extracted.\r\n> processed_text, entities = processor.post_process_generation(generated_text)\r\n>\r\n> print(processed_text)\r\n> print(entities)\r\n> >>> An image of a snowman warming himself by a campfire\r\n> >>> [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (41, 51), [(0.109375, 0.640625, 0.546875, 0.984375)])]\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/28518#issuecomment-1895434657>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNNJZCFNQ6UKHCMAE3DYO6LYHAVCNFSM6AAAAABB4ENBYKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOJVGQZTINRVG4>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
google colab, T4
```
!pip install -q git+https://github.com/huggingface/transformers.git accelerate bitsandbytes
```
### Who can help?
@amy
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoProcessor, AutoModelForVision2Seq
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224", load_in_4bit=True, device_map={"":0})
import requests
from PIL import Image
prompt = "An image of"
url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png"
image = Image.open(requests.get(url, stream=True).raw)
image
inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda:0")
# autoregressively generate completion
generated_ids = model.generate(**inputs, max_new_tokens=128)
# convert generated token IDs back to strings
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
# By default, the generated text is cleaned up and the entities are extracted.
processed_text, entities = processor.post_process_generation(generated_text)
print(processed_text)
print(entities)
```
gives
```
An image of a snowman warming up by a fire.
[]
```
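As pointed out in the comments, the entity list stays empty unless the prompt contains Kosmos-2's grounding tokens; a minimal sketch of the change, continuing the script above, is:
```python
# Prepend the special <grounding> token so the model emits location tokens,
# which post_process_generation turns into (phrase, span, bounding boxes) entities.
prompt = "<grounding>An image of"

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda:0")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
processed_text, entities = processor.post_process_generation(generated_text)
```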
### Expected behavior
needs to give entities https://github.com/NielsRogge/Transformers-Tutorials/blob/master/KOSMOS-2/Inference_with_KOSMOS_2_for_multimodal_grounding.ipynb | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28518/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28517/comments | https://api.github.com/repos/huggingface/transformers/issues/28517/events | https://github.com/huggingface/transformers/pull/28517 | 2,082,922,067 | PR_kwDOCUB6oc5kI1nz | 28,517 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28517). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for the update. Can you add a good test to make sure this behaves as expected? 🤗\r\n\r\nSure, I will add a test to make sure that the implementation is correct",
"> Thanks for the update. Can you add a good test to make sure this behaves as expected? 🤗\r\n\r\nHi @ArthurZucker, I have just added the tests for load balancing loss ",
"@ArthurZucker I don't know why The test is failed on github but was passed in my local test.\r\nIn my local test, I only changed the test_mixtral.py so I ran:\r\npytest --picked\r\nAnd test was passed, but was failed on github.\r\nIt is as if the modeling_mixtral.py was not updated on github test ",
"> I don't know why The test is failed on github but was passed in my local test. \r\n\r\nFrom details of [check_code_quality](https://circleci.com/gh/huggingface/transformers/1066145):\r\ntests/models/mixtral/test_modeling_mixtral.py:19:8: F401 [*] `random` imported but unused\r\n\r\nUnused import of the random module in the test file.",
"> > I don't know why The test is failed on github but was passed in my local test.\r\n> \r\n> From details of [check_code_quality](https://circleci.com/gh/huggingface/transformers/1066145): tests/models/mixtral/test_modeling_mixtral.py:19:8: F401 [*] `random` imported but unused\r\n> \r\n> Unused import of the random module in the test file.\r\n\r\nI mean this assert: https://app.circleci.com/pipelines/github/huggingface/transformers/82724/workflows/61975ffc-0bdb-4aa7-935e-70f69f11e748/jobs/1066148/parallel-runs/0/steps/0-117",
"Some thoughts:\r\n- Regarding `∣actual−expected∣>atol+rtol⋅∣expected∣`.. You aren't getting the fabs() of result.aux_loss before multiplying. \r\n- There's also an [assertNotAlmostEqual](https://github.com/huggingface/transformers/blob/3f69f415adcbdaedec154ba8eac220ef3276975d/tests/models/encoder_decoder/test_modeling_encoder_decoder.py#L817) function for comparing if they're not close.\r\n- Specifying the device to use might give different results on different systems\r\n\r\nThank you for doing this btw!\r\n\r\nEdit for clarity",
"> Some thoughts:\r\n> \r\n> * Regarding `∣actual−expected∣>atol+rtol⋅∣expected∣`.. You aren't getting the fabs() of result.aux_loss before multiplying.\r\n> \r\n> * There's also an [assertNotAlmostEqual](https://github.com/huggingface/transformers/blob/3f69f415adcbdaedec154ba8eac220ef3276975d/tests/models/encoder_decoder/test_modeling_encoder_decoder.py#L817) function for comparing if they're not close.\r\n> \r\n> * Specifying the device to use might give different results on different systems\r\n> \r\n> \r\n> Thank you for doing this btw!\r\n> \r\n> Edit for clarity\r\n\r\nHi @congruency Thank you for your comments\r\nActually we no need to get the fabs() of result.aux_loss, because it is already positive, and actually the value is around 2. You can see the value from previous assert: \r\nhttps://github.com/khaimt/transformers/blob/exclude_padding_tokens_in_aux_loss_Mixtral_Moe/tests/models/mixtral/test_modeling_mixtral.py#L478 \r\nOh, thankyou for reminding me of: assertNotAlmostEqual, I will fix my code with this suggestion",
"@ArthurZucker The tests have been fully added and passed, can you review ?",
"Am a business ",
"UC eye care center ",
"Sorry for the delay will review! ",
"@ArthurZucker I have just fixed my PR:\r\n+ move attention_mask to the last position\r\n+ pad_leng --> pad_length\r\n+ reduce the tol for assert_close"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
This PR implements excluding the load balancing loss of padding tokens in Mixtral-8x7B
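At a high level, the change weights the router statistics by the attention mask so that padded positions do not contribute to the auxiliary loss. A simplified sketch of the idea (illustrative shapes and names, not the exact code in this PR):
```python
import torch

def load_balancing_loss_sketch(router_probs, expert_mask, attention_mask, num_experts):
    # router_probs:   (num_tokens, num_experts) softmax over experts for each token
    # expert_mask:    (num_tokens, num_experts) one-hot mask of the selected expert(s)
    # attention_mask: (num_tokens,) 1 for real tokens, 0 for padding
    pad_mask = attention_mask.unsqueeze(-1).float()
    num_real_tokens = pad_mask.sum()
    # fraction of non-padded tokens routed to each expert
    tokens_per_expert = (expert_mask.float() * pad_mask).sum(dim=0) / num_real_tokens
    # mean router probability per expert over non-padded tokens
    router_prob_per_expert = (router_probs * pad_mask).sum(dim=0) / num_real_tokens
    return num_experts * torch.sum(tokens_per_expert * router_prob_per_expert)
```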
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28505
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28517/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28517",
"html_url": "https://github.com/huggingface/transformers/pull/28517",
"diff_url": "https://github.com/huggingface/transformers/pull/28517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28517.patch",
"merged_at": 1706087534000
} |
https://api.github.com/repos/huggingface/transformers/issues/28516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28516/comments | https://api.github.com/repos/huggingface/transformers/issues/28516/events | https://github.com/huggingface/transformers/issues/28516 | 2,082,719,913 | I_kwDOCUB6oc58I8ip | 28,516 | EarlyStoppingCallback Not Working with Accelerate | {
"login": "superleesa",
"id": 88019950,
"node_id": "MDQ6VXNlcjg4MDE5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/88019950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superleesa",
"html_url": "https://github.com/superleesa",
"followers_url": "https://api.github.com/users/superleesa/followers",
"following_url": "https://api.github.com/users/superleesa/following{/other_user}",
"gists_url": "https://api.github.com/users/superleesa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/superleesa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/superleesa/subscriptions",
"organizations_url": "https://api.github.com/users/superleesa/orgs",
"repos_url": "https://api.github.com/users/superleesa/repos",
"events_url": "https://api.github.com/users/superleesa/events{/privacy}",
"received_events_url": "https://api.github.com/users/superleesa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Actually, the problem occurred even after removing the EarlyStoppingCallback..... ",
"cc @muellerzr @pacman100 ",
"The weirdest part is that the error always occurs at a training step between 229~235. I was thinking that this is due to a deadlock caused by evaluation / saving but even if I set the the evaluation & saving step to 300 (i.e. the first evaluation & saving step happens at 300 which is after the error occurs), the error still occurs, implying that they are not likely the cause. \r\n\r\nAlso, I tried this with both DDP and Deepspeed but the same problem happens for the both."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-1042-gcp-x86_64-with-glibc2.38
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Train llama2 (I used a fine-tuned llama2 for Japanese) with accelerate (training with DDP on four GPUs), using a script that uses the Trainer API and EarlyStoppingCallback:
1. Please download the scripts I used: [finetune.py and data_loader.py](https://github.com/superleesa/dump)
2. In a terminal, run `accelerate launch finetune.py`
Note: I ran this without any accelerate configuration.
### Expected behavior
I'm fine-tuning llama2 with accelerate (DDP on four GPUs), using a script that uses the Trainer API and EarlyStoppingCallback. Whenever I run the code, after a few iterations of training I get the following error:
```
[E ProcessGroupNCCL.cpp:475] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800228 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800550 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3523, OpType=ALLREDUCE, NumelIn=7828736, NumelOut=7828736, Timeout(ms)=1800000) ran for 1800903 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:475] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
972ee88fbdea:1599:1757 [0] NCCL INFO [Service thread] Connection closed by localRank 0
972ee88fbdea:1599:1724 [0] NCCL INFO comm 0x1c635a70 rank 0 nranks 4 cudaDev 0 busId 40 - Abort COMPLETE
[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:916] [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=3524, OpType=BROADCAST, NumelIn=33554432, NumelOut=33554432, Timeout(ms)=1800000) ran for 1800371 milliseconds before timing out.
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1600 closing signal SIGTERM
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1601 closing signal SIGTERM
[2024-01-15 15:57:12,861] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1602 closing signal SIGTERM
[2024-01-15 15:57:12,976] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0
```
Some insights:
- I first searched online for the error; some people suggested disabling NCCL P2P and increasing the timeout threshold, but neither of them worked.
- Then, **when I removed the EarlyStoppingCallback the code worked fine, so I found that it must be related to EarlyStoppingCallback**. Therefore, although I use lots of other components in the code, including LoRA and 4-bit quantization, the error must be related to EarlyStoppingCallback.
- Because the error only happens when the training loop should stop via the early stopping callback (i.e. the validation loss has not decreased for the specified "patience" number of evaluations), I suspected that this is caused by the training not being able to exit the loop.
- I read the source code and found that when the early stopping counter is exhausted, control.should_training_stop is set to True. However, within the training loop in the Trainer class, I believe this is not handled properly for training that uses multiple GPUs.
- In particular, I suspect the loop is not actually broken out of, and there might be an improper use of the break statement in combination with how Accelerate works.
- I assume this is basically the same problem as [discussed here](https://discuss.huggingface.co/t/early-stopping-for-eval-loss-causes-timeout/51349); the only difference is that the author there uses their own training script while I am using the Trainer.
- To fix this, I think we need to use accelerate's set_breakpoint and check_breakpoint functions within the [_inner_training_loop function in the Trainer class](https://github.com/huggingface/transformers/blob/7e0ddf89f483f53107870cddabb2e1cc93069705/src/transformers/trainer.py#L1933C1-L1934C26), as they were used to fix the problem in the discussion linked above (see the sketch after this list).
- **So I believe the real problem here is not the EarlyStoppingCallback but the condition to exit the training loop.** | {
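As a rough illustration of the pattern from the linked discussion (plain PyTorch, not Trainer code, and assuming the process group is already initialized): the stop decision has to be agreed on by every rank before any rank leaves the loop, e.g. by all-reducing a flag.

```python
import torch
import torch.distributed as dist

def should_stop_everywhere(local_should_stop: bool, device: str = "cuda") -> bool:
    """Sketch: make the early-stop decision identical on every rank so no rank hangs in a collective."""
    flag = torch.tensor(1.0 if local_should_stop else 0.0, device=device)
    dist.all_reduce(flag, op=dist.ReduceOp.MAX)  # if any rank wants to stop, all ranks stop
    return flag.item() > 0
```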
"url": "https://api.github.com/repos/huggingface/transformers/issues/28516/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28516/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28515/comments | https://api.github.com/repos/huggingface/transformers/issues/28515/events | https://github.com/huggingface/transformers/issues/28515 | 2,082,611,854 | I_kwDOCUB6oc58IiKO | 28,515 | AttributeError: 'LlamaForCausalLM' object has no attribute 'merge_and_unload' | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @tshrjn, thanks for raising an issue! \r\n\r\nIt looks like this is an issue on how the transformers model is wrapped on unsloth's side. Our model's don't natively have a `merge_and_upload` method. ",
"@amyeroberts Thanks for taking a look. \n\nThis might be a PEFT library issue. The docs for which states the `merge_and_unload` function [here](https://huggingface.co/docs/peft/v0.7.1/en/package_reference/lora#peft.LoraModel.merge_and_unload).\n",
"@tshrjn Thanks for the pointer! cc @younesbelkada ",
"Hi @tshrjn \r\nThanks a lot for the issue! Have you correctly called a `FastLanguageModel.get_peft_model` before running the training? I think that unsloth properly takes care of merging lora weights in the base model if you call `merge_and_unload()` - Can you also print `model` before that error? \r\ncc @danielhanchen as well",
"Oh I will check this! I'll run the notebook again to investigate :)",
"I just updated the Tiny Llama notebook: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing\r\n\r\nWill announce tomorrow, but we now support merging to GGUF / float16 directly. The saving modules are on the very bottom.\r\n\r\nBut `model.merge_and_unload()` only works if calling `FastLanguageModel.get_peft_model` as Younes described.\r\n\r\nI'll investigate the HF notebooks I uploaded as well + will update them with the latest Unsloth release!",
"Thanks very much @danielhanchen ! \r\n@tshrjn as stated by @danielhanchen can you confirm running the cell with `get_peft_model` fixes your issue ? 🙏 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the TinyLlama model with LoRA training as in unsloth's [colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) shown in the readme, and then trying to call 'merge_and_unload' after training, I get the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[43], line 1
----> 1 model = model.merge_and_unload()
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1695, in Module.__getattr__(self, name)
1693 if name in modules:
1694 return modules[name]
-> 1695 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LlamaForCausalLM' object has no attribute 'merge_and_unload'
```
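For reference, `merge_and_unload` is defined on the PEFT wrapper rather than on the base `LlamaForCausalLM`. A minimal plain-PEFT sketch of the intended flow (the checkpoint name and LoRA settings below are only illustrative; unsloth's `FastLanguageModel.get_peft_model` performs a similar wrapping step):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # illustrative checkpoint
peft_config = LoraConfig(r=16, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, peft_config)

# ... training happens on `peft_model` ...

merged = peft_model.merge_and_unload()  # available on the PeftModel wrapper; returns the base model with LoRA merged in
```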
### Expected behavior
To be able to merge the adapters. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28515/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28514/comments | https://api.github.com/repos/huggingface/transformers/issues/28514/events | https://github.com/huggingface/transformers/pull/28514 | 2,082,547,057 | PR_kwDOCUB6oc5kHlXS | 28,514 | Config: warning when saving generation kwargs in the model config | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28514). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts PR comments addressed!\r\n\r\n@amyeroberts @ArthurZucker What do you think would be a sensible deprecation time frame? I do agree it can become a sensible exception: raising an exception at `save_pretrained` corresponds to breaking the training runs (the model artifact is only stored after these lines) ⚠️ Perhaps 6 minor versions, instead of the standard 2?\r\n\r\n(I wouldn't want to leave it to `v5`, as there would be little incentive for the users to do something about it)",
"@gante Definitely not until v5 💀 I'd say 4? I don't feel strongly, so happy to go with what you or @ArthurZucker think is best. ",
"Bumped to 4 minor versions 👍 (i.e. set to deprecate in v4.41)",
"Sorry saw this a bit late, LGTM"
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
## Context
`generate` is ideally controlled by a `GenerationConfig`. However, to remain retrocompatible, a `PretrainedConfig` may control `generate` in the following conditions:
1. `generate` does not receive a `generation_config` argument; AND
2. `model.generation_config._from_model_config is True`, which means the user never manually created a `GenerationConfig`; AND
3. the user has not modified `model.generation_config` since it was created, which (together with 2.) means that `model.generation_config` holds a copy of the generation parameterization in `model.config` at init time; AND
4. [added in this PR] the model config holds non-default generation kwargs, which means there is some intent to control generation through the model config
Having the legacy behavior active essentially means there are two places to control generation, which has been causing some GH issues. We can't get rid of it (we would have to submit PRs to thousands of models), but we can be more persuasive in slowly shifting new models entirely towards the `GenerationConfig`. This should help with documentation, ease of use across tasks such as fine-tuning modern models, as well as reducing the number of occurrences of the legacy behavior warning (see [1:03 in this video](https://twitter.com/reach_vb/status/1736471172970086792) -- many @TheBloke models suffer from it, as `max_length` is set in the model config and not in the generation config).
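For illustration, the non-legacy parameterization that users are nudged towards looks roughly like this (the checkpoint name and values are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

# control generation through a dedicated GenerationConfig instead of model.config
model.generation_config = GenerationConfig(max_new_tokens=20, do_sample=True, temperature=0.7)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# saving the model also writes generation_config.json, so generation kwargs stay out of config.json
model.save_pretrained("my-checkpoint-dir")
```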
## This PR
This PR adds:
1. Clause 4. in the context list above, to avoid showing the warning when there was no intent of controlling `generate` through `model.config`
2. Two future deprecation warnings in the following legacy-triggering situations:
a. When saving a `PretrainedConfig` with non-default generation attributes, which demonstrates an intent to control `generate` through it. Users are nudged towards using `GenerationConfig` instead;
b. When saving a model where `model.generation_config` is built from `model.config`, but `model.config`'s generation attributes have been modified since the creation of `model.generation_config` (i.e. the two hold different `generate` parameterization). Users are nudged towards creating a brand new `GenerationConfig` instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28514/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28514",
"html_url": "https://github.com/huggingface/transformers/pull/28514",
"diff_url": "https://github.com/huggingface/transformers/pull/28514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28514.patch",
"merged_at": 1705429862000
} |
https://api.github.com/repos/huggingface/transformers/issues/28513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28513/comments | https://api.github.com/repos/huggingface/transformers/issues/28513/events | https://github.com/huggingface/transformers/pull/28513 | 2,082,412,751 | PR_kwDOCUB6oc5kHIwg | 28,513 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null |
# What does this PR do?
This PR implements excluding the load balancing loss of padding tokens in Mixtral-8x7B
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue) https://github.com/huggingface/transformers/issues/28505
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28513/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28513",
"html_url": "https://github.com/huggingface/transformers/pull/28513",
"diff_url": "https://github.com/huggingface/transformers/pull/28513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28513.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28512/comments | https://api.github.com/repos/huggingface/transformers/issues/28512/events | https://github.com/huggingface/transformers/issues/28512 | 2,082,400,458 | I_kwDOCUB6oc58HujK | 28,512 | AMP autocast not invoked with CUDA 11.8 build of Pytorch | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @pacman100 @muellerzr "
] | 1,705 | 1,708 | null | NONE | null | ### System Info
pytorch 2.1 + CUDA 11.8
transformers 4.36.2
accelerate 0.26.0
### Who can help?
@pacman100 , @muellerz
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `Trainer.autocast_smart_context_manager()`, only CPU AMP is supported; the CUDA autocast wrapper is managed by `accelerate` when training starts. This design works with PyTorch 2.1 built with CUDA 12.1, but not with the CUDA 11.8 build.
### Expected behavior
CUDA AMP works with torch 2.1+CUDA 11.8
My simple fix is as follows:
- add a `force_cuda_amp` flag to TrainingArguments to tell the code to enable CUDA AMP autocast
- override `Trainer.autocast_smart_context_manager()` to return a CUDA AMP context if `force_cuda_amp` is set.
A more systematic solution (more like a hack) is to detect the CUDA version when the Trainer is initialized and, if CUDA is < 12, enable this flag automatically.
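To make the intent concrete, here is a rough sketch of the override I have in mind; `force_cuda_amp` is a hypothetical flag (not an existing TrainingArguments option), and the dtype is just an example:

```python
import torch
from transformers import Trainer

class CudaAmpTrainer(Trainer):
    """Illustrative subclass: force a CUDA autocast context when the CUDA 11.8 build skips it."""

    def autocast_smart_context_manager(self, cache_enabled=True):
        # `force_cuda_amp` is a hypothetical attribute assumed to be added to the training arguments
        if getattr(self.args, "force_cuda_amp", False) and torch.cuda.is_available():
            return torch.autocast(device_type="cuda", dtype=torch.float16, cache_enabled=cache_enabled)
        return super().autocast_smart_context_manager(cache_enabled=cache_enabled)
```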
~~Edit: my fix resulted in NaN so no fix yet~~
Edit 2: my fix actually worked. The NaN problem came from the hidden `_fast_init` flag of `from_pretrained`, which caused some new modules to not be properly initialized. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28512/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28511/comments | https://api.github.com/repos/huggingface/transformers/issues/28511/events | https://github.com/huggingface/transformers/pull/28511 | 2,082,340,540 | PR_kwDOCUB6oc5kG5aK | 28,511 | Add a use_safetensors arg to TFPreTrainedModel.from_pretrained() | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | PyTorch's `from_pretrained()` method has a `use_safetensors` argument. Our TF code doesn't, and just always tries safetensors if available. This PR adds the argument to match the PyTorch API. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28511",
"html_url": "https://github.com/huggingface/transformers/pull/28511",
"diff_url": "https://github.com/huggingface/transformers/pull/28511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28511.patch",
"merged_at": 1705338055000
} |
https://api.github.com/repos/huggingface/transformers/issues/28510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28510/comments | https://api.github.com/repos/huggingface/transformers/issues/28510/events | https://github.com/huggingface/transformers/issues/28510 | 2,082,098,659 | I_kwDOCUB6oc58Gk3j | 28,510 | With deepspeed zero3 enabled, loading from_pretrained() and resize_token_embeddings() do not work correctly | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Not sure if this helps, but I'm also having trouble reproducing accuracy of CodeLlama 13B-I trained with Zero2 using Zero3. ",
"Meet the same problem when using [this snippet](https://github.com/huggingface/accelerate/blob/cea6aaa1161d45f7f23ef33fcc3b0a5999ebb5a1/examples/by_feature/deepspeed_with_config_support.py#L712-L723) to save a zero-3 model."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
torch 2.1.1 - CUDA 12.1
transformers 4.36.2
accelerate 0.26.0
deepspeed 0.12.3
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This problem exists in the `PreTrainedModel` class in `modeling_utils.py` and would affect any code path that loads models.
With deepspeed ZeRO-3 enabled, the model is wrapped by the deepspeed engine, and the normal model parameters `weight` and `bias` are changed: they are empty, with shape = torch.Size([0]), and the actual weights are stored in the `ds_tensor` attribute of `weight` and `bias`, respectively. This leads to a few problems in `modeling_utils.py`:
- Calling `model.state_dict().keys()` to get the expected model parameters. This uses PyTorch Module's original state_dict implementation, and with deepspeed enabled it fails to return all parameter keys.
- Checking mismatched keys: `state_dict[checkpoint_key].shape != model_state_dict[model_key].shape`. Here `model_state_dict[model_key].shape` is 0, so this check fails and turns matched keys into mismatched ones. Those keys are then removed from the checkpoint's state_dict, and the corresponding weights are not loaded.
- `Tied_params`: accelerate's `find_tied_parameters()` should be called to search for tied parameters when deepspeed is enabled, instead of relying on `model.state_dict().items()`.
- `resize_token_embeddings()`:
  - when creating new_embedding, this call is not wrapped in a deepspeed context, so the new_embedding is not managed by deepspeed.
  - with the above fixed, before tying weights, the `embedding.shape` check must be wrapped in a deepspeed `GatheredParameters()` context (see the sketch below).
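For context, a minimal sketch of the pattern described in the last bullet (the general `GatheredParameters` usage, not the exact code in my commit; `model` is assumed to be a ZeRO-3 wrapped `PreTrainedModel`):

```python
import deepspeed

# Under ZeRO-3, a partitioned parameter reports shape torch.Size([0]) until it is gathered.
embeddings = model.get_input_embeddings()  # `model`: assumed deepspeed ZeRO-3 wrapped PreTrainedModel
with deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):
    # Inside the context the full weight is materialized, so shape checks are meaningful.
    vocab_size, hidden_size = embeddings.weight.shape
```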
### Expected behavior
I made a fork of `transformers` and modified `modeling_utils.py` as in the following commit:
https://github.com/haixpham/transformers/commit/e300792ccb6fc53666b4971bab87ea7179a4e3bb
I would love to hear any feedback about my changes. I checked and compared the result values with/without deepspeed and they appeared similar. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28510/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28510/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28509/comments | https://api.github.com/repos/huggingface/transformers/issues/28509/events | https://github.com/huggingface/transformers/pull/28509 | 2,081,998,273 | PR_kwDOCUB6oc5kFvWH | 28,509 | SiLU activation wrapper for safe importing | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh The same capturelogger errors arose again. A cheeky re-run (with ssh) made them pass, so merging as they're not related to this PR"
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
The custom implementation of `SiLUActivation` was removed in #27136. This causes two issues:
1. Users are unable to unpickle objects - cf. #28177
2. Users are unable to import the class - cf. #28496
For 1. - the unpickling of modified transformers models through torch.load isn't something we officially support. However, this will (temporarily) provide an equivalent class.
For 2. - this provides a class with a deprecation warning that can be imported.
Fixes #28177 #28496
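For illustration, a backward-compatibility shim along these lines (a sketch of the idea, not necessarily the exact class added here):

```python
import warnings
import torch.nn as nn

class SiLUActivation(nn.SiLU):
    """Thin shim so old pickles and imports keep working while pointing users at nn.SiLU."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "SiLUActivation is deprecated; use torch.nn.SiLU directly instead.",
            FutureWarning,
        )
        super().__init__(*args, **kwargs)
```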
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28509",
"html_url": "https://github.com/huggingface/transformers/pull/28509",
"diff_url": "https://github.com/huggingface/transformers/pull/28509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28509.patch",
"merged_at": 1705347420000
} |
https://api.github.com/repos/huggingface/transformers/issues/28508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28508/comments | https://api.github.com/repos/huggingface/transformers/issues/28508/events | https://github.com/huggingface/transformers/pull/28508 | 2,081,991,253 | PR_kwDOCUB6oc5kFtya | 28,508 | Fix `_speculative_sampling` implementation | {
"login": "ofirzaf",
"id": 18296312,
"node_id": "MDQ6VXNlcjE4Mjk2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofirzaf",
"html_url": "https://github.com/ofirzaf",
"followers_url": "https://api.github.com/users/ofirzaf/followers",
"following_url": "https://api.github.com/users/ofirzaf/following{/other_user}",
"gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions",
"organizations_url": "https://api.github.com/users/ofirzaf/orgs",
"repos_url": "https://api.github.com/users/ofirzaf/repos",
"events_url": "https://api.github.com/users/ofirzaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/ofirzaf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I was thinking about changing `max_new_tokens` inside the `get_candidates` method so we won't generate more tokens than `max_length - 1`:\r\n```python\r\nmax_new_tokens = min(int(self.num_assistant_tokens), self.generation_config.max_length - new_cur_len - 1)\r\n```\r\nThis will help avoid checking in several places if `n_matches` is too big, it can also cause a problem in `_speculative_sampling`, and prevent unnecessary compute. \r\n\r\nWhat do you think?\r\n",
"@ofirzaf good suggestion, the candidate model should indeed be set up such that the resulting candidate sequence has at most a length of `max length - 1`, as the main model will always generate one additional token 👍 ",
"> Thanks for fixing this!\r\n> \r\n> +1 on Joao's comment reworking to make more understandable and adding epsilon.\r\n> \r\n> Could you also add at least one test which fails on main but fixes with this fix? It's fine if it's an integration test checks generated outputs\r\n\r\nI've been trying to verify the correctness of the Speculative Decoding (SD) implementation. To do so, I'm planning to add a test which verifies that the token distributions we get using SD are indeed similar (or very close) to the token distributions when using standard sample based decoding. If I understand correctly, this is what the SD paper guarantees. \r\n\r\nSo far, I found that when comparing the token scores of the above methods on a sample input, they are very similar but not identical, perhaps only due to hardware numerics errors.\r\nSince tokens with very low probability do not affect the sampling outcome, I think the test should verify that the top `k` token scores of both methods are close enough to each other (for some chose `k` like 5-10). Do you think there's a better way?\r\n\r\nWhen I benchmark Assisted Generation vs SD, I get very similar latencies, even though SD should theoretically be faster, due to a more relaxed acceptance criteria. In consequence, I think it's important to verify the correctness of the SD implementation e.g. by comparing outputs like above. \r\n\r\n@gante @ofirzaf \r\nWDYT?\r\n\r\nMy results on A100:\r\n```bash\r\nTarget: bigcode/starcoder\r\nAssistant: bigcode/tiny_starcoder_py\r\n\r\nData: openai_humaneval, 20 random examples (args.seed=42)\r\n\r\nSetup: \r\nargs.seed=42\r\nmax_new_tokens=128\r\noutput_scores=True\r\n\r\n======================================================================================================================================================\r\nMethod Token Latency Acceptance Rate Gen. Args\r\n======================================================================================================================================================\r\n\r\nsample 45.19ms ------ {'do_sample': True, 'temperature': 0.2, 'assistant_model': False}\r\nsd 25.53ms 67.81% {'do_sample': True, 'temperature': 0.2, 'assistant_model': True}\r\nag 24.04ms 70.78% {'assistant_model': True}\r\n======================================================================================================================================================\r\n```",
"@amyeroberts I am not sure what kind of test do you expect to see. I wrote the following test which tests `_speculative_sampling` directly with dummy input data. Is this the kind of testing you are looking for?\r\n```python\r\nimport unittest \r\nfrom transformers.generation.utils import _speculative_sampling\r\nimport torch\r\n\r\nclass Test(unittest.TestCase):\r\n def test_speculative_sampling(self):\r\n # assume vocab size 10, input length 5 + 3 generated candidates\r\n candidate_input_ids = torch.tensor([[8, 0, 3, 9, 8, 1, 4, 5]]) # input tokens\r\n candidate_logits = torch.tensor([[\r\n [-10., 10., -10., -10., -10., -10., -10., -10., -10., -10.], # generated 1\r\n [-10., -10., -10., -10., 10., -10., -10., -10., -10., -10.], # generated 4\r\n [-10., -10., -10., -10., -10., 10., -10., -10., -10., -10.], # generated 5\r\n ]])\r\n candidate_length = 3\r\n inf = float('inf')\r\n new_logits = torch.tensor([[\r\n [-10., 10., -10., -10., -10., -10., -10., -10., -10., -10.], # accepts 1\r\n [-10., -10., -10., -10., 10., -10., -10., -10., -10., -10.], # accepts 4\r\n [-inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, 10., -inf], # rejects 5, accepts 8\r\n [-10., -10., -10., -10., -10., -10., -10., -10., -10., -10.], # N/A\r\n ]])\r\n last_assistant_token_is_eos=False\r\n max_matches = 5\r\n validated_tokens, n_matches = _speculative_sampling(\r\n candidate_input_ids,\r\n candidate_logits,\r\n candidate_length,\r\n new_logits,\r\n last_assistant_token_is_eos,\r\n max_matches,\r\n )\r\n self.assertTrue(n_matches.item() == 2)\r\n self.assertTrue(validated_tokens.tolist()[0] == [1, 4, 8])\r\n```",
"@danielkorat we want to test that the probability to sample token $t$ from the target model is the same as the probability to sample $t$ using speculative. Maybe the following test will work, given a prompt and a target model, calculate the distribution for the next token by simply running the model on the prompt and applying `softmax` on the resulted logits. Then, we can generate $n$ guesses for the next token using speculative. If speculative is implemented correctly and $n$ is large enough you should get a distribution that would match the calculated distribution from before. \r\n\r\nDoes that make sense? WDYT?",
"@ofirzaf Yes we can either do that or compare the logits using `np.allclose()`.\r\nI wonder though if the guarantee holds for the next token only and not for all subsequent tokens.\r\nIf it holds only for the next token, we can have the test feed the SD algo with standard sampled (STD) input_ids one at a time (max_new_tokens=1) and compare outputs (one at a time) with STD in one of the suggested methods above.\r\n________________________________\r\nFrom: Ofir Zafrir ***@***.***>\r\nSent: Thursday, January 18, 2024 3:16:46 AM\r\nTo: huggingface/transformers ***@***.***>\r\nCc: Korat, Daniel ***@***.***>; Mention ***@***.***>\r\nSubject: Re: [huggingface/transformers] Fix `_speculative_sampling` implementation (PR #28508)\r\n\r\n\r\n@danielkorat<https://github.com/danielkorat> we want to test that the probability to sample token $t$ from the target model is the same as the probability to sample $t$ using speculative. Maybe the following test will work, given a prompt and a target model, calculate the distribution for the next token by simply running the model on the prompt and applying softmax on the resulted logits. Then, we can generate $n$ guesses for the next token using speculative. If speculative is implemented correctly and $n$ is large enough you should get a distribution that would match the calculated distribution from before.\r\n\r\nDoes that make sense? WDYT?\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/28508#issuecomment-1897597109>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AH26TASLOFOHUY467DYRXPTYPBZX5AVCNFSM6AAAAABB3HU4LOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOJXGU4TOMJQHE>.\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n",
"@ofirzaf your test suggestion would be a great addition 👍 ",
"@danielkorat If the claim in the paper is correct, then @ofirzaf suggestion should work regardless of the distance of the target token to the end of the prompt. In other words, we should see the same distribution in the 1st generated token, in the 2nd generated token given the 1st generated token, and so on.\r\n\r\nMy suggestion would be to generate ~10 tokens for a given prompt ~1M times using super small models (e.g. pythia), to confirm it empirically.",
"@gante @amyeroberts I have pushed all the changes we discussed. Can you please take a look?",
"@gante @ofirzaf \r\nI wrote this code to check our assumptions about the distributions:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, set_seed\r\n\r\ndef top_k(a, k):\r\n sorted, indices = a.sort(descending=True)\r\n return sorted[:k], indices[:k]\r\ndef get_dist(outputs, num_samples=10**6, norm_factor=1e7):\r\n probs = outputs.scores[0][0].softmax(dim=-1)\r\n print(\"probs:\", top_k(probs, k=5))\r\n next_std_token = torch.multinomial(probs, num_samples=num_samples, replacement=True)\r\n counts = torch.bincount(next_std_token, minlength=probs.shape[0]).float()\r\n print(\"counts:\", top_k(counts, k=5))\r\n norm_counts = counts.float() / norm_factor\r\n print(\"softmaxed_counts:\", top_k(counts.softmax(dim=-1), k=5)[0])\r\n print(\"norm_counts:\", top_k(norm_counts, k=5))\r\n softmaxed_norm_counts = top_k(norm_counts.softmax(dim=-1), k=5)[0]\r\n print(\"softmaxed_norm_counts:\", softmaxed_norm_counts)\r\n return softmaxed_norm_counts\r\n\r\ncheckpoint = \"EleutherAI/pythia-1.4b-deduped\"\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint)\r\nassistant_model = AutoModelForCausalLM.from_pretrained(\"EleutherAI/pythia-160m-deduped\")\r\ngen_kwargs = dict(**tokenizer(\"In other words, we\", return_tensors=\"pt\"),\r\n pad_token_id=tokenizer.eos_token_id,\r\n max_new_tokens=1,\r\n do_sample=True,\r\n temperature=0.1,\r\n return_dict_in_generate=True,\r\n output_scores=True)\r\nprint(\"Standard sampling:\")\r\nset_seed(0)\r\nstd_dist = get_dist(model.generate(**gen_kwargs))\r\nprint(\"\\nSpeculative decoding:\")\r\nset_seed(0)\r\nsd_dist = get_dist(model.generate(**gen_kwargs, assistant_model=assistant_model))\r\nprint(std_dist == sd_dist)\r\n```\r\n\r\nIn short, I get the token probs for some input for both decoding methods, then I sample 1M times from this prob, and get the token counts (I always look at top 5 scores in all the following computations). As you can see below, the `counts` are very close to each other but not equal of course. If we apply `softmax` to the counts, they produce an insignificant result of `[1., 0., 0., 0., 0.]` (tested other inputs too). So I normalize the counts and then apply `softmax`, and then the output distributions look similar, however, they are still not equal to each other (final output). \r\n\r\nWDYT? 
Is this the way to go?\r\nIf so, I can extend this test to the next 10 generated tokens as well.\n\r\nOutput::\r\n```bash\r\nStandard sampling:\r\nprobs: (tensor([5.9927e-01, 3.8301e-01, 1.7703e-02, 8.4421e-06, 1.5724e-06]), tensor([452, 403, 476, 878, 513]))\r\ncounts: (tensor([5.9925e+05, 3.8286e+05, 1.7879e+04, 1.7000e+01, 1.0000e+00]), tensor([452, 403, 476, 878, 513]))\r\nsoftmaxed_counts: tensor([1., 0., 0., 0., 0.])\r\nnorm_counts: (tensor([5.9925e-02, 3.8286e-02, 1.7879e-03, 1.7000e-06, 1.0000e-07]), tensor([452, 403, 476, 878, 513]))\r\nsoftmaxed_norm_counts: tensor([2.1106e-05, 2.0654e-05, 1.9914e-05, 1.9878e-05, 1.9878e-05])\r\n\r\nSpeculative decoding:\r\nprobs: (tensor([5.9925e-01, 3.8304e-01, 1.7706e-02, 8.4404e-06, 1.5720e-06]), tensor([452, 403, 476, 878, 513]))\r\ncounts: (tensor([5.9922e+05, 3.8288e+05, 1.7883e+04, 1.7000e+01, 1.0000e+00]), tensor([452, 403, 476, 878, 457]))\r\nsoftmaxed_counts: tensor([1., 0., 0., 0., 0.])\r\nnorm_counts: (tensor([5.9922e-02, 3.8288e-02, 1.7883e-03, 1.7000e-06, 1.0000e-07]), tensor([452, 403, 476, 878, 457]))\r\nsoftmaxed_norm_counts: tensor([2.1106e-05, 2.0654e-05, 1.9914e-05, 1.9878e-05, 1.9878e-05])\r\ntensor([False, False, False, False, False])\r\n```",
"@danielkorat there are tiny fluctuations to be expected: doing the forward pass with different shapes (e.g. 1 token at a time vs all at once, 1 row at a time vs batched) will result in slightly different outputs. I've written about it in more detail [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).\r\n\r\nFactoring in this source of numerical differences, the results do look similar! Thank you for running the experiment 🤗 ",
"> @danielkorat there are tiny fluctuations to be expected: doing the forward pass with different shapes (e.g. 1 token at a time vs all at once, 1 row at a time vs batched) will result in slightly different outputs. I've written about it in more detail [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).\n> \n> \n> \n> Factoring in this source of numerical differences, the results do look similar! Thank you for running the experiment 🤗 \n\n@gante My intention is to integrate this code into the repo tests. WDYT?\nShould I do this in a separate PR?",
"@danielkorat This test requires a lot of compute, so it's not fit for our CI 😅 "
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The current implementation of `_speculative_sampling` accepts the draft model's tokens all the time due to a faulty test of the number of matches (`n_matches`). After fixing this issue, I found and fixed several more issues in the implementation so that it reproduces the exact algorithm presented in the paper.
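For reference, a small standalone sketch of the acceptance rule from the speculative decoding paper that `_speculative_sampling` is meant to follow (illustrative, not the function's actual code):

```python
import torch

def accepted_prefix_length(p_target, p_draft, candidate_tokens):
    """p_target / p_draft: [candidate_len, vocab] probabilities; candidate_tokens: [candidate_len] draft tokens."""
    idx = torch.arange(candidate_tokens.shape[0])
    p = p_target[idx, candidate_tokens]
    q = p_draft[idx, candidate_tokens]
    accept_prob = (p / q).clamp(max=1.0)               # accept token i with probability min(1, p_i / q_i)
    is_accepted = torch.rand_like(accept_prob) <= accept_prob
    # n_matches = length of the accepted prefix (generation stops at the first rejected candidate)
    n_matches = int(is_accepted.int().cumprod(dim=-1).sum())
    return n_matches
```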
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
@echarlaix
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28508/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28508",
"html_url": "https://github.com/huggingface/transformers/pull/28508",
"diff_url": "https://github.com/huggingface/transformers/pull/28508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28508.patch",
"merged_at": 1705673252000
} |
https://api.github.com/repos/huggingface/transformers/issues/28507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28507/comments | https://api.github.com/repos/huggingface/transformers/issues/28507/events | https://github.com/huggingface/transformers/pull/28507 | 2,081,721,163 | PR_kwDOCUB6oc5kEzd5 | 28,507 | Correct model_type in PretrainedConfig's to_dict | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for opening this PR @fxmarty! \r\n\r\nThe reason this happens at the moment is that `model_type` is intended to be a class attribute, not an instance attribute i.e. to be consistent and shared for all instances of the config object and not modified instance to instance. What's the motivation for modifying it like this? ",
"Understood!"
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | As per title, now
```python
from transformers import AutoConfig, PretrainedConfig
cfg = AutoConfig.from_pretrained("bert-base-uncased")
config = PretrainedConfig.from_dict(cfg.to_dict())
config.model_type = "my-model"
print(config.to_dict()["model_type"])
```
rightfully yields `my-model`, while it used to give `""` (the class attribute value of PretrainedConfig).
I think instance attributes (if any) should take precedence over class attributes.
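For illustration, a tiny self-contained sketch of the precedence argued for here (plain Python, not the actual `PretrainedConfig` code; names are illustrative):
```python
class ToyConfig:
    model_type = ""  # class attribute shared by all instances

    def to_dict(self):
        # Start from the class-level value, then let an instance-level override win.
        output = {"model_type": type(self).model_type}
        if "model_type" in self.__dict__:
            output["model_type"] = self.__dict__["model_type"]
        return output


cfg = ToyConfig()
cfg.model_type = "my-model"          # sets an instance attribute, shadowing the class attribute
print(cfg.to_dict()["model_type"])   # -> "my-model"
```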
Related to https://github.com/huggingface/optimum/pull/1645 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28507/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28507",
"html_url": "https://github.com/huggingface/transformers/pull/28507",
"diff_url": "https://github.com/huggingface/transformers/pull/28507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28507.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28506/comments | https://api.github.com/repos/huggingface/transformers/issues/28506/events | https://github.com/huggingface/transformers/pull/28506 | 2,081,718,248 | PR_kwDOCUB6oc5kEy0M | 28,506 | Use `weights_only` only if torch >= 1.13 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28506). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"failing tests are known flaky"
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fix https://github.com/huggingface/transformers/pull/27282#issuecomment-1887859328
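For context, a sketch of the kind of version gate the title describes (an assumption, not the exact diff from this PR; the checkpoint path is a placeholder):
```python
import torch
from packaging import version

load_kwargs = {}
if version.parse(torch.__version__) >= version.parse("1.13"):
    # torch.load only accepts `weights_only` from torch 1.13 onwards
    load_kwargs["weights_only"] = True

# "pytorch_model.bin" is just a placeholder path for illustration
state_dict = torch.load("pytorch_model.bin", map_location="cpu", **load_kwargs)
```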
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28506/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28506",
"html_url": "https://github.com/huggingface/transformers/pull/28506",
"diff_url": "https://github.com/huggingface/transformers/pull/28506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28506.patch",
"merged_at": 1705575330000
} |
https://api.github.com/repos/huggingface/transformers/issues/28505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28505/comments | https://api.github.com/repos/huggingface/transformers/issues/28505/events | https://github.com/huggingface/transformers/issues/28505 | 2,081,624,681 | I_kwDOCUB6oc58ExJp | 28,505 | Exclude the load balancing loss of padding tokens in Mixtral-8x7B | {
"login": "khaimt",
"id": 145790391,
"node_id": "U_kgDOCLCVtw",
"avatar_url": "https://avatars.githubusercontent.com/u/145790391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaimt",
"html_url": "https://github.com/khaimt",
"followers_url": "https://api.github.com/users/khaimt/followers",
"following_url": "https://api.github.com/users/khaimt/following{/other_user}",
"gists_url": "https://api.github.com/users/khaimt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaimt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaimt/subscriptions",
"organizations_url": "https://api.github.com/users/khaimt/orgs",
"repos_url": "https://api.github.com/users/khaimt/repos",
"events_url": "https://api.github.com/users/khaimt/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaimt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"cc @ArthurZucker ",
"feel free to open a PR for this! Otherwise will mark it as a good second issue 🤗 ",
"I would like to work on this issue, i will go through the linked file today and ask any questions i have.",
"I was looking at the code.\r\nBelow is what the model outputs\r\n`return MoeModelOutputWithPast(\r\n last_hidden_state=hidden_states,\r\n past_key_values=next_cache,\r\n hidden_states=all_hidden_states,\r\n attentions=all_self_attns,\r\n router_logits=all_router_logits,\r\n )`\r\n \r\nThe attention from the model output can be passed during load_balancing_loss_func, and the function can be changed appropriately to handle the pad tokens.\r\nAm I right in my understanding? @ArthurZucker ",
"> feel free to open a PR for this! Otherwise will mark it as a good second issue 🤗\r\n\r\nHi @ArthurZucker, \r\nI have just raised a PR for this issue, can you help me review it?\r\nhttps://github.com/huggingface/transformers/pull/28517\r\n\r\n\r\nPS, I close the old PR: `https://github.com/huggingface/transformers/pull/28513` because it didn't follow the Contributor Guideline"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | ### Feature request
The auxiliary loss in Mixtral-MoE shouldn't **include the loss from padding tokens**.
### Motivation
I think it is better to change the function
[load_balancing_loss_func](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py#L77) by adding an additional parameter, `attention_mask`, and changing the implementation so that padding tokens are excluded from the loss.
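A rough sketch of the masking I have in mind is below (names, shapes and the exact formula are only assumptions for illustration; the real `load_balancing_loss_func` aggregates router logits across layers and would need a matching change):
```python
import torch

def masked_load_balancing_loss(expert_mask, routing_weights, attention_mask):
    # expert_mask: (batch, seq_len, num_experts), 1 where a token is routed to an expert
    # routing_weights: (batch, seq_len, num_experts) router probabilities
    # attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding
    mask = attention_mask.unsqueeze(-1).to(routing_weights.dtype)
    num_tokens = mask.sum().clamp(min=1.0)
    # fraction of non-padding tokens routed to each expert
    tokens_per_expert = (expert_mask * mask).sum(dim=(0, 1)) / num_tokens
    # average router probability per expert, also ignoring padding
    router_prob_per_expert = (routing_weights * mask).sum(dim=(0, 1)) / num_tokens
    num_experts = routing_weights.shape[-1]
    return num_experts * torch.sum(tokens_per_expert * router_prob_per_expert)
```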
### Your contribution
I would be happy to review the PR implementing this feature! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28505/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28505/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28504/comments | https://api.github.com/repos/huggingface/transformers/issues/28504/events | https://github.com/huggingface/transformers/pull/28504 | 2,081,337,584 | PR_kwDOCUB6oc5kDgar | 28,504 | Allow to train dinov2 with different dtypes like bf16 | {
"login": "StarCycle",
"id": 33491471,
"node_id": "MDQ6VXNlcjMzNDkxNDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/33491471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StarCycle",
"html_url": "https://github.com/StarCycle",
"followers_url": "https://api.github.com/users/StarCycle/followers",
"following_url": "https://api.github.com/users/StarCycle/following{/other_user}",
"gists_url": "https://api.github.com/users/StarCycle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StarCycle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StarCycle/subscriptions",
"organizations_url": "https://api.github.com/users/StarCycle/orgs",
"repos_url": "https://api.github.com/users/StarCycle/repos",
"events_url": "https://api.github.com/users/StarCycle/events{/privacy}",
"received_events_url": "https://api.github.com/users/StarCycle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @StarCycle, thanks for this contribution! \r\n\r\nCould you share a small code snippet to show how you're training the model in the PR description and the error? This helps us for anyone coming back to this PR in the future.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28504). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I am using another training framework (xtuner), which is a little complicated. But you can easily reproduce the problem with the following code:\r\n\r\n```\r\nimport torch\r\nfrom transformers import AutoImageProcessor, AutoModel\r\nfrom PIL import Image\r\nimport requests\r\n\r\n# URL of the image you want to process\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\n\r\n# Open the image\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\n# Load the image processor and the model\r\nprocessor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')\r\nmodel = AutoModel.from_pretrained('facebook/dinov2-base', torch_dtype=torch.bfloat16)\r\n\r\n# Prepare the inputs\r\ninputs = processor(images=image, return_tensors=\"pt\")\r\n\r\n# Get the model outputs\r\noutputs = model(**inputs)\r\n```\r\nNow there will be an error:\r\n```\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/dinov2/modeling_dinov2.py](https://localhost:8080/#) in forward(self, pixel_values)\r\n 165 f\" Expected {self.num_channels} but got {num_channels}.\"\r\n 166 )\r\n--> 167 embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)\r\n 168 return embeddings\r\n 169 \r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in forward(self, input)\r\n 458 \r\n 459 def forward(self, input: Tensor) -> Tensor:\r\n--> 460 return self._conv_forward(input, self.weight, self.bias)\r\n 461 \r\n 462 class Conv3d(_ConvNd):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in _conv_forward(self, input, weight, bias)\r\n 454 weight, bias, self.stride,\r\n 455 _pair(0), self.dilation, self.groups)\r\n--> 456 return F.conv2d(input, weight, bias, self.stride,\r\n 457 self.padding, self.dilation, self.groups)\r\n 458 \r\n\r\nRuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same\r\n```",
"@StarCycle Thanks for sharing and again for this contribution! "
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | I want to train dinov2 with bf16 but I get the following error in https://github.com/huggingface/transformers/blob/bc72b4e2cdcbc80d5f56731f35dbc9c18b4c8de6/src/transformers/models/dinov2/modeling_dinov2.py#L635:
```
RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same
```
Since the input dtype is torch.float32, the parameter dtype has to be torch.float32...
@LZHgrla and I checked the code of clip vision encoder and found there is an automatic dtype transformation (https://github.com/huggingface/transformers/blob/bc72b4e2cdcbc80d5f56731f35dbc9c18b4c8de6/src/transformers/models/clip/modeling_clip.py#L181-L182).
So I added a similar automatic dtype transformation to modeling_dinov2.py.
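Concretely, the change amounts to a cast like the one in this minimal stand-in module (illustrative only; the class below is not the actual dinov2 code):
```python
import torch
from torch import nn

class PatchEmbeddingsSketch(nn.Module):
    """Minimal stand-in for the dinov2 patch embedding, only to show the cast."""

    def __init__(self, num_channels=3, hidden_size=768, patch_size=14):
        super().__init__()
        self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=patch_size)

    def forward(self, pixel_values):
        # Cast fp32 pixel values to the weight dtype so a bf16/fp16 model still works.
        target_dtype = self.projection.weight.dtype
        embeddings = self.projection(pixel_values.to(dtype=target_dtype))
        return embeddings.flatten(2).transpose(1, 2)
```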
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28504/reactions",
"total_count": 6,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28504/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28504",
"html_url": "https://github.com/huggingface/transformers/pull/28504",
"diff_url": "https://github.com/huggingface/transformers/pull/28504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28504.patch",
"merged_at": 1705518188000
} |
https://api.github.com/repos/huggingface/transformers/issues/28503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28503/comments | https://api.github.com/repos/huggingface/transformers/issues/28503/events | https://github.com/huggingface/transformers/pull/28503 | 2,081,147,575 | PR_kwDOCUB6oc5kC3Xh | 28,503 | Add sudachi_projection option to BertJapaneseTokenizer | {
"login": "hiroshi-matsuda-rit",
"id": 40782025,
"node_id": "MDQ6VXNlcjQwNzgyMDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/40782025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hiroshi-matsuda-rit",
"html_url": "https://github.com/hiroshi-matsuda-rit",
"followers_url": "https://api.github.com/users/hiroshi-matsuda-rit/followers",
"following_url": "https://api.github.com/users/hiroshi-matsuda-rit/following{/other_user}",
"gists_url": "https://api.github.com/users/hiroshi-matsuda-rit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hiroshi-matsuda-rit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hiroshi-matsuda-rit/subscriptions",
"organizations_url": "https://api.github.com/users/hiroshi-matsuda-rit/orgs",
"repos_url": "https://api.github.com/users/hiroshi-matsuda-rit/repos",
"events_url": "https://api.github.com/users/hiroshi-matsuda-rit/events{/privacy}",
"received_events_url": "https://api.github.com/users/hiroshi-matsuda-rit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Example model for sudachi_projection option:\r\nhttps://huggingface.co/hiroshi-matsuda-rit/bert-base-sudachitra-v11/blob/main/tokenizer_config.json#L16",
"Nice 🔥 Feel free to ping me for a review! ",
"The errors which were raised from the test cases for jumanpp and mecab-unidic in local environment were not reproduced in circle-ci test environment, so I removed following section from the top description of this PR.\r\n\r\n> As a side note, some errors have been reported from the test cases for mecab-unidic and jumanpp when turning on `RUN_CUSTOM_TOKENIZERS`.\r\n> I would also like to update these outdated test cases in this PR if required.\r\n```console\r\n$ RUN_CUSTOM_TOKENIZERS=1 pytest -rfs tests/models/bert_japanese/\r\n...\r\n======================================================================================= short test summary info =======================================================================================\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_jumanpp_tokenizer - AssertionError: Lists differ: ['アップ[28 chars]', '\\\\ ', 'が', '\\\\ \\\\ ', '\\\\ ', '発売', 'さ', 'れた', '\\\\ ', '。'] != ['アップ[28 chars]', '\\u3000', 'が', '\\u3000', '\\u3000', '\\u3000'[28 chars]...\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_jumanpp_tokenizer_ext - AssertionError: Lists differ: ['ありがとう', 'ございます', 'm', '(', '_', '\\\\ _', ')', 'm', '見つける', 'の[15 chars] '。'] != ['ありがとう', 'ございます', 'm(_ _)m', '見つける', 'の', 'が', '大...\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_jumanpp_tokenizer_lower - AssertionError: Lists differ: ['アップ[28 chars]', '\\\\ ', 'が', '\\\\ \\\\ ', '\\\\ ', '発売', 'さ', 'れた', '\\\\ ', '。'] != ['アップ[28 chars]', '\\u3000', 'が', '\\u3000', '\\u3000', '\\u3000'[28 chars]...\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_jumanpp_tokenizer_no_normalize - AssertionError: Lists differ: ['ア',[45 chars]', '\\\\ ', 'が', '\\\\ \\\\ ', '\\\\ ', '発売', 'さ', 'れた', '\\u3000', '。'] != ['ア',[45 chars]', '\\u3000', 'が', '\\u3000', '\\u3000', '\\u3000'[28 chars] '。']\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_jumanpp_tokenizer_trim_whitespace - AssertionError: Lists differ: ['アップ[21 chars]ne', '8', '\\\\', 'が', '\\\\ \\\\', '\\\\', '発売', 'さ', 'れた', '\\\\', '。'] != ['アップ[21 chars]ne', '8', 'が', '発売', 'さ', 'れた', '。']\r\nFAILED tests/models/bert_japanese/test_tokenization_bert_japanese.py::BertJapaneseTokenizationTest::test_mecab_tokenizer_unidic - RuntimeError: The unidic dictionary itself is not found. See https://github.com/polm/unidic-py for installation.\r\nSKIPPED [2] tests/test_tokenization_common.py:2394: test is PT+TF test\r\nSKIPPED [2] tests/test_tokenization_common.py:2534: test is slow\r\nSKIPPED [2] tests/test_tokenization_common.py:2499: test requires TensorFlow\r\nSKIPPED [2] tests/test_tokenization_common.py:2448: test is slow\r\n======================================================================== 6 failed, 193 passed, 8 skipped, 7 warnings in 4.52s =========================================================================\r\n```",
"@ArthurZucker All tests passed! Please review and merge this PR.",
"> Let's add a small test where you pass sudachi_kwargs to a BertJapaneseTokenizer to make sure this is usable\r\n\r\nI added some test cases which pass kwargs to word tokenizers.",
"> I think we need a sudachi version check for this feature then!\r\n\r\nThanks for your suggestion. Let me explain the background.\r\n\r\nI've been developing a Japanese Dependency parser GiNZA utilizing spacy-transformers and sudachipy, and its [ELECTRA model](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-ginza-520/tree/main) has become widely used for Japanese text analysis in recent years.\r\nI'm wondering that certain users of GiNZA ELECTRA model may hope to keep sudachipy as is but they need to upgrade transformers to use newer models together with GiNZA.\r\n\r\nThis is the reason why I wrote the if else clause.",
"@ArthurZucker Finally, I decided to pass projection arg to Dictionary.create() directly.\r\nI would like to inform GiNZA users to update transformers and sudachipy at the same time.",
"@ArthurZucker What should I do for this type of errors? It seems I do not have a permission to rerun the failed actions.\r\n\r\n> FAILED tests/test_pipeline_mixin.py::ImageSegmentationPipelineTests::test_small_model_pt - requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))",
"Yes if else is alright don't worry! Just wanted to make sure someone that tries to use the new feature with an older version would get an error saying he needs to update the sudachi version to use it! ",
"I implemented `is_sudachi_projection_available()` and `@require_sudachi_projection` to check the sudachipy version.\r\nDoes this additional implementation meet your requirements? @ArthurZucker ",
"Reviewing! ",
"@ArthurZucker Thanks! I just revised the changes as per your instructions.",
"@ArthurZucker Thank you. I have completed the fix.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
A new feature of SudachiPy v0.6.8 allows normalization of words based on Japanese morphological analysis.
https://github.com/WorksApplications/sudachi.rs/issues/230
This morphology-based normalization functionality, named "projection" in SudachiPy, makes Japanese sub-tokenization more efficient and can improve transformer performance.
Very few changes are required to add `sudachi_projection` option to `BertJapaneseTokenizer`, and the models that do not specify `sudachi_projection` option can be used as before in environments using older versions of SudachiPy.
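A usage sketch is below (the checkpoint is the example from the comment above; argument names follow this PR's description and may differ slightly from the merged API; SudachiPy >= 0.6.8 is required):
```python
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "hiroshi-matsuda-rit/bert-base-sudachitra-v11",
    word_tokenizer_type="sudachi",
    sudachi_kwargs={"sudachi_split_mode": "A", "sudachi_projection": "normalized"},
)
print(tokenizer.tokenize("お寿司が食べたい。"))
```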
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28503/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28503",
"html_url": "https://github.com/huggingface/transformers/pull/28503",
"diff_url": "https://github.com/huggingface/transformers/pull/28503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28503.patch",
"merged_at": 1707796040000
} |
https://api.github.com/repos/huggingface/transformers/issues/28502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28502/comments | https://api.github.com/repos/huggingface/transformers/issues/28502/events | https://github.com/huggingface/transformers/issues/28502 | 2,081,119,761 | I_kwDOCUB6oc58C14R | 28,502 | Tokenizer should be serializable | {
"login": "hk6an6",
"id": 2327624,
"node_id": "MDQ6VXNlcjIzMjc2MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2327624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hk6an6",
"html_url": "https://github.com/hk6an6",
"followers_url": "https://api.github.com/users/hk6an6/followers",
"following_url": "https://api.github.com/users/hk6an6/following{/other_user}",
"gists_url": "https://api.github.com/users/hk6an6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hk6an6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hk6an6/subscriptions",
"organizations_url": "https://api.github.com/users/hk6an6/orgs",
"repos_url": "https://api.github.com/users/hk6an6/repos",
"events_url": "https://api.github.com/users/hk6an6/events{/privacy}",
"received_events_url": "https://api.github.com/users/hk6an6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems the problem goes away if I change from `paraphrase-distilroberta-base-v1` to `paraphrase-distilroberta-base-v2`. Could it be that the tokenizer_config for v1 is incompatible with json.dumps?",
"Hey! This seems to be related to `sentence-transfromers`, as tokenizers are serializable. \r\nCould you provide the stactrace and open an issue on the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repo? ",
"Hi Arthur!\r\n\r\nYes, I can provide a stack trace.\r\n\r\nI’ll close this issue and open a new issue on the sentence-transformers\r\nrepo.\r\n\r\nOn Mon, Jan 15, 2024 at 02:32 Arthur ***@***.***> wrote:\r\n\r\n> Hey! This seems to be related to sentence-transfromers, as tokenizers are\r\n> serializable.\r\n> Could you provide the stactrace and open an issue on the\r\n> sentence-transformers <https://github.com/UKPLab/sentence-transformers>\r\n> repo?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/28502#issuecomment-1891827117>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AARYISEUDT4D3TRGGZYR7GLYOUAUDAVCNFSM6AAAAABB2R4YT6VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOJRHAZDOMJRG4>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"[stacktrace.txt](https://github.com/huggingface/transformers/files/13959334/stacktrace.txt)\r\n\r\nKeeping my word, this is the stack trace. I'll cose this ticket and open one in the sentence-transformers repo.",
"https://github.com/UKPLab/sentence-transformers/issues/2418"
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- transformers.__version__: '4.36.1'
- platform: macOS 14.2.1 (23C71)
- Python version: Python 3.11.6
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-distilroberta-base-v1', device='mps')
model.max_seq_length = 384
model.save('my_path')
```
### Expected behavior
`transformers/tokenization_utils_base.py` fails to serialize the local variable `tokenizer_config`, which breaks `model.save`. `model.save` should succeed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28501/comments | https://api.github.com/repos/huggingface/transformers/issues/28501/events | https://github.com/huggingface/transformers/issues/28501 | 2,081,085,913 | I_kwDOCUB6oc58CtnZ | 28,501 | remote tokenizers trust remote code prompt doesn't not work as expected | {
"login": "mzbac",
"id": 7523197,
"node_id": "MDQ6VXNlcjc1MjMxOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7523197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzbac",
"html_url": "https://github.com/mzbac",
"followers_url": "https://api.github.com/users/mzbac/followers",
"following_url": "https://api.github.com/users/mzbac/following{/other_user}",
"gists_url": "https://api.github.com/users/mzbac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzbac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzbac/subscriptions",
"organizations_url": "https://api.github.com/users/mzbac/orgs",
"repos_url": "https://api.github.com/users/mzbac/repos",
"events_url": "https://api.github.com/users/mzbac/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzbac/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B', trust_remote_code = True)` should be used as this model is not natively supported",
"Otherwise, yes we should prompt the user for that cc @Rocketknight1 if you can have a look! ",
"> Hey, `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B', trust_remote_code = True)` should be used as this model is not natively supported\r\n\r\n@ArthurZucker Thanks for the reply. Yeah, explicitly passing trust_remote_code works. However, it should prompt the user in terminal to ask whether they want to trust the remote code if they didn't explicitly pass trust_remote_code. As I understand it, it should behave similarly to AutoModelForCausalLM.from_pretrained.",
"Yes agree with you! ",
"Hi @mzbac, I made a PR last week that specifically deals with this issue here: #28419.\r\n\r\nCan you install `transformers` from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git` and then try just running `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B')` again, without needing to edit any repo files? It should work properly now!",
"Thanks, @Rocketknight1. I can confirm that using the main branch's transformers resolves the issue.",
"Cool! That fix will be included in our next release. I'm going to close this issue for now, but feel free to reopen it if you encounter any other issues with loading custom code tokenizers."
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
transformers: 4.36.2
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B')`
2. The tokenizer failed to load and threw an error. The tokenizer class was not found, and it didn't prompt the user to allow trust for remote code.
3. Delete the `tokenizer_class` setting in [config.json](https://huggingface.co/Qwen/Qwen-1_8B/blob/main/config.json#L30) and [tokenizer_config.json](https://huggingface.co/Qwen/Qwen-1_8B/blob/main/tokenizer_config.json)
4. After that, when using `AutoTokenizer.from_pretrained('Qwen/Qwen-1_8B')`, it prompts the user to trust remote code. However, instead of asking once, it prompts the user to confirm three times.
### Expected behavior
The `AutoTokenizer.from_pretrained` function should prompt the user whether they want to enable trust remote code only once when the user did not pass the `trust_remote_code` parameter. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28501/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28500/comments | https://api.github.com/repos/huggingface/transformers/issues/28500/events | https://github.com/huggingface/transformers/pull/28500 | 2,080,999,703 | PR_kwDOCUB6oc5kCWrW | 28,500 | Log a warning when best model is not loaded | {
"login": "akwako",
"id": 31602350,
"node_id": "MDQ6VXNlcjMxNjAyMzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/31602350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akwako",
"html_url": "https://github.com/akwako",
"followers_url": "https://api.github.com/users/akwako/followers",
"following_url": "https://api.github.com/users/akwako/following{/other_user}",
"gists_url": "https://api.github.com/users/akwako/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akwako/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akwako/subscriptions",
"organizations_url": "https://api.github.com/users/akwako/orgs",
"repos_url": "https://api.github.com/users/akwako/repos",
"events_url": "https://api.github.com/users/akwako/events{/privacy}",
"received_events_url": "https://api.github.com/users/akwako/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @amyeroberts, \r\n\r\nThank you for your reply and feedback!\r\n\r\nAs far as I can tell, the main issue is when `max_steps` < `args.save_steps`, and only when `save_strategy=\"steps\"`. Since `max_steps` is computed in the `_inner_training_loop`, I have added a check there to make sure training fails fast, as you suggested. However, in case this is happening in other ways, it might be good to keep the warning in the log, too, as per the original PR. What do you think? ",
"Possibly related to #27332 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28500). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @amyeroberts, \r\n\r\nThanks for your suggestions! \r\n\r\nI'm looking into [#27332](https://github.com/huggingface/transformers/issues/27332). You may be right that these are related issues. When I run `trainer` with `EarlyStoppingCallback`, I do get the (newly added) message: `ValueError: args.save_steps must be less than max_steps...` I will look further into `EarlyStoppingCallback` to confirm.\r\n\r\nUpdate: In order to determine if the issue [#27332](https://github.com/huggingface/transformers/issues/27332) is due to max_steps < args.save_steps, I think we need to know what the other training arguments are, and how large the training sample is. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,707 | null | NONE | null | Specifying "load_best_model_at_end=True" in TrainingArguments should ensure that the best model is always loaded at end of training. However, this is not always the case: When best_model_checkpoint is None, the best model is not loaded, and the user may be unaware of this behavior.
Add a warning to the log to let the user know when the best model is not loaded at the end of training. Suggest that the user check the "save_strategy" TrainingArguments, as failing to do so is one possible reason why the best model failed to load.
@muellerzr
@pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28500/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28500",
"html_url": "https://github.com/huggingface/transformers/pull/28500",
"diff_url": "https://github.com/huggingface/transformers/pull/28500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28500.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28499/comments | https://api.github.com/repos/huggingface/transformers/issues/28499/events | https://github.com/huggingface/transformers/issues/28499 | 2,080,927,400 | I_kwDOCUB6oc58CG6o | 28,499 | activation_checkpointing error when using --fsdp | {
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @getao, thanks for raising an issue! \r\n\r\nCould you provide a minimal reproducer for this error. Specifically the full CLI command used to launch the training job? \r\n\r\ncc @muellerzr @pacman100 ",
"Sure.\r\n\r\n```\r\ndef main():\r\n parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n\r\n\r\n data_prefix = data_args.data_path\r\n \r\n train_file = f\"{data_prefix}.train.json\" # text data for language modeling (the next word prediction task)\r\n eval_file = f\"{data_prefix}.eval.json\"\r\n \r\n dataset = load_dataset(\"json\", data_files={\"train\": train_file, \"eval\": eval_file})\r\n train_dataset = dataset[\"train\"]\r\n eval_dataset = dataset[\"eval\"]\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)\r\n train_dataset = train_dataset.map(tokenize_function, batched=True, fn_kwargs={\"tokenizer\": tokenizer, \"max_seq_length\": data_args.max_seq_length, \"add_special_tokens\": data_args.add_special_tokens}, load_from_cache_file=True)\r\n eval_dataset = eval_dataset.map(tokenize_function, batched=True, fn_kwargs={\"tokenizer\": tokenizer, \"max_seq_length\": data_args.max_seq_length, \"add_special_tokens\": data_args.add_special_tokens, \"dev\": True}, load_from_cache_file=True) \r\n\r\n model_download_flag = False\r\n while model_download_flag is False: \r\n try:\r\n model = AutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, torch_dtype=torch.float16 if training_args.bf16 is False else torch.bfloat16, use_flash_attention_2=True, resume_download=True) # Llama-2\r\n model_download_flag = True\r\n except Exception as e:\r\n print(e)\r\n\r\n train_model(model, train_dataset, eval_dataset, training_args, data_collator)\r\n\r\nmain()\r\n```\r\n\r\nThe CLI looks as follows -- nothing special except 2 --fsdp related flags:\r\n\r\n```\r\ntorchrun --nproc-per-node=6 train_script.py --adam_beta2 0.95 --adam_epsilon 1e-6 --num_train_epochs $epoch --per_device_train_batch_size $batch --per_device_eval_batch_size $batch --gradient_accumulation_steps 1 --fsdp shard_grad_op --fsdp_config my_fsdp_config_path \\\r\n --learning_rate $lr --warmup_steps $warmup --max_grad_norm 1.0 --seed $seed --data_seed $seed --logging_steps 10 --save_strategy 'no' --evaluation_strategy 'steps' --eval_steps $eval_steps \\\r\n --save_steps $save_steps --bf16 --output_dir $OUT_DIR --logging_dir $OUT_DIR --data_path $INPUT_DATA | tee $OUT_DIR/train.log\r\n```"
] | 1,705 | 1,707 | null | NONE | null | ### System Info
transformers == 4.36.2
pytorch == 2.1.0
### Who can help?
When using deepspeed to enable activation checkpointing, everything goes well. However, I hit a problem when I switch to torchrun with the native PyTorch FSDP integration in the Hugging Face Trainer: https://huggingface.co/docs/transformers/main/main_classes/trainer#transformers.TrainingArguments.fsdp
I can't run the training process properly with the following errors:
```
File "/workspace/training_script.py", line 77, in train_model
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2744, in training_step
self.accelerator.backward(loss)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 1905, in backward
loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1065, in unpack_hook
args = ctx.get_args(ctx.saved_tensors)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 13:
saved metadata: {'shape': torch.Size([1, 3112, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([1, 9336, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
tensor at position 14:
saved metadata: {'shape': torch.Size([1, 3112, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([1, 9336, 32, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
```
The model I used is Llama-2; I didn't change its forward function and I use Trainer to train it. I wonder if there is something wrong with the activation_checkpointing feature (enabled in fsdp_config.json) when used together with --fsdp.
Thank you
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Training Llama using Trainer with the following arguments:
--fsdp shard_grad_op --fsdp_config fsdp_config.json (where activation_checkpointing is set to true)
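For context, a hedged Python equivalent of the CLI flags above (the `fsdp_config` keys are taken from this report and the exact spelling is an assumption):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    bf16=True,
    fsdp="shard_grad_op",
    fsdp_config={"activation_checkpointing": True},  # as in the fsdp_config.json described above
)
```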
### Expected behavior
Properly running the training process with memory saved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28499/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28498/comments | https://api.github.com/repos/huggingface/transformers/issues/28498/events | https://github.com/huggingface/transformers/pull/28498 | 2,080,667,469 | PR_kwDOCUB6oc5kBSdl | 28,498 | add dataloader prefetch factor in training args and trainer | {
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @muellerzr ",
"Thanks for the suggestions @amyeroberts ! I included them in the PR and synced the branch to the latest commits\r\n\r\nHave a nice day\r\n"
] | 1,705 | 1,707 | 1,706 | CONTRIBUTOR | null | What does this PR do?
I added an option to the trainer to prefetch batches during data loading.
When training a model with heavy transformations and an iterable dataset, the dataloader might struggle to deliver batches fast enough to keep the GPU busy. I've found that prefetching batches helps to solve this issue.
The option is implemented in torch.utils.data.DataLoader but not in HF Trainer.
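A usage sketch (the argument name follows the PR title and mirrors `torch.utils.data.DataLoader`; the final released name could differ):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=4,
    dataloader_prefetch_factor=2,  # batches fetched ahead of time per worker, as in torch's DataLoader
)
```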
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28498/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28498",
"html_url": "https://github.com/huggingface/transformers/pull/28498",
"diff_url": "https://github.com/huggingface/transformers/pull/28498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28498.patch",
"merged_at": 1706022498000
} |
https://api.github.com/repos/huggingface/transformers/issues/28497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28497/comments | https://api.github.com/repos/huggingface/transformers/issues/28497/events | https://github.com/huggingface/transformers/pull/28497 | 2,080,667,462 | PR_kwDOCUB6oc5kBSdf | 28,497 | Improving Training Performance and Scalability Documentation | {
"login": "HamzaFB",
"id": 24733081,
"node_id": "MDQ6VXNlcjI0NzMzMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/24733081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamzaFB",
"html_url": "https://github.com/HamzaFB",
"followers_url": "https://api.github.com/users/HamzaFB/followers",
"following_url": "https://api.github.com/users/HamzaFB/following{/other_user}",
"gists_url": "https://api.github.com/users/HamzaFB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamzaFB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamzaFB/subscriptions",
"organizations_url": "https://api.github.com/users/HamzaFB/orgs",
"repos_url": "https://api.github.com/users/HamzaFB/repos",
"events_url": "https://api.github.com/users/HamzaFB/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamzaFB/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28497). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks!"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | This PR improves the docs.
One strategy for improving memory performance when training large models (billions of parameters) is PEFT.
The current documentation does not mention it.
This PR adds PEFT and provides an example as to why this reduces memory needs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28497/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28497",
"html_url": "https://github.com/huggingface/transformers/pull/28497",
"diff_url": "https://github.com/huggingface/transformers/pull/28497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28497.patch",
"merged_at": 1705401027000
} |
https://api.github.com/repos/huggingface/transformers/issues/28496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28496/comments | https://api.github.com/repos/huggingface/transformers/issues/28496/events | https://github.com/huggingface/transformers/issues/28496 | 2,080,646,687 | I_kwDOCUB6oc58BCYf | 28,496 | No name 'SiLUActivation' in module 'transformers.activations' | {
"login": "qxpBlog",
"id": 96739096,
"node_id": "U_kgDOBcQfGA",
"avatar_url": "https://avatars.githubusercontent.com/u/96739096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qxpBlog",
"html_url": "https://github.com/qxpBlog",
"followers_url": "https://api.github.com/users/qxpBlog/followers",
"following_url": "https://api.github.com/users/qxpBlog/following{/other_user}",
"gists_url": "https://api.github.com/users/qxpBlog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qxpBlog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qxpBlog/subscriptions",
"organizations_url": "https://api.github.com/users/qxpBlog/orgs",
"repos_url": "https://api.github.com/users/qxpBlog/repos",
"events_url": "https://api.github.com/users/qxpBlog/events{/privacy}",
"received_events_url": "https://api.github.com/users/qxpBlog/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @qxpBlog, thanks for raising this issue! \r\n\r\nCould you share a minimal code reproducer for this error and a full traceback of the error? ",
"@amyeroberts this question appear on the eighth line(i think that this version of transformers exclude the module):\r\nimport os\r\nimport sys\r\nimport argparse\r\nimport torch\r\nimport numpy as np\r\nimport transformers\r\nfrom transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer\r\n**from transformers.activations import SiLUActivation**\r\n\r\nfrom ptflops import get_model_complexity_info\r\nfrom ptflops.pytorch_ops import bn_flops_counter_hook, pool_flops_counter_hook\r\n\r\nfrom LLMPruner.models.hf_llama.modeling_llama import LlamaForCausalLM, LlamaRMSNorm, LlamaAttention, LlamaMLP\r\nfrom LLMPruner.peft import PeftModel\r\n\r\nif torch.cuda.is_available():\r\n device = \"cuda\"\r\nelse:\r\n device = \"cpu\"\r\ntorch_version = int(torch.__version__.split('.')[1])\r\n\r\ndef LlamaAttention_counter_hook(module, input, output):\r\n # (1) Ignore past-key values\r\n # (2) Assume there is no attention mask\r\n # Input will be empty in some pytorch version. use output here since input.shape == output.shape\r\n flops = 0\r\n q_len = output[0].shape[1]\r\n linear_dim = output[0].shape[-1]\r\n num_heads = module.num_heads\r\n head_dim = module.head_dim\r\n\r\n rotary_flops = 2 * (q_len * num_heads * head_dim) * 2\r\n attention_flops = num_heads * (q_len * q_len * head_dim + q_len * q_len + q_len * q_len * head_dim) #QK^T + softmax + AttentionV\r\n linear_flops = 4 * (q_len * linear_dim * num_heads * head_dim) # 4 for q, k, v, o. \r\n flops += rotary_flops + attention_flops + linear_flops\r\n module.__flops__ += int(flops)\r\n\r\ndef rmsnorm_flops_counter_hook(module, input, output):\r\n input = input[0]\r\n\r\n batch_flops = np.prod(input.shape)\r\n batch_flops *= 2\r\n module.__flops__ += int(batch_flops)\r\n\r\ndef main(args):\r\n if args.model_type == 'pretrain':\r\n tokenizer = LlamaTokenizer.from_pretrained(args.base_model)\r\n model = LlamaForCausalLM.from_pretrained(\r\n args.base_model,\r\n low_cpu_mem_usage=True if torch_version >=9 else False\r\n )\r\n elif args.model_type == 'pruneLLM':\r\n pruned_dict = torch.load(args.ckpt, map_location='cpu')\r\n tokenizer, model = pruned_dict['tokenizer'], pruned_dict['model']\r\n else:\r\n raise NotImplementedError\r\n\r\n def input_constructor(x):\r\n return {'input_ids': torch.ones(x).long().to(device)}\r\n\r\n if device == \"cuda\":\r\n model.half()\r\n model = model.cuda()\r\n \r\n with torch.cuda.device(0):\r\n macs, params = get_model_complexity_info(model, (1, 64,), as_strings=True,\r\n input_constructor = input_constructor,\r\n print_per_layer_stat=True, verbose=True,\r\n custom_modules_hooks={\r\n LlamaAttention: LlamaAttention_counter_hook,\r\n LlamaRMSNorm: rmsnorm_flops_counter_hook,\r\n SiLUActivation: pool_flops_counter_hook,\r\n },)\r\n else:\r\n model.float()\r\n macs, params = get_model_complexity_info(model, (1, 64,), as_strings=True,\r\n input_constructor = input_constructor,\r\n print_per_layer_stat=True, verbose=True,\r\n custom_modules_hooks={\r\n LlamaAttention: LlamaAttention_counter_hook,\r\n LlamaRMSNorm: rmsnorm_flops_counter_hook,\r\n SiLUActivation: pool_flops_counter_hook,\r\n },)\r\n\r\n print('{:<30} {:<8}'.format('Computational complexity: ', macs))\r\n print('{:<30} {:<8}'.format('Number of parameters: ', params))\r\n print(\"GPU Memory Requirement: {} MiB\\n\".format(torch.cuda.memory_allocated()/1024/1024))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n parser = argparse.ArgumentParser(description='Tuning Pruned LLaMA (huggingface version)')\r\n\r\n parser.add_argument('--base_model', 
type=str, default=\"decapoda-research/llama-7b-hf\", help='base model name')\r\n parser.add_argument('--model_type', type=str, required=True, help = 'choose from [pretrain, pruneLLM]')\r\n parser.add_argument('--ckpt', type=str, default=None)\r\n parser.add_argument('--lora_ckpt', type=str, default=None)\r\n \r\n args = parser.parse_args()\r\n main(args)\r\n\r\n",
"@qxpBlog OK. So the custom silu acitvation class was removed and so can't be imported from `transformers.activations`. To fix this script, you can use `nn.SiLU` instead. \r\n\r\nAs the class wasn't part of the public API - not importable from the top level of transformers and not documented - we don't guarantee that it'll never be moved, renamed or deleted. We recognise that this can still cause unexpected errors for our users however! I've open a PR to add a dummy class so this goes through a proper deprecation cycle first. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
No name 'SiLUActivation' in module 'transformers.activations'
Why do I get this error? My version of transformers is 4.37.0.dev0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers.activations import SiLUActivation
### Expected behavior
Be able to import `SiLUActivation` from `transformers.activations`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28496/timeline | null | null | null |
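For reference, a minimal sketch of the workaround suggested in the comments of the issue above: replace the removed `SiLUActivation` import with PyTorch's stock `nn.SiLU`. The hook mapping mirrors the reporter's ptflops script and is only illustrative.

```python
import torch.nn as nn
from ptflops.pytorch_ops import pool_flops_counter_hook  # same hook the original script used

# Register the FLOPs hook on nn.SiLU instead of the removed transformers SiLUActivation class
custom_modules_hooks = {
    nn.SiLU: pool_flops_counter_hook,
}
```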
https://api.github.com/repos/huggingface/transformers/issues/28495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28495/comments | https://api.github.com/repos/huggingface/transformers/issues/28495/events | https://github.com/huggingface/transformers/pull/28495 | 2,080,589,832 | PR_kwDOCUB6oc5kBDN2 | 28,495 | improve dev setup comments and hints | {
"login": "4imothy",
"id": 40186632,
"node_id": "MDQ6VXNlcjQwMTg2NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/40186632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4imothy",
"html_url": "https://github.com/4imothy",
"followers_url": "https://api.github.com/users/4imothy/followers",
"following_url": "https://api.github.com/users/4imothy/following{/other_user}",
"gists_url": "https://api.github.com/users/4imothy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4imothy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4imothy/subscriptions",
"organizations_url": "https://api.github.com/users/4imothy/orgs",
"repos_url": "https://api.github.com/users/4imothy/repos",
"events_url": "https://api.github.com/users/4imothy/events{/privacy}",
"received_events_url": "https://api.github.com/users/4imothy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For the failing tests, you'll need to update the expected strings. "
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Changes 'pip install -e .[dev]' -> \`pip install -e '.[dev]'\` in multiple comments and hints.
The new command runs on both *zsh* and *bash*; previously it did not work on *zsh* because the unquoted `.[dev]` is treated as a glob pattern.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28495/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28495",
"html_url": "https://github.com/huggingface/transformers/pull/28495",
"diff_url": "https://github.com/huggingface/transformers/pull/28495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28495.patch",
"merged_at": 1705343800000
} |
https://api.github.com/repos/huggingface/transformers/issues/28494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28494/comments | https://api.github.com/repos/huggingface/transformers/issues/28494/events | https://github.com/huggingface/transformers/pull/28494 | 2,080,434,260 | PR_kwDOCUB6oc5kAlMw | 28,494 | Generate: consolidate output classes | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
Does some cleanup that's been on my mind for a while 🧹
We had a bunch of classes that were copies of each other, named after each internal generation method. This PR consolidates them. As a result, the documentation becomes more concise and is less likely to suffer from incomplete updates 🤗
Full retrocompatibility is kept (and tested)! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28494/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28494",
"html_url": "https://github.com/huggingface/transformers/pull/28494",
"diff_url": "https://github.com/huggingface/transformers/pull/28494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28494.patch",
"merged_at": 1705338248000
} |
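A hedged sketch of the consolidation pattern the PR above describes: several per-method output classes collapse into one shared dataclass, with the old names kept importable for backward compatibility (the actual PR may use thin subclasses instead of plain aliases). Class and field names here are illustrative, not the real diff.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.utils import ModelOutput


@dataclass
class GenerateDecoderOnlyOutput(ModelOutput):
    # One class now covers greedy search, sampling, etc. for decoder-only models
    sequences: torch.LongTensor = None
    scores: Optional[Tuple[torch.FloatTensor]] = None
    attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
    hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None


# Old per-method names stay importable so existing code keeps working
GreedySearchDecoderOnlyOutput = GenerateDecoderOnlyOutput
SampleDecoderOnlyOutput = GenerateDecoderOnlyOutput
```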
https://api.github.com/repos/huggingface/transformers/issues/28493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28493/comments | https://api.github.com/repos/huggingface/transformers/issues/28493/events | https://github.com/huggingface/transformers/pull/28493 | 2,080,413,740 | PR_kwDOCUB6oc5kAhZ4 | 28,493 | Generate: fix candidate device placement | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
#27775 was merged, and the branch was not synced with #27995 (already on `main`) -- the two branches together result in CI failures. Fortunately, the fix is simple :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28493",
"html_url": "https://github.com/huggingface/transformers/pull/28493",
"diff_url": "https://github.com/huggingface/transformers/pull/28493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28493.patch",
"merged_at": 1705177885000
} |
https://api.github.com/repos/huggingface/transformers/issues/28492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28492/comments | https://api.github.com/repos/huggingface/transformers/issues/28492/events | https://github.com/huggingface/transformers/pull/28492 | 2,080,118,727 | PR_kwDOCUB6oc5j_lMO | 28,492 | Fixing Issue #17488. Add changes to make the error thrown consistent in both decode and encode functions of Tokenizer | {
"login": "bayllama",
"id": 142558246,
"node_id": "U_kgDOCH9EJg",
"avatar_url": "https://avatars.githubusercontent.com/u/142558246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayllama",
"html_url": "https://github.com/bayllama",
"followers_url": "https://api.github.com/users/bayllama/followers",
"following_url": "https://api.github.com/users/bayllama/following{/other_user}",
"gists_url": "https://api.github.com/users/bayllama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayllama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayllama/subscriptions",
"organizations_url": "https://api.github.com/users/bayllama/orgs",
"repos_url": "https://api.github.com/users/bayllama/repos",
"events_url": "https://api.github.com/users/bayllama/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayllama/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28492). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,708 | 1,708 | CONTRIBUTOR | null | **Fixing Issue #17488. Add changes to make the error thrown consistent in both decode and encode functions of Tokenizer**
# What does this PR do?
This PR is with regards to fixing the issue to make both the encode and decode function return the same error when an unexpected argument is passed to the function. Below is the Issue ID and the Issue Title
"_batch_encode_plus() got an unexpected keyword argument 'is_pretokenized' using BertTokenizerFast #17488"
https://github.com/huggingface/transformers/issues/17488
Fixes #17488
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28492",
"html_url": "https://github.com/huggingface/transformers/pull/28492",
"diff_url": "https://github.com/huggingface/transformers/pull/28492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28492.patch",
"merged_at": null
} |
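A small illustration of the inconsistency the PR above targets, based on the linked issue: passing an unknown keyword argument to `encode` raises a `TypeError` from `_batch_encode_plus`, while `decode` has historically accepted and silently ignored it. The extra keyword name is made up for the example.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tok.encode("hello world")

tok.decode(ids, some_unexpected_kwarg=True)             # pre-fix: silently ignored
tok.encode("hello world", some_unexpected_kwarg=True)   # TypeError: unexpected keyword argument
```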
https://api.github.com/repos/huggingface/transformers/issues/28491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28491/comments | https://api.github.com/repos/huggingface/transformers/issues/28491/events | https://github.com/huggingface/transformers/issues/28491 | 2,080,104,945 | I_kwDOCUB6oc57--Hx | 28,491 | Inconsistent in batch generation results | {
"login": "qlwang25",
"id": 38132016,
"node_id": "MDQ6VXNlcjM4MTMyMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/38132016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qlwang25",
"html_url": "https://github.com/qlwang25",
"followers_url": "https://api.github.com/users/qlwang25/followers",
"following_url": "https://api.github.com/users/qlwang25/following{/other_user}",
"gists_url": "https://api.github.com/users/qlwang25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qlwang25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qlwang25/subscriptions",
"organizations_url": "https://api.github.com/users/qlwang25/orgs",
"repos_url": "https://api.github.com/users/qlwang25/repos",
"events_url": "https://api.github.com/users/qlwang25/events{/privacy}",
"received_events_url": "https://api.github.com/users/qlwang25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @qlwang25 👋 \r\n\r\nHave a look at [this comment](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535), which I believe answers your question :) ",
"Thank you very much for your reply. \r\nThe reason is ```torch type``` instead of ```padding```. Just changing it to ```torch.bfloat16``` (or ```torch.float32```) works."
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
load model code
```
model_path = "../../../pre-trained_models/h2oai-llama2-7b-chat"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
config = LlamaConfig.from_pretrained(model_path)
config.max_length = 512
with init_empty_weights():
model = LlamaForCausalLM._from_config(config, torch_dtype=torch.float16)
model.tie_weights()
model = load_checkpoint_and_dispatch(model, model_path, device_map="auto", no_split_module_classes=["LlamaDecoderLayer"], dtype=torch.float16)
model = model.eval()
model.generation_config = GenerationConfig.from_pretrained(pretrained_model_name=model_path, config_file_name='generation_config.json')
```
generate code
```
prompt1 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt1 += "Review: It is always reliable , never bugged and responds well ." + "\n"
prompt1 += "Label:[[responds, positive]]" + "\n"
prompt1 += "Review: The machine is slow to boot up and occasionally crashes completely ." + "\n"
prompt1 += "Label:[[boot up, negative]]" + "\n"
prompt1 += "Review: Enabling the battery timer is useless ." + "\n"
prompt1 += "Label:"
prompt2 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt2 += "Review: It rarely works and when it does it's incredibly slow ." + "\n"
prompt2 += "Label:[[works, negative]]" + "\n"
prompt2 += "Review: The machine is slow to boot up and occasionally crashes completely ." + "\n"
prompt2 += "Label:[[boot up, negative]]" + "\n"
prompt2 += "Review: Boot time is super fast , around anywhere from 35 seconds to 1 minute ." + "\n"
prompt2 += "Label:"
prompt3 = "Given a review, extract the aspect term(s) and determine their corresponding sentiment polarity. Here are some examples: \n"
prompt3 += "Review: It is always reliable , never bugged and responds well ." + "\n"
prompt3 += "Label:[[responds, positive]]" + "\n"
prompt3 += "Review: It rarely works and when it does it's incredibly slow ." + "\n"
prompt3 += "Label:[[works, negative]]" + "\n"
prompt3 += "Review: Boot time is super fast , around anywhere from 35 seconds to 1 minute ." + "\n"
prompt3 += "Label:"
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer([prompt1, prompt2, prompt3], padding="longest", return_tensors="pt")
padding_len = inputs["input_ids"].size(1)
outputs = model.generate(**inputs, max_length=padding_len + 80, do_sample=False, num_beams=1)
for output in outputs:
pred = tokenizer.decode(output[padding_len:], skip_special_tokens=True)
pred = pred.split("\n")[0]
print(pred)
```
---
When I use ```[prompt1, prompt2, prompt3]``` as input, the result is:
```
Љ [[battery timer, negative]]
Љ [[boot time, positive]]
[[fast, positive]]
```
When I use ```[prompt3, prompt2, prompt1]``` as input, the result is:
```
[[fast, positive]]
. [[boot time, positive]]
[[useless, negative]]
```
Again, when I use ```[prompt3, prompt2, prompt1]``` as input, the result is: (the second result is empty)
```
[[fast, positive]]
Љ [[battery timer, negative]]
```
![image](https://github.com/huggingface/transformers/assets/38132016/2ccdcf52-ff80-482c-bfc7-ca3e47bd3640)
**Problem**
(1) Why does the same set of prompts produce different results when the input order changes (or even across runs with the same order)?
(2) Why is this character (Љ) generated?
(3) Why do batch generation (batch_size=3) and individual generation (batch_size=1) give different results?
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
please see the code above
### Expected behavior
I expect the batch-generated results not to be affected by the prompt order.
Also, each sample should match the result generated individually (batch_size=1). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28491/timeline | completed | null | null |
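For reference, a minimal sketch of the fix the reporter settled on in the closing comment of the issue above: the inconsistency came from the torch dtype rather than the padding, so loading the checkpoint in `torch.bfloat16` (or `torch.float32`) instead of `torch.float16` resolved it. The model path here is a placeholder, not the reporter's local checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama2-7b-chat"  # placeholder for the local checkpoint used in the issue
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # bfloat16 / float32 instead of float16
    device_map="auto",
)
```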
https://api.github.com/repos/huggingface/transformers/issues/28490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28490/comments | https://api.github.com/repos/huggingface/transformers/issues/28490/events | https://github.com/huggingface/transformers/issues/28490 | 2,080,102,592 | I_kwDOCUB6oc57-9jA | 28,490 | [AutoGPTQ] The notebook tutorial of AutoGPTQ is not working. | {
"login": "DjangoPeng",
"id": 16943353,
"node_id": "MDQ6VXNlcjE2OTQzMzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/16943353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DjangoPeng",
"html_url": "https://github.com/DjangoPeng",
"followers_url": "https://api.github.com/users/DjangoPeng/followers",
"following_url": "https://api.github.com/users/DjangoPeng/following{/other_user}",
"gists_url": "https://api.github.com/users/DjangoPeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DjangoPeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DjangoPeng/subscriptions",
"organizations_url": "https://api.github.com/users/DjangoPeng/orgs",
"repos_url": "https://api.github.com/users/DjangoPeng/repos",
"events_url": "https://api.github.com/users/DjangoPeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/DjangoPeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @SunMarc - let me know if you want me to have a look at this one",
"I had the similar issue too. It would be great if someone can take a look at this.\r\n\r\nThere is a builder error on \"allenai--c4\" when I use the following code snippet:\r\n\r\n```python\r\n\r\nraw_datasets = load_dataset(\r\n \"allenai/c4\",\r\n \"allenai--c4\",\r\n data_files={\r\n \"validation\": \"en/c4-validation.00000-of-00008.json.gz\",\r\n },\r\n )\r\n\r\n```\r\n\r\nI found commenting out `allenai--c4` can solve the issue, but the building time seems become much longer than before.",
"Hi @DjangoPeng, this is due to a breaking change in datasets library. See related [thread](https://huggingface.co/datasets/allenai/c4/discussions/7).The issue will be fixed after this [PR](https://github.com/huggingface/optimum/pull/1646) is merged (need to install optimum from source) and I recommend you installing the latest version of datasets. The building time should be faster with it. In the meantime, I am switching the the dataset `c4` -> `wikitext2` in the notebook since I don't want users to be blocked by this. ",
"SGTM.\r\n\r\nThanks for your reply. :)",
"Thanks a lot @SunMarc ! 🎉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I hit an issue( `... can not find xxx on aws3`) when I use `wikitext2 `. Here, I'd like to append more options for the datasets: [‘wikitext2’,‘c4’,‘c4-new’,‘ptb’,‘ptb-new’]. All these datasets are used in GPTQ paper. So, it is ok to use it. I change to `c4-new` and it works well. \r\n\r\nSource:\r\nhttps://huggingface.co/docs/transformers/v4.33.0/en/main_classes/quantization#transformers.GPTQConfig.dataset"
] | 1,705 | 1,708 | 1,707 | NONE | null | ### System Info
System Info:
- `transformers` version: 4.37.0.dev0
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.23
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.0
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU): 2.1.2+cu121 (True)
### Who can help?
@SunMarc and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The official notebook introduced by [AutoGPTQ docs](https://huggingface.co/docs/transformers/quantization#autogptq) is not working after upgrading Transformers and dependencies.
I suspect this is an incompatibility caused by the update of `BuilderConfig`. It can be easily reproduced in Google Colab [here](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing).
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-3-f66cf2cc3929>](https://localhost:8080/#) in <cell line: 14>()
12
13 tokenizer = AutoTokenizer.from_pretrained(model_id)
---> 14 quant_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config, device_map='auto')
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
590 builder_config = self.builder_configs.get(config_name)
591 if builder_config is None and self.BUILDER_CONFIGS:
--> 592 raise ValueError(
593 f"BuilderConfig '{config_name}' not found. Available: {list(self.builder_configs.keys())}"
594 )
ValueError: BuilderConfig 'allenai--c4' not found. Available: ['en', 'en.noblocklist', 'en.noclean', 'realnewslike', 'multilingual', 'af', 'am', 'ar', 'az', 'be', 'bg', 'bg-Latn', 'bn', 'ca', 'ceb', 'co', 'cs', 'cy', 'da', 'de', 'el', 'el-Latn', 'en-multi', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'haw', 'hi', 'hi-Latn', 'hmn', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'iw', 'ja', 'ja-Latn', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'lv', 'mg', 'mi', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'ne', 'nl', 'no', 'ny', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'ru-Latn', 'sd', 'si', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'st', 'su', 'sv', 'sw', 'ta', 'te', 'tg', 'th', 'tr', 'uk', 'und', 'ur', 'uz', 'vi', 'xh', 'yi', 'yo', 'zh', 'zh-Latn', 'zu']
```
### Expected behavior
Identify the root cause and fix it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28490/timeline | completed | null | null |
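A hedged sketch of the workaround adopted in the discussion above: calibrate GPTQ quantization on `wikitext2` (or `c4-new`) instead of the broken `allenai--c4` builder config. The model id is a small placeholder chosen only to keep the example light; the notebook uses a larger model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantization_config = GPTQConfig(bits=4, dataset="wikitext2", tokenizer=tokenizer)
quant_model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quantization_config, device_map="auto"
)
```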
https://api.github.com/repos/huggingface/transformers/issues/28489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28489/comments | https://api.github.com/repos/huggingface/transformers/issues/28489/events | https://github.com/huggingface/transformers/pull/28489 | 2,080,052,597 | PR_kwDOCUB6oc5j_WtM | 28,489 | Fixed minor typos | {
"login": "rishit5",
"id": 24509842,
"node_id": "MDQ6VXNlcjI0NTA5ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24509842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishit5",
"html_url": "https://github.com/rishit5",
"followers_url": "https://api.github.com/users/rishit5/followers",
"following_url": "https://api.github.com/users/rishit5/following{/other_user}",
"gists_url": "https://api.github.com/users/rishit5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishit5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishit5/subscriptions",
"organizations_url": "https://api.github.com/users/rishit5/orgs",
"repos_url": "https://api.github.com/users/rishit5/repos",
"events_url": "https://api.github.com/users/rishit5/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishit5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixed typos in readme files.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Typos fixed in:
1. .circleci/TROUBLESHOOT.md
2. .github/workflows/TROUBLESHOOT.md
3. docs/README.md
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28489/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28489",
"html_url": "https://github.com/huggingface/transformers/pull/28489",
"diff_url": "https://github.com/huggingface/transformers/pull/28489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28489.patch",
"merged_at": 1705337115000
} |
https://api.github.com/repos/huggingface/transformers/issues/28488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28488/comments | https://api.github.com/repos/huggingface/transformers/issues/28488/events | https://github.com/huggingface/transformers/issues/28488 | 2,079,974,478 | I_kwDOCUB6oc57-eRO | 28,488 | fine tuning the updated Phi-2 with flash-attn-2 produces very high loss > 2 | {
"login": "abacaj",
"id": 7272343,
"node_id": "MDQ6VXNlcjcyNzIzNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abacaj",
"html_url": "https://github.com/abacaj",
"followers_url": "https://api.github.com/users/abacaj/followers",
"following_url": "https://api.github.com/users/abacaj/following{/other_user}",
"gists_url": "https://api.github.com/users/abacaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abacaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abacaj/subscriptions",
"organizations_url": "https://api.github.com/users/abacaj/orgs",
"repos_url": "https://api.github.com/users/abacaj/repos",
"events_url": "https://api.github.com/users/abacaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/abacaj/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I experienced the same thing! Over 3 epochs same set up just updated code and flash attention, the loss went from 6 to 2. And on the old code without flash attention it was .60 to ~.29 . Very strange.",
"cc @younesbelkada @ArthurZucker ",
"Hi @abacaj, as per @pacman100 guidelines in https://github.com/huggingface/transformers/pull/28142 / https://github.com/huggingface/transformers/pull/28142#issuecomment-1869513914 you need to make sure to load your model in full-precision and train with autocast (bf16=True). Also can you share more insights on how you train your model? (do you load the model in bf16/fp16, do you use PEFT, packing, etc.) ?",
"Hi @younesbelkada, this is a full fine tune using HF trainer. Padding only. Model is loaded in bf16. I try loading in \"fp32\" but get error:\r\n\r\n```\r\nValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.\r\n```\r\n\r\n```python\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_args.model_name_or_path,\r\n trust_remote_code=True,\r\n config=config,\r\n attn_implementation=\"flash_attention_2\",\r\n torch_dtype=torch.float32,\r\n cache_dir=training_args.cache_dir,\r\n )\r\n```\r\n",
"Ok thanks @abacaj for getting back ! I think you get that error because the patch #28142 has not been released on pypi - can you try to build transformers from source? \r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```\r\nThat should hopefully solve it, let me know if you face into more issues!",
"Ok so I remove the explicit `torch_dtype` following the comments in your link. The loss is still very high with flash-attn-2 using [phi-2](https://huggingface.co/microsoft/phi-2) model",
"@abacaj which padding side are you using for training?",
"I use `padding_side=\"left\"`. Here is how the loss goes with and without FA2 (green line has FA2) using phi-2:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/7272343/2c2ac3da-d685-44d2-a919-164e40300ee8)\r\n",
"FWIW changing padding side doesn't do anything to the loss, it's the same",
"I see, as a sanity check, can you share your `TrainingArguments` ?",
"```\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.95,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=True,\r\nbf16_full_eval=False,\r\ncache_dir=None,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_persistent_workers=False,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_broadcast_buffers=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=src/configs/deepspeed_2_config.json,\r\ndisable_tqdm=False,\r\ndispatch_batches=None,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=0.0,\r\nevaluation_strategy=no,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=2,\r\ngradient_checkpointing=True,\r\ngradient_checkpointing_kwargs=None,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_always_push=False,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\ninclude_num_input_tokens_seen=False,\r\ninclude_tokens_per_second=False,\r\ninference_length=2048,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=1,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=checkpoints/results/2k-2k-dynamic-5e-5/runs/Jan15_13-42-15_sgpu,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=1.0,\r\nlogging_strategy=steps,\r\nlr_scheduler_kwargs={},\r\nlr_scheduler_type=cosine,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmodel_max_position_embeddings=2048,\r\nmp_parameters=,\r\nneftune_noise_alpha=None,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noptim=adamw_torch,\r\noptim_args=None,\r\noutput_dir=checkpoints/results/2k-2k-dynamic-5e-5,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=4,\r\nper_device_train_batch_size=4,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrope_scaling_factor=1.0,\r\nrope_scaling_type=dynamic,\r\nrun_name=checkpoints/results/2k-2k-dynamic-5e-5,\r\nsave_on_each_node=False,\r\nsave_only_model=False,\r\nsave_safetensors=True,\r\nsave_steps=100.0,\r\nsave_strategy=epoch,\r\nsave_total_limit=None,\r\nseed=70,\r\nskip_memory_metrics=True,\r\nsplit_batches=False,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_cpu=False,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.02,\r\nwarmup_steps=0,\r\nweight_decay=0.1\r\n```",
"During my testing, I used bf16, trust remote code, no gradient ckpt, for SFT, with flshattn. The resulting model was terrible, I knew off the of (6 to 2)loss it wasn’t going to preform, but during testing it was worse than expected, very mangled answers. However when I trained the model, same arguments; just using the old phi repo code, and no flshattnt I got a great model. The loss from .6 to .29. Both were full fine tunes. Flash attention is critical for 24gb cards as without it it’s training off shared memory. I can help out more with testing when it’s done training in ~30 hours off shared mem 😭. The script I used is on #28381 . (Keep in mind the script doesn’t reflect me using bf16, however both times I trained the model I did have compute dtype set to bf16.)",
"Hello everyone!\r\n\r\nCould you all please test using the latest revision on `microsoft/phi-2` and report the results? We might have found the issue.\r\n\r\nRegards,\r\nGustavo.",
"FWIW - the model still comes out significantly worse using FA2. If anyone wants to fine-tune this model, I recommend you use it without FA2 currently. Running code benchmarks with FA2 < 50% on heval. Without FA2 (and all other hparams are identical, including seed) > 60% heval.",
"The first graph is a comparison between using and not using flash attention 2. It seems that the loss doesn't change much with fa2 (yellowish curve).\r\n<img width=\"1037\" alt=\"截屏2024-01-18 16 43 57\" src=\"https://github.com/huggingface/transformers/assets/86560128/b2722683-6b1f-4cf0-88a0-f6928ceb3efd\">\r\n",
"@abacaj could you please provide a minimal snippet to reproduce your fine-tuning?\r\n\r\nWe want to investigate it further more and attempt to find the root of the problem. We are doing a line-by-line comparison between the new model's code and the previous one.",
"> FWIW - the model still comes out significantly worse using FA2. If anyone wants to fine-tune this model, I recommend you use it without FA2 currently. Running code benchmarks with FA2 < 50% on heval. Without FA2 (and all other hparams are identical, including seed) > 60% heval.\r\n\r\nI second this, just woke up and checked training after 3 epochs with FA2 I went from .54 to ~.40, meanwhile, with no FA2 I went from .60 to .30. Both full fine tunes. I’m gonna train the fa2 check point on an additional epoch to see if it gives me the same loss as with out FA2. Or to see if it over fits.\r\n\r\n\r\n\r\nEDIT:\r\nThe loss is off to a terrible start. Went as low as .39 then up to as high as .49.( It’s only at .07 of an epoch. But i’m training a check point that has trained on this exact dataset already for 3 epochs.) Significantly better than before with the soft max scaling issues, but there is still something up.\r\n\r\n![IMG_2363](https://github.com/huggingface/transformers/assets/122953474/3a16cc43-2886-4f9a-99ff-f7d339efd68c)\r\n\r\nThe loss is acting quite random, in comparison to no FA2. Where the loss consistently went down.\r\n\r\nSECOND UPDATE: I restarted the training with the same checkpoint and upped the learning rate by a order of 1, so from 2e5 to 2e6 and now the loss is more consistent, confusing why this hyper parameter differs in training when using fa2 and not using fa2.\r\n![IMG_2364](https://github.com/huggingface/transformers/assets/122953474/509c54c2-e054-452d-9484-c348f3137a69)\r\n\r\nNot perfect but better.\r\n\r\nTHIRD UPDATE: I tried retraining the base model with fa2 and the loss isnt going anywhere. After 1.5 epochs. Its almost as if the weights aren’t being updated at all, and if so very marginally. Just consistently staying between .5 and .4 but random at every logging step.",
"sorry to comment on this closed issue but I still have issues with FA2\r\n\r\n1. loss is different with FA2 compared to without\r\n2. loss is different even between two FA2 runs (used `set_seed`. doesn't happen without FA2 - loss always exactly the same)\r\n\r\n![W B Chart 26_01_2024, 22_43_04](https://github.com/huggingface/transformers/assets/141400217/7e390c2b-27de-4e6f-9ea6-f40a7af537af)\r\n ",
"I do another two runs using FA2 and without FA2 (blue line). Testing the models out using vLLM, the model without FA2 (blue line) scores 58% on humaneval, the model with FA2 scores 49%. I basically stopped using FA2 because the model comes out so much worse and I can't pinpoint why (the runs are identical with exception of use_flash_2)\r\n\r\n![image](https://github.com/huggingface/transformers/assets/7272343/b1b3702f-4e07-4086-9342-a323adde06d7)\r\n",
"Hi @ArthurZucker,\r\nCould you reconsider opening this issue again? I think it’s worth opening, as training with flash attention on phi-2 is still not viable. The performance gains are almost essential though. I appreciate it thank you!",
"Just wanted to acknowledge I have the same issue with using Fast Attention 2 with phi-2, the training loss hardly decreases with FA2 turned on, and works pretty well with it turned off.",
"same question...",
"> We want to investigate it further more and attempt to find the root of the problem. We are doing a line-by-line comparison between the new model's code and the previous one.\r\n\r\n@gugarosa is there any update on fixing FA2 for this amazing model? \r\n",
"There is a PR that should fix it but is hard to merge #28673",
"I am also seeing similar issue where loss is trending downwards but quite unstable and it seems to learn very slowly. I am running full fine-tuning of latest Phi2 model on my dataset.\r\n<img width=\"497\" alt=\"Screenshot 2024-02-12 at 9 22 57 PM\" src=\"https://github.com/huggingface/transformers/assets/5215386/26dec373-a4a3-4180-b86b-3c9a841ef042\">\r\n\r\n@ArthurZucker I just started another run after reinstalling transformers with changes from [#28673](https://github.com/huggingface/transformers/pull/28673) to see if it fixes the issue (still using FA2). will post loss curve in next few hours. \r\n\r\n**[Incorrect] Update-1**: loss curve after reinstalling transformers with changes from [#28673]. Looks like there is no change..\r\n![W B Chart 2_13_2024, 8_30_06 AM](https://github.com/huggingface/transformers/assets/5215386/94632c89-8e0f-47df-9153-ebe977d4496b)\r\n\r\n**Update-2**: Looks like my new transformer installation didn't include changes from [#28673](https://github.com/huggingface/transformers/pull/28673) so essentially both plot should be same. I tried reinstalling transformers again with PR changes and now training is failing:\r\n\r\n```\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 16, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 16, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/minimalist/work/projects/transformers/src/transformers/models/phi/modeling_phi.py\", line 318, in forward\r\n query_states = self.q_proj(hidden_states)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/linear.py\", line 114, in forward\r\n return F.linear(input, self.weight, self.bias)\r\nRuntimeError: mat1 and mat2 must have the same dtype, but got Float and BFloat16\r\n```",
"Update, it was a torch error, its training now, but the loss is the same as before, I lowered my dataset to 1k examples over 3 epochs with a lr of 2e6 and still the loss is random. Never consistently going down.",
"How are you guys testing this? It does seem to matter when doing a full fine tune, and a lora fine tune. Using FA2 I could never get the loss to even be consistent with a full fine tune ( with SFT). Right now I am doing a DPO of phi2 with QLORA, and the loss is not only consistent, it’s consistently going down; from .69 to .27 at just a single epoch.\r\n\r\nI have not tried SFT with a lora, but maybe if we wanna use FA2 its just better to stick with using lora.",
"hi there, now that SDPA has been merged #29108 you can use FA-2 through pytorch interface:\r\n\r\n0- Install pytorch 2.2\r\n1- make sure to load SDPA attention by passing `attn_implementation=\"sdpa\"` in `from_pretrained`\r\n2- Force-dispatch the SDPA kernel to use FA2 as follows:\r\n```diff\r\n- trainer.train()\r\n+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):\r\n+ trainer.train()\r\n```\r\nPerhaps this will fix all instability issues with respect to FA2 !",
"@younesbelkada Hey! Thanks for your response!(before starting my training run) I got pytorch2.2, and I pulled the latest commits from transformers and installed from source. I’m using the DPO.py, from TRL, and I saw the commit, so I tried to pass “—attn_implementation SPDA” but it gave me a SPDA not currently supported error, I wish I still had the error up, I’ll try it out again, once my training run ends in a little less than an hour. However I had only tried and pass it as a flag, not how you are just now telling me.",
"Hi @NickWithBotronics ! \r\nYou need to use transformers from source: `pip install -U git+https://github.com/huggingface/transformers`"
] | 1,705 | 1,708 | null | NONE | null | ### System Info
The updated code of phi-2 produces a high loss. I have tried fp16, bf16, DeepSpeed and FSDP and the result is the same: the loss starts at 2 and keeps going higher. Setting `use_flash_attention_2=False`, or using the old phi-2 modeling file, fixes this.
torch==2.1.2
flash-attn==2.4.2
transformers==4.37.0.dev0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Fine-tune the updated phi-2 model using transformers trainer
### Expected behavior
Loss should go down. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28488/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28488/timeline | reopened | null | null |
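For completeness, a sketch of the SDPA-based workaround suggested near the end of the thread above (PyTorch >= 2.2 and a source install of transformers are assumed); whether this fully resolves the fine-tuning regression is not confirmed here.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # let PyTorch's scaled_dot_product_attention handle attention
)

# Around the training call, force-dispatch the flash kernel
# (assuming `trainer` is an already-configured transformers Trainer):
# with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
#     trainer.train()
```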
https://api.github.com/repos/huggingface/transformers/issues/28487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28487/comments | https://api.github.com/repos/huggingface/transformers/issues/28487/events | https://github.com/huggingface/transformers/issues/28487 | 2,079,721,297 | I_kwDOCUB6oc579gdR | 28,487 | add support for custom pipeline | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"also it would be cool to use something like this : \r\n```python\r\n# Use a pipeline as a high-level helper\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"custom\", model=\"model_id\", trust_remote_code=True)\r\n```",
"Hi @not-lain, thanks for raising this! \r\n\r\nYou might be interested in [using tools](https://huggingface.co/docs/transformers/transformers_agents#tools), which you can [define and use from the hub](https://huggingface.co/docs/transformers/custom_tools). \r\n\r\nWith regards to things not working, could you open a new issue following the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) including details about the running environment and the errors encountered? ",
"@amyeroberts although it is interesting to find about tools but what i'm trying to adress that \r\n* pipeline doesn't work for custom model architectures on the hub [ feature requested] \r\n* the documentation says you can only work with a new pipeline if you add it to the transformers github repo (which will bloat the library on the long run)",
"@amyeroberts, i found the mistake \r\ni needed to call the model as : \r\n```python\r\nfrom transformers import pipeline\r\n\r\nclassifier = pipeline(model=\"sgugger/test-dynamic-pipeline\", trust_remote_code=True)\r\n```\r\nnot as:\r\n```python\r\n# Use a pipeline as a high-level helper\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"text-classification\", model=\"sgugger/test-dynamic-pipeline\")\r\n```\r\ni will close this issue "
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | ### Feature request
Is it possible to add support for custom pipelines?
Something like this:
```
| - config.json
| - custom_config.py
| - custom_architecture.py
| - custom_pipeline.py
```
### Motivation
I was following this documentation https://huggingface.co/docs/transformers/add_new_pipeline and I can't make it work.
It also points to the custom pipeline `sgugger/finetuned-bert-mrpc`; I checked all of its previous commits and nothing seems to work.
It would also be useful because adding support for every model directly to the transformers library will bloat the library in the long run, so I hope this can be taken into consideration.
### Your contribution
I will help out if possible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28487/timeline | completed | null | null |
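As background for the feature request above, a hedged sketch of the pattern from the `add_new_pipeline` guide the reporter was following: subclass `Pipeline`, implement its four hooks, and register it so it can later be pushed to the Hub and loaded with `trust_remote_code=True`. The task name and logic are purely illustrative.

```python
from transformers import AutoModelForSequenceClassification, Pipeline
from transformers.pipelines import PIPELINE_REGISTRY


class MyCustomPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        return self.tokenizer(inputs, return_tensors="pt")

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        return model_outputs.logits.softmax(-1).argmax(-1).item()


PIPELINE_REGISTRY.register_pipeline(
    "my-custom-task",                      # illustrative task name
    pipeline_class=MyCustomPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```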
https://api.github.com/repos/huggingface/transformers/issues/28486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28486/comments | https://api.github.com/repos/huggingface/transformers/issues/28486/events | https://github.com/huggingface/transformers/pull/28486 | 2,079,348,302 | PR_kwDOCUB6oc5j884U | 28,486 | [ASR Pipe] Update init to set model type and subsequently call parent init method | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The problem with the current solution is that it's a patch over a larger issue: the ASR pipeline is a child of `Pipeline` but doesn't share the `__init__`. So, if there's any new logic added to `Pipeline` which relies on attributes being set or new kwargs this will still fail. \r\n\r\nIt also doesn't make sense for this class to accept an image processor. \r\n\r\nThe simplest solution (but doesn't address the divergence here) would be making certain attributes - `tokenizer`, `feature_extractor`, `image_processor` - to class attributes with default `None`. This way, any new pipeline which is similar to ASR will already have these required objects without having to handle / set them. \r\n\r\nA better solution is to update the `__init__` so that `super().__init__(*args, **kwargs)` is called at some point either modifying the kwargs before, setting self.type where necessary or adding any additional logic after. ",
"Thanks for your comments. I agree that calling the parent `__init__` method is a more elegant solution. Resolved in https://github.com/huggingface/transformers/pull/28486/commits/e942642024aff014d5e7cd739ce7e5eb85abbfc7",
"@sanchit-gandhi Before merging - could you update the title to reflect the change in fix applied? ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28486). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28162 by overriding the init method of the ASR pipeline class. We first set the model type, then call the parent init method.
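For illustration, a minimal, self-contained sketch of this pattern (the class names and `type` values below are made up, not the real pipeline code): the subclass-specific attribute is set first, then the parent `__init__` runs so any shared setup, validation, or new base-class kwargs keep working.
```python
# Illustrative sketch only; the real classes live in transformers.pipelines.
class BasePipelineSketch:
    def __init__(self, model=None, tokenizer=None, feature_extractor=None, **kwargs):
        self.model = model
        self.tokenizer = tokenizer
        self.feature_extractor = feature_extractor


class AsrPipelineSketch(BasePipelineSketch):
    def __init__(self, model=None, **kwargs):
        # set the ASR-specific "type" before delegating to the parent init
        config = getattr(model, "config", None)
        self.type = "seq2seq" if getattr(config, "is_encoder_decoder", False) else "ctc"
        super().__init__(model=model, **kwargs)
```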
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28486/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28486",
"html_url": "https://github.com/huggingface/transformers/pull/28486",
"diff_url": "https://github.com/huggingface/transformers/pull/28486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28486.patch",
"merged_at": 1705594310000
} |
https://api.github.com/repos/huggingface/transformers/issues/28485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28485/comments | https://api.github.com/repos/huggingface/transformers/issues/28485/events | https://github.com/huggingface/transformers/pull/28485 | 2,079,326,176 | PR_kwDOCUB6oc5j838y | 28,485 | [Whisper Tok] Move token ids to CPU when computing offsets | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28097 by moving token ids in pytorch on GPU to the CPU before converting to numpy.
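For illustration, the kind of guard this implies (a sketch, not the exact tokenizer code):
```python
# Token ids that live on a CUDA device must be moved to the CPU first,
# since calling .numpy() on a CUDA tensor raises a TypeError.
import torch


def token_ids_to_numpy(token_ids):
    if isinstance(token_ids, torch.Tensor):
        return token_ids.detach().cpu().numpy()
    return token_ids
```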
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28485/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28485",
"html_url": "https://github.com/huggingface/transformers/pull/28485",
"diff_url": "https://github.com/huggingface/transformers/pull/28485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28485.patch",
"merged_at": 1705594335000
} |
https://api.github.com/repos/huggingface/transformers/issues/28484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28484/comments | https://api.github.com/repos/huggingface/transformers/issues/28484/events | https://github.com/huggingface/transformers/pull/28484 | 2,079,242,765 | PR_kwDOCUB6oc5j8lU4 | 28,484 | Dataloader prefetch batch | {
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @qmeeus - thanks for opening a PR! \r\n\r\nAt the moment, there's a very large diff including unrelated changes to those in the PR description. Could you make sure to resolve all conflicts and include the latest updates in `main` using `rebase` or `merge` into this branch? ",
"> ed changes to those in the PR description. Could you make sure to resolve all conflicts and include the latest updates in `m`\r\n\r\nYes, this is a work in progress, I was not aware that this was available yet for reviewing. I will sort it out, thank you :)",
"@amyeroberts I closed this one and created a new one [available here](https://github.com/huggingface/transformers/pull/28498)"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
I added an option to the trainer to prefetch batches during data loading.
When training a model with heavy transformations and an iterable dataset, the dataloader might struggle to deliver fast enough for the GPU. I've found that prefetching batches helps to solve this issue.
The option is implemented in `torch.utils.data.DataLoader` but not in HF Trainer.
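A minimal sketch of the idea (the helper below is illustrative, not the actual Trainer code); `prefetch_factor` is the existing `DataLoader` argument and only applies when `num_workers > 0`:
```python
from torch.utils.data import DataLoader


def build_train_dataloader(dataset, collate_fn, batch_size, num_workers, prefetch_factor=None):
    kwargs = dict(batch_size=batch_size, collate_fn=collate_fn, num_workers=num_workers)
    if prefetch_factor is not None and num_workers > 0:
        # number of batches loaded in advance by each worker
        kwargs["prefetch_factor"] = prefetch_factor
    return DataLoader(dataset, **kwargs)
```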
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28484/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28484",
"html_url": "https://github.com/huggingface/transformers/pull/28484",
"diff_url": "https://github.com/huggingface/transformers/pull/28484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28484.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28483/comments | https://api.github.com/repos/huggingface/transformers/issues/28483/events | https://github.com/huggingface/transformers/pull/28483 | 2,079,209,690 | PR_kwDOCUB6oc5j8d9P | 28,483 | TF: purge `TFTrainer` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
Removes `TFTrainer` and all its traces. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28483/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28483/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28483",
"html_url": "https://github.com/huggingface/transformers/pull/28483",
"diff_url": "https://github.com/huggingface/transformers/pull/28483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28483.patch",
"merged_at": 1705078594000
} |
https://api.github.com/repos/huggingface/transformers/issues/28482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28482/comments | https://api.github.com/repos/huggingface/transformers/issues/28482/events | https://github.com/huggingface/transformers/pull/28482 | 2,079,041,827 | PR_kwDOCUB6oc5j74gO | 28,482 | Don't set `finetuned_from` if it is a local path | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fix `base_model` issue in #28286 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28482/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28482",
"html_url": "https://github.com/huggingface/transformers/pull/28482",
"diff_url": "https://github.com/huggingface/transformers/pull/28482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28482.patch",
"merged_at": 1705315101000
} |
https://api.github.com/repos/huggingface/transformers/issues/28481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28481/comments | https://api.github.com/repos/huggingface/transformers/issues/28481/events | https://github.com/huggingface/transformers/pull/28481 | 2,078,977,415 | PR_kwDOCUB6oc5j7qDm | 28,481 | Fix/speecht5 bug | {
"login": "NimaYaqmuri",
"id": 62163525,
"node_id": "MDQ6VXNlcjYyMTYzNTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/62163525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NimaYaqmuri",
"html_url": "https://github.com/NimaYaqmuri",
"followers_url": "https://api.github.com/users/NimaYaqmuri/followers",
"following_url": "https://api.github.com/users/NimaYaqmuri/following{/other_user}",
"gists_url": "https://api.github.com/users/NimaYaqmuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NimaYaqmuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NimaYaqmuri/subscriptions",
"organizations_url": "https://api.github.com/users/NimaYaqmuri/orgs",
"repos_url": "https://api.github.com/users/NimaYaqmuri/repos",
"events_url": "https://api.github.com/users/NimaYaqmuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/NimaYaqmuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To observe the issue addressed by this PR, visit the [Hugging Face SpeechT5 Fine-Tuning Tutorial](https://huggingface.co/learn/audio-course/chapter6/fine-tuning). During training, you encounter a concatenation error due to incorrect dimensions.",
"Thanks for your advice! I'm going to tweak the `generate_speech` for these two cases.",
"Hello @ylacombe , @sanchit-gandhi and @ArthurZucker\r\n\r\nI've implemented your suggestions and made updates to the SpeechT5 Text to Speech class. My focus was on improving how we handle speaker embeddings in two key scenarios:\r\n\r\n1. **Matching Batch Size**: When there's one embedding per sample.\r\n2. **One-to-Many**: When a single embedding is used for all samples in the batch.\r\n\r\nKey updates include:\r\n--------------------\r\n\r\n* **Embedding Replication**: Introduced logic to replicate a single speaker embedding across multiple samples when necessary.\r\n* **Error Handling**: Implemented a ValueError to alert for dimension mismatches in speaker embeddings.\r\n* **Testing**: Added comprehensive test cases to ensure robust functionality across both scenarios.\r\n\r\nAdditionally, I've adapted the handling of speaker embedding dimensions outside the main model classes, aligning with the approach used in the original [SpeechT5 implementation by Microsoft](https://github.com/microsoft/SpeechT5). This decision avoids altering the SpeechT5 speech decoder's pre-net forward method, maintaining consistency with the existing model structure.\r\n\r\nPlease let me know if there are other scenarios or considerations I should account for. Your feedback is greatly appreciated.\r\n\r\nThank you for your guidance,\r\n\r\nNima Yaqmuri",
"It seems there exists a related issue, [Issue #28189](https://github.com/huggingface/transformers/issues/28189), which my PR's speaker embedding updates could potentially address.",
"Alongside the issue highlighted by @ylacombe in the Hugging Face thread \"[Audio Course Unit 6: Unable to Train SpeechT5](https://discuss.huggingface.co/t/audio-course-unit-6-unable-to-train-speech-t5/68888)\", my PR also aims to resolve a similar problem outlined in \"[SpeechT5 Text-to-Speech Fine-Tuning Runtime Error](https://discuss.huggingface.co/t/speecht5-text-to-speech-fine-tuning-runtime-error/67472)\", which seems to be related to the solution.",
"Hi @ylacombe and @amyeroberts,\r\n\r\nThanks a lot for your valuable feedback and approvals!\r\n\r\nI've implemented the suggested changes, introducing randomized speaker embeddings with a fixed seed in the `speecht5` tests. \r\n\r\nAdditionally, I've run all the relevant slow tests for `speecht5` on my end, and everything works as expected. It seems we're ready for the merge.\r\n\r\nI appreciate your support and guidance throughout this process!\r\n\r\nBest.",
"Thanks again @NimaYaqmuri for this great contribution! "
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes a Critical Issue in SpeechT5 Speech Decoder Prenet and Enhances Test Suite
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ylacombe
@sanchit-gandhi
@Spycsh
Key Changes:
------------
* **Critical Bug Fix in Speech Decoder Prenet**: I discovered that the `repeat` operation in the speech decoder prenet's forward method was mistakenly duplicating the `speaker_embeddings` tensor. This erroneous behavior, likely an oversight in previous contributions, resulted in incorrect tensor dimensions for concatenation, leading to raised errors and halting the training process
* **Refined Testing Approach**: Alongside this fix, I have updated the SpeechT5ForTextToSpeechIntegrationTests. These updates include:
* **Adaptability to Variability in Sequence Lengths**: Modifications to handle variability due to dropout in the speech decoder pre-net, ensuring test reliability against random variations.
* **Dynamic Dimension Checks**: Replacement of hardcoded dimensions with dynamic checks based on the model's configuration and seed settings, ensuring test validity across various scenarios.
* **New and Improved Test Cases**: Introduction of new test cases for validation of spectrogram and waveform shapes, addressing potential issues in speech generation and vocoder processing.
* **Correction of Misassumptions in Tests**: Adjustment of existing test cases where previous assumptions about output shapes led to inaccuracies. This includes considering varying batch sizes in tests, which were not adequately addressed before, possibly due to an oversight in considering the speaker embeddings' shape (initially 1x512) in batch scenarios. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28481/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28481",
"html_url": "https://github.com/huggingface/transformers/pull/28481",
"diff_url": "https://github.com/huggingface/transformers/pull/28481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28481.patch",
"merged_at": 1705414469000
} |
https://api.github.com/repos/huggingface/transformers/issues/28480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28480/comments | https://api.github.com/repos/huggingface/transformers/issues/28480/events | https://github.com/huggingface/transformers/pull/28480 | 2,078,963,133 | PR_kwDOCUB6oc5j7m67 | 28,480 | chore: Just fix some typo | {
"login": "hugo-syn",
"id": 61210734,
"node_id": "MDQ6VXNlcjYxMjEwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/61210734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugo-syn",
"html_url": "https://github.com/hugo-syn",
"followers_url": "https://api.github.com/users/hugo-syn/followers",
"following_url": "https://api.github.com/users/hugo-syn/following{/other_user}",
"gists_url": "https://api.github.com/users/hugo-syn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugo-syn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugo-syn/subscriptions",
"organizations_url": "https://api.github.com/users/hugo-syn/orgs",
"repos_url": "https://api.github.com/users/hugo-syn/repos",
"events_url": "https://api.github.com/users/hugo-syn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugo-syn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @hugo-syn thanks for opening a PR! There doesn't seem to be any changes in this PR? ",
"Indeed, I don't understand. I've synchronized the main repo and my changes don't appear here, but I can see them on my fork. ",
"The typo fixes in b30d46622e7b3f839fb93cde269dfc797319979b have already been applied to the repo, in a previous PR last week #28361 by you.",
"Closing then sorry for my mistake :)"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Just fix some typos.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28480/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28480",
"html_url": "https://github.com/huggingface/transformers/pull/28480",
"diff_url": "https://github.com/huggingface/transformers/pull/28480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28480.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28479/comments | https://api.github.com/repos/huggingface/transformers/issues/28479/events | https://github.com/huggingface/transformers/pull/28479 | 2,078,880,947 | PR_kwDOCUB6oc5j7UzI | 28,479 | Improved type hinting for all attention parameters | {
"login": "nakranivaibhav",
"id": 67785830,
"node_id": "MDQ6VXNlcjY3Nzg1ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakranivaibhav",
"html_url": "https://github.com/nakranivaibhav",
"followers_url": "https://api.github.com/users/nakranivaibhav/followers",
"following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}",
"gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions",
"organizations_url": "https://api.github.com/users/nakranivaibhav/orgs",
"repos_url": "https://api.github.com/users/nakranivaibhav/repos",
"events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakranivaibhav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had no space between the \",\" and \"...\" That failed the formatting. \r\nYeah @Rocketknight1 makes sense doing that to hidden states too, since it is the same as attentions, depends on num of layers. I can do that too",
"Looks good to me, pinging @ArthurZucker for core maintainer review!",
"@amyeroberts I will look into it. I can do that too. Happy to help :)",
"@nakranivaibhav Amazing 🤗 Let me know when it's ready for another review! ",
"@amyeroberts Alright 🚀",
"@amyeroberts can you approve recent changes. Chnaged 15 files. \r\nI also noticed there is no equivalent of CLIPTextModelOutput in the modeling_tf_clip.py script. Is it intended or that part is missed?",
"@nakranivaibhav Awesome work! For CLIPTextModelOutput I believe that's just an oversight. Feel free to add as part of this PR! \r\n\r\nThe current diff is quite large and contains changes recently merged into `main`. From the commit history it looks like you've rebase and pushed without forcing. To make sure the branch contains just your commits and is up-to-date with the development branch - working on this branch: \r\n* `git fetch upstream main`\r\n* `git rebase upstream/main`\r\n* `git push -f`",
"@amyeroberts Yeah I did mess up something. Sorry, I am new to git. I will push the changes again.\r\nI'll also add CLIPTextModelOutput in modeling_tf_clip.py\r\n",
"@nakranivaibhav No worries - we've all done it at least once! Let me know when those changes are pushed and I should review",
"@amyeroberts I have pushed a fresh commit. You can review it now.",
"@nakranivaibhav There's currently a whole directory `myenv` which is part of this PR. You'll need to remove these: `git rm -r myenv`",
"@amyeroberts I do have myvenv in the .gitignore file. For some reason, it got pushed too. I did git rm -r myvenv. Is that it or do i have to push a fresh commit?\r\n",
"`git rm` is the complement to `git add`. In the same way, it's just a step which moves the files to staging - they still need to be applied with a commit and push to the remote. You can check this yourself by looking at the [files changed](https://github.com/huggingface/transformers/pull/28479/files) tab for this PR (you can see `myenv` is still there). \r\n\r\nI would recommend reviewing the diff in the file changes for a PR to see all the changes and make any modifications if needed before asking for review. ",
"@amyeroberts I have removed the myvenv directory. Do I need to update the .gitignore file as well and make a new generic venv already listed in the .gitignore file so that it does not get pushed and the .gitignore file remains unchanged?\r\n\r\nP.S: Thank you for bearing this. I don't know my way around git and you are helping me a lot.",
"@amyeroberts Yes I have checked there is no confidential information in that commit. You can go ahead and merge.",
"@nakranivaibhav Thanks for confirming and all the work improving our types! ",
"@amyeroberts Always happy to help. On to the next one 🚀"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
The type hinting for all attention parameters has been changed to 'Optional[Tuple[torch.FloatTensor, ...]] = None' to reflect a tuple of arbitrary size.
Fixes #28345
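For example, on an illustrative output dataclass (not any specific model):
```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch


@dataclass
class ExampleModelOutput:
    last_hidden_state: Optional[torch.FloatTensor] = None
    # one tensor per layer, so the tuple length is arbitrary
    attentions: Optional[Tuple[torch.FloatTensor, ...]] = None
```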
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @Rocketknight1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28479/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28479",
"html_url": "https://github.com/huggingface/transformers/pull/28479",
"diff_url": "https://github.com/huggingface/transformers/pull/28479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28479.patch",
"merged_at": 1706114855000
} |
https://api.github.com/repos/huggingface/transformers/issues/28478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28478/comments | https://api.github.com/repos/huggingface/transformers/issues/28478/events | https://github.com/huggingface/transformers/pull/28478 | 2,078,735,350 | PR_kwDOCUB6oc5j60kp | 28,478 | Generate: deprecate old public functions | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28478). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
Schedules for deprecation old public functions -- these functions are not used anywhere in the code base, and haven't been since I've been in charge of `generate`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28478/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28478",
"html_url": "https://github.com/huggingface/transformers/pull/28478",
"diff_url": "https://github.com/huggingface/transformers/pull/28478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28478.patch",
"merged_at": 1705072875000
} |
https://api.github.com/repos/huggingface/transformers/issues/28477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28477/comments | https://api.github.com/repos/huggingface/transformers/issues/28477/events | https://github.com/huggingface/transformers/pull/28477 | 2,078,710,190 | PR_kwDOCUB6oc5j6vBn | 28,477 | Generate: refuse to save bad generation config files | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
This PR converts a warning into an exception. This warning stated that it would be converted to an exception in v4.34 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28477/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28477",
"html_url": "https://github.com/huggingface/transformers/pull/28477",
"diff_url": "https://github.com/huggingface/transformers/pull/28477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28477.patch",
"merged_at": 1705075277000
} |
https://api.github.com/repos/huggingface/transformers/issues/28476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28476/comments | https://api.github.com/repos/huggingface/transformers/issues/28476/events | https://github.com/huggingface/transformers/issues/28476 | 2,078,647,827 | I_kwDOCUB6oc575aYT | 28,476 | How to avoid the peak RAM memory usage of a model when I want to load to GPU | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/followers",
"following_url": "https://api.github.com/users/JoanFM/following{/other_user}",
"gists_url": "https://api.github.com/users/JoanFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoanFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoanFM/subscriptions",
"organizations_url": "https://api.github.com/users/JoanFM/orgs",
"repos_url": "https://api.github.com/users/JoanFM/repos",
"events_url": "https://api.github.com/users/JoanFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoanFM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! I am not sure you can prevent CPU usage (transfer from SSD to GPU), not sure anything supports it. However device_map = \"auto\" should always allow you to load the model without going over the ram usage. \r\nThe peak can come from `torch. set_default_dtype(torch.float16)` and the fact that you are not specifying a dtype. So the model might be loaded in float32, then casted then transfered. ",
"so what would you actually suggest to do? What `dtype` parameter should I pass?",
"`float16` or something like that. Or use TEI https://huggingface.co/docs/text-embeddings-inference/index ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thanks for helping, it was indeed an issue with dtype"
] | 1,705 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
I am using transformers to load a model onto the GPU, and I observed that before the model is moved to the GPU there is a peak in RAM usage that is later freed. I assume the model is loaded on the CPU before being moved to the GPU.
On the GPU the model takes around 4Gi, but to load it I need more than 7Gi of RAM, which seems weird.
Is there a way to load it directly to the GPU without spending so much RAM?
I have tried the `low_cpu_mem_usage` option and the `device_map` parameter set to `cuda` and `auto`, but no luck.
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
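For reference, a sketch of the same load with the dtype pinned explicitly, which may avoid a float32 intermediate copy (untested here):
```python
import torch
from transformers import AutoModel

m = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    device_map="auto",
    torch_dtype=torch.float16,
)
```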
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel; m = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True, low_cpu_mem_usage=True, device_map="auto")
```
### Expected behavior
Not having such a memory peak | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28476/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28475/comments | https://api.github.com/repos/huggingface/transformers/issues/28475/events | https://github.com/huggingface/transformers/pull/28475 | 2,078,570,207 | PR_kwDOCUB6oc5j6P4D | 28,475 | Docs: add model paths | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28475). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
As reported by @sayakpaul: some models had placeholder paths. This PR corrects it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28475",
"html_url": "https://github.com/huggingface/transformers/pull/28475",
"diff_url": "https://github.com/huggingface/transformers/pull/28475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28475.patch",
"merged_at": 1705073144000
} |
https://api.github.com/repos/huggingface/transformers/issues/28474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28474/comments | https://api.github.com/repos/huggingface/transformers/issues/28474/events | https://github.com/huggingface/transformers/pull/28474 | 2,078,491,706 | PR_kwDOCUB6oc5j5-0S | 28,474 | filter out callable attributes from tokenizer_config in save_pretrained | {
"login": "shuttie",
"id": 999061,
"node_id": "MDQ6VXNlcjk5OTA2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuttie",
"html_url": "https://github.com/shuttie",
"followers_url": "https://api.github.com/users/shuttie/followers",
"following_url": "https://api.github.com/users/shuttie/following{/other_user}",
"gists_url": "https://api.github.com/users/shuttie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuttie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuttie/subscriptions",
"organizations_url": "https://api.github.com/users/shuttie/orgs",
"repos_url": "https://api.github.com/users/shuttie/repos",
"events_url": "https://api.github.com/users/shuttie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuttie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | 1,708 | NONE | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/28472
As discussed in the upstream bug report, `add_special_tokens` can be both a kwargs parameter passed to `from_pretrained` and a method (`SpecialTokensMixin.add_special_tokens`). Not sure this is the best way of doing it, but this PR:
* ensures that no methods are passed into the tokenizer config,
* so it can be safely serialized to JSON with `json.dumps` (see the sketch below).
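A simplified sketch of the idea (not the exact library code):
```python
import json


def dump_tokenizer_config(tokenizer_config: dict) -> str:
    # drop callables (e.g. a bound method shadowing a kwarg) before serializing
    serializable = {k: v for k, v in tokenizer_config.items() if not callable(v)}
    return json.dumps(serializable, indent=2, sort_keys=True, ensure_ascii=False)
```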
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28474/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28474",
"html_url": "https://github.com/huggingface/transformers/pull/28474",
"diff_url": "https://github.com/huggingface/transformers/pull/28474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28474.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28473/comments | https://api.github.com/repos/huggingface/transformers/issues/28473/events | https://github.com/huggingface/transformers/pull/28473 | 2,078,438,756 | PR_kwDOCUB6oc5j5zXV | 28,473 | feat: support indicating prefix token of chat template | {
"login": "congchan",
"id": 18083731,
"node_id": "MDQ6VXNlcjE4MDgzNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18083731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/congchan",
"html_url": "https://github.com/congchan",
"followers_url": "https://api.github.com/users/congchan/followers",
"following_url": "https://api.github.com/users/congchan/following{/other_user}",
"gists_url": "https://api.github.com/users/congchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/congchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/congchan/subscriptions",
"organizations_url": "https://api.github.com/users/congchan/orgs",
"repos_url": "https://api.github.com/users/congchan/repos",
"events_url": "https://api.github.com/users/congchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/congchan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @Rocketknight1 ",
"Hi @congchan - firstly, apologies for taking so long to get to this one - it slipped past me the first time I was pinged! This seems like a clean PR, but I'm not sure we can accept it as-is: The list of special tokens that we have specific code for is very short, and I think this would make more sense as an added token in models that support it, since most models will not.\r\n\r\nHowever, you're not the only user who wants a clean way to separate user and assistant messages in the tokens from `apply_chat_template`. Another user has suggested getting the method to return an optional mask array (similar to `attention_mask`), which you could use to mask assistant/user messages: #28950",
"> Hi @congchan - firstly, apologies for taking so long to get to this one - it slipped past me the first time I was pinged! This seems like a clean PR, but I'm not sure we can accept it as-is: The list of special tokens that we have specific code for is very short, and I think this would make more sense as an added token in models that support it, since most models will not.\r\n> \r\n> However, you're not the only user who wants a clean way to separate user and assistant messages in the tokens from `apply_chat_template`. Another user has suggested getting the method to return an optional mask array (similar to `attention_mask`), which you could use to mask assistant/user messages: #28950\r\n\r\nHi, thanks for your feedback. Indeed it is better to keep the special tokens shorts.\r\n\r\nBesides, I suggest `apply_chat_template` with `tokenize=True` takes in accounts for \"weight\" or \"mask\" key in the input list of json, to provide the most flexible end-to-end tokenization, which unify both single turn and multi-turn chats tuning. \r\n\r\nThe reason is, in production environment with multi-turns dataset curated or bad case hot fixing, we can modify some specific turns to become high-quality without changing the rest of the other turns.\r\n\r\nUser can choose to train their model to learn only specific turns that they believe to be high quality, and ignore others.\r\ne.g.s..:\r\n```\r\nchat = [\r\n {\"role\": \"system\", \"content\": \"You are a friendly chatbot who always responds in the style of a pirate\", \"weight\": 1.0},\r\n {\"role\": \"user\", \"content\": \"Hello, how are you?\", \"weight\": 0.0},\r\n {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\", \"weight\": 1.0},\r\n {\"role\": \"user\", \"content\": \"Cool, and who are you?\", \"weight\": 0.0},\r\n {\"role\": \"assistant\", \"content\": \"I'm ChatGPT.\", \"weight\": 0.0},\r\n ....\r\n {\"role\": \"user\", \"content\": \"Which is bigger, a virus or a bacterium?\", \"weight\": 0.0},\r\n {\"role\": \"assistant\", \"content\": \"A bacterium.\", \"weight\": 1.0}\r\n]\r\n```\r\n\r\n`tokenizer.apply_chat_template(chat, tokenize=True)` Will set labels for those turns with `\"weight\": 0.0` to `ignore_index`.\r\n\r\nI have already been using this in-out pipeline in my local training(but not yet make use of the `apply_chat_template`).\r\n\r\nWhat do you think? I can also help on it."
] | 1,705 | 1,707 | null | NONE | null | # What does this PR do?
In chat language model training, we sometimes need to mask the input from real users and train the model solely on the assistant's outputs.
This PR adds a special prefix token, which can be used in the `chat_template`, so that we can use this `prefix_token` to dynamically separate the `user` and `assistant` turns of a dialog.
For example:
```
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```
The prefix_token could be `<|im_start|>assistant\n`; we can make use of this token:
- to set the model's `chat_template`, for example `{% if add_generation_prompt %}{{ prefix_token }}`
- to separate a dialog into user and assistant turns and mask the loss on the user turns, by accessing `tokenizer.prefix_token` and `tokenizer.eos_token` (see the sketch below)
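For example, a rough sketch of how the loss could be masked with these tokens (illustrative only; `prefix_ids` is assumed to be the token ids of the tokenized assistant prefix string):
```python
import torch

IGNORE_INDEX = -100


def mask_non_assistant(input_ids: torch.Tensor, prefix_ids: list, eos_id: int) -> torch.Tensor:
    """Keep labels only inside assistant turns; everything else gets IGNORE_INDEX."""
    labels = torch.full_like(input_ids, IGNORE_INDEX)
    ids = input_ids.tolist()
    i, n, k = 0, len(ids), len(prefix_ids)
    while i < n:
        if ids[i : i + k] == prefix_ids:       # start of an assistant turn
            j = i + k
            while j < n and ids[j] != eos_id:  # copy tokens until the end-of-turn token
                labels[j] = input_ids[j]
                j += 1
            if j < n:
                labels[j] = input_ids[j]       # learn to emit the end-of-turn token too
            i = j
        i += 1
    return labels
```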
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28473/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28473",
"html_url": "https://github.com/huggingface/transformers/pull/28473",
"diff_url": "https://github.com/huggingface/transformers/pull/28473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28473.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28472/comments | https://api.github.com/repos/huggingface/transformers/issues/28472/events | https://github.com/huggingface/transformers/issues/28472 | 2,078,425,643 | I_kwDOCUB6oc574kIr | 28,472 | Tokenizer.save_pretrained fails when add_special_tokens=True|False | {
"login": "shuttie",
"id": 999061,
"node_id": "MDQ6VXNlcjk5OTA2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/999061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuttie",
"html_url": "https://github.com/shuttie",
"followers_url": "https://api.github.com/users/shuttie/followers",
"following_url": "https://api.github.com/users/shuttie/following{/other_user}",
"gists_url": "https://api.github.com/users/shuttie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuttie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuttie/subscriptions",
"organizations_url": "https://api.github.com/users/shuttie/orgs",
"repos_url": "https://api.github.com/users/shuttie/repos",
"events_url": "https://api.github.com/users/shuttie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuttie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is indeed related to that PR, but it is also related to the fact that `add_special_tokens` even if saved, is not used when doing an encode pass. Thus it's better to error out than save it IMO as it won't be checked when encoding. I'll have a look at the PR ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | 1,708 | NONE | null | ### System Info
transformers-4.34
python-3.11
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", add_special_tokens=True)
tok.save_pretrained("out")
```
The snippet:
* works well with `add_special_tokens=` present, absent, or set to True/False on 4.33 and below
* works well when `add_special_tokens=` is not added to the list of tokenizer parameters on 4.34+
* fails when `add_special_tokens=` is present in parameters (with both True/False values) on 4.34+ with the following error:
```
Traceback (most recent call last):
File "/home/shutty/private/code/savepbug/test.py", line 4, in <module>
tok.save_pretrained("tokenz")
File "/home/shutty/private/code/savepbug/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2435, in save_pretrained
out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + "\n"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 202, in encode
chunks = list(chunks)
^^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.11/json/encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "/usr/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type method is not JSON serializable
```
The issue happens with any tokenizer, not only the Llama one. I can confirm it fails the same way on `bert-base-uncased`.
If you go to `tokenization_utils_base` and dump the `tokenizer_config` just before the `json.dumps` call, you can see that `add_special_tokens` has surprisingly become a method, not a bool:
```
{'clean_up_tokenization_spaces': False, 'unk_token': '<unk>', 'bos_token': '<s>', 'eos_token': '</s>', 'add_bos_token': True,
'add_eos_token': False, 'use_default_system_prompt': False, 'additional_special_tokens': [], 'legacy': True,
'model_max_length': 1000000000000000019884624838656, 'pad_token': None, 'sp_model_kwargs': {},
'spaces_between_special_tokens': False,
'add_special_tokens': <bound method SpecialTokensMixin.add_special_tokens of LlamaTokenizerFast(name_or_path='mistralai/Mistral-7B-v0.1', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True,
padding_side='left', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}>, 'added_tokens_decoder': {0: {'content': '<unk>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}, 1: {'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}, 2: {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False,
'normalized': False, 'special': True}}, 'tokenizer_class': 'LlamaTokenizer'}
```
My feeling is that the issue is related to the https://github.com/huggingface/transformers/pull/23909 PR, which refactored a lot of tokenizer internals, so in the current version:
* `add_special_tokens` is part of the kwargs passed to the tokenizer
* there is also a method `SpecialTokensMixin.add_special_tokens` with the same name
* when everything is joined together before `json.dumps`, the bound method gets serialized instead of the kwargs value (a minimal illustration of this collision follows below).
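For what it's worth, here is a stripped-down, stand-alone illustration of that failure mechanism — a toy example with made-up names, not the actual `transformers` code:
```python
import json


class ToyTokenizer:
    """Toy stand-in: like SpecialTokensMixin, it defines an `add_special_tokens` method."""

    def add_special_tokens(self, tokens):
        return tokens


tokenizer = ToyTokenizer()
tokenizer_config = {"model_max_length": 512}

# If the attribute lookup wins over the boolean kwarg when building the config dict,
# the bound method ends up in the dict and json.dumps cannot serialize it.
tokenizer_config["add_special_tokens"] = getattr(tokenizer, "add_special_tokens")

try:
    json.dumps(tokenizer_config)
except TypeError as err:
    print(err)  # Object of type method is not JSON serializable
```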
### Expected behavior
Not crashing with `TypeError: Object of type method is not JSON serializable`, as was the case before https://github.com/huggingface/transformers/pull/23909 in 4.33. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28472/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28472/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28471/comments | https://api.github.com/repos/huggingface/transformers/issues/28471/events | https://github.com/huggingface/transformers/pull/28471 | 2,078,363,551 | PR_kwDOCUB6oc5j5jQl | 28,471 | Fix torch.ones usage in xlnet | {
"login": "sungho-ham",
"id": 19978686,
"node_id": "MDQ6VXNlcjE5OTc4Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/19978686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sungho-ham",
"html_url": "https://github.com/sungho-ham",
"followers_url": "https://api.github.com/users/sungho-ham/followers",
"following_url": "https://api.github.com/users/sungho-ham/following{/other_user}",
"gists_url": "https://api.github.com/users/sungho-ham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sungho-ham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sungho-ham/subscriptions",
"organizations_url": "https://api.github.com/users/sungho-ham/orgs",
"repos_url": "https://api.github.com/users/sungho-ham/repos",
"events_url": "https://api.github.com/users/sungho-ham/events{/privacy}",
"received_events_url": "https://api.github.com/users/sungho-ham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When creating a causal attention mask in XLNet, the device argument of `torch.ones` is passed positionally and can be interpreted as one of the size dimensions. Because of this, the code throws an error in torch 1.13.1. I have modified the call to pass the `device` parameter by name.
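As a minimal illustration of the failure mode (not the exact XLNet code):
```python
import torch

device = torch.device("cpu")

# Passing the device positionally puts it into the variadic `*size` arguments of
# torch.ones. With a torch.device object this raises a TypeError (as observed on
# torch 1.13.1); a plain int device index could even be silently taken as an extra
# size dimension.
try:
    torch.ones(4, 4, device)
except TypeError as err:
    print(f"positional device fails: {err}")

# Passing it by name is unambiguous on every torch version.
mask = torch.ones(4, 4, device=device)
print(mask.shape, mask.device)
```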
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28471/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28471",
"html_url": "https://github.com/huggingface/transformers/pull/28471",
"diff_url": "https://github.com/huggingface/transformers/pull/28471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28471.patch",
"merged_at": 1705069861000
} |
https://api.github.com/repos/huggingface/transformers/issues/28470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28470/comments | https://api.github.com/repos/huggingface/transformers/issues/28470/events | https://github.com/huggingface/transformers/issues/28470 | 2,078,272,669 | I_kwDOCUB6oc573-yd | 28,470 | Running a `forward` pass before `generate` with AWQ fused modules breaks it | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @younesbelkada and @fxmarty, if they use static cache then that is expected. I might fix it in #27931 ",
"cc @younesbelkada 🤗 "
] | 1,705 | 1,707 | null | MEMBER | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MEGATRON_LM
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- megatron_lm_config: {'megatron_lm_gradient_clipping': 1.0, 'megatron_lm_pp_degree': 1, 'megatron_lm_recompute_activations': True, 'megatron_lm_sequence_parallelism': False, 'megatron_lm_tp_degree': 2, 'megatron_lm_use_distributed_optimizer': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AwqConfig, AutoTokenizer
awq_config = AwqConfig(do_fuse=True, fuse_max_seq_len=512)
model = AutoModelForCausalLM.from_pretrained(
"casperhansen/tinyllama-1b-awq",
quantization_config=awq_config,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("casperhansen/tinyllama-1b-awq")
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids.to("cuda")
model.forward(input_ids)
model.generate(input_ids, max_new_tokens=100)
```
### Expected behavior
The code works if only `generate` is called, but not if a `forward` pass precedes it.
Looking at the traceback:
```
Traceback (most recent call last):
File "/workspace/llm-perf/test_.py", line 29, in <module>
model.generate(input_ids, max_new_tokens=100)
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 1718, in generate
return self.greedy_search(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2579, in greedy_search
outputs = self(
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward
outputs = self.model(
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1033, in forward
attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 372, in _prepare_4d_causal_attention_mask_for_sdpa
expanded_4d_mask = attn_mask_converter.to_4d(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 136, in to_4d
expanded_attn_mask = causal_4d_mask.masked_fill(expanded_attn_mask.bool(), torch.finfo(dtype).min)
RuntimeError: The size of tensor a (9) must match the size of tensor b (25) at non-singleton dimension 3
```
The problem seems to be related to the SDPA integration. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28470/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28470/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28469/comments | https://api.github.com/repos/huggingface/transformers/issues/28469/events | https://github.com/huggingface/transformers/issues/28469 | 2,078,206,863 | I_kwDOCUB6oc573uuP | 28,469 | `dataloader_persistent_workers=True` causes fork-bomb due to repeated creation of `eval_dataloader` | {
"login": "naba89",
"id": 12119806,
"node_id": "MDQ6VXNlcjEyMTE5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/12119806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naba89",
"html_url": "https://github.com/naba89",
"followers_url": "https://api.github.com/users/naba89/followers",
"following_url": "https://api.github.com/users/naba89/following{/other_user}",
"gists_url": "https://api.github.com/users/naba89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naba89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naba89/subscriptions",
"organizations_url": "https://api.github.com/users/naba89/orgs",
"repos_url": "https://api.github.com/users/naba89/repos",
"events_url": "https://api.github.com/users/naba89/events{/privacy}",
"received_events_url": "https://api.github.com/users/naba89/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"gentle ping: @muellerzr @pacman100"
] | 1,705 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: does not matter
- Using distributed or parallel set-up in script?: does not matter
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
from dataclasses import dataclass
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset
from transformers import TrainingArguments, Trainer
from transformers.modeling_outputs import BaseModelOutput
# Dummy Dataset
class DummyDataset(Dataset):
def __init__(self, size=100):
self.size = size
self.data = torch.rand(size, 10) # Random data
self.labels = torch.randint(0, 2, (size,)) # Binary labels
def __len__(self):
return self.size
def __getitem__(self, idx):
return {'input_ids': self.data[idx], 'labels': self.labels[idx]}
@dataclass
class DummyModelOutput(BaseModelOutput):
loss: torch.Tensor = None
logits: torch.Tensor = None
# Dummy Model
class DummyModel(torch.nn.Module):
def __init__(self):
super(DummyModel, self).__init__()
self.linear = torch.nn.Linear(10, 2)
def forward(self, input_ids, labels=None) -> DummyModelOutput:
outputs = self.linear(input_ids)
loss = F.cross_entropy(outputs, labels)
return DummyModelOutput(loss=loss, logits=outputs)
if __name__ == '__main__':
# using wandb, because it logs system metrics periodically
os.environ["WANDB_PROJECT"] = "dummy_project"
# Create dataset and model instances
dataset = DummyDataset(size=1000)
model = DummyModel()
persistent_workers = False # set to True to enable persistent workers
# Training arguments
training_args = TrainingArguments(
output_dir="./test_trainer",
run_name=f'dataloader_peristent_workers={persistent_workers}',
num_train_epochs=20,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
dataloader_num_workers=8,
dataloader_persistent_workers=persistent_workers,
logging_strategy="no",
evaluation_strategy="epoch",
)
# Initialize the custom trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
eval_dataset=dataset,
)
# Train the model
trainer.train()
```
### Expected behavior
Since [get_eval_loader](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3065C16-L3065C16) is called on every `evaluate` call, with `dataloader_persistent_workers=True` the previous worker processes are never killed, which leads to a fork-bomb that exhausts system resources and causes instability or crashes.
As you can see in the below plots generated with the reproduction script (in the wandb system metrics section),
- persistent data loader workers cause speedup (mainly because the training loader does not recreate all processes at every epoch), but evaluation loaders cause the fork-bomb.
- without persistent data loader workers, speed is slow, but the number of processes is constant.
![image](https://github.com/huggingface/transformers/assets/12119806/dd3559bb-e6fa-4318-9f9a-fef5faff152e)
Having the persistent dataloader option is good. Still, it is necessary to fix the eval loader logic, create it once, and reuse it since the eval datasets won't change in the middle of training.
This option was added in #27058 and #27189
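In the meantime, a possible user-side workaround (a rough, untested sketch that assumes a single, fixed eval dataset) is to cache the eval dataloader in a `Trainer` subclass so that persistent workers are only spawned once:
```python
from transformers import Trainer


class CachedEvalDataloaderTrainer(Trainer):
    """Workaround sketch: build the default eval dataloader once and reuse it."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._cached_eval_dataloader = None

    def get_eval_dataloader(self, eval_dataset=None):
        # Explicitly passed datasets still go through the normal (re-creating) path.
        if eval_dataset is not None:
            return super().get_eval_dataloader(eval_dataset)
        if self._cached_eval_dataloader is None:
            self._cached_eval_dataloader = super().get_eval_dataloader()
        return self._cached_eval_dataloader
```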
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28469/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28468/comments | https://api.github.com/repos/huggingface/transformers/issues/28468/events | https://github.com/huggingface/transformers/issues/28468 | 2,078,022,384 | I_kwDOCUB6oc573Brw | 28,468 | Train LLaMA 2 with PEFT(LoRA) + Deepspeed Zero3 on v100 * 8, raise assert param.ds_status == ZeroParamStatus.AVAILABLE | {
"login": "ZetangForward",
"id": 123983104,
"node_id": "U_kgDOB2PVAA",
"avatar_url": "https://avatars.githubusercontent.com/u/123983104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZetangForward",
"html_url": "https://github.com/ZetangForward",
"followers_url": "https://api.github.com/users/ZetangForward/followers",
"following_url": "https://api.github.com/users/ZetangForward/following{/other_user}",
"gists_url": "https://api.github.com/users/ZetangForward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZetangForward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZetangForward/subscriptions",
"organizations_url": "https://api.github.com/users/ZetangForward/orgs",
"repos_url": "https://api.github.com/users/ZetangForward/repos",
"events_url": "https://api.github.com/users/ZetangForward/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZetangForward/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, would you mind updating to the latest version of transformers, test again and also share the output of `transformers-cli env`",
"> ransformers-cli env\r\n\r\nHi, sorry for the late reply. Sure, I update the transformers version, and here is my transformers version:\r\n\r\n```\r\nPython 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'4.36.2'\r\n>>> \r\n```\r\n\r\nI test again and meet the same issues\r\n```\r\narameter Offload: Total persistent parameters: 177793 in 57 params\r\n 0%| | 0/248900 [00:00<?, ?it/s]/opt/conda/envs/llama/lib/python3.10/site-packages/torch/utils/data/dataloader.py:557: UserWarning: This DataLoader will create 32 worker processes in total. Our suggested max number of worker in current system is 28, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\r\n warnings.warn(_create_warning_msg(\r\nTraceback (most recent call last):\r\n File \"/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py\", line 229, in <module>\r\n train()\r\n File \"/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py\", line 223, in train\r\n trainer.train()\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py\", line 1556, in train\r\n # number of training epochs: num_train_epochs\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py\", line 1838, in _inner_training_loop\r\n # Skip past any already trained steps if resuming training\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py\", line 2693, in training_step\r\n A helper wrapper to group together context managers.\r\n File \"/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py\", line 105, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 1833, in forward\r\n loss = self.module(*inputs, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1568, in _call_impl\r\n result = forward_call(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/peft/peft_model.py\", line 1073, in forward\r\n return self.base_model(\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1568, in _call_impl\r\n result = forward_call(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/peft/tuners/tuners_utils.py\", line 103, in forward\r\n return 
self.model.forward(*args, **kwargs)\r\n File \"/workspace/zecheng/modelzipper/projects/custom_llama/models/vqllama.py\", line 84, in forward\r\n svg_token_embeddings = self.vqvae_embedding(svg_token_ids) # Encode svg tokens\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1568, in _call_impl\r\n result = forward_call(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/peft/utils/other.py\", line 219, in forward\r\n return self.modules_to_save[self.active_adapter](*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1557, in _call_impl\r\n args_result = hook(self, args)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 392, in _pre_forward_module_hook\r\n self.pre_sub_module_forward_function(module)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py\", line 505, in pre_sub_module_forward_function\r\n param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py\", line 15, in wrapped_fn\r\n ret_val = func(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py\", line 310, in fetch_sub_module\r\n assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()\r\nAssertionError: {'id': 347, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {846}, 'ds_tensor.shape': torch.Size([0])}\r\n\r\n```\r\n\r\nhere is the transformers-cli env\r\n\r\n```\r\n╰─➤ transformers-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.36.2\r\n- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.27\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.2\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.26.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.2 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n",
"same issue",
"@ArthurZucker any solutions?",
"To me it looks like deep speed 0 is not available. \r\nPinging @pacman100 as he might have an idea of a quick fix! ",
"Ok, this is a strange issue, and I also find similar questions in many other issues... @pacman100 \r\n\r\n> To me it looks like deep speed 0 is not available. Pinging @pacman100 as he might have an idea of a quick fix!\r\n\r\n",
"Hello @ZetangForward, I think this is due to `modules_to_save` not being compatible with DeepSpeed. A workaround in the meantime would be to target the modules (add them in `target_modules`) mentioned in `modules_to_save`. If you are targeting embedding layers, the lora modules along with the embedding layers are saved when calling `save_pretrained` to support adding new tokens to the embedding layers. ",
"> Hello @ZetangForward, I think this is due to `modules_to_save` not being compatible with DeepSpeed. A workaround in the meantime would be to target the modules (add them in `target_modules`) mentioned in `modules_to_save`. If you are targeting embedding layers, the lora modules along with the embedding layers are saved when calling `save_pretrained` to support adding new tokens to the embedding layers.\r\n\r\nSorry for the late reply. Actually, I already put them in the ``target_modules`` at the beginning. Below is my code:\r\n\r\n```\r\n\r\nsvgllama = VQSVGLlama.from_pretrained(\r\n model_args.model_name_or_path, \r\n config=llamaconfig, \r\n codebook_size=vqvae_config.vqvae.l_bins,\r\n cache_dir=training_args.cache_dir\r\n )\r\n\r\n\r\n config = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n lora_dropout=0.05,\r\n target_modules=[\"q_proj\", \"v_proj\"],\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n modules_to_save=[\"vqvae_embedding\", \"vqvae_head\", \"up_adapter\", \"down_adapter\"]\r\n )\r\n svgllama = get_peft_model(svgllama, config)\r\n```\r\n\r\nand this is my model:\r\n\r\n```\r\nclass VQSVGLlama(LlamaForCausalLM): \r\n def __init__(self, config, vq_loss_weight=2.0, convert_token_weight=1.5, tokenizer=None, svg_begin_token_id=None, vqvae=None, codebook_size=8192): \r\n super(VQSVGLlama, self).__init__(config)\r\n self.config = config\r\n self.tokenizer = tokenizer\r\n self.svg_begin_token_id = svg_begin_token_id\r\n self.vq_loss_weight = vq_loss_weight\r\n self.convert_token_weight = convert_token_weight\r\n self.codebook_size = codebook_size + 1 # add one for svg end token\r\n self.svg_end_token_id = codebook_size\r\n self.vqvae = vqvae\r\n self.up_adapter = nn.Linear(config.hidden_size, config.hidden_size)\r\n self.down_adapter = nn.Linear(config.hidden_size, config.hidden_size)\r\n self.vqvae_embedding = nn.Embedding(self.codebook_size, config.hidden_size)\r\n self.vqvae_head = nn.Linear(config.hidden_size, self.codebook_size)\r\n\r\n self.post_init()\r\n if config.frozen_llm: \r\n print_c(\"Attention! LLM is freezed!\")\r\n self.base_model.requires_grad_ = False \r\n self.lm_head.requires_grad_ = False\r\n self.base_model.embed_tokens.requires_grad_ = True \r\n\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | 1,708 | NONE | null | ### System Info
Huggingface Version == 4.31.0
## Environment
Deepspeed Zero3 Config:
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 0,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 0,
"stage3_max_reuse_distance": 0,
"stage3_gather_16bit_weights_on_model_save": true
},
"fp16": {
"enabled": true,
"auto_cast": false,
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 2000,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": [
0.9,
0.999
],
"eps": 1e-8,
"weight_decay": 0
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"gradient_accumulation_steps": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Launch Config:
```
deepspeed --num_gpus 1 \
--num_nodes 1 \
train_vqllama_lora.py \
--model_name_or_path "/model/llama2" \
--data_path "/data/data.pkl" \
--output_dir ${OUTPUT_DIR} \
--num_train_epochs 100 \
--model_max_length 1024 \
--per_device_train_batch_size 68 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "steps" \
--eval_steps 5 \
--greater_is_better False \
--save_strategy "steps" \
--load_best_model_at_end True \
--save_steps 5 \
--save_total_limit 10 \
--learning_rate 3e-5 \
--warmup_steps 20 \
--logging_steps 5 \
--dataloader_num_workers 0 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--deepspeed configs/deepspeed/stage3_test.json \
--fp16 True \
--remove_unused_columns False;
```
LoRA Config:
```
myllama = CustomLLama.from_pretrained(
model_args.model_name_or_path,
config=llamaconfig,
cache_dir=training_args.cache_dir
)
config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="CAUSAL_LM",
modules_to_save=["custom_embedding", "custom_head", "wte", "lm_head"]
)
myllama = get_peft_model(myllama , config)
```
Then, I train `myllama` with Huggingface Trainer
## Errors
I run into this error:
```
Parameter Offload: Total persistent parameters: 8663041 in 198 params
0%| | 0/280000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 230, in <module>
train()
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 224, in train
trainer.train()
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/workspace/zecheng/modelzipper/projects/custom_llama/train_vqllama_lora.py", line 105, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1833, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/peft_model.py", line 1073, in forward
return self.base_model(
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 103, in forward
return self.model.forward(*args, **kwargs)
File "/workspace/zecheng/modelzipper/projects/custom_llama/models/vqllama.py", line 84, in forward
svg_token_embeddings = self.vqvae_embedding(svg_token_ids) # Encode svg tokens
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
result = forward_call(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/peft/utils/other.py", line 219, in forward
return self.modules_to_save[self.active_adapter](*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1557, in _call_impl
args_result = hook(self, args)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 392, in _pre_forward_module_hook
self.pre_sub_module_forward_function(module)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 505, in pre_sub_module_forward_function
param_coordinator.fetch_sub_module(sub_module, forward=prev_grad_state)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/llama/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 310, in fetch_sub_module
assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionError: {'id': 423, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {1038}, 'ds_tensor.shape': torch.Size([0])}
```
Any help with this problem? Thanks!
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Initialize the LLaMA 2 model from HF and add some extra modules for training, such as LoRA and an extra embedding / LM head
2. Use PEFT to wrap the model above
3. Apply DeepSpeed Zero3 (with my config) and the HF Trainer to start training.
### Expected behavior
Seeking help; resolving this may also fix some potential bugs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28468/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28467/comments | https://api.github.com/repos/huggingface/transformers/issues/28467/events | https://github.com/huggingface/transformers/issues/28467 | 2,078,016,506 | I_kwDOCUB6oc573AP6 | 28,467 | ImportError: cannot import name 'is_g2p_en_available' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py) | {
"login": "kli017",
"id": 14877573,
"node_id": "MDQ6VXNlcjE0ODc3NTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/14877573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kli017",
"html_url": "https://github.com/kli017",
"followers_url": "https://api.github.com/users/kli017/followers",
"following_url": "https://api.github.com/users/kli017/following{/other_user}",
"gists_url": "https://api.github.com/users/kli017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kli017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kli017/subscriptions",
"organizations_url": "https://api.github.com/users/kli017/orgs",
"repos_url": "https://api.github.com/users/kli017/repos",
"events_url": "https://api.github.com/users/kli017/events{/privacy}",
"received_events_url": "https://api.github.com/users/kli017/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I don't know which section this is, but thefunction is defined here: \r\nhttps://github.com/huggingface/transformers/blob/07f5cdcac14b672ea6934b16da432518717f5b74/src/transformers/utils/import_utils.py#L448\r\n\r\nit was merged to transformers last week, so can't be there for earlier versions. Can you share a small reproducer? \r\n",
"I have the same problem following this example: https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb\r\n\r\nat AutoImageProcessor.from_pretrained(model_checkpoint)\r\n\r\nwith transformers 4.37.1\r\n\r\nLooks like there is also this issue: https://github.com/huggingface/peft/issues/1351",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,706 | null | NONE | null | ### System Info
env: colab
python=3.10
transformers=4.37.0.dev0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I was running the peft_bnb_whisper_large_v2_training.ipynb notebook from the PEFT project. Everything was fine until I hit the error when running `import evaluate`. I also tried transformers 4.27.4, 4.33.1 and 4.36.2 and got the same error.
### Expected behavior
Can anyone help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28467/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28466/comments | https://api.github.com/repos/huggingface/transformers/issues/28466/events | https://github.com/huggingface/transformers/issues/28466 | 2,077,979,494 | I_kwDOCUB6oc5723Nm | 28,466 | LlamaForCausalLM does not support Flash Attention 2.0 yet | {
"login": "Patrick-Ni",
"id": 59468866,
"node_id": "MDQ6VXNlcjU5NDY4ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/59468866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Patrick-Ni",
"html_url": "https://github.com/Patrick-Ni",
"followers_url": "https://api.github.com/users/Patrick-Ni/followers",
"following_url": "https://api.github.com/users/Patrick-Ni/following{/other_user}",
"gists_url": "https://api.github.com/users/Patrick-Ni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Patrick-Ni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Patrick-Ni/subscriptions",
"organizations_url": "https://api.github.com/users/Patrick-Ni/orgs",
"repos_url": "https://api.github.com/users/Patrick-Ni/repos",
"events_url": "https://api.github.com/users/Patrick-Ni/events{/privacy}",
"received_events_url": "https://api.github.com/users/Patrick-Ni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"make sure you are using the latest version of transformers and not a code on the hub! 🤗 closing as this was added in #25598"
] | 1,705 | 1,705 | 1,705 | NONE | null | The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
Traceback (most recent call last):
File "/root/paddlejob/workspace/env_run/benchmark/generation/main.py", line 116, in <module>
main()
File "/root/paddlejob/workspace/env_run/benchmark/generation/main.py", line 91, in main
pipeline = load_model_and_tokenizer(model_home, args.model, args.use_pipeline)
File "/root/paddlejob/workspace/env_run/benchmark/generation/load_models_and_datasets.py", line 26, in load_model_and_tokenizer
model = AutoModelForCausalLM.from_pretrained(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3456, in from_pretrained
config = cls._autoset_attn_implementation(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1302, in _autoset_attn_implementation
cls._check_and_enable_flash_attn_2(
File "/root/paddlejob/workspace/env_run/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1382, in _check_and_enable_flash_attn_2
raise ValueError(
ValueError: LlamaForCausalLM does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28466/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28465/comments | https://api.github.com/repos/huggingface/transformers/issues/28465/events | https://github.com/huggingface/transformers/pull/28465 | 2,077,920,592 | PR_kwDOCUB6oc5j4EEy | 28,465 | Update README.md | {
"login": "kit1980",
"id": 420184,
"node_id": "MDQ6VXNlcjQyMDE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/420184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kit1980",
"html_url": "https://github.com/kit1980",
"followers_url": "https://api.github.com/users/kit1980/followers",
"following_url": "https://api.github.com/users/kit1980/following{/other_user}",
"gists_url": "https://api.github.com/users/kit1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kit1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kit1980/subscriptions",
"organizations_url": "https://api.github.com/users/kit1980/orgs",
"repos_url": "https://api.github.com/users/kit1980/repos",
"events_url": "https://api.github.com/users/kit1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/kit1980/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28465/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28465",
"html_url": "https://github.com/huggingface/transformers/pull/28465",
"diff_url": "https://github.com/huggingface/transformers/pull/28465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28465.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28463/comments | https://api.github.com/repos/huggingface/transformers/issues/28463/events | https://github.com/huggingface/transformers/issues/28463 | 2,077,796,716 | I_kwDOCUB6oc572Kls | 28,463 | Mixtral inference on multi gpu is broken with 4.37.0dev (995a7ce) | {
"login": "nepeee",
"id": 13850451,
"node_id": "MDQ6VXNlcjEzODUwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/13850451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nepeee",
"html_url": "https://github.com/nepeee",
"followers_url": "https://api.github.com/users/nepeee/followers",
"following_url": "https://api.github.com/users/nepeee/following{/other_user}",
"gists_url": "https://api.github.com/users/nepeee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nepeee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nepeee/subscriptions",
"organizations_url": "https://api.github.com/users/nepeee/orgs",
"repos_url": "https://api.github.com/users/nepeee/repos",
"events_url": "https://api.github.com/users/nepeee/events{/privacy}",
"received_events_url": "https://api.github.com/users/nepeee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Reverting back to nvidia driver 535.154.05 / cuda 12.2 seems to fixed the issue.",
"Thanks for providing the solution"
] | 1,705 | 1,706 | 1,705 | NONE | null | ### System Info
Ubuntu 22.04, RTX 3090 + RTX 3080 Ti, transformers 4.37.0dev (995a7ce)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GPTQConfig
prompt = 'SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation. \nUSER: how long will take to travel from Malmö to Stockholm by foot? \nASSISTANT: '
device = torch.device("cuda:0")
dmap = {
'model.embed_tokens':0,
'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4':0,
'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9':0,
'model.layers.10': 0, 'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 0, 'model.layers.14':0,
'model.layers.15': 0, 'model.layers.16': 0, 'model.layers.17': 0, 'model.layers.18': 0, 'model.layers.19':0,
'model.layers.20': 0, 'model.layers.21': 0, 'model.layers.22': 0, 'model.layers.23': 0, 'model.layers.24':0,
'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 1, 'model.layers.28': 1, 'model.layers.29':1,
'model.layers.30': 1, 'model.layers.31': 1,
'model.norm': 0,
'lm_head': 1,
}
model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"  # used for both the model and the tokenizer below
quantization_config_loading = GPTQConfig(bits=3, use_exllama=False)
model_q = AutoModelForCausalLM.from_pretrained(model_id, device_map=dmap, quantization_config=quantization_config_loading, revision='gptq-3bit--1g-actorder_True')
tokenizer = AutoTokenizer.from_pretrained(model_id)
inp = tokenizer(prompt, return_tensors="pt").to(device)
res = model_q.generate(**inp, num_beams=1, min_new_tokens=60, max_new_tokens=60, do_sample=False)
predicted_text = tokenizer.decode(res[0])
print(predicted_text)
```
### Expected behavior
The script works on my single 3090 with device_map="auto", but it produces errors with multi-GPU model parallelism. It worked before with the device_map shown in the example.
I have seen many errors, such as segfaults, device-side asserts, and even a full hang of the machine.
The most common one is:
idx, top_x = torch.where(expert_mask[expert_idx])
RuntimeError: CUDA error: device-side assert triggered
This happens at layer 26 on the first token prediction.
Both GPUs work with other models such as Mistral; I made this example because my LoRA training code had the same issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28463/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28462/comments | https://api.github.com/repos/huggingface/transformers/issues/28462/events | https://github.com/huggingface/transformers/issues/28462 | 2,077,558,874 | I_kwDOCUB6oc571Qha | 28,462 | Move layer_idx from a layer property to function argument. | {
"login": "siddartha-RE",
"id": 55106295,
"node_id": "MDQ6VXNlcjU1MTA2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/55106295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddartha-RE",
"html_url": "https://github.com/siddartha-RE",
"followers_url": "https://api.github.com/users/siddartha-RE/followers",
"following_url": "https://api.github.com/users/siddartha-RE/following{/other_user}",
"gists_url": "https://api.github.com/users/siddartha-RE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddartha-RE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddartha-RE/subscriptions",
"organizations_url": "https://api.github.com/users/siddartha-RE/orgs",
"repos_url": "https://api.github.com/users/siddartha-RE/repos",
"events_url": "https://api.github.com/users/siddartha-RE/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddartha-RE/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"I don't mind taking a look at a PR, feel free to ping me! 🤗 "
] | 1,705 | 1,705 | null | CONTRIBUTOR | null | ### Feature request
Currently the layer_idx is recorded in the attention module of each `LlamaDecoderLayer`. This has the unfortunate side effect that the layers cannot easily be moved around or reused within the layer list. It seems simple enough to pass in the layer index as part of the loop over layers in the forward pass. That way the layers will once again be decoupled from their position information.
Backward compatibility could be preserved by still accepting the argument in the constructor, defaulting it to None, and simply ignoring it in favor of the index passed to `forward`.
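A minimal sketch of the idea (hypothetical module names and config attributes, not the actual `LlamaDecoderLayer` signature — just to illustrate the shape of the change):
```python
import torch.nn as nn


class DecoderLayer(nn.Module):
    def __init__(self, config, layer_idx=None):
        super().__init__()
        # still accepted for backward compatibility, but no longer required
        self.layer_idx = layer_idx
        self.proj = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states, layer_idx=None):
        # prefer the index supplied at call time so the module itself stays position-agnostic
        layer_idx = layer_idx if layer_idx is not None else self.layer_idx
        # ... layer_idx would be handed to the attention / KV-cache update here ...
        return self.proj(hidden_states)


class Decoder(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.layers = nn.ModuleList(DecoderLayer(config) for _ in range(config.num_hidden_layers))

    def forward(self, hidden_states):
        for idx, layer in enumerate(self.layers):
            # the position comes from the loop, so layers can be reordered or repeated freely
            hidden_states = layer(hidden_states, layer_idx=idx)
        return hidden_states
```
With this shape, repeating or reordering entries in `self.layers` would not require duplicating weights, since the position information is supplied by the caller rather than baked into the module.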
### Motivation
The motivation is to allow for simple layer stacking (like we have been seeing with pass through merged models) at inference time without actually expanding the memory usage of the model.
### Your contribution
I am happy to send a PR. Seems simple enough. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28462/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28461/comments | https://api.github.com/repos/huggingface/transformers/issues/28461/events | https://github.com/huggingface/transformers/issues/28461 | 2,077,408,711 | I_kwDOCUB6oc570r3H | 28,461 | Pytorch can have its default dtype permanently set to the "wrong" value if there is an exception when loading a model | {
"login": "Taytay",
"id": 1330693,
"node_id": "MDQ6VXNlcjEzMzA2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taytay",
"html_url": "https://github.com/Taytay",
"followers_url": "https://api.github.com/users/Taytay/followers",
"following_url": "https://api.github.com/users/Taytay/following{/other_user}",
"gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taytay/subscriptions",
"organizations_url": "https://api.github.com/users/Taytay/orgs",
"repos_url": "https://api.github.com/users/Taytay/repos",
"events_url": "https://api.github.com/users/Taytay/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taytay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"thanks for the deep investigation the proposed solution! \r\nI think having a context manager that handles that looks like a valid solution and might cover all edge cases including yours - I would be curious to hear what you think @amyeroberts @ArthurZucker ?\r\nI think that @Taytay is right here in the sense that if any error occurs here: https://github.com/huggingface/transformers/blob/995a7ce9a80b80062ccfe0b2d78857fb17351e27/src/transformers/modeling_utils.py#L1272-L1287 for some reason, the original dtype will never get set back, leading to a \"corrupted\" `torch.get_default_dtype()`, and having a context manager with a proper `__exit__` method will correctly set back the original dtype after an exception has been thrown\r\n\r\n",
"We have a similar call in `from_pretrained`, I'd be in favor of a context manager but internal:\r\n```python \r\nwith _temp_default_dtype():\r\n config = copy.deepcopy(config) # We do not want to modify the config inplace in _from_config.\r\n config._attn_implementation = kwargs.pop(\"attn_implementation\", None)\r\n config = cls._autoset_attn_implementation(\r\n config, use_flash_attention_2=use_flash_attention_2, check_device_map=False\r\n )\r\n\r\n if is_deepspeed_zero3_enabled():\r\n import deepspeed\r\n\r\n logger.info(\"Detected DeepSpeed ZeRO-3: activating zero.init() for this model\")\r\n # this immediately partitions the model across all gpus, to avoid the overhead in time\r\n # and memory copying it on CPU or each GPU first\r\n with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):\r\n model = cls(config, **kwargs)\r\n else:\r\n model = cls(config, **kwargs)\r\n```\r\nsame for https://github.com/huggingface/transformers/blob/07f5cdcac14b672ea6934b16da432518717f5b74/src/transformers/modeling_utils.py#L3734\r\n\r\ntry catch is also fine since we only put them in two places",
"@Taytay Thanks for raising this issue and the detailed investigation and proposal! Yes, I think a context manager here sounds like a good idea! "
] | 1,705 | 1,707 | null | NONE | null | ### System Info
I just ran into the most head-scratching issue. My data collator was crashing because a tensor it made was in half precision (fp16). I couldn't figure out why, but then I realized my `torch.get_default_dtype()` was `torch.float16`!
Then I realized it's because my model code threw an exception in a previous run of a notebook cell.
And if you look at this code in `PreTrainedModel._from_config`: [def _from_config(cls, config, **kwargs):](https://github.com/huggingface/transformers/blob/995a7ce9a80b80062ccfe0b2d78857fb17351e27/src/transformers/modeling_utils.py#L1256-L1294)
You can see that it tries to set the dtype back to the original value, but doesn't do so in a `finally` block:
```python
# override default dtype if needed
dtype_orig = None
if torch_dtype is not None:
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
# do some stuff here....maybe throw an exception...
# restore default dtype if it was modified (assuming we get to this line)
if dtype_orig is not None:
torch.set_default_dtype(dtype_orig)
return model
```
This would of course leave my torch default dtype in whatever it was in when I was trying to load the model.
We could sprinkle some `finally` blocks around, or we could write a class like this:
```python
class temporarily_set_default_torch_dtype:
def __init__(self, dtype):
self.new_dtype = dtype
if dtype is not None:
self.original_dtype = torch.get_default_dtype()
else:
# try to make this a no-op
self.original_dtype = None
def __enter__(self):
if self.new_dtype is not None:
torch.set_default_dtype(self.new_dtype)
def __exit__(self, exc_type, exc_val, exc_tb):
if self.original_dtype is not None:
torch.set_default_dtype(self.original_dtype)
```
And use it like so:
```python
torch.set_default_dtype(torch.float32)
print(f"default dtype is this before: {torch.get_default_dtype()}")
try:
    with temporarily_set_default_torch_dtype(torch.float16):
print(f"default dtype is now this inside: {torch.get_default_dtype()}")
raise ValueError("Throwing an exception to make sure it works")
except ValueError as e:
print("We caught the exception")
pass
print(f"default dtype is this after: {torch.get_default_dtype()}")
# prints:
# default dtype is this before: torch.float32
# default dtype is now this inside: torch.float16
# default dtype is this after: torch.float32
```
### Who can help?
I think @ArthurZucker and @younesbelkada are the right people to ping here.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1: Run a notebook cell that loads a model from_pretrained, in dtype=float16, and throws an exception while doing so.
2: Note that your torch.get_default_dtype() is still set to float16.
(This causes a real problem when something like `DataCollatorForLanguageModeling` calls `torch_mask_tokens`, and then:
```python
# this will accidentally create a float16 tensor:
probability_matrix = torch.full(labels.shape, self.mlm_probability)
#...
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
```
An exception gets thrown when you try to call `bernoulli` on a cpu tensor at half precision:
`RuntimeError: "bernoulli_tensor_cpu_self_" not implemented for 'Half'`
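A tiny standalone illustration of that downstream failure (arbitrary shapes, just to show the mechanism):
```python
import torch

torch.set_default_dtype(torch.float16)  # simulates the "corrupted" default left behind by the failed load
probability_matrix = torch.full((2, 3), 0.15)  # silently created as float16 on CPU
torch.bernoulli(probability_matrix)  # raises: "bernoulli_tensor_cpu_self_" not implemented for 'Half'
```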
### Expected behavior
My default torch dtype should not get "corrupted" even if the model loading code throws an exception | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28461/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28461/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28460/comments | https://api.github.com/repos/huggingface/transformers/issues/28460/events | https://github.com/huggingface/transformers/pull/28460 | 2,077,385,630 | PR_kwDOCUB6oc5j2OXG | 28,460 | Fix docstrings and update docstring checker error message | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Okay there's something more deeply wrong with the docstring checker than I realized - the new docstring is definitely more correct, but is failing checks in the CI! Will investigate.",
"Quick re-review request @ArthurZucker, I'm going to update the error message in this PR too!"
] | 1,704 | 1,705 | 1,705 | MEMBER | null | While I was making fixes to the docstring checker, I found another issue - this one seems to be intermittent, and I'm not sure why it only fails tests sometimes. Still, it's definitely wrong, so this fix should hopefully avoid issues in future! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28460/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28460",
"html_url": "https://github.com/huggingface/transformers/pull/28460",
"diff_url": "https://github.com/huggingface/transformers/pull/28460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28460.patch",
"merged_at": 1705082051000
} |
https://api.github.com/repos/huggingface/transformers/issues/28459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28459/comments | https://api.github.com/repos/huggingface/transformers/issues/28459/events | https://github.com/huggingface/transformers/issues/28459 | 2,077,372,971 | I_kwDOCUB6oc570jIr | 28,459 | `get_imports` failing to respect conditionals on imports | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"For reference, this only happens when `trust_remote_code=True`. Thus, we switched from using `if is_flash_attn_2_available():` to a `try/except` block when trying to import the `flash_attn` package.\r\n\r\nSeems to be working!",
"Thanks @gugarosa for finding a workaround, that works because `get_imports` includes a special regex for `try`-`except`: https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/dynamic_module_utils.py#L149.\r\n\r\n---\r\n\r\nTo share, adding the below case to https://github.com/huggingface/transformers/blob/v4.36.2/tests/utils/test_dynamic_module_utils.py will expose the issue:\r\n\r\n```python\r\n...\r\n\r\nTOP_LEVEL_CONDITIONAL_IMPORT = \"\"\"\r\nimport os\r\nif False:\r\n import pathlib\r\n\"\"\"\r\n\r\n...\r\n\r\nCASES = [\r\n ...,\r\n TOP_LEVEL_CONDITIONAL_IMPORT\r\n]\r\n```\r\n\r\nLooking at the other test cases, to properly fix this bug, I am now thinking it will involve use of `ast` as shown in https://stackoverflow.com/a/42195575",
"Note a generalized importer should also be able to take into account `contextlib.suppress`:\r\n\r\n```python\r\nimport contextlib\r\n\r\nwith contextlib.suppress(ImportError):\r\n from flash_attn import flash_attn_func\r\n```",
"same problem for deepseeker moe [https://github.com/deepseek-ai/DeepSeek-MoE](deepseeker moe)\r\n![image](https://github.com/huggingface/transformers/assets/55798671/8ae7ffdf-ebca-49c6-8391-f7615db21a26)\r\n",
"Fixed by\r\n````\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig\r\n\r\nmodel_name = \"/root/models/deepseek-moe-16b-base\"\r\n# model_name = \"/root/models/Llama-2-7B\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)\r\n\r\n# With Python 3.11.7, transformers==4.36.2\r\nimport os\r\nfrom unittest.mock import patch\r\n\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers.dynamic_module_utils import get_imports\r\n\r\n\r\ndef fixed_get_imports(filename: str | os.PathLike) -> list[str]:\r\n \"\"\"Work around for https://huggingface.co/microsoft/phi-1_5/discussions/72.\"\"\"\r\n if not str(filename).endswith(\"/modeling_deepseek.py\"):\r\n return get_imports(filename)\r\n imports = get_imports(filename)\r\n imports.remove(\"flash_attn\")\r\n return imports\r\n\r\n\r\nwith patch(\"transformers.dynamic_module_utils.get_imports\", fixed_get_imports):\r\n # model = AutoModelForCausalLM.from_pretrained(\"microsoft/phi-1_5\", trust_remote_code=True)\r\n model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map=\"auto\", trust_remote_code=True)\r\n\r\nmodel.generation_config = GenerationConfig.from_pretrained(model_name)\r\nmodel.generation_config.pad_token_id = model.generation_config.eos_token_id\r\n\r\ntext = \"An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is\"\r\ninputs = tokenizer(text, return_tensors=\"pt\")\r\noutputs = model.generate(**inputs.to(model.device), max_new_tokens=100)\r\n\r\nresult = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\nprint(result)\r\n```",
"I think all custom models ( which need `trust_remote_code=True`) trigger this problem",
"(^ ping @LysandreJik about the `trust_remote_code` mechanism?) ",
"> (^ ping @LysandreJik about the `trust_remote_code` mechanism?)\r\n\r\nYes, the code is here:\r\n![image](https://github.com/huggingface/transformers/assets/55798671/8555e0b8-c7be-46f9-b3d4-080804d7c895)\r\n",
"Yep have already heard of such feedback! Would you like to open a PR for a fix? ",
"> Yep have already heard of such feedback! Would you like to open a PR for a fix?\r\n\r\nOf course, I will make a PR for fix",
"> Yep have already heard of such feedback! Would you like to open a PR for a fix?\r\nHer is my pull request.\r\nhttps://github.com/huggingface/transformers/pull/28811"
] | 1,704 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-13.5.2-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
From `git blame`: @Wauplin @sgugger
From issue template (it's an LLM): @ArthurZucker @you
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the below snippet on a MacBook without an NVIDIA GPU and `transformers==4.36.2` will throw an `ImportError` telling you to `pip install flash_attn`. However, `flash_attn` isn't actually a requirement for this model, so something's off here.
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
```
Leads to:
```
File "/Users/user/code/project/venv/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 315, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/code/project/venv/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 180, in check_imports
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
python-BaseException
```
Investigating this, it seems https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/dynamic_module_utils.py#L154 is picking up `flash_attn` from https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/models/phi/modeling_phi.py#L50-L52. However, if you look at the file, it's within an `if` statement.
Therein lies the bug: `transformers.dynamic_module_utils.get_imports` does not respect conditionals around imports.
Please see https://huggingface.co/microsoft/phi-1_5/discussions/72 for more info.
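For illustration, here is a rough sketch of an AST-based scan that only collects module-level imports and therefore skips anything guarded by an `if` (or `try`) — an assumption about one possible direction, not the current `get_imports` implementation:
```python
import ast


def get_top_level_imports(source: str) -> list:
    """Collect only module-level imports; anything nested under `if`, `try`, functions, etc. is ignored."""
    imports = set()
    for node in ast.parse(source).body:  # direct children of the module only, so conditional imports are skipped
        if isinstance(node, ast.Import):
            imports.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.level == 0 and node.module:
            imports.add(node.module.split(".")[0])
    return sorted(imports)


print(get_top_level_imports("import os\nif False:\n    import pathlib\n"))  # ['os']
```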
### Expected behavior
My goal is some way to avoid monkey patching `get_imports` to remove the extra inferred `flash_attn` dependency.
The most generalized solution is probably moving `get_imports` from regex-searching the source to either using `inspect` (see [here](https://stackoverflow.com/a/47093697)) or some other AST-walking method. I am pretty sure there is a simple fix here; it just involves moving away from a regex. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28459/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28458/comments | https://api.github.com/repos/huggingface/transformers/issues/28458/events | https://github.com/huggingface/transformers/pull/28458 | 2,077,343,501 | PR_kwDOCUB6oc5j2FPX | 28,458 | Mark two logger tests as flaky | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Sure, will add! ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28458). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Two tests which capture and check the logger's output occasionally fail.
```
FAILED tests/test_modeling_utils.py::ModelUtilsTest::test_model_from_pretrained_with_different_pretrained_model_name - AssertionError: False is not true
FAILED tests/test_modeling_utils.py::ModelUtilsTest::test_unexpected_keys_warnings - AssertionError: "were not used when initializing ModelWithHead: ['added_key']" not found in ''
```
It looks like the logger isn't capturing the output. I have never been able to replicate the errors outside of circle CI, locally or on a VM.
The reason for the failure is unclear: there are other tests in the same module which utilise `CaptureLogger`. However, it's always these two tests which fail.
Example runs, where failing tests were unrelated to the PR:
* https://app.circleci.com/pipelines/github/huggingface/transformers/81004/workflows/4919e5c9-0ea2-457b-ad4f-65371f79e277/jobs/1038999
* https://app.circleci.com/pipelines/github/huggingface/transformers/82051/workflows/8674dab8-35ac-4336-8db2-24d90426554f/jobs/1054942
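For reference, the change amounts to decorating the two tests with the retry helper — a sketch assuming the `is_flaky` decorator from `transformers.testing_utils` (the exact arguments used in the PR may differ):
```python
import unittest

from transformers.testing_utils import is_flaky


class ModelUtilsTest(unittest.TestCase):
    @is_flaky()  # rerun the test a few times before reporting a failure
    def test_unexpected_keys_warnings(self):
        ...
```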
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28458/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28458",
"html_url": "https://github.com/huggingface/transformers/pull/28458",
"diff_url": "https://github.com/huggingface/transformers/pull/28458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28458.patch",
"merged_at": 1705060739000
} |