Columns: user (string, 3–28 chars) · created_at (timestamp[us]) · body (string, 1–173k chars) · issue_number (int64, 1–2.4k)
Namco0816
2024-11-13T07:38:02
It seems I've encountered the same issue. I also use a dummy reward model that doesn't take any GPU memory. Training goes smoothly at the early stage, but after monitoring it for a couple of iterations, the GPU memory usage keeps growing, and at a specific iteration (in my case, 15% of total training steps for 8 GPUs, 7% of total training steps for 4 GPUs) the GPU OOMs when performing unwrapped generation. I've tried to `del` as many variables as possible after each iteration and also empty the caches, but that doesn't help at all.
2,250
Mefisto04
2024-10-21T19:31:15
Hey @qgallouedec, I have made a PR for issue #2237, please review all the changes I have made.
2,249
HuggingFaceDocBuilderDev
2024-10-24T13:08:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2249). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,249
qgallouedec
2024-10-24T13:22:57
Thanks for helping improve this @Mefisto04. Can you make sure to run `make precommit`? A few suggestions, but it all looks good to me.
2,249
Mefisto04
2024-10-24T18:37:47
Hey @qgallouedec, I have committed all the changes that you suggested, please review.
2,249
HuggingFaceDocBuilderDev
2024-10-18T14:23:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2248). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,248
qgallouedec
2024-10-18T10:18:01
Thanks for reporting, it's about to be fixed: #2246
2,247
ArcherShirou
2024-10-18T10:54:51
thanks, it works
2,247
HuggingFaceDocBuilderDev
2024-10-18T09:31:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,246
kashif
2024-10-24T08:31:33
release is out
2,245
HuggingFaceDocBuilderDev
2024-10-24T08:35:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,245
HuggingFaceDocBuilderDev
2024-10-17T11:44:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2244). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,244
HuggingFaceDocBuilderDev
2024-10-16T15:58:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2243). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,243
qgallouedec
2024-10-17T07:13:24
@kashif can you also add an example in the online dpo documentation? And a test?
2,243
kashif
2024-10-17T07:19:39
working on the test, thanks!
2,243
qgallouedec
2024-10-21T15:17:08
I'm just updating the doc and running some tests
2,243
qgallouedec
2024-10-22T11:23:13
```
# 8 GPUs
accelerate launch examples/scripts/dpo_online.py \
    --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft \
    --judge pairrm \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --logging_steps 25 \
    --output_dir pythia-1b-tldr-online-dpo-reward \
    --warmup_ratio 0.1
```

https://wandb.ai/huggingface/huggingface/runs/usqmcs3e
2,243
qgallouedec
2024-10-23T15:44:00
https://wandb.ai/huggingface/huggingface/runs/mq66mdbt

```
accelerate launch examples/scripts/dpo_online.py \
    --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --judge pair_rm \
    --dataset_name trl-lib/ultrafeedback-prompt \
    --learning_rate 5.0e-7 \
    --logging_steps 25 \
    --output_dir Qwen2.5-0.5B-Online-DPO-PairRM \
    --warmup_ratio 0.1
```
2,243
qgallouedec
2024-10-18T13:49:17
You can use it, feel free to report if it causes any issues.
2,242
zwhe99
2024-10-20T05:00:09
Thanks for the response!
2,242
coding-famer
2024-10-17T23:41:52
I'm interested in working on this!
2,241
qgallouedec
2024-10-18T13:49:57
Nice! Thanks @coding-famer. Feel free to open a PR then and request any help if needed
2,241
August-murr
2024-10-25T10:28:42
@lewtun After reading the paper, I noticed that the DPO checkpoints were combined with a different model rather than the reference model used in DPO training. So, I added an option in my PR to set an external model for merging instead of the reference model.
2,241
coding-famer
2024-10-25T18:01:36
Hi @August-murr, happy to see that you have already worked it out! However, I noticed that your implementation only allows merging models on disk after training, which users could already do with mergekit directly once training is done. I think the point here is to merge the model during the training steps/epochs?
2,241
August-murr
2024-10-25T18:41:13
@coding-famer The callback has an optional parameter called `merge_at_every_checkpoint`, which merges the saved checkpoint at either every step or at the end of each epoch during training.
2,241
coding-famer
2024-10-25T19:21:02
> @coding-famer The callback has an optional parameter called `merge_at_every_checkpoint`, which merges the saved checkpoint at either every step or at the end of each epoch during training.

Sounds great!
2,241
HuggingFaceDocBuilderDev
2024-10-17T08:30:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2239). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,239
qgallouedec
2024-10-17T09:03:46
Thanks @August-murr!
2,239
qgallouedec
2024-10-18T14:21:33
Thanks for pointing this out, #2248 will fix it
2,238
reihig-ut
2024-10-24T05:07:42
Thank you for your PR! I retried the reproduction process on branch `kto-conv-data-support` and got this error:

```
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:479: UserWarning: When using DPODataCollatorWithPadding, you should set `max_length` in the KTOTrainer's init it will be set to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:489: UserWarning: When using DPODataCollatorWithPadding, you should set `max_prompt_length` in the KTOTrainer's init it will be set to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:519: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your KTOConfig we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Traceback (most recent call last):
  File "/home/hoge/project/test/trl/examples/scripts/kto.py", line 97, in <module>
    trainer = KTOTrainer(
              ^^^^^^^^^^^
  File "/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py", line 721, in __init__
    super().__init__(
TypeError: Trainer.__init__() got an unexpected keyword argument 'processing_class'
```
2,238
benchay1999
2024-10-24T07:47:50
Changing `processing_class` to `tokenizer` worked for me.
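For anyone hitting the same `TypeError`, a minimal sketch of the change (assuming a KTO setup like the example script; your other trainer arguments stay the same):

```diff
- trainer = KTOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=dataset)
+ trainer = KTOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=dataset)
```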
2,238
kashif
2024-10-24T08:44:08
Should be fixed now in main with the latest transformers release.
2,238
chenyang399
2024-11-08T04:35:47
How much memory does it need to run the KTO script? Does using the KTO script require more than 24 GB of GPU memory? I used a 4090 with 24 GB and it failed.
2,238
Mefisto04
2024-10-16T19:16:43
hey @qgallouedec, please review this and assign me this issue
2,237
qgallouedec
2024-10-18T17:23:07
Hi, thanks for reporting @Mefisto04. Feel free to open a PR if you can improve it.
2,237
Mefisto04
2024-10-21T19:28:56
Hey @qgallouedec, I have made a PR, #2249, please review it.
2,237
qgallouedec
2024-10-25T16:04:41
Closed via #2249
2,237
HuggingFaceDocBuilderDev
2024-10-21T09:44:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2236). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,236
edbeeching
2024-10-24T06:39:45
Hi @sergiopaniego, thanks for implementing this. Could you run `make precommit` to format the code so the quality tests pass (you may have to `pip install pre-commit`)? We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we have a conclusion.
2,236
sergiopaniego
2024-10-30T09:12:56
Updated! Any updates on the harmonization discussion? I’m happy to make any modifications needed! 😊
2,236
mshuffett
2024-11-04T01:57:33
@sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs 50 GB and my A100 only has about 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much considering it is doing a LoRA? Is it possible to set this up to train on multiple GPUs?
2,236
sergiopaniego
2024-11-17T20:25:35
> @sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs 50 GB and my A100 only has about 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much considering it is doing a LoRA?
>
> Is it possible to set this up to train on multiple GPUs?

Sorry for the late response @mshuffett. It still needs some polishing. While testing it, it seems like something is still missing from the artifacts for the model shared. You can see more details about it in the [README](https://github.com/2U1/Molmo-Finetune). For example, since the `grad-checkpoint` is disabled, memory consumption increases a lot. It's also not yet merged in the official transformers repo: https://github.com/huggingface/transformers/pull/33962
2,236
qgallouedec
2024-10-18T17:18:01
This operation replaces tokens outside the attention mask with token 0. It has no influence on the model output within the attention mask:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pad_token_id = tokenizer.pad_token_id

input_ids = torch.tensor([[pad_token_id, pad_token_id, 1, 2, 3, 4, 5, pad_token_id]])
attention_mask = input_ids != pad_token_id  # [[False, False, True, True, True, True, True, False]]
position_ids = attention_mask.cumsum(1) - attention_mask.long()  # [[0, 0, 1, 2, 3, 4, 5, 0]]

output_wo_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

input_ids = torch.masked_fill(input_ids, ~attention_mask, 0)  # [[0, 0, 1, 2, 3, 4, 5, 0]]
output_w_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

print(torch.mean(torch.abs(output_wo_mask_fill.logits - output_w_mask_fill.logits), dim=-1))
# [[0.8371, 0.8371, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.6457]]
```

This operation is not absolutely necessary, since invalid logits are then masked anyway: https://github.com/huggingface/trl/blob/a67f2143c38d6520be8735463ce715ad5c281db8/trl/trainer/rloo_trainer.py#L413-L415
2,235
Chios-C
2024-10-19T05:46:57
Thanks for your great response.
2,235
HuggingFaceDocBuilderDev
2024-10-15T10:04:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2233). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,233
qgallouedec
2024-10-15T08:59:33
Thanks again @DhruvKadam-git. Can you update your branch?
2,232
DhruvKadam-git
2024-10-17T07:36:04
I have updated my branch
2,232
HuggingFaceDocBuilderDev
2024-10-18T17:26:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2232). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,232
qgallouedec
2024-10-18T17:37:28
LGTM now!
2,232
HuggingFaceDocBuilderDev
2024-10-15T08:00:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2231). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,231
qgallouedec
2024-10-15T05:46:49
Thanks @sergiopaniego !
2,230
kashif
2024-10-15T07:52:08
@wenxindongwork I suspect we will need to have our own `prediction_step` method as we use our own datacollator instead of the default one, and the tests didn't catch this bug since the `eval_steps` in the tests were > the `max_steps` so it never ran the evaluation...
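For context, a rough sketch of what overriding it could look like (illustrative only, not the actual fix; the real version would build inputs from our own data collator's output):

```python
import torch
from transformers import Trainer

class CollatorAwareTrainer(Trainer):
    """Hypothetical subclass: evaluate with the same custom collator used for training."""

    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # `inputs` is whatever our custom data collator produced, so we compute
        # the loss ourselves instead of relying on the default evaluation logic.
        inputs = self._prepare_inputs(inputs)
        with torch.no_grad():
            loss = self.compute_loss(model, inputs)
        return (loss.detach(), None, None)
```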
2,228
qgallouedec
2024-10-14T12:11:16
Using an iterable dataset might be more suited. If the way you update the dataset depends on the results, you'll probably need to set a callback as well
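For illustration, a minimal sketch of that idea (the generator logic and names are made up, not an existing TRL API):

```python
from datasets import IterableDataset

def prompt_generator():
    # Decide here which examples to yield next; this logic can read whatever
    # state you update from training results (e.g. via a callback).
    for prompt in ["prompt 1", "prompt 2", "prompt 3"]:
        yield {"prompt": prompt}

train_dataset = IterableDataset.from_generator(prompt_generator)
```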
2,227
qgallouedec
2024-10-12T08:56:34
You're referring to the dev version of the doc (main) while you have v0.11.3 installed. In general, you should either:

- use the doc associated with your version: https://huggingface.co/docs/trl/v0.11.3/en/dpo_trainer, or
- install the dev version: `pip install git+https://github.com/huggingface/trl`

Regarding the example code, if you decide to keep v0.11, then make the following modification:

```diff
- trainer = DPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=preference_example)
+ trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
```
2,226
qgallouedec
2024-10-12T08:57:57
Duplicate #2207 #2218
2,226
zhang-tuo-pdf
2024-10-12T16:43:02
Thank you so much! I made the changes based on your suggestions and kept using v0.11. However, I got another error when passing raw text with an explicit prompt format into the DPO trainer. The error says "'dict' object has no attribute 'map'". My code is below:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import (
    DPOConfig,
    DPOScriptArguments,
    DPOTrainer,
    ModelConfig,
    TrlParser,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)

def dpo_training():
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", cache_dir='/vault/ultraz/llm_models')
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct", cache_dir='/vault/ultraz/llm_models')
    preference_example = {
        "prompt": [
            "hello",
            "how are you",
            "What is your name?",
            "What is your name?",
            "Which is the best programming language?",
            "Which is the best programming language?",
            "Which is the best programming language?",
        ],
        "chosen": [
            "hi nice to meet you",
            "I am fine",
            "My name is Mary",
            "My name is Mary",
            "Python",
            "Python",
            "Java",
        ],
        "rejected": [
            "leave me alone",
            "I am not fine",
            "Whats it to you?",
            "I dont have a name",
            "Javascript",
            "C++",
            "C++",
        ],
    }
    training_args = DPOConfig(output_dir="Qwen2-0.5B-DPO", logging_steps=10)
    trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
    trainer.train()

if __name__ == "__main__":
    dpo_training()
```

And then the error message is below:

```
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:660: UserWarning: `max_length` is not set in the DPOConfig's init it will default to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:673: UserWarning: `max_prompt_length` is not set in the DPOConfig's init it will default to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:708: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your TrainingArguments we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Traceback (most recent call last):
  File "/vault/ultraz/DPO_FL/dpo_trainer.py", line 53, in <module>
    dpo_training()
  File "/vault/ultraz/DPO_FL/dpo_trainer.py", line 49, in dpo_training
    trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=preference_example)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py", line 804, in __init__
    train_dataset = train_dataset.map(
AttributeError: 'dict' object has no attribute 'map'
Traceback (most recent call last):
  File "/home/ultraz/.conda/envs/pretext/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1057, in launch_command
    simple_launcher(args)
  File "/home/ultraz/.conda/envs/pretext/lib/python3.10/site-packages/accelerate/commands/launch.py", line 673, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ultraz/.conda/envs/pretext/bin/python', 'dpo_trainer.py']' returned non-zero exit status 1.
```
2,226
qgallouedec
2024-10-12T16:48:44
This is a different issue, please open another issue next time. `preference_example` is a dict, but `train_dataset` is expected to be a `datasets.Dataset`. You need to convert it into a dataset via `datasets.Dataset.from_dict(preference_example)`.
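Concretely, a minimal sketch reusing the variables from your snippet above:

```python
from datasets import Dataset

# Wrap the raw dict in a `datasets.Dataset` before handing it to the trainer.
train_dataset = Dataset.from_dict(preference_example)
trainer = DPOTrainer(model=model, args=training_args, tokenizer=tokenizer, train_dataset=train_dataset)
```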
2,226
qgallouedec
2024-10-12T09:20:42
Good catch @Ben-Schneider-code, thanks for reporting. I'd like to take this opportunity to be a little more specific about what our precommit does. Here's a suggestion:

````md
TRL relies on `ruff` for maintaining consistent code formatting across its source files. Before submitting any PR, you should apply automatic style corrections and run code verification checks.

We provide a `precommit` target in the `Makefile` that simplifies this process by running all required checks and optimizations on only the files modified by your PR.

To apply these checks and corrections in one step, use:

```bash
$ make precommit
```

This command runs the following:

- Executes `pre-commit` hooks to automatically fix style issues with `ruff` and other tools.
- Runs additional scripts such as adding copyright information.

If you prefer to apply the style corrections separately or review them individually, the `pre-commit` hook will handle the formatting for the files in question.
````
2,225
Ben-Schneider-code
2024-10-12T21:52:07
@qgallouedec done 👍
2,225
qgallouedec
2024-10-13T07:21:45
Thanks a lot @Ben-Schneider-code!
2,225
Ben-Schneider-code
2024-10-30T02:58:59
Hi @qgallouedec! I updated the branch to align with main. Please review when you get a chance.

Also, `make precommit` still doesn't work for me? I had to run the ruff stuff manually.

```
pre-commit run --all-files
make: pre-commit: No such file or directory
make: *** [Makefile:18: precommit] Error 127
```

edit: I just didn't have the `pre-commit` pip package installed, my bad.
2,224
qgallouedec
2024-10-11T16:09:21
> The trainers on [TRL docs from the website](https://huggingface.co/docs/trl/en/dataset_formats#which-dataset-format-to-use) have links attached, but [the markdown file in the repo](https://github.com/huggingface/trl/blob/main/docs/source/dataset_formats.mdx) didn't contain any of the links. So, I wasn't sure if I should add the [`GKDTrainer`](https://huggingface.co/docs/trl/v0.11.3/en/gkd_trainer#trl.GKDTrainer) docs link to the table, please let me know if I need to add it to this PR.

When you write ```[`GKDTrainer`]```, the link is automatically created, no need to add it.
2,222
qgallouedec
2024-10-11T19:51:04
LGTM, thanks @August-murr!
2,222
qgallouedec
2024-10-11T13:14:33
Looks good!!
2,221
kashif
2024-10-11T10:32:16
Thanks @mst272, can you also kindly add these options to the docstrings and the documentation of `GKDTrainer`?
2,220
mst272
2024-10-11T15:47:09
Hi @kashif, I've added these to the docstrings and the documentation.
2,220
moussaKam
2024-11-05T09:42:27
Hi there, I don't really understand how this PR adds seq_kd. To my understanding, seq_kd computes the standard `cross_entropy` between the student logits and the output generated by the teacher. In this PR we are simply generating the teacher output and then computing the same `generalized_jsd_loss`. Am I missing something? In the [documentation](https://huggingface.co/docs/trl/main/gkd_trainer#usage-tips) it says:

> seq_kd: controls whether to perform Sequence-Level KD (can be viewed as supervised FT on teacher-generated out). When seq_kd=True and lmbda=0.0, the loss reduces to supervised JSD, where the teacher generates output sequences and the student receives token-specific feedback on these sequences from the teacher.

But this is the definition of `supervised_kd` if I understand correctly.
2,220
kashif
2024-11-05T09:49:35
@moussaKam So recall there are 2 other parameters apart from the `seq_kd` flag, namely `lmbda`, so set that to zero, and then `beta`, which interpolates between the forward and reverse KL. SeqKD then would be `lmbda=0` and `beta=0`, if I am not mistaken... It's an interesting question and might need exploring what happens with other values for these 2 hyper-parameters... or do you mean that, to be exact, we would need to replace the KL div by the cross-entropy loss?
2,220
kashif
2024-11-05T10:03:23
@moussaKam also note that in this case the KL-div is the same as the CE up to a constant term, i.e. the entropy of the target, which we assume does not change.
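Spelled out, with $p$ the fixed teacher/target distribution and $q_\theta$ the student:

$$
\mathrm{KL}(p \,\|\, q_\theta) \;=\; \sum_x p(x)\,\log\frac{p(x)}{q_\theta(x)} \;=\; \underbrace{\mathrm{CE}(p, q_\theta)}_{-\sum_x p(x)\log q_\theta(x)} \;-\; \underbrace{H(p)}_{\text{constant in }\theta},
$$

so minimizing the KL over the student parameters is equivalent to minimizing the cross-entropy as long as the target distribution $p$ is fixed.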
2,220
moussaKam
2024-11-05T10:03:35
@kashif thanks for your reply, yes according to the definition from the paper:

> Sequence-Level KD (Kim & Rush, 2016). SeqKD maximizes the likelihood of high probability sequences generated by the teacher, and can be viewed as supervised FT on teacher-generated output.

I understand that we compute the cross-entropy in the case of seq_kd. This is what we do in standard SFT, no?
2,220
moussaKam
2024-11-05T10:11:16
@kashif another point: if `seq_kd` is set to `True`, we are running the teacher inference twice, [here](https://github.com/huggingface/trl/blob/74e20cbbbcbac7ac8d426df09eda5f310c637def/trl/trainer/gkd_trainer.py#L286) and [here](https://github.com/huggingface/trl/blob/74e20cbbbcbac7ac8d426df09eda5f310c637def/trl/trainer/gkd_trainer.py#L227). Do we really need that?
2,220
kashif
2024-11-05T10:17:22
@moussaKam so in the first we generate completions, and in the second we calculate the logits of the completions... I suppose we could do that once and then keep track of it with a bunch of if-else, but I opted for some cleaner logic here that could work for any of the different hyperparams... any ideas on how to make it a bit more DRY?
2,220
kashif
2024-11-05T10:22:03
@moussaKam Mind you, there is an orthogonal abstraction I have been working on where, instead of the logits (which are assumed to come from the same vocab size for both the student and teacher), we allow the student and teacher to have different vocabs: see https://github.com/huggingface/trl/pull/2263. I would welcome any thoughts on whether this should be a separate class.
2,220
moussaKam
2024-11-05T10:22:13
@kashif, we don't need to compute the logits: we generate the output with the teacher, which becomes the new labels, then we run the forward pass of the student and compute the cross-entropy using just the teacher output tokens. I can implement it in the afternoon if that sounds good to you.
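For concreteness, a rough sketch of that idea (not the TRL implementation; batching and padding details omitted):

```python
import torch
import torch.nn.functional as F

def seq_kd_loss(student, teacher, prompt_ids, max_new_tokens=128):
    # 1) The teacher generates a completion; the generated tokens become the labels.
    with torch.no_grad():
        generated = teacher.generate(input_ids=prompt_ids, max_new_tokens=max_new_tokens, do_sample=False)

    # 2) Single forward pass of the student on the teacher-generated sequence.
    logits = student(input_ids=generated).logits[:, :-1]
    labels = generated[:, 1:].clone()
    labels[:, : prompt_ids.shape[1] - 1] = -100  # only the completion tokens contribute to the loss

    # 3) Plain cross-entropy, i.e. supervised fine-tuning on the teacher outputs.
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1), ignore_index=-100)
```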
2,220
bjb19
2024-10-11T03:55:32
Updating as I have found the source of the issue:

1) How the dataset is being processed in the example.
2) It looks like there were changes to the constructor class that are in the docs but not in the most recent version.

Might put in a PR this weekend to fix.
2,218
qgallouedec
2024-10-11T06:09:08
You're probably using an example for the dev version (0.12.0dev) while having the latest released version (0.11.3) installed. Either use the latest version example, or install the dev version.
2,218
qgallouedec
2024-10-11T06:11:28
Duplicate #2207
2,218
nivibilla
2024-10-10T22:04:12
followup from #2215
2,217
kashif
2024-10-11T09:39:22
@nivibilla what are the keys in your dataset, as currently the datacollator also checks if there is a `prompt` key to get the prompts only: https://github.com/huggingface/trl/blob/main/trl/trainer/utils.py#L265
2,217
nivibilla
2024-10-11T10:13:07
![image](https://github.com/user-attachments/assets/6e1e75c4-ea7d-4265-b268-cb448c80e875) I just have the prompt column with the name `prompt`
2,217
kashif
2024-10-24T08:44:56
Any update on this @nivibilla? I suspect it's a data issue?
2,217
nivibilla
2024-10-10T19:42:40
Actually, I'm stupid. I figured it out while I was typing the issue. I should be looking at the vocab size, not the tokenizer length. https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/aa8e72537993ba99e69dfaafa59ed015b17504d1/config.json#L26
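For reference, a quick way to check this (model names here are just examples):

```python
from transformers import AutoConfig, AutoTokenizer

student_cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
teacher_cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# The logit dimension comes from config.vocab_size, which can be larger than
# len(tokenizer) (e.g. padded vocab); for GKD the student and teacher logits
# need the same vocab dimension.
print(student_cfg.vocab_size, teacher_cfg.vocab_size, len(tokenizer))
```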
2,215
nivibilla
2024-10-10T19:43:25
Is it worth adding a check in the GKD trainer for this param so this error is more readable for others?
2,215
nivibilla
2024-10-10T19:46:13
Llama 3.1 70B and Llama 3.2 1B seem to have the same vocab size, so I will test with that. It will probably work.
2,215
qgallouedec
2024-10-10T15:32:44
I think you just need to pass this arg.

```python
from trl import SFTConfig, TrlParser

if __name__ == "__main__":
    parser = TrlParser(SFTConfig)
    training_args = parser.parse_args_and_config()
    print("✅")
```

Fails if you don't specify it:

```
$ python 2213.py
usage: 2213.py [-h] --output_dir OUTPUT_DIR [--overwrite_output_dir [OVERWRITE_OUTPUT_DIR]] [--do_train [DO_TRAIN]] [--do_eval [DO_EVAL]] [--do_predict [DO_PREDICT]]
...
2213.py: error: the following arguments are required: --output_dir
```

Works if you do:

```
$ python 2213.py --output_dir my_output_dir
✅
```
2,213
qgallouedec
2024-10-10T16:16:34
When using a notebook, instead of

```python
parser = TrlParser((AriaSFTScriptArguments, SFTConfig, AriaModelConfig))
sft_script_args, training_args, model_config = parser.parse_args_and_config()
```

use

```python
sft_script_args = AriaSFTScriptArguments()
training_args = SFTConfig(output_dir="./aria_ft")
model_config = AriaModelConfig()
```
2,213
himanshushukla12
2024-10-10T09:57:15
@qgallouedec please consider the PR [check it here](https://github.com/huggingface/trl/compare/main...himanshushukla12:trl:main?expand=1)
2,212
qgallouedec
2024-10-11T09:09:58
This issue occurs before the training starts, right? In my setup everything runs smoothly:

```
$ python examples/scripts/reward_modeling.py \
>     --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
>     --dataset_name trl-lib/ultrafeedback_binarized \
>     --output_dir Qwen2-0.5B-Reward-LoRA \
>     --per_device_train_batch_size 8 \
>     --num_train_epochs 1 \
>     --gradient_checkpointing True \
>     --learning_rate 1.0e-4 \
>     --logging_steps 25 \
>     --eval_strategy steps \
>     --eval_steps 50 \
>     --max_length 2048 \
>     --use_peft \
>     --lora_r 32 \
>     --lora_alpha 16
[2024-10-11 09:08:16,210] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Some weights of Qwen2ForSequenceClassification were not initialized from the model checkpoint at Qwen/Qwen2-0.5B-Instruct and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/fsx/qgallouedec/trl/examples/scripts/reward_modeling.py:99: UserWarning: You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.
  warnings.warn(
wandb: WARNING The `run_name` is currently set to the same value as `TrainingArguments.output_dir`. If this was not intended, please specify a different run name by setting the `TrainingArguments.run_name` parameter.
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: qgallouedec (huggingface). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.0
wandb: Run data is saved locally in /fsx/qgallouedec/trl/wandb/run-20241011_090830-zp3efu8k
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run Qwen2-0.5B-Reward-LoRA
wandb: ⭐️ View project at https://wandb.ai/huggingface/huggingface
wandb: 🚀 View run at https://wandb.ai/huggingface/huggingface/runs/zp3efu8k
  0%|          | 0/3875 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.11/site-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
  with device_autocast_ctx, torch.cpu.amp.autocast(**cpu_autocast_kwargs), recompute_context:  # type: ignore[attr-defined]
Could not estimate the number of tokens of the input, floating-point operations will not be computed
{'loss': 0.8947, 'grad_norm': 3.841621160507202, 'learning_rate': 9.935483870967742e-05, 'epoch': 0.01}
  1%|█▍        | 36/3875 [00:54<1:48:03, 1.69s/it]
```

Our system info seems close:

```
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.11.9
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA H100 80GB HBM3, NVIDIA H100 80GB HBM3
- Transformers version: 4.46.0.dev0
- Accelerate version: 1.0.0
- Accelerate config: not found
- Datasets version: 3.0.0
- HF Hub version: 0.24.7
- TRL version: 0.12.0.dev0+45129fc
- bitsandbytes version: 0.41.1
- DeepSpeed version: 0.15.1
- Diffusers version: 0.30.3
- Liger-Kernel version: 0.3.0
- LLM-Blender version: 0.0.2
- OpenAI version: 1.46.0
- PEFT version: 0.13.0
```

You have 2 GPUs, right? Are your 2 GPUs the same?
2,212
himanshushukla12
2024-10-11T09:51:08
> You have 2 GPUs, right? Are your 2 GPUs the same?

Yes, I don't know why this weird thing is happening...😭😭😭
2,212
qgallouedec
2024-11-05T10:34:50
Closing as it is not possible to reproduce the error with the provided information. If someone finds a way to reproduce it, please open a new issue and link this one.
2,212
qgallouedec
2024-10-10T09:46:10
Currently, SFT supports VLMs, see the examples.
2,211
MonolithFoundation
2024-10-30T03:46:37
How about DPO support for MLLMs? I have some issues modifying it for the latest TRL.
2,211
HuggingFaceDocBuilderDev
2024-10-14T13:44:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2209). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,209
qgallouedec
2024-10-15T09:06:02
## Regression report

I ran regression tests to ensure we don't break our DPO.

### Scenarios tested

The following scenarios were assessed for potential impact by recent changes:

- Encoder-decoder model
- Decoder-only
- Precompute ref
- Auxiliary loss
- Vision models

### Dataset Selection

As discussed earlier, the new and old (`main`) implementations are not equivalent in cases involving:

- Merging of prompt and completion leading to token merging
- Truncation needed

To avoid these cases, I used a **conversational** dataset with **short** content: `trl-lib/ultrafeedback_binarized`. I applied the following truncation preprocessing to limit sequence length:

```python
def truncate(example):
    return {
        "prompt": [{"role": "user", "content": example["chosen"][0]["content"][:100]}],
        "chosen": [{"role": "assistant", "content": example["chosen"][1]["content"][:100]}],
        "rejected": [{"role": "assistant", "content": example["rejected"][1]["content"][:100]}],
    }

dataset = dataset.map(truncate, desc="Truncate examples")
```

### Expected Changes

Differences in log probabilities (logps) are expected due to initial miscalculations, as mentioned in my previous post.

### Encoder-decoder

For this one I needed a custom script:

```python
# dpo_encdec.py
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

from trl import (
    DPOConfig,
    DPOScriptArguments,
    DPOTrainer,
    ModelConfig,
    TrlParser,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)
from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE

if __name__ == "__main__":
    parser = TrlParser((DPOScriptArguments, DPOConfig, ModelConfig))
    script_args, training_args, model_config = parser.parse_args_and_config()

    torch_dtype = (
        model_config.torch_dtype
        if model_config.torch_dtype in ["auto", None]
        else getattr(torch, model_config.torch_dtype)
    )
    quantization_config = get_quantization_config(model_config)
    model_kwargs = dict(
        revision=model_config.model_revision,
        attn_implementation=model_config.attn_implementation,
        torch_dtype=torch_dtype,
        use_cache=False if training_args.gradient_checkpointing else True,
        device_map=get_kbit_device_map() if quantization_config is not None else None,
        quantization_config=quantization_config,
    )
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
    )
    model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-small")
    peft_config = get_peft_config(model_config)
    if peft_config is None:
        ref_model = AutoModelForSeq2SeqLM.from_pretrained(
            model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code, **model_kwargs
        )
    else:
        ref_model = None
    tokenizer = AutoTokenizer.from_pretrained(
        model_config.model_name_or_path, trust_remote_code=model_config.trust_remote_code
    )
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    if tokenizer.chat_template is None:
        tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE

    dataset = load_dataset(script_args.dataset_name)

    def truncate(example):
        return {
            "prompt": [{"role": "user", "content": example["chosen"][0]["content"][:100]}],
            "chosen": [{"role": "assistant", "content": example["chosen"][1]["content"][:100]}],
            "rejected": [{"role": "assistant", "content": example["rejected"][1]["content"][:100]}],
        }

    dataset = dataset.map(truncate, desc="Truncate examples")

    trainer = DPOTrainer(
        model,
        ref_model,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split],
        processing_class=tokenizer,
        peft_config=peft_config,
    )

    trainer.train()

    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)

    # Save and push to hub
    trainer.save_model(training_args.output_dir)
    if training_args.push_to_hub:
        trainer.push_to_hub(dataset_name=script_args.dataset_name)
```

```
# 8 GPUs
accelerate launch dpo_encdec.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path google/t5-v1_1-small \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir t5-v1_1-DPO-main \
    --no_remove_unused_columns
```

<img width="1303" alt="Screenshot 2024-10-16 at 18 06 03" src="https://github.com/user-attachments/assets/9e293e60-f5d9-43ba-aa33-8f294b270fb0">

## Decoder-only

```
# 8 GPUs
accelerate launch examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-main \
    --no_remove_unused_columns
```

<img width="1303" alt="Screenshot 2024-10-16 at 17 52 25" src="https://github.com/user-attachments/assets/db0e3f6d-57df-4442-894b-5600e4a9cce0">

### Comment

Not sure exactly why the chosen and rejected don't match, but the margin still seems to be very close.

## Precompute reference

```
# 8 GPUs
accelerate launch examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-main \
    --no_remove_unused_columns \
    --precompute_ref_log_probs
```

<img width="1303" alt="Screenshot 2024-10-16 at 18 29 21" src="https://github.com/user-attachments/assets/3e34789e-3862-4b06-8d5b-a847f061f049">

### Comment

The curves precisely match their corresponding run without `--precompute_ref_log_probs`.

## Auxiliary loss

Modify the example script and add

```python
model.config.output_router_logits = True
```

```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path mistralai/Mixtral-8x7B-v0.1 \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2-0.5B-DPO-2209 \
    --gradient_checkpointing \
    --max_length 256 \
    --use_peft \
    --bf16
```

<img width="1195" alt="Screenshot 2024-10-17 at 17 07 18" src="https://github.com/user-attachments/assets/161971bf-99ac-41a6-9847-6c81acaf602a">

### Comment

Not sure if the training helped a lot, but at least you have consistent results between main and #2209. We've got a new `aux_loss` plot!

## Vision model
2,209
qgallouedec
2024-10-16T16:42:07
I still have 2 regressions that I'd like to run, but you can already take a look. I'd also like to check the performance difference related to the fix for the "Wrong truncation logic".
2,209
qgallouedec
2024-10-17T08:34:29
Trying to fix the CI. It's annoying because it fails without logs, and I can't reproduce it locally. Sorry for the numerous commits this implies.
2,209
qgallouedec
2024-10-17T10:33:47
> Regarding the difference in the chosen/rejected rewards of your regression tests, have you looked at the impact on downstream evals like IFEval / AlpacaEval / MixEval? I can run those for you if you have the checkpoints handy and then we can be pretty sure it's fine

Nice idea, I'll send you the checkpoints!
2,209
qgallouedec
2024-10-17T20:37:17
@lewtun Here is one:

- https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-2209
- https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-main

Trained with

```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero2.yaml examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --learning_rate 5.0e-7 \
    --num_train_epochs 1 \
    --gradient_checkpointing \
    --logging_steps 10 \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir Qwen2.5-7B-DPO-2209 \
    --gradient_checkpointing \
    --max_length 512 \
    --use_peft \
    --bf16 \
    --push_to_hub
```

Another data point for the regression test:

<img width="1128" alt="Screenshot 2024-10-18 at 00 10 32" src="https://github.com/user-attachments/assets/157212bd-d82e-4fc5-9d85-cbc628fbcfa0">
2,209
qgallouedec
2024-10-21T09:52:20
## IFEval

The new implementation seems to improve results.

| Model | inst_level_loose_acc | inst_level_strict_acc | prompt_level_loose_acc | prompt_level_strict_acc |
| --- | --- | --- | --- | --- |
| [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | 0.7122 | 0.6631 | 0.6026 ± 0.0211 | 0.5416 ± 0.0214 |
| [Qwen2.5-7B-DPO-main](https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-main) | 0.7182 | 0.6751 | 0.6155 ± 0.0209 | 0.5693 ± 0.0213 |
| [Qwen2.5-7B-DPO-2209](https://huggingface.co/qgallouedec/Qwen2.5-7B-DPO-2209) | 0.7326 | 0.6775 | 0.6303 ± 0.0208 | 0.5656 ± 0.0213 |
2,209
HuggingFaceDocBuilderDev
2024-10-09T15:52:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2208). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,208