user | created_at | body | issue_number
---|---|---|---|
HuggingFaceDocBuilderDev | 2024-10-26T20:28:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2286). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,286 |
PhilipMay | 2024-10-27T17:00:19 | I don't think the CI problems have anything to do with the changes in this PR... | 2,286 |
qgallouedec | 2024-11-05T18:17:15 | Thanks @PhilipMay! Do you mind updating your branch? I don't have write access to your branch. | 2,286 |
HuggingFaceDocBuilderDev | 2024-10-28T10:49:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2285). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,285 |
qgallouedec | 2024-10-25T13:00:16 | Wonderful! Thanks @ccs96307
Can you also replace `pytest.raises(...)` with `self.assertRaises(...)`? | 2,283 |
HuggingFaceDocBuilderDev | 2024-10-25T13:08:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2283). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,283 |
qgallouedec | 2024-10-25T13:32:59 | and make sure to run `make precommit` | 2,283 |
ccs96307 | 2024-10-25T14:59:16 | Hi @qgallouedec, thank you so much for taking the time to review my PR. I really appreciate your suggestions.
I'll replace `pytest.raises(...)` with `self.assertRaises(...)` as you recommended, and will also make sure to run `make precommit` to get everything aligned with the project's guidelines. Thanks again for your helpful feedback—I’ll get these changes pushed soon! | 2,283 |
ccs96307 | 2024-10-26T06:48:24 | Hi @qgallouedec, I've noticed that the `tests (3.11, windows-latest)` failed due to the following error:
```
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_0_standard_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_1_conversational_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
```
These errors seem to be unrelated to my changes, as the tests passed locally and the files I edited do not directly involve this functionality. I suspect this might be a network issue or a caching problem on Windows?
Could this be a common issue you've seen before? If there's anything I need to change or investigate further, please let me know. | 2,283 |
qgallouedec | 2024-10-28T15:15:48 | > Could this be a common issue you've seen before? If there's anything I need to change or investigate further, please let me know.
Yes, don't worry, it's not related to your PR; it will be solved in #2276 | 2,283 |
August-murr | 2024-10-28T07:12:44 | @lewtun
@qgallouedec
Feedback would be appreciated! | 2,282 |
qgallouedec | 2024-11-05T18:21:30 | Thanks a lot @August-murr for the work. Can you add documentation and tests? | 2,282 |
HuggingFaceDocBuilderDev | 2024-11-05T18:24:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2282). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,282 |
August-murr | 2024-11-05T20:10:44 | > Thanks a lot @August-murr for the work. Can you add documentation and tests?
I've already added most of the docs. As for the tests, unfortunately I won't be able to do it for a few days; if nobody else adds them, I'll do it later.
| 2,282 |
August-murr | 2024-11-14T18:22:33 | The tests I added validate that the merge succeeds, and I can expand them if necessary.
I also added docs to the callbacks file but was unable to produce the HTML file similar to the [callback docs](https://huggingface.co/docs/trl/main/en/callbacks) so I'd appreciate it if you could confirm whether the docs are properly generated or not. | 2,282 |
August-murr | 2024-11-18T09:14:53 | > Thanks for iterating @August-murr ! The PR LGTM now and once the CI is green & @qgallouedec approves, I think we can merge it
The tests without optional dependencies failed because mergekit is an optional dependency | 2,282 |
kashif | 2024-11-18T09:27:03 | @August-murr in `import_utils` you can define a new `is_mergekit_available` helper and then in the tests you can skip the tests if it's not available | 2,282 |
qgallouedec | 2024-11-18T13:07:35 | Like here:
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/trl/import_utils.py#L39-L40
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/testing_utils.py#L42-L46
https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/test_judges.py#L62-L63
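In short, the pattern looks like this (a rough sketch; the exact helper and test names in TRL differ slightly from what's shown here):
```python
# trl/import_utils.py (sketch)
from importlib.util import find_spec


def is_mergekit_available() -> bool:
    return find_spec("mergekit") is not None


# tests/testing_utils.py (sketch)
import unittest


def require_mergekit(test_case):
    """Skip the decorated test when mergekit is not installed."""
    return unittest.skipUnless(is_mergekit_available(), "test requires mergekit")(test_case)


# tests/test_callbacks.py (sketch)
class TestMergeModelCallback(unittest.TestCase):
    @require_mergekit
    def test_merge(self):
        from trl.mergekit_utils import MergeConfig  # imported only after the skip check passes

        ...
```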
Don't hesitate to ask for help if you want the maintainers to do it for you. | 2,282 |
August-murr | 2024-11-18T13:30:28 | > Like here:
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/trl/import_utils.py#L39-L40
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/testing_utils.py#L42-L46
>
> https://github.com/huggingface/trl/blob/6f8fe59aebc1153c6000c922b8edc4bb11efd506/tests/test_judges.py#L62-L63
>
> Don't hesitate to ask for help if you want the maintainers to do it for you.
Done! | 2,282 |
qgallouedec | 2024-11-18T13:35:47 | Nice, thanks! Just running some tests, waiting for the CI to be green, and we're good to merge (expect some commits from me on this branch) | 2,282 |
qgallouedec | 2024-11-18T14:47:30 | Another question that came up during the review: why have a new configuration class when we can use the mergekit one directly? I'm afraid of confusing the user; I'm tempted to use:
```python
from mergekit import MergeConfiguration
from trl import MergeModelCallback
merge_callback = MergeModelCallback(MergeConfiguration())
``` | 2,282 |
August-murr | 2024-11-18T17:11:24 | > Another question that came up during the review: why have a new configuration class when we can use the mergekit one directly? I'm afraid of confusing the user; I'm tempted to use:
>
> ```python
> from mergekit import MergeConfiguration
> from trl import MergeModelCallback
>
> merge_callback = MergeModelCallback(MergeConfiguration())
> ```
Actually, ease of use for the user was the reason I had to write the class in mergekit_utils: mergekit uses a YAML file to get its merge config, which is easier to implement but more complicated for the user.
And if you wanted to use `MergeConfiguration` directly from mergekit:
```python
from mergekit.config import MergeConfiguration
merge_config_dict = {
"dtype": "float16",
"merge_method": "linear",
"models": [
{"model": "path_to_model_1", "parameters": {"weight": 0.4}},
{"model": "path_to_model_2", "parameters": {"weight": 0.6}},
],
}
config = MergeConfiguration.model_validate(merge_config_dict)
```
As you add more parameters to the configuration, the dictionary becomes increasingly nested.
The current implementation, although harder to maintain, simplifies everything for the user:
```python
from trl.mergekit_utils import MergeConfig
config = MergeConfig("linear")
config.policy_model_weight = 0.4
config.target_model_weight = 0.6
``` | 2,282 |
qgallouedec | 2024-11-19T11:03:06 | That makes sense.
Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`? | 2,282 |
August-murr | 2024-11-19T13:02:02 | > That makes sense.
> Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`?
I'll figure it out. | 2,282 |
August-murr | 2024-11-19T19:10:37 | > That makes sense. Do you think we can get the best of both worlds by making `trl.MergeConfig` inherit from `mergekit.config.MergeConfiguration`?
The main issue with using Mergekit's `MergeConfiguration` directly is that it’s not really designed to work on its own. It relies heavily on dictionaries, usually loaded from a YAML file, or on a set of helper classes from `mergekit` to set things up:
```python
class MergeConfiguration(BaseModel):
merge_method: str
slices: Optional[List[OutputSliceDefinition]] = None
models: Optional[List[InputModelDefinition]] = None
parameters: Optional[Dict[str, ParameterSetting]] = None
base_model: Optional[ModelReference] = None
dtype: Optional[str] = None
tokenizer_source: Union[
Literal["union"], Literal["base"], ModelReference, None
] = None
tokenizer: Optional[TokenizerConfig] = None
chat_template: Optional[str] = None
out_dtype: Optional[str] = None
```
If someone wanted to set up the configuration manually, they’d either need to:
1. Write or add to a YAML file, or
2. Write a big, nested dictionary themselves (which only gets more complicated as you add more details), or
3. Use multiple classes from `mergekit` (e.g., `OutputSliceDefinition`, `InputModelDefinition`, etc.), as seen [here](https://github.com/arcee-ai/mergekit/blob/57e7d14e2a732f532970e2c9dada00e2d8f15a7a/mergekit/config.py#L85).
None of these options is user-friendly.
I admit the current implementation looks messy, but the alternative would create more complications for the user. Maybe in future versions, the Mergekit team will make `MergeConfiguration` simpler and easier to work with. | 2,282 |
August-murr | 2024-11-20T16:28:46 | @qgallouedec
Anything else you'd want me to do? | 2,282 |
qgallouedec | 2024-11-21T11:21:56 | LGTM thanks!
I've just applied some minor refinements:
- compat with Windows file paths
- use tmp dir in tests
- sort imports and functions
- common method for saving and pushing in the callback
- add "trl" to model tags | 2,282 |
August-murr | 2024-11-21T11:53:12 | @qgallouedec
About the failed tests:
The tests do not fail on Ubuntu; they only fail on Windows. I realized that the issue arose from a permission error in the temporary directory when trying to delete the merged files, specifically `model.safetensors`. | 2,282 |
qgallouedec | 2024-11-21T11:58:22 | > @qgallouedec About the failed tests: The tests do not fail on Ubuntu; they only fail on Windows. I realized that the issue arose from a permission error in the temporary directory when trying to delete the merged files, specifically `model.safetensors`.
Ah thanks, I was debugging, but I don't have access to a Windows VM right now (explains https://github.com/huggingface/trl/pull/2282/commits/fa5bafe617793ed340303cf0ebded6ac03cab39f). Any idea how to solve it? | 2,282 |
qgallouedec | 2024-11-21T12:41:58 | Found a solution with a57d88a1b317785fa85e3b09bd463ecb0b9eef06 | 2,282 |
August-murr | 2024-11-21T13:10:22 | @qgallouedec
Sorry I wasn't able to sort it out myself. | 2,282 |
qgallouedec | 2024-11-21T14:32:33 | No worry, thanks a lot for this nice addition! | 2,282 |
qgallouedec | 2024-11-05T10:38:20 | Is the use of this type of procedure common in the community/literature? Do you have any reference results? | 2,280 |
qgallouedec | 2024-10-25T14:37:34 | Thanks for this. Indeed I realized it while working on #2209 | 2,279 |
HuggingFaceDocBuilderDev | 2024-10-25T14:40:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2279). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,279 |
seanexp | 2024-10-25T02:58:57 | What is the primary difference between this PR and #1628 ? | 2,278 |
mnoukhov | 2024-10-25T14:06:30 | This is an updated, multi-GPU extension of #1628. It is also joint work between @vwxyzjn and me!
Instead of keeping the vllm models on the same GPU, we move them to another one. It also uses the more flexible `vllm_utils.py` written by @vwxyzjn in `allenai/open_instruct` (https://github.com/allenai/open-instruct/blob/main/open_instruct/vllm_utils.py), which allows using any version of `vllm`, as opposed to the fixed `0.4.2` from #1628.
Finally, this has been tested and verified to match regular Online DPO performance while being faster and more efficient; see our new preprint https://arxiv.org/abs/2410.18252 | 2,278 |
HuggingFaceDocBuilderDev | 2024-10-28T13:17:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2278). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,278 |
HuggingFaceDocBuilderDev | 2024-10-25T13:20:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,277 |
HuggingFaceDocBuilderDev | 2024-10-24T20:48:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2276). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,276 |
qgallouedec | 2024-10-25T10:11:11 | Results for a gemma reward model
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--reward_model_path Ray2333/GRM-Gemma-2B-rewardmodel-ft \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-GRM-Gemma \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--push_to_hub
```
https://wandb.ai/huggingface/huggingface/runs/520cnnjl
For ref, with Pair RM judge instead:
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--judge pair_rm \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-PairRM \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--push_to_hub
```
https://wandb.ai/huggingface/huggingface/runs/ffd4u5wa
<img width="1685" alt="Screenshot 2024-10-25 at 14 30 30" src="https://github.com/user-attachments/assets/433ba62a-8d76-48eb-9172-e0e61c3c9d3a">
| 2,276 |
qgallouedec | 2024-10-28T15:00:07 | > Have you done a test run of e.g. trying to optimise Qwen2.5-0.5B-Instruct with the 7B ArmoRM model?
ArmoRM is a custom classifier (the code for using it is not standard), so our `get_reward` function probably won't work for it. However, by modifying the code a little, I still managed to use it, and this is what I get:
https://wandb.ai/huggingface/huggingface/runs/merlfqgx (screenshot to come)
```
accelerate launch examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--reward_model_path RLHFlow/ArmoRM-Llama3-8B-v0.1 \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--logging_steps 10 \
--output_dir Qwen2-0.5B-OnlineDPO-AutoRM \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--push_to_hub
```
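For reference, the modification is roughly of this shape (a sketch based on the ArmoRM model card, not the exact code used for this run; in particular, the `output.score` attribute comes from ArmoRM's custom modeling code and is an assumption here):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

path = "RLHFlow/ArmoRM-Llama3-8B-v0.1"
model = AutoModelForSequenceClassification.from_pretrained(
    path, device_map="cuda", trust_remote_code=True, torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(path)

messages = [
    {"role": "user", "content": "What is TRL?"},
    {"role": "assistant", "content": "TRL is a library to post-train LLMs."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model(input_ids)
    # ArmoRM exposes a scalar preference score on its custom output object,
    # rather than the plain sequence-classification logits that `get_reward` expects.
    reward = output.score.float().item()
```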
<img width="1189" alt="Screenshot 2024-10-28 at 16 50 30" src="https://github.com/user-attachments/assets/da2deffd-8c84-42e5-a996-18ba47629b95"> | 2,276 |
qgallouedec | 2024-10-24T20:30:53 | The issue has been solved with #2246
TRL 0.11.4 is not compatible with Transformers 4.46.
We will release TRL 0.12 very soon | 2,275 |
swamymushini | 2024-10-30T17:15:44 | What is the working fix for this issue now? Which library versions can we use as a temporary solution? Should we downgrade Transformers?
| 2,275 |
bibhudutta-p | 2024-10-30T17:19:19 | Yes, use the latest version of TRL and v4.45.2 of Transformers. This fixed it for me. | 2,275 |
swamymushini | 2024-10-30T17:21:53 | > Yes, use the latest version of TRL and v4.45.2 of Transformers. This fixed it for me.
You mean TRL 0.11.4? | 2,275 |
bibhudutta-p | 2024-10-30T17:31:34 | yes | 2,275 |
swamymushini | 2024-10-30T17:33:43 | > yes
Thanks a lot, it worked for me!
| 2,275 |
qgallouedec | 2024-10-24T18:49:40 | Nice! Thanks @zhanwenchen! | 2,274 |
HuggingFaceDocBuilderDev | 2024-10-24T18:54:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2274). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,274 |
qgallouedec | 2024-10-24T18:27:09 | PPO expects `reward_model` to be a model (a torch module), not a function. | 2,273 |
HuggingFaceDocBuilderDev | 2024-10-24T15:52:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,272 |
HuggingFaceDocBuilderDev | 2024-10-24T10:06:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2270). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,270 |
qgallouedec | 2024-10-25T16:12:25 | Some tests are failing due to PairRM loading: it is fixed in #2276, you can safely ignore it | 2,270 |
edbeeching | 2024-10-28T09:30:36 | Hi @cutecharmingkid, unfortunately the answer is not trivial. Does the domain of your task match the tasks used to fine-tune the base vision-instruct model? I would imagine 10k-100k examples would be enough, but I have not tested extensively. | 2,269 |
qgallouedec | 2024-10-25T16:02:36 | Thanks for reporting, please share your system info | 2,268 |
Isaaclgz | 2024-10-27T05:14:50 | > Thanks for reporting, please share your system info
Thanks for looking into this!
System:
Debian 11
Python 3.10
1xA100-80GB
Nvidia driver 550.90.07, CUDA 12.4
(running this on a GCP CE instance based on the c0-deeplearning-common-cu123-v20240922-debian-11-py310 image)
Env:
torch==2.4.0
transformers==4.44.0
trl==0.11.3
flash-attn==2.6.3
accelerate==1.0.1
| 2,268 |
chenyang399 | 2024-11-08T04:40:19 | Is there any chance we can run the KTO script on a 24GB GPU?
| 2,268 |
qgallouedec | 2024-10-24T18:10:55 | Thanks @cameronphchen! | 2,266 |
HuggingFaceDocBuilderDev | 2024-10-24T18:15:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2266). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,266 |
qgallouedec | 2024-10-23T08:12:32 | Thanks for reporting, it should have been fixed with #2261. Can you confirm? | 2,264 |
ArcherShirou | 2024-10-24T02:28:19 | Thank you for your response. After updating the code and testing it, everything is running smoothly now. For the 14B and 72B models, quantization is necessary when using the 0.5B reward model. However, if I switch to the 70B or 72B reward model, I still encounter out-of-memory (OOM) issues midway, even with quantization and LoRA applied. Do you have any good solutions for this? | 2,264 |
qgallouedec | 2024-10-24T18:34:55 | You can try reducing the generation length. Closing the issue as the initial question is answered | 2,264 |
HuggingFaceDocBuilderDev | 2024-10-24T13:49:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2263). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,263 |
qgallouedec | 2024-11-23T12:50:57 | Looks good overall. Feel free to request a final review from me when you think it's ready to be merged | 2,263 |
HuggingFaceDocBuilderDev | 2024-10-21T16:47:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2261). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,261 |
qgallouedec | 2024-10-21T15:04:46 | Thanks @cameronphchen! | 2,259 |
HuggingFaceDocBuilderDev | 2024-10-21T15:08:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,259 |
qgallouedec | 2024-10-24T13:01:30 | Thanks for the PR! However, I was actually considering simply removing this bot. In my opinion, it's fine to leave issues open for extended periods. I generally review all the issues and follow up when more information is needed and there hasn't been any activity for a while. From my experience, this bot tends to close issues that should remain open more often than it helps track active ones. See #1949 #1956.
What's more, the bot doesn't seem to have been working for a while, and nobody here seems to miss it.
What do you think @lewtun @kashif? | 2,258 |
Ananya54321 | 2024-10-25T02:02:26 | Ohh that makes sense! Thank you for responding!
| 2,258 |
lewtun | 2024-10-28T20:07:28 | Yes I agree, let's disable the bot since it's more of a nuisance than a help | 2,258 |
qgallouedec | 2024-11-11T23:16:04 | Close as a consequence of #2300 | 2,258 |
SinclairCoder | 2024-10-21T18:07:30 | I solved it with torchrun launch. | 2,257 |
Qinghao-Hu | 2024-10-22T01:37:47 | same problem
| 2,257 |
SinclairCoder | 2024-10-22T11:50:10 | @Qinghao-Hu launch it with torchrun if also a multigpu training case. | 2,257 |
innat | 2024-10-24T07:31:44 | What does this mean? ([source](https://huggingface.co/docs/accelerate/usage_guides/big_modeling))
> Multiple GPUs, or “model parallelism”, can be utilized but only one GPU will be active at any given moment. This forces the GPU to wait for the previous GPU to send it the output. You should launch your script normally with Python instead of other tools like torchrun and accelerate launch.
> You may also be interested in pipeline parallelism which utilizes all available GPUs at once, instead of only having one GPU active at a time. This approach is less flexible though. For more details, refer to the [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-pipeline-parallelism-experimental) guide.
| 2,256 |
gaetanlop | 2024-10-22T00:27:31 | Hey @mertege, adding the possibility to store teacher logits in the `GKDTrainer` is only useful when setting the parameter `lmbda` to 0 (which corresponds to standard KD). The whole point of GKD is to enable on-policy KD (KD on sequences generated by the student), which means that we cannot store teacher logits offline during a pre-processing step. | 2,255 |
mertege | 2024-10-22T07:03:50 | Thanks for reply @gaetanlop. | 2,255 |
qgallouedec | 2024-10-21T16:50:10 | > all latest
can you run `trl env` please? | 2,254 |
qgallouedec | 2024-10-21T16:50:37 | Also please provide the full traceback | 2,254 |
saxenarohit | 2024-10-21T17:42:36 | Thanks
```
- Platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version: 2.2.0a0+81ea7a4
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.26.0
- TRL version: 0.12.0.dev0
- bitsandbytes version: 0.43.1
- DeepSpeed version: not installed
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.13.
```
There is no traceback; it's a request to check for a possible bug.
During evaluation, in the `collate_fn`:
`labels = batch["input_ids"].clone()`
Won't the `input_ids` possibly contain the gold answer during evaluation?
| 2,254 |
edbeeching | 2024-10-23T08:45:08 | Hi @saxenarohit. This is normal, we are just looking at the eval loss. I think you might be thinking of a generative eval, where, given a prompt, `model.generate` is used to autoregressively compute an answer, which can then be compared to the ground-truth "gold answer" (roughly the distinction sketched below). I will close the issue, but feel free to reopen if needed. | 2,254 |
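Roughly, the distinction looks like this (a minimal sketch; the model is just a small one borrowed from elsewhere in this thread):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt, gold = "What is 2 + 2? ", "4"

# Loss-based eval (what the collate_fn above does): the gold answer is part of both
# input_ids and labels, so it is scored by the loss rather than leaked into a generation.
enc = tokenizer(prompt + gold, return_tensors="pt")
eval_loss = model(**enc, labels=enc["input_ids"].clone()).loss

# Generative eval (a different setup): only the prompt is fed to generate(), and the
# produced completion is compared to the gold answer afterwards.
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
completion = tokenizer.decode(model.generate(prompt_ids, max_new_tokens=8)[0])
```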
qgallouedec | 2024-10-19T17:13:40 | This is because you need to provide a split dataset (containing both a training split and an evaluation split) when you use the TRL scripts.
I realize this has the following limitations:
- when you're not evaluating, you still need to have a split dataset
- you may want the script to split the dataset when necessary
This could be solved by adding something like:
```python
if training_args.eval_strategy != "no" and script_args.dataset_test_split not in dataset:
    dataset = dataset[script_args.dataset_train_split].train_test_split(test_size=0.05)
...
trainer = AnyTrainer(
    ...
    train_dataset=dataset[script_args.dataset_train_split],
    eval_dataset=dataset[script_args.dataset_test_split] if training_args.eval_strategy != "no" else None,
    ...
)
```
WDYT @kashif @lewtun ? Is this situation common enough to justify this addition?
| 2,253 |
lewtun | 2024-10-24T09:34:00 | I don't think we should automatically generate a test split for the user (it's a bit too much magic), but I would be in favour of having the logic to set `eval_dataset` to `None` if no eval strategy is provided
| 2,253 |
qgallouedec | 2024-10-24T09:36:01 | > I don't think we should automatically generate a test split for the user (it's a bit too much magic), but I would be in favour of having the logic to set `eval_dataset` to `None` if no eval strategy is provided
Sounds reasonable. | 2,253 |
HuggingFaceDocBuilderDev | 2024-10-18T22:38:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2252). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,252 |
qgallouedec | 2024-10-20T13:52:17 | Thanks for the PR! Can you just run `make precommit` | 2,252 |
ngxson | 2024-10-20T22:25:27 | @qgallouedec Thanks! Should be good now | 2,252 |
qgallouedec | 2024-10-21T07:35:04 | It seems like this case occurs twice in our tests:
```
FAILED tests/test_dataset_formatting.py::SetupChatFormatTestCase::test_example_with_setup_model - ValueError: Chat template is already added to the tokenizer. If you want to overwrite it, please set it to None
FAILED tests/test_dataset_formatting.py::SetupChatFormatTestCase::test_setup_chat_format - ValueError: Chat template is already added to the tokenizer. If you want to overwrite it, please set it to None
```
Can you update the examples so that they use this function correctly? | 2,252 |
qgallouedec | 2024-10-22T10:39:33 | Lgtm, thanks @ngxson | 2,252 |
ngxson | 2024-10-22T10:47:07 | Thanks! I don't have merge permission, so please merge when you want 🤗 | 2,252 |
kashif | 2024-10-21T11:04:55 | @gaetanlop can we use the `pad` helpers?
```py
# Use pad helper to handle padding
padded_query_responses = pad(query_responses, padding_value=pad_token_id, padding_side="right")
padded_logitss = pad(logitss, padding_value=0, padding_side="right")
```
| 2,251 |
gaetanlop | 2024-10-21T15:05:37 | @kashif, ~~the `pad` function expects the tensor to have no leading dimension corresponding to the batch size.~~
Here is an example `query_responses`:
```python
query_responses = [
torch.randint(vocab_size, (bs, seq_length1)),
torch.randint(vocab_size, (bs, seq_length2)),
torch.randint(vocab_size, (remaining_samples, seq_length3))
]
```
~~Using the `pad` function as it is would require the following change before passing the `query_responses` to the `pad` function:~~
```python
query_responses=[query_reps[i] for query_reps in query_responses for i in range(query_reps.size(0))]
```
~~We can also change the pad function? What do you prefer?~~
After looking more closely at the `pad` function, you are right: we can use it as it is, it just requires reshaping the tensors afterwards.
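Concretely, the reshaping would look roughly like this (a sketch, assuming the `pad` helper from `trl.trainer.utils`, which pads every dimension of each tensor to the per-dimension maximum):
```python
import torch
from trl.trainer.utils import pad

pad_token_id, vocab_size, bs = 0, 32, 4
# Same shapes as the example above: [batch_size, seq_len_i] tensors, last batch smaller.
query_responses = [
    torch.randint(vocab_size, (bs, 13)),
    torch.randint(vocab_size, (bs, 17)),
    torch.randint(vocab_size, (2, 9)),
]
# pad() returns [num_batches, max_batch_size, max_seq_len]; flatten the two leading dims.
padded = pad(query_responses, padding_value=pad_token_id, padding_side="right")
padded = padded.view(-1, padded.shape[-1])
# Caveat: a smaller final batch is also padded along the batch dimension, so those
# all-padding rows still need to be dropped (or the batches flattened before padding).
```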
I'm going to make the update, thanks for pointing it out! | 2,251 |
HuggingFaceDocBuilderDev | 2024-10-21T16:26:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,251 |
gaetanlop | 2024-10-21T16:34:19 | This won't work @kashif, it still requires reshaping the tensors
| 2,251 |
kashif | 2024-10-21T16:35:13 | ah damn! my bad sorry! | 2,251 |
gaetanlop | 2024-10-21T16:49:21 | No problem, this should be fixed now
| 2,251 |
JiahuiSun | 2024-10-27T01:37:26 | I also met the same issue. I use the official example script, dpo_online.py, to train a 75b LLM with a 75b reward model. Even with 60x8 H100 GPUs, the problem still happens. Any help please? | 2,250 |
lewtun | 2024-10-29T05:53:16 | Hello @hlnchen would you mind sharing a reproducible example that uses the `unwrap_model_for_generation()` method in a simple training loop that simulates your application? | 2,250 |
KAKSIS | 2024-11-08T06:46:37 | I encountered a similar issue while training a 72B model on an 8x H100 (80GB) setup. I’m using the Hugging Face Online DPO trainer scripts from [this link](https://huggingface.co/docs/trl/main/en/online_dpo_trainer). To reduce GPU memory usage, I've substituted the reward model with a random judge, so no reward model is loaded in GPU memory.
However, when running the code in ZeRO-3 offload mode, I encounter a CUDA out-of-memory (OOM) error at the `unwrap_model_for_generation` step, specifically in `trl.trainer.online_dpo_trainer` on line 395.
It seems that when executing this call, each process/GPU gathers the parameters distributed across the other processes, resulting in OOM. In debug mode, I can observe that the memory usage of each GPU jumps from 20GB to 80GB at that point.
Does anyone know what `unwrap_model_for_generation` actually does in ZeRO-3 mode? (A simplified sketch follows after my script below.)
Here is my script:
```python
from datasets import load_dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer
from transformers import AutoTokenizer
from typing import List, Optional, Union
class TestJudge():
def judge(self, prompts: List[str], completions: List[List[str]], return_scores=False) -> List[Union[int, float]]:
return [0]*len(prompts)
model_path = "Qwen2.5-72B-Instruct" #path to 72B model
judge = TestJudge()
data_path = "trl-lib/ultrafeedback-prompt"#path to dataset
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
train_dataset = load_dataset(data_path, split="train")
training_args = OnlineDPOConfig(output_dir="online-dpo", logging_steps=2, bf16=True, fp16=False, per_device_train_batch_size=1, max_new_tokens=2048,
num_train_epochs=5, gradient_accumulation_steps=2, save_only_model=True,
save_steps=2000, save_total_limit=2)
trainer = OnlineDPOTrainer(
model=model_path,
ref_model=model_path,
judge=judge,
args=training_args,
processing_class=tokenizer,
train_dataset=train_dataset,
)
trainer.train()
#In OnlineDPOTrainer.__init__
#from transformers import AutoModelForCausalLM
#ref_model = AutoModelForCausalLM.from_pretrained(model, local_files_only=True)
#model = AutoModelForCausalLM.from_pretrained(model, local_files_only=True)
``` | 2,250 |
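For context, under ZeRO-3 `unwrap_model_for_generation` gathers the full sharded parameters on every rank so that `.generate()` can run on complete weights, roughly like this (a simplified sketch, not the exact TRL implementation):
```python
from contextlib import contextmanager

import deepspeed
from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled


@contextmanager
def unwrap_model_for_generation(model, accelerator):
    unwrapped = accelerator.unwrap_model(model)
    if is_deepspeed_zero3_enabled():
        # Every rank materializes the full parameter set in GPU memory here, which is
        # why per-GPU memory jumps at this step for a 72B model under ZeRO-3 offload.
        with deepspeed.zero.GatheredParameters(model.parameters()):
            yield unwrapped
    else:
        yield unwrapped
```
So the 20GB to 80GB jump is the expected cost of gathering the full model for generation rather than a leak; generating with a separate vLLM copy of the model, as discussed for #2278 above, is one way around it.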