Columns: user (string, 3-28 chars), created_at (timestamp[us]), body (string, 1-173k chars), issue_number (int64, 1-2.4k)
lvwerra
2023-01-30T11:51:34
Thanks @vwxyzjn for the clarification of the nomenclature! I think the hyperparameters you are citing are for the initialization of the policy before the PPO training. For the PPO training they mention:

> The batch size for each iteration is 512, with a minibatch size of 64. In other words, each batch is randomly split into 8 minibatches and is trained on for only a single inner epoch (Schulman et al., 2017).

So indeed a mini-bs>1 is used. I think we can address that quite easily with #100 if we use the attention mask to mask out the appropriate parts of the input. cc @younesbelkada
72
vwxyzjn
2023-02-06T15:22:48
> Does this mean this is building multi-envs to collect rollouts?

I think multi-envs in this case is kind of like multiple instances of conversations :)

> The batch size for each iteration is 512,

Ah, my mistake. Thanks for the info 🙏

> So indeed a mini-bs>1 is used. I think we can address that quite easily with https://github.com/lvwerra/trl/pull/100 if we use the attention mask to mask out the appropriate parts of the input. cc @younesbelkada

Sorry, I am probably missing something... What parts of the input should we mask out related to the minibatch size? It sounds like a minibatch of size 64 would mean 64 independent prompts as obs, 64 responses as actions, and 64 scalar rewards. We are trying to mask out the future tokens in each of these 64 prompts, right?
72
lvwerra
2023-02-07T09:47:58
@vwxyzjn mostly a practical thing: when we batch 64 sequences together, which can have unequal lengths, we need to pad the tensors. In transformers the tensors then usually come with an attention mask telling you where the padding is: we can use it to know where each prompt/response starts and ends and which padding positions we can ignore (sketched below).
72
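To illustrate the point above, here is a minimal sketch (not the actual trl code) of how an attention mask over a padded batch marks which positions are real tokens and which are padding, so padded positions can be excluded from a per-token loss; the tensors and the pad token id are made up for the example.

```python
import torch

pad_token_id = 0
# Three sequences of unequal length, padded to a common length:
input_ids = torch.tensor([
    [5, 8, 9, pad_token_id, pad_token_id],
    [3, 4, 7, 6, 2],
    [1, 9, pad_token_id, pad_token_id, pad_token_id],
])
attention_mask = (input_ids != pad_token_id).long()  # 1 = real token, 0 = padding

# Stand-in for a per-token loss computed over the whole padded batch:
per_token_loss = torch.rand(input_ids.shape)

# Average only over real tokens; padded positions contribute nothing:
masked_loss = (per_token_loss * attention_mask).sum() / attention_mask.sum()
```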
younesbelkada
2023-01-04T20:11:08
Hi, yes, we are currently refactoring the repository to make it more accessible for more models and to support distributed training. If you want to use the examples in the notebooks, please use `trl` from the previous release (`pip install trl`). Check #64
71
HuggingFaceDocBuilderDev
2023-01-04T09:16:34
_The documentation is not available anymore as the PR was closed or merged._
70
HuggingFaceDocBuilderDev
2023-01-01T08:23:29
_The documentation is not available anymore as the PR was closed or merged._
69
HuggingFaceDocBuilderDev
2022-12-31T06:49:33
_The documentation is not available anymore as the PR was closed or merged._
68
lewtun
2023-01-05T12:23:33
Thanks for the comments @lvwerra ! I left a few questions that could do with your feedback - in the meantime I'll add some tests :)
68
lewtun
2023-01-23T15:52:16
🔴 Don't merge until I have a fix! Hmm, using the staging endpoint of the Hub for the test is causing some issues because I rely on `whoami()` to get the username in the model card, and that method doesn't allow me to distinguish between endpoints
68
HuggingFaceDocBuilderDev
2022-12-30T10:28:17
_The documentation is not available anymore as the PR was closed or merged._
67
HuggingFaceDocBuilderDev
2022-12-30T10:00:41
_The documentation is not available anymore as the PR was closed or merged._
66
lvwerra
2022-12-30T10:03:15
This should also address #42
66
HuggingFaceDocBuilderDev
2022-12-30T08:56:32
_The documentation is not available anymore as the PR was closed or merged._
65
LouisCastricato
2023-01-08T16:58:18
BTW, I can confirm that SetFit does make for a really good zero-shot RM. There are some issues with using contrastive models as RMs, though. It often requires very careful data cleaning, and identifying what kinds of clusters work as RMs is a dark art, to the point where we decided it wasn't worth seriously pursuing further after CARP CoOp. Rerank models are much better.
64
TristanThrush
2023-01-19T19:29:26
I think that the "coolest" dataset we can use to train a model could be https://huggingface.co/datasets/openai/webgpt_comparisons, but it is hard to evaluate this sort of model after we train it. I might start by adding a summarization example, along with some decent ways to evaluate it, and then the WebGPT comparisons example.
64
AlexWortega
2023-01-24T17:56:46
https://colab.research.google.com/drive/1hkPBFtMP5xBAjNYMjWH7NqYn118kRLOJ?usp=sharing I am trying to implement my own GPT + trl with a QA retrieval reward, but I think something is wrong with the reward and/or the generation.
64
natolambert
2023-02-07T01:15:06
@AlexWortega can you open a separate issue / PR for this? Looks interesting, but it may get lost in this big 1.0 roadmap thread.
64
lvwerra
2023-02-07T09:38:15
We ended up calling this release `0.2` (not `1.0`). I am closing the issue and will move the open tasks to a new issue.
64
AlexWortega
2023-02-16T08:58:42
Hi @lvwerra, I opened PR https://github.com/lvwerra/trl/pull/149 with this feature(?) idea.
64
HuggingFaceDocBuilderDev
2022-12-29T17:19:48
_The documentation is not available anymore as the PR was closed or merged._
63
HuggingFaceDocBuilderDev
2022-12-30T08:56:09
_The documentation is not available anymore as the PR was closed or merged._
62
lvwerra
2022-12-30T08:59:44
All comments should be addressed. Also applied the quality checks to the recent merges.
62
HuggingFaceDocBuilderDev
2022-12-30T08:42:02
_The documentation is not available anymore as the PR was closed or merged._
61
HuggingFaceDocBuilderDev
2022-12-27T17:59:06
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_59). All of your documentation changes will be reflected on that endpoint.
59
younesbelkada
2022-12-29T11:55:32
wandb run (multi-GPU) after the latest commit: https://wandb.ai/distill-bloom/trl/runs/1mps4h09?workspace=user-younesbelkada
58
younesbelkada
2022-12-29T17:28:09
Wandb log of the final run: https://wandb.ai/distill-bloom/trl/runs/dcd2gqn1?workspace=user-younesbelkada
58
HuggingFaceDocBuilderDev
2022-12-29T17:28:46
_The documentation is not available anymore as the PR was closed or merged._
58
lvwerra
2023-01-13T15:39:35
Regarding 1): see equation (11) in https://arxiv.org/abs/1506.02438 (the GAE estimator is written out after this comment). Regarding 2): yes, you are correct.
57
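The linked paper is the generalized advantage estimation (GAE) paper; for reference, and without claiming this is exactly the equation numbered (11) there, the estimator it introduces is:

```latex
% TD residual and generalized advantage estimator (Schulman et al., 2015),
% with discount \gamma and GAE parameter \lambda:
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad
\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \, \delta_{t+l}
```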
lvwerra
2023-01-13T15:35:52
It seems like the reward of your model increases, no? So maybe it's worth investigating whether the classifier actually works well?
56
lvwerra
2023-01-13T15:40:17
Also, the KL-divergence is allowed to rise, but the controller should at some point bring it back down.
56
lvwerra
2022-12-21T07:38:31
Coming soon - see #53!
54
22Mukesh22
2022-12-22T05:48:51
That's great, waiting for GPT-J to learn through human feedback! But what are your thoughts: will a BERT classifier be able to reward the generated text, or will there be another reward model that can give a score for the generated task?
54
conceptofmind
2022-12-28T03:35:55
Are we able to use any Causal LLM from the model hub now that #53 is merged?
54
lvwerra
2023-01-13T15:25:51
Yes, that should work!
54
younesbelkada
2022-12-21T12:17:55
Seems to be converging with the latest changes: https://wandb.ai/distill-bloom/gpt2-test/runs/1sxufahx?workspace=user-younesbelkada
53
younesbelkada
2022-12-19T21:25:11
Moved all images inside the org https://huggingface.co/trl-internal-testing and fixed all image links in the README + notebooks with the correct ones. Also, as discussed, I removed the 3 first notebooks ;) Let me know what is missing here!
52
lvwerra
2022-12-20T08:48:43
Seems not possible https://stackoverflow.com/questions/66587174/how-to-remove-generated-from-tag
52
younesbelkada
2022-12-20T08:50:41
Thanks for the review! I should have removed the CI, done the renaming of the files ;-)
52
younesbelkada
2022-12-14T13:37:29
For now I am testing my implementation with `accelerate launch example/ppo-accelerate.py`
50
younesbelkada
2022-12-15T10:49:55
Regarding tests, this is tricky, but from what I can see we can for now:
- Test that all trainers respect the inheritance from `BaseTrainer` (by checking that all the needed functions are implemented)
- Test that all models work as expected (thinking of the `generate` method) and that we can in fact support all `xxxForCausalLM` architectures as claimed above (see the sketch after this comment).

From what I can see, as long as the model has a proper `generate` method the PPOTrainer should work.
50
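A minimal pytest-style sketch of the second check described above; the model ids are just examples, and the test only verifies that a `generate` method exists, which is the property the comment relies on.

```python
import pytest
from transformers import AutoModelForCausalLM

@pytest.mark.parametrize("model_id", ["gpt2", "EleutherAI/gpt-neo-125M"])
def test_causal_lm_has_generate(model_id):
    # Any xxxForCausalLM model should expose `generate` for collecting rollouts.
    model = AutoModelForCausalLM.from_pretrained(model_id)
    assert callable(getattr(model, "generate", None))
```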
younesbelkada
2022-12-27T12:50:55
Closing in favor of https://github.com/lvwerra/trl/pull/58
50
lvwerra
2022-12-07T09:30:10
Thanks, I'll fix that!
48
lvwerra
2023-01-30T11:59:33
Should be fixed with #80.
48
lvwerra
2022-12-07T09:30:28
Thanks, I'll fix that! 🤗
47
lvwerra
2022-12-21T10:29:36
Closed with #49
47
Alymostafa
2022-11-18T03:48:50
Try working in a new env and installing the transformers library again. Also, make sure pyarrow is installed and can be imported.
46
lvwerra
2022-12-07T09:31:26
This seems like an issue with the `tokenizers` library. Can you try installing it alone with `pip install tokenizers`?
46
lvwerra
2022-12-07T09:43:50
Thanks, the README is from `nbs/index.ipynb` so this is a limitation of `nbdev`. Might remove that in the next iteration.
45
JulesGM
2022-12-07T16:46:43
Weird that nbdev doesn't do that, maybe sending a pull request their way would be good.
45
lvwerra
2023-01-30T12:05:38
Interesting, you might be right! I'll have a look at this :)
44
lvwerra
2023-02-07T15:09:29
Should be fixed now :)
44
clam004
2022-08-30T22:23:27
So I did some research on my own and basically my first 2 questions can be answered by looking at the huggingface transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py
43
danjohnvelasco
2022-09-08T01:32:44
> So I did some research on my own and basically my first 2 questions can be answered by looking at the huggingface transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py Hi @clam004, do you mind explaining your answer/understanding on why they do it? Thanks!
43
clam004
2022-12-14T22:00:52
@danjohnvelasco as long as you use the same name `self.lm_head`, when you load the pretrained model from the dictionary of parameters these linear parameters will be replaced with the trained ones. That's why the model still works (question 2). Also, regarding question 3, I suspect it somehow doesn't matter, although I'm not sure why, because when I run this repo without the dropout layer it behaves the same, as expected.
43
lvwerra
2023-01-13T15:33:57
Regarding 3, I agree, and we moved the dropout before the linear layer in https://github.com/lvwerra/trl/pull/70 (sketched below).
43
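A minimal sketch of the change discussed above: a value head where dropout is applied to the hidden states before the linear projection. The class and argument names are illustrative, not trl's exact implementation.

```python
import torch.nn as nn

class ValueHead(nn.Module):
    """Maps hidden states to a scalar value per token; dropout comes
    before the linear layer, as discussed in the comment above."""
    def __init__(self, hidden_size: int, dropout_prob: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout_prob)
        self.summary = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states):
        return self.summary(self.dropout(hidden_states))
```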
lvwerra
2022-12-07T09:41:54
Soon? :P
42
lvwerra
2022-12-30T10:03:49
Closing this in favour of #66. Let me know if you had something else in mind and we can re-open :)
42
MichaelKarpe
2023-01-08T17:30:25
Hey, sorry for not coming back sooner on this with an explanation; I wanted to provide evidence that the proposed changes were necessary, as it was a change in the requirements. If I remember well, I needed `transformers>=4.15.0` and I couldn't make it work without `wandb>=0.12.17`. The `wandb>=0.12.17` change could eventually still be needed; however, this change is not urgent, as an installation from scratch should install the most recent version. I will eventually check later whether the project can work without `wandb>=0.12.17`, but this time I am not providing a timeline on when I'll check this! :slightly_smiling_face:
42
parshinsh
2022-09-19T15:59:20
I confirm that this issue happens. I'm facing the same problem with my own task. Can anyone help with this?
41
Alymostafa
2022-10-31T03:12:31
same problem here with a longer sequence. @vblagoje @lvwerra
41
Alymostafa
2022-11-18T03:45:49
@adhitya-synth I used the same configuration as you mentioned and found that when the batch size is small it happens as you said, but with a larger batch size, as in the notebook, the reward increases.
41
hdvvip
2022-11-18T03:58:30
Recently, I came across OpenAI's InstructGPT, which is an upgraded version of GPT-3 that has been trained with reinforcement learning. The reinforcement learning they used for training InstructGPT is PPO, which is implemented in this GitHub repository. Related to the problem that the reward is stagnant or going down: I think even OpenAI (the fathers of PPO) face the same issue. Please see Figure 13 below.

"As shown in Figure 13, the reward saturates after the initial 400k examples of training."

![Selection_1566](https://user-images.githubusercontent.com/42698038/202613363-c47bc6c4-cc30-45f6-b8de-30d436a6b687.png)

Here is the InstructGPT paper: https://arxiv.org/pdf/2203.02155.pdf
41
hdvvip
2022-11-18T04:01:20
Thus, based on the OpenAI experiments in the InstructGPT paper, I think it depends on the dataset you used to train your model. In OpenAI's case, with the best implementation of PPO, they still failed to improve the rewards when they trained GPT-3 with PPO on the FLAN and T0 datasets.

![Selection_1567](https://user-images.githubusercontent.com/42698038/202613922-a35816a5-a367-40a6-a6bf-72ca71c04322.png)
41
hdvvip
2022-11-18T04:20:16
Thus, if you use PPO on your task and it doesn't work, don't be surprised! As I said above, PPO will work for some tasks and not for others.
41
Alymostafa
2022-11-18T05:12:10
Thanks for the clarification. But I was pointing out that, based on his observations, what he mentioned happens when the batch size is small; when I increased the batch size I was able to reproduce the same results as in the notebook.
41
hdvvip
2022-11-18T05:46:25
Well, I think we have some misunderstanding here. I didn't specifically mention you in my post. I just wanted to explain to everyone here that, depending on your task, PPO may or may not work. So it's not your fault when PPO fails on your NLP task. Everyone here has a different task, so my answer didn't have anything to do with batch size. BTW, OpenAI used a batch size of 128 but still failed.
41
lvwerra
2022-12-07T09:37:03
Thanks for the discussion here. Indeed, it can depend a lot on the hyperparameters as well as the task. Great that you found that increasing the BS works. I think this is still a very underexplored area!
41
leoribeiro
2023-03-22T21:32:32
@adhitya-synth I face the same problem when using larger texts. Did you figure out a way to overcome this?
41
hdvvip
2022-07-18T04:39:23
Ok, I understood: you used the [logprob](https://github.com/lvwerra/trl/blob/4fe9988eb8adf0227c26432f8eb3e57a66556350/trl/ppo.py#L156) of the current network as theta_old:

```python
train_stats = self.train_minibatch(logprobs[idx].unsqueeze(0), values[idx].unsqueeze(0),
                                   rewards[idx].unsqueeze(0), queries[idx].unsqueeze(0),
                                   responses[idx].unsqueeze(0),
                                   torch.cat([queries[idx], responses[idx]]).unsqueeze(0))
```

This works similarly to updating theta_old after every iteration.
40
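For reference, the probability ratio and clipped surrogate objective from PPO (Schulman et al., 2017) that the snippet above feeds into; the log-probs cached at rollout time play the role of the "old" policy:

```latex
% PPO probability ratio and clipped surrogate objective:
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}, \qquad
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[ \min\!\left( r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}\!\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \right) \right]
```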
Alymostafa
2022-11-18T03:46:38
What batch size are you using?
38
lvwerra
2023-01-13T15:29:37
See #41
38
lvwerra
2022-12-07T09:44:06
Will have a look!
37
22Mukesh22
2022-12-22T05:46:46
Hi @lvwerra, any fix on the above error? I was running the notebook `04-gpt2-sentiment-ppo-training.ipynb` for the first time and received a KeyError when running the training loop section. It was in this line:

```python
rewards = torch.tensor([output[1]["score"] for output in pipe_outputs]).to(device)
```

I presume it is safe to omit the `[1]`?

```python
rewards = torch.tensor([output["score"] for output in pipe_outputs]).to(device)
```
37
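Whether the `[1]` index is needed depends on whether the sentiment pipeline returns a single dict per input or a list of per-class dicts per input, and that behaviour has varied across notebook and transformers versions. A sketch with made-up outputs that selects the score by label instead of by position:

```python
import torch

# Hypothetical pipeline outputs with one dict per class and per input:
pipe_outputs = [
    [{"label": "NEGATIVE", "score": 0.1}, {"label": "POSITIVE", "score": 0.9}],
    [{"label": "NEGATIVE", "score": 0.7}, {"label": "POSITIVE", "score": 0.3}],
]

# Selecting by label does not break if the ordering (or nesting) changes:
rewards = torch.tensor([
    next(d["score"] for d in out if d["label"] == "POSITIVE")
    for out in pipe_outputs
])
```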
lvwerra
2023-01-30T12:06:05
It should be fixed now!
37
lvwerra
2022-05-15T16:13:36
Also this PR finally fixes the tests.
35
lvwerra
2022-05-15T15:58:01
This should in principle be possible; maybe it needs some modifications to the `PPOTrainer`, but you can probably treat the decoder of an encoder-decoder architecture such as BART or T5 like the GPT-2 decoder. This was also requested in #13 and #23. Feel free to open a PR if you have a working solution!
33
lvwerra
2022-12-07T09:40:34
You should be using the same class to load the model, e.g. `GPT2HeadWithValueModel` or `AutoModelForCausalLM` (although I haven't tested the latter). `AutoModel` will load the model without the LM head (see the sketch after this comment).
32
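A quick sketch of the difference mentioned above between `AutoModel` and `AutoModelForCausalLM` in transformers; `gpt2` is just an example checkpoint.

```python
from transformers import AutoModel, AutoModelForCausalLM

base = AutoModel.from_pretrained("gpt2")           # base transformer only, no LM head
lm = AutoModelForCausalLM.from_pretrained("gpt2")  # keeps the LM head needed for generation

print(hasattr(base, "lm_head"))  # False
print(hasattr(lm, "lm_head"))    # True
```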
lvwerra
2022-05-15T15:50:02
Hi @dhruv2601, with #35 this should be fixed.
31
lvwerra
2021-12-23T09:25:35
I think that makes sense. I have not used a seq2seq model yet, so you might want to start with a decoder-only model, which should work, and then compare the results to your enc-dec approach. Good luck!
23
lvwerra
2021-08-09T07:59:31
You could set the `init_kl_coeff=0` (see [here](https://github.com/lvwerra/trl/blob/750f5fd5329bb81c79b00243c4c8923ac14981d5/trl/ppo.py#L93)) to liberate the model from the reference completely or increase the KL target `target` (which is 6 by default).
22
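A sketch of the corresponding config overrides, assuming the key names of the older trl `ppo.py` defaults (`init_kl_coef`, `adap_kl_ctrl`, `target`); double-check the exact names against the version you have installed.

```python
ppo_config = {
    "init_kl_coef": 0.0,    # start with no KL penalty, i.e. ignore the reference model
    "adap_kl_ctrl": False,  # keep the coefficient fixed instead of adapting it
    # Alternatively, keep the adaptive penalty but loosen it:
    # "adap_kl_ctrl": True,
    # "target": 12.0,       # raise the target KL (default is 6)
}
```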
yananchen1989
2021-08-09T09:39:40
> You could set the `init_kl_coeff=0` (see [here](https://github.com/lvwerra/trl/blob/750f5fd5329bb81c79b00243c4c8923ac14981d5/trl/ppo.py#L93)) to liberate the model from the reference completely or increase the KL target `target` (which is 6 by default). Thanks.
22
yananchen1989
2021-08-09T09:57:57
By the way, do you have any investigations on how to tune txt_in_len and txt_out_len to better serve topic/sentiment preservation in the generated texts? Currently, I find that fine-tuning GPT-2 before applying it to generation makes a difference.
22
lvwerra
2021-08-09T10:08:05
No, I have not experimented much with these parameters. The main motivation for using input text at all is to force some variation in the generation. Yes, I suspect one gets the best (or rather quickest) performance gains when first using supervised training to bring the initial LM distribution as close as possible to the desired target distribution. This also makes the KL constraint better defined, as you measure it against an LM on the same domain.
22
yananchen1989
2021-11-06T21:52:56
@lvwerra Hi, I recently found that you added a simple code demo here https://lvwerra.github.io/trl// where `ppo_config = {'batch_size': 1, 'forward_batch_size': 1}`. I suppose this is single-sample mode rather than batch mode. Based on your experience, did you find any difference in performance between single and batch mode? Are there any other cautions when using single mode to update GPT-2? Thanks in advance.
22
lvwerra
2022-01-01T16:13:52
Hi @yananchen1989, the simple code demo is just a proof of concept and I never used that config for actual training. I did not run many experiments changing these settings and just stuck to the settings from the [original paper](https://arxiv.org/abs/1909.08593).
22
yananchen1989
2022-01-06T18:32:40
@lvwerra Thanks. I find that it is crucial to design a good reward module that can return both positive and negative reward values, and the reference GPT also needs to be fine-tuned on some related corpus. These two points make it quite impractical. During my trials, if I do not fine-tune the reference GPT on some texts (as there are no appropriate texts for fine-tuning), or if the reward classifier only gives positive feedback (for example, if the generated text is not much like a politics article the reward module scores it, say, 0.001, and if it is much like a politics article the score is 0.973), then the generated texts deteriorate after several iterations of PPO training, ending up as repetitive snippets or meaningless results, even though I have tuned parameters such as the KL coefficients, etc.
22
lvwerra
2022-05-15T15:53:37
I think the fine-tuning is not a necessary step but improves stability and convergence. For the reward function, I don't see the point of a strictly positive reward. What would you try to learn from it?
22
ozyyshr
2021-08-03T13:17:23
Hi, thanks for the great work. I also want to know whether and how it can be used for masked token predictions. Thanks in advance!
21
lvwerra
2021-08-09T08:02:36
Reinforcement learning is designed for sequential decision problems and thus works well for causal language modeling (such as GPT-2). BERT, however, does not fall into that category, since it makes a one-shot prediction rather than a sequential prediction as in language modeling. So I don't think it is straightforward to adapt this approach.
21
lvwerra
2021-08-09T08:06:53
As you can see later in the code, the advantages are used for the loss calculations, not the returns: https://github.com/lvwerra/trl/blob/750f5fd5329bb81c79b00243c4c8923ac14981d5/trl/ppo.py#L240
19
lvwerra
2021-03-18T18:07:12
Yes, that is true - well spotted! I'll add it as a TODO.
18
lvwerra
2021-08-09T08:04:34
Interesting - must be an issue with the newer versions of `pip`. Will likely drop the dependency on `simpletransformers` in the next release.
17
lvwerra
2022-01-01T16:29:25
Dropped `simpletransformers` requirement in #25.
17
vblagoje
2021-02-26T14:11:04
@lvwerra I tried this branch on both imdb ppo notebooks (the basic ppo sentiment training and the controlled sentiment ppo). They both work as expected, please try it as well. Let me know if any other checks should be done.
16
lvwerra
2021-02-26T14:55:54
Awesome! Did you also use Weights & Biases? In case you did, would you mind sharing the logs?
16
vblagoje
2021-02-26T16:04:18
Yes, I did but I deleted the first report for `04-gpt2-sentiment-ppo-training.ipynb`. Here is the report for [05-gpt2-sentiment-control.ipynb](https://wandb.ai/vblagoje/gpt2-ctrl/reports/05-gpt2-sentiment-control-ipyn--Vmlldzo0OTI4MjA?accessToken=0ogcb46btflg488lfuw1zu3j46sgsl3v83u45xdsloijmtfobav7dqmqq8s75trw)
16
lvwerra
2021-01-17T15:18:43
1. The model outputs predictions for the next token whereas the `log_probs` are the log probabilities for the current token. This simply aligns the two (a sketch of this shift follows this comment).
2. The main motivation was to decouple the generation from the training as much as possible. Since generation takes a fraction of the time of the backward pass, the speedup would be minimal. That way the PPOTrainer interface is cleaner.
3. That's possible. It could be that the `transformers` function `generate` handles this, but I had to implement my own, simple decoding function since the model would exploit several aspects of it. See the comments [here](https://github.com/lvwerra/trl/blob/master/nbs/01-gpt2-with-value-head.ipynb) about the custom response function. Feel free to make a PR if you can fix the weaknesses and improve the performance.

Cheers, Leandro
15
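A minimal sketch of the shift-by-one alignment in point 1: the logits at position t predict token t+1, so the last logit and the first token are dropped before gathering log-probs. The function name and shapes are illustrative, not trl's exact helper.

```python
import torch
import torch.nn.functional as F

def logprobs_of_observed_tokens(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size); input_ids: (batch, seq_len)
    logp = F.log_softmax(logits[:, :-1, :], dim=-1)          # predictions for tokens 1..T-1
    return torch.gather(logp, 2, input_ids[:, 1:, None]).squeeze(-1)
```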
lvwerra
2020-12-17T08:14:28
Hi! You can actually control these parameters. Later in the paper they also talk about dynamically adjusting beta. You can control this through the keyword arguments `"adap_kl_ctrl"` and `"init_kl_coef"` when initialising the `PPOTrainer`. You can also adjust the target KL-divergence through `"target"` and the windowing through `"horizon"` as well as all the PPO parameters (see [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L59)).
14
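A sketch of an adaptive KL controller in the spirit of Ziegler et al. (2019), which is what the `adap_kl_ctrl`, `target`, and `horizon` arguments above configure; this is an illustration, not trl's exact class.

```python
class AdaptiveKLController:
    """Nudge the KL coefficient so the measured KL tracks a target
    over a given horizon (proportional control with a clipped error)."""
    def __init__(self, init_kl_coef: float, target: float, horizon: int):
        self.value = init_kl_coef
        self.target = target
        self.horizon = horizon

    def update(self, current_kl: float, n_steps: int) -> None:
        proportional_error = max(min(current_kl / self.target - 1.0, 0.2), -0.2)
        self.value *= 1.0 + proportional_error * n_steps / self.horizon
```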
danyaljj
2020-12-04T23:27:52
Side note: it'd be good to update the `transformers` dependency to the latest (v4.0.0).
13
lvwerra
2020-12-17T08:18:50
You are right; when I have time I'll upgrade it to v4.0.0. I haven't tested it, but I suspect that if you take a model with a text generation head it should work. Note that you need to add a value head to your model architecture (see [here](https://github.com/lvwerra/trl/blob/master/trl/gpt2.py)).
13