user | created_at | body | issue_number |
---|---|---|---|
polarbeargo | 2024-03-09T12:51:42 | In my case, my original code already imports SFTTrainer and TrainingArguments separately, but it still raises the same error:
![git](https://github.com/huggingface/trl/assets/8589224/3331df79-6d07-4c13-aab0-06ec4b8ad039)
| 6 |
dangl00 | 2024-03-09T14:41:00 | Make sure that you have transformers version 4.38.2, as `top_k_top_p_filtering` was removed in the next release. Then, as previously mentioned by @ashokchhetri7, importing it from `transformers.generation.utils` should work. | 6 |
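A minimal sketch of the workaround described above, assuming the goal is simply to get the TRL import working again by pinning `transformers`:

```python
# Pin the library first (shell): pip install transformers==4.38.2
# In 4.38.x the helper still lives at this path, as noted by @ashokchhetri7:
from transformers.generation.utils import top_k_top_p_filtering

# With the pinned version installed, the TRL import that previously failed should succeed.
from trl import SFTTrainer
```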
jstephencorey | 2024-03-22T18:19:01 | Changing the transformers version to 4.38.2 worked for me as well. | 6 |
elyhahami18 | 2024-04-29T06:10:41 | pip install transformers==4.38.2 worked for me as well
| 6 |
aseef2289 | 2024-05-16T03:29:21 | > pip install transformers==4.38.2 worked for me as well
![image](https://github.com/huggingface/trl/assets/100977702/dfe7e5d8-20fb-46e8-a057-880f40637feb)
![image](https://github.com/huggingface/trl/assets/100977702/694705cf-0137-40d2-821f-9987fda56539)
I'm still getting the import error even with transformers==4.38.2.
Any idea what could be wrong here? | 6 |
lvwerra | 2020-06-09T20:19:47 | Hi @seekingpeace! Thanks for the PR. The README is auto-generated by nbdev from `index.ipynb`, so the formatting should be changed in the nbdev library. If you get a PR in there, I am happy to rerun the generation on my end. | 5 |
lvwerra | 2020-05-17T12:17:16 | Hi @deepanwayx, thanks for your interest in the library. Let's see if I can answer your questions:
## 1. Calculation of KL-divergence
I think both of your questions here can be answered by looking at the equation for the KL divergence:
$$KL(p, q) = \mathbb{E}_{p(x)}\left[\log \frac{p(x)}{q(x)}\right] = \mathbb{E}_{p(x)}\left[\log p(x) - \log q(x)\right]$$
which can be approximated for discrete values by the following formula:
$$KL(p, q) = \sum_x p(x) \log \frac{p(x)}{q(x)} = \sum_x p(x) \left[\log p(x) - \log q(x)\right]$$
This is a weighted sum of the term in the first equation, where each term is weighted by the probability p(x). Since we sample the tokens from p(x), we already take that weighting into account implicitly: unlikely tokens are rarely selected, while high-probability tokens are selected often. If we average over all elements in the sequence, we achieve the same weighting as weighting each possible token by its probability, so the step you propose would be redundant.
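A small numerical sketch of this point (illustration only, not code from this repository): drawing samples from p and averaging log p(x) − log q(x) gives approximately the same value as the explicitly weighted sum.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logp = F.log_softmax(torch.randn(10), dim=-1)  # log-probs of the sampling distribution p
logq = F.log_softmax(torch.randn(10), dim=-1)  # log-probs of the reference distribution q

# Exact discrete KL: sum_x p(x) * (log p(x) - log q(x))
kl_exact = (logp.exp() * (logp - logq)).sum()

# Monte Carlo estimate: sample x ~ p and average log p(x) - log q(x);
# the sampling frequencies supply the p(x) weighting implicitly.
samples = torch.multinomial(logp.exp(), num_samples=50_000, replacement=True)
kl_mc = (logp[samples] - logq[samples]).mean()

print(kl_exact.item(), kl_mc.item())  # the two values should be close
```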
## 2. About the ratio
One important detail to mention here is that the PPO optimisation runs for several steps on every batch of data, so the model changes after each optimisation step. Therefore, `old_logprobs` stays the same while `logprobs` changes after each optimisation step.
Now, the `ratio` is an essential part of the PPO algorithm. The idea is that after an optimisation step you calculate the `ratio` to see whether the chosen action gets a higher or lower probability than during the rollout. That value multiplied with the advantage yields the unclipped objective function (the one used in TRPO): you want to increase the policy's probability for actions with a high advantage and decrease it for actions with a low advantage. PPO uses a clipped version of this objective for better stability. For more detail I highly recommend the excellent [original paper](https://arxiv.org/pdf/1707.06347.pdf)!
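For illustration, a hedged sketch of the standard clipped PPO objective (the textbook formulation, not copied from this repository); `logprobs` come from the current policy and `old_logprobs` from the rollout:

```python
import torch

def ppo_clipped_loss(logprobs, old_logprobs, advantages, clip_eps=0.2):
    """Standard PPO-clip surrogate; returns a loss to minimise."""
    ratio = torch.exp(logprobs - old_logprobs)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages                        # TRPO-style surrogate
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()          # negate to turn maximisation into a loss
```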
I haven't thought about the effects of dropout. I suspect the changes from the optimisation steps are larger than the fluctuations from dropout, but feel free to experiment with it and create a PR if it yields a training improvement.
## Remarks
Finally, I want to mention that my main contribution in this project was translating OpenAI's TensorFlow code to PyTorch and making it compatible with the Hugging Face library. The details above were already implemented in the original code and these are merely my explanations. See the original [code](https://github.com/openai/lm-human-preferences/) and [paper](https://arxiv.org/pdf/1909.08593.pdf) for more details. For the PPO implementation check out the [train_policy.py](https://github.com/openai/lm-human-preferences/blob/master/lm_human_preferences/train_policy.py) script. | 4 |
deepanwayx | 2020-05-17T13:39:27 | Thanks for your detailed explanations. I think it makes a lot more sense now. I will check out the original PPO paper for more details. | 4 |
yanghoonkim | 2021-05-26T01:35:36 | Hi @lvwerra
About the difference between `logprobs` and `old_logprobs`: You mentioned in the #10 that
> So the reason we run PPO for 4 epochs is that we want to make most of the data we gathered. If generating samples was cheap we could only train for one epoch.
and in this issue you said that `logprobs` and `old_logprobs` will differ after one epoch, which seems to mean that I can't set `ppo_epoch` to 1.
I'm quite confused about that.
| 4 |
lvwerra | 2021-08-09T07:57:18 | You can set `ppo_epoch` to 1; only the `logprobs` will change, which makes sense since the model changes after each `ppo_epoch`, thus the predictions are not the same. Why would that be a problem? | 4 |
JoaoLages | 2023-01-12T18:12:48 | > You can set `ppo_epoch` to 1; only the `logprobs` will change, which makes sense since the model changes after each `ppo_epoch`, thus the predictions are not the same. Why would that be a problem?
In the first epoch `logprobs` is the same as `old_logprobs` (if we disregard the dropout effect), so I think @yanghoonkim's comment makes sense, right? I.e., if `ratio` is essential as you pointed out, `ppo_epoch` must be bigger than 1 for `ratio` to ever be different from 1. | 4 |
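A tiny numerical illustration of that point (not repository code): before any parameter update, and ignoring dropout, `logprobs` equals `old_logprobs`, so the ratio is exactly 1 and the clipping has no effect yet.

```python
import torch

old_logprobs = torch.tensor([-1.2, -0.7, -2.3])
logprobs = old_logprobs.clone()             # first inner epoch: no update has happened yet
ratio = torch.exp(logprobs - old_logprobs)
print(ratio)                                # tensor([1., 1., 1.])
```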
lvwerra | 2020-04-01T09:48:12 | Hi @trisongz
Glad you find the library useful. Let's see if I understand your objective correctly:
- You have a dataset with protein sequences and you would like GPT-2 to generate realistic sequences.
- You have trained BERT to classify whether two subsequences are compatible.
Now your question is how to set up the PPO training step. Before running PPO I would fine-tune (or train from scratch) GPT-2 on your dataset with the language-modeling objective. Check out the [training script](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) from Hugging Face.
Then I would probably start by using the first subsequence (18 characters) as the query and letting GPT-2 respond with 18 characters, although GPT-2 uses BPE encodings rather than character-level encodings, so the actual number of tokens might differ. Then I would pass the query/response pairs to BERT for prediction and use its output as the reward (I used the unnormalised logits, but you can also try the class predictions 0/1).
Regarding the other PPO parameters, I didn't change them much from the original implementation, except the batch size for memory reasons. I would start there and adjust them later if it does not work. You also want to keep an eye on the KL divergence (logged as `objective/kl`) to make sure the output distribution stays close to your initial data distribution. | 1 |
trisongz | 2020-04-01T16:53:05 | Hi @lvwerra, thanks for the advice!
Yes, you're correct. I had actually started training GPT-2 from scratch with a custom tokenizer on the dataset prior to seeing this comment, so I'm glad I'm on the right track.
I also switched over to using RoBERTa as the classifier to test as well, which is currently at
```
'mcc': 0.9997714069569512,
'tp': 308736,
'tn': 164108,
'fp': 49,
'fn': 0,
'acc': 0.9998963824797575,
'eval_loss': 0.00044921892891853013
```
after 50k steps, although I'm concerned that this might be a result of not shuffling the CSV data prior to training, as I wrote the CSV file sequentially from the raw dataset. Is there an easy way you'd suggest to shuffle the CSV prior to the training step? I used your extremely helpful train_test_split function for the eval and train data.
For this specific task, since it is sequence based, do you think a masked LM would perform better at generation than GPT-2? Unlike human-written text, there are likely sequence pairs that repeat.
So far, this is what I currently have:
**BERT/RoBERTa Classifier:**
_Dataset structure_
GTGG ACCA TATG GCCA, ACCA TATG GCCA TAAT, 1
ATCA GGAA GGCA AGAG, AAGT ACAC ATCA GGAA, 0
```
------------------------------
The Predictions below should result in 1
GTGG ACCA TATG GCCA -> ACCA TATG GCCA TAAT: [1]
GCCA TAAT CAAA AAGT -> TAAT CAAA AAGT ACAC: [1]
------------------------------
The Predictions below should result in 0
ATCA GGAA GGCA AGAG -> AAGT ACAC ATCA GGAA: [0]
CAAA AAGT ACAC ATCA -> GCCA TAAT CAAA AAGT: [0]
```
**For GPT-2 LM:**
_Single line by line text file of only true (1) sequences_
GTGG ACCA TATG GCCA ACCA TATG GCCA TAAT
GCCA TAAT CAAA AAGT TAAT CAAA AAGT ACAC
Does this look correct so far?
Thank you for the tips! | 1 |
lvwerra | 2020-04-01T17:04:57 | > `'acc': 0.9998`
That seems suspiciously high: either your task is trivial or there is some leakage in your dataset. Maybe some entries exist more than once and therefore end up in both the train and test split. `train_test_split` should already shuffle the dataset. I would definitely investigate that further.
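One hypothetical way to check for that kind of leakage (file names and column layout are assumptions, not from this thread): look for identical rows shared by the two splits.

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file names for the two splits
test = pd.read_csv("test.csv")

# Inner-merge on all common columns: every resulting row exists in both splits.
overlap = pd.merge(train, test, how="inner")
print(f"{len(overlap)} rows appear in both train and test")
```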
I don't have much experience with such sequences, so I don't know whether MLM would work better. Whether training GPT-2 from scratch makes sense probably depends on the size of the dataset and the resources you have available. I would try the simple, pretrained approach first and, if that does not work out, consider moving to MLM or training GPT-2 from scratch.
For the GPT-2 LM, that looks fine to me. You could also consider adding the EOS token at the end of each line (see [here](https://huggingface.co/lvwerra/gpt2-imdb) for a snippet of how I processed the IMDB dataset).
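A hypothetical sketch of appending the EOS token to each training line (file names are made up, and the standard GPT-2 tokenizer is assumed; with a custom tokenizer you would use its own EOS token instead):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # eos_token is "<|endoftext|>"

with open("sequences.txt") as f_in, open("sequences_eos.txt", "w") as f_out:
    for line in f_in:
        f_out.write(line.rstrip("\n") + tokenizer.eos_token + "\n")
```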
Good luck. | 1 |
trisongz | 2020-04-02T05:42:11 | I'm at part 4 now, where I'm running the RL environment and looking through your comments. I also updated GPT-2 to train with an EOS token. I messed up a few things originally, but I think I'm on the right track. Since I created a custom tokenizer for GPT-2, each sequence of 4 letters is 1 token for I/O.
Currently I have `txt_in_len` as well as `txt_out_len` set to 4, to match what BERT expects to see for sequence-pair classification.
However, I realized after the scores came back that I hadn't updated the reward mechanism to 0/1, so the scores are a mess. (This is prior to updating the text lengths properly to 4x4.)
![image](https://user-images.githubusercontent.com/4735784/78214258-5dfe4b00-747a-11ea-8174-ac8dac282c22.png)
Could you point me to how I would switch the rewards over to the classifier output? I was looking around here:
```python
def compute_rewards(self, scores, logprobs, ref_logprobs):
"""Compute per token rewards from scores and KL-penalty."""
kl = logprobs - ref_logprobs
non_score_reward = -self.kl_ctl.value * kl
rewards = non_score_reward.clone().detach()
rewards[:, -1] += scores
return rewards, non_score_reward, self.kl_ctl.value
```
But wasn't entirely sure | 1 |
lvwerra | 2020-04-02T09:45:14 | You should normalise the scores before running `PPOTrainer.step`. The outputs you get from the BERT model are logits, so you need to apply a softmax to the outputs and then find the index with the largest probability:
```Python
import torch
import torch.nn.functional as F
probs = F.softmax(bert_outputs, dim=-1)
max_id = torch.argmax(probs, dim=-1)
```
`max_id` corresponds to the output index with the largest probability. If position 0 of your outputs corresponds to "not entailed" and position 1 to "entailed", that should be what you are looking for. | 1 |
trisongz | 2020-04-02T17:14:55 | I'm still relatively new to Torch, so I apologize for the silly questions.
Would it be here that you add that step, before appending to the rewards?
```python
#### tokenize text for sentiment analysis
t = time.time()
texts = [q + r for q,r in zip(game_data['query'], game_data['response'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
timing['time/build_input_sentiment'] = time.time()-t
#### get sentiment score
t = time.time()
rewards = []
for i in range(int(config['batch_size']/fbs)):
res = sentiment_model.forward(sentiment_inputs[i*fbs:(i+1)*fbs],
attention_masks[i*fbs:(i+1)*fbs])[0][:, 1].detach()
probs = F.softmax(res, dim=-1)
max_id = torch.argmax(probs, dim=-1)
rewards.append(max_id)
#rewards.append(res)
rewards = torch.cat(rewards)
timing['time/get_sentiment_preds'] = time.time()-t
#### Run PPO training
t = time.time()
stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
timing['time/optimization'] = time.time()-t
``` | 1 |
lvwerra | 2020-04-03T09:49:59 | That looks about right. You just need to remove the logit slicing in this line:
```Python
res = sentiment_model.forward(sentiment_inputs[i*fbs:(i+1)*fbs],
attention_masks[i*fbs:(i+1)*fbs])[0].detach()
```
`[:, 1]` slices out the logit for the positive sentiment in my example. Since you want to create discrete rewards, you need both the positive and the negative logits for the softmax. | 1 |
trisongz | 2020-04-03T19:53:07 | I think I'm getting closer. I had to do one additional step and convert `max_id` to `max_id.float()`. However, the outputs are all showing rewards of 0.0 so far, so I wanted to confirm.
Result of the `res` step:
```
[ 5.5575, -6.0447],
[ 5.5397, -6.0370],
[ 5.5577, -6.0430],
[ 5.5556, -6.0427],
[ 5.5585, -6.0432],
[ 5.5494, -6.0396],
[ 5.5576, -6.0438],
[ 5.5544, -6.0420],
[ 5.5584, -6.0439],
[ 5.5490, -6.0390],
[ 5.5601, -6.0438],
[ 5.5527, -6.0437],
[ 5.5541, -6.0416],
[ 5.5583, -6.0435],
[ 5.5514, -6.0416],
[ 5.5590, -6.0440],
[ 5.5556, -6.0430],
[ 5.5468, -6.0402],
[ 5.5564, -6.0439],
[ 5.5545, -6.0405],
[ 5.5537, -6.0446],
[ 5.5563, -6.0434],
[ 5.5566, -6.0431],
[ 5.5564, -6.0429],
[ 5.5527, -6.0419],
[ 5.5535, -6.0425],
[ 5.5531, -6.0433],
[ 5.5546, -6.0427],
[ 5.5518, -6.0417],
[ 5.5573, -6.0431],
[ 5.5567, -6.0428]], device='cuda:0')
```
Result of `probs`:
```
tensor([[9.9999e-01, 9.2180e-06],
[9.9999e-01, 9.1457e-06],
[9.9999e-01, 9.3818e-06],
[9.9999e-01, 9.1595e-06],
[9.9999e-01, 9.1816e-06],
[9.9999e-01, 9.1508e-06],
[9.9999e-01, 9.2678e-06],
[9.9999e-01, 9.1529e-06],
[9.9999e-01, 9.1989e-06],
[9.9999e-01, 9.1447e-06],
[9.9999e-01, 9.2768e-06],
[9.9999e-01, 9.1304e-06],
[9.9999e-01, 9.1988e-06],
[9.9999e-01, 9.2063e-06],
[9.9999e-01, 9.1496e-06],
[9.9999e-01, 9.2305e-06],
[9.9999e-01, 9.1391e-06],
[9.9999e-01, 9.1788e-06],
[9.9999e-01, 9.2861e-06],
[9.9999e-01, 9.1639e-06],
[9.9999e-01, 9.2114e-06],
[9.9999e-01, 9.1817e-06],
[9.9999e-01, 9.1681e-06],
[9.9999e-01, 9.1684e-06],
[9.9999e-01, 9.1717e-06],
[9.9999e-01, 9.2159e-06],
[9.9999e-01, 9.2032e-06],
[9.9999e-01, 9.1985e-06],
[9.9999e-01, 9.1905e-06],
[9.9999e-01, 9.2262e-06],
[9.9999e-01, 9.1625e-06],
[9.9999e-01, 9.1699e-06]], device='cuda:0')
```
Result of `max_id` (non-float):
```
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
```
Result of `max_id.float()`:
```
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
```
As a sanity check, I ran the post-training step to see the results, modifying the rewards to match the above.
```python
#### sentiment analysis of query/response pairs before/after
texts = [q + r for q,r in zip(game_data['query'], game_data['response (before)'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
#rewards = sentiment_model.forward(sentiment_inputs, attention_masks)[0][:, 1].detach()
res = sentiment_model.forward(sentiment_inputs, attention_masks)[0].detach()
probs = F.softmax(res, dim=-1)
max_id = torch.argmax(probs, dim=-1)
max_id = max_id.float()
rewards = max_id
game_data['rewards (before)'] = rewards.cpu().numpy()
texts = [q + r for q,r in zip(game_data['query'], game_data['response (after)'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
#rewards = sentiment_model.forward(sentiment_inputs, attention_masks)[0][:, 1].detach()
res = sentiment_model.forward(sentiment_inputs, attention_masks)[0].detach()
probs = F.softmax(res, dim=-1)
max_id = torch.argmax(probs, dim=-1)
max_id = max_id.float()
rewards = max_id
game_data['rewards (after)'] = rewards.cpu().numpy()
```
![image](https://user-images.githubusercontent.com/4735784/78399408-72e1f800-75ba-11ea-801c-ca87f5dc5a0b.png)
Does this look right to you so far? I'm also not sure whether the classifier is outputting 0 because it isn't seeing all 8 tokens, as it was trained on 4/4 sequence pairs.
When I run
```python
text_a = 'AGAC CACT GTGG ACCA'
text_b = 'CACT GTGG ACCA TATG'
output = sentiment_model.forward(sentiment_tokenizer.encode([text_a, text_b], return_tensors="pt"))
output
output[0][0, 1]
```
I get
`tensor(0.2771, grad_fn=<SelectBackward>)`
Whereas with
```python
text = 'CACT GTGG ACCA TATG'
output = sentiment_model.forward(sentiment_tokenizer.encode(text, return_tensors="pt"))
output
output[0][0, 1]
```
It shows
`tensor(-6.0448, grad_fn=<SelectBackward>)` | 1 |
lvwerra | 2020-04-04T10:23:47 | Indeed, it seems like the LM is not generating good sequences at the beginning. There are several things you could try:
- Further fine-tune GPT2 on the language modeling task
- Play with the language generation (e.g. try changing the sampling temperature)
- Use the logits as the reward function (like in my example), since they provide a continuous reward signal. In your case the model only ever gets a reward when the probability for 1 is larger than that for 0; if you take the raw logits, it gets a reward even when it's only getting closer (see the sketch after this list).
- Try to simplify the task by reducing the number of generated characters. Maybe try 12 query characters vs. 4 response characters.
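A hedged sketch of the third suggestion, reusing the names assumed from the earlier snippets (`config`, `fbs`, `sentiment_model`, `sentiment_inputs`, `attention_masks`): keep the raw positive-class logit as a continuous reward instead of the argmax.

```python
import torch

# Assumes config, fbs, sentiment_model, sentiment_inputs and attention_masks
# are defined as in the earlier snippets in this thread.
rewards = []
for i in range(int(config['batch_size'] / fbs)):
    logits = sentiment_model.forward(sentiment_inputs[i*fbs:(i+1)*fbs],
                                     attention_masks[i*fbs:(i+1)*fbs])[0].detach()
    rewards.append(logits[:, 1])   # positive-class logit: continuous, rewards "getting closer"
rewards = torch.cat(rewards)
```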
These are just some ideas off the top of my head. I am sure there could be other problems and solutions. | 1 |
lvwerra | 2020-04-27T14:55:29 | I am closing this issue for now. If you have further questions just contact me. | 1 |