jfacevedo committed
Commit 1f35363
1 Parent(s): 2f1eb43

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -14,9 +14,9 @@ This model contains the LoRA weights for GPTJ-6B. The model was fine tuned on a
 
 This was trained in a Google Cloud Platform Compute Engine spot VM for 3k steps, costing less than $2 dollars.
 
-The license should follow the same as [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html). However, you can use this method with your own dataset and not have the same restrictions.
+The license should follow the same as [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html). However, you should be able to use this method with your own dataset and not have the same restrictions.
 
-Also want to shout out to @tloen as I used his code to generate prompts for training and inference. Please check out the author's repo https://github.com/tloen/alpaca-lora
+Also want to shout out to @tloen as I used some of his code to generate the prompts for training and inference. Please check out the author's repo https://github.com/tloen/alpaca-lora
 
 ## Generations
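
Since the README describes the repo as shipping only the LoRA adapter weights for GPT-J-6B, a minimal sketch of how such weights are typically loaded on top of the base model with the PEFT library may help; this is not part of the commit, the adapter repo id below is a placeholder, and the prompt shape simply mirrors the Alpaca-style template from tloen/alpaca-lora that the README credits.

```python
# Sketch only: loading LoRA adapter weights onto GPT-J-6B with PEFT.
# ADAPTER_REPO is a placeholder; substitute the actual adapter repo or local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "EleutherAI/gpt-j-6B"        # base model the LoRA weights target
ADAPTER_REPO = "your-username/your-lora"  # placeholder adapter location

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,  # assumption: fp16 so the 6B model fits on a single GPU
    device_map="auto",          # requires the accelerate package
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)

# Assumption: an Alpaca-style instruction prompt, as in tloen/alpaca-lora.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three uses of LoRA fine-tuning.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```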