roneneldan committed on
Commit
8cd464c
Parent: 5f6b961

Update readme.md

Files changed (1): readme.md (+4, -4)
readme.md CHANGED
@@ -1,10 +1,10 @@
- This model was trained on the TinyStories dataset - https://arxiv.org/abs/2305.07759
+ Model trained on the TinyStories-Instruct Dataset, see https://arxiv.org/abs/2305.07759
 
- In order to use the model, put the files config.json and pytorch_model.pt in any folder, and then run the following code:
+ ------ EXAMPLE USAGE ---
 
  from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 
- model = AutoModelForCausalLM.from_pretrained('path to folder here') # REPLACE BY PATH
+ model = AutoModelForCausalLM.from_pretrained('roneneldan/TinyStories-Instruct-1M')
 
  tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
@@ -13,7 +13,7 @@ prompt = "Once upon a time there was"
  input_ids = tokenizer.encode(prompt, return_tensors="pt")
 
  # Generate completion
- output = model.generate(input_ids, max_length = 1000, num_beams=1, generation_config = generation_config)
+ output = model.generate(input_ids, max_length = 1000, num_beams=1)
 
  # Decode the completion
  output_text = tokenizer.decode(output[0], skip_special_tokens=True)
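For reference, the updated usage snippet assembled into one runnable block (a minimal sketch based on the hunks above: the prompt line comes from the unchanged context in the second hunk header, the now-unused GenerationConfig import is dropped, and the final print is added here only for illustration):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the instruct-tuned TinyStories model and the GPT-Neo tokenizer it uses
model = AutoModelForCausalLM.from_pretrained('roneneldan/TinyStories-Instruct-1M')
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate completion (greedy decoding, since num_beams=1)
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)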