oktrained committed on
Commit 11c1208
Parent: ce44516

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -30,4 +30,26 @@ As the model is untrained, it has not been evaluated on any benchmark datasets.
 
 Limitations
 Untrained: The model is untrained and will not perform well on any task until it has been fine-tuned.
-Ethical Considerations: Users should be mindful of the ethical implications of deploying fine-tuned models, especially in sensitive applications.
+Ethical Considerations: Users should be mindful of the ethical implications of deploying fine-tuned models, especially in sensitive applications.
+
+
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Load the untrained model and tokenizer
+model = AutoModelForCausalLM.from_pretrained("oktrained/llama3.1_180M_untrained")
+tokenizer = AutoTokenizer.from_pretrained("oktrained/llama3.1_180M_untrained")
+
+# Sample input text
+input_text = "Once upon a time"
+
+# Tokenize input
+inputs = tokenizer(input_text, return_tensors="pt")
+
+# Generate output
+output = model.generate(**inputs, max_length=50)
+
+# Decode output
+generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
+
+print(generated_text)
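
Because the card stresses that the weights are untrained, the usage snippet added above will only produce random tokens until the model is trained. A minimal causal-LM fine-tuning sketch with the standard transformers Trainer API follows; the wikitext-2 corpus, the hyperparameters, and the output directory name are illustrative assumptions, not part of this repo.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("oktrained/llama3.1_180M_untrained")
tokenizer = AutoTokenizer.from_pretrained("oktrained/llama3.1_180M_untrained")

# Llama-style tokenizers often ship without a pad token; reuse EOS for padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder corpus -- substitute your own training data here
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# mlm=False makes the collator copy input_ids into labels for causal-LM training
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="llama3.1_180M_finetuned",  # hypothetical output path
    per_device_train_batch_size=8,         # illustrative hyperparameters
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()

After training, model.save_pretrained(...) and tokenizer.save_pretrained(...) would persist the fine-tuned weights for use with the inference snippet in the diff.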