oktrained committed on
Commit 35aa02e
1 Parent(s): 11c1208

Update README.md

Files changed (1)
  1. README.md +0 -21
README.md CHANGED
@@ -32,24 +32,3 @@ Limitations
 Untrained: The model is untrained and will not perform well on any task until it has been fine-tuned.
 Ethical Considerations: Users should be mindful of the ethical implications of deploying fine-tuned models, especially in sensitive applications.
 
-
-
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-# Load the fine-tuned model and tokenizer
-model = AutoModelForCausalLM.from_pretrained("oktrained/llama3.1_180M_untrained")
-tokenizer = AutoTokenizer.from_pretrained("oktrained/llama3.1_180M_untrained")
-
-# Sample input text
-input_text = "Once upon a time"
-
-# Tokenize input
-inputs = tokenizer(input_text, return_tensors="pt")
-
-# Generate output
-output = model.generate(**inputs, max_length=50)
-
-# Decode output
-generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
-
-print(generated_text)
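
For reference, the snippet deleted by this commit loads the checkpoint and runs a short generation. A minimal cleaned-up sketch follows, assuming the transformers library and the checkpoint name shown in the diff above; the comment is corrected (the model is untrained, not fine-tuned) and max_new_tokens replaces max_length so the prompt length does not count against the generation budget. Since the weights are untrained, the output will be incoherent.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the untrained model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("oktrained/llama3.1_180M_untrained")
tokenizer = AutoTokenizer.from_pretrained("oktrained/llama3.1_180M_untrained")

# Tokenize a sample prompt
inputs = tokenizer("Once upon a time", return_tensors="pt")

# Generate up to 50 new tokens; with untrained weights this yields random text
output = model.generate(**inputs, max_new_tokens=50)

# Decode and print the generated text
print(tokenizer.decode(output[0], skip_special_tokens=True))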
 