---
language:
  - en
license: apache-2.0
pipeline_tag: text-generation
base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
datasets:
  - ArmelR/oasst1_guanaco_english
---

A fine-tune of the TinyLlama 1.5T intermediate checkpoint, trained to answer questions.

The training data follows this format:

f"{'prompt'}\n{'completion'}\n<END>"

No special formatting is needed: just the question, then a newline, and the model begins the answer.
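
For illustration, here is a minimal sketch of how one training example would look under this format. The `prompt` and `completion` values below are made up for the example, not actual rows from the dataset:

```python
# Hypothetical example values -- not actual rows from ArmelR/oasst1_guanaco_english.
prompt = "What is the capital of France?"
completion = "The capital of France is Paris."

# Assemble the example: question, newline, answer, newline, <END>.
example = f"{prompt}\n{completion}\n<END>"
print(example)
```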

Example usage:

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Option A: create the pipeline straight from the Hub
pipe = pipeline("text-generation", model="Corianas/tiny-llama-miniguanaco-1.5T")

# Option B: load the model and tokenizer directly
tokenizer = AutoTokenizer.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
model = AutoModelForCausalLM.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)

# Generate an answer with the fine-tuned model
prompt = "What is a large language model?"
result = pipe(f"<s>{prompt}")
print(result[0]['generated_text'])
```

The result will contain the answer, ending with `<END>` on a new line.
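
Since the output terminates with the `<END>` marker, callers will typically want to cut the generated text at that marker. A minimal sketch, reusing `result` and `prompt` from the snippet above (whether the prompt is echoed verbatim depends on how the pipeline renders special tokens like `<s>`, so that part is an assumption to verify):

```python
generated = result[0]['generated_text']

# Keep only the text before the <END> marker.
answer = generated.split("<END>")[0]

# The pipeline echoes the input text, so drop everything up to and including
# the prompt (assumption: the prompt appears verbatim in the output).
answer = answer.split(prompt, 1)[-1].strip()
print(answer)
```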