---
license: gemma
datasets:
- ayoubkirouane/Small-Instruct-Alpaca_Format
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model:
- google/gemma-2-9b
---

# gemma-2-9b-alpaca-small-Instruct

- **Base model:** `google/gemma-2-9b`
- **Dataset:** `ayoubkirouane/Small-Instruct-Alpaca_Format`
## Get Started

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
```

Or use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
```
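Since the model was trained on an Alpaca-format dataset, prompts will likely work best when wrapped in the standard Alpaca instruction template. The helper below is a sketch under that assumption (`build_prompt` and the template text are illustrative, not taken from this repository) and should be checked against the actual training data format:

```python
# Assumption: the standard Alpaca single-turn template was used during
# fine-tuning. Verify against the dataset before relying on this.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Explain what a tokenizer does.")
print(prompt)
```

The resulting string can then be passed to the pipeline, e.g. `pipe(prompt, max_new_tokens=256)`.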