# Jokestral
This model was created by fine-tuning unsloth/mistral-7b-v0.3-bnb-4bit on the Short Jokes dataset.
Its only purpose is generating cringe jokes: write the first few words of a joke and the model completes it.
## Usage
```shell
pip install transformers
pip install --no-deps "trl<0.9.0" peft accelerate bitsandbytes
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the 4-bit model and its tokenizer onto the GPU
model = AutoModelForCausalLM.from_pretrained("SantaBot/Jokestral_4bit", device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained("SantaBot/Jokestral_4bit")

inputs = tokenizer(
    ["My doctor"],  # YOUR PROMPT HERE
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
The output should be something like:

```
['<s> My doctor told me I have to stop m4sturb4t1ng. I asked him why and he said ""Because I\'m trying to examine you.""\n</s>']
```
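As the sample shows, `batch_decode` returns the raw text including the `<s>`/`</s>` special-token markers and the doubled quotes present in the training data. A minimal sketch of cleaning that up with plain string operations (the `clean_joke` helper is hypothetical, not part of the model; alternatively, `skip_special_tokens=True` can be passed to `batch_decode`):

```python
def clean_joke(decoded: str) -> str:
    # Strip the BOS/EOS markers left in by decoding
    text = decoded.replace("<s>", "").replace("</s>", "")
    # Collapse the doubled quotes seen in the dataset
    text = text.replace('""', '"')
    return text.strip()

raw = '<s> My doctor told me I have to stop m4sturb4t1ng. I asked him why and he said ""Because I\'m trying to examine you.""\n</s>'
print(clean_joke(raw))
# → My doctor told me I have to stop m4sturb4t1ng. I asked him why and he said "Because I'm trying to examine you."
```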
## Model tree for SantaBot/Jokestral_4bit

- Base model: mistralai/Mistral-7B-v0.3
- Quantized: unsloth/mistral-7b-v0.3-bnb-4bit