---
license: apache-2.0
datasets:
- mrm8488/CHISTES_spanish_jokes
language:
- es
pipeline_tag: text-generation
---
# Adapter for BERTIN-GPT-J-6B fine-tuned on Spanish jokes for joke generation
## Adapter Description
This adapter was created with the PEFT library. It allows the base model BERTIN-GPT-J-6B to be fine-tuned on the mrm8488/CHISTES_spanish_jokes dataset for Spanish joke generation using the LoRA method.
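For reference, here is a minimal sketch of how a LoRA adapter like this one is typically created with PEFT. The hyperparameters and target modules below are illustrative assumptions (the `q_proj`/`v_proj` attention projections are a common choice for GPT-J), not the configuration actually used for this adapter:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA configuration; the values actually used for this
# adapter are not documented (see "Training procedure" below).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # GPT-J attention projections (assumed)
)

base_model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```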
## Model Description
BERTIN-GPT-J-6B is a version of GPT-J 6B fine-tuned on Spanish text. GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX; "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
## Training data
The dataset mrm8488/CHISTES_spanish_jokes, collected for a workshop introducing NLP with Spanish jokes.
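The dataset can be inspected directly with the `datasets` library; the `train` split name below is the usual default and is assumed here:

```python
from datasets import load_dataset

# Load the Spanish jokes dataset used for fine-tuning
dataset = load_dataset("mrm8488/CHISTES_spanish_jokes")
print(dataset)              # splits and columns
print(dataset["train"][0])  # inspect one example (split name assumed)
```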
## Training procedure
TBA
## How to use
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "mrm8488/bertin-gpt-j-6B-es-finetuned-chistes_spanish_jokes-500"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit along with its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)

# Inference: prompt with the start of a joke ("These are two friends...")
# and move the inputs to the same device as the model
batch = tokenizer("Esto son dos amigos", return_tensors="pt").to(model.device)

with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)

print("\n\n", tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
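Generation can also be made less deterministic with sampling; a small variation on the call above (the parameter values are illustrative, not recommendations from the authors):

```python
with torch.cuda.amp.autocast():
    output_tokens = model.generate(
        **batch,
        max_new_tokens=50,
        do_sample=True,    # sample instead of greedy decoding
        temperature=0.8,   # illustrative values; tune to taste
        top_p=0.95,
    )

print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```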