
llama-2-7b-miniguanaco

This is my first model: Llama-2-7b fine-tuned on the miniguanaco dataset.

This is a simple fine-tune produced in a Google Colab notebook, following the instructions in Labonne's first fine-tuning tutorial.
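Labonne's tutorial uses QLoRA (4-bit quantization plus LoRA adapters) for the fine-tune. A minimal configuration sketch of that setup is below; the specific values (`r`, `lora_alpha`, quant type, etc.) are illustrative assumptions, not the exact hyperparameters used for this model.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization config for QLoRA -- values are illustrative assumptions
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter config -- rank/alpha/dropout are common tutorial defaults,
# not confirmed values from this model card
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
```

These two configs would then be passed to `AutoModelForCausalLM.from_pretrained` and a trainer such as `trl.SFTTrainer`, respectively.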

To run it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "decruz07/llama-2-7b-miniguanaco"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # float16 is consistent with 4-bit loading
    device_map="auto",
    load_in_4bit=True,
)
print(model)

# Simple interactive loop: empty input exits
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
    prompt = input("please input prompt:")
```
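Since the miniguanaco-style datasets in Labonne's tutorial are reformatted into the Llama-2 instruction template, prompts at inference time generally follow the same shape. The helper below is a hypothetical illustration of that template (the function name is my own, not from this model card):

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Llama-2 [INST] template,
    as used by guanaco-style fine-tuning datasets."""
    return f"<s>[INST] {instruction.strip()} [/INST]"

# Example: build a templated prompt before tokenizing it
print(format_prompt("What is a large language model?"))
```

Passing the templated string to the tokenizer, rather than the raw instruction, usually yields responses closer to the fine-tuning distribution.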

Model size: 6.74B params (Safetensors; tensor types F32 and FP16).