Any views, comments, or doubts about the model/dataset are welcome. You can ask me anything, and if you want to know how to replicate my model, you can contact me.
I am getting the following error:

```
RuntimeError: Unsloth: Your repo has a LoRA adapter and a base model.
You have 2 files `config.json` and `adapter_config.json`.
We must only allow one config file.
Please separate the LoRA and base models to 2 repos.
```
when running:

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None  # None for auto detection. Float16 for Tesla T4, V100; Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "suyash2739/English_to_Hinglish_fintuned_lamma_3_8b_instruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```
The error is raised at the `FastLanguageModel.from_pretrained(` call.
Versions:

- unsloth: 2024.11.7
- transformers: 2024.11.7
- python: 3.11.5
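For context, the error means the repo mixes two kinds of checkpoints: a base-model `config.json` and a LoRA `adapter_config.json`. The conflict check can be sketched locally like this (the function `has_config_conflict` is my own illustration, not Unsloth's actual implementation):

```python
import os

def has_config_conflict(repo_dir: str) -> bool:
    """Return True when a directory mixes a base model and a LoRA adapter,
    i.e. it contains both config.json and adapter_config.json."""
    files = set(os.listdir(repo_dir))
    return "config.json" in files and "adapter_config.json" in files
```

If this returns True for a local copy of the repo, moving the adapter files (`adapter_config.json` and the adapter weights) into their own repo, as the error message suggests, should make the load succeed.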