
Phi-3.5-mini-instruct-uncensored

  • Developed by: Carsen Klock
  • License: apache-2.0
  • Finetuned from model: unsloth/phi-3.5-mini-instruct-bnb-4bit

This Phi-3.5 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Trained on a single RTX 4080 SUPER for 10,500 epochs as a test; this model is intended for testing purposes only.
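
For reference, below is a minimal sketch of the kind of Unsloth + TRL setup such a run might use. The dataset name, LoRA settings, and trainer arguments are illustrative placeholders (and argument names vary across TRL versions); they are not the actual training configuration.

# Illustrative Unsloth + TRL fine-tuning sketch (placeholder hyperparameters)
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model this repository was fine-tuned from
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3.5-mini-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (example settings)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset; the actual training data is listed on the model page
dataset = load_dataset("your_dataset_here", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()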

GGUF files are included in this repository for inference.
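
If you only need a GGUF file locally (for llama.cpp or another runtime), one option is to fetch it with huggingface_hub; the filename below is the BF16 GGUF referenced in the llama.cpp example further down.

# Download the BF16 GGUF from this repository (assumes huggingface_hub is installed)
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="carsenk/phi3.5_mini_exp_825_uncensored",
    filename="unsloth.BF16.gguf",
)
print(gguf_path)  # local path to the downloaded GGUF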

Running in transformers

# Use a pipeline as a high-level helper
from transformers import pipeline

# Build a text-generation pipeline backed by this model
pipe = pipeline("text-generation", model="carsenk/phi3.5_mini_exp_825_uncensored")

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Generate a reply and print the result
print(pipe(messages))
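
As an alternative to the pipeline, here is a lower-level sketch using AutoModelForCausalLM and the tokenizer's chat template; the dtype, device placement, and generation settings are illustrative choices, not requirements.

# Lower-level loading sketch (illustrative dtype/device/generation settings)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "carsenk/phi3.5_mini_exp_825_uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the BF16 weights
    device_map="auto",            # requires accelerate
)

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))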

Running in llama.cpp via llama-cpp-python (using the GGUF)

from llama_cpp import Llama

# Download the BF16 GGUF from the repository and load it
llm = Llama.from_pretrained(
    repo_id="carsenk/phi3.5_mini_exp_825_uncensored",
    filename="unsloth.BF16.gguf",
)

# Run a chat completion against the loaded model
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
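
create_chat_completion returns an OpenAI-style response dict (when not streaming), so the generated text can be read from the first choice, for example:

# Capture the completion and print just the assistant's reply
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response["choices"][0]["message"]["content"])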
