
# 🐾 Piccolo-4x7b 🐾

In loving memory of my dog Klaus (Piccolo)

~ Piccolo (Italian): the little one ~

![Piccolo](piccolo.png)

## Code Example

An inference and evaluation Colab notebook is available here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/piccolo-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Move the tokenized inputs to the same device as the model.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = "What is the best way to train Cane Corsos?"

print("Response:")
print(generate_response(prompt), "\n")
```
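The `load_in_4bit=True` shortcut depends on the bitsandbytes library; in recent versions of transformers the recommended path is to pass an explicit `BitsAndBytesConfig`. A minimal sketch follows; the NF4 quantization type and bfloat16 compute dtype are illustrative assumptions, not settings from this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit quantization settings; adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumed NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the BF16 weights
)

model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/piccolo-4x7b",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Passing the config object makes the quantization choices explicit and reproducible, rather than relying on library defaults.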

The model handles code, math, and logical reasoning questions well; try whatever prompts come to mind.

πŸ† Evaluations

| Tasks      | Version | Filter | n-shot | Metric   | Value  | Stderr   |
|------------|---------|--------|--------|----------|--------|----------|
| arc_easy   | Yaml    | none   | 0      | acc      | 0.8371 | ± 0.0076 |
|            |         | none   | 0      | acc_norm | 0.8064 | ± 0.0081 |
| boolq      | Yaml    | none   | 0      | acc      | 0.8685 | ± 0.0059 |
| hellaswag  | Yaml    | none   | 0      | acc      | 0.6687 | ± 0.0047 |
|            |         | none   | 0      | acc_norm | 0.8416 | ± 0.0036 |
| openbookqa | Yaml    | none   | 0      | acc      | 0.3580 | ± 0.0215 |
|            |         | none   | 0      | acc_norm | 0.4740 | ± 0.0224 |
| piqa       | Yaml    | none   | 0      | acc      | 0.8243 | ± 0.0089 |
|            |         | none   | 0      | acc_norm | 0.8308 | ± 0.0087 |
| winogrande | Yaml    | none   | 0      | acc      | 0.7609 | ± 0.0120 |
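The Stderr column is consistent with a binomial standard error, sqrt(p·(1−p)/n). As a sanity check, inverting that formula for the arc_easy accuracy recovers an evaluation-set size close to ARC-Easy's roughly 2.4k test questions (the exact count is background knowledge, not stated in this card):

```python
# Binomial standard error: se = sqrt(p * (1 - p) / n), so n = p * (1 - p) / se**2.
p, se = 0.8371, 0.0076  # arc_easy acc and its reported stderr
n_est = p * (1 - p) / se ** 2

print(round(n_est))  # -> 2361, close to the ARC-Easy test-set size
```

The same check can be applied to any row of the table to gauge how many examples a score is based on.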
Model size: 24.2B params · Tensor type: BF16 (Safetensors)