
MODEL: "mychen76/tinyllama-colorist-v2" - is a finetuned TinyLlama model using color dataset.

MOTIVATION: A fun, experimental model exploring TinyLlama as a Llama 2 replacement for resource-constrained environments.

PROMPT FORMAT: "<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
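The usage code below calls a small formatted_prompt helper that wraps a question in this template. The helper is not spelled out on the card; a minimal sketch, assuming it simply fills in the template above:

def formatted_prompt(question: str) -> str:
    # Wrap a user question in the chat template the model was fine-tuned on.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"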

MODEL USAGE:

import torch
from time import perf_counter
from transformers import AutoTokenizer, pipeline

# Hugging Face model id for the fine-tuned colorist checkpoint.
model_id_colorist_final = "mychen76/tinyllama-colorist-v2"

def print_color_space(hex_color):
    """Preview a hex color as a filled block in the terminal using ANSI escape codes."""
    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip('#')
        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
    r, g, b = hex_to_rgb(hex_color)
    print(f'{hex_color}: \033[48;2;{r};{g};{b}m           \033[0m')

# Load the tokenizer and build an fp16 text-generation pipeline on the available device(s).
tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)
pipe = pipeline(
    "text-generation",
    model=model_id_colorist_final,
    torch_dtype=torch.float16,
    device_map="auto",
)

start_time = perf_counter()

# Build the chat-format prompt (see PROMPT FORMAT above) and ask for a color.
prompt = formatted_prompt('give me a pure brown color')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=12
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time,2)} seconds")

Example output (the extracted answer, followed by the full generated text):

Result: #807070

Result: <|im_start|>user
give me a pure brown color<|im_end|>
<|im_start|>assistant: #807070<|im_end|>

Time taken for inference: 0.19 seconds
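To preview the returned color directly in the terminal, the hex code can be pulled out of the generated text and passed to the print_color_space helper defined above. A minimal sketch; the regex-based extraction is an assumption and not part of the original usage code:

import re

for seq in sequences:
    # Extract the first #RRGGBB hex code from the generated text and render it.
    match = re.search(r"#[0-9a-fA-F]{6}", seq["generated_text"])
    if match:
        print_color_space(match.group(0))  # e.g. a brown swatch for #807070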

Dataset: "burkelibbey/colors"
