---
language:
- en
- es
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# Model Card for Pixie Zehir Nano

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6320e992beec1969845be447/25pTrbjySoblu8cuiHASu.png)

Introducing Pixie Zehir Nano, a model that excels at writing. It is a fine-tune of H2O Danube 1.8B on HQ DATA™ from Pixie Zehir.

## Model Details

- **Developed by:** Maani x BLNKBLK
- **Language(s) (NLP):** English, Spanish
- **License:** Apache 2.0
- **Finetuned from model:** h2oai/h2o-danube-1.8b-chat

## Agreements

This model was created for research purposes. It can and will hallucinate; use it with caution.

## Usage

```bash
pip install transformers==4.36.1
```

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Maani/PixieZehirNano",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a haiku."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
res = pipe(
    prompt,
    max_new_tokens=256,
)
print(res[0]["generated_text"])
# <|prompt|>Write a haiku.<|answer|> In the windowless room, Digital dreams consume, Unseen sun sets on a white rabbit's ears: [...]
```
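
If you prefer not to use the `pipeline` helper, the model can also be loaded directly with `AutoModelForCausalLM` and `AutoTokenizer`. The following is a minimal sketch under that assumption; the generation settings (`max_new_tokens`, dtype) are illustrative, not tuned values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Maani/PixieZehirNano"

# Load tokenizer and model; device_map="auto" places weights on the available device(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the conversation with the tokenizer's chat template, as in the pipeline example above.
messages = [{"role": "user", "content": "Write a haiku."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Illustrative generation settings; adjust as needed.
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```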