Text Generation
Transformers
GGUF
Inference Endpoints
conversational

QuantFactory/Mistral-NeMo-Minitron-8B-Chat-GGUF

This is a quantized version of rasyosef/Mistral-NeMo-Minitron-8B-Chat created using llama.cpp.
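Because the repository ships GGUF files, they can be loaded directly with llama.cpp or its Python bindings. The sketch below uses llama-cpp-python's Llama.from_pretrained helper; the filename glob (*Q4_K_M.gguf) is an assumption about which 4-bit file is present, so adjust it to the file you actually want.

# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename glob below is an assumption; pick the quant file you need.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Mistral-NeMo-Minitron-8B-Chat-GGUF",
    filename="*Q4_K_M.gguf",  # assumed 4-bit K-quant; change to match the repo
    n_ctx=4096,               # context window
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How to explain Internet for a medieval knight?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])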

Original Model Card

Mistral-NeMo-Minitron-8B-Chat

This is an instruction-tuned version of nvidia/Mistral-NeMo-Minitron-8B-Base that has undergone supervised fine-tuning on 32k instruction-response pairs from the teknium/OpenHermes-2.5 dataset.

How to use

Chat Format

Given the nature of the training data, the Mistral-NeMo-Minitron-8B-Chat model is best suited for prompts that use the chat format below. You can provide the prompt as a question with a generic template as follows:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Question?<|im_end|>
<|im_start|>assistant

For example:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to explain Internet for a medieval knight?<|im_end|>
<|im_start|>assistant

where the model generates the text after <|im_start|>assistant.
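Rather than building this string by hand, you can let the tokenizer produce it. A minimal sketch, assuming the model's tokenizer ships with a chat template that matches the format above:

# Minimal sketch: build the prompt with the tokenizer's chat template
# (assumes the repo's tokenizer_config.json defines a matching template).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Mistral-NeMo-Minitron-8B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# add_generation_prompt=True appends the final "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)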

Sample inference code

This code snippet shows how to quickly get started with running the model on a GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Fix the random seed for reproducible generations
torch.random.manual_seed(0)

model_id = "rasyosef/Mistral-NeMo-Minitron-8B-Chat"

# Load the model in bfloat16 and place it on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Multi-turn conversation in the chat format described above
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding: with do_sample=False the temperature setting is not used
generation_args = {
    "max_new_tokens": 256,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])

Note: If you want to use flash attention, call AutoModelForCausalLM.from_pretrained() with attn_implementation="flash_attention_2".
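For example, a minimal sketch of that call (this assumes the flash-attn package is installed and the GPU supports it):

# Minimal sketch: load the model with FlashAttention-2
# (assumes flash-attn is installed and the GPU supports it).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "rasyosef/Mistral-NeMo-Minitron-8B-Chat",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)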

GGUF
Model size: 8.41B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
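The exact GGUF filenames for each bit width are not listed here, so one way to pick a quant is to list the repository's files and then download the one you want. A minimal sketch using huggingface_hub:

# Minimal sketch: list the available GGUF files and download one of them.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "QuantFactory/Mistral-NeMo-Minitron-8B-Chat-GGUF"

# Show every .gguf file in the repo so you can choose a quantization level
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Download the first listed file (replace with the quant you actually want)
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)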

