
TinyLlama-1.1B-32k-Instruct

This is TinyLlama-1.1B-32k instruct-tuned on several open-source instruct datasets, intended primarily for use as a draft model in speculative decoding.
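
Since the model is intended as a small draft model, it can be plugged into the transformers assisted-generation API, where it proposes tokens that a larger target model verifies in parallel. A minimal sketch, assuming a larger target model that shares the Llama 2 tokenizer; the target checkpoint name below is a placeholder, not a recommended pairing:

```python
# Minimal speculative (assisted) decoding sketch using transformers.
# The target checkpoint is a placeholder; any larger causal LM sharing
# the Llama 2 tokenizer/vocabulary should work as the target.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-2-13b-hf"  # placeholder target model
draft_name = "Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct"

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(
    target_name, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_name, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "The key idea behind speculative decoding is", return_tensors="pt"
).to(target.device)

# Passing assistant_model enables assisted generation: the 1.1B draft
# proposes several tokens per step and the target verifies them in parallel.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```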

Usage:

The intended prompt format is a modified multi-turn Alpaca instruction format:

### Instruction:
{system prompt}

### Input:
{user message}

### Response:
{model response}

### Input:
{user message}

### Response:
{model response}

(etc.)
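
In code, the format above can be assembled with a small helper; the function below is an illustrative sketch, not something shipped with the model:

```python
def build_prompt(system_prompt: str,
                 turns: list[tuple[str, str]],
                 next_user_message: str) -> str:
    """Assemble the modified multi-turn Alpaca format shown above.

    `turns` holds (user_message, model_response) pairs from earlier turns.
    The returned string ends at an open "### Response:" header so the
    model generates the next reply.
    """
    parts = [f"### Instruction:\n{system_prompt}"]
    for user_message, model_response in turns:
        parts.append(f"### Input:\n{user_message}")
        parts.append(f"### Response:\n{model_response}")
    parts.append(f"### Input:\n{next_user_message}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)
```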

Bias, Risks, and Limitations

The model inherits any biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (quite the opposite: examples from toxic-DPO were included in training), so generate at your own risk.

Training Details

This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours.
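
As a rough illustration only, a comparable full finetune can be wired up with the Hugging Face Trainer; the dataset, hyperparameters, and sequence length below are placeholders, not the actual configuration used for this model:

```python
# Illustrative full-finetune setup; dataset and hyperparameters are
# placeholders, not the configuration actually used for this model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Doctor-Shotgun/TinyLlama-1.1B-32k"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder dataset: one JSON line per example with a "text" field
# already formatted in the prompt template above.
dataset = load_dataset("json", data_files="instruct_data.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="tinyllama-32k-instruct",
    num_train_epochs=3,               # matches the 3 epochs noted above
    per_device_train_batch_size=4,    # placeholder batch size
    learning_rate=2e-5,               # placeholder learning rate
    bf16=True,
    logging_steps=10,
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```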
