---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- NeuralNovel/Neural-Story-v1
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
model-index:
- name: Mistral-7B-Instruct-v0.2-Neural-Story
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.08
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.97
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 66.89
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      name: Open LLM Leaderboard
---
![Neural-Story](https://i.ibb.co/JFRYk6g/OIG-27.jpg)

# NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story

[GGUF FILES HERE](https://huggingface.co/Kquant03/Mistral-7B-Instruct-v0.2-Neural-Story-GGUF)

The **Mistral-7B-Instruct-v0.2-Neural-Story** model, developed by NeuralNovel and funded by Techmind, is a language model fine-tuned from Mistral-7B-Instruct-v0.2.

Designed to generate instructive and narrative text with a specific focus on storytelling, this fine-tune provides detailed and creative responses and is optimised for short-form storytelling.

Based on Mistral AI's Mistral-7B-Instruct-v0.2 and released under the Apache 2.0 license, it is suitable for both commercial and non-commercial use.
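A minimal usage sketch with the `transformers` library is shown below. It is not an official example: it assumes the tokenizer ships the standard Mistral `[INST]` chat template and that a GPU is available, so adjust `torch_dtype` and `device_map` for your hardware.

```python
# Minimal sketch (unofficial): load the model with transformers and generate
# a short story from an instruct-style prompt. Assumes the tokenizer carries
# the standard Mistral [INST] chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float16 or quantized loading on smaller GPUs
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a short story about a lighthouse keeper who finds a message in a bottle."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```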
[Join our Discord!](https://discord.gg/rJXGjmxqzS)

### Dataset

The model was fine-tuned using the [Neural-Story-v1](https://huggingface.co/datasets/NeuralNovel/Neural-Story-v1) dataset.
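For reference, the dataset can be pulled straight from the Hub with the `datasets` library; the sketch below makes no assumptions about split or column names beyond printing what is available.

```python
# Sketch: download and inspect the fine-tuning dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("NeuralNovel/Neural-Story-v1")
print(ds)               # available splits and row counts
split = next(iter(ds))  # take the first split, whatever it is called
print(ds[split][0])     # peek at the fields of one example
```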
### Benchmark

| Metric     | Value     |
|------------|-----------|
| Avg.       | **64.96** |
| ARC        | 64.08     |
| HellaSwag  | **83.97** |
| MMLU       | 60.67     |
| TruthfulQA | 66.89     |
| Winogrande | **75.85** |
| GSM8K      | 38.29     |

Evaluated on the **HuggingFaceH4 Open LLM Leaderboard**.
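The leaderboard runs EleutherAI's lm-evaluation-harness. As a rough, unofficial way to reproduce a single task locally, something like the sketch below should work; the leaderboard pins a specific harness version and configuration, so local numbers may not match exactly.

```python
# Unofficial sketch, assuming lm-eval >= 0.4 is installed (`pip install lm-eval`).
# Evaluates one leaderboard-style task; adjust tasks/num_fewshot per the table above.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story,dtype=bfloat16",
    tasks=["arc_challenge"],  # 25-shot ARC-Challenge on the leaderboard
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```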
### Summary

The model was fine-tuned with the intention of generating creative and narrative text, making it well suited to creative-writing prompts and storytelling.

#### Out-of-Scope Use

The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
### Bias, Risks, and Limitations

The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.

While the Neural-Story-v1 dataset serves as an excellent starting point for testing language models, users are advised to exercise caution, as there might be some inherent genre or writing bias.

### Hardware and Training

```
n_epochs = 3          # number of training epochs
n_checkpoints = 3     # checkpoints saved over the run
batch_size = 12       # training batch size
learning_rate = 1e-5  # peak learning rate
```
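The exact training stack is not documented; purely as an illustration, these hyperparameters would map onto the Hugging Face `Trainer` API roughly as follows (the output directory and precision setting are placeholders, not details from the original run).

```python
# Illustrative only: one way the listed hyperparameters could be expressed
# with transformers.TrainingArguments. The actual training code is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-neural-story",  # placeholder path
    num_train_epochs=3,                    # n_epochs = 3
    per_device_train_batch_size=12,        # batch_size = 12
    learning_rate=1e-5,                    # learning_rate = 1e-5
    save_strategy="epoch",                 # roughly one checkpoint per epoch
    save_total_limit=3,                    # n_checkpoints = 3
    bf16=True,                             # assumption: bfloat16 mixed precision
)
```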
*Sincere appreciation to Techmind for their generous sponsorship.*

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 64.96 |
| AI2 Reasoning Challenge (25-Shot) | 64.08 |
| HellaSwag (10-Shot)               | 83.97 |
| MMLU (5-Shot)                     | 60.67 |
| TruthfulQA (0-shot)               | 66.89 |
| Winogrande (5-shot)               | 75.85 |
| GSM8k (5-shot)                    | 38.29 |