---
library_name: transformers
base_model: mistralai/Mistral-7B-Instruct-v0.3
---
![Header image of a very philosophical llama](header-sm.jpg)
# Mistral 7B fine-tuned on philosophical texts
This model is a fine-tune of `mistralai/Mistral-7B-Instruct-v0.3`. It was trained on a semi-synthetic instruction dataset based on the Stanford Encyclopedia of Philosophy (SEP): `ruggsea/stanford-encyclopedia-of-philosophy_instruct`.
The model was trained on a single A10G GPU with 24 GB of VRAM for about one hour.
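For context on why 4-bit quantization (used in the loading snippet below) matters on this hardware: the base model's weights alone would not fit comfortably in 24 GB at 16-bit precision, but do in 4-bit. A rough back-of-the-envelope estimate:

```python
# Rough memory estimate for Mistral 7B's weights at different precisions.
# (Approximate parameter count; activations, the KV cache, and any
# optimizer state add further overhead on top of this.)
params = 7e9  # ~7 billion parameters

fp16_gb = params * 2 / 1e9    # 2 bytes per parameter
nf4_gb = params * 0.5 / 1e9   # 4 bits = 0.5 bytes per parameter

print(f"fp16 weights: ~{fp16_gb:.1f} GB")   # → fp16 weights: ~14.0 GB
print(f"4-bit weights: ~{nf4_gb:.1f} GB")   # → 4-bit weights: ~3.5 GB
```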
### MMLU Philosophy Benchmark
| Model | Score |
|----------------------|----------|
| Mistral 7B | 59.16% |
| PhilosophyMistral 7B | 64.31% |
| Mistral Large 2 | 83.28% |
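The scores above are accuracy on the multiple-choice MMLU philosophy subset. As a minimal illustration of how such an accuracy figure is computed (with made-up predictions, not the actual evaluation data or harness):

```python
# Toy accuracy computation over multiple-choice answers (A-D).
# The predictions and references below are illustrative, not real MMLU runs.
predictions = ["A", "C", "B", "D", "A", "B"]
references  = ["A", "C", "D", "D", "A", "C"]

correct = sum(p == r for p, r in zip(predictions, references))
accuracy = correct / len(references)
print(f"{accuracy:.2%}")  # → 66.67%
```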
## Example
**Question:** "Please explain the allegory of the cave to me."
**Answer before fine-tuning:**
> *The Allegory of the Cave is a philosophical thought experiment by Plato, presented in his work The Republic. In this allegory, Plato describes a group of people who have lived chained in a dark cave since birth, with their backs facing a wall on which shadows are projected from objects passing in front of a fire behind them. These prisoners believe that the shadows they see are reality itself. One prisoner is freed and leaves the cave, eventually coming to understand that the shadows are mere representations of real things outside the cave. He returns to the cave to try to enlighten the other prisoners about the true nature of reality, but they reject his claims because they cannot conceive of anything beyond what they can perceive through their senses. Plato uses this allegory to illustrate the idea that our understanding of the world is limited by our sensory experiences and that we must strive to gain knowledge through reason and philosophy in order to truly comprehend reality. It also highlights the difficulty in convincing others of truths that go beyond their immediate experience.*
**Answer after fine-tuning:**
> *The Allegory of the Cave is a philosophical thought experiment presented by Plato in his work, Republic (Book VII). In this allegory, Plato describes a group of people who have lived chained in an underground cave all their lives, facing a blank wall. The people watch shadows projected on the wall from objects passing in front of a fire behind them and, because of their lack of experience with the world outside the cave, believe that these shadows are the real things. A philosopher then frees one of the prisoners and leads him out into the sunlight above ground where he sees actual objects for the first time. This prisoner is initially blinded by the light but eventually becomes accustomed to it and comes to understand that the shadows were mere imitations of reality. He returns to the cave and tries to tell the other prisoners about his newfound knowledge, but they do not want to leave their comfortable existence and refuse to believe him. The Allegory of the Cave is often interpreted as a metaphor for the human condition: we are like the prisoners in the cave, living our entire lives within the confines of our own limited perceptions and beliefs.*
## Using the model
To load the model, apply the adapter directly to the quantized base model:
```python
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U bitsandbytes
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
from huggingface_hub import notebook_login
# notebook_login() # You may need to log in to HuggingFace to download the Mistral model
base_model_id = "mistralai/Mistral-7B-Instruct-v0.3"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # quantize base weights to 4-bit NF4
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,                      # same base model the adapter was trained on
    quantization_config=bnb_config,     # same quantization config used during training
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True, trust_remote_code=True)
ft_model = PeftModel.from_pretrained(base_model, "LeonardPuettmann/PhilosophyMistral-7B-Instruct-v0.3")
ft_model.eval()
prompt = "What is the nature of the self? Is there a soul?"
model_input = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output = ft_model.generate(**model_input, max_new_tokens=256, repetition_penalty=1.15)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
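Note that the snippet above tokenizes the raw prompt string. Mistral Instruct models are trained on the `[INST] … [/INST]` chat format, so wrapping the prompt accordingly (e.g. via `tokenizer.apply_chat_template`) may yield better instruction-following. A simplified sketch of the format (the tokenizer's own chat template is the authoritative source):

```python
def format_mistral_prompt(user_message: str) -> str:
    # Simplified version of the Mistral Instruct chat format. The BOS token
    # is added by the tokenizer (add_bos_token=True), so it is omitted here.
    return f"[INST] {user_message} [/INST]"

prompt = format_mistral_prompt("What is the nature of the self? Is there a soul?")
print(prompt)  # → [INST] What is the nature of the self? Is there a soul? [/INST]
```

The resulting string can then be tokenized and passed to `generate` exactly as in the snippet above.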
## Sources
- Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
- Dataset for fine-tuning: https://huggingface.co/datasets/ruggsea/stanford-encyclopedia-of-philosophy_instruct
I hold no rights to the base model or the dataset used. |