---
license: apache-2.0
language:
- en
tags:
- Mathematical Reasoning
---
# Model Card for Arithmo-Mistral-7B
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](CODE_LICENSE)
[![Model Weight License](https://img.shields.io/badge/Model%20Weights%20License-Apache_2.0-green.svg)](LICENSE)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
This model is an instruction-tuned [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), fine-tuned with QLoRA on a single RTX 4090 GPU. It is tuned to reason about and answer mathematical problems. It can also write a Python program that, when executed, prints the answer to the question. To have the model generate a Python program, include an instruction to write one in the prompt along with the question. Refer to the **Results** section for examples.
## Model Details
It is an instruction-tuned Mistral-7B that performs mathematical reasoning and can optionally write a Python program.
### Model Description
- **Project GitHub Page:** https://github.com/akjindal53244/Arithmo-Mistral-7B
- **Developed by:** [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/)
- **Funded by:** self-funded
- **Model type:** Instruction-tuned
- **Language(s) (NLP):** English
- **Finetuned from model:** mistralai/Mistral-7B-v0.1
## How to query the model
Arithmo-Mistral-7B is trained with the following format:
### CoT Format:
```
Question: <question>
Answer:
```
### PoT Format:
```
Question: <question> <python_prompt>
Answer:
```
The model performs best when queried in one of these formats.
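The two formats above can be built with simple string templates. The helper names below are illustrative (not part of the model card), and the default Python instruction is only an example of a `<python_prompt>`:

```python
def format_cot_prompt(question: str) -> str:
    """Build a CoT-format prompt in the model's training format."""
    return f"Question: {question}\nAnswer:"


def format_pot_prompt(question: str, python_prompt: str = "Write a Python program to solve it.") -> str:
    """Build a PoT-format prompt: the Python instruction follows the question."""
    return f"Question: {question} {python_prompt}\nAnswer:"


# Example prompts
print(format_cot_prompt("What is 15% of 240?"))
print(format_pot_prompt("What is 15% of 240?"))
```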
## How to Get Started with the Model
The model is compatible with the Hugging Face `transformers` library. I will publish a generation/inference script soon. Inference on CPU also works; I have tested it on a MacBook M1 Pro. GPU inference is much faster than CPU inference.
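Until the official script is published, a minimal inference sketch along these lines should work with `transformers` (and `accelerate` for `device_map="auto"`). The repository id and generation parameters below are assumptions, not the author's published settings:

```python
# Hypothetical inference sketch for Arithmo-Mistral-7B using Hugging Face
# transformers. Parameter choices (dtype, greedy decoding, token budget) are
# assumptions; adjust them for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "akjindal53244/Arithmo-Mistral-7B"  # assumed Hub repo id


def generate_answer(question: str, max_new_tokens: int = 512) -> str:
    """Query the model in its CoT training format and return the generated answer."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # fall back to float32 on CPUs without bfloat16
        device_map="auto",           # requires accelerate; uses CPU when no GPU is present
    )
    prompt = f"Question: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

For PoT-style output, append a Python instruction to the question (e.g. "Write a Python program to solve it.") before formatting the prompt.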
### Results
Here are sample screenshots of model output for a few questions :)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/qE0V8cZnvQDRIq6qANuYp.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/rXEzumBHG-y2HEhOhSRt2.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/X_hLjlNRBavb473ejgDIl.png)