---
language:
- en
license: apache-2.0
tags:
- pretrained
- mlx
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
---
# Ice1/my-mistral-q-finetune

This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
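
For quick tests without writing any Python, mlx-lm also ships a command-line generator. The sketch below uses the flag names from mlx-lm 0.12.x (`--temp`, `--max-tokens`); flag names may differ in newer releases:

```bash
# Generate a completion directly from the shell.
# Flag names match mlx-lm 0.12.x and may change in later versions.
python -m mlx_lm.generate \
  --model Ice1/my-mistral-q-finetune \
  --prompt "hello" \
  --max-tokens 256 \
  --temp 0.7
```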
```python
from mlx_lm import load, generate

# Load the model and tokenizer (downloads from the Hugging Face Hub if not cached).
model, tokenizer = load("Ice1/my-mistral-q-finetune")

# Generate a completion; verbose=True streams tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
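
The card's inference settings suggest a temperature of 0.7, which can be passed to generation explicitly. The keyword arguments below (`temp`, `max_tokens`) match the `generate` signature in mlx-lm 0.12.x; newer releases moved temperature into a separate sampler object, so adjust accordingly:

```python
from mlx_lm import load, generate

model, tokenizer = load("Ice1/my-mistral-q-finetune")

# Sampling keywords as accepted by generate() in mlx-lm 0.12.x;
# later versions expect a sampler built with mlx_lm.sample_utils instead.
response = generate(
    model,
    tokenizer,
    prompt="hello",
    max_tokens=256,
    temp=0.7,
    verbose=True,
)
print(response)
```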