Text Generation
MLX
mistral
pcuenq and radames committed
Commit bd10b28 · 1 Parent(s): 87f8067

fix code snippet (#10)


- fix code snippet (8bc8f3e4e0d83fbf1e740a0397542242cbeade4f)


Co-authored-by: Radamés Ajna <[email protected]>

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -23,13 +23,14 @@ This repository contains the `mistral-7B-v0.1` weights in `npz` format suitable
  pip install mlx
  pip install huggingface_hub hf_transfer
  git clone https://github.com/ml-explore/mlx-examples.git
+ cd mlx-examples

  # Download model
  export HF_HUB_ENABLE_HF_TRANSFER=1
  huggingface-cli download --local-dir-use-symlinks False --local-dir mistral-7B-v0.1 mlx-community/mistral-7B-v0.1

  # Run example
- python mlx-examples/mistral/mistral.py --prompt "My name is"
+ python llms/mistral/mistral.py --prompt "My name is"
  ```

  Please, refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on Mistral-7B-v0.1.
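Read end to end after the fix, the commands in this hunk install MLX, clone mlx-examples, download the converted weights into the working directory, and run the example script from its current location under `llms/`. Below is a consolidated sketch of that sequence; the final sanity-check line is an addition and assumes the downloaded snapshot contains a `weights.npz` file that `mlx.core.load` can read.

```bash
# Install MLX and the Hugging Face download tooling, then fetch the example code
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples

# Download the converted weights (hf_transfer speeds up the transfer)
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir-use-symlinks False --local-dir mistral-7B-v0.1 mlx-community/mistral-7B-v0.1

# Run the example (path as of this fix: llms/mistral/ inside mlx-examples)
python llms/mistral/mistral.py --prompt "My name is"

# Optional sanity check -- assumption: the snapshot ships the weights as weights.npz
python -c "import mlx.core as mx; print(len(mx.load('mistral-7B-v0.1/weights.npz')), 'weight arrays loaded')"
```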