vanakema committed on
Commit
0ff760b
1 Parent(s): 4ef7b77

Add note to warn users that this is not actually the bf16 version of this model


As noted in [this discussion](https://huggingface.co/mlx-community/Meta-Llama-3.1-70B-Instruct-bf16/discussions/2), this is not actually the bf16 version.

This PR updates the README to warn users that this model is incorrect, and points them to the corrected bf16 version I generated and uploaded to Hugging Face (and transferred to the mlx-community).

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -192,6 +192,10 @@ extra_gated_button_content: Submit
 
  # mlx-community/Meta-Llama-3.1-70B-Instruct-bf16
 
+ # NOTE: THIS IS NOT ACTUALLY 70B-bf16. This must be an accidental reupload of 70B-4bit.
+ More info on this issue can be found [here](https://huggingface.co/mlx-community/Meta-Llama-3.1-70B-Instruct-bf16/discussions/2).
+ A proper 70B bf16 MLX version can be found [here](https://huggingface.co/mlx-community/Meta-Llama-3.1-70B-Instruct-bf16-CORRECTED).
+
  The Model [mlx-community/Meta-Llama-3.1-70B-Instruct-bf16](https://huggingface.co/mlx-community/Meta-Llama-3.1-70B-Instruct-bf16) was converted to MLX format from [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) using mlx-lm version **0.16.0**.
 
  ## Use with mlx
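The "Use with mlx" section of the README is truncated in this diff. For context only, a minimal sketch of loading the corrected upload with the standard mlx-lm `load`/`generate` API; the `-CORRECTED` repo name is taken from the link in the note above, not from this commit:

```python
# Minimal sketch (assumptions: standard mlx-lm API; the -CORRECTED repo name
# comes from the link in the warning note, not from this commit).
from mlx_lm import load, generate

# Load the corrected bf16 weights rather than this mislabeled repo.
model, tokenizer = load("mlx-community/Meta-Llama-3.1-70B-Instruct-bf16-CORRECTED")

# Generate a short completion to verify the model loads and runs.
response = generate(model, tokenizer, prompt="Hello", max_tokens=64, verbose=True)
```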