---
library_name: transformers
tags:
- mergekit
- merge
- mlx
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---

# typealias/Llama-3-6B-Instruct-pruned-mlx-4bit

The model [typealias/Llama-3-6B-Instruct-pruned-mlx-4bit](https://huggingface.co/typealias/Llama-3-6B-Instruct-pruned-mlx-4bit) was converted to MLX format from [kuotient/Llama-3-6B-Instruct-pruned](https://huggingface.co/kuotient/Llama-3-6B-Instruct-pruned) using mlx-lm version **0.13.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("typealias/Llama-3-6B-Instruct-pruned-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```