---
language:
- en
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- functions
- function calling
- sharded
- mlx
pipeline_tag: text-generation
inference: false
---

# mlx-community/Llama-2-7b-chat-hf-function-calling-v2-MLX

This model was converted to MLX format from [`Trelis/Llama-2-7b-chat-hf-function-calling-v2`](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2). Refer to the [original model card](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-2-7b-chat-hf-function-calling-v2-MLX")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
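Since this is a function-calling fine-tune, the functions available to the model must be injected into the prompt alongside the Llama-2 chat template. The sketch below shows one way to assemble such a prompt; the `get_current_weather` metadata, its schema, and the `B_FUNC`/`E_FUNC` delimiter strings are illustrative assumptions, so verify the exact format against the original Trelis model card before use.

```python
import json

# Hypothetical function metadata; the schema here is an assumption --
# check the original model card for the authoritative structure.
function_metadata = {
    "function": "get_current_weather",
    "description": "Get the current weather in a given location",
    "arguments": [
        {"name": "city", "type": "string", "description": "The city name"}
    ],
}

# Standard Llama-2 chat instruction delimiters.
B_INST, E_INST = "[INST]", "[/INST]"
# Assumed wrappers for the function list (verify against the model card).
B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"

def build_prompt(user_query: str) -> str:
    """Wrap the serialized function list and user query in the chat template."""
    function_list = json.dumps(function_metadata, indent=4)
    return f"{B_FUNC}{function_list.strip()}{E_FUNC}{B_INST} {user_query.strip()} {E_INST}\n\n"

prompt = build_prompt("What's the weather like in London?")
```

The assembled `prompt` would then replace the plain `"hello"` string above, e.g. `generate(model, tokenizer, prompt=prompt, max_tokens=200)`.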