---
license: apache-2.0
library_name: transformers
tags:
- code
- mlx
base_model: ibm-granite/granite-3b-code-base
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
pipeline_tag: text-generation
inference: false
model-index:
- name: granite-3b-code-instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 51.2
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 43.9
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Java)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 41.5
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Go)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 31.7
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(C++)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 40.2
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Rust)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 29.3
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 39.6
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 26.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(Java)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 39.0
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(Go)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 14.0
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(C++)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 23.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalExplain(Rust)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 12.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 26.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 28.0
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(Java)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 33.5
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(Go)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 27.4
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(C++)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 31.7
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalFix(Rust)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 16.5
      name: pass@1
---

# mlx-community/granite-3b-code-instruct-8bit

The model [mlx-community/granite-3b-code-instruct-8bit](https://huggingface.co/mlx-community/granite-3b-code-instruct-8bit) was converted to MLX format from [ibm-granite/granite-3b-code-instruct](https://huggingface.co/ibm-granite/granite-3b-code-instruct) using mlx-lm version **0.12.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-3b-code-instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
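
Since this is an instruction-tuned model, you will generally get better results by wrapping your request in the model's chat template rather than passing raw text as above. A minimal sketch, assuming the converted tokenizer carries over the upstream Granite instruct chat template (the example prompt is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-3b-code-instruct-8bit")

# Format the request with the tokenizer's chat template (assumes the
# converted tokenizer ships the Granite instruct template, as the
# upstream model does).
messages = [
    {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```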