---
base_model:
- google/gemma-2-2b
- google/gemma-2-2b-it
- rinna/gemma-2-baku-2b
language:
- ja
- en
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- gemma2
- conversational
- mlx
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
base_model_relation: merge
---

# mlx-community/gemma-2-baku-2b-it-4bit

The model [mlx-community/gemma-2-baku-2b-it-4bit](https://huggingface.co/mlx-community/gemma-2-baku-2b-it-4bit) was converted to MLX format from [rinna/gemma-2-baku-2b-it](https://huggingface.co/rinna/gemma-2-baku-2b-it) using mlx-lm version **0.17.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the 4-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/gemma-2-baku-2b-it-4bit")

# Generate a completion; verbose=True streams tokens to stdout as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```