# mlx-community/quantized-gemma-2b
This model was converted to MLX format from [`google/gemma-2b`](https://huggingface.co/google/gemma-2b). Refer to the original model card for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hub (cached locally).
model, tokenizer = load("mlx-community/quantized-gemma-2b")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
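For conversational prompts, you can format the input with the tokenizer's chat template before generating. This is a minimal sketch using the standard Hugging Face tokenizer API (the `chat_template` check and message structure are not part of this card, and the base `gemma-2b` tokenizer may not ship a template, in which case the raw prompt is used):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/quantized-gemma-2b")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a user
# message so the model sees the format it was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```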