---
library_name: keras-nlp
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
  To access CodeGemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---

# CodeGemma

**Google Model Page**: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)

This model card corresponds to the latest 2B base version of the CodeGemma 1.1 model for use in Keras.

Keras models can be used with JAX, PyTorch, or TensorFlow as numerical backends. JAX, with its support for SPMD model parallelism, is recommended for large models. For more information, see [distributed training with Keras and JAX](https://keras.io/guides/distribution/).
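
Keras 3 reads its backend choice from the `KERAS_BACKEND` environment variable, which must be set before Keras (or KerasNLP) is imported. The snippet below is a minimal sketch of selecting the JAX backend; the other accepted values are `"tensorflow"` and `"torch"`.

```python
import os

# Choose the numerical backend before keras / keras_nlp are imported.
# Valid values are "jax", "tensorflow", and "torch"; JAX is recommended for large models.
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp  # imported after the backend is set, on purpose
```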

You can find other models in the CodeGemma family here:

|    | Base                                                                                | Instruct                                                                              |
|----|-------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
| 2B | [**codegemma-1.1-2b-keras**](https://huggingface.co/google/codegemma-1.1-2b-keras) |                                                                                        |
| 7B | [codegemma-7b-keras](https://huggingface.co/google/codegemma-7b-keras)             | [codegemma-1.1-7b-it-keras](https://huggingface.co/google/codegemma-1.1-7b-it-keras)  |

For more information about the model, visit [google/codegemma-2b](https://huggingface.co/google/codegemma-2b).

Google Model Page
: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)

Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)

Terms of Use
: [Terms](https://ai.google.dev/gemma/terms)

Authors
: Google

## Loading the model

```python
import keras_nlp

# Download the CodeGemma 1.1 2B preset from the Hugging Face Hub and
# instantiate it as a Keras causal language model.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://google/codegemma-1.1-2b-keras")
```
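
Once loaded, the model can be used for code completion through the standard KerasNLP `generate()` API. The snippet below is a minimal sketch: the prompts and `max_length` value are illustrative and not part of this model card. CodeGemma base checkpoints also understand fill-in-the-middle (FIM) prompts built from the `<|fim_prefix|>`, `<|fim_suffix|>`, and `<|fim_middle|>` control tokens.

```python
# Plain prefix completion: the model continues the given code.
print(gemma_lm.generate("def fibonacci(n):", max_length=128))

# Fill-in-the-middle: the code before and after the cursor goes between the
# FIM control tokens, and the model generates the missing middle part.
fim_prompt = (
    "<|fim_prefix|>def mean(values):\n"
    "    total = <|fim_suffix|>\n"
    "    return total / len(values)<|fim_middle|>"
)
print(gemma_lm.generate(fim_prompt, max_length=128))
```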