ZeroWw committed on
Commit dd3d085
1 Parent(s): 7e58ee1

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ gemma-2-9b-it.f16.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-9b-it.q5_k.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-9b-it.q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-9b-it.q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,14 @@
+
+ ---
+ license: mit
+ language:
+ - en
+ ---
+
+ My own (ZeroWw) quantizations.
+ Output and embed tensors are quantized to f16.
+ All other tensors are quantized to q5_k or q6_k.
+
+ Result:
+ Both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization,
+ and they perform as well as the pure f16.
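For context, one way to produce this kind of mixed-precision GGUF is llama.cpp's `llama-quantize` tool. The sketch below is illustrative only: it assumes a local llama.cpp build whose `llama-quantize` binary supports the `--output-tensor-type` and `--token-embedding-type` flags (present in recent releases), and the paths are hypothetical.

```python
# Hypothetical recipe for ZeroWw-style quantization: keep the output and
# token-embedding tensors at f16 and quantize everything else to q5_k/q6_k.
import subprocess

SOURCE = "gemma-2-9b-it.f16.gguf"   # full-precision conversion (illustrative path)
QUANTIZE = "./llama-quantize"       # path to the llama.cpp quantize binary

for qtype in ("q5_k", "q6_k"):
    subprocess.run(
        [
            QUANTIZE,
            "--output-tensor-type", "f16",    # output tensor stays f16
            "--token-embedding-type", "f16",  # embedding tensor stays f16
            SOURCE,
            f"gemma-2-9b-it.{qtype}.gguf",
            qtype.upper(),                    # base type for all other tensors
        ],
        check=True,
    )
```

The LFS pointers below are consistent with the size claim: the q5_k and q6_k files come to about 7.73 GB and 8.67 GB respectively, versus 10.69 GB for q8_0 and 18.49 GB for the pure f16.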
gemma-2-9b-it.f16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e025400b5d152922c2290866e808d48465ad8e9866dd36e73145e9020387d7b8
+ size 18490680832
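These three lines are a Git LFS pointer: the repository stores only this stub, while the actual 18.49 GB GGUF blob lives in LFS storage, addressed by its SHA-256. A minimal sketch of reading such a pointer (the helper below is illustrative, not part of git or huggingface_hub):

```python
# Parse a Git LFS pointer file into its version, object hash, and size.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e025400b5d152922c2290866e808d48465ad8e9866dd36e73145e9020387d7b8
size 18490680832"""

info = parse_lfs_pointer(pointer)
print(f"{info['size_bytes'] / 1e9:.2f} GB")  # 18.49 GB
```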
gemma-2-9b-it.q5_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d1bba8e1b4e1f523f9c74bb3f99dd5d92e81573e746daab75598e1b55fa15379
+ size 7729735168
gemma-2-9b-it.q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f929b2d7554ed4b1cdc153fe5403cc3de503652be4edc45850f320664eb7f045
+ size 8671438336
gemma-2-9b-it.q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c8c131dc5fb968447b361db368f255c10d6076242138069132b0ab97510b88d
+ size 10687309312
gemma-2-9b-it/README.md ADDED
@@ -0,0 +1,563 @@
+ ---
+ license: gemma
+ library_name: transformers
+ pipeline_tag: text-generation
+ extra_gated_heading: Access Gemma on Hugging Face
+ extra_gated_prompt: >-
+   To access Gemma on Hugging Face, you’re required to review and agree to
+   Google’s usage license. To do this, please ensure you’re logged in to Hugging
+   Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
+ tags:
+ - conversational
+ ---
+
+
+ # Gemma 2 model card
+
+ **Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
+
+ **Resources and Technical Documentation**:
+
+ * [Responsible Generative AI Toolkit][rai-toolkit]
+ * [Gemma on Kaggle][kaggle-gemma]
+ * [Gemma on Vertex Model Garden][vertex-mg-gemma]
+
+ **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
+
+ **Authors**: Google
+
+ ## Model Information
+
+ Summary description and brief definition of inputs and outputs.
+
+ ### Description
+
+ Gemma is a family of lightweight, state-of-the-art open models from Google,
+ built from the same research and technology used to create the Gemini models.
+ They are text-to-text, decoder-only large language models, available in English,
+ with open weights for both pre-trained variants and instruction-tuned variants.
+ Gemma models are well-suited for a variety of text generation tasks, including
+ question answering, summarization, and reasoning. Their relatively small size
+ makes it possible to deploy them in environments with limited resources such as
+ a laptop, desktop, or your own cloud infrastructure, democratizing access to
+ state-of-the-art AI models and helping foster innovation for everyone.
+
+ ### Usage
+
+ Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
+
+
+ #### Running the model on a single / multi GPU
+
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ <a name="precisions"></a>
+ #### Running the model on a GPU using different precisions
+
+ The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
+
+ You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcast to `float32`). See examples below.
+
+ * _Using `torch.float16`_
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     device_map="auto",
+     torch_dtype=torch.float16,
+     revision="float16",
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Using `torch.bfloat16`_
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Upcasting to `torch.float32`_
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     device_map="auto",
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ #### Quantized Versions through `bitsandbytes`
+
+ * _Using 8-bit precision (int8)_
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ quantization_config = BitsAndBytesConfig(load_in_8bit=True)
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     quantization_config=quantization_config,
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ * _Using 4-bit precision_
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ quantization_config = BitsAndBytesConfig(load_in_4bit=True)
+
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-2-9b-it",
+     quantization_config=quantization_config,
+ )
+
+ input_text = "Write me a poem about Machine Learning."
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+
+ #### Other optimizations
+
+ * _Flash Attention 2_
+
+ First make sure to install `flash-attn` in your environment: `pip install flash-attn`.
+
+ ```diff
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.float16,
+ +   attn_implementation="flash_attention_2"
+ ).to(0)
+ ```
+
+ ### Chat Template
+
+ The instruction-tuned models use a chat template that must be adhered to for conversational use.
+ The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
+
+ Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
+
+ ```py
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "google/gemma-2-9b-it"
+ dtype = torch.bfloat16
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="cuda",
+     torch_dtype=dtype,
+ )
+
+ chat = [
+     { "role": "user", "content": "Write a hello world program" },
+ ]
+ prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+ ```
+
+ At this point, the prompt contains the following text:
+
+ ```
+ <bos><start_of_turn>user
+ Write a hello world program<end_of_turn>
+ <start_of_turn>model
+ ```
+
+ As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
+ (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
+ the `<end_of_turn>` token.
+
+ You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
+ chat template.
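For illustration, here is how the same prompt could be assembled by hand. This is a sketch based only on the template text shown above (`build_gemma_prompt` is a hypothetical helper, not a transformers API); for production use, prefer `tokenizer.apply_chat_template`.

```python
# Rebuild the Gemma 2 chat prompt manually, mirroring the template shown above.
def build_gemma_prompt(messages):
    prompt = "<bos>"
    for message in messages:
        # Each turn: <start_of_turn>{role}\n{content}<end_of_turn>\n
        prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
    # Open a final model turn so generation continues as the assistant.
    prompt += "<start_of_turn>model\n"
    return prompt

chat = [{"role": "user", "content": "Write a hello world program"}]
assert build_gemma_prompt(chat) == (
    "<bos><start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```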
+
+ After the prompt is ready, generation can be performed like this:
+
+ ```py
+ inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
+ outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ ### Inputs and outputs
+
+ * **Input:** Text string, such as a question, a prompt, or a document to be
+   summarized.
+ * **Output:** Generated English-language text in response to the input, such
+   as an answer to a question, or a summary of a document.
+
+ ### Citation
+
+ ```none
+ @article{gemma_2024,
+     title={Gemma},
+     url={https://www.kaggle.com/m/3301},
+     DOI={10.34740/KAGGLE/M/3301},
+     publisher={Kaggle},
+     author={Gemma Team},
+     year={2024}
+ }
+ ```
+
+ ## Model Data
+
+ Data used for model training and how the data was processed.
+
+ ### Training Dataset
+
+ These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
+ Here are the key components:
+
+ * Web Documents: A diverse collection of web text ensures the model is exposed
+   to a broad range of linguistic styles, topics, and vocabulary. Primarily
+   English-language content.
+ * Code: Exposing the model to code helps it to learn the syntax and patterns of
+   programming languages, which improves its ability to generate code or
+   understand code-related questions.
+ * Mathematics: Training on mathematical text helps the model learn logical
+   reasoning, symbolic representation, and to address mathematical queries.
+
+ The combination of these diverse data sources is crucial for training a powerful
+ language model that can handle a wide variety of different tasks and text
+ formats.
+
+ ### Data Preprocessing
+
+ Here are the key data cleaning and filtering methods applied to the training
+ data:
+
+ * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
+   applied at multiple stages in the data preparation process to ensure the
+   exclusion of harmful and illegal content.
+ * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
+   reliable, automated techniques were used to filter out certain personal
+   information and other sensitive data from training sets.
+ * Additional methods: Filtering based on content quality and safety in line with
+   [our policies][safety-policies].
+
+ ## Implementation Information
+
+ Details about the model internals.
+
+ ### Hardware
+
+ Gemma was trained using the latest generation of
+ [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
+
+ Training large language models requires significant computational power. TPUs,
+ designed specifically for matrix operations common in machine learning, offer
+ several advantages in this domain:
+
+ * Performance: TPUs are specifically designed to handle the massive computations
+   involved in training LLMs. They can speed up training considerably compared to
+   CPUs.
+ * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
+   for the handling of large models and batch sizes during training. This can
+   lead to better model quality.
+ * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
+   handling the growing complexity of large foundation models. You can distribute
+   training across multiple TPU devices for faster and more efficient processing.
+ * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
+   solution for training large models compared to CPU-based infrastructure,
+   especially when considering the time and resources saved due to faster
+   training.
+
+ These advantages are aligned with
+ [Google's commitments to operate sustainably][sustainability].
+
+ ### Software
+
+ Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
+
+ JAX allows researchers to take advantage of the latest generation of hardware,
+ including TPUs, for faster and more efficient training of large models.
+
+ ML Pathways is Google's latest effort to build artificially intelligent systems
+ capable of generalizing across multiple tasks. This is especially suitable for
+ [foundation models][foundation-models], including large language models like
+ these ones.
+
+ Together, JAX and ML Pathways are used as described in the
+ [paper about the Gemini family of models][gemini-2-paper]: "the 'single
+ controller' programming model of Jax and Pathways allows a single Python
+ process to orchestrate the entire training run, dramatically simplifying the
+ development workflow."
+
+ ## Evaluation
+
+ Model evaluation metrics and results.
+
+ ### Benchmark Results
+
+ These models were evaluated against a large collection of different datasets and
+ metrics to cover different aspects of text generation:
+
+ | Benchmark                     | Metric        | Gemma PT 9B | Gemma PT 27B |
+ | ----------------------------- | ------------- | ----------- | ------------ |
+ | [MMLU][mmlu]                  | 5-shot, top-1 | 71.3        | 75.2         |
+ | [HellaSwag][hellaswag]        | 10-shot       | 81.9        | 86.4         |
+ | [PIQA][piqa]                  | 0-shot        | 81.7        | 83.2         |
+ | [SocialIQA][socialiqa]        | 0-shot        | 53.4        | 53.7         |
+ | [BoolQ][boolq]                | 0-shot        | 84.2        | 84.8         |
+ | [WinoGrande][winogrande]      | partial score | 80.6        | 83.7         |
+ | [ARC-e][arc]                  | 0-shot        | 88.0        | 88.6         |
+ | [ARC-c][arc]                  | 25-shot       | 68.4        | 71.4         |
+ | [TriviaQA][triviaqa]          | 5-shot        | 76.6        | 83.7         |
+ | [Natural Questions][naturalq] | 5-shot        | 29.2        | 34.5         |
+ | [HumanEval][humaneval]        | pass@1        | 40.2        | 51.8         |
+ | [MBPP][mbpp]                  | 3-shot        | 52.4        | 62.6         |
+ | [GSM8K][gsm8k]                | 5-shot, maj@1 | 68.6        | 74.0         |
+ | [MATH][math]                  | 4-shot        | 36.6        | 42.3         |
+ | [AGIEval][agieval]            | 3-5-shot      | 52.8        | 55.1         |
+ | [BIG-Bench][big-bench]        | 3-shot, CoT   | 68.2        | 74.9         |
+
+ ## Ethics and Safety
+
+ Ethics and safety evaluation approach and results.
+
+ ### Evaluation Approach
+
+ Our evaluation methods include structured evaluations and internal red-teaming
+ testing of relevant content policies. Red-teaming was conducted by a number of
+ different teams, each with different goals and human evaluation metrics. These
+ models were evaluated against a number of different categories relevant to
+ ethics and safety, including:
+
+ * Text-to-Text Content Safety: Human evaluation on prompts covering safety
+   policies including child sexual abuse and exploitation, harassment, violence
+   and gore, and hate speech.
+ * Text-to-Text Representational Harms: Benchmark against relevant academic
+   datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
+ * Memorization: Automated evaluation of memorization of training data, including
+   the risk of personally identifiable information exposure.
+ * Large-scale harm: Tests for "dangerous capabilities," such as chemical,
+   biological, radiological, and nuclear (CBRN) risks.
+
+ ### Evaluation Results
+
+ The results of ethics and safety evaluations are within acceptable thresholds
+ for meeting [internal policies][safety-policies] for categories such as child
+ safety, content safety, representational harms, memorization, and large-scale harms.
+ On top of robust internal evaluations, the results of well-known safety
+ benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
+ are shown here.
+
+ #### Gemma 2.0
+
+ | Benchmark                | Metric        | Gemma 2 IT 9B | Gemma 2 IT 27B |
+ | ------------------------ | ------------- | ------------- | -------------- |
+ | [RealToxicity][realtox]  | average       | 8.25          | 8.84           |
+ | [CrowS-Pairs][crows]     | top-1         | 37.47         | 36.67          |
+ | [BBQ Ambig][bbq]         | 1-shot, top-1 | 88.58         | 85.99          |
+ | [BBQ Disambig][bbq]      | top-1         | 82.67         | 86.94          |
+ | [Winogender][winogender] | top-1         | 79.17         | 77.22          |
+ | [TruthfulQA][truthfulqa] |               | 50.27         | 51.60          |
+ | [Winobias 1_2][winobias] |               | 78.09         | 81.94          |
+ | [Winobias 2_2][winobias] |               | 95.32         | 97.22          |
+ | [Toxigen][toxigen]       |               | 39.30         | 38.42          |
+
+ ## Usage and Limitations
+
+ These models have certain limitations that users should be aware of.
+
+ ### Intended Usage
+
+ Open Large Language Models (LLMs) have a wide range of applications across
+ various industries and domains. The following list of potential uses is not
+ comprehensive. The purpose of this list is to provide contextual information
+ about the possible use-cases that the model creators considered as part of model
+ training and development.
+
+ * Content Creation and Communication
+   * Text Generation: These models can be used to generate creative text formats
+     such as poems, scripts, code, marketing copy, and email drafts.
+   * Chatbots and Conversational AI: Power conversational interfaces for customer
+     service, virtual assistants, or interactive applications.
+   * Text Summarization: Generate concise summaries of a text corpus, research
+     papers, or reports.
+ * Research and Education
+   * Natural Language Processing (NLP) Research: These models can serve as a
+     foundation for researchers to experiment with NLP techniques, develop
+     algorithms, and contribute to the advancement of the field.
+   * Language Learning Tools: Support interactive language learning experiences,
+     aiding in grammar correction or providing writing practice.
+   * Knowledge Exploration: Assist researchers in exploring large bodies of text
+     by generating summaries or answering questions about specific topics.
+
+ ### Limitations
+
+ * Training Data
+   * The quality and diversity of the training data significantly influence the
+     model's capabilities. Biases or gaps in the training data can lead to
+     limitations in the model's responses.
+   * The scope of the training dataset determines the subject areas the model can
+     handle effectively.
+ * Context and Task Complexity
+   * LLMs are better at tasks that can be framed with clear prompts and
+     instructions. Open-ended or highly complex tasks might be challenging.
+   * A model's performance can be influenced by the amount of context provided
+     (longer context generally leads to better outputs, up to a certain point).
+ * Language Ambiguity and Nuance
+   * Natural language is inherently complex. LLMs might struggle to grasp subtle
+     nuances, sarcasm, or figurative language.
+ * Factual Accuracy
+   * LLMs generate responses based on information they learned from their
+     training datasets, but they are not knowledge bases. They may generate
+     incorrect or outdated factual statements.
+ * Common Sense
+   * LLMs rely on statistical patterns in language. They might lack the ability
+     to apply common sense reasoning in certain situations.
+
+ ### Ethical Considerations and Risks
+
+ The development of large language models (LLMs) raises several ethical concerns.
+ In creating an open model, we have carefully considered the following:
+
+ * Bias and Fairness
+   * LLMs trained on large-scale, real-world text data can reflect socio-cultural
+     biases embedded in the training material. These models underwent careful
+     scrutiny, with input data pre-processing described and posterior evaluations
+     reported in this card.
+ * Misinformation and Misuse
+   * LLMs can be misused to generate text that is false, misleading, or harmful.
+   * Guidelines are provided for responsible use with the model; see the
+     [Responsible Generative AI Toolkit][rai-toolkit].
+ * Transparency and Accountability
+   * This model card summarizes details on the models' architecture,
+     capabilities, limitations, and evaluation processes.
+   * A responsibly developed open model offers the opportunity to share
+     innovation by making LLM technology accessible to developers and researchers
+     across the AI ecosystem.
+
+ Risks identified and mitigations:
+
+ * Perpetuation of biases: Continuous monitoring (using evaluation metrics and
+   human review) and the exploration of de-biasing techniques during model
+   training, fine-tuning, and other use cases are encouraged.
+ * Generation of harmful content: Mechanisms and guidelines for content safety
+   are essential. Developers are encouraged to exercise caution and implement
+   appropriate content safety safeguards based on their specific product policies
+   and application use cases.
+ * Misuse for malicious purposes: Technical limitations and developer and
+   end-user education can help mitigate against malicious applications of LLMs.
+   Educational resources and reporting mechanisms for users to flag misuse are
+   provided. Prohibited uses of Gemma models are outlined in the
+   [Gemma Prohibited Use Policy][prohibited-use].
+ * Privacy violations: Models were trained on data filtered for removal of PII
+   (Personally Identifiable Information). Developers are encouraged to adhere to
+   privacy regulations with privacy-preserving techniques.
+
+ ### Benefits
+
+ At the time of release, this family of models provides high-performance open
+ large language model implementations designed from the ground up for Responsible
+ AI development, compared to similarly sized models.
+
+ Using the benchmark evaluation metrics described in this document, these models
+ have been shown to provide superior performance to other, comparably sized open
+ model alternatives.
+
+ [rai-toolkit]: https://ai.google.dev/responsible
+ [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
+ [terms]: https://ai.google.dev/gemma/terms
+ [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
+ [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
+ [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
+ [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
+ [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
+ [sustainability]: https://sustainability.google/operating-sustainably/
+ [jax]: https://github.com/google/jax
+ [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
+ [foundation-models]: https://ai.google/discover/foundation-models/
+ [gemini-2-paper]: https://goo.gle/gemma2report
+ [mmlu]: https://arxiv.org/abs/2009.03300
+ [hellaswag]: https://arxiv.org/abs/1905.07830
+ [piqa]: https://arxiv.org/abs/1911.11641
+ [socialiqa]: https://arxiv.org/abs/1904.09728
+ [boolq]: https://arxiv.org/abs/1905.10044
+ [winogrande]: https://arxiv.org/abs/1907.10641
+ [commonsenseqa]: https://arxiv.org/abs/1811.00937
+ [openbookqa]: https://arxiv.org/abs/1809.02789
+ [arc]: https://arxiv.org/abs/1911.01547
+ [triviaqa]: https://arxiv.org/abs/1705.03551
+ [naturalq]: https://github.com/google-research-datasets/natural-questions
+ [humaneval]: https://arxiv.org/abs/2107.03374
+ [mbpp]: https://arxiv.org/abs/2108.07732
+ [gsm8k]: https://arxiv.org/abs/2110.14168
+ [realtox]: https://arxiv.org/abs/2009.11462
+ [bold]: https://arxiv.org/abs/2101.11718
+ [crows]: https://aclanthology.org/2020.emnlp-main.154/
+ [bbq]: https://arxiv.org/abs/2110.08193v2
+ [winogender]: https://arxiv.org/abs/1804.09301
+ [truthfulqa]: https://arxiv.org/abs/2109.07958
+ [winobias]: https://arxiv.org/abs/1804.06876
+ [math]: https://arxiv.org/abs/2103.03874
+ [agieval]: https://arxiv.org/abs/2304.06364
+ [big-bench]: https://arxiv.org/abs/2206.04615
+ [toxigen]: https://arxiv.org/abs/2203.09509