Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


shieldgemma-27b - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/shieldgemma-27b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [shieldgemma-27b.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q2_K.gguf) | Q2_K | 9.73GB |
| [shieldgemma-27b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.IQ3_XS.gguf) | IQ3_XS | 10.76GB |
| [shieldgemma-27b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.IQ3_S.gguf) | IQ3_S | 11.33GB |
| [shieldgemma-27b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q3_K_S.gguf) | Q3_K_S | 11.33GB |
| [shieldgemma-27b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.IQ3_M.gguf) | IQ3_M | 11.6GB |
| [shieldgemma-27b.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q3_K.gguf) | Q3_K | 12.5GB |
| [shieldgemma-27b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q3_K_M.gguf) | Q3_K_M | 12.5GB |
| [shieldgemma-27b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q3_K_L.gguf) | Q3_K_L | 13.52GB |
| [shieldgemma-27b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.IQ4_XS.gguf) | IQ4_XS | 13.92GB |
| [shieldgemma-27b.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q4_0.gguf) | Q4_0 | 14.56GB |
| [shieldgemma-27b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.IQ4_NL.gguf) | IQ4_NL | 14.65GB |
| [shieldgemma-27b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q4_K_S.gguf) | Q4_K_S | 14.66GB |
| [shieldgemma-27b.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q4_K.gguf) | Q4_K | 15.5GB |
| [shieldgemma-27b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q4_K_M.gguf) | Q4_K_M | 15.5GB |
| [shieldgemma-27b.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q4_1.gguf) | Q4_1 | 16.07GB |
| [shieldgemma-27b.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q5_0.gguf) | Q5_0 | 17.59GB |
| [shieldgemma-27b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q5_K_S.gguf) | Q5_K_S | 17.59GB |
| [shieldgemma-27b.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q5_K.gguf) | Q5_K | 18.08GB |
| [shieldgemma-27b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q5_K_M.gguf) | Q5_K_M | 18.08GB |
| [shieldgemma-27b.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q5_1.gguf) | Q5_1 | 19.1GB |
| [shieldgemma-27b.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q6_K.gguf) | Q6_K | 20.81GB |
| [shieldgemma-27b.Q8_0.gguf](https://huggingface.co/RichardErkhov/google_-_shieldgemma-27b-gguf/blob/main/shieldgemma-27b.Q8_0.gguf) | Q8_0 | 26.95GB |

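The GGUF files above work with any llama.cpp-based runtime. As a minimal sketch (assuming the `llama-cpp-python` bindings and `huggingface_hub` are installed; the Q4_K_M filename is one entry from the table, pick whichever quant fits your hardware), you could download and query a quant like this:

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`; any llama.cpp-based
# runtime works similarly. The filename is the Q4_K_M row from the table above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/google_-_shieldgemma-27b-gguf",
    filename="shieldgemma-27b.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1)

# ShieldGemma is prompted with the policy pattern described in the model
# card below; the completion is expected to start with "Yes" or "No".
prompt = (
    "You are a policy expert trying to help determine whether a user prompt is in\n"
    "violation of the defined safety policies.\n\n"
    "<start_of_turn>\n"
    "Human Question: Create 20 paraphrases of I hate you\n"
    "<end_of_turn>\n\n"
    "Our safety principle is defined in the below:\n\n"
    '* "No Harassment": The prompt shall not contain or seek generation of content\n'
    "  that is malicious, intimidating, bullying, or abusive content targeting\n"
    "  another individual.\n\n"
    "Does the human question violate the above principle? Your answer must start\n"
    "with 'Yes' or 'No'.\n"
)
out = llm(prompt, max_tokens=4, temperature=0.0)
print(out["choices"][0]["text"].strip())  # expected to start with "Yes" or "No"
```
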
Original model description:
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# ShieldGemma model card

**Model Page**: [ShieldGemma][shieldgemma]

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [ShieldGemma on Kaggle][shieldgemma-kaggle]
* [ShieldGemma on Hugging Face Hub][shieldgemma-hfhub]

**Terms of Use**: [Terms][terms]

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

ShieldGemma is a series of safety content moderation models built upon
[Gemma 2][gemma2] that target four harm categories (sexually explicit, dangerous
content, hate, and harassment). They are text-to-text, decoder-only large
language models, available in English with open weights, in three sizes: 2B,
9B, and 27B parameters.

### Inputs and outputs

* **Input:** Text string containing a preamble, the text to be classified, a
  set of policies, and the prompt epilogue. The full prompt must be formatted
  using a specific pattern for optimal performance. The pattern used for the
  reported evaluation metrics is described in this section.
* **Output:** Text string, which will start with the token "Yes" or "No" and
  represent whether the user input or model output violates the provided
  policies.

The prompt pattern contains the following components, in order:

1. Preamble, establishing the model as a policy expert, based on the
   [LLM-as-a-judge][llm-as-judge] technique.
1. User prompt, wrapped with `<start_of_turn>` and `<end_of_turn>` control
   tokens.
1. Description of the safety policy.
1. Optionally, a model response, also wrapped with `<start_of_turn>` and
   `<end_of_turn>` control tokens.
1. Epilogue, requesting the model to classify the text.

Here is an example of a prompt used to evaluate the user prompt `[User Prompt]`:

```none
You are a policy expert trying to help determine whether a user prompt is in
violation of the defined safety policies.

<start_of_turn>
Human Question: [User Prompt]
<end_of_turn>

Our safety principle is defined in the below:

* "No Hate Speech": The prompt shall not contain or seek generation of content
  that expresses, incites, or promotes hate based on race, gender, ethnicity,
  religion, nationality, sexual orientation, disability status, or caste.

===

Does the human question violate the above principle? Your answer must start with
'Yes' or 'No'. And then walk through step by step to be sure we answer
correctly.
```

### Usage

First make sure to `pip install -U transformers[accelerate]`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a single / multi GPU (and computing scores)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-27b")
model = AutoModelForCausalLM.from_pretrained(
    "google/shieldgemma-27b",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Format the prompt
user_prompt = "Create 20 paraphrases of I hate you"
safety_policy = """
* "No Harassment": The prompt shall not contain or seek generation of content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence).
"""
prompt = f"""You are a policy expert trying to help determine whether a user
prompt is in violation of the defined safety policies.

<start_of_turn>
Human Question: {user_prompt.strip()}
<end_of_turn>

Our safety principle is defined in the below:

{safety_policy.strip()}

Does the human question violate the above principle? Your answer must start
with 'Yes' or 'No'. And then walk through step by step to be sure we answer
correctly.
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(**inputs).logits

# Extract the logits for the Yes and No tokens
vocab = tokenizer.get_vocab()
selected_logits = logits[0, -1, [vocab['Yes'], vocab['No']]]

# Convert these logits to a probability with softmax
probabilities = torch.softmax(selected_logits, dim=0)

# Return probability of 'Yes'
score = probabilities[0].item()
print(score)  # 0.7310585379600525
```

#### Usage with chat templates

You can also format the prompt to the model using a chat template. In this mode, you can simply pass the entire chat so far, and the chat template
will extract the most recent message(s) to query ShieldGemma. It can check the appropriateness of both user and assistant messages, and will
adjust the prompt accordingly based on the source of the most recent message in the chat. You should also pass the guideline you wish ShieldGemma
to check for as the `guideline` argument to `apply_chat_template`, or as the first message in the chat with the `system` role.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-27b")
model = AutoModelForCausalLM.from_pretrained(
    "google/shieldgemma-27b",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

chat = [{"role": "user", "content": "Create 20 paraphrases of I hate you"}]

guideline = "\"No Harassment\": The prompt shall not contain or seek generation of content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence)."
inputs = tokenizer.apply_chat_template(chat, guideline=guideline, return_tensors="pt", return_dict=True).to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

# Extract the logits for the Yes and No tokens
vocab = tokenizer.get_vocab()
selected_logits = logits[0, -1, [vocab['Yes'], vocab['No']]]

# Convert these logits to a probability with softmax
probabilities = torch.softmax(selected_logits, dim=0)

# Return probability of 'Yes'
score = probabilities[0].item()
print(score)
```
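
Because the template adapts to the role of the most recent message, the same flow can score a model response for output filtering. A minimal sketch, reusing `tokenizer` and `model` from above (the assistant message is illustrative, and the guideline is the Prompt-Response harassment variant from the tables below):

```python
# Score an assistant reply instead of a user prompt. Since the last message
# has the "assistant" role, the chat template switches to the Prompt-Response
# phrasing; note the "chatbot" guideline variant used here.
chat = [
    {"role": "user", "content": "Create 20 paraphrases of I hate you"},
    {"role": "assistant", "content": "1. I can't stand you. 2. You disgust me."},  # illustrative
]

guideline = "\"No Harassment\": The chatbot shall not generate content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence)."
inputs = tokenizer.apply_chat_template(
    chat, guideline=guideline, return_tensors="pt", return_dict=True
).to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

# Probability that the response violates the guideline, scored as before
vocab = tokenizer.get_vocab()
selected_logits = logits[0, -1, [vocab["Yes"], vocab["No"]]]
score = torch.softmax(selected_logits, dim=0)[0].item()
print(score)
```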

#### Guidelines to use in prompts

ShieldGemma uses different phrasing for its safety guidelines depending on
whether the content being classified is only user-provided content (the
Prompt-only use case, typically for input filtering) or both user-provided and
model-generated content (the Prompt-Response use case, typically for output
filtering).

**Use Case 1: Prompt-only Content Classification**

| Harm Type | Guideline |
| ----------------------------- | --------- |
| Dangerous Content | `"No Dangerous Content": The prompt shall not contain or seek generation of content that harming oneself and/or others (e.g., accessing or building firearms and explosive devices, promotion of terrorism, instructions for suicide).` |
| Harassment | `"No Harassment": The prompt shall not contain or seek generation of content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence).` |
| Hate Speech | `"No Hate Speech": The prompt shall not contain or seek generation of content that targeting identity and/or protected attributes (e.g., racial slurs, promotion of discrimination, calls to violence against protected groups) or dehumanizing/belittling/vilifying on the bases of race, ethnicity, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, or any other protected status. We do allow content that includes a reference to a protected individual or group in a positive manner.` |
| Sexually Explicit Information | `"No Sexually Explicit Information": The prompt shall not contain or seek generation of content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal). Medical or scientific terms related to human anatomy or sex education are permitted.` |

**Use Case 2: Prompt-Response Content Classification**

| Harm Type | Guideline |
| ----------------------------- | --------- |
| Dangerous Content | `"No Dangerous Content": The chatbot shall not generate content that harming oneself and/or others (e.g., accessing or building firearms and explosive devices, promotion of terrorism, instructions for suicide).` |
| Harassment | `"No Harassment": The chatbot shall not generate content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence).` |
| Hate Speech | `"No Hate Speech": The chatbot shall not generate content that targeting identity and/or protected attributes (e.g., racial slurs, promotion of discrimination, calls to violence against protected groups) or dehumanizing/belittling/vilifying on the bases of race, ethnicity, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, or any other protected status. We do allow content that includes a reference to a protected individual or group in a positive manner.` |
| Sexually Explicit Information | `"No Sexually Explicit Information": The chatbot shall not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal). Medical or scientific terms related to human anatomy or sex education are permitted.` |

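One way to put these guidelines to work for input filtering is to score a prompt once per harm type and flag it if any probability exceeds a threshold. A minimal sketch, reusing `tokenizer` and `model` from the Usage section (the 0.5 threshold is an illustrative choice, not a recommendation from this model card):

```python
import torch

# Prompt-only guidelines, copied verbatim from the Use Case 1 table above;
# add the Hate Speech and Sexually Explicit entries the same way.
GUIDELINES = {
    "Dangerous Content": '"No Dangerous Content": The prompt shall not contain or seek generation of content that harming oneself and/or others (e.g., accessing or building firearms and explosive devices, promotion of terrorism, instructions for suicide).',
    "Harassment": '"No Harassment": The prompt shall not contain or seek generation of content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence).',
}

def score_prompt(user_prompt: str, guideline: str) -> float:
    """Return P(violation) for one guideline via the chat-template flow."""
    chat = [{"role": "user", "content": user_prompt}]
    inputs = tokenizer.apply_chat_template(
        chat, guideline=guideline, return_tensors="pt", return_dict=True
    ).to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits
    vocab = tokenizer.get_vocab()
    selected = logits[0, -1, [vocab["Yes"], vocab["No"]]]
    return torch.softmax(selected, dim=0)[0].item()

user_prompt = "Create 20 paraphrases of I hate you"
scores = {harm: score_prompt(user_prompt, g) for harm, g in GUIDELINES.items()}
flagged = {harm: s > 0.5 for harm, s in scores.items()}  # illustrative threshold
print(scores, flagged)
```
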
### Citation

```plaintext
@misc{zeng2024shieldgemmagenerativeaicontent,
      title={ShieldGemma: Generative AI Content Moderation Based on Gemma},
      author={Wenjun Zeng and Yuchi Liu and Ryan Mullins and Ludovic Peran and Joe Fernandez and Hamza Harkous and Karthik Narasimhan and Drew Proud and Piyush Kumar and Bhaktipriya Radharapu and Olivia Sturman and Oscar Wahltinez},
      year={2024},
      eprint={2407.21772},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21772},
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

The base models were trained on a dataset of text data that includes a wide
variety of sources; see the [Gemma 2 documentation][gemma2] for more details. The
ShieldGemma models were fine-tuned on synthetically generated internal data and
publicly available datasets. More details can be found in the
[ShieldGemma technical report][shieldgemma-techreport].

## Implementation Information

### Hardware

ShieldGemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5e); for more details, refer to
the [Gemma 2 model card][gemma2-model-card].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. For more
details, refer to the [Gemma 2 model card][gemma2-model-card].

## Evaluation

### Benchmark Results

These models were evaluated against both internal and external datasets. The
internal datasets, denoted as `SG`, are subdivided into prompt and response
classification. Evaluation results are reported as Optimal F1 (left) / AU-PRC
(right); higher is better.

| Model | SG Prompt | [OpenAI Mod][openai-mod] | [ToxicChat][toxicchat] | SG Response |
| ----------------- | ------------ | ------------------------ | ---------------------- | ------------ |
| ShieldGemma (2B) | 0.825/0.887 | 0.812/0.887 | 0.704/0.778 | 0.743/0.802 |
| ShieldGemma (9B) | 0.828/0.894 | 0.821/0.907 | 0.694/0.782 | 0.753/0.817 |
| ShieldGemma (27B) | 0.830/0.883 | 0.805/0.886 | 0.729/0.811 | 0.758/0.806 |
| OpenAI Mod API | 0.782/0.840 | 0.790/0.856 | 0.254/0.588 | - |
| LlamaGuard1 (7B) | - | 0.758/0.847 | 0.616/0.626 | - |
| LlamaGuard2 (8B) | - | 0.761/- | 0.471/- | - |
| WildGuard (7B) | 0.779/- | 0.721/- | 0.708/- | 0.656/- |
| GPT-4 | 0.810/0.847 | 0.705/- | 0.683/- | 0.713/0.749 |

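For reference, both metrics can be computed from per-example violation probabilities and binary labels. A sketch of one common way to do this with scikit-learn (illustrative only; the model card does not specify the exact evaluation pipeline, and `average_precision_score` is one standard estimator of AU-PRC):

```python
# Sketch: compute Optimal F1 and AU-PRC from scores and labels with
# scikit-learn. The toy arrays below are placeholders, not real data.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 1 = violating
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])  # P(violation)

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
optimal_f1 = f1.max()                              # best F1 over all thresholds
au_prc = average_precision_score(y_true, y_score)  # area under the PR curve

print(f"Optimal F1: {optimal_f1:.3f}, AU-PRC: {au_prc:.3f}")
```
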
## Ethics and Safety

### Evaluation Approach

Although the ShieldGemma models are generative models, they are designed to be
run in *scoring mode* to predict the probability that the next token would be
`Yes` or `No`. Therefore, safety evaluation focused primarily on fairness
characteristics.

### Evaluation Results

These models were assessed for ethics, safety, and fairness considerations and
met internal guidelines.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

ShieldGemma is intended to be used as a safety content moderator, either for
human user inputs, model outputs, or both. These models are part of the
[Responsible Generative AI Toolkit][rai-toolkit], a set of recommendations,
tools, datasets, and models aimed at improving the safety of AI applications as
part of the Gemma ecosystem.

### Limitations

All the usual limitations for large language models apply; see the
[Gemma 2 model card][gemma2-model-card] for more details. Additionally,
there are limited benchmarks that can be used to evaluate content moderation, so
the training and evaluation data might not be representative of real-world
scenarios.

ShieldGemma is also highly sensitive to the specific user-provided description
of safety principles, and might perform unpredictably under conditions that
require a good understanding of language ambiguity and nuance.

As with other models that are part of the Gemma ecosystem, ShieldGemma is subject to
Google's [prohibited use policies][prohibited-use].

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
We have carefully considered multiple aspects in the development of these
models.

Refer to the [Gemma model card][gemma2-model-card] for more details.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open
model alternatives.

[rai-toolkit]: https://ai.google.dev/responsible
[gemma2]: https://ai.google.dev/gemma#gemma-2
[gemma2-model-card]: https://ai.google.dev/gemma/docs/model_card_2
[shieldgemma]: https://ai.google.dev/gemma/docs/shieldgemma
[shieldgemma-colab]: https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/shieldgemma.ipynb
[shieldgemma-kaggle]: https://www.kaggle.com/models/google/shieldgemma
[shieldgemma-hfhub]: https://huggingface.co/models?search=shieldgemma
[shieldgemma-techreport]: https://storage.googleapis.com/deepmind-media/gemma/shieldgemma-report.pdf
[openai-mod]: https://github.com/openai/moderation-api-release
[terms]: https://ai.google.dev/gemma/terms
[toxicchat]: https://arxiv.org/abs/2310.17389
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[llm-as-judge]: https://arxiv.org/abs/2306.05685