Update README.md
#5
by bhatta1 - opened
README.md CHANGED
@@ -46,6 +46,11 @@ with torch.no_grad():
## Cookbook on Model Usage as a Guardrail
This recipe illustrates the use of the model on the prompt, on the generated output, or on both. This is an example of a “guardrail”, typically used in generative AI applications for safety.
[Guardrail Cookbook](https://github.com/ibm-granite-community/granite-code-cookbook/blob/main/recipes/Guard-Rails/HAP.ipynb)
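
As a rough illustration of the guardrail pattern described above, the sketch below screens both the user prompt and the generated response with the classifier. The model ID, the use of label index 1 as the HAP class, and the 0.5 threshold are assumptions for illustration only; see the linked cookbook for the exact recipe.

```python
# Minimal guardrail sketch. Assumptions: the model ID below, label index 1 as the
# HAP class, and the 0.5 threshold are illustrative, not taken from this README.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ibm-granite/granite-guardian-hap-38m"  # assumed ID; substitute this card's model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

def hap_score(text: str) -> float:
    """Return the probability that `text` contains hate, abuse, or profanity."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def guarded_generate(prompt: str, generate_fn, threshold: float = 0.5) -> str:
    """Screen the prompt before generation and the output after it."""
    if hap_score(prompt) > threshold:
        return "Request blocked: the prompt was flagged by the HAP guardrail."
    response = generate_fn(prompt)
    if hap_score(response) > threshold:
        return "Response withheld: the output was flagged by the HAP guardrail."
    return response
```

Screening both sides lets the same small classifier catch harmful inputs before generation and harmful outputs before they reach the user.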
49 |
+
|
50 |
+
## Cookbook on Model Usage for Bulk HAP Annotations of Documents
|
51 |
+
This recipe illustrates the use of the model for bulk HAP annotation of documents. The documents are read from a parquet file. It is then fed to the model sentence by sentence for a document and a HAP score for the document is decided. This is then stored back in the parquet file. [Document Annotation Cookbook](https://github.com/IBM/data-prep-kit/tree/dev/transforms/universal/hap/python)
|
52 |
+
|
53 |
+
|
54 |
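
The sketch below outlines that flow under stated assumptions: pandas with a Parquet engine (e.g. pyarrow) for I/O, a hypothetical `contents` text column, a naive period-based sentence splitter, and max-over-sentences aggregation; it reuses the `hap_score` helper from the guardrail sketch above. The Data Prep Kit transform linked in the recipe is the maintained implementation.

```python
# Minimal bulk-annotation sketch. Assumptions: pandas with a Parquet engine such as
# pyarrow, a text column named "contents", a naive period-based sentence splitter,
# and max-over-sentences aggregation; `hap_score` comes from the guardrail sketch above.
import pandas as pd

def annotate_documents(parquet_path: str, text_column: str = "contents") -> None:
    """Add a document-level HAP score column and write the table back to Parquet."""
    df = pd.read_parquet(parquet_path)

    def document_score(document: str) -> float:
        # Naive sentence split; the cookbook may use a proper sentence segmenter.
        sentences = [s.strip() for s in str(document).split(".") if s.strip()]
        if not sentences:
            return 0.0
        # Score each sentence and take the maximum as the document's HAP score.
        return max(hap_score(sentence) for sentence in sentences)

    df["hap_score"] = df[text_column].map(document_score)
    df.to_parquet(parquet_path)
```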
## Performance Comparison with Other Models
The model outperforms most popular models with significantly lower inference latency. If a better F1 score is required, please refer to IBM's 12-layer model [here](https://huggingface.co/ibm-granite/granite-guardian-hap-125m).