Update README.md #4
by bhatta1 · opened

README.md CHANGED
@@ -46,6 +46,9 @@ with torch.no_grad():
## Cookbook on Model Usage as a Guardrail

This recipe illustrates using the model to screen the prompt, the output, or both. This is an example of a “guardrail”, typically used in generative AI applications for safety.

[Guardrail Cookbook](https://github.com/ibm-granite-community/granite-code-cookbook/blob/main/recipes/Guard-Rails/HAP.ipynb)
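A minimal sketch of the guardrail pattern the recipe describes: score the user prompt before it reaches the LLM, and score the LLM output before it reaches the user. The threshold value and the stand-in `toy_hap` classifier are assumptions for illustration; the cookbook uses the HAP model itself as the scorer.

```python
# Guardrail sketch: gate both the prompt and the generated output on a
# HAP score. Threshold and stand-in scorer are illustrative assumptions.

THRESHOLD = 0.75  # assumed cutoff; tune per application


def guarded_generate(prompt, generate, hap_score, threshold=THRESHOLD):
    """Run `generate` only if the prompt and the output both pass the HAP check."""
    if hap_score(prompt) >= threshold:
        return "[blocked: prompt flagged as HAP]"
    output = generate(prompt)
    if hap_score(output) >= threshold:
        return "[blocked: output flagged as HAP]"
    return output


# Stand-in pieces for illustration; the recipe plugs in the HAP model here.
echo = lambda p: f"echo: {p}"
toy_hap = lambda text: 0.9 if "hate" in text.lower() else 0.1

print(guarded_generate("hello there", echo, toy_hap))  # → echo: hello there
print(guarded_generate("I hate you", echo, toy_hap))   # → [blocked: prompt flagged as HAP]
```

The same `guarded_generate` wrapper covers the prompt-only, output-only, and both-sides variants the recipe mentions, depending on which checks you keep.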
+ ## Cookbook on Model Usage for Bulk Annotation of Documents
+
+ This recipe illustrates the use of the model for HAP annotation of documents. The recipe reads documents from a Parquet file, checks every sentence of each document for HAP, decides on a HAP score for the whole document, and adds that score back to the Parquet file. [Document Annotation Cookbook](https://github.com/IBM/data-prep-kit/tree/dev/transforms/universal/hap/python)

## Performance Comparison with Other Models

The model outperforms most popular models with significantly lower inference latency. If a better F1 score is required, please refer to IBM's 12-layer model [here](https://huggingface.co/ibm-granite/granite-guardian-hap-125m).
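The bulk-annotation flow added in this change (split each document into sentences, score every sentence, roll the sentence scores up into one document score) can be sketched as below. The naive sentence splitter, the column names, the max-aggregation rule, and the `toy_scorer` stand-in are all assumptions for illustration; the real transform lives in the linked data-prep-kit repository, reads/writes Parquet (e.g. via `pandas.read_parquet` / `DataFrame.to_parquet`), and uses the HAP model as the sentence scorer.

```python
# Sketch of the document-annotation flow (assumed: column name, naive
# sentence splitting on ".", and max-aggregation of sentence scores).


def split_sentences(doc: str) -> list:
    # Naive splitter; the real recipe may use a proper sentence tokenizer.
    return [s.strip() for s in doc.split(".") if s.strip()]


def document_hap_score(doc: str, score_sentence) -> float:
    # A document is judged by its worst sentence: take the max HAP score.
    return max((score_sentence(s) for s in split_sentences(doc)), default=0.0)


def annotate_records(records, score_sentence, text_col="contents"):
    # In the recipe, `records` would come from a Parquet file and the
    # annotated result would be written back to Parquet.
    for rec in records:
        rec["hap_score"] = document_hap_score(rec[text_col], score_sentence)
    return records


# Stand-in scorer; the recipe instead runs the HAP classifier on each sentence.
def toy_scorer(sentence: str) -> float:
    return 0.9 if "hateful" in sentence.lower() else 0.05


docs = [{"contents": "A friendly note. Have a nice day."}]
print(annotate_records(docs, toy_scorer)[0]["hap_score"])  # → 0.05
```

Aggregating by max is one plausible rollup; a mean or a count-above-threshold rule would be equally easy to swap into `document_hap_score`.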