# EDS-Pseudo
This project aims at detecting identifying entities in documents, and was primarily tested on clinical reports at AP-HP's Clinical Data Warehouse (EDS).
The model is built on top of edsnlp and consists of a hybrid model (rule-based + deep learning) for which we provide rules (`eds-pseudo/pipes`) and a training recipe (`train.py`).

We also provide some fictitious templates (`templates.txt`) and a script to generate a synthetic dataset (`generate_dataset.py`).
The entities that are detected are listed below.
| Label | Description |
|---|---|
| `ADRESSE` | Street address, e.g. 33 boulevard de Picpus |
| `DATE` | Any absolute date other than a birthdate |
| `DATE_NAISSANCE` | Birthdate |
| `HOPITAL` | Hospital name, e.g. Hôpital Rothschild |
| `IPP` | Internal AP-HP identifier for patients, displayed as a number |
| `MAIL` | Email address |
| `NDA` | Internal AP-HP identifier for visits, displayed as a number |
| `NOM` | Any last name (patients, doctors, third parties) |
| `PRENOM` | Any first name (patients, doctors, etc.) |
| `SECU` | Social security number |
| `TEL` | Any phone number |
| `VILLE` | Any city |
| `ZIP` | Any zip code |
## Downloading the public pre-trained model
The public pretrained model is available on the HuggingFace model hub at [AP-HP/eds-pseudo-public](https://huggingface.co/AP-HP/eds-pseudo-public) and was trained on synthetic data (see `generate_dataset.py`). You can also test it directly on the demo.
1. Install the latest version of edsnlp:

   ```bash
   pip install "edsnlp[ml]" -U
   ```

2. Get access to the model at [AP-HP/eds-pseudo-public](https://huggingface.co/AP-HP/eds-pseudo-public).

3. Create and copy a HuggingFace token with the "READ" permission at https://huggingface.co/settings/tokens?new_token=true.

4. Register the token (only once) on your machine:

   ```python
   import huggingface_hub

   huggingface_hub.login(token=YOUR_TOKEN, new_session=False, add_to_git_credential=True)
   ```

5. Load the model:

   ```python
   import edsnlp

   nlp = edsnlp.load("AP-HP/eds-pseudo-public", auto_update=True)
   doc = nlp(
       "En 2015, M. Charles-François-Bienvenu "
       "Myriel était évêque de Digne. C’était un vieillard "
       "d’environ soixante-quinze ans ; il occupait le "
       "siège de Digne depuis 2006."
   )

   for ent in doc.ents:
       print(ent, ent.label_, str(ent._.date))
   ```
To apply the model to many documents using one or more GPUs, refer to the edsnlp documentation.
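As a minimal sketch, batch inference over an in-memory list of texts looks like this; the corpus below is a placeholder, and parallel or GPU execution settings should be taken from the edsnlp documentation:

```python
import edsnlp

nlp = edsnlp.load("AP-HP/eds-pseudo-public", auto_update=True)

# Placeholder corpus: replace with your own documents
texts = [
    "Patient vu le 12/03/2021 à l'Hôpital Rothschild.",
    "Contacter le Dr Myriel au 01 23 45 67 89.",
]

# nlp.pipe processes documents in batches, which is much faster than
# calling nlp(text) in a loop; multiprocessing and GPU options are
# configured through edsnlp's processing API (see its documentation).
for doc in nlp.pipe(texts):
    for ent in doc.ents:
        print(ent, ent.label_)
```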
## Metrics
Redact denotes the share of identifying tokens that were effectively removed, and Redact Full the share of documents from which all identifying tokens were removed (see the publication below for exact definitions).

| AP-HP Pseudo Test Token Scores | Precision | Recall | F1 | Redact | Redact Full |
|---|---|---|---|---|---|
| ADRESSE | 98.2 | 96.9 | 97.6 | 97.6 | 96.7 |
| DATE | 99.0 | 98.4 | 98.7 | 98.8 | 85.9 |
| DATE_NAISSANCE | 97.5 | 96.9 | 97.2 | 99.3 | 99.4 |
| IPP | 91.9 | 90.8 | 91.3 | 98.5 | 99.3 |
| MAIL | 96.1 | 99.8 | 97.9 | 99.8 | 99.7 |
| NDA | 92.1 | 83.5 | 87.6 | 87.4 | 97.2 |
| NOM | 94.4 | 95.3 | 94.8 | 98.2 | 89.5 |
| PRENOM | 93.5 | 96.6 | 95.0 | 99.0 | 93.2 |
| SECU | 88.3 | 100.0 | 93.8 | 100.0 | 100.0 |
| TEL | 97.5 | 99.9 | 98.7 | 99.9 | 99.6 |
| VILLE | 96.7 | 93.8 | 95.2 | 95.1 | 91.1 |
| ZIP | 96.8 | 100.0 | 98.3 | 100.0 | 100.0 |
| micro | 97.0 | 97.8 | 97.4 | 98.8 | 63.1 |
## Installation to reproduce
If you'd like to reproduce eds-pseudo's training or contribute to its development, you should first clone it:
```bash
git clone https://github.com/aphp/eds-pseudo.git
cd eds-pseudo
```
And install the dependencies. We recommend pinning the library version in your projects, or using a strict package manager like Poetry.
```bash
poetry install
```
## How to use without machine learning
```python
import edsnlp

nlp = edsnlp.blank("eds")

# Some text cleaning
nlp.add_pipe("eds.normalizer")

# Various simple rules
nlp.add_pipe(
    "eds_pseudo.simple_rules",
    config={"pattern_keys": ["TEL", "MAIL", "SECU", "PERSON"]},
)

# Address detection
nlp.add_pipe("eds_pseudo.addresses")

# Date detection
nlp.add_pipe("eds_pseudo.dates")

# Contextual rules (requires a dict of info about the patient)
nlp.add_pipe("eds_pseudo.context")

# Apply it to a text
doc = nlp(
    "En 2015, M. Charles-François-Bienvenu "
    "Myriel était évêque de Digne. C’était un vieillard "
    "d’environ soixante-quinze ans ; il occupait le "
    "siège de Digne depuis 2006."
)

for ent in doc.ents:
    print(ent, ent.label_)
# 2015 DATE
# Charles-François-Bienvenu NOM
# Myriel PRENOM
# 2006 DATE
```
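The `eds_pseudo.context` pipe only fires when it is given known information about the patient. The snippet below is a hypothetical illustration of how such a dict could be attached; the extension name and schema are assumptions, so check the rules in `eds-pseudo/pipes` for the actual interface:

```python
# ASSUMPTION: the context pipe reads known patient info from a dict
# attached to the doc; the extension name and keys shown here are
# illustrative, not the confirmed eds-pseudo interface.
doc = nlp.make_doc("M. Myriel est sorti hier.")
doc._.context = {
    "NOM": ["Myriel"],
    "PRENOM": ["Charles-François-Bienvenu"],
}
doc = nlp(doc)
```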
## How to train
Before training a model, you should update the `configs/config.cfg` and `pyproject.toml` files to fit your needs.

Put your data in the `data/dataset` folder (or edit the paths in the `configs/config.cfg` file to point to `data/gen_dataset/train.jsonl`).
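For reference, each line of a JSONL dataset is one JSON record pairing a text with character-offset entities. The field names below are illustrative assumptions only; the authoritative schema is whatever the readers configured in `configs/config.cfg` expect:

```python
# ASSUMPTION: illustrative field names; the real schema is defined by
# the data readers configured in configs/config.cfg.
record = {
    "note_text": "M. Myriel, né le 01/01/1940, ...",
    "entities": [
        {"start": 3, "end": 9, "label": "NOM"},  # "Myriel"
        {"start": 17, "end": 27, "label": "DATE_NAISSANCE"},  # "01/01/1940"
    ],
}
```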
Then, run the training script:

```bash
python scripts/train.py --config configs/config.cfg --seed 43
```
This will train a model and save it in `artifacts/model-last`. You can evaluate it on the test set (defaults to `data/dataset/test.jsonl`) with:

```bash
python scripts/evaluate.py --config configs/config.cfg
```
To package it, run:

```bash
python scripts/package.py
```
This will create a `dist/eds-pseudo-aphp-***.whl` file that you can install with `pip install dist/eds-pseudo-aphp-***`.
You can use it in your code:
```python
import edsnlp

# Either from the model path directly
nlp = edsnlp.load("artifacts/model-last")

# Or from the wheel file
import eds_pseudo_aphp

nlp = eds_pseudo_aphp.load()
```
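Once loaded, the model can back a simple redaction step. The helper below is an illustrative sketch, not part of eds-pseudo; it replaces each detected span with its label using standard span offsets:

```python
def redact(nlp, text: str) -> str:
    """Illustrative helper: replace each detected entity with <LABEL>."""
    doc = nlp(text)
    redacted = text
    # Replace from the end of the text so earlier offsets stay valid
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        redacted = redacted[: ent.start_char] + f"<{ent.label_}>" + redacted[ent.end_char :]
    return redacted

print(redact(nlp, "M. Myriel habite 33 boulevard de Picpus."))
# e.g. "M. <NOM> habite <ADRESSE>." (actual output depends on the model)
```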
## Documentation
Visit the documentation for more information!
## Publication
Please find our publication at the following link: https://doi.org/mkfv.
If you use EDS-Pseudo, please cite us as below:
```bibtex
@article{eds_pseudo,
    title={Development and validation of a natural language processing algorithm to pseudonymize documents in the context of a clinical data warehouse},
    author={Tannier, Xavier and Wajsb{\"u}rt, Perceval and Calliger, Alice and Dura, Basile and Mouchet, Alexandre and Hilka, Martin and Bey, Romain},
    journal={Methods of Information in Medicine},
    year={2024},
    publisher={Georg Thieme Verlag KG}
}
```
## Acknowledgement
We would like to thank Assistance Publique – Hôpitaux de Paris and AP-HP Foundation for funding this project.