NoHarm Care - Patient Risk
Hello guys,
Just to share our results with you: we were previously using the FlairBBP language model for our patient risk tool at NoHarm.ai.
With FlairBBP we achieved an f1-score (macro avg) of 0.7711.
This week we tried the biobertpt-all LM and got an excellent result: an f1-score (macro avg) of 0.8917 (so far). A rough sketch of the setup is below.
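For anyone who wants to try something similar, here is a minimal sketch of plugging a Hugging Face checkpoint such as pucpr/biobertpt-all into a Flair sequence tagger. The corpus paths, column format, label type and hyperparameters are placeholders, not our production configuration:

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder CoNLL-style corpus: token in column 0, label in column 1
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus("data/", columns,
                      train_file="train.txt",
                      dev_file="dev.txt",
                      test_file="test.txt")

label_type = "ner"
label_dict = corpus.make_label_dictionary(label_type=label_type)

# BioBERTpt (all) pulled from the Hugging Face hub and fine-tuned through Flair
embeddings = TransformerWordEmbeddings("pucpr/biobertpt-all", fine_tune=True)

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=label_dict,
                        tag_type=label_type,
                        use_crf=False)  # plain softmax head when fine-tuning the transformer

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune("models/biobertpt-risk",
                  learning_rate=5e-5,
                  mini_batch_size=16,
                  max_epochs=10)
```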
We were specifically looking for a transformer model so we could optimize it for AWS Neuron production inference while keeping the Flair framework.
Next week we will compare the inference speed of FlairBBP (running on a g4dn) and BioBERTpt (running on an inf1).
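Roughly speaking, compiling a BERT-family checkpoint for inf1 with torch-neuron looks like the sketch below. The checkpoint, task head, example text and sequence length are illustrative assumptions, not necessarily what we run in production:

```python
import torch
import torch_neuron  # noqa: F401 -- registers the torch.neuron backend
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Illustrative checkpoint; in practice you would trace your fine-tuned model
model_name = "pucpr/biobertpt-all"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, torchscript=True)
model.eval()

# Fixed-shape example input, since Neuron compiles for static shapes
example = tokenizer("exemplo de evolução clínica do paciente",
                    padding="max_length", max_length=128,
                    truncation=True, return_tensors="pt")
example_inputs = (example["input_ids"], example["attention_mask"])

# Ahead-of-time compilation for the Inferentia (inf1) chip
model_neuron = torch.neuron.trace(model, example_inputs=example_inputs)
model_neuron.save("biobertpt_neuron.pt")
```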
Thanks for sharing your LM!
Hello guys,
Here is the comparison between the Flair and BERT models running on several AWS instance types.
| Metric | inf1 (BERT) | g4dn (BERT) | g4dn (Flair) | g3s (Flair) |
|---|---|---|---|---|
| texts per second | 2.28 | 3.25 | 1.43 | 1.10 |
| texts per day | 196,914.24 | 280,972.80 | 123,698.88 | 95,169.60 |
| price per month | $49.25 | $113.62 | $113.62 | $162.00 |
| price per text | $0.000008 | $0.000013 | $0.000031 | $0.000057 |
| f1-score (macro) | 0.91 | 0.91 | 0.77 | 0.77 |
| model size (MB) | 518 | 691 | 237 | 237 |
The prices assume Spot Instances, since I run the inference on ECS with Spot capacity.
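For reference, the per-text numbers fall out of the throughput and the monthly Spot price, assuming the instance runs 24/7 over a 30-day month (my assumption about the billing window; small differences come from rounding the throughput):

```python
def cost_breakdown(texts_per_second: float, price_per_month: float) -> tuple[float, float]:
    """Derive daily throughput and per-text cost from the table's inputs."""
    texts_per_day = texts_per_second * 86_400                 # seconds in a day
    price_per_text = price_per_month / (texts_per_day * 30)   # 30-day month assumed
    return texts_per_day, price_per_text

# inf1 (BERT) row: 2.28 texts/s at $49.25/month (Spot)
texts_per_day, price_per_text = cost_breakdown(2.28, 49.25)
print(f"{texts_per_day:,.0f} texts/day, ${price_per_text:.6f} per text")
# -> roughly 197,000 texts/day and $0.000008 per text, matching the table
```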
Cheers!!
@hdpsantos Great! We are glad that our model is being useful. Thanks for sharing your results and analysis.