katielink committed on
Commit b574bbe
1 Parent(s): 9984ad0

Fix inference print logger and froc

README.md CHANGED
@@ -70,7 +70,8 @@ Inference is performed on WSI in a sliding window manner with specified stride.
 # Model Performance
 
 FROC score is used for evaluating the performance of the model. After inference is done, `evaluate_froc.sh` needs to be run to evaluate FROC score based on predicted probability map (output of inference) and the ground truth tumor masks.
-This model achieve the ~0.92 accuracy on validation patches, and FROC of ~0.72 on the 48 Camelyon testing data that have ground truth annotations available.
+This model achieve the ~0.91 accuracy on validation patches, and FROC of 0.685 on the 48 Camelyon testing data that have ground truth annotations available.
+![model performance](<https://developer.download.nvidia.com/assets/Clara/Images/monai_pathology_tumor_detection_train_and_val_metrics.png>)
 
 # Commands example
 
@@ -89,16 +90,16 @@ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training
 Please note that the distributed training related options depend on the actual running environment, thus you may need to remove `--standalone`, modify `--nnodes` or do some other necessary changes according to the machine you used.
 Please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for more details.
 
-Override the `train` config to execute evaluation with the trained model:
+Execute inference:
 
 ```
-python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file "['configs/train.json','configs/evaluate.json']" --logging_file configs/logging.conf
+CUDA_LAUNCH_BLOCKING=1 python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
 ```
 
-Execute inference:
+Evaluate FROC metric:
 
 ```
-python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
+cd scripts && source evaluate_froc.sh
 ```
 
 Export checkpoint to TorchScript file:
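The FROC evaluation the README points to is implemented elsewhere in the bundle; as a rough, self-contained illustration only (pure Python, hypothetical function name and inputs — not the bundle's actual implementation), a CAMELYON-style FROC score averages the lesion-level sensitivity at 0.25, 0.5, 1, 2, 4 and 8 false positives per whole-slide image:

```python
# Hedged sketch of a CAMELYON-style FROC score: average sensitivity at
# fixed false-positive rates per slide. All names and inputs here are
# hypothetical stand-ins for the bundle's own lesion FROC script.

def froc_score(detections, num_lesions, num_images,
               fp_rates=(0.25, 0.5, 1, 2, 4, 8)):
    """detections: iterable of (probability, is_true_positive, lesion_id)."""
    # Walk candidates from most to least confident, tracking how many
    # distinct lesions are hit and how many false positives accumulate.
    hit, fps, curve = set(), 0, []
    for prob, is_tp, lesion_id in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            hit.add(lesion_id)
        else:
            fps += 1
        curve.append((fps / num_images, len(hit) / num_lesions))
    # Sensitivity reached while staying at or below each target FP rate.
    sens_at = [max([s for f, s in curve if f <= rate], default=0.0)
               for rate in fp_rates]
    return sum(sens_at) / len(fp_rates)

# Toy example: 2 slides, 2 lesions, 3 candidate detections.
dets = [(0.9, True, 0), (0.8, False, -1), (0.7, True, 1)]
print(froc_score(dets, num_lesions=2, num_images=2))  # ≈ 0.9167
```

The real evaluation additionally extracts candidate detections from the predicted probability maps and matches them against the annotated tumor masks; this sketch only shows the final averaging step.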
configs/inference.json CHANGED
@@ -106,7 +106,7 @@
 {
 "_target_": "StatsHandler",
 "tag_name": "progress",
-"iteration_print_logger": "$lambda engine: print(f'image: \"{engine.state.batch[\"metadata\"][\"name\"][0]}\", iter: {engine.state.iteration}/{engine.state.epoch_length}') if engine.state.iteration % 100 == 0 else None",
+"iteration_print_logger": "$lambda engine: print(f'image: \"{engine.state.batch[\"image\"].meta[\"name\"][0]}\", iter: {engine.state.iteration}/{engine.state.epoch_length}') if engine.state.iteration % 100 == 0 else None",
 "output_transform": "$lambda x: None"
 },
 {
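The fix above swaps `engine.state.batch["metadata"]` for the metadata dict carried on the image tensor itself. A minimal mock (no MONAI required; class and field names are stand-ins) shows the access pattern the corrected lambda relies on:

```python
from types import SimpleNamespace

# Stand-in for a MONAI MetaTensor: the image tensor carries its own
# metadata dict, which is where the corrected lambda reads the WSI name.
class FakeMetaTensor:
    def __init__(self, meta):
        self.meta = meta

def iteration_print_logger(engine):
    # Same condition and message as the fixed config entry, but returning
    # the string instead of printing it, so it is easy to check.
    if engine.state.iteration % 100 == 0:
        name = engine.state.batch["image"].meta["name"][0]
        return (f'image: "{name}", '
                f"iter: {engine.state.iteration}/{engine.state.epoch_length}")
    return None

engine = SimpleNamespace(state=SimpleNamespace(
    batch={"image": FakeMetaTensor({"name": ["tumor_001.tif"]})},
    iteration=100,
    epoch_length=500,
))
print(iteration_print_logger(engine))  # image: "tumor_001.tif", iter: 100/500
```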
configs/metadata.json CHANGED
@@ -1,7 +1,8 @@
 {
 "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-"version": "0.4.0",
+"version": "0.4.1",
 "changelog": {
+"0.4.1": "Fix inference print logger and froc",
 "0.4.0": "add lesion FROC calculation and wsi_reader",
 "0.3.3": "update to use monai 1.0.1",
 "0.3.2": "enhance readme on commands example",
docs/README.md CHANGED
@@ -63,7 +63,8 @@ Inference is performed on WSI in a sliding window manner with specified stride.
 # Model Performance
 
 FROC score is used for evaluating the performance of the model. After inference is done, `evaluate_froc.sh` needs to be run to evaluate FROC score based on predicted probability map (output of inference) and the ground truth tumor masks.
-This model achieve the ~0.92 accuracy on validation patches, and FROC of ~0.72 on the 48 Camelyon testing data that have ground truth annotations available.
+This model achieve the ~0.91 accuracy on validation patches, and FROC of 0.685 on the 48 Camelyon testing data that have ground truth annotations available.
+![model performance](<https://developer.download.nvidia.com/assets/Clara/Images/monai_pathology_tumor_detection_train_and_val_metrics.png>)
 
 # Commands example
 
@@ -82,16 +83,16 @@ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run training
 Please note that the distributed training related options depend on the actual running environment, thus you may need to remove `--standalone`, modify `--nnodes` or do some other necessary changes according to the machine you used.
 Please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for more details.
 
-Override the `train` config to execute evaluation with the trained model:
+Execute inference:
 
 ```
-python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file "['configs/train.json','configs/evaluate.json']" --logging_file configs/logging.conf
+CUDA_LAUNCH_BLOCKING=1 python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
 ```
 
-Execute inference:
+Evaluate FROC metric:
 
 ```
-python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
+cd scripts && source evaluate_froc.sh
 ```
 
 Export checkpoint to TorchScript file:
models/model.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:90c8ae76f342c6fa5d89b9df9f5af014c9fd547067db5bae00a8e9a94e242e7a
-size 44790201
+oid sha256:9b2684dad87e6c1d25e765c6408a6a0f387c1887eb318a6b5b367e3d963231ac
+size 44780565
scripts/evaluate_froc.sh CHANGED
@@ -4,7 +4,7 @@ LEVEL=6
 SPACING=0.243
 READER=openslide
 EVAL_DIR=../eval
-GROUND_TRUTH_DIR=/workspace/data/medical/pathology/ground_truths
+GROUND_TRUTH_DIR=/workspace/data/medical/pathology/testing/ground_truths
 
 echo "=> Level= ${LEVEL}"
 echo "=> Spacing = ${SPACING}"
scripts/lesion_froc.py CHANGED
@@ -17,7 +17,7 @@ def load_data(ground_truth_dir: str, eval_dir: str, level: int, spacing: float):
 for prob_name in prob_files:
 if prob_name.endswith(".npy"):
 sample = {
-"tumor_mask": full_path(ground_truth_dir, prob_name.replace("npy", "tif")),
+"tumor_mask": full_path(ground_truth_dir, prob_name[:-4]),
 "prob_map": full_path(eval_dir, prob_name),
 "level": level,
 "pixel_spacing": spacing,
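The `tumor_mask` change above replaces a substring substitution with a suffix strip. A small stand-alone demonstration (hypothetical file names) of the difference, and of the failure mode the old `str.replace` call invites:

```python
# The old code rewrote the substring "npy" to "tif" anywhere in the name;
# the new code drops the 4-character ".npy" extension and leaves resolving
# the mask file to full_path() / the downstream mask reader.
prob_name = "tumor_001.npy"

old_stem = prob_name.replace("npy", "tif")  # old behaviour
new_stem = prob_name[:-4]                   # new behaviour: strip ".npy"

assert old_stem == "tumor_001.tif"
assert new_stem == "tumor_001"

# str.replace rewrites *every* occurrence, so a name that happens to
# contain "npy" elsewhere is silently corrupted by the old code:
tricky = "scan_npy_07.npy"
assert tricky.replace("npy", "tif") == "scan_tif_07.tif"  # wrong stem
assert tricky[:-4] == "scan_npy_07"                       # stem preserved
```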