### Prepare SAM
```
pip install git+https://github.com/facebookresearch/segment-anything.git
```
or
```
git clone git@github.com:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .
```

Install the additional dependencies:

```
pip install opencv-python pycocotools matplotlib onnxruntime onnx
```
### Download the checkpoint

https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
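
If you prefer to script the download, here is a minimal Python sketch (the target filename `sam_vit_h_4b8939.pth` matches what the snippets below expect):

```
import os
import urllib.request

URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
CKPT = "sam_vit_h_4b8939.pth"

# Skip the download if the checkpoint is already present (it is a large file)
if not os.path.exists(CKPT):
    urllib.request.urlretrieve(URL, CKPT)
```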

### Inference

Prompts are specified as a list of JSON-style dictionaries, covering click, box, and everything modes:

```
prompts = [
    {
        "prompt_type": ["click"],
        "input_point": [[500, 375]],
        "input_label": [1],
        "multimask_output": "True",
    },
    {
        "prompt_type": ["click"],
        "input_point": [[500, 375], [1125, 625]],
        "input_label": [1, 0],
    },
    {
        "prompt_type": ["click", "box"],
        "input_box": [425, 600, 700, 875],
        "input_point": [[575, 750]],
        "input_label": [0],
    },
    {
        "prompt_type": ["box"],
        "input_boxes": [
            [75, 275, 1725, 850],
            [425, 600, 700, 875],
            [1375, 550, 1650, 800],
            [1240, 675, 1400, 750],
        ],
    },
    {
        "prompt_type": ["everything"],
    },
]
```
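
For reference, the first "click" entry corresponds roughly to the following call against the stock `segment_anything` API (a sketch with a hypothetical `example.jpg`; `BaseSegmenter.inference` is assumed to perform an equivalent dispatch internally):

```
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB uint8 image
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Equivalent of the first prompt: one positive click at (500, 375)
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
```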

In `base_segmenter.py`:
```
# image_path: path to the input image; prompts: the list of prompt dicts above
segmenter = BaseSegmenter(
    device='cuda',
    checkpoint='sam_vit_h_4b8939.pth',
    model_type='vit_h',
)

for i, prompt in enumerate(prompts):
    masks = segmenter.inference(image_path, prompt)
```

Outputs are boolean NumPy masks with shape `(num_masks, height, width)`.
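
As a quick sanity check, the masks can be overlaid on the image with matplotlib (a sketch assuming `image_path` and `masks` from the snippet above):

```
import cv2
import matplotlib.pyplot as plt
import numpy as np

image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

# Combine all returned masks into a single RGBA overlay
overlay = np.zeros((*masks.shape[1:], 4))
for mask in masks:
    overlay[mask] = np.concatenate([np.random.random(3), [0.5]])  # random colour, 50% alpha

plt.imshow(image)
plt.imshow(overlay)
plt.axis('off')
plt.show()
```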