glenn-jocher committed
Commit 7a6870b • 1 Parent(s): 83dc1b4
Update README.md

README.md CHANGED
@@ -89,17 +89,15 @@ To run inference on example images in `data/images`:
```bash
$ python detect.py --source data/images --weights yolov5s.pt --conf 0.25

-Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', img_size=640, iou_thres=0.45,
-
-
-Downloading https://github.com/ultralytics/yolov5/releases/download/v3.1/yolov5s.pt to yolov5s.pt... 100%|██████████████| 14.5M/14.5M [00:00<00:00, 21.3MB/s]
+Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt'])
+YOLOv5 v4.0-96-g83dc1b4 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)

Fusing layers...
-Model Summary:
-image 1/2 data/images/bus.jpg: 640x480 4 persons, 1
-image 2/2 data/images/zidane.jpg: 384x640 2 persons,
-Results saved to runs/detect/
-Done. (0.113s)
+Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
+image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, Done. (0.010s)
+image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 1 tie, Done. (0.011s)
+Results saved to runs/detect/exp2
+Done. (0.103s)
```
<img src="https://user-images.githubusercontent.com/26833433/97107365-685a8d80-16c7-11eb-8c2e-83aac701d8b9.jpeg" width="500">

@@ -108,18 +106,17 @@ Done. (0.113s)
To run **batched inference** with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36):
```python
import torch
-from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Images
-
-
-imgs = [img1, img2]  # batched list of images
+dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
+imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batched list of images

# Inference
-
+results = model(imgs)
+results.print()  # or .show(), .save()
```
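The Namespace printout in the updated output lists every option detect.py ran with. As a rough illustration only, the same run could be spelled out with explicit flags; the flag names below are inferred from the argparse field names shown in that printout and have not been checked against this revision of detect.py:

```bash
# Hedged sketch: the example run with options from the Namespace printout made
# explicit. Flag names are assumed to mirror the argparse fields shown above
# (e.g. conf_thres -> --conf-thres) and are not verified against this revision.
$ python detect.py --source data/images --weights yolov5s.pt \
                   --conf-thres 0.25 --iou-thres 0.45 --img-size 640 \
                   --project runs/detect --name exp
```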
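The updated Hub snippet ends with results.print() and notes .show() and .save() as alternatives. A minimal sketch of saving and inspecting the same batched results follows; it assumes the returned Detections object also exposes an .xyxy attribute holding one tensor of [x1, y1, x2, y2, conf, cls] rows per input image, which is not shown in the diff above:

```python
# Hedged sketch: saving and inspecting the batched Hub results from the snippet
# above. Assumes the Detections object exposes .save() (hinted by the
# "# or .show(), .save()" comment) and a per-image .xyxy list of (n, 6) tensors.
import torch

# Model (same call as in the updated README snippet)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Images (same batched list as in the updated README snippet)
dir = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]

# Inference
results = model(imgs)
results.save()  # write annotated copies of each image to disk

# Per-image detection counts, assuming results.xyxy is a list of (n, 6) tensors
for path, det in zip(imgs, results.xyxy):
    print(f'{path}: {det.shape[0]} detections')
```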