glenn-jocher committed on
Commit
bd5cfff
•
1 Parent(s): ebe563e

Created with Colaboratory

Files changed (1)
  1. tutorial.ipynb +14 -7
tutorial.ipynb CHANGED
@@ -661,7 +661,7 @@
661
  "id": "eyTZYGgRjnMc"
662
  },
663
  "source": [
664
- "### 2.1 COCO val2017\n",
665
  "Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy."
666
  ]
667
  },
@@ -786,7 +786,7 @@
786
  "id": "rc_KbFk0juX2"
787
  },
788
  "source": [
789
- "### 2.2 COCO test-dev2017\n",
790
  "Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images) to test model accuracy on the test-dev set (20,000 images). Results are saved to a `*.json` file which can be submitted to the evaluation server at https://competitions.codalab.org/competitions/20794."
791
  ]
792
  },
@@ -996,15 +996,22 @@
996
  }
997
  ]
998
  },
999
  {
1000
  "cell_type": "markdown",
1001
  "metadata": {
1002
  "id": "DLI1JmHU7B0l"
1003
  },
1004
  "source": [
1005
- "# 4. Visualize\n",
1006
- "\n",
1007
- "## 4.1 Weights & Biases Logging (🚀 NEW)\n",
1008
  "\n",
1009
  "[Weights & Biases](https://www.wandb.com/) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration among team members. To enable W&B logging, install `wandb`, then train normally (you will be guided through setup on first use).\n",
1010
  "```bash\n",
@@ -1022,7 +1029,7 @@
1022
  "id": "-WPvRbS5Swl6"
1023
  },
1024
  "source": [
1025
- "## 4.2 Local Logging\n",
1026
  "\n",
1027
  "All results are logged by default to the `runs/exp0` directory, with a new directory created for each new training as `runs/exp1`, `runs/exp2`, etc. View train and test jpgs to see mosaics, labels/predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934)."
1028
  ]
@@ -1093,7 +1100,7 @@
1093
  "id": "Zelyeqbyt3GD"
1094
  },
1095
  "source": [
1096
- "## Environments\n",
1097
  "\n",
1098
  "YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n",
1099
  "\n",
 
661
  "id": "eyTZYGgRjnMc"
662
  },
663
  "source": [
664
+ "## COCO val2017\n",
665
  "Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy."
666
  ]
667
  },
 
786
  "id": "rc_KbFk0juX2"
787
  },
788
  "source": [
789
+ "## COCO test-dev2017\n",
790
  "Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images) to test model accuracy on the test-dev set (20,000 images). Results are saved to a `*.json` file which can be submitted to the evaluation server at https://competitions.codalab.org/competitions/20794."
791
  ]
792
  },
 
996
  }
997
  ]
998
  },
999
+ {
1000
+ "cell_type": "markdown",
1001
+ "metadata": {
1002
+ "id": "15glLzbQx5u0"
1003
+ },
1004
+ "source": [
1005
+ "# 4. Visualize"
1006
+ ]
1007
+ },
1008
  {
1009
  "cell_type": "markdown",
1010
  "metadata": {
1011
  "id": "DLI1JmHU7B0l"
1012
  },
1013
  "source": [
1014
+ "## Weights & Biases Logging (🚀 NEW)\n",
1015
  "\n",
1016
  "[Weights & Biases](https://www.wandb.com/) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration among team members. To enable W&B logging, install `wandb`, then train normally (you will be guided through setup on first use).\n",
1017
  "```bash\n",
 
1029
  "id": "-WPvRbS5Swl6"
1030
  },
1031
  "source": [
1032
+ "## Local Logging\n",
1033
  "\n",
1034
  "All results are logged by default to the `runs/exp0` directory, with a new directory created for each new training as `runs/exp1`, `runs/exp2`, etc. View train and test jpgs to see mosaics, labels/predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934)."
1035
  ]
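
The incrementing run-directory scheme described in the Local Logging cell (`runs/exp0`, `runs/exp1`, etc.) can be sketched in a few lines of Python. Note this is an illustrative sketch of the naming convention only; `next_run_dir` is a hypothetical helper, not YOLOv5's own function:

```python
from pathlib import Path

def next_run_dir(root: str = "runs", prefix: str = "exp") -> str:
    """Return the next free run directory, e.g. runs/exp0, runs/exp1, ...

    Illustrative sketch of the naming scheme described above,
    not the actual YOLOv5 helper.
    """
    n = 0
    # Count upward until a directory name that does not exist yet is found
    while Path(root, f"{prefix}{n}").exists():
        n += 1
    return str(Path(root, f"{prefix}{n}"))
```

With an empty `runs/` folder this yields `runs/exp0`; once that exists, the next training run would be placed in `runs/exp1`, and so on.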
 
1100
  "id": "Zelyeqbyt3GD"
1101
  },
1102
  "source": [
1103
+ "# Environments\n",
1104
  "\n",
1105
  "YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):\n",
1106
  "\n",