---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- ultralytics
- tracking
- instance-segmentation
- image-classification
- pose-estimation
- obb
- object-detection
- yolo
- yolov8
- license_plate
- Iran
- veichle_lisence_plate
- onnx
---
# Model Overview

This model is an ONNX export of a YOLOv8-medium model fine-tuned on an Iranian vehicle license plate dataset. YOLOv8 detects objects in images by generating bounding boxes around objects of interest and predicting their associated class probabilities.
# How to Use

## Inference Using ONNX Runtime
```python
import onnxruntime as rt

sess = rt.InferenceSession("path_to_model.onnx")

# View model input and output details
input_name = sess.get_inputs()[0].name
print("Input name:", input_name)
input_shape = sess.get_inputs()[0].shape
print("Input shape:", input_shape)
input_type = sess.get_inputs()[0].type
print("Input type:", input_type)

output_name = sess.get_outputs()[0].name
print("Output name:", output_name)
output_shape = sess.get_outputs()[0].shape
print("Output shape:", output_shape)
output_type = sess.get_outputs()[0].type
print("Output type:", output_type)
```
## Pre-processing

- Load: read the image using `cv2.imread()`.
- Resize: resize the input image to a 224x224 resolution.
- Normalize: scale pixel values to the range [0, 1] by dividing by 255.
- Transpose: change the image array to channel-first `(C, H, W)` format.
- Convert: cast the image to a float32 NumPy array and add a batch dimension.
```python
import cv2
import numpy as np

# Pre-process the input image
image_path = "/path_to_image.png"
input_image = cv2.imread(image_path)
resized_image = cv2.resize(input_image, (224, 224))
scaled_image = resized_image / 255.0
transposed_image = scaled_image.transpose((2, 0, 1))
prep_image = np.array(transposed_image, dtype='float32')[None, :]
```
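One caveat the steps above do not cover: `cv2.imread()` returns images in BGR channel order, while YOLOv8 models are typically trained on RGB input. If detections look weaker than expected, try reversing the channel axis before normalizing. A minimal sketch on a dummy array (the array values are illustrative, not from the model):

```python
import numpy as np

# Dummy 224x224 BGR image with the blue channel set to 255
bgr = np.zeros((224, 224, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Reverse the channel axis: BGR -> RGB
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # the 255 now sits in the last (blue) position
```

Applied to the snippet above, this would be `input_image[..., ::-1]` right after `cv2.imread()`.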
## Inference

After pre-processing, run the model on the prepared image.
```python
# Run inference
output_probabilities = sess.run([output_name], {input_name: prep_image})
```
## Post-processing
To extract bounding box details:
- Identify the most probable bounding box.
- Extract the coordinates (x, y, width, height) and the associated probability.
```python
# For a single-class YOLOv8 model the output has shape (1, 5, N):
# the five rows are x, y, width, height, confidence for N candidate boxes.
preds = output_probabilities[0][0]

# Index of the box with the highest confidence
most_prob_idx = preds[4].argmax()

# Coordinates are in the 224x224 input space, center format
x, y, width, height, prob = preds[:, most_prob_idx]
print(x, y, width, height, prob)
```
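The extracted `x, y, width, height` values are in the model's 224x224 input space and use center format. To draw the box on the original image, they need to be converted to corner coordinates and rescaled. A sketch under assumed values (the original image size `orig_w`/`orig_h` and the box values here are illustrative):

```python
# Hypothetical box in the model's 224x224 input space, center format
x, y, w, h = 112.0, 112.0, 60.0, 30.0

# Example original image size; replace with input_image.shape[:2]
orig_w, orig_h = 1280, 720

# Scale factors from model space back to original pixels
sx, sy = orig_w / 224, orig_h / 224

# Center format -> corner format, rescaled to the original image
x1 = int((x - w / 2) * sx)
y1 = int((y - h / 2) * sy)
x2 = int((x + w / 2) * sx)
y2 = int((y + h / 2) * sy)
print(x1, y1, x2, y2)  # -> 468 311 811 408
```

The resulting `(x1, y1), (x2, y2)` pair can be passed directly to `cv2.rectangle()` to visualize the detected plate.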