🚀 YOLOv8: Extended Edition is a fork of the Ultralytics YOLOv8 repository, with new features focused on automotive vehicles 🚗.
🙋‍♂️ Who are we?
We are a team from Egypt studying at the Faculty of Engineering, Electronics and Communications Department, Mansoura University.
These features were created for our graduation project 🎓 (RTSD System), a system that detects road traffic signs in real time.
We use YOLOv8 as a base model and add new features to make it more suitable for our use case.
If you are new to YOLOv8, please unpack the original YOLOv8 README below.
Original YOLOv8 README
Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
To request an Enterprise License please complete the form at Ultralytics Licensing.
See below for a quickstart installation and usage example, and see the YOLOv8 Docs for full documentation on training, validation, prediction and deployment.
Install
Pip install the ultralytics package including all requirements in a Python>=3.7 environment with PyTorch>=1.7.
```bash
pip install ultralytics
```
Usage
YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` command:

```bash
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
```

`yolo` can be used for a variety of tasks and modes and accepts additional arguments, i.e. `imgsz=640`. See the YOLOv8 CLI Docs for examples.
YOLOv8 may also be used directly in a Python environment, and accepts the same arguments as in the CLI example above:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco128.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model to ONNX format
```
Models download automatically from the latest Ultralytics release. See YOLOv8 Python Docs for more examples.
All YOLOv8 pretrained models are available here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.
Models download automatically from the latest Ultralytics release on first use.
Detection
See Detection Docs for usage examples with these models.
| Model | size (pixels) | mAP<sup>val</sup> 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| YOLOv8s | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| YOLOv8m | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| YOLOv8l | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| YOLOv8x | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
- mAP<sup>val</sup> values are for single-model single-scale on the COCO val2017 dataset. Reproduce by `yolo val detect data=coco.yaml device=0`
- Speed averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
Segmentation
See Segmentation Docs for usage examples with these models.
| Model | size (pixels) | mAP<sup>box</sup> 50-95 | mAP<sup>mask</sup> 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n-seg | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| YOLOv8s-seg | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| YOLOv8m-seg | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| YOLOv8l-seg | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| YOLOv8x-seg | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
- mAP<sup>val</sup> values are for single-model single-scale on the COCO val2017 dataset. Reproduce by `yolo val segment data=coco.yaml device=0`
- Speed averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
Classification
See Classification Docs for usage examples with these models.
| Model | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) at 640 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n-cls | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
| YOLOv8s-cls | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
| YOLOv8m-cls | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
| YOLOv8l-cls | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
| YOLOv8x-cls | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
- acc values are model accuracies on the ImageNet dataset validation set. Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- Speed averaged over ImageNet val images using an Amazon EC2 P4d instance. Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
Pose
See Pose Docs for usage examples with these models.
| Model | size (pixels) | mAP<sup>pose</sup> 50-95 | mAP<sup>pose</sup> 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n-pose | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| YOLOv8s-pose | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| YOLOv8m-pose | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| YOLOv8l-pose | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| YOLOv8x-pose | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| YOLOv8x-pose-p6 | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
- mAP<sup>val</sup> values are for single-model single-scale on the COCO Keypoints val2017 dataset. Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- Speed averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| --- | --- | --- | --- |
| Label and export your custom datasets directly to YOLOv8 for training with Roboflow | Automatically track, visualize and even remotely train YOLOv8 using ClearML (open-source!) | Free forever, Comet lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions | Run YOLOv8 inference up to 6x faster with Neural Magic DeepSparse |
Experience seamless AI with Ultralytics HUB ⭐, the all-in-one solution for data visualization, YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly Ultralytics App. Start your journey for Free now!
We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our Contributing Guide to get started, and fill out our Survey to send us feedback on your experience. Thank you 🙏 to all our contributors!
YOLOv8 is available under two different licenses:
- AGPL-3.0 License: See LICENSE file for details.
- Enterprise License: Provides greater flexibility for commercial product development without the open-source requirements of AGPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at Ultralytics Licensing.
For YOLOv8 bug reports and feature requests please visit GitHub Issues, and join our Discord community for questions and discussions!
We added new features to YOLOv8, including:
- 🌚 **Night Vision**: YOLOv8 can now see in the dark! We added support for night vision cameras, which can be used to detect objects in low-light conditions.
- 🛣 **Lane Line Detection**: YOLOv8 can now detect lane lines on the road! This feature is useful for self-driving cars and other autonomous vehicles.
- 🔌 **SPI output**: YOLOv8 can now output data over SPI, which is useful for connecting to other devices such as Arduino boards, Raspberry Pis or ESP32s.
We will talk about each of these features in more detail below.
Unpack Night Vision
Note:
This feature is available for `detect` and `val` modes, only with `device=cpu`.
How does it work ⚙?
Gamma correction is pretty simple: it is just a non-linear transformation of the input image, used to adjust the overall brightness of the image.
The formula for gamma correction is:

`Image_out = Image_in ^ Gamma`

where `Image_out` is the output normalized image, `Image_in` is the input normalized image, and `Gamma` is the gamma value (power).
Note:
1- The value of `gamma` is between 0 and 1 (closer to 0 is brighter, closer to 1 is darker).
2- The values of both `Image_out` and `Image_in` pixels are between 0 and 1.
To make it clearer, let's take a look at the following example:
Let's assume we have an image with only one pixel, and its value is 0.8 (normalized value).
Now, let's apply gamma correction, `Image_out = Image_in ^ gamma`:

- when `gamma = 0`, then `Image_out = 1`
- when `gamma = 0.25`, then `Image_out = 0.94`
- when `gamma = 0.5`, then `Image_out = 0.89`
- when `gamma = 0.75`, then `Image_out = 0.84`
- when `gamma = 1`, then `Image_out = 0.8`
Let's plot the results:
Note:
Decreasing the value of `gamma` will make the image brighter.
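As a rough illustration (a minimal sketch, not necessarily the fork's exact implementation), gamma correction on an 8-bit image takes only a few lines of NumPy:

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float) -> np.ndarray:
    """Adjust brightness by raising the normalized pixels to the power `gamma`."""
    normalized = image.astype(np.float32) / 255.0  # Image_in: pixels scaled to [0, 1]
    corrected = np.power(normalized, gamma)        # Image_out = Image_in ^ gamma
    return (corrected * 255.0).round().astype(np.uint8)

# The single-pixel example above: 0.8 ** 0.25 ~= 0.9457
print(gamma_correct(np.array([[204]], dtype=np.uint8), 0.25))  # [[241]]
```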
How to use it 🚄?
There are some arguments added for the Night Vision feature:

- `night_vision`
- `image_gamma` (Optional)
  - if an integer or float number:
    - No need to pass any other parameters.
  - if `'auto'`, then you can pass the following parameters (all of them have values between 0 and 1):
    - 🟢 `min_normalized_intensity` (Optional)
    - 🟢 `min_gamma` (Optional)
    - 🟢 `max_gamma` (Optional)

These parameters are described below.

`night_vision` (default value is `False`)
If you use a camera in a dark environment, you may get poor results.
We added a preprocessing step that enhances image brightness; it may help you get better results in dark environments.
You can use Night Vision mode by adding the parameter `night_vision` as follows:
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' night_vision
```
and it has 3 modes:
1- Default mode:
By not passing the parameter `night_vision`, the original image is used as it is.
2- Night Vision mode (only applied to the input image):
`night_vision` or `night_vision=true`
It will apply night vision to the input image, pass it to the model and get the result based on the night-processed image, but the shown/saved image will be the original image without the night filter.
3- Night Vision mode (applied to both the input image and the shown/saved image):
`night_vision=show`
It will apply night vision to the input image, pass it to the model and get the result based on the night-processed image, and the shown/saved image will also be the night-processed image with the night filter applied.
An example to differentiate between Night Vision modes and their (Saved/Shown) results:
`image_gamma` (default value is `auto`)
The `image_gamma` parameter has 2 modes:
1- Fixed gamma value:
If you want to use a fixed gamma value (from 0 to 1), you can pass the `image_gamma` parameter as follows:
Note 1: 1 means no change in image brightness, 0 means a white image.
Note 2: a lower gamma value means more lightening is applied to the image (a brighter image).
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' night_vision=show image_gamma=0.8
```

It will use a fixed gamma value of 0.8.
2- Auto gamma value:
If you want to use an auto gamma value, you can pass the `image_gamma` parameter as follows:
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' night_vision=show image_gamma=auto
```

It will measure the normalized image intensity (a value from 0 to 1) that describes image brightness (closer to 0 means a dark image, closer to 1 means a bright image).

If it is less than `min_normalized_intensity` (default value is 0.25), meaning that the image is dark, then it will use a scaled gamma value (from `min_gamma` to `max_gamma`), based on image intensity, to enhance image brightness.
Note: `min_gamma` and `max_gamma` have default values of 0.6 and 1.0 respectively.

If it is greater than `min_normalized_intensity` (default value is 0.25), meaning that the image is bright, then it will use a fixed gamma value of 1 (no change in image brightness). This selection logic is sketched below.
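A minimal sketch, assuming the scaled gamma is linearly interpolated over the measured mean intensity (the fork's exact mapping may differ):

```python
import numpy as np

def auto_gamma(image: np.ndarray,
               min_normalized_intensity: float = 0.25,
               min_gamma: float = 0.6,
               max_gamma: float = 1.0) -> float:
    """Pick a gamma value from the image's mean brightness (illustrative only)."""
    intensity = image.astype(np.float32).mean() / 255.0  # normalized intensity in [0, 1]
    if intensity >= min_normalized_intensity:
        return 1.0  # bright image: gamma = 1, no change in brightness
    # Dark image: interpolate from min_gamma (pitch black) up to max_gamma (at the threshold)
    return min_gamma + (max_gamma - min_gamma) * intensity / min_normalized_intensity
```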
For a fully customized gamma value, you can pass `min_normalized_intensity`, `min_gamma` and `max_gamma` as follows:
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' night_vision=show image_gamma=auto min_normalized_intensity=0.5 min_gamma=0.8 max_gamma=1.0
```
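Assuming the fork forwards these new flags through the Python API the same way standard Ultralytics arguments are passed (an assumption; only the CLI form is documented here), the equivalent Python call might look like:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # hypothetical path to your trained weights
results = model.predict(
    source="https://ultralytics.com/images/bus.jpg",
    night_vision="show",           # night filter applied to input and shown/saved image
    image_gamma="auto",            # derive gamma from the measured image intensity
    min_normalized_intensity=0.5,  # images below this intensity are treated as dark
    min_gamma=0.8,
    max_gamma=1.0,
)
```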
An example showing the effect of gamma (`image_gamma`) on image brightness:
Unpack Lane Line Detection
Lane Line Detection is a feature that can be used to detect lane lines in images and videos.
How does it work ⚙?
The algorithm is based on the following steps (a code sketch follows the list):
- Detecting edges using Canny Edge Detection.
- Applying a mask to the image to remove unnecessary parts.
- Applying Hough Transform to detect lines.
- Filtering the detected lines to get the left and right lane lines.
- Drawing the detected lane lines on the image.
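A compact sketch of this classic pipeline using OpenCV with the default parameter values listed below (illustrative only; the fork's step that filters segments into left and right lanes is simplified here to drawing all detected segments):

```python
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    """Canny edges + region-of-interest mask + Hough transform lane-line sketch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # CANNY_THRESHOLD_1, CANNY_THRESHOLD_2 defaults

    # Mask everything except a triangular region in front of the vehicle
    h, w = edges.shape
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, np.array([[(0, h), (w // 2, h // 2), (w, h)]], np.int32), 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: MIN_VOTES, MIN_LINE_LEN, MAX_LINE_GAP defaults
    lines = cv2.HoughLinesP(masked, 1, np.pi / 180, 100,
                            minLineLength=50, maxLineGap=10)

    # Draw segments with the default lane_line_color and lane_line_thickness
    overlay = frame.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(overlay, (x1, y1), (x2, y2), (243, 105, 14), 12)
    return overlay
```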
How to use it 🚄?
There are some arguments added for the Lane Line Detection feature:
- `lane_detection`:
  - if `lane_detection` is `True`, then it will apply lane line detection to the input image.
  - if `lane_detection` is `False`, then it will not apply lane line detection to the input image.

  Note: the default value is `False`.
- Optional parameters:
  - `CANNY_THRESHOLD_1`: The first threshold for the hysteresis procedure in Canny Edge Detection. (default value is 50)
  - `CANNY_THRESHOLD_2`: The second threshold for the hysteresis procedure in Canny Edge Detection. (default value is 150)
  - `MIN_VOTES`: The minimum number of votes (intersections in a Hough grid cell). (default value is 100)
  - `MIN_LINE_LEN`: The minimum number of pixels making up a line. (default value is 50)
  - `MAX_LINE_GAP`: The maximum gap between two points to be considered part of the same line. (default value is 10)
  - `lane_line_color`: The color of the detected lane lines. (default value is [243, 105, 14])
  - `lane_line_thickness`: The thickness of the detected lane lines. (default value is 12)
You can use Lane Line Detection by adding the parameter `lane_detection` as follows:
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' lane_detection
```
and it has 2 modes:
1- Default mode:
By not passing the parameter `lane_detection`, lane line detection is not applied to the input image.
2- Lane Line Detection mode:
`lane_detection` or `lane_detection=true`
It will apply lane line detection to the input image, pass it to the model and get the result based on the lane-line-detected image, and the shown/saved image will have the lane lines drawn on it.
An example to differentiate between Lane Line Detection modes and their (Saved/Shown) results:
Unpack SPI output
SPI output is a feature that can be used to output data over SPI, which is useful for connecting to other devices such as Arduino boards, Raspberry Pis or ESP32s.
How does it work ⚙?
SPI stands for Serial Peripheral Interface. It is a synchronous serial communication interface specification used for short-distance communication, primarily in embedded systems.
SPI devices communicate in full duplex mode using a master-slave architecture with a single master. The master device originates the frame for reading and writing. Multiple slave devices are supported through selection with individual slave select (SS) lines.
How to use it 🚄?
There are some arguments added for the SPI output feature:
- `spi`:
  - if `spi` is `True`, then it will output data over SPI.
  - if `spi` is `False`, then it will not output data over SPI.

  Note: the default value is `False`.
- Optional parameters (a configuration sketch follows this list):
  - `spi_mode`: The SPI mode to use. (default value is 3)
  - `spi_speed`: The SPI clock speed to use. (default value is 2000000)
  - `spi_sleep`: The SPI delay to use. (default value is 0)
  - `spi_port`: The SPI port to use. (default value is 0)
  - `spi_device`: The SPI device to use. (default value is 0)
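For context, here is a minimal sketch of applying these settings on a Raspberry Pi with the `spidev` Python package (illustrative; the payload format is hypothetical, as the fork's wire format is not described above):

```python
import time
import spidev  # Linux userspace SPI bindings, e.g. on a Raspberry Pi

spi = spidev.SpiDev()
spi.open(0, 0)                # spi_port=0, spi_device=0 (the defaults above)
spi.mode = 3                  # spi_mode=3: clock idles high, data sampled on the second edge
spi.max_speed_hz = 2_000_000  # spi_speed=2000000

def send_detection(class_id: int, sleep_s: float = 0.0) -> None:
    """Send one detected class id as a single byte (hypothetical payload)."""
    spi.xfer2([class_id & 0xFF])
    if sleep_s:
        time.sleep(sleep_s)   # spi_sleep: optional delay between transfers
```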
You can use SPI output by adding the parameter `spi` as follows:
Using CLI:

```bash
yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' spi
```
and it has 2 modes:
1- Default mode:
By not passing the parameter `spi`, no data is output over SPI.
2- SPI output mode:
`spi` or `spi=true`
It will output data over SPI.