Detecting Botox injection points in facial images #13505
Comments
👋 Hello @ebkablan, thank you for your interest in YOLOv5 🚀! Detecting small objects like Botox injection points can indeed be challenging. Please visit our ⭐️ Tutorials for guidance on custom data training and optimization techniques. For similar projects, you might find our Custom Data Training and Tips for Best Training Results pages particularly useful. If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us investigate further. If this is a ❓ Question or general request for advice, please include as much relevant detail as possible to help us assist you effectively.

Requirements
Ensure your environment meets the following:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Environments
YOLOv5 can be run in verified environments that include pre-installed dependencies like CUDA/CUDNN.

Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify the functionality of YOLOv5 features like training, validation, inference, export, and benchmarks across macOS, Windows, and Ubuntu. This is an automated response, but don't worry: an Ultralytics engineer will also review your issue and provide further assistance soon! 😊
You can access an example image and its label here: MERVECELIK_5.txt. Training settings: batch size 4, image size 1280, epochs 300, augmentations hyp-scratch-low.yaml.
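For reference, those settings would correspond to a training command roughly like the one below. This is a sketch, not a command taken from this thread: the `--data` yaml and `--weights` file are placeholders, and the hyperparameter file path assumes the standard YOLOv5 repo layout.

```bash
# Approximate YOLOv5 training command matching the settings described above
# (--data and --weights values are placeholders, not from this thread)
python train.py --img 1280 --batch 4 --epochs 300 \
    --hyp data/hyps/hyp.scratch-low.yaml \
    --data derma.yaml --weights yolov5n.pt
```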
@ebkablan thanks for sharing the training details and sample image. For such small objects, you might try increasing the effective resolution (via cropping or tiling) and manually tailoring your anchor settings to better match the 40×40 box size instead of relying solely on autoanchor; see our anchor-based detectors glossary (https://ultralytics.com/glossary/anchor-based-detectors). A rough size calculation is sketched below.
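To make the scale issue concrete, here is a minimal sketch (plain Python, using only the numbers already stated in this thread) of how large a 40×40 box becomes once a 6240×4160 image is letterboxed down to imgsz=1280:

```python
# Rough estimate of the effective size of a 40x40 px box after YOLOv5
# letterboxes a 6240x4160 image to imgsz=1280 (scaled by the longest side).
img_w, img_h = 6240, 4160   # original image size (from this thread)
box = 40                    # labeled box size in pixels (from this thread)
imgsz = 1280                # training image size (from this thread)

scale = imgsz / max(img_w, img_h)                     # ~0.205
print(f"scale factor: {scale:.3f}")
print(f"effective box size: {box * scale:.1f} px")    # ~8.2 px per side
```

An object of roughly 8 px per side covers about one grid cell on the stride-8 P3 output, which is why cropping or tiling before resizing tends to help more than anchor tuning alone.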
Thanks for the feedback! Although the detection results seem good during inference, the precision and recall values remain very low during training and validation. Given that the objects are very small, are there any threshold values (e.g., IoU, confidence, anchor-related settings) that I should consider adjusting to improve performance? My image resolution is 6240×4160, and the object size is fixed at 40×40. During training, I use imgsz=1280.
For small 40×40 objects in high-res images (6240×4160) trained at imgsz=1280, you can re-run anchor clustering against your dataset:

```python
from utils.autoanchor import kmean_anchors

# recompute 9 anchors for your dataset at the training image size
kmean_anchors(dataset='your_dataset.yaml', img_size=1280, n=9)
```

Consider tiling your original images before resizing to maintain object visibility; a rough tiling sketch follows below. For anchor tuning details, see our Anchor-Based Detectors Guide.
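For illustration, here is a minimal tiling sketch. It is not part of the YOLOv5 repo; the tile size, overlap, file paths, and output naming are assumptions, and labels are not handled:

```python
from pathlib import Path

import cv2  # assumes opencv-python is installed


def tile_image(img_path: Path, out_dir: str, tile: int = 1280, overlap: int = 128) -> None:
    """Crop one large image into overlapping square tiles.

    Labels are not handled here: YOLO boxes falling inside a tile must be
    re-expressed relative to that tile's origin and size in a separate step.
    """
    img = cv2.imread(str(img_path))
    h, w = img.shape[:2]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            # edge tiles may be smaller than `tile`; letterboxing handles that
            crop = img[y:min(y + tile, h), x:min(x + tile, w)]
            cv2.imwrite(str(out / f"{img_path.stem}_{x}_{y}.jpg"), crop)


# example usage (paths are placeholders)
tile_image(Path("images/face_001.jpg"), "tiles/", tile=1280, overlap=128)
```

A complete pipeline would also remap the label coordinates into each tile and drop or clip boxes cut by tile borders.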
After this command:
The anchor generation warning suggests potential label scaling issues. Since your objects are 40×40, ensure your labels are normalized (0-1) relative to the 6240×4160 image size, not given in absolute pixels; a small example follows below. Your anchors [[17,52], ...] appear too large for 40×40 objects. Try:

```python
kmean_anchors(dataset='derma.yaml', img_size=1280, n=9, gen=1000)  # match training imgsz
```

If anchors remain mismatched, you might benefit from YOLOv8's anchor-free approach, which eliminates anchor tuning challenges. For details on anchor-free benefits, see the YOLOv11 Anchor-Free Detector Guide.
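To illustrate the normalization point, here is a minimal sketch in plain Python. The image and box sizes come from this thread; the class id and box center position are made up for the example:

```python
# Convert a 40x40 px box centered at (x_px, y_px) in a 6240x4160 image into a
# normalized YOLO label line: "class x_center y_center width height" (all 0-1).
img_w, img_h = 6240, 4160
box_w = box_h = 40
x_px, y_px = 3120, 2080          # hypothetical box center in pixels

label = (
    f"0 {x_px / img_w:.6f} {y_px / img_h:.6f} "
    f"{box_w / img_w:.6f} {box_h / img_h:.6f}"
)
print(label)  # "0 0.500000 0.500000 0.006410 0.009615"
```

Note how small the normalized width and height are (well under 1% of the image), whereas a label file containing absolute pixel values like 40 would be misread by autoanchor and the dataloader.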
After finding the anchors, I place them into three scales in the yolov5n.yaml file. Should I set noautoanchor=True when training the model? Is it possible to use the pretrained models without recalculating the anchors? |
Search before asking
Question
I am currently working on detecting Botox injection points (very small objects) in facial images using the YOLOv5 model. The images have a resolution of 6240×4160, and the injection points were labeled by an expert dermatologist using 40×40 bounding boxes. I have trained different YOLOv5 versions (nano, small, medium) using pretrained models, but the precision and recall values remain low during training.
Since YOLOv5 automatically applies the autoanchor feature, I expected better performance. However, the detection results suggest that the model may struggle with small object detection or resolution scaling. I would appreciate any insights or recommendations on improving detection accuracy, such as potential adjustments to the model configuration, anchor tuning, or alternative training strategies.
Looking forward to your advice.
Additional
No response