
Detecting Botox injection points in facial images #13505

Open
ebkablan opened this issue Feb 6, 2025 · 8 comments
Labels
detect (Object Detection issues, PRs) · question (Further information is requested)

Comments

ebkablan commented Feb 6, 2025

Search before asking

Question

I am currently working on detecting Botox injection points (very small objects) in facial images with YOLOv5. The images have a resolution of 6240×4160, and the injection points were labeled by an expert dermatologist with 40×40 bounding boxes. I have trained several YOLOv5 variants (nano, small, medium) from pretrained weights, but precision and recall remain low throughout training.

Since YOLOv5 automatically applies the autoanchor feature, I expected better performance. However, the detection results suggest that the model may struggle with small object detection or resolution scaling. I would appreciate any insights or recommendations on improving detection accuracy, such as potential adjustments to the model configuration, anchor tuning, or alternative training strategies.

Looking forward to your advice.

Additional

No response

ebkablan added the question (Further information is requested) label Feb 6, 2025
UltralyticsAssistant added the detect (Object Detection issues, PRs) label Feb 6, 2025
UltralyticsAssistant (Member) commented

👋 Hello @ebkablan, thank you for your interest in YOLOv5 🚀! Detecting small objects like Botox injection points can indeed be challenging. Please visit our ⭐️ Tutorials for guidance on custom data training and optimization techniques. For similar projects, you might find our Custom Data Training and Tips for Best Training Results pages particularly useful.

If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us investigate further.

If this is a ❓ Question or general request for advice, please include as much relevant detail as possible to help us assist you effectively, such as:

  • Example images with corresponding labels.
  • Your training settings (e.g., batch size, image size, epochs, augmentations).
  • Training logs, plots, or metrics (precision/recall, loss curves), if available.

Requirements

Ensure your environment meets the following:
Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To set up, simply:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 can be run in various verified environments that include pre-installed dependencies such as CUDA/CUDNN.

Status

If the YOLOv5 CI badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify the functionality of YOLOv5 features like training, validation, inference, export, and benchmarks across macOS, Windows, and Ubuntu.

This is an automated response, but don't worry—an Ultralytics engineer will also review your issue and provide further assistance soon! 😊

ebkablan commented Feb 6, 2025

[example facial image attached]

You can access an example image and its label here: MERVECELIK_5.txt. Training settings: batch size 4, image size 1280, epochs 300, augmentations hyp.scratch-low.yaml.

pderrenger (Member) commented

@ebkablan thanks for sharing the training details and sample image. For objects this small, try increasing the effective resolution (via cropping or tiling), and tailor your anchor settings manually to the 40×40 box size rather than relying solely on autoanchor; see our anchor-based detectors glossary (https://ultralytics.com/glossary/anchor-based-detectors).
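
As an illustration of the tiling suggestion above, here is a minimal sketch (not from this thread) that slices a high-resolution image into overlapping crops and remaps YOLO-format labels ("class cx cy w h", normalized) into each tile; the tile size, overlap, center-in-tile rule, and file paths are all illustrative assumptions:

# Minimal tiling sketch: crop overlapping tiles and remap YOLO labels.
from pathlib import Path
from PIL import Image

def tile_image(img_path, label_path, out_dir, tile=1280, overlap=128):
    img = Image.open(img_path)
    W, H = img.size
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        c, cx, cy, w, h = line.split()
        # convert normalized YOLO boxes to absolute pixel coordinates
        boxes.append((c, float(cx) * W, float(cy) * H, float(w) * W, float(h) * H))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    step = tile - overlap
    for y0 in range(0, H, step):
        for x0 in range(0, W, step):
            x1, y1 = min(x0 + tile, W), min(y0 + tile, H)
            tw, th = x1 - x0, y1 - y0
            # keep a box only if its center falls inside this tile
            kept = [f"{c} {(cx - x0) / tw:.6f} {(cy - y0) / th:.6f} {w / tw:.6f} {h / th:.6f}"
                    for c, cx, cy, w, h in boxes
                    if x0 <= cx < x1 and y0 <= cy < y1]
            if kept:  # skip empty tiles so background crops don't dominate
                stem = f"{Path(img_path).stem}_{x0}_{y0}"
                img.crop((x0, y0, x1, y1)).save(out / f"{stem}.jpg")
                (out / f"{stem}.txt").write_text("\n".join(kept) + "\n")

tile_image('full_res.jpg', 'full_res.txt', 'tiles/')  # hypothetical file names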

ebkablan commented Feb 7, 2025

Thanks for the feedback! Although the detection results look good at inference time, precision and recall remain very low during training and validation. Given that the objects are very small, are there threshold values (e.g., IoU, confidence, or anchor-related settings) I should adjust to improve performance? My image resolution is 6240×4160, the object size is fixed at 40×40, and I train with imgsz=1280.

pderrenger (Member) commented

For small 40×40 objects in high-res images (6240×4160) trained at imgsz=1280:

  1. Increase imgsz to 2560+ if your GPU permits, to preserve small-object detail (a sizing sketch follows this comment).
  2. Generate custom anchors specific to your 40×40 boxes using:

from utils.autoanchor import kmean_anchors
kmean_anchors(dataset='your_dataset.yaml', img_size=1280, n=9)

  3. Adjust IoU thresholds: lower iou_t in your hyp YAML (try 0.1) to better match small objects.
  4. Lower confidence thresholds: reduce the validation conf from 0.001 to 0.0001 in val.py.
  5. Use high-augmentation hyperparameters: switch to hyp.scratch-high.yaml for stronger scaling/translation.

Consider tiling your original images before resizing to maintain object visibility. For anchor tuning details, see our Anchor-Based Detectors Guide.
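
To see why point 1 matters, here is a quick back-of-the-envelope check (an editorial sketch using only the figures quoted in this thread):

# Effective object size after letterboxing a 6240x4160 image to a given imgsz.
# YOLOv5's finest detection head (P3) has stride 8, so an ~8 px object covers
# roughly one feature-map cell, which is consistent with the low recall.
for imgsz in (1280, 2560, 5120):
    scale = imgsz / 6240                  # long-side letterbox scale factor
    box = 40 * scale                      # 40 px box size after resizing
    print(f"imgsz={imgsz}: 40 px box -> {box:.1f} px (~{box / 8:.1f} P3 cells)")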

ebkablan commented Feb 8, 2025

After running this command:

optimal_anchors = kmean_anchors(r'D:\OneDrive\03_Research\14_YOLO_ES\yolov5\derma.yaml', img_size=2560, n=9)

I got the following output. Is there a problem?
AutoAnchor: Running kmeans for 9 anchors on 914 points...
AutoAnchor: WARNING switching strategies from kmeans to random init
100%|██████████| 1000/1000 [00:00<00:00, 2751.42it/s]
AutoAnchor: thr=0.25: 0.0000 best possible recall, 0.00 anchors past thr
AutoAnchor: n=9, img_size=1280, metric_all=0.038/0.157-mean/best, past_thr=nan-mean: 17,52, 61,92, 101,255, 362,446, 556,660, 673,689, 712,1019, 1108,1200, 1205,1236
[[  16.945   52.276]
 [  61.116   92.386]
 [ 100.62   255.41 ]
 [ 361.63   445.98 ]
 [ 556.04   659.86 ]
 [ 673.47   689.07 ]
 [ 712.31  1018.7  ]
 [1108.4   1199.9  ]
 [1205.3   1236.1  ]]

pderrenger (Member) commented

The anchor generation warning suggests potential label scaling issues. Since your objects are 40×40, ensure your labels are normalized (0-1) relative to the 6240×4160 image size, not absolute pixels. Your anchors ([[17, 52], ...]) appear far too large for 40×40 objects. Try:

kmean_anchors(dataset='derma.yaml', img_size=1280, n=9, gen=1000)  # match the training image size

If anchors remain mismatched, you might benefit from YOLOv8's anchor-free approach, which eliminates anchor tuning challenges. For details on anchor-free benefits, see YOLOv11 Anchor-Free Detector Guide.
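
To act on the normalization advice above, here is a minimal sanity-check sketch (the labels directory is a placeholder) that flags any label value outside [0, 1]. For a 40×40 box in a 6240×4160 image, the normalized width and height should be roughly 40/6240 ≈ 0.0064 and 40/4160 ≈ 0.0096.

# Flag YOLO label values outside the normalized [0, 1] range.
from pathlib import Path

bad = []
for f in Path('path/to/labels').glob('*.txt'):  # placeholder directory
    for i, line in enumerate(f.read_text().splitlines(), 1):
        vals = [float(v) for v in line.split()[1:]]  # skip the class index
        if any(not 0.0 <= v <= 1.0 for v in vals):
            bad.append((f.name, i, vals))

print(f"{len(bad)} out-of-range label lines found")
for name, i, vals in bad[:10]:  # show the first few offenders
    print(name, i, vals)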

ebkablan commented

After finding the anchors, I place them into the three scales in the yolov5n.yaml file. Should I set noautoanchor=True when training the model? And is it possible to use the pretrained weights without recalculating the anchors?
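
For reference on the flag asked about above: YOLOv5's train.py provides a --noautoanchor option that disables the AutoAnchor check, so anchors placed by hand in the model YAML are left untouched, and pretrained weights can still be loaded alongside a custom --cfg. One possible invocation, with values mirroring those quoted in this thread:

python train.py --data derma.yaml --cfg models/yolov5n.yaml --weights yolov5n.pt \
    --img 2560 --batch-size 4 --epochs 300 --noautoanchor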
