Yolov5 train sample #13510
base: master
Conversation
I have read the CLA Document and I sign the CLA. 1 out of 3 committers have signed the CLA.
Hello @wangxc2006, thank you for submitting a PR!
To support further review:
For more details, please check out our Contributing Guide. This is an automated response to guide the PR process. An Ultralytics engineer will review this in more detail shortly. Thank you for contributing to Ultralytics!
PR Summary
Made with ❤️ by Ultralytics Actions
Summary
This PR introduces support for TPU-specific model compilation and execution, enhancing training performance on specialized hardware.
Key Changes
- Adds tpu_mlir_jit, enabling model compilation and execution on TPU hardware.
- Adds a converter (fx2mlir) for converting PyTorch models into TPU-compatible formats.
- Uses the aot_autograd backend to enable ahead-of-time (AOT) module export and joint graph compilation for TPU acceleration.
- Registers a custom backend for torch.compile (see the sketch after this list).
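As context for the last item, below is a minimal sketch of how a custom backend can be plugged into torch.compile in stock PyTorch 2.x. The name my_tpu_backend and the eager fallback are illustrative assumptions, not the PR's actual tpu_mlir_jit / fx2mlir implementation, which would instead lower the captured FX graph to a TPU-compatible format.

```python
# Minimal sketch: registering a custom backend with torch.compile (PyTorch 2.x).
# `my_tpu_backend` is a hypothetical name; a real TPU backend would hand the
# FX graph to a converter (e.g. fx2mlir) and return the compiled callable.
import torch


def my_tpu_backend(gm: torch.fx.GraphModule, example_inputs):
    # Inspect the captured FX graph; compilation for the target would happen here.
    gm.graph.print_tabular()
    # Returning gm.forward falls back to eager execution, keeping the sketch runnable.
    return gm.forward


model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
compiled = torch.compile(model, backend=my_tpu_backend)
out = compiled(torch.randn(2, 8))  # first call triggers graph capture and the backend
```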
Purpose & Impact