I came across a paper suggesting DyT functions could replace traditional normalization methods. Surprisingly, using this normalization-free approach actually boosted performance in certain models. Does this sound practical enough for us to test it out?
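For context, if the paper in question is the Dynamic Tanh (DyT) formulation — roughly y = γ · tanh(α · x) + β with a learnable scalar α — a minimal PyTorch sketch of such a layer is below. This is a hedged illustration only: the class name, the α initialization of 0.5, and the per-channel parameter shapes are assumptions drawn from the paper's description, not an existing Ultralytics or YOLOv5 API.

```python
import torch
import torch.nn as nn


class DyT(nn.Module):
    """Sketch of a Dynamic Tanh (DyT) layer: y = gamma * tanh(alpha * x) + beta.

    NOTE: alpha_init=0.5 and the per-channel gamma/beta shapes are assumptions
    based on the paper's description, not a YOLOv5 implementation.
    """

    def __init__(self, num_channels: int, alpha_init: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), alpha_init))  # learnable scalar
        self.gamma = nn.Parameter(torch.ones(num_channels))  # per-channel scale
        self.beta = nn.Parameter(torch.zeros(num_channels))  # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is expected as (N, C, H, W); squash with tanh, then apply a
        # per-channel affine transform, analogous to a norm layer's affine step
        y = torch.tanh(self.alpha * x)
        return y * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)


# Hypothetical experiment: swap a BatchNorm2d for DyT inside a conv block
block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), DyT(16), nn.SiLU())
out = block(torch.randn(2, 3, 32, 32))
print(tuple(out.shape))
```

A simple way to test practicality would be to replace the normalization layers in one small model variant with a layer like this and compare training curves against the baseline.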
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question or feature suggestion, please provide as much information as possible, including links to relevant research (like the one you shared), implementation details, and any initial experiments or logs if available. Also, verify you are following our Tips for Best Training Results.
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
This is an automated response 🤖 — an Ultralytics engineer will review your suggestion and assist you soon!
Paper URL