Label-Noise Robust Diffusion Models
- Category
- Machine Learning
- Conference Name
- International Conference on Learning Representations (ICLR 2024)
- Presentation Date
- May 7-11, 2024
- City
- Vienna
- Country
- Austria
- File
- [ICLR24] Label_Noise_Robust_Diffusion_Models-camera_ready-ver-2.pdf (13.5M), 11 downloads, DATE: 2024-02-06 15:17:52
Byeonghu Na, Yeongmin Kim, HeeSun Bae, Jung Hyun Lee, Se Jung Kwon, Wanmo Kang, and Il-Chul Moon, "Label-Noise Robust Diffusion Models," International Conference on Learning Representations (ICLR 2024), Vienna, Austria, May 7-11, 2024.
Abstract
Conditional diffusion models have shown remarkable performance in various generative tasks, but training them requires large-scale datasets that often contain noise in conditional inputs, a.k.a. noisy labels. This noise leads to condition mismatch and quality degradation of generated data. This paper proposes Transition-aware weighted Denoising Score Matching (TDSM) for training conditional diffusion models with noisy labels, which is the first study in the line of diffusion models. The TDSM objective contains a weighted sum of score networks, incorporating instance-wise and time-dependent label transition probabilities. We introduce a transition-aware weight estimator, which leverages a time-dependent noisy-label classifier distinctively customized to the diffusion process. Through experiments across various datasets and noisy-label settings, TDSM improves the quality of generated samples aligned with the given conditions. Furthermore, our method improves generation performance even on prevalent benchmark datasets, which implies the potential applicability of label noise in conventional training of conditional diffusion models. Finally, we show the improved performance of TDSM on top of conventional noisy-label corrections, which empirically proves its contribution as a part of label-noise robust generative models.
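As a rough illustration of the idea, the sketch below shows one training step of a TDSM-style weighted denoising score matching loss under simplifying assumptions: a variance-preserving forward process with a linear beta schedule, a known class-conditional label transition matrix T, and a time-dependent classifier taken to approximate the clean-label posterior p_t(y | x_t). The names score_net, classifier, and T, and all hyperparameters, are illustrative placeholders, not the authors' implementation; the paper's transition-aware weight estimator is instead built from a classifier trained on noisy labels.

```python
import torch
import torch.nn.functional as F

def tdsm_style_loss(score_net, classifier, x0, y_noisy, T, n_timesteps=1000):
    """One stochastic estimate of a TDSM-style weighted DSM objective (sketch).

    score_net(x_t, t, y) -> predicted noise, conditioned on a clean label y.
    classifier(x_t, t)   -> logits over clean labels at diffusion time t
                            (assumed to approximate p_t(y | x_t)).
    T[y, y_tilde]        -> p(noisy label y_tilde | clean label y).
    x0: clean images (B, C, H, W); y_noisy: observed noisy labels (B,).
    """
    b, num_classes = x0.size(0), T.size(0)
    t = torch.randint(0, n_timesteps, (b,), device=x0.device)

    # Variance-preserving forward process with a linear beta schedule
    # (an assumption; the paper's exact schedule may differ).
    betas = torch.linspace(1e-4, 2e-2, n_timesteps, device=x0.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps

    # Transition-aware weights w(x_t, t, y, y~) = p_t(y | x_t, y~), via
    # Bayes' rule under class-conditional label noise:
    # w proportional to T[y, y~] * p_t(y | x_t), normalized over clean y.
    p_clean = classifier(x_t, t).softmax(dim=-1)        # (B, K)
    joint = p_clean * T[:, y_noisy].t()                 # (B, K)
    w = joint / joint.sum(dim=-1, keepdim=True)         # (B, K)

    # Weighted sum of per-class score-network outputs over all clean labels;
    # the regression target is the standard DSM target (the added noise).
    eps_hat = torch.zeros_like(x0)
    for y in range(num_classes):
        y_full = torch.full((b,), y, dtype=torch.long, device=x0.device)
        eps_hat = eps_hat + w[:, y].view(b, 1, 1, 1) * score_net(x_t, t, y_full)

    return F.mse_loss(eps_hat, eps)
```

The weighting mirrors the identity that the score conditioned on a noisy label is a posterior-weighted sum of clean-label conditional scores. Note that looping over all clean labels makes each step roughly K times more expensive than standard conditional score matching, which is why the weight estimator and how many classes are actually evaluated matter in practice.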