Normalization technique reuses fixed denoising models across noise levels
A normalization technique enables denoising models trained at one noise level to handle unseen levels during iterative sampling: normalize the input before denoising, then denormalize the output afterward. A SwinIR network trained via Noise2Noise at a fixed σ=10 was inserted unchanged into the constrained iterative sampler of Kadkhodaie and Simoncelli. On Set12 images with 10% random inpainting, the method raised PSNR from 6.08 dB to 23.87 dB while using identical weights and training pairs.
TL;DR: To denoise at noise levels unseen during training, simply normalize / denoise / denormalize. The resulting equivariance shines for sampling!
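The normalize / denoise / denormalize recipe can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the sampler knows the current noise std `sigma`, that "normalize" means rescaling the input so its noise std matches the training level (σ=10 here, from the thread), and that `denoiser` stands in for the trained SwinIR network.

```python
import numpy as np

SIGMA_TRAIN = 10.0  # noise level the denoiser was trained at (per the thread)

def denoise_any_sigma(y, sigma, denoiser, sigma_train=SIGMA_TRAIN):
    """Apply a fixed-noise-level denoiser at an unseen noise level.

    Rescaling y by sigma_train / sigma turns noise of std `sigma` into
    noise of std `sigma_train`, which is what the denoiser expects;
    dividing the output by the same factor undoes the normalization.
    This relies on the denoiser being (approximately) scale-equivariant.
    """
    scale = sigma_train / sigma
    y_norm = y * scale            # normalize: noise std is now sigma_train
    x_norm = denoiser(y_norm)     # denoise at the training noise level
    return x_norm / scale         # denormalize back to the original scale
```

For an exactly scale-equivariant denoiser f (f(αy) = αf(y)), the wrapper reproduces f at every σ, which is the equivariance property the sampler exploits.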
@YoussefMMSaied (ICML 2026)
@sciences_UNIGE @UNIGEnews
1/ What happens when a denoiser trained at one noise level is reused inside an iterative sampler? We trained SwinIR with Noise2Noise at σ=10, then dropped it unchanged into the constrained sampler of @ZKadkhodaie & @EeroSimoncelli (NeurIPS ’21) for 10% random inpainting. Baseline SwinIR: 6.08 dB on Set12. SwinIR-WNE: 23.87 dB. Same backbone. Same N2N pairs. Same sampler. With François Fleuret @francoisfleuret. ICML 2026. 🧵