Neural Networks Can Automatically Adapt to Low-Dimensional Structure in Inverse Problems


Abstract: Machine learning methods are increasingly used to solve inverse problems, in which a signal must be estimated from a small number of measurements generated via a known acquisition procedure. While approaches based on neural networks perform well empirically, they come with limited theoretical guarantees. In particular, it is unclear whether neural networks can reliably exploit low-dimensional structure shared by the signals of interest — structure that would enable recovery even when the signal dimension far exceeds the number of available measurements. In this talk, I will present a positive resolution to this question for the special case of underdetermined linear inverse problems. I will show that, when trained with standard techniques and without explicit guidance, deep linear neural networks automatically adapt to underlying low-dimensional structure in the data, resulting in improved robustness to noise. These results shed light on why neural networks generalize well in practice: they naturally capture hidden patterns in the data.
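The phenomenon described in the abstract can be illustrated with a small numerical sketch (not the speaker's actual construction; all dimensions, initialization scales, and step sizes below are hypothetical choices for illustration). Signals are drawn from a low-dimensional subspace of a high-dimensional space, compressed by an underdetermined linear measurement operator, and a two-layer deep linear network is trained by plain gradient descent to recover them. With a small initialization, the learned end-to-end map concentrates on the signal subspace — its effective rank matches the subspace dimension — which is one concrete sense in which the network "adapts" to low-dimensional structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: signal dim n, measurements m < n,
# subspace dim k, hidden width h, training samples N.
n, m, k, h, N = 30, 10, 3, 20, 200

# Training signals lie in a random k-dimensional subspace of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
X = U @ rng.standard_normal((k, N))              # n x N signals
A = rng.standard_normal((m, n)) / np.sqrt(m)     # known measurement operator
Y = A @ X                                        # m x N measurements (underdetermined)

# Two-layer deep linear network f(y) = W2 @ W1 @ y, small initialization.
W1 = 1e-3 * rng.standard_normal((h, m))
W2 = 1e-3 * rng.standard_normal((n, h))

lr = 0.02
for _ in range(6000):
    E = W2 @ W1 @ Y - X                          # residual on the training set
    G2 = (2 / N) * E @ (W1 @ Y).T                # gradient of mean squared loss w.r.t. W2
    G1 = (2 / N) * W2.T @ E @ Y.T                # gradient w.r.t. W1
    W2 -= lr * G2
    W1 -= lr * G1

# Fresh test signals from the same subspace are recovered accurately.
X_test = U @ rng.standard_normal((k, 50))
X_hat = W2 @ W1 @ (A @ X_test)
rel_err = np.linalg.norm(X_hat - X_test) / np.linalg.norm(X_test)

# The learned end-to-end map has effective rank k: gradient descent never
# updates directions orthogonal to the data, so they stay near the tiny init.
svals = np.linalg.svd(W2 @ W1, compute_uv=False)
eff_rank = int((svals > 0.1).sum())
print(f"test relative error: {rel_err:.3f}, effective rank: {eff_rank}")
```

Because the end-to-end map annihilates measurement directions that carry no signal, noise in those directions is suppressed — a simple mechanism for the noise robustness mentioned in the abstract.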

Slides