Abstract: In many image reconstruction problems, we wish to estimate an image from linear projections, resulting in an ill-posed or underdetermined problem. Examples include deblurring, inpainting, compressed sensing reconstruction, and more. Traditionally, images are estimated in an optimization framework with a prespecified regularizer such as Tikhonov regularization or terms that promote sparsity in a known basis. More recent efforts focus on leveraging training images to learn a regularizer using deep neural networks.
In this talk, I will illustrate how the sample complexity of many approaches in this vein is dramatically higher than necessary. I will then describe a new approach to learning to solve inverse problems with much lower sample complexity. Our method leverages the Neumann operator expansion to yield a novel network architecture that sidesteps bottlenecks associated with “unrolled” optimization frameworks and yields compelling empirical results in a variety of settings.
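As background for the Neumann expansion mentioned above: when an operator $B$ has spectral radius less than one, $(I - B)^{-1} = \sum_{k=0}^{\infty} B^k$, so the least-squares solution $(X^\top X)^{-1} X^\top y$ can be approximated by a truncated series. The sketch below illustrates only this classical identity with a random well-conditioned problem (the matrix sizes, step size, and iteration count are illustrative choices, not from the talk); the talk's network architecture replaces pieces of this expansion with learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 10  # overdetermined toy problem (illustrative sizes)
X = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# Step size eta chosen so the spectral radius of B = I - eta * X^T X is < 1.
eta = 1.0 / np.linalg.norm(X.T @ X, 2)
B = np.eye(n) - eta * (X.T @ X)

# Truncated Neumann series:
#   x ≈ eta * sum_{k=0}^{K} B^k (X^T y)  →  (X^T X)^{-1} X^T y  as K → ∞.
term = eta * (X.T @ y)
x = term.copy()
for _ in range(500):
    term = B @ term
    x += term

# Reference least-squares solution for comparison.
x_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough terms, `x` matches the pseudoinverse solution `x_ls` to numerical precision; truncating the series after a few terms is what motivates using one network block per term.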