Deep image prior
arXiv ID: 1711.10925
Image reconstruction tasks such as denoising and inpainting can be posed as an optimization problem, where the regularizer R(x) is difficult to design by hand:

min_x E(x; x0) + R(x)
In this paper, the authors find that natural-looking images are more likely to be generated by a network (perhaps one of a certain architecture, or trained with a certain optimization method), so R(x) can be represented implicitly by generating the image x through a network: set x = f_θ(z) for a fixed random input z and minimize the data term E(f_θ(z); x0) over the network parameters θ.
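A minimal toy sketch of this reparametrization, assuming a tiny hand-rolled two-layer net in place of the paper's encoder-decoder architecture: the "image" x is produced as f_θ(z) from a fixed random code z, and only θ = (W1, W2) is fit to the corrupted observation x0 by gradient descent on the data term.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                      # flattened "image" size (toy stand-in)
x0 = rng.normal(size=n)     # stand-in for a corrupted observation

z = rng.normal(size=16)                 # fixed random input z (never updated)
W1 = rng.normal(size=(32, 16)) * 0.1    # theta = (W1, W2), the only variables
W2 = rng.normal(size=(n, 32)) * 0.1

lr = 0.01
losses = []
for step in range(200):
    h = np.maximum(W1 @ z, 0.0)         # hidden layer with ReLU
    x = W2 @ h                          # generated image x = f_theta(z)
    r = x - x0                          # residual of the data term E(x; x0)
    losses.append(float(r @ r))
    # manual backprop through the two layers
    gW2 = np.outer(2 * r, h)
    gh = W2.T @ (2 * r)
    gW1 = np.outer(gh * (W1 @ z > 0), z)
    W2 -= lr * gW2
    W1 -= lr * gW1
```

In the paper the loop is stopped early: the network fits natural image structure before it fits noise, which is exactly what makes the architecture act as an implicit prior.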
- How do we decide when to stop training to get the best result?
- Why can't we also update the input vector z? (Perhaps because that could cause a large change in the output image?) (I suspect that if we could, the algorithm might learn faster, though I may be wrong.)
- What would happen if we only updated the last two layers of the net, or decayed the learning rate when the score begins to drop?
- Why are natural-looking images more likely to be generated by the net? Is there an intuitive interpretation?