Deep Image Prior
This paper presents a learning-free image reconstruction method based on a neural network.
Instead of traditional regularization terms such as total variation (TV) or wavelet sparsity, the paper uses the structure of an untrained encoder-decoder network itself as the regularizer: the prior is that natural images are easier for such a network to reproduce than noise.
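As a concrete reference for the traditional regularizers this replaces, here is a minimal NumPy sketch of the anisotropic TV penalty (the function name and test images are my own illustration, not from the paper):

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences
    between horizontally and vertically adjacent pixels."""
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return dh + dv

# A flat image has zero TV; adding noise raises the penalty.
flat = np.ones((8, 8))
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((8, 8))
print(total_variation(flat))   # 0.0
print(total_variation(noisy))  # > 0
```

Minimizing a data-fit term plus this penalty is the classical approach; the paper's point is that the network architecture can play this regularizing role instead.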
Architecture: an encoder-decoder network similar to U-Net.
(+) To the best of my knowledge, this is the state-of-the-art method for reconstruction from a single image (it beats BM3D, DDTF, etc.). It is also surprising that it beats many state-of-the-art learning-based algorithms.
(-) The computational cost is very high, so the algorithm may not be practical.
Open questions:
- Better regularization term design: this paper gives us confidence that the network architecture itself can act as a prior.
- How can the encoder-decoder network be utilized after training?
- Faster algorithms, e.g., using an image generator.
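The method itself is just gradient descent on the weights of a randomly initialized network with a fixed random input, stopped early. A toy NumPy sketch of that optimization loop, with a tiny two-layer MLP and a 1-D signal standing in for the paper's U-Net and image (all sizes and the learning rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" noisy signal we want to fit (stands in for the corrupted image).
x = np.sin(np.linspace(0, 3, 32)) + 0.1 * rng.standard_normal(32)

# Fixed random input z, as in deep image prior: only the weights are optimized.
z = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((64, 16))
W2 = 0.1 * rng.standard_normal((32, 64))

def forward(W1, W2, z):
    a = W1 @ z
    h = np.maximum(a, 0.0)          # ReLU
    return W2 @ h, h, a

lr = 0.01
losses = []
for step in range(1000):
    y, h, a = forward(W1, W2, z)
    err = y - x
    losses.append(0.5 * float(err @ err))
    # Manual backprop through the two-layer net.
    gW2 = np.outer(err, h)
    gh = W2.T @ err
    gh[a <= 0] = 0.0                # ReLU gradient mask
    gW1 = np.outer(gh, z)
    W2 -= lr * gW2
    W1 -= lr * gW1

# With more parameters than data points the net eventually fits the noise
# too -- which is why the paper relies on early stopping.
print(losses[0], losses[-1])
```

This is only the skeleton of the idea; the actual method uses a convolutional encoder-decoder and stops the iteration before the noise is fitted.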
Dynamic Routing Between Capsules
Confidence: I am not sure I have fully understood this paper yet.
Motivation: max pooling loses information about positional relationships between features.
To the best of my knowledge, several recent works in image/video detection also exploit this kind of relational information:
- Relation Networks for Object Detection: https://arxiv.org/pdf/1711.11575 [uses self-attention for object detection]
- Non-local Neural Networks: https://arxiv.org/pdf/1711.07971
I need to reread the dynamic routing method and its motivation carefully!
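For my own reference, here is a minimal NumPy sketch of routing-by-agreement with the squash nonlinearity, as I understand it from the paper; the capsule counts and dimensions below are illustrative (the paper uses 3 routing iterations):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity: short vectors shrink toward zero,
    long vectors approach unit length; direction is preserved."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: (n_in, n_out, d) prediction vectors from lower-level capsules.
    Returns the (n_out, d) output capsules and the coupling coefficients."""
    n_in, n_out, d = u_hat.shape
    b = np.zeros((n_in, n_out))                    # routing logits
    for _ in range(n_iters):
        # softmax over output capsules: each input's couplings sum to 1
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c /= c.sum(axis=1, keepdims=True)
        s = (c[:, :, None] * u_hat).sum(axis=0)    # weighted sum of predictions
        v = squash(s)                              # (n_out, d) output capsules
        b += np.einsum('iod,od->io', u_hat, v)     # agreement update
    return v, c

rng = np.random.default_rng(0)
u_hat = rng.standard_normal((8, 4, 6))
v, c = dynamic_routing(u_hat)
# Each output vector has length < 1; each input's couplings sum to 1.
```

The key point for the motivation above: the coupling coefficients are computed by agreement between prediction and output vectors, not by a fixed max-pooling rule, so positional relationships can influence the routing.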
Interesting points:
- Adding a generator/decoder network as regularization (the reconstruction loss).
- Section 5.1 shows what the individual dimensions of a capsule represent.
- Better generalization: robustness to affine transformations.
I should also read the MultiMNIST results carefully!