# Seminar 12.1 Review by csc

# Review of Deep Image Prior


## Pros

- This paper reveals a very interesting phenomenon: a deep convolutional network trained to reconstruct a corrupted input image tends to reconstruct the 'natural' content first and the 'corrupted' content later, so stopping the optimization after a suitable number of iterations yields a satisfying result.
- The results are comparable with state-of-the-art results produced by networks trained on large external image datasets, while this method **only uses the corrupted image itself**.
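The fit-the-signal-first, fit-the-noise-later behavior can be illustrated with a hypothetical 1-D toy (not the paper's CNN setup): gradient descent on an over-parameterized linear model whose low-frequency basis directions are scaled up, standing in for the architectural bias the paper attributes to convolutional networks. The error against the *clean* signal dips early and then rises as the model starts fitting the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 128
t = np.arange(n)
clean = np.sin(2 * np.pi * t / n)             # smooth "natural" signal
noisy = clean + 0.3 * rng.normal(size=n)      # corrupted observation

# Orthonormal DCT-II basis; column k is scaled by 1/(1+k) so gradient
# descent fits low frequencies (the signal) before high frequencies
# (mostly noise) -- a crude stand-in for a conv-net's prior.
k = np.arange(n)
B = np.cos(np.pi * (t[:, None] + 0.5) * k[None, :] / n)
B /= np.linalg.norm(B, axis=0)
Z = B / (1.0 + k)

w = np.zeros(n)
errs = []                                      # error against the CLEAN signal
for step in range(5000):
    pred = Z @ w
    w -= Z.T @ (pred - noisy)                  # gradient step on the noisy MSE
    errs.append(float(np.mean((pred - clean) ** 2)))

best = int(np.argmin(errs))                    # interior minimum: early stopping helps
```

In this toy, `best` falls well before the final iteration, which mirrors the review's point below about the missing stopping criterion: without access to `clean`, one cannot compute `errs` and locate the dip.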

## Cons

- The phenomenon is very interesting; it is a pity, however, that no practical method follows from it, owing to the long training time and the lack of a principled stopping criterion.
- I don't see any results of **other types of networks** applied to the same problems. Without ruling out alternative explanations for this phenomenon, the authors cannot defend the conclusion that this property is exclusive to deep CNNs.

# Review of CapsNet


## Pros

- In this paper the authors present a new concept, the "capsule": the scalar inputs and outputs of traditional neurons are replaced by vectors, and these vector-valued neurons are called **capsules**.
- A vector carries **much more information** than a scalar does. While the length of a vector can indicate a probability just as a scalar can, its direction can encode additional properties such as position and size. The authors show that the input handwritten digit image can be reconstructed from a single output vector.
- The authors claim that the so-called "dynamic routing by agreement" is a better substitute for the max-pooling operations in standard NN architectures, because the receptive field of a pooling operation is **fixed** by the structure of the network and cannot be trained.
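A minimal numpy sketch of the two mechanisms mentioned above — the squash nonlinearity, which keeps a vector's direction while mapping its length into [0, 1) so it can act as a probability, and routing by agreement, which replaces pooling with learned-on-the-fly coupling coefficients. Shapes and iteration count are simplified for illustration; this is not the paper's full CapsNet.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Keep direction, map length into [0, 1): v = (|s|^2 / (1+|s|^2)) * s/|s|."""
    sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing by agreement over prediction ("vote") vectors.

    u_hat: (num_in, num_out, dim_out) votes from each input capsule
    for each output capsule.  Returns output capsules (num_out, dim_out).
    """
    b = np.zeros(u_hat.shape[:2])                            # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over output capsules
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of votes
        v = squash(s)                                         # candidate output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)                # reward votes that agree
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(8, 3, 4)))               # 8 inputs, 3 outputs, dim 4
lengths = np.linalg.norm(v, axis=-1)                          # each length lies in [0, 1)
```

The agreement update is the key difference from pooling: a vote `u_hat[i, j]` that points in the same direction as the current output `v[j]` gets a larger coupling coefficient on the next iteration, so the "receptive field" is data-dependent rather than fixed by the architecture.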

## Cons

- The MNIST dataset may not be sufficient to demonstrate the power of capsules. I wonder how the model performs on more complicated datasets (for example, CIFAR-10).
- It seems that CapsNet demands a great deal of storage and computation when dealing with a large-scale dataset such as ImageNet.

## Questions

- I need to take a closer look at the dynamic routing method.
- The intercept (bias) term of traditional neurons seems to be missing in capsules. Why?