
12.1 Seminar Review by zdh

Deep Image Prior

The article can be read here.

The authors argue that a great deal of image statistics is captured by the structure of a convolutional image generator rather than by any learned capability.

As a result, they come up with a new kind of generative model that is simple but competitive: take a convolutional network \(f_\theta\) with randomly initialized weights \(\theta\) and fit it to a single degraded image \(x_0\). By optimizing

\[
\theta^* = \arg\min_\theta E(f_\theta(z);\, x_0), \qquad x^* = f_{\theta^*}(z),
\]

where \(z\) is a fixed random input and \(E\) is a task-dependent data term, they can accomplish many image-processing tasks, such as denoising and super-resolution, very well.

What's more, the researchers find that during optimization the model reproduces the natural image first and only later overfits to the noise, so it is reasonable to use early stopping to get more natural results.
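To make the idea concrete, here is a minimal denoising sketch in PyTorch. The tiny three-layer network, tensor sizes, learning rate, and step count are my own illustrative assumptions, not the encoder-decoder architecture or hyperparameters from the paper:

```python
import torch
import torch.nn as nn

# A small randomly initialized conv net stands in for the paper's
# hourglass generator (this toy architecture is an assumption).
net = nn.Sequential(
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

x0 = torch.rand(1, 3, 64, 64)   # stand-in for the degraded image x_0
z = torch.rand(1, 32, 64, 64)   # fixed random input tensor z
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):        # keep the run short: early stopping
    opt.zero_grad()
    loss = ((net(z) - x0) ** 2).mean()   # E(f_theta(z); x_0) for denoising
    loss.backward()
    opt.step()

x_star = net(z).detach()        # restored image x* = f_{theta*}(z)
```

Note that no training data is involved at all: the only "prior" is the network structure itself, and stopping after a moderate number of steps is what keeps the output natural.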

Pros

  • does not need pre-training
  • surpasses many mainstream image-processing methods

Cons

  • computationally heavy
  • needs more varied experiments

Further Thinking

I am very curious about how this model would behave if we used different kinds of networks, or applied some optimization tricks to it.

Dynamic Routing Between Capsules

The article can be read here.

To address the drawbacks of CNNs, Hinton and his collaborators propose a new kind of network called CapsNet, which has two main noteworthy points:

  • Replacing the scalar-output feature detectors of CNNs with vector-output capsules (which is said to make some biological sense)
  • Replacing max-pooling with "routing-by-agreement"

The novel dynamic routing acts as a positive-feedback process: coupling coefficients are repeatedly updated according to the agreement between lower-level predictions and higher-level outputs, so that one vector in the DigitCaps layer grows much longer than the others (meaning it represents the most likely class). Compared to traditional neural networks, whose activations are computed simply from fixed weights, this is more expressive.
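As a rough illustration of how routing-by-agreement produces this positive feedback, here is a minimal NumPy sketch under my own assumptions (the capsule counts, dimensions, and toy inputs are illustrative; `u_hat` plays the role of the prediction vectors \(\hat{u}_{j|i}\) from the paper):

```python
import numpy as np

def squash(s, eps=1e-8):
    """Nonlinearity that keeps a vector's direction but maps its length into [0, 1)."""
    sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement; u_hat has shape (num_in, num_out, dim_out)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                            # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax over outputs
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per output capsule
        v = squash(s)                                          # output capsule vectors
        b = b + np.einsum('ijd,jd->ij', u_hat, v)              # agreement feeds back into b
    return v

# Toy example: 8 input capsules routed to 3 output capsules of dimension 4.
u_hat = np.random.default_rng(0).normal(size=(8, 3, 4))
v = dynamic_routing(u_hat)
print(np.linalg.norm(v, axis=-1))   # vector lengths, read as class likelihoods
```

Predictions that agree with an output capsule increase its routing logits, which in turn concentrates more of the input on that capsule in the next iteration; this is exactly the positive-feedback effect that makes one DigitCaps vector dominate.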

I think Hinton is one of the great scientists of our time. While others are content with a "black box" whose inner workings they cannot explain, he, though in his seventies, is still fighting at the frontier, trying to gain more insight into deep learning.

However, the researchers use only a few simple experiments to show that CapsNet does better than CNNs, which is not very persuasive. I hope we can see more evidence in the future.