paper review: “Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data” This work suggests that surrogate data need not be drawn from the original data distribution. The paper investigates whether we can train a data-generating network that produces synthetic data which effectively and efficiently teaches a target task to a learner, and proposes a new method to create synthetic Read more…

paper review: “High-Performance Large-Scale Image Recognition Without Normalization”

arxiv: key points introduce NFNets, which combine multiple ideas to avoid batch norm while achieving on-par performance. Beyond just stacking non-BN techniques, the paper introduces adaptive gradient clipping (AGC) to make training actually work well, reaching comparable results matching that of using Read more…
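The AGC idea can be sketched as: scale a gradient down whenever its norm is large relative to the norm of the weight it updates. A minimal NumPy sketch of a simplified, per-tensor version (the paper applies it unit-wise, e.g. per row of a weight matrix); `clip` and `eps` values are illustrative assumptions:

```python
import numpy as np

def adaptive_gradient_clip(grad, weight, clip=0.01, eps=1e-3):
    """Simplified per-tensor AGC sketch.

    Clips the gradient so its norm never exceeds `clip` times the
    norm of the corresponding weight. The weight norm is floored at
    `eps` so near-zero (freshly initialized) weights can still move.
    """
    w_norm = max(np.linalg.norm(weight), eps)
    g_norm = np.linalg.norm(grad)
    max_norm = clip * w_norm
    if g_norm > max_norm:
        return grad * (max_norm / g_norm)
    return grad

# A gradient that is large relative to its weight gets rescaled:
w = np.ones((4, 4))            # ||w|| = 4
g = np.full((4, 4), 10.0)      # ||g|| = 40
clipped = adaptive_gradient_clip(g, w, clip=0.01)
# ||clipped|| is now 0.01 * ||w|| = 0.04
```

The ratio test (gradient norm over weight norm) is what makes the clipping "adaptive": small layers and large layers each get a threshold proportional to their own scale.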

paper review: “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications”

arxiv: key points focus on optimizing small networks for latency. Use depthwise separable convolutions to reduce computation as much as possible; further reduce model size with width/resolution multipliers, at the cost of accuracy. A depthwise separable convolution is a combination of a depthwise convolution + a pointwise convolution. Read more…
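The computation saving from the depthwise + pointwise factorization can be checked with a few lines of arithmetic. A sketch, using the standard multiply-add counts (kernel size DK, input channels M, output channels N, output resolution DF):

```python
def conv_mults(dk, m, n, df):
    """Multiply-adds of a standard DKxDK convolution:
    DK*DK * M input channels * N output channels * DF*DF positions."""
    return dk * dk * m * n * df * df

def dw_separable_mults(dk, m, n, df):
    """Depthwise pass (DK*DK * M * DF*DF) plus
    pointwise 1x1 pass (M * N * DF*DF)."""
    return dk * dk * m * df * df + m * n * df * df

# Example layer shape (an assumption for illustration):
# 3x3 kernel, 512 -> 512 channels, 14x14 output feature map.
std = conv_mults(3, 512, 512, 14)
sep = dw_separable_mults(3, 512, 512, 14)
print(sep / std)  # ratio = 1/N + 1/DK^2 = 1/512 + 1/9, roughly 0.11
```

So with a 3x3 kernel the separable version costs roughly 8-9x fewer multiply-adds, which is where most of MobileNet's speedup comes from.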

paper review: “FastDepth: Fast Monocular Depth Estimation on Embedded Systems”


key points

  • model to predict a depth map
  • maximize speed by making the network as light as possible
  • focus not only on the encoder network but also on the decoder network for speed improvements
  • MobileNet for the encoder; nearest-neighbor interpolation + NNConv5 for the decoder
  • use skip connections
  • use depthwise separable convolutions wherever possible
  • apply network pruning
  • use the TVM compiler stack to optimize depthwise separable convolutions, which are not well optimized in popular DL frameworks
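The nearest-neighbor interpolation used in the decoder is the cheapest possible upsampling: each spatial value is simply repeated along both axes, with no learned parameters and no multiply-adds. A minimal NumPy sketch (the `(C, H, W)` layout and function name are assumptions for illustration):

```python
import numpy as np

def nn_upsample(x, scale=2):
    """Nearest-neighbor upsampling of a (C, H, W) feature map:
    repeat every value `scale` times along the H and W axes.
    Decoders like FastDepth's pair this cheap upsampling with a
    small convolution instead of an expensive transposed conv."""
    return x.repeat(scale, axis=1).repeat(scale, axis=2)

feat = np.arange(4, dtype=np.float32).reshape(1, 2, 2)
up = nn_upsample(feat)  # shape (1, 4, 4); each pixel becomes a 2x2 block
```

Because all the real computation then happens in a depthwise separable convolution after the upsample, the decoder stays light enough for embedded hardware.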