paper review: “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications”

arxiv: https://arxiv.org/pdf/1704.04861.pdf

key points

  • focus on optimizing for latency with small networks
  • use depthwise separable convolutions to reduce computation as much as possible
  • further reduce model size with the width/resolution multipliers, at the cost of accuracy

depthwise separable convolution

This is a combination of a depthwise convolution + a pointwise convolution. Read more…
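As a rough illustration (not the paper's code), here is a minimal PyTorch sketch of such a block: a 3×3 depthwise convolution (one filter per input channel) followed by a 1×1 pointwise convolution that mixes channels. The class name, channel counts, and BN/ReLU placement are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Sketch: depthwise conv (per-channel spatial filter) + pointwise 1x1 conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # pointwise: 1x1 conv that combines channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x

x = torch.randn(1, 32, 56, 56)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```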

paper review: “FastDepth: Fast Monocular Depth Estimation on Embedded Systems”

arxiv: https://arxiv.org/abs/1903.03273

key points

  • model to predict depth map
  • maximize speed by making the model as light as possible
  • focus not only on the encoder network but also on the decoder network for speed improvements
  • MobileNet for the encoder; nearest-neighbor interpolation + NNConv5 for the decoder, with skip connections (see the sketch after this list)
  • use depthwise separable convolutions wherever possible
  • do network pruning
  • use the TVM compiler stack to optimize depthwise separable convolutions, which are not well optimized in popular DL frameworks
(more…)
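As a sketch of what one such decoder block might look like, assuming NNConv5 is a 5×5 depthwise separable convolution followed by nearest-neighbor upsampling (layer ordering, names, and channel counts here are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NNConv5(nn.Module):
    """Decoder block sketch in the spirit of FastDepth's NNConv5:
    5x5 depthwise separable conv, then nearest-neighbor upsampling (x2).
    Illustrative reimplementation, not the authors' code."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 5, padding=2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn(self.pointwise(self.depthwise(x))))
        # nearest-neighbor interpolation doubles spatial resolution cheaply
        return F.interpolate(x, scale_factor=2, mode='nearest')

x = torch.randn(1, 1024, 7, 7)      # e.g. a MobileNet encoder output (assumed shape)
print(NNConv5(1024, 512)(x).shape)  # torch.Size([1, 512, 14, 14])
```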

paper review: “EfficientDet: Scalable and Efficient Object Detection”

arxiv link: https://arxiv.org/abs/1911.09070

key points

  • multi-scale feature fusion with a weighted bi-directional FPN (BiFPN)
  • model scaling: a compound scaling method that jointly scales up resolution/depth/width for the backbone, feature network, and box/class prediction networks
  • uses an EfficientNet backbone

Bi-directional FPN

Here are the key points of the bi-directional FPN: an enhancement of PANet with some modifications; remove nodes Read more…
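One concrete piece worth sketching is the BiFPN's weighted feature fusion. The "fast normalized fusion" described in the paper keeps each input's learnable weight non-negative via ReLU and normalizes by the sum of the weights plus a small epsilon. A minimal PyTorch sketch (class name and feature shapes are my own, not from the paper):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Sketch of fast normalized fusion at one BiFPN node:
    learnable non-negative weights, normalized to (roughly) sum to 1."""
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, inputs):
        # inputs: list of feature maps with identical shapes
        w = torch.relu(self.weights)   # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)   # normalize; eps avoids division by zero
        return sum(wi * x for wi, x in zip(w, inputs))

feats = [torch.randn(1, 64, 32, 32) for _ in range(2)]
print(WeightedFusion(2)(feats).shape)  # torch.Size([1, 64, 32, 32])
```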

good summary on normalization methods

link: https://towardsdatascience.com/different-normalization-layers-in-deep-learning-1a7214ff71d6

Covers Batch Normalization, Weight Normalization, Layer Normalization, Group Normalization, and Weight Standardization. Recently, Siyuan Qiao et al. introduced Weight Standardization in their paper “Micro-Batch Training with Batch-Channel Normalization and Weight Standardization” and found that group normalization, when combined with weight standardization, can outperform or perform equally well as BN even Read more…
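To make the weight standardization idea concrete: each output filter's weights are standardized to zero mean and unit variance before the convolution is applied, and the layer is typically paired with GroupNorm as in the article. A minimal PyTorch sketch, assuming a custom conv subclass (class name, epsilon, and group count are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization: standardize each output filter's
    weights (over in_channels x kH x kW) before convolving. Sketch only."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# typical pairing from the article: weight-standardized conv + GroupNorm
block = nn.Sequential(WSConv2d(32, 64, 3, padding=1, bias=False),
                      nn.GroupNorm(num_groups=8, num_channels=64),
                      nn.ReLU(inplace=True))
print(block(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```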