paper summary: “Perceiver IO: A General Architecture for Structured Inputs & Outputs”

arxiv: https://arxiv.org/abs/2107.14795 Key points: Developing upon the Perceiver idea, Perceiver IO proposes a Perceiver-like structure where the output size can be much larger while keeping the overall complexity linear. (Check out the summary on Perceiver here.) As with the Perceiver, this work uses a latent array to store the input information and runs it through multiple self Read more…
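To make the linear-complexity decode concrete, here is a minimal PyTorch sketch (my own illustration, not the authors' code): an output query array cross-attends to the latent array, so the cost scales linearly with the number of outputs. All dimensions and the `PerceiverIODecoder` name are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Sketch of the Perceiver IO decode step: a query array of length O
# cross-attends to the latent array of length N, so the cost is O(O * N),
# linear in output size rather than quadratic in input or output length.
class PerceiverIODecoder(nn.Module):
    def __init__(self, latent_dim=512, query_dim=256, out_dim=10, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=query_dim, num_heads=heads,
            kdim=latent_dim, vdim=latent_dim, batch_first=True)
        self.proj = nn.Linear(query_dim, out_dim)

    def forward(self, queries, latents):
        # queries: (B, O, query_dim), one query per desired output element
        # latents: (B, N, latent_dim), the processed latent array
        attended, _ = self.attn(queries, latents, latents)
        return self.proj(attended)  # (B, O, out_dim)

decoder = PerceiverIODecoder()
latents = torch.randn(2, 128, 512)   # small latent array
queries = torch.randn(2, 1000, 256)  # much larger structured output
print(decoder(queries, latents).shape)  # torch.Size([2, 1000, 10])
```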

paper summary: “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows”

arxiv: https://arxiv.org/abs/2103.14030 Key points: Multi-scale feature extraction, which can be thought of as an adoption of the FPN idea. Restricts the transformer operation to within each window rather than the entire feature map → keeps the overall complexity linear instead of quadratic. Applies shifted windows to allow inter-window interaction. Fuses relative position information in Read more…
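A minimal PyTorch sketch of the windowing idea (shapes and the helper name `window_partition` are my assumptions, not the official code): attention runs inside each window, and a cyclic shift before partitioning lets the next block mix information across window boundaries.

```python
import torch

# Partition a (B, H, W, C) feature map into non-overlapping ws x ws windows
# so self-attention can be computed per window instead of over all H*W tokens.
def window_partition(x, ws):
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

x = torch.randn(1, 8, 8, 96)          # small feature map
windows = window_partition(x, ws=4)   # attention runs inside each window
print(windows.shape)                  # torch.Size([4, 16, 96])

# Shifted window: cyclically roll by half the window size before partitioning,
# so the new windows straddle the old boundaries (the real model also masks
# attention across the wrapped-around edges).
shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))
print(window_partition(shifted, ws=4).shape)  # torch.Size([4, 16, 96])
```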

paper summary: “VarifocalNet: An IoU-aware Dense Object Detector” (VFNet)

arxiv: https://arxiv.org/abs/2008.13367 Key points: Another anchor-free, point-based object detection network. Introduces a new loss, the varifocal loss, which is forked from the focal loss with some changes to further compensate for the positive/negative imbalance. Instead of predicting the classification and IoU scores separately, this work predicts a single scalar which Read more…
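For reference, the varifocal loss as written in the paper, in a minimal PyTorch sketch (the `alpha`/`gamma` defaults follow the paper; the clamping and the detached weight are my own choices for numerical stability):

```python
import torch

# Varifocal loss per the paper:
#   positives (q > 0): -q * (q*log(p) + (1-q)*log(1-p))   (BCE weighted by q)
#   negatives (q = 0): -alpha * p**gamma * log(1-p)       (focal down-weighting)
# p is the predicted IoU-aware classification score; q is its target
# (the IoU with ground truth for positives, 0 for negatives).
def varifocal_loss(p, q, alpha=0.75, gamma=2.0, eps=1e-6):
    p = p.clamp(eps, 1 - eps)
    bce = -(q * p.log() + (1 - q) * (1 - p).log())
    pos = q > 0
    weight = torch.where(pos, q, alpha * p.detach() ** gamma)
    return (weight * bce).sum()

p = torch.tensor([0.9, 0.2, 0.1])  # predicted scores
q = torch.tensor([0.8, 0.0, 0.0])  # IoU targets (0 = negative)
print(varifocal_loss(p, q))
```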

paper summary: “Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization”

arxiv: https://arxiv.org/pdf/1703.06868.pdf Key points: Arbitrary style transfer in real time. Uses adaptive instance normalization (AdaIN) layers, which align the mean and variance of the content features to those of the style features. Allows control of the content-style trade-off, style interpolation, and color/spatial controls. Previous works take an optimization approach, backpropagating through a network to minimize the style loss and content loss. This can be Read more…
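The AdaIN layer itself is only a few lines; a minimal PyTorch sketch matching the paper's formula (shapes assumed, no learned parameters):

```python
import torch

# AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y)
# where mu/sigma are per-channel mean/std over each feature map's
# spatial dimensions: the content statistics are replaced by the style's.
def adain(content, style, eps=1e-5):
    # content, style: (B, C, H, W) feature maps from an encoder (e.g. VGG)
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu

content = torch.randn(1, 512, 32, 32)
style = torch.randn(1, 512, 32, 32)
print(adain(content, style).shape)  # torch.Size([1, 512, 32, 32])
```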

few-shot learning: good articles

https://towardsdatascience.com/advances-in-few-shot-learning-a-guided-tour-36bc10a68b77 A great brief summary of matching networks, prototypical networks, and model-agnostic meta-learning (MAML). The first two topics are well explained, but the MAML section needs a lot more thinking to understand. Also, I think MAML is closer to the topic of meta-learning than to few-shot learning. https://arxiv.org/pdf/2008.06365.pdf “An Overview of Deep Read more…

paper summary: “Aggregated Residual Transformations for Deep Neural Networks” (ResNeXt paper)

Key points: Compared to ResNet, the residual blocks are upgraded to have multiple “paths”, or as the paper puts it, higher “cardinality”, which can be treated as another model architecture design hyperparameter. ResNeXt architectures with sufficient cardinality show improved performance. tldr: use improved residual blocks compared to ResNet. Different Read more…
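A minimal PyTorch sketch of a ResNeXt bottleneck block (channel sizes assumed from the paper's ResNeXt-50 32x4d setting); the parallel paths are implemented here as one grouped convolution, which the paper shows is an equivalent form:

```python
import torch
import torch.nn as nn

# ResNeXt bottleneck: 1x1 reduce -> grouped 3x3 (groups = cardinality,
# i.e. the parallel "paths") -> 1x1 expand, with a residual connection.
class ResNeXtBlock(nn.Module):
    def __init__(self, in_ch=256, width=128, out_ch=256, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

block = ResNeXtBlock()
print(block(torch.randn(1, 256, 14, 14)).shape)  # torch.Size([1, 256, 14, 14])
```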

paper review: “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”

arxiv: https://arxiv.org/pdf/1905.11946.pdf Key points: Proposes a ‘compound scaling method’ that scales width/depth/resolution all together, an efficient scaling method that can be applied to any existing structure. Introduces a new family of baseline structures called ‘EfficientNets’. The very smallest baseline structure was found by the authors through NAS, and then the Read more…
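A tiny sketch of the compound scaling rule (the constants are the paper's reported α, β, γ for EfficientNet-B0; the base dimensions in the example are made up):

```python
# Compound scaling: fix alpha, beta, gamma by a small grid search under
# alpha * beta**2 * gamma**2 ~= 2, then scale depth d = alpha**phi,
# width w = beta**phi, resolution r = gamma**phi with one coefficient phi,
# roughly doubling FLOPs per unit of phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth, base_width, base_res):
    return (round(base_depth * ALPHA ** phi),   # layers per stage
            round(base_width * BETA ** phi),    # channels
            round(base_res * GAMMA ** phi))     # input resolution

for phi in range(4):
    print(phi, compound_scale(phi, base_depth=16, base_width=32, base_res=224))
```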