paper summary: “Aggregated Residual Transformations for Deep Neural Networks” (ResNeXt paper)

key point: compared to ResNet, the residual blocks are upgraded to aggregate multiple parallel “paths”; the number of paths, or as the paper puts it, the “cardinality”, can be treated as another model architecture design hyperparameter alongside depth and width. ResNeXt architectures with sufficient cardinality show improved performance. tl;dr: use the improved residual blocks in place of plain ResNet blocks; a sketch of one such block follows.
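The aggregated transformation is equivalent to a grouped convolution, so a block can be written compactly. Below is a minimal PyTorch sketch of a ResNeXt-style bottleneck block; the 256→128→256 widths with cardinality 32 follow the paper's “32×4d” template, but the class name and usage are my own illustrative choices:

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block whose parallel paths are expressed as one grouped conv.

    Illustrative sketch: cardinality = number of groups in the 3x3 conv.
    """
    def __init__(self, channels=256, bottleneck_width=128, cardinality=32):
        super().__init__()
        self.transform = nn.Sequential(
            # 1x1 conv: reduce channels into the bottleneck
            nn.Conv2d(channels, bottleneck_width, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            # grouped 3x3 conv == `cardinality` parallel transformation paths
            nn.Conv2d(bottleneck_width, bottleneck_width, kernel_size=3,
                      padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck_width),
            nn.ReLU(inplace=True),
            # 1x1 conv: restore channels
            nn.Conv2d(bottleneck_width, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection around the aggregated transformation
        return self.relu(x + self.transform(x))

block = ResNeXtBlock()
out = block(torch.randn(1, 256, 56, 56))
print(out.shape)  # torch.Size([1, 256, 56, 56])
```

Setting `groups=1` recovers an ordinary ResNet bottleneck, which makes the point concrete: cardinality is just one more knob next to depth and width.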

paper review: “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”

arxiv: https://arxiv.org/pdf/1905.11946.pdf key points: proposes a ‘compound scaling method’ that scales width, depth, and resolution together, an efficient scaling approach that can be applied to any existing architecture; introduces a new family of baseline architectures called ‘EfficientNets’. The smallest baseline was found by the authors through NAS, and the rest of the family was obtained by scaling it up with the compound method.
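Concretely, the compound rule scales depth, width, and resolution jointly with a single coefficient φ: depth = α^φ, width = β^φ, resolution = γ^φ, under the constraint α·β²·γ² ≈ 2 so FLOPs grow roughly as 2^φ. The sketch below uses the α, β, γ values reported in the paper; the `compound_scale` helper and the baseline numbers are illustrative assumptions (real implementations also round channel counts, e.g. to multiples of 8):

```python
import math

# Coefficients from the EfficientNet paper (found by a small grid search on
# the B0 baseline): alpha * beta^2 * gamma^2 ~= 2, so FLOPs grow ~2^phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale depth/width/resolution together with one coefficient phi."""
    depth = math.ceil(base_depth * ALPHA ** phi)             # layers per stage
    width = math.ceil(base_width * BETA ** phi)              # channels
    resolution = math.ceil(base_resolution * GAMMA ** phi)   # input image size
    return depth, width, resolution

# example: scaling a hypothetical baseline stage (values illustrative)
for phi in range(4):
    print(phi, compound_scale(phi, base_depth=3, base_width=32, base_resolution=224))
```

The design point is that a single φ replaces three independent hyperparameter searches: once α, β, γ are fixed on the small baseline, larger models come for free.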

paper review: “Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data”

arxiv: https://arxiv.org/abs/1912.07768 This work suggests that surrogate data need not be drawn from the original data distribution. It investigates whether we can train a data-generating network to produce synthetic data that effectively and efficiently teaches a target task to a learner, and proposes a new method to create synthetic training data; a toy version of the training loop is sketched below.
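The core mechanism is a bilevel loop: a generator emits synthetic batches, a learner takes differentiable inner-loop steps on them, and the learner's loss on real data is backpropagated through those steps to update the generator. Below is a heavily simplified PyTorch sketch of that idea, assuming `torch.func.functional_call` for the differentiable inner step; the toy model shapes, the single inner step, and the fixed random synthetic labels are illustrative assumptions (the paper's method also learns the labels and a curriculum):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

# Toy generator (noise -> synthetic input) and learner; sizes are illustrative.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 10))
learner = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
inner_lr = 0.1

def inner_then_outer(real_x, real_y):
    # 1) generator emits a synthetic batch (labels random in this toy version)
    noise = torch.randn(16, 8)
    synth_x = generator(noise)
    synth_y = torch.randint(0, 2, (16,))

    # 2) one differentiable inner SGD step of the learner on synthetic data
    params = dict(learner.named_parameters())
    inner_loss = F.cross_entropy(functional_call(learner, params, (synth_x,)), synth_y)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    new_params = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}

    # 3) outer loss: evaluate the updated learner on real data; the gradient
    #    flows back through the inner step into the generator's parameters
    outer_loss = F.cross_entropy(functional_call(learner, new_params, (real_x,)), real_y)
    gen_opt.zero_grad()
    outer_loss.backward()
    gen_opt.step()  # only the generator is updated in this sketch
    return outer_loss.item()

print(inner_then_outer(torch.randn(32, 10), torch.randint(0, 2, (32,))))
```

The key line is `create_graph=True`: it keeps the inner gradient step differentiable, which is what lets the real-data loss teach the generator what synthetic data a learner should be trained on.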