MountainCar RL tips

When adding samples, I modified the reference code to exclude terminal-state samples, hoping this would simplify the batch creation process. However, this small change made a huge difference in training convergence: the modification never reached the total reward Read more…
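One plausible reason excluding terminal samples hurts convergence is that terminal transitions are the only ones whose target is the raw reward with no bootstrapped term, so they anchor the value estimates. A minimal numpy sketch of the standard DQN-style target, assuming a done-flag setup (the function name and signature are hypothetical, not from the post's reference code):

```python
import numpy as np

def dqn_targets(rewards, dones, next_q_max, gamma=0.99):
    # Standard DQN target: r + gamma * max_a' Q(s', a') for ordinary
    # transitions, but just r when the episode terminated (done == 1).
    # Dropping terminal samples from the batch removes these
    # bootstrap-free anchors entirely.
    return rewards + gamma * next_q_max * (1.0 - dones)
```

With `gamma=0.99`, a transition with reward 1.0, `done=0`, and `next_q_max=2.0` gets target `1 + 0.99 * 2 = 2.98`, while the same transition with `done=1` gets target `1.0` regardless of the next-state estimate.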

paper review: Explaining and Harnessing Adversarial Examples (FGSM adversarial attack)

paper link: https://arxiv.org/abs/1412.6572 This paper introduces the Fast Gradient Sign Method (FGSM) adversarial attack, along with some useful insights on why the linearity of deep networks enables such attacks. FGSM is regarded as the successor to the L-BFGS method for generating adversarial examples, and the two share similar ideas Read more…
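FGSM perturbs the input by a small step along the sign of the loss gradient with respect to the input, x_adv = x + ε·sign(∇ₓJ(x, y)). A self-contained numpy sketch on a toy logistic-regression model (not the paper's code; the model and names are illustrative only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # Toy binary model: p = sigmoid(w.x + b) with cross-entropy loss.
    # For this model the input gradient is dL/dx = (p - y) * w;
    # FGSM steps eps along its elementwise sign.
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

Because every coordinate moves by the full ε in the gradient's sign direction, the first-order loss increase scales with the L1 norm of the gradient, which is the paper's point about linearity: many small aligned perturbations add up.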

How to implement CTC loss using TensorFlow Keras (feat. CRNN example)

Code: using tensorflow 1.14 The tf.keras.backend.ctc_batch_cost function uses tensorflow.python.ops.ctc_ops.ctc_loss, which has a preprocess_collapse_repeated parameter. Some threads suggest that this parameter should be set to True when tf.keras.backend.ctc_batch_cost does not seem to work, e.g. when the loss fails to converge. However, my experience is that although setting this parameter to True Read more…
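For context on what that flag does: preprocess_collapse_repeated collapses runs of repeated labels in the ground truth before the CTC loss is computed, which is distinct from the CTC decoding mapping (collapse repeats, then drop blanks). A rough pure-Python sketch of both steps, to show why enabling the flag can silently corrupt labels that contain legitimate repeats:

```python
BLANK = 0  # assumed blank index for this sketch

def collapse_repeats(seq):
    # Collapse runs of identical labels, roughly what
    # preprocess_collapse_repeated does to the ground-truth labels.
    out = []
    for s in seq:
        if not out or s != out[-1]:
            out.append(s)
    return out

def ctc_decode_path(path, blank=BLANK):
    # The CTC "B" mapping applied to an alignment path:
    # collapse repeats first, then remove blanks.
    return [s for s in collapse_repeats(path) if s != blank]
```

For a label sequence like "hello" encoded as `[8, 5, 12, 12, 15]`, collapsing repeats yields `[8, 5, 12, 15]` ("helo"), so the loss is then trained against the wrong target, which is one reason blindly setting the flag to True is risky.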