If you need a raw binary cross-entropy loss computed from predicted probabilities (the final sigmoid output) rather than from logits, with no reduction applied to the output, here's a code snippet for it.

import tensorflow as tf

def ce_loss(y_true, y_pred):
    # Element-wise binary cross-entropy; clip predictions to avoid log(0)
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1 - 1e-7)
    return -((1 - y_true) * tf.math.log(1 - y_pred) + y_true * tf.math.log(y_pred))
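As a quick sanity check of the formula itself, here is the same element-wise computation in plain NumPy (a sketch with made-up inputs; `ce_loss_np` is just an illustrative name, not part of the snippet above):

```python
import numpy as np

def ce_loss_np(y_true, y_pred):
    # Element-wise binary cross-entropy, no reduction
    return -((1 - y_true) * np.log(1 - y_pred) + y_true * np.log(y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8])
loss = ce_loss_np(y_true, y_pred)
# One loss value per element; a confident correct prediction
# (0.9 for a true label of 1) contributes -log(0.9), a small loss
```

Because there is no reduction, the output keeps the shape of the inputs; average it yourself (e.g. with a mean) if you need a scalar for training.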

