F1-Score rises while Loss keeps increasing
I’ve recently run into a paradoxical situation while training a network to distinguish between two classes.
I’ve used cross entropy as my loss of choice. On my training set the loss steadily decreased while the F1-Score improved. On the validation set the loss decreased briefly before increasing and leveling off around ~2, normally a clear sign of overfitting. However, the F1-Score on the validation set kept rising and reached ~0.92, with similarly high precision and recall.
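
To illustrate what I mean by the two metrics moving in opposite directions, here is a minimal, self-contained sketch (not my actual training code, and the probabilities are made up): the F1-Score only sees the thresholded hard predictions, while cross entropy also penalizes how confident the few wrong predictions are, so a handful of very confident mistakes can drive the loss up even as F1 improves.

```python
import numpy as np
from sklearn.metrics import f1_score, log_loss

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])

# "Early": mildly confident predictions, two mistakes at a 0.5 threshold.
p_early = np.array([0.6, 0.7, 0.6, 0.4, 0.4, 0.3, 0.4, 0.6, 0.7, 0.3])

# "Late": one mistake fixed, the rest very confident and correct, but the
# single remaining mistake is made with extreme confidence.
p_late = np.array([0.95, 0.97, 0.96, 0.001, 0.03, 0.02, 0.04, 0.02, 0.98, 0.02])

for name, p in [("early", p_early), ("late", p_late)]:
    y_pred = (p >= 0.5).astype(int)  # hard labels used by F1
    print(name,
          "F1 =", round(f1_score(y_true, y_pred), 3),
          "log loss =", round(log_loss(y_true, p), 3))
```

Running this prints a higher F1 (~0.89 vs ~0.80) together with a higher cross entropy (~0.72 vs ~0.55) for the "late" case, which is qualitatively what I see between my training and validation curves.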