F1-Score rises while Loss keeps increasing
I’ve recently run into a paradoxical situation while training a network to distinguish between two classes.
I used cross entropy as my loss of choice. On the training set, the loss steadily decreased while the F1-Score improved. On the validation set, the loss decreased briefly before increasing and leveling off around ~2, normally a clear sign of overfitting. However, the F1-Score on the validation set kept rising and reached ~0.92, with similarly high precision and recall.
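A minimal sketch of how these two curves can diverge, using made-up toy probabilities rather than my actual validation data (plain NumPy, no training framework assumed): cross entropy keeps growing when the few remaining mistakes are made with ever higher confidence, while F1 only depends on the thresholded decisions and can still improve.

```python
import numpy as np

def binary_cross_entropy(y_true, p):
    # Mean negative log-likelihood of the predicted probabilities.
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def f1(y_true, p, threshold=0.5):
    # F1 only looks at the thresholded decisions, not at confidence.
    y_pred = (p >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# 90 positives followed by 10 negatives (hypothetical toy labels).
y_true = np.array([1] * 90 + [0] * 10)

# "Early" model: modestly confident, 15 misclassifications.
p_early = np.array([0.7] * 80 + [0.45] * 10 + [0.55] * 5 + [0.3] * 5)

# "Late" model: only 7 misclassifications, but the remaining mistakes
# are made with extreme (over)confidence, which cross entropy punishes hard.
p_late = np.array([0.999] * 85 + [1e-4] * 5 + [1 - 1e-4] * 2 + [0.001] * 8)

for name, p in [("early", p_early), ("late", p_late)]:
    print(name,
          "loss =", round(binary_cross_entropy(y_true, p), 3),
          "F1 =", round(f1(y_true, p), 3))
```

Running this, the "late" model has the higher loss (~0.65 vs. ~0.42) but also the higher F1 (~0.96 vs. ~0.91), mirroring the pattern in the plots above: the network's decisions keep getting better while its miscalibrated confidence on the wrong examples drives the cross entropy up.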
Loss of Attribution
The digitalisation of the world and the Internet have changed the rules in many, if not all, fields. One major change I have only just realised is the loss of attribution. I have often made such errors of attribution myself when confronted with opinions I didn’t share. When it came to my personal views and the groups I feel associated with, I felt mistreated or misunderstood when people made general claims about such a group.
Before the Internet, it was relatively easy to attribute an action or statement to an organisation, such as a political party or a state.