Penalizing small errors using an Adaptive Logarithmic Loss

Chaitanya Kaul, Nicholas Edwin Pears, Hang Dai, Roderick Murray-Smith, Suresh Manandhar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Loss functions are error metrics that quantify the difference between a prediction and its corresponding ground truth. Fundamentally, they define a functional landscape for traversal by gradient descent. Although numerous loss functions have been proposed to handle various machine learning problems, little attention has been given to enhancing these functions to better traverse the loss landscape. In this paper, we simultaneously and significantly mitigate two prominent problems in medical image segmentation, namely: i) class imbalance between foreground and background pixels, and ii) poor loss function convergence. To this end, we propose an adaptive logarithmic loss function. We compare this loss function with the existing state-of-the-art on the ISIC 2018 dataset, the nuclei segmentation dataset, and the DRIVE retinal vessel segmentation dataset. We measure the performance of our methodology on benchmark metrics and demonstrate state-of-the-art performance. More generally, we show that our system can be used as a framework for better training of deep neural networks.
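The abstract does not reproduce the loss formulation itself, so the short sketch below is only an illustrative guess at how a logarithmically smoothed segmentation loss of this kind could be written in PyTorch. The names dice_loss and log_smoothed_loss, the smooth parameter, and the log(cosh(.)) wrapper are assumptions introduced for illustration, not the authors' published definition.

# Illustrative sketch (not the paper's exact formulation): a soft Dice loss
# wrapped in a logarithmic smoothing term, so small errors are penalised on a
# flatter, better-conditioned part of the loss surface.
import torch

def dice_loss(pred, target, smooth=1.0):
    # Soft Dice loss for binary segmentation; pred holds probabilities in [0, 1].
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    intersection = (pred * target).sum(dim=1)
    dice = (2.0 * intersection + smooth) / (pred.sum(dim=1) + target.sum(dim=1) + smooth)
    return 1.0 - dice.mean()

def log_smoothed_loss(pred, target):
    # Hypothetical logarithmic wrapper: log(cosh(x)) behaves like x**2 / 2 for
    # small Dice errors and like |x| for large ones, one generic way to soften
    # gradients near convergence while the Dice term handles class imbalance.
    x = dice_loss(pred, target)
    return torch.log(torch.cosh(x))

if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)                      # predicted foreground probabilities
    target = (torch.rand(2, 1, 64, 64) > 0.9).float()    # sparse foreground mask
    print(float(log_smoothed_loss(pred, target)))

Under these assumptions, the Dice term addresses the foreground/background imbalance while the logarithmic wrapper flattens the loss near zero error, which is the general behaviour the abstract attributes to the proposed adaptive logarithmic loss.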
Original language: English
Title of host publication: 25th International Conference on Pattern Recognition
Subtitle of host publication: AIHA-2020 – ICPR International Workshop on Artificial Intelligence for Healthcare Applications
Publisher: Springer
Publication status: Published - 10 Jan 2021
Event: 25th International Conference on Pattern Recognition - Milan, Italy
Duration: 10 Jan 2021 - 15 Jan 2021
https://www.micc.unifi.it/icpr2020/

Conference

Conference: 25th International Conference on Pattern Recognition
Abbreviated title: ICPR 2020
Country/Territory: Italy
City: Milan
Period: 10/01/21 - 15/01/21
Internet address: https://www.micc.unifi.it/icpr2020/