When applying deep learning to open-set problems, detecting unknown samples is crucial for ensuring a model's robustness. Numerous methods aim to distinguish known from unknown samples by analyzing different patterns produced by the model. Among these, methods that rely only on the model's output are the most widely applicable and practical for pre-trained models. Beyond such detectors, out-of-distribution (OOD) detection can be further enhanced by calibrating or transforming logit scores. In this study, we propose two logit-transformation approaches for out-of-distribution detection. The first is based on the likelihood of the logits under a Gaussian distribution. We then extend the method to a multivariate perspective, using a mixture of Gaussian distributions to obtain better score disentanglement for traditional out-of-distribution detection methods. Our approaches were evaluated in various multi-class classification scenarios. The results show that our Gaussian-based logit transformation improves AUROC by up to 11% and FPR95 by up to 32.6% compared to other methods.
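To illustrate the general idea behind likelihood-based logit scoring (not the paper's exact formulation, whose details are not given in this abstract), the following minimal sketch fits a diagonal Gaussian to in-distribution logits and scores test samples by their log-likelihood; all data here is synthetic and the variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for network logits: in-distribution samples cluster
# around 2.0, out-of-distribution samples around 0.0 (illustrative only).
id_logits = rng.normal(loc=2.0, scale=0.5, size=(500, 10))
ood_logits = rng.normal(loc=0.0, scale=1.0, size=(500, 10))

# Fit a diagonal Gaussian to the in-distribution logits.
mu = id_logits.mean(axis=0)
var = id_logits.var(axis=0) + 1e-6  # small epsilon for numerical stability

def gaussian_log_likelihood(x):
    """Per-sample log-likelihood under the fitted diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)

id_scores = gaussian_log_likelihood(id_logits)
ood_scores = gaussian_log_likelihood(ood_logits)

# In-distribution samples should score higher (more likely) on average,
# so thresholding the score separates known from unknown samples.
print(id_scores.mean() > ood_scores.mean())
```

A multivariate extension in the spirit the abstract describes could replace the diagonal Gaussian with a Gaussian mixture (e.g. `sklearn.mixture.GaussianMixture` and its `score_samples` method) fitted to the logit vectors.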