Neural networks are often deployed in safety-critical applications, yet they tend to produce overconfident predictions on ambiguous inputs. This deficiency points to a fundamental flaw: neural networks often overfit to spurious correlations. We address this limitation by presenting two novel objectives that improve out-of-distribution (OOD) detection. We empirically demonstrate that our methods outperform the baseline while remaining competitive with other approaches. We further demonstrate the robustness of our approach to common corruptions, and the importance of regularisation and auxiliary information in OOD detection.
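To make the task concrete, the following is a minimal sketch of a widely used OOD-detection baseline, maximum softmax probability (MSP): an input is flagged as out-of-distribution when the classifier's top softmax confidence is low. This is an illustrative baseline only, not the objectives proposed in the paper; the function names and the threshold value are assumptions for the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution."""
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    """Flag an input as OOD when its top confidence falls below the threshold.

    The threshold is illustrative; in practice it is tuned on held-out data.
    """
    return msp_score(logits) < threshold

# A confident prediction vs. a near-uniform (ambiguous) one.
confident = np.array([8.0, 0.1, 0.2])   # one logit dominates
ambiguous = np.array([1.0, 1.1, 0.9])   # nearly flat logits
print(is_ood(confident))  # False: high max probability
print(is_ood(ambiguous))  # True: flat softmax, low confidence
```

The overconfidence problem the abstract describes is precisely why such confidence-based scores can fail: a network may assign high softmax probability even to ambiguous or corrupted inputs, motivating additional regularisation and auxiliary information.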
CITATION STYLE
Mitros, J., & Mac Namee, B. (2021). On the Importance of Regularisation and Auxiliary Information in OOD Detection. In Communications in Computer and Information Science (Vol. 1517 CCIS, pp. 361–368). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-92310-5_42