Deep learning tools are now widely used across many domains, driven by growing interest in applied machine learning. While these tools achieve excellent performance on prediction and classification tasks, they are often deployed as black-box inference engines without any precise measure of the uncertainty associated with their outputs. Uncertainty quantification is essential for ensuring reliability and robustness, particularly in safety-critical applications; however, accurately quantifying model (epistemic) uncertainty in machine learning-based regression and classification tasks is challenging. In this paper, we present an analytical approach to quantifying the epistemic uncertainty of deep neural network models using neural stochastic differential equations. Through experiments on synthetic data, we demonstrate that the proposed framework successfully represents uncertainty in deep neural network-based regression and classification without the computational complexity associated with the classical Monte Carlo dropout method.