Supervision Adaptation Balancing In-Distribution Generalization and Out-of-Distribution Detection
Zhilin Zhao, Longbing Cao, and Kun-Yu Lin. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
The discrepancy between in-distribution (ID) and out-of-distribution (OOD) samples can cause distributional vulnerability in
deep neural networks, which in turn yields high-confidence predictions for OOD samples. This is mainly due to the
absence of OOD samples during training, which leaves the network insufficiently constrained. To tackle this issue, several state-of-the-art
methods incorporate extra OOD samples into training and assign them manually defined labels. However, this practice can
introduce unreliable labeling, which degrades ID classification. Distributional vulnerability thus presents a critical challenge for
non-IID deep learning, which aims for OOD-tolerant ID classification by balancing ID generalization against OOD detection. In this paper,
we introduce a novel supervision adaptation approach that generates adaptive supervision information for OOD samples, making them
more compatible with ID samples. First, we measure the dependency between ID samples and their labels using mutual information,
revealing that the supervision information can be represented in terms of negative probabilities across all classes. Second, we
investigate data correlations between ID and OOD samples by solving a series of binary regression problems, with the goal of refining
the supervision information so that ID classes become more distinctly separable. Both steps are illustrated in the sketches following
the abstract. Our extensive experiments on four advanced network architectures, two ID datasets, and eleven diverse OOD datasets
demonstrate the efficacy of our supervision adaptation approach in improving both ID classification and OOD detection.
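
The abstract does not give the estimator, but the mutual-information step can be made concrete with the standard decomposition I(X; Y) = H(Y) - H(Y|X) computed from softmax outputs, paired with an OOD target that spreads negative probability mass over all classes, as the abstract describes. A minimal sketch in Python; the helper names and the scaling factor alpha are hypothetical and not taken from the paper:

```python
import numpy as np

def estimate_mutual_information(probs):
    """Estimate I(X; Y) = H(Y) - H(Y|X) from predicted class probabilities.

    probs: (n_samples, n_classes) softmax outputs for ID samples.
    Illustrative estimator; the paper's actual formulation may differ.
    """
    eps = 1e-12
    marginal = probs.mean(axis=0)                                        # empirical p(y)
    h_y = -np.sum(marginal * np.log(marginal + eps))                     # H(Y)
    h_y_given_x = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # H(Y|X)
    return h_y - h_y_given_x

def ood_supervision(n_classes, alpha=1.0):
    """Hypothetical OOD target: negative probability mass spread uniformly
    across all classes, per the abstract's description of the supervision."""
    return -alpha * np.ones(n_classes) / n_classes

probs = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10]])
print(estimate_mutual_information(probs))  # larger when predictions are confident and classes balanced
print(ood_supervision(3))                  # e.g. [-0.333, -0.333, -0.333]
```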
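
For the second step, one plausible reading of "a series of binary regression problems" is a per-class logistic probe separating each ID class from the OOD pool, whose scores could then refine the OOD supervision. This is a sketch under that assumption (feature-space inputs, one binary problem per class), not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_id_vs_ood_probes(features_id, labels_id, features_ood, n_classes):
    """Fit one binary logistic regression per ID class against OOD samples.

    features_id: (n_id, d) ID features; labels_id: (n_id,) class indices.
    features_ood: (n_ood, d) features of auxiliary OOD samples.
    Returns one fitted probe per class; illustrative reading of the
    'binary regression problems' mentioned in the abstract.
    """
    probes = []
    for c in range(n_classes):
        pos = features_id[labels_id == c]  # ID samples of class c
        X = np.vstack([pos, features_ood])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(features_ood))])
        probes.append(LogisticRegression(max_iter=1000).fit(X, y))
    return probes

rng = np.random.default_rng(0)
feats_id = rng.normal(size=(100, 8))
labels = rng.integers(0, 3, size=100)
feats_ood = rng.normal(loc=2.0, size=(50, 8))
probes = fit_id_vs_ood_probes(feats_id, labels, feats_ood, n_classes=3)
print(probes[0].predict_proba(feats_ood[:2]))  # P(class 0 vs OOD) for two OOD samples
```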
More information on non-IID learning is available on the non-IID learning webpage.