
By Bayes' rule, the posterior probability of $y = 1$ can be expressed as:

$$P(y = 1 \mid \Phi(x)) = \frac{P(\Phi(x) \mid y = 1)\,\eta}{P(\Phi(x) \mid y = 1)\,\eta + P(\Phi(x) \mid y = -1)\,(1 - \eta)},$$

where $\eta := P(y = 1)$ is the class prior.

(Failure of OOD detection under invariant classifier) Consider an out-of-distribution input which contains the environmental feature: $\Phi_{\mathrm{out}}(x) = M_{\mathrm{inv}} z_{\mathrm{out}} + M_e z_e$, where $z_{\mathrm{out}} \perp\!\!\!\perp z_{\mathrm{inv}}$. Given the invariant classifier (cf. Lemma 2), the posterior probability for the OOD input is $p(y = 1 \mid \Phi_{\mathrm{out}}) = \sigma\!\left(2 p^\top z_e / \sigma_e^2 + \log \frac{\eta}{1 - \eta}\right)$, where $\sigma(\cdot)$ is the logistic function. Thus for arbitrary confidence $0 < c := P(y = 1 \mid \Phi_{\mathrm{out}}) < 1$, there exists $\Phi_{\mathrm{out}}(x)$ with $z_e$ such that $p^\top z_e = \frac{\sigma_e^2}{2} \log \frac{c(1 - \eta)}{\eta(1 - c)}$.

Proof. Consider an out-of-distribution input $x_{\mathrm{out}}$ with $M_{\mathrm{inv}} = \begin{bmatrix} I_{s \times s} \\ 0_{1 \times s} \end{bmatrix}$ and $M_e = \begin{bmatrix} 0_{s \times e} \\ p^\top \end{bmatrix}$; then the feature representation is $\Phi_{\mathrm{out}}(x) = \begin{bmatrix} z_{\mathrm{out}} \\ p^\top z_e \end{bmatrix}$, where $p$ is the unit-norm vector defined in Lemma 2.
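As a sanity check, the block-matrix construction in the proof ($M_{\mathrm{inv}}$ stacking an identity over a zero row, $M_e$ stacking zeros over $p^\top$) can be reproduced numerically. The dimensions and the random draws below are illustrative choices, not values from the paper:

```python
import numpy as np

# s = invariant-feature dimension, e_dim = environmental-feature dimension
# (illustrative values; any positive dimensions work the same way).
s, e_dim = 3, 2
rng = np.random.default_rng(0)

p = rng.normal(size=e_dim)
p /= np.linalg.norm(p)                                # unit-norm vector p (cf. Lemma 2)

M_inv = np.vstack([np.eye(s), np.zeros((1, s))])      # [I_{s x s}; 0_{1 x s}]
M_e = np.vstack([np.zeros((s, e_dim)), p[None, :]])   # [0_{s x e}; p^T]

z_out = rng.normal(size=s)
z_e = rng.normal(size=e_dim)

# Phi_out(x) = M_inv z_out + M_e z_e collapses to [z_out; p^T z_e].
phi = M_inv @ z_out + M_e @ z_e
expected = np.append(z_out, p @ z_e)
assert np.allclose(phi, expected)
```

The invariant block passes $z_{\mathrm{out}}$ through untouched, while the environmental block contributes only the scalar projection $p^\top z_e$ in the last coordinate.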

Then we have $P(y = 1 \mid \Phi_{\mathrm{out}}) = P(y = 1 \mid z_{\mathrm{out}}, p^\top z_e) = \sigma\!\left(2 p^\top z_e / \sigma_e^2 + \log \frac{\eta}{1 - \eta}\right)$, where $\sigma(\cdot)$ is the logistic function. Thus for arbitrary confidence $0 < c := P(y = 1 \mid \Phi_{\mathrm{out}}) < 1$, there exists $\Phi_{\mathrm{out}}(x)$ with $z_e$ such that $p^\top z_e = \frac{\sigma_e^2}{2} \log \frac{c(1 - \eta)}{\eta(1 - c)}$. ∎
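Taking the posterior to be $\sigma(2 p^\top z_e / \sigma_e^2 + \log \frac{\eta}{1-\eta})$, the closed form for $p^\top z_e$ can be verified numerically: substituting it back recovers exactly the target confidence $c$. The specific values of the prior $\eta$ and variance $\sigma_e^2$ below are illustrative assumptions:

```python
import math

# Illustrative values for the class prior eta = P(y = 1) and the variance
# sigma_e^2; these specific numbers are not from the paper.
eta, sigma_e2 = 0.3, 2.0

def posterior(p_dot_ze):
    """p(y = 1 | Phi_out) = sigma(2 p^T z_e / sigma_e^2 + log(eta / (1 - eta)))."""
    logit = 2.0 * p_dot_ze / sigma_e2 + math.log(eta / (1.0 - eta))
    return 1.0 / (1.0 + math.exp(-logit))

def required_projection(c):
    """The p^T z_e from the theorem that forces the posterior to equal c."""
    return 0.5 * sigma_e2 * math.log(c * (1.0 - eta) / (eta * (1.0 - c)))

# Any target confidence c in (0, 1) is attainable by some OOD input.
for c in (0.01, 0.5, 0.99):
    assert abs(posterior(required_projection(c)) - c) < 1e-9
```

The algebra behind the check: the $\sigma_e^2$ factors cancel and the prior-odds terms combine, leaving $\sigma(\log \frac{c}{1-c}) = c$.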

Remark: In a more general case, $z_{\mathrm{out}}$ can be modeled as a random vector which is independent of the in-distribution labels $y = 1$ and $y = -1$ and of the environmental features: $z_{\mathrm{out}} \perp\!\!\!\perp y$ and $z_{\mathrm{out}} \perp\!\!\!\perp z_e$. Thus in Eq. 5 we have $P(z_{\mathrm{out}} \mid y = 1) = P(z_{\mathrm{out}} \mid y = -1) = P(z_{\mathrm{out}})$. Then $P(y = 1 \mid \Phi_{\mathrm{out}}) = \sigma\!\left(2 p^\top z_e / \sigma_e^2 + \log \frac{\eta}{1 - \eta}\right)$, identical to Eq. 7. Thus our main theorem still holds under more general cases.

## Appendix B Extension: Color Spurious Correlation

To further validate our findings beyond background and gender spurious (environmental) features, we provide additional experimental results on the ColorMNIST dataset, as shown in Figure 5.

## Evaluation Task 3: ColorMNIST.

ColorMNIST is a variant of MNIST [lecun1998gradient] which composes colored backgrounds on digit images. In this dataset, $E = \{\text{red}, \text{purple}, \text{green}, \text{pink}\}$ denotes the background color and we use $Y = \{0, 1\}$ as in-distribution classes. The correlation between the background color $e$ and the digit $y$ is explicitly controlled, with $r \in \{0.25, \dots, 0.45\}$. That is, $r$ denotes the probability $P(e = \text{red} \mid y = 0) = P(e = \text{purple} \mid y = 0) = P(e = \text{green} \mid y = 1) = P(e = \text{pink} \mid y = 1)$, while $0.5 - r = P(e = \text{green} \mid y = 0) = P(e = \text{pink} \mid y = 0) = P(e = \text{red} \mid y = 1) = P(e = \text{purple} \mid y = 1)$. Note that the maximum correlation $r$ (reported in Table 4) is $0.45$. As ColorMNIST is relatively simple compared to Waterbirds and CelebA, further increasing the correlation results in less interesting environments where the learner can easily pick up the contextual information. For spurious OOD, we use digits $\{5, \dots\}$ with background colors red and green, which share overlapping environmental features with the training data. For non-spurious OOD, following common practice [MSP], we use the Textures [cimpoi2014describing], LSUN [lsun] and iSUN [xu2015turkergaze] datasets. We train a ResNet-18 [he2016deep], which achieves 99.9% accuracy on the in-distribution test set. The OOD detection performance is shown in Table 4.
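The label-color coupling described above can be sketched as a simple sampling procedure. This is a minimal illustration of the class-conditional color probabilities, not the paper's actual data-generation code; the value of $r$, the sample size, and the seed are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.45, 200_000                      # illustrative correlation and sample size

y = rng.integers(0, 2, size=n)            # in-distribution classes Y = {0, 1}
# Color indices: 0 = red, 1 = purple, 2 = green, 3 = pink.
# For y = 0: red/purple drawn with prob r each, green/pink with 0.5 - r each;
# for y = 1 the roles are swapped, so each row sums to 1.
probs = np.array([[r, r, 0.5 - r, 0.5 - r],
                  [0.5 - r, 0.5 - r, r, r]])
cum = np.cumsum(probs, axis=1)            # per-class CDF over the 4 colors
cum[:, -1] = 1.0                          # guard against floating-point drift

u = rng.random(n)
e = (u[:, None] > cum[y]).sum(axis=1)     # inverse-CDF sampling of the color index

# Empirically, P(e = red | y = 0) should be close to r.
frac_red_given_0 = np.mean(e[y == 0] == 0)
assert abs(frac_red_given_0 - r) < 0.01
```

At $r = 0.45$ the background color predicts the digit class almost perfectly, which is what makes the spurious OOD inputs (unseen digits with familiar background colors) difficult for a classifier that has picked up the contextual feature.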
