Abstract

Tests of conditional independence (CI) of random variables play an important role in machine learning and causal inference. Of particular interest are kernel-based CI tests which allow us to test for independence among random variables with complex distribution functions. The efficacy of a CI test is measured in terms of its power and its calibratedness. We show that the Kernel CI Permutation Test (KCIPT) suffers from a loss of calibratedness as its power is increased by increasing the number of bootstraps. To address this limitation, we propose a novel CI test, called Self-Discrepancy Conditional Independence Test (SDCIT). SDCIT uses a test statistic that is a modified unbiased estimate of maximum mean discrepancy (MMD), the largest difference in the means of features of the given sample and its permuted counterpart in the kernel-induced Hilbert space. We present results of experiments that demonstrate SDCIT is, relative to the other methods: (i) competitive in terms of its power and calibratedness, outperforming other methods when the number of conditioning variables is large; (ii) more robust with respect to the choice of the kernel function; and (iii) competitive in run time.
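For context on the test statistic, a minimal sketch of the standard unbiased MMD² estimator is given below, using generic notation (X for the given sample, Y for its permuted counterpart, k for the kernel, m for the common sample size) that is not taken from the paper; SDCIT's statistic is described in the abstract as a modified unbiased MMD estimate, and that modification is not reproduced here.

% Standard unbiased MMD^2 estimator for two equal-size samples (Gretton et al.);
% notation is illustrative and does not follow the paper's own definitions.
\[
\widehat{\mathrm{MMD}}^{2}_{u}(X, Y)
  = \frac{1}{m(m-1)} \sum_{i \neq j} k(x_i, x_j)
  + \frac{1}{m(m-1)} \sum_{i \neq j} k(y_i, y_j)
  - \frac{2}{m^{2}} \sum_{i=1}^{m} \sum_{j=1}^{m} k(x_i, y_j)
\]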

Original language: English (US)
State: Published - 2017
Event: 33rd Conference on Uncertainty in Artificial Intelligence, UAI 2017 - Sydney, Australia
Duration: Aug 11 2017 - Aug 15 2017


All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
