Causal conclusions that flip

Kelly, K. and Conor Mayo-Wilson, "Causal Conclusions That Flip Repeatedly and Their Justification." Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, 2010: 277-286.

Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal any finite number of times as the sample size increases. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary.
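As a rough illustration of a single such flip (a minimal sketch, not the authors' construction), the Python snippet below simulates a linear Gaussian model with a weak true edge X → Y and applies a simple significance-test-based discovery rule at growing sample sizes; the coefficient, threshold, and sample sizes are assumptions chosen for illustration only.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: a test-based decision about the edge X -> Y in a
# linear Gaussian model can change as the sample size grows, because a
# weak true effect is indistinguishable from no effect at small samples.
rng = np.random.default_rng(0)

beta = 0.05          # assumed weak true causal coefficient for X -> Y
alpha = 0.01         # assumed significance threshold for the test
n_total = 200_000

x = rng.normal(size=n_total)
y = beta * x + rng.normal(size=n_total)

previous = None
for n in [100, 1_000, 10_000, 100_000, n_total]:
    # Decide whether to posit an edge based on a correlation test
    # over the first n observations.
    r, p = stats.pearsonr(x[:n], y[:n])
    conclusion = "edge X -> Y" if p < alpha else "no edge"
    flipped = previous is not None and conclusion != previous
    note = "  (conclusion flipped)" if flipped else ""
    print(f"n={n:>7}: p={p:.3g} -> {conclusion}{note}")
    previous = conclusion
```

At small samples the rule typically reports "no edge" and later reverses itself once the weak dependence becomes detectable; the theorem concerns the stronger claim that, for consistent methods over the stated model classes, such reversals cannot be bounded in advance.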

Status of Research
Completed/published