Lifted representation of relational causal models revisited: Implications for reasoning and structure learning

Sanghack Lee, Vasant Honavar

Research output: Contribution to journal › Conference article › peer-review

2 Scopus citations


Maier et al. (2010) introduced the relational causal model (RCM) for representing and inferring causal relationships in relational data. A lifted representation, called the abstract ground graph (AGG), plays a central role in reasoning with and learning of RCMs. The correctness of the algorithm proposed by Maier et al. (2013a) for learning an RCM from data relies on the soundness and completeness of the AGG for relational d-separation, which reduces learning an RCM to learning an AGG. We revisit the definition of the AGG and show that the AGG, as defined in Maier et al. (2013b), does not correctly abstract all ground graphs. We revise the definition of the AGG to ensure that it correctly abstracts all ground graphs. We further show that the AGG representation is not complete for relational d-separation; that is, there can exist conditional independence relations in an RCM that are not entailed by its AGG. A careful examination of the relationship between the lack of completeness of the AGG for relational d-separation and faithfulness conditions suggests that weaker notions of completeness, namely adjacency faithfulness and orientation faithfulness between an RCM and its AGG, can be used to learn an RCM from data.
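Relational d-separation, as discussed in the abstract, generalizes ordinary d-separation from a single DAG to the set of all ground graphs of an RCM. As a minimal illustration of the underlying test only (not the paper's AGG machinery), the following sketch implements the standard moralization-based d-separation check on one DAG; the `parents` dictionary representation is an assumption made for this example.

```python
from itertools import combinations

def d_separated(parents, xs, ys, zs):
    """Test whether xs is d-separated from ys given zs in a DAG.

    parents: dict mapping each node to the set of its parents.
    Uses the classic moralization criterion: restrict to the
    ancestral subgraph of xs | ys | zs, moralize it, delete zs,
    and report d-separation iff no undirected path joins xs to ys.
    """
    # 1. Collect the ancestral subgraph of all involved nodes.
    relevant, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n in relevant:
            continue
        relevant.add(n)
        stack.extend(parents.get(n, ()))

    # 2. Moralize: connect each node to its parents and
    #    "marry" (connect) all co-parents of a common child.
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = parents.get(n, set()) & relevant
        for p in ps:
            adj[n].add(p)
            adj[p].add(n)
        for a, b in combinations(ps, 2):
            adj[a].add(b)
            adj[b].add(a)

    # 3. Remove the conditioning set and test reachability.
    blocked = set(zs)
    seen, stack = set(), [x for x in xs if x not in blocked]
    while stack:
        n = stack.pop()
        if n in seen or n in blocked:
            continue
        if n in ys:
            return False  # a connecting path exists
        seen.add(n)
        stack.extend(adj[n] - blocked)
    return True
```

For example, in the collider A → C ← B, A and B are d-separated marginally but become dependent once C is observed; the AGG is intended to answer such queries simultaneously for every ground graph of an RCM.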

Original language: English (US)
Pages (from-to): 56-65
Number of pages: 10
Journal: CEUR Workshop Proceedings
State: Published - 2015
Event: UAI 2015 Workshop on Advances in Causal Inference, UAI 2015-ACI 2015 - co-located with the 31st Conference on Uncertainty in Artificial Intelligence, UAI 2015 - Amsterdam, Netherlands
Duration: Jul 16 2015 → …

All Science Journal Classification (ASJC) codes

  • General Computer Science