

2.3. ENTAILING DEPENDENCIES WITH A DAG 99

Pearl obtains necessary but not sufficient conditions for a probability distribution to admit a faithful DAG representation.

Proof. Suppose gp is the DAG pattern faithful to P. Then, by Theorem 2.6, all and only the independencies in P are identified by d-separation in gp, and these are exactly the d-separations in any DAG G in the equivalence class represented by gp. Therefore, Condition 1 follows from Lemma 2.4, and Condition 2 follows from Lemma 2.5.

In the other direction, suppose Conditions (1) and (2) hold for gp and P. Since we have assumed P admits a faithful DAG representation, there is some DAG pattern gp′ faithful to P. By what was just proved, Conditions (1) and (2) also hold for gp′ and P. This means any DAG G in the Markov equivalence class represented by gp has the same links and the same set of uncoupled head-to-head meetings as any DAG G′ in the Markov equivalence class represented by gp′. Theorem 2.4 therefore says G and G′ are in the same Markov equivalence class, which means gp = gp′.
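The Markov-equivalence test the proof leans on (Theorem 2.4: two DAGs are Markov equivalent if and only if they have the same links and the same set of uncoupled head-to-head meetings) can be sketched in code. This is an illustrative sketch, not from the text; a DAG is represented simply as a set of directed edges (u, v) meaning u → v.

```python
def links(dag):
    """The skeleton: the set of undirected links {u, v} of the DAG."""
    return {frozenset(edge) for edge in dag}


def uncoupled_head_to_head(dag):
    """Uncoupled head-to-head meetings X -> Z <- Y with X, Y non-adjacent."""
    parents = {}
    for u, v in dag:
        parents.setdefault(v, set()).add(u)
    skel = links(dag)
    meetings = set()
    for z, ps in parents.items():
        for x in ps:
            for y in ps:
                # x < y avoids counting each unordered pair twice
                if x < y and frozenset((x, y)) not in skel:
                    meetings.add((frozenset((x, y)), z))
    return meetings


def markov_equivalent(g1, g2):
    """Theorem 2.4's criterion: same links, same uncoupled meetings."""
    return (links(g1) == links(g2)
            and uncoupled_head_to_head(g1) == uncoupled_head_to_head(g2))


# X -> Y -> Z versus X <- Y -> Z: same links, no uncoupled meetings.
print(markov_equivalent({("X", "Y"), ("Y", "Z")},
                        {("Y", "X"), ("Y", "Z")}))   # True

# X -> Z <- Y versus X -> Z -> Y: the first has an uncoupled meeting at Z.
print(markov_equivalent({("X", "Z"), ("Y", "Z")},
                        {("X", "Z"), ("Z", "Y")}))   # False
```

Checking these two structural features is all the theorem requires; no d-separation queries are needed to decide equivalence.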

2. All conditional independencies in P are entailed by G, based on the Markov condition.

100 CHAPTER 2. MORE DAG/PROBABILITY RELATIONSHIPS

Proof. The proof is obvious.

Definition 2.10 concerns only the independencies entailed by a DAG; it says nothing about P being a marginal of a distribution of the variables in V. There are other cases of embedded faithfulness. Example 2.14 shows one such case. Before giving that example, we discuss embedded faithfulness further.

Proof. The proof is left as an exercise.

Note that the theorem says ‘all those DAGs’, but, unlike the corresponding theorem for faithfulness, it does not say ‘only those DAGs’. If a distribution can be embedded faithfully at all, there are infinitely many non-Markov-equivalent DAGs in which it can be embedded faithfully: trivially, we can always replace an edge by a directed linked list of new variables. Figure 2.22 shows a more complex example. The distribution P(v, s, l, f) in Example 2.11 is embedded faithfully in both DAGs in that figure; however, even though the DAGs contain the same nodes, they are not Markov equivalent.
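The ‘replace an edge by a directed linked list of new variables’ remark can be made concrete. The sketch below tests d-separation via the standard moralized-ancestral-graph criterion and checks that splicing a hidden variable h into an edge leaves the d-separations among the original variables v, s, l, f unchanged. The edge set here is hypothetical, chosen only for illustration; it is not the pair of DAGs from Figure 2.22.

```python
from itertools import combinations


def ancestors(dag, nodes):
    """The given nodes together with all their ancestors in the DAG."""
    parents = {}
    for u, v in dag:
        parents.setdefault(v, set()).add(u)
    result, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in result:
                result.add(p)
                stack.append(p)
    return result


def d_separated(dag, xs, ys, zs):
    """Test xs _|_ ys | zs via the moralized ancestral graph criterion."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    sub = {(u, v) for u, v in dag if u in keep and v in keep}
    und = {frozenset(e) for e in sub}          # drop edge directions
    parents = {}
    for u, v in sub:
        parents.setdefault(v, set()).add(u)
    for ps in parents.values():                # moralize: marry parents
        for a, b in combinations(ps, 2):
            und.add(frozenset((a, b)))
    adj = {}                                   # delete zs, build adjacency
    for e in und:
        a, b = tuple(e)
        if a not in zs and b not in zs:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, stack = set(xs), list(xs)            # reachability from xs
    while stack:
        for m in adj.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return not (seen & set(ys))


# Hypothetical DAG over v, s, l, f (NOT the DAGs of Figure 2.22).
g = {("v", "s"), ("v", "l"), ("s", "f"), ("l", "f")}
# Splice a new hidden variable h into the edge v -> s.
g_chain = {("v", "h"), ("h", "s"), ("v", "l"), ("s", "f"), ("l", "f")}

# d-separations among the original variables are unchanged:
print(d_separated(g, {"s"}, {"l"}, {"v"}))        # True
print(d_separated(g_chain, {"s"}, {"l"}, {"v"}))  # True
print(d_separated(g, {"s"}, {"l"}, {"f"}))        # False (collider at f)
```

The two DAGs have different node sets, so they are trivially not Markov equivalent, yet they identify the same independencies among v, s, l, and f; this is exactly the sense in which faithful embeddings are never unique.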
