Testable Implications

Bayesian Network


Last updated 4 years ago


A Bayesian network represents the probabilistic relationships between the treatment and the outcome. The conditional dependencies read off the DAG are used for performing inference and for learning more about the network.

A DAG can also tell us about the joint probability distribution:

  1. which variables are independent of each other

  2. which variables are conditionally independent of each other

  3. how to factor and simplify the joint distribution

A DAG is a model of how we think the world works, and DAGs have become an essential tool in data science.

Dependencies and independencies are important to identify and understand, because they tell us a lot about the model.

Example:

Decomposition of the joint distribution:

P(A,B,C,D,E,F,G) = P(A) P(G) P(B|A) P(D|G,B) P(C|B) P(E|B,C,D) P(F|E)

Note: We start from the root nodes (which have no parents) and move from there to the subsequent child nodes.
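This factorization can be derived mechanically: each variable contributes one factor, conditioned on its parents in the DAG. A minimal sketch in plain Python, with the parent sets read off the conditionals in the equation above:

```python
# Parents of each node, read off the DAG in the text:
# an edge X -> Y means X appears in Y's conditional.
parents = {
    "A": [], "G": [], "B": ["A"], "D": ["G", "B"],
    "C": ["B"], "E": ["B", "C", "D"], "F": ["E"],
}

# Any topological order works; this one matches the equation above
# (roots first, every node after all of its parents).
order = ["A", "G", "B", "D", "C", "E", "F"]

factors = []
for v in order:
    if parents[v]:
        factors.append(f"P({v}|{','.join(parents[v])})")
    else:
        factors.append(f"P({v})")

factorization = " ".join(factors)
print(factorization)  # P(A) P(G) P(B|A) P(D|G,B) P(C|B) P(E|B,C,D) P(F|E)
```

The same loop works for any DAG, which is why the decomposition generalizes beyond this example.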

Probability distributions from the DAG

  • F is dependent on E:

P(F|E)
  • F is independent of A, B, C, D, G conditional on E (the only variable that directly affects F is E, so conditioning on E screens F off from everything else):

P(F|A,B,C,D,E,G) = P(F|E), i.e. F ⊥⊥ A,B,C,D,G | E
  • F and C are marginally dependent (marginally associated) with each other:

P(F|C) ≠ P(F)

Similarly, we can check the dependencies for the other variables. We have thus verified that this probability distribution and the DAG are compatible with each other.
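These independence claims can also be verified numerically. The sketch below fills the DAG's factorization with arbitrary, made-up conditional probability tables (the numbers are illustrative only; just the structure matters) and computes the relevant conditionals by brute-force enumeration over all 2^7 assignments:

```python
from itertools import product

# Made-up conditional probability tables for the DAG A->B->C, B->D<-G,
# {B,C,D}->E->F; only the structure matters for the independence checks.
pA, pG = 0.3, 0.6                     # P(A=1), P(G=1)
pB = {0: 0.2, 1: 0.7}                 # P(B=1 | A)
pC = {0: 0.4, 1: 0.8}                 # P(C=1 | B)
pD = {(g, b): 0.3 + 0.2 * g + 0.3 * b # P(D=1 | G,B)
      for g in (0, 1) for b in (0, 1)}
pE = {(b, c, d): 0.1 + 0.2 * b + 0.3 * c + 0.3 * d   # P(E=1 | B,C,D)
      for b in (0, 1) for c in (0, 1) for d in (0, 1)}
pF = {0: 0.1, 1: 0.9}                 # P(F=1 | E)

def bern(p, x):
    """P(X = x) for a Bernoulli variable with success probability p."""
    return p if x == 1 else 1 - p

def joint(a, b, c, d, e, f, g):
    """The joint, using the factorization implied by the DAG."""
    return (bern(pA, a) * bern(pG, g) * bern(pB[a], b) * bern(pD[(g, b)], d)
            * bern(pC[b], c) * bern(pE[(b, c, d)], e) * bern(pF[e], f))

def prob(f_val=None, **fixed):
    """P(F=f_val, fixed-vars) by summing the joint; f_val=None marginalizes F."""
    total = 0.0
    for a, b, c, d, e, f, g in product((0, 1), repeat=7):
        assign = dict(a=a, b=b, c=c, d=d, e=e, f=f, g=g)
        if f_val is not None and f != f_val:
            continue
        if all(assign[k] == v for k, v in fixed.items()):
            total += joint(a, b, c, d, e, f, g)
    return total

# 1) F ⊥⊥ A,B,C,D,G | E:  P(F=1 | A,B,C,D,E,G) equals P(F=1 | E) = pF[e]
fixed = dict(a=1, b=0, c=1, d=0, e=1, g=0)
cond_full = prob(1, **fixed) / prob(**fixed)

# 2) F and C are marginally dependent:  P(F=1 | C=1) differs from P(F=1)
p_f = prob(1)
p_f_given_c = prob(1, c=1) / prob(c=1)
print(cond_full, p_f, p_f_given_c)
```

With any generic choice of tables, `cond_full` matches P(F=1|E=1) exactly, while the marginal probabilities in check 2 differ because the path C → E → F is open.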

[Figure: Bayesian network]