Bias

Unfairness

Definition: The unjust or prejudicial treatment of individuals from different categories or groups, resulting in favor (benefits and opportunities) toward one particular group. It is usually based on attributes such as age, sex, skin color, language, or economic condition.

In artificial intelligence, bias can be thought of as an incorrect interpretation of the true relationship between the exposure and the outcome; biases in the model can distort the true causal relationship.

A model is said to be biased if the prediction (y) is not independent of the protected attribute (p), that is:

$P(y \mid p) \neq P(y)$
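
As a minimal illustration of this check (the dataset, column names, and values below are hypothetical), the sketch compares the empirical rate P(y=1 | p) for each group against the overall rate P(y=1); a noticeable gap suggests the predictions are not independent of the protected attribute.

```python
import pandas as pd

# Hypothetical toy data: binary model predictions y and a protected attribute p.
df = pd.DataFrame({
    "p": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "y": [1, 1, 0, 0, 0, 1, 0, 1],
})

overall_rate = df["y"].mean()              # estimate of P(y = 1)
group_rates = df.groupby("p")["y"].mean()  # estimates of P(y = 1 | p)

print(f"P(y=1)       = {overall_rate:.2f}")
for group, rate in group_rates.items():
    print(f"P(y=1|p={group}) = {rate:.2f}  (gap = {rate - overall_rate:+.2f})")
```

In practice the same comparison would be applied to a model's predictions on held-out data; the point here is only that independence can be checked directly from the conditional and marginal rates.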

Importance:

  1. Artificial intelligence models are increasingly used in real-world applications such as loan approval, healthcare, and the judiciary, which makes it imperative for the AI community to minimize bias.

  2. Learning more about bias in models also helps us understand human biases better.

Where does bias come from?

  1. Data Collection

  2. Data inherently reflecting human bias (cognitive bias)

  3. Biased feedback loops

Removing Bias:

  1. Awareness - finding, understanding, and pointing out biases

  2. Bias mitigation methods - adding fairness to the models (see the reweighing sketch after this list), Datasheets for Datasets, and Model Cards for Model Reporting

  3. Demonstrating Causal Relationships
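
One way to make the "adding fairness" step concrete is a preprocessing approach such as reweighing (Kamiran & Calders): each training example gets a weight w(p, y) = P(p)P(y) / P(p, y), so that in the weighted data the label is independent of the protected attribute. The sketch below reuses the same kind of hypothetical toy dataframe as above and is only an illustration of the idea under those assumptions, not a production implementation.

```python
import pandas as pd

# Hypothetical toy training data: protected attribute p and binary label y.
df = pd.DataFrame({
    "p": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "y": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Reweighing: w(p, y) = P(p) * P(y) / P(p, y).
# Under these weights the weighted joint distribution factorizes,
# i.e. the label becomes independent of the protected attribute.
p_marginal = df["p"].value_counts(normalize=True)
y_marginal = df["y"].value_counts(normalize=True)
joint = df.groupby(["p", "y"]).size() / len(df)

weights = df.apply(
    lambda row: p_marginal[row["p"]] * y_marginal[row["y"]] / joint[(row["p"], row["y"])],
    axis=1,
)

# Most learners accept these as sample weights, e.g. model.fit(X, y, sample_weight=weights).
print(df.assign(weight=weights))
```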

Challenges:

  1. More than 180 cognitive biases have been defined and classified, any one of which can affect the decisions we make.

  2. Biased feedback loops
