Causal Diagram: Confronting the Achilles’ Heel in Observational Data | by Zijing Zhu, PhD | Nov 2023
Photo by Андрей Сизов on Unsplash

“The Book of Why” Chapters 3&4, a Read with Me series

Zijing Zhu, PhD

In my previous two articles, I kicked off the “Read with Me” series and finished reading the first two chapters of “The Book of Why” by Judea Pearl. Those articles discussed why introducing causality is necessary for enabling human-like decision-making and emphasized the Ladder of Causation, which sets the foundation for the discussions to come. In this article, we will explore the keyholes that open the door from the first to the second rung of the Ladder of Causation, allowing us to move beyond probability and into causal thinking. We will go from Bayes’s rule to Bayesian networks and, finally, to causal diagrams.

As a fan of detective novels, I count Sherlock Holmes as my favorite series. I still remember the days and nights I spent reading the stories without noticing time pass. Years later, many of the case details have faded from my memory, but, like everyone else, I still remember the famous quote:

When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

Translating this quote into the language of statistics, there are two types of probabilities: forward probability and inverse probability. Following Sherlock Holmes’s deductive reasoning, detective work amounts to finding the suspect with the highest inverse probability of being the murderer, given the evidence.
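As a sketch of this idea, the snippet below computes inverse probabilities over a set of suspects with Bayes’s rule. All names, priors, and likelihoods here are made-up numbers for illustration, not anything from the book:

```python
# Hypothetical illustration: posterior (inverse) probability of each suspect
# given the evidence, computed with Bayes's rule. All numbers are invented.
priors = {"suspect_a": 0.5, "suspect_b": 0.3, "suspect_c": 0.2}       # P(suspect)
likelihoods = {"suspect_a": 0.01, "suspect_b": 0.10, "suspect_c": 0.70}  # P(evidence | suspect)

# P(evidence) via the law of total probability
p_evidence = sum(priors[s] * likelihoods[s] for s in priors)

# Inverse probability: P(suspect | evidence)
posteriors = {s: priors[s] * likelihoods[s] / p_evidence for s in priors}

# The detective's conclusion: the suspect with the highest inverse probability
best = max(posteriors, key=posteriors.get)
print(best, posteriors[best])
```

Note how a suspect who is improbable a priori (suspect_c, at 20%) can still dominate once the evidence is taken into account, echoing Holmes’s “however improbable” line.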

Photo by Markus Winkler on Unsplash

Going from forward probability to inverse probability, we are not merely swapping the order of the variables; we are also enforcing a causal direction. As briefly discussed in the previous article, Bayes’s rule provides a bridge that connects objective data (evidence) with subjective opinions (prior belief). Based on Bayes’s rule, we can calculate conditional probabilities for any two variables. For any variables A and B, given that B has happened, the probability of A happening is:

P(A|B) = P(A&B)/P(B)
