Causality 101: The Book of Why – Part 2 (Work in Progress)

This blog post is part of a twelve-part series of chapter-wise reviews of The Book of Why.

In today's post, we will be reviewing Chapter 1 of The Book of Why, titled The Ladder of Causation.

  • The chapter starts with a review of the top three reasons this work is important.
1. For humans, the world isn't made up of dry facts (or data). Rather, these facts are glued together by an intricate web of cause-effect relationships.

This is partly true: our brains are excellent at taking in new information and somehow joining it with our existing knowledge in a web-like manner. While it is hard to understand how and where our brain stores new information (not temporally, but spatially and relationally), stating that these facts are universally glued together by cause-effect relationships may be misleading.

2. Causal explanations make up the bulk of human knowledge and should be the cornerstone of machine intelligence.

Our brain performs two key activities at all times: decision making and reasoning. Causal explanations are often a key resource in reasoning, as Judea Pearl points out in the story of Adam, Eve, and the forbidden fruit. But since decisions can be made consciously or unconsciously, humans do not necessarily always use cause-effect relationships consciously to make a decision. When justifying their actions afterwards, however, they surely do.

3. Finally, our transition from processors of data to makers of explanations was not gradual; it was a leap that required an external push from an uncommon fruit. No machine can derive explanations from raw data - it needs a push.

The information connections need not be causal explanations. However, this addresses a very important issue in the area of autonomous agents: today's machines lack autonomy, and without an external or internal push mechanism we may never quite reach the goal of AGI.
  • The key idea of this chapter, however, is that correlation and causation are two different mechanisms and thus each needs a different language. While the former is supported by the language of conditional probabilities, Judea Pearl notes that the latter cannot necessarily be expressed through conditional probability alone. Thus, a new language, the do-calculus, was created. This importantly highlights the need to reconcile these two kinds of statistical inference. Two key efforts in this direction are Granger Causality and Vector Autoregression.
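The gap between conditional probability and the do-operator can be made concrete with a small simulation. The sketch below uses a toy model of my own (the variable names and probabilities are illustrative, not from the book): a confounder Z causes both X and Y, so observing X=1 shifts P(Y=1), yet intervening with do(X=1) leaves Y untouched because the Z→X edge is severed.

```python
import random

random.seed(0)
N = 100_000

def observe():
    # Confounder Z drives both X and Y (illustrative probabilities)
    z = random.random() < 0.5
    x = random.random() < (0.9 if z else 0.1)
    y = random.random() < (0.8 if z else 0.2)
    return x, y

def intervene():
    # do(X=1): X is set externally, so the Z -> X edge is cut;
    # Y still depends only on Z
    z = random.random() < 0.5
    y = random.random() < (0.8 if z else 0.2)
    return y

obs = [observe() for _ in range(N)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)
p_y_do_x1 = sum(intervene() for _ in range(N)) / N

print(round(p_y_given_x1, 2))  # ~0.74: seeing X=1 raises P(Y=1)
print(round(p_y_do_x1, 2))     # ~0.50: doing X=1 does not
```

Analytically, P(Y=1 | X=1) = 0.74 while P(Y=1 | do(X=1)) = 0.50 in this model, which is exactly why Pearl argues that a language beyond conditional probability is needed for rung two of the ladder.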

The connection between imagining and causal relations is almost self-evident. It is useless to ask for the causes of things unless you can imagine their consequences. This causal imagination allowed our ancestors to do many things more efficiently through a tricky process we call "planning".

  • Any thinking entity must possess, consult and be able to manipulate a mental model of its reality.
