Books : reviews

Dana Mackenzie, Barry A. Cipra.
What's Happening in the Mathematical Sciences 10.
AMS. 2015

What’s Happening in the Mathematical Sciences is a collection of articles highlighting some of the most recent developments in mathematics. These include important achievements in pure mathematics, as well as its fascinating applications.

On the pure mathematics side, “Prime Clusters and Gaps: Out-Experting the Experts” discusses new insights into the distribution of prime numbers, a perpetual source of new problems and new results. Recently, several mathematicians (including Yitang Zhang and James Maynard) significantly improved our knowledge of the distribution of prime numbers. Advances on the so-called Kadison-Singer problem, and their applications in signal-processing algorithms used to analyze and synthesize signals, are described in “The Kadison-Singer Problem: A Fine Balance”. “Quod Erat Demonstrandum” presents two examples of perseverance in mathematicians’ pursuit of truth, in particular using computers to verify their arguments. And “Following in Sherlock Holmes’ Bike Tracks” shows how an episode in one of Sir Arthur Conan Doyle’s Sherlock Holmes stories naturally led to very interesting problems and results in the theory of completely integrable systems.

On the applied side, “Climate Past, Present, and Future” shows the importance of mathematics in the study of climate change and global warming phenomena. Mathematical models help researchers to understand the past, present, and future changes of climate, and to analyze their consequences. “The Truth Shall Set Your Fee” talks about algorithms for information exchange in cyberspace. Economists have long known that trust is a cornerstone of commerce, and this becomes even more important nowadays, when many transactions, big and small, are done over the Internet. Recent efforts by theoretical computer scientists have led to the development of so-called “rational protocols” for information exchange, in which the parties exchanging information find that lies do not pay off.

Over the last 100 years, many professional mathematicians and devoted amateurs have contributed to the problem of finding polygons that can tile the plane, e.g., for use as floor or wall tiles in large rooms. Despite all these efforts, the search is not yet complete, as the very recent discovery of a new plane-tiling pentagon, described in “A Pentagonal Search Pays Off”, shows. Mathematics can benefit coaches and players in some of the most popular team sports, as shown in “The Brave New World of Sports Analytics”. The increased ability to collect and process statistics, big data, or “analytics”, has completely changed the world of sports. The use of modern methods of statistical modeling allows coaches and players to create much more detailed game plans, as well as many new ways of measuring a player’s value. Finally, “Origami: Unfolding the Future” talks about the ancient Japanese paper-folding art and its unexpected connections to a variety of areas, including mathematics, technology, and education.

Judea Pearl, Dana Mackenzie.
The Book of Why: the new science of cause and effect.
Penguin. 2018

rating : 2 : great stuff
review : 28 July 2019

We have all heard the old saying “correlation is not causation”. This is a problem for statistics, since all it can measure is correlation. Pearl here argues that this is because statisticians are restricting themselves too much, and that it is possible to do more. There is no magic; to get this more, you have to add something into the system, but that something is very reasonable: a causal model.

He organises his argument using the three-runged “ladder of causation”. On the bottom rung is pure statistics, reasoning about observations: what is the probability of recovery, found by observing these people who have taken a drug? The second rung allows reasoning about interventions: what is the probability of recovery, if I were to give these other people the drug? And the top rung includes reasoning about counterfactuals: what would have happened if that person had not received the drug?

Intervention (rung 2) is different from observation alone (rung 1) because the observations may be (almost certainly are) of a biased group: observing only those who took the drug for whatever reason, maybe because they were already sick in a particular hospital, or because they were rich enough to afford it, or some other confounding variable. The intervention, however, is a different case: people are specifically given the drug. The purely statistical way of moving up to rung 2 is to run a randomised controlled trial (RCT), to remove the effect of confounding variables, and thereby to make the observed results the same as the results from intervention. The RCT is often known as the “gold standard” for experimental research for this reason.
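The rung 1 / rung 2 gap can be made concrete with a toy simulation (my own numbers, not the book's): a confounder, sickness, makes people both more likely to take the drug and less likely to recover, so the observational comparison understates the drug's true effect, while random assignment recovers it.

```python
import random

random.seed(0)

def trial(randomised):
    """Simulate one person; return (took_drug, recovered)."""
    sick = random.random() < 0.5                          # the confounder
    if randomised:
        took = random.random() < 0.5                      # rung 2: coin-flip assignment
    else:
        took = random.random() < (0.8 if sick else 0.2)   # rung 1: self-selection
    # The drug genuinely helps (+0.3); sickness genuinely hurts (-0.4).
    p_recover = 0.5 + (0.3 if took else 0.0) - (0.4 if sick else 0.0)
    return took, random.random() < p_recover

def recovery_gap(randomised, n=100_000):
    """Recovery rate among drug-takers minus that among non-takers."""
    counts = {True: [0, 0], False: [0, 0]}   # took -> [recovered, total]
    for _ in range(n):
        took, recovered = trial(randomised)
        counts[took][0] += recovered
        counts[took][1] += 1
    return counts[True][0] / counts[True][1] - counts[False][0] / counts[False][1]
```

The observational gap comes out far below the true +0.3 effect, because the takers are disproportionately the already-sick; the randomised gap sits near +0.3.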

But here’s the thing: what is a confounding variable, and what is not? In order to know what to control for, and what to ignore, the experimenter has to have some kind of implicit causal model in their head. It has to be implicit, because statisticians are not allowed to talk about causality! Yet it must exist to some degree, otherwise how do we even know which variables to measure, let alone control for? Pearl argues to make this causal model explicit, and use it in the experimental design. Then, with respect to this now explicit causal model, it is possible to reason about results more powerfully. (He does not address how to discover this model: that is a different part of the scientific process, of modelling the world. However, observations can be used to test the model to some degree: some models are simply too causally strong to support the observed situation.)
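One payoff of an explicit model is Pearl's adjustment ("back-door") formula: once the model says sickness is the only confounder, the interventional quantity P(Y | do(X)) can be computed from purely observational probabilities. A minimal sketch, with toy numbers of my own rather than the book's:

```python
# Causal model assumed: sick -> takes_drug, sick -> recovers, takes_drug -> recovers.
P_Z = {0: 0.5, 1: 0.5}            # P(sick = z)
P_X_given_Z = {0: 0.2, 1: 0.8}    # P(takes drug | sick = z): self-selection
P_Y_given_XZ = {(1, 0): 0.8, (1, 1): 0.4,
                (0, 0): 0.5, (0, 1): 0.1}   # P(recovers | drug = x, sick = z)

def p_y_do_x(x):
    """Adjustment formula: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    return sum(P_Y_given_XZ[(x, z)] * P_Z[z] for z in P_Z)

def p_y_given_x(x):
    """Plain conditioning: P(Y=1 | X=x) = sum_z P(Y=1 | x, z) P(z | x)."""
    def p_x_given_z(z):
        return P_X_given_Z[z] if x == 1 else 1 - P_X_given_Z[z]
    p_x = sum(p_x_given_z(z) * P_Z[z] for z in P_Z)
    return sum(P_Y_given_XZ[(x, z)] * p_x_given_z(z) * P_Z[z] / p_x for z in P_Z)
```

Here conditioning alone shows only a 6-point difference in recovery (0.48 vs 0.42), while the adjustment formula reveals the true 30-point interventional effect (0.6 vs 0.3): same data, but the causal model licenses the stronger conclusion.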

Pearl uses this framework to show how and why the RCT works. More importantly, he also shows that it is possible to reason about interventions sometimes from observations alone (hence data mining pure observations becomes more powerful), or sometimes with fewer controlled variables, without the need for a full RCT. This is extremely useful, since there are many cases where RCTs are unethical, impractical, or too expensive. RCTs are not the “gold standard” after all; they are basically a dumb sledgehammer approach. He also shows how to use the causal model to calculate which variables do need to be controlled for, and how controlling for certain variables is precisely the wrong thing to do.
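The "controlling can be precisely wrong" case is conditioning on a collider, a variable caused by two others. A small sketch of my own (Berkson's paradox flavour, not the book's worked example): two independent diseases each can cause hospital admission, and restricting attention to hospital patients manufactures a spurious negative association between them.

```python
from itertools import product

def cov_sign(rows):
    """Sign of the covariance between two 0/1 columns, weighted by probability p."""
    w = sum(p for _, _, p in rows)
    ex = sum(a * p for a, _, p in rows) / w
    ey = sum(b * p for _, b, p in rows) / w
    exy = sum(a * b * p for a, b, p in rows) / w
    cov = exy - ex * ey
    return (cov > 1e-12) - (cov < -1e-12)

# Two independent diseases, each present with probability 0.3.
joint = [(a, b, (0.3 if a else 0.7) * (0.3 if b else 0.7))
         for a, b in product((0, 1), repeat=2)]

# "Hospitalised" is a collider: either disease can cause admission.
# Conditioning on it means looking only at rows where a or b holds.
admitted = [(a, b, p) for a, b, p in joint if a or b]
```

Unconditionally the diseases are uncorrelated; among the admitted they become negatively correlated, because if a patient is in hospital without disease A, disease B becomes the likelier explanation. Controlling for the collider creates the very bias it was meant to remove.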

Using such causal models also allows us to ascend to the third rung: reasoning about counterfactuals, where experiments are in principle impossible. This gives us the power to reason about different worlds: What’s the probability that Fred would have died from lung cancer if he hadn’t smoked? What’s the probability that a heat wave would have happened with less CO2 in the atmosphere?
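Pearl answers such questions with a three-step recipe over a structural causal model: abduction (infer the hidden background factors from what actually happened), action (surgically set the variable of interest), prediction (recompute the outcome). A deliberately tiny sketch on a toy model of my own, not the book's:

```python
# Assumed toy structural equation: cancer occurs iff Fred smokes AND
# he carries a hidden susceptibility u.
def cancer(smokes, u):
    return smokes & u

# Observed world: Fred smoked and got lung cancer.
smokes_obs, cancer_obs = 1, 1

# Step 1, abduction: infer the hidden u consistent with the observation.
u = next(u for u in (0, 1) if cancer(smokes_obs, u) == cancer_obs)

# Step 2, action: surgically set smokes = 0, keeping u fixed.
# Step 3, prediction: recompute the outcome in that counterfactual world.
cancer_if_not_smoked = cancer(0, u)   # 0: in this toy world, no cancer
```

The essential move is that the observed world pins down the background factor u, which is then carried unchanged into the hypothetical world where the intervention is different, which is exactly what an RCT, living on rung 2, can never do for an individual.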

[p51] probabilities encode our beliefs about a static world, causality tells us whether and how probabilities change when the world changes, be it by intervention or by act of imagination.

This is a very nicely written book, with many real world examples. The historical detail included shows how and why statisticians neglected causality. It is not always an easy read – the concepts are quite intricate in places – but it is a crucially important read. We should never again bow down to “correlation is not causation”: we now know how to discover when it is.