Real Statistics

A Radical Approach

Innovations of this textbook

In the early 20th century, Sir Ronald Fisher initiated an approach to statistics which he characterized as follows: “… the object of statistical methods is the reduction of data. A quantity of data, which usually by its mere bulk is incapable of entering the mind, is to be replaced by relatively few quantities which shall adequately represent the whole …” As he clearly indicates, we want to reduce the data because our minds cannot comprehend large amounts of data. Therefore, we want to summarize the data in a few numbers which adequately represent the whole data set. It should be obvious from the start that this is an impossible task. One cannot reduce the information contained in 1000 points of data to two or three numbers. There must be a loss of information in this process. (A small numerical illustration of this information loss is sketched after the chapter list below.)

Fisher developed a distinctive methodology, which is still at the heart of conventional statistics. The central element of this methodology was an ASSUMPTION: the data is a random sample from a larger population, where the larger population is characterized by a few key parameters. Under this assumption, the key parameters which characterize the larger population are sufficient to characterize the data set at hand. Fisher showed that, under such assumptions, there exist “sufficient statistics” – a small set of numbers which captures all of the information available in the data. Thus, once in possession of the sufficient statistics, the data analyst can actually throw away the original data, as all relevant information from the data set has been captured in the sufficient statistics. Our goal in this section is to explain how this methodology works, why it was a brilliant contribution by Fisher in his time, and why this methodology is now obsolete and a handicap to progress in statistics.

In the raw data, each data point is unique and informative. But Fisher’s approach anonymizes all of the data by making every point equally representative of a population. This actually has parallels to our own approach – we think of the data as informing us about the real world, which is hidden. The problem is that Fisher uses an imaginary world from which the data comes, whereas we are interested in the real world. According to conventional statistical methodology, the statistician is free to make up a class of imaginary populations and pretend that the data is a random sample from this imagined population. It is a little-noticed effect of this approach that the data is actually replaced by the imaginary population. Using this methodological freedom, the statistician can restrict the imaginary populations to satisfy some desired prerequisite or bias. Statistical inference will then confirm this bias, making it appear as if the data is providing us with this information, when in fact the bias has been built into the assumptions, and all data sets will confirm it.

At this introductory stage, it is hard to provide a deep and detailed discussion of all the innovations, both methodological and substantive, in this textbook. We therefore provide a bullet point list, which highlights the innovations chapter by chapter:

1. The first chapter provides a more detailed discussion of the Islamic approach to pedagogy which lies at the heart of this textbook.
2. The second chapter shows that even the simplest of operations – comparing two numbers to see which one is larger – requires consideration of the real-world context from which these numbers emerge. In contrast, conventional statistical methodology confines attention to the numbers.
3. The theme of this book is that statistics must be learnt within the context of real-world applications. The third chapter discusses the computation and analysis of life expectancies. It shows how assumptions go into the manufacture of numbers which are presented as objective and concrete. It also illustrates the use of some basic statistical tools like the histogram.
4. The fourth chapter discusses an issue about which conventional statistical methodology has created enormous confusion: index numbers. When objects – like universities, automobiles, or research productivity – are ranked, an “index” number must be created to enable such ranking. A little-known fact is that there is no way of creating an objective index number. This means that there is no objective way of deciding which university is best, which author has the highest research productivity, or which student has the highest overall performance. Even though rankings are done routinely, all of them necessarily incorporate subjective judgments about the relative worth of different dimensions of performance.
5. The fifth chapter provides a detailed discussion of Sir Ronald Fisher’s approach to statistics, and the biases that it inherited from his racist agenda.
6. The sixth chapter illustrates how we can use statistics to compare infant mortality across time and across countries. The discussion introduces basic tools for such comparisons which differ substantially from conventional tools based on assuming that the data is normal.
7. The seventh chapter introduces basic probability concepts using a binomial distribution. It provides a new non-positivist definition of probability. This differs from both the frequentist and the Bayesian approaches, which are based on positivist ideology.
8. Chapter 8 introduces causality, while Chapter 9 discusses associations (termed correlation in conventional statistics).
9. Chapter 10 discusses several real-world applications and shows how to differentiate between correlation and causation. Conventional textbooks mention the famous aphorism that “correlation is not causation” but provide students with no tools to distinguish between the two concepts in real-world data sets.
10. Chapters 11 and 12 provide a deeper discussion of causation and the technical tools required for its analysis. These provide basic foundations for understanding causation, something which is not currently available in conventional textbooks of statistics. More advanced discussion of causation is left for a later book.
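To make the information loss described above concrete, here is a minimal sketch in Python (using numpy; the datasets and numbers are invented purely for illustration and are not taken from the book). It constructs two very different datasets that share the same mean and standard deviation, so any analysis that reduces the data to these two summaries cannot tell them apart.

import numpy as np

rng = np.random.default_rng(0)

# Dataset A: 1000 points spread smoothly around 50 (single-peaked).
data_a = rng.normal(loc=50, scale=10, size=1000)

# Dataset B: two widely separated clusters, then rescaled so that its
# mean and standard deviation exactly match those of dataset A.
raw_b = np.concatenate([rng.normal(30, 2, 500), rng.normal(70, 2, 500)])
data_b = (raw_b - raw_b.mean()) / raw_b.std()    # standardize
data_b = data_b * data_a.std() + data_a.mean()   # impose A's mean and std

print(round(data_a.mean(), 2), round(data_a.std(), 2))
print(round(data_b.mean(), 2), round(data_b.std(), 2))
# Both lines print the same two numbers, yet a histogram of data_a has one
# peak while data_b has two: the summaries cannot distinguish the shapes.

Under Fisher's assumption that a sample is normal, the pair (mean, variance) is a sufficient statistic, so the model declares these two datasets equivalent; the difference in shape is exactly the kind of information that the reduction discards.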


Dr. Asad Zaman

Skilled & Professional Instructor


Before his retirement, Dr. Asad Zaman served as the Vice Chancellor of PIDE, Islamabad, and as an external member of the Monetary Policy Committee of the State Bank of Pakistan. He received his BS in Mathematics from MIT in 1974, and his MS in Statistics and Ph.D. in Economics from Stanford University in 1976 and 1978, respectively. He has taught at the economics departments of highly ranked international universities such as Columbia, U. Penn., Cal. Tech., and Johns Hopkins, as well as Bilkent University, Ankara, and the Lahore University of Management Sciences. His econometrics textbook Statistical Foundations of Econometric Techniques is widely used internationally as a reference in graduate econometrics courses. He is managing editor of the International Econometric Review and serves on the editorial boards of numerous journals. He has more than 100 publications, with more than 1600 citations, in top-ranked journals such as the Annals of Statistics, Journal of Econometrics, Econometric Theory, and the Journal of Labor Economics. He has published widely in Islamic Economics and is a leading authority in the field. For more details, see the Wikipedia entry https://en.wikipedia.org/wiki/Asad_Zaman and https://asadzaman.net/about-me/.

The substance of the course is covered in 12 chapters, preceded by two preliminary chapters. The first is introductory material which discusses the life experiences of the author that led to the creation of the course. This is followed by Chapter 0, which provides a more detailed discussion of the contrast between Islamic and Western theories of knowledge that leads to the radically different approach to statistics taken in this course.
