Nearly all books about Statistical Process Control are quite bad: they contain errors, use poor examples, and promote bad practices. Apart from a few possible exceptions, there is only one book that can be recommended without reservation: "Statistical Method from the Viewpoint of Quality Control" by Walter A. Shewhart, the founder of industrial quality management.

A superficial reader, or a student of statistics, could easily mistake this book for just another book on probability calculations, whereas it fundamentally deals with the epistemology of probability and mathematics in the real world of manufacturing and science.

The current publisher's description on Amazon doesn't say much. It's better to read the original publisher's description of the book from the late 1930s:

*The application of statistical methods in mass production makes possible the most efficient use of raw materials and manufacturing processes, economical production, and the highest standards of quality for manufactured goods. In this classic volume, based on a series of ground-breaking lectures given to the Graduate School of the Department of Agriculture in 1938, Dr Shewhart illuminates the fundamental principles and techniques basic to the efficient use of statistical method in attaining statistical control, establishing tolerance limits, presenting data, and specifying accuracy and precision.*

*In the first chapter, devoted to statistical control, the author broadly defines the three steps in quality control: specification, production and inspection; he then outlines the historical background of quality control. This is followed by a rigorous discussion of the physical and mathematical states of statistical control, statistical control as an operation, the significance of statistical control and the future of statistics in mass production.*

*Chapter II offers a thought-provoking treatment of the problem of establishing limits of variability, including the meaning of tolerance limits, establishing tolerance limits in the simplest cases and in practical cases, and standard methods of measuring. Chapter III explores the presentation of measurements of physical properties and constants. Among the topics considered are measurements presented as original data, characteristics of original data, summarizing original data (both by symmetric functions and by Chebyshev's theorem), measurement presented as meaningful predictions, and measurement presented as knowledge.*

*Finally, Dr Shewhart deals with the problem of specifying accuracy and precision: the meaning of accuracy and precision, operational meaning, verifiable procedures, minimum quantity of evidence needed for forming a judgment, and more.*
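As a rough illustration of the "statistical control" discussed in Chapter I (my own sketch, not from the book), Shewhart's approach is commonly operationalised as a process-behaviour chart. A minimal individuals (XmR) chart calculation, which estimates process dispersion from the average moving range and flags points outside the natural process limits, might look like this:

```python
# Minimal sketch of a Shewhart individuals (XmR) chart calculation.
# The function names here are illustrative, not from any library.

def xmr_limits(samples):
    """Return (lower, center, upper) natural process limits.

    Dispersion is estimated from the average moving range divided
    by the bias-correction constant d2 = 1.128 (subgroups of 2),
    then the limits are placed 3 sigma-hat around the mean.
    """
    n = len(samples)
    mean = sum(samples) / n
    moving_ranges = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma_hat, mean, mean + 3 * sigma_hat

def out_of_control(samples):
    """Indices of points falling outside the natural process limits."""
    lo, _, hi = xmr_limits(samples)
    return [i for i, x in enumerate(samples) if x < lo or x > hi]
```

Points inside the limits are treated as routine variation; a point outside them is a signal that the process is not in statistical control and an assignable cause should be sought.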

In this book, Shewhart asks:

*What can statistical practice, and science in general, learn from the experience of industrial quality control?*

Elsewhere in the book, he writes:

*The definition of random in terms of a physical operation is notoriously without effect on the mathematical operations of statistical theory because so far as these mathematical operations are concerned random is purely and simply an undefined term. The formal and abstract mathematical theory has an independent and sometimes lonely existence of its own. But when an undefined mathematical term such as random is given a definite operational meaning in physical terms, it takes on empirical and practical significance. Every mathematical theorem involving this mathematically undefined concept can then be given the following predictive form: If you do so and so, then such and such will happen.*
