It is mentioned in an introductory text [10] that the term statistics refers to a collection of numerical facts and estimates, the purpose of statistics being to enable correct decisions to be made. Elsewhere [33], it is noted that one of the functions of statistics is to provide techniques for making inductive inferences from data. It is also important to have an estimate of the uncertainty attaching to those inferences.
In real-life situations, information can often be usefully summarised numerically: for example, percentage unemployment, the mortality rate for males aged 65, or the maximum stress level below which a structure is likely to survive. Statistics have long been used to estimate such quantities from observed data. For example, a random survey of cars aged 8 to 10 years in a particular country may show that, say, 20 out of 100 examined were structurally unsound due to rust damage. From this it may be inferred that the proportion of cars of that age in the country which have severe rust is in the region of 20%. Of course, there is some uncertainty attached to this estimate; if 100 different cars had been surveyed, a different answer might have been obtained, and there are ways of estimating this uncertainty. In classical statistical inference, one is making an estimate of the true (but unknown) proportion based on data. The assumption is that the proportion of the total population of cars which have severe rust is a fixed unknown, and that the data are being used to estimate it.
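As an illustration of how this uncertainty might be quantified (a standard large-sample approximation, given here only as a sketch and not taken from the cited texts), the estimated proportion from the survey and its approximate standard error are
\[
\hat{p} = \frac{20}{100} = 0.20, \qquad \widehat{\mathrm{se}}(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} = \sqrt{\frac{0.20 \times 0.80}{100}} = 0.04,
\]
giving an approximate 95% confidence interval of $\hat{p} \pm 1.96\,\widehat{\mathrm{se}}(\hat{p}) \approx (0.12,\ 0.28)$ for the true proportion.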
In the context of this research, statistics may be taken to be concerned with the analysis of data collected under uncertainty. Specifically, the aim is to develop suitable models in order to make reliability predictions based upon recorded test data. Classical, or frequentist, methodology in statistics concentrates on making inferences about the true situation having observed certain data, whereas the Bayesian approach is concerned with updating subjective knowledge in the light of data.
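To sketch this contrast on the car-rust example above (a standard conjugate Beta-Binomial update, included here only for illustration and not drawn from the cited references), a Bayesian analysis would place a prior $p \sim \mathrm{Beta}(a, b)$ on the unknown proportion; observing 20 unsound cars out of 100 then yields the posterior
\[
p \mid \text{data} \sim \mathrm{Beta}(a + 20,\ b + 80),
\]
so that with a uniform prior ($a = b = 1$) the posterior mean is $21/102 \approx 0.21$, numerically close to the frequentist estimate but interpreted as updated subjective belief about $p$ rather than as an estimate of a fixed unknown.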