(*The polls were each taken the day before the election by different organizations, and the election results were used as the standard reference.)
By analogy, accuracy is a statement of how far the shot is from the bull's eye. Precision is a statement of how closely clustered repeated shots are (or might be). That "margin of error" means that if the poll were repeated under identical conditions (same questions, same questioners, etc.) but on different samples drawn randomly from the same population, about 95% of the repeated polls would fall within ±3 percentage points of the grand mean of all possible polls. Quite obviously, one may have a tight cluster of shots, all of which are high and to the left (as in the lower left of the drawing). That is, a precise estimate that is biased.
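That "repeated polls" interpretation is easy to check by simulation. The sketch below is a hypothetical illustration, not any actual poll: the true level of support (50%) and the sample size are assumptions, with n ≈ 1067 chosen because 1.96·√(0.25/1067) ≈ 0.03, the familiar ±3-point margin.

```python
import math
import random

random.seed(42)
p_true = 0.50   # assumed true support in the population
n = 1067        # respondents per poll; gives the usual +/-3-point margin
margin = 1.96 * math.sqrt(p_true * (1 - p_true) / n)

def one_poll():
    """Draw n random respondents and return the observed proportion."""
    hits = sum(random.random() < p_true for _ in range(n))
    return hits / n

# Repeat the poll many times on fresh random samples.
polls = [one_poll() for _ in range(2000)]
within = sum(abs(est - p_true) <= margin for est in polls) / len(polls)
print(round(margin, 3))   # about 0.030, i.e. +/-3 points
print(within)             # close to 0.95
```

About 95% of the simulated polls land within the margin of error of the true value, which is all that "±3%, 19 times out of 20" ever promised.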
IN STATISTICS, bias retains its original meaning of being simply "slant, or oblique," in this case from the unknown (and often unknowable) true value, as the shot cluster in the lower left panel is "oblique" from the bull's eye. It no more implies deliberate nefariousness or cheating than the statistical term error implies a mistake or blunder. The only way to assess bias is against a True Value (aka Reference Value): for example, a gage block or other working standard traceable to master standards kept at NIST. In the case of a political poll, we can take the actual election as the True Value, though we must keep in mind Dewey v. Truman: the sentiments of the electorate may have shifted in the time between when the poll was taken and when the election was held.*
BUT THERE IS ANOTHER PITFALL to be wary of when hearing of precision. Precision of what?
In a recent exchange of messages on the topic of
INDEED. Let's see how that works. We will take a very simple model: estimating the density of coal in a bunker by measuring the radiation backscatter of the coal. Since different veins of coal may differ in their radioactivity, the relationship of the surrogate measure (backscatter) to the desired measure (density) must be calibrated for each mine from which shipments come. This is done by taking a known amount (weight) of coal from the coal pile following ASTM methods and packing it into a calibration tube to various volumes, hence to various densities. Since, for the sample, both the density and the backscatter are known, a relationship can be determined by the usual linear regression methods. (The range of likely densities is such that non-linear effects should not appear.) The technician may then proceed to the bunker, obtain backscatter readings from this great big honking pile of coal, and, using the surrogate and the calibration curve, predict the density of the coal in the bunker.
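The calibration step can be sketched in a few lines. The data here are synthetic stand-ins (the density range 55–75, the noise level, and the random seed are all assumptions); the "true" line they are built around uses the coefficients from the fit reported later in the text.

```python
import random

random.seed(1)
# Synthetic calibration samples: packed to known densities, backscatter
# generated around an assumed true line with measurement noise (sd = 110).
densities = [55 + i for i in range(21)]
backscatter = [9946 - 59.17 * x + random.gauss(0, 110) for x in densities]

# Ordinary least squares, closed form: slope = Sxy / Sxx.
n = len(densities)
xbar = sum(densities) / n
ybar = sum(backscatter) / n
sxx = sum((x - xbar) ** 2 for x in densities)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(densities, backscatter))
slope = sxy / sxx
intercept = ybar - slope * xbar

def predict(x):
    """The calibration curve: expected backscatter at density x."""
    return intercept + slope * x
```

The fitted slope and intercept land near the values used to generate the data, which is the whole point of the calibration exercise.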
What we see here is the scatter diagram with the fitted model. (Now you know why I'm using a model with one X and one Y.)
You will notice that the model has an R^2 of 91.6%. This means that 91.6% of the variation in backscatter is explained by the relationship to coal density. That's pretty good. The relationship is Y=9946-59.17X. Since this implies a backscatter of 9946 when the coal has density 0, we realize that there may be boundary effects, regime changes, or the like. But so long as we are far from the boundaries, this will do, fapp.
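That "explained variation" reading of R² follows directly from its definition, R² = 1 − SSE/SST. A quick synthetic check (the data and noise level are assumptions, chosen so the fit lands in the same neighborhood as the article's 91.6%):

```python
import random

random.seed(2)
# Simulated calibration data around the quoted line Y = 9946 - 59.17*X.
xs = [55 + i for i in range(21)]
ys = [9946 - 59.17 * x + random.gauss(0, 110) for x in xs]

# Fit by ordinary least squares.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar

# R^2 = 1 - SSE/SST: the share of the variation in Y explained by X.
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
sst = sum((y - ybar) ** 2 for y in ys)
r_squared = 1 - sse / sst
```

The residual sum of squares (SSE) is what the line fails to explain; the total sum of squares (SST) is all the variation there was to explain; R² is the ratio of the difference to the whole.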
But all estimates are estimates. That is, they all come with wiggle. The mean of a sample has a standard error of SE = σ/√n, for example. Hence, the two parameters of the model, the slope and the intercept, are also subject to sampling variation. Fortunately, these can be estimated:
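The wiggle in the slope and intercept can be estimated with the standard OLS formulas: SE(slope) = s/√Sxx and SE(intercept) = s·√(1/n + x̄²/Sxx), where s is the residual standard deviation. Continuing with synthetic data (same assumptions as before):

```python
import math
import random

random.seed(3)
# Simulated calibration data around the quoted line Y = 9946 - 59.17*X.
xs = [55 + i for i in range(21)]
ys = [9946 - 59.17 * x + random.gauss(0, 110) for x in xs]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar

# Residual standard deviation: the scatter of the data about the line.
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
s = math.sqrt(sse / (n - 2))

# Standard errors of the two parameters.
se_slope = s / math.sqrt(sxx)
se_intercept = s * math.sqrt(1 / n + xbar ** 2 / sxx)
```

Notice that the intercept's standard error is much larger than the slope's here: the intercept is an extrapolation all the way back to density 0, far outside the data.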
The red dashed lines show the confidence interval within which repeated regression lines would fall 95% of the time, were the sampling done repeatedly. Imagine the regression line moving up and down (change of intercept) or rotating like a propeller (change of slope) within the boundaries of the dashed lines. (Notice also that the confidence interval is wider for the extreme values than for the central values.) If the coal density were 68, the regression line tells us to expect a backscatter of 5922. Well and good. But this is an expected value; that is, a mean. The confidence interval tells us that this estimate is good to a precision of ±60. That is, the "true mean" (assuming no bias) lies between 5862 and 5982.
BUT THIS IS A BOUND ON THE ESTIMATE OF THE MEAN, A PARAMETER. It does not tell us the limits on the actual backscatter readings we may expect, as you can see in the next graph:
Actual backscatters spread out beyond the confidence limits, because those are limits on the mean value of Y at each X. To understand the variation of the actual data, we need the prediction limits:
These are the outer, light blue dotted lines on the chart. We would expect 95% of the data to fall within these limits by random chance. That is, 5% of the data may fall outside the limits for "no particular reason." Data may also fall outside the limits due to assignable causes; that is "for a particular reason." (Regular readers of this blog will recollect that there is no point hunting for the specific cause of a random fluctuation, even if the random fluctuation is undesirable. This is because there is no particular cause for a random fluctuation; rather, there are many causes.)
Here we see the distinction between the confidence interval and the prediction interval. If the coal density were really 68, the expected value of the backscatter would be somewhere between 5862 and 5982, or ±60; but the actual measured value the technician gets with his ray gun will fall between 5712 and 6132, that is, ±210.
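The distinction comes down to a single extra "1" under the square root. At a given density x₀, the confidence half-width is t·s·√(1/n + (x₀−x̄)²/Sxx), while the prediction half-width is t·s·√(1 + 1/n + (x₀−x̄)²/Sxx): the prediction interval must cover the scatter of an individual new reading as well as the uncertainty in the mean. A sketch, again on synthetic data (t = 2.093 is the 97.5th percentile of Student's t with n − 2 = 19 degrees of freedom):

```python
import math
import random

random.seed(4)
# Simulated calibration data around the quoted line Y = 9946 - 59.17*X.
xs = [55 + i for i in range(21)]
ys = [9946 - 59.17 * x + random.gauss(0, 110) for x in xs]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
s = math.sqrt(sse / (n - 2))

x0 = 68                 # the density of interest
t = 2.093               # t(0.975, df=19)
leverage = 1 / n + (x0 - xbar) ** 2 / sxx
ci_half = t * s * math.sqrt(leverage)        # bound on the MEAN of Y at x0
pi_half = t * s * math.sqrt(1 + leverage)    # bound on a NEW reading at x0
y_hat = intercept + slope * x0
```

The extra "1" dominates: here the prediction interval is roughly four times as wide as the confidence interval, and it always exceeds it, whatever the data.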
AND NOW YOU SEE THE GREAT TRAP. Confidence bands are always narrower than prediction bands. The scientist may emphasize the precision of the parameter estimate and the media reporter may hear what he thinks is a high precision for the model for the forecasted values of Y. Dishonest scientists may do this deliberately to pretend to a greater precision than they have achieved. And to be charitable, the scientist may be more savvy in physics (vel al.) than in statistics. We know this is true of social "scientists."