
Sunday, May 25, 2014

America's Next Top Model -- Part V

Now, where is that equation hiding?
Now that we have a lovely bunch of Xs, we need to knit them into equations, and thus introduce entire new species of uncertainty. This is done in two basic ways:
  1. pulling the equation out of our vast fund of scientificalistic knowledge.
  2. pulling the equation out of the back pocket of our jeans, or somewhere close by.
In the continuum of knowledge, the most certain knowledge is mathematical, for once the reasoning is grasped, the conclusions follow with a determinism that even the most determined materialist cannot match. Physics being so highly mathematized -- to the extent that some physicists have fallen prey to idealism and regard the mathematics as the "real" world and the empirical world as something of a nuisance -- physical knowledge is sometimes also regarded as deterministic. This was Descartes' explicit program: by mathematizing physics he hoped to prove physical truths by demonstrating the corresponding mathematical truths.(*) But the physical world never quite conforms to the mathematical model (especially at the margins): that a model has a term in it does not obligate the natural world to go along with the gag. (Can we say "epicycle"?)
(*) Descartes' program. Mathematicians have since found that they can't even prove everything in mathematics... Damn!

As matters become more complex, the methods of science devised for the physics of motion become less and less appropriate, and one finds fewer equations and "laws" and more correlations and statistics. Hence, while modeling in physics and chemistry is often grounded in firmly established laws expressed in the privileged language of mathematics, there is a sharp drop-off in such equations as we cross into biology and social science. Indeed, one often says social 'science' with scare quotes to indicate how faint is the shadow cast by Galileo and Newton over sociology and similar species of voodoo.

The Fifth Uncertainty: Selecting the Transfer Functions

When dealing with the hard sciences, the likely form of the transfer function can oft be derived from scientific principles. For example, the stack height of an assembly is obviously the sum of the heights of the stack elements. Duh? The press force required to insert a rod into a pulley is:
Y = f(X) = Z0X² 
where X is the relative interference: (ODshaft – IDpulley)/ODshaft and Z0 is the joint material stiffness coefficient. (There's a lot of that constant-times-variable-squared thingie going around in physics.)
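The press-fit relation above is simple enough to compute directly. A minimal sketch, with a made-up stiffness coefficient and hypothetical diameters (the numbers are for illustration only, not from any real assembly):

```python
# Press force per the form Y = Z0 * X**2 given in the text.
# Z0 and the diameters below are hypothetical illustration values.

def press_force(od_shaft, id_pulley, z0):
    """Return press force from relative interference X = (OD - ID) / OD."""
    x = (od_shaft - id_pulley) / od_shaft  # relative interference (dimensionless)
    return z0 * x ** 2

# e.g. a 25.00 mm shaft into a 24.95 mm bore with an assumed Z0 of 2.0e6
force = press_force(25.00, 24.95, 2.0e6)
print(round(force, 1))  # prints 8.0 for these assumed numbers
```

Note that because X is squared, a small error in measuring the diameters produces a disproportionate error in the predicted force.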

Other transfer functions are more complex; e.g., surface tension of a liquid as a function of its temperature; burst stress of a spinning disc (σa) as a function of density, rotational speed, and radius.

However, these equations are usually continuous functions and the World has the nasty habit of going digital. Math can rush in where physical things fear to tread. There may be boundary conditions not incorporated into the equations.  When X crosses a boundary, the relationship Y=f(X) may cease to hold. You cannot predict from measurements of successive speeds and positions the brick wall toward which the vehicle is headed.

When dealing with "soft science" systems, there are no established physical equations, and the uncertainty inherent in the choice of function is magnified several times over. Often, the relations are determined heuristically by running regressions over a great many possible Xs and selecting those that seem to have important correlations. The dangers of this are evident: the function may reflect only temporary and local correlations rather than real causal connections.

TOF tells you three times: correlation is a mathematical artifact; causation comes from the physics.(*)
(*) physics broadly understood as the acts of anything physical, even if it talks back.

An example of how consideration of principles can help arrange the selected variables into coherent equations is the diffusion of a human population across a landscape, which was modeled by Robert Rosen in "Morphogenesis in Biological and Social Systems."

Example 1: Distribution of a population over a landscape.  Each point (x,y) in the landscape has some affinity for humans due to soil, water, or other resources. This is modeled by a scalar field ρ=ρ(x,y). Given this alone, the population would coalesce around the most attractive site as if sucked into a black hole.

However, humans tend to shun crowded places for wide open spaces wherever we can. So the resulting population density (a = a(x,y)) is itself a repulsive "force." Given density alone, human populations would spread out evenly over the landscape in a sort of cultural heat death. 

The reaction-diffusion equation: the bracketed terms are the reaction; the second part is the spatial diffusion.
Combining the two, we find the population is attracted into loci by their affinities and pushed outward by the resulting densities. This is analogous to the way a star balances between the outward pressure of radiation and the gravitational imperative to collapse forever. It also describes the way a chemical compound diffuses through a reactor vessel.

The star's driver is nuclear fusion; the chemical driver is the reaction. For population diffusion, the driver is the proliferation of people. So a reaction term specifies how resources stimulate -- f(ρ) -- and inhibit -- g(ρ) -- proliferation. (∇² is the Laplacian operator: the divergence of the gradient of the scalar fields for density and affinity.)
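A minimal one-dimensional sketch of this balance between attraction and diffusion. The specific forms assumed here (stimulation f(ρ) = r·ρ, inhibition proportional to the density a itself) and every parameter value are illustration choices, not Rosen's actual functions:

```python
# 1-D reaction-diffusion toy: growth driven by a local affinity field rho,
# crowding inhibition, and spatial diffusion of the population density a.
# The forms f(rho) = r*rho and g = c*a are assumptions for illustration.
import numpy as np

nx, steps = 50, 5000
dt, D, r, c = 0.01, 1e-4, 1.0, 0.5
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
rho = np.exp(-((x - 0.5) ** 2) / 0.02)   # affinity field, peaked mid-landscape
a = np.full(nx, 0.1)                      # initial uniform population density

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (a[:-2] - 2 * a[1:-1] + a[2:]) / dx ** 2   # discrete Laplacian
    a += dt * ((r * rho - c * a) * a + D * lap)            # reaction + diffusion

print(f"peak density {a.max():.2f} at x = {x[a.argmax()]:.2f}")
```

Run it and the population piles up where the affinity is high, while diffusion keeps the peak from collapsing to a point: the star-versus-gravity balance in miniature.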
Gravity models.  In a similar fashion, we can calculate something we might call cultural "gravity." Settlements exert an attraction on the surrounding countryside that diminishes with the square of the distance (because the area to be influenced increases with the square of the distance). Alden ("Reconstruction of Toltec period political units in the Valley of Mexico") used the area covered by the ruins of Aztec-era centers as a surrogate for their "mass," and took their known state boundaries to indicate where a center's influence had become less than that of neighboring centers. He calculated the exponent to be the 1.9 power of the distance. Close enough to the inverse square law for horseshoes. 
The boundary between two colonies
reflecting the attraction of Philly/NYC
TOF once did something similar for New York and Philadelphia, using metro area population as a surrogate for "mass." The "gravitational" potentials of the two cities were equal along a diagonal line just northeast of Trenton. Anyone who has spent time in the Garden State knows where Philly fans outnumber Yankee fans and vice versa.

In a fully-developed "gravity" model, the distance between centers would be measured by the time and energy needed to travel between them.

The Sixth Uncertainty: Unprincipled Data

But sometimes there are no principles like reaction-diffusion or gravity from which to derive likely equations. In such cases, we have to suck transfer functions directly from data. This runs the risk of the data being local and particular rather than general and universal.

Example 2: Coups d'Etat in Sub-Saharan Africa. It should be obvious that not all relations are linear -- although soft science types like to follow the linear lemmings right off the cliff. However, it is also true that a linear sum of Xs often provides a reasonable estimate of Y within the range of interest. That is, the Earth is flat for sufficiently short distances. You only get into trouble when you go global.

To estimate the parameters mentioned in the previous post of this series, Jackman fit the following model:
Coups = b0 + b1M + b2C - b3D - b4P + b5DP + b6CD + b7CP + b8CDP + ε
to actual coup indices from 1960 (or year of independence) to 1975, obtaining an R² = 0.84. He concludes:
This paper specifies and estimates a model of the structural determinants of coups d'etat for the new states of black Africa in the years from 1960 through 1975. Results indicate that (1) both social mobilization and the presence of a dominant ethnic group are destabilizing (these effects are additive); (2) multipartyism is destabilizing while electoral turnout in the last election before independence is stabilizing; (3) multipartyism is particularly destabilizing where a dominant ethnic group exists; (4) the presence of such a group reduces (but does not eliminate) the stabilizing effect of turnout; and (5) multipartyism has no pronounced effect on elite instability where turnout is high. Taken together, these patterns account for over four-fifths of the variance in coups d'etat in black Africa in the period.
Jackman was not trying to predict coups, but was trying to understand why some countries were fairly stable while others were Coups-R-Us. This goes back to the context of the model: what the model is intended to do. Models ought not be over-interpreted beyond that. If multipartyism really is destabilizing, it is not because of the statistical correlation, but because of real physical factors that must be sought and verified. For all we know, both coup-proneness and multipartyism are responses to deeper lurking variables.
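The mechanics of fitting such a model are ordinary least squares over a design matrix with main effects and interaction columns. A sketch on synthetic data (the coup data themselves are not reproduced here, and only a subset of Jackman's interaction terms is included):

```python
# Jackman-style linear model with an interaction term, fit by ordinary
# least squares on synthetic data invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 30
M, C, D, P = (rng.random(n) for _ in range(4))  # mobilization, dominance, multipartyism, turnout

# toy response generated with a known structure, including a D*P interaction
y = 1.0 + 2.0 * M + 1.5 * C - 1.0 * D - 0.5 * P + 0.8 * D * P \
    + rng.normal(scale=0.1, size=n)

# design matrix: intercept, main effects, and the D*P interaction column
A = np.column_stack([np.ones(n), M, C, D, P, D * P])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ b
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.3f}")
```

On data generated to fit the model, R² is of course high; the uncertainty Jackman faced is precisely that real data carry no such guarantee that the chosen functional form is the right one.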
Example 3. Social Change. Hamblin, et al. decided that ideas circulate in a culture in much the same way as diseases:
  • Type I: Ideas are learned from "knowers" (≈ contagious diseases)
  • Type II: Ideas are learned from a common source (≈ environmental diseases)
The former can be characterized by a differential equation: dx/dt = kx(1-x), where x is the proportion of knowers. This integrates out to the well-known logistic equation. The second type, where the idea is contracted from a common source, such as the Internet, a favored radio personality, or a contaminated well, follows a decaying exponential equation. This parallels similar models in epidemiology and ecology (Hamblin, 1973; Pielou, 1969).
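The two diffusion types can be sketched side by side. The closed-form logistic below is the standard solution of dx/dt = kx(1-x); the parameter values are arbitrary choices for illustration:

```python
# Type I (knower-to-knower): dx/dt = k x (1 - x), whose solution is the
# logistic curve. Type II (common source): saturation at a decaying rate.
import math

def logistic(t, k, x0):
    """Closed-form solution of dx/dt = k x (1 - x) with x(0) = x0."""
    return x0 * math.exp(k * t) / (1 - x0 + x0 * math.exp(k * t))

def common_source(t, k):
    """Type II: fraction of knowers approaches 1 as 1 - exp(-k t)."""
    return 1 - math.exp(-k * t)

# check the logistic formula against a crude Euler integration of the ODE
k, x0, dt = 1.0, 0.01, 0.001
x = x0
for _ in range(int(10 / dt)):
    x += dt * k * x * (1 - x)
print(abs(x - logistic(10, k, x0)))   # small: numeric and analytic agree
```

Type I starts slowly, accelerates as knowers multiply, then saturates (the S-curve); Type II jumps fastest at the start and levels off, since the common source reaches everyone at once.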
Similar kinds of models relating to interaction of spatially isolated groups, development of aggressiveness, etc. can be found in (Rashevsky, 1968 and Renfrew, 1979).

But it should be clear by now that the model gains uncertainty through the choice of equations for the model, and that this uncertainty cannot be expressed in a simple precision statement like "±ε."

The Seventh Uncertainty: Data versus Data

The uncertainty of deriving the transfer equations from the data -- which must be done where coefficients are unknown from mathematical or scientific knowledge or from prior relevant experience -- is that the data themselves are uncertain. If we are to build our model on the data, we must not be like the fellow who built his house upon the sand (Matt 7: 26-27).

Uncertainty in the data comes in three flavors:
  1. What data do we want to measure?
  2. How do we make the measurement?
  3. How do we collect a sample of measurements?
Cans got ears. How much earing is in the aluminum material received?
Surrogate Measurehood.  We dealt with #1 in the previous post. But it is not enough to say you want to include X in your model; you must specify how X is to be measured (#2). If the model includes skinned cats, keep in mind there is more than one way to skin them, and different methods produce different results. The degree of earing on a deep-drawn aluminum cup (the first step in making a can) is found by subtracting the height of the cup at the bottoms (min) of the ears from the height at the tops (max) and expressing this as a proportion of the cup height. The customer divided this difference by the minimum while the supplier divided by the maximum. Both called their resulting measurement "% earing," but the difference between (M-m)/m and (M-m)/M could be considerable!
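The two formulas applied to the same hypothetical cup (heights invented for illustration) make the discrepancy concrete:

```python
# The two "% earing" definitions from the text: same cup, different denominators.
def earing_customer(h_max, h_min):
    return (h_max - h_min) / h_min   # customer: divide by the minimum

def earing_supplier(h_max, h_min):
    return (h_max - h_min) / h_max   # supplier: divide by the maximum

# hypothetical cup heights (mm)
M, m = 52.0, 48.0
print(f"customer: {earing_customer(M, m):.1%}  supplier: {earing_supplier(M, m):.1%}")
```

For this cup the customer reads about 8.3% while the supplier reads about 7.7%, a difference large enough to put the same material on opposite sides of a specification limit.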

Sometimes the difference in the measurement method (and hence, in the thing measured) is obvious, as in the case of overflow capacity. Sometimes it is more subtle, as in the case of Wally's Gauge. But such differences may lurk undetected in one's data, adding yet another non-quantified uncertainty to the model.

Sometimes it is necessary to measure something other than that in which you are interested. Let's say you want to include tensile strength in your model, but you are working with large structural pieces or ingots or something, so you measure Rockwell hardness instead. Similarly, viscosity may stand in for "degree of polymerization" or tree ring width for air temperature. Earlier, we have seen a case where radiation backscatter stood in for density of coal at a power plant. This will be the case whenever the desired measurement is expensive or time consuming, but a surrogate is fast or cheap. But this requires that the correlation between the surrogate and the desired measurement be known and (more importantly) understood. At long last we have an uncertainty that can be quantified. But because we have just added an additional transfer function to the model -- the transfer from the actual measurement to the desired measurement -- the model has just become that much more uncertain.
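The surrogate's extra transfer function can itself be sketched as a calibration line fit on paired reference measurements. All the numbers below are invented for illustration; a real calibration needs real paired tests:

```python
# Surrogate measurement: predict tensile strength from hardness via a
# calibration line fit on paired reference data (all values hypothetical).
import numpy as np

hardness = np.array([55., 60., 65., 70., 75., 80.])          # surrogate readings
tensile  = np.array([550., 610., 680., 740., 800., 870.])    # desired quantity

slope, intercept = np.polyfit(hardness, tensile, 1)
predicted = slope * hardness + intercept
calib_rmse = np.sqrt(np.mean((tensile - predicted) ** 2))

def tensile_from_hardness(h):
    """The added transfer function: surrogate reading -> desired measurement."""
    return slope * h + intercept

print(f"calibration scatter (RMSE) ~ {calib_rmse:.1f}")
```

The calibration scatter rides along with every surrogate reading thereafter, which is exactly the extra uncertainty the text warns about.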

Two ways to measure temps.
A. old Stevenson Screen
B. new thermocouple
Instrumental Practice. If we want to include the diameter of a rod in the model, what diameter should we take? The rod has infinitely many diameters down its length and around its circumference -- and may be tapered or imperfectly cylindrical. So where on the part will we measure the diameter? What kind of instrument will we use to measure it? (Calipers? Micrometer? Laser?) Should we take more than one measurement on the same part? How many? Where? Will we use the mean? The median? The mode? The extreme values? It is no simple thing to measure the diameter of a rod, let alone the temperature of a world.

Monthly reported temperatures at the National Observatory, Athens, Greece (Steirou and Koutsoyiannis, 2012)
The importance of the instrument is seen in the history of air temperature readings at the National Observatory at Athens, Greece (Steirou/Koutsoyiannis, 2012). When a new thermometer was installed in June 1995, the temperatures increased. Global warming? When the new thermometer was calibrated a year and a half later (yes, sic!) the temperatures dropped below what they had been previously. Behold! Global cooling! The foo-foo with the instrument caused changes in reported temperatures greater than usually attributed to AGW.

But the stated uncertainty of the instrument itself cannot account for differences among instruments. Unless such issues are dealt with by the modeler, important signals may be masked -- or artificially imposed!

The Eighth Uncertainty: Accuracy and Precision 

The method used to make the measurement will have an inherent accuracy and precision. Elements include:
  1. Resolution: The least increment the measurement system can detect. If resolution is too coarse, minor fluctuations will seem like quantum leaps because an "active" decimal place is blank.
  2. Accuracy: The difference between the mean of repeated measurements and the "true" value of a standard part. The difference between the mean and the reference value is called the bias. "Centered around the bull's eye."
  3. Precision: The variation among repeated measurements of the same part, not necessarily a standard. "Tight grouping of shots."
  4. Linearity: The difference in bias across the range of use of the measurement system. E.g., there may be greater bias at the extreme values than at the central values.
  5. Stability: The change in bias over time. There is no such thing as a measurement if the measurement system is not in a state of statistical control.
  6. Reproducibility: The variation among results obtained by different people using the same instrument on the same part.
Example: Three QC inspectors were given the same micrometer and told to measure the wall thickness of an indicated location on an aluminum can. Two inspectors obtained similar results, but the third, Barb, obtained smaller values. Barb thought the micrometer was a C-clamp and screwed the barrel real tight to get a "good" reading. But aluminum is compressible and she "pinched" the aluminum in the act of measuring it. (Similar, though more widespread, were efforts at a medical device manufacturer to measure the outside diameter of flexible plastic tubing using calipers.)
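A toy gauge study in the spirit of the inspector example: reproducibility shows up as spread among the inspectors' means, over and above each inspector's own repeatability. All readings below are invented to mirror the story:

```python
# Toy gauge study: three inspectors measure the same can wall repeatedly.
# Readings are invented; "Barb" over-tightens and so reads low.
import statistics as st

readings = {
    "Ann":  [4.41, 4.40, 4.42, 4.41, 4.40],
    "Bob":  [4.40, 4.41, 4.41, 4.42, 4.40],
    "Barb": [4.31, 4.30, 4.32, 4.30, 4.31],  # pinches the aluminum
}

means = {name: st.mean(vals) for name, vals in readings.items()}
repeatability = {name: st.stdev(vals) for name, vals in readings.items()}
reproducibility = st.stdev(means.values())  # spread among inspector means

for name in readings:
    print(f"{name}: mean {means[name]:.3f}, within-inspector sd {repeatability[name]:.4f}")
print(f"between-inspector sd (reproducibility): {reproducibility:.4f}")
```

Each inspector is individually precise, yet the between-inspector spread dwarfs the within-inspector spread: the gauge is "repeatable" but not "reproducible," and no single inspector's data would reveal it.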

Two judges tasting the same 50 brews.
Yes, they really were Judges A and W.  TOF has
derived much amusement from this coincidence
ever since. And it wasn't even root beer!
Example: Deming tells of a panel of three traffic experts evaluating waybills for the Rock Island Line in a sample study to determine the likely effect of a merger between two rivals (Rosander, 1977). The traffic experts were to determine for a sample of bills whether a lot would have been shipped over the rival line had the merger been in effect at the time. To ensure that the instruments (the traffic experts!) were calibrated, all three experts blindly worked the same subset of the larger sample, and the results of the three judges were compared for this subsample. They were sufficiently in agreement that the results of the three could then be regarded as a single sample for analysis.

OTOH, TOF once encountered taste test data on the same fifty brews of a beer in which the "hedonic rating" given to a brew was more indicative of the judge than the brew. Judge A generally rated brews five points higher than Judge W and pretty much gave every brew a "95." In the absence of a reference value, or at least a Master Taster, one cannot determine which judge (or both!) was biased.

Suggested Reading

  1. Alden, J.R. "Reconstruction of Toltec period political units in the Valley of Mexico," in Transformations: Mathematical Approaches to Culture Change, pp. 169-200. (Academic Press, 1979)
  2. Box, George E.P., William G. Hunter, J. Stuart Hunter. Statistics for Experimenters, Pt.IV “Building Models and Using Them.” (John Wiley and Sons, 1978)
  3. Carroll, Sean. "Physics and the Immortality of the Soul," Scientific American guest blog (May 23, 2011) 
  4. Curry, Judith and Peter Webster. “Climate Science and the Uncertainty Monster”  Bull. Am. Met. Soc., V. 92, Issue 12 (December 2011)
  5. El-Haik, Basem and Kai Yang. "The components of complexity in engineering design," IIE Transactions (1999) 31, 925-934
  6. Hamblin, Robert L., R. Brooke Jacobsen, Jerry L.L. Miller. A Mathematical Theory of Social Change, (John Wiley and Sons, 1973). [link is to a journal article preceding the book]
  7. Jackman, Robert W. "The Predictability of Coups d'Etat: A Model with African Data," Am. Pol. Sci. Rev. (72) 4 (Dec. 1978)
  8. Joiner, Brian. "Lurking Variables: Some Examples," The American Statistician, Vol. 35, No. 4. (Nov 1981), pp. 227-233.
  9. Kuttler, Prof. Dr. Christina. "Reaction-Diffusion equations with applications." Technische Universität München, Lehrstuhl für Mathematische Modellierung, Sommersemester 2011
  10. Petersen, Arthur Caesar. "Simulating Nature" (dissertation, Vrije Universiteit, 2006)
  11. Pielou, E.C. An Introduction to Mathematical Ecology, (Wiley-Interscience, 1969)
  12. Rashevsky, N. Looking at History Through Mathematics, (MIT Press, 1968)
  13. Renfrew, Colin and Kenneth Cooke (eds.) Transformations: Mathematical Approaches to Culture Change, (Academic Press, 1979)
  14. Rosander, A.C. Case Studies in Sample Design, (Marcel Dekker, 1977)
  15. Rosen, Robert. "Morphogenesis in Biological and Social Systems," in Transformations: Mathematical Approaches to Culture Change, pp. 91-111. (Academic Press, 1979)
  16. Steirou, E. and D. Koutsoyiannis. "Investigation of methods for hydroclimatic data homogenization," European Geosciences Union General Assembly, Vienna 2012
  17. Turney, Jon. "A model world." aeon magazine (16 December 2013)
  18. Walker, W.E., et al. "Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support." Integrated Assessment (2003), Vol. 4, No. 1, pp. 5–17
  19. Weaver, Warren. "Science and Complexity," American Scientist, 36:536 (1948)
  20. Weinberger, David. "To Know, but Not Understand," The Atlantic (3 Jan 2012)
  21. Zeeman, E.C. "A geometrical model of ideologies," in Transformations: Mathematical Approaches to Culture Change, pp. 463-479. (Academic Press, 1979)
