Monday, January 27, 2014

Science Marches On!

Spanish researchers discover the first black hole orbiting a “spinning” star
-- headline, Astronomy Magazine (January 16, 2014)

Stephen Hawking: 'There are no black holes'
-- headline, Nature (24 January 2014)
+ + +
And speaking of black holes...

Among those many things you never thought would explode...

+ + +
It's not that it's a jelly-filled donut.  It's that it was not there a moment before.  TOF imagines a giggling gaggle of Martians lined up behind the rover tossing stuff out in front of it just for slaps and giggles.  Quick! Rotate the camera!  We might catch them!

+ + +
We had an information revolution, and information was overthrown:
I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers – in other words, between those of any achievement in an area and those with none at all. 
– Tom Nichols

which leads us to the all-time best description ever of the Internet:
...another meaningless piece of “information,” arguably searchable in the steaming electronic pile.  -- Dave Warren

+ + +

Not to be outdone, biology too marches on, though in a curious direction: "Scientific" American announces that biology has no subject matter.  To wit:
  1. It's real hard to define "life." 
  2. So there isn't any. 
Against such incisive logic one sits in stunned amaze. No wonder there is so little math in biology. You need logic for that. Instead, we are urged to replace "life" with "complexity," which as we all know is phat easy to define. (#Facepalm)

The notion is that when complexity (whatever that might be) reaches a high enough level, then *magically* life (whatever that might be) will "emerge."


+ + +
It is now clear that the majority – perhaps the vast majority – of neuroscience findings are as spurious as brain waves in a dead fish...
Why, one may ask?  Bad statistics on small sample sizes, or enthusiastic over-interpretation of a really-truly scientificalistic measuring machine.  It's not that the numbers are meaningless. As John Lukacs said in another context, it's that they might not mean what you think they do.
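
The dead-fish jab points at exactly this: run enough small-sample comparisons and some will clear the significance bar by luck alone. Below is a minimal Python sketch of that arithmetic; the number of comparisons, the sample size, and the 0.05 threshold are all illustrative assumptions, not figures from any study mentioned here.

    # Run many small-sample comparisons on pure noise and count how many
    # come out "statistically significant" anyway.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_tests = 1000       # number of independent comparisons (illustrative)
    n_per_group = 10     # small sample per group (illustrative)

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(size=n_per_group)   # pure noise: no real effect exists
        b = rng.normal(size=n_per_group)
        if ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

    # Roughly 5% of the comparisons (about 50 of 1000) pass p < 0.05 by chance,
    # so an analysis that scans many uncorrected comparisons will always "find" something.
    print(false_positives)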

+ + +
We are often told that science is "self-correcting."  Bogus or erroneous findings are discovered when the experiments or analyses are replicated. Problems arise when data are withheld or when the experiment requires a superconducting supercollider, which not many folks have tucked away in the campus lab. But the deeper problem is that journals don't like to publish duplications of experiments (under the assumption that they won't deliver anything new) and professors don't get to hold press conferences if all they've done is confirm someone else's findings. In the context of publish or perish, this comes down firmly in the perish column.  (Compare the Jesuits confirming Galileo's findings. They held a big freaking party.  In some ways, the 17th century was more advanced than we are.)

And when replication studies are done, more often than not the original results fail to replicate. This is especially acute in soft sciences like psychology, but it is also the case in pharmaceutical trials:
A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.

The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test — that the original results couldn't be reproduced because the findings were especially novel or described fresh therapeutic approaches.

But what they found was startling: Of the 53 landmark papers, only six could be proved valid.

"Even knowing the limitations of preclinical research," observed C. Glenn Begley, then Amgen's head of global cancer research, "this was a shocking result."

Unfortunately, it wasn't unique. A group at Bayer HealthCare in Germany similarly found that only 25% of published papers on which it was basing R&D projects could be validated, suggesting that projects in which the firm had sunk huge resources should be abandoned.
Some researchers have concluded that "most published results are false." Yes, there is a recursive irony in that announcement; yet so far it has been holding up. The publish or perish syndrome leads to a lot of published dreck based on inadequate studies, improperly analyzed and never followed up on by fellow scientists.  Peer review turns out to be a weak reed.
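
The arithmetic behind that conclusion is simple enough to sketch: if true effects are rare among the hypotheses being tested and the studies are underpowered, then most results that clear the significance bar are false positives. The three numbers below are illustrative assumptions, not measurements:

    # Back-of-envelope arithmetic behind "most published results are false"
    # (all three inputs are assumed, illustrative values).
    prior = 0.10   # assumed share of tested hypotheses that are actually true
    power = 0.35   # assumed chance an underpowered study detects a real effect
    alpha = 0.05   # conventional false-positive rate

    true_hits = prior * power          # real effects that come out "significant"
    false_hits = (1 - prior) * alpha   # null effects that come out "significant" anyway

    ppv = true_hits / (true_hits + false_hits)
    print(f"Share of 'significant' findings that are real: {ppv:.0%}")   # about 44%

Under those assumptions, the majority of "positive" findings are spurious before any withheld data, sloppy analysis, or failure to follow up even enters the picture.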

Apparently, when science needs to be corrected, it is the capitalistic business firm with real money at stake that does the corrections.  Even as recently as forty years ago, this was not so, and it is some comfort to know that physics remains more self-correcting than biology, medicine, or the social "sciences."  Perhaps it has to do with whether the object of the science has a mind of its own.

8 comments:

  1. "I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers – in other words, between those of any achievement in an area and those with none at all."

    So, um, where does that leave us enthusiastic amateurs?

  2. About twenty years ago, I found, together with my dissertation advisor and another scientist, that a published equation in a classic paper in Materials Science was wrong. This led to a paper of ours, "Correction in the Equation for the Third Order Kroener Bound on the Shear Modulus of a Polycrystalline Aggregate of Randomly Oriented Cubic Crystals," by N.D. Rosen et al., published in the International Journal of Engineering Science. ("Kroener" should really be "Kroner" with an o-umlaut.)

    I conjectured that the error had not been caught before because there's a recursion relation for getting from lower-order to higher-order bounds of the shear modulus, and people had probably plugged numbers into the recursion relation, rather than into the printed formula for the third-order bound.

  3. I can't argue against the factuality of Nichols' observations. Nevertheless I'm not 100% convinced that we should join him in lamenting the Wikipedia-led coup against the technocrats whose expertise dominates most areas of life.

    Yes, the tone of public conversation is coarsening. Sure, there's no shortage of armchair commenters who took their degrees in philosophy, physics, genetics, art criticism, etc. from Reddit. But perhaps the experts have earned our distrust. As writers like David Freedman warn, the experts' track record isn't exactly sterling.
    http://content.time.com/time/health/article/0,8599,1998644,00.html

    That said, I concur that people who've put in the time and effort to earn an advanced degree or master a craft deserve respect. But I don't think their word should be accepted uncritically.

    Replies
    1. The contribution of non-experts, it seems to me, is two-fold: to point out logical, philosophical and procedural problems with claims made by experts, and to approach problems with different sets of expertise. For my part, I tend to see claims that Science Has Shown within an historical and philosophical framework, which seems to me to often flush out assumptions and biases that might otherwise skate. Is it of any value to anyone for me to do this? Dunno. But it's fun - for me at least.

      Third, we non-experts are the sounding-board: Feynman once said that if he couldn't explain something in terms a 10 year old could understand, it meant he, Feynman, didn't really understand it. We enthusiastic amateurs fill the smart 10 year old role for science in general, and therefore have a right to opinions based on how well science has been presented.

      Our host is being his usual funny/ironic self by posting both the cowing assertion of Nichols AND the findings of Amgen in the same post. We can't trust either ourselves OR the experts! So, right when I'm feeling guilty for opining on stuff on which I am by no means an expert, Mr Flynn supplies us with examples of where the experts got it all wrong.

      Science history is, basically, the story of how the experts got it all wrong. We non-experts need to develop the intellectual chops to know where and how far we are obliged by reason and our love of truth to give our assent to the claims of experts. That seems like a pretty noble goal.

    2. C.S. Lewis said the same thing: if you can't explain it using words a 6 year old can understand, you probably don't understand it as well as you think you do. It's a good trick and good practice. More English teachers, and teachers in general, should recommend it to their students, especially the really bright ones.

    3. When the experts are wrong about matters within their own expertise, they are seldom stupidly wrong. More often we find that they are wrong because they have blithely wandered outside that expertise, and are opining on some other matter in which they are as butt ignorant as most of us. Sagan on history, Hawking on philosophy, and so on. Even within the bounds of a broader discipline they may be ignorant of other areas outside their specialty. P.Z.Myers on quantum theory suggests itself; but one may find an historian of modern Europe saying things that medievalists have long known to be false, or analytical philosophers making goofs when discussing scholastic philosophy. This was brought home to me when I was a master's candidate and I gave a talk at the Math Dept monthly symposium on my theorem (which concerned conjoining and splitting topologies on function spaces). Afterward, I was commended by an analyst and algebraist: "Great talk. Didn't understand a word of it."

      When a philosopher like the late David Stove criticizes Darwinian evolution, he does so on the basis of logic, not biology.

      But amateurs can make contributions. Both C.V.Wedgwood and Barbara Tuchman were amateur historians, but their work deserves a great deal of respect.

  4. Isn't the very basis of the strength of science the idea that it is never truly settled, that there is always the possibility of a better, more accurate explanation with even more predictive power? If so, it shouldn't surprise anyone that new findings are continuously replacing older discoveries. Except I suppose that in physics and the other hard sciences the "mountain of knowledge" seems to grow up and out as new pieces are added to existing pieces, while in the soft sciences new bits of data often seem to turn into new "mountains." Plus the softer sciences, especially sociology and psych, are occupied with matters of the heart and so find themselves in treacherous waters every time they "put in" for another journey to new discoveries.

  5. Jelly donuts? I'm envious. There's no Tim Horton's here in Baltimore, but they have one on Mars. It isn't fair!

