Archive for December, 2007
Charm etc. posted an interesting discussion on Information Entropy and Experiments. The bloggers describe an attempt to evaluate the worth of experiments on a statistical basis, comparing the results they produce against a priori expectations from theory. (See arXiv:0712.3572 from Bruce Knuteson.) They point out very perceptively that this procedure relies too much, if not entirely, on those expectations from theory. One might wonder where else we should get our expectations, but this is not the point – the most important discoveries are the ones for which there was no expectation. And after the discovery was made, theory was altered radically, therefore changing the priors. See Charm etc. for a succinct discussion.
I think there is another problem with the approach. How does one evaluate the accumulated value of crucial measurements of particle properties and interactions? How much does a factor two improvement in the W mass measurement increase the value of a Tevatron experiment? What about the Bs oscillation frequency? These measurements fit in the standard model – in fact, they are important if not crucial empirical inputs to model calculations, without which one could not move beyond the standard model. I don’t see how to set a value for such information in the sense of theoretical expectations; if we discard the standard model in the next ten years, and replace it with something much better, is the value of the W mass measurement diminished or enhanced? At what point does the W mass (for example) become mundane or less than crucial, in contrast to the present time? I suppose there will be a day when the W mass can be predicted sufficiently well, or when it ceases to provide any insights into new physics, and these developments might be reflected in the nature of the theoretical priors required by this evaluation of scientific merit, but I doubt this could be made clear or concrete.
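To make the question concrete, here is a minimal sketch of the kind of entropy bookkeeping such an evaluation might use — not Knuteson's actual procedure, and the numbers are purely illustrative, not real W mass uncertainties. For a Gaussian-distributed quantity, the differential entropy depends only on the width, so a factor-two improvement in the uncertainty always buys exactly one bit, regardless of what the measurement later turns out to mean for theory:

```python
import math

def gaussian_entropy_bits(sigma):
    """Differential entropy of a Gaussian of width sigma, in bits:
    0.5 * log2(2 * pi * e * sigma^2)."""
    return 0.5 * math.log2(2 * math.pi * math.e * sigma**2)

# Illustrative (made-up) uncertainties on a mass measurement, in GeV
sigma_before = 0.060
sigma_after = 0.030   # a factor-two improvement

gain = gaussian_entropy_bits(sigma_before) - gaussian_entropy_bits(sigma_after)
print(f"Information gain: {gain:.3f} bits")  # halving sigma gains exactly 1 bit
```

The sketch makes the difficulty plain: the one bit gained is the same whether the tighter W mass ends up confirming the standard model or overturning it, so the entropy accounting alone cannot capture the measurement's scientific worth.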
The bloggers at Charm etc. are also skeptical of this approach, interesting though it may be to talk or blog about it. But if someone takes it more seriously, then perhaps the next question would be: how do you place a quantitative value on individual, and improving, measurements, taking into account the possibility that some measurements are wrong?
Several bloggers have detailed the budget disaster of December 2007 (for example, Joanne Hewett, Peter Woit and Andrey Petrov), so I will not try to do the same. Furthermore, it is tempting to bemoan the demise of science in the United States, or to fret about the future of Fermilab. I don’t think doing so would help anything or anyone.
Instead, let me try to make the following point: More than ever, the success of the LHC program is crucial. If exciting and profound discoveries do come, the analyses behind them must be carried out with the utmost professionalism and care. We cannot afford to bungle any highly visible analysis, nor can we mire ourselves in excessive regulations and procedures for approval of new results. Senior physicists need to be engaged in the stuff of physics, not only in managing cadres of people. Young people may not have the experience to avoid naive mistakes; they should be educated and trained as well as encouraged to be innovative and iconoclastic. Only Nature can determine the sources of new physics at the LHC, and only we experimenters can raise the quality and depth of our work to levels never seen before. This may be the last chance to show the world that high-energy physics is worthwhile…
(A. A. Michelson and M. Faraday)
Slowly I am managing to return to the land of physics blogging, and much of what I see now is great!
One item which strikes me is a post at Charm etc. in which the author describes how a very nice 5-sigma pentaquark signal came and went! This may well serve as a cautionary tale for all of us at high-energy colliding beam experiments. If this peak had been the Higgs boson, the collaboration(s) might have been delighted to announce a discovery. (Some would argue that the original W boson and top quark discoveries were on shakier ground, statistically speaking.)
I think it is great that the CLAS Collaboration wrote an article comparing the results from their first set of data with those from a second set five times larger than the first. This plot sums up the situation:
The points represent the first data sample, on the basis of which a 5-sigma significance was claimed. The solid line represents the second data sample, more than five times larger than the first, which contradicts that claim. Both data samples were taken with the same apparatus under the same conditions, by the same people. The collaboration wrote a paper exploring the statistical issues involved (see arXiv:0709.3154). See that clear and concise article for details, or Charm etc.’s original peak-finding post.
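The vanishing peak is a reminder of how easily background fluctuations can masquerade as signal when many mass bins are scanned (the "look-elsewhere" effect). A toy Monte Carlo makes the point — this is my own illustrative sketch, with made-up bin counts, not anything from the CLAS analysis:

```python
import math
import random

rng = random.Random(20071231)

def poisson(lam):
    """Knuth's simple Poisson sampler (adequate for modest lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

N_BINS, MEAN_BG, N_EXPTS = 100, 25.0, 500

n_with_bump = 0
for _ in range(N_EXPTS):
    # Largest per-bin excess in a background-only spectrum,
    # in naive Gaussian sigmas: (n - b) / sqrt(b)
    zmax = max((poisson(MEAN_BG) - MEAN_BG) / math.sqrt(MEAN_BG)
               for _ in range(N_BINS))
    if zmax >= 3.0:
        n_with_bump += 1

print(f"{n_with_bump / N_EXPTS:.0%} of background-only pseudo-experiments "
      f"contain at least one bin above 3 sigma")
```

With 100 bins, a sizable fraction of pure-background spectra show a locally impressive bump somewhere, which is why the local significance of a peak found by scanning overstates the true evidence — and why a five-times-larger data set is such an effective arbiter.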
Despite recent recriminations against some people who like to discuss controversial results from collider experiments, I would like to return to this blog to share some of the excitement – and frustrations – of preparing for the first LHC data and pursuing advanced research at the Tevatron.