Archive for January, 2008

DOE cuts: bad P.R.?

You have probably seen the latest announcements from Dennis Kovar (DOE) about stopping BaBar and all work on NOvA and the ILC. See Alexey Petrov’s post, Tommaso Dorigo’s post and Gordon Watts’s post for some details and commentary.

If, however, you go to the DOE web site, you will find a happy announcement of President Bush’s visionary plan Twenty in Ten calling for a mandatory renewable fuel standard and new CAFE standards. Here’s the picture:

President Bush at the DOE

There is absolutely no trace that anything is wrong with the budget, or that sacrifices in basic science will now be made for the sake of other national priorities… (and I’ll keep my thoughts about that private, though I am sympathetic to Bee’s post from the beginning of the year.)

You might think that is simply the wrong place to look – one should check the Office of Science web site to find out about the impact of these budget cuts, and how the DOE and the Office of Science plan to adapt to the new reality. But there, again, there is not a single byte of information about the new budget. In fact, the central image is the front page of a “landmark publication” called Facilities for the Future of Science: A Twenty-Year Outlook (Nov. 2003). They should certainly take that down!

If you go to the DOE Press Release page, you will not find any information or announcements about the budget, though there is a call for nominations to the Enrico Fermi Prize. And the HEP page also contains no information.

Of course I do not expect to see dramatic announcements full of woe and gnashing of teeth, but I do expect official statements, full of facts and not p.r., for the benefit of people in the field and for the interested public (including the news media). Aren’t web sites like those intended for the responsible dissemination of information? That is, after all, why we invented the internet…

January 8, 2008 at 9:06 am 6 comments

More commentary on judging experiments by their surprise discovery level

No, the additional commentary is not from me – it is quite good! 😉 If you find this topic interesting, then you should read the post Scientific Bang for the Buck by Tommaso Dorigo, and the comments on that post. Also, there is an intriguing discussion on the Deep Thoughts and Silliness blog, called “A physicist stumbles into a statistical field.”

January 5, 2008 at 9:20 am Leave a comment

Good resolutions for a blogger

I have come across an excellent blog called nOnoscience, and the authors have posted a fine set of resolutions for a good blogger. I won’t reproduce the list here – you should go read their blog!  You will surely find several posts worth reading immediately…

January 3, 2008 at 7:15 am 2 comments

Judging a theoretical speculation by data

It strikes me that using theoretical priors to evaluate the importance of a discovery is, to a certain extent, putting the cart before the horse. Canonically, we would like to collect data impartially, and then confront our hypotheses with those data. Ideally, we would place all hypotheses on an equal footing (which is one reason why null-hypothesis tests are bad), and let the data tell us which ones to discard.
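To make that last point concrete (this is purely my own minimal sketch, with invented numbers, not anything from the posts discussed here): comparing two hypotheses by their likelihoods given the data treats them symmetrically, whereas a null-hypothesis test singles one of them out in advance.

```python
import math

def gaussian_log_likelihood(data, mean, sigma):
    """Log-likelihood of i.i.d. Gaussian measurements under a hypothesized mean."""
    return sum(-0.5 * ((x - mean) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

# Toy data: five measurements with unit uncertainty (invented numbers).
data = [0.8, 1.2, 0.9, 1.1, 1.0]

# Two hypotheses on an equal footing: true mean 0 versus true mean 1.
logL_h0 = gaussian_log_likelihood(data, mean=0.0, sigma=1.0)
logL_h1 = gaussian_log_likelihood(data, mean=1.0, sigma=1.0)

# The data favor whichever hypothesis has the larger likelihood;
# a positive difference here means the data favor mean = 1.
print(logL_h1 - logL_h0)
```

Neither hypothesis is privileged as the "null" – the data alone decide which one to keep.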

We all know that HEP does not work that way, not quite. First, there are so many facts that we need to agglomerate them in some quantitative way. The chi-squared or likelihood tests based on precision electroweak measurements (see the “blue-band” plot below) or the constraints on the unitarity of the CKM matrix from key measurements in the flavor sector are good examples of this. Furthermore, our hypotheses are really models and theories, some more speculative, some less, which are already based on a chain of reasoning and experimental results, and rarely can be rejected with just a few more data. (There are exceptions, though – the non-observation of a SM-like Higgs boson with a mass below 150 GeV would be very bad for minimal low-energy supersymmetry.) So there is a huge difference between overthrowing the standard model, and ruling out the theory of universal extra dimensions (attractive though it may be). We expect one to (continue to) succeed, and the other we hope to succeed, but expect to fail. As a community, we have our “priors” as to which theory is more likely to be correct.
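The chi-squared idea in miniature (a sketch with invented numbers – not the actual electroweak fit, which involves correlated observables and free parameters):

```python
# Minimal chi-squared combination: several independent measurements of
# the same quantity, each with its own uncertainty, confronted with a
# single model prediction. All numbers are invented for illustration.
measurements = [(80.40, 0.05), (80.35, 0.06), (80.45, 0.08)]  # (value, sigma)
prediction = 80.38  # hypothetical model value

chi2 = sum(((value - prediction) / sigma) ** 2
           for value, sigma in measurements)
ndof = len(measurements)  # no parameters are fitted in this toy example

# A chi2 per degree of freedom near 1 signals consistency with the model.
print(f"chi2/ndof = {chi2:.2f}/{ndof}")
```

The real fits work the same way in spirit: many facts are agglomerated into one number that measures overall agreement with the model.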

To take the step of constructing a function which represents these priors and using it to place numbers on discoveries, or potential discoveries, goes well beyond common practice or thinking. It seems to me like an unwanted and perhaps dangerous feedback loop. It supposes that we can quantify the very things that we can’t anticipate by the level at which we can’t anticipate them. Since this is clearly impossible, the approach ends up substituting deviation from our priors for how successfully an experiment guides us to a better understanding. Aside from overlooking the role played by measurement, this approach would not have helped us to overcome the quagmire of the Bootstrap Model (a.k.a. particle democracy), or to move from Regge theory to quantum field theory and, ultimately, to QCD.

I do believe strongly in searching for new physics (deviations from the standard model) in a way that is broad and unfettered by theoretical prejudice, so I am sympathetic to many of Bruce’s ideas. But I don’t think we can quantify our potential results in the manner he has suggested.

January 2, 2008 at 8:30 am Leave a comment

What is the value of measurement?

I was happy to see that Evolving Thoughts picked up the discussion of judging experiments based on concepts taken from information theory. Apparently these ideas are not new and show the fault lines between the statistics community and certain scientific groups, including HEP.

Today I would like to reiterate an earlier point, in the context of judging experiments: what is the value of measurement?

The use of “surprisals” and the like limits the scope to discoveries only, with the idea that more surprising discoveries are worth more than “ordinary” discoveries. (One goes on from there to develop the notion that an experiment, or analysis, which is better directed toward making surprising discoveries is inherently worth more than one which makes “expected” discoveries – the problems in this formulation seem pretty clear when stated this way…)

But this completely overlooks one of the main roles of experiment, which is to measure. We know that the standard model is successful not only because someone discovered the W and Z bosons, the top quark, etc., but also because a host of precision measurements, taken together, conform to the expectations of the standard model. No one would argue that experimental (or theoretical) particle physics would be just fine without this corpus of measurements, and I doubt that many people would say that looking for new physics beyond the standard model doesn’t need concrete and precise knowledge of standard model particle properties and interactions. After all, we know in which range the mass of the standard model Higgs boson must lie, and some people are enthusiastic about supersymmetry because of the way it conforms to measurements of precision electroweak observables. What situation would we be in today if we lacked those measurements, or if they were considered unimportant? (Another example might be the current empirical knowledge of CP violation, culminating in the beautiful set of constraints on the CKM matrix and the unitarity triangle, but I am more familiar with electroweak physics myself.)
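For concreteness (my own minimal sketch, with invented probabilities): the surprisal of an outcome with probability p is −log₂ p bits, so an outcome assigned a tiny prior probability carries a large surprisal. That is exactly why such a measure rewards surprise and assigns almost no value to a precision measurement that confirms expectations.

```python
import math

def surprisal_bits(p):
    """Information content (surprisal) of an outcome with probability p, in bits."""
    return -math.log2(p)

# Invented prior probabilities, for illustration only.
print(surprisal_bits(0.5))    # an even-odds outcome: 1 bit
print(surprisal_bits(0.001))  # a long-shot "surprise": roughly 10 bits
```

A measurement that lands where everyone expected has p close to 1 and surprisal close to zero – which is the blind spot this post is complaining about.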

[Figures: top quark mass measurements; the Higgs “blue-band” plot]

(The top quark mass plot comes from the Tevatron Electroweak Working Group, and the “blue-band” Higgs mass plot comes from the LEP Electroweak working group.)

Of course I hope for a surprising discovery from the LHC or Tevatron data, one which is not foreseen by theorists. And I would be happy for clear signs of new physics even if it corresponds to one of the many available excellent theoretical ideas. But if people like the idea of judging experiments in some semi-quantitative way, how might one incorporate the value of measurement? Any ideas?

January 1, 2008 at 9:38 am 4 comments
