Archive for February, 2010

Earthquake Resource

I think it is time to learn something about earthquakes, given the terrible tragedy in Haiti and the more recent event in Chile.

The United States Geological Survey hosts an excellent web site at http://earthquake.usgs.gov/. There are quasi-interactive maps allowing you to see where recent earthquakes are occurring. Here is a map of South America showing many aftershocks in Chile within the past 24 hours:

[Figure: recent earthquakes in South America (USGS)]

From the brief summary of the Chile earthquake, one learns that Chile has a lot of earthquakes – 13 events of magnitude 7.0 or greater since 1973! No wonder President Michelle Bachelet is able to handle such a terrible event with calm determination. In 1960 there was an earthquake of magnitude 9.5 – the worst one in 200 years – which spawned a tsunami that engulfed the entire Pacific Ocean. In 2007 there was an earthquake of magnitude 7.7, in 2005 one of magnitude 7.8, and in 1995 and 1985 two of magnitude 8.0.

Chile has an “extravagant” geology, fringed by the Pacific Ocean and the Andes. It rests on the Nazca Plate, which moves eastward about 10 cm per year and forces itself under the continental plate of South America proper. One consequence of this movement is the formation of the Peru-Chile Trench, which is 150 km wide and 5 km deep. Basically, Chile rests on top of a giant precipice, hidden from view by the ocean. Another consequence is the formation of the Andes mountains, including many volcanoes. The movement of the Nazca Plate is responsible for the high incidence of earthquakes. (Source: country studies)

Scientists are able to fit the data to a model of the movement of the plates. They need some knowledge of the geometry and geology of the region, which they refine each time earthquakes are recorded. Fits to the data indicate a depth of 35 km and a “strike-slip” fracture. The USGS web site has a wonderful glossary with an animation. Here is the rate of energy released as a function of time – the numbers are astronomical:

[Figure: rate of moment release, i.e., the energy radiated as a function of time]
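As a rough aside of my own (not taken from the USGS page): the seismic moment being released can be translated into a magnitude and a radiated energy with standard seismological relations, which gives a feeling for why the numbers are so astronomical. A minimal sketch, assuming the usual moment-magnitude and Gutenberg-Richter energy formulas:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from seismic moment M0 in N*m (standard relation)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def radiated_energy_joules(mw):
    """Rough Gutenberg-Richter estimate of the radiated seismic energy in joules."""
    return 10 ** (1.5 * mw + 4.8)

# a magnitude-8.8 event like the one in Chile:
print(f"radiated energy ~ {radiated_energy_joules(8.8):.1e} J")  # ~1e18 J
```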

Here are the predicted travel times around the globe, in minutes:

[Figure: predicted travel times in minutes]


From what I can gather, a fault generates an acoustic wave which is transmitted in a channel defined by the surface of the earth and a deeper, denser layer in which the wave can propagate faster. Refraction plays a major role. See some nice animations at Jeffrey Barker’s web pages (SUNY Binghamton). Another source of animations and elementary explanations is the web site of the Incorporated Research Institutions for Seismology.

Despite the experience Chileans have with earthquakes, the situation there is very bad. According to an article in the New York Times, two million people are displaced, with several hundred killed. This earthquake is about 1000 times stronger than the one in Haiti, but because earthquakes are far more common in Chile, buildings and infrastructure are much better designed, and emergency services are much better prepared.

Update: Apparently the earth’s rotation has been measurably changed by this earthquake, shortening the day by a bit more than a microsecond. For more information, see The Reference Frame.

February 28, 2010 at 4:57 pm

QCD Predictions agree with the data – NOT!

The D0 Collaboration just released a nice short paper on the measurement of the di-jet invariant mass (arXiv:1002.4594):

Measurement of the dijet invariant mass cross section in p-pbar collisions at √s = 1.96 TeV

This is a bread-and-butter measurement performed many times at hadron colliders, and it will surely be repeated at the LHC in the coming months. Conceptually, there is not a lot to the measurement: one selects events with two or more jets passing quality requirements; cuts on the missing energy reduce backgrounds to a negligible level. Some art is needed in the handling of jet energy corrections, and the authors of the paper have made careful and conservative choices to reduce the possibility of systematic biases due to these corrections or the unfolding of the spectrum.
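Schematically – and this is just my own toy sketch with made-up object names and thresholds, not the actual D0 cuts – the selection logic is as simple as:

```python
def select_dijet_event(jets, missing_et, pt_min=40.0, met_max=70.0):
    """Toy di-jet selection: jets are (pt, y) tuples in GeV; thresholds are illustrative only."""
    good_jets = [(pt, y) for (pt, y) in jets if pt > pt_min and abs(y) < 2.4]
    if len(good_jets) < 2:
        return False          # need at least two quality jets
    if missing_et > met_max:
        return False          # suppress backgrounds with large missing energy
    return True
```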

The interesting feature of this analysis is the use of a large rapidity range. Jets are used out to a rapidity |y| of 2.4; older analyses tended to stick to the central region |y|≤1. The D0 Collaboration thereby publishes a very pretty double-differential cross section:

[Figure: double-differential cross section with respect to di-jet mass and rapidity, compared to NLO QCD predictions]


The horizontal axis is the di-jet invariant mass, and the six sets of points correspond to six ranges in rapidity, |y|max, for the most energetic of the jets in each event. (Most events have only two jets, in fact.)
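For concreteness, the di-jet invariant mass is just the invariant mass of the two leading jets’ four-momenta. A minimal calculation, assuming massless jets described by pT, rapidity y, and azimuth φ (my own convention here, not the paper’s code):

```python
import math

def four_vector(pt, y, phi):
    """Massless-jet four-vector (E, px, py, pz) from pT, rapidity, and azimuth."""
    return (pt * math.cosh(y), pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(y))

def dijet_mass(jet1, jet2):
    """Invariant mass M_JJ = sqrt((E1+E2)^2 - |p1+p2|^2)."""
    E, px, py, pz = (a + b for a, b in zip(four_vector(*jet1), four_vector(*jet2)))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# two back-to-back 300 GeV jets at y = +1 and y = -1:
print(dijet_mass((300.0, 1.0, 0.0), (300.0, -1.0, math.pi)))  # ~926 GeV
```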

The smooth curves appear to go directly through the points – a triumph of pQCD calculations! These are serious calculations, incorporating next-to-leading-order (“NLO”) radiative corrections, which reduce the dependence of the theoretical prediction on the arbitrarily chosen factorization scale. Credit goes to Zoltan Nagy for these calculations (arXiv:hep-ph/0307268).

Let’s take a closer look. The D0 Collaboration provides nice plots of the ratio of their measurements to the theoretical prediction:

[Figure: ratio of the measurement to the theoretical prediction]


Now the agreement does not look so perfect, so let’s explain what is in these plots.

Each panel corresponds to the ratio (data/theory) as a function of the di-jet invariant mass, MJJ, so perfect agreement would be a series of dots with error bars sitting at one. Uncertainties on the jet corrections (energy scale, resolution, and unfolding) lead to correlated uncertainties indicated by the yellow bands. These are uncertainties on the expectation, not on the actual observed values, so the authors center those bands on one rather than on the dots – something that I personally approve of. The calculation done by Nagy includes NLO corrections, which make it much more accurate than a leading-order calculation, but some theoretical uncertainties still remain, as indicated by the pairs of blue lines. Finally, the cross-section calculation depends on empirical knowledge of the parton distribution functions (p.d.f.s), and since that knowledge is imperfect, there is an associated uncertainty indicated by the dashed red lines.
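To make the picture concrete, here is a toy version of how such a ratio and its experimental band might be formed. The numbers are entirely made up, and the naive quadrature combination ignores the bin-to-bin correlations that D0 actually tracks:

```python
import numpy as np

# toy inputs: measured and predicted cross sections per MJJ bin (arbitrary units)
data   = np.array([105.0, 48.0, 17.0, 4.9, 1.1])
theory = np.array([100.0, 50.0, 18.0, 5.0, 1.0])

ratio = data / theory                        # the dots in the D0 plots

# illustrative fractional experimental uncertainties (energy scale, resolution, unfolding)
jes, jer, unfold = 0.10, 0.04, 0.03
band = np.sqrt(jes**2 + jer**2 + unfold**2)  # naive quadrature combination

lower, upper = 1.0 - band, 1.0 + band        # band centered on one, as in the paper
print(ratio, (lower, upper))
```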

For the central rapidity bins (top three rows), the dots fall within the yellow bands, the blue lines, and more or less between the dashed red lines. This means that the uncertainties from the experimental measurement, the theoretical prediction, and the pdfs cover the deviation of the ratio from one.

For the high rapidity bins, however, things don’t look as nice. Still, collider physicists are conservative and tolerate such discrepancies – measurements at high rapidity and high jet energies are difficult to control, so few people would claim that there is a serious problem, even in the highest rapidity bin. Hence the statement in the abstract: Next-to-leading order perturbative QCD predictions are found to be in agreement with the data.

But that’s not the end of the story. The D0 Collaboration used a very up-to-date parametrization for the pdfs, called MSTW2008NLO (arXiv:0901.0002). Another recent parametrization, called CTEQ6.6 (arXiv:0802.0007), also from 2008, was considered, leading to a very different result. If the theory prediction is computed using CTEQ6.6, and the ratio (data/theory) is re-computed, a large deviation at high rapidity is observed. See the dot-dashed lines in the ratio plots above – this is the ratio of the CTEQ6.6-based prediction to the MSTW2008NLO prediction. It appears that the prediction is off by nearly a factor of two in the high-mass, high-rapidity region, if CTEQ6.6 is used.

Normally, predictions based on these two competing pdf parametrizations agree pretty well. But the Tevatron experiments are entering a regime in which real differences can be sniffed out. The measurement, while bread-and-butter, is not an easy one, and I am sure the authors worked long and hard on it. The result is that the two most popular pdf parametrizations can be sharply compared.

The D0 Collaboration points out some important facts about CTEQ6.6 and MSTW2008NLO. Even though both appeared around 2008, MSTW2008NLO incorporates more recent Tevatron data than CTEQ6.6. In fact, the D0 jet energy spectra, which are correlated with the measurements in this latest paper, were used, so one should expect agreement. Meanwhile, CTEQ has produced newer parametrizations – I do not know whether they would agree better with these D0 measurements. One can infer, though, that there has been a significant evolution of the pdfs (sorry for the pun) since the Tevatron Run I era, and that the pdf fits are determined in a significant way by measurements like this one. Several other Tevatron measurements are waiting to be included in the pdf fits.

Finally, let me point out that predictions for the LHC employ pdf sets, often rather old versions of the MSTW or CTEQ fits. Will we see factors-of-two changes in such predictions, once newer parametrizations are used? Maybe we should expect rather large discrepancies when the first jet spectra are measured…

February 27, 2010 at 6:31 pm

Update of the Higgs Boson Mass p.d.f.

Jens Erler recently updated his calculations for a probability density function for the Higgs boson mass (MH), based on measurements and searches. The article is arXiv:1002.1320.

The whole discussion is couched in the standard model, so the conclusions pertain only to the standard model Higgs boson, the properties of which are well known as a function of its unknown mass. If you want to think about theories beyond the standard model, then the results may not apply.

In this context, the interesting question is: What is the mass of the Higgs boson? One could also add: Is the standard model consistent with a Higgs boson that has not already been ruled out by searches at colliders?

As many bloggers have explained, some electroweak observables depend on MH through radiative corrections. A good theorist such as Jens can confront the calculations of these radiative corrections with precision measurements of the observables, and find the value of MH which gives the best match, or “fit.” Nearby values of MH give a slightly poorer match, while others are simply incompatible. This qualitative behavior can be quantified in terms of a chi-squared or other figure of merit. Using Bayes’ Theorem, one can even interpret the variation of that chi-squared as a probability density function, telling you where to place your bet.
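In sketch form, the Bayesian step amounts to exponentiating the chi-squared and normalizing. The chi-squared curve below is a toy parametrization of my own invention, not Jens’ actual fit:

```python
import numpy as np

mh = np.linspace(100.0, 300.0, 1000)        # Higgs mass hypotheses in GeV

# toy chi-squared: a parabola in log(MH), purely illustrative
chi2 = (np.log(mh / 90.0) / 0.3) ** 2

# Bayes' Theorem with a flat prior in MH: posterior density ~ exp(-chi2/2), then normalize
pdf = np.exp(-0.5 * chi2)
pdf /= np.trapz(pdf, mh)

# e.g. probability that MH lies between 115 and 148 GeV under this toy model
mask = (mh >= 115.0) & (mh <= 148.0)
print(np.trapz(pdf[mask], mh[mask]))
```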

[Figure: Jens' probability density function for MH subject to all data. The nominal 95% exclusion ranges from LEP 2 and the Tevatron are shown as shaded boxes along the upper edge.]

Experts in statistics will take issue with this, but I find the results interesting nonetheless. One can see that some values really don’t fit the present picture, for example, MH = 200 GeV, where the famous H→ZZ→4 leptons channel would be most fruitful. Even more cruelly, Nature points a finger at 117 GeV, where the Tevatron a priori is unable to establish a signal, and the LHC would require a very large data sample. (On the other hand, the Tevatron might be expected to place a 95% CL bound in this mass range, if the Higgs does not exist with that mass and there are no untoward statistical fluctuations. See the nice discussion by Tommaso Dorigo, especially the last few paragraphs.) Don’t bet a lot on the basis of this distribution, however, since it assumes the standard model, which is known to be inadequate, and one can argue about the statistical basis for these calculations.

For the record, Jens reports the 90% preferred range (95% CL lower and upper limits) to be 115 GeV ≤ MH ≤ 148 GeV. This represents a huge improvement over the past decade, thanks to precision measurements of the top quark and W boson masses at the Tevatron. For illustration, here is Jens’ result published in 2001 (hep-ph/0102143):

[Figure: p.d.f. for MH in 2001]

So, what will this look like in a year or two? How much will 1 fb⁻¹ of LHC data at 7 TeV change this picture, if at all?

February 24, 2010 at 3:51 pm

First CMS Physics Paper!

Today the first CMS physics paper appeared on the arXiv (1002.0621):

Transverse momentum and pseudorapidity distributions of charged hadrons in pp collisions at √s = 0.9 and 2.36 TeV.

This paper reports measurements similar to, but going beyond, the ALICE paper, which I discussed earlier on this blog.

Notice that the √s = 2.36 TeV data are included – these are the highest-energy data in the world at the present time, hence a kind of feather in CMS’s cap.


More soberly, the actual parton-parton energies are quite low, since the events are non-single-diffractive interactions – basically, glancing blows of the two protons, a far cry from, say, the production of a pair of top quarks or W bosons.

The measurements concern the transverse momentum pT and (pseudo-)rapidity η distributions of charged hadrons. As I discussed earlier, these distributions can be related to scaling arguments started by Feynman, and as such lie in the area of non-perturbative physics of hadron production, for which there are phenomenological models. At a minimum, these models must be tested and constrained so that they can be used for modeling underlying event structure for high-energy collisions. These measurements also serve as a baseline for heavy-ion collisions.
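For reference, the two variables being measured are simple functions of a track’s momentum components. These are the standard definitions, nothing CMS-specific:

```python
import math

def transverse_momentum(px, py):
    """pT: momentum component transverse to the beam (z) axis."""
    return math.hypot(px, py)

def pseudorapidity(px, py, pz):
    """eta = -ln(tan(theta/2)), with theta the polar angle to the beam axis."""
    p = math.sqrt(px**2 + py**2 + pz**2)
    return 0.5 * math.log((p + pz) / (p - pz))

# a track going mostly forward (momenta in GeV):
print(transverse_momentum(0.3, 0.4), pseudorapidity(0.3, 0.4, 2.0))
```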

The event selection was as open and simple as can be imagined, demanding little more than signals in the beam monitors indicating that bunches collided at the center of the CMS detector. A very loose cut on the number of pixel detector hits was enough to eliminate beam-gas events entirely. The event had to have a reconstructed vertex, too. Selection efficiencies are high, naturally, around 86% or so.

Three methods were used to measure the rapidity distribution, dN/dη. The most primitive method simply counts reconstructed clusters in the pixel barrel detector, since the shape of a cluster already provides a good indication of a charged hadron track. The second method links such clusters to build short track segments called “tracklets,” which do not provide curvature information but clearly allow a better indication of the origin of the track. Finally, a full-blown track reconstruction, using both the pixel and silicon strip detectors, provides momentum measurements as well as direction. The point of the three methods is to demonstrate the robustness of the measurements with respect to the methods used and the performance of the detector – which was excellent in any case.
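The tracklet idea, as I understand it, is simply to pair hits in two pixel layers that point back roughly to a common origin. A toy version, with matching cuts of my own choosing and hits given as (η, φ) pairs, purely for illustration:

```python
import math

def build_tracklets(layer1_hits, layer2_hits, d_eta_max=0.1, d_phi_max=0.1):
    """Pair (eta, phi) hits from two pixel layers into crude track segments."""
    tracklets = []
    for eta1, phi1 in layer1_hits:
        for eta2, phi2 in layer2_hits:
            d_phi = abs(phi1 - phi2)
            d_phi = min(d_phi, 2.0 * math.pi - d_phi)   # wrap around in azimuth
            if abs(eta1 - eta2) < d_eta_max and d_phi < d_phi_max:
                tracklets.append(((eta1, phi1), (eta2, phi2)))
    return tracklets
```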

The systematic uncertainties concern the acceptance and efficiency estimates and the degree to which they depend on the phenomenological (Monte Carlo) models. The exclusion of single-diffractive events, in which the hadronic final state is typically very forward and difficult to observe, is only partially successful; the purity of the final samples is roughly 95%. This purity estimate again depends on the models, so there is a systematic uncertainty. The net uncertainty is only 3%. An additional 2-3% comes from reconstruction efficiencies, and 1% from knowledge of the tracker geometry.

Transverse Momentum:

The transverse momentum distribution of primary charged hadrons is shown below, for both 0.9 and 2.36 TeV. The mean pT is measured to be 0.46±0.01±0.01 GeV at 0.9 TeV, increasing slightly to 0.50±0.01±0.01 GeV at 2.36 TeV. This increase is clearly seen in the tail of the pT distribution:

[Figure: measured transverse momentum distributions for primary charged hadrons]

Pseudorapidity:

The number density as a function of pseudorapidity, dN/dη, is presented for both √s = 0.9 TeV and 2.36 TeV. The results from the three methods (not shown) agree within errors. One might notice the much greater accuracy provided by the CMS measurement as compared to the earlier one by ALICE.

[Figure: pseudorapidity distribution, dN/dη]

The distribution dN/dY, where Y is the rapidity, is expected to be flat near Y ≈ 0. Pseudorapidity coincides with rapidity except when the rest mass of the particle is not negligible compared to its momentum, which is the case for pions and kaons with transverse momenta of a couple hundred MeV. The subtle wavy effect seen in the figure comes from the numerical differences between η and Y.
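To see the size of the effect, one can compare rapidity and pseudorapidity for a charged pion with pT = 0.2 GeV. This is standard kinematics; the numbers are of my own choosing:

```python
import math

M_PION = 0.1396  # charged pion mass in GeV

def rapidity(pt, eta, m):
    """Rapidity y from pT, pseudorapidity, and mass."""
    pz = pt * math.sinh(eta)
    e = math.sqrt(m**2 + pt**2 + pz**2)
    return 0.5 * math.log((e + pz) / (e - pz))

for eta in (0.5, 1.0, 2.0):
    y = rapidity(0.2, eta, M_PION)
    print(f"eta = {eta:.1f}  ->  y = {y:.2f}")   # y is noticeably smaller than eta
```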

It is clear from the figure that more hadrons are produced per unit of rapidity at high energies than at lower energies, which is interesting given the peripheral nature of these collisions. According to the original arguments of Feynman, the variation with center-of-mass energy should go as ln(s), but some years later experiments showed that the rise is quadratic in ln(s). It is also worth noting that the difference between p+p and p+anti-p collisions is less than a couple of percent, again underscoring the soft, peripheral nature of these collisions.

From the CMS data, dN/dη = 3.48±0.02±0.13 at 0.9 TeV and dN/dη = 4.47±0.04±0.15 at 2.36 TeV.

As stated in the paper, the increase of 28.4% is significantly more than the 18.5% predicted by a tuned version of PYTHIA, and the 14.5% predicted by PHOJET models. Interesting.
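A quick sanity check of that quoted increase, with a naive error propagation of my own that adds the statistical and systematic uncertainties in quadrature and ignores any correlation between the two energies:

```python
import math

# central values and quoted (stat, syst) uncertainties for dN/deta
n_09,  err_09  = 3.48, math.hypot(0.02, 0.13)
n_236, err_236 = 4.47, math.hypot(0.04, 0.15)

increase = n_236 / n_09 - 1.0
err = (n_236 / n_09) * math.hypot(err_09 / n_09, err_236 / n_236)

print(f"increase = {100 * increase:.1f}% +- {100 * err:.1f}%")  # roughly 28% +- 7%
```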

Here are the summary plots showing the dependence on center-of-mass energy culled from several experiments over the years:
[Figure: variation with √s]
The red points are the ones from CMS.

So the first paper from CMS is interesting, with a non-trivial result, and perhaps a good harbinger for the next few months, when we will surely see many measurements of hadronic event properties from the LHC experiments. :)

(Note: I am a member of the CMS Collaboration and my name appears on the author list.)

February 4, 2010 at 12:06 pm

