Archive for March, 2010

CMS Muons in 7 TeV Collisions

It is an exciting day, with thousands of 7 TeV collision events flowing in. I watched the frustrating first failures from my laptop at home, in the middle of the night. Now that the sun is up, the picture is much brighter – CERN just held a press conference in which Steve Myers and the leaders of the four LHC experiments proudly and joyfully showed that they are taking data.

Here is a pretty event from my own experiment, CMS, in which a clean muon track has been nicely reconstructed:
event display muon event 7 TeV
The muon is the long red track that curves down to the lower right-hand corner. It was detected and reconstructed in the Cathode Strip Chambers (CSCs), and my group at Northwestern belongs to the CSC group within CMS. So I am pleased and proud to see this nice event!

Edgar Carrera posted a different CMS event at US LHC Blog, where you can also see events from ATLAS.

The official CMS Press Release, in several world languages, is available here.

March 30, 2010 at 7:02 am 2 comments

Some Notes from the LHC Commissioning Report

Mike Lamont delivered a report on the commissioning of the LHC. The slides are here. Below I offer some notes from his talk.

LHC beam commissioning strategy (Mike Lamont)

Problem: as discovered in the final stages of the hardware commissioning, the quench protection system (QPS) can be triggered erroneously when a converter switch is turned off at the same time as a fast discharge. The workaround involves new thresholds, and the implication is that ramping must be slower than planned – at the reduced rate of 2 A/s, reaching the roughly 6 kA needed for 3.5 TeV beams takes about three quarters of an hour. A real solution will come in a few weeks.

Success: the availability of the technical systems was good. For week 10 of 2010, the LHC was available 66% of the time, with 17% planned downtime (technical stop) and only 17% unavailable for unplanned reasons.

Success: lifetimes of 450 GeV beams were quite high, on the order of 100 hours.

Success: the aperture was measured by kicking the beam and seeing at what transverse distance, and where along the ring, beam losses occur. The optics show apertures that are more than ten times the size (σ) of the beam.

Success: the magnet model turns out to be remarkably accurate. The largest deviations are at the level of a couple times 10⁻⁴.

Success: the beam dump system, which is highly nontrivial and may be crucial to the survival of the experiments, works beautifully. See this illustration of 10 bunches dumped into the target precisely where they are meant to go (red line).
beam dump

Critical Path: the machine protection system will dump the beam if “anything out there decides it’s had enough.” There are many inputs to the decision to dump the beam, and subtle interplay among them. Careful testing has proceeded well so far.

Mystery: “the hump” drives beam excitations, which are bad because the emittance blows up, leading to lower luminosity and higher backgrounds. The experts are open to suggestions…

Success: the ramp Friday morning with two pilot beams was a complete success – on the first try. The orbit looks stable and reproducible.

Puzzle/Problem: apparently the bunch length is not as short as expected after the ramp is completed, and it grows over time. This is not understood and may be a concern if fills last for many hours – see the plot below.
bunch length vs. time  (Lamont)
The ramp ends just before 60 minutes. The lower green cross shows what the bunch length should be, while the upper cross shows that it is significantly larger (don’t miss the suppressed zero!). The purple crosses show a small increase from 60 to 130 minutes.

Success: beta beating measures how much the focusing optics deviate from the design, and is one measure of the success of the commissioning of the machine. For the LHC, this is already at the 20% level – it took earlier machines weeks or months to achieve this.

In short, the LHC is in good shape, and the main areas of concern are the machine and quench protection systems. According to Lamont, there are no show stoppers, and all of the hard work and thorough preparations over the last months and years put the LHC in an excellent position for physics this year. :)

March 27, 2010 at 5:40 am 2 comments

Underlying Event – a definitive study by CDF

This weekend I read a superb paper by the CDF Collaboration (arXiv:1003.3146):


Studying the Underlying Event in Drell-Yan and High Transverse Momentum Jet Production at the Tevatron

This paper is written so clearly that a thorny and confusing phenomenon can be grasped with little effort. Even better, new results are presented which elucidate the underlying event in ways that help a lot in understanding what is going on.

First, what is the “underlying event” and why do we care about it? The UE is everything you see in a hadron collider event that does not come from the primary hard scattering process. So, aside from a q-qbar pair annihilating to give you a pair of leptons, for example, there are tracks and calorimeter energy coming from the non-scattering fragments of the two beam particles, potentially from other partons in the same beam particles which interact along with the “primary” ones, and from other beam particles which happen to interact. All three pieces are hard to calculate because these processes are soft, so non-perturbative interactions play a dominant role. We must use models (HERWIG or PHOJET or PYTHIA, with several “tunes” of model parameters) to simulate these events. Since the UE has several components, adjusting these models to mimic the data is challenging.

The need to tune and test these models is urgent now, since they tend to give wildly different predictions for the UE at the LHC. Why? The three main components of the UE (beam remnant, multiple parton interaction or MPI, and pile-up of different beam-beam interactions) do not vary the same way with c.m. energy, so a match at 1.96 TeV does not guarantee a match at 7 TeV.

Let’s build up the picture as follows. Consider a typical hard 2→2 scatter, producing two jets which are back-to-back in the transverse plane. Inevitably, one of them will have a higher transverse energy, ET, than the other. The higher-ET jet is the “tag” for the event, and we’ll put it at 12 o’clock. The other jet, called the “away” jet, would normally be at 6 o’clock. The authors call the 12 o’clock direction the “toward” side, a name that pairs naturally with “away.”

In an e+e- collider, that would be the end of the story, for two-jet events. In a hadron collider, however, the “transverse” regions at 3 o’clock and 9 o’clock are interesting because they will not be empty – the UE contributes there as well as anywhere else. The CDF analysis method consists of examining the transverse regions, characterizing the level of activity by the particle density and by the mean pT there.

An important innovation is to substitute a Z-boson for the tag jet. The Z boson is color-neutral, so the particle density in the toward region should be low, similar to the transverse regions (after excluding the two leptons from the Z decay). Thus the intense particle flow from the tag jet does not obscure the UE in the toward region.
CDF UE regions
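To make the geometry concrete, here is a minimal sketch in Python of the region assignment – my own illustration, not CDF analysis code. The 60° and 120° boundaries are the standard CDF convention, and the two transverse wedges are kept separate for later use (TransMIN/TransMAX):

    import math

    def delta_phi(phi1, phi2):
        # Azimuthal difference wrapped into [-pi, pi]
        return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

    def region(phi_particle, phi_tag):
        # Classify a charged particle relative to the tag object
        # (leading jet or Z boson), which sits at "12 o'clock".
        dphi = delta_phi(phi_particle, phi_tag)
        if abs(dphi) < math.pi / 3:        # within 60 degrees: toward
            return "toward"
        if abs(dphi) > 2 * math.pi / 3:    # beyond 120 degrees: away
            return "away"
        return "transverse_1" if dphi > 0 else "transverse_2"

The particle density is then just the number of charged particles falling in a region divided by its η-φ area, and the mean pT is averaged over those same particles.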

The authors compare to a selection of models. PYTHIA is the work-horse of event generators, and the UE model in PYTHIA has been tuned multiple times. The interesting cases are called “A” (good for di-jet events), “AW” (good for Z+jet) and “ATLAS” (meant to be accurate at the LHC). In addition, there is HERWIG augmented by an MPI model – this is expected to be less accurate than PYTHIA, and it is.

The first interesting results are shown below:
CDF density vs. pT
The horizontal axis is a measure of the “hardness” of the primary interaction – the pT of the jet [top plot] or Z-boson [bottom plot]. In both cases, the particle density opposite the tag object (jet or Z) increases rapidly with pT – see the blue dots. This is clear – the recoil jet on the away side must balance this pT. In contrast, the particle density in the transverse region (green dots) does not care about pT, which is also intuitive – this is the UE. Notice the dramatic contrast in particle density for the toward (or tag) region: it matches the recoil jet when the tag is a jet, and it matches the transverse density when the tag is a pair of leptons. This is very pretty, and very well reproduced by PYTHIA.

There is plenty of evidence that HERWIG does not reproduce the data as well as a tuned version of PYTHIA. Here is one particularly clear example:
PYTHIA (AW) v HERWIG
This plot shows the particle density in the transverse regions, for Z-boson events, as a function of the pT of the Z-boson. Clearly, HERWIG (“HW”) is too low while PYTHIA (“pyAW”) gets it right. The main cause for the difference is the lack of MPI in HERWIG. So the difference between the two curves can be seen as a measure of MPI (multiple-parton interactions). It is not negligible, and should increase rapidly with c.m. energy.

If the pT of the Z boson is large, then there is a lot of energy available for gluon radiation in the initial state. If there is more than one ISR jet, the particle density in the transverse region will rise – basically, the second and third jets will tend to leak into the transverse region and will not be confined to the away region. The toward region, however, where the Z-boson went, will still be clear of ISR jets. Here is a comparison of the particle density in the transverse and toward regions, illustrating this effect:
DY transverse vs. toward

The authors differentiate the two transverse regions (at 3 o’clock and 9 o’clock) according to their activity – TransMIN is the lesser of the two, and should be more sensitive to the UE; TransMAX will carry the contamination of any extra ISR (a short sketch of this split appears below). This is an excellent place to check models (HERWIG vs PYTHIA) and tunes of models. Here is the plot of the charged particle density as a function of the pT of the Z-boson, in the TransMIN region:
compare models Tevatron data
PYTHIA matches the data very well, both with tune “AW” and tune “DW”. The ATLAS tune of PYTHIA is also reasonably good. A more detailed look (available in the paper), however, indicates that the ATLAS tune produces somewhat too many particles, which are too soft. HERWIG alone (“HW”) is too low, as already noted. Trying to fix HERWIG by adding in a model for MPI called JIMMY (“JIM” curve) does not work – it is too high. Perhaps a tune of the combination HERWIG+JIMMY would yield better results.
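In terms of the sketch above, the event-by-event split is a one-liner (again my own illustration, not CDF code):

    def trans_min_max(n_transverse_1, n_transverse_2):
        # TransMIN: the quieter wedge, most sensitive to the UE;
        # TransMAX: the busier wedge, which absorbs any extra ISR jet.
        return (min(n_transverse_1, n_transverse_2),
                max(n_transverse_1, n_transverse_2))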

The point of showing so many models at Tevatron energies is to prepare for an extrapolation to LHC energies (14 TeV). Here is a comparison of two successful PYTHIA tunes as well as HERWIG:
compare models LHC
The two tunes (DW and DWT) show a large increase in the particle density, but the sizes of the increase differ significantly, and the difference could easily be tested with the first LHC data. CDF data taken at 630 GeV favor the DW tune over the DWT tune. HERWIG, which lacks MPI, shows a much smaller increase. Thus, much of the increase comes from MPI (in which two or more parton-parton scatters occur within the same pair of beam particles).

The final study in this excellent paper involves soft collisions logged through a minimum-bias trigger. Such collisions are related to the UE, but are not the same as the UE. Despite this fact, collider physicists use min-bias events as a model of the UE. Thus it is important to study min-bias models to unravel what is truly happening in the UE and how well or how poorly it is represented by min-bias events.

The main observable is the increase of the average transverse momentum, <pT>, with the number of charged particles, Nch. The soft and hard components of min-bias events play varying roles as a function of Nch. This also gives a handle on MPI. A comparison of the models to CDF min-bias data shows that PYTHIA with tune AW gets it right, while the ATLAS tune is too low. (Again, too many particles with too low momentum.)

avepT in min-bias events
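The observable itself is simple to build; a toy version in Python (hypothetical event list, not the CDF analysis code) might look like this:

    from collections import defaultdict

    def mean_pt_vs_nch(events):
        # events: one list of charged-particle pT values (GeV) per event.
        # Returns {Nch: <pT>}, the average pT of all particles found
        # in events with that charged multiplicity.
        pt_sum = defaultdict(float)
        n_part = defaultdict(int)
        for pts in events:
            nch = len(pts)
            if nch == 0:
                continue
            pt_sum[nch] += sum(pts)
            n_part[nch] += nch
        return {n: pt_sum[n] / n_part[n] for n in pt_sum}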

One can also compare this observable for min-bias events and Z-boson events with a low pT (so that there is little ISR). The behavior is very similar even though the theoretical framework is, at first glance, very different.
ave PT vs NCH
The PYTHIA tunes match the data in both cases, indicating that MPI plays an important role in both.

This blog post gives only a cursory treatment of this fine paper. The analysis is done in a very careful way – there are many interesting technical details which I did not even mention. Also, details of the various PYTHIA tunes are spelled out. Of course, there are many plots and much discussion which I cannot reproduce here.

All students of collider physics should read this paper. The very murky issues in understanding the underlying event are clearly explained, and there are new results which elucidate the role of multiple-parton interactions.

I already wrote about double-parton scattering some months ago, and I’ll bet that this phenomenon crops up again and again, since its role increases rapidly with c.m. energy.

March 26, 2010 at 6:29 am 1 comment

Webcast tomorrow on LHC machine commissioning

FYI: tomorrow (Friday 26-March) there will be a webcast of a report by Mike Lamont on the current status of the LHC commissioning and of the plans for the next steps.

The time is 15:15 at CERN, or 9:15am in Chicago (for example). For more information, see http://indico.cern.ch/conferenceDisplay.py?confId=88711 and http://www.cern.ch/webcast.

This should be very interesting, in light of the 7 TeV Collisions scheduled for Tuesday, 30-March. (More information here.)

The next presentation will be on 9-April.

March 25, 2010 at 6:41 am Leave a comment

Collisions at 7 TeV c.m. energy scheduled for 30-March

Rolf Heuer, Director General of CERN, announced today that the LHC is on schedule to provide collisions at 7 TeV on 30-March-2010, Tuesday a week from now.

A quote from Steve Myers: “With two beams at 3.5 TeV, we’re on the verge of launching the LHC physics programme. But we’ve still got a lot of work to do before collisions. Just lining the beams up is a challenge in itself: it’s a bit like firing needles across the Atlantic and getting them to collide half way.”

And a quote from Heuer: “The LHC is not a turnkey machine. The machine is working well, but we’re still very much in a commissioning phase and we have to recognize that the first attempt to collide is precisely that. It may take hours or even days to get collisions.”

The CERN press release points out that three days were required to bring the electron and positron beams at LEP into collision. There will be a live webcast and, I presume, lots of press at the LHC control room as well as at CMS, ATLAS, LHCb and ALICE.

I’ll bet that all experiments will perform splendidly, and I hope that the LHC performs as wonderfully as it has these past few weeks…

March 23, 2010 at 4:57 am 1 comment

Twenty-three papers: Commissioning CMS

We were all rather down when the LHC magnet blew up in September 2008. Enough has been written about that. Quickly enough, the CMS Collaboration made the best of the situation and launched a serious campaign to commission the detector as much as possible using cosmic rays. The result is twenty-three scientific papers appearing as a special volume of the Journal of Instrumentation, published by IOP Science. The link is 2010 J. Inst. 5. The editors of this journal have been very helpful and the CMS Collaboration is grateful for their cooperation.

The papers cover nearly every aspect of the detector. There is an overview paper, which describes how the data were logged and how the data acquisition and event processing were carried out. There are papers on the alignment of the tracking devices and of the muon system – the result is equivalent to tens of pb⁻¹ of collision data. In a related effort, the magnetic field map in the muon chambers, with highly non-trivial spatial variations, was verified at the percent level. The energy deposited in the electromagnetic and hadronic calorimeters as a function of muon momentum was measured and compared to simulations. Anomalous signals (“noise”) in the calorimeters were also studied extensively, pointing the way to filters to remove them in collision data.

The event sample amounts to approximately 300 million cosmic ray muon triggers (remember that the CMS detector is installed in an underground cavern) collected over a four-week period. Essentially all of the CMS subdetectors delivered high-quality data, good enough for physics analysis. The operational efficiency was rather high, above 80%, and sustained over that four-week period. The superconducting solenoid was on for most of that period, delivering a field of 3.8 T.

The HCAL group at my home institution, Northwestern University, contributed in a major way to all three HCAL papers. For example, they studied the per-tower calibration as quantified with cosmic ray muons, deriving corrections that clearly improve the uniformity of response:
HB response in CRAFT
Here is the measured response as a function of muon momentum. One can see clearly the relativistic rise for high-momentum muons, which evidently is reproduced well by the CMS detector simulation:

HCAL measured energy as a function of muon momentum

The muon group at Northwestern contributed to several papers as well. In fact, I was the main author of the paper on the cathode strip chambers (CSCs) and my group produced more than half of the content of this paper. (Yes I am proud of that!) We checked the detector simulation. It is good but far from perfect. We performed serious measurements of the efficiency, and found some problems which have been fixed since then. Here is an example of the efficiency of the local charged track triggers – basically, the trigger primitives generated in the on-board CSC electronics:

Efficiencies for the ALCT and CLCT (see text)


The plot on the left shows the efficiency for the anode LCT, or ALCT, as a function of the angle. The trigger is designed to be efficient for muons that point back to the interaction point, and in that region it is well above 99% efficient. On the right is the corresponding plot for the cathode LCT, or CLCT. Again, in the region where the efficiency should be high, we measured above 99%.

We also studied the chamber resolution. Due to the nature of cathode strip chambers, the resolution is better when more charge is measured than when the charge is small. The 1/Q trend can be seen in the plot below, up to about 300 fC (the mean charge left by a muon). The plot shows a worsening of the resolution at very high charges, due to the interference of delta-ray electrons with the charge measurement.

CSC resolution as a function of the measured charge


We also studied the resolution as a function of the muon impact position within a strip, the muon incident angle, and the magnetic field.

I could write pages and pages about all the nice results obtained for all the CMS detectors – after all, there is a lot of material in twenty-three articles accepted by the referees of JINST. If you are interested (and I am sure the experts among you have specific interests), please go take a look using the link above.

The work done for these papers has been of immense value for the CMS Collaboration, bringing the detectors to an unprecedented degree of preparedness – the impact of the understanding of the tracking devices on the first CMS physics paper has been tremendous.

Finally, let me point out that there is some interest in cosmic ray characteristics and one can hope that CMS will use their cosmic ray data to perform some physics measurements, too.

March 20, 2010 at 11:24 am Leave a comment

Ramping to 3.5 TeV: time line

Here is a summary of accelerator operations last night:

- 16:30 Started ramp without beam at 2 A/s
- 17:38 Trip of Sector 78 during dry ramp to 6 kA. To be understood.
- Dump during the ramp due to BLM (problem with optical link 
  for BLM in pt1, fixed).
- 20:45 Try a second ramp without beam
- 22:04 Ramp to 3.5 TeV without beam completed. LBDS/BETS OK.
- 00:20 Ready at injection
- 01:45 Injection started
- Found large chromaticity (negative) and important orbit distortion
  => regenerated sextupole functions (SF SD), much better
- Performed checks:
  * beam1: 
    o Q' Trim : H = -25.0 & V = 0.0, now at H= 5, V= 5
    o Q trim : H only by +0.02. Tunes at 0.28 and 0.31
    o Orbit corrected against "golden"
  * beam2:
    o Q' trim : H by -20.0 & V by +8. Now at H= 10, V= 9
    o Small tune trims to get back to nominal
    o Orbit corrected
- 04:00: Ramp started with buckets 1 (beam1) and 1001 (beam2)
  with tune feedback on, LHCPROBE, ~5e9.
- 05:23: both beams at 3.5 TeV! 100 hrs lifetime
- At 5:25: RQTF.A81B2, RQTF.A45B2, RQTF.A34B2, RQTF.A23B2 tripped.
  PM indicates that QPS triggered
- Beams dumped properly
- At flat top, had time to send 50% of an orbit correction on
  both beams, before RQTFs tripped. 
- Tune feedback was on
  * Final Q-FB trims before beam loss: 
    o beam1: dQH = -0.008 , dQV = -0.054
    o beam2: dQH = -0.019 , dQV = -0.066 
  * Tunes at nominal settings after the end of the ramp. 
  * Trims requested to RQT circuits therefore increase and probably
    led to trip of QPS. To be understood.
- Transverse emittances
  * BSRT vs WS Vertical excellent up to ~ 2.5 TeV, then BSRT
    calibration with D3 light to be studied.
    WS profiles with small sigmas to be looked at in detail.
    BSRT vs WS HOR with already-seen systematic difference, to be
    understood.

Note the innocuous mention at 05:23 hours: both beams at 3.5 TeV! As has been pointed out at Cosmic Variance and Not Even Wrong, this is a new world record, far surpassing the beam energies of the Tevatron and the earlier running of the LHC itself. You can also read the CERN Press Release, and a nice upbeat message from Director General Rolf Heuer. Details from the LHC operations center are available at http://cern.ch/lpc.

LHC screen shot - 3500 GeV!

Bravo! Now let’s see the beams held for a significant period of time, and then… collisions!

March 19, 2010 at 4:05 am 2 comments

First ATLAS Physics Paper: Charged Particle Multiplicities

Today the ATLAS Collaboration posted their first physics paper on the archive (arXiv:1003.3124):

Charged-particle multiplicities in pp interactions at √s = 900 GeV measured with the ATLAS detector at the LHC

This is the third physics result from the LHC. The first paper was submitted by the ALICE Collaboration on 28-November (see my discussion here), and the second by the CMS Collaboration on 3-February (my discussion is here). All three papers concern the same measurement – the number of primary charged hadrons produced in non-single-diffractive events, and their distributions in pseudorapidity (η) and transverse momentum (pT).

The authors take care to define their suite of models for min-bias events. There are essentially two event generators available: PYTHIA and PHOJET, with different theoretical underpinnings, and there are several “tunes” of free parameters in PYTHIA which have been set using data from the Tevatron.

In this kind of analysis, the selection of tracks is crucial. The authors set a minimum pT of 500 MeV (CMS used 100 MeV, and the ALICE magnet was off). The pseudorapidity range is |η|<2.5, compared to 2.4 in CMS and 1.6 in ALICE. The agreement of the detector simulation with the data is remarkable, allowing precise control of the contamination from secondary hadrons (e.g. pions produced in K-short decays). In contrast to the CMS and ALICE papers, the ATLAS paper shows clear measurements of vertex and track reconstruction efficiencies. (This is not to say that ALICE and CMS did not measure the efficiencies – they simply chose not to show the results.) Furthermore, this paper describes in detail how corrections were made for inefficiencies, contamination and resolution. These corrections are not large, and the ATLAS methods are standard. There is a thorough investigation of possible systematic effects, which all turn out to be quite small.

Corrected distributions are nicely presented:

ATLAS corrected distributions for charged multiplicities.

The plot of dN/dη shows the characteristic sea-gull shape I discussed earlier (it arises from the use of pseudorapidity η in place of rapidity y – the Jacobian dy/dη = p/E falls below one for massive particles at central angles, carving out the dip at η≈0). It is plain that the charged multiplicity is significantly underestimated by all models, and the discrepancy is worst in the center. The paper reports an average value of 1.333±0.003±0.040 for |η|<0.2. This cannot be directly compared to the CMS result because it is restricted to pT>0.5 GeV; nonetheless, the fact that the models underestimate the multiplicity by a few percent is nicely confirmed.

The plot of dN/dpT shows that models also fail to reproduce the distribution for pT>0.7 GeV. For the highest pT reached, around 10 GeV, the discrepancy is as large as 50%. (The CMS paper shows no such comparisons, showing instead a fit to an empirical function.) The lower plot on the right shows that the discrepancy is worst in events with many charged hadrons.

Here is a very nice composite of measurements by ATLAS, CMS and UA1:

Comparison of the pT spectra from ATLAS, CMS and UA1


If you look carefully at the ratios displayed in the lower part of the plot, you will see that the CMS multiplicities are systematically lower than those measured by ATLAS. The authors understand this to come from the way CMS normalize their result. The UA1 result is higher due to the definition of the trigger used. Evidently, a question as simple as “What is the charged multiplicity?” inevitably requires choices that can be arbitrary to some degree. The ATLAS authors go the extra mile and derive a measurement that can be directly compared to the CMS result, by reducing their range in |η| and applying a model-dependent correction for non-single-diffractive events as was done by the CMS authors. They find, for pT>0.5 GeV, dN/dη=1.240±0.040, to be compared to the CMS measurement of 1.202±0.043; in both cases the errors are mainly systematic uncertainties. The figure above shows the CMS data stopping at 4 GeV because the CMS paper does not report results above that value.

The ATLAS analysis looks to be very solid and thoroughly done. Due to the limitation pT > 0.5 GeV, the authors cannot extrapolate to the full phase space, so there is no result on the mean pT or the total charged multiplicity to be compared to earlier experiments. There is also no result in this paper from the 2.36 TeV data, for which CMS observed an increase in the charged multiplicity that is substantially higher than predicted by the models.

In short, both ATLAS and CMS agree that the models for min-bias events are inaccurate and need to be tuned in order to understand the LHC data. This has spurred lots of activity among the experimental and theoretical communities concerned with this kind of physics – in the coming months there will be lots of discussion of models for non-perturbative QCD interactions, and more measurements with which to tune them.

Congratulations to the ATLAS colleagues! :)

March 17, 2010 at 3:55 am 3 comments

b jets are hard to count!

b-jets are hard to count – or rather, they are hard to predict? Who knows? There is a major discrepancy reported by CDF in what should be a concrete, direct, basic quantity – the cross section for producing a W-boson and one or two b-quark jets (arXiv:0909.1505):

First Measurement of the b-jet Cross Section in Events with a W Boson in p-pbar Collisions at √s = 1.96 TeV

Noteworthy, already, is the fact that this is the first such measurement… The authors have aimed for a very pure event sample, sacrificing a lot of b-jet efficiency, so a lot of integrated luminosity is required to make the measurement. For this report, 1.9 fb⁻¹ were analyzed.

In a nutshell, the analysis runs as follows. Events with W-bosons are selected in the usual way, by requiring a high-pT electron or muon and significant missing energy. Jets are reconstructed with the cone algorithm with a radius ΔR = 0.4, and a minimal set of jet energy corrections is applied. Events are retained if there are exactly one or two jets satisfying ET>20 GeV and |η|<2, which are reasonably safe cuts in terms of jet reconstruction and acceptance. There are about 175k such events.

The next crucial step is to tag the jets produced from b-quarks. The authors use a super-tight version of the well-known secondary vertex tag. These secondary vertices are reconstructed from at least three well-reconstructed tracks with significant individual impact parameters, and then the position of the reconstructed secondary vertex must be at least 7.5 σ from the primary vertex – this is a very hard cut which reduces the contamination from light quarks and gluons by an order of magnitude, and from charm quarks by a factor of four with respect to the standard cuts. This reduces the number of tagged jets down to 934.
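As a rough illustration, the super-tight tag boils down to two requirements – this is my own sketch with invented attribute names, not CDF software:

    def passes_tight_svx_tag(vertex):
        # vertex: hypothetical secondary-vertex object with
        # n_tracks, decay_length and decay_length_error attributes.
        if vertex.n_tracks < 3:            # at least 3 good tracks
            return False
        # displacement significance: L / sigma(L) >= 7.5
        return vertex.decay_length / vertex.decay_length_error >= 7.5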

Here is the interesting part, technically. In order to infer the composition of their sample, the authors form the distribution of the vertex mass, that is, the invariant mass of all the tracks coming from the secondary vertex. This quantity is known to be higher, statistically, for heavy-flavor jets than for light-quark jets, and it has been used in tagging methods in the past. The distribution is fit to three templates: one for b-jets, one for c-jets, and one for light-quark and gluon jets. The b-jet and c-jet templates are taken from simulation, and have been shown to be reliable using a sample of b-jets tagged with muons inside the jet. The light-quark and gluon jet template is also taken from simulation, and checked using jets that give a negative secondary vertex parameter. In any case, the c-quark and light-quark templates are not crucial since the sample to be fit is so pure, as can be seen from the distribution itself:
CDF distribution of vertex mass
(By the way, let me remark that this is the right way to make a plot – shading with simple colors, large dots for the data, a thin line for the sum, a clear key, and the measured quantities, all in a style that is easy to read even when the plot is not large.)
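For readers who like to see the mechanics, here is a minimal sketch of such a binned template fit – my own illustration under simple assumptions (unit-normalized templates, Poisson bin statistics), not the CDF fit itself:

    import numpy as np
    from scipy.optimize import minimize

    def fit_flavor_fractions(data, t_b, t_c, t_light):
        # data: observed vertex-mass histogram (counts per bin);
        # t_b, t_c, t_light: unit-normalized template histograms.
        n_total = data.sum()

        def nll(params):
            f_b, f_c = params
            f_l = 1.0 - f_b - f_c
            if min(f_b, f_c, f_l) < 0.0:
                return np.inf              # keep fractions physical
            mu = n_total * (f_b * t_b + f_c * t_c + f_l * t_light)
            mu = np.clip(mu, 1e-9, None)
            # Poisson negative log-likelihood (up to a constant)
            return np.sum(mu - data * np.log(mu))

        result = minimize(nll, x0=[0.7, 0.15], method="Nelder-Mead")
        return result.x                    # best-fit (f_b, f_c)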

The fraction of b-jets is 71%, which is quite pure in a situation like this. The corresponding number of b-jets is 670±44. Of these, 152±21 b-jets come from physics background processes such as top-quark pair production, single-top and di-boson production. The QCD contribution (i.e., events with a fake W) is estimated to be 25±8, which is cross-checked with a control sample.

In order to compare the number of signal events (493±48) to predictions, the authors convert this yield into a cross section defined within the geometric and kinematic acceptance of the lepton and jets. This removes the acceptance uncertainty from the experimental number, and forces the theoretical predictions to be computed within the cuts chosen for this analysis. Since the authors are able to run the theoretical codes to obtain the predictions, this can be done in a consistent manner. It would be difficult, however, for D0 to check the CDF result, since D0 will necessarily use different cuts dictated by their detector.

The CDF result is:

σ×B = 2.74 ± 0.27 ± 0.42  pb

where the B stands for the branching ratio of W-bosons decaying to leptons. A recent NLO calculation by Campbell, Febres Cordero and Reina gives

σ×B = 1.22 ± 0.14  pb

which is clearly too low – by a factor of 2.25. This is a huge discrepancy. If we compute the difference between the measured and predicted cross sections, adding all three errors in quadrature, we obtain

Δσ×B = (2.74 − 1.22) ± √(0.27² + 0.42² + 0.14²) pb = 1.52 ± 0.52  pb

which is nearly three standard deviations away from zero.

Where might the experimenters have gone wrong? It is hard to see where a factor of two could appear. The plot of the vertex mass above shows that the sample is, basically, b-jets, with a purity known at least to 10% or so. The acceptance correction has been minimized, so that can’t be off by a factor of two, either. How about the b-tag efficiency? The simulation gives ε=0.177, which must be corrected by a factor of 0.88 to conform to the data. Could this be off by a factor of two? Unlikely, given all the measurements of top quark production already successfully published by CDF and checked by D0. All other efficiencies are in the high nineties in percent.

So the ball is in the theorists’ court. A response has already been published by Febres Cordero, Reina and Wackeroth in arXiv:1001.3362. I know Reina and Wackeroth, and they have an established record of very high quality work. As stated above, their best number right now is a factor of 2.25 below the CDF result. More primitive calculations give 1.10 pb from PYTHIA and 0.78 pb from ALPGEN.

There have been other measurements of the rate of b-jet production, of course; usually they do not attain the purity and incisiveness of this analysis. A recent measurement of the production of a Z-boson with a pair of b-jets, relative to Z-bosons with a pair of jets of any kind (arXiv:0812.4458), gave (3.32±0.53±0.42)×10⁻³, which does not contradict the ALPGEN prediction of 2.1×10⁻³. Other predictions include (2.3–2.8)×10⁻³ from MCFM and 3.5×10⁻³ from PYTHIA. Basically, the predictions are correct.

We have a real mystery here. The theorists are investigating this result, and maybe they will find the answer. Meanwhile, the LHC is scheduled to provide collision data later this month, and one can hope that the LHC experiments will make this measurement before the end of the year. Will there be a factor of two more events than predicted – which would have major implications for new particle searches – or will the discrepancy be even larger??

March 14, 2010 at 8:55 am 5 comments

OPERA measured the cosmic muon charge ratio

Muons from cosmic rays rain down from the atmosphere all the time. If you were an empiricist, what would you do? You might compare the number of positive to the number of negative muons, and find that the ratio (pos/neg) is not one – it is about 1.3.

Why is that interesting now, many decades after the first studies of cosmic rays were conducted? Answer: this ratio differs from one due to the identity of the cosmic particles that impinge upon the atmosphere, and due to the particle physics of the cascades they cause. Specifically, the production of pions, kaons and even charmed hadrons plays a role that varies with the energy of the cascade. A high charge ratio may be linked to kaons, since there is an asymmetry in the production of K+ and K-, or to a pure proton flux; a ratio close to one hints at heavy nuclei with a nearly equal number of protons and neutrons, and a predominant production of pions in the cascade, for which there is no charge asymmetry.

Let me point out that the “chemical composition” of cosmic rays is an interesting topic – see an earlier post about extra-galactic iron nuclei detected by the Pierre Auger Collaboration.

The OPERA experiment was designed and built to study neutrino oscillations. It was installed deep underground in the Gran Sasso Laboratory (Italy), so the cosmic rays that pass through the apparatus have a high energy, on the order of a TeV. Over the course of four months, they recorded a few thousand good events with trajectories that allowed the authors to measure the momentum and charge. They posted their paper on the archive yesterday (arXiv:1003.1907).

For the purposes of this measurement, the OPERA detector is essentially a double-arm spectrometer which can give two measurements of the deflection of a muon in a 1.53 T magnetic field. The size of the deflection tells you the momentum, while the direction of the deflection tells you the charge of the muon. The trajectory is constructed from hits recorded in resistive plate chambers. The problem of false charge measurements due to bad tracking or large multiple scattering can be controlled adequately by comparing the two measurements. The simulation underestimates these effects somewhat, but not enough to spoil the measurement. Special attention is paid to the alignment of the chambers.
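A cartoon of the charge logic might look like this – my own sketch, not OPERA software, with an illustrative sign convention:

    def muon_charge(deflection_arm1, deflection_arm2):
        # Each spectrometer arm measures a bending angle whose sign,
        # for a known field polarity, encodes the muon charge.
        q1 = +1 if deflection_arm1 > 0 else -1
        q2 = +1 if deflection_arm2 > 0 else -1
        # Requiring the two independent measurements to agree
        # suppresses false charge assignments from bad tracking
        # or large multiple scattering.
        return q1 if q1 == q2 else None    # None: reject the event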

Here is the measurement by the OPERA Collaboration, compared to other recent measurements:

Ratio (pos/neg) in bins of Eμ cos(θ*).

The vertical axis is the ratio Rμ of positive to negative muons at the surface of the earth – the authors corrected their observations made in the underground cavern in Gran Sasso for the propagation of muons through the rock above. The horizontal axis is not just the muon energy Eμ, it includes a factor cos(θ*) which takes into account the inclination of the cascade, the radius of the earth and the height of the cascade.

In the figure, the OPERA points are the five filled circles above 10³ GeV. The curves represent the predictions of four models of cascades in the atmosphere produced by cosmic rays. The OPERA measurements are not able to distinguish those models, but they do agree with and extend prior measurements by MINOS (another underground neutrino experiment) and an experiment in Utah (1975).

This is a nice measurement which adds incrementally to our empirical knowledge of cosmic rays.

March 11, 2010 at 6:07 am 3 comments
