Archive for March, 2011

Impressive New Results from LHCb

The LHCb Collaboration has accomplished with 37 pb⁻¹ what the Tevatron experiments required several fb⁻¹ to do. They set the limit (arXiv:1103.2465, 12-Mar-2011):

Br(Bs→μ+μ−) < 5.6 × 10⁻⁸ at 95% C.L.

Impressive!

What is this about? The Bs meson is neutral and contains a b-quark and an s-quark. In the standard model, the complete annihilation of these two quarks to produce a very clean and distinctive μ+μ− pair is exceedingly rare – the predicted branching ratio is about 0.3×10⁻⁸. Since it is so very small, one can hope that new physics would lead to a large enhancement. Indeed, factors of 10 or 100 are possible in SUSY if tanβ is large. Observation of this extremely rare decay would constitute an unequivocal discovery of physics beyond the standard model, albeit through a loop effect.

How did the LHCb Collaboration achieve such an impressive result? The main answer is: they have a wonderful detector and they made an intelligent analysis. Here are some of the salient points:

They are looking for a narrow peak on an almost-flat background, so the resolution on the μ+μ− mass is crucial. They truly have a wonderful spectrometer – the momentum resolution is 0.5% at 100 GeV, two or three times better than CMS's, which in turn is better than ATLAS's. This means they can use a narrow window in which to look for Bs decays: they choose ±60 MeV, compared to the ±120 MeV window used by CDF. For a flat background, halving the window already halves the background.

Another important fact about their muon sample is its purity: only 10% of the muons selected for this analysis are really hadrons decaying in flight, so the combinatorial and physics backgrounds are well under control. In fact, the analysis makes minimal use of simulations, following the kind of self-calibrating methods used in the CDF analysis (arXiv:0712.1708).

Aside from the mass, LHCb uses a so-called geometrical likelihood (GL) which incorporates topological and kinematic information independent of the mass: the decay time, the muon impact parameter, the B impact parameter, the distance of closest approach of the muons, a kind of isolation variable, and the pT of the B. This GL gives a huge boost to background suppression.
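The GL construction in the paper involves careful transformations of the inputs, but the core idea is just a multivariate likelihood ratio. Here is a minimal sketch in Python, with entirely invented pdfs standing in for the real signal and background distributions (LHCb derives theirs from data and simulation):

    from scipy.stats import expon, norm

    def gl_discriminant(decay_time, muon_ip, dca):
        """Toy log-likelihood ratio: log[P(x|signal) / P(x|background)].
        Every pdf below is invented for illustration only."""
        ll = 0.0
        ll += expon(scale=1.5).logpdf(decay_time) - expon(scale=0.5).logpdf(decay_time)
        ll += norm(0.1, 0.05).logpdf(muon_ip) - norm(0.0, 0.02).logpdf(muon_ip)
        ll += norm(0.0, 0.01).logpdf(dca) - norm(0.0, 0.05).logpdf(dca)
        return ll  # large value = signal-like

Cutting or binning in such a combined discriminant suppresses background far more effectively than any single variable could.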

LHCb use an interesting strategy based on bins in mass and the GL. They fit the background and signal in each bin, knowing that some bins will contain background only, even if a Bs→μ+μ− signal is present. This allows nice control of the data – it builds in the concept of control regions in an organic way. Here are the plots of the four bins in GL:

[Figure: LHCb mass distributions in the four GL bins]


As you can see, there is plenty of background at low GL, which diminishes quickly for higher GL values. There are, alas, no peaks.

There are many other technical details that I won’t describe here. They concern the careful control of uncertainties on the efficiencies, the handling of the normalization modes, and the trigger, which is rather amazing in and of itself. Please read through the paper if this analysis interests you.

In the absence of signal, LHCb sets limits on the branching ratio using the standard CLs method:

[Figure: LHCb limit on Bs→μ+μ−]


Their limit is completely consistent with expectation, and compares well to the best limit from the Tevatron: 4.3×10⁻⁸, a preliminary CDF result based on 3.7 fb⁻¹ – i.e., based on one hundred times more luminosity.
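For readers unfamiliar with the CLs method: for a single counting bin it reduces to a ratio of two Poisson p-values, and the limit is the signal yield at which that ratio drops to 0.05. A minimal sketch (the real LHCb computation combines many mass × GL bins and folds in nuisance parameters):

    from scipy.stats import poisson

    def cls(n_obs, b, s):
        """CLs for one counting bin: p-value under signal+background
        divided by p-value under background only."""
        return poisson.cdf(n_obs, b + s) / poisson.cdf(n_obs, b)

    def upper_limit_95(n_obs, b, step=0.01):
        """Scan the signal yield until CLs drops below 0.05."""
        s = 0.0
        while cls(n_obs, b, s) > 0.05:
            s += step
        return s

    # e.g. observe 3 events with 3 expected background events:
    print(upper_limit_95(3, 3.0))   # ~5.4 signal events excluded at 95% C.L.

Dividing by the background-only p-value is what protects the method from excluding signals to which the experiment is not actually sensitive.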

(Note: they also set a limit on Bd→μ+μ−, but the interest from the SUSY point of view lies with the Bs limit.)

This result will not extend exclusion regions in MSSM parameter space because it is not better than existing Tevatron bounds. But one expects some tens or hundreds of pb⁻¹ by the summer, and at that point, LHCb might actually see a signal (even if only from the standard model). That would be very interesting indeed!


March 17, 2011 at 6:48 pm 2 comments

Looking for Exotic SUSY Signals

We do not know where the first signs of new physics will show up, so it is best to look everywhere at the same time. This includes classical channels for SUSY like those I discussed earlier (jets and missing energy, with or without isolated, high-pT leptons), as well as strange signals that were suggested long after SUSY was an established favorite theory.

One of the more exotic signals for new physics comes from a heavy, charged, quasi-stable particle – one that travels well below the speed of light and leaves a trail of anomalous ionization in the detector. Such hypothetical particles have various names – “CHAMPs” at CDF and “SMPs” at ATLAS, for example. They appear as predictions of some versions of SUSY and also of other theories. From an experimenter’s viewpoint, they are simply wonderful things to go look for, since a search for SMPs draws upon capabilities of the experiment that are not important for most other searches for new physics. In a word, such searches are “fun.”

The ATLAS Collaboration employed two subdetectors to try to find stable charged massive particles (arXiv:1103.1984, 10-Mar-2011). An important strength of their approach is that these two detectors provide independent information with different systematics, so a false signal in one will not be correlated with a false signal in the other. Neither detector is powerful enough to find a clean signal alone, but the two combined are quite powerful.

Firstly, they use their pixel detector to measure the specific ionization of a particle. The ionization is high for particles that travel slowly, and a particle can have high pT yet a small velocity v if its mass M is large. They require pT > 50 GeV, so only a particle with a mass of hundreds of GeV can pass that cut while still being slow enough to give a large ionization signal in the pixel detector. They use standard measures and calibration techniques for establishing their ionization signals, and the observed distribution confirms their simulation of standard processes. The selected sample contains a bit more than 5000 particles.

Secondly, they use their tile calorimeter to measure the time-of-flight. This device provides a time resolution of about 1 ns based on time-sampled signals from three layers of scintillating tiles interspersed with iron. These layers lie 2.3 to 5.3 m from the interaction point, and an SMP can produce up to six independent time measurements.
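To get a feel for the numbers, take a hypothetical SMP of mass 300 GeV carrying 100 GeV of momentum (both values invented for illustration) and compute its speed, the rough 1/β² enhancement of its ionization, and its extra time-of-flight over the outermost tile layer:

    import math

    C = 0.2998  # speed of light in m/ns

    def beta(p, m):
        """Velocity in units of c, for momentum p and mass m in GeV."""
        return p / math.sqrt(p**2 + m**2)

    p, m = 100.0, 300.0              # hypothetical SMP
    b = beta(p, m)                   # well below the speed of light
    dedx_boost = 1.0 / b**2          # crude 1/beta^2 scaling of ionization
    delay = 5.3 / (b * C) - 5.3 / C  # extra time-of-flight over 5.3 m, in ns

    print(f"beta = {b:.2f}, ionization x{dedx_boost:.0f}, delay = {delay:.0f} ns")
    # -> beta = 0.32, ionization x10, delay = 38 ns

With roughly 1 ns timing resolution, a delay of tens of nanoseconds is unmistakable, which is why the time-of-flight measurement is so powerful.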

The basic performance of these two devices can be gleaned from the plots below. Notice that signals from hypothetical SMPs are very different from particles produced in standard model processes. A signal of this type would be experimentally much more dramatic than, say, an excess in the tail of the missing energy distribution, or a few extra tri-lepton events.

[Figure: Two measurements sensitive to SMPs]

The ATLAS scientists understand these two detectors well enough to be able to construct an accurate model for the resolution as a function of pT. Using a Monte Carlo technique, they convolve these resolution functions with observed particle spectra to predict the tails of the distributions, as shown in these plots:

[Figure: Background estimations for the two measurement techniques]

The data conform to this prediction very well, which means there is no obvious sign of SMPs, unfortunately. Notice the dramatic difference between the standard model and new physics signals. Keep in mind, also, that these two quantities are independent, so a signal in one would presumably be confirmed in the other. In practice the two observables are combined and a double two-sigma window is defined to try to isolate a clean signal sample as a function of the hypothetical SMP mass. For a nominal mass of 100 GeV, 5.4 events are predicted and 5 are observed; for higher masses the predicted background falls rapidly and no events are observed. The systematic uncertainty on the signal yield is slightly less than 20%. A plot of the upper limit on the cross section as a function of mass is given below.
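The background prediction is easy to demonstrate in toy form: draw particles from the observed spectrum, smear each measurement by the resolution, and count migrations past the cut. Every number below is invented; ATLAS measures its resolution functions from data:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 1_000_000
    pt = 50.0 + rng.exponential(20.0, n)        # stand-in pT spectrum, GeV
    nominal = np.ones(n)                        # nominal value of the observable
    sigma = 0.05 * (1.0 + pt / 200.0)           # invented resolution vs. pT
    smeared = nominal + rng.normal(0.0, sigma)  # convolve with the resolution

    # Expected number of ordinary particles fluctuating past the cut:
    print((smeared > 1.3).sum())

Comparing such a prediction to the observed tail is exactly the kind of data-driven check that makes the analysis robust.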

[Figure: Upper limits on the cross section as a function of mass]

The green falling curves are theoretical predictions for gluinos and squarks; the lower limit on the gluino mass is about 580 GeV, depending on the details of its hadronization and the way the resulting hadron (called an R-hadron) interacts with the detector material. This is much, much better than the result published earlier by the CMS Collaboration (arXiv:1101.1645, 9-Jan-2011). (The results on squarks are likewise much better than bounds coming from LEP and CDF.)

Why is the ATLAS limit so much better than the CMS one? The reason is very simple: the CMS result was based on only 3.1 pb⁻¹, while this nice ATLAS result uses ten times more data. The CMS analysis used the ionization in the silicon detectors and the presence of hits in the muon chambers; no time-of-flight information was used. Finally, the R-hadron scenario considered was a pessimistic one. The upper limits on cross sections are about 7 pb for the CMS analysis, compared to about 0.8 pb for the ATLAS analysis. If these background-free limits scale as 1/luminosity, the two analyses have similar sensitivity.
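A quick sanity check of that last statement: for a background-free search, the 95% C.L. limit is roughly σ < 3/(ε·L), so the product of limit and luminosity should be roughly constant for comparable efficiency ε. Indeed, 7 pb × 3.1 pb⁻¹ ≈ 22, while 0.8 pb × 31 pb⁻¹ ≈ 25 (taking the ATLAS luminosity to be about ten times the CMS one). The two numbers are close, which is what “similar sensitivity” means here.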

In any case, the bottom line is that ATLAS has set a new, stringent upper bound on quasi-stable charged massive particles. The reach should extend up toward 1 TeV later in 2011.

March 14, 2011 at 12:38 pm 5 comments

A puzzle concerning the underlying event

I love this period of high energy physics. Of course we are all hoping for a wonderful discovery that ushers in a new era in particle physics, etc. etc. Also, the challenge to SUSY is interesting at least for those of us who had positive subjective attitudes toward it in the past.

But there’s more! There is something worth your attention in nearly every subject you can think of. Today’s post is meant to be an example.

The underlying event is the unwanted stuff that comes along with the main interaction. On the one hand, it is a nuisance. On the other, it is very rich – and we don’t know how to describe it in detail. While advances in this area will not elucidate electroweak symmetry breaking or dark matter, the problems are knotty and you should “get your hands dirty” by working on them – or at least making the effort to pay attention.

Look at this plot produced by the ATLAS Collaboration (arXiv:1103.1816, 9-March-2011):

[Figure: ATLAS plot of particle density vs. Δφ]


It shows the particle density as a function of the azimuthal angle Δφ with respect to a moderately high-pT particle, called the leading particle. The choice of colors and dash/dot lines makes the plot hard to interpret, but if you make an effort, you will notice that none of the theoretical curves matches the data points. (The blue-grey boxes around the points represent the systematic uncertainties, which are not important in this situation.)

I have written about plots like this before, and I am sure that Tommaso Dorigo has written about the underlying event too. The peak in the middle comes from the quasi-jet to which the leading particle belongs. Think of a quark flying out at Δφ=0, leading to a spray (“jet”) of hadrons in a narrow cone – the leading particle happens to be the most energetic of them. The broad hump at ±π is just the recoil, i.e., the quark or gluon on the other side.

The interesting region is in between – what we call the “transverse” region. What should go at right angles to the main jet-jet axis (given by Δφ=0 and π)? Evidently, this is hard to say – witness the wide range of predictions. The solid black line is the favorite choice of the ATLAS Collaboration. It does a good job in the jet-jet region, but it is too low in the transverse region, despite extensive tuning on other distributions and observables. The funny thing is that it is good in the transverse region and not so good in the jet-jet region if the leading particle is less energetic. So this bull-horn behavior changes as a function of the transverse momentum of the leading particle, and the relative successes of the models vary, too.
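For reference, the conventional way to slice the event in underlying-event analyses (the precise boundaries used by ATLAS should be checked against the paper, but this convention is standard) classifies each particle by its azimuthal angle relative to the leading particle:

    import math

    def ue_region(dphi):
        """Classify a particle by |dphi| relative to the leading particle,
        using the conventional underlying-event regions."""
        dphi = abs(math.remainder(dphi, 2 * math.pi))  # fold into [0, pi]
        if dphi < math.pi / 3:
            return "toward"      # the leading quasi-jet
        elif dphi < 2 * math.pi / 3:
            return "transverse"  # most sensitive to the underlying event
        else:
            return "away"        # the recoil

Observables like the density plotted above are then just counts (or pT sums) accumulated in each region, divided by its area in η–φ.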

Think about what we’re talking about here. The jet-jet axis part seems clear enough – two partons fly out back-to-back and produce two sprays of particles, one narrower than the other. But what is the origin of the particles in the transverse region? In a naive, textbook view with arrows and Feynman diagrams, there are no partons flying out in that direction. Could the particles simply be produced by the hadronization of the main process – i.e., by the color string wiggling and breaking or by the color dipoles radiating? Not according to the models. Apparently these particles are not produced by the main process – they have some other origin. Our best guess is other partons scattering when the protons collide, or something like that. This sounds pretty murky, and it is – hence the difficulty in modeling it successfully. Maybe this is a good opportunity for someone to clarify and shed light on the topic?

Here is another observable – the mean pT² as a function of the pT of the leading particle, in the transverse region.

[Figure: ATLAS plot of mean pT² as a function of the pT of the leading particle]


If you trace the individual predictions, you will see that some of them fail completely, while others fail only for part of the range. Somehow, getting the right answer is not easy.

These measurements have been made before. The first measurements at the LHC were done by CMS (Eur. Phys. J. C70 (2010) 555) and afterward ATLAS published a paper too (arXiv:1012.0791). Those two analyses are based on charged tracks, while the new ATLAS paper is based on calorimeter energy clusters in addition to charged tracks. While this may seem like a minor difference, in fact it is important, as the ability to resolve fine structure with the ATLAS calorimeter is very good, with some advantages over charged tracks. Thus there is a technical improvement, of sorts, in the new ATLAS paper with respect to the old.

So – can anyone explain the transverse region in detail? Is this an eternal mystery? Maybe someone in an alternate universe managed it. 😉

March 12, 2011 at 9:49 am 1 comment

Comparing the ATLAS and CMS SUSY Searches

The ATLAS jet+MET paper (arXiv:1102.5290) came out on 25-Feb, some weeks after the CMS jet+MET paper (arXiv:1101.1628) on 8-Jan. Remarkably, they are really quite different.

Neither paper reports evidence for supersymmetry, a fact that has been discussed on several other blogs, such as Not Even Wrong, Quantum Diaries Survivor and Backreaction among others. I’ll avoid the tedious debates about whether SUSY is in trouble or is a bad idea in the first place.

My interest is much more in what the experiments did with their data.

Someone made the pithy statement: CMS optimized for background rejection, while ATLAS optimized for the SUSY signal. There is some real truth in this statement. The CMS analysis is extremely robust, clever and safe. There is little chance of a false positive and a very conservative, even skeptical attitude toward the detector and reconstruction algorithms has been adopted. The ATLAS analysis follows the paths laid out at the Tevatron, grasping the SUSY signatures by the horns and simply going for it as if they had years of experience with multi-jet event topologies.

Keep in mind that both papers are looking at events with multiple hadronic jets and missing transverse energy (“MET”). They both veto events with leptons and the jet reconstruction is quite similar. The initial event samples from ordinary pp collisions are huge and looking for a signal for new physics means picking a promising corner of the haystack and hoping to find the needle there.

The CMS analysis works with a kinematic variable called αT, given as the ratio of the ET of the second jet to MT, the transverse mass of the event reduced to a di-jet system. The αT distribution is the crux of the analysis, because the QCD background should fall below 0.5 but a SUSY signal will have a (small) tail above 0.5. Look at these plots from the CMS paper:

[Figure: αT distribution from CMS]


Notice the log scale. There is a tremendous amount of background below the cut αT < 0.55, which is clearly very well controlled by the CMS reconstruction; almost nothing leaks above this cut. Unfortunately, most of the likely SUSY signal also falls below the cut, so the CMS search is really trying to catch the cat by its tail.
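For the curious, αT is easy enough to write down for a dijet system (for more jets, the event is first reduced to a pseudo-dijet). A sketch in the massless-jet approximation:

    import math

    def alpha_t(et1, et2, phi1, phi2):
        """alpha_T: ET of the softer jet over the transverse mass of
        the dijet system (jets treated as massless)."""
        px = et1 * math.cos(phi1) + et2 * math.cos(phi2)
        py = et1 * math.sin(phi1) + et2 * math.sin(phi2)
        mt = math.sqrt((et1 + et2) ** 2 - px**2 - py**2)
        return min(et1, et2) / mt

    # A perfectly balanced back-to-back dijet gives exactly 0.5:
    print(alpha_t(100.0, 100.0, 0.0, math.pi))   # 0.5
    # A mismeasured (unbalanced) QCD dijet falls below 0.5:
    print(alpha_t(100.0, 80.0, 0.0, math.pi))    # ~0.45

Only genuine missing momentum, of the kind produced by invisible particles, can push αT above 0.5, which is what makes the cut so clean.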

The ATLAS analysis takes samples of 2-jet and 3-jet events and applies hard kinematic cuts on the MET and on an effective mass variable, meff, which is just the sum of the MET and the ET of the two most energetic jets. For the most part they end up counting Z+jets events in which the Z has decayed to neutrinos, as you can see from these two plots:

[Figure: meff from the ATLAS SUSY search]


Of course, the agreement of the data with the simulation is excellent, as in the case of the CMS plot above. One should not under-appreciate how important such agreement is, given the coming challenges of finding (or not) supersymmetry in larger data samples.
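The effective mass itself is trivial to compute; a sketch following the definition quoted above:

    def m_eff(met, jet_ets):
        """Effective mass: MET plus the scalar sum of the leading jet ETs
        (two or three, depending on the channel)."""
        return met + sum(jet_ets)

    # e.g. a 2-jet event with MET = 300 GeV and jets of 250 and 150 GeV:
    print(m_eff(300.0, [250.0, 150.0]))   # 700.0 GeV

Heavy sparticle production piles up energy in both terms, which is why a hard cut on meff is such an effective discriminant.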

One point of the CMS analysis which I particularly like is the way the backgrounds are estimated through several data-driven methods. You must read the paper for the details. There are two ways of extrapolating from a kinematic region dominated by background into the signal region. There is a direct measurement based on a W→μν + jets sample, which is exploited in two different ways. Finally, Z→νν backgrounds are checked with γ+jets. One has no doubt that the background really is pinned at something like 10±3 events, somewhat below the 13 events observed; the CMS exclusion contour is therefore weaker than one would have predicted in advance.
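The actual CMS methods are more elaborate than I can cover here, but the flavor of a data-driven extrapolation is captured by the classic “ABCD” toy: two roughly independent variables define three control regions and one signal region, and the background in the signal region is predicted entirely from data:

    def abcd_estimate(n_a, n_b, n_c):
        """Predict the background in signal region D from control regions
        A, B and C, assuming the two variables are independent for
        background: N_D = N_B * N_C / N_A."""
        return n_b * n_c / n_a

    # e.g. 1000, 50 and 200 background events in the control regions:
    print(abcd_estimate(1000, 50, 200))   # 10.0 expected in the signal region

The point is that no simulation enters: the estimate, and its uncertainty, come from the data themselves.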

The ATLAS analysis, in contrast, relies on rates and shapes taken from Monte Carlo simulations. To be sure, they have made many tests and studies to convince themselves that the simulations are reliable. But one wonders whether they would be willing to claim a discovery with this kind of analysis. Out of four subsamples of events, three of them have fluctuated downward, so their limits are slightly better than predicted.

ATLAS published a good plot which includes the CMS contour for comparison:

[Figure: ATLAS SUSY exclusion plot in the (m0, m1/2) plane]


The red line shows the ATLAS exclusion, which is far more impressive than the black line showing the CMS exclusion. As already pointed out, it is slightly better than the expected exclusion contour, given by the dashed blue line. Note that the ±1σ variations, given by the dotted lines, are quite far apart, showing that small, ordinary fluctuations have a big impact on the contour obtained from the data. The CMS expected exclusion is only slightly better than the one achieved; overall, the ATLAS expected exclusion is better than the CMS expected exclusion.

Thus, we have to return to that earlier pithy statement: CMS optimized for background rejection, while ATLAS optimized for the SUSY signal. As a result, CMS has supreme control of its background, while ATLAS achieved a more stringent exclusion in the (m0, m1/2) plane.

Of course, this is only the beginning, and both collaborations have more results in the pipeline.

(Notice: I am a member of the CMS Collaboration.)

March 6, 2011 at 8:49 am 8 comments

Where have I been?

I stopped posting to this blog, reluctantly, over a year ago. Where have I been? Why did I stop posting?

The answer is a happy one. My private life took a dramatic turn for the better, and I became the head of a household with a lovely wife and two bright children. Normally I share no details about my private life on this blog, but they are the only way to explain why I appeared to abandon it. In reality, I merely set it aside, and I will try to resume posting, between my professional life at the university and on CMS, and the often challenging and yet wonderful life with my new family. I hope people will resume reading my posts from time to time, and I hope even more that people will post a comment…

March 5, 2011 at 2:52 pm 4 comments

