Yesterday the AMS Collaboration released updated results on the positron excess. The press release is available at the CERN press release site. (Unfortunately, the AMS web site is down due to a syntax error – I’m sure this will be fixed very soon.)
The Alpha Magnetic Spectrometer was installed three years ago at the International Space Station. As the name implies, it can measure the charges and momenta of charged particles. It can also identify them thanks to a suite of detectors providing redundant and robust information. The project was designed and developed by Prof. Sam Ting (MIT) and his team. An international team, including scientists at CERN, coordinates the analysis of the data.
There are more electrons than positrons striking the earth’s atmosphere. Scientists can predict the expected rate of positrons relative to the rate of electrons in the absence of any new phenomena. It is well known that the observed positron rate does not agree with this prediction. This plot shows the deviation of the AMS positron fraction from the prediction. Already at an energy of a couple of GeV, the data have taken off.
The positron fraction unexpectedly increases starting around 8 GeV. At first it increases rapidly, then more slowly above 10 GeV, up to 250 GeV or so. AMS reports that the turn-over to a decrease occurs at 275 ± 32 GeV, though this is difficult to see in the data:
This turnover, or edge, would correspond notionally to a Jacobian peak — i.e., it might indirectly indicate the mass of a decaying particle. The AMS press release mentions dark matter particles with a mass at the TeV scale. It also notes that no sharp structures are observed – the positron fraction may be anomalous but it is smooth with no peaks or shoulders. On the other hand, the observed excess is too high for most models of new physics, so one has to be skeptical of such a claim, and think carefully about an astrophysical origin of the “excess” positrons — see the nice discussion at Resonaances.
As an experimenter, it is a pleasure to see this nice event display for a positron with a measured energy of 369 GeV:
Finally, AMS reports that there is no preferred direction for the positron excess — the distribution is isotropic at the 3% level.
There is no preprint for this article. It was published two days ago in PRL 113 (2014) 121101.
A bit more than a year ago I was pleased to see a clear signal from CMS for the decay of Z bosons to four leptons. Of course there are literally millions of recorded Z decays to two leptons (e+ e– and μ+ μ–) used for standard model physics studies, lepton efficiency measurements, momentum/energy scale determinations and detector alignment. But Z→4L is cuter and of some intrinsic interest, being relatively rare.
It turned out the main interest of physicists who analyzed the signal was Higgs boson decays to four leptons. By now that Higgs signal is well established and plays an important role in the Higgs mass measurement, but at the time of the CMS publication (InSpire link, i.e., arXiv:1210.3844 October 2012), Z→4L provided the ideal benchmark for H→4L.
You might think that the rare decay Z→4L had been well studied at LEP. In fact, it was quite well studied because the ALEPH Collaboration had once reported an anomaly in the 4L final state when two of the leptons were tau leptons. (At the time, this observation hinted at a light supersymmetric Higgs boson signal.) The anomaly was not confirmed by the other LEP experiments. A perhaps definitive study was published by the L3 Collaboration in 1994 (InSpire link). Here are the plots of the two di-lepton masses:
Most of the events consist, in essence, of a virtual photon emitted by one of the primary leptons, with that virtual photon materializing as two more leptons – hence the peak at low masses for the Mmin distribution. Note there is no point in plotting the 4-lepton mass since the beam energies were tuned to the Z peak resonance – the total invariant mass will be, modulo initial-state radiation, a narrow peak at the center-of-mass energy.
Here is the Z resonance from the CMS paper:
A rather clear and convincing peak is observed, in perfect agreement with the standard model prediction. This peak is based on the 5 fb-1 collected in 2011 at 7 TeV.
ATLAS have released a study of this final state based on their entire 7 TeV and 8 TeV data set (ATLAS-CONF-2013-055, May 2013). Here is their preliminary 4-lepton mass peak:
Clearly the number of events is higher than in the CMS plot above, since five times the integrated luminosity was used. ATLAS also published the di-lepton sub-masses:
This calibration channel is not meant to be the place where new physics is discovered. Nonetheless, we have to compare the rate observed in the real data with the theoretical prediction – a discrepancy would be quite interesting since this decay is theoretically clean and the prediction should be solid.
Since the rate of pp→Z→2L is very well measured, and the branching ratio Z→2L already well known from LEP and SLD, we can extract branching fractions for Z decays to four leptons:
SM:    BF(Z→4L) = (4.37 ± 0.03) × 10⁻⁶
CMS:   BF(Z→4L) = (4.2 ± 0.9 (stat) ± 0.2 (sys)) × 10⁻⁶
ATLAS: BF(Z→4L) = (4.2 ± 0.4) × 10⁻⁶
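These numbers can be compared quantitatively. Here is a minimal sketch (not the collaborations’ own statistical treatment) that computes the pull of each measurement against the SM prediction, adding statistical and systematic errors in quadrature:

```python
from math import sqrt, hypot

# SM prediction and measurements of BF(Z -> 4L), in units of 1e-6
sm, sm_err = 4.37, 0.03
cms, cms_err = 4.2, hypot(0.9, 0.2)   # stat and sys combined in quadrature
atlas, atlas_err = 4.2, 0.4

def pull(meas, meas_err, pred, pred_err):
    """Deviation of a measurement from a prediction, in standard deviations."""
    return (meas - pred) / sqrt(meas_err**2 + pred_err**2)

print(f"CMS pull:   {pull(cms, cms_err, sm, sm_err):+.2f} sigma")
print(f"ATLAS pull: {pull(atlas, atlas_err, sm, sm_err):+.2f} sigma")
```

Both pulls come out well under one standard deviation, which is what "matches the observed rate very well" means in practice.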
So, as it turns out, the SM prediction matches the observed rate very well.
The inclusive W and Z production cross sections are benchmarks for any hadron collider. Excellent measurements were published by CDF and D0, but the superior detector capabilities of CMS and ATLAS allow for even better measurements. Fiducial cross sections are relatively free from theoretical uncertainties and can be used to constrain the parton distribution functions (PDFs), which are of central importance for nearly all measurements done at a hadron collider. In fact, ATLAS published an interesting constraint on the strange-quark density on the basis of inclusive cross section measurements. I’ll return to this result in a future post.
The first results were published back in 2010 and then updated in 2011 and 2012, based on 7 TeV data. Since W and Z bosons are produced copiously at the LHC, very small statistical uncertainties can be achieved with a rather small amount of integrated luminosity. (We have tens of millions of Z bosons detected in leptonic decay channels, for example, far more than the LEP experiments recorded. And we have roughly ten times the number of W bosons.) Remarkably, experimental systematic uncertainties are reduced to the 1% – 1.5% level, which is amazing considering the need to control lepton efficiencies and background estimates. (I am setting aside the luminosity uncertainty, which was about 3% – 4% for the early data.) The measurements done with only 35 pb-1 are nearly as precise as the theoretical predictions, whose errors are dominated by the PDF uncertainties. We knew, back in 2011, that a new era of electroweak physics had begun.
Experimenters know the power of ratios. We can often remove a systematic uncertainty by normalizing a measured quantity judiciously. For example, PDFs are a major source of uncertainty. These uncertainties are highly correlated, however, in the production of W and Z bosons. So we can extract the ratio (W rate)/(Z rate) with a relatively small error. Even better, we can plot the W cross section against the Z cross section, as ATLAS have done:
The elongated ellipses show that variations of the PDFs affect the W and Z cross sections in nearly the same way. The theoretical predictions are consistent with the data, and tend to cluster together. (The outlier, JR09, is no longer a favored PDF set.)
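The benefit of taking a ratio can be illustrated with standard error propagation. The 3% uncertainties and the correlation coefficient below are made-up illustrative numbers, not the actual PDF errors:

```python
from math import sqrt

def ratio_uncertainty(r, frac_err_w, frac_err_z, rho):
    """Absolute uncertainty on R = sigma_W / sigma_Z for correlated inputs.

    For a ratio, the fractional variance is
      (dR/R)^2 = ew^2 + ez^2 - 2*rho*ew*ez,
    so a strong positive correlation (rho near 1) cancels the common part.
    """
    return r * sqrt(frac_err_w**2 + frac_err_z**2 - 2*rho*frac_err_w*frac_err_z)

# Illustrative 3% PDF uncertainty on each cross section, R = 10 (roughly W/Z)
uncorrelated = ratio_uncertainty(10.0, 0.03, 0.03, 0.0)
correlated = ratio_uncertainty(10.0, 0.03, 0.03, 0.9)
print(uncorrelated, correlated)  # the correlated case is much smaller
```

This is exactly why the (W rate)/(Z rate) ratio carries a relatively small error even when each cross section alone has a sizable PDF uncertainty.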
It is even more interesting to plot the W+ cross section against the W– cross section, because the asymmetry between W+ and W– production relates to the preponderance of up-quarks over down-quarks (don’t forget we are colliding two protons). Since the various PDF sets describe the d/u-ratio differently, there is a larger spread in theoretical predictions:
During the 8 TeV running in 2012, the instantaneous luminosity was much higher than in 2010, leading to high pile-up (overlapping interactions), which complicates the analysis. The LHC collaborations took a small amount of data (18 pb-1) in a low pile-up configuration in order to measure the W and Z cross sections at 8 TeV, and CMS have reported preliminary results. They produced ellipse plots similar to those ATLAS published:
You might notice that the CMS ellipses appear larger than the ATLAS ones. This is because the ATLAS results are based on fiducial cross sections – i.e., cross sections for particles produced within the detector acceptance. One has to apply an acceptance correction to convert a fiducial cross section to a total cross section. This acceptance correction is easily obtained from Monte Carlo simulations, but it comes with a systematic uncertainty coming mainly from the PDFs. (If a PDF favors a harder u-quark momentum distribution, then the vector bosons will have a slightly larger momentum component along the beam, and the leptons from the vector boson decays will be lost down the beam pipe more often. Such things matter at the percent level.) Since modern theoretical tools can calculate fiducial cross sections accurately, it is not necessary to apply an acceptance correction in order to compare to theory. Clearly it is wise to make the comparison at the level of fiducial cross sections, though total cross sections are also useful in other contexts. The CMS result is preliminary.
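The conversion itself is just a division by the acceptance; the numbers below are hypothetical, chosen only to show how an acceptance uncertainty propagates into the total cross section:

```python
def total_cross_section(sigma_fid, acceptance, d_acc):
    """Convert a fiducial cross section to a total one: sigma_tot = sigma_fid / A.

    Returns (sigma_tot, fractional uncertainty from the acceptance alone).
    """
    sigma_tot = sigma_fid / acceptance
    return sigma_tot, d_acc / acceptance

# Hypothetical numbers: 500 pb fiducial, 48% acceptance known to +/- 0.01 (absolute)
sigma, frac = total_cross_section(500.0, 0.48, 0.01)
print(f"sigma_tot = {sigma:.0f} pb, +/- {100 * frac:.1f}% from the acceptance")
```

Even a one-percent-level acceptance error is already comparable to the experimental systematics quoted above, which is why comparing at the fiducial level is the cleaner choice.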
Back when I was electroweak physics co-convener in CMS, I produced a plot summarizing hadron collider measurements of W and Z production. My younger colleagues have updated that plot to include the new 8 TeV measurements:
This plot nicely summarizes the history of these measurements, and suggests that W and Z production processes are well understood.
As I learned at the WNL workshop, the collaborations are learning how to measure these cross sections in the high pile-up data. We may see even more precise values soon.
I regret that I have nearly abandoned this blog. It has been a long time since I tried to point out interesting results in high energy physics. Happily, I was asked to give an hour-long talk about electroweak physics at the LHC, at the workshop hosted by the Tata Institute of Fundamental Research (TIFR) in Mumbai. I was able to spend many days reviewing the recent publications of the LHC experiments in order to prepare for my talk. On my way back to the US, I realized that I could use my new understanding as the basis for a series of posts. I hope I will manage to keep up with it…
I just heard about a youtube video written by Don Lincoln about the role the United States plays in CERN science – specifically, the LHC. You can view it here:
Apparently Ernest Moniz, the new Secretary of Energy, linked this to his Facebook page: https://www.facebook.com/ErnestJMoniz/posts/222609781237091
A really good video “explaining” why the Higgs is important is associated with the one by Don. It comes from MinutePhysics and is fun. Take a look:
Let’s review the tests of the Higgs couplings to fermions and massive vector bosons completed by CMS and ATLAS.
The Higgs mechanism generates mass terms in the standard model Lagrangian. Electroweak symmetry is broken at the same time that the W and Z bosons get their mass. Fermion masses, on the other hand, are generated via Yukawa terms, and have nothing to do with electroweak symmetry breaking. The Higgs couplings to the electroweak gauge bosons go as MV2/v, while the Higgs coupling to the fermions goes as Mf/v, where v is the Higgs vacuum expectation value (v = 246 GeV).
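To make the fermion side of this scaling concrete, here is a small sketch using rounded mass values (illustrative numbers, not a fit):

```python
V = 246.0  # Higgs vacuum expectation value, in GeV

def fermion_coupling(mass_gev):
    """Tree-level Higgs-fermion coupling, g_Hff = m_f / v."""
    return mass_gev / V

# Rounded fermion masses in GeV, for illustration only
masses = {"top": 173.0, "bottom": 4.2, "tau": 1.78, "muon": 0.106}
for name, m in masses.items():
    print(f"{name:>6}: g = {fermion_coupling(m):.5f}")
```

The hierarchy is dramatic: the top coupling is order one, while the muon coupling is more than a thousand times smaller, which is why only the third-generation Yukawa couplings are accessible at the LHC for now.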
Could there be a new physics effect that modifies these tree-level couplings? CMS and ATLAS have taken all their Higgs data and performed a fit with two free scale factors: κV for the vector boson couplings and κF for the fermions. The effective couplings for the Higgs boson to gluon pairs and photon pairs are expressed as their standard model loops modified by κV and κF as appropriate.
The CMS result is here. The contours centered on the black cross are the constraints from CMS data, and the yellow diamond marks the standard model expectation, which falls within the 1σ CMS contour. While the best value for κF is less than the SM value, the best value for κV agrees perfectly with the SM. There is a second local minimum with κF < 0, but that one is not favored by the data. (Source: CMS public Higgs page)
This plot shows contours from ATLAS data including the solution with κF < 0. The best value is marked by the X and the SM value is marked by the blue cross. The ATLAS data agree with the SM for κF and are a bit above the SM for κV. (Source: ATLAS Higgs page)
I have tried to put the two curves on the same grid:
Let’s use the name “Higgs boson” for the new state discovered by ATLAS and CMS. The collider physics community is trying to measure everything it can about this new particle. One of the “easiest” properties to measure is its mass, MH. (One of the more difficult is the CP nature of the Higgs boson.)
ATLAS: MH = 126.0 ± 0.4 (stat) ± 0.4 (sys) GeV
CMS:   MH = 125.3 ± 0.4 (stat) ± 0.5 (sys) GeV
Although there is a difference in the central values, this difference is not significant even at the level of the statistical uncertainty alone.
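A back-of-the-envelope check, treating the two measurements as uncorrelated and combining each experiment’s statistical and systematic errors in quadrature:

```python
from math import hypot, sqrt

def difference_significance(m1, stat1, sys1, m2, stat2, sys2):
    """Difference of two measurements in units of their combined uncertainty."""
    e1 = hypot(stat1, sys1)
    e2 = hypot(stat2, sys2)
    return abs(m1 - m2) / sqrt(e1**2 + e2**2)

# ATLAS: 126.0 +/- 0.4 (stat) +/- 0.4 (sys); CMS: 125.3 +/- 0.4 (stat) +/- 0.5 (sys)
n_sigma = difference_significance(126.0, 0.4, 0.4, 125.3, 0.4, 0.5)
print(f"ATLAS-CMS difference: {n_sigma:.2f} sigma")
```

The 0.7 GeV difference comes out below one standard deviation, so there is nothing to worry about.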
CMS have updated their value and shown it at the HCP Symposium in Tokyo last week. Their new value is
CMS new: 125.8 ± 0.4(stat) ± 0.4(sys) GeV.
This measurement is documented in CMS PAS HIG-12-045.
For theorists, the exact value (125? 126?) is generally of little concern, because the real mystery is why MH is not closer to the Planck scale. For good reason they have thought hard about this, inventing several possible explanations why the Higgs mass is near the electroweak scale and not at the Planck scale.
For experimentalists, on the other hand, the data are there to make a precise measurement, so we should do so.
As the reader surely knows by now, signals for the “Higgs boson” have been established in several channels, including di-photons, a pair of Z bosons decaying to four charged leptons in total, a pair of W bosons decaying to two charged leptons and two neutrinos, and possibly some evidence in the b-bbar mode. Of these modes, only the di-photon and the four-lepton modes provide a narrow peak at the putative Higgs boson mass, so the CMS and ATLAS collaborations derive their mass measurements from those channels only.
When the Higgs decays to two photons, two energetic and narrow showers are recorded in the electromagnetic calorimeter (“EM calorimeter”). The EM calorimeters of CMS and ATLAS are especially advanced in design and capability — years ago physicists had the decay mode H→γγ in mind when they designed these two wonderful devices. They are truly state-of-the art.
When the Higgs decays to four leptons, we mean electrons and/or muons, not tau leptons, because the Z boson masses and kinematics can be fully reconstructed for the Z→ee and Z→μμ decay modes. So the four leptons can be four electrons, four muons, or two electrons and two muons. The electron energies are measured primarily by the EM calorimeters, so those measurements are quite good. The muon momenta are measured mainly by the precision silicon detectors, which provide several precise points along the trajectory of the muon through a well-known and very uniform magnetic field. Once again, the ATLAS and CMS spectrometers are exceptionally performant, thanks both to the precision of the points (a few microns each) and the high magnetic fields (2 – 4 Tesla). The decay mode H→4L was a leading consideration in the design of these two spectrometers. (CMS and ATLAS also have very advanced dedicated muon detectors mounted outside the calorimetry, which play the essential role in distinguishing muons from other charged particles. At very high momenta, the muon detectors also contribute to the muon momentum measurement, but for the momenta occurring in H→4L decays, they do not contribute much.) For CMS, the measurement resolution (the estimated error for a single lepton) is about the same for electrons and muons produced in Higgs decays. (At higher momenta, though, electrons are measured more precisely than muons, while at lower energies, muons are measured more precisely. This is why we reconstruct J/ψ decays with muons, while heavy Z′ boson decays will show up as a narrower peak in the ee channel than in the μμ channel.)
It turns out that the statistical precision on the mass measurement for H→γγ and for H→4L is about the same. This did not have to be the case – the statistical precision depends on the measurement errors, on the size of the signal, and on the size and shape of the background. Here is the CMS plot comparing the H→γγ and H→4L mass measurements.
The green curve shows the H→γγ measurement, and the red curve, H→4L. They plainly are compatible, justifying their combination by CMS; this combination is represented by the black curve.
(You can find this and other plots at the CMS public web site for the combination of CMS Higgs searches.)
There is a subtle point about the combination: should the two channels be given a weight corresponding to the rate expected in the standard model, or should the weight depend on the observed rates? The H→γγ rate is a bit higher than expected, and the H→4L rate is very slightly lower. (It used to be significantly lower, but the signal strength is now quite close to unity.) CMS have produced a two-dimensional plot that allows the normalizations of the two modes to float independently of the masses:
Within the present uncertainties, it does not make much difference. (The black curve in the first figure above was computed taking the ratio of rates for H→γγ and H→4L from the standard model, but allowing the normalization of the two together to float.)
The statistical uncertainty is at the level of 0.3% (δM/M), so one has to be careful about systematic uncertainties. Are the absolute photon and electron scale, determined by the EM calorimetry, and the absolute muon scale, determined by the magnetic field, accurate at the level of a fraction of a percent?
Needless to say, much effort was expended in calibrating these absolute energy/momentum scales. The copious production of Z bosons, decaying to electron and muon pairs, provides the key to getting the scale right: the Z mass is known at the 10⁻⁴ level, thanks to resonant depolarization techniques applied at LEP. Narrow J/ψ and Υ resonances provide cross checks of the momentum scale. Uniformity of response is achieved with huge sets of isolated and well-measured tracks. There is a real art to this, and decades of experience. One should keep in mind the essential role played by the people who design, care for, and calibrate the detectors, without which no clever analysis would produce results this good.
The details of the way systematic uncertainties are handled when combining H→γγ and H→4L are not yet public. One should take into account the degree of correlation between the two mass peaks coming from possible errors on the EM calorimeter energy scale. This correlation will not be large, however.
CMS estimated their total systematic uncertainty by eliminating it from the fit and seeing how much the total uncertainty shrinks. This corresponds to a narrowing of the black parabola in the first plot. (Technically, the systematic uncertainty is eliminated by fixing the nuisance parameters to their best-fit values, and then scanning in MH.) The difference in quadrature of the full width of the parabola and the narrower width obtained when the systematic uncertainty is eliminated is taken as the systematic uncertainty; this is valid so long as there is no correlation between the statistical and systematic uncertainties, and as long as the minimum of the parabola stays basically in the same place. Here is a comparison of the parabola with and without the systematic uncertainty:
In this manner the CMS Collaboration arrived at their updated and preliminary measurement of the Higgs mass, MH = 125.8 ± 0.4(stat) ± 0.4(sys) GeV. Taken in its entirety, this measurement is precise at the level of 0.5%. One can expect this value to be refined somewhat as the measurement is improved by including more data and perhaps reducing systematic uncertainties.
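The quadrature subtraction described above is simple to reproduce. In this sketch the 0.57 GeV total is an assumed illustrative value, consistent with 0.4 (stat) combined with 0.4 (sys) in quadrature:

```python
from math import sqrt

def systematic_from_scans(total_err, stat_only_err):
    """Extract the systematic component by subtracting in quadrature:
       sys = sqrt(total^2 - stat^2).
    Valid only if the statistical and systematic parts are uncorrelated and
    the minimum of the likelihood scan does not move."""
    if stat_only_err > total_err:
        raise ValueError("stat-only error cannot exceed the total error")
    return sqrt(total_err**2 - stat_only_err**2)

# Illustrative: a 0.57 GeV total scan width and a 0.40 GeV stat-only width
print(systematic_from_scans(0.57, 0.40))  # close to 0.4 GeV
```

The guard clause matters in practice: if the stat-only scan came out wider than the full scan, the correlation assumption behind the subtraction would already be broken.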