Archive for November, 2012
Higgs couplings to fermions and massive vector bosons
Let’s review the tests of the Higgs couplings to fermions and massive vector bosons completed by CMS and ATLAS.
The Higgs mechanism generates mass terms in the standard model Lagrangian. Electroweak symmetry is broken at the same time that the W and Z bosons get their masses. Fermion masses, on the other hand, are generated by Yukawa terms, which are not dictated by electroweak symmetry breaking itself: the Yukawa couplings are free parameters put in by hand. The Higgs couplings to the electroweak gauge bosons go as MV²/v, while the Higgs coupling to a fermion goes as Mf/v, where v is the Higgs vacuum expectation value (v = 246 GeV).
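As a minimal numerical illustration of this scaling (my own sketch, not something from the experiments; conventions for the factors of 2 and √2 vary, and the masses are approximate):

```python
# Tree-level SM Higgs couplings, in one common convention:
#   HVV vertex ~ 2*MV^2/v  (dimensionful, in GeV)
#   Hff vertex ~ Mf/v      (dimensionless Yukawa-type coupling)
# with v = 246 GeV.  Masses below are approximate.

V = 246.0  # Higgs vacuum expectation value, GeV

def g_HVV(m_V):
    """Higgs coupling to a massive vector boson (W or Z), units of GeV."""
    return 2.0 * m_V**2 / V

def g_Hff(m_f):
    """Higgs coupling to a fermion, dimensionless."""
    return m_f / V

for name, m, g in [("W", 80.4, g_HVV(80.4)), ("Z", 91.2, g_HVV(91.2)),
                   ("top", 173.0, g_Hff(173.0)), ("b", 4.2, g_Hff(4.2)),
                   ("tau", 1.78, g_Hff(1.78))]:
    print(f"{name:4s} mass = {m:6.1f} GeV   coupling = {g:7.3f}")
```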
Could there be a new physics effect that modifies these tree-level couplings? CMS and ATLAS have taken all their Higgs data and performed a fit with two free scale factors: κV for the vector boson couplings and κF for the fermions. The effective couplings for the Higgs boson to gluon pairs and photon pairs are expressed as their standard model loops modified by κV and κF as appropriate.
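Schematically, every measured signal strength then depends on κF and κV through production, decay, and the total width. Here is a toy version of that scaling (my own sketch: tree-level decays only, omitting the loop-induced gg and γγ couplings that the real fits express in terms of κF and κV; the branching fractions are rough values for MH ≈ 125 GeV):

```python
# Toy kappa-framework scaling: mu = (sigma * BR) / (sigma * BR)_SM.
# Rough SM branching fractions at MH ~ 125 GeV, normalized over the listed modes.
BR_SM = {"bb": 0.58, "tautau": 0.063, "cc": 0.029,   # fermionic: scale as kF^2
         "WW": 0.215, "ZZ": 0.026}                   # bosonic:   scale as kV^2
FERMIONIC = {"bb", "tautau", "cc"}

def mu(kF, kV, production="ggF", decay="ZZ"):
    """Signal strength relative to the SM for coupling scale factors kF, kV."""
    prod = kF**2 if production == "ggF" else kV**2   # ggF (top loop) ~ kF^2; VBF/VH ~ kV^2
    width = sum(br * (kF**2 if ch in FERMIONIC else kV**2)
                for ch, br in BR_SM.items()) / sum(BR_SM.values())
    dec = (kF**2 if decay in FERMIONIC else kV**2) / width
    return prod * dec

print(mu(1.0, 1.0))                    # SM point: 1.0 by construction
print(mu(0.9, 1.0, decay="ZZ"))        # smaller kF lowers ggF production, raises BR(ZZ)
```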
The CMS result is here. The contours centered on the black cross are the constraints from the CMS data, and the yellow diamond marks the standard model expectation, which falls within the 1σ CMS contour. While the best value for κF is less than the SM value, the best value for κV agrees perfectly with the SM. There is a second local minimum with κF < 0, but that one is not favored by the data. (Source: CMS public Higgs page)
This plot shows contours from ATLAS data including the solution with κF < 0. The best value is marked by the X and the SM value is marked by the blue cross. The ATLAS data agree with the SM for κF and are a bit above the SM for κV. (Source: ATLAS Higgs page)
I have tried to put the two curves on the same grid:
Measurement of the Mass of the Higgs Boson
Let’s use the word “Higgs boson” for the new state discovered by ATLAS and CMS. The collider physics community is trying to measure everything they can about this new particle. One of the “easiest” properties to measure is its mass, MH. (One of the more difficult is the CP Nature of the Higgs boson.)
Values for MH were published by both ATLAS (arXiv:1207.7214) and CMS (arXiv:1207.7235) last July:
ATLAS: 126.0 ± 0.4(stat) ± 0.4(sys) GeV
CMS: 125.3 ± 0.4(stat) ± 0.5(sys) GeV
Although there is a difference in the central values, this difference is not significant even at the level of the statistical uncertainty alone.
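A quick back-of-the-envelope check of that statement (my own arithmetic, treating the ATLAS and CMS uncertainties as uncorrelated):

```python
import math

# July 2012 values (GeV): 126.0 +/- 0.4(stat) +/- 0.4(sys) and 125.3 +/- 0.4(stat) +/- 0.5(sys)
atlas_m, atlas_stat, atlas_sys = 126.0, 0.4, 0.4
cms_m,   cms_stat,   cms_sys   = 125.3, 0.4, 0.5

diff      = atlas_m - cms_m
stat_only = math.hypot(atlas_stat, cms_stat)                      # ~0.57 GeV
total     = math.hypot(atlas_stat, atlas_sys, cms_stat, cms_sys)  # ~0.85 GeV

print(f"stat only: {diff / stat_only:.1f} sigma")   # ~1.2 sigma
print(f"stat+sys : {diff / total:.1f} sigma")       # ~0.8 sigma
```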
CMS have updated their value and presented it at the HCP Symposium in Kyoto last week. Their new value is
CMS new: 125.8 ± 0.4(stat) ± 0.4(sys) GeV.
This measurement is documented in CMS PAS HIG-12-045.
For theorists, the exact value (125? 126?) is generally of little concern, because the real mystery is why MH is not closer to the Planck scale. For good reason they have thought hard about this, inventing several possible explanations for why the Higgs mass is near the electroweak scale and not at the Planck scale.
For experimentalists, on the other hand, the data are there to make a precise measurement, so we should do so.
As the reader surely knows by now, signals for the “Higgs boson” have been established in several channels, including di-photons, a pair of Z bosons decaying to four charged leptons in total, a pair of W bosons decaying to two charged leptons and two neutrinos, and possibly some evidence in the b-bbar mode. Of these modes, only the di-photon and the four-lepton modes provide a narrow peak at the putative Higgs boson mass, so the CMS and ATLAS collaborations derive their mass measurements from those channels only.
When the Higgs decays to two photons, two energetic and narrow showers are recorded in the electromagnetic calorimeter (“EM calorimeter”). The EM calorimeters of CMS and ATLAS are especially advanced in design and capability — years ago physicists had the decay mode H→γγ in mind when they designed these two wonderful devices. They are truly state-of-the-art.
When the Higgs decays to four leptons, we mean electrons and/or muons, not tau leptons, because the Z boson masses and kinematics can be fully reconstructed for the Z→ee and Z→μμ decay modes. So the four leptons can be four electrons, four muons, or two electrons and two muons. The electron energies are measured primarily by the EM calorimeters, so those measurements are quite good. The muon momenta are measured mainly by the precision silicon trackers, which provide several precise points along the trajectory of the muon through a well-known and very uniform magnetic field. Once again, the ATLAS and CMS spectrometers are exceptionally performant, thanks both to the precision of the points (a few microns each) and the high magnetic fields (2 – 4 Tesla). The decay mode H→4L was a leading consideration in the design of these two spectrometers. (CMS and ATLAS also have very advanced dedicated muon detectors mounted outside the calorimetry, which play the essential role in distinguishing muons from other charged particles. For very high momenta, the muon detectors also contribute to the muon momentum measurement, but for the momenta occurring in H→4L decays, they do not contribute much.)

For CMS, the measurement resolution (the estimated error for a single lepton) is about the same for electrons and muons produced in Higgs decays. (At higher momenta, though, electrons are measured more precisely than muons, while at lower momenta, muons are more precise. This is why we reconstruct J/ψ decays with muons, while heavy Z’ boson decays would show up as a narrower peak in the ee channel than in the μμ channel.)
It turns out that the statistical precision on the mass measurement for H→γγ and for H→4L is about the same. This did not have to be the case – the statistical precision depends on the measurement errors, on the size of the signal, and on the size and shape of the background. Here is the CMS plot comparing the H→γγ and H→4L mass measurements.
The green curve shows the H→γγ measurement, and the red curve, H→4L. They plainly are compatible, justifying their combination by CMS; this combination is represented by the black curve.
(You can find this and other plots at the CMS public web site for the combination of CMS Higgs searches.)
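The real combination is a simultaneous likelihood fit with shared nuisance parameters; still, a crude inverse-variance average conveys why two compatible channels pull the combined value between them. A sketch, with purely hypothetical channel numbers:

```python
# Crude stand-in for the channel combination: inverse-variance weighting.
# The actual CMS result comes from a joint likelihood fit; these inputs are hypothetical.

def combine(measurements):
    """measurements: list of (value, uncertainty) pairs."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, (1.0 / sum(weights)) ** 0.5

m, e = combine([(125.1, 0.6), (126.3, 0.9)])   # hypothetical gamma-gamma and 4L inputs
print(f"combined: {m:.1f} +/- {e:.1f} GeV")    # 125.5 +/- 0.5 for these toy numbers
```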
There is a subtle point about the combination: should the two channels be given a weight corresponding to the rate expected in the standard model, or should the weight depend on the observed rates? The H→γγ rate is a bit higher than expected, and the H→4L rate is very slightly lower. (It used to be significantly lower, but the signal strength is now quite close to unity.) CMS have produced a two-dimensional plot that allows the normalizations of the two modes to float independently of the masses:
Within the present uncertainties, it does not make much difference. (The black curve in the first figure above was computed taking the ratio of rates for H→γγ and H→4L from the standard model, but allowing the normalization of the two together to float.)
The statistical uncertainty is at the level of 0.3% (δM/M), so one has to be careful about systematic uncertainties. Are the absolute photon and electron scale, determined by the EM calorimetry, and the absolute muon scale, determined by the magnetic field, accurate at the level of a fraction of a percent?
Needless to say, much effort was expended in calibrating these absolute energy/momentum scales. The copious production of Z bosons, decaying to electron and muon pairs, provides the key to getting the scale right: the Z mass is known at the 10⁻⁴ level, thanks to resonant depolarization techniques applied at LEP. The narrow J/ψ and Υ resonances provide cross-checks of the momentum scale. Uniformity of response is achieved with huge sets of isolated and well-measured tracks. There is a real art to this, and decades of experience. One should keep in mind the essential role played by the people who design, care for, and calibrate the detectors, without whom no clever analysis would produce results this good.
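The core idea can be caricatured in a couple of lines (a sketch of the principle only; the real calibrations are done in many bins of pseudorapidity, momentum, and run period, with careful treatment of resolution and backgrounds):

```python
# Energy-scale calibration in caricature: compare the reconstructed Z -> ll peak
# to the very precisely known Z mass and derive a multiplicative correction.

M_Z_REF = 91.1876  # GeV, known to about 2 MeV from LEP (resonant depolarization)

def scale_correction(measured_peak):
    """Multiplicative factor that brings the measured Z peak onto the reference mass."""
    return M_Z_REF / measured_peak

# hypothetical example: some region of the detector reconstructs the Z peak at 90.9 GeV
print(f"scale correction = {scale_correction(90.9):.4f}")   # ~1.0032, a 0.3% adjustment
```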
The details of the way systematic uncertainties are handled when combining H→γγ and H→4L are not yet public. One should take into account the degree of correlation between the two mass peaks coming from possible errors on the EM calorimeter energy scale. This correlation will not be large, however.
CMS estimated their total systematic uncertainty by eliminating it from the fit and seeing how much the total uncertainty shrinks. This corresponds to a narrowing of the black parabola in the first plot. (Technically, the systematic uncertainty is eliminated by fixing the nuisance parameters to their best-fit values, and then scanning in MH.) The difference in quadrature between the full width of the parabola and the narrower width obtained when the systematic uncertainty is eliminated is taken as the systematic uncertainty; this is valid so long as there is no correlation between the statistical and systematic uncertainties, and as long as the minimum of the parabola stays basically in the same place. Here is a comparison of the parabola with and without the systematic uncertainty:
In this manner the CMS Collaboration arrived at their updated and preliminary measurement of the Higgs mass, MH = 125.8 ± 0.4(stat) ± 0.4(sys) GeV. Taken in its entirety, this measurement is precise at the level of 0.5%. One can expect this value to be refined somewhat as the measurement is improved by including more data and perhaps reducing systematic uncertainties.
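For concreteness, the quadrature step described above works out like this (a sketch with an illustrative total uncertainty; the real numbers come from the likelihood scans):

```python
import math

# Subtraction in quadrature: sigma_sys^2 = sigma_total^2 - sigma_stat^2.
sigma_total = 0.57   # GeV, width of the full parabola (illustrative value)
sigma_stat  = 0.40   # GeV, width with nuisance parameters fixed at their best-fit values

sigma_sys = math.sqrt(sigma_total**2 - sigma_stat**2)
print(f"systematic uncertainty ~ {sigma_sys:.2f} GeV")   # ~0.4 GeV
```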
The CP Nature of the Higgs Boson
We are in the process of ascertaining the properties of the Higgs-like particle discovered by CMS and ATLAS last July 4th. It must be a boson because it decays to pairs of bosons. Since it decays to a pair of massless photons, it cannot be spin-1. The relative rates of decays to WW and ZZ on the one hand, and γγ on the other, are close to what is expected for a spin-0 boson and not what is expected for a spin-2 graviton. John Ellis, Veronica Sanz and Tevong You wrote a nice paper about this earlier this week (arXiv:1211.3068, 13-Nov).
So let’s assume that the new particle X(126) is a Higgs boson (and I will use the symbol “H” for it). If it is the standard model Higgs boson, then its CP eigenvalue must be +1. If it is a member of a two-Higgs-doublet model, then its CP eigenvalue might be -1, and if there is CP violation in the Higgs sector, then it would be a mixture of CP-even and CP-odd components rather than a pure eigenstate.
The news about this comes from the Hadron Collider Physics Symposium that just finished in Kyoto last week. The CMS Collaboration presented results that indicate that CP = -1 is the wrong hypothesis for the H. They used the golden channel H→ZZ→4L, where the four leptons are electrons and muons. The H state is completely reconstructed in this channel, and backgrounds are low. The Z bosons themselves are massive spin-1 particles, which means that they can be transversely and/or longitudinally polarized, so one can talk about the degree of their polarization. They are produced coherently in the decay of the H, so their quantum mechanical states are entangled and their joint quantum mechanical state reflects the properties of the parent particle, H. Their quantum mechanical state is manifested in the angular distributions of the four leptons, especially taken as pairs — two for the first boson Z1 and two for the second boson Z2. So a study of the angular distributions tells us, on a statistical basis, whether the parent particle H is spin-0 or spin-2, and what its CP eigenvalue is. This physics has been studied by many authors, some of whom work on the CMS analysis discussed here, and who published their ideas two years ago (Gao et al., InSpire link). Many theorists have discussed similar material, for example arXiv:1108.2274.
Testing the spin-0 hypothesis against spin-2 is not fruitful at this time, and anyway we have reasons to believe that it has spin-0. So, assuming that it does have spin-0, we can check the hypotheses CP-even and CP-odd.
With four leptons in the final state, several angular distributions are available. Here is a diagram from Gao et al. labeling the main ones in the H center-of-mass frame:
There is the polar angle θ* that the two Z bosons make with the beam axis. There are the two polar angles θ1 and θ2 that the lepton pairs make in the rest frames of the two Z bosons. Finally, there is the relative azimuthal angle Φ that the two Z decay planes make with each other.
The most striking differentiation between CP-even and CP-odd comes from the polar angles θ1 and θ2 and the azimuthal angle Φ. Here are the ideal distributions:
The CP-even case is shown by the solid red dots and the CP-odd case by the open blue dots — the distributions are plainly different.
The event sample available to CMS at present is not large enough to make a determination of the CP eigenvalue by simply plotting one of these distributions. Instead, the CMS physicists built a probability density function for the two hypotheses based on the measured decay angles. This gives them the highest achievable statistical power (i.e., ability to distinguish two hypotheses CP-even vs. CP-odd) for the observables that they measure. An abstract-like summary is available on a CMS web page and also in the public document CMS PAS HIG-12-041.
The authors take the SM expectation as the null hypothesis, and the alternative is the CP-odd hypothesis. The test statistic is D = [1 + P(CP-odd)/P(CP-even)]⁻¹, where P is the probability density calculated from the lepton angles and the two Z masses. There are three terms in the theoretical expression for P: one is small for both hypotheses and is neglected, and the other two dominate; which one dominates depends on which CP eigenvalue is assumed. The method takes account of the correlations among all measured quantities — indeed this is the point of the method and the reason why it is more effective than simply projecting out the angular variables.
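In code, the discriminant itself is trivial once the two probability densities are in hand (a sketch; the construction of P from the matrix elements of Gao et al. is the real work and is not reproduced here):

```python
# D = [1 + P(CP-odd)/P(CP-even)]^-1 for a single event.
# p_even and p_odd are the probability densities for the two hypotheses,
# evaluated on that event's lepton angles and Z masses.

def discriminant(p_even, p_odd):
    """Close to 1 for CP-even-like events, close to 0 for CP-odd-like events."""
    return 1.0 / (1.0 + p_odd / p_even)

print(discriminant(p_even=0.8, p_odd=0.2))   # 0.8 -> CP-even-like
print(discriminant(p_even=0.2, p_odd=0.8))   # 0.2 -> CP-odd-like
```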
The distribution of the discriminating variable shows some a priori power of discrimination:
The discrimination is not dramatic, but it is not negligible either. The few data entries do land more to the right of the plot than to the left, favoring the CP-even hypothesis.
The hypothesis test boils down to one number, namely, the log of the ratio of likelihoods. The distribution of this variable is typically Gaussian, and the two hypotheses show up as Gaussians with different means and more or less the same width. The power of the test amounts to the separation of the two peaks (which depends on the separation of the means and the narrowness of the peaks); for a powerful test there is very little overlap between them. The power of the test depends on the number of events, so the authors made the plot for the number of events observed:
The magenta peak on the right represents the CP-even hypothesis (as expected in the SM), and the blue peak on the left represents the CP-odd hypothesis. The two peaks do overlap, so there are some values of this quantity for which a conclusion would be difficult or impossible. As it turns out, the value from the CMS data lands a bit to the right — see the position of the green arrow. If the H particle truly is CP-odd, then the probability to observe a value at least as CP-even-like as the one indicated by the green arrow is low, about 2.4%. In this sense, the CMS analysis disfavors the CP-odd hypothesis at the 2.5σ level. It is completely compatible with the CP-even hypothesis.
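The logic of the test can be mimicked with pseudo-experiments (a toy sketch: the separation between the hypotheses and the “observed” value below are invented for illustration and are not the CMS numbers):

```python
import numpy as np

# Toy hypothesis test: build the distribution of the log-likelihood ratio under
# each hypothesis, then ask how often the CP-odd hypothesis would give a value
# at least as CP-even-like as the observed one.

rng = np.random.default_rng(1)
n_toys = 200_000
q_even = rng.normal(loc=+2.0, scale=2.0, size=n_toys)  # pseudo-experiments if CP-even is true
q_odd  = rng.normal(loc=-2.0, scale=2.0, size=n_toys)  # pseudo-experiments if CP-odd  is true

q_obs = 2.0                                # hypothetical observed value
p_odd = np.mean(q_odd >= q_obs)            # tail probability under the CP-odd hypothesis
print(f"p-value under CP-odd: {p_odd:.3f}")   # ~0.023 for these toy numbers
```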
So the conclusion is that the new particle is probably CP-even, as expected in the SM.
While this indication is fairly strong and extremely important, 2.5σ can be a fluctuation. We have seen larger fluctuations in other places in the grand landscape of Higgs searches. We will have to see whether ATLAS can perform this analysis and what their data will indicate. Furthermore, the possibility of CP-violation is completely set aside in this analysis, since only two hypotheses are tested – one cannot do better with the present data sample. At some point physicists will define an angle in CP space that quantifies the deviation from the CP-even hypothesis, and experimenters will start to constrain or measure that angle.
Note: Tommaso Dorigo wrote about this briefly, last Wednesday.
Update (21-Nov): The witty author of one of my favorite blogs, In the Dark, wrote yesterday about interesting new CP-violation results in the B system (link: Time will say nothing but I told you so…) and provided a very nice, succinct description of what C, P and CP violation means. Take a read!
Bs to mumu Observed
The LHCb experiment has observed the rare decay Bs→μ+μ–.
Br(Bs→μ+μ–) = (3.2 +1.5/−1.2) × 10⁻⁹.
This is the culmination of nearly 30 years of searching for this extremely rare decay (see yesterday’s blog post).
The slides from Johannes Albrecht presented at the HCP conference give a nice overview of the measurement. These results come from 1 fb⁻¹ at 7 TeV and 1.1 fb⁻¹ at 8 TeV, analyzed together with methods very similar to those published in March. For this update, the evaluations of the background shapes have been refined and are better constrained by the data, leading to a reduction in the systematic uncertainty. There is also an improvement in fs, the parameter that gives the fraction of b-quarks that hadronize to form a Bs meson.
Here is the limit curve, showing clearly that the data are incompatible with the background-only hypothesis:
In fact, the p-value for the 2011+2012 data is 5×10⁻⁴, corresponding to 3.5σ. Conventionally speaking, this is certainly good enough for evidence.
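For reference, converting a p-value into a Gaussian “number of sigmas” is a one-liner (with a two-sided convention, 5×10⁻⁴ corresponds to about 3.5σ; a one-sided convention would give about 3.3σ):

```python
from scipy.stats import norm

p = 5e-4
print(f"two-sided: {norm.isf(p / 2):.2f} sigma")   # ~3.5
print(f"one-sided: {norm.isf(p):.2f} sigma")       # ~3.3
```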
This is the ideal case for a mass peak, since the mass resolution is excellent (25 MeV) and the background is almost zero. Here is a plot showing a Bs signal peak emerging from the background as the cut on the BDT (boosted decision tree, a common multivariate analysis tool) is increased:
The measured rate, Br(Bs→μ+μ–) = (3.2 +1.5/−1.2) × 10⁻⁹, is compatible with the SM prediction, (3.2 ± 0.2) × 10⁻⁹, leaving very little room for new physics contributions. It will be interesting to see how this measurement constrains models of new physics.
Hopefully CMS will also report observation of this rare decay mode, confirming the LHCb result.
Watching for Bs to mu+mu-
On Tuesday, November 13th, Matteo Palutan, representing the LHCb Collaboration, will report new results on the search for the extremely rare decay Bs→μ+μ–.
At tree level, this decay is forbidden in the standard model. It can occur through a loop diagram, however, involving a top quark and W bosons that are far, far off mass shell:
The SM prediction is really very small: the branching ratio B(Bs→μ+μ–) = (3.2 ± 0.2) × 10⁻⁹.
Since this decay is almost completely absent in the standard model, it provides a very good opportunity for new physics to appear — any observation of this decay above the SM rate would be a clear signal for new physics. Indeed, many models of new physics allow for branching ratios a factor of ten or one hundred higher than the SM value. Chief among these is generic Supersymmetry, which predicts large enhancements when tanβ is large (20 to 50) and when MA (the mass of the pseudoscalar Higgs boson) is not too large (less than 200 GeV).
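The commonly quoted scaling of the extra MSSM contribution makes the sensitivity to tanβ and MA explicit; here is a rough illustration (normalization is arbitrary, purely to show how steeply the enhancement grows):

```python
# Rough large-tan(beta) MSSM scaling: the extra contribution to B(Bs -> mu mu)
# grows roughly as tan^6(beta) / M_A^4.  Arbitrarily normalized to 1 at
# tan(beta) = 30, M_A = 200 GeV; illustrative only.

def susy_enhancement(tan_beta, m_A, ref=(30.0, 200.0)):
    return (tan_beta / ref[0])**6 * (ref[1] / m_A)**4

for tb in (20, 30, 50):
    print(f"tan(beta) = {tb:2d}, M_A = 200 GeV  ->  relative enhancement {susy_enhancement(tb, 200.0):7.2f}")
```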
This opportunity has enticed experimentalists for nearly twenty years, and a series of searches by CDF and D0 put more and more stringent bounds during the 1990s and 2000s. See, for example, a discussion of a D0 result in 2010 by Tommaso Dorigo. The Tevatron limits were about an order of magnitude above the SM branching ratio.
Early in 2012, the CDF Collaboration reported a two-sided confidence interval for B(Bs→μ+μ–), meaning that they had evidence for a signal, although they did not use those words. They used an artificial neural network to categorize the events. Using the very best candidates, they reported B(Bs→μ+μ–) = (1.3 +0.9/−0.7) × 10⁻⁸. This result generated some interest and much discussion (e.g., Tommaso’s blog), needless to say. (For more information, see this CDF web page.)
The advent of the LHC opened new opportunities to observe this decay. Early results from CMS and LHCb excited experts. The superior capabilities of the CMS and especially the LHCb detectors make the searches for this decay more effective, and the higher luminosity and center-of-mass energy deliver much larger data samples than the Tevatron collaborations enjoyed. To see how good the data are, here is a beautiful event from the LHCb Collaboration:
The two pink tracks are the muons, and the blue track shows how the Bs flew out from the primary vertex. More event displays of this type can be viewed at the LHCb web page.
The CMS Collaboration recently published a result based on 5 fb⁻¹ of data taken in 2011 at √s = 7 TeV (arXiv:1203.3976, March 2012): B(Bs→μ+μ–) < 7.7 × 10⁻⁹ at 95% CL.
At the same time, the LHCb Collaboration published a slightly more stringent result (arXiv:1203.4493, March 2012): B(Bs→μ+μ–) < 4.5 × 10⁻⁹, based on 1 fb⁻¹.
Combining the results from LHCb, CMS and ATLAS, the upper limit is B(Bs→μ+μ–) < 4.2 × 10⁻⁹ (combination note).
These limits are rather close to the SM value, so defining the expected limit is tricky: does one make a calculation assuming no signal, or does one assume that the SM process will indeed produce events? These graphs from the combination note make plain that the two calculations are very different:
The plot on the left is calculated assuming the SM contribution, while the plot on the right assumes no contribution from any source.
It seems clear that the LHC experiments are on the verge of observing a signal for this process, if only at the level the SM predicts. CMS has approximately 20 fb⁻¹ at 8 TeV, while LHCb has a data sample more than twice the size of the one used for March’s publication. So it should be very interesting to listen to the presentation by M. Palutan on Tuesday.
The INDICO web page for this seminar is: https://indico.cern.ch/conferenceDisplay.py?confId=216344, and there will be a web retransmission.
Let’s see whether LHCb reports the first observation of this important decay mode…
Update: The news from LHCb will be presented on the first day of the HCP Conference by Johannes Albrecht, at 16:10 in Kyoto, which is 8:10 in Geneva (1:00 am in Chicago). The INDICO page for HCP is http://kds.kek.jp/conferenceDisplay.py?confId=9237.
GFITTER Plots
I accidentally hit the link to GFITTER in my bookmarks file while having an early morning cup of coffee. So I looked at the plots there – they’re interesting.
As most particle physicists know, GFITTER is a public computer program for calculating fits to the Standard Model based on precision measurements of electroweak observables. Such fits have a long tradition and have played a crucial role in the development of our field since the 1990s or before. During LEP days, for example, it was customary to infer values of the top quark mass from its influence on electroweak observables. The agreement of these inferred values with the directly measured value at the Tevatron was exciting at the time. Once the top quark mass was known, the fits turned to predicting the Higgs mass. As the years went by and all the crucial measurements improved, the indirect bounds on the Higgs mass sharpened. Once again, the measured value from the LHC agrees with the prediction:
If you want to break the SM in order to access new physics, this agreement is not good news, and now one has to hope for unexpected Higgs properties as revealed in branching ratios and angular distributions of the decay products.
We can continue to scrutinize the internal consistency of the SM, and the GFITTER plots help with that. The traditional plot shows contours in the plane of MW versus Mt – here is the GFITTER version:
The yellow cross indicates the measured values of Mt and MW, and the black point in the middle of the plot with error bars represents the joint measurement – what I will call the true value. The large grey areas show the expected range of (MW, Mt) based on a host of precision measurements of electroweak observables. They overlap the true value, so at that level the SM is internally consistent. The narrow blue areas show the expected range of (MW, Mt) based on the same precision electroweak observables plus the measured Higgs mass. The contour is much narrower, reflecting the major impact of the Mh measurement. Notice that the agreement with the black point is not so good: the measured value of MW is a little bit higher than predicted by the blue areas, while Mt agrees very well.
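Schematically, the reason the fit constrains this plane so tightly once Mh is known is the one-loop relation among MW, Mt and Mh. In rough outline (not the full two-loop machinery that GFITTER actually uses):

```latex
% One-loop structure, schematic:
M_W^2\left(1-\frac{M_W^2}{M_Z^2}\right) = \frac{\pi\alpha}{\sqrt{2}\,G_F}\,\bigl(1+\Delta r\bigr),
\qquad
\Delta r \;\simeq\; \Delta\alpha \;-\; \frac{c_W^2}{s_W^2}\,\Delta\rho \;+\; \Delta r_{\mathrm{rem}}(M_H),
\qquad
\Delta\rho \;\simeq\; \frac{3\,G_F\,m_t^2}{8\sqrt{2}\,\pi^2}.
```

The Mt dependence enters quadratically through Δρ, while the MH dependence in the remainder term is only logarithmic, which is why precise values of Mt and now Mh pin down the predicted MW so sharply.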
Looking at this plot, you might wish for “slices” along MW and Mt to see a chi-squared contour. Happily, GFITTER provides these plots for us. First, the Mt plot:
The most precise measurement comes from the Tevatron experiments, taken together, closely followed by the CMS measurement alone. All of the measurements agree among themselves very well, and they agree with the prediction of the SM (blue parabola) at the level of one sigma.
Here is the corresponding plot for MW:
The agreement between the world average value and the SM prediction is less good. Taken at face value, the measured central value lies about three sigma above the SM prediction (blue curve), judged against the uncertainty of the prediction alone. Once the measurement error is taken into account, the disagreement is much smaller than three sigma, but could there be a hint of something here?
If the Higgs mass increases, then the (admittedly modest) tension between the measured and predicted values of MW will increase. Perhaps it would be nice to see contours in the plane of MW versus Mh. I’m sure people who know how to run GFITTER can produce this plot easily.
More precise measurements of MW are desirable, but difficult to achieve. People at the LHC talk about reducing the uncertainty to below 10 MeV, but this requires a lot of experimental work and better PDFs, so it is not around the corner. A measurement with a precision of 6 MeV or even better could be made at a new e+e– collider, but for now that is just a hope.