It turned out that the main interest of the physicists who analyzed this signal was Higgs boson decays to four leptons. By now that Higgs signal is well established and plays an important role in the Higgs mass measurement, but at the time of the CMS publication (arXiv:1210.3844, October 2012), Z→4L provided the ideal benchmark for H→4L.

You might think that the rare decay Z→4L had been well studied at LEP. In fact, it was *quite* well studied because the ALEPH Collaboration had once reported an anomaly in the 4L final state when two of the leptons were tau leptons. (At the time, this observation hinted at a light supersymmetric Higgs boson signal.) The anomaly was not confirmed by the other LEP experiments. A perhaps definitive study was published by the L3 Collaboration in 1994 (InSpire link). Here are the plots of the two di-lepton masses:

Most of the events consist, in essence, of a virtual photon emitted by one of the primary leptons, with that virtual photon materializing as two more leptons – hence the peak at low masses for the M^{min} distribution. Note there is no point in plotting the 4-lepton mass since the beam energies were tuned to the Z resonance – the total invariant mass will be, modulo initial-state radiation, a narrow peak at the center-of-mass energy.

Here is the Z resonance from the CMS paper:

A rather clear and convincing peak is observed, in perfect agreement with the standard model prediction. This peak is based on the 5 fb^{-1} of data collected at 7 TeV in 2011.

ATLAS have released a study of this final state based on their entire 7 TeV and 8 TeV data set (ATLAS-CONF-2013-055, May 2013). Here is their preliminary 4-lepton mass peak:

Clearly the number of events is higher than in the CMS plot above, since five times the integrated luminosity was used. ATLAS also published the di-lepton sub-masses:

This calibration channel is not meant to be the place where new physics is discovered. Nonetheless, we have to compare the rate observed in the real data with the theoretical prediction – a discrepancy would be quite interesting since this decay is theoretically clean and the prediction should be solid.

Since the rate of pp→Z→2L is very well measured, and the branching ratio for Z→2L is already well known from LEP and SLD, we can extract branching fractions for Z decays to four leptons:

SM:    BF(Z→4L) = (4.37 ± 0.03) × 10^{-6}

CMS:   BF(Z→4L) = (4.2 ± 0.9 ± 0.2) × 10^{-6}

ATLAS: BF(Z→4L) = (4.2 ± 0.4) × 10^{-6}

So, as it turns out, the SM prediction matches the observed rate very well.
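As a quick sanity check, one can make the comparison numerical: add each experiment's uncertainties in quadrature and compute the pull with respect to the SM prediction. This is just a back-of-the-envelope sketch in Python (the function name `pull` is mine, not from either paper):

```python
import math

def pull(value, errors, reference, ref_error=0.0):
    """Deviation of a measurement from a reference value, in units of
    the combined uncertainty (all errors added in quadrature)."""
    total = math.sqrt(sum(e * e for e in errors) + ref_error * ref_error)
    return (value - reference) / total

# Branching fractions in units of 1e-6, taken from the values above.
sm, sm_err = 4.37, 0.03
cms_pull = pull(4.2, (0.9, 0.2), sm, sm_err)    # CMS: stat and sys errors
atlas_pull = pull(4.2, (0.4,), sm, sm_err)      # ATLAS: single quoted error
```

Both pulls come out well below one sigma, which is the quantitative content of "matches very well."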


The first results were published back in 2010 and then updated in 2011 and 2012, based on 7 TeV data. Since W and Z bosons are produced copiously at the LHC, very small statistical uncertainties can be achieved with a rather small amount of integrated luminosity. (We have tens of millions of Z bosons detected in leptonic decay channels, for example, far more than the LEP experiments recorded. And we have roughly ten times the number of W bosons.) Remarkably, experimental systematic uncertainties are reduced to the 1% – 1.5% level, which is amazing considering the need to control lepton efficiencies and background estimates. (I am setting aside the luminosity uncertainty, which was about 3% – 4% for the early data.) The measurements done with only 35 pb^{-1} are nearly as precise as the theoretical predictions, whose errors are dominated by the PDF uncertainties. We knew, back in 2011, that a new era of electroweak physics had begun.

Experimenters know the power of ratios. We can often remove a systematic uncertainty by normalizing a measured quantity judiciously. For example, PDFs are a major source of uncertainty. These uncertainties are highly correlated, however, in the production of W and Z bosons. So we can extract the ratio (W rate)/(Z rate) with a relatively small error. Even better, we can plot the W cross section against the Z cross section, as ATLAS have done:
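The cancellation is easy to see with standard error propagation for a ratio R = A/B of correlated quantities. The numbers below are purely illustrative (a hypothetical 3% PDF uncertainty on each cross section), not taken from any measurement:

```python
import math

def ratio_rel_error(rel_a, rel_b, rho):
    """Relative uncertainty on R = A/B when A and B carry relative
    uncertainties rel_a and rel_b with correlation coefficient rho."""
    return math.sqrt(rel_a**2 + rel_b**2 - 2.0 * rho * rel_a * rel_b)

# Suppose the PDF uncertainty is 3% on both the W and Z cross sections.
uncorrelated = ratio_rel_error(0.03, 0.03, 0.0)   # ~4.2% on the ratio
correlated = ratio_rel_error(0.03, 0.03, 0.9)     # mostly cancels: ~1.3%
```

The more correlated the two uncertainties, the smaller the error on the ratio, which is exactly why W/Z is a good observable.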

The elongated ellipses show that variations of the PDFs affect the W and Z cross sections in nearly the same way. The theoretical predictions are consistent with the data, and tend to cluster together. (The outlier, JR09, is no longer a favored PDF set.)

It is even more interesting to plot the W^{+} cross section against the W^{-} cross section, because the asymmetry between W^{+} and W^{-} production relates to the preponderance of up-quarks over down-quarks (don’t forget we are colliding two protons). Since the various PDF sets describe the d/u-ratio differently, there is a larger spread in theoretical predictions:

During the 8 TeV running in 2012, the instantaneous luminosity was much higher than in 2010, leading to high pile-up (overlapping interactions), which complicates the analysis. The LHC collaborations took a small amount of data (18 pb^{-1}) in a low pile-up configuration in order to measure the W and Z cross sections at 8 TeV, and CMS have reported preliminary results. They produced ellipse plots similar to the ones ATLAS published:

You might notice that the CMS ellipses appear larger than the ATLAS ones. This is because the ATLAS results are based on *fiducial* cross sections – i.e., cross sections for particles produced within the detector acceptance. One has to apply an acceptance correction to convert a fiducial cross section to a total cross section. This acceptance correction is easily obtained from Monte Carlo simulations, but it comes with a systematic uncertainty coming mainly from the PDFs. (If a PDF favors a harder u-quark momentum distribution, then the vector bosons will have a slightly larger momentum component along the beam, and the leptons from the vector boson decays will be lost down the beam pipe more often. Such things matter at the percent level.) Since modern theoretical tools can calculate fiducial cross sections accurately, it is not necessary to apply an acceptance correction in order to compare to theory. Clearly it is wise to make the comparison at the level of fiducial cross sections, though total cross sections are also useful in other contexts. The CMS result is preliminary.
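The conversion itself is simple arithmetic, σ_tot = σ_fid / A, with the uncertainty on the acceptance A propagated in quadrature. A minimal sketch with made-up numbers (a 2% fiducial measurement and a 1.5% relative uncertainty on A; nothing here comes from a real analysis):

```python
import math

def total_cross_section(sigma_fid, sigma_fid_err, acc, acc_err):
    """Convert a fiducial cross section to a total one, sigma_tot =
    sigma_fid / A, propagating the (PDF-dominated) uncertainty on A
    in quadrature with the measurement uncertainty."""
    sigma_tot = sigma_fid / acc
    rel = math.sqrt((sigma_fid_err / sigma_fid)**2 + (acc_err / acc)**2)
    return sigma_tot, sigma_tot * rel

# Illustrative only: sigma_fid = 0.50 +/- 0.01 (arb. units), A = 0.47 +/- 0.007.
tot, err = total_cross_section(0.50, 0.01, 0.47, 0.007)
```

The point of the exercise: the acceptance error inflates the total-cross-section error, which is why the fiducial comparison is cleaner.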

Back when I was electroweak physics co-convener in CMS, I produced a plot summarizing hadron collider measurements of W and Z production. My younger colleagues have updated that plot to include the new 8 TeV measurements:

This plot nicely summarizes the history of these measurements, and suggests that W and Z production processes are well understood.

As I learned at the WNL workshop, the collaborations are learning how to measure these cross sections in the high pile-up data. We may see even more precise values soon.


Apparently Ernest Moniz, the new Secretary of Energy, linked this to his Facebook page: https://www.facebook.com/ErnestJMoniz/posts/222609781237091

A really good video “explaining” why the Higgs is important is associated with the one by Don. It comes from MinutePhysics and is fun. Take a look:


The Higgs mechanism generates mass terms in the standard model Lagrangian. Electroweak symmetry is broken at the same time that the W and Z bosons acquire their masses. Fermion masses, on the other hand, are generated via Yukawa terms, and are logically separate from electroweak symmetry breaking. The Higgs coupling to the electroweak gauge bosons goes as M_{V}^{2}/v, while the Higgs coupling to the fermions goes as M_{f}/v, where v is the Higgs vacuum expectation value (v = 246 GeV).

Could there be a new physics effect that modifies these tree-level couplings? CMS and ATLAS have taken all their Higgs data and performed a fit with two free scale factors: κ_{V} for the vector boson couplings and κ_{F} for the fermions. The effective couplings for the Higgs boson to gluon pairs and photon pairs are expressed as their standard model loops modified by κ_{V} and κ_{F} as appropriate.

The CMS result is here. The contours centered on the black cross are the constraints from CMS data, and the yellow diamond marks the standard model expectation, which falls within the 1σ CMS contour. While the best value for κ_{F} is less than the SM value, the best value for κ_{V} agrees perfectly with the SM. There is a second local minimum with κ_{F} < 0, but that one is not favored by the data. (Source: CMS public Higgs page)

This plot shows contours from ATLAS data including the solution with κ_{F} < 0. The best value is marked by the X and the SM value is marked by the blue cross. The ATLAS data agree with the SM for κ_{F} and are a bit above the SM for κ_{V}. (Source: ATLAS Higgs page)

I have tried to put the two curves on the same grid:


Values for M_{H} were published by both ATLAS (arXiv:1207.7214) and CMS (arXiv:1207.7235) last July:

ATLAS: 126.0 ± 0.4 (stat) ± 0.4 (sys) GeV

CMS: 125.3 ± 0.4 (stat) ± 0.5 (sys) GeV

Although there is a difference in the central values, this difference is not significant even at the level of the statistical uncertainty alone.

CMS have updated their value and shown it at the HCP Symposium in Tokyo last week. Their new value is

CMS new: 125.8 ± 0.4(stat) ± 0.4(sys) GeV.

This measurement is documented in CMS PAS HIG-12-045.

For theorists, the exact value (125? 126?) is generally of little concern, because the real mystery is why M_{H} is not closer to the Planck scale. For good reason they have thought hard about this, inventing several possible explanations why the Higgs mass is near the electroweak scale and not at the Planck scale.

For experimentalists, on the other hand, the data are there to make a precise measurement, so we should do so.

As the reader surely knows by now, signals for the “Higgs boson” have been established in several channels, including di-photons, a pair of Z bosons decaying to four charged leptons in total, a pair of W bosons decaying to two charged leptons and two neutrinos, and possibly some evidence in the b-bbar mode. Of these modes, only the di-photon and the four-lepton modes provide a narrow peak at the putative Higgs boson mass, so the CMS and ATLAS collaborations derive their mass measurements from those channels only.

When the Higgs decays to two photons, two energetic and narrow showers are recorded in the electromagnetic calorimeter (“EM calorimeter”). The EM calorimeters of CMS and ATLAS are especially advanced in design and capability — years ago physicists had the decay mode H→γγ in mind when they designed these two wonderful devices. They are truly state-of-the art.

When the Higgs decays to four leptons, we mean electrons and/or muons, not tau leptons, because the Z boson masses and kinematics can be fully reconstructed for the Z→ee and Z→μμ decay modes. So the four leptons can be four electrons, four muons, or two electrons and two muons. The electron energies are measured primarily by the EM calorimeters, so those measurements are quite good. The muon momenta are measured mainly by the precision silicon detectors, which provide several precise points along the trajectory of the muon through a well-known and very uniform magnetic field. Once again, the ATLAS and CMS spectrometers perform exceptionally well, thanks both to the precision of the points (a few microns each) and the high magnetic fields (2 – 4 Tesla). The decay mode H→4L was a leading consideration in the design of these two spectrometers. (CMS and ATLAS also have very advanced dedicated muon detectors mounted outside the calorimetry, which play the essential role in distinguishing muons from other charged particles. For very high momenta, the muon detectors also contribute to the muon momentum measurement, but for the momenta occurring in H→4L decays, they do not contribute much.) For CMS, the measurement resolution (the estimated error for a single lepton) is about the same for electrons and muons produced in Higgs decays. (At higher momenta, though, electrons are measured more precisely than muons, while at lower energies, muons are more precise. This is why we reconstruct J/ψ decays with muons, while heavy Z′ boson decays will show up as a narrower peak in the ee channel than in the μμ channel.)

It turns out that the statistical precision on the mass measurement for H→γγ and for H→4L is about the same. This did not have to be the case – the statistical precision depends on the measurement errors, on the size of the signal, and on the size and shape of the background. Here is the CMS plot comparing the H→γγ and H→4L mass measurements.

The green curve shows the H→γγ measurement, and the red curve, H→4L. They plainly are compatible, justifying their combination by CMS; this combination is represented by the black curve.

(You can find this and other plots at the CMS public web site for the combination of CMS Higgs searches.)

There is a subtle point about the combination: should the two channels be given a weight corresponding to the rate expected in the standard model, or should the weight depend on the observed rates? The H→γγ rate is a bit higher than expected, and the H→4L rate is very slightly lower. (It used to be significantly lower, but the signal strength is now quite close to unity.) CMS have produced a two-dimensional plot that allows the normalizations of the two modes to float independently of the masses:

Within the present uncertainties, it does not make much difference. (The black curve in the first figure above was computed taking the ratio of rates for H→γγ and H→4L from the standard model, but allowing the normalization of the two together to float.)

The statistical uncertainty is at the level of 0.3% (δM/M), so one has to be careful about systematic uncertainties. Are the absolute photon and electron energy scales, determined by the EM calorimetry, and the absolute muon momentum scale, determined by the magnetic field, accurate at the level of a fraction of a percent?

Needless to say, *much effort was expended* in calibrating these absolute energy/momentum scales. The copious production of Z bosons, decaying to electron and muon pairs, provides the key to getting the scale right: the Z mass is known at the 10^{-4} level, thanks to resonant depolarization techniques applied at LEP. Narrow J/ψ and Υ resonances provide cross checks of the momentum scale. Uniformity of response is achieved with huge sets of isolated and well-measured tracks. There is a real art to this, and decades of experience. One should keep in mind the essential role played by the people who design, care for, and calibrate the detectors, without which no clever analysis would produce results this good.

The details of the way systematic uncertainties are handled when combining H→γγ and H→4L are not yet public. One should take into account the degree of correlation between the two mass peaks coming from possible errors on the EM calorimeter energy scale. This correlation will not be large, however.

CMS derived their total systematic uncertainty by eliminating it and seeing how the total uncertainty shrinks. This corresponds to a narrowing of the black parabola in the first plot. (Technically, the systematic uncertainty is eliminated by fixing the nuisance parameters to their best-fit values, and then scanning in M_{H}.) The difference in quadrature of the full width of the parabola and the narrower width obtained when the systematic uncertainty is eliminated is taken as the systematic uncertainty; this is valid so long as there is no correlation between the statistical and systematic uncertainties, and as long as the minimum of the parabola stays basically in the same place. Here is a comparison of the parabola with and without the systematic uncertainty:
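The quadrature subtraction can be written in one line. As a sketch, using the widths quoted above (0.4 GeV statistical, and a total of about 0.57 GeV if the quoted components are combined in quadrature; the exact scan widths are not public):

```python
import math

def systematic_from_scans(sigma_total, sigma_stat):
    """Systematic uncertainty by quadrature subtraction of likelihood-scan
    widths. Valid only when the statistical and systematic components are
    uncorrelated and the minimum of the scan does not move."""
    return math.sqrt(sigma_total**2 - sigma_stat**2)

# Widths in GeV; ~0.57 total and 0.4 statistical give back ~0.4 systematic.
sys_err = systematic_from_scans(0.57, 0.4)
```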

In this manner the CMS Collaboration arrived at their updated and preliminary measurement of the Higgs mass, M_{H} = 125.8 ± 0.4 (stat) ± 0.4 (sys) GeV.


So let’s assume that the new particle X(126) is a Higgs boson (and I will use the symbol “H” for it). If it is the standard model Higgs boson, then its CP eigenvalue must be +1. If it is a member of a two-Higgs-doublet model, then its CP eigenvalue might be -1, and if there is CP-violation in the Higgs sector, then its CP eigenvalue would be something other than +1 or -1.

The new news about this comes from the Hadron Collider Physics Symposium that just finished in Tokyo last week. The CMS Collaboration presented results that indicate that CP = -1 is the wrong hypothesis for the H. They used the golden channel H→ZZ→4L, where the four leptons are electrons and muons. The H state is completely reconstructed in this channel, and backgrounds are low. The Z bosons themselves are massive spin-1 particles, which means that they can be transversely and/or longitudinally polarized, so that one can talk about the degree of their polarization. They are produced coherently in the decay of the H, so their quantum mechanical states are entangled and their joint quantum mechanical state reflects the properties of the parent particle, H. Their quantum mechanical state is manifested in the angular distributions of the four leptons, especially taken as pairs — two for the first boson Z_{1} and two for the second boson Z_{2}. So a study of the angular distributions tells us, on a statistical basis, whether the parent particle H is spin-0 or spin-2, and what its CP eigenvalue is. This physics has been studied by many authors, some of whom work on the CMS analysis discussed here, and who published their ideas two years ago (Gao et al., InSpire link). Many theorists have discussed similar material, for example arXiv:1108.2274.

Checking the hypotheses spin-0 and spin-2 is not fruitful at this time, and anyway we have reasons to believe that it has spin-0. So *assuming that it does have spin-0*, we can check the hypotheses CP-even and CP-odd.

With four leptons in the final state, several angular distributions are available. Here is a diagram from Gao et al. labeling the main ones in the H center-of-mass frame:

There are the polar angles θ_{1} and θ_{2}, defined in the rest frames of the two Z bosons, and the azimuthal angle Φ between the two decay planes.

The most striking differentiation between CP-even and CP-odd comes from the polar angles θ_{1} and θ_{2} and the azimuthal angle Φ. Here are the ideal distributions:

The CP-even case is shown by the solid red dots and the CP-odd case by the open blue dots — the distributions are plainly different.

The event sample available to CMS at present is not large enough to make a determination of the CP eigenvalue by simply plotting one of these distributions. Instead, the CMS physicists built a probability density function for the two hypotheses based on the measured decay angles. This gives them the highest achievable statistical power (i.e., ability to distinguish two hypotheses CP-even vs. CP-odd) for the observables that they measure. An abstract-like summary is available on a CMS web page and also in the public document CMS PAS HIG-12-041.

The authors take the SM expectation as the null hypothesis and the alternative is the CP-odd hypothesis. The test statistic is D = [1 + P(CP-odd)/P(CP-even)]^{-1}, where P is the probability density calculated from the lepton angles and the two Z masses. There are three terms in the theoretical expression for P, one of which is small for both hypotheses and is neglected; the other two dominate, and which one dominates depends on which CP eigenvalue is assumed. The method takes account of the correlations among all measured quantities — indeed this is the point of the method and the reason why it is more effective than simply projecting out the angular variables.
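The test statistic itself is elementary once the two probability densities are in hand (computing P from the angles and masses is the hard part, and is not attempted here; the density values below are placeholders, not CMS numbers):

```python
def discriminant(p_even, p_odd):
    """Test statistic D = [1 + P(CP-odd)/P(CP-even)]^{-1}, which equals
    P(even)/(P(even)+P(odd)): D -> 1 favors CP-even, D -> 0 favors CP-odd."""
    return 1.0 / (1.0 + p_odd / p_even)

# An event whose angles are more probable under the CP-even hypothesis
# lands near 1; a CP-odd-like event lands near 0.
even_like = discriminant(p_even=0.8, p_odd=0.2)  # -> 0.8
odd_like = discriminant(p_even=0.2, p_odd=0.8)   # -> 0.2
```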

The distribution of the discriminating variable shows some a priori power of discrimination:

The discrimination is not dramatic, but it is not negligible either. The few data entries do land more to the right of the plot than to the left, favoring the CP-even hypothesis.

The hypothesis test boils down to one number, namely, the log of the ratio of likelihoods. The distribution of this variable is typically Gaussian, and the two hypotheses show up as Gaussians with different means and more or less the same width. The power of the test amounts to the separation of the two peaks (which depends on the separation of the means and the narrowness of the peaks); for a powerful test there is very little overlap between them. The power of the test depends on the number of events, so the authors made the plot for the number of events observed:

The magenta peak on the right represents the CP-even hypothesis (as expected in the SM), and the blue peak on the left represents the CP-odd hypothesis. The two peaks do overlap, so there are some values of this quantity for which a conclusion would be difficult or impossible. As it turns out, the value from the CMS data lands a bit to the right — see the position of the green arrow. If the H particle truly is CP-odd, then the probability to observe the value indicated by the green arrow is low, about 2.4%. In this sense, the CMS analysis disfavors the CP-odd hypothesis at the 2.5σ level. It is completely compatible with the CP-even hypothesis.

So the conclusion is that the new particle is probably CP-even, as expected in the SM.

While this indication is fairly strong and extremely important, 2.5σ can be a fluctuation. We have seen larger fluctuations in other places in the grand landscape of Higgs searches. We will have to see whether ATLAS can perform this analysis and what their data will indicate. Furthermore, the possibility of CP-violation is completely set aside in this analysis, since only two hypotheses are tested – one cannot do better with the present data sample. At some point physicists will define an angle in CP space that quantifies the deviation from the CP-even hypothesis, and experimenters will start to constrain or measure that angle.

Note: Tommaso Dorigo wrote about this briefly, last Wednesday.

**Update** (21-Nov): The witty author of one of my favorite blogs, In the Dark, wrote yesterday about interesting new CP-violation results in the B system (link: Time will say nothing but I told you so…) and provided a very nice, succinct description of what C, P and CP violation means. Take a read!


Br(B_{s}→μ^{+}μ^{-}) = (3.2^{+1.5}_{-1.2})×10^{-9}.

This is the culmination of nearly 30 years of searching for this extremely rare decay (see yesterday’s blog post).

The slides from Johannes Albrecht presented at the HCP conference give a nice overview of the measurement. These results come from 1 fb^{-1} at 7 TeV and 1.1 fb^{-1} at 8 TeV, analyzed together with methods very similar to those published in March. For this update, evaluations of the background shapes have been refined and are better constrained by the data, leading to a reduction in the systematic uncertainty. There is also an improvement in f_{s}, the parameter that gives the fraction of b-quarks that hadronize to form a B_{s} meson.

Here is the limit curve, showing clearly that the data are incompatible with the background-only hypothesis:

In fact, the p-value for the 2011+2012 data is 5×10^{-4}.

This is the ideal case for a mass peak, since the mass resolution is excellent (25 MeV) and the background is almost zero. Here is a plot showing a B_{s} signal peak emerging from the background as the cut on the BDT (boosted decision tree, a common multivariate analysis tool) is increased:

The measured rate, Br(B_{s}→μ^{+}μ^{-}) = (3.2^{+1.5}_{-1.2})×10^{-9} is compatible with the SM prediction, (3.2±0.2)×10^{-9}, leaving very little room for new physics contributions. It will be interesting to see how this measurement constrains models of new physics.
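The compatibility can be checked crudely by forming a pull, taking some care with the asymmetric errors (a common rough convention is to use the error on the side facing the prediction; the helper below is my own sketch, not the LHCb procedure):

```python
import math

def asymmetric_pull(value, err_up, err_down, reference, ref_err):
    """Rough compatibility check for a measurement with asymmetric errors:
    use the measurement error on the side facing the reference value."""
    err = err_down if value > reference else err_up
    return (value - reference) / math.sqrt(err**2 + ref_err**2)

# LHCb measurement vs SM prediction, in units of 1e-9.
p = asymmetric_pull(3.2, 1.5, 1.2, 3.2, 0.2)  # central values coincide: 0.0
```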

Hopefully CMS will also report observation of this rare decay mode, confirming the LHCb result.


At tree level, this decay is forbidden in the standard model. It can occur through a loop diagram, however, involving a top quark and W bosons that are far, far off mass shell:

The SM prediction is really very small: the branching ratio B(Bsmumu) = (3.2±0.2)×10^{-9}.

Since this decay is almost completely absent in the standard model, it provides a very good opportunity for new physics to appear — any observation of this decay above the SM rate would be a clear signal for new physics. Indeed, many models of new physics allow for branching ratios a factor of ten or one hundred higher than the SM value. Chief among these is generic supersymmetry, which predicts large enhancements when tanβ is large (20 to 50) and when M_{A} (the mass of the pseudoscalar Higgs boson) is not too large (less than 200 GeV).

This opportunity has enticed experimentalists for nearly twenty years, and a series of searches by CDF and D0 put more and more stringent bounds during the 1990s and 2000s. See, for example, a discussion of a D0 result in 2010 by Tommaso Dorigo. The Tevatron limits were about an order of magnitude above the SM branching ratio.

Early in 2012, the CDF Collaboration reported a two-sided confidence interval for B(Bsmumu), meaning that they had evidence for a signal although they did not use those words. They used an artificial neural network to categorize the events. Using the very best candidates, they reported B(Bsmumu) = 1.3^{+0.9}_{-0.7}×10^{-8}. This result generated some interest and much discussion (e.g., Tommaso’s blog), needless to say. (For more information, see this CDF web page.)

The advent of the LHC opened new opportunities to observe this decay. Early results from CMS and LHCb excited experts. The superior capabilities of the CMS and especially the LHCb detectors make the searches for this decay more effective, and the higher luminosity and center-of-mass energy deliver much larger data samples than the Tevatron collaborations enjoyed. To see how good the data are, here is a beautiful event from the LHCb Collaboration:

The two pink tracks are the muons, and the blue track shows the path of the B_{s} meson before it decayed.

The CMS Collaboration recently published a result based on 5 fb^{-1} of data taken in 2011 at √s = 7 TeV (arXiv:1203.3976, March 2012): B(Bsmumu) < 7.7×10^{-9} at 95% CL.

At the same time, the LHCb Collaboration published a slightly more stringent result (arXiv:1203.4493, March 2012): B(Bsmumu) < 4.5×10^{-9}, based on 1 fb^{-1}.

Combining the results from LHCb, CMS and ATLAS, the upper limit is B(Bsmumu) < 4.2×10^{-9} (combination note).

These limits are rather close to the SM value, so defining the expected limit is tricky: does one make a calculation assuming no signal, or does one assume that the SM process will indeed produce events? These graphs from the combination note make plain that the two calculations are very different:

The plot on the left is calculated assuming the SM contribution, while the plot on the right assumes no contribution from any source.
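The difference between the two conventions can be illustrated with a toy counting experiment: compute the median 95% CL upper limit over pseudo-experiments generated either with background only, or with an SM-sized signal injected. This is only a cartoon (a naive classical limit, not the CLs machinery actually used in the combination), with purely invented numbers:

```python
import math
import random

def pois_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def upper_limit_95(n_obs, b):
    """Naive classical 95% CL upper limit on the signal mean s for a
    counting experiment: smallest s with P(N <= n_obs | s + b) <= 0.05.
    (Can reach zero when n_obs fluctuates far below b - the known
    pathology that motivates the CLs method.)"""
    s = 0.0
    while pois_cdf(n_obs, s + b) > 0.05:
        s += 0.05
    return s

def median_expected_limit(b, s_injected, n_toys=400, seed=7):
    """Median limit over pseudo-experiments whose counts are drawn with
    mean b + s_injected. s_injected = 0 gives the background-only
    expectation; s_injected = s_SM the expectation with an SM signal."""
    rng = random.Random(seed)
    limits = []
    for _ in range(n_toys):
        n, u = 0, rng.random()
        while pois_cdf(n, b + s_injected) < u:  # invert the Poisson CDF
            n += 1
        limits.append(upper_limit_95(n, b))
    limits.sort()
    return limits[n_toys // 2]

# Purely illustrative: background b = 4 events, SM-like signal s = 3.
no_signal = median_expected_limit(4.0, 0.0)   # expected limit, no signal
with_sm = median_expected_limit(4.0, 3.0)     # expected limit, SM signal
```

With a signal injected, the pseudo-data contain more events on average, so the expected limit is weaker (larger) — which is why the two plots look so different.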

It seems clear that the LHC experiments are on the verge of observing a signal for this process, if only at the level the SM predicts. CMS has approximately 20 fb^{-1} at 8 TeV, while LHCb has a data sample more than twice the size of the one used for March’s publication. So it should be very interesting to listen to the presentation by M. Palutan on Tuesday.

The INDICO web page for this seminar is: https://indico.cern.ch/conferenceDisplay.py?confId=216344, and there will be a web retransmission.

Let’s see whether LHCb reports the first observation of this important decay mode…

**Update:** The news from LHCb will be presented on the first day of the HCP Conference by Johannes Albrecht, at 16:10 in Tokyo which is 8:10 in Geneva (1:00 am in Chicago). The INDICO page for HCP is http://kds.kek.jp/conferenceDisplay.py?confId=9237.


As most particle physicists know, GFITTER is a public computer program for calculating fits to the Standard Model based on precision measurements of electroweak observables. Such fits have a long tradition and have played a crucial role in the development of our field since the 1990s or before. During LEP days, for example, it was customary to infer values of the top quark mass from its influence on electroweak observables. The agreement of these inferred values with the directly measured value at the Tevatron was exciting at the time. Once the top quark mass was known, the fits turned to predicting the Higgs mass. As the years went by and all the crucial measurements improved, the indirect bounds on the Higgs mass sharpened. Once again, the measured value from the LHC agrees with the prediction:

If you want to break the SM in order to access new physics, this agreement is not good news, and now one has to hope for unexpected Higgs properties as revealed in branching ratios and angular distributions of the decay products.

We can continue to scrutinize the internal consistency of the SM, and the GFITTER plots help with that. The traditional plot shows contours in the plane of M_{W} versus M_{t} – here is the GFITTER version:

The yellow cross indicates the measured values of M_{t} and M_{W}, and the black point in the middle of the plot with error bars represents the joint measurement – what I will call the true value. The large grey areas show the expected ranges of (M_{W},M_{t}) based on a host of precision measurements of electroweak observables. This region overlaps the true value, so at that level the SM is internally consistent. The narrow blue areas show the expected range of (M_{W},M_{t}) based on the precision electroweak observables *and the measured Higgs mass*. The contour is much narrower, reflecting the major impact of the M_{h} measurement. Notice that the agreement with the black point is not so good: the measured value of M_{W} is a little bit higher than predicted by the blue areas, while M_{t} agrees very well.

Looking at this plot, you might wish for “slices” along M_{W} and M_{t} to see a chi-squared contour. Happily, GFITTER provides these plots for us. First, the M_{t} plot:

The most precise measurement comes from the Tevatron experiments, taken together, closely followed by the CMS measurement alone. All of the measurements agree among themselves very well, and they agree with the prediction of the SM (blue parabola) at the level of one sigma.

Here is the corresponding plot for M_{W}:

The agreement between the world average value and the SM prediction is less good. Taken at face value, the central value of the measurement lies about three sigma above the central value of the SM prediction (blue curve). Once the measurement error is taken into account, the disagreement is much smaller than three sigma, but could there be a hint of something here?

If the Higgs mass increases, then the (admittedly modest) tension between the measured and predicted values of M_{W} will increase. Perhaps it would be nice to see contours in the plane of M_{W} versus M_{h}. I’m sure people who know how to run GFITTER can produce this plot easily.

More precise measurements of M_{W} are desirable, but difficult to achieve. People at the LHC talk about reducing the uncertainty to below 10 MeV, but this requires a lot of experimental work and better PDFs, so it is not around the corner. A measurement with a precision of 6 MeV or even better could be made at a new e^{+}e^{-} collider, but that is just a hope for now.
