## The CP Nature of the Higgs Boson

We are in the process of ascertaining the properties of the Higgs-like particle discovered by CMS and ATLAS last July 4th. It must be a boson because it decays to pairs of bosons. Since it decays to a pair of massless photons, it cannot be spin-1. The relative rates of decays to WW and ZZ on the one hand, and γγ on the other, are close to what is expected for a spin-0 boson and not what is expected for a spin-2 graviton. John Ellis, Veronica Sanz and Tevong You wrote a nice paper about this earlier this week (arXiv:1211.3068, 13-Nov).

So let’s assume that the new particle X(126) is a Higgs boson (and I will use the symbol “H” for it). If it is the standard model Higgs boson, then its CP eigenvalue must be +1. If it is a member of a two-Higgs-doublet model, then its CP eigenvalue might be -1, and if there is CP violation in the Higgs sector, then it would not be a pure CP eigenstate at all, and its observable properties would lie somewhere between the two cases.

The new news about this comes from the Hadron Collider Physics Symposium that just finished in Tokyo last week. The CMS Collaboration presented results that indicate that CP = -1 is the wrong hypothesis for the H. They used the golden channel H→ZZ→4L, where the four leptons are electrons and muons. The H state is completely reconstructed in this channel, and backgrounds are low. The Z bosons themselves are massive spin-1 particles, which means that they can be transversely and/or longitudinally polarized, so that one can talk about the degree of their polarization. They are produced coherently in the decay of the H, so their quantum mechanical states are entangled and their joint quantum mechanical state reflects the properties of the parent particle, H. Their quantum mechanical state is manifested in the angular distributions of the four leptons, especially taken as pairs – two for the first boson Z_{1} and two for the second boson Z_{2}. So a study of the angular distributions tells us, on a statistical basis, whether the parent particle H is spin-0 or spin-2, and what its CP eigenvalue is. This physics has been studied by many authors, some of whom work on the CMS analysis discussed here, and who published their ideas two years ago (Gao et al., InSpire link). Many theorists have discussed similar material, for example arXiv:1108.2274.

Checking the hypotheses spin-0 and spin-2 is not fruitful at this time, and anyway we have reasons to believe that it has spin-0. So *assuming that it does have spin-0*, we can check the hypotheses CP-even and CP-odd.

With four leptons in the final state, several angular distributions are available. Here is a diagram from Gao et al. labeling the main ones in the H center-of-mass frame:

There is the polar angle θ^{*} that the two Z bosons make with the beam axis. There are the two polar angles θ_{1} and θ_{2} that the lepton pairs make in the rest frames of the two Z bosons. Finally, there is a relative azimuthal angle Φ that the two Z decay planes make with each other.

The most striking differentiation between CP-even and CP-odd comes from the polar angles θ_{1} and θ_{2} and the azimuthal angle Φ. Here are the ideal distributions:

The CP-even case is shown by the solid red dots and the CP-odd case by the open blue dots: the distributions are plainly different.

The event sample available to CMS at present is not large enough to make a determination of the CP eigenvalue by simply plotting one of these distributions. Instead, the CMS physicists built a probability density function for the two hypotheses based on the measured decay angles. This gives them the highest achievable statistical power (i.e., ability to distinguish two hypotheses CP-even vs. CP-odd) for the observables that they measure. An abstract-like summary is available on a CMS web page and also in the public document CMS PAS HIG-12-041.

The authors take the SM expectation as the null hypothesis and the CP-odd hypothesis as the alternative. The test statistic is D = [1 + P(CP-odd)/P(CP-even)]^{-1}, where P is the probability density calculated from the lepton angles and the two Z masses. There are three terms in the theoretical expression for P; one is small under both hypotheses and is neglected, and which of the other two dominates depends on which CP eigenvalue is assumed. The method takes account of the correlations among all measured quantities; indeed this is the point of the method and the reason why it is more effective than simply projecting out the angular variables.
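For the curious, the discriminant is simple to write down once the two probability densities are in hand. Here is a minimal Python sketch; the Gaussian densities standing in for the real matrix-element calculations are purely hypothetical stand-ins, not the CMS densities:

```python
import math

def kd_discriminant(p_even: float, p_odd: float) -> float:
    """Kinematic discriminant D = [1 + P(CP-odd)/P(CP-even)]^-1.

    p_even and p_odd are the probability densities of the observed
    lepton angles and Z masses under each hypothesis.
    """
    return 1.0 / (1.0 + p_odd / p_even)

# Hypothetical stand-ins for the matrix-element densities:
# pretend each hypothesis predicts a Gaussian density in one observable x.
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

x = 0.8                                    # an "event" observable
d = kd_discriminant(gauss(x, 1.0, 0.5),    # density under CP-even
                    gauss(x, -1.0, 0.5))   # density under CP-odd
# d near 1 means the event looks CP-even; near 0 means CP-odd.
```

An event equally likely under both hypotheses gives D = 0.5, which is why the data entries piling up on one side of the D distribution carry the information.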

The distribution of the discriminating variable shows some a priori power of discrimination:

The discrimination is not dramatic, but it is not negligible either. The few data entries do land more to the right of the plot than to the left, favoring the CP-even hypothesis.

The hypothesis test boils down to one number, namely, the log of the ratio of likelihoods. The distribution of this variable is typically Gaussian, and the two hypotheses show up as Gaussians with different means and more or less the same width. The power of the test amounts to the separation of the two peaks (which depends on the separation of the means and the narrowness of the peaks); for a powerful test there is very little overlap between them. The power of the test depends on the number of events, so the authors made the plot for the number of events observed:

The magenta peak on the right represents the CP-even hypothesis (as expected in the SM), and the blue peak on the left represents the CP-odd hypothesis. The two peaks do overlap, so there are some values for this quantity for which a conclusion would be difficult or impossible. As it turns out, the value from the CMS data lands a bit to the right – see the position of the green arrow. If the H particle is truly CP-odd, then the probability to observe the value indicated by the green arrow is low, about 2.4%. In this sense, the CMS analysis disfavors the CP-odd hypothesis at the 2.5σ level. The data are completely compatible with the CP-even hypothesis.
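The logic of this test can be mimicked with a toy Monte Carlo. The means and widths below are invented for illustration and are not CMS's; the point is only how an observed LLR value turns into a p-value under the disfavored hypothesis:

```python
import math, random

random.seed(1)

def toy_llr(mu_h0, mu_h1, sigma, n_toys=20000):
    """Draw the log-likelihood-ratio statistic under each hypothesis,
    modeled here as Gaussians (illustrative means/widths, not CMS's)."""
    h0 = [random.gauss(mu_h0, sigma) for _ in range(n_toys)]
    h1 = [random.gauss(mu_h1, sigma) for _ in range(n_toys)]
    return h0, h1

# Two hypotheses whose expected LLR peaks partially overlap:
h_even, h_odd = toy_llr(mu_h0=+2.0, mu_h1=-2.0, sigma=1.6)

# Suppose the data give LLR = +1.0, toward the CP-even peak.
obs = 1.0
# p-value under CP-odd: fraction of CP-odd toys at least as "even-like".
p_odd = sum(x >= obs for x in h_odd) / len(h_odd)
# With peaks at +/-2 and width 1.6, obs sits about 1.9 sigma above the
# CP-odd peak, so p_odd comes out at a few percent.
```

The better the analysis separates the two peaks, the smaller this p-value gets for the same data, which is exactly why CMS went to the trouble of the full multivariate density rather than a single angular projection.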

So the conclusion is that the new particle is probably CP-even, as expected in the SM.

While this indication is fairly strong and extremely important, 2.5σ can be a fluctuation. We have seen larger fluctuations in other places in the grand landscape of Higgs searches. We will have to see whether ATLAS can perform this analysis and what their data will indicate. Furthermore, the possibility of CP-violation is completely set aside in this analysis, since only two hypotheses are tested – one cannot do better with the present data sample. At some point physicists will define an angle in CP space that quantifies the deviation from the CP-even hypothesis, and experimenters will start to constrain or measure that angle.

Note: Tommaso Dorigo wrote about this briefly, last Wednesday.

**Update** (21-Nov): The witty author of one of my favorite blogs, In the Dark, wrote yesterday about interesting new CP-violation results in the B system (link: Time will say nothing but I told you so…) and provided a very nice, succinct description of what C, P and CP violation means. Take a read!

## Bs to mumu Observed

The LHCb experiment has observed the rare decay B_{s}→μ^{+}μ^{-}.

Br(B_{s}→μ^{+}μ^{-}) = (3.2^{+1.5}_{-1.2})×10^{-9}

This is the culmination of nearly 30 years of searching for this extremely rare decay (see yesterday’s blog post).

The slides from Johannes Albrecht presented at the HCP conference give a nice overview of the measurement. These results come from 1 fb^{-1} at 7 TeV and 1.1 fb^{-1} at 8 TeV, analyzed together with methods very similar to those published in March. For this update, evaluations of the background shapes have been refined and are better constrained by the data, leading to a reduction in the systematic uncertainty. There is also an improvement in f_{s}, the parameter that gives the fraction of b-quarks that hadronize to form a B_{s} meson.

Here is the limit curve, showing clearly that the data are incompatible with the background hypothesis:

In fact, the p-value for the 2011+2012 data is 5×10^{-4}, corresponding to 3.5σ. Conventionally speaking, this is certainly good enough for evidence.
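For readers who like to check such conversions, the p-value-to-sigma translation needs only the inverse Gaussian tail. A two-sided convention reproduces the 3.5σ quoted here; note that HEP discovery significances are often quoted one-sided instead, which gives a slightly smaller number for the same p-value:

```python
import math

def significance(p_value: float, two_sided: bool = True) -> float:
    """Gaussian significance (in sigma) for a given p-value.

    Solves erfc(z / sqrt(2)) = p_two_sided for z by bisection, so only
    the standard library is needed.
    """
    target = p_value if two_sided else 2.0 * p_value
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > target:
            lo = mid   # tail still too large: push z up
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = significance(5e-4)   # about 3.5 with the two-sided convention
```

For comparison, the famous 5σ discovery threshold corresponds to a one-sided p-value of about 2.9×10^{-7}.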

This is the ideal case for a mass peak, since the mass resolution is excellent (25 MeV) and the background is almost zero. Here is a plot showing a B_{s} signal peak emerging from the background as the cut on the BDT (boosted decision tree, a common multivariate analysis tool) is increased:

The measured rate, Br(B_{s}→μ^{+}μ^{-}) = (3.2^{+1.5}_{-1.2})×10^{-9} is compatible with the SM prediction, (3.2±0.2)×10^{-9}, leaving very little room for new physics contributions. It will be interesting to see how this measurement constrains models of new physics.

Hopefully CMS will also report observation of this rare decay mode, confirming the LHCb result.

## Watching for Bs to mu+mu-

On Tuesday, November 13th, Matteo Palutan, representing the LHCb Collaboration, will report new results on the search for the extremely rare decay B_{s}→μ^{+}μ^{-}.

At tree level, this decay is forbidden in the standard model. It can occur through a loop diagram, however, involving a top quark and W bosons that are far, far off mass shell:

The SM prediction is really very small: the branching ratio B(Bsmumu) = (3.2±0.2)×10^{-9}.

Since this decay is almost completely absent in the standard model, it provides a very good opportunity for new physics to appear — any observation of this decay above the SM rate would be a clear signal for new physics. Indeed, many models of new physics allow for branching ratios a factor of ten or one hundred higher than the SM value. Chief among these is generic Supersymmetry, which predicts large enhancements when tanβ is large (20 to 50) and when M_{A} (the mass of the pseudoscalar Higgs boson) is not too large (less than 200 GeV).
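The often-quoted leading scaling of the SUSY contribution, roughly tan^{6}β/M_{A}^{4}, makes it easy to see why large tanβ and a light pseudoscalar are the interesting corner. A rough sketch follows; the normalization point is hypothetical, and the absolute size of the effect depends on many other MSSM parameters:

```python
def mssm_enhancement(tan_beta: float, m_A: float,
                     tan_beta_ref: float = 50.0, m_A_ref: float = 200.0) -> float:
    """Relative size of the large-tan(beta) MSSM contribution to
    BR(Bs -> mu mu), using the often-quoted leading scaling
    ~ tan^6(beta) / M_A^4.  Normalized to 1 at a hypothetical reference
    point tan(beta)=50, M_A=200 GeV; this is only the parametric
    behavior, not a full MSSM calculation."""
    return (tan_beta / tan_beta_ref) ** 6 * (m_A_ref / m_A) ** 4

# Halving tan(beta) suppresses the new-physics piece by 2^6 = 64;
# doubling M_A suppresses it by 2^4 = 16.
r_low_tanb = mssm_enhancement(tan_beta=25.0, m_A=200.0)
r_heavy_A  = mssm_enhancement(tan_beta=50.0, m_A=400.0)
```

The steepness of these powers is exactly why a measurement compatible with the SM rate bites so hard into the large-tanβ, light-M_{A} region.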

This opportunity has enticed experimentalists for nearly twenty years, and a series of searches by CDF and D0 put more and more stringent bounds during the 1990s and 2000s. See, for example, a discussion of a D0 result in 2010 by Tommaso Dorigo. The Tevatron limits were about an order of magnitude above the SM branching ratio.

Early in 2012, the CDF Collaboration reported a two-sided confidence interval for B(Bsmumu), meaning that they had evidence for a signal although they did not use those words. They used an artificial neural network to categorize the events. Using the very best candidates, they reported B(Bsmumu) = (1.3^{+0.9}_{-0.7})×10^{-8}. This result generated some interest and much discussion (e.g., Tommaso’s blog), needless to say. (For more information, see this CDF web page.)

The advent of the LHC opened new opportunities to observe this decay. Early results from CMS and LHCb excited experts. The superior capabilities of the CMS and especially the LHCb detectors make the searches for this decay more effective. The luminosity and higher center-of-mass energy deliver much larger data samples than the Tevatron collaborations enjoyed. To see how good the data are, here is a beautiful event from the LHCb Collaboration:

The two pink tracks are the muons, and the blue track shows how the B_{s} flows out from the primary vertex. More event displays of this type can be viewed at the LHCb web page.

The CMS Collaboration recently published a result based on 5 fb^{-1} of data taken in 2011 at √s = 7 TeV (arXiv:1203.3976, March 2012): B(Bsmumu) < 7.7×10^{-9} at 95% CL.

At the same time, the LHCb Collaboration published a slightly more stringent result (arXiv:1203.4493 March 2012): B(Bsmumu) < 4.5×10^{-9}, based on 1 fb^{-1}.

Combining the results from LHCb, CMS and ATLAS, the upper limit is B(Bsmumu) < 4.2×10^{-9} (combination note).

These limits are rather close to the SM value, so defining the expected limit is tricky: does one make a calculation assuming no signal, or does one assume that the SM process will indeed produce events? These graphs from the combination note make plain that the two calculations are very different:

The plot on the left is calculated assuming the SM contribution, while the plot on the right assumes no contribution from any source.
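The difference between the two conventions is easy to see in a toy counting experiment. The sketch below uses a simple classical Poisson upper limit, not the experiments' full CLs machinery, and the event counts are invented for illustration:

```python
import math

def pois_cdf(n: int, mu: float) -> float:
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs: int, b: float, cl: float = 0.95) -> float:
    """Classical 95% CL upper limit on a signal s given n_obs observed
    events over an expected background b: solve P(N <= n_obs | s+b) = 1-cl
    by bisection.  (A toy stand-in for the experiments' CLs procedure.)"""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        s = 0.5 * (lo + hi)
        if pois_cdf(n_obs, s + b) > 1.0 - cl:
            lo = s
        else:
            hi = s
    return 0.5 * (lo + hi)

# Invented numbers for illustration (not LHCb's or CMS's):
# background b = 4 events, SM signal s = 3 events expected.
b, s_sm = 4.0, 3.0
lim_bkg_only = upper_limit(n_obs=int(b), b=b)         # median outcome, no signal
lim_with_sm  = upper_limit(n_obs=int(b + s_sm), b=b)  # median outcome, SM signal present
# The "expected limit" is noticeably weaker (larger) when the SM
# signal is assumed to be present in the data.
```

This is the whole point of the two panels: once the expected limit approaches the SM branching ratio, the answer depends on whether you imagine the SM events are actually there.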

It seems clear that the LHC experiments are on the verge of observing a signal for this process, if only at the level the SM predicts. CMS has approximately 20 fb^{-1} at 8 TeV, while LHCb has a data sample more than twice the size of the one used for March’s publication. So it should be very interesting to listen to the presentation by M. Palutan on Tuesday.

The INDICO web page for this seminar is: https://indico.cern.ch/conferenceDisplay.py?confId=216344, and there will be a web retransmission.

Let’s see whether LHCb reports the first observation of this important decay mode…

**Update:** The news from LHCb will be presented on the first day of the HCP Conference by Johannes Albrecht, at 16:10 in Tokyo which is 8:10 in Geneva (1:00 am in Chicago). The INDICO page for HCP is http://kds.kek.jp/conferenceDisplay.py?confId=9237.

## GFITTER Plots

I accidentally hit the link to GFITTER in my bookmarks file while having an early morning cup of coffee. So I looked at the plots there – they’re interesting.

As most particle physicists know, GFITTER is a public computer program for calculating fits to the Standard Model based on precision measurements of electroweak observables. Such fits have a long tradition and have played a crucial role in the development of our field since the 1990s or before. During LEP days, for example, it was customary to infer values of the top quark mass from its influence on electroweak observables. The agreement of these inferred values with the directly measured value at the Tevatron was exciting at the time. Once the top quark mass was known, the fits turned to predicting the Higgs mass. As the years went by and all the crucial measurements improved, the indirect bounds on the Higgs mass sharpened. Once again, the measured value from the LHC agrees with the prediction:

If you want to break the SM in order to access new physics, this agreement is not good news, and now one has to hope for unexpected Higgs properties as revealed in branching ratios and angular distributions of the decay products.

We can continue to scrutinize the internal consistency of the SM, and the GFITTER plots help with that. The traditional plot shows contours in the plane of M_{W} versus M_{t} – here is the GFITTER version:

The yellow cross indicates the measured values of M_{t} and M_{W}, and the black point in the middle of the plot with error bars represents the joint measurement – what I will call the true value. The large grey areas show the expected ranges of (M_{W},M_{t}) based on a host of precision measurements of electroweak observables. They overlap the true value, so at that level the SM is internally consistent. The narrow blue areas show the expected range of (M_{W},M_{t}) based on precision measurements of electroweak observables *and the measured Higgs mass*. The contour is much narrower, reflecting the major impact of the M_{h} measurement. Notice that the agreement with the black point is not so good: the measured value of M_{W} is a little bit higher than predicted by the blue areas, while M_{t} agrees very well.

Looking at this plot, you might wish for “slices” along M_{W} and M_{t} to see a chi-squared contour. Happily, GFITTER provides these plots for us. First, the M_{t} plot:

The most precise measurement comes from the Tevatron experiments, taken together, closely followed by the CMS measurement alone. All of the measurements agree among themselves very well, and they agree with the prediction of the SM (blue parabola) at the level of one sigma.

Here is the corresponding plot for M_{W}:

The agreement between the world average value and the SM prediction is less good. Taken at face value, the central value of the measurement is three sigma above the prediction of the SM (blue curve). Taking the measurement error into account, the disagreement is much smaller than three sigma, but could there be a hint of something here?
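The distinction being drawn here, the distance of the central value from the prediction versus the full pull including both uncertainties, fits in one formula. The numbers below are round, illustrative values in the spirit of the 2012 situation, not the official GFITTER inputs:

```python
import math

def pull(x_meas, sig_meas, x_pred, sig_pred):
    """Discrepancy in sigma, combining the measurement and prediction
    uncertainties in quadrature."""
    return (x_meas - x_pred) / math.hypot(sig_meas, sig_pred)

# Illustrative numbers only (GeV), roughly the 2012 situation:
mw_meas, sig_meas = 80.385, 0.015   # world-average measurement
mw_pred, sig_pred = 80.360, 0.010   # SM fit prediction

naive = (mw_meas - mw_pred) / sig_pred  # central value vs prediction band
full = pull(mw_meas, sig_meas, mw_pred, sig_pred)
# The naive distance looks dramatic, but the full pull, with the
# measurement error included, shrinks to well under two sigma.
```

That shrinkage is the whole caveat in the paragraph above: the hint is real but modest.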

If the Higgs mass increases, then the (admittedly modest) tension between the measured and predicted values of M_{W} will increase. Perhaps it would be nice to see contours in the plane of M_{W} versus M_{h}. I’m sure people who know how to run GFITTER can produce this plot easily.

More precise measurements of M_{W} are desirable, but difficult to achieve. People at the LHC talk about reducing the uncertainty to below 10 MeV, but this requires a lot of experimental work and better PDFs, so it is not around the corner. A measurement with a precision of 6 MeV or even better could be made at a new e^{+}e^{-} collider, but that is just a hope for now.

## Yes ! Discovery of a Higgs boson !!

**Finally, after monumental work, we have discovered a new particle !!!**

This is an historic day in particle physics; fireworks are very well justified. CMS and ATLAS presented status reports showing that the 2012 data confirm the hint of a Higgs boson seen in the 2011 data. The strongest evidence comes from the H→γγ mode and from the H→ZZ→4L mode, mainly because these channels give precise mass information.

Here are the γγ mass distributions from ATLAS and CMS:

These results are absolutely beautiful. No one would doubt that “there is something there.”

Here are the distributions for the H→ZZ→4L channel:

The significance from each experiment is 5σ so both CMS and ATLAS have independent grounds for discovery. This is what experimenters want – this level of confidence sweeps aside any nagging doubts that a new state has been observed. We have a new particle, no doubt, and it is a boson and seems to resemble the Higgs boson.

Here are the local p-values:

There were some interesting differences in the work done by CMS and ATLAS. For example, CMS shows results from several channels, γγ, ZZ, WW, Vbb and ττ, while ATLAS restricted new results to the γγ and ZZ channels (the most important ones in this mass range) only. The mass values are not precisely the same, but they are probably compatible. The signal strengths are quite similar, and both experiments see a γγ signal that is larger than expected.

CMS presented an actual mass measurement: M_{H}=125.3±0.6 GeV.

CMS presented a check of custodial symmetry, reporting the ratio of WW and ZZ signal strengths: μ(H→WW)/μ(H→ZZ) = 0.9^{+1.1}_{-0.6}, consistent with unity. Finally, CMS also presented a study of the couplings, introducing one factor for vector bosons and another for fermions, and finds C_{V}=1 and C_{F}=0.5.

I will post again later (it is the middle of the night here in Illinois). Let me share a few quotes from Joe Incandela, Fabiola Gianotti and Rolf Heuer:

Joe thanked the theorists, accelerator scientists and even the six directors general who guided CERN through the LHC project. Then he made a beautiful statement: *Thanks to CERN for opening up the lab to the world. These results are for all of mankind.*

Fabiola pointed out that we are now entering the era of Higgs measurements. Then she added: *The LHC and the experiments have been doing miracles. People have worked very very hard – so theorists, please be patient! I hope these results will open the door to a very bright future.*

Rolf made the official and happy pronouncement: *We have a discovery!*

There was a standing ovation at the end, as was very appropriate.

## PLEASE don’t call it “the God particle”

Let me make a plea to all science journalists out there:

**Please don’t call it “the God particle”!**

That name was invented by Leon Lederman, very much *tongue-in-cheek*, back in 1993 when he published a rather good popular science book. Leon is a Nobel Prize winner who has devoted much of his life to improving math and science education in the US. His talks were clever and witty, and this “God Particle” terminology was meant to be full of irony.

To be clear: the Higgs boson has *nothing to do with divinity* – neither does any other particle of the standard model and whatever lies beyond. No one I know believes that the Higgs boson has any direct impact on theology or religion, and in fact, we all hate the term as being irritating at best and embarrassing at worst.

When journalists employ this term, they deviate from good science reporting toward sensationalism. They know the term will draw segments of the general population into the article, even though many readers react to it negatively.

So don’t do it! Stick with “the Higgs boson” or some term that physicists use, please.

For thoughtful commentary on this same issue, see the blog post from Claire Evans, from which the image above comes.

## Tevatron Higgs Results – Evidence?

The bottom line is:

**2.5 standard deviations for all channels**

**2.9 standard deviations for the bb channel alone**

These are global p-values. The local p-values are 3.0 s.d. and 3.2 s.d. respectively. Below are my notes and comments from the presentation today.

The Tevatron Higgs group (CDF+D0) just presented their updated – and very nearly final – results for the search for the Higgs boson. Eric James gave the first half (CDF results) and Wade Fisher gave the second half (D0 results and Tevatron combination). Although the Tevatron has stopped taking data, and we have seen impressive results from the Tevatron last winter based on essentially all of their data, this update is important because the D0 analyses have been improved significantly, thereby improving the sensitivity of the Tevatron Higgs search. (Recall that they had an excess at about the 2.5σ level last winter.) The Tevatron data set corresponds to 10 fb^{-1} recorded data per experiment.

Eric started by reminding us of the impact of indirect constraints on the Higgs boson, valid within the standard model (SM). At present those indirect constraints on M_{H} are consistent with the range not already excluded by LEP, the Tevatron and the LHC. One should not forget that the precision measurements of the top and W masses play a key role in those indirect constraints. Wade stated that the Tevatron searches took as their goal the ability to exclude a SM Higgs boson across the full range favored by the precision electroweak data: 100 – 150 GeV, roughly. They very nearly achieved this goal.

CDF and D0 both search for Higgs decays to γγ, WW, ZZ and bb, but the most sensitive channel by far is pp→VH with the V decaying to charged leptons and/or neutrinos and H→bb. The sensitivity of each channel is a strong function of the Higgs mass, M_{H}, but for the interesting region (120 < M_{H} < 140 GeV), the sensitivities in the γγ, WW and ZZ channels are still several times the SM signal because the production cross sections are really very small. The bb channel, on the other hand, has a sensitivity near 2×SM or even better. As Eric pointed out, this is actually better than what CMS and ATLAS have achieved so far in this channel.

Eric and Wade, and the entire Higgs search teams that they represent, recognize this channel (“Vbb”, or pp→VH + H→bb) as the main opportunity for the Tevatron to make a major contribution to understanding the Higgs, if there indeed is a Higgs with a mass near 125 GeV. (LHC will make a statement about that on Wednesday.) One should not underestimate what these teams can do, as Eric nicely illustrated with a plot of sensitivity versus integrated luminosity. To be plain, the Tevatron Higgs searches today exceed even the most optimistic projections from five years ago. Inspiring. Eric also showed that the significance of any excess around 125 GeV at the Tevatron in the Vbb channel is quite comparable to what ATLAS and CMS can achieve with other channels.

Wade explained succinctly that the D0 Higgs searches have all improved due to technical improvements and the addition of a little more data. The improvements are on the order of 10% per channel, and some small improvements are still expected over the summer. One of the main technical improvements comes from splitting the backgrounds in any given channel into categories which are then suppressed individually. This bolsters the S/B ratio and was also done by CDF (and the LHC experiments).

Both CDF and D0 see a fairly broad excess in the M_{H} range 120 – 140 GeV. The characteristics of this excess are very similar for CDF and D0, appearing most clearly in the Vbb channel, with only weak signals in γγ and ZZ, and none in WW. (Recall that the LHC results hint at an enhancement in γγ and a slight deficit in ZZ and WW, assuming SM signal strengths.) Here is the comparison of the exclusion limits from CDF (left) and D0 (right):

Naturally, since they are so similar, the combined exclusion limit will be about the same, only sharper in its features:

Since the observed limit curve lies well above the expected one, we know that we have an excess. This excess comes mainly in bb. Wade showed the following plot to quantify the excess:

This plot shows the signal strength μ (σ×BF normalized to the SM calculation) for the three main channels. Clearly the WW channel indicates no signal, while the γγ channel has low precision (though it does not favor zero). The interesting bit is the bb channel, showing a signal strength of approximately μ_{bb}=2.0±0.7 – according to my ability to read the graph… The plot on the right shows the probability density for the bb channel; the most probable value is indeed μ_{bb}=2, and μ_{bb}=0 is unlikely, according to this picture.

The Tevatron Higgs group likes to show a likelihood ratio as a function of mass, and indeed the plot is informative:

The likelihood ratio (LLR) shows the separation of the signal hypothesis from the null hypothesis. For a given M_{H}, there will be two Gaussians showing the possible outcomes if a signal is present or absent. If these Gaussians are well separated, then one has very good sensitivity to the signal. The data then pick one Gaussian or the other. Looking at the plot of LLR vs. M_{H}, the dotted black line shows the expectation if the null hypothesis is correct, while the red dotted line shows the expectation if a signal is present. The data from the Tevatron, represented by the solid black line, twist and turn as a function of M_{H} due to statistical fluctuations in the data. But one can clearly see a preference for the signal hypothesis for M_{H} in the 115 to 135 GeV range, and a preference for the null hypothesis elsewhere. The plot on the right shows the ideal case if a Higgs signal is present with M_{H} = 125 GeV and with a signal strength μ=1.5. The value μ=1.5 comes from the statistical analysis of the Tevatron data; basically the Vbb ‘signal’ is a bit stronger than one would expect for the SM Higgs boson; I eye-ball it to be μ=1.4±0.6.

This result is impressive and exciting, but Wade and the Tevatron Higgs community made a rather cautious, sober statement: *The significance of the excess is still under 3σ, so we are not making any announcements today. Still, the Tevatron results are getting interesting.* I’ll say!

In my opinion, this result from the Tevatron is extremely important. While it does not constitute discovery of a Higgs boson, or even `evidence’ in a technical sense, it does illuminate the issue of a new state if indeed the LHC experiments have independent evidence for something at 125 GeV. And it cannot be over-emphasized that this result is completely independent of what goes on at the LHC: the beam energies and particle types are different, the final state is different, and the analysis teams are different.

If there is a new particle, some sort of Higgs boson, at 125 GeV, then I certainly believe it decays to bb with a fairly large branching fraction.

So the Tevatron did not `scoop’ the LHC but it will play an important part in the coming months and years to elucidate whatever Nature will give us.

PS: sorry for the poor quality of the images, this is the best I could get while sharing the live broadcast with more than 1200 other people. For more information, see the Fermilab press release.

PPS: Of course many excellent blogs have already written about the Tevatron results. I recommend that you visit viXra log, Tommaso Dorigo and Resonaances and keep an eye on Matt Strassler and Not Even Wrong, among others…