## Archive for January, 2010

### Setting Length Scales in HEP Experiments

Jim Pivarski, who contributes to the Everything Seminar from Cornell University, is an expert in the problem of alignment in high-energy physics experiments. I like to discuss alignment issues with him, and the following problem came up for discussion.

Imagine we are trying to reconstruct events recorded at the LHC. We have the hits in the tracking detectors, and we associate them to form tracks for individual charged particles – a pair of oppositely charged muons, for example. By fitting those tracks to a model of the trajectory (in crude terms, a helix in a homogeneous magnetic field), we measure the momentum vector of each particle at the point of production. We can then calculate the invariant mass of the muon pair and reconstruct known resonances such as the Z boson, the J/ψ meson, etc.
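The last step – forming the invariant mass from the two fitted momentum vectors – is simple enough to sketch in a few lines. This is a minimal illustration, not code from any experiment; the function name and the back-to-back test kinematics are mine, and the muon mass is the standard PDG value:

```python
import math

MUON_MASS = 0.105658  # GeV, PDG value

def invariant_mass(p1, p2, m=MUON_MASS):
    """Invariant mass (GeV) of a pair of equal-mass particles,
    given their momentum 3-vectors in GeV."""
    e1 = math.sqrt(m**2 + sum(c**2 for c in p1))
    e2 = math.sqrt(m**2 + sum(c**2 for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max((e1 + e2)**2 - (px**2 + py**2 + pz**2), 0.0))

# Back-to-back 45.6 GeV muons reconstruct roughly the Z mass:
print(round(invariant_mass((45.6, 0, 0), (-45.6, 0, 0)), 1))  # → 91.2
```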

Suppose, now, that *all* length scales were re-scaled by some small amount:

**r** → (1+α) **r**

where α is a small number compared to one, and **r** is a position vector in coordinate space. Ideally, we would *know* that α is zero. What are the bounds on α and how do we set them?

Let us recall how the curvature of a track is ‘converted’ to a momentum. In most HEP and nuclear physics experiments, charged particles are made to pass through a calibrated magnetic field. Their deflection is proportional to the magnetic field and inversely proportional to the component of the momentum perpendicular to the field. For small deflections, the vector difference between the initial and final momentum vectors is proportional to the integral of **B**⋅d**L**. Ideally, there is no change in the magnitude of the momentum vector, so the only relevant quantity is the angle of deflection, which we can measure. Hence, knowing that angle and also **B**⋅d**L**, we can infer the momentum vector.
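As a numerical sketch of that inference, here is the standard HEP rule of thumb relating the bending radius of a track to its transverse momentum, pT [GeV] ≈ 0.3 q B R with B in tesla and R in meters (the formula is standard; the function name and sample numbers are mine):

```python
def pt_from_radius(B_tesla, radius_m, charge=1.0):
    """Transverse momentum (GeV) of a charged track bending with
    radius R (m) in a field B (T): pT = 0.3 * q * B * R."""
    return 0.3 * charge * B_tesla * radius_m

# A track curving with radius 5/3 m in a 2 T field carries pT of about 1 GeV:
print(round(pt_from_radius(2.0, 5.0 / 3.0), 2))  # → 1.0
```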

Notice that everything hinges on the product **B**⋅d**L**. So if we re-scaled, perhaps willfully, **L** by (1+α) and at the same time re-scaled the magnetic field by the factor (1+α)^{-1}, the momentum would not change, and the peaks of the Z and J/ψ resonances would remain in place. There is a kind of scale freedom here, apparently.
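A quick numerical check of this scale freedom, using the small-deflection sagitta approximation R ≈ L²/(8s) for a track measured over a chord L with sagitta s (the function and the sample track are my own illustration):

```python
def reconstructed_pt(B, chord, sagitta):
    """pT (GeV) from the sagitta s (m) of a track over a chord L (m)
    in a field B (T): R ~ L**2 / (8 s), pT = 0.3 * B * R."""
    radius = chord**2 / (8.0 * sagitta)
    return 0.3 * B * radius

alpha = 1e-3
B, L, s = 3.8, 1.0, 0.002
p0 = reconstructed_pt(B, L, s)
# Re-scale all lengths by (1+alpha) and the field by 1/(1+alpha):
p1 = reconstructed_pt(B / (1 + alpha), (1 + alpha) * L, (1 + alpha) * s)
print(abs(p1 - p0) < 1e-9 * p0)  # → True: the reconstructed pT is unchanged
```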

In reality, we have little freedom to re-scale **B** beyond a factor of 10^{-4} or so. We use Hall probes to give an absolute value for the magnitude of **B**, and these probes cannot be arbitrarily re-calibrated. Ultimately, the values we record from our Hall probes relate to fundamental constants of Nature through a variety of atomic physics experiments.

Furthermore, we know what the meter is, as well. When we design and construct our tracking devices, we use the standard meter and measure actual dimensions to small fractions of a millimeter – there is little danger that all such dimensions are off by a common factor of (1+α). So the task of aligning detectors reduces to the rather difficult task of reconciling the ‘as-built and installed’ dimensions and distances to the ‘as-designed and ideal’ ones. As I said before, Jim is one of the world’s experts on this, and has some interesting ideas that eventually will come to light.

It always amuses me to see how the use of the bending of a charged-particle trajectory in a magnetic field links ‘position’ with ‘momentum’, with knowledge of the value of the magnetic field as the fulcrum; this echoes in my mind the profound connection between coordinate space and momentum space, with Planck’s constant as the link. Of course, there is no deep connection here…

### Scientific Orthodoxy Kills Truth

This past week the Department of Physics and Astronomy at Northwestern University hosted the Heilborn Symposium. Our guests were James York, Jacques Laskar and Murray Gell-Mann, and the program focused on complexity in nature. The Heilborn Series is meant to enhance the intellectual experience of students and faculty, and as part of the scheduled activities, Gell-Mann met with the particle physics group for about an hour and a half.

Gell-Mann’s comments were really quite interesting. He likes to talk in simple declarative sentences, and one of them was:

**Scientific orthodoxy kills truth.**

This is not a shocking assertion, and I would bet that books have been written about it. Gell-Mann provided some nice illustrations, including the assertion that Mayan hieroglyphics were not writing aside from the texts associated with their calendar, and the hygienic theories of Semmelweis. Gell-Mann emphasized the need to pay attention to the facts – always nice to hear from a theoretical physicist in the days of anthropic and multiverse explanations of fundamental particle physics. Of course, he also added that most challenges to established theory are bogus – again, it is the facts and their interpretation which matter.

Gell-Mann was a wonderful guest and his lecture was great. I could try to relate more of the interesting and amusing things he said while at Northwestern. But that is not the point of this post!

*Scientific orthodoxy kills truth.* We certainly have an orthodoxy when explaining fundamental particles and their interactions, spanning the standard model and including low-energy supersymmetry and theories of extra dimensions, etc.

*If the LHC presents facts which belie this orthodoxy, will we be able to see it, and set aside the orthodoxy?* Perhaps we should think carefully and seriously about that, unlike the generation of anthropologists who passed over Mayan civilization, and the generations of physicians who refused to wash their hands before performing surgery…

### W Decays and Color

Tommaso Dorigo posted three physics questions on his blog. They’re rather easy and I hope any particle physics student could answer them correctly. His third question touches upon a favorite bit of phenomenology, so let me expand upon it a bit here.

The W boson decays to a pair of fermions nearly all of the time. I will not worry about radiative corrections – i.e., the “extra” photons and gluons that may be emitted in the process W→f+fbar (where “fbar” means an anti-fermion). As Tommaso points out, the weak interactions are universal – the probability for the W to decay into one particular f+fbar pair is the same as for any other f+fbar pair. More precisely, the coupling constant is the same – the phase space will be smaller for heavier fermions than for lighter fermions, which reduces the likelihood that heavier fermions will materialize in a W decay. I will neglect these mass effects here.

So predicting a branching ratio such as BR(W→e+ν) amounts to counting all of the possible f+fbar pairs that a W boson can decay to. The branching ratio is then just one over that total number of possible final states.

How many such states are there, in the standard model – i.e., in the real world as we know it? For leptonic final states, we have (e,ν_{e}), (μ,ν_{μ}) and (τ,ν_{τ}) – this is quite clear. (Forgive me for not putting bars where they belong – it is hard to do it with this editor and it does not matter for the present discussion.) The quark final states require a little more care. Clearly, the top quark (mass = 172 GeV) is too heavy, but the other five quarks are not. You might think you have these six states: (u,d), (c,s), (u,s), (c,d), (u,b) and (c,b), based on electric charge. But the last four of these six states are not weak doublets. More to the point, the CKM matrix, which allows quarks from different weak doublets to couple to the W boson, is nearly diagonal, meaning that the (u,b) and (c,b) final states make a very small, even negligible contribution. Furthermore, the 2×2 sub-matrix which governs the (u,d) and (c,s) couplings is nearly unitary, so whatever part of (u,d) is reduced is picked up by (u,s), so to speak. In the end, because of this important and unexplained feature of the standard model, it is fine to just take the naive set (u,d) and (c,s) and ignore the mixing of weak doublets allowed by the CKM matrix.
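The near-unitarity claim above is easy to check numerically with approximate magnitudes for the first-row CKM elements (rounded PDG-style values; the variable names and this little check are mine):

```python
# Approximate magnitudes of the first-row CKM elements (rounded):
V_ud, V_us, V_ub = 0.974, 0.225, 0.004

# The u quark couples to d, s, b with these weights. Near-unitarity of the
# row means the total u-type rate equals that of a single pure (u,d) doublet,
# so counting (u,d) once, with no mixing, is an excellent approximation.
total = V_ud**2 + V_us**2 + V_ub**2
print(round(total, 3))  # → 0.999, very close to 1
```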

If you’re quick and not careful, you’ll conclude that the W can decay only to the five states (e,ν_{e}), (μ,ν_{μ}), (τ,ν_{τ}), (u,d) and (c,s), and you would predict that BR(W→e+ν) = 1/5 = 0.2. This prediction is wrong, as measurements give BR(W→e+ν) = (10.75±0.13)% (see the Particle Data Group web page).

**Color** is the key to the calculation. Remember that quarks come in three colors (the conserved charge of the strong interaction), so when we consider W→u+dbar, there are three distinct channels, corresponding to u(red)+dbar(anti-red), u(blue)+dbar(anti-blue) and u(green)+dbar(anti-green). Notice that the W boson is a color singlet, so if we choose the color of the u quark, then the color of the d anti-quark is determined.

Revising our calculation, we have three leptonic states plus __six__ quark states, so the naive prediction is BR(W→e+ν) = 1/9 = 11%, which is quite good indeed. The agreement with the experimental value is clear proof that there are three colors of quarks, and that the W couples to all fermion doublets with equal strength, modulo the factors incorporated in the CKM matrix. I find this a really very nice piece of physics.
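The whole counting argument fits in a few lines (a sketch of the arithmetic above; the function name and defaults are mine):

```python
def br_W_to_e_nu(n_lepton_doublets=3, n_quark_doublets=2, n_colors=3):
    """Naive W -> e+nu branching ratio: equal coupling to every open
    fermion doublet, with quark doublets counted once per color."""
    channels = n_lepton_doublets + n_colors * n_quark_doublets
    return 1.0 / channels

print(round(br_W_to_e_nu(), 3))            # → 0.111, close to the measured 0.1075
print(round(br_W_to_e_nu(n_colors=1), 3))  # → 0.2, the colorless (wrong) prediction
```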

This kind of simple phenomenological calculation is at the heart of basic experimental particle physics. It is nice to cast it as an exercise for the student, but in truth we do this kind of work whenever a new particle is observed. For example, a crude measurement of BR(W→e+ν) told us in the 1980s that the top quark mass must be at least M_{W}, else a smaller BR would have been observed. (What is that number, by the way? Take a look again at Tommaso’s post.) It came as a bit of a surprise that M_{t}≈172 GeV, which of course is much too heavy to allow W→t+b, which is part of the reason why single-top production is so interesting.

In the 1990s, a parallel line of reasoning led to the conclusion that there are only three species of light neutrinos, through measurement of Z→ν+νbar. This is one of the most important and most beautiful of the results from LEP 1.

In the spirit of Tommaso’s post, let me pose a question for the reader. Suppose there were a hidden lepton charge, similar to color, so that there were __two__ kinds of electrons, muons and taus. What would be the prediction for BR(W→e+ν_{e}), and to what degree is this excluded by the measured value?

An additional question: what can we conclude about W decays to exotic particles this way? I stated that W bosons decay only to fermion pairs. Why not to boson pairs?