## The Physics in the ALICE Paper

The ALICE Collaboration published the very first physics paper on collisions at the LHC. You can find the paper on the arXiv (arXiv:0911.5430), and soon in the European Physical Journal C.

I love reading papers that report good, basic measurements I don’t really understand – they give me the opportunity to go learn something new! After all, we don’t measure some arbitrary quantity just because we can – every measurement has a point to it, something more than just checking a Monte Carlo event generator. So, what is the physics point behind the ALICE measurement?

The ALICE Collaboration measured the density of charged particles as a function of pseudo-rapidity (η). Here is the main plot:

The horizontal axis gives the pseudorapidity, and the vertical axis gives the “density” or number of charged particles per unit of pseudorapidity.

First, let’s recall what pseudorapidity is. It is sometimes described as a measure of the angle with respect to the beam, but that is not really what it means. It’s not just a substitute for cosθ. It is an approximation to the true rapidity (y), which parametrizes relativistic boosts. In old-fashioned books on special relativity one sees sinh(y) = βγ, so one might say that y is an imaginary angle, but this brings no insight. The main point of rapidity in relativistic kinematics is that under a Lorentz boost a distribution f(y) transforms as f(y+a) – i.e., the distribution is shifted along the y-axis by a constant amount, but its shape does not change. Hence, it is the shape which is of the essence.
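The shift property is easy to verify numerically. Here is a minimal sketch (my own illustration, with made-up momenta, not anything from the paper): boost a particle along z and check that its rapidity just shifts by atanh(β).

```python
import math

def rapidity(E, pz):
    """y = 1/2 ln[(E + pz) / (E - pz)]"""
    return 0.5 * math.log((E + pz) / (E - pz))

def boost_z(E, pz, beta):
    """Lorentz boost along z to a frame moving with velocity beta (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (E - beta * pz), gamma * (pz - beta * E)

# A pion-like particle: m = 0.14 GeV, pT = 0.5 GeV, pz = 2.0 GeV (illustrative).
pz = 2.0
E = math.sqrt(0.14**2 + 0.5**2 + pz**2)
beta = 0.6

E2, pz2 = boost_z(E, pz, beta)
shift = rapidity(E, pz) - rapidity(E2, pz2)
print(shift, math.atanh(beta))  # both ≈ 0.6931: the boost shifts y by atanh(beta)
```

The shift atanh(β) is the same for every particle, whatever its mass or pT – which is exactly why a rapidity distribution keeps its shape under boosts.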

Let’s write the explicit formulae:

y = ½ ln [ (E+pz) / (E-pz) ]

η = ½ ln [ (|p|+pz) / (|p|-pz) ] = -ln[ tan(θ/2) ]

So it is obvious that η is numerically close to y when the particle is relativistic enough that its mass does not matter. Experimentally speaking, however, η is much easier to measure, since one needs only θ, the angle of a track with respect to the beam axis. For y, one needs a good momentum measurement and knowledge of the mass, which implies particle identification – and that is hard.
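To put numbers on this, here is a quick sketch (illustrative values of my own, not taken from the paper): a fast pion has η very close to y, while a slower, heavier proton at the same angle shows a sizable difference.

```python
import math

def y_and_eta(m, pT, theta):
    """Rapidity and pseudorapidity for mass m, transverse momentum pT,
    and polar angle theta (so |p| = pT / sin(theta))."""
    p = pT / math.sin(theta)
    pz = p * math.cos(theta)
    E = math.sqrt(p**2 + m**2)
    y = 0.5 * math.log((E + pz) / (E - pz))
    eta = -math.log(math.tan(theta / 2.0))
    return y, eta

theta = 2 * math.atan(math.exp(-2.0))      # the angle corresponding to eta = 2

y_pi, eta_pi = y_and_eta(0.140, 0.50, theta)
y_p, eta_p = y_and_eta(0.938, 0.30, theta)
print(y_pi, eta_pi)   # pion:   y ≈ 1.96, eta = 2.00 – close
print(y_p, eta_p)     # proton: y ≈ 0.95, eta = 2.00 – far apart
```

For the pion the approximation is good to a couple of percent; for the proton, whose mass is comparable to its momentum here, η overstates y by about a factor of two.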

All of this is to say that we use η because we can measure it well, but for puzzling out the physics, one should think in terms of the normal rapidity, y.

(Note that the ALICE magnet was off for these data, so they have no way of measuring momenta. For a nice synopsis of the analysis, see Zoe Matthews’ post on Quantum Diaries.)

What do you notice about the results in the plot above? The charged-particle density hardly varies with η! Why not? Why is the density of charged tracks roughly constant as a function of rapidity? Considering its definition as a boost parameter (see above), this should come as quite a surprise. (The small waviness reflects the small numerical differences between η and y.)

Now, to the physics. I learned what little I understand from a very nice review article by Grosse-Oetringhaus (CERN) and Reygers (Heidelberg), which came out just after the ALICE paper (arXiv:0912.0023). The story begins with Feynman scaling, which pertains to the formation of hadrons in inclusive, inelastic processes.

Let’s begin with this picture: A stream of partons comes in along the z-axis at high energy, and new partons are produced which fly out in many directions. Do not think in terms of simple Feynman diagrams or any sort of one-particle exchange. There is just a big blob with lots of partons coming out. The key concept is that the formation of any particular final-state parton does not depend on the global picture, only the local one. The energy and flavor of the incoming partons don’t figure into the calculation – a parton is produced according to some dynamics, and that parton turns into a hadron according to a universal distribution which may depend on the particle species but not on the other partons or their kinematics. The other key idea is that the distribution functions factorize into a transverse part (pT) and a longitudinal part. The latter is expressed through a scaling variable, “Feynman xF,” which is pz divided by its maximum value: xF = pz/pz,max ≈ 2pz/W, where W = √s is the center-of-mass energy.

Feynman posited that the probability to produce a parton of energy E goes as 1/E. He justified this on very general phenomenological grounds. Integrating dE/E from roughly the hadron mass scale up to W/2, one finds that the mean multiplicity N is proportional to ln(W). The total range of rapidity available is also proportional to ln(W), so one infers that dN/dy = constant.
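One can see the mechanics of this argument with a toy Monte Carlo (my own sketch, with illustrative numbers – this is not ALICE’s analysis): sample the energy with a 1/E density between roughly the transverse mass m_T and W/2, convert each energy to a rapidity via E = m_T cosh(y), and the resulting y distribution comes out flat except near the edges.

```python
import math
import random

random.seed(1)
mT, W = 0.35, 900.0          # illustrative transverse mass (GeV) and CM energy
Emin, Emax = mT, W / 2.0

ys = []
for _ in range(200_000):
    # dN/dE ∝ 1/E  <=>  uniform in ln(E)
    E = math.exp(random.uniform(math.log(Emin), math.log(Emax)))
    ys.append(math.acosh(E / mT))    # E = mT cosh(y)  =>  y = acosh(E / mT)

# Count entries in unit-wide rapidity bins from y = 1 to y = 7.
bins = [sum(1 for y in ys if lo <= y < lo + 1) for lo in range(1, 7)]
print(bins)   # roughly equal counts per bin: dN/dy ≈ constant
```

Analytically, dN/dy ∝ tanh(y) in this toy model, which is essentially 1 everywhere except very close to y = 0 – a flat plateau whose total width grows like ln(W), so the area under it gives N ∝ ln(W).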

Thus, the flatness of the distribution of dN/dη is an important validation of the quark-parton picture, invented in large part by Feynman.

Here is the other important physics plot from the ALICE paper:

This compilation of measurements tests the prediction that N increases as ln(W). Clearly, this prediction fails – the increase is quadratic in ln(W), not linear. This has been known for quite some time, and does not invalidate the quark-parton picture (of course). The arguments behind Feynman scaling were extended, leading to KNO scaling. KNO scaling works well but not perfectly, and the UA5 experiment showed violations of KNO scaling. It was eventually shown that the multiplicity distribution within a narrow η range follows a negative binomial distribution (NBD), or better, a sum of two of them. I can’t explain these things here (nor are they addressed in the ALICE paper), but the salient point is that these extensions of Feynman’s original line of thinking lead to the expectation that N increases quadratically with ln(√s), as seen in the second plot above.
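For reference, the negative binomial distribution in the form used for multiplicities is parametrized by the mean ⟨n⟩ and a shape parameter k, with smaller k giving a broader distribution. Here is a sketch (the parameter values are illustrative, not fitted to any data set):

```python
import math

def nbd_pmf(n, mean, k):
    """P(n) = Gamma(n+k) / (Gamma(k) n!) * (mean/(mean+k))^n * (k/(mean+k))^k,
    evaluated in log space for numerical stability."""
    return math.exp(math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
                    + n * math.log(mean / (mean + k))
                    + k * math.log(k / (mean + k)))

# Illustrative parameters: mean multiplicity 4, shape k = 1.5.
probs = [nbd_pmf(n, mean=4.0, k=1.5) for n in range(400)]
total = sum(probs)
mean_out = sum(n * p for n, p in enumerate(probs))
print(total, mean_out)   # ≈ 1.0 (normalized) and ≈ 4.0 (the mean)
```

In the limit k → ∞ the NBD reduces to a Poisson distribution, so k is a convenient handle on how much broader than Poisson the measured multiplicity distribution is.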

This piece of physics is very rich, even if not new, and it is interesting that the best Monte Carlo generators have some trouble reproducing the data accurately. For a lot more detail, see the review article by Grosse-Oetringhaus and Reygers (arXiv:0912.0023) or a good textbook on hadron-hadron interactions. For more data, just wait a while – the LHC collaborations are only getting started!