The Black Book of Quantum Chromodynamics: A Primer for the LHC Era

John Campbell, Joey Huston, and Frank Krauss

Print publication date: 2017

Print ISBN-13: 9780199652747

Published to Oxford Scholarship Online: March 2018

DOI: 10.1093/oso/9780199652747.001.0001


10 Summary

Source: The Black Book of Quantum Chromodynamics
Author(s): John Campbell, Joey Huston, Frank Krauss
Publisher: Oxford University Press
DOI: 10.1093/oso/9780199652747.003.0010

Abstract and Keywords

The book concludes with a short summary of some of the lessons learned from the LHC. This includes a discussion of the theoretical improvements required to extract the most information possible from future high-luminosity running. The chapter also provides a short overview of considerations for potential higher-energy hadron colliders.

Keywords:   lessons, future colliders, 100 TeV, NNLO accuracy

10.1 Successes and failures at the LHC

Perhaps the greatest success at the LHC (besides the discovery of the Higgs boson) is the non-discovery of new physics. This statement may seem counter-intuitive. Of course, the discovery of new physics would have been desirable, but the experimental analysis techniques and the comparisons to theoretical predictions have worked well enough that Standard Model physics has not been confused with BSM physics.1 Another seemingly counter-intuitive statement is that the LHC benefitted by turning on at a lower energy, with a reduced luminosity, in 2010. The lower energy and smaller data sample precluded most beyond-the-Standard Model searches. This forced more physicists to work on SM physics measurements (leading to the re-discovery of the Standard Model), thus forming benchmarks and tools that were useful with higher luminosity samples where discovery potential was present.

The resolution of the LHC detectors, both calorimetry and tracking, is superior to that of the CDF and DØ detectors. Tracking, in particular, has higher precision and extends to a higher rapidity than possible at the TEVATRON. The improvement in computing power has meant that detailed event simulations, tracing the electromagnetic and hadronic showers, are possible for a variety of physics processes, allowing a better understanding of the detector response.

The theoretical tools and analysis techniques available to LHC physicists are for the most part more sophisticated than those available at the TEVATRON. Fixed-order predictions at NLO (interfaced to parton shower programs) are available for essentially any reasonable process, and NNLO calculations for $2 \to 2$ processes have reached a degree of maturity, with calculations of $2 \to 3$ processes to be expected. The $gg \to H$ process has been calculated to N3LO and similar calculations for Drell-Yan production are not far off. The higher-order calculations have resulted in smaller theoretical uncertainties from scale variations. Since new physics may not show up as a clear peak in a distribution, but rather as subtle deviations from SM predictions, precision comparisons are crucial for the discovery or exclusion of BSM physics.

In the precision physics region (50–500 GeV), PDF uncertainties are small for most parton-parton luminosities, but are relatively unconstrained at high mass, especially for initial states involving gluons. Further reduction in PDF uncertainties, especially in the high-mass region, can come only from data at the LHC. However, in order to provide constraining information on PDFs at high $x$, the data must be consistent: different distributions in the same measurement that provide overlapping PDF information (for example $y_{t\bar{t}}$ and $m_{t\bar{t}}$) must be consistent, results from different measurements in the same experiment must be consistent (for example, inclusive jet production and $t\bar{t}$ production both provide information on the high-$x$ gluon), and results from the different LHC experiments must be consistent. Otherwise, the measurements may change the central PDFs, but the tension will result in the uncertainty not decreasing (or even growing larger).

Theoretical predictions are most powerful when they relate to fiducial cross-sections; extrapolating to the full phase space most often introduces an extra layer of uncertainty, as witnessed for example in the ATLAS measurement of the WW cross-section discussed in Section 9.4.2. Fiducial measurements are more common at the LHC than at the TEVATRON, and hopefully the trend towards more fiducial measurements will continue. This requires that the theoretical calculations also provide predictions at the fiducial level, incorporating, for example, the decays of all unstable particles.

There are still issues with predictions at the very highest masses; in addition to the larger PDF uncertainties in this region, projections of cross-sections/backgrounds are often made using parton shower Monte Carlo programs, where parameter variations in the Monte Carlo can lead to sizeable uncertainties. In some cases, this increased uncertainty is not warranted, especially if the parameter variations can be constrained by (higher precision) fixed-order calculations.

In order to reach the sensitivity needed for new physics searches, the LHC must be run at as high a luminosity as possible. This necessarily results in a large number of additional interactions in each bunch crossing (pileup), creating problems with particle identification and with precision measurements of the particle/jet energies. Techniques have been developed for dealing with pileup, in particular the jet area subtraction technique discussed in Section 9.2.1. Topology dependences of the pileup energy density can limit the ultimate efficacy of the subtraction method.
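As a schematic reminder of the idea behind the area-based correction of Section 9.2.1 (this is a sketch only; refinements such as a rapidity-dependent pileup density are ignored), each jet's transverse momentum is corrected by the product of an event-wide pileup density and the jet's catchment area,

\[
% schematic sketch only; rapidity dependence of \rho and detailed conventions omitted
p_T^{\rm corr} = p_T^{\rm raw} - \rho \, A_{\rm jet}\,,
\qquad
\rho = \mathop{\rm median}_{j\,\in\,{\rm jets}} \left\{ \frac{p_{T,j}}{A_j} \right\} ,
\]

with the median taken over the (mostly soft) jets in the event so that the hard scatter does not bias the estimate of $\rho$.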

By necessity, the jet area subtraction technique removes not only the pileup energy, but also the energy associated with the underlying event. Previous measurements at the TEVATRON and LHC have included the underlying event in the physical observables.2 Since the underlying event information has been removed by the subtraction technique, the choice of the LHC experiments has been to add it back in by including a Monte Carlo prediction for that energy. In some sense, this, although necessary, is a step backwards from the trend towards removing as much Monte Carlo extrapolation from an observable as possible.

Tracking at the LHC is better than that at the TEVATRON, and in particular it is most often possible to distinguish the interaction vertex of the interaction of interest from those of pileup events. Thus, one can distinguish hard-scatter jets from pileup jets using the jet tracking information, and reject jets if too much of the jet energy arises from pileup contributions. Alas, this is possible only for jets produced in the precision tracking region ($|y| < 2.5$), and pileup jets are much more of a problem at more forward rapidities. Unfortunately, this is a region where jet identification can be crucial, as for example in measuring the tagging jets in VBF Higgs production. The problem will only get worse as the instantaneous luminosity increases. The solution is to provide more information to discriminate between pileup and hard-scatter jets (such as timing for the forward calorimetry), or simply to raise the jet transverse momentum cutoff for forward jets.

10.2 Lessons for future colliders

10.2.1 Standard Model cross-sections beyond 14 TeV

To understand the physics potential of future proton-proton colliders, it is imperative to understand the centre-of-mass energy dependence of notable cross-sections at such machines. Fig. 10.1 shows the predicted cross-sections for a selection of basic processes, ranging over twelve orders of magnitude from the total inelastic proton-proton cross-section to Higgs boson pair-production. For inclusive jet and direct photon production, 50 GeV transverse momentum cuts are applied to the jet and the photon respectively.


Fig. 10.1 Cross-sections for select hadron collider processes as a function of the operating energy, $\sqrt{s}$. The cross-sections presented in this figure have been calculated at next-to-leading order in QCD using the MCFM program [311, 314].

The growth of the cross-sections with $\sqrt{s}$ largely reflects the behaviour of the underlying partonic luminosities, cf. Section 6.5. For instance, the top pair cross-section is dominated by the partonic process $gg \to t\bar{t}$ and the gluon-gluon luminosity rises significantly at higher values of $\sqrt{s}$. The same holds true for the Higgs production channel $t\bar{t}H$ but, in contrast, the associated production channels are dominated by quark-antiquark contributions and rise much more slowly. The different behaviour means that, unlike at current LHC operating energies, the $t\bar{t}H$ channel becomes the third-largest Higgs production cross-section at 33 TeV and above. As a figure of merit for estimating the difficulty of observing the Higgs pair production process it is not unreasonable to consider the ratio of its cross-section to the top pair cross-section. In many of the possible Higgs boson decays the final states receive significant background contributions from the top pair process. The fact that both processes are predominantly gluon-gluon induced means that this measure is approximately constant across the range of energies considered. From a consideration of total cross-sections alone, it is therefore not clear that the prospects for extracting essential information from the Higgs-pair process are significantly better at a higher-energy hadron collider, even though the rates increase dramatically.
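The connection to the partonic luminosities can be made explicit with the schematic factorized form (the precise normalization and symmetry-factor conventions are those of Section 6.5, which this sketch glosses over):

\[
% schematic; identical-parton symmetry factors and normalization conventions suppressed
\sigma(s) = \sum_{i,j} \int_0^1 {\rm d}\tau\,
  \frac{{\rm d}\mathcal{L}_{ij}}{{\rm d}\tau}\,
  \hat{\sigma}_{ij}\!\left(\hat{s} = \tau s\right),
\qquad
\frac{{\rm d}\mathcal{L}_{ij}}{{\rm d}\tau} =
  \int_\tau^1 \frac{{\rm d}x}{x}\,
  f_i(x,\mu_F)\, f_j\!\left(\frac{\tau}{x},\mu_F\right) ,
\]

so that the energy dependence of a given cross-section is driven by the luminosity of its dominant initial state, evaluated around $\tau \sim M^2/s$ for a produced system of mass $M$.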

A different sort of contribution to event rates can also be estimated from this figure. The contribution of double parton scattering events, of the type discussed in Section 7.2.3, can be crudely estimated from Eq. (7.43). The value of $\sigma_{\rm eff}$ can be considered to be approximately energy-independent and around 20 mb. Although this is not exactly true, the uncertainty on this parameter, and indeed on the accuracy of Eq. (7.43) itself, is such that this should be considered sufficient for an order-of-magnitude estimate only. A particularly simple application of this is the estimation of the fraction of events for a given final state in which there is an additional DPS contribution containing a pair of $b$-quarks. This fraction is clearly given by the ratio $\sigma_{b\bar{b}}/(20~{\rm mb})$. From the figure this fraction ranges from a manageably small 2% effect at 8 TeV to a much more significant 15% at 100 TeV. More study would clearly be required in order to obtain a true estimate of the impact of such events on the physics that could be studied at higher energies, but these simplified arguments can at least give some idea of the potentially troublesome issues.
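A minimal sketch of the estimate, assuming the pocket-formula form of which Eq. (7.43) is a version (symmetry factors for identical processes suppressed), is

\[
% pocket-formula sketch; symmetry factors and the exact form of Eq. (7.43) glossed over
\sigma^{\rm DPS}_{A\,+\,b\bar{b}} \approx
  \frac{\sigma_A\,\sigma_{b\bar{b}}}{\sigma_{\rm eff}}
\quad\Longrightarrow\quad
f_{\rm DPS} = \frac{\sigma^{\rm DPS}_{A\,+\,b\bar{b}}}{\sigma_A}
  \approx \frac{\sigma_{b\bar{b}}}{20~{\rm mb}} \,,
\]

so that the quoted 2% and 15% fractions correspond to $\sigma_{b\bar{b}} \approx 0.4$ mb at 8 TeV and $\approx 3$ mb at 100 TeV respectively.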

As an example of the behaviour of less-inclusive cross-sections at higher energies, Fig. 10.2 shows predictions for $H + n\,\text{jets} + X$ cross-sections at various values of $\sqrt{s}$ and as a function of the minimum jet transverse momentum. The cross-sections are all normalized to the inclusive Higgs production cross-section, so that the plots indicate the fraction of Higgs events that contain at least the given number of jets. The inclusive Higgs cross-section includes NNLO QCD corrections, while the 1- and 2-jet rates are computed at NLO in QCD. All are computed in the effective theory with $m_t \to \infty$.
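For reference, the effective theory in question is the usual one in which the top quark is integrated out, leaving a point-like Higgs-gluon interaction; at leading order in $\alpha_s$, and neglecting matching corrections to the Wilson coefficient, the effective Lagrangian takes the familiar form

\[
% leading-order Wilson coefficient only; higher-order matching corrections omitted
\mathcal{L}_{\rm eff} = \frac{\alpha_s}{12\pi}\,\frac{H}{v}\,
  G^{a}_{\mu\nu}\,G^{a,\mu\nu}\,,
\]

where $v$ is the Higgs vacuum expectation value and $G^{a}_{\mu\nu}$ the gluon field-strength tensor.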


Fig. 10.2 Cross-sections for the production of a Higgs boson in association with $n$ or more jets, for $n = 0, 1, 2$, normalized to the inclusive Higgs cross-section ($n = 0$). Cross-sections are shown as a function of the minimum jet $p_T$ and are displayed for a proton-proton collider operating at 14 TeV (left) and 100 TeV (right).

The extent to which additional jets are expected in Higgs events is strongly dependent on how the jet cuts must scale with the machine operating energy. For instance, consider a jet cut of 40 GeV at 14 TeV, a value in line with current analysis projections. For this cut, approximately 20% of all Higgs boson events produced through gluon fusion should contain at least one jet. The fraction with two or more jets is expected to be around 5%. To retain approximately the same jet compositions at 100 TeV requires only a modest increase in the jet cut to 80 GeV.

However, this analysis is not the full story, due to effects induced by a finite top-mass that are neglected in the effective theory. This is illustrated in Fig. 10.3, which shows the rates for Higgs production in association with up to three jets, taking proper account of the top-mass, as a function of the minimum jet $p_T$. As shown in the lower panel, a comparison of these results with those obtained in the effective theory reveals significant differences. Even for moderate jet cuts of around 50 GeV a finite top-mass results in differences in the $H + 3$-jet rate of approximately 30%. For significantly harder jet cuts the effective theory description clearly fails spectacularly. Although this should not be a great surprise, given the energy scales being accessed, it is a useful reminder of the limitations of approximations that are commonly used at the LHC. Such approximations must clearly be left behind in order to obtain meaningful predictions for relatively common kinematic configurations at a 100 TeV collider.


Fig. 10.3 Cross-sections for the production of a Higgs boson in association with 1, 2 or 3 jets, taking into account finite top-mass effects. Cross-sections are shown as a function of the minimum jet $p_T$ for a proton-proton collider operating at 100 TeV. The lower panel shows the ratio of these results to the ones obtained in the effective theory.

Reprinted with permission from Ref. [267].

Of course, the differences that exist between the theoretical predictions at 14 TeV and 100 TeV offer significant opportunities that are only beginning to be explored. The event rates will be sufficiently high that analysis cuts can be devised to take advantage of the unique kinematics at a 100 TeV collider, rather than simply “scaling up” the types of analyses currently in use at the LHC. For instance, substantially harder cuts on the transverse momenta of jets will lead to a predominance of boosted topologies, which can be analysed with the types of jet substructure techniques that are still relatively new at the LHC, cf. Section 9.2.4.

10.2.2 Necessary theory developments

Improvements to the theoretical description of hadronic collisions are of course driven by the accuracy of the experimental measurements that can be made. The outstanding level of detail that the LHC detectors have been able to provide, from particle identification to jet tracking, has enabled experimental uncertainties to be controlled at the few-percent level for quantities such as the transverse momentum of single photons or Z bosons. Such exquisite measurements have thrown down the gauntlet to the theoretical community.

Of course, some of these challenges have been foreseen. Going from the early days of the TEVATRON to the build-up to the LHC saw a sea-change in the quality of perturbative predictions. Rather than being limited to LO predictions for $2 \to 2$ processes, by the advent of the LHC, NLO predictions were available for almost all final states of immediate interest. At the beginning of Run II of the LHC, even NNLO calculations have matured to the level of providing differential predictions for events containing jets. The pace of these developments has been so fast that it is easy to take for granted a level of sophistication that many never believed would have been achieved by now. The availability of N3LO predictions for Higgs production, multiple examples of NNLO calculations matched to a parton shower, and the ability to go from a Lagrangian to NLO-accurate showered events are just a few such examples. Such progress, to a level of precision that in some cases borders on the ridiculous, may leave the reader wondering whether any challenges remain. Yet, undeniably, much work lies ahead.

In terms of fixed-order descriptions, the march to higher orders is not yet over. It is not clear whether existing techniques for performing NNLO calculations can be applied to more complex final states. While continued improvements in computer processing power will certainly help, it is almost certain that alternative, superior approaches have yet to be devised. Similar arguments apply to the case of N3LO predictions, where extensions of the method that could provide more differential information, or perhaps be suitable for more general processes, are far from obvious. As highlighted in earlier chapters, the presence of substantial electroweak corrections at high energies is just beginning to be probed. As the LHC becomes sensitive to even higher energies, the inclusion of higher-order electroweak effects will become mandatory in order to retain theoretical predictions of sufficient precision. A simultaneous expansion in both parameters, i.e. correctly including corrections that contain a mix of strong and electroweak couplings, will also become important. At present no complete calculation of such effects exists, even for a single process. In addition, a number of approximations are routinely used to simplify existing calculations. Examples include neglecting quark masses, working in the limit $m_t \to \infty$, and treating the production and decay stages of resonance production separately. These will all need to be revisited, for various physics processes, in the coming years.
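Schematically, such a simultaneous expansion organizes a cross-section whose Born term is of order $\alpha_s^{m_0}\alpha^{n_0}$ as a double series in the two couplings,

\[
% schematic double expansion in the strong and electroweak couplings
\sigma = \sum_{m,n\,\ge\,0} \alpha_s^{\,m_0+m}\,\alpha^{\,n_0+n}\,\sigma^{(m,n)}\,,
\]

with the genuinely mixed corrections being the terms with both $m \ge 1$ and $n \ge 1$, none of which has yet been computed completely for any single process.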

As improved fixed-order predictions become available it will be important that their effects are included in parton shower predictions. This will enable the improved modelling of the computed processes to be properly taken into account across a wide range of experimental analyses. The parton showers themselves will be the subject of greater scrutiny as they are held up to the light of experimental data that is ever more precise. This may reveal deficiencies in our modelling, related either to an incomplete treatment of towers of logarithms, or simply to an unavoidable choice in how the shower is constructed. Further subtleties, related to non-perturbative effects such as hadronization, fragmentation, and even the quality of the factorization picture itself, will eventually require new theoretical understanding as they become the dominant sources of theoretical uncertainty.

Finally, and perhaps most critically, it is important not to lose sight of the fact that the ultimate goal of this program is to extract as much information as possible from the data that the LHC provides. To this end it is imperative to also continually develop new tools and novel approaches for doing just that. An excellent example of this is the development of jet substructure techniques, which have already found application in top-tagging, jet discrimination, and a host of other analysis methods besides. No doubt there are many more insightful theoretical observations of this nature waiting to be made in the years ahead.

Notes:

(1) The authors hold out hope that new physics will indeed be discovered at the LHC.

(2) A prediction for the underlying event is present in every parton shower Monte Carlo program, but not in fixed-order calculations. For these, non-perturbative corrections must be calculated by the experimenters to allow comparison of parton-level predictions to hadron-level observables.