The Black Book of Quantum Chromodynamics: A Primer for the LHC Era

John Campbell, Joey Huston, and Frank Krauss

Print publication date: 2017

Print ISBN-13: 9780199652747

Published to Oxford Scholarship Online: March 2018

DOI: 10.1093/oso/9780199652747.001.0001


1 Introduction

DOI: 10.1093/oso/9780199652747.003.0001

Abstract and Keywords

This chapter serves as a very brief overview of the physics of the LHC era and the basic elements of the accelerator and experiments. It also provides a user’s guide to the book and succinct summaries of the later chapters.

Keywords: Standard Model, Higgs boson, accelerator, detector, guide, correction

1.1 The physics of the LHC era

1.1.1 Particle physics in the LHC era

The turn-on of the LHC in 2008 was the culmination of an almost 20-year design and construction effort, resulting in the largest particle accelerator (indeed, the largest machine) ever built. At its inception the LHC still faced competition from the TEVATRON which, although operating at a much lower energy, had a data sample with a large integrated luminosity and well-understood detectors and physics-analysis software. The TEVATRON had discovered the top quark and was continuing its search for the Higgs boson. As is well known, the LHC suffered considerable damage from a cryogenic quench soon after turn-on, resulting in a shut-down of about 1.5 years. Its return to operation in 2010 was at a much lower energy (7 TeV rather than 14 TeV) and at much lower intensities. In retrospect, the small data sample at the lower energy can be considered a blessing in disguise. There was not enough data to even consider a search for the Higgs boson (or for much in the way of new physics), but there was enough to produce W and Z bosons, top quarks, photons, leptons and jets: in other words, all of the particles of the Standard Model except for the Higgs boson. The result was the re-discovery of the Standard Model (a coinage for which one of the authors takes credit) and the development of the analysis tools, and the detailed understanding of the detectors, that allowed for the discovery of the Higgs boson on July 4, 2012, with data from 7 TeV running in 2011 and 8 TeV running in 2012. The LHC shut down again in early 2013 for repairs and upgrades, in part to avoid the type of catastrophic quench that occurred in 2008; the LHC detectors also used this two-year period for repairs and upgrades. The LHC ran again in 2015, at an energy much closer to design (13 TeV). The increased energy allowed for more detailed studies of the Higgs boson but, more importantly, offered a much greater reach for the discovery of possible new physics. At the time of this book's completion, a great deal of physics had been measured at the operating energy of 13 TeV. Given the new results continually pouring out at this energy, the decision was made to concentrate in this book on results from the 7 and 8 TeV running; these are sufficient for the data comparisons needed to illustrate the theoretical machinery developed here.

1.1.2 The quest for the Higgs boson — and beyond

1.1.2.1 Finding the Higgs boson

The LHC was designed as a discovery machine, with a design centre-of-mass energy a factor of seven larger than that of the TEVATRON. This higher collision energy opened up a wide phase space for searches for new physics, but there was one discovery that the LHC was guaranteed to make: that of the Higgs boson, or of an equivalent mechanism preventing WW scattering from violating unitarity at high masses.

The Higgs boson couples directly to quarks, leptons and to W and Z bosons, and indirectly (through loops) to photons and gluons. Thus the Higgs boson final states are just the building blocks of the SM with which we have much experience, both at the TEVATRON and the LHC. The ATLAS and CMS detectors were designed to find the Higgs boson and to measure its properties in detail.

The cross-section for production of a Higgs boson is not small. However, the final states for which the Higgs boson branching ratio is large (such as bb̄) suffer from much larger backgrounds from other, more common processes. The final states with low backgrounds (such as ZZ → ℓ⁺ℓ⁻ℓ⁺ℓ⁻) suffer from poor statistics, primarily due to the small Z branching ratio to leptons. The H → γγ final state suffers from both a small branching ratio and a large SM background, so one might not expect it to be promising for a Higgs boson search. However, because the Higgs boson is intrinsically very narrow, a diphoton signal can be observable if the experimental resolution of the detector is good enough that the signal stands out over the background.
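To set the scale, consider a rough numerical illustration; the branching fractions below are approximate values for a 125 GeV Standard Model Higgs boson, quoted only for orientation:

\begin{align}
  \mathcal{B}(H \to b\bar{b}) &\approx 0.58, \\
  \mathcal{B}(H \to \gamma\gamma) &\approx 2.3\times 10^{-3}, \\
  \mathcal{B}(H \to ZZ^{*})\,\big[\mathcal{B}(Z \to \ell^{+}\ell^{-})\big]^{2}
    &\approx 0.026 \times (0.067)^{2} \approx 1.2\times 10^{-4},
\end{align}

with ℓ = e, μ in the last line. Only about one Higgs boson in ten thousand therefore yields a four-lepton event, which is why this otherwise very clean channel is statistics-limited.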

The measurable final states of the Higgs boson decays were further subdivided into different topologies, so that optimized cuts could be used to improve the signal-to-background ratio for each topology (for example, in ATLAS the diphoton channel was divided into 12 topologies). The extracted signal was further weighted by the expectations for the SM Higgs boson in those topologies. In this sense, the Higgs boson that was discovered in 2012 was indeed the Standard Model Higgs boson. However, as will be discussed in Chapter 9, detailed studies have since determined the properties of the new particle to be consistent with this assumption.

1.1.2.2 The triumph of the Gauge Principle

The discovery of the Higgs boson by the ATLAS and CMS collaborations, reported in July 2012 and published in [15, 368], is undoubtedly the crowning achievement of the LHC endeavour so far. It is hard to overstate the importance of this discovery for the field of particle physics and beyond.

The Higgs boson is the only fundamental scalar particle ever found, which in itself makes it unique; all other scalars found up to now have been bound states, and the other fundamental particles found so far have all been either spin-1/2 fermions or spin-1 vector bosons. This discovery is even more significant as it marks a triumph of the human mind: the Higgs boson is the predicted visible manifestation of the Brout–Englert–Higgs (BEH) mechanism [516, 601, 619–621, 675], which allows the generation of particle masses in a gauge-invariant way [580, 835, 888]. Ultimately, this discovery proves the paradigm of gauge invariance as the governing principle of the sub-nuclear world at the smallest distances and largest energies tested in a laboratory so far. With this discovery, a 50-year-old prediction concerning the character of nature has been proven.

The question now is not whether the Higgs boson exists, but what its properties are. Is the Higgs boson perhaps a portal to new phenomena, new particles, or even new dynamics? There are some hints from theory and cosmology that the discovery of the Higgs boson is not the final leg of the journey.

1.1.2.3 Beyond the Standard Model

With the discovery of its last missing particle, the SM, the most accurate and precise theory of nature at the sub-nuclear level ever constructed, is now complete, and the paradigms by which it was constructed have proved overwhelmingly successful. Despite this, there are still fundamental questions left unanswered. These questions go beyond the realm of the SM, but they remain of utmost importance for an even deeper understanding of the world around us.

Observations of matter — Earth, other planets in the Solar System or beyond, other stars, or galaxies — suggest that the symmetry between matter and anti-matter is broken: this is a universe filled with matter and practically devoid of anti-matter. While naively there is no obvious reason why one should be preferred over the other, at some point in the history of the Universe — presumably very early — this asymmetry had to emerge from what is believed to have been a symmetric initial state. For this to happen, a set of conditions, the famous Sakharov conditions [710, 834], had to be met. One of these conditions is CP violation: the symmetry under the combined charge-conjugation and parity (CP) transformation must be broken. Experimentally, the existence of CP violation has been confirmed, and it is tightly related to the existence of at least three generations of matter fields in the SM. Through the BEH mechanism, particles acquire masses, and their mass and electroweak-interaction eigenstates are no longer aligned after electroweak symmetry breaking (EWSB). The existence of a complex phase in the CKM matrix, which parametrizes the relation between these two sets of eigenstates, ultimately triggers CP violation in the quark sector. However, the amount of CP violation established experimentally is substantially smaller than necessary to explain how the universe evolved from an initially symmetric configuration to the matter-dominated configuration seen today [358].
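A convention-independent measure of this effect, added here as a reminder rather than taken from the text above, is the Jarlskog invariant of the CKM matrix,

\begin{equation}
  J \;=\; \mathrm{Im}\!\left(V_{us}\, V_{cb}\, V_{ub}^{*}\, V_{cs}^{*}\right) \;\approx\; 3\times 10^{-5},
\end{equation}

which vanishes unless all three generations mix non-trivially. Its smallness quantifies the statement above: CP violation in the quark sector falls far short of what baryogenesis requires.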

Likewise, the existence of dark matter (DM) is now well established, first evidenced by the rotation curves of galaxies [831]. DM denotes matter which interacts only very weakly with normal matter (described by the SM) and therefore certainly does not interact through electromagnetism or the strong nuclear force. Despite numerous attempts, it has not been directly detected. DM interacts through gravity and has thereby influenced the formation of large-scale structures in the Universe. Cosmological precision measurements by the WMAP and PLANCK collaborations [125, 623, 862] conclude that dark matter provides about 80% of the total matter content of the Universe. This in turn contributes about 25% of the overall energy balance, with the rest of the energy content of the Universe provided by what is known as dark energy (DE), which is even more mysterious than DM. The only thing known is that the interplay of DM and DE has been crucial in shaping the Universe as observed today and will continue to determine its future. One possible avenue in searches for DM particles at collider experiments is that they have no coupling to ordinary matter through gauge interactions but instead couple through the Higgs boson.
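As a rough bookkeeping exercise (the density parameters below are approximate PLANCK-era values, quoted only for orientation),

\begin{equation}
  \Omega_{\Lambda} \approx 0.69, \qquad \Omega_{m} \approx 0.31, \qquad \Omega_{b} \approx 0.05,
  \qquad \Omega_{\mathrm{DM}} = \Omega_{m} - \Omega_{b} \approx 0.26,
\end{equation}

so that dark matter accounts for roughly $\Omega_{\mathrm{DM}}/\Omega_{m} \approx 84\%$ of the matter content and about a quarter of the total energy budget, consistent with the numbers quoted above.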

These examples indicate that the SM, as beautiful as it is, will definitely not provide the ultimate answer to the questions concerning the fundamental building blocks of the world around us and how they interact at the shortest distances. The SM will have to be extended by a theory encompassing at least enhanced CP violation, dark matter, and dark energy. Any such extension is already severely constrained by the overwhelming success of the gauge principle: the gauge sector of the SM has been scrutinized to incredibly high precision, passing every test up to now with flying colours. See for example [179] for a recent review, combining data from e⁺e⁻ and hadron collider experiments. The Higgs boson has been found only recently, and it is evident that this discovery and its implications will continue to shape our understanding of the micro-world around us. The discovery itself, and even more so the mass of the new particle and our first, imprecise measurements of its properties, already rule out or place severe constraints on many new physics models going beyond the well-established SM [515].

Right now, we are merely at the beginning of an extensive programme of precision tests in the Higgs sector of the SM or the theory that may reveal itself beyond it. It can be anticipated that at the end of the LHC era, either the SM will have prevailed completely, with new physics effects and their manifestation as new particles possibly beyond direct human reach, or alternatively, we will have forged a new, even more beautiful model of particle physics.

1.1.3 LHC: Accelerator and detectors

1.1.3.1 LHC, the machine

The LHC is not only the world's largest particle accelerator but also the world's largest machine, at 27 km in circumference. The LHC is a proton-proton collider (although it also operates with collisions of protons on nuclei, and nuclei on nuclei), located approximately 100 m underground and straddling the border between France and Switzerland. The LHC occupies the tunnel formerly used for the LEP accelerator, in which electrons and positrons collided at centre-of-mass energies up to 209 GeV. The LHC contains 9593 magnets, including 1232 superconducting dipole magnets capable of producing magnetic fields of up to 8.3 T and a maximum proton beam energy of 7 TeV (trillion electron-volts), leading to a maximum collision energy of 14 TeV. Thus far, the LHC has run at collision energies of 7 TeV (2010, 2011), 8 TeV (2012) and 13 TeV (2015, 2016), greatly exceeding the previous record of the Fermilab TEVATRON of 1.96 TeV.¹ The large radius of the LHC is dictated by the desire to reach as high a beam energy as possible (7 TeV) using dipoles with the largest magnetic fields achievable in an accelerator. Running at full energy, the energy consumption (including the experiments) is about 750 GWh per year. At full intensity, the LHC collides 2808 proton bunches, each approximately 30 cm long and 16 microns in diameter and containing 1.15 × 10¹¹ protons, leading to a luminosity of 10³⁴ cm⁻² s⁻¹ and a billion proton-proton collisions per second. The spacing between the bunches is 25 ns, so that collisions occur every 25 ns; at full luminosity there are therefore on average 25 interactions every beam crossing, most of which are relatively uninteresting. The high luminosity of the machine is needed to produce events from processes with small cross-sections, for example involving physics at the TeV scale.
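These numbers hang together in a simple back-of-the-envelope estimate; the inelastic cross-section of roughly 80 mb and the 3564 bunch slots of the 25 ns filling pattern are assumed round numbers:

\begin{align}
  R_{\mathrm{inel}} &= \sigma_{\mathrm{inel}}\,\mathcal{L}
     \approx \left(8\times 10^{-26}\ \mathrm{cm}^{2}\right) \times
             \left(10^{34}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}\right)
     = 8\times 10^{8}\ \mathrm{s}^{-1}, \\
  f_{\mathrm{cross}} &\approx \frac{2808}{3564}\times 40\ \mathrm{MHz}
     \approx 3.2\times 10^{7}\ \mathrm{s}^{-1}, \\
  \langle\mu\rangle &= R_{\mathrm{inel}}/f_{\mathrm{cross}} \approx 25,
\end{align}

reproducing both the billion collisions per second and the average of 25 interactions per crossing quoted above.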

There are seven experiments running at the LHC (ATLAS, CMS, LHCb, ALICE, TOTEM, LHCf and MoEDAL), with ATLAS and CMS being the two general-purpose detectors. A schematic drawing of the LHC, indicating the position of the four larger experiments, is shown in Fig. 1.1.


Fig. 1.1 A 3D layout of the LHC, showing the location of the four major experiments.

Reprinted with permission from CERN.

1.1.3.2 The detectors

It seems paradoxical that the largest devices are needed to probe the smallest distance scales. The ATLAS detector, for example, is 46 m long, 25 m in diameter and weighs 7000 tonnes. The CMS detector, although smaller than ATLAS at 15 m in diameter and 21.5 m in length, is twice as massive, at 14,000 tonnes. This can be compared to the CDF detector at the TEVATRON, which was only 12 m × 12 m × 12 m (and 5000 tonnes). The key to the size and complexity of the LHC detectors is the need to measure the four-vectors of the large number of particles present in LHC events, whose momenta can extend to the TeV range. The large particle multiplicity requires very fine segmentation; the ATLAS detector, for example, has 160 million channels to read out, half of which are in the pixel detector. The large energies and momenta require, in addition to fine segmentation, large magnetic fields and tracking volumes and thick calorimetry.

Both ATLAS and CMS are what are known as general-purpose 4π detectors, meaning that they attempt to cover as much of the solid angle around the collision point as possible, in order to reconstruct as much information about each event as possible.² There is a universal cylindrically symmetric configuration for a 4π detector, embodied, for example, in the ATLAS detector, as shown in Fig. 1.2. Collisions take place in the centre of the detector. Particles produced in each collision first encounter the pixel detector (6) and the silicon tracking detector (5). The first layer of the pixel detector is actually mounted on the beam-pipe in order to be as close to the interaction point as possible. The beam-pipe itself, in the interaction region, is composed of beryllium in order to present as little material as possible to the particles produced in the collision. The proximity of the pixel and silicon detectors to the collision point and their very fine segmentation (50 × 400 μm for the pixel detector and 70 μm for the silicon detector) allow for the reconstruction of secondary vertices from bottom and charm particles, which can travel distances of a few mm from the interaction point before decaying. The next tracking device (4), the transition radiation detector, is a straw-tube detector that provides information not only on the trajectory of the charged particle but also on the likelihood of the particle being an electron. All three tracking devices sit inside the central magnetic field of 2 T produced by the solenoid (3).
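The few-mm scale follows from a simple kinematic estimate; the 50 GeV momentum chosen below is an arbitrary illustrative value, with cτ ≈ 0.5 mm typical of B hadrons:

\begin{equation}
  L \;=\; \beta\gamma\, c\tau \;\approx\; \frac{p}{m}\, c\tau
    \;\approx\; \frac{50\ \mathrm{GeV}}{5.3\ \mathrm{GeV}} \times 0.5\ \mathrm{mm}
    \;\approx\; 5\ \mathrm{mm},
\end{equation}

a displacement comfortably resolvable with the 50 μm-scale pixel pitch quoted above.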


Fig. 1.2 A layout of the ATLAS detector, showing the major detector components, from en.wikipedia.org/wiki/ATLAS_experiment. Original image from CERN.

Reprinted with permission from CERN.

The energies of the particles produced in the collision (both neutral and charged) are measured by the ATLAS calorimeters, the lead-liquid argon electromagnetic calorimeter (7) and the iron-scintillator hadronic calorimeter (Tilecal) (8). Both the ATLAS and CMS electromagnetic calorimeter designs emphasized good resolution for the measurement of the energies of photons and electrons, primarily to be able to distinguish the H → γγ signal from the much larger diphoton background. The width of a light Higgs boson is much less than the experimental resolution, so any improvement in the resolution leads to better discrimination of the signal over the background.
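The gain can be made semi-quantitative with a standard counting argument, not specific to either experiment: for a narrow resonance the background under the peak grows with the size of the mass window, which itself tracks the mass resolution σ_m, so the expected significance scales as

\begin{equation}
  \frac{S}{\sqrt{B}} \;\propto\; \frac{1}{\sqrt{\sigma_{m}}}\,,
\end{equation}

i.e. halving the diphoton mass resolution improves the expected significance by roughly a factor of √2.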

Energetic muons can pass through the calorimetry, while other particles are absorbed. The toroidal magnets (2), in both the central and forward regions, produce an additional magnetic field (4 T) in which a second measurement of the muon momentum can be carried out with the muon tracking chambers (1), which employ several different technologies. One of the unique characteristics of the ATLAS detector (and part of its acronym) is the presence of the air-core toroidal muon system. The relatively small amount of material in the tracking volume leads to less multiple scattering and thus a more precise measurement of the muon's momentum. The muon momentum can be measured to a precision of 10% at a transverse momentum of 1 TeV.
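The difficulty of this measurement is conveyed by the textbook sagitta formula (with B in tesla, L in metres and p_T in GeV, the sagitta s comes out in metres); the field and lever-arm values below are round illustrative numbers, not the actual ATLAS field map:

\begin{equation}
  s \;=\; \frac{0.3\, B\, L^{2}}{8\, p_T}
    \;\approx\; \frac{0.3 \times 0.5 \times 5^{2}}{8 \times 1000}\ \mathrm{m}
    \;\approx\; 0.5\ \mathrm{mm},
\end{equation}

so a 10% measurement at p_T = 1 TeV requires controlling the track sagitta, and hence the chamber alignment, at the level of a few tens of microns over metres of lever arm.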

1.1.3.3 Challenges

To use a popular analogy, sampling the physics at the LHC is similar to trying to drink from a fire hose. Over a billion proton-proton collisions occur each second, but the limit of practical data storage is on the order of only hundreds of events per second. Thus, the experimental triggers have to provide a reduction by a factor of the order of 10⁷, while still recording bread-and-butter signatures such as W and Z boson production. This requires a high level of sophistication for the on-detector hardware triggers and access to large computing resources for the higher-level triggering. Timing is also an important issue. The ATLAS detector is 25 m in diameter. With a bunch-crossing time of 25 ns, this means that as new interactions are occurring in one bunch crossing, the particles from the previous bunch crossing are still passing through the detector. Each crossing produces on average 25 interactions. Experimental analyses thus face both in-time pileup and out-of-time pileup. The latter can be largely controlled through the readout electronics (modulo substantial variations in the population of the individual bunches), while the former requires sophisticated treatment in the physics analyses.
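The timing statement follows from time-of-flight alone; the only assumption is that the particles travel at essentially the speed of light:

\begin{equation}
  t \;=\; \frac{25\ \mathrm{m}}{c}
    \;\approx\; \frac{25\ \mathrm{m}}{3\times 10^{8}\ \mathrm{m/s}}
    \;\approx\; 83\ \mathrm{ns} \;\gg\; 25\ \mathrm{ns},
\end{equation}

so particles from three or four earlier bunch crossings are still traversing the detector when the next collision occurs.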

The dynamic ranges at the LHC are larger than at the TEVATRON. Leptons from W boson decays, with momenta on the order of tens of GeV, are still important, but so are multi-TeV leptons. Precise calibration and the maintenance of linearity are both crucial. To some extent, the TEVATRON served as a boot camp, providing a learning experience for physics at the LHC, albeit at lower energies and intensities. Coming later, the LHC has benefited from advances in electronics, in computing and, perhaps most importantly, in physics analysis tools. The latter comprise both tools for theoretical predictions at higher orders in perturbative QCD and tools for the simulation of LHC final states.

Despite the difficulties, the LHC has had great success during its initial running, culminating in the discovery of the Higgs boson, but, alas, not in the discovery of new physics. The results obtained so far comprise a small fraction of the total data taking planned for the LHC. New physics may be found with this much larger data sample, but discovering it may require precise knowledge of SM physics, including QCD.

1.2 About this book

The reader is assumed to be already familiar with textbook methods for the calculation of simple Feynman diagrams at tree level, the evaluation of cross-sections through phase-space integration with analytic methods, and the ideas underlying the regularization and renormalization of ultraviolet-divergent theories; for a short review, readers are referred to Appendix B.1, and for a more pedagogical introduction to these issues to a wealth of outstanding textbooks at various levels, including the books by Peskin and Schroeder [803], Halzen and Martin [606], Ramond [822], Field [525] and others. For a review of QCD at collider experiments, the reader is referred to the excellent books by Ellis, Stirling, and Webber [504] and by Dissertori, Knowles, and Schmelling [467]. Of course, for a real understanding of various aspects it is hard to beat the original literature, and readers are encouraged to use the references in this book as a starting point for their journey through particle physics.

This book aims to provide an intuitive approach to applying the framework of perturbation theory, in the context of the strong interaction, to predictions for the LHC and ultimately to an understanding of the signals and backgrounds there. Thus, even without the background discussed at the beginning of this section, this book should be useful for anyone wishing for a better understanding of QCD at the LHC.

The ideas for this book have been developed over various lecture series given by the authors, at graduate level and at advanced schools on high-energy physics. The authors hope that this book proves useful in supporting the self-study of young researchers in particle physics at the beginning of their careers, as well as serving more advanced researchers as a resource for their research and as material for a graduate course on high-energy physics.

1.2.1 Contents

Chapter 2 provides a first overview of the content of this book and aims at putting various techniques and ideas into a coherent perspective. First of all, a physical picture underlying hadronic interactions, and especially scattering reactions at hadron colliders, is developed. To arrive at this picture, the ideas underlying the all-important factorization formalism are introduced, which in the end allow the use of perturbative concepts in the discussion of the strong interaction at high energies and in the calculation of cross-sections and other related observables. These concepts are then applied to a specific example, namely the inclusive production of W bosons at hadron colliders, whose production cross-section is calculated at leading and at next-to-leading order in the strong coupling constant, thereby reminding the reader of the ingredients of such calculations and fixing the notation and conventions used in this book. This part also includes a first discussion of observables relevant for the phenomenology of strong interactions at hadron colliders. In addition, some generic features and issues related to such fixed-order calculations are sketched. In a second part, the perturbative concepts already employed in the fixed-order calculations are extended to include dominant terms to all orders through the resummation formalism. Generic features of analytical resummation are introduced there, and some first practical applications to W production at hadron colliders are briefly discussed. As a somewhat alternative use of resummation techniques, jet production in electron–positron annihilation and in hadronic collisions is also discussed and, especially in the latter, some characteristic patterns are developed.

The next chapter, Chapter 3, is fairly technical, as it comprises a presentation of most of the sometimes fairly sophisticated technology that is used to evaluate cross-sections at leading and next-to-leading order in the perturbative expansion of QCD. It also includes a brief discussion of emerging techniques for even higher-order corrections in QCD. In addition, the interplay between QCD and electroweak corrections is touched upon in this chapter. Starting with a discussion of generic features, such as a meaningful definition of perturbative orders for various calculations, the corresponding technology is introduced, representing the current state of the art. As simple illustrative examples of the methods employed in such calculations, inclusive W boson production and W production in association with a jet are again employed. The calculations are worked out in some detail at both leading and next-to-leading order in the perturbative expansion in the strong coupling.

The overall picture of, and phenomena encountered in, hadron–hadron collisions, developed in Chapter 2, is discussed in the context of specific processes in Chapter 4. The processes discussed here range from the commonplace (e.g. jet production) to some of the rarest (e.g. production of Higgs bosons). In each case the underlying theoretical description of the process is presented, typically at next-to-leading order precision. Special emphasis is placed on highlighting phenomenologically relevant observables and issues that arise in the theoretical calculations. The chapter closes with a summary of what is achievable with current technology and an outlook on what may become important and relevant during the future lifetime of the LHC experiments.

Following the logic outlined in Chapter 2, in Chapter 5 the discussion of fixed-order technology is extended to the resummation of dominant terms, connected to large logarithms, to all orders. After reviewing in more detail standard analytic resummation techniques, and discussing their systematic improvement to greater precision by the inclusion of higher-order terms, the connection to other schemes is highlighted. In the second part of this chapter, numerical resummation as encoded in parton showers is discussed in some detail. The physical picture underlying their construction is introduced, some straightforward improvements obtained by including generic higher-order terms are presented, and different implementations are discussed. Since parton showers are at the heart of modern event simulation, bridging the gap between fixed-order perturbation theory at high scales and phenomenological models for hadronization and the like at low scales, their improvement has been a focus of active research in the past decade. Therefore, some space is devoted to the discussion of how the simple parton-shower picture is systematically augmented with fixed-order precision from the corresponding matrix elements in several schemes.

In Chapter 6, an important ingredient for the success of the factorization formalism underlying the perturbative results of the previous two chapters is discussed in more detail, namely the parton distribution functions. Having been briefly introduced, mostly at leading order, in Chapter 2, together with some simple properties, in this chapter the focus shifts to their scaling behaviour at various orders and to how this can be employed to extract them from experimental data. Various collaborations perform such fits with slightly different methodologies and slightly different biases in how data are selected and treated, leading to a variety of different resulting parton distributions. These are compared for some standard candles in this chapter as well, with a special emphasis on how the intrinsic uncertainties in the experimental data and in the more theoretical fitting procedure translate into systematic errors.

The tour of ingredients for a complete picture of hadronic interactions concludes in Chapter 7, where different non-perturbative aspects are discussed. Most of the ideas to address them are fairly qualitative and can only be embedded in phenomenological models. Therefore, rather than presenting in detail all developments in this field, the book focuses more on generic features and basic strategies underlying their treatment in different contexts. Issues discussed there include hadronization, i.e. the transition from the partons of the perturbation theory of the strong interaction, quarks and gluons, to the experimentally observable hadrons and their decays into stable ones; the underlying event, which is due to further, softer interactions between the hadronic structures of the incident particles; and the connection of these to very inclusive observables such as total and elastic cross-sections.

In Chapters 8 and 9, theoretical results from analytic calculations and simulation tools are compared with a host of experimental data. Chapter 8 focuses on data especially from the TEVATRON,³ where the foundations of our current understanding of the SM, and in particular of the dynamics of the strong interaction, were shaped. In Chapter 9 the most sophisticated calculations and simulations are compared with the most recent, most precise and most challenging data so far, taken at the LHC during Run I. This comparison ranges from inclusive particle production through event-shape observables to data testing the dynamics of the SM — and potentially beyond — over scales spanning two orders of magnitude in the same process. This is the most challenging test of our understanding of nature at its most fundamental level ever performed. It is fair to state that while our most up-to-date tools, analytical calculations and simulations alike, fare amazingly well in this comparison, some first cracks are showing that will motivate the community to push even further in the years to come.

1.2.2 A user’s guide

This book is meant to provide PhD students in experimental particle physics working at the LHC who have a keen interest in theoretical issues, as well as PhD students working in particle theory with an emphasis on phenomenology at colliders, a starting point for their research. It is meant to introduce and expose the reader to all relevant concepts in current collider phenomenology, introduce and explain the technology that by now is routinely used in the perturbative treatment of the strong interaction, and provide an integrated perspective on the results of such calculations and simulations and the corresponding data.

The book consists of three parts. The first part is an overview of the relevant terminology and technology, worked out through one standard example and providing a coherent perspective on hadronic interactions at high energies. Readers, and teachers using this book for lectures, are invited to study Chapter 2 first before embarking on a more in-depth discussion of various theoretical or experimental aspects. The second part, Chapters 3–7, consists of a more detailed discussion of various aspects of the perturbative treatment of the strong interaction in hadronic reactions. While these chapters frequently refer back to the overview chapter, Chapter 2, they are fairly independent of each other and could in principle be read in any sequence the reader or teacher finds most beneficial. The third part, Chapters 8 and 9, where core experimental findings are confronted with theoretical predictions, is again independent of the second part, although for a better understanding of theoretical subtleties it may be advantageous to be acquainted with certain aspects discussed there.

Finally, a list of updates, clarifications and corrections to this book is maintained at the following website:

http://www.ippp.dur.ac.uk/BlackBook

Notes:

(1) Unlike the LHC, the TEVATRON was a proton-antiproton collider.

(2) The main limitation for the solid-angle coverage is in the forward/backward directions, where the instrumentation is cut off by the presence of the beam pipe.

(3) Experiences from LEP and HERA have also been important but are not included due to space limitations.