
Today, Games User Research (GUR) forms an integral component of the development of any kind of interactive entertainment. User research stands as the primary source of business intelligence in the incredibly competitive game industry. This book aims to provide a foundational, accessible, go-to resource for people interested in GUR. It is a community-driven effort, written by passionate professionals and researchers in the GUR community as a handbook and guide for everyone interested in user research and games. The book bridges current gaps in knowledge in Games User Research, building the go-to volume for everyone working with games, with an emphasis on those new to the field.

Inductive logic (also known as confirmation theory) seeks to determine the extent to which the premisses of an argument entail its conclusion. This book offers an introduction to the field of inductive logic and develops a new Bayesian inductive logic. Chapter 1 introduces perhaps the simplest and most natural account of inductive logic, classical inductive logic, which is attributable to Ludwig Wittgenstein. Classical inductive logic is seen to fail in a crucial way, so there is a need to develop more sophisticated inductive logics. Chapter 2 presents enough logic and probability theory for the reader to begin to study inductive logic, while Chapter 3 introduces the ways in which logic and probability can be combined in an inductive logic. Chapter 4 analyses the most influential approach to inductive logic, due to W.E. Johnson and Rudolf Carnap. Again, this logic is seen to be inadequate. Chapter 5 shows how an alternative approach to inductive logic follows naturally from the philosophical theory of objective Bayesian epistemology. This approach preserves the inferences that classical inductive logic gets right (Chapter 6). On the other hand, it also offers a way out of the problems that beset classical inductive logic (Chapter 7). Chapter 8 defends the approach by tackling several key criticisms that are often levelled at inductive logic. Chapter 9 presents a formal justification of the version of objective Bayesianism which underpins the approach. Chapter 10 explains what has been achieved and poses some open questions.
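The classical picture sketched above lends itself to a tiny computational illustration: treat each truth assignment as equally probable and measure partial entailment as the proportion of models of the premisses in which the conclusion also holds. This is a sketch of the underlying idea only; the function name and encoding below are mine, not the book's:

```python
from fractions import Fraction
from itertools import product

def partial_entailment(atoms, premisses, conclusion):
    """Degree to which the premisses entail the conclusion, computed as the
    proportion of truth assignments satisfying the premisses that also
    satisfy the conclusion (uniform measure over assignments)."""
    models = [dict(zip(atoms, vals))
              for vals in product([True, False], repeat=len(atoms))]
    sat = [m for m in models if all(p(m) for p in premisses)]
    if not sat:
        return None  # premisses unsatisfiable: degree undefined
    return Fraction(sum(1 for m in sat if conclusion(m)), len(sat))

# The premiss "p or q" partially entails "p": of the three assignments
# satisfying "p or q", two satisfy "p", giving degree 2/3.
deg = partial_entailment(["p", "q"],
                         [lambda m: m["p"] or m["q"]],
                         lambda m: m["p"])
```

Classical deductive entailment appears as the special case where the degree is 1, which is one reason this account looks natural before its deeper problems emerge.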

Cryptography is a vital technology that underpins the security of information in computer networks. This book presents an introduction to the role that cryptography plays in providing information security for technologies such as the Internet, mobile phones, payment cards, and wireless local area networks. Focusing on the fundamental principles that ground modern cryptography as they arise in everyday applications, it avoids both an over-reliance on transient current technologies and overly theoretical research. A short appendix is included for those looking for a deeper appreciation of some of the concepts involved. By the end of this book, the reader will not only be able to understand the practical issues concerned with the deployment of cryptographic mechanisms, including the management of cryptographic keys, but will also be able to interpret future developments in this increasingly important area of technology.

Spectral methods have long been popular in direct and large eddy simulation of turbulent flows, but their use in areas with complex-geometry computational domains has historically been much more limited. More recently, the need to find accurate solutions to the viscous flow equations around complex configurations has led to the development of high-order discretization procedures on unstructured meshes, which are also recognized as more efficient for computing time-dependent oscillatory solutions over long time periods. This book, an updated edition of the original text, presents the recent and significant progress in multi-domain spectral methods at both the fundamental and application level. Containing material on discontinuous Galerkin methods, non-tensorial nodal spectral element methods in simplex domains, and stabilization and filtering techniques, this text introduces the use of spectral/hp element methods with particular emphasis on their application to unstructured meshes. It provides a detailed explanation of the key concepts underlying the methods, along with practical examples of their derivation and application.

This book addresses a basic question in differential geometry that was first considered by physicists Stanley Deser and Adam Schwimmer in 1993 in their study of conformal anomalies. The question concerns conformally invariant functionals on the space of Riemannian metrics over a given manifold. These functionals act on a metric by first constructing a Riemannian scalar out of it, and then integrating this scalar over the manifold. Suppose this integral remains invariant under conformal rescalings of the underlying metric. What information can one then deduce about the Riemannian scalar? The conjecture asserts that the Riemannian scalar must be a linear combination of three obvious candidates, each of which clearly satisfies the required property: a local conformal invariant, a divergence of a Riemannian vector field, and the Chern–Gauss–Bonnet integrand. The book provides a proof of this conjecture. The result itself sheds light on the algebraic structure of conformal anomalies, which appear in many settings in theoretical physics. It also clarifies the geometric significance of the renormalized volume of asymptotically hyperbolic Einstein manifolds. The methods introduced here make an interesting connection between algebraic properties of local invariants—such as the classical Riemannian invariants and the more recently studied conformal invariants—and the study of global invariants, in this case conformally invariant integrals.
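In schematic notation (assumed here for orientation, not quoted from the book), the conjectured decomposition reads:

```latex
\int_M P(g)\, dV_g \ \text{conformally invariant}
\quad\Longrightarrow\quad
P(g) \;=\; W(g) \;+\; \operatorname{div}_i T^i(g) \;+\; c\,\operatorname{Pfaff}(R_g),
```

where $W(g)$ is a local conformal invariant, $T^i(g)$ is a natural Riemannian vector field (so its divergence integrates to zero on a closed manifold), $\operatorname{Pfaff}(R_g)$ is the Chern–Gauss–Bonnet (Pfaffian) integrand, and $c$ is a constant.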

The Error of Truth recounts the astonishing and unexpected tale of how quantitative thinking was invented and rose to primacy in our lives in the nineteenth and early twentieth centuries, bringing us to an entirely new perspective on what we know about the world and how we know it—even on what we each think about ourselves. Quantitative thinking is our inclination to view natural and everyday phenomena through a lens of measurable events, with forecasts, odds, predictions, and likelihood playing a dominant part. This worldview, or Weltanschauung, is unlike anything humankind had before, and it came about because of a momentous human achievement: namely, we had learned how to measure uncertainty. Probability as a science had been invented. Through probability theory, we now had correlations, reliable predictions, regressions, the bell-shaped curve for studying social phenomena, and the psychometrics of educational testing. Significantly, these developments in mathematics happened during a relatively short period in world history: roughly, the 130-year period from 1790 to 1920, from about the close of the Napoleonic era, through the Enlightenment and the Industrial Revolutions, to the end of World War I. Quantification is now everywhere in our daily lives, such as in the ubiquitous microchip in smartphones, cars, and appliances, in the Bayesian logic of artificial intelligence, and in applications in business, engineering, medicine, economics, and elsewhere. Probability is the foundation of our quantitative thinking. Here we see its story: when, why, and how it came to be and changed us forever.

This text provides an introduction to the theoretical, practical, and numerical aspects of image registration, with special emphasis on medical imaging. Given a so-called reference and template image, the goal of image registration is to find a reasonable transformation such that the transformed template is similar to the reference image. Image registration is utilized whenever information obtained from different viewpoints, times, and sensors needs to be combined or compared, and unwanted distortion needs to be eliminated. The book provides a systematic introduction to image registration and discusses the basic mathematical principles, including aspects of approximation theory, image processing, numerics, optimization, partial differential equations, and statistics, with a strong focus on numerical methods. A unified variational approach is introduced that enables a separation into data-related issues, such as image-feature- or image-intensity-based similarity measures, and problem-inherent regularization, such as elastic or diffusion registration. This general framework is further used for the explanation and classification of established methods as well as for the design of new schemes and building blocks, including landmark, thin-plate-spline, mutual-information, elastic, fluid, demon, diffusion, and curvature registration.
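The unified variational approach can be summarized in a single formula. The notation below ($\mathcal{D}$ for the distance measure, $\mathcal{S}$ for the regularizer, $\alpha$ for the weighting) is a common convention assumed here for illustration:

```latex
\min_{y}\ \mathcal{J}[y] \;=\; \mathcal{D}[R,\, T \circ y] \;+\; \alpha\, \mathcal{S}[y],
\qquad \alpha > 0,
```

where $R$ is the reference image, $T$ the template, and $y$ the sought transformation. Choosing $\mathcal{D}$ (for instance, the sum of squared intensity differences, or mutual information) fixes the data-related part, while choosing $\mathcal{S}$ (for instance, an elastic or diffusion regularizer) fixes the problem-inherent part.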

This book is an introduction to the model-based approach to survey sampling. It consists of three parts, with Part I focusing on estimation of population totals. Chapters 1 and 2 introduce survey sampling and the model-based approach, respectively. Chapter 3 considers the simplest possible model, the homogeneous population model, which is then extended to stratified populations in Chapter 4. Chapter 5 discusses simple linear regression models for populations, and Chapter 6 considers clustered populations. The general linear population model is then used to integrate these results in Chapter 7. Part II of this book considers the properties of estimators based on incorrectly specified models. Chapter 8 develops robust sample designs that lead to unbiased predictors under model misspecification, and shows how flexible modelling methods like nonparametric regression can be used in survey sampling. Chapter 9 extends this development to misspecification-robust prediction variance estimators, and Chapter 10 completes Part II of the book with an exploration of outlier-robust sample survey estimation. Chapters 11 to 17 constitute Part III of the book and show how model-based methods can be used in a variety of problem areas of modern survey sampling. They cover (in order) prediction of nonlinear population quantities, subsampling approaches to prediction variance estimation, design and estimation for multipurpose surveys, prediction for domains, small area estimation, efficient prediction of population distribution functions, and the use of transformations in survey inference. The book is designed to be accessible to undergraduate and graduate students with a good grounding in statistics, and to applied survey statisticians seeking an introduction to model-based survey design and estimation.
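Under a homogeneous population model, the standard model-based predictor of a population total adds the observed sample sum to the sample mean imputed for each unsampled unit (equivalently, the expansion estimator $N\bar{y}$). The sketch below assumes that standard predictor; the function name is illustrative, not taken from the book:

```python
def predicted_total(sample, population_size):
    """Model-based prediction of a population total under a homogeneous
    population model: the observed sum plus the sample mean imputed for
    each of the N - n unsampled units (equivalently N * ybar)."""
    n = len(sample)
    ybar = sum(sample) / n
    return sum(sample) + (population_size - n) * ybar

# A sample of 3 units drawn from a population of 10 units:
t_hat = predicted_total([1.0, 2.0, 3.0], population_size=10)  # 6 + 7*2 = 20
```

The "prediction" framing is the point of the model-based approach: the unsampled units are treated as random variables to be predicted from the model, rather than the sample inclusion probabilities being the sole source of inference.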

A central concern of number theory is the study of local-to-global principles, which describe the behavior of a global field K in terms of the behavior of various completions of K. This book looks at a specific example of a local-to-global principle: Weil's conjecture on the Tamagawa number of a semisimple algebraic group G over K. In the case where K is the function field of an algebraic curve X, this conjecture counts the number of G-bundles on X (global information) in terms of the reduction of G at the points of X (local information). The goal of this book is to give a conceptual proof of Weil's conjecture, based on the geometry of the moduli stack of G-bundles. Inspired by ideas from algebraic topology, it introduces a theory of factorization homology in the setting of ℓ-adic sheaves. Using this theory, the authors articulate a different local-to-global principle: a product formula that expresses the cohomology of the moduli stack of G-bundles (a global object) as a tensor product of local factors. Using a version of the Grothendieck–Lefschetz trace formula, the book shows that this product formula implies Weil's conjecture. The proof of the product formula will appear in a sequel volume.

The last 25 years have seen a small revolution in our approach to the understanding of new technology and information systems. It has become a founding assumption of computer-supported cooperative work and human–computer interaction that in the future, if not already, most computer applications will be socially embedded in the sense that they will become infrastructures (in some sense) for the development of the social practices which they are designed to support. Assuming that IT artifacts have to be understood in this sociotechnical way, traditional criteria for good design in computer science, such as performance, reliability, stability or usability, arguably need to be supplemented by methods and perspectives which illuminate the way in which technology and social practice are mutually elaborating. This book concerns the philosophy, conceptual apparatus, and methodological concerns which will inform the development of a systematic and long-term human-centered approach to the IT-product life cycle, addressing issues concerned with appropriation and infrastructuring. This entails an orientation to “practice-based computing.” The book contains a number of chapters which examine both the conceptual foundations of such an approach, and a number of empirical case studies that exemplify it.

Proving in the Elementary Mathematics Classroom addresses a fundamental problem in children’s learning that has received relatively little research attention: Although proving and related concepts (e.g., proof, argumentation, conjecturing) are core to mathematics as a sense-making activity, they currently have a marginal place in elementary classrooms internationally. This book takes a step toward addressing this problem by examining how the place of proving in elementary students’ mathematical work can be elevated through the purposeful design and implementation of mathematics tasks, specifically proving tasks. In particular, the book draws on relevant research and theory and classroom episodes with 8–9-year-olds from England and the United States to examine different kinds of proving tasks and the proving activity they can help generate in the elementary classroom. It examines further the role of elementary teachers in mediating the relationship between proving tasks and proving activity, including major mathematical and pedagogical issues that can arise for them as they implement each kind of proving task in the classroom. In addition to its research contribution in the intersection of the scholarly areas of teaching/learning proving and task design/implementation, the book has important implications for teaching, curricular resources, and teacher education. For example, the book identifies different kinds of proving tasks whose balanced representation in the mathematics classroom and in curricular resources can support a rounded set of learning experiences for elementary students related to proving. It identifies further important mathematical ideas and pedagogical practices related to proving that can be studied in teacher education.

Motivated by the theory of turbulence in fluids, the physicist and chemist Lars Onsager conjectured in 1949 that weak solutions to the incompressible Euler equations might fail to conserve energy if their spatial regularity was below 1/3-Hölder. This book uses the method of convex integration to achieve the best-known results regarding nonuniqueness of solutions and Onsager's conjecture. The book focuses on the intuition behind the method; the ideas it introduces now play a pivotal role in the ongoing study of weak solutions to fluid dynamics equations. The construction itself—an intricate algorithm with hidden symmetries—mixes together transport equations, algebra, the method of nonstationary phase, underdetermined partial differential equations (PDEs), and specially designed high-frequency waves built using nonlinear phase functions. The powerful “Main Lemma”—used here to construct nonzero solutions with compact support in time and to prove nonuniqueness of solutions to the initial value problem—has been extended to a broad range of applications that are surveyed in the appendix. Appropriate for students and researchers studying nonlinear PDEs, this book aims to be as self-contained as possible and pinpoints the main difficulties that presently stand in the way of a full solution to Onsager's conjecture.
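For orientation, Onsager's conjecture is usually stated as a dichotomy; the formulation below is the standard one from the literature, assumed here rather than quoted from the book:

```latex
u \in L^{\infty}_t C^{\theta}_x \ \text{with}\ \theta > \tfrac{1}{3}
\quad\Longrightarrow\quad
\int \tfrac{1}{2}\,|u(x,t)|^2 \, dx \ \text{is constant in time,}
```

while for every Hölder exponent $\theta < \tfrac{1}{3}$ there should exist weak solutions whose kinetic energy strictly decreases. The convex integration constructions described in this book produce solutions on the dissipative side of this threshold.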

Based on lectures given at Zhejiang University in Hangzhou, China, and at Johns Hopkins University, this book introduces eigenfunctions on Riemannian manifolds. The book gives a proof of the sharp Weyl formula for the distribution of eigenvalues of Laplace–Beltrami operators, as well as an improved version of the Weyl formula, the Duistermaat–Guillemin theorem, under natural assumptions on the geodesic flow. The book shows that there is quantum ergodicity of eigenfunctions if the geodesic flow is ergodic. It begins with a treatment of the Hadamard parametrix before proving the first main result, the sharp Weyl formula. The book avoids the use of Tauberian estimates and instead relies on sup-norm estimates for eigenfunctions. It also gives a rapid introduction to the method of stationary phase and the basics of the theory of pseudodifferential operators and microlocal analysis. These are then used to prove the Duistermaat–Guillemin theorem. Turning to the related topic of quantum ergodicity, the book demonstrates that if the geodesic flow is uniformly distributed over long times, then most eigenfunctions exhibit similar behavior, in the sense that their mass becomes equidistributed as their frequencies go to infinity.
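The sharp Weyl formula referred to above has the following standard form; the normalization (counting the eigenvalues $\lambda_j$ of $\sqrt{-\Delta_g}$) is the common convention and is assumed here:

```latex
N(\lambda) \;=\; \#\{\, j : \lambda_j \le \lambda \,\}
\;=\; (2\pi)^{-n}\, \omega_n \operatorname{Vol}_g(M)\, \lambda^{n} + O(\lambda^{n-1}),
```

where $n = \dim M$ and $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$. The Duistermaat–Guillemin theorem improves the error term to $o(\lambda^{n-1})$ when the set of periodic geodesics has measure zero, which is the "natural assumption on the geodesic flow" in question.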

This book provides an introduction to algebraic cycles on complex algebraic varieties, to the major conjectures relating them to cohomology, and, even more precisely, to Hodge structures on cohomology. The book is intended for both students and researchers, and not only presents a survey of the geometric methods developed in the last thirty years to understand the famous Bloch–Beilinson conjectures, but also examines recent work by the author. It focuses on two central objects: the diagonal of a variety—and the partial Bloch–Srinivas-type decompositions it may have depending on the size of Chow groups—as well as its small diagonal, which is the right object to consider in order to understand the ring structure on Chow groups and cohomology. An exploration of a sampling of recent works by the author looks at the relation, conjectured in general by Bloch and Beilinson, between the coniveau of general complete intersections and their Chow groups, and at a very particular property satisfied by the Chow ring of K3 surfaces and conjecturally by hyper-Kähler manifolds. In particular, the book delves into arguments originating in Nori's work that have been further developed by others.

This book is devoted to the mathematical modelling of electromagnetic materials. Electromagnetism in matter is developed with particular emphasis on material effects, which are ascribed to memory in time and nonlocality. Within the mathematical modelling, thermodynamics of continuous media plays a central role in that it places significant restrictions on the constitutive equations. Further, as shown in connection with uniqueness, existence and stability, variational settings, and wave propagation, a correct formulation of the pertinent problems is based on the knowledge of the thermodynamic restrictions for the material. The book is divided into four parts. Part I (chapters 1 to 4) reviews the basic concepts of electromagnetism, starting from the integral form of Maxwell’s equations and then directing attention to the physical motivation for materials with memory. Part II (chapters 5 to 9) deals with thermodynamics of systems with memory and applications to evolution and initial/boundary-value problems. It contains developments and results which are unusual in textbooks on electromagnetism and arise from the research literature, mainly post-1960s. Part III (chapters 10 to 12) outlines some topics of materials modelling — nonlinearity, nonlocality, superconductivity, and magnetic hysteresis — which are of great interest both in mathematics and in applications.

By studying the degeneration of abelian varieties with PEL structures, this book explains the compactifications of smooth integral models of all PEL-type Shimura varieties, providing the logical foundation for several exciting recent developments. PEL-type Shimura varieties, which are natural generalizations of modular curves, are useful for studying the arithmetic properties of automorphic forms and automorphic representations, and they have played important roles in the development of the Langlands program. As with modular curves, it is desirable to have integral models of compactifications of PEL-type Shimura varieties that can be described in sufficient detail near the boundary, which is what this book provides. Through the discussion, the book generalizes the theory of degenerations of polarized abelian varieties and the application of that theory to the construction of toroidal and minimal compactifications of Siegel moduli schemes over the integers (as developed by Mumford, Faltings, and Chai). The book is designed to be accessible to graduate students who have an understanding of schemes and abelian varieties.

This book has its origin in the need to develop and analyze mathematical models for phenomena that evolve in time and influence one another, and it aims at a better understanding of the structure and asymptotic behavior of stochastic processes. The monograph has a twofold scope: first, to present tools for dealing with dependent structures, directed toward obtaining normal approximations; second, to apply the normal approximations presented in the book to various examples. The main tools consist of inequalities for dependent sequences of random variables, leading to limit theorems, including the functional central limit theorem (CLT) and the functional moderate deviation principle (MDP). The results point out large classes of dependent random variables which satisfy invariance principles, making possible the statistical study of data coming from stochastic processes with both short and long memory. Over the course of the book, different types of dependence structures are considered, ranging from the traditional mixing structures to martingale-like structures and to weakly negatively dependent structures, which link the notion of mixing to the notions of association and negative dependence. Several applications have been carefully selected to exhibit the importance of the theoretical results. They include random walks in random scenery and determinantal processes. In addition, due to their importance in analyzing new data in economics, linear processes with dependent innovations are also considered and analyzed.

This book develops a new theory of multiparameter singular integrals associated with Carnot–Carathéodory balls. The book first details the classical theory of Calderón–Zygmund singular integrals and applications to linear partial differential equations. It then outlines the theory of multiparameter Carnot–Carathéodory geometry, where the main tool is a quantitative version of the classical theorem of Frobenius. The book then gives several examples of multiparameter singular integrals arising naturally in various problems. The final chapter of the book develops a general theory of singular integrals that generalizes and unifies these examples. This is one of the first general theories of multiparameter singular integrals that goes beyond the product theory of singular integrals and their analogs. This book will interest graduate students and researchers working in singular integrals and related fields.

George Gabriel Stokes was one of the most significant mathematicians and natural philosophers of the nineteenth century. Serving as Lucasian Professor at Cambridge, he made wide-ranging contributions to optics, fluid dynamics and mathematical analysis. As Secretary of the Royal Society, he played a major role in the direction of British science, acting as both a sounding board and a gatekeeper. Outside his own area he was a distinguished public servant and MP for Cambridge University. He was keenly interested in the relation between science and religion and wrote extensively on the matter. This edited collection of essays brings together experts in mathematics, physics and the history of science to cover the many facets of Stokes’s life in a scholarly but accessible way.

Outer billiards provides a toy model for planetary motion and exhibits intricate and mysterious behavior even for seemingly simple examples. It is a dynamical system in which a particle in the plane moves around the outside of a convex shape according to a scheme that is reminiscent of ordinary billiards. This book provides a combinatorial model for orbits of outer billiards on kites. The book relates these orbits to such topics as polytope exchange transformations, renormalization, continued fractions, corner percolation, and the Truchet tile system. The combinatorial model, called “the plaid model,” has a self-similar structure that blends geometry and elementary number theory. The results were discovered through computer experimentation, and it seems that the conclusions would be extremely difficult to reach through traditional mathematics. The book includes an extensive computer program that allows readers to explore the material interactively, and each theorem is accompanied by a computer demonstration.
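The scheme "reminiscent of ordinary billiards" can be sketched in a few lines: from a point outside a convex polygon, reflect through the tangent vertex on a chosen side of the line of sight. The code below is a minimal illustration under my own sign convention (it is not the book's program), demonstrated on a square rather than a kite:

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def outer_billiards_step(vertices, x):
    """One step of the outer billiards map about a convex polygon: reflect x
    through the tangent vertex v chosen so that every other vertex lies
    (weakly) to the left of the ray from x through v. The map is undefined
    on the singular rays where the tangent vertex is ambiguous."""
    for v in vertices:
        if all(cross(x, v, p) >= 0 for p in vertices if p != v):
            return (2 * v[0] - x[0], 2 * v[1] - x[1])
    raise ValueError("no tangent vertex found (x may be inside the polygon)")

# Orbit of the point (2, 0) about the square with vertices (+-1, +-1):
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
orbit = [(2.0, 0.0)]
for _ in range(4):
    orbit.append(outer_billiards_step(square, orbit[-1]))
# This orbit returns to its starting point after four reflections.
```

Replacing the square with a kite (a quadrilateral with one axis of symmetry) produces the far more intricate orbit structure that the plaid model is built to describe.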