## Dmitri I. Svergun, Michel H. J. Koch, Peter A. Timmins, and Roland P. May

Print publication date: 2013

Print ISBN-13: 9780199639533

Published to Oxford Scholarship Online: December 2013

DOI: 10.1093/acprof:oso/9780199639533.001.0001


# Appendix 1: Basic physics and mathematics of wave phenomena

Source: Small Angle X-Ray and Neutron Scattering from Solutions of Biological Macromolecules
Publisher: Oxford University Press

### Waves

Imagine that point P in the left panel of Fig. A1.1 moves counterclockwise on a circle with radius OA at a constant angular velocity of $-\omega$ radians per second (that is, $\omega T = 2\pi$, where T is the period and $\nu = 1/T$ the frequency). The oscillatory motion of the projection of OP on the y-axis ($\Psi(t) = OQ$), which has a maximum value or amplitude $OA = A$, is called simple harmonic motion. At any time t, $\Psi(t)$ is in general given by $\Psi(t) = A\sin(-\omega t + \alpha)$. The angle $(-\omega t + \alpha)$ is called the phase, and $\alpha$ is the phase at $t = 0$, or initial phase. The velocity is $d\Psi/dt = -\omega A\cos(-\omega t + \alpha)$ and the acceleration $d^2\Psi/dt^2 = -\omega^2 A\sin(-\omega t + \alpha)$. As the choice of time origin is arbitrary, one can always reset it so as to have $\alpha = 0$.

Fig. A1.1 Left: Simple harmonic motion. Right: Displacement (OQ) as a function of distance of propagation ($x$) at a time t. A similar graph is obtained for a fixed x as a function of time.

This simple geometric construction is useful for describing the behaviour of many physical phenomena with a periodic behaviour that does not necessarily imply any circular motion. The relationship between simple harmonic motion and wave motion can be easily understood by imagining that the tip of a pen is attached to Q and that the paper moves at constant velocity to the right, as illustrated in Fig. A1.1.

The distance between two successive points with identical state of motion is the wavelength $\lambda$, and the corresponding time interval is the period T. The phase change between such points (such as $P'$ and $P''$) is $2\pi$, and $k = 2\pi/\lambda$ is the phase constant describing the change of phase per unit distance, whereas (p.324) $v_\phi = \lambda/T$, the phase velocity, represents the distance travelled per unit time. Since $T = 2\pi/\omega$, $v_\phi = \omega/k$.

The phase velocity $v_\phi$ is related to the properties of the medium through which the wave propagates, in the case of electromagnetic waves to the refractive index. In vacuum $v_\phi = c = (\varepsilon_0\mu_0)^{-1/2}$, where $\varepsilon_0$ is the vacuum permittivity, $\mu_0$ the vacuum permeability and $c = 299{,}792{,}458$ m s$^{-1}$ the velocity of light, corresponding, by definition, to a refractive index $n = c/v_\phi = 1$. When the phase velocity in a medium depends on the frequency there is dispersion (for example, when white light traverses a prism). As the refractive index for X-rays is always very close to 1, dispersion can be neglected except close to absorption edges.

Waves can be longitudinal (such as sound waves) or transverse (such as electromagnetic waves), depending on whether the displacement is in the direction of propagation of the wave or perpendicular to it.

The right panel in Fig. A1.1 illustrates that for a wave propagating in the x direction in a homogeneous medium, the harmonic motion of any point x at time t is the same as that at $x = 0$ at the time $t' = t - x/v_\phi$. Since $\Psi(0,t) = A\sin(-\omega t + \alpha)$, it is clear that $\Psi(x,t) = \Psi(0,t') = A\sin(-\omega t + \omega x/v_\phi) = A\sin(-\omega t + kx)$. With this convention the phase $\phi = (kx - \omega t)$ at a given x decreases with time, whereas at a given time it increases with distance. Note that in the literature, other conventions for the direction of the x- or time-axis, yielding different expressions for the phase, are also used.

### Solutions of the wave equation

Simple harmonic motion as in Fig. A1.1 can be represented by a differential equation, called the wave equation:

(A1.1)
$$\frac{\partial^2\Psi}{\partial t^2} = v_\phi^2\,\frac{\partial^2\Psi}{\partial x^2}$$

In three dimensions this becomes, with $Ψ = Ψ ( x , y , z , t ) = Ψ ( r , t )$ and $∇ 2$ the Laplace operator:

(A1.2)
$$\frac{\partial^2\Psi}{\partial t^2} = v_\phi^2\,\nabla^2\Psi$$

As sine and cosine are solutions of eq. (A1.1), the more general form of a harmonic oscillation is $\Psi(x,t) = A\sin(kx - \omega t + \alpha) + B\cos(kx - \omega t + \alpha)$, corresponding to a wave without defined origin in space or time.

Fig. A1.2 Since $f(x_1 - vt_1) = f(x_2 - vt_2)$, $x_1 - vt_1 = x_2 - vt_2$ and $v = (x_2 - x_1)/(t_2 - t_1)$ is positive if $x_2 > x_1$ and $t_2 > t_1$; the wave $f(x - vt)$ thus moves in the +x direction, which corresponds to the case of Fig. A1.1. The wave $f(x + vt)$ moves along the $-x$ direction.

The advantage of choosing sinusoidal functions for the description of waves is that, in contrast to other possible solutions, their shape is not distorted even when they propagate through dispersive media. Also, as explained in eq. (A1.16), more complex wave shapes can be represented as sums of sinusoidal waves.

It is easy to verify using partial derivatives that any function, not necessarily sinusoidal, of the form $\Psi(x,t) = f(x - v_\phi t)$, corresponding, as illustrated in Fig. A1.2, to a wave travelling in the x-direction, is a solution of eq. (A1.1), and that a wave travelling in the opposite direction, $\Psi'(x,t) = g(x + v_\phi t)$, is of course also a solution, since changing $v_\phi$ to $-v_\phi$ gives the same result.

The most general solution to the wave equation is thus represented by the sum of two waves travelling in opposite directions:

(A1.3)
$$\Psi(x,t) = f(x - v_\phi t) + g(x + v_\phi t)$$

(p.325) The principle of superposition states that if $\Psi_1(x,t) = f_1(x - v_\phi t)$ and $\Psi_2(x,t) = f_2(x - v_\phi t)$ are solutions of the wave equation, $\Psi(x,t) = \Psi_1(x,t) + \Psi_2(x,t)$ is also a solution. This is a consequence of the fact that the wave equation is linear (that is, if $f_1(x - v_\phi t)$ and $f_2(x - v_\phi t)$ are solutions, $g(x - v_\phi t) = a_1 f_1(x - v_\phi t) + a_2 f_2(x - v_\phi t)$ is also a solution).

A special case of superposition is that of two identical waves travelling in opposite directions:

(A1.4)
$$\Psi(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t)$$

Using the relationship $\sin a + \sin b = 2\sin\tfrac{1}{2}(a+b)\cos\tfrac{1}{2}(a-b)$, one finds a resultant wave of the form $2A\sin(kx)\cos(\omega t)$. In systems with a small number of degrees of freedom such solutions are called the (normal) modes of the system, whereas in continuous systems they are referred to as standing waves. They correspond to situations where every point x has a simple harmonic motion in time with local amplitude $2A\sin(kx)$, but the wave does not propagate. X-ray standing-wave techniques play an important role in the study of surfaces and interfaces with high spatial resolution and chemical selectivity.
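This superposition can be checked numerically. The sketch below (arbitrary parameter values, not from the book) confirms that two identical counter-propagating waves sum to the standing wave to machine precision:

```python
import math

# Numerical check (not from the book): two identical waves travelling in
# opposite directions superpose to the standing wave 2*A*sin(k*x)*cos(w*t).
A, k, w = 1.5, 2.0, 3.0   # arbitrary amplitude, phase constant, angular frequency

def standing_wave_error(x, t):
    travelling_sum = A * math.sin(k * x - w * t) + A * math.sin(k * x + w * t)
    standing = 2 * A * math.sin(k * x) * math.cos(w * t)
    return abs(travelling_sum - standing)

# The identity holds at every sampled point and time to machine precision.
max_err = max(standing_wave_error(0.1 * i, 0.07 * j)
              for i in range(50) for j in range(50))
print(max_err < 1e-12)   # True
```

Every point oscillates with the local amplitude set by $\sin(kx)$; the nodes at $kx = n\pi$ never move.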

Solutions to the wave equation explicitly involving sinusoidal functions are somewhat impractical to handle, and one therefore usually prefers to represent them by using complex exponentials.

### Complex numbers

Complex numbers $z = [x, y]$ are points in the complex plane, which remaps the familiar $X,Y$-plane in a way that simplifies the representation of waves and signals. In the usual orthogonal coordinate system a complex number is represented as a point $[x, y]$, or as a vector $Oz$ of length R between the origin (O) and $(x, y)$, as illustrated in Fig. A1.3. In this system the X-axis is referred to as the real axis and the Y-axis as the imaginary axis.

The complex number $z = [ x , y ]$ consists of a real (Re) and an imaginary (Im) part, which are both real numbers ([x,0] and [y,0]), and the imaginary unit $i = [ 0 , 1 ]$, such that:

$$z = [x, y] = [x, 0] + i\,[y, 0] = x + iy$$

Multiplication by the imaginary unit [0,1] rotates the vector [y,0], which is parallel to the real axis, counterclockwise by 90° and transforms it into a vector [0,y] parallel to the imaginary axis.

Fig. A1.3 Representation of complex numbers in an orthogonal coordinate system.

Addition and multiplication of complex numbers follow the rules of ordinary algebra. If $z_1 = x_1 + iy_1$ and $z_2 = x_2 + iy_2$, then $z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2)$ and $z_1 z_2 = x_1 x_2 - y_1 y_2 + i(x_1 y_2 + x_2 y_1)$. The solutions (u) of the equations $z_1 + u = z_2$ and $z_1 u = z_2$, respectively, define subtraction and division by $z_1 \neq [0, 0]$. The complex conjugate of $z = [x, y] = x + iy$ is $z^* = [x, -y] = x - iy$, so that $zz^* = x^2 + y^2$.

As illustrated in Fig. A1.3 , geometrically conjugation corresponds to a reflection through the real axis. Note that when taking the conjugate of a complex expression, all numbers must be conjugated.

It is in general preferable to use the polar form to represent complex numbers, where z is specified by its distance $R = \sqrt{x^2 + y^2}$ from the origin O (its modulus or amplitude) and the angle $\phi$ (the phase) between the horizontal axis and the line through O and z:

(A1.5)
$$z = R(\cos\phi + i\sin\phi) = R\exp(i\phi)$$

(p.326) The last equality (Euler's identity) and its counterpart $z^* = R(\cos\phi - i\sin\phi) = R\exp(-i\phi)$ are easily proven by series expansion.

Multiplication of two complex numbers $a = R_1\exp(i\phi_1)$ and $b = R_2\exp(i\phi_2)$ then simplifies to

$$ab = R_1 R_2\exp\left(i(\phi_1 + \phi_2)\right)$$
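Python's built-in complex type and `cmath` module can illustrate these rules. The sketch below (arbitrary values, not from the book) checks that moduli multiply and phases add, and that $zz^*$ is real:

```python
import cmath

# Illustration (not from the book): in polar form, multiplying complex
# numbers multiplies the moduli and adds the phases.
a = cmath.rect(2.0, 0.3)     # R1 = 2.0, phi1 = 0.3
b = cmath.rect(1.5, 1.1)     # R2 = 1.5, phi2 = 1.1

R, phi = cmath.polar(a * b)
print(abs(R - 3.0) < 1e-12, abs(phi - 1.4) < 1e-12)   # True True

# Conjugation reflects z through the real axis, and z*z^* = x^2 + y^2:
z = 3 + 4j
print(z * z.conjugate())                              # (25+0j)
```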

Differentiation and integration reduce respectively to multiplication and division by i.

The solutions of the wave equation can thus be represented as:

(A1.6)
$$\Psi(x,t) = A\exp\left[i(kx - \omega t + \alpha)\right] = A\exp(ikx)\exp(-i\omega t)\exp(i\alpha)$$

In the complex representation of sinusoidal waves only the real part represents a physical quantity (that is, the physical wave is the projection of the complex wave on the real axis). By an appropriate choice of origin one can make $\alpha = 0$ and drop the last exponential. In applications like X-ray scattering one is generally interested only in the time-averaged pattern, and one can ignore the factor $\exp(-i\omega t)$, since after multiplication by its complex conjugate it averages to 1 over a period (that is, $\frac{1}{T}\int_0^T dt = 1$). In contrast, in applications like electronics, where one deals with time-varying signals such as voltages and currents at a given point, one ignores the spatial factor $\exp(ikx)$ and deals only with signals of the form $V(t) = V_0\exp(-i\omega t)$.

Fig. A1.4 A: In the near field, spherical harmonic waves, shown here in a plane through a diameter of the sphere, are the best representation, whereas in the far field the curvature of the wavefront can be neglected. B: Electromagnetic waves, with electric field E and magnetic field H, are often represented as plane waves corresponding to planes of constant phase $\phi = \mathbf{k}\cdot\mathbf{r}$ perpendicular to the direction of propagation of the wave.

### Plane waves and spherical waves

The surface over which the phase of a wave is constant is called the wavefront. In practice, an image or a scattering pattern results from the superposition of harmonic waves with the same frequency but different phase constants emitted by several sources. Close to the sources, in the near field, the solutions to the wave equation are therefore better represented by the superposition of harmonic spherical waves of the form $\Psi(r,t) = \frac{\hat{A}}{r}\sin(\omega t - kr)$, where the constant $\hat{A}$ is the source strength and the amplitude $\hat{A}/r$ decreases with the distance from the source, as required for energy conservation. As illustrated in Fig. A1.4, this corresponds to wavefronts which are concentric spheres on which r, and hence $\Psi(r,t)$, is constant.

When the distance between the source and the detector is large, in the far field, the wavefront can be approximated by a plane wave $\Psi(\mathbf{r},t) = A\sin(\omega t - \mathbf{k}\cdot\mathbf{r})$, where $\mathbf{k}\cdot\mathbf{r} = \text{constant}$ defines the equation of a plane perpendicular to the direction of propagation of the wave given by the wavevector $\mathbf{k}$.

### Polarisation

A travelling plane wave $E_x(z,t) = E_x\cos(\omega t - kz)$, propagating along z and with the transverse component of the field oscillating in the x,z plane, is said to be linearly polarised along x.

More generally, at a fixed position z, the oscillations of the electric field of a plane wave propagating along z can be described as the superposition of two waves, one linearly polarised along x and the other along y: $\mathbf{E}(z,t) = E_x\cos(\omega t + \phi_x)\,\mathbf{e}_x + E_y\cos(\omega t + \phi_y)\,\mathbf{e}_y$, where $\phi_x$ and $\phi_y$ are the phases of the two components. The components of the field $E_x$ and $E_y$ are independent, but the changes of $E_x$ and $B_y$, or of $E_y$ and $-B_x$, relative to z and t are not, as they are coupled by Maxwell's equations.

If $E_y = 0$, or if $E_x = E_y$ and $\phi_y = \phi_x$ or $\phi_x \pm \pi$, the radiation is linearly polarised, whereas if $E_x = E_y$ and $\phi_y = \phi_x - \pi/2$ (that is, the x-oscillation leads the (p.327) y-oscillation by $\pi/2$) it is circularly polarised, and if $E_x \neq E_y$ and $\phi_y \neq \phi_x$ it is elliptically polarised. The radiation from a bending magnet in a storage ring, for example, is linearly polarised in the plane of the orbit, and elliptically polarised above and below this plane. Optical elements affect the polarisation of the radiation. Although polarisation can be neglected in most cases in SAS, this is not the case for measurements at larger angles or in situations where reflections occur.

### Interferences

Consider two travelling waves with unit amplitude and slightly different frequencies:

$$A(x,t) = \exp\left[i(k_1 x - \omega_1 t)\right],\qquad B(x,t) = \exp\left[i(k_2 x - \omega_2 t)\right]$$

Since $\cos(a+b) = \cos a\cos b - \sin a\sin b$ and $\cos(a-b) = \cos a\cos b + \sin a\sin b$, $\cos a\cos b = \frac{1}{2}\cos(a+b) + \frac{1}{2}\cos(a-b)$. Hence, taking $\alpha = a+b$ and $\beta = a-b$, or $a = \frac{1}{2}(\alpha+\beta)$, $b = \frac{1}{2}(\alpha-\beta)$, one obtains $\cos\alpha + \cos\beta = 2\cos\left(\frac{1}{2}(\alpha+\beta)\right)\cos\left(\frac{1}{2}(\alpha-\beta)\right)$.

The sum $Ψ ( x , t ) = A ( x , t ) + B ( x , t )$ is thus given by:

$$\Psi(x,t) = \exp\left[i(k_1 x - \omega_1 t)\right] + \exp\left[i(k_2 x - \omega_2 t)\right]$$

The real part of this expression is $\Psi(x,t) = 2\cos\left(\frac{1}{2}(a-b)\right)\cos\left(\frac{1}{2}(a+b)\right)$, where $a = k_1 x - \omega_1 t$ and $b = k_2 x - \omega_2 t$.

Setting $\frac{1}{2}(\omega_1 + \omega_2) = \omega$, $\frac{1}{2}(k_1 + k_2) = k$, $\frac{1}{2}(\omega_1 - \omega_2) = \Delta\omega$ and $\frac{1}{2}(k_1 - k_2) = \Delta k$, this simplifies to:

(A1.7)
$$\Psi(x,t) = 2\cos(\Delta k\,x - \Delta\omega\,t)\cos(kx - \omega t)$$

As illustrated in Fig. A1.5, the two waves beat, and this defines a wave group. The amplitude function (envelope) varies slowly, with maxima separated by $\Delta x = 2\pi/\Delta k$ (with t fixed), and the phase function varies rapidly, with maxima separated by $\Delta x = 2\pi/k$ (with t fixed). The maxima of the amplitude propagate with the group velocity $v_g = \Delta\omega/\Delta k$ and the planes of constant phase with the phase velocity $v_\phi = \omega/k$.

The group velocity is the one associated with the transport of energy. If there is dispersion (that is, $v ϕ$ depends on frequency), the group velocity and the phase velocity differ.

Real waves, which are never purely monochromatic but have a certain wavelength ($\Delta\lambda$) or frequency ($\Delta\nu$) spread, will thus not extend infinitely but

Fig. A1.5 Left: Two waves with slightly different phase factors ($\mathbf{k}\cdot\mathbf{r}$) will be out of phase by $\pi$ after travelling a distance $\Lambda$ corresponding to the longitudinal or temporal coherence length. Right: The superposition of the two waves leads to wave packets or a wave train.

(p.328) consist of wave trains or wave packets which are temporally and spatially limited. This phenomenon determines the coherence of light sources.

### Coherence

The temporal or longitudinal coherence length of radiation is the distance over which two waves with wavelengths $\lambda$ and $\lambda + \Delta\lambda$ become out of phase by $\pi$. It reflects the fact that the effective frequency range of monochromatic radiation is of the order of the reciprocal of the duration of a wave train ($2\Delta\lambda =$ full width at half maximum).

(A1.8)
$$\Lambda = \frac{\lambda^2}{2\Delta\lambda}$$
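As a worked example of the relation $\Lambda = \lambda^2/(2\Delta\lambda)$ (the parameter values below are assumed, typical numbers, not taken from the book), a 1 Å beam with a relative bandwidth of $10^{-4}$ has a longitudinal coherence length of about half a micrometre:

```python
# Worked example with assumed, typical values (not from the book):
# longitudinal coherence length Lambda = lambda^2 / (2 * delta_lambda).
wavelength = 1.0e-10      # 1 Angstrom X-rays, in metres
bandwidth = 1.0e-4        # relative spread delta_lambda/lambda (assumed typical value)

delta_lambda = bandwidth * wavelength
coherence_length = wavelength ** 2 / (2 * delta_lambda)
print(coherence_length)   # about 5e-07 m, i.e. half a micrometre
```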

The transverse one-sigma coherence area corresponds to the area (S) of the sample, at a distance R from the source, which is coherently illuminated by a quasi-monochromatic incoherent source with horizontal and vertical source sizes $\sigma_x$ and $\sigma_y$:

(A1.9)
$$S = \frac{\lambda R}{2\pi\sigma_x}\cdot\frac{\lambda R}{2\pi\sigma_y}$$

Note that the transverse coherence does not depend on the wavelength spread but only on geometry (source size and distance between source and object).

If the source becomes very small ($\sim$20 $\mu$m) and R large ($\sim$50 m), as is the case with some instruments at modern synchrotron radiation sources, the samples are partially coherently illuminated over larger areas (mm$^2$), even if the source is not coherent.

A particularly important application of localised non-periodic wave trains or pulses, which arise from the superposition of a large number of oscillations with equal amplitudes and nearly equal phases (nearly equal k and $ω$), is the quantum-mechanical description of particles like neutrons in terms of probability amplitude waves illustrated in Fig. A1.6 .

Fig. A1.6 Localised wave train or pulse representing a particle in terms of probability amplitude.

The energy of such particles with mass m is $E = \hbar\omega = h\nu$, where $\hbar = h/2\pi$, h is Planck's constant and $\nu$ the frequency, and their momentum $p = mv$ is given by $p = \hbar k$. The velocity of the particle is the group velocity of the wave train, $v_g = c^2 p E^{-1}$, where E is the energy of the particle. For neutrons one finds, since $p = m_n v = h/\lambda$, where $m_n$ is the mass of the neutron:

(A1.10)
$$\lambda = \frac{h}{m_n v} \approx \frac{3956}{v}\ \text{Å}\qquad(v\ \text{in m s}^{-1})$$
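The de Broglie relation for neutrons is easy to check numerically. The sketch below uses standard CODATA values for h and $m_n$ (the constants are not taken from the book); a 1000 m/s neutron has a wavelength of about 3.956 Å:

```python
# Numerical check of the de Broglie relation lambda = h / (m_n * v),
# which gives lambda[Angstrom] of roughly 3956 / v[m/s].
h = 6.62607015e-34        # Planck constant, J s (CODATA)
m_n = 1.67492749804e-27   # neutron mass, kg (CODATA)

def neutron_wavelength_angstrom(v):
    """de Broglie wavelength in Angstrom for a neutron with velocity v in m/s."""
    return h / (m_n * v) * 1e10

print(round(neutron_wavelength_angstrom(1000.0), 3))   # 3.956
```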

### Dirac $δ$ function

Before discussing the basic mathematics underlying the relationship between a scattering (or diffraction) pattern and a structure, it is useful to introduce the Dirac $\delta$ function, which plays an important role in the solution of many scattering problems. This function, $\delta(x)$, has the value $\delta(x) = \infty$ for $x = 0$ and $\delta(x) = 0$ for $x \neq 0$, with $\int_{-\infty}^{\infty}\delta(x)\,dx = 1$, and corresponds to an infinitely narrow and high spike at $x = 0$. Similarly, $\delta(x-a) = \infty$ for $x = a$ and $\delta(x-a) = 0$ for $x \neq a$ corresponds to the same spike shifted to a. It is easy to see that with this definition the integral $\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)$ selects or filters out a single value of the function $f(x)$.

An alternative and perhaps more intuitive definition of the $\delta$ function is obtained by taking the limit of a Gaussian with vanishing width and constant area: $\delta(x) = \lim_{a\to\infty}\sqrt{a/\pi}\,\exp(-ax^2)$, which also implies the scaling property $\delta(mx) = \delta(x)/|m|$, where m is a constant.

### (p.329) Convolution

The convolution of two functions $f ( x )$ and $g ( x )$ is defined as:

(A1.11)
$$f(x) * g(x) = \int_{-\infty}^{\infty} f(u)\,g(x-u)\,du$$

where u is a dummy variable running over all values of x. In some definitions the integral is multiplied by a factor of $1/2\pi$. The relationship between the functions $g(u)$ and $g(x-u)$ for a given value of x is easily understood by noting that $g(-u)$ results from the inversion of $g(u)$ through the origin of the abscissa, which is equivalent to flipping $g(u)$ around the y-axis, as illustrated in Fig. A1.7B. The function $g(x-u)$ is obtained by shifting the origin of $g(-u)$ by x along the abscissa.

An interesting case is that of convolutions involving $\delta$ functions. If $g(x) = \delta(x)$, the integral $\int_{-\infty}^{\infty} f(u)\,\delta(x-u)\,du = f(x)$. As in eq. (A1.11) this is repeated for every value of x, it is easy to see that $f(x) * g(x) = f(x)$ (that is, the function $f(u)$ is simply transferred to the space of x). Similarly, if $g(x) = \delta(x-a)$, which is a $\delta$ function with a peak at a, $\int_{-\infty}^{\infty} f(u)\,\delta(x-u-a)\,du = f(x-a)$, which again reproduces the function $f(x)$, but this time shifted by a along the abscissa.

In general, as illustrated in Fig. A1.7, the value of $f(x) * g(x)$ is obtained by repeating the sequence FLIP–SHIFT–MULTIPLY–INTEGRATE for each value of x: flip $g(u)$ around the y-axis to obtain $g(-u)$, shift $g(-u)$ by x to obtain $g(x-u)$, multiply by $f(u)$ for all values of u, and integrate the product.
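The FLIP–SHIFT–MULTIPLY–INTEGRATE recipe has a direct discrete analogue, with the integral replaced by a sum. The sketch below (an assumed discrete version, not the book's continuous notation) also checks that convolving with a shifted delta simply shifts the function:

```python
# Discrete FLIP-SHIFT-MULTIPLY-INTEGRATE: for each output index x,
# g is flipped and shifted by x, multiplied by f, and the products summed.
def convolve(f, g):
    n = len(f) + len(g) - 1
    out = []
    for x in range(n):
        total = 0.0
        for u in range(len(f)):
            if 0 <= x - u < len(g):
                total += f[u] * g[x - u]
        out.append(total)
    return out

f = [1.0, 2.0, 3.0]
delta_shifted = [0.0, 0.0, 1.0]          # discrete delta peaked at index 2

# Convolution with a shifted delta just shifts f, as in the text.
print(convolve(f, delta_shifted))        # [0.0, 0.0, 1.0, 2.0, 3.0]
```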

In the case of functions of several variables (for example, $f(\mathbf{u}) = f(x,y,z)$ and $g(\mathbf{u}) = g(x,y,z)$), the function $g(\mathbf{u})$ is inverted through a centre of symmetry at the origin to obtain $g(-\mathbf{u}) = g(-x,-y,-z)$, and its origin is displaced by a vector $\mathbf{r}$:

(A1.12)
$$f(\mathbf{r}) * g(\mathbf{r}) = \int f(\mathbf{u})\,g(\mathbf{r}-\mathbf{u})\,d\mathbf{u}$$

Fig. A1.7 The value of the convolution of the two functions $f(x)$ and $g(x)$ is obtained by flipping $g(u)$ around the y-axis to obtain $g(-u)$. The origin of $g(-u)$ is shifted to x (here $x = -150$) to obtain $g(x-u)$. The functions $f(u)$ and $g(x-u)$ are multiplied, and the value of the convolution $f(x) * g(x) = \int_{-\infty}^{\infty} f(u)\,g(x-u)\,du$ is obtained by integrating the result, corresponding to the shaded area. These operations must be repeated for all values of x.

(p.330)

Fig. A1.8 Convolution of a linear array of equally spaced $δ$ functions (lattice) with a motif yields a linear crystal.

Here again the operations must be repeated for all possible vectors r in the range (volume) where the functions are defined.

Convolution is commutative, $f(x) * g(x) = g(x) * f(x)$, and distributive, $f(x) * (g(x) + h(x)) = f(x) * g(x) + f(x) * h(x)$.

An application of convolutions: making crystals, chain molecules and solutions

An infinite linear lattice can be represented as a sum of equally spaced $\delta$ functions:

$$L(x) = \sum_{n=-\infty}^{\infty} \delta(x - na)$$
(A1.13a)

As illustrated in Fig. A1.8, a crystal can be described as the convolution of the electron density distribution within one unit cell, $\rho(x)$, which is the motif, with the lattice, which for real crystals is a three-dimensional array of $\delta$ functions. For a linear lattice:

$$\rho_{\mathrm{crystal}}(x) = \rho(x) * \sum_{n=-\infty}^{\infty} \delta(x - na)$$
(A1.13b)

A similar approach is useful in many circumstances, as illustrated in Fig. A1.9. In these cases one uses a pseudo-lattice where the $\delta$ functions are no longer necessarily regularly spaced.

### Correlation

Correlation, $f(x)\circ g(x) = f(x) * g(-x)$, is similar to convolution, except that $g(u)$ does not get flipped. Correlation is thus a repetitive sequence of SHIFT–MULTIPLY–INTEGRATE operations.

(A1.14)
$$f(x)\circ g(x) = \int_{-\infty}^{\infty} f(u)\,g(u-x)\,du$$

As in the case of convolution, in some definitions the integral is multiplied by a factor of $1/2\pi$. If $g(x)$ is even (that is, if $g(x) = g(-x)$), convolution and

Fig. A1.9 Examples of convolutions used in the description of chain molecules, concentrated solutions or semicrystalline materials.

(p.331) correlation produce, of course, the same result. The most important case in the context of X-ray scattering is that of autocorrelation:
(A1.15)
$$f(x)\circ f(x) = \int_{-\infty}^{\infty} f(u)\,f(u-x)\,du$$

The averaged autocorrelation function of the density distribution, $\gamma(r) = \langle\rho(\mathbf{r}) * \rho(-\mathbf{r})\rangle$, is the correlation function of the particle, which is related to the distance distribution function $p(r) = r^2\gamma(r)$.
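A discrete sketch of the SHIFT–MULTIPLY–INTEGRATE sequence (assumed notation, not from the book) shows a key property used here: the autocorrelation of a real function is even, so it depends only on the magnitude of the shift, as $\gamma(r)$ depends only on the distance r:

```python
# Discrete correlation at a single integer shift: shift, multiply, sum (no flip).
def correlate(f, g, shift):
    return sum(f[u] * g[u - shift]
               for u in range(len(f)) if 0 <= u - shift < len(g))

f = [1.0, 3.0, 2.0, 0.5]                          # arbitrary real "density"
gamma = {x: correlate(f, f, x) for x in range(-3, 4)}

# The autocorrelation of a real sequence is even: gamma(x) == gamma(-x).
print(all(abs(gamma[x] - gamma[-x]) < 1e-12 for x in range(4)))   # True
```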

### Fourier series and Fourier transforms

Fourier series and transforms are indispensable tools for describing all kinds of signals. If the signal is periodic, as in crystallography, one uses Fourier series; if it is non-periodic, as in scattering, one uses Fourier transforms.

Any single-valued periodic function $f(x)$ which is piecewise-differentiable over the interval $[-\pi, \pi]$ can be represented as a sum of harmonic functions, or Fourier series. One can always map any interval $[-L, L]$ over which the function is periodic onto $[-\pi, \pi]$ by taking $x = x'\pi/L$. Rather than using only sines or only cosines with a phase shift, it is more convenient to use sums of these functions, where n is an integer:

(A1.16)
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(nx) + b_n\sin(nx)\right]$$

The Fourier coefficients $a_n$ and $b_n$ of the Fourier series of $f(x)$ can be found easily by integration (using the fact that $\cos(nx)\sin(kx) = \frac{1}{2}[\sin(n+k)x - \sin(n-k)x]$ and $\sin(nx)\sin(kx) = \frac{1}{2}[-\cos(n+k)x + \cos(n-k)x]$, and remembering that the integrals from $-\pi$ to $\pi$ of $\cos(nx)$ and $\sin(nx)$ with n a non-zero integer are zero).

(A1.17)
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx$$
(A1.18)
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx$$
(A1.19)
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx$$

The first term in the series, $a_0/2$, is the average of $f(x)$ over the period.

The Fourier series can also be written conveniently in complex form, taking into account that, as $\cos\phi + i\sin\phi = \exp(i\phi)$, $\cos\phi = (e^{i\phi} + e^{-i\phi})/2$ and $\sin\phi = (e^{i\phi} - e^{-i\phi})/2i = -i(e^{i\phi} - e^{-i\phi})/2$:

(A1.20)
$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx},\qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,e^{-inx}\,dx$$

As a simple example of Fourier series expansion we consider the periodic function $f(x)$ with period $2\pi$, and $f(x) = -1$ for $-\pi < x < 0$ and $f(x) = 1$ for $0 \le x \le \pi$ (p.332)

Fig. A1.10 Left: First three terms of the Fourier series of the function $f(x) = -1$ for $-\pi < x < 0$ and $f(x) = 1$ for $0 \le x \le \pi$. Right: $f(x)$ and the sum of the first three terms of the Fourier series. A large number of terms is required to dampen the ripples.

in Fig. A1.10, which represents a square wave. In this case $a_0 = 0$, because the function oscillates around 0; $a_n = 0$, because the function is odd (that is, $f(x) = -f(-x)$); and $b_n = (2/\pi n)(1 - \cos n\pi)$, which is equal to 0 for n even and $4/\pi n$ for n odd. Hence:
$$f(x) = \frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right)$$
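As a numerical illustration (not from the book), the partial sums of the square-wave series $\frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \cdots\right)$ can be evaluated directly; away from the discontinuities they converge slowly towards $\pm 1$:

```python
import math

# Partial sums of the square-wave Fourier series: odd harmonics only,
# with coefficients 4/(pi*n) for odd n.
def partial_sum(x, n_terms):
    return (4.0 / math.pi) * sum(math.sin((2 * m + 1) * x) / (2 * m + 1)
                                 for m in range(n_terms))

# Away from the jumps the sums approach +1 on (0, pi) and -1 on (-pi, 0).
print(abs(partial_sum(1.0, 5000) - 1.0) < 1e-2)    # True
print(abs(partial_sum(-1.0, 5000) + 1.0) < 1e-2)   # True
```

The slow convergence, and the persistent overshoot near the jumps (the Gibbs phenomenon), is why many terms are needed to dampen the ripples in Fig. A1.10.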

Whereas the Fourier series for odd functions like the one above contain only the coefficients $b n$ associated with the sine terms, even functions, where $f ( x ) = f ( − x )$, contain only the coefficients $a n$ associated with the cosine terms. In other words, odd functions are obtained as sums of odd functions (sines) and even functions as sums of even functions (cosines). Functions which are neither odd nor even are represented as sums of odd and even functions.

Note that a truncated series like that in Fig. A1.10 provides only a low-resolution picture, since the low-index terms, which correspond to low frequencies, define the broad features, and the higher-index terms add the detail. Fourier series are the most important mathematical tool in the description of the periodic three-dimensional electron densities in crystals. The description of the electron density $\rho(x,y,z)$ in the unit cell in terms of the Fourier coefficients (that is, the structure factors F(hkl)) is given in the International Tables of Crystallography for all space groups, using expressions analogous to eq. (A1.20). For centrosymmetric crystals the electron density is an even function ($\rho(x,y,z) = \rho(\bar{x},\bar{y},\bar{z})$), the phases are restricted to 0 and $\pi$, and the structure factors are real numbers, whereas for non-centrosymmetric crystals the structure factors are complex numbers with a real and an imaginary part, and the phase varies between 0 and $2\pi$.

Fourier series are rarely used in SAS, because most objects are not periodic, with exceptions such as lipid systems. However, by extending the concept of Fourier series to the case where the harmonics vary continuously from 0 to $\infty$, one obtains Fourier transforms, which can be used to describe non-periodic structures and are the main mathematical tool in SAS.

The Fourier integral theorem states that for any function defined over $[-\infty, \infty]$ which is piecewise-differentiable and absolutely integrable (that is, for which $\int_{-\infty}^{\infty}|f(x)|\,dx$ converges):

(A1.21)
$$f(x) = \int_0^{\infty}\left[A(k)\cos(kx) + B(k)\sin(kx)\right]dk$$

(p.333) where $A(k) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(x)\cos(kx)\,dx$ and $B(k) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(x)\sin(kx)\,dx$. These equations are similar to eqs. (A1.16)–(A1.19), but with the major difference that here k is a continuous rather than an integer variable.

Using the dummy variable u, which runs over all values of x, eq. (A1.21) can be rewritten as:

(A1.22)
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} f(u)\cos k(x-u)\,du\right]dk$$
$$f(x) = \frac{1}{2\pi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} f(u)\left(e^{ik(x-u)} + e^{-ik(x-u)}\right)du\right]dk$$
(A1.23a)

As the expression between square brackets is $\cos k(x-u) = \frac{1}{2}\left(e^{ik(x-u)} + e^{-ik(x-u)}\right)$, one obtains, taking into account that $\int_0^{\infty}\left(e^{ik(x-u)} + e^{-ik(x-u)}\right)dk = \int_{-\infty}^{\infty} e^{-ik(x-u)}\,dk$ and changing the limits of integration for k:

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ikx}\left[\int_{-\infty}^{\infty} f(u)\,e^{iku}\,du\right]dk$$
(A1.23b)

The function $F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{ikx}\,dx$ is the Fourier transform of $f(x)$, which we shall denote $\Im(f(x))$, and $f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(k)\,e^{-ikx}\,dk$ is the inverse Fourier transform of $F(k)$ (denoted $\Im^{-1}(F(k))$).

In the literature, confusion often arises from the fact that there are a number of definitions of the Fourier transform, differing by the factors in front of the integrals representing $f(x)$ and $F(k)$ and by the sign convention for the exponential. The product of the constants in front of the integrals is always dimensionless and equal to $1/2\pi$, and with this limitation any two arbitrary constants $[a, b]$ may be used to define a Fourier transform pair as

(A1.24)
$$F(k) = \sqrt{\frac{|b|}{(2\pi)^{1-a}}}\int_{-\infty}^{\infty} f(x)\,e^{ibkx}\,dx,\qquad f(x) = \sqrt{\frac{|b|}{(2\pi)^{1+a}}}\int_{-\infty}^{\infty} F(k)\,e^{-ibkx}\,dk$$

Modern physics prefers $[0, 1]$, classical physics used $[-1, 1]$, pure mathematics uses $[1, -1]$ and signal processing $[0, -2\pi]$ (see Eric W. Weisstein, 'Fourier Transform', in MathWorld, A Wolfram Web Resource: http://mathworld.wolfram.com/FourierTransform.html).

This implies that if $f(x)$ is defined over $[-L, L]$, $F(k)$ will be defined over $[-1/2\pi L,\ 1/2\pi L]$; this corresponds to the definition of reciprocal space in solid-state physics, which differs by a factor of $2\pi$ from the crystallographic definition.

As a simple illustration, consider the Fourier transform of the function $f(x) = 1$ for $|x| < a$ and $f(x) = 0$ for $|x| > a$, which defines a rectangular box of unit height and length 2a in Fig. A1.11. (p.334)

Fig. A1.11 Fourier transforms of the function $f(x) = 1$ for $|x| < a$ and $f(x) = 0$ for $|x| > a$, which defines a rectangular box, for two values of a: $a = 3$ (dashed line) and $a = 6$ (full line).

(A1.25)
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-a}^{a} e^{ikx}\,dx = \sqrt{\frac{2}{\pi}}\,\frac{\sin(ka)}{k}$$

Note that the value of F(0) is proportional to the area of the box, and that the width of the central maximum ($\Delta k = \pi/a$) is inversely related to the width of the box (that is, the interval over which $f(x)$ is defined, $\Delta x = 2a$). The relationship $\Delta k\,\Delta x = 2\pi$ is known in quantum mechanics as Heisenberg's uncertainty principle (since $p = \hbar k$, $\Delta p\,\Delta x = h$, where h is Planck's constant), which arises from the representation of particles by probability amplitude waves. As time and angular frequency ($\omega = 2\pi\nu$) are also reciprocal variables, $\Delta\omega\,\Delta t = 2\pi$, or, since $\Delta E = \hbar\,\Delta\omega$, $\Delta E\,\Delta t = h$, is an alternative expression of this principle.
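The transform of the box can be checked by direct numerical integration. The sketch below (midpoint rule, an assumed discretisation, not from the book) compares $\frac{1}{\sqrt{2\pi}}\int_{-a}^{a}e^{ikx}\,dx$, whose imaginary part vanishes by symmetry, with $\sqrt{2/\pi}\,\sin(ka)/k$:

```python
import math

# Midpoint-rule evaluation of the transform of a unit box on [-a, a].
# Only the cosine (real) part is integrated; the sine part cancels by symmetry.
def box_transform(k, a, n=20000):
    dx = 2 * a / n
    total = sum(math.cos(k * (-a + (j + 0.5) * dx)) for j in range(n)) * dx
    return total / math.sqrt(2 * math.pi)

a, k = 3.0, 0.7
analytic = math.sqrt(2 / math.pi) * math.sin(k * a) / k
print(abs(box_transform(k, a) - analytic) < 1e-6)   # True
```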

Clearly, as $\Delta k\,\Delta x = 2\pi$, when the interval $\Delta x$ becomes very large $F(k)$ becomes very narrow, and when $\Delta x \to \infty$ (that is, $f(x) = 1(x)$, a function which has a constant value of 1 from $-\infty$ to $+\infty$), $F(k) \to \delta(k)$. Hence,

(A1.26)
$$\delta(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\,dx$$

and

(A1.27)
$$\delta(x-u) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ik(x-u)}\,dk$$

Table A1.1 gives a list of useful functions for small-angle scattering and their Fourier transforms.

The following properties of Fourier transforms are important:

1. The Fourier transform of a linear combination of functions is a linear combination of their transforms:

(A1.28)
$$\Im\left(af(x) + bg(x)\right) = a\,\Im(f(x)) + b\,\Im(g(x))$$

Table A1.1 A few functions which are useful in small-angle scattering and their Fourier transforms.

| Function $f(x)$ | Fourier transform $F(k)$ |
| --- | --- |
| $\exp(-\pi x^2)$ | $\exp(-\pi k^2)$ |
| $\cos(2\pi a x)$ | $[\delta(k+a) + \delta(k-a)]/2$ |
| $\sin(2\pi a x)$ | $[\delta(k+a) - \delta(k-a)]/2$ |
| $\Pi(x,a) = 1$ for $\lvert x\rvert \le a/2$; $0$ for $\lvert x\rvert > a/2$ | $a\,\sin(\pi a k)/(\pi a k)$ |
| $\Lambda(x,a) = 1 - \lvert x\rvert/a$ for $\lvert x\rvert \le a$; $0$ for $\lvert x\rvert > a$ | $a\left[\sin(\pi a k)/(\pi a k)\right]^2$ |

$\Pi(x,a)$ represents a rectangle, and $\Pi(x,a)\cdot\Pi(y,b)\cdot\Pi(z,c)$ a parallelepiped.

$\Lambda(x,a)$ represents a triangle.

2. The Fourier transform of the complex conjugate of a function is the complex conjugate of the Fourier transform of the original function, inverted through the origin:
(A1.29)
$$\Im\bigl(f^{*}(x)\bigr) = F^{*}(-k)$$

3. The Fourier transform of the product of two functions is the convolution of their Fourier transforms (multiplication theorem):

(A1.30)
$$\Im\bigl(f(x)\cdot g(x)\bigr) = \frac{1}{\sqrt{2\pi}}\,F(k) * G(k)$$

4. The Fourier transform of the convolution of two functions is the product of their Fourier transforms (convolution theorem):

(A1.31)
$$\Im\bigl(f(x) * g(x)\bigr) = \sqrt{2\pi}\,F(k)\,G(k)$$

Here is a simple proof, where u is a dummy variable.

Taking

$$f(x) * g(x) = \int_{-\infty}^{\infty} f(u)\,g(x - u)\,du,$$

the Fourier transform of the convolution $f(x) * g(x)$ is

$$\Im\bigl(f(x) * g(x)\bigr) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} f(u)\,g(x - u)\,du\right] e^{ikx}\,dx$$

and setting $x - u = w$, expressing the exponential as a product and separating the parts containing u and w,

$$\Im\bigl(f(x) * g(x)\bigr) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(u)\,e^{iku}\,du\int_{-\infty}^{\infty} g(w)\,e^{ikw}\,dw$$

The names of dummy variables like u and w are arbitrary, and they can thus also be replaced by x so that

$$\Im\bigl(f(x) * g(x)\bigr) = \sqrt{2\pi}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{ikx}\,dx\right]\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(x)\,e^{ikx}\,dx\right] = \sqrt{2\pi}\,F(k)\,G(k)$$

This proof is equally valid for the inverse transform, as only the sign of the exponential changes. The convolution theorem can be rewritten as

$$\Im^{-1}\bigl(F(k) * G(k)\bigr) = \sqrt{2\pi}\,f(x)\,g(x)$$

Making use of the analogy between $\Im$ and $\Im^{-1}$, one simply obtains the multiplication theorem (eq. A1.30).
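The convolution theorem can be illustrated numerically. The sketch below (my own code; the Gaussian test functions and grid are arbitrary choices) computes a direct convolution on a grid and compares its transform with $\sqrt{2\pi}\,F(k)\,G(k)$, using the text’s convention $F(k) = (1/\sqrt{2\pi})\int f(x)e^{ikx}dx$.

```python
import numpy as np

def ft(f, x, k):
    """F(k) = (1/sqrt(2*pi)) * integral f(x) e^{ikx} dx, as a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(f * np.exp(1j * k * x)) * dx / np.sqrt(2 * np.pi)

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)                       # two Gaussian test functions
g = np.exp(-0.5 * x**2)

# direct convolution (f*g)(x), aligned with the same symmetric grid
conv = np.convolve(f, g, mode="same") * dx

k = 1.3
lhs = ft(conv, x, k)                    # transform of the convolution
rhs = np.sqrt(2 * np.pi) * ft(f, x, k) * ft(g, x, k)
print(lhs.real, rhs.real)               # the two sides agree
```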

5. The Fourier transform of a correlation is the product of the Fourier transform of one function with the complex conjugate of the Fourier transform of the other. This property follows directly from (2) and (4).

(A1.32)
$$\Im\left(\int_{-\infty}^{\infty} f(u)\,g^{*}(u - x)\,du\right) = \sqrt{2\pi}\,F(k)\,G^{*}(k)$$

6. The similarity theorem states that a change of scale by a factor a of the abscissa in real space leads to a contraction by 1/a of the abscissa in reciprocal space, as implied by eq. (A1.24).

(A1.33)
$$\Im\bigl(f(ax)\bigr) = \frac{1}{|a|}\,F\!\left(\frac{k}{a}\right)$$

7. If a function is shifted along the abscissa by $x_0$, its transform is multiplied by $e^{ikx_0}$.

(A1.34)
$$\Im\bigl(f(x - x_0)\bigr) = e^{ikx_0}\,F(k)$$

This expression can be easily extended to three-dimensional space, and is very useful to calculate the scattering amplitudes of an assembly when the amplitudes of the individual subunits in the proper orientation are known.
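A one-dimensional numerical check of the shift theorem (a sketch of my own, with an arbitrary Gaussian test function and grid) confirms that translating $f$ by $x_0$ only multiplies its transform by a phase factor:

```python
import numpy as np

def ft(f, x, k):
    """F(k) = (1/sqrt(2*pi)) * integral f(x) e^{ikx} dx, as a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(f * np.exp(1j * k * x)) * dx / np.sqrt(2 * np.pi)

x = np.linspace(-40.0, 40.0, 80001)
f = np.exp(-x**2)                       # a Gaussian test function
x0, k = 2.5, 0.9
shifted = np.exp(-(x - x0)**2)          # f(x - x0)

lhs = ft(shifted, x, k)
rhs = np.exp(1j * k * x0) * ft(f, x, k)
print(lhs, rhs)                         # equal up to quadrature error
```

Note that the shift changes only the phase of the transform, not its modulus, which is why an intensity $|F(k)|^2$ carries no information about the absolute position of a particle.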

8. The Fourier transform of the derivative of a function $f(x)$ is given by

(A1.35)
$$\Im\bigl(f'(x)\bigr) = -ik\,F(k)$$

Setting $du = f'(x)\,dx$ and $v = e^{ikx}$ (so $u = f(x)$, $dv = ik\,e^{ikx}\,dx$), integration by parts $\left(\int v\,du = uv - \int u\,dv\right)$ yields $\Im\bigl(f'(x)\bigr) = \frac{1}{\sqrt{2\pi}}\left[f(x)\,e^{ikx}\Big|_{-\infty}^{\infty} - ik\int_{-\infty}^{\infty} f(x)\,e^{ikx}\,dx\right] = -ik\,F(k)$.

The first term in the last equation is the product of $f(x)$ and an oscillating function, which vanishes if $\lim_{x \to \pm\infty} f(x) = 0$.
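The derivative theorem (A1.35) can be verified with a Gaussian, whose analytic derivative is known; this is a sketch of my own (grid and test function are arbitrary), again with the convention $F(k) = (1/\sqrt{2\pi})\int f(x)e^{ikx}dx$:

```python
import numpy as np

def ft(f, x, k):
    """F(k) = (1/sqrt(2*pi)) * integral f(x) e^{ikx} dx, as a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(f * np.exp(1j * k * x)) * dx / np.sqrt(2 * np.pi)

x = np.linspace(-40.0, 40.0, 80001)
f = np.exp(-x**2)
fprime = -2.0 * x * np.exp(-x**2)       # analytic derivative of f

k = 1.1
lhs = ft(fprime, x, k)                  # transform of the derivative
rhs = -1j * k * ft(f, x, k)             # -ik times the transform of f
print(lhs, rhs)
```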

9. Fourier transforms of odd and even functions. As already mentioned, an even function is one where $f(x) = f(-x)$ (for example, $\cos(x)$), and an odd function is one where $f(x) = -f(-x)$ (for example, $\sin(x)$). As in the case of the Fourier series above, the Fourier transform of an even function has only cosine terms (it is real), and $F(k)$ and $f(x)$ are cosine transforms of each other:

(A1.36)
$$F(k) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\cos(kx)\,dx, \qquad f(x) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} F(k)\cos(kx)\,dk$$

Conversely, the Fourier transform of an odd function has only sine terms (it is purely imaginary), and $F ( k )$ and $f ( x )$ are sine transforms of each other:

(A1.37)
$$F(k) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\sin(kx)\,dx, \qquad f(x) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} F(k)\sin(kx)\,dk$$

Since $\frac{1}{2}[f(x) + f(-x)] + \frac{1}{2}[f(x) - f(-x)] = E(x) + O(x) = f(x)$, every function can be partitioned into even $E(x)$ and odd $O(x)$ parts, and its Fourier transform can thus also be written as a sum of the Fourier transforms of these parts.

If $F ( k )$ and $G ( k )$ are cosine or sine transforms, Parseval’s identity applies:

(A1.38)
$$\int_{0}^{\infty} F(k)\,G(k)\,dk = \int_{0}^{\infty} f(x)\,g(x)\,dx$$

In general, however, $∫ − ∞ ∞ F ( k ) G ∗ ( k ) d k = ∫ − ∞ ∞ f ( x ) g ∗ ( x ) d x$, and if $f ( x ) = g ( x )$ this becomes

(A1.39)
$$\int_{-\infty}^{\infty} |F(k)|^{2}\,dk = \int_{-\infty}^{\infty} |f(x)|^{2}\,dx$$

which states that the total energy (or intensity) in a wave is the integral of the energies (or intensities) in all its Fourier components.
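The same energy balance holds for the discrete transform, which is how it is usually met in practice. A short sketch (my own, with random complex data), noting that for NumPy’s unnormalized DFT the identity reads $\sum|f_n|^2 = \frac{1}{N}\sum|F_k|^2$:

```python
import numpy as np

# Discrete Parseval/Plancherel check: total "energy" is the same in real
# and reciprocal space (up to the DFT's 1/N normalization).
rng = np.random.default_rng(0)
f = rng.normal(size=1024) + 1j * rng.normal(size=1024)
F = np.fft.fft(f)

energy_real = np.sum(np.abs(f)**2)
energy_recip = np.sum(np.abs(F)**2) / len(f)
print(energy_real, energy_recip)        # identical up to rounding
```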

For real functions such as the electron density or scattering length density one has the following useful relationships:

(A1.40)
$$\Im\bigl(f(-x)\bigr) = F(-k)$$
(A1.41)
$$\Im\bigl(f^{*}(x)\bigr) = F^{*}(-k)$$
(A1.42)
$$\Im\bigl(f^{*}(-x)\bigr) = F^{*}(k)$$

and hence,

(A1.43)
$$F(-k) = F^{*}(k)$$

The Fourier transform $ℑ ( f ( x ) ∗ f ( − x ) ) = F ( k ) ⋅ F ( − k )$, in agreement with the convolution theorem.

Since, for a real function, $F(-k) = F^{*}(k)$, the Fourier transform of the autocorrelation of the real function $f(x)$ is the squared modulus of its transform:

(A1.44)
$$\Im\bigl(f(x) * f(-x)\bigr) = F(k)\cdot F^{*}(k) = |F(k)|^{2}$$

If $f(x)$ is the electron density or scattering length density, $F(k)$ is the scattering amplitude, and the product $F(k)\cdot F^{*}(k) = |F(k)|^{2}$ represents the scattered intensity.
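This autocorrelation–intensity relation (the discrete analogue of eq. A1.44) can be demonstrated directly. The sketch below is my own code, using a random real "density" on a periodic grid so that the circular autocorrelation is exact; its DFT equals $|F(k)|^2$ term by term.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = rng.random(256)                   # a real density on a ring of N points

F = np.fft.fft(rho)
intensity = np.abs(F)**2                # F(k) * conj(F(k)), the "scattered intensity"

# circular autocorrelation gamma(s) = sum_u rho(u) * rho(u + s)
N = len(rho)
gamma = np.array([np.sum(rho * np.roll(rho, -s)) for s in range(N)])

# the DFT of the autocorrelation reproduces the intensity
print(np.max(np.abs(np.fft.fft(gamma) - intensity)))   # ≈ 0
```

This is the step that makes the autocorrelation function, rather than the density itself, the quantity directly accessible from a scattering experiment.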