D Electronic structure in crystalline solids
D.1 Crystalline solids
Most solids present a crystalline structure, where atoms are organised regularly in cells that repeat periodically. The simplest example is the simple cubic lattice, where the atoms sit at the corners of packed cubes. In practice, it is rare for atoms to adopt this simple structure, although polonium is one example. Rather, more complex structures appear in nature. Some examples where atoms are arranged in periodic lattices are shown in Fig. D.1, including the body-centred cubic and face-centred cubic lattices. In Chapter 8 it is shown that the crystalline structure of solids is the main factor responsible for the particular electronic properties that they present.
In all cases, the lattice is first described by a unit cell that can have one or more atoms located at positions relative to a reference point of the cell. The cells are then repeated periodically, displaced by an integer number of base vectors $\mathbf{a}_i$, such that the new cells are obtained by the translation vectors (see Fig. D.2),
$$\mathbf{R} = n_1 \mathbf{a}_1 + n_2 \mathbf{a}_2 + n_3 \mathbf{a}_3, \qquad n_i \in \mathbb{Z}. \tag{D.1}$$
In three dimensions, there are three base vectors. They do not need to be orthogonal, but only linearly independent such that they can span the whole volume. The volume of the unit cell is given by the scalar triple product $v = \mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)$.
Associated with these base vectors, we define the reciprocal base vectors $\mathbf{b}_j$ as those satisfying the biorthogonality condition $\mathbf{a}_i \cdot \mathbf{b}_j = 2\pi \delta_{ij}$. These vectors have dimensions of reciprocal distance, and their integer combinations generate the reciprocal vectors,
$$\mathbf{G} = m_1 \mathbf{b}_1 + m_2 \mathbf{b}_2 + m_3 \mathbf{b}_3, \qquad m_i \in \mathbb{Z}.$$
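The biorthogonality condition determines the reciprocal base vectors explicitly as $\mathbf{b}_1 = 2\pi\, \mathbf{a}_2 \times \mathbf{a}_3 / v$ and cyclic permutations. A minimal numerical sketch of this construction; the FCC lattice with unit cube side is an illustrative choice, not taken from the text:

```python
import numpy as np

# Primitive base vectors of an FCC lattice with conventional cube side 1
# (illustrative choice). Rows are a_1, a_2, a_3.
a = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Volume of the unit cell: scalar triple product a_1 . (a_2 x a_3)
volume = np.dot(a[0], np.cross(a[1], a[2]))

# Reciprocal base vectors: b_1 = 2*pi (a_2 x a_3)/v, and cyclic permutations
b = 2 * np.pi * np.array([np.cross(a[1], a[2]),
                          np.cross(a[2], a[0]),
                          np.cross(a[0], a[1])]) / volume

# Check the biorthogonality condition a_i . b_j = 2*pi*delta_ij
assert np.allclose(a @ b.T, 2 * np.pi * np.eye(3))
```

For this choice the primitive cell volume is a quarter of the conventional cube, and the reciprocal lattice of FCC comes out body-centred cubic, as expected.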
D.2 Band structure
The electrons in the system can be classified into two types. The core electrons are in deep energy levels and will remain bound to each specific atom. The exterior or valence electrons, in contrast, are not completely bound to a single atom but rather are subject to the attraction of the neighbouring atoms as well. These electrons will become delocalised and are the main subject of study in electronic transport. We thus separate this problem into two: the valence electrons and the ions, the latter being composed of the nucleus and core electrons. The ions are arranged periodically and, as such, generate a periodic potential $V$ for the valence electrons, satisfying $V(\mathbf{r} + \mathbf{R}) = V(\mathbf{r})$ for any translation vector (D.1). It is easy to verify that any periodic function can be written as a Fourier series in the reciprocal vectors,
$$V(\mathbf{r}) = \sum_{\mathbf{G}} V_{\mathbf{G}} e^{i\mathbf{G}\cdot\mathbf{r}}.$$
Consider now a macroscopic solid, composed of many unit cells. For simplicity, although it is not strictly necessary to do so, we will consider that the solid is a parallelepiped, aligned with the base vectors $\mathbf{a}_i$, with $N_i$ cells on each side. By macroscopic we understand that the total number of cells is on the order of the Avogadro number. We will also assume that the box is periodic for the purpose of computation of the electron wavefunctions $\Psi$, which are then written as Fourier series,
$$\Psi(\mathbf{r}) = \sum_{\mathbf{q}} \Psi_{\mathbf{q}} e^{i\mathbf{q}\cdot\mathbf{r}}, \tag{D.4}$$
where $\mathbf{q} = \sum_i \frac{n_i}{N_i} \mathbf{b}_i$, with $n_i \in \mathbb{Z}$, are vectors in reciprocal space. Note that $\Psi$ is not necessarily periodic in the cell.
It is convenient to define the first Brillouin zone (FBZ), also called the Wigner–Seitz cell, as the set of vectors that are closer to the origin than to any other reciprocal vector. Figure D.3 presents the FBZ for some lattices. Then, any vector $\mathbf{q}$ in the reciprocal space can be uniquely decomposed as $\mathbf{q} = \mathbf{k} + \mathbf{G}$, where $\mathbf{k}$ belongs to the FBZ and $\mathbf{G}$ is a reciprocal vector.
D.2.1 Bloch theorem
Let us now analyse the Schrödinger equation for the valence electrons,
$$-\frac{\hbar^2}{2m} \nabla^2 \Psi(\mathbf{r}) + V(\mathbf{r}) \Psi(\mathbf{r}) = \varepsilon \Psi(\mathbf{r}).$$
Inserting the Fourier series for the potential and the wavefunction (D.4), we obtain
$$\sum_{\mathbf{q}} \frac{\hbar^2 q^2}{2m} \Psi_{\mathbf{q}} e^{i\mathbf{q}\cdot\mathbf{r}} + \sum_{\mathbf{q},\mathbf{G}} V_{\mathbf{G}} \Psi_{\mathbf{q}} e^{i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}} = \varepsilon \sum_{\mathbf{q}} \Psi_{\mathbf{q}} e^{i\mathbf{q}\cdot\mathbf{r}}.$$
The summed vector $\mathbf{q} + \mathbf{G}$ also belongs to the reciprocal space and therefore we can relabel it as $\mathbf{q}$, such that now the equation adopts the simpler form,
$$\sum_{\mathbf{q}} \left[ \frac{\hbar^2 q^2}{2m} \Psi_{\mathbf{q}} + \sum_{\mathbf{G}} V_{\mathbf{G}} \Psi_{\mathbf{q}-\mathbf{G}} \right] e^{i\mathbf{q}\cdot\mathbf{r}} = \varepsilon \sum_{\mathbf{q}} \Psi_{\mathbf{q}} e^{i\mathbf{q}\cdot\mathbf{r}}.$$
As the Fourier functions are linearly independent, we end up with the following set of equations:
$$\frac{\hbar^2 q^2}{2m} \Psi_{\mathbf{q}} + \sum_{\mathbf{G}} V_{\mathbf{G}} \Psi_{\mathbf{q}-\mathbf{G}} = \varepsilon \Psi_{\mathbf{q}}. \tag{D.8}$$
The equations for $\Psi_{\mathbf{q}}$ and $\Psi_{\mathbf{q}-\mathbf{G}}$ are coupled and constitute a set of infinite size. Each set of equations can be labelled by a unique vector $\mathbf{k}$ in the FBZ, and we note that the sets of equations for different vectors in the FBZ do not couple (see Fig. D.4). For each $\mathbf{k}$ there are infinitely many coupled equations (one for each reciprocal vector $\mathbf{G}$), and as such, an infinite set of discrete energy levels is obtained, which we label as $\varepsilon_n(\mathbf{k})$, where $n$ is an integer index. The wavefunctions are labelled accordingly as $\Psi_{n\mathbf{k}}$.
Note that, because the equations are coupled, the choice of the FBZ to label the levels is arbitrary and other options are possible. This also implies that the levels are periodic in the reciprocal space, $\varepsilon_n(\mathbf{k} + \mathbf{G}) = \varepsilon_n(\mathbf{k})$. Finally, the vectors in the reciprocal space used to label the states belong to the FBZ, with a distance between successive vectors equal to $|\mathbf{b}_i|/N_i$, which for macroscopic systems is extremely small. Consequently, the vectors are quasicontinuous, densely filling the FBZ, and we will use the notation $\varepsilon_n(\mathbf{k})$ for the dispersion relation. This property will allow us to use differential calculus when manipulating them.
Before continuing with the energy bands, there is an additional property of the Schrödinger equation that deserves some attention. Because different vectors in the FBZ do not couple in the solution of eqn (D.8), the eigenfunctions are built only from contributions of vectors that differ by reciprocal vectors $\mathbf{G}$. Then, for the eigenfunctions, the sum in (D.4) is restricted to
$$\Psi_{n\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{G}} \Psi_{\mathbf{k}+\mathbf{G}} e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}.$$
By factoring out $e^{i\mathbf{k}\cdot\mathbf{r}}$, we obtain $\Psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} \sum_{\mathbf{G}} \Psi_{\mathbf{k}+\mathbf{G}} e^{i\mathbf{G}\cdot\mathbf{r}}$, where the sum is the representation of a periodic function in the unit cell. We have obtained the Bloch theorem, which states that the wavefunctions for a periodic potential can be written as
$$\Psi_{n\mathbf{k}}(\mathbf{r}) = \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\sqrt{\mathcal{V}}} u_{n\mathbf{k}}(\mathbf{r}),$$
where $u_{n\mathbf{k}}$ is periodic in the unit cell and the normalisation factor has been put explicitly such that
$$\frac{1}{v} \int_{\text{cell}} |u_{n\mathbf{k}}(\mathbf{r})|^2 \, \mathrm{d}^3 r = 1.$$
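The periodicity of $u_{n\mathbf{k}}$ implies the familiar Bloch property $\Psi(\mathbf{r} + \mathbf{R}) = e^{i\mathbf{k}\cdot\mathbf{R}} \Psi(\mathbf{r})$. A 1D numerical sketch of this identity; the cell length, wavevector, and form of $u$ are illustrative assumptions:

```python
import numpy as np

# 1D sketch: psi(x) = exp(i k x) u(x), with u periodic over the cell length
# L_cell, satisfies psi(x + R) = exp(i k R) psi(x) for any R = n * L_cell.
L_cell, k = 2.0, 0.7

def u(x):
    # any cell-periodic function works; a single Fourier component is used here
    return 1.0 + 0.3 * np.cos(2.0 * np.pi * x / L_cell)

def psi(x):
    return np.exp(1j * k * x) * u(x)

x = np.linspace(0.0, 1.9, 50)
R = 3 * L_cell                      # a lattice translation
assert np.allclose(psi(x + R), np.exp(1j * k * R) * psi(x))
```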
Using the transformation $\Psi_{n\mathbf{k}} = e^{i\mathbf{k}\cdot\mathbf{r}} u_{n\mathbf{k}} / \sqrt{\mathcal{V}}$, the Schrödinger equation can be written as
$$H_{\mathbf{k}} u_{n\mathbf{k}} = \varepsilon_n(\mathbf{k}) u_{n\mathbf{k}},$$
where now the domain for $u_{n\mathbf{k}}$ is the unit cell and the effective Hamiltonian is
$$H_{\mathbf{k}} = \frac{(-i\hbar\nabla + \hbar\mathbf{k})^2}{2m} + V(\mathbf{r}). \tag{D.13}$$
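In practice, the coupled equations (D.8) can be solved numerically by truncating the plane-wave basis to a finite number of reciprocal vectors and diagonalising the resulting matrix for each $\mathbf{k}$. A 1D sketch for an assumed potential $V(x) = 2V_0 \cos(2\pi x/a)$, so that only the Fourier components $V_{\pm G_0} = V_0$ are nonzero; the parameter values and units with $\hbar = m = 1$ are assumptions. The nearly-free-electron argument predicts a gap of approximately $2V_0$ at the zone boundary:

```python
import numpy as np

hbar = m = 1.0
a_lat, V0 = 1.0, 0.2
G0 = 2 * np.pi / a_lat
n = np.arange(-5, 6)              # retained reciprocal vectors G = n * G0

def bands(k):
    # H_{GG'} = (hbar^2/2m)(k + G)^2 delta_{GG'} + V_{G-G'};
    # the assumed potential couples only adjacent G values.
    H = np.diag(hbar**2 * (k + n * G0) ** 2 / (2 * m))
    for i in range(len(n) - 1):
        H[i, i + 1] = H[i + 1, i] = V0
    return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# at the zone boundary k = pi/a the two lowest bands split by about 2*V0
eps_edge = bands(np.pi / a_lat)
gap = eps_edge[1] - eps_edge[0]
assert 0.35 < gap < 0.45
```

Retaining more reciprocal vectors improves the convergence of the higher bands; the lowest gap is already accurate with this small basis because the potential is weak.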
D.2.2 Energy bands
In the previous sections it was shown that the energy levels in a periodic system can be labelled as $\varepsilon_n(\mathbf{k})$, where $\mathbf{k}$ is a quasicontinuous vector in the FBZ and $n$ is an integer index that labels the bands. Also, we showed that the energy levels for each $\mathbf{k}$ can be obtained from a Schrödinger-like equation in the unit cell, for a Hamiltonian that depends explicitly on $\mathbf{k}$ (D.13). It is therefore expected that, except for isolated accidents, the energy levels will be continuous functions of $\mathbf{k}$ and correspond to hypersurfaces in the FBZ.
To simplify the visualisation, consider first a 1D case where the bands are simple functions of $k$. Figure D.5 presents a scheme of the band structure. Looking at the energy axis, two types of regions are clearly differentiated. Associated with each band, there is an energy interval such that, for each energy in the interval, there are one or more electronic states with this energy value. We will call these intervals the energy bands. Complementary to them, there are empty regions. In these energy gaps there are no electronic states; these energy levels are therefore forbidden.
In three dimensions the structure of the energy hypersurfaces in the FBZ is more complex, but again, when these levels are projected onto the energy axis, energy bands and gaps appear. Finally, it is also possible that two energy bands overlap, as in Exercise 8.11.
D.2.3 Bloch velocity and crystal momentum equation
Electrons in a Bloch state do not have, as we have seen, a well-defined momentum or wavevector. We can, however, compute their velocity as the average of the velocity operator,
$$\mathbf{V} = \langle \Psi_{n\mathbf{k}} | \hat{\mathbf{p}}/m | \Psi_{n\mathbf{k}} \rangle.$$
To do so, we start by computing the energy level near a given wavevector $\mathbf{k}$. Consider a small wavevector $\Delta\mathbf{k}$. The energy level can be expanded as
$$\varepsilon_n(\mathbf{k} + \Delta\mathbf{k}) = \varepsilon_n(\mathbf{k}) + \Delta\mathbf{k} \cdot \nabla_{\mathbf{k}} \varepsilon_n(\mathbf{k}) + O(\Delta k^2).$$
The energy difference can also be obtained by noting that, when using the $u$ eigenfunctions, the energies are associated with the Hamiltonians $H_{\mathbf{k}}$ and $H_{\mathbf{k}+\Delta\mathbf{k}}$, which by (D.13) differ by
$$H_{\mathbf{k}+\Delta\mathbf{k}} - H_{\mathbf{k}} = \frac{\hbar \Delta\mathbf{k} \cdot (-i\hbar\nabla + \hbar\mathbf{k})}{m} + O(\Delta k^2).$$
Then, according to time-independent perturbation theory,
$$\varepsilon_n(\mathbf{k} + \Delta\mathbf{k}) = \varepsilon_n(\mathbf{k}) + \frac{\hbar \Delta\mathbf{k}}{m} \cdot \langle u_{n\mathbf{k}} | (-i\hbar\nabla + \hbar\mathbf{k}) | u_{n\mathbf{k}} \rangle + O(\Delta k^2),$$
where the matrix element is precisely $m$ times the average velocity in the Bloch state. Comparing the two expansions, we obtain the Bloch velocity,
$$\mathbf{V}_n(\mathbf{k}) = \frac{1}{\hbar} \nabla_{\mathbf{k}} \varepsilon_n(\mathbf{k}).$$
Note that, if the electrons were free, $\varepsilon = \hbar^2 k^2/(2m)$, the Bloch velocity would be $\mathbf{V} = \hbar\mathbf{k}/m$, the known result from quantum mechanics.
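The Bloch velocity can be evaluated numerically from any dispersion relation. A sketch for an illustrative 1D tight-binding band $\varepsilon(k) = -2t\cos(ka)$; the band, parameter values, and units with $\hbar = 1$ are assumptions, not taken from the text:

```python
import numpy as np

# Assumed 1D tight-binding band eps(k) = -2 t cos(k a), in units with hbar = 1.
hbar, t_hop, a_lat = 1.0, 1.0, 1.0

def eps(k):
    return -2.0 * t_hop * np.cos(k * a_lat)

def bloch_velocity(k, dk=1e-6):
    # V = (1/hbar) d(eps)/dk, via a centred finite difference
    return (eps(k + dk) - eps(k - dk)) / (2.0 * dk * hbar)

k = 0.4
analytic = 2.0 * t_hop * a_lat * np.sin(k * a_lat) / hbar
assert abs(bloch_velocity(k) - analytic) < 1e-8
```

Note that, unlike a free particle, the velocity here is bounded: it is maximal in the middle of the band and vanishes at the band centre and edges.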
Consider a weak external force $\mathbf{F}$ acting on the electrons. The work done by the force in an interval $\Delta t$ on an electron in a Bloch state is $\Delta W = \mathbf{F} \cdot \mathbf{V} \Delta t$. This work modifies the energy of the electron, which can be written as a change in $\mathbf{k}$, $\Delta\varepsilon = \Delta\mathbf{k} \cdot \nabla_{\mathbf{k}} \varepsilon_n = \hbar \Delta\mathbf{k} \cdot \mathbf{V}$. Equating both expressions and recalling the definition of the Bloch velocity, we find the semiclassical equation of motion,
$$\hbar \frac{\mathrm{d}\mathbf{k}}{\mathrm{d}t} = \mathbf{F},$$
which is a Newton-like equation for the crystal momentum $\hbar\mathbf{k}$. Note that it is not the momentum of the particle because the Bloch states are not eigenstates of the momentum operator.
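For a constant force, integrating the crystal momentum equation gives $k(t) = k_0 + Ft/\hbar$, so in a periodic band the velocity oscillates in time (Bloch oscillations). A sketch for an illustrative 1D tight-binding band $\varepsilon(k) = -2t\cos(ka)$; all parameter values are assumptions:

```python
import numpy as np

# Assumed band eps(k) = -2 t cos(k a); its Bloch velocity is
# v(k) = (2 t a / hbar) sin(k a).
hbar, t_hop, a_lat = 1.0, 1.0, 1.0
F, k0 = 0.1, 0.0

period = 2 * np.pi * hbar / (F * a_lat)        # one Bloch period
times = np.linspace(0.0, period, 200)
k_t = k0 + F * times / hbar                    # solution of hbar dk/dt = F
v_t = 2.0 * t_hop * a_lat * np.sin(k_t * a_lat) / hbar

# over one full period k sweeps the whole FBZ and the displacement vanishes
dt = times[1] - times[0]
displacement = np.sum(v_t[:-1]) * dt
assert abs(displacement) < 1e-9
```

The zero net displacement reflects that $\mathbf{k}$ is periodic in the FBZ: a constant force produces an oscillating, not a uniformly accelerating, electron.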
D.2.4 Self-consistent potential
In the derivation of the band structure, we considered a single electron moving in the periodic potential. However, in solids, there are many electrons that interact with each other via the Coulomb potential, which is quite intense (as large as 13.6 eV for electrons separated by 1.06 Å). We note, though, that the charge density of an electron in a Bloch state is $\rho(\mathbf{r}) = -e |u_{n\mathbf{k}}(\mathbf{r})|^2 / \mathcal{V}$, which is periodic. Then, band theory makes the following first (Hartree) approximation: the periodic potential that should be considered for computing the band structure is not only the one produced by the ions and core electrons, but also includes the self-consistent potential generated by the charge density of the valence electrons. This approximation renders the problem nonlinear because the potential that enters the Schrödinger equation is quadratic in the wavefunctions. The Hartree approximation was further refined by Fock by imposing that the total wavefunction should reflect that electrons are fermions and, therefore, must be antisymmetric under the exchange of any pair of electrons. Under this Hartree–Fock approximation, there are no direct electron–electron interactions; rather, electrons interact via the self-consistent potential, similarly to what happens in the Vlasov model for plasmas, studied in Chapter 6.
The self-consistent approach, although very successful, is still an approximation and does not fully consider the quantum correlations of the interacting electrons. A more formal approach is the theory of Fermi liquids developed by Landau. There, the relevant objects of study, instead of the bare electrons, are the excitations that develop in the complete system. These excitations behave as quasiparticles that follow Fermi–Dirac statistics and carry charge, but have finite lifetime. We will not use this approach here, as it lies beyond the scope of this book. Interested readers are directed to the books by Lifshitz and Pitaevskii (1980) and Kadanoff and Baym (1962).
D.3 Density of states
In the thermodynamic limit, the vectors in reciprocal space become dense, and instead of treating them separately it is useful to group them according to their energies. The density of states $g(\varepsilon)$ is defined such that $g(\varepsilon)\,\mathrm{d}\varepsilon$ is the number of quantum states with energies between $\varepsilon$ and $\varepsilon + \mathrm{d}\varepsilon$, divided by the volume $\mathcal{V}$. It is an intensive quantity that provides information on how the bands are arranged.
The density of states depends on the band structure, which in turn depends on the ionic potential and the geometry of the unit cell. In the next paragraphs, some examples are given to illustrate the general features of g.
D.3.1 Free electron gas
The first model to consider is the free electron gas, where there is no potential and the energy levels are
$$\varepsilon_{\mathbf{k}} = \frac{\hbar^2 k^2}{2m},$$
where the wavevectors are given by $\mathbf{k} = \frac{2\pi}{L}(n_x, n_y, n_z)$, with $n_i \in \mathbb{Z}$. Here, it is convenient to first obtain $N(\varepsilon)$, equal to the number of states with energy equal to or less than $\varepsilon$, which results as twice (due to the spin degeneracy) the number of wavevectors with $\hbar^2 k^2/(2m) \le \varepsilon$. As the spacing between successive wavevectors, $2\pi/L$, is small, the number of states bounded by $\varepsilon$ can be accurately approximated by an integral expression,
$$N(\varepsilon) = \frac{2\mathcal{V}}{(2\pi)^3} \int_{\hbar^2 k^2/(2m) \le \varepsilon} \mathrm{d}^3 k = \frac{\mathcal{V}}{3\pi^2} \left( \frac{2m\varepsilon}{\hbar^2} \right)^{3/2}.$$
Computing $g(\varepsilon) = \frac{1}{\mathcal{V}} \frac{\mathrm{d}N}{\mathrm{d}\varepsilon}$, we obtain the density of states for a free electron gas in three dimensions as
$$g(\varepsilon) = \frac{1}{2\pi^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \sqrt{\varepsilon}.$$
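The integral approximation for $N(\varepsilon)$ can be tested by directly counting wavevectors on a finite grid. A sketch in units with $\hbar = m = 1$ (the box size and cutoff are illustrative assumptions):

```python
import numpy as np

hbar = m = 1.0
L = 100.0                        # box side; larger L gives a denser k grid
n_max = 25
n = np.arange(-n_max, n_max + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")

# free-particle levels eps_k = hbar^2 k^2 / (2m) with k = 2*pi*n/L
k2 = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
eps_levels = hbar**2 * k2 / (2 * m)

eps = 1.0
# N(eps)/V by direct counting, including the spin degeneracy 2
N_counted = 2 * np.count_nonzero(eps_levels <= eps) / L**3
# analytic result N(eps)/V = (1/(3 pi^2)) (2 m eps / hbar^2)^{3/2}
N_analytic = (1 / (3 * np.pi**2)) * (2 * m * eps / hbar**2) ** 1.5
assert abs(N_counted - N_analytic) / N_analytic < 0.03
```

The residual discrepancy is a surface effect of the discrete grid and shrinks as $L$ grows, consistent with the thermodynamic-limit argument in the text.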
In many technological applications, electrons are confined in one or two dimensions, while moving freely in the others, resulting in quasi-2D or quasi-1D systems. The density of states of such confined systems presents singularities at the excitation energies of the transverse modes, as shown in Fig. D.6.
D.3.2 General case in three dimensions
In the general case, the density of states will depend on the band structure. The bands are limited by the maxima or minima of the dispersion relation, as shown in Fig. D.5. Close to these extreme values, one can approximate the bands parabolically as $\varepsilon(\mathbf{k}) = \varepsilon_0 \pm \frac{\hbar^2 |\mathbf{k} - \mathbf{k}_0|^2}{2m^*}$, where the upper sign corresponds to a minimum and the lower sign to a maximum. Locally, the dispersion relations are similar to those of free particles, where $m^*$ is called the effective mass. In a bandgap, the density of states vanishes (Fig. D.7), and at the band borders the density of states presents the characteristic van Hove singularities of free particles of the form $g \sim \sqrt{\varepsilon - \varepsilon_0}$ for a band that starts at $\varepsilon_0$, with an analogous expression when a band terminates.
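The effective mass is the inverse curvature of the band, $m^* = \hbar^2 / (\mathrm{d}^2\varepsilon/\mathrm{d}k^2)$ in 1D. A sketch for an assumed tight-binding band $\varepsilon(k) = -2t\cos(ka)$, whose exact band-bottom effective mass is $\hbar^2/(2ta^2)$; parameter values and units with $\hbar = 1$ are assumptions:

```python
import numpy as np

hbar, t_hop, a_lat = 1.0, 1.0, 1.0

def eps(k):
    # assumed 1D tight-binding band with minimum at k = 0
    return -2.0 * t_hop * np.cos(k * a_lat)

dk = 1e-4
# second derivative at the band bottom via a centred finite difference
curvature = (eps(dk) - 2.0 * eps(0.0) + eps(-dk)) / dk**2
m_eff = hbar**2 / curvature
assert abs(m_eff - hbar**2 / (2.0 * t_hop * a_lat**2)) < 1e-6
```

A narrow band (small $t$) gives a large effective mass, i.e. electrons that respond sluggishly to applied forces.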
On many occasions, one needs to sum over the states of the system, for example in Section 8.2 when computing the electrical conductivity. Using the density of states, this sum can be cast into an integral. Also, in the thermodynamic limit, we can integrate in $\mathbf{k}$ over the FBZ as well. Recalling that the distance between successive vectors is $2\pi/L$, the number of vectors in a volume $\mathrm{d}^3 k$ of reciprocal space is $\mathcal{V}\,\mathrm{d}^3 k/(2\pi)^3$. Combining these two results, we obtain $\frac{1}{\mathcal{V}} \sum_{\mathbf{k}} \to \int \frac{\mathrm{d}^3 k}{(2\pi)^3}$, which must be multiplied by the spin degeneracy and summed over the bands. This discussion can be summarised in the following scheme:
$$\sum_{\text{states}} f \to 2 \sum_n \mathcal{V} \int_{\text{FBZ}} \frac{\mathrm{d}^3 k}{(2\pi)^3} f(\varepsilon_n(\mathbf{k})) = \mathcal{V} \int g(\varepsilon) f(\varepsilon)\, \mathrm{d}\varepsilon.$$
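The sum-to-integral rule can be verified numerically on a discrete $\mathbf{k}$ grid. A sketch using the illustrative choice $f(\mathbf{k}) = e^{-k^2}$, whose integral over $\mathrm{d}^3 k$ is $\pi^{3/2}$; the box size and cutoff are assumptions:

```python
import numpy as np

L = 20.0
dk = 2 * np.pi / L                     # spacing between successive wavevectors
n = np.arange(-30, 31)
kx, ky, kz = np.meshgrid(n * dk, n * dk, n * dk, indexing="ij")
f = np.exp(-(kx**2 + ky**2 + kz**2))   # illustrative smooth function of k

# left side: (1/V) sum over the discrete k grid
sum_per_volume = f.sum() / L**3
# right side: int d^3k/(2 pi)^3 exp(-k^2) = pi^{3/2}/(2 pi)^3
integral = np.pi**1.5 / (2 * np.pi) ** 3
assert abs(sum_per_volume - integral) / integral < 1e-3
```

The cutoff must only be large enough that $f$ has decayed; the agreement then improves rapidly as $L$ (and hence the grid density) increases.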
Exercises

(D.1) Density of states in one and two dimensions. Following the same procedure as in Section D.3, derive the density of states for free electrons moving in one and two dimensions.
(D.2) Density of states in quasi-1D and quasi-2D systems. In many technological applications, electrons are confined in one or two dimensions, while moving freely in the others, resulting in quasi-2D or quasi-1D systems, Fig. D.8.
In quasi-2D systems the electrons move in a shallow box of dimensions $L_x \times L_y \times a$, where $L_x$ and $L_y$ are large (tending to infinity in the thermodynamic limit), while $a$ is finite. In quasi-1D systems the box dimensions are $L_x \times a \times a$, with only $L_x$ large. The box is periodic in the long directions, while fixed boundary conditions should be considered in the confining directions.
Compute the density of states in both cases and show that, if the confinement length tends to infinity, the three-dimensional density of states is recovered.
(1) Indeed, using this representation, we have $U(\mathbf{r} + \mathbf{R}) = \sum_{\mathbf{G}} U_{\mathbf{G}} e^{i\mathbf{G}\cdot\mathbf{r}} e^{i\mathbf{G}\cdot\mathbf{R}}$. However, $e^{i\mathbf{G}\cdot\mathbf{R}} = 1$, with $\mathbf{G}\cdot\mathbf{R} = 2\pi \times \text{integer}$ by the biorthogonality condition, proving that $U$ is periodic.
(2) Again, this is not strictly necessary, but it simplifies the calculation enormously. This will allow us to use exponential Fourier functions instead of sine and cosine functions. The final results are nevertheless the same if fixed boundary conditions are used.
(3) This is the bare electron mass $m$. Later we will see that, in some cases, electrons move in a solid as if they had an effective mass, different from this value.
(4) To be precise, since the vectors are quasicontinuous, a small energy tolerance must be accepted; that is, for each energy value in a band, there will be an electronic state with an energy sufficiently close to it.
(5) Take a Hamiltonian $H_0$ with normalised eigenstates $|n\rangle$ satisfying $H_0 |n\rangle = \varepsilon_n |n\rangle$. For the perturbed Hamiltonian, $H = H_0 + \Delta H$, the energy levels are
$$\varepsilon_n' = \varepsilon_n + \langle n | \Delta H | n \rangle + O(\Delta H^2).$$
(7) The case with the minus sign becomes relevant in semiconductors, where $m^*$ is associated with the mass of holes.