## Oliver Johns

Print publication date: 2005

Print ISBN-13: 9780198567264

Published to Oxford Scholarship Online: January 2010

DOI: 10.1093/acprof:oso/9780198567264.001.0001


# Appendix C Eigenvalue Problem With General Metric

Source: *Analytical Mechanics for Relativity and Quantum Mechanics*
Publisher: Oxford University Press

The theory of small vibrations in Chapter 10 requires a generalization of the matrix eigenvalue methods of Appendix B. The generalized eigenvalue equation is of the form

(C.1)
$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}]$$

where the column vector $[z^{(k)}]$ is an eigenvector of matrix A with eigenvalue $\theta_k$. The only difference between this equation and the standard eigenvalue expression in eqn (B.81) is the presence of a positive-definite matrix g on the right side of eqn (C.1). The matrix g also serves as a metric, allowing a generalization of the inner product of two column vectors.
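As a numerical aside, an equation of the form (C.1) can be solved directly with SciPy's symmetric eigensolver, which accepts the metric as a second argument. The matrices below are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative matrices: A real symmetric, g real symmetric positive definite.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# eigh(A, g) solves A [z] = theta g [z]; eigenvalues are returned in
# ascending order and the columns of Z are the eigenvectors [z^(k)].
theta, Z = eigh(A, g)

# Each column satisfies the generalized eigenvalue equation.
for k in range(len(theta)):
    assert np.allclose(A @ Z[:, k], theta[k] * g @ Z[:, k])
```

SciPy additionally normalizes the eigenvectors so that `Z.T @ g @ Z` is the unit matrix, which is exactly the generalized orthonormality discussed later in this appendix.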

# C.1 Positive-Definite Matrices

The real, symmetric N × N matrix g is called a positive definite matrix if, for any real column vector [x], the condition [x] ≠ [0] implies that

(C.2)
$$[x]^{\mathrm{T}}\, g\, [x] > 0$$

## Lemma C.1.1: Eigenvalues of a Positive-Definite Matrix

A real symmetric matrix is positive definite if and only if all of its eigenvalues are nonzero, positive numbers. Such a matrix is nonsingular.

Proof: Suppose that the matrix g is positive definite and has eigenvalues $\gamma_k$ and eigenvectors $[y^{(k)}]$. If we choose the arbitrary column vector in eqn (C.2) to be the kth normalized eigenvector of g, then

(C.3)
$$0 < [y^{(k)}]^{\mathrm{T}}\, g\, [y^{(k)}] = \gamma_k\, [y^{(k)}]^{\mathrm{T}}\, [y^{(k)}] = \gamma_k$$

which shows that $\gamma_k$ cannot be zero or negative.

Conversely, assume all $\gamma_k > 0$ and let [x] be an arbitrary non-null column vector. Since a real symmetric matrix g is a normal matrix, it follows from eqn (B.121) and the orthonormality, and hence completeness, of its eigenvectors $[y^{(k)}]$ that

(C.4)
$$[x]^{\mathrm{T}}\, g\, [x] = \sum_{k=1}^{N} \gamma_k \left( [y^{(k)}] \cdot [x] \right)^2 > 0$$

as was to be proved.

If a real symmetric matrix is positive definite, it follows from the just-proved positivity of its eigenvalues and from Theorem B.27.1 that $|g| = \gamma_1 \gamma_2 \cdots \gamma_N > 0$. Therefore the matrix is nonsingular. ⎸

Note that the condition |g| > 0 is a necessary but not a sufficient condition for g to be positive definite. For example, a 4 × 4 diagonal matrix with diagonal elements (1, −1, 1, −1) has a positive determinant but is not a positive definite matrix.
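The eigenvalue criterion of Lemma C.1.1 translates directly into a numerical test, and the counterexample above is easy to check. The helper function name below is an illustrative assumption.

```python
import numpy as np

def is_positive_definite(g):
    """Lemma C.1.1: a real symmetric matrix is positive definite
    iff all of its eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(g) > 0.0))

# |g| > 0 is necessary but not sufficient: this matrix has determinant +1
# yet two negative eigenvalues, so it is not positive definite.
g_bad = np.diag([1.0, -1.0, 1.0, -1.0])

assert np.isclose(np.linalg.det(g_bad), 1.0)
assert not is_positive_definite(g_bad)
assert is_positive_definite(np.eye(4))
```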

Since the $\gamma_k$ are all positive, we can define the following real, symmetric matrices,

(C.5)
$$g = \sum_{k=1}^{N} \gamma_k\, [y^{(k)}]\, [y^{(k)}]^{\mathrm{T}}$$
(C.6)
$$g^{1/2} = \sum_{k=1}^{N} \gamma_k^{1/2}\, [y^{(k)}]\, [y^{(k)}]^{\mathrm{T}}$$
(C.7)
$$g^{-1/2} = \sum_{k=1}^{N} \gamma_k^{-1/2}\, [y^{(k)}]\, [y^{(k)}]^{\mathrm{T}}$$
(C.8)
$$g^{-1} = \sum_{k=1}^{N} \gamma_k^{-1}\, [y^{(k)}]\, [y^{(k)}]^{\mathrm{T}}$$

The first of these, eqn (C.5), is just an application of the dyadic eqn (B.121) to the matrix g. The others are defined by analogy. By construction, these matrices have the following properties,

(C.9)
$$g^{1/2}\, g^{1/2} = g \qquad g^{-1/2}\, g^{-1/2} = g^{-1} \qquad g^{1/2}\, g^{-1/2} = U = g^{-1/2}\, g^{1/2} \qquad g\, g^{-1} = U = g^{-1}\, g$$
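The spectral construction of eqns (C.5–C.8) can be sketched numerically: diagonalize g, take the desired power of the eigenvalues, and reassemble. The function name and test matrix are illustrative assumptions.

```python
import numpy as np

def sym_power(g, p):
    """Power of a positive-definite symmetric matrix, built from its
    spectral decomposition g = sum_k gamma_k [y^(k)][y^(k)]^T."""
    gamma, Y = np.linalg.eigh(g)      # columns of Y: orthonormal eigenvectors
    return Y @ np.diag(gamma ** p) @ Y.T

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 1 and 3, positive definite

g_half = sym_power(g, 0.5)
g_minus_half = sym_power(g, -0.5)

# The properties collected in eqn (C.9):
assert np.allclose(g_half @ g_half, g)
assert np.allclose(g_minus_half @ g_minus_half, np.linalg.inv(g))
assert np.allclose(g_half @ g_minus_half, np.eye(2))
```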

# C.2 Generalization of the Real Inner Product

If g is an M-rowed, real, symmetric, positive-definite matrix, a generalized inner product of two M × 1 column vectors may be defined by

(C.10)
$$[x] \bullet [y] = [x]^{\mathrm{T}}\, g\, [y]$$

This inner product has properties similar to those of the ordinary real inner product in Section B.20,

(C.11)
$$[x] \bullet [y] = [y] \bullet [x] \qquad\text{and}\qquad [x] \bullet [x] > 0 \;\;\text{for}\;\; [x] \neq [0]$$

This generalized inner product also has other properties similar to those in Section B.20. If a set of M × 1 vectors $[V^{(1)}], [V^{(2)}], \ldots, [V^{(M)}]$ is orthonormal in the generalized sense,

(C.12)
$$[V^{(k)}] \bullet [V^{(l)}] = \delta_{kl}$$

for all k, l = 1, …, M, then that set is LI and forms a basis for the space of M × 1 vectors. Any vector [V] can be expanded as

(C.13)
$$[V] = \sum_{k=1}^{M} V_k\, [V^{(k)}]$$

where the components are, for k = 1, …, M, given by

(C.14)
$$V_k = [V^{(k)}] \bullet [V]$$

If a set of vectors $[V^{(1)}], [V^{(2)}], \ldots, [V^{(N)}]$ is initially LI but not mutually orthogonal, a mutually orthogonal set $[W^{(1)}], [W^{(2)}], \ldots, [W^{(N)}]$ can be found by a generalization of the Schmidt orthogonalization procedure outlined in Section B.20,

(C.15)
$$[W^{(1)}] = [V^{(1)}] \qquad\quad [W^{(2)}] = [V^{(2)}] - \frac{[W^{(1)}] \bullet [V^{(2)}]}{[W^{(1)}] \bullet [W^{(1)}]}\, [W^{(1)}]$$

and so on, following the pattern of eqn (B.69), but with the ordinary inner product "·" replaced by the generalized one "•" throughout. The vectors can then be normalized, again using the generalized inner product, so that they become a generalized orthonormal set obeying $[W^{(i)}] \bullet [W^{(j)}] = \delta_{ij}$ for all i, j values.
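The generalized Schmidt procedure can be sketched in a few lines: project each vector against the already-orthonormalized ones using the metric inner product, then normalize. The function names and matrices here are illustrative assumptions.

```python
import numpy as np

def metric_dot(g, x, y):
    """Generalized inner product [x] . [y] = [x]^T g [y] of eqn (C.10)."""
    return x @ g @ y

def gram_schmidt_metric(V, g):
    """Orthogonalize the columns of V in the g-inner product (eqn C.15),
    normalizing each so that W^(i) . W^(j) = delta_ij."""
    W = []
    for v in V.T:                                 # iterate over columns of V
        w = v.copy()
        for u in W:
            w = w - metric_dot(g, u, v) * u       # u is already g-normalized
        w = w / np.sqrt(metric_dot(g, w, w))
        W.append(w)
    return np.column_stack(W)

g = np.array([[2.0, 0.0],
              [0.0, 1.0]])
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])                        # LI but not g-orthogonal

W = gram_schmidt_metric(V, g)
assert np.allclose(W.T @ g @ W, np.eye(2))        # generalized orthonormal set
```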

# C.3 The Generalized Eigenvalue Problem

In the Lagrangian theory of small vibrations, we are asked to solve a generalized eigenvalue problem: to find eigenvectors $[z^{(k)}]$ and eigenvalues $\theta_k$ that are solutions of

(C.16)
$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}]$$

where A is a real, symmetric matrix, and g is a real, symmetric, positive-definite matrix. This equation can be rewritten as

(C.17)
$$\left( A - \theta_k\, g \right) [z^{(k)}] = [0]$$

and the eigenvalues found from

(C.18)
$$\det\left( A - \theta\, g \right) = 0$$

These equations differ from the standard eigenvalue equations of Section B.23 only by the replacement of the unit matrix U by the positive-definite matrix g.

Before stating and proving the main theorem, we present a preliminary result.

## Lemma C.3.1: Transformed Eigenvector Problem

Equation (C.16) is true if and only if

(C.19)
$$B\,[x^{(k)}] = \theta_k\,[x^{(k)}]$$

where

(C.20)
$$[x^{(k)}] = g^{1/2}\,[z^{(k)}] \qquad [z^{(k)}] = g^{-1/2}\,[x^{(k)}] \qquad B = g^{-1/2}\, A\, g^{-1/2}$$

and the definitions in Section C.2 have been used for $g^{1/2}$ and $g^{-1/2}$.

Proof: Substituting the first and last of eqn (C.20) into eqn (C.19) gives

(C.21)
$$g^{-1/2}\, A\, g^{-1/2}\, g^{1/2}\,[z^{(k)}] = \theta_k\, g^{1/2}\,[z^{(k)}]$$

Then pre-multiplying both sides by $g^{1/2}$ and using eqn (C.9) gives eqn (C.16). Conversely, substituting the second of eqn (C.20) into eqn (C.16) gives

(C.22)
$$A\, g^{-1/2}\,[x^{(k)}] = \theta_k\, g\, g^{-1/2}\,[x^{(k)}]$$

Pre-multiplying both sides by $g^{-1/2}$ and using eqn (C.9) then gives eqn (C.19). Thus the two equations are equivalent, as was to be proved. ⎸
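The reduction in Lemma C.3.1 is also a practical numerical recipe: form B, solve the ordinary symmetric problem, and map the eigenvectors back with $g^{-1/2}$. The matrices below are illustrative assumptions.

```python
import numpy as np

# Illustrative matrices (not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# g^{-1/2} from the spectral decomposition, as in eqn (C.7).
gamma, Y = np.linalg.eigh(g)
g_mhalf = Y @ np.diag(gamma ** -0.5) @ Y.T

# Transformed matrix B = g^{-1/2} A g^{-1/2}: an ordinary symmetric problem.
B = g_mhalf @ A @ g_mhalf
theta, X = np.linalg.eigh(B)          # B x = theta x, as in eqn (C.19)

# Each [z^(k)] = g^{-1/2} [x^(k)] solves the generalized problem (C.16).
Z = g_mhalf @ X
for k in range(len(theta)):
    assert np.allclose(A @ Z[:, k], theta[k] * g @ Z[:, k])
```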

We now state the main theorem.

## Theorem C.3.2: Generalized Eigenvector Theorem

If A is an N-rowed, real, symmetric matrix and g is a real, symmetric, positive-definite matrix of the same size, then the eigenvalue equation

(C.23)
$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}]$$

has N real eigenvalues $\theta_1, \theta_2, \ldots, \theta_N$, and N real eigenvectors $[z^{(1)}], [z^{(2)}], \ldots, [z^{(N)}]$ that are normalized and mutually orthogonal according to the generalized inner product of Section C.2,

(C.24)
$$[z^{(k)}] \bullet [z^{(l)}] = [z^{(k)}]^{\mathrm{T}}\, g\, [z^{(l)}] = \delta_{kl}$$

Proof: Since the matrix B defined in the last of eqn (C.20) is real and symmetric, we know from Theorem B.24.3 that it has N real eigenvalues and N real eigenvectors $[x^{(k)}]$ that obey the ordinary definition of orthonormality $[x^{(k)}] \cdot [x^{(l)}] = \delta_{kl}$. And the above Lemma C.3.1 proves that, for each of these $[x^{(k)}]$, the vector $[z^{(k)}]$ defined in the second of eqn (C.20) is a generalized eigenvector of matrix A obeying eqn (C.23). Thus there are N generalized eigenvectors. It only remains to investigate their generalized orthogonality.

Substituting the first of eqn (C.20) into $[x^{(k)}] \cdot [x^{(l)}] = \delta_{kl}$ gives

(C.25)
$$\delta_{kl} = \left( g^{1/2}\,[z^{(k)}] \right)^{\mathrm{T}} \left( g^{1/2}\,[z^{(l)}] \right) = [z^{(k)}]^{\mathrm{T}}\, g\, [z^{(l)}]$$

which establishes eqn (C.24). Thus the theorem is proved. ⎸

# C.4 Finding Eigenvectors in the Generalized Problem

We now know that a real, symmetric matrix A has N generalized eigenvectors. To find them, the procedure is similar to the ordinary eigenvector solution. Written out, eqn (C.18) is

(C.26)
$$\begin{vmatrix} A_{11} - \theta\, g_{11} & \cdots & A_{1N} - \theta\, g_{1N} \\ \vdots & \ddots & \vdots \\ A_{N1} - \theta\, g_{N1} & \cdots & A_{NN} - \theta\, g_{NN} \end{vmatrix} = 0$$
which can be solved for θ1, θ2, …, θN. We know from Section C.3 that these eigenvalues will all be real.

The eigenvector(s) corresponding to a particular eigenvalue are found from eqn (C.17), which may be written as

(C.27)
$$\sum_{j=1}^{N} \left( A_{ij} - \theta_k\, g_{ij} \right) z_j^{(k)} = 0 \qquad\text{for } i = 1, \ldots, N$$

Just as for the ordinary eigenvector solution, if the eigenvalue is unique, then these equations can be solved for a unique set of ratios $z_i^{(k)}/z_1^{(k)}$. The value of $z_1^{(k)}$ can then be obtained from the normalization condition,

(C.28)
$$[z^{(k)}] \bullet [z^{(k)}] = [z^{(k)}]^{\mathrm{T}}\, g\, [z^{(k)}] = 1$$

If the eigenvalue is a multiple root of degeneracy κ, then there will be κ LI solutions of eqn (C.27). These can be made orthogonal in the generalized sense by using the generalized Schmidt orthogonalization procedure outlined in eqn (C.15). The resulting set of eigenvector solutions will then obey the orthonormality condition eqn (C.24).

# C.5 Uses of the Generalized Eigenvectors

The main use of the generalized eigenvalue problem is simultaneously to reduce the matrix A to a diagonal matrix, and the matrix g to the unit matrix. Let us define a matrix C whose kth column is the kth eigenvector from the generalized eigenvalue problem of Section C.3,

(C.29)
$$C = \bigl[\; [z^{(1)}]\;\; [z^{(2)}]\; \cdots\; [z^{(N)}]\; \bigr]$$
so that
(C.30)
$$C_{jk} = z_j^{(k)}$$

## Theorem C.5.1: Reduction to Diagonal Form

Let U be the unit matrix, and define F to be a diagonal matrix whose diagonal elements are the eigenvalues of the generalized eigenvalue problem of Section C.3,

(C.31)
$$F_{kl} = \theta_k\, \delta_{kl} \qquad\text{(no sum on } k\text{)}$$
With C the matrix defined in eqn (C.29), it follows that
(C.32)
$$C^{\mathrm{T}}\, g\, C = U \qquad\text{and}\qquad C^{\mathrm{T}}\, A\, C = F$$

Proof: To prove the first of eqn (C.32), use eqn (C.29) to write eqn (C.24) as

(C.33)
$$\left( C^{\mathrm{T}}\, g\, C \right)_{kl} = [z^{(k)}]^{\mathrm{T}}\, g\, [z^{(l)}] = \delta_{kl}$$

Thus $C^{\mathrm{T}} g\, C$ has the same matrix elements as the unit matrix, $U_{kl} = \delta_{kl}$, and so the two are equal, as was to be proved.

To prove the second of eqn (C.32), replace k by l in eqn (C.23) and then multiply both sides of it from the left by $[z^{(k)}]^{\mathrm{T}}$ to obtain

(C.34)
$$[z^{(k)}]^{\mathrm{T}} A\, [z^{(l)}] = \theta_l\, [z^{(k)}]^{\mathrm{T}}\, g\, [z^{(l)}] = \theta_l\, \delta_{kl}$$

Thus

(C.35)
$$\left( C^{\mathrm{T}} A\, C \right)_{kl} = \theta_l\, \delta_{kl} = F_{kl}$$

holds for every value of k, l and so the two matrices are equal, as was to be proved. ⎸
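The simultaneous reduction of Theorem C.5.1 can be verified numerically: SciPy's generalized symmetric eigensolver returns eigenvectors already normalized in the g-inner product, so the matrix of eigenvectors satisfies both relations of eqn (C.32) directly. The matrices below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # real symmetric (illustrative)
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])              # positive-definite metric

# Columns of C are the generalized eigenvectors [z^(k)] of Section C.3.
theta, C = eigh(A, g)

# The simultaneous reduction of eqn (C.32):
assert np.allclose(C.T @ g @ C, np.eye(2))        # C^T g C = U
assert np.allclose(C.T @ A @ C, np.diag(theta))   # C^T A C = F
```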