# Appendix C Eigenvalue Problem With General Metric

The theory of small vibrations in Chapter 10 requires a generalization of the matrix eigenvalue methods of Appendix B. The generalized eigenvalue equation is of the form

$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}] \tag{C.1}$$

where the column vector [*z*^{(k)}] is an eigenvector of matrix A with eigenvalue θ_{k}. The only difference between this equation and the standard eigenvalue expression in eqn (B.81) is the presence of a positive-definite matrix g on the right side of eqn (C.1). The matrix g also serves as a metric, allowing a generalization of the inner product of two column vectors.

# C.1 Positive-Definite Matrices

The real, symmetric *N* × *N* matrix g is called a *positive definite matrix* if, for any real column vector [*x*],

$$[x] \neq 0 \quad \text{implies that} \quad [x]^T g\,[x] > 0 \tag{C.2}$$

## Lemma C.1.1: Eigenvalues of a Positive-Definite Matrix

*A real symmetric matrix is positive definite if and only if all of its eigenvalues are nonzero, positive numbers. Such a matrix is nonsingular*.

Proof: Suppose that the matrix g is positive definite and has eigenvalues γ_{k} and eigenvectors [*y*^{(k)}]. If we choose the arbitrary column vector in eqn (C.2) to be the *k*th normalized eigenvector of g, then

$$0 < [y^{(k)}]^T g\,[y^{(k)}] = \gamma_k\, [y^{(k)}]^T [y^{(k)}] = \gamma_k \tag{C.3}$$

which shows that γ_{k} cannot be zero or negative.

Conversely, assume all γ_{k} > 0 and let [*x*] be an arbitrary non-null column vector. Since a real symmetric matrix g is a normal matrix, it follows from eqn (B.121) and the orthonormality and hence completeness of its eigenvectors [*y*^{(k)}] that

$$[x]^T g\,[x] = \sum_{k=1}^{N} \gamma_k \left( [y^{(k)}]^T [x] \right)^2 > 0 \tag{C.4}$$

where the inequality is strict because at least one expansion coefficient [*y*^{(k)}]^{T}[*x*] of the non-null vector [*x*] must be nonzero. Thus g is positive definite.

If a real symmetric matrix is positive definite, it follows from the just-proved positive definiteness of its eigenvalues and from Theorem B.27.1 that |g|= γ_{1} … γ_{N} > 0.

Therefore the matrix is nonsingular. ⎸

Note that the condition |g| > 0 is a necessary but not a sufficient condition for g to be positive definite. For example, a 4 × 4 diagonal matrix with diagonal elements (1, −1, 1, −1) has a positive determinant but is not a positive definite matrix.
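The eigenvalue criterion of Lemma C.1.1, and the fact that a positive determinant does not suffice, can be checked numerically. The following is an editorial sketch using NumPy (the matrices are arbitrary examples, not from the text):

```python
import numpy as np

# Lemma C.1.1: a real symmetric matrix is positive definite
# if and only if all of its eigenvalues are strictly positive.
def is_positive_definite(g):
    """Test positive definiteness of a real symmetric matrix via its eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(g)  # real eigenvalues of a symmetric matrix
    return bool(np.all(eigenvalues > 0))

# |g| > 0 is necessary but not sufficient: the diagonal matrix with
# entries (1, -1, 1, -1) has determinant +1 yet is not positive definite.
g_bad = np.diag([1.0, -1.0, 1.0, -1.0])
assert np.linalg.det(g_bad) > 0
assert not is_positive_definite(g_bad)

# A matrix with all-positive eigenvalues passes the test.
g_good = np.diag([1.0, 2.0, 3.0, 4.0])
assert is_positive_definite(g_good)
```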

Since the γ_{k} are all positive, we can define the following real, symmetric matrices,

$$g = \sum_{k=1}^{N} \gamma_k\, [y^{(k)}]\,[y^{(k)}]^T \tag{C.5}$$

$$g^{1/2} = \sum_{k=1}^{N} \gamma_k^{1/2}\, [y^{(k)}]\,[y^{(k)}]^T \tag{C.6}$$

$$g^{-1/2} = \sum_{k=1}^{N} \gamma_k^{-1/2}\, [y^{(k)}]\,[y^{(k)}]^T \tag{C.7}$$

The first of these, eqn (C.5), is just an application of the dyadic eqn (B.121) to the matrix g. The others are defined by analogy. By construction, these matrices have the following properties,

$$g^{1/2}\, g^{1/2} = g \qquad \text{and} \qquad g^{-1/2}\, g^{-1/2} = g^{-1} \tag{C.8}$$

$$g^{1/2}\, g^{-1/2} = g^{-1/2}\, g^{1/2} = U \tag{C.9}$$
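The dyadic constructions of eqns (C.6) and (C.7), and the properties (C.8) and (C.9), can be verified numerically. A minimal sketch, assuming NumPy and an arbitrarily generated positive-definite g:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary real, symmetric, positive-definite metric g.
M = rng.standard_normal((4, 4))
g = M @ M.T + 0.1 * np.eye(4)

# Eigenvalues gamma_k and orthonormal eigenvectors y^(k) of g.
gamma, Y = np.linalg.eigh(g)

# Dyadic construction of g^(1/2) and g^(-1/2), eqns (C.6) and (C.7):
# sum over k of gamma_k^(+-1/2) [y^(k)][y^(k)]^T.
g_half = Y @ np.diag(np.sqrt(gamma)) @ Y.T
g_minus_half = Y @ np.diag(1.0 / np.sqrt(gamma)) @ Y.T

# Properties (C.8) and (C.9).
U = np.eye(4)
assert np.allclose(g_half @ g_half, g)
assert np.allclose(g_minus_half @ g_minus_half, np.linalg.inv(g))
assert np.allclose(g_half @ g_minus_half, U)
assert np.allclose(g_minus_half @ g_half, U)
```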

# C.2 Generalization of the Real Inner Product

If g is an *M*-rowed, real, symmetric, positive-definite matrix, a generalized inner product of two *M* × 1 column vectors may be defined by

$$[x] \bullet [y] = [x]^T g\,[y] \tag{C.10}$$

This inner product has properties similar to those of the ordinary real inner product in Section B.20, in that

$$[x] \bullet [y] = [y] \bullet [x] \qquad \text{and} \qquad [x] \bullet [x] > 0 \;\;\text{for any non-null } [x] \tag{C.11}$$

This generalized inner product also has other properties similar to those in Section B.20. If a set of *M* × 1 vectors [*V*^{(1)}], [*V*^{(2)}], …, [*V*^{(M)}] is orthonormal in the generalized sense,

$$[V^{(k)}] \bullet [V^{(l)}] = \delta_{kl} \tag{C.12}$$

for all *k*, *l* = 1, …, *M*, then that set is LI and forms a basis for the space of *M* × 1 vectors. Any vector [*V*] can be expanded as

$$[V] = \sum_{k=1}^{M} V_k\, [V^{(k)}] \tag{C.13}$$

with components *V*_{k}, for *k* = 1, …, *M*, given by

$$V_k = [V^{(k)}] \bullet [V] \tag{C.14}$$

If a set of vectors [*V*^{(1)}], [*V*^{(2)}], …, [*V*^{(N)}] is initially LI but not orthogonal, a mutually orthogonal set [*W*^{(1)}], [*W*^{(2)}], …, [*W*^{(N)}] can be found by a generalization of the Schmidt orthogonalization procedure outlined in Section B.20,

$$[W^{(1)}] = [V^{(1)}] \qquad \text{and} \qquad [W^{(n)}] = [V^{(n)}] - \sum_{j=1}^{n-1} \frac{[W^{(j)}] \bullet [V^{(n)}]}{[W^{(j)}] \bullet [W^{(j)}]}\,[W^{(j)}] \quad \text{for } n = 2, \ldots, N \tag{C.15}$$

The resulting vectors may then be normalized so that [*W*^{(i)}] • [*W*^{(j)}] = δ_{ij} for all *i*, *j* values.
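The generalized Schmidt procedure can be sketched in a few lines of NumPy. This is an editorial illustration (the diagonal metric and input vectors are arbitrary examples), combining orthogonalization and the subsequent normalization:

```python
import numpy as np

def generalized_inner(x, y, g):
    """Generalized inner product [x] . [y] = [x]^T g [y] of eqn (C.10)."""
    return float(x @ g @ y)

def generalized_schmidt(vectors, g):
    """Schmidt orthogonalization with respect to the metric g, as in eqn (C.15),
    followed by normalization so that [W^(i)] . [W^(j)] = delta_ij."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            # Subtract the projection onto each previously built (unit) vector.
            w -= generalized_inner(u, w, g) * u
        # Normalize in the g metric.
        w /= np.sqrt(generalized_inner(w, w, g))
        basis.append(w)
    return basis

g = np.diag([1.0, 2.0, 3.0])  # an example positive-definite metric
vs = [np.array([1.0, 0.0, 0.0]),
      np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 1.0])]  # LI but not orthogonal
ws = generalized_schmidt(vs, g)

# Verify generalized orthonormality.
for i, wi in enumerate(ws):
    for j, wj in enumerate(ws):
        assert np.isclose(generalized_inner(wi, wj, g), float(i == j))
```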

# C.3 The Generalized Eigenvalue Problem

In the Lagrangian theory of small vibrations, we are asked to solve a generalized eigenvalue problem, to find eigenvectors [*z*^{(k)}] and eigenvalues θ_{k} that are solutions of

$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}] \tag{C.16}$$

which may also be written as

$$\left( A - \theta_k\, g \right) [z^{(k)}] = 0 \tag{C.17}$$

with nontrivial solutions existing only when θ is a root of the determinant condition

$$\left| A - \theta\, g \right| = 0 \tag{C.18}$$

These equations differ from the standard eigenvalue equations in Section B.23 only by the replacement of the unit matrix U by a positive definite matrix g.

Before stating and proving the main theorem, we present a preliminary result.

## Lemma C.3.1: Transformed Eigenvector Problem

*Equation (C.16) is true if and only if*

$$B\,[x^{(k)}] = \theta_k\,[x^{(k)}] \tag{C.19}$$

*where*

$$[z^{(k)}] = g^{-1/2}\,[x^{(k)}], \qquad [x^{(k)}] = g^{1/2}\,[z^{(k)}], \qquad B = g^{-1/2}\, A\, g^{-1/2} \tag{C.20}$$

*and the definitions in Section C.2 have been used for* g^{1/2} *and* g^{−1/2}.

Proof: Substituting the second of eqn (C.20) into eqn (C.19) gives

$$g^{-1/2}\, A\, g^{-1/2}\, g^{1/2}\,[z^{(k)}] = \theta_k\, g^{1/2}\,[z^{(k)}] \tag{C.21}$$

Then pre-multiplying both sides by ${g}^{\frac{1}{2}}$ and using eqns (C.8, C.9) gives eqn (C.16). Conversely, substituting the first and last of eqn (C.20) into eqn (C.16) gives

$$A\, g^{-1/2}\,[x^{(k)}] = \theta_k\, g\, g^{-1/2}\,[x^{(k)}] \tag{C.22}$$

Pre-multiplying both sides by ${g}^{-\frac{1}{2}}$ and using eqns (C.8, C.9) then gives eqn (C.19). Thus the two equations are equivalent, as was to be proved. ⎸
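The equivalence asserted by Lemma C.3.1 is easy to confirm numerically: transform to the standard problem, solve it, and transform the eigenvectors back. A sketch assuming NumPy, with randomly generated example matrices:

```python
import numpy as np

# Lemma C.3.1: the generalized problem A z = theta g z is equivalent to the
# standard problem B x = theta x, with B = g^(-1/2) A g^(-1/2), z = g^(-1/2) x.
rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N)); A = A + A.T               # real symmetric A
M = rng.standard_normal((N, N)); g = M @ M.T + np.eye(N)   # positive definite g

# Construct g^(-1/2) from the eigen-decomposition of g (eqn C.7).
gamma, Y = np.linalg.eigh(g)
g_minus_half = Y @ np.diag(1.0 / np.sqrt(gamma)) @ Y.T

B = g_minus_half @ A @ g_minus_half   # last of eqn (C.20), real and symmetric
theta, X = np.linalg.eigh(B)          # standard eigenvalue problem (C.19)
Z = g_minus_half @ X                  # z^(k) = g^(-1/2) x^(k), first of (C.20)

# Each column of Z satisfies the generalized equation (C.16).
for k in range(N):
    assert np.allclose(A @ Z[:, k], theta[k] * (g @ Z[:, k]))
```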

We now state the main theorem.

## Theorem C.3.2: Generalized Eigenvector Theorem

*If* A *is an N-rowed, real, symmetric matrix and* g *is a real, symmetric, positive definite matrix of the same size, then the eigenvalue equation*

$$A\,[z^{(k)}] = \theta_k\, g\,[z^{(k)}] \tag{C.23}$$

*has N real eigenvalues* θ_{1}, θ_{2}, …, θ_{N} *and N real eigenvectors* [*z*^{(1)}], [*z*^{(2)}], …, [*z*^{(N)}] *that are normalized and mutually orthogonal according to the generalized inner product of Section C.2,*

$$[z^{(k)}] \bullet [z^{(l)}] = [z^{(k)}]^T g\,[z^{(l)}] = \delta_{kl} \tag{C.24}$$

Proof: Since the matrix B defined in the last of eqn (C.20) is real and symmetric, we know from Theorem B.24.3 that it has *N* real eigenvalues and *N* real eigenvectors [*x* ^{(k)}] that obey the ordinary definition of orthonormality [*x* ^{(k)}] · [*x* ^{(l)}] = δ_{kl}. And the above Lemma C.3.1 proves that for each of these [*x* ^{(k)}], the vector [*z* ^{(k)}] defined in the second of eqn (C.20) is a generalized eigenvector of matrix A obeying eqn (C.23). Thus there are *N* generalized eigenvectors. It only remains to investigate their generalized orthogonality.

Substituting [*x*^{(k)}] = g^{1/2}[*z*^{(k)}] from eqn (C.20) into [*x*^{(k)}] · [*x*^{(l)}] = δ_{kl} gives

$$\delta_{kl} = [x^{(k)}]^T [x^{(l)}] = [z^{(k)}]^T\, g^{1/2}\, g^{1/2}\,[z^{(l)}] = [z^{(k)}]^T g\,[z^{(l)}] = [z^{(k)}] \bullet [z^{(l)}] \tag{C.25}$$

which is the generalized orthonormality stated in eqn (C.24), as was to be proved. ⎸

# C.4 Finding Eigenvectors in the Generalized Problem

We now know that a real, symmetric matrix A has *N* generalized eigenvectors. To find them, the procedure is similar to the ordinary eigenvector solution. Written out, eqn (C.18) is

$$\begin{vmatrix} A_{11} - \theta\, g_{11} & A_{12} - \theta\, g_{12} & \cdots & A_{1N} - \theta\, g_{1N} \\ A_{21} - \theta\, g_{21} & A_{22} - \theta\, g_{22} & \cdots & A_{2N} - \theta\, g_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N1} - \theta\, g_{N1} & A_{N2} - \theta\, g_{N2} & \cdots & A_{NN} - \theta\, g_{NN} \end{vmatrix} = 0 \tag{C.26}$$

which is an *N*th-degree polynomial equation in θ whose *N* roots are the eigenvalues θ_{1}, θ_{2}, …, θ_{N}. We know from Section C.3 that these eigenvalues will all be real.

The eigenvector(s) corresponding to a particular eigenvalue θ_{k} are found from eqn (C.17), which may be written as

$$\sum_{j=1}^{N} \left( A_{ij} - \theta_k\, g_{ij} \right) z_j^{(k)} = 0 \qquad \text{for } i = 1, \ldots, N \tag{C.27}$$

Just as for the ordinary eigenvector solution, if the eigenvalue is unique, then these equations can be solved for a unique set of ratios ${z}_{i}^{(k)}/{z}_{1}^{(k)}$. The value of ${z}_{1}^{(k)}$ can then be obtained from the normalization condition,

$$[z^{(k)}] \bullet [z^{(k)}] = [z^{(k)}]^T g\,[z^{(k)}] = 1 \tag{C.28}$$

If the eigenvalue is a multiple root of degeneracy κ then there will be κ LI roots of eqn (C.27). These can be made orthogonal in the generalized sense by using the generalized Schmidt orthogonalization procedure outlined in eqn (C.15). The resulting set of eigenvector solutions will then obey the orthonormality condition eqn (C.24).
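In numerical practice the whole procedure of this section, including the handling of degenerate eigenvalues, is delegated to a library routine. As an editorial aside (assuming SciPy is available): `scipy.linalg.eigh` accepts the metric as a second positive-definite matrix argument and returns eigenvectors already normalized in the sense of eqn (C.24). Example matrices below are randomly generated:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
N = 5
A = rng.standard_normal((N, N)); A = A + A.T               # real symmetric A
M = rng.standard_normal((N, N)); g = M @ M.T + np.eye(N)   # positive definite g

# Solve A z = theta g z directly; columns of Z are the generalized eigenvectors.
theta, Z = eigh(A, g)

assert np.all(np.isreal(theta))              # N real eigenvalues
assert np.allclose(Z.T @ g @ Z, np.eye(N))   # generalized orthonormality (C.24)
for k in range(N):
    assert np.allclose(A @ Z[:, k], theta[k] * (g @ Z[:, k]))  # eqn (C.23)
```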

# C.5 Uses of the Generalized Eigenvectors

The main use of the generalized eigenvalue problem is simultaneously to reduce the matrix A to a diagonal matrix, and the matrix g to the unit matrix. Let us define a matrix C whose *k*th column is the *k*th eigenvector from the generalized eigenvalue problem of Section C.3,

$$C_{jk} = z_j^{(k)} \tag{C.29}$$

## Theorem C.5.1: Reduction to Diagonal Form

*Let* U *be the unit matrix, and define* F *to be a diagonal matrix whose diagonal elements are the eigenvalues of the generalized eigenvalue problem of Section C.3,*

$$F_{kl} = \theta_k\, \delta_{kl} \tag{C.31}$$

*With* C *the matrix defined in eqn (C.29), it follows that*

$$C^T g\, C = U \qquad \text{and} \qquad C^T A\, C = F \tag{C.32}$$

Proof: To prove the first of eqn (C.32), use eqn (C.29) to write eqn (C.24) as

$$\delta_{kl} = [z^{(k)}]^T g\,[z^{(l)}] = \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ik}\, g_{ij}\, C_{jl} = \left( C^T g\, C \right)_{kl} \tag{C.33}$$

Thus C^{T} g C has the same matrix elements as the unit matrix, *U*_{kl} = δ_{kl}, and so the two are equal, as was to be proved.

To prove the second of eqn (C.32), replace *k* by *l* in eqn (C.23) and then multiply both sides of it from the left by [*z*^{(k)}]^{T} to obtain

$$[z^{(k)}]^T A\,[z^{(l)}] = \theta_l\, [z^{(k)}]^T g\,[z^{(l)}] = \theta_l\, \delta_{kl} \tag{C.34}$$

Thus

$$\left( C^T A\, C \right)_{kl} = \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ik}\, A_{ij}\, C_{jl} = \theta_l\, \delta_{kl} = F_{kl} \tag{C.35}$$

so C^{T} A C has the same matrix elements as F, and the two matrices are equal, as was to be proved. ⎸
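Theorem C.5.1 can likewise be confirmed numerically: the eigenvector matrix C simultaneously reduces g to the unit matrix and A to the diagonal matrix F. A sketch assuming NumPy, with randomly generated example matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
A = rng.standard_normal((N, N)); A = A + A.T               # real symmetric A
M = rng.standard_normal((N, N)); g = M @ M.T + np.eye(N)   # positive definite g

# Solve the generalized problem via the g^(-1/2) transformation of Section C.3.
gamma, Y = np.linalg.eigh(g)
g_minus_half = Y @ np.diag(1.0 / np.sqrt(gamma)) @ Y.T
theta, X = np.linalg.eigh(g_minus_half @ A @ g_minus_half)
C = g_minus_half @ X       # kth column of C is [z^(k)], as in eqn (C.29)

# Theorem C.5.1: C^T g C = U and C^T A C = F, eqn (C.32).
F = np.diag(theta)
assert np.allclose(C.T @ g @ C, np.eye(N))
assert np.allclose(C.T @ A @ C, F)
```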