Electromagnetism of Continuous Media: Mathematical Modelling and Applications

Mauro Fabrizio and Angelo Morro

Print publication date: 2003

Print ISBN-13: 9780198527008

Published to Oxford Scholarship Online: September 2007

DOI: 10.1093/acprof:oso/9780198527008.001.0001


A Some properties of Bessel functions

Source: Electromagnetism of Continuous Media. Publisher: Oxford University Press.

The Bessel function of the first kind, of order n,
$$J_n(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{r!\,\Gamma(n+r+1)} \left(\frac{x}{2}\right)^{2r+n},$$
is a solution of Bessel's equation of order n,
$$x^2 y'' + x y' + (x^2 - n^2)\,y = 0.$$
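For readers who wish to experiment, the series and the differential equation can be checked numerically. The following Python sketch (function names, truncation level, and step size are ours, not the book's) sums the series using the standard library's gamma function and evaluates the residual of Bessel's equation by central finite differences; the residual should be at the level of the finite-difference error, not of the function values.

```python
import math

def J(n, x, terms=60):
    """Bessel function of the first kind, summed from the defining series."""
    s = 0.0
    for r in range(terms):
        s += (-1) ** r / (math.factorial(r) * math.gamma(n + r + 1)) * (x / 2) ** (2 * r + n)
    return s

def bessel_ode_residual(n, x, h=1e-4):
    """Residual of x^2 y'' + x y' + (x^2 - n^2) y = 0 at x, with y = J_n and
    the derivatives approximated by central finite differences."""
    y = J(n, x)
    y1 = (J(n, x + h) - J(n, x - h)) / (2 * h)
    y2 = (J(n, x + h) - 2 * y + J(n, x - h)) / h ** 2
    return x ** 2 * y2 + x * y1 + (x ** 2 - n ** 2) * y
```

For instance, `bessel_ode_residual(2, 1.5)` is of the order of the finite-difference error, i.e. very small compared with J_2(1.5) itself.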

The function Γ, also called Euler's gamma function, is defined by
$$\Gamma(z) = \int_0^{\infty} \exp(-t)\, t^{z-1}\, dt$$
for complex-valued z with ℜz > 0.

When n is an integer, positive or negative,
$$J_{-n}(x) = (-1)^n J_n(x).$$

To prove this property, first consider n > 0 and write
$$J_{-n}(x) = \sum_{r=0}^{\infty} \frac{(-1)^r}{r!\,\Gamma(-n+r+1)} \left(\frac{x}{2}\right)^{2r-n}.$$

For r = 0, 1, …, n − 1 the argument of Γ is zero or a negative integer; Γ is then infinite and its reciprocal vanishes, so these terms drop out. Hence, letting r = m + n, we have

$$J_{-n}(x) = \sum_{r=n}^{\infty} \frac{(-1)^r}{r!\,\Gamma(-n+r+1)} \left(\frac{x}{2}\right)^{2r-n} = (-1)^n \sum_{m=0}^{\infty} \frac{(-1)^m}{(m+n)!\,\Gamma(m+1)} \left(\frac{x}{2}\right)^{2m+n}.$$

Comparison with the series for J_n(x) shows that it remains to prove the relation
$$(m+n)!\,\Gamma(m+1) = m!\,\Gamma(n+m+1);$$
this follows by the observation that (m + n)! = m!(m + 1) ⋯ (m + n) and that, by Γ(r + 1) = rΓ(r), we have Γ(n + m + 1) = (m + n) ⋯ (m + 1)Γ(m + 1). Now consider n < 0. Let n = −p, p > 0. We have to prove that
$$J_p(x) = (-1)^{-p} J_{-p}(x), \qquad p > 0,$$
or
$$J_{-p}(x) = (-1)^p J_p(x), \qquad p > 0,$$
which is the result just proved.
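The symmetry relation can be checked directly from the series once 1/Γ is given the value 0 at the poles, exactly as in the argument above. A small Python sketch (the helper names are ours):

```python
import math

def rgamma(z):
    """Reciprocal gamma function 1/Gamma(z); zero at the poles z = 0, -1, -2, ..."""
    if z == int(z) and z <= 0:
        return 0.0
    return 1.0 / math.gamma(z)

def J(n, x, terms=60):
    """Series for J_n; for negative integer n the first |n| terms vanish via rgamma."""
    return sum((-1) ** r * rgamma(n + r + 1) / math.factorial(r) * (x / 2) ** (2 * r + n)
               for r in range(terms))
```

With this convention, `J(-n, x)` and `(-1)**n * J(n, x)` agree to machine precision for integer n.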

For all values of n, the two independent solutions of Bessel's equation may be taken to be
$$J_n(x), \qquad Y_n(x) = \frac{\cos n\pi\, J_n(x) - J_{-n}(x)}{\sin n\pi},$$

Y_n(x) being called the Bessel function of the second kind (or the Neumann function) of order n. The Bessel functions are said to be generated by exp[½x(t − 1/t)], in the sense that
$$\exp\left[\tfrac{1}{2}x(t - 1/t)\right] = \sum_{n=-\infty}^{\infty} t^n J_n(x).$$

To prove this result, we expand exp[½x(t − 1/t)] in powers of t and show that the coefficient of t^n is J_n(x):
$$\exp[x(t - 1/t)/2] = \exp(xt/2)\exp(-x/2t) = \sum_{r=0}^{\infty} \frac{(xt/2)^r}{r!} \sum_{s=0}^{\infty} \frac{(-x/2t)^s}{s!} = \sum_{r,s=0}^{\infty} \frac{(-1)^s (x/2)^{r+s}\, t^{r-s}}{r!\,s!} = \sum_{n=-\infty}^{\infty}\ \sum_{r=\max(n,0)}^{\infty} \frac{(-1)^{r-n} (x/2)^{2r-n}}{r!\,(r-n)!}\, t^n,$$
where the index n = r − s has been introduced and the pair r, n is used instead of r, s. Examine the coefficient of t^n: we have to prove that it equals J_n(x). Consider separately positive and negative values of n, and begin with n ≥ 0. Letting p = r − n and observing that (p + n)! = Γ(p + n + 1), we have
$$\sum_{r=n}^{\infty} \frac{(-1)^{r-n}\,(x/2)^{2r-n}}{r!\,(r-n)!} = \sum_{p=0}^{\infty} \frac{(-1)^p\,(x/2)^{2p+n}}{(p+n)!\,p!} = J_n(x).$$

Let now n < 0. The coefficient of t^n is still the same as for n ≥ 0, but now the requirement s = r − n ≥ 0 is satisfied for all values of r (≥ 0). Then we consider the coefficient of t^n and obtain

$$\sum_{r=0}^{\infty} \frac{(-1)^{r-n}\,(x/2)^{2r-n}}{r!\,(r-n)!} = (-1)^{-n} \sum_{r=0}^{\infty} \frac{(-1)^r\,(x/2)^{2r-n}}{r!\,\Gamma(r-n+1)} = (-1)^{-n} J_{-n}(x),$$
which equals J_n(x) by the symmetry relation already proved, and the proof is complete.
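The generating-function identity lends itself to a direct numerical test: truncating the bilateral sum at |n| = 20 already reproduces the exponential to machine precision for moderate x and t. A Python sketch (truncation levels are our choice):

```python
import math

def J(n, x, terms=40):
    """Series for J_n(x); terms where 1/Gamma hits a pole (n + r + 1 <= 0) vanish."""
    s = 0.0
    for r in range(terms):
        m = n + r + 1
        if m <= 0:
            continue
        s += (-1) ** r / (math.factorial(r) * math.gamma(m)) * (x / 2) ** (2 * r + n)
    return s

x, t = 1.0, 0.7
lhs = math.exp(0.5 * x * (t - 1 / t))          # exp[x(t - 1/t)/2]
rhs = sum(t ** n * J(n, x) for n in range(-20, 21))  # truncated bilateral sum
```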

If n is an integer then
$$J_n(x) = \frac{1}{\pi} \int_0^{\pi} \cos(n\varphi - x\sin\varphi)\, d\varphi.$$

This property is easily proved by means of the generating function. For n integral, J_{−n}(x) = (−1)^n J_n(x), and hence
$$\exp[x(t - 1/t)/2] = J_0(x) + \sum_{n=1}^{\infty} \left[t^n + (-1)^n t^{-n}\right] J_n(x).$$

By the change of variable t ↦ φ, t = exp(iφ), we have
$$t - 1/t = \exp(i\varphi) - \exp(-i\varphi) = 2i\sin\varphi$$
and hence
$$\exp(ix\sin\varphi) = J_0(x) + \sum_{n=1}^{\infty} \left[\exp(in\varphi) + (-1)^n \exp(-in\varphi)\right] J_n(x).$$

If n is even,
$$\exp(in\varphi) + (-1)^n \exp(-in\varphi) = 2\cos n\varphi$$
and, if n is odd,
$$\exp(in\varphi) + (-1)^n \exp(-in\varphi) = 2i\sin n\varphi.$$

Accordingly,
$$\exp(ix\sin\varphi) = J_0(x) + \sum_{k=1}^{\infty} 2\cos 2k\varphi\, J_{2k}(x) + i \sum_{k=1}^{\infty} 2\sin(2k-1)\varphi\, J_{2k-1}(x),$$
whence, equating the real and imaginary parts, we have
$$\cos(x\sin\varphi) = J_0(x) + 2\sum_{k=1}^{\infty} \cos 2k\varphi\, J_{2k}(x), \qquad \sin(x\sin\varphi) = 2\sum_{k=1}^{\infty} \sin(2k-1)\varphi\, J_{2k-1}(x).$$

Multiply both sides of these equations by cos nφ, n ≥ 0, and sin nφ, n ≥ 1, respectively. Since
$$\int_0^{\pi} \cos n\varphi \cos m\varphi\, d\varphi = \begin{cases} 0 & \text{if } n \ne m, \\ \pi/2 & \text{if } n = m \ne 0, \\ \pi & \text{if } n = m = 0, \end{cases} \qquad \int_0^{\pi} \sin n\varphi \sin m\varphi\, d\varphi = \begin{cases} 0 & \text{if } n \ne m, \\ \pi/2 & \text{if } n = m \ne 0, \end{cases}$$
integration from 0 to π yields
$$\int_0^{\pi} \cos n\varphi \cos(x\sin\varphi)\, d\varphi = \begin{cases} \pi J_n(x) & \text{if } n \text{ is even}, \\ 0 & \text{if } n \text{ is odd}, \end{cases} \qquad \int_0^{\pi} \sin n\varphi \sin(x\sin\varphi)\, d\varphi = \begin{cases} 0 & \text{if } n \text{ is even}, \\ \pi J_n(x) & \text{if } n \text{ is odd}. \end{cases}$$

Adding these equations gives
$$\int_0^{\pi} \left[\cos n\varphi \cos(x\sin\varphi) + \sin n\varphi \sin(x\sin\varphi)\right] d\varphi = \pi J_n(x).$$
Since the integrand equals cos(nφ − x sin φ), we have the desired result for any nonnegative integer n. If n is negative, let n = −p, with p positive. The desired result then takes the form

$$\int_0^{\pi} \cos(p\varphi + x\sin\varphi)\, d\varphi = \pi J_{-p}(x).$$

The change of variable φ → θ = π − φ yields
$$\int_0^{\pi} \cos(p\varphi + x\sin\varphi)\, d\varphi = \int_0^{\pi} \cos(p\pi - p\theta + x\sin\theta)\, d\theta = (-1)^p \int_0^{\pi} \cos(p\theta - x\sin\theta)\, d\theta = (-1)^p\, \pi J_p(x) = \pi J_{-p}(x) = \pi J_n(x).$$
This proves that the desired result holds also for negative n, and the proof is complete.
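Bessel's integral is easy to verify numerically for integer n of either sign; since the integrand extends to a smooth periodic function, even a plain midpoint rule converges very quickly. A Python sketch (helper names and step counts are ours):

```python
import math

def J_series(n, x, terms=40):
    s = 0.0
    for r in range(terms):
        m = n + r + 1
        if m <= 0:  # 1/Gamma vanishes at the poles
            continue
        s += (-1) ** r / (math.factorial(r) * math.gamma(m)) * (x / 2) ** (2 * r + n)
    return s

def J_integral(n, x, steps=2000):
    """(1/pi) * integral_0^pi cos(n*phi - x*sin(phi)) dphi, midpoint rule."""
    h = math.pi / steps
    return sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(steps)) * h / math.pi
```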

The particular case for n = 0, namely,

(A.1)
$$J_0(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\varphi)\, d\varphi,$$
is invoked very often. Also, the integral of sin(x sin φ) over [0, 2π] vanishes and hence
$$\frac{1}{2\pi}\int_0^{2\pi} \exp(ix\sin\varphi)\, d\varphi = J_0(x),$$
thus providing a further representation of J_0(x).
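The complex representation can be checked in a few lines; the imaginary part of the quadrature should vanish, and the real part should match the series (a Python sketch, with our own helper names):

```python
import math, cmath

def J0_series(x, terms=40):
    # Gamma(r + 1) = r!, so the coefficients reduce to 1/(r!)^2
    return sum((-1) ** r / math.factorial(r) ** 2 * (x / 2) ** (2 * r) for r in range(terms))

def J0_complex_rep(x, steps=2000):
    """(1/2pi) * integral_0^{2pi} exp(i x sin(phi)) dphi, midpoint rule."""
    h = 2 * math.pi / steps
    total = sum(cmath.exp(1j * x * math.sin((k + 0.5) * h)) for k in range(steps))
    return total * h / (2 * math.pi)
```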

If a is a complex number, with ℜa > 0, and b ∈ ℝ, then
$$\int_0^{\infty} \exp(-ax)\, J_0(bx)\, dx = \frac{1}{\sqrt{a^2 + b^2}}.$$

The proof starts from the observation that, by (A.1),
$$J_0(bx) = \frac{1}{\pi}\int_0^{\pi} \cos(bx\sin\varphi)\, d\varphi.$$

Substitution, interchange of the order of integration and some rearrangement lead to the result.
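The Laplace-transform formula can be confirmed by brute-force quadrature, using (A.1) to evaluate J_0 stably at large argument (the power series is numerically useless there because of cancellation). A Python sketch; the cutoffs and step counts are our choices:

```python
import math

def J0(x, steps=150):
    """Stable J_0 via (A.1); the integrand is smooth and pi-periodic,
    so the midpoint rule converges very fast."""
    h = math.pi / steps
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(steps)) * h / math.pi

def laplace_J0(a, b, X=30.0, steps=6000):
    """integral_0^X exp(-a x) J0(b x) dx, midpoint rule; the tail beyond X
    is of order exp(-a X) and is neglected."""
    h = X / steps
    return sum(math.exp(-a * (k + 0.5) * h) * J0(b * (k + 0.5) * h)
               for k in range(steps)) * h
```

For a = b = 1 this reproduces 1/√2 to about four decimal places.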

If a, b ∈ ℝ, a > b > 0, then
$$\int_0^{\infty} \exp(iax)\, J_0(bx)\, dx = \frac{1}{\sqrt{b^2 - a^2}} = \frac{i}{\sqrt{a^2 - b^2}},$$
the square root being determined by continuation from the previous formula, so that √(b² − a²) = −i√(a² − b²).

To prove this result, substitute the expression for J_0(bx) and interchange the order of integration to have
$$\int_0^{\infty} \exp(iax)\, J_0(bx)\, dx = \frac{1}{\pi}\int_0^{\infty}\!\int_0^{\pi} \exp(iax)\cos(bx\sin\varphi)\, d\varphi\, dx = \frac{1}{2\pi}\int_0^{\pi}\!\int_0^{\infty} \left\{\exp[i(a - b\sin\varphi)x] + \exp[i(a + b\sin\varphi)x]\right\} dx\, d\varphi = \frac{1}{2\pi i}\int_0^{\pi} \left[\frac{\exp[i(a - b\sin\varphi)x]}{a - b\sin\varphi} + \frac{\exp[i(a + b\sin\varphi)x]}{a + b\sin\varphi}\right]_0^{\infty} d\varphi.$$

Per se, the value at infinity has no meaning. However, if a and/or b are the argument of distributions, we can regard the value as zero. In fact, we might replace ∞ by l and consider integrals of the form
$$\int_0^{\infty} \exp(ial)\,\Phi(a)\, da,$$
with Φ a test function; as l → ∞, the integral vanishes by the Riemann–Lebesgue lemma. Consequently, we have

$$\int_0^{\infty} \exp(iax)\, J_0(bx)\, dx = -\frac{a}{\pi i}\int_0^{\pi} \frac{1}{a^2 - b^2\sin^2\varphi}\, d\varphi.$$

The integral on [0, π] is twice that on [0, π/2]. Letting u = cot φ, and then y = au/√(a² − b²), we have
$$\int_0^{\pi/2} \frac{1}{a^2 - b^2\sin^2\varphi}\, d\varphi = \frac{1}{a\sqrt{a^2 - b^2}}\int_0^{\infty} \frac{1}{1 + y^2}\, dy = \frac{\pi}{2a\sqrt{a^2 - b^2}}.$$

Substitution yields the result. If a < b, the integral can be shown to vanish.

Since
$$\int_0^{\infty} \cos ax\, J_0(bx)\, dx + i\int_0^{\infty} \sin ax\, J_0(bx)\, dx = \frac{i}{\sqrt{a^2 - b^2}}$$
if a > b and equal to zero if a < b, equating the real and imaginary parts of both sides gives
$$\int_0^{\infty} \cos ax\, J_0(bx)\, dx = 0, \qquad \int_0^{\infty} \sin ax\, J_0(bx)\, dx = \begin{cases} 1/\sqrt{a^2 - b^2} & \text{if } a > b, \\ 0 & \text{if } a < b. \end{cases}$$
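The limiting value for a > b can be cross-checked with complex arithmetic: take the Laplace transform 1/√(a² + b²), replace a by ε − iα with α > b, and let ε → 0⁺. A Python sketch (the names F, alpha, etc. are ours):

```python
import cmath, math

def F(a, b):
    """Laplace transform of J0(b x), valid for Re a > 0."""
    return 1.0 / cmath.sqrt(a * a + b * b)

# Continue a -> eps - i*alpha with alpha > b and let eps -> 0+:
# the limit is i/sqrt(alpha^2 - b^2), so the cosine transform tends to 0
# and the sine transform to 1/sqrt(alpha^2 - b^2).
alpha, b = 2.0, 1.0
limit = F(1e-9 - 1j * alpha, b)
target = 1j / math.sqrt(alpha ** 2 - b ** 2)
```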

More involved integral representations hold for the Bessel functions if the order n need not be an integer. In this regard, we begin by considering the Euler function Γ and observing that it can be defined, or evaluated, by letting the variable t be complex. We write the integrand as the function
$$\exp(-t)\, t^{z-1} = \exp(-t)\exp[(z-1)\ln t],$$
which has a branch point at t = 0. Upon a cut along the positive real axis, the function ln t is single valued; we let arg t = 0 as t approaches the cut from above. Consider the contour γ obtained by following closely the cut from t = ∞ to t = ε, then turning anticlockwise along a circle λ_ε of radius ε around the origin, and then following the cut from t = ε to t = ∞. Hence,
$$\int_{\gamma} \exp(-t)\, t^{z-1}\, dt = \int_{\infty}^{\varepsilon} \exp(-t)\, t^{z-1}\, dt + \exp[(z-1)2\pi i]\int_{\varepsilon}^{\infty} \exp(-t)\, t^{z-1}\, dt + \int_{\lambda_\varepsilon} \exp(-t)\, t^{z-1}\, dt.$$

Since ℜz > 0, the integral on λ_ε approaches 0 as ε → 0. Letting ε → 0, we have
$$\left[\exp(2\pi i z) - 1\right]\int_0^{\infty} \exp(-t)\, t^{z-1}\, dt = \int_{\gamma} \exp(-t)\, t^{z-1}\, dt.$$

The integral on [0, ∞) is Γ(z) and hence we can write
$$\int_{\gamma} \exp(-t)\, t^{z-1}\, dt = \left[\exp(2\pi i z) - 1\right]\Gamma(z).$$

Now replace z by 1 − z, so that
$$\int_{\gamma} \exp(-t)\, t^{-z}\, dt = \left[\exp(-2\pi i z) - 1\right]\Gamma(1 - z).$$

Moreover, change the variable t → τ, t = −τ, to obtain
$$\int_{\gamma} \exp(-t)\, t^{-z}\, dt = -\exp(-z\pi i)\int_{\gamma'} \exp(\tau)\, \tau^{-z}\, d\tau,$$
γ′ being the specular contour of γ with respect to the imaginary axis: a loop that comes in from −∞ below the cut along the negative real axis, encircles the origin anticlockwise, and returns to −∞ above the cut. Hence, we have

$$\int_{\gamma'} \exp(\tau)\, \tau^{-z}\, d\tau = \left[\exp(\pi z i) - \exp(-\pi z i)\right]\Gamma(1 - z).$$

Since
$$\Gamma(z)\,\Gamma(1 - z) = \frac{\pi}{\sin \pi z},$$
we obtain
$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_{\gamma'} \exp(\tau)\, \tau^{-z}\, d\tau.$$
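Hankel's contour formula for 1/Γ can be tested by direct quadrature along the loop: a ray coming in along arg τ = −π, a small circle around the origin, and a ray going out along arg τ = +π. A Python sketch (the parametrization, radius, and truncation are our choices):

```python
import math, cmath

def hankel_reciprocal_gamma(z, eps=0.5, S=40.0, n_ray=20000, n_circ=2000):
    """(1/2pi i) * integral of exp(tau) tau^(-z) over a loop around the
    negative real axis: in along arg = -pi, circle of radius eps, out along arg = +pi."""
    # The two rays combine into (e^{i pi z} - e^{-i pi z}) * int_eps^S e^{-s} s^{-z} ds.
    h = (S - eps) / n_ray
    ray = sum(math.exp(-(eps + (k + 0.5) * h)) * (eps + (k + 0.5) * h) ** (-z)
              for k in range(n_ray)) * h
    rays = (cmath.exp(1j * math.pi * z) - cmath.exp(-1j * math.pi * z)) * ray
    # Circle tau = eps * e^{i theta}, theta from -pi to pi, d tau = i tau d theta.
    ht = 2 * math.pi / n_circ
    circ = 0j
    for k in range(n_circ):
        tau = eps * cmath.exp(1j * (-math.pi + (k + 0.5) * ht))
        circ += cmath.exp(tau) * tau ** (-z) * 1j * tau * ht
    return (rays + circ) / (2j * math.pi)
```

For integer z the ray contributions cancel (sin πz = 0) and the circle alone supplies the residue, as one expects.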

Accordingly, we express 1/Γ(n + k + 1) by replacing z with n + k + 1 and hence represent J_n(z) in the form
$$J_n(z) = \frac{1}{2\pi i}\sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\int_{\gamma'} \exp(\tau)\, \tau^{-(n+k+1)} \left(\frac{z}{2}\right)^{n+2k} d\tau = \frac{1}{2\pi i}\int_{\gamma'} \exp(\tau)\, \tau^{-(n+1)} \left(\frac{z}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\, \tau^{-k} \left(\frac{z}{2}\right)^{2k} d\tau.$$

The summation over k yields exp(−z²/4τ), whence
$$J_n(z) = \frac{1}{2\pi i}\int_{\gamma'} \left(\frac{z}{2}\right)^n \tau^{-(n+1)} \exp\left(\tau - \frac{z^2}{4\tau}\right) d\tau.$$

Upon the change of variable τ → t, τ = ½zt, we have
$$J_n(z) = \frac{1}{2\pi i}\int_{\lambda} t^{-n-1} \exp\left[\frac{z}{2}\left(t - \frac{1}{t}\right)\right] dt,$$
the contour λ being the inverse image of γ′ under the map t ↦ τ = ½zt.

For simplicity, restrict attention to z = x ∈ ℝ⁺, in which case λ is equal to γ′ to within a magnification factor 2/x. For convenience, consider a new variable w ∈ ℂ, t = exp w. The contour l in the w-plane consists of the line w = −iπ + ξ, ξ ∈ ℝ⁺ (traversed from ξ = ∞ to ξ = 0), followed by the segment w = iη, η ∈ [−π, π], and the line w = iπ + ξ, ξ ∈ ℝ⁺. The representation of J_n(x) takes the form

$$J_n(x) = \frac{1}{2\pi i}\int_{l} \exp(x\sinh w - nw)\, dw.$$

Upon substitution we have
$$\int_l \exp(x\sinh w - nw)\, dw = i\int_{-\pi}^{\pi} \exp[x\sinh(i\eta) - in\eta]\, d\eta - \int_0^{\infty} \exp[x\sinh(\xi - i\pi) - n(\xi - i\pi)]\, d\xi + \int_0^{\infty} \exp[x\sinh(\xi + i\pi) - n(\xi + i\pi)]\, d\xi = i\int_{-\pi}^{\pi} \exp[i(x\sin\eta - n\eta)]\, d\eta - 2i\sin n\pi \int_0^{\infty} \exp(-x\sinh\xi - n\xi)\, d\xi.$$

The function sin(x sin η − nη) is odd in η and hence its integral on [−π, π] vanishes, while cos(x sin η − nη) is even. Accordingly, 1/2πi times the integral over l gives

$$J_n(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\eta - n\eta)\, d\eta - \frac{\sin n\pi}{\pi}\int_0^{\infty} \exp(-x\sinh\xi - n\xi)\, d\xi,$$
which is known as Schläfli's generalization of Bessel's integral. Indeed, if n is an integer then sin nπ = 0 and we recover Bessel's integral
$$J_n(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\eta - n\eta)\, d\eta, \qquad n \in \mathbb{Z}.$$
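Schläfli's integral is easy to test for non-integer orders: both terms are rapidly convergent, the second because sinh grows exponentially. A Python sketch against the defining series (our helper names and cutoffs):

```python
import math

def J_series(n, x, terms=60):
    return sum((-1) ** r / (math.factorial(r) * math.gamma(n + r + 1)) * (x / 2) ** (2 * r + n)
               for r in range(terms))

def J_schlafli(n, x, steps=4000, xi_max=8.0):
    """Schlaefli's integral: the oscillatory part on [0, pi] minus
    (sin n pi / pi) times the exponential tail, truncated at xi_max."""
    h = math.pi / steps
    osc = sum(math.cos(x * math.sin((k + 0.5) * h) - n * (k + 0.5) * h)
              for k in range(steps)) * h / math.pi
    he = xi_max / steps
    tail = sum(math.exp(-x * math.sinh((k + 0.5) * he) - n * (k + 0.5) * he)
               for k in range(steps)) * he
    return osc - math.sin(n * math.pi) / math.pi * tail
```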

The definition of Y_n gives
$$\pi Y_n(x) = \cot n\pi \int_0^{\pi} \cos(n\eta - x\sin\eta)\, d\eta - \frac{1}{\sin n\pi}\int_0^{\pi} \cos(n\eta + x\sin\eta)\, d\eta - \int_0^{\infty} \exp(n\xi - x\sinh\xi)\, d\xi - \cos n\pi \int_0^{\infty} \exp(-n\xi - x\sinh\xi)\, d\xi = \int_0^{\pi} \sin(x\sin\eta - n\eta)\, d\eta - \int_0^{\infty} \exp(-x\sinh\xi)\left[\exp(n\xi) + \cos n\pi \exp(-n\xi)\right] d\xi.$$

Hence, we have the Hankel functions in the form
$$H_n^{(1)}(x) = \frac{1}{\pi}\int_0^{\pi} \exp[i(x\sin\eta - n\eta)]\, d\eta - \frac{i}{\pi}\exp\left(-\frac{in\pi}{2}\right)\int_0^{\infty} \exp(-x\sinh\xi)\left\{\exp\left[n\left(\xi + \frac{i\pi}{2}\right)\right] + \exp\left[-n\left(\xi + \frac{i\pi}{2}\right)\right]\right\} d\xi$$
and
$$H_n^{(2)}(x) = \frac{1}{\pi}\int_0^{\pi} \exp[i(n\eta - x\sin\eta)]\, d\eta + \frac{i}{\pi}\exp\left(\frac{in\pi}{2}\right)\int_0^{\infty} \exp(-x\sinh\xi)\left\{\exp\left[n\left(\xi - \frac{i\pi}{2}\right)\right] + \exp\left[-n\left(\xi - \frac{i\pi}{2}\right)\right]\right\} d\xi.$$
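As a numerical cross-check of the H_n^{(1)} representation (with the sign conventions used here; the helper names and cutoffs are ours), one can compare the quadrature against J_n + iY_n computed from the series and the definition of Y_n:

```python
import math, cmath

def J_series(n, x, terms=60):
    return sum((-1) ** r / (math.factorial(r) * math.gamma(n + r + 1)) * (x / 2) ** (2 * r + n)
               for r in range(terms))

def H1(n, x, steps=4000, xi_max=8.0):
    """First Hankel function from the integral representation (non-integer n > 0)."""
    h = math.pi / steps
    osc = sum(cmath.exp(1j * (x * math.sin((k + 0.5) * h) - n * (k + 0.5) * h))
              for k in range(steps)) * h / math.pi
    he = xi_max / steps
    tail = 0j
    for k in range(steps):
        xi = (k + 0.5) * he
        tail += math.exp(-x * math.sinh(xi)) * (cmath.exp(n * (xi + 1j * math.pi / 2))
                                                + cmath.exp(-n * (xi + 1j * math.pi / 2)))
    tail *= he
    return osc - 1j / math.pi * cmath.exp(-1j * n * math.pi / 2) * tail
```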

As a particular case, examine H_0^{(1)}; for n = 0 the braces reduce to 2, namely,
$$H_0^{(1)}(x) = \frac{1}{\pi}\int_0^{\pi} \exp(ix\sin\eta)\, d\eta - \frac{2i}{\pi}\int_0^{\infty} \exp(-x\sinh\xi)\, d\xi.$$
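This representation can be sanity-checked with the real part against the series for J_0 and the imaginary part against Y_0, the latter estimated from the definition of Y_n at a small non-integer order (a Python sketch; helper names, cutoffs, and the small-order trick are ours):

```python
import math, cmath

def J_series(n, x, terms=60):
    return sum((-1) ** r / (math.factorial(r) * math.gamma(n + r + 1)) * (x / 2) ** (2 * r + n)
               for r in range(terms))

def H1_0(x, steps=4000, xi_max=8.0):
    """H_0^(1) from the representation above (note the factor 2 on the tail)."""
    h = math.pi / steps
    osc = sum(cmath.exp(1j * x * math.sin((k + 0.5) * h)) for k in range(steps)) * h / math.pi
    he = xi_max / steps
    tail = sum(math.exp(-x * math.sinh((k + 0.5) * he)) for k in range(steps)) * he
    return osc - 2j / math.pi * tail

def Y0_approx(x, n=1e-6):
    """Y_0 estimated from Y_n = (cos n pi J_n - J_{-n}) / sin n pi at small n."""
    return (math.cos(n * math.pi) * J_series(n, x) - J_series(-n, x)) / math.sin(n * math.pi)
```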

We prove that H_0^{(1)} may equivalently be represented as

(A.2)
$$H_0^{(1)}(x) = \frac{1}{\pi i}\int_{-\infty}^{\infty} \exp(ix\cosh u)\, du.$$

To this end, observe that, cosh being even, the integral on ℝ is twice the integral on ℝ⁺, and consider the variable v = u − iπ/2. It follows that
$$\int_0^{\infty} \exp(ix\cosh u)\, du = \int_{-i\pi/2}^{\infty - i\pi/2} \exp(-x\sinh v)\, dv.$$

Now let v = w + iφ and replace the line from −iπ/2 to ∞ − iπ/2 by the segment w = 0, φ ∈ [−π/2, 0], and the line ℝ⁺; the ‘segment’ w = ∞, φ ∈ [0, −π/2], gives a vanishing contribution. Hence, we have
$$\int_{-i\pi/2}^{\infty - i\pi/2} \exp(-x\sinh v)\, dv = i\int_{-\pi/2}^{0} \exp(-ix\sin\varphi)\, d\varphi + \int_0^{\infty} \exp(-x\sinh w)\, dw = \frac{i}{2}\int_0^{\pi} \exp(ix\sin\varphi)\, d\varphi + \int_0^{\infty} \exp(-x\sinh w)\, dw.$$

Substitution yields (A.2).