Electromechanics and its devices
Abstract and Keywords
Electromechanics—the coupling of mechanical forces with others—exhibits a continuum-to-discrete spectrum of properties. In this chapter, classical and newer analysis techniques are developed for devices ranging from inertial sensors to scanning probes in order to quantify limits and sensitivities. Mechanical response, energy storage, transduction and dynamic characteristics of various devices are analyzed. The Lagrangian approach is developed for multidomain analysis and to bring out nonlinearity. The approach is extended to nanoscale fluidic systems, where nonlinearities, fluctuation effects and the classical–quantum boundary are quite central. This leads to the study of measurement limits using the power spectrum and correlations with slow and fast forces. After a diversion to acoustic waves and piezoelectric phenomena, nonlinearities are explored in depth: homogeneous and forced conditions of excitation, chaos, bifurcations and other consequences, Melnikov analysis and the classic phase portraiture. The chapter ends with comments on multiphysics, such as that of nanotube-based systems and electro-mechano-biological biomotor systems.
Keywords: Multiscale, Multiphysics, Inertial frame, Mechanical response, Lagrangian, Hamiltonian, Conjugate variables, Beam response, Plate response, Pull-in, Eigenmode analysis, Navier-Stokes equation, Equipartition of energy, Fluctuations, Power spectrum, Sensitivity, Slow force, Fast force, Acoustic wave, Chaos, Bifurcation, Feigenbaum delta, Melnikov analysis, Duffing oscillator, Lennard-Jones potential
COUPLING MECHANICAL FORCES WITH OTHER FORCES at the microscale and the nanoscale allows the creation of devices that can perform extremely precise measurements, as well as devices for everyday use. Examples where the first type of device is used include scanning probe techniques, where a vibrating probe with an electrical, magnetic or optical tip, or a tip with some combination of these properties, makes ultrasensitive measurements possible. Examples of the second type of device include inertial sensors for orientation sensing in mobile displays or for head protection in disk drives. Probe techniques let one get down to single electron, single spin, single photon and single phonon sensitivity, in addition to allowing extremely precise measurements of forces near uncertainty limits by using optical coupling. Inertial sensing allows one to perform chemical sensing near single molecule limits, or global positioning, due to precision frequency control. The nanoscale is a dimensional region where continuum mechanics and discrete mechanics intersect—continuum mechanics in the sense that one may look at any object’s behavior as being describable by continuous functions in space, and discrete mechanics in the sense that discreteness at the atomic size scale, and thus the position dependence of properties, must be considered. An example that illustrates the importance of this discreteness is a carbon nanotube strung between two contacts, with an electric force applied through a gate separated from the tube. While it may suffice to describe the tube itself through Young’s modulus and other, similar macroscale properties—isotropic and anisotropic—at the clamping points, distortion, generation of defects, et cetera, will be dominated by behavior that must include atomic scale dynamics.
A comprehensive description that keeps the calculation manageable will require a multiscale, multiphysics treatment in which both atomic scale and continuum scale descriptions are incorporated and coupled through an adequate description of boundaries.
This chapter discusses the continuum-to-discrete spectrum of the properties of electromechanical interactions, and their use in devices. Since many of the devices use energy coupling across forms, particularly the coupling of electrical and mechanical energy, we will employ a few different techniques to tackle the problem in order to extract characteristics of interest. Interaction also has statistical characteristics. Forces that arise from discreteness, such as impulsive forces, appear as shot noise. A historically important example of this impulse manifestation is Brownian motion. Objects undergoing Brownian motion, when observed with sufficient resolution, show rapid reorientation of direction followed by slow movement—fluctuation, a noise in a signal such as velocity, momentum or power flux, which rides along with the slow averaged signal that one measures. The macroscopic response is slow even if the fluctuation events are fast. An energy gradient causes slow movement in this Brownian motion in the presence of short and rapid events. This is an example of fast and slow forces at work together. So, there are important temporal attributes essential to understanding the devices in their limits or the limits of the measurement itself. The mathematical treatments reflect the choices one needs to make to gain insight while employing the simplest of techniques that will suffice. Another important attribute arising from these couplings is the ability of the forces to extend the system into the nonlinear region of response—nonlinearities are pervasive in electromechanics. They result in effects such as hysteresis, bifurcation and chaos. We therefore discuss the necessary physical description for the coupling and then look at its manifestation in interesting devices.
5.1 Mechanical response
WE START WITH THE CLASSICAL DESCRIPTION of the response of a beam in elastic conditions. Figure 5.1 shows a beam under a distributed load, that is, a force per unit length arising from an external source. L is the length of the beam, w its width, and t its thickness. Weight arising from gravity, a hydrostatic force arising from a fluid pressure, or a Coulombic force, for example, causes a force per unit length of $p(\zeta )$. The beam deflects, and the moment of this force and deflection is $\mathfrak{M}$. In general conditions, such as nonuniform loads or nonuniform beams, this deflection can consist of bending (an angular deformation), shearing (a slip), translation (a uniform Cartesian shift) and rotation (a uniform angular movement around an axis).
We consider the case of a beam that has no longitudinal forces. The centerline then is a neutral axis, with strain of opposite polarity on either side. If the load is uniform, the strain is reflected in bending. Since the force exists across the beam, we employ a moment reflecting the leverage of the position-dependent force. The beam has a compressive and a tensile strain on either side of the neutral axis. The moment causes this strain, and the strain is the reaction pushing the beam back to maintain equilibrium. We employ the moment of inertia to reflect this position dependence in the inertial response of the beam.
The strain in a rectangular beam in the coordinate system of Figure 5.1(a) is $\epsilon =z\,d\varphi /dy=z/r$. This follows from the balance of longitudinal forces, so beam length along the neutral axis is a constant. The strain causes a reaction for restoration. The moment around the neutral axis is the integrated product of the lever arm and the stress ($\sigma =\epsilon Y$, where σ is the stress, and Y is Young’s modulus). The moment of inertia for a rectangular beam is $I={\int}_{S}{z}^{2}\,dS=w{t}^{3}/12$, where dS is the elemental cross-section of the beam. The moment of the beam is
$$\mathfrak{M}={\int}_{S}z\sigma \,dS=\frac{Y}{r}{\int}_{S}{z}^{2}\,dS=\frac{YI}{r}.$$
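As a quick check, the rectangular-section moment of inertia quoted above can be verified symbolically; a minimal sketch using sympy (the symbol names are for illustration only):

```python
import sympy as sp

z, w, t = sp.symbols('z w t', positive=True)

# Moment of inertia of a rectangular cross section about the neutral axis:
# I = ∫_S z^2 dS, with dS = w dz, integrated over the thickness from -t/2 to t/2
I = sp.integrate(z**2 * w, (z, -t/2, t/2))

print(sp.simplify(I))  # w*t**3/12
```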
The radius of curvature r changes with position. Farther away, there is a larger accumulated moment of the force, and the displacement u(y) is large. For this elastic—linear response—case, as a function of the position, the radius of curvature is related to the displacement as
$$\frac{1}{r}\approx \frac{{\partial}^{2}u}{\partial {y}^{2}}$$
for small deflection, and, hence, the moment as
$$\mathfrak{M}=YI\frac{{\partial}^{2}u}{\partial {y}^{2}}.$$
Consider the general case shown in Figure 5.1(a), without the uniform force approximation. Under conditions of limited displacement, so that higher order terms in the Cartesian geometric representation are ignored, if the position-dependent force per unit length, a pressure, is $p(\zeta )$, the position-dependent moment is
$$\mathfrak{M}(y)={\int}_{y}^{L}p(\zeta )(\zeta -y)\,d\zeta ,$$
where the coordinates are along the beam, with the origin at the fixed point ($y=0$, which coincides with the origin of ζ). This equation gives the leverage at any given position y due to the force per unit length of $p(\zeta )$. This represents the force exerted on an infinitesimal section of the beam, multiplied by its distance from the clamping point. The shearing force at any position y is the result of the accumulation of the force beyond that position—the differential, that is,
$$V(y)=-\frac{\partial \mathfrak{M}}{\partial y}={\int}_{y}^{L}p(\zeta )\,d\zeta .$$
The forces beyond the position y cause the shearing at y. The force per unit length then is the second derivative:
$$p(y)=\frac{{\partial}^{2}\mathfrak{M}}{\partial {y}^{2}}.$$
If the beam is stationary, then the moment $\mathfrak{M}=YI\,{\partial}^{2}u/\partial {y}^{2}$. This is what we derived when discussing the case of uniform force. The force per unit length is then related as
$$p(y)=YI\frac{{\partial}^{4}u}{\partial {y}^{4}}.$$
When the beam is in motion, the inertial force reacts to the force per unit length with this positional dependence. The vertical displacement, a translational movement, is related to p(y) through the force law, so p(y) can be written as
$$p(y)=-\rho A\frac{{\partial}^{2}u}{\partial {t}^{2}}$$
through time dependence, where ρ is the density, and A is the cross-sectional area of the beam.
These forces—the applied force and the strain from within—together with the time dependence of the response, can be gathered in the form
$$YI\frac{{\partial}^{4}u}{\partial {y}^{4}}+\rho A\frac{{\partial}^{2}u}{\partial {t}^{2}}=p(y,t).$$
This equation can be expanded to include any other forcing or damping functions that may exist—electromagnetic, mechanical or other. This partial differential equation couples time and space in a form to which eigenmode analysis can be applied. So, we have now reduced the problem of a moving beam under forces to one of eigenmode analysis.
In the case of a fixed cantilever beam, at the fixed end ($y=0$), the displacement and its first derivative vanish. At the free end ($y=L$), the shearing force and the moment vanish, so the second and third derivatives of displacement vanish at $y=L$. The short argument that establishes this boundary condition is that, beyond $y=L$, the various forces vanish; since the moment and the shear vanish there, the second and third derivatives must too.
This was a rather convoluted way to arrive at the equation for the force. The force-balancing approach works, but it easily gets out of hand, particularly regarding intuition and the question of what coordinate system to employ. How would one handle rotations and translations happening simultaneously? One answer is to use energy principles, and scalars, through Hamilton’s principle and the Lagrangian method. This gives us an elegant way to tackle the problem by resorting to the principle of least action. The kinetic energy is
$$T=\frac{1}{2}{\int}_{0}^{L}\rho A{\left(\frac{\partial u}{\partial t}\right)}^{2}dy.$$
The potential energy is
$$U=\frac{1}{2}{\int}_{0}^{L}YI{\left(\frac{{\partial}^{2}u}{\partial {y}^{2}}\right)}^{2}dy.$$
With the work done by external forces, $\delta W={\int}_{0}^{L}p\,\delta u\,dy$, Hamilton’s principle leads to
$${\int}_{{t}_{1}}^{{t}_{2}}(\delta T-\delta U+\delta W)\,dt=0,$$
which states that, integrated over time, the virtual work of the external forces, together with the difference between the variations of the kinetic and the potential energies, vanishes for an infinitesimally small displacement $\delta u$ around the actual displacement $u$, so long as either this perturbation or its positional dependence, that is, either $\delta u$ or $\partial (\delta u)/\partial y$, vanishes at the boundaries.
For the beam problem, the principle implies that
Since the perturbation in the displacement at any position, including at $y=L$, and the perturbation in its derivative at $y=L$ are arbitrary, it must follow, from each of the associated terms in this equation, that
$$\rho A\frac{{\partial}^{2}u}{\partial {t}^{2}}+YI\frac{{\partial}^{4}u}{\partial {y}^{4}}=p,\qquad {\left.\frac{{\partial}^{2}u}{\partial {y}^{2}}\right|}_{y=L}=0,\qquad {\left.\frac{{\partial}^{3}u}{\partial {y}^{3}}\right|}_{y=L}=0.$$
These are the equations of motion, absent damping, and the boundary conditions at $y=L$. At $y=0$, the displacement vanishes, as does its first differential with position—the inclination. In time, the initial condition consists of a displacement $u$ and a velocity $\partial u/\partial t$. The boundary conditions suffice for solving the fourth order differential equation.
One could have arrived at this same equation by using the Lagrangian method, which we will utilize in problems later on. As remarked, these are all different methods utilizing action in different forms and are formally equivalent. For the Lagrangian $\mathcal{L}\equiv T-U$, we may utilize the two canonical conjugate coordinates—position $u$ and velocity $\dot{u}$—to write the equation of motion as
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial {\dot{u}}_{i}}\right)-\frac{\partial \mathcal{L}}{\partial {u}_{i}}=0,$$
where i denotes a differential section of the beam; assembly of all the sections forms the entire beam.
This equation may be solved through eigenmode analysis. Here, we take the case of free vibration; later on, we will revisit the problem for damping and forced vibrations. A harmonic force causes a harmonic response in this linear response description. We separate the response function into a time-dependent part and a space-dependent part. The separation of variables is possible since, in the steady state, the time response of displacement at any position y has to have the same harmonic form in time. This implies that the solution function is a product of a space-dependent part and a time-dependent part. So, we wish to find the position-dependent part ($\mathbf{Y}$) and time-dependent part ($\mathbf{T}$) of
$$u(y,t)=\mathbf{Y}(y)\mathbf{T}(t).$$
The governing equation, Equation 5.14, then becomes
$$YI\,\mathbf{T}\frac{{d}^{4}\mathbf{Y}}{d{y}^{4}}+\rho A\,\mathbf{Y}\frac{{d}^{2}\mathbf{T}}{d{t}^{2}}=0;$$
since the position-dependent and the time-dependent parts must separately equal a constant, this can be split into
$$\frac{{d}^{4}\mathbf{Y}}{d{y}^{4}}-{k}^{4}\mathbf{Y}=0\quad \text{and}\quad \frac{{d}^{2}\mathbf{T}}{d{t}^{2}}+{\omega}^{2}\mathbf{T}=0,$$
where ω is the angular frequency of the harmonic force, and ${k}^{2}=\omega /{(YI/\rho A)}^{1/2}=\omega /a$, with $a={(YI/\rho A)}^{1/2}$ as a parametric constant. This solution for the homogeneous equation shows characteristics that are similar to those of the solution for the electromagnetic wave propagation equation, although the dependences are in a different order, and the parameters are quite different. The beam oscillates back and forth, exchanging the potential energy, which in the spring is associated with quantum-mechanical bonding, with the kinetic energy associated with the vibrational motion. The ansatz function then is
$$\mathbf{Y}(y)={C}_{1}\mathrm{cos}\,ky+{C}_{2}\mathrm{sin}\,ky+{C}_{3}\mathrm{cosh}\,ky+{C}_{4}\mathrm{sinh}\,ky.$$
The boundary conditions then imply that ${C}_{1}+{C}_{3}=0$, and ${C}_{2}+{C}_{4}=0$, because of fixed reference position and vanishing inclination at $y=0$. The vanishing second and third derivatives of displacement at $y=L$, which we derived as a boundary condition due to the absence of moment from the shearing force at that position, then lead to the set
whose solution exists iff
$$\mathrm{cos}\,kL\,\mathrm{cosh}\,kL=-1.$$
An infinite number of solutions exist. The solutions have the form $kL={\lambda}_{i}$, where ${\lambda}_{1}=1.875,{\lambda}_{2}=4.694,\dots $, and the higher order terms are well approximated by ${\lambda}_{i}=(2i-1)\pi /2\ \forall i\ge 3$, with less than a percent of error. We have now found the various eigenmodes of the positional waves that exist in the cantilever beam as it oscillates up and down. The natural frequency of each mode (${\omega}_{i}$) is different:
$${\omega}_{i}=a{k}_{i}^{2}=\frac{{\lambda}_{i}^{2}}{{L}^{2}}{\left(\frac{YI}{\rho A}\right)}^{1/2}.$$
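The characteristic equation $\mathrm{cos}\,kL\,\mathrm{cosh}\,kL=-1$ can be solved numerically; a minimal pure-Python bisection sketch recovers the eigenvalues quoted in the text:

```python
import math

def f(x):
    # Characteristic equation of the clamped-free beam: cos(kL)cosh(kL) + 1 = 0
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(func, a, b):
    # Simple bisection; assumes a sign change of func on [a, b]
    fa = func(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = func(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Bracket the first three roots around the asymptotic guesses (2i-1)π/2
roots = [bisect(f, g - 1.0, g + 1.0)
         for g in (math.pi / 2, 3 * math.pi / 2, 5 * math.pi / 2)]
print([round(r, 4) for r in roots])  # [1.8751, 4.6941, 7.8548]
```

The third root already sits within a fraction of a percent of $5\pi /2\approx 7.854$, consistent with the approximation above.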
The frequency of any mode increases as the beam is made shorter or thicker, because of the $1/{L}^{2}$ and $I/A$ dependences. The amplitude can now be determined through the C coefficients:
C_{3} and C_{4} follow, since they are related to C_{1} and C_{2} through the boundary conditions at $y=0$.
The displacement then is
with ${A}_{i}={A}_{i}^{{}^{\prime}}{C}_{i}$, and ${B}_{i}={B}_{i}^{{}^{\prime}}{C}_{i}$, determined by the initialization conditions of motion. The displacement and speed at $t=0$ uniquely determine the infinite A_{i} and B_{i} sequences, since the initial condition of displacement is a function of position spread across the beam. The displacement in time, as a function of position, is the sum of the eigenmode expansion of the infinite series of Equation 5.24. For example, if the initial condition was precisely one corresponding to ${A}_{1}=1$, where all the rest—B_{1}, ${A}_{i},{B}_{i}\ \forall i\ge 2$—were zero, then the initial condition for displacement must be precisely $u(y,0)=\mathrm{cos}\,{k}_{1}y-\mathrm{cosh}\,{k}_{1}y-{\alpha}_{1}(\mathrm{sin}\,{k}_{1}y-\mathrm{sinh}\,{k}_{1}y)$.
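This first mode shape can be checked against the boundary conditions numerically. A sketch, with ${\alpha}_{1}$ computed from the vanishing moment at $y=L$ and the derivatives written out analytically:

```python
import math

lam = 1.8751          # first eigenvalue, k1*L, of the clamped-free beam
L = 1.0               # normalized beam length
k = lam / L
# alpha_1 from the condition that the second derivative vanishes at y = L
alpha = (math.cos(lam) + math.cosh(lam)) / (math.sin(lam) + math.sinh(lam))

def Y(y):
    # First eigenmode: cos ky - cosh ky - alpha (sin ky - sinh ky)
    return (math.cos(k * y) - math.cosh(k * y)
            - alpha * (math.sin(k * y) - math.sinh(k * y)))

def Y2(y):
    # Analytic second derivative, divided by k^2
    return (-math.cos(k * y) - math.cosh(k * y)
            - alpha * (-math.sin(k * y) - math.sinh(k * y)))

def Y3(y):
    # Analytic third derivative, divided by k^3
    return (math.sin(k * y) - math.sinh(k * y)
            - alpha * (-math.cos(k * y) - math.cosh(k * y)))

# Clamped end: displacement vanishes; free end: moment and shear vanish
print(abs(Y(0.0)) < 1e-9, abs(Y2(L)) < 1e-3, abs(Y3(L)) < 1e-3)  # True True True
```

The small residuals at $y=L$ come only from the four-digit rounding of ${\lambda}_{1}$.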
The eigenfunctions form an orthogonal basis set. We can show this rigorously by drawing on Equation 5.18, which the basis eigenfunctions ${\mathbf{Y}}_{i}$ of function $\mathbf{Y}$ must satisfy, first rewriting it in the reduced form
$$\frac{{d}^{4}{\mathbf{Y}}_{i}}{d{y}^{4}}={k}_{i}^{4}{\mathbf{Y}}_{i}.$$
The standard technique is to multiply by the other basis eigenfunction and then subtract and integrate over space. So,
$$({k}_{i}^{4}-{k}_{j}^{4}){\int}_{0}^{L}{\mathbf{Y}}_{i}{\mathbf{Y}}_{j}\,dy={\int}_{0}^{L}\left({\mathbf{Y}}_{j}\frac{{d}^{4}{\mathbf{Y}}_{i}}{d{y}^{4}}-{\mathbf{Y}}_{i}\frac{{d}^{4}{\mathbf{Y}}_{j}}{d{y}^{4}}\right)dy=0,$$
because the boundary conditions are that ${\mathbf{Y}}_{i}$ and ${\mathbf{Y}}_{j}$ and their first derivatives with position y vanish at $y=0$ and that their second and third derivatives vanish at $y=L$. This requires that
$${\int}_{0}^{L}{\mathbf{Y}}_{i}{\mathbf{Y}}_{j}\,dy=0,$$
since ${k}_{i}\ne {k}_{j}$. In turn, the first part of Equation 5.25, when multiplied by ${\mathbf{Y}}_{j}$ and integrated over the length, also leads to
Ergo, the eigenmode functions are orthogonal. We have here established the technique to describe the position dependence and time dependence of the beam response as an eigenfunction response. We will look at example solutions when we return to this subject for forced and damped conditions.
Resonators employing circular plates are also of interest, as banks of frequency-selective filters in wireless communications. So, we will analyze this problem to show the features that appear from symmetries and dimensionality. We consider only the homogeneous undamped case shown in Figure 5.2: a thin plate of density ρ, diameter D, thickness t and radius R. In this case,
The boundary conditions in polar coordinates, if the plate is clamped at its edges, are
where we have written the conditions in a normalized radial unit, $\varrho =r/R$, where R is the radius of the disk. If the disk is anchored at the center, these boundary conditions change appropriately to $\varrho =r/R=0$. We employ the traditional separation of variables technique, given the nature of the derivative dependences of the governing equation. First, we separate time,
The spatial equation then is
This form of equation immediately suggests that $z(\varrho ,\theta )=F(\varrho ,\theta )+G(\varrho ,\theta )$, which must satisfy
Now, we separate the spatial variables. Take $G(\varrho ,\theta )={G}_{\theta}(\theta ){G}_{\varrho}(\varrho )$, the product of the angular and the radial parts. The solution must satisfy
A harmonic angular dependence is a good solution, and there must be phase matching in angular displacement, that is, ${G}_{\theta}(\theta )={G}_{\theta}(\theta +2\pi )$. The constraint is that $\lambda =1,2,\dots $, that is, a positive integer. So, the angular part of this solution has the form
The radial dependence equation, recast by substituting for λ, has Bessel functions as the solutions:
${Y}_{n}(\sqrt{\omega}\varrho )$ is unphysical, since it is unbounded at the origin, so we need to consider only the first term; we have
where ${a}_{n}={a}_{n}^{{}^{\prime}}{\alpha}_{n}$, and ${b}_{n}={b}_{n}^{{}^{\prime}}{\alpha}_{n}$. Solution for $F(\varrho ,\theta )$, with its negative sign equation, forms a solution in Bessel functions with an imaginary argument and is
Combining the solutions for $G(\varrho ,\theta )$ and $F(\varrho ,\theta )$, we obtain the spatial solution for edge clamping,
to which we can now apply our boundary conditions. This lets us find the allowed frequencies, that is, the values of ${\omega}_{n}^{\star}$, that satisfy this equation. These constraints appear through the determinant of the two boundary condition equations, for clamping on the edge, as
The solution to the constraints of the problem with this flexing must now be determined numerically. The natural frequencies of this plate are
where D is related to the flexural rigidity of the plate, and L is the radius—the characteristic length of this system.
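The numerical determination mentioned above can be sketched briefly. Assuming SciPy is available, and using the standard clamped-edge condition ${J}_{n}(\lambda ){I}_{n+1}(\lambda )+{I}_{n}(\lambda ){J}_{n+1}(\lambda )=0$ (an equivalent rewriting of the boundary-condition determinant):

```python
from scipy.special import jv, iv      # Bessel J_n and modified Bessel I_n
from scipy.optimize import brentq

def clamped_det(lam, n=0):
    # Boundary-condition determinant for a clamped circular plate:
    # J_n(λ) I_{n+1}(λ) + I_n(λ) J_{n+1}(λ) = 0
    return jv(n, lam) * iv(n + 1, lam) + iv(n, lam) * jv(n + 1, lam)

# Fundamental axisymmetric (n = 0) root; the normalized frequency goes as λ²
lam0 = brentq(clamped_det, 2.0, 4.0)
print(round(lam0**2, 3))  # ≈ 10.216, the well-known clamped-plate value
```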
The natural frequency of the plate being a function of size means that an array of such disks, each programmed for a different frequency, is potentially useful as a filter bank at the frequencies of wireless communications.
We can convince ourselves of the usefulness of beams too with simple estimation. Resonance frequency relates inversely to the square root of mass. The effective mass of a beam being proportional to the order of the resonance mode, the higher order modes have less energy than the fundamental one. The cantilever beam has a mass $m=\rho Lwt$, the product of density, length, width and thickness. Its elastic constant along the direction of thickness, responsible for the restorative force, is ${k}_{s}=12YI/{L}^{3}$, as determined earlier. The moment of inertia here is proportional to wt^{3}. In a restorative system such as this, the natural frequency of vibration will be related to the spring constant and mass as ${\omega}_{0}=\sqrt{{k}_{s}/m}$. Reducing dimensions reduces mass and increases resonance frequency. If one were to place a mass $\mathrm{\Delta}m$ at the tip of the cantilever, the modified mass $m={m}_{0}+\mathrm{\Delta}m={m}_{0}(1+\mathrm{\Delta}m/{m}_{0})$ would change the vibrational frequency to $\omega ={\omega}_{0}/\sqrt{1+\mathrm{\Delta}m/{m}_{0}}$. With a suitable choice of parameters then, this cantilever becomes an ultrasensitive mass detector. The sensitivity
$$S=\frac{\partial \omega}{\partial (\mathrm{\Delta}m)}\approx -\frac{{\omega}_{0}}{2{m}_{0}}$$
shows that the mass detection limit improves, through the frequency measurement, as the inverse fourth power of length.
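The frequency-shift relation $\omega ={\omega}_{0}/\sqrt{1+\mathrm{\Delta}m/{m}_{0}}$ and its small-$\mathrm{\Delta}m$ limit can be illustrated numerically; the silicon cantilever dimensions below are assumed for illustration, not taken from the text:

```python
import math

# Hypothetical silicon cantilever (illustrative values only)
rho, Y = 2330.0, 150e9            # density (kg/m^3), Young's modulus (Pa)
L, w, t = 10e-6, 1e-6, 100e-9     # length, width, thickness (m)

m0 = rho * L * w * t              # beam mass
I = w * t**3 / 12                 # moment of inertia of the cross section
ks = 12 * Y * I / L**3            # spring constant, as in the text
w0 = math.sqrt(ks / m0)           # natural frequency estimate (rad/s)

dm = 1e-18                        # femtogram-scale added mass (kg), assumed
w_shift_exact = w0 / math.sqrt(1 + dm / m0) - w0
w_shift_approx = -w0 * dm / (2 * m0)   # first-order sensitivity S * dm

print(w0, w_shift_exact, w_shift_approx)
```

For $\mathrm{\Delta}m/{m}_{0}\sim {10}^{-4}$, the first-order expression tracks the exact shift to better than a part in a thousand.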
This argument shows the importance of length and of the other parameters that determine the spring constant and the inertia of the structure. Size matters. Scaling size reduces effective mass. Vibrational amplitude will also reduce with size. Since electromechanical devices utilize multidomain coupling, it is pertinent to look at this size scaling, where the energy and forces may have dependence on length $[L]$, area ${[L]}^{2}$ and volume ${[L]}^{3}$. Table 5.1 shows the length scale dependence of different energies and forces.
Table 5.1: Dimensional dependence of energy and forces. The electrostatic field $\mathbf{E}$ must be smaller than the breakdown field ${\mathbf{E}}_{br}$. The magnetic field is proportional to the permeability ${\mu}_{0}$, the current (I) and the number of turns of a coil (n) and varies inversely with the length of the coil, L. If current density is kept constant, then the magnetostatic energy and forces follow as summarized. If heat dissipation, flow and temperature changes are significant, one must also consider a temperature gradient that varies inversely with L. Forces associated with thermal and frictional forms of energy are nonconservative. They couple a broadband of excitations. C is specific heat capacity, ${\mu}_{f}$ is the coefficient of friction, and ${R}_{\perp}$ is the reaction force at the surface in contact.
Electrostatic ($\mathbf{E}<{\mathbf{E}}_{br}$): Energy ${U}_{es}={\int}_{\mathrm{\Omega}}(1/2)\mathbf{E}\cdot \mathbf{D}\,{d}^{3}\mathbf{r}\sim {[L]}^{3}$; Force ${\mathbf{F}}_{es}=\mathbf{\nabla}{U}_{es}\sim {[L]}^{2}$

Magnetostatic ($B={\mu}_{0}nI/L\sim [L]$): Energy ${U}_{mag}={\int}_{\mathrm{\Omega}}(1/2)\mathbf{B}\cdot \mathbf{H}\,{d}^{3}\mathbf{r}\sim {[L]}^{5}$; Force ${\mathbf{F}}_{mag}=\mathbf{\nabla}{U}_{mag}\sim {[L]}^{4}$

Thermostatic: Energy nonconservative, ${\int}_{\mathrm{\Omega}}d(TS)\sim C\rho \mathrm{\Omega}\mathrm{\Delta}T$; Force ${\mathbf{F}}_{ts}\sim {[L]}^{3}$

Friction (atomically smooth interface): Energy nonconservative, $\int {\mu}_{f}{R}_{\perp}\,dr$; Force ${\mathbf{F}}_{f}={\mu}_{f}{R}_{\perp}\hat{\mathbf{r}}\sim {[L]}^{2}$
Frictional forces arise from quantum-mechanical exclusion and the transfer of energy to a broadband of degrees of freedom—energy loss paths such as the various vibrational modes—when objects are brought in contact. In macroscopic objects that are not atomically smooth, contact occurs only in a small area. At a minimum, three atomic scale regions suffice. The frictional force is independent of area. The difference between static and dynamic friction arises because, when the objects are in motion, they are separated farther from each other than when they are static, that is, in the lowest energy ground state. The effective contact area between them is now reduced, and thus the dissipative force is reduced. When surfaces are atomically smooth, adhesion is strong, friction is large, and the force is proportional to the physical area. Small and atomically smooth objects have a frictional force that varies as ${[L]}^{2}$.
An estimate for the mechanical strength of the beam can be obtained from the dynamic equation, Equation 5.9. In static conditions, ${\partial}^{4}u/\partial {y}^{4}=p/YI$, which relates the bending to Young’s modulus, the moment of inertia, and the load per unit length. This load per unit length is $p=g\rho wt\sim {[L]}^{2}$, and the moment of inertia is $I=w{t}^{3}/12$, independent of length, so the bending has a dimensional dependence of $u\sim {[L]}^{2}$. This goes together with the beam’s resonance frequency, which has the dependence ${\omega}_{0}\sim {[L]}^{-1}$. The sensitivity to mass, Equation 5.44, then has a ${[L]}^{-4}$ dependence.
It is this sensitivity to forces, arising as inertial response, that is of immense interest in precision measurements of orientation, acceleration, et cetera, and in fundamental measurements through the nanoscale. In Figure 5.3, the simplest circuit model is shown for the cantilever as an inertial sensor. Here, the displacement of the mass m is in the cantilever assembly’s reference frame, and the cantilever assembly is moving with an acceleration a in the laboratory reference frame.
The force equation is
$$m\frac{{d}^{2}x}{d{t}^{2}}+\gamma \frac{dx}{dt}+{k}_{s}x=-ma,$$
where γ is the damping coefficient. For a harmonic acceleration at a frequency ω, this leads to
The acceleration in the laboratory reference frame can be deduced in numerous ways—displacement of the proof mass, stress in the spring such as in the cantilever body, et cetera. Consider the proof mass displacement in these harmonic conditions:
If the acceleration is constant or slowly varying in time, then $\omega \ll {\omega}_{0}$, that is, the frequency of the forcing signal is much smaller than the natural frequency of the inertial sensor; then
$$|x|\approx \frac{ma}{{k}_{s}}=\frac{a}{{\omega}_{0}^{2}}.$$
As the spring constant ${k}_{s}\sim [L]$, and mass $m\sim {[L]}^{3}$, the sensitivity S of displacement to acceleration will vary as ${[L]}^{2}$—the inertial sensor’s sensitivity to acceleration increases as the second power of the length scale.
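The low-frequency limit $|x|\approx a/{\omega}_{0}^{2}$ can be illustrated with assumed numbers (not taken from the text):

```python
import math

# Illustrative proof-mass accelerometer (assumed values)
m = 1e-9                  # proof mass, kg
ks = 1.0                  # spring constant, N/m
w0 = math.sqrt(ks / m)    # natural frequency, rad/s

a = 9.8                   # quasi-static acceleration, m/s^2
x = a / w0**2             # low-frequency displacement magnitude, |x| = a/w0^2
print(x)                  # equals m*a/ks: about 9.8 nm here
```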
Actuators, the complements of sensors, derive their utility from the energy and the generated forces. So, the discussion of energy and forces in Table 5.1 is particularly apropos. Figure 5.4 shows an idealization of an electrostatic actuator with lateral and transverse motion, under an applied bias voltage of V(t), and with an air gap between the plates. The actuator consists of two conducting plates of width W and length L, separated by a distance t. For lateral actuation, the force is
$${F}_{lat}=\frac{1}{2}{\epsilon}_{0}\frac{W{V}^{2}}{t},$$
and for transverse actuation, the force is
$${F}_{tr}=\frac{1}{2}{\epsilon}_{0}\frac{WL{V}^{2}}{{t}^{2}}.$$
For the same dimensions and an energy density defined by the electric field $\mathbf{E}=V/t$, the transverse force is high—proportional to the product of energy density, width and length—compared to the lateral force, which is the product of energy density, width and plate separation. Plate separation t is usually significantly smaller than the length L of the plates. The transverse motion is subject to higher stress and squeezing, due to the displacement t and consequent dissipative losses. This also results in a lower quality factor $\mathbf{Q}$. The force for lateral motion is relatively insensitive to lateral displacement, while that for transverse motion is strongly sensitive to the separation.
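The comparison above, with each force written as the energy density times the relevant area, can be made concrete; the dimensions below are assumed for illustration:

```python
eps0 = 8.854e-12   # permittivity of free space, F/m

# Idealized parallel-plate actuator (assumed values): W, L plate dimensions, t gap
W, L, t, V = 10e-6, 100e-6, 1e-6, 10.0

u_E = 0.5 * eps0 * (V / t)**2    # electrostatic energy density, J/m^3
F_lateral = u_E * W * t          # energy density x width x plate separation
F_transverse = u_E * W * L       # energy density x width x length

print(F_transverse / F_lateral)  # = L/t = 100 for these dimensions
```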
Actuation can also be triggered thermally, and, although slow, as it is subject to the thermal time constant, a thermal actuator can unleash a large energy due to the higher energy density of thermal forms. Figure 5.5 shows a thermal actuator in a cantilever form consisting of a bimaterial strip of materials with different expansion coefficients.
Thermal expansion causes a change in length of $\mathrm{\Delta}L=L\alpha \mathrm{\Delta}T$ for a change $\mathrm{\Delta}T$ in temperature in a material of thermal expansion coefficient α. The change in elastic energy resulting from this change in length is $\mathrm{\Delta}{U}_{\theta}={k}_{s}{\mathrm{\Delta}L}^{2}/2$. This results in the force
The force varies as the second power of temperature differential and expansion coefficient and can be large. The other characteristic of thermal systems is the large energy density that can be stored.
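The quadratic dependence of the stored elastic energy on the temperature differential can be sketched as follows (illustrative values; the effective spring constant is an assumed placeholder):

```python
# Thermal-expansion energy stored in an actuator arm
alpha = 2.6e-6      # 1/K, thermal expansion coefficient (silicon, approximate)
L = 100e-6          # m, arm length (assumed)
ks = 1.0            # N/m, effective spring constant (assumed placeholder)

def stored_energy(dT):
    dL = L * alpha * dT           # change in length: dL = L * alpha * dT
    return 0.5 * ks * dL**2       # elastic energy: dU = ks * dL^2 / 2

# Doubling the temperature differential quadruples the stored energy
print(stored_energy(200.0) / stored_energy(100.0))  # 4.0
```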
Table 5.2 shows the nearly three orders of magnitude difference in stored energy between electrostatic and thermal assemblies. So, if speed is not a consideration, thermal systems are a possibility that should be considered.
Table 5.2: Maximum electrostatic energy density and thermal energy density in useful conditions in nanoscale geometries. For thermal energy density, the material is assumed to be silicon, with $c\approx 700\ J/kg\cdot K$, and $\rho \approx 2.33\times {10}^{3}\ kg/{m}^{3}$. The strongest FeCo magnets have a remanent strength of about $2.5\ T$. ε is strain. Nanowires of silicon have been measured to have a maximum strain of $4.5\ \mathrm{\%}$.
Maximum energy density ($J/{m}^{3}$):

Electrostatic: ${\mathbf{E}}_{br}\approx 3\times {10}^{8}\ V/m$; ${U}_{es}=(1/2){\epsilon}_{0}{\mathbf{E}}_{br}^{2}\approx 4\times {10}^{5}$

Magnetostatic: ${M}_{s}=2.5\ T$ (FeCo); ${U}_{ms}={M}_{s}^{2}/2{\mu}_{0}\approx 2.5\times {10}^{6}$

Thermal: $\mathrm{\Delta}T\approx 350\ K$; ${U}_{\theta}=c\rho \mathrm{\Delta}T\approx 5.8\times {10}^{8}$

Mechanical (silicon $30\ nm$): ${\epsilon}_{max}=0.045$, $Y=150\ GPa$; ${U}_{mech}=(1/2)Y{\epsilon}^{2}\approx 1.5\times {10}^{8}$
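The entries of Table 5.2 follow directly from the constants in its caption; a quick check:

```python
eps0 = 8.854e-12                   # F/m
mu0 = 4e-7 * 3.141592653589793     # H/m

U_es = 0.5 * eps0 * (3e8)**2       # electrostatic, at the breakdown field
U_ms = 2.5**2 / (2 * mu0)          # magnetostatic, at M_s = 2.5 T
U_th = 700.0 * 2.33e3 * 350.0      # thermal, c*rho*dT for silicon
U_mech = 0.5 * 150e9 * 0.045**2    # mechanical, at 4.5% strain in silicon

print([f"{u:.1e}" for u in (U_es, U_ms, U_th, U_mech)])
```

The roughly three-orders-of-magnitude gap between the electrostatic and the thermal entries is what the following paragraph refers to.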
A comb drive is a common form of electrostatic actuator employed in inertial measurement systems where both transverse and lateral actuation exist. It is useful to look at its sensitivity using the simplistic analysis that we have employed so far. Figure 5.6 shows a comb drive with a few of the parameters relevant to practical structures. Comb drives are either balanced arrangements, when a straight-line movement is desired, or round, when a rotation is desired. The specific example shown here is linear. It consists of two interlocked combs, one static and one moving, with conducting fingers.
Let y_{0} be the unbiased transverse gap between the fingers, and L_{0} the finger length, of which ${x}_{0}=L$ is the overlap. In this comb drive arrangement, two primary capacitances—C_{1} and C_{2}—represent the coupling of electrostatic forces, and we assume a fringing capacitance ${C}_{\pi}$. If N_{r} is the number of fingers in the moving part, the primary capacitance for a thickness t in the depth direction of the figure is
Were the arrangement symmetric, that is, $y=0$, the capacitance would be $C={C}_{\pi}+2{N}_{r}{\u03f5}_{0}Lt/{y}_{0}$. Any differences in alignment show up as a differential change in the capacitances. The same is true for applied voltages. So, if one applies a voltage V_{y} to the moving part, and a differential voltage 2V between the static fingers, there is a net force on the moving part. To the first order, the electrostatic force, the change in energy and the displacement are related as
where ${C}_{0}=2{N}_{r}Lt/{y}_{0}$ is the primary capacitance. The sensitivity of the capacitance, measurable through charge, is therefore related as
For an average gap, ${y}_{0}=1\phantom{\rule{thickmathspace}{0ex}}\mu m$, an overlap length $L=50\phantom{\rule{thickmathspace}{0ex}}\mu m$, a depth $t=2\phantom{\rule{thickmathspace}{0ex}}\mu m$, and a repeating number of ${N}_{r}=50$, the capacitance $C\approx 9.2\phantom{\rule{thickmathspace}{0ex}}fF$, and the sensitivity $S\approx 0.4\phantom{\rule{thickmathspace}{0ex}}fF/\mu m$. A change of a fF is relatively easy to measure, and a force that can cause a $\mu m$ of movement is relatively easy to apply in an unloaded comb drive. One can also derive the spring constant effect arising from the electric force, which modifies the mechanical spring constant of the comb drive: ${k}_{el}=dF/dy=2{C}_{0}V{V}_{y}/{y}_{0}^{2}$, since two opposite forces exist, and the spring constant is altered by this magnitude. Therefore, the resonance frequency of the structure changes to
Here, x is the displacement that results from the application of the bias voltage V.
This analysis is quite approximate—quasi-static and in its one-dimensional approximation. Motion will be both in plane and out of plane, due to the multidimensionality of the geometry. A qualitative view of this is shown in Figure 5.7. At the natural frequency of the structure, the resonance frequency ${f}_{0}={\omega}_{0}/2\pi $, the response is large, rising by more than an order of magnitude above the low-frequency response, and, beyond it, the response rapidly goes out of phase. The out-of-plane response is small, depending on the parasitic out-of-plane effects.
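A short numerical sketch of these comb-drive relations follows. The ideal primary capacitance here omits the fringing term, so it will differ from the quoted practical values; the spring constant, mass and bias voltages are illustrative assumptions, not values from the text:

```python
# Sketch of the comb-drive relations above. C0 is the ideal primary
# capacitance (fringing C_pi omitted); ks, m, V, Vy are illustrative
# assumptions, not values from the text.
import math

EPS0 = 8.854e-12                        # permittivity of free space (F/m)
y0, L, t, Nr = 1e-6, 50e-6, 2e-6, 50    # gap, overlap, depth, finger count

C0 = 2 * Nr * EPS0 * L * t / y0         # C0 = 2*Nr*eps0*L*t/y0
ks, m = 1.0, 1e-9                       # assumed spring constant (N/m), mass (kg)
V, Vy = 1.0, 1.0                        # assumed drive and bias voltages (V)

k_el = 2 * C0 * V * Vy / y0**2          # electrostatic spring softening
f0 = math.sqrt(ks / m) / (2 * math.pi)  # unbiased resonance
f  = math.sqrt((ks - k_el) / m) / (2 * math.pi)   # softened resonance

print(f"C0 = {C0*1e15:.1f} fF, k_el = {k_el:.3f} N/m, f0 = {f0:.0f} -> {f:.0f} Hz")
```

The softened resonance is always below the unbiased one, which is the basis of electrostatic frequency tuning in such structures.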
5.2 Coupled analysis
WE NOW DEMONSTRATE THE APPLICABILITY OF ENERGYBASED MATHEMATICAL TECHNIQUES to solving this problem coupling electrical and mechanical energy forms. We do this for the simple structural form shown in Figure 5.4(b): a movable plate changing the size of the gap in response to bias voltage V. We consider a conservative exchange in energy between the mechanical and the electrical forms. Let Q be the charge on the capacitor; V, the voltage between the plates; x, the displacement; and F, the mechanical force needed to keep the plate in place by opposing the electrostatic attraction. Conditions are assumed to be ideal—massless, without stiffness or damping, and with a pure one-dimensional capacitance. We take the displacement x and the charge Q as the independent coordinates, so $V=V(x,Q)$, and $F=F(x,Q)$. The boundary condition includes that, when Q vanishes, F too vanishes, that is,
To get to coordinates (x, Q), the power delivered to the structure must be the sum of the electrical power $(VI=VdQ/dt)$ and the mechanical power ($Fdx/dt)$, so, the net work on the capacitor during the time interval dt is
The total stored energy $W(x,Q)$ of the capacitor is partly electrical and partly mechanical, and if we know this energy function, the force and the potential follow as
The complementary energy function using the Legendre transformation is
The total differential then is
It follows then from Equation 5.58 that
This ideal capacitor is a linear electrical element, so one can explicitly write the state functions $W(x,Q)$ and ${W}^{\star}(x,V)$. The linear electrical property is reflected in $V=Q/C(x)$, where C(x) is the capacitance corresponding to position x of the moving plate.
Equation 5.57 shows us the way to find the different energies of interest. In our generalized coordinates, the path $(0,0)$ to $(x,0)$ to (x, Q) allows us to maintain $F=0$, and $V=0$, over the initial segment and then build electrical energy with no change in position over the second segment. From Equation 5.57 for this path,
Legendre transformation and the linear electrical constitutive relation of $V=Q/C(x)$ allows us to write the coenergy as
Knowing $W(x,Q)$ and ${W}^{\star}(x,V)$, we may write the mechanical force necessary to balance the electrostatic force in either choice of coordinates. From energy,
From coenergy,
This Legendre transformation–based approach lets us, by choosing independent coordinates and writing work equations, find parameters of interest conveniently from scalars, unlike the vectorbased approach that requires a yeoman’s work.
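A small symbolic check of this equivalence for the parallel-plate case can make the point concrete. In the sketch below (using sympy), the gap coordinate g and the sign convention that F is the external holding force are assumptions of the sketch, not notation from the text:

```python
# Sketch: force from energy W(x, Q) and from coenergy W*(x, V) agree
# for a parallel-plate capacitor with C(x) = eps*A/(g - x).
import sympy as sp

x, Q, V, eps, A, g = sp.symbols('x Q V epsilon A g', positive=True)
C = eps * A / (g - x)                  # capacitance at plate position x

W     = Q**2 / (2 * C)                 # stored energy W(x, Q)
Wstar = C * V**2 / 2                   # coenergy W*(x, V) = QV - W

F_from_W     = sp.diff(W, x)           # F = dW/dx at constant Q
F_from_Wstar = -sp.diff(Wstar, x)      # F = -dW*/dx at constant V

# Substituting the constitutive relation Q = C V shows the two agree:
residual = sp.simplify(F_from_W.subs(Q, C * V) - F_from_Wstar)
print(residual)   # prints: 0
```

The scalar state functions carry all the information; the vector force falls out by differentiation in either coordinate choice, which is the convenience argued for above.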
Similar to an electric actuator, one can visualize a magnetic actuator and sensor where current flowing in a loop causes a magnetic field, or a magnetic field causes a current flow. For an electromagnet placed in an external magnetic field that is out of the plane of the current, counter forces will generate a torque to align the magnetic fields—the external one, and that generated by the current. Figure 5.8(a) shows a conceptual example where the current I flowing through a structure of dimension L will cause a moment that is proportional to ${L}^{2}IH$. The magnetic field used to actuate the diaphragm vibration in speakers in audio systems is an example of such usage. The piezoelectric effect—mechanical-electric coupling—converts applied voltage to crystal deformation, which is employed for relatively rapid actuation. One can employ this deformation for actuation such as the opening and closing of a valve that controls flow, as shown in Figure 5.8(b). This could also be done by thermal means, using expansion and contraction. So, actuators based on a variety of effects that can exert a force exist.
These examples, which illustrate the coupling of energy that we associate with actuators as a mechanical form, are manifestations of the exchange of coupled energy within forms—electromagnetic, mechanical or gravitational—of the macroscopic environment that we are interested in. The mechanical form is the classical view of the manifestation under quantum-mechanical constraints. The spring constant (k_{s}), for example, is the linear, that is, elastic, term of a response arising from the spatial dependence of quantum-mechanical considerations of energy lowering and increase. To tackle these exchanges, one should work with approaches that are scalar and that use the energy as the underlying consideration from which the vector fields can be extracted.
A major deficiency of the discussion to this point is that, in each of the examples coupling electrical, magnetic or other forms of energy with mechanical energy, we employed an ad hoc approach suitable only for quite approximate low-order estimates. Forces are vectors, for which the choice of coordinate systems and reference frames determines the complexity and tractability of the calculation, particularly when more than one interaction needs to be accounted for.
The principle of virtual work states that, in order for a system of particles to be in mechanical equilibrium, virtual work done on all the particles must vanish, that is,
There are significant limitations here. If it is a large collection of particles forming the macroscopic assembly, any integral of external forces determined by working through the displacement of the center-of-mass point, $\sum \left(\int {\mathbf{F}}_{i,ext}\cdot d{\mathbf{r}}_{CM}\right)$, is not the real work done. Forces have not been multiplied by individual displacements but rather summed and applied through the center-of-mass displacement. Any kinetic energy associated with the motion that results will not be in terms of the velocity of the center of mass; that is, $\mathrm{\Delta}\left(M{v}_{CM}^{2}/2\right)$ is not the change in kinetic energy in general—for example, work that causes an object to rotate without a center-of-mass motion. The real work is $\sum \left(\int {\mathbf{F}}_{i,ext}\cdot d{\mathbf{r}}_{i}\right)$. The virtual work principle applies only to static conditions; it cannot take into account perturbations arising from motion, for example, energy lost to unaccounted degrees of freedom represented in friction.
Hamilton’s principle—the principle of least action—and the invariant action functional, through its Lagrangian, provide a natural way of tackling these problems via the Euler-Lagrange equation.
Consider the simplified electromechanical actuator shown in Figure 5.9, where there is a single degree of freedom of movement for one of the plates of a capacitor across which a bias voltage V(t) can be applied. This moving plate responds with a spring constant of k_{s}. The choice of y as an independent coordinate is obvious. For the second coordinate, we have a choice to make—we can choose either the voltage V or the charge Q on the plates. Let us first choose V as the general coordinate with the objective to determine position y and charge Q.
The potential energy stored arises from the mechanical stored energy and the electrostatic stored energy.
From the calculus of variations, we may write the following for the condition of minimization of the Lagrangian:
which implies
Here, the first term is the elastic stress force (F_{s}), and the second is the balancing electrostatic force (F_{es}). In matrix notation, the dynamics of the actuator in terms of the independent coordinates of position and voltage are described by
We could have chosen charge Q as the independent coordinate that determines the electrostatic energy. It complements y, which determines the mechanical energy. In the absence of bias voltage V(t), we choose mechanical equilibrium at the plate separation of t_{0}. This position is the reference for displacement. The capacitance is $C=\u03f5A/({t}_{0}-y)$. The electrostatic force for any displacement from equilibrium is
balanced by the spring force ${F}_{s}={k}_{s}y$. The balance equations can be written as
which have position and charge as the independent variables. This simply states the balance of mechanical and electrical forces for the plate, and the balance between the forces from charge on the plate and those from the applied bias voltage. Either approach suffices; we have written the total energy and, from it, derived the forces in our choice of independent coordinates.
Now we look at the implication of these equations. The force balance equation is
This is an algebraic equation of third degree in y—a nonlinear equation. The degree and the nonlinearity have physical manifestations in the actuator’s response. The former means three solutions; below pull-in, all are real—one that we will see is stable, one unstable, and one disallowed because of the boundary conditions of the physical system. The nonlinearity, as we will discuss later, leads to chaos in the response.
The energy of interest in the system is
The energy minima occur at
and
For $y<{t}_{0}$, we find that there are two solutions. But, $y>{t}_{0}$ is an unphysical circumstance. The moving plate cannot get past the other plate, which is located at $y={t}_{0}$. The moving plate pulls in and stops at $y={t}_{0}$.
This third degree equation leads to a set of conditions, subject to the applied voltage and the rest of the parameters of the system, where either there are energy minima at open-gap positions, or the nonlinearity causes the plate to be pulled into contact with the other plate. The pull-in voltage ${V}_{\pi}$ is the bias voltage where a small perturbation snaps the moving plate into contact with the static plate even when the moving plate is distant from the static plate.
This is the onset of instability.
The stable solution of Equation 5.75 exists for $y<{t}_{0}$ at bias voltages of
This bias voltage solution implies that, just at the onset of pullin,
So, pull-in happens when
As the bias voltage is increased and Coulombic attraction brings the plates closer, at $y={t}_{0}/3$ and the corresponding bias voltage ${V}_{\pi}$, the plates snap together.
These results are conveniently visualized in a dimensionless form. Using $\zeta =y/{t}_{0}$ for spatial coordinates, and $\upsilon =V/{V}_{\pi}$ for bias voltage, the governing equation is
in dimensionless form, with a dimensionless elastic force of ${f}_{s}=\zeta ={F}_{s}/{k}_{s}{t}_{0}$, and a dimensionless electrostatic force of
Figure 5.10(a) shows this solution with varying parameters. For voltages where $\upsilon <1$, that is, $V<{V}_{\pi}$, two solutions exist where the elastic and electrostatic forces can balance with the plates still apart—one stable, the other unstable. At $\upsilon =1$, that is, $V={V}_{\pi}$, these two merge into the single pull-in solution at $\zeta =1/3$. Any small perturbation then causes pull-in and brings the plates into contact. For $\upsilon >1$, while an algebraic solution exists, the physical structure doesn’t allow it. Figure 5.10(b) shows this change as the voltage is varied. The nonlinearity of the equation is reflected in the nonlinearity of separation as increasing voltage brings the plates together. It arises from the relationship in energy that we have derived,
where the last term is a nonlinear term in position that continuously increases with decreasing spacing. It causes the snapping of pull-in and the chaotic effects that we look at later.
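The dimensionless balance can be solved directly. Writing the force balance as $\zeta {(1-\zeta )}^{2}=(4/27){\upsilon}^{2}$, which follows from the definitions of ζ, υ and ${V}_{\pi}$ above, a short sketch finds the open-gap equilibria:

```python
# Open-gap equilibria of the dimensionless pull-in relation
#   zeta * (1 - zeta)^2 = (4/27) * upsilon^2,
# equivalent to ks*y = eps*A*V^2/(2*(t0 - y)^2) with zeta = y/t0, upsilon = V/V_pi.
import numpy as np

def equilibria(upsilon):
    """Real roots zeta in [0, 1): plate positions where forces balance."""
    roots = np.roots([1.0, -2.0, 1.0, -(4.0 / 27.0) * upsilon**2])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-6 and 0 <= r.real < 1)

print(equilibria(0.5))   # below pull-in: one stable and one unstable balance point
print(equilibria(1.0))   # at pull-in: the two merge at zeta = 1/3
print(equilibria(1.1))   # above pull-in: no open-gap equilibrium; the plate snaps in
```

The cubic’s third root always lies beyond $\zeta =1$ and is excluded as unphysical, mirroring the discussion above.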
We now take a more complex example of usage employing the coenergy approach: a microphone based on capacitive response—a mechanical-electrical coupling, but now including more realism through the use of circuit elements. Our model for this microphone, where the diaphragm has a mass m and is coupled through a resistor and an inductor to a voltage source, is shown in Figure 5.11. The moving plate of the diaphragm forms a variable capacitor and is mounted, that is, attached to an anchor, modeled here with a spring constant k_{s} and the dissipation constant ${\gamma}^{{}^{\prime}}$. In equilibrium, a charge Q_{0} on the capacitor produces an attractive electrostatic force F_{e0} that is balanced by the elastic force of the spring. We employ x_{0} for the equilibrium gap and x_{1} for the equilibrium displacement of the spring. Any excitation around equilibrium causes oscillation about this equilibrium. The capacitance seen by the electrical network is $C(x)=\u03f5A/({x}_{0}-x)$.
The electrostatic force between the plates at equilibrium is
using W_{e} to denote the electrostatic energy. This force is balanced by the mechanical force ${k}_{s}{x}_{1}$. We now use the generalized coordinates (x, Q). Current, a dependent parameter, is $I=\dot{Q}$. To determine the Lagrangian, the different kinetic and potential energies are
where ${W}_{m}^{\star}$ is the mechanical coenergy, and q is the excess charge, so that $Q={Q}_{0}+q$. The structure has dissipation and so has nonconservative energy and forces. We write these as
and the nonconservative work is
The Lagrangian ($\mathcal{L}$) is
leading to
The two Lagrange equations in the two generalized coordinates then can be written as
with equilibrium setting the boundary conditions where x, $\dot{x}$, $\ddot{x}$, F, q, $\dot{q}$ and $\ddot{q}$ all vanish. So, of the two Lagrange equations, at equilibrium, the first reduces to the balance of elastic and electrostatic forces that we have already found, and the second gives
If small perturbations are assumed for charge and position, then we can keep only the first order terms of the product terms, that is,
So, off equilibrium, the Lagrange equations reduce to
The third term in this equation is the ratio of the positional disturbance and the equilibrium gap, since ${x}_{0}=\u03f5A/{C}_{0}$, and the fourth term is the equivalent for charge. Let ${I}_{0}={Q}_{0}/\u03f5A$; then, the effect of our small perturbation can be represented by
The transfer function—the relationship between acoustic force on the diaphragm and the voltage across the resistor—follows directly from this.
We went through this exercise to stress that, by using the approach of Lagrange, relatively complex energy couplings can be analyzed in a straightforward procedure. Nanoscale systems are used for detecting vibrations, measuring different forces and providing stability, often while working at measurement capability limits prescribed by noise and the measurement approach. The Lagrangian methodology gives a means of analyzing this myriad of diverse transduction mechanisms, to the first order, analytically.
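The small-signal transfer function can be sketched in the frequency domain. The coupled impedances below follow the common linearization of such an electrostatic-microphone model, with the coupling written as ${E}_{0}={Q}_{0}/\u03f5A$; the structure of the coupling and all parameter values are illustrative assumptions of this sketch, not quantities taken from the text:

```python
# Frequency-domain sketch of a linearized capacitive microphone: a
# mass-spring-damper coupled to a series R-L-C loop through E0 = Q0/(eps*A).
# All parameter values are illustrative assumptions.
import numpy as np

m, gamma, ks = 1e-6, 1e-4, 10.0      # diaphragm mass, damping, spring constant
Lind, R, C0  = 1e-3, 1e3, 1e-10      # loop inductance, resistance, rest capacitance
E0 = 1e4                             # electromechanical coupling Q0/(eps*A)

def H(omega):
    """Voltage across R per unit harmonic acoustic force on the diaphragm."""
    Zm = -m * omega**2 + 1j * gamma * omega + ks        # mechanical side
    Ze = -Lind * omega**2 + 1j * R * omega + 1 / C0     # electrical loop side
    X = 1.0 / (Zm - E0**2 / Ze)      # displacement response, coupling folded in
    Q = E0 * X / Ze                  # charge perturbation riding on Q0
    return 1j * omega * R * Q        # v_R = R dq/dt

for f in (100.0, 503.0, 1e4, 1e5):   # 503 Hz ~ mechanical resonance here
    w = 2 * np.pi * f
    print(f"f = {f:8.0f} Hz, |H| = {abs(H(w)):.3e} V/N")
```

The response peaks near the diaphragm’s mechanical resonance and falls off away from it, which is the qualitative behavior the transfer function above is meant to capture.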
It is pertinent to remark on motion, since our discussion has been entirely quasistatic.
The primary effect here is due to the motion in a gas or fluid environment. It is subject to fluctuation-dissipation. In the case of a beam, there are fast phenomena, through atoms and molecules interacting with the beam randomly, and there is the slow response of the beam to the applied forces. The beam response feels the friction of the fast response—they are connected. We will dwell on the connections between fast and slow phenomena later, as they determine the limits for precision measurements, as well as the characteristics, such as power spectra, through which we may measure. But, for now, we need to understand the response of the beam in this fluidic environment, to determine the speed of the response in a framework where we may ignore the fluctuations and capture the effect in the friction caused by the fluid. The motion of the beam will be affected. The Bernoulli effect is precisely the complement of what we are interested in—fluid flow around a shape such as a wing.
To understand this, we must look at the conservation equations of fluid flow under conditions commonly encountered. There are five primary variables: the position r, the time t, the energy reflected in the entropy per unit mass s, the fluid’s mass density ρ, and the viscosity η, which reflects the shear stress response—a form of fluid friction. The relationships
where $\rho (r,t)$ is the mass density, $v(r,t)$ is the velocity, $s(r,t)$ is the entropy per unit mass, $p(\rho ,s)$ is the pressure, η is the viscosity, and κ is the thermal conductivity, provide a continuum description when the disturbances have scale lengths longer than the mean free paths of the fluid molecules. The first of these describes the conservation of mass; the second, the conservation of energy; and the third, the conservation of momentum. The viscosity η is friction, and these equations are the conservation laws in fluids. They are equivalent to the conservation equations employed for electron flow in semiconductors in the continuum approximation—the moment equations derived from Equation 2.74. Viscosity here abstracts the collective molecular fluctuation effects in a single parameter. It is the friction.
The form we commonly use and call the Navier-Stokes equation references only one of these equations—the third equation, which is valid for an incompressible fluid when there are no other bodily forces, such as electrostatic or gravitational forces, present.
More precisely, and more generally, this equation—the Navier-Stokes equation—may be written as
where $\mathbb{T}$ is the deviatoric total stress—of order 2, reflecting the deviations from the mean normal stress tensor—and f is the body force per unit volume—gravity, electrostatic or any other. Absent this force f, and assuming an incompressible fluid, this equation reduces to the third equation of the group of Equations 5.94. The left hand side of the equation is the contribution of fluid mass—from changes in velocity and its accumulation—so a divergence. The right hand side has the force and the consequence of changes in the velocity coordinate of the phase space. A moving plate or beam will be subject to forces due to the gas or liquid environment it is in during the movement. This is the complement of Bernoulli’s principle—a change of reference frame. And these effects are well described for classical motion in the continuum description. The Navier-Stokes equations are nonlinear partial differential equations. So, additional nonlinearities arise due to the interaction between the environment and the moving plate.
The resistance that the fluid environment places against the motion of the cantilever beam considered earlier in the time-dependent beam motion problem represented in Equation 5.9 now comes in under an applied force $F={F}_{0}exp(i\omega t)$:
where
β here is the real part of $1/{d}_{11}$, where d_{11} is the strain tensor component. We have written this resistance as proportional to the instantaneous velocity of the vibrating beam, without proof. F_{0} is the amplitude of the force, and, as a reminder, ρ is the mass density of the beam, w is the width, t is the thickness, I is the inertia, and Y is its Young’s modulus. We are only considering the real part of the fluid resistance. An additional time-dependent displacement term has appeared in the equation, and together with it, another nonlinearity due to fluidic damping. This equation describes the time-dependent evolution more accurately—the first term is the inertial component, the second is the damping, and the third is the elastic restoring force of the beam itself. For example, if a harmonic external force is impressed, the response will have additional frequency components. One can analyze this through the multiple eigenmode analysis we employed in the nondamped, nonforced example earlier. The displacement eigenfunctions ${\mathbf{Y}}_{n}$ still satisfy Equation 5.27—the orthogonality of the function—and Equation 5.28—the orthogonality with the fourth order derivative. An additional property of the eigenfunction is
These eigenfunctions form the orthogonal basis set from which we can construct the real solution through a linear combination. Let
be the solution. Equation 5.96, multiplied by ${\mathbf{Y}}_{n}$ and integrated over the length, then gives the time-dependent equation for a harmonic forcing function as
Here, we have simplified the appearance of the equation by using dots and double dots for time derivatives. The other parameters of the equation are given by
m_{n} is a mass term related to the eigenmode whose length effect is included in the integral with position, c_{n} is related to how the beam damps in the eigenmodes, k_{sn} is the elastic energy component of the eigenmode, and F_{n} is the force amplitude resulting from integration across the beam at the frequency ω. The nth mode is a result of the coupling between the applied force, the damping and the beam. This is a second order equation, quite solvable—an equation of damped response encountered in damped systems, for example, the response of an RLC network. The solution is ${u}_{n}={G}_{n}(\omega ){F}_{n}exp(i\omega t)$, with
where ${\zeta}_{n}={c}_{n}/2{m}_{n}{\omega}_{n}$. The forced displacement of the beam in the presence of fluidic damping and a harmonic force is
This is the complete solution for damped and forced conditions employing eigenmode techniques.
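In this single-mode picture, the magnitude of ${G}_{n}(\omega )$ can be sketched directly. The mode frequency, damping ratio and the normalization ${k}_{sn}=1$ below are illustrative values, not taken from the text:

```python
# Magnitude of the single-mode transfer function implied above:
#   G_n(w) = (1/k_sn) / (1 - r^2 + 2j*zeta_n*r),  r = w/w_n.
# Mode frequency, damping ratio and k_sn = 1 are illustrative.
import numpy as np

def Gn(omega, omega_n, zeta_n, k_sn=1.0):
    r = omega / omega_n
    return (1.0 / k_sn) / (1.0 - r**2 + 2j * zeta_n * r)

omega_n, zeta_n = 2 * np.pi * 1e4, 0.005
w = np.linspace(0.9, 1.1, 2001) * omega_n      # sweep around the mode
peak = np.abs(Gn(w, omega_n, zeta_n)).max()
static = abs(Gn(1e-3 * omega_n, omega_n, zeta_n))

# The resonant-to-static amplitude ratio recovers Q = 1/(2*zeta_n).
print(f"static ~ {static:.3f}/k_sn, peak ~ {peak:.1f}/k_sn, ratio ~ {peak/static:.0f}")
```

With ${\zeta}_{n}=0.005$, the peak sits roughly a factor of 100 above the static response, anticipating the quality-factor discussion that follows.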
Now let us assume that the impressed force is at a frequency close to a mode, the nth mode, that is, $\omega \approx {\omega}_{n}$. The nth term of the expansion, the natural frequency of the nth mode being closest to the frequency of the impressed force, will be the dominant term, since the coefficient ${\zeta}_{n}\ll 1$, and $\omega \approx {\omega}_{n}$. So, the displacement $u\approx {F}_{n}{G}_{n}{\mathbf{Y}}_{n}(y)exp(i\omega t)$.
The mode equation 5.100 has the same form as the equation for a single degree of freedom under forced vibration. The full problem is an eigenmode expansion of a nonlinear partial differential equation, where one must find a sufficient number of the terms to get an accurate estimate. Near resonance, however, the beam responds as if there is only a single degree of freedom—that of the nth mode, which is closest to the impressed frequency. The damping in the system can then be understood within this single degree of freedom. Consider the real harmonic force, the in-phase component of $exp(i\omega t)$, $F={F}_{n}sin{\omega}_{n}t$. The displacement of the mass m behaves like the case shown in Figure 5.3—a mass vibrating under force, with kinetic and potential energy exchange under damping. The solution is
The peak kinetic energy of the eigenmode is
The total external work in a cycle is
Therefore, from the energy dissipated per cycle, the quality factor $\mathbf{Q}$ is related as
There is a relative energy loss of $4\pi $ times ${\zeta}_{n}$—related to the eigenmode amplitude coefficient and frequency and inversely related to the eigenmode mass, so to the properties of the beam.
Now consider the response when an excitation is shut off and the vibration decays due to damping. Assume that the beam’s free vibration decays from a starting condition consistent with the nth eigenfunction’s mode. Equation 5.103, starting from this self-consistent initial condition of the nth eigenmode—chosen for simple mathematical form—then has the solution
where ${\omega}_{d}={\omega}_{n}{(1-{\zeta}_{n}^{2})}^{1/2}$, and $\tau =1/{\omega}_{n}{\zeta}_{n}$, with ${\zeta}_{n}\ll 1$; τ reflects the decay time scale.
Figure 5.12 shows the response that Equation 5.100 indicates. Figure 5.12(a) shows it for the decay of the nth eigenfunction starting from a pure nth mode initial condition. Figure 5.12(b) shows the response for a harmonic force at the frequency closest to ${\omega}_{n}$. The time-dependent solution has a progressively decreasing amplitude in time. The ratio of successive cycle amplitudes is $exp(-2\pi {\zeta}_{n})\approx 1-2\pi {\zeta}_{n}$. If a cantilever is employed in controlling a position and one wishes to damp motion, one would make the factor ${\zeta}_{n}$ as large as possible. If then one were to apply a static force F_{n}, the static deflection of the beam would be ${F}_{n}/{k}_{sn}$, and the ratio of the resonant amplitude to this static deflection would be $1/2{\zeta}_{n}$. The frequency response has a resonance at a frequency close to ${\omega}_{n}$, with an energy half-width $\mathrm{\Delta}\omega $—where the amplitude falls to $1/\sqrt{2}$ of the peak—of $\mathrm{\Delta}\omega \approx 2{\zeta}_{n}{\omega}_{n}$. The quality factor $\mathbf{Q}={\omega}_{n}/\mathrm{\Delta}\omega $, and ${\zeta}_{n}=1/2\mathbf{Q}$. A high $\mathbf{Q}$ means less loss per cycle as well as a more accurate measurement of ${\omega}_{n}$, since the peak is sharp. This, in turn, means a more accurate measurement of forces.
The use of high $\mathbf{Q}$ systems is therefore essential for accurate measurements with cantilevers.
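A small sketch of the ring-down bookkeeping shows how $\mathbf{Q}$ can be read back from the cycle-to-cycle amplitude decay; the mode frequency and damping ratio below are illustrative:

```python
# Ring-down sketch: u(t) = exp(-t/tau) cos(w_d t) with tau = 1/(w_n*zeta_n).
# Reading the cycle-to-cycle amplitude decay back gives Q = 1/(2*zeta_n).
# Mode values are illustrative.
import math

omega_n, zeta_n = 2 * math.pi * 1e4, 0.005
omega_d = omega_n * math.sqrt(1 - zeta_n**2)   # damped oscillation frequency
tau = 1.0 / (omega_n * zeta_n)                 # amplitude decay time constant

T_cycle = 2 * math.pi / omega_d
ratio = math.exp(-T_cycle / tau)               # successive-amplitude ratio ~ 1 - 2*pi*zeta_n
Q_from_decay = math.pi / (-math.log(ratio))    # log-decrement estimate of Q

print(f"amplitude ratio per cycle = {ratio:.5f}, Q = {Q_from_decay:.1f}")
```

This log-decrement reading of a ring-down trace is a standard practical route to $\mathbf{Q}$ when a full frequency sweep is inconvenient.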
The damping arises not just from the ambient in which the cantilever operates but also from the internal frictional forces of the beam—the anharmonicity of the mechanical response, due to bulk and surface effects, as well as the anchoring of the structure. This beam effect can become important for long beams in materials such as silicon. Depending on the geometry, the anchoring can make the losses at the anchor very significant for the quality factor, even for small dimensional variations. Another important point regarding this analysis is that, in general, we are attempting to solve a wave equation. It is significantly more complex than the electromagnetic wave case: there are dissipation, nonlinearity and higher order terms. This means that only very specific, highly circumscribed problems will have analytic solutions. The eigenmode analysis is a good example of this analytical solution approach. The three-dimensional nature of the problems is an additional wrinkle. And the complexity is also connected to the multiphysics and multiscale nature of the problem—multiple interactions are simultaneously present, and more than one dimensional scale is important, so more than one method of analysis, for example, quantum and classical, will be important. For now, we restrict ourselves to realistic problems that can be subjected to classical analysis. We should be able to understand and extract important conclusions drawing on our classical analysis with quantum insights.
We now turn to the use of cantilever beams in precision measurements using this classical-quantum mix. These moving cantilever probe techniques are particularly apropos for quantum measurements, that is, ultrasensitive measurements at energy limits such as those of single quantum modes. Such measurements, for example, single-electron, single-photon or single-phonon measurements, or precise “wavefunction” measurements at a surface through charge, must necessarily be at the intersection of the classical with the quantum. We do this by first understanding the meaning and limitations of the classical description and then looking at fluctuation-dissipation in a different way than we did in our semiconductor physics discussion. We will start with understanding when the equipartition of energy breaks down.
The principle of equipartition of energy is quite useful in analyzing the properties of ensembles of independent classical particles that are not subject to exclusion constraints. The Maxwell-Boltzmann distribution function and the coordinates for the position ${\mathbf{q}}_{i}$ and momentum ${\mathbf{p}}_{i}$ suffice for description. Appendix D summarizes a short derivation of the expectation energy associated with each degree of freedom of motion. It is ${k}_{B}T/2$. So, in three-dimensional assemblies, each particle on an average will be expected to have $3{k}_{B}T/2$ of kinetic energy. The constraint is that there are no exclusion restrictions on position and momentum, except those that arise naturally through the thermodynamic constraints in statistics.
The first order effect that one may consider beyond this classical approximation is that of any noncontinuous effect—a discretization effect such as a quantum-mechanical constraint. For the classical picture to still be applicable, the discretization should be very small compared to this energy scale, that is, $\mathrm{\Delta}{E}_{i}\ll {k}_{B}T/2$. At the nanometer scale, confinement in any or all three dimensions in a semiconductor may break this rule. Low temperatures may also break this rule.
The allowed wavevector or, equivalently, momentum and energy spacings are very small, and the classical description is quite valid in nondegenerate conditions. If the material is degenerate, states below and around the Fermi energy are predominantly occupied, and the noninteracting free ranging of the classical approximation breaks down. Certain states are not randomly accessible anymore, because of the interaction and occupation that Fermi-Dirac statistics represents. Even in moving atoms, there is a size scale where this description would break down.
We can now tie this discussion of energy with Brownian motion, which underlies the damping term. Brownian motion, the mechanism that is one of the causes of damping for the cantilever, is a good example of thermal motion and noise for the cantilever beam. Figure 5.13 shows a set of fictitious particles, for example, inert gas molecules, in a classical distribution and undergoing Brownian motion, that is, scattering with each other. Because of the three degrees of freedom, the energy for each particle is
and, by equipartition of energy,
At what size scale will the Brownian motion be observable? The Brownian motion is observable when the energy associated with a single particle in its volume $\mathfrak{V}$ exclusion zone becomes comparable to the thermal energy constraint, that is, for a density ρ, where
The Brownian fluctuations represent a form of scale breakdown of the classical continuum description. When one’s resolution of observation is made more precise, discreteness becomes observable. This size-scale discretization in the quantum-classical span is relatable by the inequality $\mathrm{\Delta}q\mathrm{\Delta}p\ge \mathrm{\hslash}$, or $\mathrm{\Delta}E\mathrm{\Delta}t\ge \mathrm{\hslash}$, of Heisenberg uncertainty. At the large size scale, the quantum uncertainty, a fluctuation, appears washed away. Where does the classical-quantum crossover take place and become significant? Consider a classical description of molecules as objects with a mean separation $\stackrel{\u203e}{r}$ and a mean momentum $\stackrel{\u203e}{p}$. Heisenberg uncertainty here suggests the condition $\stackrel{\u203e}{r}\stackrel{\u203e}{p}\gg \mathrm{\hslash}$ for the classical description to be applicable. The de Broglie wavelength of the molecule is $\stackrel{\u203e}{\lambda}=h/\stackrel{\u203e}{p}=2\pi \mathrm{\hslash}/\stackrel{\u203e}{p}$. So, the usefulness of the classical description is restricted to those cases where $\stackrel{\u203e}{r}\gg \stackrel{\u203e}{\lambda}$. Only when the molecular separation is significantly larger than the de Broglie wavelength of the molecule does a classical description suffice. The classical description—an average behavior as a good description of an ensemble, with uniformity of the description of the property in space and time—fails when the size scale or the time scale over which the classical property is measured is limited. This is when one must account for uncertainty. In these situations, the ensemble description is not over sufficiently large numbers for it to be accurate enough to be useful.
Noise is the time fluctuation describing the spread in measurements in the conjugate coordinate—momentum. Brownian motion is the momentum fluctuation that is observable in space through a spread of measurements in its conjugate coordinate—time.
The intermediate domain of these scale relations occurs at small dimensions, when $\stackrel{\u203e}{r}\approx \stackrel{\u203e}{\lambda}$. Discreteness is still a good description, but the volume is small enough that the ensemble is also small. This domain is interesting and occurs at the range that one often encounters in measurements at their limits. This is the domain between quantum and classical in the midst of a small ensemble. Here one sees all forms of noise, such as shot noise, Brownian motion, and other phenomena at the intersection of quantum with classical.
Finally, when one gets to the limit $\stackrel{\u203e}{r}\ll \stackrel{\u203e}{\lambda}$, one must describe the molecules quantum-mechanically, that is, through the wavefunction $|\psi\rangle$, from which suitable properties may be extracted.
Some order-of-magnitude calculations about this region at the interface of classical and quantum-mechanical approaches are helpful in making a connection to the natural world. The volume of the region associated with molecules is $V={\stackrel{\u203e}{r}}^{3}N$, where N is the number of molecules. This means $\stackrel{\u203e}{r}={\left(V/N\right)}^{1/3}$. Equipartition of energy, valid in the classical conditions, implies ${\stackrel{\u203e}{p}}^{2}/2m=3{k}_{B}T/2$, that is, $\stackrel{\u203e}{p}={(3m{k}_{B}T)}^{1/2}$. This results in the following requirement for the de Broglie length in this particle or molecule gas
for classical approximations to be valid. This holds true generally—for particles ranging from molecules and atoms to electrons, photons and others that we often think of only quantummechanically.
Let us consider the constraints of this classical-quantum boundary for both molecules in air and electrons in solids. Air is predominantly nitrogen. The N$_2$ molecule has a molar mass of $28\phantom{\rule{thickmathspace}{0ex}}g/mole$, where a mole contains Avogadro’s number, $6.022\times {10}^{23}$, of molecules, so the molecular mass is $m\approx 4.65\times {10}^{-26}\phantom{\rule{thickmathspace}{0ex}}kg$. At the temperature $T=300\phantom{\rule{thickmathspace}{0ex}}K$ and a pressure P of one atmosphere, $N/V=P/{k}_{B}T\approx 2.5\times {10}^{19}\phantom{\rule{thickmathspace}{0ex}}molecules/{cm}^{3}$. This corresponds to a mean spacing of $\stackrel{\u203e}{r}\approx 3.4\times {10}^{-9}\phantom{\rule{thickmathspace}{0ex}}m$, and a de Broglie wavelength of $\stackrel{\u203e}{\lambda}\approx 3\times {10}^{-11}\phantom{\rule{thickmathspace}{0ex}}m$. So, for air, which consists mostly of nitrogen, $\stackrel{\u203e}{r}\gg \stackrel{\u203e}{\lambda}$, and the classical description is quite acceptable under these conditions of temperature and pressure. But this is not always true. Low temperatures or high pressures will conflict with this constraint.
Now consider the electron gas in a metal such as one of the alkali group—Li, Na, K, et cetera. The outermost orbital electron, one per atom, is potentially available for conduction. Let us consider the extreme of this condition, where all are available for conduction in the crystal. This corresponds approximately to $\stackrel{\u203e}{r}\approx 2\times {10}^{-10}\phantom{\rule{thickmathspace}{0ex}}m$. The electron mass is $m=9.1\times {10}^{-31}\phantom{\rule{thickmathspace}{0ex}}kg$, corresponding to a de Broglie wavelength of $\stackrel{\u203e}{\lambda}\approx 5\times {10}^{-9}\phantom{\rule{thickmathspace}{0ex}}m$. Here, $\stackrel{\u203e}{r}\ll \stackrel{\u203e}{\lambda}$, and the classical approximation is not appropriate for modeling such an ensemble. On the other hand, if one considers a nondegenerate electron ensemble in silicon, say at a density of $1\times {10}^{16}\phantom{\rule{thickmathspace}{0ex}}c{m}^{-3}$, then $\stackrel{\u203e}{r}\approx 4.5\times {10}^{-8}\phantom{\rule{thickmathspace}{0ex}}m$. Here, $\stackrel{\u203e}{r}\gg \stackrel{\u203e}{\lambda}$, and the classical approach will provide quite valid estimations.
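These order-of-magnitude boundaries are easy to verify numerically. The following sketch (Python) computes the mean spacing and the de Broglie wavelength for the two cases discussed above; the N$_2$ molar mass of 28 g/mol, the one-atmosphere ideal-gas density, and the metallic electron spacing of 0.2 nm are illustrative assumptions.

```python
import math

h = 6.626e-34    # Planck constant (J s)
kB = 1.381e-23   # Boltzmann constant (J/K)
NA = 6.022e23    # Avogadro's number (1/mol)
T = 300.0        # temperature (K)

def de_broglie(m):
    """Mean de Broglie wavelength lambda = h/p for p = (3 m kB T)^(1/2)."""
    return h / math.sqrt(3.0 * m * kB * T)

# Air: N2 molecules at one atmosphere (assumed molar mass 28 g/mol)
m_n2 = 28e-3 / NA                   # molecular mass (kg)
n_air = 101325.0 / (kB * T)         # ideal-gas number density (1/m^3)
r_air = n_air ** (-1.0 / 3.0)       # mean molecular spacing (m)
lam_air = de_broglie(m_n2)
print(f"air:   r = {r_air:.2e} m, lambda = {lam_air:.2e} m")   # r >> lambda

# Electron gas in an alkali metal: one conduction electron per atom
m_e = 9.1e-31                       # electron mass (kg)
r_metal = 2e-10                     # assumed mean electron spacing (m)
lam_e = de_broglie(m_e)
print(f"metal: r = {r_metal:.2e} m, lambda = {lam_e:.2e} m")   # r << lambda
```

The gas satisfies $\stackrel{\u203e}{r}\gg \stackrel{\u203e}{\lambda}$ by roughly two orders of magnitude, while the metallic electron ensemble violates it by more than an order of magnitude, consistent with the classical and quantum treatments chosen above.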
When one is in this small-dimensional limit of observation, one can directly observe the fluctuations instead of just a dissipation effect such as drag. In Figure 5.14, which is a refinement of Figure 5.13, a microsystem $\mathfrak{S}$ of mass m interacts with its environment $\mathfrak{R}$, which is at temperature T. Our interest is in finding not only the energetics of these interactions and their consequences for system properties, such as the ones that the fluctuation-dissipation theorem points to, but also specifically the consequences for the ultimate limits of measurements that the characteristics of these interactions portend. This entails understanding the consequences for a nanoscale system, where the energetics arise from gains due to applied forces or losses due to dissipation, in time or in its reciprocal space form—frequency—that is, in the power spectral characteristics. We will find that this power spectrum contains much of the information important to us in defining limits of operation of devices, such as the resolution limit of measurement or the finest scale of actuation.
The response of mass m is subject to two forces—the externally applied force, and any force that represents the interaction with the environment. The first is a slowly varying force. The second is a rapidly varying force due to the fluctuating interactions with the environment. We write the slowly varying force as $\mathbf{F}$, and the rapidly varying force as $\mathfrak{F}$:
Consider molecules moving at speeds of the sound velocity, so $\sim \phantom{\rule{negativethinmathspace}{0ex}}{10}^{3}\phantom{\rule{thickmathspace}{0ex}}m/s$. A travel distance of about the size of a simple unit cell of a solid, $\sim \phantom{\rule{negativethinmathspace}{0ex}}0.5\phantom{\rule{thickmathspace}{0ex}}nm$, occurs in about $0.5\phantom{\rule{thickmathspace}{0ex}}ps$. This is a very small time. Another way of saying this is that the correlation time ${\tau}^{\ast}$ of the rapidly varying force $\mathfrak{F}$ representing the fluctuating interactions is very small. The slow system response takes place on a time scale much larger than the time scales over which any fluctuating rapidly varying force event has an effect that can be directly correlated to that event.
Consider the case when there is no externally applied force in the system outlined in Figure 5.14. Absent external force, the averaged velocity also vanishes.
If we consider time scales $\tau \gg {\tau}^{\ast}$,
When the time scales of interest are larger than the correlation time ${\tau}^{\ast}$, one may use an averaged force to capture the effect of the rapidly varying force. It is this rapidly varying force acting in time that causes a system to move towards thermal equilibrium, absent external forces. In the presence of an external force, it is also this force that causes the system to reach a steady state.
We can now mathematically describe these fluctuation correlations. The change in the system $\mathfrak{S}$ after a time $\tau \gg {\tau}^{\ast}$ is describable by Boltzmann statistics. The accessible states are determined by the reservoir $\mathfrak{R}$. The most probable microstates are the ones with the highest degeneracy, as determined through this exchange with the reservoir. The energy change from ${E}^{{}^{\prime}}$ by $\mathrm{\Delta}{E}^{{}^{\prime}}$ for $\mathfrak{S}$, after a time ${\tau}^{{}^{\prime}}$, is then described by
Here, Ω represents the number of most probable microstates, and ${\mathfrak{p}}_{\sigma}$ represents the probability of any property associated with this likely collection of microstates. This probability is related to the energy change exponentially. When the time elapsed is large enough, that is, significantly greater than the scattering time, the system $\mathfrak{S}$ is likely to be in the equally likely collection of all accessible states—the most probable microstate collection Ω is determined by the statistics of the reservoir, for which the system $\mathfrak{S}$ is a minor perturbation through the exchanges taking place in the fluctuation interactions.
How does this system change as the reservoir itself evolves?
We know, with small perturbations in energy,
Here, the superscript $0$ indicates thermal equilibrium. This description of probability change in terms of the time of the system describes the evolution of any property that is commensurate with these probabilities. So, the expectation of the rapidly varying force is
Computation of the expectation with equilibrium probabilities means that, at thermal equilibrium, $\u27e8\mathfrak{F}\u27e9=0$. Therefore, for a small perturbation away from equilibrium,
All these relationships are good approximations for times that are significantly larger than correlation times, that is, $\tau \gg {\tau}^{\ast}$. These are time scales where the strong time correlations of rapidly timevarying interactions are averaged out.
We can use these relationships to calculate the magnitudes of the effects:
Since the velocity changes slowly over the correlation times ${\tau}^{\ast}$ of interest, that is, it is perturbed and is observable, but the averages are changing slowly,
We substitute $s={t}^{\mathrm{\prime}\mathrm{\prime}}-{t}^{\mathrm{\prime}}$, and we get the response to the force in the time interval τ as
We now define a correlation function $\mathbf{K}(s)$, where s is the dummy time separation over which this correlation is defined, as
The term inside the integrand arising from the rapidly varying force’s time consequence in correlation is finite and positive. It is a dissipative term. It causes the return to thermal equilibrium when the slowly varying forces are null. It causes, at thermal equilibrium, the average velocity to vanish. When a slowly varying force is present, it helps establish a steady-state response. We note, at $s=0$, the following condition of vanishing time difference in the correlation:
This correlation function $\mathbf{K}(s)$ is a measure of dispersion—how rapidly the correspondence between the fast force and its effect changes over time. Since, over a long time duration, the average of the rapidly changing force vanishes, that is, $\u27e8\mathfrak{F}\u27e9=0$, the rapidly varying force must lose all correlation with its effect as a result of the accumulation of a large number of uncorrelated scattering events. So,
$\mathbf{K}(s)$ also satisfies one additional condition that follows from the expectation of the square of the forces being positive:
that is, there is a bound on the correlation function defined by $-\mathbf{K}(0)\le \mathbf{K}(s)\le \mathbf{K}(0)$. Since $\mathbf{K}(s)$ is independent of time, being only a function of s, one may shift it arbitrarily. Using a new time, ${t}_{1}=t-s$,
This establishes that correlations are symmetric in time.
Figure 5.15 is a sketch of this correlation function with the separation time. It is highest at vanishing separation, that is, close to the time scale of ${\tau}^{\ast}$, the correlation time, and it rapidly decays beyond that.
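The qualitative features just described, a peak at vanishing separation, symmetry in s, and decay beyond the correlation time, can be seen in a small numerical experiment. The sketch below (Python) estimates $\mathbf{K}(s)$ from a long sample of a rapidly varying force; the exponentially correlated surrogate force and its parameters are illustrative assumptions, not the chapter's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01          # sampling step (arbitrary units)
tau_c = 0.05       # assumed correlation time of the fast force
n = 200_000

# Exponentially correlated (Ornstein-Uhlenbeck-like) surrogate force:
# f[i] = a f[i-1] + sqrt(1-a^2) g[i] has <f(t)f(t+s)> ~ exp(-|s|/tau_c)
a = math_a = np.exp(-dt / tau_c)
b = np.sqrt(1.0 - a * a)
g = rng.standard_normal(n)
f = np.empty(n)
f[0] = g[0]
for i in range(1, n):
    f[i] = a * f[i - 1] + b * g[i]

# Estimate K(s) over lags -10 tau_c .. +10 tau_c
max_lag = int(10 * tau_c / dt)
lags = np.arange(-max_lag, max_lag + 1)
K = np.array([np.mean(f[:n - abs(L)] * f[abs(L):]) for L in lags])

print(K[max_lag], K[max_lag + int(5 * tau_c / dt)])  # K(0) vs K(5 tau_c)
```

The estimated correlation peaks at $s=0$, is symmetric under $s\to -s$, and has decayed to a small fraction of $\mathbf{K}(0)$ by a few correlation times, as Figure 5.15 indicates.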
We can now evaluate the response of the system $\mathfrak{S}$ of mass m of Figure 5.14, as written down in Equation 5.122. We have
The response of the mass is the accumulation from a slowly varying force and a rapidly varying force. Over a small time interval τ, the response to the slowly varying force may be approximated by that to a constant force. The response to the rapidly varying force can be accounted for through a double integral. The rapidly varying force changes the energy—dissipating it. The average velocity is the average distance over the time over which this fluctuating force causes a correlated effect. A small time duration means that the force is more correlated with the effect, and, hence, the effect is more pronounced. A large time means that this rapidly varying force is less correlated with a motional effect—it leads to impulses in different directions—so the average effect is reduced. It is the correlation of force with the motional effect acting over a distance that causes the energy change, so the magnitude of the energy change relative to the thermal energy ${k}_{B}T$ is also an important ratio. The average response to the rapidly fluctuating forces is embedded in this correlation function, which appears as a double integral in Equation 5.127. The meaning of this double integral is simply that the effect of the force must be evaluated over the extent of the time interval of interest, while taking into account that, within this time interval, the rapidly occurring events lead to a correlation time dependence—the farther apart in time, the smaller the effect.
Figure 5.16 shows a geometric view of this integration. For any time separation τ, one must first include the effects between t and $t+\tau $, one of the intervals over which the double integral is being evaluated. The double integral integrates a section in the $({t}^{{}^{\prime}},s)$ space, and we can simplify:
For time differences $\tau \gg {\tau}^{\ast}$, we have noted that $\mathbf{K}(s)\to 0$ for $|s|\gg {\tau}^{\ast}$, that is, as $s\to \mathrm{\infty}$. The double integral can be simplified to a single integral, and the term is finite and positive, so
where we used the symmetry of $\mathbf{K}(s)$. Since the mean velocity evolves slowly,
Therefore, we may write,
The Brownian process causes dissipation and damping, and one may quantify it with a damping parameter:
We have now found one source of the damping in mechanical movement—the fluctuations and the drag effect from the gaseous fluid environment. Other sources exist too, for example, the anharmonicity of the elasticity of the movement, sometimes referred to as Zener internal damping. We will look at this towards the conclusion of this chapter. All these are the result of the first order correction arising from dissipation. In general, this damping correction results in a damped harmonic oscillator. The simplest form in which one may write its force equation is the form that we have used a number of times:
F(t), the force here, may have both a slowly varying component and a rapidly varying component. If it consists only of a slowly varying component, then, in this equation, one may treat $\mathbf{F}=F(t)-{k}_{s}u$. On the other hand, if it is only fast, then $\mathbf{F}=-{k}_{s}u$, and the fluctuating force $\mathfrak{F}(t)$ produces the damping term $\Gamma \dot{u}$. The implications of Equation 5.132 can now be evaluated. If the source of noise is entirely uncorrelated, that is, white in the limit of very rapidly varying forces, then
The Fourier transform of this, that is, in the reciprocal space frequency coordinate, is
This is the power spectral density in the double-sided form, that is, over the radial frequency band defined by $-\mathrm{\infty}\le \omega \le \mathrm{\infty}$. It is the power contained per unit frequency in the fluctuations that gave rise to the damping expressed through the parameter Γ. This force fluctuation, because of the correlations, has a spectral dependence that is included in our analysis; the damping factor, arising as it does through the correlation function, includes it. The spectrum of displacement fluctuations in u that these thermal force fluctuations produce is
This is the same form as that of Equation 5.102, which we derived with a damping term using eigenmode analysis. There, we derived it for the displacement of the beam. This spectral distribution means that we can determine the mean square displacement fluctuations where this interaction manifests itself. The spectral density
represents, for any frequency, the magnitude of the conjugate product in the infinite time limit at that frequency. So, at a resonant frequency, this density will be large, and, away from the resonant frequency and at times larger than correlation times, it will decrease.
The displacement in time, u(t), or in the reciprocal space, $u(\omega )$, of a beam at any position y, is, respectively,
The mean square displacement fluctuation is
How does the correlation function relate to the spectral density and displacement fluctuations? We can write
So, $\mathbf{K}(0)=\u27e8{u}^{2}\u27e9$. The peak in the correlation function, the one at vanishing displacement in time, is also the mean square displacement.
Having connected the response in these classical conditions by employing classical statistics, we can determine limits. Since ${k}_{B}T/2$ is the energy associated with this specific displacement,
This is generally valid so long as the classical approximations of continuum and distribution function are valid. If we consider the condition of strong coupling, that is, frequencies very near the resonance frequency, then the integral can be evaluated. It is $\pi /(m{\omega}_{0}^{2}\Gamma )$. So,
The importance of this relationship and the mathematical evaluation of this connection between spectral density, correlation fluctuations and displacements is that if one were interested in making very precise measurements, that is, measurements where the effect of any force coupling is being measured through its energy exchange, then the measurement would have to be near resonance, where the spectral signature can be directly measured. And, at this condition of measurement near resonance, one also knows the fluctuation effect that is being coupled through the equipartition of energy.
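The value of the response integral is easy to check numerically. The sketch below (Python) integrates the double-sided displacement-response Lorentzian and compares it with $\pi /(m{\omega}_{0}^{2}\Gamma )$; it then recovers the equipartition result, taking the double-sided force spectral density as $2{k}_{B}T\Gamma $ (the Hz form, $4{k}_{B}T\Gamma $, is quoted below). The unit-scale parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed): unit mass, resonance, and damping scales
m, omega0, Gamma = 1.0, 1.0, 0.1
kB, T = 1.381e-23, 300.0
ks = m * omega0**2

# Double-sided Lorentzian 1 / [ (m(omega0^2 - w^2))^2 + (Gamma w)^2 ]
w = np.linspace(-60.0, 60.0, 4_000_001)
y = 1.0 / ((m * (omega0**2 - w**2))**2 + (Gamma * w)**2)

dw = w[1] - w[0]
I_num = dw * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
I_exact = np.pi / (m * omega0**2 * Gamma)
print(I_num, I_exact)                           # agree to better than 0.1%

# Equipartition check: <u^2> = (1/2pi) S_F * integral, with S_F = 2 kB T Gamma
u2 = (2.0 * kB * T * Gamma / (2.0 * np.pi)) * I_num
print(u2, kB * T / ks)                          # <u^2> = kB T / ks
```

The numerical integral reproduces $\pi /(m{\omega}_{0}^{2}\Gamma )$, and the resulting mean square displacement equals ${k}_{B}T/{k}_{s}$, the equipartition statement above.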
What is the minimum detectable limit under these conditions? If we write frequency in Hz instead of radial units, ${S}_{F}(\nu )=4{k}_{B}T\Gamma $. The units are in ${N}^{2}/Hz$. If one measures with bandwidth B, that is, accounts for the energetics across this bandwidth, then the minimum detectable force is ${F}_{min}={(4{k}_{B}T\Gamma B)}^{1/2}$. We have $\Gamma =m{\omega}_{0}/\mathbf{Q}={k}_{s}/{\omega}_{0}\mathbf{Q}$; ${k}_{s}=m{\omega}_{0}^{2}$; and ${\omega}_{0}=2\pi {\nu}_{0}$. As a function of frequency, the damping term is
and the minimum force measurable is
To improve on the limit of measurement, one must minimize bandwidth and minimize dissipation through a high quality factor—this reduces the energy of fluctuations coupled in—and one must also minimize temperature, thus reducing the thermal energy.
We now look at these cantilevers in use in devices under practical considerations. First, we evaluate this limit force for our classical cantilever, with the moment of inertia $I=w{t}^{3}/12$, area $A=wt$, and a mass related to density ρ and length L as $m=\rho wtL$. Earlier, we looked at the response of cantilever beams. The out-of-plane transverse vibration frequency—a resonant radial frequency—of the structure is
Consider silicon; Figure 5.17 shows a pictorial display of silicon’s resonance frequency dependence at dimensions where the classical approximation is still valid. Frequencies in the MHz to GHz range are possible in structures. For GHz, one would need to increase thickness and reduce length, producing a stiffer beam that resonates higher. Increasing the length of the beam without changing its thickness will reduce the resonance frequency, as longer wavelength modes are supported. The lowest measurable force from this cantilever, since ${k}_{s}=Yw{t}^{3}/{L}^{3}$, is
In this expression, the factor within square brackets is the damping factor Γ.
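A sketch of the numbers behind Figure 5.17 (Python). The cantilever dimensions, the silicon constants, and the first-mode Euler-Bernoulli coefficient ${\lambda}_{1}^{2}={1.875}^{2}\approx 3.516$ used here are assumptions for illustration, not values fixed by the text.

```python
import math

# Assumed silicon constants and cantilever dimensions
Y = 169e9                        # Young's modulus (Pa), assumed
rho = 2330.0                     # density (kg/m^3)
w, t, L = 1e-6, 100e-9, 10e-6    # width, thickness, length (m)

I = w * t**3 / 12.0              # area moment of inertia
A = w * t                        # cross-sectional area

# First transverse mode of a clamped-free (Euler-Bernoulli) beam:
# omega0 = lambda1^2 sqrt(Y I / rho A) / L^2
lam1sq = 1.875**2
omega0 = lam1sq * math.sqrt(Y * I / (rho * A)) / L**2
f0 = omega0 / (2.0 * math.pi)
print(f"f0 = {f0/1e6:.2f} MHz")  # MHz range; thicker/shorter beams move toward GHz
```

Since ${f}_{0}\propto t/{L}^{2}$, thickening the beam or shortening it raises the resonance, as noted above.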
When one applies a harmonic force, that is, one forms a forced harmonic oscillator, such as that of the inertial sensor shown in Figure 5.3,
the response is
where
The spectral density for displacement or for force from this follows as
The damped forced oscillator is also a filter that is particularly adept at extracting signals near the resonance frequency. It allows measurement and extraction of signals of interest through their force and displacement effects near the natural resonance frequencies of the system.
We have tackled the damping arising from the ambient environment but not that from the anharmonicity of the beam. Here, we take the case of thermoelasticity. Thermoelastic damping is intrinsic to the material and is caused by the irreversible flow of heat across the thickness of a resonator. Since the oscillating beam is undergoing deformation, there is an elastic field—the stresses and strains of the structure. This field couples to the temperature field. Such damping can be quite pronounced in thick and long resonators. This inelasticity is the Zener effect. In the elastic limit of an isotropic material, we related stress to strain through Young’s modulus, that is, $\sigma =Y\epsilon $. Zener inelasticity is a time-dependent damping arising from within the material’s response. With the Zener term,
where Y_{0} is the zero order term used here in normalization, and ${T}_{\epsilon}$ is the Zener inelasticity coefficient. If we write the stress and strain in harmonic forms, that is,
respectively, one can write the amplitude ratio as
The stressstrain response can then be determined to be
where
We can now determine the time-dependent response of this Zener inelastic beam. The fourth order equation—our Euler-Bernoulli equation—for motion in the x direction at position y is
For a free beam, this becomes
With a harmonic force ${F}_{0}(\omega )exp(i\omega t)$ and a harmonic displacement response of ${u}_{0x}(\omega )exp(i\omega t)$, the eigenmode equation is
When $\mathbf{Q}$ is large, the dissipation is limited, and the nth eigenmode’s frequency is modified to
It increases in frequency inversely with the quality factor. The displacement in terms of the eigenmode expansion is
Employing the product and integration together with the condition of orthogonality,
So, the coefficient of displacement expansions can be written as
for the first term, and so on; together, this characterizes the response in the presence of Zener inelasticity.
We have now worked through the eigenmode analysis under the conditions of Zener damping. The applied force, its magnitude, and how far it is from the natural frequencies of the mechanical system let us determine the amplitude coefficients of oscillations under small-signal conditions. The quality factor is again an important term in determining the frequency shifts in the response, as well as the amplitude of the resulting oscillations. The damping here, to second order, is different from the constant damping factor utilized earlier, and we found a way to handle this through an effective mean theory.
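A compact way to see the frequency dependence of this loss: for a standard anelastic (Zener) solid, the internal friction takes the Debye form ${\mathbf{Q}}^{-1}(\omega )=\mathrm{\Delta}\phantom{\rule{thinmathspace}{0ex}}\omega \stackrel{\u203e}{\tau}/(1+{\omega}^{2}{\stackrel{\u203e}{\tau}}^{2})$, peaking at $\omega \stackrel{\u203e}{\tau}=1$ with value $\mathrm{\Delta}/2$. The sketch below (Python) locates this loss peak; the relaxation strength Δ and relaxation time $\stackrel{\u203e}{\tau}$ are illustrative assumptions.

```python
import numpy as np

Delta = 1e-4       # assumed relaxation strength (dimensionless)
tau = 1e-7         # assumed thermoelastic relaxation time (s)

w = np.logspace(4, 10, 2001)                     # radial frequency sweep (rad/s)
Qinv = Delta * w * tau / (1.0 + (w * tau)**2)    # Zener (Debye) loss peak

i_peak = np.argmax(Qinv)
print(w[i_peak] * tau, Qinv[i_peak])   # peak at w*tau ~ 1, height ~ Delta/2
```

Resonators operated far from the Debye peak, on either side of $\omega \stackrel{\u203e}{\tau}=1$, therefore see much weaker thermoelastic damping.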
Examples up to this point have concentrated on a beam or plate vibrating—transverse resonators. Electromechanical phenomena also underlie surface and bulk acoustic wave resonance. In such structures, acoustic excitation employs piezoelectric materials, since piezoelectricity couples applied fields to mechanical displacement.
5.3 Acoustic waves
THE ABILITY TO EXCITE TIME-DEPENDENT DISPLACEMENT—even periodic displacement—means that there is now the ability to excite acoustic mechanical modes. In structures formed with thin layers of a piezoelectric on a bulk substrate, one achieves excitation far from any fundamental acoustic mode, that is, of pure modes. An acoustic wave transfers energy from an excitation source for transmission to an elastic medium. Thus, the propagation velocities are determined by the mechanical properties of the material—silicon, for example, being a common microfabricated material. Piezoelectric properties can change this propagation substantially through the local dipole fields that are also atomic in origin, just as the acoustic motion is. The wave on a surface, akin to the wave in a water pond, is the surface acoustic wave (SAW). The oscillatory motion and propagation in the bulk of the material is the bulk acoustic wave (BAW).
Figure 5.18 shows a conceptual drawing of this propagation in an isotropic medium. At the surface, the atomic motion is more pronounced, and in the bulk, less so—a very compressed ellipse with its long axis along the direction of propagation. The decay length scale is of the order of the wavelength of propagation. If the structure is small, transverse modes—shear mode waves—also exist, propagating energy in both directions. These shear mode resonances are at longer wavelengths than those of the longitudinal waves.
If one places boundary conditions, such as those shown in Figure 5.19, through the excitation electrodes, then the region in between acts as an acoustic cavity that supports waves at λ and its even fractions, like an electromagnetic cavity. The cavity stores energy through constructive interference. Acoustic mode resonators are possible in semiconductors—Si and others. When made using piezoelectric materials, such as thin films of AlN, ZnO, LiNbO_{3}, PbZrO_{3} or the other piezoelectrics that we discussed in Chapter 4, a large reduction in size, as in SAW and BAW devices, becomes possible. In a longitudinal mode resonator propagating in the surface plane, the excitation is perpendicular to the surface, that is, along the direction of strongest piezoelectricity—for example, the c-axis of AlN, which has a wurtzite crystalline structure—with actuation in the plane.
The fundamental frequency of the resonator—the lowest frequency supportable—is determined by the fitting of the smallest wave: a half-wave between the electrodes, or a full wave in the pitch. For a speed of sound c, this is ${f}_{0}=c/\lambda $. AlN has a longitudinal sound velocity of $\sim \phantom{\rule{negativethinmathspace}{0ex}}{10}^{4}\phantom{\rule{thickmathspace}{0ex}}m/s$, so, at a pitch of about $10\phantom{\rule{thickmathspace}{0ex}}\mu m={10}^{4}\phantom{\rule{thickmathspace}{0ex}}nm$, the fundamental frequency is $1\phantom{\rule{thickmathspace}{0ex}}GHz$. This makes possible resonators and frequency-selective filters whose frequency is determined by the pitch of the lithography of the structure. Bulk resonators use a similar approach across the depth of the structure and are correspondingly thick, but the voltages are applied in the most efficient direction for the piezoelectric effect. The depth of the structure causes the frequencies to be lower and, as before, the applied signal and the acoustic propagation are orthogonal. Since bulk approaches have higher coupling, bulk-like excitation in a surface-oriented structure has been used in devices with thin film features. Multiple BAW structures operating simultaneously on a common substrate couple the excited modes, so a preferred approach is the use of film bulk acoustic resonators (fBARs), which employ acoustic isolation through air gaps below the films employed as bulk resonators. An fBAR structure has electrodes across the thickness of a film, with a gap below, so that elastic propagation into the substrate is suppressed.
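The pitch-to-frequency arithmetic is a one-line calculation. A sketch (Python) with the AlN longitudinal velocity quoted above; the pitch values are illustrative:

```python
# Fundamental frequency f0 = c / lambda for an acoustic resonator:
# the lithographic pitch sets the acoustic wavelength.
c_aln = 1.0e4        # longitudinal sound velocity in AlN (m/s), as quoted
for pitch_um in (100.0, 10.0, 1.0):
    lam = pitch_um * 1e-6          # wavelength (m)
    f0 = c_aln / lam
    print(f"pitch {pitch_um:6.1f} um -> f0 = {f0/1e9:.2f} GHz")
```

A 10 μm pitch gives 1 GHz; since the frequency scales inversely with the pitch, finer lithography moves these filters to higher bands.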
5.4 Consequences of nonlinearity
ONE CONSEQUENCE OF NONLINEARITY that we have already seen is the pull-in effect. But nonlinearity manifests itself in a rich set of ways—chaotic behavior, and a variety of changes in response characteristics, including limit cycles, where frequency components of the force response change rapidly. So the structure can behave in what appears to be a reasonable and simple-to-describe fashion and then suddenly jump to a very unexpected behavior, usually one not conducive to the kind of predictable and feedback-controllable behavior that we desire. However, although such chaotic behavior is complex and seemingly unpredictable, it is not random.
A nonlinear system is one whose time evolution is nonlinear. These are systems whose summarizing equations for the dynamical variables of properties of interest are nonlinear. Describing the behavior of a system requires evolution equations, parameters describing the system, and initial conditions.
We will look at our earlier examples to emphasize the nonlinearity consequences. Our force equation with damping, for a simple point mass, for example, is
and has the transfer function
This equation shows a resonant peaking in frequency at ${\omega}_{0}$, of amplitude $\mathbf{Q}{F}_{0}/{k}_{s}$, together with a low frequency response of ${F}_{0}/{k}_{s}$. In our prior discussion of response, under somewhat different constraints, our solutions took the form of Figure 5.7 for the in-plane and the out-of-plane responses of a comb drive, and the eigenmode solution of Figure 5.12 for a cantilever. Now we introduce nonlinearity. In the coordinate system where u is the displacement, we write it as a nonlinearity in the spring constant, so let the spring have anharmonicity of higher order:
To simplify, but still considering the nonlinearity up to the third power term, we set $\gamma =0$, so, for example, no fluidic damping. But, nonlinear effects of the amplitude of oscillations are included.
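Before turning to the anharmonic terms, the two linear-response limits noted above, ${F}_{0}/{k}_{s}$ at low frequency and $\mathbf{Q}{F}_{0}/{k}_{s}$ at resonance, can be checked numerically. A minimal sketch (Python, with illustrative unit parameters):

```python
import math

# Assumed illustrative parameters for the damped, driven linear resonator
m, omega0, Q, F0 = 1.0, 1.0, 50.0, 1.0
ks = m * omega0**2
gamma = m * omega0 / Q       # damping coefficient

def amplitude(w):
    """|u(w)| for m u'' + gamma u' + ks u = F0 cos(w t)."""
    return (F0 / m) / math.hypot(omega0**2 - w**2, gamma * w / m)

print(amplitude(1e-6))      # ~ F0/ks at low frequency
print(amplitude(omega0))    # ~ Q F0/ks at resonance
```

The resonance amplification over the static deflection is exactly $\mathbf{Q}$, which is why the linear resonator is such an effective near-resonance filter.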
We start with the homogeneous equation, to see how the system behaves without the forcing function. We transform variables, rewriting the equation up to the third power—a nonlinear term—as
where $\phi ={\stackrel{\u02d8}{\omega}}_{0}t$; ${k}_{s2}=\epsilon \alpha m$; and ${k}_{s3}={\epsilon}^{2}\beta m$. This transformation now makes the power of ε consistent with the power of anharmonicity. Perturbation powers are now consistent, where
become ways of ordering perturbations. Substituting these into Equation 5.166,
This can be true in general iff each of the bracketed terms vanishes. The first of these cases occurs with the harmonic resonator solution ${u}_{0}={U}_{0}cos\phi $. Using this, the second term becomes
Since this is a homogeneous equation, there exists no energy input, so the first term on the right must vanish; otherwise, the first perturbation in the amplitude of displacement would continue to rise. So, ${\omega}_{1}=0$. The harmonic displacement term from which this came, ${k}_{s2}{u}^{2}/2$, does not cause a perturbation in frequency. But it does in the amplitude. We may solve the equation with ${\omega}_{1}=0$:
The first order correction in displacement is a static shift and an additional component at twice the frequency, $2\phi =2{\omega}_{0}t$. These two results from the first two vanishing terms of Equation 5.168 can now be fed into the last term:
Using similar arguments,
with substitution in Equation 5.171 leading to
Two additional perturbation terms arise from the u^{3} dependence of anharmonicity—one at the resonance frequency, and one at the third harmonic. This last is an additional term in odd order. It is now a source of interference.
From this homogeneous equation analysis, we conclude that the spring anharmonicity leads to a change in resonance frequency, whose first order effect is a shift in resonance frequency to
where we have introduced the parameter ς to denote a nonlinearity ratio factor of the system. This is a major effect. The other consequence is the perturbation in the amplitude of oscillation. The resonance frequency may shift down or up depending on the sign and magnitude of the nonlinearities of the mechanical stiffness terms. It decreases if ${k}_{s2}<0$, but if ${k}_{s2}>0$, then it will increase.
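The first order frequency shift from the cubic term can be verified by direct integration of the homogeneous equation. A sketch (Python) for the simplified case ${k}_{s2}=0$, where the standard perturbation result is $\omega \approx {\omega}_{0}(1+\frac{3}{8}{k}_{s3}{U}_{0}^{2}/{k}_{s})$; the parameter values are illustrative assumptions.

```python
import math

omega0 = 1.0       # linear resonance (rad/s), ks/m = 1 (assumed)
eps = 0.1          # cubic anharmonicity ks3/m (assumed)
U0 = 1.0           # oscillation amplitude

def accel(u):
    # homogeneous Duffing equation: u'' = -omega0^2 u - eps u^3
    return -omega0**2 * u - eps * u**3

# RK4 integration; the period is measured from upward zero crossings
dt, u, v, t = 1e-3, U0, 0.0, 0.0
crossings = []
while len(crossings) < 21:
    k1u, k1v = v, accel(u)
    k2u, k2v = v + 0.5*dt*k1v, accel(u + 0.5*dt*k1u)
    k3u, k3v = v + 0.5*dt*k2v, accel(u + 0.5*dt*k2u)
    k4u, k4v = v + dt*k3v, accel(u + dt*k3u)
    un = u + dt*(k1u + 2*k2u + 2*k3u + k4u)/6.0
    vn = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
    if u < 0.0 <= un:                       # upward zero crossing
        crossings.append(t + dt * (-u) / (un - u))
    u, v, t = un, vn, t + dt

T = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
w_meas = 2.0 * math.pi / T
w_pred = omega0 * (1.0 + 3.0 * eps * U0**2 / (8.0 * omega0**2))
print(w_meas, w_pred)    # hardening spring: frequency pulled above omega0
```

The measured frequency agrees with the first order estimate to within the expected second order correction, illustrating the amplitude-dependent resonance shift derived above.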
What is the major consequence of a forcing function? One can estimate this by using the homogeneous solution of the nonlinearity’s effect, which gives us a simple, understandable way of estimating. In the homogeneous solution, one would expect a shift in the frequency to the form of Equation 5.174. So, the amplitude near the resonance changes from Equation 5.164 to the form
This method will only work up to a certain point.
Figure 5.20 shows a schematic of the response. Recall the pull-in behavior of Figure 5.10. It arose as a direct consequence of nonlinearity. The plate pulled into contact with the static plate even when it was farther away. Nonlinearities cause a pronounced discontinuous effect when driven strongly enough. The response shows hysteresis. As the forcing function increases, regions come about where the response is no longer a single-valued function.
A bifurcation is a sudden, qualitatively different behavior of the system, resulting from a small change in a parameter. For example, a doubly clamped beam compressed at the clamps will first contract and then, with a very small additional force, buckle at the bifurcation point. This sudden jump from one resonating curve to another, portrayed as hysteresis because its position depends on the direction of approach, is a bifurcation. Bifurcation can be local, in the sense that crossing a threshold of some parameter changes a local stability property or other invariant properties. Global bifurcation arises from the intersection of a large set of invariants. One can estimate this bifurcation point. With $\mathrm{\Delta}\omega =\omega -{\omega}_{0}$,
Equation 5.175 can be rewritten as
Bifurcation occurs when the amplitude suddenly changes with a shift in the resonant frequency in response to system parameter changes. So, we determine $\partial {U}_{0}/\partial \mathrm{\Delta}{\omega}_{0}$ and see where it diverges, that is, where the denominator vanishes, as shown in Figure 5.21. This leads to
A bifurcation point is single-valued. So,
arising from the ς’s opposite signs. This gives the bifurcation point as
One can also see that, at resonance, the response has an amplitude larger than at the bifurcation point:
A higher-quality nonlinear resonator has a lower amplitude at the bifurcation point. Nonlinear effects thus establish the range over which a resonator remains useful, free of hysteresis. A response is schematically drawn in Figure 5.21, using our solution approach. The amount of power needed to reach this limit is the product of the stored energy and ${\omega}_{0}/\mathbf{Q}$, the fraction lost every cycle. This is ${\omega}_{0}{k}_{s}{U}_{c}^{2}/2\mathbf{Q}$.
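A quick order-of-magnitude estimate makes this power concrete. The numbers below are assumed, purely illustrative of a nanomechanical resonator, not taken from the text.

```python
import math

# Drive power at the bifurcation limit: P = omega0 * ks * Uc^2 / (2*Q), the
# stored energy ks*Uc^2/2 times the fraction omega0/Q lost per cycle.
f0 = 10e6                  # resonance frequency, Hz (assumed)
omega0 = 2 * math.pi * f0
ks = 10.0                  # stiffness, N/m (assumed)
Uc = 1e-9                  # critical amplitude, m (assumed)
Q = 1e4                    # quality factor (assumed)

P = omega0 * ks * Uc**2 / (2 * Q)
print(P)                   # ~3e-14 W, i.e., tens of femtowatts
```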
This example serves as a good starting point for exploring the variety of interesting characteristics that nonlinear systems undergo. Chaos is time-aperiodic behavior, that is, behavior that never exactly repeats and therefore appears random or “noisy.” But this chaotic response is, strictly speaking, nonrandom, since it arises from our explicitly written, deterministic time-evolution equation, whose parameters and initial conditions are defined. It is not the result of rounding errors but of deterministic evolution catalyzed by nonlinearity. Classical systems ranging from pendulums to planetary systems show it. Quantum systems, for example, lasers, show it. Biological systems, for example, a beating heart, show it. It is this ubiquity, its complementarity with fractals, and the nature of its universality, with similar ratios crossing disciplines, that made this subject a very interesting multidisciplinary area in the last decades of the 20th century. We should emphasize that all chaotic systems are nonlinear, but not all nonlinear systems are chaotic.
Easily understandable examples are circuits made with diodes, using a diode’s nonlinearity. An example with an inductor is shown in Figure 5.22. A sinusoidal signal added to a static signal forces this circuit, which contains a diode and stores energy in an inductor. The diode is the nonlinear element here. Recall how the diode’s response is reflected in the conduction through it. When forward biased, it passes a current that is exponentially dependent on the bias voltage drop across it. It passes very little current in reverse bias, a reverse saturation current that we will assume to be zero. The forward current, internally in the diode, is sustained by charge storage, and this charge distribution has a gradient. Drift and diffusion currents within the diode sustain the forward current, so there is a storage of charge associated with it. When the voltage across the diode is flipped from forward to reverse bias, this excess charge still needs to come out, so a current continues to flow for a short time—the reverse-recovery time. This time depends on the current that was flowing before, which corresponds to the charge storage that existed in the diode. The inductor in this circuit breaks the tight coupling between current and potential differences, since it introduces storage of energy when current flows. We apply a very small sinusoidal voltage signal v added to the static voltage signal V. The sinusoidal voltage measures the response, while the static voltage drives the nonlinearity. The diode and the inductor store and exchange kinetic and potential energy, much like the way potential and kinetic energy are stored and exchanged in a mechanical beam.
Using this circuit, we now show the first consequence of nonlinearity—the sudden changes that we have called bifurcations.
When the period of oscillation is close to the reverse-recovery time, the nonlinear effects of switching on and switching off are dominant. At low voltages, when $V+v$ is below the diode turn-on voltage, currents are low, and the diode is essentially off. At the turn-on voltage, one would expect half-wave rectification. During the positive part of the sinusoidal source cycle, the diode conducts, and during the negative part, it turns off. The current is very small, the amount of charge stored in the diode is small, and one just sees the clipping of the sinusoidal signals. This and the response under other conditions of signal voltage, that is, of $V+v$, are shown in Figure 5.23. One expects and sees the clipped manifestation of the sinusoidal forcing function in (a). The response has the same periodicity as the input voltage. But, as the sinusoidal voltage is increased through the incremental changes shown in (b), there comes a point when the response suddenly jumps to a period that is twice that of the applied signal, as in Figure 5.23(c). A bifurcation has occurred. The period doubling occurred because, before bifurcation, there was sufficient time for the diode charge to be drained off, that is, the diode could shut off. But, with just enough extra applied bias voltage signal, the diode could not shut off before a positive applied signal voltage arrived again. With the inductor in the circuit, changes in current are not instantaneous, and the reverse current must first stop before the positive cycle can begin. There is therefore less forward current. And now, the diode can actually shut off in the reverse cycle. This is roughly the reason for a doubling of the period at the first bifurcation point. Increase the voltage further, and there is again a sudden change—period doubling to period-4 and, again at a higher voltage parameter, to period-8, and so on.
As the voltage is increased further, past further period doublings, one gets to a point where the sequence of peaks becomes erratic—this is chaos. One does need to ascertain that this response is not the result of noise or any other effect. The logic of the current and charge nonlinearity in time affecting the response is shown in Figure 5.24(a), which shows the current and the voltage at a period-doubled instant. Panels (b) and (c) in this figure show the chaotic diode voltage response in the form of aperiodicity. One can also follow the paths that the parameters take, in order to see the sudden and discontinuous change with small parameter changes that arises in bifurcation. But one also observes divergence of nearby trajectories at the onset of chaos. For any small change of initial condition, a very different response appears, here in the form of peaks that are not at all periodic. As one increases the forcing function further, chaos may disappear, and later on, appear again.
One can show the richness of this behavior through a bifurcation diagram such as in Figure 5.25, which plots the peak current response signal against the bias voltage as the parameter. We start with the period-2 response, a response that has two magnitudes. When a bifurcation occurs, leading to a period-4 response, the response signal shows four peak amplitudes, and these change in magnitude as the parameter is increased. In this figure, at a certain voltage, one sees chaos for a significant range of the parameter before it disappears; one also sees a period-3 response, which bifurcates to a period-6 response and then chaos, before returning to the period-1 response. Had we chosen a slightly different sinusoidal signal, the bifurcation diagram might have been considerably different across the same static bias voltage range.
Period doubling is but one route to the onset of chaos. For example, nonlinear functions, such as iterated maps, show similar features. The fundamental nature of chaos is best represented by its universality—different functions converge to the same parameter ratio in their bifurcation diagrams, as period doubling does. This is illustrated by showing the bifurcation for the two functions
and
Both are nonlinear, and their bifurcation diagrams are shown in Figure 5.26. Both show features such as period doubling in the march to chaos. What is striking is the geometric convergence ratio: the ratio of differences of parameter values at which successive doublings happen is approximately constant for all the splittings and, in the limit, reaches a constant. So, in Figure 5.26,
where the A_{n} are the parameter values where bifurcation appears, remains approximately constant, and, in the limit,
This is the “Feigenbaum δ.” Any iterated map function that is parabolic near its maximum, and that satisfies a few other properties not relevant to our interest here, will have this same convergence ratio as the bifurcation order goes to ∞.
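The convergence can be computed directly for the logistic map $x\mapsto rx(1-x)$. A convenient proxy for the bifurcation parameters ${A}_{n}$ are the superstable parameters ${R}_{n}$, at which the critical point $x=1/2$ lies on the ${2}^{n}$-cycle; their spacings shrink with the same ratio δ. The brackets below are chosen so that each contains exactly one superstable root.

```python
# Estimate the Feigenbaum delta from superstable parameters of the logistic
# map x -> r*x*(1-x): solve f^(2^n)(1/2; r) = 1/2 by scan-then-bisect.

def iterate(r, n):
    """Return f^n(1/2) for the logistic map with parameter r."""
    x = 0.5
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def superstable(bracket, period):
    """Root of g(r) = f^period(1/2) - 1/2 in a bracket isolating one root."""
    a, b = bracket
    N = 4000
    rs = [a + (b - a) * k / N for k in range(N + 1)]
    gs = [iterate(r, period) - 0.5 for r in rs]
    for k in range(N):                     # scan: g need not be monotone
        if gs[k] == 0.0:
            return rs[k]
        if gs[k] * gs[k + 1] < 0.0:
            lo, hi = rs[k], rs[k + 1]
            break
    else:
        raise ValueError("no sign change in bracket")
    for _ in range(60):                    # bisection refinement
        mid = 0.5 * (lo + hi)
        if (iterate(lo, period) - 0.5) * (iterate(mid, period) - 0.5) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

brackets = [(3.1, 3.3), (3.44, 3.52), (3.544, 3.564),
            (3.5646, 3.5687), (3.5688, 3.5697)]
R = [superstable(br, 2 ** (n + 1)) for n, br in enumerate(brackets)]
delta_est = (R[-2] - R[-3]) / (R[-1] - R[-2])
print(R, delta_est)        # delta_est ~ 4.67 (Feigenbaum delta = 4.6692...)
```

The first root reproduces the exactly known value ${R}_{1}=1+\sqrt{5}$, and the spacing ratio already approaches δ at this low order.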
So, a diversity of behavior appears in this nonlinear deterministic calculation, including chaotic behaviors, starting from the precisely stated rules of the equations, the values of the parameters, and the initial conditions. Mathematically stated, this is true. But, in real systems, whether experimental or a theoretical model, there is always some imprecision in specifying initial conditions—in real systems, from noise itself, and in simulations, from round-off errors or just the imprecision of numerical implementations. This means that the behavior does become unpredictable in the chaotic system. But it is not due to noise, which has its origins in randomness.
The example we explored was that of an electronic circuit, because of its simplicity. But, we could have looked at chaos in a mechanical system, where energies are stored and released and where nonlinearities exist with time dependences. Figure 5.27 shows an example corresponding to the electrical example we have studied. This is a simplified but different form of a comb drive that we looked at while discussing Figure 5.6. We drive the moving plate with a static voltage, and we apply a sinusoidal signal on one of the static plates—an input plate. As a result, the moving plate responds, and the output signal is picked up in the form of a voltage across a resistor R that is between the other static plate—an output plate—and the ground. We will outline the underlying mathematical formulation in order to explore the resulting behavior that brings out several interesting properties.
The force on the moving plate is
where C_{0} is the capacitance in unforced conditions with the movable plate a distance d apart from the static plates. We use a third power force term for the nonlinearity, so
This is a dimensionally compressed equation where the mass is an effective mass, and the damping and elastic constants are lumped constants. We make the equation dimension-free, as in the starting analysis of nonlinearity. The normalizations are $\phi ={\omega}_{0}t$; $\mathrm{\Omega}=\omega /{\omega}_{0}$; $\eta =u/d$; $\mu =\gamma /m{\omega}_{0}$; $\alpha ={k}_{s}/m{\omega}_{0}^{2}$; $\beta ={k}_{s3}{d}^{2}/m{\omega}_{0}^{2}$; $\varkappa ={C}_{0}{V}^{2}/2m{d}^{3}{\omega}_{0}^{2}$; and ${\rm Y}=2\varkappa {v}_{0}/V$, with ${\omega}_{0}={({k}_{s}/m)}^{1/2}$. With a minuscule sinusoidal signal, the dimension-free form is
where the derivative is w.r.t. φ.
This equation embodying the response corresponds to a bias potential, referenced to the no-displacement condition, that is, $u(=y)=0$, or $\eta =0$, as
This is a nonlinear equation. Its solutions depend on the applied bias potentials. When none is applied, there is one unique degenerate solution at the equilibrium point. But, as the applied bias voltage is changed, so do the number of equilibrium points and their positions. Figure 5.28 shows these under a few different conditions. At bias voltages corresponding to $\varkappa =0.75$, no equilibrium point appears in the range shown here. Recall our pull-in discussion of Figure 5.10. This resonator has become unstable and has likely been pulled in to one of the stationary plates. At the very smallest bias voltages, or with just the sinusoidal signal and no static bias voltage applied, there exists a region of equilibrium at the center, with two unstable saddle points beyond it. The resonator then operates close to its free oscillation characteristics. As the bias voltage is increased, the center equilibrium point loses its stability and becomes a saddle point, and two new low-energy points emerge symmetrically on either side.
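This progression of equilibria can be counted numerically. The sketch below assumes a plausible static balance for a plate between two electrodes, $g(\eta )=\alpha \eta +\beta {\eta}^{3}-\varkappa [{(1-\eta )}^{-2}-{(1+\eta )}^{-2}]=0$, with illustrative values $\alpha =1$, $\beta =10$; it is not the text's exact Equation 5.188, only a representative of the same structure.

```python
import numpy as np

# Count equilibria of an assumed dimension-free moving-plate balance as the
# bias parameter kappa grows.  alpha = 1, beta = 10 are illustrative.
beta = 10.0

def g(eta, kappa):
    return eta + beta * eta**3 - kappa * ((1 - eta)**-2 - (1 + eta)**-2)

def count_equilibria(kappa):
    eta = np.linspace(-0.97, 0.97, 2000)   # grid avoids eta = 0 exactly
    vals = g(eta, kappa)
    return int(np.sum(vals[:-1] * vals[1:] < 0))

counts = {k: count_equilibria(k) for k in (0.05, 0.3, 0.75)}
print(counts)        # {0.05: 3, 0.3: 5, 0.75: 1}
```

At small bias ($\varkappa =0.05$) there are three equilibria: the stable center and two outer saddles. At intermediate bias ($\varkappa =0.3$) the center has destabilized and two new wells have appeared, for five equilibria. At $\varkappa =0.75$ only the unstable origin survives in the gap: pull-in.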
The Melnikov method is a technique for analyzing the stability of the center under time-periodic perturbations, and we can apply it here. The method provides analytical insight into stability, instability, the appearance of chaos, et cetera, in nonlinear systems. Equation 5.188 may be written in phase space form as
with $\stackrel{\u203e}{\mu}=\mu /\epsilon $, and $\stackrel{\u203e}{{\rm Y}}={\rm Y}/\epsilon $, that is, both μ and ϒ are of order $\mathcal{O}(\epsilon )$, with ε quite small. These conditions are satisfied by a high quality factor $\mathbf{Q}$ and a sinusoidal voltage that is a small fraction of the bias voltage. A Melnikov function, which we employ without discussion, is
This function is designed to be proportional to the perturbation of the separation between the stable and unstable manifolds associated with the homoclinic and heteroclinic orbits. $({\eta}_{0},{\xi}_{0}$) is the unperturbed trajectory, which follows
defined through the saddle point.
The solutions of these equations require approximations; for example, the second of Equation 5.190 may be expanded in a Taylor series and substituted into the Melnikov function, which is thus reduced to the simpler form
with
The Melnikov analysis defines a threshold curve that predicts the different regions of behavior, for example, of chaos above it. It also allows one to see analytically the presence of periodic orbits, so long as sufficient accuracy from the Taylor expansion is included. Absent this technique, one can perform a numerical simulation with sufficient accuracy, though such a simulation will not give obvious insight into the contributions of the different nonlinear couplings of the energy terms.
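The flavor of the calculation can be shown on the textbook double-well Duffing oscillator, $\ddot{x}-x+{x}^{3}=\epsilon [\gamma \mathrm{cos}(\omega t)-\delta \dot{x}]$, a stand-in for the plate equation rather than Equation 5.189 itself. Its homoclinic orbit is ${x}_{0}(t)=\sqrt{2}\,\mathrm{sech}\,t$, and the two Melnikov integrals have closed forms that a direct quadrature should reproduce.

```python
import math

# Melnikov integrals for the double-well Duffing oscillator:
#   damping term  I_d = Int x0'^2 dt             = 4/3
#   forcing term  I_f = amplitude of Int x0' cos(omega(t+t0)) dt
#                     = sqrt(2)*pi*omega*sech(pi*omega/2)
# Chaos (homoclinic tangency) threshold: gamma/delta > I_d / I_f.

def trapz(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

sech = lambda t: 1.0 / math.cosh(t)
xdot = lambda t: -math.sqrt(2) * sech(t) * math.tanh(t)   # x0'(t)

omega = 1.0
I_d = trapz(lambda t: xdot(t)**2, -30, 30, 200000)
I_f = trapz(lambda t: -xdot(t) * math.sin(omega * t), -30, 30, 200000)

I_d_exact = 4.0 / 3.0
I_f_exact = math.sqrt(2) * math.pi * omega * sech(math.pi * omega / 2)
print(I_d, I_f, I_d_exact / I_f_exact)   # threshold ratio gamma/delta
```

The resulting threshold, $\gamma /\delta >4\mathrm{cosh}(\pi \omega /2)/3\sqrt{2}\pi \omega $, is the standard Melnikov chaos criterion for this oscillator; the plate system's threshold curve has the same character but different integrals.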
We summarize here observations on this system that show the effect of nonlinearity in beam dynamics. Figure 5.29(a)–(d) shows the orbits at small driving perturbations in this double-well moving plate system for a chosen set of parameters. This double-well system is discussed more comprehensively in Appendix N. At low perturbations, the system remains near one of the two fixed points. Upon an increase in perturbation that makes the transfer between the fixed points easier, the phase trajectory of the oscillations appears in both regions surrounding the fixed points, with about equal likelihood. The bifurcation diagram of this system, formed by slowly increasing the sinusoidal signal, is shown in Figure 5.29(e). One can observe that periodic motion around one of the stable points exists at small voltages, but, as the sinusoidal voltage is increased, chaotic behavior comes about until, suddenly, at even higher sinusoidal voltage, the periodic response returns. Finally, pull-in happens. This example exhibits nearly all the features of a nonlinear system that we have discussed to this point.
We have now seen the richness of phenomena arising from the nonlinearity in these systems. Much of this analysis revolved around resonance and a narrow band around the resonance frequencies. This is natural for two reasons—the appropriateness of continuum treatment at the size scale of these problems and the properties of the materials—both of which make the wave approach and its eigenfunction solutions appropriate. Operating near these eigenmode frequencies made it possible to use them in frequency selection and force detection.
These oscillations beget a few comments. We have only considered one particular type of oscillation of a vibrating beam or plate—transverse vibrations. But beams also undergo torsional vibrations, and the energies in these two different types of modes can couple. Take a beam clamped at both ends, and excite it laterally. It undergoes damped vibrations not unlike what we have described. Excite it to higher amplitude, and it will have bending vibrations. Take a standing microscale or nanoscale beam with some mass at its end. Excite it, and it will show these modes and, given enough energy, possibly even buckle. The coupling of modes and exchange of energy means that there will be bifurcations—the coupling to bending vibrations is a Hopf bifurcation and shows up as a beating phenomenon. Hysteresis, chaos, et cetera, can all appear, and, in general, this response behavior can be quite complicated.
The oscillator is an important element in systems—essential to frequency-based approaches of measurement or communication. See Appendix O for a discussion of oscillators and their appearance in basic physics, properties of materials, and in devices. The beam under force conditions that we have discussed is an example of a Duffing oscillator. A Duffing oscillator models the behavior of a double-well system; Figure 5.29(a) is an example of a Duffing oscillator under forced conditions. If started with a certain energy, and hence a certain amplitude, and then left to itself, a Duffing oscillator gradually loses energy and amplitude due to damping and finally comes to rest in a well. The period of oscillation depends on the amplitude. When harmonically forced, a large-amplitude response occurs when the frequency is close to the natural frequency of the oscillator. Since the natural frequency is a function of the amplitude, the response occurs with a change in the natural oscillation frequency of the system. This change in the shape of the response curve with amplitude may show the hysteresis that we discussed, depending on whether the external force sweeps the frequency up or down through the response region. This is directly a result of the nonlinearity. The other consequence of nonlinearity is the chaotic behavior of different period cycles near the attractors. The nonlinear examples that we tackled were Duffing-oscillator-like.
A van der Pol oscillator is another type of oscillator, in which limit cycles of periodic time-dependent behavior appear spontaneously. The oscillation amplitude in a van der Pol oscillator increases with excitation when the excitation is small but saturates at larger excitation, because the damping increases at a higher rate with excitation and hence places a limit. The consequence is that the nonlinear damping factor causes the phase space trajectory of the oscillator to approach the limit cycle as $t\to \mathrm{\infty}$. The fixed point of the system is now a repeller.
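The spontaneous limit cycle is easy to exhibit numerically. For the standard form $\ddot{x}-\mu (1-{x}^{2})\dot{x}+x=0$ with small μ, trajectories started well inside and well outside the cycle both converge to the same orbit, whose amplitude approaches 2. The value of μ below is illustrative.

```python
import math

# van der Pol oscillator: two very different initial amplitudes converge to
# the same limit cycle (amplitude -> 2 for small mu).
def run(x0, mu=0.2, dt=0.01, t_end=200.0):
    x, v = x0, 0.0
    f = lambda x, v: (v, mu * (1 - x * x) * v - x)
    n = int(t_end / dt)
    amp = 0.0
    for i in range(n):
        k1 = f(x, v)
        k2 = f(x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
        k3 = f(x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
        k4 = f(x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if i * dt > t_end - 20:         # record amplitude on the limit cycle
            amp = max(amp, abs(x))
    return amp

a_small, a_large = run(0.1), run(4.0)
print(a_small, a_large)                 # both ~2: the fixed point repels,
                                        # the limit cycle attracts
```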
5.5 Caveats: Continuum to nanoscale
WE HAVE EMPLOYED A CONTINUUM APPROACH to the mechanical description up to this point—all properties, such as Young’s modulus, stress and strain, et cetera, are continuously distributed and definable throughout the medium. They may arise from phenomena at the atomic scale and from atomic bonding—matter is fundamentally discontinuous, yet we employ a continuous description that ignores any consequences of the specifics of this phenomenon on the local description. There are limits to this use of classical mechanics, since the fields and other characteristics we employ are a continuum approximation. If one has a planar, single-atom-thick sheet, such as of carbon in its graphene phase, and we bend it, the picture distinguishing two surfaces and compressive stress and tensile stress shown in Figure 5.1 loses meaning. What is compression or tension in a single-atom-thick film? A tube with a hole punched in its wall has its fracture and bending properties affected by the hole. A carbon nanotube with one carbon atom plucked from its wall will also have its properties affected by this hole. But an adequate description of the former, drawing on a continuum description, will fail for the latter, where the local atomic scale interactions are now perturbed, and the mechanical properties of the nanotube will change in a very different way. The short-range interactions now matter, and a description that only utilizes the long-range description is inadequate. In this, there is a direct correspondence between our discussion of stochastic effects in electronics at the nanoscale and in mechanics at the nanoscale. A long nanotube clamped at one end, for example, as in Figure 5.30, will be subject to short-range constraints at the clamped end, while, further away along the tube, a continuum description and all the eigenmode analysis, et cetera, may be quite adequate, depending on the characteristics one is interested in.
One way to assess a scale length here is to compare the dimensional scale of the characteristic—the spatial frequency of vibration, for example—to the scale of the perturbation. Thick, wide and long beams have anchor losses where leakage and propagation take place over larger length scales. In a very narrow beam, such as a nanotube, this region is much smaller than the eigenmode wavelength. What this means, as in the adiabatic versus abrupt barrier discussion of electron transport, is that the adiabatic approximation breaks down. Changes are now at the atomic scale, and that matters. One interesting aspect of the breakdown, however, is that while the quantum description of charge transport gives little room for quantum-dominated effects to be force-fitted into a classical picture, leading to our mesoscale, nanoscale and phase transition discussion, the approximations for mechanical effects, where the particles are localized, do allow it.
For example, the bonding of atoms, arising from the spatial sharing of electrons, can be adequately described by a potential energy that fits into the classical description. For example, the Lennard-Jones potential for molecule–molecule bonding,
where E and σ are energy and dimensional parameters, respectively, and r is a radial coordinate, works reasonably well for interactions such as those with water molecules. So, a scanning probe system, such as a scanning tunneling microprobe, a magnetic resonance microprobe or an electric field microprobe, all with a tip very close to an atomic surface, can be modeled reasonably accurately to first order (see Figure 5.31).
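The basic properties of the 6–12 form $U(r)=4E[{(\sigma /r)}^{12}-{(\sigma /r)}^{6}]$ follow directly: the minimum sits at $r={2}^{1/6}\sigma $ with depth $-E$. The sketch below verifies this numerically; the values of E and σ are dimensionless placeholders, not fitted to any particular molecule pair.

```python
# Lennard-Jones 6-12 potential: locate its minimum numerically and confirm
# r_min = 2^(1/6)*sigma and U(r_min) = -E.  E, sigma are illustrative units.
E, sigma = 1.0, 1.0

def U(r):
    s6 = (sigma / r) ** 6
    return 4.0 * E * (s6 * s6 - s6)

rs = [0.8 + 1e-4 * k for k in range(20000)]   # r in [0.8, 2.8)
r_min = min(rs, key=U)
print(r_min, U(r_min))    # ~ (1.1225, -1.0)
```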
Another limitation to continuum analysis comes from statistical effects when the ensemble becomes small. We saw the consequence of this in the observability of Brownian motion, and in the consequences of correlations with slowly and rapidly varying forces. There are two interesting offshoots of this. The first is related to our past discussion. Air at room temperature and pressure has about ${10}^{19}\phantom{\rule{thickmathspace}{0ex}}molecules/c{m}^{3}$, so a cube of $1\phantom{\rule{thickmathspace}{0ex}}\mu m\equiv 1000\phantom{\rule{thickmathspace}{0ex}}nm$ on a side contains about ${10}^{7}$ air molecules. We can use damping factors, Navier–Stokes equations and other continuum descriptions, so long as we also continue to look at the fluctuations. But what if we have a $10\phantom{\rule{thickmathspace}{0ex}}nm$-sized volume? This is the size scale of the vibration at a tip where a mass—a molecule or molecules attached to the tip—is being measured to high resolution. But the surrounding volume has, on average, only of the order of ten molecules. So, while the tip velocity of a beam vibrating at, say, $10\phantom{\rule{thickmathspace}{0ex}}MHz$, that is, $2\times 10\times {10}^{-7}\phantom{\rule{thickmathspace}{0ex}}cm\times 10\times {10}^{6}\phantom{\rule{thickmathspace}{0ex}}{s}^{-1}=20\phantom{\rule{thickmathspace}{0ex}}cm/s$ for a $10\phantom{\rule{thickmathspace}{0ex}}nm$ amplitude, is much slower than the thermal velocity, and hence averages the noise, the statistical fluctuations of the sample size are not averaged. An ensemble of N has a standard deviation of ${N}^{1/2}$. Averaging will take an incredibly long time, since the rate of convergence is very slow, with errors corresponding to this variance.
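The ${N}^{-1/2}$ scaling of the relative fluctuation can be sampled directly. The mean occupancies below correspond roughly to a $(10\;nm{)}^{3}$ volume (order ten molecules at ${10}^{19}\;c{m}^{-3}$) and a $(1\;\mu m{)}^{3}$ volume (order ${10}^{7}$); the counts are modeled as Poisson, a standard idealization.

```python
import numpy as np

# Relative number fluctuation of molecules in a probe volume: std/mean of
# Poisson counts equals N^(-1/2), so a ten-molecule volume fluctuates by
# ~30% while a 1e7-molecule volume fluctuates by ~0.03%.
rng = np.random.default_rng(7)
rels = {}
for mean in (10, 1e7):
    counts = rng.poisson(mean, size=200000)
    rels[mean] = counts.std() / counts.mean()
    print(mean, rels[mean], mean ** -0.5)   # sampled vs 1/sqrt(N)
```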
Another way to look at this problem is to consider a fluidfluid interface, as shown in Figure 5.32. With ς as the surface tension, and R_{1} and R_{2} as the curvature radii at the fluidic interface, the stress tensors are related through
Here, the $\mathbb{S}$s are the stress tensors. For stationary fluid, stress is normal—hydrostatic—and this is transformed to the pressure difference
An interface’s position is obtained from the condition that the fluid at the boundary is stationary—the kinematic condition. This means that the interface $f(\mathbf{r},t)$ satisfies, for each of the velocity vectors ${\mathbf{v}}^{\underset{\_}{i}}$,
This is the no-slip boundary condition for the Navier–Stokes equation. Now, what happens when the mean free path of the fluid molecules is comparable to or larger than the system size? The Knudsen number is one of several parameters defined in fluid mechanics that are useful for describing the characteristics of fluid behavior at a given scale. It is the equivalent of the volume-exclusion-to-system-volume ratio that we have looked at and is defined as $\mathfrak{K}=\stackrel{\u203e}{\lambda}/\ell $, so it is a cube root of the volume exclusion ratio. When the Knudsen number $\mathfrak{K}<{10}^{-4}$, a no-slip boundary is a good approximation. Higher than this, slipping along the interface becomes pronounced, and one force-fits an approximation that is reasonably accurate. If a fluid wall, say, is aligned with the x axis and moves with velocity v_{w} along the x axis, then
Here, α is a force-fitting parameter called the accommodation coefficient, and b is a slip coefficient. The Knudsen number is thus an essential number for understanding the limits of applicability, at solid–liquid and liquid–liquid interfaces, of the continuum mechanics inscribed in the Navier–Stokes equation.
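A few representative numbers make the regimes concrete. Using the commonly quoted mean free path of roughly $68\;nm$ for air at ambient conditions (an approximate value, not from the text):

```python
# Knudsen number Kn = lambda/l for air across device length scales.
lam = 68e-9   # m, approximate mean free path of ambient air
for l in (1e-3, 1e-6, 100e-9, 10e-9):
    print(f"l = {l:.0e} m   Kn = {lam / l:.2e}")
```

A millimeter-scale channel is comfortably in the continuum regime, a micrometer-scale beam gap already has $\mathfrak{K}\sim 0.07$, and at $10\;nm$ the mean free path exceeds the system size altogether, so the continuum boundary conditions lose their standing.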
Fluid viscosity is another parameter that we connect to this interface phenomenon tied to drag—our friction in a fluid. The ratio of inertial forces to viscous forces is the Reynolds number $\mathfrak{R}$. An object of dimensional scale a moving in a medium of viscosity η and density ρ at a velocity v has $\mathfrak{R}=av\rho /\eta =av/\nu $. $\nu =\eta /\rho $ is the kinematic viscosity, which is ${10}^{-2}\phantom{\rule{thickmathspace}{0ex}}c{m}^{2}/s$ for water. The ratio ${\eta}^{2}/\rho $ has the units of force and parameterizes the drag. If an object has a Reynolds number of 1, this force will effectively drag the object. A small Reynolds number means that the inertial force needed for moving an object is small. As an object gets smaller, the drag effect reduces, and so does the Reynolds number. A human swimming in a pool has an $\mathfrak{R}$ of ${10}^{4}$; a fish in a fish tank, of ${10}^{2}$; and an E. coli bacterium, which moves at speeds of the order of $30\phantom{\rule{thickmathspace}{0ex}}\mu m/s$, has an $\mathfrak{R}$ of ${10}^{-4}$ or less. Inertia plays little role in these conditions. Take away the force, and an object with a low Reynolds number almost immediately stops. Inertia and the prior velocity are irrelevant.
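The "almost immediately stops" statement can be quantified with Stokes drag. The sketch below uses rough, order-of-magnitude inputs for an E. coli-sized sphere in water (radius and speed are assumptions, not values from the text) and estimates both the Reynolds number and the coasting distance once propulsion ceases.

```python
import math

# Stokes-regime estimates for a bacterium-sized sphere in water.
eta = 1e-2        # g/(cm s), dynamic viscosity of water
rho = 1.0         # g/cm^3, density of water
nu = eta / rho    # cm^2/s, kinematic viscosity
a = 1e-4          # cm, ~1 um radius (assumed)
v = 3e-3          # cm/s, ~30 um/s swimming speed (assumed)

R = a * v / nu                         # Reynolds number
m = rho * (4.0 / 3.0) * math.pi * a**3
tau = m / (6.0 * math.pi * eta * a)    # Stokes momentum relaxation time
coast = v * tau                        # coasting distance after force stops
print(R, tau, coast)                   # R well below 1e-4; coast is a
                                       # fraction of an angstrom
```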
So, how does an object move in a fluid environment using internal action? An object needs more than one degree of freedom in configuration space to be able to direct motion that is not a loop. An oar needs to be rotated around its axis, taken up and out of the water, or given some other additional degree of freedom in order for a boat to move ahead. If not, so that there is just a forward and reverse motion of the oar in the water, the boat will simply oscillate back and forth. Human hands and legs break this symmetry during swimming. The flagellar motor or other synthase motors, such as ATP synthase, which is shown in Figure 5.33, do it for microbes. In a flagellar motor, it is the corkscrew-like oar of the motor that allows motion—a straight shaft will not do. In the ATP motors, it is the slightly off-axis shaft. In both of these, the motors reach remarkable speeds—hundreds to thousands of revolutions per minute—and energy conversion efficiencies of more than 50%. A human produces nearly $20\phantom{\rule{thickmathspace}{0ex}}kg$ of ATP every day through an energy-efficient, reversible cycle necessary for all the different chemo-electro-mechanical systems that the body needs to function. The strong coupling that permits efficient energy conversion is crucial for these biological systems.
So, while the continuum picture is adequate at much of the microscale and larger in such fluidic problems, it is not at very small scales. One needs to exercise adequate caution when using the approaches that we have developed, because the nanoscale is a region where continuum modeling will often be inadequate; one then needs approaches that are more rigorous and appropriate to the scale of interest. For mechanical problems, one may proceed from continuum models based on bulk material properties at thousands of nm, to continuum models that incorporate nanoscale material properties, such as surface effects, at tens of nm, to more quantum-mechanically accurate approaches, such as molecular dynamics or tight binding, which are closer to ab initio and fundamentally more rigorous. These are the equivalents, for electronics problems, of classical conductor models at large dimensions, such as in power transmission; semiclassical Drude models, as in large-dimension semiconductor devices; and quantum-mechanical and other rigorous models at the smallest scales.
5.6 Summary
This chapter focused on mechanics and the coupled behavior in environments where electrical forces are also important. We stressed the different approaches of analysis to bring out a number of interesting attributes of the behavior that are of import to devices. Lagrangian and Hamiltonian approaches, the use of conjugate variables, and energy in its kinetic and potential forms give us powerful tools for analysis in conditions where conservative and nonconservative forces exist. At its simplest, one could explore how moving plates and beams become useful as sensors and actuators. In many of these oscillatory modes, eigenmode analysis showed us the spatial and temporal dependence—in beams and plates. Inertial mass sensors and gyroscopes, et cetera, all rely on these approaches. The energy density and sensitivity analysis showed the tremendous capabilities that one can obtain. We extended this analysis in the presence of nonconservative components such as drag to understand how fast and slow forces behave. Correlation and its manifestation in spectral power density gave a powerful approach to then see how one might get the best sensitivity in structures where the mechanical resonance may be utilized together with electrical behavior. These are the forms important for measurements where one attempts to reach the quantum limits—measurements of single electron charge or phonons. One other interesting aspect of these resonances is the stochastic coupling of energy between fast and slow. Biology employs such stochastic motors to utilize energies of the order of $100\phantom{\rule{thickmathspace}{0ex}}{k}_{B}T$s. This is the energy scale of many of the biological transduction processes.
All these systems also exhibit a variety of effects arising from nonlinearity. We emphasized bifurcation and chaos and utilized the phase portrait for observing the variety that unfolds. Even simple classical systems, in the presence of nonlinearity, exhibit a variety of complex behaviors; quantum systems do too. While we did not look at fluidic systems in depth, much of what we have described for mechanical elements in a gaseous environment has an equivalent in the liquid environment, albeit with more complexity. Compressible and incompressible conditions behave differently, and hydrodynamics at extremes can become unpredictable. These systems all manifest nonlinearity-induced behaviors, such as chaos, limit cycles and hysteresis, which are observable even in simple Duffing systems. We have not yet discussed the coupling of mechanics with optical electromagnetic forces. In Chapter 6, we will dwell on this subject, since it provides a powerful means of obtaining uniquely sensitive measurements and of generating interactions that can be gainfully employed in signal generation and manipulation.
5.7 Concluding remarks and bibliographic notes
MICROSYSTEMS ARE PERVASIVE in our daily life at this point, whether in mobile instruments, in the form of gyroscopes, or in the car, as accelerometers. These mechanical-electronic interactions, and the optical interactions that we will discuss in the next chapter, are essential as signal measurement and control mechanisms across many domains where one energy domain by itself would not suffice, or where these energy-coupling mechanisms provide a more sensitive or otherwise more appropriate approach.
Early classical mechanics, born of Kepler's laws, which were based on Tycho Brahe's as well as Kepler's own observations, rapidly progressed to the Lagrange and Hamilton approaches. The Euler-Lagrange equation casts motion in terms of the Lagrangian, where the difference between the kinetic and the potential energy of the system is expressed using position coordinates and their derivatives. Hamilton introduced the action S as an integral of the Lagrangian in time, so that motion becomes a stationary point of the action, an invariant. Equivalently, the Hamiltonian and the Hamilton equations give the motion in time. This Lagrangian-derived approach of action also holds in electromagnetism, in a more complicated form from which Maxwell's equations follow. In Feynman's path formulation of quantum mechanics, the probability of an event is the squared modulus of a complex number, the probability amplitude. This amplitude is obtained by adding together the contributions of all paths in configuration space, with the contribution of a path proportional to $exp(iS/\mathrm{\hslash})$, where S is again the action. The Lagrangian and the action are thus incredibly powerful tools.
Several exemplary texts, traditional and modern, exist, given the importance of this approach. For mechanics, an exemplar is by Hauser^{1}, but numerous others exist written for a mechanical engineering audience. A classic text for understanding the theory of elasticity is by Timoshenko^{2}; it was first published in 1934. A down-to-earth description of beam response may be found in the compendium by Pilkey^{3}.
For electromechanical microscale systems, a standard undergraduate text, one of the earliest, is by Senturia^{4}. It is written from the viewpoint of engineering and is quite comprehensive. A more advanced treatment of the various movements, sensitivities and scaling considerations may be found in the text by Pelesko and Bernstein^{5}. Preumont provides an advanced and comprehensive treatment of dynamics in electromechanical systems, using Lagrangians^{6}. This book also discusses piezoelectric systems.
The intricacies and uses of piezoactuation, as well as acoustic interactions, are tackled in the compilation edited by Safari and Akdoğan^{7}.
Nonlinear aspects of movement are discussed in several texts. Particularly appropriate for the mechanics of beams is the book by Younis, which is devoted specifically to microelectromechanical systems (MEMS)^{8}.
Nonlinearity, stochasticity and chaos have been important themes in the scientific, theoretical and applied mechanics communities for far longer than in our area of interest, microscale and nanoscale electromechanics. For stochastic resonance and its many manifestations, the review paper by Gammaitoni et al. is highly recommended^{9}.
A very readable, intuitive and comprehensive discussion of chaos exists in a number of texts. Acheson^{10} provides a very intuitive introduction to chaos. Hilborn^{11} provides a more advanced and comprehensive treatment, including the fractal aspects of chaos; it is a readable yet rich text for an introductory but detailed analytical discussion of nonlinearity and chaos. Strogatz's^{12} is a textbook replete with insights and examples.
Chaos also occurs in nonclassical systems. The adventurous will not be disappointed by Gutzwiller's exposition^{13}. It is a well-thought-through and cogently written, honest discussion by one of the great gentleman physicists of the 20th century. Those interested in further exploration of Melnikov functions may wish to consider the book by Han and Yu^{14}.
The mechanics of a fluidic environment, given its importance to biotechnology, has a number of book offerings. For fluidics, Abgrall and Nguyen^{15} provide a good treatment of scale and interface effects. Electrophoresis and magnetophoresis, the motion of objects in fields in a fluidic environment, are important biological techniques. Jones^{16} discusses them comprehensively.
5.8 Exercises
1. Calculate the moment of inertia of a beam of thickness t and width w, as well as that of an I-shaped beam such as a railroad track, where the thickness of the top and bottom sections of extent w is $\mathrm{\Delta}t$, and the center element is $\mathrm{\Delta}w$ thick, as shown in Figure 5.34. Find the dependence of the inertia on the cross-sectional area and plot it in a suitable form to point out the optimization points where the weight of the beam can be reduced substantially while sacrificing only a smaller reduction in inertia. This points to why I-shaped beams are so ubiquitous. [S]
2. The shortest curve connecting two points in a plane is a straight line. Show that the variational form of this minimization is to minimize the integral ${\int}_{{x}_{1}}^{{x}_{2}}\sqrt{1+{(dy/dx)}^{2}}\phantom{\rule{thinmathspace}{0ex}}dx$. Here, y(x) connects $({x}_{1},{y}_{1})$ and $({x}_{2},{y}_{2})$, which are the two end points. Use the EulerLagrange equation to prove that a straight line is the minimizing curve. [S]
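As a sketch of the variational step for exercise 2: with $F(y,y^{\prime})=\sqrt{1+(y^{\prime})^{2}}$, the integrand has no explicit dependence on $y$, so the Euler-Lagrange equation collapses to a conservation statement:

```latex
\frac{\partial F}{\partial y}-\frac{d}{dx}\frac{\partial F}{\partial y^{\prime}}=0
\quad\Longrightarrow\quad
\frac{d}{dx}\!\left(\frac{y^{\prime}}{\sqrt{1+(y^{\prime})^{2}}}\right)=0
\quad\Longrightarrow\quad
y^{\prime}=\text{const},
```

so $y(x)=ax+b$, the straight line through the two end points.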
3. If silicon fractures under an axial stress of $\sim {10}^{9}\phantom{\rule{thickmathspace}{0ex}}N/{m}^{2}$, find the maximum length of a vertical silicon beam that does not exceed the fracture stress under its own gravitational load. [S]
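For exercise 3, a quick order-of-magnitude sketch: the stress at the base of a vertical column of height L is $\rho g L$, so the limiting height is $\sigma/(\rho g)$. The silicon density below is an assumed handbook value, not from the text.

```python
# Maximum height of a vertical silicon column under its own weight:
# stress at the base is sigma = rho * g * L, so L_max = sigma_fracture / (rho * g).
sigma_fracture = 1e9      # N/m^2, fracture stress from the problem statement
rho_si = 2330.0           # kg/m^3, density of silicon (assumed value)
g = 9.81                  # m/s^2

L_max = sigma_fracture / (rho_si * g)   # meters
print(f"L_max ≈ {L_max/1e3:.0f} km")    # on the order of tens of kilometers
```

The answer, tens of kilometers, shows why gravitational self-loading is irrelevant at the microscale.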
4. Calculate the magnetic energy in a toroidal solenoid whose $L=0.2\phantom{\rule{thickmathspace}{0ex}}nH$ and which has a current of $1\phantom{\rule{thickmathspace}{0ex}}mA$ flowing through it. What electronicscompatible capacitor design would store similar energy? Assume that the capacitor is made out of SiO_{2}. Give the plate area, insulator thickness and necessary operating bias voltage. [S]
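A numerical sketch for exercise 4, using $E=\tfrac{1}{2}L{I}^{2}$ and $E=\tfrac{1}{2}C{V}^{2}$; the operating bias and oxide thickness chosen below are assumptions for illustration.

```python
# Magnetic energy stored in the inductor: E = (1/2) L I^2.
L_ind = 0.2e-9      # H
I = 1e-3            # A
E_L = 0.5 * L_ind * I**2
print(f"E ≈ {E_L:.1e} J")   # 1e-16 J

# A capacitor storing the same energy at an assumed bias of 1 V needs
# C = 2E/V^2; for a SiO2 parallel-plate capacitor, A = C d / (eps_r * eps0).
eps0, eps_r = 8.854e-12, 3.9    # F/m; relative permittivity of SiO2
V = 1.0                          # V, assumed operating bias
d = 2e-9                         # m, assumed oxide thickness
C = 2 * E_L / V**2
A = C * d / (eps_r * eps0)
print(f"C ≈ {C:.1e} F, plate area ≈ {A:.1e} m^2")
```

The required area is a fraction of a square micrometer, comfortably electronics-compatible.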
5. A bimetallic strip is $1\phantom{\rule{thickmathspace}{0ex}}mm$ long and composed of two materials with thermal expansion coefficients of ${\alpha}_{1}=2.5\times {10}^{-6}\phantom{\rule{thickmathspace}{0ex}}{K}^{-1}$ and ${\alpha}_{2}=5.0\times {10}^{-6}\phantom{\rule{thickmathspace}{0ex}}{K}^{-1}$, respectively. The strip is $10\phantom{\rule{thickmathspace}{0ex}}\mu m$ thick. Find the maximum deflection, starting from none as designed at $300\phantom{\rule{thickmathspace}{0ex}}K$, for temperature excursions of $100\phantom{\rule{thickmathspace}{0ex}}K$, $500\phantom{\rule{thickmathspace}{0ex}}K$ and $1000\phantom{\rule{thickmathspace}{0ex}}K$. [S]
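A sketch of the estimate, under the simplifying assumptions of equal layer thicknesses and equal Young's moduli, for which Timoshenko's bimetal curvature formula reduces to $\kappa=\tfrac{3}{2}(\alpha_2-\alpha_1)\Delta T/t$; the tip deflection of a cantilevered strip is then $\delta\approx\kappa L^2/2$.

```python
# Bimetallic-strip deflection sketch. For equal layer thicknesses and equal
# Young's moduli, Timoshenko's curvature formula reduces to
#   kappa = (3/2) * (alpha2 - alpha1) * dT / t,
# and the free-end deflection of a cantilevered strip is delta ≈ kappa * L**2 / 2.
alpha1, alpha2 = 2.5e-6, 5.0e-6   # 1/K
L, t = 1e-3, 10e-6                # m: 1 mm long, 10 um thick

def deflection(dT):
    kappa = 1.5 * (alpha2 - alpha1) * dT / t   # curvature, 1/m
    return kappa * L**2 / 2                    # m

for dT in (100.0, 500.0, 1000.0):
    print(f"dT = {dT:5.0f} K -> delta ≈ {deflection(dT)*1e6:.1f} um")
```

Deflections of tens of micrometers result, large compared with the strip thickness, which is why bimetallic actuation is effective.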
6. ZnO is piezoelectric and can be suitably deposited on a silicon cantilever. Design a cantilever with a $2000\phantom{\rule{thickmathspace}{0ex}}nm$-thick ZnO film integrated with two electrodes above it so that the free end may deflect by $\pi /6$. Find the length and thickness of the cantilever, and the voltage needed for the deflection, suitable for a microsystem. [M]
7. When silicon is oxidized at high temperatures, we may assume that the SiO_{2}-Si system is stress-free at the high temperature. When cooled, however, stress develops.
• Estimate the thermal strain when silicon is oxidized at a high temperature—say $1275\phantom{\rule{thickmathspace}{0ex}}K$—creating a stressfree film and then cooled to $300\phantom{\rule{thickmathspace}{0ex}}K$.
• Estimate the thermal strain when a silicon wire d_{0} in diameter is oxidized and reduced to a diameter of d_{c} for the core and diameter d_{f} for the outside oxide. [M]
8. A cantilever of mass m_{c} has a point proof mass m placed at its free end, as shown in Figure 5.35. If the Young’s modulus is Y, determine the resonance frequency of the structure. [M]
9. If the point proof mass is placed with its center of mass a displacement of $\mathrm{\Delta}y$ beyond the beam of length L of the previous problem, assuming that the cantilever is massless, show that the resonant frequency of the structure can be approximated by
$${f}_{r}^{\prime}=\frac{1}{2\pi}\sqrt{\frac{Yw{t}^{3}}{12m{L}^{3}}\phantom{\rule{thinmathspace}{0ex}}\frac{{\mu}^{2}+6\mu +2}{8{\mu}^{4}+14{\mu}^{3}+(21/2){\mu}^{2}+4\mu +(2/3)}},$$where $\mu =\mathrm{\Delta}y/L$. [A]
10. The previous problem's relationship implies that the resonance frequency of a point mass at the end of a massless cantilever is
$${f}_{r}=\frac{1}{2\pi}\sqrt{\frac{Yw{t}^{3}}{4{L}^{3}m}}.$$If one were to attempt to use such a resonance for mechanical-to-electrical energy conversion, for example, by a charge on such a mass oscillating between two plates, the resonance frequency needs to be close to the driving system's mechanical frequency. Which everyday mechanical motions occur at frequencies that may allow a practical mechanical-to-electrical conversion? Examples are the oscillations of walking, of a traveling bus or car, et cetera. Is there an issue of frequency mismatch here? [M]
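To see the mismatch concretely, here is a sketch of the formula above evaluated for hypothetical but representative microscale dimensions (all numerical values below are assumptions, not from the text).

```python
import math

# Resonance of a point mass on a massless cantilever:
#   f_r = (1/2pi) * sqrt(Y w t^3 / (4 L^3 m)),
# evaluated for assumed, representative microscale dimensions.
Y = 160e9                         # Pa, Young's modulus of silicon (approximate)
w, t, L = 100e-6, 5e-6, 500e-6    # m: width, thickness, length (assumed)
m = 1e-9                          # kg, proof mass (assumed)

k = Y * w * t**3 / (4 * L**3)           # effective stiffness, N/m
f_r = math.sqrt(k / m) / (2 * math.pi)  # Hz

f_walk = 2.0     # Hz, rough cadence of walking
print(f"f_r ≈ {f_r/1e3:.1f} kHz versus f_walk ≈ {f_walk} Hz")
```

The microcantilever resonates in the kilohertz range, three to four orders of magnitude above everyday motion, which is the frequency-mismatch issue the exercise asks about.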
11. For a system subject to $\dot{y}={y}^{1/3}$, show that the initial condition of $y(t=0)=0$ does not lead to a unique solution. Why is this so? [S]
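A sketch of the non-uniqueness: both of the following satisfy the equation and the initial condition, the underlying reason being that $y^{1/3}$ fails to be Lipschitz continuous at $y=0$:

```latex
y(t)=0
\qquad\text{and}\qquad
y(t)=\left(\frac{2t}{3}\right)^{3/2},
\qquad\text{since}\quad
\frac{d}{dt}\left(\frac{2t}{3}\right)^{3/2}
=\left(\frac{2t}{3}\right)^{1/2}
=\left[\left(\frac{2t}{3}\right)^{3/2}\right]^{1/3}.
```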
12. Plot the potential energy for $\dot{y}=y-{y}^{3}$ and show the different equilibrium points of the system. [S]
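Taking the flow as $\dot{y}=y-y^{3}$, a minimal sketch of the equilibria and of the potential $V(y)$ defined by $\dot{y}=-dV/dy$:

```python
# For dy/dt = y - y**3, the potential with dy/dt = -dV/dy is
# V(y) = -y**2/2 + y**4/4, the classic double well.
def f(y):
    return y - y**3

def V(y):
    return -y**2 / 2 + y**4 / 4

# Equilibria are the zeros of f: y = 0 (unstable, local maximum of V)
# and y = +/-1 (stable, the two well minima).
equilibria = [-1.0, 0.0, 1.0]
assert all(f(y) == 0.0 for y in equilibria)
print([(y, V(y)) for y in equilibria])   # V(+/-1) = -0.25, V(0) = 0
```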
13. For a square and stiff plate supported by four cantilevers, as shown in Figure 5.36, determine the effective stiffness and the pullin voltage ${V}_{\pi}$ between this assembly and a planar electrode in parallel with the stiff plate a distance d away. Assume that the Young’s modulus is Y. Estimate ${V}_{\pi}$ for an effective and nominal microscale geometry. [M]
14. Make an equivalent circuit and write the equations to determine the response of the system shown in Figure 5.37. [M]
15. In a classical system subject to scattering from its environment, the system's sensitivity to forces improves at lower temperature. Briefly explain why. Why does the spectral response decay away from a peak, and what is this peak due to? [S]
16. The interaction of the tip and sample in an atomic force microscope may be modeled using the approximate lumped-parameter equation
$$m\ddot{x}+{k}_{s}x=\frac{D{k}_{s}{\sigma}^{6}}{20{(\ell +x)}^{8}}-\frac{D{k}_{s}}{{(\ell +x)}^{2}},$$where m and k_{s} are the mass and the stiffness constant of the cantilever, ℓ is the tip-to-surface separation, $D=AR/6{k}_{s}$, with A being a Hamaker constant related to the van der Waals forces of this geometry, R is the radius of the contact tip, and σ is a molecular-scale dimension ($\approx 0.03\phantom{\rule{thickmathspace}{0ex}}nm$).
• Rewrite the interaction equation in a dimensionless form to draw out the tip forces.
• What are the dimensionless equilibrium solutions? What kind of conditions of stability and bifurcations may come about?
• What is the potential energy as a function of dimensionless parameters?
17. In a series RC circuit being charged from a voltage source, if the resistor is nonlinear, that is, ${I}_{R}=G(V)$, where I_{R} is current through the resistor, and G(V) is as sketched in Figure 5.38, derive the circuit equations, and identify the fixed points of the response. What are the implications of the nonlinearity, and how does it affect the stability? [S]
18. The Allee effect is the observation that the effective growth rate of some species is at its maximum at some intermediate population, that is, $\dot{n}/n$ peaks at some intermediate n with n as the population. Too small a population, and finding mates is hard. Too large a population, and the food and resources become scarce.
• Show that
$$\frac{\dot{n}}{n}=r-{c}_{1}{(n-{c}_{2})}^{2},$$under constraints on r, c_{1} and c_{2}, is a model for the Allee effect.
• What are the fixed points of the system and the nature of their stability? Stability is discussed in the Appendix discussion of phase space portraiture.
• Comment on the form of n(t). [S]
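The fixed points can be checked numerically. Assuming the standard Allee form $\dot{n}=n\left(r-c_{1}(n-c_{2})^{2}\right)$ and illustrative parameter values (the numbers below are assumptions), the fixed points are $n=0$ and $n=c_{2}\pm\sqrt{r/c_{1}}$:

```python
import math

# Fixed points of the Allee model dn/dt = n * (r - c1*(n - c2)**2),
# with illustrative (assumed) parameter values.
r, c1, c2 = 1.0, 1.0, 2.0   # require r > 0, c1 > 0, c2 > sqrt(r/c1)

def ndot(n):
    return n * (r - c1 * (n - c2)**2)

fixed_points = [0.0, c2 - math.sqrt(r / c1), c2 + math.sqrt(r / c1)]  # 0, 1, 3 here

def stable(n_star, eps=1e-4):
    # a fixed point is stable if the flow pushes back toward it from both sides
    return ndot(n_star - eps) > 0 > ndot(n_star + eps)

for n_star in fixed_points:
    print(f"n* = {n_star:.1f}: {'stable' if stable(n_star) else 'unstable'}")
```

Extinction ($n=0$) and the large population ($n=c_{2}+\sqrt{r/c_{1}}$) are stable; the intermediate fixed point is the unstable threshold below which the population collapses.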
19. Phase has appeared inextricably in nearly all discussions throughout the electromechanical system response analysis. Phase has information that is critical to analysis. A simple example to show this is a problem concerning a relationship in time. Two runners A and B are running at a constant speed around a circular track. A takes T_{A} time to complete the circle, and B takes T_{B} time. Let ${T}_{B}>{T}_{A}$. If A and B start at the same time, how long does it take for A to overtake B once? [S]
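The lap condition can be checked directly: the phase difference between the runners grows at the rate $1/T_{A}-1/T_{B}$ laps per unit time, so gaining one full lap takes $T_{A}T_{B}/(T_{B}-T_{A})$.

```python
# Time for runner A to lap runner B: the phase difference grows at rate
# (1/T_A - 1/T_B) laps per unit time, so one full lap takes
#   T_lap = 1 / (1/T_A - 1/T_B) = T_A * T_B / (T_B - T_A).
def lap_time(T_A, T_B):
    assert T_B > T_A > 0
    return T_A * T_B / (T_B - T_A)

# e.g. A circles in 60 s, B in 70 s: A gains a lap every 420 s
print(lap_time(60.0, 70.0))   # 420.0
```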
20. Show that a system with
$$\begin{array}{r}\dot{x}=-2\cos x-\cos y,\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\text{and}\\ \dot{y}=-2\cos y-\cos x\end{array}$$is reversible but not conservative. Show the phase portrait. [M]
21. Consider the Duffing equation $\ddot{y}+y+\epsilon {y}^{3}=0$, where we have introduced a nonlinearity parameter ε.
• Show that there exists a center at the origin that is nonlinear for $\epsilon >0$.
• Show that if $\epsilon <0$, trajectories near the origin are closed.
• What happens to trajectories farther from the origin? [M]
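A numerical way to see the closed trajectories of exercise 21: the undamped Duffing equation conserves $E=\dot{y}^{2}/2+y^{2}/2+\epsilon y^{4}/4$, and a short RK4 integration (a sketch, not a proof) shows the drift in this quantity is negligible:

```python
# The undamped Duffing oscillator y'' + y + eps*y**3 = 0 conserves
# E = v**2/2 + y**2/2 + eps*y**4/4, which is why trajectories near the
# origin close for eps > 0. A short RK4 check:
def rk4_step(y, v, dt, eps):
    def acc(y):
        return -y - eps * y**3
    k1y, k1v = v, acc(y)
    k2y, k2v = v + 0.5*dt*k1v, acc(y + 0.5*dt*k1y)
    k3y, k3v = v + 0.5*dt*k2v, acc(y + 0.5*dt*k2y)
    k4y, k4v = v + dt*k3v, acc(y + dt*k3y)
    y += dt * (k1y + 2*k2y + 2*k3y + k4y) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    return y, v

def energy(y, v, eps):
    return v**2/2 + y**2/2 + eps * y**4/4

eps, dt = 0.5, 0.001
y, v = 1.0, 0.0
E0 = energy(y, v, eps)
for _ in range(20000):          # integrate to t = 20
    y, v = rk4_step(y, v, dt, eps)
drift = abs(energy(y, v, eps) - E0)
print(f"energy drift after t = 20: {drift:.2e}")   # tiny for RK4
```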
22. We consider the movement of a sphere in air under normal temperature and pressure conditions.
• What is the approximate size of a sphere moving at a reasonable speed for a transition from turbulent flow, that is, chaotic flow, to laminar flow, that is, smooth flow parallel to the boundaries?
• If the dynamic viscosity varies with pressure as $\eta ={\eta}_{0}p/{p}_{0}$, estimate the pressure for a sphere of radius $1000\phantom{\rule{thickmathspace}{0ex}}nm$ moving at $0.01\phantom{\rule{thickmathspace}{0ex}}cm/s$.
• Can you estimate the spacing between air molecules at this pressure?
• Is the description of these changes consistent? [A]
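For the first part of exercise 22, the governing quantity is the Reynolds number $Re=\rho v d/\eta$. A sketch for the stated sphere, using assumed properties of air at normal temperature and pressure:

```python
# Reynolds number Re = rho * v * d / eta for a small sphere in air at NTP
# (assumed: rho_air ≈ 1.2 kg/m^3, eta_air ≈ 1.8e-5 Pa.s).
rho_air = 1.2       # kg/m^3
eta_air = 1.8e-5    # Pa.s
d = 2 * 1000e-9     # m, diameter of a 1000 nm radius sphere
v = 0.01e-2         # m/s, i.e., 0.01 cm/s

Re = rho_air * v * d / eta_air
print(f"Re ≈ {Re:.1e}")   # far below 1: firmly in the laminar (Stokes) regime
```

At $Re\ll 1$, viscosity dominates inertia entirely, so a micrometer-scale sphere at these speeds never approaches the turbulent regime.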
23. Because mass varies as the cube of length, inertia is usually considered unimportant to microscale motion in fluids. Take a system consisting of a small ball attached with a string to a motor that puts it through rotational motion.
• Under the constraint of a limiting tension per unit cross-section of the tethering string, derive the scaling relationship for the rotational frequency of the ball.
• Is the inertia important in this system? [M]
24. In two square pads of hook-and-loop fasteners, let L be the side length, and let ${\ell}^{2}$ be the area on the pad that each hook-and-loop pair occupies. Let ${F}_{0}=\kappa {\ell}^{n}$ be the force required for separation of a hook-and-loop pair.
25. A hollow sphere of radius R, when pushed into water, feels a restoring force equal to the weight of the displaced water, according to Archimedes' principle. Ignore friction, and determine the frequency of oscillation. [S]
26. The dynamics of a spring-mass-damping system (a plate under electric force, with a nonlinear spring and damping γ due to the squeezing environment or Zener causes) can be described by the force equation
$$F={k}_{s1}u+{k}_{s2}{u}^{2}+{k}_{s3}{u}^{3}.$$If the system is initially pulled a distance x_{0} away from equilibrium, derive a dimensionless equation of motion. [S]
27. Take the chapter's example of a single-degree-of-freedom parallel plate capacitor of mass m actuated by a bias $V=\stackrel{\u203e}{V}+\tilde{V}=\stackrel{\u203e}{V}+\stackrel{\u02c6}{V}cos\omega t$ (a superposition of static and harmonic voltage bias). Write the dimensionless equation of dynamics, and extract and comment on the parameters.
28. Consider a van der Pol oscillator, with its nonlinearity, described by
$$\ddot{x}+2\alpha ({x}^{2}-1)\dot{x}+x=0.$$Analyze its stability and bifurcation for $\alpha >0$. [M]
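As a numerical sketch of exercise 28 (taking the standard van der Pol nonlinearity $(x^{2}-1)$): for $\alpha>0$ the origin is unstable, and a trajectory started near it winds onto a limit cycle whose amplitude is close to 2 for small $\alpha$.

```python
# Van der Pol oscillator x'' + 2*alpha*(x**2 - 1)*x' + x = 0: for alpha > 0
# the origin is an unstable fixed point and trajectories wind onto a limit
# cycle of amplitude close to 2. An RK4 sketch:
def step(x, v, dt, alpha):
    def acc(x, v):
        return -2 * alpha * (x**2 - 1) * v - x
    k1x, k1v = v, acc(x, v)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

alpha, dt = 0.1, 0.01
x, v = 0.01, 0.0          # start near the unstable origin
amplitude = 0.0
for i in range(40000):    # integrate to t = 400
    x, v = step(x, v, dt, alpha)
    if i > 30000:         # record amplitude only after transients die out
        amplitude = max(amplitude, abs(x))
print(f"late-time amplitude ≈ {amplitude:.2f}")   # close to 2
```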
Notes:
(^{1}) W. Hauser, “Introduction to the principles of mechanics,” Addison-Wesley, ISBN-13 978-0201028126 (1965)
(^{2}) S. P. Timoshenko, “Theory of elasticity,” Tata McGrawHill, ISBN 0070701229 (2010)
(^{3}) W. D. Pilkey, “Formulas for stress, strain and structural matrices,” John Wiley, ISBN 0471032212 (2005)
(^{4}) S. D. Senturia, “Microsystem design,” Kluwer, ISBN 0306476010 (2002)
(^{5}) J. A. Pelesko and D. H. Bernstein, “Modeling of MEMS and NEMS,” Chapman & Hall/CRC, ISBN 1584883065 (2003)
(^{6}) A. Preumont, “Mechatronics: Dynamics of electromechanical and piezoelectric systems,” Springer, ISBN 1402046952 (2006)
(^{7}) A. Safari and E. K. Akdoğan, “Piezoelectric and acoustic materials for transducer applications,” Springer, ISBN 978-0387765389 (2008)
(^{8}) M. I. Younis, “MEMS: Linear and nonlinear statics and dynamics,” Springer, ISBN 9781441960191 (2011)
(^{9}) L. Gammaitoni, P. Hanggi, P. Jung and F. Marchesoni, “Stochastic resonance,” Reviews of Modern Physics, 70, 223–287 (1998)
(^{10}) D. Acheson, “Chaos: An introduction to dynamics,” Oxford, ISBN 0 19 850257 5 (1997)
(^{11}) R. C. Hilborn, “Chaos and nonlinear dynamics,” Oxford, ISBN 0195671732 (2004)
(^{12}) S. H. Strogatz, “Nonlinear dynamics and chaos,” Perseus, ISBN 0201543443 (1994)
(^{13}) M. C. Gutzwiller, “Chaos in classical and quantum mechanics,” Springer-Verlag, ISBN 0387971734 (1990)
(^{14}) M. Han and P. Yu, “Normal forms, Melnikov functions and bifurcations of limit cycles,” Springer, ISBN 9781447129172 (2012)
(^{15}) P. Abgrall and N. T. Nguyen, “Nanofluidics,” Artech, ISBN 9781596933507 (2009)
(^{16}) T. B. Jones, “Electromechanics of particles,” Cambridge, ISBN 9780521431965 (1995)