Print publication date: 2007

Print ISBN-13: 9780199229178

Published to Oxford Scholarship Online: January 2008

DOI: 10.1093/acprof:oso/9780199229178.001.0001


# APPENDIX

Source: Smart Structures
Publisher: Oxford University Press

# A1. Artificial intelligence

While cybernetics scratched the underside of real intelligence, artificial intelligence scratched the topside.

The interior bulk of the problem remains inviolate.

– Hans Moravec (1988)

The field of artificial intelligence (AI) was born with the publication of the paper ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ by McCulloch and Pitts (1943). It argued that the brain could be modelled as a network of logical operations (e.g. ‘and’, ‘or’, ‘nand’). This was the first attempt to view the brain as an information-processing device. The McCulloch–Pitts model demonstrated that a network of very simple logic gates could perform very complex computations. This fact substantially influenced the general approach to the design of computers.
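
The McCulloch–Pitts idea can be sketched as a binary threshold unit. The sketch below is illustrative (it does not follow the notation of the 1943 paper): a unit fires when the weighted sum of its binary inputs reaches a threshold, single units realize ‘and’, ‘or’, and ‘nand’, and a small network of such gates computes a function (XOR) that no single unit can.

```python
# A McCulloch-Pitts-style unit: outputs 1 when the weighted sum of its
# binary inputs reaches a threshold. Names and thresholds are
# illustrative choices, not the original paper's notation.
def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND  = lambda a, b: mp_unit((a, b), (1, 1), 2)
OR   = lambda a, b: mp_unit((a, b), (1, 1), 1)
NAND = lambda a, b: 1 - AND(a, b)   # negation stands in for inhibitory inputs

# XOR cannot be computed by any single threshold unit, but a two-layer
# network of them can -- simple gates composing into a harder computation.
XOR = lambda a, b: AND(OR(a, b), NAND(a, b))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```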

In classical AI, the basic approach is to try to make computers do things which, if done by humans, would be described as intelligent behaviour. It is intended to be the study of principles and methodologies for making ‘intelligent’ computers. It involves representation of knowledge, control or strategy of procedures, and searching through a problem space, all directed towards the goal of solving problems (Clearwater 1991). The problem with classical AI is that, by and large, it has fared poorly at delivering real-time performance in dynamically changing environments.

In a conventional computer, we have circuitry and software. The former is continuous, and the latter symbolic. Before the 1960s, two distinct schools of thought had evolved in the theory of ‘intelligent’ systems (Newell 1983): One went the continuous or cybernetics way (cf. Wiener 1965), and the other the symbolic or AI way.

Cyberneticians were mainly concerned with pattern recognition and learning algorithms; their continuous approach predominantly involved parallel processing.

People doing AI work, by contrast, focused on the development of expert systems for specific ‘intelligent’ jobs like theorem-proving, game-playing, or puzzle-solving. The AI symbolic approach was basically a serial-computation approach. Statistical models were used, and training was carried out using expert information available a priori. The trained AI system, when confronted with a new piece of sensory input, serially computed the most likely classification for the new data, and then acted on the data as per its training. In this whole exercise the software was not changed; only the parameters of the statistical model were adjusted in the light of new experience.

From the 1960s onwards, the field of AI has encompassed pattern recognition as well. There has also been an increasing interest in the problems of neuroscience, so that the distinction between AI and cybernetics has become increasingly blurred.

Lately, there has been a resurgence of interest in applying the AI approach to what have been traditionally regarded as tough problems in materials science (Takeuchi et al. 2002). This is partly a fallout of the fact that, apart from experimental science and theoretical science, computational science has emerged as a distinctly different field of research. A large number of scientific problems are just too tough to be tackled in any other way than by modelling them on a computer. It was mathematically proved by Poincaré, as early as 1889, that there is no analytic solution for the dynamics of even the three-body problem, not to speak of the more intractable N-body interactions. Things have not improved much since then. Numerical solutions, using a computer, are often the only option available. Theoretical scientific complexity apart, quite often it is very expensive, if not impossible, to conduct certain experiments. Computer simulation can again help a great deal. It was inevitable that, in due course, AI methods would also make their contribution to this scenario. Some computational problems are so hard that only the AI approach can yield sensible results. The result, in materials science, is a marked increase in the efficiency with which new materials and processes are being discovered. High throughput is a current paradigm for checking and implementing new strategies for developing materials and processes. We give here a glimpse of this approach.

There are three broad categories of applications of AI techniques in materials science (Maguire et al. 2002): Intelligent process control; discovery of useful new materials and processes; and advanced computational research.

### Intelligent process control

The idea here is to do real-time integration of a distributed network of sensors with sophisticated process models and materials-transformation models that capture the coupled effects of chemical reactivity and transport phenomena. Expert systems are built into the real-time process control so as to take immediate and on-line corrective action where and when needed, all along the process pathway, rather than merely adjusting some general process parameters. A detailed model of the process is built into the system. Real-time sensory data coming from a large number of carefully chosen and positioned sensors are continually processed by the computer for taking ‘expert’ decisions for achieving the desired end-product in the most efficient and cost-effective manner. Availability of detailed physico-chemical information about the thermodynamics of the system, along with models for interpreting the data, enables the computer to steer the process intelligently along the most desirable trajectory in phase space.

### Discovery of useful new materials

AI techniques are being used for data mining and rapid mapping of phase diagrams for designing new materials. Information about the properties desired for the new material is fed in, and the system makes a search by pattern-matching.

### Advanced computational research

Here the computer itself becomes an experimental/theoretical method for investigating phenomena which cannot be tackled in any other way. N-body interactions are an example. Nanotechnology is another. So much in the nascent field of nanotechnology is unexplored that an AI approach to the modelling of systems at the nanoscale, and training the AI assembly accordingly for problem-solving, can pay rich dividends. Once the system has been trained, the speed of simulation is independent of the complexity of the underlying interaction potentials (whether two-body or 10-body). And the algorithm can be used with any type of equations of motion.

The ‘intelligence’ in artificial intelligence is nowhere near the true intelligence of the human brain. Therefore the successes of AI have been of a rather limited nature, or at least not what one would expect from genuinely intelligent systems. The reason is that machine intelligence has so far not been modelled substantially on the neocortical model of the human brain (Hawkins and Blakeslee 2004).

The current use of statistical reasoning techniques is leading to a revival of interest in large-scale, comprehensive applications of AI. The latest example of this is the use of statistical techniques for achieving a modicum of success in the translation of languages, i.e. machine translation (MT) (cf. Stix 2006).

We should also mention here a recent attempt to overcome some of the shortcomings of the conventional AI approach by taking recourse to statistical reasoning. An artificial brain called ‘Cyc’ has been put on the internet (see Mullins 2005 for some details). Developed by Doug Lenat over more than two decades, it is intended to develop common sense: interaction with the world should make it more and more experienced, and therefore better able to behave like a human being. It is based on the hope that if we can build up a database of common-sense, context-sensitive knowledge and expert systems, we can come closer to the dream of human-like intelligence (although still not in the sense emphasized by Hawkins and Blakeslee 2004). In Cyc, each new input of information is compared and correlated with all the existing facts in the database, so that context-sensitivity develops and the system becomes cleverer with growing experience. The knowledge exists in memory in the form of logical clauses that assert the stored truths, using the rules of symbolic logic.

# A2. Cell biology

All tissues in animals and plants are made up of cells, and all cells come from other cells.

A cell may be either a prokaryote or a eukaryote. The former is an organism that has neither a distinct membrane-bound nucleus nor other specialized organelles. Examples include bacteria and blue–green algae. We shall not discuss such cells further.

Unicellular organisms like yeast are eukaryotes. Such cells are separated from the environment by a semi-permeable cell membrane. Inside the membrane there is a nucleus and the cytoplasm surrounding it.

Multicellular organisms are all made up of eukaryote-type cells. In them the cells are highly specialized, and perform the function of the organ to which they belong.

The nucleus contains nucleic acids, among other things. With the exception of viruses, two types of nucleic acids are found in all cells: RNA (ribonucleic acid) and DNA (deoxyribonucleic acid). Viruses have either RNA or DNA, but not both (but then viruses are not cells).

DNA contains the codes for manufacturing various proteins. Production of a protein in the cell nucleus involves transcription of a stretch of DNA (this stretch is called a gene) into a portable form, namely the messenger RNA (or mRNA). This messenger then travels to the cytoplasm of the cell, where the information is conveyed to a ‘particle’ called the ribosome. This is where the encoded instructions are used for the synthesis of the protein. The code is read, and the corresponding amino acid is brought into the ribosome. Each amino acid comes connected to a specific transfer RNA (tRNA) molecule; i.e. each tRNA carries a specific amino acid. There is a three-letter recognition site on the tRNA that is complementary to, and pairs with, the three-letter code sequence for that amino acid on the mRNA.
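
The reading of codons described above can be sketched as a lookup. In this illustrative sketch (only a handful of entries of the standard genetic code are included; the mRNA string is invented), the message is read three bases at a time until a stop codon is met:

```python
# Minimal sketch of translation: read an mRNA one codon (three bases) at
# a time and look up the amino acid it encodes. CODON_TABLE holds only a
# few entries of the standard genetic code, for illustration.
CODON_TABLE = {
    'AUG': 'Met',   # methionine; also the usual start codon
    'UUU': 'Phe', 'GGC': 'Gly', 'GCU': 'Ala',
    'UAA': 'STOP', 'UAG': 'STOP', 'UGA': 'STOP',
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], '?')
        if aa == 'STOP':      # a stop codon ends the synthesis
            break
        peptide.append(aa)
    return peptide

print(translate('AUGUUUGGCUAAGCU'))  # ['Met', 'Phe', 'Gly'] -- stops at UAA
```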

The one-way flow of information from DNA to RNA to protein is the basis of all life on earth. This is the central dogma of molecular biology.

DNA has a double-helix structure. Each of the two backbone helices consists of a chain of phosphate and deoxyribose sugar molecules, to which are attached the bases adenine (A), thymine (T), cytosine (C), and guanine (G), in a certain sequence. It is a sequenced polymer. A phosphate group, the attached sugar molecule, and the base together constitute a nucleotide. The sequence in which the nucleotides occur decides the genetic code of the organism. A strand of DNA is a polynucleotide, or an oligonucleotide.

The two helices in the double-helix structure of DNA are loosely bonded to each other, all along their length, through hydrogen bonds between complementary base pairs: Almost always, A bonds to T, and C bonds to G.

Just as DNA can be viewed as a sequence of nucleotide bases, a protein involves a sequence of amino acids. Only 20 amino acids are used for synthesizing all the proteins in the human body. A sequence of three consecutive bases (drawn from a four-letter alphabet; on mRNA the bases are A, U, C, G) codes for one amino acid. The term codon is used for three consecutive letters on an mRNA. The possible number of codons is 64. Since only 20 amino acids need to be coded, the code is degenerate: most amino acids can be coded by more than one codon. Three of the 64 codons signal the ‘full stop’ for the synthesis of a protein.
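
The codon arithmetic in the paragraph above is easy to check directly. In this sketch the three stop codons are named explicitly (UAA, UAG, UGA, the standard ones, which the text itself does not list):

```python
from itertools import product

# Four bases taken three at a time give 4**3 = 64 possible codons; the
# three standard stop codons leave 61 codons for only 20 amino acids,
# which is why the genetic code is degenerate.
codons = [''.join(p) for p in product('AUCG', repeat=3)]
stops = {'UAA', 'UAG', 'UGA'}
print(len(codons), len(stops), len(codons) - len(stops))  # 64 3 61
```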

There are ~60–100 trillion cells in the human body. In this multicellular organism (as also in any other multicellular organism), almost every cell has the same DNA, with exactly the same order of the nucleotide bases (mature red blood cells, which lack a nucleus, are an exception).

The nucleus contains 95% of the DNA, and is the control centre of the cell. The DNA inside the nucleus is complexed with proteins to form a structure called chromatin.

The fertilized mother cell (the zygote) divides into two cells. Each of these again divides into two cells, and so on. Before this cell division (mitosis) begins, the chromatin condenses into elongated structures called chromosomes.

A gene is a functional unit on a chromosome, which directs the synthesis of a particular protein. As stated above, the gene is transcribed into mRNA, which is then translated into the protein.

Humans have 23 pairs of chromosomes. Each pair has two nonidentical copies of chromosomes, derived one from each parent.

During cell division, the double-stranded DNA splits into the two component strands, each of which acts as a template for the construction of the complementary strand. At every stage, the two daughter cells are of identical genetic composition (they have identical genomes). In each cell of the human body, the genome consists of around three billion nucleotide base pairs.

At appropriate stages, cell differentiation starts occurring into various distinct types of cells, like muscle cells, liver cells, neurons, etc. The term stem cells is used for the primal undifferentiated cells. Because of their ability to differentiate into other cells, stem cells are used by the body to act as a repair system, replenishing other cells when needed.

How does cell differentiation occur, and with such high precision? Kauffman (1993) started investigating this problem in 1963. It was known at that time that a cell contains a number of regulatory genes, which can turn one another on and off like switches. This implied that there are genetic circuits within the cell, and the genome is some kind of a biochemical computer. There must be an algorithm which determines the messages sent by genes to one another, and which decides the switching on or off of appropriate genes (i.e. their active and inactive states). This computing behaviour, in turn, determines how one cell can become different from another.

Thus, at any given instant, there is a pattern of on and off genes, and the pattern changes over time with changing conditions. As Kauffman realized, this was a case of parallel computing, and the genome has several, stable, self-consistent patterns of activation (each responsible for a specific cell differentiation). How could such a high degree of order, namely such specific and complicated genetic configurations, arise through the trial and error of gradual evolution? The whole thing had to be there together, and not partially, to be functional at all.

Kauffman concluded that the order must have appeared right in the beginning, rather than having to evolve by trial and error. But how?

He could answer the question successfully through the cellular-automata approach. In his model, each cell of the automaton represented a gene. It was known at that time that each regulatory gene is connected to only a few other genes (this number is now known to be between 2 and 10). In Kauffman's model, each cell of the automaton, i.e. each node of the network, was taken as receiving two inputs; i.e. each gene was modelled as connected to two other genes. This sparsely (but not too sparsely) connected network was a good choice. If the connectivity is too sparse, the system quickly settles to an uninteresting stable state. And if a network is too densely connected, it becomes hypersensitive and chaotic, not moving towards any stable order or pattern.

Kauffman defined some local rules for his 100-node cellular automaton, and started the process on a computer with a random initial configuration of the nodes of the network. It was found that the system tended to order towards a small number of patterns. Most of the nodes just froze into an off or on state, and a few oscillated through ~10 configurations, just like the regulatory genes of the real genome.
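
A Kauffman-style experiment of this kind is short to sketch. The version below is a minimal illustration, not Kauffman's own code: N and the random seed are arbitrary choices (N = 20 here for brevity, where Kauffman used 100 nodes), each gene reads K = 2 others through its own randomly chosen Boolean rule, and the network is iterated from a random initial state until a state recurs, which reveals the attractor cycle it has settled into.

```python
import random

# Random Boolean network sketch: N genes, each updated by a random
# Boolean rule of K = 2 randomly wired inputs. Illustrative parameters.
random.seed(1)
N, K = 20, 2
wiring = [random.sample(range(N), K) for _ in range(N)]
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Each gene looks up its next state in its rule table, indexed by
    # the current states of its two input genes.
    return tuple(rules[i][2 * state[a] + state[b]]
                 for i, (a, b) in enumerate(wiring))

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:          # iterate until some state recurs
    seen[state] = t
    state = step(state)
    t += 1
cycle_length = t - seen[state]    # period of the attractor reached
print(cycle_length)
```

Typically most nodes freeze and the network falls onto a short cycle, in line with the behaviour described above.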

Spurred by this initial success, larger and larger networks were investigated on the computer. It could be established that, in agreement with data from the real world, the total number of differentiated cell types in an organism scales roughly as the square root of the number of genes it has.

Thus Kauffman's work established that complex genetic circuits could come into being by spontaneous self-organization, without the need for slow evolution by trial and error. It also established that genetic regulatory networks are no different from neural networks.

Such networks are examples of systems with nonlinear dynamics. The not-too-sparsely connected network of interacting genetic ‘agents’ is a nonlinear system, for which the stable pattern of cycles corresponds to a basin or attractor (cf. appendices on nonlinear dynamics and on chaos).

The term ontogeny is used for the development of a multicellular being from one single cell, namely the zygote. As we have seen above, ontogeny involves cellular division and cellular differentiation. Embryogenesis is another term used for the ontogeny of animals, especially human beings.

# A3. Chaos and its relevance to smart structures

For the want of a nail, the shoe was lost;

for the want of a shoe the horse was lost;

and for the want of a horse the rider was lost,

being overtaken and slain by the enemy,

all for the want of care about a horseshoe nail.

– Benjamin Franklin

Nonlinear dynamics is discussed in Appendix A9. Here we focus on a possible consequence of nonlinearity, namely chaos.

Chaos theory deals with unstable conditions where even small changes can cascade into unpredictably large effects (Abarbanel 2006). As meteorologist Edward Lorenz put it, in the context of the complexity of the phenomena that determine weather, the flap of a butterfly's wings in Brazil might set off a tornado in Texas.

Chaos is a complex phenomenon that seems random but actually has an underlying order (Langreth 1992). The irregular-looking time evolution of a chaotic system is characterized by a strong dependence on initial conditions, but is, nevertheless, deterministic. The apparent irregularity stems from the nonlinearities in the equations of motion of the system, which magnify the small but inevitable errors in fixing the initial conditions to such an extent that long-time behaviour becomes seemingly erratic and practically unpredictable.
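
This magnification of small errors can be demonstrated in a few lines with the logistic map, a standard textbook example of a deterministic chaotic system (the map and the value r = 4 are illustrative choices, not taken from this appendix):

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.400000, 0.400001     # two trajectories started 1e-6 apart
gap = []
for n in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    gap.append(abs(x - y))

# The separation grows roughly exponentially until it saturates at the
# size of the attractor, making long-term prediction impractical.
print(gap[0], max(gap))
```

Both trajectories are perfectly deterministic; only the fixing of the initial condition differs, yet after a few dozen iterations the two are completely decorrelated.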

In a dissipative or non-Hamiltonian system, energy is dissipated by friction, etc., and any required movement can be maintained only by using external driving forces. Dissipative systems are characterized by the presence of attractors. These are bounded regions in phase space (e.g. fixed points, limit cycles, etc.) to which the trajectory of the dissipative system gets ‘attracted’ during the time evolution of the system. The attraction occurs due to the dissipative nature of the system, which results in a gradual shrinking of the phase-space volume accessible to the system.

Strange attractors are very unlike the simple attractors mentioned above. They provide a basis for classifying dissipative chaotic systems (Gilmore 2005). What is strange about them is the sensitive dependence of the system on initial conditions. They typically arise when the flow in phase space contracts the volume elements in some directions but stretches them along others. Thus, although there is an overall contraction of volume in phase space (characteristic of a dissipative system), distances between points on the attractor do not necessarily shrink in all directions. Likewise, points which are close initially may become exponentially separated in due course of time.

A chaotic attractor typically has embedded within it an infinite number of unstable periodic orbits: The periodic orbits are unstable even to small perturbations. Any such perturbation or displacement of the periodic orbit (e.g. due to noise) grows exponentially rapidly in time, taking the system away from that orbit. This is the reason why one does not normally observe periodic orbits in a free-running chaotic system.

A chaotic attractor is a geometric object that is neither point-like nor space-filling. It typically has fractal (or nonintegral) dimensions, which is another reason why it is called strange.

A breakthrough in chaos theory occurred when Ott, Grebogi and Yorke (OGY) (1990) showed that it should be possible to convert (or stabilize, or synchronize) a chaotic attractor to one of its possible periodic motions by applying small, time-dependent, feedback-determined perturbations to an appropriate system parameter. Pecora and Carroll (1990, 1991) also made a seminal, independent contribution to this field at about the same time.

OGY also pointed out the tremendous application potential of this idea. Any of a number of different periodic orbits can be stabilized, so one can choose the one best suited for optimizing or maximizing the performance of the system. What is more, if the need arises (because of changing environmental conditions), one can quickly and easily switch from one chosen orbit to another by changing the applied time-dependent perturbation. This can be done without having to alter the gross system configuration. The relevance of this to smart-structure applications is obvious: Such structures, by definition, are those which can alter their response functions suitably to achieve an objective even under changing environmental conditions.
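
The flavour of OGY-type control can be conveyed with the logistic map again. In this sketch (all numbers are illustrative choices, not taken from the OGY paper), the chaotic orbit is left alone until it wanders close to the map's unstable fixed point; a small, feedback-determined kick to the parameter r then pins it there:

```python
# OGY-style control sketch for the logistic map x -> r*x*(1-x).
r0 = 3.9
xstar = 1 - 1 / r0         # fixed point; unstable since |f'(xstar)| > 1
lam = 2 - r0               # f'(xstar) for the logistic map
g = xstar * (1 - xstar)    # df/dr evaluated at the fixed point
eps = 0.01                 # control window: act only near xstar

x = 0.3
history = []
for n in range(2000):
    # Linearized control law: pick dr so the next iterate lands on xstar;
    # outside the window the system runs free (dr = 0).
    dr = -lam * (x - xstar) / g if abs(x - xstar) < eps else 0.0
    x = (r0 + dr) * x * (1 - x)
    history.append(x)

# Ergodicity guarantees the free-running chaotic orbit eventually enters
# the window; from then on the tiny kicks hold it at the periodic point.
print(abs(history[-1] - xstar))
```

The same mechanism, with a different choice of target orbit, would stabilize a different periodic motion; switching targets means only changing the small perturbation, not the gross system.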

As discussed by OGY, the availability of this flexibility is in contrast to what happens for attractors which are not chaotic but, say, periodic. Small changes in the system parameters will then change only the orbit slightly, and one is stuck with whatever performance the system is capable of giving. There is no scope for radical improvement without changing the gross configuration of the system, something not practical for real-life dynamic systems (e.g. an aircraft in flight).

The work of OGY has shown that, in a chaotic system, multi-use or multi-exigency situations can be accommodated by switching the temporal programming of the small perturbations for stabilizing the most appropriate periodic orbit in phase space. Such multipurpose flexibility appears to be essential for the survival of higher life forms. OGY speculated that chaos may be a necessary ingredient in the regulation of such life forms by the brain. It follows that the design of really smart structures will have to factor this in.

We discuss asymptotic stability in the appendix on nonlinear systems. As pointed out there, asymptotic stability implies irreversibility, so one is dealing with a dissipative system rather than a conservative system. Such a system can approach a unique attractor reproducibly because it can eliminate the effects of perturbations, wiping out all memories of them. Asymptotic stability, arising from the dissipative nature of a system, is very beneficial in living systems. A conservative system, by contrast, keeps a memory of perturbations and cannot enjoy asymptotic stability.

The heart of a living being is an example of an asymptotically stable dissipative system. The chaotic feature in the functioning of the heart may be actually responsible for preventing its different parts from getting out of synchronization. In the light of OGY's work on chaotic systems, one can conclude that the heart-beat rate is controlled and varied by the brain by applying appropriate time-dependent perturbative impulses. Garfinkel et al. (1992) induced cardiac arrhythmia in a rabbit ventricle by using the drug ouabain, and then succeeded in stabilizing it by an approach based on chaos theory. Electrical stimuli were administered to the heart at irregular time intervals determined by the nature of the chaotic attractor. The result was a conversion of the arrhythmia to a periodic beating of the heart.

The idea that it is possible to steer a chaotic system into optimum-performance configurations by supplying small kicks that keep sending the system back into the chosen unstable periodic orbit has wide-ranging applicability to chemical, biological, electronic, mechanical and optical systems. Its physical realization was first reported by Ditto, Rauseo and Spano (1990). They applied it to a parametrically driven ribbon of a magnetoelastic material. Application of a magnetic field to such a material modifies its stiffness and length. The thin ribbon was clamped at the base and mounted vertically. A field as small as 0.1 to 2.5 Oe, applied along the length of the vertical ribbon, decreased the Young's modulus by an order of magnitude, causing the initially stiff and straight ribbon to undergo gravitational buckling, making it sway like an inverted pendulum. A combination of dc and ac magnetic fields was applied, with the frequency of the ac field ~1 Hz. A suitable ratio of the dc and ac fields made the buckling and unbuckling of the ribbon chaotic, rather than periodic. The swaying of the ribbon was recorded as a function of time by measuring the curvature near its base. The measurements yielded a time series of voltages, V(t).

From this time series a certain desired mode of oscillation was selected for stabilization. The experimenters waited till the chaotic oscillations came close to this mode. A second set of magnetic perturbations, determined from the knowledge of V(t), was applied. Use of these feedback perturbations resulted in a stable periodic orbit. Chaos had been controlled.

It is worthwhile to recapitulate here what can be achieved in the control of chaotic systems, and how they score over linear, conservative systems. The latter can do only one thing well (Ditto and Pecora 1993). By contrast, nonlinear systems and devices can handle several tasks. If we measure the trajectory of a chaotic system, we cannot predict where it would be on the attractor at some time in the distant future. And yet the chaotic attractor itself remains the same in time. What is more, since the chaotic orbits are ergodic, one can be certain that they would eventually wander close to the desired periodic orbit. Since this proximity is assured, one can capture the orbits by a small control. This feature is crucial for exercising control.

Deliberate building of chaos into a system can provide sensitivity and flexibility for controlling it. Instability can be a virtue if the system involved is chaotic. There are typically an infinite number of unstable periodic orbits coexisting in a chaotic system, offering a wide choice for selecting (synchronizing) an orbit (Aziz-Alaoui 2005), and for switching from one orbit to another, all with the underlying objective of optimizing performance, as well as easily altering the orbit for a smart tackling of changing environmental conditions.

# A4. Composites

Composites are made of two or more components or phases, which are strongly bonded together in accordance with some desired connectivity pattern. They are carefully patterned inhomogeneous solids, designed to perform specific functions. Some authors emphasize the presence of interfaces in composites, and define a composite as a macroscopic combination of two or more distinct materials with recognizable interfaces among them (Miracle and Donaldson 2001).

Composites can be configured in an infinite number of ways, and that offers immense scope for design for achieving or enhancing certain desirable properties, as well as for suppressing undesirable ones. Sometimes, new properties, not possessed by any of the constituents separately, can also emerge.

Composites can be either structural, or nonstructural (often called functional). Both are important from the point of view of applications in smart structures. Structural composites were introduced as early as the late nineteenth century, and this subject is therefore already quite highly developed (see Beetz 1992; Hansen 1995; Chung 2001). Functional composites, on the other hand, are of relatively more recent origin.

A typical structural composite has a matrix, a reinforcement, and a filler. The matrix material binds the other materials in the composite, giving it its bulk shape and form. The reinforcing component, usually in the form of filaments, fibres, flakes or particulates, determines to a large extent the structural properties of the composite. The filler meets the designed structural, functional, and other requirements.

Apart from the relative sizes and concentrations of the various phases constituting a composite, a factor of major importance is their connectivities. Connectivity has been defined as the number of dimensions in which a component of a composite is self-connected (Newnham, Skinner and Cross 1978; Newnham and Trolier-McKinstry 1990a, b). There are 10 possible connectivity classes for a diphasic composite in three dimensions, when no distinction is made between, say, the classes 1–3 and 3–1. When such a distinction is made, six additional connectivity classes arise. The 16 connectivity classes are: 0–0, 1–0, 0–1, 2–0, 0–2, 3–0, 0–3, 1–1, 2–1, 1–2, 3–1, 1–3, 2–2, 3–2, 2–3, and 3–3.
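
The counts quoted above follow directly from the definition: each phase is self-connected in 0, 1, 2, or 3 dimensions, and a class is an ordered or unordered pair of these values. A short check:

```python
from itertools import product

# Each phase of a diphasic composite is self-connected in 0..3
# dimensions. Ordered pairs (active phase written first) give 16
# classes; merging pairs like 1-3 and 3-1 leaves 10.
ordered = ['%d-%d' % (a, b) for a, b in product(range(4), repeat=2)]
unordered = {tuple(sorted(ab)) for ab in product(range(4), repeat=2)}
print(len(ordered), len(unordered))   # 16 10
```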

We consider an example to explain the meaning of these symbols. It will also illustrate the difference between, say, 1–3 connectivity and 3–1 connectivity. Imagine a composite in which poled rods or fibres of the piezoelectric ceramic PZT are embedded in a polymer matrix. Here the polymer matrix is the major phase, and it is connected to itself (self-connected) in all three directions or dimensions. The PZT phase is self-connected only in one dimension, namely along its length, so this is a 1–3 connectivity composite.

Contrast this with a situation in which one takes a rectangular block of poled PZT ceramic, drills parallel holes in it along one direction, and fills the holes with a polymer. This is a 3–1 composite, where a convention has been followed that the connectivity of the ‘active’ phase should be written first; PZT is the active phase (Pilgrim, Newnham and Rohlfing 1987).

For a review of several commercial applications of composite piezoelectric sensors and actuators, see Newnham et al. (1995).

Of particular interest for smart structures are the 2–2 composites, commonly known as laminated composites. Fibre-reinforced polymer-matrix laminated composites are particularly well suited for embedding sensors (Hansen 1995).

The properties of a composite may be categorized as sum properties, combination properties, and product properties.

The density of a composite is an example of a sum property; it is the weighted arithmetic mean of the densities of the constituent phases. Other examples of sum properties are electrical resistivity, thermal resistivity, dielectric permittivity, thermal expansion, and elastic compliance (van Suchtelen 1972; Hale 1976). The value of a sum property can depend strongly on the connectivity pattern of the composite (cf. Wadhawan 2000).

An example of a combination property of a composite is provided by the speed of acoustic waves in a biphasic composite (i.e. a composite made from two phases). The speed depends on two properties, namely Young's modulus and density, and the mixing rule for the Young's moduli of the two phases is not the same as that for density. Further complications are caused by the fact that the mixing rules are different for transverse and longitudinal acoustic waves (Newnham 1986).
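
The point can be made concrete with a toy calculation. Density mixes linearly with volume fraction (a sum property), but stiffness does not: the classical Voigt (parallel) and Reuss (series) rules, used here purely as illustrative mixing rules, bracket the effective Young's modulus, and the acoustic speed sqrt(E/rho) inherits both behaviours at once. The material values are invented for the sketch, roughly suggestive of a glass-fibre/polymer pair.

```python
import math

# Density mixes linearly; Young's modulus does not -- so the acoustic
# speed sqrt(E/rho) is a combination property. Numbers are invented.
f = 0.3                        # volume fraction of the stiff phase
E1, E2 = 70e9, 3e9             # Young's moduli (Pa)
rho1, rho2 = 2500.0, 1200.0    # densities (kg/m^3)

rho = f * rho1 + (1 - f) * rho2           # sum property: linear mixing
E_voigt = f * E1 + (1 - f) * E2           # parallel (upper) bound
E_reuss = 1 / (f / E1 + (1 - f) / E2)     # series (lower) bound

# The two bounds give very different sound speeds for the same composite.
print(math.sqrt(E_voigt / rho), math.sqrt(E_reuss / rho))
```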

Product properties of composites can be particularly fascinating (van Suchtelen 1972). Consider a biphasic composite with an X-Y effect in Phase 1, and a Y-Z effect in Phase 2. Application of a force X invokes a response Y in Phase 1, and then Y acts as a force on Phase 2 to invoke a response Z. The net result is an X-Z effect, which is a product property, not present in Phase 1 or Phase 2 individually.

For example, Phase 1 may be magnetostrictive, and Phase 2 piezoelectric. Suppose we apply a magnetic field H. Phase 2, being nonmagnetic, is not influenced by it directly. The magnetic field produces magnetostrictive strain in Phase 1 (the X-Y effect). Assuming that the two phases are coupled adequately, the strain will act on Phase 2 and produce a dipole moment through the piezoelectric effect (the Y-Z effect). The net result (the X-Z effect) is that a magnetic field induces an electric dipole moment in the composite; this is called the magnetoelectric effect. Note that neither Phase 1 nor Phase 2 need be magnetoelectric individually, but the composite is.
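The multiplicative structure of a product property can be sketched with a linearized toy model. Every coefficient below (magnetostrictive, piezoelectric, and inter-phase coupling) is a hypothetical illustrative value, chosen only to show how the effective X-Z coefficient is the product of the step coefficients:

```python
# Toy linear chain for a product (X-Z) property: a magnetic field H produces
# magnetostrictive strain in Phase 1, which drives a piezoelectric
# polarization in Phase 2.  All coefficient values are hypothetical.

d_magnetostrictive = 2.0e-9  # strain per unit field, Phase 1 (assumed)
e_piezoelectric = 10.0       # polarization per unit strain, Phase 2 (assumed)
coupling = 0.8               # fraction of strain transferred between phases (assumed)

def magnetoelectric_response(H):
    strain = d_magnetostrictive * H                      # X -> Y in Phase 1
    polarization = e_piezoelectric * coupling * strain   # Y -> Z in Phase 2
    return polarization

# Effective magnetoelectric coefficient: the product of the step coefficients.
alpha_eff = magnetoelectric_response(1.0)
```

The response is linear in H, so `alpha_eff` fully characterizes this toy composite even though neither phase has a magnetoelectric coefficient of its own.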

We discuss the symmetry of composite systems in a separate appendix. The emergence of the magnetoelectric effect as a product property is a consequence of the fact that when the symmetries of the two constituent phases are superimposed, the net symmetry is lower than either of the component symmetries. A lower symmetry means a lower set of restrictions on the existence of a property in a material. The symmetries of the two component phases prevent the occurrence of the magnetoelectric effect in them. But the lower symmetry of the composite allows this effect to occur.

### Transitions in composites

Phase transitions, including field-induced phase transitions, can occur in any of the constituent phases of a composite. In addition, connectivity transitions are also possible. A number of examples have been discussed by Pilgrim, Newnham and Rohlfing (1987). The connectivity pattern can be altered continuously by changing the volume fractions of the component phases, or by changing their relative size scales. At a critical value of these parameters, the composite acquires a new connectivity, with a drastic change in macroscopic properties (Newnham and Trolier-McKinstry 1990a).

### Nanocomposites

In a nanocomposite, at least one of the phases has at least one dimension below 100 nm. Many nanocomposites are biphasic. There are three main types (Cammarata 2004): nanolayered, nanofilamentary, and nanoparticulate. All of them have a very high ratio of interface area to volume. This can result in totally new and size-tuneable properties. Take the example of a nanolayered semiconductor, made from alternating layers of epitaxially matched GaAs and Ga1−xAlxAs. For layer thicknesses below the electronic mean free path in the bulk form of the two materials, quantum confinement (p.276) effects arise, drastically affecting the electronic and photonic properties. What is more, the properties can be tuned by altering the thicknesses of the two layers.

# A5. Crystallographic symmetry

Symmetry considerations form an integral part of the philosophy of physics. Noether's theorem gives an indication of why this is so. According to this theorem (cf. Lederman and Hill 2005): For every continuous symmetry of the laws of physics, there must exist a conservation law; for every conservation law, there must exist a continuous symmetry.

Symmetry of physical systems is described in the language of group theory. In this appendix we introduce the definition of a group, and give a very brief description of crystallography in terms of symmetry groups.

Atoms in a crystal have nuclei and charge clouds of electrons around them. The chemical bonding among the atoms results in a certain spatial distribution of the electron cloud, which we can describe in terms of a density function, ρ(x,y,z).

Certain coordinate transformations (translations, rotations, reflections, inversion) may map the density function onto itself; i.e. leave it invariant. The set of all such symmetry transformations for a crystal forms a ‘group’, called the symmetry group of the crystal.

What is a group? A group is a set with some specific properties. A set is a collection of objects (or ‘members’, or ‘elements’) which have one or more common characteristics. The characteristics used for defining a set should be sufficient to identify its members. A collection of integers is an example of a set, as is a collection of cats.

A group is a set for which a rule for combining (‘multiplying’) any two members of the set has been specified, and which has four essential features which we shall illustrate here by considering the example of the symmetry group of a crystal. (For this example, the set comprises the symmetry transformations of the crystal.)

Suppose a rotation θ1 about an appropriate axis is a symmetry operation, and a rotation θ2 about the same or different axis is another symmetry operation. Since each of them leaves the crystal invariant, their successive operation (or ‘product’, denoted by θ1 θ2) will also leave the crystal invariant, so the product is also a symmetry operation, and therefore a member of the set of symmetry operations. This is true for all possible products. We say that the set has the property of closure.

It also has the property of associativity. What this means is that if θ3 is another, arbitrarily chosen, symmetry operation, then (θ1 θ2) θ3 has the same effect as θ1 (θ2 θ3).

The set includes an identity operation, which simply means that not performing any coordinate transformation is also a symmetry operation.

Lastly, for every symmetry operation (or ‘element’ of the set), the inverse element is also a symmetry operation. For example, if a rotation θ1 is a symmetry operation, so is the negative rotation −θ1.

(p.277) A set of elements, for which a law of composition or multiplication of any two elements has been defined, is a group if it has the properties of closure and associativity, and if identity and inverse elements are also members of the set. A simple example of a group is the set of all integers, with addition as the law of composition.
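The four group axioms can be checked exhaustively on a finite analogue of the integers-under-addition example, say addition modulo 5 (the full set of integers is infinite, so a finite analogue lets us test every combination). A minimal sketch:

```python
from itertools import product

# Exhaustive check of the four group axioms for the set {0,1,2,3,4}
# under addition modulo 5.

elements = list(range(5))

def op(a, b):
    return (a + b) % 5

closure = all(op(a, b) in elements for a, b in product(elements, repeat=2))
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(elements, repeat=3))
identity = next(e for e in elements
                if all(op(e, a) == a == op(a, e) for a in elements))
inverses = all(any(op(a, b) == identity for b in elements) for a in elements)

print(closure, associative, identity, inverses)  # True True 0 True
```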

A crystal has the distinctive feature that an atom or a group of atoms or molecules can be identified as a building block or unit cell, using which the whole crystal can be generated by repeating it along three appropriately identified vectors, say, a1, a2, a3. These vectors are called the lattice vectors because the set of all points

(A1)
$\mathbf{t}(n_1, n_2, n_3) = n_1 \mathbf{a}_1 + n_2 \mathbf{a}_2 + n_3 \mathbf{a}_3$
for all integral values of n1, n2, n3 generates a lattice of equivalent points. This also means that every lattice, and thence any crystal based on that lattice, has translational symmetry. The lattice and the crystal are invariant under lattice translations defined by eqn A1 for various values of the integral coefficients.
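Eqn A1 can be sketched directly; the lattice vectors below describe a simple cubic lattice of unit spacing, an arbitrary illustrative choice:

```python
# Sketch of eqn (A1): lattice points t = n1*a1 + n2*a2 + n3*a3 for integer
# n1, n2, n3.  Here a1, a2, a3 span a simple cubic lattice of unit spacing.

a1, a2, a3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

def lattice_point(n1, n2, n3):
    # Component-wise: n1*a1[i] + n2*a2[i] + n3*a3[i]
    return tuple(n1 * u + n2 * v + n3 * w for u, v, w in zip(a1, a2, a3))

p = lattice_point(2, -1, 3)
print(p)  # (2.0, -1.0, 3.0)
```

Translational symmetry is the statement that shifting every such point by any further lattice translation reproduces the same set of points.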

Apart from the translational symmetry, a lattice may also have rotational or directional symmetry. The rotational symmetry of a crystal lattice defines the crystal system it belongs to.

All crystals in three-dimensional space belong to one or the other of only seven crystal systems: triclinic, monoclinic, orthorhombic, trigonal, tetragonal, hexagonal, and cubic.

For each of these crystal systems, one can identify a unit cell which has the distinct shape compatible with the rotational symmetry of that crystal system. For example, the unit cell is a cube for any crystal belonging to the cubic crystal system. Similarly, the unit cell has the shape of a square prism for any crystal belonging to the tetragonal crystal system.

The rotational symmetry of a crystal is described by a particular type of group, called the point group. It is a set of all crystallographic symmetry operations which leave at least one point unmoved. Since only directional symmetry is involved, there are no translations to be considered, and all the operations of reflection, rotation, or inversion can be applied about a fixed plane, line or point.

There are only 32 distinct crystallographic point groups, seven of which describe the directional symmetry of the seven crystal systems.

Each crystallographic point group is a set of mutually compatible rotations, reflections or inversion operations. The fact that the crystal also has the all-important translational symmetry puts severe restrictions on what can qualify as a crystallographic rotational symmetry (cf. Wadhawan 2000 for details). For example, a rotation of 2π/5 cannot be a symmetry operation for a crystal. In fact, the only permissible rotations are 2π, 2π/2, 2π/3, 2π/4, and 2π/6. These correspond to one-fold, two-fold, three-fold, four-fold, and six-fold axes of symmetry, respectively. The corresponding symmetry operations of the point group are denoted by 1, 2, 3, 4, and 6.
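The restriction to one-, two-, three-, four-, and six-fold axes can be verified numerically: expressed in a lattice basis, a rotation is an integer matrix, so its trace, which in three dimensions equals 1 + 2 cos(2π/n), must be an integer. A sketch:

```python
import math

# Crystallographic restriction: 1 + 2*cos(2*pi/n) must be an integer for a
# rotation by 2*pi/n to be compatible with a lattice.

def is_crystallographic(n):
    trace = 1 + 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

allowed = [n for n in range(1, 13) if is_crystallographic(n)]
print(allowed)  # [1, 2, 3, 4, 6]
```

In particular, n = 5 gives a trace of about 1.618, which is why five-fold axes cannot occur in a crystal.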

Some crystal structures possess inversion symmetry, denoted by the symbol i or 1̄. Some others may possess a composite symmetry comprising a combination of inversion and any of the four permitted rotational symmetries. For example, whereas the (p.278) operation i takes a point (x, y, z) to (−x, −y, −z), and the operation denoted by the symmetry element 2z takes (x, y, z) to (−x, −y, z), a composite operation which is a combination of these two, and is denoted by 2̄, takes (x, y, z) to (x, y, −z). (Incidentally, 2̄ happens to have the same effect as a reflection (mz) across a plane normal to the z-axis.)
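The composite operation can be checked by applying the corresponding coordinate-transformation matrices in sequence; a minimal sketch acting on a sample point:

```python
# Check that inversion followed by a two-fold rotation about z sends
# (x, y, z) to (x, y, -z) -- the same effect as the mirror across z = 0.

def apply(op, p):
    """Apply a 3x3 matrix (list of rows) to a point (x, y, z)."""
    return tuple(sum(op[i][j] * p[j] for j in range(3)) for i in range(3))

inversion = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
two_z     = [[-1, 0, 0], [0, -1, 0], [0, 0,  1]]

point = (1, 2, 3)
composite = apply(two_z, apply(inversion, point))
print(composite)  # (1, 2, -3): a reflection across the plane z = 0
```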

Thus the elements of the 32 crystallographic point groups consist of the symmetry operations 1, 2, 3, 4, 6, 1̄ (= i), 2̄ (= m), 3̄, 4̄, and 6̄, and their mutually compatible combinations.

There are only 14 distinct types of crystal lattices in three dimensions. These are called Bravais lattices. Since there are only seven crystal systems, it follows that a crystal system can accommodate more than one Bravais lattice. For example, there are three Bravais lattices belonging to the cubic crystal system: simple cubic (sc), body-centred cubic (bcc), and face-centred cubic (fcc). The centring mentioned here refers to the fact that if one insists on choosing a cube-shaped unit cell to reflect the full directional symmetry of the crystal system and the crystal lattice, then the cell would have lattice points, not only at the corners, but also at the body-centre (½ ½ ½) (in the case of the bcc lattice), or the face centres (½ ½ 0), (½ 0 ½), (0 ½ ½) (in the case of the fcc lattice).

The full atomic-level (or microscopic) symmetry of a crystal is described by its space group. A crystallographic space group is a group, the elements of which are all the symmetry operations (lattice translations, rotations, reflections, etc.) that map the crystal structure onto itself. The only translational symmetry a crystal can have is that described by one of the 14 Bravais groups (i.e. the groups describing the symmetry of the Bravais lattices). Therefore, to specify the space-group symmetry of a crystal we have to identify its Bravais lattice, as well as the symmetry operations involving rotation, reflection and inversion, and their mutually consistent combinations. There are 230 crystallographic space groups in all.

All crystals having the same point-group symmetry are said to belong to the same crystal class. Thus all crystals can be divided into 32 crystal classes.

Eleven of the 32 crystallographic point groups have the inversion operation as a symmetry operation. They are called the 11 Laue groups.

The remaining 21 point groups are noncentrosymmetric. For one of them (the cubic class 432), all components of the piezoelectric tensor are identically equal to zero. The remaining 20 allow the occurrence of the piezoelectric effect.

Out of these 20 piezoelectric classes, 10 are polar classes. For them the point-group symmetry is such that there is at least one direction (axis) which is not transformed into any other direction by the symmetry operations comprising the group. This means that it is a direction for which symmetry does not result in a cancellation of any electric dipole moment that may exist because of the charge distribution in the unit cell. For example, suppose the only rotational symmetry a crystal has is a two-fold axis, say along the z-direction. A spontaneously occurring dipole moment (P z) along this direction will not get altered or cancelled by any other rotational symmetry, because none exists. Thus the 10 polar classes of crystals are characterized by the occurrence of spontaneous polarization. They are the 10 pyroelectric (p.279) classes because they exhibit the pyroelectric effect: The spontaneous polarization changes with temperature, resulting in an occurrence of additional charge separation on crystal faces perpendicular to this polar direction or axis.

Since the 10 polar classes are a subset of the 20 piezoelectric classes, all pyroelectric crystals are necessarily piezoelectric also.

A crystalline material may exist, not as a single crystal, but as a polycrystal, i.e. as an aggregate of small crystals (crystallites). Since such an assembly of crystallites does not have the periodicity of an underlying lattice, there is no restriction on the allowed rotational symmetry. Of special interest is the occurrence of axes of ∞-fold symmetry. In fact, if the crystallites are oriented randomly, any direction in the polycrystal is an ∞-fold axis.

Point groups involving at least one ∞-fold axis are called limit groups or Curie groups. Limit groups are relevant for dealing with polycrystalline specimens of ferroic materials (Wadhawan 2000). For example, if we are dealing with a polycrystalline ferroelectric, we can pole it by applying a strong enough electric field at a temperature a little above the temperature of the ferroelectric phase transition, and cooling it to room temperature under the action of the field. A substantial degree of domain reorientation occurs under the action of the field, so that the spontaneous polarization in different crystallites (‘grains’) and domains tends to align preferentially along the direction of the applied field. Whereas there was no net polarization before poling, the specimen acquires a preferred direction, along which there is a net macroscopic spontaneous polarization.

What is the point-group symmetry of the poled specimen? It is the same as that of the superimposed electric field, and is denoted by the symbol ∞m: There is an axis of ∞-fold symmetry along the direction of the poling electric field; in addition, there is also mirror symmetry (denoted by m) across all planes passing through this polar axis. One can visualize this as the symmetry of a cone or a single-headed arrow.

We consider some examples of the description of space-group symmetries of crystals mentioned in this book.

BaTiO3 has the so-called perovskite structure. Its cubic phase (existing above 130°C) has the symmetry Pm3m. Here P means that the underlying Bravais lattice is primitive; i.e. only one lattice point is associated with every unit cell. The rest of the symbol is actually a brief version of m[100] 3[111] m[110]. The first symbol tells us that there is a mirror plane of symmetry, the normal of which points along the [100] direction, or the x-direction.

The international convention is such that, if there is a symbol 3 in the second place of the point-group symbol, it means that one is dealing with the cubic crystal system, with a unit cell that has the shape of a cube. Only a cubic unit cell can have a 3 axis of symmetry along its body-diagonal, i.e. along the [111] direction.

The third symbol, m[110], denotes the presence of a mirror plane of symmetry normal to the [110] direction, or the xy-direction.

On cooling, the cubic phase of BaTiO3 enters the tetragonal phase at 130°C, which has the symmetry P4mm. Here P, of course, means a primitive unit cell. In the rest of the symbol, a 4 in the first position implies, by convention, that one is dealing with the (p.280) tetragonal crystal system. In fact, the full point-group symbol is 4[001] m[100] m[110]. The four-fold axis of symmetry is along the [001] direction, or z-direction. The second symbol indicates the presence of a mirror plane of symmetry normal to the [100] direction, and the third symbol represents a mirror plane normal to the [110] direction. The point group 4mm is one of the 10 polar groups, allowing the tetragonal phase of BaTiO3 to exhibit pyroelectricity and ferroelectricity. The shape of the unit cell is that of a square prism.

On further cooling, BaTiO3 passes to a phase of symmetry Amm2, and thereafter to a phase of symmetry R3c. Both have polar point-group symmetries. The point-group symmetry of the Amm2 phase is mm2. It belongs to the orthorhombic crystal system. By convention, mm2 stands for m[100] m[010] 2[001]. That is, the first mirror plane is normal to the x-axis, and the second is normal to the y-axis. The two-fold axis, which is also the polar axis for this point group, is along the z-axis. The symbol A in Amm2 means that the underlying Bravais lattice is A-face (or yz-face) centred, and is therefore not a primitive lattice. The unit cell has the shape of a rectangular brick, and its face normal to the x-axis has a lattice point at its centre. Thus, there are two lattice points per unit cell; one at a corner of the unit cell, and the other at the centre of the A-face.

In R3c, a 3 in the first place (rather than the second place, as in a cubic point group) indicates that we are dealing with the trigonal or rhombohedral crystal system. The Bravais lattice is primitive, but we write R3c, rather than P3c (by convention). The underlying point group is 3m. There is a three-fold axis along [001], and a c-glide normal to [100]. The notional mirror plane associated with the c-glide operation is parallel to the three-fold axis, and its normal is along [100]. The glide operation is a composite symmetry operation. In the present case, it amounts to reflecting a point across the plane, and then translating this point by c/2.

We consider two more space-group symbols, used in Chapter 9. These are Cm and P4/mmm.

In Cm, the occurrence of a lone m (with no other symbols for the point-group part) indicates that it is for a crystal belonging to the monoclinic system. And C tells us that the Bravais lattice is C-face centred.

The full form of P4/mmm is P(4[001]/m[001]) m[100] m[110], which, by now, should be self-explanatory to the reader. The underlying point-group symmetry (4/mmm) is centrosymmetric, rather than polar.

# A6. Electrets and ferroelectrets

An electret is a dielectric solid that has been ‘electrized’ or quasi-permanently polarized by the simultaneous application of heat and strong electric field (Pillai 1995). On cooling to room temperature from an optimum high temperature, the dielectric under the action of the strong electric field develops a fairly permanent charge separation, manifested by the appearance of charges of opposite signs on its two surfaces. Thus even a nonpolar material can be made polar, exhibiting pyroelectricity and piezoelectricity.

(p.281) High temperature and electric field are not the only ways of introducing quasi-permanent charge separation in an insulating material. Other options include the use of a magnetic field in place of electric field (magnetoelectrets), and photons and other ionizing radiation (photoelectrets, radioelectrets, etc.). Mechanical stress has also been used, instead of electric or magnetic fields.

If the dipole moment of an electret can be made to switch sign reversibly (almost like a ferroelectric), we speak of a ferroelectret (Bauer, Gerhard-Multhaupt and Sessler 2004).

Initially the materials used for making electrets were waxes, wax mixtures, and other organic substances. It was realized in due course that the use of suitable polymers can result in higher dipole moments, as well as a better permanency of the dipole moments. The polymers used also have superior thermomechanical properties, and can be readily processed into thin or thick films of requisite shapes and sizes.

Polymers can be either polar (like polyvinylidene fluoride (PVDF)), or nonpolar (like polyethylene (PE), or polytetrafluoroethylene (PTFE; better known as ‘Teflon’)). Electrets have been made from both types, and involve a number of polarization mechanisms (Pillai 1995).

Electrets find a wide range of device applications, including those in sensors, actuators, and robotics (cf. Nalwa 1995).

# A7. Glass transition

Any noncrystalline solid is a glass. A glass is a disordered material that lacks the periodicity of a crystal, but behaves mechanically like a solid. Because of this noncrystallinity, a whole range of relaxation modes and their temperature variation can exist in a glass. In fact, an empirical definition of glass, due to Vogel (1921) and Fulcher (1925), states that a glass is one for which the temperature dependence of relaxation time is described by the equation

(A2)
$\tau = \tau_0 \exp\!\left( \frac{T_0}{T - T_f} \right)$
where τ0, T0, and Tf are ‘best-fit’ parameters. This is the well-known Vogel–Fulcher equation (cf. Tagantsev 1994; Angell 1995). The parameters τ0 and T0 depend on the temperature range of measurement.
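As a numerical sketch (assuming the three-parameter form τ = τ0 exp[T0/(T − Tf)], with arbitrary illustrative parameter values), the relaxation time grows steeply as T approaches Tf from above:

```python
import math

# Vogel-Fulcher sketch, assuming tau = tau0 * exp(T0 / (T - Tf)).
# All parameter values are arbitrary illustrative numbers (temperatures in K).

def vogel_fulcher(T, tau0=1e-12, T0=500.0, Tf=300.0):
    if T <= Tf:
        raise ValueError("this form applies only above Tf")
    return tau0 * math.exp(T0 / (T - Tf))

# Relaxation time grows steeply on cooling towards Tf:
print(vogel_fulcher(400.0) < vogel_fulcher(320.0) < vogel_fulcher(305.0))  # True
```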

A thermodynamic definition of glass can be given in terms of two experimental criteria, namely the existence of a glass transition, and the existence of a residual entropy at T = 0 K (cf. Donth 2001; Debenedetti and Stillinger 2001). A conventional or canonical glass is usually obtained by a rapid cooling or quenching of a melt (to prevent crystallization). It is thus a state of frozen disorder, and can therefore be expected to have a nonzero configurational entropy at T = 0 K.

One can associate a glass transition temperature Tg with a glass-forming material. For T > Tg, it is a liquid. As the temperature is decreased, the density increases. As the density approaches the value for the solid state, its rate of increase with decreasing temperature becomes smaller. Tg is the temperature such that this rate is high above it, and low below it.

(p.282) Other properties also change significantly around Tg. In particular, there is a large increase of viscosity below Tg, and the specific heat suddenly drops to a lower value on cooling through Tg.

Spin glasses and orientational glasses (including relaxor ferroelectrics) have features in common with canonical glasses. But there is also an important point of difference. Their glass transition can take place even at low cooling rates. Although they are characterized by a quenched disorder, sudden cooling or quenching is not necessary for effecting it.

The term glassy behaviour is used in the context of systems that exhibit noncrystallinity, nonergodicity, hysteresis, long-term memory, history-dependence of behaviour, and multiple relaxation rates. Multiferroics usually display a variety of glassy properties.

# A8. Nonextensive thermostatistics

Entropy is an all-important concept in thermodynamics, invoked for understanding how and why one form of energy changes (or does not change) to another. As introduced by Clausius in 1865, the term entropy was a measure of the maximum energy available for doing useful work. It is also a measure of order and disorder, as expressed later by the famous Boltzmann equation:

(A3)
$S = k_B \ln W$
The entropy S is thus the product of the Boltzmann constant, kB, and the logarithm of the number W of (equally probable) microstates of the system under consideration. So defined, entropy is an extensive state parameter; i.e. its value is proportional to the size of the system. Also, for two independent systems, the entropy of the combined system is simply the sum of the entropies of the two individual systems.
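The extensivity of eqn A3 can be verified numerically: for two independent systems the microstate counts multiply, so the logarithms, and hence the entropies, add. A minimal sketch with illustrative microstate counts:

```python
import math

# Extensivity of S = kB * ln(W): independent systems have W = W_A * W_B,
# so S(A+B) = S(A) + S(B).

kB = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    return kB * math.log(W)

W_A, W_B = 10**6, 10**4          # illustrative microstate counts
S_combined = boltzmann_entropy(W_A * W_B)
S_sum = boltzmann_entropy(W_A) + boltzmann_entropy(W_B)
print(abs(S_combined - S_sum) < 1e-30)  # True
```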

This equation for entropy, though a workhorse of physics and thermodynamics for over a century, has had its share of failures, and has therefore been generalized by Tsallis (1988, 1995a, b, 1997). Tsallis began by highlighting the three premises on which Boltzmann thermodynamics is based (cf. Tirnakli, Buyukkilic and Demirhan 1999):

• The effective microscopic interactions are short-range (in relation to the linear size of the system).

• The time range of the microscopic memory of the system is short compared to the observation time (i.e. one is dealing with ‘Markovian’ processes).

• The system evolves, in some relevant sense, in a Euclidean-like space-time. A contrary example is that of (multi)fractal space-time.

Such a system has the property of thermodynamic extensivity (or additivity). Boltzmann thermodynamics fails whenever any of these conditions is violated. There is a plethora of situations in which this happens. By ‘failure’ of the Boltzmann formalism (p.283) is meant the divergence of standard sums and integrals appearing in the expressions for quantities like the partition function, internal energy, and entropy (Tsallis 1995b). As a result, one is left with no ‘well-behaved’ prescriptions for calculating, for example, specific heat, susceptibility, diffusivity, etc. Contrary to the predictions of Boltzmann thermodynamics, these quantities are always measured to be finite, rather than infinite.

Tsallis (1988) remedied this very serious situation by generalizing Boltzmann thermodynamics by introducing two postulates. The first postulate generalizes the definition of entropy, and the second postulate generalizes the definition of internal energy (which is another extensive state parameter in Boltzmann thermodynamics).

In Tsallis thermostatistics, an entropic index q is introduced, with q = 1 coming as a special case corresponding to conventional thermostatistics. The generalized entropy is postulated as defined by

(A4)
$S_q = k\,\frac{1 - \sum_{i=1}^{W} p_i^{q}}{q - 1}$
with
(A5)
$\sum_{i=1}^{W} p_i = 1$
Here q is a real number, and {p_i} are the probabilities of the W microscopic states.

The entropy so defined is nonnegative but nonextensive. Its limiting value for q = 1 is the standard (extensive) entropy, interpreted in the 1870s by Gibbs in terms of statistical mechanics:

(A6)
$S = -k_B \sum_{i=1}^{W} p_i \ln p_i$
This equation reduces to the Boltzmann equation for the equiprobability case, i.e. when $p_i = 1/W$.

Tsallis entropy has the pseudo-additivity property. If A and B are two independent systems, i.e. if $p_{ij}^{A+B} = p_i^{A} p_j^{B}$, then

(A7)
$\frac{S_q(A+B)}{k} = \frac{S_q(A)}{k} + \frac{S_q(B)}{k} + (1-q)\,\frac{S_q(A)}{k}\,\frac{S_q(B)}{k}$
Thus (1−q) is a measure of the nonextensivity of the system. Moreover, the entropy is greater than the sum for q < 1, and less than the sum for q > 1. The system is said to be extensive for q = 1, superextensive for q < 1, and subextensive for q > 1.
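A numerical check of the pseudo-additivity rule A7, in units where k = 1, using the standard Tsallis form S_q = (1 − Σ p_i^q)/(q − 1); the two probability distributions are arbitrary illustrative choices:

```python
import math

# Tsallis entropy (k = 1) and a check of pseudo-additivity for two
# independent two-state systems with illustrative probabilities.

def tsallis_entropy(p, q):
    if q == 1.0:  # limiting case: the Gibbs entropy
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

pA = [0.3, 0.7]
pB = [0.6, 0.4]
pAB = [a * b for a in pA for b in pB]  # joint distribution (independence)

q = 0.5
lhs = tsallis_entropy(pAB, q)
rhs = (tsallis_entropy(pA, q) + tsallis_entropy(pB, q)
       + (1 - q) * tsallis_entropy(pA, q) * tsallis_entropy(pB, q))
print(abs(lhs - rhs) < 1e-12)  # True
```

With q < 1, the joint entropy exceeds the plain sum (superextensive), as stated in the text.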

(p.284) The second postulate introduced by Tsallis is the following generalized equation for internal energy:

(A8)
$U_q = \sum_{i=1}^{W} p_i^{q}\,\varepsilon_i$
Here {ɛi} is the energy spectrum of the microscopic states.

In this formalism, the canonical ensemble equilibrium distribution is obtained by first defining the generalized partition function as

(A9)
$Z_q = \sum_{i=1}^{W} \left[ 1 - (1-q)\,\beta\,\varepsilon_i \right]^{1/(1-q)}$
where
(A10)
$\beta = 1/(k_B T)$
One then optimizes Sq under suitable constraints (namely $\sum_i p_i = 1$ and $U_q \equiv \sum_{i=1}^{W} p_i^{q}\,\varepsilon_i$) as follows:
(A11)
$p_i = \frac{\left[ 1 - (1-q)\,\beta\,\varepsilon_i \right]^{1/(1-q)}}{Z_q}$
This expression is the generalization of the standard expression for the Boltzmann weight, namely $e^{-\beta \varepsilon_i}$.

The Boltzmann factor is therefore no longer an exponential always. It can be a power law. Power laws are strongly linked to fractal behaviour, and are encountered in a large variety of natural phenomena. In nonextensive systems, the correlations among individual constituents do not decay exponentially with distance, but rather obey a power-law dependence (cf. Section 5.5.2 on self-organized criticality, where power-law dependence is the central theme).
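The contrast between the q-generalized weight [1 − (1 − q)βε]^{1/(1−q)} and the exponential Boltzmann weight can be sketched numerically; for q > 1 the generalized weight decays as a power law and so has a much fatter tail (the parameter values below are illustrative):

```python
import math

# q-generalized Boltzmann weight vs the exponential weight.  For q = 1.5
# the weight is (1 + 0.5*beta*eps)**(-2): a power law, not an exponential.

def q_weight(eps, beta=1.0, q=1.5):
    base = 1.0 - (1.0 - q) * beta * eps
    if base <= 0.0:  # outside the cut-off region (can occur for q < 1)
        return 0.0
    return base ** (1.0 / (1.0 - q))

ratio_power = q_weight(100.0) / q_weight(10.0)   # power-law decay
ratio_exp = math.exp(-100.0) / math.exp(-10.0)   # exponential decay
print(ratio_power > ratio_exp)  # True: the power law has the fatter tail
```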

A visit to the website http://tsallis.cat.cbpf.br/biblio.htm gives some idea of the huge number of very basic scientific problems which have yielded to Tsallis thermostatistics. Here is a small sampling:

• Spin glasses and the replica trick.

• Theory of perceptions, notably the theory of human visual perception.

• The travelling-salesman problem.

• The ubiquity of Lévy distributions in Nature.

• Non-Gaussian behaviour of the heartbeat.

• Stellar polytropes.

• Two-dimensional turbulence.

• Peculiar velocity distribution of galaxy clusters.

• Nanostructured materials.

• (p.285) Earthquakes, flocking patterns of birds, clouds, mountains, coastlines, and other self-organizing systems that exhibit fractal behaviour.

• Time-dependent behaviour of DNA and other macromolecules.

Tsallis has argued that his postulated expression of entropic nonextensivity ‘appears in a simple and efficient manner to characterize what is currently referred to as complexity, or at least some types of complexity’. The basic idea is that any small number raised to a power less than unity becomes larger. For example, 0.4^0.3 ≈ 0.76. Thus, if an event is somewhat rare (say p = 0.4), an entropic index q = 0.3 makes the effective probability larger (0.76). Tsallis gives the example of a tornado to illustrate how low-probability events can grow in weight for nonextensive systems. Unlike air molecules under normal conditions, the movements of air molecules in a tornado are highly correlated: trillions upon trillions of molecules turn around in a correlated manner. A vortex is a very rare (low-probability) occurrence, but when it is there, it controls everything, because it is a nonextensive system.

# A9. Nonlinear dynamical systems

Over the last few decades, the somewhat arbitrary compartmentalization of science into various disciplines has been getting more and more porous. The science of nonlinear systems is one major reason for this changing perspective.

Nonlinear phenomena in large systems tend to be very complex. The ready availability of huge computing power has made all the difference when it comes to investigating them. Nonlinear phenomena are now more tractable than ever before, and (to distort the original meaning of Phil Anderson's famous remark), more is indeed different, in the sense that more computational power has led to a qualitative change in nonlinear science; it has made it all-pervasive. The basic unity of all science has become more visible. The diverse range of the contents of this book is an example of that.

A nonlinear system is characterized by the breakdown of the principle of linear superposition: the sum of two solutions of an equation is not necessarily a solution. The output is not proportional to the input, because the proportionality factor itself depends on the input. This makes field-tuneability of properties a wide-ranging reality, a situation of direct relevance to the subject of smart structures.

The book Exploring Complexity by Nicolis and Prigogine (1989) is an excellent and profound introduction to the subject of nonlinear systems (also see Prigogine 1998). We recapitulate here a few basic ideas from that book, just to introduce the reader to the vocabulary of complexity. Some of the terms we use here are explained in the Glossary.

The time-evolution of a system, described by a set of state parameters {χi}, can be influenced by the variations of some control parameters λ:

(A12)
$\frac{d\chi_i}{dt} = F_i(\{\chi_j\};\ \lambda)$
(p.286) Whatever the form of Fi, in the absence of constraints these equations must reproduce the state of equilibrium:
(A13)
$F_i(\{\chi_{j,\mathrm{eq}}\};\ \lambda = 0) = 0$
For a nonequilibrium steady state (equilibrium is also a steady state), this generalizes to
(A14)
$F_i(\{\chi_{j,s}\};\ \lambda) = 0$
For a linear system, if χ is the unique state-variable, eqn A12 can take the form
(A15)
$\frac{d\chi}{dt} = \lambda - k\chi$
where k is some parameter of the system. This yields a stationary-state solution:
(A16)
$\chi_s = \lambda / k$
A plot of χ against λ is a straight line, as one would expect for a linear-response system. For a nonlinear system, eqn A16 will not hold, and the plot will not be a straight line, making the system amenable to all kinds of complex behaviour.
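A minimal numerical sketch, assuming the simple linear form dχ/dt = λ − kχ with illustrative parameter values: whatever the initial condition, the system relaxes to the stationary state χs = λ/k.

```python
# Euler integration of the linear system dX/dt = lam - k*X (assumed form,
# with illustrative parameters): X(t) relaxes to X_s = lam/k from any start.

def evolve(x0, lam=2.0, k=0.5, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += (lam - k * x) * dt
    return x

x_stationary = 2.0 / 0.5  # lam / k = 4.0
print(abs(evolve(0.0) - x_stationary) < 1e-6)   # True
print(abs(evolve(10.0) - x_stationary) < 1e-6)  # True
```

That both starting points land on the same value is a small preview of the asymptotic stability discussed in Case 2 below: a dissipative system can wipe out the memory of its initial perturbation.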

Having arrived at a stationary state, the system stays there if there is no perturbation. For a conservative system, χs is a state of mechanical equilibrium. For a dissipative system, it can be a state of stationary nonequilibrium or a state of equilibrium.

In any real system, there are always perturbations, either internal (e.g. thermal fluctuations), or external (because the system is always communicating with the environment), so that the so-called stationary state really gets modified to

(A17)
$$\chi(t) = \chi_s + x(t)$$
where x denotes the perturbation. χ_s now serves as a reference state for the system.

We now consider all possible ways in which the system may respond to the deviations imposed by x(t). There are four possible scenarios.

• Case 1. The simplest possibility is that, after some transient jitter or adjustment, the system comes close to the reference state χ_s. This is a case of point stability. An example is that of a pendulum which, when displaced slightly from its vertical equilibrium position, tends to settle towards that position again. This final state is described as an attractor; it ‘attracts’ the system towards itself.

If we are interested not in the response of one stationary state but in that of a whole trajectory of them, we deal with orbital stability, rather than point stability.

• Case 2. If χ(t) approaches χ_s asymptotically as time progresses, χ_s is said to be an asymptotically stable state. As in Case 1, the argument can be extended to asymptotic orbital stability. Asymptotic stability implies irreversibility, so we are dealing with a dissipative system here. Such a system can approach a unique (p.287) attractor reproducibly because it can eliminate the effects of perturbations, wiping out all memories of them. Asymptotic stability is a very beneficial effect of irreversibility in Nature (Nicolis and Prigogine 1989; Prigogine 1998).

By contrast, conservative systems keep a memory of the perturbations. A conservative system cannot enjoy asymptotic stability.

• Case 3. In this category, perturbations have a strong destabilizing effect, so that, as time passes, χ(t) does not remain near χ_s. We speak of point instability and orbital instability of χ_s. Such a situation is possible for both conservative and dissipative systems.

• Case 4. It can happen that a system is stable against small initial perturbations, but unstable against large initial perturbations. χ_s is then said to be locally stable and globally unstable. If, on the other hand, there is stability against any initial value of the perturbation, we have global stability. In such a case, χ_s is said to be a global attractor. An example is that of thermodynamic equilibrium in an isolated system.

### Bifurcation

Let us focus on dissipative systems; conservative systems cannot have asymptotic stability.

Consider a variable x, controlled by a parameter λ through the following rate equation:

(A18)
$$\frac{dx}{dt} = \lambda x - x^3$$
The fixed points (steady states) are given by
(A19)
$$\lambda x_s - x_s^3 = 0$$
This equation has three solutions: x_0 and x_±. Apart from the trivial solution x_0 = 0, the other two solutions are given by
(A20)
$$x_\pm^2 = \lambda$$
Only positive λ gives meaningful solutions of this equation:
(A21)
$$x_\pm = \pm\sqrt{\lambda}$$
The solution x_0 = 0 is independent of λ, and the other two solutions correspond to two distinct branches of the plot of x_s against λ. Thus, for λ ≥ 0 the horizontal curve bifurcates into two branches. This is known as pitchfork bifurcation (cf. Stewart 1982).

The fixed point x_0 is globally asymptotically stable for λ < 0, and unstable for λ > 0. The solutions x_+ and x_− are asymptotically, but not globally, stable.

At the bifurcation point λ = 0, the solutions x_± cannot be expanded as a power series in this control parameter; this point is a singularity.

(p.288) Thus, multiple, simultaneously stable, solutions can exist for nonlinear systems described by some very simple mathematical models. The example discussed here demonstrates the ability of the system to bifurcate or to switch to perform regulatory tasks.
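The bifurcation analysis above can be reproduced in a few lines. The sketch assumes the standard pitchfork form dx/dt = λx − x³ (consistent with the solutions x_0 = 0 and x_± = ±√λ quoted above); a fixed point is linearly stable when the derivative of the right-hand side is negative there:

```python
import math

# Pitchfork normal form dx/dt = f(x) = lam*x - x**3 (assumed form of the
# rate equation). A fixed point x_s is linearly stable when f'(x_s) < 0.

def f_prime(x, lam):
    return lam - 3.0 * x**2

def fixed_points(lam):
    if lam <= 0:
        return [0.0]              # only the trivial solution survives
    r = math.sqrt(lam)
    return [0.0, r, -r]           # x_0 and the two bifurcated branches

lam = 1.0
pts = fixed_points(lam)                               # [0.0, 1.0, -1.0]
stability = {x: f_prime(x, lam) < 0 for x in pts}
print(stability)   # x_0 unstable, x_+ and x_- stable
```

For λ ≤ 0 only x_0 exists (and is stable); for λ > 0 it loses stability to the pair x_±, which is the switching behaviour described in the text.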

We have so far discussed only a one-dimensional phase space, in which the phase trajectories can be only straight half-lines, converging to or diverging from the fixed points. Much more flexibility of behaviour becomes available in two-dimensional phase space. Features such as periodic attractors and limit cycles become possible in such a phase space.

The complexity of possible nonlinear behaviour increases enormously as we go to still higher-dimensional phase spaces. Particularly interesting is the existence of strange attractors. The property of asymptotic stability of dissipative systems allows the possibility of attracting chaos: chaos becomes the rule, rather than the exception, in such systems, as trajectories emanating from certain parts of phase space are inevitably drawn towards the strange attractor (Crutchfield, Farmer and Packard 1986; Grebogi, Ott and Yorke 1987; Gleick 1987; Kaye 1993; Ott and Spano 1995). We discuss chaos in a separate appendix.

# A10. Symmetry of composite systems

Suppose we have two systems described by symmetry groups G_1 and G_2. The systems could, for example, be the two phases constituting a biphasic composite material. Or we could have a crystal of symmetry G_1 on which a field of symmetry G_2 has been applied. What is the net symmetry of the composite system in each of these examples?

Common sense tells us that if some symmetry operation is present in both G_1 and G_2, then it would be present in the composite system also. And if a symmetry operation is present only in G_1 or only in G_2, but not in both, then it would not be present as a symmetry of the composite system. In other words, only the common symmetry elements can survive when G_1 and G_2 are superimposed.

This fact is embodied in what is called the Curie principle of superposition of symmetries, which states that if two or more symmetries (G_1, G_2, G_3, …) are superimposed, then the symmetry group (G_d) of the composite system has only those elements that are common to all the superimposed groups. Mathematically this is expressed by writing G_d as the intersection group of G_1, G_2, G_3, …:

(A22)
$$G_d = G_1 \cap G_2 \cap G_3 \cap \cdots$$

It is clear from this that G_d cannot be a higher symmetry than the component symmetries G_1, G_2, G_3, etc.; it can at the most be equal to any of them:

(A23)
$$G_d \subseteq G_i \qquad (i = 1, 2, 3, \ldots)$$
In the appendix on composites, we discuss an example of how this lowering of symmetry, when two phases are superimposed in a composite material, makes possible the occurrence of the magnetoelectric effect.
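The intersection rule of eqn A22 can be sketched directly by representing symmetry operations as matrices; the two groups below are illustrative choices (an mm2-like group and the symmetry of a superposed field), not the book's composite example:

```python
# Symmetry operations of the plane as 2x2 matrices, stored as tuples so
# they can be members of a set. Illustrative groups, not from the book.
E  = ((1, 0), (0, 1))      # identity
MX = ((-1, 0), (0, 1))     # m_x: mirror perpendicular to the x-axis
MY = ((1, 0), (0, -1))     # m_y: mirror perpendicular to the y-axis
R2 = ((-1, 0), (0, -1))    # 2_z: two-fold rotation

G1 = {E, MX, MY, R2}       # symmetry of the first component (mm2-like)
G2 = {E, MX}               # symmetry of a superposed field

# Curie principle (eqn A22): the composite keeps only the common operations.
Gd = G1 & G2
print(Gd == {E, MX})       # True
```

The set intersection makes eqn A23 automatic: Gd is by construction a subset of every superimposed group.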

(p.289) Equation A23, which is a corollary of the Curie principle, embodies a very important theorem of crystal physics, called the Neumann theorem. Before we state the theorem, let us assume that G_1, G_2, G_3, … denote the symmetry groups for the various macroscopic physical properties of a crystal. Since all these properties occur in the same crystal, G_d can be identified with the point-group symmetry of the crystal. The Neumann theorem simply states that the symmetry G_i possessed by any macroscopic physical property of the crystal cannot be lower than the point-group symmetry of the crystal; it must be at least equal to it, if not higher (eqn A23).

The subscript d in eqn A22 stands for dissymmetrization or symmetry-lowering. In general, the symmetry is indeed lowered when we superimpose two or more symmetries.

But there can be exceptions when we superimpose ‘equal’ objects in certain special ways. The different domain types in a specimen of a ferroic material are an example of equal objects. Each domain type has the same crystal structure as any other domain type; only their mutual positions or orientations are different.

We consider here a simpler (geometrical) example of equal objects to illustrate how the symmetry of a composite object formed from them can be higher than G_d.

Consider a rhombus (Fig. A1a), with one of its diagonals horizontal (parallel to the x-axis); the other diagonal will naturally be vertical (parallel to the y-axis). Either of them divides the rhombus into two equal parts, each part being an isosceles triangle. Let us choose the vertical diagonal for this purpose.

We can view the rhombus as a composite object, formed by combining the two equal triangles along the vertical diagonal. Each triangle has the same symmetry, namely a mirror plane (or line) m_y perpendicular to the y-axis. Therefore, in the

Fig. A1 Formation of a composite object (rhombus) from two equal isosceles triangles, having an apex angle θ (a). The rhombus becomes a square when θ = 90° (b), even when there is no change in the symmetries of the two isosceles triangles from which the square is constituted.

(p.290) notation of eqn A22,
(A24)
$$G_1 = G_2 = (1,\, m_y)$$
Then
(A25)
$$G_d = G_1 \cap G_2 = (1,\, m_y)$$
But the actual symmetry (say G_s) of the composite object (the rhombus) is higher than this: an additional mirror symmetry (m_x) is present.

The symmetry group G_s can be obtained as an extended group from G_d:

(A26)
$$G_s = G_d + M$$
This generalization of the original Curie principle was suggested by Shubnikov, and eqn A26 is a statement of the Curie–Shubnikov principle of superposition of symmetries (cf. Wadhawan 2000).

M in eqn A26 is a symmetrizer. If the objects superimposed are unequal (imagine two unequal isosceles triangles superimposed as described above), then m_x is not a symmetry operation of the composite, and M is just an identity operation. But for the example of the rhombus discussed here,

(A27)
$$M = G_d\, m_x$$
This can be rewritten as
(A28)
$$M = (1,\, m_y)\, m_x = (m_x,\, 2_z)$$
Substituting eqn A27 into eqn A26 we get
(A29)
$$G_s = G_d + G_d\, m_x = (1,\, m_y,\, m_x,\, 2_z)$$
which describes correctly the symmetry of the rhombus.
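The coset construction can be checked by brute force: adjoining the coset G_d·m_x to G_d = (1, m_y) must yield a set that is closed under composition, i.e. the full symmetry group (1, m_y, m_x, 2_z) of the rhombus. A minimal sketch:

```python
# Rhombus example: G_d = (1, m_y); adjoining the coset G_d * m_x yields
# the full symmetry group (1, m_y, m_x, 2_z). 2x2 matrices as tuples.
E  = ((1, 0), (0, 1))
MY = ((1, 0), (0, -1))    # m_y: mirror perpendicular to the y-axis
MX = ((-1, 0), (0, 1))    # m_x: mirror perpendicular to the x-axis
R2 = ((-1, 0), (0, -1))   # 2_z: two-fold rotation

def mul(a, b):
    """2x2 matrix product, returned as a hashable tuple of tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

Gd = {E, MY}
coset = {mul(g, MX) for g in Gd}     # G_d * m_x = {m_x, 2_z}
Gs = Gd | coset                      # extended group of eqn A26

# Gs is closed under composition, i.e. it is a genuine group:
closed = all(mul(a, b) in Gs for a in Gs for b in Gs)
print(len(Gs), closed)               # 4 True
```

The product m_y·m_x coming out as the two-fold rotation 2_z is what turns the coset into the missing half of the rhombus group.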

### Latent symmetry

As first discussed by Wadhawan (2000), a very interesting and important situation develops as we vary the apex angle, say θ, of the two equal isosceles triangles combined to construct the composite object, namely the rhombus. For θ = 90° the rhombus becomes a square (Fig. A1b), and then even eqn A29, which is a generalization of the original Curie principle, fails to give correctly the symmetry group for the square. We had introduced only one symmetrizer (M = G_d m_x) to explain the symmetry of the rhombus. More symmetrizers must be introduced to form a still larger extended group G_s which can describe the symmetry of the square (which has a four-fold axis of symmetry among its symmetry elements). The general definition of the symmetrizer is therefore as follows:

(A30)
$Display mathematics$

There is, however, another way of looking at what has happened here (Wadhawan 2000). Suppose we start with two equal right-angled isosceles triangles. For them, (p.291) G_1 = G_2 = (1, m_y), as before; the fact that θ = 90° makes no difference to G_1 and G_2. There is no four-fold symmetry axis either in G_1 or in G_2, or in the recipe used for forming the composite (the square) from the two component triangles. Yet the four-fold axis does arise when the composite is formed. Wadhawan (2000) called the four-fold axis an example of latent symmetry. It is as if the four-fold axis lies dormant (i.e. is not manifest) in the symmetry of the two identical right-angled triangles, and manifests itself only when the composite is formed.
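The appearance of the four-fold axis can be demonstrated with a toy calculation that represents each figure by its vertex set (for these shapes and isometries, invariance of the vertex set is equivalent to invariance of the figure):

```python
# Latent symmetry: split the square with vertices (±1, 0), (0, ±1) along
# its vertical diagonal into two right-angled isosceles triangles. Each
# triangle has only the mirror m_y; the composite acquires a four-fold axis.

def rot90(p):          # four-fold rotation about the origin
    x, y = p
    return (-y, x)

def mirror_y(p):       # m_y: mirror perpendicular to the y-axis
    x, y = p
    return (x, -y)

def invariant(shape, op):
    return {op(p) for p in shape} == set(shape)

tri1 = {(0, 1), (0, -1), (1, 0)}     # right half of the square
tri2 = {(0, 1), (0, -1), (-1, 0)}    # left half
square = tri1 | tri2

print(invariant(tri1, mirror_y), invariant(tri1, rot90))   # True False
print(invariant(square, rot90))                            # True
```

Neither component admits the four-fold rotation, yet their union does: the symmetry is latent in the components and manifests only in the composite.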

A formal group-theoretical treatment of this concept has been given by Litvin and Wadhawan (2001, 2002). Consider an object A of symmetry H. We can construct a composite object S from A by applying to it a set of transformations (‘isometries’) {g_1 = 1, g_2, …, g_m}:

(A31)
$$S = g_1 A \cup g_2 A \cup \cdots \cup g_m A$$
There is not much loss of generality if we assume that the isometries involved constitute a group, say G:
(A32)
$$G = \{g_1, g_2, \ldots, g_m\}$$
Latent symmetry, by definition (Litvin and Wadhawan 2002), is any symmetry of the composite S that is not a product of the operations or isometries of G and H.

A partition theorem proved by Litvin and Wadhawan (2002) provides a sufficient condition for an isometry to be a symmetry of a composite constructed from a component A by a set of isometries constituting a group G.

# A11. Tensor properties

In this appendix we describe some basic notions about tensors, and introduce the tensor properties relevant to the subject matter of this book.

The properties of a material are specified as relationships between measurable quantities. For example, we can measure the mass m and the volume V of a specimen of a material, and it is the density ρ which connects these two measurables:

(A33)
$$\rho = m/V$$
Both m and V in this equation are scalars; they can be specified completely in terms of single numbers (in appropriate units), and therefore ρ also can be described completely by a single number. It is a scalar property of the material.

### Tensors of rank 1

Let us consider a material (a single crystal, to be specific) which exhibits what is called the pyroelectric effect. It can occur in crystalline materials belonging to any of the 10 polar classes. The point-group symmetries of these classes of crystals are such that there is a direction in them (called the polar axis), along which there occurs a nonzero electric polarization or dipole moment, even when no electric field has been applied. Thus such a polarization is spontaneous, rather than induced by an external field.

(p.292) The spontaneous polarization, naturally, varies with temperature. Let us say that a change ΔT in temperature results in a change P(P 1, P 2,P 3) in the polarization; the components P 1, P 2, P 3 of P are with reference to a Cartesian system of axes. We can express this relationship between ΔT and Pi as follows:

(A34)
$$P_i = p_i\, \Delta T$$
Here the proportionality constant p_i is a component of what is called the pyroelectric tensor.

(p_i) is a vector. All its three components must be specified for defining it completely, unlike the case of density, which requires only one number for a complete specification.

Both (P_i) and (p_i) are vectors. Another example of a vector is the position vector of a point in space: It is the straight line from the origin to the point in question. Such a line is defined by the three coordinates of the point: (x_1, x_2, x_3) or (x_i).

Suppose we make a transformation of the coordinate axes by rotating the reference frame in a general way. Then the coordinates (x_i) will change to, say, (x_i′). It should be possible to calculate the new coordinates in terms of the old ones:

(A35)
$$x_i' = \sum_j a_{ij}\, x_j$$
One usually follows the convention that if an index is repeated on either side of an equation (as j is on the RHS of the above equation), a summation over that index is implied, and need not be written explicitly. So we can rewrite eqn A35 as
(A36)
$$x_i' = a_{ij}\, x_j$$
Since (p_i) is a vector, just like the position vector (x_i), its components will change under a coordinate transformation according to an equation similar to eqn A36:
(A37)
$$p_i' = a_{ij}\, p_j$$
It should be noted that, since there is a summation over the index j, it does not matter what symbol we use for this index; the following equation says the same thing as the above equation:
(A38)
$$p_i' = a_{ik}\, p_k$$
We say that (p_i) is a tensor of rank 1; this is because its components transform (under a general rotation of coordinate axes) like the components of a single position vector (x_i).
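The rank-1 transformation law p_i′ = a_ij p_j can be exercised with numpy's `einsum`, which implements exactly the implied-summation convention (the rotation angle and vector components are illustrative):

```python
import numpy as np

# Transformation of a rank-1 tensor (vector) under a rotation of axes:
# p'_i = a_ij p_j, with a_ij the direction cosines of the new axes.
theta = np.pi / 6                        # rotate axes by 30° about x_3
a = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

p = np.array([1.0, 2.0, 3.0])            # illustrative vector components
p_new = np.einsum('ij,j->i', a, p)       # implied summation over j

# A rotation of axes cannot change the length of the vector:
print(np.linalg.norm(p), np.linalg.norm(p_new))
```

The components change, but the vector itself (here witnessed by its length) does not: only the description in the new frame differs.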

### Tensors of rank 2

We next consider an example of a tensor property of rank 2. We discuss the response of an insulating or dielectric material to a small electric field (E_i). The applied field (p.293) results in an electric displacement (D_j). There is no reason to presume that the vector (D_j) should be parallel to the vector (E_i). In general,

(A39)
$$D_1 = \varepsilon_{11} E_1 + \varepsilon_{12} E_2 + \varepsilon_{13} E_3$$
Here the proportionality constants are measures of the dielectric permittivity response of the material. Similar equations can be written for D_2 and D_3. The three equations, representing the overall dielectric response of the material, can be written compactly as follows:
(A40)
$$D_i = \varepsilon_{ij}\, E_j$$
The proportionality constants in this equation are components of the dielectric permittivity tensor. Since i = 1, 2, 3 and j = 1, 2, 3, there are nine such components in all, compared to just three for a vector, or tensor of rank 1.

What is the rank of this permittivity tensor? To answer this question, we have to see how the components of this tensor behave under a coordinate transformation. We can write equations similar to eqn A40 for the new frame of reference:

(A41)
$$D_i' = \varepsilon_{ij}'\, E_j'$$
Since (E_i) and (D_i) are vectors,
(A42)
$$E_i' = a_{ij}\, E_j$$
(A43)
$$D_i' = a_{ij}\, D_j$$
Substituting from eqn A40 into eqn A43,
(A44)
$$D_i' = a_{ij}\, \varepsilon_{jk}\, E_k$$
Equation A42 can be inverted to get
(A45)
$$E_k = a_{lk}\, E_l'$$
Substituting this into eqn A44 we get
(A46)
$$D_i' = a_{ij}\, a_{lk}\, \varepsilon_{jk}\, E_l'$$
Comparing eqns A46 and A41,
(A47)
$$\varepsilon_{il}' = a_{ij}\, a_{lk}\, \varepsilon_{jk}$$
This result tells us that, when a coordinate transformation like rotation, reflection or inversion is carried out, the permittivity tensor transforms as a product of two position vectors or two coordinates. It is therefore called a tensor of rank 2, or a second rank tensor.

(p.294) Any physical quantity which transforms according to eqn A47 is a tensor of rank 2. Its components are represented by two indices, unlike only one index needed for specifying the components (v_i) of a vector, or tensor of rank 1.

This can be generalized. A tensor of rank n, by definition, transforms as a product of n vectors or n coordinates, and therefore requires n indices for specifying its components.
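The rank-2 law is easy to verify numerically: transform E and D as vectors, transform ε by the rank-2 law of eqn A47, and check that the transformed tensor still connects the transformed vectors (all component values are random illustrative numbers):

```python
import numpy as np

# Rank-2 transformation law: eps'_il = a_ij a_lk eps_jk (eqn A47).
rng = np.random.default_rng(0)
theta = 0.7
a = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

eps = rng.normal(size=(3, 3))        # illustrative permittivity components
E = rng.normal(size=3)

D = np.einsum('ij,j->i', eps, E)     # D_i = eps_ij E_j  (eqn A40)
E_new = a @ E                        # vectors transform with a (eqn A42)
D_new = a @ D                        # (eqn A43)

eps_new = np.einsum('ij,lk,jk->il', a, a, eps)   # rank-2 law (eqn A47)

# Consistency: the transformed tensor must reproduce D' from E' (eqn A41).
print(np.allclose(D_new, eps_new @ E_new))       # True
```

The `einsum` subscript string 'ij,lk,jk->il' is a literal transcription of the index pattern in eqn A47, i.e. a product of two direction-cosine factors.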

Another example of a tensor of rank 2 is the magnetic permeability tensor (μ_ij):

(A48)
$$B_i = \mu_{ij}\, H_j$$
The dielectric permittivity and magnetic permeability tensors are examples of matter tensors.

An example of a second-rank field tensor is the stress tensor. Stress is defined as force (f_i) per unit area (A_j):

(A49)
$$\sigma_{ij} = f_i / A_j$$
Being a field tensor, the stress tensor is independent of the crystal on which the stress is applied. By contrast, the strain tensor (e_ij), which is also a second-rank tensor, is a matter tensor, and is influenced by the symmetry of the crystal (see below):
(A50)
$$e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$$
where (u_i) denotes the displacement field.

### Tensors of rank 3

The piezoelectric tensor (d_ijk) is the most familiar example of a tensor property of rank 3. It transforms as a product of three vectors:

(A51)
$$d_{ijk}' = a_{il}\, a_{jm}\, a_{kn}\, d_{lmn}$$
In a piezoelectric material, there is a coupling of electric polarization and mechanical strain. In the so-called direct piezoelectric effect (which can occur in a crystal belonging to any of the 20 piezoelectric crystal classes), application of stress produces electric polarization:
(A52)
$$P_i = d_{ijk}\, \sigma_{jk}$$
The inverse piezoelectric effect pertains to the development of strain when an electric field is applied to the piezoelectric crystal:
(A53)
$$e_{jk} = d_{ijk}\, E_i$$

In a smart material based on the piezoelectric effect, both the direct and the inverse piezoelectric effect can be involved: The direct effect is used for sensing, and this is followed by feedback through the inverse effect, which provides actuation through the strain developed.

(p.295) As can be seen from eqns A49 and A50, the stress tensor and the strain tensor are symmetric tensors: σ_ij = σ_ji, and e_ij = e_ji. This symmetry is carried over to the piezoelectric tensor (eqn A52): d_ijk = d_ikj. Because of this ‘intrinsic’ symmetry, the piezoelectric tensor has only 18 independent components, instead of 27. It is therefore customary to denote it as (d_iμ), with i = 1, 2, 3 and μ = 1, 2, …, 6. For example, for a stress applied along the x_3-axis, the polarization component P_3 along the same direction is determined by the d_33 coefficient: P_3 = d_33 σ_3.
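The contraction from the symmetric index pair (j, k) to μ can be enumerated to confirm the count of 18 independent components; the pair-to-μ mapping below is the conventional Voigt scheme:

```python
# Voigt contraction for the piezoelectric tensor: because d_ijk = d_ikj,
# the symmetric pair (j, k) is replaced by a single index mu = 1..6.
voigt = {(1, 1): 1, (2, 2): 2, (3, 3): 3,
         (2, 3): 4, (3, 2): 4,
         (1, 3): 5, (3, 1): 5,
         (1, 2): 6, (2, 1): 6}

# Independent components of d_ijk: 3 values of i times 6 values of mu.
components = {(i, voigt[(j, k)])
              for i in (1, 2, 3)
              for j in (1, 2, 3)
              for k in (1, 2, 3)}
print(len(components))   # 18, down from 3*3*3 = 27
```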

The d-tensor discussed so far is the piezoelectric charge tensor: it determines the charge separation produced by the applied stress. It is often relevant to deal with what is called the piezoelectric voltage tensor (g). It determines the open-circuit voltage generated by the applied stress. The two tensors are related via the permittivity tensor:

(A54)
$$d_{i\mu} = \varepsilon_{ik}\, g_{k\mu}$$

Hydrophones, used by the Navy in large numbers, have to sense very weak hydrostatic pressure (p). For such a situation, σ_11 = σ_22 = σ_33 = −p. Equation A52 then yields

(A55)
$$P_3 = -(d_{31} + d_{32} + d_{33})\, p$$
Equating d_31 with d_32, and introducing the symbol d_h (= d_33 + 2d_31) to represent the hydrostatic piezoelectric charge coefficient, we can write eqn A55 as
(A56)
$$P_3 = -d_h\, p$$
The hydrophone usually feeds into a very high-impedance load. Therefore, the voltage generated by the hydrophone can be written as
(A57)
$Display mathematics$
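A quick numerical illustration of the hydrostatic coefficient: with d_h = d_33 + 2d_31 (eqn A56, after equating d_31 and d_32), a large d_33 can be largely cancelled by the two d_31 contributions. The PZT-like values below are purely illustrative, not data from the text:

```python
# Hydrostatic piezoelectric charge coefficient: d_h = d_33 + 2*d_31.
# Illustrative PZT-like values in pC/N (not from the book):
d33 = 290.0
d31 = -120.0

d_h = d33 + 2 * d31
print(d_h)   # 50.0 pC/N: the large d_33 is mostly cancelled by 2*d_31

# Polarization under a hydrostatic pressure p (eqn A56): P_3 = -d_h * p
```

This near-cancellation is why a large d_33 alone does not guarantee a sensitive hydrophone, and why the hydrostatic coefficient is the relevant figure of merit for this application.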

### Tensors of rank 4

The piezoelectric effect described above is a linear effect: Reversal of the electric field reverses the mechanical deformation (cf. eqn A53). There is also a quadratic effect, determined by the fourth-rank electrostriction tensor (M_ijkl):

(A58)
$$e_{ij} = M_{ijkl}\, E_k\, E_l$$
In this case, reversal of the sign of the electric field has no effect on the (electrostrictive) strain.

Another tensor of rank 4 is the magnetostriction tensor:

(A59)
$$e_{ij} = Q_{ijkl}\, H_k\, H_l$$
Lastly, we consider the more familiar example of a fourth-rank tensor, namely the elastic compliance tensor, which determines the strain produced by the application (p.296) of a small stress (Hooke's law):
(A60)
$$e_{ij} = s_{ijkl}\, \sigma_{kl}$$
The inverse of this tensor is the elastic stiffness tensor:
(A61)
$$\sigma_{ij} = c_{ijkl}\, e_{kl}$$

### Effect of crystal symmetry on tensor properties

Matter tensors have to conform to the symmetry of the matter they describe. For crystals, the relevant symmetry is the point-group symmetry. It is not necessary to consider the full space-group symmetry for this purpose because the directional symmetry of macroscopic tensor properties is not influenced by crystallographic translational operations.

A macroscopic tensor property of a crystal cannot have less directional symmetry than the point-group symmetry of the crystal (Neumann theorem). For example, if the crystal has inversion symmetry, i.e. if its point group is one of the 11 Laue groups, then only those tensor properties can be nonzero which have inversion symmetry among the elements of their symmetry group. Let us take a look at how the properties of various ranks described above behave under an inversion operation.

We see from eqn A33 that, since both m and V remain the same under an inversion operation, so does the density ρ. This is true for all scalar or zero-rank tensor properties.

Next, we refer to eqn A34. Under an inversion operation, ΔT remains the same, but P_i changes to −P_i for all i. Therefore, (p_i) changes to (−p_i). Since the inversion operation is a symmetry operation, we must have −p_i = p_i for each i. This is possible only if all the components of the pyroelectric tensor, a tensor of rank 1, are identically equal to zero.

Equation A40 describes a tensor of rank 2. Under inversion, since both (E_i) and (D_i) change signs, the permittivity tensor, a tensor of rank 2, remains invariant.

Lastly, we consider a tensor property of rank 3 (eqn A52). Like permittivity, the stress tensor in eqn A52 is also a tensor of rank 2. Therefore it remains invariant under an inversion operation. By contrast, the polarization in the same equation, being a tensor of rank 1 (like the pyroelectric tensor), changes sign under inversion. These two facts can be consistent only if the piezoelectric tensor in eqn A52 is equal to its own negative, which means that all its components must be identically equal to zero.

We can now generalize. All tensor properties of odd rank are absent in crystals having inversion symmetry, and all even-rank tensor properties are permitted by this symmetry to exist in such crystals.
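This parity argument can be checked mechanically: under inversion a_ij = −δ_ij, so a tensor of rank n, transforming as a product of n vectors, picks up a factor (−1)ⁿ. A numpy sketch with random illustrative components:

```python
import numpy as np

# Effect of inversion (a_ij = -delta_ij) on tensors of rank 1, 2 and 3:
# each rank-n tensor picks up a factor (-1)**n under the transformation.
inv = -np.eye(3)
rng = np.random.default_rng(1)

p = rng.normal(size=3)           # rank 1 (e.g. pyroelectric tensor)
eps = rng.normal(size=(3, 3))    # rank 2 (e.g. permittivity)
d = rng.normal(size=(3, 3, 3))   # rank 3 (e.g. piezoelectric tensor)

p_t = np.einsum('ij,j->i', inv, p)
eps_t = np.einsum('ij,lk,jk->il', inv, inv, eps)
d_t = np.einsum('il,jm,kn,lmn->ijk', inv, inv, inv, d)

print(np.allclose(p_t, -p), np.allclose(eps_t, eps), np.allclose(d_t, -d))
# In a centrosymmetric crystal, invariance under inversion forces the
# odd-rank tensors to equal their own negatives, i.e. to vanish identically.
```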

Likewise, we can investigate the effect of other directional or point-group symmetry elements present in a crystal on the tensor properties.

Bibliography references:

(p.297) Abarbanel, H. D. I. (2006). ‘Physics of chaotic systems’. In G. Fraser (ed.), The New Physics for the Twenty-First Century. Cambridge, U. K.: Cambridge University Press, p. 311.

Angell, C. A. (31 March 1995). ‘Formation of glasses from liquids and biopolymers’. Science, 267: 1924.

Aziz-Alaoui, M. A. (2005). ‘Synchronization of chaos’, in J.-P. Francoise, G. L. Naber and T. S. Tsun (eds.), Encyclopaedia of Mathematical Physics, Vol. 5, p. 213. Amsterdam: Elsevier.

Bauer, S., R. Gerhard-Multhaupt and G. M. Sessler (February 2004). ‘Ferroelectrets: Soft electroactive foams for transducers’. Physics Today, p. 37.

Beetz, C. P. (1992). ‘Composite materials’, in G. L. Trigg (ed.), Encyclopedia of Applied Physics, Vol. 4. New York: VCH Publishers.

Cammarata, R. C. (2004). ‘Nanocomposites’. In Di Ventra, M., S. Evoy and J. R. Heflin (eds.), Introduction to Nanoscale Science and Technology. Dordrecht: Kluwer.

Chung, D. D. L. (2001). Applied Materials Science. London: CRC Press.

Clearwater, S. H. (1991). ‘Artificial intelligence’. In G. L. Trigg (ed.), Encyclopedia of Applied Physics, Vol. 2, p. 1. New York: VCH Publishers.

Crutchfield, J. P., J. D. Farmer and N. H. Packard (1986). ‘Chaos’. Scientific American, 254(12): 46.

Debenedetti, P. G. and F. H. Stillinger (8 March 2001). ‘Supercooled liquids and the glass transition’. Nature, 410: 259.

Ditto, W. L. and L. M. Pecora (Aug. 1993). ‘Mastering chaos’. Scientific American, 269: 62.

Ditto, W. L., S. N. Rauseo and M. L. Spano (1990). ‘Experimental control of chaos’. Phys. Rev. Lett. 65: 3211.

Donth, E. (2001). The Glass Transition. Berlin: Springer.

Fulcher, G. S. (1925). ‘Analysis of recent measurements of the viscosity of glasses’. J. Amer. Ceram. Soc. 8: 339.

Garfinkel, A., M. L. Spano, W. L. Ditto and J. N. Weiss (28 Aug. 1992). ‘Controlling cardiac chaos’. Science, 257: 1230.

Gilmore, R. (2005). ‘Chaos and attractors’, in J.-P. Francoise, G. L. Naber and T. S. Tsun (eds.), Encyclopaedia of Mathematical Physics, Vol. 1, p. 477. Amsterdam: Elsevier.

Gleick, J. (1987). Chaos: Making a New Science. New York: Viking Penguin.

Grebogi, C., E. Ott and J. A. Yorke (1987). ‘Chaos, strange attractors, and fractal basin boundaries in nonlinear dynamics’. Science 238: 632.

Hale, D. K. (1976). ‘The physical properties of composite materials’. J. Mater. Sci., 11: 2105.

Hansen, J. S. (1995). ‘Introduction to advanced composite materials’, in Udd, E. (ed.), Fibre Optic Smart Structures. New York: Wiley.

Hawkins, J. and S. Blakeslee (2004). On Intelligence. New York: Times Books (Henry Holt).

Kauffman, S. A. (1993). The Origins of Order. Oxford: Oxford University Press.

Kaye, B. (1993). Chaos and Complexity: Discovering the Surprising Patterns of Science and Technology. Weinheim: VCH.

Langreth, R. (1992). ‘Engineering dogma gives way to chaos’. Science, 252: 776.

Lederman, L. M. and C. T. Hill (2005). Symmetry and the Beautiful Universe. New York: Prometheus Books.

Litvin, D. B. and V. K. Wadhawan (2001). ‘Latent symmetry and its group-theoretical determination’. Acta Cryst. A57: 435.

Litvin, D. B. and V. K. Wadhawan (2002). ‘Latent symmetry’. Acta Cryst. A58: 75.

Maguire, J. F., M. Benedict, L. V. Woodcock and S. R. LeClair (2002). ‘Artificial intelligence in materials science: Application to molecular and particulate simulations’. In Takeuchi, I., J. M. Newsam, L. T. Willie, H. Koinuma and E. J. Amis (eds.), Combinatorial and Artificial Intelligence Methods in Materials Science. MRS Symposium Proceedings, Vol. 700. Warrendale, Pennsylvania: Materials Research Society.

McCulloch, W. and W. Pitts (1943). ‘A logical calculus of the ideas immanent in nervous activity’. Bull. Math. Biophys. 5: 115.

(p.298)

Miracle, D. B. and L. Donaldson (2001). ‘Introduction to composites’, in D. B. Miracle and L. Donaldson (eds.), Composites, Vol. 21 of ASM Handbook. Materials Park, Ohio 44073-0002, USA: The Materials Information Society.

Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence. Cambridge: Harvard University Press.

Mullins, J. (23 April 2005). ‘Whatever happened to machines that think?’. New Scientist, p. 32.

Nalwa, H. S. (ed.) (1995). Ferroelectric Polymers: Chemistry, Physics, and Applications. New York: Marcel Dekker.

Newell, A. (1983). ‘Intellectual issues in the history of artificial intelligence’. In Machlup, F. and U. Mansfield (eds.), The Study of Information: Interdisciplinary Messages. New York: Wiley.

Newnham, R. E. (1986). ‘Composite electroceramics’. Ann. Rev. Mater. Sci. 16: 47.

Newnham, R. E., D. P. Skinner and L. E. Cross (1978). ‘Connectivity and piezoelectric–pyroelectric composites’. Mat. Res. Bull., 13: 525.

Newnham, R. E. and S. E. Trolier-McKinstry (1991a). ‘Crystals and composites’. J. Appl. Cryst., 23: 447.

Newnham, R. E. and S. E. Trolier-McKinstry (1990b). ‘Structure–property relationships in ferroic nanocomposites’. Ceramic Transactions, 8: 235.

Newnham, R. E., J. F. Fernandez, K. A. Murkowski, J. T. Fielding, A. Dogan and J. Wallis (1995). ‘Composite piezoelectric sensors and actuators’. In George, E. P., S. Takahashi, S. Trolier-McKinstry, K. Uchino and M. Wun-Fogle (eds.), Materials for Smart Systems. MRS Symposium Proceedings, Vol. 360. Pittsburgh, Pennsylvania: Materials Research Society.

Nicolis, G. and I. Prigogine (1989). Exploring Complexity: An Introduction. New York: W. H. Freeman.

Ott, E., C. Grebogi and J. A. Yorke (1990). ‘Controlling chaos’. Phys. Rev. Lett. 64: 1196.

Ott, E. and M. Spano (1995). ‘Controlling chaos’. Physics Today, 48(5): 34.

Pecora, L. and T. Carroll (1990). ‘Synchronisation in chaotic systems’. Phys. Rev. Lett., 64: 821.

Pecora, L. and T. Carroll (1991). ‘Driving systems with chaotic signals’. Phys. Rev. A, 44: 2374.

Pilgrim, S. M., R. E. Newnham and L. L. Rohlfing (1987). ‘An extension of the composite nomenclature scheme’. Mat. Res. Bull., 22: 677.

Pillai, P. K. C. (1995). ‘Polymer electrets’. In Nalwa, H. S. (ed.) (1995), Ferroelectric Polymers: Chemistry, Physics, and Applications. New York: Marcel Dekker.

Prigogine, I. (1998). The End of Certainty: Time, Chaos, and the New Laws of Nature. New York: Free Press.

Stewart, I. N. (1982). ‘Catastrophe theory in physics’. Rep. Prog. Phys. 45: 185.

Stix, G. (March 2006). ‘The elusive goal of machine translation’. Scientific American, 294: 70.

Tagantsev, A. K. (1994). ‘Vogel–Fulcher relationship for the dielectric permittivity of relaxor ferroelectrics’. Phys. Rev. Lett. 72: 1100.

Takeuchi, I., J. M. Newsam, L. T. Willie, H. Koinuma and E. J. Amis (eds.) (2002). Combinatorial and Artificial Intelligence Methods in Materials Science. MRS Symposium Proceedings, Vol. 700. Warrendale, Pennsylvania: Materials Research Society.

Tirnakli, U., F. Buyukkilic and D. Demirhan (1999). ‘A new formalism for nonextensive physical systems: Tsallis thermostatistics’, Tr. J. Phys., 23: 21.

Tsallis, C. (1988). ‘Possible generalizations of Boltzmann–Gibbs statistics’. J. Stat. Phys. 52: 479.

Tsallis, C. (1995a). ‘Some comments on Boltzmann–Gibbs statistical mechanics’. Chaos, Solitons and Fractals, 6: 539.

Tsallis, C. (1995b). ‘Non-extensive thermostatistics: brief review and comments’. Physica A, 221: 277.

Tsallis, C. (July 1997). ‘Levy distributions’. Physics World: p. 42.

Van Suchtelen, J. (1972). ‘Product properties: A new application of composite materials’. Philips Research Reports, 27: 28.

Vogel, H. (1921). ‘Das Temperaturabhängigkeitsgesetz der Viskosität von Flüssigkeiten’. Phys. Zeit. 22: 645.

Wadhawan, V. K. (2000). Introduction to Ferroic Materials. Amsterdam: Gordon and Breach.

Wiener, N. (1965). Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd edn. Cambridge, MA: MIT Press.