NONLINEAR
DYNAMICS AND
CHAOS
With Applications to
Physics, Biology, Chemistry,and Engineering
STEVEN H. STROGATZ
PERSEUS BOOKS
Reading, Massachusetts

Many of the designations used by manufacturers and sellers to distin-
guish their products are claimed as trademarks. Where those designa-
tions appear in this book and Perseus Books was aware of a trademark
claim, the designations have been printed in initial capital letters.
Library of Congress Cataloging-in-Publication Data
Strogatz, Steven H. (Steven Henry)
Nonlinear dynamics and chaos : with applications to physics, biology, chemistry, and engineering / Steven H. Strogatz.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-54344-3
1. Chaotic behavior in systems. 2. Dynamics. 3. Nonlinear
theories. I. Title.
Q172.5.C45S767 1994
501'.1'85-dc20 93-6166
CIP
Copyright © 1994 by Perseus Books Publishing, L.L.C.
Perseus Books is a member of the Perseus Books Group
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without
the prior written permission of the publisher. Printed in the United States
of America. Published simultaneously in Canada.
Cover design by Lynne Reed
Text design by Joyce C. Weston
Set in 10-point Times by Compset, Inc.
Cover art is a computer-generated picture of a scroll ring, from
Strogatz (1985) with permission. Scroll rings are self-sustaining
sources of waves in diverse excitable media, including heart muscle, neural tissue, and excitable chemical reactions (Winfree and Strogatz
1984, Winfree 1987b).
Perseus Books are available for special discounts for bulk purchases in the
U.S. by corporations, institutions, and other organizations. For more in-
formation, please contact the Special Markets Department at Harper-
Collins Publishers, 10 East 53rd Street, New York, NY 10022, or call
1-212-207-7528.

CONTENTS
Preface ix
1. Overview 1
1.0 Chaos, Fractals, and Dynamics 1
1.1 Capsule History of Dynamics 2
1.2 The Importance of Being Nonlinear 4
1.3 A Dynamical View of the World 9
Part I. One-Dimensional Flows
2. Flows on the Line 15
2.0 Introduction 15
2.1 A Geometric Way of Thinking 16
2.2 Fixed Points and Stability 18
2.3 Population Growth 21
2.4 Linear Stability Analysis 24
2.5 Existence and Uniqueness 26
2.6 Impossibility of Oscillations 28
2.7 Potentials 30
2.8 Solving Equations on the Computer 32
Exercises 36
3. Bifurcations 44
3.0 Introduction 44
3.1 Saddle-Node Bifurcation 45
3.2 Transcritical Bifurcation 50
3.3 Laser Threshold 53
3.4 Pitchfork Bifurcation 55
3.5 Overdamped Bead on a Rotating Hoop 61
3.6 Imperfect Bifurcations and Catastrophes 69
3.7 Insect Outbreak 73
Exercises 79
4. Flows on the Circle 93
4.0 Introduction 93
4.1 Examples and Definitions 93
4.2 Uniform Oscillator 95
4.3 Nonuniform Oscillator 96
4.4 Overdamped Pendulum 101
4.5 Fireflies 103
4.6 Superconducting Josephson Junctions 106
Exercises 113
Part II. Two-Dimensional Flows
5. Linear Systems 123
5.0 Introduction 123
5.1 Definitions and Examples 123
5.2 Classification of Linear Systems 129
5.3 Love Affairs 138
Exercises 140
6. Phase Plane 145
6.0 Introduction 145
6.1 Phase Portraits 145
6.2 Existence, Uniqueness, and Topological Consequences 148
6.3 Fixed Points and Linearization 150
6.4 Rabbits versus Sheep 155
6.5 Conservative Systems 159
6.6 Reversible Systems 163
6.7 Pendulum 168
6.8 Index Theory 174
Exercises 181
7. Limit Cycles 196
7.0 Introduction 196
7.1 Examples 197
7.2 Ruling Out Closed Orbits 199
7.3 Poincaré-Bendixson Theorem 203
7.4 Liénard Systems 210
7.5 Relaxation Oscillators 211
7.6 Weakly Nonlinear Oscillators 215
Exercises 227
8. Bifurcations Revisited 241
8.0 Introduction 241
8.1 Saddle-Node, Transcritical, and Pitchfork Bifurcations 241
8.2 Hopf Bifurcations 248
8.3 Oscillating Chemical Reactions 254
8.4 GIobal Bifurcations of Cycles 260
8.5 Hysteresis in the Driven Pendulum and Josephson Junction 265
8.6 Coupled Oscillators and Quasiperiodicity 273
8.7 Poincaré Maps 278
Exercises 284
Part III. Chaos
9. Lorenz Equations 301
9.0 Introduction 301
9.1 A Chaotic Waterwheel 302
9.2 Simple Properties of the Lorenz Equations 311
9.3 Chaos on a Strange Attractor 317
9.4 Lorenz Map 326
9.5 Exploring Parameter Space 330
9.6 Using Chaos to Send Secret Messages 335
Exercises 341
10. One-Dimensional Maps 348
10.0 Introduction 348
10.1 Fixed Points and Cobwebs 349
10.2 Logistic Map: Numerics 353
10.3 Logistic Map: Analysis 357
10.4 Periodic Windows 361
10.5 Liapunov Exponent 366
10.6 Universality and Experiments 369
10.7 Renormalization 379
Exercises 388
11. Fractals 398
11.0 Introduction 398
11.1 Countable and Uncountable Sets 399
11.2 Cantor Set 401
11.3 Dimension of Self-similar Fractals 404
11.4 Box Dimension 409
11.5 Pointwise and Correlation Dimensions 411
Exercises 416
12. Strange Attractors 423
12.0 Introduction 423
12.1 The Simplest Examples 423
12.2 Hénon Map 429
12.3 Rössler System 434
12.4 Chemical Chaos and Attractor Reconstruction 437
12.5 Forced Double-Well Oscillator 441
Exercises 448
Answers to Selected Exercises 455
References 465
Author Index 475
Subject Index 478
PREFACE
This textbook is aimed at newcomers to nonlinear dynamics and chaos, especially
students taking a first course in the subject. It is based on a one-semester course
I've taught for the past several years at MIT and Cornell. My goal is to explain the
mathematics as clearly as possible, and to show how it can be used to understand
some of the wonders of the nonlinear world.
The mathematical treatment is friendly and informal, but still careful. Analyti-
cal methods, concrete examples, and geometric intuition are stressed. The theory is
developed systematically, starting with first-order differential equations and their
bifurcations, followed by phase plane analysis, limit cycles and their bifurcations, and culminating with the Lorenz equations, chaos, iterated maps, period doubling, renormalization, fractals, and strange attractors.
A unique feature of the book is its emphasis on applications. These include me-
chanical vibrations, lasers, biological rhythms, superconducting circuits, insect
outbreaks, chemical oscillators, genetic control systems, chaotic waterwheels, and
even a technique for using chaos to send secret messages. In each case, the sci-
entific background is explained at an elementary level and closely integrated with
the mathematical theory.
Prerequisites
The essential prerequisite is single-variable calculus, including curve-sketch-
ing, Taylor series, and separable differential equations. In a few places, multivari-
able calculus (partial derivatives, Jacobian matrix, divergence theorem) and linear
algebra (eigenvalues and eigenvectors) are used. Fourier analysis is not assumed, and is developed where needed. Introductory physics is used throughout. Other
scientific prerequisites would depend on the applications considered, but in all
cases, a first course should be adequate preparation.
Possible Courses
The book could be used for several types of courses:
A broad introduction to nonlinear dynamics, for students with no prior expo-
sure to the subject. (This is the kind of course I have taught.) Here one goes
straight through the whole book, covering the core material at the beginning
of each chapter, selecting a few applications to discuss in depth and giving
light treatment to the more advanced theoretical topics or skipping them alto-
gether. A reasonable schedule is seven weeks on Chapters 1-8, and five or six
weeks on Chapters 9-12. Make sure there's enough time left in the semester
to get to chaos, maps, and fractals.
A traditional course on nonlinear ordinary differential equations, but with
more emphasis on applications and less on perturbation theory than usual.
Such a course would focus on Chapters 1-8.
A modern course on bifurcations, chaos, fractals, and their applications, for
students who have already been exposed to phase plane analysis. Topics
would be selected mainly from Chapters 3, 4, and 8-12.
For any of these courses, the students should be assigned homework from the
exercises at the end of each chapter. They could also do computer projects; build
chaotic circuits and mechanical systems; or look up some of the references to get a
taste of current research. This can be an exciting course to teach, as well as to take.
I hope you enjoy it.
Conventions
Equations are numbered consecutively within each section. For instance, when
we're working in Section 5.4, the third equation is called (3) or Equation (3), but
elsewhere it is called (5.4.3) or Equation (5.4.3). Figures, examples, and exercises
are always called by their full names, e.g., Exercise 1.2.3. Examples and proofs
end with a loud thump, denoted by the symbol ■.
Acknowledgments
Thanks to the National Science Foundation for financial support. For help with
the book, thanks to Diana Dabby, Partha Saha, and Shinya Watanabe (students);
Jihad Touma and Rodney Worthing (teaching assistants); Andy Christian, Jim
Crutchfield, Kevin Cuomo, Frank DeSimone, Roger Eckhardt, Dana Hobson, and
Thanos Siapas (for providing figures); Bob Devaney, Irv Epstein, Danny Kaplan, Willem Malkus, Charlie Marcus, Paul Matthews, Arthur Mattuck, Rennie Mirollo, Peter Renz, Dan Rockmore, Gil Strang, Howard Stone, John Tyson, Kurt Wiesenfeld, Art Winfree, and Mary Lou Zeeman (friends and colleagues who gave advice);
and to my editor Jack Repcheck, Lynne Reed, Production Supervisor, and all the
other helpful people at Perseus Books. Finally, thanks to my family and Elisabeth
for their love and encouragement.
Steven H. Strogatz
Cambridge, Massachusetts
OVERVIEW
1.0 Chaos, Fractals, and Dynamics
There is a tremendous fascination today with chaos and fractals. James Gleick's
book Chaos (Gleick 1987) was a bestseller for months-an amazing accomplish-
ment for a book about mathematics and science. Picture books like The Beauty of
Fractals by Peitgen and Richter (1986) can be found on coffee tables in living
rooms everywhere. It seems that even nonmathematical people are captivated by
the infinite patterns found in fractals (Figure 1.0.1). Perhaps most important of all, chaos and fractals represent hands-on mathematics that is alive and changing. You
can turn on a home computer and create stunning mathematical images that no one
has ever seen before.
The aesthetic appeal of chaos
and fractals may explain why so
many people have become in-
trigued by these ideas. But maybe
you feel the urge to go deeper-to
learn the mathematics behind the
pictures, and to see how the ideas
can be applied to problems in sci-
ence and engineering. If so, this is
a textbook for you.
The style of the book is infor-
mal (as you can see), with an em-
phasis on concrete examples and
geometric thinking, rather than
proofs and abstract arguments.

Figure 1.0.1

It is also an extremely applied book-virtually every idea is illustrated by some application to science or engineering. In many cases, the applications are drawn from the recent research literature. Of course, one problem with such an applied approach is that not everyone is an expert in physics and biology and fluid mechanics . . . so the science as well as the mathematics will need to be explained from scratch. But that should be fun, and it can be instructive to see the connections among different fields.
Before we start, we should agree about something: chaos and fractals are part of
an even grander subject known as dynamics. This is the subject that deals with
change, with systems that evolve in time. Whether the system in question settles
down to equilibrium, keeps repeating in cycles, or does something more compli-
cated, it is dynamics that we use to analyze the behavior. You have probably been
exposed to dynamical ideas in various places-in courses in differential equations, classical mechanics, chemical kinetics, population biology, and so on. Viewed
from the perspective of dynamics, all of these subjects can be placed in a common
framework, as we discuss at the end of this chapter.
Our study of dynamics begins in earnest in Chapter 2. But before digging in, we
present two overviews of the subject, one historical and one logical. Our treatment
is intuitive; careful definitions will come later. This chapter concludes with a dy-
namical view of the world, a framework that will guide our studies for the rest of
the book.
1.1 Capsule History of Dynamics
Although dynamics is an interdisciplinary subject today, it was originally a branch
of physics. The subject began in the mid-1600s, when Newton invented differen-
tial equations, discovered his laws of motion and universal gravitation, and com-
bined them to explain Kepler's laws of planetary motion. Specifically, Newton
solved the two-body problem-the problem of calculating the motion of the earth
around the sun, given the inverse-square law of gravitational attraction between
them. Subsequent generations of mathematicians and physicists tried to extend
Newton's analytical methods to the three-body problem (e.g., sun, earth, and
moon) but curiously this problem turned out to be much more difficult to solve.
After decades of effort, it was eventually realized that the three-body problem was
essentially impossible to solve, in the sense of obtaining explicit formulas for the
motions of the three bodies. At this point the situation seemed hopeless.
The breakthrough came with the work of Poincaré in the late 1800s. He intro-
duced a new point of view that emphasized qualitative rather than quantitative
questions. For example, instead of asking for the exact positions of the planets at
all times, he asked "Is the solar system stable forever, or will some planets eventually fly off to infinity?" Poincaré developed a powerful geometric approach to an-
alyzing such questions. That approach has flowered into the modern subject of
dynamics, with applications reaching far beyond celestial mechanics. Poincaré
was also the first person to glimpse the possibility of chaos, in which a determinis-
tic system exhibits aperiodic behavior that depends sensitively on the initial condi-
tions, thereby rendering long-term prediction impossible.
But chaos remained in the background in the first half of this century; instead
dynamics was largely concerned with nonlinear oscillators and their applications
in physics and engineering. Nonlinear oscillators played a vital role in the develop-
ment of such technologies as radio, radar, phase-locked loops, and lasers. On the
theoretical side, nonlinear oscillators also stimulated the invention of new mathe-
matical techniques-pioneers in this area include van der Pol, Andronov, Little-
wood, Cartwright, Levinson, and Smale. Meanwhile, in a separate development, Poincaré's geometric methods were being extended to yield a much deeper under-
standing of classical mechanics, thanks to the work of Birkhoff and later Kol-
mogorov, Arnol'd, and Moser.
The invention of the high-speed computer in the 1950s was a watershed in
the history of dynamics. The computer allowed one to experiment with equa-
tions in a way that was impossible before, and thereby to develop some intuition
about nonlinear systems. Such experiments led to Lorenz's discovery in 1963 of
chaotic motion on a strange attractor. He studied a simplified model of convec-
tion rolls in the atmosphere to gain insight into the notorious unpredictability of
the weather. Lorenz found that the solutions to his equations never settled down
to equilibrium or to a periodic state-instead they continued to oscillate in an ir-
regular, aperiodic fashion. Moreover, if he started his simulations from two
slightly different initial conditions, the resulting behaviors would soon become
totally different. The implication was that the system was inherently unpre-
dictable-tiny errors in measuring the current state of the atmosphere (or any
other chaotic system) would be amplified rapidly, eventually leading to embar-
rassing forecasts. But Lorenz also showed that there was structure in the
chaos-when plotted in three dimensions, the solutions to his equations fell
onto a butterfly-shaped set of points (Figure 1.1.1). He argued that this set had
to be "an infinite complex of surfaces"-today we would regard it as an exam-
ple of a fractal.
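Lorenz's numerical experiment is easy to rerun today. The sketch below is one way to do it in Python, using the Lorenz equations with the now-standard parameter values sigma = 10, r = 28, b = 8/3 (the step size, integration time, and initial conditions here are arbitrary illustrative choices): two trajectories starting one part in 10^8 apart are integrated side by side, and the distance between them is measured at the end.

```python
# Sensitive dependence on initial conditions in the Lorenz equations
#   x' = sigma*(y - x),  y' = r*x - y - x*z,  z' = x*y - b*z
# with the now-standard parameter values sigma = 10, r = 28, b = 8/3.
# Two trajectories starting 1e-8 apart are integrated side by side.

def lorenz_step(state, dt, sigma=10.0, r=28.0, b=8.0/3.0):
    """One fourth-order Runge-Kutta step for the Lorenz system."""
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), r * x - y - x * z, x * y - b * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b2 + 2 * c + d)
                 for s, a, b2, c, d in zip(state, k1, k2, k3, k4))

def separation(t_final=20.0, dt=0.01):
    """Distance between two trajectories that began 1e-8 apart."""
    p = (1.0, 1.0, 1.0)
    q = (1.0 + 1e-8, 1.0, 1.0)
    for _ in range(int(t_final / dt)):
        p = lorenz_step(p, dt)
        q = lorenz_step(q, dt)
    return sum((a - b2) ** 2 for a, b2 in zip(p, q)) ** 0.5

print(separation())   # many orders of magnitude larger than the initial 1e-8
```

The tiny initial discrepancy is amplified by a huge factor over twenty time units, which is exactly the sensitive dependence described above.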
Lorenz's work had little impact until the 1970s, the boom years for chaos. Here
are some of the main developments of that glorious decade. In 1971 Ruelle and Tak-
ens proposed a new theory for the onset of turbulence in fluids, based on abstract
considerations about strange attractors. A few years later, May found examples of
chaos in iterated mappings arising in population biology, and wrote an influential re-
view article that stressed the pedagogical importance of studying simple nonlinear
systems, to counterbalance the often misleading linear intuition fostered by tradi-
tional education. Next came the most surprising discovery of all, due to the physicist
Feigenbaum. He discovered that there are certain universal laws governing the tran-
sition from regular to chaotic behavior; roughly speaking, completely different sys-
tems can go chaotic in the same way. His work established a link between chaos and
Figure 1.1.1
phase transitions, and enticed a generation of physicists to the study of dynamics. Fi-
nally, experimentalists such as Gollub, Libchaber, Swinney, Linsay, Moon, and
Westervelt tested the new ideas about chaos in experiments on fluids, chemical reac-
tions, electronic circuits, mechanical oscillators, and semiconductors.
Although chaos stole the spotlight, there were two other major developments in
dynamics in the 1970s. Mandelbrot codified and popularized fractals, produced
magnificent computer graphics of them, and showed how they could be applied in
a variety of subjects. And in the emerging area of mathematical biology, Winfree
applied the geometric methods of dynamics to biological oscillations, especially
circadian (roughly 24-hour) rhythms and heart rhythms.
By the 1980s many people were working on dynamics, with contributions too
numerous to list. Table 1.1.1 summarizes this history.
1.2 The Importance of Being Nonlinear
Now we turn from history to the logical structure of dynamics. First we need to in-
troduce some terminology and make some distinctions.
Dynamics - A Capsule History

Newton                    Invention of calculus, explanation of planetary motion
                          Flowering of calculus and classical mechanics
                          Analytical studies of planetary motion
Poincaré                  Geometric approach, nightmares of chaos
                          Nonlinear oscillators in physics and engineering;
                          invention of radio, radar, laser
Birkhoff, Kolmogorov,     Complex behavior in Hamiltonian mechanics
Arnol'd, Moser
Lorenz                    Strange attractor in simple model of convection
Ruelle & Takens           Turbulence and chaos
May                       Chaos in logistic map
Feigenbaum                Universality and renormalization, connection between
                          chaos and phase transitions
                          Experimental studies of chaos
Winfree                   Nonlinear oscillators in biology
Mandelbrot                Fractals
                          Widespread interest in chaos, fractals, oscillators,
                          and their applications

Table 1.1.1
There are two main types of dynamical systems: differential equations and it-
erated maps (also known as difference equations). Differential equations describe
the evolution of systems in continuous time, whereas iterated maps arise in prob-
lems where time is discrete. Differential equations are used much more widely in
science and engineering, and we shall therefore concentrate on them. Later in the
book we will see that iterated maps can also be very useful, both for providing sim-
ple examples of chaos, and also as tools for analyzing periodic or chaotic solutions
of differential equations.
Now confining our attention to differential equations, the main distinction is be-
tween ordinary and partial differential equations. For instance, the equation for a
damped harmonic oscillator

    m d²x/dt² + b dx/dt + kx = 0                                    (1)

is an ordinary differential equation, because it involves only ordinary derivatives
dx/dt and d²x/dt². That is, there is only one independent variable, the time t. In
contrast, the heat equation

    ∂u/∂t = ∂²u/∂x²

is a partial differential equation-it has both time t and space x as independent
variables. Our concern in this book is with purely temporal behavior, and so we
deal with ordinary differential equations almost exclusively.
A very general framework for ordinary differential equations is provided by the
system

    ẋ₁ = f₁(x₁, ..., xₙ)
     ⋮                                                              (2)
    ẋₙ = fₙ(x₁, ..., xₙ).

Here the overdots denote differentiation with respect to t. Thus ẋᵢ ≡ dxᵢ/dt. The
variables x₁, ..., xₙ might represent concentrations of chemicals in a reactor, populations of different species in an ecosystem, or the positions and velocities of the planets
in the solar system. The functions f₁, ..., fₙ are determined by the problem at hand.
For example, the damped oscillator (1) can be rewritten in the form of (2), thanks to the following trick: we introduce new variables x₁ = x and x₂ = ẋ. Then ẋ₁ = x₂, from the definitions, and

    ẋ₂ = ẍ = −(b/m)ẋ − (k/m)x = −(k/m)x₁ − (b/m)x₂

from the definitions and the governing equation (1). Hence the equivalent system
(2) is

    ẋ₁ = x₂
    ẋ₂ = −(k/m)x₁ − (b/m)x₂.
This system is said to be linear, because all the xᵢ on the right-hand side appear
to the first power only. Otherwise the system would be nonlinear. Typical nonlinear terms are products, powers, and functions of the xᵢ, such as x₁x₂, x₁³, or
cos x₂.
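Once a second-order equation has been rewritten as a first-order system, a computer can march the state (x₁, x₂) forward step by step. Here is a minimal sketch in Python for the damped oscillator, using the simple Euler update; the values of m, b, k, the step size, and the initial condition are arbitrary illustrative choices (numerical methods are taken up properly in Section 2.8).

```python
# Marching the damped oscillator forward using its first-order form
#   x1' = x2
#   x2' = -(k/m)*x1 - (b/m)*x2
# with the simple Euler update (state + dt * derivative).  The values of
# m, b, k, the step size, and the initial condition are arbitrary
# illustrative choices.

m, b, k = 1.0, 0.5, 1.0
x1, x2 = 1.0, 0.0            # initial position and velocity
dt = 0.001

for _ in range(20000):       # 20 time units at dt = 0.001
    dx1 = x2
    dx2 = -(k / m) * x1 - (b / m) * x2
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

print(x1, x2)   # both near zero: the damped motion has died away
```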
For example, the swinging of a pendulum is governed by the equation

    ẍ + (g/L) sin x = 0,

where x is the angle of the pendulum from vertical, g is the acceleration due to
gravity, and L is the length of the pendulum. The equivalent system is nonlinear:

    ẋ₁ = x₂
    ẋ₂ = −(g/L) sin x₁.

Nonlinearity makes the pendulum equation very difficult to solve analytically.
The usual way around this is to fudge, by invoking the small angle approximation
sin x ≈ x for x << 1. This converts the problem to a linear one, which can then be
solved easily. But by restricting to small x, we're throwing out some of the
physics, like motions where the pendulum whirls over the top. Is it really necessary
to make such drastic approximations?
It turns out that the pendulum equation can be solved analytically, in terms of
elliptic functions. But there ought to be an easier way. After all, the motion of the
pendulum is simple: at low energy, it swings back and forth, and at high energy it
whirls over the top. There should be some way of extracting this information from
the system directly. This is the sort of problem we'll learn how to solve, using geo-
metric methods.
Here's the rough idea. Suppose we happen to know a solution to the pendulum system, for a particular initial condition. This solution would be a pair of
functions x₁(t) and x₂(t), representing the position and velocity of the pendulum. If we construct an abstract space with coordinates (x₁, x₂), then the solution (x₁(t), x₂(t)) corresponds to a point moving along a curve in this space
(Figure 1.2.1).
Figure 1.2.1
This curve is called a trajectory, and the space is called the phase space for the
system. The phase space is completely filled with trajectories, since each point can
serve as an initial condition.
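The two kinds of pendulum motion described above can be previewed numerically. The sketch below (Python; the choice g/L = 1, the step size, and the two initial conditions are arbitrary illustrative choices) follows a low-energy trajectory that swings back and forth and a high-energy trajectory that whirls over the top.

```python
# Previewing the pendulum's two kinds of motion, using the equivalent
# first-order system  x1' = x2,  x2' = -sin(x1)  (taking g/L = 1 for
# simplicity).  A fourth-order Runge-Kutta integrator keeps the energy
# accurate enough over the run.

import math

def integrate_pendulum(x1, x2, dt, steps):
    """Return the list of angles x1(t) along the trajectory."""
    angles = [x1]
    def f(a, b):
        return b, -math.sin(a)
    for _ in range(steps):
        k1 = f(x1, x2)
        k2 = f(x1 + dt/2 * k1[0], x2 + dt/2 * k1[1])
        k3 = f(x1 + dt/2 * k2[0], x2 + dt/2 * k2[1])
        k4 = f(x1 + dt * k3[0], x2 + dt * k3[1])
        x1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        angles.append(x1)
    return angles

swing = integrate_pendulum(0.1, 0.0, 0.01, 2000)  # low energy: back and forth
whirl = integrate_pendulum(0.0, 3.0, 0.01, 2000)  # high energy: over the top

print(max(abs(a) for a in swing))  # stays near the 0.1 starting amplitude
print(whirl[-1])                   # the angle has wound past many multiples of 2*pi
```

The two trajectories trace out the two qualitatively different curves in phase space: a closed loop for the swing, and a running curve for the whirl.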
Our goal is to run this construction in reverse: given the system, we want to
draw the trajectories, and thereby extract information about the solutions. In many
cases, geometric reasoning will allow us to draw the trajectories without actually
solving the system!
Some terminology: the phase space for the general system (2) is the space with
coordinates x₁, ..., xₙ. Because this space is n-dimensional, we will refer to (2) as
an n-dimensional system or an nth-order system. Thus n represents the dimension of the phase space.
Nonautonomous Systems
You might worry that (2) is not general enough because it doesn't include any explicit time dependence. How do we deal with time-dependent or nonautonomous
equations like the forced harmonic oscillator m ẍ + b ẋ + kx = F cos t? In this case too
there's an easy trick that allows us to rewrite the system in the form (2). We let x₁ = x
and x₂ = ẋ as before, but now we introduce x₃ = t. Then ẋ₃ = 1 and so the equivalent
system is

    ẋ₁ = x₂
    ẋ₂ = (1/m)(−kx₁ − bx₂ + F cos x₃)                               (3)
    ẋ₃ = 1,
which is an example of a three-dimensional system. Similarly, an nth-order time-
dependent equation is a special case of an (n+1)-dimensional system. By this
trick, we can always remove any time dependence by adding an extra dimension to
the system.
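As a sanity check on the trick, the sketch below (Python; the parameter values are arbitrary illustrative choices) integrates the three-dimensional system (3) and confirms that the extra variable x₃ does nothing but keep track of the time.

```python
# The forced harmonic oscillator m*x'' + b*x' + k*x = F*cos(t), rewritten
# as the autonomous three-dimensional system (3):
#   x1' = x2
#   x2' = (1/m) * (-k*x1 - b*x2 + F*cos(x3))
#   x3' = 1
# Parameter values are arbitrary illustrative choices.

import math

m, b, k, F = 1.0, 0.2, 1.5, 0.5
x1, x2, x3 = 0.0, 0.0, 0.0
dt = 0.001
steps = 10000                # integrate out to t = steps * dt = 10

for _ in range(steps):
    d1 = x2
    d2 = (1.0 / m) * (-k * x1 - b * x2 + F * math.cos(x3))
    x1, x2, x3 = x1 + dt * d1, x2 + dt * d2, x3 + dt

print(x3)   # the extra variable just keeps track of time: x3 = t = 10
```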
The virtue of this change of variables is that it allows us to visualize a phase
space with trajectories frozen in it. Otherwise, if we allowed explicit time depen-
dence, the vectors and the trajectories would always be wiggling-this would ruin
the geometric picture we're trying to build. A more physical motivation is that the
state of the forced harmonic oscillator is truly three-dimensional: we need to know
three numbers, x, ẋ, and t, to predict the future, given the present. So a three-
dimensional phase space is natural.
The cost, however, is that some of our terminology is nontraditional. For exam-
ple, the forced harmonic oscillator would traditionally be regarded as a second-
order linear equation, whereas we will regard it as a third-order nonlinear system, since (3) is nonlinear, thanks to the cosine term. As we'll see later in the book, forced oscillators have many of the properties associated with nonlinear systems, and so there are genuine conceptual advantages to our choice of language.
Why Are Nonlinear Problems So Hard?
As we've mentioned earlier, most nonlinear systems are impossible to solve ana-
lytically. Why are nonlinear systems so much harder to analyze than linear ones?
The essential difference is that linear systems can be broken down into parts. Then
each part can be solved separately and finally recombined to get the answer. This
idea allows a fantastic simplification of complex problems, and underlies such meth-
ods as normal modes, Laplace transforms, superposition arguments, and Fourier
analysis. In this sense, a linear system is precisely equal to the sum of its parts.
But many things in nature don't act this way. Whenever parts of a system inter-
fere, or cooperate, or compete, there are nonlinear interactions going on. Most of
everyday life is nonlinear, and the principle of superposition fails spectacularly. If
you listen to your two favorite songs at the same time, you won't get double the plea-
sure! Within the realm of physics, nonlinearity is vital to the operation of a laser, the
formation of turbulence in a fluid, and the superconductivity of Josephson junctions.
1.3 A Dynamical View of the World
Now that we have established the ideas of nonlinearity and phase space, we can
present a framework for dynamics and its applications. Our goal is to show the log-
ical structure of the entire subject. The framework presented in Figure 1.3.1 will
guide our studies throughout this book.
The framework has two axes. One axis tells us the number of variables needed
to characterize the state of the system. Equivalently, this number is the dimension
of the phase space. The other axis tells us whether the system is linear or nonlinear.
For example, consider the exponential growth of a population of organisms.
This system is described by the first-order differential equation

    ẋ = rx,
where x is the population at time t and r is the growth rate. We place this system
in the column labeled n = 1 because one piece of information-the current value
of the population x-is sufficient to predict the population at any later time. The
system is also classified as linear because the differential equation ẋ = rx is linear
in x.
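For this linear system the exact solution is x(t) = x₀e^(rt), so it makes a good test case for numerics. A minimal sketch (Python; the values of r, x₀, and the step size are arbitrary illustrative choices):

```python
# Checking a numerical integration of x' = r*x against the exact solution
# x(t) = x0 * exp(r*t).  The values of r, x0, and the step size are
# arbitrary illustrative choices.

import math

r, x0 = 0.5, 2.0
dt = 1e-4
steps = 20000                # integrate out to t = steps * dt = 2.0

x = x0
for _ in range(steps):
    x += dt * r * x          # Euler update: x(t + dt) ~ x(t) + dt * r * x(t)

exact = x0 * math.exp(r * steps * dt)
print(x, exact)              # the two agree to better than 0.1 percent
```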
As a second example, consider the swinging of a pendulum, governed by

    ẍ + (g/L) sin x = 0.
In contrast to the previous example, the state of this system is given by two variables: its current angle x and angular velocity ẋ. (Think of it this way: we need
the initial values of both x and ẋ to determine the solution uniquely. For example, if we knew only x, we wouldn't know which way the pendulum was swinging.)
Because two variables are needed to specify the state, the pendulum belongs in the
n = 2 column of Figure 1.3.1. Moreover, the system is nonlinear, as discussed in
the previous section. Hence the pendulum is in the lower, nonlinear half of the
n = 2 column.
Figure 1.3.1

Number of variables:  n = 1 | n = 2 | n ≥ 3 | n >> 1 | Continuum

Linear:
  n = 1 (Growth, decay, or equilibrium): Exponential growth; RC circuit; Radioactive decay
  n = 2 (Oscillations): Linear oscillator; Mass and spring; RLC circuit; 2-body problem (Kepler, Newton)
  n ≥ 3: Civil engineering, structures; Electrical engineering
  n >> 1 (Collective phenomena): Coupled harmonic oscillators; Solid-state physics; Molecular dynamics; Equilibrium statistical mechanics
  Continuum (Waves and patterns): Elasticity; Wave equations; Electromagnetism (Maxwell); Quantum mechanics (Schrödinger, Heisenberg, Dirac); Heat and diffusion; Acoustics; Viscous fluids

Nonlinear:
  n = 1: Fixed points; Bifurcations; Overdamped systems, relaxational dynamics; Logistic equation for single species
  n = 2: Pendulum; Anharmonic oscillators; Limit cycles; Biological oscillators (neurons, heart cells); Predator-prey cycles; Nonlinear electronics (van der Pol, Josephson)
  n ≥ 3 (Chaos): Strange attractors (Lorenz); 3-body problem (Poincaré); Chemical kinetics; Iterated maps (Feigenbaum); Fractals (Mandelbrot); Forced nonlinear oscillators (Levinson, Smale); Practical uses of chaos; Quantum chaos?
  n >> 1: Coupled nonlinear oscillators; Lasers, nonlinear optics; Nonequilibrium statistical mechanics; Nonlinear solid-state physics (semiconductors); Josephson arrays; Heart cell synchronization; Neural networks; Immune system; Ecosystems; Economics
  Continuum (Spatio-temporal complexity): Nonlinear waves (shocks, solitons); Plasmas; Earthquakes; General relativity (Einstein); Quantum field theory; Reaction-diffusion, biological and chemical waves; Fibrillation; Epilepsy; Turbulent fluids (Navier-Stokes); Life

The nonlinear systems with n ≥ 3 and beyond lie in the region marked "The frontier."

One can continue to classify systems in this way, and the result will be something like the framework shown here. Admittedly, some aspects of the picture are
debatable. You might think that some topics should be added, or placed differ-
ently, or even that more axes are needed-the point is to think about classifying
systems on the basis of their dynamics.
There are some striking patterns in Figure 1.3.1. All the simplest systems occur
in the upper left-hand corner. These are the small linear systems that we learn
about in the first few years of college. Roughly speaking, these linear systems ex-
hibit growth, decay, or equilibrium when n = 1, or oscillations when n = 2. The
italicized phrases in Figure 1.3.1 indicate that these broad classes of phenomena
first arise in this part of the diagram. For example, an RC circuit has n = 1 and
cannot oscillate, whereas an RLC circuit has n = 2 and can oscillate.
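This claim about circuits is easy to check numerically. The sketch below (Python; all component values are arbitrary illustrative choices) integrates the RC equation V̇ = −V/RC and a lightly damped RLC circuit, and counts how many times each solution crosses zero.

```python
# n = 1 versus n = 2: an RC circuit, V' = -V/(R*C), can only decay,
# while a lightly damped RLC circuit, L*q'' + R*q' + q/C = 0, oscillates,
# crossing zero again and again.  All component values are illustrative.

dt = 0.001
steps = 20000                # 20 time units at dt = 0.001

# RC circuit: one variable, so no oscillation is possible.
R, C = 1.0, 1.0
v, prev_v, rc_crossings = 1.0, 1.0, 0
for _ in range(steps):
    v += dt * (-v / (R * C))
    if v * prev_v < 0:
        rc_crossings += 1
    prev_v = v

# RLC circuit: two variables (charge q and current i), so it can ring.
L, R2, C2 = 1.0, 0.2, 1.0
q, i, prev_q, rlc_crossings = 1.0, 0.0, 1.0, 0
for _ in range(steps):
    q, i = q + dt * i, i + dt * (-(R2 * i + q / C2) / L)
    if q * prev_q < 0:
        rlc_crossings += 1
    prev_q = q

print(rc_crossings, rlc_crossings)   # 0 for RC; several for RLC
```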
The next most familiar part of the picture is the upper right-hand corner. This is
the domain of classical applied mathematics and mathematical physics where the
linear partial differential equations live. Here we find Maxwell's equations of elec-
tricity and magnetism, the heat equation, Schrödinger's wave equation in quantum
mechanics, and so on. These partial differential equations involve an infinite con-
tinuum of variables because each point in space contributes additional degrees of
freedom. Even though these systems are large, they are tractable, thanks to such
linear techniques as Fourier analysis and transform methods.
In contrast, the lower half of Figure 1.3.1-the nonlinear half-is often ignored
or deferred to later courses. But no more! In this book we start in the lower left cor-
ner and systematically head to the right. As we increase the phase space dimension
from n = 1 to n = 3, we encounter new phenomena at every step, from fixed points
and bifurcations when n = 1, to nonlinear oscillations when n = 2, and finally
chaos and fractals when n = 3. In all cases, a geometric approach proves to be very
powerful, and gives us most of the information we want, even though we usually
can't solve the equations in the traditional sense of finding a formula for the an-
swer. Our journey will also take us to some of the most exciting parts of modern
science, such as mathematical biology and condensed-matter physics.
You'll notice that the framework also contains a region forbiddingly marked
"The frontier." It's like in those old maps of the world, where the mapmakers
wrote "Here be dragons" on the unexplored parts of the globe. These topics are
not completely unexplored, of course, but it is fair to say that they lie at the limits
of current understanding. The problems are very hard, because they are both large
and nonlinear. The resulting behavior is typically complicated in both space and
time, as in the motion of a turbulent fluid or the patterns of electrical activity in a
fibrillating heart. Toward the end of the book we will touch on some of these prob-
lems-they will certainly pose challenges for years to come.
Part I. ONE-DIMENSIONAL FLOWS

2. FLOWS ON THE LINE
2.0 Introduction
In Chapter 1, we introduced the general system
	ẋᵢ = fᵢ(x₁, ..., xₙ),  i = 1, ..., n,
and mentioned that its solutions could be visualized as trajectories flowing through
an n-dimensional phase space with coordinates (x₁, ..., xₙ). At the moment, this
idea probably strikes you as a mind-bending abstraction. So let's start slowly, be-
ginning here on earth with the simple case n = 1. Then we get a single equation of
the form

	ẋ = f(x).

Here x(t) is a real-valued function of time t, and f(x) is a smooth real-valued
function of x. We'll call such equations one-dimensional or first-order systems.
Before there's any chance of confusion, let's dispense with two fussy points of
terminology:
1. The word system is being used here in the sense of a dynamical system, not in the classical sense of a collection of two or more equations. Thus
a single equation can be a system.
2. We do not allow f to depend explicitly on time. Time-dependent or
nonautonomous equations of the form ẋ = f(x, t) are more complicated, because one needs two pieces of information, x and t, to predict
the future state of the system. Thus ẋ = f(x, t) should really be regarded as a two-dimensional or second-order system, and will therefore be discussed later in the book.
2.1 A Geometric Way of Thinking
Pictures are often more helpful than formulas for analyzing nonlinear systems.
Here we illustrate this point by a simple example. Along the way we will introduce
one of the most basic techniques of dynamics: interpreting a differential equation
as a vector field.
Consider the following nonlinear differential equation:
	ẋ = sin x.	(1)
To emphasize our point about formulas versus pictures, we have chosen one of the
few nonlinear equations that can be solved in closed form. We separate the vari-
ables and then integrate:
	dt = dx / sin x,

which implies

	t = ∫ csc x dx = −ln |csc x + cot x| + C.

To evaluate the constant C, suppose that x = x₀ at t = 0. Then C = ln |csc x₀ + cot x₀|. Hence the solution is

	t = ln | (csc x₀ + cot x₀) / (csc x + cot x) |.	(2)
This result is exact, but a headache to interpret. For example, can you answer
the following questions?
1. Suppose x₀ = π/4; describe the qualitative features of the solution x(t)
for all t > 0. In particular, what happens as t → ∞?
2. For an arbitrary initial condition x₀, what is the behavior of x(t) as
t → ∞?
Think about these questions for a while, to see that formula (2) is not transparent.
In contrast, a graphical analysis of (1) is clear and simple, as shown in Figure
2.1.1. We think of t as time, x as the position of an imaginary particle moving
along the real line, and ẋ as the velocity of that particle. Then the differential
equation ẋ = sin x represents a vector field on the line: it dictates the velocity vector ẋ at each x. To sketch the vector field, it is convenient to plot ẋ versus x, and
then draw arrows on the x-axis to indicate the corresponding velocity vector at
each x. The arrows point to the right when ẋ > 0 and to the left when ẋ < 0.
Figure 2.1.1
Here's a more physical way to think about the vector field: imagine that fluid
is flowing steadily along the x-axis with a velocity that varies from place to
place, according to the rule ẋ = sin x. As shown in Figure 2.1.1, the flow is to the
right when ẋ > 0 and to the left when ẋ < 0. At points where ẋ = 0, there is no
flow; such points are therefore called fixed points. You can see that there are two
kinds of fixed points in Figure 2.1.1: solid black dots represent stable fixed
points (often called attractors or sinks, because the flow is toward them) and
open circles represent unstable fixed points (also known as repellers or
sources).
Armed with this picture, we can now easily understand the solutions to the differential equation ẋ = sin x. We just start our imaginary particle at x₀ and watch
how it is carried along by the flow.
This approach allows us to answer the questions above as follows:
1. Figure 2.1.1 shows that a particle starting at x₀ = π/4 moves to the
right faster and faster until it crosses x = π/2 (where sin x reaches its
maximum). Then the particle starts slowing down and eventually approaches the stable fixed point x = π from the left. Thus, the qualitative form of the solution is as shown in Figure 2.1.2.
Note that the curve is concave up at first, and then concave down;
this corresponds to the initial acceleration for x < π/2 followed by the
deceleration toward x = π.
2. The same reasoning applies to any initial condition x₀. Figure 2.1.1
shows that if ẋ > 0 initially, the particle heads to the right and asymptotically approaches the nearest stable fixed point. Similarly, if ẋ < 0 initially, the particle approaches the nearest stable fixed point to its left. If ẋ = 0, then x remains constant. The qualitative form of the solution for any initial condition is sketched in Figure 2.1.3.

Figure 2.1.2  Figure 2.1.3
In all honesty, we should admit that a picture can't tell us certain quantitative
things: for instance, we don't know the time at which the speed |ẋ| is greatest. But in
many cases qualitative information is what we care about, and then pictures are fine.
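When a quantitative answer is wanted, a few lines of numerical integration supply it. Here is a sketch (forward Euler; the step size and time horizon are arbitrary choices) for ẋ = sin x starting from x₀ = π/4, which also locates the moment of greatest speed:

```python
import math

# Integrate x' = sin(x) from x0 = pi/4 by forward Euler (the step size
# and time horizon are arbitrary choices) and track the top speed.
def flow(x0, dt=1e-3, t_max=20.0):
    x, t = x0, 0.0
    t_fastest, top_speed = 0.0, 0.0
    while t < t_max:
        v = math.sin(x)                 # velocity prescribed by the field
        if abs(v) > top_speed:
            top_speed, t_fastest = abs(v), t
        x += dt * v
        t += dt
    return x, t_fastest, top_speed

x_final, t_fastest, top_speed = flow(math.pi / 4)
# x_final approaches the stable fixed point pi, and the speed peaks
# near 1 as the particle crosses x = pi/2.
```

This agrees with the graphical analysis: the trajectory accelerates up to x = π/2 and then decelerates toward x = π.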
2.2 Fixed Points and Stability
The ideas developed in the last section can be extended to any one-dimensional
system ẋ = f(x). We just need to draw the graph of f(x) and then use it to sketch
the vector field on the real line (the x-axis in Figure 2.2.1).
Figure 2.2.1
As before, we imagine that a fluid is flowing along the real line with a local velocity f(x). This imaginary fluid is called the phase fluid, and the real line is the
phase space. The flow is to the right where f(x) > 0 and to the left where f(x) < 0.
To find the solution to ẋ = f(x) starting from an arbitrary initial condition x₀, we
place an imaginary particle (known as a phase point) at x₀ and watch how it is carried along by the flow. As time goes on, the phase point moves along the x-axis
according to some function x(t). This function is called the trajectory based at x₀, and it represents the solution of the differential equation starting from the initial
condition x₀. A picture like Figure 2.2.1, which shows all the qualitatively different trajectories of the system, is called a phase portrait.
The appearance of the phase portrait is controlled by the fixed points x*, defined by f(x*) = 0; they correspond to stagnation points of the flow. In Figure
2.2.1, the solid black dot is a stable fixed point (the local flow is toward it) and the
open dot is an unstable fixed point (the flow is away from it).
In terms of the original differential equation, fixed points represent equilibrium solutions (sometimes called steady, constant, or rest solutions, since if
x = x* initially, then x(t) = x* for all time). An equilibrium is defined to be stable if all sufficiently small disturbances away from it damp out in time. Thus stable equilibria are represented geometrically by stable fixed points. Conversely, unstable equilibria, in which disturbances grow in time, are represented by unstable fixed points.
EXAMPLE 2.2.1 :
Find all fixed points for ẋ = x² − 1, and classify their stability.
Solution: Here f(x) = x² − 1. To find the fixed points, we set f(x*) = 0 and
solve for x*. Thus x* = ±1. To determine stability, we plot x² − 1 and then sketch
the vector field (Figure 2.2.2). The flow is to the right where x² − 1 > 0 and to the
left where x² − 1 < 0. Thus x* = −1 is stable, and x* = 1 is unstable. ■
Figure 2.2.2
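The classification is easy to check numerically. A sketch using forward Euler (the step size and the nudge sizes are arbitrary choices):

```python
# Forward Euler for x' = x**2 - 1, stopping early if |x| blows past cap.
def evolve(x0, dt=1e-3, t_max=3.0, cap=10.0):
    x, t = x0, 0.0
    while t < t_max and abs(x) < cap:
        x += dt * (x**2 - 1)
        t += dt
    return x

near_stable = evolve(-1.0 + 0.1)    # small nudge off the fixed point x* = -1
near_unstable = evolve(1.0 + 0.1)   # small nudge off the fixed point x* = +1
# near_stable relaxes back toward -1; near_unstable escapes to the right.
```

The early-stopping cap is needed because trajectories pushed past x = 1 grow without bound, as the next paragraph explains.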
Note that the definition of stable equilibrium is based on small disturbances;
certain large disturbances may fail to decay. In Example 2.2.1, all small disturbances to x* = −1 will decay, but a large disturbance that sends x to the right of
x = 1 will not decay; in fact, the phase point will be repelled out to +∞. To emphasize this aspect of stability, we sometimes say that x* = −1 is locally stable, but
not globally stable.
EXAMPLE 2.2.2:
Consider the electrical circuit shown in Figure 2.2.3. A resistor R and a capacitor C are in series with a battery of constant dc voltage V₀. Suppose that the switch is closed at t = 0, and that there is no charge on the capacitor initially. Let Q(t) denote the charge on the capacitor at time t ≥ 0. Sketch the graph of Q(t).

Figure 2.2.3

Solution: This type of circuit problem is probably familiar to you. It is governed by linear equations and can be solved analytically, but we prefer to illustrate the geometric approach.
First we write the circuit equations. As we go around the circuit, the total voltage drop must equal zero; hence −V₀ + RI + Q/C = 0, where I is the current flowing through the resistor. This current causes charge to accumulate on the capacitor at a rate Q̇ = I. Hence

	Q̇ = f(Q) = (V₀ − Q/C) / R.

The graph of f(Q) is a straight line with a negative slope (Figure 2.2.4). The corresponding vector field has a fixed point where f(Q) = 0, which occurs at Q* = CV₀. The flow is to the right where f(Q) > 0 and to the left where f(Q) < 0. Thus the flow is always toward Q*; it is a stable fixed point. In fact, it is globally stable, in the sense that it is approached from all initial conditions.

Figure 2.2.4

To sketch Q(t), we start a phase point at the origin of Figure 2.2.4 and imagine how it would move. The flow carries the phase point monotonically toward Q*. Its speed Q̇ decreases linearly as it approaches the fixed point; therefore Q(t) is increasing and concave down, as shown in Figure 2.2.5. ■
EXAMPLE 2.2.3:
Sketch the phase portrait corresponding to ẋ = x − cos x, and determine the stability of all the fixed points.
Solution: One approach would be to plot the function f(x) = x − cos x and then sketch the associated vector field. This method is valid, but it requires you to figure out what the graph of x − cos x looks like.
There's an easier solution, which exploits the fact that we know how to graph y = x and y = cos x separately. We plot both graphs on the same axes and then observe that they intersect in exactly one point (Figure 2.2.6).

Figure 2.2.6

This intersection corresponds to a fixed point, since x* = cos x* and therefore f(x*) = 0. Moreover, when the line lies above the cosine curve, we have x > cos x and so ẋ > 0: the flow is to the right. Similarly, the flow is to the left where the line is below the cosine curve. Hence x* is the only fixed point, and it is unstable. Note that we can classify the stability of x*, even though we don't have a formula for x* itself! ■
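Even though x* has no closed-form expression, it is easy to compute. A sketch using bisection on f(x) = x − cos x:

```python
import math

# f(x) = x - cos(x) is negative at x = 0 and positive at x = 1, so the
# unique fixed point x* lies in between; bisection pins it down.
def bisect(f, lo, hi, tol=1e-10):
    """Shrink a sign-changing bracket [lo, hi] until it is tol wide."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_star = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
# x* is about 0.739; since f'(x*) = 1 + sin(x*) > 0 there, the fixed
# point is unstable, as the graphical argument predicted.
```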
2.3 Population Growth
The simplest model for the growth of a population of organisms is Ṅ = rN, where N(t) is the population at time t, and r > 0 is the growth rate. This model predicts exponential growth: N(t) = N₀e^(rt), where N₀ is the population at t = 0.

Figure 2.3.1

Of course such exponential growth cannot go on forever. To model the effects of overcrowding and limited resources, population biologists and demographers often assume that the per capita growth rate Ṅ/N decreases when N becomes sufficiently large, as shown in Figure 2.3.1. For small N, the growth rate equals r, just as before. However, for populations larger than a certain carrying capacity K, the growth rate actually becomes negative; the death rate is higher than the birth rate.
A mathematically convenient way to incorporate these ideas is to assume that the per capita growth rate Ṅ/N decreases linearly with N (Figure 2.3.2). This leads to the logistic equation

	Ṅ = rN(1 − N/K),

first suggested to describe the growth of human populations by Verhulst in 1838. This equation can be solved analytically (Exercise 2.3.1) but once again we prefer a graphical approach. We plot Ṅ versus N to see what the vector field looks like. Note that we plot only N ≥ 0, since it makes no sense to think about a negative population (Figure 2.3.3). Fixed points occur at N* = 0 and N* = K, as found by setting Ṅ = 0 and solving for N. By looking at the flow in Figure 2.3.3, we see that N* = 0 is an unstable fixed point and N* = K is a stable fixed point. In biological terms, N = 0 is an unstable equilibrium: a small population will grow exponentially fast and run away from N = 0. On the other hand, if N is disturbed slightly from K, the disturbance will decay monotonically and N(t) → K as t → ∞.
In fact, Figure 2.3.3 shows that if we start a phase point at any N₀ > 0, it will always flow toward N = K. Hence the population always approaches the carrying capacity.
The only exception is if N₀ = 0; then there's nobody around to start reproducing, and so N = 0 for all time. (The model does not allow for spontaneous generation!)

Figure 2.3.2  Figure 2.3.3
Figure 2.3.3 also allows us to deduce the qualitative shape of the solutions. For
example, if N₀ < K/2, the phase point moves faster and faster until it crosses
N = K/2, where the parabola in Figure 2.3.3 reaches its maximum. Then the phase
point slows down and eventually creeps toward N = K. In biological terms, this
means that the population initially grows in an accelerating fashion, and the graph
of N(t) is concave up. But after N = K/2, the derivative Ṅ begins to decrease, and so N(t) is concave down as it asymptotes to the horizontal line N = K (Figure
2.3.4). Thus the graph of N(t) is S-shaped or sigmoid for N₀ < K/2.
Figure 2.3.4
Something qualitatively different occurs if the initial condition N₀ lies between
K/2 and K; now the solutions are decelerating from the start. Hence these solutions are concave down for all t. If the population initially exceeds the carrying capacity (N₀ > K), then N(t) decreases toward N = K and is concave up. Finally, if
N₀ = 0 or N₀ = K, then the population stays constant.
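As a numerical complement to the graphical analysis, here is a minimal sketch (forward Euler; r, K, the step size, and the initial conditions are all arbitrary choices) confirming that every positive initial population approaches the carrying capacity:

```python
# Forward Euler for the logistic equation N' = r*N*(1 - N/K).
# r, K, the step size, and the initial conditions are arbitrary choices.
r, K = 1.0, 100.0

def logistic(n0, dt=1e-3, t_max=20.0):
    """Return the Euler trajectory of N(t) starting from N(0) = n0."""
    n, t, traj = n0, 0.0, [n0]
    while t < t_max:
        n += dt * r * n * (1 - n / K)
        t += dt
        traj.append(n)
    return traj

# Every positive initial population ends up near the carrying capacity K,
# whether it starts below K/2, between K/2 and K, or above K.
finals = {n0: logistic(n0)[-1] for n0 in (1.0, 60.0, 150.0)}

# N0 = 0 is the lone exception: with nobody to reproduce, N stays at 0.
extinct = logistic(0.0)[-1]
```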
Critique of the Logistic Model
Before leaving this example, we should make a few comments about the biological
validity of the logistic equation. The algebraic form of the model is not to be taken literally. The model should really be regarded as a metaphor for populations that have a
tendency to grow from zero population up to some carrying capacity K.
Originally a much stricter interpretation was proposed; and the model was ar-
gued to be a universal law of growth (Pearl 1927). The logistic equation was tested
in laboratory experiments in which colonies of bacteria, yeast, or other simple or-
ganisms were grown in conditions of constant climate, food supply, and absence of
predators. For a good review of this literature, see Krebs (1972, pp. 190-200).
These experiments often yielded sigmoid growth curves, in some cases with an im-
pressive match to the logistic predictions.
On the other hand, the agreement was much worse for fruit flies, flour beetles, and other organisms that have complex life cycles, involving eggs, larvae, pupae, and adults. In these organisms, the predicted asymptotic approach to a steady car-
rying capacity was never observed-instead the populations exhibited large, per-
sistent fluctuations after an initial period of logistic growth. See Krebs (1972) for a
discussion of the possible causes of these fluctuations, including age structure and
time-delayed effects of overcrowding in the population.
For further reading on population biology, see Pielou (1969) or May (1981).
Edelstein-Keshet (1988) and Murray (1989) are excellent textbooks on mathemat-
ical biology in general.
2.4 Linear Stability Analysis
So far we have relied on graphical methods to determine the stability of fixed
points. Frequently one would like to have a more quantitative measure of stability, such as the rate of decay to a stable fixed point. This sort of information may be
obtained by linearizing about a fixed point, as we now explain.
Let x* be a fixed point, and let η(t) = x(t) − x* be a small perturbation away
from x*. To see whether the perturbation grows or decays, we derive a differential
equation for η. Differentiation yields

	η̇ = d/dt (x − x*) = ẋ,

since x* is constant. Thus η̇ = ẋ = f(x) = f(x* + η). Now using Taylor's expansion we obtain

	f(x* + η) = f(x*) + η f′(x*) + O(η²),

where O(η²) denotes quadratically small terms in η. Finally, note that f(x*) = 0
since x* is a fixed point. Hence

	η̇ = η f′(x*) + O(η²).

Now if f′(x*) ≠ 0, the O(η²) terms are negligible and we may write the approximation

	η̇ = η f′(x*).

This is a linear equation in η, and is called the linearization about x*. It shows
that the perturbation η(t) grows exponentially if f′(x*) > 0 and decays if
f′(x*) < 0. If f′(x*) = 0, the O(η²) terms are not negligible and a nonlinear
analysis is needed to determine stability, as discussed in Example 2.4.3 below.
The upshot is that the slope f′(x*) at the fixed point determines its stability. If
you look back at the earlier examples, you'll see that the slope was always negative at a stable fixed point. The importance of the sign of f′(x*) was clear from
our graphical approach; the new feature is that now we have a measure of how stable a fixed point is, determined by the magnitude of f′(x*). This magnitude plays the role of an exponential growth or decay rate. Its reciprocal 1/|f′(x*)|
is a characteristic time scale; it determines the time required for x(t) to vary significantly in the neighborhood of x*.
EXAMPLE 2.4.1 :
Using linear stability analysis, determine the stability of the fixed points for
ẋ = sin x.
Solution: The fixed points occur where f(x*) = sin x* = 0. Thus x* = kπ, where
k is an integer. Then

	f′(x*) = cos kπ = 1 if k is even, −1 if k is odd.

Hence x* is unstable if k is even and stable if k is odd. This agrees with the results shown in Figure 2.1.1. ■
EXAMPLE 2.4.2:
Classify the fixed points of the logistic equation, using linear stability analysis, and find the characteristic time scale in each case.
Solution: Here f(N) = rN(1 − N/K), with fixed points N* = 0 and N* = K. Then
f′(N) = r − 2rN/K, and so f′(0) = r and f′(K) = −r. Hence N* = 0 is unstable and
N* = K is stable, as found earlier by graphical arguments. In either case, the characteristic time scale is 1/|f′(N*)| = 1/r. ■
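The predicted decay rate is easy to test numerically. A sketch (Euler integration; the parameter values and perturbation size are arbitrary choices) comparing a small disturbance from N = K against the linearized prediction η(t) = η₀e^(−rt):

```python
import math

# Perturb the stable fixed point N* = K of the logistic equation and
# compare the decay with the linearized prediction eta(t) = eta0*e**(-r*t).
# The parameter values and perturbation size are arbitrary choices.
r, K = 2.0, 50.0
dt, t_final, eta0 = 1e-5, 1.0, 1e-3

n = K + eta0
for _ in range(int(round(t_final / dt))):
    n += dt * r * n * (1 - n / K)           # forward Euler step

eta = n - K                                 # measured perturbation at t_final
predicted = eta0 * math.exp(-r * t_final)   # linearization: eta' = -r*eta
```

Because the initial perturbation is small compared with K, the O(η²) terms stay negligible and the measured decay matches the linear prediction closely.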
EXAMPLE 2.4.3:
What can be said about the stability of a fixed point when f′(x*) = 0?
Solution: Nothing can be said in general. The stability is best determined on a
case-by-case basis, using graphical methods. Consider the following examples:

(a) ẋ = −x³  (b) ẋ = x³  (c) ẋ = x²  (d) ẋ = 0

Each of these systems has a fixed point x* = 0 with f′(x*) = 0. However the stability is different in each case. Figure 2.4.1 shows that (a) is stable and (b) is unstable. Case (c) is a hybrid case we'll call half-stable, since the fixed point is
attracting from the left and repelling from the right. We therefore indicate this type
of fixed point by a half-filled circle. Case (d) is a whole line of fixed points; perturbations neither grow nor decay.
Figure 2.4.1
These examples may seem artificial, but we will see that they arise naturally in the
context of bifurcations; more about that later. ■
2.5 Existence and Uniqueness
Our treatment of vector fields has been very informal. In particular, we have taken
a cavalier attitude toward questions of existence and uniqueness of solutions to
the system ẋ = f(x). That's in keeping with the applied spirit of this book.
Nevertheless, we should be aware of what can go wrong in pathological cases.
EXAMPLE 2.5.1 :
Show that the solution to ẋ = x^(1/3) starting from x₀ = 0 is not unique.
Solution: The point x = 0 is a fixed point, so one obvious solution is x(t) = 0
for all t. The surprising fact is that there is another solution. To find it we separate
variables and integrate:

	∫ x^(−1/3) dx = ∫ dt,

so (3/2) x^(2/3) = t + C. Imposing the initial condition x(0) = 0 yields C = 0. Hence
x(t) = (2t/3)^(3/2) is also a solution! ■
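The nontrivial solution can be verified directly. A sketch comparing a centered-difference estimate of ẋ with x^(1/3) along x(t) = (2t/3)^(3/2):

```python
# Verify directly that x(t) = (2*t/3)**(3/2) satisfies x' = x**(1/3),
# just as the trivial solution x(t) = 0 does: two different solutions
# starting from the same initial condition x(0) = 0.
def x_nontrivial(t):
    return (2.0 * t / 3.0) ** 1.5

t, dt = 2.0, 1e-6
lhs = (x_nontrivial(t + dt) - x_nontrivial(t - dt)) / (2 * dt)  # dx/dt at t
rhs = x_nontrivial(t) ** (1.0 / 3.0)                            # f(x(t))
mismatch = abs(lhs - rhs)
```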
When uniqueness fails, our geometric approach collapses because the phase
point doesn't know how to move; if a phase point were started at the origin, would
it stay there or would it move according to x(t) = (2t/3)^(3/2)? (Or as my friends in elementary school used to say when discussing the problem of the irresistible force
and the immovable object, perhaps the phase point would explode!)
Actually, the situation in Example 2.5.1 is even worse than we've let on; there
are infinitely many solutions starting from the same initial condition (Exercise
2.5.4).
What's the source of the non-uniqueness? A hint comes from looking at the vector field (Figure 2.5.1). We see that the fixed point x* = 0 is very unstable; the slope f′(0) is infinite.

Figure 2.5.1

Chastened by this example, we state a theorem that provides sufficient conditions for existence and uniqueness of solutions to ẋ = f(x).
Existence and Uniqueness Theorem: Consider the initial value problem

	ẋ = f(x),  x(0) = x₀.

Suppose that f(x) and f′(x) are continuous on an open interval R of the x-axis, and suppose that x₀ is a point in R. Then the initial value problem has a solution
x(t) on some time interval (−τ, τ) about t = 0, and the solution is unique.
For proofs of the existence and uniqueness theorem, see Borrelli and Coleman
(1987), Lin and Segel (1988), or virtually any text on ordinary differential equations.
This theorem says that if f(x) is smooth enough, then solutions exist and are
unique. Even so, there's no guarantee that solutions exist forever, as shown by the
next example.
EXAMPLE 2.5.2:
Discuss the existence and uniqueness of solutions to the initial value problem
ẋ = 1 + x², x(0) = x₀. Do solutions exist for all time?
Solution: Here f(x) = 1 + x². This function is continuous and has a continuous derivative for all x. Hence the theorem tells us that solutions exist and are unique for any
initial condition x₀. But the theorem does not say that the solutions exist for all time;
they are only guaranteed to exist in a (possibly very short) time interval around t = 0.
For example, consider the case where x(0) = 0. Then the problem can be solved
analytically by separation of variables:

	∫ dx / (1 + x²) = ∫ dt,

which yields

	tan⁻¹ x = t + C.

The initial condition x(0) = 0 implies C = 0. Hence x(t) = tan t is the solution.
But notice that this solution exists only for −π/2 < t < π/2, because x(t) → ±∞ as
t → ±π/2. Outside of that time interval, there is no solution to the initial value
problem for x₀ = 0. ■
The amazing thing about Example 2.5.2 is that the system has solutions that
reach infinity in finite time. This phenomenon is called blow-up. As the name suggests, it is of physical relevance in models of combustion and other runaway
processes.
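Blow-up is easy to observe numerically. A sketch (crude forward Euler; the step size and escape threshold are arbitrary choices) that integrates ẋ = 1 + x² from x(0) = 0 until the solution exceeds a large cap, then reports the elapsed time, which should be near π/2:

```python
import math

# x' = 1 + x**2 with x(0) = 0 has solution x = tan(t), which blows up
# as t -> pi/2. Crude forward Euler (arbitrary step size and escape
# threshold) sees the explosion at roughly that time.
def time_to_escape(dt=1e-5, cap=1e6):
    x, t = 0.0, 0.0
    while abs(x) < cap:
        x += dt * (1 + x * x)
        t += dt
    return t

t_blow = time_to_escape()   # should land close to pi/2, about 1.5708
```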
There are various ways to extend the existence and uniqueness theorem. One
can allow f to depend on time t, or on several variables x₁, ..., xₙ. One of the
most useful generalizations will be discussed later in Section 6.2.
From now on, we will not worry about issues of existence and uniqueness-our
vector fields will typically be smooth enough to avoid trouble. If we happen to
come across a more dangerous example, we'll deal with it then.
2.6 Impossibility of Oscillations
Fixed points dominate the dynamics of first-order systems. In all our examples so
far, all trajectories either approached a fixed point, or diverged to ±∞. In fact, those are the only things that can happen for a vector field on the real line. The rea-
son is that trajectories are forced to increase or decrease monotonically, or remain
constant (Figure 2.6.1). To put it more geometrically, the phase point never re-
verses direction.
Figure 2.6.1
Thus, if a fixed point is regarded as an equilibrium solution, the approach to
equilibrium is always monotonic; overshoot and damped oscillations can never
occur in a first-order system. For the same reason, undamped oscillations are impossible. Hence there are no periodic solutions to ẋ = f(x).
These general results are fundamentally topological in origin. They reflect the
fact that ẋ = f(x) corresponds to flow on a line. If you flow monotonically on a
line, you'll never come back to your starting place; that's why periodic solutions
are impossible. (Of course, if we were dealing with a circle rather than a line, we
could eventually return to our starting place. Thus vector fields on the circle can
exhibit periodic solutions, as we dis ......
CONTENTS
Preface ix
1. Overview 1
1.0 Chaos, Fractals, and Dynamics 1
1.1 Capsule History of Dynamics 2
1.2 The Importance of Being Nonlinear 4
1.3 A Dynamical View of the World 9
Part I. One-Dimensional Flows
2. Flows on the Line 15
2.0 Introduction 15
2.1 A Geometric Way of Thinking 16
2.2 Fixed Points and Stability 18
2.3 Population Growth 21
2.4 Linear Stability Analysis 24
2.5 Existence and Uniqueness 26
2.6 Impossibility of Oscillations 28
2.7 Potentials 30
2.8 Solving Equations on the Computer 32
Exercises 36
3. Bifurcations 44
3.0 Introduction 44
3.1 Saddle-Node Bifurcation 45
3.2 Transcritical Bifurcation 50
3.3 Laser Threshold 53
3.4 Pitchfork Bifurcation 55
3.5 Overdamped Bead on a Rotating Hoop 61
3.6 Imperfect Bifurcations and Catastrophes 69
3.7 Insect Outbreak 73
Exercises 79
4. Flows on the Circle 93
4.0 Introduction 93
4.1 Examples and Definitions 93
4.2 Uniform Oscillator 95
4.3 Nonuniform Oscillator 96
4.4 Overdamped Pendulum 101
4.5 Fireflies 103
4.6 Superconducting Josephson Junctions 106
Exercises 113
Part II. Two-Dimensional Flows
5. Linear Systems 123
5.0 Introduction 123
5.1 Definitions and Examples 123
5.2 Classification of Linear Systems 129
5.3 Love Affairs 138
Exercises 140
6. Phase Plane 145
6.0 Introduction 145
6.1 Phase Portraits 145
6.2 Existence, Uniqueness, and Topological Consequences 148
6.3 Fixed Points and Linearization 150
6.4 Rabbits versus Sheep 155
6.5 Conservative Systems 159
6.6 Reversible Systems 163
6.7 Pendulum 168
6.8 Index Theory 174
Exercises 181
7. Limit Cycles 196
7.0 Introduction 196
7.1 Examples 197
7.2 Ruling Out Closed Orbits 199
7.3 Poincaré-Bendixson Theorem 203
7.4 Liénard Systems 210
7.5 Relaxation Oscillators 211
7.6 Weakly Nonlinear Oscillators 215
Exercises 227
8. Bifurcations Revisited 241
8.0 Introduction 241
8.1 Saddle-Node, Transcritical, and Pitchfork Bifurcations 241
8.2 Hopf Bifurcations 248
8.3 Oscillating Chemical Reactions 254
8.4 Global Bifurcations of Cycles 260
8.5 Hysteresis in the Driven Pendulum and Josephson Junction 265
8.6 Coupled Oscillators and Quasiperiodicity 273
8.7 Poincaré Maps 278
Exercises 284
Part Ill. Chaos
9. Lorenz Equations 301
9.0 Introduction 301
9.1 A Chaotic Waterwheel 302
9.2 Simple Properties of the Lorenz Equations 311
9.3 Chaos on a Strange Attractor 317
9.4 Lorenz Map 326
9.5 Exploring Parameter Space 330
9.6 Using Chaos to Send Secret Messages 335
Exercises 341
10. One-Dimensional Maps 348
10.0 Introduction 348
10.1 Fixed Points and Cobwebs 349
10.2 Logistic Map: Numerics 353
10.3 Logistic Map: Analysis 357
10.4 Periodic Windows 361
10.5 Liapunov Exponent 366
10.6 Universality and Experiments 369
10.7 Renormalization 379
Exercises 388
11. Fractals 398
11.0 Introduction 398
11.1 Countable and Uncountable Sets 399
11.2 Cantor Set 401
11.3 Dimension of Self-similar Fractals 404
11.4 Box Dimension 409
11.5 Pointwise and Correlation Dimensions 411
Exercises 416
12. Strange Attractors 423
12.0 Introduction 423
12.1 The Simplest Examples 423
12.2 Hénon Map 429
12.3 Rössler System 434
12.4 Chemical Chaos and Attractor Reconstruction 437
12.5 Forced Double-Well Oscillator 441
Exercises 448
Answers to Selected Exercises 455
References 465
Author Index 475
Subject Index 478
PREFACE
This textbook is aimed at newcomers to nonlinear dynamics and chaos, especially
students taking a first course in the subject. It is based on a one-semester course
I've taught for the past several years at MIT and Cornell. My goal is to explain the
mathematics as clearly as possible, and to show how it can be used to understand
some of the wonders of the nonlinear world.
The mathematical treatment is friendly and informal, but still careful. Analyti-
cal methods, concrete examples, and geometric intuition are stressed. The theory is
developed systematically, starting with first-order differential equations and their
bifurcations, followed by phase plane analysis, limit cycles and their bifurcations, and culminating with the Lorenz equations, chaos, iterated maps, period doubling, renormalization, fractals, and strange attractors.
A unique feature of the book is its emphasis on applications. These include me-
chanical vibrations, lasers, biological rhythms, superconducting circuits, insect
outbreaks, chemical oscillators, genetic control systems, chaotic waterwheels, and
even a technique for using chaos to send secret messages. In each case, the sci-
entific background is explained at an elementary level and closely integrated with
the mathematical theory.
Prerequisites
The essential prerequisite is single-variable calculus, including curve-sketch-
ing, Taylor series, and separable differential equations. In a few places, multivari-
able calculus (partial derivatives, Jacobian matrix, divergence theorem) and linear
algebra (eigenvalues and eigenvectors) are used. Fourier analysis is not assumed, and is developed where needed. Introductory physics is used throughout. Other
scientific prerequisites would depend on the applications considered, but in all
cases, a first course should be adequate preparation.
Possible Courses
The book could be used for several types of courses:
A broad introduction to nonlinear dynamics, for students with no prior expo-
sure to the subject. (This is the kind of course I have taught.) Here one goes
straight through the whole book, covering the core material at the beginning
of each chapter, selecting a few applications to discuss in depth and giving
light treatment to the more advanced theoretical topics or skipping them alto-
gether. A reasonable schedule is seven weeks on Chapters 1-8, and five or six
weeks on Chapters 9-12. Make sure there's enough time left in the semester
to get to chaos, maps, and fractals.
A traditional course on nonlinear ordinary differential equations, but with
more emphasis on applications and less on perturbation theory than usual.
Such a course would focus on Chapters 1-8.
A modern course on bifurcations, chaos, fractals, and their applications, for
students who have already been exposed to phase plane analysis. Topics
would be selected mainly from Chapters 3, 4, and 8-12.
For any of these courses, the students should be assigned homework from the
exercises at the end of each chapter. They could also do computer projects; build
chaotic circuits and mechanical systems; or look up some of the references to get a
taste of current research. This can be an exciting course to teach, as well as to take.
I hope you enjoy it.
Conventions
Equations are numbered consecutively within each section. For instance, when
we're working in Section 5.4, the third equation is called (3) or Equation (3), but
elsewhere it is called (5.4.3) or Equation (5.4.3). Figures, examples, and exercises
are always called by their full names, e.g., Exercise 1.2.3. Examples and proofs
end with a loud thump, denoted by the symbol ■.
Acknowledgments
Thanks to the National Science Foundation for financial support. For help with
the book, thanks to Diana Dabby, Partha Saha, and Shinya Watanabe (students);
Jihad Touma and Rodney Worthing (teaching assistants); Andy Christian, Jim
Crutchfield, Kevin Cuomo, Frank DeSimone, Roger Eckhardt, Dana Hobson, and
Thanos Siapas (for providing figures); Bob Devaney, Irv Epstein, Danny Kaplan, Willem Malkus, Charlie Marcus, Paul Matthews, Arthur Mattuck, Rennie Mirollo, Peter Renz, Dan Rockmore, Gil Strang, Howard Stone, John Tyson, Kurt Wiesenfeld, Art Winfree, and Mary Lou Zeeman (friends and colleagues who gave advice);
and to my editor Jack Repcheck, Lynne Reed, Production Supervisor, and all the
other helpful people at Perseus Books. Finally, thanks to my family and Elisabeth
for their love and encouragement.
Steven H. Strogatz
Cambridge, Massachusetts
OVERVIEW
1.0 Chaos, Fractals, and Dynamics
There is a tremendous fascination today with chaos and fractals. James Gleick's
book Chaos (Gleick 1987) was a bestseller for months-an amazing accomplish-
ment for a book about mathematics and science. Picture books like The Beauty of
Fractals by Peitgen and Richter (1986) can be found on coffee tables in living
rooms everywhere. It seems that even nonmathematical people are captivated by
the infinite patterns found in fractals (Figure 1.0.1). Perhaps most important of all, chaos and fractals represent hands-on mathematics that is alive and changing. You
can turn on a home computer and create stunning mathematical images that no one
has ever seen before.
The aesthetic appeal of chaos
and fractals may explain why so
many people have become in-
trigued by these ideas. But maybe
you feel the urge to go deeper-to
learn the mathematics behind the
pictures, and to see how the ideas
can be applied to problems in sci-
ence and engineering. If so, this is
a textbook for you.
The style of the book is infor-
mal (as you can see), with an em-
phasis on concrete examples and
geometric thinking, rather than
proofs and abstract arguments. It is
[Figure 1.0.1]
also an extremely applied book-virtually every idea is illustrated by some application to science or engineering. In many cases, the applications are drawn from the recent research literature. Of course, one problem with such an applied approach is that not everyone is an expert in physics and biology and fluid mechanics . . . so the science as well as the mathematics will need to be explained from scratch. But that should be fun, and it can be instructive to see the connections among different fields.
Before we start, we should agree about something: chaos and fractals are part of
an even grander subject known as dynamics. This is the subject that deals with
change, with systems that evolve in time. Whether the system in question settles
down to equilibrium, keeps repeating in cycles, or does something more compli-
cated, it is dynamics that we use to analyze the behavior. You have probably been
exposed to dynamical ideas in various places-in courses in differential equations, classical mechanics, chemical kinetics, population biology, and so on. Viewed
from the perspective of dynamics, all of these subjects can be placed in a common
framework, as we discuss at the end of this chapter.
Our study of dynamics begins in earnest in Chapter 2. But before digging in, we
present two overviews of the subject, one historical and one logical. Our treatment
is intuitive; careful definitions will come later. This chapter concludes with a dy-
namical view of the world, a framework that will guide our studies for the rest of
the book.
1.1 Capsule History of Dynamics
Although dynamics is an interdisciplinary subject today, it was originally a branch
of physics. The subject began in the mid-1600s, when Newton invented differen-
tial equations, discovered his laws of motion and universal gravitation, and com-
bined them to explain Kepler's laws of planetary motion. Specifically, Newton
solved the two-body problem-the problem of calculating the motion of the earth
around the sun, given the inverse-square law of gravitational attraction between
them. Subsequent generations of mathematicians and physicists tried to extend
Newton's analytical methods to the three-body problem (e.g., sun, earth, and
moon) but curiously this problem turned out to be much more difficult to solve.
After decades of effort, it was eventually realized that the three-body problem was
essentially impossible to solve, in the sense of obtaining explicit formulas for the
motions of the three bodies. At this point the situation seemed hopeless.
The breakthrough came with the work of Poincaré in the late 1800s. He introduced a new point of view that emphasized qualitative rather than quantitative questions. For example, instead of asking for the exact positions of the planets at all times, he asked: Is the solar system stable forever, or will some planets eventually fly off to infinity? Poincaré developed a powerful geometric approach to analyzing such questions. That approach has flowered into the modern subject of dynamics, with applications reaching far beyond celestial mechanics. Poincaré was also the first person to glimpse the possibility of chaos, in which a deterministic system exhibits aperiodic behavior that depends sensitively on the initial conditions, thereby rendering long-term prediction impossible.
But chaos remained in the background in the first half of this century; instead
dynamics was largely concerned with nonlinear oscillators and their applications
in physics and engineering. Nonlinear oscillators played a vital role in the develop-
ment of such technologies as radio, radar, phase-locked loops, and lasers. On the
theoretical side, nonlinear oscillators also stimulated the invention of new mathe-
matical techniques-pioneers in this area include van der Pol, Andronov, Little-
wood, Cartwright, Levinson, and Smale. Meanwhile, in a separate development, Poincaré's geometric methods were being extended to yield a much deeper understanding of classical mechanics, thanks to the work of Birkhoff and later Kolmogorov, Arnol'd, and Moser.
The invention of the high-speed computer in the 1950s was a watershed in
the history of dynamics. The computer allowed one to experiment with equa-
tions in a way that was impossible before, and thereby to develop some intuition
about nonlinear systems. Such experiments led to Lorenz's discovery in 1963 of
chaotic motion on a strange attractor. He studied a simplified model of convec-
tion rolls in the atmosphere to gain insight into the notorious unpredictability of
the weather. Lorenz found that the solutions to his equations never settled down
to equilibrium or to a periodic state-instead they continued to oscillate in an ir-
regular, aperiodic fashion. Moreover, if he started his simulations from two
slightly different initial conditions, the resulting behaviors would soon become
totally different. The implication was that the system was inherently unpre-
dictable-tiny errors in measuring the current state of the atmosphere (or any
other chaotic system) would be amplified rapidly, eventually leading to embar-
rassing forecasts. But Lorenz also showed that there was structure in the
chaos-when plotted in three dimensions, the solutions to his equations fell
onto a butterfly-shaped set of points (Figure 1.1.1). He argued that this set had
to be "an infinite complex of surfaces"-today we would regard it as an example of a fractal.
Lorenz's work had little impact until the 1970s, the boom years for chaos. Here
are some of the main developments of that glorious decade. In 1971 Ruelle and Takens proposed a new theory for the onset of turbulence in fluids, based on abstract
considerations about strange attractors. A few years later, May found examples of
chaos in iterated mappings arising in population biology, and wrote an influential re-
view article that stressed the pedagogical importance of studying simple nonlinear
systems, to counterbalance the often misleading linear intuition fostered by tradi-
tional education. Next came the most surprising discovery of all, due to the physicist
Feigenbaum. He discovered that there are certain universal laws governing the tran-
sition from regular to chaotic behavior; roughly speaking, completely different sys-
tems can go chaotic in the same way. His work established a link between chaos and
[Figure 1.1.1]
phase transitions, and enticed a generation of physicists to the study of dynamics. Fi-
nally, experimentalists such as Gollub, Libchaber, Swinney, Linsay, Moon, and
Westervelt tested the new ideas about chaos in experiments on fluids, chemical reac-
tions, electronic circuits, mechanical oscillators, and semiconductors.
Although chaos stole the spotlight, there were two other major developments in
dynamics in the 1970s. Mandelbrot codified and popularized fractals, produced
magnificent computer graphics of them, and showed how they could be applied in
a variety of subjects. And in the emerging area of mathematical biology, Winfree
applied the geometric methods of dynamics to biological oscillations, especially
circadian (roughly 24-hour) rhythms and heart rhythms.
By the 1980s many people were working on dynamics, with contributions too
numerous to list. Table 1.1.1 summarizes this history.
1.2 The Importance of Being Nonlinear
Now we turn from history to the logical structure of dynamics. First we need to in-
troduce some terminology and make some distinctions.
Dynamics - A Capsule History

1666         Newton           Invention of calculus, explanation of planetary motion
1700s                         Flowering of calculus and classical mechanics
1800s                         Analytical studies of planetary motion
1890s        Poincaré         Geometric approach, nightmares of chaos
1920s-1950s                   Nonlinear oscillators in physics and engineering; invention of radio, radar, laser
1920s-1960s  Birkhoff, Kolmogorov, Arnol'd, Moser
                              Complex behavior in Hamiltonian mechanics
1963         Lorenz           Strange attractor in simple model of convection
1970s        Ruelle & Takens  Turbulence and chaos
             May              Chaos in logistic map
             Feigenbaum       Universality and renormalization; connection between chaos and phase transitions
                              Experimental studies of chaos
             Winfree          Nonlinear oscillators in biology
             Mandelbrot       Fractals
1980s                         Widespread interest in chaos, fractals, oscillators, and their applications

Table 1.1.1
There are two main types of dynamical systems: differential equations and it-
erated maps (also known as difference equations). Differential equations describe
the evolution of systems in continuous time, whereas iterated maps arise in prob-
lems where time is discrete. Differential equations are used much more widely in
science and engineering, and we shall therefore concentrate on them. Later in the
book we will see that iterated maps can also be very useful, both for providing sim-
ple examples of chaos, and also as tools for analyzing periodic or chaotic solutions
of differential equations.
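The distinction can be made concrete in a short numerical sketch (the growth rate, step size, and logistic parameter r = 3.2 below are illustrative assumptions, not values from the text): a differential equation is advanced through continuous time in many small steps, while an iterated map jumps from one discrete time to the next.

```python
# Two kinds of dynamical systems, side by side (illustrative parameters).

def euler_step(f, x, dt):
    """Advance the differential equation dx/dt = f(x) by one small time step."""
    return x + dt * f(x)

def logistic(x, r=3.2):
    """An iterated map: x_{n+1} = r * x_n * (1 - x_n); time is discrete."""
    return r * x * (1 - x)

# Continuous time: dx/dt = x integrated from x(0) = 1 out to t = 1.
x = 1.0
for _ in range(1000):
    x = euler_step(lambda y: y, x, 0.001)
# x is now close to e = 2.71828..., the exact value x(1) = e.

# Discrete time: five iterations of the logistic map from x_0 = 0.5.
orbit = [0.5]
for _ in range(5):
    orbit.append(logistic(orbit[-1]))
```

Note that no step size appears in the map: one application of the function is one unit of time.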
Now confining our attention to differential equations, the main distinction is be-
tween ordinary and partial differential equations. For instance, the equation for a
damped harmonic oscillator

m d²x/dt² + b dx/dt + kx = 0     (1)

is an ordinary differential equation, because it involves only ordinary derivatives dx/dt and d²x/dt². That is, there is only one independent variable, the time t. In
contrast, the heat equation

∂u/∂t = ∂²u/∂x²

is a partial differential equation-it has both time t and space x as independent
variables. Our concern in this book is with purely temporal behavior, and so we
deal with ordinary differential equations almost exclusively.
A very general framework for ordinary differential equations is provided by the system

ẋ₁ = f₁(x₁, …, xₙ)
⋮
ẋₙ = fₙ(x₁, …, xₙ).     (2)

Here the overdots denote differentiation with respect to t. Thus ẋᵢ = dxᵢ/dt. The variables x₁, …, xₙ might represent concentrations of chemicals in a reactor, populations of different species in an ecosystem, or the positions and velocities of the planets in the solar system. The functions f₁, …, fₙ are determined by the problem at hand.
For example, the damped oscillator (1) can be rewritten in the form of (2), thanks to the following trick: we introduce new variables x₁ = x and x₂ = ẋ. Then ẋ₁ = x₂, from the definitions, and

ẋ₂ = ẍ = -(b/m)ẋ - (k/m)x = -(b/m)x₂ - (k/m)x₁

from the definitions and the governing equation (1). Hence the equivalent system (2) is

ẋ₁ = x₂
ẋ₂ = -(b/m)x₂ - (k/m)x₁.
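This change of variables is easy to check numerically. The sketch below (with assumed values m = b = k = 1 and a standard fourth-order Runge-Kutta step; the numbers are illustrative, not from the text) integrates the first-order system and confirms that a damped oscillator released from rest decays toward the equilibrium x = 0.

```python
# Damped oscillator m x'' + b x' + k x = 0, rewritten as the system
# x1' = x2,  x2' = -(b/m) x2 - (k/m) x1.   (m = b = k = 1 assumed)

def f(state, m=1.0, b=1.0, k=1.0):
    x1, x2 = state
    return (x2, -(b / m) * x2 - (k / m) * x1)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step for the 2D system."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
    k4 = f(tuple(s + dt * d for s, d in zip(state, k3)))
    return tuple(s + dt * (a + 2 * p + 2 * q + r) / 6
                 for s, a, p, q, r in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0)            # released from rest at x = 1
for _ in range(2000):         # integrate out to t = 20
    state = rk4_step(state, 0.01)
# both the displacement x1 and the velocity x2 have decayed to nearly zero
```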
This system is said to be linear, because all the xᵢ on the right-hand side appear to the first power only. Otherwise the system would be nonlinear. Typical nonlinear terms are products, powers, and functions of the xᵢ, such as x₁x₂, (x₁)³, or cos x₂.
For example, the swinging of a pendulum is governed by the equation

ẍ + (g/L) sin x = 0,

where x is the angle of the pendulum from vertical, g is the acceleration due to gravity, and L is the length of the pendulum. The equivalent system is nonlinear:

ẋ₁ = x₂
ẋ₂ = -(g/L) sin x₁.

Nonlinearity makes the pendulum equation very difficult to solve analytically.
The usual way around this is to fudge, by invoking the small angle approximation
sin x ≈ x for x << 1. This converts the problem to a linear one, which can then be
solved easily. But by restricting to small x, we're throwing out some of the
physics, like motions where the pendulum whirls over the top. Is it really necessary
to make such drastic approximations?
It turns out that the pendulum equation can be solved analytically, in terms of
elliptic functions. But there ought to be an easier way. After all, the motion of the
pendulum is simple: at low energy, it swings back and forth, and at high energy it
whirls over the top. There should be some way of extracting this information from
the system directly. This is the sort of problem we'll learn how to solve, using geo-
metric methods.
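The cost of the small-angle fudge can be seen in a short numerical experiment (g/L = 1, a forward-Euler step, and the launch speed are assumptions of this sketch, not values from the text): a pendulum launched fast enough whirls over the top forever, while its linearized cousin can only oscillate.

```python
import math

# Full pendulum x'' = -(g/L) sin x versus its linearization x'' = -(g/L) x,
# with g/L = 1 (assumed).  Simple forward-Euler integration.

def integrate(accel, x0, v0, dt=0.001, steps=20_000):
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * accel(x)
    return x

# Launch from the bottom with speed 2.1, just above the critical speed 2
# needed (for g/L = 1) to carry the pendulum over the top.
x_full = integrate(lambda x: -math.sin(x), 0.0, 2.1)   # keeps gaining angle
x_small = integrate(lambda x: -x, 0.0, 2.1)            # merely oscillates
```

After t = 20 the true pendulum has gone well past one full revolution, while the linearized angle stays within its oscillation amplitude of about 2.1; whirling over the top is exactly the physics that sin x ≈ x throws away.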
Here's the rough idea. Suppose we happen to know a solution to the pendulum system, for a particular initial condition. This solution would be a pair of functions x₁(t) and x₂(t), representing the position and velocity of the pendulum. If we construct an abstract space with coordinates (x₁, x₂), then the solution (x₁(t), x₂(t)) corresponds to a point moving along a curve in this space (Figure 1.2.1).
Figure 1.2.1
This curve is called a trajectory, and the space is called the phase space for the
system. The phase space is completely filled with trajectories, since each point can
serve as an initial condition.
Our goal is to run this construction in reverse: given the system, we want to
draw the trajectories, and thereby extract information about the solutions. In many
cases, geometric reasoning will allow us to draw the trajectories without actually
solving the system!
Some terminology: the phase space for the general system (2) is the space with coordinates x₁, …, xₙ. Because this space is n-dimensional, we will refer to (2) as
an n-dimensional system or an nth-order system. Thus n represents the dimen-
sion of the phase space.
Nonautonomous Systems
You might worry that (2) is not general enough because it doesn't include any explicit time dependence. How do we deal with time-dependent or nonautonomous equations like the forced harmonic oscillator m ẍ + b ẋ + kx = F cos t? In this case too there's an easy trick that allows us to rewrite the system in the form (2). We let x₁ = x and x₂ = ẋ as before, but now we introduce x₃ = t. Then ẋ₃ = 1 and so the equivalent system is

ẋ₁ = x₂
ẋ₂ = (1/m)(-kx₁ - bx₂ + F cos x₃)     (3)
ẋ₃ = 1,

which is an example of a three-dimensional system. Similarly, an nth-order time-dependent equation is a special case of an (n+1)-dimensional system. By this trick, we can always remove any time dependence by adding an extra dimension to
the system.
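The bookkeeping behind this trick can be verified in a few lines (the values m = b = k = 1 and F = 0.5 below are assumed for illustration): integrating the three-dimensional autonomous system shows the extra variable x₃ doing nothing but tracking the clock.

```python
import math

# Forced oscillator m x'' + b x' + k x = F cos t, made autonomous:
# x1' = x2,  x2' = (F cos x3 - k x1 - b x2) / m,  x3' = 1.
# (Parameter values are assumed for illustration.)

m, b, k, F = 1.0, 1.0, 1.0, 0.5

def f(s):
    x1, x2, x3 = s
    return (x2, (F * math.cos(x3) - k * x1 - b * x2) / m, 1.0)

s = (0.0, 0.0, 0.0)                     # start at rest, clock at zero
dt, steps = 0.01, 500
for _ in range(steps):
    s = tuple(si + dt * di for si, di in zip(s, f(s)))   # forward Euler

# x3 has advanced by steps * dt = 5, exactly like the time t itself.
```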
The virtue of this change of variables is that it allows us to visualize a phase
space with trajectories frozen in it. Otherwise, if we allowed explicit time depen-
dence, the vectors and the trajectories would always be wiggling-this would ruin
the geometric picture we're trying to build. A more physical motivation is that the
state of the forced harmonic oscillator is truly three-dimensional: we need to know
three numbers, x, ẋ, and t, to predict the future, given the present. So a three-
dimensional phase space is natural.
The cost, however, is that some of our terminology is nontraditional. For exam-
ple, the forced harmonic oscillator would traditionally be regarded as a second-
order linear equation, whereas we will regard it as a third-order nonlinear system, since (3) is nonlinear, thanks to the cosine term. As we'll see later in the book, forced oscillators have many of the properties associated with nonlinear systems, and so there are genuine conceptual advantages to our choice of language.
Why Are Nonlinear Problems So Hard?
As we've mentioned earlier, most nonlinear systems are impossible to solve ana-
lytically. Why are nonlinear systems so much harder to analyze than linear ones?
The essential difference is that linear systems can be broken down into parts. Then
each part can be solved separately and finally recombined to get the answer. This
idea allows a fantastic simplification of complex problems, and underlies such meth-
ods as normal modes, Laplace transforms, superposition arguments, and Fourier
analysis. In this sense, a linear system is precisely equal to the sum of its parts.
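Superposition can be tested directly (the equations and initial conditions below are assumed for illustration): for a linear equation, adding two solutions gives a third solution, while the same combination fails for a nonlinear equation.

```python
# dx/dt = -x is linear; dx/dt = -x**3 is nonlinear.  Forward Euler.

def evolve(f, x0, t=1.0, steps=10_000):
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

linear = lambda x: -x
cubic = lambda x: -x ** 3

# Linear: the solution from x0 = 1 plus the solution from x0 = 2
# coincides with the solution from x0 = 3.
lin_sum = evolve(linear, 1.0) + evolve(linear, 2.0)
lin_comb = evolve(linear, 3.0)

# Nonlinear: the same combination misses by a wide margin.
cub_sum = evolve(cubic, 1.0) + evolve(cubic, 2.0)
cub_comb = evolve(cubic, 3.0)
```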
But many things in nature don't act this way. Whenever parts of a system inter-
fere, or cooperate, or compete, there are nonlinear interactions going on. Most of
everyday life is nonlinear, and the principle of superposition fails spectacularly. If
you listen to your two favorite songs at the same time, you won't get double the plea-
sure! Within the realm of physics, nonlinearity is vital to the operation of a laser, the
formation of turbulence in a fluid, and the superconductivity of Josephson junctions.
1.3 A Dynamical View of the World
Now that we have established the ideas of nonlinearity and phase space, we can
present a framework for dynamics and its applications. Our goal is to show the log-
ical structure of the entire subject. The framework presented in Figure 1.3.1 will
guide our studies throughout this book.
The framework has two axes. One axis tells us the number of variables needed
to characterize the state of the system. Equivalently, this number is the dimension of the phase space. The other axis tells us whether the system is linear or nonlinear.
For example, consider the exponential growth of a population of organisms.
This system is described by the first-order differential equation

ẋ = rx,

where x is the population at time t and r is the growth rate. We place this system
in the column labeled n = 1 because one piece of information-the current value
of the population x-is sufficient to predict the population at any later time. The
system is also classified as linear because the differential equation ẋ = rx is linear
in x.
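As a quick sanity check (r = 0.5 and x₀ = 100 are assumed for illustration), a numerical integration of ẋ = rx can be compared against the known solution x(t) = x₀ e^(rt):

```python
import math

# Exponential growth dx/dt = r x; exact solution x(t) = x0 * exp(r * t).

def grow(x0, r, t, steps=100_000):
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * r * x          # forward Euler: x <- x * (1 + r * dt)
    return x

x0, r, t = 100.0, 0.5, 2.0
numeric = grow(x0, r, t)
exact = x0 * math.exp(r * t)     # population at t = 2
# the two agree to a small fraction of a percent
```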
As a second example, consider the swinging of a pendulum, governed by

ẍ + (g/L) sin x = 0.

In contrast to the previous example, the state of this system is given by two variables: its current angle x and angular velocity ẋ. (Think of it this way: we need the initial values of both x and ẋ to determine the solution uniquely. For example, if we knew only x, we wouldn't know which way the pendulum was swinging.)
Because two variables are needed to specify the state, the pendulum belongs in the
n = 2 column of Figure 1.3.1. Moreover, the system is nonlinear, as discussed in
the previous section. Hence the pendulum is in the lower, nonlinear half of the
n = 2 column.
[Figure 1.3.1: The dynamical framework. Columns give the number of variables (n = 1, n = 2, n ≥ 3, n >> 1, continuum); rows split linear from nonlinear systems. Linear row: growth, decay, or equilibrium (exponential growth, RC circuit, radioactive decay); oscillations (linear oscillator, mass and spring, RLC circuit, 2-body problem (Kepler, Newton)); collective phenomena (civil engineering structures, coupled harmonic oscillators, solid-state physics, molecular dynamics, equilibrium statistical mechanics); waves and patterns (elasticity, wave equations, electromagnetism (Maxwell), quantum mechanics (Schrodinger, Heisenberg, Dirac), heat and diffusion, acoustics, viscous fluids). Nonlinear row: fixed points, bifurcations, overdamped systems and relaxational dynamics, logistic equation for single species; pendulum, anharmonic oscillators, limit cycles, biological oscillators (neurons, heart cells), predator-prey cycles, nonlinear electronics (van der Pol, Josephson); chaos (strange attractors (Lorenz), 3-body problem (Poincaré), chemical kinetics, iterated maps (Feigenbaum), fractals (Mandelbrot), forced nonlinear oscillators (Levinson, Smale), practical uses of chaos, quantum chaos?); coupled nonlinear oscillators, lasers and nonlinear optics, nonequilibrium statistical mechanics, nonlinear solid-state physics (semiconductors), Josephson arrays, heart cell synchronization, neural networks, immune system, ecosystems, economics; spatio-temporal complexity (nonlinear waves (shocks, solitons), plasmas, earthquakes, general relativity (Einstein), quantum field theory, reaction-diffusion, biological and chemical waves, fibrillation, epilepsy, turbulent fluids (Navier-Stokes), life). The large nonlinear regions are marked "The frontier."]

One can continue to classify systems in this way, and the result will be some-
thing like the framework shown here. Admittedly, some aspects of the picture are
debatable. You might think that some topics should be added, or placed differ-
ently, or even that more axes are needed-the point is to think about classifying
systems on the basis of their dynamics.
There are some striking patterns in Figure 1.3.1. All the simplest systems occur
in the upper left-hand corner. These are the small linear systems that we learn
about in the first few years of college. Roughly speaking, these linear systems ex-
hibit growth, decay, or equilibrium when n = 1, or oscillations when n = 2. The
italicized phrases in Figure 1.3.1 indicate that these broad classes of phenomena
first arise in this part of the diagram. For example, an RC circuit has n = 1 and
cannot oscillate, whereas an RLC circuit has n = 2 and can oscillate.
The next most familiar part of the picture is the upper right-hand corner. This is
the domain of classical applied mathematics and mathematical physics where the
linear partial differential equations live. Here we find Maxwell's equations of elec-
tricity and magnetism, the heat equation, Schrodinger's wave equation in quantum
mechanics, and so on. These partial differential equations involve an infinite con-
tinuum of variables because each point in space contributes additional degrees of
freedom. Even though these systems are large, they are tractable, thanks to such
linear techniques as Fourier analysis and transform methods.
In contrast, the lower half of Figure 1.3.1-the nonlinear half-is often ignored
or deferred to later courses. But no more! In this book we start in the lower left cor-
ner and systematically head to the right. As we increase the phase space dimension
from n = 1 to n = 3, we encounter new phenomena at every step, from fixed points
and bifurcations when n = 1, to nonlinear oscillations when n = 2, and finally
chaos and fractals when n = 3. In all cases, a geometric approach proves to be very
powerful, and gives us most of the information we want, even though we usually
can't solve the equations in the traditional sense of finding a formula for the an-
swer. Our journey will also take us to some of the most exciting parts of modern
science, such as mathematical biology and condensed-matter physics.
You'll notice that the framework also contains a region forbiddingly marked
The frontier. It's like in those old maps of the world, where the mapmakers wrote, "Here be dragons," on the unexplored parts of the globe. These topics are
not completely unexplored, of course, but it is fair to say that they lie at the limits
of current understanding. The problems are very hard, because they are both large
and nonlinear. The resulting behavior is typically complicated in both space and
time, as in the motion of a turbulent fluid or the patterns of electrical activity in a
fibrillating heart. Toward the end of the book we will touch on some of these prob-
lems-they will certainly pose challenges for years to come.
ONE-DIMENSIONAL FLOWS

FLOWS ON THE LINE
2.0 Introduction
In Chapter 1, we introduced the general system

ẋᵢ = fᵢ(x₁, …, xₙ),

and mentioned that its solutions could be visualized as trajectories flowing through an n-dimensional phase space with coordinates (x₁, …, xₙ). At the moment, this idea probably strikes you as a mind-bending abstraction. So let's start slowly, beginning here on earth with the simple case n = 1. Then we get a single equation of the form

ẋ = f(x).

Here x(t) is a real-valued function of time t, and f(x) is a smooth real-valued function of x. We'll call such equations one-dimensional or first-order systems.
Before there's any chance of confusion, let's dispense with two fussy points of
terminology:
1. The word system is being used here in the sense of a dynamical system, not in the classical sense of a collection of two or more equations. Thus
a single equation can be a system.
2. We do not allow f to depend explicitly on time. Time-dependent or nonautonomous equations of the form ẋ = f(x, t) are more complicated, because one needs two pieces of information, x and t, to predict the future state of the system. Thus ẋ = f(x, t) should really be regarded as a two-dimensional or second-order system, and will therefore be discussed later in the book.
2.1 A Geometric Way of Thinking
Pictures are often more helpful than formulas for analyzing nonlinear systems.
Here we illustrate this point by a simple example. Along the way we will introduce
one of the most basic techniques of dynamics: interpreting a differential equation
as a vector field.
Consider the following nonlinear differential equation:
ẋ = sin x. (1)
To emphasize our point about formulas versus pictures, we have chosen one of the
few nonlinear equations that can be solved in closed form. We separate the variables and then integrate:

dt = dx / sin x,

which implies

t = ∫ csc x dx = -ln |csc x + cot x| + C.

To evaluate the constant C, suppose that x = x₀ at t = 0. Then C = ln |csc x₀ + cot x₀|. Hence the solution is

t = ln | (csc x₀ + cot x₀) / (csc x + cot x) |.     (2)
This result is exact, but a headache to interpret. For example, can you answer
the following questions?
1. Suppose x₀ = π/4; describe the qualitative features of the solution x(t) for all t > 0. In particular, what happens as t → ∞?
2. For an arbitrary initial condition x₀, what is the behavior of x(t) as t → ∞?
Think about these questions for a while, to see that formula (2) is not transparent.
In contrast, a graphical analysis of (1) is clear and simple, as shown in Figure
2.1.1. We think of t as time, x as the position of an imaginary particle moving along the real line, and ẋ as the velocity of that particle. Then the differential equation ẋ = sin x represents a vector field on the line: it dictates the velocity vector ẋ at each x. To sketch the vector field, it is convenient to plot ẋ versus x, and then draw arrows on the x-axis to indicate the corresponding velocity vector at each x. The arrows point to the right when ẋ > 0 and to the left when ẋ < 0.
Figure 2.1.1

Here's a more physical way to think about the vector field: imagine that fluid is flowing steadily along the x-axis with a velocity that varies from place to place, according to the rule ẋ = sin x. As shown in Figure 2.1.1, the flow is to the right when ẋ > 0 and to the left when ẋ < 0. At points where ẋ = 0, there is no flow; such points are therefore called fixed points. You can see that there are two kinds of fixed points in Figure 2.1.1: solid black dots represent stable fixed points (often called attractors or sinks, because the flow is toward them) and open circles represent unstable fixed points (also known as repellers or sources).
Armed with this picture, we can now easily understand the solutions to the differential equation ẋ = sin x. We just start our imaginary particle at x₀ and watch how it is carried along by the flow.
This approach allows us to answer the questions above as follows:
1. Figure 2.1.1 shows that a particle starting at x₀ = π/4 moves to the right faster and faster until it crosses x = π/2 (where sin x reaches its maximum). Then the particle starts slowing down and eventually approaches the stable fixed point x = π from the left. Thus, the qualitative form of the solution is as shown in Figure 2.1.2.
Note that the curve is concave up at first, and then concave down; this corresponds to the initial acceleration for x < π/2 followed by the deceleration toward x = π.
2. The same reasoning applies to any initial condition x₀. Figure 2.1.1 shows that if ẋ > 0 initially, the particle heads to the right and asymptotically approaches the nearest stable fixed point. Similarly, if ẋ < 0 initially, the particle approaches the nearest stable fixed point to its left. If ẋ = 0, then x remains constant. The qualitative form of the solution for any initial condition is sketched in Figure 2.1.3.

[Figure 2.1.2]    [Figure 2.1.3]
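Both qualitative predictions are easy to confirm numerically (the step size below is an assumption of this sketch): a particle obeying ẋ = sin x and started at x₀ = π/4 moves monotonically to the right and settles onto the stable fixed point at π.

```python
import math

# Particle on the line with velocity dx/dt = sin x, started at pi/4.
x = math.pi / 4
dt = 0.001
trace = [x]
for _ in range(20_000):          # integrate out to t = 20
    x += dt * math.sin(x)
    trace.append(x)

monotone = all(b > a for a, b in zip(trace, trace[1:]))
# the particle never overshoots pi, and ends up essentially on top of it
```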
In all honesty, we should admit that a picture can't tell us certain quantitative
things: for instance, we don't know the time at which the speed |ẋ| is greatest. But in
many cases qualitative information is what we care about, and then pictures are fine.
2.2 Fixed Points and Stability
The ideas developed in the last section can be extended to any one-dimensional
system ẋ = f(x). We just need to draw the graph of f(x) and then use it to sketch
the vector field on the real line (the x-axis in Figure 2.2.1).
Figure 2.2.1
As before, we imagine that a fluid is flowing along the real line with a local veloc-
ity f(x). This imaginary fluid is called the phase fluid, and the real line is the
phase space. The flow is to the right where f(x) > 0 and to the left where f(x) < 0.
To find the solution to ẋ = f(x) starting from an arbitrary initial condition x₀, we
place an imaginary particle (known as a phase point) at x₀ and watch how it is car-
ried along by the flow. As time goes on, the phase point moves along the x-axis
according to some function x(t). This function is called the trajectory based at x₀,
and it represents the solution of the differential equation starting from the initial
condition x₀. A picture like Figure 2.2.1, which shows all the qualitatively differ-
ent trajectories of the system, is called a phase portrait.
The appearance of the phase portrait is controlled by the fixed points x*, de-
fined by f(x*) = 0; they correspond to stagnation points of the flow. In Figure
2.2.1, the solid black dot is a stable fixed point (the local flow is toward it) and the
open dot is an unstable fixed point (the flow is away from it).
In terms of the original differential equation, fixed points represent equilib-
rium solutions (sometimes called steady, constant, or rest solutions, since if
x = x* initially, then x(t) = x* for all time). An equilibrium is defined to be sta-
ble if all sufficiently small disturbances away from it damp out in time. Thus sta-
ble equilibria are represented geometrically by stable fixed points. Conversely,
unstable equilibria, in which disturbances grow in time, are represented by unsta-
ble fixed points.
EXAMPLE 2.2.1:
Find all fixed points for ẋ = x² − 1, and classify their stability.
Solution: Here f(x) = x² − 1. To find the fixed points, we set f(x*) = 0 and
solve for x*. Thus x* = ±1. To determine stability, we plot x² − 1 and then sketch
the vector field (Figure 2.2.2). The flow is to the right where x² − 1 > 0 and to the
left where x² − 1 < 0. Thus x* = −1 is stable, and x* = 1 is unstable. ■
Figure 2.2.2
Note that the definition of stable equilibrium is based on small disturbances;
certain large disturbances may fail to decay. In Example 2.2.1, all small distur-
bances to x* = −1 will decay, but a large disturbance that sends x to the right of
x = 1 will not decay; in fact, the phase point will be repelled out to +∞. To em-
phasize this aspect of stability, we sometimes say that x* = −1 is locally stable, but
not globally stable.
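The sign test behind this classification is easy to automate. A minimal numerical sketch (the helper `classify` is ours, not from the text): a fixed point is stable when the flow points toward it from both sides.

```python
# Classify a fixed point x* of x' = f(x) by sampling the flow on either side.
def classify(f, x_star, eps=1e-4):
    left, right = f(x_star - eps), f(x_star + eps)
    if left > 0 and right < 0:
        return "stable"    # flow approaches x* from both sides
    if left < 0 and right > 0:
        return "unstable"  # flow leaves x* on both sides
    return "half-stable or degenerate"

f = lambda x: x**2 - 1
print(classify(f, -1.0))  # stable
print(classify(f, 1.0))   # unstable
```

The test uses points slightly off the fixed point, which mirrors the graphical argument: it reads off the direction of the vector field on each side.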
EXAMPLE 2.2.2:
Consider the electrical circuit shown in Figure 2.2.3. A resistor R and a capaci-
tor C are in series with a battery of constant dc voltage V₀. Suppose that the switch
is closed at t = 0, and that there is no charge on the capacitor initially. Let Q(t) de-
note the charge on the capacitor at time t ≥ 0. Sketch the graph of Q(t).
Figure 2.2.3
Solution: This type of circuit problem is probably familiar to you. It is governed
by linear equations and can be solved analytically, but we prefer to illustrate the
geometric approach.
First we write the circuit equations. As we go around the circuit, the total voltage
drop must equal zero; hence −V₀ + RI + Q/C = 0, where I is the current
flowing through the resistor. This current causes charge to accumulate on the ca-
pacitor at a rate Q̇ = I. Hence

Q̇ = f(Q) = (V₀ − Q/C)/R.

The graph of f(Q) is a straight line with a negative slope (Figure 2.2.4). The
corresponding vector field has a fixed point where f(Q) = 0, which occurs at
Q* = CV₀. The flow is to the right where f(Q) > 0 and to the left where f(Q) < 0.
Thus the flow is always toward Q*; it is a stable fixed point. In fact, it is globally sta-
ble, in the sense that it is approached from all initial conditions.
Figure 2.2.4
To sketch Q(t), we start a phase point at the origin of Figure 2.2.4 and imagine
how it would move. The flow carries the phase point monotonically toward Q*.
Its speed Q̇ decreases linearly as it approaches the fixed point; therefore Q(t) is increasing
and concave down, as shown in Figure 2.2.5. ■
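As a numerical check on this picture, the flow Q̇ = (V₀ − Q/C)/R can be integrated and compared with the familiar exact solution Q(t) = CV₀(1 − e^(−t/RC)); a sketch with arbitrary parameter values, not taken from the text:

```python
import math

# Euler integration of Q' = (V0 - Q/C)/R from Q(0) = 0.
R, C, V0 = 1.0, 1.0, 1.0
dt, Q, t = 1e-4, 0.0, 0.0
while t < 5.0:
    Q += dt * (V0 - Q / C) / R
    t += dt

exact = C * V0 * (1 - math.exp(-t / (R * C)))
print(abs(Q - exact) < 1e-3)  # True: Q(t) rises monotonically toward Q* = C*V0
```

The monotone, concave-down approach to Q* = CV₀ is exactly the behavior read off the vector field in Figure 2.2.4.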
Figure 2.2.5
EXAMPLE 2.2.3:
Sketch the phase portrait corresponding to ẋ = x − cos x, and deter-
mine the stability of all the fixed points.
Solution: One approach would be to plot the function f(x) = x − cos x and
then sketch the associated vector field. This method is valid, but it requires you
to figure out what the graph of x − cos x looks like.
There's an easier solution, which exploits the fact that we know how to graph
y = x and y = cos x separately. We plot both graphs on the same axes and then
observe that they intersect in exactly one point (Figure 2.2.6).
Figure 2.2.6
This intersection corresponds to a fixed point, since x* = cos x* and therefore
f(x*) = 0. Moreover, when the line lies above the cosine curve, we have x > cos x
and so ẋ > 0: the flow is to the right. Similarly, the flow is to the left where the line is
below the cosine curve. Hence x* is the only fixed point, and it is unstable. Note that
we can classify the stability of x*, even though we don't have a formula for x* it-
self! ■
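Although x* has no closed-form expression, it is easy to pin down numerically; a sketch using bisection on [0, 1], the interval where the graphs of y = x and y = cos x cross:

```python
import math

# Bisection for the unique root of f(x) = x - cos(x); f(0) < 0 < f(1).
f = lambda x: x - math.cos(x)
a, b = 0.0, 1.0
for _ in range(60):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m

x_star = 0.5 * (a + b)
print(round(x_star, 6))                         # 0.739085
print(f(x_star - 1e-3) < 0 < f(x_star + 1e-3))  # True: flow points away on both sides, so unstable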
2.3 Population Growth
The simplest model for the growth of a population of organisms is Ṅ = rN,
where N(t) is the population at time t, and r > 0 is the growth rate. This model
predicts exponential growth: N(t) = N₀e^(rt), where N₀ is the population at t = 0.
Figure 2.3.1
Of course such exponential growth cannot go on forever. To model the effects
of overcrowding and limited resources, population biologists and demographers
often assume that
the per capita growth rate Ṅ/N decreases when N becomes sufficiently large, as
shown in Figure 2.3.1. For small N, the growth rate equals r, just as before.
However, for populations larger than a certain carrying capacity K, the growth
rate actually becomes negative; the death rate is higher than the birth rate.
A mathematically convenient way to incorporate these ideas is to assume that
the per capita growth rate Ṅ/N decreases linearly with N (Figure 2.3.2). This
leads to the logistic equation

Ṅ = rN(1 − N/K),

first suggested to describe the growth of human populations by Verhulst in 1838.
Figure 2.3.2
This equation can be solved analytically (Exercise 2.3.1) but once again we prefer a
graphical approach. We plot Ṅ versus N to see what the vector field looks like.
Note that we plot only N ≥ 0, since it makes no sense to think about a negative pop-
ulation (Figure 2.3.3). Fixed points occur at N* = 0 and N* = K, as found by set-
ting Ṅ = 0 and solving for N. By looking at the flow in Figure 2.3.3, we see that
N* = 0 is an unstable fixed point and N* = K is a stable fixed point. In biological
terms, N = 0 is an unstable equilibrium: a small population will grow exponen-
tially fast and run away from N = 0. On the other hand, if N is disturbed slightly
from K, the disturbance will decay monotonically and N(t) → K as t → ∞.
In fact, Figure 2.3.3 shows that if we start a phase point at any N₀ > 0, it will al-
ways flow toward N = K. Hence the population always approaches the carrying
capacity.
The only exception is if N₀ = 0; then there's nobody around to start reproducing,
and so N = 0 for all time. (The model does not allow for spontaneous generation!)
Figure 2.3.3
Figure 2.3.3 also allows us to deduce the qualitative shape of the solutions. For
example, if N₀ < K/2, the phase point moves faster and faster until it crosses
N = K/2, where the parabola in Figure 2.3.3 reaches its maximum. Then the phase
point slows down and eventually creeps toward N = K. In biological terms, this
means that the population initially grows in an accelerating fashion, and the graph
of N(t) is concave up. But after N = K/2, the derivative Ṅ begins to decrease,
and so N(t) is concave down as it asymptotes to the horizontal line N = K (Figure
2.3.4). Thus the graph of N(t) is S-shaped or sigmoid for N₀ < K/2.
Figure 2.3.4
Something qualitatively different occurs if the initial condition N₀ lies between
K/2 and K; now the solutions are decelerating from the start. Hence these solu-
tions are concave down for all t. If the population initially exceeds the carrying ca-
pacity (N₀ > K), then N(t) decreases toward N = K and is concave up. Finally, if
N₀ = 0 or N₀ = K, then the population stays constant.
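These conclusions are easy to confirm by direct numerical integration of the logistic equation; a sketch with arbitrary values r = 1 and K = 100 (not taken from the text):

```python
# Euler integration of the logistic equation N' = r*N*(1 - N/K).
r, K = 1.0, 100.0

def evolve(N0, T=30.0, dt=1e-3):
    N, t = N0, 0.0
    while t < T:
        N += dt * r * N * (1 - N / K)
        t += dt
    return N

# Every positive initial condition approaches the carrying capacity K...
print(all(abs(evolve(N0) - K) < 0.01 for N0 in (1.0, 50.0, 150.0)))  # True
# ...except N0 = 0, which stays put (no spontaneous generation).
print(evolve(0.0))  # 0.0
```

Plotting the intermediate values of N against t for N₀ = 1 reproduces the sigmoid curve of Figure 2.3.4.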
Critique of the Logistic Model
Before leaving this example, we should make a few comments about the biological
validity of the logistic equation. The algebraic form of the model is not to be taken
literally. The model should really be regarded as a metaphor for populations that
have a tendency to grow from zero population up to some carrying capacity K.
Originally a much stricter interpretation was proposed, and the model was ar-
gued to be a universal law of growth (Pearl 1927). The logistic equation was tested
in laboratory experiments in which colonies of bacteria, yeast, or other simple or-
ganisms were grown in conditions of constant climate, food supply, and absence of
predators. For a good review of this literature, see Krebs (1972, pp. 190-200).
These experiments often yielded sigmoid growth curves, in some cases with an im-
pressive match to the logistic predictions.
On the other hand, the agreement was much worse for fruit flies, flour beetles,
and other organisms that have complex life cycles, involving eggs, larvae, pupae,
and adults. In these organisms, the predicted asymptotic approach to a steady car-
rying capacity was never observed; instead the populations exhibited large, per-
sistent fluctuations after an initial period of logistic growth. See Krebs (1972) for a
discussion of the possible causes of these fluctuations, including age structure and
time-delayed effects of overcrowding in the population.
For further reading on population biology, see Pielou (1969) or May (1981).
Edelstein-Keshet (1988) and Murray (1989) are excellent textbooks on mathemat-
ical biology in general.
2.4 Linear Stability Analysis
So far we have relied on graphical methods to determine the stability of fixed
points. Frequently one would like to have a more quantitative measure of stability,
such as the rate of decay to a stable fixed point. This sort of information may be
obtained by linearizing about a fixed point, as we now explain.
Let x* be a fixed point, and let η(t) = x(t) − x* be a small perturbation away
from x*. To see whether the perturbation grows or decays, we derive a differential
equation for η. Differentiation yields

η̇ = d/dt (x − x*) = ẋ,

since x* is constant. Thus η̇ = ẋ = f(x) = f(x* + η). Now using Taylor's expan-
sion we obtain

f(x* + η) = f(x*) + η f′(x*) + O(η²),

where O(η²) denotes quadratically small terms in η. Finally, note that f(x*) = 0
since x* is a fixed point. Hence

η̇ = η f′(x*) + O(η²).

Now if f′(x*) ≠ 0, the O(η²) terms are negligible and we may write the approxi-
mation

η̇ ≈ η f′(x*).

This is a linear equation in η, and is called the linearization about x*. It shows
that the perturbation η(t) grows exponentially if f′(x*) > 0 and decays if
f′(x*) < 0. If f′(x*) = 0, the O(η²) terms are not negligible and a nonlinear
analysis is needed to determine stability, as discussed in Example 2.4.3 below.
The upshot is that the slope f′(x*) at the fixed point determines its stability. If
you look back at the earlier examples, you'll see that the slope was always nega-
tive at a stable fixed point. The importance of the sign of f′(x*) was clear from
our graphical approach; the new feature is that now we have a measure of how sta-
ble a fixed point is: that's determined by the magnitude of f′(x*). This magni-
tude plays the role of an exponential growth or decay rate. Its reciprocal 1/|f′(x*)|
is a characteristic time scale; it determines the time required for x(t) to vary sig-
nificantly in the neighborhood of x*.
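The decay rate predicted by the linearization can be checked directly; a sketch for ẋ = sin x near the stable fixed point x* = π, where f′(π) = −1, so a small perturbation η₀ should shrink like η₀e^(−t):

```python
import math

# Start near x* = pi and integrate x' = sin(x) by Euler's method.
eta0, T, dt = 0.01, 2.0, 1e-4
x, t = math.pi + eta0, 0.0
while t < T:
    x += dt * math.sin(x)
    t += dt

eta = x - math.pi                  # remaining perturbation
predicted = eta0 * math.exp(-T)    # linearized prediction eta0 * exp(f'(pi) * T)
print(abs(eta - predicted) < 1e-5)  # True: linearization matches
```

For a perturbation this small, the O(η²) terms are negligible and the numerical decay agrees with the linear prediction to several digits.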
EXAMPLE 2.4.1:
Using linear stability analysis, determine the stability of the fixed points for
ẋ = sin x.
Solution: The fixed points occur where f(x) = sin x = 0. Thus x* = kπ, where
k is an integer. Then

f′(x*) = cos kπ = 1 if k is even, −1 if k is odd.

Hence x* is unstable if k is even and stable if k is odd. This agrees with the re-
sults shown in Figure 2.1.1. ■
EXAMPLE 2.4.2:
Classify the fixed points of the logistic equation, using linear stability analysis,
and find the characteristic time scale in each case.
Solution: Here f(N) = rN(1 − N/K), with fixed points N* = 0 and N* = K. Then
f′(N) = r − 2rN/K, and so f′(0) = r and f′(K) = −r. Hence N* = 0 is unstable and
N* = K is stable, as found earlier by graphical arguments. In either case, the char-
acteristic time scale is 1/|f′(N*)| = 1/r. ■
EXAMPLE 2.4.3:
What can be said about the stability of a fixed point when f′(x*) = 0?
Solution: Nothing can be said in general. The stability is best determined on a
case-by-case basis, using graphical methods. Consider the following examples:

(a) ẋ = −x³  (b) ẋ = x³  (c) ẋ = x²  (d) ẋ = 0

Each of these systems has a fixed point x* = 0 with f′(x*) = 0. However the sta-
bility is different in each case. Figure 2.4.1 shows that (a) is stable and (b) is unsta-
ble. Case (c) is a hybrid case we'll call half-stable, since the fixed point is
attracting from the left and repelling from the right. We therefore indicate this type
of fixed point by a half-filled circle. Case (d) is a whole line of fixed points; pertur-
bations neither grow nor decay.
Figure 2.4.1
These examples may seem artificial, but we will see that they arise naturally in the
context of bifurcations; more about that later. ■
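The half-stable case (c) shows up clearly in simulation; a sketch integrating ẋ = x² from either side of x* = 0 (the times and step sizes are arbitrary choices):

```python
# Euler integration of x' = x**2; x* = 0 attracts from the left, repels from the right.
def evolve(x0, T=5.0, dt=1e-3):
    x, t = x0, 0.0
    while t < T:
        x += dt * x * x
        t += dt
    return x

left = evolve(-0.1)   # drawn in toward 0 (exact solution gives -1/15)
right = evolve(0.1)   # pushed away from 0 (exact solution gives 0.2)
print(abs(left) < 0.1 < right)  # True: half-stable
```

Since ẋ = x² > 0 everywhere except the origin, every phase point drifts rightward: points to the left of 0 creep toward it, points to the right escape.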
2.5 Existence and Uniqueness
Our treatment of vector fields has been very informal. In particular, we have taken
a cavalier attitude toward questions of existence and uniqueness of solutions to
the system ẋ = f(x). That's in keeping with the applied spirit of this book.
Nevertheless, we should be aware of what can go wrong in pathological cases.
EXAMPLE 2.5.1:
Show that the solution to ẋ = x^(1/3) starting from x₀ = 0 is not unique.
Solution: The point x = 0 is a fixed point, so one obvious solution is x(t) = 0
for all t. The surprising fact is that there is another solution. To find it we separate
variables and integrate:

∫ x^(−1/3) dx = ∫ dt,

so (3/2) x^(2/3) = t + C. Imposing the initial condition x(0) = 0 yields C = 0. Hence
x(t) = (2t/3)^(3/2) is also a solution! ■
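Both solutions can be verified by substitution; a sketch checking that x(t) = (2t/3)^(3/2) really satisfies ẋ = x^(1/3), using a centered finite difference for the derivative:

```python
# The nontrivial solution through x(0) = 0.
def x(t):
    return (2.0 * t / 3.0) ** 1.5

def xdot(t, h=1e-6):  # centered finite-difference approximation to x'(t)
    return (x(t + h) - x(t - h)) / (2.0 * h)

# Check x'(t) = x(t)**(1/3) at several positive sample times.
ok = all(abs(xdot(t) - x(t) ** (1.0 / 3.0)) < 1e-6 for t in (0.5, 1.0, 2.0))
print(ok)  # True: a second solution besides x(t) = 0
```

Differentiating analytically gives ẋ = (2t/3)^(1/2), which equals x^(1/3) term for term, so the finite-difference check is just a sanity test of that algebra.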
When uniqueness fails, our geometric approach collapses because the phase
point doesn't know how to move; if a phase point were started at the origin, would
it stay there or would it move according to x(t) = (2t/3)^(3/2)? (Or as my friends in el-
ementary school used to say when discussing the problem of the irresistible force
and the immovable object, perhaps the phase point would explode!)
Actually, the situation in Example 2.5.1 is even worse than we've let on; there
are infinitely many solutions starting from the same initial condition (Exercise
2.5.4).
What's the source of the non-uniqueness? A hint comes from looking at the
vector field (Figure 2.5.1). We see that the fixed point x* = 0 is very unstable;
the slope f′(0) is infinite.
Figure 2.5.1
Chastened by this example, we state a theorem that provides sufficient conditions
for existence and uniqueness of solutions to ẋ = f(x).
Existence and Uniqueness Theorem: Consider the initial value problem

ẋ = f(x),  x(0) = x₀.

Suppose that f(x) and f′(x) are continuous on an open interval R of the x-axis,
and suppose that x₀ is a point in R. Then the initial value problem has a solution
x(t) on some time interval (−τ, τ) about t = 0, and the solution is unique.
For proofs of the existence and uniqueness theorem, see Borrelli and Coleman
(1987), Lin and Segel (1988), or virtually any text on ordinary differential equations.
This theorem says that if f(x) is smooth enough, then solutions exist and are
unique. Even so, there's no guarantee that solutions exist forever, as shown by the
next example.
EXAMPLE 2.5.2:
Discuss the existence and uniqueness of solutions to the initial value problem
ẋ = 1 + x², x(0) = x₀. Do solutions exist for all time?
Solution: Here f(x) = 1 + x². This function is continuous and has a continuous de-
rivative for all x. Hence the theorem tells us that solutions exist and are unique for any
initial condition x₀. But the theorem does not say that the solutions exist for all time;
they are only guaranteed to exist in a (possibly very short) time interval around t = 0.
For example, consider the case where x(0) = 0. Then the problem can be solved
analytically by separation of variables:

∫ dx/(1 + x²) = ∫ dt,

which yields

tan⁻¹ x = t + C.

The initial condition x(0) = 0 implies C = 0. Hence x(t) = tan t is the solution.
But notice that this solution exists only for −π/2 < t < π/2, because x(t) → ±∞ as
t → ±π/2. Outside of that time interval, there is no solution to the initial value
problem for x₀ = 0.
The amazing thing about Example 2.5.2 is that the system has solutions that
reach infinity in finite time. This phenomenon is called blow-up. As the name sug-
gests, it is of physical relevance in models of combustion and other runaway
processes.
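The blow-up can be watched numerically; a sketch integrating ẋ = 1 + x² from x(0) = 0 up to t = 1.5, just short of π/2 ≈ 1.5708, and comparing with the exact solution tan t:

```python
import math

# Euler integration of x' = 1 + x**2 with x(0) = 0; the solution follows
# tan(t) and races off to infinity as t approaches pi/2.
x, t, dt = 0.0, 0.0, 1e-5
while t < 1.5:
    x += dt * (1.0 + x * x)
    t += dt

print(x > 10.0)                                       # True: already huge at t = 1.5
print(abs(x - math.tan(1.5)) / math.tan(1.5) < 0.01)  # True: tracks tan(1.5) to within 1%
```

Pushing the final time closer to π/2 makes x grow without bound, which is the numerical signature of blow-up.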
There are various ways to extend the existence and uniqueness theorem. One
can allow f to depend on time t, or on several variables x₁, . . . , xₙ. One of the
most useful generalizations will be discussed later in Section 6.2.
From now on, we will not worry about issues of existence and uniqueness; our
vector fields will typically be smooth enough to avoid trouble. If we happen to
come across a more dangerous example, we'll deal with it then.
2.6 Impossibility of Oscillations
Fixed points dominate the dynamics of first-order systems. In all our examples so
far, all trajectories either approached a fixed point, or diverged to ±∞. In fact,
those are the only things that can happen for a vector field on the real line. The rea-
son is that trajectories are forced to increase or decrease monotonically, or remain
constant (Figure 2.6.1). To put it more geometrically, the phase point never re-
verses direction.
Figure 2.6.1
Thus, if a fixed point is regarded as an equilibrium solution, the approach to
equilibrium is always monotonic; overshoot and damped oscillations can never
occur in a first-order system. For the same reason, undamped oscillations are im-
possible. Hence there are no periodic solutions to ẋ = f(x).
These general results are fundamentally topological in origin. They reflect the
fact that ẋ = f(x) corresponds to flow on a line. If you flow monotonically on a
line, you'll never come back to your starting place; that's why periodic solutions
are impossible. (Of course, if we were dealing with a circle rather than a line, we
could eventually return to our starting place. Thus vector fields on the circle can
exhibit periodic solutions, as we dis ......