Mathematical Introduction to Dynamical Systems

Chelsea Zou
May 23, 2023 · 13 min read


Part I: Introduction

Dynamical systems are mathematical models of how quantities evolve over time under the physical laws that govern our world. They encompass everything from the movement of planets, to bacteria in the digestive tract, to fluctuations of the global stock market. To understand dynamical systems is to engage with some of the most fundamental questions in science: How do things change over time? And what can we learn from these patterns of change?

Each of these systems is governed by a set of differential equations (DEs), which let us geometrically model the global behavior of its solutions. So what is a DE, you ask? A bit of preliminary knowledge before we get started: a DE expresses a function’s relationship to its derivatives. Recall that the derivative of a function represents the sensitivity (or rate) of change of an output with respect to its inputs. So in essence, a system of DEs represents the interaction and evolution of multiple variables.

For example, suppose we want to understand the population dynamics of a predator-prey system between rabbits and wolves. In the beginning, as the rabbit population increases, the growing food source lets the wolf population grow. However, as the wolf population grows, the wolves eat more, and the rabbit population declines. This decline in turn reduces the availability of hare delicacy for wolves, so the wolf population decreases. These interdependent cyclical changes in the ecosystem can be modeled as a dynamical system.
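To make this concrete, here is a minimal sketch of the rabbit-wolf story using the classic Lotka-Volterra equations. The growth, predation, and death rates below are made up purely for illustration.

```python
# Lotka-Volterra predator-prey model: a minimal illustrative sketch.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5  # hypothetical rates

def predator_prey(t, z):
    rabbits, wolves = z
    d_rabbits = alpha * rabbits - beta * rabbits * wolves   # prey reproduce, get eaten
    d_wolves = delta * rabbits * wolves - gamma * wolves    # predators eat, die off
    return [d_rabbits, d_wolves]

sol = solve_ivp(predator_prey, (0, 50), [10, 5], dense_output=True)
t = np.linspace(0, 50, 500)
rabbits, wolves = sol.sol(t)
print(rabbits.max(), wolves.max())  # the two populations oscillate out of phase
```

Plotting rabbits and wolves against time shows the out-of-phase boom-and-bust cycles described above.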

Cool right? So how do we actually model the system? Let’s dive into the maths…

Part II: Geometric Approach to Solving DEs

When we want to understand the general dynamics of a system of DEs, we first represent the system of equations as a matrix so we can extract some useful information using the matrix’s trace and determinant. [Note: In linear algebra, matrices represent linear transformations. The trace is the sum of the eigenvalues, which roughly captures the net stretch/compression the transformation applies, while the determinant is the scaling factor applied to a region of space (ie. a determinant of 2 scales area by a factor of 2, and a determinant of 0 collapses the area onto a line or point). The interpretation of trace and determinant in the context of dynamical systems is different in application].

Trace-Determinant Graph (TD Graph):

Let A be a 2 x 2 matrix with entries a and b in the first row and c and d in the second.

The trace of A is simply the sum of its diagonal entries

  • tr(A) = a + d

and the determinant is

  • det(A) = ad - bc

By using just the tr(A) and det(A), we can actually extract a lot of information about the dynamics of the system using this handy graph:

Note that “sink” and “source” mean stable and unstable, respectively.

With tr(A) on the x-axis and det(A) on the y-axis, computing these two values lands us somewhere on this graph. And depending on where we land, we are able to understand the general long term behavior of the dynamical system! If we want an even more accurate depiction, we can further decompose the matrix into its eigenvectors, each with its associated eigenvalue. You can think of the eigenvectors as the direction vectors along which the system’s behavior is primarily manifested. Basically, each eigenvector captures a distinct pattern or “shape” of the solutions, and we can gain insight into the dominant modes in which the system evolves.

The eigenvalues, on the other hand, are scalar values associated with each eigenvector. They represent the growth or decay rates of different modes or components of the system. For example, if an eigenvalue is positive (or, for complex eigenvalues, has a positive real part), the corresponding mode grows exponentially over time. Conversely, if it is negative, the mode decays. The magnitude of the eigenvalue determines the speed of growth or decay.
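To see how the trace, determinant, and eigenvalues come together, here is a small helper that places a 2 x 2 matrix on the TD graph and reports the qualitative behavior. This is just a sketch of the standard classification; the example matrix is arbitrary.

```python
# Classify a 2x2 linear system x' = Ax using the trace-determinant plane.
import numpy as np

def classify(A):
    tr, det = np.trace(A), np.linalg.det(A)
    disc = tr**2 - 4 * det                    # discriminant: real vs. complex eigenvalues
    if det < 0:
        kind = "saddle"
    elif det == 0:
        kind = "degenerate (non-isolated equilibria)"
    elif disc < 0:
        kind = "center" if tr == 0 else ("stable spiral" if tr < 0 else "unstable spiral")
    else:
        kind = "stable node (sink)" if tr < 0 else "unstable node (source)"
    return tr, det, np.linalg.eigvals(A), kind

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
print(classify(A))  # negative trace, positive det, complex eigenvalues -> stable spiral
```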

Okay, so back to the different behaviors a system can exhibit. We can start with a stable spiral as an example. So what does it actually mean when a system exhibits the dynamics of a stable spiral? Visually speaking, we would imagine that it would look something like an inwards coil. What this means is that over time, solutions oscillate as they converge toward an equilibrium state.

For instance, suppose we are modeling the trajectory of a child on a swing being pushed by their mother. Imagine that the mother leaves the child mid-swing to go do something better, and let’s assume the child does not know how to swing on their own. Initially, the swing continues to sway back and forth. However, as time goes on, air resistance and friction eventually diminish the height of the swing. And if we fast forward a few minutes, we would observe a motionless swing (and a sad child). If we plot out the dynamics with the x-axis as the position of the swing and the y-axis as the velocity of the swing, we can see a stable spiral of some sort. This sort of dynamics is called an attractor, because as time goes to infinity, we converge to a stable point (the stationary swing). On the contrary, there are things called repellers, which behave opposite to attractors and push solutions away from an unstable point. Visually speaking, we can imagine an unstable spiral that diverges outwards over time.

Along with spirals, there are also centers, nodes, and saddles. Centers exhibit periodic motion that always returns to its starting point without deviating from its path, node trajectories either diverge (unstable repellers) or converge (stable attractors) without oscillating, and saddles combine both behaviors: trajectories approach along some directions and diverge along others, making the point unstable overall.

Supplementary: Solving Second Order Linear DEs Using Newton’s Second Law of Motion

Solving x’ = Ax is not too bad. What if we need to solve a system in the form x’’ = Ax?

Trick: Let x’ = y and y’ = x’’

When we encounter a second order DE, we can break it down into a pair of first order DEs to make it easier to geometrically analyze the solutions. A second order DE is basically an equation that involves the second derivative of a function, typically with respect to time. For instance, let’s model the motions of a spring. We know that the derivative of position is velocity, and the second derivative of position is acceleration.

x = position

x’ = velocity

y’ = x’’ = acceleration (the trick)

µ = friction coefficient

k = spring constant

Using Newton’s second law of motion (F = ma), and noting that the spring force is -kx while the friction force -µx’ opposes the velocity, set up mx’’ = -kx - µx’

Defining a new variable y = x’ allows us to express this single second order DE as a pair of first order differential equations. Rearranging a few things, we get:

x’ = y

y’ = -(k/m)x - (µ/m)y

Now we can do our usual matrix analysis!
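For instance, plugging some made-up numbers for the mass, damping, and spring constant into the matrix form of this spring system, the trace, determinant, and eigenvalues point to a stable spiral, just like the swing. This is only an illustrative sketch; the values are arbitrary.

```python
# The damped spring written as x' = Ax, analyzed via trace, determinant, eigenvalues.
import numpy as np

m, mu, k = 1.0, 0.4, 2.0                      # hypothetical mass, damping, stiffness
A = np.array([[0.0, 1.0],
              [-k / m, -mu / m]])             # x' = y, y' = -(k/m)x - (mu/m)y
print("trace:", np.trace(A), "det:", np.linalg.det(A))
print("eigenvalues:", np.linalg.eigvals(A))   # complex with negative real part -> stable spiral
```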

Part III. Nonlinear Dynamical Systems

This is where stuff gets a bit more interesting. Nonlinear dynamical systems can be understood intuitively by considering their behavior in terms of cause and effect, feedback loops, and sensitive dependence on initial conditions. Let’s break down these concepts:

  • Cause and Effect: In a nonlinear dynamical system, the relationship between cause and effect is not necessarily linear or straightforward. Small changes in the system’s initial conditions or inputs can lead to significant and sometimes unexpected chaotic outcomes.
  • Feedback Loops: Nonlinear dynamical systems often involve feedback loops, where the output of the system feeds back into the system itself, influencing its future behavior. These feedback loops can create self-reinforcing or self-regulating cycles that can give rise to complex behavior. Positive feedback amplifies and reinforces the system’s behavior, while negative feedback dampens and stabilizes it.
  • Sensitive Dependence on Initial Conditions: Nonlinear systems often exhibit highly sensitive dependence on initial conditions, commonly referred to as the “butterfly effect” (discussed in my article on chaos theory). This means that even tiny differences in the starting state of the system can lead to significantly divergent outcomes over time, as the short numerical sketch after this list shows.
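To make that last point tangible, here is a tiny illustration using the logistic map, a standard chaotic example (not a system discussed elsewhere in this article): two starting points that differ by one part in a billion end up completely different within a few dozen steps.

```python
# Sensitive dependence on initial conditions via the logistic map x_{n+1} = r*x_n*(1 - x_n).
r = 4.0                          # the classic fully chaotic parameter value
x, y = 0.2, 0.2 + 1e-9           # two starting points differing by a billionth

for n in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 9:
        print(f"step {n + 1}: difference = {abs(x - y):.6f}")
# the difference grows from ~1e-9 to order 1 within a few dozen iterations
```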

Supplementary: Identifying Nonlinear Systems

A nonlinear system is one where at least one of the equations is nonlinear. The easiest way to identify nonlinear equations is to graph them: they will look…nonlinear.

Oftentimes, variables are raised to a power other than 1, multiplied together (xy), or tucked inside trig functions, logs, etc.

So matrix analysis is all simple and well in linear dynamics. But in nonlinear dynamics, we unfortunately get variables in our matrix. With variables in the way, we cannot get constant values for the trace and determinant, leaving us adrift in a sea of hopelessness and despair. But fear not, we have:

The Hartman-Grobman Theorem

The Hartman-Grobman theorem allows us to approximate the behavior of a nonlinear system (which is hard to analyze directly) using linear techniques (which are easier). The foundational concept is the Jacobian, a matrix of the partial derivatives of the nonlinear system of equations.

For a system x’ = f(x, y), y’ = g(x, y), the Jacobian matrix is J = [[∂f/∂x, ∂f/∂y], [∂g/∂x, ∂g/∂y]].

Essentially, the Jacobian captures the rates of change of the system’s variables with respect to each other. By evaluating the Jacobian matrix at each critical point, we get a good approximation of the dynamics near that point. Critical points (also called equilibrium or fixed points) are the states where the system stands still; we find them by setting the system of DEs equal to zero and solving for the variables.

The key insight of this theorem is that the behavior of the nonlinear system near each critical point is qualitatively similar to that of the “linearized system” (aka the Jacobian evaluated at the critical points). In other words, we can basically use the Jacobian to reduce a nonlinear system to a linear system (where by evaluating it at the critical points, we can get rid of the variables in our matrix). Then, by using this linearized system, we can simply analyze the dynamics using our TD graph as we did with linear systems.
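As a sketch of this recipe, here is how one might carry out the linearization with sympy. The system below is a made-up Lotka-Volterra-style example; any smooth f and g would work the same way.

```python
# Hartman-Grobman in practice: linearize a nonlinear system at its critical points.
import sympy as sp

x, y = sp.symbols("x y")
f = x - x * y          # x' = f(x, y)
g = x * y - y          # y' = g(x, y)

J = sp.Matrix([f, g]).jacobian([x, y])            # matrix of partial derivatives
critical_points = sp.solve([f, g], [x, y], dict=True)

for cp in critical_points:
    A = J.subs(cp)                                # the linearized system at this point
    print(cp, "trace:", A.trace(), "det:", A.det(), "eigenvalues:", A.eigenvals())
```

Conveniently for the discussion that follows, the critical point (1, 1) in this made-up example has trace 0 and determinant 1, which lands it exactly on the sensitive part of the TD graph.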

However, one issue is that this linearized system is only an approximation of our nonlinear system. Therefore, if the critical point lands on a sensitive region of the TD graph (such as directly on the y-axis), the true dynamics could be either a center, a stable spiral, or an unstable spiral. So in these cases, more work is needed! Darn, you thought we were done?

Hamiltonian Systems and Conservation of Energy

In the case where the linearization lands directly on the positive y-axis (trace zero, positive determinant), we can use the concept of “conservation of energy”. What this tells us is that if a system exhibits conservation of energy, then no repellers or attractors exist in the region containing our critical point. In other words, the solutions there must be periodic (aka centers). One simple way to check if a system has conservation of energy is to check if the system is Hamiltonian. If it is, then it must conserve energy (in the autonomous case at least). So Hamiltonian systems conserve energy, but beware, the converse is not necessarily true: a system can be NON-Hamiltonian yet still exhibit a conservation of energy.

Supplementary: Hamiltonian Systems

Let x’ = f(x,y)

Let y’ = g(x,y)

If the partial derivatives satisfy ∂f/∂x = -∂g/∂y, then the system is Hamiltonian
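This condition is easy to verify symbolically. Here is a quick sketch, using an undamped pendulum-style system purely as an illustration (it is not a system from this article):

```python
# Check the Hamiltonian condition df/dx = -dg/dy for x' = f, y' = g.
import sympy as sp

x, y = sp.symbols("x y")
f = y                  # x' = f(x, y)
g = -sp.sin(x)         # y' = g(x, y)

is_hamiltonian = sp.simplify(sp.diff(f, x) + sp.diff(g, y)) == 0
print(is_hamiltonian)  # True: the system conserves energy, so centers are possible here
```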

Now that we’ve gotten a taste of ruling out repellers and attractors to find periodic solutions, we can dive a bit deeper.

Part IV: The Search for Periodic Solutions & Closed Orbits

Imagine the periodic orbit of planets, where the gravitational pull of a central mass balances the planet’s inertia and keeps it confined to a specific orbit. Understanding the existence and behavior of trajectories like these is of great significance, as these solutions represent stable, repeating patterns within a dynamical system. Across many fields, scientists and engineers are on the hunt for these periodic solutions, aka closed orbits. Let’s explore a couple of techniques they use.

Non-Existence of Closed Orbits

We begin first by exploring cases where closed orbits do not exist, since it is generally easier to rule out spaces where they cannot occur. There are three different approaches I will discuss: gradient systems, index theory, and Dulac’s theorem. Each method offers unique insights into the absence of closed orbits within a system of ODEs.

A. Gradient Systems

Gradient systems rule out closed orbits due to the decreasing nature of their energy landscapes. Let’s consider a simple analogy: Imagine a ball rolling down a hill, which will represent our gradient system. The ball’s motion is driven by gravity, and it always moves in the direction of decreasing potential energy. Now, let’s consider the concept of a closed orbit. A closed orbit must eventually return to its starting point, forming a loop, which means its potential energy must return to its starting value as well. In other words, for a closed orbit to exist in a gradient system, the ball would need to roll down a hill and end up right back on top of the hill where it started. Clearly, this is impossible, so closed orbits cannot exist in a gradient system.

Supplementary: Gradient Systems

Let x’ = f(x,y)

Let y’ = g(x,y)

If the partial derivatives satisfy ∂f/∂y = ∂g/∂x, then the system is a gradient system.
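As with the Hamiltonian case, this condition can be checked symbolically. In the sketch below, the system is built from a made-up potential V(x, y), so the check is guaranteed to succeed; it is only meant to show the mechanics.

```python
# Check the gradient-system condition df/dy = dg/dx for x' = f, y' = g.
import sympy as sp

x, y = sp.symbols("x y")
V = x**2 + y**4          # a made-up potential ("energy landscape")
f = -sp.diff(V, x)       # x' = -dV/dx
g = -sp.diff(V, y)       # y' = -dV/dy

is_gradient = sp.simplify(sp.diff(f, y) - sp.diff(g, x)) == 0
print(is_gradient)       # True: a gradient system, so no closed orbits exist
```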

B. Index Theory

This is a fun one. Index theory allows one to rule out closed orbits with the rotation of a pen. Ok, maybe not exactly, but close enough (see supplementary). The index of a closed curve measures the net rotation of the vector field as you travel once around the curve; calculating it assigns a numerical value that quantifies the revolving behavior of the system. To understand why index theory can rule out closed orbits, imagine a particle moving along a trajectory. As the particle traverses the path, its orientation may wiggle a bit up and down, left and right. However, along a closed orbit, the particle returns to its starting point, and the vector field must complete exactly one full rotation (aka, have an index of 1). Hence, a closed orbit cannot exist around a region whose index is not 1. But careful again, the converse is not always true. An index of 1 does not necessarily imply that there will be a closed orbit.

Supplementary: Index Theory

The index of a critical point P is the number of counterclockwise rotations the vector field makes as you traverse (any arbitrary) closed path enclosing P, and no other critical points, in the counterclockwise direction.

(Legitimate) Approach Using Pens

1. Draw a circle on the phase portrait that contains critical point P

2. Pick any starting point C on the circle

3. Using a pen of your choosing, point the tip in the direction of the vector field at C

4. Traverse the circle counterclockwise while rotating your pen tip to follow the vector field along the path

5. Count the number of times your pen rotates counterclockwise until you return to C.
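If you would rather let a computer hold the pen, here is a rough numerical version of the same procedure. The saddle vector field below is an arbitrary illustrative choice.

```python
# Numerically estimate the index of a critical point by tracking the field's rotation.
import numpy as np

def field(x, y):
    return x, -y                              # x' = x, y' = -y: a saddle at the origin

def index(center, radius=1.0, steps=2000):
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, steps)
    px, py = cx + radius * np.cos(theta), cy + radius * np.sin(theta)  # points on the circle
    vx, vy = field(px, py)
    angles = np.unwrap(np.arctan2(vy, vx))    # cumulative rotation of the "pen tip"
    return (angles[-1] - angles[0]) / (2 * np.pi)

print(round(index((0.0, 0.0))))   # -1 for a saddle, so no closed orbit can enclose only this point
```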

C. Dulac’s Theorem

Finally, we have Dulac’s theorem for ruling out closed orbits. First, a few definitions. A vector field, denoted F, is basically a collection of arrows in space that represent the direction and magnitude of a particle’s motion at each point. The divergence, denoted ∇·F, measures the change in density of particle flow according to a given vector field (aka “flux density”). The theorem states that if there exists some function d, mapping a (simply connected) region to the real numbers, such that ∇·(dF) ≠ 0 throughout the region (in other words, ∇·(dF) is always positive or always negative there), then there are no closed orbits within that region. Intuitively, this (sorta) means that if you can find a function that, when multiplied by the vector field, makes the flow consistently expand or consistently contract within a region, then there won’t be any closed orbits in that region. In application, the hardest part is finding the right Dulac function d. But as long as it is a (continuously differentiable) mapping from the region R → ℝ, you can essentially try anything.
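Here is a sketch of how one might check Dulac’s criterion symbolically. The system and the Dulac function d = 1/(xy) below are a standard textbook-style illustration (a competing-species model restricted to the positive quadrant), not something from this article.

```python
# Dulac's criterion: check whether div(d * F) keeps one sign on the region of interest.
import sympy as sp

x, y = sp.symbols("x y", positive=True)   # restrict attention to the quadrant x > 0, y > 0
f = x * (2 - x - y)                       # x' = f
g = y * (4 * x - x**2 - 3)                # y' = g
d = 1 / (x * y)                           # candidate Dulac function

divergence = sp.simplify(sp.diff(d * f, x) + sp.diff(d * g, y))
print(divergence)   # -1/y: strictly negative for y > 0, so no closed orbits in that quadrant
```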

Existence of Closed Orbits: Poincaré-Bendixson Theorem

I say we have endured enough ways to rule out closed orbits. The time has come to explicitly search for them. Introducing the Poincaré-Bendixson theorem. This theorem states that if we can construct some special “trapping region” R, then solutions must eventually approach some closed orbit (or more precisely, a limit cycle) within this trapping region. Seems simple enough, right? The catch, however, is that this “trapping region” must meet several criteria.

  1. This region R must not contain any critical points P.
  2. The vector fields along the boundaries of the trapping region must confine the solutions so that they cannot escape R (the vector field on the boundary points into the region).

If we are able to construct such an R without any contradictions, then some closed orbit must exist within R. Intuitively, if a particle ends up inside R, there is no way for it to leave, so its trajectory will keep tracing out space inside R. However, since solution trajectories can never cross themselves or each other (by the uniqueness of solutions, i.e., the Picard–Lindelöf theorem), the trajectory eventually runs out of room, forcing solutions to approach some closed orbit.

So how do we construct this trapping region R? The key is to convert the system of equations into polar coordinates, so that we can analyze the rate of change of the radial component, r’. Specifically, we look for an outer radius r-max where r’ is negative (the vector field points inward on the outer boundary) and an inner radius r-min where r’ is positive (the vector field points outward on the inner boundary). When we can find such an r-min and r-max, we have successfully constructed an annular region R in which a closed orbit must exist.

Supplementary: Poincaré-Bendixson Theorem

Let x’ = f(x,y)

Let y’ = g(x,y)

Convert to polar coordinates

1. x = rcos(𝛳)

2. y = rsin(𝛳)

Plug into the identity r·r’ = x·x’ + y·y’ (which follows from differentiating r² = x² + y²)

Show that r’ is always negative on the circle r = r-max

Show that r’ is always positive on the circle r = r-min
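Here is a sketch of those steps carried out with sympy on a standard textbook-style example (not a system from this article). The radial equation simplifies to r’ = r(1 - r²), which is positive for small r and negative for large r, so any annulus with r-min < 1 < r-max is a trapping region.

```python
# Poincare-Bendixson setup: convert to polar coordinates and examine r'.
import sympy as sp

x, y, r, theta = sp.symbols("x y r theta", positive=True)
f = -y + x * (1 - x**2 - y**2)   # x'
g = x + y * (1 - x**2 - y**2)    # y'

# r*r' = x*x' + y*y', then substitute x = r*cos(theta), y = r*sin(theta)
rr_prime = (x * f + y * g).subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
r_prime = sp.simplify(sp.trigsimp(rr_prime / r))
print(r_prime)   # r*(1 - r**2) up to factoring: positive for r < 1, negative for r > 1
```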

We’ve discussed a bit on linear systems, nonlinear systems, and how to rule out and find closed orbits, but trust me, there is a whole lot more on this out there. Anyhow, that concludes this article as an introduction to dynamical systems. You’re welcome. You just got an entire college math class for free in 15 minutes.
