
\documentclass{article}
\usepackage[a4paper]{geometry}
\begin{document}
\title{Course 216: Ordinary Differential Equations}
\author{Notes by Chris Blair}
\date{}
\maketitle
\vspace{-2.5em}
\begin{center}
\begin{minipage}{170pt}
\scriptsize These notes cover the ODEs course given in 2007-2008 by Dr.
John Stalker.
\end{minipage}
\end{center}
\tableofcontents

\section*{Terminology}

\paragraph{Scalar equation} A single ODE.

\paragraph{System of equations} Several ODEs.

\paragraph{Order} The order of an ODE is the order of the highest
derivative appearing in it.

\paragraph{Linear / Non-linear} A linear ODE is one in which the dependent
variable and its derivatives appear only to the first power and are not
multiplied together; the coefficients may still depend on the independent
variable. Any other ODE is non-linear.

\paragraph{Homogeneous / Inhomogeneous} A linear ODE is homogeneous if every
term contains the dependent variable or one of its derivatives, and
inhomogeneous if a forcing term free of the dependent variable is present.

\paragraph{Invariants}

An invariant of a system of ODEs is a function of the dependent and
independent variables and their derivatives which is constant along any
solution of the system. Invariants can be used to place bounds on solutions.
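As a quick numerical illustration (the pendulum system and the code below are my own additions, not from the course): for the undamped pendulum $x_1' = x_2$, $x_2' = -\sin x_1$, the energy $E = \frac{1}{2} x_2^2 - \cos x_1$ is an invariant, since its derivative along solutions vanishes identically.

```python
import math
import random

# Undamped pendulum as a first-order system (illustrative choice, not
# an example taken from the notes):
#   x1' = x2,  x2' = -sin(x1)
def F(x1, x2):
    return (x2, -math.sin(x1))

# Candidate invariant: total energy E = x2^2/2 - cos(x1).
def E(x1, x2):
    return 0.5 * x2**2 - math.cos(x1)

# E is invariant iff its derivative along solutions vanishes:
#   dE/dt = (dE/dx1) x1' + (dE/dx2) x2' = sin(x1) x2 + x2 (-sin(x1)) = 0
def dE_dt(x1, x2):
    f1, f2 = F(x1, x2)
    return math.sin(x1) * f1 + x2 * f2

random.seed(0)
samples = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(100)]
assert all(abs(dE_dt(x1, x2)) < 1e-12 for x1, x2 in samples)
```

In particular $E$ bounds solutions: a trajectory starting with small energy can never reach the inverted position, where $E \geq 1$.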

\part{Solving Linear ODEs}

\section{Reduction of Order}

\begin{itemize}

\item Any higher order ODE or system of ODEs can be reduced to a system of
first order ODEs by introducing new variables to replace the derivatives in
the original equation/system.

\item For example, the third order equation
$c_1 x'''(t) + c_2 x''(t) + c_3 x'(t) + c_4 x(t) = 0$
can be reduced to a first order system using the following set of
substitutions:
$x_1 = x, \, \, x_2 = x', \, \, x_3 = x''$
giving:
$x_1' = x_2, \, \, x_2' = x_3, \,\, x_3' = - \frac{c_4}{c_1} x_1 - \frac{c_3}{c_1} x_2 - \frac{c_2}{c_1} x_3$

We can write this in matrix form:

$\left( \begin{array}{c}x_1' \\ x_2' \\ x_3' \end{array} \right) = \left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\- \frac{c_4}{c_1} & - \frac{c_3}{c_1} & - \frac{c_2}{c_1} \end{array} \right) \left(\begin{array}{c}x_1 \\ x_2 \\ x_3 \end{array} \right)$

\item Hence, any linear homogeneous ODE or system of such ODEs can be
written in the following matrix form:

$\vec{x}' (t) = A(t) \vec{x} (t)$
which, when $A$ is a constant matrix, has solution:
$\vec{x}(t) = \exp (tA) \, \vec{x} (0)$

\end{itemize}
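A small numerical sketch of this reduction (the coefficient choice and the Python code are illustrative assumptions, not from the course): for $x''' + x' = 0$ with $x(0) = 0$, $x'(0) = 1$, $x''(0) = 0$, the solution is $x(t) = \sin t$, which we can recover from the companion matrix and a truncated exponential series.

```python
import math
import numpy as np

# Companion matrix for c1 x''' + c2 x'' + c3 x' + c4 x = 0 with the
# substitutions x1 = x, x2 = x', x3 = x''.  Coefficients chosen so the
# equation is x''' + x' = 0 (an illustrative assumption).
c1, c2, c3, c4 = 1.0, 0.0, 1.0, 0.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-c4 / c1, -c3 / c1, -c2 / c1]])

def expm_series(M, terms=40):
    """Matrix exponential by truncating exp(M) = sum_n M^n / n!."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# Initial data x(0) = 0, x'(0) = 1, x''(0) = 0 gives x(t) = sin t.
x0 = np.array([0.0, 1.0, 0.0])
t = 1.0
x_t = expm_series(t * A) @ x0
assert abs(x_t[0] - math.sin(t)) < 1e-9
```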

\section{Computing Matrix Exponentials}

\begin{itemize}

\item The exponential of the matrix $tA$ is given by:

$\exp (tA) = \sum_{n=0}^{\infty} \frac{1}{n!} t^n A^n$

\item For a diagonal matrix,

$\exp \left( \begin{array}{cccc} a_1 & 0 & \dots & 0 \\ 0 & a_2 & \dots & 0 \\ \vdots & & \ddots & \\ 0 & \dots & 0 & a_n \\ \end{array} \right) = \left( \begin{array}{cccc} \exp(a_1) & 0 & \dots & 0 \\ 0 & \exp(a_2) & \dots & 0 \\ \vdots & & \ddots & \\ 0 & \dots & 0 & \exp(a_n) \\ \end{array} \right)$

\item Given two matrices $A$ and $B$ then

$\exp (A+B) = \exp (A) \exp (B)$

if $AB = BA$. Note that any scalar multiple of the identity commutes with
all matrices.

\item \bf 2 by 2 Matrices \normalfont

$A = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) = \left( \begin{array}{cc} \frac{a+d}{2} & 0 \\ 0 &\frac{a + d}{2} \\ \end{array} \right) + \left( \begin{array}{cc} \frac{a-d}{2} & b \\ c & \frac{d-a}{2} \\ \end{array} \right) = B + C$
and we have $BC = CB$ so that $\exp(B+C) = \exp B \, \exp C$. Letting $\mu = \frac{a+d}{2}$, we then have
$\exp(tA) = \exp(tB)\exp(tC)$
$\Rightarrow \exp(tA) = \left( \begin{array}{cc} \exp(\mu t) & 0 \\ 0 & \exp(\mu t) \\ \end{array} \right)\exp(tC)$
Now, the discriminant $\Delta$ of $A$ is
$\Delta = \Big(\mathrm{tr}\, A\Big)^2 - 4 \det A$
and $C^2 = \frac{\Delta}{4} I$. This leads to three cases:

i) $\Delta = 0$, then
$\exp(tC) = I + t C$
ii) $\Delta < 0$, then
$\exp(tC) = \cos \Big(\frac{t \sqrt{- \Delta}}{2}\Big) I + \frac{\sin (\frac{t \sqrt{- \Delta}}{2})}{\frac{ \sqrt{- \Delta}}{2}} C$
iii) $\Delta > 0$, then
$\exp(tC) = \cosh \Big(\frac{t \sqrt{\Delta}}{2}\Big) I + \frac{\sinh (\frac{t \sqrt{\Delta}}{2})}{\frac{ \sqrt{\Delta}}{2}} C$

\item \bf $n \times n$ Matrices \normalfont

Every $n$ by $n$ matrix $A$ is similar to its Jordan form $J$, which can
be written as the sum of a diagonal and a nilpotent matrix, $J = D + N$.
We have

$A = P J P^{-1}$
$\Rightarrow \exp(tA) = P \, \exp(tJ) \, P^{-1}$
$\Rightarrow \exp(tA) = P \, \exp(tD) \,\exp(tN) \, P^{-1}$

The Jordan form $J$ has the eigenvalues of $A$ on the diagonal, and some
ones below the diagonal: a one appears for each eigenvalue whose geometric
multiplicity is less than its algebraic multiplicity. The columns of the
matrix $P$ are the (generalised) eigenvectors of $A$. The entries of $P$
can also be found once $J$ is known, using $AP = PJ$.

The exponential of the nilpotent matrix $N$ is computed directly using the
exponential formula.

Note that in the case of a higher order scalar equation, we only need the
first row of $P$, as we are just looking for $x(t)$.

\end{itemize}
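The three-case formula above can be checked numerically; the following Python sketch (an illustrative addition, with example matrices chosen to hit each sign of $\Delta$) compares the closed form against a truncated exponential series.

```python
import math
import numpy as np

def expm_series(M, terms=60):
    """Reference value: truncation of exp(M) = sum_n M^n / n!."""
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

def expm_2x2(A, t):
    """exp(tA) for a 2x2 matrix via the B + C decomposition above."""
    a, b = A[0]
    c, d = A[1]
    mu = (a + d) / 2.0                       # B = mu * I
    C = np.array([[(a - d) / 2.0, b], [c, (d - a) / 2.0]])
    disc = (a + d)**2 - 4.0 * (a * d - b * c)
    I = np.eye(2)
    if abs(disc) < 1e-12:                    # Delta = 0
        E = I + t * C
    elif disc < 0:                           # Delta < 0: oscillatory
        h = math.sqrt(-disc) / 2.0
        E = math.cos(t * h) * I + (math.sin(t * h) / h) * C
    else:                                    # Delta > 0
        h = math.sqrt(disc) / 2.0
        E = math.cosh(t * h) * I + (math.sinh(t * h) / h) * C
    return math.exp(mu * t) * E

for A in (np.array([[0.0, 1.0], [-1.0, 0.0]]),   # Delta = -4
          np.array([[1.0, 1.0], [0.0, 2.0]]),    # Delta = 1
          np.array([[1.0, 1.0], [0.0, 1.0]])):   # Delta = 0
    assert np.allclose(expm_2x2(A, 0.7), expm_series(0.7 * A))
```

For $A = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)$ this reproduces the familiar rotation matrix $\cos t \, I + \sin t \, A$.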

\section{Higher Order Scalar ODEs}

\begin{itemize}

\item Consider a higher order scalar ODE,

$c_n \frac{d^n x}{d t^n} + \dots + c_2 \frac{d^2 x}{d t^2} + c_1 \frac{d x}{dt} + c_0 x = 0$
which we can write as
$p \left(\frac{d}{dt} \right) x = 0$

where $p$ is the polynomial

$p(s) = c_n s^n + \dots + c_2 s^2 + c_1 s + c_0$

which has roots $\lambda_i$.

\item A basis for the solution space is then

$\Big\lbrace \exp(\lambda_1 t), t\, \exp(\lambda_1 t), \dots , t^{r_1 - 1} \exp(\lambda_1 t), \dots , \exp(\lambda_k t), \dots , t^{r_k - 1} \exp(\lambda_k t) \Big\rbrace$

where $\lambda_1, \dots, \lambda_k$ are the distinct roots of $p(s) = 0$ and
$r_i$ is the multiplicity of the $i^{\mathrm{th}}$ root.

\item In the inhomogeneous case, we have $p(\frac{d}{dt}) x = f$, and have
the special case where $f$ itself satisfies some differential equation
$q(\frac{d}{dt}) f = 0$. Hence

$q\bigg(\frac{d}{dt}\bigg) p\bigg(\frac{d}{dt}\bigg) x = 0$

and we can form a basis for the solution space using the roots of $r(s) = q(s) p(s)$. The coefficients of the particular solution to the
inhomogeneous equation can then be found by substituting the trial
solution into $p(\frac{d}{dt}) x = f$.

\end{itemize}
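A quick check of the repeated-root case (the polynomial $p(s) = s^2 + 2s + 1 = (s+1)^2$ and the code are illustrative additions, not from the course): the double root $-1$ gives the basis $\lbrace e^{-t}, t\, e^{-t} \rbrace$, and the second basis element should satisfy $x'' + 2x' + x = 0$.

```python
import math

# x(t) = t e^{-t} with hand-computed derivatives:
#   x'  = (1 - t) e^{-t}
#   x'' = (t - 2) e^{-t}
def x(t):
    return t * math.exp(-t)

def xp(t):
    return (1.0 - t) * math.exp(-t)

def xpp(t):
    return (t - 2.0) * math.exp(-t)

# Residual of x'' + 2x' + x at several sample points; term by term,
# (t - 2) + 2(1 - t) + t = 0, so the residual should vanish.
residuals = [xpp(t) + 2.0 * xp(t) + x(t) for t in (0.0, 0.5, 1.3, 4.0)]
assert all(abs(r) < 1e-12 for r in residuals)
```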

\section{Non-constant Coefficients}

\begin{itemize}

\item \bf Homogeneous Scalar Equations \normalfont

The homogeneous equation

$x'(t) = a(t) x(t)$
has unique solution:
$x(t) = x(0) \, \exp \left( \int_0^t a(s) ds \right)$

\item \bf Inhomogeneous Scalar Equations \normalfont

The inhomogeneous equation
$x'(t) = a(t) x(t) + f(t)$
has unique solution:
$x(t) = x(0) \, \exp \left( \int_0^t a(s) ds \right) + \int_0^t \, \exp \left( \int_s^t a(r)dr \right) f(s) ds$

\item \bf Systems \normalfont

The equation

$\vec{x}'(t) = A(t) \vec{x}(t) + \vec{f} (t)$
has unique solution:
$\vec{x}(t) = W(t) \vec{x}(0) + \int_0^t W(t) W^{-1}(s) \vec{f}(s) ds$
where $W(t)$ satisfies the matrix initial value problem
$W'(t) = A(t) W(t), \, \, \,W(0) = I$

\end{itemize}
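The inhomogeneous scalar formula can be verified by quadrature; the sketch below (with the illustrative choices $a(t) = 1$, $f(t) = 1$, for which the exact solution of $x' = x + 1$ is $(x(0)+1)e^t - 1$) approximates the outer integral with the trapezoid rule.

```python
import math

# Illustrative coefficient and forcing term (my own choices):
def a(t):
    return 1.0

def f(t):
    return 1.0

def solve(x0, t, n=20000):
    """x(t) = x(0) exp(int_0^t a) + int_0^t exp(int_s^t a) f(s) ds.

    For a(t) = 1 the inner integral is simply t - s; the outer integral
    is done with the trapezoid rule on n subintervals.
    """
    h = t / n
    total = 0.0
    for k in range(n + 1):
        s = k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.exp(t - s) * f(s)
    return x0 * math.exp(t) + h * total

x0, t = 2.0, 1.5
exact = (x0 + 1.0) * math.exp(t) - 1.0
assert abs(solve(x0, t) - exact) < 1e-6
```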

\section{Method of Wronski}

\begin{itemize}

\item Consider a second order scalar linear homogeneous ODE:

\begin{equation} \label{a} p(t) x''(t) + q(t) x'(t) + r(t) x(t) = 0 \end{equation}

which has a two-dimensional solution space.

\item We define

$w(t) = x_1 (t) x_2'(t) - x_1'(t) x_2(t)$
giving
$p(t) w'(t) + q(t) w(t) = x_1(t) \Big[ p(t) x_2''(t) + q(t) x_2'(t) + r(t) x_2(t)\Big] - x_2 (t) \Big[p(t) x_1''(t) + q(t) x_1'(t) + r(t) x_1(t)\Big]$
so if $x_1, x_2$ solve ($\ref{a}$) then $w(t)$ solves
\begin{equation} \label{b} p(t) w'(t) + q(t) w(t) = 0 \end{equation}

\item Hence, if we have $x_1$ a solution to ($\ref{a}$) and $w$ a solution
to ($\ref{b}$), we can find $x_2$ such that $x_2$ is a solution to
($\ref{a}$) and is linearly independent of $x_1$.

\item Then, given ($\ref{a}$) and $x_1$:

$w(t) = w(0) \, \exp \left( - \int_0^t \frac{q(s)}{p(s)} ds \right)$
and as $\frac{d}{dt} \left(\frac{x_2(t)}{x_1(t)}\right) = \frac{w(t)}{x_1^2(t)}$,

$\frac{x_2(t)}{x_1(t)} = \frac{x_2(0)}{x_1(0)} + \int_0^t \frac{w(s)}{x_1(s)^2} ds$

\item The general solution is then any linear combination of $x_1$ and
$x_2$:

$x(t) = c_1 x_1(t) + c_2 x_2 (t)$

\end{itemize}
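Worked numerically (the coefficients $p = 1$, $q = 2$, $r = 1$ and the known solution $x_1(t) = e^{-t}$ are illustrative choices, not from the notes): the method recovers the independent solution $x_2(t) = t\, e^{-t}$.

```python
import math

# x'' + 2x' + x = 0 with known solution x1(t) = e^{-t}.
def x1(t):
    return math.exp(-t)

# w(t) = w(0) exp(-int_0^t q/p) = e^{-2t}, taking w(0) = 1.
def w(t):
    return math.exp(-2.0 * t)

def x2(t, n=10000):
    """x2(t) = x1(t) * [x2(0)/x1(0) + int_0^t w(s)/x1(s)^2 ds].

    Taking x2(0) = 0; the integral is done with the trapezoid rule
    (here the integrand is identically 1, so the integral is t).
    """
    h = t / n
    total = sum((0.5 if k in (0, n) else 1.0) * w(k * h) / x1(k * h)**2
                for k in range(n + 1))
    return x1(t) * h * total

t = 1.2
assert abs(x2(t) - t * math.exp(-t)) < 1e-8
```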

\part{Stability}

\section{Non-linear ODEs}

\begin{itemize}

\item \bf Non-linear ODEs \normalfont

A non-linear ODE is of the form

$\vec{x}' (t) = \vec{F} \Big( \vec{x}(t),t\Big)$

\item \bf Autonomous Systems \normalfont

An autonomous system is of the form

$\vec{x}^{\prime} (t) = \vec{F} \Big( \vec{x}(t)\Big)$

\end{itemize}

\section{Equilibria and Stability}

\begin{itemize}

\item \bf Equilibria \normalfont

An equilibrium of an autonomous system $\vec{x}^{\prime}(t) = \vec{F} \Big( \vec{x}(t)\Big)$ is a $\vec{c}$ such that

$\vec{F} (\vec{c}) = \vec{0}$

i.e. the equilibria of a system are the zeros of $\vec{F}$.

\item \bf Stability \normalfont

An equilibrium $\vec{c}$ is said to be stable if $\forall \, \varepsilon > 0$, $\exists \, \delta > 0$ such that if

$||\, \vec{x}(0) - \vec{c}\,|| \leq \delta$
then
$|| \, \vec{x} (t) - \vec{c}\, || \leq \varepsilon$
for all positive $t$.

\item \bf Asymptotic Stability \normalfont

An equilibrium $\vec{c}$ is said to be asymptotically stable if $\exists \, \delta > 0$ such that

$|| \, \vec{x}(0) - \vec{c} \, || \leq \delta \Rightarrow \lim_{t \rightarrow \infty} \vec{x}(t) = \vec{c}$

\item \bf Strict Stability \normalfont

An equilibrium $\vec{c}$ is said to be strictly stable if it is both
stable and asymptotically stable.

\item \bf Stability and Invariants \normalfont

If $\vec{c}$ is an equilibrium of an autonomous system and $E$ is a
continuously differentiable invariant of the system which has a strict
local minimum at $\vec{c}$, then $\vec{c}$ is stable but not
asymptotically stable.

\item \bf Stability of Linear Constant Coefficient First Order Systems
\normalfont

These are systems

$\vec{x}' (t) = A \vec{x}(t)$

with solution

$\vec{x}(t) = \exp (tA) \vec{x}(0) = P \exp (tJ) P^{-1} \vec{x}(0)$

$\vec{0}$ is always an equilibrium, and each equilibrium is
stable/asymptotically stable if and only if $\vec{0}$ is
stable/asymptotically stable.

We can determine the stability of the system by considering the real parts
of the eigenvalues of $A$:

\vspace{1em}

\renewcommand\arraystretch{1.25}
\begin{tabular}{|l|l|l|}
\hline

\bf Real Parts &  \bf Stable &  \bf Asymptotically Stable \\ \hline

all $<0$ & Yes & Yes \\ \hline

all $\leq 0$,& Yes & No \\

\scriptsize geometric multiplicity = algebraic multiplicity for all
imaginary eigenvalues  & & \\ \hline

all $\leq 0$,  & No & No \\

\scriptsize geometric multiplicity $<$ algebraic multiplicity for some
imaginary eigenvalue & & \\ \hline

some $>0$ & No & No \\ \hline

\end{tabular}

\vspace{1em}

In the 2 by 2 case: if $\mathrm{tr}\, A < 0$ and $\det A > 0$, then $\vec{0}$
is strictly stable. If $\mathrm{tr}\, A \leq 0$ and $\det A \geq 0$, then
$\vec{0}$ is stable, except in the degenerate case $\mathrm{tr}\, A = \det A = 0$
with $A \neq 0$, where the repeated zero eigenvalue has geometric
multiplicity one. Otherwise $\vec{0}$ is neither stable nor asymptotically
stable.

In the scalar higher order case, where $p(\frac{d}{dt})x = 0$ with $p(s)$ a
polynomial: if all roots of $p(s) = 0$ have negative real parts, then we
have strict stability. If all roots have non-positive real parts and all
purely imaginary roots have multiplicity one, then we have stability
(though not strict stability if a purely imaginary root is present).
Otherwise we have neither stability nor asymptotic stability.
\end{itemize}
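The eigenvalue criteria above are easy to apply in code; the following sketch (the example matrices are my own, chosen diagonalizable so the multiplicity caveat in the table does not arise) classifies three $2 \times 2$ systems.

```python
import numpy as np

# Classify stability of x' = A x from the real parts of the
# eigenvalues of A.
def real_parts(A):
    return np.linalg.eigvals(A).real

A_strict = np.array([[-1.0, 0.0], [0.0, -2.0]])   # all real parts < 0
A_center = np.array([[0.0, 1.0], [-1.0, 0.0]])    # purely imaginary pair
A_unstable = np.array([[1.0, 0.0], [0.0, -1.0]])  # one positive real part

assert max(real_parts(A_strict)) < 0              # strictly stable
assert max(abs(real_parts(A_center))) < 1e-9      # stable, not asymptotically
assert max(real_parts(A_unstable)) > 0            # not stable
```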

\section{Linearisation}

\begin{itemize}

\item The linearisation of an autonomous system $\vec{x}'(t) = \vec{F} \Big(\vec{x}(t)\Big)$ about an equilibrium $\vec{c}$ is the matrix $A$
defined by

$a_{jk} = \frac{\partial F_j}{\partial x_k} (\vec{c})$

\item If all eigenvalues of $A$ have negative real parts, then $\vec{c}$
is strictly stable.

\item If some eigenvalue of $A$ has positive real part, then $\vec{c}$ is
neither stable nor asymptotically stable.

\item Otherwise, we learn nothing.

\end{itemize}
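A linearisation can also be computed by central finite differences; the sketch below (the damped pendulum $x_1' = x_2$, $x_2' = -\sin x_1 - x_2$ and the step size are illustrative assumptions) recovers the Jacobian at the equilibrium $\vec{c} = (0,0)$ and confirms strict stability.

```python
import math
import numpy as np

# Damped pendulum (illustrative system, not from the notes):
#   x1' = x2,  x2' = -sin(x1) - x2
def F(x):
    return np.array([x[1], -math.sin(x[0]) - x[1]])

def linearise(F, c, eps=1e-6):
    """a_jk = dF_j/dx_k at c, by central finite differences."""
    n = len(c)
    A = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = eps
        A[:, k] = (F(c + e) - F(c - e)) / (2.0 * eps)
    return A

A = linearise(F, np.array([0.0, 0.0]))
# Exact Jacobian at the origin is [[0, 1], [-1, -1]]; its eigenvalues
# solve s^2 + s + 1 = 0 and have real part -1/2, so the equilibrium
# is strictly stable.
assert np.allclose(A, [[0.0, 1.0], [-1.0, -1.0]], atol=1e-5)
assert max(np.linalg.eigvals(A).real) < 0
```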

\section{Method of Lyapunov}

\begin{itemize}

\item \bf Lyapunov Function \normalfont

A Lyapunov function for the equilibrium $\vec{c}$ of an autonomous system
is a continuously differentiable function $V$ with a strict local minimum
at $\vec{c}$ such that

$\sum_j \frac{\partial V}{\partial x_j} F_j \leq 0$

\item \bf Strict Lyapunov Function \normalfont

A strict Lyapunov function is a Lyapunov function satisfying

$\sum_j \frac{\partial V}{\partial x_j} F_j \leq - r \Big[ V(\vec{x}) - V(\vec{c}) \Big]$

for some positive $r$.

\item An equilibrium $\vec{c}$ is stable if it admits a Lyapunov function,
and strictly stable if it admits a strict Lyapunov function.

\end{itemize}
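For instance (an illustrative example, not from the course): $V = x_1^2 + x_2^2$ is a Lyapunov function for the origin of the system $x_1' = x_2$, $x_2' = -x_1 - x_2$, since $\sum_j \frac{\partial V}{\partial x_j} F_j = 2 x_1 x_2 + 2 x_2(-x_1 - x_2) = -2 x_2^2 \leq 0$; the code samples this inequality numerically.

```python
import random

# Illustrative system: x1' = x2, x2' = -x1 - x2.
def F(x1, x2):
    return (x2, -x1 - x2)

# Derivative of V = x1^2 + x2^2 along solutions:
#   dV/dt = (dV/dx1) F_1 + (dV/dx2) F_2 = -2 x2^2 <= 0
def dV_dt(x1, x2):
    f1, f2 = F(x1, x2)
    return 2.0 * x1 * f1 + 2.0 * x2 * f2

random.seed(1)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]
assert all(dV_dt(x1, x2) <= 1e-12 for x1, x2 in pts)
```

Since $V$ has a strict local minimum at the origin and its derivative along solutions is non-positive, the origin is stable.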

\end{document}