---
author: Alvie Rahman
date: \today
title: MMME1026 // Mathematics for Engineering
tags:
- uni
- nottingham
- mechanical
- engineering
- mmme1026
- maths
- complex_numbers
---
Complex Numbers

What is a Complex Number?

  • i is the unit imaginary number, which is defined by:

    i^2 = -1
  • An arbitrary complex number is written in the form

    z = x + iy

    Where:

    • x is the real part of z (which you may see written as \Re(z) = x or \mathrm{Re}(z) = x)
    • y is the imaginary part of z (which you may see written as \Im(z) = y or \mathrm{Im}(z) = y)
  • Two complex numbers are equal if both their real and imaginary parts are equal

    e.g. (3 + 4i) + (1 - 2i) = (3+1) + i(4-2) = 4 + 2i

The Complex Conjugate

Given complex number z:

z = x + iy

The complex conjugate of z, \bar z is:

\bar{z} = x - iy

Division of Complex Numbers

  • Multiply numerator and denominator by the conjugate of the denominator

Example

\begin{align*}
z_1 &= 5 + i \\
z_2 &= 1 - i \\ \\
\frac{z_1}{z_2} &= \frac{5+i}{1-i} \cdot \frac{\bar{z_2}}{\bar{z_2}} = \frac{(5+i)(1+i)}{(1-i)(1+i)} \\
&= \frac{5 + i + 5i - 1}{1 + 1} \\
&= \frac{4 + 6i}{2} = 2 + 3i
\end{align*}
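You can sanity check this kind of arithmetic with Python's built-in complex type (a quick sketch, not part of the course material):

```python
# Python uses j for the imaginary unit, so 5 + i is written 5 + 1j.
z1 = 5 + 1j
z2 = 1 - 1j

# Multiply top and bottom by the conjugate of the denominator:
denominator = (z2 * z2.conjugate()).real   # |z2|^2 = 2.0
manual = (z1 * z2.conjugate()) / denominator

print(manual)    # (2+3j)
print(z1 / z2)   # (2+3j), the built-in division agrees
```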

Algebra and Conjugation

When taking the complex conjugate of an algebraic expression, we can replace i by -i before or after doing the algebraic operations:

\begin{align*}
\overline{z_1+z_2} &= \bar{z_1}+\bar{z_2} \\
\overline{z_1z_2} &= \bar{z_1}\bar{z_2} \\
\overline{z_1/z_2} &= \frac{\bar{z_1}}{\bar{z_2}}
\end{align*}

The conjugate of a real number is the same as that number.

Application

If z is a root of the polynomial equation

0 = az^2 + bz + c

with real coefficients a, b, and c, then \bar{z} is also a root because

\begin{align*}
0 &= \overline{az^2 + bz + c} \\
&= \bar{a}\bar{z}^2 + \bar{b}\bar{z} + \bar{c} \\
&= a\bar{z}^2 + b\bar{z} + c
\end{align*}

The Argand Diagram

A general complex number z = x + iy has two components so it can be represented as a point in the plane with Cartesian coordinates (x, y).

\begin{align*}
4-2i &\leftrightarrow (4, -2) \\
-i &\leftrightarrow (0, -1) \\
z &\leftrightarrow (x, y) \\
\bar z &\leftrightarrow (x, -y)
\end{align*}

Plotting on a Polar Graph

We can also describe points in the complex plane with polar coordinates (r, \theta):

\begin{align*}
z &= r(\cos\theta + i\sin\theta) &\text{(polar form of $z$)} \\
r &= \sqrt{x^2+y^2} &\text{(modulus)} \\
\theta &= \arg z, \text{ where } \tan \theta = \frac y x &\text{(argument)} \\
x &= r\cos\theta \\
y &= r\sin\theta
\end{align*}

Be careful when turning (x, y) into (r, \theta) form as \tan^{-1} \frac y x = \theta does not always hold true as there are many solutions.

Choosing \theta Correctly

  1. Determine which quadrant the point is in (draw a picture).
  2. Find a value of \theta such that \tan \theta = \frac y x and check that it is consistent. If it puts you in the wrong quadrant, add or subtract \pi.
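The quadrant check matters in code too: math.atan only returns values in (-\pi/2, \pi/2), whereas math.atan2 and cmath.phase pick the correct quadrant automatically. A sketch using Python's standard library:

```python
import cmath
import math

# z = -1 + i lies in the second quadrant.
x, y = -1.0, 1.0

naive = math.atan(y / x)        # -pi/4: wrong quadrant
corrected = naive + math.pi     # 3*pi/4: fixed by adding pi (step 2 above)

# atan2 and phase get it right without manual correction:
assert math.isclose(corrected, math.atan2(y, x))
assert math.isclose(corrected, cmath.phase(complex(x, y)))
```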

Exponential Functions

  • The exponential function f(x) = \exp x may be written as an infinite series:

    \exp x = e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
  • The function f(x) = e^{-x} is just \frac 1 {e^x}

  • Note the important properties:

\begin{align*}
e^{a+b} &= e^a e^b \\
(e^a)^b &= e^{ab}
\end{align*}

Euler's Formula

e^{i\theta} = \cos\theta + i\sin\theta
  • Properties of e^{i\theta}: For any real angle \theta we have

    |e^{i\theta}| = |\cos\theta + i\sin\theta| = \sqrt{\cos^2\theta + \sin^2\theta} = 1

    and

    \arg {e^{i\theta}} = \theta
  • A complex number in polar form, where r = |z|, and \theta = \arg z, may alternatively be written in its exponential form:

    z = re^{i\theta}

    Note: \bar z = r\cos\theta - ir\sin\theta = re^{-i\theta}

Example 1

Write z = -1 + i in exponential form

\arg z = \frac {3\pi} 4, \qquad |z| = \sqrt 2

So z = \sqrt2e^{i\frac{3\pi} 4}
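Python's cmath.polar does the (x, y) \rightarrow (r, \theta) conversion in one call, so Example 1 can be checked numerically (a sketch, not part of the notes):

```python
import cmath
import math

z = -1 + 1j

r, theta = cmath.polar(z)   # modulus and argument together
assert math.isclose(r, math.sqrt(2))
assert math.isclose(theta, 3 * math.pi / 4)

# Rebuilding z from its exponential form r e^{i theta} recovers the original:
assert cmath.isclose(r * cmath.exp(1j * theta), z)
```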

Example 2

The equations for a mechanical vibration problem are found to have the following mathematical solution:

z(t) = \frac{e^{i\omega t}}{\omega_0^2-\omega^2 + i\gamma}

where t represents time and \omega, \omega_0 and \gamma are all positive real physical constants. Although z(t) is complex and cannot directly represent a physical solution, it turns out that the real and imaginary parts x(t) and y(t) in z(t) = x(t) + iy(t) can. Polar notation can be used to extract this physical information efficiently as follows:

a. Put the denominator in the form

ae^{i\delta}

where you should give explicit expressions for a and \delta in terms of \omega, \omega_0, and \gamma.

\begin{align*}
a &= \sqrt{\gamma^2 + (\omega_0^2 - \omega^2)^2} \\
\delta &= \tan^{-1}\frac{\gamma}{\omega_0^2 - \omega^2}
\end{align*}

b. Hence find the constants b and \varphi such that

x(t) = b\cos(\omega t + \varphi)

and write a similar expression for y(t).

\begin{align*}
z &= \frac{e^{i\omega t}}{ae^{i\delta}} = \frac 1 a e^{i(\omega t - \delta)} \\
x + iy &= \frac 1 a \cos(\omega t - \delta) + \frac i a \sin(\omega t - \delta) \\
\therefore \Re z &= x = \frac 1 a \cos(\omega t - \delta), \\
\Im z &= y = \frac 1 a \sin(\omega t - \delta) \\ \\
b &= \frac 1 a = \frac 1 {\sqrt{\gamma^2 + (\omega_0^2 - \omega^2)^2}} \\
\varphi &= -\delta = -\tan^{-1}\frac{\gamma}{\omega_0^2 - \omega^2} \\ \\
y(t) &= \frac 1 a \sin(\omega t - \delta)
\end{align*}

Products of Complex Numbers

Suppose we have 2 complex numbers:

\begin{align*}
z_1 &= x_1 + iy_1 = r_1e^{i\theta_1} \\
z_2 &= x_2 + iy_2 = r_2e^{i\theta_2}
\end{align*}

Using e^a e^b = e^{a+b}, the product is:

\begin{align*}
z_3 = z_1 z_2 &= (r_1e^{i\theta_1})(r_2e^{i\theta_2}) \\
&= r_1r_2e^{i\theta_1}e^{i\theta_2} \\
&= r_1r_2e^{i(\theta_1+\theta_2)} \\ \\
|z_1z_2| &= |z_1|\times|z_2| \\
\arg z_1z_2 &= \arg z_1 + \arg z_2
\end{align*}
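A quick numerical check that moduli multiply and arguments add (the sample values z_1 = 1 + i, z_2 = 2 - i are my own; note the argument sum can wrap outside (-\pi, \pi], which these values avoid):

```python
import cmath
import math

z1 = 1 + 1j   # arbitrary sample values
z2 = 2 - 1j

# Moduli multiply:
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))

# Arguments add (no wrap-around for these values):
assert math.isclose(cmath.phase(z1 * z2), cmath.phase(z1) + cmath.phase(z2))
```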

de Moivre's Theorem

Let z = re^{i\theta}. Consider z^n.

Since z = r(\cos\theta + i\sin\theta),

\begin{align*}
z^n &= r^n(\cos\theta + i\sin\theta)^n &\text{(1)}
\end{align*}

But also

\begin{align*}
z^n &= (re^{i\theta})^n \\
&= r^n(e^{i\theta})^n \\
&= r^ne^{in\theta} \\
&= r^n(\cos{n\theta} + i\sin{n\theta}) &\text{(2)}
\end{align*}

By equating (1) and (2), we find de Moivre's theorem:

\begin{align*}
r^n(\cos\theta +i\sin\theta)^n &= r^n(\cos{n\theta} + i\sin{n\theta}) \\
(\cos\theta +i\sin\theta)^n &= \cos{n\theta} + i\sin{n\theta}
\end{align*}

Example 1

Write 1+i in polar form and use de Moivre's theorem to calculate (1+i)^{15}.

\begin{align*}
r &= |1+i| = \sqrt2 \\
\theta &= \arg(1+i) = \frac \pi 4 \\ \\
\text{So } 1 + i &= \sqrt2\left(\cos{\frac \pi 4} + i\sin{\frac \pi 4}\right) = \sqrt2e^{i\frac \pi 4} \text{ and} \\
(1+i)^{15} &= (\sqrt2)^{15}\left(\cos{\frac \pi 4} + i\sin{\frac \pi 4}\right)^{15} \\
&= 2^{\frac{15}2} \left(\cos{\frac{15\pi} 4} + i\sin{\frac{15\pi} 4}\right) \\
&= 2^{\frac{15}2} \left(\frac 1 {\sqrt2} - \frac i {\sqrt2}\right) \\
&= 2^7 (1 - i) \\
&= 128 - 128i
\end{align*}
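The same answer drops out numerically; a sketch comparing de Moivre's theorem against direct exponentiation:

```python
import cmath
import math

z = 1 + 1j
r, theta = cmath.polar(z)   # sqrt(2), pi/4

n = 15
# de Moivre: z^n = r^n (cos(n theta) + i sin(n theta))
de_moivre = r**n * complex(math.cos(n * theta), math.sin(n * theta))

assert cmath.isclose(de_moivre, z**n)
assert cmath.isclose(z**n, 128 - 128j)
```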

Example 2

Use de Moivre's theorem to show that

\begin{align*}
\cos{2\theta} &= \cos^2\theta-\sin^2\theta \\
\text{and } \sin{2\theta} &= 2\sin\theta\cos\theta
\end{align*}

Let n=2:

\begin{align*}
(\cos\theta+i\sin\theta)^2 &= \cos^2\theta + 2i\sin\theta\cos\theta - \sin^2\theta \\
\text{Real part: } \cos^2\theta - \sin^2\theta &= \cos{2\theta} \\
\text{Imaginary part: } 2\sin\theta\cos\theta &= \sin{2\theta}
\end{align*}

Example 3

Given that n \in \mathbb{N} and \omega = -1 + i, show that \omega^n + \bar{\omega}^n = 2^{\frac n 2 + 1}\cos{\frac{3n\pi} 4} using Euler's formula.

\begin{align*}
r &= \sqrt{2} \\
\arg \omega = \theta &= \frac 3 4 \pi \\ \\
\omega^n &= r^n(\cos{n\theta} + i\sin{n\theta}) \\
\bar\omega^n &= r^n(\cos{n\theta} - i\sin{n\theta}) \\
\omega^n + \bar\omega^n &= r^n(2\cos{n\theta}) \\
&= 2^{\frac n 2 + 1}\cos{\frac {3n\pi} 4}
\end{align*}

Complex Roots of Polynomials

Example

Find which complex numbers z satisfy

z^3 = 8i
  1. Write 8i in exponential form,

    |8i| = 8 and \arg{8i} = \frac \pi 2

    \therefore 8i = 8e^{i\frac \pi 2}

  2. Let the solution be z = re^{i\theta}.

    Then z^3 = r^3e^{3i\theta}.

  3. z^3 = r^3e^{3i\theta} = 8e^{i\frac \pi 2}

    i. Compare modulus:

    r^3 = 8 \rightarrow r = 2

    ii. Compare argument:

    $$3\theta = \frac \pi 2$$
    
    is a solution but there are others since 
    
    $$e^{i\frac \pi 2} = e^{i \frac \pi 2 + 2n\pi}$$
    
    so we get a solution whenever
    
    $$3\theta = \frac \pi 2 + 2n\pi$$
    
    for any integer $n$
    
    - $n = 0 \rightarrow z = \sqrt3 + i$
    - $n = 1 \rightarrow z = -\sqrt3 + i$
    - $n = 2 \rightarrow z = -2i$
    - $n = 3 \rightarrow z = \sqrt3 + i$
    - $n = 4 \rightarrow z = -\sqrt3 + i$
    - The solutions start repeating as you can see
    
    In general, an $n$-th order polynomial has exactly $n$ complex roots.
    Some of these complex roots may be real numbers.
    
  4. There are three solutions
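The three-step method above translates directly into a few lines of Python (a sketch, using cmath from the standard library):

```python
import cmath
import math

w = 8j                        # right-hand side, 8 e^{i pi/2}
r = abs(w) ** (1 / 3)         # compare moduli: r = 8^(1/3) = 2

# Compare arguments: 3 theta = pi/2 + 2 n pi, so one root per n = 0, 1, 2.
roots = [r * cmath.exp(1j * (cmath.phase(w) + 2 * n * math.pi) / 3)
         for n in range(3)]

for z in roots:
    assert cmath.isclose(z**3, w)   # every candidate really solves z^3 = 8i
```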

Systems of Equations (Simultaneous Equations)

Gaussian Elimination

Gaussian elimination can be used when the number of unknown variables you have is equal to the number of equations you are given.

I'm pretty sure it's the name for the method you use to solve simultaneous equations in school.

For example if you have 1 equation and 1 unknown:

\begin{align*}
2x &= 6 \\
x &= 3
\end{align*}

Number of Solutions

Let's generalise the example above to

ax = b

There are 3 possible cases:

\begin{align*}
a \ne 0 &\rightarrow x = \frac b a \\
a = 0, b \ne 0 &\rightarrow \text{no solution for $x$} \\
a = 0, b = 0 &\rightarrow \text{infinite solutions for $x$}
\end{align*}

2x2 Systems

A 2x2 system is one with 2 equations and 2 unknown variables.

Example 1

\begin{align*}
3x_1 + 4x_2 &= 2 &\text{(1)} \\
x_1 + 2x_2 &= 0 &\text{(2)}
\end{align*}

\begin{align*}
3\times\text{(2)} = 3x_1 + 6x_2 &= 0 &\text{(3)} \\
\text{(3)} - \text{(1)} = 0x_1 + 2x_2 &= -2 \\
x_2 &= -1
\end{align*}

We've essentially created a 1x1 system for x_2 and now that's solved we can back substitute it into equation (1) (or equation (2), it doesn't matter) to work out the value of x_1:

\begin{align*}
3x_1 + 4x_2 &= 2 \\
3x_1 - 4 &= 2 \\
3x_1 &= 6 \\
x_1 &= 2
\end{align*}

You can check the values for x_1 and x_2 are correct by substituting them into equation (2).

3x3 Systems

A 3x3 system is one with 3 equations and 3 unknown variables.

Example 1

\begin{align*}
2x_1 + 3x_2 - x_3 &= 5 &\text{(1)} \\
4x_1 + 4x_2 - 3x_3 &= 3 &\text{(2)} \\
2x_1 - 3x_2 + x_3 &= -1 &\text{(3)}
\end{align*}

The first step is to eliminate x_1 from (2) and (3) using (1):

\begin{align*}
\text{(2)}-2\times\text{(1)} = -2x_2 - x_3 &= -7 &\text{(4)} \\
\text{(3)}-\text{(1)} = -6x_2 + 2x_3 &= -6 &\text{(5)}
\end{align*}

This has created a 2x2 system in x_2 and x_3. I'm too lazy to type up the working, but it is solved like any other 2x2 system:

\begin{align*}
x_2 &= 2 \\
x_3 &= 3
\end{align*}

These values can be back-substituted into any of the first 3 equations to find out x_1:

\begin{align*}
2x_1 + 3x_2 - x_3 = 2x_1 + 6 - 3 = 5 \rightarrow x_1 = 1
\end{align*}
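The eliminate-then-back-substitute recipe can be sketched as code. This is a minimal version with no pivoting (a real solver would swap rows to avoid dividing by zero); the sample system is 2x_1 + 3x_2 - x_3 = 5, 4x_1 + 4x_2 - 3x_3 = 3, 2x_1 - 3x_2 + x_3 = -1:

```python
# Gaussian elimination with back substitution, for plain nested lists.
# Minimal sketch: no row pivoting, so it fails if a diagonal entry hits zero.

def solve(A, b):
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]

    # Forward elimination: zero out entries below the diagonal.
    for k in range(n):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]

    # Back substitution, from the last row upwards.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        known = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - known) / M[i][i]
    return x

A = [[2, 3, -1], [4, 4, -3], [2, -3, 1]]
b = [5, 3, -1]
print(solve(A, b))   # [1.0, 2.0, 3.0]
```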

Example 2

\begin{align*}
x_1 + x_2 + 2x_3 &= 1 &R_1 \\
2x_1 - x_2 - x_3 &= 1 &R_2 \\
x_1 + 4x_2 + 7x_3 &= 2 &R_3
\end{align*}

  1. Eliminate x_1 from R_2, R_3:

    \begin{align*}
    x_1 + x_2 + 2x_3 &= 1 &R_1' = R_1 \\
    -3x_2 - 5x_3 &= -1 &R_2' = R_2 - 2R_1 \\
    3x_2 + 5x_3 &= 1 &R_3' = R_3 - R_1
    \end{align*}

    We've created another 2x2 system of R_2' and R_3'

  2. Eliminate x_2 from R_3' to get R_3''

    \begin{align*}
    x_1 + x_2 + 2x_3 &= 1 &R_1'' = R_1' = R_1 \\
    -3x_2 - 5x_3 &= -1 &R_2'' = R_2' = R_2 - 2R_1 \\
    0x_3 &= 0 &R_3'' = R_3' + R_2'
    \end{align*}

    We can see that x_3 can be any number, so there are infinite solutions. Let:

    x_3 = t

    where t can be any number

  3. Substitute x_3 into R_2'':

    R_2'' = -3x_2 - 5t = -1 \rightarrow x_2 = \frac 1 3 - \frac{5t} 3
  4. Substitute x_2 and x_3 into R_1'':

    R_1'' = x_1 + \frac 1 3 - \frac{5t} 3 + 2t = 1 \rightarrow x_1 = \frac 2 3 - \frac t 3

Systems of Equations and Matrices

Many problems in engineering have a very large number of unknowns and equations to solve simultaneously. We can use matrices to solve these efficiently.

Take the following simultaneous equations:

\begin{align*}
3x_1 + 4x_2 &= 2 &\text{(1)} \\
x_1 + 2x_2 &= 0 &\text{(2)}
\end{align*}

They can be represented by the following matrices:

\begin{align*}
A &= \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix} \\
\pmb x &= \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \\
\pmb b &= \begin{pmatrix} 2 \\ 0 \end{pmatrix}
\end{align*}

You can then express the system as:

A\pmb x = \pmb b

A 3x3 System as a Matrix

\begin{align*}
2x_1 + 3x_2 - x_3 &= 5 \\
4x_1 + 4x_2 - 3x_3 &= 3 \\
2x_1 - 3x_2 + x_3 &= -1
\end{align*}

Could be expressed in the form A\pmb x = \pmb b where:

\begin{align*}
A &= \begin{pmatrix} 2 & 3 & -1 \\ 4 & 4 & -3 \\ 2 & -3 & 1 \end{pmatrix} \\
\pmb x &= \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \\
\pmb b &= \begin{pmatrix} 5 \\ 3 \\ -1 \end{pmatrix}
\end{align*}

An m\times n System as a Matrix

\begin{align*}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{align*}

Could be expressed in the form A\pmb x = \pmb b where:

\begin{align*}
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},
\pmb x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},
\pmb b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}
\end{align*}

Matrices

Order of a Matrix

The order of a matrix is its size e.g. 3\times2 or m\times n

Column Vectors

  • Column vectors are matrices with only one column:

    \begin{pmatrix} 1 \\ 2 \end{pmatrix} \begin{pmatrix} 4 \\ 45 \\ 12 \end{pmatrix}
  • Column vector variables typed up or printed are expressed in \pmb{bold} and when it is handwritten it is \underline{underlined}:

    \pmb x = \begin{pmatrix} -3 \\ 2 \end{pmatrix}

Matrix Algebra

Equality

Two matrices are the same if:

  • Their order is the same
  • Their corresponding elements are the same

Addition and Subtraction

Only possible if their order is the same.

\begin{align*}
A + B &= \begin{pmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\ \vdots & & & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{pmatrix} \\
A - B &= \begin{pmatrix} a_{11} - b_{11} & a_{12} - b_{12} & \cdots & a_{1n} - b_{1n} \\ a_{21} - b_{21} & a_{22} - b_{22} & \cdots & a_{2n} - b_{2n} \\ \vdots & & & \vdots \\ a_{m1} - b_{m1} & a_{m2} - b_{m2} & \cdots & a_{mn} - b_{mn} \end{pmatrix}
\end{align*}

Zero Matrix

This is a matrix whose elements are all zeros. For any matrix A,

A + 0 = A

We can only add matrices of the same order, therefore 0 must be of the same order as A.

Multiplication

Let


\begin{matrix}
A & m\times n \\
B & p\times q
\end{matrix}

To be able to multiply A by B, n = p.

If n \ne p, then AB does not exist.


\begin{matrix}
A & B & = & C \\
m\times n & p \times q & & m\times q
\end{matrix}

When C = AB exists,

C_{ij} = \sum_r\! a_{ir}b_{rj}

That is, C_{ij} is the 'product' of the $i$th row of A and $j$th column of B.
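The summation formula is short enough to transcribe directly; a sketch for plain nested lists (the sample matrices match Example 1 below):

```python
# Direct transcription of C_ij = sum_r a_ir * b_rj for nested lists.

def matmul(A, B):
    m, n = len(A), len(A[0])   # A is m x n
    p, q = len(B), len(B[0])   # B is p x q
    if n != p:                 # orders incompatible: AB does not exist
        raise ValueError("cannot multiply: inner dimensions differ")
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(q)]
            for i in range(m)]

A = [[1, -1], [2, 1]]
B = [[0, 1], [3, 2]]
print(matmul(A, B))   # [[-3, -1], [3, 4]]
print(matmul(B, A))   # [[2, 1], [7, -1]] -- note AB != BA
```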

Multiplication of a Matrix by a Scalar

If \lambda is a scalar, we define


\lambda A = \begin{pmatrix} \lambda a_{11} & \lambda a_{12} & \cdots & \lambda a_{1n} \\
                \lambda a_{21} & \lambda a_{22} & \cdots & \lambda a_{2n} \\
                \vdots &        &     & \vdots \\
                \lambda a_{m1} & \lambda a_{m2} & \cdots & \lambda a_{mn}
                \end{pmatrix}

Example 1


\begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 3 & 2 \end{pmatrix} = 
\begin{pmatrix} -3 & -1 \\ 3 & 4 \end{pmatrix}

\begin{pmatrix} 0 & 1 \\ 3 & 2 \end{pmatrix}
\begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix} =
\begin{pmatrix} 2 & 1 \\ 7 & -1 \end{pmatrix}

Example 2


A = \begin{pmatrix} 4 & 1 & 6 \\ 3 & 2 & 1 \end{pmatrix},\,
B = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 0 \end{pmatrix}

AB = \begin{pmatrix} 11 & 6 \\ 6 & 7 \end{pmatrix},\,
BA = \begin{pmatrix} 7 & 3 & 7 \\ 10 & 5 & 8 \\ 4 & 1 & 6 \end{pmatrix}

Other Properties of Matrix Algebra

  • (\lambda A)B = \lambda(AB) = A(\lambda B)

  • A(BC) = (AB)C = ABC

  • (A+B)C = AC + BC

  • C(A+B) = CA + CB

  • In general, AB \ne BA even if both exist

  • AB = 0 does not always mean A = 0 or B = 0:

    $$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix}3 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = 0$$

    It follows that AB = AC does not imply that B = C, as

    AB = AC \leftrightarrow A(B - C) = 0

    and as A and (B-C) are not necessarily 0, B is not necessarily equal to C:

    $$AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix}0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$

    and

    $$AC = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix}1 & 2 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = AB$$

    but B \ne C
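The cancellation failure is easy to demonstrate; here matmul is a throwaway helper (not from the notes) and A, B, C are the matrices above:

```python
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
C = [[1, 2], [1, 0]]

def matmul(X, Y):
    # C_ij = sum_r x_ir * y_rj
    return [[sum(X[i][r] * Y[r][j] for r in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# AB = AC even though B != C, so you cannot "cancel" the A:
assert matmul(A, B) == matmul(A, C) == [[1, 0], [0, 0]]
assert B != C
```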

Special Matrices

Square Matrix

Where m = n

Example 1

A 3\times3 matrix.

\begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{pmatrix}

Example 2

A 2\times2 matrix.

\begin{pmatrix}1 & 2 \\ 4 & 5 \end{pmatrix}

Identity Matrix

The identity matrix is a square matrix whose elements are all 0, except those on the leading diagonal, which are all 1. The leading diagonal runs from the top-left corner to the bottom-right corner.

It is usually denoted by I or I_n.

The identity matrix has the properties that

AI = IA = A

for any square matrix A of the same order as I, and

Ix = x

for any vector x.

Example 1

The 3\times3 identity matrix.

\begin{pmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}

Example 2

The 2\times2 identity matrix.

\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix}

Transposed Matrix

The transpose of matrix A of order m\times n is matrix A^T which has the order n\times m. It is found by reflecting it along the leading diagonal, or interchanging the rows and columns of A.

(Figure: animation of matrix transposition, by Lucas Vieira.)

Let matrix D = EF, then D^T = (EF)^T = F^TE^T (note the reversed order)

Example 1


A = \begin{pmatrix}3 & 2 & 1 \\ 4 & 5 & 6 \end{pmatrix},\,
A^T = \begin{pmatrix}3 & 4 \\ 2 & 5 \\ 1 & 6\end{pmatrix}

Example 2


B = \begin{pmatrix}1 \\ 4\end{pmatrix},\,
B^T = \begin{pmatrix}1 & 4\end{pmatrix}

Example 3


C = \begin{pmatrix}1 & 2 & 3 \\ 0 & 5 & 1 \\ 2 & 3 & 7\end{pmatrix},\,
C^T = \begin{pmatrix}1 & 0 & 2 \\ 2 & 5 & 3 \\ 3 & 1 & 7\end{pmatrix}
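In Python, zip(*rows) interchanges rows and columns, which is exactly the transpose (a sketch; checked against Example 1 above):

```python
def transpose(M):
    # zip(*M) iterates over the columns of M, i.e. the rows of M^T.
    return [list(col) for col in zip(*M)]

A = [[3, 2, 1], [4, 5, 6]]
assert transpose(A) == [[3, 4], [2, 5], [1, 6]]   # Example 1 above
assert transpose(transpose(A)) == A               # (A^T)^T = A
```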

Orthogonal Matrices

A matrix, A, such that

A^{-1} = A^T

is said to be orthogonal.

Another way to say this is

AA^T = A^TA = I

Symmetric Matrices

A square matrix which is symmetric about its leading diagonal:

A = A^T

You can also express this as the matrix A, where

a_{ij} = a_{ji}

is satisfied for all elements.

Example 1

$$\begin{pmatrix} 1 & 0 & -2 & 3 \\ 0 & 3 & 4 & -7 \\ -2 & 4 & -1 & 6 \\ 3 & -7 & 6 & 2 \end{pmatrix}$$

Anti-Symmetric

A square matrix is anti-symmetric if

A = -A^T

This can also be expressed as

a_{ij} = -a_{ji}

This means that all elements on the leading diagonal must be 0.

Example 1

$$\begin{pmatrix} 0 & -1 & 5 \\ 1 & 0 & 1 \\ -5 & -1 & 0 \end{pmatrix}$$

The Determinant

Determinant of a 2x2 System

The determinant of a 2x2 system is

D = a_{11}a_{22} - a_{12}a_{21}

It is denoted by


\begin{vmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{vmatrix}
\text{ or }
\det
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
  • A system of equations has a unique solution if D \ne 0

  • If D = 0, then there are either

    • no solutions (the equations are inconsistent)
    • infinitely many solutions

Determinant of a 3x3 System

Let


A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}

\begin{align*}
\det A = &a_{11} \times \det \begin{pmatrix}a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} \\
&-a_{12} \times \det \begin{pmatrix}a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} \\
&+a_{13} \times \det \begin{pmatrix}a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}
\end{align*}

The 2x2 matrices above are created by deleting the row and column containing the corresponding coefficient.

Chessboard Determinant

\det A may be obtained by expanding out any row or column. To figure out which coefficients should be subtracted and which ones added use the chessboard pattern of signs:

\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}
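Expanding along the first row with the chessboard signs gives a short recursive determinant; a sketch for plain nested lists:

```python
# Cofactor (Laplace) expansion along the first row. The chessboard signs
# for row 0 are (-1)**j; recursion bottoms out at the 1x1 case.

def det(M):
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3   # the 2x2 formula
assert det([[2, 12], [1, 3]]) == -6
```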

Properties of Determinants

  • \det A = \det A^T
  • If all elements of one row of a matrix are multiplied by a constant z, the determinant of the new matrix is z times the determinant of the original matrix:

    \begin{align*}
    \begin{vmatrix} za & zb \\ c & d \end{vmatrix} &= zad - zbc \\
    &= z(ad-bc) \\
    &= z\begin{vmatrix} a & b \\ c & d \end{vmatrix}
    \end{align*}

    This is also true if a column of a matrix is multiplied by a constant.

    Application: if the factor z appears in each element of a row or column of a determinant, it can be factored out

    $$\begin{vmatrix}2 & 12 \\ 1 & 3 \end{vmatrix} = 2\begin{vmatrix}1 & 6 \\ 1 & 3 \end{vmatrix} = 2 \times 3 \begin{vmatrix} 1 & 2 \\ 1 & 1 \end{vmatrix}$$

    Application: if all elements in one row or column of a matrix are zero, the value of the determinant is 0.

    \begin{vmatrix} 0 & 0 \\ c & d \end{vmatrix} = 0\times d - 0\times c = 0

    Application: if A is an n\times n matrix,

    \det(zA) = z^n\det A
  • Swapping any two rows or columns of a matrix changes the sign of the determinant

    \begin{align*}
    \begin{vmatrix} c & d \\ a & b \end{vmatrix} &= cb - ad \\
    &= -(ad - bc) \\
    &= -\begin{vmatrix} a & b \\ c & d \end{vmatrix}
    \end{align*}

    Application: if any two rows or two columns are identical, the determinant is zero.

    Application: if any row is a multiple of another row, or a column a multiple of another column, the determinant is zero.

  • The value of a determinant is unchanged by adding to any row a constant multiple of another row, or adding to any column a constant multiple of another column

  • If A and B are square matrices of the same order then

    \det(AB) = \det A \times \det B

Inverse of a Matrix

If A is a square matrix, then its inverse matrix is A^{-1} and is defined by the property that:

A^{-1}A = AA^{-1} = I
  • Not every matrix has an inverse

  • If the inverse exists, then it is very useful for solving systems of equations:

    \begin{align*}
    A\pmb{x} = \pmb b \rightarrow A^{-1}A\pmb x &= A^{-1}\pmb b \\
    I\pmb x &= A^{-1}\pmb b \\
    \pmb x &= A^{-1}\pmb b
    \end{align*}

    Therefore there must be a unique solution to A\pmb x = \pmb b: \pmb x = A^{-1}\pmb b.

  • If D = EF then

    D^{-1} = (EF)^{-1} = F^{-1}E^{-1}

Inverse of a 2x2 Matrix

If A is the 2x2 matrix


A = \begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}

and its determinant D satisfies D \ne 0, then A has the inverse A^{-1} given by


A^{-1} = \frac 1 D \begin{pmatrix}
a_{22} & -a_{12} \\
-a_{21} & a_{11}
\end{pmatrix}

If D = 0, then matrix A has no inverse.

Example 1

Find the inverse of matrix A = \begin{pmatrix} -1 & 5 \\ 2 & 3 \end{pmatrix}.

  1. Calculate the determinant

    \det A = -1 \times 3 - 5 \times 2 = -13

    Since \det A \ne 0, the inverse exists.

  2. Calculate A^{-1}

    A^{-1} = \frac 1 {-13} \begin{pmatrix} 3 & -5 \\ -2 & -1\end{pmatrix}
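The whole worked example, as a sketch in Python (Fraction keeps the arithmetic exact instead of using floats):

```python
from fractions import Fraction

# Inverse of a 2x2 matrix via the swap-and-negate formula above.

def inverse_2x2(A):
    (a, b), (c, d) = A
    D = a * d - b * c            # the determinant
    if D == 0:
        raise ValueError("determinant is zero: no inverse")
    return [[Fraction(d, D), Fraction(-b, D)],
            [Fraction(-c, D), Fraction(a, D)]]

A = [[-1, 5], [2, 3]]
Ainv = inverse_2x2(A)

# Check the defining property A^{-1} A = I:
I = [[sum(Ainv[i][r] * A[r][j] for r in range(2)) for j in range(2)]
     for i in range(2)]
assert I == [[1, 0], [0, 1]]
```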