
Math 318 Homework 2

(1) Permutation Matrices
(6.1 #34) Consider the following permutation matrix:

P =
[ 0 0 0 1 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
Let’s examine why P is called a “permutation matrix” and study its eigenvalues
and eigenvectors.

(a) What is the effect of multiplying the vector x = (x1,x2,x3,x4)⊺ by the matrix P, i.e., what is Px?
(b) Find the matrix Q that permutes the entries of a vector x by the permutation 1324.
(c) Compute all eigenvalues of the matrix P shown above.
(d) For each eigenvalue, find a corresponding eigenvector. You shouldn’t have to do any tedious computations such as finding nullspaces.

(2) Products and sums of eigenvalues
Let A be an n×n matrix with eigenvalues λ1, . . . ,λn (not necessarily all distinct).

(a) (i) (6.2 #20) Suppose A is diagonalizable. Then show that
det(A) = λ1λ2⋯λn.

(ii) (6.1 #16) Argue that for any matrix A, det(A) = λ1λ2⋯λn.
Hint: Recall that det(A − tI) = (−1)^n(t − λ1)(t − λ2)⋯(t − λn).

(b) (6.2 #20, #21) The trace of a square matrix is the sum of its diagonal entries.

For example, if A = [a b; c d], then trace(A) = a + d.

(i) Argue that trace(PQ) = trace(QP) for any two n×n matrices P and Q.
Hint: To prove the general case, write down P = (pij) and Q = (qij)
symbolically and compute the trace of PQ and QP symbolically. The
trace expression will be a sum of sums. You then need to argue that
these sums are the same. You might first check the statement for two
symbolic 2 × 2 matrices to understand what is happening.

(ii) Whether you can do (i) or not, use the result to show that if A is diagonalizable, then the trace of A is the sum of the eigenvalues of A.

(iii) In general, the trace of any n×n matrix is the sum of its eigenvalues.
(No need to argue this.)
Show that the trace of any 3 × 3 matrix is the sum of its eigenvalues.
(This requires starting with a symbolic 3 × 3 matrix.)
Hint: Here is a hint for general n that you can also use for the n = 3 case.
Recall that the characteristic polynomial p(λ) = (−1)^n(λ−λ1)⋯(λ−λn)
and also p(λ) = det(A−λI). Think about where you might see the sum
of the eigenvalues of A if you expanded the product, and where you
might see it if you computed det(A−λI) using permutations.

(3) Projections †
(6.2 #25) Recall that the column space of an n×n matrix A, denoted Col(A), is the span of the columns of A. Suppose A is a non-zero n×n matrix such that A^2 = A.
(a) Show that λ = 1 is an eigenvalue of A with eigenspace equal to Col(A).
(b) What is the eigenspace of λ = 1 if A is invertible? Is A diagonalizable in this case? If yes, write its diagonalization.
(c) If A is not invertible, then what are its eigenvalues and their eigenspaces? Is A diagonalizable in this case? If yes, write its diagonalization.
Note: If you can write its diagonalization, it will not be with explicit vectors, but rather a general diagonalization in terms of vectors coming from the various eigenspaces that you have found.

There will be a series of problems throughout this quarter marked with a †. They all
relate to projections.

(4) (Simultaneous diagonalization). (6.2 #24, #29)
(a) Consider the set of all 4×4 matrices that are diagonalized by the same eigenvector matrix U:

SU = {UΛU^{-1} ∈ R^{4×4} ∶ Λ is a diagonal matrix}.

Show that SU is a subspace of R^{4×4}. (Check the properties of a subspace.)
(b) What is SI where I is the identity matrix?
(c) Suppose the same U diagonalizes A and B, i.e., A = UΛ1U^{-1} and B = UΛ2U^{-1}. Argue that AB = BA. (Recall that in general matrix multiplication is not commutative, i.e., AB ≠ BA.)

(5) * How to find a triangle in a graph. A graph G is a collection of nodes labeled
1, 2, . . . ,n and edges which are pairs of nodes. Shown below is a graph with 5 nodes
labeled 1, . . . , 5 and edges {1, 2},{1, 3},{2, 4},{3, 4},{3, 5},{4, 5}. A triangle in a
graph G is a triple of nodes i,j,k such that all edges {i,j},{i,k},{j,k} are present
in G. For example 3, 4, 5 forms a triangle in the graph below. Two nodes i and j
are neighbors in G if {i,j} is an edge in G.

[Figure: the graph G with nodes 1, 2, 3, 4, 5 and edges {1,2}, {1,3}, {2,4}, {3,4}, {3,5}, {4,5}.]

If the graph G is very large it becomes hard to decide if there is a triangle in it
by simply looking at the graph. In this problem we will see that linear algebra can
be used to decide if a graph contains a triangle.

(a) The adjacency matrix A of a graph G with n nodes is a n×n matrix defined as
follows. The rows and columns of A are indexed by the node labels 1, . . . ,n and
the (i,j)-entry of A is 1 if the pair {i,j} is an edge in G and 0 otherwise. (An
example can be seen in Problem 2.4A on page 76 in Strang’s book. See also
Chapter 3 of the lecture notes.)
Write down the adjacency matrix of the graph G shown above.

(b) Let B = A^2 where A is the adjacency matrix of G. The entry bij in B is the dot
product of two vectors sitting in A. Which vectors are they? In our example,
how is b23 formed?

(c) For three nodes i,j,k from G, argue that aikakj is 1 exactly when k is a
common neighbor of i and j.

(d) Using the above, what does bij count?

(e) The nodes i,j,k form a triangle if and only if i and j are neighbors and k is a
common neighbor of i and j. If i,j,k is a triangle what property must aij and
bij have?

(f) Putting all this together, can you construct an algorithm that takes as input two indices i, j and determines whether or not there exists a triangle with the edge between i and j as one of its sides? Justify your algorithm. Think of an algorithm that uses A and B. If you use only A you will likely have a less efficient algorithm. You don’t need to write computer code; just writing out the steps in words is enough.

(g) (optional) If you know about running times of algorithms, do you see how fast
this algorithm runs? Is it faster than checking all triples of nodes in G?

Chapter 2

Difference Equations

In this chapter we see an application of diagonalization to solving difference equations. We will develop the
theory we need by working out an example. The material in this chapter comes from Strang’s Chapter 6.2.

2.1 Fibonacci sequence

You might have heard of the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, . . . defined recursively as follows: if Fk
denotes the kth Fibonacci number, where k = 0, 1, 2, 3, . . ., then

F0 = 0, F1 = 1, Fk+2 = Fk+1 + Fk.

This recursive definition allows us to enumerate any Fibonacci number we want

F0 = 0, F1 = 1, F2 = 1, F3 = 2, F4 = 3, F5 = 5, F6 = 8, F7 = 13, F8 = 21, F9 = 34, F10 = 55, . . .

The numbers get large relatively quickly and it becomes cumbersome to use the recursive definition to find
a large Fibonacci number like F100. However, linear algebra makes this easy as we now see.

Define wk := [Fk+1; Fk], which means that wk+1 = [Fk+2; Fk+1]. Check that wk and wk+1 can be related via multiplication by a matrix A. Indeed, if A = [1 1; 1 0], then Awk = wk+1. Let’s check:

Awk = [1 1; 1 0] wk = [1 1; 1 0] [Fk+1; Fk] = [Fk+1 + Fk; Fk+1] = [Fk+2; Fk+1] = wk+1.

Now suppose we start this process with w0 = [F1; F0] = [1; 0]. We know that Aw0 = w1 and multiplying both sides of this equation with A we see that

w2 = Aw1 = A(Aw0) = A^2 w0.

Continuing to multiply by A on both sides, we see that wk = A^k w0 and hence w100 = A^100 w0. Recall that we can “easily” compute A^100 if A is diagonalizable.

We find the eigenvalues of A by computing the roots of its characteristic polynomial:

det(A − λI) = det([1−λ 1; 1 −λ]) = −λ(1 − λ) − 1 = λ^2 − λ − 1 = 0

which yields the eigenvalues λ1 = (1 + √5)/2 ≈ 1.618 and λ2 = (1 − √5)/2 ≈ −0.618. Now computing eigenvectors we get that u1 = [λ1; 1] is an eigenvector of λ1 and u2 = [λ2; 1] is an eigenvector of λ2. Since {u1, u2} is a basis of R^2, we can diagonalize A and write A = UΛU^{-1}. This means that

wk = A^k w0 = UΛ^k U^{-1} w0.

Putting everything together we have that

[F101; F100] = w100 = A^100 w0 = UΛ^100 U^{-1} [1; 0] = U [((1+√5)/2)^100  0; 0  ((1−√5)/2)^100] U^{-1} [1; 0].

Multiplying everything out we could find F100, but this still requires some matrix computations, and computing Λ^100 is not too easy in this example since it requires finding ((1+√5)/2)^100 and ((1−√5)/2)^100.
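As an aside, here is a small Julia sketch (in the style of the Julia sessions later in these notes) that carries out exactly this computation numerically. The variable names are just illustrative choices, and floating-point arithmetic only approximates the exact integer F100.

using LinearAlgebra

A  = [1.0 1.0; 1.0 0.0]            # the Fibonacci matrix
w0 = [1.0, 0.0]                    # w0 = (F1, F0)

lambda, U = eigen(A)               # eigenvalues and eigenvector matrix, so A = U * Diagonal(lambda) * inv(U)
Lam100 = Diagonal(lambda .^ 100)   # Λ^100: just raise the diagonal entries to the 100th power
w100 = U * Lam100 * inv(U) * w0    # w100 = A^100 * w0 = (F101, F100)

println(w100[2])                   # ≈ 3.54e20; floating point only approximates the exact integer F100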

We will now see that there is an easier way to find F100. In fact, there is a closed form expression for the
kth Fibonacci number Fk purely in terms of the eigenvalues and eigenvectors of A.

2.2 The general setting

Before we start, we point out an important interpretation of matrix-vector multiplication. These are very
useful facts that we will need over and over again in this class. Please test these facts on some examples and
make sure you understand these interpretations very very well.

Matrix-vector multiplication as a linear combination of columns/rows

Let A ∈ R^{m×n} with columns a1, . . . , an and rows b1^T, b2^T, . . . , bm^T, where ai ∈ R^m and bj ∈ R^n.

• The vector Ax is the linear combination of the columns of A by x1, x2, . . . , xn. It is a column vector whose ith entry is the dot product of the ith row of A with x. Mathematically, if A = [a1 a2 ⋯ an] ∈ R^{m×n} and x = (x1, x2, . . . , xn)^T, then Ax = x1a1 + x2a2 + ⋯ + xnan and (Ax)_i = bi^T x.

• The vector y^T A is the linear combination of the rows of A by y1, . . . , ym. It is a row vector whose jth entry is the dot product of y and the jth column of A. Mathematically, if y = (y1, . . . , ym)^T and A is the matrix with rows b1^T, b2^T, . . . , bm^T, then y^T A = y1b1^T + y2b2^T + ⋯ + ymbm^T and (y^T A)_j = y^T aj.

Using the above (check!) we get the following two very useful facts about Col(A), the column space of A, which is the span of the columns of A, and Row(A), the row space of A, which is the span of the rows of A:

Column and row space of a matrix

Col(A) = {Ax : x ∈ R^n} ⊆ R^m   and   Row(A) = {y^T A : y ∈ R^m} ⊆ R^n
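Here is a quick Julia check of these two facts on an arbitrary 2×3 example (a sketch; nothing about the particular numbers matters):

A = [1 2 3;
     4 5 6]
x = [1, 0, 2]
y = [2, -1]

# Ax is the linear combination of the columns of A with coefficients x1, x2, x3
println(A * x == 1 * A[:, 1] + 0 * A[:, 2] + 2 * A[:, 3])    # true

# y'A is the linear combination of the rows of A with coefficients y1, y2
println(y' * A == 2 * A[1, :]' - 1 * A[2, :]')               # true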

Now let’s come back to difference equations. Suppose a matrix A ∈ R^{n×n} sends an initial state vector w0 to the kth state vector wk via the relation A^k w0 = wk. Recall that we got this from the difference equation Awk = wk+1 that related two consecutive states of some system whose kth state vector is wk. If A is diagonalizable, then A = UΛU^{-1} and wk = A^k w0 = UΛ^k U^{-1} w0.

Set c := U^{-1} w0. Recall that in the eigenbasis of R^n consisting of the columns of U, the vector of coordinates of w0 is c = U^{-1} w0. You can find the vector c by solving the system Uc = w0.

Now consider wk = A^k w0 again, where we substitute c for U^{-1} w0:

wk = UΛ^k U^{-1} w0 = UΛ^k c = [u1 u2 ⋯ un] diag(λ1^k, λ2^k, . . . , λn^k) [c1; c2; ⋯ ; cn]
   = [u1 u2 ⋯ un] [λ1^k c1; λ2^k c2; ⋯ ; λn^k cn]
   = c1 λ1^k u1 + c2 λ2^k u2 + ⋯ + cn λn^k un,

which expresses the kth state vector wk as a linear combination of the eigenvectors u1, . . . , un of A. The scalars involved in the combination come from the eigenvalues of A as well as the vector c. We summarize the above calculations in the following proposition.

Proposition 2.2.1. Suppose we have a difference equation Awk = wk+1 with initial state vector w0 and A ∈ R^{n×n}. Then wk = A^k w0. If A is diagonalizable then we can explicitly solve for wk as:

wk = c1 λ1^k u1 + c2 λ2^k u2 + ⋯ + cn λn^k un = Σ_{i=1}^{n} ci λi^k ui.

In the above expression, the vector c = (c1, c2, . . . , cn)^T is the solution of the linear system Uc = w0, and the vectors u1, u2, . . . , un are independent eigenvectors of A with eigenvalues λ1, λ2, . . . , λn.
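A short Julia sketch of Proposition 2.2.1; the function name state is just an illustrative choice, and the Fibonacci matrix is used as a test case.

using LinearAlgebra

# wk = Σ ci λi^k ui, where c solves U c = w0
function state(A, w0, k)
    lambda, U = eigen(A)                # eigenvalues λi and eigenvector matrix U
    c = U \ w0                          # solve U c = w0
    return sum(c[i] * lambda[i]^k * U[:, i] for i in 1:length(lambda))
end

A  = [1.0 1.0; 1.0 0.0]                 # Fibonacci matrix as a test case
w0 = [1.0, 0.0]
println(state(A, w0, 10))               # ≈ (F11, F10) = (89, 55)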

2.3 Back to the Fibonacci sequence

Now let’s come back to computing F100. Solving Uc = w0, we see that

c = [1/(λ1 − λ2); −1/(λ1 − λ2)].

Therefore,

wk = (1/(λ1 − λ2)) λ1^k [λ1; 1] + (−1/(λ1 − λ2)) λ2^k [λ2; 1]   (the scalars are c1 and c2, the vectors are u1 and u2)
   = [ (λ1^{k+1} − λ2^{k+1})/(λ1 − λ2) ; (λ1^k − λ2^k)/(λ1 − λ2) ].

In particular, you can check that

w0 = [1; 0] = (1/(λ1 − λ2)) [λ1; 1] − (1/(λ1 − λ2)) [λ2; 1],

which is c1 u1 + c2 u2.

Since Fk is the second coordinate of wk = [Fk+1; Fk], we conclude that

Fk = (λ1^k − λ2^k)/(λ1 − λ2).

What happens as k goes to infinity, i.e., what is lim_{k→∞} wk = lim_{k→∞} (c1 λ1^k u1 + c2 λ2^k u2)?

Since λ2 ≈ −0.618 lies strictly between −1 and 0, lim_{k→∞} λ2^k = 0. Therefore,

lim_{k→∞} (c1 λ1^k u1 + c2 λ2^k u2) = lim_{k→∞} c1 λ1^k u1 = (lim_{k→∞} c1 λ1^k) u1.

This means that wk becomes proportional to the first eigenvector u1 in the long run. Looking back at the formula for Fk we also see that in the long run,

Fk ≈ λ1^k/(λ1 − λ2) = (1/√5) ((1 + √5)/2)^k.

The key observation is that in this Fibonacci example, the powers of the second eigenvalue of A tended to 0 as k got large, so wk limits to a multiple of the first eigenvector u1 of A. In other words, the long term behavior of wk is controlled by the eigenvector with the largest eigenvalue (called the dominant eigenvector and eigenvalue). Keep this in mind as we go to the next chapter, where we will see situations in which the dominant eigenvalue and eigenvector are guaranteed to control the long term behavior of states.

2.4 Fibonacci numbers and the golden ratio – an aside

Definition 2.4.1. The golden ratio, denoted by the Greek letter φ, is defined to be φ = (1 + √5)/2. It is also a root of the quadratic polynomial x^2 − x − 1.

The golden ratio came about from the idea of trying to draw the “perfect” rectangle, that is, a rectangle
that was the most pleasing to the human eye. It appears everywhere in nature and is intimately related to
the Fibonacci sequence. Recall that φ is the dominant eigenvalue of the matrix A in the Fibonacci example.

Suppose we were a plant, one that grows upwards and has leaves coming off of its main stem. How would we grow more and more leaves and ensure that the leaf spacing maximized sunlight on the surface of our leaves? We could start with leaf 1, and then rotate halfway around the stem to let leaf 2 grow there. That is, leaf 2 is a rotation of 1/2 unit around the stem from leaf 1. Next, we would want leaf 3 to be positioned so it does not block too much sunlight from leaves 1 and 2. If we drew a picture and thought about it for a bit, we would end up rotating 3/5 units away from leaf 2. How about leaf 4? We would rotate this one 5/8 units from leaf 3. Assuming the plant was immortal, we would continue this process indefinitely, with a new spacing for each leaf.

What would happen as we proceed? If we write it out, we see that we obtain a sequence of fractions 1/2, 3/5, 5/8, . . . , Fk/Fk+1, and the limit of this sequence as k tends to infinity is

lim_{k→∞} Fk/Fk+1 = 1/φ.

In other words, lim_{k→∞} Fk+1/Fk = φ, the golden ratio!
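A quick numerical sanity check in Julia (a sketch; the cutoff k = 30 is arbitrary):

# Ratios F_{k+1} / F_k approach the golden ratio φ = (1 + √5)/2 ≈ 1.618
F = zeros(Int, 31); F[2] = 1        # F[k+1] stores Fk, so F[1] = F0 = 0 and F[2] = F1 = 1
for k in 3:31
    F[k] = F[k-1] + F[k-2]
end
println(F[31] / F[30])              # F30 / F29 ≈ 1.6180339887, already very close to φ
println((1 + sqrt(5)) / 2)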

Chapter 3

Markov Matrices

In this chapter we will look at the eigenvalues and eigenvectors of positive and nonnegative matrices. Our focus will be on a subset of positive and non-negative matrices that are said to be Markov. We will see several applications where difference equations involving Markov matrices arise. The eigenvalues and eigenvectors of Markov matrices have special properties which play a crucial role in these applications. This material is based on Strang’s Chapter 10.3.

Let’s start with an example that allows us to practice what we learned about solving difference equations in the last chapter.

3.1 Longterm Behavior of Rental Cars

Suppose we want to understand the behavior of rental cars in Seattle. Cars are constantly being rented in
Seattle and returned elsewhere, and cars from elsewhere are being returned in Seattle. Suppose:

• At the start 2% of all rental cars are in Seattle (i.e., fraction of cars in Seattle is 0.02 and fraction
outside Seattle is 0.98).

• Every month 20% of Seattle cars leave and 5% of outside cars come in.

Question 3.1.1. What fraction of rental cars are in Seattle in the long run?

Step 1: Define the kth state vector of rental cars to be wk, whose first coordinate is the fraction of cars in Seattle and second coordinate is the fraction of cars outside Seattle at the end of month k. Then

w0 = [0.02; 0.98].

Step 2: Determine w1. Since 20% of cars leave every month and 5% come in, we know that after one month the fraction of cars in Seattle is

(0.8)(0.02) + (0.05)(0.98)

and the fraction of cars outside Seattle is

(0.2)(0.02) + (0.95)(0.98).

Therefore,

w1 = [(0.8)(0.02) + (0.05)(0.98); (0.2)(0.02) + (0.95)(0.98)] = [0.8 0.05; 0.2 0.95] [0.02; 0.98] = Aw0,

where A = [0.8 0.05; 0.2 0.95].

Step 3: Using the same reasoning as in the Fibonacci example, we can conclude that at the end of month k, our kth state vector is wk = A^k w0. The eigenvalues and eigenvectors of A are

λ1 = 1, u1 = [0.2; 0.8]   and   λ2 = 0.75, u2 = [−1; 1].

Check that

w0 = 1·[0.2; 0.8] + 0.18·[−1; 1].

Now we apply the result in Proposition 2.2.1 to find wk. From the expression for w0, we get that c1 = 1 and c2 = 0.18. Therefore,

wk = (1)(1^k)[0.2; 0.8] + (0.18)(0.75)^k[−1; 1],

and the second term tends to 0 as k → ∞. Once again, since the second eigenvalue 0.75 is between 0 and 1, its powers (0.75)^k will tend to 0 as k goes to infinity. Thus the dominant eigenvalue is 1, and lim_{k→∞} wk = [0.2; 0.8]. Therefore, in the long run, 20% of cars end up in Seattle. Once again, in the limit, wk is proportional to the dominant eigenvector of A.
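A quick Julia sketch of this convergence (the 50 months are an arbitrary choice of a “large” k):

A  = [0.8 0.05; 0.2 0.95]      # rental car matrix
w0 = [0.02, 0.98]              # initial fractions in and outside Seattle

w50 = A^50 * w0                # wk = A^k * w0 after 50 months
println(w50)                   # ≈ [0.2, 0.8], the dominant eigenvector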

3.2 Positive Markov Matrices

Definition 3.2.1. Let A = (aij) ∈ R^{m×n}.

• A is a positive matrix, denoted A > 0, if aij > 0 for all i, j.

• A is a non-negative matrix, denoted A ≥ 0, if aij ≥ 0 for all i, j.

• A is Markov (also known as stochastic) if A is non-negative and the entries in each column of A add up to 1, i.e., aij ≥ 0 for all i, j and Σ_{i=1}^{m} aij = 1 for all j.

• A is a positive Markov matrix if it is positive and Markov, i.e., aij > 0 for all i, j and Σ_{i=1}^{m} aij = 1 for all j.

Example 3.2.2. The matrix A = [0.8 0.05; 0.2 0.95] in the rental car example is positive Markov. The matrix A = [1 1; 1 0] in the Fibonacci example is non-negative but neither positive nor Markov.

Let us first develop some facts about Markov matrices. Suppose 1 = (1, 1, . . . , 1)^T is the vector of all ones. The condition that the entries in each column of a matrix A add up to 1 is the same as saying 1^T A = 1^T. Therefore, if A is Markov then 1^T A = 1^T. We argue the following about powers of positive Markov matrices.

Lemma 3.2.3. If A is positive Markov then A^k is also positive Markov.

Proof. Suppose A is positive Markov and v is a non-negative vector that is not 0. Then

1. Av > 0, since each entry of Av is the dot product of a row of A, which is all positive, with a non-negative vector whose entries are not all zero.

2. If, further, 1^T v = 1 (i.e., the entries of v add up to 1), then 1^T Av = (1^T A)v = 1^T v = 1, which means that the entries of Av also sum to 1.

Now consider A^2 = A[a1 a2 ⋯ an]. The jth column of A^2 is Aaj. Since A is positive Markov and aj is a column of A, aj > 0 and its entries sum to 1. So thinking of aj as the v in the previous paragraph, we get that Aaj > 0 and its entries sum to 1. Thus we have shown that A^2 is positive Markov. Repeating this argument we get that A^k is positive Markov.

Theorem 3.2.4. (Perron’s theorem for positive matrices) If A > 0 then A has a dominant eigenvalue λA with the following properties:

1. λA > 0 and it has an eigenvector uA > 0 (all positive entries) called a dominant eigenvector of A.

2. AM(λA) = 1.

3. For any other eigenvalue µ of A, all its eigenvectors have at least one negative coordinate, and |µ| < λA.

The proof of this theorem is beyond the scope of this class, so we will just accept it. If you are interested,
a proof can be found in the book Numerical Linear Algebra by Lloyd Trefethen and David Bau. The above
theorem says more if A is a positive Markov matrix.

Theorem 3.2.5. (Perron’s theorem for positive Markov matrices) If A is positive Markov then A has all the properties stated in Theorem 3.2.4, and in addition:

1. λA = 1, and thus if µ is another eigenvalue of A, then |µ| < 1.

2. If w0 ≥ 0 and wk = A^k w0, then lim_{k→∞} A^k w0 = c·uA where c > 0.
In other words, in the long run, wk tends to a non-negative multiple of uA.

Example 3.2.6. Recall that in the rental car problem, wk tends to uA = [0.2; 0.8], which is the dominant eigenvector of the positive Markov matrix A = [0.8 0.05; 0.2 0.95] in that problem. The dominant eigenvalue was indeed 1 and the other eigenvalue 0.75 is smaller than 1 in absolute value. The constant c = 1.

We prove Theorem 3.2.5 using Theorem 3.2.4.

Proof. 1. Let A be a positive Markov matrix and let 1 be the vector of all ones as before. Since the column entries of A sum to 1, the row entries of A^T sum to 1, which means A^T 1 = 1. Hence 1 is an eigenvalue of A^T with eigenvector 1. We now argue that 1 is the dominant eigenvalue of A. Since A^T is a positive matrix, it can have only one positive eigenvector by part 3 of Theorem 3.2.4, and this eigenvector is the dominant eigenvector of A^T. Therefore, 1 is the dominant eigenvector of A^T and 1 is the dominant eigenvalue of A^T. Since A and A^T have the same eigenvalues, their biggest eigenvalues are also the same. We conclude that λA = 1.

The second part of the statement follows from Theorem 3.2.4.

2. To prove the second statement, we assume for simplicity that A is diagonalizable. We will only need this special case. If A = UΛU^{-1} then by Proposition 2.2.1 we know that

wk = c1(1)^k uA + c2 λ2^k u2 + ⋯ + cn λn^k un.

Since we know that |λi| < 1 for all i = 2, 3, . . . , n, we get that lim_{k→∞} λi^k = 0 for all non-dominant eigenvalues. From this we can conclude that

lim_{k→∞} wk = lim_{k→∞} (c1(1)^k uA + c2 λ2^k u2 + ⋯ + cn λn^k un) = c1 uA,

since every term except the first tends to 0 as k → ∞.

How do we see that c1 > 0? From Lemma 3.2.3 we know that A^k is positive Markov. Therefore, (A^k)^T 1 = 1. Since w0 ≥ 0 and w0 ≠ 0,

(A^k w0)^T 1 = w0^T (A^k)^T 1 = w0^T 1 > 0.

Therefore, lim_{k→∞} (A^k w0)^T 1 = (c1 uA)^T 1 > 0, which implies that c1 > 0 since we know that uA^T 1 > 0.

Warning: If A is only non-negative Markov and not positive Markov, Perron’s theorem fails. A counterexample is A = [0 1; 1 0] with eigenvalues λ1 = 1 and λ2 = −1. Note that the absolute value of the second largest eigenvalue is NOT strictly smaller than the dominant eigenvalue. Even so, not all hope is lost.

Proposition 3.2.7. If A ≥ 0 but A^k is positive Markov for some k > 1, then 1 is still the unique dominant eigenvalue of A and |µ| < 1 for all other eigenvalues µ of A.

Proof. If µ is an eigenvalue of A, then µ^k is an eigenvalue of A^k. Since A^k is a positive Markov matrix, by Theorem 3.2.5, 1 is its dominant eigenvalue. This means that 1 = λ^k for some eigenvalue λ of A, and |µ^k| < 1 for all other eigenvalues µ. Thus λA = 1 (why do we know λA ≠ −1 or some other root of 1?), and |µ| < 1 for all other eigenvalues µ.

Here is a situation where this Proposition becomes helpful.

Example 3.2.8. Suppose we have three groups with populations p1, p2, p3 respectively, and after each week, each group splits in half and joins the others. This behavior can be modeled by a non-negative Markov matrix. If w0 = [p1; p2; p3], then after one week we have

w1 = Aw0 = [0 1/2 1/2; 1/2 0 1/2; 1/2 1/2 0] w0,   where A is non-negative Markov,

and

w2 = A^2 w0 = [1/2 1/4 1/4; 1/4 1/2 1/4; 1/4 1/4 1/2] w0,   where A^2 is positive Markov.

Since A^2 is positive Markov, the previous proposition tells us that 1 is the dominant eigenvalue of A and |µ| < 1 for any other eigenvalue µ of A. Computing eigenvalues and eigenvectors, we see that λA = 1 with uA = (1/3, 1/3, 1/3)^T and λ2 = λ3 = −1/2.

Let’s compute with Julia.

julia> using LinearAlgebra

julia> A = [1/2 1/4 1/4; 1/4 1/2 1/4; 1/4 1/4 1/2]
3×3 Array{Float64,2}:
 0.5   0.25  0.25
 0.25  0.5   0.25
 0.25  0.25  0.5

julia> eigvals(A)
3-element Array{Float64,1}:
 0.24999999999999975
 0.25
 0.9999999999999998

julia> eigvecs(A)
3×3 Array{Float64,2}:
 -0.0698557   0.813503  -0.57735
 -0.669586   -0.467248  -0.57735
  0.739442   -0.346255  -0.57735

Since Julia runs a numerical algorithm it does not often output integer answers, even if the true answer is an integer. The output 0.9999999999999998 should be interpreted as 1. (Note that the matrix we typed in is A^2, the positive Markov matrix of this example; its eigenvalues 1, 1/4, 1/4 are the squares of the eigenvalues 1, −1/2, −1/2 of A, while its eigenvectors are the same as those of A.) The eigenvector of 1 that Julia gives us is the last column of the eigenvector output. Note that we can scale this eigenvector and still have an eigenvector of 1. So if we flip the sign we get the positive eigenvector (0.57735, 0.57735, 0.57735)^T. We can scale it further to get (1/3, 1/3, 1/3)^T.

By Perron’s theorem, wk will approach a positive multiple of uA in the long run. Since all coordinates of
uA are equal, we should expect that in the long run the three populations will equalize. Stop for a moment
and think about whether this is what you would have expected from just the statement of the problem before
you did any computation at all.

To see a computational example, suppose w0 = (8, 16, 32)^T. Then you can compute that

w0 = [8; 16; 32], w1 = [24; 20; 12], w2 = [16; 18; 22], w3 = [20; 19; 17], . . . ⟶ 56·[1/3; 1/3; 1/3].

The number 56 is the sum of the populations at the start, which remains constant as you advance k.
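A small Julia sketch of this computation (the exponent 30 is arbitrary; any large k gives essentially the same answer):

A  = [0 1/2 1/2; 1/2 0 1/2; 1/2 1/2 0]   # the non-negative Markov matrix of this example
w0 = [8.0, 16.0, 32.0]

println(A * w0)        # w1 = [24, 20, 12]
println(A^2 * w0)      # w2 = [16, 18, 22]
println(A^30 * w0)     # ≈ [56/3, 56/3, 56/3], a multiple of uA = (1/3, 1/3, 1/3)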

3.3 Non-negative Markov Matrices

We already saw that Perron’s theorem fails for non-negative matrices. However, the eigenvalues and eigenvectors of non-negative and non-negative Markov matrices still have some special properties, as we see in the following theorem.

Theorem 3.3.1. (Frobenius Theorem) If A ≥ 0 then A has a dominant eigenvalue λA such that

1. λA ≥ 0 with eigenvector uA ≥ 0.

2. If µ is an eigenvalue of A then |µ| ≤ λA.

Example 3.3.2. The matrix A = [0 1; 0 0] has a single eigenvalue 0 with multiplicity 2. Therefore, λA = 0 and AM(λA) = 2. Also, E0 = {x ∈ R^2 : x2 = 0}, which is the x1-axis of R^2. There are certainly nonnegative vectors in E0 such as (1, 0). This example shows you that if A is only non-negative (not positive), then the dominant eigenvalue is also only non-negative and it can be repeated. Further, we can only guarantee that the dominant eigenvector is non-negative, not positive.

As with Perron’s theorem, we can say something more if A is non-negative and Markov. Besides the
properties already guaranteed by Theorem 3.3.1, we get the following.

Theorem 3.3.3. If A is non-negative and Markov, then λA = 1.

Example 3.3.4. Consider the n×n identity matrix In, which has only one eigenvalue λ = 1 with multiplicity n. The eigenspace E1 = R^n, so 1 has a non-negative eigenvector. In fact, it has a positive eigenvector.

We write the proof of part of this theorem in case you are interested to see it. This is optional and uses
(slightly) advanced facts, but is within your reach. We need a theorem that we will not prove.

Theorem 3.3.5. (Gershgorin’s Theorem) For any A ∈ R^{n×n} and any eigenvalue λ of A, there is some i such that |aii − λ| ≤ Σ_{j≠i} |aij|.

What this theorem says is that every eigenvalue of A is close to at least one of the diagonal entries aii of A. More precisely, if you plot all the diagonal entries of A on the number line R and for each aii mark the interval centered at aii extending a distance Σ_{j≠i} |aij| on either side of aii, then λ will lie in at least one of these intervals. Note that Σ_{j≠i} |aij| is the sum of the absolute values of all the entries in row i except aii.

We can now argue why Theorem 3.3.3 is true. Suppose we denote the (i, j) entry of A^T by a'ij. Since A is non-negative Markov, the rows of A^T sum to 1 and all entries of A^T are non-negative. Therefore,

Σ_{j≠i} |a'ij| = Σ_{j≠i} a'ij = 1 − a'ii = 1 − aii

since a'ii = aii. Suppose µ is an eigenvalue of A^T. Then, for the index i given by Gershgorin’s theorem,

|µ| − a'ii = |µ| − |a'ii| ≤ |µ − a'ii| ≤ Σ_{j≠i} |a'ij| = 1 − aii.

We have used lots of different facts in the above chain: the first equality is because a'ii ≥ 0 and hence a'ii = |a'ii|. The inequality that comes next follows from the triangle inequality for distances in R. The next inequality is from Gershgorin’s theorem and the last equality is what we showed two lines before. Altogether we have shown that

|µ| − a'ii ≤ 1 − aii,

but since a'ii = aii we conclude that |µ| ≤ 1.

Now we use the fact that all eigenvalues of A and A^T are the same, so µ is also an eigenvalue of A. Also recall that if A is Markov, then 1 is an eigenvalue of A since 1 is an eigenvalue of A^T because A^T 1 = 1. So we conclude that λA = 1 and |µ| ≤ 1 for all other eigenvalues µ of A.
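As a numerical illustration of this conclusion, here is a Julia sketch that builds a random Markov matrix (the size 5 is arbitrary) and checks that all its eigenvalues have absolute value at most 1:

using LinearAlgebra

M = rand(5, 5)                       # random non-negative entries
M = M ./ sum(M, dims = 1)            # normalize each column to sum to 1, so M is Markov

println(sum(M, dims = 1))            # every column sum is 1
println(maximum(abs.(eigvals(M))))   # ≈ 1.0, so |µ| ≤ 1 for every eigenvalue µ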

3.4 Adjacency Matrices of Graphs

We now give a family of non-negative matrices that will come up over and over again in this course.
A graph G is a collection of nodes or vertices, labeled 1, 2, . . . , n and edges which connect pairs of

nodes. We let {i, j} denote the edge connecting nodes i and j.

Here is an example of a graph.

1 --- 2 --- 3

This graph has three nodes, 1, 2, 3, and two edges {1, 2}, {2, 3}.
Graphs are denoted as G = (V, E) where V is the set of nodes in G and E is the set of edges in G. Our example graph above would be written as G = (V, E) with V = {1, 2, 3} and E = {{1, 2}, {2, 3}}.

To any graph G we can associate a matrix AG, known as the adjacency matrix of G.

Definition 3.4.1. The adjacency matrix AG of a graph G = (V, E) with n nodes is an n×n matrix defined as follows. The rows and columns of AG are indexed by the node labels 1, . . . , n and the (i, j)-entry of AG is 1 if the pair {i, j} is an edge in G and 0 otherwise. That is, AG = (aij) where

aij = 1 if there exists an edge from i to j, i.e., {i, j} ∈ E, and aij = 0 otherwise.

The adjacency matrix for the graph G from above would be

AG =
[ 0 1 0 ]
[ 1 0 1 ]
[ 0 1 0 ]

Notice that this matrix is non-negative (but not Markov). Computing its eigenvalues and eigenvectors:

λ1 = √2, u1 = [√2; 2; √2],   λ2 = 0, u2 = [1; 0; −1],   λ3 = −√2, u3 = [√2; −2; √2].

Note several things:

• The dominant eigenvalue λ_{AG} = √2 is positive but not 1.

• AG has a smaller eigenvalue µ = −√2 such that |µ| = λ_{AG}.

• The dominant eigenvalue has a non-negative (in fact, positive) eigenvector.
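In Julia (a sketch; compare the numerical output with the exact values √2, 0, −√2 and the eigenvector (√2, 2, √2) above):

using LinearAlgebra

AG = [0 1 0; 1 0 1; 0 1 0]     # adjacency matrix of the path graph 1 --- 2 --- 3

println(eigvals(AG))           # ≈ [-1.414, 0.0, 1.414], i.e., -√2, 0, √2
println(eigvecs(AG)[:, 3])     # eigenvector of the dominant eigenvalue √2, proportional (up to sign) to (√2, 2, √2)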

Over the coming weeks we will be looking at lots of graphs and their adjacency matrices. We finish this
section with a powerful application of what we have learned in this chapter.

3.5 Google Page Rank

The Google page rank algorithm is due to Larry Page and Sergey Brin in 1998, the founders of Google, and
is used to rank webpages. The central idea is to decide the “rank” or “importance” of a webpage by the
importance of pages that link to it. When we submit a query, we want to see the most relevant pages at the
top of the list, so a useful measurement of importance is needed. We will see how positive Markov matrices
and difference equations give us an algorithm. This material is taken from Chapter 12 of the book Math
Bytes by Tim Chartier.

We can think of the world wide web as a directed graph, with nodes indicating webpages and edges
indicating links from one page to another. Suppose our world wide web consists of 6 webpages and links as
follows.

[Figure: a directed graph on webpages 1–6 with links 1→4, 2→1, 3→1, 4→2, 4→3, 4→5, 5→3, 5→6; page 6 has no outgoing links.]

From this graph we construct a page rank matrix, which will be a 6 ⇥ 6 matrix of probabilities, that
models movement between pages. We compute it according to the following rules:
When you are at page i, you have two choices

1. Teleportation: jump to a di↵erent page by typing the url of the webpage directly into the browser.

2. Follow a link.

If you decide to teleport, assume that all pages are equally likely for you to teleport to. If you decide to
follow a link, assume that all links are equally likely. You may want to assign different probabilities to these
moves but let’s keep it simple. Let’s also decide in this example when we will teleport and when we will
follow a link. Say we roll a die and do the following:

• If we roll 1, 2, 3, 4, or 5, then we follow a link.

• If we roll a 6, then we teleport.

The (i, j) entry of the page rank matrix A is the probability of going from page j to page i. Let’s
compute the first column of this matrix in our example. The entries are ai1 for i = 1, 2, 3, 4, 5, 6. Remember,
to compute ai1 we only focus on going from page 1 to page i.

• a11: We can only teleport from 1 to 1. We have a 1/6 chance of rolling a 6 and a 1/6 chance of
teleporting to 1 out of the 6 possible webpages. The total probability a11 = (1/6)(1/6) = 1/36.

• a21: Similarly to the previous case, we have a 1/6 chance of deciding to teleport and, since all webpages are equally likely, the probability that we teleport from 1 to 2 is (1/6)(1/6) = 1/36. We can also follow a link with probability 5/6 (i.e., rolling anything but a 6), but there is no link from 1 to 2 in our graph. So in total, we move from 1 to 2 with probability a21 = (1/6)(1/6) + (5/6)(0) = 1/36.

• a31: This is the same as the above, i.e., a31 = 1/36.

• a41: This is the only computation that is different because there is a link from 1 to 4. As before, we
teleport with probability 1/6 (chance of rolling a 6), so the teleportation probability from 1 to 4 is
1/36. We will follow a link with probability 5/6 (chance of rolling anything other than a 6), and a
1/1 chance of linking from 1 to 4 since 4 is the only link we can follow from 1. Note that if 1 had
k outgoing edges, then the chance of linking to any of those pages would then be 1/k. Therefore the
total probability of going from 1 to 4 is a41 = (1/6)(1/6) + (5/6)(1) = 31/36.

• a51: This is the same as a21.

• a61: This is the same as a21.

Summarizing, the first column of our page rank matrix is (1/36, 1/36, 1/36, 31/36, 1/36, 1/36)^T. Computing all entries by the same procedure as above, we get the page rank matrix:

A =
[  1/36  31/36  31/36   1/36   1/36  6/36 ]
[  1/36   1/36   1/36  11/36   1/36  6/36 ]
[  1/36   1/36   1/36  11/36  16/36  6/36 ]
[ 31/36   1/36   1/36   1/36   1/36  6/36 ]
[  1/36   1/36   1/36  11/36   1/36  6/36 ]
[  1/36   1/36   1/36   1/36  16/36  6/36 ]

The computation of the last column is a bit special since page 6 has no links at all. Therefore, from page 6 you can only teleport, and so with probability 1 you will teleport from page 6, and each page has an equal probability of being teleported to. Therefore, ai6 = 1·(1/6) = 1/6 for all i. You should compute several entries by hand to make sure you understand the procedure. We make several key observations about this matrix and see how the theorems we saw in this chapter help.

Important Observations

• Teleportation guarantees that our matrix A will be positive since every entry will be at least 1/36.

• Since the entries in column j are the probabilities of going from page j to the others, the entries must
add up to 1, so A will be Markov. Check that all columns of the A in our example add up to 1. This
also provides shortcuts to the computation.

Since the page rank matrix A is positive Markov, Perron’s theorem applies. Using Julia we can compute
its eigenvalues and eigenvectors.

julia> using LinearAlgebra

julia> A = [1/36 31/36 31/36 1/36 1/36 6/36;
1/36 1/36 1/36 11/36 1/36 6/36;
1/36 1/36 1/36 11/36 16/36 6/36;
31/36 1/36 1/36 1/36 1/36 6/36;
1/36 1/36 1/36 11/36 1/36 6/36;
1/36 1/36 1/36 1/36 16/36 6/36]

6×6 Array{Float64,2}:
 0.0277778  0.861111   0.861111   0.0277778  0.0277778  0.166667
 0.0277778  0.0277778  0.0277778  0.305556   0.0277778  0.166667
 0.0277778  0.0277778  0.0277778  0.305556   0.444444   0.166667
 0.861111   0.0277778  0.0277778  0.0277778  0.0277778  0.166667
 0.0277778  0.0277778  0.0277778  0.305556   0.0277778  0.166667
 0.0277778  0.0277778  0.0277778  0.0277778  0.444444   0.166667

julia> eigvals(A)
6-element Array{Complex{Float64},1}:
    -0.2977044274773357 - 0.6329450281890806im
    -0.2977044274773357 + 0.6329450281890806im
   -0.22223919020536226 + 0.0im
 -5.782547471497545e-18 + 0.0im
    0.12320360071558936 + 0.0im
     1.0000000000000013 + 0.0im

julia> eigvecs(A)
6×6 Array{Complex{Float64},2}:
  0.233465+0.505988im    0.233465-0.505988im    0.122729+0.0im  …   0.20872+0.0im    -0.599066+0.0im
 0.0893727-0.243577im   0.0893727+0.243577im     0.48738+0.0im     0.0323265+0.0im   -0.253028+0.0im
  0.198012-0.133645im    0.198012+0.133645im   -0.426388+0.0im      0.141653+0.0im   -0.358456+0.0im
 -0.691375-0.0im        -0.691375+0.0im        -0.108766+0.0im      0.443701+0.0im   -0.588717+0.0im
 0.0893727-0.243577im   0.0893727+0.243577im     0.48738+0.0im     0.0323265+0.0im   -0.253028+0.0im
 0.0811519+0.114811im   0.0811519-0.114811im   -0.562336+0.0im  …  -0.858726+0.0im   -0.194924+0.0im

(Julia omits one column of the eigenvector output from the display, indicated by the “…”.)

The dominant eigenvalue of A is listed at the end in the list of eigenvalues and its eigenvector is listed last in the list of eigenvectors. Observe that the dominant eigenvalue is 1 with eigenvector

(−0.599066, −0.253028, −0.358456, −0.588717, −0.253028, −0.194924)^T.

This eigenvector is not positive, but since any scaling of an eigenvector is again an eigenvector of the same eigenvalue, we can scale the given eigenvector by −1 and take the dominant eigenvector to be

(0.599066, 0.253028, 0.358456, 0.588717, 0.253028, 0.194924)^T,

which is positive. Let’s now scale once more by dividing by the sum of the coordinates to get a vector of probabilities, and set

uA = (0.26658, 0.11259, 0.15951, 0.26197, 0.11259, 0.08674)^T.

Definition 3.5.1. We call uA the page rank vector of our system of webpages. It is the dominant eigenvector of the page rank matrix and is always scaled to be a vector of probabilities that add up to 1.

Now here is how the page rank algorithm works. We assume all pages are equally important at the beginning, so our initial state vector, whose entries rank the importance of each webpage at time 0, is w0 = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6)^T. Let’s think about what Aw0 means. Its first entry is the dot product of the first row of A with w0. The first row of A records the probabilities with which all the different pages transfer to page 1. Our model assumes that the importance of a page is based on the importance of the pages that can lead to this page (either via a direct link or by teleportation). Therefore, the first entry of Aw0 is the importance page 1 has after one iteration of teleporting and linking from all vertices. In other words, it is the importance that page 1 inherits from other pages after one iteration. Applying the same logic to all entries of Aw0, we see that w1 = Aw0 is the vector of rankings of pages after one iteration. Continuing, we see that wk = A^k w0 is the vector of rankings after k iterations. Therefore, by Theorem 3.2.5, we have lim_{k→∞} wk = c1(1^k)uA. That is, the importance of webpages, in the long run, is a positive multiple of the dominant eigenvector uA of A, and in particular, proportional to uA. Looking at uA in our example, the most important page in our system is webpage 1, the second most important is webpage 4, and so on.
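A Julia sketch of this iteration, reusing the page rank matrix A entered above (the exponent 100 is an arbitrary “large” k):

w0 = fill(1/6, 6)      # all pages start out equally important
w  = A^100 * w0        # wk = A^k * w0 for a large k

println(w)             # ≈ (0.26658, 0.11259, 0.15951, 0.26197, 0.11259, 0.08674), i.e., the page rank vector uA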

Chapter 1

Eigenvalues and Diagonalization

1.1 Eigenvalues and Eigenvectors

Consider the vector space R^{m×n} of all m×n matrices with entries in R. Being a vector space means that the elements of R^{m×n} (which are matrices) satisfy the following under addition and scalar multiplication:

1. for all A, B ∈ R^{m×n}, their sum A + B ∈ R^{m×n},

2. for any A ∈ R^{m×n} and scalar r ∈ R, the product rA ∈ R^{m×n}, and

3. the zero matrix 0 ∈ R^{m×n}.

The set of all square matrices of size n×n is denoted as R^{n×n}. The set of all diagonal matrices of size n×n is a subspace of R^{n×n}. Every subspace is a vector space – it just happens to live in a bigger vector space.

A matrix A ∈ R^{m×n} represents the linear transformation from R^n to R^m that sends x ∈ R^n to Ax ∈ R^m, written as x ↦ Ax. It is usual to denote the linear transformation also by A and write A : R^n → R^m such that x ↦ Ax. Note that A is used both to denote the linear transformation and its matrix representation.

Throughout this chapter we consider a square matrix A ∈ R^{n×n}. Then both x and its image Ax live in R^n. Normally the vector x and its image Ax are not related in any simple way, but for some vectors x, it may happen that Ax is simply a scaling of x. Such vectors tell us about parts (subspaces) of the domain R^n that only scale (and hence remain fixed) under the action of A. This is important information about the linear transformation A and brings us to the notion of eigenvalues and eigenvectors.

Definition 1.1.1. Let A ∈ R^{n×n}. If there exists a non-zero vector x ∈ R^n and some scalar λ ∈ R such that Ax = λx, we say that x is an eigenvector of A with eigenvalue λ.

Question: Should 0 ∈ R^n be allowed as an eigenvector of A? If yes, what would be its eigenvalue? Do you see why the above definition requires eigenvectors to be non-zero?

Definition 1.1.2. The eigenspace of eigenvalue λ, denoted as Eλ, is the set of all vectors x ∈ R^n that are eigenvectors of A with eigenvalue λ. Mathematically, Eλ = {x ∈ R^n : Ax = λx}.

It follows from the definition that Eλ = {x ∈ R^n : (A − λI)x = 0} = Null(A − λI), where the notation Null(B) denotes the nullspace of the matrix B. Since Eλ is the nullspace of a matrix, it is a subspace of R^n. Any vector in Eλ only scales by λ under the linear transformation A.

The nullspace of a square matrix is non-zero if and only if the matrix is singular (not invertible), which is if and only if the determinant of the matrix is zero. Therefore, Eλ = Null(A − λI) ≠ {0} if and only if det(A − λI) = 0.

Proposition 1.1.3. A scalar λ ∈ R is an eigenvalue of A if and only if det(A − λI) = 0.

This proposition enables the computation of all eigenvalues of A: Let the variable t denote an unknown eigenvalue and set up the equation det(A − tI) = 0. On the left is a polynomial in t of degree n. Its roots are the eigenvalues of A. The polynomial det(A − tI) is called the characteristic polynomial of A.

Question: Do you see why p(t) = det(A − tI) is a polynomial in t of degree n? Hint: Think of the cofactor formula for computing a determinant. We’ll see more of this soon.

Example 1.1.4. Let A ∈ R^{2×2} be the matrix that represents reflection in R^2 about the line y = x. This is a linear transformation (check if you wish). Recall how one finds the matrix of a linear transformation.

Matrix of a linear transformation

Given a linear transformation T : R^n → R^m, the matrix for T can be written as

[T(e1) T(e2) ⋯ T(en)]

where {e1, . . . , en} is the standard basis of R^n. Remember: A linear transformation is completely determined by where it takes a basis of the domain.

Using this fact, we find that the matrix representing the above reflection is A = [0 1; 1 0]. Before we compute the eigenvalues and eigenvectors of this matrix, let us see if we can just find them using geometry. Which vectors are only scaled by the above reflection? There are two natural guesses based on geometry:

1. If x lies on the line y = x, then it remains unchanged under reflection about the line y = x, and hence every non-zero vector of the form r[1; 1], where r ∈ R, is an eigenvector of A with eigenvalue 1.

2. If x is perpendicular to the line y = x, then it goes to −x under this reflection. Therefore, every non-zero vector of the form r[−1; 1], where r ∈ R, is an eigenvector of A with eigenvalue −1.

We can now double check our predictions by solving the characteristic polynomial equation:

• Step 1: Compute that det(A − λI) = λ^2 − 1.

• Step 2: Solving det(A − λI) = λ^2 − 1 = 0 we find that the eigenvalues of A are λ = ±1.

• Step 3: For each λ, we find a basis of Eλ = Null(A − λI).

λ = 1 ⟹ A − I = [−1 1; 1 −1], which has echelon form [−1 1; 0 0].
This gives the linear system −x1 + x2 = 0 ⟹ x1 = x2, and E1 = Span{[1; 1]}.

λ = −1 ⟹ A + I = [1 1; 1 1], which has echelon form [1 1; 0 0].
This gives the linear system x1 + x2 = 0 ⟹ x1 = −x2, and E−1 = Span{[−1; 1]}.

Indeed, we had found all the eigenvalues and eigenvectors of A using geometry.
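A one-line numerical check of this example in Julia (a sketch):

using LinearAlgebra

A = [0 1; 1 0]               # reflection about the line y = x
println(eigvals(A))          # [-1.0, 1.0]
println(eigvecs(A))          # columns proportional (up to sign) to (-1, 1) and (1, 1)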

1.1.1 The characteristic polynomial

We now look at the characteristic polynomial more closely. Recall that p(λ) = det(A − λI) is a degree n polynomial in the variable λ and its roots are the eigenvalues of A. (It is usual to use λ for the variable t when we talk about characteristic polynomials, but you should remember that this λ is a variable and not any particular eigenvalue.) By roots of a polynomial p(t) we mean the values of t for which p(t) = 0. What can we say about the roots of the characteristic polynomial p(λ) or, more generally, a univariate polynomial p(t) of degree n? To fully understand this we need to recall complex numbers.

The imaginary unit i is defined to be a square root of −1, i.e., i^2 = −1. An expression of the form a + ib where a, b ∈ R is called a complex number, e.g., 2 + 3i or √5 − 2i. The set of all complex numbers is denoted C. Real numbers are special cases of complex numbers, i.e., R = {a + 0i : a ∈ R}. Therefore, R ⊆ C. The conjugate of a complex number a + ib is a − ib, and the conjugate of a − ib is a + ib. Check that the conjugate of a real number a is a again.

We could do this whole course over C but for simplicity, and for the sake of geometry, we will stick to R. Almost all the results we will see also hold for complex matrices with slight modifications. We will revisit complex matrices and vector spaces at the end of this course.

Fundamental Theorem of Algebra

Let p(t) be a degree n polynomial in the variable t with complex numbers as coefficients. Then p(t) has, counting multiplicities, precisely n complex roots µ1, . . . , µn. Equivalently, there is some constant c ∈ C such that p(t) can be factored as

p(t) = c(t − µ1)(t − µ2) ⋯ (t − µn).

We make a few comments:

1. The multiplicity of a root µi is the number of times it appears as a root of p(t). If we denote the multiplicity of µi by αi, then we can factor p(t) as

p(t) = c(t − µ1)^{α1}(t − µ2)^{α2} ⋯ (t − µk)^{αk}

where α1 + α2 + ⋯ + αk = n.

2. If all the coefficients of the polynomial p(t) are real, and a + ib is a complex root of p(t), then so is a − ib, which is called the complex conjugate of a + ib. Do you see why? Try a small example.

Example 1.1.5. (a) p(t) = t^2 − 1 = 0 ⟹ t = ±1 and t^2 − 1 = (t − 1)(t + 1).
(b) p(t) = 2t^2 + 1 = 0 ⟹ t^2 = −1/2 ⟹ t = ±i/√2 and 2t^2 + 1 = 2(t − i/√2)(t + i/√2).

Notice that the coefficients of both polynomials above are real numbers. In the first case, p(t) has only real roots and in the second case it has two complex roots that are conjugates.

3. If p(λ) = det(A − λI) is the characteristic polynomial of the square matrix A ∈ R^{n×n}, then all coefficients of p(λ) are real. However, some roots of p(λ) may be complex, and if so, they come in conjugate pairs. The factorization in this case looks like

p(λ) = det(A − λI) = (−1)^n(λ − λ1)(λ − λ2) ⋯ (λ − λn).

Question: Do you see why (−1)^n appears as the constant in front of the factorization?
This brings us to the first general fact about eigenvectors and eigenvalues of a matrix A ∈ R^{n×n}.

Theorem 1.1.6. A matrix A ∈ C^{n×n} (in particular, in R^{n×n}) has n eigenvalues (counting multiplicities).

Let’s recap why. The characteristic polynomial p(λ) = det(A − λI) of a matrix A ∈ C^{n×n} is a degree n polynomial with complex coefficients. Therefore, by the Fundamental Theorem of Algebra it has n roots (counting multiplicities) and these roots are the eigenvalues of A.

1.1.2 Arithmetic and geometric multiplicities

Definition 1.1.7. Suppose the characteristic polynomial of A factors as

p(λ) = det(A − λI) = (−1)^n(λ − λ1)^{α1}(λ − λ2)^{α2} ⋯ (λ − λk)^{αk}

where α1 + α2 + ⋯ + αk = n and λ1, . . . , λk are the eigenvalues of A. The algebraic multiplicity of the eigenvalue λi is the exponent αi, denoted as AM(λi).

Definition 1.1.8. The geometric multiplicity of the eigenvalue λi, denoted GM(λi), is the dimension of the eigenspace Eλi, i.e., GM(λi) = dim(Eλi).

These two multiplicities are sometimes different, as the following example shows.

Example 1.1.9. Let A = [8 −9; 4 −4]. Check that p(λ) = (λ − 2)^2 and AM(2) = 2. On the other hand,

A − λI = [8−λ −9; 4 −4−λ] ⟹ A − 2I = [6 −9; 4 −6],

which shows that the columns of A − 2I are linearly dependent. Since it is not the zero matrix, we conclude that rank(A − 2I) = 1, which implies that nullity(A − 2I) = 1. Hence, dim(E2) = dim(Null(A − 2I)) = 1, and GM(2) < AM(2).
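A quick Julia check of this example (a sketch; the comments indicate the expected output up to numerical error):

using LinearAlgebra

A = [8 -9; 4 -4]
println(eigvals(A))          # both eigenvalues ≈ 2 (up to numerical error), so AM(2) = 2
println(rank(A - 2I))        # 1, so nullity(A - 2I) = 1 and GM(2) = 1 < AM(2)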

In general, we have the following relationship between the two multiplicities.

Proposition 1.1.10. For an eigenvalue λ of A ∈ R^{n×n}, GM(λ) ≤ AM(λ) always.

1.1.3 Complex eigenvalues

From the Fundamental Theorem of Algebra we know that some eigenvalues of a real matrix can be complex. Rotation matrices are good examples of real matrices with complex eigenvalues. We have no easy way to see geometry in complex eigenvectors since we can only draw in R^n, but we will develop some machinery later that will provide insight.

Example 1.1.11. Let A denote the linear transformation that rotates any vector in R^2 by π/2. Clearly there is no real vector that is only scaled under rotation by π/2, and so we do not expect any real eigenvectors for the matrix of this linear transformation.

Check that the matrix representing this rotation is

A = [0 −1; 1 0].

In general, the matrix that represents rotation in R^2 by θ is

Rθ = [cos θ  −sin θ; sin θ  cos θ].

We compute the characteristic polynomial of A and see that

det([−λ −1; 1 −λ]) = λ^2 + 1 = 0 ⟹ λ = ±i.

Both eigenvalues are complex in this case. Further, Ei = Span{[i; 1]} and E−i = Span{[1; i]}.

Example 1.1.12. Now suppose B denotes rotation in R^2 by π rather than π/2. Then we do expect real eigenvectors, since every vector x ∈ R^2 will simply flip sign and become −x under this rotation. Hence −1 is an eigenvalue and we might guess that E−1 = R^2. Let’s confirm this with calculations.

We have three ways to compute B: (i) compute B using the standard basis of R^2, or (ii) compute B from the Rθ in the previous example by setting θ = π, or (iii) we could obtain B from A by recalling that the product of matrices corresponds to the composition of linear transformations. Since rotating by π is the same as rotating twice by π/2, we have that

B = A^2 = [−1 0; 0 −1]

and hence,

det([−1−λ 0; 0 −1−λ]) = (−1 − λ)^2 = (λ + 1)^2 = 0 ⟹ λ = −1 with AM(−1) = 2.

Therefore −1 is the only eigenvalue of B, with arithmetic multiplicity 2, and B is the rare rotation matrix with only real eigenvalues. Check that

E−1 = Null(A^2 + I) = Null(−I + I) = Null(O) = R^2,

where O is the 2×2 zero matrix, and so our guess above was correct and GM(−1) = AM(−1) = 2.

1.1.4 Eigenvalues of powers and inverses

We now state a few general facts about eigenvalues and eigenvectors under various matrix operations. You will derive more facts in homework.

1. If λ is an eigenvalue of A then λ^k is an eigenvalue of A^k, and if x is an eigenvector of A with eigenvalue λ then it is also an eigenvector of A^k with eigenvalue λ^k.

Why?

Ax = λx ⟹ A^2x = A(Ax) = Aλx = λAx = λ^2x ⟹ A^3x = A(A^2x) = Aλ^2x = λ^2Ax = λ^3x

Continuing with this process we can see that

A^k x = A(A^{k−1}x) = Aλ^{k−1}x = λ^{k−1}Ax = λ^k x.

Question: does this account for all eigenvalues of A^k?

2. If λ is an eigenvalue of A and A is invertible, then λ^{−1} = 1/λ is an eigenvalue of A^{−1}.

Why? If Ax = λx, we can multiply both sides of this equation on the left by A^{−1} (which we know exists!), and get x = A^{−1}λx, which implies that (1/λ)x = A^{−1}x. Note that again the eigenvalues change but the eigenvectors don’t.
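A small Julia sanity check of both facts (a sketch; the symmetric 2×2 matrix is an arbitrary example, and sort is only used to compare the two lists in the same order):

using LinearAlgebra

A   = [2.0 1.0; 1.0 3.0]
lam = eigvals(A)

println(sort(lam .^ 3))              # cubes of the eigenvalues of A ...
println(sort(eigvals(A^3)))          # ... agree with the eigenvalues of A^3

println(sort(1 ./ lam))              # reciprocals of the eigenvalues of A ...
println(sort(eigvals(inv(A))))       # ... agree with the eigenvalues of A^{-1}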

In compact notation, we write a matrix A ∈ R^{m×n} as

A = (aij) =
[ a11 a12 ⋯ a1n ]
[  ⋮   ⋮      ⋮  ]
[ am1 am2 ⋯ amn ]

Here is a quick notational refresher, which we will use a lot.

Definition 1.1.13. The transpose of a matrix A = (aij) ∈ R^{m×n}, denoted as A^T, is the matrix obtained by swapping rows and columns of A. That is, A^T = (aji) ∈ R^{n×m}.

Example 1.1.14. If A = [1 −1 0; 3 1 −2] then A^T = [1 3; −1 1; 0 −2].

In homework you will have to think about the relationship between the eigenvalues of A and A^T.

1.2 Diagonalization

Definition 1.2.1. A matrix A ∈ R^{n×n} is diagonalizable if there exists an invertible matrix U ∈ R^{n×n} and a diagonal matrix Λ ∈ R^{n×n} (Λ is the capital lambda) such that A = UΛU^{−1}.

Suppose A = UΛU^{−1} where Λ = diag(λ1, λ2, . . . , λn). Then multiplying both sides by U on the right, we get that AU = UΛ. If ui denotes the ith column of U, then this implies that Aui = λiui for each i = 1, . . . , n. Therefore λi, the entry in position (i, i) of Λ, is an eigenvalue of A with eigenvector ui, which is the ith column of U. For A to be diagonalizable, U needs to be invertible, which means that A has to have n linearly independent eigenvectors, i.e., R^n has a basis consisting of eigenvectors of A.

Theorem 1.2.2. A matrix A ∈ R^{n×n} is diagonalizable if and only if it has n linearly independent eigenvectors.

Recall that the eigenvectors of A lie in the eigenspaces of A. We saw earlier that the dimension of the eigenspace Eλ can be smaller than the multiplicity of λ. If this happens for some eigenvalue λ, then we will not have enough linearly independent eigenvectors of A to span R^n and A will not be diagonalizable. So we can write down a finer (equivalent) condition for diagonalizability as follows.

Theorem 1.2.3. A ∈ R^{n×n} is diagonalizable if and only if AM(λ) = GM(λ) for all eigenvalues λ.
We now look at three examples:

Example 1.2.4. Recall A = [0 1; 1 0] from Example 1.1.4. The eigenvalues of this matrix were λ = ±1 with eigenspaces E1 = Span{[1; 1]} and E−1 = Span{[−1; 1]}. Using the basis vectors for our eigenspaces we obtain the eigenbasis of R^2 given by {[1; 1], [−1; 1]}. A is therefore diagonalizable:

A = [1 −1; 1 1] [1 0; 0 −1] [1 −1; 1 1]^{−1}.

In this example, AM(λ) = GM(λ) for both eigenvalues λ = ±1.

Example 1.2.5. Now consider the matrix B = [−1 0; 0 −1] from Example 1.1.12. This matrix has only one eigenvalue λ = −1, but since E−1 = R^2 it has two linearly independent eigenvectors. Hence B is diagonalizable. For instance, we could take the standard basis of R^2 as two eigenvectors in E−1 and write

B = [−1 0; 0 −1] = [1 0; 0 1] [−1 0; 0 −1] [1 0; 0 1].

Example 1.2.6. Here is an example of a matrix that is not diagonalizable. Let A = [6 −1; 1 4]. Computing eigenvalues and eigenvectors, we see the following:

det([6−λ −1; 1 4−λ]) = λ^2 − 10λ + 25 = (λ − 5)^2 = 0 ⟹ λ = 5 with AM(5) = 2.

Since the eigenvalues are not distinct, A may not be diagonalizable. Computing the eigenspace of λ = 5,

E5 = Null([1 −1; 1 −1]) = Null([1 −1; 0 0]) = Span{[1; 1]},

so GM(5) = 1 and AM(5) ≠ GM(5). This means A is not diagonalizable.

In general, it takes some work to see if a matrix is diagonalizable. Here is a special case in which a matrix is always diagonalizable.

Theorem 1.2.7. If A ∈ R^{n×n} has n distinct eigenvalues, then A is diagonalizable.

Proof. First note that if λi is an eigenvalue of A it must have some non-zero eigenvector x such that Ax = λix. Moreover, for any x ∈ Eλi, we know that cx ∈ Eλi for any c ∈ R; thus if x ∈ Eλi then Span{x} ⊆ Eλi. This means that for each eigenvalue λi, we have GM(λi) ≥ 1.

If A has n distinct eigenvalues, then we can list them out as λ1, λ2, . . . , λn and are guaranteed that λi ≠ λj for all i ≠ j. This means that AM(λi) = 1 for all i. From Proposition 1.1.10 we can conclude that

1 ≤ GM(λi) ≤ AM(λi) = 1,

hence AM(λi) = GM(λi) for all i. The result now follows from Theorem 1.2.3.

1.2.1 Applications of Diagonalization

Powers of matrices

Powers of diagonalizable matrices are very easy to compute. If A = UΛU^{−1} then

A^k = (UΛU^{−1})(UΛU^{−1}) ⋯ (UΛU^{−1})   (k times)   = UΛ^k U^{−1}.

Recall that a power of a diagonal matrix is easy to compute: Λ^k = diag(λ1^k, λ2^k, . . . , λn^k).

For the A in Example 1.1.4, we see that

A^53 = [0 1; 1 0]^53 = [1 −1; 1 1] [1 0; 0 −1]^53 [1 −1; 1 1]^{−1} = [1 −1; 1 1] [1 0; 0 −1] [1 −1; 1 1]^{−1} = [0 1; 1 0].
A diagonalizable matrix behaves like a diagonal matrix

The action of a diagonal matrix Λ is very easy to understand, since it sends x to Λx = (λ1x1, λ2x2, . . . , λnxn)^T. Check this on an example. Diagonalizable matrices behave like diagonal matrices if we are willing to change bases, as we now explain. This is perhaps the greatest use of diagonalization!

Suppose A is diagonalizable and A = UΛU^{−1}. This means that the columns of U, which we denote by u1, . . . , un, form a basis for R^n. Take a vector x ∈ R^n represented in the standard basis e1, . . . , en. Recall that the vector of coordinates of x in the basis U = {u1, . . . , un} is the vector y such that Uy = x or, alternately, y = U^{−1}x. This formula is extremely important to remember!

Change of basis formula

If a point in R^n has coordinates x with respect to the standard basis, then its coordinates with respect to the basis U = {u1, . . . , un} of R^n are y = U^{−1}x, where U is the matrix whose ith column is ui.

Now suppose we use the basis U = {u1, . . . , un} instead of the standard basis of R^n. Then the coordinates of x with respect to the basis U are y = U^{−1}x and the coordinates of Ax with respect to the basis U are

U^{−1}Ax = U^{−1}UΛU^{−1}x = ΛU^{−1}x = Λy.

Therefore in the U basis, a vector y is sent to Λy, which is rather simple to understand. In other words, the action of the matrix A is the same as the action of the diagonal matrix Λ if we are willing to change the basis of R^n from the standard basis to the eigenbasis U of A.

1.3 Similar Matrices

We finish the chapter with a completely new notion.

Definition 1.3.1. Two matrices A and C are similar, denoted as A ∼ C, if there exists an invertible matrix B such that

A = BCB^{−1}.

There is one main example that probably comes to mind.

Example 1.3.2. If A is diagonalizable, then A = UΛU^{−1} for some U, and hence A ∼ Λ.

Theorem 1.3.3. If A ∼ C then A and C have the same eigenvalues.

Proof. If A ∼ C then there exists an invertible matrix B such that A = BCB^{−1}. Our goal is to show that A and C have the same eigenvalues. Suppose Cx = λx. Then, since AB = BC, we know that ABx = BCx, hence

A(Bx) = B(Cx) = Bλx = λBx.

This implies that λ is an eigenvalue of A (with eigenvector Bx, which is non-zero since B is invertible and x ≠ 0), hence all eigenvalues of C are also eigenvalues of A. Similarly we could start with an eigenvalue of A and show that it is an eigenvalue of C. Therefore, we have that the set of eigenvalues of C is a subset of the set of eigenvalues of A and the set of eigenvalues of A is a subset of the set of eigenvalues of C. This means that the eigenvalues of A and C are the same. In general, if F and G are two sets and F ⊆ G and G ⊆ F, then F = G. This is because the assumptions imply that F ⊆ G ⊆ F, but since the left and right ends are equal, it must be that there is equality throughout.

1.4 Determinants and Permutations

In this section we will see a fact about the determinant of a square matrix that you perhaps did not see in Math 308. This will help us understand the characteristic polynomial a bit more.

You might know that the number called n factorial (written as n!) is 1×2×⋯×(n−1)×n. A permutation of the numbers 1, 2, . . . , n is an ordering of the numbers. For example, there are 2 = 2! permutations of 1, 2, namely 12 and 21 (we write the numbers in a string with no commas separating them). There are 6 = 1×2×3 = 3! permutations of 1, 2, 3, namely 123, 132, 213, 231, 312, 321. There are 24 = 4! permutations of 1, 2, 3, 4. In general, there are n! = 1×2×⋯×(n−1)×n permutations of 1, 2, . . . , n. You can also think of a permutation as a function π : {1, 2, 3, . . . , n} → {1, 2, 3, . . . , n}. For example, the permutation 1432 is the function π that sends 1 ↦ 1, 2 ↦ 4, 3 ↦ 3, 4 ↦ 2; alternately, π(1) = 1, π(2) = 4, π(3) = 3, π(4) = 2.

It turns out that there is a close connection between all permutations of n (which is a short way of saying all permutations of 1, 2, . . . , n) and the determinant of a square matrix. Let’s illustrate on the following 3×3 symbolic matrix:

A =
[ a11 a12 a13 ]
[ a21 a22 a23 ]
[ a31 a32 a33 ]

Computing the determinant of A by cofactor expansion (say along the first row) we have

det(A) = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)
       = a11a22a33 − a11a23a32 − a12a21a33 + a12a23a31 + a13a21a32 − a13a22a31.

The above expression is a polynomial of degree 3 in the variables a11, . . . , a33. It consists of 6 terms and each term is the product of a coefficient, which is +1 or −1, and a monomial, which is a product of variables. For example, the second term in the above polynomial is −a11a23a32, whose coefficient is −1 and monomial is a11a23a32. Notice the following facts:

• There are 3! terms in the determinant of the above 3×3 matrix.

• Each monomial consists of three variables, one from each row and column of A (i.e., two variables in a monomial do not come from the same row or same column and all rows and columns appear).

This makes us suspect that there is a term in the determinant for each permutation of 3. Let’s check. First, note that we have already written each term so that the rows the variables come from are in the order 1, 2, 3. We can always do this since every row contributes exactly one variable to a monomial and multiplication of real and complex numbers is commutative. Once we have fixed the order of the rows to be 1, 2, 3, notice that the order in which the column indices appear in each monomial is a permutation of 3, and all 6 = 3! permutations appear in the terms of the determinant. Let’s check on our example:

a11a22a33 (123) − a11a23a32 (132) − a12a21a33 (213) + a12a23a31 (231) + a13a21a32 (312) − a13a22a31 (321)

The coefficient that appears in front of a monomial is also determined by the permutation of the column indices. A pair i, j is said to be inverted in a permutation π if i < j but π(i) > π(j). For example, in the permutation 1432, 2 and 4 are inverted since π(2) = 4 > π(4) = 2 even though 2 < 4. We say that a permutation is even if it has an even number of inverted pairs and odd if it has an odd number of inverted pairs. The sign of a permutation is 1 if the permutation is even and −1 if the permutation is odd.

Notice that the coefficient in front of the term in the determinant indexed by a permutation π is exactly the sign of π. For example, take the third term −a12a21a33, which is indexed by the permutation 213. There is exactly one inversion in this permutation, which is given by the pair (1, 2) since π(1) = 2 > π(2) = 1. The pairs (1, 3) and (2, 3) are not inverted in this permutation. Therefore, this is an odd permutation and its sign is −1, which is the coefficient of the term −a12a21a33. Check every term in the above determinant and make sure you agree that the coefficient in front of the term in the determinant indexed by a permutation π is exactly the sign of π. Let’s check that the same behavior can be seen in 2×2 matrices:
Example 1.4.1.

det([a11 a12; a21 a22]) = a11a22 − a12a21.

The first term corresponds to the permutation 12 with sign 1, and the second term corresponds to the permutation 21 with sign −1.

This brings us to the following general theorem (which we will not prove):

Theorem 1.4.2. If A = (aij) ∈ R^{n×n}, then

det(A) = Σ_{π a permutation of n} sign(π) a1π(1) a2π(2) ⋯ anπ(n).
This formula helps us understand the characteristic polynomial of a matrix even more. Let’s take the 3×3 matrix A from above again. Then:

p(λ) = det(A − λI) = det([a11−λ a12 a13; a21 a22−λ a23; a31 a32 a33−λ]).

The term in the determinant indexed by the permutation 123 is (a11 − λ)(a22 − λ)(a33 − λ) = −λ^3 + ⋯. No other term in the determinant contributes a λ^3 or even a λ^2, only lower powers of λ, so the λ^3 we got from the first term will not cancel out (write out the determinant and check!). Therefore, p(λ) is a polynomial in λ of degree 3 and the coefficient of λ^3 is (−1)^3 = −1.

Do you now see why, if A ∈ R^{n×n}, then p(λ) = det(A − λI) is a degree n polynomial in λ and its leading coefficient is (−1)^n? Hint: As above, look at the term in the determinant indexed by the permutation 123⋯n. What is the coefficient of this term and what is the highest power of λ in it, and why does this highest power not cancel out when you take all terms together?