# ...
\forall T \in \R^(2 \times 2), \forall x,y,y' \in \R,
# replace \undefined with explicit matrix from part (a)
T = \undefined
\implies
# ...
# In the conclusion below, you may replace y'
# with the correct expression; you may NOT replace x.
T [x;y] = [x;y']
Determine an initial state [x; y] such that there is no change in the state from one generation to the next.
\forall T \in \R^(2 \times 2), \forall x,y \in \R,
# replace \undefined with explicit matrix from part (a)
T = \undefined
# find the explicit vector
\and [x;y] = \undefined
\implies
# ...
T [x;y] = [x;y]
Suppose that in the fourth generation of an instance of this system (that is, after the transformation has
been applied three times), we have 2 predators and 56 prey animals. How many predators and prey animals were
in the system in the first generation (before any transformations were applied)? Let [x; y] represent the
state of the system in the first generation. Set up a matrix equation that involves [x; y] and
matrix multiplication, and solve it to obtain the answer.
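As a sketch of the setup (in Python/numpy, outside the course's verifier notation), the snippet below solves the matrix equation T^3 [x; y] = [2; 56] for the first-generation state. The matrix T shown is only a placeholder, since the actual transformation comes from part (a).

import numpy as np

# Placeholder transition matrix; the real T comes from part (a).
T = np.array([[0.5, 0.1],
              [1.0, 0.8]])

x4 = np.array([2.0, 56.0])      # state in the fourth generation
# T applied three times to the first-generation state gives x4,
# so we solve (T^3) x1 = x4.
x1 = np.linalg.solve(np.linalg.matrix_power(T, 3), x4)
print(x1)                        # first-generation [predators; prey]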
Matrix operations and their interpretations
The following table summarizes the matrix operations that we are considering in this course.
| operation | definition | restrictions | general properties |
| --- | --- | --- | --- |
| M1 + M2 | component-wise | matrices must have the same number of rows and columns | commutative; associative; has identity (matrix with all 0 components); has inverse (multiply matrix by -1); scalar multiplication is distributive |
| M1 ⋅ M2 | row-column-wise dot products | columns in M1 = rows in M2; rows in M1 ⋅ M2 = rows in M1; columns in M1 ⋅ M2 = columns in M2 | associative; has identity I (1s on the diagonal and 0s elsewhere); distributive over matrix addition; not commutative in general; no inverse in general |
| M -1 | | columns in M = rows in M; matrix is invertible | M -1 ⋅ M = M ⋅ M -1 = I |
The following tables list some high-level intuitions about how matrix operations can be understood.
| level of abstraction | interpretations of multiplication of a vector by a matrix |
| --- | --- |
| applications | transformation of system states; extraction of information about system states; computing properties of combinations or aggregations of objects (or system states); conversion of system state observations from one set of dimensions to another |
| geometry | "moving" vectors in a space (stretching, skewing, rotating, reflecting); projecting vectors; taking a linear combination of two vectors; reinterpreting vector notation as referring to a collection of non-canonical vectors |
| level of abstraction | interpretations of multiplication of two matrices |
| --- | --- |
| applications | composition of system state transformations or conversions |
| geometry | sequencing of motions of vectors within a space (stretching, skewing, rotating, reflecting) |
| level of abstraction | invertible matrix | singular matrix |
| --- | --- | --- |
| applications | reversible transformation of system states; extraction of complete information uniquely determining a system state | irreversible transformation of system states; extraction of incomplete information about a system state |
| geometry | reversible transformation or motion of vectors in a space | projection onto a strict subset of a set of vectors (space) |
| symbolic | reversible transformation of information numerically encoded in a matrix (example of such information: a system of linear equations encoded as a matrix) | irreversible/"lossy" transformation of information encoded in a matrix |
Suppose we interpret multiplication of a vector by a matrix M as a function from vectors to vectors:
f(v) = M v.
Fact: For any M, if f(v) = M v, then f is always a function because M v has only one possible result for a given M and v (i.e.,
matrix multiplication is deterministic).
Fact: If f(v) = M v, then f is invertible if M is an invertible matrix. The inverse of f is then defined to be:
f -1(v) = M -1 v.
Notice that f -1 is a function because M -1 v only has one possible result. Notice that f -1 is the inverse of f because
f -1(f(v)) = M -1 M v = I v = v.
Fact: If the columns of a matrix M are linearly dependent and f(v) = M v, then M cannot have an inverse. We consider the case
in which M ∈ R2×2. Suppose we have that
M = [a, b; c, d].
If the columns of M are linearly dependent, then we know that there is some s ∈ R such that
[b; d] = s ⋅ [a; c].
This means that we can rewrite M:
M = [a, s⋅a; c, s⋅c].
Since matrix multiplication can be interpreted as taking a linear combination of the column vectors, this means that for x,y ∈ R,
M ⋅ [x; y] = x ⋅ [a; c] + y ⋅ [s⋅a; s⋅c] = (x + s⋅y) ⋅ [a; c].
But this means that for any two vectors (x,y) and (x',y'), if x + sy = x' + sy' then multiplying by M will lead to the same result.
Thus, f is a function that takes two different vector arguments and maps them to the same result. If we interpret f as a relation and take
its inverse f -1, f -1 cannot be a function. Thus, M cannot have an inverse (if it did, then f -1 would be a function).
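A small numpy check of this argument, using an illustrative 2×2 matrix whose second column is 3 times the first (s = 3 in the notation above):

import numpy as np

# Columns are linearly dependent: second column = 3 * first column.
M = np.array([[1.0, 3.0],
              [2.0, 6.0]])

v1 = np.array([4.0, 0.0])   # x + 3y = 4
v2 = np.array([1.0, 1.0])   # x + 3y = 4 as well
print(M @ v1, M @ v2)       # both give [4. 8.], so f(v) = M v is not injective
print(np.linalg.det(M))     # 0.0: M has no inverse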
Matrix properties
The following are subsets of Rn×n that are of interest in this course because they
correspond to transformations and systems that have desirable or useful properties. For some of these
sets of matrices, matrix multiplication and inversion have properties that they do not in general.
| subset of Rn×n | definition | closed under matrix multiplication | properties of matrix multiplication | inversion |
| --- | --- | --- | --- | --- |
| identity matrix | ∀ i,j, Mij = 1 if i = j, 0 otherwise | closed | commutative, associative, distributive with addition, has identity | has inverse (itself); closed under inversion |
| elementary matrices | can be obtained via an elementary row operation from I: add a multiple of one row of the matrix to another row; multiply a row by a nonzero scalar; swap two rows of the matrix (the third is a combination of the first two operations) | | associative, distributive with addition, have identity | have inverses; closed under inversion |
| scalar matrices | ∃ s ∈ R, ∀ i,j, Mij = s if i = j, 0 otherwise | closed | commutative, associative, distributive with addition, have identity | nonzero members have inverses; closed under inversion |
| diagonal matrices | ∀ i,j, Mij = 0 if i ≠ j | closed | associative, distributive with addition, have identity | members with all nonzero diagonal entries have inverses; closed under inversion |
| matrices with constant diagonal | ∀ i,j, Mii = Mjj | | associative, distributive with addition, have identity | |
| symmetric matrices | ∀ i,j, Mij = Mji | | associative, distributive with addition, have identity | |
| symmetric matrices with constant diagonal | ∀ i,j, Mii = Mjj and Mij = Mji | closed | commutative, associative, distributive with addition, have identity | |
| upper triangular matrices | ∀ i,j, Mij = 0 if i > j | closed | associative, distributive with addition, have identity | not invertible in general; closed under inversion when invertible |
| lower triangular matrices | ∀ i,j, Mij = 0 if i < j | closed | associative, distributive with addition, have identity | not invertible in general; closed under inversion when invertible |
| invertible matrices | M -1 M = M M -1 = I | closed | associative, distributive with addition, have identity | all members have inverses; closed under inversion |
| square matrices | all of Rn×n | closed | associative, distributive with addition, have identity | |
Two facts presented in the above table are worth noting.
Fact: Suppose A is invertible. Then the inverse of A -1 is A, because A A -1 = I. Thus, (A -1) -1 = A.
Fact: If A and B are invertible, so is AB. That is, invertible matrices are closed under matrix multiplication. We can show
this by using the associativity of matrix multiplication. Since A and B are invertible, there exist B -1 and A -1 such that:
(B -1 A -1) (A B) = B -1 (A -1 A) B = B -1 I B = B -1 B = I.
Thus, since there exists a matrix (B -1 A -1) such that (B -1 A -1) A B = I, (B -1 A -1) is the inverse of AB.
Another fact not in the table is also worth noting.
Fact: Any product of a finite number of elementary row matrices is invertible. This fact follows from the fact that all
elementary matrices are invertible, and that the set of invertible matrices is closed under multiplication.
Given the last fact, we might ask whether the opposite is true: are all invertible matrices the product of a finite number of elementary
matrices? The answer is yes, as we will see further below.
Solving the equation M v = w for M with various properties
Recall that for v,w ∈ Rn and M ∈ Rn×n, an equation of the following form
can represent a system of equations:
M v = w
Notice that if M is invertible, we can solve for v by multiplying both sides by M -1. More generally, if M is a member of
some of the sets in the above table, we can find straightforward algorithms for solving such an equation for v.
| M is ... | algorithm to solve M v = w for v |
| --- | --- |
| the identity matrix | w is the solution |
| an elementary matrix | perform a row operation on M to obtain I; perform the same operation on w |
| a scalar matrix | divide the components of w by the scalar |
| a diagonal matrix | divide each component of w by the corresponding matrix component |
| an upper triangular matrix | start with the last entry in v, which is easily obtained; move backwards through v, filling in the values by substituting the already known variables |
| a lower triangular matrix | start with the first entry in v, which is easily obtained; move forward through v, filling in the values by substituting the already known variables |
| the product of a lower triangular matrix and an upper triangular matrix | combine the algorithms for upper and lower triangular matrices in sequence (see example below) |
| an invertible matrix | compute the inverse and multiply w by it |
Example: We consider an equation M v = w where M is a diagonal matrix (the identity matrix and all scalar matrices are also diagonal
matrices).
Example: We consider an equation M v = w where M is a lower triangular matrix.
Example: Suppose that M = L U where L is lower triangular and U is upper triangular. How can we solve M v = w?
First, note that because matrix multiplication is associative, we have
M v = (L U) v = L (U v) = w.
We introduce a new vector for the intermediate result U v, which we call u. Now, we have a system of two matrix equations:
L u = w,
U v = u.
We first solve for u using the algorithm for lower triangular matrices, then we solve for v using the algorithm for upper triangular
matrices.
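A minimal numpy sketch of this two-stage process, using illustrative 2×2 factors L and U (not taken from the notes):

import numpy as np

# Hypothetical lower and upper triangular factors of M = L @ U.
L = np.array([[2.0, 0.0],
              [3.0, 1.0]])
U = np.array([[1.0, 4.0],
              [0.0, 5.0]])
w = np.array([6.0, 14.0])

# Solve L u = w by forward substitution, then U v = u by back substitution.
u = np.zeros(2)
u[0] = w[0] / L[0, 0]
u[1] = (w[1] - L[1, 0] * u[0]) / L[1, 1]

v = np.zeros(2)
v[1] = u[1] / U[1, 1]
v[0] = (u[0] - U[0, 1] * v[1]) / U[0, 0]

print(v, (L @ U) @ v)   # (L U) v reproduces w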
Example: The inverse of a matrix in R2×2 can be computed as follows.
[a, b; c, d] -1 = [d/(ad-bc), -b/(ad-bc); -c/(ad-bc), a/(ad-bc)]
Thus, if we find that the determinant of a matrix M ∈ R2×2 is nonzero, the algorithm for solving M v = w is
straightforward. Consider the following example.
To solve M v = w for M = [1, 2; 3, 4], we compute M -1 and multiply:
M -1 = [4/((1⋅4)-(2⋅3)), -2/((1⋅4)-(2⋅3)); -3/((1⋅4)-(2⋅3)), 1/((1⋅4)-(2⋅3))] = [-2, 1; 3/2, -1/2],
so v = M -1 ⋅ w.
Row echelon form and reduced row echelon form
We define two more properties that a matrix may possess.
M is in row echelon form iff:
- all nonzero rows are above any rows consisting of all zeroes
- the first nonzero entry (from the left) of a nonzero row is strictly to the right of the first nonzero entry of the row above it
- all entries in a column below the first nonzero entry in a row are zero (the first two conditions imply this)

M is in reduced row echelon form iff:
- M is in row echelon form
- the first nonzero entry in every row is 1; this 1 entry is the only nonzero entry in its column
We can obtain the reduced row echelon form of a matrix using a sequence of appropriately chosen elementary row operations.
Example: Suppose we want to find the reduced row echelon form of the matrix below. We list the steps of the procedure.
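The worked matrix and its reduction steps are not reproduced here. As a stand-in, the following sympy sketch computes a reduced row echelon form directly (reusing the matrix that appears in a later example in these notes); the rref method also reports the pivot columns.

from sympy import Matrix

M = Matrix([[1, 2, 4],
            [5, 6, 8],
            [-3, 1, 4]])
R, pivots = M.rref()   # R is rref M; pivots are the pivot column indices
print(R)
print(pivots)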
Fact: For any matrix M, there may be more than one way to reach the reduced row echelon form using elementary row operations. However,
it is always possible, and the reduced row echelon form of M is unique. We do not prove this result in this course,
but fairly short proofs by induction can be found elsewhere (such as here).
Because the reduced row echelon form of a matrix M is unique, we use the following notation to denote it:
rref M.
Question: For a given matrix in reduced row echelon form, how many different matrices can be reduced to it (one or more than one)?
Fact: It is always true that rref M = rref (rref M). Thus, the rref operation is idempotent.
Fact: If rref M is I (the identity matrix), then M is invertible. This is because the row operations used to obtain rref M from M
correspond to elementary (and thus, invertible) matrices E1, ..., En; if (En ⋅ ... ⋅ E1) ⋅ M = I, then
M = E1 -1 ⋅ ... ⋅ En -1 ⋅ I,
a product of invertible matrices, so M is invertible.
Fact: If M ∈ Rn×n is not invertible, then the bottom row of rref M must have all zeroes. This is because if no
row of rref M ∈ Rn×n consists of all zeroes, then rref M must be the identity matrix, and M must then be invertible.
Fact: The reduced row echelon form of an invertible matrix M ∈ Rn×n is I ∈ Rn×n.
We can show this is true using a proof by contradiction. Suppose that M is invertible, but rref M ≠ I. We know that rref M can be
obtained via a finite number of elementary row operations E1 ⋅ ... ⋅ En:
(E1 ⋅ ... ⋅ En) M = rref M.
If rref M is not I, then the last row of rref M must consist of only zeroes. But because M is invertible, we have:
(rref M) M -1 = (E1 ⋅ ... ⋅ En) M M -1 = E1 ⋅ ... ⋅ En.
Since the product of elementary matrices is invertible, (rref M) M -1 is also invertible. But if the last row of rref M consists of only
zeroes, it cannot be that (rref M) M -1 is invertible. Thus, we have a contradiction, so our assumption that rref M ≠ I is false.
The above result implies the following fact.
Fact: If a matrix M is invertible, it is a product of a finite number of elementary matrices. This is because rref M is
the identity, and M can be reduced to I via a finite number of invertible row operations, so M is a product of the inverses
of the corresponding elementary matrices (which are themselves elementary). Thus, the
elementary matrices can be used to generate every possible invertible matrix.
The following table summarizes the results.
(1) {M | M is a finite product of elementary matrices} = {M | rref M = I}
    Justification: I is an elementary matrix; sequences of row operations are equivalent to multiplication by elementary matrices.
(2) {M | M is a finite product of elementary matrices} ⊂ {M | M is invertible}
    Justification: elementary matrices are invertible; products of invertible matrices are invertible.
(3) {M | rref M = I} ⊂ {M | M is invertible}
    Justification: fact (1) in this table; fact (2) in this table; transitivity of equality.
(4) {M | M is invertible} ⊂ {M | rref M = I}
    Justification: proof by contradiction; non-invertible M implies rref M has all zeroes in its bottom row.
(5) {M | M is invertible} = {M | rref M = I}
    Justification: for any sets A,B, A ⊂ B and B ⊂ A implies A = B.
(6) {M | M is a finite product of elementary matrices} = {M | M is invertible}
    Justification: fact (1) in this table; fact (5) in this table; transitivity of equality.
Given all these results, we can say that the properties for a matrix M in the following table are all equivalent:
if any of these is true, then all of them are true. If any of these is false, then all of them are false.
- M is invertible
- det M ≠ 0
- the columns of M are (setwise) linearly independent
- Mv = w has exactly one solution
- M is a finite product of elementary matrices
Fact: For matrices M ∈ Rn×n, rref M is guaranteed to be upper triangular. Notice that this means that for some finite
product of elementary matrices E1 ⋅ ... ⋅ En, it is the case that
M = (E1 ⋅ ... ⋅ En) ⋅ U, where U = rref M is upper triangular.
If E1, ..., En are all lower triangular, then M has an LU decomposition. However, this will not always be the case.
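For readers who want to experiment, scipy provides an LU factorization with row pivoting. The matrix below is an illustrative choice for which no row swaps are needed, so the permutation factor is the identity and M = L U.

import numpy as np
from scipy.linalg import lu

M = np.array([[6.0, 3.0],
              [4.0, 3.0]])
P, L, U = lu(M)                      # M = P @ L @ U
print(P)                             # identity here: no row swaps occurred
print(L)                             # lower triangular with unit diagonal
print(U)                             # upper triangular
print(np.allclose(M, P @ L @ U))     # True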
Matrix transpose
Definition: The transpose of a matrix M ∈ Rn×n, denoted M⊤, is defined to be A such that Aij = Mji.
Fact: If a matrix M is scalar, diagonal, or symmetric, M⊤ = M. If a matrix M is upper triangular, M⊤ is lower triangular (and vice versa).
Fact: For A,B ∈ Rn×n, it is always the case that:
(A + B)⊤ = A⊤ + B⊤.
Fact: It is always the case that (AB)⊤ = B⊤ A⊤. If M = AB, then:
Mij = (ith row of A) ⋅ (jth column of B)
    = (ith column of A⊤) ⋅ (jth row of B⊤)
    = (jth row of B⊤) ⋅ (ith column of A⊤)
    = (B⊤ A⊤)ji.
Since (M⊤)ji = Mij = (B⊤ A⊤)ji for all i and j, we have (AB)⊤ = M⊤ = B⊤ A⊤.
Fact: If A is invertible, then so is A⊤. This can be proven using the fact directly above.
\forall A,B \in \R^(2 \times 2), `(A) is invertible` \implies (A^(-1)) * A = [1 , 0; 0 , 1] \and A^\t * (A^(-1))^\t = ((A^(-1)) * A)^\t \and `` = [1,0;0,1]^\t \and `` = [1,0;0,1]
Fact: det A = det A⊤. We can see this easily in the A ∈ R2×2 case.
\forall a,b,c,d \in \R, \det [a,b;c,d] = a d-b c \and `` = a d - c b \and `` = \det [a,c;b,d] \and \det [a,b;c,d] = \det [a,c;b,d]
Orthogonal matrices
Definition: A matrix M ∈ Rn×n is orthogonal iff M⊤ = M -1.
Fact: The columns of orthogonal matrices are always setwise orthogonal unit vectors. We can see this in the R2×2 case.
Let M = [a, c; b, d], so that the columns of M are (a,b) and (c,d). If M is orthogonal, then M⊤ ⋅ M = M -1 ⋅ M = I, and:
M⊤ ⋅ M = [a, b; c, d] ⋅ [a, c; b, d] = [(a,b) ⋅ (a,b), (a,b) ⋅ (c,d); (c,d) ⋅ (a,b), (c,d) ⋅ (c,d)] = [1, 0; 0, 1].
Thus, (a,b) ⋅ (a,b) = 1 and (c,d) ⋅ (c,d) = 1 (the columns are unit vectors), and (a,b) ⋅ (c,d) = 0 (the columns are orthogonal).
Below, we provide a verifiable argument for the R2×2 case.
\forall a,b,c,d \in \R, \begin{eqnarray} [a,b;c,d] * [a,b;c,d]^\t = [1,0;0,1] \end{eqnarray} \implies [a,b;c,d] * [a,c;b,d] = [1,0;0,1] \and [a a + b b, a c + b d; c a + d b, c c + d d] = [1,0;0,1] \and
a a + b b = 1 \and [a;b] * [a;b] = 1 \and ||[a;b]|| = 1 \and `([a;b]) is a unit vector` \and c c + d d = 1 \and [c;d] * [c;d] = 1 \and ||[c;d]|| = 1 \and `([c;d]) is a unit vector` \and a c + b d = 0 \and [a;b] * [c;d] = 0 \and `([a;b]) and ([c;d]) are orthogonal`
Fact: Matrices representing rotations and reflections are orthogonal. We can show this for the general rotation matrix
R = [cos θ, -sin θ; sin θ, cos θ]:
R⊤ ⋅ R = [cos θ, sin θ; -sin θ, cos θ] ⋅ [cos θ, -sin θ; sin θ, cos θ]
       = [cos^2 θ + sin^2 θ, -cos θ sin θ + sin θ cos θ; -sin θ cos θ + cos θ sin θ, sin^2 θ + cos^2 θ]
       = [1, 0; 0, 1].
Fact: Orthogonal matrices are closed under multiplication. For orthogonal A, B ∈ Rn×n we have:
(A B)⊤ = B⊤ A⊤ = B -1 A -1 = (A B) -1,
so A B is orthogonal.
Fact: Orthogonal matrices are closed under inversion and transposition. For orthogonal A ∈ Rn×n we have A⊤ = A -1, so:
(A -1)⊤ = (A⊤)⊤ = A = (A -1) -1.
Thus, we can show that both A -1 and A⊤ (which are the same matrix) are orthogonal.
We can summarize this by adding another row to our table of matrix subsets.
| subset of Rn×n | definition | closed under matrix multiplication | properties of matrix multiplication | inversion |
| --- | --- | --- | --- | --- |
| orthogonal matrices | M⊤ = M -1 | closed | associative, distributive with addition, have identity | all members have inverses (M -1 = M⊤); closed under inversion |
Matrix rank
Yet another way to characterize matrices is by considering their rank.
Definition: We define the rank of a matrix M, written rank M, as the number of nonzero rows in rref M.
We will see in this course that rank M is related in many ways to the various characteristics of a matrix.
Fact: Matrix rank is preserved by elementary row operations. Why is this the case? Because rref is idempotent and rank is defined in
terms of it:
rank (rref M) = number of nonzero rows in rref (rref M)
              = number of nonzero rows in rref M
              = rank M
Fact: If M ∈ Rn×n is invertible, rank M = n. How can we prove this? Because all invertible matrices
have rref M = I, and I has no rows with all zeroes. If I ∈ Rn×n, then I has n nonzero rows.
Fact: For A ∈ Rn×n, rank A = rank (A⊤). We will not prove this in this course in general, but typical proofs
involve first proving that for all A ∈ Rn×n, rank A ≤ rank (A⊤). Why is this sufficient to complete the proof? Because we
can apply this fact to both A and A⊤ and use the fact that (A⊤)⊤ = A:
rank A ≤ rank (A⊤) ≤ rank ((A⊤)⊤) = rank A.
To get some intuition about the previous fact, we might consider the following question: if the columns of a matrix M ∈ R2×2
are linearly independent, are the rows also linearly independent?
Fact: The only matrix in Rn×n in reduced row echelon form that has no rows of all zeroes is I.
Fact: If a matrix M ∈ Rn×n has rank n, then it must be that rref M = I (and, thus, M is invertible). This is derived from
the fact above.
Fact: Invertible matrices are closed under the transposition operation. In other words, if M is invertible, then M⊤ is invertible. How
can we prove this using the rank operator? Since M is invertible, rank M = n, and rank is preserved under transposition, so we have:
rank (M⊤) = rank M = n.
Thus, the rank of M⊤ is n, so M⊤ is invertible.
The following table summarizes the important facts about the rank and rref operators.
- rank (rref M) = rank M
- rref (rref M) = rref M
- rank (M⊤) = rank M
- for M ∈ Rn×n, M is invertible iff rank M = n
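These properties are easy to check numerically; a small numpy illustration (with matrices chosen for the example):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row is twice the first
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.matrix_rank(A))      # 1: A is not invertible
print(np.linalg.matrix_rank(A.T))    # 1: rank is preserved by transposition
print(np.linalg.matrix_rank(B))      # 2: rank n, so B is invertible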
Matrix similarity
We introduce a relation on matrices called similarity in order to practice some matrix algebra. We will use the notion of similarity
later in the course in other ways.
Definition: We can define a relation on matrices called similarity. Two matrices A and B in Rn×n are similar,
which we write as
A ∼ B,
iff there exists an invertible matrix C such that
A = C -1 B C.
The relation ∼ is reflexive, symmetric, and transitive.
Fact: For any A ∈ Rn×n, we have
A = I -1 A I,
so the relation ∼ is reflexive.
Fact: For any A,B ∈ Rn×n, if A ∼ B then we have an invertible C ∈ Rn×n such that
A = C -1 B C, and therefore B = C A C -1 = (C -1) -1 A C -1.
Thus, there exists an invertible matrix, namely C -1, that satisfies the definition for similarity, so B ∼ A.
Fact: For any X,Y,Z ∈ Rn×n, if X ∼ Y and Y ∼ Z then we have X ∼ Z. There exist invertible C and D such that
X = C -1 Y C and Y = D -1 Z D, so X = C -1 (D -1 Z D) C = (C -1 D -1) Z (D C).
We know that invertible matrices are closed under multiplication and that
(D C) -1 = C -1 D -1, so D C is the invertible matrix that satisfies the definition for similarity between X and Z:
X = (D C) -1 Z (D C).
Notice that any invertible matrix C used in the definition of similarity can be expressed as a finite product of elementary
matrices. Does this mean that if two matrices are similar, they have the same reduced row echelon form? No. We can provide a counterexample; one is sketched below.
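The counterexample intended in the original notes is not shown; one possible choice, checked with sympy, takes A with a zero second row and conjugates it by an elementary matrix C. The result B is similar to A by construction, yet rref A ≠ rref B.

from sympy import Matrix

A = Matrix([[1, 0],
            [0, 0]])
C = Matrix([[1, 1],
            [0, 1]])          # elementary (and invertible), det = 1
B = C.inv() * A * C           # similar to A by construction
print(B)                      # Matrix([[1, 1], [0, 0]])
print(A.rref()[0])            # Matrix([[1, 0], [0, 0]])
print(B.rref()[0])            # Matrix([[1, 1], [0, 0]]) -- a different rref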
Review #1
This section contains a comprehensive collection of review problems going over the course material covered until this point. These problems are an
accurate representation of the kinds of problems you may see on an exam.
Problem: Find any h ∈ R such that the following two vectors are linearly independent:
(5, -5, -2) and (20, -20, h).
There are many ways to solve this problem. One way is to use the definition of linear dependence and find an h that does not satisfy it.
The vectors are linearly dependent only if (20, -20, h) = s ⋅ (5, -5, -2) for some s ∈ R, which forces s = 4 and h = -8.
Then, we have that any h such that h ≠ -8 is sufficient to contradict linear dependence (and, thus, imply linear independence).
Another solution is to recall that orthogonality implies linear independence. Thus, it is sufficient to find h such that
the two vectors are orthogonal.
5(20) + (-5)(-20) + (-2)h = 200 - 2h = 0.
This implies h = 100.
Problem: Suppose we have a matrix M such that the following three equations are true:
Compute the following:
We should recall that multiplying a matrix by a canonical unit vector with 1 in the ith row in the vector gives us the ith column of the matrix.
Thus, we can immediately infer that:
Thus, we have:
(3,3,-3) ⋅ (2,1,-1) = 12 and (-2,0,1) ⋅ (2,1,-1) = -5.
Problem: List at least three properties of the following matrix:
The matrix has many properties, such as:
- it is an elementary matrix
- it is an invertible matrix
- it is an orthogonal matrix
- it is a symmetric matrix
- it has rank n
- its reduced row echelon form is the identity
Problem: List all unit vectors orthogonal to:
It is sufficient to solve for ( x, y) that satisfy the following equations:
Problem: List all the unit vectors that are linearly dependent with:
It is sufficient to solve for s and ( x, y) that satisfy the following equations:
One approach is to write x and y in terms of s and then solve for s using the second equation.
Problem: Find the matrix B ∈ R2×2 that is symmetric, has a constant diagonal, and satisfies
We know that B is symmetric and has a constant diagonal, so we need to solve for a and b in:
Problem: Compute the inverse of the following matrix:
One approach is to set up the following equation and solve for a, b, c, and d:
Another approach is to apply the formula for the inverse of a matrix in R2×2:
Problem: Let a ∈ R be such that a ≠ 0. Compute the inverse of the following matrix:
As in the previous problem, we can either solve an equation or apply the formula:
Problem: In terms of a, b ∈ R where a ≠ 0 and b ≠ 0, compute the inverse of:
While we could perform the instances of matrix multiplication step-by-step and then invert the result (either by solving an equation or
using the formula for the inverse of a matrix in R2×2), it's easier to recall that diagonal matrices behave in a manner that is very
similar to the real numbers. Thus, the above product is itself a diagonal matrix, and its inverse simply has the multiplicative
inverses of the two diagonal entries as its diagonal entries.
Problem: Let A ∈ R2×2 be an orthogonal matrix. Compute A A⊤ A.
We recall that because A is orthogonal, A⊤ = A -1. Thus, we have that
A A⊤ A = (A A -1) A = I A = A.
The following problems are slightly more difficult. Only one or two such problems would be found on an exam.
Problem: Suppose x ∈ R is such that x ≠ 0. Compute the inverse of the following matrix:
Because we have an upper triangular matrix, computing the inverse by solving the following equation is fairly efficient. Start by considering
the bottom row and its product with each of the columns. This will generate the values for g, h, and i. You can then proceed to the other
rows.
The solution is:
[1/x, 0, 1/(2x); 0, 1/x, 0; 0, 0, 1/(4x)]
Problem: Given an invertible upper triangular matrix M ∈ R2×2, show that M -1 is also upper triangular. Hint: write out
the matrix M explicitly.
Suppose we have the following upper triangular matrix:
M = [x, y; 0, z].
If it is invertible, then there exists an inverse [a, b; c, d] such that:
[x, y; 0, z] ⋅ [a, b; c, d] = [1, 0; 0, 1].
This implies that
x a + y c = 1, x b + y d = 0, z c = 0, and z d = 1.
Because M is invertible, we know that z ≠ 0. Since z c = 0, this means that c = 0. Thus, we have that the inverse is upper triangular.
Problem: Find a matrix that is not the identity matrix, but is similar only to itself. Explain why it cannot be similar to any other matrix.
One such matrix is the zero matrix (in fact, any scalar matrix M = s ⋅ I with s ≠ 1 works).
It cannot be similar to any other matrix because a scalar matrix commutes with every matrix, so for any invertible matrix C,
C -1 M C = C -1 C M = M. Thus, the set of all matrices similar to M is {M}:
{A | A ∈ R2×2, A ~ M} = {M}
Problem: Find a matrix that is similar but not equal to the following matrix:
It suffices to pick a non-trivial invertible matrix E, such as an elementary matrix, and compute E -1 M E. One option is:
Problem: Suppose that a gram of gold costs $50, while a gram of silver costs $10. After purchasing some of each, you have spent $350 on
15 grams of material. How many grams of each commodity have you purchased?
- Write down four dimensions describing this system.
- grams of gold
- grams of silver
- cost
- total weight in grams
-
Define a matrix A that can be used to convert a description of a system state that specifies only the amount of gold and silver purchased into
a description of the system state that specifies only the cost and total weight.
- Write down a matrix equation describing this problem and solve it to find the solution.
-
Define a matrix B such that for any description of a system state v that specifies only the total weight and amount spent, Bv
is a description of that system state that specifies the amount of gold and silver in that system state.
We are looking for the inverse of A.
-
Suppose that the per gram price of gold and silver were the same. What can you say about the properties of the matrix A?
The matrix A is not invertible (it is singular).
Vector Spaces
Sets of vectors and their notation
We are interested in studying sets of vectors because they can be used to model sets of system states (and, thus, entire systems), observations
and data that might be obtained about systems, geometric shapes and regions, and so on. As we did with vectors and matrices, we can introduce a symbolic
language for sets of vectors consisting of symbols, operators, and predicates. And, as with vectors and matrices, we can study the
algebraic laws that govern symbolic expressions.
We will consider three kinds of sets of vectors in this course; they are summarized below.

finite set of vectors
- maximum cardinality ("quantity of elements"): finite
- examples: {(0,0)}; {(2,3),(4,5),(0,1)}

vector space
- maximum cardinality: infinite
- solution space of a: homogenous system of linear equations, Av = 0
- examples: {(0,0)}; R; R2; span{(1,2),(2,3),(0,1)}; any point, line, or plane intersecting the origin

affine space
- maximum cardinality: infinite
- solution space of a: nonhomogenous system of linear equations, Av = w
- examples: { a + v | v ∈ V} where V is a vector space and a is a vector; any point, line, or plane
To represent finite sets of vectors symbolically, we adopt the convention of simply listing the vectors between a pair of braces (as with any
set of objects). However, we need a different convention for symbolically representing vector spaces and affine spaces. This is because we
must use a symbol of finite size to represent a vector space or affine space that may contain an infinite number of vectors.
We first recall the definition for what constitutes a vector space (there are many equivalent definitions).
Definition: A vector space is a set of vectors that contains 0, is closed under vector addition and scalar multiplication, and is
such that all the elements in the set satisfy the vector space axioms governing vector addition and scalar multiplication.
Fact: Any set of linear combinations of a collection of vectors is closed under vector addition and scalar multiplication, contains
0, and satisfies the vector space axioms. In other words, for any collection of vectors v1, ..., vn, span{v1, ..., vn} is a vector space.
Fact: For any set of vectors V in Rn that satisfies the vector space axioms, there exists a finite set of at most n vectors
v1, ..., vk (where k ≤ n) such that span{v1, ..., vk} = V.
Given the two facts above, we can safely adopt span{v1, ..., vn} as a standard notation for vector spaces. In addition to this notation,
we will also often use the notation Rn for specific values of n (e.g., R2 is equivalent to span{[1;0],[0;1]}), and {0} for the
vector space containing only the zero vector (e.g., {[0;0]} is equivalent to span{[0;0]}).
Equality of sets of vectors and vector spaces
Recall that many of the algebraic laws we saw governing operators on vectors and matrices involved equality of vectors and matrices. In fact, one
can view these laws as collectively defining the semantic equality of the symbols (i.e., they specify when two symbols refer to the same
object). Thus, the meaning of these symbols is closely tied to the equality relation we define over them.
Because we are considering sets of vectors, and equality of pairs of vectors has already been defined, it is relatively straightforward
to provide a definition of equality for sets of vectors (both finite and infinite).
Definition: For any two sets of vectors V and W, we have V ⊂ W iff every v ∈ V satisfies v ∈ W, and V = W iff V ⊂ W and W ⊂ V.
It is easy to use the above to compute whether two finite sets of vectors are equal. However, while the above definition is mathematically
adequate for any two infinite sets of vectors (including vector spaces), it does not help us define a practical algorithm for computing equality
for our chosen notation for vector spaces (spans of finite sets of vectors). Suppose we take the definition above and replace the vector spaces
with explicit spans of finite sets of vectors.
Definition: Let W and V be finite sets of vectors.
span W ⊂ span V   iff   ∀ w ∈ span W, w ∈ span V     (2)
span W = span V   iff   span W ⊂ span V and span V ⊂ span W     (3)
We still have a problem: how do we check all possible vectors w in span W (see the ∀ quantifier on the right-hand side
of statement (2))? This can be addressed
by observing that if every vector in the finite set W is in span V, then every linear combination of the vectors in W is also
in span V (because span V is closed under addition and scalar multiplication), so span W ⊂ span V. Thus, it is sufficient to check that the finite number of
vectors in W are found in span V.
There is another problem. How do we determine whether a given vector w is in the vector space that we call span V? Notice that determining this
is equivalent to determining whether w is a linear combination of the vectors in V (we assume that the size of the finite set V is n).
Because V is a finite set of n vectors, we can always find a matrix MV that has the vectors in V as its columns.
w ∈ span V   iff   w ∈ {v | ∃ a1,...,an ∈ R, v1,...,vn ∈ V, v = a1v1 + ... + anvn }
             iff   ∃ a1,...,an ∈ R, v1,...,vn ∈ V, w = a1v1 + ... + anvn
             iff   ∃ x ∈ Rn, MV ⋅ x = w
Thus, determining whether w ∈ span V is equivalent to determining whether there is a solution to the equation MV ⋅ x = w. We
can solve such equations in general by building an augmented matrix and then finding its reduced row echelon form.
Example: Consider the following example of how one can solve an equation Mv = w by finding the reduced row echelon form of an augmented
matrix.
M := [1,2,4; 5,6,8; -3,1,4] w := [4;5;8]
\rref \augment(M,w) # the solution x to Mx=w
We have now found a process that allows us to determine for a particular W and V whether span W = span V:
span W = span V   iff   span W ⊂ span V and span V ⊂ span W
                  iff   every w ∈ W satisfies w ∈ span V, and every v ∈ V satisfies v ∈ span W
                  iff   for every w ∈ W there is a solution x to MV ⋅ x = w, and for every v ∈ V there is a solution x to MW ⋅ x = v
The above implies that a matrix can be used to represent a particular vector space. We will see in other sections
further below that a given vector space can be represented in more than one way by a matrix, and that more than one matrix can be used
to represent a vector space.
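A sympy sketch of this procedure, applied to the generators used in Assignment #3 below ({[1;2],[2;4]} and {[3;6]}). Instead of inspecting the rref of the augmented matrix directly, it uses the equivalent test that the coefficient matrix and the augmented matrix have the same rank (i.e., no pivot lands in the augmented column).

from sympy import Matrix

W = [Matrix([1, 2]), Matrix([2, 4])]
V = [Matrix([3, 6])]

def in_span(w, vectors):
    """Is w a linear combination of the given vectors? Equivalent to the
    system M_V x = w being consistent."""
    MV = Matrix.hstack(*vectors)
    augmented = MV.row_join(w)
    return MV.rank() == augmented.rank()

# span W = span V iff each generator of W is in span V and vice versa.
print(all(in_span(w, V) for w in W) and all(in_span(v, W) for v in V))  # True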
Vector spaces as abstract structures
Our definitions of a vector space so far have explicitly referenced sets of concrete vectors as we usually understand them. However, this is not
necessary.
Definition: A vector space is a set of objects S such that
- there is a unique additive identity 0 ∈ S
- addition (+) is an operation on elements of S under which S is closed, and which satisfies the vector space axioms
- scalar multiplication (⋅) is an operation on elements of S under which S is closed, and which satisfies the vector space axioms
The above definition specifies conditions under which any set of objects S can be studied as a vector space.
Fact: The affine space A = {a + v | v ∈ V} is a vector space for appropriate definitions of addition, scalar multiplication, and identity.
An abstract definition of vector spaces is useful because it also allows
us to study by analogy and learn about other objects that are not necessarily sets of vectors. All the properties we can derive about vector
spaces using the above definition will apply to other sets of objects that also satisfy the above definition.
Example: We consider the set of functions F = {f | f(x) = cx, c ∈ R}. How do we show that F is a vector
space?
Exercise: Show that {f | f(x) = bx2 + cx, b,c ∈ R} is a vector space.
Exercise: Find two elements in {f | f(x) = bx2 + cx + d, b,c,d ∈ R} that are linearly independent. What does it mean for
two functions to be linearly independent?
Exercise: What is a finite set of functions that spans {f | f(x) = bx2 + cx + d, b,c,d ∈ R}? One such set is:
{f,g,h} where f(x) = 1, g(x) = x, h(x) = x2
Example: We consider the set of functions {f | f(x) = xk, k ∈ R}. The following definitions of addition and scalar multiplication
make this set of functions a vector space.
Example: Just as with any vector space, we can set up and solve problems involving linear combinations of vectors. Suppose we want to find a
function in the space {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d ∈ R} that fits a certain collection of points, such as:
{(1,3), (-1,13), (2,1), (-2,33)}
We can interpret each point as a pair (x, f(x)). We can then set up the following system of equations:
Expanding, we have:
a(1)^3 + b(1)^2 + c(1) + d = 3
a(-1)^3 + b(-1)^2 + c(-1) + d = 13
a(2)^3 + b(2)^2 + c(2) + d = 1
a(-2)^3 + b(-2)^2 + c(-2) + d = 33
Notice that we can rewrite this in the form of an equation M v = w:
[(1)^3, (1)^2, (1), 1; (-1)^3, (-1)^2, (-1), 1; (2)^3, (2)^2, (2), 1; (-2)^3, (-2)^2, (-2), 1] ⋅ [a; b; c; d] = [3; 13; 1; 33]
This shows us that we can interpret individual objects in {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d ∈ R} as vectors in R4.
We can also view this problem in terms of systems, system states, and observations. If the function f(x) = ax3 + bx2 + cx + d that solves
the above system is a state of the system, then (3, 13, 1, 33) is a partial observation of that state along four distinct "dimensions".
Note that finding the equation for a line that crosses two points is just a special case of the above.
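A short numpy sketch (outside the course's verifier notation) that builds and solves this 4×4 system for the points {(1,3), (-1,13), (2,1), (-2,33)}:

import numpy as np

points = [(1, 3), (-1, 13), (2, 1), (-2, 33)]
# Each point (x, f(x)) contributes a row (x^3, x^2, x, 1) and an entry f(x).
M = np.array([[x**3, x**2, x, 1] for x, _ in points], dtype=float)
w = np.array([y for _, y in points], dtype=float)

a, b, c, d = np.linalg.solve(M, w)
print(a, b, c, d)   # coefficients of f(x) = a x^3 + b x^2 + c x + d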
Example: We want to find a function f(x) = cx + d such that the points (2,3) and (5,-4) fall on the line the equation represents. Thus,
we have:
c(2) + d = 3 and c(5) + d = -4, which in matrix form is [2, 1; 5, 1] ⋅ [c; d] = [3; -4].
We can consider many spaces of functions beyond spaces of polynomial functions. For example, suppose exp(x) = ex.
Then we can define the following vector space:
span{ exp, cos , sin }.
We can also generalize this approach to families of related functions.
Example: Consider the following set of vectors of functions, where f' denotes the derivative of a function f:
F = { (f, f, f') | f(x) = bx2 + cx + d }
The set F is a vector space. Now, suppose we have the following problem. Find the function for a parabola in F such that
(1,4) and (2,-3) lie on the parabola, and the maximum or minimum point on the parabola is at x = 1. Such a function is
represented by the solution to the following equation:
[(1)^2, (1), 1; (2)^2, (2), 1; 2(1), 1, 0] ⋅ [b; c; d] = [4; -3; 0]
Note that the number of rows in the matrix (and result vector) corresponds to the number of data points for which
a model is being sought, and is not bounded.
Example: Find the order 5 polynomial that exactly fits the points [-1;45], [8;4], [6;5], [2;-8], [-10;8], [15;6].
P := { [-1;45], [8;4], [6;5], [2;-8], [-10;8], [15;6] }
M := (\augment { [x^5, x^4, x^3, x^2, x, 1]^\t | [x;y] \in P })^\t
C := (\augment { [y] | [x;y] \in P })^\t
B := \augment (M, C)
A := \rref B
The disadvantage to using this particular approach to fitting functions to data points is that the data must
match an actual function perfectly (or, equivalently, the space of functions being considered must be rich
enough to contain a function that can fit the data points exactly). We will see further below
how to find functions that have the "best" fit (for one particular definition of "best") without necessarily matching the points exactly.
Subspace relation on vector spaces
Definition: Given two vector spaces W and V, we have:
W is a (vector) subspace of V iff W ⊂ V.
Finding a basis and an orthonormal basis of a vector space
We have adopted span{v1,...,vn} as our notation for vector spaces. However, this makes it possible to represent a given vector space
in a variety of ways:
span{(0,1),(1,0)} = span{(0,1),(1,0),(1,0)} = span{(0,1),(1,0),(1,0),(1,0)}, and so on.
We might naturally ask: given
a vector space V, what is the smallest possible size for a finite set of vectors W such that V = span W?
Definition: A finite set of vectors B is a basis of the vector space V if span B = V and for all finite
sets of vectors W such that span W = V, |B| ≤ |W|.
Fact: A finite set of vectors v1,...,vn is a basis for V iff span {v1,...,vn} = V and the vectors v1,...,vn
are setwise linearly independent.
Fact: Given a finite set of vectors v1,...,vn, we can find the basis for span {v1,...,vn} by creating a matrix M in which the
rows are the vectors v1,...,vn and computing rref M. The set of distinct nonzero rows in rref M is a basis for span {v1,...,vn}.
Example: Below, we find the basis of span {[-3;0;7;0], [-9;0;21;0], [0;0;2;8], [0;0;1;4], [3;0;13;4], [0;0;1;0]}.
A := \augment{[-3;0;7;0], [-9;0;21;0], [0;0;2;8], [0;0;1;4], [3;0;13;4], [0;0;1;0]} B := A^\t
\decompose ((\rref B)^\t) - {[0;0;0;0]}
Definition: A finite set of vectors B is an orthonormal basis of the vector space V if B is a basis of V,
all the vectors in B are unit vectors, and the vectors in B are setwise orthogonal.
Recall that for any vector v ∈ Rn and any unit vector u ∈ Rn, (v ⋅ u) ⋅ u is the projection of v onto the line parallel to
u (i.e., the vector space {a u | a ∈ R}). We can use this fact to define a process for turning an arbitrary basis into an orthonormal basis.
Fact: Given a basis B = {v1,...,vn}, it is possible to turn it into an orthonormal basis by using the Gram-Schmidt process. This
algorithm can be summarized as follows.
u1 = v1,   e1 = u1 / ||u1||
u2 = v2 - ((v2 ⋅ e1) ⋅ e1),   e2 = u2 / ||u2||
u3 = v3 - ((v3 ⋅ e1) ⋅ e1) - ((v3 ⋅ e2) ⋅ e2),   e3 = u3 / ||u3||
u4 = v4 - ((v4 ⋅ e1) ⋅ e1) - ((v4 ⋅ e2) ⋅ e2) - ((v4 ⋅ e3) ⋅ e3),   e4 = u4 / ||u4||
⋮
un = vn - Σi=1..n-1 ((vn ⋅ ei) ⋅ ei),   en = un / ||un||
Intuitively, for each vector vi ∈ B, the algorithm subtracts out the contributions of the already-computed orthogonal unit vectors from
vi, obtaining ui, and then rescales ui to make it into a unit vector.
Many computational environments for mathematics that support linear algebra provide an operator for turning a basis into an orthonormal basis. An example is
provided below.
\orthonormal {[4;0], [1;2]}
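A minimal Python implementation of the Gram-Schmidt process described above, applied to the same input as the \orthonormal example:

import numpy as np

def gram_schmidt(basis):
    """Turn a basis (a list of 1-D arrays) into an orthonormal basis."""
    ortho = []
    for v in basis:
        u = v.astype(float).copy()
        # Subtract the contribution of each already-computed unit vector.
        for e in ortho:
            u -= (v @ e) * e
        ortho.append(u / np.linalg.norm(u))
    return ortho

print(gram_schmidt([np.array([4.0, 0.0]), np.array([1.0, 2.0])]))
# [array([1., 0.]), array([0., 1.])]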
Why is an orthonormal basis useful? Once again, we appeal to the fact that (v ⋅ u) ⋅ u is the projection of v onto u, where
v ⋅ u is the length of that projection. Suppose we want to determine how to express an arbitrary vector v using an orthonormal
basis u1,...,un.
Fact: Given a vector v ∈ Rn, and an orthonormal basis B = {u1,...,un} for Rn, we can determine what linear combination of the vectors
u1,...,un yields v by computing (where MB denotes the matrix whose columns are u1,...,un):
a = (MB)⊤ ⋅ v.
Notice that the components of the vector a, which we will call a1,...,an, are the scalars needed to compute v as a
linear combination of the vectors u1,...,un:
v = a1 u1 + ... + an un = MB a.
Fact: Given a vector v ∈ Rn, some k < n, and an orthonormal basis B = {u1,...,uk} for a subspace W ⊂ Rn,
the product (MB ⋅ (MB)⊤ ⋅ v) is the projection of v onto W.
Example: Notice the relationship of the above fact to projection onto the span of a single unit vector u = (x,y) in R2.
Suppose we have a basis B = {u} for the subspace {s ⋅ u | s ∈ R} ⊂ R2. Then we have
MB = [x; y] and MB ⋅ (MB)⊤ = [x^2, x y; x y, y^2].
Thus, for an arbitrary v ∈ R2 not necessarily in {s ⋅ (x,y) | s ∈ R} we have:
MB ⋅ (MB)⊤ ⋅ v = ((v ⋅ u) ⋅ x, (v ⋅ u) ⋅ y) = (v ⋅ u) ⋅ u.
Thus, we see that the projection of a vector onto another unit vector in R2 is just a special case of the general approach to projection.
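A small numpy illustration of the projection formula MB ⋅ (MB)⊤ ⋅ v, using an assumed one-dimensional subspace of R3 spanned by a single unit vector:

import numpy as np

u = np.array([1.0, 2.0, 2.0]) / 3.0          # a unit vector
MB = u.reshape(3, 1)                          # its single column is the basis

v = np.array([3.0, 0.0, 0.0])
w = MB @ MB.T @ v                             # projection of v onto span{u}
print(w)
print((v @ u) * u)                            # the same vector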
Assignment #3: Vector Spaces
In this assignment you will work with vector spaces, including vector spaces of functions.
-
- Use the appropriate operators to calculate a basis for the following vector spaces.
V1 := \span { [3;2;5;0;8], [1;0;-6;7;5], [0;0;1;0;1], [-6;-4;-10;0;-16] }
V2 := \span { [1;0;-4;0], [2;0;3;1], [1;0;-6;7], [0;0;1;0], [4;0;8;-5], [-1;0;-1;0] }
V3 := \span { [1;3;-9], [-2;-6;18], [4;0;0] }
S1 := { [3;2;5;0;8], [1;0;-6;7;5], [0;0;1;0;1], [-6;-4;-10;0;-16] }
S2 := { [1;0;-4;0], [2;0;3;1], [1;0;-6;7], [0;0;1;0], [4;0;8;-5], [-1;0;-1;0] }
S3 := { [1;3;-9], [-2;-6;18], [4;0;0] }
\decompose ((\rref (\augment S1)^\t)^\t) - {[0;0;0;0;0]} \decompose ((\rref (\augment S2)^\t)^\t) - {[0;0;0;0]} \decompose ((\rref (\augment S3)^\t)^\t) - {[0;0;0]}
- Complete the following argument showing that span{ [1;2], [2;4] } = span {[3;6]} using the propositions provided in the verifier. Hint:
find an explicit vector x ∈ R2 and add it to the second premise in place of undefined; use a similar approach (and a similar
but distinct proposition) when building the part of the argument for the other direction of ⊂.
\forall x \in \R^2,
# Find an explicit vector to put here.
x = \undefined
\implies
# ...
\augment ( [1;2], [2;4] ) * x = [3;6]
\and
# Hint (keep this in your argument).
# ...
\span{ [1;2], [2;4] } = \span {[3;6]}
\forall x \in \R^2, { [1;2], [2;4] } \subset \span {[3;6]} \and \augment ( [1;2], [2;4] ) * x = [3;6] \and x = [1;1] \implies \augment ( [1;2], [2;4] ) * [1;1] = [3;6] \and [3;6] \in \span { [1;2], [2;4] } \and {[3;6]} \subset \span { [1;2], [2;4] } \and \span{ [1;2], [2;4] } \subset \span {[3;6]} \and \span {[3;6]} \subset \span{ [1;2], [2;4] } \and \span{ [1;2], [2;4] } = \span {[3;6]}
- Complete the following argument: show that the assumptions imply that the spans are equivalent.
\forall a,b,c,d,v,v',w,w' \in \R^2, \forall A,B \in \R^(2 \times 2), A = \augment(w,w') \and B = \augment(v,v') \and A * a = v \and A * b = v' \and B * c = w \and B * d = w' \implies # ... \span {w,w'} = \span {v,v'}
\forall a,b,c,d,v,v',w,w' \in \R^2, \forall A,B \in \R^(2 \times 2), A = \augment(w,w') \and B = \augment(v,v') \and A * a = v \and A * b = v' \and B * c = w \and B * d = w' \implies \augment(w,w') * a = v \and \augment(w,w') * b = v' \and \augment(v,v') * c = w \and \augment(v,v') * d = w' \and v \in \span {w,w'} \and v' \in \span {w,w'} \and w \in \span {v,v'} \and w' \in \span {v,v'} \and {v,v'} \subset \span {w,w'} \and {w,w'} \subset \span {v,v'} \and \span {v,v'} \subset \span {w,w'} \and \span {w,w'} \subset \span {v,v'} \and \span {w,w'} = \span {v,v'}
-
Find the order 9 polynomial that exactly fits the following data. Your solution must be in the form of a vector
in R10 that represents the coefficients of the polynomial.
P := { [-1; 31], [-4; 268327], [ 1; 17], [-5; -1520701], [ 3; 389147], [-3; 78869], [-6; -19979509], [ 5; 29571949], [ 0; -1], [-2; 5027] }
Hint: you can make your answer shorter by using a comprehension to generate a one-row matrix containing the different powers of x.
For example:
x := 2
\augment ([x^k] | k \in {0 ... 9})
P := { [-1; 31], [-4; 268327], [ 1; 17], [-5; -1520701], [ 3; 389147], [-3; 78869], [-6; -19979509], [ 5; 29571949], [ 0; -1], [-2; 5027] }
# This is the right-hand side of the equation.
w := (\augment ( [y] | [x;y] \in P))^\t
# We generate the matrix using a nested comprehension.
S := ( (\augment ([x^(9-k)] | k \in {0 ... 9}))^\t | [x;y] \in P )
# This is the matrix in the left-hand side of the equation.
M := (\augment S)^\t
# The matrix is invertible, so we can multiply both sides
# of the equation by the inverse of M to obtain the
# solution.
(M^(-1)) * w
# Alternatively, we can find the rref of the augmented
# matrix.
\rref \augment(M,w)
# The solution vector has the x^9 coefficient as its first entry.
# If we had used [x^k] instead of [x^(9-k)] in our formula for S,
# this would be reversed.
Consider the order 3 polynomial f(x) = 2x3 + 5x2 - 3x + 1.
-
Find the order 2 polynomial that is the closest least squares approximation of f on the set {-2,0,2,4,8,16}.
{ 2 (x3) + 5 (x2) - 3 x + 1 | x ∈ {-2,0,2,4,8,16} }
{ 2 (x^3) + 5 (x^2) - 3 x + 1 | x \in {-2,0,2,4,8,16} }
- Compute the error of the approximation from part (a) above on the domains {-2,0,2,4,8,16} and {-16,-8,-4,-2,-1,0}.
Solution to #3 (both parts):
# The points to which we're trying to find an approximately
# fitting function.
Points3a := { [x; 2 (x^3) + 5 (x^2) - 3 x + 1] | x \in {-2,0,2,4,8,16} }
# The right-hand side of the overdetermined system.
v := (\augment { [y]^\t | [x;y] \in Points3a })^\t
# The matrix on the left-hand side of the overdetermined system.
M := (\augment { [x^2, x, 1]^\t | [x;y] \in Points3a})^\t
# The orthonormal matrix that can be used to compute projections
# onto the span of the columns of M.
A := \augment \orthonormal \decompose (\rref M^\t)^\t
# The projection of v onto the span of the columns of M.
w := A * A^\t * v
# Part (a):
# The solution to the system below contains the coefficients
# for the least squares approximation. The top entry corresponds
# to the x^2 coefficient.
FullMatrix := \rref \augment (M, w)
# We could isolate the actual coefficients in a 3-component # vector, but this is not required.
approx := (\augment (\decompose ((FullMatrix * [0;0;0;1])^\t) - {[0]}))^\t
# Part (b):
# The error on the initial domain is defined as follows.
||v - w||
# Below is an alternative formula.
||v - M * approx||
# To compute the error of the approximation on the second
# domain, we must first generate the points, the corresponding
# matrix, and the corresponding right-hand side:
Points3b := { [x; 2 (x^3) + 5 (x^2) - 3 x + 1] | x \in {-16,-8,-4,-2,-1,0} }
M3b := (\augment { [x^2, x, 1]^\t | [x;y] \in Points3b})^\t
v3b := (\augment { [y]^\t | [x;y] \in Points3b })^\t
# Then, M3b * approx gives us the outputs of the function
# on the new data points. We compute the error in the same way.
||v3b - M3b * approx||
Homogenous, non-homogenous, overdetermined, and underdetermined systems
While we have usually considered n × n matrices, in real-world applications (especially those involving data), system states usually have
a shorter description than the corpus of data (e.g., observational data) being used to determine the system state. Thus, problems in such applications
usually involve matrices in which the number of rows is very different from the number of columns. Furthermore, the data may be noisy,
which means that there may be no exact system state that matches the data. Informally, this might mean that the system of equations being
used to determine a system state is overdetermined. On the other hand, if the amount of data is not sufficient to determine a single system
state, a system is underdetermined.
Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn, the system M x = v is overdetermined if there exist
no solutions to the equation M x = v.
Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn, the system M x = v is underdetermined if there exist
two or more solutions for x that satisfy the equation M x = v.
Definition: For any matrix M ∈ Rn×m, the system M x = 0 is homogenous.
Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn where v ≠ 0, the system M x = v is nonhomogenous.
Fact: A homogenous system M x = 0 has at least one solution.
Fact: A homogenous system M x = 0 is never overdetermined.
Fact: The solution space of a homogenous system is a vector space:
- there is a unique additive identity: 0 is a solution
- adding two solutions yields a solution: if M x1 = 0 and M x2 = 0, then M (x1 + x2) = M x1 + M x2 = 0
- multiplying a solution by a scalar yields a solution: if M x = 0, then M (s ⋅ x) = s ⋅ (M x) = 0
Fact: The solution space of a nonhomogenous system is an affine space.
Motivating Example: Suppose we have a nonhomogenous system M x = v where v ∈ R1000000.
We may not know ahead of time whether or
not it is overdetermined. Suppose we do not want to incur the cost of attempting to solve M x = v exactly. This might be because we only want to make
one pass of the data to reduce costs, but it might also be because this is a data stream that is too voluminous to store, so the data will be
gone unless we solve the system right away (e.g., data from a telescope, a particle collider, or Twitter).
In such cases, we can typically assume the data will have noise, so the system is overwhelmingly likely to be overdetermined and have no solution.
Thus, we are okay with finding an approximate solution.
We do know, however, that M x = 0 has at least one solution, and that the solution space is a vector space (so, for example, we
could try to "search" for a better solution using some other technique involving addition and scalar multiplication
once we find at least one non-trivial one). Thus, we might instead
choose to solve the system M x = 0, knowing that even in the worst case, a very poor approximate solution of 0 will always be found.
Is there a better compromise between these two strategies of keeping v or changing it to 0? Can we find some other space of
solutions that won't be overdetermined, but will
still allow us to find a non-trivial (nonzero) approximate solution? One possibility is to project the vector v onto a subspace
of the solution space, ensuring that the system is no longer overdetermined. This approach is covered in the section below.
Application: approximating overdetermined systems
Fact: We first review the triangle inequality in the case of a right triangle. Suppose we have a triangle with a height a, a base length of b, and
a hypotenuse of length c. Then we have:
a^2 + b^2 = c^2, so a ≤ c and b ≤ c (the hypotenuse is at least as long as either leg).
This implies that the orthogonal projection of a vector v ∈ V onto a subspace W ⊂ V is the closest point in W to v.
Definition: Given v ∈ V and a subspace W ⊂ V, the vector w within the subspace W that is closest to v is
a vector w* ∈ W such that ||v - w*|| is minimal. In other words, for all w' ∈ W,
||v - w*|| ≤ ||v - w'||.
Definition: Given v ∈ V and a subspace W ⊂ V, the orthogonal projection of v onto W is the closest vector in W to v.
We can show the above is true by appealing to the above fact related to the triangle inequality.
Fact: Let B = {w1,...,wn} be a basis for W ⊂ V. Recall that MB is the matrix whose columns are the vectors in B. Given
v ∈ V where v ∉ W, MB x = v has no solution. This is because MB x ∈ span B; thus, if a solution existed, we would have
v ∈ span B and v ∈ W, which contradicts our assumption.
Definition: Let B = {w1,...,wn} be an orthonormal basis for W ⊂ V. Given v ∈ V, the error of an approximate solution
x to the equation MB x = v is defined to be
||v - MB x||.
Definition: Let B = {w1,...,wn} be a basis for W ⊂ V. Given v ∈ V, the approximate solution
x* to the equation MB x = v with smallest error is defined to be the least-squares approximate solution to MB x = v. In other words,
the least-squares approximate solution x* to MB x = v is such that for any other x',
||v - MB x*|| ≤ ||v - MB x'||.
Why is it called a "least-squares" approximation?
Fact: Let B = {w1,...,wn} be a basis for W ⊂ V. Given v ∈ V and its projection w* ∈ W, the least-squares
approximate solution to MB x = v is the solution x* to the system
MB x* = w*.
That is, if we assume the above equality, then for all possible x' and w' = MB x', we have that:
||v - MB x*|| = ||v - w*|| ≤ ||v - w'|| = ||v - MB x'||.
Below, we summarize the correspondences used in the above facts.

concept related to solving MB x = v: the space of values W with which we can replace v in the overdetermined system MB x = v to make a system MB x = w that has solutions
- notation: {MB x | x ∈ Rn}
- relationship: the span of the columns of MB
- notation: span B
- geometric concept: the subspace W of V spanned by B

concept related to solving MB x = v: the error of an approximate solution x' to MB x = v
- notation: ||v - MB x'||
- relationship: MB x' = w'
- notation: ||v - w'||
- geometric concept: the distance between w' ∈ W and v ∈ V

concept related to solving MB x = v: MB x* where x* is the minimum error solution to MB x = v
- notation: for all x', ||v - MB x*|| ≤ ||v - MB x'||
- relationship: MB x* = w*
- notation: for all w' ∈ W, ||v - w*|| ≤ ||v - w'||
- geometric concept: the orthogonal projection w* of v ∈ V onto W (the closest vector in W to v)
Notice that we know a solution to MB x = w* exists, and that we can show this in two different ways. Since w* ∈ W, it
must be that w* is a linear combination of the vectors in B, so it is a linear combination of the columns of MB. Alternatively, if MB is
a square matrix and we know that B is a basis, then the columns of MB are linearly independent, which means MB is invertible, so
x* = MB -1 w*.
Fact: We can compute the least-squares solution to any equation M x = v by using the following process:
- find an orthonormal basis B of the span of the column vectors of M to make MB
- use MB to compute the projection w* of v onto span B
- solve the system M x = w* by finding the rref of an augmented matrix
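A numpy sketch of this three-step process on an assumed overdetermined system (fitting a line a⋅x + b to three non-collinear points). numpy's QR factorization stands in for the orthonormal-basis step, and the final solve uses a least-squares routine rather than an explicit rref; both swaps are named here rather than taken from the notes.

import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
v = np.array([0.0, 1.0, 3.0])

# Step 1: an orthonormal basis for the span of the columns of M;
#         the columns of Q from the QR factorization are such a basis.
Q, _ = np.linalg.qr(M)
# Step 2: project v onto the span of the columns of M.
w_star = Q @ Q.T @ v
# Step 3: solve M x = w*; the projected system is consistent, and
#         np.linalg.lstsq returns that solution.
x_star, *_ = np.linalg.lstsq(M, w_star, rcond=None)
print(x_star)                            # approximately [1.5, -0.1667]
print(np.linalg.norm(v - M @ x_star))    # the error of the approximation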
Later in the course we will consider a slightly different way of solving this problem that uses algebraic properties of linear transformations.
Exercise: Suppose a matrix M is such that M x = 0 has exactly one solution. What can we say about the error of the least-squares
approximation of any system M x = v?
Exercise: Suppose a matrix M is such that M x = 0 has Rn as its space of solutions. What can we say about any system M x = v?
What can we say about the error of any least-squares approximation of a solution for M x = v?
Exercise: We know that the solution space for an underdetermined system Mx = 0 is a vector space.
Thus, this space must have a basis. How can we use a process similar to the Gram-Schmidt algorithm to find a basis of this space?
Relationship to curve fitting and linear regressions in statistics
The method described above does overlap with standard curve fitting and linear regression techniques you see
in statistics. There are many variants to these approaches, and the one considered above corresponds to a fairly
basic variant that has no specialized characteristics (i.e., it makes no special assumptions about the data points,
the relative importance of the data points, their distribution, or the way they can be interpreted).
-
The approach above is known as
ordinary least squares and as
linear least squares.
-
The case in which the function space is {f | f(x) = ax + b, a,b ∈ R}
corresponds to a simple linear regression.
This involves fitting a line to a collection of points while minimizing the sum of the squares of the
distances parallel to the y-axis between all the data points and the approximate line.
This diagram
is an illustration of this. The method in these notes is more general in that it is not restricted to
polynomials of order 1, but can be used to fit polynomials of any order to data.
-
We can restate the least-squares method described in these notes as finding x* such that
the length of the vector ε in M x* = v - ε is minimized. Notice that ε + M x* = v.
The squared length of ε is the sum of squares of the y-axis-aligned distances from the
estimated curve to the actual data points:
||ε|| = ||v - M x*||.
-
This diagram illustrates the
projection method being used.
-
The matrix M in M x = v is sometimes called the
design matrix.
The same term is used to refer to the
corresponding
matrix in a simple linear regression.
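As a small illustration of the correspondence with simple linear regression, the numpy sketch below (with made-up data points) fits f(x) = ax + b by ordinary least squares; the design matrix has one row [x, 1] per data point.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])          # made-up data points
ys = np.array([1.0, 3.0, 2.0, 5.0])

M = np.column_stack([xs, np.ones_like(xs)])  # the design matrix
a, b = np.linalg.lstsq(M, ys, rcond=None)[0] # least-squares coefficients of f(x) = a*x + b
print(a, b)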
Application: approximating a model of system state dimension relationships
The approach described in the previous section can be used to approximate a solution to the equation M x = v. Usually, if M describes
a model of a system (in this sense, system refers to a system of states that are described by values along some number of dimensions, as
in an earlier section), solving M x = v corresponds to determining a system state x given some other information
about a system state v.
What if we instead have a large collection of pairs of observations of system states of the form (x,v) (or, more generally, any collection of
observations of a system along multiple dimensions)? Can we approximate a model of a set of relationships between some two subsets of the
dimensions in the system? In other words, if we take all our observations (x1, ..., xn) and divide them into pairs of descriptions
x = (x1,...,xk) and v = (xk+1,...,xn), can we find the best M that approximates the relationship between incomplete
system state descriptions x and v?
Example: Suppose we have the following dimensions in our system:
- number of red dwarf stars
- number of G type stars
- number of pulsars
- units of infrared (IR) radiation observed
- units of ultraviolet (UV) radiation observed
- units of gamma (γ) radiation observed
Suppose that we already have some set of confirmed (but possibly noisy) observations of systems along all the dimensions (the dimensions
of the quantities in each point correspond to the list of dimensions above):
O := { [ 4; 5; 1; 243; 3341; 700], [ 3; 6; 21; 125; 1431; 1465], [ 4; 2; 13; 533; 3432; 334], [16; 4; 4; 334; 143; 762], [13; 8; 13; 235; 1534; 513], [34; 16; 17; 333; 3532; 450] }
For the above observations, we want to find a matrix that relates the first three dimensions (the number of each type of star) to the last
three dimensions (the amount of each kind of radiation). In other words, we are looking for a matrix M ∈ R3× 3 of the
form:
[ a units IR/red dwarf, b units IR/G type, c units IR/pulsar ;
  d units UV/red dwarf, e units UV/G type, f units UV/pulsar ;
  g units γ/red dwarf,  h units γ/G type,  i units γ/pulsar ]
Multiplication by the above matrix represents a system state description transformation with the following units:
(# red dwarf stars, # G type stars, # pulsars)
→ (units IR, units UV, units γ)
We can split the data into two collections of points: those that specify the number of each type of star (the inputs to the above transformation),
and those that specify the amount of each kind of radiation (the output of the above transformation). Below, we call these P and Q. We then
turn these two sets of data points into matrices. In order to turn this problem into the familiar form M x = v, we transpose the two
matrices.
O := { [ 4; 5; 1; 243; 3341; 700], [ 3; 6; 21; 125; 1431; 1465], [ 4; 2; 13; 533; 3432; 334], [16; 4; 4; 334; 143; 762], [13; 8; 13; 235; 1534; 513], [34; 16; 17; 333; 3532; 450] }
P := {[a;b;c] | [a;b;c;x;y;z] \in O}
Q := {[x;y;z] | [a;b;c;x;y;z] \in O}
M := (\augment P)^\t
v := (\augment Q)^\t # Actually a matrix.
A := \augment \orthonormal \decompose (\rref M^\t)^\t
w := A * (A^\t) * v
\rref \augment (M, w)
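The computation above is written in the notation used in these notes; the following numpy sketch performs a comparable computation (it finds a least-squares 3 × 3 matrix relating the star counts to the radiation amounts in O), though it is not a line-by-line translation.
import numpy as np

O = np.array([
    [ 4,  5,  1, 243, 3341,  700],
    [ 3,  6, 21, 125, 1431, 1465],
    [ 4,  2, 13, 533, 3432,  334],
    [16,  4,  4, 334,  143,  762],
    [13,  8, 13, 235, 1534,  513],
    [34, 16, 17, 333, 3532,  450],
], dtype=float)

P = O[:, :3]     # star counts (inputs), one observation per row
Q = O[:, 3:]     # radiation amounts (outputs), one observation per row

# Solve P @ X = Q in the least-squares sense; then A = X.T maps star counts to radiation.
X = np.linalg.lstsq(P, Q, rcond=None)[0]
A = X.T
print(A)
print(P @ A.T)   # compare with Q to judge the quality of the approximate model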
The dimension of a vector space
Definition: The dimension of a vector space V, which we write as dim V, is the size of any basis for V.
Example: What is the dimension of {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d ∈ R}?
Example: Consider the vector space consisting of finite sets of real numbers:
What are the basis and dimension of this vector space?
Orthogonal complements of vector spaces
Definition: The orthogonal complement W⊥ of a vector subspace W ⊂ V is the set of all vectors in V that are orthogonal
to all the vectors in W:
W⊥ = {v | v ∈ V, ∀ w ∈ W, v ⋅ w = 0}
Notice that the orthogonal complement of a subspace is always taken within the context of some specified space V.
Algebra of vector spaces
In this section, we consider vector spaces, their orthogonal complements, and common set operations: set union, set intersection, and set product.
Exercise: Given any two vector spaces V and W, is V ∪ W a vector space? No. For example, consider
V = span{[1;0]} ∪ span{[0;1]}.
We have that [1;0] ∈ V and [0;1] ∈ V, but not [1;0] + [0;1] = [1;1] ∈ V.
Exercise: Given any two vector spaces V and W, is V ∩ W a vector space? Yes.
- there is a unique additive identity: 0 ∈ V and 0 ∈ W, so 0 ∈ V ∩ W
- V ∩ W is closed under addition (+): if v,w ∈ V ∩ W then we have that v ∈ V and w ∈ V, and we also have that
v ∈ W and w ∈ W, so we have that
v + w ∈ V and v + w ∈ W, so v + w ∈ V ∩ W.
Since addition satisfies the vector space axioms for elements V and W, it also does so for elements in V ∩ W.
- V ∩ W is closed under scalar multiplication (⋅): if v ∈ V ∩ W then we have that v ∈ V and v ∈ W, so we have that
sv ∈ V and sv ∈ W, so sv ∈ V ∩ W.
Since scalar multiplication satisfies the vector space axioms for elements V and W, it also does so for elements in V ∩ W.
Exercise: Given any two vector spaces V and W, is V × W a vector space? Yes.
Exercise: For a vector space W ⊂ V, compute W ∩ W⊥.
Fact: For a vector subspace W ⊂ V,
dim W + dim W⊥ = dim V.
Fact: For a vector subspace W ⊂ V,
W × W⊥ = V.
Linear Transformations
Maps (a.k.a., functions) and their properties
In addition to the definitions involving relations and maps we introduce in the Appendix, we consider several more definitions.
First, we introduce a notation for specifying the set of inputs a function accepts, and the set of outputs it may produce. We use the notation
f: A → B
to indicate that a function takes elements in the set A and maps them to elements in the set B. This notation is equivalent to
f ∈ A → B
where A → B is the set of functions that map elements from A to elements in B:
A → B = { f | ∀ a ∈ A, f(a) ∈ B }.
Definition: Given a map f: A → B, we call A the domain of f.
Definition: Given a map f: A → B, we call B the codomain of f. We can also call it the range of f, but this can be
ambiguous (because sometimes it is used to refer to the image, defined below) so we avoid using it.
We also introduce some properties of a map that can be used to describe more precisely its set of outputs and the relationship
between those outputs and the inputs that can generate them.
Definition: Given a map f: A → B, we call {f(a) | a ∈ A} the image of f.
Definition: Given a map f: A → B, for any subset A' ⊂ A, we call {f(a) | a ∈ A'} the image of A' under f. We sometimes write
this as f(A').
Definition: Given a map f: A → B, for any a ∈ A, we call f(a) the image of a under f.
Definition: Given a map f: A → B, for any subset B' ⊂ B, we call {a | a ∈ A, f(a) ∈ B'} the pre-image of B' under f. We sometimes write this as
f -1(B').
Definition: Given a map f: A → B, for any b ∈ B, we call {a | a ∈ A, f(a) = b} the pre-image of b under f.
We sometimes write this as f -1(b).
Finally, we consider a few properties of maps that deal with how the map relates elements in its domain to elements in its codomain.
Definition: A map f: A → B is surjective (or onto) if for all b ∈ B, there exists a ∈ A such that f(a) = b.
Definition: A map f: A → B is injective (or one-to-one) if for all a,a' ∈ A, f(a) = f(a') implies a = a'.
Definition: A map f: A → B is bijective if it is surjective and injective.
Visual illustrations of examples of the above definitions can be found in the Appendix.
Fact: A map f: A → B is injective iff for all a,a' ∈ A, a ≠ a' implies f(a) ≠ f(a'). We can prove this
by appealing to the logical fact that given two propositions P and Q, not P implies not Q iff Q implies P.
Fact: If B is the codomain of f, B' is the image of f, and f is surjective, then B = B'.
Fact: If f: A → B is bijective, then f -1 is bijective.
Fact: If f: A → B is not injective, then f -1 is not a function. If f is not injective, then there
exist a,a' ∈ A such that a ≠ a' but f(a) = f(a'). In this case, we cannot define a single
value to which f -1(f(a)) can evaluate, so f -1 cannot be a function.
Exercise: Describe with as much detail as you can the domains, codomains, and images of the following maps (there is more than one right
answer for these, as the definitions do not specify the domains of the functions). Determine whether each map is injective, surjective, or both
given the domains and codomains you chose.
(the definitions of these maps are not reproduced here; the last of them are defined in terms of matrices M and N)
Fact: Given some f:A → B, the following process can be used to show a map f is injective:
- assume a,a' ∈ A and a ≠ a';
- it is necessary to show that f(a) ≠ f(a'):
- expand f(a) and f(a') using the definition of f,
- use algebraic properties or other facts to derive f(a) ≠ f(a').
Alternatively, we can use the following approach:
- assume a,a' ∈ A;
- assume f(a) = f(a'):
- expand f(a) and f(a') using the definition of f,
- use algebraic properties or other facts to derive a = a'.
Fact: Given some f:A → B, we can show a map f is not injective by
picking some explicit a,a' ∈ A such that a ≠ a' and showing that f(a) = f(a'). Alternatively, pick
some explicit b ∈ B and find two or more distinct solutions a and a' ∈ A to the equation f(x) = b.
Fact: Given some f:A → B, the following process can be used to determine whether a map f is surjective:
- let b ∈ B;
- solve f(x) = b:
- if you can solve the equation (or derive an equation x = g(b)) such that x ∈ A, f:A → B is surjective,
- if the equation has no solutions in A, f:A → B is not surjective.
Notice that this does not mean that there is no larger set A' with A ⊂ A' within which the equation f(x) = b does have a solution.
For example, f(x) = 3x is surjective if f ∈ R → R but not surjective
if f ∈ Z → Z.
Exercise: Show that f: R → R where f(x) = x + 1 is injective.
Exercise: Show that f: R → R where f(x) = x + 1 is surjective.
Definition: Given two maps f: B → C and g: A → B, we define the composition of f and g, which we denote f o g,
as the map h:A → C defined by:
h(x) = f(g(x))
Exercise: Show that if f: B → C and g: A → B are injective, then their composition (f o g):A → C is injective.
Homomorphisms and isomorphisms
The previous section defined a number of properties of functions f: A → B that only address the equality of elements in A and B. However,
what if A and B have relations and operators? For example, what if A and B are vector spaces with an identity, an addition operation, and a
scalar multiplication operation?
We can restrict our notion of a map or function to maps or functions f: A → B that somehow preserve or consistently transform the properties
of operators and relations we may have already defined on A and B. For example, suppose we have A = {a1, a2, a3}, B = {b1, b2, b3},
an operator ⊕ such that a1 ⊕ a2 = a3, and an operator ⊕ over elements of B such that b3 ⊕ b2 = b1. A map f from
A to B that preserves (or respects, or consistently transforms) ⊕ would be such that
f(a1) = b3, f(a2) = b2, f(a3) = b1.
Notice that the above map is such that the operation ⊕ is respected by the map f:
f(a3) = f(a1 ⊕ a2) = f(a1) ⊕ f(a2) = b3 ⊕ b2 = b1.
Definition: Given a binary operator ⊗ over A and another operator ⊕ over B, we say that a map f: A → B is a homomorphism
if we have that
∀ a,a' ∈ A, f(a ⊗ a') = f(a) ⊕ f(a').
Notice that a homomorphism might have, but does not necessarily have, any of the properties we introduced for maps: it could be injective, surjective,
and so on.
Definition: A bijective homomorphism f: A → B is called an isomorphism.
Linear transformations
In this course we are interested in a specific kind of homomorphism.
Definition: For any two vector spaces V and W, a map f: V → W is a linear transformation iff we have that for all v,v' ∈ V and
scalars s ∈ R,
f(v + v') = f(v) + f(v') and f(s ⋅ v) = s ⋅ f(v).
In other words, a linear transformation is a homomorphism between vector spaces that preserves vector addition and scalar multiplication.
If a map f does not satisfy the above properties, it is not a linear transformation.
Fact: For any linear transformation f:A → B and appropriate constants 0 ∈ A and 0' ∈ B,
f(0) = 0'
We can show this in the following way: given any v ∈ A,
f(0) = f(0 ⋅ v) = 0 ⋅ f(v) = 0'.
Fact: For any linear transformation f:A → B and the corresponding additive inversion operations of
A and B, it is the case that f respects additive inversion:
f(-v) = -f(v)
We can derive this fact in two ways. One approach is to use the previous fact that f(0) = 0' to show that f(-v) is indeed the inverse of f(v):
f(v) + f(-v) = f(v + (-v)) = f(0) = 0', so f(-v) = -f(v).
The other approach is to use the fact that in both vector spaces,
-v = -1 ⋅ v.
Then, because f respects scalar multiplication, we have that:
f(-1 ⋅ v) = -1 ⋅ f(v).
Thus, the argument is:
f(-v) = f(-1 ⋅ v) = -1 ⋅ f(v) = -f(v).
Fact: For any two linear transformations f: V → W, g: U → V, the composition f o g: U → W is also a linear transformation.
To show this, we must demonstrate that if h(x) = f(g(x)) then h respects vector addition and scalar multiplication. For any u,u' ∈ U, and
s ∈ R, we have:
h(u + u') = f(g(u + u')) = f(g(u) + g(u')) = f(g(u)) + f(g(u')) = h(u) + h(u')
and
h(s ⋅ u) = f(g(s ⋅ u)) = f(s ⋅ g(u)) = s ⋅ f(g(u)) = s ⋅ h(u).
Because linear transformations are homomorphisms (and, thus, maps), we can ask whether they have properties of maps (i.e., whether they are injective,
surjective, bijective, and so on). We will do so further below.
Data modelling as a linear transformation
Recall that for a system that can be described using quantities along multiple dimensions, subsets of a vector space Rn can be used to
model that system's possible states (or observations of that system's states). Recall also that a matrix can be used to model the relationships
between the dimensions describing that system, and that finding an approximate model of such relationships involves finding a matrix in a vector
space of matrices.
In a similar manner, linear transformations (and, more generally, homomorphisms) can be used to represent the relationship
between actual data from (or observations of) a system, and possible mathematical models of that system. The problem of finding a
mathematical model of data in a domain D can be viewed as the problem of finding an appropriate mathematical structure M and
an appropriate homomorphism (or isomorphism) f:D → M from the data domain D to that mathematical model.
If we assume that the data domain can be modelled using a vector space, then M can be a vector space and f can be a linear transformation.
Example: As an illustration, we consider a very simple (and fairly contrived) example from chemistry. Suppose we have three substances:
iron (I), sulfur (S), and troilite (T). Through experiments, we find that mixing 8 units of iron and 1 unit of sulfur (together with some energy)
produces 8 units of troilite. We write this down as:
8 I ⊕ 1 S = 8 T.
We could take our data and define the set {I, S, T, 2 I, 2 S, 2 T, 3 I, 3 S, 3 T, ...} together with an operation ⊕ that corresponds to whatever we do to produce the chemical reaction.
We do not know the internal structure of individual units of iron, sulfur, and troilite. Even if individual units of each can be
represented using multiple dimensions, we do not know how many dimensions there might be and what those dimensions might represent.
Nevertheless, we can make hypotheses about what structure might be appropriate to model this system. We can then test these hypotheses
by using them to make new predictions and collecting data to verify or negate those predictions.
We hypothesize that there exists a non-trivial vector space V with a vector addition operation + such that we can construct a homomorphism from our
data to V. We assume it is non-trivial (nonzero) because a trivial space like this always exists: let I, S, and T all map to 0 ∈ V. We are looking
for a vector space V and homomorphism f such that
f(8 I ⊕ 1 S) = f(8 I) + f(1 S) = 8 f(I) + 1 f(S) = 8 f(T)
Thus, we are looking for a V with non-trivial (nonzero) u, v, w ∈ V such that:
8 u + 1 v = 8 w
How many dimensions should V have in order for our model to be interesting? What is the minimum number of dimensions V must have so that
u, v, and w are nonzero? What number of dimensions would make it trivially easy to find u, v, and w that fit the above equation?
We choose dim V = 2 and V = R2. We can also make some additional assumptions to simplify our problem: let u and v be orthogonal and
axis-aligned, say u = [x; 0] and v = [0; y], and let w = [a; b]. Then we have the following equation:
8 ⋅ [x; 0] + 1 ⋅ [0; y] = 8 ⋅ [a; b]
Thus, we have that:
8x = 8a and y = 8b.
Assuming that a = 1 and b = 1, we get x = 1 and y = 8. Thus, we have found u, v, and w ∈ R2 such that f respects ⊕ by mapping it to +:
f(I) = u = [1; 0], f(S) = v = [0; 8], f(T) = w = [1; 1], so f(8 I ⊕ 1 S) = 8 ⋅ [1; 0] + 1 ⋅ [0; 8] = [8; 8] = 8 ⋅ [1; 1] = 8 f(T).
Notice that we made several assumptions, but this is okay. We can test the quality and fitness of these assumptions later by making predictions
and collecting more data. This corresponds to the usual scientific method:
hypothesize what kind of mathematical model fits the data, add assumptions until the model is instantiated to match or approximate the data
(the model should not be so simple that it can fit any data, or so complex that it only fits the already collected data), make a prediction
using the model about something which has not yet been observed, and try to confirm or falsify that prediction by collecting more data.
How do we interpret the two dimensions of V? We do not necessarily know when we first create the model to fit the data. They may correspond
to actual entities, or they may correspond to abstract characteristics that can be broken down further once more data is collected.
In this case, they happen to correspond to the number of iron and sulfur atoms in the molecules of iron, sulfur, and troilite.
What is a prediction we can make using our model? Let us consider a scenario in which we introduce a different mix of iron and sulfur into our
reaction; how much troilite do we expect to make? We can write down the equation:
8 I ⊕ 2 S = c T
Applying our homomorphism, we get:
8 ⋅ [1; 0] + 2 ⋅ [0; 8] = c ⋅ [1; 1], that is, [8; 16] = [c; c].
This implies:
c = 8 and c = 16, which is impossible.
Since there is no solution, our model predicts that it is not possible to convert all 8 units of iron and all 2 units of sulfur into troilite.
This negative prediction is already testable, but we can do better by looking for a more positive prediction. Our mathematical model V is a
vector space, and this suggests that the right-hand side of the equation must represent a linear combination of the three vectors f(I), f(S),
and f(T). This suggests we could write down a slightly different equation:
8 I ⊕ 2 S = c T ⊕ d I ⊕ e S
This implies:
8 ⋅ [1; 0] + 2 ⋅ [0; 8] = c ⋅ [1; 1] + d ⋅ [1; 0] + e ⋅ [0; 8], that is, 8 = c + d and 16 = c + 8e.
Notice that there exist multiple solutions to these equations. Which correspond to reasonable interpretations within the physical system we are
modelling? All three variables should be nonnegative; we might also require that they should be integers, and that c is nonzero. We have that
e = (16-c)/8,
so c is either 16 or 8. It cannot be 16 because that would make d negative, so c = 8. Thus, e = 1 and d = 0. This predicts that 8 units of iron and 2
units of sulfur should produce 8 units of troilite and 1 unit of sulfur. This is a testable prediction that we can use to direct
further experiments to evaluate the quality of the model.
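A quick check of the arithmetic in this prediction (using the equations 8 = c + d and 16 = c + 8e reconstructed above) can be done by enumeration:
# Enumerate nonnegative integer solutions of c + d = 8 and c + 8e = 16 with c nonzero.
solutions = [(c, d, e)
             for c in range(17) for d in range(17) for e in range(17)
             if c + d == 8 and c + 8 * e == 16 and c > 0]
print(solutions)   # [(8, 0, 1)]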
We will see similar examples in later sections.
Kernels of linear transformations
Definition: For a linear transformation f:A → B, the kernel of f is the set of elements in the domain of f that map to 0 ∈ B:
ker f = {a | a ∈ A, f(a) = 0 }.
Fact: For a linear transformation f:A → B, it is the case that:
- ker f ⊂ A
- the image of ker f under f is {0}
- ker f is the pre-image of 0 under f
Fact: For a linear transformation f:A → B, ker f is a vector space (and, thus, a subspace of A). How do we show this formally?
Exercise: Suppose we are given the linear transformation f:R4 → R3 defined as:
What is the codomain of f? What is the image of f? What is the kernel of f? Provide a set comprehension to define each.
Exercise: Suppose we are given the linear transformation f:R4 → R4 defined as:
f([x; y; z; t]) = [x; x + y; x + y + z; x + y + z + t].
Show that f is surjective.
Matrices as symbolic representations of linear transformations
Matrices are the conventional concrete representation we adopt for linear transformations between vector spaces that can be represented
symbolically using the usual vector and span notation (i.e., subspaces of Rn). Thus, we can interpret any matrix M ∈ Rn×m as
the linear transformation f: Rm → Rn with the definition f(v) = M ⋅ v.
Fact: For any matrix M ∈ Rn×m, the map f: Rm → Rn defined as f(v) = M ⋅ v is a linear transformation.
Fact: The image of the linear transformation f(v) = M ⋅ v represented by M ∈ Rn×m is the span of the column vectors of M.
Let w1,...,wm be the columns of M. We can see this by noting that
M ⋅ [x1; ...; xm] = x1 ⋅ w1 + ... + xm ⋅ wm,
so the set of values f can produce is exactly the set of linear combinations of the columns of M.
Fact: The image of the linear transformation represented by M ∈ Rn×m is a vector space (and a subspace of Rn, the codomain). Why is this?
Fact: If a matrix M ∈ Rn×n is invertible and f(v) = M ⋅ v, then f is injective.
Fact: If a matrix M ∈ Rn×n is invertible and f(v) = M ⋅ v, then ker f = {0}.
Fact: For a matrix M ∈ Rn×n and f(v) = M ⋅ v where ker f = {0}, then M is invertible.
Fact: If a matrix M ∈ Rn×n represents a linear transformation f(v) = M ⋅ v and ker f = {0},
then the system M v = 0 has exactly one solution.
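The facts above can be checked numerically; the sketch below uses a small made-up invertible matrix.
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 1.0]])     # made-up invertible matrix
print(np.linalg.det(M))                    # nonzero, so M is invertible
print(np.linalg.solve(M, np.zeros(2)))     # the unique solution of M v = 0 is [0. 0.]
print(np.linalg.matrix_rank(M))            # rank 2, so ker(f) = {0}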
Review #2
This section contains a comprehensive collection of review problems going over all the course material covered until this point. These problems
are an accurate representation of the kinds of problems you may see on an exam.
Problem: Given u ∈ Rn, what is the orthogonal projection of v ∈ Rn onto span{u}?
It is sufficient to compute the orthogonal projection of v onto u. Given a unit vector e parallel to u, the projection of v onto e would
be:
(e ⋅ v) ⋅ e.
However, we cannot assume u is a unit vector. Thus, we scale u to obtain a unit vector e = u/||u||. Then, the solution is:
((u/||u||) ⋅ v) ⋅ (u/||u||).
Problem: Assume a matrix M ∈ R2×2 is symmetric, has constant diagonal, and all its entries are nonzero. Show that M cannot
be an orthogonal matrix.
Let a ≠ 0 and b ≠ 0, and let us define:
M = [a, b; b, a]
Suppose that M is an orthogonal matrix; then, its columns must be orthogonal to each other, so we have that:
[a; b] ⋅ [b; a] = 0
Thus, we have ba + ab = 0, so 2ab = 0. This implies that either a = 0 or b = 0. But this contradicts the assumptions, so M cannot be
orthogonal.
Problem: Let M ∈ R17× 9 and f(v) = M ⋅ v. What is the largest possible value of dim(im(f))?
We know that we can only compute M v if v has as many rows as M has columns. Thus, if f( v) = M ⋅ v, then v ∈ R9. We
also know that M ⋅ v will be a vector with 17 rows because M has 17 rows, so f( v) ∈ R17. Thus, f ∈ R9 → R17.
This means that dim(im(f)) cannot be greater than dim(R17), so it cannot exceed 17. However, we also need to note that M
has 9 columns, and that any value in the image of f is thus a linear combination of at most 9 vectors. Thus, any basis of im(f) has
at most 9 distinct vectors in it. Since dim(im(f)) is the size of the basis of im(f), dim(im(f)) can be at most 9.
Problem: Find an orthonormal basis for the following vector space:
We can use the algorithm for computing the vectors in an orthonormal basis. We can work with the vectors in any order, so suppose we have:
According to the algorithm, we then let u1 = v1 and e1 = u1 / ||u1||. In this case, we still have e1 = v1. Next, we compute:
u2 = v2 - ((v2 ⋅ e1) ⋅ e1), e2 = u2 / ||u2||,
u3 = v3 - ((v3 ⋅ e1) ⋅ e1) - ((v3 ⋅ e2) ⋅ e2), e3 = u3 / ||u3||
(the numerical values of these vectors were computed at this point).
Thus, {e1, e2, e3} is an orthonormal basis for span{v1, v2, v3}.
Problem: Suppose we are given a linear transformation f:R2 → R2 where:
- Find an orthonormal basis for im(f).
We know that the column vectors of the matrix are linearly independent. Thus, dim(im(f)) is at least 2, and since
the codomain of f is R2, im(f) = R2. Thus, any orthonormal basis of R2 is appropriate, such as: {[1; 0], [0; 1]}.
- Find ker(f).
Let the matrix in the definition of f be M.
We know from lecture that M is invertible iff ker( f) = {0}. Since det M ≠ 0, M is invertible, so ker( f) = {0}.
Alternatively, we could recall the definition of a kernel:
Thus, it is sufficient to find the set of solutions to the equation f(v) = 0, which is the set of solutions (expanding the
definition of f) of:
Since M is invertible, we can multiply both sides by M -1 to obtain:
Thus, there is only one solution, so the kernel contains a single element, and ker(f) = {0}.
- Show that f is surjective.
To show that f is surjective, it is sufficient to show that for every v in the codomain, there exists x such that:
In other words, we want a formula for x in terms of v that is defined for any v. Let the matrix in the definition of f be M. Since M
is invertible, we can define:
This formula for x is always defined, so f is surjective.
Problem: Suppose we are given a linear transformation f:R2 → R2 where:
- Find dim(im(f)).
Recall that the dimension of a space is the size of any basis of that space (all bases of a space have the same size). Let M be the
matrix in the definition of f. The space im( f) is equivalent to the span of the column vectors of the matrix:
To find a basis of a space spanned by a collection of vectors, we create a matrix whose rows are the vectors in that collection, find its
reduced row echelon form, and keep only the nonzero rows in the basis:
Thus, the dimension of im(f) is 1.
- Show that f is not injective.
To show that a map is not injective, it is sufficient to find v and v' such that v ≠ v' but f( v) = f( v').
One approach to finding such v and v' is to expand f(v) = f(v') until the constraints are simple enough that
it is straightforward to try some inputs and easily check that they satisfy the constraints.
Since the top row of the matrix is a multiple of the bottom row, we get one equation:
One possible pair of vectors in the domain that satisfies the above is:
Thus, f is not injective.
- Find ker(f).
We write down the definition of ker( f) for this particular map f:
The above would be sufficient, but suppose we want to find the basis of ker(f) (recall that ker(f) is a subspace of the domain
of the linear transformation f). The constraints imposed above can be summarized as:
Thus, ker(f) is the line in R2 defined by the equation y = -2x. We can choose any vector on this line and take the span of the set
containing only that vector to provide an explicit definition for ker(f):
Problem: Find a matrix M such that for f(v) = M ⋅ v,
One way to approach this problem is to expand the definition on the right-hand side using the
definition of the orthogonal complement operation:
= { [x; y; z; t] | 2x = 0, 2z = 0, y,t ∈ R }
= { [x; y; z; t] | x = 0, z = 0, y,t ∈ R }
= span { [0; 1; 0; 0], [0; 0; 0; 1] }
Thus,
Recall that the image of f is the span of the columns of M. Thus, one possible solution is:
Problem: Suppose that a2 + b2 = 1. What is the orthogonal projection of v onto im(f) if f(v) = M ⋅ v,
We have that:
Because a2 + b2 = 1, the first vector is already a unit vector and orthogonal to the second vector. By rescaling the second vector to be
a unit vector, we can obtain an orthonormal basis:
We can now find the orthogonal projection of v onto each vector in the orthonormal basis, and add these to find the orthogonal projection of the
vector onto im(f):
Problem: Suppose we are given the following points:
- Find a function of the form f(x) = ax2 + bx + c that is the best least-squares fit for these points.
We begin by writing down the equations in terms of f for each point:
We construct the matrix equation that represents the above system:
[ (0)2, (0), 1 ; (2)2, (2), 1 ; (2)2, (2), 1 ] ⋅ [a; b; c] = v (where v is the vector of the three observed output values for the given points)
There is no solution to the above equation. However, we are looking for the least-squares best fit approximation. Thus, we first find an
orthonormal basis of the image of the matrix. If the matrix is M and f(v) = M ⋅ v, we have that:
Using the algorithm for finding an orthonormal basis, we obtain:
We now find the orthogonal projection of the right-hand-side vector onto im(f) and use it to rewrite the equation so that it is not overdetermined:
[ (0)2, (0), 1 ; (2)2, (2), 1 ; (2)2, (2), 1 ] ⋅ [a; b; c] = [6; 9; 9]
This system is underdetermined. The space of solutions implied by the above equation is:
{ f | f(x) = ax2 + bx + c, c = 6, 4a + 2b + c = 9}
This means that any solution to the system is a least-squares best fit approximation. One possible best fit approximation is:
f(x) = x2 + (-1/2)x + 6
- Find a function of the form f(x) = bx + c that is the best least-squares fit for these points.
We follow the same process as in part (a) above. The matrix equation is:
There is no solution to the above equation. However, we are looking for the least-squares best fit approximation. Thus, we first find an
orthonormal basis of the image of the matrix. If the matrix is M and f(v) = M ⋅ v, we have that:
Using the algorithm for finding an orthonormal basis, we obtain:
We again find the orthogonal projection of the right-hand-side vector onto im(f) and use it to rewrite the equation so that it is not overdetermined:
[ (0), 1 ; (2), 1 ; (2), 1 ] ⋅ [b; c] = [6; 9; 9]
This yields c = 6 and b = 3/2. Thus, the least-squares best fit approximation is:
f(x) = (3/2)x + 6
Problem: Show that for M ∈ Rn×n, the space of solutions to M x = 0 is a vector space (you do not need to show that
all the axioms hold; it is sufficient to show the appropriate closure properties).
Let S be the space of solutions.
We show that 0 ∈ S: M ⋅ 0 = 0, so 0 ∈ S.
We show that if v,v' ∈ S, then v + v' ∈ S: M ⋅ (v + v') = M ⋅ v + M ⋅ v' = 0 + 0 = 0, so v + v' ∈ S.
We show that if v ∈ S, then for any scalar s ∈ R, s v ∈ S: M ⋅ (s ⋅ v) = s ⋅ (M ⋅ v) = s ⋅ 0 = 0, so s ⋅ v ∈ S.
Problem: Compute the orthogonal projection of v onto im(f) where f(v) = M ⋅ v,
Normally, the way to approach such problems is to find an orthonormal basis for im( f) where f( v) = M ⋅ v, then use
that orthonormal basis to project v onto im( f). However, it's a good idea to check for special cases before delving into a large
computation. In this particular example, we have that:
Thus, v is orthogonal to both vectors, so the orthogonal projection of v onto im(f) is 0 ∈ R3.
It is worth noting that v is in the orthogonal complement of im(f). In general, given a vector subspace W ⊂ V,
for any w ∈ W⊥, the orthogonal projection of w onto W is 0.
Problem: Is the system below overdetermined or underdetermined? Does it have a solution?
The system is neither overdetermined nor underdetermined. It has a solution. Since the matrix is upper triangular, we can use the
algorithm for solving a system with an upper triangular matrix to obtain v:
Problem: Determine whether the system below has a solution for all possible v ∈ R3. If it does not, describe exactly
the set of vectors v for which the system below has a solution and determine whether this set a vector space.
Notice that determining whether the system has a solution for any v is equivalent to determining whether the linear
transformation represented by the matrix is surjective. It is also equivalent to determining whether the matrix is invertible.
To determine whether the matrix is invertible, we could compute its reduced row echelon form by finding an appropriate series of row operations.
Given the above, we see that the matrix is not invertible.
Let the matrix be M. The set of vectors for which there is a solution to the above equation is im(f) where f(x) = M ⋅ x. Thus,
we want to find im(f) explicitly. We could say:
If we want to be more precise, we could compute rref(M⊤) to find a basis for im(f).
Since the set of possible vectors v for which the equation has a solution is im(f) and f is a linear transformation, the set of such v is a vector
space.
Assignment #4: Algebra of Linear Transformations
In this assignment you will work with vector spaces and linear transformations.
- Complete the following argument.
\forall f,g \in \R^2 \to \R^2, \forall x,y \in \R^2, `(f \circ g) is bijective` \and x \neq y \implies # ... (g \circ f)(x) \neq (g \circ f)(y)
- Complete the following argument.
\forall f \in \R^2 \to \R^2, `([1,2;3,4]) represents (f)` \implies # ... `(f) is bijective`
- Complete the following argument.
\forall f,g \in \R^2 \to \R^2, `([1,2;3,6]) represents (f)` \and `([2,4;-1,-2]) represents (g)` \implies # ... \img(g) \subset \ker(f)
- Complete the following argument.
\forall f \in \R^2 \to \R^2, `([4,7;2,5]) represents (f)` \implies # ... `(f) is surjective`
Orthogonal Projections are Linear Transformations
Fact: For v ∈ Rn, an orthogonal projection onto a one-dimensional vector space span{v} is a linear transformation. First,
note that the dot product of two vectors can be rewritten as matrix multiplication of two matrices:
u ⋅ v = u⊤ ⋅ v.
Also, notice that
Finally, notice that for any nonzero r ∈ R, the 1 × 1 matrix [r] is invertible and has inverse [1/r].
Consider the formula for the projection of w ∈ Rn onto span(v). We can rewrite the formula to use only multiplication of matrices.
projection of w onto span{v}
= ((v ⋅ w) / (v ⋅ v)) ⋅ v
= v ⋅ ((v ⋅ v) -1 ⋅ (v ⋅ w))
= v ⋅ ((v⊤ ⋅ v) -1 ⋅ (v⊤ ⋅ w))
= v ⋅ (v⊤ ⋅ v) -1 ⋅ (v⊤ ⋅ w)
= (v ⋅ (v⊤ ⋅ v) -1 ⋅ v⊤) ⋅ w
Thus, we have a product of the matrices v ∈ Rn×1, (v⊤ ⋅ v) -1 ∈ R1×1, and v⊤ ∈ R1×n.
This product is a matrix in Rn×n. Call it Mspan{v}. Then we can define the linear transformation f ∈ Rn → Rn
that performs the orthogonal projection onto span{v} as:
f(w) = Mspan{v} ⋅ w.
We used the above approach because we will see later that it can be generalized to orthogonal projections onto multidimensional vector spaces.
An alternative way to show the above is to notice that v/||v|| can be rewritten as a matrix u ∈ Rn×1. In that case, we have:
f(w) = (u ⋅ u⊤) ⋅ w.
Here, u ⋅ u⊤ ∈ Rn×n.
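The following numpy sketch (with a made-up vector v) checks that the matrix v ⋅ (v⊤ ⋅ v)⁻¹ ⋅ v⊤ really does perform the orthogonal projection onto span{v}, and that it agrees with the dot-product formula for the projection.
import numpy as np

v = np.array([[2.0], [1.0], [2.0]])                   # a column vector, i.e. a matrix in R^(3x1)
P = v @ np.linalg.inv(v.T @ v) @ v.T                  # the projection matrix M_span{v}
w = np.array([1.0, 0.0, 0.0])
print(P @ w)                                          # projection of w onto span{v}
print((v[:, 0] @ w) / (v[:, 0] @ v[:, 0]) * v[:, 0])  # same result via the dot-product formula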
Fact: Any orthogonal projection onto a subspace span{v1,...,vm} ⊂ Rn is a linear transformation. We know that we
can find an orthonormal basis of span{v1,...,vm}. Let M be a matrix the columns of which form an orthonormal basis of span{v1,...,vm}.
Then, we can define f ∈ Rn → Rn as:
f(w) = M ⋅ M⊤ ⋅ w.
In fact, by generalizing the formula for projections onto a one-dimensional subspace, we can in certain cases avoid relying on our ability to
compute an orthonormal basis for span{v1,...,vm}. However, before we can derive the generalized formula, we need a few preliminary results.
Example: Consider the case where we have
Let V = im(f). Then V⊥ is the set of vectors orthogonal to w. A vector u ∈ R3 is orthogonal to w iff w ⋅ u = 0. But this
means M⊤ ⋅ u = 0, which means u ∈ ker(g).
Fact: Take any matrix M ∈ Rn×m and linear transformations f ∈ Rm → Rn and g ∈ Rn → Rm defined
as
We have that
im(f)⊥ = ker(g).
To show these two sets are equal, we can make two arguments:
\forall f,g \in \R^2 \to \R^2, \forall M \in \R^(2 \times 2), \forall v \in \R^2, `(M) represents (f)` \and `(M^\t) represents (g)` \and v \in \ker(g) \implies g(v) = [0;0] \and g(v) = M^\t * v \and M^\t * v = [0;0] \and v \in (\im(f))^\bot
\forall f,g \in \R^2 \to \R^2, \forall M \in \R^(2 \times 2), \forall v \in \R^2, `(M) represents (f)` \and `(M^\t) represents (g)` \and v \in (\im(f))^\bot \implies M^\t * v = [0;0] \and g(v) = M^\t * v \and g(v) = [0;0] \and v \in \ker(g)
Because the subset relation applies to the two sets in both directions, the two sets must be equal.
Fact: We show that if a vector is in the kernel of a linear transformation f ∈ Rn → Rm, for any g ∈ Rm → Rk, it is also
in the kernel of the linear transformation g o f. In other words, ker(f) ⊂ ker(g o f).
\forall f,g \in \R^2 \to \R^2, \forall A,B \in \R^(2 \times 2), \forall v \in \R^2, `(A) represents (f)` \and `(B) represents (g)` \and v \in \ker(f) \implies #... v \in \ker(g \circ f)
Fact: Let M ∈ Rn×m and f,g ∈ Rm → Rn be defined as
We have that:
ker(f) = ker(g o f).
We know that ker(f) ⊂ ker(g o f) from the previous fact. It suffices to show that
ker(g o f) ⊂ ker(f).
The argument below does just that. Notice that we use the fact that ker(g) = im(f)⊥ (for appropriate f and g) that we derived
above.
\forall f,g \in \R^2 \to \R^2, \forall M \in \R^(2 \times 2), \forall v \in \R^2, `(M) represents (f)` \and `(M^\t) represents (g)` \and v \in \ker(g \circ f) \implies (g \circ f)(v) = [0;0] \and (g \circ f)(v) = g(f(v)) \and g(f(v)) = [0;0] \and g(f(v)) = M^\t * f(v) \and f(v) = M*v \and (g \circ f)(v) = M^\t * (M * v) \and M^\t * (M * v) = [0;0] \and g(M*v) = [0;0] \and (M*v) \in \ker(g) \and M*v \in \im(f) \and \ker(g) = (\im(f))^\bot \and M*v \in (\im(f))^\bot \and M*v = [0;0] \and f(v) = [0;0] \and v \in \ker(f)
Fact: Suppose that for M ∈ Rn×m and f ∈ Rm → Rn where f(x) = M ⋅ x we have an overdetermined system:
M ⋅ x = v.
Suppose that ker(f) = {0}. Then for g ∈ Rn → Rm where g(y) = M⊤ ⋅ y, we have that
ker(g o f) = ker(f) = {0}.
Then we have that g o f is invertible, so M⊤ ⋅ M is invertible and (M⊤ ⋅ M) -1 is defined. Thus, we can say that
M⊤ ⋅ (M ⋅ x) = M⊤ ⋅ v
(M⊤ ⋅ M) ⋅ x = M⊤ ⋅ v
(M⊤ ⋅ M) -1 ⋅ (M⊤ ⋅ M) ⋅ x = (M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ v
x = (M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ v
Thus, even if M ⋅ x = v is overdetermined, M⊤ ⋅ (M ⋅ x) = M⊤ ⋅ v has a solution x such that M ⋅ x corresponds
to the projection of v onto im(f).
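A minimal numpy sketch of this closed formula, using a made-up overdetermined system whose matrix has a trivial kernel:
import numpy as np

M = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # columns are linearly independent
v = np.array([6.0, 0.0, 0.0])

x = np.linalg.inv(M.T @ M) @ (M.T @ v)   # x = (M^T M)^(-1) M^T v
print(x)
print(M @ x)                             # the orthogonal projection of v onto im(f)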
Fact: If a matrix M ∈ Rn×m represents a linear transformation f(v) = M ⋅ v, then we have
that
dim(im(f)) = rank(M).
Fact (Rank-Nullity Theorem): If a matrix M ∈ Rn×m represents a linear transformation f(v) = M ⋅ v, then we have
that
dim(ker(f)) + dim(im(f)) = m (the dimension of the domain of f).
There is more than one way to understand the above.
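A small numerical illustration (made-up rank-1 matrix): the right singular vectors associated with zero singular values form a basis of ker(f), and counting them confirms that dim(ker(f)) + dim(im(f)) equals the number of columns of M.
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
U, s, Vt = np.linalg.svd(M)
rank = np.sum(s > 1e-10)
kernel_basis = Vt[rank:]                  # right singular vectors for zero singular values
print(rank, len(kernel_basis))            # dim(im(f)) = 1, dim(ker(f)) = 2
print(np.allclose(M @ kernel_basis.T, 0)) # True: these vectors really are in the kernel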
Applications to Linear Systems
Interpreting and Using Vector Spaces and Linear Transformations in Applications
Let V,W ⊂ Rn be vector spaces, and let f: V → W be a linear transformation. The following table reviews the operators we introduced
for working with sets of vectors, vector spaces, and linear transformations.
operator |
input |
output |
additional properties |
span V |
set of vectors |
vector space |
|
basis V |
vector space |
set of vectors |
the output is finite if V has finite dimension (since dim V = |basis V|) |
dim V |
vector space |
integer |
|
V⊥ |
vector space |
vector space |
|
im f |
linear transformation |
vector space |
subspace of the codomain W |
ker f |
linear transformation |
vector space |
subspace of the domain V |
In this section, we consider how these operators and their properties are interpreted within particular applications, and how they can be used to
represent and solve problems within these application domains.
Example: Suppose there are eight islands labelled A,B,C,D, and E,F,G,H, and the following adjacency matrix M ∈ R4× 4
represents whether bridges exist between pairs of islands:
M = [1, 0, 0, 0; 0, 1, 0, 0; 0, 1, 0, 1; 1, 0, 1, 0]
(rows correspond to the source islands A, B, C, D and columns to the destination islands E, F, G, H; the entry in row X and column Y is the number of bridges from X to Y)
We can extract information from this matrix by multiplying it by a vector in R4.
[1, 0, 0, 0; 0, 1, 0, 0; 0, 1, 0, 1; 1, 0, 1, 0] ⋅ [1; 0; 1; 0] = [1; 0; 0; 2]
(the vector [1; 0; 1; 0] counts bridges to E and G while ignoring bridges to F and H; the result gives the number of bridges from A, B, C, and D, respectively, to E or G)
Let f: R4 → R4 be defined as f(v) = M ⋅ v.
-
What is ker(f)? What does this mean about the connectedness of the islands?
Since ker(f) = {0}, there must be at least one bridge to every destination E,F,G, or H. In other words, if we choose to count
bridges to even one destination in v, the product of M ⋅ v will be nonzero.
-
Suppose for some other adjacency matrix M' and g(v) = M' ⋅ v, we have that ker(g) ≠ {0}.
Does this necessarily imply that at least one island has no bridges going to it?
No. We could have 1 for all entries of M', but the kernel would then have dimension 3 (and thus be nonzero). For example, we could let
v = [1; -1; 0; 0].
Then, v ∈ ker(g) so ker(g) ≠ {0}, but all destination islands have a bridge going to them.
-
In terms of ker(f) and other sets and set operators, provide sufficient conditions such that M must describe a situation in which
each destination island has at least one bridge going to it.
It must be that ker(f) ∩ (R+)4 = {0}, where (R+)4 denotes the vectors with nonnegative components: if some destination island had no bridges going to it, then the corresponding standard basis vector would be a nonzero nonnegative vector in ker(f).
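These answers can be checked numerically for the particular M above; a small numpy sketch:
import numpy as np

M = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

print(M @ np.array([1.0, 0.0, 1.0, 0.0]))   # bridges from A, B, C, D to E or G: [1. 0. 0. 2.]
print(np.linalg.matrix_rank(M))             # rank 4, so ker(f) = {0}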
Example: Suppose we have some noisy observations of a system (radiation at three wavelengths on a logarithmic scale
and the quantity of particles observed). We have only sampled this system on four points:
IR | UV | γ | quantity
1 | -1 | 2 | 1
2 | 0 | 0 | 4
0 | 2 | -4 | 2
3 | 1 | -2 | 5
Suppose we believe that the quantity of particles is a linear function of the three radiation quantities. Find the least-squares
approximation of a linear relationship between the three dimensions describing the radiation output and the quantity of particles.
One way to use our existing technique for finding approximate functions that fit data is to imagine that there are three unknown
functions (e.g., polynomials) p, q, r ∈ R → R whose inputs are defined to be the observations. Then, we are looking for a linear
combination of these,
f(x) = a ⋅ p(x) + b ⋅ q(x) + c ⋅ r(x),
that best approximates the data:
IR | UV | γ | quantity
p(0) = 1 | q(0) = -1 | r(0) = 2 | f(0) = a ⋅ p(0) + b ⋅ q(0) + c ⋅ r(0) = 1
p(1) = 2 | q(1) = 0 | r(1) = 0 | f(1) = a ⋅ p(1) + b ⋅ q(1) + c ⋅ r(1) = 4
p(2) = 0 | q(2) = 2 | r(2) = -4 | f(2) = a ⋅ p(2) + b ⋅ q(2) + c ⋅ r(2) = 2
p(3) = 3 | q(3) = 1 | r(3) = -2 | f(3) = a ⋅ p(3) + b ⋅ q(3) + c ⋅ r(3) = 5
Notice that this is just a generalization of our approach to modelling data points using polynomials: in those cases, we might have had
p(x) = x2, q(x) = x, r(x) = 1.
We can also view this problem as a search for a linear transformation g:R3 → R that will be a least-squares approximation of the data; it can be
defined as:
g([x; y; z]) = a ⋅ x + b ⋅ y + c ⋅ z.
In a manner similar to previous examples, we can represent the problem as a matrix of inputs and a vector of dependent outputs (one row per data point):
M = [1, -1, 2; 2, 0, 0; 0, 2, -4; 3, 1, -2], v = [1; 4; 2; 5].
Thus, we want a least-squares approximate solution to the overdetermined system:
[1, -1, 2; 2, 0, 0; 0, 2, -4; 3, 1, -2] ⋅ u = [1; 4; 2; 5].
Let h ∈ R3 → R4 and h' ∈ R4 → R3 be defined as h(u) = M ⋅ u and h'(u) = M⊤ ⋅ u. We can find the approximation
for the above overdetermined system by first finding the orthogonal projection of v onto im(h). However, we cannot directly use the formula
u = (M⊤ ⋅ M) -1 M⊤ v because ker(h) ≠ {0}:
[1, -1, 2; 2, 0, 0; 0, 2, -4; 3, 1, -2] ⋅ [0; 2; 1] = [0; 0; 0; 0],
so [0; 2; 1] ∈ ker(h) and [0; 2; 1] ≠ [0; 0; 0].
This means ker(h' o h) ≠ {0}. This means M⊤ ⋅ M is not invertible, so (M⊤ ⋅ M) -1 is undefined.
If the matrix of data observations M has a nonzero kernel, it must have at least one column vector that is a linear combination of the
other column vectors. But within the context of the problem this means that for any data point, the value of the dependent dimension
can be determined using the values along the other dimensions. Thus, we need not include this dimension explicitly in our attempt
to model the observations.
The above calculation indicates that the third column vector is a scalar multiple of the second column vector.
Suppose we remove the third column vector from our data. Then, we have:
N = [1, -1; 2, 0; 0, 2; 3, 1], l(u) = N ⋅ u, l'(u) = N⊤ ⋅ u.
Thus, we can instead look for an approximate solution to the following system:
[1, -1; 2, 0; 0, 2; 3, 1] ⋅ u = [1; 4; 2; 5].
This system is overdetermined. However, neither column of N is a scalar multiple of the other, so the columns of N are linearly independent and ker(l) = {0}.
Thus, ker(l' o l) = {0}, which means (N⊤ ⋅ N) -1 is defined. We can then compute our approximation using the closed formula:
u = (N⊤ ⋅ N) -1 ⋅ N⊤ ⋅ [1; 4; 2; 5].
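A numpy sketch of this final computation, using the reduced matrix N and the observed quantities from the table above (for this data the result is approximately u = [1.6, 0.8]):
import numpy as np

N = np.array([[1.0, -1.0],
              [2.0,  0.0],
              [0.0,  2.0],
              [3.0,  1.0]])
v = np.array([1.0, 4.0, 2.0, 5.0])

u = np.linalg.inv(N.T @ N) @ (N.T @ v)   # the closed formula (N^T N is invertible here)
print(u)                                  # approximately [1.6, 0.8]
print(N @ u)                              # the best approximation of v within im(l)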
Communications: Applications of Curve Approximations and Properties of Linear Transformations
Example: Suppose that Alice wants to transmit some data to Bob, and this data consists of the following table:
T := [ 3,-1,5,10,7,-7,3,0,-15; 7,0,2,4,14,-2,14,7,-35; 5,2,0,0,8,4,16,11,-25; 2,1,3,6,3,-1,7,5,-10; 0,4,-1,-2,-4,9,12,12,0]
Since T ∈ R5× 9, the straightforward solution would be to transmit 5 ⋅ 9 = 45 scalar values. However, it is
not necessary to transmit all five scalar values found in every column.
-
Devise a different approach for sending such data that requires
sending fewer than 45 scalar values for the above example; the approach should be such that if
Alice and Bob can agree on how it works, they can reliably transmit all the data.
-
Suppose the matrix of data to transmit is T ∈ Rm×n. Using the dim and im operators, write a formula
describing exactly how many scalar values will need to be sent in order to transmit all the data in T using the method you devised.
Example: Suppose that Alice wants to transmit some data (a sequence of numbers) to Bob without revealing what they are to Eve,
who can read anything Alice sends. One simple encryption scheme involves splitting the message into vectors v ∈ Rn and using
a matrix M ∈ Rn×n to convert them into something that would be unrecognizable to an eavesdropper. For example, suppose
the message is:
1, 2, 3, 4
Alice could split the above message into two vectors in R2: [1; 2] and [3; 4].
She could then use a matrix M to convert these into something unrecognizable:
This yields the following vectors, which Alice then transmits to Bob:
-
If Alice sends a message to Bob using the above method, how can Bob decode the message?
Bob can compute the inverse of the encoding matrix and apply it to the messages he receives.
-
Suppose Eve finds out what matrix Alice and Bob are using. If Alice and Bob choose a new matrix, what conditions must
be satisfied by the matrix (or the corresponding linear transformation) that Alice will use to encode her messages?
The matrix and corresponding linear transformation must be invertible in order for Bob to be able to recover the message in full.
-
Suppose that Eve does not know M, but does know at least one sequence of two consecutive numbers
that were encoded using the matrix (for example, suppose Eve knows that every morning Alice sends a message telling Bob whether it is
sunny (unencoded sequence 1,2) or cloudy (unencoded sequence 3,4) at her location). Show that if Eve is near Alice's location, Eve
will eventually be able to decode any message Alice sends using that matrix.
Once Eve has seen an encoded message for both a sunny day and a cloudy day, she can solve the following system to obtain the components
of the matrix:
-
Let s be the unencoded vector Alice uses to represent a sunny day, let c be the unencoded vector she uses to represent a cloudy day,
and let f be the linear transformation she uses to encode her messages. Provide sufficient conditions on s and c such that Eve will
not be able to recover the entire matrix M using the method in part (c) above; express your conditions in terms of s, c, and f by using im
and span.
It must be that span{s,c} ≠ im(f): if s and c span the whole image, Eve can solve for all the entries of M, but if they do not, the system she solves is underdetermined.
Example: More generally, a flaw with Alice's approach is that Eve can detect when Alice sends the same message twice. This leaks information to
Eve about the content of her messages (for example, if Alice is encoding English sentences, the most commonly occurring letter in those
sentences will be "e").
In order to address this, Alice can extend her encoding process. She can take all her vectors in R2 and convert them into vectors in
R3, and then use a matrix in R3 to encode them:
Bob could then decode any such message using the following matrix (notice that the top-left 2 × 2 portion of this matrix is the
inverse of the top-left 2 × 2 portion of the matrix above):
| |
In fact, Bob can decode the messages using any matrix of the form:
Let g(v) = D ⋅ v. Then, because g is a linear transformation, for any w ∈ ker(g) we have that:
But this means that Alice can take each encoded vector E ⋅ v that she creates and add any w ∈ ker(g) to it in order to obfuscate
any patterns that Eve might see in the encoded messages. Alice can choose a random w ∈ ker(g) for each message.
Bob will easily be able to continue decoding Alice's messages using the same transformation g.
-
Suppose Alice has a random vector generator that takes as an input a matrix M ∈ R3× 3 and returns a random vector in im(h)⊥
where h(v) = M ⋅ v. How can she use this generator to create obfuscated encrypted message vectors?
-
If Alice wants to hide any patterns in the messages she sends over time (including any patterns in parts of her messages),
what is a necessary property of ker(g)?
-
Given an encoding matrix E ∈ R3×2, how can Alice and Bob find D such that ker(g) ≠ {0} and such that the
desired property from part (b) above holds?
Let the column vectors of D be d1, d2, and d3. What is needed is a vector d3 that satisfies the following:
d3 ∈ (span {d1, d2} - span {d1} - span {d2})
-
Suppose Alice has a random number generator that can only generate a single scalar r ∈ R at any given moment.
Find a single matrix E' ∈ R3× 3 that Alice can use to convert message vectors with a random number, of the following form:
so that E' ⋅ v is always an obfuscated encrypted message vector.
Example: Suppose Alice has an unreliable communication device that can transmit any vector v ∈ R2 to Bob, but will always add
an error vector e to v where ||e|| is bounded by 5:
Find two matrices A and B such that if Alice wants to send v to Bob, she can send A ⋅ v using the device and after receiving
the transmission (call it w), Bob can always recover a close approximation B ⋅ w that has at most error 0.25
(that is, ||B ⋅ w - v|| ≤ 0.25).
It is sufficient to scale the vector being sent to make the error negligible, and then to scale it back down once it is received. Thus,
we can choose, for example, A = [20, 0; 0, 20] and B = [1/20, 0; 0, 1/20].
Then, we have for any v being sent:
w = A ⋅ v + e = 20 ⋅ v + e.
When Bob decodes it, it will be:
B ⋅ w = v + (1/20) ⋅ e.
Then we have in the worst case ||e|| = 5, which means:
||B ⋅ w - v|| = ||(1/20) ⋅ e|| = ||e||/20 = 5/20 = 0.25 ≤ 0.25.
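A quick numerical check of this scheme, using the scaling matrices assumed in the reconstruction above (A = 20 ⋅ I and B = (1/20) ⋅ I) together with a made-up message and a worst-case error:
import numpy as np

A = 20 * np.eye(2)
B = np.eye(2) / 20
v = np.array([3.0, -1.0])                 # made-up message
e = np.array([3.0, 4.0])                  # worst-case error: ||e|| = 5
w = A @ v + e                             # what Bob receives
print(np.linalg.norm(B @ w - v))          # approximately 0.25, within the required bound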
Example: Alice has two unreliable communication devices for transmitting vectors in R2.
Given a vector v ∈ R2 to transmit, the first device will add an arbitrarily large error vector during transmission (ε varies between
transmissions):
The second device will add a different arbitrarily large error vector during transmission:
Alice wants to send a vector v ∈ R2 to Bob by sending only two messages, one with each device. Devise a way for Alice to send v ∈ R2
in this way so that Bob can recover it without any error. This method should not rely in any way on ε, and
should work for any ε.
We can observe that the two error vectors are orthogonal. Below, we write u1 for the first device's error direction and u2 for the second device's error direction (these are the vectors that appear scaled by ε and ε' above). This means that each device can be used to reliably
transfer the projection of a vector onto the orthogonal complement of the span of the error vector for that device. That is,
projection of v onto (span{u1})⊥ = projection of (v + ε ⋅ u1) onto (span{u1})⊥
projection of v onto (span{u2})⊥ = projection of (v + ε' ⋅ u2) onto (span{u2})⊥
Notice that because the two error directions are orthogonal, their spans are each other's orthogonal complements:
(span{u1})⊥ = span{u2} and (span{u2})⊥ = span{u1}.
Thus, we can rewrite our first set of equalities:
projection of v onto span{u2} = projection of (v + ε ⋅ u1) onto span{u2}
projection of v onto span{u1} = projection of (v + ε' ⋅ u2) onto span{u1}
We can now show that the above is true because, as we have seen further above, projection onto a vector space (including
the span of a single vector) is a linear transformation. Suppose we define the following two projections:
p(x) = (u2 ⋅ (u2⊤ ⋅ u2) -1 ⋅ u2⊤) ⋅ x (the projection onto span{u2})
q(x) = (u1 ⋅ (u1⊤ ⋅ u1) -1 ⋅ u1⊤) ⋅ x (the projection onto span{u1})
Thus, we have that:
p(v + ε ⋅ u1)
= p(v) + ε ⋅ ((u2 ⋅ (u2⊤ ⋅ u2) -1 ⋅ u2⊤) ⋅ u1)
= p(v) + ε ⋅ (u2 ⋅ (u2⊤ ⋅ u2) -1 ⋅ (u2⊤ ⋅ u1))
= p(v) + ε ⋅ (u2 ⋅ (u2⊤ ⋅ u2) -1 ⋅ 0)
= p(v) + 0
= p(v)
and, by the same argument, q(v + ε' ⋅ u2) = q(v).
Thus, Alice can simply use both devices to send v to Bob. When Bob receives w1 (from the first device) and w2 (from the second device),
he must interpret them appropriately:
w1 = v + ε ⋅ u1 and w2 = v + ε' ⋅ u2.
Using our equalities from above, we have that:
projection of v onto span{u2} = projection of w1 onto span{u2} = p(w1)
projection of v onto span{u1} = projection of w2 onto span{u1} = q(w2)
Thus, to retrieve the original v sent by Alice it is sufficient for Bob to perform the two projections and to add the results:
v = p(w1) + q(w2).
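The following numpy sketch illustrates the scheme numerically. The error directions u1 = [1; 1] and u2 = [1; -1] and the message are assumptions chosen only for illustration (any pair of orthogonal directions works the same way).
import numpy as np

def proj(u, x):
    """Orthogonal projection of x onto span{u}."""
    return (u @ x) / (u @ u) * u

u1 = np.array([1.0,  1.0])                # assumed error direction of device 1
u2 = np.array([1.0, -1.0])                # assumed error direction of device 2
v  = np.array([5.0,  2.0])                # Alice's message

w1 = v + 7.3 * u1                         # received from device 1 (arbitrary error scale)
w2 = v - 2.9 * u2                         # received from device 2

recovered = proj(u2, w1) + proj(u1, w2)   # project away each device's error and add
print(recovered)                          # [5. 2.]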
Example: Let polynomials in F = {f | f(t) = a t4 + b t3 + c t2 + d t + e } represent a space of possible radio signals.
To send some information to Bob, Alice must use that information to generate a radio signal that is described by some f.
To receive that information, Bob must listen to the radio signals he is receiving, determine what f Alice chose, and decode the message.
Suppose that a given area is saturated with noise or signals (e.g., communication by others) that are linear combinations of the following
three polynomials:
Devise a communication method that Alice and Bob can use that would allow Alice to send vectors in R2 to Bob despite the interference from
the noise.
First, let us consider the communication method in this environment when no noise is present. In such a scenario, Alice can choose a vector
v ∈ R5 to transmit. For example:
v = [-2; -1; 0; 1; 2]
She then has a radio transmitter generate the signal corresponding to the polynomial whose coefficients are represented by v:
f(t) = -2 t4 - t3 + t + 2
To receive the message, Bob samples the signal at five different values of t; here, t = 0, 1, 2, 3, 4.
In order to recover the message v, Bob must find the coefficients of the curve in F that fits the above points. Thus, it is sufficient to solve
the following equation using any appropriate technique. Notice that the solution is the vector v that Alice was sending to Bob.
[0, 0, 0, 0, 1; 1, 1, 1, 1, 1; 16, 8, 4, 2, 1; 81, 27, 9, 3, 1; 256, 64, 16, 4, 1] ⋅ [a; b; c; d; e] = [f(0); f(1); f(2); f(3); f(4)] = [2; 0; -36; -184; -570]
Next, suppose that noise is present in the environment. In this scenario, Alice will not be able to send a vector in R5 directly. This is
because for any signal polynomial f that Alice generates, Bob will only be able to sample some linear combination of polynomials
that includes both Alice's signal and the noise from the environment:
f(t) + s1 g1(t) + s2 g2(t) + s3 g3(t)
However, she can first find two polynomials that are linearly independent from the polynomials g1, g2, and g3 that constitute the
radio noise in the environment:
Then, to send a vector v ∈ R2 to Bob, Alice simply takes the corresponding linear combination of the two vectors and transmits the signal
corresponding to the polynomial whose coefficients correspond to the components of this linear combination:
For example, suppose Alice wants to send the following vectors in R2:
Then she computes:
She then transmits:
The signal Bob hears is some linear combination of Alice's signal and the noise signals in the environment. For example, suppose he hears the
following (note that Bob does not know s1, s2, and s3).
f(t) + s1 g1(t) + s2 g2(t) + s3 g3(t)
For the purposes of this example, suppose that he hears the following (again, note that Bob does not know the coefficients):
f(t) + 1 ⋅ g1(t) + 2 ⋅ g2(t) + 0 ⋅ g3(t)
In order to receive the message, Bob can now do the following.
- Bob samples the signal at 5 time points to obtain an equation.
[0, 0, 0, 0, 1; 1, 1, 1, 1, 1; 16, 8, 4, 2, 1; 81, 27, 9, 3, 1; 256, 64, 16, 4, 1] ⋅ [a; b; c; d; e] = (the five sampled signal values)
- Next, Bob can solve the above equation to obtain a vector corresponding to the polynomial that includes both noise and Alice's signal.
At this point, Bob knows that this is a linear combination:
- Remove the noise from that vector by multiplying it by a matrix with an appropriate kernel.
- Determine what linear combination of Alice's two chosen vectors constitute what remains to determine the vector Alice sent.
Alternatively, Bob can do the following:
- Sample the signal at 10 time points to obtain an equation.
- Solve the equation to obtain both the parameters of the polynomial and the linear combination of the vectors Alice sent and the noise vectors.
Example: Alice has an unreliable communication device for transmitting scalars in R: for every sequence of scalars she sends, e.g.:
(s1, ..., sn),
up to half of the scalars will be lost during transmission (Bob will know which positions in the sequence have missing scalars, but
he will not know what their quantities were). Alice wants to send Bob a vector v ∈ R2, but she cannot use naive methods of
duplication to send it reliably. For example, simply sending many copies of v will not guarantee delivery of both components because up to
half of the components may be dropped, and these may (albeit with low probability) all be copies of the first component of v.
Devise a way for Alice to send Bob a vector v ∈ R2 using her device so that Bob can recover it without error. Alice and Bob may
coordinate ahead of time to agree on the parameters of the method.
We first illustrate the method they can use with an example. Before parting, Alice and Bob agree on a subspace of R2
and a finite set of vectors W from that subspace:
They also agree that the first element in this set will be the special vector used for decoding. Next, suppose Alice wants to
send the following vector to Bob:
Alice first computes the intersection of the orthogonal complement of the span of each vector in W with span{v}. In this case, we have:
Thus, the intersections of the points in W and span{v} would be:
Next, Alice projects these onto the orthogonal complement of the subspace she and Bob initially chose to obtain:
She can now discard the 0 entries and send only the second components to Bob:
{2, 4, 6, 8}
Bob can recover the original message using any two of the above scalars. For example, suppose Bob receives only 4 and 8.
Bob knows that these were generated using the following vectors in W:
Thus, Bob knows two vectors in span{v}:
To determine span{v}, Bob must simply determine the slope of the line through these two vectors:
Finally, Bob recalls that the first vector in W was the decoding vector. He can now compute the intersection of that vector's span and span{v}
to obtain the original v:
Example: Suppose Alice has a very unreliable communication device for transmitting scalars in R: for every sequence of scalars she sends,
up to half of them are lost; those scalars that do arrive have a bounded error of at most 5. Devise a way for Alice to send vectors in R2 to
Bob so that he can always recover the sent vectors with an error of at most 0.25.
Assignment #5: Applications: Communication
In this assignment you will use concepts from linear algebra to solve several problems in the application domain of communication.
- Alice wants to send Bob the matrix T below. However, she can only afford to send 30 scalars or fewer. Will she be able to send a message
from which Bob can recover all the information in the matrix?
T :=[0, 3, 1, -4, -1, -8, 0, -6, -2, 8, 2, 16; 1, 0, -2, 0, -2, -9, -2, 0, 4, 0, 4, 18; 3, -9, -9, 12, -3, -3, -6, 18, 18, -24, 6, 6; -2, -6, 2, 8, 6, 34, 4, 12, -4, -16, -12, -68; 3, -3, -7, 4, -5, -19, -6, 6, 14, -8, 10, 38]
numColumns := undefined # replace with a formula in terms of T
numRows := undefined # replace with a formula in terms of T
basisSize := undefined # replace with a formula in terms of T
numScalarsToSend := undefined # replace with a formula that produces an integer
numScalarsToSend \leq 30
- You know that Alice is using a matrix M ∈ R2×2 to send encoded vectors in R2 to Bob. You also know that she will only
send vectors from the following set:
You observe two encoded messages as they are being transmitted:
Determine which two unencoded vectors Alice sent, and then use them along with the encoded vectors and some matrix
operations to provide a definition of the encoding matrix. You should be able to use \augment to accomplish this using a definition
that fits on a single line.
\forall v,w \in \R^2, \forall a,b,c,d \in \R, v = \undefined \and w = \undefined \and [a,b;c,d] = \undefined \implies # ... [a,b;c,d] * v = [4;-6] \and [a,b;c,d] * w = [2;11]
-
Alice wants to send Bob a vector v ∈ R2. She has an unreliable device that can transmit a vector in R2 to Bob, but it always adds
the following error to v during transmission (where s is unknown ahead of time, and different
each time):
Alice and Bob will agree on some u, M, and M' before parting. When sending a vector with x and y components, Alice will use the device to
transmit two vectors: (x ⋅ u) and (y ⋅ u). Bob will receive w and w' as defined below, and will decode v by computing M w + M' w'.
Find u,M, and M' that make it possible to retrieve v, and complete the argument below. Hint: review the propositions available in the
verification system that deal with matrices and vectors, as they may allow some algebraic manipulations to be more concise.
\forall x,y,s,s' \in \R, \forall u,v,w,w' \in \R^2, \forall M,M' \in \R^(2 \times 2), u = \undefined \and M = \undefined \and M' = \undefined \and w = x u + s [-1;1] \and w' = y u + s' [-1;1] \and v = [x;y] \implies # ... v = M w + M' w'
- In many of the communication examples being considered, it is useful to be able to construct a matrix whose corresponding linear
transformation has a specific non-trivial image, and a specific non-trivial kernel (i.e., the span of a particular vector or set
of vectors). Suppose we know that a vector [x;y] in R2 is a linear combination of two linearly independent vectors:
We do not know s and r, but we do know all of the following:
One way to remove the [b;d] term from [x;y] is to first set up a matrix equation and solve for s and r:
However, suppose we instead want to find a linear transformation f such that for any [x;y] as defined above, we have:
In fact, given the information we have, we can construct exactly such a linear transformation:
Complete the argument below showing that this linear transformation indeed has this property.
\forall a,b,c,d,x,y,s,r \in \R, `([a;c]) and ([b;d]) are linearly independent` \and s * [a;c] + r * [b;d] = [x;y] \implies # ... s * [a;c] = ([a,b;c,d] * [1,0;0,0] * [a,b;c,d]^(-1)) * [x;y]
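Before completing the argument, it can help to check the claim numerically; the values of a, b, c, d, s, and r in the Python sketch below are arbitrary and not part of the problem.

import numpy as np

# Arbitrary linearly independent columns [a;c] and [b;d], and arbitrary weights s, r.
a, c = 1.0, 2.0
b, d = 3.0, -1.0
s, r = 5.0, -2.0

A = np.array([[a, b],
              [c, d]])
x_y = s * np.array([a, c]) + r * np.array([b, d])

# The transformation from the problem: A [1,0;0,0] A^{-1}.
P = A @ np.array([[1.0, 0.0],
                  [0.0, 0.0]]) @ np.linalg.inv(A)

# Applying it to [x;y] should drop the [b;d] term and leave s*[a;c].
print(P @ x_y)                 # approximately [ 5. 10.]
print(s * np.array([a, c]))    # [ 5. 10.]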
-
Let polynomials in F = {f | f(t) = a t2 + b t + c } represent a space of possible radio signals.
To send a vector v ∈ R3 to Bob, Alice sets her device to generate the signal corresponding to the polynomial in F whose coefficients
are represented by v. Bob then has his receiver sample the radio signals in his environment at three time points to retrieve the message.
-
Suppose Bob samples the signals at t ∈ {1,2,3} and obtains the vectors P defined below. What vector in R3 did Alice send? Use
techniques you have learned in this course to retrieve the vector.
P := { [1;1], [2;-3], [3;-11] }
# ... v := \undefined
-
Suppose the environment contains noise from other communication devices; the possible signals in this noise are always from the span
of the following polynomials:
Alice and Bob agree that Alice will only try to communicate to Bob one scalar r ∈ R at a time.
They agree on a vector u ahead of time. Any time Alice wants to send some r ∈ R to Bob, she will
have her device generate the signal corresponding to the polynomial represented by
r ⋅ u.
If the vectors P below represent the samples collected by Bob, what scalar was Alice sending to Bob?
P := { [1;3], [2;9], [3;13] }
# ... s := \undefined
-
Suppose there is no noise in the environment. Alice, Bob, and Carol want to send individual scalars to one another at the same time:
all three would be sending a signal simultaneously, and all three would be listening at the same time:
- Alice's device would generate a signal corresponding to a scalar multiple of f(x) = 2 x + 1;
- Bob's device would generate a signal corresponding to a scalar multiple of g(x) = -x2 + 1;
- Carol's device would generate a signal corresponding to a scalar multiple of h(x) = x2 - 3 x.
Suppose you sample the radio signals at a given moment and obtain the vectors P below. Determine which scalars
Alice, Bob, and Carol are each transmitting. Your answer can be in the form of a vector.
P := { [0;5], [1;7], [2;7] }
# ... v := \undefined
Modelling Systems of Populations: Applications of Linear Transformations and Eigenvalues
We introduce a few concepts.
Definition: Given a linear transformation f ∈ Rn → Rn, v ∈ Rn is a fixed point of f if:
f(v) = v
Example: Consider the following linear transformation:
Then the following vector v is a fixed point of f:
Fact: If v is a fixed point of f and v ∈ ker f, then v = 0.
Fact: For any linear transformation f, 0 is a fixed point of f (see the definition of a linear transformation).
Fact: If v is a fixed point of f and v ≠ 0, then f has an infinite number of fixed points. This is because for any s ∈ R and
any fixed point v, we have that f(s ⋅ v) = s ⋅ f(v) = s ⋅ v, so every scalar multiple s ⋅ v is also a fixed point.
Fact: For any linear transformation f where f(v) = T ⋅ v, let S be the set of fixed points of f:
S = {v | f(v) = v }
Then S is a vector space. There are several ways to show this. Let v be any fixed point of f. Then we have:
T ⋅ v = v, which is equivalent to (T - I) ⋅ v = 0
Thus, the fixed points of f represented by T are exactly the elements of ker h where h(v) = (T - I) ⋅ v,
so S = ker h. Alternatively, we could show that S contains 0, S is closed under vector addition, and S is closed under scalar multiplication.
We can generalize the notion of a fixed point by observing that a fixed point is just a vector on which the linear transformation acts
as a scalar multiplier (in the case of a fixed point, the multiplier is 1). If v is a fixed point of f, then we have that:
f(v) = 1 ⋅ v
What if we created another notion that was not restricted to 1?
Definition: Given a linear transformation f ∈ Rn → Rn, v ∈ Rn is an eigenvector of f with eigenvalue
λ if:
f(v) = λ ⋅ v
Fact: For any linear transformation f, if v is a fixed point of f, then it is an eigenvector of f with eigenvalue 1.
Definition: The set of eigenvectors of a linear transformation f is its eigenspace.
Fact: Given a linear transformation f ∈ Rn → Rn represented by T ∈ Rn×n and an eigenvector v, consider the following:
T ⋅ v = λ ⋅ v, which is equivalent to (T - λ I) ⋅ v = 0
The above equation has only zero solutions if (T - λ I) is invertible:
(T - λ I) -1 ⋅ (T - λ I) ⋅ v = (T - λ I) -1 ⋅ 0
v = 0
Thus, nonzero eigenvectors v exist only if (T - λ I) is not invertible, which means that det (T - λ I) = 0. Conversely, if
det (T - λ I) = 0 for some λ, then (T - λ I) has a nonzero kernel, so there is at least one nonzero eigenvector with eigenvalue λ.
In fact, the eigenvalues of T are exactly the solutions λ of the equation:
det (T - λ I) = 0
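As a quick numeric illustration of this fact, the Python sketch below uses a hypothetical symmetric matrix (not one from these notes) and checks that the numerically computed eigenvalues make det (T - λ I) vanish.

import numpy as np

# A hypothetical 2x2 matrix, just to illustrate the fact.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues computed numerically...
eigenvalues, eigenvectors = np.linalg.eig(T)
print(eigenvalues)                             # e.g. [3. 1.]

# ...are exactly the values of lambda for which det(T - lambda*I) = 0.
for lam in eigenvalues:
    print(np.linalg.det(T - lam * np.eye(2)))  # approximately 0 for each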
Example: Suppose that we are modelling a system with two dimensions that are each modelled using R:
- population in the city;
- population in the suburbs.
The following matrix (call it T) represents the movement of the two populations between the two locations (with entries in percentages) over
one year:
| | from city | from suburbs |
| to city | 0.95 stay in the city | 0.03 move from suburbs to city |
| to suburbs | 0.05 move from city to suburbs | 0.97 stay in the suburbs |
For example, suppose the population in 1999 is represented by v below. Then the population in 2000 is represented by T ⋅ v:
Let f(v) = T ⋅ v be the linear transformation represented by T.
- What does a fixed point of f represent?
A fixed point of f represents a stable population distribution that will not change from one year to the next. For example, [3; 5] is a fixed point, since T ⋅ [3; 5] = [3; 5].
Notice that any scalar multiple of this vector, including the vector [0.375; 0.625] that is normalized so that the two components add up to 1,
is also a fixed point of f.
Thus, for this transformation, any distribution of the population in which 37.5% live in the city (and 62.5% live in the suburbs) is stable.
- What does an eigenvector of f represent?
An eigenvector of f represents a population distribution that may grow or shrink, but whose relative distribution between the city and
the suburbs remains the same from one year to the next.
-
Does f have any nonzero eigenvectors other than the fixed points?
Only eigenvectors whose components sum to 0 (which do not represent actual population distributions). Any eigenvector whose components
have a nonzero sum must have eigenvalue 1, because the sum of the components of any vector in im f is always the same as the sum of the
components of the input vector that produced that image:
T ⋅ [x; y] = [0.95 x + 0.03 y; 0.05 x + 0.97 y]
(0.95 x + 0.03 y) + (0.05 x + 0.97 y) = (0.95 x + 0.05 x) + (0.03 y + 0.97 y) = x + y
This is because the sums of the components of the column vectors of T are 1. This makes T a stochastic matrix, and it makes
the fixed points the steady-state or equilibrium vectors of T.
-
Find the vector space of fixed points of f.
Note that either the fixed points of f are { 0}, or there are infinitely many. Thus, if we can find a matrix equation whose solutions
are the fixed points, we will obtain either a system whose only solution is 0, or an underdetermined system.
We know that for our particular f and T, the space of fixed points is ker h, where h(v) = (T - I) ⋅ v.
We know that ker h consists of the vectors in the orthogonal complement of the span of the row vectors of (T - I).
Recall that if g(v) = (T - I)⊤ ⋅ v, then ker h = (im g)⊥. Also, since the row vectors of (T - I) are linearly dependent
(the top row multiplied by -1 yields the bottom row and vice versa), we know that dim ker h = 1. Thus, ker h is spanned by a single
vector; solving -0.05 x + 0.03 y = 0 gives, for example, the spanning vector [3; 5], so the space of fixed points is span{ [3; 5] }.
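A minimal numeric check of this computation (assuming numpy); it finds a basis of ker h from the SVD of T - I and verifies the 37.5% / 62.5% split found earlier.

import numpy as np

T = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# Fixed points of f(v) = T v form ker h, where h(v) = (T - I) v.
A = T - np.eye(2)

# A basis of the kernel: right singular vectors whose singular values are zero.
_, sing, Vt = np.linalg.svd(A)
kernel_basis = Vt[np.isclose(sing, 0.0)]
print(kernel_basis)           # one vector, proportional to [3, 5]

# Scaling it so the components sum to 1 gives the 37.5% / 62.5% split,
# and applying T leaves it unchanged.
w = kernel_basis[0] / kernel_basis[0].sum()
print(w)                      # approximately [0.375, 0.625]
print(T @ w)                  # approximately [0.375, 0.625]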
-
Suppose we want to find a closed formula for the population k years after the initial state v (i.e., after applying T to an initial
vector v k times, or Tk ⋅ v) where we have:
The formula should be in terms of k and should not require matrix multiplication. In other words,
we should be able to obtain closed formulas for the city population and the suburb population in terms of k.
We can approach this problem by finding the eigenvectors of f, which span its eigenspace. Then, we can express v as a linear combination
of eigenvectors, so that Tk ⋅ v becomes a linear combination of scaled eigenvectors.
det (T - λ I) = 0
det [0.95 - λ, 0.03; 0.05, 0.97 - λ] = 0
(0.95 - λ) ⋅ (0.97 - λ) - (0.03 ⋅ 0.05) = 0
λ² - 1.92 λ + 0.9215 - 0.0015 = 0
λ² - 1.92 λ + 0.92 = 0
λ = (1.92 ± √(1.92² - 4 ⋅ 0.92))/2 = (1.92 ± √0.0064)/2 = (1.92 ± 0.08)/2
λ = 1 or λ = 0.92
We already know the eigenvector for eigenvalue 1 (it is the fixed point). To find the eigenvector for the other eigenvalue, we solve
(T - 0.92 I) ⋅ [x; y] = [0; 0], that is, 0.03 x + 0.03 y = 0 and 0.05 x + 0.05 y = 0, so y = -x.
Thus, our eigenvectors are e1 (for eigenvalue 1, proportional to [3; 5]) and e2 (for eigenvalue 0.92, proportional to [1; -1]).
Notice that we had a space of solutions because the system was underdetermined; we chose a particular eigenvector. Finally, notice that:
Tk ⋅ v = Tk ⋅ (a ⋅ e1 + b ⋅ e2)
       = (Tk ⋅ a ⋅ e1) + (Tk ⋅ b ⋅ e2)
       = a ⋅ (Tk ⋅ e1) + b ⋅ (Tk ⋅ e2)
       = a ⋅ 1k ⋅ e1 + b ⋅ 0.92k ⋅ e2 = a ⋅ e1 + b ⋅ 0.92k ⋅ e2
Thus, if the initial input vector can be rewritten in terms of the two eigenvectors, we can find the closed formula. In fact, it can be
because the two eigenvectors are linearly independent:
The closed formula is:
Tk ⋅ v = 0.125 ⋅ e1 + 0.225 ⋅ 0.92k ⋅ e2
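A small Python sketch that checks the closed formula numerically. The particular scalings e1 = [3; 5] and e2 = [1; -1], and the initial vector v = [0.6; 0.4] implied by the coefficients 0.125 and 0.225, are assumptions made only for this check; any consistent choice behaves the same way.

import numpy as np

T = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# One consistent choice of eigenvectors for eigenvalues 1 and 0.92.
e1 = np.array([3.0, 5.0])     # T e1 = 1    * e1
e2 = np.array([1.0, -1.0])    # T e2 = 0.92 * e2

# Initial vector implied by the coefficients 0.125 and 0.225 in the closed formula.
v = 0.125 * e1 + 0.225 * e2   # [0.6, 0.4]

# Compare T^k v computed by repeated multiplication with the closed formula.
for k in [0, 1, 5, 25]:
    direct = np.linalg.matrix_power(T, k) @ v
    closed = 0.125 * e1 + 0.225 * (0.92 ** k) * e2
    print(k, direct, closed)  # the two agree for every k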
Eigenspaces have many other applications. In particular, they make it possible to provide "natural" interpretations of general notions
such as differentiation in the context of vector spaces.
Example: Differentiation is a linear transformation from the vector space of differentiable functions (or a subset, e.g., the polynomials):
As an example, consider the space of polynomials of the form f(x) = a x2 + b x + c. If each polynomial is represented as a vector
of its coefficients, the differentiation operator for this vector space of functions can be represented as a matrix:
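(A sketch, assuming each polynomial a x2 + b x + c is represented by the coefficient vector [a; b; c]; this ordering is a convention chosen here, not one fixed by the notes.)
D = [0, 0, 0; 2, 0, 0; 0, 1, 0], so that D ⋅ [a; b; c] = [0; 2a; b], which represents d/dx (a x2 + b x + c) = 2 a x + b.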
Thus, an eigenvector in this vector space is any differentiable function f such that: d(f) = λ ⋅ f, that is, f'(x) = λ ⋅ f(x).
Notice that the above is a differential equation. If λ = 0, then we have for any constant c ∈ R the solution f(x) = c.
If λ ≠ 0 and we do not restrict ourselves to polynomials but allow all infinitely differentiable functions, then we have the solution f(x) = c ⋅ e^(λ x) for any constant c ∈ R.
Review #3
Comprehensive List of Course Topics
The following is a breakdown of what you should be able to do at this point in the course (and of what you may be tested on in the final exam).
Notice that many of the tasks below can be composed (e.g., you could find the image of a linear transformation, and then because it is a
vector space, you could determine its dimension). This also means that many problems can be solved in more than one way.
- vectors and matrices
- algebraic properties of scalar, vector, and matrix multiplication and vector and matrix addition
- vector properties and relationships
- norm of a vector
- unit vectors
- linear dependence
- linear independence
- dot product of two vectors
- orthogonal vectors
- matrix properties and relationships
- identity matrix
- elementary matrices
- scalar matrices
- diagonal matrices
- upper and lower triangular matrices
- determinant of a matrix
- inverse of a matrix and invertible matrices
- determine whether a matrix is invertible
- algebraic properties of matrix inverses with respect to matrix addition and multiplication
- transpose of a matrix
- algebraic properties of transposed matrices with respect to matrix addition and multiplication
- row echelon form and reduced row echelon form of a matrix
- matrix rank
- vector spaces
- vector spaces and their properties
- given a set of vectors or other objects, show that it is a vector space
- express a vector space as a span of a finite set of vectors
- given two vector spaces, show one is a subspace of the other
- given two vector spaces, show they are equal
- find the orthogonal complement of a vector space
- find the basis of a vector space
- find an orthonormal basis of a vector space
- find the dimension of a vector space
- particular vector spaces
- subsets of Rn and Rn×m
- the set of polynomials of a given order
- linear transformations
- show that a map is a linear transformation
- show a linear transformation is surjective
- show a linear transformation is injective
- show a linear transformation is bijective
- given a linear transformation (and/or its matrix representation)...
- show it is injective
- show it is surjective
- show it is bijective
- find its kernel (a vector space)
- find its image (a vector space)
- find its space of fixed points
- compositions of linear transformations and their properties
- given a kernel and image, find a linear transformation (i.e., the matrix that represents it) with that kernel and image
- compute orthogonal projections...
- onto the span of a single vector in Rn
- onto a subspace of Rn...
- by first computing an orthonormal basis and then using it to find the projection
- by using the formula M (M⊤ ⋅ M) -1 M⊤ when ker f = {0} for f(v) = M ⋅ v
- systems of equations and matrix equations
- solve a linear system of the form LU = v
- find the least-squares approximation of an overdetermined linear system
- applications
- systems of states
- interpret a system of states consisting of dimensioned quantities as a vector space
- interpret relationships in a system of states as a matrix
- interpret system state transitions/transformations as matrices/linear transformations
- interpret partial observations of system states as matrices/linear transformations
- given a partial description of a system state and a matrix of relationships, find the full description of the system state
- curve fitting and data approximation
- find a polynomial that fits a given set of points in R2
- find a least-squares approximate curve that best fits a set of points in R2
- communications (sending messages consisting of vectors)
- decrypt message vectors encrypted using a matrix
- eliminate noise from a message vector
- encode a message vector as a polynomial and use sampling to transmit it
- decode a message vector sent as a polynomial
- filter out noise from a message vector sent as a polynomial
- population distributions over time
- find the fixed point of a transition matrix
Practice Problems
Below is a comprehensive collection of review problems going over all the course material. These problems
are an accurate representation of the kinds of problems you may see on an exam.
Problem: Consider the following vector space:
- Find the orthogonal projection of the following vector onto V:
If the two spanning vectors were orthogonal, one approach would be to project u onto a normalized
form of each vector, and to add the results. If the two spanning vectors were orthonormal, it would be
sufficient to simply project onto each vector, and add the results. Because the two vectors are neither,
we must use the formula A ( A⊤ A) -1 A⊤ u for the projection where
- Find a basis of V⊥.
Note that dim V = 2 (since the two spanning vectors are linearly independent), and we know that:
dim V + dim V⊥ = 3
Thus, dim V⊥ = 1, so we need one spanning vector of V⊥ that is orthogonal to both spanning vectors of V.
It is sufficient to set up an underdetermined
system, solve for two of the variables in terms of the third, and set the third to an arbitrary constant:
x = -y/2 - 4z/2 = -z/2 - 4z/2 = -(5/2) ⋅ z (substituting y = z, which follows from the other equation)
Setting z = 2, we get the spanning vector [-5; 2; 2] for V⊥.
- Find any matrix M such that for f(v) = M ⋅ v, ker f = V.
It is sufficient to find a matrix that maps both spanning vectors to 0. One approach is to use the formula for a matrix with a specific
kernel and image. From above, we already have a vector that spans V⊥, so such a matrix can be computed using the formula:
Problem: Consider the vector space of polynomials of degree at most 2:
F = { f | f(x) = a x2 + b x + c, a,b,c ∈ R}
The map d:F → F represents differentiation. For example:
- Determine whether d:F → F is injective.
It is not injective because we can find two unequal inputs that produce the same output. Consider the following polynomials:
Then we have that:
Thus, the map d is not injective.
- Find ker d.
We are looking for the set of all polynomials in F whose derivative is the polynomial f(x) = 0. We know this is exactly the set of polynomials:
{f | f(x) = 0 x2 + 0 x + r, r ∈ R}
- Recall that a polynomial can be represented as a vector. For example, f(x) = 5 x2 - 2 x + 3 can be
represented as:
Show that d is a linear transformation by finding a matrix representation for d.
The matrix is D = [0, 0, 0; 2, 0, 0; 0, 1, 0] (using the coefficient ordering [a; b; c]).
Notice that: D ⋅ [5; -2; 3] = [0; 10; -2], which represents the polynomial 10 x - 2 = d(f)(x).
- Show that d:F → F is not surjective by using im and dim.
In order for a linear transformation to be surjective, it must be that dim im = dim F. We know that dim F = 3, and we also know that
there are only two linearly independent columns in the matrix representing d, so dim (im d) = 2. Thus, dim F ≠ dim (im d), so
there must be vectors in the codomain of d that are not in the image of d, so d is not surjective.
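A one-line numeric check of the dimension argument (assuming the coefficient ordering [a; b; c] used in the sketch above):

import numpy as np

# Matrix representing differentiation on a x^2 + b x + c, with coefficient
# vectors ordered [a; b; c].
D = np.array([[0.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# dim(im d) is the rank of D; since it is 2 < 3 = dim F, d is not surjective.
print(np.linalg.matrix_rank(D))   # 2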
Problem: Alice wants to send vectors in R2 to Bob. For any vector v ∈ R2 that
she wants to send, she generates a random scalar r ∈ R and sends w
to Bob as defined below:
- Find a matrix D ∈ R3× 3 that Bob can use to decode Alice's messages.
Alice is sending vectors that are a linear combination of two vectors that carry the two scalars in a message vector v and a noise vector:
Bob must first cancel out the noise using a matrix in R3×3 that has the appropriate kernel. Then he must take the result of that and recover
the scalars x and y.
To find an appropriate matrix to cancel out the noise, Bob can use the following formula:
Once Bob applies C to the vector he receives from Alice, he has:
Notice that we can find a matrix that inverts this operation by replacing the top-left portion of the above encoding matrix with its inverse:
Thus, Bob can use the following matrix to decode messages:
D ⋅ C.
If Bob receives a message w that is the encoded version of v, he can retrieve it by computing:
D ⋅ C ⋅ w.
- Find a matrix D' ∈ R2×3 that Bob can use to retrieve v ∈ R2 given a transmitted w ∈ R3.
Bob simply needs to drop the third component of the result of D ⋅ C ⋅ w. This can be accomplished by computing: [1, 0, 0; 0, 1, 0] ⋅ D ⋅ C ⋅ w.
Thus, an appropriate matrix in R2×3 would be: D' = [1, 0, 0; 0, 1, 0] ⋅ D ⋅ C.
Problem: Suppose there are two locations across which a population is distributed. Over the course of each year,
the population moves between the two locations according to one of two population distribution transformations depending
on how well the economy is doing (A if the economy is doing well, B otherwise):
Appendix
This section contains a short review of concepts used in this course that are found throughout mathematics.
Logical formulas and quantifiers
A mathematical formula is a string of symbols that follows a certain syntax. If the formula is written using
correct syntax, we can ask whether the formula is true or false.
formula |
true or false |
example |
\true |
always true |
|
\false |
always false |
|
f1 \and f2 |
only true if both f1 and f2 are true |
1 < 2 and 2 < 3
1 < 2 \and 2 < 3
|
f1 implies f2 |
true if assuming f1 lets us derive f2; always true if f1 is false (vacuously true) |
1 = 2 and 2 = 3 implies 1 = 3
1 = 2 \and 2 = 3 \implies 1 = 3
|
∀ x ∈ S, f |
true if no matter what element we choose in S, substituting x for that element in f will result in a true formula |
∀ x ∈ {1,2,3}, x > 0 ∀ x ∈ {1,2,3}, x > 2
\forall x \in {1,2,3}, x > 0 \forall x \in {1,2,3}, x > 2
|
∃ x ∈ S, f |
true if there is at least one element in S such that substituting x for that element in f will result in a true formula |
∃ x ∈ {1,2,3}, x = 2 ∃ x ∈ {1,2,3}, x > 3
\exists x \in {1,2,3}, x = 2 \exists x \in {1,2,3}, x > 3
|
Set comprehensions
Set comprehension notation is a way to specify how sets can be built or filtered by using formulas.
set comprehension |
description |
result |
example |
{x | x ∈ {1,2,3,4}} |
take each element in S and put it in the set represented by {x | x ∈ S} |
{1,2,3,4} |
|
{x | x ∈ {1,2,3,4}, x > 2} |
take each element in S and put it in the result set only if it is greater than 2 |
{3,4} |
{ x | x ∈ {1,2,3,4}, x > 2}
{ x | x \in {1,2,3,4}, x > 2}
|
{x | x ∈ S, f} |
take each element in S and put it in the result set only if substituting x in f with that element results in a true formula |
|
{ x | x ∈ {1,2,3,4}, ∃ y ∈ {3,4}, x = y }
{ x | x \in {1,2,3,4}, \exists y \in {3,4}, x = y }
|
Sets and operators
We often consider sets together with operators over those sets. These operators can have properties
related to those sets.
Operators allow us to construct syntactic terms that are then interpreted as referring to
elements in a set. When two syntactic terms refer to the same element in a set (i.e., have the same
meaning), such as 1+(2+3) and (3+2)+1, we say they are equivalent:
1+(2+3) = (3+2)+1.
We have names for several common properties that operators may possess for a given set. In the table below, we define them
precisely using logical notation.
property |
definition |
S is closed under ⊕ |
∀ x,y ∈ S, x ⊕ y ∈ S |
⊕ is commutative on S |
∀ x,y ∈ S, x ⊕ y = y ⊕ x |
⊕ is associative on S |
∀ x,y,z ∈ S, (x ⊕ y) ⊕ z = x ⊕ (y ⊕ z) |
⊕ has a left identity I in S |
∀ x ∈ S, I ⊕ x = x |
⊕ has a right identity I in S |
∀ x ∈ S, x ⊕ I = x |
⊕ has an identity I in S |
∀ x ∈ S, I ⊕ x = x ⊕ I = x |
⊗ distributes across ⊕ in S |
∀ x,y,z ∈ S, x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z) |
Equality is assumed to be reflexive, symmetric, and transitive.
property |
definition |
= is reflexive for S |
∀ x ∈ S, x = x |
= is symmetric for S |
∀ x,y ∈ S, x = y iff y = x |
= is transitive for S |
∀ x,y,z ∈ S, x = y and y = z implies x = z |
Relations, maps, and functions
A relation between two sets X and Y is simply a subset of the collection of all pairs of objects drawn from the two sets (that is, a subset of X × Y).
construct |
definition |
example |
graphical example |
X × Y |
{ (x,y) | x ∈ X, y ∈ Y } |
{1,2,3} × {4,5,6} = {(1,4),(1,5),(1,6), (2,4),(2,5),(2,6), (3,4),(3,5),(3,6)} |
|
R is a relation between X and Y |
R ⊂ X × Y |
{(1,D), (2,B), (2,C)} is a relation between {1,2,3,4} and {A,B,C,D} |
|
f is a function from X to Y |
f is a relation between X and Y and
∀ x ∈ X, there is exactly one y ∈ Y s.t. f relates x to y
|
{ (x,f(x)) | f(x) = x2 } |
|
R -1 is the inverse of R |
{ (b,a) | (a,b) ∈ R } |
|
|
f: X → Y is injective |
∀ x1,x2 ∈ X, f(x1) = f(x2) implies x1 = x2 |
|
|
f: X → Y is surjective |
∀ y ∈ Y, ∃ x ∈ X, f(x) = y |
|
|
f: X → Y is bijective |
f is injective and f is surjective |
|
|
Notice that we may have f such that f is a function, but f -1 is not a function.