The lecture notes below are presented for archival purposes as they appeared at the time that this course was originally offered. The latest version of these notes is available at https://lapets.io/course-linear-algebra.
NOTE: This page contains all the machine-verifiable examples presented during the lectures, as well as all the homework assignments. Click here to go back to the main page with the course information and schedule.


1. Introduction and Motivation

This section introduces at a high level the direction and goals of this course, as well as some terminology. Do not worry if all the details are not clear; all this material will be covered in greater detail and with many more examples during the course.

1.1. Models, systems, and system states

When many real-world problems are addressed or solved mathematically and computationally, the details of those problems are abstracted away until they can be represented directly as idealized mathematical structures (e.g., numbers, sets, matrices, and so on). We can talk about this distinction by calling the real-world problems systems (i.e., organized collections of interdependent components) that have distinguishable system states, and by calling the abstract versions of these systems, which only capture some parts or aspects of the different system states, models. Since models are abstractions, we must also consider the interpretation of a model, which explains how the different parts of the model are related to the different parts of the system (i.e., what each part of the model represents).

Definition: We represent the set of all real numbers using the symbol R.
R = { -1, 1.2, 3, 4, 1/3, π, e, ... }

The set of real numbers is a very simple kind of model that can be used to represent quantities or, by considering quantities and fractions of units (e.g., inch, meter, kilogram, and so on), a magnitude. If we adopt R as our model of a system, we can then represent individual system states using individual real numbers.

model (language for describing system states) | example of a system state in the model | interpretation | system
R | 7 | number of giraffes | zoo
R | 1 | distance in AU between the two objects | the Earth-Sun system
R | 5.6 | temperature in Celsius | weather in Boston

Thus, real numbers are a very simple symbolic language for modelling systems. Furthermore, this language allows us to describe characteristics of systems concisely ("7" is much more concise than a drawing of seven giraffes).

1.2. A language of system states: terms and their algebraic properties

Real numbers can be used to characterize the quantities or magnitudes of parts of a system, but this is often not enough. We may want to capture within our models certain kinds of changes that might occur to a system. In order to do so, we must add something to our language of models: binary operators such as +, -, ⋅, and so on.

model (language for describing system states) | a system state in the model | interpretation | system
R with addition (+) | 3 | number of apples in one of the apple baskets | a collection of apple baskets
R with addition (+) | 3 + 2 | number of apples in two of the apple baskets | a collection of apple baskets
R with addition (+) | 2 + 3 | number of apples in two of the apple baskets | a collection of apple baskets

In general, the columns above are: a symbolic language, a symbol string (a.k.a. a "term") in the language, the meaning of the symbol string, and the system being modelled.

The different parts of this language for models have technical names. So far, we have seen real numbers and operators. We can build up larger and larger symbol strings using operators and real numbers. These are all called terms.

Definition: The following table defines the collection of symbol strings corresponding to terms.

term | conditions
0 |
1.2 |
x | if x is a real number
t1 + t2 | if t1 and t2 are terms
t1 − t2 | if t1 and t2 are terms
−t | if t is a term
t1 ⋅ t2 | if t1 and t2 are terms

Note that in these notes, we use the multiplication symbols ⋅ and * interchangeably, or omit them entirely.

Notice that when our language for models only had real numbers, each symbol corresponded to a unique system state. However, by introducing operators into our language for describing system states, we are now able to write down multiple symbol strings that represent the same system state (e.g., 2 + 3 and 3 + 2). In other words, "2 + 3" and "3 + 2" have the same meaning.

Exercise: List as many of the algebraic properties of the operators + and ⋅ over the real numbers as you can.

For all real numbers x ∈ R, y ∈ R, and z ∈ R, it is true that:
x + y = y + x
x + (y + z) = (x + y) + z
x + 0 = x
x ⋅ y = y ⋅ x
x ⋅ (y ⋅ z) = (x ⋅ y) ⋅ z
x ⋅ 1 = x
x ⋅ (y + z) = (x ⋅ y) + (x ⋅ z)
x + (-x) = 0

1.3. Extending the language of models: vectors and matrices

Real numbers and operators are still not sufficient to capture many interesting aspects of real-world systems: they can only capture quantities, one-dimensional magnitudes (e.g., distance). What if we want to capture multidimensional magnitudes (e.g., temperature and pressure) or relationships between different quantities? These can correspond to points, lines, planes, spaces; relationships and changes within such systems can be modelled as translations, rotations, reflections, stretching, skewing, and other transformations.

Example: Suppose we have $12,000 and two investment opportunities: one has an annual return of 10%, and the other has an annual return of 20%. How much should we invest in each opportunity to get exactly $1800 over one year?

The above problem can be represented as a system in which two magnitudes x ∈ R and y ∈ R have a relationship. Typically, this is represented using a system of equations:
x + y = 12000
0.1 x + 0.2 y = 1800
The possible solutions to this system are the collection of all pairs of real numbers that can be assigned to the pair (x, y). These are the possible system states. Notice that these can be interpreted directly as points on a plane.

How does the above example suggest we might extend our language for system states in a model? We might consider ordered pairs (x,y) of real numbers. More generally, we might consider ordered lists (x1,...,xn) of real numbers. In this course, we will introduce a few new kinds of terms into our language for models that can help us represent these sorts of relationships within systems: vectors and matrices. We will also study the algebraic properties of this extended language.

new term (language construct) | what it represents
vector | system state in the model
matrix | transitions between system states
matrix | changes to system states
matrix | relationships between system states
matrix multiplication | composition of transformations of system states

1.4. Review: sets and formulas

Definition: The symbols and, or, and not are logical operators. The symbols ∀ and ∃ are quantifiers, where ∀ is read "for all" and is called the universal quantifier, while ∃ is read "exists" and is called the existential quantifier.

Definition: Strings of mathematical symbols that can be either true or false are called formulas. The following table describes how formulas can be built up using logical operators and logical quantifiers, and their meanings.

formula | conditions | meaning
true | | always true
false | | always false
t1 = t2 | t1 and t2 are terms | true only if the meaning (e.g., system state) of t1 and t2 is the same
t1 < t2 | t1 and t2 are terms | true only if t1 and t2 represent real numbers, and t1 is less than t2
f1 and f2 | if f1 and f2 are formulas | only true if both f1 and f2 are true; otherwise, it is false
f1 or f2 | if f1 and f2 are formulas | true if f1, f2, or both are true; only false if both are false
not f | if f is a formula | true if f is false; false if f is true
f1 implies f2 | if f1 and f2 are formulas | only true if f2 is true whenever f1 is true, or equivalently, only true if f1 is false or f2 is true
f1 iff f2 | if f1 and f2 are formulas | only true if f1 and f2 are both true, or f1 and f2 are both false
∀ x ∈ S, f | if S is a set and f is a formula | true only if, for every element of S, replacing x inside f with that element makes f true
∃ x ∈ S, f | if S is a set and f is a formula | true only if there is at least one element of S that can replace x inside f so that f is true

Definition: The symbols =, <, >, ≤, ≥, ≠, and ∈ are relational operators or relations. Given two terms t1 and t2, applying any of these operators to the terms produces a formula that is either true or false (e.g., "5 ≤ 6").

Fact: The relational operator = (the equality relation or just equality) has the following properties.

property | definition | example
reflexivity | for any term t,   t = t | ∀ x ∈ R,   x = x
symmetry | for any terms t1 and t2,   t1 = t2 implies t2 = t1 | ∀ x, y ∈ R,   x = y implies y = x
transitivity | for any terms t1, t2, and t3,   t1 = t2 and t2 = t3 implies t1 = t3 | ∀ x, y, z ∈ R,   x = y and y = z implies x = z

The above properties always apply to =. The properties below are assumptions we make in this course (they are extensions of the definition of equality):


2. Vectors

In this section we consider a particular kind of term: a vector. A vector can have many interpretations, but the most common is that of a point in a geometric space. We first define a notation for names of vectors. Vectors will be written using any of the following equivalent notations:

(2, 3)    [2; 3]    a column vector with entries 2 and 3        (x, y, z)    [x; y; z]    a column vector with entries x, y, and z

2.1. Defining operations on vectors

We have introduced a new kind of term (vectors) with their own corresponding notation. As with R, the set of real numbers, we define symbols to represent sets of vectors.

Definition: We represent the set of all vectors with two components using the symbol R2:
R2 = { [x; y]   |   x ∈ R, y ∈ R }

Definition: For positive n ∈ N, we represent the set of all vectors with n components using the symbol Rn:
Rn = { [x1; ...; xn]   |   x1 ∈ R, ..., xn ∈ R }

We will also use symbols to name some geometric manipulations of vectors. One such operation is addition. Suppose we treat vectors as paths from (0,0), so that (2,3) is the path from (0,0) to (2,3) and (1,2) is the path from (0,0) to (1,2). Then vector addition would represent the final destination if someone walks first along the length and direction specified by one vector, and then from that destination along the length and direction specified by the other. The following definition for the operation + and vectors of two components corresponds to this intuition.

Definition: The following formula is assumed to be true; it defines what operation the symbol + represents when applied to two vectors. We call + vector addition in this context.

∀ x, y, x', y' ∈ R,   [x; y] + [x'; y'] = [x + x'; y + y']


Notice that we have defined an entirely new operation; we are merely reusing (or "overloading") the + symbol to represent this operation. We cannot assume that this operation has any of the properties we normally associate with addition of real numbers. However, we do know that according to our interpretation of vector addition (walking along one vector, then along the other from that destination), this operation should be commutative. Does our symbolic definition conform to this interpretation? If it does, then we should be able to show that [x; y] + [x'; y'] and [x'; y'] + [x; y] are names for the same vector. Using the commutativity of addition of real numbers and our definition above, we can indeed write the proof of this property.

Fact: Vector addition is commutative.

∀ x, y, x', y' ∈ R,
[x; y] + [x'; y'] = [x + x'; y + y'] = [x' + x; y' + y] = [x'; y'] + [x; y]

Recall that we can view multiplication by a number as repeated addition. Thus, we could use this intuition and our new notion of vector addition defined above to define multiplication by a real number; in this context, it will be called a scalar, and the operation is known as scalar multiplication.

Definition: The following formula is assumed to be true; it defines what operation the symbol ⋅ represents when applied to a scalar (a real number) s ∈ R and a vector. We call ⋅ scalar multiplication in this context.

∀ s, x, y ∈ R,   s ⋅ [x; y] = [s ⋅ x; s ⋅ y]

Note that, as with multiplication of real numbers, the ⋅ symbol is sometimes omitted.
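
To make the two definitions above concrete, here is a minimal sketch in Python (the helper names are ours and are not part of the course's proof verifier) that implements vector addition and scalar multiplication for R2 and spot-checks, on sample values, the commutativity fact proven above and the scalar multiplication property discussed next.

    # A minimal sketch (helper names are ours, not part of the course's proof
    # verifier): vectors in R^2 as Python tuples, with the operations defined above.

    def add(v, w):
        # [x; y] + [x'; y'] = [x + x'; y + y']
        return (v[0] + w[0], v[1] + w[1])

    def scale(s, v):
        # s . [x; y] = [s . x; s . y]
        return (s * v[0], s * v[1])

    v, w = (2, 3), (1, -2)
    s, t = 4, 3

    print(add(v, w))                                   # (3, 1)
    assert add(v, w) == add(w, v)                      # vector addition is commutative
    assert scale(s, scale(t, v)) == scale(s * t, v)    # s . (t . v) = (s . t) . v
    assert scale(s, scale(t, v)) == scale(t, scale(s, v))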

Scalar multiplication has some of the intuitive algebraic properties that are familiar to us from our experience with R. Below, we provide proofs of a few.

Fact: The following fact is true for scalar multiplication.

∀ s, t, x, y ∈ R,
s ⋅ (t ⋅ [x; y]) = s ⋅ [t ⋅ x; t ⋅ y] = [s ⋅ (t ⋅ x); s ⋅ (t ⋅ y)] = [(s ⋅ t) ⋅ x; (s ⋅ t) ⋅ y] = [(t ⋅ s) ⋅ x; (t ⋅ s) ⋅ y] = [t ⋅ (s ⋅ x); t ⋅ (s ⋅ y)] = t ⋅ [s ⋅ x; s ⋅ y] = t ⋅ (s ⋅ [x; y])

# alternatively, we could also derive...

[s ⋅ (t ⋅ x); s ⋅ (t ⋅ y)] = [(s ⋅ t) ⋅ x; (s ⋅ t) ⋅ y] = (s ⋅ t) ⋅ [x; y]

Once we have defined scalar multiplication, we can now assign a meaning to the negation operator (interpreting -v for any vector v R2 as referring to the scalar multiplication −1 ⋅ v).

Definition: The following formula is assumed to be true; it defines what operation the symbol − represents when applied to a vector.

∀ x, y ∈ R,   −[x; y] = [−x; −y]

The above definition allows us to define vector subtraction: for two vectors v, w ∈ R2, v − w = v + (−w).

Notice that we did not define vector division. In fact, division by a vector is undefined (in the same way that division by 0 is undefined). However, we can prove a cancellation law for vectors.

Fact: Let [x; y] = v ∈ R2 be a vector with at least one nonzero real number component, and let a, b ∈ R be scalars. Suppose that a ⋅ v = b ⋅ v. If x is a nonzero component, then we have that:

a ⋅ v = b ⋅ v
a ⋅ [x; y] = b ⋅ [x; y]
a ⋅ x = b ⋅ x
a = b

Notice that in the last step we used division by x, a nonzero real number.

2.2. Properties of vector operations

In this course, we will study collections of vectors that have particular properties. When a collection of vectors satisfies these properties, we will call it a vector space. We list these eight properties (in the form of equations) below. These must hold for any vectors u, v, and w in the collection and any scalars s, t ∈ R, and the vector 0 must also be in the collection.

u + (v + w) = (u + v) + w
u + v = v + u
0 + v = v
v + (-v) = 0
1 * v = v
s * (u + v) = (s * u) + (s * v)
(u + v) * s = (u * s) + (v * s)
s * (t * u) = (s * t) * u

Note that these are not unique; we could have specified an alternative equation for the additive identity.

v + 0 = v

Each equation can be derived from the other using the commutativity axiom:

v + 0 = 0 + v

These eight equations can be called axioms (i.e., assumptions) when we are studying vector spaces without thinking about the internal representation of vectors. For example, the above derivation of the alternative additive identity equation is based entirely on the axioms and will work for any vector space. However, to show that some other structure, such as R2, is a vector space we must prove that these properties are satisfied. Thus, to define a new vector space, we usually must:

  1. define a way to construct vectors;
  2. define a vector addition operator + (i.e., how to add vectors that we have constructed);
  3. specify the vector that is the additive identity (call it 0);
  4. define vector inversion: a process for constructing the inverse of a vector;
  5. prove that the eight properties of a vector space are satisfied by the vectors, +,0, and *.

Example: Assuming the following equation is true, solve for x ∈ R and y ∈ R:
[x; 10] = 2 ⋅ [4; y]

∀ x, y ∈ R,
[x; 10] = 2 ⋅ [4; y]
implies
[x; 10] = [2 ⋅ 4; 2 ⋅ y] = [8; 2 ⋅ y]
x = 8
10 = 2 ⋅ y
0.5 ⋅ 10 = 0.5 ⋅ (2 ⋅ y)
5 = (0.5 ⋅ 2) ⋅ y
5 = 1 ⋅ y
5 = y
y = 5

Example: Assuming the following equation is true, solve for x ∈ R and y ∈ R:
[x; 2 ⋅ x] = 5 ⋅ [2 ⋅ y; 20]

∀ x, y ∈ R,
[x; 2 ⋅ x] = 5 ⋅ [2 ⋅ y; 20]
implies
[x; 2 ⋅ x] = 5 ⋅ [2 ⋅ y; 20] = [5 ⋅ (2 ⋅ y); 5 ⋅ 20]
[x; 2 ⋅ x] = [(5 ⋅ 2) ⋅ y; 100]
2 ⋅ x = 100
0.5 ⋅ (2 ⋅ x) = 0.5 ⋅ 100
(0.5 ⋅ 2) ⋅ x = 50
1 ⋅ x = 50
x = 50
x = 10 ⋅ y
0.1 ⋅ 50 = 0.1 ⋅ (10 ⋅ y) = (0.1 ⋅ 10) ⋅ y = 1 ⋅ y = y
y = 5



2.3. Assignment #1: Vector Algebra

In this assignment you will perform step-by-step algebraic manipulations involving vectors and vector operations. In doing so, you will assemble machine-verifiable proofs of algebraic facts. Your proofs must be automatically verifiable using the proof verifier. Please submit a single text file a1.txt containing your proof scripts for each of the problem parts below.

Working with machine verification can be frustrating; minute operations that are normally implied must often be made explicit. However, the process should familiarize you with common practices of rigorous algebraic reasoning in mathematics: finding, applying, or expanding definitions to reach a particular formula that represents a desired argument or solution.

    1. Finish the proof below by solving for a using a sequence of permitted algebraic manipulations.

      a, b R,
       
      3
      a
       
      =
       
      b
      b + 8
       


      implies
      # ... put proof steps here ...
      a = ? # replace "?" with an integer

    2. Finish the proof below by solving for b in terms of a.

      a,b R,
       
      b
      38
       
      =
       
      3
      7
       
      a +
       
      0
      4
       
      b

      implies
      # ... put proof steps here ...
      b = ? # replace "?" with a term containing "a"

    3. Finish the proof below to show that [1;2] and [-2;1] are linearly independent.

       
      1
      2
       
       
      −2
      1
       
      = 1 ⋅ −2 + 2 ⋅ 1   and

      # ... put proof steps here ...
      (
       
      1
      2
       
      ) and (
       
      −2
      1
       
      ) are orthogonal

    4. Solve for x (the solution is an integer).

      x R,
      (
       
      x
      4
       
      ) and (
       
      1
      1
       
      ) are orthogonal

      implies
      # ... put proof steps here ...
      x = ?

    5. Show that no [x; y] exists satisfying both of the below properties by deriving a contradiction (e.g., 1 = 0).

      x,y R,
      (
       
      x
      y
       
      ) is a unit vector
      (
       
      xx
      yy
       
      ) and (
       
      1
      1
       
      ) are orthogonal

      implies
      # ... put proof steps here ...
      ? # this should be a contradiction

  1. We have shown in lecture that R2, together with vector addition and scalar multiplication, satisfies some of the vector space axioms. In this problem, you will show that the remaining axioms are satisfied.
    1. Finish the proof below showing that [0; 0] is a left identity for addition of vectors in R2.

      x,y R,
      # ... put proof steps here ...
       
      0
      0
       
      +
       
      x
      y
       
      =
       
      x
      y
       


    2. Finish the proof below showing that [0; 0] is a right identity for addition of vectors in R2.

      x,y R,
      # ... put proof steps here ...
       
      x
      y
       
      +
       
      0
      0
       
      =
       
      x
      y
       


    3. Finish the proof below showing that 1 is the identity for scalar multiplication of vectors in R2.

      x,y R,
      # ... put proof steps here ...
      1 ⋅
       
      x
      y
       
      =
       
      x
      y
       


    4. Finish the proof below showing that the component-wise definition of inversion is consistent with the inversion axiom for vector spaces.

      x,y R,
      # ... put proof steps here ...
       
      x
      y
       
      + (−
       
      x
      y
       
      ) =
       
      0
      0
       


    5. Finish the proof below showing that the addition of vectors in R2 is associative.

      a,b,c,d,e,f R,
      # ... put proof steps here ...
       
      a
      b
       
      + (
       
      c
      d
       
      +
       
      e
      f
       
      ) = (
       
      a
      b
       
      +
       
      c
      d
       
      ) +
       
      e
      f
       


    6. Finish the proof below showing that the distributive property applies to vector addition and scalar multiplication for R2.

      s, x, y, x', y' R,
      # ... put proof steps here ...
      s (
       
      x
      y
       
      +
       
      x'
      y'
       
      ) = (s
       
      x
      y
       
      ) + (s
       
      x'
      y'
       
      )

  2. Any point p on the line between vectors u and v can be expressed as a(u-v)+u for some scalar a. In fact, it can also be expressed as b(u-v)+v for some other scalar b. In this problem, you will prove this fact for R3.

    Given some p = a(u-v)+u, find a formula for b in terms of a so that p = b(u-v)+v. Add this formula to the beginning of the proof (after a = ) and then complete the proof.

    a,b R, ∀ u,v,p R3,
    p
    =
    a (uv) + u
    a
    =
    ? # define a in terms of b

    implies
    # ... put proof steps here ...
    p = b (uv) + v

    The verifier's library contains a variety of derived algebraic properties for R and R3 in addition to the vector space axioms for R3. Look over them to see which might be useful.



2.4. Common vector properties, relationships, and operators

We introduce two new operations on vectors (e.g., for vectors u and v): the norm (|| ... ||) and the dot product (u ⋅ v). When interpreting some other structure, such as R2, as a vector space, we must provide definitions for these operators in order to use them.

Definition: The operator ⋅ when applied to two vectors with the same number of components is called the dot product, and is defined for x, y, x', y' ∈ R as:
[x; y] ⋅ [x'; y'] = x*x' + y*y'
The operator || ... || when applied to a vector is called the norm and is defined as:
|| [x; y] || = √(x*x + y*y)
The norm || v || of a vector v represents its length in Euclidean space.
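
The following minimal Python sketch (our own helper names, only for numerical experimentation) implements the two definitions above for vectors in R2.

    import math

    # A minimal sketch (our own helper names) of the dot product and norm
    # definitions above, for vectors in R^2 represented as tuples.

    def dot(v, w):
        # [x; y] . [x'; y'] = x*x' + y*y'
        return v[0] * w[0] + v[1] * w[1]

    def norm(v):
        # ||[x; y]|| = sqrt(x*x + y*y) = sqrt([x; y] . [x; y])
        return math.sqrt(dot(v, v))

    print(dot((3, 4), (1, 2)))   # 3*1 + 4*2 = 11
    print(norm((3, 4)))          # sqrt(9 + 16) = 5.0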

Notice that the dot product is a new, distinct form of multiplication (distinct from multiplication of real numbers, and distinct from scalar multiplication). Also notice that the two operations are related:

Fact: The following equation is true for all x, y ∈ R:
|| [x; y] || = √(x*x + y*y) = √([x; y] ⋅ [x; y])

These operations can be shown to have various algebraic properties.

Fact: The dot product is commutative. Notice that below, the dot product is a real number.

∀ a, b, c, d ∈ R,
[a; b] ⋅ [c; d] = ac + bd = ca + db = [c; d] ⋅ [a; b]


We also introduce several vector properties. Some deal with a single vector; some deal with two vectors; and some deal with three vectors.

Definition: The list below summarizes the definitions of several vector properties and relationships and how they are related in some cases to vector operators and associated algebraic properties. For the R2 cases, write u = [x; y], v = [x'; y'], and w = [x''; y''].

  v has length s:   ||v|| = s, or √(v ⋅ v) = s.   For R2: ||v|| = √(x'*x' + y'*y') = √([x'; y'] ⋅ [x'; y']).
  v is a unit vector:   ||v|| = 1, or v ⋅ v = 1.   For R2: 1 = ||v|| = √(x'*x' + y'*y'), and equivalently 1 = v ⋅ v = x'*x' + y'*y'.
  u and v are linearly dependent (u and v are collinear):   ∃ a ∈ R, a ⋅ u = v.   For R2: y/x = y'/x' (the vectors have the same slope).
  u and v are linearly independent:   ∀ a ∈ R, a ⋅ u ≠ v, or equivalently, not (∃ a ∈ R, a ⋅ u = v).   For R2: y/x ≠ y'/x' (the vectors have different slopes).
  u and v are orthogonal:   u ⋅ v = 0.   For R2: y/x = -x'/y'.
  w is a projection of v onto u:   w = (v ⋅ (u/||u||)) ⋅ u/||u||.
  d is the (Euclidean) distance between u and v:   d = ||u - v|| = ||v - u||.   For R2: d = √((x - x')² + (y - y')²).
  L is the unique line parallel to v ∈ R2:   L = { a ⋅ v | a ∈ R } = { p | ∃ a ∈ R, p = a ⋅ v }.   For R2: L = { [x; y]   |   y = m ⋅ x } where m = y'/x'.
  L is the unique line orthogonal to v ∈ R2:   L = { w | v ⋅ w = 0 }.
  L is the unique line defined by the two points u ∈ R2 and v ∈ R2:   L = { a ⋅ (u - v) + u | a ∈ R } = { p | ∃ a ∈ R, p = a ⋅ (u - v) + u }.
  P is the unique plane orthogonal to v ∈ R3:   P = { w | v ⋅ w = 0 }.
  P is the unique plane of linear combinations of v, w ∈ R3, where v and w are linearly independent:   P = { a ⋅ v + b ⋅ w | a ∈ R, b ∈ R }.
  w is a linear combination of u and v:   ∃ a, b ∈ R, w = a ⋅ u + b ⋅ v.
  {u, v, w} are linearly independent:   not (u is a linear combination of v and w), and not (v is a linear combination of u and w), and not (w is a linear combination of u and v).

In R2, we can derive the linear independence of [x;y] and [x';y'] from the orthogonality of [x;y] and [x';y'] using a proof by contradiction.

Fact: If [x;y] and [x';y'] are orthogonal (and have nonzero length), then they are linearly independent.

Suppose that [x; y] and [x'; y'] are both orthogonal and linearly dependent. Since they are linearly dependent, we have that there exists an a ∈ R such that:
[x'; y'] = a ⋅ [x; y]
x' = a ⋅ x
y' = a ⋅ y
Since they are orthogonal, we have that:
[x; y] ⋅ [x'; y'] = 0
[x; y] ⋅ [a ⋅ x; a ⋅ y] = 0
x ⋅ (a ⋅ x) + y ⋅ (a ⋅ y) = 0
a ⋅ (x² + y²) = 0
x² + y² = 0
x² = −y²
x²/y² = −1
(x/y)² = −1
x/y = √(−1)
No real numbers y and x satisfy the above equation, so we must have introduced a contradiction by supposing that [x; y] and [x'; y'] are not linearly independent. Thus, they must be linearly independent.

Also, notice that [0; 0] is the only solution to x² + y² = 0. Thus, all vectors in R2 are linearly dependent, linearly independent, and orthogonal with [0; 0].

Fact: If v ∈ R2 is a vector that has nonzero length, then we know that the vector v/||v||: (1) is linearly dependent with v, and (2) is a unit vector.

  1. Because v is a vector with nonzero length, ||v|| is a nonzero real number, and so 1/||v|| is a real number. By our definition of linear dependence, (1/||v||) ⋅ v = v/||v|| is linearly dependent with v because 1/||v|| exists.
  2. Let v = [x; y]. We have that:
     1/||v|| = 1/||[x; y]|| = 1/√(x² + y²)
     Then, we have that:
     (1/√(x² + y²)) ⋅ [x; y] = [x/√(x² + y²); y/√(x² + y²)]
     If we take the norm of the above vector, we have:
     || [x/√(x² + y²); y/√(x² + y²)] || = √((x/√(x² + y²))² + (y/√(x² + y²))²) = √(x²/(x² + y²) + y²/(x² + y²)) = √((x² + y²)/(x² + y²)) = √(1) = 1
     Thus, || v/||v|| || = 1, so v/||v|| is a unit vector.
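
A small Python sketch (the names are ours, assuming the norm definition above) that normalizes a vector and confirms numerically that the result has length 1.

    import math

    # A minimal sketch (names are ours) showing that v/||v|| is a unit vector
    # whenever v has nonzero length, as argued above.

    def norm(v):
        return math.sqrt(sum(c * c for c in v))

    def normalize(v):
        # (1/||v||) . v; only defined when ||v|| is nonzero
        n = norm(v)
        return tuple(c / n for c in v)

    u = normalize((5.0, 12.0))
    print(u)          # approximately (0.3846, 0.9231), i.e., [5/13; 12/13]
    print(norm(u))    # 1.0 (up to floating-point rounding)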

Example: List all the unit vectors that are linearly dependent with [5; 12].

It is sufficient to solve for s ∈ R, x ∈ R, and y ∈ R that satisfy the following equations:
s ⋅ [x; y] = [5; 12]
||[x; y]|| = 1
One approach is to write x and y in terms of s and then solve for s using the second equation.

Example: Solve the following three problems.
  1. Solve the following equation for x ∈ R:   ||[x; 2 ⋅ √(2)]|| = 3
  2. Solve the following for x ∈ R and y ∈ R:   [x; y] has length 10, and [x; y] and [2; 0] are linearly dependent.
  3. Determine whether the vectors [2; 3; 0] and [2; 0; 1] are linearly dependent or linearly independent.

In the case of vectors in R2, the properties of linear dependence and orthogonality can be defined in terms of the slopes of the vectors involved. However, note that these definitions in terms of slope are a special case for R2. The more general definitions (in terms of scalar multiplication and dot product, respectively) apply to any vectors in a vector space. Thus, we can derive the slope definitions from the more general definitions.

Fact: If two vectors [x; y] ∈ R2 and [x'; y'] ∈ R2 are linearly dependent, then x/y = x'/y'. If the two vectors are linearly dependent, then there exists a scalar s ∈ R such that:

s ⋅ [x'; y'] = [x; y]
[s ⋅ x'; s ⋅ y'] = [x; y]
s ⋅ x' = x
s ⋅ y' = y
(s ⋅ x')/(s ⋅ y') = x/y
x'/y' = x/y

Fact: If two vectors [x; y] ∈ R2 and [x'; y'] ∈ R2 are orthogonal, then x/y = -y'/x':

[x; y] ⋅ [x'; y'] = 0
x ⋅ x' + y ⋅ y' = 0
x ⋅ x' = -y ⋅ y'
x = -y ⋅ y'/x'
x/y = -y'/x'

Example: List all unit vectors orthogonal to [4; −3].

It is sufficient to solve for x ∈ R and y ∈ R that satisfy the following equations:
[x; y] ⋅ [4; −3] = 0
||(x, y)|| = 1
We can solve the above for the two vectors by first solving for both possible values of x, then finding the corresponding values for y:
4 x − 3 y = 0
y = (4/3) x
√(x² + y²) = 1
√(x² + ((4/3) x)²) = 1
√((9/9) x² + (16/9) x²) = 1
√((25/9) x²) = 1
± (5/3) x = 1
x = ± 3/5
Thus, the two vectors are:
[x; y] ∈ { [3/5; 4/5], [−3/5; −4/5] }

Orthogonal projections of vectors have a variety of interpretations and applications; one simple interpretation of the projection of a vector v onto a vector w is the shadow of vector v on w with respect to a light source that is orthogonal to w.

Example: Consider the following vectors, where u, v ∈ R2 and u is a unit vector:
v = [x; y]
u = [1; 0]
What is the orthogonal projection of v onto u? It is simply the x component of v, which is x. Thus:
[x; 0] is the projection of v onto u.
Notice that we can obtain the length of the orthogonal projection using the dot product:
v ⋅ u = [x; y] ⋅ [1; 0] = x
Since u is a unit vector, we can then simply multiply u by the scalar x to obtain the actual projection:
(v ⋅ u) ⋅ u = ([x; y] ⋅ [1; 0]) ⋅ [1; 0] = x ⋅ [1; 0] = [x; 0]

The above example can be generalized to an arbitrary vector v and unit vector u; then, it can be generalized to any vector u by using the fact that u/||u|| is always a unit vector.

Example: Compute the orthogonal projection of v onto u where:
v = [9; 2]
u = [3; 4]

We can apply the formula:
||u|| = √(3² + 4²) = √(9 + 16) = 5
u/||u|| = [3/5; 4/5]
(v ⋅ (u/||u||)) ⋅ (u/||u||) = ([9; 2] ⋅ [3/5; 4/5]) ⋅ [3/5; 4/5] = (27/5 + 8/5) ⋅ [3/5; 4/5] = (35/5) ⋅ [3/5; 4/5] = 7 ⋅ [3/5; 4/5] = [21/5; 28/5]

Other tools available online can be used to perform and check computations such as the above.
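
For instance, the projection formula used above can be checked numerically with a short Python sketch (the helper names are ours):

    import math

    # A minimal sketch (helper names are ours) of the projection formula
    # (v . (u/||u||)) . (u/||u||) applied to the example above.

    def dot(v, w):
        return sum(a * b for a, b in zip(v, w))

    def scale(s, v):
        return tuple(s * c for c in v)

    def project(v, u):
        # orthogonal projection of v onto u (u must have nonzero length)
        unit = scale(1 / math.sqrt(dot(u, u)), u)
        return scale(dot(v, unit), unit)

    print(project((9, 2), (3, 4)))   # approximately (4.2, 5.6), i.e., [21/5; 28/5]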

Example: Solve the problems below.

  1. Compute the orthogonal projection of v onto u where u is a unit vector and:
     v = [4; 2]
     u = [1/√(2); 1/√(2)]
     We can apply the formula:
     (v ⋅ (u/||u||)) ⋅ (u/||u||) = ([4; 2] ⋅ [1/√(2); 1/√(2)]) ⋅ [1/√(2); 1/√(2)] = (6/√(2)) ⋅ [1/√(2); 1/√(2)] = [3; 3]

  2. Compute the projection of v onto w where:
     v = [13 ⋅ 3; 13 ⋅ 2]
     w = [5; 12]
     We can apply the formula:
     ||w|| = √(5² + 12²) = √(25 + 144) = 13
     w/||w|| = [5/13; 12/13]
     (v ⋅ (w/||w||)) ⋅ (w/||w||) = ([13 ⋅ 3; 13 ⋅ 2] ⋅ [5/13; 12/13]) ⋅ [5/13; 12/13] = (15 + 24) ⋅ [5/13; 12/13] = 39 ⋅ [5/13; 12/13] = [15; 36]

Fact: Given the points u = [x1,y1] and v = [x2,y2], we can find the equation of the line between these two points in the form y = mx + b.

We recall the definition for a line L defined by two points:
L = { p | ∃ a ∈ R, p = a ⋅ (u - v) + u }.
Thus, if [x; y] is on the line, we have:
[x; y] = a ⋅ ([x1; y1] - [x2; y2]) + [x1; y1].
This implies the following system of equations (one from the x components in the above, and one from the y components):
x = a ⋅ (x1 - x2) + x1
y = a ⋅ (y1 - y2) + y1
If we solve for a in terms of x, we can recover a single equation for the line:
a = (x - x1)/(x1 - x2)
y = ((x - x1)/(x1 - x2)) ⋅ (y1 - y2) + y1
y = ((y1 - y2)/(x1 - x2)) ⋅ (x - x1) + y1
Notice that we can set m = (y1 - y2)/(x1 - x2) because that is exactly the slope of the line between [x1; y1] and [x2; y2].
y = m ⋅ (x - x1) + y1
y = m ⋅ x - m ⋅ x1 + y1

We see that we can set b = - m ⋅ x1 + y1.
m = (y1 - y2)/(x1 - x2)
b = - m ⋅ x1 + y1
y = m ⋅ x + b
L = { [x; y]   |   y = m ⋅ x + b }
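
A minimal Python sketch (our own names, assuming x1 ≠ x2) that computes m and b from two points using the formulas derived above; the sample points [19; 7] and [1; -5] reappear in an example in Section 2.5.

    # A minimal sketch (assumes x1 != x2) computing the slope m and intercept b
    # derived above from two points u = [x1; y1] and v = [x2; y2].

    def line_through(u, v):
        (x1, y1), (x2, y2) = u, v
        m = (y1 - y2) / (x1 - x2)
        b = -m * x1 + y1
        return m, b

    m, b = line_through((19, 7), (1, -5))
    print(m, b)        # 0.666..., -5.666..., i.e., y = (2/3)x - 17/3
    print(m * 7 + b)   # approximately -1: the point [7; -1] lies on this line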

2.5. Solving common problems involving vector algebra

We review the operations and properties of vectors introduced in this section by considering several example problems.

Example: Are the vectors [2; 1] and [3; 2] linearly independent?

There are at least two ways we can proceed in checking pairwise linear independence. Both involve checking if the vectors are linearly dependent. If they are linearly dependent, then they cannot be linearly independent. If they are not linearly dependent, they must be linearly independent.

We can compare the slopes; we see they are different, so they must be linearly independent.

1/2 ≠ 2/3

We can also use the definition of linear dependence. If they are linearly dependent, then we know that
∃ a ∈ R,   a ⋅ [2; 1] = [3; 2]
Does such an a exist? We try to solve for it:
a ⋅ [2; 1] = [3; 2]
2 ⋅ a = 3
a = 2 (from the second components, 1 ⋅ a = 2)
4 = 3 (substituting a = 2 into 2 ⋅ a = 3)

Since we derive a contradiction, there is no such a, so the two vectors are not linearly dependent, which means they are linearly independent.

Example: Given v = [15; 20], list the vectors that are orthogonal to v, but of the same length as v.

The two constraints on the vectors [x; y] we seek are:

[15; 20] ⋅ [x; y] = 0
||[15; 20]|| = ||[x; y]||

We take the first constraint and solve for x in terms of y.

[15; 20] ⋅ [x; y] = 0
15 ⋅ x + 20 ⋅ y = 0
x = (-20/15) ⋅ y

We now plug this into the second equation.

||[15; 20]|| = ||[(-20/15) ⋅ y; y]||
√(15² + 20²) = √((400/225) ⋅ y² + y²)
√(625) = √((625/225) ⋅ y²)
25 = ± (25/15) ⋅ y
15 = ± y
y = ± 15

Thus, the vectors are [-20; 15] and [20; -15].

Example: Given constants a, b, c ∈ R, find a vector orthogonal to the plane P defined by:
P = { [x; y; z]   |   a ⋅ (x + y + z) + b ⋅ (y + z) + c ⋅ z = 0 }.
We only need to rewrite the equation defining the plane in a more familiar form. For any [x; y; z] ∈ P, we know that:
a ⋅ (x + y + z) + b ⋅ (y + z) + c ⋅ z = 0
a ⋅ x + (a + b) ⋅ y + (a + b + c) ⋅ z = 0
[a; a + b; a + b + c] ⋅ [x; y; z] = 0
In order to be orthogonal to a plane, a vector must be orthogonal to all vectors [x; y; z] on that plane. Since all vectors on the plane are orthogonal to [a; a + b; a + b + c] by the above argument, [a; a + b; a + b + c] is such a vector.

Example: Define the line L that is orthogonal to the vector [a; b] but also crosses [a; b] (i.e., [a; b] falls on the line).

We know that the line must be parallel to the vector that is orthogonal to [a; b]. The line L0 crossing [0; 0] that is orthogonal to [a; b] is defined as:
L0 = { [x'; y']   |   [a; b] ⋅ [x'; y'] = 0 }.
However, we need the line to also cross the point [a; b]. This is easily accomplished by adding the vector [a; b] to all the points on the orthogonal line going through [0; 0] (as defined above). Thus, we have:
L = { [x'; y'] + [a; b]   |   [a; b] ⋅ [x'; y'] = 0 }.
If we want to find points [x; y] on the line directly without the intermediate term [x'; y'], we can solve for [x'; y'] in terms of [x; y]:
[x; y] = [x'; y'] + [a; b]
[x'; y'] = [x; y] − [a; b]
We can then substitute to obtain a more direct definition of L (in terms of a constraint on the vectors [x; y] in L):
L = { [x'; y'] + [a; b]   |   [a; b] ⋅ [x'; y'] = 0 }
  = { ([x; y] − [a; b]) + [a; b]   |   [a; b] ⋅ ([x; y] − [a; b]) = 0 }
  = { [x; y]   |   [a; b] ⋅ ([x; y] − [a; b]) = 0 }

Example: Is [7; −1] on the line defined by the points u = [19; 7] and v = [1; −5]?

To solve this problem, we recall the definition for a line defined by two points:

{ p | ∃ a ∈ R, p = a ⋅ (u - v) + u }.

Thus, we want to know if [7; -1] is in the set defined as above. This can only occur if there exists an a such that [7; -1] = a ⋅ (u - v) + u; equivalently, since a ⋅ (u - v) + v describes the same line, we can check whether there exists an a such that [7; -1] = a ⋅ (u - v) + v.

We solve for a; if no solution exists, then the point [7; -1] is not on the line L. If an a exists, then it is. In this case, a = 1/3 is a solution to both equations, so [7; -1] is on the line.

[7; −1] = a ⋅ ([19; 7] − [1; −5]) + [1; −5] = [19 ⋅ a − 1 ⋅ a; 7 ⋅ a + 5 ⋅ a] + [1; −5] = [18 ⋅ a + 1; 12 ⋅ a − 5]
7 = 18 ⋅ a + 1
−1 = 12 ⋅ a − 5
a = 1/3

Example: Define the line that is orthogonal to the vector [3; 5] but also crosses [3; 5] (i.e., [3; 5] falls on the line).

We know that the line must be parallel to the vector that is orthogonal to [3; 5]. The line crossing [0; 0] that is orthogonal to [3; 5] is defined as:
{ [x; y]   |   [3; 5] ⋅ [x; y] = 0 }
We can rewrite the above in a more familiar form:
5 ⋅ y = − 3 ⋅ x
y = − (3/5) ⋅ x
However, we need the line to also cross the point [3; 5]. This is easily accomplished by adding the vector [3; 5] to all the points on the orthogonal line going through [0; 0] (as defined above). Thus, we have:
{ [3; 5] + [x; y]   |   [3; 5] ⋅ [x; y] = 0 }
We can rewrite the above by defining:
[x'; y'] = [3; 5] + [x; y]
[x'; y'] − [3; 5] = [x; y]
Now we substitute [x; y] with [x'; y'] in our definition of the line:
{ [x'; y']   |   [3; 5] ⋅ ([x'; y'] − [3; 5]) = 0 }
We can now write the equation for the line:
3 ⋅ (x' − 3) + 5 ⋅ (y' − 5) = 0
5 ⋅ (y' − 5) = -3 ⋅ (x' − 3)
5 ⋅ y' − 25 = -3 ⋅ (x' − 3)
5 ⋅ y' = -3 ⋅ (x' − 3) + 25
y' = (−3/5) ⋅ (x' − 3) + 5
y' = (−3/5) ⋅ x' + 9/5 + 5
Notice that, alternatively, we could have instead simply found the y-intercept b ∈ R of the following equation using the point [3; 5]:
y = − (3/5) ⋅ x + b
5 = − (3/5) ⋅ 3 + b
5 + 9/5 = b
b = 5 + 9/5

Example: Is [8; -6] a linear combination of the vectors [19; 7] and [1; -5]?

We recall the definition of a linear combination and instantiate it for this example:

∃ a, b ∈ R,   [8; -6] = a ⋅ [19; 7] + b ⋅ [1; −5].

Thus, if we can solve for a and b, then [8; −6] is indeed a linear combination.

[8; −6] = a ⋅ [19; 7] + b ⋅ [1; −5] = [19 ⋅ a; 7 ⋅ a] + [1 ⋅ b; −5 ⋅ b]
8 = 19 ⋅ a + b
b = -19 ⋅ a + 8
−6 = 7 ⋅ a − 5 ⋅ b
−6 = 7 ⋅ a − 5 ⋅ (−19 ⋅ a + 8)
−6 = 7 ⋅ a + 95 ⋅ a − 40
34 = 102 ⋅ a
a = 1/3
b = −19/3 + 8 = 5/3

Example: Are the vectors V = {[2; 0; 4; 0], [6; 0; 4; 3], [1; 7; 4; 3]} linearly independent?

The definition of linear independence for a set of vectors requires that none of the vectors being considered can be expressed as a linear combination of the others. Thus, we must check all pairs of vectors against the remaining third vector not in the pair. There are 3!/(2! ⋅ 1!) = 3 such pairs:

can [2; 0; 4; 0] be expressed as a combination of [6; 0; 4; 3] and [1; 7; 4; 3]?
can [6; 0; 4; 3] be expressed as a combination of [2; 0; 4; 0] and [1; 7; 4; 3]?
can [1; 7; 4; 3] be expressed as a combination of [6; 0; 4; 3] and [2; 0; 4; 0]?

For each combination, we can check whether the third vector is a linear combination of the pair. If it is, we can stop and say that the three vectors are not linearly independent. If it is not, we must continue checking all the pairs. If all the pairs are incapable of being scaled and added in some way to obtain the third vector, then the three vectors are linearly independent.

not ({u, v, w} are linearly independent)   iff   (∃ a, b ∈ R, ∃ u, v, w ∈ V,   w = a ⋅ u + b ⋅ v)

Notice that this is an example of a general logical rule:

not (∀ x ∈ S, p)   iff   (∃ x ∈ S, not p)

We check each possible combination and find that we derive a contradiction if we assume the third vector is a linear combination of the other two.

[2; 0; 4; 0] = [6; 0; 4; 3] ⋅ a + [1; 7; 4; 3] ⋅ b
0 = 7 ⋅ b
b = 0
2 = 6 ⋅ a + 1 ⋅ b
a = 2/6
0 = 3 ⋅ (2/6) + 3 ⋅ 0
0 = 1

[6; 0; 4; 3] = [2; 0; 4; 0] ⋅ a + [1; 7; 4; 3] ⋅ b
3 = 3 ⋅ b
b = 1
6 = 2 ⋅ a + b
6 = 2 ⋅ a + 1
5 = 2 ⋅ a
a = 5/2
4 = 4 ⋅ (5/2) + 4
0 = 10

[1; 7; 4; 3] = [2; 0; 4; 0] ⋅ a + [6; 0; 4; 3] ⋅ b
7 = 0 ⋅ a + 0 ⋅ b
7 = 0

Thus, V is a set of linearly independent vectors.
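
As a numerical cross-check (using the numpy library, which is outside the scope of these notes), a collection of vectors is linearly independent exactly when the matrix having those vectors as its columns has rank equal to the number of vectors:

    import numpy as np

    # A numerical cross-check (using numpy, which is outside the scope of these
    # notes): a collection of vectors is linearly independent exactly when the
    # matrix whose columns are those vectors has rank equal to the number of vectors.

    V = np.array([[2, 6, 1],
                  [0, 0, 7],
                  [4, 4, 4],
                  [0, 3, 3]])   # columns are the three vectors from the example

    print(np.linalg.matrix_rank(V) == V.shape[1])   # True, so V is linearly independent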

2.6. Using vectors and linear combinations to model systems

In the introduction we noted that in this course, we would define a symbolic language for working with a certain collection of idealized mathematical objects that can be used to model system states of real-world systems in an abstract way. Because we are considering a particular collection of objects (vectors, planes, spaces, and their relationships), it is natural to ask what kinds of problems are well-suited for such a representation (and also what problems are not well-suited).

What situations and associated problems can be modelled using vectors and related operators and properties? Problems involving concrete objects that have a position, velocity, direction, geometric shape, and relationships between these (particularly in two or three dimensions) are natural candidates. For example, we have seen that it is possible to compute the projection of one vector onto another. However, these are just a particular example of a more general family of problems that can be studied using vectors and their associated operations and properties.

A vector of real numbers can be used to represent an object or collection of objects with some fixed number of characteristics (each corresponding to a dimension or component of the vector) where each characteristic has a range of possible values. This range could be a set of magnitudes (e.g., position, cost, mass), a discrete collection of states (e.g., absence or presence of an edge in a graph), or even a set of relationships (e.g., for every cow, there are four cow legs; for every $1 invested, there is a return of $0.02). Thus, vectors are well-suited for representing problems involving many instances of objects where all the objects have the same set of possible characteristics along the same set of linear dimensions. In these instances, many vector operations also have natural interpretations. For example, addition and scalar multiplication (i.e., linear combinations) typically correspond to the aggregation of a property across multiple instances or copies of objects with various properties (e.g., the total mass of a collection of objects).

In order to illustrate how vectors and linear combinations of vectors might be used in applications, we recall the notion of a system. A system is any physical or abstract phenomenon, or observations of a phenomenon, that we characterize as a collection of real values along one or more dimensions. A system state or state a system is a particular collection of real values. For example, if a system is represented by R4, states of that system are represented by individual vectors in R4 (note that not all vectors need to correspond to valid or possible states; see the examples below).

Example: Consider the following system: a barn with cows and chickens inside it. There are several dimensions along which an observer might be able to measure this system (we assume that the observer has such poor eyesight that chickens and cows are indistinguishable from above): the number of chickens, the number of cows, the number of visible heads, and the number of visible legs. Notice that we could represent a particular state of this system using a vector in R4. However, notice also that many vectors in R4 will not correspond to any system that one would expect to observe. Usually, the number of heads and legs in the entire system will be a linear combination of two vectors: the heads and legs per chicken, and the heads and legs per cow:

 
[1 head; 2 legs] ⋅ (x chickens) + [1 head; 4 legs] ⋅ (y cows) = [x + y heads; 2x + 4y legs]

Given this relationship, it may be possible to derive some characteristics of the system given only partial information. Consider the following problem: how many chickens and cows are in a barn if 8 heads and 26 legs were observed?

[1 head; 2 legs] ⋅ (x chickens) + [1 head; 4 legs] ⋅ (y cows) = [8 heads; 26 legs]

Notice that a linear combination of vectors can be viewed as a translation from a vector describing one set of dimensions to a vector describing another set of dimensions. Many problems might exist in which the values are known along one set of dimensions and unknown along another set.
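
A minimal Python sketch (plain arithmetic, not part of the notes) that solves the heads-and-legs question above by elimination:

    # A minimal sketch solving the heads-and-legs question above:
    #   x + y = 8  (heads),   2*x + 4*y = 26  (legs)

    heads, legs = 8, 26
    y = (legs - 2 * heads) / 2   # subtract twice the first equation from the second
    x = heads - y
    print(x, y)                  # 3.0 chickens and 5.0 cows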

Example: We can restate the example from the introduction using linear combinations. Suppose we have $12,000 and two investment opportunities: A has an annual return of 10%, and B has an annual return of 20%. How much should we invest in each opportunity to get $1800 over one year?

The two investment opportunities are two-dimensional vectors representing the rate of return on a dollar:

[1 dollar; 0.1 interest]   and   [1 dollar; 0.2 interest]

The problem is to find what combination of the two opportunities would yield the desired observation of the entire system:

[1 dollar; 0.1 dollars of interest] ⋅ (x dollars in opportunity A) + [1 dollar; 0.2 dollars of interest] ⋅ (y dollars in opportunity B) = [12,000 dollars; 1800 dollars of interest]

Next, we consider a problem with discrete dimensions.

Example: Suppose there is a network of streets and intersections and the city wants to set up cameras at some of the intersections. Cameras can only see as far as the next intersection. Suppose there are five streets (#1, #2, #3, #4, #5) and four intersections (A, B, C, and D) at which cameras can be placed, and the city wants to make sure a camera can see every street while not using any cameras redundantly (i.e., two cameras should not film the same street).

Vectors in R5 can represent which streets are covered by a camera. A fixed collection of vectors, one for each intersection, can represent what streets a camera can see from each intersection. Thus, the system's dimensions are the five streets #1, #2, #3, #4, and #5 (one component per street, recording whether that street is observed).

Four fixed vectors will be used to represent which streets are adjacent to which intersections:

 
[0; 1; 0; 0; 1],   [1; 0; 1; 1; 0],   [1; 1; 0; 0; 0],   [1; 0; 0; 1; 1]

Placing the cameras in the way required is possible if there is an integer solution to the following equation involving a linear combination of the above vectors:

[0; 1; 0; 0; 1] ⋅ a + [1; 0; 1; 1; 0] ⋅ b + [1; 1; 0; 0; 0] ⋅ c + [1; 0; 0; 1; 1] ⋅ d = [1; 1; 1; 1; 1]
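
Since each intersection either receives a camera or does not, one way to look for a solution is to try every assignment of 0 or 1 to a, b, c, and d; the following Python sketch (our own code, not part of the notes) does exactly that:

    from itertools import product

    # A minimal sketch: try every 0/1 assignment to a, b, c, d (camera or no
    # camera at each intersection) and keep those that cover every street
    # exactly once, as the equation above requires.

    A = (0, 1, 0, 0, 1)   # streets visible from intersection A
    B = (1, 0, 1, 1, 0)   # ... from intersection B
    C = (1, 1, 0, 0, 0)   # ... from intersection C
    D = (1, 0, 0, 1, 1)   # ... from intersection D

    for a, b, c, d in product((0, 1), repeat=4):
        total = tuple(a*A[i] + b*B[i] + c*C[i] + d*D[i] for i in range(5))
        if total == (1, 1, 1, 1, 1):
            print(a, b, c, d)   # prints 1 1 0 0: cameras at intersections A and B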
 

Example: Suppose a chemist wants to model a chemical reaction. The dimensions of the system might be the number of carbon atoms, the number of hydrogen atoms, and the number of oxygen atoms.

Individual vectors in R3 can be used to represent how many atoms of each element are in each type of molecule being considered:

C3H8: [3; 8; 0],     O2: [0; 0; 2],     CO2: [1; 0; 2],     H2O: [0; 2; 1]

Suppose we know that the number of atoms in a system may never change during a reaction, and that some quantity of C3H8 and O2 can react to yield only CO2 and H2O. How many molecules of each compound will be involved in the reaction? That is the solution to the following linear combination.

 
[3; 8; 0] ⋅ x1 + [0; 0; 2] ⋅ x2 = [1; 0; 2] ⋅ x3 + [0; 2; 1] ⋅ x4

For example, suppose we start with 1000 molecules of C3H8 and 5000 molecules of O2. If both of these compounds react to produce only CO2 and H2O, how many molecules of each will be produced?

[3; 8; 0] ⋅ 1000 + [0; 0; 2] ⋅ 5000 = [1; 0; 2] ⋅ a + [0; 2; 1] ⋅ b
[3000; 8000; 10000] = [1; 0; 2] ⋅ a + [0; 2; 1] ⋅ b
3000 = 1 ⋅ a + 0 ⋅ b
a = 3000
8000 = 0 ⋅ a + 2 ⋅ b
b = 4000
10000 = 2 ⋅ 3000 + 1 ⋅ 4000

Thus, a = 3000 molecules of CO2 and b = 4000 molecules of H2O will be produced.

The notion of a linear combination of vectors is common and can be used to mathematically model a wide variety of problems. Thus, a more concise notation for linear combinations of vectors would be valuable. This is one of the issues addressed by introducing a new type of term: the matrix.


3. Matrices

In this section we introduce a new kind of term: a matrix. We define some operations on matrices and some properties of matrices, and we describe some of the possible ways to interpret and use matrices.

3.1. Matrices and multiplication of a vector by a matrix

Matrices are a concise way to represent and reason about linear combinations and linear independence of vectors (e.g., setwise linear independence might be difficult to check using an exhaustive approach), reinterpretations of systems using different dimensions, and so on. One way to interpret a matrix is as a collection of vectors. Multiplying a matrix by a vector corresponds to computing a linear combination of that collection of vectors.

As an example, we consider the case for linear combinations of two vectors. The two scalars in the linear combination can be interpreted as a 2-component vector. We can then put the two vectors together into a single object in our notation, which we call a matrix.

Definition:

 
[a, b; c, d] ⋅ [x; y] = [a; c] ⋅ x + [b; d] ⋅ y

(Here [a, b; c, d] denotes the 2-by-2 matrix whose first row contains a and b and whose second row contains c and d.)

Notice that the columns of the matrix are the vectors used in our linear combination. Notice also that we can now reinterpret the result of multiplying a vector by a matrix as taking the dot product of each of the matrix rows with the vector.

Fact:

 
[a, b; c, d] ⋅ [x; y] = [a; c] ⋅ x + [b; d] ⋅ y = [a ⋅ x; c ⋅ x] + [b ⋅ y; d ⋅ y] = [a ⋅ x + b ⋅ y; c ⋅ x + d ⋅ y] = [(a, b) ⋅ (x, y); (c, d) ⋅ (x, y)]

Because a matrix is just two column vectors, we can naturally extend this definition of multiplication to cases in which we have multiplication of a matrix by a matrix: we simply multiply each column of the second matrix by the first matrix and write down each of the resulting columns in the result matrix.

Definition:

 
[a, b; c, d] ⋅ [x, s; y, t] = [(a, b) ⋅ (x, y), (a, b) ⋅ (s, t); (c, d) ⋅ (x, y), (c, d) ⋅ (s, t)]
These definitions can be extended naturally to vectors and matrices with more than two components. If we denote using Mij the entry in a matrix M found in the ith row and jth column, then we can define the result of matrix multiplication of two matrices A and B as a matrix M such that

Mij   =   (ith row of A) ⋅ (jth column of B).
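
A minimal Python sketch (our own helper names) of the two definitions above: matrix-vector multiplication and matrix-matrix multiplication for small matrices represented as lists of rows.

    # A minimal sketch (our own helper names) of the definitions above, with
    # small matrices represented as lists of rows.

    def matvec(M, v):
        # each entry is the dot product of a row of M with v; by the fact above,
        # this equals the linear combination of the columns of M weighted by v
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    def matmul(A, B):
        # entry (i, j) is the dot product of the i-th row of A with the j-th column of B
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    M = [[1, 2],
         [3, 4]]
    print(matvec(M, [5, 6]))             # [17, 39]
    print(matmul(M, [[0, 1], [1, 0]]))   # [[2, 1], [4, 3]]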

3.2. Interpreting matrices as tables of relationships and transformations of system states

We saw how vectors can be used to represent system states. We can extend this interpretation to matrices and use matrices to represent relationships between the dimensions of system states. This allows us to interpret matrices as transformations between system states (or partial observations of system states).

If we again consider the example system involving a barn of cows and chickens, we can reinterpret the matrix as a table of relationships between dimensions. Each entry in the table has a unit indicating the relationship it represents.

         | chickens        | cows
heads    | 1 head/chicken  | 1 head/cow
legs     | 2 legs/chicken  | 4 legs/cow

Notice that the column labels in this table represent the dimensions of an "input" vector that could be multiplied by this matrix, and the row labels specify the dimensions of the "output" vector that is obtained as a result. That is, if we multiply using the above matrix a vector that specifies the number of chickens and the number of cows in a system state, we will get a vector that specifies the number of heads and legs we can observe in that system.

 
[1 head/chicken, 1 head/cow; 2 legs/chicken, 4 legs/cow] ⋅ [x chickens; y cows] = [x + y heads; 2x + 4y legs]

Thus, we can interpret multiplication by this matrix as a function that takes system states that only specify the number of chickens and cows, and converts them to system states that only specify the number of heads and legs:

(# chickens × # cows) → (# heads × # legs)

3.3. Interpreting multiplication of matrices as composition of system state transformations

Example: Suppose that we have a system with the following dimensions: the number of wind farms, the number of coal plants, units of power, units of cost, the number of single family homes (s.f.h.), and the number of businesses.

Two different matrices might specify the relationships between some combinations of dimensions in this system.

M1 = [100 power/wind farm, 250 power/coal plant; 50 cost/wind farm, 400 cost/coal plant]
M2 = [4 s.f.h./unit power, -2 s.f.h./unit cost; 1 business/unit power, 0 businesses/unit cost]

Notice that these two matrices both represent transformations between partial system state descriptions.

T1: (# wind farms × # coal plants) → (units of power × units of cost)
T2: (units of power × units of cost) → (# s.f.h. × # businesses)

Notice that because the interpretion of a result obtained using the first transformation matches the interpretation of an input to the second, we can compose these transformations to obtain a third transformation.

T2 o T1: (# wind farms × # coal plants) → (# s.f.h. × # businesses)

This corresponds to multiplying the two matrices to obtain a third matrix. Notice that the units of the resulting matrix can be computed using a process that should be familiar to you from earlier coursework.

 
[4 s.f.h./unit power, -2 s.f.h./unit cost; 1 business/unit power, 0 businesses/unit cost] ⋅ [100 power/wind farm, 250 power/coal plant; 50 cost/wind farm, 400 cost/coal plant] = [300 s.f.h./wind farm, 200 s.f.h./coal plant; 100 businesses/wind farm, 250 businesses/coal plant]

Thus, given some vector describing the number of wind farms and coal plants in the system, we can multiply that vector by (M2 ⋅ M1) to compute the number of single family homes and businesses we expect to find in that system.
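
A minimal Python sketch reproducing the composition above (reusing the matmul and matvec helpers sketched in Section 3.1); the input state of 2 wind farms and 1 coal plant is an arbitrary sample value, not part of the original example.

    # A minimal sketch reproducing the composed transformation above; the input
    # state (2 wind farms, 1 coal plant) is an arbitrary sample value of ours.

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    M1 = [[100, 250],   # units of power per wind farm and per coal plant
          [50, 400]]    # units of cost per wind farm and per coal plant
    M2 = [[4, -2],      # s.f.h. per unit of power and per unit of cost
          [1, 0]]       # businesses per unit of power and per unit of cost

    M = matmul(M2, M1)
    print(M)                    # [[300, 200], [100, 250]], as computed above
    print(matvec(M, [2, 1]))    # [800, 450]: expected s.f.h. and businesses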

Example: Suppose that a gram of gold costs $50, while a gram of silver costs $10. After purchasing some of each, you have spent $350 on 15 grams of material. How many grams of each commodity have you purchased?
  1. Write down four dimensions describing this system.
  2. Define a matrix A that can be used to convert a description of a system state that specifies only the amount of gold and silver purchased into a description of the system state that specifies only the cost and total weight.
  3. Write down a matrix equation describing this problem and solve it to find the solution.
  4. Define a matrix B such that for any description of a system state v that specifies only the total weight and amount spent, Bv is a description of that system state that specifies the amount of gold and silver in that system state.

Example: Suppose we characterize our system in terms of two dimensions. In this example, instead of studying the relationships between dimensions, we want to study how the dimensions change (possibly in an interdependent way) over time. For example, the following matrix might capture how the system state evolves from year to year:
M = [2 s.f.h. in year 1/s.f.h. in year 0,  -1 s.f.h. in year 1/p.p. in year 0;  0 p.p. in year 1/s.f.h. in year 0,  1 p.p. in year 1/p.p. in year 0]
We can parameterize the above in terms of a year t ∈ R. Notice that the matrix above is just a special case of the matrix below (when t = 0):
M = [2 s.f.h. in year t+1/s.f.h. in year t,  -1 s.f.h. in year t+1/p.p. in year t;  0 p.p. in year t+1/s.f.h. in year t,  1 p.p. in year t+1/p.p. in year t]
What does M ⋅ M represent? If we consider the units, we have:
M ⋅ M = [4 s.f.h. in year t+2/s.f.h. in year t,  -3 s.f.h. in year t+2/p.p. in year t;  0 p.p. in year t+2/s.f.h. in year t,  1 p.p. in year t+2/p.p. in year t]
Suppose v = [x; y] ∈ R2 represents the number of single family homes and factories in a given year. We can then define the number of single family homes and factories after t years as:
M^t ⋅ v = [2, -1; 0, 1]^t ⋅ [x; y]
If we wanted to write the number of single family homes and factories as a function of t ∈ R and an initial state x0, y0 ∈ R, we could nest the dot products as follows and use algebra to simplify:
# s.f.h. in year t = 2 ⋅ (2 ⋅ (2 ⋅ ( ... 2 ⋅ (2 ⋅ x0 - y0) - y0 ... ) - y0) - y0) - y0
                   = 2^t ⋅ x0 - (2^(t-1) + ... + 4 + 2 + 1) ⋅ y0
                   = 2^t ⋅ x0 - (2^t - 1) ⋅ y0
                   = 2^t ⋅ x0 - 2^t ⋅ y0 + y0
# p.p. in year t = 0 ⋅ xt + 1 ⋅ ( ... (0 ⋅ x2 + 1 ⋅ (0 ⋅ x1 + (0 ⋅ x0 + 1 ⋅ y0))) ... )
                 = y0
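
A minimal Python sketch (x0, y0, and t are sample values of our own choosing) that iterates the matrix M for t years and compares the result with the closed form derived above.

    # A minimal sketch iterating the year-to-year matrix M on an initial state
    # [x0; y0] and comparing with the closed form 2^t * x0 - (2^t - 1) * y0.

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    M = [[2, -1],
         [0, 1]]
    x0, y0, t = 7, 3, 5

    state = [x0, y0]
    for _ in range(t):
        state = matvec(M, state)

    print(state)                               # [131, 3]
    print([2**t * x0 - (2**t - 1) * y0, y0])   # the closed form gives the same values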



3.4. Assignment #2: Using Vectors and Matrices

In this assignment you will solve problems involving vectors and matrices. Please submit a single file a2.* containing your solutions. The file extension may be anything you choose; please ask before submitting a file in an exotic or obscure file format (plain text is preferred).

  1. Consider the line L in R2 that passes through the following two vectors:
    u
    =
     
    9
    7
     
    v
    =
     
    1
    -5
     
    1. Using set notation, define L in terms of u and v.
      Below are some possible definitions:
      L
      =
      { a (u - v) + u  |  a R }
      L
      =
      { a (u - v) + v  |  a R }
      L
      =
      { a (
       
      9
      7
       
      -
       
      1
      -5
       
      ) +
       
      1
      -5
       
       |  a R }
      L
      =
      {
       
      x
      y
       
       |  y = (3/2) x - 13/2 }
      Solutions are acceptable as long as the definition is correct set notation for the set of vectors representing the line.
    2. Determine whether the following vectors are on the line L:
       
      25
      31
       
      ,
       
      7
      −1
       
      ,
       
      −3
      −11
       
      The quickest way is to derive the y = (3/2) x − 13/2 equation for points on the line and use it to check each point:
      31
      =
      (3/2) ⋅ 25 − 13/2
      ,
      so
       
      25
      31
       
      L
      −1
      (3/2) ⋅ 7 − 13/2
      ,
      so
       
      7
      −1
       
      L
      −11
      =
      (3/2) ⋅ (−3) − 13/2
      ,
      so
       
      −3
      −11
       
      L
    3. Find all unit vectors orthogonal to L.
      It is sufficient to find the two unit vectors on the line through the origin that is perpendicular to L. A vector that is parallel to L can be obtained using u - v:
       
      9
      7
       
       
      1
      -5
       
      =
       
      8
      12
       
      The unit vectors must be orthogonal to the above vector. Thus, we have the following two equations for the unit vectors [ x ; y ] that we want to find:
       
      8
      12
       
       
      x
      y
       
      =
      0
      ||
       
      x
      y
       
      ||
      =
      1
      The two vectors that satisfy the above are:
       
      x
      y
       
      {
       
      3/√(13)
      −2/√(13)
       
      ,
       
      −3/√(13)
      2/√(13)
       
      }
    4. Define the line L that passes through the origin and is orthogonal to L using an equation of the form:
      y
      =
      mx + b
      In other words, find m, b R such that:
      L
      =
      {
       
      x
      y
       
        |   y = mx + b }
      We can use one of the unit vectors we found in part (c) above to determine the slope of L. Since the line passes through the origin, b = 0.
      m
      =
      (2/√(13))  /  (−3/√(13))
      =
      −2/3
      b
      =
      0
  2. For each of the following collections of vectors, determine whether the collection is linearly independent. If it is, explain why; if not, show that one of the vectors is a linear combination of the other vectors in the collection.

    1. The following two vectors in R2:
       
      2
      1
       
      ,
       
      3
      2
       
      The following equation has no solution a R since assuming there is a solution leads to a contradiction:
      a
       
      2
      1
       
      =
       
      3
      2
       
      a ⋅ 2
      =
      3
      a ⋅ 1
      =
      2
      a
      =
      2
      2 ⋅ 2
      3
Thus, the two vectors do not satisfy the definition of linear dependence, so they are linearly independent and the collection is linearly independent.
    2. The following three vectors in R2:
       
      -2
      1
       
      ,
       
      1
      3
       
      ,
       
      2
      4
       
      We must check if any of the three vectors might be a linear combination of the other two. In fact, there is at least one such vector, so the collection of vectors is not linearly independent:
       
      -2
      1
       
      =
      5 ⋅
       
      1
      3
       
      + (−7/2) ⋅
       
      2
      4
       
Notice that we could have concluded this immediately without finding the above explicit linear combination: the vectors are in R2, and at least two of the vectors in the collection are linearly independent, so those two vectors span R2 and can be used in a linear combination to obtain the third.
    3. The following three vectors in R4:
       
      2
      0
      4
      0
       
      ,
       
      6
      0
      4
      3
       
      ,
       
      1
      7
      4
      3
       
      We must check if any of the three vectors might be a linear combination of the other two. We check if the first can be a linear combination of the second and third; since we arrive at a contradiction below, it cannot.
       
      2
      0
      4
      0
       
      =
      a
       
      6
      0
      4
      3
       
      + b
       
      1
      7
      4
      3
       
      2
      =
      6 a + b
      0
      =
      7 b
      b
      =
      0
      a
      =
      1/3
      4
      4/3 + 0
      Next, we check if the second vector can be a linear combination of the first and third.
       
      6
      0
      4
      3
       
      =
      a
       
      2
      0
      4
      0
       
      + b
       
      1
      7
      4
      3
       
      6
      =
      2 a + b
      0
      =
      7 b
      b
      =
      0
      a
      =
      3
      4
      12 + 0
      Finally, we check if the third can be a linear combination of the first and second:
       
      1
      7
      4
      3
       
      =
      a
       
      6
      0
      4
      3
       
      + b
       
      2
      0
      4
      0
       
      7
      =
      0 ⋅ a + 0 ⋅ b
      7
      0
      Since no vector is a linear combination of the other two, the collection is linearly independent.
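A collection like this can also be checked mechanically by stacking the vectors as the columns of a matrix and computing its rank. The short Python sketch below (using numpy) is only an illustrative aside and is not part of the assignment's expected format; the variable name vectors is chosen here for clarity.

import numpy as np

vectors = np.array([[2, 6, 1],
                    [0, 0, 7],
                    [4, 4, 4],
                    [0, 3, 3]])   # each column is one of the three vectors in R^4

# The columns are linearly independent exactly when the rank equals
# the number of columns.
print(np.linalg.matrix_rank(vectors) == vectors.shape[1])  # True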
  3. Consider the following vectors in R3:
    u
    =
     
    3
    7
    9
     
    v
    =
     
    2
    −5
    3
     
    w
    =
     
    3
    4
    −3
     

    1. Compute the orthogonal projection of w onto the vector:
       
      1/√(12)
      1/√(12)
      1/√(12)
       
      We use the formula for an orthogonal projection. First, we compute the norm of the vector onto which we are projecting.
      ||
       
      1/√(12)
      1/√(12)
      1/√(12)
       
      ||
      =
      √(3/12)
      Notice that 2 ⋅ √(3/12) = √((4 ⋅ 3)/12) = 1. Thus, to obtain a unit vector, we simply multiply the vector onto which we are projecting by the scalar 2.
      2 ⋅
       
      1/√(12)
      1/√(12)
      1/√(12)
       
      =
       
      2/√(12)
      2/√(12)
      2/√(12)
       
      To project w onto the above unit vector, we can use the orthogonal projection formula:
      (
       
      3
      4
      -3
       
       
      2/√(12)
      2/√(12)
      2/√(12)
       
      ) ⋅
       
      2/√(12)
      2/√(12)
      2/√(12)
       
      =
      (8/√(12)) ⋅
       
      2/√(12)
      2/√(12)
      2/√(12)
       
      =
       
      16/12
      16/12
      16/12
       
      =
       
      4/3
      4/3
      4/3
       
    2. Determine whether each of the following points lies on the plane perpendicular to u:
       
      3
      0
      -1
       
      ,
       
      7
      -9
      -5
       
      ,
       
      1
      1
      -1
       
      If a vector lies on the plane perpendicular to u, then it must be orthogonal to u. Thus, we compute the dot product of each of these vectors with u:
       
      3
      0
      -1
       
       
      3
      7
      9
       
      =
      9 + 0 - 9
      =
      0, so this vector is on the plane;
       
      7
      -9
      -5
       
       
      3
      7
      9
       
      =
      21 - 63 - 45
      0, so this vector is not on the plane;
       
      1
      1
      -1
       
       
      3
      7
      9
       
      =
      3 + 7 - 9
      0, so this vector is not on the plane.
    3. Extra credit: Given the vector v and w, let P be the plane orthogonal to v, and let Q be the plane orthogonal to w. Find a vector p R3 that lies on the line in R3 along which P and Q intersect, and provide a definition of the line.
      We have the following:
      P
      =
      { p  |  pv = 0 }
      Q
      =
      { p  |  pw = 0 }
The line along which P and Q intersect is the set of vectors that are orthogonal to both v and w.
      PQ
      =
      { p  |  pv = 0  and  pw = 0}
      We can expand the two equations pv = 0 and pw = 0 in order to obtain a system of equations that restricts the possible components of p = [ x ; y ; z ]:
       
      x
      y
      z
       
       
      2
      −5
      3
       
      =
      0
       
      x
      y
      z
       
       
      3
      4
      −3
       
      =
      0
      2 x − 5 y + 3 z
      =
      0
      3 x + 4 y − 3 z
      =
      0
5 x − y
      =
      0
      y
      =
      5 x
      z
      =
      (23/3) x
      Given the above, we can introduce a scalar a and write:
      x
      =
      a
      y
      =
      5 a
      z
      =
      (23/3) a
      PQ
      =
      { a
       
      1
      5
      (23/3)
       
       |  a R }
  4. You decide to drive the 2800 miles from New York to Los Angeles in a hybrid vehicle. A hybrid vehicle has two modes: using only the electric motor and battery, it can travel 1 mile on 3 units of battery power; using only the internal combustion engine, it can travel 1 mile on 0.1 liters of gas (about 37 mpg) while also charging the battery with 1 unit of battery power. At the end of your trip, you have 1400 fewer units of battery power than you did when you began the trip. How much gasoline did you use (in liters)?

    You should define a system with the following dimensions:

    • net change in the total units of battery power;
    • total liters of gasoline used;
    • total number of miles travelled;
    • number of miles travelled using the electric motor and battery;
    • number of miles travelled using the engine.

    You should define a matrix M R3×2 to characterize this system. Then, write down an equation containing that matrix (and three variables in R), and solve it to obtain the quantity of gasoline.

    M
     
    x
    y
     
    =
     
    ?
    ?
    ?
     
    One possible solution is to solve the matrix equation below for n, the number of liters of gasoline used:
     
    -3 battery power/mile on battery 1 battery power/mile on engine
    1 miles/mile on battery 1 miles/mile on engine
    0 liters of gas/mile on battery -0.1 liters of gas/mile on engine
     
     
    x miles on battery
    y miles on engine
     
    =
     
    -1400 battery power
    2800 miles
    n liters of gas
     
The above equation can be rewritten as:
    −3 ⋅ x + 1 ⋅ y
    =
    -1400
    x + y
    =
    2800
    0 ⋅ x - 0.1 ⋅ y
    =
    n
We then have that 175 liters of gasoline were used (the negative value of n below represents the change in the quantity of gasoline):
    x
    =
    2800 − y
    −3 ( 2800 − y) + y
    =
    −1400
    −8400 + 4 y
    =
    −1400
    4 y
    =
    7000
    y
    =
    1750
    n
    =
    − 0.1 ⋅ 1750
    n
    =
    −175
    Notice that it is possible to solve an equation for a 2 × 2 matrix first, and then use x and y to determine n. For example:
     
    -3 1
    1 1
     
     
    x
    y
     
    =
     
    -1400
    2800
     
    It is also possible to immediately notice that 10 ⋅ n is the number of miles travelled on n liters of gasoline. Then, the following matrix equation can be set up and solved:
     
    -3 battery power/mile on battery 10 battery power/liter of gas
    1 miles/mile on battery 10 miles/liter of gas
     
     
    x miles on battery
    n liters of gas
     
    =
     
    -1400 battery power
    2800 miles
     
  5. Suppose we create a very simple system for modelling how predators and prey interact in a closed environment. Our system has only two dimensions: the number of prey animals, and the number of predators. We want to model how the state of the system changes from one generation to the next.

    If there are x predators and y prey animals in a given generation, in the next generation the following will be the case:

    • all predators already in the system will stay in the system;
    • all prey animals already in the system will stay in the system;
    • for every prey animal, two new prey animals are introduced into the system;
    • for every predator, two prey animals are removed from the system;
    • we ignore any other factors that might affect the state (e.g., natural death or starvation).

    1. Specify explicitly a matrix T in R2×2 that takes a description of the system state in one generation and produces the state of the system during the next generation. Note: you may simply enter the matrix on its own line for this part of the problem, but you must also use it in the remaining three parts below.

      The following matrix reflects the constraints imposed by the description of a state transformation:
      T
      =
       
      1 # predators at time t+1 / predator at time t 0 # predators at time t+1 / prey at time t
      -2 # prey at time t+1 / predator at time t 3 # prey at time t+1 / prey at time t
       
    2. Show that the number of predators does not change from one generation to the next.

      T
       
      x
      y
       
      =
       
      x
      y'
       
      It is sufficient to derive that the number of predators does not change after a transformation is applied. Let x be the number of predators before the transformation is applied, and let x' be the number of predators after it is applied. Then we have that:
       
      1 0
      -2 3
       
       
      x
      y
       
      =
       
      x'
      y'
       
      1 ⋅ x + 0 ⋅ y
      =
      x'
      x
      =
      x'
    3. Determine what initial state [x0; y0] is such that there is no change in the state from one generation to another.

      T
       
      x0
      y0
       
      =
       
      x0
      y0
       

      It is sufficient to solve the equation for x0 and y0:
      T
       
      x0
      y0
       
      =
       
      x0
      y0
       
       
      1 0
      -2 3
       
       
      x0
      y0
       
      =
       
      x0
      y0
       
      1 ⋅ x0 + 0 ⋅ y0
      =
      x0
      -2 ⋅ x0 + 3 ⋅ y0
      =
      y0
      The first equation of real numbers above imposes no constraints on x0 or y0. Thus, any vector that satisfies the last equation would be a solution. All vectors on the line defined by this equation are solutions:
      -2 ⋅ x0 + 3 ⋅ y0
      =
      y0
      -2 ⋅ x0
      =
      -2 y0
      y0
      =
      x0
    4. Suppose that in the fourth generation of an instance of this system (that is, after the transformation has been applied three times), we have 2 predators and 56 prey animals. How many predators and prey animals were in the system in the first generation (before any transformations were applied)? Let [x; y] represent the state of the system in the first generation. Set up a matrix equation that involves [x; y] and matrix multiplication, and solve it to obtain the answer.

      It is sufficient to solve the following equation for x and y:
      T3
       
      x
      y
       
      =
       
      2
      56
       
      We solve it below:
       
      1 0
      -2 3
       
       
      1 0
      -2 3
       
       
      1 0
      -2 3
       
       
      x
      y
       
      =
       
      2
      56
       
       
      1 0
      -8 9
       
       
      1 0
      -2 3
       
       
      x
      y
       
      =
       
      2
      56
       
       
      1 0
      -26 27
       
       
      x
      y
       
      =
       
      2
      56
       
      x − 0 ⋅ y
      =
      2
      x
      =
      2
      −26 x + 27 y
      =
      56
      27 y
      =
      108
      y
      =
      4
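The computation above can be spot-checked numerically. The sketch below uses Python with numpy (an illustrative aside, not in the course's machine-verifiable format); the names T and w are chosen here only for readability.

import numpy as np

# Transformation from one generation to the next.
T = np.array([[1, 0],
              [-2, 3]])

# State observed in the fourth generation (after three applications of T).
w = np.array([2, 56])

# Solve T^3 v = w for the first-generation state v.
T3 = np.linalg.matrix_power(T, 3)
v = np.linalg.solve(T3, w)

print(T3)  # [[  1   0]
           #  [-26  27]]
print(v)   # [2. 4.]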


3.5. Matrix operations and their interpretations

The following table summarizes the matrix operations that we are considering in this course.

term definition restrictions general properties
M1 + M2 component-wise matrices must have
the same number of rows
and columns
commutative,
associative,
has identity (matrix with all 0 components),
has inverse (multiply matrix by -1),
scalar multiplication is distributive
M1M2 row-column-wise
dot products
columns in M1 = rows in M2
rows in M1M2 = rows in M1
columns in M1M2 = columns in M2
associative,
has identity I (1s in diagonal and 0s elsewhere),
distributive over matrix addition,
not commutative in general,
no inverse in general
M -1 columns in M = rows in M
matrix is invertible
M -1M = MM -1 = I

The following tables list some high-level intuitions about how matrix operations can be understood.

level of
abstraction
interpretations of multiplication
of a vector by a matrix
applications transformation of
system states
extraction of information
about system states
computing properties of
combinations or aggregations
of objects (or system states)
conversion of system
state observations
from one set of dimensions
to another
geometry "moving" vectors in
a space (stretching,
skewing, rotating,
reflecting)
projecting vectors taking a linear combination
of two vectors
reinterpreting vector notation
as referring to a collection
of non-canonical vectors

level of
abstraction
interpretations of multiplication of two matrices
applications composition of system state
transformations or conversions
geometry sequencing of motions of vectors within
a space (stretching, skewing, rotating,
reflecting)

level of
abstraction
invertible matrix singular matrix
applications reversible transformation
of system states
extraction of complete
information uniquely determining
a system state
irreversible transformation
of system states
extraction of incomplete
information about
a system state
geometry reversible transformation or motion
of vectors in a space
projection onto a strict subset of
a set of vectors (space)
symbolic reversible transformation of
information numerically encoded in matrix
(example of such information: system of
linear equations encoded as matrix)
irreversible/"lossy" transformation of
information encoded in matrix

Suppose we interpret multiplication of a vector by a matrix M as a function from vectors to vectors:

f(v) = M v.

Fact: Notice that for any M, if f(v) = M v then f is always a function because M v has only one possible result (i.e., matrix multiplication is deterministic) for a given M and v.

Fact: If f(v) = M v, then f is invertible if M is an invertible matrix. The inverse of f is then defined to be:

f -1(v) = M -1 v.

Notice that f -1 is a function because M -1 v only has one possible result. Notice that f -1 is the inverse of f because

f -1(f(v)) = M -1 M v = I v = v.

Fact: If the columns of a matrix M are linearly dependent and f(v) = M v, then M cannot have an inverse. We consider the case in which M R2×2. Suppose we have that
M
=
 
a b
c d
 
.
If the columns of M are linearly dependent, then we know that there is some s R such that
s
 
a
c
 
=
 
b
d
 
.
This means that we can rewrite M:
M
=
 
a sa
c s c
 
.
Since matrix multiplication can be interpreted as taking a linear combination of the column vectors, this means that for x,y R,
 
a sa
c sc
 
 
x
y
 
=
 
a
c
 
x +
 
sa
sc
 
y
=
(x + s y)
 
a
c
 
But this means that for any two vectors [x;y] and [x';y'], if x + sy = x' + sy' then multiplying by M will lead to the same result. Thus, f is a function that takes two different vector arguments and maps them to the same result. If we interpret f as a relation and take its inverse f -1, f -1 cannot be a function.

Thus, M cannot have an inverse (if it did, then f -1 would be a function).

Fact: If the columns of a matrix M are linearly independent and f(v) = M v, then M has an inverse. We consider the case in which M R2×2. Suppose we have that
M
=
 
a b
c d
 
.
If the columns of M are linearly independent, then we know that a d - b c ≠ 0:
a/c
≠
b/d
a d
≠
b c
a d - b c
≠
0
Suppose we pick the following M -1:
M -1
=
(1/(a d - b c)) ⋅
 
d -b
-c a
 
.
Then we have that:
M -1M
=
(1/(a d - b c)) ⋅
 
d -b
-c a
 
 
a b
c d
 
=
(1/(a d - b c)) ⋅
 
a d - b c bd - bd
ac - ac a d - b c
 
=
 
1 0
0 1
 
=
I
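The formula above can also be verified symbolically with a computer algebra system. The following Python sketch uses sympy and is only an illustrative aside (not the course's verifier); it checks that the proposed inverse works on both sides whenever a d − b c is nonzero.

from sympy import symbols, Matrix, simplify, eye

a, b, c, d = symbols('a b c d')

M = Matrix([[a, b], [c, d]])
M_inv = (1 / (a*d - b*c)) * Matrix([[d, -b], [-c, a]])

# Both products simplify entrywise to the 2x2 identity matrix.
print((M_inv * M).applyfunc(simplify) == eye(2))  # True
print((M * M_inv).applyfunc(simplify) == eye(2))  # True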

Example: Solve the following problems.
  1. Determine which of the following matrices are not invertible:
     
    2 -7
    -4 14
     
    ,
     
    0 0
    0 1
     
    ,
     
    2 3
    0 1
     

    The columns of the first matrix are linearly dependent, so it is not invertible. The first column of the second matrix can be obtained by multiplying the second column by 0, so the two columns of that matrix are linearly dependent; thus, it is not invertible. For the third matrix, the following equation has no solution:
     
    2
    0
     
    =
    s
     
    3
    1
     
    Thus, the third matrix is invertible. It is also possible to determine this by computing the determinant for each matrix.

  2. The matrix below is not invertible, and the following equation is true. What is the matrix? List all four of its components (real numbers).
     
    a b
    b c
     
     
    1
    2
     
    =
     
    5
    10
     
    Since the matrix is not invertible, it must be that its determinant is 0. Thus, we have the following system of equations:
    a + 2 b
    =
    5
    b + 2 c
    =
    10
    a cb2
    =
    0
    If we solve the above, we get:
    a
    =
    1
    b
    =
    2
    c
    =
    4

3.6. Matrix properties

The following are subsets of Rn×n that are of interest in this course because they correspond to transformations and systems that have desirable or useful properties. For some of these sets of matrices, matrix multiplication and inversion have properties that they do not in general.

subset of Rn×n definition closed under
matrix
multiplication
properties of
matrix multiplication
inversion
identity matrix i,j
    Mij = 1 if i=j, 0 otherwise
closed commutative,
associative,
distributive with addition,
has identity
has inverse (itself);
closed under inversion
elementary matrix can be obtained via an
elementary row operation
from I:
  • add nonzero multiple
    of one row of the matrix
    to another row
  • multiply a row by a
    nonzero scalar
  • swap two rows of the
    matrix
Note: the third is a combination
of the first two operations.
associative,
distributive with addition,
have identity
have inverses;
closed under inversion
scalar matrices s R, ∀ i,j
    Mij = s if i=j, 0 otherwise
closed commutative,
associative,
distributive with addition,
have identity
nonzero members
have inverses;
closed under inversion
diagonal matrices i,j
    Mij R if i=j, 0 otherwise
closed associative,
distributive with addition,
have identity
nonzero members
have inverses;
closed under inversion
matrices with
constant diagonal
i,j
    Mii = Mjj
associative,
distributive with addition,
have identity
symmetric matrices i,j
    Mij = Mji
associative,
distributive with addition,
have identity
symmetric matrices
with constant diagonal
i,j
    Mii = Mjj and Mij = Mji
closed commutative,
associative,
distributive with addition,
have identity
upper triangular matrices i,j
    Mij = 0 if i > j
closed associative,
distributive with addition,
have identity
not invertible in general;
closed under inversion
when invertible
lower triangular matrices i,j
    Mij = 0 if i < j
closed associative,
distributive with addition,
have identity
not invertible in general;
closed under inversion
when invertible
invertible matrices M -1 s.t. M -1 M = M M -1 = I closed associative,
distributive with addition,
have identity
nonzero members
have inverses;
closed under inversion
square matrices all of Rn×n closed associative,
distributive with addition,
have identity

Two facts presented in the above table are worth noting.

Fact: Suppose A is invertible. Then the inverse of A -1 is A, because A A -1 = I. Thus, (A -1) -1 = A.

Fact: If A and B are invertible, so is AB. That is, invertible matrices are closed under matrix multiplication. We can show this by using the associativity of matrix multiplication. Since A and B are invertible, there exist B -1 and A -1 such that:

(B -1 A -1) A B
=
(B -1 A -1) A B
=
B -1 (A -1 A) B
=
B -1 I B
=
B -1 B
=
I.

Thus, since there exists a matrix (B -1 A -1) such that (B -1 A -1) A B = I, (B -1 A -1) is the inverse of AB.

Example: Given an invertible upper triangular matrix M R2×2, show that M -1 is also upper triangular. Hint: write out the matrix M explicitly.

Suppose we have the following upper triangular matrix:

M
=
 
x y
0 z
 

If it is invertible, then there exists an inverse such that:

 
x y
0 z
 
 
a b
c d
 
=
 
1 0
0 1
 

This implies that

xa + yc
=
1
xb + yd
=
0
zc
=
0
zd
=
1

Because M is invertible, we know that z ≠ 0. Since zc = 0, this means that c = 0. Thus, we have that the inverse is upper triangular.

Alternatively, we can observe that the lower-left entry of M is 0, so the formula for the inverse of a matrix in R2×2 yields a lower-left entry equal to −0/(det  M) = 0.

Example: In terms of a, b R where a ≠ 0 and b ≠ 0, compute the inverse of:

 
a 0
0 b
 
 
a 0
0 b
 
 
a 0
0 b
 
 
a 0
0 b
 

While we could perform the instances of matrix multiplication step-by-step and then invert the result (either by solving an equation or using the formula for the inverse of a matrix in R2×2), it's easier to recall that diagonal matrices behave in a manner that is very similar to the real numbers. Thus, the above product is equal to

 
a4 0
0 b4
 
,

and its inverse simply has the multiplicative inverses of the two diagonal entries as its diagonal entries:

 
1/a4 0
0 1/b4
 
.

Another fact not in the table is also worth noting.

Fact: Any product of a finite number of elementary row matrices is invertible. This fact follows from the fact that all elementary matrices are invertible, and that the set of invertible matrices is closed under multiplication.

Given the last fact, we might ask whether the opposite is true: are all invertible matrices the product of a finite number of elementary matrices? The answer is yes, as we will see further below.

3.7. Solving the equation M v = w for M with various properties

Recall that for v,w Rn and M Rn×n, an equation of the following form can represent a system of equations:

M v = w

Notice that if M is invertible, we can solve for v by multiplying both sides by M -1. More generally, if M is a member of some of the sets in the above table, we can find straightforward algorithms for solving such an equation for v.

M is ... algorithm to solve M v = w for v
the identity matrix w is the solution
an elementary matrix perform a row operation on M to obtain I;
perform the same operation on w
a scalar matrix divide the components of w by the scalar
a diagonal matrix divide each component of w by the
corresponding matrix component
an upper triangular matrix start with the last entry in v, which is easily obtained;
move backwards through v, filling in the values by
substituting the already known variables
a lower triangular matrix start with the first entry in v, which is easily obtained;
move forward through v, filling in the values by
substituting the already known variables
product of a lower triangular matrix
and an upper triangular matrix
combine the algorithms for upper and lower triangular
matrices in sequence (see example below)
an invertible matrix compute the inverse and multiply w by it

Example: We consider an equation M v = w where M is a diagonal matrix (the identity matrix and all scalar matrices are also diagonal matrices).

 
4 0 0
0 3 0
0 0 5
 
 
x
y
z
 
=
 
2
9
10
 
4x
=
2
3y
=
9
5z
=
10
x
=
1/2
y
=
3
z
=
2

Example: We consider an equation M v = w where M is a lower triangular matrix.

 
4 0 0
2 4 0
4 3 5
 
 
x
y
z
 
=
 
2
9
18
 
4x
=
2
2x + 4y
=
9
4x + 3y + 5z
=
18
x
=
1/2
y
=
2
z
=
2

Fact: Suppose that M = L U where L is lower triangular and U is upper triangular. How can we solve M v = w?

First, note that because matrix multiplication is associative, we have

M v
=
(L U) v
=
L (U v)

We introduce a new vector for the intermediate result U v, which we call u. Now, we have a system of matrix equations.

U v
=
u
L u
=
w

We first solve for u using the algorithm for lower triangular matrices, then we solve for v using the algorithm for upper triangular matrices.
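The two-step procedure (forward substitution with the lower triangular factor, then backward substitution with the upper triangular factor) can be written out directly. The Python sketch below is an illustrative aside, not part of the course material; the helper names forward_substitution and backward_substitution are chosen here for clarity, and the matrices are the same ones used in the worked example that follows.

import numpy as np

def forward_substitution(L, w):
    # Solve L u = w for lower triangular L, starting with the first entry.
    n = len(w)
    u = np.zeros(n)
    for i in range(n):
        u[i] = (w[i] - L[i, :i] @ u[:i]) / L[i, i]
    return u

def backward_substitution(U, u):
    # Solve U v = u for upper triangular U, starting with the last entry.
    n = len(u)
    v = np.zeros(n)
    for i in reversed(range(n)):
        v[i] = (u[i] - U[i, i+1:] @ v[i+1:]) / U[i, i]
    return v

L = np.array([[-2.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, -1.0, -1.0]])
U = np.array([[3.0, 0.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
w = np.array([-8.0, 12.0, -1.0])

u = forward_substitution(L, w)   # [4. 4. 1.]
v = backward_substitution(U, u)  # [1. 2. 1.]
print(u, v)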

Example: Solve the following equation for x,y,z R:
 
−2 0 0
1 2 0
1 −1 −1
 
 
3 0 1
0 1 2
0 0 1
 
 
x
y
z
 
=
 
−8
12
−1
 

We first divide the problem into two steps using the intermediate vector [ a ; b ; c ]:
 
-2 0 0
1 2 0
1 −1 −1
 
 
a
b
c
 
=
 
−8
12
−1
 
 
3 0 1
0 1 2
0 0 1
 
 
x
y
z
 
=
 
a
b
c
 
 
a
b
c
 
=
 
4
4
1
 
 
x
y
z
 
=
 
1
2
1
 

Example: The inverse of a matrix in R2×2 can be computed as follows.

 
a b
c d
 
 -1
=
 
d/(ad−bc) −b/(ad−bc)
−c/(ad−bc) a/(ad−bc)
 

Thus, if we find that the determinant of a matrix M R2×2 is nonzero, the algorithm for solving M v = w is straightforward. Consider the following example.

 
1 2
3 4
 
 
x
y
 
=
 
5
13
 
 
1 2
3 4
 
 -1
 
1 2
3 4
 
 
x
y
 
=
 
1 2
3 4
 
 -1
 
5
13
 
 
1 0
0 1
 
 
x
y
 
=
 
4/((1⋅4)-(2⋅3)) −2/((1⋅4)-(2⋅3))
−3/((1⋅4)-(2⋅3)) 1/((1⋅4)-(2⋅3))
 
 
5
13
 
 
1 0
0 1
 
 
x
y
 
=
 
−2 1
3/2 −1/2
 
 
5
13
 
 
x
y
 
=
 
3
1
 

3.8. Row echelon form and reduced row echelon form

We define two more properties that a matrix may possess.

M is in row echelon form
  • all nonzero rows are above any rows consisting of all zeroes
  • the first nonzero entry (from the left) of a nonzero row is strictly
    to the right of the first nonzero entry of the row above it
  • all entries in a column below the first nonzero entry in a row are zero
    (the first two conditions imply this)
M is in reduced row echelon form
  • M is in row echelon form
  • the first nonzero entry in every row is 1;
    this 1 entry is the only nonzero entry in its column

We can obtain the reduced row echelon form of a matrix using a sequence of appropriately chosen elementary row operations.

Example: Suppose we want to find the reduced row echelon form of the matrix below. We list the steps of the procedure.

 
1 2 1
-2 -3 1
3 5 0
 
 
1 2 1
0 1 3
3 5 0
 
 
1 2 1
0 1 3
0 -1 -3
 
 
1 2 1
0 1 3
0 0 0
 
 
1 0 -5
0 1 3
0 0 0
 

Fact: For any matrix M, there may be more than one way to reach the reduced row echelon form using elementary row operations. However, it is always possible, and there is exactly one unique reduced row echelon form for that matrix M. We do not prove this result in this course, but fairly short proofs by induction can be found elsewhere. Because the reduced row echelon form of a matrix M is unique, we use the following notation to denote it:

rref M.
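Reduced row echelon forms like the one computed above can be checked with a computer algebra system. The short Python sketch below uses sympy's Matrix.rref and is included only as an illustration (it is not the course's own verification format).

from sympy import Matrix

M = Matrix([[1, 2, 1],
            [-2, -3, 1],
            [3, 5, 0]])

R, pivot_columns = M.rref()
print(R)              # Matrix([[1, 0, -5], [0, 1, 3], [0, 0, 0]])
print(pivot_columns)  # (0, 1)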

Example: Determine which of the following matrices are elementary:

 
1 0 0
0 1 0
 
,
 
0 0 2
0 1 0
1 0 0
 
,
 
1 0 0
0 1 0
0 0 0
 
,
 
1 0 0
0 1 0
-2 0 1
 

The matrices are: (a) not elementary (not square, so not invertible), (b) not elementary (composition of two elementary row operations applied to the identity), (c) not elementary (multiplication of a row by the 0 scalar is not invertible), and (d) elementary (multiple of first row added to last row).

Example: Determine which of the following matrices are in reduced row echelon form:
 
1 0 -2
0 1 4
 
,
 
1 0 0
0 1 0
0 1 3
0 0 1
 
,
 
1 0
0 1
0 0
 
,
 
1 0
0 1
0 1
0 0
 
,
 
0 0 0 1 0
0 0 0 0 1
 

The matrices are: (a) in reduced row echelon form, (b) not in reduced row echelon form, (c) in reduced row echelon form, (d) not in reduced row echelon form, and (e) in reduced row echelon form.

Example: Suppose the matrix below is in reduced row echelon form. Solve for a, b R.
 
1 a    0
0 b    1
0 0 1 − a
 

Without loss of generality, we can focus on four possibilities: a is either zero or nonzero, and b is either zero or nonzero. If b were nonzero, the leading entry of the second row would have to be a 1 in the second column and the only nonzero entry in that column, forcing a = 0; but then the third column would contain the 1 in the second row and the nonzero entry 1 − a = 1 in the third row, contradicting the definition. Thus, b = 0, and the leading entry of the second row is the 1 in the third column. The third row can then have no nonzero entry in that column (or any earlier column), so 1 − a = 0 and the third row is a row of all zeroes at the bottom of the matrix.

Thus, a = 1 and b = 0.

Question: For a given matrix in reduced row echelon form, how many different matrices can be reduced to it (one or more than one)?

Fact: It is always true that rref M = rref (rref M). Thus, the rref operation is idempotent.

Fact: If rref M is I (the identity matrix), then M is invertible. This is because I is an elementary matrix, and the row operations used to obtain rref M from M can be represented as a product of elementary (and thus, invertible) matrices E1 ⋅ ... ⋅ En. Thus, we have:

E1 ⋅ ... ⋅ EnM
=
I
(En -1 ⋅ ... ⋅ E1 -1) ⋅ E1 ⋅ ... ⋅ EnM
=
(En -1 ⋅ ... ⋅ E1 -1) ⋅ I
M
=
(En -1 ⋅ ... ⋅ E1 -1) ⋅ I

Since elementary matrices and I are invertible, and invertible matrices are closed under matrix multiplication, M must be invertible, too.

Fact: A matrix M Rn×n is not invertible iff the bottom row of rref M has all zeroes. This is because when all rows of rref M Rn×n have at least one nonzero value, rref M must be the identity matrix (try putting a nonzero value on the bottom row, and see what the definition of reduced row echelon form implies about the rest of the matrix). Since rref M = I, this implies that M must then be invertible by the fact immediately above this one.
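This characterization (invertible iff the bottom row of rref M is not all zeroes, iff rref M = I) is easy to test on concrete matrices. The Python/sympy sketch below is only an illustration; the example matrices are taken from earlier in this section.

from sympy import Matrix, eye

A = Matrix([[2, -7], [-4, 14]])   # columns are linearly dependent
B = Matrix([[2, 3], [0, 1]])      # invertible

for M in (A, B):
    R, _ = M.rref()
    bottom_row_zero = all(entry == 0 for entry in R.row(R.rows - 1))
    print(R == eye(2), bottom_row_zero, M.det() != 0)
# A: False True False
# B: True False True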

Fact: For any invertible matrix M Rn×n, the reduced row echelon form of M is I Rn×n.

We can show this is true using a proof by contradiction. Suppose that M is invertible, but rref MI. We know that rref M can be obtained via a finite number of elementary row operations E1 ⋅ ... ⋅ En:

(E1 ⋅ ... ⋅ En) M = rref M.

If rref M is not I, then the last row of rref M must consist of only zeroes. But because M is invertible, we have:

((E1 ⋅ ... ⋅ En) M) M -1
=
(rref M) ⋅ M -1
E1 ⋅ ... ⋅ En
=
(rref M) ⋅ M -1

Since the product of elementary matrices is invertible, (rref M) ⋅ M -1 is also invertible. But if the last row of rref M consists of only zeroes, then the last row of (rref M) M -1 also contains only zeroes, so it cannot be that (rref M) M -1 is invertible. Thus, we have a contradiction, so our assumption that rref MI is false.

The following table provides an alternative illustration of how the contradiction is derived:

M is invertible rref MI
the matrix M -1 exists the last row of rref M is all zeroes
(E1 ⋅ ... ⋅ En) M = rref M
((E1 ⋅ ... ⋅ En) M) M -1 = (rref M) ⋅ M -1
E1 ⋅ ... ⋅ En = (rref M) ⋅ M -1 the last row of (rref M) ⋅ M -1
is all zeroes
(rref M) ⋅ M -1 is invertible
because it is a product of the
invertible matrices E1, ..., En
(rref M) ⋅ M -1 is not invertible
because multiplication by it is
a many-to-one function

The above result implies the following fact.

Fact: If a matrix M is invertible, it is the product of a finite number of elementary matrices. This is because rref M is the identity, which is an elementary matrix, and M can be reduced via a finite number of invertible row operations to I. Thus, the elementary matrices can be used to generate every possible invertible matrix.

Example: Suppose that for some matrix M R2×2, the following row operations can be applied to M (in the order specified) to obtain the identity matrix I: add the second row to the first row; swap the two rows; multiply the second row by the scalar 1/3. Find the matrix M -1.

We know that performing the three row operations on M will result in I. Thus, we can write out the row operations as three matrices E1, E2, and E3:
E3E2E1M
=
I
 
1 0
0 1/3
 
 
0 1
1 0
 
 
1 1
0 1
 
M
=
I
Thus, we have that:
M -1
=
 
1 0
0 1/3
 
 
0 1
1 0
 
 
1 1
0 1
 
=
 
0 1
1/3 1/3
 

We can also find M:
det  M -1
=
−1/3
M
=
1/(−1/3) ⋅
 
1/3 −1
−1/3 0
 
=
 
−1 3
1 0
 

The following table summarizes the results.

fact justification
(1) {M | M is a finite product of elementary matrices} = {M | rref M = I} I is an elementary matrix;
sequences of row operations
are equivalent to multiplication by
elementary matrices
(2) {M | M is a finite product of elementary matrices} {M | M is invertible} elementary matrices are invertible;
products of invertible matrices are invertible
(3) {M | rref M = I} {M | M is invertible} fact (1) in this table;
fact (2) in this table;
transitivity of equality
(4) {M | M is invertible} {M | rref M = I} proof by contradiction;
non-invertible M implies
rref M has all zeroes in bottom row
(5) {M | M is invertible} = {M | rref M = I} for any sets A,B,
AB and BA
implies A = B
(6) {M | M is a finite product of elementary matrices} = {M | M is invertible} fact (1) in this table;
fact (5) in this table;
transitivity of equality

Given all these results, we can say that the properties for a matrix M in the following table are all equivalent: if any of these is true, then all of them are true. If any of these is false, then all of them are false.

M is invertible
det  M ≠ 0
the columns of M are (setwise) linearly independent
Mv = w has exactly one solution
M is a finite product of elementary matrices

Fact: For matrices M Rn×n, rref M is guaranteed to be upper triangular. Notice that this means that for some finite product of elementary matrices E1 ⋅ ... ⋅ En, it is the case that

M = (E1 ⋅ ... ⋅ En) ⋅ U.

If E1 ,..., En are all lower triangular, then M has an LU decomposition. However, this will not always be the case. But recall that E1 ,..., En are all elementary matrices.

Fact: Given any product of elementary matrices E1 ⋅ ... ⋅ En, it is possible to apply a finite number of elementary swap operations S1 ,..., Sn so that the product L = S1 ⋅ ... ⋅ SnE1 ⋅ ... ⋅ En is lower triangular; then we have:
L
=
S1 ⋅ ... ⋅ SnE1 ⋅ ... ⋅ En
(Sn -1 ⋅ ... ⋅ S1 -1) ⋅ L
=
E1 ⋅ ... ⋅ En
Thus, E1 ⋅ ... ⋅ En can be decomposed into a lower triangular matrix and a product of elementary swap matrices.

Fact: Any product of a finite number of swap elementary matrices S1 ,..., Sn is a permutation matrix.

Fact: Any matrix M can be written as the product of three matrices PLU where P is a permutation matrix, L is a lower-triangular matrix, and U is an upper-triangular matrix, where:
P
=
S1 ⋅ ... ⋅ Sn
L
=
E1 ⋅ ... ⋅ Ek
U
=
rref M
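Numerical libraries compute a closely related factorization. For instance, scipy.linalg.lu factors a matrix as P L U with P a permutation matrix, L lower triangular (with unit diagonal), and U upper triangular; note that in scipy's convention U is simply upper triangular and is generally not rref M. The sketch below is an illustration only, with an arbitrarily chosen matrix.

import numpy as np
from scipy.linalg import lu

M = np.array([[0.0, 2.0],
              [3.0, 4.0]])

# scipy's convention: M = P @ L @ U.
P, L, U = lu(M)
print(P)
print(L)
print(U)
print(np.allclose(P @ L @ U, M))  # True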

Example: Suppose we have a system in which a single object is traveling along one spatial dimension (i.e., in a straight line). The object has a distance travelled, a velocity, and an acceleration; these are the three dimensions of the system: the distance travelled (in meters), the velocity (in meters/second), and the acceleration (in meters/second2).

Consider the following equations that can be used to compute the distance the object travels and its final velocity given its acceleration a R+, initial velocity v0 R+, and the amount of time t R+ it travels.
d
=
0.5 a t2 + v0 t
v
=
a t
Suppose we want to convert a view of the system state that describes the acceleration and velocity at time 0 into a view of the system state that represents the distance travelled and velocity at time t. This conversion operation can be represented using a matrix M.
M
=
 
0.5 t2 dist. at t+1 / accel. at t t dist. at t+1 / vel. at t
t vel. at t+1 / accel. at t 1 vel. at t+1 / vel. at t
 
=
 
0.5 t2 m / (m/s2) t m / (m/s)
t (m/s) /(m/s2) 1 (m/s)/(m/s)
 
=
 
0.5 t2 s2 t s
t s 1 scalar identity
 
This matrix is invertible (its determinant is −0.5 t2, which is nonzero for t > 0). This immediately tells us that it is possible to derive the initial acceleration and velocity given only the amount of time that has elapsed, the distance travelled, and the final velocity.

By computing the inverse of this matrix, we can obtain formulas that allow us to derive the initial velocity and acceleration of an object given how much time has passed, how far it has travelled, and its current velocity.
M -1
=
 
-2/t2 accel. at t / dist. at t+1 2/t accel. at t / vel. at t+1
2/t vel. at t / dist. at t+1 -1 vel. at t / vel. at t+1
 
=
 
-2/t2 (m/s2)/m 2/t (m/s2) / (m/s)
2/t (m/s) / m -1 (m/s)/(m/s)
 
=
 
-2/t2 1/s2 2/t 1 / s
2/t 1 / s -1 scalar identity
 
The formulas can be obtained by multiplying M -1 by a system state describing the distance travelled and current velocity. This yields:
a
=
-2d/t2 + 2v/t
v0
=
2d/t - v
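A quick numerical check of the conversion matrix and the two recovered formulas is shown below. This Python sketch is illustrative only; the sample values t = 2, a = 3, and v0 = 5 are assumptions chosen just for the check.

import numpy as np

t = 2.0
M = np.array([[0.5 * t**2, t],
              [t, 1.0]])
M_inv = np.array([[-2 / t**2, 2 / t],
                  [2 / t, -1.0]])

print(np.allclose(M @ M_inv, np.eye(2)))  # True

a, v0 = 3.0, 5.0
d, v = M @ np.array([a, v0])          # distance travelled and final velocity
print(-2*d/t**2 + 2*v/t, 2*d/t - v)   # recovers a = 3.0 and v0 = 5.0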
We could have also obtained the above formulas using manipulation of equations of real numbers, or via the corresponding row operations. Let us consider the latter approach:
 
0.5 t2 t
t 1
 
 
a
v0
 
=
 
d
v
 
 
0.5 t2 t
t − (2/t ⋅ 0.5 t2) 1 − (2/tt)
 
 
a
v0
 
=
 
d
vd ⋅ (2/t)
 
 
0.5 t2 t
0 1 − 2
 
 
a
v0
 
=
 
d
v − (2d/t)
 
 
0.5 t2 t
0 1
 
 
a
v0
 
=
 
d
(2d/t) − v
 
 
0.5 t2 − (t ⋅ 0) t − (t ⋅ 1)
0 1
 
 
a
v0
 
=
 
d − (t ⋅ ((2d/t) − v))
(2d/t) − v
 
 
0.5 t2 0
0 1
 
 
a
v0
 
=
 
− d + v t
(2d/t) − v
 
 
1 0
0 1
 
 
a
v0
 
=
 
(2/t2) ⋅ (− d + v t)
(2d / t) − v
 
 
a
v0
 
=
 
−2d/t2 + 2v/t
2d / tv
 

3.9. Matrix transpose

Definition: The transpose of a matrix M Rm×n, denoted M⊤, is defined to be the matrix A Rn×m such that for all i and j, Aij = Mji.

Fact: If a matrix M is scalar, diagonal, or symmetric, M⊤ = M. If a matrix M is upper triangular, M⊤ is lower triangular (and vice versa).

Example: Below are some examples of matrices and their transposes:
 
1 0
0 1
 
⊤
=
 
1 0
0 1
 
 
a b
c d
 
⊤
=
 
a c
b d
 
 
1 2
3 4
5 6
 
⊤
=
 
1 3 5
2 4 6
 
 
1 0 0
2 3 0
4 5 6
 
⊤
=
 
1 2 4
0 3 5
0 0 6
 

Fact: For A,B Rn×n, it is always the case that:
(A⊤)⊤
=
A
(A + B)⊤
=
A⊤ + B⊤
s(B⊤)
=
(sB)⊤

Fact: It is always the case that (AB)⊤ = B⊤ A⊤. If M = AB, then:
(AB)ij
=
ith row of A ⋅ jth column of B
=
ith column of A⊤ ⋅ jth row of B⊤
=
jth row of B⊤ ⋅ ith column of A⊤
=
(B⊤ A⊤)ji.
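This identity is easy to spot-check numerically. The Python sketch below is an illustrative aside; the random matrices and the fixed seed are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3))
B = rng.integers(-5, 5, size=(3, 3))

# (A B) transposed equals B^T A^T, but in general not A^T B^T.
print(np.array_equal((A @ B).T, B.T @ A.T))  # True
print(np.array_equal((A @ B).T, A.T @ B.T))  # usually False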

Example: We consider the following example with matrices in R3× 3. Let a, b, c, x, y, and z be vectors in R3, with vi representing the ith entry in a vector v. Suppose we have the following product of two matrices. Notice that a, b, and c are the rows of the left-hand matrix, and x, y, and z are the columns of the right-hand matrix.
 
a1 a2 a3
b1 b2 b3
c1 c2 c3
 
 
x1 y1 z1
x2 y2 z2
x3 y3 z3
 
=
 
(a1, a2, a3) ⋅ (x1, x2, x3) (a1, a2, a3) ⋅ (y1, y2, y3) (a1, a2, a3) ⋅ (z1, z2, z3)
(b1, b2, b3) ⋅ (x1, x2, x3) (b1, b2, b3) ⋅ (y1, y2, y3) (b1, b2, b3) ⋅ (z1, z2, z3)
(c1, c2, c3) ⋅ (x1, x2, x3) (c1, c2, c3) ⋅ (y1, y2, y3) (c1, c2, c3) ⋅ (z1, z2, z3)
 
=
 
ax ay az
bx by bz
cx cy cz
 
Suppose we take the transpose of both sides of the equation above. Then we would have:
(
 
a1 a2 a3
b1 b2 b3
c1 c2 c3
 
 
x1 y1 z1
x2 y2 z2
x3 y3 z3
 
)⊤
=
(
 
ax ay az
bx by bz
cx cy cz
 
)⊤
=
 
ax bx cx
ay by cy
az bz cz
 
=
 
xa xb xc
ya yb yc
za zb zc
 
=
 
x1 x2 x3
y1 y2 y3
z1 z2 z3
 
 
a1 b1 c1
a2 b2 c2
a3 b3 c3
 
=
(
 
x1 y1 z1
x2 y2 z2
x3 y3 z3
 
)⊤
(
 
a1 a2 a3
b1 b2 b3
c1 c2 c3
 
)⊤

Fact: If A is invertible, then so is A⊤. This can be proven using the fact directly above.

A,B R2×2,
(A) is invertible
implies
(A−1) ⋅ A
=
 
1 0
0 1
 
A⊤ ⋅ (A−1)⊤
=
((A−1) ⋅ A)⊤
=
I⊤
=
 
1 0
0 1
 


Fact: det  A⊤ = det  A. We can see this easily in the A R2×2 case.

a,b,c,d R,
det 
 
ab
cd
 
=
a d − b c
=
a d − c b
=
det 
 
ac
bd
 
det 
 
ab
cd
 
=
det 
 
ac
bd
 


3.10. Orthogonal matrices

Definition: A matrix M Rn×n is orthogonal iff M⊤ = M -1.

Fact: The columns of orthogonal matrices are always orthogonal unit vectors. The rows of orthogonal matrices are always orthogonal unit vectors. We can see this in the R2×2 case. For columns, we use the fact that M⊤M = I:
(
 
a b
c d
 
)⊤
 
a b
c d
 
=
 
1 0
0 1
 
 
a c
b d
 
 
a b
c d
 
=
 
1 0
0 1
 
 
(a,c) ⋅ (a,c) (a,c) ⋅ (b,d)
(b,d) ⋅ (a,c) (b,d) ⋅ (b,d)
 
=
 
1 0
0 1
 
.
For rows, we use that MM⊤ = I:
 
a b
c d
 
(
 
a b
c d
 
)⊤
=
 
1 0
0 1
 
 
a b
c d
 
 
a c
b d
 
=
 
1 0
0 1
 
 
(a,b) ⋅ (a,b) (a,b) ⋅ (c,d)
(c,d) ⋅ (a,b) (c,d) ⋅ (c,d)
 
=
 
1 0
0 1
 
.

Below, we provide a verifiable argument of the above fact for the R2×2 case.

a,b,c,d R,
 
ab
cd
 
 
ab
cd
 
=
 
10
01
 


implies
 
ab
cd
 
 
ac
bd
 
=
 
10
01
 
 
a a + b b a c + b d
c a + d b c c + d d
 
=
 
10
01
 

a a + b b
=
1
 
a
b
 
 
a
b
 
=
1
||
 
a
b
 
||
=
1

(
 
a
b
 
) is a unit vector

c c + d d
=
1
 
c
d
 
 
c
d
 
=
1
||
 
c
d
 
||
=
1

(
 
c
d
 
) is a unit vector

a c + b d
=
0
 
a
b
 
 
c
d
 
=
0

(
 
a
b
 
) and (
 
c
d
 
) are orthogonal

Fact: Matrices representing rotations and reflections are orthogonal. We can show this for the general rotation matrix by checking that the product of the matrix and its transpose is the identity:

 
cos  θ sin  θ
-sin  θ cos  θ
 
(
 
cos  θ sin  θ
-sin  θ cos  θ
 
)⊤
=
 
cos  θ sin  θ
-sin  θ cos  θ
 
 
cos  θ -sin  θ
sin  θ cos  θ
 
=
 
cos 2 θ + sin 2 θ      -cos  θ sin  θ + sin  θ cos  θ
-sin  θ cos  θ + cos  θ sin  θ      sin 2 θ + cos 2 θ
 
=
 
1 0
0 1
 
.
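The orthogonality of a rotation matrix can also be spot-checked numerically for a sample angle. The Python sketch below is an illustrative aside; the angle 0.7 radians is an arbitrary choice.

import numpy as np

theta = 0.7  # any sample angle, in radians
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# A rotation matrix is orthogonal: its transpose is its inverse.
print(np.allclose(R @ R.T, np.eye(2)))     # True
print(np.allclose(R.T, np.linalg.inv(R)))  # True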

Example: Suppose that the hour hand of an animated 12-hour clock face is represented using a vector [ x ; y ]. To maintain the time, the coordinates [ x ; y ] must be updated once every hour by applying a matrix M to the vector representing the hour hand. What is the matrix M?

Since the hour hand must be rotated by 360/12 = 30 degrees in the clockwise direction while the rotation matrix represents counterclockwise rotation, we actually want to find the rotation matrix for 360 - (360/12) = 330 degrees.

Thus, M must be the rotation matrix for θ = 2π - (2π/12) = (330/360) ⋅ 2π, where θ is specified in radians. Thus, we have:
cos  (330/360 ⋅ 2π)
=
(√ 3)/2
sin  (330/360 ⋅ 2π)
=
−1/2
M
=
 
cos  (330/360 ⋅ 2π) −sin  (330/360 ⋅ 2π)
sin  (330/360 ⋅ 2π) cos  (330/360 ⋅ 2π)
 
=
 
(√ 3)/2 1/2
−1/2 (√ 3)/2
 

Example: You are instructed to provide a simple algorithm for drawing a spiral on the Cartesian plane. The spiral is obtained using counterclockwise rotation, and for every 360 degree turn of the spiral, the spiral arm's distance from the origin should double. Provide a matrix M R2×2 that will take any point [ x ; y ] on the spiral and provide the next point [ x' ; y' ] after θ radians of rotation (your definition of M should contain θ).

It is sufficient to multiply a rotation matrix by a scalar matrix that scales a vector according to the angle of rotation. If the angle of rotation is θ, then the scale factor s should be such that:
s((2π) / θ)
=
2
s
=
21 / ((2π) / θ)
s
=
2θ/(2π)
In the above, if n = (2π)/θ then s is the nth root of 2. Thus M would be:
 
2θ/(2π) 0
0 2θ/(2π)
 
 
cos  θ −sin  θ
sin  θ cos  θ
 
=
 
2θ/(2π) cos  θ − (2θ/(2π)) sin  θ
2θ/(2π) sin  θ 2θ/(2π) cos  θ
 

Fact: Orthogonal matrices are closed under multiplication. For orthogonal A, B Rn×n we have:

(AB)⊤
=
B⊤ A⊤
=
B -1 A -1
=
(AB) -1.

Fact: Orthogonal matrices are closed under inversion and transposition. For orthogonal A Rn×n we have:

(A -1)⊤
=
(A⊤) -1.

Thus, we can show that both A -1 and A⊤ are orthogonal.

(A -1)⊤
=
(A⊤) -1
=
(A -1) -1.

(A⊤)⊤
=
A
=
(A⊤) -1.

We can summarize this by adding another row to our table of matrix subsets.

subset of Rn×n definition closed under
matrix
multiplication
properties of
matrix multiplication
inversion
orthogonal matrices M⊤ = M -1 closed associative,
distributive with addition,
have identity
have inverses;
closed under inversion

Example: Let A R2×2 be an orthogonal matrix. Compute A A⊤ A.

We recall that because A is orthogonal, A⊤ = A -1. Thus, we have that

A A⊤ A = (A A -1) A = I A = A.

Fact: Suppose the last row of a matrix M Rn×n consists of all zeroes. Show that M is not invertible.

Suppose M is invertible. Then M⊤ is invertible. But the last column of M⊤ is all zeroes, which means the columns of M⊤ are linearly dependent (the zero vector is a trivial linear combination of the other column vectors of M⊤), so M⊤ cannot be invertible. Thus, we have a contradiction, so M cannot be invertible.

3.11. Matrix rank

Yet another way to characterize matrices is by considering their rank.

Definition: We can define the rank of a matrix M, rank M, as the number of nonzero rows in rref M.

We will see in this course that rank M is related in many ways to the various characteristics of a matrix.

Fact: Matrix rank is preserved by elementary row operations. Why is this the case? Because rref is idempotent and rank is defined in terms of it:

rref M
=
rref (rref M)
number of nonzero rows in rref M
=
number of nonzero rows in rref (rref M)
rank (M)
=
rank (rref M)

Fact: If M Rn×n is invertible, rank M = n. How can we prove this? Because all invertible matrices have rref M = I, and I has no rows with all zeroes. If I Rn×n, then I has n nonzero rows.

Fact: For A Rn×n, rank A = rank (A⊤).

We will not prove the above in this course in general, but typical proofs involve first proving that for all A Rn×n, rank A ≤ rank (A⊤). Why is this sufficient to complete the proof? Because we can apply this fact for both A and A⊤ and use the fact that (A⊤)⊤ = A:

rank(A)
≤
rank(A⊤)
rank(A⊤)
≤
rank((A⊤)⊤)
=
rank(A)
rank(A)
=
rank(A⊤)

To get some intuition about the previous fact, we might consider the following question: if the columns of a matrix M R2×2 are linearly independent, are the rows also linearly independent?

Fact: The only matrix in Rn×n that is in reduced row echelon form and has no rows consisting of all zeroes is I.

Fact: If a matrix M Rn×n has rank n, then it must be that rref M = I (and, thus, M is invertible). This is derived from the fact above.

Fact: Invertible matrices are closed under the transposition operation. In other words, if M is invertible, then M⊤ is invertible. How can we prove this using the rank operator? We know that rank is preserved under transposition, so we have:

n
=
rank(A)
=
rank(A⊤)

Thus, the rank of A⊤ is n, so it is invertible.

The following table summarizes the important facts about the rank and rref operators.

rank (rref M) = rank M
rref (rref M ) = rref M
rank (M⊤) = rank M
for M Rn×n, M is invertible iff rank M = n
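These facts can be observed numerically as well. The short Python sketch below uses numpy's matrix_rank and is illustrative only; the example matrices are reused from earlier in this section.

import numpy as np

A = np.array([[2, -7], [-4, 14]])   # linearly dependent columns
B = np.array([[2, 3], [0, 1]])      # invertible

for M in (A, B):
    r = np.linalg.matrix_rank(M)
    print(r, r == np.linalg.matrix_rank(M.T), r == M.shape[0])
# A: 1 True False   (rank below n, so A is not invertible)
# B: 2 True True    (full rank, so B is invertible)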



3.12. Assignment #3: Matrix Properties and Operations

In this assignment you will perform step-by-step algebraic manipulations involving vector and matrix properties and operations. You may not add new lines or premises above an "implies" logical operator. If you add any new premises, you will receive no credit.

    1. Finish the argument below that shows that matrix multiplication of vectors in R3 preserves lines.

      M R3×3, ∀ u,v,w R3, ∀ s R,
      w = s(uv) + v

      (w) is on the line defined by (u) and (v)

      implies

      # ... put proof steps here ...

      (M w) is on the line defined by (M u) and (M v)

    2. Finish the argument below that shows that if multiplication by a matrix M maps three vectors to [0; 0; 0], it maps any linear combination v' of those vectors to [0; 0; 0].

      M R3×3, ∀ u,v,w,v' R3, ∀ a,b,c R,
      M u
      =
       
      0
      0
      0
       
      M v
      =
       
      0
      0
      0
       
      M w
      =
       
      0
      0
      0
       
      v'
      =
      a u + b v + c w

      implies

      # ... put proof steps here ...

      M v' =
       
      0
      0
      0
       


      Extra credit: if u, v, and w are setwise linearly independent, what can you say about the matrix M? Be as specific as possible.

    3. Show that if a matrix M in R2×2 is invertible, there is a unique solution for u given any equation M u = v with v R2.

      M R2×2, ∀ u,v R2,
      (M) is invertible

      M u = v

      implies

      # ... put proof steps here ...

      # replace the right−hand side below
      # with an expression in terms of M and v
      u = ?

    4. Show that if the column vectors of a matrix in R2×2 are linearly dependent, then its determinant is 0.

      a,b,c,d R,
      (
       
      a
      c
       
      ) and (
       
      b
      d
       
      ) are linearly dependent

      implies

      # ... put proof steps here ...

      det 
       
      ab
      cd
       
      = 0

Extra credit: you may include the opposite direction of this proof for extra credit (build a new, separate argument in which det  [a, b; c, d] = 0 is found above the implies and `([a;c]) and ([b;d]) are linearly dependent` is at the end of the argument below it).

  1. In this problem, you will define explicit matrices that correspond to elementary row operations on matrices in R2×2.
    1. Find appropriate matrices A, B, C, and D in R2×2 to finish the following argument.

      A,B,C,D R2×2, ∀ a,b,c,d,t R,
      A
      =
      ?
      B
      =
      ?
      C
      =
      ?
      D
      =
      ?

      implies

      # ... put proof steps here ...

      A
       
      ab
      cd
       
      =
       
      a+c b+d
      c d
       
      B
       
      ab
      cd
       
      =
       
      a b
      c+a d+b
       
      C
       
      ab
      cd
       
      =
       
      ta tb
      c d
       
      D
       
      ab
      cd
       
      =
       
      a b
      tc td
       


    2. Use the matrices A, B, C, and D from part (a) with matrix multiplication to construct a matrix E that can be shown to satisfy the last line in the following argument.

      A,B,C,D,E R2×2, ∀ a,b,c,d,t R,
      t
      =
      ?
      A
      =
      ?
      B
      =
      ?
      C
      =
      ?
      D
      =
      ?
      E
      =
      ?

      implies

      # ... put proof steps here ...

      E
       
      ab
      cd
       
      =
       
      cd
      ab
       


    3. Extra credit: the row operations defined by A, B, C, and D are all invertible as long as t ≠ 0; prove this for t = -1 by showing that the matrices A, B, C, and D are invertible.

      A,B,C,D R2×2, ∀ t R,
      t
      =
      −1
      A
      =
      ?
      B
      =
      ?
      C
      =
      ?
      D
      =
      ?

      implies

      # ... put proof steps here ...

      (A) is invertible
      (B) is invertible
      (C) is invertible
      (D) is invertible




Review 1. Vector and Matrix Algebra and Applications

The following is a breakdown of what you should be able to do at this point in the course (and of what you may be tested on in an exam). Notice that many of the tasks below can be composed. This also means that many problems can be solved in more than one way.

Below is a comprehensive collection of review problems going over the course material covered until this point. These problems are an accurate representation of the kinds of problems you may see on an exam.

Example: Find any h R such that the following two vectors are linearly independent.
 
5
-5
-2
 
,
 
20
-20
h
 
There are many ways to solve this problem. One way is to use the definition of linear dependence and find an h that does not satisfy it.
s
 
5
-5
-2
 
=
 
20
-20
h
 
Then, we have that any h such that h ≠ -8 is sufficient to contradict linear dependence (and, thus, imply linear independence):
s ⋅ 5
=
20
s
=
4
s ⋅ (-2)
=
h
-8
=
h
Another solution is to recall that orthogonality implies linear independence. Thus, it is sufficient to find h such that the two vectors are orthogonal.
 
5
-5
-2
 
 
20
-20
h
 
=
0
This implies h = 100.
5(20) + (-5)(-20) + (-2)h
=
0
h
=
100

Example: Suppose we have a matrix M such that the following three equations are true:
M
 
1
0
0
 
=
 
3
-2
 
M
 
0
1
0
 
=
 
3
0
 
M
 
0
0
1
 
=
 
-3
1
 
Compute the following:
M
 
2
1
-1
 
=
?

We should recall that multiplying a matrix by a canonical unit vector with 1 in the ith row in the vector gives us the ith column of the matrix. Thus, we can immediately infer that:
M
=
 
3 3 -3
-2 0 1
 
Thus, we have:
M
 
2
1
-1
 
=
 
(3,3,-3) ⋅ (2,1,-1)
(-2,0,1) ⋅ (2,1,-1)
 
=
 
12
-5
 

Example: List at least three properties of the following matrix:

 
0 1 0
1 0 0
0 0 1
 

The matrix has many properties, such as:
  • it is an elementary matrix (it can be obtained from I by swapping the first two rows)
  • it is symmetric
  • it is orthogonal (its transpose is its inverse)
  • it is invertible (in fact, it is its own inverse)

Example: Find the matrix B R2×2 that is symmetric, has a constant diagonal (i.e., all entries on the diagonal are the same real number), and satisfies the following equation:
B
 
2
1
 
=
 
15
6
 

We know that B is symmetric and has a constant diagonal, so we need to solve for a and b in:

 
a b
b a
 
 
2
1
 
=
 
15
6
 

Example: Compute the inverse of the following matrix:

 
2 3
1 2
 

One approach is to set up the following equation and solve for a,b,c, and d:
 
a b
c d
 
 
2 3
1 2
 
=
 
1 0
0 1
 
Another approach is to apply the formula for the inverse of a matrix in R2×2:
 
2 3
1 2
 
 -1
=
(1 / (2 ⋅ 2 - 3 ⋅ 1)) ⋅
 
2 −3
−1 2
 
=
 
2 −3
−1 2
 

Example: Let a R be such that a ≠ 0. Compute the inverse of the following matrix:

 
a -a
-a -a
 

As in the previous problem, we can either solve an equation or apply the formula:
 
a -a
-a -a
 
 -1
=
(1 / (-a2 - a2)) ⋅
 
-a a
a a
 

Example: Suppose x R is such that x ≠ 0. Compute the inverse of the following matrix:

 
x 0 -2x
0 x 0
0 0 4x
 

Because we have an upper triangular matrix, computing the inverse by solving the following equation is fairly efficient. Start by considering the bottom row and its product with each of the columns. This will generate the values for g, h, and i. You can then proceed to the other rows.
 
x 0 -2x
0 x 0
0 0 4x
 
 
a b c
d e f
g h i
 
=
 
1 0 0
0 1 0
0 0 1
 
The solution is:

 
1/x 0 1/(2x)
0 1/x 0
0 0 1/(4x)
 

Example: Assume a matrix M R2×2 is symmetric, has constant diagonal, and all its entries are nonzero. Show that M cannot be an orthogonal matrix.

Let a ≠ 0 and b ≠ 0, and let us define:

M
=
 
a b
b a
 
.

Suppose that M is an orthogonal matrix; then, we have that:

M -1
=
M
MM
=
I
 
a b
b a
 
 
a b
b a
 
=
 
1 0
0 1
 
.

Thus, we have ba + ab = 0, so 2ab = 0. This implies that either a = 0 or b = 0. But this contradicts the assumptions, so M cannot be orthogonal.

Example: Suppose the Earth is located at the origin and your spaceship is in space at the location corresponding to the vector [ -5 ; 4; 2 ]. Earth is sending transmissions along the vector [ √(3)/3 ; √(3)/3 ; √(3)/3 ]. What is the shortest distance your spaceship must travel in order to hear a transmission from Earth?

By the triangle inequality, for any vector v R3, the closest point to v on a line is the orthogonal projection of v onto that line. We need to find the distance from the spaceship's current position to that point.

We first compute the orthogonal projection of [ -5 ; 4; 2 ] onto the line L defined as follows:
L
=
{ a
 
√(3)/3
√(3)/3
√(3)/3
 
 |  a R}
We can compute the orthogonal projection using the formula for an orthogonal projection of one vector onto another. Notice that the vector specifying the direction of transmission is already a unit vector:
||
 
√(3)/3
√(3)/3
√(3)/3
 
||
=
1
 
-5
4
2
 
 
√(3)/3
√(3)/3
√(3)/3
 
=
√(3)/3
(vu/||u||) ⋅ u/||u||
=
√(3)/3 ⋅
 
√(3)/3
√(3)/3
√(3)/3
 
=
 
1/3
1/3
1/3
 
Now, we must compute the distance between the destination above and the spaceship's current position:
||
 
1/3
1/3
1/3
 
-
 
-5
4
2
 
||
=
||
 
16/3
-11/3
-5/3
 
||
=
√((16/3)2 + (-11/3)2 + (-5/3)2)
=
√(256/9 + 121/9 + 25/9)
=
√(402)/3
Thus, the shortest distance is √(402)/3.
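The projection and distance computed above can be verified with a short numerical sketch (Python/numpy, illustrative only; the variable names ship and u are chosen here for readability).

import numpy as np

ship = np.array([-5.0, 4.0, 2.0])
u = np.array([1.0, 1.0, 1.0]) * np.sqrt(3) / 3   # unit vector along the transmission

# Orthogonal projection of the ship's position onto the line spanned by u.
p = (ship @ u) * u
distance = np.linalg.norm(p - ship)

print(p)                            # [0.3333... 0.3333... 0.3333...]
print(distance, np.sqrt(402) / 3)   # both approximately 6.6833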

Example: Two communication towers located at v R2 and u R2 are sending directed signals to each other (it is only possible to hear the signal when in its direct path). You are at the origin and want to travel the shortest possible distance to intercept their signals. What vector specifies the distance and direction you must travel to intercept their signals?

We can solve this problem by recognizing that the closest point to the origin on the line along which the signals can be intercepted is the vector that is orthogonal to the line (i.e., it is the orthogonal projection of the origin onto the line). The definition of the line is:
L
=
{ a ⋅ (u - v) + v  |  a R}
Thus, the point p R2 to which we want to travel must be both on the line and orthogonal to the line (i.e., it must be orthogonal to any vector that represents the direction of the line, such as u - v). In other words, it must satisfy the following two equations:
p
=
a ⋅ (u - v) + v
p ⋅ (u - v)
=
0
Thus, it is sufficient to solve the above system of equations for p.


4. Vector Spaces

We are interested in studying sets of vectors because they can be used to model sets of system states, observations and data that might be obtained about systems, geometric shapes and regions, and so on. We can then represent real-world problems (e.g., given some observations, what is the actual system state) as equations of the form Mv = w, and sets of vectors as the collections of possible solutions to those equations. But what exactly is the set of possible solutions to Mv = w? Can we characterize it precisely? Can we define a succinct notation for it? Can we say anything about it beyond simply solving the equation? Does this tell us anything about our system?

In some cases, it may make more sense to consider only a finite set of system states, or an infinite set of discrete states (i.e., only vectors that contain integer components); for example, this occurs if vectors are used to represent the number of atoms, molecules, cows, chickens, power plants, single family homes, and so on. However, in this course, we make the assumption that our sets of system states (our models of systems) are infinite and continuous (i.e., not finite and not discrete); in this context, this simply means that the entries in the vectors we use to represent system states are real numbers.

Notice that the assumption of continuity means that, for example, if we are looking for a particular state (e.g., a state corresponding to some set of observations), we allow the possibility that the state we find will not correspond exactly to a state that "makes sense". Consider the example problem involving the barn of cows and chickens. Suppose we observe 4 heads and 9 legs. We use the matrix that represents the relationships between the various dimensions of the system to find the number of cows and chickens:
 
1 1
2 4
 
 
x chickens
y cows
 
=
 
4 heads
9 legs
 
x
=
3.5
y
=
0.5
Notice that the solution above is not an integer solution; yet, it is a solution to the equation we introduced because the set of system states we are allowing in our solution space (the model of the system) contains all vectors in R2, not just those with integer entries.
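The same solution can be reproduced with any tool that solves linear systems; below is a minimal sketch in Python using NumPy (chosen here only for illustration).

    import numpy as np

    M = np.array([[1.0, 1.0],    # heads contributed by each chicken and each cow
                  [2.0, 4.0]])   # legs contributed by each chicken and each cow
    w = np.array([4.0, 9.0])     # observed heads and legs

    v = np.linalg.solve(M, w)    # [number of chickens, number of cows]
    print(v)                     # [3.5 0.5], a non-integer solution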

As we did with vectors and matrices, we introduce a succinct language (consisting of symbols, operators, predicates, terms, and formulas) for infinite, continuous sets of vectors. And, as with vectors and matrices, we study the algebraic laws that govern these symbolic expressions.

4.1. Sets of vectors and their notation

We will consider three kinds of sets of vectors in this course; they are listed in the table below.

kind of set (of vectors): finite set of vectors
  maximum cardinality ("quantity of elements"): finite
  examples: {(0,0)}; {(2,3),(4,5),(0,1)}
kind of set (of vectors): vector space
  maximum cardinality: infinite
  solution space of a: homogeneous system of linear equations Mv = 0
  examples: {(0,0)}; R; R2; span{(1,2),(2,3),(0,1)}; any point, line, or plane intersecting the origin
kind of set (of vectors): affine space
  maximum cardinality: infinite
  solution space of a: nonhomogeneous system of linear equations Mv = w
  examples: { a + v | v ∈ V } where V is a vector space and a is a vector; any point, line, or plane

To represent finite sets of vectors symbolically, we adopt the convention of simply listing the vectors between a pair of braces (as with any set of objects). However, we need a different convention for symbolically representing vector spaces and affine spaces. This is because we must use a symbol of finite size to represent a vector space or affine space that may contain an infinite number of vectors.

If the solution spaces to equations of the form Mv = w are infinite, continuous sets of vectors, in what way can we characterize them? Suppose that M ∈ R2×2 and that M is invertible. Then we have that:
M
=
 
a b
c d
 
w
=
 
s
t
 
Mv
=
w
 
a b
c d
 
v
=
 
s
t
 
v
=
(1/(ad − bc)) ⋅
 
d −b
−c a
 
 
s
t
 
v
=
(s/(ad − bc)) ⋅
 
d
−c
 
+ (t/(ad − bc)) ⋅
 
−b
a
 
Notice that the set of possible solutions v is a linear combination of two vectors in R2. In fact, if a collection of solutions (i.e., vectors) to the equation Mv = 0 exists, it must be a set of linear combinations (in the more general case of Mv = w, it is a set of linear combinations with some specified offset). Thus, we introduce a succinct notation for a collection of linear combinations of vectors.

Definition: The span of a set of vectors { v1, ..., vn } is the set of all linear combinations of vectors in { v1, ..., vn }:
span { v1, ..., vn } = { a1 ⋅ v1 + ... + an ⋅ vn  |  a1 ∈ R, ..., an ∈ R }

Example: For each of the following spans, expand the notation into the equivalent set comprehension and determine if the set of vectors is a point, a line, a plane, or a three-dimensional space.
  1. The set defined by:
    span {
     
    1
    2
     
    }
    =
    {a
     
    1
    2
     
    | a R}
    The above span is a line.
  2. The set defined by:
    span {
     
    1
    -2
     
    ,
     
    −2
    4
     
    }
    =
    span {
     
    1
    −2
     
    }
    =
    {a
     
    1
    −2
     
    | a R}
    The above span is a line.
  3. The set defined by:
    span {
     
    0
    1
     
    ,
     
    1
    0
     
    ,
     
    1
    1
     
    }
    =
    span {
     
    0
    1
     
    ,
     
    1
    0
     
    }
    =
    {a
     
    1
    0
     
    + b
     
    0
    1
     
    | a,b R}
    =
    R2
    The above span is a plane.
  4. The set defined by:
    span {
     
    0
    0
     
    }
    =
    {
     
    0
    0
     
    }
    The above span is a set containing a single point (the origin).
  5. The set defined by:
    span {
     
    0
    1
    0
     
    ,
     
    1
    0
    0
     
    ,
     
    0
    0
    1
     
    }
    =
    {a
     
    0
    1
    0
     
    + b
     
    1
    0
    0
     
    + c
     
    0
    0
    1
     
    | a,b,c R}
    =
    R3
    The above span is a three-dimensional space.

Example: Using span notation, describe the set of vectors that are orthogonal to the vector [ 1 ; 2 ] R2.

We know that the set of vectors orthogonal to [ 1 ; 2 ] can be defined as the line L where:
L
=
{
 
x
y
 
 | 
 
x
y
 
 
1
2
 
= 0 }
It suffices to find a vector on the line L. The equation for the line is:
 
x
y
 
 
1
2
 
=
0
1 ⋅ x + 2 ⋅ y
=
0
y
=
(-1/2) x
We can choose any point on the line y = (-1/2) x; for example, [ 2 ; -1 ]. Then, we have that:
L
=
span {
 
2
-1
 
}

Example: Using span notation, describe the set of solutions to the following matrix equation:

 
1 2
2 4
 
 
x
y
 
=
 
0
0
 
Notice that the above equation implies two equations:
 
1
2
 
 
x
y
 
=
0
 
2
4
 
 
x
y
 
=
0
In fact, the second equation provides no additional information because 2 ⋅ [ 1 ; 2 ] = [ 2 ; 4 ]. Thus, the solution space is the set of vectors:
L
=
{
 
x
y
 
 | 
 
1
2
 
 
x
y
 
= 0 }
We can now use our solution to the previous problem to find a span notation for the solution space:
L
=
span {
 
2
-1
 
}

Example: Suppose also that system states are described by vectors v R3 with the following units for each dimension:
v
=
 
x # carbon atoms
y # hydrogen atoms
z # oxygen atoms
 
Individual molecules are characterized using the following vectors:
C3H8:
 
3
8
0
 
,     O2:
 
0
0
2
 
,     CO2:
 
1
0
2
 
,     H2O:
 
0
2
1
 

Recall the definition for what constitutes a vector space (there are many equivalent definitions).

Definition: A vector space is a set of vectors that contains 0, is closed under vector addition and scalar multiplication, and is such that all the elements in the set satisfy the vector space axioms governing vector addition and scalar multiplication.

Fact: The set of all linear combinations of a collection of vectors is closed under vector addition and scalar multiplication, contains 0, and satisfies the vector space axioms. In other words, for any collection of vectors v1, ..., vn, span{v1, ..., vn} is a vector space.

Fact: For any set of vectors V ⊂ Rn that satisfies the vector space axioms, there exists a finite set of at most n vectors v1, ..., vk (where k ≤ n) such that span{v1, ..., vk} = V.

Given the two facts above, we can safely adopt span{v1, ..., vn} as a standard notation for vector spaces. In addition to this notation, we will also often use the notation Rn for specific values of n (e.g., R2 is equivalent to span{[1;0],[0;1]}), and {0} for the vector space containing only the zero vector (e.g., {[0;0]} is equivalent to span{[0;0]}).

4.2. Membership and equality relations involving sets of vectors

Recall that many of the algebraic laws we saw governing operators on vectors and matrices involved equality of vectors and matrices. In fact, one can view these laws as collectively defining the semantic equality of the symbols (i.e., they specify when two symbols refer to the same object).

The meaning of symbols is closely tied to the equality relation we define over them. In the case of the span notation, one potential problem is that there is more than one way to describe a set using span notation. For example:
span {
 
2
3
 
}
=
span {
 
4
6
 
}
span {
 
1
0
 
,
 
0
1
 
}
=
span {
 
2
1
 
,
 
0
-1
 
}
How can we determine if two spans are equivalent? We must define the equality relation (i.e., the relational operator) that applies to sets of vectors (including infinite sets containing all linear combinations of vectors). We first recall the definition of equality for our vector notation.

Fact: Recall that two vectors v, w ∈ Rn are equal if and only if their corresponding components are equal:
 
a1
a2
an
 
  =  
 
b1
b2
bn
 
    
iff
     a1 = b1  and  a2 = b2  and  ...  and  an = bn

With the equality of pairs of vectors defined, it is possible to provide a definition of equality for sets of vectors (both finite and infinite). However, we will build the definition gradually. First, let us define when a vector is a member of a finite set of vectors using vector equality.

Fact: A vector v ∈ Rn is a member of the finite set of vectors { w1, ..., wk } ⊂ Rn if and only if it is equal to one of the vectors in the set:
v { w1, ..., wk }     
iff
     ∃ w { w1, ..., wk } s.t. v = w

Notice that the above defines membership of v in the set using an equation (in this case, v = wi where i {1,...,k}). We can take the same approach in the case of an infinite set of vectors.

Fact: A vector v ∈ Rn is a member of the set of vectors span { w1, ..., wk } ⊂ Rn if and only if it is equal to one of the vectors in the set:
v ∈ span { w1, ..., wk }     
iff
     ∃ w ∈ span { w1, ..., wk }  s.t.  v = w
iff
     ∃ w ∈ { a1 ⋅ w1 + ... + ak ⋅ wk  |  a1 ∈ R, ..., ak ∈ R }  s.t.  v = w
iff
     ∃ a1 ⋅ w1 + ... + ak ⋅ wk ∈ span { w1, ..., wk }  s.t.  v = a1 ⋅ w1 + ... + ak ⋅ wk
iff
     ∃ a1 ∈ R, ..., ak ∈ R  s.t.  v = a1 ⋅ w1 + ... + ak ⋅ wk
iff
     ∃
 
a1
ak
 
Rk  s.t. 
 
w1 ... wk
 
 
a1
ak
 
= v
Thus, we can determine membership of a vector in a span by solving a matrix equation of the form Mu = v.

Example: Determine whether the following formula is true:
 
-1
6
 
span {
 
1
4
 
,
 
1
-1
 
}
We proceed as in the fact above:
 
-1
6
 
span {
 
1
4
 
,
 
1
-1
 
}     
iff
     ∃ w span {
 
1
4
 
,
 
1
-1
 
}  s.t. 
 
-1
6
 
= w
iff
     ∃ w { a
 
1
4
 
+ b
 
1
-1
 
 |  a R, b R }  s.t. 
 
-1
6
 
= w
iff
     ∃ a
 
1
4
 
+ b
 
1
-1
 
span {
 
1
4
 
,
 
1
-1
 
}  s.t. 
 
-1
6
 
= a
 
1
4
 
+ b
 
1
-1
 
iff
     ∃ a R, b R  s.t. 
 
-1
6
 
= a
 
1
4
 
+ b
 
1
-1
 
iff
     ∃
 
a
b
 
R2  s.t. 
 
1 1
4 -1
 
 
a
b
 
=
 
-1
6
 
Since a solution to the matrix equation exists, the formula is true:
 
1 1
4 -1
 
 
a
b
 
=
 
-1
6
 
a
=
1
b
=
-2
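Because membership in a span reduces to solving a matrix equation, the check can be automated. The sketch below (Python with NumPy, an illustrative choice) solves the corresponding least-squares problem and tests whether the residual is (numerically) zero; applied to the example above, it reports that the vector is a member and recovers the coefficients a = 1 and b = -2.

    import numpy as np

    def in_span(v, vectors, tol=1e-9):
        M = np.column_stack(vectors)              # columns are the spanning vectors
        a, *_ = np.linalg.lstsq(M, v, rcond=None)
        return np.allclose(M @ a, v, atol=tol), a

    member, coeffs = in_span(np.array([-1.0, 6.0]),
                             [np.array([1.0, 4.0]), np.array([1.0, -1.0])])
    print(member, coeffs)   # True, with coefficients approximately [1, -2]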

Example: Solve each of the following problems.
  1. Determine whether the following formula is true or false:
     
    2
    5
    -1
     
    span {
     
    4
    0
    -2
     
    ,
     
    0
    1
    0
     
    }
  2. Define the following line using span notation:
    L
    =
    {
     
    x
    y
     
     |  y = -3 ⋅ x }

Now suppose we want to determine if a finite set of vectors is a subset of a span. Let us first consider how we would check if a finite set of vectors is a subset of another finite set of vectors.

Fact: A finite set of vectors { v1, ..., vj } ⊂ Rn is a subset of the finite set of vectors { w1, ..., wk } ⊂ Rn if every member of { v1, ..., vj } is a member of { w1, ..., wk }:
{ v1, ..., vj } ⊂ { w1, ..., wk }     
iff
     ∀ v { v1, ..., vj }, ∃ w { w1, ..., wk }  s.t.  v = w

Thus, we can generalize the above by replacing the second finite set of vectors with a span.

Fact: A finite set of vectors { v1, ..., vj } ⊂ Rn is a subset of the set of vectors span { w1, ..., wk } ⊂ Rn if every member of { v1, ..., vj } is a member of span { w1, ..., wk }:
{ v1, ..., vj } ⊂ span { w1, ..., wk }     
iff
     ∀ v { v1, ..., vj }, ∃ w span { w1, ..., wk }  s.t.  v = w
iff
     ∀ v { v1, ..., vj }, ∃
 
a1
ak
 
Rk  s.t. 
 
w1 ... wk
 
 
a1
ak
 
= v
iff
     ∃
 
a11 ... a1j
ak1 ... akj
 
Rk × j  s.t. 
 
w1 ... wk
 
 
a11 ... a1j
ak1 ... akj
 
=
 
v1 ... vj
 

Fact: For any finite set of vectors { v1, ..., vj } ⊂ Rn, and for any real number scalars a1 ∈ R, ..., aj ∈ R, we have that:
{ v1, ..., vj } ⊂ span { w1, ..., wk }      implies      a1v1 + ... + ajvj span { w1, ..., wk }
We can see the above is true, because:
v1
=
s11w1 + ... + sk1wk
vj
=
s1jw1 + ... + skjwk
a1v1
=
(a1 s11) ⋅ w1 + ... + (a1 sk1) ⋅ wk
ajvj
=
(aj s1j) ⋅ w1 + ... + (aj skj) ⋅ wk
a1v1 + ... + ajvj
=
(a1 s11 + ... + aj s1j) w1 + ... + (a1 sk1 + ... + aj skj) wk
Thus, we have that:
{ v1, ..., vj } ⊂ span { w1, ..., wk }      implies      span { v1, ..., vj } ⊂ span { w1, ..., wk }

Definition: Using facts about sets, we can then specify the following definition of equality between two spans for two finite sets of vectors V ⊂ Rn and W ⊂ Rn:
span W = span V     
iff
     span W ⊂ span V and span V ⊂ span W

Example: Determine whether the following formula is true:
span {
 
1
2
 
,
 
2
1
 
}
span {
 
9
3
 
,
 
3
1
 
,
 
-6
-2
 
}
It suffices to set up two matrix equations and determine if solutions A R2 × 3 and B R3 × 2 exist:
 
1 2
2 1
 
A
=
 
9 3 -6
3 1 −2
 
 
9 3 -6
3 1 −2
 
B
=
 
1 2
2 1
 
Alternatively, we can check each vector individually (this is analogous to considering the above equations for A and B one column at a time).
Is
 
1
2
 
a linear combination of
 
9
3
 
,
 
3
1
 
, and
 
-6
-2
 
?
Is
 
2
1
 
a linear combination of
 
9
3
 
,
 
3
1
 
, and
 
-6
-2
 
?
Is
 
9
3
 
a linear combination of
 
1
2
 
and
 
2
1
 
?
Is
 
3
1
 
a linear combination of
 
1
2
 
and
 
2
1
 
?
Is
 
-6
-2
 
a linear combination of
 
1
2
 
and
 
2
1
 
?
If all of the above are true, then the two spans are equivalent. If any of the above is false, then the two spans are not equivalent.

The above implies that a matrix can be used to represent a particular vector space. We will see in later sections that a given vector space can be represented by more than one matrix.
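Following this remark, we can represent each span by a matrix whose columns are its vectors and check both inclusions computationally. The sketch below (Python with NumPy, an illustrative choice) performs the membership checks from the example above; in this particular case it prints False, since all three vectors in the second span are scalar multiples of [3; 1], so the two spans are not equivalent.

    import numpy as np

    def subset_of_span(vectors, spanning):
        # Each vector must be a linear combination of the spanning vectors.
        M = np.column_stack(spanning)
        for v in vectors:
            a, *_ = np.linalg.lstsq(M, v, rcond=None)
            if not np.allclose(M @ a, v):
                return False
        return True

    V = [np.array([1.0, 2.0]), np.array([2.0, 1.0])]
    W = [np.array([9.0, 3.0]), np.array([3.0, 1.0]), np.array([-6.0, -2.0])]

    print(subset_of_span(V, W) and subset_of_span(W, V))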

Definition: Given two vector spaces W and V, we have:

W is a (vector) subspace of V      iff      W ⊂ V.

4.3. Vector spaces as abstract structures

Our definitions of a vector space so far have explicitly referenced sets of concrete vectors as we usually understand them. However, this is not necessary.

Definition: A vector space is a set of objects X that satisfies the following conditions: X contains an additive identity, X is closed under an addition operation and a scalar multiplication operation, and those operations satisfy the vector space axioms governing addition and scalar multiplication.

The above definition specifies conditions under which any set of objects X can be studied as a vector space.

We have encountered many sets that satisfy the above properties (thus making them vector spaces); below is a table of vector spaces, including vector spaces we have already encountered and vector spaces we will encounter later in the course.

vector space: R; addition operation: addition of real numbers; additive identity: 0; scalar multiplication operation: multiplication of real numbers
vector space: R2; addition operation: vector addition, [ a ; b ] + [ c ; d ] = [ a + c ; b + d ]; additive identity: [ 0 ; 0 ]; scalar multiplication operation: s ⋅ [ a ; b ] = [ s ⋅ a ; s ⋅ b ]
vector space: R3; addition operation: vector addition, [ a ; b ; c ] + [ d ; e ; f ] = [ a + d ; b + e ; c + f ]; additive identity: [ 0 ; 0 ; 0 ]; scalar multiplication operation: s ⋅ [ a ; b ; c ] = [ s ⋅ a ; s ⋅ b ; s ⋅ c ]
vector space: Rn; addition operation: vector addition, [ a1 ; ... ; an ] + [ b1 ; ... ; bn ] = [ a1 + b1 ; ... ; an + bn ]; additive identity: [ 0 ; ... ; 0 ]; scalar multiplication operation: s ⋅ [ a1 ; ... ; an ] = [ s ⋅ a1 ; ... ; s ⋅ an ]
vector space: span { [0;0] } = { [0;0] }; addition operation: vector addition; additive identity: [0;0]; scalar multiplication operation: scalar multiplication of vectors in R2
vector space: span { v1 , ... , vk } ⊂ Rn; addition operation: vector addition; additive identity: [ 0 ; ... ; 0 ]; scalar multiplication operation: scalar multiplication of vectors in Rn
vector space: R2×2; addition operation: matrix addition; additive identity: [ 0 , 0 ; 0 , 0 ]; scalar multiplication operation: scalar multiplication of a matrix
vector space: Rn×n; addition operation: matrix addition; additive identity: [ 0 , ... , 0 ; ... ; 0 , ... , 0 ]; scalar multiplication operation: scalar multiplication of a matrix
vector space: affine space with origin at a ∈ R2; addition operation: v ⊕ w = (v - a) + (w - a) + a; additive identity: a; scalar multiplication operation: s ⊗ v = s ⋅ (v - a) + a
vector space: set of lines through the origin, f(x) = a ⋅ x; addition operation: f ⊕ g = h where h(x) = f(x) + g(x); additive identity: f(x) = 0 ⋅ x; scalar multiplication operation: s ⊗ f = h where h(x) = s ⋅ f(x)
vector space: set of polynomials of degree 2, f(x) = a ⋅ x2 + b ⋅ x + c; addition operation: f ⊕ g = h where h(x) = f(x) + g(x); additive identity: f(x) = 0; scalar multiplication operation: s ⊗ f = h where h(x) = s ⋅ f(x)
vector space: set of polynomials of degree k, f(x) = ak ⋅ xk + ... + a0; addition operation: f ⊕ g = h where h(x) = f(x) + g(x); additive identity: f(x) = 0; scalar multiplication operation: s ⊗ f = h where h(x) = s ⋅ f(x)

Example: The affine space A = {a + v | v V} is a vector space for appropriate definitions of addition, scalar multiplication, and identity.

An abstract definition of vector spaces is useful because it also allows us to study by analogy and learn about other objects that are not necessarily sets of vectors. All the properties we can derive about vector spaces using the above definition will apply to other sets of objects that also satisfy the above definition.

Example: We consider the set of functions F = {f | f(x) = cx, c R}. How do we show that F is a vector space?

Example: Show that F = {f | f(x) = bx2 + cx, b,c R} is a vector space.

For our purposes, it is sufficient to show that there is an additive identity, that the set is closed under addition, and that the set is closed under scalar multiplication. There is an additive identity:
f(x) = 0 ⋅ x2 + 0 ⋅ x
The set is closed under the usual addition operation on polynomial curves; for any f, g F we have f + g = h where:
f(x)
=
bx2 + cx
g(x)
=
b'x2 + c'x
f(x) + g(x)
=
(bx2 + cx) + (b'x2 + c'x)
=
(b + b') ⋅ x2 + (c + c') ⋅ x
h(x)
=
(b + b') ⋅ x2 + (c + c') ⋅ x
h
F
For any f F and s R we have h = sf where:
f(x)
=
bx2 + cx
sf(x)
=
s ⋅ (bx2 + cx)
sf(x)
=
(sb)x2 + (sc)x
h(x)
=
(sb)x2 + (sc)x
h
F

Example: Find two elements in F = {f | f(x) = bx2 + cx + d, b,c,d R} that are linearly independent. What does it mean for two functions to be linearly independent?

Two vectors g, h ∈ F are linearly independent if they are not linearly dependent. This means there exists no scalar s such that s ⋅ g = h or s ⋅ h = g. One example of such a pair would be:
g(x)
=
x2
h(x)
=
1
There is no single scalar by which we can multiply the curve h(x) = 1 to obtain a parabola, and there is no single scalar by which we can multiply the parabola to obtain a non-zero flat line.

Example: What is a finite set of functions that spans {f | f(x) = bx2 + cx + d, b,c,d R}? One such set is:

{f,g,h}      where      f(x) = 1,      g(x) = x,      h(x) = x2

Example: We consider the set of functions {f | f(x) = xk, k R}. The following definitions of addition and scalar multiplication make this set of functions a vector space.

Fact: Once we start thinking of curves as a vector space, we can rewrite curve-fitting problems as equations involving linear combinations of curves. For example, suppose we have some curve φ for which we do not know the formula...
φ(x) = ...
...that represents some observable phenomenon. We can hypothesize that the curve φ(x) is a linear combination of simpler curves, such as f(x) = x and g(x) = x2. This is equivalent to hypothesizing that φ is in the following vector space:
span {f, g} = { h  |  h(x) = a ⋅ x2 + b ⋅ x where a,b ∈ R }
We can then set up the following equation:
af + bg
=
φ
However, the above equation cannot directly be solved for a,b R because we do not know the formula for φ. Suppose we can sample φ on some finite collection of input values { x1,...,xm } ⊂ R. We can then write an equation that approximates the above equation at those points:
a
 
f(x1)
f(xm)
 
+ b
 
g(x1)
g(xm)
 
=
 
φ(x1)
φ(xm)
 
The above is equivalent to the following system of equations:
af(x1) + bg(x1)
=
φ(x1)
af(xm) + bg(xm)
=
φ(xm)
It can also be rewritten as the following matrix equation:
 
f(x1) g(x1)
f(xm) g(xm)
 
 
a
b
 
=
 
φ(x1)
φ(xm)
 
Notice that because we know f and g exactly, the matrix above has all constant entries. Furthermore, the right-hand side of the equation also consists of a vector with all constants as entries because we assumed we can sample φ on those inputs. Thus, we now have a matrix equation of the form Mv = w.

We can now compute the rref of an augmented matrix or, if the matrix is invertible, use any method for computing the matrix inverse to solve the above equation for a, b ∈ R. This would give us the exact coefficients of a curve in span {f, g} that fits φ at the x values { x1,...,xm }.

Fact: For any two natural numbers k, m ∈ N where k ≠ m, the curves f and g defined below are linearly independent:
f(x) = xk
g(x) = xm

Corollary: For any natural number k ∈ N, the curves f and g defined below are linearly independent:
f(x) = xk
g(x) = xk+1
We can see this by choosing any set of real number inputs { x1,...,xk+2 }; the equation below then has no solution s ∈ R:
f
=
sg
 
f(x1)
f(xk + 2)
 
=
s
 
g(x1)
g(xk + 2)
 
Since no such s exists, f and g must be linearly independent.

Example: Suppose we have the following two curves:
f(x) = x
g(x) = x2
If we choose any three inputs, e.g., {1,2,3}, we will find that f cannot be a scalar multiple of g, because no single scalar s relates the sampled values of f and g at all three inputs.
f
=
sg
 
f(1)
f(2)
f(3)
 
=
s
 
g(1)
g(2)
g(3)
 
 
1
2
3
 
=
s
 
1
4
9
 
s
=
1
s
=
1/2
1
=
1/2
Since we derived a contradiction above, no such s exists, so f and g must be linearly independent.

Fact: For any collection of exactly k points in R2 whose first components x1, ..., xk are distinct:
{
 
x1
y1
 
, ...,
 
xk
yk
 
}
there exists a polynomial f(x) = ak-1xk-1 + ... + a1 x + a0 that fits those points exactly.

We know the above must be true because the columns of the following matrix in Rk × k must be linearly independent:
 
(x1)k-1 ... (x1) 1
(xk)k-1 ... (xk) 1
 
This means that the equation below must have a unique solution:
 
(x1)k-1 ... (x1) 1
(xk)k-1 ... (xk) 1
 
 
ak-1
a0
 
=
 
y1
yk
 

Example: Just as with any vector space, we can set up and solve problems involving linear combinations of vectors. Suppose we want to find a function in the space {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d R} that fits a certain collection of points, such as:

{(1,3), (-1,13), (2,1), (-2,33)}

We can interpret each point as a pair (x, f(x)). We can then set up the following system of equations:

f(1)
=
3
f(-1)
=
13
f(2)
=
1
f(-2)
=
33

Expanding, we have:

a(1)3 + b(1)2 + c(1) + d
=
3
a(-1)3 + b(-1)2 + c(-1) + d
=
13
a(2)3 + b(2)2 + c(2) + d
=
1
a(-2)3 + b(-2)2 + c(-2) + d
=
33

Notice that we can rewrite this in the form of an equation M v = w:

 
(1)3 (1)2 (1) 1
(-1)3 (-1)2 (-1) 1
(2)3 (2)2 (2) 1
(-2)3 (-2)2 (-2) 1
 
 
a
b
c
d
 
=
 
3
13
1
33
 

This shows us that we can interpret individual objects in {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d R} as vectors in R4. We can also view this problem in terms of systems, system states, and observations. If the function f(x) = ax3 + bx2 + cx + d that solves the above system is a state of the system, then (3, 13, 1, 33) is a partial observation of that state along four distinct "dimensions".
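As a sanity check, the matrix equation above can be solved numerically; the sketch below (Python with NumPy, chosen only for illustration) recovers the coefficients a, b, c, d and verifies that the resulting cubic passes through all four points.

    import numpy as np

    xs = np.array([1.0, -1.0, 2.0, -2.0])
    ys = np.array([3.0, 13.0, 1.0, 33.0])

    # Each row is [x^3, x^2, x, 1] for one of the sampled x values.
    M = np.column_stack([xs**3, xs**2, xs, np.ones_like(xs)])
    a, b, c, d = np.linalg.solve(M, ys)

    f = lambda x: a*x**3 + b*x**2 + c*x + d
    print([f(x) for x in xs])   # should reproduce [3, 13, 1, 33]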

Note that finding the equation for a line that crosses two points is just a special case of the above.

Example: We want to find a function f(x) = cx + d such that the points (2,3) and (5,-4) fall on the line the equation represents. Thus, we have:

 
(2) 1
(5) 1
 
 
c
d
 
=
 
3
-4
 

We can consider many spaces of functions beyond spaces of polynomial functions. For example, suppose exp(x) = ex. Then we can define the following vector space:

span{ exp, cos , sin  }.

We can also generalize this approach to families of related functions.

Example: Consider the following set of vectors of functions, where f' denotes the derivative of a function f:

F = { (f, f, f')      |      f(x) = bx2 + cx + d }

The set F is a vector space. Now, suppose we have the following problem. Find the function for a parabola in F such that (1,4) and (2,-3) lie on the parabola, and the maximum or minimum point on the parabola is at x = 1. Such a function is represented by the solution to the following equation:

 
(1)2 (1) 1
(2)2 (2) 1
2(1) 1 0
 
 
b
c
d
 
=
 
4
-3
0
 

Note that the number of rows in the matrix (and result vector) corresponds to the number of data points for which a model is being sought, and is not bounded.

Example: Find the degree 5 polynomial that exactly fits the following points in R2:

 
-1
45
 
,
 
8
4
 
,
 
6
5
 
,
 
2
-8
 
,
 
-10
8
 
,
 
15
6
 

We must first compute the vectors that approximate each of the following curves that span our space of possible polynomials at each of the x components of the points above:
{ f  |  f(x) = a5 ⋅ x5 + a4 ⋅ x4 + a3 ⋅ x3 + a2 ⋅ x2 + a1 ⋅ x + a0 where ai ∈ R }
=
span {f,g,h,i,j,l}
where
f(x)
=
x5
g(x)
=
x4
h(x)
=
x3
i(x)
=
x2
j(x)
=
x
l(x)
=
1
Thus, we want to solve for the coefficients of the following linear combination of curves:
a5 f + a4 g + a3 h + a2 i + a1 j + a0 l
=
curve represented by data points
Thus, we would have the following approximation of the above linear combination at the specific x values in { -1, 8, 6, 2, -10, 15 }:
a5
 
f(-1)
f(8)
f(6)
f(2)
f(-10)
f(15)
 
+ a4
 
g(-1)
g(8)
g(6)
g(2)
g(-10)
g(15)
 
+ a3
 
h(-1)
h(8)
h(6)
h(2)
h(-10)
h(15)
 
+ a2
 
i(-1)
i(8)
i(6)
i(2)
i(-10)
i(15)
 
+ a1
 
j(-1)
j(8)
j(6)
j(2)
j(-10)
j(15)
 
+ a0
 
l(-1)
l(8)
l(6)
l(2)
l(-10)
l(15)
 
=
 
45
4
5
-8
8
6
 
We can convert the above into the following matrix equation:
 
f(-1) g(-1) h(-1) i(-1) j(-1) l(-1)
f(8) g(8) h(8) i(8) j(8) l(8)
f(6) g(6) h(6) i(6) j(6) l(6)
f(2) g(2) h(2) i(2) j(2) l(2)
f(-10) g(-10) h(-10) i(-10) j(-10) l(-10)
f(15) g(15) h(15) i(15) j(15) l(15)
 
 
a5
a4
a3
a2
a1
a0
 
=
 
45
4
5
-8
8
6
 
It is then sufficient to solve the above equation to obtain the coefficients, and thus the degree 5 polynomial that exactly fits the points above.

The disadvantage to using the approach presented in this subsection to fitting functions to data points is that the data must match an actual function perfectly (or, equivalently, the space of functions being considered must be rich enough to contain a function that can fit the data points exactly). We will see further below how to find functions that have the "best" fit (for one particular definition of "best") without necessarily matching the points exactly.

Example: Let polynomials in F = {f | f(t) = a t2 + b t + c } represent a space of possible radio signals. To send a vector v R3 to Bob, Alice sets her device to generate the signal corresponding to the polynomial in F whose coefficients are represented by v. Bob can then let his radio receiver sample the radio signals in his environment at multiple points in time t to retrieve the message.

Suppose Alice wants to transmit the following vector v R3 to Bob:
v
=
 
5
-2
1
 
Alice sets her radio transmitter to generate a signal whose amplitude as a function of time is:
f(t)
=
5 t2 - 2 t + 1
How can Bob recover the message Alice is sending? Bob can recover the message by having his device sample the signal at three points, e.g., t1 = 1, t2 = 2, and t3 = 3. Once Bob does this, he can set up an equation to recover the curve Alice used to generate the signal:
 
(1)2 (1) 1
(2)2 (2) 1
(3)2 (3) 1
 
v
=
 
f(1)
f(2)
f(3)
 
 
1 1 1
4 2 1
9 3 1
 
v
=
 
4
17
40
 
The above equation can then be solved to obtain Alice's message v.
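Bob's computation is simply another instance of solving an equation of the form M ⋅ v = w; the minimal sketch below (Python with NumPy, an illustrative choice) recovers Alice's message from the three samples.

    import numpy as np

    ts = np.array([1.0, 2.0, 3.0])
    samples = np.array([4.0, 17.0, 40.0])           # f(1), f(2), f(3)

    M = np.column_stack([ts**2, ts, np.ones_like(ts)])
    message = np.linalg.solve(M, samples)
    print(message)                                   # approximately [5, -2, 1]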



4.4. Assignment #4: Vector Spaces and Polynomials

In this assignment you will solve problems involving vector spaces and vector spaces of polynomials. Please submit a single file a4.* containing your solutions. The file extension may be anything you choose; please ask before submitting a file in an exotic or obscure file format (plain text or PDF are preferred).

You can use WolframAlpha (or a similar tool) to compute the rref of a matrix.

    1. Explain in detail why the following formula is true for any a ∈ R and b ∈ R if a ≠ 0 and b ≠ 0:
      span {
       
      a
      b
       
      ,
       
      a+1
      b
       
      }
      =
      span {
       
      1
      0
       
      ,
       
      0
      1
       
      }
      This can be done in multiple ways. One way is to show that the two vectors on the left-hand side are linearly independent. Suppose they are linearly dependent. Then there exists s ∈ R such that:
       
      a
      b
       
      =
      s ⋅
       
      a+1
      b
       
      a
      =
      s ⋅ (a + 1)
      b
      =
      s ⋅ b
      s
      =
      1      (since b ≠ 0)
      a
      =
      a + 1
      0
      =
      1
      Since assuming that they are linearly dependent leads to the contradiction above, they must be linearly independent. Alternatively, we can show that the following matrix is invertible:
      det 
       
      a a + 1
      b b
       
      =
      ab - b ⋅ (a+1)
      =
      b ( a - a - 1)
      =
      b ⋅ (-1)
      =
      −b
      ≠
      0
      det 
       
      a a + 1
      b b
       
      ≠
      0
      Thus, the span of its columns is all of R2.
    2. Assume addition and scalar multiplication of curves in R2 are defined in the usual way as follows:
      f + g = h
          
      where      h(x) = f(x) + g(x)
      sf = h
          
      where      h(x) = sf(x)
      Find a basis of the following vector space:
      { f  |  f(x) = a ⋅ 2^(x+2) + b ⋅ 2^x + c ⋅ x where a,b,c ∈ R }

      The space of curves defined above can be defined as the set of linear combinations of the following three curves:
      f(x)
      =
      2^(x+2)
      g(x)
      =
      2^x
      h(x)
      =
      x
      However, {f, g, h} is not a basis because f and g are linearly dependent:
      f
      =
      4 ⋅ g
      f(x)
      =
      4 ⋅ g(x)
      2^(x+2)
      =
      4 ⋅ 2^x
      2^(x+2)
      =
      2^2 ⋅ 2^x
      Since h is not a scalar multiple of g (and g is not a scalar multiple of h), the set {g, h} is linearly independent and spans the space, so {g, h} is a basis.
  1. Find an orthonormal basis of each of the following spaces. Please show your work.

    1. The line L defined by:
      L
      =
      {
       
      x
      y
       
       |  y = -3 x }

      Since L is a line, dim L = 1, so it is sufficient to find a single unit vector that spans L. Any vector on the line suffices. If x = 1, then y = -3, so we can say:
      L
      =
      span {
       
      1
      -3
       
      }
      ||
       
      1
      -3
       
      ||
      =
      √(12 + (-3)2)
      =
      √(10)
      Thus, the orthonormal basis for L is:
      L
      =
      span {
       
      1/√(10)
      -3/√(10)
       
      }
    2. The space V defined by:
      V
      =
      span {
       
      1
      2
      0
      3
       
      ,
       
      1
      0
      3
      1
       
      ,
       
      2
      0
      0
      0
       
      ,
       
      1
      -2
      0
      -3
       
      }

      First, we note that the last vector is a linear combination of two other vectors:
       
      1
      -2
      0
      -3
       
      =
       
      2
      0
      0
      0
       
      −
       
      1
      2
      0
      3
       
      None of the remaining three vectors are a linear combination of the other two, so we have found a basis:
      basis V
      =
      {
       
      1
      2
      0
      3
       
      ,
       
      1
      0
      3
      1
       
      ,
       
      2
      0
      0
      0
       
      }
      Alternatively, we could have found a basis by computing the following rref and keeping only the non-zero rows as the basis:
      rref
       
      1 2 0 3
      1 0 3 1
      2 0 0 0
      1 -2 0 -3
       

      We can now apply the Gram-Schmidt process to find an orthonormal basis:
      u1
      =
       
      2
      0
      0
      0
       
      e1
      =
       
      1
      0
      0
      0
       
      u2
      =
       
      1
      2
      0
      3
       
      − (
       
      1
      2
      0
      3
       
       
      1
      0
      0
      0
       
      ) ⋅
       
      1
      0
      0
      0
       
      =
       
      0
      2
      0
      3
       
      e2
      =
       
      0
      2/√(13)
      0
      3/√(13)
       
      u3
      =
       
      1
      0
      3
      1
       
      − (
       
      1
      0
      3
      1
       
       
      1
      0
      0
      0
       
      ) ⋅
       
      1
      0
      0
      0
       
      − (
       
      1
      0
      3
      1
       
       
      0
      2/√(13)
      0
      3/√(13)
       
      ) ⋅
       
      0
      2/√(13)
      0
      3/√(13)
       
      =
       
      0
      0
      3
      1
       
       
      0
      6/13
      0
      9/13
       
      =
       
      0
      −6/13
      3
      4/13
       
      e3
      =
      (1/√(1573/169)) ⋅
       
      0
      −6/13
      3
      4/13
       

    3. The plane in R3 orthogonal to the vector v where:
      v
      =
       
      1
      0
      2
       

      We must first find a basis for the plane P, which we can define as:
      P
      =
      { w | wv = 0 }
      =
      {
       
      x
      y
      z
       
      | 1 ⋅ x + 0 ⋅ y + 2 ⋅ z = 0 }
      To find one vector on the plane, we simply use the equation in the definition above:
      x + 2 ⋅ z
      =
      0
      x
      =
      −2 ⋅ z
      x
      =
      −2      (we choose x as the independent variable and assign a value to it)
      y
      =
      2      (since y can be anything, we choose something convenient)
      z
      =
      1      (this is determined by the equation)
      Next, we want to find another vector in the plane. Since we are looking for an orthonormal basis, we look for a vector that is orthogonal to the one we already found above. Thus, we must solve a system of two equations:
       
      x
      y
      z
       
       
      1
      0
      2
       
      =
      0
       
      x
      y
      z
       
       
      −2
      2
      1
       
      =
      0
      x + 2 ⋅ z
      =
      0
      −2 x + 2 y + z
      =
      0
      x
      =
      2      (we choose x as the independent variable and assign a value to it)
      y
      =
      5/2
      z
      =
      −1
      Thus, we now have two orthogonal vectors. It suffices to normalize both of them:
      V
      =
      span {
       
      −2
      2
      1
       
      ,
       
      2
      5/2
      −1
       
      }
      =
      span {
       
      −2/3
      2/3
      1/3
       
      ,
       
      4/(3 √(5))
      5/(3√(5))
      −2/(3√(5))
       
      }
  2. For each of the following, find the exact polynomial (i.e., the natural number representing the degree of the polynomial and its real number coefficients).

    1. The input box below takes as input a space-separated list of real numbers. Clicking evaluate outputs the result of computing f(x) for each of the numbers x in the list.
      Suppose that the following is true:
      f
      { f  |  f(x) = a x2 + b x + c where a,b,c R }
      Find the coefficients of the polynomial f being computed by the box above. Explain how you obtained your result (write out the equations).
      It is sufficient to sample the polynomial at three points and then to solve for its coefficients:
       
      (0)2 (0) 1
      (1)2 (1) 1
      (2)2 (2) 1
       
       
      a
      b
      c
       
      =
       
      f(0)
      f(1)
      f(2)
       
      =
       
      20
      19
      12
       
       
      a
      b
      c
       
      =
       
      -3
      2
      20
       
      Any valid method or tool can be used to solve the equation, but the equation must be specified. Note that the sampled points may vary; this is fine as long as the coefficients match the points that were being used to fit the curve (that is, solutions based on different sampled points should be accepted as long as the sampled points and the curve coefficients are mutually consistent).
    2. The input box below takes as input a space-separated list of real numbers. Clicking evaluate outputs the result of computing f(x) for each of the numbers x in the list. The only information available to you is that f is a polynomial; it has some degree k, but that degree is unknown to you.
      Find all the coefficients of the polynomial f being computed by the box above. Explain how you obtained your result (justify your answer by appealing to properties of polynomials and write out the equations).
      This problem can be approached by repeatedly falsifying hypotheses about the polynomial until a hypothesis that cannot be falsified is reached.

      First, suppose that the polynomial for the curve has degree k = 1 (i.e., it is of the form a x + b). Then it should be possible to fit a line (a polynomial of the form a x + b) exactly to three points on the curve. If this is not possible, then the curve's polynomial has degree greater than k = 1. By following this process, we find that an exact solution exists for 6 points and k = 5.

  3. Let polynomials in F = {f | f(t) = a t2 + b t + c } represent a space of possible radio signals. To send a vector v R3 to Bob, Alice sets her device to generate the signal corresponding to the polynomial in F whose coefficients are represented by v. Bob then has his receiver sample the radio signals in his environment at multiple points in time t to retrieve the message.
    1. Suppose Alice wants to transmit the following vector v R3 to Bob:
      v
      =
       
      -2
      1
      3
       
      Alice sets her radio transmitter to generate a signal whose amplitude as a function of time is:
      f(t)
      =
      -2 t2 + t + 3
      Suppose Bob wants to recover the message sent by Alice. At how many values of t should Bob sample the signal to recover Alice's message? Write out the computation Bob would perform to recover the message v from Alice.
      Bob would need to sample the signal at three (3) distinct points, e.g. t {0,1,2}. Bob would then solve the following equation:
       
      (0)2 (0) 1
      (1)2 (1) 1
      (2)2 (2) 1
       
       
      a
      b
      c
       
      =
       
      f(0)
      f(1)
      f(2)
       
       
      0 0 1
      1 1 1
      4 2 1
       
       
      a
      b
      c
       
      =
       
      3
      2
      −3
       
       
      a
      b
      c
       
      =
       
      −2
      1
      3
       
    2. Suppose Bob samples the signals at t {1,2,3} and obtains the vectors listed below. What vector in R3 did Alice send? Show your work.
      {
       
      1
      1
       
      ,
       
      2
      −3
       
      ,
       
      3
      −11
       
      }
      Bob would need to solve the following equation:
       
      (1)2 (1) 1
      (2)2 (2) 1
      (3)2 (3) 1
       
       
      a
      b
      c
       
      =
       
      1
      −3
      −11
       
       
      a
      b
      c
       
      =
       
      −2
      2
      1
       
      The solution can be obtained by solving a system of equations, by computing the rref of an augmented matrix, or by any other valid method.
    3. Suppose the environment contains noise from other communication devices; the possible signals in this noise are always from the span of the following polynomials:
      g(t)
      =
      2 t − 2
      h(t)
      =
      t2 + 3 t
      Alice and Bob agree that Alice will only try to communicate to Bob one scalar r R at a time. They agree on a unit vector u R3 ahead of time. Any time Alice wants to send some r R to Bob, she will have her device generate the signal corresponding to the polynomial represented by ru. If the vectors below represent the samples collected by Bob, what scalar was Alice sending to Bob?
      {
       
      1
      3
       
      ,
       
      2
      9
       
      ,
       
      3
      13
       
      }

      To allow Bob to distinguish Alice's signal from the noise, Alice must choose a signal that is linearly independent from the two noise signals. While Alice and Bob could theoretically agree on any linearly independent vector (as long as it is linearly independent, a correct solution is possible), the computations are easiest if the vector is orthogonal to the span of the two noise vectors. The two noise polynomials can be represented as vectors in R3, and the noise plane P can be defined as their span:
      P
      =
      span {
       
      0
      2
      −2
       
      ,
       
      1
      3
      0
       
      }
      We want to find a unit vector orthogonal to the plane P. It is sufficient to find a solution to the following system of equations:
       
      0
      2
      −2
       
       
      x
      y
      z
       
      =
      0
       
      1
      3
      0
       
       
      x
      y
      z
       
      =
      0
      ||
       
      x
      y
      z
       
      ||
      =
      1
      Solving the above yields the vector:
      u
      =
      (1/√(11)) ⋅
       
      −3
      1
      1
       
      Next, we must determine the coefficients of the curve Bob observed. This curve is a linear combination of the noise polynomials and Alice's polynomial (Alice's polynomial is represented by u).
       
      (1)2 (1) 1
      (2)2 (2) 1
      (3)2 (3) 1
       
       
      a
      b
      c
       
      =
       
      3
      9
      13
       
       
      a
      b
      c
       
      =
       
      −1
      9
      −5
       
      Finally, we must remove the contribution of the noise vectors from the observed vector in order to isolate Alice's signal. There are two ways to approach this (they are equivalent). One approach is to find the orthogonal projection of the observed vector onto the span of the noise vectors, and then to subtract this projection from the observed vector. Another approach is to project the observed vector directly onto span {u}; we use the latter.
      (
       
      −1
      9
      −5
       
      ⋅ (1/√(11)) ⋅
       
      −3
      1
      1
       
      ) ⋅ (1/√(11)) ⋅
       
      −3
      1
      1
       
      =
       
      −21/11
      7/11
      7/11
       
      Since u is a unit vector, the above vector's length represents the scalar by which u was multiplied, so its length is the scalar s in su.
      s
      =
      ||
       
      −21/11
      7/11
      7/11
       
      ||
      =
      √(539)/11
    4. Suppose there is no noise in the environment. Alice, Bob, and Carol want to send individual scalars to one another at the same time: all three would be sending a signal simultaneously, and all three would be listening at the same time:
      • Alice's device would generate a signal corresponding to a scalar multiple of f(x) = 2 x + 1;
      • Bob's device would generate a signal corresponding to a scalar multiple of g(x) = -x2 + 1;
      • Carol's device would generate a signal corresponding to a scalar multiple of h(x) = x2 - 3 x.
      Suppose you sample the radio signals at a given moment and obtain the vectors below. Determine which scalars Alice, Bob, and Carol are each transmitting. Your answer can be in the form of a vector, but you must specify which scalar corresponds to which sender.
      {
       
      0
      5
       
      ,
       
      1
      7
       
      ,
       
      2
      7
       
      }

      We can set up the following equation, where a, b, and c are the scalars being transmitted by Alice, Bob, and Carol, respectively:
      a
       
      f(0)
      f(1)
      f(2)
       
      + b
       
      g(0)
      g(1)
      g(2)
       
      + c
       
      h(0)
      h(1)
      h(2)
       
      =
       
      5
      7
      7
       
       
      f(0) g(0) h(0)
      f(1) g(1) h(1)
      f(2) g(2) h(2)
       
       
      a
      b
      c
       
      =
       
      5
      7
      7
       
       
      1 1 0
      3 0 −2
      5 −3 −2
       
       
      a
      b
      c
       
      =
       
      5
      7
      7
       
      Using any valid method for solving the equation above, we get:
       
      a
      b
      c
       
      =
       
      3
      2
      1
       
      An alternative approach is to first determine the polynomial being observed by setting up an equation, finding the exact curve that fits the three points provided, and then setting up a second matrix equation to determine what linear combination of the three signal polynomials adds up to the curve that fits the three points.


4.5. Basis, dimension, and orthonormal basis of a vector space

We have adopted span{v1,...,vn} as our notation for vector spaces. However, this makes it possible to represent a given vector space in a variety of ways:

span{(0,1),(1,0)} = span{(0,1),(1,0),(1,0)} = span{(0,1),(1,0),(1,0),(1,0)}, and so on.

We might naturally ask: given a vector space V, what is the smallest possible size for a finite set of vectors W such that V = span W?

Definition: A finite set of vectors B is a basis of the vector space V if span B = V and for all finite sets of vectors W such that span W = V, |B| ≤ |W|.

Fact: A finite set of vectors v1,...,vn is a basis for V iff span {v1,...,vn} = V and the vectors v1,...,vn are setwise linearly independent.

Fact: Given a finite set of vectors v1,...,vn, we can find the basis for span {v1,...,vn} by creating a matrix M in which the rows are the vectors v1,...,vn and computing rref M. The set of distinct nonzero rows in rref M is a basis for span {v1,...,vn}.

Example: We can find the basis of V = span {[-3;0;7;0], [-9;0;21;0], [0;0;2;8], [0;0;1;4], [3;0;13;4], [0;0;1;0]} by computing the reduced row echelon form of the following matrix. We can use a third-party tool to do so (e.g., WolframAlpha).
M
=
 
-3 0 7 0
-9 0 21 0
0 0 2 8
0 0 1 4
3 0 13 4
0 0 1 0
 
rref M
=
 
1 0 0 0
0 0 1 0
0 0 0 1
0 0 0 0
0 0 0 0
0 0 0 0
 
The distinct nonzero rows of the matrix rref M correspond to the vectors in a basis for V. Thus, we have that:
V
=
span {
 
-3
0
7
0
 
,
 
-9
0
21
0
 
,
 
0
0
2
8
 
,
 
0
0
1
4
 
,
 
3
0
13
4
 
,
 
0
0
1
0
 
}
=
span {
 
1
0
0
0
 
,
 
0
0
1
0
 
,
 
0
0
0
1
 
}
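The rref computation above can also be reproduced with a symbolic tool; the sketch below uses Python with SymPy (one possible choice of third-party tool, as with WolframAlpha above) to compute rref M and read off the distinct nonzero rows as a basis.

    import sympy

    M = sympy.Matrix([
        [-3, 0,  7, 0],
        [-9, 0, 21, 0],
        [ 0, 0,  2, 8],
        [ 0, 0,  1, 4],
        [ 3, 0, 13, 4],
        [ 0, 0,  1, 0],
    ])

    R, pivots = M.rref()
    basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
    print(basis)   # the rows [1,0,0,0], [0,0,1,0], [0,0,0,1]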

Definition: The dimension of a vector space V, which we write as dim V, is the size of any basis of V (recall that every basis is the same size as every other basis, since they all must have the smallest size of any set of vectors that spans V).

Example: What is the dimension of {f | f(x) = ax3 + bx2 + cx + d, a,b,c,d R}?

The dimension of a space is the size of its basis. A basis must consist of vectors that are linearly independent from one another, and it must span the space. We know that the following set B spans the space:
B
=
{ f, g, h, l } where
f(x)
=
x3
g(x)
=
x2
h(x)
=
x
l(x)
=
1
We also know that the curves f, g, h, and l are linearly independent. Thus, B is a basis. Since |B| = 4 and span B = V, then we have that:
dim V
=
|B|
=
4

Example: Consider the vector space V consisting of finite sets of real numbers. What are the basis and dimension of this vector space?

The basis of the space is B = { {1} } since you can obtain any finite set of real numbers by taking linear combinations of the set {1} V. Since span { {1} } = V and |{ {1} }| = 1, dim V = 1.

Definition: A finite set of vectors B is an orthonormal basis of the vector space V if B is a basis of V, all the vectors in B are unit vectors, and the vectors in B are setwise orthogonal.

Recall that for any vector v ∈ Rn and any unit vector u ∈ Rn, (v ⋅ u) ⋅ u is the projection of v onto the line parallel to u (i.e., the vector space {a ⋅ u | a ∈ R}). We can use this fact to define a process for turning an arbitrary basis into an orthonormal basis.

Fact: Given a basis B = {v1,...,vn}, it is possible to turn it into an orthonormal basis {e1,...,en} by using the Gram-Schmidt process. This algorithm can be summarized as follows.

u1
=
v1
    
e1 = u1 / ||u1 ||
u2
=
v2 − ((v2e1) ⋅ e1)
    
e2 = u2 / || u2 ||
u3
=
v3 − ((v3e1) ⋅ e1) − ((v3e2) ⋅ e2)
    
e3 = u3 / || u3 ||
u4
=
v4 − ((v4e1) ⋅ e1) − ((v4e2) ⋅ e2) − ((v4e3) ⋅ e3)
    
e4 = u4 / || u4 ||
    
un
=
vn − Σi = 1n-1 ((vnei) ⋅ ei)
    
en = un / || un ||

Intuitively, for each vector vi ∈ B, the algorithm subtracts out the contributions of the already-computed orthogonal unit vectors from vi, obtaining ui, and then rescales ui to make it into a unit vector ei.

At the end of the above process, {e1,...,en} is the orthonormal basis (and {u1,...,un} is a basis of orthogonal vectors that are not necessarily unit vectors).

Many computational environments for mathematics that support linear algebra provide an operator for turning a basis into an orthonormal basis.
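As a concrete illustration (and not a prescription of any particular tool), the following sketch in Python with NumPy implements the process summarized above; applied to the basis from the assignment problem in the previous subsection, it reproduces the orthonormal basis computed there.

    import numpy as np

    def gram_schmidt(vectors):
        es = []
        for v in vectors:
            u = v.astype(float)
            for e in es:
                u = u - (v @ e) * e      # subtract the contribution along each e_i
            es.append(u / np.linalg.norm(u))
        return es

    basis = [np.array([2, 0, 0, 0]), np.array([1, 2, 0, 3]), np.array([1, 0, 3, 1])]
    for e in gram_schmidt(basis):
        print(e)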

Why is an orthonormal basis useful? Once again, we appeal to the fact that (v ⋅ u) ⋅ u is the projection of v onto u, where v ⋅ u is the length of that projection. Suppose we want to determine how to express an arbitrary vector v using an orthonormal basis u1,...,un.

Fact: Given a vector v ∈ Rn and an orthonormal basis B = {u1,...,un} for Rn (where MB is the matrix whose columns are the vectors u1,...,un), we can determine what linear combination of the vectors u1,...,un yields v by computing:

a = (MB)⊤ ⋅ v.

Notice that the components of the vector a, which we will call a1,...,an, are the scalars needed to compute v as a linear combination of the vectors u1,...,un:

v = a1 ⋅ u1 + ... + an ⋅ un = MB ⋅ a.

Fact: Given a vector v ∈ Rn, some k < n, and an orthonormal basis B = {u1,...,uk} for a subspace W ⊂ Rn, the product MB ⋅ (MB)⊤ ⋅ v is the projection of v onto W.

Example: Notice the relationship of the above fact to projection onto the span of a single unit vector u = (x,y) in R2. Suppose we have a basis B = {u} for the subspace {s ⋅ u | s ∈ R} ⊂ R2. Then we have:

MB
=
 
x
y
 
Thus, for an arbitrary v ∈ R2 not necessarily in {s ⋅ (x,y) | s ∈ R} we have:

MB ⋅ (MB)⊤ ⋅ v
=
 
x
y
 
 
x y
 
⋅ v
=
 
x
y
 
⋅ (
 
x y
 
⋅ v )
=
(
 
x y
 
⋅ v ) ⋅
 
x
y
 
=
( v ⋅
 
x
y
 
) ⋅
 
x
y
 
=
( v ⋅ u ) ⋅ u

Thus, we see that the projection of a vector onto another unit vector in R2 is just a special case of the general approach to projection.

Example: Suppose we want to project the vector v R2 onto the space WR2 where:
v
=
 
2
4
 
W
=
span {
 
1/√(2)
1/√(2)
 
}
B
=
{
 
1/√(2)
1/√(2)
 
}
We compute the matrix MB and its transpose. MB is a 2 × 1 matrix in this case, and (MB)⊤ is a 1 × 2 matrix:
MB
=
 
1/√(2)
1/√(2)
 
(MB)⊤
=
 
1/√(2) 1/√(2)
 
We can now apply the formula. Notice that it is equivalent to the formula for projection of v onto the single vector that spans W:
MB ⋅ (MB)⊤ ⋅ v
=
 
1/√(2)
1/√(2)
 
 
1/√(2) 1/√(2)
 
 
2
4
 
=
 
1/√(2)
1/√(2)
 
⋅ (
 
1/√(2)
1/√(2)
 
 
2
4
 
)
=
 
1/√(2)
1/√(2)
 
⋅ (6/√(2))
=
 
3
3
 
Thus, the projection of v onto W is [3; 3].

Example: Suppose we want to project the vector v R3 onto the space WR3 where:
v
=
 
2
4
5
 
W
=
span {
 
1/√(2)
1/√(2)
0
 
,
 
0
0
1
 
}
B
=
{
 
1/√(2)
1/√(2)
0
 
,
 
0
0
1
 
}
We compute the matrix MB and its transpose. MB is a 3 × 2 matrix in this case, and (MB)⊤ is a 2 × 3 matrix:
MB
=
 
1/√(2) 0
1/√(2) 0
0 1
 
(MB)⊤
=
 
1/√(2) 1/√(2) 0
0 0 1
 
We can now apply the formula:
MB ⋅ (MB)⊤ ⋅ v
=
 
1/√(2) 0
1/√(2) 0
0 1
 
 
1/√(2) 1/√(2) 0
0 0 1
 
 
2
4
5
 
=
 
1/√(2) 0
1/√(2) 0
0 1
 
 
6/√(2)
5
 
=
 
3
3
5
 
Thus, the projection of v onto W is [3; 3; 5].
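The same computation can be carried out directly with the matrix formula MB ⋅ (MB)⊤ ⋅ v; the minimal sketch below (Python with NumPy, for illustration only) reproduces the projection [3; 3; 5].

    import numpy as np

    MB = np.array([[1/np.sqrt(2), 0.0],
                   [1/np.sqrt(2), 0.0],
                   [0.0,          1.0]])   # columns form an orthonormal basis of W
    v = np.array([2.0, 4.0, 5.0])

    projection = MB @ (MB.T @ v)
    print(projection)                       # approximately [3, 3, 5]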

Example: Suppose we want to project the vector v R3 onto the space WR3 where:
v
=
 
3
2
4
 
W
=
span {
 
1/√(2)
0
1/√(2)
 
,
 
0
1
0
 
}
What is the orthogonal projection of v onto W?

Since the two vectors that span W are already an orthonormal basis, we can simply project v onto the two individual vectors in the orthonormal basis and take the sum to obtain the orthogonal projection of v onto W:
(
 
3
2
4
 
 
1/√(2)
0
1/√(2)
 
) ⋅
 
1/√(2)
0
1/√(2)
 
+ (
 
3
2
4
 
 
0
1
0
 
) ⋅
 
0
1
0
 
=
(7/√(2)) ⋅
 
1/√(2)
0
1/√(2)
 
+ 2 ⋅
 
0
1
0
 
=
 
7/2
2
7/2
 

Example: Suppose we want to project the vector v R3 onto the space WR3 where:
v
=
 
-1
2
1
 
W
=
span {
 
3
4
0
 
,
 
0
0
2
 
}
What is the orthogonal projection of v onto W?

Since the two vectors that span W are already orthogonal, it is sufficient to turn them into unit vectors in order to obtain an orthonormal basis of W. We can then project v onto the two individual vectors in the orthonormal basis and take the sum to obtain the orthogonal projection of v onto W:
span {
 
3
4
0
 
,
 
0
0
2
 
}
=
span {
 
3/5
4/5
0
 
,
 
0
0
1
 
} (
 
-1
2
1
 
 
3/5
4/5
0
 
) ⋅
 
3/5
4/5
0
 
+ (
 
-1
2
1
 
 
0
0
1
 
) ⋅
 
0
0
1
 
=
1 ⋅
 
3/5
4/5
0
 
+ 1 ⋅
 
0
0
1
 
=
 
3/5
4/5
1
 

4.6. Homogeneous, nonhomogeneous, overdetermined, and underdetermined systems

While we have usually considered n × n matrices, in real-world applications (especially those involving data), system states usually have a shorter description than the corpus of data (e.g., observational data) being used to determine the system state. Thus, problems in such applications usually involve matrices in which the number of rows is very different from the number of columns. Furthermore, the data may be noisy, which means that there may be no exact system state that matches the data. Informally, this might mean that the system of equations being used to determine a system state is overdetermined. On the other hand, if the amount of data is not sufficient to determine a single system state, a system is underdetermined.

Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn, the system M x = v is overdetermined if there exist no solutions x to the equation M x = v.

Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn, the system M x = v is underdetermined if there exist two or more solutions for x that satisfy the equation M x = v.

Definition: For any matrix M ∈ Rn×m, the system M x = 0 is homogeneous.

Definition: For any matrix M ∈ Rn×m and vector v ∈ Rn where v ≠ 0, the system M x = v is nonhomogeneous.

Fact: A homogeneous system M x = 0 has at least one solution (namely, x = 0).

Fact: A homogeneous system M x = 0 is never overdetermined.

Fact: The solution space of a homogeneous system is a vector space.

Fact: The solution space of a nonhomogeneous system (if it is not empty) is an affine space.

Motivating Example: Suppose we have a nonhomogeneous system M x = v where v ∈ R1000000.

We may not know ahead of time whether or not it is overdetermined. Suppose we do not want to incur the cost of attempting to solve M x = v exactly. This might be because we only want to make one pass of the data to reduce costs, but it might also be because this is a data stream that is too voluminous to store, so the data will be gone unless we solve the system right away (e.g., data from a telescope, a particle collider, or Twitter).

In such cases, we can typically assume the data will have noise, so the system is overwhelmingly likely to be overdetermined and have no solution. Thus, we are okay with finding an approximate solution.

We do know, however, that M x = 0 has at least one solution, and that the solution space is a vector space (so, for example, we could try to "search" for a better solution using some other technique involving addition and scalar multiplication once we find at least one non-trivial solution). Thus, we might instead choose to solve the system M x = 0, knowing that, even in the worst case, a very poor approximate solution of 0 will always be found.

Is there a better compromise between these two strategies of keeping v or changing it to 0? Can we find some other space of solutions that won't be overdetermined, but will still allow us to find a non-trivial (nonzero) approximate solution? One possibility is to replace the vector v with its projection onto the span of the columns of M, ensuring that the system is no longer overdetermined. This approach is covered in the section below.

4.7. Application: approximating overdetermined systems

Fact: Suppose we have an overdetermined system where M ∈ Rm × n, v ∈ Rn, and w ∈ Rm:
Mv
=
w
Let us rewrite M in terms of its columns ci ∈ Rm:
M
=
 
c1 ... cn
 
v
=
 
x1
xn
 
Let B = {c1,...,cn} be the set of columns of M, so that span B ⊂ Rm. To be more general, we will use W = Rm and W' = span B:
B
=
{c1,...,cn}
W
=
Rm
W'
=
span B
span B
⊂
Rm
W'
⊂
W
Now, it is possible to restate the fact that Mv = w has no solution in a different way:
Mv
=
w
 
c1 ... cn
 
 
x1
xn
 
=
w
x1c1 + ... + xncn
=
w
x1c1 + ... + xncn
∈
span B
Mv
∈
span B
w
∉
span B
The above tells us that the real problem is that w ∉ span B. It also tells us that for any w' in the span of the columns of M (which we call span B), Mv = w' will have a solution:
M
=
 
c1 ... cn
 
w'
∈
span {c1, ..., cn}
Mv
=
w' has at least one solution

Definition: Suppose we have an overdetermined system where v ∈ Rn, and w ∈ Rm:
 
c1 ... cn
 
v
=
w
We can find an approximate solution v' by picking some w' ∈ span {c1, ..., cn} and solving the new equation:
 
c1 ... cn
 
v'
=
w'
The error of the approximate solution v' is defined to be ||w' − w||.

We see that we may have many choices for w' in changing an overdetermined system Mv = w to a system with at least one solution Mv' = w'. Above, we have introduced a notion of error so that we can compare the quality of different choices of w'. How can we pick the w' that minimizes the error ||w - w'||? Let us first define what the vector leading to the minimal error is.

Definition: Given w ∈ W and a subspace W' ⊂ W, the vector within the subspace W' that is closest to w is a vector w* ∈ W' such that ||w - w*|| is minimal. In other words, for all w' ∈ W',

||w - w*|| ≤ ||w - w'||.

How do we find w*?

Fact: We first review the triangle inequality in the case of a right triangle. Suppose we have a right triangle with a height of a, a base of length b, and a hypotenuse of length c. Then we have:
a2 + b2
=
c2
a2
≤
c2
a
≤
c
This implies that the orthogonal projection of a vector v ∈ V onto a subspace W ⊂ V is the closest point in W to v: for any other point in W, the segment from v to that point is the hypotenuse of a right triangle whose height is the segment from v to its projection.

Fact: Given w W and a subspace W' ⊂ W, the orthogonal projection w* of w onto W' is the closest vector in W' to w.

Fact: Suppose we are given an overdetermined system:
Mv
=
w
We assume the matrix can be written in terms of its columns:
M
=
 
c1 ... cn
 
The least-squares approximate solution to Mv = w is the solution v* to the equation:
Mv*
=
w*
where w* is the orthogonal projection of w onto span {c1, ..., cn}. The solution v* is the one that leads to the smallest possible error for any possible vector v that we can multiply by M:

||w - M ⋅ v*|| ≤ ||w - M ⋅ v||

Why is it called a "least-squares" approximate solution? Minimizing the error ||w − M ⋅ v'|| is the same as minimizing its square ||w − M ⋅ v'||2, which is a sum of squared per-component errors; the approximate solution thus makes this sum of squares as small as possible. In the table below, we summarize the correspondences used in the above facts.

concept related to
solving M ⋅ v = w
notation relationship notation geometric concept
the space of values W' with
which we can replace w in
the overdetermined system M v = w
to make a system M v = w'
that has solutions
{Mv | v Rn} the span of the
columns of M
span B the subspace W' of W
spanned by B
the error of an approximate
solution v'
||w - M v' || M v' = w' ||w - w'|| the distance between
w W and w' span B
Mv* where v* is
the minimum error solution
for all v',
||w - M ⋅ v*|| ≤ ||w - M ⋅ v'||
M v* = w* for all w' span B,
||w - w*|| ≤ ||w - w'||
the orthogonal projection
w* of w W onto span B
(the closest vector in span B
to w)

Notice that we know a solution to M v* = w* exists, and that we can show this in two different ways. Since w* ∈ span B, it must be that w* is a linear combination of the vectors in B, so it is a linear combination of the columns of M. Alternatively, if M is a square matrix and we know that B is a basis, then the columns of M are linearly independent, which means M is invertible, so

v* = M -1 w*.

Fact: We can compute the least-squares approximate solution v* to any equation Mv = w by using the following process:
  1. break M up into a set of its column vectors
  2. find an orthonormal basis B of the span of the column vectors of M to make MB
  3. use MB to compute the projection w* of w onto span B
  4. solve the system M v* = w* (e.g., by finding the rref of an augmented matrix)

Example: Find the least-squares approximate solution v* to the following equation:
 
3 0
4 0
0 2
 
v
=
 
-1
2
1
 

We have an overdetermined system (i.e., an equation with no solution) of the form Mv = w. We must turn it into an equation Mv = w* that does have a solution by replacing w with w*, the orthogonal projection of w that is in the span of the columns of M.

Let W be the span of the columns of the matrix in the equation above:
W
=
span {
 
3
4
0
 
,
 
0
0
2
 
}
We first need to find an orthonormal basis of W. Since the two vectors above are already orthogonal, it is sufficient to normalize them. More generally, we would need to apply the Gram-Schmidt process (this is technically what we are doing here, it is just that the terms we need to subtract are always 0 in this case). Thus, we have an orthonormal basis:
W
=
span {
 
3/5
4/5
0
 
,
 
0
0
1
 
}
We now compute the orthogonal projection of w to find w* W:
w*
=
(
 
-1
2
1
 
 
3/5
4/5
0
 
) ⋅
 
3/5
4/5
0
 
+ (
 
-1
2
1
 
 
0
0
1
 
) ⋅
 
0
0
1
 
=
1 ⋅
 
3/5
4/5
0
 
+ 1 ⋅
 
0
0
1
 
=
 
3/5
4/5
1
 
Thus, we now have a solvable equation Mv = w*:
 
3 0
4 0
0 2
 
v
=
 
3/5
4/5
1
 
v
=
 
1/5
1/2
 
The error of the approximate solution is ||w − w*||:
error of the approximate solution
 
1/5
1/2
 
=
||w − w*||
=
||
 
-1
2
1
 
 
3/5
4/5
1
 
||
=
||
 
-8/5
6/5
0
 
||
=
2
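The result above can also be checked numerically. The following is a minimal sketch in Python using NumPy (the variable names and the use of np.linalg.lstsq are for illustration only):

import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 0.0],
              [0.0, 2.0]])
w = np.array([-1.0, 2.0, 1.0])

# np.linalg.lstsq returns the least-squares approximate solution v*.
v_star, *_ = np.linalg.lstsq(M, w, rcond=None)
print(v_star)                      # approximately [0.2, 0.5], i.e., [1/5; 1/2]

w_star = M @ v_star                # the orthogonal projection of w onto the span of the columns of M
print(np.linalg.norm(w - w_star))  # approximately 2.0, the error ||w - w*||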

Example: Suppose a matrix M is such that Mv = 0 has exactly one solution. What can we say about the error of the least-squares approximation of any system Mv = w?

Since M is invertible (because there is exactly one solution to Mv = 0), then there is always an exact solution v to Mv = w:
Mv
=
w
v
=
M -1w
Thus, the columns of M are linearly independent and span the space of all possible vectors that can appear on the right-hand side of the equation. This means that w* = w, so the error is ||w − w*|| = ||w − w|| = 0.

Example: Suppose a matrix M is such that Mv = 0 has Rn as its space of solutions. What can we say about any system Mv = w? What can we say about the error of any least-squares approximation of a solution for Mv = w?

This means that M is the matrix consisting of all zeroes. Thus, the span of its columns is { 0 }, the set consisting of only the origin. Thus, the error of any approximate solution will be ||w - 0|| = ||w||.

Example: We saw that if M is the matrix consisting of all zero entries, w* must be the zero vector (i.e., the origin). Are there other matrices M for which w* will be 0?

Yes, if all the columns of M are orthogonal to w, then the orthogonal projection of w onto the span of the columns of M will be the origin, 0. For example, consider the following overdetermined system:
 
1 2
2 4
 
v
=
 
−2
1
 
Since [−2; 1] ⋅ [1; 2] = 0 and [−2; 1] ⋅ [2; 4] = 0, the orthogonal projection of [−2; 1] onto the span of the two columns will be [0; 0].

Relationship to curve fitting and linear regressions in statistics

The method described above does overlap with standard curve fitting and linear regression techniques you see in statistics. There are many variants to these approaches, and the one considered above corresponds to a fairly basic variant that has no specialized characteristics (i.e., it makes no special assumptions about the data points, the relative importance of the data points, their distribution, or the way they can be interpreted).

4.8. Application: approximating a model of system state dimension relationships

The approach described in the previous section can be used to approximate a solution to the equation M v = w. Usually, if M describes a model of a system (in this sense, system refers to a system of states that are described by values along some number of dimensions), solving M v = w corresponds to determining a system state v given some other information about a system state w.

What if we instead have a large collection of pairs of observations of system states of the form (v,w) (or, more generally, any collection of observations of a system along multiple dimensions)? Can we approximate a model of a set of relationships between some two subsets of the dimensions in the system? In other words, if we take each observation (x1, ..., xn) and divide it into a pair of partial descriptions v = (x1,...,xk) and w = (xk+1,...,xn), can we find the best M that approximates the relationship between the incomplete system state descriptions v and w?

Example: Suppose we have the following dimensions in our system: the number of red dwarf stars, the number of G type stars, the number of pulsars, the units of IR radiation, the units of UV radiation, and the units of γ radiation.

Suppose that we already have some set of confirmed (but possibly noisy) observations of systems along all the dimensions (the dimensions of the quantities in each point correspond to the list of dimensions above):
 
4
5
1
243
3341
700
 
,
 
3
6
21
125
1431
1465
 
,
 
4
2
13
533
3432
334
 
,
 
16
4
4
334
143
762
 
,
 
13
8
13
235
1534
513
 
,
 
34
16
17
333
3532
450
 

For the above observations, we want to find a matrix that relates the first three dimensions (the number of each type of star) to the last three dimensions (the amount of each kind of radiation). In other words, we are looking for a matrix M R3× 3 of the form:

 
a units IR/red dwarf b units IR/G type c units IR/pulsar
d units UV/red dwarf e units UV/G type f units UV/pulsar
g units γ/red dwarf h units γ/G type i units γ/pulsar
 

Multiplication by the above matrix represents a system state description transformation with the following units:

(# red dwarf stars, # G type stars, # pulsars) → (units IR, units UV, units γ)

We can split the data into two collections of points: those that specify the number of each type of star (the inputs to the above transformation), and those that specify the amount of each kind of radiation (the output of the above transformation). Below, we call these P and Q. We then turn these two sets of data points into matrices. In order to turn this problem into the familiar form M v = w, we can transpose the two matrices.
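The same fit can also be computed directly with a standard least-squares routine. The following is a minimal sketch in Python using NumPy (the names P, Q, and M follow the text above; the use of np.linalg.lstsq is illustrative):

import numpy as np

observations = np.array([
    [ 4,  5,  1, 243, 3341,  700],
    [ 3,  6, 21, 125, 1431, 1465],
    [ 4,  2, 13, 533, 3432,  334],
    [16,  4,  4, 334,  143,  762],
    [13,  8, 13, 235, 1534,  513],
    [34, 16, 17, 333, 3532,  450],
], dtype=float)

P = observations[:, :3]   # star counts (inputs), one observation per row
Q = observations[:, 3:]   # radiation amounts (outputs), one observation per row

# We want M such that M @ p is close to q for every observation pair (p, q),
# i.e., P @ M.T is close to Q in the least-squares sense.
Mt, *_ = np.linalg.lstsq(P, Q, rcond=None)
M = Mt.T
print(M)                  # the 3-by-3 least-squares estimate of the model matrix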

Example: Suppose we have four dimensions in our system. Also, suppose that we already have some set of confirmed (but possibly noisy) observations of systems along all the dimensions (the dimensions of the quantities in each point correspond to the list of dimensions above):
 
1
2
1
0
 
,
 
0
3
0
1
 
,
 
2
4
2
0
 
,
 
0
2
0
1
 
,

For the above observations, we want to find a matrix that relates the first two dimensions to the last two dimensions. We would then set up the following equations:
 
a b
c d
 
 
1
0
 
=
 
1
2
 
 
a b
c d
 
 
0
1
 
=
 
0
3
 
 
a b
c d
 
 
2
0
 
=
 
2
4
 
 
a b
c d
 
 
0
1
 
=
 
0
2
 
Unfortunately, there is no single solution a,b,c,d R to the above collection of equations. To find an approximate solution, we can convert the above into a single matrix equation:
 
1 0
0 1
2 0
0 1
 
 
a c
b d
 
=
 
1 2
0 3
2 4
0 2
 
We can go further and turn this into an equation of the form Mv = w:
 
1 0 0 0
0 1 0 0
2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
0 0 2 0
0 0 0 1
 
 
a
b
c
d
 
=
 
1
0
2
0
2
3
4
2
 
In either case, we can now approach the problem by finding the closest vector or matrix that is in the span of the matrix on the left-hand side of the equation. We do this by finding the orthogonal projection of the right-hand side onto the span of the matrix.
 
[1; 0; 2; 0]
∈
span { [1; 0; 2; 0], [0; 1; 0; 1] }
[2; 3; 4; 2]
∉
span { [1; 0; 2; 0], [0; 1; 0; 1] }
The above tells us that it is sufficient to project the vector [2; 3; 4; 2] onto the span. The projection is:
(
 
2
3
4
2
 
 
1
0
2
0
 
⋅ (1/√(5))) ⋅
 
1
0
2
0
 
⋅ (1/√(5)) + (
 
2
3
4
2
 
 
0
1
0
1
 
⋅ (1/√(2))) ⋅
 
0
1
0
1
 
⋅ (1/√(2))
The above yields:
2 ⋅
 
1
0
2
0
 
+ 5/2 ⋅
 
0
1
0
1
 
=
 
2
5/2
4
5/2
 
We can now set up an equation that has a solution:
 
1 0
0 1
2 0
0 1
 
 
a c
b d
 
=
 
1 2
0 5/2
2 4
0 5/2
 
 
a b
c d
 
 
1 0
2 5/2
 

4.9. Application: distributed computation of least-squares approximations

Fact: We assume there are n distinct computing devices {C1, ..., Cn} (e.g., servers, virtual machines, motes, mobile devices, etc.), that any device can communicate with a small number (e.g., two) of other devices at any given moment in time, and that all devices can compute or communicate simultaneously (i.e., independently of one another). Then the following are true:

Fact: The process of finding an approximate solution v* to an overdetermined system Mv = w can be broken down into the following basic operations:

Fact: If two vectors [a1; ...; an] and [b1; ...; bn] are stored in a distributed manner across multiple devices {C1, ..., Cn} such that device Ci stores only ai and bi, then the devices can collectively compute the sum of these two vectors in one time step: each device Ci simply computes ai + bi and stores the result internally.

4.10. Orthogonal complements and algebra of vector spaces

Definition: The orthogonal complement W⊥ of a vector subspace W ⊆ V is the set of all vectors in V that are orthogonal to all the vectors in W:

W⊥ = {v | v ∈ V, ∀ w ∈ W, v ⋅ w = 0}

Notice that the orthogonal complement of a subspace is always taken within the context of some specified space V. We consider vector spaces, their orthogonal complements, and common set operations: set union, set intersection, and set product.

Example: Given any two vector spaces V and W, is V ∪ W a vector space? No. For example, consider

V = span{[1;0]} and W = span{[0;1]}.

We have that [1;0] ∈ V ∪ W and [0;1] ∈ V ∪ W, but [1;0] + [0;1] = [1;1] ∉ V ∪ W.

Example: Given any two vector spaces V and W, is V ∩ W a vector space? Yes.

Example: Given any two vector spaces V and W, is V × W a vector space? Yes.

Exercise: For a vector subspace W ⊆ V, compute W ∩ W⊥.

Fact: For a vector subspace W ⊆ V,

dim W + dim W⊥ = dim V.

Fact: For a vector subspace W ⊆ V,

W × W⊥ = V (in the sense that every v ∈ V can be written in exactly one way as w + u where w ∈ W and u ∈ W⊥).
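These facts can be checked numerically for a particular subspace. The following is a minimal sketch in Python using NumPy (the matrix A and the use of the SVD here are illustrative):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])              # the columns of A span a subspace W of V = R3

# The orthogonal complement of the column space of A is the set of solutions of A⊤ ⋅ v = 0;
# an orthonormal basis for it can be read off from the SVD of A⊤.
U, s, Vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-10))
W_perp_basis = Vt[rank:]                # rows form an orthonormal basis of the complement

dim_W = np.linalg.matrix_rank(A)
dim_W_perp = W_perp_basis.shape[0]
print(dim_W, dim_W_perp, dim_W + dim_W_perp)   # 2, 1, 3 = dim R3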


5. Linear Transformations

5.1. Set products, relations, and maps

We consider the following sets of vectors and their properties.

construct definition example graphical example
V × W { (v,w) | v V, w W } {1,2,3} × {4,5,6}
=
{(1,4),(1,5),(1,6),
(2,4),(2,5),(2,6),
(3,4),(3,5),(3,6)}
R is a relation between V and W R ⊆ V × W {(1,D), (2,B), (2,C)}
is a relation between
{1,2,3,4} and {A,B,C,D}
f is a function from V to W
f is a map from V to W
f is a relation between V and W and
∀ v ∈ V, there is at most one
w ∈ W s.t. f relates v to w
{ (x,f(x)) | f(x) = x2 }
R -1 is the inverse of R { (w,v) | (v,w) ∈ R }
f: X → Y is injective ∀ x,x' ∈ X, x ≠ x' implies f(x) ≠ f(x')
f: X → Y is surjective ∀ y ∈ Y, there is some x ∈ X s.t. f(x) = y
f: X → Y is bijective f is both injective and surjective

Notice that we may have f such that f is a function, but f -1 is not a function.

Fact: If we say that a map f is a relation between V and W, we can denote this as f : V → W or f ∈ V → W. Then, V is the domain of f and W is the codomain.

Notice that the definition of f does not uniquely determine its codomain. For example, f(x) = x2 could have a codomain of R if f : R → R, or it could have a codomain of R+ if f : R → R+.

Example: Describe with as much detail as you can the domains, codomains, and images of the following maps (there is more than one right answer for these, as the definitions do not specify the domains of the functions). Determine whether each map is injective, surjective, or both given the domains and codomains you chose.

f(x)
=
3x
    
f(x)
=
x2
    
f(x)
=
-x
    
f(x)
=
x ⋅ √(-1)
    
f(x)
=
 
x 0
0 x
 
    
f( M, N )
=
MN
    
where M and N are matrices
f(
 
x
y
 
)
=
 
1 2
3 4
 
 
x
y
 

Fact: Given some f:VW, the following process can be used to show a map f is injective:
  1. assume v,v' V and vv';
  2. it is necessary to show that f(v) ≠ f(v'):
    1. expand f(v) and f(v') using the definition of f;
    2. use algebraic properties or other facts to derive f(v) ≠ f(v').
Alternatively, we can use the following approach:
  1. assume v,v' V;
  2. assume f(v) = f(v'):
    1. expand f(v) and f(v') using the definition of f;
    2. use algebraic properties or other facts to derive v = v'.

Fact: Given some f: VW, the following process can be used to determine whether a map f is surjective:
  1. let w W;
  2. solve f(v) = w:
    • if you can solve the equation (i.e., derive an equation v = g(w)) such that v ∈ V, then f : V → W is surjective;
    • if the equation has no solutions in V, then f : V → W is not surjective.

Notice that the above does not mean that there is no larger domain V' with V ⊆ V' over which the equation f(v) = w does have a solution. For example, f(x) = 3 ⋅ x is surjective if f : R → R but not surjective if f : Z → Z.

Example: Show that f: RR where f(x) = x + 1 is injective.

We can show this as follows by first adding 1 to both sides, then applying the definition of f:
x
x'
x+1
x'+1
f(x)
f(x')
Alternatively, we could use the other method by first applying the definition of f and then subtracting 1 from both sides.
f(x)
=
f(x')
x+1
=
x'+1
x
=
x'

Example: Show that f: RR where f(x) = x + 1 is surjective.

Suppose any value in the codomain r R is chosen. Then we can solve the following equation for a corresponding value x in the domain:
f(x)
=
r
x+1
=
r
x
=
r - 1

Definition: A map f: VW is a bijection or is bijective if f is an injection from V to W and f is a surjection from V to W.

Example: Show that f: RR where f(x) = x + 1 is bijective.

Definition: Given a map f: VW, we call { f(v) | v V } the image of f:
im(f)
=
{ f(v) | v V }

5.2. Homomorphisms and isomorphisms

The previous section defined a number of properties of functions f: AB that only address the equality of elements in A and B. However, what if A and B have relations and operators? For example, what if A and B are vector spaces with an identity, an addition operation, and a scalar multiplication operation?

We can restrict our notion of a map or function to maps or functions f: A → B that somehow preserve or consistently transform the properties of operators and relations we may have already defined on A and B. For example, suppose we have A = {a1, a2, a3}, B = {b1, b2, b3}, an operator ⊕ over elements of A such that a1 ⊕ a2 = a3, and an operator ⊕ over elements of B such that b3 ⊕ b2 = b1. A map f from A to B that preserves (or respects, or consistently transforms) ⊕ would be such that

f(a3) = b1
f(a2) = b2
f(a1) = b3

Notice that the above map is such that the operation ⊕ is respected by the map f:

f(a3)      =      f(a1 ⊕ a2)      =      f(a1) ⊕ f(a2)      =      b3 ⊕ b2      =      b1

Definition: Given a binary operator ⊗ over A and another operator ⊕ over B, we say that a map f: A → B is a homomorphism if we have that

∀ a,a' ∈ A,      f(a ⊗ a') = f(a) ⊕ f(a').

Notice that a homomorphism might have, but does not necessarily have, any of the properties we introduced for maps: it could be injective, surjective, and so on.

Definition: A bijective homomorphism f: AB is called an isomorphism.

5.3. Linear transformations

In this course we are interested in a specific kind of homomorphism.

Definition: For any two vector spaces V and W, a map f: V → W is a linear transformation iff we have that for all v,v' ∈ V and scalars s ∈ R,

f(v + v')
=
f(v) + f(v')
f(sv)
=
sf(v)

In other words, a linear transformation is a homomorphism between vector spaces that preserves vector addition and scalar multiplication. If a map f does not satisfy the above properties, it is not a linear transformation.

Example: For any matrix M Rn×m, the map f : Rm Rn defined as f(v) = Mv is a linear transformation.

We can confirm that any such f satisfies the properties of a linear transformation:
f(v + v')
=
M ⋅ (v + v')
=
Mv + Mv'
=
f(v) + f(v')
f(sv)
=
M ⋅ (sv)
=
s ⋅ (Mv)
=
sf(v)

Example: Representation of a polynomial curve f as a vector of its y values on some fixed number of inputs x1,...,xn is a linear transformation.

Suppose we have the transformation φ defined as follows:
φ(f)
=
 
f(x1)
f(xn)
 
We can confirm that φ is a linear transformation. Suppose that h(x) = f(x) + g(x). Then we have:
φ(f + g)
=
φ(h)
=
 
h(x1)
h(xn)
 
=
 
f(x1) + g(x1)
f(xn) + g(xn)
 
=
 
f(x1)
f(xn)
 
+
 
g(x1)
g(xn)
 
=
φ(f) + φ(g)
Suppose that h(x) = sf(x). Then we have:
φ(sf)
=
φ(h)
=
 
h(x1)
h(xn)
 
=
 
sf(x1)
sf(xn)
 
=
s
 
f(x1)
f(xn)
 
=
s ⋅ φ(f)

Fact: For any two linear transformations f: VW, g: UV, the composition f o g: UW is also a linear transformation. To show this, we must demonstrate that if h(x) = f(g(x)) then h respects vector addition and scalar multiplication. For any u,u' U, and s R, we have:
h(u + u')
=
f(g(u + u'))
=
f(g(u) + g(u'))
=
f(g(u)) + f(g(u'))
=
h(u) + h(u')
and
h(su)
=
f(g(su))
=
f(sg(u))
=
sf(g(u))
=
sh(u)
Because linear transformations are homomorphisms (and, thus, maps), we can ask whether they have properties of maps (i.e., whether they are injective, surjective, bijective, and so on). We will do so further below.

Example: Let the map φ : R3 → {f | f(x) = a x2 + b x + c} be defined as follows:
φ(
 
a
b
c
 
)
=
h      where h(x) = a x2 + b x + c
It is the case that φ is a linear transformation and a bijection. Thus, φ is an isomorphism between R3 and {f | f(x) = a x2 + b x + c}.

Example: Differentiation (i.e., finding the derivative) of polynomials is a linear transformation.

As an example, consider the space of polynomials of the form f(x) = a x2 + b x + c. If each polynomial is represented as a vector of its coefficients, the differentiation operator for this vector space of functions can be represented as a matrix:
 
0 0 0
2 0 0
0 1 0
 
 
a
b
c
 
=
 
0
2 a
b
 
Because there exists an isomorphism φ between {f | f(x) = a x2 + b x + c} and R3, we can compose φ and φ -1 with the following linear transformation ψ : R3R3
ψ(v)
=
 
0 0 0
2 0 0
0 1 0
 
v
Then, the following differential operator d/dx : {f | f(x) = a x2 + b x + c} → {f | f(x) = a x2 + b x + c} is a linear transformation because compositions of linear transformations are also linear transformations:
d/dx
=
φ o ψ o φ -1
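The matrix representation of differentiation above can be applied directly to coefficient vectors. The following is a minimal sketch in Python using NumPy (for illustration only):

import numpy as np

D = np.array([[0, 0, 0],
              [2, 0, 0],
              [0, 1, 0]], dtype=float)    # differentiation acting on coefficient vectors [a; b; c]

coeffs = np.array([5.0, -2.0, 3.0])        # represents 5x2 - 2x + 3
print(D @ coeffs)                          # [0, 10, -2], i.e., the derivative 10x - 2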

Example: Orthogonal projection onto a vector space is a linear transformation.

We know that any vector space has an orthonormal basis B. Thus, we can compute the projection of any vector v using the formula MB ⋅ MB⊤ ⋅ v, where the columns of MB are the vectors in B. Thus, orthogonal projection onto span B can be defined as a linear transformation f where:
f(v)
=
(MB ⋅ MB⊤) ⋅ v
Since (MB ⋅ MB⊤) is a matrix, f must be a linear transformation.

Fact: For any linear transformation f:VW and appropriate constants 0 V and 0' W,

f(0) = 0'

We can show this in the following way: given any v V,
0
=
0 ⋅ v
f(0)
=
f(0 ⋅ v)
=
0 ⋅ f(v)
=
0'

Fact: For any linear transformation f:VW and the corresponding additive inversion operations of V and W, it is the case that f respects additive inversion:

f(-v) = -f(v)

We can derive this fact in two ways. One approach is to use the previous fact that f(0) = 0' to show that f(-v) is indeed the inverse of f(v):

f(v) + f(-v)
=
f(v + (-v))
=
f(0)
=
0'

The other approach is to use the fact that in both vector spaces, v = -1 ⋅ v:

v
=
1 ⋅ v
v + (-1 ⋅ v)
=
(1 ⋅ v) + ((-1) ⋅ v)
=
(1 + (-1)) ⋅ v
=
0 ⋅ v
=
0

Then, because f respects scalar multiplication, we have that:

f(-1 ⋅ v)
=
-1 ⋅ f(v)

Thus, the argument is:

f(v) + f(-v)
=
f(v) + f(-1 ⋅ v)
=
f(v) + ((-1) ⋅ f(v))
=
f(v) + (-f(v))
=
0'

5.4. Orthogonal projections as linear transformations

Fact: For v Rn, an orthogonal projection onto a one-dimensional vector space span{w} is a linear transformation. First, note that the dot product of two vectors can be rewritten as matrix multiplication of two matrices:

w ⋅ v = w⊤ ⋅ v.

Also, notice that

||w||
=
√(w⊤ ⋅ w)
||w||2
=
w⊤ ⋅ w

Finally, notice that for any nonzero r ∈ R, the 1 × 1 matrix
 
r
 
R1 × 1 is invertible and has inverse
 
1/r
 
.

Consider the formula for the projection of v ∈ Rn onto span{w}. We can rewrite the formula to use only multiplication of matrices.

(v ⋅ w/||w||) ⋅ w/||w||
=
1/||w||2 ⋅ (v ⋅ w) ⋅ w
=
w ⋅ 1/||w||2 ⋅ (v ⋅ w)
=
w ⋅ 1/(w⊤ ⋅ w) ⋅ (w⊤ ⋅ v)
=
w ⋅ (w⊤ ⋅ w) -1 ⋅ (w⊤ ⋅ v)
=
(w ⋅ (w⊤ ⋅ w) -1 ⋅ w⊤) ⋅ v

Thus, we have a product of the matrices w ∈ Rn×1, (w⊤ ⋅ w) -1 ∈ R1×1, and w⊤ ∈ R1×n. This product is a matrix in Rn×n. Call it Mspan{w}. Then we can define the linear transformation f : Rn → Rn that performs the orthogonal projection onto span{w} as:

f(v) = Mspan{w} ⋅ v.

We used the above approach because we will see later that it can be generalized to orthogonal projections onto multidimensional vector spaces. An alternative way to show the above is to notice that w/||w|| can be rewritten as a matrix u ∈ Rn×1. In that case, we have:

(v ⋅ w/||w||) ⋅ w/||w||
=
(v ⋅ u) ⋅ u
=
u ⋅ (u⊤ ⋅ v)
=
(u ⋅ u⊤) ⋅ v

Here, u ⋅ u⊤ ∈ Rn×n.
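The projection matrix derived above is easy to compute explicitly. The following is a minimal sketch in Python using NumPy for w = [3; 4] (for illustration only):

import numpy as np

w = np.array([[3.0], [4.0]])              # w as an n-by-1 matrix
P = w @ np.linalg.inv(w.T @ w) @ w.T      # the n-by-n matrix Mspan{w} = w (w⊤ w)^-1 w⊤
print(P)                                  # [[9/25, 12/25], [12/25, 16/25]]

v = np.array([1.0, -2.0])
print(P @ v)                              # the orthogonal projection of v onto span{w}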

Example: Suppose we want to compute the projection of v onto w where:
v
=
 
1
-2
 
w
=
 
3
4
 
We can compute the projection as follows:
||
 
3
4
 
||
=
5
(vw/||w||) ⋅ w/||w||
=
(
 
1
-2
 
⋅ (
 
3
4
 
⋅ (1/5))) ⋅ (
 
3
4
 
⋅ (1/5))
=
 
3
4
 
⋅ (1/25) ⋅ (
 
1
-2
 
 
3
4
 
)
=
 
3
4
 
⋅ (
 
3
4
 
 
3
4
 
) -1 ⋅ (
 
3
4
 
 
1
-2
 
)
=
(
 
3
4
 
⋅ (
 
3 4
 
 
3
4
 
) -1
 
3
4
 
) ⋅
 
1
-2
 
=
(
 
3
4
 
 
25
 
 -1
 
3
4
 
) ⋅
 
1
-2
 
=
(
 
3
4
 
 
1/25
 
 
3
4
 
) ⋅
 
1
-2
 
=
(
 
3/25
4/25
 
 
3
4
 
) ⋅
 
1
-2
 
=
 
9/25 12/25
12/25 16/25
 
 
1
-2
 
Notice that the columns of the matrix above are in span {w}:
 
9/25
12/25
 
=
(3/25) ⋅
 
3
4
 
 
9/25
12/25
 
span {
 
3
4
 
}
 
12/25
16/25
 
=
(4/25) ⋅
 
3
4
 
 
12/25
16/25
 
span {
 
3
4
 
}
Thus, we have that:
span {
 
9/25
12/25
 
,
 
12/25
16/25
 
} = span {
 
3
4
 
}

Fact: Suppose that for M Rn×m and f RmRn we have an overdetermined system:

Mv = w

If M⊤ ⋅ M is invertible (and, thus, (M⊤ ⋅ M) -1 is defined), we can say that:
Mv
=
w
M⊤ ⋅ (Mv)
=
M⊤ ⋅ w
(M⊤ ⋅ M) ⋅ v
=
M⊤ ⋅ w
(M⊤ ⋅ M) -1 ⋅ (M⊤ ⋅ M) ⋅ v
=
(M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ w
v
=
(M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ w
Mv
=
M ⋅ (M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ w.
Thus, even if Mv = w is overdetermined, M⊤ ⋅ (Mv) = M⊤ ⋅ w has a solution v* such that Mv* corresponds to the orthogonal projection of w onto the span of the column vectors of M.

Thus, for any matrix M where M⊤ ⋅ M is invertible, we can define the linear transformation that represents orthogonal projection onto the span of the columns of M:
f(v)
=
M ⋅ (M⊤ ⋅ M) -1 ⋅ M⊤ ⋅ v.
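This is the "normal equations" formulation of least squares. The following is a minimal sketch in Python using NumPy that applies it to the overdetermined system from the earlier worked example (for illustration only; it assumes M⊤ ⋅ M is invertible):

import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 0.0],
              [0.0, 2.0]])
w = np.array([-1.0, 2.0, 1.0])

v_star = np.linalg.solve(M.T @ M, M.T @ w)   # v* = (M⊤ M)^-1 M⊤ w
print(v_star)                                # approximately [0.2, 0.5]
print(M @ v_star)                            # the projection of w onto the span of the columns of M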



5.5. Assignment #5: Approximations and Linear Transformations

In this assignment you will solve problems involving overdetermined systems and linear transformations.

You are only required to choose and solve any four (4) of the problems below. Solutions to any additional problems will count as extra credit.

Please submit a single file a5.* containing your solutions. The file extension may be anything you choose; please ask before submitting a file in an exotic or obscure file format (plain text is preferred).

  1. For each of the linear transformations f defined below, determine the following.
    • What are the domain and codomain?
    • Is it injective? If it is, provide a step-by-step argument. If not, provide a counterexample.
    • Is it surjective? If it is, provide a step-by-step argument. If not, provide a counterexample.
    • Is it bijective? Explain why or why not.
    • What is dim(im(f))? Your answer must be an integer.

    1. The linear transformation f : R3R2 where:
      f(v)
      =
       
      1 2 0
      2 4 1
       
      v
    2. The linear transformation f : R2R2 where:
      f(v)
      =
       
      1 0
      0 0
       
      v
    3. The linear transformation f : R5R1 where:
      f(v)
      =
       
      0 0 0 1 2
       
      v
    4. The linear transformation f : R2R2 where:
      f(v)
      =
       
      0 a
      a 0
       
      v      where a ≠ 0
  2. In this problem, you will practice finding the least-squares approximate solutions to overdetermined systems.

    1. Find the least-squarse approximate solution to the following equation, and compute the error of that solution:
       
      1 -2
      -2 4
       
       
      x
      y
       
      =
       
      1
      5
       
    2. Suppose you are given the following polynomial:
      f(x)
      =
      x2 - 2 x + 6
      Find the best-fit curve in the space {f | f(x) = a x + b, a,b R} for the above polynomial using the x values {-1, 0, 2}.
    3. Suppose you are given the following data points:
       
      1
      2
       
      ,
       
      0
      1
       
      ,
       
      -1
      0
       
      ,
       
      -2
      3
       
      Find the best-fit curve in the space {f | f(x) = a x2 + b x + c, a,b,c R} for these data points.
  3. Two engineers are trying to find the best-fit curve in the space { f | f(x) = a x + b, a,b R } for a collection of 100 data points of the form (xi, yi) for i {1,...,100}, but that collection is split across two databases of 50 points each. Each engineer has access to only one of the databases. Symbolically, the overdetermined system they are trying to solve is represented using the following equation:
     
    x1 1
    x100 1
     
     
    a
    b
     
    =
     
    y1
    y100
     
    Suppose the two column vectors [x1; ...; x100] and [1; ...; 1] of the matrix are orthogonal to one another (but not necessarily unit vectors). Then, the first thing they need to do is project the vector of [y1; ...; y100] onto the two column vectors of the matrix. Each engineer computes the following values using the data available to them:
    ||
     
    x1
    x50
     
    ||
    =
    3
        
        x1y1 + ... + x50y50
    =
    62
        
        y1 + ... + y50
    =
    120
    ||
     
    x51
    x100
     
    ||
    =
    4
        
        x51y51 + ... + x100y100
    =
    38
        
        y51 + ... + y100
    =
    80
    1. Once the engineers compute the projection of [y1; ...; y100] onto span of the vector [x1; ...; x100], they collectively obtain some vector of the form:
      s
       
      x1
      x100
       
      =
      orthogonal projection of
       
      y1
      y100
       
      onto
       
      x1
      x100
       
      for some s. What is s? Your answer must be an integer.
    2. Once the engineers compute the projection of [y1; ...; y100] onto span of the vector [1; ...; 1], they collectively obtain some vector of the form:
      t
       
      1
      1
       
      =
      orthogonal projection of
       
      y1
      y100
       
      onto
       
      1
      1
       
      for some t. What is t? Your answer must be an integer.
    3. What are the coefficients a and b of the best-fit line f(x) = a x + b for the data? Your answers should be integers.
  4. Consider the following underdetermined system:
     
    1 3
    2 6
     
     
    x
    y
     
    =
     
    10
    20
     
    1. Define the solution space to the above system; you must use set notation. We will call this space W.
    2. Define an isomorphism between W and a vector space V. You must provide a definition of both the vector space and the isomorphism.
    3. Suppose that the cost of a solution vector v to the above system is defined to be ||pv|| where
      p
      =
       
      2
      3
       
      What is the solution to the above system that has the lowest cost?
  5. Suppose you are assembling a processing plant that generates electricity and produces biofuel as a byproduct. You can purchase any number of each of the following two types of generators:
    • generators of type A produce 4 units of electricity for every 1 unit of biofuel produced;
    • generators of type B produce 8 units of electricity for every 2 units of biofuel produced.
    Furthermore, in order for the generators to work, each generator must be connected to every other generator of the same type as well as a central controller, and connections cost $4. As a result, the connection cost for each kind of generator is computed as follows:
    • when you buy x generators of type A, each generator of type A costs 4 ⋅ x dollars;
    • when you buy y generators of type B, each generator of type B costs 4 ⋅ y dollars.
    If you must generate exactly 40 units of electricity and 10 units of biofuel while paying the least connection cost possible, how many of each kind of generator should you purchase?
  6. Suppose that vectors in R2 represent population quantities in two locations, the linear transformation f:R2 R2 represents a change in the quantities over the course of one year if the economy is doing well, g:R2 R2 represents a change in the quantities over the course of one year if the economy is not doing well, and v0 is the initial state:
    f(v)
    =
     
    2 1
    4 2
     
    v
    g(v)
    =
     
    0.1 0.2
    0.4 0.3
     
    v
    v0
    =
     
    100
    200
     
    The population state is initially v0; after 302 years have passed, the total number of years with a good economy was 100 and the total number of years with a poor economy was 202. What is the state of the population at this point?


5.6. Matrices as linear transformations

Fact: Suppose we have the following matrix M Rm×n and the corresponding linear transformation f : RnRm:
M
=
 
c1 ... cn
 
f(v)
=
Mv
Then the following is true:
span {c1, ..., cn}
=
im(f)

Fact: Suppose we have an invertible matrix M Rn×n and the corresponding linear transformation f : RnRn:
f(v)
=
Mv
Then f is a bijection. Because f is also a linear transformation, f is an isomorphism.

Example: Consider the following linear transformation f: RR2:
f(x)
=
 
x
x
 
The linear transformation f is injective but not surjective.

We can show f is injective:
x
x'
 
x
x
 
 
x'
x'
 
f(x)
f(x')
To show it is not surjective, we choose the following vector w R2 (i.e., in the codomain). The equation is then not solvable, so f is not surjective.
w
=
 
1
0
 
f(x)
=
 
1
0
 
 
x
x
 
=
 
1
0
 
x
=
1
x
=
0
1
=
0

Example: Consider the following linear transformation f: R2 → span{ [3;4] }:
f(
 
x
y
 
)
=
 
3 6
4 8
 
 
x
y
 
The linear transformation f is surjective but not injective.

Example: Consider the following linear transformation f: R3 → R2:
f(
 
x
y
z
 
)
=
 
0 1 1
1 0 1
 
 
x
y
z
 
The linear transformation f is surjective but not injective.

5.7. Application: communication

We want to transmit information, but we have constraints (e.g., security or high cost of transmission). Suppose we have two linear transformations f : V W and f -1 : W V such that f -1 o f is a bijection (and, thus, an isomorphism). If im(f) has some desirable property that the domain of f does not possess (e.g., it is obscured, or it has a smaller representation size), we can use f as an encoding function for the information we want to transmit, and f -1 as a decoding function.

Example: Suppose Alice wants to transmit the following vectors to Bob:
{
 
-3
6
 
,
 
2
-4
 
,
 
1
-2
 
,
 
0
0
 
,
 
-5
10
 
}
If she simply transmitted the individual real numbers to Bob one-by-one, she would need to transmit 10 real numbers. Can Alice and Bob agree on some protocol that would allow Alice to send fewer real numbers but transmit the same amount of information in this case?

All the vectors in the set are in R2. However, Alice and Bob can take advantage of the fact that the vectors are in a subspace of R2. In particular, all five vectors are in:
span {
 
1
-2
 
}
R2

Suppose Alice and Bob define the following encoding and decoding linear transformations:
f : span {
 
1
-2
 
} R
f -1 : R span {
 
1
-2
 
}
f(
 
x
y
 
)
=
x
f -1(s)
=
s
 
1
-2
 
Then f -1 o f is invertible, and Alice can use f to encode vectors that contain two real numbers within a single real number. This means it is sufficient for Alice to send just five real numbers in order to send Bob the five vectors:
{ f(
 
-3
6
 
), f(
 
2
-4
 
), f(
 
1
-2
 
), f(
 
0
0
 
), f(
 
-5
10
 
) }
=
{-3, 2, 1, 0, -5}
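The protocol above is straightforward to simulate. The following is a minimal sketch in Python using NumPy (the function names encode and decode are illustrative):

import numpy as np

def encode(v):
    # f([x; y]) = x; this is only a faithful encoding for vectors in span{[1; -2]}
    return v[0]

def decode(s):
    # f^-1(s) = s * [1; -2]
    return s * np.array([1.0, -2.0])

vectors = [np.array(p) for p in [(-3.0, 6.0), (2.0, -4.0), (1.0, -2.0), (0.0, 0.0), (-5.0, 10.0)]]
codes = [encode(v) for v in vectors]
print(codes)                          # [-3.0, 2.0, 1.0, 0.0, -5.0]
print([decode(s) for s in codes])     # recovers the original vectors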

5.8. Affine spaces and affine transformations

We are interested in studying solution spaces of systems of the form Mv = w. However, these are not vector spaces because they do not always contain 0 (i.e., the origin). Recall that if a solution space of a system does contain 0, the solution space is a vector space and Mv = w must be a homogenous system where w = 0.

Definition: Suppose W is a set of vectors such that 0 ∉ W, and suppose w0 is the orthogonal projection of 0 onto W. Then we can define W to be a vector space with the following vector addition (⊕) and scalar multiplication (⊗) operations, in which w0 acts as the additive identity:
w ⊕ w'
=
w + w' - w0
s ⊗ w
=
s ⋅ w + (1-s) ⋅ w0

Fact: If V is a vector space with addition operation + and scalar multiplication operation ⋅, the vector space W = {v + w0 | v V} with addition operation ⊕ and scalar multiplication operation ⊗ is isomorphic to V, with isomorphism f:V W defined as:
f(v)
=
v + w0
We can see that f is a linear transformation because it has the properties of a linear transformation:
f(v + v')
=
v + v' + w0
=
v + w0 + v'
=
v + w0 + v' + w0 - w0
=
(v + w0) + (v' + w0) - w0
=
f(v) ⊕ f(v')
f(s ⋅ v)
=
sv + w0
=
sv + w0 + (sw0 - sw0)
=
sv + sw0 + w0 - sw0
=
s ⋅ (v + w0) + (1-s) ⋅ w0
=
s ⊗ f(v)
We can see it is injective:
v
v'
v+w0
v'+w0
f(v)
f(v')
We can see it is surjective because for any w' W:
f(v)
=
w'
v+w0
=
w'
v
=
w' - w0
Since f is injective, surjective, and a linear transformation, it is an isomorphism. Thus, V and W are isomorphic. This means we can work with W as if it is a vector space: we can map W to V using the inverse of the isomorphism, f -1, do any necessary work in V, and then map the result back to W using f.

Example: Consider the following affine space:
W
=
{
 
x
y
 
| y = 2 x + 3}
Suppose we want to find the orthogonal projection of the following point p ∈ R2 onto the affine space W:
p
=
 
4
3
 
To do so, we first need to find the isomorphism between a vector space V and the space W. We will choose the vector space V that is parallel to W but that goes through the origin; it is defined as:
V
=
{
 
x
y
 
| y = 2 x }
Notice that V is a line with the same slope as W. Next, we must find the additive identity w0 of W by finding the intersection of W and V⊥, where V⊥ is the orthogonal complement of V. We first express V as a span and then compute V⊥:
V
=
span {
 
1
2
 
}
V⊥
=
{ u | u ⋅ v = 0 for all v ∈ V}
=
{ u | u ⋅ v = 0 for all v in a basis of V}
=
{ u | u ⋅ v = 0 for all v ∈ {
 
1
2
 
}}
=
{ u | u
 
1
2
 
= 0 }
=
{
 
x
y
 
|
 
x
y
 
 
1
2
 
= 0 }
=
{
 
x
y
 
| x + 2 ⋅ y = 0 }
=
{
 
x
y
 
| y = -(1/2) x }
We can now compute the intersection and, thus, w0:
{w0}
=
W ∩ V⊥
=
{
 
x
y
 
| y = 2 x + 3} ∩ {
 
x
y
 
| y = -(1/2) x }
=
{
 
x
y
 
| y = 2 x + 3 and y = -(1/2) x}
=
{
 
−6/5
3/5
 
}
w0
=
 
−6/5
3/5
 
We can now define an isomorphism f:V W:
f(v)
=
v + w0
=
v +
 
−6/5
3/5
 
Next, to project the point p onto W, it is now sufficient to first project the point onto V, then apply f:
V
=
span {
 
1
2
 
}
||
 
1
2
 
||
=
√(5)
orthogonal projection of
 
4
3
 
onto span {
 
1
2
 
}
=
(
 
4
3
 
⋅ (1/√(5))
 
1
2
 
) ⋅ (1/√(5))
 
1
2
 
=
(10/√(5)) ⋅ (1/√(5))
 
1
2
 
=
2 ⋅
 
1
2
 
=
 
2
4
 
Finally, the orthogonal projection of p onto W can be computed via f:
orthogonal projection of
 
4
3
 
onto W
=
f(
 
2
4
 
)
=
 
2
4
 
+
 
−6/5
3/5
 
=
 
4/5
23/5
 

How do we find an isomorphism between an affine space and a parallel vector space?

Fact: Suppose that for a matrix M Rn×n, Mu = w is a non-homogenous system with at least one solution. Let U be the solution space of Mu = w:
U
=
{u | Mu = w }
How can we find an isomorphism between U and some vector space V? We let V be the corresponding homogenous system:
V
=
{v | Mv =
 
0
0
 
}
We must now find V⊥; we can do so by taking the transpose of M and using the span of its columns (the row space of M):
V⊥
=
{M⊤ ⋅ v | v ∈ Rn }
To find u0, it is now sufficient to find the intersection of U and V⊥:
{u0}
=
U ∩ V⊥
The isomorphism f : V → U is then:
f(v)
=
v + u0

Example: Suppose we have a non-homogenous system Mv = w where M Rn×n and we want to compute the projection of some p Rn onto the affine space {v | Mv = w}. How can we do so?

Suppose that M is of the following form:
M
=
 
c1 ... cn
 
=
 
r1
rn
 
Then we can say that we want to project p onto {v | Mv = 0} and find an isomorphism f: {v | Mv = 0} → {v | Mv = w}; applying f to that projection then yields the orthogonal projection onto the affine space.

Thus, we find an orthonormal basis of {v | Mv = 0} and compute the orthogonal projection of p onto {v | Mv = 0}; call this p*:
p*
=
orthogonal projection of p onto {v | Mv = 0}
We then find f by solving the following system of equations for u0:
Mu0
=
w
M⊤ ⋅ v
=
u0
We solve the above by first solving for v and then computing u0:
M ⋅ (M⊤ ⋅ v)
=
w
u0
=
M⊤ ⋅ v
We now have the isomorphism f:
f(v)
=
v + u0
Then, we can compute the orthogonal projection:
orthogonal projection of p onto {v | Mv = w}
=
f(p*)
=
p* + u0
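The procedure above can also be carried out numerically. The following is a minimal sketch in Python using NumPy of the earlier concrete computation, projecting p = [4; 3] onto the affine space y = 2x + 3 (for illustration only):

import numpy as np

d = np.array([1.0, 2.0])          # direction spanning the parallel vector space V
p = np.array([4.0, 3.0])

# w0 is the intersection of W with the orthogonal complement of V,
# obtained here directly from y = 2x + 3 and y = -(1/2)x.
x0 = -6.0 / 5.0
w0 = np.array([x0, 2 * x0 + 3])   # [-6/5, 3/5]

proj_V = (p @ d) / (d @ d) * d    # orthogonal projection of p onto V
print(proj_V + w0)                # [0.8, 4.6], i.e., [4/5; 23/5]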

Example: We can find a basis of the following set of solutions of a homogenous system (thus, a vector space):
V
=
{ v |
 
3 6
1 2
 
v = 0 }
We can do so by starting with a basis that spans the entire space that contains V (which in this case is R2), and then "removing" the contribution of the orthogonal complement V⊥ from these basis vectors.

Thus, we begin with:
R2
=
span {
 
1
0
 
,
 
0
1
 
}
We know that:
V⊥
=
span {
 
3
6
 
,
 
1
2
 
}
=
span {
 
1
2
 
}
Thus, we project each basis vector for R2 onto V⊥, then subtract that from the basis vector:
 
1
0
 
- (
 
1
0
 
⋅ (1/√(5)) ⋅
 
1
2
 
) ⋅ (1/√(5)) ⋅
 
1
2
 
=
 
4/5
−2/5
 
 
0
1
 
- (
 
0
1
 
⋅ (1/√(5)) ⋅
 
1
2
 
) ⋅ (1/√(5)) ⋅
 
1
2
 
=
 
−2/5
1/5
 
The remaining vectors above span V:
V
=
{ v |
 
3 6
1 2
 
v = 0 }
=
span {
 
4/5
−2/5
 
,
 
−2/5
1/5
 
}
=
span {
 
−2/5
1/5
 
}

5.9. Fixed points, eigenvectors, and eigenvalues

We introduce several properties of linear transformations; these properties make it possible to work with linear transformations in new and convenient ways.

Definition: Given a linear transformation f: VV, v V is a fixed point of f if:
f(v)
=
v

Example: Consider the following linear transformation f: R2R2:
f(v)
=
 
2 1
1 2
 
v
The following vector v is a fixed point of f:
v
=
 
−1
1
 
f(
 
−1
1
 
)
=
 
2 1
1 2
 
 
−1
1
 
=
 
−1
1
 

Fact: For any linear transformation f, 0 is a fixed point of f (see the definition of a linear transformation).

Fact: If v is a fixed point of f and v ≠ 0, then f has an infinite number of fixed points. This is because for any s ∈ R and any fixed point v, we have that:
f(v)
=
v
f(sv)
=
sf(v)
=
sv

Fact: For any linear transformation f where f(v) = Mv, let S be the set of fixed points of f:

S = {v | f(v) = v }

Then S is a vector space. There are several ways to show this. Let v be any fixed point of f. Then we have:

f(v)
=
v
Mv
=
v
(Mv) − v
=
0
(Mv) − Iv
=
0
(MI) ⋅ v
=
0

Thus, the fixed points of f represented by M are exactly the elements of the solution space of the homogenous system (M - I) ⋅ v = 0. Alternatively, we could show that S contains 0, S is closed under vector addition, and S is closed under scalar multiplication.

We can generalize the notion of a fixed point by observing that a fixed point is just a vector on which the linear transformation acts as a scalar multiplier (in the case of a fixed point, the multiplier is 1). If v is a fixed point of f, then we have that:

f(v) = 1 ⋅ v

What if we created another notion that was not restricted to 1?

Definition: Given a linear transformation f: VV, v V is an eigenvector of f with eigenvalue λ if:

f(v) = λ ⋅ v
Fact: For any linear transformation f, if v is a fixed point of f, then it is an eigenvector of f with eigenvalue 1.

Definition: The set of eigenvectors of a linear transformation f that have eigenvalue λ is its eigenspace:
eigenspace of f with eigenvalue λ
=
{ v | f(v) = λ ⋅ v }
Any eigenspace is a vector space.

Fact: Given a linear transformation f RnRn represented by M Rn×n and an eigenvector v, consider the following:
f(v)
=
λ v
Mv
=
λ v
(Mv) − λ v
=
0
(Mv) − λ Iv
=
0
(M − λ I) ⋅ v
=
0
The above equation has only zero solutions if (M - λ I) is invertible:
(M - λ I) ⋅ v
=
0
(M - λ I) -1 ⋅ (M - λ I) ⋅ v
=
(M − λ I) -10
v
=
(M − λ I) -10
v
=
0
Thus, nonzero eigenvectors v exist only if (M − λ ⋅ I) is not invertible, and (M − λ ⋅ I) is not invertible exactly when det  (M − λ ⋅ I) = 0. In fact, the eigenvalues of f are exactly the solutions λ of the equation:
det  (M − λ I)
=
0

Example: Suppose that we are modelling a system with two dimensions that are each modelled using R: the number of people living in the city, and the number of people living in the suburbs.

The following matrix (call it T) represents the movement of the two populations between the two locations (with entries in percentages) over one year:

from city from suburbs
to city     
 
0.95 stay in the city      0.03 move from suburbs to city     
 
to suburbs     
 
0.05 move from city to suburbs      0.97 stay in the suburbs     
 

For example, suppose the population in 1999 is represented by v below. Then the population in 2000 is represented by Tv:

Tv
=
 
0.95 0.03
0.05 0.97
 
 
600,000
400,000
 
=
 
582,000
418,000
 

Let f(v) = Tv be the linear transformation represented by T.

  1. What does a fixed point of f represent?

    A fixed point of f represents a stable population distribution that will not change from one year to the next. For example:

     
    0.95 0.03
    0.05 0.97
     
     
    375,000
    625,000
     
    =
     
    375,000
    625,000
     

    Notice that any scalar multiple of this vector, including a vector that is normalized so that the two components add up to 1, is also a fixed point of f:

     
    0.95 0.03
    0.05 0.97
     
     
    0.375
    0.625
     
    =
     
    0.375
    0.625
     

    Thus, for this transformation, for any distribution of the population in which 37.5% live in the city, the distribution is stable.

  2. What does an eigenvector of f represent?

    An eigenvector of f represents a population distribution that may grow or shrink, but whose relative distribution between the city and the suburbs remains the same from one year to the next.

  3. Does f have any nonzero eigenvectors other than the fixed points?

    Not any that could represent an actual population distribution. The sum of the components of any vector in im f is always the same as the sum of the components of the input vector that produced that image, so any eigenvector with an eigenvalue other than 1 must have components that sum to 0 (we will find such an eigenvector in part 5 below).
     
    0.95 0.03
    0.05 0.97
     
     
    x
    y
     
    =
     
    0.95 x + 0.03 y
    0.05 x + 0.97 y
     
    x + y
    =
    (0.95 x + 0.05 x) + (0.03 y + 0.97 y)
    =
    (0.95 x + 0.03 y) + (0.05 x + 0.97 y)

    This is because the sums of the components of the column vectors of T are 1. This makes T a stochastic matrix, and it makes the fixed points the steady-state or equilibrium vectors of T.

  4. Find the vector space of fixed points of f.

    Note that either the fixed points of f are {0}, or there are infinitely many. Thus, if we can find a matrix equation whose solutions are the fixed points, we will obtain either a system whose only solution is 0, or an underdetermined system. We know that for our particular f and T, the space of fixed points is the set of solutions to the equation:
    S
    =
    { v | (
     
    0.95 0.03
    0.05 0.97
     
    -
     
    1 0
    0 1
     
    ) ⋅ v
    =
     
    0
    0
     
    }
    We then find the basis that spans S:
    S
    =
    {v | v
     
    -0.05
    0.03
     
    = 0}
    =
    span {
     
    0.03
    0.05
     
    }
    =
    span {
     
    3
    5
     
    }

  5. Suppose we want to find a closed formula for the population k years after the initial state v (i.e., the result of applying T to an initial vector v a total of k times, or Tk ⋅ v) where we have:

    v
    =
     
    0.6
    0.4
     

    The formula should be in terms of k and should not require matrix multiplication. In other words, we should be able to obtain closed formulas for the city population and the suburb population in terms of k.

    We can approach this problem by finding the eigenvectors of f. Then, we can express the result of Tkv as a linear combination of eigenvectors.
    det  (
     
    0.95 0.03
    0.05 0.97
     
    - λ ⋅
     
    1 0
    0 1
     
    )
    =
    0
    det 
     
    0.95 - λ 0.03
    0.05 0.97 - λ
     
    =
    0
    (0.95 - λ) ⋅ (0.97 - λ) - (0.03 ⋅ 0.05)
    =
    0
    λ2 - 1.92 λ + 0.9215 - 0.0015
    =
    0
    λ2 - 1.92 λ + 0.92
    =
    0
    λ
    =
    (1.92 ± √(1.922 - 4(0.92)))/2
    λ
    =
    1.92/2 ± 0.08/2
    λ
    =
    0.96 ± 0.04
    λ
    =
    1      or      0.92

    We already know the eigenvector for eigenvalue 1 (it is the fixed point). To find the eigenvector for the other eigenvalue, we solve:

     
    0.95 0.03
    0.05 0.97
     
     
    x
    y
     
    =
    0.92 ⋅
     
    x
    y
     
    0.95 x + 0.03 y
    =
    0.92 x
    0.05 x + 0.97 y
    =
    0.92 y
    0.03 x + 0.03 y
    =
    0
    0.05 x + 0.05 y
    =
    0
    x
    =
    1
    y
    =
    -1

    Thus, our eigenvectors are:

    e1
    =
     
    3
    5
     
    e2
    =
     
    1
    -1
     

    Notice that we had a space of solutions because the system was underdetermined; we chose a particular eigenvector. Finally, notice that:

    Tk ⋅ (ae1 + be2)
    =
    (Tkae1) + (Tkbe2)
    =
    a ⋅ (Tke1) + b ⋅ (Tke2)
    =
    ae1 + b ⋅ 0.92ke2

    Thus, if the initial input vector can be rewritten in terms of the two eigenvectors, we can find the closed formula. In fact, it can be because the two eigenvectors are linearly independent:

    a
     
    3
    5
     
    + b
     
    1
    -1
     
    =
     
    x
    y
     
     
    3 1
    5 -1
     
     
    a
    b
     
    =
     
    x
    y
     
     
    a
    b
     
    =
     
    3 1
    5 -1
     
     -1
     
    x
    y
     
     
    a
    b
     
    =
     
    3 1
    5 -1
     
     -1
     
    0.6
    0.4
     
     
    a
    b
     
    =
     
    0.125
    0.225
     
    0.125 ⋅
     
    3
    5
     
    + 0.225 ⋅
     
    1
    -1
     
    =
     
    0.6
    0.4
     

    The closed formula is:

    Tk
     
    0.6
    0.4
     
    =
    0.125 ⋅
     
    3
    5
     
    + 0.225 ⋅ 0.92k
     
    1
    -1
     

Eigenspaces have many other applications. In particular, they make it possible to provide "natural" interpretations of general notions of concepts such as differentiation in the context of vector spaces.
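Before moving on, the closed formula for the population model above can be checked against direct repeated matrix multiplication. The following is a minimal sketch in Python using NumPy (for illustration only):

import numpy as np

T = np.array([[0.95, 0.03],
              [0.05, 0.97]])
v = np.array([0.6, 0.4])

k = 10
direct = np.linalg.matrix_power(T, k) @ v
closed = 0.125 * np.array([3.0, 5.0]) + 0.225 * (0.92 ** k) * np.array([1.0, -1.0])
print(direct, closed)             # the two results agree (up to rounding)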

Example: Differentiation is a linear transformation from the vector space of differentiable functions (or a subset, e.g., the polynomials):

 
d
dx
 
(a f + b g)     
=
     a
 
d
dx
 
f + b
 
d
dx
 
g

As an example, consider the space of polynomials of the form f(x) = a x2 + b x + c. If each polynomial is represented as a vector of its coefficients, the differentiation operator for this vector space of functions can be represented as a matrix:

 
0 0 0
2 0 0
0 1 0
 
 
a
b
c
 
=
 
0
2 a
b
 

Thus, an eigenvector in this vector space is any differentiable function f such that:

 
d
dx
 
f
=
λ f

Notice that the above is a differential equation. If λ = 0, then we have for any constant c R the solution:

f(x) = c

If λ ≠ 0 and we do not restrict ourselves to polynomials but allow all infinitely differentiable functions, then we have the solution:

f(x) = c eλx


Review 2. Vector and Matrix Algebra, Vector Spaces, and Linear Transformations

The following is a breakdown of what you should be able to do at the end of the course (and of what you may be tested on in an exam). Notice that many of the tasks below can be composed. This also means that many problems can be solved in more than one way.

Below is a comprehensive collection of review problems going over the material covered in this course. These problems are an accurate representation of the kinds of problems you may see on an exam.

Example: Consider the following vector space:
V
=
span{
 
2
1
4
 
,
 
0
1
-1
 
}

Example: Consider the vector space of polynomials of degree at most 2:

F = { f | f(x) = a x2 + b x + c, a,b,c R}

The map d : F → F represents differentiation. For example:

f(x)
=
5 x2 - 2 x + 3
g(x)
=
10 x - 2
d(f)
=
g

Example: Alice wants to send vectors in R2 to Bob. For any vector v R2 that she wants to send, she generates a random scalar r R and sends w to Bob as defined below:
v
=
 
x
y
 
w
=
 
1 2 0
0 3 0
0 0 2
 
 
x
y
r
 

Example: Suppose there are two locations across which a population is distributed. Over the course of each year, the population moves between the two locations according to one of two population distribution transformations depending on how well the economy is doing (A if the economy is doing well, B otherwise):
A
=
 
0.55 0.9
0.45 0.1
 
B
=
 
0.75 0.3
0.25 0.7
 

Example: Given u Rn, what is the orthogonal projection of v Rn onto span{u}?

It is sufficient to compute the orthogonal projection of v onto u. Given a unit vector e parallel to u, the projection of v onto e would be:

(ev) ⋅ e.

However, we cannot assume u is a unit vector. Thus, we scale u to obtain a unit vector e = u/||u||. Then, the solution is:

((u/||u||) ⋅ v) ⋅ (u/||u||).

Example: Let M R17× 9 and f(v) = Mv. What is the largest possible value of dim(im(f))?

We know that we can only compute M v if v has as many rows as M has columns. Thus, if f(v) = Mv, then v R9. We also know that Mv will be a vector with 17 rows because M has 17 rows, so f(v) R17. Thus, f R9R17.

This means that dim(im(f)) cannot be greater than dim(R17), so it cannot exceed 17. However, we also need to note that M has 9 columns, and that any value in the image of f is thus a linear combination of at most 9 vectors. Thus, any basis of im(f) has at most 9 distinct vectors in it. Since dim(im(f)) is the size of the basis of im(f), dim(im(f)) can be at most 9.
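This bound can also be observed numerically, since dim(im(f)) is the rank of M. The following is a minimal sketch in Python using NumPy (for illustration only):

import numpy as np

M = np.random.rand(17, 9)            # a random 17-by-9 matrix is (almost surely) of maximal rank
print(np.linalg.matrix_rank(M))      # 9, the largest possible value of dim(im(f))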

Example: Find an orthonormal basis for the following vector space:

span {
 
1
0
2
0
 
,
 
0
1
2
3
 
,
 
0
0
0
1
 
}.

We can use the algorithm for computing the vectors in an orthonormal basis. We can work with the vectors in any order, so suppose we have:

v1 =
 
0
0
0
1
 
,      v2 =
 
0
1
2
3
 
, and      v3 =
 
1
0
2
0
 
.

According to the algorithm, we then let u1 = v1 and e1 = u1 / ||u1||. In this case, we still have e1 = v1. Next, we compute:

u2
=
v2 - ((v2e1) ⋅ e1)
=
 
0
1
2
3
 
-
 
0
0
0
3
 
=
 
0
1
2
0
 
    
e2
=
 
0
1/√(5)
2/√(5)
0
 

u3
=
v3 - ((v3e1) ⋅ e1) - ((v3e2) ⋅ e2)
=
 
1
0
2
0
 
-
 
0
0
0
0
 
-
 
0
4/5
8/5
0
 
=
 
1
-4/5
2/5
0
 
    
e3
=
(√(5)/3) ⋅
 
1
-4/5
2/5
0
 

Thus, {e1, e2, e3} is an orthonormal basis for span{v1, v2, v3}.
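The computation above can be checked with a direct implementation of the algorithm. The following is a minimal sketch of the Gram-Schmidt process in Python using NumPy (for illustration only):

import numpy as np

v1 = np.array([0.0, 0.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 2.0, 3.0])
v3 = np.array([1.0, 0.0, 2.0, 0.0])

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        u = v - sum((v @ e) * e for e in basis)   # subtract the projections onto earlier vectors
        basis.append(u / np.linalg.norm(u))       # normalize (assumes the inputs are independent)
    return basis

e1, e2, e3 = gram_schmidt([v1, v2, v3])
print(e2)    # [0, 1/sqrt(5), 2/sqrt(5), 0]
print(e3)    # (sqrt(5)/3) * [1, -4/5, 2/5, 0]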

Example: Suppose we are given a linear transformation f:R2R2 where:

f(v)
=
 
2 7
3 1
 
v.

  1. Find an orthonormal basis for im(f).

    We know that the column vectors of the matrix are linearly independent. Thus, dim(im(f)) is at least 2, and since the codomain of f is R2, im(f) = R2. Thus, any orthonormal basis of R2 is appropriate, such as:

    {
     
    1
    0
     
    ,
     
    0
    1
     
    }.

  2. Find the set of all v such that f(v) = 0.

    Let the matrix in the definition of f be M. We know from lecture that M is invertible iff Mv = 0 has exactly one solution 0. Since det  M ≠ 0, M is invertible, so there is exactly one solution in the solution space {0}.

    Alternatively, it is sufficient to find the set of solutions to the equation f(v) = 0, which is the set of solutions (expanding the definition of f) of:
     
    2 7
    3 1
     
    v
    =
     
    0
    0
     
    .
    Since M is invertible, we can multiply both sides by M -1 to obtain:
    v
    =
    1/(2-21) ⋅
     
    1 -7
    -3 2
     
     
    0
    0
     
    v
    =
     
    0
    0
     
    .
    Thus, there is only one solution in the solution set: {0}.

  3. Show that f is surjective.

    To show that f is surjective, it is sufficient to show that for every v in the codomain, there exists x such that:
    f(x)
    =
    v
    In other words, we want a formula for x in terms of v that is defined for any v. Let the matrix in the definition of f be M. Since M is invertible, we can define:
    x
    =
    1/(2-21) ⋅
     
    1 -7
    -3 2
     
    v
    This formula for x is always defined, so f is surjective.

Example: Suppose we are given a linear transformation f:R2R2 where:

f(v)
=
 
2 1
8 4
 
v.

  1. Find dim(im(f)).

    Recall that the dimension of a space is the size of any basis of that space (all bases of a space have the same size). Let M be the matrix in the definition of f. The space im(f) is equivalent to the span of the column vectors of the matrix:

    im(f)
    =
    span{
     
    2
    8
     
    ,
     
    1
    4
     
    }

    To find a basis of a space spanned by a collection of vectors, we create a matrix whose rows are the vectors in that collection, find its reduced row echelon form, and keep only the nonzero rows in the basis:

     
    2 8
    1 4
     
     
    1 4
    1 4
     
     
    1 4
    0 0
     
    im(f)
    =
    span{
     
    1
    4
     
    }

    Thus, the dimension of im(f) is 1.

  2. Show that f is not injective.

    To show that a map is not injective, it is sufficient to find v and v' such that v ≠ v' but f(v) = f(v').

    One approach to finding such v and v' is to expand f(v) = f(v') until the constraints are simple enough that it is straightforward to try some inputs and easily check that they satisfy the constraints.

    f(x)
    =
    f(x')
     
    2 1
    8 4
     
     
    x
    y
     
    =
     
    2 1
    8 4
     
     
    x'
    y'
     

    Since the top row of the matrix is a multiple of the bottom row, we get one equation relating x, y, x', and y':

    2x + y
    =
    2x' + y'

    One possible pair of vectors in the domain that satisfies the above is:

     
    x
    y
     
    =
     
    0
    2
     
     
    x'
    y'
     
    =
     
    1
    0
     

    Thus, f is not injective.

  3. Define the solution space f(v) = 0 as a span and find its basis.

    We write down the definition of the solution space of the homogenous system above:
    {
     
    x
    y
     
    | f(
     
    x
    y
     
    ) =
     
    0
    0
     
    }
    =
    {
     
    x
    y
     
    |
     
    2 1
    8 4
     
     
    x
    y
     
    =
     
    0
    0
     
    }
    The constraints imposed above can be summarized as:
    {
     
    x
    y
     
    | 2x + y = 0}
    =
    {
     
    x
    y
     
    | y = -2x}
    Thus, the solution space is the line in R2 defined by the equation y = -2x. We can choose any vector on this line and take the span of the set containing only that vector to provide an explicit definition for the set of solutions (this solution space is also known as the kernel of f, written ker(f)):
    ker(f)
    =
    span {
     
    -1
    2
     
    } .

Example: Find a matrix M such that for f(v) = Mv,
im(f)
=
(span{
 
2
0
0
0
 
,
 
0
0
2
0
 
})⊥

One way to approach this problem is to expand the definition on the right-hand side using the definition of the orthogonal complement operation:

im(f)
=
(span{
 
2
0
0
0
 
,
 
0
0
2
0
 
})⊥
=
{
 
x
y
z
t
 
|
 
2
0
0
0
 
 
x
y
z
t
 
= 0,     
 
0
0
2
0
 
 
x
y
z
t
 
= 0 }
=
{
 
x
y
z
t
 
| 2x = 0,      2z = 0,      y,t R }
=
{
 
x
y
z
t
 
| x = 0,      z = 0,      y,t R }
=
{
 
0
y
0
t
 
| y,t R }.
Thus,
im(f)
=
span{
 
0
1
0
0
 
,
 
0
0
0
1
 
}.
Recall that the image of f is the span of the columns of M. Thus, one possible solution is:
M
=
 
0 0
1 0
0 0
0 1
 
.

Example: Suppose that a2 + b2 = 1. What is the orthogonal projection of v onto im(f) if f(v) = Mv, and:

v
=
 
1
2
3
 
M
=
 
a 0 0
b 0 0
0 c 0
 
We have that:
im(f)
=
span{
 
a
b
0
 
,
 
0
0
c
 
}.
Because a2 + b2 = 1, the first vector is already a unit vector and orthogonal to the second vector. By rescaling the second vector to be a unit vector, we can obtain an orthonormal basis:
im(f)
=
span{
 
a
b
0
 
,
 
0
0
1
 
}.
We can now find the orthogonal projection of v onto each vector in the orthonormal basis, and add these to find the orthogonal projection of the vector onto im(f):

( [1; 2; 3] ⋅ [a; b; 0] ) ⋅ [a; b; 0]   +   ( [1; 2; 3] ⋅ [0; 0; 1] ) ⋅ [0; 0; 1]
=
[a^2 + 2ab; ab + 2b^2; 3].
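
To make the formula above concrete, the sketch below (not part of the original notes) evaluates the projection numerically for one particular choice a = 3/5, b = 4/5 (so that a^2 + b^2 = 1) and c = 7; these specific values are our own assumption.

import numpy as np

a, b, c = 3/5, 4/5, 7.0
v  = np.array([1.0, 2.0, 3.0])
u1 = np.array([a, b, 0.0])        # already a unit vector, since a^2 + b^2 = 1
u2 = np.array([0.0, 0.0, 1.0])    # the rescaled second basis vector

proj = (v @ u1) * u1 + (v @ u2) * u2
print(proj)                                         # [1.32 1.76 3.  ]
print(np.array([a*a + 2*a*b, a*b + 2*b*b, 3.0]))    # matches the formula above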

Example: Suppose we are given the following points:

(0, 6),   (2, 6),   (2, 12).

  1. Find a function of the form f(x) = ax^2 + bx + c that is the best least-squares fit for these points.

    We begin by writing down the equations in terms of f for each point:

    f(0) = 6          f(2) = 6          f(2) = 12

    We construct the matrix equation that represents the above system:

     
    [(0)^2 (0) 1; (2)^2 (2) 1; (2)^2 (2) 1] ⋅ [a; b; c] = [6; 6; 12]

    There is no solution to the above equation. However, we are looking for the least-squares best fit approximation. Thus, we first find an orthonormal basis of the image of the matrix. If the matrix is M and f(v) = Mv, we have that:

    im(f) = span{ [0; 4; 4], [0; 2; 2], [1; 1; 1] }.

    Using the algorithm for finding an orthonormal basis, we obtain:
    im(f) = span{ [0; 1/√2; 1/√2], [1; 0; 0] }.

    We now find the orthogonal projection of [6; 6; 12] onto im(f) and use it to rewrite the equation so that it is not overdetermined:

    [(0)^2 (0) 1; (2)^2 (2) 1; (2)^2 (2) 1] ⋅ [a; b; c] = [6; 9; 9]

    This system is underdetermined. The space of solutions implied by the above equation is:
    { f   |   f(x) = ax^2 + bx + c,   c = 6,   4a + 2b + c = 9 }
    This means that any solution to the system is a least-squares best fit approximation. One possible best fit approximation is:

    f(x) = x^2 - (1/2)x + 6

  2. Find a function of the form f(x) = bx + c that is the best least-squares fit for these points.

    We follow the same process as in part 1 above. The matrix equation is:

    [(0) 1; (2) 1; (2) 1] ⋅ [b; c] = [6; 6; 12]


    There is no solution to the above equation. However, we are looking for the least-squares best fit approximation. Thus, we first find an orthonormal basis of the image of the matrix. If the matrix is M and f(v) = Mv, we have that:

    im(f) = span{ [0; 2; 2], [1; 1; 1] }.

    Using the algorithm for finding an orthonormal basis, we obtain:

    im(f) = span{ [0; 1/√2; 1/√2], [1; 0; 0] }.

    We again find the orthogonal projection of [6; 6; 12] onto im(f) and use it to rewrite the equation so that it is not overdetermined:

    [(0) 1; (2) 1; (2) 1] ⋅ [b; c] = [6; 9; 9]


    This yields c = 6 and b = 3/2. Thus, the least-squares best fit approximation is:

    f(x) = (3/2)x + 6
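
Both parts of this example can also be reproduced with a standard least-squares routine. The sketch below (not part of the original notes) uses numpy.linalg.lstsq in Python; the variable names are our own.

import numpy as np

xs = np.array([0.0, 2.0, 2.0])
ys = np.array([6.0, 6.0, 12.0])

# Part 1: f(x) = a*x^2 + b*x + c.  The matrix has rank 2, so lstsq returns one of
# the infinitely many least-squares solutions; the fitted values are the
# projection [6, 9, 9] found above.
A1 = np.column_stack([xs**2, xs, np.ones_like(xs)])
sol1, *_ = np.linalg.lstsq(A1, ys, rcond=None)
print(A1 @ sol1)                  # [6. 9. 9.]

# Part 2: f(x) = b*x + c.  Here the least-squares solution is unique.
A2 = np.column_stack([xs, np.ones_like(xs)])
sol2, *_ = np.linalg.lstsq(A2, ys, rcond=None)
print(sol2)                       # [1.5 6. ], i.e., b = 3/2 and c = 6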

Example: Show that for M ∈ R^(n×n), the space of solutions to Mx = 0 is a vector space (you do not need to show that all the axioms hold; it is sufficient to show the appropriate closure properties).

Let S be the space of solutions.

We show that 0 ∈ S:

M ⋅ 0 = 0.

We show that if v, v' ∈ S, then v + v' ∈ S:

Mv = 0          Mv' = 0

M ⋅ (v + v') = Mv + Mv' = 0 + 0 = 0.

We show that if v ∈ S, then for any scalar s ∈ R, s ⋅ v ∈ S:

Mv = 0

M ⋅ (s ⋅ v) = s ⋅ (Mv) = s ⋅ 0 = 0.
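
The closure properties above can also be illustrated on a concrete instance. The sketch below (not part of the original notes) picks one arbitrary singular matrix of our own choosing and checks both properties with SymPy in Python.

from sympy import Matrix, Rational

M = Matrix([[1, 2, 3],
            [2, 4, 6],
            [3, 6, 9]])

v1, v2 = M.nullspace()            # two solutions of M x = 0
s = Rational(7, 2)

print(M * (v1 + v2))              # Matrix([[0], [0], [0]]): closed under addition
print(M * (s * v1))               # Matrix([[0], [0], [0]]): closed under scaling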

Example: Compute the orthogonal projection of v onto im(f) where f(v) = Mv,

v = [1; -2; 3]   and   M = [1 1; 2 8; 1 5].

Normally, the way to approach such problems is to find an orthonormal basis for im(f) where f(v) = Mv, then use that orthonormal basis to project v onto im(f). However, it's a good idea to check for special cases before delving into a large computation. In this particular example, we have that:

 
[1; -2; 3] ⋅ [1; 2; 1] = 0          [1; -2; 3] ⋅ [1; 8; 5] = 0.

Thus, v is orthogonal to both vectors, so the orthogonal projection of v onto im(f) is 0 ∈ R^3.

It is worth noting that v is in the orthogonal complement of im(f). In general, given a vector subspace W ⊆ V, for any w ∈ W^⊥, the orthogonal projection of w onto W is 0.
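
The shortcut used above is easy to confirm numerically. In the sketch below (not part of the original notes), the projection of v onto im(f) is computed as M c, where c is a least-squares solution of M c = v; the variable names are our own.

import numpy as np

v = np.array([1.0, -2.0, 3.0])
M = np.array([[1.0, 1.0],
              [2.0, 8.0],
              [1.0, 5.0]])

print(v @ M)                      # [0. 0.]: v is orthogonal to both columns of M

c, *_ = np.linalg.lstsq(M, v, rcond=None)
print(np.round(M @ c, 10))        # [0. 0. 0.]: the projection onto im(f) is the zero vector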

Example: Is the system below overdetermined or underdetermined? Does it have a solution?

 
[1 2 0; 0 1 -4; 0 0 1] ⋅ v = [8; 3; 0]

The system is neither overdetermined nor underdetermined. It has a solution. Since the matrix is upper triangular, we can use the algorithm for solving a system with an upper triangular matrix to obtain v:
v = [2; 3; 0].
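
The back-substitution step can be written out explicitly. Below is a minimal sketch (not part of the original notes) of the algorithm for solving an upper-triangular system in Python; the function name back_substitute is our own.

import numpy as np

def back_substitute(U, b):
    """Solve U x = b when U is upper triangular with nonzero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                     # work upward from the last row
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[1.0, 2.0,  0.0],
              [0.0, 1.0, -4.0],
              [0.0, 0.0,  1.0]])
b = np.array([8.0, 3.0, 0.0])
print(back_substitute(U, b))      # [2. 3. 0.]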

Example: Determine whether the system below has a solution for all possible v ∈ R^3. If it does not, describe exactly the set of vectors v for which the system below has a solution and determine whether this set is a vector space.

 
[1 2 2; 0 2 -1; 1 4 1] ⋅ x = v.

Notice that determining whether the system has a solution for every v is equivalent to determining whether the linear transformation represented by the matrix is surjective. Because the matrix is square, this is also equivalent to determining whether the matrix is invertible.

To determine whether the matrix is invertible, we could compute its reduced row echelon form by finding an appropriate series of row operations.

 
[1 2 2; 0 2 -1; 1 4 1]
→ [1 2 2; 0 1 -1/2; 1 4 1]
→ [1 0 3; 0 1 -1/2; 1 4 1]
→ [1 0 3; 0 1 -1/2; 0 4 -2]
→ [1 0 3; 0 1 -1/2; 0 0 0]

Given the above, we see that the matrix is not invertible (its reduced row echelon form has a row of zeros, so it is not the identity matrix).

Let the matrix be M. The set of vectors for which there is a solution to the above equation is im(f) where f(x) = Mx. Thus, we want to find im(f) explicitly. We could say:
im(f) = span{ [1; 0; 1], [2; 2; 4], [2; -1; 1] }.
If we want a more explicit description, we can find a basis for im(f) using the usual procedure: compute the reduced row echelon form of the matrix whose rows are these three column vectors and keep its nonzero rows.

Since the set of possible vectors v for which the equation has a solution is im(f) and f is a linear transformation, the set of such v is a vector space.
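
Both conclusions in this example can be checked with a computer algebra system. The sketch below (not part of the original notes) uses SymPy in Python to confirm that the matrix is singular and to compute a basis of im(f) by the procedure used in the earlier examples; the names are our own.

from sympy import Matrix

M = Matrix([[1, 2,  2],
            [0, 2, -1],
            [1, 4,  1]])

print(M.det())                    # 0, so M is not invertible and f is not surjective

# Basis of im(f): nonzero rows of the rref of the matrix whose rows are the columns of M.
rref_m, _ = M.T.rref()
print([rref_m.row(i) for i in range(rref_m.rows) if any(rref_m.row(i))])
# [Matrix([[1, 0, 1]]), Matrix([[0, 1, 1]])]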



Extra Problems


Appendix A. Notes on tools

This section contains some useful information and comments about some of the tools you may employ in this course.

Aartifact

Aartifact is an interactive logic and algebra verifier developed within the Boston University Computer Science Department that runs inside a web browser.

Syntactic layout of an argument: Each formula in an argument starts on an unindented line (a line with no spaces at the beginning). Thus, it is possible to process multiple formulas simultaneously. See the example below.

2 + 3 = 5

∀ x, y ∈ R,
  x + y = y + x

∀ x ∈ R,
  0 + x = x

Each formula consists of one or more lines; the first line is unindented, while all subsequent lines in the formula are indented. Each new line after a quantifier represents a subformula; each new line is assumed to begin with an and logical operator that connects that subformula to the subformulas on the lines above it.

An implies logical operator should appear on its own line. All lines above implies are premises. All lines below it are the conclusions.

∀ w, x, y, z ∈ R,
  w = x          # this is a premise
  x = y          # this is a premise
  y = z          # this is a premise

  implies
  w = z          # this line is connected to the next
  w + 1 = z + 1  # ... by an "and" operation


Output: The results of a verification process are color-coded according to the status the verifier assigns to each formula (assumed, unknown, true, false, or contradictory). See the example below.

∀ x, y ∈ R,
  x = 0          # assumption (assumed to be true)

  implies
  x = y          # unknown (may be true or false)
  1 = 1          # true (due to logical and algebraic properties)
  1 = 2          # false
  2 = 1          # contradiction (both true and false)