Basic notions

Vector spaces

  • Defining axioms of vector spaces and properties derivable from them.

Linear combinations and bases

  • Definition of a linear combination.

  • Definitions of complete and linearly independent systems.

  • Definition of a basis: a system of vectors such that any vector in the space admits a unique representation as a linear combination of the basis vectors. This implies that a basis must be complete and linearly independent.

  • Any finite complete system contains a basis.

Linear transformations

  • Definition of LT
  • It is sufficient to know how a LT $T: V \to W$ acts on a basis of $V$ to know how $T$ acts on any vector in $V$. In $\mathbb{F}^n$, this implies that an LT is completely defined by its values on the standard basis of $\mathbb{F}^n$.
  • Considering a basis is not essential; one can use any generating set. So, a LT is completely defined by its values on a generating set.
  • Thus, LTs from $\mathbb{F}^n$ to $\mathbb{F}^m$ can be represented as multiplication by an $m \times n$ matrix (notice how LTs from $\mathbb{R}$ to $\mathbb{R}$ are represented by multiplication by real numbers, which are essentially $1 \times 1$ matrices).
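
A minimal numpy sketch of this fact (the values of the transformation on the standard basis are made up for illustration): the matrix whose columns are $T(e_1), \dots, T(e_n)$ reproduces $T$ on every vector.

```python
import numpy as np

# Values of a hypothetical LT T: F^3 -> F^2 on the standard basis e1, e2, e3.
T_e1 = np.array([1.0, 0.0])
T_e2 = np.array([2.0, -1.0])
T_e3 = np.array([0.0, 3.0])

# The matrix of T has T(e1), T(e2), T(e3) as its columns.
A = np.column_stack([T_e1, T_e2, T_e3])

x = np.array([1.0, 2.0, 3.0])
# By linearity, T(x) = x1*T(e1) + x2*T(e2) + x3*T(e3), which is exactly A @ x.
assert np.allclose(A @ x, x[0] * T_e1 + x[1] * T_e2 + x[2] * T_e3)
```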

Composition of linear transformations and matrix multiplication

Definition of matrix multiplication.

  • Properties of matrix multiplication
    • Distributivity: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$.
    • Scalar multiples can be taken out: $A(\alpha B) = \alpha (AB) = (\alpha A)B$.
    • Associativity: $(AB)C = A(BC)$, which follows from the associativity of composition of linear transformations.
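
A quick numpy check (random matrices, purely illustrative) that matrix multiplication represents composition of the corresponding linear transformations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # represents T1: F^3 -> F^2
B = rng.standard_normal((3, 4))   # represents T2: F^4 -> F^3
x = rng.standard_normal(4)

# Applying T2 and then T1 is the same as multiplying by the product AB.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```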

Trace and matrix multiplication

  • If $AB$ and $BA$ are both defined (say $A$ is $m \times n$ and $B$ is $n \times m$), then $\operatorname{trace}(AB) = \operatorname{trace}(BA)$.
    For the proof, consider two linear transformations, $T_1$ and $T_2$, acting from $M_{n \times m}$ to $\mathbb{F}$, defined by $T_1(X) = \operatorname{trace}(AX)$ and $T_2(X) = \operatorname{trace}(XA)$. To prove the theorem it is sufficient to show that $T_1 = T_2$. This can be done by showing that $T_1$ and $T_2$ take the same values on the standard basis (the matrix units $E_{jk}$) in $M_{n \times m}$.
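
A numeric spot check of the theorem with rectangular matrices (values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))
B = rng.standard_normal((5, 2))

# AB is 2x2 and BA is 5x5, but their traces coincide.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```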

Invertible transformations

Preliminaries

  • Let $A: V \to W$ be a linear transformation. We say that the transformation is left invertible if there exists a linear transformation $B: W \to V$ such that $BA = I_V$.
  • We say that $A$ is right invertible if there exists a linear transformation $C: W \to V$ such that $AC = I_W$.
  • Note that we did not assume the uniqueness of $B$ or $C$ here, and generally left and right inverses are not unique.
  • A LT is invertible if it is both left and right invertible.
    • left invertible $\Rightarrow$ the LT is injective (p50)
    • right invertible $\Rightarrow$ the LT is surjective (see the sketch after this list)
  • If a LT is invertible then its left and right inverses are unique, and coincide.
    • Corollary: A transformation $A$ is invertible iff there exists a unique transformation, denoted by $A^{-1}$, such that $A^{-1}A = I_V$ and $AA^{-1} = I_W$.
  • A matrix is invertible if its corresponding LT is invertible
  • Properties of invertible transformations
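
A small numpy sketch (the matrix is made up) of a transformation that is left invertible but not right invertible; the particular left inverse used, $(A^T A)^{-1} A^T$, is just one of many:

```python
import numpy as np

# A 3x2 matrix with linearly independent columns: injective, but not surjective.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# One possible left inverse: B = (A^T A)^{-1} A^T.
B = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(B @ A, np.eye(2))      # BA = I_2: left invertible
assert not np.allclose(A @ B, np.eye(3))  # this B is not a right inverse (none exists: A is not surjective)
```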

Isomorphisms

  • Isomorphic spaces can be considered as different representations of the same space, i.e., all properties and constructions involving vector space operations are preserved under isomorphisms.
  • If $A$ is an isomorphism, so is $A^{-1}$.
  • Isomorphisms map bases/linearly independent sets/linearly dependent sets/complete sets to bases/linearly independent sets/linearly dependent sets/complete sets.
    • Theorem: Let $A: V \to W$ be an isomorphism, and let $v_1, \dots, v_n$ be a basis in $V$. Then the system $Av_1, \dots, Av_n$ is a basis in $W$.
    • Proof: Let $w \in W$. Then $A^{-1}w \in V$. Since $v_1, \dots, v_n$ is a basis in $V$, $A^{-1}w$ must have a unique decomposition in their terms: $A^{-1}w = \sum_k \alpha_k v_k$. Multiplying by $A$ on both sides, we get $w = \sum_k \alpha_k Av_k$. Thus, the system $Av_1, \dots, Av_n$ is complete. To show that it is linearly independent, consider another decomposition of $w$ in their terms: $w = \sum_k \beta_k Av_k$. Multiplying both sides by $A^{-1}$, we get $A^{-1}w = \sum_k \beta_k v_k$. This gives $\beta_k = \alpha_k$ for all $k$, by uniqueness of the decomposition in the basis $v_1, \dots, v_n$. Note that both identities $AA^{-1} = I_W$ and $A^{-1}A = I_V$ were used here.
  • Basis to basis maps are isomorphisms: a LT that maps some basis in $V$ to a basis in $W$ is an isomorphism.

Invertibility and equations

Links the notion of invertibility with the existence and number of solutions.

  • A LT $A: V \to W$ is invertible iff the equation $Ax = b$ has a unique solution for every $b \in W$. Corollary: an $m \times n$ matrix is invertible iff its columns form a basis in $\mathbb{F}^m$, since the column vectors are obtained by applying the transformation to the standard basis in $\mathbb{F}^n$.

An $m \times n$ matrix is invertible $\Leftrightarrow$ its corresponding LT is invertible/an isomorphism $\Rightarrow$ it must map the standard basis in $\mathbb{F}^n$ to a basis in $\mathbb{F}^m$ $\Rightarrow$ the columns of the matrix must be linearly independent.

From pivot analysis, we will later show that the column vectors forming a basis in $\mathbb{F}^m$ requires $m = n$, so the matrix must be square.


Systems of linear equations

  • Row operations (which are equivalent to left multiplication by an invertible matrix) do not change the solution set of a system.
  • Echelon form:
    • All zero rows, if any, are below the nonzero rows.
    • For any nonzero row, its leading entry is to the right of the previous row’s leading entry.
  • Reduced echelon form:
    • All pivot entries are $1$.
    • All entries above the pivots are $0$. Note that all entries below the pivots are also $0$ because of the echelon form.
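
A short sympy sketch of row reduction to reduced echelon form (the matrix is arbitrary, chosen so that some columns are dependent):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 4],
            [2, 4, 0, 6],
            [1, 2, 2, 5]])

R, pivot_cols = A.rref()   # R is the reduced echelon form, pivot_cols the pivot column indices
print(R)                   # pivot entries are 1; entries above and below pivots are 0
print(pivot_cols)          # (0, 2): columns 1 and 3 carry no pivot (they depend on columns 0 and 2)
```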

Note

Let $b_1, \dots, b_n$ be a basis in $\mathbb{F}^n$. Let a linear transformation $T$ map these vectors to $w_1, \dots, w_n$ respectively. Let $B$ be the matrix with $b_1, \dots, b_n$ as its columns. Then $B^{-1}$ will transform coordinates from the standard basis to the basis $b_1, \dots, b_n$. So, the matrix of the transformation in the standard basis will be $WB^{-1}$, where $W$ is the matrix with $w_1, \dots, w_n$ as its columns.
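
A numpy sketch of this note with a made-up basis and made-up images: the matrix $WB^{-1}$ indeed sends each $b_k$ to $w_k$.

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # columns are the basis vectors b1, b2
W = np.array([[2.0, 0.0],
              [1.0, 3.0]])    # columns are the prescribed images w1 = T(b1), w2 = T(b2)

A = W @ np.linalg.inv(B)      # matrix of T in the standard basis

assert np.allclose(A @ B[:, 0], W[:, 0])   # A b1 = w1
assert np.allclose(A @ B[:, 1], W[:, 1])   # A b2 = w2
```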

Analyzing pivots

  • relation between column vectors and positions of pivots in the echelon form
    • pivot in every column $\Leftrightarrow$ column vectors are linearly independent $\Leftrightarrow$ a solution, if it exists, is unique (injectivity)
    • pivot in every row $\Leftrightarrow$ column vectors are complete $\Leftrightarrow$ a solution always exists (surjectivity)

Remember how the notions of pivots, linear independence/completeness, and the existence/uniqueness of solutions are linked.

  • Constraints on the sizes of linearly independent and complete sets in $\mathbb{F}^n$:

    • Any linearly independent set in $\mathbb{F}^n$ cannot have more than $n$ vectors in it (proof by the previous point).
    • Any complete set in $\mathbb{F}^n$ has at least $n$ vectors.
  • Constraints on the size of a basis in $\mathbb{F}^n$:

    • Any two bases in a vector space have the same number of vectors in them.
      Proof: consider two bases $B_1$ and $B_2$ in $V$ having $n$ and $m$ vectors respectively. WLOG $n \le m$. Define a basis to basis map $A: \mathbb{F}^n \to V$ mapping the standard basis in $\mathbb{F}^n$ to $B_1$. This makes it an isomorphism. The inverse of an isomorphism is also an isomorphism. Apply $A^{-1}$ on $B_2$. Isomorphisms map a basis to a basis. So, $A^{-1}(B_2)$ is a basis with $m$ vectors in $\mathbb{F}^n$. From the previous point, $m \le n$. So $m = n$.
    • Since the standard basis in $\mathbb{F}^n$ has $n$ vectors, it follows that any basis in $\mathbb{F}^n$ has $n$ vectors.

Corollaries about invertible matrices

  • A matrix is invertible iff its echelon form has a pivot in every column and every row.
    Proof: a matrix $A$ is invertible $\Leftrightarrow$ $Ax = b$ has a unique solution for every $b$, with $x$ and $b$ drawn from the appropriate spaces $\Leftrightarrow$ the echelon form of $A$ must have a pivot in every column (uniqueness) and every row (existence).
  • An invertible matrix must be square, since it must have a pivot in every column and every row.
  • If a square ($n \times n$) matrix is left invertible or right invertible, then it is invertible.

Finding $A^{-1}$ by row reduction

Any invertible matrix is row equivalent to the identity matrix, and thus any invertible matrix can be represented as a product of elementary matrices. In practice, row reducing the augmented matrix $[A \mid I]$ until the left half becomes $I$ leaves $A^{-1}$ in the right half.
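
A sympy sketch of this procedure on a small made-up matrix:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])
augmented = A.row_join(eye(2))   # the augmented matrix [A | I]

R, _ = augmented.rref()          # row reduce: left half becomes I, right half becomes A^{-1}
A_inv = R[:, 2:]

assert A_inv == A.inv()
assert A * A_inv == eye(2)
```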

Dimension

The dimension of a vector space is the number of vectors in a basis.

A vector space is finite-dimensional iff it has a finite spanning system.

If $\dim V = n$, there always exists an isomorphism $A: V \to \mathbb{F}^n$. This allows us to import quite a few results we proved for $\mathbb{F}^n$ to general finite dimensional vector spaces:

  • Any linearly independent system in a fdvsp $V$ cannot have more than $\dim V$ vectors in it.
    Prototypical proof: Let $v_1, \dots, v_m$ be a linearly independent system, and let $A: V \to \mathbb{F}^n$ be an isomorphism. Then, $Av_1, \dots, Av_m$ is a linearly independent system in $\mathbb{F}^n$, which implies $m \le n$.
  • Any complete system in a fdvsp $V$ must have at least $\dim V$ vectors in it.

A linearly independent system in a fdvsp can be completed to a basis.

General solution of a linear system

Fundamental subspaces of a matrix

The fundamental subspaces: $\operatorname{Ran} A$ (column space), $\operatorname{Ker} A$ (null space), $\operatorname{Ran} A^T$ (row space), $\operatorname{Ker} A^T$.

The rank of a matrix is the dimension of its range.

Computing fundamental subspaces and rank

Let $A$ be a matrix. Let $A_e$ be its echelon form.

  • The pivot columns of $A$ (the columns of $A$ in the pivot positions of $A_e$) give us a basis in $\operatorname{Ran} A$.
  • The pivot rows of $A_e$ give us a basis in $\operatorname{Ran} A^T$, the row space of $A$.
  • To find a basis in the null space of $A$, one needs to solve the homogeneous equation $Ax = 0$.
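
A sympy sketch computing all three subspaces on an arbitrary $3 \times 4$ example:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])

col_basis  = A.columnspace()   # basis of Ran A (the pivot columns of A)
row_basis  = A.rowspace()      # basis of the row space (pivot rows of the echelon form)
null_basis = A.nullspace()     # basis of Ker A, obtained by solving Ax = 0

print(len(col_basis), len(row_basis), len(null_basis))   # here: 2 2 2
```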

Why do the pivot columns of $A$ give us a basis in $\operatorname{Ran} A$?

First, notice that the pivot columns of $A_e$ are linearly independent. Since row reduction is left multiplication by an invertible matrix ($A_e = EA$ with $E$ invertible), it does not change linear independence. Thus, the pivot columns of $A$ are linearly independent. We now have to show that the pivot columns of $A$ span the column space. Let $Ae_{j_1}, \dots, Ae_{j_r}$ be the pivot columns of $A$. Consider a column vector $Ae_k$ of $A$. We have to show that $Ae_k = \sum_{s=1}^{r} \alpha_s Ae_{j_s}$ for some scalars $\alpha_s$.

Note that the pivot columns of $A_e$ span the column space of $A_e$. Thus, we have $A_e e_k = \sum_{s=1}^{r} \alpha_s A_e e_{j_s}$, i.e., $EAe_k = \sum_{s=1}^{r} \alpha_s EAe_{j_s}$.

Left multiplication by $E^{-1}$ gives us the required equation.


Why do the pivot rows of $A_e$ give a basis in the row space of $A$?

It is pretty obvious that the pivot rows of $A_e$ are a basis in the row space of $A_e$, and we know that row operations do not change the row space of a matrix.
(Proof that row operations do not change the row space of a matrix:
Note that $\operatorname{Ran}(TB) = \operatorname{Ran} T$ for any LT $T$ and any invertible $B$. Since the row space of $A$ is $\operatorname{Ran} A^T$, and row reduction replaces $A^T$ by $(EA)^T = A^T E^T$ with $E^T$ invertible, the row space is unchanged.)


Rank theorem

Since the dimensions of both the row space and column space are equal to the number of pivots in the RREF of the matrix, they are equal.

The rank-nullity theorem: Let $A$ be an $m \times n$ matrix.

  • $\operatorname{rank} A + \dim \operatorname{Ker} A = n$.
  • $\operatorname{rank} A + \dim \operatorname{Ker} A^T = m$ (using $\operatorname{rank} A = \operatorname{rank} A^T$).
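
A numeric check of both identities on the same illustrative matrix used above:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

assert A.rank() + len(A.nullspace()) == n      # rank A + dim Ker A = n
assert A.rank() + len(A.T.nullspace()) == m    # rank A + dim Ker A^T = m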

Change of coordinates

Matrix of a linear transformation

  • Let $T: V \to W$ be a linear transformation, and let $\mathcal{A} = \{a_1, \dots, a_n\}$ and $\mathcal{B} = \{b_1, \dots, b_m\}$ be bases in $V$ (dim $n$) and $W$ (dim $m$) respectively. The matrix of $T$ in these bases, denoted $[T]_{\mathcal{B}\mathcal{A}}$, is an $m \times n$ matrix which maps the coordinates $[v]_{\mathcal{A}}$ to the coordinates $[Tv]_{\mathcal{B}}$.
  • $[Tv]_{\mathcal{B}} = [T]_{\mathcal{B}\mathcal{A}} [v]_{\mathcal{A}}$.

Change of coordinate matrix

  • Let $\mathcal{A}$ and $\mathcal{B}$ be two bases in $V$. The matrix of the identity transformation in different bases gives the change of coordinates matrix. For example, $[I]_{\mathcal{B}\mathcal{A}}$ maps $[v]_{\mathcal{A}}$ to $[v]_{\mathcal{B}}$.
  • $[v]_{\mathcal{B}} = [I]_{\mathcal{B}\mathcal{A}} [v]_{\mathcal{A}}$.
  • The columns of $[I]_{\mathcal{B}\mathcal{A}}$ are just the coordinates in basis $\mathcal{B}$ of the basis vectors $a_1, \dots, a_n$ of $\mathcal{A}$.
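
A numpy sketch with two made-up bases of $\mathbb{R}^2$ stored as columns of matrices $A$ and $B$: the change of coordinates matrix from $\mathcal{A}$ to $\mathcal{B}$ is then $B^{-1}A$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # columns: basis vectors a1, a2
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])     # columns: basis vectors b1, b2

I_BA = np.linalg.inv(B) @ A    # change of coordinates matrix [I]_{BA}; column k is [a_k]_B

v_A = np.array([3.0, -1.0])    # coordinates of some vector v in basis A
v   = A @ v_A                  # the vector itself, in standard coordinates
v_B = I_BA @ v_A               # its coordinates in basis B

assert np.allclose(B @ v_B, v) # reconstructing v from its B-coordinates gives the same vector
```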

Change of coordinates for the matrix of a transformation: $[T]_{\mathcal{B}'\mathcal{A}'} = [I]_{\mathcal{B}'\mathcal{B}} \, [T]_{\mathcal{B}\mathcal{A}} \, [I]_{\mathcal{A}\mathcal{A}'}$.

Similar matrices: square matrices $P$ and $Q$ are similar if $Q = S^{-1} P S$ for some invertible $S$. The matrices of the same linear operator in two different bases are similar, with $S$ the change of coordinates matrix.
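
A numpy sketch of this: the matrix of an operator in a new (made-up) basis is $S^{-1} T S$, where the columns of $S$ are the new basis vectors in standard coordinates.

```python
import numpy as np

T_std = np.array([[2.0, 1.0],
                  [0.0, 3.0]])          # matrix of an operator T in the standard basis

S = np.array([[1.0, 1.0],
              [1.0, 2.0]])              # columns: a new basis of R^2

T_new = np.linalg.inv(S) @ T_std @ S    # matrix of the same operator in the new basis (similar to T_std)

v_new = np.array([1.0, -1.0])           # coordinates of a vector in the new basis
lhs = S @ (T_new @ v_new)               # apply T in new coordinates, then convert to standard
rhs = T_std @ (S @ v_new)               # convert to standard coordinates, then apply T
assert np.allclose(lhs, rhs)
```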

Determinants

Basic motivation: the $n$-dimensional volume of the parallelepiped determined by $n$ vectors in $\mathbb{R}^n$.

Basic properties of the determinant

  • Multilinearity
  • Antisymmetry
  • Normalization: $\det I = 1$.

Properties derivable from above properties:

  • The determinant does not change under column operations of the third kind (adding a multiple of one column to another column).

Determinants of diagonal and triangular matrices

Determinant of a diagonal/triangular matrix is equal to the product of its diagonal entries.

Computing the determinant for a general matrix: Row reduce it to a triangular form.

$\det A = 0$ iff $A$ is not invertible.
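
A small Python sketch of the row-reduction approach mentioned above (helper name and test matrix are ours): eliminate to an upper triangular matrix, track the sign changes from row exchanges, and multiply the diagonal.

```python
import numpy as np

def det_by_row_reduction(A):
    """Determinant via Gaussian elimination to upper triangular form.

    Row exchanges flip the sign; adding a multiple of one row to another
    (a third-kind operation) leaves the determinant unchanged; the result
    is the signed product of the diagonal of the triangular matrix.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        pivot = k + np.argmax(np.abs(U[k:, k]))
        if np.isclose(U[pivot, k], 0.0):
            return 0.0                        # no pivot in this column: A is not invertible
        if pivot != k:
            U[[k, pivot]] = U[[pivot, k]]     # row exchange changes the sign
            sign = -sign
        for i in range(k + 1, n):
            U[i] -= (U[i, k] / U[k, k]) * U[k]
    return sign * float(np.prod(np.diag(U)))

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
assert np.isclose(det_by_row_reduction(A), np.linalg.det(A))
```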

Formal definition and uniqueness

ALG1_L16

Cofactor expansions

$\det A = \sum_{j=1}^{n} a_{ij} (-1)^{i+j} \det A_{ij}$ (cofactor expansion along row $i$; similarly along any column), where $A_{ij}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting row $i$ and column $j$ of $A$.

LADW derives the cofactor expansion formula from the properties of the determinant. This has the advantage of importing all previously known properties of the determinant directly to cofactor expansions, and also easy proofs that cofactor expansions along any row or column are equal.

On the other hand, if we are directly presented with the cofactor expansion formula, we will have to prove that it satisfies the properties of the determinant, and that cofactor expansions along different rows and columns are equal. Here’s a roadmap:

Consider the specific formula for cofactor expansion along column-1.

Prove antisymmetry, linearity for ROWS. Thus, we have shown that CF-C1 computes the determinant of the row vectors of a matrix.

This allows us to import all the properties we have proved abstractly for determinants, particularly, the fact that CF-C1 of $A$ is equal to CF-C1 of $A^T$ (both equal $\det A = \det A^T$). Thus, we get that cofactor expansion along row 1 also computes the determinant. We can proceed to use the properties of the determinant with respect to the column vectors. This allows us to swap the first column with any arbitrary column, and change the sign of the determinant. We can now go back to the cofactor expansion along column 1 (the sign change due to the column exchange is cancelled by the sign change in the cofactors when expanding along column 1). Thus, cofactor expansion along any column gives the determinant.
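
A small recursive Python sketch of CF-C1 (helper name and test matrix are ours), checked against numpy:

```python
import numpy as np

def det_cf_c1(A):
    """Determinant by cofactor expansion along column 1 (CF-C1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, j, axis=0), 0, axis=1)   # delete row j and column 0
        total += (-1) ** j * A[j, 0] * det_cf_c1(minor)         # sign (-1)^(j+1) in 1-based indexing
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
assert np.isclose(det_cf_c1(A), np.linalg.det(A))
```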


Cofactor formula for the inverse of a matrix

If $A$ is an invertible matrix and $C$ is its cofactor matrix, then $A^{-1} = \frac{1}{\det A} C^T$.

Since $A$ is invertible, the equation $Ax = b$ has a unique solution $x = A^{-1}b = \frac{1}{\det A} C^T b$.

Note that $x$ is the vector with entries $x_k = \frac{\det B_k}{\det A}$ (Cramer's rule),

where $B_k$ is the matrix obtained on replacing column $k$ of $A$ with the vector $b$.
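
A Python sketch checking both the cofactor formula for the inverse and Cramer's rule on a made-up $3 \times 3$ system (helper name is ours):

```python
import numpy as np

def cofactor_matrix(A):
    """C[i, j] = (-1)^(i+j) * det of A with row i and column j deleted."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

C = cofactor_matrix(A)
assert np.allclose(C.T / np.linalg.det(A), np.linalg.inv(A))   # A^{-1} = C^T / det A

x = np.linalg.solve(A, b)
for k in range(3):
    B_k = A.copy()
    B_k[:, k] = b                                              # replace column k with b
    assert np.isclose(x[k], np.linalg.det(B_k) / np.linalg.det(A))   # Cramer's rule
```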


Spectral theory

Diagonalization

An $n \times n$ matrix $A$ with entries in $\mathbb{F}$ admits a representation $A = SDS^{-1}$, where $D$ is a diagonal matrix and $S$ is an invertible one, iff there exists a basis in $\mathbb{F}^n$ of eigenvectors of $A$. Moreover, in this case the diagonal entries of $D$ are the eigenvalues and the columns of $S$ are the corresponding eigenvectors.

Functions of diagonal matrices are easy to compute.

Eigenvectors with distinct eigenvalues are linearly independent. Note that this implies that the eigenspaces of a linear operator are linearly independent.

If a linear operator on an $n$-dimensional space has exactly $n$ distinct eigenvalues, then it is diagonalizable (since this implies it also has $n$ eigenvectors with different eigenvalues each, so they can form a basis).

An operator having exactly $n$ eigenvalues (counting multiplicities) is diagonalizable iff the geometric multiplicity and algebraic multiplicity of each of its eigenvalues coincide.

If a real matrix admits a complex factorization $A = SDS^{-1}$ and all of its eigenvalues are real, then it admits a real factorization (i.e., $S$ and $D$ can be chosen real).
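
A numpy sketch of diagonalization on a made-up matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 5 and 2: distinct, so diagonalizable

eigvals, S = np.linalg.eig(A)        # columns of S are eigenvectors
D = np.diag(eigvals)

assert np.allclose(A, S @ D @ np.linalg.inv(S))                 # A = S D S^{-1}
# Functions of diagonal matrices are easy, e.g. powers: A^5 = S D^5 S^{-1}.
assert np.allclose(np.linalg.matrix_power(A, 5),
                   S @ np.diag(eigvals ** 5) @ np.linalg.inv(S))
```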


Inner product spaces

Inner product and norm in $\mathbb{C}^n$

We first define the norm in $\mathbb{C}^n$ by $\|x\|^2 = \sum_k |x_k|^2$, and then define the inner product $\langle x, y \rangle = \sum_k x_k \overline{y_k}$ so that this norm is induced by our inner product: $\|x\|^2 = \langle x, x \rangle$.

The inner product we just defined has the following properties:

  • Conjugate symmetry: $\langle x, y \rangle = \overline{\langle y, x \rangle}$.
  • Linearity in the first slot: $\langle \alpha x + \beta y, z \rangle = \alpha \langle x, z \rangle + \beta \langle y, z \rangle$.
  • Non-negativity and non-degeneracy: $\langle x, x \rangle \ge 0$, with $\langle x, x \rangle = 0$ iff $x = 0$.

More generally, for any complex or real vector space $V$, an inner product on $V$ is a function from $V \times V$ to $\mathbb{C}$ (or $\mathbb{R}$) such that the above properties are satisfied. A space together with an inner product is called an inner product space. An inner product induces a norm: $\|v\| = \sqrt{\langle v, v \rangle}$.

Properties of the inner product:

  • Conjugate linearity in the second slot: $\langle x, \alpha y + \beta z \rangle = \bar{\alpha} \langle x, y \rangle + \bar{\beta} \langle x, z \rangle$.

The Cauchy–Schwarz inequality: $|\langle x, y \rangle| \le \|x\| \, \|y\|$.
Triangle inequality: $\|x + y\| \le \|x\| + \|y\|$.
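
A numpy sketch with the standard inner product on $\mathbb{C}^n$ (random vectors; the helper names are ours), checking conjugate symmetry and both inequalities numerically:

```python
import numpy as np

def inner(x, y):
    """Standard inner product on C^n: <x, y> = sum_k x_k * conj(y_k)."""
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(2)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

norm = lambda v: np.sqrt(inner(v, v).real)   # the norm induced by the inner product

assert np.isclose(inner(x, y), np.conj(inner(y, x)))     # conjugate symmetry
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12     # Cauchy-Schwarz
assert norm(x + y) <= norm(x) + norm(y) + 1e-12          # triangle inequality
```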