Let . Let be a subspace of with basis vectors , where . If is the kernel of , find all possible matrices .

Method 1

The column space must be dimensional. If we assume is in RREF, then must have pivots. Try to solve for the non-pivot cells in every possible arrangement of the pivots. One of them should work. This can get very tedious very quickly.

Example 1.1

Basis of is and , .

must have 1 pivot. So, is of the form

Try to solve for the unknowns in each possibility. For example, for the first one we get

and satisfy the equations. It can be verified that the other two possible forms of do not work. Thus,

is one possible solution for . Since performing row operations does not change the kernel of a matrix, any matrix obtained by performing row operations on the above matrix is also a solution. Conversely, any matrix of the same size with this kernel has the same row space (the row space is the orthogonal complement of the kernel), and is therefore obtainable from this one by row operations, so these are all the solutions.
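
As a concrete illustration, here is a small sympy sketch of this method. The subspace used below, V = span{(1, 0, -1), (0, 1, 2)} in R^3, is a hypothetical stand-in rather than the basis from the example above; with a two-dimensional kernel in R^3, A needs exactly one pivot.

```python
import sympy as sp

# Hypothetical stand-in for the example: the kernel should be
# V = span{(1, 0, -1), (0, 1, 2)} in R^3, so A needs exactly one pivot.
# Of the three candidate RREF shapes, try the one with the pivot in column 1,
# leaving the non-pivot entries a and b unknown.
a, b = sp.symbols('a b')
A = sp.Matrix([[1, a, b],
               [0, 0, 0],
               [0, 0, 0]])

basis = [sp.Matrix([1, 0, -1]), sp.Matrix([0, 1, 2])]

# Requiring A v = 0 for every basis vector gives linear equations in a and b.
equations = [entry for v in basis for entry in A * v if entry != 0]
solution = sp.solve(equations, [a, b], dict=True)[0]
print(solution)          # {a: -2, b: 1}

A = A.subs(solution)
print(A.nullspace())     # two vectors spanning the same plane as `basis`
```

The other candidate pivot placements can be checked the same way by moving the 1 and the unknown entries.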

Method 2

Let be in RREF. Recall how we found the null space of a matrix here. We will reverse-engineer that process. Note that, given the three basis vectors we derived there, it is possible to write down the matrix we started with. Also note that those three vectors have a specific structure. If you combine them to form a matrix and take its transpose, you get a kind of “reverse reduced row echelon form”:

We will try to achieve this form using row operations. Remember, row operations do not change the span of the row vectors.
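
To make that shape concrete, here is a small illustration with a made-up matrix (the matrix below and its dimensions are chosen only to show the pattern; it is not the matrix from the earlier post). If

$$A = \begin{pmatrix} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{pmatrix},$$

then the usual null space computation gives the basis vectors

$$\begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} -3 \\ 0 \\ -4 \\ 1 \end{pmatrix},$$

and stacking them as rows (equivalently, writing them as columns and transposing) gives

$$\begin{pmatrix} -2 & 1 & 0 & 0 \\ -3 & 0 & -4 & 1 \end{pmatrix}.$$

The last nonzero entry of each row is a 1, and it sits in a column (a free column of $A$) whose other entries are all zero: reduced row echelon form read from the right.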

Example 2.1

Example: basis of is and , .

If we package the basis of as row vectors in a matrix, we get

We will now, using row operations, bring it to the form mentioned above.

So, and are a basis for in the form we required. We can now express the elements of like so:

Let . Then, all of the form

should satisfy , i.e.,

Clearly, is a pivot variable, since it is determined by and , which are free variables. It is easy to work out now. We get

Again, we are free to perform row operations on this to obtain solutions that are not in RREF form.
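
Here is a sympy sketch of the same recipe, once more using the hypothetical kernel V = span{(1, 0, -1), (0, 1, 2)} in R^3 from the Method 1 sketch rather than the basis of this example. The column-reversal trick is just one convenient way of reaching the “reverse RREF” by row operations; from it we read off the free and pivot columns and rebuild A.

```python
import sympy as sp

# Hypothetical kernel as before: V = span{(1, 0, -1), (0, 1, 2)} in R^3.
M = sp.Matrix([[1, 0, -1],
               [0, 1, 2]])          # rows = basis vectors of V

# "Reverse RREF": reverse the column order, row-reduce, then undo the reversal.
flip = list(range(M.cols - 1, -1, -1))
R = M[:, flip].rref()[0][:, flip]
print(R)                            # rows (-1, 0, 1) and (2, 1, 0)

# Each row's last nonzero entry is a 1 sitting in a free column of A;
# every other column will be a pivot column of A.
free_cols = [max(c for c in range(R.cols) if R[r, c] != 0) for r in range(R.rows)]
pivot_cols = [c for c in range(M.cols) if c not in free_cols]

# Rebuild A in RREF: put a 1 in each pivot column; in each free column, put
# minus the pivot-column entry of the reverse-RREF row owning that free column.
A = sp.zeros(len(pivot_cols), M.cols)
for i, p in enumerate(pivot_cols):
    A[i, p] = 1
    for r, f in zip(range(R.rows), free_cols):
        A[i, f] = -R[r, p]
print(A)                            # the single row (1, -2, 1)
print(A.nullspace())                # spans V, as required
```

This reproduces the same matrix as the Method 1 sketch, as it should.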

Example 2.2

Example: basis of is , .
Note that is already in “reverse reduced row echelon form”. Thus, we have

Note that is the only free variable, with and being pivot variables. Thus, we get

Method 3

Here, we will first find the set of all vectors that are perpendicular to . This set is a subspace (if in doubt, prove it). We will find a basis of this subspace. Then, we will use the fact that any vector in is mapped to by (i.e., is a member of ) iff the dot product of with every member of the basis is zero.
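
A sympy sketch of this method, again with the hypothetical kernel V = span{(1, 0, -1), (0, 1, 2)} in R^3 rather than the basis from the examples below:

```python
import sympy as sp

# Hypothetical kernel: V = span{(1, 0, -1), (0, 1, 2)} in R^3.
M = sp.Matrix([[1, 0, -1],
               [0, 1, 2]])          # rows = basis vectors of V

# A vector is perpendicular to V exactly when its dot product with every basis
# vector of V is zero, i.e. exactly when it lies in the null space of M.
perp_basis = M.nullspace()
print(perp_basis)                   # one vector: (1, -2, 1)

# Use the perpendicular basis vectors as the rows of A: then A x = 0 exactly
# when x is orthogonal to all of them, i.e. exactly when x is in V.
A = sp.Matrix.vstack(*[w.T for w in perp_basis])
print(A)                            # the single row (1, -2, 1)
print(A.nullspace())                # spans V
```

Once again this lands on the same matrix as the earlier sketches.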

Example 3.1

Example: basis of is and , .

First, we find the subspace perpendicular to . Let be in . Then, we must have

Thus, we have

Thus, is a basis of . Note that the dimension of will always equal the rank of the matrix (the dimension of its column space). So we can arbitrarily pick vectors that form a basis of the codomain, and scale each one by the dot product of the input vector with one of the basis vectors of . In this case, we have
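
In general, writing $w_1, \dots, w_k$ for a basis of the perpendicular subspace and using the standard basis of the codomain, the matrix built this way is

$$A = \begin{pmatrix} w_1^T \\ \vdots \\ w_k^T \end{pmatrix}, \qquad A x = \begin{pmatrix} w_1 \cdot x \\ \vdots \\ w_k \cdot x \end{pmatrix},$$

so each dot product $w_i \cdot x$ is the coefficient of one basis vector of the codomain, and $Ax = 0$ exactly when $x$ is perpendicular to every $w_i$, that is, exactly when $x$ lies in the original subspace.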

Example 3.2

Find one matrix such that the basis for is , and then describe all matrices with this property.

Let be the kernel of . Let be the set of all vectors perpendicular to . Let .
Then, we have

Thus, is a basis for . Hence, we have

Any matrix obtained by performing row operations on this matrix will also have the same kernel. Thus, matrices whose rows are of the form will have as their kernel.
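
As a quick sanity check of that claim, here is a small sketch with another hypothetical kernel (the line spanned by (1, 1, 1) in R^3, not the kernel from this exercise): performing row operations amounts to multiplying on the left by an invertible matrix, and that leaves the null space unchanged.

```python
import sympy as sp

# A matrix whose kernel is the hypothetical line span{(1, 1, 1)} in R^3.
A = sp.Matrix([[1, 0, -1],
               [0, 1, -1]])
print(A.nullspace())         # spanned by (1, 1, 1)

# Row operations = left-multiplication by an invertible matrix.
E = sp.Matrix([[1, 2],
               [0, 1]])      # "add 2 times row 2 to row 1"
print((E * A).nullspace())   # still spanned by (1, 1, 1)
```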