## Math 390 Problem List: Spring 2021, University of Puget Sound

“You will get as much out of this course as you put into it.”―RAB

Dates indicate roughly which class session's material each exercise draws on. See the online course calendar for an indication of the material covered.

###### Exercise 1. (January 19).

The vector space axiom, \(1\vect{x}=\vect{x}\text{,}\) is necessary: it cannot be derived from the other nine axioms. For a proof, work FCLA Exercise VS.M10.

###### Exercise 2. (January 21).

Let \(M_{24}\) be the set of all \(2\times 4\) matrices.

###### (a)

Endow \(M_{24}\) with “natural” definitions of vector equality, vector addition, and scalar multiplication. (What would these be?) Prove that the result is a vector space.

###### (b)

Consider the subset \(W\) of \(M_{24}\) consisting of matrices

such that

Prove that \(W\) is a subspace of \(M_{24}\text{.}\)

###### (c)

Compute the dimension of \(W\text{.}\)

###### (d)

Construct a linearly independent subset of \(W\) with four vectors that does not span \(W\text{.}\)

###### (e)

Construct a basis of \(W\) that is substantially different from the one you used to determine the dimension.

###### (f)

Construct a linearly dependent subset of \(W\) that also spans \(W\text{.}\)

###### Exercise 3. (January 22).

We discussed in class that the preimages of a linear transformation form a partition of its domain. An important principle in abstract algebra is that every partition gives rise to an equivalence relation, and vice versa. Matrix similarity is an equivalence relation, a topic in FCLA (linked in these exercises), so you have already seen one example. Exercise Group LT.T30–LT.T31 explores these ideas in a different order.

###### Exercise 4. (January 22).

Consider the linear transformation \(\ltdefn{T}{P_4}{\complex{5}}\) defined by

###### (a)

Prove that \(T\) is invertible (without using a matrix representation).

###### (b)

Compute a formula for an output of the inverse linear transformation (without using a matrix representation). In other words, find an expression for the right-hand side of \(\lteval{\ltinverse{T}}{\colvector{u\\v\\w\\x\\y}}=\text{.}\)

###### (c)

Compose \(T\) and \(\ltinverse{T}\) (in both orders) to verify that you get the correct identity linear transformation in each case. (Why are there *two* different identity linear transformations in this problem?)

###### Exercise 5. (January 22).

Work IVLT.M50 in the style of our example in class.

###### Exercise 6. (January 25).

\(E\) is a basis for \(\complex{3}\text{,}\) and \(F\) is a basis for \(P_1\text{,}\) the vector space of polynomials with degree at most 1. Find the matrix representation of \(T\) relative to \(E\) and \(F\text{.}\)

Find the key for Exam R for 290 Spring 2019.

###### Exercise 7. (January 28).

The following computational exercises have been adapted for Math 390. Statements are sometimes adjusted, and solutions are always 390-specific.

###### Exercise 8. (January 28).

The following exercises are about triangular matrices, and are new in FCLA.

###### Exercise 9. (February 1).

###### Exercise 10. (February 1).

Verify that \(A\) is singular, and thus zero is an eigenvalue. Compute the generalized eigenspace of this eigenvalue. Come back later, apply Theorem GEB, and concoct an argument that zero is the *only* eigenvalue of \(A\text{.}\) What rather surprising thing do you discover about this matrix in the course of finding its generalized eigenspace?

```
A = matrix(QQ, [
    [ 0,  3,  0, -2, -2, -2,  1],
    [ 6,  1,  6,  1, -5,  2,  3],
    [-2, -3, -2,  2,  4,  1, -2],
    [ 4,  1,  4,  0, -4,  1,  2],
    [-1, -1, -1,  1,  2,  0, -1],
    [ 6,  1,  6,  1, -5,  2,  3],
    [-4, -3, -4,  1,  5,  0, -3]
])
```
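Not a solution to the exercise, but the mechanics can be previewed without Sage. A sketch in plain Python with exact `Fraction` arithmetic, run on a hypothetical toy matrix rather than the \(A\) above: the generalized eigenspace of \(0\) is the kernel of a high enough power, so we track the nullities of successive powers until they stabilize.

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination with exact Fraction arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Hypothetical toy matrix (NOT the exercise's A): the nullities of the powers
# grow until they stabilize at the dimension of the generalized eigenspace of 0.
T = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
n = len(T)
P, nullities = T, []
for k in range(1, n + 1):
    nullities.append(n - rank(P))   # nullity of T^k
    P = matmul(P, T)
print(nullities)  # [1, 2, 3]
```

The same rank-of-powers bookkeeping, done in Sage on the exercise's matrix, reveals the "surprising thing" the exercise asks about.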

###### Exercise 11. (February 5).

FCLA Exercise SD.M60 is the same as the Sage worksheet from class. Redo it using vectors for \(\vec{x}_2\) and \(\vec{x}_4\) that you think nobody else in class will use. Full credit if your vectors are correct and nobody else uses them. Use `sage_input(S)` to duplicate your similarity matrix, and email it to me.

###### Exercise 12. (February 5).

FCLA Exercise SD.M61 will take some creativity and experimentation, but will make the material on Jordan canonical form much easier to understand.

###### Exercise 13. (February 11).

FCLA Exercise CB.C40 will provide a good review and consolidate our transition to a linear transformation point of view. Study Example ELTT for hints, rather than using the provided solution.

###### Exercise 14. (February 12).

See three exercises in SCLA about computing complements and orthogonal complements.

###### Exercise 15. (February 15).

FCLA Exercise MR.T80 is absolutely fundamental. Please bring it up during one of our problem sessions.

###### Exercise 16. (February 15).

In class we *derived* the main formula of Theorem EMP from a very natural idea: composition of linear “morphisms” (aka linear transformations). Suppose that we therefore feel that this entry-by-entry expression for the result of matrix multiplication should be taken as our *definition* of matrix multiplication. In FCLA, Definition MM defines matrix multiplication as repeated matrix-vector products (linear combinations, really). Convert Definition MM to a theorem, and then prove this result as a consequence of our new (entry-by-entry) definition of matrix multiplication.
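The equivalence to be proved can be sanity-checked numerically first. A plain-Python sketch (hypothetical matrices) computing the product both ways: by the entry-by-entry formula of Theorem EMP, and by column-at-a-time matrix-vector products in the spirit of Definition MM.

```python
# Entry-by-entry: [AB]_{ij} = sum_k A_{ik} B_{kj}  (Theorem EMP style)
def mult_entrywise(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Definition MM style: column j of AB is the matrix-vector product
# of A with column j of B (a linear combination of the columns of A).
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mult_by_columns(A, B):
    cols = [matvec(A, [B[k][j] for k in range(len(B))]) for j in range(len(B[0]))]
    return [list(row) for row in zip(*cols)]  # reassemble columns into rows

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 0], [0, 1], [2, 2]]
print(mult_entrywise(A, B) == mult_by_columns(A, B))  # True
```

The exercise asks for the reverse of this check as a proof: take the entry-by-entry formula as the definition and derive the column-at-a-time description as a theorem.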

###### Exercise 17. (February 22, 23, 25).

Consider the linear transformation \(T:\mathbf{C}^{12}\rightarrow\mathbf{C}^{12}\) given by \(T(\vec{x})=A\vec{x}\) with

```
A = matrix(QQ, [
    [ 4, -18, -11,  33, -24,  11, -18,  20,  20, -10,  -4, -1],
    [-3,  13,   6, -24,  13,  -4,  12, -15, -13,   3,   5, -1],
    [ 0,  20,  -3,  -4,  19, -23,   7,  -7, -28,  16,  -7, 13],
    [ 0,  13,  -5,   7,  11, -17,   1,  -1, -20,  14,  -8, 12],
    [ 0,  11,  -9,  15,   7, -16,  -3,   3, -19,  13, -10, 14],
    [ 0,   2,  -9,  24,  -4,  -8,  -9,   9,  -7,   7, -10, 11],
    [-2,  -4,   3,   3,   1,  -2,  -2,   2,   3,   2,  -2,  1],
    [-5,  29,  12, -45,  37, -23,  25, -28, -34,  15,   4,  5],
    [ 0, -21,   1,   8, -21,  23,  -9,   9,  28, -16,   6, -12],
    [ 0, -16,   5,  -2, -15,  20,  -3,   3,  24, -14,   8, -13],
    [ 0,  23,   1, -19,  27, -23,  14, -14, -30,  17,  -2,  9],
    [ 3, -18, -12,  39, -20,   5, -20,  23,  16,  -4,  -9,  4]
])
```

The following parts will go from linear transformation to Jordan canonical form in steps.

###### (a)

Compute the eigenvalues of \(T\) from the simplest possible matrix representation.

###### (b)

Compute the kernels of the powers of \(T-\lambda I\) and determine the index of each eigenvalue. Continue to compute in Sage with the “obvious” matrix representation you are already using.

###### (c)

Compute the generalized eigenspace of each eigenvalue. Use the algebraic multiplicities to be sure you have not missed any eigenvalues.

###### (d)

Compute the Jordan canonical form purely as a combinatorial exercise, based on the dimensions of the kernels of the powers above.
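The counting in this part fits in a few lines. A sketch in plain Python with hypothetical nullity data (not the matrix above), using the standard rule that the number of Jordan blocks of size at least \(k\) for \(\lambda\) is \(\dim\ker(T-\lambda I)^{k}-\dim\ker(T-\lambda I)^{k-1}\text{:}\)

```python
def jordan_block_sizes(nullities):
    """Jordan block sizes for one eigenvalue, given the nullities of
    (T - lambda I)^1, (T - lambda I)^2, ... up to the index of lambda."""
    # diffs[k-1] = number of blocks of size >= k
    diffs = [nullities[0]] + [b - a for a, b in zip(nullities, nullities[1:])]
    sizes = []
    for k in range(len(diffs), 0, -1):
        # blocks of size exactly k = (blocks >= k) - (blocks >= k+1)
        exactly = diffs[k - 1] - (diffs[k] if k < len(diffs) else 0)
        sizes.extend([k] * exactly)
    return sizes

# Hypothetical data: nullities 3, 5, 6 for the first three powers
print(jordan_block_sizes([3, 5, 6]))  # [3, 2, 1]
```

One block of size 3, one of size 2, one of size 1; the sizes sum to the final nullity, the dimension of the generalized eigenspace.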

###### (e)

Compute Jordan chains (generalized eigenvectors) which together form a basis of \(\mathbf{C}^{12}\) and provide a basis for a matrix representation in Jordan canonical form.

###### (f)

Perform the similarity transformation (matrix operation) that verifies the correctness of your answer to the previous part.

###### Exercise 18. (February 26).

The Sage worksheet from class left the eigenvalue \(\lambda = -1\) undone. Obtain a basis for the generalized eigenspace of \(\lambda = -1\) that yields a matrix representation composed of Jordan blocks. You could use an arbitrary basis for the generalized eigenspace of \(\lambda = 2\) when doing the similarity transformation that serves as a check on your work.

###### Exercise 19. (March 1).

Compute the minimal polynomial of \(A\) without using Sage's `.minpoly()` method. Compute the rational canonical form, and factor the polynomial of each companion matrix, observing the divisibility condition.

```
A = matrix(QQ, [
    [-33,  16, -34, -18,  -8,  18,  0],
    [-27,  13, -38, -20, -10,  22,  0],
    [ 84, -41,  63,  33,  16, -28, -3],
    [-36,  18, -23, -11, -10,   8,  5],
    [-70,  34, -60, -32, -16,  29,  3],
    [ 66, -32,  53,  28,  10, -25,  1],
    [-58,  28, -42, -23, -12,  19,  4]
])
```
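For the first task, one Sage-free strategy is to find the least power \(A^k\) that is a linear combination of the lower powers \(I, A, \ldots, A^{k-1}\text{;}\) the coefficients of that dependence give the monic minimal polynomial. A hedged sketch in plain Python with exact `Fraction` arithmetic, demonstrated on toy matrices rather than the \(A\) above:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve_dependence(vecs, target):
    """Solve sum_i c_i * vecs[i] = target exactly, or return None."""
    n, m = len(target), len(vecs)
    # Augmented system: columns are the vecs, last column is the target.
    M = [[Fraction(vecs[j][i]) for j in range(m)] + [Fraction(target[i])]
         for i in range(n)]
    piv, r = [], 0
    for c in range(m):
        p = next((i for i in range(r, n) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        piv.append(c)
        r += 1
    if any(row[-1] != 0 and all(x == 0 for x in row[:-1]) for row in M):
        return None  # inconsistent: target is not in the span
    coeffs = [Fraction(0)] * m
    for i, c in enumerate(piv):
        coeffs[c] = M[i][-1]
    return coeffs

def minimal_polynomial(A):
    """Coefficients c_0, ..., c_k (low degree first) of the monic minimal
    polynomial: the least k with A^k in the span of I, A, ..., A^(k-1)."""
    n = len(A)
    I = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    powers, P = [I], [row[:] for row in A]
    while True:
        flat = [x for row in P for x in row]          # vectorize A^k
        basis = [[x for row in Q for x in row] for Q in powers]
        c = solve_dependence(basis, flat)
        if c is not None:
            return [-x for x in c] + [Fraction(1)]    # monic: x^k - sum c_i x^i
        powers.append(P)
        P = matmul(P, A)

# Toy examples (NOT the exercise's matrix)
print(minimal_polynomial([[2, 0], [0, 2]]))  # x - 2
print(minimal_polynomial([[0, 1], [0, 0]]))  # x^2
```

The same vectorize-and-look-for-dependence idea works in Sage on the \(7\times 7\) matrix above, with `.right_kernel()` doing the elimination.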

###### Exercise 20. (March 1).

This problem requires knowledge of basic group theory. Suppose \(T:V\rightarrow V\) is a linear transformation and \(W\) is a \(T\)-invariant subspace of \(V\text{.}\) Explain why the quotient vector space \(V/W\) makes sense. Define a new linear transformation \(T^\ast\) on \(V/W\) that is induced by \(T\text{.}\) (Part of the problem is figuring out what this means.) Now show that \(T^\ast\) is well-defined.

###### Exercise 21. (March 4).

Compute the LU decomposition of \(A\) by performing row operations (all of the third type) to convert \(A\) into an upper-triangular matrix \(U\text{,}\) and then determine \(L\text{.}\) Once you know what the decomposition is, apply the formulas of SCLA Theorem 2.1.4 to obtain it again.

```
A = matrix(QQ, [
    [-1,  1,  0, -3, -6],
    [ 3, -2, -3,  4, -5],
    [ 2, -2, -1,  5,  6],
    [ 0, -1, -2,  1, -2],
    [ 4, -1, -3,  3, -8]
])
```
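The bookkeeping looks like this in miniature: a plain-Python sketch (Doolittle-style, no row swaps, exact `Fraction` arithmetic, on a hypothetical \(3\times 3\) example rather than the \(A\) above) that records each type-3 row-operation multiplier as an entry of \(L\text{.}\)

```python
from fractions import Fraction

def lu_no_pivot(A):
    """LU without row swaps: returns (L, U) with L unit lower-triangular
    and U upper-triangular, assuming no zero pivots are encountered."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for c in range(n - 1):
        for i in range(c + 1, n):
            L[i][c] = U[i][c] / U[c][c]   # multiplier used in the type-3 row op
            U[i] = [a - L[i][c] * b for a, b in zip(U[i], U[c])]
    return L, U

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Hypothetical 3x3 example (not the exercise's matrix)
A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu_no_pivot(A)
print(matmul(L, U) == A)  # True
```

Each entry \(L[i][c]\) is exactly the multiplier you write down when you clear entry \((i,c)\) by hand, which is why reassembling \(LU\) undoes the elimination.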

###### Exercise 22. (March 8).

See four exercises in SCLA about Householder matrices.

- SCLA 1.5.6
- SCLA 1.5.7
- SCLA 1.5.8
- SCLA 1.5.9

Solutions (of different styles) can be found in GVL and TB.
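The object behind all four exercises is the reflection \(H = I - \frac{2}{\vect{v}^t\vect{v}}\vect{v}\vect{v}^t\text{.}\) A small sketch (plain Python, floating point, hypothetical vector) of building \(H\) and checking that it is symmetric and its own inverse:

```python
def householder(v):
    """H = I - 2 v v^T / (v^T v): reflection across the hyperplane
    orthogonal to the Householder vector v."""
    n = len(v)
    vv = sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / vv
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

H = householder([1.0, 2.0, 2.0])
H2 = matmul(H, H)
# H is symmetric, and H^2 = I since a reflection is its own inverse
print(all(abs(H2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

These two properties (symmetric and involutory, hence unitary in the real case) are the heart of the SCLA exercises above.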

###### Exercise 23. (March 11).

SCLA informally describes an algorithm for obtaining a full QR decomposition with a sequence of Householder reflections. Build a \(4\times 4\) matrix of random integers using `M = random_matrix(QQ, 4, algorithm="unimodular", upper_bound=9)`, which will produce a matrix with determinant \(1\text{,}\) and hence nonsingular. (Try creating a few matrices before you choose one to work with; too many zeros or ones and it will not be as interesting.) Convert your matrix to `QQbar` with `A = M.change_ring(QQbar)`. Now build the Householder reflections necessary to create \(R\text{,}\) form their product, and verify that you have a QR decomposition (including that `Q` is unitary).

Hints: Be sure all your vectors and matrices are over `QQbar` and stay that way (and do not become symbolic matrices with `sqrt()` in them). The Sage matrix method `.change_ring()` can help with this, as can providing `QQbar` to various constructors. You might find it convenient to recycle our Python function that builds a Householder matrix from a Householder vector, or to create a Householder matrix from part of a column. The Sage matrix method `.column()` could be useful, and a Python slice like `v[2:4]` could also be handy. Note that this exercise does not suggest building a totally general function to create a QR decomposition of any old matrix; just find the correct three Householder matrices and do the right things with them. Extra credit: build a rank 3 matrix with `M = random_matrix(QQ, 4, algorithm="echelonizable", rank=3, upper_bound=9)` and do it again to see what happens with a singular matrix.
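For orientation, here is the shape of the procedure in floating point: a hedged plain-Python sketch on a hypothetical \(3\times 3\) matrix (the exercise itself wants the exact `QQbar` version in Sage). Each step embeds a Householder reflection in the identity so that it zeros out the current column below the diagonal.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def householder_step(R, k):
    """Reflection (embedded in an n x n identity) that zeros R[k+1:, k]."""
    n = len(R)
    x = [R[i][k] for i in range(k, n)]
    # sign choice avoids cancellation when forming the Householder vector
    alpha = -math.copysign(math.sqrt(sum(t * t for t in x)), x[0] or 1.0)
    v = x[:]
    v[0] -= alpha
    vv = sum(t * t for t in v)
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    if vv > 0:
        for i in range(len(v)):
            for j in range(len(v)):
                H[k + i][k + j] -= 2.0 * v[i] * v[j] / vv
    return H

# Hypothetical 3x3 example
A = [[2.0, -1.0, 0.0], [1.0, 3.0, 1.0], [2.0, 0.0, 4.0]]
Q = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
R = [row[:] for row in A]
for k in range(2):                 # n - 1 reflections for an n x n matrix
    H = householder_step(R, k)
    R = matmul(H, R)
    Q = matmul(Q, H)               # each H is symmetric and its own inverse
ok = all(abs(matmul(Q, R)[i][j] - A[i][j]) < 1e-9 for i in range(3) for j in range(3))
print(ok)  # True
```

For the Sage version, the structure is identical, but every square root must be taken inside `QQbar` so the factors stay exact.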

###### Exercise 24. (March 18).

Consider the matrix

```
A = matrix(QQ, [
    [-1, -1,  1, -2, -2, -6,  7, -2,  6, -4],
    [-1,  0,  0, -1,  0, -1,  4, -1,  1,  1],
    [ 0,  2, -1, -1, -1, -6,  8,  0, -1,  2],
    [ 1,  1, -1,  1,  0,  0, -2,  1, -2,  0],
    [ 0,  0,  1,  0,  2,  5, -2,  1, -5,  6],
    [ 0,  1, -1,  0, -1, -4,  3,  0,  1, -1],
    [ 0,  1, -1, -1, -2, -7,  7, -1,  3, -3],
    [-1, -2,  2,  1,  2,  7, -8,  1, -1,  3]
])
```

The following parts will have you construct the SVD of \(A\) with fairly basic Sage commands.

###### (a)

Compute the eigenvalues of \(\adjoint{A}A\) and \(A\adjoint{A}\text{,}\) and verify that they are equal. (Use Sage's `.eigenvalues()` method, not a more tedious procedure.) Note that Sage reports the eigenvalues in the “wrong” order. Save the list in a variable, and then use the Python `.reverse()` method to reverse the list in-place. Compute a separate list of the singular values.

###### (b)

Use Sage's `.eigenmatrix_right()` method to get the eigenvectors of \(\adjoint{A}A\text{.}\) The eigenvectors for the nonzero eigenvalues should form an orthogonal set; make them an orthonormal set. Collect the eigenvectors for the zero eigenvalue, and convert them to an orthonormal set as well; Sage's `.QR()` method might be a quick way to do this. Form the matrix \(V\) whose columns are these eigenvectors, and check that \(V\) is unitary. (Note that you will likely need to reverse the lists of eigenvectors at some point to match the order of the singular values.)

###### (c)

Create the \(r\) vectors \(\vect{y}_i\) from the eigenvectors in the previous part. They should form an orthonormal set. Find an orthonormal set of eigenvectors for the zero eigenvalue of \(A\adjoint{A}\text{.}\) Package these eigenvectors into the matrix \(U\text{,}\) and verify that it is unitary. (Note that you will likely need to reverse the lists of eigenvectors at some point to match the order of the singular values.)

###### (d)

Construct the \(S\) matrix with the singular values from before and verify that \(A=US\adjoint{V}\text{,}\) and/or “diagonalize” \(A\) using \(U\) and \(V\) appropriately and verify that the result has the singular values on the diagonal.

###### (e)

Use the singular values, and the columns of \(V\) and \(U\text{,}\) to build the matrix \(A\) with the rank one decomposition.
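The rank-one decomposition itself can be illustrated independently of this particular \(A\text{.}\) A plain-Python sketch with a hand-built (hypothetical) SVD assembled from rotation matrices, verifying that \(A=\sum_{i}\sigma_i\vect{u}_i\adjoint{\vect{v}_i}\) (everything here is real, so each adjoint is just a transpose):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rotation(t):
    """2x2 rotation matrix: orthogonal, so usable as U or V."""
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

# Hand-built SVD (hypothetical): orthogonal U, V and singular values 3, 1
U, V = rotation(0.3), rotation(1.1)
S = [[3.0, 0.0], [0.0, 1.0]]
A = matmul(matmul(U, S), [list(row) for row in zip(*V)])   # A = U S V^T

# Rank-one decomposition: A = sum_i sigma_i * u_i * v_i^T
sigma = [3.0, 1.0]
B = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    u = [U[r][i] for r in range(2)]    # i-th column of U
    v = [V[r][i] for r in range(2)]    # i-th column of V
    for r in range(2):
        for c in range(2):
            B[r][c] += sigma[i] * u[r] * v[c]
print(all(abs(A[r][c] - B[r][c]) < 1e-12 for r in range(2) for c in range(2)))  # True
```

For the exercise, the same outer-product sum over the columns of your \(U\) and \(V\) should rebuild the \(8\times 10\) matrix \(A\text{.}\)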

###### Exercise 25. (April 8).

Suppose that \(A\) is a nonsingular matrix. What can you say about a solution, \(\hat{\vect{x}}\text{,}\) to the normal equations? Comment on this situation.

###### Exercise 26. (April 9).

It is thought that tooth decay in children is caused by sugar in candy and sugar in carbonated drinks (“soda”). Six ten-year-old children, and their parents, were surveyed to determine the weekly amounts of candy and soda consumed (in ounces) and the number of fillings due to cavities from tooth decay.

| Candy | Soda | Fillings |
|------:|-----:|---------:|
|     0 |    0 |        1 |
|     5 |   12 |        2 |
|    24 |   48 |        3 |
|    20 |   60 |        3 |
|    30 |  128 |        5 |
|     0 |  150 |        4 |
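Setting up the system from the table, as a hedged sketch in plain Python: the model with an intercept term is one reasonable choice, not necessarily the intended one. It builds the design matrix and both sides of the normal equations \(\adjoint{X}X\hat{\vect{\beta}}=\adjoint{X}\vect{y}\) without solving them.

```python
from fractions import Fraction

# Data from the table: (candy, soda, fillings), weekly ounces
data = [(0, 0, 1), (5, 12, 2), (24, 48, 3), (20, 60, 3), (30, 128, 5), (0, 150, 4)]

# One reasonable (assumed) model: fillings ~ b0 + b1*candy + b2*soda
X = [[Fraction(1), Fraction(c), Fraction(s)] for c, s, _ in data]
y = [Fraction(f) for _, _, f in data]

# Normal equations: (X^T X) beta-hat = X^T y
XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(3)]
       for r in range(3)]
Xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]

print(XtX[0][0], Xty[0])  # 6 observations; 18 total fillings
```

Solving this \(3\times 3\) system (by hand or in Sage) produces the parameter estimates the part asks for.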

###### (a)

Form a linear model whose parameters may be estimated via the normal equations. What are the resulting estimates?

###### (b)

Compute the residual vector and the coefficient of determination (“R squared”).

###### (c)

A five-year-old child who did not participate in the study consumes 40 ounces of candy and 30 ounces of soda per week. Predict how many fillings the child will have when they are ten years old.

###### (d)

Suppose parents think an 8 ounce candy bar and a 16 ounce soda are “equivalent” treats for a young child. Which is more detrimental to the child's teeth?