## Matrix Decomposition Methods

### QR Decomposition

QR decomposition, or QR factorization, is the process of decomposing a matrix A as follows: $\left [ \bold{A} \right ] = \left [ \bold{Q} \right ] \cdot \left [ \bold{R} \right ]$

Where:

• Q is an orthogonal matrix
• R is an upper triangular matrix
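
The two properties above are easy to check numerically. A minimal sketch with NumPy's `np.linalg.qr` (the matrix below is an arbitrary example):

```python
import numpy as np

# Arbitrary example matrix
A = np.array([[12.0, -51.0,   4.0],
              [ 6.0, 167.0, -68.0],
              [-4.0,  24.0, -41.0]])

Q, R = np.linalg.qr(A)

print(np.allclose(Q.T @ Q, np.eye(3)))   # Q is orthogonal: Q^T Q = I
print(np.allclose(R, np.triu(R)))        # R is upper triangular
print(np.allclose(Q @ R, A))             # Q R reconstructs A
```

All three checks print `True`.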

#### Applications

QR decomposition is often used to solve the linear least squares problem, and is the basis for a particular eigenvalue algorithm, the QR algorithm.
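
For the least squares use, the key step is that minimizing $\| \bold{A}x - b \|$ reduces to the triangular system $\bold{R}x = \bold{Q}^T b$. A sketch with arbitrary example data, checked against NumPy's own solver:

```python
import numpy as np

# Overdetermined system: fit a line c0 + c1*t to four points (arbitrary data)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

Q, R = np.linalg.qr(A)            # reduced QR: Q is 4x2, R is 2x2
x = np.linalg.solve(R, Q.T @ b)   # solve the triangular system R x = Q^T b

# Agrees with NumPy's least-squares solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))      # True
```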

### Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) is the process of decomposing a matrix A as follows:

$\left [ \bold{A} \right ] = \left [ \bold{U} \right ] \cdot \left [ \bold{\Sigma} \right ] \cdot \left [ \bold{V} \right ]^T$

Where:

• A is an $m \times n$ real or complex matrix
• U is an $m \times m$ unitary matrix (orthogonal in the real case)
• $\bold{\Sigma}$ is an $m \times n$ rectangular diagonal matrix with non-negative real numbers on the diagonal
• V is an $n \times n$ unitary matrix (orthogonal in the real case); for complex A, the conjugate transpose $\bold{V}^*$ replaces $\bold{V}^T$.
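
These shapes can be illustrated with NumPy's `np.linalg.svd` (the 3×2 matrix below is an arbitrary example; `full_matrices=True` returns the full $m \times m$ U and $n \times n$ V described above):

```python
import numpy as np

# Arbitrary 3x2 example matrix
A = np.array([[3.0,  2.0],
              [2.0,  3.0],
              [1.0, -1.0]])
m, n = A.shape

U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)                  # rectangular diagonal matrix

print(np.allclose(U @ Sigma @ Vt, A))       # A = U Σ V^T
print(np.allclose(U.T @ U, np.eye(m)))      # U is orthogonal/unitary
print(np.allclose(Vt @ Vt.T, np.eye(n)))    # V is orthogonal/unitary
```

All three checks print `True`.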

NOTES:

• The diagonal entries $\sigma_i$ of $\bold{\Sigma}$ are known as the singular values of A and can be viewed as the semiaxes of an n-dimensional ellipsoid.
• The columns of U and the columns of V are called the left-singular vectors and right-singular vectors of A, respectively.

For a real matrix A, the singular value decomposition can be computed using the following observations:

• The left-singular vectors of A are a set of orthonormal eigenvectors of $\bold{A}\bold{A}^T$.
• The right-singular vectors of A are a set of orthonormal eigenvectors of $\bold{A}^T\bold{A}$.
• The non-zero singular values of A (found on the diagonal entries of Σ) are the square roots of the non-zero eigenvalues of both $\bold{A}^T\bold{A}$ and $\bold{A}\bold{A}^T$.
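
The third observation can be verified numerically (the 2×3 matrix below is an arbitrary example; $\bold{A}^T\bold{A}$ is 3×3, so it has one extra zero eigenvalue):

```python
import numpy as np

# Arbitrary 2x3 example matrix
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
eigvals = np.linalg.eigvalsh(A.T @ A)    # eigenvalues of A^T A, ascending
eigvals = np.sort(eigvals)[::-1]         # sort descending to match s

# Squared singular values equal the largest eigenvalues of A^T A
print(np.allclose(s**2, eigvals[:len(s)]))   # True
```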

#### Applications

Applications that employ the SVD include computing the pseudoinverse, least squares fitting of data, multivariable control, matrix approximation, and determining the rank, range and null space of a matrix.
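
The pseudoinverse application can be sketched directly from the factorization: $\bold{A}^{+} = \bold{V} \bold{\Sigma}^{+} \bold{U}^T$, where $\bold{\Sigma}^{+}$ inverts the non-zero singular values. A check against NumPy's built-in `np.linalg.pinv` (the matrix is an arbitrary full-column-rank example, so all singular values are non-zero):

```python
import numpy as np

# Arbitrary full-column-rank example matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T             # A+ = V Σ+ U^T

print(np.allclose(A_pinv, np.linalg.pinv(A)))      # True
```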

### LU Decomposition

LU decomposition, or LU factorization, is the process of decomposing a matrix A as follows: $\left [ \bold{A} \right ] = \left [ \bold{L} \right ] \cdot \left [ \bold{U} \right ]$

Where:

• L is a lower triangular matrix
• U is an upper triangular matrix

$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}. $
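
These entries can be computed row and column at a time. A minimal sketch of one common convention (Doolittle's method, which fixes the diagonal of L to 1); it assumes A needs no row interchanges, i.e. all leading principal minors are non-zero, and the example matrix is arbitrary:

```python
import numpy as np

def lu_doolittle(A):
    """LU factorization without pivoting (Doolittle: unit diagonal on L).
    Assumes all leading principal minors of A are non-zero."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):      # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

# Arbitrary example matrix
A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True
```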

#### Applications

Computers usually solve square systems of linear equations using the LU decomposition (in practice with row pivoting, i.e. $\left [ \bold{P} \right ] \cdot \left [ \bold{A} \right ] = \left [ \bold{L} \right ] \cdot \left [ \bold{U} \right ]$, since a plain LU factorization does not exist for every matrix), and it is also a key step when inverting a matrix or computing its determinant.
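
In practice, libraries factor the matrix once and then reuse the factors for each right-hand side. A sketch with SciPy's `lu_factor`/`lu_solve`, which apply partial pivoting internally (the example system is arbitrary):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Arbitrary example system A x = b
A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

lu, piv = lu_factor(A)          # combined L and U factors plus pivot indices
x = lu_solve((lu, piv), b)      # forward then back substitution

print(np.allclose(A @ x, b))    # True
```

Factoring once pays off when the same A must be solved against many different b vectors, since each additional solve is only a pair of triangular substitutions.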