Matrix Algebra for Engineers (part one)

Notes based on Matrix Algebra for Engineers by Jeffrey R. Chasnov.

Inner and outer products

The inner product (or dot product or scalar product) between two vectors is obtained from the matrix product of a row vector times a column vector:

$$u^{T}v = \begin{pmatrix} u_{1} & u_{2} & \cdots & u_{n} \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{n} \end{pmatrix} = u_{1}v_{1} + u_{2}v_{2} + \cdots + u_{n}v_{n}.$$

If the inner product between two vectors is zero, we say that the vectors are orthogonal.

The norm of a vector is defined by

$$\|u\| = \left( u^{T}u \right)^{1/2} = \left( u_{1}^{2} + u_{2}^{2} + \cdots + u_{n}^{2} \right)^{1/2}.$$

If the norm of a vector is equal to one, we say that the vector is normalized. If a set of vectors are mutually orthogonal and normalized, we say that these vectors are orthonormal.

An outer product is also defined, and is used in some applications. The outer product between u and v is given by

$$uv^{T} = \begin{pmatrix} u_{1}v_{1} & u_{1}v_{2} & \cdots & u_{1}v_{n} \\ u_{2}v_{1} & u_{2}v_{2} & \cdots & u_{2}v_{n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{n}v_{1} & u_{n}v_{2} & \cdots & u_{n}v_{n} \end{pmatrix},$$

an n-by-n matrix whose (i, j) element is $u_{i}v_{j}$.
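As a quick illustration, here is a minimal NumPy sketch (the vectors u and v are made-up values) of the inner product, the norm, an orthogonality check, and the outer product:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, -1.0, 0.0])

inner = u @ v                      # u^T v = 1*2 + 2*(-1) + 2*0 = 0
print(inner)                       # 0.0, so u and v are orthogonal

norm_u = np.sqrt(u @ u)            # (u^T u)^(1/2)
print(norm_u, np.linalg.norm(u))   # 3.0 both ways

u_hat = u / norm_u                 # normalized (unit-norm) vector
print(np.linalg.norm(u_hat))       # 1.0

outer = np.outer(u, v)             # n-by-n matrix with (i, j) entry u_i * v_j
print(outer.shape)                 # (3, 3)
```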

Inverse matrix

Square matrices may have inverses. When a matrix A has an inverse, we say it is invertible and denote its inverse by $A^{-1}$. The inverse matrix satisfies

$$AA^{-1} = A^{-1}A = I,$$

where I is the identity matrix.

For a two-by-two matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

the determinant is the product of the diagonal elements minus the product of the off-diagonal elements, $\det A = ad - bc$, and the inverse is

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

Evidently, A is invertible only if $\det A \neq 0$. Notice that the inverse of a two-by-two matrix, in words, is found by switching the diagonal elements of the matrix, negating the off-diagonal elements, and dividing by the determinant.
Later, we will show that an n-by-n matrix is invertible if and only if its determinant is nonzero. This will require a more general definition of determinant.
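As a minimal NumPy check of the two-by-two case (the matrix values are made up), note that the computed inverse shows exactly the swap-and-negate pattern described above:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

print(np.linalg.det(A))      # ad - bc = 2*3 - 1*5 = 1, nonzero, so A is invertible
print(np.linalg.inv(A))      # [[ 3. -1.]
                             #  [-5.  2.]]
print(A @ np.linalg.inv(A))  # identity matrix, up to rounding
```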

Orthogonal matrices

A square matrix Q with real entries that satisfies

$$Q^{-1} = Q^{T}, \quad \text{that is,} \quad Q^{T}Q = QQ^{T} = I,$$

is called an orthogonal matrix.

Since the columns of $Q^{T}$ are just the rows of Q, and $QQ^{T} = I$, the row vectors that form Q must be orthonormal. Similarly, since the rows of $Q^{T}$ are just the columns of Q, and $Q^{T}Q = I$, the column vectors that form Q must also be orthonormal.
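A short NumPy illustration, using a made-up two-by-two orthogonal matrix whose columns are orthonormal:

```python
import numpy as np

Q = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: the columns are orthonormal
print(np.allclose(Q @ Q.T, np.eye(2)))      # True: the rows are orthonormal
print(np.allclose(np.linalg.inv(Q), Q.T))   # True: Q^{-1} = Q^T
```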

Rotation matrices

[Figure: rotating a vector (x, y) in the plane by an angle θ to obtain (x′, y′)]

Rotating the vector (x, y) counterclockwise by an angle θ gives $x' = x\cos\theta - y\sin\theta$ and $y' = x\sin\theta + y\cos\theta$. Writing the equations for $x'$ and $y'$ in matrix form, we have

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.$$

The above two-by-two matrix is a rotation matrix and we will denote it by $R_{θ}$. Observe that the rows and columns of $R_{θ}$ are orthonormal and that the inverse of $R_{θ}$ is just its transpose. The inverse of $R_{θ}$ rotates a vector by −θ.
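A quick NumPy sketch of these properties (the helper function rotation and the test angle are made up for illustration):

```python
import numpy as np

def rotation(theta):
    # two-by-two matrix rotating a plane vector counterclockwise by theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = np.pi / 3
R = rotation(theta)

print(np.allclose(R.T @ R, np.eye(2)))     # True: R is orthogonal
print(np.allclose(R.T, rotation(-theta)))  # True: the inverse (transpose) rotates by -theta

x = np.array([1.0, 0.0])
print(R @ x)                               # [cos 60deg, sin 60deg] = [0.5, 0.866...]
```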

Permutation matrices

Another type of orthogonal matrix is a permutation matrix. An n-by-n permutation matrix, when multiplying on the left, permutes the rows of a matrix, and when multiplying on the right, permutes the columns. Clearly, permuting the rows of a column vector does not change its norm.
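For illustration, a NumPy sketch with a made-up 3-by-3 permutation matrix:

```python
import numpy as np

# permutation matrix moving row 1 to row 2, row 2 to row 3, and row 3 to row 1
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

A = np.arange(1.0, 10.0).reshape(3, 3)    # [[1,2,3],[4,5,6],[7,8,9]]

print(P @ A)                              # rows of A permuted
print(A @ P)                              # columns of A permuted
print(np.allclose(P.T @ P, np.eye(3)))    # True: P is orthogonal
```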

Gaussian elimination

A system of linear equations with coefficient matrix A, vector of unknowns x, and right-hand-side vector b can be written in matrix form as

$$Ax = b.$$

We first form what is called an augmented matrix by combining the matrix A with the column vector b:

$$\left( A \,|\, b \right).$$

Row reduction is then performed on this augmented matrix. Allowed operations are (1) interchange the order of any rows, (2) multiply any row by a nonzero constant, (3) add a multiple of one row to another row. These three operations do not change the solution of the original equations. The goal here is to convert the matrix A into upper-triangular form, and then use this form to quickly solve for the unknowns x.

The resulting upper-triangular equations can be solved by back substitution, starting from the last equation and working backwards: the last equation determines the last unknown, which is then substituted into the equation above it, and so on, until the full solution x has been found.

When performing Gaussian elimination, the diagonal element that is used during the elimination procedure is called the pivot.
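Below is a minimal sketch of the procedure in NumPy (the function name gaussian_solve and the example system are made up; row interchanges are omitted, so it assumes every pivot encountered is nonzero):

```python
import numpy as np

def gaussian_solve(A, b):
    # solve Ax = b by reduction to upper-triangular form plus back substitution
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(n, 1)])  # augmented matrix (A | b)

    # forward elimination: zero out the entries below each pivot
    for j in range(n):
        for i in range(j + 1, n):
            M[i] -= (M[i, j] / M[j, j]) * M[j]

    # back substitution: start from the last equation and work backwards
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

print(gaussian_solve(A, b))    # [ 2.  3. -1.]
print(np.linalg.solve(A, b))   # same result
```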

Reduced row echelon form

If we continue the row elimination procedure so that all the pivots are one, and all the entries above and below the pivots are eliminated, then we say that the resulting matrix is in reduced row echelon form.
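For illustration, SymPy computes the reduced row echelon form directly (the matrix is made up):

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4,  1],
            [3, 6,  0]])

rref_A, pivot_cols = A.rref()  # returns the rref and the pivot column indices
print(rref_A)                  # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_cols)              # (0, 2)
```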

Computing inverses

Because:

\begin{align}
& p_{1}p_{2}p_{3} \cdots p_{n} A \rightarrow I \\
& \Rightarrow p_{1}p_{2}p_{3} \cdots p_{n} = A^{-1} \\
& \Rightarrow p_{1}p_{2}p_{3} \cdots p_{n} I \rightarrow A^{-1}
\end{align}

That is, if a sequence of elementary row operations $p_{1}, p_{2}, \ldots, p_{n}$ reduces A to the identity matrix I, then the product of those operations equals $A^{-1}$, and applying the same sequence to I produces $A^{-1}$. In practice, one row reduces the augmented matrix $\left( A \,|\, I \right)$ until the left block becomes I; the right block is then $A^{-1}$.
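A compact NumPy sketch of this procedure (the helper name invert is made up; pivoting is omitted for clarity, so nonzero pivots are assumed):

```python
import numpy as np

def invert(A):
    # row reduce (A | I) to (I | A^-1)
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix (A | I)
    for j in range(n):
        M[j] /= M[j, j]                  # scale the row so the pivot is 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]   # eliminate above and below the pivot
    return M[:, n:]                      # the right block is now A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(invert(A))                              # [[ 3. -1.], [-5.  2.]]
print(np.allclose(invert(A) @ A, np.eye(2)))  # True
```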

LU decomposition

Row reduction of a matrix A can be written as

$$M_{n} \cdots M_{2}M_{1}A = U,$$

where each $M_{i}$ is an elementary matrix and U is upper triangular. Upon inverting the elementary matrices, we have

$$A = M_{1}^{-1}M_{2}^{-1} \cdots M_{n}^{-1}U.$$

Therefore, defining

$$L = M_{1}^{-1}M_{2}^{-1} \cdots M_{n}^{-1},$$

which is lower triangular when no row interchanges are needed (each $M_{i}$, and hence each $M_{i}^{-1}$, is then lower triangular), our LU decomposition of A is

$$A = LU.$$
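SciPy provides this factorization; a small check with made-up values (note that scipy.linalg.lu also returns a permutation matrix P to account for any row interchanges, so it computes A = PLU):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                    # A = P @ L @ U
print(L)                           # unit lower triangular
print(U)                           # upper triangular
print(np.allclose(P @ L @ U, A))   # True
```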

Solving (LU)x = b

The LU decomposition is useful when one needs to solve Ax = b for many right-hand sides. With the LU decomposition in hand, one writes

$$(LU)x = L(Ux) = b.$$

Letting $y = Ux$, we first solve $Ly = b$ for y by forward substitution, and then solve $Ux = y$ for x by back substitution. Both steps are fast because the systems are triangular.
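With SciPy, the factor-once, solve-many pattern might look like this (the matrix and right-hand sides are made up):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

lu_piv = lu_factor(A)   # factor A once (LU with partial pivoting)

# reuse the same factorization for many right-hand sides
for b in (np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([7.0, 9.0])):
    x = lu_solve(lu_piv, b)          # forward then back substitution
    print(x, np.allclose(A @ x, b))  # True for each b
```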
