Section 11.2 Outer products

If a and b are column vectors with \(n\) entries, then a'*b in Matlab is their scalar product (also called the inner product or the dot product):

\begin{equation*} \begin{pmatrix}a_1 \amp a_2 \amp \cdots \amp a_n \end{pmatrix} \begin{pmatrix}b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1b_1 + a_2b_2 + \dots +a_n b_n \end{equation*}

where multiplication is carried out by the rules of matrix products: 1×n matrix times n×1 matrix gives a 1×1 matrix, a single number. The rules of matrix multiplication also allow us to compute a*b':

\begin{equation*} \begin{pmatrix}a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} \begin{pmatrix}b_1 \amp b_2 \amp \cdots \amp b_n \end{pmatrix} = \begin{pmatrix} a_1b_1 \amp a_1b_2 \amp \dots \amp a_1 b_n \\ a_2b_1 \amp a_2b_2 \amp \dots \amp a_2 b_n \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ a_nb_1 \amp a_nb_2 \amp \dots \amp a_n b_n \\ \end{pmatrix} \end{equation*}

This matrix is the outer product of vectors \(\mathbf a\) and \(\mathbf b\text{.}\) Since all of its rows are proportional to one another, its rank is at most 1. The rank is 0 if one of the two vectors is the zero vector; otherwise it is equal to 1.
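For instance, with small concrete vectors (the specific entries here are chosen only for illustration), the row structure is easy to see:

```matlab
a = [1; 2; 3];
b = [4; 5; 6];
M = a*b'      % each row of M is a multiple of b'
rank(M)       % returns 1
```

Every row of M is b' scaled by the corresponding entry of a, which is why the rank cannot exceed 1.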

Let \(M\) be the outer product of two random vectors with 5 entries, generated with rand. Display the rank and determinant of \(M\text{.}\)

Solution
a = rand(5, 1);
b = rand(5, 1);
M = a*b';
disp(rank(M))
disp(det(M))

The result should be: rank is 1, and the determinant is some extremely small (but nonzero) number, for example 3.2391e-69. Mathematically this is impossible: a 5×5 matrix of rank less than 5 must have determinant equal to 0. But the reality of computer arithmetic is that floating point numbers rarely add up exactly to zero, as noted in Section 6.4. Matlab's rank command takes this into account and reports the rank as 1 when the matrix is “close enough” to actually having rank 1.
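Roughly speaking, Matlab computes the rank from the singular values of the matrix, counting only those above a small tolerance. The sketch below mimics this computation; the tolerance formula matches Matlab's documented default for rank, but treat this as an illustration rather than the exact internal implementation:

```matlab
a = rand(5, 1);
b = rand(5, 1);
M = a*b';
s = svd(M);                        % singular values of M, largest first
tol = max(size(M)) * eps(max(s));  % default tolerance used by rank
sum(s > tol)                       % counts singular values above the tolerance
```

Only one singular value of this M is above the tolerance; the rest are rounding-level artifacts, so the count is 1.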

In mathematical notation, the outer product would be written as \(\mathbf a\mathbf b^T\text{,}\) with T indicating the transposed vector (column turned into row). Since \(\mathbf a\mathbf b^T\) is a matrix, we can apply it to some vector \(\mathbf c\text{.}\) The associative property of matrix multiplication shows that

\begin{equation} (\mathbf a\mathbf b^T)\mathbf c = \mathbf a (\mathbf b^T \mathbf c) = \mathbf a (\mathbf b\cdot \mathbf c) \label{eq-outer-product}\tag{11.2.1} \end{equation}

that is, we get the vector \(\mathbf a\) multiplied by the dot product \(\mathbf b\cdot \mathbf c\text{.}\) Formula (11.2.1) gives us an easy way to find a matrix \(M\) which satisfies the equation \(M\mathbf c = \mathbf d\) for given vectors \(\mathbf c, \mathbf d\text{.}\) Namely, we can let

\begin{equation} M = \frac{1}{\mathbf b^T \mathbf c} \mathbf d \mathbf b^T \label{eq-construct-M}\tag{11.2.2} \end{equation}

which is an outer product of two vectors with a scalar factor in front. The associative property shows that

\begin{equation*} M\mathbf c = \frac{1}{\mathbf b^T \mathbf c} \mathbf d (\mathbf b^T \mathbf c) = \mathbf d \end{equation*}

Note that the choice of \(\mathbf b\) is up to us: any vector will work as \(\mathbf b\) as long as \(\mathbf b^T \mathbf c \ne 0\text{.}\)
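A quick check of formula (11.2.2) in Matlab, with example vectors chosen here for illustration; taking \(\mathbf b = \mathbf c\) guarantees \(\mathbf b^T \mathbf c \ne 0\) whenever \(\mathbf c\) is nonzero:

```matlab
c = [1; 2; 3];
d = [4; 5; 6];
b = c;                    % any b with b'*c ~= 0 works; b = c is a safe choice
M = (1/(b'*c)) * d * b';  % formula (11.2.2)
M*c                       % recovers d, up to rounding
```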

Once more, to make this point clear: one cannot “solve” \(M\mathbf c = \mathbf d\) for \(M\) by “dividing” vector \(\mathbf d\) by \(\mathbf c\text{.}\) But the process outlined above produces some matrix \(M\) that satisfies this equation.