Proofs involving the Moore–Penrose inverse

In linear algebra, the Moore–Penrose inverse is a matrix that satisfies some but not necessarily all of the properties of an inverse matrix. This article collects a variety of proofs involving the Moore–Penrose inverse.

Definition

Let $A$ be an m-by-n matrix over a field $\mathbb{K}$, and let $A^+$ be an n-by-m matrix over $\mathbb{K}$, where $\mathbb{K}$ is either $\mathbb{R}$, the real numbers, or $\mathbb{C}$, the complex numbers. The following four criteria are called the Moore–Penrose conditions:

  1. $A A^+ A = A$,
  2. $A^+ A A^+ = A^+$,
  3. $(A A^+)^* = A A^+$,
  4. $(A^+ A)^* = A^+ A$.

Given a matrix $A$, there exists a unique matrix $A^+$ that satisfies all four of the Moore–Penrose conditions, which is called the Moore–Penrose inverse of $A$.[1][2][3][4] Notice that $A$ is also the Moore–Penrose inverse of $A^+$. That is, $(A^+)^+ = A$.
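
The four conditions are easy to check numerically. The following is a minimal illustrative sketch, not part of the article's proofs; it assumes NumPy, whose `np.linalg.pinv` computes the Moore–Penrose inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

A_pinv = np.linalg.pinv(A)  # Moore-Penrose inverse via the SVD

# Check the four Moore-Penrose conditions.
assert np.allclose(A @ A_pinv @ A, A)                   # 1. A A+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)         # 2. A+ A A+ = A+
assert np.allclose((A @ A_pinv).conj().T, A @ A_pinv)   # 3. (A A+)* = A A+
assert np.allclose((A_pinv @ A).conj().T, A_pinv @ A)   # 4. (A+ A)* = A+ A

# A is in turn the Moore-Penrose inverse of A+: (A+)+ = A.
assert np.allclose(np.linalg.pinv(A_pinv), A)
```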

Useful lemmas

These results are used in the proofs below. In the following lemmas, $A$ is a matrix with complex elements and n columns, and $B$ is a matrix with complex elements and n rows.

Lemma 1: A*A = 0 ⇒ A = 0

The assumption says that all elements of $A^*A$ are zero. Therefore,
$$0 = \operatorname{Tr}(A^*A) = \sum_{j=1}^{n} (A^*A)_{jj} = \sum_{j=1}^{n} \sum_{i=1}^{m} \overline{A_{ij}} A_{ij} = \sum_{i,j} \left| A_{ij} \right|^2 .$$
Therefore, all $A_{ij}$ equal 0, i.e., $A = 0$.

Lemma 2: A*AB = 0 ⇒ AB = 0

The assumption says that $A^*AB = 0$, so
$$(AB)^*(AB) = B^*A^*AB = 0,$$
and Lemma 1 gives $AB = 0$.

Lemma 3: ABB* = 0 ⇒ AB = 0

This is proved in a manner similar to the argument of Lemma 2 (or by simply taking the Hermitian conjugate).

Existence and uniqueness

Proof of uniqueness

Let $A$ be a matrix over $\mathbb{R}$ or $\mathbb{C}$. Suppose that $B$ and $C$ are Moore–Penrose inverses of $A$. Observe then that
$$AB = (AB)^* = B^*A^* = B^*(ACA)^* = B^*A^*C^*A^* = (AB)^*(AC)^* = ABAC = AC .$$
Analogously we conclude that $BA = CA$. The proof is completed by observing that then
$$B = BAB = BAC = CAC = C .$$

Proof of existence

The proof proceeds in stages.

1-by-1 matrices

For any $x \in \mathbb{K}$, we define:
$$x^+ := \begin{cases} x^{-1}, & \text{if } x \neq 0 \\ 0, & \text{if } x = 0 . \end{cases}$$
It is easy to see that $x^+$ is a pseudoinverse of $x$ (interpreted as a 1-by-1 matrix).

Square diagonal matrices

Let $D$ be an n-by-n matrix over $\mathbb{K}$ with zeros off the diagonal. We define $D^+$ as the n-by-n matrix over $\mathbb{K}$ with $(D^+)_{ij} := (D_{ij})^+$, with $x^+$ as defined above. We write simply $D^+_{ii}$ for $(D^+)_{ii} = (D_{ii})^+$.

Notice that $D^+$ is also a matrix with zeros off the diagonal.

We now show that $D^+$ is a pseudoinverse of $D$. Since $D$ and $D^+$ are diagonal, every product below is diagonal and can be checked entrywise:

  1. $(DD^+D)_{ii} = D_{ii} D^+_{ii} D_{ii} = D_{ii}$, so $DD^+D = D$.
  2. $(D^+DD^+)_{ii} = D^+_{ii} D_{ii} D^+_{ii} = D^+_{ii}$, so $D^+DD^+ = D^+$.
  3. $(DD^+)_{ii} = D_{ii} D^+_{ii}$ equals 1 if $D_{ii} \neq 0$ and 0 otherwise, so $DD^+$ is a real diagonal matrix and hence $(DD^+)^* = DD^+$.
  4. The same argument shows $(D^+D)^* = D^+D$.
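
The diagonal construction is easy to mirror numerically. A minimal NumPy sketch (illustrative only; the sample entries are arbitrary):

```python
import numpy as np

# Invert each nonzero diagonal entry; map zero entries to zero.
d = np.array([2.0, 0.0, -0.5, 0.0])
d_plus = np.divide(1.0, d, out=np.zeros_like(d), where=(d != 0))

D = np.diag(d)
D_plus = np.diag(d_plus)

# D+ agrees with the Moore-Penrose inverse computed by NumPy.
assert np.allclose(D_plus, np.linalg.pinv(D))
```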

General non-square diagonal matrices

Let $D$ be an m-by-n matrix over $\mathbb{K}$ with zeros off the main diagonal, where m and n are unequal. That is, $D_{ii} = d_i$ for some $d_i \in \mathbb{K}$ when $i \leq \min(m, n)$, and $D_{ij} = 0$ otherwise.

Consider the case where $n > m$. Then we can rewrite $D$ by stacking, $D = \begin{bmatrix} D_0 & 0 \end{bmatrix}$, where $D_0$ is a square diagonal m-by-m matrix, and $0$ is the m-by-(n − m) zero matrix. We define $D^+ := \begin{bmatrix} D_0^+ \\ 0 \end{bmatrix}$ as an n-by-m matrix over $\mathbb{K}$, with $D_0^+$ the pseudoinverse of $D_0$ defined above, and $0$ the (n − m)-by-m zero matrix. We now show that $D^+$ is a pseudoinverse of $D$:

  1. By multiplication of block matrices, $DD^+ = D_0 D_0^+$, so by property 1 for square diagonal matrices proven in the previous section, $DD^+D = D_0 D_0^+ \begin{bmatrix} D_0 & 0 \end{bmatrix} = \begin{bmatrix} D_0 D_0^+ D_0 & 0 \end{bmatrix} = \begin{bmatrix} D_0 & 0 \end{bmatrix} = D$.
  2. Similarly, $D^+D = \begin{bmatrix} D_0^+ D_0 & 0 \\ 0 & 0 \end{bmatrix}$, so $D^+DD^+ = \begin{bmatrix} D_0^+ D_0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} D_0^+ \\ 0 \end{bmatrix} = \begin{bmatrix} D_0^+ D_0 D_0^+ \\ 0 \end{bmatrix} = \begin{bmatrix} D_0^+ \\ 0 \end{bmatrix} = D^+$.
  3. By 1 and property 3 for square diagonal matrices, $(DD^+)^* = (D_0 D_0^+)^* = D_0 D_0^+ = DD^+$.
  4. By 2 and property 4 for square diagonal matrices, $(D^+D)^* = \begin{bmatrix} (D_0^+ D_0)^* & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} D_0^+ D_0 & 0 \\ 0 & 0 \end{bmatrix} = D^+D$.

Existence for $D$ such that $m > n$ follows by swapping the roles of $D$ and $D^*$ in the case $n > m$ and using the fact that $(D^*)^+ = (D^+)^*$.

Arbitrary matrices

The singular value decomposition theorem states that there exists a factorization of the form
$$A = U \Sigma V^*$$
where:

$U$ is an m-by-m unitary matrix over $\mathbb{K}$,
$\Sigma$ is an m-by-n matrix over $\mathbb{K}$ with nonnegative real numbers on the diagonal and zeros off the diagonal,
$V$ is an n-by-n unitary matrix over $\mathbb{K}$.[5]

Define $A^+$ as $V \Sigma^+ U^*$.

We now show that $A^+$ is a pseudoinverse of $A$, using properties 1–4 of $\Sigma^+$ proven above for diagonal matrices:

  1. $AA^+A = U \Sigma V^* V \Sigma^+ U^* U \Sigma V^* = U \Sigma \Sigma^+ \Sigma V^* = U \Sigma V^* = A$.
  2. $A^+AA^+ = V \Sigma^+ U^* U \Sigma V^* V \Sigma^+ U^* = V \Sigma^+ \Sigma \Sigma^+ U^* = V \Sigma^+ U^* = A^+$.
  3. $(AA^+)^* = (U \Sigma \Sigma^+ U^*)^* = U (\Sigma \Sigma^+)^* U^* = U \Sigma \Sigma^+ U^* = AA^+$.
  4. $(A^+A)^* = (V \Sigma^+ \Sigma V^*)^* = V (\Sigma^+ \Sigma)^* V^* = V \Sigma^+ \Sigma V^* = A^+A$.
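
The SVD-based construction can be checked numerically. A sketch assuming NumPy's `np.linalg.svd` and `np.linalg.pinv` (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))

# Full SVD: A = U @ Sigma @ Vh with U (4x4) and Vh (6x6) unitary;
# np.linalg.svd returns only the singular values s.
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Sigma+ is the n-by-m diagonal matrix holding the reciprocals of the
# nonzero singular values (zeros stay zero).
Sigma_plus = np.zeros((6, 4))
Sigma_plus[:s.size, :s.size] = np.diag(
    np.divide(1.0, s, out=np.zeros_like(s), where=(s > 1e-12))
)

A_plus = Vh.conj().T @ Sigma_plus @ U.conj().T

# Agrees with NumPy's built-in pseudoinverse.
assert np.allclose(A_plus, np.linalg.pinv(A))
```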

Basic properties

$(A^*)^+ = A^{+*}$: the proof works by showing that $A^{+*}$ satisfies the four criteria for the pseudoinverse of $A^*$. Since this amounts to just substitution, it is not shown here.

The proof of this relation is given as Exercise 1.18c in Ben-Israel & Greville (2003).[6]

Identities

A+ = A+ A+* A*

$A^+ = A^+AA^+$ and $(AA^+)^* = AA^+$ imply that $A^+ = A^+(AA^+)^* = A^+A^{+*}A^*$.

A+ = A* A+* A+

$A^+ = A^+AA^+$ and $(A^+A)^* = A^+A$ imply that $A^+ = (A^+A)^*A^+ = A^*A^{+*}A^+$.

A = A+* A* A

$A = AA^+A$ and $(AA^+)^* = AA^+$ imply that $A = (AA^+)^*A = A^{+*}A^*A$.

A = A A* A+*

$A = AA^+A$ and $(A^+A)^* = A^+A$ imply that $A = A(A^+A)^* = AA^*A^{+*}$.

A* = A* A A+

This is the conjugate transpose of $A = A^{+*}A^*A$ above.

A* = A+ A A*

This is the conjugate transpose of $A = AA^*A^{+*}$ above.

Reduction to the Hermitian case

The results of this section show that the computation of the pseudoinverse is reducible to its construction in the Hermitian case. It suffices to show that the putative constructions satisfy the defining criteria.

A+ = A* (A A*)+

This relation is given as exercise 18(d) in Ben-Israel & Greville (2003),[6] for the reader to prove, "for every matrix A". Write $B = A^*(AA^*)^+$. Observe that
$$AA^* = AA^*(AA^*)^+AA^* ,$$
so $\left( AA^*(AA^*)^+ - I \right) A A^* = 0$, and Lemma 3 gives $\left( AA^*(AA^*)^+ - I \right) A = 0$, i.e., $ABA = AA^*(AA^*)^+A = A$.

Similarly, $(AA^*)^+AA^*(AA^*)^+ = (AA^*)^+$ implies that $BAB = A^*(AA^*)^+AA^*(AA^*)^+ = A^*(AA^*)^+$, i.e., $BAB = B$.

Additionally, $AB = AA^*(AA^*)^+$, so $(AB)^* = AB$ by the third Moore–Penrose condition for $(AA^*)^+$.

Finally, $\left( (AA^*)^+ \right)^* = (AA^*)^+$ (since $AA^*$ is Hermitian, its pseudoinverse is Hermitian, by uniqueness) implies that $(BA)^* = \left( A^*(AA^*)^+A \right)^* = A^*(AA^*)^+A = BA$.

Therefore, $B = A^+$.

A+ = (A* A)+A*

This is proved in an analogous manner to the case above, using Lemma 2 instead of Lemma 3.

Products

For the first three proofs, we consider products $C = AB$.

A has orthonormal columns

If $A$ has orthonormal columns, i.e., $A^*A = I$, then $A^+ = A^*$. Write $D = B^+A^+ = B^+A^*$. We show that $D$ satisfies the Moore–Penrose criteria for $C = AB$:

  1. $CDC = AB\,B^+A^*\,AB = ABB^+B = AB = C$.
  2. $DCD = B^+A^*\,AB\,B^+A^* = B^+BB^+A^* = B^+A^* = D$.
  3. $(CD)^* = (ABB^+A^*)^* = A(BB^+)^*A^* = ABB^+A^* = CD$.
  4. $(DC)^* = (B^+A^*AB)^* = (B^+B)^* = B^+B = B^+A^*AB = DC$.

Therefore, $D = C^+$, i.e., $(AB)^+ = B^+A^+$.

B has orthonormal rows

If $B$ has orthonormal rows, i.e., $BB^* = I$, then $B^+ = B^*$. Write $D = B^+A^+ = B^*A^+$. We show that $D$ satisfies the Moore–Penrose criteria for $C = AB$:

  1. $CDC = AB\,B^*A^+\,AB = AA^+AB = AB = C$.
  2. $DCD = B^*A^+\,AB\,B^*A^+ = B^*A^+AA^+ = B^*A^+ = D$.
  3. $(CD)^* = (ABB^*A^+)^* = (AA^+)^* = AA^+ = ABB^*A^+ = CD$.
  4. $(DC)^* = (B^*A^+AB)^* = B^*(A^+A)^*B = B^*A^+AB = DC$.

Therefore, $D = C^+$, i.e., $(AB)^+ = B^+A^+$.

A has full column rank and B has full row rank

Since $A$ has full column rank, $A^*A$ is invertible, so $A^+ = (A^*A)^{-1}A^*$. Similarly, since $B$ has full row rank, $BB^*$ is invertible, so $B^+ = B^*(BB^*)^{-1}$.

Write $D = B^+A^+ = B^*(BB^*)^{-1}(A^*A)^{-1}A^*$ (using reduction to the Hermitian case). We show that $D$ satisfies the Moore–Penrose criteria for $C = AB$:

  1. $CDC = AB\,B^*(BB^*)^{-1}(A^*A)^{-1}A^*\,AB = AB = C$.
  2. $DCD = B^*(BB^*)^{-1}(A^*A)^{-1}A^*\,AB\,B^*(BB^*)^{-1}(A^*A)^{-1}A^* = B^*(BB^*)^{-1}(A^*A)^{-1}A^* = D$.
  3. $CD = AB\,B^*(BB^*)^{-1}(A^*A)^{-1}A^* = A(A^*A)^{-1}A^*$, which is Hermitian, so $(CD)^* = CD$.
  4. $DC = B^*(BB^*)^{-1}(A^*A)^{-1}A^*\,AB = B^*(BB^*)^{-1}B$, which is Hermitian, so $(DC)^* = DC$.

Therefore, $D = C^+$, i.e., $(AB)^+ = B^+A^+$.
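
A numerical sanity check of this product rule (NumPy sketch, illustrative only; a random Gaussian matrix has full rank almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # full column rank (almost surely)
B = rng.standard_normal((3, 5))   # full row rank (almost surely)

C = A @ B
# (AB)+ = B+ A+ holds under these rank assumptions;
# it fails for general products.
assert np.allclose(np.linalg.pinv(C), np.linalg.pinv(B) @ np.linalg.pinv(A))
```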

Conjugate transpose

Here, $B = A^*$, and thus $C = AA^*$ and $D = A^{+*}A^+$. We show that $D$ indeed satisfies the four Moore–Penrose criteria, using the identities $A = AA^*A^{+*}$ and $A^* = A^+AA^*$ proven above:

  1. $CD = AA^*A^{+*}\,A^+ = AA^+$ and $DC = A^{+*}\,A^+AA^* = A^{+*}A^* = (AA^+)^* = AA^+$, so both $CD$ and $DC$ are Hermitian (criteria 3 and 4).
  2. $CDC = AA^+\,AA^* = (AA^+A)A^* = AA^* = C$ (criterion 1).
  3. $DCD = A^{+*}A^+\,AA^+ = A^{+*}(A^+AA^+) = A^{+*}A^+ = D$ (criterion 2).

Therefore, $D = C^+$. In other words:
$$(AA^*)^+ = A^{+*}A^+$$
and, since $(A^*)^+ = A^{+*}$, applying this result to $A^*$ gives
$$(A^*A)^+ = A^+A^{+*} .$$

Projectors and subspaces

Define $P = AA^+$ and $Q = A^+A$. Observe that $P^2 = AA^+AA^+ = AA^+ = P$. Similarly $Q^2 = Q$, and finally, $P^* = P$ and $Q^* = Q$. Thus $P$ and $Q$ are orthogonal projection operators. Orthogonality follows from the relations $P^* = P$ and $Q^* = Q$. Indeed, consider the operator $P$: any vector decomposes as
$$x = Px + (I - P)x$$
and for all vectors $x$ and $y$ satisfying $Px = x$ and $(I - P)y = y$, we have
$$\langle x, y \rangle = \langle Px, (I - P)y \rangle = \langle x, P^*(I - P)y \rangle = \langle x, P(I - P)y \rangle = 0 .$$

It follows that $PA = AA^+A = A$ and $A^+P = A^+AA^+ = A^+$. Similarly, $QA^+ = A^+$ and $AQ = A$. The orthogonal components are now readily identified.

If $x$ belongs to the range of $A$, then $x = Ay$ for some $y$, and $Px = PAy = Ay = x$. Conversely, if $Px = x$, then $x = AA^+x$, so that $x$ belongs to the range of $A$. It follows that $P$ is the orthogonal projector onto the range of $A$. $I - P$ is then the orthogonal projector onto the orthogonal complement of the range of $A$, which equals the kernel of $A^*$.

A similar argument using the relation $QA^* = A^+AA^* = A^*$ establishes that $Q$ is the orthogonal projector onto the range of $A^*$ and $I - Q$ is the orthogonal projector onto the kernel of $A$.

Using the relations $A^+ = A^+P$ and $P = AA^+$, it follows that the kernel of $A^+$ equals the kernel of $P$, which in turn equals the kernel of $A^*$. Similarly, $A^+ = QA^+$ together with $Q = A^+A$ implies that the range of $A^+$ equals the range of $Q$, which equals the range of $A^*$. Therefore, we find,
$$\operatorname{range}(A^+) = \operatorname{range}(A^*), \qquad \ker(A^+) = \ker(A^*) .$$
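
The projector properties can be illustrated numerically. A minimal NumPy sketch (illustrative only; real matrices, so Hermitian means symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A_plus = np.linalg.pinv(A)

P = A @ A_plus   # orthogonal projector onto range(A)
Q = A_plus @ A   # orthogonal projector onto range(A*)

# Both are idempotent and Hermitian.
assert np.allclose(P @ P, P) and np.allclose(P.T, P)
assert np.allclose(Q @ Q, Q) and np.allclose(Q.T, Q)

# P acts as the identity on range(A) ...
x = A @ rng.standard_normal(3)
assert np.allclose(P @ x, x)

# ... and annihilates its orthogonal complement, ker(A*).
y = rng.standard_normal(5)
y_perp = y - P @ y
assert np.allclose(P @ y_perp, 0)
```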

Additional properties

Least-squares minimization

In the general case, it is shown here for any matrix $A$ that $\|Ax - b\|_2 \geq \|Az - b\|_2$ where $z = A^+b$. This lower bound need not be zero, as the system $Ax = b$ may not have a solution (e.g. when the matrix $A$ does not have full rank or the system is overdetermined).

To prove this, we first note that (stating the complex case), using the fact that $AA^+$ satisfies $AA^+A = A$ and $(AA^+)^* = AA^+$, we have
$$A^*(AA^+b - b) = A^*(AA^+)^*b - A^*b = (AA^+A)^*b - A^*b = 0$$
so that ($\mathrm{c.c.}$ stands for the complex conjugate of the previous term in the following)
$$\begin{aligned}
\|Ax - b\|_2^2 &= \|Ax - AA^+b\|_2^2 + \|AA^+b - b\|_2^2 + (Ax - AA^+b)^*(AA^+b - b) + \mathrm{c.c.} \\
&= \|Ax - AA^+b\|_2^2 + \|AA^+b - b\|_2^2 + (x - A^+b)^*A^*(AA^+b - b) + \mathrm{c.c.} \\
&= \|Ax - AA^+b\|_2^2 + \|AA^+b - b\|_2^2 \\
&\geq \|AA^+b - b\|_2^2 = \|Az - b\|_2^2
\end{aligned}$$
as claimed.

If $A$ is injective, i.e., one-to-one (which implies $m \geq n$), then the bound is attained uniquely at $x = A^+b$.
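
As an illustration (not part of the proof), NumPy's least-squares solver returns the same minimizer as the pseudoinverse. A sketch assuming `np.linalg.lstsq`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # overdetermined: no exact solution in general
b = rng.standard_normal(8)

z = np.linalg.pinv(A) @ b

# z matches the least-squares solution returned by np.linalg.lstsq ...
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(z, x_lstsq)

# ... and no randomly drawn x achieves a smaller residual.
for _ in range(1000):
    x = rng.standard_normal(3)
    assert np.linalg.norm(A @ x - b) >= np.linalg.norm(A @ z - b) - 1e-12
```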

Minimum-norm solution to a linear system

The proof above also shows that if the system $Ax = b$ is satisfiable, i.e., has a solution, then necessarily $z = A^+b$ is a solution (not necessarily unique). We show here that $z$ is the smallest such solution (its Euclidean norm is uniquely minimal).

To see this, note first, with $z = A^+b$, that $Qz = A^+AA^+b = A^+b = z$ and that $Q^* = Q$. Therefore, assuming that $Ax = b$, we have $Qx = A^+Ax = A^+b = z$ and hence
$$\langle z, x - z \rangle = \langle Qz, x - z \rangle = \langle z, Q(x - z) \rangle = \langle z, Qx - Qz \rangle = \langle z, z - z \rangle = 0 .$$
Thus
$$\|x\|_2^2 = \|z\|_2^2 + \|x - z\|_2^2 \geq \|z\|_2^2$$
with equality if and only if $x = z$, as was to be shown.
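
A numerical illustration of the minimum-norm property for an underdetermined system (NumPy sketch, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # underdetermined: many exact solutions
b = rng.standard_normal(3)

z = np.linalg.pinv(A) @ b         # the minimum-norm solution
assert np.allclose(A @ z, b)

# Perturbing z by a kernel direction keeps it a solution but increases the norm.
v = rng.standard_normal(5)
v_ker = v - np.linalg.pinv(A) @ (A @ v)   # (I - Q) v lies in ker(A)
assert np.allclose(A @ (z + v_ker), b)
assert np.linalg.norm(z + v_ker) >= np.linalg.norm(z)
```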

Notes

  1. ^ Ben-Israel & Greville (2003, p. 7)
  2. ^ Campbell & Meyer (1991, p. 10)
  3. ^ Nakamura (1991, p. 42)
  4. ^ Rao & Mitra (1971, pp. 50–51)
  5. ^ Some authors use slightly different dimensions for the factors. The two definitions are equivalent.
  6. ^ a b Adi Ben-Israel; Thomas N.E. Greville (2003). Generalized Inverses. Springer-Verlag. ISBN 978-0-387-00293-4.

References

  • Ben-Israel, Adi; Greville, Thomas N.E. (2003). Generalized inverses: Theory and applications (2nd ed.). New York, NY: Springer. doi:10.1007/b97366. ISBN 978-0-387-00293-4.
  • Campbell, S. L.; Meyer, Jr., C. D. (1991). Generalized Inverses of Linear Transformations. Dover. ISBN 978-0-486-66693-8.
  • Nakamura, Yoshihiko (1991). Advanced Robotics: Redundancy and Optimization. Addison-Wesley. ISBN 978-0201151985.
  • Rao, C. Radhakrishna; Mitra, Sujit Kumar (1971). Generalized Inverse of Matrices and its Applications. New York: John Wiley & Sons. pp. 240. ISBN 978-0-471-70821-6.
