Inverse function theorem

In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function. In multivariable calculus, this theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for complex holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.

Statement

For functions of a single variable, the theorem states that if $f$ is a continuously differentiable function with nonzero derivative at the point $a$, then $f$ is invertible in a neighborhood of $a$, the inverse is continuously differentiable, and the derivative of the inverse function at $b = f(a)$ is the reciprocal of the derivative of $f$ at $a$:

$$\bigl(f^{-1}\bigr)'(b) = \frac{1}{f'(a)} = \frac{1}{f'\bigl(f^{-1}(b)\bigr)}.$$

An alternate version, which assumes that $f$ is continuous and injective near $a$, and differentiable at $a$ with a non-zero derivative, will also result in $f$ being invertible near $a$, with an inverse that is similarly continuous and injective, and where the above formula would apply as well.[1]

As a corollary, we see that if $f$ is $k$-times continuously differentiable, with nonzero derivative at the point $a$, then $f$ is invertible in a neighborhood of $a$, and the inverse is also $k$-times continuously differentiable. Here $k$ is a positive integer or $\infty$.
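As a numerical sanity check (a minimal sketch, not part of the theorem; the function $f(x) = x^3 + x$ and the probe point below are illustrative choices), the reciprocal formula can be verified with a root finder and a finite difference:

```python
# Sketch verifying (f^{-1})'(b) = 1/f'(a); the function and probe point
# are illustrative choices, not taken from the article.
from scipy.optimize import brentq

def f(x):
    return x**3 + x          # strictly increasing, hence invertible

def f_prime(x):
    return 3 * x**2 + 1      # continuous and nonzero everywhere

a = 1.0
b = f(a)                     # b = 2.0

def f_inv(y):
    # invert f numerically on a bracket known to contain the preimage
    return brentq(lambda x: f(x) - y, -10.0, 10.0)

h = 1e-6
numerical = (f_inv(b + h) - f_inv(b - h)) / (2 * h)   # central difference
print(numerical)             # ~0.25
print(1.0 / f_prime(a))      # exactly 1/f'(1) = 0.25
```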

For functions of more than one variable, the theorem states that if $F$ is a continuously differentiable function from an open set of $\mathbb{R}^n$ into $\mathbb{R}^n$, and the total derivative is invertible at a point $p$ (i.e., the Jacobian determinant of $F$ at $p$ is non-zero), then $F$ is invertible near $p$: an inverse function to $F$ is defined on some neighborhood of $q = F(p)$. Writing $F = (F_1, \ldots, F_n)$, this means that the system of $n$ equations $y_i = F_i(x_1, \ldots, x_n)$ has a unique solution for $x_1, \ldots, x_n$ in terms of $y_1, \ldots, y_n$, provided that we restrict $x$ and $y$ to small enough neighborhoods of $p$ and $q$, respectively. In the infinite dimensional case, the theorem requires the extra hypothesis that the Fréchet derivative of $F$ at $p$ has a bounded inverse.

Finally, the theorem says that the inverse function $F^{-1}$ is continuously differentiable, and its Jacobian matrix at $q = F(p)$ is the matrix inverse of the Jacobian of $F$ at $p$:

$$J_{F^{-1}}(q) = \bigl[J_F(p)\bigr]^{-1}.$$

The hard part of the theorem is the existence and differentiability of $F^{-1}$. Assuming this, the inverse derivative formula follows from the chain rule applied to $F^{-1} \circ F = \mathrm{id}$:

$$I = J_{\mathrm{id}}(p) = J_{F^{-1} \circ F}(p) = J_{F^{-1}}(q) \, J_F(p).$$

Example

Consider the vector-valued function $F: \mathbb{R}^2 \to \mathbb{R}^2$ defined by:

$$F(x, y) = \begin{bmatrix} e^x \cos y \\ e^x \sin y \end{bmatrix}.$$

The Jacobian matrix is:

$$J_F(x, y) = \begin{bmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{bmatrix}$$

with Jacobian determinant:

$$\det J_F(x, y) = e^{2x} \cos^2 y + e^{2x} \sin^2 y = e^{2x}.$$

The determinant $e^{2x}$ is nonzero everywhere. Thus the theorem guarantees that, for every point $p$ in $\mathbb{R}^2$, there exists a neighborhood about $p$ over which $F$ is invertible. This does not mean $F$ is invertible over its entire domain: in this case $F$ is not even injective since it is periodic: $F(x, y) = F(x, y + 2\pi)$.
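The Jacobian-inverse formula can be checked numerically for this $F$. The following sketch uses an explicit local inverse (log-modulus and argument), an assumption valid only away from the origin and on a suitable branch of the angle; the probe point is an arbitrary choice:

```python
# Sketch: verify J_{F^{-1}}(q) = [J_F(p)]^{-1} for F(x, y) = (e^x cos y, e^x sin y).
import numpy as np

def F(v):
    x, y = v
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def J_F(v):
    x, y = v
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

def F_inv(w):
    # local inverse near q: recover x from the modulus, y from the angle
    u, s = w
    return np.array([0.5 * np.log(u**2 + s**2), np.arctan2(s, u)])

p = np.array([0.3, 0.7])
q = F(p)

# finite-difference Jacobian of the local inverse at q, column by column
h = 1e-6
J_inv_fd = np.column_stack([(F_inv(q + h * e) - F_inv(q - h * e)) / (2 * h)
                            for e in np.eye(2)])

print(np.allclose(J_inv_fd, np.linalg.inv(J_F(p)), atol=1e-6))   # True
```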

Counter-example

Figure: the function $f(x) = x + 2x^2 \sin(\tfrac{1}{x})$ is bounded inside a quadratic envelope near the line $y = x$, so $f'(0) = 1$. Nevertheless, it has local max/min points accumulating at $x = 0$, so it is not one-to-one on any surrounding interval.

If one drops the assumption that the derivative is continuous, the function no longer need be invertible. For example $f(x) = x + 2x^2 \sin(\tfrac{1}{x})$ for $x \neq 0$ and $f(0) = 0$ has discontinuous derivative $f'(x) = 1 - 2\cos(\tfrac{1}{x}) + 4x\sin(\tfrac{1}{x})$ and $f'(0) = 1$, which vanishes arbitrarily close to $x = 0$. These critical points are local max/min points of $f$, so $f$ is not one-to-one (and not invertible) on any interval containing $x = 0$. Intuitively, the slope $f'(0) = 1$ does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation.
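A short sketch makes the oscillation concrete (the sample points are illustrative choices): at $x = \tfrac{1}{2\pi n}$ the derivative equals $-1$ exactly, while at $x = \tfrac{1}{(2n+1)\pi}$ it equals $3$, so $f'$ changes sign on every neighborhood of $0$.

```python
# Sketch: f'(x) = 1 - 2 cos(1/x) + 4x sin(1/x) takes both signs arbitrarily
# close to 0.
import numpy as np

def f_prime(x):
    return 1 - 2 * np.cos(1 / x) + 4 * x * np.sin(1 / x)

for n in [1, 10, 100, 1000]:
    x_neg = 1 / (2 * np.pi * n)          # here cos(1/x) = 1, so f' = -1
    x_pos = 1 / ((2 * n + 1) * np.pi)    # here cos(1/x) = -1, so f' = 3
    print(f"f'({x_neg:.1e}) = {f_prime(x_neg):+.6f}, "
          f"f'({x_pos:.1e}) = {f_prime(x_pos):+.6f}")
# Between each such pair f' crosses 0, giving critical points of f that
# accumulate at 0; hence f is not injective on any interval around 0.
```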

Methods of proof

As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations).[2][3]

Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem[4] (see Generalizations below).

An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set.[5]

Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible.[6]
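As a sketch of this Newton flavor of the argument (the map is the example from the section above; the starting point, tolerance, and iteration cap are illustrative choices, not the effective bounds of the cited proof):

```python
# Sketch: compute the local inverse of F(x, y) = (e^x cos y, e^x sin y) by
# Newton iteration on F(x) - target = 0.
import numpy as np

def F(v):
    x, y = v
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def J_F(v):
    x, y = v
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

def local_inverse(target, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J_F(x), F(x) - target)   # Newton step
        x = x - step
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError("did not converge; start closer to the preimage")

p = np.array([0.2, -0.5])
print(local_inverse(F(p), x0=[0.0, 0.0]))   # recovers p up to ~1e-12
```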

A proof of the inverse function theorem

The inverse function theorem states that if $f$ is a $C^1$ vector-valued function on an open set $U$, then $\det f'(a) \neq 0$ if and only if there is a $C^1$ vector-valued function $g$ defined near $b = f(a)$ with $f(g(y)) = y$ near $b$ and $g(f(x)) = x$ near $a$. This was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem. Taking derivatives, it follows that $g'(b) = f'(a)^{-1}$.

The chain rule implies that the matrices $f'(a)$ and $g'(b)$ are inverses of each other. Continuity of $f$ and $g$ means that they are homeomorphisms that are local inverses of each other. To prove existence, it can be assumed after an affine transformation that $f(0) = 0$ and $f'(0) = I$, so that $a = b = 0$.

By the fundamental theorem of calculus, if $u(t)$ is a $C^1$ function, $u(1) - u(0) = \int_0^1 u'(t)\,dt$, so that $\|u(1) - u(0)\| \le \sup_{0 \le t \le 1} \|u'(t)\|$. Setting $u(t) = f(x + t(x' - x)) - x - t(x' - x)$, it follows that

$$\|f(x') - f(x) - (x' - x)\| \le \|x' - x\| \, \sup_{0 \le t \le 1} \|f'(x + t(x' - x)) - I\|.$$

Now choose $\delta > 0$ so that $\|f'(x) - I\| < \tfrac{1}{2}$ for $\|x\| < \delta$. Suppose that $\|y\| < \tfrac{\delta}{2}$ and define $x_n$ inductively by $x_0 = 0$ and $x_{n+1} = x_n + y - f(x_n)$. The assumptions show that if $\|x\|, \|x'\| < \delta$ then

$$\|f(x') - f(x) - (x' - x)\| \le \tfrac{1}{2} \|x' - x\|.$$

In particular $f(x) = f(x')$ implies $x = x'$. In the inductive scheme $\|x_n\| < \delta$ and $\|x_{n+1} - x_n\| < \tfrac{\delta}{2^{n+1}}$. Thus $(x_n)$ is a Cauchy sequence tending to some $x$. By construction $f(x) = y$ as required.
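The contraction scheme above can be run verbatim in code. A minimal sketch, assuming an illustrative $f$ with $f(0) = 0$ and $f'(0) = I$ (so no affine normalization is needed) and a small target $y$:

```python
# Sketch of the iteration x_{n+1} = x_n + y - f(x_n) from the proof.
import numpy as np

def f(v):
    # illustrative C^1 map with f(0) = 0 and f'(0) = I
    return v + 0.1 * np.array([v[0]**2 - v[1]**2, 2 * v[0] * v[1]])

y = np.array([0.05, -0.03])        # small target point
x = np.zeros(2)                    # x_0 = 0
for n in range(10):
    x_next = x + y - f(x)
    print(n, np.linalg.norm(x_next - x))   # steps shrink geometrically
    x = x_next

print(f(x) - y)                    # ~0: the limit x satisfies f(x) = y
```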

To check that $g = f^{-1}$ is $C^1$, write $g(y + k) = x + h$, so that $f(x + h) = f(x) + k$. By the inequalities above, $\|h - k\| < \tfrac{1}{2}\|h\|$, so that $\tfrac{1}{2}\|h\| < \|k\| < \tfrac{3}{2}\|h\|$. On the other hand if $A = f'(x)$, then $\|A - I\| < \tfrac{1}{2}$. Using the geometric series for $B = I - A$, it follows that $\|A^{-1}\| < 2$. But then

$$\frac{\|g(y + k) - g(y) - f'(x)^{-1}k\|}{\|k\|} = \frac{\|h - f'(x)^{-1}[f(x + h) - f(x)]\|}{\|k\|} \le 4 \, \frac{\|f(x + h) - f(x) - f'(x)h\|}{\|h\|}$$

tends to 0 as $k$ and $h$ tend to 0, proving that $g$ is $C^1$ with $g'(y) = f'(g(y))^{-1}$.

The proof above is presented for a finite-dimensional space, but applies equally well to Banach spaces. If an invertible function $f$ is $C^k$ with $k > 1$, then so too is its inverse. This follows by induction using the fact that the map $F(A) = A^{-1}$ on operators is $C^k$ for any $k$ (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant).[7][8] The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander.

Generalizations

Manifolds

The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map $F: M \to N$ (of class $C^1$), if the differential of $F$,

$$dF_p : T_p M \to T_{F(p)} N,$$

is a linear isomorphism at a point $p$ in $M$, then there exists an open neighborhood $U$ of $p$ such that

$$F|_U : U \to F(U)$$

is a diffeomorphism. Note that this implies that the connected components of $M$ and $N$ containing $p$ and $F(p)$ have the same dimension, as is already directly implied from the assumption that $dF_p$ is an isomorphism. If the derivative of $F$ is an isomorphism at all points $p$ in $M$ then the map $F$ is a local diffeomorphism.

Banach spaces

The inverse function theorem can also be generalized to differentiable maps between Banach spaces $X$ and $Y$.[9] Let $U$ be an open neighbourhood of the origin in $X$ and $F: U \to Y$ a continuously differentiable function, and assume that the Fréchet derivative $dF_0: X \to Y$ of $F$ at 0 is a bounded linear isomorphism of $X$ onto $Y$. Then there exists an open neighbourhood $V$ of $F(0)$ in $Y$ and a continuously differentiable map $G: V \to X$ such that $F(G(y)) = y$ for all $y$ in $V$. Moreover, $G(y)$ is the only sufficiently small solution $x$ of the equation $F(x) = y$.

Banach manifolds

These two directions of generalization can be combined in the inverse function theorem for Banach manifolds.[10]

Constant rank theorem

The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point.[11] Specifically, if $F: M \to N$ has constant rank near a point $p \in M$, then there are open neighborhoods $U$ of $p$ and $V$ of $F(p)$ and there are diffeomorphisms $u: T_pM \to U$ and $v: T_{F(p)}N \to V$ such that $F(U) \subseteq V$ and such that the derivative $dF_p: T_pM \to T_{F(p)}N$ is equal to $v^{-1} \circ F \circ u$. That is, $F$ "looks like" its derivative near $p$. The set of points $p \in M$ such that the rank is constant in a neighbourhood of $p$ is an open dense subset of $M$; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain.

When the derivative of F is injective (resp. surjective) at a point p, it is also injective (resp. surjective) in a neighborhood of p, and hence the rank of F is constant on that neighborhood, and the constant rank theorem applies.

Holomorphic functions

If a holomorphic function $F$ is defined from an open set $U$ of $\mathbb{C}^n$ into $\mathbb{C}^n$, and the Jacobian matrix of complex derivatives is invertible at a point $p$, then $F$ is an invertible function near $p$. This follows immediately from the real multivariable version of the theorem. One can also show that the inverse function is again holomorphic.[12]

Polynomial functions

If it were true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is, a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether this is true or false, even in the case of two variables. This is a major open problem in the theory of polynomials.

Selections

When $f: \mathbb{R}^n \to \mathbb{R}^m$ with $m \le n$, $f$ is $k$ times continuously differentiable, and the Jacobian $A = \nabla f(\bar{x})$ at a point $\bar{x}$ is of rank $m$, the inverse of $f$ may not be unique. However, there exists a local selection function $s$ such that $f(s(y)) = y$ for all $y$ in a neighborhood of $\bar{y} = f(\bar{x})$, $s(\bar{y}) = \bar{x}$, $s$ is $k$ times continuously differentiable in this neighborhood, and $\nabla s(\bar{y}) = A^T (A A^T)^{-1}$ ($\nabla s(\bar{y})$ is the Moore–Penrose pseudoinverse of $A$).[13]
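A numerical sketch of such a selection: the map $f: \mathbb{R}^3 \to \mathbb{R}^2$ below and the Gauss–Newton construction of $s$ are illustrative assumptions (the theorem only asserts existence), but to first order this construction reproduces the pseudoinverse derivative:

```python
# Sketch: a local selection s with f(s(y)) = y built by Gauss-Newton steps.
import numpy as np

def f(v):
    # illustrative map R^3 -> R^2 with full-rank Jacobian
    return np.array([v[0] + v[1]**2, np.sin(v[1]) + v[2]])

def jac(v):
    return np.array([[1.0, 2 * v[1], 0.0],
                     [0.0, np.cos(v[1]), 1.0]])

x_bar = np.array([0.1, 0.2, 0.3])
y_bar = f(x_bar)
A = jac(x_bar)                         # rank 2

def s(y, iters=50):
    x = x_bar.copy()                   # start at x_bar, so s(y_bar) = x_bar
    for _ in range(iters):
        x = x + np.linalg.pinv(jac(x)) @ (y - f(x))
    return x

# f(s(y)) = y near y_bar, and the derivative of s at y_bar matches the
# Moore-Penrose pseudoinverse A^T (A A^T)^{-1}:
h = 1e-6
grad_s = np.column_stack([(s(y_bar + h * e) - s(y_bar - h * e)) / (2 * h)
                          for e in np.eye(2)])
dy = np.array([1e-3, -2e-3])
print(np.allclose(f(s(y_bar + dy)), y_bar + dy))             # True
print(np.allclose(grad_s, np.linalg.pinv(A), atol=1e-5))     # True
```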

Notes

  1. "Derivative of Inverse Functions". Math Vault. 2016-02-28. Retrieved 2019-07-26.
  2. McOwen, Robert C. (1996). "Calculus of Maps between Banach Spaces". Partial Differential Equations: Methods and Applications. Upper Saddle River, NJ: Prentice Hall. pp. 218–224. ISBN 0-13-121880-8.
  3. Tao, Terence (September 12, 2011). "The inverse function theorem for everywhere differentiable maps". Retrieved 2019-07-26.
  4. Jaffe, Ethan. "Inverse Function Theorem" (PDF).
  5. Spivak, Michael (1965). Calculus on Manifolds. Boston: Addison-Wesley. pp. 31–35. ISBN 0-8053-9021-9.
  6. Hubbard, John H.; Hubbard, Barbara Burke (2001). Vector Analysis, Linear Algebra, and Differential Forms: A Unified Approach (Matrix ed.).
  7. Hörmander, Lars (2015). The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis. Classics in Mathematics (2nd ed.). Springer. p. 10. ISBN 9783642614972.
  8. Cartan, Henri (1971). Calcul Differentiel [Differential Calculus] (in French). Hermann. pp. 55–61. ISBN 9780395120330.
  9. Luenberger, David G. (1969). Optimization by Vector Space Methods. New York: John Wiley & Sons. pp. 240–242. ISBN 0-471-55359-X.
  10. Lang, Serge (1985). Differential Manifolds. New York: Springer. pp. 13–19. ISBN 0-387-96113-5.
  11. Boothby, William M. (1986). An Introduction to Differentiable Manifolds and Riemannian Geometry (Second ed.). Orlando: Academic Press. pp. 46–50. ISBN 0-12-116052-1.
  12. Fritzsche, K.; Grauert, H. (2002). From Holomorphic Functions to Complex Manifolds. Springer. pp. 33–36.
  13. Dontchev, Asen L.; Rockafellar, R. Tyrrell (2014). Implicit Functions and Solution Mappings: A View from Variational Analysis (Second ed.). New York: Springer-Verlag. p. 54. ISBN 978-1-4939-1036-6.
