Stirling's approximation

[Figure: Comparison of Stirling's approximation with the factorial]

In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though it was first stated by Abraham de Moivre.[1][2][3]

The version of the formula typically used in applications is

$$\ln(n!) = n\ln n - n + O(\ln n)$$

(in big O notation, as n → ∞), or, by changing the base of the logarithm (for instance in the worst-case lower bound for comparison sorting),

$$\log_2(n!) = n\log_2 n - n\log_2 e + O(\log_2 n).$$

Specifying the constant in the O(ln n) error term gives (1/2)ln(2πn), yielding the more precise formula:

$$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n},$$

where the sign ~ means that the two quantities are asymptotic: their ratio tends to 1 as n tends to infinity.
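
As a quick numerical illustration of this asymptotic relation, the following Python sketch (ours, not part of the article; it uses only the standard library, and the helper name stirling is arbitrary) compares n! with √(2πn)(n/e)^n:

```python
import math

def stirling(n: int) -> float:
    """Leading-order Stirling approximation: sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 5, 10, 100):
    # The ratio n! / stirling(n) tends to 1 as n grows (about 1.084 already at n = 1).
    print(n, math.factorial(n) / stirling(n))
```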

Derivation

Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum

$$\ln(n!) = \sum_{j=1}^{n} \ln j$$

with an integral:

$$\sum_{j=1}^{n} \ln j \approx \int_1^n \ln x\,dx = n\ln n - n + 1.$$

The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating n!, one considers its natural logarithm, as this is a slowly varying function:

$$\ln(n!) = \ln 1 + \ln 2 + \cdots + \ln n.$$

The right-hand side of this equation minus

$$\tfrac{1}{2}\left(\ln 1 + \ln n\right) = \tfrac{1}{2}\ln n$$

is the approximation by the trapezoid rule of the integral

$$\int_1^n \ln x\,dx = n\ln n - n + 1,$$

and the error in this approximation is given by the Euler–Maclaurin formula:

$$\ln(n!) - \tfrac{1}{2}\ln n = \tfrac{1}{2}\ln 1 + \ln 2 + \ln 3 + \cdots + \ln(n-1) + \tfrac{1}{2}\ln n = n\ln n - n + 1 + \sum_{k=2}^{m} \frac{(-1)^{k} B_{k}}{k(k-1)}\left(\frac{1}{n^{k-1}} - 1\right) + R_{m,n},$$

where B_k is a Bernoulli number, and R_{m,n} is the remainder term in the Euler–Maclaurin formula. Take limits to find that

$$\lim_{n\to\infty}\left(\ln(n!) - n\ln n + n - \tfrac{1}{2}\ln n\right) = 1 - \sum_{k=2}^{m} \frac{(-1)^{k} B_{k}}{k(k-1)} + \lim_{n\to\infty} R_{m,n}.$$

Denote this limit as y. Because the remainder R_{m,n} in the Euler–Maclaurin formula satisfies

$$R_{m,n} = \lim_{n\to\infty} R_{m,n} + O\!\left(\frac{1}{n^{m}}\right),$$

where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form:

$$\ln(n!) = n\ln n - n + \tfrac{1}{2}\ln n + y + \sum_{k=2}^{m} \frac{(-1)^{k} B_{k}}{k(k-1)\,n^{k-1}} + O\!\left(\frac{1}{n^{m}}\right).$$

Taking the exponential of both sides and choosing any positive integer m, one obtains a formula involving an unknown quantity e^y. For m = 1, the formula is

$$n! = e^{y}\sqrt{n}\left(\frac{n}{e}\right)^{n}\left(1 + O\!\left(\frac{1}{n}\right)\right).$$

The quantity e^y can be found by taking the limit on both sides as n tends to infinity and using Wallis' product, which shows that e^y = √(2π). Therefore, one obtains Stirling's formula:

$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left(1 + O\!\left(\frac{1}{n}\right)\right).$$

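The appearance of √(2π) can also be observed numerically. A minimal sketch (ours; math.lgamma is the standard library's ln Γ, so math.lgamma(n + 1) equals ln n!) estimates e^y = lim n!/(√n (n/e)^n):

```python
import math

def constant_estimate(n: int) -> float:
    """n! / (sqrt(n) * (n/e)**n), computed in log space to avoid overflow."""
    return math.exp(math.lgamma(n + 1) - 0.5 * math.log(n) - n * (math.log(n) - 1))

for n in (10, 1000, 100_000):
    print(n, constant_estimate(n))        # approaches sqrt(2*pi) from above

print("sqrt(2*pi) =", math.sqrt(2 * math.pi))
```
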
Alternative derivation

An alternative formula for n! using the gamma function is

$$n! = \int_0^{\infty} x^{n} e^{-x}\,dx$$

(as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains

$$n! = \int_0^{\infty} e^{n\ln x - x}\,dx = e^{n\ln n}\, n \int_0^{\infty} e^{n(\ln y - y)}\,dy.$$

Applying Laplace's method one has

$$\int_0^{\infty} e^{n(\ln y - y)}\,dy \sim \sqrt{\frac{2\pi}{n}}\, e^{-n},$$

which recovers Stirling's formula:

$$n! \sim e^{n\ln n}\, n \sqrt{\frac{2\pi}{n}}\, e^{-n} = \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}.$$

In fact, further corrections can also be obtained using Laplace's method. For example, computing the two-order expansion using Laplace's method yields (using little-o notation)

$$\int_0^{\infty} e^{n(\ln y - y)}\,dy = \sqrt{\frac{2\pi}{n}}\, e^{-n}\left(1 + \frac{1}{12n} + o\!\left(\frac{1}{n}\right)\right)$$

and gives Stirling's formula to two orders:

$$n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left(1 + \frac{1}{12n} + o\!\left(\frac{1}{n}\right)\right).$$

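A quick numeric check of this two-order formula (our own sketch, standard library only; math.lgamma supplies the exact ln n!): the correction factor 1 + 1/(12n) removes most of the leading-order error.

```python
import math

def log_stirling_two_orders(n: int) -> float:
    """ln of sqrt(2*pi*n) * (n/e)**n * (1 + 1/(12*n))."""
    return 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1) + math.log1p(1 / (12 * n))

for n in (5, 10, 50):
    # Remaining relative error, roughly of size 1/(288*n**2).
    print(n, math.lgamma(n + 1) - log_stirling_two_orders(n))
```
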
A complex-analysis version of this method[4] is to consider 1/n! as a Taylor coefficient of the exponential function e^z = Σ_{n≥0} z^n/n!, computed by Cauchy's integral formula as

$$\frac{1}{n!} = \frac{1}{2\pi i}\oint_{|z|=r} \frac{e^{z}}{z^{n+1}}\,dz.$$

This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius r. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term.
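
The contour integral can be evaluated numerically as a sanity check. The sketch below (ours; it picks the circle |z| = n, which passes near the saddle point of e^z/z^(n+1), and applies the trapezoidal rule in the angle) recovers 1/n! to high accuracy:

```python
import cmath
import math

def inverse_factorial_via_cauchy(n: int, samples: int = 4096) -> float:
    """Approximate (1/(2*pi)) * integral over theta in [0, 2*pi) of exp(z) * z**(-n),
    where z = r*exp(i*theta) and r = n; this equals 1/n! by Cauchy's integral formula."""
    r = n
    total = 0j
    for k in range(samples):
        z = r * cmath.exp(2j * math.pi * k / samples)
        total += cmath.exp(z) * z ** (-n)
    return (total / samples).real          # imaginary parts cancel by symmetry

n = 10
print(inverse_factorial_via_cauchy(n) * math.factorial(n))   # very close to 1.0
```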

Speed of convergence and error estimates

[Figure: The relative error in a truncated Stirling series vs. n, for 0 to 5 terms. The kinks in the curves represent points where the truncated series coincides with Γ(n + 1).]

Stirling's formula is in fact the first approximation to the following series (now called the Stirling series[5]):

$$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left(1 + \frac{1}{12n} + \frac{1}{288n^{2}} - \frac{139}{51840n^{3}} - \frac{571}{2488320n^{4}} + \cdots\right).$$

An explicit formula for the coefficients in this series was given by G. Nemes.[6][a] The first graph in this section shows the relative error vs. n, for 1 through all 5 terms listed above.

[Figure: The relative error in a truncated Stirling series vs. the number of terms used]

As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of n there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S(n, t) be the Stirling series to t terms evaluated at n. The graphs show

$$\left|\ln\frac{S(n,t)}{n!}\right|,$$

which, when small, is essentially the relative error.

Writing Stirling's series in the form

$$\ln(n!) \sim n\ln n - n + \tfrac{1}{2}\ln(2\pi n) + \frac{1}{12n} - \frac{1}{360n^{3}} + \frac{1}{1260n^{5}} - \frac{1}{1680n^{7}} + \cdots,$$

it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term.
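
Both statements are easy to observe numerically. The following sketch (ours, standard library only; the Bernoulli numbers B_2 through B_14 are hard-coded) evaluates the logarithmic series above truncated after t correction terms at the deliberately small value n = 1: the truncation error is opposite in sign to, and no larger than, the first omitted term, and its magnitude stops shrinking after a few terms, illustrating the asymptotic (non-convergent) character of the series.

```python
import math
from fractions import Fraction

# Bernoulli numbers B_2, B_4, ..., B_14 (enough terms for this illustration).
B2K = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30),
       Fraction(5, 66), Fraction(-691, 2730), Fraction(7, 6)]

def term(n: float, k: int) -> float:
    """k-th correction term B_{2k} / (2k*(2k-1)*n**(2k-1)) of the logarithmic series."""
    return float(B2K[k - 1]) / (2 * k * (2 * k - 1) * n ** (2 * k - 1))

def log_stirling_series(n: float, t: int) -> float:
    """The logarithmic Stirling series truncated after t correction terms."""
    return (n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
            + sum(term(n, k) for k in range(1, t + 1)))

n = 1
for t in range(7):
    error = log_stirling_series(n, t) - math.lgamma(n + 1)   # signed truncation error
    omitted = term(n, t + 1)                                  # first omitted term
    print(t, error, omitted)
```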

More precise bounds, due to Robbins,[7] valid for all positive integers n, are

$$\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} e^{\frac{1}{12n+1}} < n! < \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} e^{\frac{1}{12n}}.$$

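The Robbins bounds can be verified directly (a minimal sketch of ours, standard library only; the check is done in log space, and n is kept moderate because for very large n the two bounds agree to more digits than a double-precision float carries):

```python
import math

def log_robbins_bounds(n: int):
    """Natural logs of the Robbins lower and upper bounds on n!."""
    base = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
    return base + 1 / (12 * n + 1), base + 1 / (12 * n)

for n in (1, 10, 100):
    lower, upper = log_robbins_bounds(n)
    assert lower < math.lgamma(n + 1) < upper   # holds for every positive integer n
print("Robbins bounds verified")
```
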
Stirling's formula for the gamma function

For all positive integers,

$$n! = \Gamma(n+1),$$

where Γ denotes the gamma function.

However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re(z) > 0, then

$$\ln\Gamma(z) = z\ln z - z + \tfrac{1}{2}\ln\frac{2\pi}{z} + \int_0^{\infty} \frac{2\arctan\left(\frac{t}{z}\right)}{e^{2\pi t}-1}\,dt.$$

Repeated integration by parts gives

$$\ln\Gamma(z) \sim z\ln z - z + \tfrac{1}{2}\ln\frac{2\pi}{z} + \sum_{n=1}^{N-1} \frac{B_{2n}}{2n(2n-1)z^{2n-1}},$$

where B_n is the n-th Bernoulli number (note that the limit of the sum as N → ∞ is not convergent, so this formula is just an asymptotic expansion). The formula is valid for z large enough in absolute value, when |arg(z)| < π − ε, where ε is positive, with an error term of O(z^(−2N+1)). The corresponding approximation may now be written:

$$\Gamma(z) = \sqrt{\frac{2\pi}{z}}\left(\frac{z}{e}\right)^{z}\left(1 + O\!\left(\frac{1}{z}\right)\right),$$

where the expansion is identical to that of Stirling's series above for n!, except that n is replaced with z − 1.[8]

A further application of this asymptotic expansion is to complex arguments z with constant Re(z); see, for example, the Stirling formula applied to Im(z) = t in the Riemann–Siegel theta function on the straight line 1/4 + it.

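As an illustration (our own sketch; it stays with real z, since the standard library's math.lgamma accepts only real arguments), truncating the expansion after the B_4 term already reproduces ln Γ(z) to many digits for moderate z:

```python
import math

def log_gamma_asymptotic(z: float) -> float:
    """z*ln z - z + (1/2)*ln(2*pi/z) + B_2/(2*z) + B_4/(12*z**3),
    i.e. the expansion above with N = 3 (B_2 = 1/6, B_4 = -1/30)."""
    return (z * math.log(z) - z + 0.5 * math.log(2 * math.pi / z)
            + 1 / (12 * z) - 1 / (360 * z ** 3))

for z in (5.0, 10.0, 50.0):
    # The discrepancy shrinks like z**(-5), the order of the first omitted term.
    print(z, log_gamma_asymptotic(z) - math.lgamma(z))
```
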
Error bounds

For any positive integer N, the following notation is introduced:

$$\ln\Gamma(z) = z\ln z - z + \tfrac{1}{2}\ln\frac{2\pi}{z} + \sum_{n=1}^{N-1} \frac{B_{2n}}{2n(2n-1)z^{2n-1}} + R_{N}(z)$$

and

$$\Gamma(z) = \sqrt{\frac{2\pi}{z}}\left(\frac{z}{e}\right)^{z}\left(\sum_{n=0}^{N-1} \frac{a_{n}}{z^{n}} + \widetilde{R}_{N}(z)\right).$$

Then [9][10]

For further information and other error bounds, see the cited papers.

A convergent version of Stirling's formula

Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series.[11] Obtaining a convergent version of Stirling's formula entails evaluating Raabe's formula:

$$\int_0^1 \ln\Gamma(x+t)\,dt = x\ln x - x + \tfrac{1}{2}\ln(2\pi).$$

One way to do this is by means of a convergent series of inverted rising exponentials. If

$$z^{\overline{n}} = z(z+1)\cdots(z+n-1),$$

then

where

where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series

which converges when Re(x) > 0.

Versions suitable for calculators

The approximation

$$\Gamma(z) \approx \sqrt{\frac{2\pi}{z}}\left(\frac{z}{e}\sqrt{z\sinh\frac{1}{z} + \frac{1}{810z^{6}}}\right)^{z}$$

and its equivalent form

$$2\ln\Gamma(z) \approx \ln(2\pi) - \ln z + z\left(2\ln z + \ln\left(z\sinh\frac{1}{z} + \frac{1}{810z^{6}}\right) - 2\right)$$

can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.[12]

Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:[13]

$$\Gamma(z) \approx \sqrt{\frac{2\pi}{z}}\left(\frac{1}{e}\left(z + \frac{1}{12z - \frac{1}{10z}}\right)\right)^{z},$$

or equivalently,

$$\ln\Gamma(z) \approx \tfrac{1}{2}\left(\ln(2\pi) - \ln z\right) + z\left(\ln\left(z + \frac{1}{12z - \frac{1}{10z}}\right) - 1\right).$$

An alternative approximation for the gamma function stated by Srinivasa Ramanujan (Ramanujan 1988) is

$$\Gamma(1+x) \approx \sqrt{\pi}\left(\frac{x}{e}\right)^{x}\left(8x^{3} + 4x^{2} + x + \frac{1}{30}\right)^{1/6}$$

for x ≥ 0. The equivalent approximation for ln n! has an asymptotic error of 1/(1400n^3) and is given by

$$\ln n! \approx n\ln n - n + \tfrac{1}{6}\ln\left(8n^{3} + 4n^{2} + n + \tfrac{1}{30}\right) + \tfrac{1}{2}\ln\pi.$$

The approximation may be made precise by giving paired upper and lower bounds; one such inequality is[14][15][16][17]

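For reference, here is a short side-by-side comparison of these calculator-oriented approximations (our own sketch; the function names are ours, and the standard library's math.gamma serves as the benchmark):

```python
import math

def windschitl_gamma(z: float) -> float:
    """Gamma(z) ~ sqrt(2*pi/z) * ((z/e) * sqrt(z*sinh(1/z) + 1/(810*z**6)))**z."""
    return math.sqrt(2 * math.pi / z) * ((z / math.e) * math.sqrt(z * math.sinh(1 / z) + 1 / (810 * z ** 6))) ** z

def nemes_gamma(z: float) -> float:
    """Gamma(z) ~ sqrt(2*pi/z) * ((z + 1/(12*z - 1/(10*z))) / e)**z."""
    return math.sqrt(2 * math.pi / z) * ((z + 1 / (12 * z - 1 / (10 * z))) / math.e) ** z

def ramanujan_gamma(z: float) -> float:
    """Gamma(1+x) ~ sqrt(pi) * (x/e)**x * (8x^3 + 4x^2 + x + 1/30)**(1/6), with x = z - 1."""
    x = z - 1
    return math.sqrt(math.pi) * (x / math.e) ** x * (8 * x ** 3 + 4 * x ** 2 + x + 1 / 30) ** (1 / 6)

for z in (2.0, 8.0, 20.5):
    exact = math.gamma(z)
    # All three ratios are close to 1; Windschitl and Nemes track Gamma(z)
    # to roughly 8 or more digits once Re(z) exceeds 8.
    print(z, windschitl_gamma(z) / exact, nemes_gamma(z) / exact, ramanujan_gamma(z) / exact)
```
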
Estimating central effect in the binomial distribution

In computer science, especially in the context of randomized algorithms, it is common to generate random bit vectors that are powers of two in length. Many algorithms producing and consuming these bit vectors are sensitive to the population count of the bit vectors generated, or of the Manhattan distance between two such vectors. Often of particular interest is the density of "fair" vectors, where the population count of an n-bit vector is exactly n/2. This amounts to the probability that an iterated coin toss over many trials leads to a tie game.

Stirling's approximation to $\binom{n}{n/2}$, the central and maximal binomial coefficient of the binomial distribution, simplifies especially nicely where n takes the form of 2^k, for an integer k. Here we are interested in how the density of the central population count is diminished compared to 2^n, deriving the last form in decibel attenuation:

$$\binom{n}{n/2} \approx \frac{2^{n}}{\sqrt{\tfrac{1}{2}\pi n}}, \qquad 10\log_{10}\sqrt{\tfrac{1}{2}\pi n}\ \text{dB}.$$

This simple approximation exhibits surprising accuracy:

Binary diminishment obtains from dB on dividing by 10 log10(2) ≈ 3.01.

As a direct fractional estimate:

$$\binom{n}{n/2}\,2^{-n} \approx \sqrt{\frac{2}{\pi n}}.$$

Once again, both examples exhibit accuracy easily besting 1%:

Interpreted as an iterated coin toss, a session involving slightly over a million coin flips (a binary million) has one chance in roughly 1300 of ending in a draw.

Both of these approximations (one in log space, the other in linear space) are simple enough for many software developers to obtain the estimate mentally, with exceptional accuracy by the standards of mental estimates.

The binomial distribution closely approximates the normal distribution for large n, so these estimates based on Stirling's approximation also relate to the peak value of the probability mass function for large n and p = 1/2, as specified for the following distribution: $\mathcal{N}(np,\, np(1-p))$.
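
A compact check of the coin-toss interpretation (our own sketch; it works in log space via math.lgamma so that a binary million of flips does not overflow):

```python
import math

def tie_probability(flips: int) -> float:
    """Probability C(flips, flips//2) / 2**flips for an even number of fair coin flips,
    evaluated via lgamma to stay in floating point."""
    half = flips // 2
    return math.exp(math.lgamma(flips + 1) - 2 * math.lgamma(half + 1) - flips * math.log(2))

def tie_probability_stirling(flips: int) -> float:
    """Stirling-based estimate sqrt(2 / (pi * flips)) for the same probability."""
    return math.sqrt(2 / (math.pi * flips))

for flips in (2 ** 10, 2 ** 20):
    exact = tie_probability(flips)
    approx = tie_probability_stirling(flips)
    # For 2**20 flips the estimate is about 1 chance in 1283, i.e. roughly 1 in 1300.
    print(flips, exact, approx, round(1 / approx))
```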

History

The formula was first discovered by Abraham de Moivre[2] in the form

$$n! \sim [\text{constant}] \cdot n^{n+1/2} e^{-n}.$$

De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely √(2π).[3]

Notes

  1. ^ Further terms are listed in the On-Line Encyclopedia of Integer Sequences as A001163 and A001164.

References

  1. ^ Dutka, Jacques (1991), "The early history of the factorial function", Archive for History of Exact Sciences, 43 (3): 225–249, doi:10.1007/BF00389433
  2. ^ Le Cam, L. (1986), "The central limit theorem around 1935", Statistical Science, 1 (1): 78–96 [p. 81], doi:10.1214/ss/1177013818, MR 0833276: "The result, obtained using a formula originally proved by de Moivre but now called Stirling's formula, occurs in his 'Doctrine of Chances' of 1733."
  3. ^ Pearson, Karl (1924), "Historical note on the origin of the normal curve of errors", Biometrika, 16 (3/4): 402–404 [p. 403], doi:10.2307/2331714, JSTOR 2331714: "I consider that the fact that Stirling showed that De Moivre's arithmetical constant was √(2π) does not entitle him to claim the theorem, [...]"
  4. ^ Phillipe Flajolet and Robert Sedgewick, Analytic Combinatorics, p. 555.
  5. ^ Olver, F. W. J.; Olde Daalhuis, A. B.; Lozier, D. W.; Schneider, B. I.; Boisvert, R. F.; Clark, C. W.; Miller, B. R.; Saunders, B. V., eds., "NIST Digital Library of Mathematical Functions".
  6. ^ Nemes, Gergő (2010), "On the Coefficients of the Asymptotic Expansion of n!", Journal of Integer Sequences, 13 (6): 5
  7. ^ Robbins, Herbert (1955), "A Remark on Stirling's Formula", The American Mathematical Monthly, 62 (1): 26–29, doi:10.2307/2308012, JSTOR 2308012
  8. ^ Spiegel, M. R. (1999). Mathematical handbook of formulas and tables. McGraw-Hill. p. 148.
  9. ^ F. W. Schäfke, A. Sattler, Restgliedabschätzungen für die Stirlingsche Reihe, Note. Mat. 10 (1990), 453–470.
  10. ^ G. Nemes, Error bounds and exponential improvements for the asymptotic expansions of the gamma function and its reciprocal, Proc. Roy. Soc. Edinburgh Sect. A 145 (2015), 571–596.
  11. ^ "Archived copy" (PDF). Archived (PDF) from the original on 2012-01-28. Retrieved 2012-03-01.CS1 maint: archived copy as title (link)
  12. ^ Toth, V. T. Programmable Calculators: Calculators and the Gamma Function (2006) Archived 2005-12-31 at the Wayback Machine.
  13. ^ Nemes, Gergő (2010), "New asymptotic expansion for the Gamma function", Archiv der Mathematik, 95 (2): 161–169, doi:10.1007/s00013-010-0146-9, ISSN 0003-889X.
  14. ^ Karatsuba, Ekatherina (2001), "On the asymptotic representation of the Euler gamma function by Ramanujan", Journal of Computational and Applied Mathematics, 135 (2): 225–240, doi:10.1016/S0377-0427(00)00586-0.
  15. ^ Mortici, Cristinel (2011), "Ramanujan's estimate for the gamma function via monotonicity arguments", Ramanujan J., 25: 149–154
  16. ^ Mortici, Cristinel (2011), "Improved asymptotic formulas for the gamma function", Comput. Math. Appl., 61: 3364–3369.
  17. ^ Mortici, Cristinel (2011), "On Ramanujan's large argument formula for the gamma function", Ramanujan J., 26: 185–192.
  • Olver, F. W. J.; Olde Daalhuis, A. B.; Lozier, D. W.; Schneider, B. I.; Boisvert, R. F.; Clark, C. W.; Miller, B. R. & Saunders, B. V., NIST Digital Library of Mathematical Functions, Release 1.0.13 of 2016-09-16
  • Abramowitz, M. & Stegun, I. (2002), Handbook of Mathematical Functions
  • Nemes, G. (2010), "New asymptotic expansion for the Gamma function", Archiv der Mathematik, 95 (2): 161–169, doi:10.1007/s00013-010-0146-9
  • Paris, R. B. & Kaminski, D. (2001), Asymptotics and the Mellin–Barnes Integrals, New York: Cambridge University Press, ISBN 978-0-521-79001-7
  • Whittaker, E. T. & Watson, G. N. (1996), A Course in Modern Analysis (4th ed.), New York: Cambridge University Press, ISBN 978-0-521-58807-2
  • Dan Romik, Stirling’s Approximation for n!: The Ultimate Short Proof?, The American Mathematical Monthly, Vol. 107, No. 6 (Jun. – Jul., 2000), 556–557.
  • Y.-C. Li, A Note on an Identity of The Gamma Function and Stirling's Formula, Real Analysis Exchange, Vol. 32(1), 2006/2007, pp. 267–272.
