Consider the simple-looking second-order differential equation

$\frac{{d}^{2}g}{d{t}^{2}}+{g}^{n}=1$

where the exponent *n* can take any positive real value. This equation, appropriately scaled, describes the evolution of the square of the radial variable of a body moving under the influence of a power potential.

The equation is in general hopelessly nonlinear and as such not amenable to exact solution. It turns out, however, that a very good approximate solution can be found for continuous variation of the exponent. Initial conditions for the solution will have a vanishing first derivative at the origin, which restricts the solution to functions even in the variable of evolution. This corresponds to a body beginning movement from one turning point.

Numerical solutions with these initial conditions look very similar as a function of the exponent, as can be readily seen in this interactive graphic:

The initial condition *c*_{0} is limited by the value *n* of the exponent in a manner to be described later.
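
The qualitative similarity is easy to confirm with a direct numerical integration; a minimal sketch assuming SciPy, with illustrative exponents and initial condition:

```python
# Integrate g'' + g^n = 1 from a turning point: g(0) = c0, g'(0) = 0.
# The exponents and initial condition below are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def solve_equation(n, c0, t_max=15.0):
    # first-order system y = [g, g']
    rhs = lambda t, y: [y[1], 1.0 - y[0]**n]
    return solve_ivp(rhs, (0.0, t_max), [c0, 0.0],
                     dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 15.0, 6)
for n in [1.0, 2.0, 3.5]:
    g = solve_equation(n, 1.2).sol(t)[0]
    print(n, np.round(g, 4))
```

The solutions oscillate between the initial condition and a second extremum, with a shape that changes only mildly with the exponent.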

For all values of the exponent there is one trivial exact solution:

$g\left(t\right)=1$

For two positive values of the exponent there are simple exact solutions. For *n* = 1 the solution is a simple sinusoid,

$g\left(t\right)=1+({c}_{0}-1)cost\phantom{\rule{1em}{0ex}},\phantom{\rule{1em}{0ex}}n=1$

as can be verified by differentiation. The solution is symmetric about the trivial solution, with extrema ${c}_{0}$ and $2-{c}_{0}$ . In terms of the half angle it can be written

$g\left(t\right)=2-{c}_{0}+2({c}_{0}-1){cos}^{2}\frac{t}{2}\phantom{\rule{1em}{0ex}},\phantom{\rule{1em}{0ex}}n=1$
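
Both closed forms can be checked symbolically; a quick SymPy verification (an added illustration, assuming SymPy is available):

```python
# Check that g = 1 + (c0 - 1) cos t solves g'' + g = 1, and that the
# half-angle form is the same function.
import sympy as sp

t, c0 = sp.symbols('t c_0')
g = 1 + (c0 - 1)*sp.cos(t)
assert sp.simplify(sp.diff(g, t, 2) + g - 1) == 0

g_half = 2 - c0 + 2*(c0 - 1)*sp.cos(t/2)**2
assert sp.simplify(g - g_half) == 0
print("n = 1 solution verified")
```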

For *n* = 2 the exact solution is given by elliptic functions. Since any elliptic function can be expressed in terms of other elliptic functions with the same periods, choose a representation of the solution that relates easily to the form just given:

$g\left(t\right)={c}_{1}+({c}_{0}-{c}_{1}){cn}^{2}(\omega t,k)\phantom{\rule{1em}{0ex}},\phantom{\rule{1em}{0ex}}n=2$

The solution is no longer symmetric about the trivial solution, necessitating two constants for the extrema. The second of the two extrema occurs at quarter periods of the Jacobi elliptic function,

$\omega {t}_{1}=K\left(k\right)\phantom{\rule{1em}{0ex}}\to \phantom{\rule{1em}{0ex}}cn\left[K\left(k\right),k\right]=0$

where the elliptic function becomes zero. The second derivative of the squared Jacobi elliptic function is

$\begin{array}{l}\frac{{d}^{2}}{d{t}^{2}}{cn}^{2}(\omega t,k)=2{\omega}^{2}[{sn}^{2}(\omega t,k){dn}^{2}(\omega t,k)-{cn}^{2}(\omega t,k){dn}^{2}(\omega t,k)\\ \hfill +{k}^{2}{sn}^{2}(\omega t,k){cn}^{2}(\omega t,k)]\\ \phantom{\frac{{d}^{2}}{d{t}^{2}}{cn}^{2}(\omega t,k)}=2{\omega}^{2}[1-{k}^{2}+2(2{k}^{2}-1){cn}^{2}(\omega t,k)-3{k}^{2}{cn}^{4}(\omega t,k)]\end{array}$
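
This identity can be checked numerically against a central finite difference; a sketch using SciPy's Jacobi elliptic functions, which take the parameter *m* = *k*² rather than the modulus, at arbitrary test values:

```python
# Numerical check of the second-derivative identity for cn^2.
import numpy as np
from scipy.special import ellipj

omega, k = 0.7, 0.6
cn2 = lambda t: ellipj(omega*t, k**2)[1]**2   # ellipj returns (sn, cn, dn, ph)

t, h = 1.3, 1e-4
numeric = (cn2(t + h) - 2*cn2(t) + cn2(t - h))/h**2   # central difference

cn = ellipj(omega*t, k**2)[1]
formula = 2*omega**2*(1 - k**2 + 2*(2*k**2 - 1)*cn**2 - 3*k**2*cn**4)
print(numeric, formula)   # agree to finite-difference accuracy
```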

Substituting this in the differential equation leads to a set of equations among the four constants,

$\begin{array}{r}2{\omega}^{2}(1-{k}^{2})({c}_{0}-{c}_{1})+{c}_{1}^{2}=1\\ 2{\omega}^{2}(2{k}^{2}-1)+{c}_{1}=0\\ -6{\omega}^{2}{k}^{2}+({c}_{0}-{c}_{1})=0\end{array}$

Using the last two equations to eliminate the extrema from the first equation gives

$4{\omega}^{4}({k}^{4}-{k}^{2}+1)=1$

while adding the last two equations together gives the relation

$2{\omega}^{2}({k}^{2}+1)={c}_{0}$

Eliminating the frequency between these two relations, the elliptic modulus is a solution of

$\begin{array}{c}({c}_{0}^{2}-1){k}^{4}-({c}_{0}^{2}+2){k}^{2}+{c}_{0}^{2}-1=0\\ {k}^{2}=\frac{{c}_{0}^{2}+2\pm \sqrt{3}{c}_{0}\sqrt{4-{c}_{0}^{2}}}{2({c}_{0}^{2}-1)}=\frac{[\sqrt{3}{c}_{0}\pm \sqrt{4-{c}_{0}^{2}}{]}^{2}}{4({c}_{0}^{2}-1)}\end{array}$

The square of the frequency is

${\omega}^{2}=\frac{{c}_{0}}{2({k}^{2}+1)}=\frac{{c}_{0}^{2}-1}{\sqrt{3}[\sqrt{3}{c}_{0}\pm \sqrt{4-{c}_{0}^{2}}]}$

and the second extremum is

${c}_{1}={c}_{0}-6{\omega}^{2}{k}^{2}={c}_{0}-\frac{\sqrt{3}}{2}[\sqrt{3}{c}_{0}\pm \sqrt{4-{c}_{0}^{2}}]$

Since both extrema are equal to unity for the trivial exact solution, the correct sign for the radical is the lower one. The final solution, with a bit of simplification, is thus

$\begin{array}{l}{c}_{1}=-\frac{{c}_{0}}{2}+\frac{3}{2}\sqrt{\frac{4-{c}_{0}^{2}}{3}}\\ \phantom{\rule{.3em}{0ex}}\omega =\frac{1}{2}\sqrt{{c}_{0}+\sqrt{\frac{4-{c}_{0}^{2}}{3}}}\\ \phantom{\rule{.4em}{0ex}}k=\frac{\sqrt{3}{c}_{0}-\sqrt{4-{c}_{0}^{2}}}{2\sqrt{{c}_{0}^{2}-1}}=\frac{2\sqrt{{c}_{0}^{2}-1}}{\sqrt{3}{c}_{0}+\sqrt{4-{c}_{0}^{2}}}\end{array}$

The frequency and elliptic modulus have been rationalized to show that they both remain finite for the trivial solution, with the latter going to zero.
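
These closed-form constants can be checked against a direct numerical integration of the differential equation for *n* = 2; a sketch assuming SciPy, with *c*_{0} = 1.5 as an arbitrary test value:

```python
# Exact n = 2 constants as functions of c0, compared pointwise with a
# numerical solution of g'' + g^2 = 1 from the same initial conditions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

def n2_constants(c0):
    r = np.sqrt((4 - c0**2)/3)
    c1 = -c0/2 + 1.5*r
    omega = 0.5*np.sqrt(c0 + r)
    k = 2*np.sqrt(c0**2 - 1)/(np.sqrt(3)*c0 + np.sqrt(4 - c0**2))
    return c1, omega, k

c0 = 1.5
c1, omega, k = n2_constants(c0)
g_exact = lambda t: c1 + (c0 - c1)*ellipj(omega*t, k**2)[1]**2

sol = solve_ivp(lambda t, y: [y[1], 1 - y[0]**2], (0, 10), [c0, 0],
                dense_output=True, rtol=1e-10, atol=1e-12)
for t in [1.0, 3.0, 7.0]:
    print(t, g_exact(t), sol.sol(t)[0])   # the two columns agree
```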

For a solution analytic in the exponent *n*, the constants here would include a functional dependence on that value. Since it is a property of the Jacobi elliptic function that $cn(t,0)=cost$ , the exact solution for *n* = 2 has a simple relationship to that for *n* = 1 . If the functional dependence on exponent of the former solution were to allow *k* → 0 for arbitrary *c*_{0} as *n* → 1 , then the exact solution for *n* = 1 would be recovered with the additional limits

${c}_{1}\to 2-{c}_{0}\phantom{\rule{1em}{0ex}}\mathrm{and}\phantom{\rule{1em}{0ex}}\omega \to \frac{1}{2}\phantom{\rule{2em}{0ex}},\phantom{\rule{1em}{0ex}}n\to 1$

While such a solution analytic in the exponent *n* is not forthcoming, the relationship between these two exact solutions suggests that one of them can be used as an approximate solution for all values of the exponent. Since the squared Jacobi function already displays an asymmetry about the trivial solution that the circular function does not, the former is a more appropriate choice for the approximation.

Of the three constants needed apart from the initial condition to describe the approximate solution, two can be stated exactly. First multiply the original differential equation by the first derivative of the dependent variable and integrate once:

$\begin{array}{c}\frac{dg}{dt}[\frac{{d}^{2}g}{d{t}^{2}}+{g}^{n}=1]\\ \frac{1}{2}(\frac{dg}{dt}{)}^{2}+\frac{{g}^{n+1}}{n+1}=g-l\end{array}$

The constant of integration *l* is proportional to the square of angular momentum in physical applications, hence the letter. At the extrema of the motion the first derivative goes to zero so that

${c}_{0}-\frac{{c}_{0}^{n+1}}{n+1}=l={c}_{1}-\frac{{c}_{1}^{n+1}}{n+1}$

The second extremum is thus a solution to the non-polynomial equation

$\frac{{c}_{1}^{n+1}}{n+1}-{c}_{1}+l=0$

With the scaling ${c}_{1}=(n+1{)}^{1/n}c$ this becomes

${c}^{n+1}-c+\frac{l}{(n+1{)}^{1/n}}=0$

which is close to the functional form given at the beginning of a presentation on the real roots of non-polynomials. Accounting for minor differences, the solution found for the root in the vicinity of zero is

${c}_{\mathrm{zero}}=\sum _{i=0}^{\infty}\frac{\Gamma [ni+i+1]}{\Gamma [ni+2]}\frac{1}{i!}[\frac{l}{(n+1{)}^{1/n}}{]}^{ni+1}$

and the second extremum in terms of the first is

${c}_{1}=(n+1{)}^{1/n}\sum _{i=0}^{\infty}\frac{\Gamma [ni+i+1]}{\Gamma [ni+2]}\frac{1}{i!}[\frac{(n+1){c}_{0}-{c}_{0}^{n+1}}{(n+1{)}^{1+1/n}}{]}^{ni+1}$

It was demonstrated in the presentation that this series converges for its entire domain of application, so that it can be taken as an exact result.
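
Summing the series is straightforward; the sketch below uses log-gammas to avoid overflow in the individual terms and checks the result against the extremum condition (the parameter values are illustrative):

```python
# Sum the series for the second extremum c1 and verify that it satisfies
# c1 - c1^(n+1)/(n+1) = l, the same condition obeyed by c0.
from math import exp, lgamma, log

def second_extremum(n, c0, terms=100):
    l = c0 - c0**(n + 1)/(n + 1)
    a = l/(n + 1)**(1/n)            # scaled constant of integration
    s = sum(exp(lgamma(n*i + i + 1) - lgamma(n*i + 2) - lgamma(i + 1)
                + (n*i + 1)*log(a)) for i in range(terms))
    return (n + 1)**(1/n)*s

n, c0 = 2.0, 1.4
c1 = second_extremum(n, c0)
l = c0 - c0**(n + 1)/(n + 1)
print(c1, c1 - c1**(n + 1)/(n + 1) - l)   # second value should be near zero
```

For *n* = 2 the result can also be compared against the closed form given earlier for the elliptic solution.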

The real periodic solutions of the differential equation reach a limit when the second extremum is zero, since an arbitrary power of a negative functional value would necessitate a complex solution. Inspection of the series above indicates that the corresponding value for the first extremum occurs when each term is zero, or

$({c}_{0}{)}_{\mathrm{max}}=(n+1{)}^{1/n}$

which is the limitation mentioned at the outset. This limit corresponds to an orbit with zero angular momentum.

For values of the initial condition larger than this limit the constant *l* becomes negative, and being proportional to the square of angular momentum implies complex values for the latter. This potentially interesting feature of the equation will be left for a future presentation.

Formally integrating the differential equation provides an expression for the independent variable:

$t=\frac{1}{\sqrt{2}}\int \frac{dg}{\sqrt{g-{\displaystyle \frac{{g}^{n+1}}{n+1}}-l}}$

Integrating between the two extrema gives the time between them, which as stated above is a quarter period of the Jacobi elliptic function. Replacing *g* with a dummy variable for clarity, the frequency of the Jacobi elliptic function in terms of the initial condition and the elliptic modulus is

$\omega =\frac{K\left(k\right)}{{t}_{1}}=\sqrt{2}K\left(k\right)[\underset{{c}_{1}}{\overset{{c}_{0}}{\int}}\frac{dc}{\sqrt{c-{\displaystyle \frac{{c}^{n+1}}{n+1}}-l}}{]}^{-1}$
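
This prescription can be evaluated with standard adaptive quadrature, which copes with the integrable square-root singularities at the endpoints; a sketch checked against the exact constants for *n* = 2:

```python
# Frequency from the quarter-period integral: omega = sqrt(2) K(k) / integral.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def frequency(n, c0, c1, k):
    l = c0 - c0**(n + 1)/(n + 1)
    integrand = lambda c: 1.0/np.sqrt(c - c**(n + 1)/(n + 1) - l)
    quarter, _ = quad(integrand, c1, c0, limit=200)
    return np.sqrt(2)*ellipk(k**2)/quarter   # ellipk takes m = k^2

# exact n = 2 constants for c0 = 1.5, from the closed forms given earlier
print(frequency(2.0, 1.5, 0.3956439, 0.5702870))   # ~0.75229
```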

The order of endpoints in the integral keeps the frequency positive. This is an exact expression for a given elliptic modulus *k* and ensures that the approximation always matches an exact solution of the differential equation at extrema. This can be seen by comparing the approximation so far with the numerical integration given at the outset:

The approximation in red is to the eye already a pretty good match for the numerical solution for arbitrary elliptic modulus. The discrepancy can be seen better by visualizing the difference between the numerical and approximate solutions:

Varying the elliptic modulus *k* shows that the approximation, while matching the numerical solution at extrema, is generally either slightly above or slightly below the latter between extrema. Careful variation with cursor keys will show that there is a small range of elliptic modulus for which the approximation appears to be equally above and below the numerical solution between extrema.

To make this more exact, define a simple deviation by

$\sigma =\sqrt{\sum _{\mathrm{evaluation\; points}}[{g}_{\mathrm{numerical}}\left(t\right)-{g}_{\mathrm{approximate}}\left(t\right){]}^{2}}$

where the summation is taken over some number of points equally spaced between extrema. The value of this deviation will of course depend on the number of points used, but its value is less important than its general behavior. Choosing the number of points equal to ten for calculational expediency, deviation as a function of elliptic modulus looks like this:

Clearly there is a value of elliptic modulus for which the deviation assumes a minimum. The location of the minimum is insensitive to the number of points chosen.
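
The scan over the elliptic modulus can be sketched as follows; the exponent, initial condition, and grid are illustrative, and a root find stands in for the series expression for the second extremum:

```python
# Deviation between the numerical solution and the cn^2 ansatz as a function
# of the elliptic modulus k, with extrema matched exactly by construction.
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq
from scipy.special import ellipj, ellipk

def deviation(n, c0, k, points=10):
    l = c0 - c0**(n + 1)/(n + 1)
    V = lambda c: c - c**(n + 1)/(n + 1) - l
    c1 = brentq(V, 1e-12, 1.0)                    # second extremum
    t1 = quad(lambda c: 1/np.sqrt(V(c)), c1, c0, limit=200)[0]/np.sqrt(2)
    omega = ellipk(k**2)/t1                       # quarter period -> frequency
    sol = solve_ivp(lambda t, y: [y[1], 1 - y[0]**n], (0, t1), [c0, 0],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    t = np.linspace(0, t1, points + 2)[1:-1]      # points between the extrema
    approx = c1 + (c0 - c1)*ellipj(omega*t, k**2)[1]**2
    return np.sqrt(np.sum((sol.sol(t)[0] - approx)**2))

ks = np.linspace(0.05, 0.95, 19)
devs = [deviation(3.0, 1.3, k) for k in ks]
print(ks[np.argmin(devs)])   # an interior minimum in the elliptic modulus
```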

A natural best choice for the elliptic modulus would thus appear to be the value that minimizes the deviation for a given exponent and initial condition. Estimating the behavior of these values as a function of the two other parameters will complete the approximate solution of the differential equation.

Directly following this paragraph is a hidden interactive graphic that evaluates locations of minima for given *n* as a function of *c*_{0}. Since the evaluations take some time, they have already been processed with this graphic and the resulting data embedded in the page. This allows subsequent visualizations of the data to be immediately interactive. The embedded data is explicitly constrained to be identically zero for both *n* = 1 and *c*_{0} = 1 : the former condition matches the behavior of the exact solution for *n* = 1 , while the latter is consistent with the behavior of the exact solution for *n* = 2 .

With the minima in deviation evaluated and embedded, visualizing their behavior is straightforward. In three dimensions this looks like

but is perhaps more usefully visualized in two dimensions as a function of exponent for a given initial condition,

or as a function of initial condition for a given exponent,

along with the behavior in exponent for the maximum allowed initial condition ${c}_{0}=(n+1{)}^{1/n}$ :

For most of their domains, the first two graphs are both reminiscent of an inverse secant, and the third most definitely so. All of these graphs go to zero for either *n* = 1 or *c*_{0} = 1 , and since
${sec}^{-1}\left(1\right)=0$
the desired behavior is achieved by having an argument in (*n* - 1) and (*c*_{0} - 1) offset by unity. The graphs also appear to have asymptotes to unity for large values, and this is achieved by dividing by the asymptotic value of the inverse secant. The behavior of the elliptic modulus that minimizes the deviation from the numerical solution is then of the form

$k(n,{c}_{0})\approx \frac{2}{\pi}{sec}^{-1}[{f}_{1}(n-1){f}_{2}({c}_{0}-1)+1]$

where ${f}_{1}\left(0\right)={f}_{2}\left(0\right)=0$ . Since the solution to this point is approximate and not exact, what is needed now are choices for these two functions that do a decent job of matching the available data. The functions should be as simple as possible, without worrying about finding a precise match to the data.

Begin with the third graph, the behavior in exponent for the maximum allowed initial condition, since it has fewer variables. One estimate arises from temporarily removing the dependence on *c*_{0} by setting ${f}_{2}=1$ and ${f}_{1}=a(n-1)$ . Varying the constant *a* against the numerical data indicates that there is a value between one and two where the two lines coincide quite nicely. A convenient way to set this value is from the exact solution for *n* = 2 , for which the limit value at the maximum initial condition gives

$\begin{array}{c}k=\frac{1}{\sqrt{2}}=\frac{2}{\pi}{sec}^{-1}(a+1)\\ a=sec\left(\frac{\pi}{2\sqrt{2}}\right)-1\approx 1.252\end{array}$

The remaining behavior for the argument of the inverse secant can be approximated by taking

${f}_{2}=[\frac{{c}_{0}-1}{(n+1{)}^{1/n}-1}{]}^{b}$

The need for the exponent *b* can be seen by visualizing the dependence of elliptic modulus *k* as a function of exponent *n* along with the numerical data,

or as a function of initial condition for a given exponent along with the numerical data:

There again appears to be a value between one and two for which the estimate matches the numerical data fairly well, albeit not nearly as closely as for the behavior in exponent at the maximum allowed initial condition. Since there is no compelling way to select this value by comparison to known exact solutions, a value of *b* = 1.75 will be chosen. To the same level of approximation, let *a* = 1.25 .
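
With these choices the estimate for the minimizing modulus is a short function; the identity ${sec}^{-1}x={cos}^{-1}(1/x)$ avoids the need for a dedicated inverse secant:

```python
# Fitted estimate for the elliptic modulus minimizing the deviation,
# using the illustrative parameter choices a = 1.25 and b = 1.75.
import math

def k_estimate(n, c0, a=1.25, b=1.75):
    f1 = a*(n - 1)
    f2 = ((c0 - 1)/((n + 1)**(1/n) - 1))**b
    return (2/math.pi)*math.acos(1.0/(f1*f2 + 1))   # sec^-1(x) = arccos(1/x)

# at the maximum initial condition for n = 2 this is close to the
# exact limit 1/sqrt(2), by construction of the constant a
print(k_estimate(2.0, 3**0.5))
```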

With these approximate values for the parameters, the three-dimensional behavior in comparison with the numerical data is

which is a pretty good approximation to the surface, considering the complexity of the actual problem.

In summary, the differential equation

$\frac{{d}^{2}g}{d{t}^{2}}+{g}^{n}=1$

has a decent approximate solution of the form

$g\left(t\right)\approx {c}_{1}+({c}_{0}-{c}_{1}){cn}^{2}(\omega t,k)$

with the three dependent parameters given by

$\begin{array}{l}{c}_{1}=(n+1{)}^{1/n}\sum _{i=0}^{\infty}\frac{\Gamma [ni+i+1]}{\Gamma [ni+2]}\frac{1}{i!}[\frac{l}{(n+1{)}^{1/n}}{]}^{ni+1}\\ k\approx \frac{2}{\pi}{sec}^{-1}[1.25(n-1)[\frac{{c}_{0}-1}{(n+1{)}^{1/n}-1}{]}^{1.75}+1]\\ \omega =\sqrt{2}K\left(k\right)[\underset{{c}_{1}}{\overset{{c}_{0}}{\int}}\frac{dc}{\sqrt{c-{\displaystyle \frac{{c}^{n+1}}{n+1}}-l}}{]}^{-1}\phantom{\rule{1em}{0ex}},\phantom{\rule{1em}{0ex}}l={c}_{0}-\frac{{c}_{0}^{n+1}}{n+1}\end{array}$

In order for the solution to remain real, the initial condition lies between unity and

$({c}_{0}{)}_{\mathrm{max}}=(n+1{)}^{1/n}$
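
The pieces above can be assembled into a single routine; a sketch assuming SciPy, with a root find in place of the series for the second extremum:

```python
# Assemble the approximate solution g(t) ~ c1 + (c0 - c1) cn^2(omega t, k)
# for 1 < c0 < (n+1)^(1/n).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import ellipj, ellipk

def approximate_solution(n, c0):
    """Approximate g(t) for g'' + g^n = 1 with g(0) = c0, g'(0) = 0."""
    l = c0 - c0**(n + 1)/(n + 1)                        # constant of integration
    V = lambda c: c - c**(n + 1)/(n + 1) - l
    c1 = brentq(V, 1e-12, 1.0)                          # second extremum
    f2 = ((c0 - 1)/((n + 1)**(1/n) - 1))**1.75
    k = (2/np.pi)*np.arccos(1/(1.25*(n - 1)*f2 + 1))    # fitted elliptic modulus
    quarter = quad(lambda c: 1/np.sqrt(V(c)), c1, c0, limit=200)[0]
    omega = np.sqrt(2)*ellipk(k**2)/quarter             # exact frequency relation
    return lambda t: c1 + (c0 - c1)*ellipj(omega*np.asarray(t), k**2)[1]**2

g = approximate_solution(2.5, 1.3)
print(g(0.0))   # g(0) = c0 by construction
```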

Keep in mind that the behavior of the elliptic modulus was constructed to minimize the deviation of the approximation from the numerical solution. When applying this approximate solution to the orbits determined by power potentials with arbitrary real exponents, it will turn out that another choice is more appropriate.

*Uploaded 2016.12.27 — Updated 2017.06.24*
analyticphysics.com