It is perhaps rather surprising, at least to me, that Jacobi elliptic functions can provide quite good approximate solutions to intractable systems. A previous presentation made use of the square of a Jacobi cosine function to capture the behavior of a simple yet quite nonlinear differential equation. This found application in the approximate analytic integration of orbits for power potentials with arbitrary real exponents.

This presentation continues that idea by constructing a good approximate analytic solution for the nonlinear Lotka-Volterra system

$\frac{dX}{dT} = X - XY \qquad \frac{dY}{dT} = -Y + XY$

which is the restriction of a more general system to $\alpha =1$ . Expressions used in what follows can be found in the development of the more general system. The restricted problem has symmetries that allow a cleaner presentation of the analytic solution.

The energy integral for the restricted system is

$E = X + Y - \ln XY$

This integral allows the two dynamic variables to be expressed symmetrically in terms of each other using the Lambert W-function:

$X = -W\!\left[-\frac{e^{\,Y-E}}{Y}\right] \qquad Y = -W\!\left[-\frac{e^{\,X-E}}{X}\right]$

The maximum and minimum values of each of these variables occur when the other equals its value at the critical point $\left[1,1\right]$ of the system, which is a center for motion in the phase plane. Given the symmetric relationship of the variables, let $Z$ represent either one as appropriate. The minima occur on the principal real branch of the Lambert W-function and the maxima on the other real branch:

$Z_{min} = -W_0\!\left[-e^{1-E}\right] \qquad Z_{max} = -W_{-1}\!\left[-e^{1-E}\right]$
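As a concrete numerical check, these extrema are easy to evaluate with SciPy's `lambertw`. This is a sketch, not the code behind the interactive graphics, and the energy value is an arbitrary illustration:

```python
import numpy as np
from scipy.special import lambertw

# Arbitrary illustrative energy; any E > 2 (the value at the center [1,1])
# gives a closed orbit with distinct extrema.
E = 3.0

# Minimum from the principal real branch, maximum from the other real branch.
Z_min = -lambertw(-np.exp(1 - E), 0).real
Z_max = -lambertw(-np.exp(1 - E), -1).real

# Both extrema occur where the other variable is unity, so each satisfies
# E = Z + 1 - ln Z.
print(Z_min, Z_max)
```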

The period of the system can be expressed in terms of either variable. Using their common denotation, the integral expression for the period is

$T_{system} = \int_{Z_{min}}^{Z_{max}} \frac{dZ}{Z}\left[\frac{1}{1+W_0\!\left[-\frac{e^{\,Z-E}}{Z}\right]} - \frac{1}{1+W_{-1}\!\left[-\frac{e^{\,Z-E}}{Z}\right]}\right]$

Since the limits of integration depend only on energy, the period is also a function of energy alone.
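The period integral can likewise be evaluated numerically. The following sketch, with an arbitrary energy, uses `scipy.integrate.quad`, whose adaptive scheme copes with the integrable square-root singularities at the endpoints without evaluating the endpoints themselves:

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad

E = 3.0  # arbitrary illustrative energy, E > 2

Z_min = -lambertw(-np.exp(1 - E), 0).real
Z_max = -lambertw(-np.exp(1 - E), -1).real

def integrand(Z):
    # The argument lies in (-1/e, 0) inside the orbit, so both real
    # branches of the Lambert function are defined.
    a = -np.exp(Z - E) / Z
    return (1 / (1 + lambertw(a, 0).real)
            - 1 / (1 + lambertw(a, -1).real)) / Z

T_system, _ = quad(integrand, Z_min, Z_max)
print(T_system)  # larger than the small-oscillation limit 2*pi
```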

Knowing from this that energy is the controlling parameter of the restricted system, one can choose initial conditions for the variables to simplify the temporal dependence of the solution. One judicious way to do this is by examining combinations of the two dynamic variables. From the original pair of nonlinear differential equations, one can construct equations for the combinations of variables

$\frac{d}{dT}(X+Y) = X-Y \qquad \frac{d}{dT}(X-Y) = (X+Y) - 2XY \qquad \frac{d}{dT}(XY) = (X-Y)\,XY$

Two of the combinations, the sum and product, are easily connected through the energy integral for this restricted system, which is not possible in the more general case. It is also clear that these two combinations have extrema at the same phase-space point $X=Y$ for a given energy. That indicates one way to simplify the problem: consider initial conditions for which the two dynamic variables begin equally and calculate the energy from that single value.

As an aside, when both variables begin at unity the exact solution is identically unity for both dynamic variables, while the period of the system approaches $2\pi$ in this limit.

With this choice, the numerical integration of the pair of nonlinear equations over two periods along with combinations of the dynamic variables (prey is green, predators red) looks like this:

This interactive graphic is biased towards the product combination for a good reason: if the product can be approximated nicely, then the differential equations become linear first-order equations and thus are always soluble. To wit, if one can determine some function such that $XY=f\left(T\right)$ then the two dynamic variables are given by

$X(T) = e^{T}\left[Z_0 - \int dT\, e^{-T} f(T)\right] \qquad Y(T) = e^{-T}\left[Z_0 + \int dT\, e^{T} f(T)\right]$
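These quadrature formulas are easy to verify numerically for any smooth stand-in $f$. The following sketch uses an arbitrary placeholder $f(T) = 1 + 0.3\cos T$ rather than the actual fit:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

T = np.linspace(0, 5, 20001)
f = 1 + 0.3 * np.cos(T)  # arbitrary placeholder for the product XY
Z0 = 1.2                 # arbitrary common initial value

X = np.exp(T) * (Z0 - cumulative_trapezoid(np.exp(-T) * f, T, initial=0))
Y = np.exp(-T) * (Z0 + cumulative_trapezoid(np.exp(T) * f, T, initial=0))

# Each variable should satisfy its linear first-order equation
# with the product replaced by f.
dX = np.gradient(X, T, edge_order=2)
dY = np.gradient(Y, T, edge_order=2)
print(np.max(np.abs(dX - (X - f))), np.max(np.abs(dY - (-Y + f))))
```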

And indeed, a bit of playing around with Jacobi elliptic functions indicates that the product is very close to being proportional to a delta amplitude function. The motivation for this is visual inspection from having interactive graphics readily available for experimentation. If there is some deeper motivation possible, then I would love to know it myself!

It is worth noting that the sum of the two dynamic variables looks very similar to the square of a Jacobi elliptic function used in the presentation mentioned at the outset. Further playing with functions indicates that it is not difficult to find parameters for which the square of an elliptic cosine is very similar to the logarithm of a delta amplitude function. These two forms are linked via the energy integral, giving some context to the choice of fitting function.

The Jacobi delta amplitude function to be used will have the form

$\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\,\frac{K(m)}{K(0)}\,T,\; m\right]$

The first factor in the first argument converts the period of the system to a frequency corresponding to a circular function, remembering that the delta amplitude function has half the real period of other elliptic functions. The second factor corrects this frequency as described here to account for the presence of the elliptic parameter.
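This frequency factor can be confirmed with SciPy's Jacobi elliptic routines; the period and parameter below are arbitrary illustrative values, and `ellipj` returns the tuple sn, cn, dn, ph:

```python
import numpy as np
from scipy.special import ellipj, ellipk

T_system, m = 8.0, 0.5  # arbitrary illustrative values

# K(0) = pi/2, so the combined factor is 2 K(m) / T_system.
omega = np.pi / T_system * ellipk(m) / ellipk(0)

def dn(T):
    return ellipj(omega * T, m)[2]

# dn has real period 2K(m), half that of the other elliptic functions,
# so one system period advances its argument by exactly 2K(m).
print(dn(0.0), dn(T_system))             # both unity
print(dn(T_system / 2), np.sqrt(1 - m))  # dn(K) = sqrt(1 - m)
```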

Apart from the initial condition $XY\left(0\right)={Z}_{0}^{2}$, there are three other points that can be determined exactly. The two dynamic variables are equal at a second extremum of the product. The energy relation at this second extremum can be inverted to give

$E = 2\left(Z_{ext} - \ln Z_{ext}\right) \;\rightarrow\; Z_{ext} = -W\!\left[-e^{-E/2}\right]$

which due to the symmetry in the variables corresponds to a time equal to half of the period of the system,

$XY\!\left(\frac{T_{system}}{2}\right) = W\!\left[-e^{-E/2}\right]^2$

where the principal branch of the Lambert function is used here for ${Z}_{0}\ge 1$ and the other real branch for ${Z}_{0}<1$ , since the second extremum will be on the branch different from the common initial condition.

At the extrema of the individual dynamic variables, the other variable is always unity for the restricted system. That means

$XY\!\left(T_{Z_{min}}\right) = -W_0\!\left[-e^{1-E}\right] \qquad XY\!\left(T_{Z_{max}}\right) = -W_{-1}\!\left[-e^{1-E}\right]$

Since the delta amplitude function is even with respect to the chosen origin, the temporal coordinates in these expressions can be taken as absolute values of the simplest possible integrals,

$T_{Z_{min}} = \left|\int_{Z_0}^{Z_{min}} \frac{dZ}{Z\left[1+W\!\left(-\frac{e^{\,Z-E}}{Z}\right)\right]}\right| \qquad T_{Z_{max}} = \left|\int_{Z_0}^{Z_{max}} \frac{dZ}{Z\left[1+W\!\left(-\frac{e^{\,Z-E}}{Z}\right)\right]}\right|$

where now the principal branch is used for ${Z}_{0}\le 1$ and the other real branch for ${Z}_{0}>1$ , since the closest extrema of the individual dynamic variables remain on the same branch.
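As a sketch of how these times come out in practice, here they are evaluated for an arbitrary common initial condition $Z_0 > 1$, for which the other real branch $W_{-1}$ applies:

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad

Z0 = 3.0                   # arbitrary common initial condition, Z0 > 1
E = 2 * (Z0 - np.log(Z0))  # energy from equal initial values

Z_min = -lambertw(-np.exp(1 - E), 0).real
Z_max = -lambertw(-np.exp(1 - E), -1).real

def integrand(Z):
    # Other real branch, since the closest extrema stay on it for Z0 > 1.
    return 1 / (Z * (1 + lambertw(-np.exp(Z - E) / Z, -1).real))

T_Zmin = abs(quad(integrand, Z0, Z_min)[0])
T_Zmax = abs(quad(integrand, Z0, Z_max)[0])
print(T_Zmin, T_Zmax)
```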

With four exact points available to match, one could concoct a variety of complicated fitting functions. In the interest of simplicity and clarity of the method, the fitting function will be taken to be

$A + B\,\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\,\frac{K(m)}{K(0)}\,T,\; m\right]$

Since the period of the system is already determined, there are only three independent parameters: the two linear coefficients and the elliptic parameter. This appears to be one of the simplest choices for a decent fit, while leaving open the possibility of a more accurate form with added complexity.

The two linear coefficients can be chosen to make this function exactly match the full solution at the extrema of the product,

$A + B = Z_0^2 \qquad A + B\,\mathrm{dn}\!\left[\frac{\pi}{2}\,\frac{K(m)}{K(0)},\,m\right] = W\!\left[-e^{-E/2}\right]^2$

with the branch of the Lambert function chosen as described above. The two similar statements for matching at the extrema of the individual variables,

$A + B\,\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\frac{K(m)}{K(0)}\,T_{min},\,m\right] = -W_0\!\left[-e^{1-E}\right] \qquad A + B\,\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\frac{K(m)}{K(0)}\,T_{max},\,m\right] = -W_{-1}\!\left[-e^{1-E}\right]$

can be combined with the results above to give the conditions

$\frac{Z_0^2 + W_0\!\left[-e^{1-E}\right]}{1-\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\frac{K(m)}{K(0)}T_{min},\,m\right]} = B = \frac{Z_0^2 - W\!\left[-e^{-E/2}\right]^2}{1-\mathrm{dn}\!\left[\frac{\pi}{2}\frac{K(m)}{K(0)},\,m\right]}$

$\frac{Z_0^2 + W_{-1}\!\left[-e^{1-E}\right]}{1-\mathrm{dn}\!\left[\frac{\pi}{T_{system}}\frac{K(m)}{K(0)}T_{max},\,m\right]} = B = \frac{Z_0^2 - W\!\left[-e^{-E/2}\right]^2}{1-\mathrm{dn}\!\left[\frac{\pi}{2}\frac{K(m)}{K(0)},\,m\right]}$

Either condition can be used to produce a fit: the second, the match at the maxima of individual variables, will be chosen since the product is changing more rapidly there than at the minima. Numerical root finding will be needed in practice to determine a solution to either of these equations.
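Gathering the pieces, a sketch of the whole determination in Python might look like the following. A bounded minimization of the mismatch between the two expressions for $B$ stands in for the root finding mentioned above, and all names as well as the choice $Z_0 = 3$ are illustrative:

```python
import numpy as np
from scipy.special import lambertw, ellipj, ellipk
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def W(x, branch):
    return lambertw(x, branch).real

Z0 = 3.0                   # arbitrary common initial condition, Z0 > 1
E = 2 * (Z0 - np.log(Z0))  # energy from equal initial values
Z_min = -W(-np.exp(1 - E), 0)
Z_max = -W(-np.exp(1 - E), -1)

# Period of the system and time of the maxima of individual variables.
def period_integrand(Z):
    a = -np.exp(Z - E) / Z
    return (1 / (1 + W(a, 0)) - 1 / (1 + W(a, -1))) / Z

T_system = quad(period_integrand, Z_min, Z_max)[0]
T_max = abs(quad(lambda Z: 1 / (Z * (1 + W(-np.exp(Z - E) / Z, -1))),
                 Z0, Z_max)[0])

# The two expressions for B must agree; minimize their mismatch over m.
# dn at the half period equals sqrt(1 - m).
def mismatch(m):
    omega = np.pi / T_system * ellipk(m) / ellipk(0)
    B_max = (Z0**2 + W(-np.exp(1 - E), -1)) / (1 - ellipj(omega * T_max, m)[2])
    B_half = (Z0**2 - W(-np.exp(-E / 2), 0)**2) / (1 - np.sqrt(1 - m))
    return abs(B_max - B_half)

m = minimize_scalar(mismatch, bounds=(1e-6, 1 - 1e-6), method='bounded').x
B = (Z0**2 - W(-np.exp(-E / 2), 0)**2) / (1 - np.sqrt(1 - m))
A = Z0**2 - B
print(m, A, B)
```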

With all formulae in place, one can now visualize the fitting function for the product compared to the numerical integration of the pair of differential equations. For large values of the common initial condition the elliptic parameter approaches unity, and here one faces the possibility of inaccuracy using a machine-precision numerical library. The same is true for small values of the common initial condition, where the elliptic parameter becomes a very large negative number. To avoid such inaccuracies the common initial condition will be restricted to a range where the numerical solutions are accurate.

The fitting function compared to the numerical integration of the product looks like this:

The fit is not exact, but one can see by switching to the difference between it and the accurate numerical solution that it is quite good. Really not bad at all for such a relatively simple fitting function.

Another way to test the basic fit is to construct a differential equation for the product combination

$\frac{d^2}{dT^2}\ln(XY) = \frac{d}{dT}(X-Y) = (X+Y) - 2XY \qquad \frac{d^2}{dT^2}\ln(XY) = E + \ln(XY) - 2XY$

and visualize how closely the two sides match. With the abbreviation $\omega = \frac{\pi}{T_{system}}\,\frac{K(m)}{K(0)}$, the second derivative of the logarithm of the fitting function is

$\frac{d^2}{dT^2}\ln\left[A+B\,\mathrm{dn}(\omega T, m)\right] = \frac{d}{dT}\,\frac{\omega B\,\mathrm{dn}'(\omega T,m)}{A+B\,\mathrm{dn}(\omega T,m)} = \frac{\omega^2 B\,\mathrm{dn}''(\omega T,m)}{A+B\,\mathrm{dn}(\omega T,m)} - \frac{\omega^2 B^2\left[\mathrm{dn}'(\omega T,m)\right]^2}{\left[A+B\,\mathrm{dn}(\omega T,m)\right]^2} = \frac{\omega^2 B\left[(2-m)\,\mathrm{dn}(\omega T,m) - 2\,\mathrm{dn}^3(\omega T,m)\right]}{A+B\,\mathrm{dn}(\omega T,m)} - \frac{\omega^2 B^2 m^2\,\mathrm{sn}^2(\omega T,m)\,\mathrm{cn}^2(\omega T,m)}{\left[A+B\,\mathrm{dn}(\omega T,m)\right]^2}$

This expression could be simplified further by writing it entirely in terms of the delta amplitude function, but for a numerical evaluation this is unnecessary. With right- and left-hand sides colored as starboard and port, the two look like this:

The two functions are of course not equal because the analytic solution is merely approximate. That they are generally about the same is an indication that the approximation has merit.
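The derivative identities for the delta amplitude used in the reduction above can be checked by finite differences; the parameter, step size, and grid are arbitrary:

```python
import numpy as np
from scipy.special import ellipj

m, h = 0.7, 1e-4
u = np.linspace(0.5, 3.0, 101)

sn, cn, dn = ellipj(u, m)[:3]
dn_plus = ellipj(u + h, m)[2]
dn_minus = ellipj(u - h, m)[2]

second = (dn_plus - 2 * dn + dn_minus) / h**2  # central second difference
first = (dn_plus - dn_minus) / (2 * h)         # central first difference

# dn'' = (2 - m) dn - 2 dn^3 and (dn')^2 = m^2 sn^2 cn^2
print(np.max(np.abs(second - ((2 - m) * dn - 2 * dn**3))))
print(np.max(np.abs(first**2 - m**2 * sn**2 * cn**2)))
```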

One thing that remains is to apply the fitting function for the product of dynamic variables to evaluate each individual one using the integral forms already given. For the predators this is straightforward and produces an expected result over a single period:

To prevent the integration from slowing down the browser excessively, the approximation is evaluated at a reduced number of selected points of the numerical integration and cached in incremental steps.

The application of the approximation to the prey population over a single period is more problematic:

Virtually the same code is used for this interactive graphic, with adjustments required to evaluate the second dynamic variable, yet the outcome is clearly inaccurate. Since all quantities involved are well above tolerance values used in the integration, the result is quite puzzling. It is likely due to an inherent numerical instability in how this particular variable is calculated, but it remains an open problem to be resolved.

There are a couple of ways around this predicament. Since the predator population variable is accurate, the prey population variable can be evaluated by the simple relation

$X(T) = \frac{XY(T)}{Y(T)}$

replacing the product with the fitting function already determined. One can also use the exact symmetry

$X(T) = Y(-T) = Y\!\left(T_{system} - T\right)$

for this system to evaluate the prey population variable.
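This time-reversal symmetry is straightforward to confirm by integrating the system forward and backward from equal initial values. A sketch with `solve_ivp`; the tolerances and the value $Z_0 = 3$ are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(T, v):
    X, Y = v
    return [X - X * Y, -Y + X * Y]

Z0 = 3.0
fwd = solve_ivp(lotka_volterra, [0, 2], [Z0, Z0], rtol=1e-10, atol=1e-12)
bwd = solve_ivp(lotka_volterra, [0, -2], [Z0, Z0], rtol=1e-10, atol=1e-12)

# With X(0) = Y(0), uniqueness of solutions gives X(T) = Y(-T) exactly.
print(fwd.y[0, -1], bwd.y[1, -1])  # X(2) versus Y(-2)
print(fwd.y[1, -1], bwd.y[0, -1])  # Y(2) versus X(-2)
```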

It is also worth noting that one can rearrange the differential equations above for the exact results

$X(T) = \frac{1}{2}\left[\frac{d^2}{dT^2}\ln(XY) + \frac{d}{dT}\ln(XY) + 2XY\right] \qquad Y(T) = \frac{1}{2}\left[\frac{d^2}{dT^2}\ln(XY) - \frac{d}{dT}\ln(XY) + 2XY\right]$

Plotting these combinations using derivatives above of the fitting function produces graphics that follow the general form of each individual dynamic variable qualitatively but not very accurately. Apparently both integration and differentiation of the approximate fitting function can amplify discrepancies between the numerical integration of the differential equations and the analytic approximation.