All first-order linear ordinary differential equations are soluble, and simply enough that the method appears in all textbooks on differential equations. Textbooks commonly state that there is no corresponding general solution for second-order equations when in fact there is. It appeared for the first time in Tricomi’s Differential Equations and was reprised in his Integral Equations, but to the best of my knowledge appears in no other texts. Presumably that is because the method requires a basic understanding of integral equations.

Consider a general second-order linear ordinary differential equation with nonconstant coefficients:

$A(x)\,y'' + B(x)\,y' + C(x)\,y = 0$

The coefficient functions can be split additively

$[\,a_0(x) + a(x)\,]\,y'' + [\,b_0(x) + b(x)\,]\,y' + [\,c_0(x) + c(x)\,]\,y = 0$

in an infinite number of ways. The essence of Fubini’s method is to perform this split so that one set of coefficients satisfies an equation that can be solved explicitly,

$a_0(x)\,u'' + b_0(x)\,u' + c_0(x)\,u = 0 \quad\rightarrow\quad u(x) = f_1(x),\ f_2(x)$

and then look for a solution to the original equation of the form

$y(x) = c_1(x)\,f_1(x) + c_2(x)\,f_2(x)$

along with the same constraint as is used in variation of parameters:

$c_1'(x)\,f_1(x) + c_2'(x)\,f_2(x) = 0$

This form is now to be substituted in the original equation, where in the process of forming derivatives the imposed constraint removes half of the terms of the first derivative, leaving $y' = c_1 f_1' + c_2 f_2'$. Since $f_1$ and $f_2$ satisfy the intermediate equation, the terms multiplying $c_1$ and $c_2$ with coefficients $a_0$, $b_0$ and $c_0$ drop out, and after division by $a_0 + a$ the resulting equation is

$c_1' f_1' + c_2' f_2' + c_1\,\frac{a f_1'' + b f_1' + c f_1}{a_0 + a} + c_2\,\frac{a f_2'' + b f_2' + c f_2}{a_0 + a} = 0$

where it is assumed that denominators remain nonzero. This equation and the constraint are easily solved for the derivatives of the varied parameters,

$c_1' = (c_1 g_1 + c_2 g_2)\,f_2 \qquad c_2' = -(c_1 g_1 + c_2 g_2)\,f_1$

which for conciseness uses the abbreviations

$g_i(x) = \frac{a(x)\,f_i''(x) + b(x)\,f_i'(x) + c(x)\,f_i(x)}{[\,a_0(x) + a(x)\,]\,W(x)} \qquad W(x) = f_1(x)\,f_2'(x) - f_2(x)\,f_1'(x)$

The Wronskian $W$ is nonzero because $f_1$ and $f_2$ are linearly independent solutions of the intermediate differential equation. The varied parameters thus satisfy the system of Volterra integral equations

$c_1(x) = \gamma_1 + \int_0^x dt\,[\,c_1(t)\,g_1(t) + c_2(t)\,g_2(t)\,]\,f_2(t)$

$c_2(x) = \gamma_2 - \int_0^x dt\,[\,c_1(t)\,g_1(t) + c_2(t)\,g_2(t)\,]\,f_1(t)$

where the constants of integration $\gamma_i = c_i(0)$ are related to the initial conditions of the original equation by

$y(0) = \gamma_1 f_1(0) + \gamma_2 f_2(0) \qquad y'(0) = \gamma_1 f_1'(0) + \gamma_2 f_2'(0)$

In order to decouple the integral equations, first set

$h(x) = c_1 g_1 + c_2 g_2$

for the simpler equations

$c_1(x) = \gamma_1 + \int_0^x dt\,h(t)\,f_2(t) \qquad c_2(x) = \gamma_2 - \int_0^x dt\,h(t)\,f_1(t)$

Multiplying the first by $g_1(x)$ and the second by $g_2(x)$, then adding, gives

$h(x) = \gamma_1 g_1(x) + \gamma_2 g_2(x) + \int_0^x dt\,[\,g_1(x)\,f_2(t) - g_2(x)\,f_1(t)\,]\,h(t)$

When this equation is solved using successive approximations, the result will be linear in the constants of integration. That means one can assume

$h(x) = \gamma_1 h_1(x) + \gamma_2 h_2(x)$

and arrive at the pair of decoupled Volterra integral equations:

$h_i(x) = g_i(x) + \int_0^x dt\,K(x,t)\,h_i(t) \qquad K(x,t) = g_1(x)\,f_2(t) - g_2(x)\,f_1(t) \qquad i = 1,2$

To solve these equations by successive approximations, start each solution with the first term on the right-hand side and repeatedly substitute the result back into the equations:

$h_{i,0}(x) = g_i(x)$

$h_{i,1}(x) = g_i(x) + \int_0^x dt\,K(x,t)\,g_i(t)$

$h_{i,2}(x) = g_i(x) + \int_0^x dt\,K(x,t)\,g_i(t) + \int_0^x dt\,K(x,t) \int_0^t du\,K(t,u)\,g_i(u)$
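These successive substitutions can also be carried out numerically. Below is a minimal Python sketch, assuming a uniform grid with trapezoidal quadrature (the function name, grid size and iteration count are illustrative choices, not part of the method); it is exercised on the data of the first example below, where $g(x) = -1$, $K(x,t) = x - t$ and the exact solution is $h(x) = -\cosh(x)$:

```python
import math

def solve_volterra(g, K, x_max, n=200, iters=25):
    """Solve h(x) = g(x) + integral_0^x K(x,t) h(t) dt by successive
    approximation: start with h = g and repeatedly substitute the
    current h into the right-hand side (trapezoidal quadrature)."""
    dx = x_max / n
    xs = [i * dx for i in range(n + 1)]
    h = [g(xv) for xv in xs]                       # zeroth approximation
    for _ in range(iters):
        new_h = []
        for i, xv in enumerate(xs):
            vals = [K(xv, xs[j]) * h[j] for j in range(i + 1)]
            integral = dx * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i else 0.0
            new_h.append(g(xv) + integral)
        h = new_h
    return xs, h

# Data of the hyperbolic example below: g_1(x) = -1, K(x,t) = x - t;
# the exact solution of this integral equation is h_1(x) = -cosh(x).
xs, h1 = solve_volterra(lambda xv: -1.0, lambda xv, tv: xv - tv, 1.0)
```

Each pass through the loop performs one substitution of the current approximation back into the right-hand side.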

Reversing the order of integration step by step, one can define the iterated kernels

$K_1(x,t) = K(x,t) \qquad K_n(x,t) = \int_t^x du\,K(x,u)\,K_{n-1}(u,t)$

where the change in the lower limit of integration comes from the reversal. The solutions to the integral equations are then

$h_i(x) = g_i(x) + \sum_{n=1}^\infty \int_0^x dt\,K_n(x,t)\,g_i(t) \qquad i = 1,2$

When the iterated kernels can be recognized as converging to a known function, this solution contains only a single integral. With these solutions substituted in the equations for the varied parameters, the general solution of the original differential equation, in terms of the intermediate constants of integration, is $y(x) = \gamma_1 y_1(x) + \gamma_2 y_2(x)$ with

$y_i(x) = f_i(x) + \int_0^x dt\,[\,f_1(x)\,f_2(t) - f_2(x)\,f_1(t)\,]\,h_i(t) \qquad i = 1,2$

This result can easily be rewritten in terms of the initial conditions of the original differential equation, but this form is patently simpler.

Since an infinite sum appears in the expressions determining the $h_i$, this method might not seem an improvement over a power-series solution of the original equation. It does, however, provide a straightforward algorithm for the solutions, whereas a recurrence relation derived from a power series may not be evidently soluble. It is also generally easier to establish whether this form converges to a solution for given coefficient functions in the original equation.

As a first example of Fubini’s method, consider the differential equation for hyperbolic circular functions

$y'' - y = 0$

which will be solved with respect to the auxiliary equation

$u'' = 0 \quad\rightarrow\quad f_1 = 1,\ f_2 = x$

The splitting coefficients of the original equation are

$a_0 = 1 \quad b_0 = 0 \quad c_0 = 0 \qquad a = 0 \quad b = 0 \quad c = -1$

and the intermediate functions are

$W(x) = 1 \qquad g_1(x) = -1 \qquad g_2(x) = -x$

The first few iterated kernels are

$K_1(x,t) = K(x,t) = x - t$

$K_2(x,t) = \int_t^x du\,(x-u)(u-t) = \frac{1}{3!}\,(x-t)^3$

$K_3(x,t) = \frac{1}{3!} \int_t^x du\,(x-u)(u-t)^3 = \frac{1}{5!}\,(x-t)^5$

$K_4(x,t) = \frac{1}{5!} \int_t^x du\,(x-u)(u-t)^5 = \frac{1}{7!}\,(x-t)^7$

and the pattern is clear. In this case the iterated kernels can be summed to the single function

$\sum_{n=1}^\infty K_n(x,t) = \sinh(x-t)$
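Because every iterated kernel of $K(x,t) = x - t$ is a pure power of $s = x - t$, the recursion can be checked with exact rational arithmetic. A short sketch (the helper name is invented for illustration), using $\int_0^s (s-v)\,v^e\,dv = s^{e+2}/[(e+1)(e+2)]$:

```python
from fractions import Fraction
from math import factorial

def kernel_powers(count):
    """Iterated kernels of K(x,t) = x - t, each a pure power c*(x-t)^e.
    With v = u - t and s = x - t, the recursion
    K_n = integral_t^x (x-u) K_{n-1}(u,t) du becomes
    e -> e + 2,  c -> c / ((e+1)(e+2))."""
    coeff, expo = Fraction(1), 1          # K_1 = (x - t)^1
    powers = [(coeff, expo)]
    for _ in range(count - 1):
        coeff /= (expo + 1) * (expo + 2)  # divide by (e+1)(e+2)
        expo += 2                         # exponent grows by 2
        powers.append((coeff, expo))
    return powers
```

The resulting coefficients $1, 1/3!, 1/5!, \ldots$ are exactly the odd-order Taylor coefficients of $\sinh(x-t)$.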

The solutions to the decoupled integral equations are

$h_1(x) = -1 - \int_0^x dt\,\sinh(x-t) = -\cosh(x)$

$h_2(x) = -x - \int_0^x dt\,t\,\sinh(x-t) = -\sinh(x)$

and the solutions to the original differential equation are

$y_1(x) = 1 + \int_0^x dt\,(x-t)\cosh(t) = \cosh(x)$

$y_2(x) = x + \int_0^x dt\,(x-t)\sinh(t) = \sinh(x)$

which are precisely the hyperbolic circular functions as expected. Nifty!
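The final quadrature for $y_1$ can be sanity-checked numerically; a small sketch assuming trapezoidal quadrature (the grid size is an arbitrary choice):

```python
import math

def y1(x, n=1000):
    """Evaluate y1(x) = 1 + integral_0^x (x - t) cosh(t) dt by the
    trapezoidal rule; the result should reproduce cosh(x)."""
    dt = x / n
    ts = [j * dt for j in range(n + 1)]
    vals = [(x - t) * math.cosh(t) for t in ts]
    return 1.0 + dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```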

Repeat the process for the Airy differential equation

$y'' - x y = 0$

using the same auxiliary equation and solutions. The splitting coefficients of the original equation are

$a_0 = 1 \quad b_0 = 0 \quad c_0 = 0 \qquad a = 0 \quad b = 0 \quad c = -x$

and the intermediate functions are

$W(x) = 1 \qquad g_1(x) = -x \qquad g_2(x) = -x^2$

The first few iterated kernels are

$K_1(x,t) = K(x,t) = x(x-t)$

$K_2(x,t) = x \int_t^x du\,(x-u)\,u\,(u-t) = \frac{x}{12}\,(x-t)^3\,(x+t)$

$K_3(x,t) = \frac{x}{12} \int_t^x du\,(x-u)\,u\,(u-t)^3\,(u+t) = \frac{x}{2520}\,(x-t)^5\,(5x^2 + 11xt + 5t^2)$

$K_4(x,t) = \frac{x}{2520} \int_t^x du\,(x-u)\,u\,(u-t)^5\,(5u^2 + 11ut + 5t^2) = \frac{x}{90720}\,(x-t)^7\,(x+t)(x+2t)(2x+t)$

This time the pattern is far from clear, but one can still find a solution to this order of accuracy. The solutions to the decoupled integral equations are

$h_1(x) = -x - \frac{x^4}{6} - \frac{x^7}{180} - \frac{x^{10}}{12960} - \frac{x^{13}}{1710720} - \cdots$

$h_2(x) = -x^2 - \frac{x^5}{12} - \frac{x^8}{504} - \frac{x^{11}}{45360} - \frac{x^{14}}{7076160} - \cdots$

and the two solutions to the original differential equation are

$y_1(x) = 1 + \frac{x^3}{6} + \frac{x^6}{180} + \frac{x^9}{12960} + \frac{x^{12}}{1710720} + \cdots$

$y_2(x) = x + \frac{x^4}{12} + \frac{x^7}{504} + \frac{x^{10}}{45360} + \frac{x^{13}}{7076160} + \cdots$

which curiously have the same numeric denominators as the solutions to the decoupled integral equations. Using power series expansions of combinations of Airy functions, one can make the identifications

$y_1(x) = \frac{1}{2}\left[\frac{\operatorname{Ai}(x)}{\operatorname{Ai}(0)} + \frac{\operatorname{Bi}(x)}{\operatorname{Bi}(0)}\right] \qquad y_2(x) = \frac{\Gamma(\frac{1}{3})}{2 \cdot 3^{1/3}\,\Gamma(\frac{2}{3})}\left[\frac{\operatorname{Bi}(x)}{\operatorname{Bi}(0)} - \frac{\operatorname{Ai}(x)}{\operatorname{Ai}(0)}\right]$

The solutions to the decoupled integral equations are clearly related to the final solutions by

$h_1(x) = -x\,y_1(x) \qquad h_2(x) = -x\,y_2(x)$

The equal numeric denominators can be understood term by term with the integral

$\int_0^x dt\,t^n\,(x-t) = \frac{x^{n+2}}{(n+1)(n+2)}$
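Combined with the relation $h_1 = -x\,y_1$, this integral lets one regenerate the denominators of the series for $y_1$ with exact rational arithmetic; a minimal sketch (the function name is invented for illustration):

```python
from fractions import Fraction

def airy_denominators(terms):
    """Denominators of y1 = 1 + x^3/6 + x^6/180 + ... for y'' = x y.
    Since y1 = 1 + integral_0^x (x-t) t y1(t) dt, the term x^n / D
    produces x^(n+3) / (D (n+2)(n+3)) via the displayed integral."""
    n, coeff = 0, Fraction(1)
    denoms = []
    for _ in range(terms):
        coeff /= (n + 2) * (n + 3)   # integrate t^(n+1) (x - t)
        n += 3                       # each new term is three powers higher
        denoms.append(coeff.denominator)
    return denoms
```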

The relation of the iterated kernels to the terms in the two Airy series remains to be explored.

A Mathematica notebook containing evaluation of all integrals can be found here.

Since any second-order linear ordinary differential equation can be transformed to the normal form

$y'' + f(x)\,y = 0$

one can now write a general solution to this equation, using the same auxiliary equation and similar intermediate functions as in the two examples. The iterated kernels are in general

$K_1(x,t) = -f(x)\,(x-t) \qquad K_n(x,t) = -f(x) \int_t^x du\,(x-u)\,K_{n-1}(u,t)$

Once these are determined, evaluate the functions

$h_1(x) = f(x) + \sum_{n=1}^\infty \int_0^x dt\,K_n(x,t)\,f(t)$

$h_2(x) = x\,f(x) + \sum_{n=1}^\infty \int_0^x dt\,K_n(x,t)\,t\,f(t)$

Then the solutions to the differential equation are simply

$y_1(x) = 1 - \int_0^x dt\,(x-t)\,h_1(t) \qquad y_2(x) = x - \int_0^x dt\,(x-t)\,h_2(t)$

The main difficulty is of course the evaluation of the iterated kernels. Even so, this is a completely straightforward algorithm for the general solution. As such it deserves to be much better known than it currently is.
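For concreteness, the whole algorithm for the normal form can be strung together numerically. This is only a sketch, assuming a uniform grid, trapezoidal quadrature and a fixed number of substitution passes (all illustrative choices, and `fubini_solve` is an invented name); it is exercised on $y'' + y = 0$, whose fundamental solutions are $\cos(x)$ and $\sin(x)$:

```python
import math

def fubini_solve(f, x_max, n=250, iters=20):
    """Fundamental solutions of y'' + f(x) y = 0 on [0, x_max]:
    solve h_i(x) = g_i(x) + integral_0^x K(x,t) h_i(t) dt with
    K(x,t) = -f(x)(x-t), g_1 = f, g_2 = x f, by successive
    approximation, then recover y_1 = 1 - int (x-t) h_1 and
    y_2 = x - int (x-t) h_2 (trapezoidal quadrature throughout)."""
    dx = x_max / n
    xs = [i * dx for i in range(n + 1)]

    def trapz(vals):                  # trapezoidal rule over a prefix grid
        if len(vals) < 2:
            return 0.0
        return dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    def picard(g):                    # successive approximation for h
        h = [g(xv) for xv in xs]
        for _ in range(iters):
            h = [g(xs[i]) + trapz([-f(xs[i]) * (xs[i] - xs[j]) * h[j]
                                   for j in range(i + 1)])
                 for i in range(n + 1)]
        return h

    h1 = picard(f)                    # g_1 = f(x)
    h2 = picard(lambda xv: xv * f(xv))  # g_2 = x f(x)
    y1 = [1.0 - trapz([(xs[i] - xs[j]) * h1[j] for j in range(i + 1)])
          for i in range(n + 1)]
    y2 = [xs[i] - trapz([(xs[i] - xs[j]) * h2[j] for j in range(i + 1)])
          for i in range(n + 1)]
    return xs, y1, y2

# y'' + y = 0 (f = 1) should give y1 = cos and y2 = sin
xs, y1, y2 = fubini_solve(lambda xv: 1.0, 1.0)
```

The grid resolution and iteration count trade accuracy against run time; for smooth $f$ the substitution passes converge rapidly because the Volterra kernel is bounded on a finite interval.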