The designation “quantum” in “quantum mechanics” is meant to enshrine one of the main achievements of the theory: producing discrete quantities in systems that were continuous in classical physics. This is done by reinterpreting the classical Hamiltonian as an operator acting on a quantum mechanical state vector, producing discrete energy eigenvalues. For a one-dimensional system this is written

$$H\,|\psi\rangle = \left[\frac{p^2}{2m} + V(x)\right]|\psi\rangle = E\,|\psi\rangle$$

With the standard operator interpretation $p = -i\hbar\,\frac{d}{dx}$ of linear momentum, the action of the Hamiltonian operator becomes the second-order Schrödinger differential equation:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi$$

The equation can be solved in principle for any reasonable choice of the classical potential, for both bound and unbound systems. One of the standard examples worked out fully in any graduate-level textbook is the one-dimensional simple harmonic oscillator with classical frequency ω, for which the Schrödinger equation is

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{m\omega^2}{2}x^2\,\psi = E\,\psi$$

The normalized wave function solutions, or eigenfunctions, are given by physicists’ Hermite polynomials multiplied by a decaying Gaussian function,

$$\psi_k(x) = \frac{1}{\sqrt{2^k\,k!}}\left[\frac{m\omega}{\pi\hbar}\right]^{1/4}\exp\!\left[-\frac{m\omega}{2\hbar}x^2\right]H_k\!\left[\sqrt{\frac{m\omega}{\hbar}}\,x\right]$$

corresponding to the discrete energy eigenvalues

$$E_k = \hbar\omega\left(k + \tfrac{1}{2}\right),\qquad k = 0, 1, 2, \ldots$$
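As a quick numerical check, these eigenfunctions can be evaluated with NumPy's physicists' Hermite module and their orthonormality verified by simple quadrature. A sketch with $\hbar = m = \omega = 1$ (the function name psi is my own):

```python
import math
import numpy as np

def psi(k, x):
    """k-th harmonic oscillator eigenfunction, with hbar = m = omega = 1."""
    c = np.zeros(k + 1)
    c[k] = 1.0                                   # selects H_k in the physicists' Hermite series
    norm = 1.0 / math.sqrt(2.0**k * math.factorial(k) * math.sqrt(math.pi))
    return norm * np.exp(-x**2 / 2.0) * np.polynomial.hermite.hermval(x, c)

# verify orthonormality <k|l> = delta_kl by quadrature on a wide grid
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
overlap = lambda k, l: np.sum(psi(k, x) * psi(l, x)) * dx
print(overlap(0, 0), overlap(0, 2), overlap(3, 3))
```

The grid extends well past the classical turning points of the low-lying states, so the quadrature is accurate to many digits.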

The reason this is one of the problems fully solved in texts is that it is the simplest example of the handful of systems that can be solved exactly. An apparently minor change to the potential, for example letting the exponent be an arbitrary continuous value compensated by an alteration in a dimensioned constant,

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{m\omega_a^2}{2}x^a\,\psi = E\,\psi$$

leads to cases of the equation that are extremely difficult to solve. For a = 1, a linear potential, this equation can be transformed into the differential equation for Airy functions, and the resulting wave functions are said to represent a quantum mechanical bouncing ball subject to a constant force. There is an exact solution for a = 4, a special case of the more general quartic oscillator, but it involves a triconfluent Heun function. The solution is quite involved, with the relative simplicity of the harmonic oscillator disappearing very quickly.
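The Airy connection can be checked numerically. As a sketch (with $\hbar = m = 1$ and a unit constant force, i.e. V(x) = x for x > 0 with a hard wall at the origin), the exact bouncing-ball levels are $E_n = -a_n/2^{1/3}$ in terms of the zeros $a_n$ of Ai, which can be compared against a brute-force finite-difference diagonalization:

```python
import numpy as np
from scipy.special import ai_zeros
from scipy.linalg import eigvalsh_tridiagonal

# exact bouncing-ball levels: E_n = -a_n / 2**(1/3), a_n = zeros of Ai (all negative)
a_n = ai_zeros(5)[0]
E_exact = -a_n / 2.0**(1.0 / 3.0)

# finite-difference check: H = -(1/2) d^2/dx^2 + x on (0, L), psi(0) = psi(L) = 0
L, N = 20.0, 4000
x = np.linspace(0.0, L, N + 2)[1:-1]          # interior grid points only
dx = x[1] - x[0]
diag = 1.0 / dx**2 + x                        # kinetic diagonal plus linear potential
off = np.full(N - 1, -0.5 / dx**2)            # nearest-neighbor kinetic coupling
E_fd = eigvalsh_tridiagonal(diag, off)[:5]
print(E_exact)
print(E_fd)
```

The two sets of levels agree to the accuracy of the finite-difference grid, confirming the Airy-function picture of the linear potential.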

The question then arises as to how one can get a sense of the behavior of a system that is not so different from a simple harmonic oscillator when the differential equation has a complicated analytic solution or none at all. The answer is that there is a straightforward method to approximate the energy eigenvalues and eigenfunctions for an arbitrary system, and these numerical solutions can be presented graphically for comparison with the harmonic oscillator.

In the interest of comparison, the potential to be used will be symmetric about the origin like the harmonic oscillator, so that the relevant Schrödinger equation is

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{m\omega_a^2}{2}|x|^a\,\psi = E\,\psi$$

with a nonanalytic absolute value to ensure symmetry and remove a potential phase factor in expectation values on the negative x-axis. Since the potential is continuous at the one point of nonanalyticity, the origin, the resulting wave functions are well behaved.

The exponent of the potential will be restricted to positive values for this presentation. The subscript on the frequency indicates that its dimensioned units must change to accommodate the difference in spatial behavior from the simple harmonic oscillator, and by definition $\omega_2 = \omega$.
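As a quick dimensional aside, requiring the potential term to carry units of energy fixes the units of the generalized frequency:

$$\left[\frac{m\omega_a^2}{2}\,|x|^a\right] = \text{energy} \quad\Longrightarrow\quad \left[\omega_a\right] = \mathrm{s}^{-1}\,\mathrm{m}^{(2-a)/2}$$

which reduces to an ordinary frequency only for $a = 2$.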

The variation of the potential with continuous exponent, apart from the constant fractional factor, is given by the following interactive graphic:

The method to be used in this presentation is based on a standard property of wave functions in quantum mechanics: given a complete set of eigenstates for a potential, any wave function can be expanded in terms of that basis set. If the basis states given above for the harmonic oscillator are abbreviated as $|k\rangle = \psi_k(x)$, then any other wave function can be written in terms of them with (possibly complex) coefficients. For a wave function that depends continuously on the exponent of the potential this is written

$$|\Psi(a)\rangle = \sum_{l=0}^{\infty} c_l\,|l\rangle$$

If one already knew the complete behavior of the wave function, then the coefficients would be determined by an inner product with each basis state in turn,

$$\langle k\,|\,\Psi(a)\rangle \equiv \int_{-\infty}^{\infty} dx\,\psi_k^*(x)\,\Psi(a,x) = \sum_{l=0}^{\infty} c_l\,\langle k|l\rangle = \sum_{l=0}^{\infty} c_l\,\delta_{kl} = c_k$$

since the orthogonality of the basis states picks out each coefficient when contracted with the complete expansion. The behavior of the wave function is in general only known by solving Schrödinger’s equation explicitly, but one can approximate this solution in terms of the known harmonic oscillator basis states. What is first needed is a way to evaluate the eigenvalues

$$H(a)\,|\Psi(a)\rangle = \left[\frac{p^2}{2m} + \frac{m\omega_a^2}{2}|x|^a\right]|\Psi(a)\rangle = E(a)\,|\Psi(a)\rangle$$

for the modified Hamiltonian as a function of the continuous exponent. Allow this Hamiltonian to act upon the expansion in terms of eigenstates,

$$H(a)\,|\Psi(a)\rangle = \sum_{l=0}^{\infty} c_l\,H(a)\,|l\rangle \equiv E(a)\sum_{l=0}^{\infty} c_l\,|l\rangle = E(a)\,|\Psi(a)\rangle$$

where the inner equivalence must hold by working in from the outer equalities. Now form an inner product with each basis state:

$$\langle k|\,H(a)\,|\Psi(a)\rangle = \sum_{l=0}^{\infty} c_l\,\langle k|\,H(a)\,|l\rangle = E(a)\sum_{l=0}^{\infty} c_l\,\langle k|l\rangle = E(a)\,c_k$$

This statement holds for each state k in turn. Arranging the wave function expansion as a column vector of coefficients, the second and fourth members of this equality are equivalent to the infinite matrix equation

$$\begin{bmatrix} \langle 0|H(a)|0\rangle & \langle 0|H(a)|1\rangle & \langle 0|H(a)|2\rangle & \cdots \\ \langle 1|H(a)|0\rangle & \langle 1|H(a)|1\rangle & \langle 1|H(a)|2\rangle & \cdots \\ \langle 2|H(a)|0\rangle & \langle 2|H(a)|1\rangle & \langle 2|H(a)|2\rangle & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \end{bmatrix} = E(a) \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \end{bmatrix}$$

This is now a standard eigenvalue problem in linear algebra: finding the eigenvalues and eigenvectors of the matrix on the left-hand side immediately gives the eigenvalues of the modified Hamiltonian, along with column eigenvectors for the expansion of each energy level in terms of harmonic oscillator basis states. The accuracy of the evaluation clearly depends upon how large a matrix is used.

The matrix elements of the modified Hamiltonian, as a linear operator, are the sum of matrix elements for the potential energy and matrix elements for the kinetic energy. In terms of the variable $u = \sqrt{\frac{m\omega}{\hbar}}\,x$, the matrix elements of the potential energy are

$$\left\langle k\left|\,\frac{m\omega_a^2}{2}|x|^a\,\right|l\right\rangle = \frac{m\omega_a^2}{2}\,\frac{1}{\sqrt{2^{k+l}\,k!\,l!\,\pi}}\left(\frac{\hbar}{m\omega}\right)^{a/2}\int_{-\infty}^{\infty} du\,|u|^a\,e^{-u^2}\,H_k(u)\,H_l(u)$$

$$= \frac{m\omega_a^2}{2}\,\frac{1 + (-1)^{k+l}}{\sqrt{2^{k+l}\,k!\,l!\,\pi}}\left(\frac{\hbar}{m\omega}\right)^{a/2}\int_0^{\infty} du\,u^a\,e^{-u^2}\,H_k(u)\,H_l(u)$$

where the last step uses the parity property $H_k(-x) = (-1)^k H_k(x)$ of the Hermite polynomials to restrict the domain of integration to the positive axis. If the potential were not given by an absolute value, the fractional factor here would contain a phase factor from the ath root of minus one.

For purposes of numerical evaluation, the finite series expansion of the product of Hermite polynomials can be integrated term by term and left as a finite double series. Performing the expansion, the integral in the matrix elements is

$$\int_0^{\infty} du\,u^a\,e^{-u^2}\,H_k(u)\,H_l(u) = \sum_{i=0}^{\lfloor k/2\rfloor}\sum_{j=0}^{\lfloor l/2\rfloor}(-1)^{i+j}\,2^{k+l-2i-2j}\,\frac{k!\,l!}{(k-2i)!\,(l-2j)!\,i!\,j!}\int_0^{\infty} du\,e^{-u^2}\,u^{a+k+l-2i-2j}$$

$$= \sum_{i=0}^{\lfloor k/2\rfloor}\sum_{j=0}^{\lfloor l/2\rfloor}(-1)^{i+j}\,2^{k+l-2i-2j-1}\,\frac{k!\,l!}{(k-2i)!\,(l-2j)!\,i!\,j!}\,\Gamma\!\left(\frac{a+k+l+1}{2}-i-j\right)$$

where the last step uses the integral definition of the gamma function with a squared variable of integration. While this form appears to be complicated, it is amenable to quick calculation and does not need to be simplified any further for practical application. Substituting this double series in the previous formula gives the matrix elements for the potential energy.
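The double series translates directly into code. Here is a minimal sketch in Python with $\hbar = m = \omega = \omega_a = 1$ (the function name potential_element is my own); for a = 2 it can be checked against the analytic harmonic oscillator values $\langle 0|x^2/2|0\rangle = 1/4$ and $\langle 0|x^2/2|2\rangle = \sqrt{2}/4$:

```python
import math

def potential_element(k, l, a):
    """Matrix element <k| (1/2)|x|^a |l> in the oscillator basis (hbar = m = omega = omega_a = 1)."""
    if (k + l) % 2 == 1:
        return 0.0                                # odd-parity elements vanish
    total = 0.0
    for i in range(k // 2 + 1):
        for j in range(l // 2 + 1):
            total += ((-1)**(i + j) * 2.0**(k + l - 2*i - 2*j - 1)
                      * math.factorial(k) * math.factorial(l)
                      / (math.factorial(k - 2*i) * math.factorial(l - 2*j)
                         * math.factorial(i) * math.factorial(j))
                      * math.gamma((a + k + l + 1) / 2.0 - i - j))
    # prefactor (1/2) [1 + (-1)^(k+l)] / sqrt(2^(k+l) k! l! pi) with the parity factor equal to 2 here
    return total / math.sqrt(2.0**(k + l) * math.factorial(k) * math.factorial(l) * math.pi)

print(potential_element(0, 0, 2.0))   # 0.25 for the harmonic oscillator
print(potential_element(0, 2, 2.0))   # sqrt(2)/4
```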

The harmonic oscillator basis states are solutions of the Schrödinger equation, which means that the action of the kinetic energy term upon them must satisfy

$$\frac{p^2}{2m}\,|l\rangle = \left[E_l - \frac{m\omega^2}{2}x^2\right]|l\rangle$$

This simple statement means that the matrix elements of the kinetic energy can be given in terms of previous results with no additional calculation:

$$\left\langle k\left|\,\frac{p^2}{2m}\,\right|l\right\rangle = \hbar\omega\left(k+\tfrac{1}{2}\right)\delta_{kl} - \left\langle k\left|\,\frac{m\omega^2}{2}x^2\,\right|l\right\rangle$$

The second term in this formula can be evaluated explicitly without much difficulty, but again for purposes of numerical evaluation it can be left represented by the double series already determined.

Now the fun begins! For simplicity of code, let $\hbar = m = \omega = \omega_a = 1$. With these simple choices of parameters, the values of the first few matrix elements of the modified Hamiltonian as a function of the arbitrary exponent are listed in the following interactive matrix:

Evaluating the eigenvalues of this matrix for a variety of values of the exponent allows one to see how energy levels shift with exponent. In theory a larger matrix of these elements should produce more accurate results, but in practice larger matrices can lead to disruptive rounding errors during diagonalization. Restricting the number of basis states used can also lead to inaccurate results, so a suitable balance needs to be found.
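Putting the potential and kinetic matrix elements together, a truncated version of the eigenvalue problem can be solved with a standard dense diagonalization routine. A sketch under the unit conventions above: for a = 2 the eigenvalues must come out as k + 1/2 exactly, and for a = 4 the lowest eigenvalue should land near the known quartic oscillator ground level of about 0.5302:

```python
import math
import numpy as np

def potential_element(k, l, a):
    """<k| (1/2)|x|^a |l> in the oscillator basis, hbar = m = omega = omega_a = 1."""
    if (k + l) % 2 == 1:
        return 0.0
    total = 0.0
    for i in range(k // 2 + 1):
        for j in range(l // 2 + 1):
            total += ((-1)**(i + j) * 2.0**(k + l - 2*i - 2*j - 1)
                      * math.factorial(k) * math.factorial(l)
                      / (math.factorial(k - 2*i) * math.factorial(l - 2*j)
                         * math.factorial(i) * math.factorial(j))
                      * math.gamma((a + k + l + 1) / 2.0 - i - j))
    return total / math.sqrt(2.0**(k + l) * math.factorial(k) * math.factorial(l) * math.pi)

def hamiltonian(a, n):
    """Truncated n-by-n matrix of H(a) = p^2/2 + |x|^a/2 in the oscillator basis."""
    H = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            # kinetic part via <k|p^2/2|l> = (k + 1/2) delta_kl - <k|x^2/2|l>
            H[k, l] = potential_element(k, l, a) - potential_element(k, l, 2.0)
            if k == l:
                H[k, l] += k + 0.5
    return H

E2 = np.linalg.eigvalsh(hamiltonian(2.0, 30))   # should reproduce k + 1/2
E4 = np.linalg.eigvalsh(hamiltonian(4.0, 30))   # quartic oscillator levels
print(E2[:4])
print(E4[:4])
```

Because the truncation is a Rayleigh–Ritz procedure, each computed eigenvalue is an upper bound on the corresponding exact level.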

Directly following this paragraph is a hidden interactive graphic that can be used to evaluate the first ten eigenvalues for sixteen representative values of the exponent for a given number of basis states. The evaluation takes a bit of time when the number of basis states exceeds about twenty, so to avoid sluggish browser response the data has been precalculated.

With all of the eigenvalue data embedded in the page, the behavior of eigenvalues as a function of exponent for the indicated number of basis states is

Selecting different numbers of basis states shows how the resulting eigenvalue curves change as a function of that number. Clearly thirty basis states produce smoother curves than any lower number, but the curves for twenty basis states are not so very different. In the interest of shorter calculation times for live evaluation of wave functions in the browser, this latter number will be used below, but it can easily be changed in the underlying code.
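One sanity check on this balance follows from the variational principle: since the truncated bases are nested, each Rayleigh–Ritz eigenvalue can only decrease (or stay put) as more basis states are added. A sketch comparing ground-state estimates for the quartic case, reusing the double-series matrix element under the unit conventions above:

```python
import math
import numpy as np

def potential_element(k, l, a):
    """<k| (1/2)|x|^a |l> in the oscillator basis, hbar = m = omega = omega_a = 1."""
    if (k + l) % 2 == 1:
        return 0.0
    total = 0.0
    for i in range(k // 2 + 1):
        for j in range(l // 2 + 1):
            total += ((-1)**(i + j) * 2.0**(k + l - 2*i - 2*j - 1)
                      * math.factorial(k) * math.factorial(l)
                      / (math.factorial(k - 2*i) * math.factorial(l - 2*j)
                         * math.factorial(i) * math.factorial(j))
                      * math.gamma((a + k + l + 1) / 2.0 - i - j))
    return total / math.sqrt(2.0**(k + l) * math.factorial(k) * math.factorial(l) * math.pi)

def ground_energy(a, n):
    """Lowest eigenvalue of the n-by-n truncation of H(a)."""
    H = np.array([[potential_element(k, l, a) - potential_element(k, l, 2.0)
                   + (k + 0.5) * (k == l) for l in range(n)] for k in range(n)])
    return np.linalg.eigvalsh(H)[0]

# nested bases: each estimate is an upper bound that can only improve with more states
estimates = [ground_energy(4.0, n) for n in (10, 20, 30)]
print(estimates)
```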

Finding the eigenvectors of the set of matrix elements, then using them as coefficients in the expansion formula, leads to the following interactive graphic for the wave functions:

The red line is the potential itself, shifted downward by the calculated energy for the state, for comparison with the wave function. The resulting wave functions are clearly approximations because they are not strictly decreasing exponentials outside of the potential. The exaggeration of the wave functions near the potential wall is presumably akin to the Gibbs phenomenon in Fourier analysis.
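The reconstruction behind such a graphic is just the expansion evaluated on a grid. A sketch under the unit conventions above, rebuilding the ground-state wave function for a = 4 from an eigenvector of the truncated matrix and checking its normalization:

```python
import math
import numpy as np

def potential_element(k, l, a):
    """<k| (1/2)|x|^a |l> in the oscillator basis, hbar = m = omega = omega_a = 1."""
    if (k + l) % 2 == 1:
        return 0.0
    total = 0.0
    for i in range(k // 2 + 1):
        for j in range(l // 2 + 1):
            total += ((-1)**(i + j) * 2.0**(k + l - 2*i - 2*j - 1)
                      * math.factorial(k) * math.factorial(l)
                      / (math.factorial(k - 2*i) * math.factorial(l - 2*j)
                         * math.factorial(i) * math.factorial(j))
                      * math.gamma((a + k + l + 1) / 2.0 - i - j))
    return total / math.sqrt(2.0**(k + l) * math.factorial(k) * math.factorial(l) * math.pi)

def psi(k, x):
    """Oscillator basis function psi_k(x)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    norm = 1.0 / math.sqrt(2.0**k * math.factorial(k) * math.sqrt(math.pi))
    return norm * np.exp(-x**2 / 2.0) * np.polynomial.hermite.hermval(x, c)

n, a = 20, 4.0
H = np.array([[potential_element(k, l, a) - potential_element(k, l, 2.0)
               + (k + 0.5) * (k == l) for l in range(n)] for k in range(n)])
energies, vectors = np.linalg.eigh(H)

# rebuild the ground-state wave function: Psi(a, x) = sum_l c_l psi_l(x)
x = np.linspace(-8.0, 8.0, 4001)
ground = sum(vectors[l, 0] * psi(l, x) for l in range(n))
norm_check = np.sum(ground**2) * (x[1] - x[0])
print(energies[0], norm_check)
```

Since the eigenvector from the diagonalization is normalized and the basis is orthonormal, the reconstructed wave function comes out normalized automatically.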

Beginning with an exponent of two, which is the harmonic oscillator, one can vary the value to see how the wave function changes relative to the basis states. It is not surprising that the wave functions are all fairly similar, since one would expect that potentials that look similar should produce wave functions that look similar. What is more problematic is that although these functions all look similar, only the harmonic oscillator basis states have a simple statement in terms of known functions. The relatively minor change in the wave functions for exponent values near two is not behavior that can be described easily with analytic mathematics, apart from their expression as the expansion.

As the value of the exponent in the potential approaches infinity, the potential becomes an infinite square well with walls at plus and minus one. A wave function inside an infinite square well is constrained entirely within the well. This can be seen in the graphic as a compression of the wave function along the x-axis for increasing exponent value.
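This limit also provides a concrete check on the numerics: an infinite square well of width 2, with walls at ±1, has the energy levels

$$E_n = \frac{n^2\pi^2\hbar^2}{2m\,(2)^2} = \frac{n^2\pi^2\hbar^2}{8m},\qquad n = 1, 2, 3, \ldots$$

so with $\hbar = m = 1$ the lowest calculated eigenvalue should approach $\pi^2/8 \approx 1.2337$ as the exponent grows, compared to $1/2$ for the harmonic oscillator.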

The simplicity of the mathematical expression of harmonic oscillator wave functions, along with a corresponding simplicity of expression of wave functions for the Coulomb potential, leads physicists to ascribe particular meaning to these soluble systems. This presentation has used the simplicity of harmonic oscillator states as a tool to explore states that should be close to them but not expressible analytically, apart from their expansion in terms of the simple states. This is a matter of calculational expediency: the harmonic oscillator states are only special in that they are useful.

The structure of quantum mechanics provides a means to quantize any classical potential, not just those that are soluble. In this sense there is continuity hiding within the structure of quantum mechanics, readily accessible with enough computing power. One also has to be prepared to encounter mathematical complexity that cannot be expressed on the proverbial back of an envelope. The graphics here are intended as a step toward finding tools that grasp this complexity as securely as classical analysis would if it were readily extendable to these systems.

Uploaded 2013.05.20 — Updated 2017.08.06