Rotationally symmetric systems in classical mechanics always have the symmetry group SO(3) corresponding to invariance under transformations generated by angular momentum. The Kepler potential is often described as having an ‘extra’ conserved quantity, the Runge vector, whose addition to the angular momentum transformations leads to the larger symmetry group SO(4). This larger group is often described as an ‘accidental’ or ‘hidden’ symmetry due to a lack of understanding of the origin of the symmetry.

The purpose of this discussion is to show that the generalized Runge vector described in this presentation always leads to an enlarged symmetry group for rotationally symmetric systems, regardless of the functional dependence of the potential on the radial variable. This result is independent of the number of spatial dimensions, so that the addition of the generalized Runge vector to the symmetry group SO(n) of angular momentum always produces the larger group SO(n+1).

In the mathematical equations below, repeated indices represent sums over those indices, as is commonly done in coordinate-based presentations of classical general relativity.

Begin with the representation of the algebra for SO(n). Angular momentum is in general an antisymmetric tensor, and only corresponds to a vector in a three-dimensional space where the number of independent tensor components is equal to the dimension of the space. In an n-dimensional space, the angular momentum tensor is

$L_{ij} = q_i p_j - q_j p_i$

and has $\frac{1}{2} n(n-1)$ components. The algebra of the group structure is demonstrated by evaluating Poisson brackets defined by

$[A, B] = \frac{\partial A}{\partial q_k} \frac{\partial B}{\partial p_k} - \frac{\partial A}{\partial p_k} \frac{\partial B}{\partial q_k}$

The Poisson bracket of components of the angular momentum tensor is

$[L_{ij}, L_{kl}] = -\delta_{jk} L_{il} - \delta_{il} L_{jk} + \delta_{ik} L_{jl} + \delta_{jl} L_{ik}$

and while this looks complicated, what is important is that the bracket of two components of the angular momentum tensor always leads to another component. Coordinates and momenta are vectors in this n-dimensional space, and their brackets with components of the angular momentum tensor will describe how vectors are transformed by rotation:

$[q_i, L_{jk}] = -\delta_{ij} q_k + \delta_{ik} q_j \qquad [p_i, L_{jk}] = -\delta_{ij} p_k + \delta_{ik} p_j$
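These bracket relations are mechanical to verify symbolically. The following sketch uses the third-party sympy library in three dimensions (the helper names pb and L are illustrative, not from the text):

```python
# Symbolic check of the SO(n) bracket algebra, here for n = 3.
import sympy as sp

n = 3
q = sp.symbols(f'q1:{n + 1}')  # coordinates q_1 .. q_n
p = sp.symbols(f'p1:{n + 1}')  # momenta p_1 .. p_n

def pb(A, B):
    """Poisson bracket [A,B] = dA/dq_k dB/dp_k - dA/dp_k dB/dq_k, summed over k."""
    return sp.expand(sum(sp.diff(A, q[k]) * sp.diff(B, p[k])
                         - sp.diff(A, p[k]) * sp.diff(B, q[k]) for k in range(n)))

def L(i, j):
    """Angular momentum tensor component L_ij = q_i p_j - q_j p_i (0-indexed)."""
    return q[i] * p[j] - q[j] * p[i]

d = lambda a, b: int(a == b)  # Kronecker delta

# closure: [L_ij, L_kl] = -d_jk L_il - d_il L_jk + d_ik L_jl + d_jl L_ik
for (i, j, k, l) in [(0, 1, 1, 2), (0, 2, 2, 1), (0, 1, 0, 2)]:
    rhs = -d(j, k)*L(i, l) - d(i, l)*L(j, k) + d(i, k)*L(j, l) + d(j, l)*L(i, k)
    assert sp.expand(pb(L(i, j), L(k, l)) - rhs) == 0

# vector transformation: [q_i, L_jk] = -d_ij q_k + d_ik q_j
for (i, j, k) in [(0, 0, 1), (1, 0, 1), (2, 0, 1)]:
    assert sp.expand(pb(q[i], L(j, k)) - (-d(i, j)*q[k] + d(i, k)*q[j])) == 0
```

The same script runs unchanged for any n by editing the first assignment, since nothing else depends on the dimension.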

The angular momentum tensor corresponds to the symmetry of rotational invariance because the bracket of any component with a square of a vector or a dot product of two vectors is always zero,

$[q_k q_k, L_{ij}] = [p_k p_k, L_{ij}] = [q_k p_k, L_{ij}] = 0$

so that squares and dot products of dynamic vectors are unchanged by rotation of the system. This is not true of dot products that include nondynamic constant vectors, i.e. vectors whose constancy does not have its source in the dynamic system. This can be seen by evaluating the brackets

$[c_k q_k, L_{ij}] = -c_i q_j + c_j q_i \qquad [c_k p_k, L_{ij}] = -c_i p_j + c_j p_i$

where the vectors $c_k$ consist of arbitrary constants. Dot products of dynamic with nondynamic vectors behave like vectors when evaluating Poisson brackets, a point that will lead to an interesting puzzle at the end of this presentation.
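Both behaviors are easy to confirm symbolically; a minimal sketch with the third-party sympy library in three dimensions (the helper pb is illustrative):

```python
# Scalars built from dynamic vectors are rotationally invariant;
# dot products with a constant vector are not.
import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3')
c1, c2, c3 = sp.symbols('c1 c2 c3')  # nondynamic constant vector
q, p = (q1, q2, q3), (p1, p2, p3)

def pb(A, B):
    """Poisson bracket over the 3-dimensional phase space."""
    return sp.expand(sum(sp.diff(A, q[k]) * sp.diff(B, p[k])
                         - sp.diff(A, p[k]) * sp.diff(B, q[k]) for k in range(3)))

L12 = q1*p2 - q2*p1  # a representative angular momentum component

# squares and dot products of dynamic vectors have zero bracket ...
for scalar in (q1**2 + q2**2 + q3**2,
               p1**2 + p2**2 + p3**2,
               q1*p1 + q2*p2 + q3*p3):
    assert pb(scalar, L12) == 0

# ... but a dot product with a constant vector transforms like a vector component
cq = c1*q1 + c2*q2 + c3*q3
assert sp.expand(pb(cq, L12) - (-c1*q2 + c2*q1)) == 0
```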

A scalar is by definition rotationally invariant, having an identically zero bracket with components of the angular momentum tensor. This means that scalar functions appearing in rotationally symmetric systems can only depend on squares or dot products of dynamic vectors. The bracket of a general scalar with a component of the angular momentum tensor is

$[f[q^2, p^2, (qp)], L_{ij}] = \left( q_i \frac{\partial}{\partial q_j} - q_j \frac{\partial}{\partial q_i} + p_i \frac{\partial}{\partial p_j} - p_j \frac{\partial}{\partial p_i} \right) f[q^2, p^2, (qp)]$

with the notation

$q^2 = q_k q_k \qquad p^2 = p_k p_k \qquad (qp) = q_k p_k$

for the quantities that can appear in a rotationally symmetric function. When the individual derivatives are applied to the rotationally symmetric function, their effective action can be written

$\frac{\partial}{\partial q_i} = 2 q_i \frac{\partial}{\partial q^2} + p_i \frac{\partial}{\partial (qp)} \qquad \frac{\partial}{\partial p_i} = 2 p_i \frac{\partial}{\partial p^2} + q_i \frac{\partial}{\partial (qp)}$

Substitution of these forms into the bracket above verifies that it is indeed identically zero. Use of these effective derivatives is unambiguous as long as all dynamic quantities are written explicitly in terms of coordinates and momenta.
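The cancellation can also be confirmed for an arbitrary scalar function symbolically. In the following sketch with the third-party sympy library, the chain rule supplies the effective derivatives automatically (the function f is an unspecified placeholder):

```python
# An arbitrary function of q^2, p^2 and (qp) has zero bracket with L_ij.
import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3')
q, p = (q1, q2, q3), (p1, p2, p3)
f = sp.Function('f')  # arbitrary rotationally symmetric scalar function

def pb(A, B):
    """Poisson bracket over the 3-dimensional phase space."""
    return sum(sp.diff(A, q[k]) * sp.diff(B, p[k])
               - sp.diff(A, p[k]) * sp.diff(B, q[k]) for k in range(3))

qsq = q1**2 + q2**2 + q3**2
psq = p1**2 + p2**2 + p3**2
qp  = q1*p1 + q2*p2 + q3*p3

scalar = f(qsq, psq, qp)   # f depends only on the three rotational invariants
L12 = q1*p2 - q2*p1

# differentiation of the composite function reproduces the effective
# derivatives above, and every term cancels in the bracket
assert sp.expand(pb(scalar, L12)) == 0
```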

Now introduce the generalized Runge vector from this presentation. For simplicity, take all of the $\beta_i$ of that presentation equal, so that the vector is

$R_i = m( \beta \dot q_i - \dot\beta q_i ) = \beta p_i - m\dot\beta q_i$

where the scalar function β is determined through the dynamical quantities by

$\ddot\beta = -\frac{1}{mq} \frac{dV}{dq}\, \beta = -\frac{2}{m} \frac{dV}{dq^2}\, \beta$

Since brackets of scalar quantities with components of the angular momentum tensor are identically zero, the bracket with the Runge vector corresponds to those of the coordinate and momentum vectors,

$[R_i, L_{jk}] = \beta\, [p_i, L_{jk}] - m\dot\beta\, [q_i, L_{jk}] = -\delta_{ij} R_k + \delta_{ik} R_j$

and this bracket forms part of the algebra for SO(n+1). The remaining part of that algebra resides in the bracket $[R_i, R_j]$: this bracket must be proportional to a component of the angular momentum tensor if the Runge vector contributes to the extended symmetry group SO(n+1). Since the bracket is zero for equal indices, consider only $i \ne j$ when evaluating the bracket:

$[R_i, R_j] = L_{ij}\, [\beta, m\dot\beta] + \beta \left( p_i \frac{\partial}{\partial q_j} - p_j \frac{\partial}{\partial q_i} \right) \beta + m\dot\beta \left( q_j \frac{\partial}{\partial p_i} - q_i \frac{\partial}{\partial p_j} \right) m\dot\beta + \beta \left( q_j \frac{\partial}{\partial q_i} - q_i \frac{\partial}{\partial q_j} \right) m\dot\beta + m\dot\beta \left( p_i \frac{\partial}{\partial p_j} - p_j \frac{\partial}{\partial p_i} \right) \beta$

Replacing the derivatives in parentheses with their effective action upon rotationally symmetric functions, the bracket can be written compactly as

$[R_i, R_j] = \left[\, [\beta, m\dot\beta] - \frac{\partial \beta^2}{\partial q^2} - \frac{\partial (m\dot\beta)^2}{\partial p^2} - \frac{\partial\, m\dot\beta \beta}{\partial (qp)} \,\right] L_{ij} \equiv F[q^2, p^2, (qp)]\, L_{ij}$

where the Poisson bracket that has not been expressed in terms of effective action represents

$[\beta, m\dot\beta] = 4(qp) \left[ \frac{\partial \beta}{\partial q^2} \frac{\partial\, m\dot\beta}{\partial p^2} - \frac{\partial \beta}{\partial p^2} \frac{\partial\, m\dot\beta}{\partial q^2} \right] + 2 q^2 \left[ \frac{\partial \beta}{\partial q^2} \frac{\partial\, m\dot\beta}{\partial (qp)} - \frac{\partial \beta}{\partial (qp)} \frac{\partial\, m\dot\beta}{\partial q^2} \right] - 2 p^2 \left[ \frac{\partial \beta}{\partial p^2} \frac{\partial\, m\dot\beta}{\partial (qp)} - \frac{\partial \beta}{\partial (qp)} \frac{\partial\, m\dot\beta}{\partial p^2} \right]$

The coefficient F is a complicated function of coordinates and momenta, but consists solely of combinations of constants of the motion. To understand this, remember that the Poisson brackets satisfy the Jacobi identity

$[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0$

which can be verified by writing out the twenty-four index summed terms in this statement and canceling them in pairs. When this identity is applied to the constants of the motion $R_i$, $R_j$ and the Hamiltonian $H$

$[R_i, [R_j, H]] + [R_j, [H, R_i]] + [H, [R_i, R_j]] = 0$

the inner brackets of the first two terms are zero by definition of a constant of the motion, so that the last term must also be zero and its inner bracket a constant in its own right. This is a general result: the bracket of two constants of the motion is always another constant. Since this inner bracket is the function F multiplying a known constant of the motion $L_{ij}$,

$[H, [R_i, R_j]] = 0 = [H, L_{ij} F] = [H, F]\, L_{ij}$

the function F must also have a zero bracket with the Hamiltonian, and can only consist of constants of the motion. The constants of the motion that can be part of a scalar function are the Hamiltonian, the squared contraction of the angular momentum tensor, and the square of the Runge vector:

$H = \frac{p_k p_k}{2m} + V(\sqrt{q_k q_k}) = \frac{p^2}{2m} + V(q) \qquad L^2 = \frac{1}{2} L_{kl} L_{kl} = q^2 p^2 - (qp)^2 \qquad R^2 = R_k R_k = p^2 \beta^2 - 2(qp)\, m\dot\beta \beta + q^2 (m\dot\beta)^2$

These correspond with the three quantities $q^2$, $p^2$ and $(qp)$ available in rotationally symmetric systems, but there is a functional dependence among the three scalar constants of the motion.
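Both the Jacobi identity and the contraction identity for $L^2$ quoted above are mechanical to verify symbolically. A minimal sketch with the third-party sympy library, in three dimensions (the helper pb is illustrative):

```python
# Jacobi identity on sample functions, and the L^2 contraction identity.
import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3')
q, p = (q1, q2, q3), (p1, p2, p3)

def pb(A, B):
    """Poisson bracket over the 3-dimensional phase space."""
    return sp.expand(sum(sp.diff(A, q[k]) * sp.diff(B, p[k])
                         - sp.diff(A, p[k]) * sp.diff(B, q[k]) for k in range(3)))

# Jacobi identity on three arbitrary sample phase-space functions
A, B, C = q1*p2**2, q2*q3 + p1*p3, q1**2*p1
jacobi = pb(A, pb(B, C)) + pb(B, pb(C, A)) + pb(C, pb(A, B))
assert sp.expand(jacobi) == 0

# contraction identity: (1/2) L_kl L_kl = q^2 p^2 - (qp)^2
L = lambda i, j: q[i]*p[j] - q[j]*p[i]
Lsq = sp.Rational(1, 2) * sum(L(i, j)**2 for i in range(3) for j in range(3))
qsq = sum(x**2 for x in q)
psq = sum(x**2 for x in p)
qp  = sum(a*b for a, b in zip(q, p))
assert sp.expand(Lsq - (qsq*psq - qp**2)) == 0
```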

In order to demonstrate fully the algebra of SO(n+1), the Runge vector components need to be rescaled by some scalar function that will cancel the coefficient function F that appears in $[R_i, R_j]$. The Hamiltonian by definition has zero bracket with all constants of the motion, which includes the other two squared constants. Using brackets given previously, it is simple to show that

$[L_{ij}, L^2] = L_{kl}\, [L_{ij}, L_{kl}] = 0 \qquad [L_{ij}, R^2] = 2 R_k\, [L_{ij}, R_k] = 0$

which means that any scalar function of the squared constants of the motion can be brought inside a bracket with any component of the angular momentum tensor. A complication arises for brackets containing individual Runge vector components, for which

$[R_i, L^2] = L_{kl}\, [R_i, L_{kl}] = 2 R_k L_{ki} \qquad [R_i, R^2] = 2 R_k\, [R_i, R_k] = -2 F R_k L_{ki}$

Scalar functions cannot simply be brought inside such brackets, which means that one cannot blithely multiply Runge-vector components by an arbitrary function of squared constants of the motion. As an aside, the right-hand sides of these equations are generalizations of the three-dimensional cross product to an arbitrary number of dimensions, and are in general nonzero.

Fortunately for the goal of this presentation, the components of the Runge vector replicate a structure seen earlier. Consider first the explicit effect on the bracket of rescaling the Runge vector components by multiplying with an arbitrary power of a scalar function to be determined,

$[G^u R_i, G^u R_j] = G^{2u}\, [R_i, R_j] + u G^{2u-1} \left( R_j\, [R_i, G] - R_i\, [R_j, G] \right)$

where the power u can be chosen for convenience. The trick is what happens when the difference of the two brackets is expanded,

$R_j [R_i, G] - R_i [R_j, G] = L_{ij} \left(\, m\dot\beta\, [\beta, G] - \beta\, [m\dot\beta, G] \,\right) + \beta^2 \left( p_i \frac{\partial}{\partial q_j} - p_j \frac{\partial}{\partial q_i} \right) G + (m\dot\beta)^2 \left( q_j \frac{\partial}{\partial p_i} - q_i \frac{\partial}{\partial p_j} \right) G + m\dot\beta \beta \left( q_j \frac{\partial}{\partial q_i} - q_i \frac{\partial}{\partial q_j} \right) G + m\dot\beta \beta \left( p_i \frac{\partial}{\partial p_j} - p_j \frac{\partial}{\partial p_i} \right) G$

which in terms of effective action becomes

$R_j [R_i, G] - R_i [R_j, G] = \left[\, m\dot\beta\, [\beta, G] - \beta\, [m\dot\beta, G] - 2 \beta^2 \frac{\partial G}{\partial q^2} - 2 (m\dot\beta)^2 \frac{\partial G}{\partial p^2} - 2 m\dot\beta \beta \frac{\partial G}{\partial (qp)} \,\right] L_{ij} \equiv D(G)\, L_{ij}$

where this last quantity is a linear differential operator applied to the function to be determined. The bracket of the rescaled components is now

$[G^u R_i, G^u R_j] = \left[\, G^{2u} F + u G^{2u-1} D(G) \,\right] L_{ij}$

and can be made to conform to the algebra of SO(n+1) by setting the coefficient equal to unity and solving a differential equation. For the choice $u = \frac{1}{2}$ this is the linear first-order inhomogeneous differential equation

$\frac{1}{2} D(G) + F[q^2, p^2, (qp)]\, G = 1$

When a solution to this equation has been found, the constants of the motion $L_{ij}$ and $\sqrt{G}\, R_i$ will satisfy the algebra of SO(n+1) for any spherically symmetric potential in an arbitrary number of dimensions.

While this indicates that the scaling function G always exists in principle, determining it is not simple given the complexity of the differential equations. The Kepler potential $V(q) = -\frac{k}{q}$ stands out starkly in this regard, because one can take

$\beta = (qp) \qquad m\dot\beta = p^2 - \frac{mk}{q}$

from which follows

$F = -p^2 + \frac{2mk}{q} = 2m(-E) = 2m(-H)$

which is a positive quantity for bound states. Since the Hamiltonian has a bracket of zero with all constants of the motion, one can immediately conclude that $G = \frac{1}{F}$, and the constants forming the extended algebra are $L_{ij}$ and $R_i / \sqrt{2m(-H)}$. This extremely simple behavior is not expected to hold in general.
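The Kepler result can be confirmed symbolically. A minimal sketch with the third-party sympy library, verifying $[R_i, R_j] = 2m(-H) L_{ij}$ for the choice of $\beta$ above in three dimensions (the variable names are illustrative):

```python
# Kepler check: with beta = (qp) and m*betadot = p^2 - mk/q,
# the bracket of two Runge components is 2m(-H) times L_ij.
import sympy as sp

qx, qy, qz, px, py, pz = sp.symbols('qx qy qz px py pz')
m, k = sp.symbols('m k', positive=True)
q, p = (qx, qy, qz), (px, py, pz)

def pb(A, B):
    """Poisson bracket over the 3-dimensional phase space."""
    return sum(sp.diff(A, q[i]) * sp.diff(B, p[i])
               - sp.diff(A, p[i]) * sp.diff(B, q[i]) for i in range(3))

r   = sp.sqrt(qx**2 + qy**2 + qz**2)
qp  = qx*px + qy*py + qz*pz
psq = px**2 + py**2 + pz**2

beta, mbetadot = qp, psq - m*k/r                 # the Kepler choice in the text
R = [beta * p[i] - mbetadot * q[i] for i in range(3)]
H = psq/(2*m) - k/r                              # Kepler Hamiltonian, V(q) = -k/q
L01 = qx*py - qy*px

# F = 2m(-H) multiplies the angular momentum component
assert sp.simplify(pb(R[0], R[1]) - 2*m*(-H)*L01) == 0
```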

It is worth noting that the differential equation to be solved is equivalent to

$[\beta \sqrt{G}, m\dot\beta \sqrt{G}] - \frac{\partial [\beta^2 G]}{\partial q^2} - \frac{\partial [(m\dot\beta)^2 G]}{\partial p^2} - \frac{\partial [m\dot\beta \beta G]}{\partial (qp)} = 1$

which could have been found by simply including factors of $\sqrt{G}$ when evaluating the coefficient F before setting it equal to unity. The bit of extra work for an arbitrary power of the scaling function confirms that the square root is the most convenient choice in that it leads to a linear differential equation.

And now for the puzzle mentioned above. For a Lagrangian $L = \frac{m}{2} \dot q_k \dot q_k - V(q)$ of standard form, the Lagrange equations are

$\ddot q_i = -\frac{1}{mq} \frac{dV}{dq}\, q_i$

which is exactly the same as the first form of the equation determining β. Since these are linear differential equations, the coordinates span the function space used to represent β. By standard existence theorems, the function β must be a linear superposition of the coordinates, and its temporal derivative the same superposition of momenta divided by the mass:

$\beta = c_k q_k \qquad \dot\beta = c_k \dot q_k = \frac{1}{m} c_k p_k$

It was pointed out above that a dot product with nondynamic constant vectors does not behave like a scalar with respect to angular momentum, so this solution will not produce the correct algebra under Poisson brackets. In fact, writing explicitly

$R_i = c_k q_k\, p_i - c_k p_k\, q_i = c_k L_{ki}$

the Runge vector is in a definite sense a linear superposition of components of the angular momentum tensor. While this is true numerically, it will of course not lead to an extended algebra in this form but apparently only when the function β is expressed in terms of the dynamic variables $q^2$, $p^2$ and $(qp)$. Curiouser and curiouser.
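The numerical identity at the heart of the puzzle is itself easy to confirm; a minimal sympy sketch (the names are illustrative):

```python
# Identity check: R_i = beta p_i - m*betadot q_i collapses to c_k L_ki
# when beta = c_k q_k and m*betadot = c_k p_k.
import sympy as sp

q1, q2, q3, p1, p2, p3, c1, c2, c3 = sp.symbols('q1 q2 q3 p1 p2 p3 c1 c2 c3')
q, p, c = (q1, q2, q3), (p1, p2, p3), (c1, c2, c3)

L = lambda i, j: q[i]*p[j] - q[j]*p[i]
beta     = sum(ck*qk for ck, qk in zip(c, q))   # beta = c_k q_k
mbetadot = sum(ck*pk for ck, pk in zip(c, p))   # m betadot = c_k p_k

for i in range(3):
    Ri = beta * p[i] - mbetadot * q[i]
    assert sp.expand(Ri - sum(c[k]*L(k, i) for k in range(3))) == 0
```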

Uploaded 2012.03.25 — Updated 2014.12.30 analyticphysics.com