If you have ever played with the `RotationTransform` command in Mathematica, you may at some point have been impressed by how quickly it returns answers in an arbitrary number of dimensions. After all, the details section of the documentation for this command says explicitly that it “can effectively specify any element of the n-dimensional rotation group SO(n).” Since elements of the group can be evaluated by exponentiating the generator of the element, in this case an antisymmetric matrix, it appears at first sight that Mathematica knows how to exponentiate an antisymmetric matrix in an arbitrary number of dimensions, and very quickly at that.

It turns out that there is a simpler way to evaluate arbitrary elements of the group than by explicitly exponentiating each individual matrix. To show how this works, first quickly review the relationship between a rotation matrix and its generator. For the simplest case of rotation in a two-dimensional plane, the rotated vector is related to the initial vector by

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos a & -\sin a \\ \sin a & \cos a \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

for a right-hand rotation through the angle a. The generator of this rotation is represented by the matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. The square of this matrix is the negative of the identity matrix, its cube is its own negative, and the full exponentiation is

$$\begin{aligned} \exp\left[ a \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right] &= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + a \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} - \frac{a^2}{2!} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \frac{a^3}{3!} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} + \cdots \\ &= \cos a \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \sin a \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} \cos a & -\sin a \\ \sin a & \cos a \end{pmatrix} \end{aligned}$$

leading to the rotation matrix given above. The generator of the rotation can be written as an outer product of the two unit vectors along the x-axis and the y-axis,

$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} - \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \hat{y}\hat{x}^T - \hat{x}\hat{y}^T$$

where the transposed vector on the right of each term is multiplied leftward onto each component of the vector to produce matrices that are added together. Since the final result is now a general vector statement, it can be extended immediately to higher dimensions. The equivalent three-dimensional generator is

$$L_{12} = \hat{y}\hat{x}^T - \hat{x}\hat{y}^T = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

where the subscript indicates that this is a right-hand rotation from the x-axis towards the y-axis. Note that in forming this generator, the first index appears with a negative sign as a non-transposed vector. The two remaining generators of right-hand rotations are

$$L_{23} = \hat{z}\hat{y}^T - \hat{y}\hat{z}^T = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$$

$$L_{31} = \hat{x}\hat{z}^T - \hat{z}\hat{x}^T = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}$$

Because there are three coordinate axes, the number of pairwise combinations is also three, so that in three dimensions one can make the identifications

$$L_1 = L_{23} \qquad L_2 = L_{31} \qquad L_3 = L_{12}$$

and describe rotations as being about the corresponding axes. The rotation matrices can be evaluated separately about each axis just as for the rotation in the two-dimensional plane, with the results

$$\exp[a_1 L_1] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos a_1 & -\sin a_1 \\ 0 & \sin a_1 & \cos a_1 \end{pmatrix} \qquad \exp[a_2 L_2] = \begin{pmatrix} \cos a_2 & 0 & \sin a_2 \\ 0 & 1 & 0 \\ -\sin a_2 & 0 & \cos a_2 \end{pmatrix} \qquad \exp[a_3 L_3] = \begin{pmatrix} \cos a_3 & -\sin a_3 & 0 \\ \sin a_3 & \cos a_3 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
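These statements are easy to check numerically. The following Python sketch builds the generators from outer products exactly as above and exponentiates one of them with a brute-force Taylor series (the helper `expm_taylor` is an illustrative stand-in for a library matrix exponential, not a routine from any particular package):

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Brute-force Taylor series for the matrix exponential;
    # adequate for the small matrices and angles used here
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# Unit vectors along the three coordinate axes
x, y, z = np.eye(3)

# Generators as outer products, first index negative and non-transposed
L12 = np.outer(y, x) - np.outer(x, y)
L23 = np.outer(z, y) - np.outer(y, z)
L31 = np.outer(x, z) - np.outer(z, x)

a = 0.7
R3 = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0,          0,         1]])
print(np.allclose(expm_taylor(a * L12), R3))  # True
```

The same check works for the other two generators against their rotation matrices above.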

In three dimensions one can also consider a rotation about a general axis by a single angle $a = \sqrt{a_1^2 + a_2^2 + a_3^2}$. The generator of this rotation is the linear combination $\vec{a} \cdot \vec{L} = a\,\hat{n} \cdot \vec{L}$, where $\hat{n}$ is a unit vector in the direction of the axis of rotation. The rotation matrix for this general rotation is found by exponentiating the matrix

$$A = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix} = a \begin{pmatrix} 0 & -n_3 & n_2 \\ n_3 & 0 & -n_1 \\ -n_2 & n_1 & 0 \end{pmatrix}$$

where $n_1^2 + n_2^2 + n_3^2 = 1$ because these numbers are direction cosines giving the inclination of the axis of rotation to the three coordinate axes. The square and cube of this matrix are

$$A^2 = a^2 \left( \hat{n}\hat{n}^T - I \right) \qquad A^3 = -a^2 A$$

so that all higher powers reduce to the first two, and the exponentiated matrix can be written as

$$e^A = I + \frac{\sin a}{a}\,A + \frac{1 - \cos a}{a^2}\,A^2$$

where the denominators normalize powers of the coefficient matrix. When applied to a vector, the result is Rodrigues’ rotation formula:

$$e^A \vec{r} = \vec{r} \cos a + \sin a \begin{pmatrix} 0 & -n_3 & n_2 \\ n_3 & 0 & -n_1 \\ -n_2 & n_1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} + (1 - \cos a) \begin{pmatrix} n_1^2 & n_1 n_2 & n_1 n_3 \\ n_1 n_2 & n_2^2 & n_2 n_3 \\ n_1 n_3 & n_2 n_3 & n_3^2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

$$e^A \vec{r} = \vec{r} \cos a + (\hat{n} \times \vec{r}) \sin a + \hat{n}\,(\hat{n} \cdot \vec{r})\,(1 - \cos a)$$
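As a numerical sanity check, a brute-force Taylor exponential of the coefficient matrix can be compared directly with Rodrigues' formula. The Python below is an illustrative sketch; the function names and the specific axis and angle are arbitrary choices, not part of the derivation:

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Brute-force Taylor series for the matrix exponential
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

def rodrigues(r, n, a):
    # Rodrigues' rotation formula for unit axis n and angle a
    return (r * np.cos(a)
            + np.cross(n, r) * np.sin(a)
            + n * np.dot(n, r) * (1 - np.cos(a)))

n = np.array([1.0, 2.0, 2.0]) / 3.0          # unit axis of rotation
a = 1.2                                      # rotation angle
A = a * np.array([[0,    -n[2],  n[1]],
                  [n[2],  0,    -n[0]],
                  [-n[1], n[0],  0   ]])     # generator a (n . L)
r = np.array([0.3, -1.0, 2.0])
print(np.allclose(expm_taylor(A) @ r, rodrigues(r, n, a)))  # True
```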

In four dimensions one can no longer describe a rotation as being about an axis, because there are two axes perpendicular to every plane and there would be an ambiguity in the description. In higher dimensions there are even more axes perpendicular to each plane, so a rotation is best described as being in a two-dimensional hyperplane in a direction from one unit vector towards another.

The definitions above of generators in terms of outer products are in vector notation, and so can be extended immediately to describe rotation in any hyperplane defined by two n-dimensional vectors. Given any two orthogonal unit vectors n1 and n2, which means

$$\hat{n}_1^T \hat{n}_1 = 1 \qquad \hat{n}_1^T \hat{n}_2 = \hat{n}_2^T \hat{n}_1 = 0 \qquad \hat{n}_2^T \hat{n}_2 = 1$$

the generator of rotations in the hyperplane spanned by the two vectors is

$$L_{\hat{n}_1 \hat{n}_2} = \hat{n}_2 \hat{n}_1^T - \hat{n}_1 \hat{n}_2^T$$

Forming powers of this generator,

$$L_{\hat{n}_1 \hat{n}_2}^2 = -\left( \hat{n}_1 \hat{n}_1^T + \hat{n}_2 \hat{n}_2^T \right) \qquad L_{\hat{n}_1 \hat{n}_2}^3 = -\left( \hat{n}_2 \hat{n}_1^T - \hat{n}_1 \hat{n}_2^T \right)$$

the n-dimensional rotation matrix is simply

$$\exp\left[ a L_{\hat{n}_1 \hat{n}_2} \right] = I + \left( \hat{n}_2 \hat{n}_1^T - \hat{n}_1 \hat{n}_2^T \right) \sin a + \left( \hat{n}_1 \hat{n}_1^T + \hat{n}_2 \hat{n}_2^T \right) (\cos a - 1)$$

The difference of sign in the last term compared to Rodrigues’ rotation formula is due to an extra minus sign in the neighboring outer product. This n×n matrix can now be applied to an n-dimensional vector to find its final value after rotation in the specified hyperplane.
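Written as a short Python function, the general formula is only a few lines of outer products (the name `rotation_in_plane` is illustrative, not a library routine):

```python
import numpy as np

def rotation_in_plane(n1, n2, a):
    # exp(a L) = I + (n2 n1^T - n1 n2^T) sin a + (n1 n1^T + n2 n2^T)(cos a - 1)
    # n1 and n2 must be orthonormal n-dimensional vectors
    dim = len(n1)
    return (np.eye(dim)
            + (np.outer(n2, n1) - np.outer(n1, n2)) * np.sin(a)
            + (np.outer(n1, n1) + np.outer(n2, n2)) * (np.cos(a) - 1))

# Five-dimensional example: rotate from the first axis toward the second
n1, n2 = np.eye(5)[0], np.eye(5)[1]
R = rotation_in_plane(n1, n2, 0.5)
print(np.allclose(R @ R.T, np.eye(5)))                           # True
print(np.allclose(R @ n1, np.cos(0.5) * n1 + np.sin(0.5) * n2))  # True
```

The two printed checks confirm that the result is an orthogonal matrix and that it rotates the first unit vector toward the second by the given angle.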

Given two vectors that are not orthogonal, one can apply a Gram-Schmidt orthogonalization step to the second vector, renormalize it and form the rotation matrix from the resulting pair of orthogonal unit vectors. And that is precisely what Mathematica does in producing answers so quickly in any number of dimensions. The exponentiation has already been carried out in the general formula and does not need to be done explicitly for every particular case, and forming the n×n matrix is trivial for Mathematica.
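A sketch of that procedure in Python, assuming only that the two given vectors are linearly independent; this is a plausible reconstruction of the described approach, not Mathematica's actual `RotationTransform` implementation:

```python
import numpy as np

def rotation_from_vectors(u, v, a):
    # One Gram-Schmidt step: normalize u, remove its component from v,
    # renormalize, then apply the general exponentiated formula.
    # (Illustrative reconstruction, not Mathematica's internal code.)
    n1 = u / np.linalg.norm(u)
    w = v - np.dot(n1, v) * n1        # strip the component along n1
    n2 = w / np.linalg.norm(w)
    dim = len(n1)
    return (np.eye(dim)
            + (np.outer(n2, n1) - np.outer(n1, n2)) * np.sin(a)
            + (np.outer(n1, n1) + np.outer(n2, n2)) * (np.cos(a) - 1))

u = np.array([1.0, 1.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 1.0, 0.0])   # not orthogonal to u
R = rotation_from_vectors(u, v, np.pi / 3)
print(np.allclose(R.T @ R, np.eye(4)))  # True
```

Vectors orthogonal to the plane of rotation are left untouched, as expected for a rotation confined to that plane.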

To demonstrate the consistency of this general formula with the result above in three dimensions, one need merely choose any vector satisfying $\hat{v}_1 \cdot \hat{n} = 0$ with respect to the three-dimensional axis of rotation, form its orthogonal companion $\hat{v}_2 = \hat{n} \times \hat{v}_1$ (this order of the cross product preserves the right-handed sense of the rotation), normalize both vectors and do a wee bit of algebra. Since there is an entire plane of rotation in which to choose the vectors, any initial choice leads by this method to the same result.
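The wee bit of algebra can be delegated to a short numerical check; the particular axis, angle and seed vector below are arbitrary choices for illustration:

```python
import numpy as np

n = np.array([2.0, -1.0, 2.0]) / 3.0    # unit axis of rotation
a = 0.9

# Any unit vector in the plane of rotation (perpendicular to n)
v1 = np.cross(n, np.array([1.0, 0.0, 0.0]))
v1 = v1 / np.linalg.norm(v1)
v2 = np.cross(n, v1)                    # right-handed companion in the plane

# General formula for rotation from v1 toward v2
R_plane = (np.eye(3)
           + (np.outer(v2, v1) - np.outer(v1, v2)) * np.sin(a)
           + (np.outer(v1, v1) + np.outer(v2, v2)) * (np.cos(a) - 1))

# Rodrigues' formula in matrix form: cos a I + sin a [n]_x + (1 - cos a) n n^T
N = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R_rod = np.cos(a) * np.eye(3) + np.sin(a) * N + (1 - np.cos(a)) * np.outer(n, n)

print(np.allclose(R_plane, R_rod))  # True
```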

In four dimensions, exponentiation of a general antisymmetric coefficient matrix is capable of explicit evaluation using a method based on the local isomorphism SO(4) $\sim$ SO(3)$\otimes$SO(3). The general coefficient matrix is

$$A = \begin{pmatrix} 0 & a_{12} & a_{13} & a_{14} \\ -a_{12} & 0 & a_{23} & a_{24} \\ -a_{13} & -a_{23} & 0 & a_{34} \\ -a_{14} & -a_{24} & -a_{34} & 0 \end{pmatrix}$$

where the minus signs have all been kept below the diagonal for convenience. Squaring this matrix produces a symmetric matrix whose interpretation is not immediately clear, but if one first writes

$$A = \tfrac{1}{2} \left( A_+ + A_- \right)$$

$$A_+ = \begin{pmatrix} 0 & a_{12}+a_{34} & a_{13}-a_{24} & a_{14}+a_{23} \\ -a_{12}-a_{34} & 0 & a_{14}+a_{23} & -a_{13}+a_{24} \\ -a_{13}+a_{24} & -a_{14}-a_{23} & 0 & a_{12}+a_{34} \\ -a_{14}-a_{23} & a_{13}-a_{24} & -a_{12}-a_{34} & 0 \end{pmatrix}$$

$$A_- = \begin{pmatrix} 0 & a_{12}-a_{34} & a_{13}+a_{24} & a_{14}-a_{23} \\ -a_{12}+a_{34} & 0 & -a_{14}+a_{23} & a_{13}+a_{24} \\ -a_{13}-a_{24} & a_{14}-a_{23} & 0 & -a_{12}+a_{34} \\ -a_{14}+a_{23} & -a_{13}-a_{24} & a_{12}-a_{34} & 0 \end{pmatrix}$$

then it is straightforward to verify that these two matrices commute: $[A_+, A_-] = 0$. Forming their squares gives

$$A_+^2 = -\left[ (a_{12}+a_{34})^2 + (a_{13}-a_{24})^2 + (a_{14}+a_{23})^2 \right] I = -a_+^2 I$$

$$A_-^2 = -\left[ (a_{12}-a_{34})^2 + (a_{13}+a_{24})^2 + (a_{14}-a_{23})^2 \right] I = -a_-^2 I$$

so that the complete exponentiated matrix is

$$e^A = e^{A_+/2}\, e^{A_-/2} = e^{A_-/2}\, e^{A_+/2} = \left[ \cos\frac{a_+}{2}\, I + \frac{\sin(a_+/2)}{a_+}\, A_+ \right] \left[ \cos\frac{a_-}{2}\, I + \frac{\sin(a_-/2)}{a_-}\, A_- \right]$$

One can describe this method as arranging the six parameters of SO(4) into two vectors, remembering that one is working with four-dimensional representations of SO(3).
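The three claims (the commutator, the squares of $A_\pm$, and the factored exponential) can all be checked numerically with a short Python script; the coefficient values are arbitrary, and `expm_taylor` is a brute-force stand-in for a library matrix exponential:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Brute-force Taylor series for the matrix exponential
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# Arbitrary coefficients of an antisymmetric 4x4 matrix
a12, a13, a14, a23, a24, a34 = 0.3, -0.5, 0.2, 0.7, 0.1, -0.4
A = np.array([[0,    a12,  a13,  a14],
              [-a12, 0,    a23,  a24],
              [-a13, -a23, 0,    a34],
              [-a14, -a24, -a34, 0  ]])

Ap = np.array([[0,        a12+a34,  a13-a24,  a14+a23],
               [-a12-a34, 0,        a14+a23, -a13+a24],
               [-a13+a24, -a14-a23, 0,        a12+a34],
               [-a14-a23, a13-a24,  -a12-a34, 0      ]])
Am = 2 * A - Ap                 # the other part, so that A = (Ap + Am)/2

ap = np.sqrt((a12+a34)**2 + (a13-a24)**2 + (a14+a23)**2)
am = np.sqrt((a12-a34)**2 + (a13+a24)**2 + (a14-a23)**2)

print(np.allclose(Ap @ Am, Am @ Ap))             # the parts commute: True
print(np.allclose(Ap @ Ap, -ap**2 * np.eye(4)))  # True
print(np.allclose(Am @ Am, -am**2 * np.eye(4)))  # True

eA = ((np.cos(ap/2) * np.eye(4) + np.sin(ap/2) / ap * Ap)
      @ (np.cos(am/2) * np.eye(4) + np.sin(am/2) / am * Am))
print(np.allclose(eA, expm_taylor(A)))           # True
```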

Uploaded 2013.01.15 — Updated 2013.01.24 analyticphysics.com