This page documents some special techniques for overcoming nontrivial circumstances that arise when implementing Three.js for mathematical physics. Collecting them here makes the techniques available on the Internet without the inconveniences of other, ultimately closed-source systems.

Contents

Geometry Issues
Marching Cubes
Transparency Issues
Camera Issues
Lighting Issues
Rendering Issues
VR Implementation


Geometry Issues

The vertices of geometries can be easily modified by assigning altered values to their components and setting the flag

geometry.verticesNeedUpdate = true;
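
For example, a sketch that displaces the vertices of an existing classic Geometry into a sine wave:

for ( var i = 0 ; i < geometry.vertices.length ; i++ ) {
  geometry.vertices[i].z = Math.sin( geometry.vertices[i].x );
}
geometry.verticesNeedUpdate = true;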

Once rendering begins, modifying a geometry by adding or removing vertices will have no effect because the action does not reset WebGL buffers. The original mesh can instead be removed from the scene and replaced by one built from the modified geometry.
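
A sketch of the replacement, assuming a rebuilt geometry newGeometry:

scene.remove( mesh );
mesh.geometry.dispose(); // release the stale WebGL buffers
mesh = new THREE.Mesh( newGeometry, mesh.material );
scene.add( mesh );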

If the overall scale, position or rotation of a mesh is modified, subsequent operations performed immediately on the geometry of the mesh may return unexpected and inaccurate results. This is because updateMatrixWorld() is called on objects once every render loop, and that call may not yet have occurred when the operations are performed. The issue is resolved by manually calling updateMatrixWorld() on the mesh to update its world matrix before further calculations.
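
For example, to read an accurate world position of a vertex immediately after moving its mesh:

mesh.position.set( 1, 0, 0 );
mesh.updateMatrixWorld();
var v = mesh.geometry.vertices[0].clone().applyMatrix4( mesh.matrixWorld );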

For a geometry constructed manually from vertices and faces, the indices defining faces should always be listed counter-clockwise for consistency. This is because the front of the face is determined by the right-hand rule applied to the vertices. Inconsistent ordering of indices can lead to defects in rendering for materials that respond to lighting according to the direction of face normals.
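
A minimal sketch of a single triangle with counter-clockwise winding, using the classic Geometry class:

var geometry = new THREE.Geometry();
geometry.vertices.push( new THREE.Vector3( 0, 0, 0 ),
                        new THREE.Vector3( 1, 0, 0 ),
                        new THREE.Vector3( 0, 1, 0 ) );
geometry.faces.push( new THREE.Face3( 0, 1, 2 ) ); // right-hand rule gives a normal along +z
geometry.computeFaceNormals();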


Marching Cubes

While Three.js does not currently implement a proper marching cubes geometry, the underlying code for the marching cubes example can be adapted to produce implicit geometries other than metaballs. The MarchingCubes object takes two parameters: a size that determines the resolution of its field in all three coordinate directions and a material. This code visualizes an implicit function of three variables:

var march = new THREE.MarchingCubes( size, material );
march.isolation = 0;

for ( var k = 0 ; k < size ; k++ ) {
  for ( var j = 0 ; j < size ; j++ ) {
    for ( var i = 0 ; i < size ; i++ ) {

      var x = 2 * i / ( size - 1 ) - 1; // map grid indices to coordinates in [-1,1]
      var y = 2 * j / ( size - 1 ) - 1;
      var z = 2 * k / ( size - 1 ) - 1;

      march.field[ i + j*size + k*size*size ] = f( x, y, z );

    }
  }
}

scene.add( march );

The property isolation sets the isolevel of the implicit function surface. Since the field takes some time to evaluate, there is a method end() that can be used to invoke a callback function for further processing of the output:

march.end( callback );

The MarchingCubes object has a method generateGeometry() that generates a BufferGeometry, which can be converted to a regular Geometry with vertices and faces via the latter's fromBufferGeometry() method. Be aware that by default this method returns vertex values with all three coordinates in the range [-1,1]. These must be scaled appropriately to produce a geometry matching the actual input function.
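
As a rough sketch, assuming the sampled region extends from -halfWidth to halfWidth in each coordinate direction (halfWidth being a value from the original function, not part of Three.js):

var bufferGeometry = march.generateGeometry();
var geometry = new THREE.Geometry().fromBufferGeometry( bufferGeometry );
geometry.scale( halfWidth, halfWidth, halfWidth ); // undo the default [-1,1] normalization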


Here is one way to wrap your head around the lookup tables used for marching cubes:

1) Read this excellent explanation of marching squares.

2) Write code based on the sixteen cases on Wikipedia using a switch statement.

3) Compare the marching squares version with a switch statement to a version using lookup tables with four if statements.

4) Realize that the marching cubes tables are only there to avoid writing out 256 separate switch cases explicitly, managing instead with twelve if statements. The tables are just a shortcut.
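
For concreteness, here is a rough sketch of the four if statements of step 3, classifying a single cell of a hypothetical scalar field f at isolevel iso:

var index = 0;
if ( f( i, j ) > iso ) index += 1;
if ( f( i + 1, j ) > iso ) index += 2;
if ( f( i + 1, j + 1 ) > iso ) index += 4;
if ( f( i, j + 1 ) > iso ) index += 8;

// index selects one of the sixteen cases, whether written as a switch or a lookup table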

That was the path I personally took in writing a streamlined version of marching cubes that renders a static surface every bit as fast as the Three.js version without all the distraction of metaballs.


Transparency Issues

When rendering a scene in Three.js, objects are split into two basic groups: opaque and transparent. Opaque objects are rendered first from the front of the scene to the back, then transparent objects from the back of the scene to the front. For transparent objects to render properly from all camera angles, they must have distinct positions. Two transparent objects with the exact same position will render in the order they are added to the scene, so that one will always occlude the other.

When a mesh is created using a custom Geometry, its position defaults to (0,0,0) regardless of the values of the vertices in its underlying geometry. This will lead to unexpected occlusion for transparent objects, since they will by default all have exactly the same position. This can be fixed by centering the geometry in its own frame and using the opposite of the centering vector to set the position of the mesh:

var c = new THREE.Vector3();
geometry.computeBoundingBox();
geometry.boundingBox.getCenter( c );
geometry.translate( -c.x, -c.y, -c.z );
mesh.position.set( c.x, c.y, c.z );

Altering the position attribute directly, without using set(), will not update the scene appropriately.

The order in which the faces of a custom geometry are assigned is crucial for coherent rendering of transparent objects. Faces should be assigned in some linear order across the object: top to bottom or right to left. This is a feature of WebGL in general.

A closed transparent object using a double-sided material will produce unexpected rendering artifacts among its constituent faces. This is again because all faces share the common position of the object. One solution is to use a single-sided material for the object.

Here is an example of nested concentric transparent spheres, using a render order assigned from inside outward with single-sided materials.
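
A rough sketch of that arrangement, with hypothetical sizes and colors:

for ( var n = 1 ; n <= 3 ; n++ ) {
  var sphere = new THREE.Mesh(
    new THREE.SphereGeometry( n, 32, 16 ),
    new THREE.MeshPhongMaterial( { color: 0x0088ff, transparent: true,
                                   opacity: 0.3, side: THREE.FrontSide } )
  );
  sphere.renderOrder = n; // inner spheres rendered first
  scene.add( sphere );
}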

Sprites are drawn after transparent objects, but if included in the same scene they will always be occluded. One solution is to create a separate scene containing only sprites and render it after the other objects.
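
A minimal sketch, assuming a second scene spriteScene holding only the sprites:

renderer.autoClear = false;

function render() {
  renderer.clear();
  renderer.render( scene, camera );        // scene with transparent objects
  renderer.clearDepth();                   // let the sprites draw on top
  renderer.render( spriteScene, camera );  // scene containing only sprites
}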


Camera Issues

Since cameras are automatically added to rendered scenes, one may not think too much about their positions relative to other objects. The positions of automatically added cameras are with respect to the origin of the rendering space. If a created scene has an explicit offset, it can be centered in the rendering space by setting its position opposite to its offset. OrbitControls will then transform the rendered scene about the origin as expected.
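
As a sketch, with offset a hypothetical Vector3 holding the displacement of the scene contents:

scene.position.set( -offset.x, -offset.y, -offset.z );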

If a camera is added to an object, its position is now with respect to that object. The default behavior of OrbitControls is then to transform the rendered scene about the origin of the object to which the camera is added. To transform about some other point, for example the explicit offset of the object, the attribute target of OrbitControls must be set to this point and the controls updated.
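
For example, with controls an OrbitControls instance and offset a hypothetical Vector3 as above:

controls.target.set( offset.x, offset.y, offset.z );
controls.update();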

Setting the target of OrbitControls does not affect the coordinates in the camera.position attribute. Transformations of the camera position with respect to a designated target must take this into account.

Reliance on OrbitControls can make one unaware of how much happens automatically behind the scenes when they are in place. When OrbitControls are not used, as for example when developing for VR, scenes can appear suddenly broken when the camera is moved. That is because the camera's default orientation looks down the negative z-axis: any movement of the camera too far off that axis will look past objects positioned at the origin. This is solved by explicitly setting the lookAt value for the camera.
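
A minimal sketch:

camera.position.set( 2, 1, 3 );
camera.lookAt( scene.position ); // orient the camera back toward the scene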


Lighting Issues

Lights can be made to track with a camera by adding them to the camera as children. For the lights to render, the camera must then be added to the scene. For scenes centered at the origin this presents no problems, but if the scene has an explicit offset the behavior of the lights is highly unexpected.

A light with a sense of direction shines from its position to a target, which by default is the origin. For a light added to a camera which is itself added to a scene with an explicit offset, the attribute target of the light should be set to the offset value. The object light.target must then itself be added to the scene in order for the illumination to render as expected.
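
As a sketch, again with offset a hypothetical Vector3 holding the scene offset:

light.target.position.copy( offset );
scene.add( light.target ); // required for the new target to take effect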


Rendering Issues

To prevent text on sprites from looking blurry on high-density screens, scale the size of the sprite canvas by the device pixel ratio and set its drawing scale accordingly. If the target is meant to be 128x32 pixels, the relevant code is

  var canvas = document.createElement( 'canvas' );
  var pixelRatio = Math.round( window.devicePixelRatio );
  canvas.width = 128 * pixelRatio;
  canvas.height = 32 * pixelRatio; // powers of two for sprites
  canvas.style.width = '128px';
  canvas.style.height = '32px';

  var context = canvas.getContext( '2d' );
  context.scale( pixelRatio, pixelRatio );

Setting the canvas style ensures consistent behavior across browsers.
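
The finished canvas can then become a sprite texture; a minimal sketch:

var texture = new THREE.CanvasTexture( canvas );
var sprite = new THREE.Sprite( new THREE.SpriteMaterial( { map: texture } ) );
sprite.scale.set( 4, 1, 1 ); // preserve the 128x32 aspect ratio; overall size is a choice
scene.add( sprite );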


A Three.js scene embedded in an iframe will grow endlessly in size when viewed in iOS 8/9 with Mobile Safari. For an iframe with id=view, the workaround is

if ( /(iPad|iPhone|iPod)/g.test( navigator.userAgent ) ) {

	view.style.width = getComputedStyle( view ).width;
	view.style.height = getComputedStyle( view ).height;
	view.setAttribute( 'scrolling', 'no' );

}

This code can follow the iframe immediately. There is no need to wait for the contents of the iframe to load in order to reset its style. All three attributes must be reset.

For a page with more than one iframe, retrieve the entire collection with

document.getElementsByTagName( 'iframe' )

and reset the attributes for all.
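
For example:

var frames = document.getElementsByTagName( 'iframe' );

for ( var i = 0 ; i < frames.length ; i++ ) {
  frames[i].style.width = getComputedStyle( frames[i] ).width;
  frames[i].style.height = getComputedStyle( frames[i] ).height;
  frames[i].setAttribute( 'scrolling', 'no' );
}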

As of iOS 10.2 this fix is not always needed, as long as the attribute scrolling="no" is set on the iframe and width and height are specified with nonrelative values. The reset was necessary because Mobile Safari previously did not preserve the attributes when loading the page. When relative values are used for width and height, for example percentages, the reset is still needed for these two values.


VR Implementation

Steps to convert a working Three.js scene to VR:

1) Remove any keyboard- or mouse-based control scripts and associated code as nonessential.

2) Load VRButton.js, most conveniently within a script tag of type module:

<script type="module">

import { VRButton } from "https://threejs.org/examples/jsm/webxr/VRButton.js"

// remaining JavaScript

</script>

Alternatively, one can save that file locally, delete the export statement and load it from an ordinary script tag. One drawback of modules is that variables declared in that type of tag are not visible in the JavaScript console, making debugging much more difficult. An ordinary tag simplifies things.

3) Reposition the scene so that its location respects the conventions of VR hardware. The default pose has the x-axis to the right and looks down the negative z-axis. For a scene originally situated at the origin, a typical adjustment is

scene.position.set( 0, 1.5, -3 );

but take into account the details of the original scene. Some rotation of the scene may also be appropriate.

4) Enable VR for the renderer and create the entry button with

renderer.xr.enabled = true;
document.body.appendChild( VRButton.createButton( renderer ) );

5) Remove any positioning of the primary camera: it should sit at the origin to match the conventions of the hardware. Many camera settings are overridden in any case by information provided by the hardware.

If the scene has not been repositioned, then it can be brought into view in the browser with

camera.position.set( 0, 1.6, 1 );

but this only applies to the view in the browser.

6) Remove requestAnimationFrame() from the render loop and invoke the rendering with

renderer.setAnimationLoop( render );
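
A minimal sketch of the resulting loop:

function render() {
  // per-frame updates go here
  renderer.render( scene, camera );
}

renderer.setAnimationLoop( render );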

And that should do it!

The current direction in which the user is looking can normally be retrieved very simply with

camera.getWorldDirection( direction );

where direction is a Vector3 object into which the coordinates are copied. When that fails to work, use

var e = camera.matrixWorld.elements;
direction.set( -e[8], -e[9], -e[10] ).normalize();

with direction as before.


Uploaded 2017.01.09 — Updated 2020.03.30 analyticphysics.com