To provide us with vision, our brain combines the two separate images from our two eyes to create the perception of depth. Stereoscopy imitates this to create an artificial sense of depth where the viewer is actually looking at a flat surface: the screen. Reproducing the differences between those two images is not as simple as it might seem, and much research has gone into how to do it effectively. From my own experience, I know that I used to get headaches from watching 3D movies a few years ago, and as the technology has developed, viewing 3D content has become easier and more natural for me. A lot of this has to do with recent advances in the stereoscopic finishing process. These advances include both new ways of aligning images, and ways to do it faster and with more precision, leading to a better final product.
The first obvious step for any stereoscopic shot is to fix any vertical offset or rotational difference between the left and right angles. These issues are caused by the two-camera rig being ever so slightly out of alignment. Vertical position and rotation are easy attributes to change in any video processing application, but what if the difference varies throughout a shot? It becomes an exercise in motion tracking across two source images. This used to be quite an undertaking to do manually, but software developed in the last few years can fix many of these issues almost automatically.
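To make the idea concrete, here is a minimal sketch of the simplest case: finding a single constant vertical offset between the two eyes by brute force. It assumes numpy and grayscale float frames, and the function name is my own; real finishing tools track many features per frame and solve for rotation as well.

```python
import numpy as np

def estimate_vertical_offset(left, right, max_offset=10):
    """Brute-force search for the vertical shift (in pixels) that best
    aligns the right-eye frame with the left-eye frame, by minimizing
    the mean absolute difference over the overlapping rows."""
    best_dy, best_err = 0, np.inf
    for dy in range(-max_offset, max_offset + 1):
        shifted = np.roll(right, dy, axis=0)
        # Compare only the rows that did not wrap around in the roll.
        lo, hi = max(dy, 0), right.shape[0] + min(dy, 0)
        err = np.abs(left[lo:hi] - shifted[lo:hi]).mean()
        if err < best_err:
            best_dy, best_err = dy, err
    return best_dy
```

Applying `np.roll(right, best_dy, axis=0)` then cancels the misalignment; when the offset drifts over the shot, the same search would have to be repeated per frame (or replaced by proper feature tracking), which is exactly why the automated tools are so welcome.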
The next step is to set the horizontal offset, or convergence. This is much more complicated than the vertical alignment, because the difference between the images that creates the depth effect also prevents the two angles from ever matching perfectly. The part of the image that should be viewed at screen depth is aligned, while anything in front of or behind the screen will appear offset, if the shot was photographed correctly. This offset is what gives the viewer the illusion of depth. By altering the convergence, one can change how “close” the image appears to the viewer, so this is a creative decision, and should take into account the depth of the other shots in the sequence. By changing the convergence over the course of a shot, the illusion of motion toward or away from the viewer can be created as well.
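The mechanics of a convergence change can be sketched in a few lines, again assuming numpy and a function name of my own invention. Given the measured disparity of the feature that should sit at screen depth, each eye is shifted halfway toward the other so that feature ends up at zero offset:

```python
import numpy as np

def set_convergence(left, right, disparity):
    """Shift the two eyes horizontally so that a feature currently
    offset by `disparity` pixels (x_right - x_left) lands at zero
    offset, i.e. at screen depth.  np.roll is used for brevity; a real
    finishing tool would crop or fill the columns the shift exposes at
    the frame edges."""
    half = disparity // 2
    return (np.roll(left, half, axis=1),
            np.roll(right, -(disparity - half), axis=1))
```

Everything that keeps a nonzero offset after this shift reads as in front of or behind the screen, and animating `disparity` over the shot is what produces the gradual push toward or away from the viewer described above.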
Another issue that may need to be dealt with, especially if the content was shot with a beam-splitter rig, is a color difference between the left and right images. This is usually handled the same way as color correction in the standard post-production process, but once again, it can be somewhat automated in newer high-end software.
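One simple way such automation can work is statistical matching: force the right eye's per-channel mean and contrast to agree with the left eye's. This numpy sketch (function name mine, and far cruder than a real grade) illustrates the idea:

```python
import numpy as np

def match_color(left, right):
    """Match the right eye's per-channel mean and standard deviation to
    the left eye's -- a crude automated stand-in for the correction a
    colorist applies when a beam-splitter rig tints one camera.
    Expects float images with channels on the last axis."""
    matched = np.empty_like(right, dtype=float)
    for c in range(right.shape[-1]):
        l, r = left[..., c], right[..., c]
        gain = l.std() / (r.std() + 1e-8)          # match contrast
        matched[..., c] = (r - r.mean()) * gain + l.mean()  # match brightness
    return matched
```

A uniform tint or exposure shift in one eye is undone exactly by this; real software goes further with histogram or lift/gamma/gain matching, but the principle is the same.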
There are a few other, more advanced processing steps that can be taken to further refine a stereoscopic image for natural and easy viewing by the audience. These include ghostbusting, which attempts to eliminate crosstalk between the left and right images in high-contrast areas, and floating windows, which make objects that cross the edges of the screen seem less jarring to the viewer. Over the past few years, I have used many of these techniques to manually process stereoscopic video, using standard 2D video tools like Adobe After Effects. We are finally beginning to see dedicated stereoscopic features being developed for these applications that help streamline that process, without requiring extremely expensive dedicated stereoscopic software toolsets.