Triangulation Geometrics

The technique for gauging depth information from two offset images is called triangulation. Triangulation makes use of a number of variables: the center points of the cameras (C1, C2), the cameras' focal length (F), the angles (O1, O2), the image planes (IP1, IP2), and the image points (P1, P2).

The following example shows how the triangulation technique works.

[Figure: triang.JPG — triangulation geometry for cameras C1 and C2 viewing a real-world point P]

For any point P of some object in the real world, P1 and P2 are the pixel representations of P in the images IP1 and IP2 taken by cameras C1 and C2. F is the focal length of the camera (the distance between lens and film). B is the offset distance between cameras C1 and C2. V1 and V2 are the horizontal positions of the pixel points with respect to the center of each image. The disparity of the points P1 and P2 between the two images can be calculated by taking the difference of V1 and V2; this is equivalent to the horizontal shift from P1 to P2 across the image planes. Using this disparity, one can calculate the actual real-world distance of the point from the cameras. The following formula can be derived from the geometric relation above:

D = B * F / d, where d = V1 - V2

Distance of point in real world = (base offset) * (focal length of camera) / (disparity)
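As an illustration, here is a minimal Python sketch of this formula. The baseline, focal length, and disparity values in the example call are assumptions chosen only to show the arithmetic; they are not values from the figure.

def depth_from_disparity(baseline, focal_length, disparity):
    """Return the real-world distance D of a point, given the camera
    baseline B, the focal length F, and the pixel disparity d = V1 - V2."""
    if disparity == 0:
        # Zero disparity means the point is effectively at infinite distance.
        raise ValueError("zero disparity: depth is undefined (point at infinity)")
    return baseline * focal_length / disparity

# Assumed example values: B = 0.1 m, F = 700 px, d = 14 px  ->  D = 5.0 m
print(depth_from_disparity(0.1, 700, 14))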

This formula allows us to calculate the real-world distance of a point. If we are interested in the relative distance of points rather than their exact distance, we need even less information. The base offset and focal length are the same for both images, so the computed distances of different points vary solely with the disparity. We can therefore gauge the relative distance of points in the images without knowing the base offset or the focal length.
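This observation can be shown with a short sketch: assuming a fixed baseline and focal length, sorting points by disparity alone orders them from nearest to farthest. The point names and disparity values below are made up for illustration.

disparities = {"P_a": 22.0, "P_b": 8.5, "P_c": 14.0}  # disparities in pixels

# Since D = B * F / d with B and F constant, a larger disparity means a
# nearer point, so sorting by disparity (descending) orders the points from
# nearest to farthest without knowing B or F.
nearest_to_farthest = sorted(disparities, key=disparities.get, reverse=True)
print(nearest_to_farthest)  # ['P_a', 'P_c', 'P_b']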

Triangulation works under the assumption that points P1 and P2 represent the same point P in the real world, so an algorithm for matching these two points must first be performed. This can be done by taking small regions in one image and comparing them to regions in the other image. Each comparison is given a score, and the best match is used to calculate the disparity. The technique for scoring the region match varies, but it is usually based on the number of pixels that agree exactly or nearly exactly. Both the triangulation technique for stereo image matching and the region-based point-matching technique are successfully implemented in the "Cooperative Algorithm for Stereo Matching and Occlusion Detection."
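The cited paper's own matching scheme is not reproduced here; the following is only a minimal Python sketch of the generic region-comparison idea described above. It assumes rectified grayscale images stored as NumPy arrays, uses a sum-of-absolute-differences score (one possible choice of similarity score), and the window size and search range are arbitrary assumptions.

import numpy as np

def match_disparity(left, right, row, col, window=5, max_disparity=64):
    """Find the disparity of pixel (row, col) in the left image by comparing
    a small window around it against horizontally shifted windows in the
    right image. Assumes (row, col) lies at least window//2 pixels from the
    image borders."""
    half = window // 2
    patch_left = left[row - half:row + half + 1, col - half:col + half + 1].astype(float)

    best_score, best_d = np.inf, 0
    for d in range(max_disparity + 1):
        c = col - d                     # candidate column in the right image
        if c - half < 0:                # stop when the window leaves the image
            break
        patch_right = right[row - half:row + half + 1, c - half:c + half + 1].astype(float)
        score = np.sum(np.abs(patch_left - patch_right))
        if score < best_score:          # lower SAD means a better region match
            best_score, best_d = score, d
    return best_d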

