Camera-Mapping

Camera-mapping, also called camera projection depending on the context, is a method of projecting images onto 3D geometry. It is an application of photogrammetry, the practice of extracting geometric information from photographic images, which dates back to the beginnings of modern photography. Photogrammetry can be described as a reconstructive method of image processing: it can be used to work out the dimensions and distances of buildings and other elements in a scene, and from them create a 3D representation. That is essentially what camera-mapping does. I mostly deal with this subject through the camera-mapping features in 3DS Max, but the basic philosophy is applicable across a range of software.

In camera-mapping, the user takes a picture and loads it into the viewport background. A few things need to be known about the image for camera-mapping to work, the most important being the focal length and, by extension, the sensor size of the camera the picture was taken with. If the exact focal length is not known, it can be estimated by eye, though this can be difficult, since in 3DS Max the camera is defined by film size and focal length. The film size defaults to 36mm, which corresponds to the width of an analog film frame and the sensor of a "full-frame" DSLR. Note that if the picture was taken with a camera that has a cropped sensor, the crop factor has to be found and the stated focal length multiplied by it (a short calculation sketch follows below). For example, if the picture was taken with a Canon EOS 60D at an 18mm focal length, the correct focal length for the virtual camera is 18mm x 1.6 crop factor, which results in 28.8mm.

Once the correct focal length has been supplied, the virtual camera should be matched as closely as possible to the position of the original camera in the scene; the image in the viewport background is a good aid here. Once the camera is in position, do not move it, because it is used to project the image onto the geometry that is built to represent the shape of the elements in the image. Moving the camera after it has been bound to project imagery will effectively ruin the composition.

With the camera in place, the user should build geometry to represent the elements of the scene. How accurately the scene is rebuilt is up to the user, but it is recommended to build geometry for objects protruding near the camera lens, because the stronger parallax near the lens quickly reveals the 2D nature of the projection and greatly reduces believability. After building the representative geometry, the objects can either be merged into one geometry object or left as they are, depending on the desired flexibility.
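To recap the focal-length correction above, here is a small Python sketch of the crop-factor calculation; the numbers are the Canon EOS 60D example from the text.

```python
# Effective focal length for the virtual 35 mm camera when the photo comes
# from a cropped-sensor body: multiply the lens focal length by the crop factor.
def effective_focal_length(lens_focal_mm: float, crop_factor: float) -> float:
    return lens_focal_mm * crop_factor

# Canon EOS 60D example from the text: 18 mm lens, 1.6x crop factor.
print(effective_focal_length(18.0, 1.6))  # -> 28.8 (mm)
```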


After the geometry has been built, go to the modifier stack and apply the Camera Map (WSM) modifier; WSM stands for World-Space Modifier. The modifier has an option to select the projection camera, where you should choose the camera that was set up to represent the real-world camera. Any texture applied to the geometry will now be projected through that camera. Load your image into a material, apply the material to the geometry and enable the option "Show Standard Map in Viewport". The image should now appear projected onto the geometry. If the geometry is faulty or does not accurately represent the image, errors will show up. If the geometry objects were merged into one, this is a bit more difficult to fix: you will need to work with the Editable Poly tools, moving individual vertices, edges or faces, or detaching objects, making the required changes and then merging everything back together. The advantage of the single-object method is that only one material needs to be applied to the geometry.
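To make clearer what the projection step actually does, here is a minimal numpy sketch, deliberately independent of 3DS Max's API, of the pinhole projection that a camera-map modifier performs conceptually: each vertex is projected through the camera onto the film plane, and its position on the film becomes its texture coordinate. The camera placement, the 36mm film back and the 16:9 aspect ratio are assumptions for illustration only.

```python
import numpy as np

FOCAL_MM = 28.8        # effective focal length from the crop-factor example
FILM_WIDTH_MM = 36.0   # full-frame film back, 3DS Max's default
ASPECT = 16.0 / 9.0    # assumed image width / height

def project_to_uv(vertex_cam: np.ndarray) -> np.ndarray:
    """Return (u, v) in roughly [0, 1] for a point given in camera space.

    Assumes the camera sits at the origin looking down -Z; (0.5, 0.5) is the
    image centre. This is the mapping the projected texture coordinates follow.
    """
    x, y, z = vertex_cam
    if z >= 0:
        raise ValueError("Point is behind the camera (camera looks down -Z).")
    # Perspective divide onto the film plane, then normalise by the film size.
    u = 0.5 + (FOCAL_MM * x / -z) / FILM_WIDTH_MM
    v = 0.5 + (FOCAL_MM * y / -z) / (FILM_WIDTH_MM / ASPECT)
    return np.array([u, v])

# A vertex two units in front of the camera, slightly right and up:
print(project_to_uv(np.array([0.2, 0.1, -2.0])))
```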


If the geometry objects are instead left as separate entities, the image-bearing material needs to be applied to each of them individually. Later on, however, if changes need to be made, the geometry can easily be moved and scaled without the hassle of entering polygonal editing modes or breaking the geometry apart and merging it back together again.
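Since assigning the same material to many separate pieces by hand gets tedious, a small scripted pass can help. The following is only a hedged sketch using 3DS Max's pymxs Python bridge; the image path is a placeholder, and the class and function names (StandardMaterial, BitmapTexture, showTextureMap) should be verified against your version of Max.

```python
import pymxs
rt = pymxs.runtime

# Build one image-bearing material from the source photograph (placeholder path).
plate = rt.BitmapTexture(filename=r"C:\projects\plate.jpg")
mat = rt.StandardMaterial(diffuseMap=plate)
rt.showTextureMap(mat, True)   # same effect as ticking "Show Standard Map in Viewport"

# Assign it to every currently selected projection object in one go.
for node in rt.selection:
    node.material = mat
```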


If there are other errors not caused by faulty geometry, it may be that the geometry does not have enough resolution for a successful projection. This can be fixed by applying a Subdivide or Tessellate modifier on top of the modifier stack. When using Tessellate, lower the Tension parameter to zero so that the shape of the geometry remains unchanged.
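If you prefer to script that step as well, here is a hedged pymxs sketch of adding a Tessellate modifier with zero tension on top of the stack; the modifier and property names (Tessellate, tension, iterations) are my assumption of the MAXScript equivalents and should be checked in the listener before relying on them.

```python
import pymxs
rt = pymxs.runtime

for node in rt.selection:        # the projection geometry
    tess = rt.Tessellate()       # tessellation modifier (assumed class name)
    tess.tension = 0.0           # zero tension leaves the surface shape unchanged
    tess.iterations = 2          # adds resolution; raise if the projection still smears
    rt.addModifier(node, tess)   # lands on top of the modifier stack
```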

Here is an example of a short camera-mapping sequence. I added some barrels on top of the chair on the right side of the image, since the parallax would otherwise have quickly revealed that it is actually a 2D surface, and some light rays for visual effect.

[Image: small1.jpg, still from the camera-mapping example]
