26

When I was learning to program simple 2D games, each object had a sprite sheet with little pictures showing how the player would look in every frame of an animation. 3D models don't seem to work this way, or we would need one image for every possible view of the object!

For example, a rotating cube would need a lot of images depicting how it looks from every single side. So my question is: how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?

SuperBiasedMan
reddead
  • Go to Google and type in: sprite sheet 3d model. The first hits are from the Unity forums... – Sorceri Nov 23 '15 at 20:55
  • 1
    @Sorceri I doubt a 3D spritesheet is the answer since each object needs different light and shadow etc. – reddead Nov 23 '15 at 20:57
  • there is something called WebGL https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API :: http://caniuse.com/#feat=webgl :: http://learningwebgl.com/ – Mi-Creativity Nov 23 '15 at 20:57
  • 2
    This belongs on [gamedev.se]. It also has nothing to do with [tag:unity3d], and possibly nothing to do with [tag:sprite-sheet]. I think this question should be migrated, though it probably already has an answer on [gamedev.se]. – Lysol Nov 23 '15 at 21:04
  • @AidanMueller 3D models and Spritesheets are elements used to ask the question. They have nothing to do with it? – reddead Nov 23 '15 at 21:14
  • @reddead Maybe spritesheets do, but Unity3D is a game engine. It is not the only 3D game engine out there, so using Unity3D as a tag is like referring to all cars as "Ford". Please use a more general 3D graphics tag. – Lysol Nov 23 '15 at 21:29
  • 7
    This question is currently [being discussed on meta](http://meta.stackoverflow.com/questions/311062/very-broad-question-very-nice-answer). – Shog9 Nov 25 '15 at 02:40

2 Answers

86

Multiple methods

There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image; these points make shapes, and they can also define curves. This lets you make images without pixels, which can mean smaller file sizes as well as the ability to transform the image (scale and rotate) without distortion. Most 3D graphics use a similar technique, except in 3D. What all of these methods have in common, however, is that they ultimately render the data to a 2D grid of pixels.
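To see why point-based (vector) storage transforms without distortion, here is a minimal sketch. The function names are my own for illustration; this is not any particular library's API — just arithmetic on stored coordinates:

```python
import math

def rotate(points, angle):
    """Rotate 2D points around the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def scale(points, factor):
    """Scale 2D points away from the origin by `factor`."""
    return [(x * factor, y * factor) for x, y in points]

# A shape stored as points can be made any size with no pixel artifacts:
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
big = scale(triangle, 10.0)   # coordinates stay exact at any size
```

A bitmap scaled 10x would have to invent pixels; the point list just recomputes coordinates. 3D formats do the same thing with an extra coordinate.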

Projection

The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).

Picture demonstrating projection

From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, a downside is that you can't really have reflections and refractions. Any time you see a reflective or refractive surface in a game, it is trickery that makes the material merely look reflective or refractive. The same goes for lighting and shading.
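The projection step itself boils down to dividing by depth. This is a bare-bones sketch, not a full camera model — the `focal`, `screen_w`, and `screen_h` parameters are simplifications I've chosen for illustration:

```python
def project(point, focal=1.0, screen_w=640, screen_h=480):
    """Map a 3D point (x, y, z) to 2D screen coordinates by perspective divide."""
    x, y, z = point
    sx = (x * focal / z) * (screen_h / 2) + screen_w / 2   # divide by depth, center on screen
    sy = (-y * focal / z) * (screen_h / 2) + screen_h / 2  # flip y: screen y grows downward
    return (sx, sy)

# Two vertices at the same (x, y) but different depths land at different
# screen positions -- the farther point moves toward the screen center.
near = project((1.0, 1.0, 2.0))
far = project((1.0, 1.0, 8.0))
```

Real engines do this with 4x4 matrices so that camera position, rotation, and projection compose into one multiply per vertex, but the perspective divide is the core idea.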

Here is an example of smooth shading being used to make a faceted sphere approximation look round. Notice that you can still see straight lines around the silhouette of the smoothed version:

Picture of smooth shading

Ray tracing

You can also render polygons using ray tracing. With this method, you basically trace the paths that light takes to reach the camera. This allows realistic reflections and refractions. However, I won't go into detail, since it is currently too slow to realistically use in games. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). It does, however, produce ultra-realistic images:

Picture of a ray traced scene
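The core operation in a ray tracer is intersecting a ray with geometry. As a sketch (not a full renderer), here is the classic ray-sphere test: solve the quadratic |o + t·d − c|² = r² for the distance t along the ray:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit along the ray, or None for a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearer of the two roots
    return t if t > 0.0 else None

# A ray from the origin along +z hits a unit sphere centered at z=5:
hit = ray_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
```

A full ray tracer runs a test like this per pixel against every object, then recursively spawns reflection, refraction, and shadow rays from each hit point — which is exactly why it gets expensive.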

Ray casting

Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace light paths, so surfaces are flat and non-reflective, and the lighting is not realistic. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method used for early games such as Doom and Wolfenstein 3D: ray casting drew the maps, while characters and other items were rendered as 2D sprites that always faced the camera, drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:

Picture of Wolfenstein3D (Javascript clone)
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
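A minimal Wolfenstein-style ray cast can be sketched like this: march one ray per screen column through a 2D tile map until it hits a wall cell, then draw a wall column whose height is inversely proportional to the distance. This toy version uses fixed-size steps for clarity; real engines use an exact grid-stepping (DDA) algorithm instead:

```python
import math

MAP = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March from (px, py) along `angle` until a '#' cell is hit; return the distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

SCREEN_H = 200
d = cast_ray(2.0, 2.0, 0.0)                    # looking along +x from inside the room
column_height = min(SCREEN_H, int(SCREEN_H / d))  # closer walls draw taller columns
```

One ray per screen column (e.g. 320 rays for a 320-pixel-wide screen) is far cheaper than one per pixel, which is how these games ran on early-90s hardware.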

Storing the data

3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.

Polygons

This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.

The data can be stored in a number of file formats. A common one is COLLADA, an XML file that stores the 3D data. There are a lot of other formats, but fundamentally they all store the same kind of 3D data.

Here is an example of a polygon model:

Picture of a polygon model
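Whatever the container, most polygon formats boil down to a list of vertices plus faces that index into that list. As an illustration, here is a parser for a tiny subset of the text-based Wavefront OBJ format (`v` vertex lines and `f` face lines only — real OBJ files carry more, such as `vt` texture coordinates and `vn` normals):

```python
def parse_obj(text):
    """Parse 'v x y z' and 'f i j k ...' lines from OBJ-style text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # vertex position
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":      # face: 1-based vertex indices, possibly "i/uv/n"
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

QUAD = """\
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3 4
"""
verts, faces = parse_obj(QUAD)   # four vertices and one quad face
```

The engine loads something like this into vertex and index buffers, and the projection/rasterization steps described above do the rest every frame.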

Voxels

This method is pretty simple. You can think of voxel models as bitmaps layered together to make a 3D bitmap: a 3D grid of pixels. One way of rendering voxels is to convert the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however; like pixels, they are only points that may have color data, which can be interpreted in different ways. I won't go into much detail since this isn't too common, and you generally render voxels with polygon methods anyway (as when you render them as cubes). Here is an example of a voxel model:

Voxel data
Image by Wikipedia user Vossman
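When voxels are converted to cubes for polygon rendering, a common optimization is to emit a cube face only where a filled voxel borders an empty cell, skipping the hidden faces between two solid voxels. A sketch of that face-culling step (counting faces rather than emitting geometry, to keep it short):

```python
# The six axis-aligned neighbor offsets of a voxel cell.
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def visible_faces(voxels):
    """voxels: a set of (x, y, z) filled cells. Count faces exposed to empty space."""
    count = 0
    for (x, y, z) in voxels:
        for dx, dy, dz in NEIGHBORS:
            if (x + dx, y + dy, z + dz) not in voxels:
                count += 1   # this face borders empty space, so it must be drawn
    return count

single = visible_faces({(0, 0, 0)})             # a lone cube exposes all 6 faces
pair = visible_faces({(0, 0, 0), (1, 0, 0)})    # two touching cubes hide 2 shared faces
```

For large voxel worlds this kind of culling (and merging coplanar faces) is what keeps the triangle count manageable.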

Lysol
  • @Ike Thank you very much! Now I'll just be honest. I'm not entirely sure what you are talking about. Are you talking about the fragment shading process (a bit OpenGL specific but you know what I mean)? – Lysol Nov 25 '15 at 05:09
  • @AidanMueller great answer. Mainly I was looking at the type of "file" used to save the models for each object (I assumed it was some sort of XML coordinate system, as you pointed out) which are SVG-like, great analogy. – reddead Nov 30 '15 at 16:56
  • @reddead redemption...(see [what I did there](https://en.wikipedia.org/wiki/Red_Dead_Redemption) :P). Anyways, there are tons of formats that are used, and there are different techniques (like voxel data). But in general, meshes are the most common. There is no real common file format though, since many games will even use custom formats. But in general they basically just store vertices. – Lysol Dec 01 '15 at 08:04
  • @AidanMueller to be complete your answer is missing analytical representation (model is set of equations instead of polygons) which is sometimes used in raytracing and or physics simulations ... – Spektre Nov 19 '17 at 11:13
1

In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.

In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.
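"A texture mapped to it" means each vertex carries a (u, v) coordinate in [0, 1] that addresses a position in the texture image. As a sketch, here is a nearest-neighbor texture lookup — real renderers interpolate the UVs across each triangle and usually filter between texels, but the idea is the same:

```python
def sample(texture, u, v):
    """texture: a 2D list of pixel values; u, v in [0, 1]. Nearest-neighbor lookup."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)   # clamp so u == 1.0 stays in bounds
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 checkerboard texture:
checker = [
    ["black", "white"],
    ["white", "black"],
]
top_left = sample(checker, 0.0, 0.0)
bottom_right = sample(checker, 0.99, 0.99)
```

Because the lookup happens per pixel at render time, one small texture covers the model from every viewing angle — which is exactly why no per-view images are needed.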

TheGrandPackard
  • `.obj` is supported as well – Mi-Creativity Nov 23 '15 at 21:02
  • @thegrandpackard any idea what these files are composed of? Coordinates, binary data? How does the engine make use of such a file? – reddead Nov 23 '15 at 21:18
  • The .3DS format is a binary format that requires you to parse it as a blob of data. Very involved and not human readable at all; here's an example of a parser (not sure if it works or not): http://www.gamedev.net/topic/313126-3ds-parsing-tutorial/ The .obj format is text-based and human readable. Here's a tutorial with some example code (not sure if it works either): http://rodrigo-silveira.com/opengl-tutorial-parsing-obj-file-blender/ – TheGrandPackard Nov 24 '15 at 00:39