CGI movies are pre-rendered, meaning that all the "stuff" in the film that requires considerable resources to create was generated ahead of time, long before anyone presses play.
Games, on the other hand, are interactive. The next decision you make dictates what the game shows on screen next. That's why games need to render every pixel on the spot.
In many cases, cut scenes in games do look comparable to film footage. However, I'm assuming you're not asking about cut scenes, cinematics, or other pre-rendered elements within a game. So, I'll answer assuming you're asking specifically about in-game rendering.
Consoles Are Designed to Game
Consoles have perhaps a tenth (or less) of the power of a rig needed to render the images seen in films. CGI FX studios render their imagery on farms of servers that take hours upon hours to produce just a few minutes of footage. Compare that with a PS4 or Xbox One X, which renders its images in real time.
For a video game console to render imagery in real time, sacrifices must be made: the console's memory is finite and small, and its CPU and GPU (graphics processing unit) are mid-range components. Factor the console's price tag ($300–500) into this equation as well. At that price point, a console can only load models and textures that fit within these mid-level constraints. And to drive a 1080p display at 60fps, it must generate a complete new frame roughly every 16.7 milliseconds.
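To make that frame budget concrete, here's a quick back-of-envelope calculation. The resolution and frame rate come straight from the 1080p/60fps target above; everything else is simple arithmetic.

```python
# Rough frame-budget arithmetic for real-time rendering at 1080p/60fps.
width, height = 1920, 1080        # 1080p resolution
fps = 60                          # target frame rate

pixels_per_frame = width * height
frame_budget_ms = 1000 / fps      # time allowed to produce one frame
pixels_per_second = pixels_per_frame * fps

print(f"Pixels per frame:  {pixels_per_frame:,}")      # 2,073,600
print(f"Frame budget:      {frame_budget_ms:.2f} ms")  # 16.67 ms
print(f"Pixel throughput:  {pixels_per_second:,}/s")   # 124,416,000/s
```

Over two million pixels, shaded and lit, every 16.7 milliseconds. That is the deadline every trick below exists to meet.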
To hit that rendering speed, many different tricks are employed: level of detail (LOD) swapping, limited rendering distance, shadow mapping, approximated lighting, low-polygon models, and on and on. All of these tricks reduce the time it takes to render a single frame, and all of them trade away image quality to buy that speed.
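The LOD trick in particular is easy to sketch. The idea is just a distance check: the farther an object is from the camera, the cheaper the mesh the engine draws for it, and past the render distance it isn't drawn at all. The thresholds, mesh names, and polygon counts below are made up for illustration, not taken from any real engine.

```python
# Hypothetical level-of-detail (LOD) table: distance threshold (meters),
# mesh name, and polygon count. All values are illustrative assumptions.
LOD_LEVELS = [
    (10.0,  "lod0_high",   50_000),   # closer than 10 m: full-detail mesh
    (50.0,  "lod1_medium", 10_000),   # 10-50 m: reduced mesh
    (200.0, "lod2_low",     1_000),   # 50-200 m: coarse mesh
]

def select_lod(distance_m):
    """Return (mesh_name, polygon_count) for a given camera distance."""
    for max_dist, mesh, polys in LOD_LEVELS:
        if distance_m < max_dist:
            return mesh, polys
    return None, 0  # beyond render distance: cull the object entirely

print(select_lod(5.0))    # ('lod0_high', 50000)
print(select_lod(120.0))  # ('lod2_low', 1000)
print(select_lod(500.0))  # (None, 0) -- not drawn at all
```

The abrupt switch between these levels is exactly the "LOD pop-up" complained about later in this answer.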
A video game console’s first priority is rendering speed, followed by realism.
Let's compare this console trickery to the rendering used in films. Films employ far more sophisticated rendering engines and techniques, which cut far fewer corners. Rendering a single frame of a film can take minutes. Sure, rendering speed matters in film too, to get the shots completed as quickly as possible, but film producers gauge speed in hours, not milliseconds.
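The gap between those two budgets is worth putting in numbers. Assuming (purely for illustration) ten minutes per film frame, a standard 24fps frame rate, and a two-hour runtime:

```python
# Back-of-envelope comparison of rendering budgets. The 10-minutes-per-frame
# figure is an illustrative assumption, not a measured studio number.
game_frame_s = 1 / 60                 # ~16.7 ms per frame, real time
film_frame_s = 10 * 60                # assume 10 minutes per film frame

ratio = film_frame_s / game_frame_s
print(f"One film frame takes ~{ratio:,.0f}x longer than one game frame")

frames = 2 * 60 * 60 * 24             # 2-hour film at 24 fps
render_hours = frames * film_frame_s / 3600
print(f"{frames:,} frames -> ~{render_hours:,.0f} machine-hours, "
      f"before spreading the work across a render farm")
```

Tens of thousands of machine-hours per film is exactly why studios use server farms, and why a $300–500 console playing the same scene live has to cheat.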
Rendering for a film is nowhere near real time, because a film's first priority is realism, followed by rendering speed.
Cut Scenes in Games
Keep in mind that cut scenes in video games use these same film-style pre-rendering techniques to improve realism. In fact, game studios likely use the same rendering engines that Hollywood uses to produce its blockbusters. Cut scenes are short vignettes placed between levels that advance the story before leading into the next gameplay segment. They are also the footage typically used to produce trailers, since they make the game look like a professionally produced film.
Games, Realism and the Future
In recent years, a number of video game rendering engines have emerged that improve realism in games. Some of these hyper-realistic engines include Crytek's CryEngine, Ubisoft's AnvilNext engine (Assassin's Creed), Sledgehammer's engine (Call of Duty: Advanced Warfare), and Guerrilla's Decima engine (Killzone Shadow Fall). However, even with the current generation of consoles, realism at the level shown in most movies (e.g., Avatar) isn't possible yet on $300–500 hardware… not even with these hyper-realistic engines. The imagery they can produce is amazing for real-time rendering, but no matter how great the environments may look, they still appear very video-gamey in the end. That's partly because of the engine and partly because of the GPU's capabilities.
However, the gap between what Hollywood is producing in films and what video game consoles are producing on-screen is closing. In time, video game consoles will be able to approximate the realism we see on movie screens very closely. At that point, both films and video games may utilize the same rendering engines. It's likely that video games and films will merge in some way when this happens. However, that point is likely about 15–20 years off, unless some major breakthrough occurs in rendering hardware and software.
Until then, console games will continue to look video-gamey. Personally, I prefer slightly less realism in both characters and environments to ensure the game performance doesn’t suffer. Nothing is worse than screen tearing, huge frame rate drops, object pop-in, scenery pop-up, LOD pop-up or visible NPC / object discarding.
Can this problem be solved today? Sure: buy a PC. Realism does improve with an expensive NVIDIA or ATI card, but that also means investing in an expensive PC rig. Even then, realism will improve by at most 20%, and often much less. Ultimately, you're paying a lot to improve only a little. These limitations live primarily in the rendering engines themselves, not the rig. While the added realism is nice to look at, it doesn't greatly improve the gameplay experience.
Tricked by Marketing
One thing that annoys me about this whole realism situation is the marketing teams who sell video games to the public. When trailers are released for video games now, instead of showing actual gameplay, they show pre-rendered cut scenes. This is both deceptive and annoying. I don't want to see cut scenes. I want to see actual gameplay: the game mechanics, the status bars, the screen motion, the camera placement, and how the weapons work. None of this is shown in a cut scene. Yes, the story is important, but the gameplay is more important. That's why you buy a video game.
This is only likely to become more of a problem as GPU rendering technologies improve and close the gap between film FX rendering and video game rendering.