How 3D Game Rendering Works: Texturing and Texture Filtering


In our first blog post about rendering in 3D games, we delve into what happens to the 3D world after vertex processing and scene rasterization are complete. Texturing stands out as one of the most critical stages of rendering, even though all it really does is compute and modify the colors of a two-dimensional grid of colored blocks.

Most visual effects in modern games boil down to clever use of textures — without them, games would appear dull and lifeless. So, let’s explore how all of this works!

Dive into the Visual Symphony: Textures Unleashed

Starting Off Simple.
Take a closer look at the 3D gaming masterpieces of the past year. Each one makes extensive use of texture maps, also known simply as textures.
This concept has become so widespread that it typically conjures up an image of a flat, unassuming square or rectangle, imbued with a texture that mimics real-world surfaces like grass, stone, metal, fabric, or even a human face.

Layered Complexity.
Yet, when these seemingly simple images are strategically layered and intricately combined in a 3D scene, they morph into something spectacularly realistic.
Curious about how significant this transformation is? Consider what happens when we strip away all textures and observe the stark, bare bones of 3D objects in their most primitive form.

The Building Blocks.
Reflecting on insights from our earlier explorations, we’re reminded that the foundation of the 3D world is composed of vertices.
These basic geometric shapes undergo movement and coloring, serving as the building blocks for primitives.
These primitives are eventually woven together into a two-dimensional pixel tapestry. In a world devoid of textures, our mission shifts to manually imbuing these pixels with life.

Shading Techniques.
Enter flat shading, a method where the color derived from the first vertex of a primitive is uniformly applied across all pixels it encompasses.
The result? A teapot that, while clearly defined, strikes the viewer as unmistakably artificial, largely due to abrupt, unrealistic color transitions.

Enter Gouraud Shading.
As a potential remedy, we introduce Gouraud shading to the scene. This technique employs linear interpolation to blend the colors of vertices across the surface of a triangle.
Imagine a scenario where the vertex at one end of a primitive's edge holds a red value of 0.2 and the vertex at the other end holds 0.8. The midpoint settles into the average of 0.5 red.
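
As a quick sketch of that interpolation (the function names are our own, not from any particular engine):

```python
def lerp(a, b, t):
    """Linear interpolation: t = 0 returns a, t = 1 returns b."""
    return a + (b - a) * t

# The red channel example from above: 0.2 on one side, 0.8 on the other.
print(lerp(0.2, 0.8, 0.5))  # midpoint -> 0.5

# Across a whole triangle, the same idea uses three barycentric weights
# (w0 + w1 + w2 == 1) to blend the three vertex colors for each pixel.
def gouraud_color(c0, c1, c2, w0, w1, w2):
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))
```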

This process, praised for its simplicity and efficiency, became a staple in earlier generations of 3D games. These games, constrained by the computational limits of their era, found a reliable ally in Gouraud shading.

Barret and Cloud in all the glory of Gouraud shading

Challenges Remain.
However, this approach isn’t without flaws. Because lighting is computed only at the vertices, a beam of light striking the heart of a triangle goes unnoticed by its corners, and any specular highlight that should appear there is simply lost.

Beyond Flat and Gouraud.
Despite flat and Gouraud shadings carving out their niche within the digital artist’s palette, the examples spotlight their limitations, hinting at the transformative power of textures.
To fully grasp the depth of this impact, we embark on a journey back in time — all the way to the pivotal year of 1996.

Vertex lighting and simple textures in Quake 1996

A Brief Journey Through Game and GPU Evolution

The Milestone of Quake.
In 1996, id Software released Quake, marking a significant milestone. While not the first game to use 3D polygons and textures to render environments, it was among the first to do so effectively.

Quake’s Contribution.
But it did something more: it showcased the potential of OpenGL (then in its infancy) and gave a significant boost to the first generation of graphics cards, such as the Rendition Vérité and the 3Dfx Voodoo.

Pure 1996, Pure Quake.
By today’s standards, the Voodoo was incredibly basic: it lacked 2D graphics support and vertex processing, focusing solely on pixel processing.

The Beauty of Voodoo.
With a TMU (Texture Mapping Unit) for fetching pixels from textures and an FBI (Frame Buffer Interface) for blending them with raster pixels, Voodoo could perform a few additional processes, like fog implementation and transparency effects. Yet, these features nearly summed up its capabilities.

Under the Hood.
Delving into the architecture underlying these graphic cards reveals the intricacy of these processes.

3Dfx Specification Insights.
The FBI chip blends two color values, one of which may come from a texture. The blending math is straightforward but varies slightly depending on the blend mode and the API executing the instructions.

Direct3D’s Blending Functions.
Direct3D offers blending functions and operations, where each pixel first gets multiplied by a value between 0.0 and 1.0. This determines the pixel color’s impact on the final outcome. The altered colors are then added, subtracted, or multiplied; some functions even select the brightest pixel through logical operations.

Practical Demonstration.
The image above illustrates how this works in practice; note how the left pixel’s alpha value, indicating transparency, is used as a coefficient.
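
As a rough sketch of that idea, here is the classic source-alpha blend in Python (the mode Direct3D exposes through its SRCALPHA and INVSRCALPHA blend factors):

```python
def blend_src_alpha(src_rgba, dst_rgb):
    """out = src * alpha + dst * (1 - alpha): the source pixel's alpha
    decides how much it contributes to the final color."""
    r, g, b, a = src_rgba
    return tuple(s * a + d * (1.0 - a) for s, d in zip((r, g, b), dst_rgb))

# A half-transparent red texel over a blue frame pixel:
print(blend_src_alpha((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0)))  # (0.5, 0.0, 0.5)
```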

Further Processes.
Subsequent stages apply fog values (taken from a developer-created table) using similar blending calculations, perform visibility and transparency checks and adjustments, and finally store the pixel color in the graphics card’s memory.

Why This Historical Dive?
Despite its simplicity compared to today’s behemoths, this process lays down the fundamentals of texturing: blending color values to make models and environments look as they should in any given context.

Modern Games: Evolution of Complexity.
Today’s games employ the same principles, with the main difference being the number of textures and the complexity of blending calculations. Together, they simulate visual effects seen in movies or the interaction of light with various materials and surfaces.

Texturing Fundamentals Unwrapped

Textures: The 2D Illusion of Depth.
For us, a texture is a flat 2D image laid over polygons composing the 3D structures within a frame. Yet, to a computer, it’s merely a small memory block, structured as a 2D array. Each array element signifies the color value of a texture’s pixel (often referred to as texels — texture pixels).

The Coordinates Connection.
Every polygon vertex is tagged with a pair of coordinates (u, v), instructing the computer on the corresponding texture pixel. Meanwhile, the vertex itself is defined by three coordinates (x, y, z), and the act of binding texels to vertices is known as texture mapping.
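
To a renderer, then, a texture lookup is just indexing that 2D array. A minimal sketch with our own function name (this is also the “nearest point sampling” we will meet again in the filtering section):

```python
def sample_nearest(texture, u, v):
    """Fetch the texel nearest to normalized coordinates (u, v) in [0, 1].
    `texture` is a 2D array of color values, the memory layout described above."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)    # clamp so u = 1.0 stays in range
    y = min(int(v * height), height - 1)
    return texture[y][x]

checker = [[0, 1], [1, 0]]                  # a tiny 2 x 2 "texture"
print(sample_nearest(checker, 0.75, 0.25))  # upper-right quadrant -> 1
```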

Visualizing the Process.
To see this in action, let’s revisit a tool we’ve used several times in this series: Real Time Rendering WebGL. Temporarily sidelining the z-coordinate of the vertices, we simplify our view to a flat plane.

Texture Coordination.
From left to right: u, v texture coordinates directly linked to the x, y coordinates of corner vertices. In the second image, the upper vertices’ y-coordinates are increased, stretching the texture vertically since it remains attached. The right image shows texture alteration: increased u values lead to the texture being compressed and then repeated.

Why Repetition Occurs.
Texture coordinates span the range 0 to 1 across the image. Once a vertex’s u value climbs above 1, the primitive covers more than one full width of the texture, so the sampler wraps back to the start and the image tiles. This wrapping enables the texture repetition effect often seen in 3D games, such as stone or grassy landscapes and brick walls.
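
A minimal sketch of that wrap-around addressing, which OpenGL calls GL_REPEAT:

```python
def wrap(coord):
    """GL_REPEAT-style addressing: keep only the fractional part, so
    u = 1.25 samples the same texel column as u = 0.25."""
    return coord % 1.0

print(wrap(2.75))  # 0.75 -- the texture tiles every whole unit of u or v
```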

Expanding the Scene.
Let’s add more primitives and reintroduce scene depth. Below, a classic landscape view is shown, but now the crate’s texture is copied and repeated across all primitives.

Texture Dimension Dynamics.
The original crate texture, a 66 KB GIF at a resolution of 256 x 256, should in theory produce only about 20 crate textures across a 1900 x 680 section of the frame. Yet we can see far more than twenty crates, which means the distant crate textures are rendered far smaller than 256 x 256 pixels. This shrinking is the result of a process called texture minification.

Zooming In.
Remember, the texture is just 256 x 256 pixels, but here it appears larger than half of a 1900 pixel wide image. This enlargement is known as texture magnification.

The Ever-Present Processes.
These two texturing processes are omnipresent in 3D games as camera movements make models appear closer or further away, requiring textures on primitives to scale accordingly. Mathematically, this is a minor issue; even the simplest integrated graphics chips manage it with ease. However, texture minification and magnification introduce new challenges that need addressing.

Introducing Mini-Copies of Textures for Distance Handling

The first challenge to tackle with textures is distance. Referring back to the initial landscape of crates, those at the horizon shrink to just a few pixels. Trying to squeeze a 256 x 256 image into such a tiny space is wasteful, for two main reasons.

Firstly, a smaller texture consumes less graphics memory and fits more easily into the cache. That makes it less likely to be evicted, so repeated use of the texture keeps its data in closer, faster memory and boosts performance. The second reason ties back to the same issue affecting textures near the camera, which we’ll revisit shortly.

The go-to solution for compressing large textures into small primitives is the use of mipmaps. These are downsized versions of the original texture; they can be generated by the engine or pre-created by designers. Each subsequent mipmap level halves the dimensions compared to the previous one.

For a crate texture, the dimensions would follow: 256 x 256 → 128 x 128 → 64 x 64 → 32 x 32 → 16 x 16 → 8 x 8 → 4 x 4 → 2 x 2 → 1 x 1.

All mipmaps are packaged together, so the texture keeps its file name but grows in overall size. The u, v coordinates now determine not only which texel overlays a frame pixel but also which mipmap level it is read from. Programmers write renderers that choose the mipmap based on the frame pixel’s depth value: a high value means the pixel is far away, so a smaller mipmap is used.
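
Below is a hedged sketch of that selection step. Real renderers typically derive the texel-to-pixel footprint from the screen-space derivatives of u and v; the ready-made argument here is a stand-in for that calculation:

```python
import math

def mip_level_from_footprint(texels_per_pixel, num_levels):
    """Pick a mipmap level from how many texels one frame pixel covers:
    each step up the chain halves the texture in both dimensions."""
    level = math.log2(max(texels_per_pixel, 1.0))
    return min(int(level), num_levels - 1)

# A distant crate where one pixel spans ~8 texels of the 256 x 256 texture:
print(mip_level_from_footprint(8.0, 9))  # level 3 -> the 32 x 32 mipmap
```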

Astute readers might note a downside of mipmaps: they come at the cost of a larger texture. With its mipmaps packed alongside it, the original 256 x 256 crate texture grows to 384 x 256. Even with some empty space, packaging the smaller copies inevitably adds at least 50% to one dimension.

However, this only applies to pre-created mipmaps; if the game engine generates them itself, the memory increase is no more than 33% of the original texture size. Thus, a modest increase in memory yields gains in both performance and visual quality.
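
That 33% figure follows from a geometric series: each mipmap level holds a quarter as many texels as the one before it. A two-line check in Python:

```python
# Extra texels relative to the base level: 1/4 + 1/16 + 1/64 + ... -> 1/3
overhead = sum(0.25 ** n for n in range(1, 20))
print(round(overhead, 4))  # 0.3333, i.e. the ~33% quoted above
```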

The comparison below shows images with mipmaps turned off/on:

On the left, using textures “as is” leads to graininess and moiré patterns in the distance. On the right, mipmaps smooth transitions, blurring distant crate textures into a uniform color.

Yet, who would want blurred textures to mar the backgrounds of their favorite games?

Demystifying Texture Filtering: Bilinear, Trilinear, Anisotropic

Texture Filtering Explained.
Selecting a texture pixel for overlay onto a frame pixel is known as texture sampling. In an ideal world, there’d be a perfect one-to-one match between a texture and its primitive, regardless of size, position, or orientation. In reality, however, we need to consider several factors:

  1. Has the texture been scaled up or down?
  2. Is it the original texture or a mipmap?
  3. At what angle is the texture viewed?

Scaling Challenges.
Firstly, scaling is straightforward to picture: when a texture is enlarged, a single texel has to cover many frame pixels; when it is shrunk, many texels compete for a single pixel. Both scenarios present a dilemma.

Mipmaps to the Rescue.
Mipmaps, the second factor, solve the distance sampling issue, leaving us with the challenge of angled texture display. Since textures are designed for a “straight-on” view, displaying them at an angle without distortion requires texture filtering.

The Chaos Without Filtering.
Without texture filtering, the result is a visual mess, as demonstrated by replacing a crate’s texture with an ‘R’. This illustrates the havoc unfiltered textures wreak on image clarity!

Filtering Types Across APIs.
Graphics APIs like Direct3D, OpenGL, and Vulkan offer a standardized set of filtering techniques, albeit under different names. They boil down to:

  • Nearest Point Sampling
  • Bilinear Texture Filtering
  • Anisotropic Texture Filtering

Nearest Point Sampling.
This method isn’t technically filtering at all: it simply selects the texel nearest to the sampling point, and that color is then blended with the pixel’s original color.

Enter Bilinear Filtering.
Bilinear filtering samples the four texels surrounding the nearest point (left, right, above, below) and blends them according to proximity. Vulkan’s specification expresses this with explicit weights, where α and β are the fractional distances from the sample point to the texel centers:

Tf = (1 − α)(1 − β)T1 + α(1 − β)T2 + (1 − α)βT3 + αβT4
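
In code, the formula looks like this. A single-channel sketch of our own (hardware samplers do the same per color channel):

```python
import math

def sample_bilinear(texture, u, v):
    """Blend the four texels around (u, v) using the weights above;
    alpha and beta are the fractional offsets from the texel centers.
    `texture` is a 2D array of single-channel values for simplicity."""
    height, width = len(texture), len(texture[0])
    # Offset by half a texel so we interpolate between texel centers.
    x, y = u * width - 0.5, v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    alpha, beta = x - x0, y - y0

    def texel(tx, ty):
        # Clamp reads to the edges of the texture.
        return texture[max(0, min(ty, height - 1))][max(0, min(tx, width - 1))]

    t1, t2 = texel(x0, y0), texel(x0 + 1, y0)
    t3, t4 = texel(x0, y0 + 1), texel(x0 + 1, y0 + 1)
    return ((1 - alpha) * (1 - beta) * t1 + alpha * (1 - beta) * t2
            + (1 - alpha) * beta * t3 + alpha * beta * t4)
```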

Trilinear and Anisotropic Filtering.
Trilinear filtering enhances bilinear filtering by blending texels from two mip-map levels for even smoother transitions. Anisotropic filtering further refines this by adjusting for surface orientation, significantly improving the appearance of textures viewed at sharp angles or from a distance.
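
Trilinear filtering is then a thin layer on top: sample the two nearest mip levels and mix by the fractional part of the level. This sketch reuses the sample_bilinear function from above:

```python
def sample_trilinear(mip_chain, u, v, level):
    """Bilinear-sample the two nearest mipmap levels and blend between
    them, so transitions between levels become invisible."""
    lo = min(int(level), len(mip_chain) - 1)
    hi = min(lo + 1, len(mip_chain) - 1)
    frac = level - int(level)
    near = sample_bilinear(mip_chain[lo], u, v)  # sketch from above
    far = sample_bilinear(mip_chain[hi], u, v)
    return near * (1 - frac) + far * frac
```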

Performance Considerations.
While anisotropic filtering offers stunning smoothness, it demands more from the GPU. At a setting of 16x, for instance, the hardware may take up to 16 bilinear samples per pixel (64 texel reads instead of just four), a task modern GPUs handle with ease but at a computational cost.

Evolution of Texture Processing.
Even back in 1996, the 3Dfx Voodoo could output one bilinearly filtered texel per clock. Today, GPUs like the AMD Radeon RX 5700 XT can process 32 texel addresses in a single cycle, demonstrating the leap in performance and complexity since the early days of 3D gaming.

The Bottom Line.
Texture filtering that would once have been unthinkably expensive is now routine at high resolutions, underscoring the incredible advances in GPU technology and graphics processing techniques.

Illuminating the Scene: The Role of Texture in Lighting

Understanding Texture’s Importance.
Consider this scene from Quake:

The image is shrouded in darkness, fitting the game’s atmospheric tone, yet not all areas are uniformly dark. Some wall and floor segments are lighter, creating an illusion of light hitting these spots.

Light Maps as Textures.
In many ways, a light map is simply another texture, making this scene an early example of multitexturing—the process of applying two or more textures to a primitive. Quake’s use of light maps was a technique to bypass Gouraud shading limitations, expanding the possibilities of multitexturing as graphics cards evolved.
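
At its simplest, this kind of multitexturing is a per-pixel multiply of two sampled values. A minimal sketch reusing the sample_bilinear function from the filtering section (the function name is our own):

```python
def shade_with_lightmap(diffuse, lightmap, u, v):
    """Multitexturing at its simplest: modulate the surface texture by a
    second texture that stores precomputed (baked) light intensity, 0..1."""
    return sample_bilinear(diffuse, u, v) * sample_bilinear(lightmap, u, v)
```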

Rendering Pass Limitations.
Back then, cards like the 3Dfx Voodoo had limitations on the operations they could perform in a single rendering pass. A pass involves the complete cycle from vertex processing to frame rasterization and pixel modification before outputting the final frame buffer. Two decades ago, games primarily relied on single-pass rendering.

Evolving Hardware.
This was due to the performance cost of reprocessing vertices just for additional texture overlays. It wasn’t until the arrival of ATI Radeon and Nvidia GeForce 2 GPUs, capable of multitexturing in a single pass, that this changed.

These GPUs featured multiple texture units within the pixel processing pipeline, simplifying the task of fetching texels with bilinear filtering from two separate textures. This advancement not only popularized light maps but also enabled games to dynamically change lighting conditions, reacting to in-game environments.

Beyond Dual Textures.
However, the potential for multitexturing extended far beyond just dynamic lighting, opening up a realm of possibilities for enhancing game visuals and realism.

Embracing Height Variations in 3D Texturing

Beyond Vertex Processing.
In our exploration of 3D rendering, we’ve yet to dive into how GPUs revolutionize the process. While the intricate choreography of vertex processing might seem like the GPU’s most formidable task, innovations in texture technology demonstrate otherwise. Game developers have historically devised clever strategies to simulate high-quality visuals without the hefty computational cost of processing numerous vertices.

The Magic of Height and Normal Maps.
Enter the realm of height maps and normal maps. These textures are interconnected, with normal maps often derived from height maps. Yet, for now, let’s focus on a technique known as bump mapping.

Bump Mapping: A Deceptive Depth.
Bump mapping employs a 2D array called a height map, which at first glance appears as a peculiar rendition of the original texture. For instance, a realistic brick texture applied to flat surfaces gains an illusion of depth through its corresponding height map.

The Color-coded Depth.
The grayscale values in a height map encode how far each point of the surface should appear to protrude. From these heights the renderer derives surface normals and adjusts the texture’s lighting accordingly, giving the bricks a more three-dimensional appearance. Though the surface stays perfectly flat, the technique cleverly adds detail, with its limitations only becoming noticeable upon close inspection.

Normal Maps: Color as Direction.
Normal maps resemble height maps but directly encode normals as color values, simplifying the transformation process. Each texel’s RGB values correspond to the XYZ components of a normal vector, providing directional information to the rendering processor for accurate lighting calculations.
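
Decoding is a simple remapping, and the decoded vector feeds straight into a lighting term. A minimal sketch, assuming unit-length vectors:

```python
def decode_normal(r, g, b):
    """Unpack a normal-map texel: RGB values in 0..1 map to a vector
    with XYZ components in -1..1."""
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)

def diffuse_light(normal, light_dir):
    """Lambertian lighting term: brightness falls off with the angle
    between the surface normal and the light direction (both unit-length)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0)

# The typical pale-blue normal-map color (0.5, 0.5, 1.0) decodes to
# (0, 0, 1): a normal pointing straight out of the surface.
print(decode_normal(0.5, 0.5, 1.0))
```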

Dynamic Lighting’s Best Friend.
The true power of bump mapping and normal maps shines with dynamic lighting. Modern games employ a suite of textures to enhance this effect, making even flat surfaces look astonishingly detailed and three-dimensional.

A Quintet of Textures for Realism.
Consider a wall that, although it looks richly detailed with brick and mortar, is in fact a flat surface. This realism is achieved not with millions of polygons but with just five textures (the base color image plus the four maps below) and some clever math, as the sketch after this list illustrates:

  • Height maps generate shadows as if the bricks were physically protruding.
  • Normal maps simulate the fine nuances of the surface texture.
  • Roughness textures vary the reflection of light across different materials, distinguishing the smooth bricks from the rough mortar.
  • Ambient occlusion (AO) maps enhance the realism of shadows, a technique we’ll explore further in upcoming discussions.
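
Here is a hedged sketch of how such a quintet might combine for one pixel. Every function name and constant below is our own illustration rather than any engine’s actual shading model; the inputs are values already sampled from each map:

```python
def shade_brick_pixel(albedo, normal, roughness, ao, light_dir):
    """One illustrative way the maps above might combine for a single pixel.
    Inputs are values already sampled from each texture at this pixel; the
    height map acts earlier, offsetting the u, v coordinates before lookup."""
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    # Smooth bricks get a tight highlight; rough mortar gets almost none.
    highlight = (1.0 - roughness) * (n_dot_l ** 32)
    # Ambient occlusion darkens crevices; 0.1 stands in for ambient light.
    return tuple(ao * (c * (0.1 + 0.9 * n_dot_l) + highlight) for c in albedo)
```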

This method demonstrates how advanced texturing techniques can create deeply immersive environments, significantly enhancing visual fidelity with minimal geometric complexity.

The Crucial Role of Texturing in Game Development

Texturing is essential in game development. Consider the 2018 RPG Kingdom Come: Deliverance. Set in 15th-century Bohemia, its designers aimed for maximum realism. To immerse players in life centuries ago, historically accurate landscapes, buildings, attire, hairstyles, and everyday items were key.

Some textures cover minor objects with simple details and thus undergo minimal filtering or blending (the chicken’s wings, for example).

Others are high-resolution with intricate details; these undergo anisotropic filtering and blending with normal maps and other textures — observe the human face in the foreground. Programmers tailor texturing requirements for each scene object, mindful of the diversity in detail demand.

This method prevails in modern games as players seek ever higher detail and realism. The best technologies never fade, regardless of their age!

