n2liquid's sandbox

Abstracting the render engines—Render States and Blocks

Posted on: May 21, 2011

「抽象化たん」 ("Abstraction-tan"): the flying abstraction loli

I’m a beginner in 3D programming, but from what I understand, both OpenGL and Direct3D are nothing but big state machines, and that’s what they’ll always be. They’re far from object-oriented, and their design is pretty much unfit for being OO. If I were to wrap them in an OO API, I’d probably hurt performance or flexibility in some way, so I decided to port the “big state machine” status over to Sonetto’s abstraction layer.

Features are implemented as RenderState classes that encapsulate desired rendering effects like setting a texture channel, a vertex buffer, etc.

Render States

Each RenderState implementation is responsible for querying render engine and hardware capabilities and for (delta-)applying itself to the graphics core. Delta-applying means that, when requested to, a render state may compare itself to another render state of the same type and apply only the changes needed to switch from one to the other. This makes it easy to minimize actual state changes when a lot of similar operations are going on.

Render states are not intended to directly match a render engine’s actual render states; instead, each one matches an indivisible Sonetto feature. That means a lot of deprecated (or soon-to-be-deprecated, or useless) stuff in the backends is simply unavailable, and some things that are useless in isolation get packed together into a single juicy class.

Also, instead of wrapping up everything I see in front of me like mad, I’ll just wrap the things I need for my game as needed, one by one.

Render State Blocks

Render states can be grouped into blocks that simply forward the apply / delta-apply functionality to the render states they contain. Render state blocks are also called render blocks for short, and can be stacked together to minimize state machine fiddling and modularize code. Stacking and unstacking render blocks appropriately is key to designing good render systems.

The concept starts to look beautiful when you picture a frame being rendered with it. Say you’re coding the map module of Tales of The Abyss (spoilers there; be warned). At first, the state machine is factory-default. Then the ambient code kicks in and stacks a block with near and far clipping, transformation matrices, fog, and default shaders set (we won’t use the fixed-function pipeline, so we need basic shaders to draw anything). Then the map code kicks in, probably overrides the default shaders, possibly the fog configuration depending on the map, and the basic geometry is rendered. But the map has some grass patches here and there, so code for rendering those kicks in: configurations are stacked once, many batches of grass are fired to the GPU, then the stack unwinds a bit and proceeds to the next special piece of the map needing attention.

At last the stack is almost fully unwound and the skybox and other ambient geometry are rendered. Unwind that again; overlays (2D windows, text, etc.) kick in.

Rinse, repeat. Render, unwind. Proceed to the next frame.
