# BFP's Forging Notes

## Saturday, September 22, 2012

### Moving forward

Sadly I don't have enough time to update this blog, and my contribution to the community is still low, but... I've moved back to my hometown, and I will move again in 10 days.

Can't wait for the next move!

I will join a great company in a great place...

Stay tuned!

## Tuesday, January 17, 2012

### Is software engineering compatible with making games?

*Software engineering is* **NOT** Object Oriented Programming. Object Oriented Programming is one way of solving a problem; it is just another tool.

*Object Oriented Programming is* **NOT** design patterns. A **PATTERN** is a **SOLUTION** to a known **PROBLEM**.

*"What comes BEFORE a project?"*

**DEFINING A BAD PROBLEM WILL LEAD TO BAD SOLUTIONS.**

Our mind, which always needs to categorize and label things to organize data, works by association: when we find something similar to what we already know, above a certain degree of similarity, we assume it is equal.

**DESCRIBE PROBLEMS CORRECTLY.**

**Data Oriented Programming**, they say: it is about *describing the problem in a correct way*, in terms of **data** and performance.

## Monday, January 9, 2012

### Position reconstruction from depth (3)

Converting from **Camera/View space** to **Post-Projection space** and vice versa.

```hlsl
// This must be done on the CPU and passed to the shaders:
float2 getProjParams()
{
//#define PROJ_STANDARD
#ifdef PROJ_STANDARD
    float rangeInv = 1 / (gFar - gNear);
    float A = -(gFar + gNear) * rangeInv;
    float B = -2 * gFar * gNear * rangeInv;
#else // We get rid of the minus signs by just inverting the denominator (faster):
    float rangeInv = 1 / (gNear - gFar);
    float A = (gFar + gNear) * rangeInv;
    float B = 2 * gFar * gNear * rangeInv;
#endif
    return float2(A, B);
}

// Input: 0..-far - Output: -1..1
float postDepthFromViewDepth( float depthVS )
{
    float2 projParams = getProjParams();
    // Zn = (A * Ze + B) / -Ze
    // Zn = -A - (B / Ze)
    float depthPS = -projParams.x - (projParams.y / depthVS);
    return depthPS;
}

// Input: -1..1 - Output: 0..-far
float viewDepthFromPostDepth( float depthPS )
{
    float2 projParams = getProjParams();
    // Ze = -B / (Zn + A)
    float depthVS = -projParams.y / (projParams.x + depthPS);
    return depthVS;
}
```

```hlsl
///////////////////////////////////////////////////
// POST-PROJECTION SPACE
///////////////////////////////////////////////////

// Returns post-projection depth
float decodeProjDepth( float2 uv )
{
    return tex2D( depthMap, uv ).r;
}

// Returns view-space depth from projection (negative for right-handed)
float decodeViewDepthFromProjection( float2 uv )
{
    float depthPS = decodeProjDepth( uv );
    return viewDepthFromPostDepth( depthPS );
}

// Returns depth in range 0..1
float decodeLinearDepthFromProjection( float2 uv )
{
    float depthVS = decodeViewDepthFromProjection( uv );
    // Right-handed coords need the minus,
    // because the depth is negative
    // and we are converting towards the 0..1 domain
    return -depthVS / gFar;
}

// Returns post-projection depth
float encodePostProjectionDepth( float depthViewSpace )
{
    return postDepthFromViewDepth( depthViewSpace );
}

///////////////////////////////////////////////////
// VIEW/CAMERA SPACE
///////////////////////////////////////////////////

// Returns stored linear depth (0..1)
float decodeLinearDepthRaw( float2 uv )
{
    return tex2D( depthMap, uv ).r;
}

// Returns view-space depth (0..-far)
float decodeViewDepthFromLinear( float2 uv )
{
    return decodeLinearDepthRaw( uv ) * -gFar;
}

// Returns linear depth from right-handed view-space depth (0..-far)
float encodeDepthLinear( float depthViewSpace )
{
    return -depthViewSpace / gFar;
}

#ifdef DEPTH_LINEAR
    #define encodeDepth         encodeDepthLinear
    #define decodeViewDepth     decodeViewDepthFromLinear
    #define decodeLinearDepth   decodeLinearDepthRaw
#else
    #define encodeDepth         encodePostProjectionDepth
    #define decodeViewDepth     decodeViewDepthFromProjection
    #define decodeLinearDepth   decodeLinearDepthFromProjection
#endif
```

```hlsl
// View-space position
float3 getPositionVS( float2 uv )
{
    float depthVS = decodeLinearDepth(uv);
    //float4 positionPS = float4((uv.x-0.5) * 2, (0.5-uv.y) * 2, 1, 1);
    float4 positionPS = float4( (uv - 0.5) * float2(2, -2), 1, 1 );
    float4 ray = mul( gProjI, positionPS );
    ray.xyz /= ray.w;
    return ray.xyz * depthVS * gFar;
}

float3 getPositionVS( float2 uv, float3 ray )
{
    float depthLin = decodeLinearDepth(uv);
    return ray.xyz * depthLin;
}

float3 getPositionWS( float2 uv )
{
    float3 positionVS = getPositionVS( uv );
    float4 positionWS = mul( gViewI, float4(positionVS, 1) );
    return positionWS.xyz;
}

float3 getPositionWS( float2 uv, float3 viewDirectionWS )
{
    float depthVS = decodeViewDepth(uv);
#if defined(METHOD1)
    // Super-slow method (2 matrix-vector muls)
    float4 pps = mul( gProj, float4(getPositionVS( uv ), 1) );
    float4 positionWS = mul( gViewProjI, pps );
    positionWS /= positionWS.w;
    return positionWS.xyz;
#elif defined(METHOD2)
    // Known working slow method
    float3 positionWS = getPositionWS( uv );
    return positionWS.xyz;
#else
    // Super-fast method
    viewDirectionWS = normalize(viewDirectionWS.xyz);
    float3 zAxis = gView._31_32_33;
    float zScale = dot( zAxis, viewDirectionWS );
    float3 positionWS = getCameraPosition() + viewDirectionWS * depthVS / zScale;
    return positionWS;
#endif // METHOD1,2,3
}
```

This is mostly for educational purposes, so that we can switch between storing view-space or post-projection depth. In the GBuffer creation pass, the depth is stored like this:

```hlsl
depth = float4( encodeDepth(IN.positionViewSpace.z), 1, 1, 1 );
```

A very cheap and easy way to detect whether we've worked correctly is to add a point light and brighten the rendering a bit with something like:

```hlsl
#ifdef POSITION_RECONSTRUCTION_VIEWSPACE
    float4 lightPos = mul( gView, float4(100.0, 0.0, 0.0, 1.0) );
    float3 pixelPos = getPositionVS( uv, viewDirVS );
#else
    float4 lightPos = float4(100.0, 0.0, 0.0, 1.0);
    float3 pixelPos = getPositionWS( uv, viewDirWS );
#endif
```

Note the different rays, viewDirVS and viewDirWS. They are calculated as MJP showed some time ago, in two different ways: one for meshes and the other for fullscreen quads.

### Position reconstruction from depth (2)

I want to finish my summary for position reconstruction.

The missing part is the position, which can be in **ViewSpace** (or CameraSpace) and in **WorldSpace**.

So far we know the relationship between the **Post-Perspective** and **ViewSpace** depth of the fragment. If not, roll back to the previous post.

The methods are taken directly from the amazing MJP, kudos for his work!

My target is again a right-handed coordinate system like OpenGL's, which needs some more attention (really, a minus sign in the correct place makes a HUGE difference!).

The methods I'm using are the ones that pass a ray down to the fragment shader.

**VIEWSPACE POSITION**

**Ray Generation**

For a fullscreen quad we take the ClipSpace position and multiply by the inverse of the projection matrix to obtain a ray in ViewSpace.

```hlsl
float4 ray = mul( gProjectionInverse, positionCS );
ray /= ray.w;
OUT.viewRayVS = ray;
```

**Position reconstruction**

Here we need the linear depth between 0 and 1, which depends on your choice of storage.

The formula is:

`viewRayVS * linearDepth`

If you are reading from the depth buffer, then you'll need to convert it to view space and then divide by the far value.

Pay attention here. If you are using a right-handed coordinate system, your view-space depth will ALWAYS be negative.

So the right-handed passage will be: rawDepth -> viewSpaceDepth(0..-far) -> division by -far

Left-handed: rawDepth -> viewSpaceDepth(0..far) -> division by far

**WORLDSPACE POSITION**

**Ray generation**

The generation here is:

ViewDirectionWS = positionWS - cameraPositionWS

and the camera position in WorldSpace can be found in the last column of the **inverse of the view matrix**.

In HLSL, if you want to access this, you can create a helper method:

```hlsl
float3 getCameraPosition()
{
    return gViewInverse._14_24_34;
}
```

**Position reconstruction**

In this case the reconstruction is longer.

Around the web you can find the solution like get the viewspace position and then multiply by the inverse of the view matrix.

It is ok, but can be a lot faster.

Matt in his blog suggest the solution (here) to scale the depth on the camera zAxis:

```hlsl
float3 viewRayWS = normalize( IN.ViewRayWS );
float3 zAxis = gView._31_32_33;
float zScale = dot( zAxis, viewRayWS );
float3 positionWS = cameraPositionWS + viewRayWS * depthVS / zScale;
```

Here we're talking about the real view-space depth, which can be recovered from the post-projection depth by using:

`float depthVS = -ProjectionB / (depthCS + ProjectionA);`

This is already the depth in view space (which is always negative for right-handed systems).

When storing the linear depth between 0..1, we'll need to convert it back to view space. In the case of a right-handed system, we stored the linear depth like this:

`float depthLinear = depthViewSpace / -Far;`

So to get the depth in view space again, we have:

`float depthViewSpace = depthLinear * -Far;`

Hope this is useful, guys.

Credit goes to MJP for his amazing work; this is just a way to summarize all the possible problems in reconstruction.

The problem I found around the web is in defining the real DOMAIN of variables and spaces, and I hope this contributes to clearer ideas about how to handle depth reconstruction and space changes without fear.

In the next post I will simply write down all the methods in the shader I used to test all this stuff!

## Monday, November 28, 2011

### Position reconstruction from depth (1)

This post is just a quick recap of the possible ways to reconstruct position from the depth buffer that I found around: almost all the credit goes to http://mynameismjp.wordpress.com/.

Let's define (again) the problem:

**Reconstruct pixel position from the depth buffer.**

Applying a personal way of seeing code, let's highlight **Data** and **Transformations**.

In my experience, I came up with a simplistic idea about coding:

**Coding is a sequence of Data transformed into other Data.**

I know it is very simplistic and low level (we're not taking into account any architecture), but this **is** a low-level view of the problem.

And more views of the same problem can shed more light on the true nature of the problem itself (as in life in general).

In this problem we have two pieces of data: the **Pixel Position** and the **Depth buffer**. The transformation is the **reconstruction**.

To understand further, we can define the **Domains** of the data:

- The **Pixel position** can be either in **World Space** or in **View Space**
- The **Depth buffer** can be encoded either as linear or as post-perspective z (the raw depth buffer)

So the transformations go from the **Depth buffer** to the **Pixel position** and can be:

- **Linear** depth buffer to **View Space** position
- **Linear** depth buffer to **World Space** position
- **Post-Perspective** depth buffer to **View Space** position
- **Post-Perspective** depth buffer to **World Space** position

To finish, we have two other transformations:

- Encode to **Linear** depth buffer
- Encode to **Post-Perspective** depth buffer

The post-perspective one is hidden by the hardware: it is what is inside the real depth buffer.

The linear one maps the eye/camera/view space z to the domain 0..1.

To really finish this introduction to the problem, we must know a little bit about our coordinate system. Moving data from **world** to **view** to **projection** space, we must define those domains.

We can skip **World space** and concentrate on the others.

If we follow the OpenGL or DirectX APIs, we know that they are different in both spaces:

- **OpenGL** uses a **right-handed** system for the **view** space
- **DirectX** uses a **left-handed** one
- **OpenGL** uses a cube between (-1, 1) on x, y, z as the **projection** cube
- **DirectX** uses a cube between (-1, 1) on x, y and (0, 1) on z

Using a right-handed system means looking down **negative z**. Keep this in mind.

**ENCODING AND DECODING TRANSFORMATIONS**

In this section we'll talk about encoding: what do we want to encode?

The raw depth buffer contains a depth transformed from the **view-space** depth, encoded in a simple way that depends on your projection matrix.

Let's take only the relevant part of the matrix (the bottom-right 2x2 corner) and multiply it by the point in view space, Pview = (zView, 1):

```
(  A  B ) ( zView )
( -1  0 ) (   1   )
```

Doing the multiplication gives:

Pclip = ( A * zView + B, -zView )

To become a 3D point (1D here) we apply the division by w:

Pndc = ( (A * zView + B) / -zView, 1 ), which further simplifies to

Pndc = ( -A - (B / zView), 1 )

So zNdc is -A - (B / zView).

This is the way the depth is encoded in the depth buffer, and the value is between -1 and 1.

**Note:** if you try to do some maths and put zView = n and zView = f, you'll notice that the values are not mapped correctly between -1 and 1. This is because we're using negative values, so the correct ones are zView = -n and zView = -f.

To find zView, just solve for zView and we obtain:

zView = -B / (zNdc + A)

So now we have defined the two transformations:

**Projection-Space Encoding: -A - (B / zView)**

**Projection-Space Decoding: -B / (zNdc + A)**

Ok then, it's finished.

Wait... what are those A and B???

Those values depend again on the choice of your projection matrix.

In OpenGL they are defined as (n is the near plane, f is the far plane):

**A = -(f + n) / (f - n)**

**B = -2 * n * f / (f - n)**

Those values can be easily calculated and passed to the shaders (don't bother doing it inside a shader; those are perfect values to be set once per frame with the other frame constants) to reconstruct depth.

Linear depth encoding is different. We're still encoding **view-space** depth, but it becomes easier. The values in camera/eye/view space are like world space, just centered around the camera. For right-handed systems, we will encode only negative z values, because the camera is looking into the negative-z half-space.

The z values will be in the range 0..-infinity: the projection will take care of getting rid of values closer than the near plane or farther than the far plane.

**Linear depth encoding: -zView / f**

**Linear depth decoding: -zLin * f**

The encoded values are between 0 and 1.

Finally... some **CODE**!!!

This is **POST-PROJECTION DEPTH**:

```hlsl
// Calculate A and B (on the CPU, passed down as frame constants):
float rangeInv = 1 / (gFar - gNear);
float A = -(gFar + gNear) * rangeInv;
float B = -2 * gFar * gNear * rangeInv;

// Write -1..1 post-projection z
float encodePostProjectionDepth( float depthViewSpace )
{
    float depthCS = -A - (B / depthViewSpace);
    return depthCS;
}

// Read -1..1 post-projection z
float decodePostProjectionDepth( float2 uv )
{
    float depthPPS = tex2D( depthMap, uv ).r;
    return depthPPS;
}

// Reconstruct view-space depth (0..-far)
float decodeViewSpaceDepth( float2 uv )
{
    float depthPPS = decodePostProjectionDepth( uv );
    float depthVS = -B / (A + depthPPS);
    return depthVS;
}
```

This is the **Linear depth** version:

```hlsl
// Encode view-space depth to 0..1
float encodeLinearDepth( float depthViewSpace )
{
    return -depthViewSpace / gFar;
}

// Read the stored 0..1 linear depth
float decodeLinearDepth( float2 uv )
{
    float linearDepth = tex2D( depthMap, uv ).r;
    return linearDepth;
}

// Reconstruct view-space depth (0..-far)
float decodeViewSpaceDepth( float2 uv )
{
    float linearDepth = decodeLinearDepth( uv );
    return linearDepth * -gFar;
}
```

As you can see, linear depth is easier to encode and decode, but it's more expensive from a memory point of view: you need an additional render target, whereas you are already using a depth buffer, so the post-projection information is already there.

## Wednesday, November 23, 2011

### Rendering Architecture (2)

Each draw call needs:

- Geometry information
- Shading information
- Render states

and we set this information in various ways; for example on DX/X360 we use the Set*** commands and Draw*** to issue the draw call.

The geometry information covers the vertex buffer, the vertex format/declaration and the index buffer; the shading information covers the various shaders (vertex, fragment, geometry, ... depending on the API) and the data used by the shaders (constants and textures); the render states are all the other settings, like render targets, depth/stencil, alpha blending: all the configurable states that are grouped into state objects in DirectX 10 and 11.

The render interface is thus split in two: a RenderContext, which sets all the information needed to issue draw calls and draws, and a RenderDevice, which manages the creation, destruction and mapping/unmapping of low-level graphics resources.

This division makes it easy to separate what is "deferrable" from what is not, so if you want to create your own command buffer or use the DX11 one (good luck) then you already know that the RenderContext is the right guy to call.

Every renderable object will have a render method that takes the RenderContext, so that it can set the data for its draw calls.

The real catch is to use the curiously recurring template pattern to create the interface for both the RenderContext and the RenderDevice, and to create a different implementation for each platform: even though you need to typedef the specific template instantiation, you can assume (see above) that for each target you have only ONE type for the RenderContext implementation.

The methods in the API-dependent class can all be protected so that you enforce the interface, and inlining all the calls in the RenderContext class maps a call on your render context to a direct call of the method, thus avoiding virtuals, with "static polymorphism".

Even though on PC this is not a real cost, on consoles (I really suggest you try, if you can) calling virtual functions a LOT of times is a bad hit (especially on PS3), but let's try to figure out the numbers: if you have 1000 draw calls, you'll probably have 4 or 5 RenderContext calls (SetVertexBuffer, SetIndexBuffer, SetVertexShader, SetPixelShader, SetConstants, SetVertexFormat, DrawIndexed, ...) for each draw call, thus 4000-5000 virtual calls per frame. So you end up with 4000-5000 potential cache misses per frame, all without any apparent reason, and the cost of a cache miss on consoles varies, but can be from 40 to 600 cycles per call.

How many cycles are we wasting?

With this system, you have a common interface and no virtuals. No silver bullet, but finding a solution requires a correct definition of the constraints...

BFP

## Monday, October 31, 2011

### Rendering architecture (1)

It's been too long since I last updated this blog!

I wanted to share with you the way I describe rendering in my home engine, which I use to study technologies quickly.

I've been doing it this way since 2008, so it's starting to become quite mature, but I'm sure it will improve in the future!

Let's start from the objective: I wanted to create something flexible and quick to iterate on, to study all the different rendering techniques that I'm interested in.

Speaking about DX9+ GPUs, the context of the problem has some constraints.

The rendering APIs are state machines, and to draw we must provide at least this information:

1) Vertex Buffer

2) Index Buffer

3) Vertex Format

4) Vertex Program

5) Fragment Program

6) Shader Constants

7) Render States

8) Render Targets

We can see rendering as a problem of HOW to create this information to be sent to the GPU.

Rendering nowadays can be seen as a sequence of draw calls applied to a render target.

So back in time I started to define the concept of a Rendering Pipeline as a sequence of draw calls into a render target.

Each of those sequences becomes a Stage of the pipeline.

At the end of the day we have:

Pipeline = Stage0 -> Stage1 -> Stage2

This sounds a little simplistic, and it is indeed simple, but it's really powerful.

An example of a Stage can be as simple as a ForwardRendering stage that renders into the framebuffer (which can be seen as a render target), a GBuffer generation, shadow creation, SSAO, ...

For each stage we define some render-target inputs and outputs, to link the different stages together.

If I've made the picture clear, the rendering becomes a series of stage renderings.

To describe the rendering I started using an XML file.

Each stage type corresponds to a class that extends the Stage class in the code.
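A hypothetical sketch of such an XML description (all tag and attribute names here are invented for illustration, not the actual file format) might look like:

```xml
<Pipeline>
  <Stage type="GBuffer" name="gbuffer">
    <Output target="albedoRT" />
    <Output target="normalRT" />
    <Output target="depthRT" />
  </Stage>
  <Stage type="Scriptable" name="lighting">
    <Input target="albedoRT" />
    <Input target="normalRT" />
    <Input target="depthRT" />
    <RenderView camera="main" />
    <RenderState depthTest="false" blend="add" />
    <Shader name="deferredLighting" />
    <Output target="framebuffer" />
  </Stage>
</Pipeline>
```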

- Each stage has some render states set before rendering
- Each stage can draw per material or with an explicitly provided shader
- Each stage that describes a postprocess draws a single quad and has a shader associated

Further defining the rendering problem, I found another missing piece of information: how to define the render objects sent to each stage.

I ended up with what I called a RenderView: a camera plus a list of "render instances". So now each stage renders a particular RenderView and has some render states defined, plus an optional shader if it is a postprocess.

The point is... why not just describe all this information in the XML?

Enter ScriptableStage!

This stage has a renderview, some renderstates, and an optional shader. All defined in the XML. This provides me the flexibility to describe a lot of the rendering techniques around.

After some more technique explorations, I found another emerging pattern:

Each time I want to render, I need to provide 3 groups of information:

- Geometry
- Shading
- Render states

When you define all this information, you can describe any rendering you want! Even materials can be seen as a list of shaders corresponding to stages, and can be described through XML...

The application logic that sits on top mostly has to provide the geometry and link it with materials (which are shaders + render states).

With this simple description I found a very powerful way of describing the rendering and of prototyping/exploring very fast.

In the next post I'll also describe a way to achieve a simple reloading mechanism, and how to avoid virtuals in the renderer itself to be more console friendly ;)

Enjoy!