Saturday, September 22, 2012

Moving forward

After 2.5 years at Codemasters, I've moved on to my next adventure.
Sadly I haven't had enough time to update this blog, and my contribution to the community is still low, but... I've moved back to my hometown, and I will move again in 10 days.
Can't wait for the next move!
I'll be joining a great company in a great place...

Stay tuned!

Tuesday, January 17, 2012

Is software engineering compatible with making games?

This is one of the hundreds of questions I keep asking myself.
After some mulling and some experience, I came to understand that what's really misleading is the equality software engineering == O.O.P. == design patterns.
That's the impression you get when reading forums and talking with people around you.
That's wrong. Completely wrong.
First of all: software engineering is NOT Object Oriented Programming.
Software engineering is a tool for finding solutions to problems.
Object Oriented Programming is one way of solving a problem: it is just another tool.
Second: Object Oriented Programming is NOT design patterns.
The real trouble here is that people are lazy and often not focused or precise enough to describe the problem.
A PATTERN is a SOLUTION to a known PROBLEM.
So, what's a problem?
The word problem comes from Greek, and means "what comes BEFORE a project".
The real error here is in the definition of the problem.
DEFINING A BAD PROBLEM WILL LEAD TO BAD SOLUTIONS.
That may sound silly or foggy, but it really is a common error I see around (and in myself too).
So, going back to the equality: patterns are solutions to problems.
Our mind, which always needs to categorize and label things in order to organize data, works by association: when we find something similar to what we already know, above a certain degree of similarity, we assume it is equal.
This is normal and common: you know the experiment showing that if we raed wrods with only the frist and lsat letters in place, we can understand them just the same.
We have (normally) already encountered that word, so we can tell which one it is.
Using patterns is not the only way of doing OOP: they are completely different domains.
Software engineering is an analysis and design mindset.
What we're really missing is the ability to DESCRIBE PROBLEMS CORRECTLY.
Also, we don't have many patterns that are usable in the games industry.
That's for two reasons:
1) There are not many software engineers/architects who have enough time to study and design software as large as a game plus an engine, given the production constraints we have.
2) Believing in this false equality, a lot of people believe that software engineering is not for realtime applications.
But there is hope.
Even if in a strange way, some new patterns/ways of programming are coming out.
Data Oriented Programming, they call it.
There are no explicit patterns written in an academic way, but there are patterns.
In this case, what happened is that the problem was described correctly.
For example, a lot of patterns make no mention of hardware constraints or performance.
Data oriented programming is thus becoming a new way of coding: coding while thinking about data and performance.
But data oriented does not exclude object oriented.
They can co-exist!
We must be open-minded: we still need to go one level up, and be independent of the Object or Data oriented paradigms.
We must focus on each problem, and on why it is there.
Object oriented programming was born to solve the problem of creating highly maintainable code used by different people over a long time span.
Data oriented programming was born to solve the performance issues and platform bindings that object oriented programming was leaving behind.
Each problem has its own solution.
Need 1000 calls per frame on consoles? Data oriented, cache friendly, multithreaded and no virtuals are the keywords.
Need 1-10 calls per frame? Object oriented is fine.
Looked at from this angle, the two paradigms CAN and MUST co-exist, because they're not strict rules but really ways of dealing with certain classes of problems.

Monday, January 9, 2012

Position reconstruction from depth (3)


Hi guys,
just wanted to share the code I've used to study the position reconstruction problem.
This code lets you switch easily between linear depth and post-projection depth, and I'll also show a way to check that the reconstruction is correct.
The following code converts between Camera/View space and PostProjection space and vice versa.




// This must be done on the CPU and passed to shaders:
float2 getProjParams()
{
    //#define PROJ_STANDARD
#ifdef PROJ_STANDARD
    float rangeInv = 1 / (gFar - gNear);
    float A = -(gFar + gNear) * rangeInv;
    float B = -2 * gFar * gNear * rangeInv;
#else // We get rid of the minus signs by inverting the denominator (faster):
    float rangeInv = 1 / (gNear - gFar);
    float A = (gFar + gNear) * rangeInv;
    float B = 2 * gFar * gNear * rangeInv;
#endif

    return float2(A, B);
}

// Input: 0..-far - Output: -1..1
float postDepthFromViewDepth( float depthVS )
{
    float2 projParams = getProjParams();

    // Zn = (A * Ze + B) / -Ze
    // Zn = -A - (B/Ze)
    float depthPS = -projParams.x - (projParams.y / depthVS);

    return depthPS;
}

// Input: -1..1 - Output: 0..-far
float viewDepthFromPostDepth( float depthPS )
{
    float2 projParams = getProjParams();

    // Ze = -B / (Zn + A)
    float depthVS = -projParams.y / (projParams.x + depthPS);

    return depthVS;
}

Next I'll show some helper functions to encode/decode the depth in different spaces.


///////////////////////////////////////////////////
// POST PROJECTION SPACE
///////////////////////////////////////////////////

// Returns post-projection depth
float decodeProjDepth( float2 uv )
{
    return tex2D( depthMap, uv ).r;
}

// Returns viewspace depth from projection (negative for left-handed)
float decodeViewDepthFromProjection( float2 uv )
{
    float depthPS = decodeProjDepth( uv );
    return viewDepthFromPostDepth( depthPS );
}

// Returns depth in range 0..1
float decodeLinearDepthFromProjection( float2 uv )
{
    float depthVS = decodeViewDepthFromProjection( uv );
    // Left-handed coords need the minus,
    // because the depth is negative
    // and we are converting towards the 0..1 domain
    return -depthVS / gFar;
}

// Returns post-projection depth
float encodePostProjectionDepth( float depthViewSpace )
{
    return postDepthFromViewDepth( depthViewSpace );
}

///////////////////////////////////////////////////
// VIEW/CAMERA SPACE
///////////////////////////////////////////////////

// Returns stored linear depth (0..1)
float decodeLinearDepthRaw( float2 uv )
{
    return tex2D( depthMap, uv ).r;
}

// Returns viewspace depth (0..-far)
float decodeViewDepthFromLinear( float2 uv )
{
    return decodeLinearDepthRaw( uv ) * -gFar;
}

// Returns linear depth from left-handed viewspace depth (0..-far)
float encodeDepthLinear( float depthViewSpace )
{
    return -depthViewSpace / gFar;
}



With simple defines we can switch between using a linear depth buffer and a post-projection (raw) depth buffer, to check that our calculations are fine:


#ifdef DEPTH_LINEAR
#define encodeDepth encodeDepthLinear
#define decodeViewDepth decodeViewDepthFromLinear
#define decodeLinearDepth decodeLinearDepthRaw
#else
#define encodeDepth encodePostProjectionDepth
#define decodeViewDepth decodeViewDepthFromProjection
#define decodeLinearDepth decodeLinearDepthFromProjection
#endif

Now all the reconstruction methods, both the slow ones and the fast ones that use rays:



// View-space position
float3 getPositionVS( float2 uv )
{
    float depthLin = decodeLinearDepth(uv);

    //float4 positionPS = float4((uv.x-0.5) * 2, (0.5-uv.y) * 2, 1, 1);
    float4 positionPS = float4( (uv - 0.5) * float2(2, -2), 1, 1 );
    // Unproject a point on the far plane: after the divide by w,
    // the ray already ends at z = -far.
    float4 ray = mul( gProjI, positionPS );
    ray.xyz /= ray.w;
    // Scaling by the linear (0..1) depth is enough,
    // no extra multiply by gFar.
    return ray.xyz * depthLin;
}

float3 getPositionVS( float2 uv, float3 ray )
{
    float depthLin = decodeLinearDepth(uv);

    return ray.xyz * depthLin;
}

float3 getPositionWS( float2 uv )
{
    float3 positionVS = getPositionVS( uv );
    float4 positionWS = mul( gViewI, float4(positionVS, 1) );
    return positionWS.xyz;
}

float3 getPositionWS( float2 uv, float3 viewDirectionWS )
{
    float depthVS = decodeViewDepth(uv);
#if defined(METHOD1)
    // Super-slow method ( 2 matrix-vector muls )
    float4 pps = mul( gProj, float4(getPositionVS( uv ), 1) );
    float4 positionWS = mul( gViewProjI, pps );
    positionWS /= positionWS.w;

    return positionWS.xyz;
#elif defined(METHOD2)
    // Known working slow method
    float3 positionWS = getPositionWS( uv );

    return positionWS;
#else
    // Super fast method
    viewDirectionWS = normalize(viewDirectionWS.xyz);

    float3 zAxis = gView._31_32_33;
    float zScale = dot( zAxis, viewDirectionWS );
    float3 positionWS = getCameraPosition() + viewDirectionWS * depthVS / zScale;

    return positionWS;
#endif // METHOD1,2,3
}

Last but not least, here's the code to encode the depth, plus a simple check to see whether we've done it right:


// This is only for educational purposes,
// so that we can switch between storing
// viewspace or postprojection depth.
// In the GBuffer creation, the depth is stored like this:

depth = float4( encodeDepth(IN.positionViewSpace.z), 1, 1, 1);

// A very cheap and easy way to check that
// we've done everything correctly is to add a
// point light and brighten the rendering
// with something like:

#ifdef POSITION_RECONSTRUCTION_VIEWSPACE
float4 lightPos = mul( gView, float4(100.0, 0.0, 0.0, 1.0) );
float3 pixelPos = getPositionVS( uv, viewDirVS );
#else
float4 lightPos = float4(100.0, 0.0, 0.0, 1.0);
float3 pixelPos = getPositionWS( uv, viewDirWS );
#endif


Note the different rays, viewDirVS and viewDirWS. They are calculated as MJP showed some time ago, in two different ways: one for meshes and the other for fullscreen quads.
I think that's all for now; I'll attach a screenshot of the simple test I've used to verify the reconstruction. Note that the light is the same in all conditions: view/world space reconstruction, linear/postprojection depth storage.
Enjoy!!!

Position reconstruction from depth (2)

Happy new year guys!
I want to finish my summary of position reconstruction.
The missing part is the position itself, which can be in ViewSpace (or CameraSpace) or in WorldSpace.

So far we know the relationship between the Post-Perspective and ViewSpace depths of a fragment. If not, go back to the previous post.

The methods are taken directly from the amazing MJP, kudos for his work!
My target is again a left-handed coordinate system like OpenGL, which needs some extra attention
(really, a minus sign in the right place makes a HUGE difference!).

The methods I'm using are the ones that pass a ray down to the fragment shader.

VIEWSPACE POSITION

Ray Generation
For a fullscreen quad we take the ClipSpace position and multiply it by the inverse of the projection matrix to obtain a ray in ViewSpace.

float4 ray = mul( gProjectionInverse, positionCS );
ray /= ray.w;
OUT.viewRayVS = ray;

Position reconstruction
Here we need the linear depth between 0 and 1, which depends on your choice of storage.
The formula is:

viewRayVS * linearDepth

If you are reading from the depth buffer, you'll first need to convert it to view space and then
divide by the far value.
Pay attention here. If you are using a left-handed coordinate system, your ViewSpace depth will ALWAYS be negative.
So the left-handed chain is: rawDepth -> viewSpaceDepth (0..-far) -> division by -far
Right-handed: rawDepth -> viewSpaceDepth (0..far) -> division by far


WORLDSPACE POSITION

Ray generation
The generation here is:

viewDirectionWS = positionWS - cameraPositionWS

and the camera position in WorldSpace can be found in the last column of the inverse of the view matrix.
In HLSL, if you want to access this, you can create a helper method:

float3 getCameraPosition()
{
    return gViewInverse._14_24_34;
}

Position reconstruction
In this case the reconstruction is longer.
Around the web you can find solutions like: get the viewspace position and then multiply by the inverse of the view matrix.
That works, but it can be a lot faster.
Matt in his blog suggests the solution (here) of scaling the depth along the camera zAxis:

float3 viewRayWS = normalize( IN.ViewRayWS );
float3 zAxis = gView._31_32_33;
float zScale = dot( zAxis, viewRayWS );
float3 positionWS = cameraPositionWS + viewRayWS * depthVS / zScale;


Here we're talking about the real ViewSpace depth, which can be obtained from the post-projection depth with the conversion from the previous post:
float depthVS = -ProjectionB / (depthCS + ProjectionA);
This is already the depth in view space (which is always negative for left-handed systems).

When storing the linear depth in 0..1, we'll need to convert it back to viewspace.
In a left-handed system, we stored the linear depth like this:

float depthLinear = depthViewSpace / -Far;

So to have again the depth in ViewSpace, we have:

float depthViewSpace = depthLinear * -Far;

Hope this is useful, guys.
Credit goes to MJP for his amazing work; this is just a way to summarize all the possible problems in reconstruction.
The problem I found around the web is in defining the real DOMAIN of each variable and space, and I hope this contributes to clearer ideas about how to handle depth reconstruction and space changes without fear.

In the next post I will simply write down all the methods in the shader I used to test all this stuff!