Monday, January 9, 2012

Position reconstruction from depth (2)

Happy new year guys!
I want to finish my summary for position reconstruction.
The missing part is the position itself, which can be reconstructed either in ViewSpace (also called CameraSpace) or in WorldSpace.

So far we know the relationship between the Post-Perspective and ViewSpace depth of the fragment. If not, go back to the previous post.

The methods are taken directly from the amazing MJP, kudos to him for his work!
My target is again a right-handed coordinate system like OpenGL's, which needs some extra attention
(really, a minus sign in the correct place makes a HUGE difference!).

The methods I'm using are the ones that pass a ray down to the fragment shader.

VIEWSPACE POSITION

Ray Generation
For a fullscreen quad we take the ClipSpace position and multiply by the inverse of the projection matrix to obtain a ray in ViewSpace.

float4 ray = mul( gProjectionInverse, positionCS ); // unproject the quad corner into ViewSpace
ray /= ray.w;                                       // perspective divide
OUT.viewRayVS = ray.xyz;
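For context, those lines live in the fullscreen-quad vertex shader; a minimal sketch (the structure and semantic names are my assumptions, not fixed by the post):

```hlsl
struct VS_OUTPUT
{
    float4 positionCS : POSITION;
    float3 viewRayVS  : TEXCOORD0;
};

VS_OUTPUT vsFullscreen( float4 positionCS : POSITION )
{
    VS_OUTPUT OUT;
    OUT.positionCS = positionCS;                        // quad vertices are already in ClipSpace

    float4 ray = mul( gProjectionInverse, positionCS ); // unproject into ViewSpace
    ray /= ray.w;
    OUT.viewRayVS = ray.xyz;                            // interpolated per-fragment

    return OUT;
}
```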

Position reconstruction
Here we need the LinearDepth between 0 and 1; the exact value depends upon your choice of storage.
The formula is:

viewRayVS * linearDepth

If you are reading from the depth buffer, then you'll need to convert it to ViewSpace and then
divide by the far value.
Pay attention here. If you are using a right-handed coordinate system, your ViewSpace depth will ALWAYS be negative.
So the right-handed passage will be: rawDepth -> viewSpaceDepth (0..-far) -> division by -far
Left-handed: rawDepth -> viewSpaceDepth (0..far) -> division by far
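Putting the pieces together, a minimal fragment-shader sketch for the right-handed case (the sampler and constant names are my assumptions):

```hlsl
float3 getPositionVS( float2 uv, float3 viewRayVS )
{
    // hardware depth in [0,1], i.e. the Post-Perspective depth
    float rawDepth = tex2D( gDepthSampler, uv ).r;

    // back to ViewSpace depth (negative in a right-handed system)...
    float depthVS = ProjectionB / (rawDepth - ProjectionA);

    // ...then to LinearDepth in [0,1]
    float linearDepth = depthVS / -Far;

    return viewRayVS * linearDepth;
}
```

Note that the simple multiply assumes viewRayVS was generated from a quad corner on the far plane, so that viewRayVS.z == -Far; if the quad sits on another plane, the ray must be rescaled accordingly.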


WORLDSPACE POSITION

Ray generation
The generation here is:

ViewDirectionWS = positionWS - cameraPositionWS
and the camera position in WorldSpace can be found as the last column of the inverse of the view matrix.
In HLSL, if you want to access this, you can create a helper method:

float3 getCameraPosition()
{
return gViewInverse._14_24_34;
}
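On the vertex-shader side, the ray is then just the formula above (the output field name is an assumption):

```hlsl
// WorldSpace ray from the camera to the quad corner
OUT.viewRayWS = positionWS.xyz - getCameraPosition();
```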

Position reconstruction
In this case the reconstruction is longer.
Around the web you can find solutions like getting the ViewSpace position and then multiplying it by the inverse of the view matrix.
That works, but it can be a lot faster.
Matt in his blog suggests (here) scaling the depth along the camera zAxis:

float3 viewRayWS = normalize( IN.viewRayWS ); // the HLSL intrinsic is normalize
float3 zAxis = gView._31_32_33;               // ViewSpace z axis, expressed in WorldSpace
float zScale = dot( zAxis, viewRayWS );
float3 positionWS = cameraPositionWS + viewRayWS * depthVS / zScale;


Here we're talking about the real ViewSpace depth, which can be obtained from the post-projection depth by using:
float depthVS = ProjectionB / (depthCS - ProjectionA);
This is already the depth in ViewSpace (always negative for right-handed systems).

When storing the linear depth in 0..1, we need to convert it back to ViewSpace.
In the case of a right-handed system, we stored the linear depth like this:

float depthLinear = depthViewSpace / -Far;

So to get the depth back in ViewSpace, we have:

float depthViewSpace = depthLinear * -Far;
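The whole WorldSpace path then reads, for the right-handed case (sampler and constant names are my assumptions):

```hlsl
float3 getPositionWS( float2 uv, float3 rawViewRayWS )
{
    float linearDepth = tex2D( gDepthSampler, uv ).r;  // stored LinearDepth in [0,1]
    float depthVS = linearDepth * -Far;                // back to ViewSpace depth (negative)

    float3 viewRayWS = normalize( rawViewRayWS );
    float3 zAxis = gView._31_32_33;                    // camera z axis in WorldSpace
    float zScale = dot( zAxis, viewRayWS );            // also negative, so the ratio is positive

    return getCameraPosition() + viewRayWS * depthVS / zScale;
}
```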

Hope this is useful guys.
Credit goes to MJP for his amazing work; this is just a way to summarize all the possible problems in reconstruction.
The problem I found around the web is in defining the real DOMAIN of each variable and space, and I hope this contributes to clearer ideas about how to handle depth reconstruction and space changes without fear.

In the next post I will simply write down all the methods in the shader I used to test all this stuff!
