Monday, January 9, 2012
Position reconstruction from depth (2)
I want to finish my summary of position reconstruction.
The missing part is the position itself, which can be reconstructed in ViewSpace (or CameraSpace) and in WorldSpace.
So far we know the relationship between Post-Perspective and ViewSpace depth of the fragment. If not, roll back to the previous post.
The methods are taken directly from the amazing MJP, kudos to his work!
My target is again an OpenGL-style setup, with a right-handed view space looking down the negative z axis, which needs some extra attention
(really, a minus sign in the correct place makes a HUGE difference!).
The methods I'm using are the ones that pass a ray down to the fragment shader.
VIEWSPACE POSITION
Ray Generation
For a fullscreen quad we take the ClipSpace position and multiply by the inverse of the projection matrix to obtain a ray in ViewSpace.
// Unproject the full-screen quad vertex from clip space to view space
float4 ray = mul( gProjectionInverse, positionCS );
ray /= ray.w;            // perspective divide
OUT.viewRayVS = ray;     // interpolated down to the fragment shader
Position reconstruction
Here we need the linear depth between 0 and 1; how you obtain it depends on your choice of storage.
The formula is:
positionVS = viewRayVS * linearDepth
If you are reading from the depth buffer, then you'll first need to convert the raw value to view space and then
divide by the far value.
Pay attention here. If you are using a right-handed coordinate system (looking down the negative z axis), your ViewSpace depth will ALWAYS be negative.
So the right-handed path will be: rawDepth -> viewSpaceDepth (negative, down to -far) -> division by -far
Left-handed: rawDepth -> viewSpaceDepth (0..far) -> division by far
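As a concrete reference, here is a minimal fragment-shader sketch of this path, assuming the linear depth was stored as viewSpaceDepth / -far and that viewRayVS is the interpolated ray computed above (depthMap and the parameter names are hypothetical):

// Reconstruct the view-space position from a 0..1 linear depth
float3 reconstructPositionVS( float2 uv, float3 viewRayVS )
{
    float linearDepth = tex2D( depthMap, uv ).r;   // 0..1, stored as depthVS / -far
    return viewRayVS * linearDepth;                // view-space position of the fragment
}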
WORLDSPACE POSITION
Ray generation
The generation here is:
ViewDirectionWS = positionWS - cameraPositionWS
and the camera position in WorldSpace can be found in the last column (the translation part) of the inverse of the view matrix.
In HLSL, if you want to access this, you can create a helper method:
float3 getCameraPosition()
{
    // Translation part of the inverse view matrix = camera position in world space
    return gViewInverse._14_24_34;
}
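In the vertex shader the ray generation is then just one line (a tiny sketch; positionWS and the OUT structure are assumed to exist in your shader):

// World-space ray from the camera to the vertex, interpolated down to the fragment shader
OUT.viewRayWS = positionWS.xyz - getCameraPosition();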
Position reconstruction
In this case the reconstruction is longer.
Around the web you can find solutions that first reconstruct the view-space position and then multiply it by the inverse of the view matrix.
That works, but it can be done a lot faster.
Matt suggests on his blog (here) to scale the depth along the camera z axis:
float3 viewRayWS = normalize( IN.ViewRayWS );
float3 zAxis = gView._31_32_33;            // camera z axis in world space
float zScale = dot( zAxis, viewRayWS );    // portion of the ray along the view z axis
float3 positionWS = cameraPositionWS + viewRayWS * depthVS / zScale;
Here we're talking about the real view-space depth, which can be converted from the post-projection depth (using the A and B values defined in the previous post) with:
float depthVS = -ProjectionB / (depthCS + ProjectionA);
This is already the depth in view space (which is always negative for right-handed systems).
When storing the linear depth between 0..1, we'll need to convert it back to view space.
In the case of a right-handed system, we stored the linear depth like this:
float depthLinear = depthViewSpace / -Far;
So to get the ViewSpace depth back, we have:
float depthViewSpace = depthLinear * -Far;
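Putting the pieces together, here is a minimal sketch of the whole world-space path, assuming a right-handed view space, linear depth stored as depthViewSpace / -Far, and hypothetical uniforms gView, gFar and depthMap:

// Reconstruct the world-space position of a fragment from stored linear depth
float3 reconstructPositionWS( float2 uv, float3 viewRayWS, float3 cameraPositionWS )
{
    float linearDepth = tex2D( depthMap, uv ).r;   // 0..1
    float depthVS     = linearDepth * -gFar;       // back to view-space depth (negative)
    float3 rayWS      = normalize( viewRayWS );
    float3 zAxis      = gView._31_32_33;           // camera z axis in world space
    float  zScale     = dot( zAxis, rayWS );
    return cameraPositionWS + rayWS * depthVS / zScale;
}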
Hope this is useful guys.
Credit goes to MJP for his amazing work; this is just a way to summarize all the possible pitfalls in the reconstruction.
The problem I found around the web is that the real DOMAIN of each variable and space is rarely defined, and I hope this contributes to clearer ideas about how to handle depth reconstruction and space changes without fear.
In the next post I will simply write down all the methods in the shader I used to test all this stuff!
Monday, November 28, 2011
Position reconstruction from depth (1)
This post is just a quick recap of the possible ways to reconstruct position from the depth buffer that I found around: almost all the credit goes to http://mynameismjp.wordpress.com/.
Let's define (again) the problem:
Reconstruct pixel position from the depth buffer.
Applying a personal way of seeing code, let's highlight Data and Transformations.
In my experience, I came up with a simplistic idea about coding:
Coding is a sequence of Data transformed into other Data.
I know it is very simplistic and low level (we're not taking any architecture into account), but this is a low-level view of the problem.
And more views of the same problem can shed more light on the true nature of the problem itself (as in life in general).
In this problem we have two pieces of data: the Pixel Position and the Depth buffer.
The transformation is the reconstruction.
To understand further, we can define the Domains of the data.
- Pixel position can be either in World Space or in View Space
- Depth buffer can be encoded either as linear or as post-perspective z (raw depth buffer)
So the transformations go from Depth buffer to Pixel position and can be:
- Linear depth buffer to View Space position
- Linear depth buffer to World Space position
- Post-Perspective depth buffer to View Space position
- Post-Perspective depth buffer to World Space position
To finish, we have two other transformations:
- Encode to Linear depth buffer
- Encode to Post-Perspective depth buffer
The post-perspective one is handled by the hardware, and it is what sits inside the real depth buffer.
The linear one maps the eye/camera/view space z to the domain 0..1.
To really finish this introduction to the problem, we must know a little bit about our coordinate system. Moving data from world to view to projection spaces, we must define those domains.
We can just skip World space and concentrate on the others.
If we follow OpenGL or DirectX APIs, we know that they are different in both spaces:
- OpenGL uses a right-handed system for the view space
- DirectX uses a left-handed one
- OpenGL uses a cube between (-1, 1) on x,y,z as projection cube
- DirectX uses a cube between (-1,1) on x,y and (0, 1) on z
Using a right-handed system means looking down the negative z axis. Keep this in mind.
ENCODING AND DECODING TRANSFORMATIONS
In this section we'll talk about encoding: what do we want to encode?
The raw depth buffer contains a depth derived from the view-space depth, encoded in a simple way that depends on your projection matrix.
Let's take only the relevant part of the matrix (the bottom-right 2x2 corner), that is:
( A   B )
( -1  0 )
and multiply it with the point in viewspace Pview(zView, 1).
Doing the multiplication gives:
Pclip = ( A * zView + B, -zView )
To become a proper point (1D here) we apply the division by w:
Pndc = ( (A * zView + B) / -zView, 1 ), which further simplified becomes
Pndc = ( -A - (B / zView), 1 ).
So zNdc is -A - (B / zView).
This is the way in which the depth is encoded in the depth buffer, and the value is between -1 and 1.
Note: if you try the math and plug in zView = n and zView = f, you'll notice that the values are not mapped correctly between -1 and 1. This is because we're looking down negative z, so the correct values are zView = -n and zView = -f: for example, zView = -n gives zNdc = -A + B/n, which with the A and B defined below works out to -1, and zView = -f gives +1, as expected.
To find zView, just solve for zView and we obtain:
zView = -B / (zNdc + A )
So now we have defined the two transformations:
Projection-Space Encoding: -A - (B / zView )
Projection-Space Decoding: -B / (zNdc + A)
Ok then, it's finished.
Wait... what are those A and B???
Those values depend again on the choice of your projection matrix.
In OpenGL they are defined as (n is near plane, f is far plane):
A = - (f + n) / (f - n)
B = -2 * n * f / (f - n)
These values can be easily calculated and passed to the shaders (don't bother computing them inside a shader; they are perfect candidates to be set once per frame with the other frame constants) to reconstruct depth.
Linear depth encoding is different. We're still encoding view-space depth, but it becomes easier. The values in camera/eye/view space are like world-space ones, just centered around the camera. For right-handed systems we will encode the negative z values, because the camera is looking into the negative z half-space.
The z values will be in the range (0, -infinity); the projection will take care of getting rid of values closer than the near plane and farther than the far plane.
Linear depth encoding: zLin = -zView / f
Linear depth decoding: zView = -zLin * f
The encoded values are between 0 and 1.
Finally...some CODE!!!
This is POST-PROJECTION DEPTH:
// Calculate A and B (ideally done once per frame and passed as shader constants)
float rangeInv = 1.0f / (gFar - gNear);
float A = -(gFar + gNear) * rangeInv;
float B = -2.0f * gFar * gNear * rangeInv;

// Write -1..1 post-projection z (projParams.xy holds the A and B above)
float encodePostProjectionDepth( float depthViewSpace )
{
    float depthCS = -projParams.x - (projParams.y / depthViewSpace);
    return depthCS;
}

// Read -1..1 post-projection z (as written by encodePostProjectionDepth)
float decodePostProjectionDepth( float2 uv )
{
    float depthPPS = tex2D( depthMap, uv ).r;
    return depthPPS;
}

// Reconstruct view-space depth (negative, from -near to -far)
float decodeViewSpaceDepth( float2 uv )
{
    float depthPPS = decodePostProjectionDepth( uv );
    float depthVS = -B / (A + depthPPS);
    return depthVS;
}
This is LINEAR DEPTH:
// Encode view-space depth into 0..1
float encodeLinearDepth( float depthViewSpace )
{
    return -depthViewSpace / gFar;
}

// Decode the stored 0..1 linear depth
float decodeLinearDepth( float2 uv )
{
    float linearDepth = tex2D( depthMap, uv ).r;
    return linearDepth;
}

// Reconstruct view-space depth (negative, from -near to -far)
float decodeViewSpaceDepth( float2 uv )
{
    float linearDepth = decodeLinearDepth( uv );
    return linearDepth * -gFar;
}
As you can see, linear depth is easier to encode and decode, but it's more expensive from a memory point of view (you'll need an additional render target), while the raw depth buffer is something you already have, so that information comes for free.
Wednesday, November 23, 2011
Rendering Architecture (2)
As we saw in the previous post, each time we want to render we must provide three groups of information:
- Geometry information
- Shading information
- Render states
We set this information in various ways; for example on DX/X360 we use the Set*** commands and Draw*** to issue the draw call.
The geometry information covers the vertex buffer, the vertex format/declaration and the index buffer; the shading information covers the various shaders (vertex, fragment, geometry, ... depending on the API) and the data used by the shaders (constants and textures); the render states are all the other settings, like render targets, depth/stencil and alpha blending, that is, all the configurable states that DirectX 10 and 11 group into state objects.
The render interface is thus split in two: a RenderContext, which only sets the information needed to issue draw calls and draws, and a RenderDevice, which manages the creation, destruction and mapping/unmapping of low-level graphic resources.
This division makes it easy to separate what is "deferrable" from what is not, so if you want to create your own command buffer or use the DX11 one (good luck) you already know that the RenderContext is the right guy to call.
Every renderable object has a render method that receives the RenderContext, so that it can set the data for its draw calls.
The real catch is to use the curiously recurring template pattern to create the interface for both the RenderContext and the RenderDevice, and to write a different implementation for each platform: even though you need to typedef the specific template instantiation, you can assume (see above) that for each target you have only ONE concrete RenderContext type.
The methods of the API-dependent class can all be protected so that the interface is enforced, and inlining all the calls in the RenderContext class maps a call on your render context to a direct call of the implementation method, thus avoiding virtuals thanks to "static polymorphism".
Even though on PC this is not a real cost, on consoles (I really suggest you try it, if you can) calling virtual functions a LOT of times is a bad hit (especially on PS3), so let's try to figure out the numbers:
if you have 1000 draw calls, you'll probably have 4 or 5 RenderContext calls (SetVertexBuffer, SetIndexBuffer, SetVertexShader, SetPixelShader, SetConstants, SetVertexFormat, DrawIndexed, ...) for each draw call, thus making 4000-5000 virtual calls per frame. So you end up with 4000-5000 cache misses per frame without any apparent reason, and the cost of a cache miss on consoles varies, but can be from 40 to 600 cycles for each call.
How many cycles are we wasting? At those rates it's roughly 160,000 to 3,000,000 cycles per frame, just in call overhead.
With this system you have a common interface and no virtuals. It's no silver bullet, but finding a solution to a problem requires a correct definition of its constraints...
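A minimal sketch of the idea, with hypothetical class and method names (not the engine's actual interface), just to show how the static dispatch works:

class VertexBuffer;

// Common interface: calls are forwarded to the platform implementation at compile time.
template <typename Impl>
class RenderContext
{
public:
    void SetVertexBuffer( VertexBuffer* vb )  { static_cast<Impl*>( this )->SetVertexBufferImpl( vb ); }
    void DrawIndexed( unsigned int count )    { static_cast<Impl*>( this )->DrawIndexedImpl( count ); }
};

// One concrete implementation per platform; the Impl methods are protected to enforce the interface.
class D3D9RenderContext : public RenderContext<D3D9RenderContext>
{
    friend class RenderContext<D3D9RenderContext>;
protected:
    void SetVertexBufferImpl( VertexBuffer* vb ) { /* IDirect3DDevice9::SetStreamSource, ... */ }
    void DrawIndexedImpl( unsigned int count )   { /* IDirect3DDevice9::DrawIndexedPrimitive, ... */ }
};

// Each target typedefs its single concrete type once.
typedef D3D9RenderContext RenderContextPlatform;

Since the concrete type is known at compile time, every call compiles down to a direct (inlinable) call: no vtable lookup, no extra cache miss.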
BFP
Monday, October 31, 2011
Rendering architecture (1)
It's been too long since I last updated this blog!
I want to share with you the way I describe rendering in my home engine, which I use to study techniques quickly.
I've been doing it this way since 2008, so it's starting to become quite mature, but I'm sure it will improve in the future!
Let's start from the objective: I wanted to create something flexible and quick to iterate on, to study all the different rendering techniques I'm interested in.
Speaking of DX9+ GPUs, the context of the problem has some constraints.
The rendering APIs are state machines, and to draw we must provide at least this information:
1) Vertex Buffer
2) Index Buffer
3) Vertex Format
4) Vertex Program
5) Fragment Program
6) Shader Constants
7) Render States
8) Render Targets
We can see Rendering as the problem of HOW to create this information and send it to the GPU.
Rendering nowadays can be seen as a sequence of draw calls applied to render targets.
So a while back I started to define the concept of a Rendering Pipeline as a sequence of draw calls into a render target.
Each of those sequences becomes a Stage of the pipeline.
At the end of the day we have
Pipeline
Stage0 -> Stage1 -> Stage2
This sounds a little simplistic, and it is simple indeed, but it's really powerful.
An example of a Stage can be as simple as a ForwardRendering stage that renders into the framebuffer (which can be seen as a Render Target),
a GBuffer generation, Shadow creation, SSAO, ...
For each stage we define some RenderTargetInputs and Outputs, to link the different stages together.
If the picture is clear, each frame becomes a series of stage renderings.
To describe the rendering I started using an XML file; a sketch of what it looks like is shown after the description below.
Each stage type corresponds to a class that extends the Stage class in the code.
- Each stage has some RenderStates set before rendering
- Each stage can draw per material or providing a shader
- Each stage that describes a postprocess draws a single quad and has an associated shader.
Refining the rendering problem further, I found that one piece of information was still missing: which rendering objects are sent to each stage.
How can I define them?
I ended up with what I called a RenderView, which is a camera plus a list of "render instances". So now each stage renders a particular RenderView, has some render states defined, plus an optional shader if it is a postprocess.
The point is... why not just describe all of this in the XML?
Enter ScriptableStage!
This stage has a RenderView, some render states and an optional shader, all defined in the XML. This gives me the flexibility to describe a lot of the rendering techniques out there.
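Something along these lines, a purely hypothetical sketch (stage names, types and attributes are made up for illustration, not the engine's actual schema):

<Pipeline name="deferredTest">
    <Stage type="GBufferStage" name="gbuffer">
        <RenderView camera="main" instances="opaque" />
        <Output target="gbufferRT" />
    </Stage>
    <Stage type="ScriptableStage" name="ssao" shader="ssao">
        <RenderView camera="main" instances="fullscreenQuad" />
        <RenderState depthTest="false" blend="none" />
        <Input target="gbufferRT" />
        <Output target="ssaoRT" />
    </Stage>
    <Stage type="ScriptableStage" name="composite" shader="composite">
        <RenderView camera="main" instances="fullscreenQuad" />
        <Input target="gbufferRT" />
        <Input target="ssaoRT" />
        <Output target="framebuffer" />
    </Stage>
</Pipeline>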
After some more technique explorations, I found another emerging pattern:
Each time I want to render, I need to provide 3 groups of information:
- Geometry
- Shading
- Render states
Once you define all of this information, you can describe any rendering you want! Even materials can be seen as a list of shaders, one per stage, and can be described through XML...
The application logic that sits on top mostly just has to provide the geometry and link it with materials (which are shaders + render states).
With this simple description I found a very powerful way to describe the rendering and to prototype/explore very fast.
In the next post I'll also describe a way to achieve a simple reloading mechanism and to avoid virtuals in the renderer itself, to be more console friendly ;)
Enjoy!
Saturday, May 8, 2010
What about your challenge?
But some days later, I ended up doing an interview @ Codemasters.
And the big news is... NOW I WORK AT CODEMASTERS!!!
A big leap in the possibility to grow and study techniques, doing work that I LOVE at a great company!
But hey... what about your challenge?
My challenge is coming back, even if I decided to restart from scratch.
Why that?
In the meantime I've read some books around (mainly c++ stuff...) and ended up with some new ideas.
Also, my goal is to create a framework to explore all the papers and theses around the web that catch my attention!
But all of this will not happen right now: first I have to move into the apartment and get all the stuff out of the boxes!
Stay tuned for more info!
Friday, March 19, 2010
Progress report: Day 1
As you know, I'm facing the challenge of creating a small game engine in 10 days.
This can be considered a game to improve both my speed and my understanding of game-engine design decisions. I found that personally, when coding at home (without time constraints), I spend too much time studying before making a design decision. Once the decision is taken, and some UML is written down, I code well with everything in my mind (and on paper).
To improve my decision time I chose to train myself (as with guitar, breakdancing, motorbikes...).
What is the first principle of training? Practice, practice and PRACTICE.
Reading some books about NLP and self-improvement, I found in Anthony Robbins a perfect description of what he calls the 'decision muscle':
Decision -> Action -> Results and Feedback.
It is the SAME as a controller in a dynamic system (automation systems...), and it applies to every field of life.
And decision making is crucial also in engine design.
To improve decision making, I found four things really useful:
- Learn more. Learn everything. Many times knowing different techniques improves your understanding of other problems: every technique and design has a mentality behind it. Learning the MENTALITY is powerful; then apply it to other fields. E.g.: the data-oriented structure-of-arrays layout used in vectorized math and shaders can be successfully applied to game objects (the Pitfalls_of_Object_Oriented_Programming paper is a good example).
- Try completely different approaches.
- Practice, practice, practice!
- Learn from mistakes:
there are no failures, there are only results
(Robbins)
So I finally decided to begin another iteration of my home engine with the new knowledge from books and papers.
FIRST DAY
- Total time: 10 hours
- Visible results: a running window, multithreaded engine loop up and running.
I've almost finished the Platform Abstraction Layer, on which every other layer will rely.
The first thing I noticed is that you must create a complete environment for other programmers to work with your engine. Many times, due to lack of documentation or time to study, it's better to create restrictions on how the code is used and rails for the other programmers.
For example, if you want complete control over how classes are accessed, you can create some macros that forbid (declare private) the copy constructor and assignment operator. Or you can typedef pointer, const pointer, reference and so on, and ALWAYS use them.
Another example is error detection: a bunch of macros like _CHECK( condition ), _ASSERT( condition ), _ASSERTNOFAIL( condition ) can be EXTREMELY useful for a consistent way of developing. You can redirect all these macros to write your messages to a global output device, transparently to the programmers: think of an external window that pops up when you are debugging, showing everything sent every time a check fails (or succeeds), something is missing, or an assert fires.
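As an illustration, a couple of minimal sketches of what such rails might look like (macro and type names here are hypothetical, not the engine's actual ones):

// Forbid copying of a class by declaring the copy constructor and assignment operator private.
#define FORBID_COPY( ClassName )                    \
    private:                                        \
        ClassName( const ClassName& );              \
        ClassName& operator=( const ClassName& );

// Consistent typedefs used everywhere instead of raw pointer/reference syntax.
class Task
{
    FORBID_COPY( Task )
public:
    typedef Task*        Ptr;
    typedef const Task*  ConstPtr;
    typedef Task&        Ref;

    Task() {}
};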
Consistency is the key.
Decide the guidelines for how you want to handle situations, and then use all the C++ features to enforce this Consistency.
Looking at other code, I've felt that when you give too many degrees of freedom to the people using your engine, they will always do something you did not expect.
This leads to the need of "extending" C++, or of using it to place constraints and quickly change behaviour.
Constraints.
Flexibility.
These two words are usually opposites, but offering a limited number of precise choices leads people to pick one of the paths you thought of.
Take the following code snippet:
HBOOL WorkerThread::GiveUpSomeWork(WorkerThread* pIdleThread)
{
    SpinMutexLock Locker;
    _HPKM_CHECK(Locker.TryLock(&m_oTaskMutex));
    // Nothing to give away.
    _HPKM_CHECK(m_uiTaskCount);

    // Grab work
    SpinMutexLock LockIdleThread(&pIdleThread->m_oTaskMutex);

    // Taskpool already has some new tasks, quit.
    _HPKM_CHECK(!pIdleThread->m_uiTaskCount);

    // We have only 1 task, try to split it.
    if (m_uiTaskCount == 1)
    {
        TaskPtr pTask = HNULL;
        if (m_apTasks[0]->Split(pIdleThread, &pTask))
        {
            pTask->m_pCompletion->MarkBusy(HTRUE);
            pIdleThread->m_apTasks[0] = pTask;
            pIdleThread->m_uiTaskCount = 1;
            return HTRUE;
        }
    }

    // Grab half of the tasks (rounding up)
    U32 uiGrabCount = (m_uiTaskCount + 1) / 2;

    // Copy this thread's tasks to the idle thread's list.
    TaskPtrPtr ppTask = pIdleThread->m_apTasks;
    U32 i;
    for (i = 0; i < uiGrabCount; i++)
    {
        *ppTask++ = m_apTasks[i];
        m_apTasks[i] = HNULL;
    }
    pIdleThread->m_uiTaskCount = uiGrabCount;

    // Move the remaining tasks down
    ppTask = m_apTasks;
    for ( ; i < m_uiTaskCount; i++)
    {
        *ppTask++ = m_apTasks[i];
    }
    m_uiTaskCount -= uiGrabCount;

    return HTRUE;
}
The macro _HPKM_CHECK checks the condition and returns HFALSE if the condition is not met.
That is the Release version. In the debug or profile versions, you can substitute it with commands that also send an event to an output device, or maybe print something to the game console.
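A minimal sketch of how such a macro could be defined; the configuration define and the logging call are hypothetical, just to show the release/debug switch:

#if defined( _RELEASE )
    // Release: silently early-out when the condition is not met.
    #define _HPKM_CHECK( condition )                                         \
        do { if ( !(condition) ) { return HFALSE; } } while ( 0 )
#else
    // Debug/profile: also report the failed check to the global output device.
    #define _HPKM_CHECK( condition )                                         \
        do                                                                   \
        {                                                                    \
            if ( !(condition) )                                              \
            {                                                                \
                OutputDevice::Get().Write( "Check failed: " #condition );    \
                return HFALSE;                                               \
            }                                                                \
        } while ( 0 )
#endif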
As you saw, the TaskPtr type is also a typedef. This ensures that we can experiment with letting some classes use smart pointers, and measure access timings with/without smart pointers, in a transparent way.
As an Engine Programmer, I see the engine not just as a mess of C++ code (plus a bunch of other languages in other subsystems...), but as a TOOL with which everyone MUST be able to express himself.
For me, this is something that is really lacking in many engines, even commercial ones.
Placing many small constraints, guides and hints gives everyone the power to use the engine in its full glory.
With macros, templates and defines (really nothing new...) you have to provide CREDITED TOOLS to code with.
You have to redirect almost ALL the calls inside your code in a way YOU decide.
Almost EVERY function call must be under your control. Even simple memcpy, strlen, sin... they must be wrapped, and even when they end up calling the standard functions, that has to be your decision.
This (almost) TOTAL ABSTRACTION leads to better code control and easier optimization later.
Even in coding, DECIDING EVERYTHING is the key! Decide whether every sin call goes to a modified version, maybe with a table lookup, or to the standard function. But you have to DECIDE!
Besides those considerations, I've worked on the platform abstraction. This includes:
- All types redefinition;
- Multithread-pooltask implementation (completely abstract);
- Timing management;
- Engine architecture based on an abstract engine, and external-declared subsystems;
- Client definition (under Windows, a window that handles OS messages);
Avoiding the almost-hated virtual table, a real enemy in intensive operations on Xbox 360 and PS3, can be achieved in a very straightforward way.
The final result is an (almost) fully controlled window that handles its messages and calls into its task pool to run every kind of task, divided across all the worker threads.
I'm implementing some interesting things like parallel data processing (parallel_for...) and want to continue this way.
Of course, these are all my own thoughts, and I know there are many other ways of doing the same things better.
But you know, I have 9 more days to finish the engine!
Demiurge
Wednesday, March 17, 2010
Challenge!
I want to work on my decision time, which I want to improve, and also speed up my coding.
So I decided to set myself a challenge: write a multithreaded mini game engine in the shortest time possible. I'm giving myself 10 days. I know for sure there are days when I won't even have my PC with me (next weekend, and a couple of days next week...), but I want to try this.
The goal is to provide a basic framework for Windows 7, DirectX 9.0c and Cg, capable of letting me create a small game.
Will I be strong enough to win?
Let's see!
Every day I'll write down which part I'll work on.
The challenge will begin on Thursday, so stay tuned!