Motion Compensation implementation notebook



sebastianaalton
05-27-2008, 06:10 PM
The current Trials 2 Second Edition graphics engine features per pixel motion blur. The motion blur is implemented by first storing the screen space 2D motion vector of each pixel to an offscreen surface (one vector per screen output pixel, written during the deferred rendering geometry pass). Then, in a post process pass, the shader samples the result texture 7 times along each pixel's motion vector. This works pretty well and smooths the frame transitions nicely. However, if the framerate drops low enough, the image becomes very blurry and no longer feels smooth.
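For illustration, here is a minimal CPU-side sketch of that 7-tap sampling loop. The real version is a pixel shader; the types and the nearest-neighbor sampleColor helper below are hypothetical stand-ins for shader texture fetches.

#include <algorithm>
#include <vector>

struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

// Nearest-neighbor fetch with clamp-to-edge; stands in for a shader texture sample.
static Float3 sampleColor(const std::vector<Float3>& image, int w, int h, Float2 uv)
{
    int px = std::clamp(int(uv.x * w), 0, w - 1);
    int py = std::clamp(int(uv.y * h), 0, h - 1);
    return image[py * w + px];
}

// Average 7 taps along the pixel's screen space motion vector, centered on the pixel.
static Float3 motionBlurPixel(const std::vector<Float3>& image, int w, int h,
                              Float2 uv, Float2 motion)
{
    const int NUM_TAPS = 7;
    Float3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < NUM_TAPS; ++i) {
        float t = i / float(NUM_TAPS - 1) - 0.5f;  // t in [-0.5, 0.5]
        Float2 tap = { uv.x + motion.x * t, uv.y + motion.y * t };
        Float3 c = sampleColor(image, w, h, tap);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= NUM_TAPS; sum.g /= NUM_TAPS; sum.b /= NUM_TAPS;
    return sum;
}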

Last weekend I bought a new Full HD (1920x1080) 100 Hz TV. It has a very good built-in motion compensation system: a 24 frames per second Blu-ray movie feels very fluid and lifelike because of it. So I thought, why not implement a similar system in our graphics engine? With the deferred renderer already in place, it's actually very much doable (and shouldn't require more than one or two workdays to implement). This way we could potentially double or triple our framerate without noticeable errors in rendering quality.

Currently I have planned two alternative ways to implement the motion compensation system.

A)
Use the same per pixel 2D motion vector system that our motion blur currently uses. After a frame is ready to be displayed, copy the motion vector buffer and the final color buffer for later use. During the next frame's rendering (after the g-buffers have been rendered), render a single quad to the back buffer using these two buffers as textures (and flip buffers). For each pixel, offset the texture coordinate by the motion vector divided by two and sample the stored final color buffer at those coordinates. This extra pass is very quick, and it doubles the motion fluidity of the scene.
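A sketch of that per-pixel reprojection, reusing the Float2/Float3 types and sampleColor helper from the earlier snippet (the sign of the motion vector offset depends on the engine's storage convention, so treat it as an assumption):

// Build the in-between frame: for each output pixel, step half a frame along
// its stored motion vector and fetch the previous final color from there.
static Float3 reprojectHalfFrame(const std::vector<Float3>& lastColor,
                                 const std::vector<Float2>& motion,
                                 int w, int h, int px, int py)
{
    Float2 uv = { (px + 0.5f) / w, (py + 0.5f) / h };
    Float2 mv = motion[py * w + px];
    // "Motion vector divided by two": the compensated frame sits halfway
    // between the last real frame and the next one.
    Float2 src = { uv.x + mv.x * 0.5f, uv.y + mv.y * 0.5f };
    return sampleColor(lastColor, w, h, src);
}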

This method only makes the rendering look smoother; the game controls do not become twice as responsive. So the game may look very good on a mainstream graphics card at a solid 60 fps, but it still plays like a 30 fps game (the controls feel as laggy as before).

B)
Split the heavy deferred rendering image creation process into two parts: first render only the g-buffers, then render the lights and post process effects. Normally the frame is final after the post process effects and is displayed on the screen. Now that the rendering is split into two steps, we can run another logic tick between them and update the screen buffer with a nice projection trick we already use in the motion blur system.

As with technique A, we store the final color buffer of the last rendered frame. However, we do not use motion vectors at all. Instead we render the scene geometry again with an ultra-light shader that samples only one texture (the last frame's final image). As in the motion vector calculation, we give the vertex shader both the current object transformation matrix and the last frame's matrix. Using the last frame's matrix, the pixel shader can calculate the screen space 2D position this pixel had in the last frame, and we use the color at that position as the color of the newly rendered pixel.

So in practice everything moves at 2x framerate and responds perfectly. The only difference from real 2x frame rendering is that we do not calculate the pixel colors at all during every other frame; we just move the pixels to new positions according to the updated object transformation matrices. This one extra geometry frame is approximately twice as fast as one "low mode" rendering pass (low mode is almost 10x faster than high mode), so the framerate would almost double with this trick, with hardly any noticeable graphics glitches.
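A sketch of the per-vertex math behind that ultra-light shader, again as plain C++ (the matrix layout and the Y-flip are assumptions; Float2 comes from the first snippet):

struct Float4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row index first: m[row][col]

static Float4 mul(const Mat4& m, const Float4& v)
{
    return {
        m.m[0][0] * v.x + m.m[0][1] * v.y + m.m[0][2] * v.z + m.m[0][3] * v.w,
        m.m[1][0] * v.x + m.m[1][1] * v.y + m.m[1][2] * v.z + m.m[1][3] * v.w,
        m.m[2][0] * v.x + m.m[2][1] * v.y + m.m[2][2] * v.z + m.m[2][3] * v.w,
        m.m[3][0] * v.x + m.m[3][1] * v.y + m.m[3][2] * v.z + m.m[3][3] * v.w
    };
}

struct ReprojVertex { Float4 clipPos; Float2 lastFrameUv; };

// Project the vertex with both the current and the last frame's
// world-view-projection matrix. The current position drives rasterization;
// the last frame position, after the perspective divide, becomes the texture
// coordinate into the previous final image.
static ReprojVertex reprojectVertex(Float4 objectPos,
                                    const Mat4& mvpNow, const Mat4& mvpLast)
{
    ReprojVertex out;
    out.clipPos = mul(mvpNow, objectPos);
    Float4 prev = mul(mvpLast, objectPos);
    // Perspective divide, then map clip space [-1, 1] to texture space [0, 1].
    out.lastFrameUv.x = prev.x / prev.w * 0.5f + 0.5f;
    out.lastFrameUv.y = 1.0f - (prev.y / prev.w * 0.5f + 0.5f);  // flip Y for texture space
    return out;
}

In the real shader the previous-frame position would be interpolated and divided per pixel, as the post describes; doing it per vertex here just keeps the sketch short.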

Just my thoughts. If this technique works as well as planned, expect to see it in the v1.08 patch.

sebastianaalton
05-28-2008, 03:25 PM
Method B is implemented and it works very well, even if I render only one real frame every 64 frames (which makes the game reach over 500 fps on my computer in high mode with all effects at 1920x1200). The only very visible image quality problem is with surfaces that are not visible when the real frame is rendered but become visible during the motion compensation frames. To fix this issue, the real frame must be rendered ahead of time (estimating the world and camera state in the near future), and we must also keep the old rendered frame (so two frames are always stored). Then, in the motion compensation shader, compare which stored z-coordinate (kept in the color alpha channel) is closer to the real z-coordinate (the projected z-coordinate must first be transformed to view space), and use that texture for the pixel. This should fix the problem with almost no performance hit.
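The per-pixel selection could look roughly like this (reusing Float4 from above; the alpha-channel depth encoding follows the post, the rest is an assumption):

#include <cmath>

// Pick between the "last" and "future" stored frames for one pixel: the frame
// whose stored view space depth (kept in the alpha channel) is closer to the
// depth just computed for this pixel wins, rejecting samples from surfaces
// that were hidden when that frame was rendered.
static Float4 pickByDepth(Float4 lastSample, Float4 futureSample, float viewSpaceZ)
{
    float errLast   = std::fabs(lastSample.w   - viewSpaceZ);
    float errFuture = std::fabs(futureSample.w - viewSpaceZ);
    return (errLast <= errFuture) ? lastSample : futureSample;
}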

[Update]

This solution also gives us two additional advantages:
- As most pixels are visible in both the "last" and the "future" screen, we can average those pixel colors to get 2x supersampling for free on those pixels (see the sketch after this list).
- All rendering to the back buffer is basically standard forward rendering; only the textures are generated by the deferred rendering system. This allows us to properly support hardware MSAA on all object edges.
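Combining the depth test with that free supersampling might look like the following. The tolerance-based visibility test and zTolerance value are my own extrapolation for the sketch, not something the engine is confirmed to use:

// Where both stored frames see the surface, average them for the free 2x
// supersampling; otherwise fall back to the single closer-depth sample.
static Float4 resolvePixel(Float4 lastSample, Float4 futureSample,
                           float viewSpaceZ, float zTolerance)
{
    bool lastOk   = std::fabs(lastSample.w   - viewSpaceZ) < zTolerance;
    bool futureOk = std::fabs(futureSample.w - viewSpaceZ) < zTolerance;
    if (lastOk && futureOk) {
        return { 0.5f * (lastSample.x + futureSample.x),
                 0.5f * (lastSample.y + futureSample.y),
                 0.5f * (lastSample.z + futureSample.z),
                 viewSpaceZ };
    }
    return pickByDepth(lastSample, futureSample, viewSpaceZ);
}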

rlmergeuser
06-13-2008, 02:13 AM
Sounds like a good idea. My PC is not that strong, so sometimes I get weird framerate lag; if this could smooth out some of the issues I have been experiencing, I am all for it. A few times I have gotten faults from lagging out.