• Read the three papers and decide whether they are related to FR (not directly, but many of their techniques could be used to accelerate rendering)
    • [He, Extending, 2014]: The idea is essentially the same as coarse pixel shading, but this is an approach for forward shading.
    • [Yee, Spatiotemporal, 2001]: Accelerates global illumination computation for dynamic environments by exploiting properties of the human visual system.
    • [Liktor, Decoupled Deferred, 2012]:
      • compact geometry buffer: stores shading samples independently from the visibility
    • [Clarberg, AMFS, 2014]: a hardware architecture for pixel shading that enables flexible control of shading rates and automatic shading reuse between triangles in tessellated primitives.
    • [Foveated Real-Time Ray Tracing for Virtual Reality Headset]
      • foveated sampling
    • [Combining Eye Tracking with Optimizations for Lens Astigmatism in modern wide-angle HMDs]
      • Foveated sampling: taking the per-pixel minimum of two sampling maps (one for lens astigmatism, one for the current eye gaze) in the foveated region.
    • [Perception-driven Accelerated Rendering, 2016]
      • survey
    • [A Retina-Based Perceptually Lossless Limit and a Gaussian Foveation Scheme With Loss Control, 2014]:
      • not related to foveated rendering
    • [User, Metric, and Computational Evaluation of Foveated Rendering Methods]
    • [H. Tong and R. Fisher. Progress Report on an Eye-Slaved Area-of-Interest Visual Display. Defense Technical Information Center, 1984.]
    • [Proceedings of the 1990 Symposium on Interactive 3D Graphics]
  • Multisampling & Supersampling
    • Multisampling: applies extra samples only at the edges of primitives
    • Supersampling: samples the whole frame at a higher rate and then filters it down to the target resolution
  • Forward rendering & deferred rendering
  • User, Metric, and Computational Evaluation of Foveated Rendering Methods
    • Compare 4 foveated rendering methods
      • Lower resolution for foveated view
      • Screen-Space Ambient Occlusion instead of global ambient occlusion
      • Terrain Tessellation
      • Foveated Real-time Ray-Casting
    • Provide foveated image metric
      • HDR-VDP: compare two images and get visibility and quality
        • Only spatial artifacts are considered! Temporal artifacts are not considered!
        • Find a saliency map during free viewing.
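The map-combination idea noted above for the eye-tracking/astigmatism paper can be sketched in a few lines. This is only an illustration, not the paper's implementation: the radial eccentricity map, the lens map, and all falloff constants are my own assumptions; the one idea taken from the paper is the per-pixel minimum of the two maps.

```python
import numpy as np

def eccentricity_map(h, w, gaze, falloff=0.01):
    """Hypothetical sampling-rate map: 1.0 at the gaze point, decaying with distance."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    return np.clip(1.0 - falloff * dist, 0.05, 1.0)

def astigmatism_map(h, w, falloff=0.008):
    """Hypothetical lens map: sampling rate drops toward the lens periphery."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - h / 2, xs - w / 2)
    return np.clip(1.0 - falloff * dist, 0.05, 1.0)

def combined_sampling_map(h, w, gaze):
    # Per-pixel minimum: never sample more densely than either map allows.
    return np.minimum(eccentricity_map(h, w, gaze), astigmatism_map(h, w))

m = combined_sampling_map(128, 128, gaze=(64, 64))
```

Taking the minimum is conservative by construction: wherever either the eye or the lens cannot resolve detail, the combined map lowers the sampling rate.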

Report 08/07/2017

As we discussed last week, to make our paper better, we should:

  1. Make our algorithm better than others.
  2. Do user study.
  3. Compare with work of others.

To make our algorithm better than others, here is what I did last week:

  1. Interpolated the rendered scene so there is no impression of large pixels.
  2. Even after interpolation there are still jaggies in the frame. To reduce them:
    1. I built a contrast map and applied a contrast-weighted blur; jaggies occur where the contrast is large.
    2. I applied bilateral filtering. It does not reduce the jaggies, but it smooths the image very well.
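The contrast-guided blur in step 2.1 can be sketched as follows. This is a minimal sketch under my own assumptions, not the actual implementation: gradient magnitude stands in for the contrast map, and a 3x3 box blur stands in for whatever kernel was actually used.

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur with edge replication (illustrative kernel choice)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def contrast_map(img):
    """Gradient magnitude as a cheap local-contrast estimate, normalized to [0, 1]."""
    gy, gx = np.gradient(img)
    c = np.hypot(gy, gx)
    return c / (c.max() + 1e-8)

def contrast_guided_blur(img):
    """Blur strongly where contrast is high (where the jaggies live), leave flat regions alone."""
    w = contrast_map(img)  # 0 = flat region, 1 = strong edge
    return (1.0 - w) * img + w * box_blur3(img)

# Toy example: a hard vertical edge gets softened, flat areas stay untouched.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = contrast_guided_blur(img)
```

The blend weight makes the filter adaptive: in flat regions the output equals the input exactly, and the smoothing budget is spent only along high-contrast edges.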


To compare with the work of others, I need to reimplement their methods.

I have already implemented Microsoft's method, without the blur step.


Built a summary for the

Next week:

Implement the code of the NVIDIA paper.

A similar approach: https://github.com/GameTechDev/DeferredCoarsePixelShading

Microsoft code:


Write the summary paper.



Deferred shading is a screen-space shading technique. It is called deferred because no shading is actually performed in the first pass of the vertex and pixel shaders: instead, shading is “deferred” until a second pass.
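The two-pass structure can be sketched on the CPU with NumPy. The G-buffer layout (albedo, normal, depth) and the single directional Lambert light are illustrative assumptions; a real renderer would rasterize geometry into the G-buffer on the GPU.

```python
import numpy as np

H, W = 4, 4

# Pass 1 (geometry): fill a G-buffer; no shading happens here.
# For illustration we write it by hand instead of rasterizing real triangles.
gbuffer = {
    "albedo": np.full((H, W, 3), 0.8),   # surface color
    "normal": np.zeros((H, W, 3)),       # world-space normals
    "depth":  np.full((H, W), np.inf),   # inf = background, nothing rendered
}
gbuffer["normal"][..., 2] = 1.0          # all normals face +Z
gbuffer["depth"][1:3, 1:3] = 0.5         # a small visible quad

# Pass 2 (lighting): shade every pixel from the G-buffer alone,
# with no access to the original geometry.
def lighting_pass(gb, light_dir):
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip((gb["normal"] * l).sum(axis=-1), 0.0, None)
    color = gb["albedo"] * ndotl[..., None]      # Lambert diffuse term
    color[~np.isfinite(gb["depth"])] = 0.0       # background stays black
    return color

frame = lighting_pass(gbuffer, light_dir=(0.0, 0.0, 1.0))
```

The key property shown here is that the lighting cost depends only on screen resolution and light count, not on scene complexity, because pass 2 never touches the geometry.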

Decoupled shading: shades fragments rather than micropolygon vertices, and only shades fragments after visibility has been computed precisely.

Visibility: how many samples we take from the primitives.

Shading rate: how often we evaluate shading.
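To make the visibility/shading-rate distinction concrete, here is a small sketch of my own (not taken from any of the papers above): visibility stays at full pixel resolution, but shading is evaluated only once per 2x2 block and reused across the block, as in coarse pixel shading. The shading function itself is an arbitrary stand-in.

```python
import numpy as np

def shade(u, v):
    """Stand-in shading function over normalized screen coordinates."""
    return 0.5 + 0.5 * np.sin(8.0 * u) * np.cos(8.0 * v)

def render(h, w, shading_block=2):
    # Visibility: a full-resolution grid, one coordinate per pixel.
    v, u = np.mgrid[0:h, 0:w] / max(h, w)
    if shading_block == 1:
        return shade(u, v)  # shading rate == visibility rate
    # Shading rate: evaluate shading once per block (top-left sample)...
    coarse = shade(u[::shading_block, ::shading_block],
                   v[::shading_block, ::shading_block])
    # ...and reuse that one result for every pixel in the block.
    return np.kron(coarse, np.ones((shading_block, shading_block)))[:h, :w]

full = render(8, 8, shading_block=1)    # 64 shading evaluations
coarse = render(8, 8, shading_block=2)  # 16 shading evaluations
```

With a 2x2 block the shading cost drops by 4x while every pixel still receives a color, which is exactly the lever a foveated renderer pulls harder in the periphery than in the fovea.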