532. K-diff Pairs in an Array

Description:

https://leetcode.com/problems/k-diff-pairs-in-an-array/description

Code:
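A possible Python implementation consistent with the O(n log n) / O(n) bounds below, assuming a sort-plus-two-pointers approach (the function name find_pairs is illustrative, not necessarily the original code):

```python
def find_pairs(nums, k):
    if k < 0:
        return 0                         # absolute differences are never negative
    nums = sorted(nums)                  # O(n log n) time, O(n) for the sorted copy
    n, count = len(nums), 0
    left, right = 0, 1
    while left < n and right < n:
        diff = nums[right] - nums[left]
        if left == right or diff < k:
            right += 1                   # difference too small: widen the window
        elif diff > k:
            left += 1                    # difference too large: shrink the window
        else:
            count += 1                   # found a k-diff pair
            left += 1
            while left < n and nums[left] == nums[left - 1]:
                left += 1                # skip duplicates of the smaller value
    return count
```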

Time & Space:
O(n log n) & O(n)

485. Max Consecutive Ones

Description:

https://leetcode.com/problems/max-consecutive-ones/description/

Code:
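A minimal Python sketch (single pass with a running counter; the function name is illustrative):

```python
def find_max_consecutive_ones(nums):
    best = current = 0
    for x in nums:
        if x == 1:
            current += 1                 # extend the current run of ones
            best = max(best, current)    # remember the longest run so far
        else:
            current = 0                  # a zero breaks the run
    return best
```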

 

Time & Space:
O(n) & O(1)

628. Maximum Product of Three Numbers


Description:

https://leetcode.com/problems/maximum-product-of-three-numbers/description/


Code:
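A minimal Python sketch matching the O(n) / O(1) bounds below, assuming a single pass that tracks the three largest and two smallest values (function name is illustrative):

```python
def maximum_product(nums):
    # Track the three largest and the two smallest values in one pass.
    max1 = max2 = max3 = float('-inf')
    min1 = min2 = float('inf')
    for x in nums:
        if x > max1:
            max1, max2, max3 = x, max1, max2
        elif x > max2:
            max2, max3 = x, max2
        elif x > max3:
            max3 = x
        if x < min1:
            min1, min2 = x, min1
        elif x < min2:
            min2 = x
    # Either the three largest values, or the two most negative values
    # combined with the largest value.
    return max(max1 * max2 * max3, min1 * min2 * max1)
```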


Time & Space:

O(n) & O(1)

621. Task Scheduler

Description:
https://leetcode.com/problems/task-scheduler/description/

Code:
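A Python sketch matching the record / sort / calculate-idle breakdown below, using the standard greedy idle-slot formula (a sketch, not necessarily the original code):

```python
from collections import Counter

def least_interval(tasks, n):
    # Record: count how many times each task appears.
    counts = sorted(Counter(tasks).values())
    max_count = counts.pop()                    # frequency of the most common task
    # Calculate idle: slots between copies of the most common task,
    # minus whatever the remaining tasks can fill.
    idle = (max_count - 1) * n
    while counts and idle > 0:
        idle -= min(counts.pop(), max_count - 1)
    idle = max(0, idle)
    return len(tasks) + idle
```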

 

Time & Space:
Record counts: O(n)
Sort counts: O(n log n)
Calculate idle: O(1)

Space: O(1)

611. Valid Triangle Number

Description:

https://leetcode.com/problems/valid-triangle-number/description/

Algorithm1:

Use three nested for loops.

It works but is slow (beats 23.60%).

Code1:
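A brute-force Python sketch of Algorithm 1, checking every triple against the triangle inequalities (function name is illustrative):

```python
def triangle_number_brute_force(nums):
    n, count = len(nums), 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                a, b, c = nums[i], nums[j], nums[k]
                # All three triangle inequalities must hold.
                if a + b > c and a + c > b and b + c > a:
                    count += 1
    return count
```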

 

Timing & Space:

O(n^3) & O(1)

 

Algorithm2:

 

Code2:
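One common approach consistent with the timing stated below: sort, fix the largest side, then count valid smaller-side pairs with two pointers (a sketch, not necessarily the original code):

```python
def triangle_number(nums):
    nums.sort()
    count = 0
    # Fix the largest side nums[k], then count pairs (i, j) with i < j < k
    # such that nums[i] + nums[j] > nums[k].
    for k in range(len(nums) - 1, 1, -1):
        i, j = 0, k - 1
        while i < j:
            if nums[i] + nums[j] > nums[k]:
                count += j - i           # every index in [i, j) also works with j
                j -= 1
            else:
                i += 1
    return count
```

Counting j - i pairs at once is what keeps the work per fixed largest side at O(n).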

 

Timing & Space:

Time:

Worst: O(n^2)

Best: O(1)

Space: O(1)

565. Array Nesting

Description:

https://leetcode.com/problems/array-nesting/description/

Algorithm:

Essentially, we are partitioning the array into several groups by following S[k]. The numbers inside each S[k] form a cycle.

For example:

Input: A = [5,4,0,3,1,6,2]

Output: 4

Explanation:

A[0] = 5, A[1] = 4, A[2] = 0, A[3] = 3, A[4] = 1, A[5] = 6, A[6] = 2.

One of the longest S[K]:

S[0] = {A[0], A[5], A[6], A[2]} = {5, 6, 2, 0}

S[5] = {A[5], A[6], A[2], A[0]} = {6, 2, 0, 5}

S[6] = {A[6], A[2], A[0], A[5]} = {2, 0, 5, 6}

S[2] = {A[2], A[0], A[5], A[6]} = {0, 5, 6, 2}

 

S[1] = {A[1], A[4]} = {4,1}

S[4] = {A[4], A[1]} = {1,4}

 

S[3] = {A[3]} = {3}

So just mark each number as used during the traversal; when we meet a used number, stop.

Code:
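A Python sketch of the cycle-marking idea described above (uses -1 as the visited marker and mutates the input in place, keeping the extra space at O(1)):

```python
def array_nesting(nums):
    best = 0
    for start in range(len(nums)):
        length, i = 0, start
        # Follow the cycle until we reach an already-visited slot.
        while nums[i] != -1:
            nums[i], i = -1, nums[i]     # mark as visited, then jump
            length += 1
        best = max(best, length)
    return best
```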

Timing & Space:

fastest

O(n) & O(1)

08/08/2017

Research:

  • Read the three papers and decide whether they are related to FR (not directly, but many of their techniques could be used to accelerate rendering)
    • [He, Extending, 2014]: The idea is essentially the same as coarse pixel shading, but this is an approach for forward shading.
    • [Yee, Spatiotemporal, 2001]: Accelerates global illumination computation for dynamic environments by exploiting properties of the human visual system.
    • [Liktor, Decoupled Deferred, 2012]:
      • compact geometry buffer: stores shading samples independently of visibility
    • [Clarberg, AMFS, 2014]: powerful hardware architecture for pixel shading, which enables flexible control of shading rates and automatic shading reuse between triangles in tessellated primitives.
    • [Foveated Real-Time Ray Tracing for Virtual Reality Headset]
      • foveated sampling
    • [Combining Eye Tracking with Optimizations for Lens Astigmatism in modern wide-angle HMDs]
      • Foveated sampling: taking the minimum values of two sampling maps (lens astigmatism & current eye gaze) in the foveated region.
    • [Perception-driven Accelerated Rendering, 2016]
      • survey
    • [A Retina-Based Perceptually Lossless Limit and a Gaussian Foveation Scheme With Loss Control, 2014]:
      • not related to foveated rendering
    • [User, Metric, and Computational Evaluation of Foveated Rendering Methods]
    • [H. Tong and R. Fisher. Progress Report on an Eye-Slaved Area-of-Interest Visual Display. Defense Technical Information Center, 1984.]
    • [Proceedings of the 1990 Symposium on Interactive 3D Graphics]
  • Multisampling & Supersampling
    • Multisampling: applies extra samples only at the edges of primitives
    • Supersampling: samples the whole frame at a higher resolution and then downsamples it
  • Forward rendering & deferred rendering
  • User, Metric, and Computational Evaluation of Foveated Rendering Methods
    • Compare 4 foveated rendering methods
      • Lower resolution for foveated view
      • Screen-Space Ambient Occlusion instead of global ambient occlusion
      • Terrain Tessellation
      • Foveated Real-time Ray-Casting
    • Provide foveated image metric
      • HDR-VDP: compare two images and get visibility and quality
      • Only spatial artifacts are considered; temporal artifacts are not.
        • Find a saliency map during free viewing.

Report 08/07/2017

As we discussed last week, to make our paper better, we should:

  1. Make our algorithm better than others.
  2. Do user study.
  3. Compare with work of others.

To make our algorithm better than the others, here is what I did last week:

  1. I interpolated the rendered scene so there is no impression of large pixels.
  2. Although interpolated, there are still jaggies in the frame. To reduce the jaggies:
    1. I built a contrast map and applied contrast-dependent blur; jaggies appear where the contrast is high.
    2. I applied bilateral filtering. The jaggies are not reduced, but it smooths the image very well (see the sketch after this list).
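For reference, a minimal OpenCV sketch of the bilateral-filtering step (the file names and parameter values are placeholders, not the ones used in the actual experiment):

```python
import cv2

# Load the interpolated foveated frame (placeholder path).
frame = cv2.imread('foveated_frame.png')

# Bilateral filtering smooths regions while preserving strong edges,
# which is why it gives a good smoothing effect even though it does not
# remove the jaggies themselves.
# Arguments: neighborhood diameter, sigma in color space, sigma in coordinate space.
smoothed = cv2.bilateralFilter(frame, 9, 75, 75)

cv2.imwrite('foveated_frame_smoothed.png', smoothed)
```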

 

To compare with others' work, I need to reimplement their methods.

I have already implemented the work of Microsoft without blur.

 

Built a summary for the

Next week:

Implement the code of the NVIDIA paper.

A similar approach: https://github.com/GameTechDev/DeferredCoarsePixelShading

Microsoft code:

https://www.microsoft.com/en-us/download/details.aspx?id=52390

Write the summary paper.

 

Today:

Deferred shading is a screen-space shading technique. It is called deferred because no shading is actually performed in the first pass of the vertex and pixel shaders: instead shading is “deferred” until a second pass.

Decoupled shading: shades fragments rather than micropolygons vertices. Only shades fragments after precise computation of visibility.

Visibility: how densely we sample the primitives (how much we read from them).

Shading rate: how densely we evaluate shading (how much we render).