NumPy Precision

Precision changed!

Failed again with CUDA…

Another sad experience with CUDA.

  1. TensorFlow compiled with CUDA just doesn’t work after I suspend my machine. (Error: GPU cannot be found.)
    1. Tried to reinstall TensorFlow again…FAILED!
    2. Tried to restart the PC…WORKED!
  2. However, I ran into this error again: https://github.com/zengarden/light_head_rcnn/issues/9
    1. Tried to change:
      1. /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_device_functions.h
        1. line 32:
          1. -#include "cuda/include/cuda.h"
          2. +#include "cuda.h"
      2.  /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_kernel_helper.h
        1. line 24:
          1. -#include "cuda/include/cuda_fp16.h"
          2. +#include "cuda_fp16.h"
  3. Then, I recompiled the CUDA functions and got all-zero outputs.
    1. I had forgotten to switch from CUDA 9.0 (the default) to CUDA 10.0; switched to CUDA 10.0…WORKED!
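A quick sanity check that would have caught the all-zero-output cause above; which release nvcc should report is specific to this setup:

```bash
# Confirm which CUDA toolkit the extension build will actually use
which nvcc
nvcc --version   # for this setup it should report release 10.0, not 9.0
```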

Switching versions of g++ & CUDA

  • Switch version of g++
    • Example: install g++ 5.3 and g++ 7.3, then switch between them (a hedged command sketch follows this list)
    • Step 1: install g++ 5.3 with priority 20
    • Step 2: install g++ 7.3 with priority 60
    • Step 3: choose one g++ version

  • Switch version of CUDA
    • Download https://github.com/phohenecker/switch-cuda
    • Run the .sh file (see the sketch below)
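A sketch of the g++ steps above, assuming Ubuntu’s update-alternatives and that the g++-5 / g++-7 packages provide the 5.3 and 7.3 builds:

```bash
# Register both compilers; the higher priority (60) becomes the automatic default
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 60

# Interactively pick which version /usr/bin/g++ points to
sudo update-alternatives --config g++
```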
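For switch-cuda, my understanding is that the script must be sourced rather than executed, so that it can export CUDA_HOME, PATH, and LD_LIBRARY_PATH into the current shell; 10.0 matches the version used above:

```bash
git clone https://github.com/phohenecker/switch-cuda
source switch-cuda/switch-cuda.sh 10.0   # point the current shell at CUDA 10.0
```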

How to create a new repository?

  • First, add an SSH key to the workstation, following the tutorial at https://www.runoob.com/w3cnote/git-guide.html
  • Create a new repository online called “GithubTest”
  • cd to the local folder
  • In git bash: initialize the local repository and connect it to the remote (a hedged command sketch follows this list)
  • Add a new file to the local folder called “readme.md”
  • In git bash: stage the file, then commit it
    • will get a response: the commit summary
  • The local git repository is made of 3 trees:
    • The first is the working directory, which contains the actual files;
    • The second is the index (a staging area), which stores the changes;
    • The third is HEAD, which points to the last commit. At this point the change is stored in HEAD; next we push it to the remote repository.
  • In git bash: push the commit
  • Now the file “readme.md” is online!
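A minimal sketch of the command sequence these steps describe, assuming an SSH remote and a branch named master; the user name, paths, and commit message are placeholders:

```bash
cd /path/to/GithubTest                      # the local folder
git init                                    # create the local repository
git remote add origin git@github.com:<your-user>/GithubTest.git

echo "hello" > readme.md                    # the new local file
git add readme.md                           # stage it into the index
git commit -m "first commit"                # store the change in HEAD

git push origin master                      # publish; readme.md is now online
```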

Structured prediction

baseNP: a noun phrase that doesn’t contain any recursive parts.

chunking: build the (shallow) tree of chunks for the sentence

Levels of representation:

* Brown Corpus (level 1: POS)

* Penn Treebank (level 2: syntax)

* PropBank (level 3: semantic roles)

* FrameNet (level 4: )

All of these need lots of human labor.

 

h(x) = \arg\min_{y \in \mathcal{Y}} \mathbb{E}_{Y \sim p(\cdot \mid x)}[\ell(y, x, Y)]

\ell(y^*, x, y) = 1 - \delta(y, y^*)

With this 0–1 loss, minimizing the expected loss is the same as predicting the most probable label:

h(x) = \arg\max_{y \in \mathcal{Y}} \Pr(y \mid x)

Learning is regularized risk minimization:

\min_{h \in \mathcal{H}} \mathbb{E}_{p}[\mathrm{loss}(X, Y; h)] + \mathrm{complexity}(h)

Empirical risk: \hat{R}(h) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{loss}(x_i, y_i^*, h)
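A one-line check of the 0–1-loss step above (a standard derivation, not from the notes):

```latex
% Expected 0-1 loss of predicting label y when the truth is Y ~ p(. | x):
\mathbb{E}_{Y \sim p(\cdot \mid x)}\big[ 1 - \delta(y, Y) \big] = 1 - \Pr(y \mid x)
% Minimizing this over y is therefore the same as maximizing \Pr(y \mid x).
```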

 

generalized Viterbi

Classic speech-segmentation ambiguity: the same sound sequence can decode as

recognize speech

wreck a nice beach

a nice beach / an ice beach

 

conditional random fields
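The notes only name the decoder, so here is a minimal sketch of Viterbi decoding for a linear-chain model such as a CRF; the log-score matrices and sizes are illustrative:

```python
import numpy as np

def viterbi(emit, trans):
    """Most likely label sequence for a linear-chain model.

    emit:  (T, K) log-scores of each of K labels at each of T positions
    trans: (K, K) log-scores of moving from label i to label j
    """
    T, K = emit.shape
    score = emit[0].copy()               # best log-score of a path ending in each label
    back = np.zeros((T, K), dtype=int)   # argmax back-pointers
    for t in range(1, T):
        cand = score[:, None] + trans + emit[t][None, :]  # (K, K) path extensions
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]         # best final label
    for t in range(T - 1, 0, -1):        # follow back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 4 positions, 3 labels, random log-scores
rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```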

Talk: Learning and efficiency of outcomes in games

  • By Eva Tardos
  • Repeated games
    • players’ values/costs are additive over periods of play
    • players try to learn what is best from past data
    • what can we say about the outcome? how long do they have to stay to ensure OK social welfare?
  • Result: routing, limit for very small users
    • Theorem:
      • In any network with continuous, non-decreasing cost functions and small users
      • cost of Nash with rates ri for all i <= cost of opt with rate 2ri for all i
    • Nash equilibrium: stable solution where no player has an incentive to deviate.
    • Price of Anarchy = cost of worst Nash equilibrium / social optimum cost;
  • Examples of price of anarchy bounds
  • Price of anarchy in auctions
    • First-price auction
    • All-pay auction…
    • Other applications include:
      • public goods
      • fair sharing
      • Walrasian Mechanism
  • Repeated game that is slowly changing
    • Dynamic population model
      • at each step t each player i is replaced with an arbitrary new player with probability p
      • in a population of N players, each step, Np players replaced in expectation
      • population changes all the time: need to adjust
      • players stay long enough…
  • Learning in repeated game
    • what is learning?
    • Does learning lead to finding Nash equilibrium?
    • fictitious play = best-respond to the past history of the other players; goal: “pre-play” as a way to learn to play Nash
  • Find a better idea when the game is playing?
    • Change of focus: outcome of learning in playing
  • Nash equilibrium of the one shot game?
    • Nash equilibrium of the one-shot game: stable actions a with no regret for any alternate strategy x.
    • cost_i(x, a_-i) >= cost_i(a)
  • Behavior is far from stable
  • no regret without stability: learning
    • no regret: for any fixed action x (cost ∈ [0,1]):
      • sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + error
      • error <= √T; if the error is o(T), the algorithm is called no-regret
  • Outcome of no-regret learning in a fixed game
    • limit distribution sigma of play (action vectors a = (a1, a2,…,an))
  • No-regret learning as a behavior model:
    •  Pro:
      • no need for common prior or rationality assumption on opponents
      • behavioral assumption: if there is a consistently good strategy: please notice!
      • algorithm: many simple rules ensure regret approx.
      • Behavior model ….
  • Distribution of smallest rationalizable multiplicative regret
    • strictly positive regret: the learning phase may be better than no-regret
  • Today (with d options):
    • sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + √(T log d)
    • sum_t(cost_i(a^t)) <= (1 + ε) sum_t(cost_i(x, a_-i^t)) + log(d)/ε
  • Quality of learning outcome
  • Proof technique: Smoothness
  • Learning and price of anarchy
  • Learning in dynamic games
    • Dynamic population model
      • at each step t each player i is replaced with an arbitrary new player with probability p
    • how should they learn from data?
  • Need for adaptive learning
  • Adapting result to dynamic populations
    • inequality we wish to have
  • Change in optimum solution
  • Use differential privacy -> stable solution

How to write MP4 with OpenCV3

When trying to write an MP4 file (H.264), I tried code along these lines (a sketch from memory; the file name, fps, and frame size are illustrative):
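```python
import cv2

# What I tried first: build the fourcc from its four characters.
# File name, fps, and frame size are illustrative.
fourcc = cv2.VideoWriter_fourcc(*'H264')
writer = cv2.VideoWriter('out.mp4', fourcc, 30.0, (1280, 720))
```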

And I got an error.

The problem was solved by passing the fourcc to cv2.VideoWriter() as a raw number instead, i.e. (same illustrative parameters):
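```python
import cv2

# Workaround: pass the fourcc as the raw number 0x00000021
# (from the NVIDIA devtalk thread referenced below); same illustrative parameters.
writer = cv2.VideoWriter('out.mp4', 0x00000021, 30.0, (1280, 720))
```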

reference:

https://devtalk.nvidia.com/default/topic/1029451/jetson-tx2/-python-what-is-the-four-characters-fourcc-code-for-mp4-encoding-on-tx2/

Paper Reading: View Direction and Bandwidth Adaptive 360 Degree Video Streaming using a Two-Tier System

Each segment is coded as a base-tier (BT) chunk, and multiple enhancement-tier (ET) chunks.

BT chunks:

represent the entire 360° view at a low bit rate; they are pre-fetched into a long display buffer to smooth network jitter effectively and to guarantee that any desired FOV can be rendered with minimal stalls.

ET chunks:

deliver the predicted FOV at a higher bit rate; they are kept in a short buffer and requested close to playback, adapting to the predicted view direction and the available bandwidth.
Facebook 360 video:
https://code.facebook.com/posts/1638767863078802
Assessment:
https://code.facebook.com/posts/2058037817807164

PointNet, PointNet++, and PU-Net

point cloud -> deep network -> classification / segmentation / super-resolution

traditional classification / segmentation:

project the point cloud onto a 2D plane and use 2D classification / segmentation

unordered set: the network must be invariant to the order of the input points

point (Vec3) -> feature vector (Vec5) -> normalize (to the bounds of the point cloud)

N points:

segmentation:

features from the N points -> N×K scores (a class for each point)

classification:

features from the N points -> K×1 vector (scores over K classes)
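As a concrete illustration of the pipeline above, a minimal numpy sketch of the PointNet idea of a shared per-point MLP followed by a symmetric max-pool; the layer sizes and random weights are illustrative (a real PointNet learns them and adds T-Nets):

```python
import numpy as np

def pointnet_features(points, w1, w2):
    """Per-point shared MLP followed by a symmetric max-pool.

    points: (N, 3) array; w1: (3, 64) and w2: (64, 1024) illustrative weights.
    Returns a (1024,) global feature that is invariant to point order.
    """
    h = np.maximum(points @ w1, 0)   # shared MLP layer 1 (ReLU), applied per point
    h = np.maximum(h @ w2, 0)        # shared MLP layer 2 (ReLU), shape (N, 1024)
    return h.max(axis=0)             # max over points: order-invariant pooling

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))
w1, w2 = rng.normal(size=(3, 64)), rng.normal(size=(64, 1024))
g = pointnet_features(pts, w1, w2)
# Permuting the points leaves the global feature unchanged:
assert np.allclose(g, pointnet_features(pts[::-1], w1, w2))
```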