Install openpose on ubuntu 16.04

Config:
System: Ubuntu 16.04
CUDA: 10.0
Graphics Card: RTX 2080
——————————————————————————————————————————

      1. Download openpose and dependencies:

        If “--recursive” is not added here, the default caffe will not be downloaded! (A command sketch is at the end of this section.)
      2. The process will not go so smoothly!!
            1. Error:

            Solution: open “openpose/build/caffe/src/openpose_lib-build/CMakeCache.txt” with cmake-gui
            and change cuda-9.0 to cuda-10.0

          1. Error:

            Solution: go to “build/python/openpose” and run “make”
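
The actual commands were stripped from this note. A minimal sketch of step 1 and the build, assuming the official CMU repository and a standard CMake build (the BUILD_PYTHON flag is only needed for the Python API):

```bash
# clone openpose together with its bundled Caffe (the --recursive flag matters)
git clone --recursive https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose

# standard out-of-source CMake build
mkdir build && cd build
cmake -DBUILD_PYTHON=ON ..
make -j"$(nproc)"
```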

Install Caffe

Configuration:
Ubuntu 16.04
CUDA 10.0
RTX 2080

The reference I followed is a step-by-step installation tutorial.
I met one error:

To solve it, two steps are necessary:

  1. https://blog.csdn.net/fdd096030079/article/details/84451811
  2. https://stackoverflow.com/questions/48383846/nvcc-fatal-unsupported-gpu-architecture-compute-20-while-cuda-9-1caffeopen
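
The second link deals with the “nvcc fatal: unsupported gpu architecture ‘compute_20’” problem on newer CUDA versions. A minimal sketch of the usual fix, assuming the Makefile-based Caffe build (the exact architecture list depends on your GPU and toolkit):

```makefile
# Makefile.config (excerpt): CUDA 10.0's nvcc rejects compute_20, so delete lines like
#   -gencode arch=compute_20,code=sm_20 \
#   -gencode arch=compute_20,code=sm_21 \
# and keep only architectures your toolkit supports, e.g. for an RTX 2080:
CUDA_ARCH := -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_70,code=sm_70 \
             -gencode arch=compute_75,code=sm_75 \
             -gencode arch=compute_75,code=compute_75
```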

Connect to wifi using command line

You know, the ubuntu system on my Great Alienware is not healthy. Many functions are not working well, including the wifi connection. It is impossible to connect to a new wifi network through the graphical interface; I would get an error like:

Error message: “active connection removed before it was initialized”

I solved the connection issue from the command line.

Reference

      1. Determine the name of the wifi interface. In many tutorials the interface is simply called “wlan0”, but the name is different on my machine. Run the following command (a command sketch follows this list):

        The name of my wifi interface is “wlp4s0”.
      2. List the wifi networks.
      3. Choose the wifi you want to connect to, and run the following command:
      4. Check whether the wifi is connected:
      5. Note: on my desktop the tray icon still shows no connection, but the wifi is actually connected!
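
The original commands were not preserved in this note. A sketch of one way to do these steps, assuming NetworkManager’s nmcli is available (the SSID and password below are placeholders):

```bash
# 1. find the wifi interface name (mine turned out to be wlp4s0)
nmcli device status

# 2. list visible wifi networks
nmcli device wifi list

# 3. connect to the chosen network
nmcli device wifi connect "MySSID" password "MyPassword"

# 4. check whether the connection is up
nmcli device status
```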

Numpy Precision

Precision changed!

Failed again with CUDA…

Another sad experience with cuda.

  1. Tensorflow compiled with CUDA just doesn’t work after I suspend my machine. (Error: GPU cannot be found.) A quick GPU-visibility check is sketched after this list.
    1. Tried to reinstall tensorflow again…FAILED!
    2. Tried to restart the PC…WORKED!
  2. However, I met this error again: https://github.com/zengarden/light_head_rcnn/issues/9
    1. Tried to change:
      1. /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_device_functions.h
        1. line 32:
          1. -#include "cuda/include/cuda.h"
          2. +#include "cuda.h"
      2. /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_kernel_helper.h
        1. line 24:
          1. -#include "cuda/include/cuda_fp16.h"
          2. +#include "cuda_fp16.h"
  3. Then I recompiled the CUDA functions, and got all-zero outputs.
    1. I had forgotten to switch from cuda 9.0 (the default) to cuda 10.0; after switching to cuda 10.0…WORKED!
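
As a quick sanity check (my own addition, using the TF 1.x API that matches the include paths above), this is how I would verify whether TensorFlow can actually see the GPU after suspend/resume:

```python
import tensorflow as tf

# TF 1.x-style checks: both should indicate a visible GPU when things are healthy
print(tf.test.is_gpu_available())   # False was the symptom after suspend; a reboot fixed it
print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0"
```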

Switch version of g++ & Switch version of CUDA

  • Switch version of g++
        • Example: install g++ 5.3 and g++ 7.3, then switch between them (the commands are sketched after this list)
        • Step 1: install g++ 5.3 with priority 20

        • Step 2: install g++ 7.3 with priority 60

        • Choose one g++ version:

  • Switch version of CUDA
    • Download https://github.com/phohenecker/switch-cuda
    • run the .sh script it provides (also sketched below)
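
The concrete commands were not saved in this note. A sketch of how the g++ part usually looks with update-alternatives, and how switch-cuda is typically invoked (package names and versions are assumptions based on the text above):

```bash
# register both compilers with update-alternatives; the higher priority wins by default
sudo apt-get install g++-5 g++-7
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 60

# interactively choose which g++ version to use
sudo update-alternatives --config g++

# switch CUDA versions with the switch-cuda helper (source it, since it sets environment variables)
git clone https://github.com/phohenecker/switch-cuda
source switch-cuda/switch-cuda.sh 10.0
```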

How to build a new repository?

  • First, add an SSH key to the workstation, following the tutorial at https://www.runoob.com/w3cnote/git-guide.html
  • Build a new repository online called “GithubTest”
  • cd to the local folder
  • In git bash (a command sketch for these steps follows the list):
  • In git bash:
  • Add a new file in the local folder called “readme.md”
  • In git bash:
  • In git bash:
    • will get a response:
  • The local git repository consists of 3 “trees”:
    • The first is the working directory, which contains the actual files;
    • The second is the index (a temporary staging area), which stores the changes;
    • The third is HEAD, which points to the last commit. At this point the change has already been committed to HEAD; next we push it to the remote repository.
  • In git bash:
  • Now the file “readme.md” is online!
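
The individual git commands were stripped from this note. A sketch of the usual sequence for these steps (the repository name matches the text above; the remote URL and username are placeholders, and the default branch may be main instead of master):

```bash
# initialize the local repository and point it at the new GitHub repo
git init
git remote add origin git@github.com:<username>/GithubTest.git

# create the file, stage it, and commit it (add -> index, commit -> HEAD)
echo "hello" > readme.md
git add readme.md
git commit -m "add readme"

# push the commit to the remote repository; readme.md is now online
git push -u origin master
```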

Structured prediction

baseNP: doesn’t contain any recursive parts.

chunking: build the tree for the sentence

Level of representation:

* Brown Corpus (level1: pos)

* Penn Treebank (level2: syn)

* PropBank (level3: sem)

* Framenet (level4: )

All of these need lots of human labor.

 

h(x) = argmin_{y in Y} E_{Y ~ p(Y|x)}[ l(y, x, Y) ]

l(y*, x, y) = 1 - delta(y, y*)

H(x) = argmax_{y in Y} Pr(y|x)

min_{h in H} E_p[ loss(X, Y; h) ] + model complexity(h)

Empirical risk = (1/N) sum_{i=1}^{N} loss(x_i, y_i*, h)

 

generalized viterbi

recognize speech

wreck a nice beach

an ice beach

 

conditional random fields

Talk: Learning efficiency of outcome in games

  • By Eva Tardos
  • Repeated games
    • player’s value/cost additive over periods, while playing
    • players try to learn what is the best from past data
    • what can we say about the outcome? how long do they have to stay to ensure OK social welfare?
  • Result: routing, limit for very small users
    • Theorem:
      • In any network with continuous, non-decreasing cost functions and small users
      • cost of Nash with rates ri for all i <= cost of opt with rate 2ri for all i
    • Nash equilibrium: stable solution where no player had incentive to deviate.
    • Price of Anarchy = cost of worst Nash equilibrium / social optimum cost;
  • Examples of price of anarchy bounds
  • Price of anarchy in auctions
    • First price is auction
    • All pay auction…
    • Other applications include:
      • public goods
      • fair sharing
      • Walrasian Mechanism
  • Repeated game that is slowly changing
    • Dynamic population model
      • at each step t each player I is replaced with an arbitrary new player with probability p
      • in a population of N players, each step, Np players replaced in expectation
      • population changes all the time: need to adjust
      • players stay long enough…
  • Learning in repeated game
    • what is learning?
    • Does learning lead to finding Nash equilibrium?
    • fictitious play = best respond to past history of other players goal: “pre-play” as a way to learn to play Nash
  • Find a better idea when the game is playing?
    • Change of focus: outcome of learning in playing
  • Nash equilibrium of the one shot game?
    • Nash equilibrium of the one-shot game: stable actions a with no regret for any alternate strategy x.
    • cost_i(x, a_-i) >= cost_i(a)
  • Behavior is far from stable
  • no regret without stability: learning
    • no regret: for any fixed action x (cost \in [0,1]):
      • sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + error
      • error <= √T (if o(T) called no-regret)
  • Outcome of no-regret learning in a fixed game
    • limit distribution sigma of play (action vectors a = (a1, a2,…,an))
  • No regret leanring as a behavior model:
    •  Pro:
      • no need for common prior or rationality assumption on opponents
      • behavioral assumption: if there is a consistently good strategy: please notice!
      • algorithm: many simple rules ensure regret approx.
      • Behavior model ….
  • Distribution of smallest rationalizable multiplicative regret
    • strictly positive regret: learning phase maybe better than no-regret
  • Today (with d options):
    • sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + √Tlogd
    • sum_t(cost_i(a^t)) <= (1 + epsilon)sum_t(cost_i(x, a_-i^t)) + log(d)/epsilon
  • Quality of learning outcome
  • Proof technique: Smoothness
  • Learning and price of anarchy
  • Learning in dynamic games
    • Dynamic population model
      • at each step t each player I is replaced with an arbitrary new player with probability p
    • how should they learn from data?
  • Need for adaptive learning
  • Adapting result to dynamic populations
    • inequality we wish to have
  • Change in optimum solution
  • Use differential privacy -> stable solution

How to write MP4 with OpenCV3

When trying to write MP4 file (H264), I tired code of

And I got error saying:

This problem is solved by changing the fourcc to the ASCII number directly to cv2.VideoWriter(), i.e.

reference:

https://devtalk.nvidia.com/default/topic/1029451/jetson-tx2/-python-what-is-the-four-characters-fourcc-code-for-mp4-encoding-on-tx2/