DEBUG: unable to execute ‘:/usr/local/cuda/bin/nvcc’: No such file or directory error: command ‘:/usr/local/cuda/bin/nvcc’ failed with exit status 1

When trying to install SoftRas & neural renderer, I got this error:

unable to execute ‘:/usr/local/cuda/bin/nvcc’: No such file or directory

error: command ‘:/usr/local/cuda/bin/nvcc’ failed with exit status 1

Solution:
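The leading colon in “:/usr/local/cuda/bin/nvcc” suggests the build script found CUDA_HOME set to “:/usr/local/cuda”, which happens when the variable is built by appending to an empty value (e.g. export CUDA_HOME=$CUDA_HOME:/usr/local/cuda). A minimal sketch of the fix under that assumption:

```bash
# Set the CUDA environment variables directly instead of appending to a
# possibly-empty variable with a ":".
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Verify nvcc resolves before re-running the install.
which nvcc && nvcc --version
```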

Install OpenPose on Ubuntu 16.04

Config:
System: Ubuntu 16.04
CUDA: 10.0
Graphics card: RTX 2080
——————————————————————————————————————————

      1. Download OpenPose and its dependencies (see the clone sketch after this list). If “--recursive” is not added here, the bundled Caffe will not be downloaded!
      2. The process will not be smooth!!
            1. Error:

               Solve: open “openpose/build/caffe/src/openpose_lib-build/CMakeCache.txt” with cmake-gui and change cuda-9.0 to cuda-10.0.
            2. Error:

               Solve: go to “build/python/openpose” and run “make”.
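A sketch of the download step, assuming the official CMU-Perceptual-Computing-Lab repository and a plain CMake build (adjust CMake flags, e.g. for the Python wrapper, to your needs):

```bash
# Clone OpenPose together with its submodules; --recursive pulls in the
# bundled Caffe as well.
git clone --recursive https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose

# Standard out-of-source CMake build.
mkdir build && cd build
cmake ..
make -j"$(nproc)"
```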

Install Caffe

Configuration:
Ubuntu 16.04
CUDA 10.0
RTX 2080

The reference I followed is a step-by-step tutorial.
I met one error:

To solve it, two steps are necessary (a sketch for the compute_20 part follows the links):

  1. https://blog.csdn.net/fdd096030079/article/details/84451811
  2. https://stackoverflow.com/questions/48383846/nvcc-fatal-unsupported-gpu-architecture-compute-20-while-cuda-9-1caffeopen
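The second link addresses the “nvcc fatal: unsupported gpu architecture 'compute_20'” error: CUDA 9.0 and later no longer support compute_20, so it has to be removed from Caffe's CUDA_ARCH. A sketch, assuming the stock Makefile.config.example layout (the compute_75 entries are an assumption for an RTX 2080 on CUDA 10.0):

```bash
cd caffe
cp Makefile.config.example Makefile.config

# Locate the offending entries.
grep -n 'compute_20' Makefile.config

# Then edit Makefile.config so CUDA_ARCH no longer lists compute_20, e.g.:
#   CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
#                -gencode arch=compute_50,code=sm_50 \
#                -gencode arch=compute_61,code=sm_61 \
#                -gencode arch=compute_75,code=sm_75 \
#                -gencode arch=compute_75,code=compute_75
```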

Connect to WiFi using the command line

You know, the Ubuntu system on my great Alienware is not healthy. Many functions do not work well, including the WiFi connection. It is impossible to connect to a new WiFi network through the graphical interface; I get an error like:

“active connection removed before it was initialized”

I solved the connection issue from the command line.

Reference

      1. Determine the name of the wireless interface. Many tutorials simply call it “wlan0”, but the name is different on my machine. Run the first command below (see the sketch after this list).

        On my machine the interface is “wlp4s0”.
      2. List the WiFi networks.
      3. Choose the network you want to connect to, and run the connect command.
      4. Check whether the WiFi is connected.
      5. Notice: my desktop still shows a “no connection” icon, but the connection actually works!
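A sketch of the commands, assuming NetworkManager's nmcli is what handles the connection (the SSID and password are placeholders):

```bash
# 1. Find the wireless interface name (mine shows up as wlp4s0).
nmcli device status

# 2. List the WiFi networks in range.
nmcli device wifi list

# 3. Connect to the chosen network.
nmcli device wifi connect "MySSID" password "MyPassword"

# 4. Check the connection.
nmcli connection show --active
ping -c 3 8.8.8.8
```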

NumPy precision

The precision changed!

Failed again with CUDA…

Another sad experience with cuda.

  1. TensorFlow compiled with CUDA just doesn't work after I suspend my machine. (Error: GPU cannot be found.)
    1. Tried to reinstall tensorflow again…FAILED!
    2. Tried to restart the PC…WORKED!
  2. However, I met this error again: https://github.com/zengarden/light_head_rcnn/issues/9
    1. Tried to change (the same edit as a sed sketch follows this list):
      1. /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_device_functions.h
        1. line 32:
          1. -#include "cuda/include/cuda.h"
          2. +#include "cuda.h"
      2. /home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util/cuda_kernel_helper.h
        1. line 24:
          1. -#include "cuda/include/cuda_fp16.h"
          2. +#include "cuda_fp16.h"
  3. Then I recompiled the CUDA functions and got all-zero outputs.
    1. I had forgotten to switch from CUDA 9.0 (the default) to CUDA 10.0. Switched to CUDA 10.0…WORKED!
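The header edit above can also be done with sed (a sketch; the paths are the ones from this note and will differ per install, and -i.bak just keeps backups):

```bash
# Make the TensorFlow headers include the CUDA headers directly instead of
# via the non-existent "cuda/include/..." prefix.
TF_UTIL=/home/xiaoxu/Documents/tf_install/venv/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/util

sed -i.bak 's|#include "cuda/include/cuda.h"|#include "cuda.h"|' "$TF_UTIL/cuda_device_functions.h"
sed -i.bak 's|#include "cuda/include/cuda_fp16.h"|#include "cuda_fp16.h"|' "$TF_UTIL/cuda_kernel_helper.h"
```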

Switch version of g++ & Switch version of CUDA

  • Switch version of g++
        • Example: install g++ 5.3 and g++ 7.3, then switch between them (see the sketch after this list)
        • Step 1: install g++ 5.3 with priority 20

        • Step 2: install g++ 7.3 with priority 60

        • Choose one g++ version:

  • Switch version of CUDA
    • Download https://github.com/phohenecker/switch-cuda
    • run the .sh file
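A sketch of the g++ part with update-alternatives (the priorities are the ones above; /usr/bin/g++-5 and /usr/bin/g++-7 are assumed install paths), plus a guess at the switch-cuda usage (assuming, as its name suggests, that the script sets CUDA environment variables and therefore needs to be sourced):

```bash
# Register both compilers as alternatives for "g++"; the higher priority
# (60) becomes the automatic default.
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 60

# Pick one interactively and verify.
sudo update-alternatives --config g++
g++ --version

# Switch CUDA versions; source the script so the environment variables it
# sets stay in the current shell.
source switch-cuda.sh 10.0
```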

How to build a new repository?

  • Firstly, add an SSH key to the workstation, following the tutorial at https://www.runoob.com/w3cnote/git-guide.html
  • Build a new repository online called “GithubTest”
  • cd to the local folder
  • In git bash (the full command sequence is sketched after this list):
  • In git bash:
  • Add a new file in the local folder called “readme.md”
  • In git bash:
  • In git bash:
    • will get a response:
  • The local git repository consists of 3 trees:
    • The first one is the working directory, which contains the actual files;
    • The second one is the staging area (Index), which stores the changes;
    • The third one is HEAD, which points to the last commit. Now that we have stored the change in HEAD, we push it to the remote repository.
  • In git bash:
  • Now the file “readme.md” is online!
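A sketch of the command sequence behind the “In git bash” steps (the remote URL is a placeholder for the GithubTest repository created above, and the default branch may be called main instead of master on newer setups):

```bash
# Initialise the local repository and point it at the new remote.
git init
git remote add origin git@github.com:<your-username>/GithubTest.git

# Create the file, stage it (Index), and commit it (HEAD).
echo "# GithubTest" > readme.md
git add readme.md
git commit -m "add readme"

# Push the commit to the remote repository.
git push -u origin master
```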

Structured prediction

baseNP: doesn’t contain any recursive parts.

chunking: build the tree for the sentence

Levels of representation:

* Brown Corpus (level 1: POS)

* Penn Treebank (level 2: syntax)

* PropBank (level 3: semantic roles)

* FrameNet (level 4: )

All of these require a lot of human labor.

 

h(x) = \arg\min_{y \in \mathcal{Y}} \mathbb{E}_{Y \sim p(Y \mid x)} \big[ \ell(y, x, Y) \big]

\ell(y^*, x, y) = 1 - \delta(y, y^*)

H(x) = \arg\max_{y \in \mathcal{Y}} \Pr(y \mid x)

\min_{h \in \mathcal{H}} \mathbb{E}_{p} \big[ \mathrm{loss}(X, Y; h) \big] + \text{model complexity}(h)

\text{Empirical risk} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{loss}(x_i, y_i^*, h)

 

generalized Viterbi

Classic ambiguity examples: “recognize speech” vs. “wreck a nice beach”, and “a nice beach” vs. “an ice beach”.

 

conditional random fields