The key to using Unity texture arrays

The texture array layout differs between GLSL and Unity (HLSL), so the order of the image indices i and j must be adjusted. Suppose the light field has a 16 × 16 structure. In a GLSL shader:

In a Unity shader:

Using a texture array in Unity: suppose we want to load the images in the folder “lytro”, …
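The excerpt elides the actual shader code, but the index adjustment it describes can be made concrete. A minimal sketch, assuming the GLSL side computes the slice index row-major as i * 16 + j while Unity enumerates the same images column-major (the exact swap is an assumption; the point is that i and j trade places):

```python
# Sketch of the i/j index adjustment between GLSL and Unity texture arrays
# for a 16 x 16 light field. The concrete layouts are assumptions.
ROWS, COLS = 16, 16

def glsl_slice(i: int, j: int) -> int:
    """Slice index as the GLSL shader is assumed to compute it (row-major)."""
    return i * COLS + j

def unity_slice(i: int, j: int) -> int:
    """Same image addressed with i and j swapped for Unity."""
    return j * ROWS + i

# The same (i, j) camera maps to different slice numbers in the two schemes:
print(glsl_slice(2, 5))   # → 37
print(unity_slice(2, 5))  # → 82
```

Swapping the indices this way is equivalent to transposing the 16 × 16 grid before uploading the slices.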

Word2Vec Models

A collection of pre-trained Word2Vec models: http://ahogrammer.com/2017/01/20/the-list-of-pretrained-word-embeddings/

Google’s model does not seem reliable. Here are some similarity tests of Google’s model:

The similarity between good and great is: 0.7291509541564205
The similarity between good and awesome is: 0.5240075080190216
The similarity between good and best is: 0.5467195232933185
The similarity between good and better is: 0.6120728804252082
The similarity between …
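The scores above come from gensim's `model.similarity(w1, w2)`, which is just the cosine similarity of the two word vectors. A minimal numpy sketch of what that measures (the 3-d toy vectors are made up; the real GoogleNews embeddings are 300-d and would be loaded with gensim's `KeyedVectors.load_word2vec_format`):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """What gensim's model.similarity(w1, w2) computes under the hood."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for 300-d GoogleNews embeddings (values are made up).
vectors = {
    "good":  np.array([1.0, 0.5, 0.1]),
    "great": np.array([0.9, 0.6, 0.2]),
}

print(cosine_similarity(vectors["good"], vectors["great"]))
```

Because the measure is cosine-based, near-synonyms, antonyms, and morphological variants can all score similarly, which is one reason the raw similarity numbers above look unintuitive.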

Lecture 8: Evaluation

Information about the midterm.

PCFG: start with S. ∑ Pr(A → γ | A) = 1, i.e. the (conditional) probabilities of each nonterminal’s expansions must sum to one.

Computing Pr(O = o1, o2, …, on | µ): HMM: Forward algorithm; PCFG: Inside–Outside.

Guessing Z: argmax_Z Pr(Z | O, µ), where Z is the best sequence of states. HMM: use Viterbi; PCFG: use Viterbi CKY.

Guessing µ: argmax_µ Pr(O | µ). HMM: use …
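The HMM side of the comparison, argmax_Z Pr(Z | O, µ), is solved by Viterbi. A minimal sketch with toy transition and emission tables (all numbers are assumptions, just to make the recursion concrete):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Return argmax_Z Pr(Z | O, mu) for an HMM with params mu = (pi, A, B)."""
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))            # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Follow backpointers from the best final state.
    z = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        z.append(int(back[t][z[-1]]))
    return z[::-1]

pi = np.array([0.6, 0.4])                 # initial state probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])    # transitions
B = np.array([[0.5, 0.5], [0.1, 0.9]])    # emissions
print(viterbi([0, 1, 1], pi, A, B))       # → [0, 1, 1]
```

Viterbi CKY for the PCFG case has the same max-instead-of-sum structure, but runs over spans of the string rather than time steps.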

03102018

Gensim tutorial: https://radimrehurek.com/gensim/models/word2vec.html

Use the google-news model as the pre-trained model; clustering is based on a distance matrix.

Questions: how do we do the clustering? Should we cluster on the keywords, or on the keyword-related words?

Leg dissection demo: 18 cameras, 30 frames, 10 GB; 5 cameras, 100 frames, 6 GB.

Question: what is our task? We cannot change the focal length now. …
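One answer to “how do we do the clustering” on a distance matrix is hierarchical clustering with a precomputed metric. A sketch with scipy (the distance values below are made up; in the project they would come from word2vec distances between keywords):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy symmetric distance matrix over four keywords (values are made up).
D = np.array([
    [0.0, 0.1, 0.9, 0.8],
    [0.1, 0.0, 0.85, 0.9],
    [0.9, 0.85, 0.0, 0.2],
    [0.8, 0.9, 0.2, 0.0],
])

# Average-linkage hierarchical clustering on the condensed distance matrix,
# cutting the dendrogram at distance 0.5.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # two clusters: items {0, 1} vs. items {2, 3}
```

This works directly from pairwise distances, so it avoids committing to an embedding-space centroid the way k-means would.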

Lecture 6: Context-free parsing

Questions: generative model P(X, Y) vs. discriminative model P(Y | X).

Main points:

Block sampler: instead of sampling one element at a time, we can sample a batch of variables in Gibbs sampling.

Lag and burn-in: these can be viewed as parameters (we can control the number of iterations). Lag: mark some iterations in the loop as lag, then throw away …
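The burn-in and lag parameters can be made concrete with a toy Gibbs sampler. A sketch for a bivariate normal with correlation rho (the target distribution is an assumption, chosen only because its conditionals are easy to sample):

```python
import random

def gibbs(n_kept, burn_in=500, lag=10, rho=0.8, seed=0):
    """Gibbs sampler illustrating burn-in and lag (thinning)."""
    rng = random.Random(seed)
    x = y = 0.0
    kept = []
    total = burn_in + n_kept * lag
    for it in range(total):
        # Sample each coordinate from its conditional given the other:
        # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
        x = rng.gauss(rho * y, (1 - rho**2) ** 0.5)
        y = rng.gauss(rho * x, (1 - rho**2) ** 0.5)
        # Throw away the burn-in iterations and all but every lag-th sample.
        if it >= burn_in and (it - burn_in) % lag == 0:
            kept.append((x, y))
    return kept

samples = gibbs(n_kept=1000)
print(len(samples))  # → 1000
```

Burn-in discards the early iterations before the chain reaches its stationary distribution; lag thins the kept samples to reduce autocorrelation between them.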

Lecture 5: Reduced-dimensionality representations for documents: Gibbs sampling and topic models

Watch the new talk and write a summary. Noah Smith: squash network.

Main points:

The difference between LSA & SVD.

Bayesian graphical models: informative priors are useful in the model.

Bayesian network: a DAG over X1, X2, …, Xn with joint P(X1, X2, …, Xn).

Generative story: HMM (dependencies).

A and B are conditionally independent given C iff P(A, B | C) = P(A | C) · P(B | C). …
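The conditional-independence definition can be checked numerically on a tiny joint table. A sketch where the joint over binary A, B, C is constructed (by assumption) to factor as P(c) · P(a | c) · P(b | c), so the identity must hold:

```python
from itertools import product

# Joint P(A, B, C) over binary variables, built so A ⊥ B | C by construction.
p_c = {0: 0.4, 1: 0.6}
p_a_given_c = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_a_given_c[c][a]
p_b_given_c = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}

joint = {(a, b, c): p_c[c] * p_a_given_c[c][a] * p_b_given_c[c][b]
         for a, b, c in product([0, 1], repeat=3)}

def p(pred):
    """Marginal probability of the event selected by pred(a, b, c)."""
    return sum(v for k, v in joint.items() if pred(*k))

# Verify P(A, B | C) == P(A | C) * P(B | C) for every assignment.
for a, b, c in product([0, 1], repeat=3):
    pc = p(lambda A, B, C: C == c)
    lhs = p(lambda A, B, C: A == a and B == b and C == c) / pc
    rhs = (p(lambda A, B, C: A == a and C == c) / pc) * \
          (p(lambda A, B, C: B == b and C == c) / pc)
    assert abs(lhs - rhs) < 1e-12
print("A ⊥ B | C holds")
```

The same factorization is what a Bayesian network DAG encodes: each variable is conditionally independent of its non-descendants given its parents.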