- First, add an SSH key to your workstation, following the tutorial at https://www.runoob.com/w3cnote/git-guide.html
- Create a new repository on GitHub called “GithubTest”
- cd to the local folder
- In git bash:

```shell
git init
```
- In git bash:

```shell
git remote add origin git@github.com:xmeng525/GitHubTest.git
```
- Add a new file in the local folder called “readme.md”
- In git bash:

```shell
git add readme.md
```
- In git bash:

```shell
git commit -m "first commit: Add readme.md"
```

- You will get the response:

```
[master (root-commit) c94f00c] first commit: Add readme.md
 1 file changed, 1 insertion(+)
 create mode 100644 readme.md
```
- The local git repository consists of 3 “trees”:
- The first is the working directory, which contains the actual files;
- The second is the index (staging area), which stores the changes to be committed;
- The third is HEAD, which points to the last commit. We have already stored the change in HEAD; now we push it to the remote repository.
- In git bash:

```shell
git push origin master
# or
git push origin HEAD:main
```

- You will get the response:

```
Counting objects: 3, done.
Writing objects: 100% (3/3), 234 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote:
remote: Create a pull request for 'master' on GitHub by visiting:
remote:      https://github.com/xmeng525/GithubTest/pull/new/master
remote:
To git@github.com:xmeng525/GitHubTest.git
 * [new branch]      master -> master
```
Now the file “readme.md” is online!
Author: xiaoxumeng
Structured prediction
baseNP: doesn’t contain any recursive parts.
chunking: build the tree for the sentence
Level of representation:
* Brown Corpus (level 1: pos)
* Penn Treebank (level 2: syn)
* PropBank (level 3: sem)
* FrameNet (level 4: )
All of these need lots of human labor.
h(x) = argmin_{y in Y} E_{Y ~ p(Y|x)}[ l(y, x, Y) ]
l(y*, x, y) = 1 - delta(y, y*)
H(x) = argmax_{y in Y} Pr(y|x)
min_{h in H} E_p[ loss(X, Y; h) ] + model_complexity(h)
Empirical risk = (1/N) * sum_{i=1}^{N} loss(x_i, y_i*, h)
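As a concrete illustration (toy numbers of my own, not from the lecture): under 0-1 loss l(y*, x, y) = 1 - delta(y, y*), minimizing the expected loss reduces to picking the most probable label, and the empirical risk is simply the average error rate:

```python
def bayes_decision(posterior):
    """Pick the label with the highest posterior probability:
    H(x) = argmax_y Pr(y|x)."""
    return max(posterior, key=posterior.get)

def empirical_risk(data, h):
    """Average 0-1 loss of hypothesis h over labeled pairs (x, y*)."""
    return sum(1 for x, y_star in data if h(x) != y_star) / len(data)

# Toy posterior over three labels for one input
posterior = {"NP": 0.6, "VP": 0.3, "PP": 0.1}
print(bayes_decision(posterior))  # NP

# Toy dataset (inputs with gold labels) and a hypothesis that always says "NP"
data = [("a", "NP"), ("b", "NP"), ("c", "VP"), ("d", "PP")]
h = lambda x: "NP"
print(empirical_risk(data, h))  # 0.5
```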
recognize speech
wreck a nice beach
an ice beach
conditional random fields
Talk: Learning and Efficiency of Outcomes in Games
- By Eva Tardos
- Repeated games
- player’s value/cost additive over periods, while playing
- players try to learn what is the best from past data
- what can we say about the outcome? how long do they have to stay to ensure OK social welfare?
- Result: routing, limit for very small users
- Theorem:
- In any network with continuous, non-decreasing cost functions and small users
- cost of Nash with rates r_i for all i <= cost of opt with rates 2r_i for all i
- Nash equilibrium: stable solution where no player has an incentive to deviate.
- Price of Anarchy = cost of worst Nash equilibrium / social optimum cost;
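A minimal sketch of this definition on Pigou's classic two-link routing example (a standard illustration, not necessarily the one used in the talk): one unit of traffic chooses between a link with cost c1(x) = x and a link with constant cost 1. The worst Nash puts all traffic on the first link, while the optimum splits it, giving a Price of Anarchy of 4/3:

```python
def total_cost(x):
    """Total cost when a fraction x of the traffic uses the variable link:
    x users pay x each, the remaining (1 - x) pay 1 each."""
    return x * x + (1 - x) * 1

# Nash: every small user prefers the variable link (its cost x never
# exceeds 1), so x = 1 and the total cost is 1.
nash_cost = total_cost(1.0)

# Social optimum: minimize x^2 + (1 - x) over x in [0, 1]; the grid
# contains the true minimizer x = 1/2 exactly.
opt_cost = min(total_cost(i / 1000) for i in range(1001))

poa = nash_cost / opt_cost
print(round(poa, 3))  # 1.333, i.e. 4/3
```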
- Theorem:
- Examples of price of anarchy bounds
- Price of anarchy in auctions
- First price auction
- All pay auction…
- Other applications include:
- public goods
- fair sharing
- Walrasian Mechanism
- Repeated game that is slowly changing
- Dynamic population model
- at each step t, each player i is replaced with an arbitrary new player with probability p
- in a population of N players, each step, Np players replaced in expectation
- population changes all the time: need to adjust
- players stay long enough…
- Dynamic population model
- Learning in repeated game
- what is learning?
- Does learning lead to finding Nash equilibrium?
- fictitious play = best respond to the past history of other players; goal: “pre-play” as a way to learn to play Nash
- Find a better idea when the game is playing?
- Change of focus: outcome of learning in playing
- Nash equilibrium of the one shot game?
- Nash equilibrium of the one-shot game: stable actions a with no regret for any alternate strategy x.
- cost_i(x, a_-i) >= cost_i(a)
- Behavior is far from stable
- no regret without stability: learning
- no regret: for any fixed action x (cost \in [0,1]):
- sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + error
- error <= √T (if the error is o(T), this is called no-regret)
- Outcome of no-regret learning in a fixed game
- limit distribution sigma of play (action vectors a = (a1, a2,…,an))
- No-regret learning as a behavior model:
- Pro:
- no need for common prior or rationality assumption on opponents
- behavioral assumption: if there is a consistently good strategy, the player will notice it
- algorithm: many simple rules ensure regret approx.
- Behavior model ….
- Pro:
- Distribution of smallest rationalizable multiplicative regret
- strictly positive regret: learning phase maybe better than no-regret
- Today (with d options):
- sum_t(cost_i(a^t)) <= sum_t(cost_i(x, a_-i^t)) + √(T log d)
- sum_t(cost_i(a^t)) <= (1 + epsilon) * sum_t(cost_i(x, a_-i^t)) + log(d)/epsilon
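The √(T log d) bound above is achievable with simple multiplicative-weights (Hedge) updates. Below is a minimal sketch, with a toy cost stream and parameters of my own rather than anything from the talk, checking that the realized regret stays within that order:

```python
import math, random

def hedge(costs, epsilon):
    """Run the multiplicative-weights (Hedge) rule on a T x d matrix of
    costs in [0, 1]; return the learner's cumulative expected cost."""
    d = len(costs[0])
    weights = [1.0] * d
    total = 0.0
    for round_costs in costs:
        z = sum(weights)
        probs = [w / z for w in weights]
        total += sum(p * c for p, c in zip(probs, round_costs))
        weights = [w * math.exp(-epsilon * c)
                   for w, c in zip(weights, round_costs)]
    return total

random.seed(0)
T, d = 2000, 5
# Toy cost stream: action 0 is better on average than the rest.
costs = [[random.random() * (0.5 if j == 0 else 1.0) for j in range(d)]
         for _ in range(T)]

epsilon = math.sqrt(math.log(d) / T)
learner = hedge(costs, epsilon)
best_fixed = min(sum(row[j] for row in costs) for j in range(d))
regret = learner - best_fixed
print(regret <= math.sqrt(T * math.log(d)) + math.sqrt(T))  # True
```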
- Quality of learning outcome
- Proof technique: Smoothness
- Learning and price of anarchy
- Learning in dynamic games
- Dynamic population model
- at each step t, each player i is replaced with an arbitrary new player with probability p
- how should they learn from data?
- Dynamic population model
- Need for adaptive learning
- Adapting result to dynamic populations
- inequality we wish to have
- Change in optimum solution
- Use differential privacy -> stable solution
How to write MP4 with OpenCV3
When trying to write an MP4 file (H264), I tried the following code:
```python
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
# or fourcc = cv2.VideoWriter_fourcc(*'X264')
voObj = cv2.VideoWriter('output.mp4', fourcc, 15.0, (1280, 360))
```
And I got an error saying:

```
FFMPEG: tag 0x5634504d/'MP4V' is not supported with codec id 13 and format 'mp4 / MP4 (MPEG-4 Part 14)'
```
This problem is solved by passing the fourcc as a number directly to cv2.VideoWriter(), i.e.
```cpp
outputVideo.open(writeName, 0x00000021, texHelper->caps[0].get(cv::CAP_PROP_FPS), S, true);
```
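The magic numbers here are less opaque than they look: the tag 0x5634504d in the FFMPEG error is just the four ASCII characters of the fourcc packed little-endian, and 0x00000021 bypasses the character-based lookup. A small pure-Python sketch (no OpenCV required; the helper names are mine):

```python
def fourcc_to_str(tag):
    """Unpack a 32-bit FourCC tag (little-endian byte order) into its
    4 ASCII characters."""
    return ''.join(chr((tag >> (8 * i)) & 0xFF) for i in range(4))

def str_to_fourcc(code):
    """Pack 4 characters into a 32-bit tag, the same arithmetic that
    cv2.VideoWriter_fourcc performs."""
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

print(fourcc_to_str(0x5634504D))   # MP4V -- the tag from the error message
print(hex(str_to_fourcc('MP4V')))  # 0x5634504d
```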
reference:
Paper Reading: View Direction and Bandwidth Adaptive 360 Degree Video Streaming using a Two-Tier System
BT chunks:
ET chunks:
StackGAN: Text to Photo-realistic Image Synthesis
StackGAN: generate image from text
PointNet, PointNet++, and PU-Net
point cloud -> deep network -> classification / segmentation / super-resolution
traditional classification / segmentation:
projection onto 2D plane and use 2D classification / segmentation
unordered set
point(Vec3) -> feature vector (Vec5) -> normalize (end with the bound of the pointcloud)
N points:
segmentation:
feature from N points -> N x K classes (each point will have a class)
classification:
feature from N points -> K x 1 vector (K classes)
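The key trick for handling the unordered set is a symmetric aggregation: per-point features are combined with an element-wise max, so the global feature does not depend on point order. A toy sketch (the feature map and numbers are made up for illustration, standing in for PointNet's shared MLP):

```python
import random

def point_feature(p):
    """Toy per-point feature map: Vec3 -> Vec5 (stand-in for a shared MLP)."""
    x, y, z = p
    return [x, y, z, x * y, x + y + z]

def global_feature(points):
    """Symmetric aggregation: element-wise max over per-point features."""
    feats = [point_feature(p) for p in points]
    return [max(col) for col in zip(*feats)]

cloud = [(0.1, 0.2, 0.3), (0.9, 0.1, 0.5), (0.4, 0.8, 0.2)]
shuffled = list(cloud)
random.seed(0)
random.shuffle(shuffled)

# Max pooling makes the global feature independent of point order.
print(global_feature(cloud) == global_feature(shuffled))  # True
```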
Lecture 10: Neural Network
- Deep learning
- Representation learning
- Rule-based
- high explainability
- Linguistic supervision
- Semi-supervision
- have small set of data with label
- has large set of data without label
- Recurrent-level supervision
- Language structure
description length DL = size(lexicon) + size(encoding)
- lex1
- do
- the kitty
- you
- like
- see
- Lex2
- do
- you
- like
- see
- the
- kitty
- How to evaluate the two lexicons?
- lex1 has 5 words, lex2 has 6 words
- Potential sequence
- lex1: 1 3 5 2, 5 2, 1 3 4 2
- lex2: 1 3 5 2 6, 5 2 6, 1 3 4 2 6
- MDL: minimum description lengths
- unsupervised
- prosodic bootstrapping
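The lexicon comparison above can be made concrete under two simplifying assumptions of mine: lexicon size is measured in characters and encoding size in tokens of the index sequences. By this particular metric lex1 has the smaller DL; other size measures could flip the outcome.

```python
def description_length(lexicon, sequences):
    """DL = size(lexicon) + size(encoding), with lexicon size in
    characters and encoding size in tokens (a simplifying assumption)."""
    lexicon_size = sum(len(word) for word in lexicon)
    encoding_size = sum(len(seq) for seq in sequences)
    return lexicon_size + encoding_size

lex1 = ["do", "the kitty", "you", "like", "see"]
seqs1 = [[1, 3, 5, 2], [5, 2], [1, 3, 4, 2]]

lex2 = ["do", "you", "like", "see", "the", "kitty"]
seqs2 = [[1, 3, 5, 2, 6], [5, 2, 6], [1, 3, 4, 2, 6]]

print(description_length(lex1, seqs1))  # 31
print(description_length(lex2, seqs2))  # 33
```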
Lexical space
relatedness vs. similarity
- use near neighbors: similarity
- use far neighbors: relatedness
ws-353 (WordSim-353) has both similarity & relatedness scores
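A toy sketch of the near-vs-far neighbor distinction, with made-up 2-D embeddings (the vectors and words are illustrative, not from ws-353 or any real model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

vectors = {
    "cup":    (0.9, 0.1),
    "mug":    (0.88, 0.15),  # near neighbor of "cup": similar
    "coffee": (0.5, 0.6),    # farther neighbor: related, not similar
}

print(cosine(vectors["cup"], vectors["mug"]) >
      cosine(vectors["cup"], vectors["coffee"]))  # True
```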
loss function:
project:
Part1: potential methods
- LDA
- readability
- syntactic analysis
Results of Training
The results are obvious in 1135, 1124, 1113
Questions about “Foveated 3D Graphics (Microsoft)” User Study
- Problem1: They did test for only one scene.
- The first problem is that the foveation level is highly dependent on the scene. They might get totally different parameters if they changed to another scene. Of course, this is a problem for all such user studies. So far, only NVIDIA has mentioned the multiple factors affecting vision. However, they don’t have a good way to deal with them.
- The second problem is about data analysis. They avoid the problem of one parameter -> multiple results by testing only one scene.
- Problem2: I don’t believe that their result is monotone.
- They just said:
- Ramp Test: For the ramp test, we identified this threshold as the lowest quality index for which each subject incorrectly labeled the ramp direction or reported that quality did not change over the ramp.
-
Pair Test: for the pair test, we identified a foveation quality threshold for each subject as the lowest variable index he or she reported as equal to or better in quality than the non-foveated reference.
- Suppose their quality level is 11,12,13,14,15. What if they get result of 1,1,1,0,1 ? Is their final quality level 13 or 15?
- I find it hard to believe that this situation never happened in their user study.
- If it happens, what should we do? Of course we should test for multiple scenes for many participants, and get the average. So we go back to problem 1.
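The ambiguity can be made explicit. With quality levels 11..15 and pass/fail results 1,1,1,0,1, two plausible threshold-picking rules (my reading, not quoted from the paper) disagree:

```python
def threshold_contiguous(levels, passed):
    """Highest level such that every level up to and including it passed."""
    best = None
    for level, ok in zip(levels, passed):
        if not ok:
            break
        best = level
    return best

def threshold_any(levels, passed):
    """Highest level that passed, ignoring failures below it."""
    return max(l for l, ok in zip(levels, passed) if ok)

levels = [11, 12, 13, 14, 15]
passed = [True, True, True, False, True]  # the 1,1,1,0,1 case above

print(threshold_contiguous(levels, passed))  # 13
print(threshold_any(levels, passed))         # 15
```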