
Challenges:

  1. Mathematical representation of the model
  2. Learning model parameters from data

Overview:

  • generative face model
    • space of faces: goals
      • each face represented as high dimensional vector
      • each vector in high dimensional space represents a face
    • Each face consists of
      • shape vector Si
      • Texture vector Ti
    • Shape and texture vectors:
      • Assumption: known point-to-point correspondence between faces
      • Construction: for example, a 2D parameterization using a few manually selected feature points
      • shape, texture vecs: sampled at m common locations in parameterization
        • Typically m = few thousand
        • shape texture vecs 3m dimensional (x, y, z & r, g, b)
        • *feature points may not be sampled.
    • linear face model
      • linear combinations of faces in database
        • s = sum(ai * Si)
        • t = sum(bi * Ti)
        • Basis vectors Si, Ti
        • Avg face Savg = 1/n sum(Si)
        • Avg face Tavg = 1/n sum(Ti)
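A minimal NumPy sketch of the linear model above (array shapes, coefficient choices, and names are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical database: n faces, each a 3m-dimensional vector
# (x, y, z at every sample point for shape; r, g, b for texture).
n, m = 200, 5000
S = np.random.rand(n, 3 * m)    # shape vectors S_i as rows (placeholder data)
T = np.random.rand(n, 3 * m)    # texture vectors T_i as rows (placeholder data)

S_avg = S.mean(axis=0)          # average face shape, S_avg = 1/n * sum(S_i)
T_avg = T.mean(axis=0)          # average face texture, T_avg = 1/n * sum(T_i)

# A new face is a linear combination of the database faces.
a = np.random.dirichlet(np.ones(n))   # example coefficients a_i (sum to 1)
b = np.random.dirichlet(np.ones(n))   # example coefficients b_i (sum to 1)
s_new = a @ S                         # s = sum_i a_i * S_i
t_new = b @ T                         # t = sum_i b_i * T_i
```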
    • Probabilistic modeling
      • PDF over space of faces gives probability to encounter certain face in a population
      • Sample the PDF generates random new faces.
      • Observation
        • Shape & tex vecs are not suitable for probabilistic modeling
        • Too much redundancy
        • many vecs do not resemble real faces
        • real faces occupy only a small region of this space
      • Assumption: faces occupy a linear subspace of the high dimensional space.
        • Faces lie on hyperplane
        • Illustration in 3D
          • faces lie on 2D plane in 3D
        • How to determine hyperplane?
          • PCA:
            • find an orthogonal set of vectors that best represents the input data points
            • first basis vec: largest variance along its direction
            • Following basis vecs: orthogonal, ordered according to variance along their directions
          • Dimensionality reduction:
            • discard basis vectors with low variance
            • represent each data point using remaining k basis vecs (k-dim hyperplane)
            • can show: hyperplanes obtained via PCA minimize L2 distance to plane
          • First basis vec maximizes variance along its direction
            • w1 = argmax_{||w||=1} sum_i ((t1)i)^2 = argmax_{||w||=1} sum_i (xi · w)^2 = argmax_{||w||=1} ||Xw||^2 = argmax_{||w||=1} w’X’Xw
            • data points as row vecs xi , zero mean
            • matrix X consists of row vecs xi
          • w1 is the eigenvector corresponding to the max eigenvalue of X’X
        • Properties:
          • Matrix X’X is proportional to the so-called sample covariance matrix of the dataset X
          • if the dataset has a multivariate normal distribution, the maximum likelihood estimate of that distribution is
            • f(x) = (2 pi)^(-m/2) det(Sigma)^(-1/2) exp(-1/2 (x - mu)’ Sigma^(-1) (x - mu)), with sample mean mu and sample covariance Sigma
        • Note
          • X can be very large: X is an n x m matrix
            • n is the # of data vecs
            • m is the length of each data vec
          • m >> n in general
          • X’X is an m x m matrix, very large
        • SVD of X
          • X = U sigma V’
          • X’X = V sigma’ U’U sigma V’ = V sigma^2 V’
          • right singular vecs V of X are eigenvec of X’X
          • singular values sigma(k) of X are square roots of eigenvalues lambda(k) of X’X
          • change of basis into the orthogonal PCA basis by projection onto the eigenvectors
        • T = XV = U sigma V’V = U sigma
        • left singular vectors U multiplied by the singular values in sigma
      • Dimensionality reduction
        • only consider the l eigenvectors / singular vectors corresponding to the l largest singular values
        • Tl = X Vl = Ul * sigmal
        • Matrix Tl in R^(n x l) contains the coordinates of the n samples in the reduced number l of dimensions
        • computation of only l components directly via truncated SVD
        • multivariate normal distribution in reduced space has covariance matrix
        • Diagonal matrix, l largest eigenvalues of X’X
        • PCA diagonalizes the covariance matrix, i.e. it decorrelates the data
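A sketch of the PCA / dimensionality-reduction pipeline above, assuming the data vectors are the rows of a zero-mean matrix X (n rows of length m). It uses NumPy's SVD instead of forming the large m x m matrix X'X explicitly:

```python
import numpy as np

def pca_reduce(X, l):
    """Project zero-mean data (rows of X, shape n x m) onto the l principal directions."""
    # Thin SVD: X = U @ diag(sigma) @ Vt; rows of Vt are the right singular vectors.
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    V_l = Vt[:l].T              # first l eigenvectors of X'X, as columns
    T_l = X @ V_l               # = U[:, :l] * sigma[:l]; coordinates in the reduced space
    eigvals = sigma[:l] ** 2    # l largest eigenvalues of X'X
    return T_l, V_l, eigvals

# Example: n = 100 faces of dimension m = 15000, reduced to l = 20 components.
X = np.random.randn(100, 15000)
X = X - X.mean(axis=0)          # subtract the average face first
T_l, V_l, eigvals = pca_reduce(X, 20)
X_approx = T_l @ V_l.T          # reconstruction on the 20-dimensional hyperplane
```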
    • attribute based modeling
      • manually define attributes, label each face i with a weight mui for each attribute mu
      • attribute vecs
      • add/ subtract multiple of attribute vecs
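One common way to build such an attribute vector (this follows the usual morphable-model recipe of weighting each face's offset from the average by its attribute label; treat the exact formula as an assumption, not necessarily the one from the lecture):

```python
import numpy as np

n, dim = 200, 15000
S = np.random.rand(n, dim)      # shape vectors S_i as rows (placeholder data)
mu = np.random.rand(n)          # manually assigned attribute labels mu_i, one per face

S_avg = S.mean(axis=0)
# Attribute direction: label-weighted sum of offsets from the average face.
delta_s = ((mu - mu.mean())[:, None] * (S - S_avg)).sum(axis=0)

# Add / subtract a multiple of the attribute vector to change that attribute.
s_stronger = S[0] + 0.5 * delta_s
s_weaker   = S[0] - 0.5 * delta_s
```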
    • model fitting, tracking
      • Assume a parametric shape model
        • Given parameters,can generate shape
      • model fitting, tracking problem
        • given some observation, find the shape parameters that are most likely to produce that observation
      • Bayes’ theorem
    • Matching to images
      • Model parameters to generate an image
        • Shape vec: alpha
        • tex vec: beta
        • rendering parameters (projection, lighting) rho
      • Given an image, what are the most likely model and rendering parameters that generated it?
        • MAP estimation
        • Bayes’ theorem
        • compute the parameters that maximize the posterior p(alpha, beta, rho | image), as written out below
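Written out, the fitting objective the bullets above point to (a sketch; the Gaussian image likelihood in the last sentence is an assumption about the noise model):

```latex
p(\alpha,\beta,\rho \mid I) \;\propto\; p(I \mid \alpha,\beta,\rho)\, p(\alpha)\, p(\beta)\, p(\rho)
\qquad
(\alpha^*,\beta^*,\rho^*) \;=\; \arg\max_{\alpha,\beta,\rho}\; p(I \mid \alpha,\beta,\rho)\, p(\alpha)\, p(\beta)\, p(\rho)
```

With a Gaussian image likelihood, maximizing this posterior amounts to minimizing a pixel-difference term between the input image and the rendered model image, plus prior terms on alpha, beta, rho.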
  • Discussion
    • adv:
    • Disadv:
      • linear model may not be accurate
      • linear model not suitable for large geometric deformations (rotations)


Deformation

  • Reference
  • Deformation energy
    • Geometric energy to stretch and bend a thin shell from one shape to another, expressed as the difference between the first and second fundamental forms
      • First: stretching
      • second: bending
    • Approach:
      • Given constraints (handle positions / orientations), find the surface that minimizes the deformation energy
    • Linear Approximation
      • Energy based on the fundamental forms is a non-linear function of the displacements
        • Hard to minimize
      • linear approximation using partial derivatives of displacement function.
        • Assume parameterization of displacement field d(u,v)
        • Bending:
          • linear energy:
            • Laplace = 0: minimizes surface area
          • Variational calculus, Euler-Lagrange equations:
            • laplace of laplace: make it smooth (the derivative of surface change continuously and is minimized)
          • So, apply bi-laplacian on mesh
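A minimal sketch of this linearized approach: build a mesh Laplacian L (a uniform graph Laplacian here for simplicity; cotangent weights would be the usual choice, so treat the weighting as an assumption) and solve the bi-Laplacian system L^2 d = 0 for the displacement field, with handle and fixed vertices as hard constraints:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def bilaplacian_deform(V, edges, handle_idx, handle_disp, fixed_idx):
    """Solve L^2 d = 0 with prescribed displacements at handle and fixed vertices.

    V: n x 3 vertex positions; edges: (i, j) index pairs;
    handle_idx / fixed_idx: integer index arrays; handle_disp: len(handle_idx) x 3.
    """
    n = V.shape[0]
    A = sp.lil_matrix((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A.tocsr()   # uniform Laplacian D - A
    B = (L @ L).tocsr()                                            # bi-Laplacian

    constrained = np.concatenate([handle_idx, fixed_idx])
    d_c = np.vstack([handle_disp, np.zeros((len(fixed_idx), 3))])
    free = np.setdiff1d(np.arange(n), constrained)

    # Move the constrained columns to the right-hand side and solve for the free vertices.
    rhs = -(B[free][:, constrained] @ d_c)
    solve = spla.factorized(B[free][:, free].tocsc())
    d = np.zeros((n, 3))
    d[constrained] = d_c
    d[free] = np.column_stack([solve(rhs[:, c]) for c in range(3)])
    return V + d                                                   # deformed vertex positions
```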
    • Skeletal animation

Nov14. Mesh Smoothing

  • Mesh smoothing:
    • local averaging
    • minimize local gradient energy in 3 dimensions
    • Fourier transform (low pass filter) similar to local averaging idea
      • image convolution
      • F(A * B) = F(A) . F(B): convolution in the spatial domain becomes pointwise multiplication in the frequency domain, as checked below
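A quick NumPy check of the convolution theorem stated above, where "*" is circular convolution and "." is pointwise multiplication (a small self-contained demo, not from the notes):

```python
import numpy as np

N = 64
a = np.random.rand(N)
b = np.random.rand(N)

# Circular convolution computed directly in the spatial domain.
conv = np.zeros(N)
for k in range(N):
    for j in range(N):
        conv[k] += a[j] * b[(k - j) % N]

# Convolution theorem: F(a * b) = F(a) . F(b)
assert np.allclose(np.fft.fft(conv), np.fft.fft(a) * np.fft.fft(b))
```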
  • Spectral Analysis
    • In general: extending eigenvalues, eigenvectors to linear operators on (continuous) functions.
    • Fourier transform:
      • approximate signal as weighted sum (linear combination) of sines and cosines of different frequencies.
      • change of basis using eigenfunctions of Laplace operator (complex exponentials including sines and cosines)
      • Fourier transform: f_hat(xi) = integral over x of f(x) e^(-2 pi i x xi) dx
      • spatial domain -> frequency domain; f_hat(xi) is a complex amplitude
      • Inverse transform: f(x) = integral over xi of f_hat(xi) e^(2 pi i x xi) dxi
      • denoising: Fourier transform -> filter out high frequencies -> inverse Fourier transform (sketch below)
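The denoising pipeline from the last bullet, sketched for a 1D signal with NumPy (the cutoff frequency is an arbitrary choice):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(512)   # smooth signal + noise

# Fourier transform -> zero out high frequencies -> inverse transform.
spectrum = np.fft.rfft(signal)
cutoff = 20                        # keep only the 20 lowest frequency bins (assumption)
spectrum[cutoff:] = 0.0
denoised = np.fft.irfft(spectrum, n=512)
```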
    • For mesh:
      • Intuition: Fourier transform by projecting onto eigenfunctions of the Laplacian
      • mesh laplacian L is n x n matrix,  n is number of vertices
        • Use the symmetric PSD L (not normalized by vertex valence or Voronoi area)
        • eigenvectors orthogonal
      • Project geometry onto eigenvectors.
      • reconstruction from eigenvectors associated with low frequencies
      • Challenge:
        • Too complex!
        • Too much computation!
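A sketch of the projection / reconstruction idea for a mesh, using the uniform graph Laplacian and its low-frequency eigenvectors via SciPy; the eigendecomposition is exactly the expensive step the "Challenge" bullets refer to:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spectral_smooth(V, edges, k=50):
    """Reconstruct vertex positions V (n x 3) from the k lowest-frequency eigenvectors."""
    n = V.shape[0]
    A = sp.lil_matrix((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    # Symmetric PSD graph Laplacian L = D - A (not normalized by valence or Voronoi area).
    L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsc()

    # k eigenvectors with the smallest eigenvalues = low mesh "frequencies"
    # (shift-invert around a small negative sigma keeps the factorization non-singular).
    _, E = spla.eigsh(L, k=k, sigma=-1e-6, which='LM')

    # Project the geometry onto the eigenvectors, then reconstruct from them.
    return E @ (E.T @ V)
```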
  • Diffusion
    • Laplace smoothing
      • Laplace is second derivative.
      • Smooth with Gaussian kernel.
      • backward Euler
        • solve p’ = p + mu * dt *L * p’
        • (I - mu * dt * L) p’ = p, with identity matrix I
        • solve linear system for  p’ in each step
        • Advantages: Allow larger time steps, no numerical stability problems.
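The backward Euler step above as a sparse linear solve (again with a uniform graph Laplacian and an arbitrary mu*dt, so only a sketch of the idea):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_laplace_smooth(V, edges, mu_dt=1.0, steps=5):
    """Each step solves (I - mu*dt*L) p' = p for the new vertex positions p'."""
    n = V.shape[0]
    A = sp.lil_matrix((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = (A - sp.diags(deg)).tocsc()               # Laplacian: (L p)_i points toward the neighbor average
    M = (sp.identity(n) - mu_dt * L).tocsc()      # system matrix (I - mu*dt*L)

    solve = spla.factorized(M)                    # factor once, reuse for every step and coordinate
    P = np.asarray(V, dtype=float)
    for _ in range(steps):
        P = np.column_stack([solve(P[:, c]) for c in range(P.shape[1])])
    return P
```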
  • Energy minimization
  • Alternatives