Data Mining and Exploration [INFR11007]

Lab 2: Principal component analysis (PCA)

Principal component analysis (PCA) is one of the most widely used techniques for exploratory data analysis and preprocessing.

We start this lab off by introducing some fundamental concepts regarding covariance matrices. We then look at several methods for identifying the principal component (PC) directions and scores for an artificial dataset.

Let's start by importing the packages we will need throughout the lab.

In [47]:
from __future__ import print_function, division
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline

Covariance matrices

Covariance matrices are by definition symmetric positive semi-definite. One property of positive semi-definite matrices is that their eigenvalues are all non-negative. You are provided with the function generate_spsd_matrix() which generates a random symmetric positive semi-definite matrix. The random_seed parameter can be used to set the numpy random seed in order to ensure reproducible results.

In [48]:
def generate_spsd_matrix(d, random_seed=None):
    """Reproducible random generation of symmetric 
    positive semi-definite matrices. 
    
    Parameters
    ----------
    d : integer
        Number of dimensions.
    
    random_seed : integer
        Random seed number.
        
    Returns 
    ----------
    A : ndarray, shape (d,d)
        Symmetric positive semi-definite matrix.
    
    """
    
    if random_seed is not None:
        np.random.seed(random_seed)
    A = np.random.randn(d,d)
    return np.dot(A.T, A)

========== Question 1 ==========

Write a function which checks if a matrix is positive semi-definite.

Generate a random 10 x 10 matrix by using the generate_spsd_matrix() function and double-check that it is positive semi-definite by using the function you just wrote.

Hint: you might find the numpy.linalg.eigh() function useful.

In [49]:
# Your code goes here
def is_positive_semi_definite(a):
    """Tests whether a matrix is symmetric positive 
    semi-definite.
    
    Parameters
    ----------
    a : ndarray, shape (n,n)
        Symmetric matrix.
      
    Returns 
    ----------
    True if matrix is positive semi-definite.
        
    Raises 
    ----------
    ValueError
        If the provided matrix is not real or symmetric.
    """
    
    a = np.asarray(a)
    
    # Check that the matrix is real
    if not np.all(np.isreal(a)):
        raise ValueError("The provided matrix is not real.")
    
    # Check that the matrix is symmetric
    if not np.array_equal(a, a.T):
        raise ValueError("The provided matrix is not symmetric.")
    
    # Positive semi-definite if and only if all eigenvalues are non-negative
    eigval, _ = np.linalg.eigh(a)
    return bool(np.all(eigval >= 0))
In [50]:
# Example
a = generate_spsd_matrix(d=10, random_seed=10)
if is_positive_semi_definite(a):
    print("Matrix is positive semi-definite.")
else:
    print("Matrix is not positive semi-definite.")
Matrix is positive semi-definite.

Sampling data with a given covariance matrix

Covariance matrix eigendecomposition

Suppose that you want to sample (i.e. generate) some data points from a multivariate normal distribution with a known mean vector and covariance matrix. We will assume that we have access to the tool required to sample from a standard univariate normal distribution, that is, one with zero mean and unit variance (this can be done, for example, by applying the inverse cumulative distribution function of the standard normal to uniform random numbers).

We know that if a univariate gaussian random variable $x$ has zero mean and unit variance, then $y = \sigma x + \mu$ has mean $\mu$ and variance $\sigma^2$ . Therefore, we can use this property to sample from an arbitrary univariate normal distribution with mean $\mu$ and variance $\sigma^2$.
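As a quick numerical sanity check of this scaling property (a minimal sketch reusing the numpy import from the first cell; the values of mu_1d and sigma_1d below are arbitrary):

x_1d = np.random.randn(int(1e6))        # samples from a standard normal
mu_1d, sigma_1d = 2.0, 3.0              # arbitrary target mean and standard deviation
y_1d = sigma_1d * x_1d + mu_1d
print(np.mean(y_1d), np.var(y_1d))      # should be close to 2.0 and 9.0 respectively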

By repeating the process described above $d$ times, we can sample from a $d$-dimensional normal distribution with an arbitrary mean vector and diagonal covariance matrix. But how can we sample from a multivariate normal distribution with a given covariance matrix $\mathbf{C}$? The answer is through decomposing the covariance matrix. One such method is the eigen (or spectral) decomposition which, for a real symmetric matrix (such as a covariance matrix), takes the following form: $$\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$$ where $\mathbf{U}$ is an orthogonal matrix containing the (right) eigenvectors of $\mathbf{C}$ and $\mathbf{\Lambda}$ is a diagonal matrix whose entries are the eigenvalues of $\mathbf{C}$. Now, you might wonder how this decomposition can help us sample from a multivariate distribution.
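We can quickly verify this decomposition numerically (a minimal sketch using numpy.linalg.eigh and the generate_spsd_matrix() function defined above):

C_demo = generate_spsd_matrix(d=3, random_seed=0)
eigvals_demo, U_demo = np.linalg.eigh(C_demo)    # columns of U_demo are the eigenvectors
C_recon = np.dot(U_demo, np.dot(np.diag(eigvals_demo), U_demo.T))
print(np.allclose(C_demo, C_recon))              # True: C = U Lambda U^T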

Assume $\mathbf{x}$ is sampled from a standard multivariate normal distribution (i.e. zero mean vector, identity covariance matrix). We can now create a new random variable $\mathbf{y}$ by transforming $\mathbf{x}$ as follows: $$\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{\mu}$$.

========== Question 2 ==========

Assume that $\mathbf{x}$ is a multivariate random variable with zero mean and unit covariance matrix, and $\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$ is the eigendecomposition of $\mathbf{C}$.

If $\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{\mu}$, show that $\mathbf{y}$ has mean $\mathbf{\mu}$ and covariance $\mathbf{C} $.

Your answer goes here

$E\left[ \mathbf{y} \right] = E \left[ \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{\mu} \right] = \mathbf{U} \mathbf{\Lambda}^{1/2} E \left[ \mathbf{x} \right] + E \left[\mathbf{\mu} \right] = \mathbf{0} + \mathbf{\mu} = \mathbf{\mu}$.

$Var\left[ \mathbf{y} \right] = E \left[ \left( \mathbf{y} - E \left[ \mathbf{y} \right] \right) \left( \mathbf{y} - E \left[ \mathbf{y} \right] \right)^T \right] = E \left[ \left( \mathbf{y} - \mathbf{\mu} \right) \left( \mathbf{y} - \mathbf{\mu} \right)^T \right] = E \left[ \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} \left( \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} \right) ^T \right] = \mathbf{U} \mathbf{\Lambda}^{1/2} E\left[ \mathbf{x} \mathbf{x}^T \right] \mathbf{\Lambda}^{1/2} \mathbf{U} ^T = \mathbf{U} \mathbf{\Lambda} \mathbf{U} ^T = \mathbf{C}$, where we have used $ \left( \mathbf{\Lambda}^{1/2} \right) ^T=\mathbf{\Lambda}^{1/2}$, since $\mathbf{\Lambda}$ is diagonal, and $E\left[ \mathbf{x} \mathbf{x}^T \right] = Var \left[ \mathbf{x} \right] = \mathbf{I}$.

Please note that the above holds for any random variable $\mathbf{x}$ with zero mean and identity covariance, regardless of its probability distribution. As a special case, we focus on the multivariate normal distribution.

The procedure of sampling one data point from a multivariate normal distribution with mean $\mathbf{\mu}$ and covariance matrix $\mathbf{C}$ is as follows:

  1. Decompose the covariance matrix by using its eigendecomposition, i.e. $\mathbf{C} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$.
  2. Sample a data point $\mathbf{x} \in \mathbb{R}^d$ from a standard normal distribution (i.e. zero mean, identity covariance matrix).
  3. Transform $\mathbf{x}$ by using $\mathbf{y} = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x} + \mathbf{\mu}$.

========== Question 3 ==========

Write a function that generates random samples from a multivariate normal distribution with given mean vector and covariance matrix. You should make use of the numpy.random.randn() function that generates samples from a standard multivariate gaussian distribution and the eigendecomposition of the covariance matrix. For computing the eigendecomposition of a symmetric matrix you should use the numpy.linalg.eigh() function.

Important:

Most scientific computing packages like scikit-learn follow the $n \times d$ convention for a data matrix, where $n$ is the number of samples and $d$ the dimensionality, that is, $\mathbf{X} \in \mathbb{R} ^ {n \times d}$. Please note that many textbooks and the lecture notes for this course follow reverse notation (i.e. $d \times n$). Here, you should follow the $n \times d$ convention. If the standard normal data are stored in a matrix $\mathbf{X}$, then you should compute $\mathbf{Y}$ by $$\mathbf{Y} = \mathbf{X} \mathbf{\Lambda}^{\frac{1}{2}} \mathbf{U}^T $$
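To see where the transpose comes from: if sample $i$ is stored as the $i$-th row $\mathbf{x}_i^T$ of $\mathbf{X}$, then transposing $\mathbf{y}_i = \mathbf{U} \mathbf{\Lambda}^{1/2} \mathbf{x}_i$ gives $\mathbf{y}_i^T = \mathbf{x}_i^T \mathbf{\Lambda}^{1/2} \mathbf{U}^T$ (using that the diagonal matrix $\mathbf{\Lambda}^{1/2}$ is symmetric), and stacking these rows yields $\mathbf{Y} = \mathbf{X} \mathbf{\Lambda}^{1/2} \mathbf{U}^T$.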

Finally, generate a 3 x 3 random covariance matrix C by using the generate_spsd_matrix() function. Use the function you just wrote to generate 1 million random samples with zero mean and covariance matrix C. Compute the empirical covariance matrix of the data (you can use numpy.cov() by setting rowvar=False) and double-check that it is a good estimate of the true covariance matrix C.

Tip: don't get stuck on this, move on after 10 mins or so (it's not critical)

In [51]:
# Your code goes here
def sample_multivariate_normal_eig(mean, cov, n, random_seed=None):
    """Sample from multivariate normal distribution
    with given mean and covariance matrix by
    using eigendecomposition of covariance matrix.
    
    Parameters
    ----------
    mean  : array, shape (d,)
        Mean vector.
        
    cov : array, shape (d,d)
        Covariance matrix.
    
    n : integer
        Number of samples.
        
    random_seed : integer (optional)
        Random seed.
      
    Returns 
    ----------
    X : array, shape(n, d)
        Random samples.
    
        
    Raises 
    ----------
    ValueError
        If the provided covariance matrix is not positive semi-definite.
        
    """
    
    # Check that the covariance matrix is positive semi-definite.
    if not is_positive_semi_definite(cov):
        raise ValueError("Covariance matrix must be positive semi-definite.")
    
        
    # Check that mean vector has the same dimensionality
    # as the covariance matrix.
    if mean.size != cov.shape[0]:
        raise ValueError("Mean vector and covariance matrix must have the same dimensionality.")
    
    if random_seed is not None:
        np.random.seed(random_seed)
    
    d = mean.size
    X_standard = np.random.randn(n, d)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return np.dot(X_standard, (eigvecs * (eigvals**0.5)).T) + mean
In [52]:
# Numerical example
n_sam = int(1e6)
n_dim = 3
C = generate_spsd_matrix(d=n_dim, random_seed=10)
data = sample_multivariate_normal_eig(mean=np.zeros((n_dim,)), cov=C, n=n_sam)
print("True covariance matrix:\n{}\nEmpirical (sample) covariance matrix:\n{}"
      .format(C, np.cov(data, rowvar=False)))
True covariance matrix:
[[ 1.84368931  0.97606753 -2.05065766]
 [ 0.97606753  0.90946519 -1.55234157]
 [-2.05065766 -1.55234157  2.90680369]]
Empirical (sample) covariance matrix:
[[ 1.83998213  0.97346173 -2.04594651]
 [ 0.97346173  0.90754625 -1.54868095]
 [-2.04594651 -1.54868095  2.90015654]]

Note: numpy built-in function

Numpy implements the numpy.random.multivariate_normal() function that can be used to generate samples from a multivariate normal distribution with a given mean vector and covariance matrix. You are encouraged to use such built-in functions whenever available, as they will most likely be highly optimised and bug-free. Nevertheless, it is sometimes very useful to know what these functions do under the hood.
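For instance, here is a minimal cross-check of our sampler against the built-in (reusing C, n_dim and n_sam from the cell above; the two samplers will not produce identical samples, but both sample covariances should be close to C):

data_builtin = np.random.multivariate_normal(mean=np.zeros(n_dim), cov=C, size=n_sam)
print(np.cov(data_builtin, rowvar=False))  # should again be close to C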

Covariance matrix structure

You are provided with the function generate_gaussian_data(), which can be used to generate a multivariate gaussian dataset with a given mean and covariance. When the mean and covariance are not provided, they are generated at random. The random_seed parameter can be used to ensure reproducible results. The function returns a tuple containing three items: the dataset, the true mean, and the true covariance matrix of the probability distribution the data were sampled from. Execute the cell below to load this function.

In [53]:
def generate_gaussian_data(n_samples, n_features=None, mu=None, cov=None, random_seed=None):
    """Generates a multivariate gaussian dataset.
    
    Parameters
    ----------
    
    n_samples : integer
        Number of samples.
        
    n_features : integer
        Number of dimensions (features).
        
    mu : array, optional (default random), shape (n_features,)
        Mean vector of normal distribution.
        
    cov : array, optional (default random), shape (n_features,n_features)
        Covariance matrix of normal distribution.
    
    random_seed : integer
        Random seed.
        
    Returns
    -------
    
    x : array, shape (n_samples, n_features)
        Data matrix arranged in rows (i.e. 
        columns correspond to features and 
        rows to observations).
    
    mu : array, shape (n_features,)
        Mean vector of normal distribution.
        
    cov : array, shape (n_features,n_features)
        Covariance matrix of normal distribution.
        
    Raises
    ------
    
    ValueError when the shapes of mu and C are not compatible
    with n_features.
    
    """
    
    if random_seed is not None:
        np.random.seed(random_seed)
        
    if mu is None:
        mu = np.random.randn(n_features,)
    else:
        if n_features is None:
            n_features = mu.shape[0]
        else:
            if mu.shape[0] != n_features:
                raise ValueError("Shape mismatch between mean and number of features.")
                
    if cov is None:
        cov = generate_spsd_matrix(n_features, random_seed=random_seed)
    else:
        if (cov.shape[0] != n_features) or (cov.shape[1] != n_features):
            raise ValueError("Shape mismatch between covariance and number of features.")
            
    x = np.random.multivariate_normal(mu, cov, n_samples)
    return (x, mu, cov)

========== Question 4 ==========

By using the function provided above, generate a 2-dimensional gaussian dataset consisting of 1000 samples. Use a zero mean and the following covariance matrix:

$$ \mathbf{C} = \left( \begin{array}{cc} 1 & 0.3 \\ 0.3 & 2 \\ \end{array} \right) $$

Print the empirical mean, covariance and correlation matrices. You should use numpy built-in functions for this purpose, but you are also free to implement your own functions if you are keen. Finally, use the seaborn jointplot() function to produce a joint scatter plot of the two variables. This function also shows the marginal histograms on the top and right hand sides of the plot. Label axes appropriately.

In [54]:
# Your code goes here
# By using numpy functions
mu = np.zeros((2,))
cov = np.array([[1, 0.3], [0.3, 2]])
x_2d, _, _ = generate_gaussian_data(n_samples=1000, mu=mu, cov=cov, random_seed=10)
print("Estimated mean:\n{}".format(np.mean(x_2d, axis=0)))
print("Estimated covariance matrix:\n{}".format(np.cov(x_2d, rowvar=False)))
print("Estimated correlation matrix:\n{}".format(np.corrcoef(x_2d, rowvar=False)))
g = sns.jointplot(x_2d[:,0],x_2d[:,1], size=4, stat_func=None, ratio=5, joint_kws={"s" : 1}, color='grey')
g.set_axis_labels("Dim 1", "Dim 2")
plt.show()
Estimated mean:
[-0.01509878 -0.0196932 ]
Estimated covariance matrix:
[[ 0.92458509  0.28919834]
 [ 0.28919834  1.89833705]]
Estimated correlation matrix:
[[ 1.          0.21829093]
 [ 0.21829093  1.        ]]
In [55]:
# By using custom-written functions
def data_mean(x):
    x = np.asarray(x)
    if x.ndim != 2: # Check that data are 2D
        raise ValueError('Array x must be 2-dimensional.')
    
    n = x.shape[0] # Number of samples
    return np.sum(x, axis=0) / n

def data_cov(x):
    x = np.asarray(x)
    if x.ndim != 2: # Check that data are 2D
        raise ValueError('Array x must be 2-dimensional.')
    
    n = x.shape[0] # Number of samples
    mean = data_mean(x) # mean
    return np.dot((x - mean).T, (x-mean)) / (n-1)

def data_corr(x):
    cov = data_cov(x)
    d = x.shape[1] # Number of dimensions
    corr = np.zeros_like(cov)
    for dim_i in range(d):
        for dim_j in range(d):
            corr[dim_i, dim_j] = cov[dim_i, dim_j] / np.sqrt(cov[dim_i, dim_i]*cov[dim_j, dim_j])
    return corr
    
mu = np.zeros((2,))
cov = np.array([[1, 0.3], [0.3, 2]])
x_2d, _, _ = generate_gaussian_data(n_samples=1000, mu=mu, cov=cov, random_seed=10)
print("Estimated mean:\n{}".format(data_mean(x_2d)))
print("Estimated covariance matrix:\n{}".format(data_cov(x_2d)))
print("Estimated correlation matrix:\n{}".format(data_corr(x_2d)))
Estimated mean:
[-0.01509878 -0.0196932 ]
Estimated covariance matrix:
[[ 0.92458509  0.28919834]
 [ 0.28919834  1.89833705]]
Estimated correlation matrix:
[[ 1.          0.21829093]
 [ 0.21829093  1.        ]]

========== Question 5 ==========

Repeat the same procedure as above, but now modify the covariance matrix such that the true correlation coefficient between the two variables is 0.6 while the variances stay unchanged.

Visualise the jointplot() of the new dataset and compare to the one from the previous question. Do your observations agree with the modification you just applied to the covariance matrix?

In [56]:
# Your code goes here
rho_prime = 0.6 # New correlation coefficient
cov_prime = np.copy(cov) # New covariance matrix (copy and modify below)
cov_prime[0,1] = np.sqrt(cov[0,0]*cov[1,1])*rho_prime # rho = cov(x1,x2) / (std(x1)*std(x2))
cov_prime[1,0] = cov_prime[0,1]

if is_positive_semi_definite(cov_prime): # Double-check that the new covariance matrix is positive semi-definite
    print("Original (true) covariance matrix:\n{}".format(cov))
    print("Modified (true) covariance matrix:\n{}".format(cov_prime))
    x_2d_prime, _, _ = generate_gaussian_data(n_samples=1000, mu=mu, cov=cov_prime, random_seed=10)
    print("\nEstimated mean:\n{}".format(np.mean(x_2d_prime, axis=0)))
    print("Estimated covariance matrix:\n{}".format(np.cov(x_2d_prime, rowvar=False)))
    print("Estimated correlation matrix:\n{}".format(np.corrcoef(x_2d_prime, rowvar=False)))
    g = sns.jointplot(x_2d_prime[:,0],x_2d_prime[:,1], size=4, stat_func=None, ratio=5, joint_kws={"s" : 1}, color='grey')
    g.set_axis_labels("Dim 1", "Dim 2")
    plt.show()
else:
    print("Modified matrix is not positive semi-definite.")
Original (true) covariance matrix:
[[ 1.   0.3]
 [ 0.3  2. ]]
Modified (true) covariance matrix:
[[ 1.          0.84852814]
 [ 0.84852814  2.        ]]

Estimated mean:
[ 0.01851669  0.01836272]
Estimated covariance matrix:
[[ 0.93691732  0.81070808]
 [ 0.81070808  1.89711606]]
Estimated correlation matrix:
[[ 1.          0.60808868]
 [ 0.60808868  1.        ]]

Principal component analysis (PCA)

Let us now dive into PCA.

For this section, we will use a 3-dimensional Gaussian dataset. Execute the cell below to generate the dataset and print the true mean and covariance matrix of the distribution the data was sampled from.

In [57]:
# Generates a 3D dataset and prints true mean and covariance
x_3d, mu_true, C_true = generate_gaussian_data(n_samples=1000, n_features=3, random_seed=10)
print("Dataset consists of {} samples and {} features.".format(x_3d.shape[0], x_3d.shape[1]))
print("True mean:\n{}".format(mu_true))
print("True covariance matrix:\n{}".format(C_true))
Dataset consists of 1000 samples and 3 features.
True mean:
[ 1.3315865   0.71527897 -1.54540029]
True covariance matrix:
[[ 1.84368931  0.97606753 -2.05065766]
 [ 0.97606753  0.90946519 -1.55234157]
 [-2.05065766 -1.55234157  2.90680369]]

========== Question 6 ==========

Estimate the empirical mean and covariance matrix from this dataset and save them in variables mu_est and C_est respectively.

In [58]:
# Your code goes here
mu_est = np.mean(x_3d, axis=0)
C_est = np.cov(x_3d, rowvar=False)
print("Estimated mean:\n{}".format(mu_est))
print("Estimated covariance matrix:\n{}".format(C_est))
Estimated mean:
[ 1.363671    0.73019504 -1.58756481]
Estimated covariance matrix:
[[ 1.7849728   0.90843537 -1.93779655]
 [ 0.90843537  0.85144439 -1.44450311]
 [-1.93779655 -1.44450311  2.70942614]]

PCA sklearn implementation

Scikit-learn offers a class implementation of PCA. Please spend a minute reading the documentation of this class. The principal component (PC) directions of a dataset are computed by using the fit() method and stored in the components_ attribute. The PC scores can be computed by using the transform() method. The amount of variance explained by each of the selected components is stored in the explained_variance_ attribute.

Please note that this implementation centers the input data (i.e. removes the mean) before computing the PC directions.
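As a minimal sketch of the API (attribute names as in the scikit-learn documentation; the exact values depend on the data):

from sklearn.decomposition import PCA

pca_demo = PCA().fit(x_3d)                   # estimates the PC directions from x_3d
print(pca_demo.mean_)                        # the mean that was subtracted before fitting
print(pca_demo.explained_variance_ratio_)    # fraction of variance explained by each PC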

========== Question 7 ==========

Initialise a pca object and "fit" it using the dataset x_3d. Print the three PC directions. Store the PC scores for x_3d in an array called pc_scores.

Hint: according to the documentation the components are stored row-wise, i.e. the components_ array has shape [n_components, n_features]. You should print the PC directions column-wise, i.e. take the transpose of the components_ array.

In [60]:
# Your code goes here
from sklearn.decomposition import PCA
pca = PCA().fit(x_3d)
pc_scores = pca.transform(x_3d)
print(pca.components_.T)
[[-0.56052625 -0.78030784  0.27736257]
 [-0.38349242  0.54142509  0.74819278]
 [ 0.73399174 -0.31301525  0.60272512]]

Most often, we do not want to compute all PC directions, but only a few (i.e. dimensionality reduction). We can define the desired number of PCs by setting the n_components parameter appropriately when we initialise the pca object.

========== Question 8 ==========

Initialise a pca_new object with 2 PCs and "fit" it using the dataset x_3d. Print the two PC directions.

Hint: the 2 PC directions should be the same as the first 2 directions you computed in the previous question. The reason for this is ultimately that PCA by sequential and simultaneous variance maximisation give the same result.

In [61]:
# Your code goes here
pca_new = PCA(n_components=2).fit(x_3d)
print(pca_new.components_.T)
[[-0.56052625 -0.78030784]
 [-0.38349242  0.54142509]
 [ 0.73399174 -0.31301525]]

PCA implementation #1: covariance matrix eigendecomposition

Now we want to implement PCA from scratch. The procedure can be summarised as follows:

  1. Center the data (i.e. remove the mean).
  2. Compute the empirical covariance matrix.
  3. Perform the eigendecomposition of the estimated covariance matrix.
  4. Sort eigenvalues and associated eigenvectors, in eigenvalue descending order. The sorted eigenvectors correspond to the PC directions. If we want to reduce the dimensionality, we select the first k eigenvectors corresponding to the k largest eigenvalues (k < d).
  5. To compute the PC scores, project the centered data matrix onto the PC directions (i.e. take their matrix product).

(Regarding 3 and 4: some algorithms for eigendecompositions allow you to specify the number of eigenvectors and eigenvalues that should be computed. You then do not have to compute the complete eigendecomposition, which is wasteful if you are only interested in a few principal components.)
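For instance, scipy.sparse.linalg.eigsh can compute just the k largest eigenvalue/eigenvector pairs of a symmetric matrix (a minimal sketch, assuming the covariance estimate C_est from question 6; eigsh requires k to be smaller than the matrix dimension):

from scipy.sparse.linalg import eigsh

eigvals_top, eigvecs_top = eigsh(C_est, k=2, which='LM')  # two largest-magnitude eigenpairs
print(eigvals_top)  # note: not necessarily returned in descending order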

========== Question 9 ==========

Compute and print all three PC directions in the dataset x_3d by using the procedure described above. Hopefully, you will have already performed the first two steps in question 6. Then compute the PC scores for the centered data matrix.

As happens very often when writing code, it is likely that there will be a few bugs in your implementation. To double-check that your code is correct, compare the computed PC directions and scores to the ones achieved in question 7 with scikit-learn.

Hint: you might (or might not) find that some of the PC directions/scores you have computed have opposite signs to the ones returned by the sklearn implementation. Do not worry about this, the two solutions are equivalent (why?). To make debugging easier, you are provided with the function solutions_equivalent(), which tests whether two solutions are equivalent regardless of their signs. Execute the following cell to load this function.

In [20]:
def solutions_equivalent(b1, b2):
    """Checks whether two PC directions/scores
    solutions are equivalent regardless of their
    respective signs.
    
    Parameters
    ----------
    
    b1 : array, shape (n_points, n_axes)
        First solution.
        
    b2 : array, shape (n_points, n_axes)
        Second solution.
        
    Returns
    -------
    
    True if solutions are equivalent.
    
    Raises
    ------
    
    ValueError if the two bases do not have
    the same dimensionality.
    
    """
    
    s1 = np.asarray(b1)
    s2 = np.asarray(b2)
    
    if s1.shape != s2.shape:
        raise ValueError("Solutions must have the same dimensionality.")
    
    dims_equal = []
    for dim in range(s1.shape[1]):
        if (np.allclose(s1[:,dim],s2[:,dim]) or np.allclose(s1[:,dim],-s2[:,dim])):
            dims_equal.append(True)
        else:
            dims_equal.append(False)
    return all(element for element in dims_equal)
In [23]:
# Your code goes here
def sort_eigvals_eigvec(eigvals, eigvecs):
    """Sorts eigenvalues and eigenvectors 
    in eigenvalue descending order."""
    order = np.argsort(eigvals)[::-1]
    eigvals_sorted = eigvals[order]
    eigvecs_sorted = eigvecs[:,order]
    return eigvals_sorted, eigvecs_sorted

x_3d_centered = x_3d - mu_est
# Eigendecomposition of C
eigvals, eigvecs = np.linalg.eigh(C_est)
eigvals_sorted, eigvecs_sorted = sort_eigvals_eigvec(eigvals, eigvecs)
my_pca_1_directions = eigvecs_sorted
my_pca_1_scores = np.dot(x_3d_centered, my_pca_1_directions)

# Tests
print("Principal components:\n{}".format(my_pca_1_directions))
print(solutions_equivalent(pca.components_.T, my_pca_1_directions)) # PC directions
print(solutions_equivalent(pc_scores, my_pca_1_scores)) # PC scores
Principal components:
[[-0.56052625 -0.78030784  0.27736257]
 [-0.38349242  0.54142509  0.74819278]
 [ 0.73399174 -0.31301525  0.60272512]]
True
True

PCA implementation #2: data matrix singular value decomposition (SVD)

========== Question 10 ==========

The singular value decomposition (SVD) of the centered data matrix is: $$\mathbf{X} = \mathbf{V} \mathbf{S} \mathbf{U}^T $$ where the columns of $\mathbf{V} \in \mathbb{R}^{n\times n}$ and $\mathbf{U} \in \mathbb{R}^{d\times d}$ contain the left- and right-singular vectors of $\mathbf{X}$ respectively, and $\mathbf{S} \in \mathbb{R}^{n\times d}$ is a rectangular diagonal matrix containing its singular values.

Compute the PC directions and scores for the dataset x_3d by using the SVD of the centered data matrix (cf. lecture notes). Double-check that these are correct by using the solutions_equivalent() function.

Hint 1: make sure you center the data before computing the SVD of x_3d.

Hint 2: Remember, we are dealing with n x d data matrices here, as opposed to the d x n matrices in the lecture notes (that is why we have written $\mathbf{X} = \mathbf{V} \mathbf{S} \mathbf{U}^T$ here, as opposed to $\mathbf{X} = \mathbf{U} \mathbf{S} \mathbf{V}^T$ in the lecture notes). Which singular vectors of the centered data matrix correspond to the PC directions/scores in this case?
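To see the connection, substitute the SVD into the covariance estimate: $$\frac{1}{n-1}\mathbf{X}^T\mathbf{X} = \frac{1}{n-1}\mathbf{U}\mathbf{S}^T\mathbf{V}^T\mathbf{V}\mathbf{S}\mathbf{U}^T = \mathbf{U}\left(\frac{\mathbf{S}^T\mathbf{S}}{n-1}\right)\mathbf{U}^T.$$ The right-singular vectors (the columns of $\mathbf{U}$) are therefore the eigenvectors of the covariance matrix, i.e. the PC directions, with eigenvalues $s_i^2/(n-1)$, and the PC scores are $\mathbf{X}\mathbf{U} = \mathbf{V}\mathbf{S}$, i.e. the left-singular vectors scaled by the singular values.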

In [24]:
# Your code goes here
# Since we are dealing with the n x d matrices
# the right singular vectors correspond to PC directions
# and the left singular vectors correspond to PC scores
x_3d_centered = x_3d - mu_est
v_x, s_x, u_x_T = np.linalg.svd(x_3d_centered, full_matrices=False)
u_x = u_x_T.T # Recover U from its transpose that was returned by svd
my_pca_2_directions = u_x 
my_pca_2_scores = v_x * s_x

# Tests
print("Principal components:\n{}".format(my_pca_2_directions)) 
print(solutions_equivalent(pca.components_.T, my_pca_2_directions)) # PC directions
print(solutions_equivalent(pc_scores, my_pca_2_scores)) # PC scores
Principal components:
[[-0.56052625  0.78030784 -0.27736257]
 [-0.38349242 -0.54142509 -0.74819278]
 [ 0.73399174  0.31301525 -0.60272512]]
True
True

Assume now that you want to do dimensionality reduction by using k PCs only. One option would be to follow the same procedure as above and discard the last d-k PC directions. This, however, can be rather wasteful. There exist SVD variations that approximate the matrix $\mathbf{X}$ with a low-rank matrix.

Scipy implements the function svds that computes only the k largest singular values/vectors of a given matrix.

========== Question 11 ==========

Compute and print the first 2 PC directions by applying svds to the centered data matrix.

Hint: as opposed to the numpy function numpy.linalg.svd, svds does not automatically sort the singular values/vectors in singular value descending order. You should do this operation manually.

In [25]:
from scipy.sparse.linalg import svds
# Your code goes here
v_x_k, s_x_k, u_x_k_T = svds(x_3d_centered, k=2, which='LM')
u_x_k = u_x_k_T.T # Recover U from its transpose that was returned by svds
_, u_x_k_sorted = sort_eigvals_eigvec(s_x_k, u_x_k) # Sort by singular value descending order 
my_pca_low_rank_directions = u_x_k_sorted 

# Tests
print("Principal components:\n{}".format(my_pca_low_rank_directions))
print(solutions_equivalent(pca.components_.T[:,:2], my_pca_low_rank_directions)) # PC directions
Principal components:
[[ 0.56052625  0.78030784]
 [ 0.38349242 -0.54142509]
 [-0.73399174  0.31301525]]
True

PCA implementation #3: Gram matrix eigendecomposition

We have seen in the lectures that we can compute the PC directions and scores by performing the eigendecomposition of the Gram matrix $\mathbf{G}$.

The procedure of computing the PC directions and scores based on the eigendecomposition of the Gram matrix is as follows:

  1. Center data.
  2. Compute Gram matrix.
  3. Perform the eigendecomposition of the Gram matrix.
  4. Sort eigenvalues and eigenvectors in decreasing eigenvalue order.
  5. Pick the first k eigenvectors. This is required even if we do not want to perform dimensionality reduction, since the eigendecomposition of the Gram matrix yields n eigenvectors (rather than d).
  6. The PC scores are given by these eigenvectors scaled by $\mathbf{\Sigma} ^ {1/2}$, where $\mathbf{\Sigma}$ is the diagonal matrix containing the eigenvalues of the Gram matrix.
  7. To get the PC directions, we project the centered data matrix onto the eigenvectors and scale by $\mathbf{\Sigma} ^ {-1/2}$.
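In terms of the SVD of the centered data matrix, $\mathbf{X} = \mathbf{V}\mathbf{S}\mathbf{U}^T$, the Gram matrix is $$\mathbf{G} = \mathbf{X}\mathbf{X}^T = \mathbf{V}\mathbf{S}\mathbf{U}^T\mathbf{U}\mathbf{S}^T\mathbf{V}^T = \mathbf{V}\left(\mathbf{S}\mathbf{S}^T\right)\mathbf{V}^T,$$ so its eigenvectors are the left-singular vectors $\mathbf{v}_i$ and its non-zero eigenvalues are $\sigma_i = s_i^2$. Scaling the first $k$ eigenvectors by $\mathbf{\Sigma}^{1/2}$ (step 6) gives the PC scores $\mathbf{v}_i s_i$, and projecting the centered data onto an eigenvector (step 7) gives $\mathbf{X}^T\mathbf{v}_i = s_i\mathbf{u}_i$, which after scaling by $\sigma_i^{-1/2} = 1/s_i$ recovers the PC direction $\mathbf{u}_i$.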

========== Question 12 ==========

Compute the PC directions and scores in the dataset x_3d by using the eigendecomposition of the Gram matrix. Double-check that these are correct by using the solutions_equivalent() function.

In [26]:
# Your code goes here
def data_gram(x):
    x = np.asarray(x)
    if x.ndim != 2: # Check that data are 2D
        raise ValueError('Array x must be 2-dimensional.')
    
    N = x.shape[0] # Number of samples
    return np.dot(x, x.T)

n_dim = x_3d.shape[1]
K = data_gram(x_3d_centered)

eigvals_k, eigvecs_k = np.linalg.eigh(K)
eigvals_k_sorted, eigvecs_k_sorted = sort_eigvals_eigvec(eigvals_k, eigvecs_k)

my_pca_3_scores = eigvecs_k_sorted[:,:n_dim] * eigvals_k_sorted[:n_dim]**(0.5)
my_pca_3_directions = np.dot(x_3d_centered.T, eigvecs_k_sorted[:,:n_dim]) * eigvals_k_sorted[:n_dim]**(-0.5)

# Tests
print("Principal components:\n{}".format(my_pca_3_directions))
print(solutions_equivalent(pca.components_.T, my_pca_3_directions))
print(solutions_equivalent(pc_scores, my_pca_3_scores)) # PC scores
Principal components:
[[-0.56052625  0.78030784  0.27736257]
 [-0.38349242 -0.54142509  0.74819278]
 [ 0.73399174  0.31301525  0.60272512]]
True
True

========== Question 13 ==========

Can you think of a particular case where it would be beneficial to compute the PC scores by using the eigendecomposition of the Gram matrix, as opposed to the SVD of the data matrix?

Your answer goes here

One particular case where we would use the eigendecomposition of the Gram matrix is when we only have access to that matrix and not to the data matrix itself. In this case, it is still possible to identify the PC scores and hence do dimensionality reduction, as we will see in Lab 3.

In addition, by working directly with the Gram matrix, we can use the kernel trick and identify non-linear manifolds by using almost the same bit of code. This is exactly what kernel principal component analysis (kPCA) does, but more on this in the next lab.

Dimensionality reduction with PCA

PCA can be used to reduce the dimensionality of a dataset from $d$ to $k$ dimensions.

========== Question 14 ==========

Use any implementation of PCA of your choice to project the dataset x_3d onto 1, 2, and 3 dimensions. This projection should just take the form $$\mathbf{X}_{red} = \mathbf{X} \mathbf{W}_{1:k}$$ where $\mathbf{X}$ is the centered data matrix and $\mathbf{W}$ is the matrix containing the PC directions as columns. The approximation of the data by the $k$ principal components is $$\mathbf{X}_{recon} = \mathbf{X}_{red} \mathbf{W}_{1:k} ^ T + \mathbf{\mu}$$.

Plot the mean-squared-error and explained variance of the approximation as a function of the number of components used. Label axes appropriately. You should make use of the sklearn mean_squared_error() and explained_variance_score() metrics. For explained_variance_score(), you should set the multioutput parameter to variance_weighted.

In [27]:
# Your code goes here
from sklearn.metrics import mean_squared_error, explained_variance_score
ev = []
mse = []
num_components = range(1,4)

for k in num_components:
    x_red = np.dot(x_3d_centered, my_pca_1_directions[:,:k])
    x_recon = np.dot(x_red, my_pca_1_directions[:,:k].T) + mu_est
    mse.append(mean_squared_error(x_3d, x_recon))
    ev.append(explained_variance_score(x_3d, x_recon, multioutput='variance_weighted'))
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for metric, ax, label in zip([ev, mse], [ax1, ax2], ["explained variance", "mean squared error"]):
    ax.scatter(num_components, metric, color='k', s=30)
    ax.plot(num_components, metric, linestyle='--', color='k')
    ax.set_ylabel(label)
ax2.set_xlabel('Number of components')
ax2.set_xticks(num_components)
ax1.set_ylim(top=1.01)
ax2.set_ylim(bottom=-0.01)
plt.show()
In [28]:
# We can also visualise the original data and the approximation with a 3D plot. 
# NB: you were not required to do this.

x_orig = x_3d
x_orig_centered = x_3d - mu_est

n_samples = 500 # Number of data points to plot

from mpl_toolkits.mplot3d import Axes3D
s = 10 # Marker size for scatter plot
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x_orig[:n_samples,0], x_orig[:n_samples,1], x_orig[:n_samples,2], 
           c='b', s=s, marker='o', label='original')
for (k, c, marker,label) in zip(np.arange(1,3), ['k', 'r'], ['^', '*'], ['approximation, 1 PC', 'approximation, 2 PCs']):
    x_proj = np.dot(x_orig_centered, my_pca_1_directions[:,:k])
    x_rec = np.dot(x_proj, my_pca_1_directions[:,:k].T) + mu_est
    ax.scatter(x_rec[:n_samples,0], x_rec[:n_samples,1], x_rec[:n_samples,2], 
               c=c, s=s, marker=marker, label=label)
    
ax.set_xlabel('Dim 1')
ax.set_ylabel('Dim 2')
ax.set_zlabel('Dim 3')
ax.azim = 400
ax.elev = 30 # Add a small elevation
plt.legend()
plt.show()

The black points lie on a line (the 1st PC direction). The red points lie in a 2D plane (spanned by the first two PC directions).

========== Question 15 ==========

How can you compute the variance explained by k principal components by only looking at the eigenvalues of the covariance matrix?

Your answer goes here

The fraction of variance explained by the first $k$ principal components is the sum of the $k$ largest eigenvalues of the covariance matrix divided by the sum of all its eigenvalues, i.e. $\sum_{i=1}^{k}\lambda_i \big/ \sum_{i=1}^{d}\lambda_i$. This is exactly what the cumulative sum of the sorted eigenvalues, normalised by their total, gives below.

In [29]:
# Your code goes here
print("Cumulative sum of eigenvalues:\n{}".format(np.cumsum(eigvals_sorted)/np.sum(eigvals_sorted)))
print("Explained variance:\n{}".format(ev))
Cumulative sum of eigenvalues:
[ 0.92482638  0.99540663  1.        ]
Explained variance:
[0.92482638112554139, 0.99540662978477823, 1.0]

Low-rank SVD approximation for image compression [optional]

In the lectures, we have seen that the SVD allows us to find a low-rank approximation of a data matrix. Here we exemplify this low-rank approximation property of the SVD on an image compression task.

As you might already know, grey-scale images are represented digitally as 2D matrices whose elements correspond to pixel intensities. We approximate this matrix by a low-rank matrix obtained through the SVD. If there are correlations between the pixels in the image (which happens to be the case for natural images), then we should be able to achieve a relatively good reconstruction of the image by using only a few components.
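Concretely, if $\mathbf{A} \in \mathbb{R}^{m \times n}$ is the image matrix with singular values $s_1 \geq s_2 \geq \dots$ and left- and right-singular vectors $\mathbf{u}_i$ and $\mathbf{v}_i$, keeping the $k$ largest singular values gives the rank-$k$ approximation $$\mathbf{A} \approx \mathbf{A}_k = \sum_{i=1}^{k} s_i \mathbf{u}_i \mathbf{v}_i^T,$$ which by the Eckart-Young theorem is the best rank-$k$ approximation in the Frobenius norm. Stored in factored form, $\mathbf{A}_k$ requires only $k(m+n+1)$ numbers instead of $mn$; for the $768 \times 1024$ image below with $k=50$, that is roughly 90,000 numbers instead of about 786,000, i.e. around 11% of the original.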

Let us first load a sample image from the scipy package:

In [30]:
# Load sample image
from scipy.misc import face
img = face(gray=True)
print("Image array dimensionality: {}".format(img.shape))
Image array dimensionality: (768, 1024)

We can visualise the image by using the matplotlib imshow function:

In [31]:
# Show image 
sns.set_style("white")
plt.figure()
plt.imshow(img, cmap=plt.cm.gray)
plt.axis("off")
plt.show()

========== Question 16 [optional] ==========

Write a function image_low_rank_approx() that takes as input an image (i.e. 2-dimensional array) and an integer k and reconstructs the image by using a k-rank SVD approximation.

In [32]:
# Your code goes here
def image_low_rank_approx(img, k):
    """Approximates a two-dimensional array by low-rank SVD.
    
    Parameters
    ----------
    
    img : array, ndim = 2
        Image array.
    
    k : integer
        Rank of approximation.
        
    Returns
    -------
    
    img_recon : array, ndim = 2
        Reconstructed image.
    
    """
    
    if img.ndim != 2:
        raise ValueError("Image array should be two-dimensional.")
    
    u, s, v = np.linalg.svd(img, full_matrices=False)
    img_recon = np.dot(u[:,:k]*s[:k], v[:k,:]) # Low-rank approximation
    
    return img_recon

========== Question 17 [optional] ==========

Perform a low-rank approximation of the image stored in img by using a varying number of ranks (i.e. from 1 to 500) and visualise the approximation. What do you observe?

In [33]:
# Your code goes here
k_range = [1, 5, 10, 20, 50, 100, 200, 500]
f = plt.figure(figsize=(10, 10))
for ii, k in enumerate(k_range):
    recon = image_low_rank_approx(img, k)
    plt.subplot(4,2,ii+1)
    plt.imshow(recon, cmap=plt.cm.gray)
    plt.axis('off')
    plt.title("Approximation with {} singular vector(s)".format(k), fontsize=9)
f.tight_layout()

Your answer goes here.

We observe that a good approximation of the image can be achieved by using as few as 50 singular vectors and values.