[SOLVED] COEN240 Homework 6


Problem 1. You are given a face image database of 10 subjects. Each subject has 10 images of 112 × 92 pixels. Convert each image to a vector of length D = 112 × 92 = 10304.
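As a sketch of this vectorization step (a random array stands in for the actual image files, whose on-disk format is not specified here):

```python
import numpy as np

# Stand-in for the database: 10 subjects x 10 images of 112 x 92 pixels.
images = np.random.rand(10, 10, 112, 92)

D = 112 * 92                       # vector length, D = 10304
X = images.reshape(10 * 10, D)     # one row per image
y = np.repeat(np.arange(10), 10)   # subject label for each row

print(X.shape)  # (100, 10304)
```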

1.1 Apply the principal-component analysis (PCA) method to the data set for face feature extraction. Use different rank values d = 1, 2, 3, 6, 10, 20, and 30 (i.e., find d principal components). Project the face images onto the d principal components and apply the nearest-neighbor classifier in the projection space. Plot the recognition accuracy rate (number of correct classifications / total number of test cases, in %) versus the different d values.

1.2 Use Fisher's Linear Discriminant (FLD) method to find the projection directions that reduce the face image dimension, followed by a nearest-neighbor classifier to perform face recognition. Before applying FLD, first use PCA to reduce the dimensionality of the face images to d0 = 40. The final reduced dimensions of the images are d = 1, 2, 3, 6, 10, 20, 30.

Use the PCA and LinearDiscriminantAnalysis classes from sklearn. Mean subtraction is not needed. The following code snippet is for your reference:

from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

pca0 = PCA(n_components=d0)
pca0_operator = pca0.fit(L)  # L is the training data set; each row is one training image
L0 = pca0_operator.transform(L)  # reduced-dim data: rows are data points

# The input of LDA is the reduced-dim data from PCA:
lda = LinearDiscriminantAnalysis(n_components=d)  # FLD / LDA
lda_operator = lda.fit(filled by yourself)
train_proj_lda = lda_operator.transform(filled by yourself).transpose()  # columns are examples

Run 20 independent experiments. In each experiment, randomly choose 8 images per class to form the training set; the remaining images form the test set. Plot the recognition accuracy rate (number of correct classifications / total number of test cases, in %) versus the different d values for both the PCA and FLD methods on the same figure, using different colors; show the legends for both curves, and show the xlabel and ylabel on the figure. Comment on the results.
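The full protocol can be sketched as below, again with synthetic data standing in for the face database. The arguments passed to `fit`/`transform` are one reasonable way to fill in the blanks, not necessarily the intended solution; note also that scikit-learn caps LDA at n_classes − 1 = 9 components, so d > 9 is clipped for the FLD branch:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10304))   # synthetic stand-in for the 100 face vectors
y = np.repeat(np.arange(10), 10)        # 10 subjects x 10 images
d_values = [1, 2, 3, 6, 10, 20, 30]
d0, n_runs = 40, 20

acc_pca = np.zeros((n_runs, len(d_values)))
acc_fld = np.zeros((n_runs, len(d_values)))

for run in range(n_runs):
    # Randomly choose 8 images per class for training; the rest are test.
    mask = np.zeros(len(y), dtype=bool)
    for c in range(10):
        mask[rng.permutation(np.where(y == c)[0])[:8]] = True
    X_tr, y_tr, X_te, y_te = X[mask], y[mask], X[~mask], y[~mask]

    # Fit PCA once with the largest d; keeping the first d columns of the
    # projection equals fitting PCA(n_components=d), since components are ordered.
    pca = PCA(n_components=max(d_values)).fit(X_tr)
    Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

    # PCA to d0 = 40 dimensions as the input of FLD.
    pca0 = PCA(n_components=d0).fit(X_tr)
    L0_tr, L0_te = pca0.transform(X_tr), pca0.transform(X_te)

    for j, d in enumerate(d_values):
        knn = KNeighborsClassifier(n_neighbors=1)
        knn.fit(Z_tr[:, :d], y_tr)
        acc_pca[run, j] = knn.score(Z_te[:, :d], y_te) * 100

        # LDA components are capped at n_classes - 1 = 9 (an assumption on
        # how to handle d > 9, since the assignment asks for d up to 30).
        lda = LinearDiscriminantAnalysis(n_components=min(d, 9)).fit(L0_tr, y_tr)
        knn = KNeighborsClassifier(n_neighbors=1)
        knn.fit(lda.transform(L0_tr), y_tr)
        acc_fld[run, j] = knn.score(lda.transform(L0_te), y_te) * 100

plt.plot(d_values, acc_pca.mean(axis=0), "b-o", label="PCA")
plt.plot(d_values, acc_fld.mean(axis=0), "r-s", label="PCA + FLD")
plt.xlabel("reduced dimension d")
plt.ylabel("recognition accuracy (%)")
plt.legend()
plt.savefig("accuracy_vs_d.png")
```

Averaging the 20 runs per method before plotting is an assumption about the intended figure; plotting each run separately would also satisfy the prompt.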
