How to use OpenCV for face recognition
A friendly reminder: before reading the code, you should have OpenCV installed and configured, know how to use C++, and have used some OpenCV functions. Basic image processing and matrix knowledge is also needed. [gm: Xiao Ming's comment] Since I am only translating, and my English is limited, there are bound to be wrong or awkward renderings, so please correct me.

1.1. Introduction

Starting with OpenCV 2.4, a new class, FaceRecognizer, has been added, which makes it easy to run face recognition experiments. This article covers both how to use the code and how the algorithms work. (The author's source code can be found under opencv\modules\contrib\doc\facerec\src in the OpenCV tree, and also on his GitHub; it is not complicated, so it is worth a look if you want to study it.)

The currently supported algorithms are:

Eigenfaces: createEigenFaceRecognizer()

Fisherfaces: createFisherFaceRecognizer()

Local Binary Patterns Histograms: createLBPHFaceRecognizer()

The code in all the examples below can be found under samples/cpp in the OpenCV installation directory, and all the code is free for commercial use or learning.

1.2. Face Recognition

Face recognition is easy for humans. The literature [Tu06] tells us that babies as young as three days old can already distinguish the familiar faces around them. So how hard is it for a computer? In fact, we still know very little about why humans can tell people apart. Are the internal features of a face (eyes, nose, mouth) or its external features (head shape, hairline) more important for human recognition? How do we analyze an image, and how does the brain encode it? David Hubel and Torsten Wiesel showed that our brain has specialized nerve cells that respond to specific local features of a scene, such as lines, edges, corners, or motion. Obviously we do not see the world as scattered fragments, so our visual cortex must somehow combine these different sources of information into useful patterns. Automatic face recognition is about extracting meaningful features from an image, putting them into a useful representation, and performing some kind of classification on them.

Face recognition based on the geometric features of a face is probably the most intuitive approach. One of the first automatic face recognition systems is described in [Kanade73]: marker points (the positions of the eyes, ears, nose, etc.) were used to build a feature vector (distances between the points, angles between them, and so on). Recognition was performed by computing the Euclidean distance between the feature vectors of a test image and the training images. Such a method is robust against changes in illumination, but has a huge drawback: accurately locating the marker points is complicated, even with state-of-the-art algorithms. Some more recent work on geometric face recognition is described in [Bru92], where a 22-dimensional feature vector was used on a large database; geometric features alone did not carry enough information for face recognition.

The Eigenfaces method is described in [TP91], which takes a holistic approach to face recognition: a facial image is a point in a high-dimensional image space, and a lower-dimensional representation of it is found in which classification becomes simple. The low-dimensional subspace is found with Principal Component Analysis (PCA), which identifies the axes of maximum variance. While this transformation is optimal from a reconstruction standpoint, it does not take class labels into account. [gm: reading this paragraph requires some machine learning knowledge.] Imagine a situation where the variance comes from an external source, such as lighting. The axes of maximum variance then need not contain any discriminative information at all, so classification becomes impossible. Therefore, a class-specific projection using Linear Discriminant Analysis (LDA) was proposed to solve the face recognition problem [BHK97]. The basic idea is to minimize the within-class variance while maximizing the between-class variance.

In recent years, various local feature extraction methods have emerged. To avoid the high dimensionality of the input image, methods that describe the image using only local regions have been proposed; the extracted features are (hopefully) more robust against partial occlusion, illumination changes, small sample sizes, and so on. Methods used for local feature extraction include Gabor Wavelets ([Wiskott97]), the Discrete Cosine Transform (DCT) ([Messer06]), and Local Binary Patterns (LBP) ([AHP04]). How best to preserve spatial information when applying local feature extraction is still an open research question, because spatial information is potentially useful.
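The basic LBP operator itself is easy to sketch in plain C++ (the helper below is my own illustration, not part of OpenCV): each pixel is compared with its eight neighbors, and the comparison results form an 8-bit code; histograms of these codes over image cells then serve as the local feature.

```cpp
#include <cstdint>
#include <vector>

// Basic 3x3 LBP code for the pixel at (r, c) of a grayscale image stored
// row-major in `img` with the given width. Neighbors are visited clockwise
// starting at the top-left corner; a neighbor >= center contributes a 1 bit.
uint8_t lbp_code(const std::vector<uint8_t>& img, int width, int r, int c) {
    static const int dr[8] = {-1, -1, -1,  0,  1, 1,  1,  0};
    static const int dc[8] = {-1,  0,  1,  1,  1, 0, -1, -1};
    const uint8_t center = img[r * width + c];
    uint8_t code = 0;
    for (int i = 0; i < 8; ++i) {
        const uint8_t neighbor = img[(r + dr[i]) * width + (c + dc[i])];
        code = static_cast<uint8_t>((code << 1) | (neighbor >= center ? 1 : 0));
    }
    return code;
}
```

For example, the 3x3 patch {6,1,9, 4,5,2, 8,3,7} with center 5 has alternating brighter/darker neighbors, giving the code 10101010 (170). Because each bit is a comparison against the local center, the code is invariant to monotonic illumination changes.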

1.3. Face Database

Let's get some data to experiment with first. I don't want to make a toy example here. We are working on face recognition, so we need real face images! You can create your own dataset, or you can get one from here.

The following excerpt (from the Eigenfaces sample) reconstructs a face from an increasing number of eigenfaces:

for(int num_components = 10; num_components < 300; num_components += 15) {

    // Take a slice of the eigenvectors from the model

    Mat evs = Mat(W, Range::all(), Range(0, num_components));

    Mat projection = subspaceProject(evs, mean, images[0].reshape(1, 1));

    Mat reconstruction = subspaceReconstruct(evs, mean, projection);

    // Normalize the result, for display:

    reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));

    // Show or save:

    if(argc == 2) {

        imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);

    } else {

        imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);

    }

}

// If we are not saving to files, show the windows and wait for a key press:

if(argc == 2) {

    waitKey(0);

}

return 0;

}