Face-api.js is a JavaScript API for face detection and face recognition in the browser, implemented on top of the tensorflow.js core API. It implements a series of convolutional neural networks (CNNs) optimized for the web and for mobile devices.
In conversation with InfoQ, Vincent Mühler, creator of face-api.js and face-recognition.js, explained his motivation for creating face-api.js:
Basically, I had this other library, face-recognition.js, which was able to detect faces and perform face recognition with Node.js. At some point, I discovered tensorflow.js and got interested in machine learning in the browser.
Thus, I was curious if it was possible to port existing models for face detection and face recognition to tensorflow.js and it worked quite well.
That's how it all started.
(Image taken from github.com)
For face detection, face-api.js implements the models SSD Mobilenet V1, Tiny Face Detector, and the experimental MTCNN.
SSD (Single Shot Multibox Detector) MobileNet V1 is a model based on MobileNet V1 that aims for high accuracy when detecting face bounding boxes. It computes the location of every face in an image and returns each bounding box together with the probability of the detection.
Tiny Face Detector is a model for real-time face detection which is faster, smaller, and consumes fewer resources than SSD MobileNet V1. The model has been trained on a custom dataset of 14k images labeled with bounding boxes. According to Mühler, clients with limited resources should have no problems using this model.
MTCNN (Multi-task Cascaded Convolutional Neural Networks) is an experimental model that represents an alternative face detector to SSD MobileNet V1 and Tiny Yolo V2, offering many more configuration options.
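As a sketch of how the detector choice surfaces in code, the detection call accepts an options object selecting one of the three models. The snippet below assumes face-api.js is loaded as the global `faceapi` and that the corresponding model weights have already been fetched; the parameter values are illustrative, not tuned recommendations:

```javascript
// Sketch: run face detection with a chosen detector. Assumes `faceapi`
// is available (e.g. via the <script> tag) and the matching model has
// been loaded; parameter values below are illustrative.
async function detectWith(input, detector) {
  if (detector === 'tiny') {
    // Tiny Face Detector: faster and lighter
    return faceapi.detectAllFaces(
      input,
      new faceapi.TinyFaceDetectorOptions({ inputSize: 416, scoreThreshold: 0.5 }));
  }
  if (detector === 'mtcnn') {
    // MTCNN (experimental): exposes more tunables, e.g. minFaceSize
    return faceapi.detectAllFaces(
      input,
      new faceapi.MtcnnOptions({ minFaceSize: 20 }));
  }
  // Default: SSD MobileNet V1, higher accuracy
  return faceapi.detectAllFaces(
    input,
    new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 }));
}
```

Each call resolves to an array of detections, one per face found, carrying the bounding box and score described above.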
For 68-point face landmark detection, there are two lightweight and fast models: face_landmark_68_model, requiring only 350 KB, and face_landmark_68_tiny_model, requiring 80 KB. Both models employ the ideas of depthwise separable convolutions as well as densely connected blocks. The models have been trained on a dataset of ~35k face images labeled with 68 face landmark points.
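In code, landmark detection chains onto a detection task: with the models loaded, something like `await faceapi.detectAllFaces(input).withFaceLandmarks()` returns results whose 68 points follow the widely used iBUG 68-point annotation scheme. The helper below is plain JavaScript (not a face-api.js API) that groups such a point array by facial region; the index ranges are the standard scheme:

```javascript
// Plain-JS helper (not part of face-api.js): group 68 landmark points
// by facial region, per the standard iBUG 68-point index ranges.
function groupLandmarks(points) {
  return {
    jaw: points.slice(0, 17),
    rightEyebrow: points.slice(17, 22),
    leftEyebrow: points.slice(22, 27),
    nose: points.slice(27, 36),
    rightEye: points.slice(36, 42),
    leftEye: points.slice(42, 48),
    mouth: points.slice(48, 68),
  };
}
```

Such grouping is useful when drawing or measuring individual features, e.g. eye regions for blink detection.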
For face recognition, a model based on a ResNet-34-like architecture is provided in face-api.js to compute a face descriptor from any face image. This model is not limited to the set of faces used for training, meaning developers can use it for face recognition of any person. The similarity of two arbitrary faces can be determined by comparing their face descriptors.
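Concretely, the recognition model returns a 128-value descriptor per face, and two descriptors are compared by euclidean distance (face-api.js ships a helper for this, `faceapi.euclideanDistance`). The plain-JS version below is just illustrative; the 0.6 threshold is a commonly used starting point, not a fixed rule:

```javascript
// Illustrative plain-JS equivalent of comparing two face descriptors
// (face-api.js provides faceapi.euclideanDistance for this).
function euclideanDistance(d1, d2) {
  let sum = 0;
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

// A distance below a chosen threshold suggests the descriptors belong
// to the same person; 0.6 is a common starting point to tune from.
function isSamePerson(d1, d2, threshold = 0.6) {
  return euclideanDistance(d1, d2) < threshold;
}
```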
To get started with face-api.js, download the latest build from dist/face-api.js or dist/face-api.min.js and include the script:
<script src="face-api.js"></script>
To load a model, the model files must be provided as assets. After that, assuming the models reside in /models:
await faceapi.loadSsdMobilenetv1Model('/models')
// accordingly for the other models:
// await faceapi.loadTinyFaceDetectorModel('/models')
// await faceapi.loadMtcnnModel('/models')
// await faceapi.loadFaceLandmarkModel('/models')
// await faceapi.loadFaceLandmarkTinyModel('/models')
// await faceapi.loadFaceRecognitionModel('/models')
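Tying the pieces together, a minimal end-to-end sketch might load the detection, landmark, and recognition models and chain the corresponding tasks. The element id below is hypothetical; it stands in for whatever image or video element the page provides:

```javascript
// End-to-end sketch: load models, then detect faces with landmarks and
// descriptors. Assumes face-api.js is included via the <script> tag and
// the model files sit under /models; 'inputImage' is a hypothetical id.
async function describeFaces() {
  await faceapi.loadSsdMobilenetv1Model('/models');
  await faceapi.loadFaceLandmarkModel('/models');
  await faceapi.loadFaceRecognitionModel('/models');

  const input = document.getElementById('inputImage'); // hypothetical <img>

  // One result per detected face: bounding box, 68 landmark points,
  // and a 128-value face descriptor.
  return faceapi
    .detectAllFaces(input)
    .withFaceLandmarks()
    .withFaceDescriptors();
}
```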
For developers who want to run the examples locally, just execute the steps below and browse to http://localhost:3000/:
git clone https://github.com/justadudewhohacks/face-api.js.git
cd face-api.js/examples
npm i
npm start
More information about face-api.js can be found in the GitHub repo. There are also a face recognition tutorial and a face tracking tutorial.