Have you ever tried a real-time face filter app, the kind that lets you apply a dog, cat, or crown filter to a live camera feed?
You have probably seen this functionality in TikTok or Snapchat.
It is, without doubt, a fun way to share live streams or do group calls with your friends.
So, this is going to be an interesting article. Today, I’ll guide you through creating face filters using your favorite programming language, JavaScript. You can then impress your friends by implementing this feature on your own website.
In this tutorial, I will walk you through a step-by-step guide to creating two face filters.
The first one is a little easier to apply because we just need to detect the face and put a mask on it. The “Cat Ears and Nose Filter”, on the other hand, requires additional processing: we need to track the positions of the user’s eyes and nose separately to place the cat ears and nose accurately.
We will apply these filters to a live webcam feed, which means you can see the results in real time, right inside a web browser.
First of all, this project relies heavily on face detection. There are many open-source libraries that can detect a face in a video or webcam stream, and practically any of them would do. For the sake of this tutorial, I’ll use a very simple JavaScript library known as clmtrackr.
Similarly, I will also use p5.js because it is packed with many useful functions that will reduce the complexity of this project and speed up its development.
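To give you an idea of what’s coming, here is the core of the clmtrackr API we will rely on; the same calls appear in the full code later in this article (videoElement stands for whatever webcam video element you hand to the tracker):

// create a tracker and start it on a <video> element
var tracker = new clm.tracker();
tracker.init();
tracker.start(videoElement);

// on every frame, ask for the tracked face points
// (returns false while no face is detected yet)
var positions = tracker.getCurrentPosition();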
Apart from these libraries, you should have a basic understanding of JavaScript to follow along with this tutorial.
Finally, a recent version of Node.js is needed to serve the project locally.
We need a folder to organize the source code files, images, and dependencies. Let’s get started by creating the project’s root folder called javascript_face_filters.
Here’s an overview of the final folder structure.
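For reference, here is a sketch of the layout we are aiming for; the image and dependency names match the ones referenced later in the code:

javascript_face_filters/
├── dependencies/
│   ├── clmtrackr-1.1.2/
│   └── p5.min.js
├── images/
│   ├── ironman_mask.png
│   ├── cat_ear_right.png
│   ├── cat_ear_left.png
│   └── cat_nose.png
├── javascript/
│   └── face_filters.js
└── index.html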
As you can see, we are using four images in this tutorial. One image is for the Ironman face filter, whereas the other three will be used in the cat face filter. We just organized them in a sub-folder called images.
After that, a JavaScript file face_filters.js is added in a javascript sub-folder. This file will hold the main functionality of our face filters.
A folder called dependencies will be used to hold external packages. In our case, we are using clmtrackr and p5.js.
Finally, index.html is the main file that contains the front-end of this project: it displays the webcam output and places the filters on top of it.
Both clmtrackr and p5.js are open-source libraries, which means you can download them from GitHub for free.
Let’s start by downloading the latest release of clmtrackr, which is 1.1.2 at the time of writing this article. You will get a .zip file that you have to extract inside the dependencies folder.
In the case of the p5.js library, you don’t have to download its full source code. Rather, we only need the p5.min.js file from it. Just place this file inside the dependencies folder and we are good to go.
It’s time to create an index.html file that will hold the frontend. Here, we will define an HTML document structure and link all the dependencies. A little bit of styling is also done to make it look nicer.
A point to note is that, for now, we will not add any UI-related elements (e.g. a canvas) to this file. Instead, we will create them with p5.js because it provides more flexibility.
Here are the contents of the index.html file.
<!DOCTYPE html>
<html>
  <head>
    <title>JavaScript Face Filters</title>
    <style>
      body {
        margin: 0;
        background-color: #d1779e;
      }
    </style>
  </head>
  <body>
    <script src="dependencies/p5.min.js"></script>
    <script src="dependencies/clmtrackr-1.1.2/examples/js/libs/utils.js"></script>
    <script src="dependencies/clmtrackr-1.1.2/build/clmtrackr.min.js"></script>
    <script src="javascript/face_filters.js"></script>
  </body>
</html>
In this section, we will create the face_filters.js file. Basically, this file is responsible for defining all the functionality related to the face filters.
First of all, we will create some variables. These variables allow us to store important information that can be used later in different functions.
var canvasWidth = 800;
var canvasHeight = 600;

var faceTracker; // Face Tracking
var videoInput;

var imgIronmanMask; // Ironman Mask Filter
var imgCatEarRight, imgCatEarLeft, imgCatNose; // Cat Face Filter

var selected = -1; // By default, no filter will be selected
Now, there is a function in the p5.js library that automatically runs at the start. This function is known as preload(). Most of the time, we use it to load external files like images, fonts, JSON, etc.
So, this is the most suitable place to load our images that will be used in filters.
function preload() {
  // Ironman Mask Filter asset
  imgIronmanMask = loadImage("images/ironman_mask.png");

  // Cat Face Filter assets
  imgCatEarRight = loadImage("images/cat_ear_right.png");
  imgCatEarLeft = loadImage("images/cat_ear_left.png");
  imgCatNose = loadImage("images/cat_nose.png");
}
In p5.js, we also have access to another function called setup(). It also runs at the start but only after the preload() has completed its processing.
We’ll use setup() to perform a few tasks: create the canvas, capture the webcam feed, build a drop-down menu for selecting a filter, and initialize the face tracker.
function setup() {
  createCanvas(canvasWidth, canvasHeight);

  // webcam capture
  videoInput = createCapture(VIDEO);
  videoInput.size(canvasWidth, canvasHeight);
  videoInput.hide();

  // select filter using drop-down menu
  var sel = createSelect();
  sel.position(0, 0);

  var selectList = ['Ironman Mask', 'Cat Filter']; // list of filters
  sel.option('Select Face Filter', -1); // Default: no filter
  for (var i = 0; i < selectList.length; i++) {
    sel.option(selectList[i], i);
  }
  sel.changed(applyFilter);

  // face tracker
  faceTracker = new clm.tracker();
  faceTracker.init();
  faceTracker.start(videoInput.elt);
}
We need to update the selected variable whenever the user changes the face filter. This is handled by the applyFilter() callback function that we registered inside setup().
// callback function
function applyFilter() {
  selected = this.selected(); // change filter type
}
Another p5.js function is draw(), which runs in a loop. We use it to continuously render the webcam feed on the screen. You can also see a switch statement in this function; it applies whichever face filter matches the user’s current selection. Note that the case labels are strings because the drop-down menu returns its selected value as a string.
function draw() {
  image(videoInput, 0, 0, canvasWidth, canvasHeight); // render video from webcam

  // apply filter based on user's choice
  switch (selected) {
    case '-1':
      break;
    case '0':
      drawIronmanMask();
      break;
    case '1':
      drawCatFilter();
      break;
  }
}
Finally, we are ready to write the actual code for our face filters.
Let’s try to understand the functionality of the Ironman face filter first. It is very straightforward because clmtrackr gives us the exact location of the face in the webcam feed. We simply use that location and draw the Ironman mask image on top of it.
// Ironman Mask Filter
function drawIronmanMask() {
  var positions = faceTracker.getCurrentPosition();

  if (positions != false) {
    push();
    translate(-154, -240); // offset adjustment
    imgIronmanMask.resize(300, 403);
    image(imgIronmanMask, positions[62][0], positions[62][1]);
    pop();
  }
}
Now, the cat face filter needs some additional processing because we have to locate the eyes and nose within the face. The good news is that clmtrackr can handle that too: it tracks a set of numbered facial landmark points, so we just read the positions of the points near the eyes and the nose and place the cat ears and nose accordingly.
// Cat Face Filter
function drawCatFilter() {
  var positions = faceTracker.getCurrentPosition();

  if (positions != false) {
    for (var i = 0; i < positions.length; i++) {
      // Track right eye to locate right ear
      if (i == 20) {
        push();
        translate(-80, -150); // offset adjustment
        image(imgCatEarRight, positions[i][0], positions[i][1]);
        pop();
      }

      // Track left eye to locate left ear
      if (i == 16) {
        push();
        translate(-20, -150); // offset adjustment
        image(imgCatEarLeft, positions[i][0], positions[i][1]);
        pop();
      }

      // Track nose point
      if (i == 62) {
        push();
        translate(-160, -40); // offset adjustment
        image(imgCatNose, positions[i][0], positions[i][1]);
        pop();
      }
    }
  }
}
Finally, we have to run this project on a web server because browsers block webcam access for pages opened directly from the file system. For now, we can create a localhost server using Node.js. Just open a terminal in the project’s root directory and run this command.
npx http-server
After that, simply access the project by typing http://localhost:8080/ in your browser’s address bar.
Face filters are becoming popular in social media apps because they add a whole new level of entertainment. In this tutorial, we created just two face filters, but you can add more by following the exact same steps.
You can start by changing the images used in these filters and then playing with the offset values passed to translate(). This way, you will be able to add new face filters to your own web projects and impress your friends and family members. A sketch of how a third filter could be wired in is shown below.
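As an illustration, here is a minimal sketch of how a third filter could be added, assuming a hypothetical crown image stored as images/crown.png; the offset values are made up and would need tuning for your own asset:

// 1. Declare the new asset and load it inside the existing preload():
var imgCrown; // hypothetical crown image
//   imgCrown = loadImage("images/crown.png"); // assumed file name

// 2. Add the filter to the drop-down list inside setup():
//   var selectList = ['Ironman Mask', 'Cat Filter', 'Crown'];

// 3. Handle the new option inside the switch statement in draw():
//   case '2':
//     drawCrownFilter();
//     break;

// 4. Draw the crown, anchored to the same face point (index 62) as the Ironman mask
function drawCrownFilter() {
  var positions = faceTracker.getCurrentPosition();

  if (positions != false) {
    push();
    translate(-150, -350); // made-up offsets; adjust for your image
    image(imgCrown, positions[62][0], positions[62][1]);
    pop();
  }
}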