Estimated time to complete: 45 minutes – 1 hour
The use of masks has grown enormously in recent times, and it is more important than ever to make sure people are wearing them properly in order to keep everyone safe.
In this tutorial, we are going to teach you how to train VIA Pixetto to recognize whether a mask is being worn or not. Next, we will connect the VIA Pixetto to an Arduino development board and program it to sound an alarm when a person without a mask is detected.
This tutorial is divided into two parts, each with several steps:
- How to train the VIA Pixetto:
Step 1. Preparation
Step 2. Training our model and generating Python code
Step 3. Editing the Python and more training
Step 4. Installing onto the Pixetto and testing
- How to control the Arduino development board:
Step 5. Setting up the Arduino with VIA Pixetto
Step 6. Programming the Arduino with VIA Pixetto Junior
Before we start training the VIA Pixetto, we need to prepare photos of people wearing masks and people not wearing masks as a training dataset for the AI algorithm. Find and save around 50 to 60 photos for each category (you can search for images on Google). Create two folders named “mask” and “no_mask” and sort the photos into them accordingly. Afterwards, compress each folder into a zipped file so that we can upload them to the machine learning tool.
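The folder layout and zipping step can also be scripted. Here is a minimal sketch using only Python's standard library; the folder names follow the tutorial, and the `dataset` parent directory is an assumption (use whatever location holds your photos):

```python
import shutil
from pathlib import Path

# Expected layout before zipping (the "dataset" parent dir is an assumption):
#   dataset/mask/     <- ~50-60 photos of people wearing masks
#   dataset/no_mask/  <- ~50-60 photos of people without masks
dataset = Path("dataset")
for label in ("mask", "no_mask"):
    (dataset / label).mkdir(parents=True, exist_ok=True)

# Compress each class folder into its own zip file for upload.
for label in ("mask", "no_mask"):
    shutil.make_archive(label, "zip", root_dir=dataset, base_dir=label)
```

Running this produces `mask.zip` and `no_mask.zip` in the current directory, ready to upload.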
To avoid machine learning bias, the folders should contain photos of people of different genders, ages, and ethnicities, including some people wearing glasses, and the images should be taken from different angles. After preparing the photos, we can start the training.
Firstly, go to the machine learning accelerator platform, log into your account, and click on “Machine Learning”.
Next, click on “Upload Image” and upload the compressed “mask” and “no_mask” files we prepared earlier. Finally, where it says “Enter Model Name” enter the name “mask”:
The first image from each folder should then be shown in the top right, as indicated by the blue arrow.
Next, we have to construct a neural network. We can experiment with different network architectures if we want, but the machine learning tool already provides a ready-made network that performs well.
In this implementation, we need to classify two categories: “wearing a mask” and “not wearing a mask”. Therefore, in the final “Output Layer”, we set the number of classes to 2. Furthermore, the block that dictates the input size (shown in purple at the top) needs to be set to the number of pictures in each folder; in our case, that is 60.
We can take a ready-made block from the “Popular combinations”, but we will need to add another two core layers, as shown in the picture above. Make sure the values match exactly (except for the input size values, which should equal the number of photos in each zipped file). Click on a block to modify its values.
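The platform builds the real architecture for you from these blocks, but it may help to see what a two-class classifier of this shape looks like in code. The following is an illustrative Keras sketch only; the layer sizes and the input image shape are assumptions, not the platform's exact output:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative two-class CNN; the platform generates the real architecture.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),         # input image size is an assumption
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # 2 classes: mask / no_mask
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The key point mirrored from the block editor is the final layer: a softmax over exactly 2 outputs, one per class.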
After this is completed, click “Start” in the upper right corner to start the training. At the same time, the system will generate the Python code of the neural network for us.
In this implementation, we will need to directly modify the Python code to adjust the parameters.
Return to the homepage and click “Python”. We should see the “mask.py” file, which contains the Python neural network code generated in the previous step. We are going to modify some parameters according to the architecture of this network, and then do some more training.
We need to select all of the code in the “mask.py” file and copy it. The file should look like this:
After copying the code, go back to the previous page, create a new “notebook” file, and paste the code.
Here, we can modify the parameters of the neural network. Since the amount of training data we prepared is small, we can reduce the “batch_size” and increase the “epochs” (the two variables circled in the image below). Here we have used 32 and 20 respectively, but you can experiment with these values later to try to improve the accuracy.
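To see what these two values mean in practice: with roughly 120 training images (60 per class, as prepared earlier), a batch_size of 32 and 20 epochs determine how many gradient updates the network receives. This is pure arithmetic, no framework needed:

```python
import math

num_images = 120   # 60 "mask" + 60 "no_mask" photos
batch_size = 32    # smaller batches suit small datasets
epochs = 20        # more full passes over the data

# Each epoch processes the dataset once, in batches of batch_size.
steps_per_epoch = math.ceil(num_images / batch_size)
total_updates = steps_per_epoch * epochs
print(steps_per_epoch, total_updates)  # 4 steps per epoch, 80 updates in total
```

Lowering batch_size or raising epochs both increase the total number of updates, which is why the combination helps when training data is scarce (at the cost of a higher risk of overfitting).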
Finally, click “Run” in the toolbar above to start training our neural network, and wait for it to finish.
After the training is completed, the program will save the result as a “tflite” file. We then need to download this to our computer:
Next, use “Pixetto Utility” from the Pixetto Startup application in order to upload it to the VIA Pixetto vision sensor:
Plug in your VIA Pixetto at this point, if you haven’t already! Don’t forget to remove the lens cover and ensure that the 3 lights are on before proceeding with the next steps.
To install the neural network model we have created, click the “Install Neural Network Model” box at the bottom right corner, select the tflite file we downloaded earlier, and press OK.
To apply facemask recognition to our VIA Pixetto, there are two suitable object detection algorithms that we can use, namely “Central” and “Face Detection”. If we were to experiment with these two algorithms, we would find that they achieve different results. In this implementation, we will use the “Central” algorithm to track human faces. The configuration on the right-hand side should change automatically to the neural network function.
If the recognition accuracy is not satisfactory, we can always go back to the previous step in order to adjust the parameters and retrain (such as batch_size and epochs).
You can adjust the labels that indicate if a mask is on or not via Tools, and then “Index Label Editor”. Here you can attach the corresponding labels to their numbers displayed on the screen.
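The numbers shown on screen are class indices, and which index maps to which class depends on the order the training tool assigned to your folders, so verify it before labelling. A hypothetical mapping is sketched below; the 0/1 assignment is an assumption, not guaranteed by the tool:

```python
# Hypothetical index-to-label mapping; confirm the actual order
# against what your trained model reports for each class.
index_labels = {0: "mask", 1: "no_mask"}

def label_for(class_index):
    """Return the human-readable label for a detected class index."""
    return index_labels.get(class_index, "unknown")

print(label_for(0))  # mask
print(label_for(5))  # unknown
```

In the Index Label Editor you attach exactly this kind of mapping, so that the on-screen numbers become readable labels.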
Now, using the VIA Pixetto, we can easily identify whether people are wearing a mask when they should be.
Next, we will set up an Arduino to sound an alarm when the VIA Pixetto detects a person who is not wearing a mask, and to turn on a green light once the person puts a mask on.
We will need an Arduino board, an expansion board, an LED light and a piezo buzzer.
First, plug the expansion board into the Arduino.
Plug the piezo buzzer into the D4 slot of the expansion board.
Plug the VIA Pixetto into the UART slot of the Arduino.
Insert the LED light into the 13 and GND sockets as follows.
After plugging the components into the Arduino, connect the Arduino to your PC using a Micro-USB to USB 2.0/3.0 cable.
Finally, we program the Arduino using the C programming language in order to control the buzzer and light.
Open the Pixetto Junior app and choose the “Manual Edit” setting.
We will now explain the logic behind the code. Feel free to skip to the final code snippet if you would like to copy and paste it into VIA Pixetto Junior.
First, include the SmartSensor library.
Set pins 0 and 1 to connect the Arduino development board to the VIA Pixetto, and initialize the sensor in setup(). Next, set pin 13 to control the LED and pin 4 to control the piezo buzzer.
In loop(), we confirm whether the VIA Pixetto can detect human faces before proceeding to the next steps.
Whenever the VIA Pixetto detects a person wearing a mask, the LED connected to pin 13 lights up.
If a person who is not wearing a mask is detected, a sound with a frequency of 4000 Hz will be emitted for 0.1 seconds and the LED will be turned off. There is a delay of 1 second between each sound.
The following is the complete code:
Select the Arduino model you are using, as well as the port to which it is connected on your computer.
Upload the program to compile and transfer the C code to the Arduino.
Note: If an error occurs during upload, try the following steps to troubleshoot:
- Remove the VIA Pixetto from the expansion board, upload the code again, and then reconnect the VIA Pixetto;
- Unplug the Arduino from the PC, restart VIA Pixetto Junior and reconnect the Arduino to the PC.
And we are done! We have successfully used the VIA Pixetto with an Arduino to create a facemask detector. Have fun with this project and don’t forget to share your own creations with us on Instagram, Facebook, and Twitter at #VIAPixetto!