Documentation for the Mobile App
General documentation: diagrams, tasks and roles
Documentation for the FotoFaces API and algorithms
React Native
Home Page
Login
Register
Photo Upload
Update Photo
Take photo from camera roll
Choose photo from gallery
Start Screen
Login Screen
Register Screen
Main Screen for Validation
Main Screen with properties
Photo Accept Screen
Winking
Rotating the head
Smiling
The Live Detection algorithm is used when a user chooses to take a new photo on the main screen or the register screen. The camera option takes the user to an interface, based on a project by Osama Qarem, where they have to fit their face inside a frame and complete a few verification steps, such as winking an eye.
We use an Expo package called FaceDetector, which relies on the Google Mobile Vision framework to detect faces in images and returns an array with information about each face, e.g. the coordinates of the center of the nose, the winking probability, etc. FaceDetector is usually used together with the Camera package, also from Expo, where we can configure the detection properties, such as the minimum detection interval, which defines how often a new array of face properties should be returned. By analysing that array we can tell whether the user is smiling by checking the value of the 'smilingProbability' key: the higher the value, the more likely the user is smiling.
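The smiling check described above can be sketched as follows. FaceDetector itself runs inside the React Native app; this is an illustrative sketch in Python, where the `face` dict mimics one entry of the array FaceDetector returns, and the 0.7 threshold is an assumption for illustration, not the app's actual cutoff.

```python
# Hypothetical threshold: values of smilingProbability closer to 1
# mean the user is more likely smiling.
SMILE_THRESHOLD = 0.7

def is_smiling(face):
    # face mimics one entry of the FaceDetector results array;
    # a missing key is treated as "not smiling".
    return face.get("smilingProbability", 0.0) >= SMILE_THRESHOLD

# Example face entry, with a second probability field as FaceDetector
# also reports per-eye open/wink probabilities.
face = {"smilingProbability": 0.93, "leftEyeOpenProbability": 0.98}
print(is_smiling(face))
```

The same pattern applies to the winking step: compare the relevant probability field against a chosen threshold each time a new detection array arrives.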
Team communication
Advisor-Team discussion
Project Backlog management
Promotional Website
Mobile App building
Code repository
Website building
Team communication
id.ua.pt already offers services that include software similar to the one we are going to build
Database Creation
Database Update
Database Get
Database Endpoints
HTTP Message Example
Flask
OpenCV
Dlib
Each plugin folder (such as Gaze) contains a Python file with the algorithm code, plus helper functions if needed, and a .yaml configuration file.
The main application gathers all the plugins and runs them consecutively until there are none left to execute.
It then collects each algorithm's result and converts them into a single JSON message.
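The run-and-gather loop can be sketched as below. This is a minimal sketch, not the actual FotoFaces code: real plugins are discovered from their folders (one Python file plus a .yaml configuration each), whereas here two placeholder plugins are inlined as callables, and their names and return values are hypothetical.

```python
import json

# Placeholder plugins; in the real application each of these would be
# loaded from its own folder alongside a .yaml configuration file.
def gaze(image):
    # stand-in for the Gaze plugin's algorithm
    return True

def smile(image):
    # stand-in for a plugin that returns a score
    return 0.42

PLUGINS = {"Gaze": gaze, "Smile": smile}

def run_all(image):
    # Run every plugin consecutively until there are none left,
    # gathering each algorithm's result by plugin name.
    results = {name: plugin(image) for name, plugin in PLUGINS.items()}
    # Convert all results into a single JSON message.
    return json.dumps(results)

print(run_all(None))
```

Keying the results by plugin name keeps the final JSON message self-describing, so the caller can look up each algorithm's outcome without knowing the execution order.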
Each test targets a specific algorithm
They were made to measure how long the algorithms take to execute, in both the old and the new FotoFaces, as well as how well they work
Same person = True
Different people = False
Returns a shape
Returns true or false, will return true
Returns true or false, will return true to the left and true to the right
Returns true or false, will return true to the left and true to the right
Returns a value between 50 and 100, the higher the better
Returns a value above 0, the lower the better
Returns values above 0, the lower the better
Returns true or false, will return true to the left and true to the right
Returns a value above 0, the higher the better
Returns values between 0.10 and 0.50, the higher the better
Returns a cropped image