Py-FEAT
Py-FEAT is a suite for facial expressions (FEX) research written in Python. This package includes tools to detect faces, extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks, from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.
For detailed examples, tutorials, and the API reference, please see the Py-FEAT website.
Installation
Option 1: Easy installation for quick use
pip install py-feat
Option 2: Installation in development mode
git clone https://github.com/cosanlab/feat.git
cd feat && pip install -e .
Usage examples
1. Detect FEX data from images or videos
Py-FEAT is intended for use in a Jupyter Notebook or JupyterLab environment. In a notebook cell, you can run the following to detect faces, facial landmarks, action units, and emotional expressions from images or videos. On first execution, the default model files are downloaded automatically. You can also swap in different detectors from the list of supported models.
from feat.detector import Detector
detector = Detector()
# Detect FEX from video
out = detector.detect_video("input.mp4")
# Detect FEX from image
out = detector.detect_image("input.png")
2. Visualize FEX data
Visualize FEX detection results.
from feat.detector import Detector
detector = Detector()
out = detector.detect_image("input.png")
out.plot_detections()
3. Preprocessing & analyzing FEX data
We provide a number of preprocessing and analysis functionalities, including baselining, feature extraction (e.g., timeseries descriptors and wavelet decompositions), prediction, regression, and intersubject correlation. See examples in our tutorial.
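To illustrate what baselining means in this context (this is a hand-rolled sketch, not Py-FEAT's actual API), a common approach subtracts each recording's mean activation during a neutral baseline window from the rest of the timeseries:

```python
# Illustrative baselining sketch (not Py-FEAT's API): subtract the mean of a
# baseline window from every frame of an action unit activation timeseries.

def baseline_correct(series, baseline_frames):
    """Subtract the mean of the first `baseline_frames` values from all values."""
    baseline = sum(series[:baseline_frames]) / baseline_frames
    return [value - baseline for value in series]

# Hypothetical AU12 (lip corner puller) activations over 6 frames, with the
# first 3 frames treated as a neutral baseline period.
au12 = [0.10, 0.12, 0.08, 0.50, 0.60, 0.55]
corrected = baseline_correct(au12, baseline_frames=3)
```

After correction, the baseline window averages to zero, so later values reflect change relative to the neutral period rather than absolute activation.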
Supported Models
Please respect the usage licenses for each model.
Face detection models
Facial landmark detection models
Action Unit detection models
- FEAT-Random Forest
- FEAT-SVM
- FEAT-Logistic
- DRML: Deep Region and Multi-Label Learning
- JAANet: Joint AU Detection and Face Alignment via Adaptive Attention
Emotion detection models
- FEAT-Random Forest
- FEAT-Logistic
- FerNet
- ResMaskNet: Residual Masking Network
Head pose estimation models
- img2pose
- Perspective-n-Point (PnP) algorithm to solve 3D head pose from 2D facial landmarks (via cv2)
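The PnP approach inverts the pinhole-camera projection: given 3D landmark positions on a canonical face model and their detected 2D image positions, it solves for the head rotation and translation that best align them (in OpenCV, via cv2.solvePnP). A minimal sketch of the forward model being inverted, using only the standard library; the camera parameters and landmark coordinate below are hypothetical:

```python
import math

def rotation_yaw(yaw):
    """3x3 rotation matrix for a rotation of `yaw` radians about the y-axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def project(point3d, rotation, translation, focal_length, cx, cy):
    """Pinhole projection: rotate and translate a 3D point, then project to 2D."""
    x = sum(rotation[0][i] * point3d[i] for i in range(3)) + translation[0]
    y = sum(rotation[1][i] * point3d[i] for i in range(3)) + translation[1]
    z = sum(rotation[2][i] * point3d[i] for i in range(3)) + translation[2]
    return (focal_length * x / z + cx, focal_length * y / z + cy)

# Hypothetical nose-tip coordinate in a canonical 3D face model (millimeters),
# seen by a camera with a 500 px focal length and principal point (320, 240),
# with the head yawed 15 degrees and 600 mm from the camera.
R = rotation_yaw(math.radians(15))
u, v = project((0.0, 0.0, 40.0), R, (0.0, 0.0, 600.0), 500.0, 320.0, 240.0)
```

PnP runs this mapping in reverse: it searches for the rotation and translation that make the projected model landmarks match the detected 2D landmarks.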