# multimodal-emotion-recognition

Here are 19 public repositories matching this topic...

This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. It is the official implementation of the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild."

  • Updated Sep 16, 2024
  • Python

This API uses a pre-trained model for emotion recognition from audio files. It accepts an audio file as input, processes it with the pre-trained model, and returns the predicted emotion along with a confidence score. The API is built with the FastAPI framework for easy development and deployment.

  • Updated Apr 23, 2024
  • Python
