IEEE ICME 2024 Grand Challenges PAIR Competition (2/2 start)

Challenge Title: IEEE ICME 2024 Grand Challenges PAIR Competition

Registration URL: https://aidea-web.tw/icme2024

Competition Start Date: 02/02/2024

Challenge Description:

Facial-landmark detection has made remarkable progress in computer vision and is increasingly important in applications such as augmented reality, facial recognition, and emotion analysis. While object detection identifies objects within images and semantic segmentation outlines object boundaries at the pixel level, facial-landmark detection aims to accurately pinpoint and track critical facial features.

Nevertheless, the intricacy of facial features, particularly in dynamic settings, combined with the substantial computational demands of deep-learning-based algorithms, makes it challenging to deploy these models on embedded systems with limited computational capability. In addition, the diversity of facial features across ethnicities and expressions makes it difficult to build a universally robust model. For example, the nuances in facial features and expressions within Asian populations, such as those in Taiwan, may not be well represented in existing open datasets, which predominantly focus on Western demographics.

In this competition, we invite participants to build a single lightweight yet powerful deep learning model for facial-landmark detection. The model should accurately locate key facial landmarks under a wide range of conditions, including diverse expressions, orientations, and lighting environments. The goal is a model that is suitable for deployment on embedded systems while maintaining high accuracy and real-time performance.

The challenge centers on developing a single model that can pinpoint a range of facial landmarks with high precision, including subtle variations in critical facial regions such as the eyes, nose, mouth, and jawline. Alongside accuracy, the emphasis is on low power consumption, streamlined processing, and real-time performance, particularly on MediaTek’s Dimensity Series platform.

The MediaTek platform, with heterogeneous computing capabilities spanning CPUs, GPUs, and AI Processing Units (APUs), offers high performance and energy efficiency, making it a strong foundation for AI-driven facial-landmark detection applications. Participants may manually target these processing units or use MediaTek’s NeuroPilot SDK for intelligent processing allocation.
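
Since the final evaluation runs on embedded hardware and a special award covers INT8 models (see the Special Awards section below), a quantized model is a natural fit. The following is a minimal sketch of full-integer post-training quantization using the standard TensorFlow Lite converter; the model path, input shape, and representative data are placeholder assumptions, and whether NeuroPilot consumes this exact artifact should be confirmed against the official competition tooling.

    # Minimal sketch: full-integer (INT8) post-training quantization with the
    # TensorFlow Lite converter. Paths, input shape, and the representative
    # dataset below are placeholders, not official competition artifacts.
    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Yield preprocessed face crops matching the model's input shape;
        # random data is used here purely as a stand-in.
        for _ in range(100):
            yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("landmark_model/")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("landmark_model_int8.tflite", "wb") as f:
        f.write(converter.convert())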

Participants are expected to demonstrate their model’s ability to detect multiple facial landmarks concurrently, with both precision and efficiency, in a resource-constrained environment.

Given the test image dataset, participants must use a single model to perform facial-landmark detection. The model must identify and locate 51 specific facial landmarks in each image. The landmarks correspond to salient features on the face, which are critical for applications such as identity verification, emotion recognition, and augmented reality. The model’s output should include:

  • A set of coordinates for each of the 51 landmarks on the face.
  • A confidence score for the detection of each landmark, indicating the model’s certainty.

The landmarks to be detected cover areas such as the eye contours, eyebrows, nose, and mouth. Participants must ensure that their model is robust and can handle variations in facial expressions, orientations, and lighting conditions. The precise detection of these facial points is crucial for the success of the model in real-world applications.

Participants will submit their results as a TXT file for each test image, where each row corresponds to a landmark and includes the landmark’s ID, the x and y coordinates, and the confidence score. The TXT file should be named according to the convention image_name_landmarks.txt. Accuracy will be assessed based on the mean error across all landmarks and images, normalized by the inter-ocular distance to account for different face sizes and positions within the images.
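
For illustration, here is a minimal sketch of writing a results file in the stated format and computing the accuracy metric described above. The helper names and the landmark indices used for the inter-ocular distance are assumptions for this example, not official competition definitions.

    # Minimal sketch: one results TXT per image (rows of "ID x y confidence")
    # and the normalized mean error (NME). Landmark indexing is assumed to be
    # 0-based; the official specification may differ.
    import numpy as np

    NUM_LANDMARKS = 51

    def write_landmarks_txt(image_name, coords, scores, out_dir="."):
        # File name follows the stated convention: image_name_landmarks.txt.
        with open(f"{out_dir}/{image_name}_landmarks.txt", "w") as f:
            for i in range(NUM_LANDMARKS):
                x, y = coords[i]
                f.write(f"{i} {x:.2f} {y:.2f} {scores[i]:.4f}\n")

    def normalized_mean_error(pred, gt, left_eye_idx, right_eye_idx):
        # pred, gt: arrays of shape (num_images, NUM_LANDMARKS, 2).
        # left_eye_idx, right_eye_idx: landmarks measuring the inter-ocular
        # distance (hypothetical; use the indices defined by the organizers).
        per_point = np.linalg.norm(pred - gt, axis=-1)            # (N, 51)
        inter_ocular = np.linalg.norm(
            gt[:, left_eye_idx] - gt[:, right_eye_idx], axis=-1)  # (N,)
        return (per_point.mean(axis=-1) / inter_ocular).mean()

A row such as "0 123.45 210.11 0.9876" would then encode landmark 0 at image coordinates (123.45, 210.11), detected with confidence 0.9876.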


This competition includes two stages: qualification and final competition.

  • Qualification competition: all participants submit their answers online, and a score is calculated. The top 15 teams will qualify for the final round of the competition.
  • Final competition: the final score will be evaluated on a new MediaTek Dimensity Series platform.

Regular Awards

Based on each team’s score in the final evaluation, the top three teams will receive regular awards.

  1. Champion:        USD 3,000
  2. 1st runner-up:   USD 2,000
  3. 3rd place:       USD 1,400

Special Awards

  1. Best INT8 Model Development Award: USD 600
    • Best overall score in the final competition using INT8 model development
  2. Best Detection Model Award: USD 600
    • Best overall detection model

All award winners must agree to submit a contest paper and attend the IEEE ICME 2024 Grand Challenge PAIR Competition Special Session to present their work. If the paper is not submitted, or the submitted paper is shorter than 3 pages, the award will be cancelled.

Deadlines for Submission (UTC):

  DATE                      EVENT
  2/2/2024                  Qualification Competition Start Date
  3/3/2024                  Date to Release Public Testing Data
  3/17/2024 12:00 PM UTC    Qualification Competition End Date
  3/18/2024 12:00 AM UTC    Finalist Announcement
  3/18/2024                 Final Competition Start Date
  3/25/2024                 Date to Release Private Testing Data for Final
  4/1/2024 12:00 PM UTC     Final Competition End Date
  4/1/2024                  Invited Paper Submission Deadline
  4/3/2024 12:00 PM UTC     Award Announcement
  5/7/2024                  Camera-Ready Submission Deadline

Host Organization:

  • Pervasive Artificial Intelligence Research (PAIR) Labs, National Yang Ming Chiao Tung University (NYCU), Taiwan. Website: https://pairlabs.ai/

Academic Partner:

  • Intelligent Vision System (IVS) Lab, National Yang Ming Chiao Tung University (NYCU), Taiwan. Website: http://ivs.ee.nctu.edu.tw/ivs/
  • AI System (AIS) Lab, National Cheng Kung University (NCKU), Taiwan

Industry Partner: