Embedding multimodal machine intelligence in the digital life of AI technology

Introduction

Practical Verification

For robust processing of speech signals, 8 kHz audio files are converted into high-quality 16 kHz audio files, and anti-noise techniques are developed on top of this conversion. The speech technology is applied to real-time mode conversion, as well as to personal stress detection based on speech and physiological signals collected in the field. These technologies will be verified in practice through industry-university cooperation; in particular, the robustness of the speech technology will be validated in customer service settings.
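As a minimal sketch of the 8 kHz to 16 kHz conversion step, the snippet below resamples a narrowband recording with the librosa and soundfile libraries; the helper name, file names, and toolchain are illustrative assumptions, since the project's actual conversion and anti-noise methods are not detailed here.

    import librosa
    import soundfile as sf

    def upsample_8k_to_16k(in_path: str, out_path: str) -> None:
        # Hypothetical helper: load at the native 8 kHz rate, then
        # resample to the 16 kHz rate expected by downstream models.
        audio, sr = librosa.load(in_path, sr=8000)
        audio_16k = librosa.resample(audio, orig_sr=sr, target_sr=16000)
        sf.write(out_path, audio_16k, 16000)

    upsample_8k_to_16k("call_8k.wav", "call_16k.wav")

Note that plain resampling only changes the sample rate; recovering the missing 4-8 kHz band, as a high-quality conversion implies, would additionally require a bandwidth extension model.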

Trustworthy AI

We collaborate internationally with the team that established MSP-PODCAST to construct a large-scale Chinese emotion corpus. All construction details, applied technologies, and their versions are recorded, including data analysis, voice activity detection (VAD), speaker recognition, de-noising, automatic detection of speechless segments, emotion retrieval, and annotation strategies. Because the collection process preserves every record accumulated in the database, the fairness and reliability of the speech emotion corpus construction can be examined from multiple perspectives, supporting trustworthy emotion recognition.
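A minimal sketch of how such construction records could be kept, assuming a simple JSON provenance log, is shown below; the step names, tools, and version strings are illustrative, not the project's actual choices.

    import json
    import time
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ProvenanceLog:
        # Accumulates one entry per corpus-construction step so that the
        # applied technology, its version, and its settings can be audited.
        corpus: str
        steps: list = field(default_factory=list)

        def record(self, step: str, tool: str, version: str, params: dict) -> None:
            self.steps.append({
                "step": step,
                "tool": tool,
                "version": version,
                "params": params,
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            })

        def save(self, path: str) -> None:
            with open(path, "w", encoding="utf-8") as f:
                json.dump(asdict(self), f, ensure_ascii=False, indent=2)

    log = ProvenanceLog(corpus="chinese-emotion-corpus")
    log.record("vad", tool="webrtcvad", version="2.0.10", params={"mode": 3})
    log.record("denoise", tool="rnnoise", version="0.2", params={})
    log.save("provenance.json")

Keeping the log alongside the corpus lets later audits tie each annotation back to the exact tool versions that produced it.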

Coping with the humanities, legal, and governance impacts on sharing data and AI models

The relevant issues extend beyond database collection procedures. Data collection policies and procedures have been established, and the collected public data sources comply with the terms of the Creative Commons (CC) licenses. Depending on the data sources and data types, the relevant laws, regulations, and orders, as well as the authors' authorizations and contracts, are taken into consideration, and a standardized form has been formulated to ensure that the project team processes or uses the data on a legal and authorized basis.
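The standardized form itself is not reproduced here; as a minimal sketch with illustrative field names, one record of such a form might capture the following.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DataSourceRecord:
        # One row of a hypothetical clearance form: where the data came
        # from, what it is, and on what legal basis it may be used.
        source_url: str
        data_type: str                  # e.g. "podcast audio"
        license: str                    # e.g. "CC BY-NC 4.0"
        author_authorization: bool
        contract_reference: Optional[str] = None

        def is_usable(self) -> bool:
            # Usable only when a CC license, explicit author authorization,
            # or a signed contract covers the intended processing.
            return (self.license.startswith("CC")
                    or self.author_authorization
                    or self.contract_reference is not None)

    record = DataSourceRecord(
        source_url="https://example.org/podcast/episode-1",
        data_type="podcast audio",
        license="CC BY-NC 4.0",
        author_authorization=False,
    )
    assert record.is_usable()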


Keywords: Emotional Corpus, Fairness Algorithm, Speech Emotion Recognition
Research Project: Advanced Technologies for Designing Trustable AI Services
Principal Investigator: Hsin-Hsi Chen
Co-Principal Investigator: Chi-Chun Lee