Yelin Kim
Title
Cited by
Year
Deep Learning for Robust Feature Generation in Audiovisual Emotion Recognition
Y Kim, H Lee, EM Provost
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2013
Cited by 479, 2013
Towards Emotionally-Aware AI Smart Classroom: Current Issues and Directions for Engineering and Education
Y Kim, T Soyata, RF Behnagh
IEEE Access 6, 5308-5331, 2018
Cited by 187, 2018
Emotion Classification via Utterance-Level Dynamics: A Pattern-Based Approach to Characterizing Affective Expressions
Y Kim, EM Provost
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2013
Cited by 94, 2013
ISLA: Temporal segmentation and labeling for audio-visual emotion recognition
Y Kim, EM Provost
IEEE Transactions on Affective Computing 10 (2), 196-208, 2017
Cited by 41, 2017
Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions
Y Kim, E Mower Provost
ACM International Conference on Multimodal Interaction (ACM ICMI), 2016
Cited by 32, 2016
Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition
Y Kim, EM Provost
Proceedings of the ACM International Conference on Multimedia (ACM MM'14), 2014
Cited by 32, 2014
Human-Like Emotion Recognition: Multi-Label Learning from Noisy Labeled Audio-Visual Expressive Speech
Y Kim, J Kim
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2018
Cited by 31, 2018
Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face
Y Kim, E Mower Provost
ACM Transactions on Multimedia Computing, Communications, and Applications …, 2015
Cited by 30, 2015
Leveraging Inter-rater Agreement for Audio-Visual Emotion Recognition
Y Kim, E Mower Provost
Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 2015
Cited by 17, 2015
Systems and Methods For Analyzing Time Series Data Based on Event Transition
J Chen, P Tu, MC Chang, Y Kim, S Lyu
US Patent 14/702,229, 2015
Cited by 17, 2015
Modeling Transition Patterns Between Events for Temporal Human Action Segmentation and Classification
Y Kim, J Chen, MC Chang, X Wang, E Mower Provost, S Lyu
IEEE International Conference on Automatic Face and Gesture Recognition (FG …, 2015
Cited by 16, 2015
Audio-Visual Emotion Forecasting: Characterizing and Predicting Future Emotion Using Deep Learning
S Shahriar, Y Kim
IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2019
Cited by 14, 2019
Speech Sentiment and Customer Satisfaction Estimation in Socialbot Conversations
Y Kim, J Levy, Y Liu
Interspeech, 2020
Cited by 12, 2020
Wild Wild Emotion: A Multimodal Ensemble Approach
J Gideon, B Zhang, Z Aldeneh, Y Kim, S Khorram, D Le, E Mower Provost
ACM International Conference on Multimodal Interaction (ACM ICMI), 2016
Cited by 10, 2016
Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-To-End Approaches
E Albadawy, Y Kim
ACM International Conference on Multimodal Interaction (ACM ICMI), 2018
Cited by 9, 2018
Face Tells Detailed Expression: Generating Comprehensive Facial Expression Sentence through Facial Action Units
J Hong, HJ Lee, Y Kim, YM Ro
26th International Conference on Multimodal Modeling (MMM), 2020
Cited by 8, 2020
Acted vs. Improvised: Domain Adaptation for Elicitation Approaches in Audio-Visual Emotion Recognition
H Li, Y Kim, CH Kuo, S Narayanan
Interspeech, 2021
Cited by 7, 2021
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition
Y Kim
2015 International Conference on Affective Computing and Intelligent …, 2015
Cited by 7, 2015
Multimodal Sentiment Detection
Y Kim, Y Liu, D Hakkani-tur, T Nelson, A Chen Santos, J Levy, S Gupta
US Patent 11,501,794, 2022
Cited by 6, 2022
End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic Interactions
K El Haddad, Y Kim, H Çakmak, P Lin, J Kim, M Lee, Y Zhao
Cited by 5, 2017
Articles 1–20