Yelin Kim
Title · Cited by · Year
Deep Learning for Robust Feature Generation in Audiovisual Emotion Recognition
Y Kim, H Lee, EM Provost
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2013
294 · 2013
Emotion Classification via Utterance-Level Dynamics: A Pattern-Based Approach to Characterizing Affective Expressions
Y Kim, EM Provost
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2013
55 · 2013
Towards Emotionally-Aware AI Smart Classroom: Current Issues and Directions for Engineering and Education
Y Kim, T Soyata, RF Behnagh
IEEE Access 6, 5308-5331, 2018
40 · 2018
Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition
Y Kim, EM Provost
Proceedings of the ACM International Conference on Multimedia (ACM MM'14), 2014
25 · 2014
Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face
Y Kim, E Mower Provost
ACM Transactions on Multimedia Computing, Communications, and Applications …, 2015
20 · 2015
Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions
Y Kim, E Mower Provost
ACM International Conference on Multimodal Interaction (ACM ICMI), 2016
17 · 2016
Modeling Transition Patterns Between Events for Temporal Human Action Segmentation and Classification
Y Kim, J Chen, MC Chang, X Wang, E Mower Provost, S Lyu
IEEE International Conference on Automatic Face and Gesture Recognition (FG) …, 2015
14 · 2015
Leveraging Inter-rater Agreement for Audio-Visual Emotion Recognition
Y Kim, E Mower Provost
Affective Computing and Intelligent Interaction (ACII), Xi'an, China, 2015
12 · 2015
Human-Like Emotion Recognition: Multi-Label Learning from Noisy Labeled Audio-Visual Expressive Speech
Y Kim, J Kim
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2018
6 · 2018
Systems and Methods For Analyzing Time Series Data Based on Event Transition
J Chen, P Tu, MC Chang, Y Kim, S Lyu
US Patent 14/702,229, 2015
6 · 2015
ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion Recognition
Y Kim, EM Provost
IEEE Transactions on Affective Computing, 2017
5 · 2017
Wild Wild Emotion: A Multimodal Ensemble Approach
J Gideon, B Zhang, Z Aldeneh, Y Kim, S Khorram, D Le, E Mower Provost
ACM International Conference on Multimodal Interaction (ACM ICMI), 2016
5 · 2016
End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic Interactions
K El Haddad, Y Kim, H Çakmak, P Lin, J Kim, M Lee, Y Zhao
2 · 2017
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition
Y Kim
Affective Computing and Intelligent Interaction (ACII), 2015 International …, 2015
2 · 2015
Audio-Visual Emotion Forecasting: Characterizing and Predicting Future Emotion Using Deep Learning
S Shahriar, Y Kim
IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2019
1 · 2019
Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-To-End Approaches
E Albadawy, Y Kim
ACM International Conference on Multimodal Interaction (ACM ICMI), 2018
1 · 2018
Face Tells Detailed Expression: Generating Comprehensive Facial Expression Sentence through Facial Action Units
J Hong, HJ Lee, Y Kim, YM Ro
26th International Conference on Multimodal Modeling (MMM), 2020
2020
Towards Emotion Recognition with Automatic Social and Relational Context Discovery in HRI Systems
J Parent, Y Kim
The AAAI Fall Symposium Series: Artificial Intelligence for Human-Robot …, 2017
2017
Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior
Y Kim
Ph.D. Thesis, University of Michigan, 2016
2016