MFAS: Multimodal fusion architecture search JM Pérez-Rúa, V Vielzeuf, S Pateux, M Baccouche, F Jurie Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019 | 228 | 2019
Temporal multimodal fusion for video emotion classification in the wild V Vielzeuf, S Pateux, F Jurie Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017 | 208 | 2017
CentralNet: a multilayer approach for multimodal fusion V Vielzeuf, A Lechervy, S Pateux, F Jurie Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018 | 204 | 2018
Efficient Conformer: Progressive downsampling and grouped attention for automatic speech recognition M Burchi, V Vielzeuf 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 8-15, 2021 | 76 | 2021
An Occam's razor view on learning audiovisual emotion recognition with small training sets V Vielzeuf, C Kervadec, S Pateux, A Lechervy, F Jurie Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018 | 65 | 2018
Artificial intelligence to evaluate postoperative pain based on facial expression recognition D Fontaine, V Vielzeuf, P Genestier, P Limeux, S Santucci‐Sivilotto, ... European Journal of Pain 26 (6), 1282-1291, 2022 | 34 | 2022 |
CAKE: Compact and accurate k-dimensional representation of emotion C Kervadec, V Vielzeuf, S Pateux, A Lechervy, F Jurie arXiv preprint arXiv:1807.11215, 2018 | 34 | 2018
Multilevel sensor fusion with deep learning V Vielzeuf, A Lechervy, S Pateux, F Jurie IEEE Sensors Letters 3 (1), 1-4, 2018 | 33 | 2018
Are E2E ASR models ready for an industrial usage? V Vielzeuf, G Antipov arXiv preprint arXiv:2112.12572, 2021 | 9 | 2021 |
The many variations of emotion V Vielzeuf, C Kervadec, S Pateux, F Jurie 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), 2019 | 9 | 2019
OLISIA: a Cascade System for Spoken Dialogue State Tracking L Jacqmin, L Druart, Y Estève, B Favre, LM Rojas-Barahona, V Vielzeuf arXiv preprint arXiv:2304.11073, 2023 | 5 | 2023 |
Towards a general model of knowledge for facial analysis by multi-source transfer learning V Vielzeuf, A Lechervy, S Pateux, F Jurie arXiv preprint arXiv:1911.03222, 2019 | 5 | 2019 |
The many moods of emotion V Vielzeuf, C Kervadec, S Pateux, F Jurie arXiv preprint arXiv:1810.13197, 2018 | 5 | 2018 |
Towards efficient self-supervised representation learning in speech processing L Lugo, V Vielzeuf Findings of the Association for Computational Linguistics: EACL 2024, 340-346, 2024 | 2 | 2024 |
Efficiency-oriented approaches for self-supervised speech representation learning L Lugo, V Vielzeuf International Journal of Speech Technology, 1-15, 2024 | 1 | 2024 |
Sustainable self-supervised learning for speech representations L Lugo, V Vielzeuf arXiv preprint arXiv:2406.07696, 2024 | 1 | 2024 |
Investigating Low-Cost LLM Annotation for Spoken Dialogue Understanding Datasets L Druart, V Vielzeuf, Y Estève International Conference on Text, Speech, and Dialogue, 199-209, 2024 | | 2024 |
Investigating the 'Autoencoder Behavior' in Speech Self-Supervised Models: a focus on HuBERT's Pretraining V Vielzeuf arXiv preprint arXiv:2405.08402, 2024 | | 2024
Are cascade dialogue state tracking models speaking out of turn in spoken dialogues? L Druart, L Jacqmin, B Favre, LM Rojas-Barahona, V Vielzeuf arXiv preprint arXiv:2311.04922, 2023 | | 2023 |
Is one brick enough to break the wall of spoken dialogue state tracking? L Druart, V Vielzeuf, Y Estève arXiv preprint arXiv:2311.04923, 2023 | | 2023 |