Valentin Vielzeuf
Orange Labs
Verified email at orange.com - Homepage
Title · Cited by · Year
Mfas: Multimodal fusion architecture search
JM Pérez-Rúa, V Vielzeuf, S Pateux, M Baccouche, F Jurie
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Cited by 217 · 2019
Temporal multimodal fusion for video emotion classification in the wild
V Vielzeuf, S Pateux, F Jurie
Proceedings of the 19th ACM International Conference on Multimodal …, 2017
Cited by 204 · 2017
Centralnet: a multilayer approach for multimodal fusion
V Vielzeuf, A Lechervy, S Pateux, F Jurie
Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018
Cited by 188 · 2018
Efficient conformer: Progressive downsampling and grouped attention for automatic speech recognition
M Burchi, V Vielzeuf
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 8-15, 2021
Cited by 65 · 2021
An Occam's razor view on learning audiovisual emotion recognition with small training sets
V Vielzeuf, C Kervadec, S Pateux, A Lechervy, F Jurie
Proceedings of the 20th ACM International Conference on Multimodal …, 2018
Cited by 65 · 2018
Cake: Compact and accurate k-dimensional representation of emotion
C Kervadec, V Vielzeuf, S Pateux, A Lechervy, F Jurie
arXiv preprint arXiv:1807.11215, 2018
Cited by 33 · 2018
Multilevel sensor fusion with deep learning
V Vielzeuf, A Lechervy, S Pateux, F Jurie
IEEE Sensors Letters 3 (1), 1-4, 2018
Cited by 31 · 2018
Artificial intelligence to evaluate postoperative pain based on facial expression recognition
D Fontaine, V Vielzeuf, P Genestier, P Limeux, S Santucci‐Sivilotto, ...
European Journal of Pain 26 (6), 1282-1291, 2022
Cited by 30 · 2022
The many variations of emotion
V Vielzeuf, C Kervadec, S Pateux, F Jurie
2019 14th IEEE International Conference on Automatic Face & Gesture …, 2019
Cited by 9 · 2019
Are E2E ASR models ready for an industrial usage?
V Vielzeuf, G Antipov
arXiv preprint arXiv:2112.12572, 2021
Cited by 8 · 2021
Towards a general model of knowledge for facial analysis by multi-source transfer learning
V Vielzeuf, A Lechervy, S Pateux, F Jurie
arXiv preprint arXiv:1911.03222, 2019
Cited by 5 · 2019
The many moods of emotion
V Vielzeuf, C Kervadec, S Pateux, F Jurie
arXiv preprint arXiv:1810.13197, 2018
Cited by 5 · 2018
OLISIA: a Cascade System for Spoken Dialogue State Tracking
L Jacqmin, L Druart, Y Estève, B Favre, LM Rojas-Barahona, V Vielzeuf
arXiv preprint arXiv:2304.11073, 2023
Cited by 4 · 2023
Towards efficient self-supervised representation learning in speech processing
L Lugo, V Vielzeuf
Findings of the Association for Computational Linguistics: EACL 2024, 340-346, 2024
Cited by 1 · 2024
Efficiency-oriented approaches for self-supervised speech representation learning
L Lugo, V Vielzeuf
arXiv preprint arXiv:2312.11142, 2023
Cited by 1 · 2023
Investigating Low-Cost LLM Annotation for Spoken Dialogue Understanding Datasets
L Druart, V Vielzeuf, Y Estève
arXiv preprint arXiv:2406.13269, 2024
2024
Sustainable self-supervised learning for speech representations
L Lugo, V Vielzeuf
arXiv preprint arXiv:2406.07696, 2024
2024
Investigating the 'Autoencoder Behavior' in Speech Self-Supervised Models: a focus on HuBERT's Pretraining
V Vielzeuf
arXiv preprint arXiv:2405.08402, 2024
2024
Are cascade dialogue state tracking models speaking out of turn in spoken dialogues?
L Druart, L Jacqmin, B Favre, LM Rojas-Barahona, V Vielzeuf
arXiv preprint arXiv:2311.04922, 2023
2023
Is one brick enough to break the wall of spoken dialogue state tracking?
L Druart, V Vielzeuf, Y Estève
arXiv preprint arXiv:2311.04923, 2023
2023
Articles 1–20