Mehdi Rezagholizadeh
Principal Research Scientist, Noah's Ark Lab, Huawei Technologies
Verified email at mail.mcgill.ca
Title · Cited by · Year
EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing
Y Dong, Z Li, M Rezagholizadeh, JCK Cheung
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
186 · 2019
A computer tracking system of solar dish with two-axis degree freedoms based on picture processing of bar shadow
H Arbab, B Jazi, M Rezagholizadeh
Renewable Energy 34 (4), 1114-1118, 2009
118 · 2009
ALP-KD: Attention-based layer projection for knowledge distillation
P Passban, Y Wu, M Rezagholizadeh, Q Liu
Proceedings of the AAAI Conference on artificial intelligence 35 (15), 13657 …, 2021
116 · 2021
Fully quantized transformer for machine translation
G Prato, E Charlaix, M Rezagholizadeh
arXiv preprint arXiv:1910.10485, 2019
102* · 2019
DyLoRA: Parameter efficient tuning of pre-trained models using dynamic search-free low-rank adaptation
M Valipour, M Rezagholizadeh, I Kobyzev, A Ghodsi
arXiv preprint arXiv:2210.07558, 2022
98 · 2022
Semi-supervised regression with generative adversarial networks for end to end learning in autonomous driving
M Rezagholizadeh, MA Haidar
83* · 2018
TextKD-GAN: Text generation using knowledge distillation and generative adversarial networks
MA Haidar, M Rezagholizadeh
Canadian Conference on Artificial Intelligence, 107-118, 2019
79 · 2019
Making a MIRACL: Multilingual information retrieval across a continuum of languages
X Zhang, N Thakur, O Ogundepo, E Kamalloo, D Alfonso-Hermelo, X Li, ...
arXiv preprint arXiv:2210.09984, 2022
77* · 2022
KronA: Parameter efficient tuning with Kronecker adapter
A Edalati, M Tahaei, I Kobyzev, VP Nia, JJ Clark, M Rezagholizadeh
arXiv preprint arXiv:2212.10650, 2022
68 · 2022
Annealing knowledge distillation
A Jafari, M Rezagholizadeh, P Sharma, A Ghodsi
arXiv preprint arXiv:2104.07163, 2021
68 · 2021
Systems and methods for multilingual text generation
M Rezagholizadeh, MA Haidar, A Do-Omri, A Rashid
US Patent 11,151,334, 2021
47* · 2021
Context-aware adversarial training for name regularity bias in named entity recognition
A Ghaddar, P Langlais, A Rashid, M Rezagholizadeh
Transactions of the Association for Computational Linguistics 9, 586-604, 2021
43 · 2021
MATE-KD: Masked adversarial text, a companion to knowledge distillation
A Rashid, V Lioutas, M Rezagholizadeh
arXiv preprint arXiv:2105.05912, 2021
35 · 2021
A simplified fully quantized transformer for end-to-end speech recognition
A Bie, B Venkitesh, J Monteiro, MA Haidar, M Rezagholizadeh
arXiv preprint arXiv:1911.03604, 2019
34* · 2019
Latent code and text-based generative adversarial networks for soft-text generation
M Haidar, M Rezagholizadeh, A Do-Omri, A Rashid
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
32* · 2019
End-to-end self-debiasing framework for robust NLU training
A Ghaddar, P Langlais, M Rezagholizadeh, A Rashid
arXiv preprint arXiv:2109.02071, 2021
30 · 2021
Why skip if you can combine: A simple knowledge distillation technique for intermediate layers
Y Wu, P Passban, M Rezagholizadeh, Q Liu
arXiv preprint arXiv:2010.03034, 2020
30 · 2020
KroneckerBERT: Significant compression of pre-trained language models through Kronecker decomposition and knowledge distillation
M Tahaei, E Charlaix, V Nia, A Ghodsi, M Rezagholizadeh
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
27* · 2022
Kronecker decomposition for GPT compression
A Edalati, M Tahaei, A Rashid, VP Nia, JJ Clark, M Rezagholizadeh
arXiv preprint arXiv:2110.08152, 2021
27 · 2021
Revisiting pre-trained language models and their evaluation for Arabic natural language understanding
A Ghaddar, Y Wu, S Bagga, A Rashid, K Bibi, M Rezagholizadeh, C Xing, ...
arXiv preprint arXiv:2205.10687, 2022
25* · 2022
Articles 1–20