SuperLoss: A Generic Loss for Robust Curriculum Learning. T. Castells, P. Weinzaepfel, J. Revaud. Advances in Neural Information Processing Systems 33, 4308-4319, 2020. Cited by 72.
On Architectural Compression of Text-to-Image Diffusion Models. B.K. Kim, H.K. Song, T. Castells, S. Choi. arXiv preprint arXiv:2305.15798, 2023. Cited by 23.
BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation. B.K. Kim, H.K. Song, T. Castells, S. Choi. Workshop on Efficient Systems for Foundation Models @ ICML 2023, 2023. Cited by 9.
Automatic Neural Network Pruning That Efficiently Preserves the Model Accuracy. T. Castells, S.-K. Yeom. arXiv preprint arXiv:2111.09635, 2021. Cited by 4.
SuperLoss: A Generic Loss for Robust Curriculum Learning. P. Weinzaepfel, J. Revaud, T. Castells. US Patent App. 17/383,860, 2022. Cited by 3.
Shortened LLaMA: A Simple Depth Pruning for Large Language Models. B.K. Kim, G. Kim, T.H. Kim, T. Castells, S. Choi, J. Shin, H.K. Song. arXiv preprint arXiv:2402.02834, 2024. Cited by 1.
Method and Apparatus for Information Flow Based Automatic Neural Network Compression That Preserves the Model Accuracy. S.-K. Yeom, T. Castells. US Patent App. 18/056,644, 2023.
Supplementary Material for SuperLoss: A Generic Loss for Robust Curriculum Learning. T. Castells, P. Weinzaepfel, J. Revaud.