| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework | Q Wu, G Bansal, J Zhang, Y Wu, S Zhang, E Zhu, B Li, L Jiang, X Zhang, ... | COLM 2024 | 945* | 2023 |
| MathChat: Converse to tackle challenging math problems with LLM agents | Y Wu, F Jia, S Zhang, H Li, E Zhu, Y Wang, YT Lee, R Peng, Q Wu, ... | ICLR 2024@LLMAgents | 78* | 2023 |
| Targeted hyperparameter optimization with lexicographic preferences over multiple objectives | S Zhang, F Jia, C Wang, Q Wu | ICLR 2023 (Oral) | 31 | 2022 |
| Training Language Model Agents without Modifying Language Models | S Zhang, J Zhang, J Liu, L Song, C Wang, R Krishna, Q Wu | ICML 2024 | 29* | 2024 |
| IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models | S Zhang, X Xia, Z Wang, LH Chen, J Liu, Q Wu, T Liu | ICLR 2024 | 28 | 2023 |
| You only compress once: Towards effective and elastic BERT compression via exploit-explore stochastic nature gradient | S Zhang, X Zheng, C Yang, Y Li, Y Wang, F Chao, M Wang, S Li, J Yang, ... | Neurocomputing | 28 | 2021 |
| DDPNAS: Efficient neural architecture search via dynamic distribution pruning | X Zheng, C Yang, S Zhang, Y Wang, B Zhang, Y Wu, Y Wu, L Shao, R Ji | IJCV 2023 | 27 | 2023 |
| StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows | Y Wu, T Yue, S Zhang, C Wang, Q Wu | COLM 2024 | 18 | 2024 |
| Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints | X Xia, J Liu, S Zhang, Q Wu, H Wei, T Liu | ICML 2024 (Spotlight) | 14* | 2024 |
| HyperTime: Hyperparameter optimization for combating temporal distribution shifts | S Zhang, Y Wu, Z Zheng, Q Wu, C Wang | ACM MM 2024 | 11 | 2023 |
| Adaptive in-conversation team building for language model agents | L Song, J Liu, J Zhang, S Zhang, A Luo, S Wang, Q Wu, C Wang | SoCal NLP Symposium 2024 | 9 | 2024 |
| EcoAct: Economic Agent Determines When to Register What Action | S Zhang, J Zhang, D Ding, MH Garcia, A Mallick, D Madrigal, M Xia, ... | ICLR 2025@Reasoning and Planning for Large Language Models | 2 | 2024 |