Jiuhai Chen
Verified email at umd.edu
Title
Cited by
Year
From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning
M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao
arXiv preprint arXiv:2308.12032, 2023
Cited by 33, 2023
InstructZero: Efficient instruction optimization for black-box large language models
L Chen*, J Chen*, T Goldstein, H Huang, T Zhou
arXiv preprint arXiv:2306.03082, 2023
Cited by 31, 2023
When do you need chain-of-thought prompting for ChatGPT?
J Chen, L Chen, H Huang, T Zhou
arXiv preprint arXiv:2304.03262, 2023
Cited by 30, 2023
A closer look at distribution shifts and out-of-distribution generalization on graphs
M Ding*, K Kong*, J Chen*, J Kirchenbauer, M Goldblum, D Wipf, ...
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
Cited by 30*, 2021
Gaussian process assisted active learning of physical laws
J Chen, L Kang, G Lin
Technometrics 63 (3), 329-342, 2021
Cited by 20, 2021
Quantifying uncertainty in answers from any language model via intrinsic and extrinsic confidence assessment
J Chen, J Mueller
arXiv preprint arXiv:2308.16175, 2023
Cited by 17*, 2023
Particle-based energetic variational inference
Y Wang, J Chen, C Liu, L Kang
Statistics and Computing 31, 1-17, 2021
Cited by 17, 2021
Why propagate alone? Parallel use of labels and features on graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
2020
Cited by 17*, 2020
GOAT: A Global Transformer on Large-scale Graphs
K Kong, J Chen, J Kirchenbauer, R Ni, CB Bruss, T Goldstein
International Conference on Machine Learning 2023, 2023
Cited by 16, 2023
How Many Demonstrations Do You Need for In-context Learning?
J Chen, L Chen, C Zhu, T Zhou
Empirical Methods in Natural Language Processing 2023, 2023
Cited by 15*, 2023
Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features
J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf
International Conference on Learning Representations (ICLR) 2022, 2021
Cited by 14, 2021
Reflection-tuning: Recycling data for better instruction-tuning
M Li, L Chen, J Chen, S He, T Zhou
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
Cited by 6*, 2023
Why propagate alone? parallel use of labels and features on graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
arXiv preprint arXiv:2110.07190, 2021
Cited by 6, 2021
Convergent boosted smoothing for modeling graph data with tabular node features
J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf
International Conference on Learning Representations (ICLR) 2022, 2021
Cited by 5, 2021
Understanding the role of self-supervised learning in out-of-distribution detection task
J Chen, C Zhu, B Dai
arXiv preprint arXiv:2110.13435, 2021
Cited by 4, 2021
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer
L Chen, J Chen, H Huang, M Cheng
Empirical Methods in Natural Language Processing 2023, 2023
Cited by 3, 2023
ODIN: Disentangled Reward Mitigates Hacking in RLHF
L Chen, C Zhu, D Soselia, J Chen, T Zhou, T Goldstein, H Huang, ...
arXiv preprint arXiv:2402.07319, 2024
Cited by 2, 2024
Automated Data Curation for Robust Language Model Fine-Tuning
J Chen, J Mueller
arXiv preprint arXiv:2403.12776, 2024
Cited by 1, 2024
Can LLMs speak for diverse people? Tuning LLMs via debate to generate controllable controversial statements
M Li, J Chen, L Chen, T Zhou
arXiv preprint arXiv:2402.10614, 2024
Cited by 1, 2024