N. Lu, S. Liu, R. He, K. Tang. "Large language models can be guided to evade AI-generated text detection." arXiv preprint arXiv:2305.10847, 2023. (Cited by 18)
R. He, S. Liu, S. He, K. Tang. "Multi-domain active learning: Literature review and comparative study." IEEE Transactions on Emerging Topics in Computational Intelligence 7(3 …, 2022. (Cited by 17*)
J. Wu, W. Fan, S. Liu, Q. Liu, R. He, Q. Li, K. Tang. "Dataset condensation for recommendation." arXiv preprint arXiv:2310.01038, 2023. (Cited by 3)
R. He, Z. Dai, S. He, K. Tang. "Perturbation-Based Two-Stage Multi-Domain Active Learning." Proceedings of the 32nd ACM International Conference on Information and …, 2023. (Cited by 2)
R. He, S. Liu, J. Wu, S. He, K. Tang. "Multi-Domain Learning From Insufficient Annotations." arXiv preprint arXiv:2305.02757, 2023. (Cited by 2)