Jing Xu
Meta AI Research (FAIR)
Verified email at meta.com
Recipes for building an open-domain chatbot
S Roller
arXiv preprint arXiv:2004.13637, 2020
Cited by 1051
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
K Shuster, J Xu, M Komeili, D Ju, EM Smith, S Roller, M Ung, M Chen, ...
arXiv preprint arXiv:2208.03188, 2022
Cited by 250
Beyond goldfish memory: Long-term open-domain conversation
J Xu
arXiv preprint arXiv:2107.07567, 2021
Cited by 235
Chain-of-verification reduces hallucination in large language models
S Dhuliawala, M Komeili, J Xu, R Raileanu, X Li, A Celikyilmaz, J Weston
arXiv preprint arXiv:2309.11495, 2023
Cited by 190
Recipes for safety in open-domain chatbots
J Xu, D Ju, M Li, YL Boureau, J Weston, E Dinan
arXiv preprint arXiv:2010.07079, 2020
Cited by 179
Self-rewarding language models
W Yuan, RY Pang, K Cho, S Sukhbaatar, J Xu, J Weston
arXiv preprint arXiv:2401.10020, 2024
Cited by 156
Bot-adversarial dialogue for safe conversational agents
J Xu, D Ju, M Li, YL Boureau, J Weston, E Dinan
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by 122
Learning new skills after deployment: Improving open-domain internet-driven dialogue with human feedback
J Xu, M Ung, M Komeili, K Arora, YL Boureau, J Weston
arXiv preprint arXiv:2208.03270, 2022
Cited by 35
Some things are more cringe than others: Preference optimization with the pairwise cringe loss
J Xu, A Lee, S Sukhbaatar, J Weston
arXiv preprint arXiv:2312.16682, 2023
Cited by 33
SaferDialogues: Taking feedback gracefully after conversational safety failures
M Ung, J Xu, YL Boureau
arXiv preprint arXiv:2110.07518, 2021
Cited by 33
The CRINGE loss: Learning what language not to model
L Adolphs, T Gao, J Xu, K Shuster, S Sukhbaatar, J Weston
arXiv preprint arXiv:2211.05826, 2022
Cited by 26
On anytime learning at macroscale
L Caccia, J Xu, M Ott, M Ranzato, L Denoyer
Conference on Lifelong Learning Agents, 165-182, 2022
Cited by 22
When life gives you lemons, make cherryade: Converting feedback from bad responses into good labels
W Shi, E Dinan, K Shuster, J Weston, J Xu
arXiv preprint arXiv:2210.15893, 2022
Cited by 15
Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls
D Ju, J Xu, YL Boureau, J Weston
arXiv preprint arXiv:2208.03295, 2022
Cited by 14
Meta-rewarding language models: Self-improving alignment with LLM-as-a-Meta-Judge
T Wu, W Yuan, O Golovneva, J Xu, Y Tian, J Jiao, J Weston, S Sukhbaatar
arXiv preprint arXiv:2407.19594, 2024
Cited by 9
Housing choices, sorting, and the distribution of educational benefits under deferred acceptance
J Xu
Journal of Public Economic Theory 21 (3), 558-595, 2019
Cited by 9
Training models to generate, recognize, and reframe unhelpful thoughts
M Maddela, M Ung, J Xu, A Madotto, H Foran, YL Boureau
arXiv preprint arXiv:2307.02768, 2023
Cited by 7
Improving open language models by learning from organic interactions
J Xu, D Ju, J Lane, M Komeili, EM Smith, M Ung, M Behrooz, W Ngan, ...
arXiv preprint arXiv:2306.04707, 2023
Cited by 5
Distilling System 2 into System 1
P Yu, J Xu, J Weston, I Kulikov
arXiv preprint arXiv:2407.06023, 2024
Cited by 4