Authors
Alexandros Papangelis, Yi-Chia Wang, Piero Molino, Gokhan Tur
Publication date
2019/7/11
Journal
arXiv preprint arXiv:1907.05507
Description
We present the first complete attempt at concurrently training conversational agents that communicate only via self-generated language. Using DSTC2 as seed data, we trained natural language understanding (NLU) and generation (NLG) networks for each agent and let the agents interact online. We model the interaction as a stochastic collaborative game in which each agent (player) has a role ("assistant", "tourist", "eater", etc.) and its own objectives, and can interact only via the natural language it generates. Each agent therefore needs to learn to operate optimally in an environment with multiple sources of uncertainty (its own NLU and NLG, and the other agent's NLU, policy, and NLG). In our evaluation, we show that the stochastic-game agents outperform deep-learning-based supervised baselines.
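To make the setup concrete, the sketch below illustrates the interaction loop the abstract describes: each agent owns its NLU, dialogue policy, and NLG, and the only channel between agents is generated text, so every turn passes through the speaker's NLG and the listener's NLU. This is a minimal illustration, not the authors' implementation; the names (Agent, run_dialogue, the stub NLU/policy/NLG callables) are hypothetical, and the real components are the DSTC2-trained networks from the paper.

class Agent:
    """One player in the stochastic collaborative game.

    Hypothetical interface: nlu maps text to an interpreted act,
    policy maps dialogue state to an act, nlg maps an act to text.
    """

    def __init__(self, role, nlu, policy, nlg):
        self.role = role      # e.g. "assistant" or "tourist"
        self.nlu = nlu
        self.policy = policy
        self.nlg = nlg
        self.state = {}

    def respond(self, partner_utterance):
        # The agent never observes the partner's true dialogue act,
        # only its own (possibly wrong) interpretation of the text;
        # this is one of the sources of uncertainty named above.
        understood = self.nlu(partner_utterance)
        self.state["last_input"] = understood
        act = self.policy(self.state)
        return self.nlg(act)

def run_dialogue(agent_a, agent_b, max_turns=10):
    """Let two agents interact purely via generated language."""
    utterance = agent_a.respond("")  # agent_a opens the dialogue
    transcript = [(agent_a.role, utterance)]
    for _ in range(max_turns - 1):
        agent_a, agent_b = agent_b, agent_a
        utterance = agent_a.respond(utterance)
        transcript.append((agent_a.role, utterance))
    return transcript

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real agents would
    # plug in the trained NLU, policy, and NLG networks.
    toy_nlu = lambda text: {"act": "inform", "raw": text}
    toy_policy = lambda state: "request(food)"
    toy_nlg = lambda act: f"[{act}]"
    tourist = Agent("tourist", toy_nlu, toy_policy, toy_nlg)
    assistant = Agent("assistant", toy_nlu, toy_policy, toy_nlg)
    for role, line in run_dialogue(tourist, assistant, max_turns=4):
        print(f"{role}: {line}")

Note how the loop makes the learning problem stochastic from each agent's point of view: an agent's reward depends on text produced by its own NLG and decoded by the partner's NLU, neither of which it controls or observes directly.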
Total citations
[Citations-per-year chart, 2019–2024]
Scholar articles
A Papangelis, YC Wang, P Molino, G Tur - arXiv preprint arXiv:1907.05507, 2019