Authors
Douglas Summers-Stay, Taylor Cassidy, Clare Voss
Publication date
2014/8
Conference
Proceedings of the Third Workshop on Vision and Language
Pages
9-16
Description
The prospect of human commanders teaming with mobile robots “smart enough” to undertake joint exploratory tasks—especially tasks that neither commander nor robot could perform alone—requires novel methods of preparing and testing human-robot teams for these ventures prior to real-time operations. In this paper, we report work-in-progress that maintains face validity of selected configurations of resources and people, as would be available in emergency circumstances. More specifically, from an off-site post, we ask human commanders (C) to perform an exploratory task in collaboration with a remotely located human robot-navigator (Rn) who controls the navigation of, but cannot see, the physical robot (R). We impose network bandwidth restrictions in two mission scenarios comparable to real circumstances by varying the availability of sensor, image, and video signals to Rn, in effect limiting the human Rn to function as an automation stand-in. To better understand the capabilities and language required in such configurations, we constructed multi-modal corpora of time-synced dialog, video, and LIDAR files recorded during task sessions. We can now examine commander/robot dialogs while replaying what C and Rn saw, to assess their task performance under these varied conditions.