Generating instructions in virtual environments by learning from human-human corpora
Instruction giving in dynamic virtual environments has applications in e-learning, gaming, pedestrian navigation, and marketing. In this talk I present a novel model for generating instructions from human-human corpora without manual annotation. Our setting is restricted: only the virtual instructor can talk, while the human following the instructions responds solely by acting in the virtual world. We evaluate our model with human users, measuring both task success and user satisfaction, and compare the results against both human instructors and rule-based virtual instructors hand-coded for the same task.