Code for 'Chasing Ghosts: Instruction Following as Bayesian State Tracking' published at NeurIPS 2019
Code and utilities for creating a Vision-and-Language Navigation (VLN) simulator environment from a physical space.
[ECCV 2022] Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation
Code for ORAR Agent for Vision and Language Navigation on Touchdown and map2seq
Codebase of ACL 2023 Findings "Aerial Vision-and-Dialog Navigation"
Contrastive-VisionVAE-Follower is a model for the multi-modal task of Vision-and-Language Navigation (VLN).
Reading list for research topics in embodied vision
Official repository of "Mind the Error! Detection and Localization of Instruction Errors in Vision-and-Language Navigation". We present the first dataset, R2R-IE-CE, to benchmark instruction errors in VLN, and propose a method, IEDL.