Indoor localization is a primary task for social robots. We are particularly interested in solving this problem for a mobile robot using primarily vision sensors. This work examines a critical issue in generalizing approaches from static environments to dynamic ones: (i) it considers how to deal with dynamic users in the environment who obscure landmarks that are key to safe navigation, and (ii) it considers how standard localization approaches for static environments can be augmented to handle dynamic agents (e.g., humans). We propose an approach that integrates wheel odometry with stereo visual odometry and performs a global pose refinement to overcome errors previously accumulated by the visual and wheel odometry. We evaluate our approach through a series of controlled experiments to see how localization performance varies with an increasing number of dynamic agents present in the scene.
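The paper's pipeline is not reproduced here, but the general idea of dead reckoning plus periodic global correction can be sketched as follows. This is a minimal illustrative example, not the authors' method: the fusion weight `w_visual`, the blending factor `alpha`, and the simple weighted averaging are all assumptions chosen for clarity.

```python
import math

def compose(pose, delta):
    """Apply a relative motion (dx, dy, dtheta) in the robot frame to a global 2D pose."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def fuse_increments(wheel, visual, w_visual=0.7):
    """Blend wheel and visual odometry increments; the weight is a free parameter."""
    return tuple(w_visual * v + (1.0 - w_visual) * w
                 for w, v in zip(wheel, visual))

def refine(pose, global_pose, alpha=0.5):
    """Pull the accumulated (drifting) pose toward an independent global estimate."""
    return tuple((1.0 - alpha) * p + alpha * g
                 for p, g in zip(pose, global_pose))

pose = (0.0, 0.0, 0.0)
# One odometry step: wheel and visual estimates of the same motion disagree slightly.
pose = compose(pose, fuse_increments((0.10, 0.0, 0.010), (0.12, 0.0, 0.012)))
# Periodic global refinement corrects drift accumulated by dead reckoning.
pose = refine(pose, (0.115, 0.0, 0.011))
```

In practice the global correction would come from matching against a map or known landmarks rather than being supplied directly, and the fusion would typically use a probabilistic filter instead of fixed weights.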
Paper accepted to be published at:
Indoor Localization in Dynamic Human Environments using Visual Odometry and Global Pose Refinement, by Raghavender Sahdev, Bao Xin Chen and John K. Tsotsos. In the 15th Conference on Computer and Robot Vision (CRV 2018), Toronto, Canada, May 9-11, 2018 (accepted, to be published).
* Download paper: click here
* Download Dataset with ground truth evaluation file: click here
* VIDEO FOR THE PROJECT: