View/state planning for three-dimensional object reconstruction under uncertainty

J. Irving Vasquez-Gomez, L. Enrique Sucar, Rafael Murrieta-Cid

Research output: Contribution to journal › Article › peer-review



We propose a holistic approach to three-dimensional (3D) object reconstruction with a mobile manipulator robot equipped with an eye-in-hand sensor, considering both the plan to reach the desired view/state and the uncertainty in observations and controls. This is one of the first methods to determine the next best view/state directly in the state space: a set of candidate views/states is generated in the state space, and this set is then filtered so that only a promising subset is kept. The method also determines the controls that yield a collision-free trajectory to reach a state, using rapidly-exploring random trees. To decrease the processing time, we propose an efficient evaluation strategy based on filters and a 3D visibility calculation with hierarchical ray tracing. The next best view/state is selected based on expected utility, computed by generating samples in the control space from an error distribution that reflects the dynamics of the robot. This makes the method robust to positioning error, significantly reducing the collision rate and increasing coverage, as shown in the experiments. Experiments in simulation and with a real mobile manipulator robot with 8 degrees of freedom show that the proposed method is effective and fast for building 3D models of unknown objects. To our knowledge, this is one of the first works to demonstrate the reconstruction of complex objects with a real mobile manipulator while considering uncertainty in the controls.
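The core selection step described in the abstract, scoring each candidate view/state by its expected utility under control noise, can be sketched in a toy one-dimensional setting. Everything below (the Gaussian error model, the utility values, the collision penalty, and all names) is an illustrative assumption, not the paper's actual implementation:

```python
import random

random.seed(0)

# Toy 1-D world: the object is a row of surface patches at positions 0..9.
# A patch is visible from a state if it lies within the sensor range.
# An obstacle occupies the interval [6.5, 7.5].
PATCHES = list(range(10))
SENSOR_RANGE = 2.0
OBSTACLE = (6.5, 7.5)

def collides(x):
    """Collision check for the reached state (stands in for a planner's check)."""
    return OBSTACLE[0] <= x <= OBSTACLE[1]

def coverage_gain(x, seen):
    """Number of not-yet-seen patches visible from state x."""
    return sum(1 for p in PATCHES
               if p not in seen and abs(p - x) <= SENSOR_RANGE)

def expected_utility(goal, seen, sigma=0.3, n_samples=50, collision_penalty=-5.0):
    """Average utility over sampled positioning errors (control noise).

    Each sample perturbs the commanded goal with Gaussian error; colliding
    outcomes are penalized, the rest score their new-surface coverage.
    """
    total = 0.0
    for _ in range(n_samples):
        reached = goal + random.gauss(0.0, sigma)  # noisy control execution
        if collides(reached):
            total += collision_penalty
        else:
            total += coverage_gain(reached, seen)
    return total / n_samples

seen = {0, 1, 2}                # patches already reconstructed
candidates = [3.0, 5.0, 7.0]    # surviving candidate views/states after filtering
best = max(candidates, key=lambda g: expected_utility(g, seen))
# Candidate 7.0 would see new surface, but samples of its noisy execution
# almost always land inside the obstacle, so its expected utility is low
# and a safer, still-informative view wins.
```

Averaging the utility over samples of the executed (rather than commanded) state is what makes risky candidates score poorly, which matches the abstract's claim that the method reduces the collision rate while increasing coverage.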

Original language: English
Pages (from-to): 89-109
Number of pages: 21
Journal: Autonomous Robots
Issue number: 1
State: Published - 1 Jan 2017


  • Motion planning
  • Next best view
  • Object reconstruction
  • Uncertainty


