The major thrust of this paper is to develop a probabilistic sensor model that accurately captures an individual sensor's uncertainties and limitations. The model aims to provide a maximally informative likelihood function that can be used to obtain a statistical estimate of the uncertainties and errors arising from environmental parameters or from the parameters of any feature extraction algorithm applied to the sensor's outputs. The paper employs a neural network trained with a novel technique that derives its training signal from a maximum likelihood estimator. The proposed technique was applied to model a stereo-vision sensor and an Infra-Red (IR) proximity sensor, and the information from these sensors was fused in a Bayesian framework to obtain a three-dimensional occupancy profile of objects in a robotic workspace. The capability of the proposed technique to accurately obtain this occupancy profile and to effectively remove individual sensor uncertainties was demonstrated and validated via experiments carried out in the Robotics and Manufacturing Automation (RAMA) Laboratory at Duke University.
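The Bayesian fusion step described above, in which likelihood functions from multiple sensors update the occupancy estimate of each grid cell, can be sketched in a few lines. This is an illustrative log-odds formulation under standard independence assumptions; the function name and the example likelihood-ratio values are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def fuse_occupancy(prior, likelihood_ratios):
    """Fuse per-sensor likelihoods into a posterior occupancy probability.

    prior: prior probability that a grid cell is occupied.
    likelihood_ratios: p(z | occupied) / p(z | empty) for each sensor reading,
        assumed conditionally independent given the cell state.
    """
    # Work in log-odds so each sensor contributes an additive term.
    log_odds = np.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += np.log(lr)
    # Convert back to a probability via the logistic function.
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical readings: an uninformative prior, a stereo-vision likelihood
# ratio strongly favoring "occupied," and an IR reading weakly agreeing.
p_occupied = fuse_occupancy(0.5, [4.0, 1.5])  # -> 6/7, about 0.857
```

With an uninformative prior, the posterior odds are just the product of the likelihood ratios (here 4.0 × 1.5 = 6), giving an occupancy probability of 6/7.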
