An Analysis on the Role of Multisensor Fusion and Integration In Robotics Manipulation
Advancements in Multisensor Fusion and Integration for Robotics Manipulation
by Lect. M. Ramesh Babu*, Dr. Vinit Kumar
- Published in International Journal of Information Technology and Management, E-ISSN: 2249-4510
Volume 4, Issue No. 1, Feb 2013
Published by: Ignited Minds Journals
ABSTRACT
The use of multisensor integration and fusion enables a multisensor-based mobile robot to operate in uncertain or unknown dynamic environments. After first distinguishing between multisensor integration and the more restricted notion of multisensor fusion, the role of multisensor integration and fusion in the operation of a mobile robot is described with reference to the type of information that the integrated multiple sensors can uniquely provide the robot. A hypothetical mobile robot architecture is used to illustrate the generic functions necessary for intelligent autonomous mobility. A variety of proposed high-level representations for multisensory information are presented, along with a discussion of different sensor combinations that have been used in mobile robots. The paper concludes with short descriptions of a selection of different mobile robots to illustrate the role of multisensor integration and fusion in their operation.
KEYWORD
multisensor fusion, integration, robotics manipulation, mobile robot, uncertain environments, dynamic environments, information, sensor combinations, mobile robots, autonomous mobility
INTRODUCTION
Data fusion is a technique used to combine data from multiple sources into discrete, actionable items in order to achieve inferences that are more efficient and narrowly tailored than those obtained from the disparate sources alone. Data fusion processes are often categorized as low, intermediate, or high, depending on the processing stage at which fusion takes place. Low-level data fusion combines several sources of raw data to produce new raw data, with the expectation that the fused data is more informative and synthetic than the original inputs. Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.

Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system. Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format. The goal of our research is to develop programming support for the implementation of complex, sensor-based robotic tasks in the presence of geometric uncertainty. Examples of complex tasks include sensor-based navigation and 3D manipulation in partially or completely unknown environments, using redundant robotic systems such as mobile manipulator arms, cooperating robots, robotic hands or humanoid robots, and using multiple sensors such as vision, force, torque, tactile and distance sensors.

While in many multisensor systems the information from each sensor serves as a separate input to the system, the unique aspects involved in the actual combination or fusion of information prior to its use in the system make it useful to distinguish between multisensor integration and multisensor fusion. Multisensor integration refers to the synergistic use of the information provided by multiple sensory devices to control the operation of an intelligent system. Multisensor fusion refers to any stage in the integration process where there is an actual combination (or fusion) into one representational format of different sources of sensory information, or of information from a single sensory device acquired over an extended time period. Distinguishing fusion from integration serves to separate the more general issues involved in the integration of multiple sensory devices at the system architecture and control level from the more specific issues (e.g., mathematical or statistical) involved in the actual fusion of sensory information. For example, in many multisensor-based mobile robots the information from one sensor may be used to guide the operation of other sensors on the robot without ever actually fusing the sensors' information.
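To make the idea of low-level (signal-level) fusion concrete, the following Python sketch combines two raw range readings of the same quantity by inverse-variance weighting; the sensors, readings and noise figures are invented for illustration and are not taken from this paper.

```python
def fuse_signals(readings):
    """Low-level fusion: combine raw, commensurate measurements of the same
    quantity into a single estimate, weighting each by 1/variance so that
    less noisy sensors contribute more."""
    weights = [1.0 / var for _, var in readings]
    estimate = sum(w * value for w, (value, _) in zip(weights, readings)) / sum(weights)
    fused_variance = 1.0 / sum(weights)  # fused result is less uncertain than either input
    return estimate, fused_variance

# Hypothetical example: an ultrasonic and an infrared ranger measure the same distance (m).
ultrasonic = (1.04, 0.04)  # (raw value, variance)
infrared = (0.98, 0.01)
print(fuse_signals([ultrasonic, infrared]))
```

Because the weights favor the less noisy sensor, the fused estimate is pulled toward the infrared reading and carries a smaller variance than either input, which is the sense in which the fused data is "more informative" than the originals.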
The need for an effective method of integrating multisensory information was recognized during the development of the first mobile robots. In addition to robotics research, multisensor integration and fusion have since become a topic of research in computer vision, artificial intelligence, pattern recognition, neural networks, statistics, and automatic control, and an important aspect in the development of a number of intelligent systems in application areas including material handling, assembly, military command and control, target tracking, and aircraft navigation. Common to all of these applications is the requirement that the system intelligently interact with and operate in an unstructured environment without the complete control of a human operator. Luo and Kay have discussed some of the multisensor integration and fusion issues and approaches common to all of these applications, and Levi has described multisensor fusion techniques appropriate for mobile robot navigation and has reviewed their use in a number of existing robots.
MULTISENSOR FUSION AND INTEGRATION
Multisensor Fusion - The fusion of data or information from multiple sensors, or from a single sensor over time, can take place at different levels of representation. The different levels of multisensor fusion can be used to provide information to a system for a variety of purposes. For example, signal-level fusion can be used in real-time applications and can be considered just an additional step in the overall processing of the signals; pixel-level fusion can be used to improve the performance of many image processing tasks, such as segmentation; and feature- and symbol-level fusion can be used to provide an object recognition system with additional features that increase its recognition capabilities.

Multisensor Integration - Multisensor integration represents a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modeled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions. After the data from each sensor has been modeled, it can be integrated into the operation of the system in accord with three different types of sensory processing: fusion, separate operation, and guiding or cueing. Sensor registration refers to any of the means used to make the data from each sensor commensurate in both its spatial and temporal dimensions. If the data provided by a sensor is significantly different from that provided by the other sensors in the system, its influence on the operation of those sensors may be indirect: the separate operation of such a sensor influences the other sensors only through the effects it has on the system controller and the world model. Guiding or cueing refers to the situation where the data from one sensor is used to guide or cue the operation of other sensors. The results of the sensory processing functions serve as inputs to the world model. A world model is used to store information concerning any possible state of the environment in which the system is expected to operate, and it can include both a priori information and recently acquired sensory information. High-level reasoning processes can use the world model to make inferences that direct subsequent processing of the sensory information and the operation of the system controller. Sensor selection refers to any means used to select the most appropriate configuration of sensors from among the sensors available to the system.
Figure 1. General pattern of multisensor fusion and integration in a system.
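As a rough, hypothetical illustration of the integration functions just described (sensor modeling, registration, the three types of sensory processing, and the world model), the Python sketch below makes the data flow explicit; all class names, sensors and numeric values are assumptions made for the example rather than details taken from Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class SensorModel:
    """Models a sensor's data quality: readings are registered into a common
    frame (offset/scale) and tagged with a variance used by later functions."""
    name: str
    offset: float
    scale: float
    variance: float

    def register(self, raw: float) -> tuple[float, float]:
        return self.scale * raw + self.offset, self.variance

@dataclass
class WorldModel:
    """Holds a priori information plus recently acquired sensory information."""
    state: dict = field(default_factory=dict)

    def update(self, key, value):
        self.state[key] = value

def fuse(readings):
    """Fusion: combine commensurate readings, weighted by 1/variance
    (the same scheme as the signal-level example earlier)."""
    w = [1.0 / var for _, var in readings]
    return sum(wi * v for wi, (v, _) in zip(w, readings)) / sum(w)

def integrate(world, sonar, laser, compass, raw):
    """One integration cycle showing the three types of sensory processing."""
    # 1. Fusion: sonar and laser both measure obstacle distance -> one estimate.
    distance = fuse([sonar.register(raw["sonar"]), laser.register(raw["laser"])])
    world.update("obstacle_distance", distance)
    # 2. Separate operation: the compass measures something no other sensor does;
    #    it influences the others only via the world model and system controller.
    world.update("heading", compass.register(raw["compass"])[0])
    # 3. Guiding/cueing: a coarse result from one sensor decides whether
    #    another sensor should be triggered or pointed next.
    world.update("cue_laser_scan", distance < 2.0)
    return world

world = integrate(
    WorldModel(),
    SensorModel("sonar", 0.05, 1.0, 0.04),
    SensorModel("laser", 0.00, 1.0, 0.01),
    SensorModel("compass", 0.0, 1.0, 0.5),
    {"sonar": 1.04, "laser": 0.98, "compass": 87.0},
)
print(world.state)
```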
MULTI-SENSOR ROBOTIC HEAD FOR HUMAN ROBOT INTERACTION
In this paper, the design of the multi-sensor robotic head, Muecas, is presented. Muecas has been designed to study and compare new methods for natural HRI that help to improve attention and empathy during communication. The main idea is a simple design that takes into consideration several studies on social robot design and the importance of appearance in the interaction. Different concepts, such as the shape, the features, the level of rejection of the design and its compatibility with natural language, have been taken into account. Muecas has 12 degrees of freedom, distributed as follows: neck (four), mouth (one), eyes (three) and eyebrows (four). The design requirements of the robotic head aim to combine these mobile elements and sensors to establish an affective HRI through a
human-like appearance and the use of facial expressions. Thus, a perception system is integrated into the architecture, consisting of a set of sensors and software modules. On the one hand, Muecas is equipped with an audio system (i.e., microphones and speaker), an inertial system (i.e., compass, gyroscope and accelerometer) and video and depth acquisition systems (i.e., stereo cameras and an RGBD sensor). On the other hand, the software modules consist of different subsystems for human emotion recognition and imitation.

Design Requirements - Muecas has to be equipped with different sensors in order to acquire information about the robot's surroundings. In addition, the robotic head has to be designed to express emotions and to imitate the natural language of humans. The final design has taken into account not only the functions or abilities needed to understand and respond to the context of the communication, but also other concepts, such as the appearance or the acceptance level of the robot. A biologically inspired approach has been adopted for the design and manufacturing of the proposed robotic head. Similar to other works, Muecas draws on psychology, cognitive architectures and the structure of interaction, among others. On the one hand, since the morphology of the robotic head changes the perception and the response of the user in the interaction, it is treated as a significant factor in the proposed design. On the other hand, the level of realism is also a decisive factor that determines the emotional response of the user (i.e., a highly realistic appearance provokes an obvious rejection response, as described in the uncanny valley theory).

Hardware Architecture - Muecas uses a set of sensors and processing systems in order to acquire information about the surroundings and about the users in the interaction. In addition, a set of mobile elements (i.e., actuators) are combined in order to generate facial expressions and to interact with humans.

Software Architecture - The multi-sensor robotic head, Muecas, uses the RoboComp framework developed by the Robotics and Artificial Vision Lab (RoboLab) at the University of Extremadura. This software monitors and communicates with the processes of the motors and sensors deployed on this platform via a computer and its connections to the hardware.
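Since the head carries both a gyroscope and an accelerometer, a complementary filter is one standard way such an inertial system can be exploited to estimate orientation; the Python sketch below is a generic example of that technique and is not claimed to be the filter actually used in Muecas.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer data into a pitch estimate (radians).
    The gyroscope integrates well over short intervals but drifts; the
    accelerometer is drift-free but noisy. Blending them with weight `alpha`
    keeps the best of both."""
    pitch_gyro = pitch_prev + gyro_rate * dt      # short term: integrate angular rate
    pitch_accel = math.atan2(accel_x, accel_z)    # long term: gravity direction
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Hypothetical 100 Hz update loop with made-up IMU samples.
pitch = 0.0
for gyro_rate, ax, az in [(0.01, 0.05, 9.80), (0.02, 0.06, 9.79), (0.00, 0.05, 9.81)]:
    pitch = complementary_filter(pitch, gyro_rate, ax, az, dt=0.01)
print(pitch)
```

The weight alpha trades the gyroscope's smooth short-term tracking against the accelerometer's drift-free but noisy gravity reference.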
SENSOR COMBINATIONS
Due to the advantages and limitations of each type of sensor, most multisensor-based mobile robots use some combination of different sensor types to enable them to operate in environments ranging from roadways to unstructured indoor environments to unknown natural terrain, and to be used for applications including assembly and nuclear power station maintenance. Some sensors cannot be used in a particular environment due to their inherent limitations (e.g., acoustic sensors in space), while others are limited by either technical or economic factors. Obstacle detection with contact sensors necessarily limits the speed of a robot because contact must be made before detection can take place. Laser sensors require an intense energy source, have a short range and a slow scan rate, and their use can also pose eye-safety hazards. Vision sensors are critically dependent on ambient lighting conditions, and their scene analysis and registration procedures can be complex and time consuming. Shakey, one of the first autonomous vehicles, used vision together with tactile sensors for obstacle detection. JASON combined acoustic and infrared proximity sensors for obstacle detection and also used these sensors for path planning. The Stanford University Cart used acoustic and infrared sensors together with stereo vision for navigating over flat terrain while avoiding obstacles. Bixler and Miller used simple low-resolution vision in their autonomous mobile robot to locate the direction of an obstacle, and then used an ultrasonic range finder to determine its depth and shape (a cueing arrangement sketched below). Other combinations of sensors used in mobile robot systems have included: contact, infrared and stereo vision; contact and acoustic; acoustic and stereo vision; and stereo vision and laser range finding.
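The Bixler and Miller arrangement is an example of guiding or cueing, where one sensor directs another. The minimal Python sketch below assumes a hypothetical per-bearing obstacle score from a low-resolution camera and a `measure_range` callback standing in for the ultrasonic range finder; both names are invented for illustration.

```python
def locate_obstacle(camera_frame, measure_range):
    """Cueing pattern: a cheap low-resolution vision step picks the bearing
    of the most obstacle-like region, and that bearing then cues a single
    ultrasonic range measurement to recover the obstacle's depth."""
    bearing = max(range(len(camera_frame)), key=lambda i: camera_frame[i])
    depth = measure_range(bearing)
    return bearing, depth

# Toy usage: bearing 2 looks most obstacle-like; the ranger reports 1.4 m there.
frame = [0.1, 0.3, 0.9, 0.2]
print(locate_obstacle(frame, measure_range=lambda bearing: 1.4))
```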
APPLICATIONS OF MULTISENSOR FUSION AND INTEGRATION
In recent years, the benefits of multisensor fusion have motivated research in a variety of application areas, as follows:

Robotics - Robots with multisensor fusion and integration enhance their flexibility and productivity in industrial applications such as material handling, part fabrication, inspection and assembly. Mobile robots present one of the most important application areas for multisensor fusion and integration. When operating in an uncertain or unknown environment, integrating and fusing data from multiple sensors enables mobile robots to achieve quick perception for navigation and obstacle avoidance. The Marge mobile robot is equipped with multiple sensors; perception, position location, obstacle avoidance, vehicle control, path planning and learning are necessary functions for an autonomous mobile robot. The Honda humanoid robot is equipped with an
inclination sensor that consists of three accelerometers and three angular rate sensors. Each foot and wrist is equipped with a six-axis force sensor, and the robot head contains four video cameras. Multisensor fusion and integration of vision, tactile, thermal, range, laser radar, and forward-looking infrared sensors play a very important role in robotic systems.

Military applications - Multisensor fusion is used in the areas of intelligence analysis, situation assessment, force command and control, avionics, and electronic warfare. It is employed for tracking targets such as missiles, aircraft and submarines.

Remote sensing - Applications of remote sensing include monitoring the climate, the environment, water sources, soil and agriculture, as well as discovering natural resources and fighting the importation of illegal drugs. Fusing or integrating the data from passive multispectral sensors and active radar sensors is necessary for extracting useful information from satellite or airborne imagery.

Biomedical applications - Multisensor fusion techniques can enhance automatic cardiac rhythm monitoring by integrating electrocardiogram and hemodynamic signals. Redundant and complementary information from the fusion process can improve the performance and robustness of the detection of cardiac events, including ventricular and atrial activity.

Transportation systems - Transportation systems such as automatic train control systems, intelligent vehicle and highway systems, GPS-based vehicle systems, and aircraft landing and tracking systems use multisensor fusion techniques to increase reliability, safety, and efficiency.
CONCLUSION
Sensors play an important role in our everyday life because we have a need to gather information and process it for various tasks. The successful application of sensors depends on sensor performance, cost and reliability. The paradigm of multisensor fusion and integration, as well as fusion techniques and sensor technologies, is used in microsensor-based applications in robotics, defense, remote sensing, equipment monitoring, biomedical engineering and transportation systems.

Current social robotics requires the development of new agents for interacting in certain social environments, such as robotic care (care for the elderly and people with disabilities), rehabilitation and education, among others. This interaction is usually based on verbal and nonverbal communication. Therefore, the multi-sensor robotic head, Muecas, for affective human-robot interaction is presented in this paper. Muecas can be integrated with different robotic platforms while maintaining its ability to convey emotional information. The design of the robotic head has followed the main psychological theories about the human acceptance of robots, and thus Muecas is based on an anthropomorphic and caricatured design.

In the future, mobile robot development and multisensor integration and fusion research will continue their symbiotic relationship: both theoretical and practical techniques of multisensor integration and fusion will increase the ability of mobile robots to operate autonomously in unknown and dynamic environments, and mobile robots will, in turn, continue to provide data-rich platforms on which to test new integration and fusion techniques. As progress in VLSI technology continues, the development of integrated solid-state chips containing multiple sensors will enable mobile robots to use an increasing number of sensors at lower cost without increasing their power requirements. It is likely that so-called "smart sensors" will be developed that allow many low-level signal and fusion processing algorithms to be included in circuits on the chip. A smart sensor might also provide a better signal-to-noise ratio, as well as abilities for self-testing and calibration.
REFERENCES
- Bandera, J.P. Vision-Based Gesture Recognition in a Robot Learning by Imitation Framework. Ph.D. Thesis, University of Malaga, Malaga, Spain, 2009.
- J. Bleiholder and F. Naumann. Data fusion. ACM Computing Surveys, 41(1):1–41, 2008.
- J. De Schutter, J. Rutgeerts, E. Aertbelien, F. De Groote, T. De Laet, T. Lefebvre, W. Verdonck, and H. Bruyninckx, “Unified constraint based task specification for complex sensor-based robot systems,” in Int. Conf. Robotics and Automation, Barcelona, Spain, 2005, pp. 3618–3623
- Prado, J.A.; Simplicio, C.; Lori, N.F.; Dias, J. Visual-auditory Multimodal Emotional Structure to Improve Human-Robot-Interaction. Int. J. Soc. Robot. 2011, 4, 29–51.
- R.C. Luo and M.G. Kay, "Multisensor integration and fusion: issues and approaches," in Proc. SPIE, Vol. 931, Sensor Fusion, C.W. Weaver, Ed., Orlando, FL, Apr. 1988.
- S. Chen, "Multisensor fusion and navigation of mobile robots," Int. J. of Intelligent Systems, vol. 2, no. 2, pp. 227-251, 1987.
- Tapus, A.; Mataric, M.J. Emulating Empathy in Socially Assistive Robotics. In Proceedings of AAAI Spring Symposium on Multidisciplinary Collaboration for Socially Assistive Robotics, Palo Alto, CA, USA, 26–28 March 2007.