CHAPTER 4 - A SURVEY OF NEURAL NETWORK APPLICATIONS IN ROBOTICS

4.1 Introduction:

As an emergent technology, neural networks appear to have a great deal to offer robotics research. Two distinct directions of research are the application of neural networks to control systems and to sensory integration.

Original work in the field involved the bang-bang control of a pole balancer by Widrow and Smith in the early 1960's [as reported by Widrow, 1987]. This early work predated the decline in neural network research that followed Minsky and Papert's book [1969]. The introduction of backpropagation [Rumelhart, 1986] and Hopfield's work [1982] helped rejuvenate the neural network field. The new robotics research covers several areas:

• Single Input/ Single Output controllers

• Adaptive Controllers based on System Identification

• Model Based Dynamic Controllers

• Inverse Kinematics

• Neuromorphic Controllers

• Vision and Sensor Systems

• Other Novel Applications

These areas will be described briefly to give the reader a background in the typical neural network applications to robotics. The final discussion will also briefly cover the main neural network paradigms used in robotics.

4.2 Neural Network Applications to SISO Control:

Single Input/ Single Output controllers have been the mainstay of classic linear control theory. Even though these controllers may be very dependable, their performance is greatly degraded when exposed to the non-linear nature of the robotics problem. These controllers do have some interesting benefits, as shown by Guez and Selinsky [1988a, 1988b], who discuss a controller that learns its control rules by observing a human teacher. This work was demonstrated using the pole balancing problem.


Figure 4.1: A Single Input/ Single Output Controller

Elsley [1988] proposed a feedback controller for kinematic control of a manipulator that uses a vision system for position feedback. He discussed adaptive control strategies for kinematic control of a robotic manipulator, based upon a neural network controller that replaces a conventional controller, such as a PID controller. He also performed fault tolerance tests and found that the networks were able to continue operating and overcome failures when neurons were removed.

The flaw with the SISO methods is that a linearized model of the robot must be assumed. Robots are rarely linear, and as a result these methods are subject to potential failure.

4.3 System Identification With Neural Networks:

Adaptive controllers [Astrom, 1987] allow compensation for non-linearities in systems. Even though these controllers are more complicated, they have inspired some interesting papers. The approaches found in the literature are based on system identification.


Figure 4.2: A Self Tuning Controller

System Identification involves using the current state of the system to estimate controller parameters. This is a good method for dealing with non-linear systems. Guez et al. [1987] proposed a neural network system which identifies the state of a manipulator and estimates the control parameters that should be used in the feedback controller. F.C. Chen [1989] also describes a self-tuning controller scheme using system identification to select system parameters. Narendra and Parthasarathy [1990] give a rather rigorous approach to system identification and modelling. Their work was quite successful, and produced excellent control of a manipulator with neural networks.
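To illustrate the basic idea, a minimal sketch of neural network system identification is given below. It is not the implementation of any cited author: the one-link arm "plant", network size, learning rate, and sampling ranges are all assumptions, and a simple two-layer network is trained to predict the next state of the plant from its current state and applied torque.

```python
# Minimal sketch of neural network system identification (assumed example):
# a small network learns to predict the next state of a simulated one-link arm
# from its current state and the applied torque.
import numpy as np

rng = np.random.default_rng(0)
DT, G, L = 0.01, 9.81, 1.0          # time step, gravity, link length (assumed)

def plant(theta, omega, torque):
    """'True' one-link arm, used only to generate identification data."""
    alpha = torque - (G / L) * np.sin(theta)      # simplified dynamics
    return theta + omega * DT, omega + alpha * DT

# Two-layer network: input (theta, omega, torque) -> predicted (theta', omega').
W1 = rng.normal(0, 0.5, (8, 3)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (2, 8)); b2 = np.zeros(2)
lr = 0.01

for step in range(30000):
    x = rng.uniform([-np.pi, -2.0, -5.0], [np.pi, 2.0, 5.0])   # sampled state/input
    target = np.array(plant(*x))

    h = np.tanh(W1 @ x + b1)          # hidden layer
    y = W2 @ h + b2                   # predicted next state
    err = y - target

    # Backpropagate the squared prediction error through both layers.
    dW2 = np.outer(err, h); db2 = err
    dh = (W2.T @ err) * (1 - h ** 2)
    dW1 = np.outer(dh, x); db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The identified model can now supply predictions to a self-tuning controller.
```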

An alternative to self-tuning controllers is the model-based controller; these typically require good models of the robot process.


4.4 Non-Linear Control with Neural Networks:

The basic control functions of the robot may be modelled. The state of the process is used as input to the neural network, which estimates the model outputs. These models may then be used in more sophisticated controllers.

A simple example of a model may be seen in the work of Suddarth et al. [1988], who discuss the use of neural networks to control the thrust of a lunar lander as it descends to the moon's surface. This controller uses three inputs to produce the recommended thrust.

Researchers have applied non-linear neural network controllers to robotics problems. Ritter and Schulten [1986, 1988] discuss using Kohonen networks to remember the force output for a pole balancer, based on current position. Atkeson and Reinkensmeyer [1989] used a content addressable memory to remember torque values for particular control states of the manipulator. Their work identified a problem inherent in feature detection networks with associative memory paradigms: they produce "choppy" control signals. Chen and Pao [1989] use a novel learning paradigm that allows them to learn the inverse dynamics in real time for the pole balancing problem.
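The "choppy" behaviour of associative-memory controllers can be illustrated with the sketch below. This is an assumed example rather than any cited author's implementation: torque values are stored in a table indexed by a coarsely discretized state, and recall returns the value of the nearest cell, so the control output steps as the state crosses cell boundaries.

```python
# Hypothetical sketch of a content-addressable torque memory for a one-link arm.
# States are discretized into bins; each bin remembers the torque last used there.
import numpy as np

N_BINS = 16                                   # coarse grid (assumption)
theta_bins = np.linspace(-np.pi, np.pi, N_BINS)
omega_bins = np.linspace(-2.0, 2.0, N_BINS)
torque_table = np.zeros((N_BINS, N_BINS))     # the "memory"

def cell(theta, omega):
    """Index of the memory cell nearest to the continuous state."""
    i = np.argmin(np.abs(theta_bins - theta))
    j = np.argmin(np.abs(omega_bins - omega))
    return i, j

def store(theta, omega, torque):
    torque_table[cell(theta, omega)] = torque

def recall(theta, omega):
    return torque_table[cell(theta, omega)]

# Training pass: record an assumed PD-like control rule into the memory.
for theta in np.linspace(-np.pi, np.pi, 200):
    for omega in np.linspace(-2.0, 2.0, 200):
        store(theta, omega, -4.0 * theta - 1.0 * omega)

# Recall is piecewise constant across cells, so the torque signal steps ("chops")
# as the state crosses cell boundaries instead of varying smoothly.
print(recall(0.10, 0.0), recall(0.12, 0.0))
```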

4.5 Model Based Neural Network Controllers:

Modelling the robotic system allows compensation for the non-linearities. These models are generally based on the dynamics. The models are useful for developing Feedforward Controllers and Model Reference Adaptive Controllers (MRAC). Neural network models allow much faster calculation of the model, and thus make these controllers practical. Non-neural-network MRAC controllers may not always be suitable for real time control [Guez et al., 1987][Kuntze, 1989] because the controller requires an explicit model and a great deal of computation.


Figure 4.3: A Model Reference Adaptive Controller


Figure 4.4: A Feed Forward Controller

Advanced model based controllers have also been developed. Work by Kawato et al. [1987a, 1987b, 1988] discusses biologically based control systems, and investigates an approximation of the control scheme. This control scheme involves modelling the inverse dynamics for a feed forward controller, combined with a proportional feedback controller. Other work has also involved modelling of the inverse dynamics. The Hebbian neural network paradigm was used to model the inverse dynamics of a manipulator in the work of Pourboghrat and Sayeh [1988]. They incorporate the inverse dynamics model into a feedforward controller and then add a second neural network adaptive error feedback controller.

Miller et al. [1990] use the CMAC ("Cerebellar Model Articulation Controller") paradigm to learn the inverse dynamics of a manipulator, and the models are used in a feed-forward control scheme. This scheme is very fast and can update in real time. The method successfully learns control of a robot, with dynamic effects considered. The controller uses a combination of feed-forward and proportional feed-back control, based on a simple neural network model. The neural network is trained in real time, and adjusts weights in the massively parallel CMAC network.
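The general feedforward-plus-feedback structure (cf. Figure 4.4) can be sketched as follows. This is not Miller's CMAC implementation: the one-link plant, gains, trajectory, and the stand-in "inverse dynamics model" function are all assumptions, with the function taking the place of a trained network.

```python
# Generic sketch of feedforward-plus-error-feedback control (cf. Figure 4.4).
# The "inverse dynamics model" is an assumed function standing in for a trained
# neural network; plant parameters, trajectory, and gains are illustrative.
import numpy as np

DT, G, L = 0.01, 9.81, 1.0

def plant_step(theta, omega, torque):
    alpha = torque - (G / L) * np.sin(theta)
    return theta + omega * DT, omega + alpha * DT

def inverse_dynamics_model(theta_d, alpha_d):
    """Placeholder for a network trained to output the torque that produces
    the desired acceleration at the desired position."""
    return alpha_d + (G / L) * np.sin(theta_d)

KP, KD = 25.0, 5.0                       # simple PD feedback gains (assumed)
theta, omega = 0.0, 0.0

for k in range(1000):
    t = k * DT
    theta_d = 0.5 * np.sin(t)            # desired trajectory
    omega_d = 0.5 * np.cos(t)
    alpha_d = -0.5 * np.sin(t)

    torque_ff = inverse_dynamics_model(theta_d, alpha_d)         # feedforward term
    torque_fb = KP * (theta_d - theta) + KD * (omega_d - omega)  # feedback term
    theta, omega = plant_step(theta, omega, torque_ff + torque_fb)

print("final tracking error:", theta_d - theta)
```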

4.6 Neuromorphic Controllers:

The approach used most often is to pretrain a neural network for control. Sometimes, however, it is advantageous to have the neural network learn as the process is occurring. This requires a neuromorphic approach.

Neuromorphic controllers may be devised which can learn on-line, when supplied with a measure of control success. These methods require that some objective be established for the system, and that an appropriate learning rule be used.


Figure 4.5: An Example of a Neuromorphic Controller

Sanner and Akin [1988] describe a neuromorphic controller that maps the system state to a control signal. This controller is updated with a payoff feedback function. TenDyke [1990] uses a similar approach in which the control model is adapted by error function feedback.

Psaltis et al. [1987] discuss controller architectures that learn the inverse control model of a plant on-line. A similar structure has been proposed by Guez and Selinsky [1988b] to control the non-linear pole balancing problem. Finally, the highly non-linear problem of the hopping robot was examined by Helferty et al. [1989]. They used feedback of a reinforcement signal to direct learning of a neural network. All of these approaches appear to have had good success.
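A minimal sketch of on-line learning from a scalar payoff signal is given below. It uses a generic perturbation (hill-climbing) rule rather than the algorithm of any cited author, and the one-link plant, payoff function, and linear controller form are assumptions made only for illustration.

```python
# Sketch of on-line learning driven by a scalar payoff (generic perturbation
# rule, not any cited author's method). Plant and payoff are assumed.
import numpy as np

rng = np.random.default_rng(1)
DT, G, L = 0.02, 9.81, 1.0
w = np.zeros(2)                          # linear state-feedback weights to learn

def run_episode(weights, steps=200):
    """Return a payoff: higher when the arm settles near theta = 0."""
    theta, omega = 0.3, 0.0
    payoff = 0.0
    for _ in range(steps):
        torque = np.clip(-weights @ np.array([theta, omega]), -10.0, 10.0)
        alpha = torque - (G / L) * np.sin(theta)
        theta, omega = theta + omega * DT, omega + alpha * DT
        payoff -= theta ** 2 + 0.1 * omega ** 2          # penalize error
    return payoff

best = run_episode(w)
for trial in range(500):
    candidate = w + rng.normal(0, 0.2, size=2)           # perturb the weights
    score = run_episode(candidate)
    if score > best:                                      # keep improvements only
        w, best = candidate, score

print("learned gains:", w, "payoff:", best)
```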

4.7 Neural Networks for Vision and Sensor Based Systems:

Sensory integration has become an increasingly popular topic in robotics. It is very hard to integrate sensors into many robotics control and planning schemes. These problems may be addressed by the massively parallel neural network architecture.

Fusion of vision and control is of great interest. Tolat and Widrow [1988] have discussed the use of an Adaline (an early perceptron-like neuron) to control a pole balancer using visual inputs.

Graf and LaLonde [1988a, 1988b] have done work with adaptive control of a manipulator. They have developed a method which can glance at a point in space with 2D stereo vision, and then move to it. This method learns the workspace, to avoid collisions and correct for kinematic variations over time. The method is based on three Kohonen networks: one map is used for mapping vision to arm configuration, another for mapping collisions to arm configuration, and a third for mapping arm configurations to the robot control.

Martinetz et al. [1990] use Kohonen networks to make a topological map of space. They use two cameras that report the x-y positions of the end-effector in their images. The image positions are used to obtain the estimated joint angles for that position. Vision is also used by Pabon and Gossard [1988], who determine a visual perspective orientation for a vision camera, and then control an associated robot with neural networks. Finally, Miller [1988] vaguely discusses the use of vision and a CMAC paradigm to do object tracking.

Another approach is to use sensors to collect data and then generate neural network maps of space. Jorgenson [1987] has used Hopfield networks to remember the topography of a room (in a 1024 by 1024 map), as observed from sonar sensors. A path through the room is then found using a simulated annealing type of approach. Nagata et al. [1988] briefly describe a mobile robot that has visual, tactile and auditory sensors, which serve as a basis for reasoning and instinct networks.


4.8 Novel Neural Network Applications:

Some researchers have investigated novel applications of neural networks to robotics problems.

Tsutsumi et al. [1987] discuss moving an elephant trunk-like manipulator (a 2D truss structure) towards a goal point through energy minimization with a Hopfield network. Tsutsumi and Matsumoto [1988] have developed a similar method for finding the optimal path for a 2D snake manipulator through sets of obstacles using energy minimization with a Hopfield network. Both methods assume a flexible truss manipulator which has a large number of sensors capable of measuring the distances between the robot links, base, and obstacles. Both methods may become deadlocked, and must then resort to other algorithms to recover.

A rule and CMAC controller system was designed by Handelman et al. [1989] to control a tennis-like swing of a manipulator. Their neural network controller observed the rule based control of a tennis swing; the trained network was then used directly, with the rules monitoring the neural network.

4.9 Inverse Kinematics with Neural Networks

The inverse kinematics problem involves the conversion of the end effector position in space to robot joint coordinates. The neural network learns to map the end effector position to a set of joint positions. These approaches are hindered only by the singularities which exist in the workspace. Kinematic singularities are typically exemplified by mathematical solutions i) at or beyond the reach of the manipulator, ii) at redundant arm configurations, and iii) at the origin.
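The mapping can be illustrated with a minimal sketch of training a backpropagation network to recover the joint angles of a two-link planar arm from its end effector position, sampling only a restricted, singularity-free region of the workspace. The link lengths, network size, joint ranges, and learning rate are assumptions; the sketch is not the procedure of any particular cited work.

```python
# Minimal sketch (assumptions throughout) of learning inverse kinematics with a
# backpropagation network: map the (x, y) position of a two-link planar arm
# back to joint angles, sampling only a restricted, singularity-free region.
import numpy as np

rng = np.random.default_rng(2)
L1, L2 = 1.0, 1.0                                   # link lengths (assumed)

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Network: 2 inputs -> 20 hidden tanh units -> 2 joint-angle outputs.
W1 = rng.normal(0, 0.5, (20, 2)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (2, 20)); b2 = np.zeros(2)
lr = 0.01

for step in range(30000):
    q = rng.uniform([0.3, 0.3], [1.2, 1.2])        # restricted joint range
    x = forward_kinematics(q)                      # training pair (x -> q)

    h = np.tanh(W1 @ x + b1)
    q_hat = W2 @ h + b2
    err = q_hat - q                                # squared-error gradient

    dW2 = np.outer(err, h); db2 = err
    dh = (W2.T @ err) * (1 - h ** 2)
    dW1 = np.outer(dh, x); db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Check: the learned mapping should reproduce joint angles inside the region.
q_test = np.array([0.8, 0.7])
h_test = np.tanh(W1 @ forward_kinematics(q_test) + b1)
print("predicted joints:", W2 @ h_test + b2, "actual:", q_test)
```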

Some of the early attempts at neural network inverse kinematics were made by Josin [1988b] and Josin et al. [1988a, 1988b, 1988c]. Their work concentrated on a limited square within a Cartesian workspace, while avoiding all singularities. Their work also included the solution of a 3 degree of freedom planar manipulator, by specifying the orientation as well as the two position coordinates. These results were good for limited regions in the workspace.

Guez and Ahmad [1988a, 1988b] also examined the two degree of freedom problem, over a greater workspace area. Their attempts focused on a space described with polar coordinates. These attempts were very successful, although they did not examine the problem of singularities which they had mentioned.

The inverse kinematics problem was examined over the entire workspace by Lee et al. [1990], who converted polar coordinates in space into joint angles for a 3 degree of freedom manipulator. This method was successful, but showed that inverse kinematics maps have problems dealing with singularities. This work was further extended by Jack et al. [unpublished], who discovered that the same conditions exist for inverse kinematics throughout the entire workspace when using polar, Cartesian, and joint coordinates to map to joint coordinates.

All of the approaches to inverse kinematics with neural networks use the feedforward backpropagation algorithm. The results from this research indicate potential problems when a neural network must learn inverse kinematics. Thus, the inverse kinematics problem should be avoided, and joint coordinates used instead (as is done in this thesis).

There have been some methods which have inverse kinematics built in. The inverse kinematics of an inverted pendulum were taught to a Hopfield network by Kitamura et al. [1988]. Their model was created so that they could control a walking biped robot as if it were an inverted pendulum. Sobajic et al. [1988] explored a controller that used the current joint angles and the target in polar coordinates to produce the control signal. As mentioned earlier, Elsley [1988] uses Cartesian position feedback to drive the controller.

4.10 Estimating the Inverse Jacobian with Neural Networks

An alternative to using neural networks to determine a position in space is to use neural networks to determine motion in space. The Inverse Jacobian converts movement of the end effector in space into joint movements. It still has some of the singularities of the inverse kinematics, but in fewer regions of the workspace.
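A short worked example makes the conversion concrete. The analytic Jacobian of a two-link planar arm (with assumed link lengths, joint angles, and desired motion) relates joint rates to end effector velocity, dx = J(q) dq; inverting it converts a small Cartesian motion into the corresponding joint motion, and the calculation breaks down near singular configurations where det(J) approaches zero. A neural network approach would learn this mapping rather than compute it directly.

```python
# Worked example (analytic, for illustration only) of converting end effector
# motion into joint motion with the inverse Jacobian of a two-link planar arm.
import numpy as np

L1, L2 = 1.0, 1.0                         # link lengths (assumed)

def jacobian(q):
    """2x2 Jacobian of the planar two-link arm: dx = J(q) dq."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.5, 0.8])                  # current joint angles (example)
dx = np.array([0.01, -0.02])              # desired small Cartesian motion

J = jacobian(q)
if abs(np.linalg.det(J)) < 1e-6:          # near-singular: the inverse blows up
    raise ValueError("near a kinematic singularity; inverse Jacobian unreliable")
dq = np.linalg.solve(J, dx)               # joint motion that produces dx
print("joint increments:", dq)
```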

There are some examples of the use of a neural network estimate of the Inverse Jacobian for robot control. As mentioned earlier, Elsley [1988] uses the Inverse Jacobian with a vision system to reduce errors. He compared the results to a traditional controller, and found that the traditional controller performed better. The Inverse Jacobian was also modelled with a neural network by Yeung and Bekey [1989].

4.11 Discussion of Neural Network Based Robotics

The network paradigm, learning rule, and architecture may have a distinct effect upon the success of a method.

The most popular paradigm is backpropagation, which is used in [F.C. Chen, 1989][Guez and Selinsky, 1988][Narendra and Parthasarathy, 1990][Pabon and Gossard, 1988][Psaltis et al., 1987][Sanner and Akin, 1988][Tawel and Thakoor, 1988]. This method is good for learning smooth mappings of continuous functions.

A second popular paradigm is the Kohonen network [Kohonen, 1982], which is used by [Graf and LaLonde, 1988a, 1988b][Martinetz et al., 1990][Ritter and Schulten, 1986, 1988]. Instead of learning an internal representation of a function, each neuron in the network remembers a feature of the problem, and the resultant response. This appears to work well when attempting to use the networks for topological mapping. Unfortunately, because the Kohonen map is discrete, it is subject to 'switching' noise when mapping continuous functions. One distinct advantage of the Kohonen map is the ability to restrict weight updates to particular neurons, instead of the whole network [Ritter and Schulten, 1986, 1988], thus reducing computation time.
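The restricted-update property can be seen in the minimal one-dimensional Kohonen map sketched below. The map size, neighbourhood radius, and learning rates are assumptions; on each step only the winning neuron and its immediate neighbours are adjusted, which is the computational saving noted above.

```python
# Minimal sketch of a one-dimensional Kohonen (self-organizing) map. Only the
# winning neuron and its neighbours are updated on each step. Sizes and rates
# are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(3)
weights = rng.uniform(0.0, 1.0, size=20)          # 20 neurons, 1-D inputs

for step in range(5000):
    x = rng.uniform(0.0, 1.0)                      # training input
    winner = int(np.argmin(np.abs(weights - x)))   # best-matching neuron
    for i in range(len(weights)):
        d = abs(i - winner)
        if d <= 2:                                 # neighbourhood of radius 2
            lr = 0.1 * np.exp(-d)                  # nearby neurons move more
            weights[i] += lr * (x - weights[i])

# After training the neuron weights spread out to cover the input range [0, 1].
print(np.round(np.sort(weights), 2))
```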

Hopfield networks have been used by some researchers to 'minimize the energy state of their solution' [Jorgenson, 1987][Kinser et al., 1988][Tsutsumi et al., 1987, 1988]. Hopfield networks are not very suitable for real time control, which requires consistent, fast solution speed.

There are other paradigms that are rarely used. The Adaline has been used by Tolat [1988]. Adalines are simple and fast, but not suitable for complex problems. The CMAC ("Cerebellar Model Articulation Controller") paradigm has been used by some researchers for simple sensory based control [Handelman et al., 1989][Miller, 1988][Miller et al., 1990], because of its ability to recognize features from many inputs.

There are other researchers who attempt to develop new paradigms designed to suit robotics problems. Nagata et al. [1988] have developed their own learning rules for a neural network. Non-homogeneous network structures were proposed by [Elsley, 1988][Nagata et al., 1988]. These structures use mixtures of different types of neurons with complementary attributes, and give better results for particular applications.

Other researchers have entirely novel approaches. Hybrid rule based controls have been applied to neural network problems [Handelman, 1989][Suddarth et al., 1988][Tsutsumi et al., 1988]. This allows the computational advantages of formal Von Neumann computing (i.e., rules) to be combined with the abilities of neural networks.

When the robotics problem is known to be dependent upon time, or on previous states, it can be useful to have time delays, to allow the network to do time based estimations [Karsai et al., 1989]. One non-robotic example of time delays in neural networks is Waibel's [1988] speech recognition system, which uses time delays on network inputs to allow the speech signal to be examined over time. This means that it is also possible to incorporate delays into a neural robot controller, letting the network calculate the first, second, or higher derivatives of an input. This is applicable to tasks like finding angular velocity from angular position.
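The idea is illustrated below with an assumed joint angle signal and sample time: presenting the current and delayed samples of the angle is enough to recover the angular velocity. Here the difference is computed explicitly, standing in for the operation a network could learn from the delayed inputs.

```python
# Illustration of the time-delay idea: the current and delayed joint angle
# samples are enough to estimate angular velocity. The explicit difference here
# stands in for what a network could learn from delayed inputs; the signal and
# sample time are assumed.
import numpy as np

DT = 0.01
t = np.arange(0.0, 1.0, DT)
theta = np.sin(2.0 * np.pi * t)                    # measured joint angle

# Delay line: each row holds [theta(k), theta(k-1)] as the network would see it.
delayed_inputs = np.column_stack([theta[1:], theta[:-1]])

# First derivative from the delayed pair (backward difference).
omega_est = (delayed_inputs[:, 0] - delayed_inputs[:, 1]) / DT
omega_true = 2.0 * np.pi * np.cos(2.0 * np.pi * t[1:])

print("max velocity estimation error:", np.max(np.abs(omega_est - omega_true)))
```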

There are several brief surveys available for neural networks in robotics. Kung and Hwang [1989] survey many different neural network paradigms and applications, with the intention of developing a VLSI support architecture for Neural Networks in robotics. Werbos [1989a, 1989b, 1989c, 1990] identifies some of the main issues in neuroengineering including some robotics issues.

4.12 Summary:

• Neural Networks have been used in a variety of linear and non-linear controllers.

• Neural networks can handle one or more inputs and outputs.

• Neural networks do not work well when dealing with the mathematical problem of converting space coordinates to joint coordinates.

• Neural networks have been used in most popular control schemes including controlling unmodelled processes.

• Various sensors have been used successfully with neural networks.

• Backpropagation is the most popular neural network paradigm for robotics research.