Jack, H., Lee, D.M.A., Buchal, R.O., ElMaraghy, W.H., “Neural Networks and the Inverse Kinematics Problem”, The Journal of Intelligent Manufacturing, March, 1994.
Neural Networks and the Inverse Kinematics Problem
H. Jack, D. M. A. Lee, R. O. Buchal, W. H. ElMaraghy, The University of Western Ontario, Department of Mechanical Engineering, London, Ontario, CANADA, N6A 5B9
Inverse kinematics is a fundamental problem in robotics. Past solutions for this problem have been realized through the use of various algebraic or algorithmic procedures. In this paper the use of Feedforward Neural Networks to solve the Inverse Kinematics problem is examined for three different cases. A closed kinematic linkage is used for mapping input joint angles to output joint angles. A three degree of freedom manipulator in 3D space is used to test mappings from both cartesian and spherical coordinates to manipulator joint coordinates. A majority of the results have average errors which fall below 1% of the robot workspace.
The accuracy indicates that neural networks are an alternative method for performing the inverse kinematics estimation, thus introducing the fault tolerant and high speed advantages of neural networks to the inverse kinematics problem.
This paper also introduces a new technique which reduces neural network mapping errors through the use of error compensation networks. The results of the work are put in perspective with a survey of current applications of neural networks in robotics.
In manipulator kinematics, the desired position of a robot is usually specified with a convenient coordinate system such as cartesian coordinates. The inverse kinematics problem can be reduced to determining a relationship or mapping between the joint and reference coordinate systems. The forward kinematics problem, consisting of finding the end effector coordinates as a function of the joint coordinates, presents no difficulties. The more important inverse problem of finding the joint coordinates corresponding to a desired end effector position is more difficult because of the highly coupled, non-linear nature of the mapping. Difficulties arise when unreachable end effector positions are specified, and multiple joint configurations may exist for a single end effector position.
Traditional approaches to the solution of mechanism kinematics include two methods: (a) determining the set of closed form kinematic equations from the physical constraints of the system, or (b) employing iterative techniques to determine the spatial relationship of each link. Under controlled conditions, each of these methods may give accurate and reliable results; however, severe limitations may arise when these mechanisms are exposed to real world environments. When explicit closed form solutions are available, the kinematic relationships are only valid as long as the physical constraints of the mechanism do not change over time. When closed form solutions do not exist or cannot be found, iterative numerical techniques must be used. Although they are general, these techniques have several disadvantages in that:
• they are sensitive to the initial guess,
• terminal conditions for convergence of Newton-based algorithms are not provided before execution,
• multiple solutions are not taken into account, and
• they are not guaranteed to converge to a correct solution under all conditions.
Many of these iterative algorithms must be executed off-line because of their computational complexity, and thus, they are unsuitable for real-time control.
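These disadvantages can be made concrete with a minimal Newton-based solver for an illustrative 2-link planar arm (the link lengths and target below are chosen for illustration, not taken from the paper):

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths for an illustrative 2-link arm

def forward(theta):
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def newton_ik(target, guess, tol=1e-10, max_iter=100):
    """Iterative inverse kinematics; sensitive to the initial guess and
    undefined where the Jacobian is singular (e.g. a fully extended arm)."""
    theta = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        err = target - forward(theta)
        if np.linalg.norm(err) < tol:
            break
        # Newton step: solve J * dtheta = err; fails near singularities
        theta = theta + np.linalg.solve(jacobian(theta), err)
    return theta
```

Each iteration solves a linear system with the Jacobian, so the step is undefined at singular configurations, only one of the multiple solutions is found, and which one depends on the initial guess.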
For the interested reader, there are some good references for exploring the inverse kinematics problem. Both of the aforementioned approaches are generally rigid and do not account for uncontrollable variables such as wear on the mechanical parts over extended periods of use, poor tolerances during manufacture, damage to the mechanism during operation, poor calibration, or changes to sensor characteristics. As these changes occur, the mechanism must be maintained on a regular basis to maintain accuracy in the control of these mechanisms. A neural network approach could be utilized to adapt to these changes in the mechanism, via on-line learning.
Neural networks also possess other benefits not inherent in modern digital computers. Traditional computers based upon the von Neumann architecture are most often used for solving complex algorithmic procedures and are susceptible to a number of problems. For instance, these computers can be prone to a number of physical faults, often rendering them unsuitable to function in hostile environments or in applications where a high degree of fault tolerance is desired, such as space applications. One way to avoid this is to incorporate some form of redundancy. This can appear in the form of having a number of identical computer hardware modules which work on the same problem simultaneously and independently. The solutions obtained by the modules are compared and a diagnosis is performed to determine if one or more of the modules are faulty. The most common method would be to accept the answer which is obtained by the majority of modules and determine that the non-conforming answer is a result of a “faulty” hardware module. Traditional digital computers are also limited by the requirement that they perform tasks with a rigid set of steps. The sequential processing of these computing architectures has a fundamental bottleneck. These issues of fault tolerance, speed of computation and adaptability are addressed in artificial neural networks.
An artificial neural network is described as a collection of simple processing units interconnected in a vast parallel array. The term “neural network” arises as a result of its resemblance to certain simplistic models of the biological brain. Neural networks:
• exhibit the ability to learn and adapt as new situations are encountered,
• possess the ability to generalize to valid solutions based on noisy and limited data, and
• are fast because of their parallelism.
Artificial neural networks are mainly used in two areas: feature detection and pattern mapping. Feature detection is performed by classifying an unknown pattern through comparisons with previously learned patterns. This ability is termed associative recall. An example of this is the use of neural networks to recognize handwritten digits. In pattern recognition, when a particular pattern is noisy or distorted, the network can generalize and choose the closest match. In pattern mapping, continuous input patterns are presented to the neural network in order to evoke continuous output patterns. An example is the use of a neural network as a basic controller in a manufacturing control system. The controller would accept vital operating and system conditions as the inputs, and a set of control outputs would drive the current process.
This paper will examine the inverse kinematics problem using artificial feedforward neural networks based on the backpropagation algorithm. Three mapping problems will be examined. The first is a joint coordinate mapping of a closed linkage manipulator. The second and third cases are 3-link, three degree of freedom manipulators with coordinates expressed in cartesian and spherical coordinates. A brief survey of neural networks in robotics precedes the results and is intended to provide a perspective of current research interests in the field.
2.0 Survey of Neural Networks in Robotics
Neural networks have inspired robotics researchers with their peculiar abilities. The two distinct directions of research appear to be the application of neural networks to control systems, and the application of neural networks to “intelligent systems”.
Four different network approaches seem to be common. Feedforward sigmoidal networks are used for modelling continuous space, as is done in this paper. Kohonen nets are used as feature detectors which recognize small features in a problem. Hopfield networks are used for energy minimization of a problem. Cerebellar Model Articulation Controller (CMAC) networks are also used to recognize problem features.
Control using neural networks can be traced back to the bang-bang control of a pole balancer by Widrow and Smith. This was before neural network research declined after Minsky and Papert’s book. The introduction of Back Propagation and Hopfield’s work have rejuvenated the neural network field and subsequent interest in the application of neural nets to the robotics field. Researchers have applied neural networks to such problems as:
• Model Based Dynamic Controllers
• Single input/ Single output controllers
2.1 Model Based Dynamic Controllers
Modelling involves training the neural network to recognize the forward and inverse equations of dynamics. This is useful for developing Feedforward Controllers or Model Reference Adaptive Controllers (MRAC). These controllers have some problems which may be addressed by neural networks. The MRAC controller may not always be suitable for real time control because the controller requires an explicit model. Neural networks appear to offer a good alternative for modelling, and for the adaptive controller rules. Work by Kawato et al. discusses biologically based control systems, and investigates a control scheme which almost emulates them. This control scheme involves modelling forward and inverse dynamics. Other works also involve modelling of the inverse dynamics. Ritter and Schulten discuss using Kohonen nets to remember the force output for a pole balancer based on current position. The Hebbian neural network paradigm is used to model the inverse dynamics of a manipulator in the work of Pourboghrat and Sayeh. They incorporate the inverse dynamics model into a feedforward controller with a neural network error feedback controller. Both of the units may be adapted on-line. Miller et al. used the CMAC paradigm to learn the inverse dynamics of a manipulator, and applied the models in a feedforward control scheme. This scheme is very fast and can update in real time. Chen and Pao used a novel learning paradigm which allowed them to learn the inverse dynamics in real time for the pole balancing problem. Atkeson and Reinkensmeyer used content addressable memory to remember torque values for particular control states of the manipulator. This work identified some problems inherent in feature detection networks for this sort of paradigm, such as “choppy” control signals.
Suddarth et al. discuss the use of neural networks to control the thrust of a lunar lander as it descends to the moon’s surface. This controller uses three inputs to produce the recommended thrust. A method of compensating for varying payloads was devised by Kuperstein and Wang. They used a single link manipulator with opposing actuators and variable parameters, and used a neural network to perform control.
2.2 Adaptive Controllers
System Identification involves using the current state of the system to estimate controller parameters. This is a good method for dealing with non-linear systems. Guez et al. proposed a system which identifies the configuration of a manipulator and estimates the control parameters which should be used in the feedback controller. Chen describes a self tuning controller scheme using system identification to select system parameters. Narendra and Parthasarathy give a rather rigorous approach to system identification and modelling. Their work was quite successful, and produced excellent control of a manipulator.
2.3 Single Input / Single Output Controllers
Single input / single output controllers have been the mainstay of classical control theory. Even though these controllers are very successful, they fail to address the non-linear nature of the robotics problem. These systems are very much like the classical feedback control strategies. Elsley proposed a feedback controller which uses a vision system for its feedback. Guez and Selinsky discuss a controller which learns its control rules by observing a teacher. This was demonstrated using the pole balancing problem.
2.4 Neuromorphic Controllers
Non-linear control is a difficult problem which can be made much easier with neural networks. Sanner and Akin describe a multivariable controller (they call it a Neuromorphic controller) which maps the system state to a control signal. This controller is updated with a payoff function feedback. Ten Dyke appears to take an interesting approach in which the process model and feedback error corrector are combined.
Psaltis et al. discuss controller architectures that learn the inverse control model of the system on-line. These models are used for open loop control, with feedback for learning. A similar structure has been proposed by Guez and Selinsky, called the HTAC (Human Trained Adaptive Controller). This was used to perform the non-linear control of the pole balancer problem. The highly non-linear problem of the hopping robot was examined by Helferty et al., in which they used feedback of a reinforcement signal to control learning of a neural network. This approach seemed to have good success.
2.5 Vision and Sensor Systems
Sensory integration has become an increasingly popular topic. Jorgenson has used Hopfield networks to remember the topography of a room, as recorded from sonar sensors. A path through the room is then found using a simulated annealing type of approach. Tolat and Widrow have discussed the use of the Adaline to control a pole balancer using visual inputs. Graf and LaLonde have done work with adaptive control of a manipulator which can glance at a point in space with 2D stereo vision, and then move to it. This method learns the workspace to avoid collisions, and corrects for kinematic variations over time. This method is based on Kohonen networks. Vision is used by Pabon and Gossard to determine a visual perspective orientation, to control a robot. Nagata et al. have developed a mobile robot which has visual, tactile and auditory sensors, which serve as a basis for reasoning and instinct networks. Miller discusses the use of vision and a CMAC paradigm to perform object tracking. Martinez et al. use Kohonen networks to topologically map space with two cameras. The space mappings are then used to suggest possible joint angles to obtain that position.
2.6 Novel Applications
Novel applications of neural networks for particular problems have appeared. Tsutsumi et al. discuss moving an elephant trunk-like manipulator (a 2D truss structure) towards a goal point through energy minimization with a Hopfield network. Tsutsumi and Matsumoto have also developed a method for finding the optimal path for a 2D snake manipulator through sets of obstacles using energy minimization with a Hopfield network. A rule and CMAC controller system was designed by Handelman et al. to control a tennis-like swing of a manipulator.
2.7 Inverse Kinematics (Including Inverse Jacobian)
Inverse Kinematics is relevant to other areas of control, either for deriving the set-points for controllers, or as an active part of an intelligent system. Previous neural network approaches to this problem have examined only limited regions of the robot work space. Early attempts by Josin, Josin et al., and Guez and Ahmad examine two degree of freedom problems. These were very successful, although they did not examine the problem of singularities mentioned by Guez and Ahmad. Josin et al. extended earlier work to higher dimensional space by examining a manipulator which had three degrees of freedom dedicated to position and two additional degrees of freedom devoted to end effector orientation. The robot was trained to generalize the inverse kinematics over a small wedge region in the middle of the workspace.
The work of Guez and Ahmad used polar coordinates, whereas the work of Josin et al. involved the use of cartesian coordinates. These previous studies do not report on the problems associated with training a neural network for a 3D manipulator which functions over the full spherical region of the robot workspace. Guez et al. explore this aspect slightly, but only for the 2D case and only within a specified quadrant. Josin et al. avoid this problem altogether by training their neural networks far away from the singularity points of the manipulator. None of the authors consider a closed kinematic linkage or compare the use of cartesian versus spherical coordinates.
The estimation of the Inverse Jacobian can improve the control scheme. Elsley uses the Inverse Jacobian with a vision system to reduce errors. The Jacobian is estimated by Yeung and Bekey. The Inverse Jacobian representation has the inverse kinematics as one of its internal components.
The inverse kinematics of an inverted pendulum were taught to a Hopfield network by Kitamura et al. Their model was created so that they could control a walking biped robot, as if it were an inverted pendulum. Sobajic et al. explored a controller which uses the current joint angles and the target in polar coordinates to produce the control signal. This controller must have a built-in model of the inverse kinematics, and thus is affected by the results presented here.
2.8 Summary of Neural Network Based Control
The network type and learning rule have a distinct effect upon the success of a method. The most popular method is backpropagation, used by many of the researchers surveyed above. This method is quite good for learning smooth internal mappings of functions. A second popular paradigm is the Kohonen network. Instead of learning an internal representation of a function, each neuron in the network remembers a feature of the mapping. This seemed to work well when attempting to use the networks for topological mapping, but was subject to noise when mapping continuous functions. One distinct advantage of the Kohonen map is the ability to restrict weight updates to particular neurons, instead of the whole network. Hopfield networks have been used by some researchers to “minimize the energy state of their solution”. Hopfield networks are not very suitable for real time control, which requires consistent, fast solution speed. Some researchers have used the Adaline. Another common network paradigm is the CMAC.
As an alternative to the standard networks, there has been a trend to try different approaches to problems. Some researchers developed learning rules of their own, and others used non-homogeneous network structures which give better results, but are less desirable because they do not hold true to biological models. Some other researchers have taken a novel approach to solving network problems by including external rule based controls. The rule based control allows the advantages of Neural and von Neumann computing schemes to be combined. When the problem is known to depend upon time, or on previous states, it can be useful to include time delays, to allow the network to take time into account.
There are several brief surveys available for neural networks in robotics. Kung and Hwang survey a number of different neural network paradigms and applications, with the intention of developing a VLSI support architecture for Neural Networks in robotics. Werbos identifies some of the main issues in neuroengineering, including a number of robotics issues.
3.0 Kinematics of Manipulators Using Artificial Neural Networks (ANNs)
The neural network will require some representation of the problem space. In this case the arm will be restricted to an elbow-up solution for the three degree of freedom manipulator. This restriction ensures that the manipulator is described by a one-to-one mapping. The robot will also not be trained over the base (x = y = 0). This prevents the mathematical singularity, where θ1 is undefined, from being introduced into the training set of the neural network. The kinematic equations are outlined in the appendices.
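The elbow-up restriction can be illustrated with a planar 2-link arm (an illustrative stand-in; the paper’s manipulator has three degrees of freedom, with equations in the appendices, and the sign convention below is an assumption). Fixing one branch of the ± choice makes the inverse mapping one-to-one, and the base point is rejected just as x = y = 0 is excluded from training:

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def elbow_up_ik(x, y):
    """Closed-form IK for a planar 2-link arm, restricted to a single
    elbow branch so that the mapping is one-to-one."""
    if x == 0 and y == 0:
        raise ValueError("base position excluded: joint angle undefined")
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target not reachable")
    t2 = math.acos(c2)  # keep only the positive branch (one elbow configuration)
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2), L1 + L2 * math.cos(t2))
    return t1, t2

def forward(t1, t2):
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))
```

Because only one branch is ever returned, a network trained on such pairs never sees two joint solutions for the same end effector position.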
3.1 The Neural Architecture Utilized
The network architecture used in this paper is a two-layer neural network, as shown in Appendix A. The activation function chosen for this application was the sigmoidal logistic function, which represents the continuous nature of the mapping. All networks used in this research consisted of 20 or 40 neurons in one hidden layer. This is a reasonable number, consistent with network sizes used by other researchers for similar problems. A bias neuron was connected to all neurons except for the input units. The bias neuron ensures that each processing neuron has a non-zero summation.
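A sketch of the forward pass of such a network follows. The logistic output activation and the small random initialization are assumptions, but the connection count (weights plus one bias connection per processing neuron) matches the figures quoted later in the paper (73 connections for a 3-10-3 network, 283 for 3-40-3):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_net(n_in, n_hidden, n_out, rng):
    # the bias neuron of the text corresponds to the b1, b2 terms here
    return (rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden),
            rng.normal(0, 0.1, (n_out, n_hidden)), np.zeros(n_out))

def net_forward(net, x):
    W1, b1, W2, b2 = net
    h = sigmoid(W1 @ x + b1)      # hidden layer, logistic activation
    return sigmoid(W2 @ h + b2)   # output layer (activation assumed)

def n_connections(n_in, n_hidden, n_out):
    # weights plus one bias connection per processing neuron
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

net = make_net(3, 20, 3, np.random.default_rng(0))
y = net_forward(net, np.array([0.1, 0.5, 0.9]))
```

The logistic output bounds each joint-angle output to (0, 1), so targets must be scaled into that range before training.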
Several parallel error compensation neural networks were trained to determine the correction factors for the main neural network. Essentially, each compensation network was successively trained to recognize the errors of the main neural net. The outputs of the main neural net and compensation nets were then summed to give a better estimate of the correct answer. This appears to be a new concept; however, earlier work by Kinser discusses using two different network paradigms to take advantage of the particular abilities of each network. This allows a network to find rough solutions to the problem while compensation nets are used for fine tuning. It will become clearer later, but a network tends to be more accurate if it consists of a main neural network (using at least 20 hidden neurons) plus several compensation networks with a similar number of neurons, than if the neurons of the compensation nets were incorporated into the main neural network (Figure 1). That is, given two architectures with a similar number of neural connections, one composed of a main network with several compensation nets and one with no compensation nets, the architecture with compensation networks achieves higher accuracy.
Figure 1 Neural architecture utilized (with two compensation networks).
3.2 Training and Testing Data Points
The continuous nature of the kinematic work space means that data points are plentiful. Marko and Feldkamp indicate that a plentiful supply of data points warrants a testing procedure which trains on one set of data points and tests on another, non-intersecting set. They also indicate that a reduction in noise while training will speed network convergence. To eliminate noise, data points were derived directly from the explicit kinematic equations. Network generalization is aided by training with data sets which represent all features of the problem. To represent all features of the kinematics problem, points were chosen to be evenly distributed throughout cartesian space for the serial link manipulator, and evenly distributed in joint space for the closed-link manipulator.
It was decided not to make the efficiency of training an issue for this paper. As a result, the networks were all trained with over 100,000 iterations. Figure 2 shows that the convergence of the network was almost complete when 40,000 iterations were complete. This would seem to indicate that the accuracies presented are the best accuracies obtainable with our methods. The dotted lines in Figure 2 indicate an estimate of what the learning curve should look like, but no data was collected for iterations between 0 and 10,000.
When measuring average joint angles, all errors were measured as an absolute value for all joints. The errors are expressed as both an average and as an RMS value. This seems valid when examining the histograms of errors, which indicate that the results are not normally distributed and resemble signal noise. The procedure of using the average error and the root mean square (RMS) error was used by Yeung and Bekey, as given in equation (1),

	average error = (1/n) Σ |error_i|,    RMS error = sqrt((1/n) Σ error_i²)    (1)

where n is the number of test points, and error_i is the absolute error of a joint angle.
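The two error measures can be computed directly; a minimal sketch:

```python
import math

def error_stats(errors):
    """Average and RMS of absolute joint-angle errors over n test points."""
    n = len(errors)
    abs_err = [abs(e) for e in errors]
    avg = sum(abs_err) / n
    rms = math.sqrt(sum(e * e for e in abs_err) / n)
    return avg, rms
```

The RMS measure weights large errors more heavily than the average, which is why both are reported when the error distribution is not normal.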
4.0 A Neural Network Solution for the Closed-Link Manipulator
A neural net was trained to approximate the value of the unknown angle (θ4), from the given angle (θ2) (see Appendix B for further derivations of the Kinematic Equations). The neural architecture consisted of a two-layer network, with one input plus a bias neuron, forty hidden neurons in the hidden layer, and one output neuron. To decrease the errors of the neural network solutions, one error compensation neural network with a construction similar to the main neural net was connected in parallel. After the main neural net was fully trained, the error compensation network was trained independently to correct for the errors of the main network. A more accurate solution to the problem was then achieved by summing the outputs of both networks.
In order for the results to be acceptable, the solution space of the problem should be continuous. If a discontinuity within the solution exists, then areas near the discontinuity do not generalize very well. For example, with the mechanism in Figure 3, a discontinuity will exist along the x-axis when the angular displacements change from +180° to -180°. To alleviate this problem, the range of input and output values for the training set was made to vary between 0° to 360°, inclusive. For the solution set, when θ2 is at its 0° position, the angular displacement of link 4 was also taken as 0°, regardless of the configuration of the mechanism.
The training set was also constrained to only one solution set so that the one-to-one mapping could be achieved. That is, the training set only consisted of either the positive (or negative) solution (± sign in equations B.1 and B.2 of Appendix B). In this paper, the positive solution was used (i.e. the apex of links 3 and 4 point “upwards”). Instead of gathering experimental information from an unknown mechanism, equation (B.1 of Appendix B) was used to generate 116 training points and also provide a means of evaluating the generalization capability of the neural network solution. The physical dimensions of the mechanism were chosen to be r1 = 1 unit, r2 = 2 units, r3 = 3 units, and r4 = 4 units.
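Appendix B is not reproduced here, but the standard Freudenstein closed-form solution for a four-bar linkage gives the same θ2 → θ4 relation and can be used to sketch the training-data generation (the branch choice and angle conventions below are assumptions; the link lengths are those given above):

```python
import math

r1, r2, r3, r4 = 1.0, 2.0, 3.0, 4.0   # link lengths from the text (r1 = fixed link)

def theta4(theta2):
    """Closed-form output angle of a four-bar linkage via the Freudenstein
    equation, keeping a single solution branch so the mapping is one-to-one.
    Angles are measured from the line of the fixed link (convention assumed)."""
    K1, K2 = r1 / r2, r1 / r4
    K3 = (r2**2 - r3**2 + r4**2 + r1**2) / (2.0 * r2 * r4)
    A = math.cos(theta2) - K1 - K2 * math.cos(theta2) + K3
    B = -2.0 * math.sin(theta2)
    C = K1 - (K2 + 1.0) * math.cos(theta2) + K3
    disc = B * B - 4.0 * A * C
    if disc < 0:
        raise ValueError("linkage cannot be assembled at this input angle")
    return 2.0 * math.atan2(-B - math.sqrt(disc), 2.0 * A)

# generate (theta2, theta4) training pairs over the full input revolution
training = [(math.radians(d), theta4(math.radians(d))) for d in range(0, 360, 3)]
```

With these dimensions the linkage satisfies the Grashof change-point condition (1 + 4 = 2 + 3), so the input link can rotate through the full 0° to 360° range used for training, and this branch gives θ4 = 0° when θ2 = 0°, matching the convention stated above.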
The results obtained by the main neural network alone produced an absolute average error of 0.66° with an RMS error of 0.75° for a set of test points. This would indicate that the network has generalized quite well. With the addition of the compensation network, the absolute average error was decreased to 0.60° with a corresponding RMS error of 0.72°. The success of these results gives rise to consideration of a more complex manipulator. The total number of neural connections, including the main neural network and the compensation network, was 242.
5.0 Neural Net Solution of Inverse Kinematics Using Cartesian Coordinates
An artificial neural network was constructed and trained to recognize the inverse kinematics of a three link manipulator. Cartesian coordinates of the end of the robot arm were used as the inputs and joint coordinates were used as the outputs for the network (see Appendix C). As a preliminary evaluation, a decision was made to train the network over one eighth of the volume of the work sphere. This was based on the premise that by limiting the volume of the training points, the most accurate inverse kinematics mapping could be acquired. If the training set had covered the entire work space, then the solution may have been poorer, with longer convergence times, because the network would be forced to generalize over a volume eight times larger. The number of training points used was 889 evenly spaced points within an eighth of the sphere of the work space. The case of training over the entire work space volume is given in the next subsection for spherical coordinates.
All configurations tested were the feedforward model using the sigmoidal activation function. One hidden layer was used with the number of hidden neurons set at 10, 20 or 40 neurons. Four compensation networks were attached in parallel with the main neural network and each compensation net had the exact same configuration as the original network. Inputs to each network were the x, y, and z cartesian coordinates with a bias neuron connected to all processing units. The robot joint coordinates, θ1, θ2, and θ3, were the outputs of the neural network.
For all of the data presented, the learning rate was initially set to 0.9, and the smoothing rate was initially set to 0.8. As the network converged, the learning and smoothing rates were slowly decreased so that the network could be fine tuned to reach the global minimum. The number of training iterations was set very high (on the order of hundreds of thousands). The training process was performed on a SUN™ Sparcstation 1, with training times on the order of several hours.
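The update rule implied here, backpropagation with a momentum ("smoothing") term and slowly decaying rates, can be sketched on a stand-in quadratic loss; the decay schedule and decay factors below are assumptions, since the paper does not give them:

```python
import numpy as np

def gradient(w):
    """Gradient of a toy quadratic loss with minimum at (1, -2);
    stands in for the backpropagated network gradients."""
    return 2.0 * (w - np.array([1.0, -2.0]))

w = np.zeros(2)
dw = np.zeros(2)                  # previous weight change, for momentum
lr, momentum = 0.9, 0.8           # initial rates from the text

for step in range(500):
    # weight update with momentum ("smoothing"), as in standard backprop
    dw = -lr * gradient(w) + momentum * dw
    w = w + dw
    if (step + 1) % 100 == 0:     # slowly decrease both rates for fine tuning
        lr *= 0.5
        momentum *= 0.9
```

The large initial rates move quickly through flat regions, while the decay suppresses the oscillation that momentum would otherwise cause near the minimum.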
An evenly distributed set of test points, which were not part of the training set, was presented to the network to evaluate the uniformity of the solutions. Figures 7 and 8 show error histograms for each joint angle. Figure 7 shows results with no compensation nets, and Figure 8 shows results with four compensation nets in parallel with the main neural net. The histograms indicate how often a particular error between a neural network solution and the desired value occurs. Figure 9 displays the inverse kinematic mapping of the test points on single planes (i.e. at constant z levels). The first plot shows the work space with the test points (evenly distributed). The points in the subsequent cases indicate the neural network’s actual positioning of the robot arm. The joint coordinates as determined by the neural network were applied to the forward equations to obtain the cartesian locations.
The results in Tables 1 and 2 present a good picture of the relationship between the number of neurons and the generalization capability. The use of error compensation networks appears to have a beneficial effect upon the network errors. The first three compensation networks tended to have the most effect, but subsequent compensation networks were not very influential.
In a comparison between the different sizes of the neural networks (10, 20 or 40 neurons in the hidden layer), the network with the greater number of hidden neurons performed better than the others. For the 3-10-3 network (73 neural connections), the average and RMS errors were 3.02° and 4.22°, respectively. The 3-20-3 network had average and RMS errors of 2.22° and 3.39°, and the 3-40-3 network had average and RMS errors of 1.89° and 2.16°. Increasing the number of neurons in the hidden layer thus produces higher accuracy, at the cost of significant increases in the number of neural connections (from 73 connections to 283 connections). Therefore, the use of neural networks for solving the inverse kinematics depends upon the specific problem: a designer is required to trade off network accuracy against the size of the neural network. It must be noted that no studies on the optimum number of hidden neurons versus network accuracy were performed; however, it is known that further increases in the number of hidden neurons may or may not result in increased accuracy. Comparison of the various architectures indicates diminishing returns for neuron investment: as the number of neurons was increased from 10 to 20 to 40 hidden neurons, the accuracy increased, but the improvements became more subtle.
Table 3 compares the results between different network architectures. Consider the case where one neural network was trained with 20 hidden neurons with one compensation net similar in construction. This architecture had a total of 286 neural connections. Comparing the result obtained from this network with another network with 40 hidden neurons and no compensation nets (283 neural connections), the network with 20 hidden neurons and one compensation net performed marginally better. The benefit of adding more compensation nets to the original main neural net becomes more noticeable in a comparison between the network architectures with a higher number of hidden neurons. For instance, the 20 wide, 3 compensation net architecture (a total of 572 neural connections) had a much more significant positive effect than the 40 wide, 1 compensation net architecture (a total of 566 neural connections). Although both architectures had similar numbers of neural connections, the architecture with more compensation nets performed much better. As a further example, consider the 20 wide, 5 compensation net architecture (858 neural connections) and the 40 wide, 2 compensation net architecture (849 neural connections). The architecture that was 20 wide with 5 compensation nets achieved better accuracy than the 40 wide, 2 compensation net architecture. These results indicate that augmenting a “stand alone” neural network (i.e. one with no compensation networks in the architecture) with the neurons from a compensation network produces marginal effects. By separating the “stand alone” network’s neurons into several compensation networks, a significantly increased accuracy could be obtained, compared to “stand alone” neural networks functioning independently with a similar number of neural connections.
Figure 9 shows the position of the end effector as determined by the neural network. The discontinuities in the diagrams of Figure 9 are obvious from the “curl” of the workspace; the points should lie inside the workspace and be evenly distributed. Points well within the training region fell within a tolerance of less than one percent of the workspace radius. As the manipulator approached the discontinuities, the errors approached ten percent, and the errors also increased as the edge of the training region was approached. This suggests that some technique would have to be developed to train the network to deal with, or avoid, the singularities. To correct these problems the network can be trained over the complete volume of the work space, excluding the regions where singularities occur. This is done in the subsequent experiment.
6.0 Neural Net Solution of Inverse Kinematics using Spherical Coordinates
Another artificial neural network was constructed and trained to recognize the inverse kinematics of the three link manipulator, with the spherical coordinates of the end of the robot arm as the inputs and the joint coordinates as the outputs. In this case, the network was trained over the entire volume of the work sphere, in order to determine the properties of the neural solution under more realistic conditions. The training set consisted of 1144 evenly spaced points within the entire volume of the work space.
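One grid resolution that yields 1144 evenly spaced points is 13 azimuth by 11 elevation by 8 radius samples; the sketch below is illustrative, since the paper's actual grid spacing and workspace radius are not reproduced here:

```python
import numpy as np

r_max = 1.0  # assumed workspace radius (not the paper's value)

# Evenly spaced samples in each spherical coordinate; the
# 13 x 11 x 8 resolution is an assumption chosen to give 1144 points.
phi1 = np.linspace(-np.pi, np.pi, 13)          # azimuth
phi2 = np.linspace(-np.pi / 2, np.pi / 2, 11)  # elevation
r = np.linspace(0.1 * r_max, r_max, 8)         # radius

# Cartesian product of the three sample sets: one row per training point.
grid = np.array(np.meshgrid(phi1, phi2, r)).reshape(3, -1).T
print(grid.shape)  # (1144, 3)
```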
A feedforward network with a single hidden layer of 10, 20 or 40 hidden neurons was used. Four error compensation networks were attached in parallel with the main network. Each compensation net had the same configuration as the main network. Inputs to each network were the φ1, φ2, and r spherical coordinates with a bias neuron connected to all processing units. The robot joint coordinates, θ1, θ2, and θ3, were the outputs of the neural network.
The learning rate was again initially set to 0.9, and the smoothing rate was initially set to 0.8. The learning and smoothing rates were slowly decreased as the network was trained to allow for fine tuning of the neural solutions.
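The training procedure is standard error back-propagation with a momentum (“smoothing”) term. The sketch below is a minimal version on placeholder data; the conventional momentum update is assumed, since the paper's exact update rule is not restated here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A 3-20-3 network, as in the spherical case; the training data here
# are random placeholders, not the paper's 1144-point set.
W1 = rng.normal(0.0, 0.5, (3, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.5, (20, 3)); b2 = np.zeros(3)
vels = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

X = rng.uniform(0.0, 1.0, (100, 3))      # toy inputs
T = rng.uniform(0.2, 0.8, (100, 3))      # toy targets

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2))

loss_before = loss()
eta, alpha = 0.9, 0.8                    # learning and smoothing rates
for epoch in range(500):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    d2 = (y - T) * y * (1.0 - y)         # output layer error signal
    d1 = (d2 @ W2.T) * h * (1.0 - h)     # hidden layer error signal
    grads = [X.T @ d1, d1.sum(0), h.T @ d2, d2.sum(0)]
    for p, v, g in zip((W1, b1, W2, b2), vels, grads):
        v *= alpha                       # smoothing (momentum) term
        v -= eta * g / len(X)
        p += v
    eta *= 0.997                         # slowly decrease for fine tuning
loss_after = loss()
```

Decaying the learning rate during training, as in the final line of the loop, allows the fine tuning described in the text.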
Tables 4, 5 and 6 give the statistical results of the neural solution for different architectures. Figures 10 and 11 display the error histograms of each joint coordinate: Figure 10 shows the results when no error compensation nets are used, and Figure 11 shows the results when four compensation nets are used. Figure 12 shows slices of the spherical volume of the robot work space and graphically displays the end effector position as obtained by the neural network. The larger “humps” indicate a greater amount of error associated with that particular region. As can be observed, the errors increased dramatically as the end effector approached the z-axis. As expected, other large errors occurred along the discontinuity at the negative x-axis, where the θ1 joint angle can be either +180° or -180°.
The error compensation networks had the same effect on accuracy versus network architecture that was observed in the previous discussion of Table 3. The diminishing returns effect of adding more neurons was also the same as for the previous cartesian case.
7.0 Discussion and Practical Implications
This paper has discussed the application of neural networks to three different Inverse Kinematics problems. The case of the closed kinematic linkage was extremely successful. The cases using cartesian and spherical coordinates had slightly higher margins of error. In all cases the solution results were improved by the use of compensation networks. These demonstrated the ability to achieve higher accuracy while reducing the connections in a network.
Observation of the results recorded in the tables suggests that the compensation networks improve the inverse kinematics estimation: as compensation networks are added, the tables show that the average and RMS errors decrease, and the error scatter is also reduced. Looking at the workspace results for the three degree of freedom manipulator cases, it can easily be seen that the spherical coordinates provide much better results than the cartesian coordinates, because their mapping is much less non-linear than the cartesian one. The closed kinematic linkage was very successful, but it mapped only a single variable function; the comparison with the three degree of freedom cases suggests that the error remains relatively small as the number of degrees of freedom is scaled up. The histograms all show that the error distribution is not necessarily normal. This is still acceptable, since the errors are all clustered around the 0 degree error point.
Neural network architecture affects accuracy. The previous observations show that as the number of neurons is increased, the accuracy also increases; unfortunately, adding more neurons yields diminishing returns in accuracy. The use of compensation networks allowed novel architectures in which the accuracy of the neural network could be increased. The trade-off between the number of neurons and the number of compensation networks appears to be problem specific when the cartesian and spherical cases are compared.
As the results indicate, this method of robotic control is accurate in the crucial areas of the workspace, and performs very well when the robot is working within small areas in the centre of the work space. These results agree with the work done by Josin, Josin et al., and Guez and Ahmad. In his papers on the planar manipulator, Josin used a 2D square in the centre of the workspace to avoid areas of discontinuity. He was able to obtain high accuracy by focusing on a subset of the inverse kinematics solution; however, had he approached the singularities within his workspace, his accuracy would surely have decreased. His research was also extended to a 3D wedge volume as the training region for a five degree of freedom manipulator, where the results were highly accurate within this volume. Again, if singular configurations were approached, the generalization capability would certainly decrease. The overall results of this paper were not as accurate as those of Josin et al., but within the regions he used the results were equivalent. The three dimensional cases presented in this paper outperformed the two dimensional cases of Guez and Ahmad.
From observation of Figures 9 and 12, the singularities appear to have a drastic effect upon the results. These singularities occur along the z-axis, where x = y = 0; when the arm is straight (near the edge of the workspace); and where the angular coordinate system wraps from +180 to -180 degrees. If the singularities are not dealt with, and the network is used as it has been trained here, some problems would be encountered.
• Switching from an elbow up to elbow down solution would be difficult, because the network does not approach the singularity very well, and it is only trained for a single configuration (elbow up).
• Placing the end of the manipulator over the origin of the robot will result in a singular configuration.
This paper has demonstrated that for a three link robot the inverse kinematics function can be approximately represented using an artificial neural network. The minimum absolute average joint errors achieved were of the order of one degree per joint. Clearly, a neural network approach is inappropriate for simple open loop estimation if high accuracy is required over the entire workspace; however, great potential exists for the development of adaptive, fault tolerant robot control strategies based on neural networks. The inherent inaccuracy of the neural network approach to inverse kinematics can be tolerated because there are two forms of motor control: a crude form of movement in which arms and limbs are placed in approximate locations, followed by a fine tuning of position with sensory feedback. If biological systems such as human beings are considered, it can be observed that effective motor control relies on a high degree of sensory feedback such as vision and balance.
The compensation networks have demonstrated their ability to effectively reduce the errors within a larger inverse kinematics estimator. This suggests that the concept of compensation networks could also be applied to traditional inverse kinematics methods, which have long been criticized for their inability to compensate for inaccurate models.
The benefits of the networks are quite distinct. If implemented, the largest inverse kinematics network in this paper could be run on standard neural network accelerator boards (available for IBM PCs, SUN workstations, and other computers). Some of these accelerators run at 10 million connections per second (a few run above 20 million connections per second). At that rate, the architecture 40 neurons wide with 5 compensation nets could be processed every 0.17 ms, or about 5900 times per second. The software written for this research was able to process the most complex network in this paper about 150 times per second. The network could have been trained for any manipulator, which indicates that it is a good general technique.
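The timing figure follows directly from the connection counts: the largest architecture is a 3-40-3 main network (283 connections, counting bias connections) plus five compensation nets of the same size.

```python
total_connections = 6 * 283        # main net plus 5 compensation nets
rate = 10_000_000                  # connections per second (accelerator)
seconds_per_pass = total_connections / rate
passes_per_second = 1.0 / seconds_per_pass
print(f"{seconds_per_pass * 1e3:.2f} ms per pass, "
      f"{passes_per_second:.0f} passes per second")
```

This gives 0.17 ms per pass and about 5889 passes per second, which rounds to the 5900 figure quoted above.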
If neural networks are to be implemented as robotic controllers, there are a few points to consider. If a neural network controller is trained for a specific manipulator, the network could be saved; when a new controller is attached to a manipulator, a pretrained network could then be loaded. Sample points may be used for occasional updates to the controller, achieving some level of adaptive control.
8.0 Research Issues
Although neural networks may not be suitable for precise robotic control, they display much potential. Neural networks seem to produce errors similar to those a human would make if the feedback senses were not used for error correction. This suggests that such neural network solutions could be used in a robotic controller that models human flexibility, instead of attempting to obtain superhuman capabilities. This could be done by incorporating a network that estimates the inverse kinematics into a feedback system. This system would first estimate the position, determine the error (possibly from a vision system), and then have a second network correct this error. Neural controllers may also be applied to autonomous robots such as “walking” machines, where high accuracy is unimportant. Such an approach would offer a solution to the problem of robot coordination.
Some methods may be researched to help improve the use of neural networks in the inverse kinematics problem. The network could make use of the orientation of the end effector to determine whether an elbow up/elbow down solution or left arm/right arm solution is the best configuration. This network does not provide any estimate of the error. To help detect errors, a neural network model of the forward kinematics could be used to take the inverse kinematics output, and regenerate the x, y, z positions. From this, the error could be used as a confidence measure for the solution.
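A sketch of this confidence measure, using the forward kinematic equations directly in place of a forward-kinematics network; the planar two-link arm and unit link lengths are illustrative stand-ins, not the manipulator of Appendix C:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths

def forward(theta1, theta2):
    """Forward kinematics of a planar two-link arm."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def confidence(target_xy, joint_estimate, tol=0.01):
    """Regenerate the end effector position from the estimated joint
    coordinates; the residual serves as a confidence measure."""
    residual = np.linalg.norm(forward(*joint_estimate) - target_xy)
    return residual, residual < tol

# The exact solution for the target (1, 1) is (0, pi/2); a perturbed
# estimate produces a residual that flags the solution as suspect.
residual, ok = confidence(np.array([1.0, 1.0]), (0.05, np.pi / 2))
```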
Neural network solutions are not suited for high precision robotic applications; however, this approach may be useful for generalized robot control such as dynamics, resolved rate control, path planning and task planning. This method could also be used in conjunction with existing methods. If an exact solution is required, and an iterative inverse kinematics method must be used, then the neural networks may be used to provide a good initial guess. Another implication of artificial neural networks is the ability to solve previously unknown or intractable problems. Algorithms that require complex computing power for robotic control could be simplified by implementing an artificial neural network controller.
In this paper, the convergence times of the training algorithms have been neglected. For a neural network controller which must adapt in real-time to changes in the system parameters, faster training algorithms will be needed. Faster training algorithms, such as those now coming into use, will have to be investigated. These algorithms appear to obtain higher convergence rates by updating only some connections within the network, as opposed to a complete update of all the neural connections. Suddarth et al. and Yu and Simmons have found that by training the network for additional outputs which correlate to the internal representations the network must learn, convergence is hastened and accuracy is improved. Current problems with training speeds are related to the use of software simulations. Progress in speed has produced systems which are hundreds or thousands of times faster. One example is the work done by Pomerleau et al., who obtained two hundred times more throughput than the simulations presented in this paper; this would mean complete network training within minutes.
Future work could also focus on the application of alternate neural network paradigms. One such paradigm, devised by Savic and Tan, is able to map highly non-linear systems. Another approach is to apply a combination of different network types to gain the benefits of both. The Hopfield net might show some promise for solving redundant manipulator problems by using an energy formulation of the manipulator state. The Kohonen net could also be applied to this problem, and another consideration is the application of rules to the network architecture. Implementation of a neuromorphic inverse kinematics approach is not entirely practical until sensors become available which can measure the coordinates of the manipulator endpoint in world coordinates.
These approaches can provide the many benefits obtained by combining AI and control, or by smart control systems.
 Atkeson, C. G., and Reinkensmeyer, D. J., Using Associative Content-Addressable Memories To
Control Robots. Proc. of the IEEE Intern. Conf. on Robotics and Automation, Scottsdale, Az.,
 Chen, C.C., and Pao, Y., Learning Control with Neural Networks. Proc. of the 1989 IEEE International
Conference on Robotics and Automation, Vol. 3, pp 1448-1453, 1989.
 Chen, F. C., Back-propagation Neural Network for Nonlinear Self-tuning Adaptive Control. Proc. of the
IEEE Intern. Symp. on Intelligent Control, Albany, NY, USA, September 1989.
 Devore, J. L., Probability & Statistics for Engineering and the Sciences. Brooks/Cole Monterey, Ca., USA,
 Elsley, R. K., A Learning Architecture for Control Based on Back-propagation Neural Networks.
Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Fu, K. S., Gonzalez, R. C., and Lee, C. S. G., Robotics, Control, Sensing, Vision, and Intelligence.
McGraw-Hill Book Company, New York, NY, USA, 1987
 Graf, D. H., and LaLonde, W. R., The Design of An Adaptive Neural Controller for Collision-Free
Movement of General Robot Manipulators. Proc. of the INNS, Boston, Mass. USA, September 1988.
 Graf, D. H., and LaLonde, W. R., A Neural Controller For Collision-Free Movement of General
Robot Manipulators. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA,
 Guez, A., and Ahmad, Z., Solution to the Inverse Kinematics Problem in Robotics by Neural Networks.
Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Guez, A., and Ahmad, A., Solution to the Inverse Kinematics Problem in Robotics by Neural Networks.
Proc. of the INNS, Boston, Mass., USA, September 1988.
 Guez, A., Eilbert, J., and Kam M., Neuromorphic Architecture For Adaptive Robot Control: A
Preliminary Analysis. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1987.
 Guez, A., and Selinsky, J., A Neuromorphic Controller with a Human Teacher. Proc. of the IEEE Intern. Conf.
on Neural Networks, San Diego, Ca., USA, 1988.
 Guez, A., and Selinsky, J., A Trainable Controller Based on Neural Networks. Proc. of the INNS, Boston,
 Guyon, I., Neural Network Systems. Proc. of the INME Symposium, Lausanne, Switzerland.
 Guyon, I., Poujard, I., Personnaz, L., Dreyfus, G., Denker, J., and LeCun, Y., Comparing
Different Neural Architectures for Classifying Handwritten Digits. Proc. of Intern. Jt. Conf. on Neural
Networks (IJCNN), Washington, DC, USA, 1989.
 Handelman, D. A., Lane, S. H., and Gelfand, J. J., Integrating Neural Networks and Knowledge-Based
Systems For Robotic Control. Proc. of the IEEE Intern. Conf. on Robotics and Automation, Scottsdale, Az.,
 Helferty, J. J., Collins, J. B., and Kam M., A Neural Network Learning Strategy For The Control
of a One-Legged Hopping Machine. Proc. of the IEEE Intern. Conf. on Robotics and Automation, Scottsdale,
 Hopfield, J. J., Neural Networks and Physical Systems with Emergent Collective Computational
Abilities. Proc. of the Nat. Acad. of Sciences, vol. 79, pp 2554 - 2558.
 Jorgensen, C., Neural Network Representation of Sensor Graphs in Autonomous Robot Path Planning.
Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1987.
 Josin, G., Neural-Space Generalization of a Topological Transformation. Biological Cybernetics, vol.59,
 Josin, G., Neural Network For Electrical and Computer Engineering. Proc. of Conf. on Electrical and
Computer Engineering, Vancouver, British Columbia, Canada, November 1988.
 Josin, G., Charney, D., and White, D., A Neural-Representation of an Unknown Inverse Kinematic
Transformation. Proceedings nEuro 88 - First European Conference on Neural Networks, Paris, France,
 Josin, G., Charney, D., and White, D., A Robot Control Strategy Using Neural Networks. Proc. of the INNS,
Boston, Mass., USA, September 1988.
 Josin, G., Charney, D. and White, D., Robotic Control Using Neural Networks. Proc. of the IEEE Intern. Conf.
on Neural Networks, San Diego, California, USA, July 1988.
 Karsai, G., Andersen, K., Cook, G. E., and Ramaswamy, K., Dynamic modelling and control of Nonlinear
Processes Using Neural Network Techniques. Proc. of the IEEE Intern. Symp. on Intelligent Control, Albany,
 Kawato, M., Furukawa, K., and Suzuki, R., A Hierarchical Neural-Network Model for control and
Learning of Voluntary Movement. Bio. Cybern., vol. 57,pp 169 - 185, 1987.
 Kawato, M., Setoyama, T., and Suzuki, R., Feedback Error Learning of Movement by Multi-Layer Neural
Network. Proc. of the INNS, Boston, Mass., USA, September 1988.
 Kawato, M., Uno, Y., and Isobe, M., A Hierarchical Model for Voluntary Movement and its
Application to Robotics. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1987.
 Kinser, J. M., Caulfield, H. J., and Hester, C., Error-Correcting Neural Networks. Proc. of the INNS,
Boston, Mass., USA, September 1988.
 Kitamura, S., Kurematsu, Y., and Nakai, Y., Application of the Neural Network for the Trajectory
Planning of a Biped Locomotive Robot. Proc. of the INNS, Boston, Mass., USA, September 1988.
 Kohonen, T., Self-organized Formation of Topologically Correct Feature Maps. Biological Cybernetics,
 Kung, S. Y., and Hwang, J. N., An Algebraic Analysis for Optimal Hidden Units Size and Learning
Rates in Back-Propagation Learning, Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego,
 Kung, S. Y., and Hwang, J. N., Neural Network Architectures for Robotic Applications. IEEE
Trans. on Robotics and Automation, vol. 5., no. 5, pp. 641 - 657, October 1989.
 Kuntze, H. B., Position Control of Industrial Robots - Impacts, Concepts and Results in Robot Control
1988 (SYROCO’88) (ed. U. Rembold), Pergamon Press, 1989.
 Kuperstein, M., and Wang, J., Neural Controller for Adaptive Movements with Unforeseen Payloads.
IEEE Trans. on Neural Networks, vol. 1, no. 1, March 1990.
 Lee, D.M.A., Jack, H., ElMaraghy, W.H., and Buchal, R.O., Error Compensation Networks for Feedforward
Neural Networks, to be published.
 Lippman, R. P., An Introduction to Computing with Neural Nets. IEEE ASSP Magazine, vol. 4
 Marko, K. A., and Feldkamp, L. A., Automotive Diagnostics Using Neural Networks. Tutorial presented on
Neural Networks: Opportunities and Applications in Manufacturing, Detroit (Novi Hilton), Mich, USA, April
 Martinetz, T. M., Ritter, H. J., Schulten, K. J., Three-Dimensional Neural Net for Learning Visomotor
Coordination of a Robot Arm. IEEE Trans. on Neural Networks, vol. 1, no. 1, March 1990.
 Miller, W. T., Real Time Learned Sensor Processing and Motor. Proc. of the INNS, Boston, Mass., USA,
 Miller, W. T., Hewes, R. P, Glanz, F. H., and Kraft, L. G., Real-Time Dynamic Control of an Industrial
Manipulator Using a Neural-Network-Based Learning Controller. IEEE Trans. on Robotics and
Automation, vol. 6, no.1, pp 1 - 9, February 1990.
 Minsky, M. L., and Papert, S. A., Perceptrons. MIT Press, 1969.
 Moore, W. R., Conventional Fault-Tolerance and Neural Computers in NATO ASI Series, Vol. F41,
Neural Computers (eds. R. Eckmiller, and Ch. v. d. Malsburg), Springer-Verlag Berlin Heidelberg, 1988.
 Murugesan, S., Application of AI to Real-Time Intelligent Attitude Control of a Spacecraft. Proc. of the
IEEE Intern. Symp. on Intelligent Control, Albany, NY, USA, September 1989.
 Nagata, S., Kimoto, T., and Asakawa, K., Control of Mobile Robots With Neural Networks. Proc.
of the INNS, Boston, Mass., USA, September 1988.
 Narendra, K. S., and Parthasarathy, K., Identification and control of Dynamical Systems Using
Neural Networks. IEEE Trans. on Neural Networks, vol. 1, no. 1, March 1990.
 Neural Networks. a collection of articles in Byte, pp 216 - 245, August 1989.
 Pabon, J., and Gossard. D., The Role of Hidden Layers in Learning Motor Control in Autonomous
Systems. Proc. of the INNS, Boston, Mass., USA, September 1988.
 Pomerleau, D. A., Gusciora, G. L., Touretzky, D. S., and Kung, H. T., Neural Network Simulation at
Warp Speed: How We Got 17 Million Connections Per Second. Proc. of the IEEE Intern. Conf. on Neural
Networks, San Diego, Ca., USA, 1988.
 Pourboghrat, F., and Sayeh, M. R., Neural Network Learning Controller for Manipulators. Proc. of the INNS,
Boston, Mass., USA, September 1988.
 Psaltis, D., Sideris, A., Yamamura, A., Neural Controllers. IEEE First Intern. Conf. on Neural
Networks, vol. 4, pp 551 - 558, 1987.
 Ranky, P. G., and Ho, C. Y., Robot Modelling: Control and Applications with Software.
IFS (Publications) Ltd., UK, 1985.
 Reber, W. L., and Lyman, J., An Artificial Neural System Design For the Rotation and Scale
Invariant Pattern Recognition. IEEE First Intern. Conf. on Neural Networks, vol. 4, pp 277 - 283,
 Ritter, H., and Schulten, K., Kohonen’s Self-Organizing Maps: Exploring Their Computational
Capabilities. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Ritter, H., and Schulten, K., Topology Conserving Mappings for Learning Motor Tasks. AIP Conference
Proceedings 151 in Neural Networks For Computing (ed. J. S. Denker), 1986.
 Rumelhart, D. E., Hinton, G. E., and Williams, R. J., Learning Internal Representations by Error Propagation.
in Parallel Distributed Processing (eds. D. E. Rumelhart and J. L. McClellend) vol. 1, MIT Press, 1986.
 Rumelhart, D. E., and McClelland, J. L., Parallel Distributed Processing. vol. 1, MIT Press,
 Sanner, R. M., and Akin, D. L., Neuromorphic Regulation of Dynamic Systems Using Back Propagation.
Proc. of the INNS, Boston, Mass., USA, September 1988.
 Savic, M., and Tan, S. H., A New Class of Neural Networks Suitable For Intelligent Control. Proc. of the IEEE
Intern. Symp. on Intelligent Control, Albany, NY, USA, September 1989.
 Sobajic, D. J., Lu, J. J., and Pao, Y. H., Intelligent Control of the Intelledex 605T Robot Manipulator.
Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Suddarth, S. C., Sutton, S. A., and Holden, A. D. C., A Symbolic-Neural Method for Solving Control
Problems. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Tawel, R., and Thakoor, A. P., Neural Networks For Robotic Control. Proc. of the INNS, Boston,
 Ten Dyke, R. P., Neural Networks and Adaptive Control. Tutorial presented on Neural Networks:
Opportunities and Applications in Manufacturing, Detroit (Novi Hilton), Mich. USA, April 1990.
 Tolat, V.V., and Widrow, B., An Adaptive "Broom Balancer" with Visual Inputs. Proc. of the IEEE Intern.
Conf. on Neural Networks, San Diego, Ca., USA, 1988.
 Tourassis, V. D., and Ang, M. H., A Modular Architecture for Inverse Robot Kinematics.
IEEE Trans. on Robotics and Automation, vol. 5, no. 5, pp 555 - 568, October 1989.
 Troudet, T., and Merril, W. C., Neuromorphic Learning of Continuous-Valued Mappings in the
Presence of Noise. Proc. of the IEEE Intern. Symp. on Intelligent Control, Albany, NY, USA, September 1989.
 Tsutsumi, K., Katayama, K., and Matsumoto, H., Neural Computation for Controlling the Configuration
of 2-Dimensional Truss Structure. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA,
 Tsutsumi, K., and Matsumoto, H., Neural Computation and Learning Strategy for Manipulator Position
Control. Proc. of the IEEE Intern. Conf. on Neural Networks, San Diego, Ca., USA, 1987.
 Wasserman, P. D., Neural Computing - Theory and Practice. Van Nostrand Reinhold, New York,
 Werbos, P. J., Backpropagation and Neurocontrol: A Review and Prospectus. Proc. of the Intern. Jt.
Conf. on Neural Networks, Washington, DC, USA, 1989
 Werbos, P. J., Consistency of HDP Applied To A Simple Reinforcement Learning Problem. Neural
Networks, March 1990 (in press).
 Werbos, P. J., Neural Networks for Control and System Identification. Proc. of the IEEE/CDC (Tampa, Fla.,
USA meeting), New York, NY, USA, 1989.
 Werbos, P. J., Neural Networks For Robotics and Control in WESCON/89 Conference Record (IEEE),
North Hollywood, Ca., USA, 1989.
 Widrow, B., The Original Adaptive Broom Balancer. Proc. of the IEEE Intern. Symp. on Circuits and
Systems, Philadelphia, Pa., USA, May 1987.
 Yeung, D. Y., and Bekey, G. A., Using a Context Sensitive Learning Network for Robot Arm
Control. Proc. of the IEEE Intern. Conf. on Robotics and Automation, Scottsdale, Az, USA,
 Yu, Y. H., and Simmons, R. F., Extra Output Biased Learning. Proc. of the Intern. Jt. Conf. on Neural Networks.
Appendix A Artificial Neural Networks
The basic building block of an artificial neural network is the neuron. In multi-layer feedforward neural nets, each neuron is connected to other neurons via a weighted communication line. During the training phase, the connection weights between neurons are adjusted by a predetermined algorithm. The neuron, uj, receives inputs, opi, from neuron ui while the network is exposed to input pattern p. Each input is multiplied by the connection weight, wij, where wij is the connection weight between neurons ui and uj. The connection weights correspond to the strength of the influence of each of the preceding neurons. After the inputs have been multiplied by the respective weights, their resulting values are summed. Included in the summation is a bias term θj used to offset the basic level of the input to the activation function, f(netpj). In order to establish the correct bias value θj, the bias term appears as an input from a separate neuron with a fixed value (usually a neuron with a constant value of +1). Each neuron requiring a bias value will be connected to the same bias neuron. The bias’ connection weights are then self-adjusted as the other neurons learn. Figure A.1 shows the structure of the basic neuron.
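The computation described above can be written out as follows; the sigmoid activation is an assumption here, since the activation function f is not restated in this appendix:

```python
import math

def neuron(inputs, weights, bias_weight):
    """One artificial neuron: the weighted sum of the inputs o_pi,
    plus the bias term (modelled as a +1 input with its own weight),
    passed through the activation function f."""
    net = sum(w * o for w, o in zip(weights, inputs)) + bias_weight * 1.0
    return 1.0 / (1.0 + math.exp(-net))   # f(net): assumed sigmoid

# Three inputs, three connection weights, and one bias weight.
out = neuron([0.5, -0.2, 0.9], [0.8, 0.4, -0.1], bias_weight=0.3)
```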
Figure A.1 Basic structure of the artificial neuron.
The circular nodes in Figure A.2 represent the basic processing neurons. The input neurons are shown as squares because they act only as terminal points (i.e. opi = input to the network). The input layer does not process information; hence, it is not considered part of the structure and is referred to as layer 0. A simple one-layer network can effectively map many sets of inputs to particular outputs. In pattern recognition problems, this depends upon the linear separability of the problem; in pattern matching problems, it depends upon the continuity and topography of the problem domain. A multi-layer network consists of an input layer of neurons, one or more hidden layers, and an output layer. It has been shown that with enough neurons in the hidden layer any continuous function may be learned. Feedforward networks are those that propagate their inputs forward through the net: each neuron accepts data only from neurons in preceding layers. In this model, a set of inputs is applied to the network and multiplied by a set of connection weights. All of the weighted inputs to a neuron are then summed, and an activation function is applied to the summed value. This activation level becomes the neuron’s output and can be either an input to other neurons, or an output of the network. Learning in this network is supervised, based upon training vectors (an input and its corresponding desired output). When a training vector is presented to the neural net, the connection weights are adjusted to minimize the difference between the desired and actual outputs. After the network is adequately trained, it should produce outputs that closely match the desired outputs for the training inputs.
Thorough reviews of neural networks, both introductory and advanced, are available in a number of references.
Figure A.2 Feedforward neural networks.
Appendix B Manipulators Containing Closed Kinematic Loops
Consider the four bar mechanism as shown in Figure B.1. This kinematic chain consists of closed links which are used in many industrial robots such as the ASEA Robot, Cincinnati Milacron T3, Bendix MA 510, FANUC Robot, and the YASUKAWA Motoman. Construction of robot end effectors may also be influenced by such closed-link mechanisms. By utilizing conventional techniques, the kinematics can be derived and are given in equations (B.1), (B.2), and (B.3). The ± sign indicates the presence of two solutions for a given value of θ2. The apex of links 3 and 4 will either point “upwards” or “downwards”.
Figure B.1 Four bar mechanism.
Appendix C Serial Link Manipulators in Three Dimensional Space
A serial link manipulator consists of a series of rigid links connected together in an articulated chain; the PUMA robots are an example. Consider the three link manipulator shown in Figure C.1. Using the Denavit-Hartenberg notation for describing the relative positions and orientations of objects in three dimensional space, a set of equations can be derived which represent the kinematics of the manipulator. Coordinate frames are attached to each link of the manipulator, and homogeneous transformation matrices (referred to as A matrices) describe the location and orientation of each frame with respect to the reference frame. The forward kinematic equations determine the user coordinates from a given set of robot joint coordinates; the more difficult inverse case determines the robot joint coordinates from the given user coordinates. The forward and inverse kinematic equations are given in equations (C.1) and (C.2). The ± sign in θ2 indicates the general configuration of the robot arm: the positive sign (+ve) corresponds to an elbow down solution, and the negative sign (-ve) corresponds to an elbow up solution.
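A sketch of the A matrix construction; the standard Denavit-Hartenberg convention is assumed, and the numeric link parameters below are placeholders rather than the Figure C.1 values:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous A matrix for one link in Denavit-Hartenberg
    notation: rotation theta about z, translation d along z,
    translation a along x, rotation alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [ 0,       sa,       ca,      d],
        [ 0,        0,        0,      1],
    ])

# Chain the A matrices to obtain the end effector frame; the
# (theta, d, a, alpha) tuples below are illustrative placeholders.
params = [(0.3, 0.5, 0.0, np.pi / 2),
          (0.7, 0.0, 1.0, 0.0),
          (-0.4, 0.0, 0.8, 0.0)]
T = np.eye(4)
for theta, d, a, alpha in params:
    T = T @ dh_matrix(theta, d, a, alpha)
end_effector = T[:3, 3]   # position of the end effector frame
```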
Figure C.1 A typical three link manipulator.