Each of the three planning strategies presented in this thesis has shown a different degree of success. The Maximum Velocity planner was completely successful. The Maximum Acceleration Controller was also successful, but had a tendency to overshoot the goal position. The Maximum Torque Controller was a success, managing to estimate the optimal control in many cases.
The Maximum Velocity Controller was almost assured of success, since it only had to solve a linearly separable problem: given the distance to the goal, set the velocity to one of the velocity limits or to zero. This mapping is very simple, and it avoids both the non-linearities of the robot and the time evolution of the path.
The Maximum Acceleration Controller had to deal with a more complex problem. If the acceleration was not switched at the proper instant, the robot would undershoot or overshoot the goal. Thus, when this method worked, it showed that the neural network could perform control based on the velocity state of the system, and in turn that the network could deal with a path that evolves over time.
The non-linearities of the robot were finally confronted when the Maximum Torque Controller was used. This controller had to determine not only the effect of applied torque through time, but also the effect one joint's torque would have on the other joint. The method did work, although the solutions were sub-optimal and sometimes inconsistent.
The results presented in this thesis confirm the proof of concept: the use of neural networks for estimating optimal motions. They show the ability of a neural network to learn a mapping of control vectors and to generalize to untaught solutions. These methods also reveal some remaining problems to be overcome.
For example, all the controllers used bang-bang control. This form of control requires sudden, hard switching, which in turn requires the neural network to produce discontinuous functions. This is unnatural for neural networks, so a continuous, smooth switching function is desired. This could be achieved by adding further constraints to the various controllers. In particular, a jerk constraint could be added to the Maximum Torque Controller. This would smooth the solution space and result in better estimation by the neural networks, although it would introduce a time dependency into the controller discussed here. That dependency could be resolved by adding acceleration as one of the system state inputs.
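The contrast between hard and smoothed switching can be sketched in a few lines. The tanh switch here is only a stand-in for a properly jerk-constrained profile, and the function names and gains are illustrative, not part of the thesis's controllers:

```python
import math

def bang_bang(error, u_max=1.0):
    # Hard switching: the output jumps discontinuously at error = 0,
    # which forces the network to approximate a step function.
    return u_max if error > 0 else -u_max

def smooth_switch(error, u_max=1.0, sharpness=5.0):
    # Smooth approximation: continuous everywhere, so it is a more
    # natural target for a sigmoid-based network to learn.
    return u_max * math.tanh(sharpness * error)

# Far from the switching point the two agree closely;
# near zero the smooth version avoids the discontinuity.
print(bang_bang(2.0), smooth_switch(2.0))
print(bang_bang(0.01), smooth_switch(0.01))
```

Increasing `sharpness` trades smoothness for fidelity to the original bang-bang law.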
A neural network based on the associative memory paradigm would also provide good results. Such a network would be directly scalable to more joints, would learn quickly, and would provide a more evenly distributed control response over the entire training region. The only drawback would be the 'switching noise' these networks generate when used for control; this noise occurs when control is transferred from one neuron to another.
As mentioned before, the neural network was only trained for the region in which the robot was working. By extending the training region so that the working region is comparatively smaller, the network would avoid some of the problems seen in the previous chapter, which occurred as the training boundaries were approached.
In the future, a better set of motion optimization routines could be developed (or located) to provide more nearly optimal paths. The optimization routines used here had flaws (they did not produce optimal paths), and these flaws were identified in the neural network results as well. Better routines would allow the neural network to learn the optimal motions from more 'noise free' data.
The neural network simulator presented in this thesis is slow, running at about 100,000 connections per second, depending upon the computer used. The networks used in these problems have from 102 to 142 connections, so the simulator can evaluate the network roughly 1,000 times per second. Much faster algorithms and hardware are available. Some neural network accelerators run at 10 million connections per second, and a few run above 20 million connections per second; for the network architectures in this thesis, this would make up to 200,000 network refreshes per second possible. A newly introduced neurocomputer from Hitachi runs at 2.3 billion connections per second [Wall Street Journal, 1990], making a speed of roughly 10,000,000 refreshes per second possible.
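These throughput figures follow from simple arithmetic, sketched below. The network sizes are taken from the text; the helper function is ours, and one refresh is assumed to evaluate every connection once:

```python
def refreshes_per_second(connections_per_second, network_connections):
    # One network "refresh" evaluates every connection once.
    return connections_per_second / network_connections

# Figures quoted in the text, for networks of 102 to 142 connections:
print(refreshes_per_second(100_000, 102))        # simulator: ~1,000 per second
print(refreshes_per_second(20_000_000, 102))     # fast accelerator: ~200,000 per second
print(refreshes_per_second(2_300_000_000, 142))  # Hitachi figure: over 10 million per second
```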
The research presented in this thesis would also allow a good initial guess for optimization routines. In effect, a first guess is generated by the neural network and then refined by a numerical method (such as optimization). A similar idea was suggested by Guez et al. for iterative numerical problems in robotics. Their research could also be expanded to include more sophisticated systems.
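A minimal illustration of the guess-then-refine idea, using a made-up one-dimensional cost function in place of the real motion optimization routine; the coarse starting value plays the role of the network's estimate:

```python
# Illustrative cost with its minimum at t = 2.0; a real application
# would minimize path time or energy over the switching parameters.
def cost(t):
    return (t - 2.0) ** 2 + 0.1 * (t - 2.0) ** 4

def d_cost(t):
    return 2 * (t - 2.0) + 0.4 * (t - 2.0) ** 3

def d2_cost(t):
    return 2 + 1.2 * (t - 2.0) ** 2

t = 1.7               # coarse estimate, as a trained network might supply
for _ in range(5):    # a few Newton steps refine it to the optimum
    t -= d_cost(t) / d2_cost(t)
print(round(t, 4))    # converges to 2.0
```

Because the refinement starts near the answer, only a handful of iterations are needed, which is the practical benefit of the neural network's first guess.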
An approach to control that could use the results of this thesis is based on control estimates. This is a crude form of movement in which arms and limbs are placed in approximate locations, and a fine tuning of position with sensory feedback is then performed [Young and Gekey, 1989]. Biological systems, such as human beings, show that effective motor control relies on a high degree of sensory feedback, such as vision and balance. This suggests a method of application in which the neural network approximates the path for large movements, and sensory feedback is then used for the small moves at the end of a path. The figure below shows the Maximum Torque Controller acting as a standard controller, with the set point following the circle shown.
The industrial application of neural networks for optimal motion control could also be examined. If a neural network controller is trained for a specific manipulator, the network weight values can be saved. When a new controller is attached to a manipulator, a pretrained set of neural network weights could be loaded. During operation, sample points could be used for occasional backpropagation updates to the controller weights. This could result in equipment that learns on-line to compensate for mechanical wear, or for damage, over time. This approach would require the development of an on-line learning algorithm.
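The load-then-adapt scheme could be sketched as follows. The class name, weight file, and single linear layer are purely illustrative stand-ins for the full backpropagation controller described in this thesis:

```python
import numpy as np

class OnlineController:
    # Hypothetical controller: pretrained weights are loaded from a
    # file when available; otherwise training starts from zero.
    def __init__(self, weight_file=None):
        self.w = np.load(weight_file) if weight_file else np.zeros(2)

    def output(self, state):
        return float(self.w @ state)

    def update(self, state, target, lr=0.01):
        # One gradient step on a single sampled point, as suggested
        # for compensating mechanical wear during operation.
        error = self.output(state) - target
        self.w -= lr * error * state

ctrl = OnlineController()
state, target = np.array([1.0, 0.5]), 0.8
for _ in range(500):          # occasional on-line corrections accumulate
    ctrl.update(state, target)
print(round(ctrl.output(state), 2))   # drifts toward the sampled target
```

After adaptation, `ctrl.w` could be saved again, giving the next installation a better starting point.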
The configuration of the presented controllers does not necessarily lead to their direct introduction into a system. The actuators must be driven by their own controllers, which typically provide current or voltage in either a switched or a continuously varied form. Thus, the neural network controller may be designed to generate outputs that satisfy the inputs appropriate to the actuator controller. These actuator control inputs could be position, velocity, acceleration, torque, current, voltage, etc.
This form of control is the most direct. If the controller were only capable of acceleration control, it would be necessary to integrate the equations of motion to determine the inputs to the controller, which is obviously not desired.
The Maximum Velocity Controller is quite unrealistic, except that some actuators are limited by velocity constraints. Here the input to the actuator controller is typically a velocity. It is not practical to use acceleration or torque control in combination with velocity-limited actuator controllers, but it is practical to drive a velocity actuator controller with a velocity planner.
Collision avoidance with neural networks admits several approaches. The most obvious is to teach the robot planner to avoid the areas of space occupied by obstacles. This is suited to a fixed workspace, and should be quite successful.
It is also possible to teach robots to avoid obstacles when they are detected. If the obstacles must be 'hit' before they can be avoided, however, the robot will not be working at peak efficiency. Thus, a logical extension of this work would be a robot that avoids a collision when detected, remembers it, and avoids it in the future. This is more complicated, and requires the neural network to learn a map of space, but it has the potential advantage of allowing the obstacles in the workspace to change and the map to be updated.
These questions have been raised before, and they introduce some uncertainty into the results. If they were answered, the result would be improved estimation by the neural network.
In this thesis, the convergence times of the training algorithms have been neglected. A neural network controller that must adapt in real time to changes in the system parameters will need faster training algorithms, and their use will have to be investigated. Some current algorithms obtain higher learning rates by updating only some connections within the network, as opposed to a complete update of all the neural connections [Kohonen, 1982].
Other methods attempt to speed network learning by improving processing speed in the network. Current problems with training speed are commonly related to the use of neural network simulation software. Progress in this area is producing software and hardware systems that are hundreds to thousands of times faster. One example (besides the accelerators described earlier) is the work done by Pomerleau et al., which obtained two hundred times the throughput of the simulations presented in this thesis. This means complete network training within minutes, and solution updates 200,000 times per second.
Suddarth et al. and Yu and Simmons have found that by training the network for additional outputs that correlate with internal representations the network must learn, convergence is hastened and accuracy is improved. For example, the Maximum Torque network could be trained with an acceleration output as well as a torque output. The acceleration output could be disregarded, or possibly used as an estimate of the neural network's accuracy.
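A toy version of this auxiliary-output training can be sketched with a tiny two-layer network. The sizes, learning rate, and targets are invented for illustration; the point is that the error from the auxiliary output also shapes the shared hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny backprop network with two outputs: the torque of interest,
# plus an auxiliary acceleration output used only during training.
W1 = rng.normal(0, 0.5, (3, 2))   # hidden-layer weights
W2 = rng.normal(0, 0.5, (2, 3))   # output weights (torque, acceleration)

def forward(x):
    h = np.tanh(W1 @ x)
    return W2 @ h, h

x = np.array([0.2, -0.4])         # illustrative input state
target = np.array([0.6, 0.3])     # [torque, auxiliary acceleration]

for _ in range(2000):
    y, h = forward(x)
    err = y - target
    # Backpropagate through both outputs; the auxiliary error trains
    # the shared hidden layer, which is the point of the technique.
    W2 -= 0.1 * np.outer(err, h)
    W1 -= 0.1 * np.outer((W2.T @ err) * (1 - h**2), x)

print(np.round(forward(x)[0], 2))  # both outputs near their targets
```

At run time the auxiliary output can simply be ignored, or compared against a measured acceleration as a cheap accuracy check.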
Future work could be focussed on the application of alternative neural network paradigms. One such paradigm, devised by Savic and Tan, is able to map highly non-linear systems. Another approach is to apply a combination of different network types to gain the benefits of each [Kinser et al., 1988].
Another approach that could have excellent results is the combined use of neural networks and expert systems. This combination was discussed by Caudill, who called such designs 'hybrid systems'. The use of rules with neural networks was demonstrated by Murugesan, and it allows some of the shortcomings of neural networks to be avoided. For example, a rule could be used to scale the Maximum Torque network outputs to their maximum value (as done here), or rules could be used to turn off the torque after the network has reached the goal. Rules could even be used to decide when neural networks should be used, as opposed to classical control methods.