Chapter 13: Maximum Torque with Scaled Neural Network

13.1 Introduction:

This chapter is an extension of the Maximum Torque planner. It was concluded that the Maximum Torque planner displayed many of the optimal features found in the data. Unfortunately, by their nature, the neural network outputs did not reach the torque limits. It was decided to investigate the nature of the problem by ‘forcing’ the network to produce a bang-bang control output.

13.2 Scaled Output Controller:

When the output of a neural network is scaled for bang-bang control, there are two considerations:

1. The torque with the greater magnitude should be scaled to the magnitude of the torque limit. The other joint torque should be scaled by the same factor.

2. When the manipulator is expected to stand still (i.e., no joint torques applied), the bang-bang control should be stopped.

While the first statement is unquestionable, the second may be argued: the controller could be allowed to switch indefinitely to maintain the position. It was decided that for small moves (near the goal) the controller should be given unscaled control of the robot. This form of control is realized by turning off the scaling when the magnitudes of both torques are below 2.0 N•m.
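The scaling rule can be summarized in a short sketch. This is a minimal illustration, not the original implementation: the 2.0 N•m threshold is taken from the text above, while the torque limit value is an assumed placeholder, since it is not restated in this chapter.

```python
TORQUE_LIMIT = 10.0        # assumed joint torque limit (N*m) -- placeholder value
UNSCALED_THRESHOLD = 2.0   # below this magnitude on both joints, scaling is turned off


def scale_torques(tau1, tau2, limit=TORQUE_LIMIT, threshold=UNSCALED_THRESHOLD):
    """Scale the network outputs toward bang-bang control.

    The joint torque with the greater magnitude is scaled up to the
    torque limit, and the other joint torque is scaled by the same
    factor so their ratio is preserved. When both magnitudes fall
    below the threshold (near the goal), the outputs are returned
    unscaled so the network retains fine control.
    """
    largest = max(abs(tau1), abs(tau2))

    # Near the goal: turn scaling off and pass the raw network outputs through.
    if largest < threshold:
        return tau1, tau2

    factor = limit / largest
    return tau1 * factor, tau2 * factor
```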

 

Figure 13.1: Output Scaling for Maximum Torque Control

13.3 Results:

The controller outputs follow. They were produced using the neural network trained for Maximum Torque Control; the addition of the output scaling routine is the only change.

An approximate measure of path time was also made, using a slightly different criterion for the path end time. The previous path times were measured with cumulative segment times for the splines, and with a 5 degree error band for Maximum Torque control. The current method suffers from steady-state error, so the error band was increased to 10 degrees. For short paths this difference is significant, but for longer paths its effect is lessened.
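The error-band criterion can be sketched as follows. This is a hedged illustration only: the 10 degree band is from the text, while the sampled trajectory format (parallel lists of times and joint angles, in radians) and the function name are assumptions.

```python
import math


def path_time(times, theta1, theta2, goal1, goal2, band_deg=10.0):
    """Return the time at which both joint errors enter and thereafter remain
    within +/- band_deg of the goal, or None if they never settle."""
    band = math.radians(band_deg)
    settle_time = None
    for t, q1, q2 in zip(times, theta1, theta2):
        inside = abs(q1 - goal1) <= band and abs(q2 - goal2) <= band
        if inside and settle_time is None:
            settle_time = t        # candidate settling time
        elif not inside:
            settle_time = None     # left the band; discard the candidate
    return settle_time
```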

 

Figure 13.2: Path Times for Test Paths, Including Scaled Neural Network Outputs

 

Figure 13.3: Maximum Torque Control With a Scaled Neural Network, Case 1

 

Figure 13.4: Maximum Torque Control With a Scaled Neural Network, Case 2

 

Figure 13.5: Maximum Torque Control With a Scaled Neural Network, Case 3

 

Figure 13.6: Maximum Torque Control With a Scaled Neural Network, Case 4

 

Figure 13.7: Maximum Torque Control With a Scaled Neural Network, Case 5

 

Figure 13.8: Maximum Torque Control With a Scaled Neural Network, Case 6

 

Figure 13.9: Maximum Torque Control With a Scaled Neural Network, Case 7

 

Figure 13.10: Maximum Torque Control With a Scaled Neural Network, Case 8

 

Figure 13.11: Maximum Torque Control With a Scaled Neural Network, Case 9

 

Figure 13.12: Maximum Torque Control With a Scaled Neural Network, Case 10

13.4 Conclusions:

The results show that the network has problems with its output scale; this is evident because the scaled outputs produced better times in almost every case. In one case (case 1) the neural network apparently outperformed the optimization routines, but this was due to the 10° convergence criterion. The other results place the scaled controller between the optimization routines and the unscaled neural network (cases 2, 3, 4, 6, 7, 8 and 9). Only one case degraded notably (case 5); this case also had problems with the unscaled output, so it is reasonable to conclude that the problem results from inherent instability of the solution.

The steady-state error evident in most of the cases shows a problem with the network near goal positions. This could be resolved with a hybrid control system that uses the neural network controller only for coarse movements (an example is given in the next chapter).
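A minimal sketch of the switching idea follows, as an assumption about its general form only; the actual hybrid example is given in the next chapter. The hand-over band and the controller interfaces are placeholders.

```python
import math

COARSE_BAND_DEG = 10.0   # assumed hand-over band; not specified in this chapter


def hybrid_torques(q, q_goal, network_control, fine_control):
    """Use the coarse (neural network) controller far from the goal and a
    conventional fine controller once every joint error is inside the band."""
    band = math.radians(COARSE_BAND_DEG)
    near_goal = all(abs(qi - gi) <= band for qi, gi in zip(q, q_goal))
    return fine_control(q, q_goal) if near_goal else network_control(q, q_goal)
```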