Chapter 11: Near-Optimal Motion with Bezier Splines

11.1 Introduction:

Finding a time-optimal motion under torque limits is a very complex robotics problem. The multi-variable search space is highly non-linear and highly constrained, which makes the problem difficult to handle with a standard optimization technique such as gradient descent.

If a good initial guess is provided, it is possible to use a hybrid technique that exploits the features of the problem. A Hookes and Jeeves pattern search [Siddall, 1982] was chosen for use with a Random Walk search. The pattern search allows fast convergence to the nearest optimum, but tends to get caught by constraints. Thus, a random search was used to ‘bump’ the Hookes and Jeeves search out of the non-optimal valleys in the search space. Together, these two methods were able to provide high-quality solutions that are near optimal, and are limited only by the motion models based on Bezier Splines.

11.2 The Problem:

The basic problem here may be described as “moving the robot between two positions, in the least time, while not violating any joint torque constraints.” These concepts are developed into mathematical expressions in the sections that follow.

The problem will use an objective function ‘U’ for the path, and inequality constraint functions ‘C’ for the joint torque limits. Equality constraints exist for the path endpoints and velocities; since these are satisfied automatically by the Bezier Spline formulation, they will not be considered further.

The objective function ‘U’ must be formulated so that the Bezier spline is shaped to fit a bang-bang control approach, in the least time. In particular, the coefficients of the spline segments can be used to approximate the degree of bang-bang control. The constraints (C1 and C2) should ensure that if any of the joint torque or position limits is exceeded, the constraint function will take on a value less than zero.

11.3 The Objective Function:

The objective may be stated as minimizing the travelling time for the manipulator. This may be done in several ways. The best is to use an objective function that treats the sum of the controller time steps as the most important factor.

Optimization techniques often experience problems when converging to an optimum. It can be of benefit to include some solution clues in the objective function. One such clue is that at least one of the joints should be at a torque limit at any one time (for bang-bang control). This can be approximated in the objective function. Over a path segment, an approximate (linearized) measure of torque is the acceleration. The acceleration may be obtained directly from the coefficients of the Bezier spline segment. Increasing the magnitude of acceleration over the path segment should drive the solution towards maximum joint torques, and bang-bang control.
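As a concrete illustration, assuming each spline segment reduces to a cubic polynomial in time with power-basis coefficients a, b, c, and d (the same ‘a’ and ‘b’ referred to below as the jerk and acceleration coefficients), the acceleration and jerk follow directly from the coefficients:

    \theta(t)         = a t^3 + b t^2 + c t + d
    \ddot{\theta}(t)  = 6 a t + 2 b
    \dddot{\theta}(t) = 6 a

Driving ‘a’ to zero therefore makes the acceleration constant over the segment, and the magnitude of that constant acceleration is then governed by |b|.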

To determine the value of the objective function it was decided to evaluate the path at many evenly spaced points in time. These time steps were made small so that all the details along the path could be considered. The mathematical description of the objective function is given below, and an explanation follows.
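A plausible form, consistent with the explanation that follows, is sketched here. The grouping of the sums, and the symbols Delta t_i (the duration of segment i) and a_{j,i}, b_{j,i} (the jerk and acceleration coefficients of joint j over segment i), are assumptions used for illustration rather than the exact expression used in the thesis:

    U   = U_1 + U_2
    U_1 = \left( \sum_i \Delta t_i \right)^6
    U_2 = \min_j \sum_i \left( |a_{j,i}| - |b_{j,i}| \right)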

The time objective term (U1) was raised to the sixth power. The exponent was chosen to be large (through experimentation) to ensure that time reduction is the dominant factor in optimization of the motion. On the other hand, the bang-bang control component (U2) will be small, and thus will only become significant when the time factor cannot be reduced. These are summed without a weighting factor. Weighting factors could be used if ‘U2’ were required to be influential near the start of the search.

The bang-bang objective function has been formulated to ensure that a path segment has a constant, maximum acceleration over the segment. The acceleration is made constant by driving the jerk (represented by the Bezier Spline coefficient ‘a’) to zero. The acceleration is pushed towards the limits by subtracting the absolute value of the Bezier Spline acceleration coefficient ‘b’.

This objective function provides a small gradient encouraging a bang-bang shape for the spline segments (i.e., constant, maximum acceleration along each spline segment).

Since bang-bang control only requires one joint to be at a torque limit, the objective function need only be calculated for one joint. The dominant joint may be determined by a direct comparison between the objectives of each. The bang-bang control objective (U2) may be found for each joint, the values compared, and the least value used (since the objective is to be minimized).

The objective function should direct the optimization routines to search for the least-time path consistent with bang-bang control. (Note that this will not be valid when the manipulator is expected to stand still.)

11.4 The Motion Constraint Functions:

The optimal solution to this problem will be constrained. Thus, the objective function will be trying to push the solution past the constraints, and the constraints will be limiting the search. Therefore, the constraints of interest are the joint position and torque limits, and the requirement of positive time steps for the Bezier Spline. If any of these is violated, then some indication must be given to the optimization techniques.

As before, it is easiest to sample the torques at many time steps along the path. At each time step the torques must be evaluated and compared to the limits. If any of the torques violate the limits, the Constraint function will take on a value less than zero.
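One plausible form for this torque constraint, consistent with the description above (the sampling instants t_k and the limit symbols are illustrative, not the thesis notation), is:

    C_1 = \min_{k, j} \left( \tau_{\max,j} - |\tau_j(t_k)| \right)

which takes on a value less than zero as soon as any sampled joint torque exceeds its limit.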

In this problem, this function is critical because the final motion should lie near the constraint boundaries (this is a constrained optimum problem).

The joint position limits must also be observed. This is a simple process of evaluating the joint positions at a number of points, and determining whether they are outside the limits.

The result is simply a sum that indicates the magnitude of any position constraint violation.

The Bezier spline path representation also requires that all time steps be positive. This may be checked quite simply: each time step in the table of values used to generate the spline is examined.

When all three are combined, they provide an indication of when the motion plan has violated constraints and is no longer valid.
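A minimal sketch of such a combined check is given below, assuming the path can be evaluated through caller-supplied functions for the joint torques and positions. The function names, the fixed sample count, and the simple pass/fail return value (rather than the signed constraint values described above) are assumptions for illustration only.

    def constraints_ok(joint_torques, joint_positions, segment_times,
                       torque_limits, position_limits, n_samples=200):
        # joint_torques(t) and joint_positions(t) are caller-supplied functions
        # that evaluate the Bezier spline path at time t.
        # C3: every spline segment duration must be positive.
        if any(dt <= 0.0 for dt in segment_times):
            return False
        t_total = sum(segment_times)
        for k in range(n_samples + 1):
            t = t_total * k / n_samples
            # C1: joint torque limits, sampled at many points along the path.
            for tau, tau_max in zip(joint_torques(t), torque_limits):
                if abs(tau) > tau_max:
                    return False
            # C2: joint position limits, sampled at the same points.
            for q, (q_min, q_max) in zip(joint_positions(t), position_limits):
                if q < q_min or q > q_max:
                    return False
        return True

A routine of this form can serve as the feasibility test used by the search methods described in the following sections.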

11.5 Decision Variables:

Optimization routines alter fundamental descriptive parameters in an attempt to improve a cost function. Here, it is the path that is to be optimized, and thus the path description variables must be altered. The path description (as seen before) consists of Bezier Spline segments. Each segment is based upon a set of points along the path, and a time duration for the segment. Together, these segments make up the path. The points along the path and the durations of the segments together form the set of decision variables.

The reader should note that, as a result of the spline representation of the motion, any point change will have an effect upon the other spline segments, although this effect will be small. It is also important to reiterate that each path segment has its own time duration. If a segment’s time is altered, the entire path shape may change to ensure continuity. These properties lead to a very interesting optimization problem.
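As a sketch of what this means in practice, the decision variables might be flattened into a single vector for the search routines to perturb. The list-of-lists layout below is an assumption about the path representation, not the thesis data structure.

    def pack(knot_points, segment_times):
        # knot_points: one list of intermediate path points per joint;
        # segment_times: one duration per spline segment.
        return [p for joint in knot_points for p in joint] + list(segment_times)

    def unpack(x, n_joints, n_points):
        points = [x[j * n_points:(j + 1) * n_points] for j in range(n_joints)]
        return points, x[n_joints * n_points:]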

11.6 Hookes and Jeeves Optimization:

Now that the Objective and Constraint functions have been formulated, an optimization technique will be described to reduce the Objective while observing the Constraints. One such technique, used in this thesis, is the Hookes and Jeeves search.

The Hookes and Jeeves search technique was first developed in the early sixties. This particular search technique is very simple, easy to implement, and very dependable. The basic mode of operation is explained in Siddall [1982].

In the Exploratory Stage, the Hookes and Jeeves search method will sequentially adjust each decision variable, checking whether the change decreases the objective function or violates any constraints. If the change results in an unconstrained improvement, it will be kept, and the next decision variable will be adjusted. After a complete pass through the decision variables, a Pattern Search will be done: the combined move found in the Exploratory search will be tried again, and kept if it results in an unconstrained improvement in the solution. This entire method is repeated until no more moves are possible. A visual depiction of this method may be seen in the following figure.

 

Figure 11.1: Hookes and Jeeves Search Method

The basic steps of this algorithm are also outlined below.
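A minimal sketch of these steps is given below. The callables ‘objective’ and ‘feasible’ (which could be the combined constraint check sketched in Section 11.4) and the step-size parameters are illustrative assumptions rather than the thesis routines.

    def hookes_jeeves(x0, objective, feasible, step=1.0, step_min=1e-3, shrink=0.5):
        # objective(x) returns the value of U; feasible(x) returns True when no
        # constraint is violated.  x is a flat list of decision variables.
        x_base = list(x0)
        f_base = objective(x_base)
        while step > step_min:
            # Exploratory stage: adjust each decision variable in turn, keeping
            # any change that improves U without violating a constraint.
            x_trial, f_trial = list(x_base), f_base
            for i in range(len(x_trial)):
                for delta in (step, -step):
                    x_new = list(x_trial)
                    x_new[i] += delta
                    if feasible(x_new):
                        f_new = objective(x_new)
                        if f_new < f_trial:
                            x_trial, f_trial = x_new, f_new
                            break
            if f_trial < f_base:
                # Pattern stage: repeat the combined exploratory move, and keep
                # it only if it gives an unconstrained improvement.
                move = [t - b for t, b in zip(x_trial, x_base)]
                x_base, f_base = x_trial, f_trial
                x_pattern = [b + m for b, m in zip(x_base, move)]
                if feasible(x_pattern):
                    f_pattern = objective(x_pattern)
                    if f_pattern < f_base:
                        x_base, f_base = x_pattern, f_pattern
            else:
                # No move was possible at this step size, so reduce it.
                step *= shrink
        return x_base, f_base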

 

If an initial guess is provided which does not violate constraints, then this routine will converge. Here, the first guess is provided by the routines in the previous chapter. These first guesses do not violate the constraints, but take a long time to traverse the path. This routine is best run in stages with a decreasing step size; the first step size should be large.

As can be seen, this search method avoids the need for derivatives, which many other multi-variable optimization techniques require. The motion planning method presented in this thesis is not suited to determining derivatives of the objective function.

This method is not without drawbacks. It has the potential to become stuck because of the orientation of the search vectors: it is possible that a constraint could stop the search from making a move that would permit a more optimal solution.

 

Figure 11.2: An Example of a Hookes and Jeeves Search Deadlock

The possibility that such a deadlock will occur dictates that a method be established to overcome it.

11.7 A Random Walk Search:

It is possible to make small random steps in the search space of the problem. If these steps do not violate any constraints, and they result in a reduced objective function, then they may be kept. Although this method is inefficient, it will overcome many difficult optimization problems [Dickinson, 1986].

This method may be very slow compared to the Hookes and Jeeves search, but will help when the Hookes and Jeeves search becomes ‘hung up’. If the step sizes are large enough, it may also allow the solution to escape a local minimum, which the Hookes and Jeeves search cannot do. Thus, this search will be used together with the Hookes and Jeeves search.

The search technique may be expressed as a few simple steps, sketched below.
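The sketch below is an illustrative implementation only. The callables ‘objective’ and ‘feasible’ are the same hypothetical stand-ins used in the Hookes and Jeeves sketch, and the 1000-iteration reduction interval follows the description given after the sketch.

    import random

    def random_walk(x0, objective, feasible, step=1.0, iterations=10000, shrink=0.9):
        # objective(x) and feasible(x) are the same callables used by the
        # Hookes and Jeeves sketch above.
        x_best = list(x0)
        f_best = objective(x_best)
        for k in range(1, iterations + 1):
            # Take a small random step in every decision variable.
            x_new = [x + random.uniform(-step, step) for x in x_best]
            # Keep the step only if it is feasible and reduces the objective.
            if feasible(x_new):
                f_new = objective(x_new)
                if f_new < f_best:
                    x_best, f_best = x_new, f_new
            # Reduce the step size every 1000 iterations, regardless of success.
            if k % 1000 == 0:
                step *= shrink
        return x_best, f_best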

 

The first step size, the amount by which the step is reduced each time, and the upper limit on the number of iterations may all be set by the calling algorithm. The step size determines how far the search algorithm may step in the space. As the algorithm moves into positions near the optimum, the steps need to be made smaller. Thus, it was decided to reduce the step size after every 1000 iterations of the routine, regardless of success. The user may determine the degree of optimization produced by the routines by setting the number of iterations appropriately. This allows the user to tailor the search to the application. An example of the Random Walk search is given below.

 

Figure 11.3: Example of a Random Walk Search

This routine is not especially efficient, but will find optima when applied appropriately. When using large step sizes to perform a global search, this algorithm will find many optima, and will jump around in the search space. The large step size allows the solution to step over ridges and hills. When the step size is reduced, the method will tend to optimize locally, and may become trapped in sub-optimal solutions. Thus, if the step size is made large at first, and then reduced slowly, the search will tend to settle into the globally optimal solution. Unfortunately, this method may converge very slowly, and is best used with another search method.

11.8 A Hybrid Optimization Routine For Motion Planning:

The Sub-Optimal path discussed in the previous chapter is supplied to the optimization routines. This ensures a good starting point and faster convergence. The particular order and application of the searches is somewhat arbitrary, and thus requires some consideration. The method determined (through experimental trial and error) may be described as follows; a code sketch of the schedule is given after the list.

1. The initial path guess is generated with the methods described in the previous chapter.

2. The Random Walk routine is applied first, with a large step size. This allows the path to be refined until it is near the global optimum. (10,000 iterations)

3. The Hookes and Jeeves search is then applied to move much closer to the optimum. This search tends to become caught by constraints, and stop. (used until convergence)

4. When the search has converged, the Random Walk routine is used again, but with smaller steps. This moves the solution away from the point where the Hookes and Jeeves search has become ‘hung up’. (6,000 iterations)

5. The Hookes and Jeeves search is reapplied, and allowed to converge. The search should now be very close to the optimal solution. (used until convergence)

6. A Random Walk search is done with a very small step size, for a large number of iterations. In effect, this should overcome any remaining problems with the Hookes and Jeeves search. (50,000 iterations)
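Expressed as code, the schedule above might look like the following sketch, which chains the random_walk and hookes_jeeves routines sketched earlier in this chapter. The step-size values are illustrative assumptions, while the iteration counts follow the list.

    def optimize_path(x_initial, objective, feasible):
        # 1. x_initial is the sub-optimal first guess from the previous chapter.
        x = list(x_initial)
        # 2. Coarse Random Walk toward the region of the global optimum.
        x, _ = random_walk(x, objective, feasible, step=1.0, iterations=10000)
        # 3. Hookes and Jeeves search until it converges or is caught by a constraint.
        x, _ = hookes_jeeves(x, objective, feasible, step=1.0)
        # 4. Smaller Random Walk steps to move off the constraint that stopped it.
        x, _ = random_walk(x, objective, feasible, step=0.1, iterations=6000)
        # 5. Hookes and Jeeves again, now close to the optimum.
        x, _ = hookes_jeeves(x, objective, feasible, step=0.1)
        # 6. A final, very fine Random Walk to clean up any remaining deadlock.
        x, _ = random_walk(x, objective, feasible, step=0.01, iterations=50000)
        return x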

This method tends to provide optimization that converges, in reasonable time, to good solutions by exploiting the advantages of both search methods. This multi-stage search is often excessive, but is general enough to allow generation of near-optimal paths in many cases. One such example of an optimized path is seen below. This is the same path that was discussed in the previous chapter.

As seen, the path time of 3.9 seconds for the example in the last chapter (the first guess for the optimization) has been reduced to 3.1 seconds. The torque curves also display the characteristic bang-bang control shape, showing that this motion is near optimal. The reader should note, however, that near the turn-around point there is a period during which the torque is not at a limit. Since this period is very brief, the motion may still be considered near optimal.

 

Figure 11.4: Near-Optimal Path Plan for Previous Test Case (tpath = 3.1 seconds)

 

Figure 11.5: Near-Optimal Path Plan for Case 1 (tpath = 5.3 seconds)

11.9 Test Paths:

The 10 test paths that have been used in the previous chapters were evaluated. These paths provide an opportunity to examine the success of the optimization techniques in several cases. The position and torque curves are both shown. This allows the reader to observe the optimal path shapes, and also to observe the optimality (or lack thereof) of the paths.

An examination of all ten test cases shows that there are problems with each. One problem common to every path was that the torque failed to maintain its maximum value over the entire path. This effect was minor in most cases, where near-maximum torque was maintained for most of the path.

For test path 10, an unusual effect occurred at the midpoint of the path: a sudden double switch of torque. This can be used as an example of a path which has not converged to the optimum.

Test path 6 shows an aliasing effect. The path shape indicates that a very high torque must have been applied at the start of the path. This is not shown in the torque graph, and must have escaped the constraint subroutine. This would have happened because the torque was applied for a very brief period that was shorter than the time step used by the constraint checking subroutine. This effect is recognized to occur when the paths are very short. In a more sophisticated subroutine, the time step of the constraint algorithm should be scaled by an estimated path time; this would avoid this unusual aliasing effect.
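A minimal sketch of that scaling is given below; the sample count and the name ‘estimated_path_time’ are illustrative assumptions.

    def constraint_time_step(estimated_path_time, n_samples=200):
        # With a fixed number of samples, a 0.9 second path is checked roughly
        # every 4.5 ms instead of at a fixed interval that could miss a brief
        # torque spike at the start of the motion.
        return estimated_path_time / n_samples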

In general, these routines do provide paths with characteristics of optimal time paths. These results will provide a reasonable source of data from which neural network training data may be extracted.

11.10 Conclusion:

The results of the optimization are motions that resemble bang-bang control. The results show that problems occurred in paths that were long (> 3.5 seconds) and short (< 1.0 seconds).

 

Figure 11.6: Near-Optimal Path Plan for Case 2 (tpath = 4.0 seconds)

 

Figure 11.7: Near-Optimal Path Plan for Case 3 (tpath = 4.1 seconds)

 

Figure 11.8: Near-Optimal Path Plan for Case 4 (tpath = 2.2 seconds)

 

Figure 11.9: Near-Optimal Path Plan for Case 5 (tpath = 5.3 seconds)

 

Figure 11.10: Near-Optimal Path Plan for Case 6 (tpath = 0.9 seconds)

 

Figure 11.11: Near-Optimal Path Plan for Case 7 (tpath = 2.2 seconds)

 

Figure 11.12: Near-Optimal Path Plan for Case 8 (tpath = 3.6 seconds)

 

Figure 11.13: Near-Optimal Path Plan for Case 9 (tpath = 1.0 seconds)

 

Figure 11.14: Near-Optimal Path Plan for Case 10 (tpath = 3.6 seconds)