21. Appendix D: Adaptive Time Step Runge-Kutta

21.1 D.1 Introduction:

The Runge-Kutta method is a dependable approach for integrating ordinary differential equations. The method was developed by Carl Runge and Wilhelm Kutta to imitate a Taylor series without requiring analytic differentiation of the original equations. Their method has become a well known, widely used approach to numerical integration. A number of fundamental, and advanced, references are available [Spiegel, 1980], [Cheney and Kincaid, 1985], [Press et al., 1986].

There are a number of other techniques for numerical integration, for example Richardson extrapolation and predictor-corrector methods. Richardson extrapolation is generally faster and more efficient than Runge-Kutta, but may fail in some cases. Predictor-corrector methods are difficult to start up because they depend on information from previous steps, although once started they are generally more computationally efficient than Runge-Kutta. Unfortunately, both of these techniques, and many others, have features which make them less general, and thus less desirable.

The Runge-Kutta method was selected because of its reputation as a ‘workhorse’ algorithm which almost always succeeds. Even though the method may be computationally inefficient, the use of the adaptive time step algorithm speeds integration and furnishes more accuracy. The adaptive time step Runge-Kutta was also chosen because a Fortran version of the routines was available [Press et al., 1986].

21.2 D.2 The Runge-Kutta Algorithm for Robot Motion Integration:

The Runge-Kutta technique requires that the problem being examined be an initial value problem: ‘given the system state at some initial condition, find the system state at some final condition’. The initial value problem here is to take the current robot joint configuration, velocity, and input torque at one point in time, and find the robot state at some finite time later, under the applied torque.

21.3 D.3 The Fourth Order Runge-Kutta Method:

The fourth order Runge-Kutta method evaluates the derivatives of a function four times over an integration step. These derivatives are used to estimate the value of the function at the middle, and then at the end, of the integration step. The mathematics for the fourth order Runge-Kutta method are shown below.
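The equations that belong here are reconstructed using the standard fourth order Runge-Kutta formulas, in the form given by Press et al. [1986], for a single step of size h applied to y' = f(x, y):

\begin{aligned}
k_1 &= h\,f(x_n,\ y_n) \\
k_2 &= h\,f(x_n + h/2,\ y_n + k_1/2) \\
k_3 &= h\,f(x_n + h/2,\ y_n + k_2/2) \\
k_4 &= h\,f(x_n + h,\ y_n + k_3) \\
y_{n+1} &= y_n + k_1/6 + k_2/3 + k_3/3 + k_4/6 + O(h^5)
\end{aligned}

Each of the four evaluations of f() is one of the derivative evaluations referred to above.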

 

To estimate a function, the interval of integration is broken up into a number of smaller steps ‘h’. Evaluation begins with the starting conditions (i.e. x0, y0), and the Runge-Kutta method is used to evaluate the first step (i.e. x0 + h). This results in an estimate of the conditions after the first step (i.e. x1, y1). The evaluation is repeated over many steps until the final conditions are reached (i.e. xn, yn).

It should be noted that the only ‘variable’ which may be manipulated in these equations is the step size. A very small step size results in an accurate, but very slow, integration. A large step size reduces the calculation time, but increases the error. The step size does not have to be fixed over the entire function, and thus a method exists which selects a variable step size based upon the localized accuracy requirements of the integration.

21.4 D.4 Variable Step Size Estimation:

The terrain of an integral varies over the period of integration. It is desirable to speed up when the integral has a smooth topography, and to slow down and integrate carefully when the topography becomes rough. The algorithm discussed in Press et al. [1986] performs this adjustment of the step size. The algorithm increases the number of operations per integration step, but decreases the number of integration steps. Also, because the steps are adjusted to maintain a certain accuracy, the solution has a very controllable degree of accuracy.

Press et al. [1986] discuss the adaptive time step algorithm in detail. The concept is that the time step may be adjusted up, or down, depending upon the change of the integral over the step. First the solution is evaluated with a two step procedure, in which the original step is divided into two half-sized steps. Press demonstrates that this effectively makes the technique fifth order, and goes on to show that the desired step size may be estimated from a suggested step size and its resulting integral value.
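The relation that belongs here is reconstructed from the step size formula given in Press et al. [1986]; the symbols are the ones used in the following paragraph:

\begin{aligned}
h_0 &= h_1 \left| \frac{\Delta_0}{\Delta_1} \right|^{0.2}, \qquad \Delta_0 = \varepsilon\, y_{\mathrm{range}}
\end{aligned}

Here h_1 is the step size that was tried, \Delta_1 is the error estimate it produced (the difference between the two half-step result and the single full-step result), \Delta_0 is the desired error, and h_0 is the step size that would have produced \Delta_0, and so becomes the recommended step size.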

 

The estimate is based upon the fact that the method is fifth order. The ratio of the error of the suggested step size, O(h1^5), to the error of the required step size, O(h2^5), is equivalent to the ratio of the error estimates Δ1 and Δ0. The estimated truncation error Δ0 can be based upon some fraction ‘ε’ of the full range ‘yrange’. In effect, ‘ε’ sets the fraction of the integration which one Runge-Kutta step may cover. A more detailed description of this particular implementation and its theory is available in Press et al. [1986].

21.5 D.5 Implementation:

The routines have been converted to ‘C’ and appear at the end of this appendix, preceded by a description of the subroutines. There were some considerations in expressing the dynamics equations in the form required by the Runge-Kutta subroutines.

The differential equations have to be expressed in terms of a state array for the Runge-Kutta routines. The state variables for the two link manipulator are the joint positions and velocities. The torque is held fixed over the period of integration. The state array and its derivative take the form shown below.
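The state array that belongs here is reconstructed below. The ordering shown follows the convention of the test routine in the listing at the end of this appendix (velocities in the first half of the array, positions in the second half); the ordering actually used for the manipulator is an assumption here.

\begin{aligned}
y &= \left[\ \dot{\theta}_1,\ \dot{\theta}_2,\ \theta_1,\ \theta_2\ \right]^T, \qquad
\frac{dy}{dt} = \left[\ \ddot{\theta}_1,\ \ddot{\theta}_2,\ \dot{\theta}_1,\ \dot{\theta}_2\ \right]^T
\end{aligned}

The joint accelerations are supplied by the forward dynamics equations, with the applied torques held constant over the integration period.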

 

The system state array (y) is provided to the Runge-Kutta routine as the starting point, and the Runge-Kutta routines call for the system state differential array (dydx). The state array (y) elements are filled with values from the end of the previous step. The system state differential array is calculated by using the joint velocities from the system state array, and the joint accelerations from the forward dynamics equations. A sketch of such a routine is given below.
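As an illustration, a minimal sketch of a rk_derivs() routine for the two link arm follows. The state ordering, the global torque array, and the forward_dynamics() routine (assumed to return the joint accelerations for a given state and torque) are assumptions made for this example only; they do not appear in the listing at the end of this appendix.

/*
* EXAMPLE ONLY: sketch of rk_derivs() for the two link manipulator.
* State ordering (an assumption): y[0], y[1] are the joint velocities,
* y[2], y[3] are the joint positions.
*/
extern void forward_dynamics(); /* hypothetical routine: accelerations from state and torques */

double torque[2]; /* joint torques, held fixed over the integration period */

void rk_derivs(x, y, dydx)
double x, *y, *dydx; /* x is unused because the torques are constant over the step */
{
    static double acc[2];

    dydx[2] = y[0]; /* derivatives of the joint positions are the velocities */
    dydx[3] = y[1];

    forward_dynamics(y, torque, acc); /* joint accelerations from the forward dynamics */
    dydx[0] = acc[0];
    dydx[1] = acc[1];
}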

21.6 D.6 Subroutines:

This method is available in Fortran in Press et al. [1986]. The subroutines were rewritten in ‘C’ to integrate the forward dynamics equations, and some of the unused features were removed.

rk_test(): This is a sample program to demonstrate the use of the Runge-Kutta subroutines. This routine also requires the rk_derivs() subroutine to provide the system state differential array.

rk_derivs(): Calculates the system state differential array.

rk_4(): Performs the fourth order Runge-Kutta approximation of the integral for a single integration step.

rk_qc(): Calls the Runge-Kutta routine and uses the adjustable time steps to ensure quality control of the solution over an integration step.

rk_odeint(): Is the entry point for integrating over some number of steps. This routine does the housekeeping, and calls rk_qc() to integrate each single step.
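For the manipulator, the calling pattern over one control period resembles the sketch below. The routine name robot_step() and the particular step sizes are illustrative assumptions; the tolerance and initial step size follow the values used in rk_test().

/* EXAMPLE ONLY: advance the robot state array from time t to t + dt */
void robot_step(y, t, dt)
double *y, t, dt; /* y holds the four state variables described above */
{
    static double eps, h1, hmin;
    static int nok, nbad;

    n = 4; /* number of state variables */
    n2 = n/2; /* the positions start half way through the array */

    eps = 0.000000001; /* error tolerance, as in rk_test() */
    h1 = 0.05*dt; /* initial guess at the step size */
    hmin = 0.0; /* effectively no minimum step size */

    rk_odeint(y, n, t, t + dt, eps, h1, hmin, &nok, &nbad);
}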

 

/*

* FOURTH ORDER RUNGE-KUTTA NUMERICAL INTEGRATOR

*

* This is the fourth order Runge-Kutta integrator from the book

* Numerical Recipes. The routines approximate a set of differential

* equations expressed in state space form, and solve them with a

* variable time step Runge-Kutta method.

*

* The routines were originally typed into the computer in FORTRAN by

* Peter Budgell, and then converted to ’C’ by Hugh Jack. Some changes to

* the routines were made for the conversions, so pay attention to the

* subroutine headers.

*

* The quality of these routines was checked by running the rk_test routine,

* which reproduces a problem with known results, and comparing its output

* with the results of another implementation.

*

* The user is expected to supply a derivative routine so that the program

* can obtain the necessary information. This routine will be called

* ’rk_derivs()’

*

* June 25th, 1990.

*/

#include <math.h>

#include <stdio.h> /* needed for the printf() and puts() calls below */

 

int n, /* number of variables */

n2; /* start of variables for end of integration */

 

void rk_odeint(),

rk_derivs(),

rk_4(),

rk_qc(),

rk_test();

 

/*

*main()

*

* THE ORIGINAL TEST PROGRAM

*

* This program was originally used to verify the Runge-Kutta integrator.

*

* June 24th, 1990.

*

*{

* rk_test();

*}

*/

 

 

 

 

 

void rk_test()

/*

* RUNGE-KUTTA TEST ROUTINE

*

* This routine was originally written by Peter Budgell in Fortran, to

* test flexible beam experiments. This is a fairly complex problem

* which he had running properly. Comparing the results of this

* test program against the output of the original, known correct program

* ensures that the subroutines are valid.

*

* This program may also be used as an example of how to set up the

* integration with these routines.

*

* June 25th, 1990.

*/

{

static double ystart[20],

step,

start,

end,

h1,

hmin,

time,

tend,

eps;

static int i,

nok,

nbad,

nvar;

 

n = nvar = 18; /* Number of variables in array */

 

step = 0.01; /* A nominal time step */

start = 0.0; /* Start Time */

end = 6.0; /* End Time */

 

n2 = n/2; /* first location of the results after the current time step */

 

for(i = 0; i < n; i++) ystart[i] = 0.0; /* Set all conditions to 0 */

ystart[n2] = 1.0; /* Set one condition to 1.0 */

 

eps = 0.000000001; /* Pick a tolerance limit */

h1 = step*0.05; /* An initial step size */

hmin = step*0.00000000001; /* Minimum step size */

 

/* Loop for entire time of integration */

for(time = start; time <= end; time = time + step){

tend = time + step; /* Find end of current time step */

 

printf("%12.7f, ", time);

for(i = n2; i < n; i++) printf("%12.7f, ", ystart[i]);

printf("\n");

 

rk_odeint(ystart, nvar, time, tend, eps, h1, hmin,

&nok, &nbad); /* Do the magic */

}

}

 

 

 

 

 

/*

*void rk_derivs(x, y, dydx)

*double x, *y, *dydx;

*

* DERIVATIVE CALCULATION

*

* This calculation subroutine was used for testing, and will be preserved

* in case the routines are to be changed, and need to be reverified.

*

* Variables: x: independent variable

* y: current system state

* Return: dydx: derivatives of system state

*

* June 27th, 1990.

*

*{

* static double pfx1;

* static int j;

*

* for(j = 0; j < n2; j++) dydx[j+n2]=y[j];

*

* pfx1 = sin(y[9]);

*

* dydx[0] = -1.98*pfx1 + 801693.6156953528*y[17] + 238202.7335655744*

* y[16] + 347900.987724956*y[15] + 97439.65571318488*y[14] +

* 98362.70683351491*y[13] + 21956.32625008109*y[12] +

* 12114.79095981437*y[11] + 1301.388796632275*y[10];

*

* dydx[1] = 0.4518711099417621*pfx1 - 179773.6313506934*y[17] -

* 61576.69785710338*y[16] - 77647.37449279918*y[15] -

* 25879.95640442619*y[14] - 21658.36710096124*y[13] -

* 6413.784140427552*y[12] - 2526.692502066684*y[11] - 567*y[10];

*

* dydx[2] = 0.2391077163121257*pfx1 - 166082.56743252*y[17] -

* 6405.409017057381*y[16] - 79710.68283721448*y[15] -

* 844.8565432071885*y[14] - 28547.7815504883*y[13] +

* 1152.059529711898*y[12] - 6153*y[11] + 80.95999011857648*y[10];

*

* dydx[3] = 0.1565327441783348*pfx1 + 47064.70563768353*y[17] -

* 202940.3646111788*y[16] + 30080.11914662241*y[15] -

* 97290.99000636904*y[14] + 15052.16883833024*y[13] -

* 32545.8*y[12] + 2845.779506567576*y[11] -

* 1505.844972100382*y[10];

*

* dydx[4] = 0.1132900827837232*pfx1 - 681263.6263703158*y[17] +

* 193176.6647342691*y[16] - 352585.2977708019*y[15] +

* 85442.65322814713*y[14] - 137474.04*y[13] +

* 21572.14379134085*y[12] - 17362.53077751052*y[11] +

* 715.2849542065636*y[10];

*

* dydx[5] = 0.08708677938043836*pfx1 + 558082.2528116575*y[17] -

* 996037.4711388333*y[16] + 268845.6025815515*y[15] -

* 439369.71*y[14] + 86691.57005283963*y[13] -

* 90553.41881423206*y[12] + 10389.2522589453*y[11] -

* 3699.738414305221*y[10];

*

* dydx[6] = 0.06972522601510263*pfx1 - 2527433.065839176*y[17] +

* 742264.3579599134*y[16] - 1213711.25*y[15] +

* 280716.0947769071*y[14] - 336143.1918442992*y[13] +

* 56810.92040959832*y[12] - 38124.26660496784*y[11] +

* 1703.971893631381*y[10];

*

* dydx[7] = 0.05749689432606649*pfx1 + 1731034.860764493*y[17] -

* 2917929.13*y[16] + 740549.9706407816*y[15] -

* 988390.0815842206*y[14] + 203949.6229385724*y[13] -

* 184746.3711841028*y[12] + 22008.50451151402*y[11] -

* 7252.400991789842*y[10];

*

* dydx[8] = 0.04848968962573384*pfx1 - 6398541.27*y[17] +

* 1748481.586261928*y[16] - 2507721.622589704*y[15] +

* 590957.0489621608*y[14] - 637801.8289136468*y[13] +

* 109906.4461477124*y[12] - 69565.55370655796*y[11] +

* 3155.196941555313*y[10];

*}

*/

 

 

 

 

 

void rk_4(y, dydx, n, x, h, yout)

double *y, *dydx, x, h, *yout;

int n;

/*

* FOURTH ORDER RUNGE-KUTTA

*

* This routine will use the fourth order Runge-Kutta method to advance

* the solution over an interval ’h’, and then return the incremented

* values. The user supplies the routine ’rk_derivs(x, y, dydx)’ which

* provides the derivatives of the system at ’x’.

*

* Variables: y: The current system state array

* dydx: the derivatives of the state variables at ’x’

* n: the number of system state variables

* x: current point in time

* h: Step Size

*

* Returns: yout: results after the time step

*

* June 25th, 1990.

*/

{

static int i;

static double yt[30], /* must have size of at least ’n’ */

dyt[30],

dym[30],

hh,

h6,

xh;

 

hh = h*0.5;

h6 = h/6.0;

xh = x + hh; /* First Step */

for(i = 0; i < n; i++) yt[i] = y[i] + hh*dydx[i];

rk_derivs(xh, yt, dyt); /* Second Step Calc */

for(i = 0; i < n; i++) yt[i] = y[i] + hh*dyt[i];

rk_derivs(xh, yt, dym); /* Third Step Calc */

for(i = 0; i < n; i++){

yt[i] = y[i] + h*dym[i];

dym[i] = dyt[i] + dym[i];

}

rk_derivs(x+h, yt, dyt); /* Fourth Step Calc */

for(i = 0; i < n; i++) /* Final Value Calc */

yout[i] = y[i] + h6*(dydx[i] + dyt[i] + 2.0*dym[i]);

}

 

 

 

 

 

 

void rk_qc(y, dydx, n, x, h, eps, yscal, hdid, hnext)

double *y, *dydx, *yscal, *x, h, eps, *hdid, *hnext;

int n;

/*

* FIFTH ORDER RUNGE-KUTTA WITH ADJUSTABLE TIME STEP

*

* A fifth order approach is used for the Runge-Kutta, which takes

* two half-sized steps to estimate the accuracy, reduce the error,

* and adjust the step size. This routine calls the user-defined

* subroutine ’rk_derivs(x, y, dydx)’ to obtain the derivatives.

*

* Variables: y: system state array

* dydx: derivatives for system in start config

* n: number of variables in ’y’

* x: independent variable for ’y’

* h: step size of approximation

* eps: required tolerance for error

* yscal: vector against which the error is scaled

*

* Returns: hdid: step size accomplished

* hnext: Suggested stepsize for next step

* x: independent variable at end of time step

* y: state variables at end of time step

*

* June 25th, 1990.

*/

{

static int i;

static double fcor,

one,

safety,

errcon,

errmax,

pgrow,

pshrnk,

hh,

xsav;

static double ytemp[30], /* These arrays must be at least */

ysav[30], /* ’n’ long */

dysav[30];

 

fcor = 0.0666666667;

one = 1.0;

safety = 0.9;

errcon = 6.0e-4;

pgrow = -0.20;

pshrnk = -0.25;

xsav = x[0]; /* Store original value */

for(i = 0; i < n; i++){

ysav[i] = y[i];

dysav[i] = dydx[i];

}

REDO: hh = 0.5*h; /* For the half sized steps */

rk_4(ysav, dysav, n, xsav, hh, ytemp);

x[0] = xsav + hh;

rk_derivs(x[0], ytemp, dydx);

rk_4(ytemp, dydx, n, x[0], hh, y);

x[0] = xsav + h;

if(x[0] == xsav) puts("stepsize not significant in rkqc.\n");

rk_4(ysav, dysav, n, xsav, h, ytemp); /* Do the big step */

errmax = 0.0; /* Find the accuracy */

for(i = 0; i < n; i++){

ytemp[i] = y[i] - ytemp[i]; /* now have error estimate */

errmax = (errmax > fabs(ytemp[i]/yscal[i])) ?

errmax : fabs(ytemp[i]/yscal[i]);

}

errmax = errmax/eps; /* scale relative to required tolerance */

if(errmax > one){ /* reduce step size if too big */

h = safety*h*pow(errmax, pshrnk);

goto REDO; /* I hate this, but it was needed */

} else { /* Estimate good size for next step */

hdid[0] = h;

if(errmax > errcon){ hnext[0] = safety*h*(pow(errmax, pgrow));

} else { hnext[0] = 4.0*h;}

}

for(i = 0; i < n; i++) y[i] = y[i] + ytemp[i]*fcor; /* Reduce Error */

}

 

 

 

 

 

void rk_odeint(ystart, nvar, x1, x2, eps, h1, hmin, nok, nbad)

double *ystart, x1, x2, eps, h1, hmin;

int nvar, *nok, *nbad;

/*

* ADAPTIVE RUNGE-KUTTA ALGORITHM

*

* This is the Runge-Kutta driver with adaptive stepsize. This routine

* will use the user supplied routine ’rk_derivs(x, y, dydx)’.

*

* Variables: ystart: initial state of system

* nvar: number of variables

* x1: start independent variable

* x2: end independent variable

* eps: Required Tolerance

* h1: guess of first step size

* hmin: minimum allowable step size

*

* Return: nok: number of good steps

* nbad: number of bad step sizes encountered

*

* June 25th, 1990.

*/

{

static int maxstp,

i,

kmax,

nstp;

static double two,

zero,

tiny,

h,

hdid,

hnext,

x;

 

static double yscal[30], /* These three arrays must be at */

y[30], /* least ’nvar’ elements long */

dydx[30];

 

x = x1;

 

h = ((x2 - x1) > 0) ? fabs(h1) : -fabs(h1);

two = 2.0;

zero = 0.0;

tiny = 1.e-30;

maxstp = 10000;

nok[0] = 0;

nbad[0] = 0;

for(i = 0; i < nvar; i++) y[i] = ystart[i];

for(nstp = 0; nstp < maxstp; nstp++){ /* limit steps to maxstp */

rk_derivs(x, y, dydx);

for(i = 0; i < nvar; i++) /* scaling to monitor accuracy */

yscal[i]=fabs(y[i])+fabs(h*dydx[i])+tiny;

if((x + h - x2)*(x + h - x1) > zero) h = x2 - x;

rk_qc(y, dydx, nvar, &x, h, eps, yscal, &hdid, &hnext);

if(hdid == h){ nok[0]++;

}else{ nbad[0]++;}

if(((x - x2)*(x2 - x1)) >= zero){

for(i = 0; i < nvar; i++) ystart[i]=y[i];

return; /* exit normally */

}

if(fabs(hnext) < hmin) puts("stepsize smaller than minimum.\n");

h = hnext;

}

puts("too many steps.\n");

}