APPENDIX I - NEURAL NETWORK MOTION PLANNING MODULE

I.1 Introduction:

The neural network routines for path planning are quite different from the other methods. To service the neural network requirements, a module has been created. This module handles creation, destruction, saving, loading, training, simulation, and testing of neural networks for path planning. There are two important data structures used in these routines. The first is the neural network itself; most of the related data operations are carried out in the neural network routines (hidden from view). The second is the training data for the neural network, which is stored in an array of structures designed to hold trajectory data for the robot.

The array of trajectory structures is used for training the neural network, testing convergence, and storing path planning and tracking results. The training algorithms pull information from this array to train the neural network. The path planning algorithm stores the trajectory at each time step in this array as it moves toward the goal. The path tracker stores the trajectories as it follows a point moving along a circular path in space.
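
For reference, a minimal sketch of the trajectory structure is given below. It is inferred from the fields read and written in the code listing; the actual declaration (named train_data) resides in a shared header and may contain further members.

        typedef struct {
                double t1_position, t2_position;         /* joint angles */
                double t1_velocity, t2_velocity;         /* joint velocities */
                double t1_acceleration, t2_acceleration; /* joint accelerations */
                double t1_torque, t2_torque;             /* joint torques */
                double t1_target, t2_target;             /* goal joint angles */
                double x_position, y_position;           /* Cartesian position */
                double x_target, y_target;               /* Cartesian goal */
                double t1_output, t2_output;             /* net outputs (testing) */
                double t1_error, t2_error;               /* output errors (testing) */
                /* Cartesian velocity, acceleration and output members also appear */
        } train_data;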

The training trajectories are put into the array by the neural network subroutines. When training is to begin, the order of the array is scrambled if point training is to be used. The training algorithm is then called; it will use either a standard generic algorithm, or a specialized algorithm if one is available (for speed). After training, the network may be tested with the testing algorithm, which directly compares the contents of the training trajectory array to the network outputs and derives statistical measures.

Functions are provided to load and save the neural network, together with its configuration information, so that the executive program can deal with many different configurations easily. There are also a number of intuitive functions for clearing networks and training sets, and routines for creating networks and training sets. A sketch of a typical session appears below.
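
A typical session, sketched with the routine names from the listing below (the file name is only an example):

        nt_create();             /* build a network from the current flags */
        nt_build();              /* expand the endpoint buffer into trajectories */
        nt_train();              /* train, using a specialized routine if one fits */
        nt_test();               /* place convergence statistics in globals */
        nt_save("example.net");  /* store the weights and the configuration */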

I.2 Simulation Subroutines:

nt_trajectory() - A path may be generated by the neural network with this subroutine. The intent of this routine is to use the start and stop positions, along with the currently loaded neural network, to generate a path. This method also uses the dynamics simulation module. The routine is capable of handling almost every combination of inputs and outputs possible. The torque output from the controller may also be limited or scaled. Steady-state convergence is an outstanding problem with this work, and a number of temporary measures have been instituted in the interim.
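
A sketch of a call, with illustrative joint angles (the path array and the TRAIN_SET constant are those used in the listing):

        static train_data path[TRAIN_SET];
        int n;  /* receives the number of path segments */

        /* plan from (30, 45) to (90, -20) under torque control */
        nt_trajectory(TORQUE, 30.0, 45.0, 90.0, -20.0, path, 0, &n);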

nt_tracker() - This subroutine is almost an exact duplicate of the path generator. The only difference is the substitution of point tracking for path motion. This allows a moving set point to be used, as would be done with a traditional control system. In this case the point moves in a circle on the x axis, passing between the first and fourth quadrants. A file may be created that contains the number of control steps to be used and the duration of the motion; its format is shown below.
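
Both routines read an optional file named steps.dat; the first value is the number of control steps, and the second (read only by the tracker) is the duration of one circuit in seconds. An example file:

        150
        4.5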

I.3 Training and Testing Subroutines:

nt_train() - A central terminal point for all neural network training calls. This was set up to allow faster training of the neural network in the commonly used cases, which is achieved by calling a number of other routines. One available routine will train any neural network configuration, very slowly. Specialized training routines have also been written for the special cases. This routine decides when use of the specialized routines is warranted, as summarized below.
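
The selection logic in nt_train() can be summarized as follows (an input is considered enabled when its flag is set to 1):

        /* Special case selection in nt_train():
         *   #1: output POSITION; inputs position + difference
         *   #2: inputs velocity + acceleration + difference
         *   #3: inputs velocity + difference
         *   #4: input difference only
         *   #5: duplicate of #1 (unreachable; nt_t_5() is a placeholder)
         *   #6: inputs position + velocity + difference
         * anything else falls through to the generic trainer nt_t_gen()
         */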

nt_t_gen() - The generic neural network training program. When called, this program will use all inputs and examine all output combinations possible. This is very tedious and slow. If a great deal of training is to be done, a specialized training algorithm should be written (nt_t_5() is as yet unused).

nt_t_1() - This program will train the neural network for inputs of Position and Difference, and an output of Position. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set, performing all mathematical operations before training, for faster execution during training.

nt_t_2() - This program will train the neural network for inputs of Velocity, Acceleration, and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set, performing all mathematical operations before training, for faster execution during training.

nt_t_3() - This program will train the neural network for inputs of Velocity and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set, performing all mathematical operations before training, for faster execution during training.

nt_t_4() - This program will train the neural network for an input of Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set, performing all mathematical operations before training, for faster execution during training.

nt_t_5() - This is an unused function which has been left for ease of expansion, for when another fast training algorithm is to be added. Any algorithm added here should be modelled on the other fast training algorithms.

nt_t_6() - This program will train the neural network for inputs of Velocity, Position, and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set, performing all mathematical operations before training, for faster execution during training.

nt_test() - After training it is desirable to obtain a measure of convergence. This algorithm uses the training trajectories currently stored in the array to compute comparative statistics on the neural network output. These statistics include the average error and the RMS error. The results are not returned directly, but are placed in global variables. Any set of inputs and outputs should work with this subroutine.
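
For N training points with per-joint output errors e, the measures are the mean error and the RMS error, computed per joint and for both joints combined:

        average = (1/N) * sum(e)
        RMS     = sqrt( (1/N) * sum(e*e) )

(in the listing the RMS values are stored in the variables named *_deviation).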

nt_set_scramble() - Training set order can be of importance when particular training techniques are used. In particular, if point training (see the neural network appendix) is used, then the set must be presented in a random order. This routine randomly swaps training trajectories in the array, which should leave the order well scrambled.
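
In the listing the scramble is implemented as 2 * end_train random pairwise swaps, using the spare slot t_set[TRAIN_SET] as the swap buffer; each index is drawn as

        var_1 = (int)(end_train*(double)(rand() & 32767)/32767.0);

which maps the low 15 bits of rand() onto an index into the training set.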

I.4 Training Trajectory and Neural Network Maintenance:

nt_load() - The neural network is stored in an ASCII file format. This subroutine may be used to recover the network, along with the configuration information stored with it.

nt_save() - The neural network may be saved to disk with this routine. The configuration of the neural network is also saved at the same time.
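
A sketch of a save/load round trip (the file name is only an example; NO_ERROR is the status value used in the listing):

        nt_save("example.net");
        if(nt_load("example.net") == NO_ERROR){
                /* network and configuration variables restored */
        }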

nt_clear() - When the Training Trajectory array is no longer needed, this function will empty the contents.

nt_build() - This subroutine will use all of the endpoints in the path endpoint buffer to generate a set of training trajectories for the neural network. The trajectories are generated with whatever configuration flags are set.
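
The endpoint buffer entries consumed by nt_build() carry only the start and end joint angles of each path (end_of_buffer holds the count of entries):

        /* buffer[i].t1_start, buffer[i].t2_start  - start joint angles
         * buffer[i].t1_end,   buffer[i].t2_end    - end joint angles  */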

nt_add() - When a single trajectory is to be generated, this function is called. Using the current path planning flags, a set of trajectory points along a path is generated and appended to the list.

nt_new() - Both the neural network and the training data are eliminated by this subroutine. The logic is that if the neural network is to be wiped out, then the training data that was intended for it should be wiped out as well.

nt_create() - Using the toggles and flags set for the neural network inputs and outputs, this subroutine reinitializes the neural network to fit the new configuration. This handles bias neurons, hidden layers of various widths, and a variable number of inputs (as dictated by the toggles).
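
From the arc definitions in nt_create() below, the fixed neuron numbering used throughout the module can be summarized as:

        /* Neuron index map:
         *   0     - bias input (held at 1.0 when n_bias == 1)
         *   1-2   - joint positions        7-8   - target positions
         *   3-4   - joint velocities       9-10  - position differences
         *   5-6   - joint accelerations    11-12 - joint torques
         *   18-19 - output neurons (joints 1 and 2)
         *   20+   - hidden neurons, n_width per layer                 */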

/*

* ROBOT TRAJECTORY NET PROGRAM

*

* This code contains the trajectory planning functions, independent of the

* user interface to Sunview. This causes some redundancy, but it also

* makes the results more flexible. This may be used with the Sunview

* interface, or with a background job program for training.

*

* A complete set of routines are present for training, testing, and building

* neural networks for the control of robots. These routines have also been

* constructed to interact with the other path planning routines.

*

* December 1st, 1990.

*/

 

 

#define PSCALE_1 200.0 /* offset and scale values for neural */

#define PSCALE_2 200.0 /* network inputs and outputs */

#define VSCALE_1 12.0 /* It should be noted that the outputs*/

#define VSCALE_2 12.0 /* are scaled so that output limits */

#define ASCALE_1 12.0 /* are above the robot limits. */

#define ASCALE_2 12.0

#define TSCALE_1 12.0

#define TSCALE_2 12.0

 

#define POFF_1 200.0

#define POFF_2 200.0

#define VOFF_1 12.0

#define VOFF_2 12.0

#define AOFF_1 12.0

#define AOFF_2 12.0

#define TOFF_1 12.0

#define TOFF_2 12.0

 

 

 

 

 

void nt_trajectory(output_type, t1s, t2s, t1e, t2e, path, start, n)

int start,

*n,

output_type;

double t1s, t2s,

t1e, t2e;

train_data *path;

/*

* GENERATE TRAJECTORY - NEURAL NETWORK

*

* A trajectory will be generated, based upon the current operation modes

* using the neural network controller. Unfortunately this code is bulky

* and crude in spots, but it runs quickly. This code has a problem with

* convergence, thus two techniques have been used. One is used to approximate

* convergence, the other is a maximum limit on the number of steps. Together

* these effectively cut off the simulation at a reasonable limit.

*

* Variables: output_type = the output type of the neural network

* t1s, t2s = the start configuration of the manipulator

* t1e, t2e = the end configuration of the manipulator

* path = the array of structures to hold the path

* start = the start point in the path array

*

* Returns: n = the number of segments in the path

*

* March 28th, 1990.

*/

{

static int i, /* work variable */

flag, /* set when path turns back on itself */

steps = 120, /* number of time steps */

flag_e, /* end of motion flag */

flag_1, /* Convergence Flag 1st */

flag_2, /* 2nd convergence flag */

flag_3, /* 3rd convergence flag */

flag_ok; /* stop message printed */

FILE *step_file; /* file containing num. steps*/

static double scale; /* for scaled outputs */

 

/*

* If the user wants an alternate number of time steps, a file

* may be created which stores the number, it is loaded here.

*/

step_file = fopen("steps.dat", "r"); /* Get the number of steps */

if(step_file != NULL){ /* From a file if it exists */

fscanf(step_file, "%d", &i);

if((i > 0) && (i < TRAIN_SET)){

steps = i;

}

fclose(step_file); /* close only when the open succeeded */

}

 

/*

* Define initial conditions for control

*/

init_inverse_dynamics(); /* set up dynamics module */

flag = 0; /* */

flag_e = flag_1 = flag_2 = flag_3 = 0; /* Clear Path Done flags */

flag_ok = 0; /* stop message not printed */

 

/*

* Set the initial path conditions (i.e. before the first step)

*/

path[start].t1_target = t1e; /* Set up initial conditions */

path[start].t2_target = t2e; /* for path points array */

path[start].t1_position = t1s;

path[start].t2_position = t2s;

kin_forward(t1s, t2s, &path[start].x_position, &path[start].y_position);

kin_forward(t1e, t2e, &path[start].x_target, &path[start].y_target);

path[start].t1_velocity = 0.0;

path[start].t2_velocity = 0.0;

path[start].t1_acceleration = 0.0;

path[start].t2_acceleration = 0.0;

inverse_dynamics(path[start].t1_position, path[start].t2_position,

path[start].t1_velocity, path[start].t2_velocity,

path[start].t1_acceleration, path[start].t2_acceleration,

&path[start].t1_torque, &path[start].t2_torque);

 

/*

* Start loop of 'steps' path segments. This is only a temporary measure

* to be used until accuracy is a viable limiting factor.

*/

for(i = 1+start; ((i <= steps+start) && ((flag_1 == 0)

|| (flag_2 == 0) || (flag_3 == 0))); i++){

 

/*

* update network inputs with previous control state, and solve

*/

net_input(1, (path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);

net_input(2, (path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);

net_input(3, (path[i-1].t1_velocity + VOFF_1)/VSCALE_1/2.0);

net_input(4, (path[i-1].t2_velocity + VOFF_2)/VSCALE_2/2.0);

net_input(5, (path[i-1].t1_acceleration + AOFF_1)/ASCALE_1/2.0);

net_input(6, (path[i-1].t2_acceleration + AOFF_2)/ASCALE_2/2.0);

net_input(7, (path[i-1].t1_target + POFF_1)/PSCALE_1/2.0);

net_input(8, (path[i-1].t2_target + POFF_2)/PSCALE_2/2.0);

net_input(9, (path[i-1].t1_target - path[i-1].t1_position

+POFF_1)/PSCALE_1/2.0);

net_input(10, (path[i-1].t2_target - path[i-1].t2_position

+POFF_2)/PSCALE_2/2.0);

net_input(11, (path[i-1].t1_torque + TOFF_1)/TSCALE_1/2.0);

net_input(12, (path[i-1].t2_torque + TOFF_2)/TSCALE_2/2.0);

net_solve();

 

/*

* Determine current state of output, based upon output type

*/

if(output_type == POSITION){

path[i].t1_position =net_output(18)*PSCALE_1*2.0-POFF_1;

path[i].t2_position =net_output(19)*PSCALE_2*2.0-POFF_2;

}

if(output_type == VELOCITY){

path[i].t1_velocity =net_output(18)*VSCALE_1*2.0-VOFF_1;

path[i].t2_velocity =net_output(19)*VSCALE_2*2.0-VOFF_2;

if(path[i].t1_velocity > VMAX_T1)

path[i].t1_velocity = VMAX_T1;

if(path[i].t1_velocity < -VMAX_T1)

path[i].t1_velocity = -VMAX_T1;

if(path[i].t2_velocity > VMAX_T2)

path[i].t2_velocity = VMAX_T2;

if(path[i].t2_velocity < -VMAX_T2)

path[i].t2_velocity = -VMAX_T2;

path[i].t1_position = path[i-1].t1_position +

path[i-1].t1_velocity * T_STEP;

path[i].t2_position = path[i-1].t2_position +

path[i-1].t2_velocity * T_STEP;

}

if(output_type == ACCELERATION){

path[i].t1_acceleration = net_output(18)*ASCALE_1*2.0

-AOFF_1;

path[i].t2_acceleration = net_output(19)*ASCALE_2*2.0

-AOFF_2;

if(path[i].t1_acceleration > AMAX_T1)

path[i].t1_acceleration = AMAX_T1;

if(path[i].t1_acceleration < -AMAX_T1)

path[i].t1_acceleration = -AMAX_T1;

if(path[i].t2_acceleration > AMAX_T2)

path[i].t2_acceleration = AMAX_T2;

if(path[i].t2_acceleration < -AMAX_T2)

path[i].t2_acceleration = -AMAX_T2;

path[i].t1_velocity = path[i-1].t1_velocity +

path[i-1].t1_acceleration * T_STEP;

path[i].t2_velocity = path[i-1].t2_velocity +

path[i-1].t2_acceleration * T_STEP;

path[i].t1_position = path[i-1].t1_position +

path[i-1].t1_velocity * T_STEP +

0.5*path[i-1].t1_acceleration * T_STEP*T_STEP;

path[i].t2_position = path[i-1].t2_position +

path[i-1].t2_velocity * T_STEP +

0.5*path[i-1].t2_acceleration * T_STEP*T_STEP;

}

if(output_type == TORQUE){

path[i].t1_torque =net_output(18)*TSCALE_1*2.0-TOFF_1;

path[i].t2_torque =net_output(19)*TSCALE_2*2.0-TOFF_2;

/*

* If the train_opt flag is set then the output will

* be forced into bang-bang control mode. If not set

* the output will only be clipped at the torque limits

*/

if(train_opt == 0){ /* output clipper */

if(path[i].t1_torque > TMAX_T1)

path[i].t1_torque = TMAX_T1;

if(path[i].t1_torque < -TMAX_T1)

path[i].t1_torque = -TMAX_T1;

if(path[i].t2_torque > TMAX_T2)

path[i].t2_torque = TMAX_T2;

if(path[i].t2_torque < -TMAX_T2)

path[i].t2_torque = -TMAX_T2;

/*

* the output will be scaled in terms of the dominant

* joint. This will be done for all values over

* 2 N.m on either output.

*/

} else { /* output scaling */

if((fabs(path[i].t1_torque) >

fabs(path[i].t2_torque)) &&

(fabs(path[i].t1_torque) > 2.0)){

scale = 10.0/fabs(path[i].t1_torque);

path[i].t1_torque *= scale;

path[i].t2_torque *= scale;

}

if((fabs(path[i].t2_torque) >

fabs(path[i].t1_torque)) &&

(fabs(path[i].t2_torque) > 2.0)){

scale = 10.0/fabs(path[i].t2_torque);

path[i].t2_torque *= scale;

path[i].t1_torque *= scale;

}

}

/*

* Using NN output torque, resultant motion is found

*/

dynamics(T_STEP,

path[i-1].t1_position,

path[i-1].t2_position,

path[i-1].t1_velocity,

path[i-1].t2_velocity,

path[i].t1_torque,

path[i].t2_torque,

&path[i].t1_position,

&path[i].t2_position,

&path[i].t1_velocity,

&path[i].t2_velocity,

&path[i].t1_acceleration,

&path[i].t2_acceleration);

}

path[i].t1_target = path[i-1].t1_target;

path[i].t2_target = path[i-1].t2_target;

flag_3 = flag_2;

flag_2 = flag_1;

/*

* This scaling factor has been used for the maximum

* acceleration controller. This was used to determine

* convergence when maximum acceleration control is used.

* (not in use currently)....

*

if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 1.0)

&& (fabs(path[i].t1_acceleration) < 1.0)

&& (fabs(path[i].t2_acceleration) < 1.0)){

flag_1 = 1;

printf("Done at t = %f\n", (i-start-1)*T_STEP);

} */

 

/*

* Another end of motion measure. This determines when the

* motion has stopped based upon position (and acceleration

* which has been commented out)

*/

if((fabs(path[i].t1_target - path[i].t1_position) < 2.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 2.0)

/* && (fabs(path[i].t1_acceleration) < 1.0)

&& (fabs(path[i].t2_acceleration) < 1.0) */){

flag_e = 1;

printf("Done at t = %f\n", (i-start-1)*T_STEP);

}

 

/*

* This will wait for position to have been converged for

* one second, before stopping the simulation.

*/

if(flag_e > 0){

flag_e++;

if(((double)flag_e*T_STEP) > 1.0) flag_1 = 1;

}

 

/*

* Prints out a message when robot has effectively stopped.

*/

if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 1.0) &&

(flag_ok == 0)){

printf("Arived at t = %f\n", (i-start-1)*T_STEP);

flag_ok = 1;

}

 

/*

* This section is intended to ensure that the path will be

* analyzed as long as the steady state is not being tested

* and the path of the robot has not turned back upon itself

*/

if((output_type != TORQUE) && (i > 2+start) &&(test_flag != 1)){

if(((path[i-2].t1_position - path[i-1].t1_position)*

(path[i-1].t1_position - path[i].t1_position) <= 0.0)

&& ((path[i-2].t2_position - path[i-1].t2_position)*

(path[i-1].t2_position - path[i].t2_position) <= 0.0)){

flag = 1;

}

}

}

 

/*

* set number of path segments

*/

n[0] = i - start;

}

 

 

 

 

 

void nt_tracker(output_type, t1s, t2s, t1e, t2e, path, start, n)

int start,

*n,

output_type;

double t1s, t2s,

t1e, t2e;

train_data *path;

/*

* FOLLOW TRAJECTORY - NEURAL NETWORK

*

* A trajectory will be followed, based upon the current operation modes

* using the neural network controller. This section of code is also quite

* bulky for the purpose of making the routines generic. This was written

* to allow testing of the neural networks in a traditional controller

* format. The format is quite similar to that above, except that the

* target position is now a moving point in space. A circle in the right

* hand plane of space is currently in use.

*

* Note: comments in this section are sparse, because the code is nearly

* identical to that above.

*

* Variables: output_type = the output type of the neural network

* t1s, t2s = the start configuration of the manipulator

* t1e, t2e = the end configuration of the manipulator

* path = the array of structures to hold the path

* start = the start point in the path array

*

* Returns: n = the number of segments in the path

*

* March 28th, 1990.

*/

{

static int i, /* work variable */

flag, /* set when path turns back on itself */

steps = 200, /* default number of steps */

flag_e, /* end of motion flag */

flag_1, /* convergence flags */

flag_2, /* 2nd convergence flag */

flag_3, /* 3rd convergence flag */

flag_ok; /* stop message printed */

FILE *step_file; /* file with steps in it */

static double scale, /* for scaled torque outputs */

angle_1, angle_2, /* for position tracking */

pos_x, pos_y, /* for position tracking */

p_time = 4.5, /* a set path time */

aang; /* angle of set point on circle */

 

/*

* Load alternate number of steps

*/

step_file = fopen("steps.dat", "r"); /* Get the number of steps */

if(step_file != NULL){ /* From a file if it exists */

fscanf(step_file, "%d", &i);

fscanf(step_file, "%lf", &p_time);

if((i > 0) && (i < TRAIN_SET)){

steps = i;

}

fclose(step_file); /* close only when the open succeeded */

}

 

/*

* Define initial conditions for control

*/

init_inverse_dynamics();

flag = 0;

flag_e = flag_1 = flag_2 = flag_3 = 0;

flag_ok = 0;

 

/*

* Define initial actuator state.

*/

path[start].t1_target = t1e; /* Set up initial conditions */

path[start].t2_target = t2e;
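
/* start the arm on the tracking circle: x = LENGTH_1 + LENGTH_2/2, y = 0 */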

kin_inverse( &path[start].t1_position,

&path[start].t2_position,

LENGTH_2/2.0 + LENGTH_1, 0.0, 0);

kin_forward(t1s, t2s, &path[start].x_position, &path[start].y_position);

kin_forward(t1e, t2e, &path[start].x_target, &path[start].y_target);

path[start].t1_velocity = 0.0;

path[start].t2_velocity = 0.0;

path[start].t1_acceleration = 0.0;

path[start].t2_acceleration = 0.0;

inverse_dynamics(path[start].t1_position, path[start].t2_position,

path[start].t1_velocity, path[start].t2_velocity,

path[start].t1_acceleration, path[start].t2_acceleration,

&path[start].t1_torque, &path[start].t2_torque);

 

/*

* Start loop of 'steps' path segments. This is only a temporary measure

* to be used until accuracy is a viable limiting factor.

*/

for(i = 1+start; ((i <= steps+start) && ((flag_1 == 0)

|| (flag_2 == 0) || (flag_3 == 0))); i++){

 

/*

* update network inputs with previous control state, and solve

*/

net_input(1, (path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);

net_input(2, (path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);

net_input(3, (path[i-1].t1_velocity + VOFF_1)/VSCALE_1/2.0);

net_input(4, (path[i-1].t2_velocity + VOFF_2)/VSCALE_2/2.0);

net_input(5, (path[i-1].t1_acceleration + AOFF_1)/ASCALE_1/2.0);

net_input(6, (path[i-1].t2_acceleration + AOFF_2)/ASCALE_2/2.0);

net_input(7, (path[i-1].t1_target + POFF_1)/PSCALE_1/2.0);

net_input(8, (path[i-1].t2_target + POFF_2)/PSCALE_2/2.0);
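
/* advance the set point around the circle: the elapsed fraction of
   p_time times 2*pi (pi approximated as 3.141) */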

aang = ((double)(i-start)*T_STEP/p_time)*3.141*2.0;

pos_x = LENGTH_1 + LENGTH_2/2.0* cos(aang);

pos_y = LENGTH_2/2.0 * sin(aang);

kin_inverse(&angle_1, &angle_2, pos_x, pos_y, 0);

net_input(9, (angle_1 - path[i-1].t1_position

+POFF_1)/PSCALE_1/2.0);

net_input(10, (angle_2 - path[i-1].t2_position

+POFF_2)/PSCALE_2/2.0);

net_input(11, (path[i-1].t1_torque + TOFF_1)/TSCALE_1/2.0);

net_input(12, (path[i-1].t2_torque + TOFF_2)/TSCALE_2/2.0);

net_solve();

 

/*

* Determine current state of output, based upon output type

*/

if(output_type == POSITION){

path[i].t1_position =net_output(18)*PSCALE_1*2.0-POFF_1;

path[i].t2_position =net_output(19)*PSCALE_2*2.0-POFF_2;

}

if(output_type == VELOCITY){

path[i].t1_velocity =net_output(18)*VSCALE_1*2.0-VOFF_1;

path[i].t2_velocity =net_output(19)*VSCALE_2*2.0-VOFF_2;

if(path[i].t1_velocity > VMAX_T1)

path[i].t1_velocity = VMAX_T1;

if(path[i].t1_velocity < -VMAX_T1)

path[i].t1_velocity = -VMAX_T1;

if(path[i].t2_velocity > VMAX_T2)

path[i].t2_velocity = VMAX_T2;

if(path[i].t2_velocity < -VMAX_T2)

path[i].t2_velocity = -VMAX_T2;

path[i].t1_position = path[i-1].t1_position +

path[i-1].t1_velocity * T_STEP;

path[i].t2_position = path[i-1].t2_position +

path[i-1].t2_velocity * T_STEP;

}

if(output_type == ACCELERATION){

path[i].t1_acceleration = net_output(18)*ASCALE_1*2.0

-AOFF_1;

path[i].t2_acceleration = net_output(19)*ASCALE_2*2.0

-AOFF_2;

if(path[i].t1_acceleration > AMAX_T1)

path[i].t1_acceleration = AMAX_T1;

if(path[i].t1_acceleration < -AMAX_T1)

path[i].t1_acceleration = -AMAX_T1;

if(path[i].t2_acceleration > AMAX_T2)

path[i].t2_acceleration = AMAX_T2;

if(path[i].t2_acceleration < -AMAX_T2)

path[i].t2_acceleration = -AMAX_T2;

path[i].t1_velocity = path[i-1].t1_velocity +

path[i-1].t1_acceleration * T_STEP;

path[i].t2_velocity = path[i-1].t2_velocity +

path[i-1].t2_acceleration * T_STEP;

path[i].t1_position = path[i-1].t1_position +

path[i-1].t1_velocity * T_STEP +

0.5*path[i-1].t1_acceleration * T_STEP*T_STEP;

path[i].t2_position = path[i-1].t2_position +

path[i-1].t2_velocity * T_STEP +

0.5*path[i-1].t2_acceleration * T_STEP*T_STEP;

}

if(output_type == TORQUE){

path[i].t1_torque =net_output(18)*TSCALE_1*2.0-TOFF_1;

path[i].t2_torque =net_output(19)*TSCALE_2*2.0-TOFF_2;

if(train_opt == 0){

if(path[i].t1_torque > TMAX_T1)

path[i].t1_torque = TMAX_T1;

if(path[i].t1_torque < -TMAX_T1)

path[i].t1_torque = -TMAX_T1;

if(path[i].t2_torque > TMAX_T2)

path[i].t2_torque = TMAX_T2;

if(path[i].t2_torque < -TMAX_T2)

path[i].t2_torque = -TMAX_T2;

} else {

if((fabs(path[i].t1_torque) >

fabs(path[i].t2_torque)) &&

(fabs(path[i].t1_torque) > 2.0)){

scale = 10.0/fabs(path[i].t1_torque);

path[i].t1_torque *= scale;

path[i].t2_torque *= scale;

}

if((fabs(path[i].t2_torque) >

fabs(path[i].t1_torque)) &&

(fabs(path[i].t2_torque) > 2.0)){

scale = 10.0/fabs(path[i].t2_torque);

path[i].t2_torque *= scale;

path[i].t1_torque *= scale;

}

}

dynamics(T_STEP,

path[i-1].t1_position,

path[i-1].t2_position,

path[i-1].t1_velocity,

path[i-1].t2_velocity,

path[i].t1_torque,

path[i].t2_torque,

&path[i].t1_position,

&path[i].t2_position,

&path[i].t1_velocity,

&path[i].t2_velocity,

&path[i].t1_acceleration,

&path[i].t2_acceleration);

}

path[i].t1_target = path[i-1].t1_target;

path[i].t2_target = path[i-1].t2_target;

flag_3 = flag_2;

flag_2 = flag_1;

 

/*

*

if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 1.0)

&& (fabs(path[i].t1_acceleration) < 1.0)

&& (fabs(path[i].t2_acceleration) < 1.0)){

flag_1 = 1;

printf("Done at t = %f\n", (i-start-1)*T_STEP);

} */

 

if((fabs(path[i].t1_target - path[i].t1_position) < 2.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 2.0)

/* && (fabs(path[i].t1_acceleration) < 1.0)

&& (fabs(path[i].t2_acceleration) < 1.0) */){

flag_e = 1;

printf("Done at t = %f\n", (i-start-1)*T_STEP);

}

if(flag_e > 0){

flag_e++;

if(((double)flag_e*T_STEP) > 1.0) flag_1 = 1;

}

 

if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&

(fabs(path[i].t2_target - path[i].t2_position) < 1.0) &&

(flag_ok == 0)){

printf("Arived at t = %f\n", (i-start-1)*T_STEP);

flag_ok = 1;

}

/*

* This section is intended to ensure that the path will be

* analyzed as long as the steady state is not being tested

* and the path of the robot has not turned back upon itself

*/

if((output_type != TORQUE) && (i > 2+start)&& (test_flag != 1)){

if(((path[i-2].t1_position - path[i-1].t1_position)*

(path[i-1].t1_position - path[i].t1_position) <= 0.0)

&& ((path[i-2].t2_position - path[i-1].t2_position)*

(path[i-1].t2_position - path[i].t2_position) <= 0.0)){

flag = 1;

}

}

}

/*

* set number of path segments

*/

n[0] = i - start;

}

 

 

 

 

 

int nt_load(file)

char *file;

/*

* LOAD NEURAL NETWORK

*

* The named neural network is loaded with this routine. The relevant data is

* also recovered from the ’extra string’ storage capability of the

* neural network subroutines.

*

* Variables: file = the name of the neural network file to load

*

* Returns: a value of NO_ERROR if load was successful

*

* March 8th, 1990.

*/

{

static int error; /* the error variable */

static char tex[50]; /* A work string for data conversion */

 

error = ERROR;

 

/*

* If load is a success then recover variables

*/

if(net_read(file) != ERROR){ /* load net, and check if ok */

net_loaded_flag = 1; /* indicate that net is in */

end_train = 0; /* clear training set */

/*

* The important system variables have been stored in strings

* associated with the network. Here the values are recovered.

*/

net_get_string(0, tex);

sscanf(tex, "%s", run_title);

net_get_string(1, tex);

sscanf(tex, "%d", &n_width);

net_get_string(2, tex);

sscanf(tex, "%d", &n_layers);

net_get_string(3, tex);

sscanf(tex, "%d", &n_type);

net_get_string(4, tex);

sscanf(tex, "%d", &n_bias);

net_get_string(5, tex);

sscanf(tex, "%d", &n_position);

net_get_string(6, tex);

sscanf(tex, "%d", &n_velocity);

net_get_string(7, tex);

sscanf(tex, "%d", &n_acceleration);

net_get_string(8, tex);

sscanf(tex, "%d", &n_target);

net_get_string(9, tex);

sscanf(tex, "%d", &n_output);

net_get_string(10, tex);

sscanf(tex, "%d", &n_iterations);

net_get_string(11, tex);

sscanf(tex, "%lf", &n_smooth);

net_get_string(12, tex);

sscanf(tex, "%lf", &n_learn);

net_get_string(13, tex);

sscanf(tex, "%d", &elbow_flag);

net_get_string(14, tex);

sscanf(tex, "%d", &updater);

net_get_string(15, tex);

sscanf(tex, "%d", &train_up);

net_get_string(16, tex);

sscanf(tex, "%d", &train_alt);

net_get_string(17, tex);

sscanf(tex, "%d", &t_number);

net_get_string(18, tex);

sscanf(tex, "%d", &n_difference);

net_get_string(19, tex);

sscanf(tex, "%d", &train_opt);

net_get_string(20, tex);

sscanf(tex, "%d", &train_sim);

net_get_string(21, tex);

sscanf(tex, "%d", &n_torque);

net_get_string(22, tex);

sscanf(tex, "%lf", &T_STEP);

net_get_string(23, tex);

sscanf(tex, "%s", t_file);

 

error = NO_ERROR; /* indicate load ok */

}

 

return(error);

}

 

 

 

 

 

void nt_save(file)

char *file;

/*

* SAVE NEURAL NETWORK

*

* The neural network may be saved with these routines. The important data

* is also saved with the extra string functions of the neural network

* subroutines.

*

* Variables: file - the name of the file to save to

*

* March 8th, 1990.

*/

{

static char tex[50]; /* Work string for storing variables */

 

/*

* If a net is loaded then prepare variables, and save

*/

if(net_loaded_flag == 1){

/*

* variables are stored in message slots available with

* neural network subroutines.

*/

net_put_string(0, run_title);

sprintf(tex, "%d", n_width);

net_put_string(1, tex);

sprintf(tex, "%d", n_layers);

net_put_string(2, tex);

sprintf(tex, "%d", n_type);

net_put_string(3, tex);

sprintf(tex, "%d", n_bias);

net_put_string(4, tex);

sprintf(tex, "%d", n_position);

net_put_string(5, tex);

sprintf(tex, "%d", n_velocity);

net_put_string(6, tex);

sprintf(tex, "%d", n_acceleration);

net_put_string(7, tex);

sprintf(tex, "%d", n_target);

net_put_string(8, tex);

sprintf(tex, "%d", n_output);

net_put_string(9, tex);

sprintf(tex, "%d", n_iterations);

net_put_string(10, tex);

sprintf(tex, "%f", n_smooth);

net_put_string(11, tex);

sprintf(tex, "%f", n_learn);

net_put_string(12, tex);

sprintf(tex, "%d", elbow_flag);

net_put_string(13, tex);

sprintf(tex, "%d", updater);

net_put_string(14, tex);

sprintf(tex, "%d", train_up);

net_put_string(15, tex);

sprintf(tex, "%d", train_alt);

net_put_string(16, tex);

sprintf(tex, "%d", t_number);

net_put_string(17, tex);

sprintf(tex, "%d", n_difference);

net_put_string(18, tex);

sprintf(tex, "%d", train_opt);

net_put_string(19, tex);

sprintf(tex, "%d", train_sim);

net_put_string(20, tex);

sprintf(tex, "%d", n_torque);

net_put_string(21, tex);

sprintf(tex, "%lf", T_STEP);

net_put_string(22, tex);

sprintf(tex, "%s", t_file);

net_put_string(23, tex);

 

net_write(file); /* write network to disk */

}

}

 

 

 

 

void nt_clear()

/*

* WIPE OUT TRAINING SET

*

* The current neural network training set will be wiped out.

*

* March 8th, 1990.

*/

{

/*

* Reset end of training set pointer to start

*/

end_train = 0;

}

 

 

 

 

 

void nt_build()

/*

* BUILD A NEW TRAINING SET

*

* The old training set will be wiped out and replaced with a new one.

* This training set will consist of all trajectory paths described by

* the contents of the current path endpoint buffer.

*

* March 28th, 1990.

*/

{

static int i; /* work variable */

 

/*

* Set training set to empty

*/

end_train = 0; /* empty trajectory buffer */

/*

* Add a path for each trajectory in the path endpoints buffer.

*/

for(i = 0; i < end_of_buffer; i++){ /* go through endpoints buffer */

/*

* This routine adds a single path to the trajectory list

* using the currently defined flags.

*/

nt_add(buffer[i].t1_start, buffer[i].t2_start,

buffer[i].t1_end, buffer[i].t2_end);

printf("Added %d of %d\n", i, end_of_buffer);

}

}

 

 

 

 

 

void nt_add(t1_1, t2_1, t1_2, t2_2)

double t1_1, t2_1, t1_2, t2_2;

/*

* ADD A PATH TO THE TRAINING SET

*

* The current trajectory path will be appended to the training set.

* This is the only path which will be added.

*

* Variables: t1_1, t2_1 = the joint start trajectory point

* t1_2, t2_2 = the joint end trajectory point

*

* March 28th, 1990.

*/

{

static int n;

/*

* Set the training flag to get full trajectory generation

*/

train_flag = 1;

/*

* Update trajectory, update pointer for training, reset flag

*/

get_trajectory(t_number, t1_1,t2_1,t1_2,t2_2, t_set, end_train, &n);

end_train += n; /* increase pointer for new data */

 

train_flag = 0; /* undo training flag */

}

 

 

 

 

 

void nt_new()

/*

* WIPE OUT NEURAL NETWORK

*

* The currently configured neural network will be wiped out. This

* will also wipe out the training set. This does not change the options

* which have been entered.

*

* March 8th, 1990.

*/

{

/*

* Reset training set, reset network, and set net not loaded

*/

end_train = 0; /* get rid of all trajectory points */

net_init(n_type); /* reset neural net stuff */

net_loaded_flag = 0; /* indicate no net in program */

}

 

 

 

 

void nt_create()

/*

* BUILD NEW NEURAL NETWORK

*

* A new neural network will be built, based upon the current options entered.

*

* March 8th, 1990.

*/

{

static int i, /* work variables */

j,

k,

start, /* These two determine the start and */

offset; /* spacing of neurons for the hidden */

/* neurons */

 

start = 20; /* first location for hidden neurons */

offset = n_width; /* neurons in hidden layers */

 

/*

* Setup network

*/

net_init(n_type); /* reset network */

 

/*

* Apply bias neurons, if required. neuron 0 should be constant

* valued input.

*/

if(n_bias == 1){ /* check for bias flag set */

for(i = 0; i < n_layers-2; i++){ /* bias all hidden neurons */

for(j = start+i*offset; j <start+i*offset+n_width; j++){

net_arc_define(0, j);

}

}

net_arc_define(0, 18); /* bias output neurons */

net_arc_define(0, 19);

def_input(0); /* set neuron as an input */

net_input(0, 1.0); /* give an input value */

}

 

/*

* if required create first hidden layer

*/

if(n_layers > 2){ /* check for hidden layers */

for(j = start; j < start+n_width; j++){

/*

* Check for input flags set. if set then define

* inputs to different neurons in first hidden layer.

*/

if(n_position == 1){

net_arc_define(1, j);

net_arc_define(2, j);

}

if(n_velocity == 1){

net_arc_define(3, j);

net_arc_define(4, j);

}

if(n_acceleration == 1){

net_arc_define(5, j);

net_arc_define(6, j);

}

if(n_target == 1){

net_arc_define(7, j);

net_arc_define(8, j);

}

if(n_difference == 1){

net_arc_define(9, j);

net_arc_define(10, j);

}

if(n_torque == 1){

net_arc_define(11, j);

net_arc_define(12, j);

}

}

}

 

/*

* If no hidden layers then tie inputs to outputs

*/

if(n_layers == 2){

if(n_position == 1){

net_arc_define(1, 18);

net_arc_define(2, 19);

}

if(n_velocity == 1){

net_arc_define(3, 18);

net_arc_define(4, 19);

}

if(n_acceleration == 1){

net_arc_define(5, 18);

net_arc_define(6, 19);

}

if(n_target == 1){

net_arc_define(7, 18);

net_arc_define(8, 19);

}

if(n_difference == 1){

net_arc_define(9, 18);

net_arc_define(10, 19);

}

if(n_torque == 1){

net_arc_define(11, 18);

net_arc_define(12, 19);

}

}

 

/*

* Fully connect all hidden layers

*/

for(i = 0; i < n_layers-3; i++){

for(j = start+i*offset; j < start+i*offset+n_width; j++){

for(k=start+(i+1)*offset;k<start+(i+1)*offset+n_width;

k++){ net_arc_define(j, k);

}

}

}

/*

* Tie last hidden layer to outputs

*/

if(n_layers > 2){

for(i=start+offset*(n_layers-3);

i<start+offset*(n_layers-3)+n_width; i++){

net_arc_define(i, 18); /* for joint 1 */

net_arc_define(i, 19); /* for joint 2 */

}

}

 

/*

* Define inputs, as specified by user defined variables

*/

if(n_position == 1){

def_input(1);

def_input(2);

}

if(n_velocity == 1){

def_input(3);

def_input(4);

}

if(n_acceleration == 1){

def_input(5);

def_input(6);

}

if(n_target == 1){

def_input(7);

def_input(8);

}

if(n_difference == 1){

def_input(9);

def_input(10);

}

if(n_torque == 1){

def_input(11);

def_input(12);

}

 

/*

* define outputs and set net created flag

*/

def_output(18);

def_output(19);

 

net_loaded_flag = 1;

}

 

 

 

 

void nt_train()

/*

* TRAIN NEURAL NETWORK

*

* This routine is a junction for training neural networks. This structure

* has been chosen to help speed training. There is one subroutine which

* can train any neural network. This generic subroutine is very slow, thus

* special versions were made for the cases when popular training examples

* were tried. This means that if the special case cannot be found, the

* general case will be used.

*

* December 3rd, 1990.

*/

{

static int t_flag; /* Network trained flag */

 

t_flag = 1; /* indicates not trained; never cleared, so the case message always prints */

 

/*

* The input type flags are checked. If the input flags have a

* specific pattern the appropriate routine is called.

*/

if( (n_output == POSITION) &&

(n_position == 1) &&

(n_velocity != 1) &&

(n_acceleration != 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 1\n");

nt_t_1();

} else if((n_position != 1) &&

(n_velocity == 1) &&

(n_acceleration == 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 2\n");

nt_t_2();

} else if((n_position != 1) &&

(n_velocity == 1) &&

(n_acceleration != 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 3\n");

nt_t_3();

} else if((n_position != 1) &&

(n_velocity != 1) &&

(n_acceleration != 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 4\n");

nt_t_4();

} else if((n_output == POSITION) &&

(n_position == 1) &&

(n_velocity != 1) &&

(n_acceleration != 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 5\n");

nt_t_5();
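
/* note: this input pattern duplicates Special Case #1 above, so this
   branch is unreachable; nt_t_5() remains the empty expansion slot */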

} else if((n_position == 1) &&

(n_velocity == 1) &&

(n_acceleration != 1) &&

(n_difference == 1) &&

(n_torque != 1) &&

(n_target != 1)){

if(t_flag == 1)printf("Using Special Case # 6\n");

nt_t_6();

} else {

/*

* if the t_flag is still 1 then the network has not been

* trained and the default case will be used.

*/

if(t_flag == 1)printf("Using General Case\n");

nt_t_gen();

}

}

 

 

 

 

 

void nt_t_gen()

/*

* TRAIN THE NEURAL NETWORK

*

* The current training set is used to train the neural network. This is the

* generic trainer. This is exceptionally bad for speed.

*

* March 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e, t2_e; /* joint angle work variables */

/*

* Check that the training set and network exist

*/

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

net_input(1, (t_set[j].t1_position+POFF_1)

/2.0/PSCALE_1);

net_input(2, (t_set[j].t2_position+POFF_2)

/2.0/PSCALE_2);

net_input(3, (t_set[j].t1_velocity+VOFF_1)

/2.0/VSCALE_1);

net_input(4, (t_set[j].t2_velocity+VOFF_2)

/2.0/VSCALE_2);

net_input(5, (t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1);

net_input(6, (t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2);

net_input(7, (t_set[j].t1_target+POFF_1)

/2.0/PSCALE_1);

net_input(8, (t_set[j].t2_target+POFF_2)

/2.0/PSCALE_2);

net_input(9, (t_set[j].t1_target

- t_set[j].t1_position + POFF_1)

/2.0/PSCALE_1);

net_input(10, (t_set[j].t2_target

- t_set[j].t2_position + POFF_2)

/2.0/PSCALE_2);

net_input(11, (t_set[j].t1_torque+TOFF_1)

/2.0/TSCALE_1);

net_input(12, (t_set[j].t2_torque+TOFF_2)

/2.0/TSCALE_2);

net_solve();

/*

* Determine which output is required

*/

if(n_output == POSITION){

t1_e = (t_set[j].t1_position+POFF_1)

/2.0/PSCALE_1;

t2_e = (t_set[j].t2_position+POFF_2)

/2.0/PSCALE_2;

}

if(n_output == VELOCITY){

t1_e = (t_set[j].t1_velocity+VOFF_1)

/2.0/VSCALE_1;

t2_e = (t_set[j].t2_velocity+VOFF_2)

/2.0/VSCALE_2;

}

if(n_output == ACCELERATION){

t1_e = (t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

t2_e = (t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

}

if(n_output == TORQUE){

t1_e = (t_set[j].t1_torque+TOFF_1)

/2.0/TSCALE_1;

t2_e = (t_set[j].t2_torque+TOFF_2)

/2.0/TSCALE_2;

}

/*

* Set expected values and apply backprop

*/

net_expect(18, t1_e);

net_expect(19, t2_e);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_t_1()

/*

* TRAIN THE NEURAL NETWORK (SPECIAL)

*

* The current training set is used to train the neural network with the

* neural inputs: POSITION & DIFFERENCE and then neural output of POSITION

* (for more comments see nt_t_gen())

*

* June 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e,

t2_e;

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Do Math Conversion Before Training

*/

for(j = 0; j < end_train; j++){

/* Position */ in1_1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_1[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

/* Difference */ in1_2[j] = (t_set[j].t1_target

-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_2[j] = (t_set[j].t2_target

-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

}

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

net_input(1, in1_1[j]); /* Position */

net_input(2, in2_1[j]);

 

net_input(9, in1_2[j]); /* Difference */

net_input(10, in2_2[j]);

 

net_solve();

 

/*

* Set expected values and apply backprop

*/

net_expect(18, out1[j]);

net_expect(19, out2[j]);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_t_2()

/*

* TRAIN THE NEURAL NETWORK (SPECIAL)

*

* The current training set is used to train the neural network with the

* neural inputs: VELOCITY, ACCELERATION & DIFFERENCE

* (for more comments see nt_t_gen())

*

* June 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e,

t2_e;

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Do Math Conversion Before Training

*/

for(j = 0; j < end_train; j++){

/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

 

/* Acceleration */ in1_2[j] = (t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

in2_2[j] = (t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

 

/* Difference */ in1_3[j] = (t_set[j].t1_target

-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_3[j] = (t_set[j].t2_target

-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

if(n_output == POSITION){

out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

}

 

if(n_output == VELOCITY){

out1[j] =(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

out2[j] =(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

}

 

if(n_output == ACCELERATION){

out1[j] =(t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

out2[j] =(t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

}

}

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

/* Velocity */ net_input(3, in1_1[j]);

net_input(4, in2_1[j]);

 

/* Acceleration */ net_input(5, in1_2[j]);

net_input(6, in2_2[j]);

 

/* Difference */ net_input(9, in1_3[j]);

net_input(10, in2_3[j]);

 

net_solve();

 

/*

* Set expected values and apply backprop

*/

net_expect(18, out1[j]);

net_expect(19, out2[j]);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_t_3()

/*

* TRAIN THE NEURAL NETWORK (SPECIAL)

*

* The current training set is used to train the neural network with the

* neural inputs: VELOCITY and DIFFERENCE

* (for more comments see nt_t_gen())

*

* June 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e,

t2_e;

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Do Math Conversion Before Training

*/

for(j = 0; j < end_train; j++){

/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

 

/* Difference */ in1_2[j] = (t_set[j].t1_target

-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_2[j] = (t_set[j].t2_target

-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

if(n_output == POSITION){

out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

}

 

if(n_output == VELOCITY){

out1[j] =(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

out2[j] =(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

}

 

if(n_output == ACCELERATION){

out1[j] =(t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

out2[j] =(t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

}

}

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

/* Velocity */ net_input(3, in1_1[j]);

net_input(4, in2_1[j]);

 

/* Difference */ net_input(9, in1_2[j]);

net_input(10, in2_2[j]);

 

net_solve();

 

/*

* Set expected values and apply backprop

*/

net_expect(18, out1[j]);

net_expect(19, out2[j]);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_t_4()

/*

* TRAIN THE NEURAL NETWORK (SPECIAL)

*

* The current training set is used to train the neural network with the

* neural inputs: DIFFERENCE

* (for more comments see nt_t_gen())

*

* June 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e,

t2_e;

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Do Math Conversion Before Training

*/

if(batch_mode == 1)

printf("Preprocessing train set, size:%d \n",end_train);

for(j = 0; j < end_train; j++){

/* Difference */ in1_1[j] = (t_set[j].t1_target

-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_1[j] = (t_set[j].t2_target

-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

if(n_output == POSITION){

out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

}

 

if(n_output == VELOCITY){

out1[j] =(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

out2[j] =(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

}

 

if(n_output == ACCELERATION){

out1[j] =(t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

out2[j] =(t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

}

}

if(batch_mode == 1) printf("Starting to train \n");

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

/* Difference */ net_input(9, in1_1[j]);

net_input(10, in2_1[j]);

 

net_solve();

 

/*

* Set expected values and apply backprop

*/

net_expect(18, out1[j]);

net_expect(19, out2[j]);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_t_5(){} /* An empty function just holding space */

 

 

 

 

 

void nt_t_6()

/*

* TRAIN THE NEURAL NETWORK (SPECIAL)

*

* The current training set is used to train the neural network with the

* neural inputs: POSITION, VELOCITY and DIFFERENCE

* (for more comments see nt_t_gen())

*

* June 8th, 1990.

*/

{

static int i, j; /* Work variables */

static double t1_e,

t2_e;

if((end_train > 0) && (net_loaded_flag == 1)){

/*

* If point training is to be used, then mix up the data order

*/

if(updater == POINT_TRAIN) nt_set_scramble();

/*

* Do Math Conversion Before Training

*/

if(batch_mode == 1)

printf("Preprocessing train set, size:%d \n",end_train);

for(j = 0; j < end_train; j++){

/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

 

/* Position */ in1_2[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

/* Difference */ in1_3[j] = (t_set[j].t1_target

-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

in2_3[j] = (t_set[j].t2_target

-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

 

if(n_output == POSITION){

out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;

out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;

}

 

if(n_output == VELOCITY){

out1[j] =(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;

out2[j] =(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;

}

 

if(n_output == ACCELERATION){

out1[j] =(t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1;

out2[j] =(t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2;

}

if(n_output == TORQUE){

out1[j] =(t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1;

out2[j] =(t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2;

}

}

if(batch_mode == 1) printf("Starting to train \n");

/*

* Loop for number of iterations

*/

for(i = 0; i < n_iterations; i++){

/*

* Print out training report (batch mode only)

*/

if((batch_mode == 1) && ((i % 10) == 0))

printf("training %d of %d \n", i, n_iterations);

/*

* Run through training set

*/

for(j = 0; j < end_train; j++){

/*

* Set up network inputs and solve

*/

/* Position */ net_input(1, in1_2[j]);

net_input(2, in2_2[j]);

 

/* Velocity */ net_input(3, in1_1[j]);

net_input(4, in2_1[j]);

 

/* Difference */ net_input(9, in1_3[j]);

net_input(10, in2_3[j]);

 

net_solve();

 

/*

* Set expected values and apply backprop

*/

net_expect(18, out1[j]);

net_expect(19, out2[j]);

net_back_prop(n_learn, n_smooth);

}

/*

* Update the net (only has effect with set training)

*/

net_update();

}

}

}

 

 

 

 

 

void nt_test()

/*

* TEST THE NETWORK FOR ERRORS

*

* The network is tested, using the current network configuration, and the

* current training set. This does not alter the network, but finds the

* error per reading.

*

* March 8th, 1990.

*/

{

static int i, /* work variables */

j,

count; /* number of points in train set */

 

/*

* Check for a training set, which may be used for testing

*/

if(end_train > 0){

/*

* Set all statistics to zero

*/

net_average_error = 0.0;

net_deviation = 0.0;

n1_error = 0.0;

n1_deviation = 0.0;

n2_error = 0.0;

n2_deviation = 0.0;

count = 0;

/*

* Run through the training set to determine errors

*/

for(j = 0; j < end_train; j++){

/*

* Set network inputs and solve

*/

net_input(1,(t_set[j].t1_position+POFF_1)/2.0/PSCALE_1);

net_input(2,(t_set[j].t2_position+POFF_2)/2.0/PSCALE_2);

net_input(3,(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1);

net_input(4,(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2);

net_input(5,(t_set[j].t1_acceleration+AOFF_1)

/2.0/ASCALE_1);

net_input(6,(t_set[j].t2_acceleration+AOFF_2)

/2.0/ASCALE_2);

net_input(7,(t_set[j].t1_target+POFF_1)/2.0/PSCALE_1);

net_input(8,(t_set[j].t2_target+POFF_2)/2.0/PSCALE_2);

net_input(9, (t_set[j].t1_target - t_set[j].t1_position

+POFF_1)/2.0/PSCALE_1);

net_input(10,(t_set[j].t2_target - t_set[j].t2_position

+POFF_2)/2.0/PSCALE_2);

net_input(11,(t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1);

net_input(12,(t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2);

net_solve();

 

/*

* Set errors to zero before output evaluation

*/

t_set[j].t1_error = 0.0;

t_set[j].t2_error = 0.0;

 

/*

* Determine output type and calculate output and error

*/

if(n_output == POSITION){

t_set[j].t1_output =

(net_output(18)*PSCALE_1 *2.0) - POFF_1;

t_set[j].t2_output =

(net_output(19)*PSCALE_2 *2.0) - POFF_2;

t_set[j].t1_error = t_set[j].t1_output

- t_set[j].t1_position;

t_set[j].t2_error = t_set[j].t2_output

- t_set[j].t2_position;

}

if(n_output == VELOCITY){

t_set[j].t1_output =

(net_output(18)*VSCALE_1 *2.0) - VOFF_1;

t_set[j].t2_output =

(net_output(19)*VSCALE_2 *2.0) - VOFF_2;

t_set[j].t1_error = t_set[j].t1_output

- t_set[j].t1_velocity;

t_set[j].t2_error = t_set[j].t2_output

- t_set[j].t2_velocity;

}

if(n_output == ACCELERATION){

t_set[j].t1_output =

(net_output(18)*ASCALE_1 *2.0) - AOFF_1;

t_set[j].t2_output =

(net_output(19)*ASCALE_2 *2.0) - AOFF_2;

t_set[j].t1_error = t_set[j].t1_output

- t_set[j].t1_acceleration;

t_set[j].t2_error = t_set[j].t2_output

- t_set[j].t2_acceleration;

}

if(n_output == TORQUE){

t_set[j].t1_output =

(net_output(18)*TSCALE_1 *2.0) - TOFF_1;

t_set[j].t2_output =

(net_output(19)*TSCALE_2 *2.0) - TOFF_2;

t_set[j].t1_error = t_set[j].t1_output

- t_set[j].t1_torque;

t_set[j].t2_error = t_set[j].t2_output

- t_set[j].t2_torque;

}

 

/*

* Do some statistics gathering

*/

n1_error += t_set[j].t1_error;

n1_deviation += t_set[j].t1_error*t_set[j].t1_error;

n2_error += t_set[j].t2_error;

n2_deviation += t_set[j].t2_error*t_set[j].t2_error;

net_average_error +=t_set[j].t1_error+t_set[j].t2_error;

net_deviation += t_set[j].t1_error*t_set[j].t1_error+

t_set[j].t2_error*t_set[j].t2_error;

count++;

}

 

/*

* Do final statistics calculations

*/

n1_error = n1_error / count;

n1_deviation = sqrt(n1_deviation / count);

n2_error = n2_error / count;

n2_deviation = sqrt(n2_deviation / count);

net_average_error = net_average_error / count / 2.0;

net_deviation = sqrt(net_deviation / count / 2.0);

}

}

 

 

 

 

void nt_set_scramble()

/*

* MIX UP THE ORDER OF THE TRAINING SET

*

* When training the network one point at a time, the order of the set is

* important. If the order has a trend, then the network will follow the

* trend of the training data, and only reproduce the last few points trained

* successfully. This means that the data points should be in a random order

* and thus this routine has been written to perform this task.

*

* March 28th, 1990.

*/

{

static int i, /* Work variable */

var_1, /* The two locations which are to be swapped */

var_2;

/*

* Swap the random locations a large number of times

*/

for(i = 0; i < end_train+end_train; i++){

 

/*

* Find two random locations to swap

*/

var_1 = (int)(end_train*(double)(rand() & 32767)/32767.0);

var_2 = (int)(end_train*(double)(rand() & 32767)/32767.0);

 

/*

* write first location into buffer

*/

t_set[TRAIN_SET].x_target = t_set[var_1].x_target;

t_set[TRAIN_SET].y_target = t_set[var_1].y_target;

t_set[TRAIN_SET].x_position = t_set[var_1].x_position;

t_set[TRAIN_SET].y_position = t_set[var_1].y_position;

t_set[TRAIN_SET].x_velocity = t_set[var_1].x_velocity;

t_set[TRAIN_SET].y_velocity = t_set[var_1].y_velocity;

t_set[TRAIN_SET].x_acceleration = t_set[var_1].x_acceleration;

t_set[TRAIN_SET].y_acceleration = t_set[var_1].y_acceleration;

t_set[TRAIN_SET].x_output = t_set[var_1].x_output;

t_set[TRAIN_SET].y_output = t_set[var_1].y_output;

t_set[TRAIN_SET].t1_target = t_set[var_1].t1_target;

t_set[TRAIN_SET].t2_target = t_set[var_1].t2_target;

t_set[TRAIN_SET].t1_position = t_set[var_1].t1_position;

t_set[TRAIN_SET].t2_position = t_set[var_1].t2_position;

t_set[TRAIN_SET].t1_velocity = t_set[var_1].t1_velocity;

t_set[TRAIN_SET].t2_velocity = t_set[var_1].t2_velocity;

t_set[TRAIN_SET].t1_acceleration = t_set[var_1].t1_acceleration;

t_set[TRAIN_SET].t2_acceleration = t_set[var_1].t2_acceleration;

t_set[TRAIN_SET].t1_torque = t_set[var_1].t1_torque;

t_set[TRAIN_SET].t2_torque = t_set[var_1].t2_torque;

 

/*

* write second location into first

*/

t_set[var_1].x_target = t_set[var_2].x_target;

t_set[var_1].y_target = t_set[var_2].y_target;

t_set[var_1].x_position = t_set[var_2].x_position;

t_set[var_1].y_position = t_set[var_2].y_position;

t_set[var_1].x_velocity = t_set[var_2].x_velocity;

t_set[var_1].y_velocity = t_set[var_2].y_velocity;

t_set[var_1].x_acceleration = t_set[var_2].x_acceleration;

t_set[var_1].y_acceleration = t_set[var_2].y_acceleration;

t_set[var_1].x_output = t_set[var_2].x_output;

t_set[var_1].y_output = t_set[var_2].y_output;

t_set[var_1].t1_target = t_set[var_2].t1_target;

t_set[var_1].t2_target = t_set[var_2].t2_target;

t_set[var_1].t1_position = t_set[var_2].t1_position;

t_set[var_1].t2_position = t_set[var_2].t2_position;

t_set[var_1].t1_velocity = t_set[var_2].t1_velocity;

t_set[var_1].t2_velocity = t_set[var_2].t2_velocity;

t_set[var_1].t1_acceleration = t_set[var_2].t1_acceleration;

t_set[var_1].t2_acceleration = t_set[var_2].t2_acceleration;

t_set[var_1].t1_torque = t_set[var_2].t1_torque;

t_set[var_1].t2_torque = t_set[var_2].t2_torque;

 

/*

* Write buffer into second location

*/

t_set[var_2].x_target = t_set[TRAIN_SET].x_target;

t_set[var_2].y_target = t_set[TRAIN_SET].y_target;

t_set[var_2].x_position = t_set[TRAIN_SET].x_position;

t_set[var_2].y_position = t_set[TRAIN_SET].y_position;

t_set[var_2].x_velocity = t_set[TRAIN_SET].x_velocity;

t_set[var_2].y_velocity = t_set[TRAIN_SET].y_velocity;

t_set[var_2].x_acceleration = t_set[TRAIN_SET].x_acceleration;

t_set[var_2].y_acceleration = t_set[TRAIN_SET].y_acceleration;

t_set[var_2].x_output = t_set[TRAIN_SET].x_output;

t_set[var_2].y_output = t_set[TRAIN_SET].y_output;

t_set[var_2].t1_target = t_set[TRAIN_SET].t1_target;

t_set[var_2].t2_target = t_set[TRAIN_SET].t2_target;

t_set[var_2].t1_position = t_set[TRAIN_SET].t1_position;

t_set[var_2].t2_position = t_set[TRAIN_SET].t2_position;

t_set[var_2].t1_velocity = t_set[TRAIN_SET].t1_velocity;

t_set[var_2].t2_velocity = t_set[TRAIN_SET].t2_velocity;

t_set[var_2].t1_acceleration = t_set[TRAIN_SET].t1_acceleration;

t_set[var_2].t2_acceleration = t_set[TRAIN_SET].t2_acceleration;

t_set[var_2].t1_torque = t_set[TRAIN_SET].t1_torque;

t_set[var_2].t2_torque = t_set[TRAIN_SET].t2_torque;

}

}