The neural network routines for path planning are quite different from the other methods. A separate module has been created to service the neural network requirements; it handles creation, destruction, saving, loading, training, simulation, and testing of neural networks for path planning. Two data structures are important in these routines. The first is the neural network itself; most of the related data operations are carried out inside the neural network routines, hidden from view. The second is the training data, which dominates this section. It is stored in an array of structures designed to hold trajectory data for the robot.
The array of trajectory structures is used for training the neural network, testing convergence, and storing path planning and tracking results. The training algorithms pull information from this array to train the neural network. The path planning algorithm stores the trajectory at each time step in this array as it moves toward the goal. The path tracker stores the trajectories as it follows a point moving along a circular path in space.
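For reference, the sketch below shows roughly what one trajectory record holds. The member list is reconstructed from the fields referenced later in this listing, so it is an inference rather than the actual declaration; TRAIN_SET is the array size used throughout the module, and the extra slot serves as the swap buffer in nt_set_scramble().

    typedef struct {                             /* a sketch; inferred from field use */
        double t1_target, t2_target;             /* goal joint angles          */
        double t1_position, t2_position;         /* joint angles               */
        double t1_velocity, t2_velocity;         /* joint velocities           */
        double t1_acceleration, t2_acceleration; /* joint accelerations        */
        double t1_torque, t2_torque;             /* joint torques              */
        double x_target, y_target;               /* Cartesian goal             */
        double x_position, y_position;           /* Cartesian position         */
        double x_velocity, y_velocity;           /* Cartesian velocities       */
        double x_acceleration, y_acceleration;   /* Cartesian accelerations    */
        double x_output, y_output;               /* network outputs (testing)  */
        double t1_output, t2_output;
        double t1_error, t2_error;               /* output errors (testing)    */
    } trajectory;
    static trajectory t_set[TRAIN_SET + 1];      /* last slot: scramble buffer */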
The training trajectories are placed into the array by the neural network subroutines. When training begins, the order of the array is scrambled if point training is to be used. The training algorithm is then called; it will use either a slow generic algorithm or, when one is available, a specialized algorithm written for speed. After training, the network may be tested with the testing algorithm, which directly compares the contents of the training trajectory array to the network outputs and derives statistical measures.
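As a sketch, one complete cycle through these routines might read as follows; all of the calls appear later in this listing, and none take arguments:

    nt_build();     /* training trajectories from the endpoint buffer    */
    nt_train();     /* scrambles the set for point training, then trains */
    nt_test();      /* comparative statistics land in global variables   */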
Other functions allow the neural network to be loaded and saved, together with its configuration information, so that the executive program can deal with many different configurations easily. There are also routines for creating and clearing networks and training sets.
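For example, an executive program might restore a saved controller and later write it back out; a minimal sketch, with a placeholder file name (the NO_ERROR return of nt_load() is documented in its header comment below):

    if(nt_load("controller.net") == NO_ERROR){   /* placeholder file name */
        /* ... plan, track, or retrain with the restored network ... */
        nt_save("controller.net");               /* network plus configuration */
    }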
nt_trajectory(): A path may be generated by the neural network with this subroutine. It uses start and stop positions, along with the currently loaded neural network, to generate a path, and it relies on the dynamics simulation module. The routine can handle almost every possible combination of inputs and outputs, and the torque output from the controller may be limited or scaled. Steady state convergence is an outstanding problem with this work, so a number of temporary measures have been instituted in the interim.
nt_tracker(): This subroutine is almost an exact duplicate of the path generator; the only difference is that point tracking is substituted for path motion. This allows a moving set point to be used, as in a traditional control system. In this case the point moves in a circle centred on the x axis, passing between the first and fourth quadrants. A file may be created that contains the number of control steps to be used and the duration of the motion.
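The moving set point is recomputed from the arm geometry at every control step; the four lines below are taken from nt_tracker() later in this listing (LENGTH_1 and LENGTH_2 are the link lengths, p_time the path duration):

    aang = ((double)(i-start)*T_STEP/p_time)*3.141*2.0; /* fraction of circle */
    pos_x = LENGTH_1 + LENGTH_2/2.0*cos(aang);          /* circle centred at  */
    pos_y = LENGTH_2/2.0*sin(aang);                     /* (LENGTH_1, 0)      */
    kin_inverse(&angle_1, &angle_2, pos_x, pos_y, 0);   /* to joint angles    */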
nt_train(): A central entry point for all neural network training calls. It was set up to allow faster training of the neural network in the commonly used cases, which is achieved by calling a number of other routines. One routine will train any neural network configuration, very slowly; specialized training routines have also been written for the common special cases. This routine decides when use of the specialized routines is warranted.
nt_t_gen(): The generic neural network training program. When called this program will use all inputs, and examine all output combinations possible. This is very tedious and slow. If a great deal of training is to be done, a specialized training algorithm should be written (tra_t_5() is as yet unused).
tra_t_1(): This program will train the neural network for inputs of Position and Difference, and an output of Position. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set for faster execution during training. This involves performing all mathematical operations before training.
tra_t_2(): This program will train the neural network for inputs of Velocity, Acceleration, and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set for faster execution during training. This involves performing all mathematical operations before training.
tra_t_3(): This program will train the neural network for inputs of Velocity and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set for faster execution during training. This involves performing all mathematical operations before training.
tra_t_4(): This program will train the neural network for an input of Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set for faster execution during training. This involves performing all mathematical operations before training.
tra_t_5(): An unused function left to ease expansion when another fast training algorithm is added. Any new algorithm should be modelled on the other fast training algorithms; a skeleton is sketched below.
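A hypothetical skeleton for such an addition, following the shared shape of the fast trainers below (the comments mark where the precomputed arrays and the case-specific input and output indices would go):

    void tra_t_5()  /* hypothetical: a new fast special-case trainer */
    {
        static int i, j;
        if((end_train > 0) && (net_loaded_flag == 1)){
            if(updater == POINT_TRAIN) nt_set_scramble();
            for(j = 0; j < end_train; j++){
                /* precompute scaled inputs and outputs into work arrays */
            }
            for(i = 0; i < n_iterations; i++){
                for(j = 0; j < end_train; j++){
                    /* net_input() the precomputed values, solve, then: */
                    net_back_prop(n_learn, n_smooth);
                }
            }
        }
    }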
tra_t_6(): This program will train the neural network for inputs of Velocity, Position, and Difference. The training is based upon the current list of training trajectories stored in the array. This routine preprocesses the training set for faster execution during training. This involves performing all mathematical operations before training.
nt_test(): After training it is desirable to obtain a measure of convergence. This algorithm uses the training trajectories currently stored in the array to perform comparative statistics on the neural network output. These statistics include average error and RMS error. The results are not returned directly, but are placed in global variables. Any set of inputs and outputs should work with this subroutine.
nt_set_scramble(): Training set order can be important when particular training techniques are used. In particular, if point training (see the neural network appendix) is used, then the set must be presented in a random order. This routine randomly swaps training trajectories in the array, which should leave its order well scrambled.
nt_load(): The neural network is stored in an ASCII file format. To recover this network, this subroutine may be used. This function also recovers configuration information stored along with the neural network.
nt_save(): The neural network may be saved to disk with this routine. The configuration of the neural network is also saved at the same time.
nt_clear(): When the Training Trajectory array is no longer needed, this function will empty the contents.
nt_build(): This subroutine will use all of the endpoints in the path endpoint buffer to generate a set of training trajectories for the neural network. The trajectories are generated with whatever configuration flags are set.
nt_add(): When a single trajectory is to be generated, this function is called. Using the current path planning flags a set of trajectories along a path are generated, and entered into the list.
nt_new(): Both the neural network and its training data are eliminated by this subroutine. The logic is that if the neural network is to be wiped out, then the training data that was intended for it should be wiped as well.
nt_create(): Using the toggles and flags set for the neural network inputs and outputs, this subroutine reinitializes the neural network to fit the new configurations. This handles bias neurons, hidden layers of various widths, and a variable number of inputs (as dictated by the toggles).
* ROBOT TRAJECTORY NET PROGRAM
* This code contains the trajectory planning functions, independent of the
* SunView user interface. This causes some redundancy, but it also makes
* the routines more flexible: they may be used with the SunView
* interface, or with a background job program for training.
* A complete set of routines is present for training, testing, and building
* neural networks for the control of robots. These routines have also been
* constructed to interact with the other path planning routines.
#define PSCALE_1 200.0 /* offset and scale values for neural */
#define PSCALE_2 200.0 /* network inputs and outputs */
#define VSCALE_1 12.0 /* It should be noted that the outputs*/
#define VSCALE_2 12.0 /* are scaled so that output limits */
#define ASCALE_1 12.0 /* are above the robot limits. */
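In effect, a raw value v enters the network as (v + OFFSET)/SCALE/2.0 and a network output o is mapped back as o*SCALE*2.0 - OFFSET; for joint 1 position, as used in nt_trajectory() below:

    net_input(1, (p + POFF_1)/PSCALE_1/2.0);     /* raw angle p into the net */
    p = net_output(18)*PSCALE_1*2.0 - POFF_1;    /* and back out again       */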
void nt_trajectory(output_type, t1s, t2s, t1e, t2e, path, start, n)
* GENERATE TRAJECTORY: NEURAL NETWORK
* A trajectory will be generated, based upon the current operation modes
* using the neural network controller. Unfortunately this code is bulky
* and crude in spots, but it runs quickly. This code has a problem with
* convergence, thus two techniques have been used. One is used to approximate
* convergence, the other is a maximum limit on the number of steps. Together
* these effectively cut off the simulation at a reasonable limit.
* Variables: output_type = the output type of the neural network
* t1s, t2s = the start configuration of the manipulator
* t1e, t2e = the end configuration of the manipulator
* path = the array of structures to hold the path
* start = the start point in the path array
* Returns: n = the number of segments in the path
static int i, /* work variable */
steps = 120, /* number of time steps */
flag_e, /* end of motion flag */
flag_1, /* Convergence Flag 1st */
flag_2, /* 2nd convergence flag */
flag_3, /* 3rd convergence flag */
flag_ok; /* stop message printed */
FILE *step_file; /* file containing num. steps*/
static double scale; /* for scaled outputs */
* If the user wants an alternate number of time steps, a file
* may be created which stores the number; it is loaded here.
step_file = fopen("steps.dat", "r"); /* Get the number of steps */
if(step_file != NULL){ /* From a file if it exists */
fscanf(step_file, "%d", &i);
fclose(step_file);
if((i > 0) && (i < TRAIN_SET)){
steps = i;
* Define initial conditions for control
init_inverse_dynamics(); /* set up dynamics module */
flag_e = flag_1 = flag_2 = flag_3 = 0; /* Clear Path Done flags */
flag_ok = 0; /* stop message not printed */
* Set the initial path conditions (i.e. before the first step)
path[start].t1_target = t1e; /* Set up initial conditions */
path[start].t2_target = t2e; /* for path points array */
path[start].t1_position = t1s;
path[start].t2_position = t2s;
kin_forward(t1s, t2s, &path[start].x_position, &path[start].y_position);
kin_forward(t1e, t2e, &path[start].x_target, &path[start].y_target);
path[start].t1_velocity = 0.0;
path[start].t2_velocity = 0.0;
path[start].t1_acceleration = 0.0;
path[start].t2_acceleration = 0.0;
inverse_dynamics(path[start].t1_position, path[start].t2_position,
path[start].t1_velocity, path[start].t2_velocity,
path[start].t1_acceleration, path[start].t2_acceleration,
&path[start].t1_torque, &path[start].t2_torque);
* Start loop of 'steps' path segments. This is only a temporary measure
* to be used until accuracy is a viable limiting factor.
for(i = 1+start; ((i <= steps+start) && ((flag_1 == 0)
|| (flag_2 == 0) || (flag_3 == 0))); i++){
* update network inputs with previous control state, and solve
net_input(1, (path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);
net_input(2, (path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);
net_input(3, (path[i-1].t1_velocity + VOFF_1)/VSCALE_1/2.0);
net_input(4, (path[i-1].t2_velocity + VOFF_2)/VSCALE_2/2.0);
net_input(5, (path[i-1].t1_acceleration + AOFF_1)/ASCALE_1/2.0);
net_input(6, (path[i-1].t2_acceleration + AOFF_2)/ASCALE_2/2.0);
net_input(7, (path[i-1].t1_target + POFF_1)/PSCALE_1/2.0);
net_input(8, (path[i-1].t2_target + POFF_2)/PSCALE_2/2.0);
net_input(9, (path[i-1].t1_target - path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);
net_input(10, (path[i-1].t2_target - path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);
net_input(11, (path[i-1].t1_torque + TOFF_1)/TSCALE_1/2.0);
net_input(12, (path[i-1].t2_torque + TOFF_2)/TSCALE_2/2.0);
* Determine current state of output, based upon output type
if(output_type == POSITION){
path[i].t1_position = net_output(18)*PSCALE_1*2.0 - POFF_1;
path[i].t2_position = net_output(19)*PSCALE_2*2.0 - POFF_2;
} else if(output_type == VELOCITY){
path[i].t1_velocity = net_output(18)*VSCALE_1*2.0 - VOFF_1;
path[i].t2_velocity = net_output(19)*VSCALE_2*2.0 - VOFF_2;
if(path[i].t1_velocity > VMAX_T1)
path[i].t1_velocity = VMAX_T1;
if(path[i].t1_velocity < -VMAX_T1)
path[i].t1_velocity = -VMAX_T1;
if(path[i].t2_velocity > VMAX_T2)
path[i].t2_velocity = VMAX_T2;
if(path[i].t2_velocity < -VMAX_T2)
path[i].t2_velocity = -VMAX_T2;
path[i].t1_position = path[i-1].t1_position +
path[i-1].t1_velocity * T_STEP;
path[i].t2_position = path[i-1].t2_position +
path[i-1].t2_velocity * T_STEP;
} else if(output_type == ACCELERATION){
path[i].t1_acceleration = net_output(18)*ASCALE_1*2.0 - AOFF_1;
path[i].t2_acceleration = net_output(19)*ASCALE_2*2.0 - AOFF_2;
if(path[i].t1_acceleration > AMAX_T1)
path[i].t1_acceleration = AMAX_T1;
if(path[i].t1_acceleration < -AMAX_T1)
path[i].t1_acceleration = -AMAX_T1;
if(path[i].t2_acceleration > AMAX_T2)
path[i].t2_acceleration = AMAX_T2;
if(path[i].t2_acceleration < -AMAX_T2)
path[i].t2_acceleration = -AMAX_T2;
path[i].t1_velocity = path[i-1].t1_velocity +
path[i-1].t1_acceleration * T_STEP;
path[i].t2_velocity = path[i-1].t2_velocity +
path[i-1].t2_acceleration * T_STEP;
path[i].t1_position = path[i-1].t1_position +
path[i-1].t1_velocity * T_STEP +
0.5*path[i-1].t1_acceleration * T_STEP*T_STEP;
path[i].t2_position = path[i-1].t2_position +
path[i-1].t2_velocity * T_STEP +
0.5*path[i-1].t2_acceleration * T_STEP*T_STEP;
} else if(output_type == TORQUE){
path[i].t1_torque = net_output(18)*TSCALE_1*2.0 - TOFF_1;
path[i].t2_torque = net_output(19)*TSCALE_2*2.0 - TOFF_2;
* If the train_opt flag is set then the output will
* be forced into bang-bang control mode. If not set
* the output will only be clipped at the torque limits
if(train_opt == 0){ /* output clipper */
if(path[i].t1_torque > TMAX_T1)
path[i].t1_torque = TMAX_T1;
if(path[i].t1_torque < -TMAX_T1)
path[i].t1_torque = -TMAX_T1;
if(path[i].t2_torque > TMAX_T2)
path[i].t2_torque = TMAX_T2;
if(path[i].t2_torque < -TMAX_T2)
path[i].t2_torque = -TMAX_T2;
* If the flag is set, the output will be scaled in terms of the dominant
* joint. This will be done for all values over a threshold.
} else {
if((fabs(path[i].t1_torque) >= fabs(path[i].t2_torque)) &&
(fabs(path[i].t1_torque) > 2.0)){
scale = 10.0/fabs(path[i].t1_torque);
} else if((fabs(path[i].t2_torque) > fabs(path[i].t1_torque)) &&
(fabs(path[i].t2_torque) > 2.0)){
scale = 10.0/fabs(path[i].t2_torque);
* Using NN output torque, resultant motion is found
path[i].t1_target = path[i-1].t1_target;
path[i].t2_target = path[i-1].t2_target;
* This convergence test has been used for the maximum
* acceleration controller, to determine convergence when
* maximum acceleration control is used.
if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 1.0)
&& (fabs(path[i].t1_acceleration) < 1.0)
&& (fabs(path[i].t2_acceleration) < 1.0)){
printf("Done at t = %f\n", (i-start-1)*T_STEP);
* Another end of motion measure. This determines when the
* motion has stopped based upon position (and acceleration,
* which has been commented out)
if((fabs(path[i].t1_target - path[i].t1_position) < 2.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 2.0)
/* && (fabs(path[i].t1_acceleration) < 1.0)
&& (fabs(path[i].t2_acceleration) < 1.0) */){
printf("Done at t = %f\n", (i-start-1)*T_STEP);
* This will wait for position to have been converged for
* one second, before stopping the simulation.
if(((double)flag_e*T_STEP) > 1.0) flag_1 = 1;
* Prints out a message when robot has effectively stopped.
if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 1.0) &&
(flag_ok == 0)){
flag_ok = 1;
printf("Arrived at t = %f\n", (i-start-1)*T_STEP);
* This section is intended to ensure that the path will be
* analyzed as long as the steady state is not being tested
* and the path of the robot has not turned back upon itself
if((output_type != TORQUE) && (i > 2+start) &&(test_flag != 1)){
if(((path[i-2].t1_position - path[i-1].t1_position)*
(path[i-1].t1_position - path[i].t1_position) <= 0.0)
&& ((path[i-2].t2_position - path[i-1].t2_position)*
(path[i-1].t2_position - path[i].t2_position) <= 0.0)){
void nt_tracker(output_type, t1s, t2s, t1e, t2e, path, start, n)
* FOLLOW TRAJECTORY: NEURAL NETWORK
* A trajectory will be followed, based upon the current operation modes
* using the neural network controller. This section of code is also quite
* bulky for the purpose of making the routines generic. This was written
* to allow testing of the neural networks in a traditional controller
* format. The format is quite similar to that above, except that the
* target position is now a moving point in space. A circle in the right
* half-plane of the workspace is currently in use.
* Note: comments in this section are sparse, because the code is quite
* similar to nt_trajectory() above.
* Variables: output_type = the output type of the neural network
* t1s, t2s = the start configuration of the manipulator
* t1e, t2e = the end configuration of the manipulator
* path = the array of structures to hold the path
* start = the start point in the path array
* Returns: n = the number of segments in the path
static int i, /* work variable */
steps = 200, /* default number of steps */
flag_e, flag_1, /* convergence flags */
flag_2, flag_3,
flag_ok; /* stop message printed */
FILE *step_file; /* file with steps in it */
static double scale, /* for scaled torque outputs */
angle_1, angle_2, /* for position tracking */
pos_x, pos_y, /* for position tracking */
aang, /* angle along the circular path */
p_time = 4.5; /* a set path time */
* Load alternate number of steps and path time
step_file = fopen("steps.dat", "r"); /* Get the number of steps */
if(step_file != NULL){ /* From a file if it exists */
fscanf(step_file, "%d", &i); /* the number of control steps */
fscanf(step_file, "%lf", &p_time); /* the duration of the motion */
fclose(step_file);
if((i > 0) && (i < TRAIN_SET)){
steps = i;
* Define initial conditions for control
flag_e = flag_1 = flag_2 = flag_3 = 0;
* Define initial actuator state.
path[start].t1_target = t1e; /* Set up initial conditions */
path[start].t2_target = t2e; /* for path points array */
kin_inverse(&path[start].t1_position, &path[start].t2_position,
LENGTH_2/2.0 + LENGTH_1, 0.0, 0);
kin_forward(t1s, t2s, &path[start].x_position, &path[start].y_position);
kin_forward(t1e, t2e, &path[start].x_target, &path[start].y_target);
path[start].t1_velocity = 0.0;
path[start].t2_velocity = 0.0;
path[start].t1_acceleration = 0.0;
path[start].t2_acceleration = 0.0;
inverse_dynamics(path[start].t1_position, path[start].t2_position,
path[start].t1_velocity, path[start].t2_velocity,
path[start].t1_acceleration, path[start].t2_acceleration,
&path[start].t1_torque, &path[start].t2_torque);
* Start loop of 'steps' path segments. This is only a temporary measure
* to be used until accuracy is a viable limiting factor.
for(i = 1+start; ((i <= steps+start) && ((flag_1 == 0)
|| (flag_2 == 0) || (flag_3 == 0))); i++){
* update network inputs with previous control state, and solve
net_input(1, (path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);
net_input(2, (path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);
net_input(3, (path[i-1].t1_velocity + VOFF_1)/VSCALE_1/2.0);
net_input(4, (path[i-1].t2_velocity + VOFF_2)/VSCALE_2/2.0);
net_input(5, (path[i-1].t1_acceleration + AOFF_1)/ASCALE_1/2.0);
net_input(6, (path[i-1].t2_acceleration + AOFF_2)/ASCALE_2/2.0);
net_input(7, (path[i-1].t1_target + POFF_1)/PSCALE_1/2.0);
net_input(8, (path[i-1].t2_target + POFF_2)/PSCALE_2/2.0);
aang = ((double)(i-start)*T_STEP/p_time)*3.141*2.0;
pos_x = LENGTH_1 + LENGTH_2/2.0* cos(aang);
pos_y = LENGTH_2/2.0 * sin(aang);
kin_inverse(&angle_1, &angle_2, pos_x, pos_y, 0);
net_input(9, (angle_1 - path[i-1].t1_position + POFF_1)/PSCALE_1/2.0);
net_input(10, (angle_2 - path[i-1].t2_position + POFF_2)/PSCALE_2/2.0);
net_input(11, (path[i-1].t1_torque + TOFF_1)/TSCALE_1/2.0);
net_input(12, (path[i-1].t2_torque + TOFF_2)/TSCALE_2/2.0);
* Determine current state of output, based upon output type
if(output_type == POSITION){
path[i].t1_position = net_output(18)*PSCALE_1*2.0 - POFF_1;
path[i].t2_position = net_output(19)*PSCALE_2*2.0 - POFF_2;
} else if(output_type == VELOCITY){
path[i].t1_velocity = net_output(18)*VSCALE_1*2.0 - VOFF_1;
path[i].t2_velocity = net_output(19)*VSCALE_2*2.0 - VOFF_2;
if(path[i].t1_velocity > VMAX_T1)
path[i].t1_velocity = VMAX_T1;
if(path[i].t1_velocity < -VMAX_T1)
path[i].t1_velocity = -VMAX_T1;
if(path[i].t2_velocity > VMAX_T2)
path[i].t2_velocity = VMAX_T2;
if(path[i].t2_velocity < -VMAX_T2)
path[i].t2_velocity = -VMAX_T2;
path[i].t1_position = path[i-1].t1_position +
path[i-1].t1_velocity * T_STEP;
path[i].t2_position = path[i-1].t2_position +
path[i-1].t2_velocity * T_STEP;
} else if(output_type == ACCELERATION){
path[i].t1_acceleration = net_output(18)*ASCALE_1*2.0 - AOFF_1;
path[i].t2_acceleration = net_output(19)*ASCALE_2*2.0 - AOFF_2;
if(path[i].t1_acceleration > AMAX_T1)
path[i].t1_acceleration = AMAX_T1;
if(path[i].t1_acceleration < -AMAX_T1)
path[i].t1_acceleration = -AMAX_T1;
if(path[i].t2_acceleration > AMAX_T2)
path[i].t2_acceleration = AMAX_T2;
if(path[i].t2_acceleration < -AMAX_T2)
path[i].t2_acceleration = -AMAX_T2;
path[i].t1_velocity = path[i-1].t1_velocity +
path[i-1].t1_acceleration * T_STEP;
path[i].t2_velocity = path[i-1].t2_velocity +
path[i-1].t2_acceleration * T_STEP;
path[i].t1_position = path[i-1].t1_position +
path[i-1].t1_velocity * T_STEP +
0.5*path[i-1].t1_acceleration * T_STEP*T_STEP;
path[i].t2_position = path[i-1].t2_position +
path[i-1].t2_velocity * T_STEP +
0.5*path[i-1].t2_acceleration * T_STEP*T_STEP;
} else if(output_type == TORQUE){
path[i].t1_torque = net_output(18)*TSCALE_1*2.0 - TOFF_1;
path[i].t2_torque = net_output(19)*TSCALE_2*2.0 - TOFF_2;
if(train_opt == 0){ /* output clipper */
if(path[i].t1_torque > TMAX_T1)
path[i].t1_torque = TMAX_T1;
if(path[i].t1_torque < -TMAX_T1)
path[i].t1_torque = -TMAX_T1;
if(path[i].t2_torque > TMAX_T2)
path[i].t2_torque = TMAX_T2;
if(path[i].t2_torque < -TMAX_T2)
path[i].t2_torque = -TMAX_T2;
} else { /* scale in terms of the dominant joint */
if((fabs(path[i].t1_torque) >= fabs(path[i].t2_torque)) &&
(fabs(path[i].t1_torque) > 2.0)){
scale = 10.0/fabs(path[i].t1_torque);
} else if((fabs(path[i].t2_torque) > fabs(path[i].t1_torque)) &&
(fabs(path[i].t2_torque) > 2.0)){
scale = 10.0/fabs(path[i].t2_torque);
path[i].t1_target = path[i-1].t1_target;
path[i].t2_target = path[i-1].t2_target;
if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 1.0)
&& (fabs(path[i].t1_acceleration) < 1.0)
&& (fabs(path[i].t2_acceleration) < 1.0)){
printf("Done at t = %f\n", (i-start-1)*T_STEP);
if((fabs(path[i].t1_target - path[i].t1_position) < 2.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 2.0)
/* && (fabs(path[i].t1_acceleration) < 1.0)
&& (fabs(path[i].t2_acceleration) < 1.0) */){
printf("Done at t = %f\n", (i-start-1)*T_STEP);
if(((double)flag_e*T_STEP) > 1.0) flag_1 = 1;
if((fabs(path[i].t1_target - path[i].t1_position) < 1.0) &&
(fabs(path[i].t2_target - path[i].t2_position) < 1.0) &&
(flag_ok == 0)){
flag_ok = 1;
printf("Arrived at t = %f\n", (i-start-1)*T_STEP);
* This section is intended to ensure that the path will be
* analyzed as long as the steady state is not being tested
* and the path of the robot has not turned back upon itself
if((output_type != TORQUE) && (i > 2+start)&& (test_flag != 1)){
if(((path[i-2].t1_position - path[i-1].t1_position)*
(path[i-1].t1_position - path[i].t1_position) <= 0.0)
&& ((path[i-2].t2_position - path[i-1].t2_position)*
(path[i-1].t2_position - path[i].t2_position) <= 0.0)){
* LOAD A NEURAL NETWORK
* The named neural network is loaded with this routine. The relevant data is
* also recovered from the 'extra string' storage capability of the neural
* network module.
* Variables: file = the name of the neural network file to load
* Returns: a value of NO_ERROR if load was successful
static int error; /* the error variable */
static char tex[50]; /* A work string for data conversion */
* If load is a success then recover variables
if(net_read(file) != ERROR){ /* load net, and check if ok */
net_loaded_flag = 1; /* indicate that net is in */
end_train = 0; /* clear training set */
* The important system variables have been stored in strings
* associated with the network. Here the values are recovered.
sscanf(tex, "%d", &n_position);
sscanf(tex, "%d", &n_velocity);
sscanf(tex, "%d", &n_acceleration);
sscanf(tex, "%d", &n_iterations);
sscanf(tex, "%lf", &n_smooth);
sscanf(tex, "%d", &elbow_flag);
sscanf(tex, "%d", &train_alt);
sscanf(tex, "%d", &n_difference);
sscanf(tex, "%d", &train_opt);
sscanf(tex, "%d", &train_sim);
error = NO_ERROR; /* indicate load ok */
* SAVE THE NEURAL NETWORK
* The neural network may be saved with this routine. The important data
* is also saved with the extra string functions of the neural network
* module.
* Variables: file = the name of the file to save to
static char tex[50]; /* Work string for storing variables */
* If a net is loaded then prepare variables, and save
* variables are stored in message slots available with
* the neural network module.
sprintf(tex, "%d", n_position);
sprintf(tex, "%d", n_velocity);
sprintf(tex, "%d", n_acceleration);
sprintf(tex, "%d", n_iterations);
sprintf(tex, "%d", elbow_flag);
sprintf(tex, "%d", train_alt);
sprintf(tex, "%d", n_difference);
sprintf(tex, "%d", train_opt);
sprintf(tex, "%d", train_sim);
net_write(file); /* write network to disk */
* CLEAR THE TRAINING SET
* The current neural network training set will be wiped out.
* Reset end of training set pointer to start
* BUILD A TRAINING SET
* The old training set will be wiped out and replaced with a new one.
* This training set will consist of all trajectory paths described by the
* contents of the current path endpoint buffer.
static int i; /* work variable */
end_train = 0; /* empty trajectory buffer */
* Add a path for each trajectory in the path endpoints buffer.
for(i = 0; i < end_of_buffer; i++){ /* go through endpoints buffer */
* This routine adds a single path to the trajectory list
* using the currently defined flags.
nt_add(buffer[i].t1_start, buffer[i].t2_start,
buffer[i].t1_end, buffer[i].t2_end);
printf("Added %d of %d\n", i, end_of_buffer);
void nt_add(t1_1, t2_1, t1_2, t2_2)
double t1_1, t2_1, t1_2, t2_2;
* ADD A PATH TO THE TRAINING SET
* The training set will be appended by the current trajectory path.
* This is the only path which will be added.
* Variables: t1_1, t2_1 = the joint start trajectory point
* t1_2, t2_2 = the joint end trajectory point
* Set the training flag to get full trajectory generation
* Update trajectory, update pointer for training, reset flag
get_trajectory(t_number, t1_1,t2_1,t1_2,t2_2, t_set, end_train, &n);
end_train += n; /* increase pointer for new data */
train_flag = 0; /* undo training flag */
* CLEAR THE NEURAL NETWORK AND TRAINING SET
* The currently configured neural network will be wiped out. This
* will also wipe out the training set. This does not change the options.
* Reset training set, reset network, and set net not loaded
end_train = 0; /* get rid of all trajectory points */
net_init(n_type); /* reset neural net stuff */
net_loaded_flag = 0; /* indicate no net in program */
* CREATE A NEW NEURAL NETWORK
* A new neural network will be built, based upon the current options entered.
static int i, j, k, /* work variables */
start, /* These two determine the start and */
offset; /* spacing of neurons for the hidden layers */
start = 20; /* first location for hidden neurons */
offset = n_width; /* neurons in hidden layers */
net_init(n_type); /* reset network */
* Apply bias neurons, if required. Neuron 0 should be constant
if(n_bias == 1){ /* check for bias flag set */
for(i = 0; i < n_layers-2; i++){ /* bias all hidden neurons */
for(j = start+i*offset; j < start+i*offset+n_width; j++){
net_arc_define(0, j);
}
}
net_arc_define(0, 18); /* bias output neurons */
net_arc_define(0, 19);
def_input(0); /* set neuron as an input */
net_input(0, 1.0); /* give an input value */
* if required create first hidden layer
if(n_layers > 2){ /* check for hidden layers */
for(j = start; j < start+n_width; j++){
* Check for input flags set. if set then define
* inputs to different neurons in first hidden layer.
* If no hidden layers then tie inputs to outputs
* Fully connect all hidden layers
for(i = 0; i < n_layers-3; i++){
for(j = start+i*offset; j < start+i*offset+n_width; j++){
for(k=start+(i+1)*offset; k<start+(i+1)*offset+n_width; k++){
net_arc_define(j, k);
}
* Tie last hidden layer to outputs
for(i=start+offset*(n_layers-3);
i<start+offset*(n_layers-3)+n_width; i++){
net_arc_define(i, 18); /* for joint 1 */
net_arc_define(i, 19); /* for joint 2 */
* Define inputs, as specified by user defined variables
* define outputs and set net created flag
* TRAIN THE NEURAL NETWORK (JUNCTION)
* This routine is a junction for training neural networks. This structure
* has been chosen to help speed training. There is one subroutine which
* can train any neural network configuration; it is very slow, thus
* special versions were made for the commonly used training cases. If no
* special case matches, the generic routine is used.
static int t_flag; /* Network trained flag */
t_flag = 1; /* Indicates not trained */
* The input type flags are checked. If the input flags have a
* specific pattern the appropriate routine is called.
if(t_flag == 1)printf("Using Special Case # 1\n");
} else if((n_position != 1) &&
if(t_flag == 1)printf("Using Special Case # 2\n");
} else if((n_position != 1) &&
if(t_flag == 1)printf("Using Special Case # 3\n");
} else if((n_position != 1) &&
if(t_flag == 1)printf("Using Special Case # 4\n");
} else if((n_output == POSITION) &&
if(t_flag == 1)printf("Using Special Case # 5\n");
} else if((n_position == 1) &&
if(t_flag == 1)printf("Using Special Case # 6\n");
* if the t_flag is still 1 then the network has not been
* trained and the default case will be used.
if(t_flag == 1)printf("Using General Case\n");
* TRAIN THE NEURAL NETWORK (GENERIC)
* The current training set is used to train the neural network. This is the
* generic trainer, and it is exceptionally slow.
static int i, j; /* Work variables */
static double t1_e, t2_e; /* joint angle work variables */
* Check that the training set and network exist
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
net_input(1, (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1);
net_input(2, (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2);
net_input(3, (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1);
net_input(4, (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2);
net_input(5, (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1);
net_input(6, (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2);
net_input(7, (t_set[j].t1_target+POFF_1)/2.0/PSCALE_1);
net_input(8, (t_set[j].t2_target+POFF_2)/2.0/PSCALE_2);
net_input(9, (t_set[j].t1_target
- t_set[j].t1_position + POFF_1)/2.0/PSCALE_1);
net_input(10, (t_set[j].t2_target
- t_set[j].t2_position + POFF_2)/2.0/PSCALE_2);
net_input(11, (t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1);
net_input(12, (t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2);
* Determine which output is required
if(n_output == POSITION){
t1_e = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
t2_e = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
} else if(n_output == VELOCITY){
t1_e = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
t2_e = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
} else if(n_output == ACCELERATION){
t1_e = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
t2_e = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
} else { /* TORQUE */
t1_e = (t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1;
t2_e = (t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2;
}
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
* TRAIN THE NEURAL NETWORK (SPECIAL)
* The current training set is used to train the neural network with the
* neural inputs: POSITION & DIFFERENCE, and the neural output: POSITION
* (for more comments see nt_t_gen())
static int i, j; /* Work variables */
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Do Math Conversion Before Training
for(j = 0; j < end_train; j++){
/* Position */ in1_1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_1[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
/* Difference */ in1_2[j] = (t_set[j].t1_target
-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_2[j] = (t_set[j].t2_target
-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
net_input(1, in1_1[j]); /* Position */
net_input(9, in1_2[j]); /* Difference */
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
* TRAIN THE NEURAL NETWORK (SPECIAL)
* The current training set is used to train the neural network with the
* neural inputs: VELOCITY, ACCELERATION & DIFFERENCE
* (for more comments see nt_t_gen())
static int i, j; /* Work variables */
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Do Math Conversion Before Training
for(j = 0; j < end_train; j++){
/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
/* Acceleration */ in1_2[j] = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
in2_2[j] = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
/* Difference */ in1_3[j] = (t_set[j].t1_target
-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_3[j] = (t_set[j].t2_target
-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
if(n_output == POSITION){
out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
} else if(n_output == VELOCITY){
out1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
out2[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
} else { /* ACCELERATION */
out1[j] = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
out2[j] = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
}
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
/* Velocity */ net_input(3, in1_1[j]);
/* Acceleration */ net_input(5, in1_2[j]);
/* Difference */ net_input(9, in1_3[j]);
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
* TRAIN THE NEURAL NETWORK (SPECIAL)
* The current training set is used to train the neural network with the
* neural inputs: VELOCITY and DIFFERENCE
* (for more comments see nt_t_gen())
static int i, j; /* Work variables */
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Do Math Conversion Before Training
for(j = 0; j < end_train; j++){
/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
/* Difference */ in1_2[j] = (t_set[j].t1_target
-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_2[j] = (t_set[j].t2_target
-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
if(n_output == POSITION){
out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
} else if(n_output == VELOCITY){
out1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
out2[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
} else { /* ACCELERATION */
out1[j] = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
out2[j] = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
}
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
/* Velocity */ net_input(3, in1_1[j]);
/* Difference */ net_input(9, in1_2[j]);
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
* TRAIN THE NEURAL NETWORK (SPECIAL)
* The current training set is used to train the neural network with the
* neural input: DIFFERENCE
* (for more comments see nt_t_gen())
static int i, j; /* Work variables */
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Do Math Conversion Before Training
printf("Preprocessing train set, size:%d \n",end_train);
for(j = 0; j < end_train; j++){
/* Difference */ in1_1[j] = (t_set[j].t1_target
-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_1[j] = (t_set[j].t2_target
-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
if(n_output == POSITION){
out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
} else if(n_output == VELOCITY){
out1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
out2[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
} else { /* ACCELERATION */
out1[j] = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
out2[j] = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
}
if(batch_mode == 1) printf("Starting to train \n");
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
/* Difference */ net_input(9, in1_1[j]);
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
void nt_t_5(){} /* An empty function just holding space */
* TRAIN THE NEURAL NETWORK (SPECIAL)
* The current training set is used to train the neural network with the
* neural inputs: POSITION, VELOCITY and DIFFERENCE
* (for more comments see nt_t_gen())
static int i, j; /* Work variables */
if((end_train > 0) && (net_loaded_flag == 1)){
* If point training is to be used, then mix up the data order
if(updater == POINT_TRAIN) nt_set_scramble();
* Do Math Conversion Before Training
printf("Preprocessing train set, size:%d \n",end_train);
for(j = 0; j < end_train; j++){
/* Velocity */ in1_1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
in2_1[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
/* Position */ in1_2[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
/* Difference */ in1_3[j] = (t_set[j].t1_target
-t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
in2_3[j] = (t_set[j].t2_target
-t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
if(n_output == POSITION){
out1[j] = (t_set[j].t1_position+POFF_1)/2.0/PSCALE_1;
out2[j] = (t_set[j].t2_position+POFF_2)/2.0/PSCALE_2;
} else if(n_output == VELOCITY){
out1[j] = (t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1;
out2[j] = (t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2;
} else if(n_output == ACCELERATION){
out1[j] = (t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1;
out2[j] = (t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2;
} else { /* TORQUE */
out1[j] = (t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1;
out2[j] = (t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2;
}
if(batch_mode == 1) printf("Starting to train \n");
* Loop for number of iterations
for(i = 0; i < n_iterations; i++){
* Print out training report (not for batch mode)
if((batch_mode == 1) && ((i % 10) == 0))
printf("training %d of %d \n", i, n_iterations);
for(j = 0; j < end_train; j++){
* Set up network inputs and solve
/* Position */ net_input(1, in1_2[j]);
/* Velocity */ net_input(3, in1_1[j]);
/* Difference */ net_input(9, in1_3[j]);
* Set expected values and apply backprop
net_back_prop(n_learn, n_smooth);
* Update the net (only has effect with set training)
* TEST THE NEURAL NETWORK
* The network is tested, using the current network configuration, and the
* current training set. This does not alter the network, but finds the
* error statistics for it.
static int i, j, /* work variables */
count; /* number of points in train set */
* Check for a training set, which may be used for testing
* Run through the training set to determine errors
for(j = 0; j < end_train; j++){
* Set network inputs and solve
net_input(1,(t_set[j].t1_position+POFF_1)/2.0/PSCALE_1);
net_input(2,(t_set[j].t2_position+POFF_2)/2.0/PSCALE_2);
net_input(3,(t_set[j].t1_velocity+VOFF_1)/2.0/VSCALE_1);
net_input(4,(t_set[j].t2_velocity+VOFF_2)/2.0/VSCALE_2);
net_input(5,(t_set[j].t1_acceleration+AOFF_1)/2.0/ASCALE_1);
net_input(6,(t_set[j].t2_acceleration+AOFF_2)/2.0/ASCALE_2);
net_input(7,(t_set[j].t1_target+POFF_1)/2.0/PSCALE_1);
net_input(8,(t_set[j].t2_target+POFF_2)/2.0/PSCALE_2);
net_input(9, (t_set[j].t1_target - t_set[j].t1_position + POFF_1)/2.0/PSCALE_1);
net_input(10,(t_set[j].t2_target - t_set[j].t2_position + POFF_2)/2.0/PSCALE_2);
net_input(11,(t_set[j].t1_torque+TOFF_1)/2.0/TSCALE_1);
net_input(12,(t_set[j].t2_torque+TOFF_2)/2.0/TSCALE_2);
* Set errors to zero before output evaluation
* Determine output type and calculate output and error
if(n_output == POSITION){
t_set[j].t1_output = (net_output(18)*PSCALE_1*2.0) - POFF_1;
t_set[j].t2_output = (net_output(19)*PSCALE_2*2.0) - POFF_2;
t_set[j].t1_error = t_set[j].t1_output - t_set[j].t1_position;
t_set[j].t2_error = t_set[j].t2_output - t_set[j].t2_position;
} else if(n_output == VELOCITY){
t_set[j].t1_output = (net_output(18)*VSCALE_1*2.0) - VOFF_1;
t_set[j].t2_output = (net_output(19)*VSCALE_2*2.0) - VOFF_2;
t_set[j].t1_error = t_set[j].t1_output - t_set[j].t1_velocity;
t_set[j].t2_error = t_set[j].t2_output - t_set[j].t2_velocity;
} else if(n_output == ACCELERATION){
t_set[j].t1_output = (net_output(18)*ASCALE_1*2.0) - AOFF_1;
t_set[j].t2_output = (net_output(19)*ASCALE_2*2.0) - AOFF_2;
t_set[j].t1_error = t_set[j].t1_output - t_set[j].t1_acceleration;
t_set[j].t2_error = t_set[j].t2_output - t_set[j].t2_acceleration;
} else { /* TORQUE */
t_set[j].t1_output = (net_output(18)*TSCALE_1*2.0) - TOFF_1;
t_set[j].t2_output = (net_output(19)*TSCALE_2*2.0) - TOFF_2;
t_set[j].t1_error = t_set[j].t1_output - t_set[j].t1_torque;
t_set[j].t2_error = t_set[j].t2_output - t_set[j].t2_torque;
}
* Do some statistics gathering
n1_error += t_set[j].t1_error;
n1_deviation += t_set[j].t1_error*t_set[j].t1_error;
n2_error += t_set[j].t2_error;
n2_deviation += t_set[j].t2_error*t_set[j].t2_error;
net_average_error += t_set[j].t1_error + t_set[j].t2_error;
net_deviation += t_set[j].t1_error*t_set[j].t1_error+
t_set[j].t2_error*t_set[j].t2_error;
* Do final statistics calculations
n1_error = n1_error / count;
n2_error = n2_error / count;
n1_deviation = sqrt(n1_deviation / count);
n2_deviation = sqrt(n2_deviation / count);
net_average_error = net_average_error / count / 2.0;
net_deviation = sqrt(net_deviation / count / 2.0);
* MIX UP THE ORDER OF THE TRAINING SET
* When training the network one point at a time, the order of the set is
* important. If the order has a trend, then the network will follow the
* trend of the training data, and only reproduce the last few points trained
* successfully. This means that the data points should be in a random order,
* and this routine has been written to perform that task.
static int i, /* Work variable */
var_1, var_2; /* The two locations which are to be swapped */
* Swap the random locations a large number of times
for(i = 0; i < end_train+end_train; i++){
* Find two random locations to swap
var_1 = (int)(end_train*(double)(rand() & 32767)/32767.0);
var_2 = (int)(end_train*(double)(rand() & 32767)/32767.0);
* write first location into buffer
t_set[TRAIN_SET].x_target = t_set[var_1].x_target;
t_set[TRAIN_SET].y_target = t_set[var_1].y_target;
t_set[TRAIN_SET].x_position = t_set[var_1].x_position;
t_set[TRAIN_SET].y_position = t_set[var_1].y_position;
t_set[TRAIN_SET].x_velocity = t_set[var_1].x_velocity;
t_set[TRAIN_SET].y_velocity = t_set[var_1].y_velocity;
t_set[TRAIN_SET].x_acceleration = t_set[var_1].x_acceleration;
t_set[TRAIN_SET].y_acceleration = t_set[var_1].y_acceleration;
t_set[TRAIN_SET].x_output = t_set[var_1].x_output;
t_set[TRAIN_SET].y_output = t_set[var_1].y_output;
t_set[TRAIN_SET].t1_target = t_set[var_1].t1_target;
t_set[TRAIN_SET].t2_target = t_set[var_1].t2_target;
t_set[TRAIN_SET].t1_position = t_set[var_1].t1_position;
t_set[TRAIN_SET].t2_position = t_set[var_1].t2_position;
t_set[TRAIN_SET].t1_velocity = t_set[var_1].t1_velocity;
t_set[TRAIN_SET].t2_velocity = t_set[var_1].t2_velocity;
t_set[TRAIN_SET].t1_acceleration = t_set[var_1].t1_acceleration;
t_set[TRAIN_SET].t2_acceleration = t_set[var_1].t2_acceleration;
t_set[TRAIN_SET].t1_torque = t_set[var_1].t1_torque;
t_set[TRAIN_SET].t2_torque = t_set[var_1].t2_torque;
* write second location into first
t_set[var_1].x_target = t_set[var_2].x_target;
t_set[var_1].y_target = t_set[var_2].y_target;
t_set[var_1].x_position = t_set[var_2].x_position;
t_set[var_1].y_position = t_set[var_2].y_position;
t_set[var_1].x_velocity = t_set[var_2].x_velocity;
t_set[var_1].y_velocity = t_set[var_2].y_velocity;
t_set[var_1].x_acceleration = t_set[var_2].x_acceleration;
t_set[var_1].y_acceleration = t_set[var_2].y_acceleration;
t_set[var_1].x_output = t_set[var_2].x_output;
t_set[var_1].y_output = t_set[var_2].y_output;
t_set[var_1].t1_target = t_set[var_2].t1_target;
t_set[var_1].t2_target = t_set[var_2].t2_target;
t_set[var_1].t1_position = t_set[var_2].t1_position;
t_set[var_1].t2_position = t_set[var_2].t2_position;
t_set[var_1].t1_velocity = t_set[var_2].t1_velocity;
t_set[var_1].t2_velocity = t_set[var_2].t2_velocity;
t_set[var_1].t1_acceleration = t_set[var_2].t1_acceleration;
t_set[var_1].t2_acceleration = t_set[var_2].t2_acceleration;
t_set[var_1].t1_torque = t_set[var_2].t1_torque;
t_set[var_1].t2_torque = t_set[var_2].t2_torque;
* Write buffer into second location
t_set[var_2].x_target = t_set[TRAIN_SET].x_target;
t_set[var_2].y_target = t_set[TRAIN_SET].y_target;
t_set[var_2].x_position = t_set[TRAIN_SET].x_position;
t_set[var_2].y_position = t_set[TRAIN_SET].y_position;
t_set[var_2].x_velocity = t_set[TRAIN_SET].x_velocity;
t_set[var_2].y_velocity = t_set[TRAIN_SET].y_velocity;
t_set[var_2].x_acceleration = t_set[TRAIN_SET].x_acceleration;
t_set[var_2].y_acceleration = t_set[TRAIN_SET].y_acceleration;
t_set[var_2].x_output = t_set[TRAIN_SET].x_output;
t_set[var_2].y_output = t_set[TRAIN_SET].y_output;
t_set[var_2].t1_target = t_set[TRAIN_SET].t1_target;
t_set[var_2].t2_target = t_set[TRAIN_SET].t2_target;
t_set[var_2].t1_position = t_set[TRAIN_SET].t1_position;
t_set[var_2].t2_position = t_set[TRAIN_SET].t2_position;
t_set[var_2].t1_velocity = t_set[TRAIN_SET].t1_velocity;
t_set[var_2].t2_velocity = t_set[TRAIN_SET].t2_velocity;
t_set[var_2].t1_acceleration = t_set[TRAIN_SET].t1_acceleration;
t_set[var_2].t2_acceleration = t_set[TRAIN_SET].t2_acceleration;
t_set[var_2].t1_torque = t_set[TRAIN_SET].t1_torque;
t_set[var_2].t2_torque = t_set[TRAIN_SET].t2_torque;
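Since the records are plain structures, each of the three field-by-field copies above could be collapsed into a single structure assignment (supported by ANSI C compilers), which would shrink this routine to a few lines; a sketch:

    t_set[TRAIN_SET] = t_set[var_1];     /* buffer <- first location  */
    t_set[var_1] = t_set[var_2];         /* first  <- second location */
    t_set[var_2] = t_set[TRAIN_SET];     /* second <- buffer          */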