The path optimization subroutines have been discussed at length in the thesis body. In particular, this section presents program code for the Hooke and Jeeves search. The original source of these routines is a book by Siddall [1982], where they perform the Hooke and Jeeves search as well as a number of other searches. The routines were stripped down to perform only this search, and were also modified as described below.

The original routines written by Siddall were based on the assumption that the limitations of the Hooke and Jeeves search could be overcome by relaxing the penalty function. The original Hooke and Jeeves search could become ‘hung up’ on constraints, so Siddall’s subroutine allowed the search to violate the constraints, in the hope that the method could move across constraints which would normally stop it. This approach had mixed results. When used on the path optimization problem the method did reduce the objective function, but because the solution was highly constrained, it was never able to produce a solution that satisfied all of the constraints.

The method was modified so that it strictly observes constraints, and after this change the solutions improved dramatically. For example, the cost found by one sample program in Siddall’s book (written by W. H. ElMaraghy) improved from 4.558 to 2.763. This was the caliper disc brake design program, which is included at the end of this appendix.
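In effect, the modification replaces the relaxed penalty with a hard accept/reject test on every trial point. A minimal sketch of that test, using the listing's convention that phi[i] < -o_zero marks a violated inequality constraint (the function name accept_move is illustrative, not one of Siddall's routines):

```c
#include <math.h>

/* Sketch of the hard-constraint modification described above: a trial
 * point is kept only when it lowers the cost AND violates no
 * inequality constraint, instead of letting a relaxed penalty carry
 * the search across a constraint boundary. */
static int accept_move(double u_trial, double u_best,
                       const double *phi, int ncons, double o_zero)
{
    for (int i = 0; i < ncons; i++)     /* phi[i] < -o_zero marks   */
        if (phi[i] < -fabs(o_zero))     /* a violated constraint    */
            return 0;                   /* reject infeasible moves  */
    return u_trial < u_best;            /* accept improvements only */
}
```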

The subroutines themselves have been rewritten from Fortran as faithfully as possible, to ease future translation of other optimization methods from Siddall’s book [1982]. The routines are broken into three parts. The first is a sample program to find the optimal design of caliper disc brakes; this is used only to test the validity of the subroutines. The second is a set of routines that perform the Hooke and Jeeves search. The final part is a set of subroutines common both to the Hooke and Jeeves routines and to the other optimization methods in Siddall’s book.

G.2 Sample Application Subroutines:

main() - Initializes the design variables for the caliper disc brake design and calls the optimization routine.

brake() - Prepares the design variables for optimization, optimizes them using the Hooke and Jeeves search, and reports the results. The routine depends on subroutines described later to evaluate the objective and constraint functions (ureal() and const()).

ureal() - This routine calculates an array of objective values for the caliper disc brake problem.

const() - An array of inequality constraints is calculated by this routine.

equal() - An array of equality constraints is calculated by this routine.

fsimp() - A Simpson’s rule integrator used for integrating a function found in the caliper disc brake problem. This is called from ureal().
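Simpson's rule itself is straightforward; a minimal composite version is sketched below (the name simpson and its signature are illustrative, not the actual fsimp() interface):

```c
#include <math.h>

/* Minimal composite Simpson's rule integrator in the spirit of
 * fsimp().  n is the number of panels and must be even. */
static double simpson(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);          /* end points weighted 1       */
    for (int i = 1; i < n; i++)        /* interior points weighted    */
        sum += (i % 2 ? 4.0 : 2.0) * f(a + i * h);  /* 4,2,4,...,4   */
    return sum * h / 3.0;
}

static double sq(double x) { return x * x; }    /* example integrand */
```

Simpson's rule is exact for polynomials up to degree three, so sq() integrates exactly up to roundoff.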

G.3 Hooke and Jeeves Subroutines:

seek() - An entry point that sets up the call to the Hooke and Jeeves housekeeping routine. This routine sets up the work arrays for temporary use.

seeki() - Calls and monitors the progress of the Hooke and Jeeves search routine. This calls the search() subroutine until the method has converged or the search has failed.

search() - Performs the Hooke and Jeeves search. Returns the converged solution, or a report of failure.

shot() - A shotgun approach used when the first cost function, optim1() (i.e. npenal = 1), is in use. It is invoked only to overcome the problem which can occur when the search routine is stuck on a ridge.
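The exploratory and pattern moves that search() performs can be illustrated with a compact unconstrained version; hj_explore(), hj_minimize(), and quad() are hypothetical names, and the constraint handling and reporting of the real routines are omitted:

```c
#include <math.h>

#define NMAX 8                          /* illustrative dimension cap */

/* Exploratory pass: perturb each variable by +/- step, keeping any
 * change that lowers the cost.  x is updated in place. */
static double hj_explore(double (*f)(const double *, int),
                         double *x, int n, double step, double fx)
{
    for (int i = 0; i < n; i++) {
        double old = x[i];
        x[i] = old + step;              /* try a positive step */
        double ft = f(x, n);
        if (ft < fx) { fx = ft; continue; }
        x[i] = old - step;              /* then a negative step */
        ft = f(x, n);
        if (ft < fx) { fx = ft; continue; }
        x[i] = old;                     /* neither helped: restore */
    }
    return fx;
}

/* Simplified Hooke and Jeeves loop: explore, then attempt a pattern
 * move through the improved point; halve the step when stuck. */
static double hj_minimize(double (*f)(const double *, int),
                          double *x, int n, double step, double tol)
{
    double xb[NMAX];                    /* base point before exploring */
    double fb = f(x, n);
    while (step > tol) {
        for (int i = 0; i < n; i++) xb[i] = x[i];
        double fx = hj_explore(f, x, n, step, fb);
        if (fx < fb) {                  /* exploration improved: try a */
            fb = fx;                    /* pattern move through the new */
            double xp[NMAX];            /* point, kept only if it is   */
            for (int i = 0; i < n; i++) /* better still                */
                xp[i] = 2.0 * x[i] - xb[i];
            double fp = f(xp, n);
            if (fp < fb) {
                fb = fp;
                for (int i = 0; i < n; i++) x[i] = xp[i];
            }
        } else {
            step *= 0.5;                /* hung up: reduce step size */
        }
    }
    return fb;
}

static double quad(const double *x, int n)  /* example objective */
{
    (void)n;
    return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] - 2.0) * (x[1] - 2.0);
}
```

The point at which no exploratory or pattern move helps is exactly the kind of hang-up for which the real routines may fall back on shot().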

G.4 Common Optimization Support Routines:

answer() - Prints a report of the optimization results.

crit() - A convergence criterion used with cost functions optim2() and optim5() (i.e. npenal = 2 or 5). As this function approaches zero, the solution approaches convergence.

fc() - A convergence criterion used with penalty function optim3() (i.e. npenal = 3). The function reports the greatest violation of the constraints.

frandn() - Generates an array of random numbers using the random number generator from the system libraries. (Note: This routine may be machine dependent.)

optim() - An entry point which calls the cost function routine specified by the value of npenal. This allows the penalty function to be changed very easily.

optim1() - Returns a cost function based upon the objective function. If constraints are violated, the cost function rises sharply at those violations.

optim2() - Similar to optim1(), except that the constraints also have an interior effect: as the solution approaches a constraint the cost rises slowly, and as it passes the constraint the cost rises drastically.

optim3() - A penalty function based on Powell’s method. This is not used here.

optim5() - A penalty function based on Schuldt’s method. This is also unused here.

pendat() - Prints out the current constraint violations.

penout() - Prints a list of the current decision values for the current solution.
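The difference between the exterior (optim1) and interior (optim2) cost styles above can be sketched as follows. The function names and the 1.0e20 weight are illustrative (the listing itself uses 10.0e+20), and the sign convention that phi[i] < 0 marks a violation follows the listing:

```c
#include <math.h>

/* Exterior penalty (optim1 style): feasible points cost u, and each
 * violated constraint adds a very large term so the search backs off. */
static double cost_exterior(double u, const double *phi, int ncons)
{
    double c = u;
    for (int i = 0; i < ncons; i++)
        if (phi[i] < 0.0)
            c += fabs(phi[i]) * 1.0e20;   /* sudden, sharp rise */
    return c;
}

/* Interior barrier (optim2 / Fiacco-McCormick style): the cost also
 * rises smoothly as phi[i] approaches zero from the feasible side,
 * with r controlling the strength of the barrier. */
static double cost_interior(double u, const double *phi, int ncons,
                            double r)
{
    double c = u;
    for (int i = 0; i < ncons; i++)
        c += (phi[i] > 0.0) ? r / phi[i]            /* slow interior rise */
                            : fabs(phi[i]) * 1.0e20; /* drastic on violation */
    return c;
}
```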

* A set of common functions used by most optimization routines, as outlined

* in Siddall [1982]. These mostly include input/output routines, cost

* function calculation (including penalties), and service routines.

* NOTE: IF A SUBROUTINE DOES NOT HAVE A DATE OF JULY 8th, 1990, OR

* LATER, IT HAS NOT BEEN CHECKED FOR ERRORS !!!!

* Variables: nrun - current run number

* ncons - number of inequality constraints

* nequs - number of equality constraints

* phi - array of inequality constraints

* psi - array of equality constraints

* npenal - penalty function which is to be used

* rm - array of weights for penalty functions (for optim 4)

* o_iprint - generate a report every o_iprint(th) cycle

* o_idata - input data is printed when o_idata=1

* n - number of independent decision variables

* x - array of decision variables

* o_f - fraction of range used as step size

* o_g - step size fraction used as convergence criterion

* o_maxm - maximum moves permitted

* o_ko - flag for global convergence o_ko=1 when not at optimal

* The functions which appear in this code are listed here.

void answer(u, x, phi, psi, n, ncons, nequs)

* this subroutine is used merely to output the final solution in a

* standard form. if an optimum is not reached (o_ko=1) then the results

* at the last iteration may be printed out.

* Variables: x - array of decision variables

* n - number of decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* Global: o_ko - Flag for convergence to optimal solution

* Returns: u - new value for decision cost

* phi - array of inequality constraints

* psi - array of equality constraints

printf("\n\nOptimum Solution found \n");

printf("Minimum u = %f\n", u[0]);

printf("\n\nResults at last iteration \n");

printf("Current u = %f\n", u[0]);

printf("x(%d) = %f\n", i, x[i]);

printf("Inequality Constraints \n");

printf("phi(%d) = %f\n", i, phi[i]);

printf("Equality Constraints \n");

printf("psi(%d) = %f\n", i, psi[i]);

double crit(ncons, nequs, phi, psi, u, ulast, o_zero)

double *phi, *psi, u, ulast, o_zero;

* CONVERGENCE CRITERION FOR OPTIM2 AND OPTIM5

* crit is convergence criterion for optim2 and optim5, and is calculated as

* the absolute sum of all changes to the objective and constraint violations.

* Variables: ncons - number of inequality constraints

* nequs - number of equality constraints

* phi - array of inequality constraints

* psi - array of equality constraints

* ulast - previous objective function

* o_zero - the 'o_zero' tolerance for inequality constraint violations

* Returns: A value which represents the ’activity’ in the optimization.

if(phi[i] < -fabs(o_zero)) critter += fabs(phi[i]);

for(i = 0; i < nequs; i++) critter += fabs(psi[i]);

critter += fabs(u - ulast)/fabs(ulast);

double fc (ncons,nequs,phi,psi,sig)

* CONVERGENCE CRITERION FOR OPTIM3

* fc is convergence criterion for optim3, which looks for the greatest

* value of constraint violation, and returns that value.

* Variables: ncons - number of inequality constraints

* nequs - number of equality constraints

* phi - array of inequality constraints

* psi - array of equality constraints

* sig - storage array for objectives

* Returns: The value of the maximum constraint violation

temp = (phi[i] < sig[i]) ? fabs(phi[i]) : fabs(sig[i]);

if(fabs(psi[i]) > fcc) fcc = fabs(psi[i]);

* This function is very machine specific, and should generate random

* numbers between 0 and 1, and fill the array ’aa’ in ’n’ locations.

* Variables: nn - number of array locations to fill

* mm - a random number seed, needed in some cases

* Return: aa - an array of ’n’ random numbers

aa[i] = (double)(32767 & rand())/32767.0;

void optim(x, ncons, nequs, o_zero, o_r, o_reduce, rm, nrun, npenal, uart, phi,

psi, u, nviol, phip, psip, sig, tau, sigk, tauk)

double *x, *rm, *phi, *psi, *phip, *psip, *sig, *tau, *sigk, *tauk, *o_zero,

int ncons, nequs, nrun, npenal, *nviol;

* calling subroutine for penalty functions specified by ’npenal’. This will

* select one of a number of penalty functions. Every penalty function uses

* some of the arguments passed to this function.

* Variables: x - optimum decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* o_reduce - reduction factor for ’r’

* nrun - the number of runs so far

* npenal - the number of the penalty function to be used

* Return: o_zero - Any zero value assigned in function

* o_r - reduction rate for step size

* uart - a cost function return variable

* phi - array of inequality constraints

* psi - array of equality constraints

* nviol - number of constraint violations

* phip - a work array for objectives

* psip - a work array for constraints

* sig - another work array for objectives

* tau - another work array for constraints

* sigk - yet another work array for objectives

* tauk - yet another work array for constraints

optim1(x, ncons, nequs, o_zero[0], uart, phi, psi, u, nviol);

optim2(x, ncons, nequs, o_zero[0], o_r, o_reduce, nrun, uart,

optim3(x, ncons, nequs, o_zero[0], o_r[0], nrun, uart, phi, psi,

u, nviol, phip, psip, sig, tau, sigk, tauk);

puts("Penalty Function Option #4 is not available\n");

optim5(x, ncons, nequs, o_zero, o_r, nrun, uart, phi, psi,

u, nviol, phip, psip, sig, tau);

void optim1(x, ncons, nequs, o_zero, uart, phi, psi, u, nviol)

double *x, *phi, *psi, o_zero, *uart, *u;

* A ONE PASS EXTERIOR PENALTY FUNCTION

* one pass exterior penalty function, which simply finds the penalty

* function for the optimization routines. This routine uses a sudden

* increase in penalty function value to discourage constraint violations

* Variables: x - optimization decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* o_zero - The inequality o_zero

* Returns: nviol - number of violations

* uart - complete cost function

* phi - array of inequality constraints

* psi - array of equality constraints

nviol[0] = 0; /* zero the count of violations */

sum1 += fabs(phi[i])*10.0e+20;

if(!((psi[i] >= -o_zero) && (psi[i] <= o_zero))){

sum2 += fabs(psi[i])*10.0e+20;

void optim2(x, ncons, nequs, o_zero, o_r, o_reduce, nrun, uart, phi,

double *x, *phi, *psi, o_zero, *o_r, o_reduce, *uart, *u;

int ncons, nequs, nrun, *nviol;

* FIACCO-MCCORMICK PENALTY FUNCTION METHOD

* penalty function based on the Fiacco-McCormick method; includes interior

* Variables: x - optimization variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* o_zero - is tolerance on feasibility

* o_reduce - the factor changing ’r’

* nrun - number of cycles run to date

* Returns: nviol - number of violations

* uart - complete cost function

* phi - array of inequality constraints

* psi - array of equality constraints

* r - multiplier for changing each run

* Global: o_npast - keeps track of last number of runs when called

ncall; /* cumulative number of calls in a run */

if(!((ncall > 0) || (nrun == 1))) o_r[0] *= o_reduce;

/* avoid dividing by approximately zero, */

/* there is no point penalizing a very */

void optim3(x, ncons, nequs, o_zero, o_r, nrun, uart, phi, psi, u, nviol, phip,

double *x, *phi, *psi, *phip, *tau, *sig, *tauk, *sigk, *psip, o_zero, o_r,

int ncons, nequs, nrun, *nviol;

* penalty function based on Powell's method

* Variables: x - optimization variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* o_zero - the zero tolerance level

* o_r - the step size reduction

* o_reduce - the factor to reduce ’o_r’ by

* nrun - number of runs to date

* Returns: nviol - number of violations

* uart - complete cost function

* o_zero is tolerance on feasibility and stopping criterion.

* tau and sig are updated parameters.

* tauk and sigk are previous values of tau and sig

* ncall is current cumulative number of calls in a run.

static double sum1, sum2, c, ck, temp;

if(nrun <= 1){ /* Initialize parameters on first run */

c = fc(ncons, nequs, phi, psi, sig); /* Check for convergence */

ck = c; /* Check for final optimum */

if(ck < o_zero){ /* Zero is used as a flag for final */

o_zero = - o_zero; /* convergence - must be reset in */

for(i = 0; i < ncons; i++) sigk[i] = sig[i];

for(i = 0; i < nequs; i++) tauk[i] = tau[i];

/* Check for adequate convergence */

if(!((ck < o_cold/4.0) || (o_nf == 1))){

/* set sig and tau back to previous values */

L2: for(i = 0; i < ncons; i++) sig[i] = sigk[i];

for(i = 0; i < nequs; i++) tau[i] = tauk[i];

L4: for(i = 0; i < ncons; i++) sig[i] = 0.1 * sig[i];

for(i = 0; i < nequs; i++) tau[i] = 0.1 * tau[i];

L3: for(i = 0; i < ncons; i++)

sig[i] -= (phip[i] < sigk[i]) ? phip[i] : sigk[i];

for(i = 0; i < nequs; i++) tau[i] -= psip[i];

if(phi[i] < -o_zero) nviol[0]++;

if(temp <= 0.0) sum1 += temp*temp;

if((psi[i] < -o_zero) || (psi[i] > o_zero)) nviol[0]++;

void optim5 (x, ncons, nequs, o_zero, o_r, nrun, uart, phi, psi, u, nviol,

double *x, *phi, *psi, *phip, *tau, *sig, *psip, *o_zero, *o_r, *uart, *u;

int ncons, nequs, nrun, *nviol;

* penalty function based on Schuldt's method

* Variables: x - optimization variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* o_zero - zero level for zero detect

* o_r - step size reduction factor

* o_reduce - reduction factor for ’o_r’

* nrun - number of accumulated runs

* Returns: nviol - number of violations

* uart - complete cost function

* o_zero is tolerance on feasibility

* tau and sig are updated parameters

* ncall is current cumulative number of calls in a run

static double sum1, sum2, temp;

if(nrun <= 1){ /* First run initialize parameters */

} else { /* subsequent run, reset parameters */

if(phi[i] < -o_zero[0]) nviol[0]++;

sum1 += (temp*temp-sig[i]*sig[i]);

if((psi[i] < -o_zero[0]) || (psi[i] > o_zero[0]))

sum2 += (temp*temp-tau[i]*tau[i]);

void pendat(npenal, o_zero, o_tol, o_r, o_reduce, rm, ncons, nequs)

double *rm, o_zero, o_tol, o_r, o_reduce;

* REPORT ON PENALTY FUNCTION PARAMETERS

* Prints input data related to the penalty function used. The data is tailored

* to match the actual variables used by the particular function. The entire

* report is dumped directly to the main IO stream.

* Variables: npenal - penalty function used.

* o_zero - tolerance on constraints

* o_tol - tolerance for stopping at final optimum

* o_r - penalty multiplier scalar

* o_reduce - adaptive factor for altering ’o_r’

* rm - penalty function multiplier array

* ncons - number of inequality constraints

* nequs - number of equality constraints

printf("\n\nPenalty function used is optim %d\n", npenal);

printf("Tolerance on constraints %f\n", o_zero);

if((npenal > 1) && (npenal <= 5))

printf("Stopping tolerance for final optimum %f\n", o_tol);

printf("Penalty multiplier o_r = %f \n", o_r);

printf("Reduction factor for ’r’ after each minimization,\n");

printf(" o_reduce = %f \n", o_reduce);

if((npenal == 3) || (npenal == 5)){

printf("Penalty multiplier o_r = %f \n", o_r);

for(i = 0; i < (ncons + nequs); i++)

printf("Penalty function multiplier rm(%d) = %f\n",

void penout(npenal, x, o_r, rm, nrun, u, ncons, nequs, n)

int npenal, nrun, ncons, nequs, n;

* for printout at end of each run, with values tailored to particular

* penalty functions. All IO goes to current stream.

* Variables: npenal - penalty function used.

* x - array of decision variables

* o_r - penalty multiplier scalar

* rm - penalty function multiplier array

* nrun - current iteration count

* ncons - number of inequality constraints

* nequs - number of equality constraints

* n - number of decision variables

printf("\n\nResults for run %d\n", nrun);

if((npenal == 2) || (npenal == 3)){

for(i = 0; i < (ncons + nequs); i++)

printf(" rm(%d) = %f\n", i, rm[i]);

printf(" x(%d) = %f\n", i, x[i]);

* HOOKE AND JEEVES OPTIMIZATION SUBROUTINES

* The Hooke and Jeeves search methods are contained here.

* NOTE: IF A SUBROUTINE DOES NOT HAVE A DATE OF JULY 8th, 1990, OR

* LATER, IT HAS NOT BEEN CHECKED FOR ERRORS !!!!

* The optimization routines have been debugged for their basic operation.

* A large amount of overhead is present in these routines. This overhead

* is due to the fact that these routines were originally intended to be

* part of a much larger library of optimization routines [Siddall, 1982].

* These subroutines were originally presented in Fortran; they have been

* rewritten in 'C' with some modifications.

* Variables: nrun - current run number

* ncons - number of inequality constraints

* nequs - number of equality constraints

* phi - array of inequality constraints

* psi - array of equality constraints

* npenal - penalty function which is to be used

* rm - array of weights for penalty functions (for optim 4)

* o_iprint - generate a report every o_iprint(th) cycle

* o_idata - input data is printed when o_idata=1

* n - number of independent decision variables

* x - array of decision variables

* o_f - fraction of range used as step size

* o_g - step size fraction used as convergence criterion

* o_maxm - maximum moves permitted

* o_ko - flag for global convergence o_ko=1 when not at optimal

* The description for these global variables is given above. They are

* defined here in the blocks which they occupied previously when Fortran

* common statements were used.

/* Set default values for programming parameters */

int o_ko; /* Common for opti block */

int o_npast, /* Common for start block */

int o_idata = 1, /* Common for seek block */

* The functions which appear in this code are listed here.

* This includes the basic set of common functions used by these routines

* and other optimization routines proposed by Siddall [1982].

#include "/usr/people/hugh/thesis/src/optimize/opt_fun.c"

void seek(n, ncons, nequs, npenal, rmax, rmin, xstrt, x, u, phi, psi, nviol, w)

double *rmax, *rmin, *xstrt, *x, *phi, *psi, *w, *u;

int n, ncons, nequs, npenal, *nviol;

* HOOKE AND JEEVES DIRECT SEARCH METHOD

* This routine serves no purpose other than to organize the work array

* before calling seeki(). The array w contains all working arrays and must

* be dimensioned at or greater than 4*(n+ncons+nequs)+n.

* Variables: (see subroutine 'seeki()')

* Returns: (see subroutine 'seeki()')

static int i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11;

if(ncons > 0) i6 = i5 + ncons;

if(nequs > 0) i7 = i6 + nequs;

if(nequs > 0) i9 = i8 + nequs;

if(ncons > 0) i10 = i9 + ncons;

if(nequs > 0) i11 = i10 + nequs;

seeki(n, ncons, nequs, npenal, rmax, rmin, xstrt, x, u, phi, psi,

nviol, &w[0], &w[i1-1], &w[i2-1], &w[i3-1], &w[i4-1], &w[i5-1],

&w[i6-1], &w[i7-1], &w[i8-1], &w[i9-1], &w[i10-1], &w[i11-1]);

if((npenal == 2) || (npenal == 3)) o_r = 1.0;

void seeki(n, ncons, nequs, npenal, rmax, rmin, xstrt, x, u, phi, psi,

nviol, xs, work1, work2, work3, work4, phip, psip, sig, tau, sigk,

double *rmax, *rmin, *xstrt, *x, *phi, *psi, *work1, *work2, *work3, *work4,

*phip, *psip, *sig, *tau, *sigk, *tauk, *rm, *xs, *u;

int n, ncons, nequs, npenal, *nviol;

* HOOKE AND JEEVES HOUSEKEEPING

* This subroutine is responsible for the pre- and post-processing of the

* Hooke and Jeeves search. This routine checks for convergence, prints

* success or failure, determines when to quit, and sets the step size

* Variables: n - number of decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* npenal - penalty function number

* rmax - array of maximum for decision variables

* rmin - array of minimums for decision variables

* xstrt - array of start points for decision variables

* x - array of decision variables

* phi - array of inequality constraints

* psi - array of equality constraints

* nviol - number of violations of constraints

* xs - Array of initial values of decision variables

* work1 - A work array to be passed to the Hooke and Jeeves search

* work2 - Another similar work array

* work3 - Another similar work array

* work4 - and the last miscellaneous work array

* phip - work array for objective values

* psip - work array for constraints

* sig - another work array for objectives

* tau - another work array for constraints

* Returns: x - array of decision variables

* phi - array of inequality constraints

* psi - array of equality constraints

* xs - Array of initial values of decision variables

static int i, kount, nrun, kk;

static double uart, c, ulast, f_original;

"Direct search optimization using Seek with penalty function %d\n",

printf("Output for every o_iprint(th) cycle o_iprint=%d\n",

printf("When = 1 input data is printed o_idata=%d\n", o_idata);

printf("Number of independent variables n=%d\n", n);

printf("Number of inequality constraints ncons=%d\n", ncons);

printf("Number of equality constraints nequs=%d\n", nequs);

printf("Fraction of range used as step size o_f=%f\n", o_f);

printf("Step size frac. used as converg. criter. o_g=%f\n",o_g);

printf("maximum number of moves permitted o_maxm=%d\n", o_maxm);

printf("Max # of Shot gun searches, o_nshot=%d\n",

printf("# of test pts in shot gun, o_ntest=%d\n",

pendat(npenal, o_zero, o_tol, o_r, o_reduce, rm, ncons, nequs);

for(i = 0; i < n; i++) printf(

"Est Upper bound of x(%d), rmax=%f\n", i, rmax[i]);

for(i = 0; i < n; i++) printf(

"Est Lower bound of x(%d), rmin=%f\n", i, rmin[i]);

for(i = 0; i < n; i++) printf(

"Start values of x(%d), xstrt=%f\n", i, xstrt[i]);

if (o_iprint > 0) printf("%d independent variables x,\n", n);

for(i = 0; i < n; i++) xs[i] = xstrt[i];

L4: search(n, ncons, nequs, npenal, rmax, rmin, xs, nrun, x, u, phi, psi,

nviol, work1, work2, work3, work4, phip, psip, sig, tau, sigk,

/* Shotgun search only valid for optim1 */

shot(n, ncons, nequs, npenal, rmax, rmin, nrun,x, u,

phi, psi, work1, work2, work3, &kk);

/* check to see whether subr.shot has found */

printf("shot u = %f\n", u[0]);

printf("x(%d) = %f\n", i, x[i]);

/* redefine starting point for search */

for(i = 0; i < n; i++) xs[i] = x[i];

/* call optim needed after failed shot to */

/* redefine all values to current */

optim(x, ncons, nequs, &o_zero, &o_r, o_reduce, rm, nrun,

npenal, &uart, phi, psi, u, nviol, phip,

for(i = 0; i < n; i++) xstrt[i] = x[i];

/* o_ko cannot be reset in subr.shot, therefore if */

/* o_ko=1 at this stage then subr.search failed and */

/* shot found no improvement. */

printf("Direct search has hung up, and shotgun \n");

printf("search cannot find a better point, try a \n");

printf("different starting point \n");

for(i = 0; i < n; i++) xstrt[i] = x[i];

printf("Direct search has hung up after all shots \n");

/* apparent optimum in a run */

penout(npenal, x, o_r, rm, nrun, u[0], ncons, nequs, n);

c = crit(ncons, nequs, phi, psi, u[0], ulast, o_zero);

if((npenal == 3) || (npenal == 4)){

if((o_zero < 0.0) || ((c < o_tol) && (nviol[0] == 0))){

if((c > o_tol) || (nviol[0] > 0)){

L13: if(o_zero < 0.0) o_zero = -o_zero;

/* optimum has been reached */

for(i = 0; i < n; i++) xstrt[i] = x[i];

printf("No convergence after 10 runs \n");

for(i = 0; i < n; i++) xs[i] = x[i];

void search(n, ncons, nequs, npenal, rmax, rmin, xstrt, nrun, x, u, phi,

psi, nviol, xo, xb, dxxx, txxx, phip, psip, sig, tau, sigk, tauk, rm)

double *x, *xstrt, *rmax, *rmin, *phi, *psi, *xo, *xb, *dxxx, *txxx, *phip,

*psip, *sig, *tau, *sigk, *tauk, *rm, *u;

int n, ncons, nrun, nequs, npenal, *nviol;

* direct search portion of 'seek', based on the direct search algorithm

* of Hooke and Jeeves. Apologies for the goto's, which were carried over

* from the Fortran original. The algorithm was converted from its

* previous method: the method described in Siddall allowed constraint

* violation, which caused a problem with highly constrained solutions.

* The method was therefore altered so that it treats all constraints as

* hard, and does not accept or explore solutions which violate the

* constraints. Originally the constraints were allowed to be violated,

* the idea being that the search could overcome the 'hang up'

* encountered when the solution meets a constraint. One of Siddall's

* examples was run through this modified algorithm, and a factor of 2

* improvement was gained in the solution over the published results.

* Variables: n - number of decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* npenal - number of penalty function

* rmax - array of maximum decision variables

* rmin - array of minimum decision variables

* xstrt - array of initial decision variables

* nrun - number of cumulative runs

* x - array of decision variables

* xo - array of decision variables for tests with H & J

* dxxx - step sizes to be taken

* txxx - minimum step size allowed

* phip - work array for objectives

* psip - work array for constraints

* Returns: x - results of current Hooke and Jeeves search

* nviol - number of constraint violations

* u - cost function for optimization

* phi - array of inequality constraints

* psi - array of equality constraints

static int kkk, m1, k1, k2, i, nc, nfail;

static double uart, uarto, ureal;

kkk = 0; /* counter for printout loop */

m1 = 0; /* number of tries in this run */

k1 = 1; /* start position in decision array */

k2 = n; /* last position in decision array */

x[i] = xstrt[i]; /* set up work array */

xo[i] = x[i]; /* set first base point */

dxxx[i] = o_f*(rmax[i] - rmin[i]);/* generate delx(i) and test(i) */

nc = 1; /* set the case number */

L2: optim(x, ncons, nequs, &o_zero, &o_r, o_reduce, rm, nrun, npenal, &uart,

phi, psi, &ureal, nviol, phip, psip, sig, tau, sigk, tauk);

if(nc == 1){ /* do first move */

uarto = uart; /* get the first valid cost */

L5: nfail = 0; /* make search */

i = k1-1; /* pointer to current variable */

x[i] = x[i] + dxxx[i]; /* try the first move */

nc = 2; /* try the next move */

if((uart < uarto) && (nviol[0] == 0))

goto L9; /* Check for success of first case */

x[i] = x[i] - 2.0*dxxx[i]; /* If fail then move other way */

nc = 3; /* set for test by next case */

if ((uart < uarto) && (nviol[0] == 0))

goto L9; /* Check for fail of move */

nfail++; /* keep track of fails */

x[i] = x[i] + dxxx[i]; /* undo last change */

goto L10; /* goto the fail mode */

if(i < k2){ /* if not at the end of the set */

x[i] = x[i] + dxxx[i]; /* do first weight update */

nc = 2; /* set for next case */

if(nfail >= n){ /* if no improvements to set */

for(i = k1 - 1; i < k2; i++){ /* reduce weights, or quit */

if(dxxx[i] < txxx[i]) goto L23;

dxxx[i] = dxxx[i] / 2.0; /* Reduce weights */

/* establish new base point */

for(i = k1-1; i < k2; i++) xb[i] = x[i]; /* Store current run */

if(kkk == o_iprint){ /* report results */

printf(" run %d, u = %f, fails %d\n", m1, uart, nviol[0]);

printf(" x(%d) = %f \n", i, x[i]);

if(m1 > o_maxm) goto L23; /* check for too many runs */

for(i = k1-1; i < k2; i++) x[i] += (x[i] - xo[i]);

L19: if ((uart >= uarto) || (nviol[0] > 0)){

for(i = k1-1; i < k2; i++) xo[i] = xb[i];

printf("No feasible solution after allowable \n");

printf("number of moves, o_maxm = %d\n", o_maxm);

void shot(n, ncons, nequs, npenal, rmax, rmin, nrun, x, u, phi, psi, rr, xx,

double *phi, *psi, *rmax, *rmin, *x, *rr, *xx, *rf, *u;

int n, ncons, nequs, npenal, nrun, *kk;

* this shotgun search is intended to get the solution off a fence

* rather than to inch it towards the optimum; therefore large steps,

* equal to 10 times the initial step size in search(), are tried.

* Variables: n - number of decision variables

* ncons - number of inequality constraints

* nequs - number of equality constraints

* npenal - penalty function number

* rmin - minimum values of decision variables

* rmax - maximum values of decision variables

* nrun - number of runs to this point

* x - array of decision variables

* Returns: u - objective function

* phi - array of inequality constraints

* psi - array of equality constraints

* kk - if kk=1 then the search was successful

* xx - array of trial values from shotgun search

* rf - array of fractions of the ranges used in the search

* Global: o_ntest - number of points for shotgun test

* o_zero - A zero tolerance limit

* o_r - step size reduction factor

* o_f - fraction of range used as step size

static double umin, rm[1], utest[1], phip[1], psip[1], sig[1],

tau[1], sigk[1], tauk[1], ureal[1];

frandn(rr, n, 100); /* seed the random generator */

for(i = 0; i < n; i++) rf[i] = 10.0*o_f*fabs(rmax[i]-rmin[i]);

for(i = 0; i < n; i++) xx[i] = (x[i] - rf[i]) + rr[i]*2.0*rf[i];

optim(x, ncons, nequs, &o_zero, &o_r, o_reduce, rm, nrun,

npenal, utest, phi, psi, ureal, nviol, phip, psip, sig,

if((nviol[0] == 0) && (utest[0] < umin)){