Final Report on CAPP / PPC Integration

 

 

The Design, Automation, and Manufacturing Research Laboratory

 

 

 

 

 

the 29th day of July in the year 1992

 

EXECUTIVE SUMMARY

 

This report outlines the final results of a multinational project which has been graciously funded by the Province of Ontario, Canada, and the State of Baden-Wurttemberg, Germany. The project saw the Fraunhofer Institute for Production Automation (IPA) in Stuttgart, Germany, McMaster University in Hamilton, and The University of Western Ontario (UWO) in London cooperate to jointly develop methods for integrating CAPP (Computer Aided Process Planning) and PPC (Production Planning and Control). CAPP and PPC are becoming more common in automated manufacturing. It was recognized that, although CAPP and PPC are being used, they were not being integrated, thus losing some of the key abilities of the software, in particular the ability to make timely changes to process plans as the status of the shop floor evolves.

 

Various groups provided different forms of expertise. The group from IPA provided knowledge about PPC, based upon their software package GRIPPS, which is being developed for commercial applications. McMaster contributed its software for reactive planning, along with knowledge about CAPP systems. UWO provided expertise in communications and database issues. At present there are four software packages related to the project: GRIPPS from IPA, RPE from McMaster, and the two packages described in this report. Both of the packages developed at UWO provide communications and data movement between CAPP, RPE and GRIPPS. The two versions of the Integrator are differentiated by their communication methods. One Integrator uses databases as the foundation for passing events and data; the second uses OSI standards to pass events and data. Both have advantages and disadvantages, as will be described within the report.

 

The function of the Integrator software is to deal with data format conversion and event handling. The primary form of data to be dealt with is process plans flowing from CAPP through RPE to PPC. Resource data flows backwards from PPC through RPE to CAPP. The data is stored in many forms, such as relational databases, ASCII files, and proprietary databases, and in various structures. Therefore the Integrator is responsible for converting data as it is passed between modules. Events may be generated by any program at any time. As events are passed through the Integrator they will cause data to be transferred, and may also drive other functions, such as long-term statistics gathering. Events will be generated for standard occurrences such as requests for process plans, but they may also be generated for unexpected events on the shop floor, such as bottlenecks and resource shortages. Some of these problems cannot be solved by scheduling techniques alone and require replanning of the processes themselves with the least disturbance to the overall production system.

In total, our Integrator resolves the integration issues between the two systems: product data representation and exchange, event handling and real-time issues, common definition of variables and attributes, and the use of standard common databases.

 

MOTIVATION

 

In small and medium sized shops, 20-30% of all jobs must be redirected to other resources to achieve production goals. Existing process planners produce fixed, linear sequences of operations. Schedules based on these are not flexible and are unable to react to disruptions on the shop floor. Order throughput is ultimately accomplished through improvisation, but the associated cost penalties adversely affect manufacturing competitiveness. Flexible and rapid planning is crucial in Computer-Integrated Manufacturing (CIM) for high throughput in a Just-in-Time (JIT) environment. Nearly all existing CAPP systems are:

 

- designed merely to support the administration of plans previously created manually

- applicable to one domain only (mostly machining)

- not integrated or interfaced with PPC

 

A closed-loop CAPP/PPC system has the advantage of responding to unforeseen production problems or unforeseen events (studies show that a third of all process plans are not valid or have to be modified on short notice when manufacturing starts). Dynamic and reactive CAPP is required to address the above limitations and to respond effectively to changes in product style versus manufacturing capability. A generic, modular and domain-independent “integrator” capable of integrating various CAPP systems and various PPC systems is needed, along with PPC systems capable of utilizing the data and knowledge provided by the dynamic planner and integrator modules.

 

REPORT OUTLINE

 

This report begins by outlining the purpose of the project. After describing the objectives, the report goes on to describe who the key players are and how they are related to the project as a whole, thus establishing the means of project execution. Industry contacts are then described, since they determined the constraints which guided our research and development. The issues of implementation and existing problems are described in the sections which follow. This includes a description of the existing literature, issues of importance to the integrator, and then details of the integrator developed, covering events, data structures and formats, software written, databases, and existing software. In conclusion, progress to date and outstanding work are discussed. The reader's attention should be drawn to the quantity of work which is documented in the appendices only. These are considered important, but too voluminous to include in the main body of this report.

 

PROJECT OBJECTIVES

We set out to develop an automated interface between CAPP and PPC. This interface was to resolve data conversion issues and basic updates to process plans. The novel feature of this system was to be its ability to deal with feedback from the shop floor about failed process plans. As the availability of resources in the factory changes, new plans must be generated in real time to allow work to continue unhindered.

 

The basic goals can be summarized with a few key points.

 

• Cooperative research in the field of computerized, highly automated process planning and control

• Integration of CAPP and PPC for discrete parts manufacturing.

• Given the expertise of each team, it was agreed that:

- Ontario researchers to focus on CAPP

- Baden-Wurttemberg researchers to focus on PPC

• Establish a strong link between CAPP and PPC which:

- Allows PPC to respond more effectively to unexpected shop floor disturbances and bottlenecks

- Makes CAPP more responsive to shop floor status, unexpected shortages or failures that cannot be handled by PPC.

• Implementation with Industrial Partners (Proof-of-Concept Prototype)

 

In particular the Ontario research team will:

• Evaluate state of the art and conduct literature reviews in Generative Computer-Aided Process Planning and become familiar with the PPC models developed by IPA to date.

• Define product modelling requirements, representation schemes and necessary knowledge base for assembly and assignment of machined parts to manufacturing cells.

• Define bi-directional data and requirements for interfacing Process Planning Systems with Production Planning and Control modules.

• Investigate the effects of the status and availability of actual production resources on Process Planning, and the required interface between CAPP and PPC modules.

• Investigate Issues of standards and communication (with IPA) as they relate to representation schemes, data exchange, and on-line modes of operation.

• Produce specifications and guidelines for proposed interface and test a proof-of-concept prototype.

• Identify candidate industrial application(s) and encourage industrial utilization of results.

• Hold joint colloquia in both Ontario and Baden-Wurttemberg to disseminate results of the research programme. Encourage joint publication where possible.

 

ORGANIZATION AND COMMUNICATIONS

A number of researchers are contributing to this project in 1991 (some on a part time basis) as follows:

 

The University of Western Ontario was represented by:

 

• Dr. W. H. ElMaraghy (Principal Investigator)

• Mr. J. Chien (Research Assistant, Yr. 2,3)

• Mr. D. Corrin (Research Assistant, Yr. 1,2,3)

• Mr. H. Jack (Research Assistant, Yr. 1,2,3)

• Mr. D. Lee (Research Assistant, Yr. 2)

• Ms. N. Lerner (Research Assistant, Yr. 1)

• Mr. B. McNeilly (Research Assistant, Yr. 3)

 

 

McMaster University was represented by:

 

• Dr. H. A. ElMaraghy (Principal Investigator)

• Dr. P. H. Gu (Research Assistant, Yr. 1)

• Mr. L. Laperriere (Research Assistant, Yr. 2)

• Mr. P. Nguyen (Research Assistant, Yr. 2,3)

• Mr. T. Pfaff (Research Assistant, Yr. 2,3)

• Mr. J. M. Rondeau (Research Assistant, Yr. 1)

• Mr. C. Stranc (Research Assistant, Yr. 2)

 

 

The Fraunhofer Institute for Production Automation (IPA) was represented by:

 

• Dr. Hermann Kuhnle (Principal Investigator)

• Jorg Buhrig (Research Assistant)

• Jochen Kurz (Research Assistant)

 

Communications were mainly in the form of meetings, faxes and e-mail. There were a total of three joint meetings.

• Stuttgart (IPA).

• Hamilton (McMaster).

• London (Western).

 

The final reporting of the results has been, and will continue to be, in the form of technical publications.

• Two papers accepted for presentation at the CIRP general assembly in France, August 1992.

• Joint publication (planned).

• Joint project report (in progress).

 

NEED FOR CAPP/PPC INTEGRATION

An outline of CAPP research may be found in the papers by Alting and Zhang (1989), Ham and Lu (1988), Eversheim (1985), Lenau and Alting (1990), and Weill et al. (1982). Most of the research has centred around metal cutting processes. Ham and Lu (1988) suggested future directions for research efforts in CAPP. The authors pointed out that process planning is often carried out without consideration of job shop status information, such as resource availability, breakdown of equipment or disruptions caused by stochastic bottlenecks. Replanning is done by improvisation and can result in long through-put times.

Eversheim et al. (1990) describe the current situation of order processing in industry as follows: “detailed information about the order on the one hand and the actual shop floor situation on the other hand are not available; realistic planning of the order processing is still the exception”. The authors propose an Assembly Management System which should provide sufficient information about the actual situation in case of disturbances. It is suggested that during order processing (planning), process alternatives, which mirror the flexibility of the assembly process, be incorporated. However, integration and common definitions between CAPP and PPC are not discussed.

Another approach to integration, from a high-level CIM perspective, is the model presented by Harhalakis et al. (1990). The model addresses integration at the facility level and is presented along with the rules of interaction between the constituent modules. The authors used this approach to automatically update the various databases used by CAD, CAPP, and MRP.

Törnshoff and Detand (1990) proposed, as part of the ESPRIT Project 2457 FLEXPLAN, a “process description concept” which can be used by planning, scheduling and control systems throughout a manufacturing environment. A Petri-net, graph-based representation is generated during process planning. It provides information structures which can be continuously enhanced during the progress of manufacturing. For example, the PPC system calculates order due dates, the scheduling system determines planned start and termination dates as well as resource allocation data, and the monitoring system updates the actual process history. However, the issues of events and related feedback from PPC to CAPP to respond to shop floor “disturbances”, and the way for CAPP to replan, are not discussed. This approach also assumes that the systems to be integrated - e.g. the CAPP and PPC systems - use Petri nets.

An approach for replanning “on-line” is presented by Ruf and Jablonski (1990). In this approach it is proposed that a static process planning system be used which identifies all combinations of manufacturing resources that are suited to produce a part. A dynamic resource allocation system decides on-line which of the possible resources have to be used in order to execute a manufacturing order. The paper deals with a feature-based part description, and does not consider issues of integration with a PPC system.

In summary, traditional CAPP systems are static, linear (strictly sequential), and they assume unlimited factory resources. To achieve an optimal schedule, process plans should take into consideration the actual workshop status as well as any capacity constraints. The reviewed research shows the necessity of breaking away from the process plan as a static and linear sequence, and the need to have plans that are able to represent parallelism and alternative operations or resources. Similarly, PPC systems should use a strategy that can benefit from this non-linear, alternative-plan representation. The task of integrating such CAPP and PPC systems, even in the presence of such capabilities, is not a minor one. This report focuses on this important subject, which has not been much discussed in previously published research. In particular, we wish to discuss the need for common definitions and the use of distributed vs. standard and common databases. The types of events and the resulting communication between various modules in a concurrent engineering and parallel heterogeneous processing environment are also considered.

 

 

INDUSTRIAL PERSPECTIVE

 

Typical medium size parts manufacturers in Ontario, Canada were surveyed to find out how production planning and process planning are carried out and how closely integrated they are. The outcome of these investigations was revealing. Nearly all process plans are created by humans, and detailed (micro-level) process planning is almost non-existent. What is passed on to production is mostly macro-level, mixed domain plans (including all required processes, e.g. metal sealing, assembly, welding, washing, etc.) known as routing sheets. The sequence of operations and machine selection is based on ideal assumptions regarding the availability of resources, and the best resources are always selected. These plans are linear and do not present alternative routes or resources. Once such plans are issued for production, the job of the process planner ends. On the shop floor, however, production disruptions occur due to shortages of resources (tools, material, operators, etc.) and bottlenecks. Foremen change the route sheets to meet production demands; however, this is done locally, without complete knowledge of its effect on the overall operation of the factory, and often leads to higher costs. It also became evident that expediting, with its associated costs, is a fact of life in this environment, and that capacity planning is hardly done.

 

Although some of the day to day production planning problems can be solved by scheduling techniques, it is apparent that rationalized alternate plans are needed in many cases to cope with the dynamic picture on the shop floor. The lack of communication between process planning and production planning obviously leads to higher costs and is a serious obstacle to achieving effective integrated manufacturing systems.

This situation is no different from that observed in other industrialized countries, and it gave rise to an international collaborative research project which started in 1990.

INDUSTRIAL NEEDS AND PROJECT IMPACT

 

• The ultimate users of the research results will be discrete parts manufacturers in a variety of economic sectors.

 

• Significant potential for small, medium and large companies implementing Computer Integrated Manufacturing (CIM).

 

• Typical medium size parts manufacturers in Ontario, that were contacted, revealed the following:

- Nearly all process plans are created manually.

- Detailed (micro-level) process planning is almost non-existent.

- What is passed to production is macro-level, mixed domain plans known as ‘routing sheets’

- Plans are based on an ideal sequence of operations, assuming available resources.

- Plans are linear, tend to use best available resources, and no alternates are given.

- Production interruptions due to unavailable resources, shortage of materials, statistical and unexpected bottlenecks are very common.

- There is a clear need for more responsive CAPP and PPC systems.

- Although some problems can be solved by scheduling techniques, the absence of feedback to planning results in higher costs, delays and obstacles in achieving CIM.

 

 

INDUSTRIAL PARTICIPANTS

 

During the past few months, Babcock & Wilcox (B&W) and Allen-Bradley (AB) have been approached by McMaster and Western as potential sites for prototyping RPE.

B&W manufactures steam boilers for power generation. Their production volume is very small and the product is very much made to order. The process plan(s) received from B&W are for only a few components and no sub-assemblies are involved. Many production steps are involved in these processes and most of them are either repeated (e.g. inspection) or involve human skills (e.g. welding). It would be desirable to select an example with more components and sub-assemblies, and with a good mixture of human skills and machinery involved in the production steps. Also important are possible alternative operations and alternative resources, which are lacking or not needed in their manufacturing steps.

 

AB provides a better environment to test RPE. Their products are high voltage and low voltage starters. These are metal cabinets approximately the size of a refrigerator and contain both mechanical and electrical assemblies. Some of the sheet metal work is stamped in-house using presses. This is one area where they have alternate resources, i.e. alternate presses for stamping. Internal components of the cabinet (such as switches, wire harnesses, electrical components, etc.) are assembled at different areas within the plant. Some operations are sequential (e.g. sheet metal parts are stamped, welded together, and painted) while others are done in parallel (e.g. preparation of wire harnesses is initiated ~5 to ~10 days before final assembly). Final assembly is when all bought-out and in-house components are put together.

 

In other words, AB seems to have all the elements that are needed for prototyping RPE. They utilize both machining and assembly operations; alternate resources and methods are also available. The flow of materials within the plant is known, and process plans are documented.

 

One of the production planning problems at AB is “revisions”. This is a problem with existing production planning solutions, as discussed earlier in this document. Linear planning is rigid and does not allow for reacting to changes in the production environment. One revision of any kind (engineering or production planning) can create havoc in the production routine.

 

Their second problem, from the management point of view, is that the decision making resulting from revisions or disruptions is not documented. They seem to depend too heavily on the experience and decision making of the foreperson and the supervisor. This is a touchy issue, but could also be a potential problem for upper management.

 

CONTACTS WITH INDUSTRY IN ONTARIO (TO DATE)

 

• PLC Controls Manufacturer

• Products,

- High and Low Voltage Starters.

- Electro-Mechanical Starters.

- Switches.

• Manufacturing Facilities,

- sheet metal working, electronic/mechanical assembly, testing,...

 

• Steam Boilers Manufacturer,

• Products,

- Steam generators for power stations.

• Manufacturing Facilities,

- metal cutting, welding, assembly.

 

• Electronics Manufacturer,

• Products,

- Aerospace & military equipment.

- Electro-mechanical equipment.

- Printed wiring assemblies.

• Manufacturing Facilities,

- fabrication, electromechanical assembly.

 

• Special Purpose Vehicles Manufacturer,

• Product,

- Customized Vehicles.

• Manufacturing Facilities,

- machining, fabrication, assembly,...

 

• Aerospace Manufacturer,

• Product,

- Aircraft engines.

• Manufacturing Facilities,

- metal removal, assembly,...

 

• Pressure Cylinder Manufacturer,

• Product,

- family of pressure cylinders.

• Manufacturing Facilities,

- metal removal, heat treating, assembly, finishing.

 

CORPORATE CONTACT PROFILE : PARKER-HANNIFIN

 

Corporation: Parker-Hannifin (Can.) Inc.

Location: Owen Sound, Ontario, Canada

Product: Large Hydraulic Cylinders

 

Description:

• A Canadian Branch Plant with American Parent Corp.

• Approximately 20 machines with various degrees of automation

• Using CAD and drafting

• Have a variant process planning system in development

• Have a material management system, which can generate work orders, and distribute NC programs through a DNC network.

• Scheduling is done in ad-hoc manner, but DNC network allows work queues to be tracked

 

Implications to Project:

• Has a basic set of variant process plans, which may be routed through alternate machines.

• Most jobs are small batches, often with size = 1.

• Should test all problems with multiple process plans,

- Metal Cutting / Purchased Parts / Assembly,

- Mixed technologies (e.g. NC and Manual),

- Has already illustrated unusual problems (relative to our expectations).

• Provides a factory model with a reasonable amount of statistical information.

• Will provide costing information.

• Computerized records should make their system more orderly.

 

CORPORATE CONTACT PROFILE : AMERTEK

 

Corporation: Amertek Inc.

Location: Woodstock, Ontario, Canada

Product: Aircraft Rescue Fire Fighting (A.R.F.F.) Vehicles

 

Description:

 

• Canadian company

• products are designed in-house

• limited CAD facilities used in the design department

• computerized MRP / BOM system is used

• plant area consists of two separate sections: in-house fabrication area and assembly-line area

• all fabrication and assembly operations are performed manually at present

• all scheduling is performed manually

 

Implications to Project:

• Amertek has supplied the project with a complete nested BOM representation of the C4000L A.R.F.F. vehicle.

• The BOM representation of the vehicle includes many sub-assemblies that could be used as test components for the integration system. A wide variety of test components could be established ranging from simple assemblies (e.g. engine hood) to very complex assemblies (e.g. suspension system).

• The assembly model of the rear body portion of the vehicle has been presented in detail, to be used as an initial test component for the integration system.

 

CAPP / PPC INTEGRATION ISSUES

 

When integrating the CAPP and PPC systems, some fundamental differences must be resolved. We have narrowed the problems to four specific areas.

 

• Data Structures and Definitions

• Events Handling

• Communication Issues

• Non-existent Functions

 

Data structures and definitions may vary between CAPP and PPC. As a result, a set of global data structures was established to support the transfer of information between the formats and media of the various systems. Since the system is expected to respond and perform in real time, it is necessary for the system to be at least event driven. Each event should have some effect which either pushes or pulls data through the system, and triggers other functional modules to perform CAPP, PPC, or other functions. Current CAPP and PPC systems have been developed independently, thus there are some missing functions which must be added to deal with failures. Finally, since the software packages are all separate programs, possibly running on different machines, we must deal with the details of communication between the programs. Two solutions were developed for this: one uses shared databases, and the other uses low-level network sockets.

 

Some prerequisites are required for a truly successful integrator. The first is a successful CAPP system capable of reactive planning, and replanning. A flexible and responsive PPC system is required to deal with shop floor failures and alternate plans. Unfortunately, most existing CAPP and PPC systems do not meet the prerequisites. They are generally not generic, not responsive, and not reactive. This is further aggravated by the functional gap between CAPP and PPC.

 

- CAPP typically deals with one product in a single domain only, with a static resource definition.

- PPC typically deals with ‘dynamic’ scheduling of several products.

- CAPP is time independent, PPC is mainly time dependent

- PPC systems often aggregate individual resources (machines, tools, etc.) into a ‘Capacity Group’, while CAPP uses individual resources.

 

The data gap between functions is apparent, but it is simplified by a common data definition. Through the common data definition a common database may be set up to eliminate multiple copies of data and the resulting data consistency problems. The previous lack of connection between CAPP and PPC means that all of the communication tools and communication functions are missing.

 

FUNCTIONS AND DATABASE OF CAPP

 

CAPP generates a process plan (and/or alternate process plans) for a product given a set of available resources. A process plan is an ordered sequence of manufacturing operations that produces a product from appropriate raw materials. There is no standard format for process plans. Resources could include tools, machines, people, and materials. The database of CAPP stores both production-specific and production-general information. Examples of production-specific information are the geometry of the product, the relationships between parts, and the availability of resources. Examples of production-general information are the rules for utilizing resources and the rules of manufacturing practice. This database contains all the information needed by CAPP to generate the process plan. For instance, CAPP will suggest a sequence of machining steps to meet the requirements by matching the capabilities of the available machines, and by applying the knowledge of good and bad machining practices.

 

The database of CAPP is typically static throughout the planning. The output process plan is then given to a scheduler (either a person or PPC). A process plan may become unusable during on-line production because the planning environment may have become invalid as the shop floor changes. As a result, CAPP must be re-started with new information to re-generate parts of the original process plan or a whole new process plan.

FUNCTIONS AND DATABASE OF PPC

 

PPC schedules a number of process plans for the shop floor on either a short-term or a long-term basis. It deals with the flow of materials by optimizing the utilization of resources to meet the target production. Materials here refers to raw materials, sub-assemblies, parts, and lots. These materials are continuously being consumed and produced at work-sites. A work-site is a location where a step of the manufacturing operation takes place; it could be a single resource or an assembly of resources. Materials flow from work-site to work-site, and each work-site has a setup time and a processing time. The objective of PPC is to maximize the performance of the resources by balancing the rates of consumption and production of materials to meet the target production.

 

The shop floor changes dynamically, both expectedly and unexpectedly, during actual production. The consumption of inventories and the maintenance of machines are examples of expected changes. The overloading of machines, lack of raw materials, changes in orders, and errors are examples of unexpected changes. PPC reacts immediately to an unexpected event by re-scheduling the process plans to maintain the target production as much as possible. For instance, PPC will re-schedule some work that was originally scheduled for an overloaded machine to a less busy machine. Re-scheduling is a critical function of PPC.

 

PPC may or may not be able to reschedule the process plans. If it cannot, CAPP would be invoked to revise the old process plan or to generate a new process plan for PPC.

 

The database of PPC stores the process plans, and shop-floor information. The shop-floor information contains the production schedule, the flow patterns of the materials, the utilization of work-sites, and others. The shop-floor information will be dynamically updated to reflect the latest status during the on-line production.

GAPS IN INTEGRATING CAPP AND PPC

 

CAPP and PPC are two stand-alone modules that are only logically connected by the relation that the output from CAPP is the input to PPC. There is no mechanism for either module to communicate its needs to the other. For instance, PPC cannot invoke CAPP to re-plan parts of a process plan based on the current shop-floor conditions. This represents the functional gap that exists in integrating CAPP and PPC.

 

The databases of these modules are typically different, and not necessarily compatible. They may reside on separate media. A data gap therefore exists during integration in mapping and communicating information between the two databases. For instance, the formats of two related pieces of information on resources, the planning environment (of CAPP) and the production environment (of PPC), are often different.

 

Figure 1.1 The existing functional and data gaps in CAPP/PPC integration

 

 

 

 

Figure 1.2 Conceptual Integration of CAPP and PPC

 

 

Data Transfer Issues:

 

Process plans can be stored without a great deal of trouble (refer to the appendices). Unfortunately there is a large discrepancy between definitions of resources. We found that some systems had specialized assumptions, or had to be customized for each application. Although a generic structure was proposed here, the resource data definition is still loosely defined. It is the opinion of the authors that an adequate data structure would only emerge from years of trial in various manufacturing institutions.

 

Figure 1.3 A Sample of Resource Data Structures
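
To make the discussion concrete, a minimal C sketch of one possible resource record is given below. The field names, and the inclusion of a capacity group reference, are our own illustrative assumptions rather than the structure shown in the figure or an agreed project definition; the identifiers are taken from the air cylinder example later in this report.

/* Hypothetical sketch of a single resource record for a common data
   definition.  Field names and sizes are illustrative only; an agreed
   resource definition would emerge from industrial trials. */
#include <stdio.h>
#include <string.h>

#define ID_LEN 32

enum res_class { RES_MACHINE, RES_TOOL, RES_OPERATOR, RES_MATERIAL };
enum res_state { RES_AVAILABLE, RES_BUSY, RES_DOWN };

struct resource {
    char           id[ID_LEN];        /* e.g. "dt501_ma_jig"            */
    enum res_class rclass;            /* tool, machine, operator, ...   */
    char           cap_group[ID_LEN]; /* capacity group used by PPC     */
    enum res_state state;             /* current shop-floor status      */
    double         setup_time;        /* setup time (minutes, assumed)  */
};

int main(void)
{
    struct resource jig;
    memset(&jig, 0, sizeof jig);
    strcpy(jig.id, "dt501_ma_jig");
    jig.rclass = RES_TOOL;
    strcpy(jig.cap_group, "CG.ac1");
    jig.state = RES_AVAILABLE;
    jig.setup_time = 5.0;

    printf("%s belongs to capacity group %s\n", jig.id, jig.cap_group);
    return 0;
}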

 

A secondary issue when storing process plan data is dealing with multiple plans, and versions of the plans. This may be virtually ignored because of the version control capabilities available in modern DataBase Management Systems (DBMS).

 

A latent issue arises when multiple systems use the data: in effect the system may have many sources for the same information, which leads to data synchronization and validation problems. We decided that only one software package would be allowed to update the common data. This eliminates problems of data change notification. With the eventual development of a global database, the data could instead be immediately updated in a global sense.

The Role of Events:

 

Previous manual, and partially automated, systems would pass paper, phone calls, messages, and other forms of communication between Process Planning and Scheduling to request, forward and trouble-shoot process plans. In a fully automated system this is not feasible due to time delays and lack of order. Thus, a formal set of events has been defined to facilitate integration of actions.

 

An event may drive a process plan from CAPP towards PPC. In turn, PPC may drive information about a failure back to CAPP, which will demand replanning. As these events are passed through the integrator, they will also call for data movement from one source and format to another.

 

Figure 1.4 Overall Generic Structure of Integrated System

 

 

PDL Files and the Common Data Def’n:

 

• The PDL format that was given to us comes in several files:

PDL

MacroTask

Metric

Plant Description.

 

• We were able to get all the information for the process plan out of the MacroTask File.

 

 

• LEX (Lexical Analyser - a standard program on Sun machines) is used to tokenize the input file.

 

• LEX converts groups of characters into numbers representing the string (a “Token”). The string value can also be retrieved as well as the token.

 

• There are three types of tokens used:

Keywords. (e.g. ATTRIBUTE, Tokens 1-21)

Characters. (e.g. “ or ; , Tokens 22-39)

IDs. (Other strings, e.g. air_cyl.piston, Token 40)

 

 

• These tokens are then read by a scanning program written in C (YACC was not used, though it could have been); a minimal illustrative C sketch of such a scanner is given after this list.

 

• The ID after each keyword is placed according to various rules in the intermediate files.

Figure 1.1 Parsing Data Files to DataBases

 

• Some complications arose due to multiple IDs on a line; a constant keyword-ID pairing would have been much easier to read.

 

• The intermediate files are then read into the common data structure (this is quite straightforward, as the files represent each of the major data structures used in the common data).

 

 

• This data can then be manipulated, for example placed into a database.

 

• Another manipulation is to write the data back out into a MacroTask file (e.g. to transfer resource status). This file is not quite the same as the sample input file, as some fields (such as SETXY) are ignored on input, and thus cannot be re-generated from the common data.

 

 

• The intermediate file approach worked best for development of the programs.

 

• The parser could be combined with the code that reads the data into the structure, forming one routine.

 

• It may be more convenient to leave it this way to make the incorporation of other types of data sources easier, such as that from another CAPP program.

 

• In this case only 2/3 of the translation program would need to be re-written.
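
For illustration only, the short, self-contained C program below mimics the token classification described above (keywords, single characters, and IDs) on MacroTask-style input. It is not the LEX-generated scanner used in the project, the keyword list is limited to those appearing in the sample file, and PDL comments are not skipped.

/* Minimal illustrative tokenizer (not the project's LEX scanner).
   It recognizes the three token classes described above:
   keywords, single characters, and IDs. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

enum tok { T_KEYWORD, T_CHAR, T_ID, T_EOF };

static const char *keywords[] = {
    "MACROTASK", "MACRO_OP", "PARENT", "RELATED",
    "ATTRIBUTE", "PRE_CNST", "ALT_CNST", "TOOL", "SETXY", NULL
};

static char text[256];                    /* lexeme of the last token  */

static enum tok next_token(FILE *in)
{
    int c = fgetc(in);
    while (c != EOF && isspace(c))        /* skip whitespace           */
        c = fgetc(in);
    if (c == EOF)
        return T_EOF;

    if (isalpha(c) || c == '_' || c == '~') {     /* keyword or ID     */
        int n = 0;
        while (c != EOF && (isalnum(c) || strchr("_.~", c))) {
            if (n < (int)sizeof text - 1)
                text[n++] = (char)c;
            c = fgetc(in);
        }
        text[n] = '\0';
        if (c != EOF)
            ungetc(c, in);
        for (int i = 0; keywords[i]; i++)
            if (strcmp(text, keywords[i]) == 0)
                return T_KEYWORD;
        return T_ID;
    }

    text[0] = (char)c;                    /* punctuation: ( ) { } ; "  */
    text[1] = '\0';
    return T_CHAR;
}

int main(void)
{
    enum tok t;
    while ((t = next_token(stdin)) != T_EOF)
        if (t == T_KEYWORD || t == T_ID)
            printf("%s\t%s\n", t == T_KEYWORD ? "KEYWORD" : "ID", text);
    return 0;
}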

 

 

 

 

 

Process Plan of Air Cylinder in PDL

 

• The PDL files structure is given below.

 

// Accept completed product to shipping.

MACROTASK(receive) {

MACRO_OP(accept_sfg);

PARENT(~air_cyl);

ATTRIBUTE(desc, “Accept Semi Finished Goods”);

PRE_CNST(pc, ~inspect);

SETXY(15, 60);

}

// Inspect product as it leaves area.

MACROTASK(inspect) {

MACRO_OP(inspect_dx501);

PARENT(~air_cyl);

ATTRIBUTE(desc1, “Inspect completed air cylinder1”);

ATTRIBUTE(desc2, “Inspect completed air cylinder2”);

ATTRIBUTE(desc3, “Inspect completed air cylinder3”);

ALT_CNST(ac1) {

PRE_CNST(pc1, ~assm_cyl_1);

PRE_CNST(pc2, ~assm_cyl_2);

PRE_CNST(pc3, ~assm_cyl_3);

}

SETXY(100, 60);

}

 

 

Process Plan of Air Cylinder in PDL (cont’d)

 

// Manual Cylinder assembly.

MACROTASK(assm_cyl_1) {

MACRO_OP(assemble_cylinder);

TOOL(dt501_ma_jig);

PARENT(~air_cyl.screws);

RELATED(~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base,

~air_cyl.body, ~air_cyl.o_ring);

ATTRIBUTE(desc, “Manually assemble air cylinder”);

PRE_CNST(pc, ~assm_bushing, ~assm_piston);

SETXY(200, 10);

}

// Flexible Cylinder assembly.

MACROTASK(assm_cyl_2) {

MACRO_OP(assemble_cylinder);

TOOL(dt501_fa_jig);

PARENT(~air_cyl.screws);

RELATED(~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base,

~air_cyl.body, ~air_cyl.o_ring);

ATTRIBUTE(desc, “Robotic assembly of air cylinder”);

PRE_CNST(pc, ~assm_bushing, ~assm_piston);

SETXY(200, 60);

}
// Hard Automation for Cylinder assembly.

MACROTASK(assm_cyl_3) {

MACRO_OP(assemble_cylinder);

TOOL(dt501_ha_jig);

PARENT(~air_cyl.screws);

RELATED(~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base,

~air_cyl.body, ~air_cyl.o_ring);

ATTRIBUTE(desc, “Automated assembly of air cylinder”);

PRE_CNST(pc, ~assm_bushing, ~assm_piston);

SETXY(200, 110);

}

 

 

Process Plan of Air Cylinder in PDL (cont’d)

 

// Assemble the bushing.

MACROTASK(assm_bushing) {

MACRO_OP(assemble_bushing);

PARENT(~air_cyl.bushing.bushing);

RELATED(~air_cyl.bushing.o_ring);

ATTRIBUTE(desc, “Assemble the bushing”);

PRE_CNST(pc, ~release);

SETXY(300, 35);

}

// Assemble the Piston.

MACROTASK(assm_piston) {

MACRO_OP(assemble_piston);

PARENT(~air_cyl.piston.screw);

RELATED(~air_cyl.piston.shaft, ~air_cyl.piston.face,

~air_cyl.piston.o_ring);

ATTRIBUTE(desc, “Assemble the piston”);

PRE_CNST(pc, ~release);

SETXY(300, 85);

}

// Release parts from stock.

MACROTASK(release) {

MACRO_OP(release_rip);

PARENT(~air_cyl);

ATTRIBUTE(desc, “Release parts from the stock room.”);

SETXY(400, 60);

}

 

 

 

 

 

Air Cylinder Example

for Common Data Structures:

 

• There are three non-constraining tools in the process plan.

 

RESRCE

    resource identifier      resource class
    dt501_ma_jig             tool
    dt501_fa_jig             tool
    dt501_ha_jig             tool

 

• The air cylinder has ten parts, and two sub-assemblies (air_cyl.piston, and air_cyl.bushing).

 

 

PRT_DAT

    part identifier      status
    piston.screw         purchased
    piston.face          purchased
    piston.shaft         purchased
    piston.o_ring        purchased
    bushing.bushing      purchased
    bushing.o_ring       purchased
    air_cyl.screw        purchased
    air_cyl.base         purchased
    air_cyl.body         purchased
    air_cyl.o_ring       purchased
    air_cyl.piston       in process
    air_cyl.bushing      in process
    air_cyl              finished

 

 

• There are definite relationships between the ten parts and two sub-assemblies of the air cylinder.

 

 

PRT_CNTN

    part identifier      identifier of part (to be manufactured)
    piston.screw         air_cyl.piston
    piston.face          air_cyl.piston
    piston.shaft         air_cyl.piston
    piston.o_ring        air_cyl.piston
    bushing.bushing      air_cyl.bushing
    bushing.o_ring       air_cyl.bushing
    air_cyl.screw        air_cyl
    air_cyl.base         air_cyl
    air_cyl.body         air_cyl
    air_cyl.o_ring       air_cyl
    air_cyl.piston       air_cyl
    air_cyl.bushing      air_cyl

 

 

• Five capacity groups are involved for the assembly.

 

 

CAP_GRP

    identifier of       description of                  member
    capacity group      operations                      resources
    CG.p                assemble piston
    CG.b                assemble bushing
    CG.ac1              manual assembly of air_cyl      dt501_ma_jig
    CG.ac2              robotic assembly of air_cyl     dt501_fa_jig
    CG.ac3              automated assembly of air_cyl   dt501_ha_jig

 

 

 

• The complete assembly is divided into three phases.

 

 

SUPER_TASK

    identifier of         identifier of        identifier of
    super-task            manufactured part    process description
    assemble_piston       air_cyl.piston       assm_piston
    assemble_bushing      air_cyl.bushing      assm_bushing
    assemble_cylinder     air_cyl              assm_cyl

 

 

• There is a definite ordering of these phases.

 

 

PROC_CNTN

    identifier of         identifier of
    super-task            next super-task
    assemble_piston       assemble_bushing
    assemble_bushing      assemble_cylinder

 

 

 

 

 

 

 

 

 

 

• There are preferred and alternative process descriptions for each of the three phases.

 

 

PROC_DSCR

    identifier of  process  preferred  description of      identifier of  list of parts
    process        plan     plan       operations          capacity
    description    number   indicator                      group
    assm_piston    1        YES        assemble the        CG.p           piston.screw, piston.face,
                                       piston                             piston.shaft, piston.o_ring
    assm_bushing   1        YES        assemble the        CG.b           bushing.bushing, bushing.o_ring
                                       bushing
    assm_cyl       1        YES        manual assembly     CG.ac1         air_cyl.screw, air_cyl.base,
                                       of the air_cyl                     air_cyl.body, air_cyl.o_ring,
                                                                          air_cyl.piston, air_cyl.bushing
    assm_cyl       2        NO         robotic assembly    CG.ac2         air_cyl.screw, air_cyl.base,
                                       of the air_cyl                     air_cyl.body, air_cyl.o_ring,
                                                                          air_cyl.piston, air_cyl.bushing
    assm_cyl       3        NO         automatic assembly  CG.ac3         air_cyl.screw, air_cyl.base,
                                       of the air_cyl                     air_cyl.body, air_cyl.o_ring,
                                                                          air_cyl.piston, air_cyl.bushing

 

 

 

Data Transfer Issues:

 

• The Process Plan Database has been developed, but we lack a Resource Database. - Selection of an Industrial Partner should solve this problem.

 

• Without a Resource Database it is not possible to implement a third of the possible integrator functions. In the short term a hypothetical factory could be used to develop this.

 

• The Process Plan data has been simple with the air cylinder, but a more thorough testing will occur with multiple parts.

 

• Other sources of data should be identified, so that a complete set of interface modules may be developed. - This will also be aided by an industrial partner.

 

IMPLEMENTATION ISSUES WITH CAPP, RPE, and GRIPPS

 

The Role of Data Transfer:

 

As described in the previous section, there are many forms of data storage to be considered, and the data gap has to be bridged by the integrator. Each package (CAPP, RPE and PPC) has its own data storage mechanism. For example, RPE uses PDL files and internal data structures, while GRIPPS (for PPC) uses Oracle, a relational database. As a result, we could not assume a global database, although this would have been preferable. Without a global database the integrator must have a separate interface to each data source. Each of the interfaces transfers data to and from internal structures and the external data source. This increases the independence of each software package, and simplifies the replacement of one package with another.
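
As a rough illustration of such a per-source interface (not the project code; the names and the function-pointer arrangement are our own), each external data source could sit behind a small table of load and store routines:

/* Illustrative sketch of a generic data interface layer: one table of
   load/store routines per external data source, so the Integrator can
   move common data without knowing how each package stores it.
   The routines here are stubs; real ones would parse PDL files or
   talk to the relational database. */
#include <stdio.h>

struct common_data;                       /* internal common structures */

struct data_interface {
    const char *name;                                    /* e.g. "PDL" */
    int (*load )(const char *src, struct common_data *cd);
    int (*store)(const char *dst, const struct common_data *cd);
};

static int pdl_load (const char *s, struct common_data *cd)
{ (void)cd; printf("reading PDL file %s\n", s); return 0; }
static int pdl_store(const char *d, const struct common_data *cd)
{ (void)cd; printf("writing PDL file %s\n", d); return 0; }
static int db_load  (const char *s, struct common_data *cd)
{ (void)cd; printf("querying database table %s\n", s); return 0; }
static int db_store (const char *d, const struct common_data *cd)
{ (void)cd; printf("updating database table %s\n", d); return 0; }

static const struct data_interface interfaces[] = {
    { "PDL",      pdl_load, pdl_store },
    { "DATABASE", db_load,  db_store  },
};

int main(void)
{
    struct common_data *cd = NULL;        /* placeholder for loaded data */
    /* Move a plan from an RPE PDL file into the PPC database. */
    interfaces[0].load ("plan.mt", cd);
    interfaces[1].store("process_plans", cd);
    return 0;
}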

 

As mentioned before, the global database strategy would allow a superior implementation, but it is not practical until all packages store data in the same database, using the same structures. This problem was made obvious when considering resource descriptions: we found that while CAPP deals with specific resources, PPC deals with resources lumped into a capacity group. The representations would be quite different, even if both were stored in the same database. The actual solution adopted, described in the following sections, is a common data definition within the Integrator that maps individual resources onto the capacity groups used by PPC.

 

Figure 1.1 Data Distribution in CAPP/PPC Integrated System

APPROACH TO CAPP/PPC INTEGRATION

 

There are three possible approaches for integrating CAPP and PPC. The first approach is a high-level integration of functions and CIM modules which can be called a “global integration scheme”. The work of Harhalakis et al. (1990) is in this category. Each CIM module (CAD, CAPP, and MRPII) is allowed to maintain its own database and an updating scheme is devised. This method is very data intensive, results in duplication of data, and does not address the need for a non-linear plan representation which considers actual manufacturing resources and constraining resources and events. The manufacturing system's “events” are not considered; instead their events relate to each individual data record, not to the status of the modules in the system.

 

The second approach is the opposite extreme, and proposes complete integration of planning and scheduling. In this approach CAPP and PPC become one system. The merit of this is that planning and control depend on each other and must ultimately use the same data. Moreover, the borderline between planning, scheduling and control is fuzzy. In this approach the system should obviously use a common database management system. The representation would be common, using Petri nets for instance, to model logical and temporal relationships. FLEXPLAN is a system being developed in that direction by Törnshoff and Detand (1990).

 

However, CAPP systems are essentially time independent, while PPC systems are necessarily time dependent. Today's CAPP systems do not take these dependencies into account. Even if we overcome this difficulty and merge the planning optimization task and the scheduling optimization task into a single optimization task, it cannot be solved for complexity reasons, as noted by Törnshoff et al. (1989).

 

The third approach, which is described in this report, can be considered a realistic intermediate between the first two approaches. The proposed approach to integration is essentially modular. In this approach, Process Planning and Production Planning and Control do not need to be one system. However, the CAPP and PPC systems need to have the ability to deal with shop floor disturbances (events), non-linear process plans, and resources and constraints. Physically the database can be common or a standard distributed database. However, common definitions, structures, interpretations of events and synchronization issues in a multi-tasking networked/parallel environment are considered. In fact, in this approach a separate module called the “Integrator” is used. It should be recognized, however, that the boundaries between the various modules are in fact arbitrary. Several physical implementations are possible. The modular approach has practical advantages, including flexibility of implementation as well as the possibility of integrating existing CAPP and PPC systems. In the remainder of this report we describe the functions of the Integrator, and the RPE (Reactive Planning Environment) module which was developed in connection with this project (Stranc 1992). RPE allows for the representation, evaluation, and selection of alternate plans. The PPC system which is addressed in this project is GRIPPS (Kuhnle 1991).

REACTIVE PLANNING

 

Process plans for producing components and assembling them into products are used to make routing sheets which are used, in turn, by the PPC system to create a master schedule for the manufacturing facility. Ideally there should not be any deviation from the master schedule. In reality, however, 20-30% of the process plans and routing sheets are modified locally to cope with production bottlenecks, equipment failures, resource shortages and changes in order priorities. These problems cause unforeseen and unacceptable delays in production. They may require a reaction from the PPC system depending on their duration and severity. This will typically call for local rescheduling, which requires shifting work to alternate resources or, in more extreme cases, to different processes. Here we will focus on the reactive process planning aspects only, leaving reactive production planning to the accompanying paper by our collaborators at IPA.

RPE REACTIVE PLANNING ENVIRONMENT

 

The reactive planning environment (RPE) system is conceived and implemented to achieve a number of objectives:

 

1. Represent process plans at various levels of detail and abstraction to suit both detailed process planning (micro) and operations planning and sequencing (macro).

2. Allow the combination and representation of mixed domain operations in a plan. In particular it deals with product assembly planning as well as other processes which may be required to complete a product such as welding, soldering, cleaning, inspection, fabrication and machining at the macro operation level (not detailed task planning).

3. Represent precedence constraints for a given task as well as the resources required for completing the task.

4. Capture and model alternate resources, alternate routes and alternate processes, albeit less than optimal, along with the preferred or best plan.

5. Represent the resources and plant by models compatible with those used by PPC systems.

6. Allow evaluation of alternate plans according to user-defined criteria such as time, scrap rate, load balancing and cost, and selection of the best plan under given conditions such as absence or over-utilization of certain resources (an illustrative scoring sketch follows this list).
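
The sketch below illustrates objective 6 only: a simple weighted-sum scoring of alternate plans. The criteria, weights and numbers are hypothetical and do not represent RPE's actual evaluation method, which is user configurable.

/* Illustrative weighted-sum scoring of alternate plans (not RPE code).
   Criteria, weights and figures are hypothetical. */
#include <stdio.h>

struct plan_score {
    const char *plan_id;     /* e.g. "assm_cyl 1" (manual assembly)    */
    double time;             /* estimated completion time (hours)      */
    double cost;             /* estimated cost                         */
    double scrap_rate;       /* expected fraction of scrap             */
};

/* Lower is better: a weighted sum of the user-defined criteria. */
static double score(const struct plan_score *p,
                    double w_time, double w_cost, double w_scrap)
{
    return w_time * p->time + w_cost * p->cost + w_scrap * p->scrap_rate;
}

int main(void)
{
    struct plan_score alts[] = {
        { "assm_cyl 1 (manual)",    2.0, 40.0, 0.02 },
        { "assm_cyl 2 (robotic)",   1.2, 55.0, 0.01 },
        { "assm_cyl 3 (automated)", 0.8, 70.0, 0.01 },
    };
    int best = 0;
    for (int i = 1; i < 3; i++)
        if (score(&alts[i], 1.0, 0.05, 100.0) <
            score(&alts[best], 1.0, 0.05, 100.0))
            best = i;
    printf("preferred alternative: %s\n", alts[best].plan_id);
    return 0;
}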

 

The RPE system was designed and implemented by C. Stranc (1992) and P. Nguyen, under the direction of Professor Hoda ElMaraghy at McMaster University.

 

A scheme for representing micro and macro tasks in a process plan and routing sheets using a multi-layered precedence graph has been developed. Resources are modelled and associated with each task. ‘PreConstraints’ define the order between macro tasks (operations). ‘AltConstraints’ are used to specify alternative processing methods within a process plan which can achieve a common end result (Figures 1.2, 1.3, 1.4 & 1.5). For example, alternate plans for a product assembly using manual, semi-automatic or fully automated systems may be represented and used as substitutes to deal with bottlenecks. These alternatives are examined and evaluated as needed, using graph search methods, in response to feedback from the PPC system.
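
One possible C representation of a node in such a graph is sketched below; the field names mirror the PreConstraints and AltConstraints described above, but the structure itself is our illustration, not RPE's internal implementation.

/* Illustrative node of a precedence graph for macro tasks (not the
   RPE implementation).  'pre' lists tasks that must precede this one;
   'alt' groups tasks that are alternative ways to reach the same
   result, as with the three assm_cyl alternatives in the example. */
#include <stdio.h>
#include <stddef.h>

#define MAX_LINKS 8

struct macro_task {
    const char        *name;              /* e.g. "assm_cyl_1"          */
    const char        *resource;          /* e.g. "dt501_ma_jig"        */
    struct macro_task *pre[MAX_LINKS];    /* PreConstraints             */
    size_t             n_pre;
    struct macro_task *alt[MAX_LINKS];    /* AltConstraints (peers)     */
    size_t             n_alt;
    int                available;         /* 0 if its resource is out   */
};

/* Pick the first alternative whose resource is still available;
   a real system would search the graph and evaluate each candidate. */
static struct macro_task *pick_alternative(struct macro_task *t)
{
    if (t->available)
        return t;
    for (size_t i = 0; i < t->n_alt; i++)
        if (t->alt[i]->available)
            return t->alt[i];
    return NULL;                          /* replanning needed          */
}

int main(void)
{
    struct macro_task a1 = { "assm_cyl_1", "dt501_ma_jig", {0}, 0, {0}, 0, 0 };
    struct macro_task a2 = { "assm_cyl_2", "dt501_fa_jig", {0}, 0, {0}, 0, 1 };
    a1.alt[a1.n_alt++] = &a2;             /* a2 is an alternative to a1 */
    struct macro_task *chosen = pick_alternative(&a1);
    printf("use %s\n", chosen ? chosen->name : "none - replan");
    return 0;
}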

 

INTEGRATING RPE WITH CAPP & PPC

 

RPE uses a feature-based, object-oriented approach (ElMaraghy, 1991) to represent a product structure hierarchically. Bills of material produced by conventional CAD systems may also be used. Current process planners produce detailed (micro) tasks in a single domain (e.g. machining or assembly). The resulting plans are input to RPE and corresponding precedence graphs are generated. These are edited and modified interactively by the user to add operations not considered by the micro process planners. It is also possible for the user to enter the whole plan and its alternatives interactively through an effective graphical interface. The output from RPE is the recommended plan. The precedence graph process plan format would be useful to those PPC systems which are capable of using this powerful representation in rescheduling. Alternatively, the precedence graph is converted to the usual sequential process plan format in a flat file for use by traditional PPC systems. This allows RPE to be interfaced with conventional PPC systems currently in use. The selected plan and operations sequence are also displayed along with the resource layout within the plant.

 

PPC systems often aggregate individual resources (machines, tools etc.) into a higher level resource called a capacity group. One of the important integration issues we faced was the development of a clear definition of resource models used by CAPP and RPE and capacity groups used by PPC and a mapping between the two.

 

Figure 1.2 Breakdown of components in a Process Plan represented as a precedence graph.

 

Figure 1.3 Process alternative representation and specification.

 

Figure 1.4 Relationship between Macro Tasks, Micro Tasks, and Resources in a PCB assembly example.

Figure 1.5 Sample plan model including resources and tools.

OTHER FEATURES

 

The Integrator has been developed along with a number of features which make it useful as a functional bridge. Version and revision control have been added to ensure continuity of plans within the Integrator. The Integrator has been given an interface which allows it to be used with RPE. This allows the added benefits of reactive planning without resorting to the full replanning which would otherwise have to occur in the CAPP system. In both the database and the sockets implementation, CAPP, RPE and PPC are very independent. They can operate concurrently, on the same or many machines, without complication. This has the added benefit of making the system robust and fault tolerant.

 

THE CAPP/PPC INTEGRATOR

 

The Interprocess communications used between CAPP, RPE, and PPC are divided into two categories,

 

- Common data (process plans and resources)

- Events (a notification of a change in data status, or a request)

 

The data is produced, utilized and updated by the CAPP, RPE, and PPC systems. When data is changed, it results in a data change notification event. If a system wants to declare data invalid, it does this with a request. Therefore, when operating in steady state the interfaced systems pass events and requests to push and pull process plans in production.

 

The issue of common data may have a profound impact on the event types which the system uses. For example, if a CAPP system is based on its own proprietary data base (or files), and the PPC system is based on another database, then:

 

- there are two copies of all plans,

- the internal representations may be different,

- transfer between databases is difficult.

 

This problem also occurs when using files or other data storage mechanisms. Therefore, in the absence of a common database, the integrator should use its own internal common data definition to transfer data between CAPP, RPE and PPC. The primary (and novel) function of the Integrator is dealing with events from CAPP, RPE, and PPC. Events are passed to the integrator using messages, and then on to another client using messages. Depending upon the message source and content, the Integrator may send a message to another process. The content of messages will commonly be:

 

- notification that data has changed,

- a request to change data, issued when a failure has occurred and the data must be changed (request).

 

If a common data base is used, then a message does not need to contain any data, and only needs to refer to the data which has been changed. If a common database is not used, then the integrator must maintain its own database, which is updated when data changes. This update may come in two forms: either all data is passed as messages, or all data is remotely accessed from files and databases. To summarize, the three types (cases) of event handling features of the CAPP/PPC integrator are:

 

• With common databases,

1. Pass references to changed data.

• Without common database,

2. Pass all changes as messages

3. Pass references to all changes to be read into the Integrator database from CAPP, RPE and PPC databases and files.

 

Passing data as in case 2 is time consuming, and the integrator may be overwhelmed by the volume of data. Using the common database is the simplest solution, except that all applications are then tied to the same database software. The final method, case 3, uses the references to changed data to load common data structures in the Integrator. It is commonly agreed that simply passing a reference to changed data is the best mechanism. Cases 1 and 3 above are dependent on direct access to the outside data sources, common or not. The case 3 approach was chosen to accommodate the greatest number of CAPP and PPC systems. Case 1 should be adopted when a global and common database is used.
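
To make case 3 concrete, a hedged sketch of the kind of message involved is shown below: the message carries an event type, a source and a destination, and only a reference to the changed data. The type and field names are illustrative, not the Integrator's actual message format.

/* Illustrative message format for case 3: events carry only a
   reference to the data that changed, not the data itself.
   Names are hypothetical. */
#include <stdio.h>
#include <string.h>

enum event_type {
    EV_DATA_CHANGED,      /* notification that data has changed        */
    EV_CHANGE_REQUEST     /* request to change data after a failure    */
};

enum module { MOD_CAPP, MOD_RPE, MOD_PPC, MOD_INTEGRATOR };

struct message {
    enum event_type type;
    enum module     from;
    enum module     to;
    char            data_ref[64];  /* e.g. table/file and plan id      */
};

int main(void)
{
    /* PPC reports that a plan has failed and asks for a revised plan;
       only a reference to the plan is sent. */
    struct message m;
    m.type = EV_CHANGE_REQUEST;
    m.from = MOD_PPC;
    m.to   = MOD_INTEGRATOR;
    strcpy(m.data_ref, "PROC_DSCR:assm_cyl:1");
    printf("event %d refers to %s\n", (int)m.type, m.data_ref);
    return 0;
}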

 

For our implementation the Integrator uses the same database used by PPC. In this case it is a commercial Relational Database, and the PPC system is GRIPPS (Kuhnle, 1991). The RPE program runs on PDL files (ElMaraghy, 1991), and thus the integrator will handle reading these files and writing the data to the commercial Database. A similar function occurs for the CAPP system.

 

Two methods for communication between processes have been developed independently, but provide the same functionality. In the first message passing mechanism, a database table is used to store messages, which may be picked up or issued by any database client. In the other method, a message server (Jack and ElMaraghy, 1992) is used, which connects all modules (CAPP, RPE, PPC, and the Integrator) through the use of TCP/IP sockets (Sechrest, 1986). Using a complex communication scheme, messages are routed between groups. This method of communication is suited to client programs which are not registered on the database. The block diagram of the CAPP/PPC Integrator is shown in Figure 1.1.
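
For readers unfamiliar with the socket-based alternative, the fragment below is a minimal sketch of a client connecting to such a message server over TCP/IP and sending one message. The host, port and message text are placeholders; the actual message-server protocol (Jack and ElMaraghy, 1992) is not reproduced here.

/* Minimal TCP client sketch: connect to a message server and send one
   message.  Host, port and the message format are placeholders only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family      = AF_INET;
    srv.sin_port        = htons(5000);                /* assumed port  */
    srv.sin_addr.s_addr = inet_addr("127.0.0.1");     /* assumed host  */

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "EV_DATA_CHANGED PROC_DSCR:assm_cyl:1\n";
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");

    close(fd);
    return 0;
}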

 

Figure 1.1 illustrates the basic structure of the software. The message layer deals with interprocess communication between the Integrator and CAPP, PPC and RPE. The Executive routines track message content and decide how to respond, by directing data transfer and issuing new events. The data structures are used for internal storage of the data when transferring between applications. To load these structures there is a generic data interface layer, which may use various sources of data. These sources are PDL, a standard database and CAPP files. The final features shown are the filtering routines. The filter functions will “screen out” resources which are unavailable or over-utilized for planning. This is used when sending resource data to RPE.

 

In Figure 1.2, the basic flow of events is pictured. All events start when a message is issued from CAPP, PPC, or RPE. This message triggers the loading of data into the Integrator. The data is then downloaded to another data store, using filtering if required. A message is then issued to the recipient of the new data.

 

Figure 1.1 A Block diagram of the CAPP/PPC integrator.

 

Figure 1.2 Basic Flow of Events in the Integrator.
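
In outline, the flow in Figure 1.2 might be expressed by a handler such as the one below. The function names (load_into_integrator, filter_resources, store_to_target, notify) are placeholders standing in for the routines in the block diagram; they are not the project's actual routines.

/* Outline of the event flow: a message triggers loading data into the
   Integrator, optional filtering, storage to the target data source,
   and a notification to the recipient.  Names are placeholders. */
#include <stdio.h>

struct message { int type; const char *data_ref; const char *to; };
struct common_data { int n_resources; };

static void load_into_integrator(const char *ref, struct common_data *cd)
{ cd->n_resources = 0; printf("load %s\n", ref); }
static void filter_resources(struct common_data *cd)
{ (void)cd; printf("screen out unavailable/over-utilized resources\n"); }
static void store_to_target(const char *to, const struct common_data *cd)
{ (void)cd; printf("store to %s\n", to); }
static void notify(const char *to, const char *ref)
{ printf("notify %s about %s\n", to, ref); }

static void handle_event(const struct message *m)
{
    struct common_data cd;
    load_into_integrator(m->data_ref, &cd);     /* pull changed data   */
    if (m->type == 1)                           /* resource data bound */
        filter_resources(&cd);                  /* for RPE is filtered */
    store_to_target(m->to, &cd);                /* push to recipient   */
    notify(m->to, m->data_ref);                 /* issue a new event   */
}

int main(void)
{
    struct message m = { 1, "RESRCE:*", "RPE" };
    handle_event(&m);
    return 0;
}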

 

Common Data Definition:

 

The definitions of common data are essential to make the CAPP/PPC Integrator work. They are required so that data from either CAPP or PPC can be put in a common format, which can then be translated into another format. This also gives the Integrator the ability to store plans if required. While CAPP and PPC have common requirements for the process plans themselves, there is a significant difference in the representation of resources. The PPC program uses the concept of a Capacity Group, which describes a collection of resources, while CAPP and RPE refer to individual resources. Therefore the common definition of data includes a mapping between resources and the capacity groups into which they are lumped.

 

Figure 1.3 below shows the basic process plan structure used in the common data definitions. This representation was influenced by the PPC program GRIPPS (Kuhnle, 1991).

 

Figure 1.3 Diagram of a process plan (including how the various data groups are related).

 

On the other hand, a complete description of resources is required so that the CAPP and RPE programs receive adequate information; when a PPC plan fails because of a capacity group, the failure can then be mapped back to a particular resource.

OVERVIEW OF THE PROPOSED CAPP/PPC INTEGRATOR

 

In providing a fully automated integration of the CAPP and PPC modules, the Western team proposed to develop a knowledge-based, automated, and stand-alone integrator module, written in C, that could be used in conjunction with any stand-alone CAPP and PPC modules at both the functional and the data levels.

 

With respect to the bridging of the data gap, the integrator is equipped with specific knowledge of how the databases map onto each other. This permits the integrator to operate as a true liaison between CAPP and PPC. For instance, a process plan generated by CAPP can be translated directly into a format readily recognizable by PPC, and the work-site information collected by PPC can be converted to the resource information readily usable by CAPP.

 

With respect to the bridging of the functional gap, the integrator is equipped with appropriate routines to coordinate and complement the existing functionality of CAPP and PPC. This permits the integrator to operate as an automated interface between CAPP and PPC. For instance, CAPP will be invoked automatically (indirectly via the integrator) by PPC when PPC needs to have parts of a process plan re-planned.

 

With respect to the fully integrated system, CAPP, PPC, and the integrator are not linked together into one single module; they run as three separate concurrent processes.

 

The approach is summarized in Figures 1.4 through 1.7.

BRIDGING THE DATA GAP

 

As mentioned before, the databases of CAPP and PPC are often different (and may not even be compatible). There was an attempt to provide a set of data, residing on a DBMS, mutually accessible by both CAPP and PPC. This set of data can be viewed as information that one module would maintain for another under an integrated setting. This set of data can also be viewed as an explicit union of the two databases. For instance, this union could include the information on machine utilization which is updated by PPC and is required by CAPP in process planning. There are two main drawbacks in this approach. First, it imposes a restriction on the implementation of the modules. Second, it solves only one specific scenario. Nonetheless, this attempt addresses two critical issues in bridging the data gap: data translation and data passing. The essence of the above attempt is that each module translates (parts of) its database to a pre-determined format, and then places this resulting translation at a pre-determined location for the other module to pick up.

 

In order to bridge this data gap, the Western team has proposed an integrator with the following components (as illustrated in Figure 1.4). First, an internal data structure that generically describes the databases of CAPP and PPC. This data structure functions much like the previously mentioned pre-determined format. Second, a set of routines for the integrator to access its own internal data structure, as well as the databases of CAPP and PPC. Within the scope of these routines, "the databases" refers only to the externally residing databases that CAPP and PPC would use if run in stand-alone mode. Third, a set of routines that translates between the information kept in the internal data structure and the two databases. The coding of these routines is part of the setup of the CAPP/PPC integration. Together, these three components permit the two separate databases to be reconciled by the integrator.
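
A hedged sketch of these three components, expressed as C declarations, is given below. All identifiers (including the name "generic_data") are hypothetical and are used only to show how the components fit together; they are not the prototype's actual names.

    /* Hypothetical declarations only; none of these identifiers are taken
       from the prototype code.                                            */

    struct generic_data;     /* component 1: the internal data structure   */

    /* Component 2: access to the internal structure and to the external
       CAPP and PPC databases.                                             */
    int gd_insert_record(struct generic_data *gd, const void *record);
    int gd_retrieve_record(const struct generic_data *gd, void *record);
    int capp_db_read(struct generic_data *gd);        /* read CAPP's external data */
    int ppc_db_write(const struct generic_data *gd);  /* write to PPC's database   */

    /* Component 3: translation between the internal structure and the two
       external formats; coded during the setup of the integration.        */
    int translate_capp_to_generic(struct generic_data *gd);
    int translate_generic_to_ppc(const struct generic_data *gd);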

 

 

Figure 1.4 Bridging the data gap

 

BRIDGING THE FUNCTIONAL GAP

 

The functional gap can be bridged in a similar way. The first step is to identify the intended functionality of the integrated system, and to determine how the functionality of CAPP and PPC fits into this overall framework. The second step is to provide a set of routines that coordinate, as well as complement, the existing functionality of CAPP and PPC to produce the intended overall functionality of the integrated system (as shown in Figure 1.5). Coordinating means interfacing between CAPP and the integrator, and between PPC and the integrator. Through these interfaces, the internal routines (of CAPP and PPC) can be invoked, and the results can be communicated back to the integrator. Complementing means providing automated connections between the functionality of CAPP and that of PPC. As an illustration, the integrator could provide the following primitive routines to facilitate a request for a process plan:

 

• routine for PPC to request (to the integrator) process plan,

• routine for PPC to specify (to the integrator) a production environment,

• routine for PPC to specify (to the integrator) a protocol for transferring process plan,

• routine (for the integrator) to initiate CAPP to revise or replan process plan,

• routine (for the integrator) to supply CAPP with a planning environment,

• routine (for the integrator) to reconcile the production and planning environments,

• routine for CAPP to signal (to the integrator) the completion of the process planning,

• routine for CAPP to specify (to the integrator) a protocol for process plan transfer,

• routine (for the integrator) to retrieve the newly-CAPP-generated process plan,

• routine (for the integrator) to translate this process plan into a format used by PPC,

• routine (for the integrator) to send the translated process plan to PPC,

• routine for PPC to signal (to the integrator) the reception of the process plan.

 

The coding of these routines is part of the setup of the CAPP/PPC integration. Often, these primitive routines make use of the routines that are developed to bridge the data gap.
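
A hedged sketch of how these primitive routines might appear as C prototypes is given below; all identifiers and argument types are illustrative assumptions rather than the prototype's actual interface.

    /* Hypothetical prototypes corresponding to the primitive routines
       listed above; identifiers and argument types are assumptions.      */

    /* calls made by PPC to the integrator */
    int ppc_request_process_plan(const char *part_id);
    int ppc_specify_production_environment(const char *environment);
    int ppc_specify_transfer_protocol(int protocol);
    int ppc_acknowledge_plan_received(const char *plan_key);

    /* calls made by the integrator towards CAPP */
    int intg_initiate_replanning(const char *plan_key);
    int intg_supply_planning_environment(const char *environment);
    int intg_reconcile_environments(void);

    /* calls made by CAPP to the integrator */
    int capp_signal_planning_complete(const char *plan_key);
    int capp_specify_transfer_protocol(int protocol);

    /* internal integrator steps */
    int intg_retrieve_new_plan(const char *plan_key);
    int intg_translate_plan_to_ppc_format(const char *plan_key);
    int intg_send_plan_to_ppc(const char *plan_key);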

 

 

Figure 1.5 Bridging the functional gap

 

 

Figure 1.6 Communicating between stand-alone modules

 

 

 

Figure 1.7 The complete integration of CAPP and PPC

 

INTRODUCTION

 

This chapter describes the mechanism for bridging the data gap in the integration of CAPP and PPC. Specifically, the bridging of the data gap between the internal data structure (of the integrator), ASCII files, and the Oracle DBMS (within the context of the prototype) will be presented in detail.

 

In the following sections, the elements of the internal data structure are presented, together with an example. The interfacing between the integrator and the external storage is dealt with in a similar manner. The topic of information translation is also addressed. In conclusion, a list of improvements for future implementations of the prototype is included.

GENERIC DATA STRUCTURE

Background

 

The generic data structure is a vital part of the integrator. This structure serves two purposes. First, it acts as a standard representation for the information, stored in the databases of CAPP and PPC, that is relevant to the integration. This provides a standard basis for communicating information between modules. Second, it allows this information to be stored internally in the integrator for future manipulation.

 

The current version of the generic data structure was initially set up jointly by all three teams. The generic data structure, programmed as C-structures, has subsequently been revised by the Western team. There are eight basic C-structures that handle four types of information: resources, parts, capacity groups, and process plans. There is one array of each of the basic C-structures, and a super-structure containing these eight arrays. This super-structure holds all the information needed by the integrator for one planning application involving CAPP, PPC, and the integrator. The eight basic C-structures are listed below.
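
A hedged sketch of this super-structure is given below. The eight structure tags are those named in this report; the super-structure name, the fixed array size, and the record counters are illustrative assumptions, and each basic structure is shown with a placeholder identifier field only (the full field lists follow in the subsections below).

    #define MAX_REC 256          /* assumed fixed array size */

    /* Placeholders only: the full field lists are given in the
       subsections that follow.                                  */
    struct RESRCE     { char id[16]; };   /* resources                     */
    struct RESRCE_REL { char id[16]; };   /* constraints between resources */
    struct PRT_DAT    { char id[16]; };   /* parts                         */
    struct PRT_CNTN   { char id[16]; };   /* part connections              */
    struct CAP_GRP    { char id[16]; };   /* capacity groups               */
    struct SUPER_TASK { char id[16]; };   /* super-tasks                   */
    struct PROC_CNTN  { char id[16]; };   /* ordering of super-tasks       */
    struct PROC_DSCR  { char id[16]; };   /* process descriptions          */

    /* The super-structure: one array of each basic C-structure. */
    struct generic_data {
        struct RESRCE     resources[MAX_REC];      int n_resources;
        struct RESRCE_REL resource_rels[MAX_REC];  int n_resource_rels;
        struct PRT_DAT    parts[MAX_REC];          int n_parts;
        struct PRT_CNTN   part_cntns[MAX_REC];     int n_part_cntns;
        struct CAP_GRP    cap_groups[MAX_REC];     int n_cap_groups;
        struct SUPER_TASK super_tasks[MAX_REC];    int n_super_tasks;
        struct PROC_CNTN  proc_cntns[MAX_REC];     int n_proc_cntns;
        struct PROC_DSCR  proc_dscrs[MAX_REC];     int n_proc_dscrs;
    };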

 

Resources

 

Resource is an aggregate term that refers to all objects (e.g. machines, tools, materials, and people) involved in production. Two C-structures are used to describe the available resources. The first C-structure, "RESRCE", describes the identification, application, cost factors, time factors, and availability of each resource. The elements of RESRCE are listed below; a hedged C sketch of the structure follows the list.

 

for all objects:

resource identifier,

resource name,

resource description,

resource class (either machine, tool, material, or people),

for machines and tools:

capability/application,

quantity available,

setup cost (per job),

run cost (per job-hour),

setup time (per job),

usage time (per job),

availability (load/time table, maintenance schedule),

for materials:

specification/usage,

quantity available,

cost rate (per quantity),

availability (stocking schedule),

for people:

skill/qualification,

quantity available,

cost rate (per hour),

availability (work/shift schedule, vacation schedule).
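
A hedged C sketch of RESRCE follows. The field names, sizes, and the use of a single flat structure (rather than separate structures or a union per resource class) are illustrative assumptions.

    enum res_class { RES_MACHINE, RES_TOOL, RES_MATERIAL, RES_PEOPLE };

    struct RESRCE {
        /* for all objects */
        char   id[16];             /* resource identifier                     */
        char   name[32];           /* resource name                           */
        char   description[80];    /* resource description                    */
        enum res_class class;      /* machine, tool, material or people       */
        /* capability/application, specification/usage or skill/qualification,
           depending on the resource class                                    */
        char   application[80];
        int    quantity;           /* quantity available                      */
        double setup_cost;         /* per job (machines and tools)            */
        double run_cost;           /* per job-hour (machines and tools)       */
        double setup_time;         /* per job (machines and tools)            */
        double usage_time;         /* per job (machines and tools)            */
        double cost_rate;          /* per quantity (materials) or per hour (people) */
        char   availability[32];   /* reference to a load, stocking or shift schedule */
    };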

 

The second C-structure “RESRCE_REL” describes the constraining relationships between resources. The elements in RESRCE_REL are:

 

resource identifier,

list of identifiers of the constraining resources.

 

As an example, the array below shows four related resources (Machine345, Person054, Tool123, and Material3423): Machine345 needs Person054 and Tool123, and Person054 needs Material3423.

 

 

RESRCE_REL

resource identifier      list of constraining resources
Machine345               Person054, Tool123
Person054                Material3423

 

Parts

 

A part refers to a clearly distinguishable material object that exists between operations. It can be a manufactured, finished, or purchased object. Two C-structures are used to describe the parts involved in the production and their inter-relationships. The C-structure "PRT_DAT" describes the identification and characteristics of each part. The elements in PRT_DAT are listed below.

 

part identifier,

part name,

description of part,

version number,

database reference (of supplementary part data),

factory/manufacturing identifier,

status (either finished, in process, or purchased),

reference unit of measurement,

quantity in inventory.

 

The second C-structure “PRT_CNTN” denotes part connection. It describes how a part is used to make another part. The elements in PRT_CNTN are given below.

 

part identifier,

identifier for part (that is directly manufactured with this part),

quantity factor from producer to consumer.

 

As an example, the table below shows four related parts (A, B, C, and D): 2 units of A and 1 unit of B are needed to produce 1 unit of C, and 3 units of A and 1 unit of C are needed to produce 1 unit of D.

 

PRT_CNTN

part identifier    identifier of part (to be manufactured)    quantity factor
A                  C                                          2
B                  C                                          1
A                  D                                          3
C                  D                                          1
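
For illustration, the table above can be written directly as a C initializer for an array of PRT_CNTN records; the field names and sizes used here are illustrative assumptions.

    struct PRT_CNTN {
        char part_id[8];         /* part identifier                    */
        char made_part_id[8];    /* part that is manufactured with it  */
        int  quantity_factor;    /* units of part_id per unit produced */
    };

    /* The four rows of the example table above. */
    static const struct PRT_CNTN part_connections[] = {
        { "A", "C", 2 },         /* 2 units of A per unit of C */
        { "B", "C", 1 },
        { "A", "D", 3 },
        { "C", "D", 1 }
    };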

 

Capacity Group

 

A capacity group is a clearly distinguishable work-site on the shop floor. It is a logical grouping of resources, and it performs a sequence of operations. Often, a capacity group is set up individually to meet a specific production requirement. A capacity group is denoted by the C-structure "CAP_GRP", which describes the identification, setup, time factors, cost factors, and availability of each capacity group. Below is a list of the elements in CAP_GRP.

 

capacity group identifier,

capacity group name,

description of the operations performed in capacity group,

usage offered,

utilization factor,

x and y logical coordinates,

list of member capacity groups,

list of member resources,

setup cost (per job),

run cost (per job-hour),

scrap rate,

planning horizon,

availability (load/time table, maintenance schedule).

 

Process Plan

 

A process plan is divided into tasks. The tasks are ordered, and every task is characterized in two ways: by the goal of the task, and by the operations required to achieve this goal. The goal is measured in terms of some clearly distinguishable object within the overall flow of materials. For instance, the goal could be a certain sub-assembly. Typically, the goal of a task stays the same while the operations vary during the complete production. Three C-structures are used to describe the process plan. The first C-structure, "SUPER_TASK", denotes a super-task. A super-task describes the result of all operations that happen within a single capacity group, without specifying which capacity group. Below is a list of the elements in SUPER_TASK.

 

identifier for super-task,

description of super-task (i.e. what is being produced),

identifier for (manufactured) part,

identifier for process description (for the required operations),

minimum lot size,

container lot size,

average stock,

scrap rate.

 

As an example, the table below describes the parts to be manufactured, and the process descriptions for two super-tasks (Drill023, and Mill545).

 

SUPER_TASK

identifier of super-task    identifier of manufactured part    identifier of process description
Drill023                    X                                  PD.drill
Mill545                     Y                                  PD.mill

 

The second C-structure “PROC_CNTN” denotes process connection. It describes the order of executing the super-tasks. The elements of PROC_CNTN are listed below.

 

identifier for super-task,

identifier for next super-task,

lead time between two super-tasks.

 

As an example, the table below shows two related super-tasks (Drill023, and Mill545): Drill023 is to be performed before Mill545 with 2 units of inter-super-task lead time.

 

PROC_CNTN

identifier of super-task    identifier of next super-task    lead time
Drill023                    Mill545                          2

 

The third C-structure, named “PROC_DSCR”, denotes process description. It describes the operations to be performed for the super-task. It could be either the preferred or alternative set of operations for the super-task. Below, the elements of PROC_DSCR are listed.

 

identifier for process description,

process plan number,

preferred plan indicator,

description (of the operations involved),

identifier for capacity group,

list of parts,

sequence of operations,

estimated average run time,

estimated setup time,

capacity required.

 

As an example, the table below shows both the preferred and alternative process descriptions (PD.drill and PD.mill).

 

PROC_DSCR

identifier of process description    PD.drill            PD.drill            PD.mill
process plan number                  1                   2                   1
preferred plan indicator             YES                 NO                  YES
description of operations            machine drilling    manual drilling     machine milling
identifier of capacity group         CG.drill1           CG.drill2           CG.mill
list of parts                        A, B                A, B                A, C

 

Comparing Two Versions of Generic Data Structures

 

The above generic data structure is derived from the structure given in the document "Generic Data for CAPP/PPC Integration". Both versions share a number of similarities. For instance, they both have eight C-structures. These C-structures provide a relational scheme for representing information, similar to that of a relational DBMS: information is stored as tables of related records (although the information is not necessarily organized in a normalized form). The structures have similar interpretations but slightly different representations. The main differences are compared below.

 

The relation between parts and super-tasks is ambiguous in the original version. It is only stated that a super-task is measured in terms of some clearly distinguishable object. There is no mention of whether or not this resulting object is a part, and there is no field for a part identifier in the C-structure SUPER_TASK. The Western team resolved this by providing a field for the part identifier in the current version of SUPER_TASK, giving an explicit declaration of the part that results from the super-task.

 

In the original version of the C-structure PROC_CNTN, there is a field that specifies the quantity relation from the supplier (super-task) to the consumer (super-task). This again emphasizes the point made in the last paragraph about clarifying the relation between parts and super-tasks. More importantly, this field is redundant because the information is already kept, more appropriately, in the C-structure PRT_CNTN. This field is excluded from the current version of PROC_CNTN.

 

There is a similar redundancy between SUPER_TASK and PROC_DSCR. The field for the super-task identifier is removed from the current version of PROC_DSCR because this information can be retrieved from SUPER_TASK.

 

The original version of the C-structure PRT_CNTN does not provide a field for the part identifier, even though PRT_CNTN is supposed to specify how one part is required for the production of another. The Western team treated this as an oversight, and provided a field for the part identifier in the current version of PRT_CNTN.

 

There are two fields in the original version of the C-structure PRT_CNTN for the supplier and consumer (super-tasks) of the subject part. There is also a field in PRT_CNTN for an identifier of the process connection. The information on the super-tasks is already kept in the referenced process connection. The underlying issue here is the use of super-tasks to specify how one part is required for the production of another. The information about which part is used under which super-task and under which process description is readily available from the C-structures SUPER_TASK, PROC_CNTN, and PROC_DSCR. A simpler way of describing the relation between parts is to describe directly the parts that are related. This information can be obtained from non-production-specific sources, such as bills of materials. The C-structure PRT_CNTN in the current version has been simplified as described above.

 

This data structure is currently used in the prototype, and is only a trial version. The Western team will continue to revise this data structure to provide a full and detailed representation of the complete planning and production environments.

 

Example

 

A simple example is given here to demonstrate the generic data structure. As mentioned before, the application for the prototype is to communicate process plans between the different information storages. The integrator reads a process plan, from an ASCII file supplied by the McMaster team, into its internal data structure and subsequently into Oracle. The complete file is given in the document "Input Files for RPE". The plan describes the assembly of an air cylinder. It covers the full production cycle from releasing stocks, assembling parts, and inspecting products, up to shipping. This process plan is listed in Appendix A. The representation of the portion on assembling the air cylinder within the generic data structure is presented below.

 

There are three non-constraining tools in the process plan.

 

RESRCE

resource identifier    resource class
dt501_ma_jig           tool
dt501_fa_jig           tool
dt501_ha_jig           tool

 

The air cylinder has ten parts, and two sub-assemblies (air_cyl.piston, and air_cyl.bushing).

 

PRT_DAT

part identifier     status
piston.screw        purchased
piston.face         purchased
piston.shaft        purchased
piston.o_ring       purchased
bushing.bushing     purchased
bushing.o_ring      purchased
air_cyl.screw       purchased
air_cyl.base        purchased
air_cyl.body        purchased
air_cyl.o_ring      purchased
air_cyl.piston      in process
air_cyl.bushing     in process
air_cyl             finished

 

There are definite relationships between the parts and sub-assemblies of the air cylinder.

 

PRT_CNTN

part identifier     identifier of part (to be manufactured)
piston.screw        air_cyl.piston
piston.face         air_cyl.piston
piston.shaft        air_cyl.piston
piston.o_ring       air_cyl.piston
bushing.bushing     air_cyl.bushing
bushing.o_ring      air_cyl.bushing
air_cyl.screw       air_cyl
air_cyl.base        air_cyl
air_cyl.body        air_cyl
air_cyl.o_ring      air_cyl
air_cyl.piston      air_cyl
air_cyl.bushing     air_cyl

 

Five capacity groups are involved for the required assembly.

 

CAP_GRP

identifier of capacity group    description of operations        member resources
CG.p                            assemble piston
CG.b                            assemble bushing
CG.ac1                          manual assembly of air_cyl       dt501_ma_jig
CG.ac2                          robotic assembly of air_cyl      dt501_fa_jig
CG.ac3                          automated assembly of air_cyl    dt501_ha_jig

 

The complete assembly is divided into three phases.

 

SUPER_TASK

identifier of super-task    identifier of manufactured part    identifier of process description
assemble_piston             air_cyl.piston                     assm_piston
assemble_bushing            air_cyl.bushing                    assm_bushing
assemble_cylinder           air_cyl                            assm_cyl

 

There is a definite ordering of these phases.

 

PROC_CNTN

identifier of super-task    identifier of next super-task
assemble_piston             assemble_bushing
assemble_bushing            assemble_cylinder

Preferred and, where applicable, alternative process descriptions are given for the three phases.

 

PROC_DSCR

identifier of          process plan    preferred plan    description of                     identifier of
process description    number          indicator         operations                         capacity group
assm_piston            1               YES               assemble the piston                CG.p
    list of parts: piston.screw, piston.face, piston.shaft, piston.o_ring
assm_bushing           1               YES               assemble the bushing               CG.b
    list of parts: bushing.bushing, bushing.o_ring
assm_cyl               1               YES               manual assembly of the air_cyl     CG.ac1
    list of parts: air_cyl.screw, air_cyl.base, air_cyl.body, air_cyl.o_ring, air_cyl.piston, air_cyl.bushing
assm_cyl               2               NO                robotic assembly of the air_cyl    CG.ac2
    list of parts: air_cyl.screw, air_cyl.base, air_cyl.body, air_cyl.o_ring, air_cyl.piston, air_cyl.bushing
assm_cyl               3               NO                automatic assembly of the air_cyl  CG.ac3
    list of parts: air_cyl.screw, air_cyl.base, air_cyl.body, air_cyl.o_ring, air_cyl.piston, air_cyl.bushing

 

Accessing the Database of the Integrator

 

Three types of routines were provided for accessing the super-structure. Currently, there are routines that retrieve records from, insert records into, and print the contents of the super-structure. In future implementations, there could also be routines that retrieve, update, and delete records according to key identifiers.

ORACLE DBMS

 

Oracle DBMS, a relational DBMS, is used to simulate the database of either CAPP or PPC. The objective is to test the mechanics of data passing between the integrator and an external database. For this prototype, the database is set up to be identical to the internal data structures of the integrator. There are eight data tables (in Oracle DBMS) that parallel the eight arrays of C-structures mentioned above. Pro*C routines have been programmed to allow the integrator to access the external Oracle DBMS: connect to Oracle DBMS, release from Oracle DBMS, write records (from the generic data structure) into Oracle DBMS, and read records from Oracle DBMS (into the generic data structure).

Pro*C is an Oracle-specific language that allows SQL statements to be embedded within C programs. SQL is the standard query language for relational DBMSs. The embedded SQL statements offer a concise and accurate description of the necessary database operations. Oracle DBMS has a pre-compiler that translates a Pro*C program into a C program. The main drawback of Pro*C is that it does not have a true block structure: Pro*C uses goto's and labels, and it requires that all variables used in the embedded SQL statements be globally declared.
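
As an illustration of the embedded-SQL style (and of the goto/label error handling noted above), a minimal hedged Pro*C fragment is sketched below. The table and column names, and the demonstration account, are assumptions made for the sketch and are not taken from the prototype code.

    #include <string.h>

    EXEC SQL BEGIN DECLARE SECTION;
    char username[20];
    char password[20];
    char res_id[17];
    EXEC SQL END DECLARE SECTION;
    EXEC SQL INCLUDE SQLCA;

    int main(void)
    {
        strcpy(username, "scott");           /* placeholder demonstration account */
        strcpy(password, "tiger");

        EXEC SQL WHENEVER SQLERROR GOTO sql_error;
        EXEC SQL CONNECT :username IDENTIFIED BY :password;

        /* write one resource record into the (assumed) parallel Oracle table */
        strcpy(res_id, "dt501_ma_jig");
        EXEC SQL INSERT INTO RESRCE (resource_id) VALUES (:res_id);

        EXEC SQL COMMIT WORK RELEASE;
        return 0;

    sql_error:
        EXEC SQL WHENEVER SQLERROR CONTINUE;
        EXEC SQL ROLLBACK WORK RELEASE;
        return 1;
    }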

 

The research effort has indicated that it is simple to communicate information between an external DBMS and the integrator. This should not come as a surprise. The Western team has made the Oracle DBMS interface modular; in the event of a DBMS change, only this interface has to be adjusted accordingly.

 

The effort has also revealed a significant difference in time performance between accessing internal and external data storage. It is much faster for the integrator to access its internal data structure than any external DBMS. This difference implies that minimizing the amount of access to the external DBMS will improve the efficiency of the integrated system. This supports the importance of representing the information relevant to the CAPP/PPC integration internally in the integrator.

FILE

 

An ASCII file is the second form of external storage mentioned above. Reading from and writing to ASCII files is fundamental for any program. The objective here is to test the mechanics of data translation. Specifically, an ASCII file containing the process plan (of an air cylinder) given in PDL is used in implementing the prototype. This ASCII file was supplied by the McMaster team. PDL is a product description language designed by the McMaster team. Naturally, PDL describes process plans in a format different from the generic data structure. Special routines have been programmed to allow the integrator to translate the process plan of the air cylinder (given in PDL format) into a form that can be stored in the generic data structure.

 

The research effort has revealed several significant elements of the process of data translation. First, it is crucial to have a definite goal for the translation, and a clear understanding of both the structure and content of the data to be translated, before beginning any translation. Second, there may not be a compatible translation for certain pieces of information that must be translated (due to the individual makeup of the two formats). This situation requires that the formats be modified, or that the data not be translated. A possible solution is to refine the generic data structure when setting up the integrated environment.

 

The effort has shown that the current version of the generic data structure does not support the full structure and format of the process plan given in PDL. Since the generic data structure will be revised continuously and PDL is only a test case, the focus here is on translating the pieces of information that can be translated between PDL and the generic data structure.

 

The Western team has used the UNIX utilities lex (a lexical analyzer generator) and yacc (a parser generator) to extract the necessary information from the ASCII file. The extracted information is then put into the internal data structure by the corresponding access routines.
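
A minimal sketch of the driver that hands the PDL file to the generated scanner and parser is given below. yyin and yyparse() are the standard entry points produced by lex and yacc; the PDL grammar itself is not reproduced here, so this fragment only links once the generated code is present.

    #include <stdio.h>

    extern FILE *yyin;           /* input stream used by the lex-generated scanner */
    extern int   yyparse(void);  /* parser generated by yacc                       */

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s pdl-file\n", argv[0]);
            return 1;
        }
        yyin = fopen(argv[1], "r");
        if (yyin == NULL) {
            perror(argv[1]);
            return 1;
        }
        /* The grammar actions invoked by yyparse() call the access routines
           that fill the generic data structure.                             */
        return yyparse();
    }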

 

As an illustration of the syntax of PDL, the process plan of the air cylinder is listed in Appendix A. The result of translation is given in section 2.2.7.

 

Events and Messages

The message board requires that each message have an id. The ids applicable to the integrator are described below:

Table 1: Events and Messages

ID    Message                                Parameters
C1    A new process plan is ready            File Name
C2    A new process plan is ready            Data Base Key
C3    Process plan altered                   Data Base Key
P1    Process Plan has failed                Key
P2    Process Plan failed at Resource        Key / Key
P3    Resources have changed                 n/a
P4    Resource is no longer available        Key
P5    Resource is temporarily unavailable    Key / Duration
P6    Resource is available again            Key
M1    Change Optimization function
P7    Batch Replan Follows
P8    Batch Replan Begin
P9    Quality Insufficient                   Key
M2    Materials Flow analysis required
M3    Failure Summary
M4    Start CAPP
M5    Start PPC

 

 

Additional messages may be defined at a later date. Messages were derived from the events that could occur in CAPP, PPC, or directly from the user.

Implementation

 

Graphical interfaces were developed for CAPP, PPC, RPE, and the Integrator (some of the modules were simulators). These programs tied into the message board and sent messages back and forth between each other at timed intervals. All of the programs were able to inject messages manually, as well as automatically in simulation mode. The text window in each display indicated all the messages received, processed, or sent by each application.

 

 

Figure 1.8 CAPP, PPC, RPE, Integrator Interfaces

 

Implementation using the message board was simple and straightforward. Only one significant difficulty occurred: the two asynchronous processes (MPS and X Windows) were difficult to operate simultaneously. This is seen as a difficulty due to the current release of the operating system, and thus could be quickly overcome when debugging for commercial applications.

 

CONCLUSION

 

In this chapter, the generic data structure was presented. The experimental work in accessing external DBMS, and performing data translation were also described.

 

The research effort has shown that the generic data structure is effectively the heart of the integrated system (the integrated system revolves and operates around it). Although the generic data structure is still at an early stage of development, it was demonstrated to be capable of representing the information relevant to integrating the functionality of CAPP and PPC. Through this generic data structure, the modules can communicate with each other, and the functionality of the integrator can be implemented.

 

For the generic data structure, information is represented in a relational fashion and structured to reduce redundancy. It is a preliminary version, and it does not support the full process plan given in PDL (whether or not it is necessary to support the full PDL files). There are several possible directions for future development of the generic data structure: first, refining the current version to improve its capability of representing common information relevant to the integration; second, developing a new generic data structure that is not bound to any specific scheme of structuring information employed by the other two teams; third, experimenting with alternatives such as object-oriented representation of information; fourth, providing more access routines to the generic data structure.

 

It came as no surprise when the research effort revealed how straightforward it is to connect to an external DBMS, and how much more complicated it is to perform the data translation. Four points can be observed from the research. First, regardless of the factors involved, this example of data translation is probably a typical scenario: only some pieces of information will have to be translated, and the generic data structure must be able to capture these pieces. This directly implies a loss of the information that is not relevant to the integration when data is translated back and forth. Second, the complexity of the translation depends on the data to be translated, and on the intended functionality of the integrator. Third, the generic data structure can be fine-tuned to suit any potential peculiarity of the data to be translated. Fourth, the most significant part of translation is a thorough understanding of both the content and format of the data to be translated.

 

The research outlined in this chapter is the first attempt to bridge the data gap. The results support the approach taken by the Western team to bridge the data gap during the CAPP/PPC integration. The effort prepared the way for addressing other integration issues.

 

COMMUNICATION IN A CONCURRENT ENVIRONMENT

 

Integration of multiple processes requires the use of sophisticated techniques. If all processes run on a single machine, they may communicate through common memory, files, etc. When the processes are distributed over a network of machines, a more sophisticated approach is required. At Western we already possess a tool which may be applied to the Integrator project: the Message Passing System (MPS). The system is socket based, using OSI standards, which makes it very portable between operating systems and languages.

 

The basic design features of the Integrator support coarse-grained concurrent processing over a number of machines. The various programs use a generic set of interface subroutines. These interface routines talk to a central server program which handles a number of communication schemes, including asynchronous, concurrent, filtered, grouped, and hierarchical. Even more important is the fact that, because the source code is available, it is easy to add features not anticipated at this time.

 

This system allows programs to be added to and removed from the MPS system dynamically. As a result the system is fault tolerant and robust. The client structure makes the architecture very modular. This modularity means that new functions may be added to the MPS system on-line, and new applications may be added without difficulty.

 

Figure 1.1 Abstract diagram of socket connections between MPS and client programs.

- The Connections:

 

Each program has a small library of subroutines which are used to communicate with the MPS server. After a client has been enrolled on the message board, it may send messages, or check to see if any messages are waiting for it. The MPS server is a single program which runs on a single machine, while serving all of the clients on the network. To clarify, the MPS server is a utility program which always runs, and the clients can be any program, such as,

 

- A Process Planner,

- A Scheduler,

- A User interface for Scheduling,

- A data collection package,

- etc.

 

 

Figure 1.2 Basic Connectivity of the MPS System

- The Clients:

 

MPS allows clients to enrol in an ad-hoc manner. As a result, some abstract structure was required to allow the clients to identify their function. By using group names and priority numbers, clients may enrol by function type and by their order of application to a particular message. The diagram in Figure 1.3 below shows a structure of processes for two hypothetical groups. In these diagrams messages flow from top to bottom. Where there are two or more clients at the same level, the message is picked up on a first come, first served basis (thus giving concurrency). If a message passes through a group, it should be addressed to another group by one of the clients. If a message originates from a client, it will be assigned a destination group. When a message gets to the bottom of a group, it is passed to the top of the destination group. This scheme makes programs easy to configure.

 

 

Figure 1.3 Example MPS structures for Clients

 

From this perspective, messages may be viewed as colored Petri-net tokens. As can be seen, with a structure like the one above, many complex computation schemes are possible.

- Technical Example:

 

A simple technical example is given below which illustrates a very basic case of MPS operation (Figure 1.4). The first program (CAPP) initializes itself and waits for a message. The second program (PPC) sends a message to the first (CAPP) program.

 

Figure 1.4 Sample of MPS coding
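
Since the figure itself is not reproduced in this copy of the report, the following is only a schematic sketch of the calling sequence it illustrates. The routine names and signatures are hypothetical placeholders (stubbed out so that the fragment compiles), not the actual MPS client library documented in the MPS manual; the message text is likewise a made-up example.

    #include <stdio.h>
    #include <string.h>

    /* Stand-in stubs for the MPS client library (hypothetical names). */
    static int mps_enrol(const char *group)                 { (void)group; return 0; }
    static int mps_send(const char *group, const char *msg) { (void)group; (void)msg; return 0; }
    static int mps_receive(char *buf, int len)              { strncpy(buf, "C1 plan.pdl", len); return 1; }

    int main(void)
    {
        char msg[128];

        /* CAPP side: enrol on the message board and wait for a message */
        mps_enrol("CAPP");
        while (!mps_receive(msg, sizeof msg))
            ;                                 /* poll until a message arrives */
        printf("CAPP received: %s\n", msg);

        /* PPC side (normally a separate program): enrol and send to CAPP */
        mps_enrol("PPC");
        mps_send("CAPP", "C1 plan.pdl");      /* event code plus data reference */
        return 0;
    }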

 

For more information on MPS please refer to the Technical Report (Jack and ElMaraghy, 1992).

Integrator Events:

The event model is flexibly defined, to allow updating when newer CAPP and PPC system technologies become available. The list below gives a good indication of what these events are,

 

• From CAPP to the Integrator

- A new process plan is ready

- An updated version of a process plan is ready

- Process plan is unavailable

• From the Integrator to CAPP

- A process plan has failed

- A process plan is required

• From RPE to CAPP

- An optimal process plan for available resources is ready

- Process plan is not available

• From CAPP to RPE

- Optimize process plan

• From PPC to Integrator

- Process plan required

- Resource is unavailable, send new plan

- Resource is over-utilized, send alternate plan

• From Integrator to PPC

- A new process plan is available

- A revised process plan is available

 

These events are encoded as part of the message, along with the reference to data (process plans or resources).

 

The Western team has identified a set of potential responses of CAPP, PPC, and the integrator. These responses are derived from four specific events of PPC:

 

• product planning,

• resource availability,

• permanent bottleneck,

• quality of production.

 

These responses are also derived from five possible actions to handle these events:

 

• wait for problem to go away,

• relocate by selecting alternative resources from process plan,

• replan to avoid the unavailable resource (if there is no alternative),

• replan shop orders to avoid medium-term continuing disturbance (if appropriate),

• replan entire portfolio to handle permanent loss of a resource.

 

This set of responses is listed in the table below. However, these responses will not be dealt with in the prototype.

 

Table 1: RESPONSES

CODE    DESCRIPTION                            PARAMETERS
C1      new process plan is ready              file name
C2      new process plan is ready              DBMS key
C3      process plan is altered                DBMS key
C4      process plan is no longer valid        DBMS key
P1      process plan has failed                plan key
P2      process plan has failed at resource    plan key, resource key
P3      resources have changed
P4      resource is no longer available        resource key
P5      resource is temporarily unavailable    resource key, time duration
P6      resource is available again            resource key
M1      optimize                               priority or optimizing function
P7      batch replan follows
P8      batch replan begin
P9      quality insufficient                   plan key
M2      material flow analysis required
M3      failure summary
M4      start CAPP
M5      start PPC

MPS (Message Passing System):

 

Although developed separately, the Message Passing System (MPS) has a number of features which are applicable to the problems which occur in the Integrator (for more details see the Technical Report on MPS by Jack and ElMaraghy, 1992). The software uses OSI sockets, which makes it portable across a wide variety of software platforms and operating systems. The system uses a central server for message passing, and client routines which are used by the client programs in the system. The key points of interest are,

 

• Asynchronous message passing.

• A Network Based Communication system, using OSI standards, but may be adapted to others.

• Allows concurrent processing.

• Allows dynamic changes to the distributed application.

• Allows Mixed Languages and Operating Systems.

• Intended for coarse grain processing.

• Robust/Fault Tolerant.

• Simple to add to existing computer code.

• Can work with other distributed processing methods.

• Forensic evaluation of system performance.

 

Comparison of MPS to Oracle Based Method

 

The advantages of MPS over the Oracle-based message passing system may be listed as follows,

 

• Faster operation.

• Applications are easier to develop.

• Database not required, thus less expensive, and a customer is not tied to any database.

• Less memory/CPU intensive.

• Client programs are simpler.

• Is suited to complicated concurrent processing.

• Easier to expand for new applications.

 

On the other hand, the Oracle approach has certain advantages over the Socket based approach of MPS.

 

• Is used in GRIPPS, and is good for database intensive programs.

• Messages can be made specific to data sets, and data contents.

 

CAPP / PPC INTEGRATION USING OBJECT-ORIENTED TECHNOLOGY

 

One of the principal areas of research currently in progress at the Design Automation and Manufacturing Research Laboratory at the University of Western Ontario involves the development and implementation of an integration module which will link Computer Aided Process Planning (CAPP) software with Production Planning and Control (PPC) software. PPC systems are often referred to as “scheduling” systems. The object of the project is to establish a complete process planning and production software package that implements a production cycle from initial planning stages through to shop floor scheduling. This system is being implemented in conjunction with the Flexible Manufacturing Research Center at McMaster University, who will be implementing the CAPP portion of the system, and with IPA in Stuttgart, Germany, whose PPC system, GRIPPS, will be used in the integration.

 

The “Object-Oriented” version of the project is a parallel implementation of the integration module using object-oriented software tools available at the DA&MRL. The focus of the project is on machining process planning and the use of a central Object-Oriented Database Management System (OODBMS) to serve data to the software modules. The process planner to be used will be MetCAPP. This package will be interfaced to the object-oriented database management system developed by Versant Object Technology. A front-end for the system will also be included which will allow the user to design feature-based parts for process planning. This design module will be implemented using ACIS, an object-oriented geometric modeling system.

 

The Object-Oriented Approach

 

Computer Integrated Manufacturing (CIM) systems often involve the use of many database intensive applications. Engineering data is becoming increasingly complex and occurs in such quantities that extensive database technology is employed by most applications. Until recently, engineering and manufacturing facilities have followed the trend set by many businesses for database storage by using relational database systems. Systems of this type involve the storage of data in the form of tables of text or numeric values. This strategy is useful for most business applications but has proven to be less than ideal for engineering applications due to the limitations placed on data structures.

 

OODBMSs have emerged only recently and provide a long awaited alternative to their relational counterparts. Besides being a radical departure from traditional storage and programming strategies, object-oriented databases are well suited to the complex nature of engineering data. These systems represent the current state-of-the-art in engineering computing applications and many manufacturing facilities are converting existing software applications to incorporate object-oriented features.

 

The development of object-oriented programming and database technology in recent years has been a result of a combination of several established research fields in the computing area. Research in programming languages, artificial intelligence and software engineering has contributed to the development of object-oriented concepts particularly in applications involving database technology [Zdonic and Maier, 1990].

 

Until recently, most data intensive applications were related to business. Much of the research that has occurred in the database field has been centered on tabular, relational systems because of their suitability to business-oriented tasks. The push in the manufacturing field for facilities to produce products at high rates in order to survive, as well as the rapid advancements in computing technology over the past few years, have led to a situation where the fields of engineering science and computer science are becoming very closely related. The inability of relational systems to adequately meet the data storage needs of complex engineering applications has promoted research into alternative storage methods.

 

Traditional computing applications often maintained their own data, usually in the form of flat files stored on magnetic media. As applications were developed that used the same sets of data, the storage of that data often became redundant. Also, as more advanced applications were developed, they were limited by the difficulty of altering data structures while maintaining compatibility with older applications. More complex applications such as CAD and manufacturing systems require central sets of persistent data which are often used by many applications at the same time [Zdonic and Maier, 1990]. This requirement for data storage and handling has led to the development of database management systems and to a reversal in the traditional role of data in engineering facilities. Most CIM implementations at present view the database (the data itself) as the central focus, with applications built around it, as opposed to the traditional view of data as a secondary component of the applications using it.

 

Process planning represents the basic bridge between design and manufacturing. As such it utilizes both design and manufacturing related data. Due to the complexity of both fields it is necessary that computer aided process planners have access to very complex data representations for products and manufacturing resources. Object-oriented database technology is ideally suited for this application because it has the capability of providing persistent database representations for any user-defined structure, referred to as an “object”.

 

Figure 1.1 below shows all of the software modules that are under development for this project.

 

 

Figure 1.1 Software Modules for Object-Oriented Integration

 

Message and Event Passing Using an Object-Oriented Database

 

The database management system, as previously mentioned, is a key feature in facilitating data flow between the various modules of the system. The modules must also communicate with each other in order to complete the functionality of the system. Some examples of the types of messages that must be passed between modules are:

 

1. Design telling Planning that a list of features are ready to be processed

2. Planning telling Production that a process plan is ready to be scheduled

3. Production telling Planning that resources have changed requiring replanning

4. Planning telling Design that resources have changed requiring redesign

 

The Versant OODBMS is useful not only for maintaining the data in the system but also for passing messages between the various modules. Its fully distributed architecture makes communications across a network easy to implement in the form of a "message" database which is accessible to all modules. The concurrent usage capabilities of Versant databases also make real-time message passing available to the modules of the system. Figure 1.2 below shows how a basic "message-board" system is implemented using Versant.

 

The message class is implemented in C++ and this basic code is made available to all of the client modules. Therefore, each client has full access to the database containing the messages. Each client process is also issued an identification code. This identification is used for the retrieval of messages issued to that particular process. The messages are in the form of an address identifier tied to the message contents. If a module needs to send a message, the address and the message are added to the database. If a module wishes to retrieve its messages, it simply accesses the database and retrieves all messages containing its address identifier.

 

Figure 1.2 Message Board Configuration using an OODBMS

 

This implementation of a message passing system does not require the development of complex communication protocols and unusual hardware arrangements. It simply utilizes the inherent distributed communication capabilities of Versant to pass simple messages between the client modules of the overall software system.

 

An Overview of MetCAPP

 

MetCAPP is a machining process planning software package which incorporates the extensive manufacturing and machining experience of the Metcut Corporation, which has recently become a division of the Institute of Advanced Manufacturing Sciences in Cincinnati, Ohio. The system is a semi-generative CAPP environment which automatically generates speed and feed parameters for the machining of user defined features and associated tooling and material characteristics.

 

The MetCAPP package consists of three layered modules used for the development of machining process plans.

 

Figure 1.3 MetCAPP Software Structure

 

The CUTPLAN module is used to develop the process plan for an entire part. The user defines all of the features which make up the part. The module suggests the appropriate work station that may be used to produce each of the features and also calculates the times required at that work station to produce each feature. MetCAPP at the present time supports 41 different features which may be chosen from menus. For each feature on the part a separate call is made to the second module: CUTTECH.

 

The CUTTECH module is used to define all of the operations required to produce an individual feature on the part. A sequence of machining steps (operations) is defined and associated with specific cutting tools. This module determines the required number of cutting passes as well as the time to perform each operation. For each operation a call is made to the CUTDATA module.

 

CUTDATA is the main Metcut machining database, compiled from over 40 years of machining experience. This database is accessed for each machining operation defined in CUTTECH, and speed and feed information is automatically generated using the operation and tooling information found in CUTTECH.

 

The MetCAPP API (Application Programming Interface) consists of a set of C functions which may be used to directly access any of the three modules of the MetCAPP software without the use of the supplied user interface. These C functions may be incorporated into any application program.

 

Example API functions:

Session manager:

smCreate - creates a MetCAPP session

Searching tools:

srSearch - searches the database for a particular character string

CUTPLAN module:

cutplan - retrieves elements for a particular row of CUTPLAN

cptimecalc - calculates machining times for a CUTPLAN session

CUTTECH module:

cuttech - calculates a machining operation sequence and determines tools and speed/feed data for a given feature

ctmachset - adds machine tool information to the session after a tool is located using srSearch

ctfeatset - sets feature dimensions

CUTDATA module:

cdgetoper - returns the current operation

cdloaddata - loads speed/feed data

Report Writer:

rwPrint - prints all requested reports from MetCAPP

 

CUTTECH is the primary module incorporated into this project because the process planning occurs at the feature level. CUTDATA will be used to obtain approximate times for the operations. The actual speed and feed data from CUTDATA, however, are not necessary within the scope of this project.

 

Current Progress and Future Work

 

The code development for this project is approaching completion. The Design Module has been implemented for simple features and produces both feature lists in Versant and ACIS models for simple parts. Currently all design input is text-based and is entered from the keyboard. Eventually, the system will have a graphical user interface (GUI) in X-Windows which will simplify user input.

 

The Planning Module currently reads from the feature and resource databases and employs MetCAPP to generate machining process plans for the simple parts designed in the Design Module. The Production Module is very simplistic at present. It reads process plans from the Planning Module and updates the allocation of resources (materials and workstations) to the plans. The availability of resources is randomly set in this module, and appropriate messages are passed to the other modules as random events occur.

 

The message passing in the system has been implemented in the Communication Module. The messages are the triggers for the operation of the various modules in the system. For example, when the Design Module has successfully stored a feature list to Versant it posts a message to the Planning Module that a part is ready to be planned. The Planning Module sits idle on the system and polls the message board periodically. When the message is found the process planning procedure is triggered.

 

The future work on this project involves the addition of functionality to the Production Module and perhaps the simulation of actual scheduling using another commercial package. A GUI will be added to each of the modules of the system.

 

CAPP / PPC INTEGRATION: THE OBJECT ORIENTED APPROACH

 

Relational systems are not ideally suited to handle the multi-dimensional nature of process plans from RPE. Tabular formats are not efficient for the storage of data that is hierarchical in nature (e.g. RPE submits alternative process plans as a hierarchy). Object-oriented database technology is better equipped to handle hierarchical, multi-dimensional data structures. Programming languages supplied with relational database management systems (e.g. SQL*Plus in Oracle) are often proprietary and non-portable to other relational systems. SQL is a primitive query language, not suitable on its own for the development of complex engineering applications.

 

Most object-oriented database management systems support C and C++ application code in a standard form (ANSI). Database functionality is added to application code by including database class libraries. This makes application code portable among database systems with minimal changes.

 

DATABASE STANDARDS

 

The standard database agreed to by both teams for use in this project is the ORACLE DBMS. Oracle is a relational database used by many industries and other organizations for their DBMS needs. Considerable effort and progress was made by both teams toward using the same data structures for the common data during Year 2 of this project. An ultimate objective would be for both CAPP and PPC to use the same physical ORACLE records of 'the data'. It was, however, more practical to use the same data structures but two different physical databases (IPA uses ORACLE on a PC; McMaster/Western use a SUN workstation version of ORACLE). Object-oriented databases offer advantages, but this is a future standard. Western has initiated a parallel project utilizing an OODBMS.

 

Disk usage can be an interesting point of comparison. Some figures for Versant (object-oriented) and Oracle (relational) disk usage are given below.

 

 

                                      Versant       Oracle

Disk usage for basic DBMS             20.2 Mb       21.2 Mb

Additional functionality *             3.3 Mb       63.5 Mb

"Add User Space" for each
Oracle product                           -          80 Kb / product
                                                    (1.6 Mb for whole system)

Disk usage for empty database          1.3 Mb       10.1 Mb †

 

 

 

* The “additional functionality” for Oracle includes application development software and reporting products. The only additional functionality required by Versant is the SUN C/C++ compiler.

† This is the default database size (configurable).

 

Note: These statistics were taken from the SUN versions of the software and are approximations.

 

PAST ACHIEVEMENTS

 

The achievements of the first two years may be summarized as follows,

 

• YEAR ONE ACHIEVEMENTS

 

• Evaluated the state of the art and conducted literature reviews in Generative Computer-Aided Process Planning.

• Defined product modelling requirements, representation schemes and necessary knowledge base for assembly and assignment of machined parts to manufacturing cells.

• Continued research into the automatic generation of product assembly sequences using directed search graphs and optimality criteria.

• Initiated research in generic process planning for assembly and fabrication.

• Designed and partially implemented a process plan /constraints interactive graphical browser.

• Evaluated issues of standards and communication as they relate to representation schemes.

• Started to become familiar with IPA PPC models.

 

• YEAR TWO ACHIEVEMENTS

 

• Defined bi-directional data and requirements for integrating CAPP with PPC modules.

• Investigated standards as they relate to representation of exchanged data and CAPP/PPC interface. Defined implications regarding implementation.

• Produced specifications and guidelines for proposed interface & integrator module.

• Started implementation and proof-of-concept prototyping.

• Identified suitable industrial applications and established ongoing interaction and dialogue to guide research efforts

• Two publications.

 

The technical content of this report shows the results of the third year (as well as previous work).

FUTURE DEVELOPMENTS

 

At present the system is not fully implemented and tested. In the future a few bugs must be worked out. These are described in the next section. Eventually the entire system should run off a single global database.

 

CONCLUSIONS

The justification and need for integrating process planning and production planning and control more closely, has been demonstrated. The benefits from this integration are equally valid in manual, automated and computer integrated manufacturing environments. Traditional CAPP systems produce linear sequential plans which do not consider resource availability. Modifications required for localized rescheduling mean complete replanning with obvious disadvantages. A reactive planning environment (RPE) has been developed to capture plans and resource alternatives and provide an effective means of evaluation and selection of plans based on the dynamically changing shop floor requirements. The integrator module addresses the time dependent issues related to event handling, communications, database updating and response time (short, medium & long). Both RPE and the Integrator are designed to be compatible with existing CAPP and PPC systems with distributed and/or common databases. The effectiveness of the proposed solution is currently being demonstrated using prototype industrial applications.

 

All year 1 and year 2 tasks and milestones have been met and exceeded. The synergy and cooperation between the two Ontario universities, and with IPA in Baden-Wurttemberg, were very beneficial on a project of this magnitude and expected impact. The important CAPP/PPC Integrator issues have been identified and solutions have been formulated and implemented. Standards for representation of models and data were addressed. The approach used is generic and will allow integration of alternate CAPP and PPC systems with minimal effort. We are now working with industry to address their specific needs and implementation issues. This CAPP/PPC Integration project provided the motivation to enhance ongoing research in generative process planning, reactive planning and concurrent engineering environments.

 

OUTSTANDING TASKS

 

There are a number of tasks which still require development and debugging. In particular, the tasks which require debugging are:

 

• The MPS and OpenWindows apparently conflict. This may only be an operating system bug, as the two should be independent. It becomes a problem when the MPS is used with the simulation programs, which have an OpenWindows interface.

• There is a problem with recurring access to the Oracle database. This may simply require more expertise with Oracle management.

• Many functions are not fully implemented because of the lack of test cases and practical examples. These functions include:

- Database interfaces

- PDL file interface

- Data conversion functions

- Event handling functions

 

Other functions have not been implemented at all. In particular, these include:

 

• Long term statistics have not been implemented.

• A CAPP system which we can interface to has not been located.

• An interface to GRIPPS was not possible because we had no access to the software.

• An interface to RPE was not possible because we had no access to the software.

 

All of the unresolved problems can be dealt with when a manufacturer is located who will give us access to their software and databases. This will allow verification of the data structures and will clarify which events are missing. It will also allow the development of the missing functions, and debugging of the existing ones.

 

 

REFERENCES

 

Alting, L., and Zhang, H., 1989, Computer Aided Process Planning: the state-of-the-art survey, The International Journal of Production Research, vol. 27, no. 4, pp. 553-585.

ElMaraghy, H. A., 1991, Intelligent Product Design and Manufacture, in Artificial Intelligence in Design, edited by D. T. Pham, Springer-Verlag, pp 147 - 169.

Eversheim, W., 1985, Survey of Computer Aided Process Planning Systems, CIRP Annals, Vol. 34/2/1985.

Eversheim, W., Grop, M., and Lehmann, F., 1990, Innovative Assembly Management, CIRP Annals, Vol. 39/1/1990, pp. 1-4.

Ham, I., and Lu, S., 1988, Computer Aided Process Planning: The Present and The Future, Annals of the CIRP, Vol. 37, pp. 591-602.

Harhalakis, G., Ssemakula, M. E., and Johri, A., 1990, Architecture of a Facility Level CIM System, Proc. of CIMCON’90, U. S. Government Printing Office, pp. 430-445.

Jack, H., and ElMaraghy, W. H., 1992, A Manual for Interprocess Communication with the MPS (Message Passing System), DAMRL Report No. 92-08-01, The University of Western Ontario, London, Ontario, Canada.

Kuhnle, H., 1991, IPA Stuttgart Germany, personal communications regarding the IPA GRIPPS system for PPC.

Lenau, T., and Alting, L., 1990, Prerequisites for CAPP, 22nd CIRP International Seminar on Manufacturing Systems, University of Twente, Enschede, Netherlands.

Metcut Research Associates, MetCAPP User’s Guides, Institute of Advanced Manufacturing Sciences, Inc., Cincinnati, Ohio, 1990.

Ruf, T., and Jablonski, S., 1990, Flexible and Reactive Integration of CIM Systems: A Feature Based Approach, CSME Mechanical Engineering Forum.

Sechrest, S., 1986, An Introductory 4.3BSD Interprocess Communication Tutorial, in Unix Programmer’s Manual Supplementary Documents 1, by The Computer Systems Research Group, The University of California.

Stranc, C., 1992, M.Eng. Thesis, in progress, McMaster University, Hamilton, Ontario, Canada.

Törnshoff, H.K., and Detand, J., 1990, A Process Description Concept for Process Planning, Scheduling and Job Shop Control, 22nd CIRP Intern. Seminar on Manu. Sys., Univ. of Twente, Enschede, Netherlands.

Törnshoff, H.K., Beckendorff, U., and Anders, N., 1989, FLEXPLAN - A Concept for Intelligent Process Planning and Scheduling, CIRP Intern. Workshop on Computer Aided Process Planning, Hannover University, Sept. 21-22, pp. 87-106.

Weill, R., Spur, G., and Eversheim, W., 1982, Survey of Computer-Aided Process Planning Systems, CIRP Annals, Vol. 31/2/1982.

Zdonic, S., and D. Maier, “Fundamentals of Object-Oriented Databases”, Readings in Object-Oriented Database Systems, S. Zdonic and D. Maier, ed., Morgan Kaufmann Publishers, Inc., San Mateo, CA, 1990, pages 1 - 32.

 

APPENDIX A : ORACLE INTERFACE FOR COMMON DATA

 

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <math.h>

 

#include "data.h"

#include "read_data_stub.h"

 

EXEC SQL BEGIN DECLARE SECTION;

varchar uid[20] ;/* user id */

varchar pwd[20] ;/* password */

int id ;/* id */

varchar nam[100] ;/* name */

varchar dscr[100] ;/* description */

int usage_ofr ;/* usage offered */

int util_ftr ;/* utility factor */

int logical_x ;/* logical x-coord */

int logical_y ;/* logical y-coord */

int prod_seg_01 ;/* product segment */

int prod_seg_02 ;

int prod_seg_03 ;

int prod_seg_04 ;

int prod_seg_05 ;

int prod_seg_06 ;

int prod_seg_07 ;

int prod_seg_08 ;

int prod_seg_09 ;

int prod_seg_10 ;

int prod_seg_11 ;

int prod_seg_12 ;

int prod_seg_13 ;

int prod_seg_14 ;

int prod_seg_15 ;

int prod_seg_16 ;

int prod_seg_17 ;

int prod_seg_18 ;

int prod_seg_19 ;

int prod_seg_20 ;

int prod_seg_21 ;

int prod_seg_22 ;

int prod_seg_23 ;

int prod_seg_24 ;

int prod_seg_25 ;

int prod_seg_cnt ;/* product segment count */

int resrce_prt_01 ;/* resource part */

int resrce_prt_02 ;

int resrce_prt_03 ;

int resrce_prt_04 ;

int resrce_prt_05 ;

int resrce_prt_06 ;

int resrce_prt_07 ;

int resrce_prt_08 ;

int resrce_prt_09 ;

int resrce_prt_10 ;

int resrce_prt_11 ;

int resrce_prt_12 ;

int resrce_prt_13 ;

int resrce_prt_14 ;

int resrce_prt_15 ;

int resrce_prt_16 ;

int resrce_prt_17 ;

int resrce_prt_18 ;

int resrce_prt_19 ;

int resrce_prt_20 ;

int resrce_prt_21 ;

int resrce_prt_22 ;

int resrce_prt_23 ;

int resrce_prt_24 ;

int resrce_prt_25 ;

int resrce_cnt ;/* resource count */

int plan_horiz ;/* planning horizon */

int cost_setup ;/* cost: setup */

int cost_run ;/* cost: run */

int scrap_rate ;/* scrap rate */

int avail ;/* availability */

int class ;/* class */

int capab ;/* capab */

int skill ;/* skill */

int qualf ;/* qualf */

int quant ;/* quant */

int cost_rate ;/* cost: rate */

int setup_time ;/* time: setup */

int usage_time ;/* time: usage */

int cnst_01 ;/* cnst */

int cnst_02 ;

int cnst_03 ;

int cnst_04 ;

int cnst_05 ;

int cnst_06 ;

int cnst_07 ;

int cnst_08 ;

int cnst_09 ;

int cnst_10 ;

int cnst_cnt ;/* constraint count */

int factory ;/* factory */

int db_key ;/* db_key */

int modf_idx ;/* modf_idx */

int unit_m ;/* unit_m */

int s_t_stat ;/* s_t_stat */

int part_id ;/* part_id */

int s_t_consumer ;/* s_t_consumer */

int s_t_supplier ;/* s_t_supplier */

int consumer_num ;/* consumer_num */

int relat_fct ;/* relat_fct */

int id_s_t ;/* id_s_t */

int id_assoc ;/* id_assoc */

int cap_grp ;/* cap_grp */

int resrce_01 ;/* resrce */

int resrce_02 ;

int resrce_03 ;

int resrce_04 ;

int resrce_05 ;

int resrce_06 ;

int resrce_07 ;

int resrce_08 ;

int resrce_09 ;

int resrce_10 ;

int res_cnt ;/* resrce_cnt */

int est_time_run ;/* est_time_run */

int est_time_setup ;/* est_time_setup */

int rank ;/* rank */

int usage_capacity ;/* usage_capacity */

int req_capacity ;/* req_capacity */

int plan_level ;/* plan_level */

int lot_min ;/* lot_min */

int lot_siz ;/* lot_siz */

int avg_stock ;/* avg_stock */

int min_lead_time ;/* min_lead_time */

int relat_quant ;/* relat_quant */

EXEC SQL END DECLARE SECTION;

 

EXEC SQL INCLUDE SQLCA; /* SQL Communication Area */

 

/* ********************************************************

init : process plan

******************************************************** */

 

int pd_init(plans)

PLAN_DATA *plans;

{

static int error;

 

if((pd_init_cap_grp(plans) == NO_ERROR) &&

(pd_init_resrce(plans) == NO_ERROR) &&

(pd_init_resrce_rel(plans) == NO_ERROR) &&

(pd_init_prt_dat(plans) == NO_ERROR) &&

(pd_init_prt_cntn(plans) == NO_ERROR) &&

(pd_init_proc_dscr(plans) == NO_ERROR) &&

(pd_init_super_task(plans) == NO_ERROR) &&

(pd_init_proc_cntn(plans) == NO_ERROR)){

error = NO_ERROR;

}else{

error = ERROR;

}

return(error);

}

 

int pd_init_cap_grp(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_cap_grp = -1;

return(error);

}

 

int pd_init_resrce(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_resrce = -1;

return(error);

}

 

int pd_init_resrce_rel(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_resrce_rel = -1;

return(error);

}

 

int pd_init_prt_dat(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_prt_dat = -1;

return(error);

}

 

int pd_init_prt_cntn(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_prt_cntn = -1;

return(error);

}

 

int pd_init_proc_dscr(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_proc_dscr = -1;

return(error);

}

 

int pd_init_super_task(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_super_task = -1;

return(error);

}

 

int pd_init_proc_cntn(plans)

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

plans->ptr_proc_cntn = -1;

return(error);

}

 

/* ********************************************************

de-init : process plan

******************************************************** */

 

int pd_deinit(plans)

PLAN_DATA *plans;

{

static int error;

 

error = pd_init(plans);

return(error);

}

 

/* ********************************************************

put : process plan

******************************************************** */

 

int pd_put_cap_grp(plans, cg)

PLAN_DATA *plans;

CAP_GRP *cg;

{

static int error;

static int ptr;

static int i;

if((cg->prod_seg_cnt < 0) ||

(cg->prod_seg_cnt > 25) ||

(cg->resrce_cnt < 0) ||

(cg->resrce_cnt > 25) ||

(plans->ptr_cap_grp + 1 >= MAX_REC_CAP_GRP)) {

error = ERROR;

}else{

ptr = ++(plans->ptr_cap_grp);

plans->cap_grp[ptr].id = cg->id;

strcpy(plans->cap_grp[ptr].nam, cg->nam);

strcpy(plans->cap_grp[ptr].dscr, cg->dscr);

plans->cap_grp[ptr].usage_ofr = cg->usage_ofr;

plans->cap_grp[ptr].util_ftr = cg->util_ftr;

plans->cap_grp[ptr].logical_x = cg->logical_x;

plans->cap_grp[ptr].logical_y = cg->logical_y;

plans->cap_grp[ptr].prod_seg_cnt = cg->prod_seg_cnt;

for(i = 0; i < cg->prod_seg_cnt; i++){

plans->cap_grp[ptr].prod_seg[i] = cg->prod_seg[i];

}

plans->cap_grp[ptr].resrce_cnt = cg->resrce_cnt;

for(i = 0; i < cg->resrce_cnt; i++){

plans->cap_grp[ptr].resrce_prt[i] = cg->resrce_prt[i];

}

plans->cap_grp[ptr].plan_horiz = cg->plan_horiz;

plans->cap_grp[ptr].cost_setup = cg->cost_setup;

plans->cap_grp[ptr].cost_run = cg->cost_run;

plans->cap_grp[ptr].scrap_rate = cg->scrap_rate;

plans->cap_grp[ptr].avail = cg->avail;

error = NO_ERROR;

}

return error;

}
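
 

 

/* ********************************************************

example : filling a capability group (illustrative sketch)

This block is not part of the delivered module. It shows how

a CAP_GRP record (declared in data.h) may be filled and then

stored in the in-memory plan store through pd_put_cap_grp().

The function name example_put_cap_grp and all field values

are invented for illustration; nam and dscr are assumed to

be character arrays, as used by the routines above.

******************************************************** */

 

int example_put_cap_grp(plans)

PLAN_DATA *plans;

{

static CAP_GRP cg;

 

cg.id = 1; /* capability group id */

strcpy(cg.nam, "MILL_CELL"); /* short name */

strcpy(cg.dscr, "three axis milling cell"); /* description */

cg.usage_ofr = 0; /* usage offered */

cg.util_ftr = 0; /* utility factor */

cg.logical_x = 0; /* logical x-coord */

cg.logical_y = 0; /* logical y-coord */

cg.prod_seg_cnt = 0; /* no product segments yet */

cg.resrce_cnt = 0; /* no resource parts yet */

cg.plan_horiz = 0; /* planning horizon */

cg.cost_setup = 0; /* cost: setup */

cg.cost_run = 0; /* cost: run */

cg.scrap_rate = 0; /* scrap rate */

cg.avail = 0; /* availability */

return(pd_put_cap_grp(plans, &cg)); /* ERROR if the store is full */

}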

 

int pd_put_resrce(plans, rs)

PLAN_DATA *plans;

RESRCE *rs;

{

static int error;

static int ptr;

static int i;

if(plans->ptr_resrce + 1 >= MAX_REC_RESRCE) {

error = ERROR;

}else{

ptr = ++(plans->ptr_resrce);

plans->resrce[ptr].id = rs->id;

strcpy(plans->resrce[ptr].nam, rs->nam);

strcpy(plans->resrce[ptr].dscr, rs->dscr);

plans->resrce[ptr].class = rs->class;

plans->resrce[ptr].capab = rs->capab;

plans->resrce[ptr].skill = rs->skill;

plans->resrce[ptr].qualf = rs->qualf;

plans->resrce[ptr].quant = rs->quant;

plans->resrce[ptr].cost_setup = rs->cost_setup;

plans->resrce[ptr].cost_run = rs->cost_run;

plans->resrce[ptr].cost_rate = rs->cost_rate;

plans->resrce[ptr].setup_time = rs->setup_time;

plans->resrce[ptr].usage_time = rs->usage_time;

plans->resrce[ptr].avail = rs->avail;

error = NO_ERROR;

}

return error;

}

 

int pd_put_resrce_rel(plans, rr)

PLAN_DATA *plans;

RESRCE_REL *rr;

{

static int error;

static int ptr;

static int i;

if((rr->cnst_cnt < 0) ||

(rr->cnst_cnt > 10) ||

(plans->ptr_resrce_rel + 1 >= MAX_REC_RESRCE_REL)) {

error = ERROR;

}else{

ptr = ++(plans->ptr_resrce_rel);

plans->resrce_rel[ptr].id = rr->id;

plans->resrce_rel[ptr].cnst_cnt = rr->cnst_cnt;

for(i = 0; i < rr->cnst_cnt; i++){

plans->resrce_rel[ptr].cnst[i] = rr->cnst[i];

}

error = NO_ERROR;

}

return error;

}

 

int pd_put_prt_dat(plans, pd)

PLAN_DATA *plans;

PRT_DAT *pd;

{

static int error;

static int ptr;

static int i;

if(plans->ptr_prt_dat + 1 >= MAX_REC_PRT_DAT) {

error = ERROR;

}else{

ptr = ++(plans->ptr_prt_dat);

plans->prt_dat[ptr].id = pd->id;

strcpy(plans->prt_dat[ptr].nam, pd->nam);

strcpy(plans->prt_dat[ptr].dscr, pd->dscr);

plans->prt_dat[ptr].factory = pd->factory;

plans->prt_dat[ptr].db_key = pd->db_key;

plans->prt_dat[ptr].modf_idx = pd->modf_idx;

plans->prt_dat[ptr].unit_m = pd->unit_m;

plans->prt_dat[ptr].s_t_stat = pd->s_t_stat;

error = NO_ERROR;

}

return error;

}

 

int pd_put_prt_cntn(plans, pc)

PLAN_DATA *plans;

PRT_CNTN *pc;

{

static int error;

static int ptr;

static int i;

if(plans->ptr_prt_cntn + 1 >= MAX_REC_PRT_CNTN) {

error = ERROR;

}else{

ptr = ++(plans->ptr_prt_cntn);

plans->prt_cntn[ptr].id = pc->id;

plans->prt_cntn[ptr].part_id = pc->part_id;

plans->prt_cntn[ptr].s_t_consumer = pc->s_t_consumer;

plans->prt_cntn[ptr].s_t_supplier = pc->s_t_supplier;

plans->prt_cntn[ptr].consumer_num = pc->consumer_num;

plans->prt_cntn[ptr].relat_fct = pc->relat_fct;

error = NO_ERROR;

}

return error;

}

 

int pd_put_proc_dscr(plans, pd)

PLAN_DATA *plans;

PROC_DSCR *pd;

{

static int error;

static int ptr;

static int i;

if((pd->res_cnt < 0) ||

(pd->res_cnt > 10) ||

(plans->ptr_proc_dscr + 1 >= MAX_REC_PROC_DSCR)) {

error = ERROR;

}else{

ptr = ++(plans->ptr_proc_dscr);

plans->proc_dscr[ptr].id = pd->id;

plans->proc_dscr[ptr].id_s_t = pd->id_s_t;

plans->proc_dscr[ptr].id_assoc = pd->id_assoc;

strcpy(plans->proc_dscr[ptr].dscr, pd->dscr);

plans->proc_dscr[ptr].cap_grp = pd->cap_grp;

plans->proc_dscr[ptr].res_cnt = pd->res_cnt;

for(i = 0; i < pd->res_cnt; i++){

plans->proc_dscr[ptr].resrce[i] = pd->resrce[i];

}

plans->proc_dscr[ptr].est_time_run = pd->est_time_run;

plans->proc_dscr[ptr].est_time_setup= pd->est_time_setup;

plans->proc_dscr[ptr].rank = pd->rank;

plans->proc_dscr[ptr].usage_capacity= pd->usage_capacity;

plans->proc_dscr[ptr].req_capacity = pd->req_capacity;

error = NO_ERROR;

}

return error;

}

 

int pd_put_super_task(plans, st)

PLAN_DATA *plans;

SUPER_TASK *st;

{

static int error;

static int ptr;

static int i;

if(plans->ptr_super_task + 1 >= MAX_REC_SUPER_TASK) {

error = ERROR;

}else{

ptr = ++(plans->ptr_super_task);

plans->super_task[ptr].id = st->id;

strcpy(plans->super_task[ptr].dscr, st->dscr);

plans->super_task[ptr].plan_level = st->plan_level;

plans->super_task[ptr].lot_min = st->lot_min;

plans->super_task[ptr].lot_siz = st->lot_siz;

plans->super_task[ptr].avg_stock = st->avg_stock;

plans->super_task[ptr].scrap_rate = st->scrap_rate;

error = NO_ERROR;

}

return error;

}

 

int pd_put_proc_cntn(plans, pc)

PLAN_DATA *plans;

PROC_CNTN *pc;

{

static int error;

static int ptr;

static int i;

if(plans->ptr_proc_cntn + 1 >= MAX_REC_PROC_CNTN) {

error = ERROR;

}else{

ptr = ++(plans->ptr_proc_cntn);

plans->proc_cntn[ptr].id = pc->id;

plans->proc_cntn[ptr].s_t_consumer = pc->s_t_consumer;

plans->proc_cntn[ptr].s_t_supplier = pc->s_t_supplier;

plans->proc_cntn[ptr].min_lead_time = pc->min_lead_time;

plans->proc_cntn[ptr].relat_quant = pc->relat_quant;

error = NO_ERROR;

}

return error;

}

 

/* ********************************************************

get : process plan

******************************************************** */

 

int pd_get_cap_grp(plans, pos, cg)

PLAN_DATA *plans;

int pos;

CAP_GRP *cg;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_cap_grp < pos)) {

error = ERROR;

}else{

cg->id = plans->cap_grp[pos].id;

strcpy(cg->nam, plans->cap_grp[pos].nam);

strcpy(cg->dscr, plans->cap_grp[pos].dscr);

cg->usage_ofr = plans->cap_grp[pos].usage_ofr;

cg->util_ftr = plans->cap_grp[pos].util_ftr;

cg->logical_x = plans->cap_grp[pos].logical_x;

cg->logical_y = plans->cap_grp[pos].logical_y;

cg->prod_seg_cnt = plans->cap_grp[pos].prod_seg_cnt;

for(i = 0; i < plans->cap_grp[pos].prod_seg_cnt; i++){

cg->prod_seg[i] = plans->cap_grp[pos].prod_seg[i];

}

cg->resrce_cnt = plans->cap_grp[pos].resrce_cnt;

for(i = 0; i < plans->cap_grp[pos].resrce_cnt; i++){

cg->resrce_prt[i] = plans->cap_grp[pos].resrce_prt[i];

}

cg->plan_horiz = plans->cap_grp[pos].plan_horiz;

cg->cost_setup = plans->cap_grp[pos].cost_setup;

cg->cost_run = plans->cap_grp[pos].cost_run;

cg->scrap_rate = plans->cap_grp[pos].scrap_rate;

cg->avail = plans->cap_grp[pos].avail;

error = NO_ERROR;

}

return error;

}

 

int pd_get_resrce(plans, pos, rs)

PLAN_DATA *plans;

int pos;

RESRCE *rs;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_resrce < pos)) {

error = ERROR;

}else{

rs->id = plans->resrce[pos].id;

strcpy(rs->nam, plans->resrce[pos].nam);

strcpy(rs->dscr, plans->resrce[pos].dscr);

rs->class = plans->resrce[pos].class;

rs->capab = plans->resrce[pos].capab;

rs->skill = plans->resrce[pos].skill;

rs->qualf = plans->resrce[pos].qualf;

rs->quant = plans->resrce[pos].quant;

rs->cost_setup = plans->resrce[pos].cost_setup;

rs->cost_run = plans->resrce[pos].cost_run;

rs->cost_rate = plans->resrce[pos].cost_rate;

rs->setup_time = plans->resrce[pos].setup_time;

rs->usage_time = plans->resrce[pos].usage_time;

rs->avail = plans->resrce[pos].avail;

error = NO_ERROR;

}

return error;

}

 

int pd_get_resrce_rel(plans, pos, rr)

PLAN_DATA *plans;

int pos;

RESRCE_REL *rr;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_resrce_rel < pos)) {

error = ERROR;

}else{

rr->id = plans->resrce_rel[pos].id;

rr->cnst_cnt = plans->resrce_rel[pos].cnst_cnt;

for(i = 0; i < plans->resrce_rel[pos].cnst_cnt; i++){

rr->cnst[i] = plans->resrce_rel[pos].cnst[i];

}

error = NO_ERROR;

}

return error;

}

 

int pd_get_prt_dat(plans, pos, pd)

PLAN_DATA *plans;

int pos;

PRT_DAT *pd;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_prt_dat < pos)) {

error = ERROR;

}else{

pd->id = plans->prt_dat[pos].id;

strcpy(pd->nam, plans->prt_dat[pos].nam);

strcpy(pd->dscr, plans->prt_dat[pos].dscr);

pd->factory = plans->prt_dat[pos].factory;

pd->db_key = plans->prt_dat[pos].db_key;

pd->modf_idx = plans->prt_dat[pos].modf_idx;

pd->unit_m = plans->prt_dat[pos].unit_m;

pd->s_t_stat = plans->prt_dat[pos].s_t_stat;

error = NO_ERROR;

}

return error;

}

 

int pd_get_prt_cntn(plans, pos, pc)

PLAN_DATA *plans;

int pos;

PRT_CNTN *pc;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_prt_cntn < pos)) {

error = ERROR;

}else{

pc->id = plans->prt_cntn[pos].id;

pc->part_id = plans->prt_cntn[pos].part_id;

pc->s_t_consumer = plans->prt_cntn[pos].s_t_consumer;

pc->s_t_supplier = plans->prt_cntn[pos].s_t_supplier;

pc->consumer_num = plans->prt_cntn[pos].consumer_num;

pc->relat_fct = plans->prt_cntn[pos].relat_fct;

error = NO_ERROR;

}

return error;

}

 

int pd_get_proc_dscr(plans, pos, pd)

PLAN_DATA *plans;

int pos;

PROC_DSCR *pd;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_proc_dscr < pos)) {

error = ERROR;

}else{

pd->id = plans->proc_dscr[pos].id;

pd->id_s_t = plans->proc_dscr[pos].id_s_t;

pd->id_assoc = plans->proc_dscr[pos].id_assoc;

strcpy(pd->dscr, plans->proc_dscr[pos].dscr);

pd->cap_grp = plans->proc_dscr[pos].cap_grp;

pd->res_cnt = plans->proc_dscr[pos].res_cnt;

for(i = 0; i < plans->proc_dscr[pos].res_cnt; i++){

pd->resrce[i] = plans->proc_dscr[pos].resrce[i];

}

pd->est_time_run = plans->proc_dscr[pos].est_time_run;

pd->est_time_setup = plans->proc_dscr[pos].est_time_setup;

pd->rank = plans->proc_dscr[pos].rank;

pd->usage_capacity = plans->proc_dscr[pos].usage_capacity;

pd->req_capacity = plans->proc_dscr[pos].req_capacity;

error = NO_ERROR;

}

return error;

}

 

int pd_get_super_task(plans, pos, st)

PLAN_DATA *plans;

int pos;

SUPER_TASK *st;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_super_task < pos)) {

error = ERROR;

}else{

st->id = plans->super_task[pos].id;

strcpy(st->dscr, plans->super_task[pos].dscr);

st->plan_level = plans->super_task[pos].plan_level;

st->lot_min = plans->super_task[pos].lot_min;

st->lot_siz = plans->super_task[pos].lot_siz;

st->avg_stock = plans->super_task[pos].avg_stock;

st->scrap_rate = plans->super_task[pos].scrap_rate;

error = NO_ERROR;

}

return error;

}

 

int pd_get_proc_cntn(plans, pos, pc)

PLAN_DATA *plans;

int pos;

PROC_CNTN *pc;

{

static int error;

static int i;

if((pos < 0)||(plans->ptr_proc_cntn < pos)) {

error = ERROR;

}else{

pc->id = plans->proc_cntn[pos].id;

pc->s_t_consumer = plans->proc_cntn[pos].s_t_consumer;

pc->s_t_supplier = plans->proc_cntn[pos].s_t_supplier;

pc->min_lead_time = plans->proc_cntn[pos].min_lead_time;

pc->relat_quant = plans->proc_cntn[pos].relat_quant;

error = NO_ERROR;

}

return error;

}

 

/* ********************************************************

print : process plan

******************************************************** */

 

int pd_print_cap_grp(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static CAP_GRP cg;

 

error = NO_ERROR;

total = plans->ptr_cap_grp + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i = 0; i < max; i++){

pd_get_cap_grp(plans, i, &cg);

fprintf(stream, "%d ", cg.id);

fprintf(stream, "%s ", cg.nam);

fprintf(stream, "%s ", cg.dscr);

fprintf(stream, "%d ", cg.usage_ofr);

fprintf(stream, "%d ", cg.util_ftr);

fprintf(stream, "%d ", cg.logical_x);

fprintf(stream, "%d ", cg.logical_y);

fprintf(stream, "%d ", cg.prod_seg_cnt);

fprintf(stream, "%d ", cg.resrce_cnt);

fprintf(stream, "%d ", cg.plan_horiz);

fprintf(stream, "%d ", cg.cost_setup);

fprintf(stream, "%d ", cg.cost_run);

fprintf(stream, "%d ", cg.scrap_rate);

fprintf(stream, "%d ", cg.avail);

fprintf(stream, "( ");

for(j = 0; j < cg.prod_seg_cnt; j++){

fprintf(stream, "%d ", cg.prod_seg[j]);

}

fprintf(stream, ") ");

fprintf(stream, "( ");

for(j = 0; j < cg.resrce_cnt; j++){

fprintf(stream, "%d ", cg.resrce_prt[j]);

}

fprintf(stream, ") ");

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_resrce(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static RESRCE rs;

 

error = NO_ERROR;

total = plans->ptr_resrce + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i = 0; i < max; i++){

pd_get_resrce(plans, i, &rs);

fprintf(stream, "%d ", rs.id);

fprintf(stream, "%s ", rs.nam);

fprintf(stream, "%s ", rs.dscr);

fprintf(stream, "%d ", rs.class);

fprintf(stream, "%d ", rs.capab);

fprintf(stream, "%d ", rs.skill);

fprintf(stream, "%d ", rs.qualf);

fprintf(stream, "%d ", rs.quant);

fprintf(stream, "%d ", rs.cost_setup);

fprintf(stream, "%d ", rs.cost_run);

fprintf(stream, "%d ", rs.cost_rate);

fprintf(stream, "%d ", rs.setup_time);

fprintf(stream, "%d ", rs.usage_time);

fprintf(stream, "%d ", rs.avail);

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_resrce_rel(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static RESRCE_REL rr;

 

error = NO_ERROR;

total = plans->ptr_resrce_rel + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i = 0; i < max; i++){

pd_get_resrce_rel(plans, i, &rr);

fprintf(stream, "%d ", rr.id);

fprintf(stream, "%d ", rr.cnst_cnt);

fprintf(stream, "( ");

for(j = 0; j < rr.cnst_cnt; j++){

fprintf(stream, "%d ", rr.cnst[j]);

}

fprintf(stream, ") ");

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_prt_dat(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static PRT_DAT pd;

 

error = NO_ERROR;

total = plans->ptr_prt_dat + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i=0; i < max; i++){

pd_get_prt_dat(plans, i, &pd);

fprintf(stream, "%d ", pd.id);

fprintf(stream, "%s ", pd.nam);

fprintf(stream, "%s ", pd.dscr);

fprintf(stream, "%d ", pd.factory);

fprintf(stream, "%d ", pd.db_key);

fprintf(stream, "%d ", pd.modf_idx);

fprintf(stream, "%d ", pd.unit_m);

fprintf(stream, "%d ", pd.s_t_stat);

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_prt_cntn(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static PRT_CNTN pc;

 

error = NO_ERROR;

total = plans->ptr_prt_cntn + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i=0; i < max; i++){

pd_get_prt_cntn(plans, i, &pc);

fprintf(stream, "%d ", pc.id);

fprintf(stream, "%d ", pc.part_id);

fprintf(stream, "%d ", pc.s_t_consumer);

fprintf(stream, "%d ", pc.s_t_supplier);

fprintf(stream, "%d ", pc.consumer_num);

fprintf(stream, "%d ", pc.relat_fct);

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_proc_dscr(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static PROC_DSCR pd;

 

error = NO_ERROR;

total = plans->ptr_proc_dscr + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i = 0; i < max; i++){

pd_get_proc_dscr(plans, i, &pd);

fprintf(stream, "%d ", pd.id);

fprintf(stream, "%d ", pd.id_s_t);

fprintf(stream, "%d ", pd.id_assoc);

fprintf(stream, "%s ", pd.dscr);

fprintf(stream, "%d ", pd.cap_grp);

fprintf(stream, "%d ", pd.res_cnt);

fprintf(stream, "%d ", pd.est_time_run);

fprintf(stream, "%d ", pd.est_time_setup);

fprintf(stream, "%d ", pd.rank);

fprintf(stream, "%d ", pd.usage_capacity);

fprintf(stream, "%d ", pd.req_capacity);

fprintf(stream, "( ");

for(j = 0; j < pd.res_cnt; j++){

fprintf(stream, "%d ", pd.resrce[j]);

}

fprintf(stream, ") ");

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_super_task(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static SUPER_TASK st;

 

error = NO_ERROR;

total = plans->ptr_super_task + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i=0; i < max; i++){

pd_get_super_task(plans, i, &st);

fprintf(stream, "%d ", st.id);

fprintf(stream, "%s ", st.dscr);

fprintf(stream, "%d ", st.plan_level);

fprintf(stream, "%d ", st.lot_min);

fprintf(stream, "%d ", st.lot_siz);

fprintf(stream, "%d ", st.avg_stock);

fprintf(stream, "%d ", st.scrap_rate);

fprintf(stream, "\n");

}

return error;

}

 

int pd_print_proc_cntn(stream, plans, max)

FILE *stream;

PLAN_DATA *plans;

int max;

{

static int error;

static int total;

static int i;

static int j;

static PROC_CNTN pc;

 

error = NO_ERROR;

total = plans->ptr_proc_cntn + 1;

max = max < total ? max : total;

printf("%d out of %d ...\n", max, total);

for(i=0; i < max; i++){

pd_get_proc_cntn(plans, i, &pc);

fprintf(stream, "%d ", pc.id);

fprintf(stream, "%d ", pc.s_t_consumer);

fprintf(stream, "%d ", pc.s_t_supplier);

fprintf(stream, "%d ", pc.min_lead_time);

fprintf(stream, "%d ", pc.relat_quant);

fprintf(stream, "\n");

}

return error;

}

 

/* ********************************************************

writing PDL file

******************************************************** */

 

int fi_write_pdl(filename, plans)

char *filename;

PLAN_DATA *plans;

{

static int error;

 

error = NO_ERROR;

 

write_data(filename, plans);

 

return error;

}

 

/* ********************************************************

reading PDL file

******************************************************** */

 

int fi_read_pdl(filename, plans)

char *filename;

PLAN_DATA *plans;

{

static int error;

static char progname[10];

 

error = NO_ERROR;

 

strcpy(progname, "data");

read_data(progname, filename, plans);

 

return error;

}

 

/* ********************************************************

connecting ORACLE database

******************************************************** */

 

int db_connect()

{

strcpy(uid.arr,"jimmy");

strcpy(pwd.arr,"jimmy");

uid.len=strlen(uid.arr);

pwd.len=strlen(pwd.arr);

 

printf("Trying to connect to jimmy/jimmy ...");

EXEC SQL WHENEVER SQLERROR GOTO errconnect;

EXEC SQL CONNECT :uid IDENTIFIED BY :pwd;

return(NO_ERROR);

errconnect:

return(ERROR);

}

 

int db_release()

{

printf("Trying to release from ORACLE ...");

EXEC SQL WHENEVER SQLERROR GOTO errrelease;

EXEC SQL COMMIT WORK RELEASE;

return(NO_ERROR);

errrelease:

return(ERROR);

}
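
 

 

/* ********************************************************

example driver (illustrative sketch)

This block is not part of the delivered module. It shows the

intended calling sequence of the routines in this appendix:

connect to ORACLE, load plan data from a PDL file, recreate

and fill the tables, read them back, and release the

connection. The function name example_driver and the file

name "plans.pdl" are invented; PLAN_DATA comes from data.h,

and db_creat(), db_write() and db_read() are defined later

in this appendix.

******************************************************** */

 

int example_driver()

{

static PLAN_DATA plans; /* static: the structure is large */

 

if(db_connect() == ERROR) return(ERROR);

pd_init(&plans); /* empty the in-memory plan store */

if(fi_read_pdl("plans.pdl", &plans) == NO_ERROR){

if(db_creat() == NO_ERROR){ /* drop and recreate the tables */

db_write(&plans); /* copy the plans into ORACLE */

}

}

pd_init(&plans); /* clear before reading back */

db_read(&plans); /* reload the plans from the tables */

pd_print_cap_grp(stdout, &plans, 10); /* show up to 10 capability groups */

db_release(); /* commit and disconnect */

return(NO_ERROR);

}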

 

/* ********************************************************

creating database tables

******************************************************** */

 

int db_creat()

{

if((db_creat_cap_grp() == NO_ERROR) &&

(db_creat_resrce() == NO_ERROR) &&

(db_creat_resrce_rel() == NO_ERROR) &&

(db_creat_prt_dat() == NO_ERROR) &&

(db_creat_prt_cntn() == NO_ERROR) &&

(db_creat_proc_dscr() == NO_ERROR) &&

(db_creat_super_task() == NO_ERROR) &&

(db_creat_proc_cntn() == NO_ERROR)) {

printf("Database action commited ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL COMMIT;

return(NO_ERROR);

}

err_creat:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL ROLLBACK;

return(ERROR);

}

 

int db_creat_cap_grp()

{

printf("Deleting table CAP_GRP ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE CAP_GRP;

printf("Creating table CAP_GRP ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE CAP_GRP(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

USAGE_OFR NUMBER(3) ,

UTIL_FTR NUMBER(3) ,

LOGICAL_X NUMBER(3) ,

LOGICAL_Y NUMBER(3) ,

PROD_SEG_01 NUMBER(3) ,

PROD_SEG_02 NUMBER(3) ,

PROD_SEG_03 NUMBER(3) ,

PROD_SEG_04 NUMBER(3) ,

PROD_SEG_05 NUMBER(3) ,

PROD_SEG_06 NUMBER(3) ,

PROD_SEG_07 NUMBER(3) ,

PROD_SEG_08 NUMBER(3) ,

PROD_SEG_09 NUMBER(3) ,

PROD_SEG_10 NUMBER(3) ,

PROD_SEG_11 NUMBER(3) ,

PROD_SEG_12 NUMBER(3) ,

PROD_SEG_13 NUMBER(3) ,

PROD_SEG_14 NUMBER(3) ,

PROD_SEG_15 NUMBER(3) ,

PROD_SEG_16 NUMBER(3) ,

PROD_SEG_17 NUMBER(3) ,

PROD_SEG_18 NUMBER(3) ,

PROD_SEG_19 NUMBER(3) ,

PROD_SEG_20 NUMBER(3) ,

PROD_SEG_21 NUMBER(3) ,

PROD_SEG_22 NUMBER(3) ,

PROD_SEG_23 NUMBER(3) ,

PROD_SEG_24 NUMBER(3) ,

PROD_SEG_25 NUMBER(3) ,

PROD_SEG_CNT NUMBER(3) ,

RESRCE_PRT_01 NUMBER(3) ,

RESRCE_PRT_02 NUMBER(3) ,

RESRCE_PRT_03 NUMBER(3) ,

RESRCE_PRT_04 NUMBER(3) ,

RESRCE_PRT_05 NUMBER(3) ,

RESRCE_PRT_06 NUMBER(3) ,

RESRCE_PRT_07 NUMBER(3) ,

RESRCE_PRT_08 NUMBER(3) ,

RESRCE_PRT_09 NUMBER(3) ,

RESRCE_PRT_10 NUMBER(3) ,

RESRCE_PRT_11 NUMBER(3) ,

RESRCE_PRT_12 NUMBER(3) ,

RESRCE_PRT_13 NUMBER(3) ,

RESRCE_PRT_14 NUMBER(3) ,

RESRCE_PRT_15 NUMBER(3) ,

RESRCE_PRT_16 NUMBER(3) ,

RESRCE_PRT_17 NUMBER(3) ,

RESRCE_PRT_18 NUMBER(3) ,

RESRCE_PRT_19 NUMBER(3) ,

RESRCE_PRT_20 NUMBER(3) ,

RESRCE_PRT_21 NUMBER(3) ,

RESRCE_PRT_22 NUMBER(3) ,

RESRCE_PRT_23 NUMBER(3) ,

RESRCE_PRT_24 NUMBER(3) ,

RESRCE_PRT_25 NUMBER(3) ,

RESRCE_CNT NUMBER(3) ,

PLAN_HORIZ NUMBER(6) ,

COST_SETUP NUMBER(6) ,

COST_RUN NUMBER(6) ,

SCRAP_RATE NUMBER(6) ,

AVAIL NUMBER(4) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_resrce()

{

printf("Deleting table RESRCE ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE RESRCE;

printf("Creating table RESRCE ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE RESRCE(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

CLASS NUMBER(6) ,

CAPAB NUMBER(6) ,

SKILL NUMBER(6) ,

QUALF NUMBER(6) ,

QUANT NUMBER(6) ,

COST_SETUP NUMBER(6) ,

COST_RUN NUMBER(6) ,

COST_RATE NUMBER(6) ,

SETUP_TIME NUMBER(6) ,

USAGE_TIME NUMBER(6) ,

AVAIL NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_resrce_rel()

{

printf("Deleting table RESRCE_REL ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE RESRCE_REL;

printf("Creating table RESRCE_REL ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE RESRCE_REL(

ID NUMBER(6) ,

CNST_01 NUMBER(3) ,

CNST_02 NUMBER(3) ,

CNST_03 NUMBER(3) ,

CNST_04 NUMBER(3) ,

CNST_05 NUMBER(3) ,

CNST_06 NUMBER(3) ,

CNST_07 NUMBER(3) ,

CNST_08 NUMBER(3) ,

CNST_09 NUMBER(3) ,

CNST_10 NUMBER(3) ,

CNST_CNT NUMBER(3) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_prt_dat()

{

printf("Deleting table PRT_DAT ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE PRT_DAT;

printf("Creating table PRT_DAT ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PRT_DAT(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

FACTORY NUMBER(6) ,

DB_KEY NUMBER(6) ,

MODF_IDX NUMBER(6) ,

UNIT_M NUMBER(6) ,

S_T_STAT NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_prt_cntn()

{

printf("Deleting table PRT_CNTN ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE PRT_CNTN;

printf("Creating table PRT_CNTN ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PRT_CNTN(

ID NUMBER(6) ,

PART_ID NUMBER(6) ,

S_T_CONSUMER NUMBER(6) ,

S_T_SUPPLIER NUMBER(6) ,

CONSUMER_NUM NUMBER(6) ,

RELAT_FCT NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_proc_dscr()

{

printf("Deleting table PROC_DSCR ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE PROC_DSCR;

printf("Creating table PROC_DSCR ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PROC_DSCR(

ID NUMBER(6) ,

ID_S_T NUMBER(6) ,

ID_ASSOC NUMBER(6) ,

DSCR CHAR(100) ,

CAP_GRP NUMBER(6) ,

RESRCE_01 NUMBER(3) ,

RESRCE_02 NUMBER(3) ,

RESRCE_03 NUMBER(3) ,

RESRCE_04 NUMBER(3) ,

RESRCE_05 NUMBER(3) ,

RESRCE_06 NUMBER(3) ,

RESRCE_07 NUMBER(3) ,

RESRCE_08 NUMBER(3) ,

RESRCE_09 NUMBER(3) ,

RESRCE_10 NUMBER(3) ,

RES_CNT NUMBER(3) ,

EST_TIME_RUN NUMBER(3) ,

EST_TIME_SETUP NUMBER(3) ,

RANK NUMBER(3) ,

USAGE_CAPACITY NUMBER(6) ,

REQ_CAPACITY NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_super_task()

{

printf("Deleting table SUPER_TASK ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE SUPER_TASK;

printf("Creating table SUPER_TASK ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE SUPER_TASK(

ID NUMBER(6) ,

DSCR CHAR(100) ,

PLAN_LEVEL NUMBER(6) ,

LOT_MIN NUMBER(6) ,

LOT_SIZ NUMBER(6) ,

AVG_STOCK NUMBER(6) ,

SCRAP_RATE NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_proc_cntn()

{

printf("Deleting table PROC_CNTN ...");

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL DROP TABLE PROC_CNTN;

printf("Creating table PROC_CNTN ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PROC_CNTN(

ID NUMBER(6) ,

S_T_CONSUMER NUMBER(6) ,

S_T_SUPPLIER NUMBER(6) ,

MIN_LEAD_TIME NUMBER(6) ,

RELAT_QUANT NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

/* ********************************************************

writing database tables

******************************************************** */

 

int db_write(plans)

PLAN_DATA *plans;

{

if((db_write_cap_grp(plans) == NO_ERROR) &&

(db_write_resrce(plans) == NO_ERROR) &&

(db_write_resrce_rel(plans) == NO_ERROR) &&

(db_write_prt_dat(plans) == NO_ERROR) &&

(db_write_prt_cntn(plans) == NO_ERROR) &&

(db_write_proc_dscr(plans) == NO_ERROR) &&

(db_write_super_task(plans) == NO_ERROR) &&

(db_write_proc_cntn(plans) == NO_ERROR)) {

printf("Database action commited ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

EXEC SQL COMMIT;

return(NO_ERROR);

}

err_write:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL ROLLBACK;

return(ERROR);

}

 

int db_write_cap_grp(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table cap_grp ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_cap_grp; i++){

printf(" %d",i);

id = plans->cap_grp[i].id; /* id */

for(j = 0; plans->cap_grp[i].nam[j] != 0; j++){

nam.arr[j] = plans->cap_grp[i].nam[j];

}

nam.arr[j] = 0; /* nam */

nam.len = j;

for(j = 0; plans->cap_grp[i].dscr[j] != 0; j++){

dscr.arr[j] = plans->cap_grp[i].dscr[j];

}

dscr.arr[j] = 0; /* dscr */

dscr.len = j;

usage_ofr = plans->cap_grp[i].usage_ofr; /* usage_ofr */

util_ftr = plans->cap_grp[i].util_ftr; /* util_ftr */

logical_x = plans->cap_grp[i].logical_x; /* logical_x */

logical_y = plans->cap_grp[i].logical_y; /* logical_y */

/* prod_seg */

prod_seg_01 = 0 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[0] : 0;

prod_seg_02 = 1 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[1] : 0;

prod_seg_03 = 2 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[2] : 0;

prod_seg_04 = 3 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[3] : 0;

prod_seg_05 = 4 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[4] : 0;

prod_seg_06 = 5 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[5] : 0;

prod_seg_07 = 6 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[6] : 0;

prod_seg_08 = 7 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[7] : 0;

prod_seg_09 = 8 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[8] : 0;

prod_seg_10 = 9 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[9] : 0;

prod_seg_11 = 10 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[10] : 0;

prod_seg_12 = 11 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[11] : 0;

prod_seg_13 = 12 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[12] : 0;

prod_seg_14 = 13 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[13] : 0;

prod_seg_15 = 14 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[14] : 0;

prod_seg_16 = 15 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[15] : 0;

prod_seg_17 = 16 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[16] : 0;

prod_seg_18 = 17 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[17] : 0;

prod_seg_19 = 18 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[18] : 0;

prod_seg_20 = 19 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[19] : 0;

prod_seg_21 = 20 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[20] : 0;

prod_seg_22 = 21 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[21] : 0;

prod_seg_23 = 22 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[22] : 0;

prod_seg_24 = 23 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[23] : 0;

prod_seg_25 = 24 <= plans->cap_grp[i].prod_seg_cnt

? plans->cap_grp[i].prod_seg[24] : 0;

/* prod_seg_cnt */

prod_seg_cnt = plans->cap_grp[i].prod_seg_cnt;

/* resrce_prt */

resrce_prt_01 = 0 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[0] : 0;

resrce_prt_02 = 1 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[1] : 0;

resrce_prt_03 = 2 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[2] : 0;

resrce_prt_04 = 3 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[3] : 0;

resrce_prt_05 = 4 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[4] : 0;

resrce_prt_06 = 5 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[5] : 0;

resrce_prt_07 = 6 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[6] : 0;

resrce_prt_08 = 7 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[7] : 0;

resrce_prt_09 = 8 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[8] : 0;

resrce_prt_10 = 9 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[9] : 0;

resrce_prt_11 = 10 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[10] : 0;

resrce_prt_12 = 11 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[11] : 0;

resrce_prt_13 = 12 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[12] : 0;

resrce_prt_14 = 13 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[13] : 0;

resrce_prt_15 = 14 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[14] : 0;

resrce_prt_16 = 15 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[15] : 0;

resrce_prt_17 = 16 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[16] : 0;

resrce_prt_18 = 17 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[17] : 0;

resrce_prt_19 = 18 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[18] : 0;

resrce_prt_20 = 19 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[19] : 0;

resrce_prt_21 = 20 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[20] : 0;

resrce_prt_22 = 21 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[21] : 0;

resrce_prt_23 = 22 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[22] : 0;

resrce_prt_24 = 23 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[23] : 0;

resrce_prt_25 = 24 <= plans->cap_grp[i].resrce_cnt

? plans->cap_grp[i].resrce_prt[24] : 0;

resrce_cnt = plans->cap_grp[i].resrce_cnt; /* resrce_cnt */

plan_horiz = plans->cap_grp[i].plan_horiz; /* plan_horiz */

cost_setup = plans->cap_grp[i].cost_setup; /* cost_setup */

cost_run = plans->cap_grp[i].cost_run; /* cost_run */

scrap_rate = plans->cap_grp[i].scrap_rate; /* scrap_rate */

avail = plans->cap_grp[i].avail; /* avail */

EXEC SQL INSERT INTO CAP_GRP

(

ID,NAM,DSCR,USAGE_OFR,UTIL_FTR,LOGICAL_X,LOGICAL_Y,

PROD_SEG_01,PROD_SEG_02,PROD_SEG_03,PROD_SEG_04,

PROD_SEG_05,PROD_SEG_06,PROD_SEG_07,PROD_SEG_08,

PROD_SEG_09,PROD_SEG_10,PROD_SEG_11,PROD_SEG_12,

PROD_SEG_13,PROD_SEG_14,PROD_SEG_15,PROD_SEG_16,

PROD_SEG_17,PROD_SEG_18,PROD_SEG_19,PROD_SEG_20,

PROD_SEG_21,PROD_SEG_22,PROD_SEG_23,PROD_SEG_24,

PROD_SEG_25,PROD_SEG_CNT,

RESRCE_PRT_01,RESRCE_PRT_02,RESRCE_PRT_03,RESRCE_PRT_04,

RESRCE_PRT_05,RESRCE_PRT_06,RESRCE_PRT_07,RESRCE_PRT_08,

RESRCE_PRT_09,RESRCE_PRT_10,RESRCE_PRT_11,RESRCE_PRT_12,

RESRCE_PRT_13,RESRCE_PRT_14,RESRCE_PRT_15,RESRCE_PRT_16,

RESRCE_PRT_17,RESRCE_PRT_18,RESRCE_PRT_19,RESRCE_PRT_20,

RESRCE_PRT_21,RESRCE_PRT_22,RESRCE_PRT_23,RESRCE_PRT_24,

RESRCE_PRT_25,RESRCE_CNT,

PLAN_HORIZ,COST_SETUP,COST_RUN,SCRAP_RATE,AVAIL

)

VALUES

(

:id,:nam,:dscr,:usage_ofr,:util_ftr,:logical_x,:logical_y,

:prod_seg_01,:prod_seg_02,:prod_seg_03,:prod_seg_04,

:prod_seg_05,:prod_seg_06,:prod_seg_07,:prod_seg_08,

:prod_seg_09,:prod_seg_10,:prod_seg_11,:prod_seg_12,

:prod_seg_13,:prod_seg_14,:prod_seg_15,:prod_seg_16,

:prod_seg_17,:prod_seg_18,:prod_seg_19,:prod_seg_20,

:prod_seg_21,:prod_seg_22,:prod_seg_23,:prod_seg_24,

:prod_seg_25,:prod_seg_cnt,

:resrce_prt_01,:resrce_prt_02,:resrce_prt_03,:resrce_prt_04,

:resrce_prt_05,:resrce_prt_06,:resrce_prt_07,:resrce_prt_08,

:resrce_prt_09,:resrce_prt_10,:resrce_prt_11,:resrce_prt_12,

:resrce_prt_13,:resrce_prt_14,:resrce_prt_15,:resrce_prt_16,

:resrce_prt_17,:resrce_prt_18,:resrce_prt_19,:resrce_prt_20,

:resrce_prt_21,:resrce_prt_22,:resrce_prt_23,:resrce_prt_24,

:resrce_prt_25,:resrce_cnt,

:plan_horiz,:cost_setup,:cost_run,:scrap_rate,:avail

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_resrce(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table resrce ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_resrce; i++){

printf(" %d",i);

id = plans->resrce[i].id; /* id */

for(j = 0; plans->resrce[i].nam[j] != 0; j++){

nam.arr[j] = plans->resrce[i].nam[j];

}

nam.arr[j] = 0; /* nam */

nam.len = j;

for(j = 0; plans->resrce[i].dscr[j] != 0; j++){

dscr.arr[j] = plans->resrce[i].dscr[j];

}

dscr.arr[j] = 0; /* dscr */

dscr.len = j;

class = plans->resrce[i].class; /* class */

capab = plans->resrce[i].capab; /* capab */

skill = plans->resrce[i].skill; /* skill */

qualf = plans->resrce[i].qualf; /* qualf */

quant = plans->resrce[i].quant; /* quant */

cost_setup = plans->resrce[i].cost_setup; /* cost_setup */

cost_run = plans->resrce[i].cost_run; /* cost_run */

cost_rate = plans->resrce[i].cost_rate; /* cost_rate */

setup_time = plans->resrce[i].setup_time; /* setup_time */

usage_time = plans->resrce[i].usage_time; /* usage_time */

avail = plans->resrce[i].avail; /* avail */

EXEC SQL INSERT INTO RESRCE

(

ID,NAM,DSCR,

CLASS,CAPAB,SKILL,QUALF,QUANT,

COST_SETUP,COST_RUN,COST_RATE,

SETUP_TIME,USAGE_TIME,AVAIL

)

VALUES

(

:id,:nam,:dscr,

:class,:capab,:skill,:qualf,:quant,

:cost_setup,:cost_run,:cost_rate,

:setup_time,:usage_time,:avail

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_resrce_rel(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table resrce_rel ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_resrce_rel; i++){

printf(" %d",i);

id = plans->resrce_rel[i].id; /* id */

/* cnst */

cnst_01 = 0 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[0] : 0;

cnst_02 = 1 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[1] : 0;

cnst_03 = 2 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[2] : 0;

cnst_04 = 3 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[3] : 0;

cnst_05 = 4 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[4] : 0;

cnst_06 = 5 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[5] : 0;

cnst_07 = 6 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[6] : 0;

cnst_08 = 7 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[7] : 0;

cnst_09 = 8 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[8] : 0;

cnst_10 = 9 <= plans->resrce_rel[i].cnst_cnt

? plans->resrce_rel[i].cnst[9] : 0;

/* cnst_cnt */

cnst_cnt = plans->resrce_rel[i].cnst_cnt;

EXEC SQL INSERT INTO RESRCE_REL

(

ID,

CNST_01,CNST_02,CNST_03,CNST_04,CNST_05,

CNST_06,CNST_07,CNST_08,CNST_09,CNST_10, CNST_CNT

)

VALUES

(

:id,

:cnst_01,:cnst_02,:cnst_03,:cnst_04,:cnst_05,

:cnst_06,:cnst_07,:cnst_08,:cnst_09,:cnst_10,:cnst_cnt

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_prt_dat(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table prt_dat ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_prt_dat; i++){

printf(" %d",i);

id = plans->prt_dat[i].id; /* id */

for(j = 0; plans->prt_dat[i].nam[j] != 0; j++){

nam.arr[j] = plans->prt_dat[i].nam[j];

}

nam.arr[j] = 0; /* nam */

nam.len = j;

for(j = 0; plans->prt_dat[i].dscr[j] != 0; j++){

dscr.arr[j] = plans->prt_dat[i].dscr[j];

}

dscr.arr[j] = 0; /* dscr */

dscr.len = j;

factory = plans->prt_dat[i].factory; /* factory */

db_key = plans->prt_dat[i].db_key; /* db_key */

modf_idx = plans->prt_dat[i].modf_idx; /* modf_idx */

unit_m = plans->prt_dat[i].unit_m; /* unit_m */

s_t_stat = plans->prt_dat[i].s_t_stat; /* s_t_stat */

EXEC SQL INSERT INTO PRT_DAT

(

ID,NAM,DSCR,FACTORY,DB_KEY,MODF_IDX,UNIT_M,S_T_STAT

)

VALUES

(

:id,:nam,:dscr,:factory,:db_key,:modf_idx,:unit_m,:s_t_stat

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_prt_cntn(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table prt_cntn ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_prt_cntn; i++){

printf(" %d",i);

id = plans->prt_cntn[i].id; /* id */

part_id = plans->prt_cntn[i].part_id; /* part_id */

/* s_t_consumer */

s_t_consumer = plans->prt_cntn[i].s_t_consumer;

/* s_t_supplier */

s_t_supplier = plans->prt_cntn[i].s_t_supplier;

/* consumer_num */

consumer_num = plans->prt_cntn[i].consumer_num;

relat_fct = plans->prt_cntn[i].relat_fct; /* relat_fct */

EXEC SQL INSERT INTO PRT_CNTN

(

ID,PART_ID,S_T_CONSUMER,S_T_SUPPLIER,CONSUMER_NUM,RELAT_FCT

)

VALUES

(

:id,:part_id,:s_t_consumer,:s_t_supplier,:consumer_num,:relat_fct

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_proc_dscr(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table proc_dscr ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_proc_dscr; i++){

printf(" %d",i);

id = plans->proc_dscr[i].id; /* id */

id_s_t = plans->proc_dscr[i].id_s_t; /* id_s_t */

id_assoc = plans->proc_dscr[i].id_assoc; /* id_assoc */

for(j = 0; plans->proc_dscr[i].dscr[j] != 0; j++){

dscr.arr[j] = plans->proc_dscr[i].dscr[j];

}

dscr.arr[j] = 0; /* dscr */

dscr.len = j;

cap_grp = plans->proc_dscr[i].cap_grp; /* cap_grp */

/* resrce */

resrce_01 = 0 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[0] : 0;

resrce_02 = 1 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[1] : 0;

resrce_03 = 2 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[2] : 0;

resrce_04 = 3 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[3] : 0;

resrce_05 = 4 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[4] : 0;

resrce_06 = 5 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[5] : 0;

resrce_07 = 6 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[6] : 0;

resrce_08 = 7 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[7] : 0;

resrce_09 = 8 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[8] : 0;

resrce_10 = 9 <= plans->proc_dscr[i].res_cnt

? plans->proc_dscr[i].resrce[9] : 0;

res_cnt = plans->proc_dscr[i].res_cnt; /* res_cnt */

/* est_time_run */

est_time_run = plans->proc_dscr[i].est_time_run;

/* est_time_setup */

est_time_setup = plans->proc_dscr[i].est_time_setup;

rank = plans->proc_dscr[i].rank; /* rank */

/* usage_capacity */

usage_capacity = plans->proc_dscr[i].usage_capacity;

/* req_capacity */

req_capacity = plans->proc_dscr[i].req_capacity;

EXEC SQL INSERT INTO PROC_DSCR

(

ID,ID_S_T,ID_ASSOC,DSCR,CAP_GRP,

RESRCE_01,RESRCE_02,RESRCE_03,RESRCE_04,RESRCE_05,

RESRCE_06,RESRCE_07,RESRCE_08,RESRCE_09,RESRCE_10,

RES_CNT,EST_TIME_RUN,EST_TIME_SETUP,

RANK,USAGE_CAPACITY,REQ_CAPACITY

)

VALUES

(

:id,:id_s_t,:id_assoc,:dscr,:cap_grp,

:resrce_01,:resrce_02,:resrce_03,:resrce_04,:resrce_05,

:resrce_06,:resrce_07,:resrce_08,:resrce_09,:resrce_10,

:res_cnt,:est_time_run,:est_time_setup,

:rank,:usage_capacity,:req_capacity

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_super_task(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table super_task ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_super_task; i++){

printf(" %d",i);

id = plans->super_task[i].id; /* id */

for(j = 0; plans->super_task[i].dscr[j] != 0; j++){

dscr.arr[j] = plans->super_task[i].dscr[j];

}

dscr.arr[j] = 0; /* dscr */

dscr.len = j;

/* plan_level */

plan_level = plans->super_task[i].plan_level;

lot_min = plans->super_task[i].lot_min; /* lot_min */

lot_siz = plans->super_task[i].lot_siz; /* lot_siz */

/* avg_stock */

avg_stock = plans->super_task[i].avg_stock;

/* scrap_rate */

scrap_rate = plans->super_task[i].scrap_rate;

EXEC SQL INSERT INTO SUPER_TASK

(

ID,DSCR,PLAN_LEVEL,LOT_MIN,LOT_SIZ,AVG_STOCK,SCRAP_RATE

)

VALUES

(

:id,:dscr,:plan_level,:lot_min,:lot_siz,:avg_stock,:scrap_rate

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

int db_write_proc_cntn(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Inserting into table proc_cntn ...");

EXEC SQL WHENEVER SQLERROR GOTO err_write;

for(i = 0; i <= plans->ptr_proc_cntn; i++){

printf(" %d",i);

id = plans->proc_cntn[i].id; /* id */

/* s_t_consumer */

s_t_consumer = plans->proc_cntn[i].s_t_consumer;

/* s_t_supplier */

s_t_supplier = plans->proc_cntn[i].s_t_supplier;

/* min_lead_time*/

min_lead_time = plans->proc_cntn[i].min_lead_time;

/* relat_quant */

relat_quant = plans->proc_cntn[i].relat_quant;

EXEC SQL INSERT INTO PROC_CNTN

(

ID,S_T_CONSUMER,S_T_SUPPLIER,MIN_LEAD_TIME,RELAT_QUANT

)

VALUES

(

:id,:s_t_consumer,:s_t_supplier,:min_lead_time,:relat_quant

);

}

printf(" successful\n");

return(NO_ERROR);

err_write:

return(ERROR);

}

 

/* ********************************************************

reading database tables

******************************************************** */

 

int db_read(plans)

PLAN_DATA *plans;

{

int p_cap_grp;

int p_resrce;

int p_resrce_rel;

int p_prt_dat;

int p_prt_cntn;

int p_proc_dscr;

int p_super_task;

int p_proc_cntn;

 

p_cap_grp = plans->ptr_cap_grp;

p_resrce = plans->ptr_resrce;

p_resrce_rel = plans->ptr_resrce_rel;

p_prt_dat = plans->ptr_prt_dat;

p_prt_cntn = plans->ptr_prt_cntn;

p_proc_dscr = plans->ptr_proc_dscr;

p_super_task = plans->ptr_super_task;

p_proc_cntn = plans->ptr_proc_cntn;

if((db_read_cap_grp(plans) == NO_ERROR) &&

(db_read_resrce(plans) == NO_ERROR) &&

(db_read_resrce_rel(plans) == NO_ERROR) &&

(db_read_prt_dat(plans) == NO_ERROR) &&

(db_read_prt_cntn(plans) == NO_ERROR) &&

(db_read_proc_dscr(plans) == NO_ERROR) &&

(db_read_super_task(plans) == NO_ERROR) &&

(db_read_proc_cntn(plans) == NO_ERROR)) {

printf("Database action ...");

return(NO_ERROR);

}else{

plans->ptr_cap_grp = p_cap_grp;

plans->ptr_resrce = p_resrce;

plans->ptr_resrce_rel = p_resrce_rel;

plans->ptr_prt_dat = p_prt_dat;

plans->ptr_prt_cntn = p_prt_cntn;

plans->ptr_proc_dscr = p_proc_dscr;

plans->ptr_super_task = p_super_task;

plans->ptr_proc_cntn = p_proc_cntn;

return(ERROR);

}

}

 

int db_read_cap_grp(plans)

PLAN_DATA *plans;

{

int i;

int j;

int k;

printf("Extracting from table cap_grp ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C1 CURSOR FOR

SELECT * FROM CAP_GRP;

EXEC SQL OPEN C1;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C1 INTO

:id,:nam,:dscr,:usage_ofr,:util_ftr,:logical_x,:logical_y,

:prod_seg_01,:prod_seg_02,:prod_seg_03,:prod_seg_04,

:prod_seg_05,:prod_seg_06,:prod_seg_07,:prod_seg_08,

:prod_seg_09,:prod_seg_10,:prod_seg_11,:prod_seg_12,

:prod_seg_13,:prod_seg_14,:prod_seg_15,:prod_seg_16,

:prod_seg_17,:prod_seg_18,:prod_seg_19,:prod_seg_20,

:prod_seg_21,:prod_seg_22,:prod_seg_23,:prod_seg_24,

:prod_seg_25,:prod_seg_cnt,

:resrce_prt_01,:resrce_prt_02,:resrce_prt_03,:resrce_prt_04,

:resrce_prt_05,:resrce_prt_06,:resrce_prt_07,:resrce_prt_08,

:resrce_prt_09,:resrce_prt_10,:resrce_prt_11,:resrce_prt_12,

:resrce_prt_13,:resrce_prt_14,:resrce_prt_15,:resrce_prt_16,

:resrce_prt_17,:resrce_prt_18,:resrce_prt_19,:resrce_prt_20,

:resrce_prt_21,:resrce_prt_22,:resrce_prt_23,:resrce_prt_24,

:resrce_prt_25,:resrce_cnt,

:plan_horiz,:cost_setup,:cost_run,:scrap_rate,:avail;

printf(" %d", i);

j = ++plans->ptr_cap_grp;

plans->cap_grp[j].id = id;

for (k = 0; k < nam.len; k++) {

plans->cap_grp[j].nam[k] = nam.arr[k];

}

plans->cap_grp[j].nam[k] = 0;

for (k = 0; k < dscr.len; k++) {

plans->cap_grp[j].dscr[k] = dscr.arr[k];

}

plans->cap_grp[j].dscr[k] = 0;

plans->cap_grp[j].usage_ofr = usage_ofr;

plans->cap_grp[j].util_ftr = util_ftr;

plans->cap_grp[j].logical_x = logical_x;

plans->cap_grp[j].logical_y = logical_y;

plans->cap_grp[j].prod_seg[0] = prod_seg_01;

plans->cap_grp[j].prod_seg[1] = prod_seg_02;

plans->cap_grp[j].prod_seg[2] = prod_seg_03;

plans->cap_grp[j].prod_seg[3] = prod_seg_04;

plans->cap_grp[j].prod_seg[4] = prod_seg_05;

plans->cap_grp[j].prod_seg[5] = prod_seg_06;

plans->cap_grp[j].prod_seg[6] = prod_seg_07;

plans->cap_grp[j].prod_seg[7] = prod_seg_08;

plans->cap_grp[j].prod_seg[8] = prod_seg_09;

plans->cap_grp[j].prod_seg[9] = prod_seg_10;

plans->cap_grp[j].prod_seg[10] = prod_seg_11;

plans->cap_grp[j].prod_seg[11] = prod_seg_12;

plans->cap_grp[j].prod_seg[12] = prod_seg_13;

plans->cap_grp[j].prod_seg[13] = prod_seg_14;

plans->cap_grp[j].prod_seg[14] = prod_seg_15;

plans->cap_grp[j].prod_seg[15] = prod_seg_16;

plans->cap_grp[j].prod_seg[16] = prod_seg_17;

plans->cap_grp[j].prod_seg[17] = prod_seg_18;

plans->cap_grp[j].prod_seg[18] = prod_seg_19;

plans->cap_grp[j].prod_seg[19] = prod_seg_20;

plans->cap_grp[j].prod_seg[20] = prod_seg_21;

plans->cap_grp[j].prod_seg[21] = prod_seg_22;

plans->cap_grp[j].prod_seg[22] = prod_seg_23;

plans->cap_grp[j].prod_seg[23] = prod_seg_24;

plans->cap_grp[j].prod_seg[24] = prod_seg_25;

plans->cap_grp[j].prod_seg_cnt = prod_seg_cnt;

plans->cap_grp[j].resrce_prt[0] = resrce_prt_01;

plans->cap_grp[j].resrce_prt[1] = resrce_prt_02;

plans->cap_grp[j].resrce_prt[2] = resrce_prt_03;

plans->cap_grp[j].resrce_prt[3] = resrce_prt_04;

plans->cap_grp[j].resrce_prt[4] = resrce_prt_05;

plans->cap_grp[j].resrce_prt[5] = resrce_prt_06;

plans->cap_grp[j].resrce_prt[6] = resrce_prt_07;

plans->cap_grp[j].resrce_prt[7] = resrce_prt_08;

plans->cap_grp[j].resrce_prt[8] = resrce_prt_09;

plans->cap_grp[j].resrce_prt[9] = resrce_prt_10;

plans->cap_grp[j].resrce_prt[10]= resrce_prt_11;

plans->cap_grp[j].resrce_prt[11]= resrce_prt_12;

plans->cap_grp[j].resrce_prt[12]= resrce_prt_13;

plans->cap_grp[j].resrce_prt[13]= resrce_prt_14;

plans->cap_grp[j].resrce_prt[14]= resrce_prt_15;

plans->cap_grp[j].resrce_prt[15]= resrce_prt_16;

plans->cap_grp[j].resrce_prt[16]= resrce_prt_17;

plans->cap_grp[j].resrce_prt[17]= resrce_prt_18;

plans->cap_grp[j].resrce_prt[18]= resrce_prt_19;

plans->cap_grp[j].resrce_prt[19]= resrce_prt_20;

plans->cap_grp[j].resrce_prt[20]= resrce_prt_21;

plans->cap_grp[j].resrce_prt[21]= resrce_prt_22;

plans->cap_grp[j].resrce_prt[22]= resrce_prt_23;

plans->cap_grp[j].resrce_prt[23]= resrce_prt_24;

plans->cap_grp[j].resrce_prt[24]= resrce_prt_25;

plans->cap_grp[j].resrce_cnt = resrce_cnt;

plans->cap_grp[j].plan_horiz = plan_horiz;

plans->cap_grp[j].cost_setup = cost_setup;

plans->cap_grp[j].cost_run = cost_run;

plans->cap_grp[j].scrap_rate = scrap_rate;

plans->cap_grp[j].avail = avail;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C1;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C1;

return(ERROR);

}

 

int db_read_resrce(plans)

PLAN_DATA *plans;

{

int i;

int j;

int k;

 

printf("Extracting from table resrce ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C2 CURSOR FOR

SELECT * FROM RESRCE;

EXEC SQL OPEN C2;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C2 INTO

:id,:nam,:dscr,

:class,:capab,:skill,:qualf,:quant,

:cost_setup,:cost_run,:cost_rate,

:setup_time,:usage_time,:avail;

printf(" %d", i);

j = ++plans->ptr_resrce;

plans->resrce[j].id = id;

for (k = 0; k < nam.len; k++) {

plans->resrce[j].nam[k] = nam.arr[k];

}

plans->resrce[j].nam[k] = 0;

for (k = 0; k < dscr.len; k++) {

plans->resrce[j].dscr[k] = dscr.arr[k];

}

plans->resrce[j].dscr[k] = 0;

plans->resrce[j].class = class;

plans->resrce[j].capab = capab;

plans->resrce[j].skill = skill;

plans->resrce[j].qualf = qualf;

plans->resrce[j].quant = quant;

plans->resrce[j].cost_setup = cost_setup;

plans->resrce[j].cost_run = cost_run;

plans->resrce[j].cost_rate = cost_rate;

plans->resrce[j].setup_time = setup_time;

plans->resrce[j].usage_time = usage_time;

plans->resrce[j].avail = avail;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C2;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C2;

return(ERROR);

}

 

int db_read_resrce_rel(plans)

PLAN_DATA *plans;

{

int i;

int j;

 

printf("Extracting from table resrce_rel ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C3 CURSOR FOR

SELECT * FROM RESRCE_REL;

EXEC SQL OPEN C3;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C3 INTO

:id,

:cnst_01,:cnst_02,:cnst_03,:cnst_04,:cnst_05,

:cnst_06,:cnst_07,:cnst_08,:cnst_09,:cnst_10,:cnst_cnt;

printf(" %d", i);

j = ++plans->ptr_resrce_rel;

plans->resrce_rel[j].id = id;

plans->resrce_rel[j].cnst[0] = cnst_01;

plans->resrce_rel[j].cnst[1] = cnst_02;

plans->resrce_rel[j].cnst[2] = cnst_03;

plans->resrce_rel[j].cnst[3] = cnst_04;

plans->resrce_rel[j].cnst[4] = cnst_05;

plans->resrce_rel[j].cnst[5] = cnst_06;

plans->resrce_rel[j].cnst[6] = cnst_07;

plans->resrce_rel[j].cnst[7] = cnst_08;

plans->resrce_rel[j].cnst[8] = cnst_09;

plans->resrce_rel[j].cnst[9] = cnst_10;

plans->resrce_rel[j].cnst_cnt = cnst_cnt;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C3;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C3;

return(ERROR);

}

 

int db_read_prt_dat(plans)

PLAN_DATA *plans;

{

int i;

int j;

int k;

 

printf("Extracting from table prt_dat ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C4 CURSOR FOR

SELECT * FROM PRT_DAT;

EXEC SQL OPEN C4;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C4 INTO

:id,:nam,:dscr,:factory,:db_key,:modf_idx,:unit_m,:s_t_stat;

printf(" %d", i);

j = ++plans->ptr_prt_dat;

plans->prt_dat[j].id = id;

for (k = 0; k < nam.len; k++) {

plans->prt_dat[j].nam[k] = nam.arr[k];

}

plans->prt_dat[j].nam[k] = 0;

for (k = 0; k < dscr.len; k++) {

plans->prt_dat[j].dscr[k] = dscr.arr[k];

}

plans->prt_dat[j].dscr[k] = 0;

plans->prt_dat[j].factory = factory;

plans->prt_dat[j].db_key = db_key;

plans->prt_dat[j].modf_idx = modf_idx;

plans->prt_dat[j].unit_m = unit_m;

plans->prt_dat[j].s_t_stat = s_t_stat;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C4;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C4;

return(ERROR);

}

 

int db_read_prt_cntn(plans)

PLAN_DATA *plans;

{

int i;

int j;

printf("Extracting from table prt_cntn ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C5 CURSOR FOR

SELECT * FROM PRT_CNTN;

EXEC SQL OPEN C5;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C5 INTO

:id,:s_t_consumer,:s_t_supplier,:consumer_num,:relat_fct;

printf(" %d", i);

j = ++plans->ptr_prt_cntn;

plans->prt_cntn[j].id = id;

plans->prt_cntn[j].s_t_consumer = s_t_consumer;

plans->prt_cntn[j].s_t_supplier = s_t_supplier;

plans->prt_cntn[j].consumer_num = consumer_num;

plans->prt_cntn[j].relat_fct = relat_fct;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C5;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C5;

return(ERROR);

}

 

int db_read_proc_dscr(plans)

PLAN_DATA *plans;

{

int i;

int j;

int k;

printf("Extracting from table proc_dscr ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C6 CURSOR FOR

SELECT * FROM PROC_DSCR;

EXEC SQL OPEN C6;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C6 INTO

:id,:id_s_t,:id_assoc,:dscr,:cap_grp,

:resrce_01,:resrce_02,:resrce_03,:resrce_04,:resrce_05,

:resrce_06,:resrce_07,:resrce_08,:resrce_09,:resrce_10,

:res_cnt,:est_time_run,:est_time_setup,

:rank,:usage_capacity,:req_capacity;

printf(" %d", i);

j = ++plans->ptr_proc_dscr;

plans->proc_dscr[j].id = id;

plans->proc_dscr[j].id_s_t = id_s_t;

plans->proc_dscr[j].id_assoc = id_assoc;

for (k = 0; k < dscr.len; k++) {

plans->proc_dscr[j].dscr[k] = dscr.arr[k];

}

plans->proc_dscr[j].dscr[k] = 0;

plans->proc_dscr[j].cap_grp = cap_grp;

plans->proc_dscr[j].resrce[0] = resrce_01;

plans->proc_dscr[j].resrce[1] = resrce_02;

plans->proc_dscr[j].resrce[2] = resrce_03;

plans->proc_dscr[j].resrce[3] = resrce_04;

plans->proc_dscr[j].resrce[4] = resrce_05;

plans->proc_dscr[j].resrce[5] = resrce_06;

plans->proc_dscr[j].resrce[6] = resrce_07;

plans->proc_dscr[j].resrce[7] = resrce_08;

plans->proc_dscr[j].resrce[8] = resrce_09;

plans->proc_dscr[j].resrce[9] = resrce_10;

plans->proc_dscr[j].res_cnt = res_cnt;

plans->proc_dscr[j].est_time_run = est_time_run;

plans->proc_dscr[j].est_time_setup = est_time_setup;

plans->proc_dscr[j].rank = rank;

plans->proc_dscr[j].usage_capacity = usage_capacity;

plans->proc_dscr[j].req_capacity = req_capacity;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C6;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C6;

return(ERROR);

}

 

int db_read_super_task(plans)

PLAN_DATA *plans;

{

int i;

int j;

int k;

printf("Extracting from table super_task ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C7 CURSOR FOR

SELECT * FROM SUPER_TASK;

EXEC SQL OPEN C7;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C7 INTO

:id,:dscr,:plan_level,:lot_min,:lot_siz,:avg_stock,:scrap_rate;

printf(" %d", i);

j = ++plans->ptr_super_task;

plans->super_task[j].id = id;

for (k = 0; k < dscr.len; k++) {

plans->super_task[j].dscr[k]= dscr.arr[k];

}

plans->super_task[j].dscr[k]= 0;

plans->super_task[j].plan_level = plan_level;

plans->super_task[j].lot_min = lot_min;

plans->super_task[j].lot_siz = lot_siz;

plans->super_task[j].avg_stock = avg_stock;

plans->super_task[j].scrap_rate = scrap_rate;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C7;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C7;

return(ERROR);

}

 

int db_read_proc_cntn(plans)

PLAN_DATA *plans;

{

int i;

int j;

printf("Extracting from table proc_cntn ...");

EXEC SQL WHENEVER SQLERROR GOTO err_read;

EXEC SQL DECLARE C8 CURSOR FOR

SELECT * FROM PROC_CNTN;

EXEC SQL OPEN C8;

EXEC SQL WHENEVER NOT FOUND GOTO end_read;

for(i = 0; ; i++){

EXEC SQL FETCH C8 INTO

:id,:s_t_consumer,:s_t_supplier,:min_lead_time,:relat_quant;

printf(" %d", i);

j = ++plans->ptr_proc_cntn;

plans->proc_cntn[j].id = id;

plans->proc_cntn[j].s_t_consumer = s_t_consumer;

plans->proc_cntn[j].s_t_supplier = s_t_supplier;

plans->proc_cntn[j].min_lead_time = min_lead_time;

plans->proc_cntn[j].relat_quant = relat_quant;

}

end_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C8;

printf(" successful\n");

return(NO_ERROR);

err_read:

EXEC SQL WHENEVER SQLERROR CONTINUE;

EXEC SQL CLOSE C8;

return(ERROR);

}

 

 

APPENDIX B : INTEGRATOR INTERNAL DATA STRUCTURES

 

The role of the CAPP/PPC integrator is to bridge the functional and data gaps that exist between any generic CAPP and PPC, with a definite emphasis on bridging RPE (from McMaster) and GRIPPS (from IPA). This emphasis dominated the early development effort at Western. At first there was not enough understanding and specification of either RPE or GRIPPS for the Western team to implement anything that depended on those modules. Nonetheless, the team successfully developed the Message Board, a software package that allows communication between multiple processes; it serves as the mechanism for passing signals (as well as data) between RPE, GRIPPS, and the integrator. It was the later agreement on the definition of the generic common data that provided the opportunity for actual coding of the integrator.

 

The implementation was written in both C and Pro*C. Pro*C is an Oracle-specific language, a hybrid of C and SQL, used for accessing Oracle. Pro*C offers a concise and standard way of accessing the RDBMS through SQL statements embedded in C programs, and it deals in terms of tables, records, and scalar variables. However, Pro*C does not support C's block structure or arrays for host variables, so it relies heavily on global variables, as well as goto statements. A Pro*C program has to be translated into C by the Oracle Pro*C precompiler before it can be compiled. My involvement has dealt mainly with data-related issues and with debugging.
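To make the embedded-SQL style used throughout these appendices concrete, the fragment below shows how host variables are declared and referenced: a VARCHAR host variable is expanded by the precompiler into a structure with .arr and .len members, which is why the read routines copy nam.arr and nam.len into the C structures. The fragment is illustrative only (it reuses the RESRCE table of Appendix-B and the NO_ERROR/ERROR conventions of the appendices) and is not part of the delivered code.

EXEC SQL BEGIN DECLARE SECTION;
int     id;                  /* scalar host variable                    */
VARCHAR nam[30];             /* precompiler expands to { len, arr[30] } */
EXEC SQL END DECLARE SECTION;

EXEC SQL INCLUDE SQLCA;      /* SQL communication area                  */

/* Illustrative fragment: fetch the name of one RESRCE row by id. */
int fetch_resrce_name(int wanted_id, char *out)
{
    int k;

    id = wanted_id;
    EXEC SQL WHENEVER SQLERROR GOTO err_fetch;
    EXEC SQL SELECT NAM INTO :nam FROM RESRCE WHERE ID = :id;
    for (k = 0; k < nam.len; k++)     /* VARCHAR is not null-terminated */
        out[k] = nam.arr[k];
    out[k] = 0;
    return(NO_ERROR);

err_fetch:
    return(ERROR);
}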

 

The first task was setting up the structure definitions and the internal data storage for process plans in C, based upon the previous agreement on the generic common data. This was straightforward because the eight components of a process plan were already given in C form in the agreement; only meaningful English field names had to be chosen. Below is a brief description of these eight components:

 

• RESRCE - the resources (e.g. machines, tools, materials, or people) involved in production;

• RESRCE_REL - the constraining relationships between resources;

• PRT_DAT - the distinguishable material parts (in-progress, finished, or purchased) that exist between operations;

• PRT_CNTN - the utilization of parts to make other parts;

• CAP_GRP - the logical grouping of resources (a.k.a. capacity group) that performs a sequence of operations;

• SUPER_TASK - the result of all operations that happen within a single capacity group, but without specifying which capacity group;

• PROC_CNTN - the order of executing the super tasks;

• PROC_DSCR - the (preferred or alternative) set of operations to be performed by a super task.

 

A new structure that houses all eight components together to describe one process plan was also defined; this permits easy handling of multiple process plans within a program. All nine structure definitions are presented in Appendix-A. Along with these structures, utilities that initialize, insert values into, and retrieve values from these structures were also implemented, as sketched below.
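The initialization and insertion utilities themselves are not reproduced in this report. The sketch below suggests what they look like, assuming (as the j = ++plans->ptr_... statements in the database read routines imply) that each ptr_ field of PLAN_DATA holds the index of the last filled record, with -1 meaning empty; the names plan_init and plan_add_resrce are illustrative only.

#include <string.h>

/* Reset a PLAN_DATA variable so that every component is empty. */
void plan_init(PLAN_DATA *plans)
{
    plans->ptr_resrce     = -1;
    plans->ptr_resrce_rel = -1;
    plans->ptr_prt_dat    = -1;
    plans->ptr_prt_cntn   = -1;
    plans->ptr_cap_grp    = -1;
    plans->ptr_super_task = -1;
    plans->ptr_proc_cntn  = -1;
    plans->ptr_proc_dscr  = -1;
}

/* Append one resource record; returns its index, or -1 when the table is full. */
int plan_add_resrce(PLAN_DATA *plans, int id, char *nam, char *dscr)
{
    int j;

    if (plans->ptr_resrce >= MAX_REC_RESRCE - 1)
        return(-1);
    j = ++plans->ptr_resrce;
    plans->resrce[j].id = id;
    strncpy(plans->resrce[j].nam, nam, sizeof(plans->resrce[j].nam) - 1);
    plans->resrce[j].nam[sizeof(plans->resrce[j].nam) - 1] = 0;
    strncpy(plans->resrce[j].dscr, dscr, sizeof(plans->resrce[j].dscr) - 1);
    plans->resrce[j].dscr[sizeof(plans->resrce[j].dscr) - 1] = 0;
    return(j);
}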

 

Similar data-related work was also done for Oracle. Tables and records corresponding to the structure definitions and internal data storage described above were defined and set up in Oracle. Along with these tables, utilities that write records into and read records from the Oracle tables were implemented, as were utilities for connecting to and releasing from Oracle. All of this implementation was done in Pro*C. Most, but not all, of the naming conventions remain the same here as in the C structure definitions and the internal data storage mentioned in the last paragraph. Appendix-B presents the definition and setting up of the tables and records in Oracle.
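The connect and release utilities are not listed in Appendix-B; a minimal sketch of what they look like in Pro*C follows. The function names and the account strings passed to them are placeholders, not the ones actually used by the project.

#include <string.h>

EXEC SQL BEGIN DECLARE SECTION;
VARCHAR uid[20];
VARCHAR pwd[20];
EXEC SQL END DECLARE SECTION;

/* Log on to Oracle with the given account. */
int db_connect(char *name, char *password)
{
    EXEC SQL WHENEVER SQLERROR GOTO err_conn;
    strcpy((char *)uid.arr, name);      uid.len = strlen(name);
    strcpy((char *)pwd.arr, password);  pwd.len = strlen(password);
    EXEC SQL CONNECT :uid IDENTIFIED BY :pwd;
    return(NO_ERROR);
err_conn:
    return(ERROR);
}

/* Commit outstanding work and log off. */
int db_release(void)
{
    EXEC SQL WHENEVER SQLERROR GOTO err_rel;
    EXEC SQL COMMIT WORK RELEASE;
    return(NO_ERROR);
err_rel:
    return(ERROR);
}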

 

Test programs have been written to create the internal data storage, the data-tables in Oracle, as well as to transfer data between them.

 

There was one other area that I worked on for a time but have since abandoned: reading a PDL file into, and writing a PDL file from, the internal data storage. PDL, a product description language, was developed at McMaster and is the format of the input files to RPE. Effort was devoted to constructing a lexical analyzer, using Lex, based on the grammar of one section of PDL (the section that describes the geometrical attributes of individual parts and the relations between different parts). A parser, developed using Yacc, was also implemented. The PDL file of the air cylinder (provided by the McMaster team) was successfully read into the internal data storage, and the opposite process of writing the information from the internal storage back out to a PDL file was also successful. This effort was stopped for two reasons. First, it largely duplicates the work of the McMaster team, even though RPE uses an object-oriented internal storage. Second, it is not certain whether the PDL file will be the final format that the integrator works with.
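For reference, the sketch below is a much-simplified, hand-written stand-in for that Lex/Yacc front end. It only tracks the ASSEMBLY(...)/PART(...) nesting of a PDL file and prints the dotted part names that appear in the PRT_DAT table of Appendix D; the real grammar, with geometric attributes and inter-part relations, is considerably larger.

#include <stdio.h>
#include <string.h>

/* Walk a PDL file, tracking ASSEMBLY/PART nesting, and print one dotted
   name per part (and per sub-assembly, since assemblies are parts too).
   The closing brace of each block pops one nesting level.               */
static void scan_pdl(FILE *fp)
{
    char line[256], name[64];
    char path[16][64];      /* names of the enclosing ASSEMBLY blocks    */
    int  kind[32];          /* 1 = ASSEMBLY block, 0 = PART block        */
    int  depth = 0, asm_depth = 0, i;

    while (fgets(line, sizeof(line), fp) != NULL) {
        if (sscanf(line, " ASSEMBLY( %[^ )]", name) == 1) {
            for (i = 0; i < asm_depth; i++) printf("%s.", path[i]);
            printf("%s\n", name);
            strcpy(path[asm_depth++], name);
            kind[depth++] = 1;
        } else if (sscanf(line, " PART( %[^ )]", name) == 1) {
            for (i = 0; i < asm_depth; i++) printf("%s.", path[i]);
            printf("%s\n", name);            /* one PRT_DAT entry        */
            kind[depth++] = 0;
        } else if (strchr(line, '}') != NULL && depth > 0) {
            if (kind[--depth] == 1)
                asm_depth--;
        }
    }
}

int main(int argc, char *argv[])
{
    FILE *fp;

    if (argc < 2 || (fp = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: pdlscan <pdl-file>\n");
        return(1);
    }
    scan_pdl(fp);
    fclose(fp);
    return(0);
}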

 

There are still bugs in the implementation as a whole (i.e. the software collectively written by the Western team). The demonstration died, not immediately, but after it had run for some time, and it did not always die at the same location; it could die either inside or outside of Oracle. The demonstration tests how process-plan data are passed between the stub-CAPP, stub-PPC, stub-RPE, and the integrator, through files as well as through Oracle. A random number generator was used to determine the action to be taken whenever a process plan is received. Debugging has been difficult because of the multiple-process environment; the approach I have taken is to analyze the printed traces of the program execution. Bugs have been found and fixed, but none of them was responsible for the program crash.

Appendix-A

The C data structures for the generic common data

 

The first eight structures defined below are taken from the agreed-upon generic common data definition. The last one is the super-structure that contains all the components needed to describe a single process plan for a specific manufacturing facility.

 

typedef struct{

int id ;/* identification of resource */

char nam[30] ;/* naming */

char dscr[100] ;/* description */

int class ;/* resource class */

int capab ;/* capability */

int skill ;/* skill */

int qualf ;/* qualification */

int quant ;/* quantity */

int cost_setup ;/* setup cost */

int cost_run ;/* run-time cost */

int cost_rate ;/* cost rate */

int setup_time ;/* setup time */

int usage_time ;/* usage time */

int avail ;/* availability (time) */

} RESRCE ;

typedef struct{

int id ;/* identification of resource */

int cnst[10] ;/* constraining resources */

int cnst_cnt ;/* number of constraints */

} RESRCE_REL ;

typedef struct{

int id ;/* identification of part data */

char nam[30] ;/* naming */

char dscr[100] ;/* description */

int factory ;/* identification of factory */

int db_key ;/* keys for db usages */

int modf_idx ;/* modification index */

int unit_m ;/* unit of measurement */

int s_t_stat ;/* FINISHED/IN PROCESS/PURCHASED */

} PRT_DAT ;

typedef struct{

int id ;/* identification of path/route */

int part_id ;/* part id */

int s_t_consumer ;/* process that uses parts */

int s_t_supplier ;/* process that produces parts */

int consumer_num ;/* process number of consumer */

int relat_fct ;/* rel-factor: supplier to consumer */

} PRT_CNTN ;

 

typedef struct{

int id ;/* identification of capacity group */

char nam[30] ;/* naming */

char dscr[100] ;/* description */

int usage_ofr ;/* total usage/capacity offered */

int util_ftr ;/* utilization factor (in %) */

int logical_x ;/* logical x-coordinate */

int logical_y ;/* logical y-coordinate */

int prod_seg[25] ;/* member capacity groups */

int prod_seg_cnt ;/* count */

int resrce_prt[25] ;/* list of resource (id's) */

int resrce_cnt ;/* count */

int plan_horiz ;/* (in min) */

int cost_setup ;/* setup cost (in $) */

int cost_run ;/* unit run-time cost (in $) */

int scrap_rate ;/* scrap rate (in %) */

int avail ;/* availability (time) */

} CAP_GRP ;

typedef struct{

int id ;/* identification of super task */

char dscr[100] ;/* description */

int plan_level ;/* proc dscr number within a proc plan */

int lot_min ;/* minimum lot size */

int lot_siz ;/* minimum container size */

int avg_stock ;/* average stock */

int scrap_rate ;/* scrap rate (in %) */

} SUPER_TASK ;

typedef struct{

int id ;/* identification of path/route */

int s_t_consumer ;/* process that uses parts */

int s_t_supplier ;/* process that produces parts */

int min_lead_time ;/* minimum lead time */

int relat_quant ;/* rel-quantity: supplier to consumer */

} PROC_CNTN ;

typedef struct{

int id ;/* identification: process description */

int id_s_t ;/* supertask */

int id_assoc ;/* associated parts of supertask */

char dscr[100] ;/* description */

int cap_grp ;/* capacity group used */

int resrce[10] ;/* resource */

int res_cnt ;/* number of resource */

int est_time_run ;/* estimated average run time */

int est_time_setup ;/* estimated setup time */

int rank ;/* preferred process plan indicator */

int usage_capacity ;/* usage of the capacity group */

int req_capacity ;/* capacity deficit */

} PROC_DSCR ;

 

 

#define MAX_REC_RESRCE 100

#define MAX_REC_RESRCE_REL 100

#define MAX_REC_PRT_DAT 100

#define MAX_REC_PRT_CNTN 100

#define MAX_REC_CAP_GRP 100

#define MAX_REC_SUPER_TASK 100

#define MAX_REC_PROC_CNTN 100

#define MAX_REC_PROC_DSCR 100

typedef struct{

int ptr_resrce ;

int ptr_resrce_rel ;

int ptr_prt_dat ;

int ptr_prt_cntn ;

int ptr_cap_grp ;

int ptr_super_task ;

int ptr_proc_cntn ;

int ptr_proc_dscr ;

RESRCE resrce[MAX_REC_RESRCE] ;

RESRCE_REL resrce_rel[MAX_REC_RESRCE_REL] ;

PRT_DAT prt_dat[MAX_REC_PRT_DAT] ;

PRT_CNTN prt_cntn[MAX_REC_PRT_CNTN] ;

CAP_GRP cap_grp[MAX_REC_CAP_GRP] ;

SUPER_TASK super_task[MAX_REC_SUPER_TASK] ;

PROC_CNTN proc_cntn[MAX_REC_PROC_CNTN] ;

PROC_DSCR proc_dscr[MAX_REC_PROC_DSCR] ;

} PLAN_DATA;
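To show how these definitions fit together, the fragment below fills one RESRCE record and one PROC_DSCR record of a PLAN_DATA variable with values taken from the air cylinder example of Appendix D (the assm_cyl_1 process using the dt501_ma_jig). It assumes the ptr_ fields are indices of the last used record, initialized to -1; the fragment is an illustration, not part of the delivered utilities.

#include <string.h>

PLAN_DATA demo_plan;

void fill_demo_plan(void)
{
    /* start from an empty plan: -1 means "no records yet" */
    memset(&demo_plan, 0, sizeof(demo_plan));
    demo_plan.ptr_resrce = demo_plan.ptr_resrce_rel = -1;
    demo_plan.ptr_prt_dat = demo_plan.ptr_prt_cntn = -1;
    demo_plan.ptr_cap_grp = demo_plan.ptr_super_task = -1;
    demo_plan.ptr_proc_cntn = demo_plan.ptr_proc_dscr = -1;

    /* resource 1: the manual assembly jig */
    {
        int j = ++demo_plan.ptr_resrce;
        demo_plan.resrce[j].id = 1;
        strcpy(demo_plan.resrce[j].nam, "dt501_ma_jig");
        demo_plan.resrce[j].quant = 1;
    }

    /* process description 3: manual cylinder assembly (super task 3) */
    {
        int j = ++demo_plan.ptr_proc_dscr;
        demo_plan.proc_dscr[j].id = 3;
        strcpy(demo_plan.proc_dscr[j].dscr, "assm_cyl_1");
        demo_plan.proc_dscr[j].id_s_t = 3;     /* assemble_cylinder */
        demo_plan.proc_dscr[j].res_cnt = 1;
        demo_plan.proc_dscr[j].resrce[0] = 1;  /* dt501_ma_jig      */
    }
}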

 

Appendix-B

The Oracle data-tables for the generic common data

 

The eight Pro*C routines below create the necessary data-tables in Oracle.

 

int db_creat_resrce()

{ EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE RESRCE(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

CLASS NUMBER(6) ,

CAPAB NUMBER(6) ,

SKILL NUMBER(6) ,

QUALF NUMBER(6) ,

QUANT NUMBER(6) ,

COST_SETUP NUMBER(6) ,

COST_RUN NUMBER(6) ,

COST_RATE NUMBER(6) ,

SETUP_TIME NUMBER(6) ,

USAGE_TIME NUMBER(6) ,

AVAIL NUMBER(6) );

return(NO_ERROR);

err_creat:

return(ERROR); }

 

int db_creat_resrce_rel()

{ EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE RESRCE_REL(

ID NUMBER(6) ,

CNST_01 NUMBER(3) ,

CNST_02 NUMBER(3) ,

CNST_03 NUMBER(3) ,

.. ..

(up to) (up to)

.. ..

CNST_10 NUMBER(3) ,

CNST_CNT NUMBER(3) );

return(NO_ERROR);

err_creat:

return(ERROR); }

 

int db_creat_prt_dat()

{ EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PRT_DAT(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

FACTORY NUMBER(6) ,

DB_KEY NUMBER(6) ,

MODF_IDX NUMBER(6) ,

UNIT_M NUMBER(6) ,

S_T_STAT NUMBER(6) );

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_prt_cntn()

{ EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PRT_CNTN(

ID NUMBER(6) ,

PART_ID NUMBER(6) ,

S_T_CONSUMER NUMBER(6) ,

S_T_SUPPLIER NUMBER(6) ,

CONSUMER_NUM NUMBER(6) ,

RELAT_FCT NUMBER(6) );

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_cap_grp()

{ EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE CAP_GRP(

ID NUMBER(6) ,

NAM CHAR(30) ,

DSCR CHAR(100) ,

USAGE_OFR NUMBER(3) ,

UTIL_FTR NUMBER(3) ,

LOGICAL_X NUMBER(3) ,

LOGICAL_Y NUMBER(3) ,

PROD_SEG_01 NUMBER(3) ,

PROD_SEG_02 NUMBER(3) ,

PROD_SEG_03 NUMBER(3) ,

.. ..

(up to) (up to)

.. ..

PROD_SEG_25 NUMBER(3) ,

PROD_SEG_CNT NUMBER(3) ,

RESRCE_PRT_01 NUMBER(3) ,

RESRCE_PRT_02 NUMBER(3) ,

RESRCE_PRT_03 NUMBER(3) ,

.. ..

(up to) (up to)

.. ..

RESRCE_PRT_25 NUMBER(3) ,

RESRCE_CNT NUMBER(3) ,

PLAN_HORIZ NUMBER(6) ,

COST_SETUP NUMBER(6) ,

COST_RUN NUMBER(6) ,

SCRAP_RATE NUMBER(6) ,

AVAIL NUMBER(4) );

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_super_task()

{

printf("Creating table SUPER_TASK ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE SUPER_TASK(

ID NUMBER(6) ,

DSCR CHAR(100) ,

PLAN_LEVEL NUMBER(6) ,

LOT_MIN NUMBER(6) ,

LOT_SIZ NUMBER(6) ,

AVG_STOCK NUMBER(6) ,

SCRAP_RATE NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_proc_cntn()

{

printf("Creating table PROC_CNTN ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PROC_CNTN(

ID NUMBER(6) ,

S_T_CONSUMER NUMBER(6) ,

S_T_SUPPLIER NUMBER(6) ,

MIN_LEAD_TIME NUMBER(6) ,

RELAT_QUANT NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

int db_creat_proc_dscr()

{

printf("Creating table PROC_DSCR ...");

EXEC SQL WHENEVER SQLERROR GOTO err_creat;

EXEC SQL CREATE TABLE PROC_DSCR(

ID NUMBER(6) ,

ID_S_T NUMBER(6) ,

ID_ASSOC NUMBER(6) ,

DSCR CHAR(100) ,

CAP_GRP NUMBER(6) ,

RESRCE_01 NUMBER(3) ,

RESRCE_02 NUMBER(3) ,

RESRCE_03 NUMBER(3) ,

.. ..

(up to) (up to)

.. ..

RESRCE_10 NUMBER(3) ,

RES_CNT NUMBER(3) ,

EST_TIME_RUN NUMBER(3) ,

EST_TIME_SETUP NUMBER(3) ,

RANK NUMBER(3) ,

USAGE_CAPACITY NUMBER(6) ,

REQ_CAPACITY NUMBER(6) );

printf(" successful\n");

return(NO_ERROR);

err_creat:

return(ERROR);

}

 

APPENDIX D : AN AIR CYLINDER DATA EXAMPLE

 

Below is a condensed version of the PDL-file that describes only parts and their assembly.

 

//

// Yes, the D521X air cylinder again.

//

// UNITS-statement not included

//

ASSEMBLY( air_cyl ) {

// Functional data not included

// External Geometric data not included

// Body attributes not included

// Piston attributes not included

// Bushing attributes not included

// Misc. attributes not included

// More Misc. attributes not included

//

// This is the bushing. It's made up of an o-ring and a bushing body.

//

ASSEMBLY( bushing ) {

// Attributes not included

PART( bushing ) {

// Attributes not included

}

PART( o_ring ) {

// Attributes not included

}

}

//

// The piston is made up of a shaft, a face, an o-ring, and a screw.

//

ASSEMBLY( piston ) {

// Attributes not included

PART( shaft ) {

// Attributes not included

}

PART( face ) {

// Attributes not included

}

PART( o_ring ) {

// Attributes not included

}

PART( screw ) {

// Attributes not included

}

}

// The body

PART( body ) {

// Attributes not included

}

// The base

PART( base ) {

// Attributes not included

}

// The o_ring for the body

PART( o_ring ) {

// Attributes not included

}

// The screws to hold the base to the body

PART( screws ) {

// Attributes not included

}

}

//

// A sample Macro Task listing.

//

// Accept the completed product into shipping.

MACROTASK( receive ) {

MACRO_OP( accept_sfg );

PARENT( ~air_cyl );

PRE_CNST( pc, ~inspect );

}

// Inspect product as it leaves the area.

MACROTASK( inspect ) {

MACRO_OP( inspect_dx501 );

PARENT( ~air_cyl );

ALT_CNST( ac1 ) {

PRE_CNST( pc1, ~assm_cyl_1 );

PRE_CNST( pc2, ~assm_cyl_2 );

PRE_CNST( pc3, ~assm_cyl_3 );

}

}

// Manual Cylinder assembly.

MACROTASK( assm_cyl_1 ) {

MACRO_OP( assemble_cylinder );

TOOL( dt501_ma_jig );

PARENT( ~air_cyl.screws );

RELATED( ~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base, ~air_cyl.body, ~air_cyl.o_ring );

PRE_CNST( pc, ~assm_bushing, ~assm_piston );

}

// Flexible Cylinder assembly.

MACROTASK( assm_cyl_2 ) {

MACRO_OP( assemble_cylinder );

TOOL( dt501_fa_jig );

PARENT( ~air_cyl.screws );

RELATED( ~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base, ~air_cyl.body, ~air_cyl.o_ring );

PRE_CNST( pc, ~assm_bushing, ~assm_piston );

}

// Hard Automation for Cylinder assembly.

MACROTASK( assm_cyl_3 ) {

MACRO_OP( assemble_cylinder );

TOOL( dt501_ha_jig );

PARENT( ~air_cyl.screws );

RELATED( ~air_cyl.piston, ~air_cyl.bushing, ~air_cyl.base, ~air_cyl.body, ~air_cyl.o_ring );

PRE_CNST( pc, ~assm_bushing, ~assm_piston );

}

// Assemble the bushing.

MACROTASK( assm_bushing ) {

MACRO_OP( assemble_bushing );

PARENT( ~air_cyl.bushing.bushing );

RELATED( ~air_cyl.bushing.o_ring );

PRE_CNST( pc, ~release );

}

// Assemble the Piston.

MACROTASK( assm_piston ) {

MACRO_OP( assemble_piston );

PARENT( ~air_cyl.piston.screw );

RELATED( ~air_cyl.piston.shaft, ~air_cyl.piston.face, ~air_cyl.piston.o_ring );

PRE_CNST( pc, ~release );

}

// Release parts from stock.

MACROTASK( release ) {

MACRO_OP( release_rip );

PARENT( ~air_cyl );

}

Internal data representation of the process plan of the air cylinder

 

 

PRT_DAT

    id   nam
     1   air_cyl
     2   air_cyl.screw
     3   air_cyl.piston
     4   air_cyl.bushing
     5   air_cyl.base
     6   air_cyl.body
     7   air_cyl.o_ring
     8   air_cyl.bushing.bushing
     9   air_cyl.bushing.o_ring
    10   air_cyl.piston.screw
    11   air_cyl.piston.shaft
    12   air_cyl.piston.face
    13   air_cyl.piston.o_ring

 

 

RESRCE

    id   nam            quant
     1   dt501_ma_jig       1
     2   dt501_fa_jig       1
     3   dt501_ha_jig       1

 

 

SUPER_TASK

    id   dscr
     1   accept_sfg
     2   inspect_dx501
     3   assemble_cylinder
     4   assemble_bushing
     5   assemble_piston
     6   release_rip

 

 

PROC_DSCR

    id   dscr           id_s_t                  res_cnt   resrce[0]
     1   receive        1 <accept_sfg>             0
     2   inspect        2 <inspect_dx501>          0
     3   assm_cyl_1     3 <assemble_cylinder>      1      1 <dt501_ma_jig>
     4   assm_cyl_2     3 <assemble_cylinder>      1      2 <dt501_fa_jig>
     5   assm_cyl_3     3 <assemble_cylinder>      1      3 <dt501_ha_jig>
     6   assm_bushing   4 <assemble_bushing>       0
     7   assm_piston    5 <assemble_piston>        0
     8   release        6 <release_rip>            0

 

 

 

PROC_CNTN

    id   s_t_consumer       s_t_supplier
     1   1 <receive>        2 <inspect>
     2   2 <inspect>        3 <assm_cyl_1>
     3   2 <inspect>        4 <assm_cyl_2>
     4   2 <inspect>        5 <assm_cyl_3>
     5   3 <assm_cyl_1>     6 <assm_bushing>
     6   3 <assm_cyl_1>     7 <assm_piston>
     7   4 <assm_cyl_2>     6 <assm_bushing>
     8   4 <assm_cyl_2>     7 <assm_piston>
     9   5 <assm_cyl_3>     6 <assm_bushing>
    10   5 <assm_cyl_3>     7 <assm_piston>
    11   6 <assm_bushing>   8 <release>
    12   7 <assm_piston>    8 <release>

 

 

PRT_CNTN

    id   part_id                       s_t_consumer       s_t_supplier       consumer_num
     1   1 <air_cyl>                   1 <receive>        2 <inspect>        1 <accept_sfg>
     2   1 <air_cyl>                   2 <inspect>        3 <assm_cyl_1>     2 <inspect_dx501>
     3   1 <air_cyl>                   2 <inspect>        4 <assm_cyl_2>     2 <inspect_dx501>
     4   1 <air_cyl>                   2 <inspect>        5 <assm_cyl_3>     2 <inspect_dx501>
     5   3 <air_cyl.piston>            3 <assm_cyl_1>                        3 <assemble_cylinder>
     6   4 <air_cyl.bushing>           3 <assm_cyl_1>                        3 <assemble_cylinder>
     7   5 <air_cyl.base>              3 <assm_cyl_1>                        3 <assemble_cylinder>
     8   6 <air_cyl.body>              3 <assm_cyl_1>                        3 <assemble_cylinder>
     9   7 <air_cyl.o_ring>            3 <assm_cyl_1>                        3 <assemble_cylinder>
    10   2 <air_cyl.screw>             3 <assm_cyl_1>     6 <assm_bushing>   3 <assemble_cylinder>
    11   2 <air_cyl.screw>             3 <assm_cyl_1>     7 <assm_piston>    3 <assemble_cylinder>
    12   3 <air_cyl.piston>            4 <assm_cyl_2>                        3 <assemble_cylinder>
    13   4 <air_cyl.bushing>           4 <assm_cyl_2>                        3 <assemble_cylinder>
    14   5 <air_cyl.base>              4 <assm_cyl_2>                        3 <assemble_cylinder>
    15   6 <air_cyl.body>              4 <assm_cyl_2>                        3 <assemble_cylinder>
    16   7 <air_cyl.o_ring>            4 <assm_cyl_2>                        3 <assemble_cylinder>
    17   2 <air_cyl.screw>             4 <assm_cyl_2>     6 <assm_bushing>   3 <assemble_cylinder>
    18   2 <air_cyl.screw>             4 <assm_cyl_2>     7 <assm_piston>    3 <assemble_cylinder>
    19   3 <air_cyl.piston>            5 <assm_cyl_3>                        3 <assemble_cylinder>
    20   4 <air_cyl.bushing>           5 <assm_cyl_3>                        3 <assemble_cylinder>
    21   5 <air_cyl.base>              5 <assm_cyl_3>                        3 <assemble_cylinder>
    22   6 <air_cyl.body>              5 <assm_cyl_3>                        3 <assemble_cylinder>
    23   7 <air_cyl.o_ring>            5 <assm_cyl_3>                        3 <assemble_cylinder>
    24   2 <air_cyl.screw>             5 <assm_cyl_3>     6 <assm_bushing>   3 <assemble_cylinder>
    25   2 <air_cyl.screw>             5 <assm_cyl_3>     7 <assm_piston>    3 <assemble_cylinder>
    26   9 <air_cyl.bushing.o_ring>    6 <assm_bushing>                      4 <assemble_bushing>
    27   8 <air_cyl.bushing.bushing>   6 <assm_bushing>   8 <release>        4 <assemble_bushing>
    28   11 <air_cyl.piston.shaft>     7 <assm_piston>                       5 <assemble_piston>
    29   12 <air_cyl.piston.face>      7 <assm_piston>                       5 <assemble_piston>
    30   13 <air_cyl.piston.o_ring>    7 <assm_piston>                       5 <assemble_piston>
    31   10 <air_cyl.piston.screw>     7 <assm_piston>    8 <release>        5 <assemble_piston>
    32   1 <air_cyl>                   8 <release>        -1                 6 <release_rip>