This report outlines the final results of a multinational project graciously funded by the Province of Ontario, Canada, and the state of Baden-Württemberg, Germany. The project saw the Fraunhofer Institute for Production Automation (IPA) in Stuttgart, Germany, McMaster University in Hamilton, and The University of Western Ontario (UWO) in London cooperating to jointly develop methods for integrating CAPP (Computer Aided Process Planning) and PPC (Production Planning and Control). CAPP and PPC are becoming more common in automated manufacturing. It was recognized that, although CAPP and PPC are being used, they were not being integrated, thus losing some of the key abilities of the software, in particular timely changes to process plans as the status of the shop floor evolves.
The various groups provided different forms of expertise. The group from IPA provided knowledge about PPC, based upon their software package GRIPPS, which is being developed for commercial applications. McMaster contributed its software for reactive planning, along with knowledge about CAPP systems. UWO provided expertise in communications and database issues. At present there are four software packages related to the project: GRIPPS from IPA, RPE from McMaster, and the two packages described in this report. Both packages developed at UWO provide communications and data movement between CAPP, RPE and GRIPPS. The two versions of the Integrator are differentiated by communication method: one uses databases as the foundation for passing events and data, while the second uses OSI standards. Both have advantages and disadvantages, as will be described within the report.
The function of the Integrator software is to deal with data format conversion and event handling. The primary form of data to be dealt with is process plans flowing from CAPP through RPE to PPC; resource data flows backwards from PPC through RPE to CAPP. The data is stored in many forms, such as relational databases, ASCII files, and proprietary databases, and in various structures. The Integrator is therefore responsible for converting data as it is passed between modules. Events may be generated by any program at any time. As events pass through the Integrator they cause data to be transferred, and may also drive other functions, such as long term statistics gathering. Events are generated for standard occurrences such as requests for process plans, but may also be generated for unexpected happenings on the shop floor, such as bottlenecks and resource shortages. Some of these problems cannot be solved by scheduling techniques alone; they require replanning of the processes themselves with the least disturbance to the overall production system.
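The event handling described above can be sketched as a small dispatcher in which events trigger data transfers and side functions such as statistics gathering. The event names, payloads, and registration API below are illustrative assumptions, not the actual Integrator interface:

```python
# Minimal sketch of an event-driven integrator core. Handlers are registered
# per event type; dispatching an event runs the handlers and also feeds a
# long-term statistics counter. All names here are hypothetical.
from collections import defaultdict

class Integrator:
    def __init__(self):
        self.handlers = defaultdict(list)   # event type -> list of callbacks
        self.stats = defaultdict(int)       # long-term statistics gathering

    def on(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def dispatch(self, event_type, payload):
        self.stats[event_type] += 1         # side function: count events
        return [h(payload) for h in self.handlers[event_type]]

integrator = Integrator()
# A standard event: CAPP publishes a plan, which must be converted for PPC.
integrator.on("PLAN_READY", lambda plan: {"converted_for_ppc": plan["id"]})
# An unexpected shop-floor event: a bottleneck demands replanning.
integrator.on("BOTTLENECK", lambda info: {"replan_request": info["resource"]})

out = integrator.dispatch("PLAN_READY", {"id": "P-001"})
```

The same dispatch path serves both the routine plan-request traffic and the unexpected shop-floor events, which is what lets statistics gathering ride along for free.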
In total, our Integrator resolves the integration issues between the two systems: product data representation and exchange, event handling and real time related issues, common definition of variables and attributes, and use of standard common databases.
Some 20-30% of all jobs in small and medium sized shops must be redirected to other resources to achieve production goals. Existing process planners produce fixed linear sequences of operations. Schedules based on these are not flexible and are unable to react to disruptions on the shop floor. Order throughput is ultimately accomplished through improvisation, but the associated cost penalties adversely affect manufacturing competitiveness. Flexible and rapid planning is crucial in Computer-Integrated Manufacturing (CIM) for high throughput in a Just-in-Time (JIT) environment. Nearly all existing CAPP systems are:
A closed-loop CAPP/PPC system has the advantage of responding to unforeseen production problems or events (studies show that a third of all process plans are not valid or have to be modified on short notice when manufacturing starts). Dynamic and reactive CAPP is required to address the above limitations and to respond effectively to changes in product style versus manufacturing capability. A generic, modular and domain independent “integrator” capable of integrating various CAPP systems and various PPC systems is needed. PPC systems capable of utilizing the data and knowledge provided by the dynamic planner and integrator modules are also necessary.
This report will begin by outlining the purpose for developing this project. After describing the objectives, the report goes on to describe who the key players are and how they are related to the project as a whole, thus establishing the means of project execution. Industry contacts are then described to establish the constraints which guided our research and development. The issues of implementation and existing problems are described in the sections which follow. This includes a description of existing literature, issues of importance to the Integrator, and then details of the Integrator developed. This covers details of events, data structures/formats, software written, databases, and existing software. In conclusion, progress to date and outstanding work are discussed. The reader's attention should be drawn to the quantity of work which is documented in the appendices only. These are considered important, but too voluminous to include in the main body of this report.
We set out to develop an automated interface between CAPP and PPC. This interface was to resolve data conversion issues and basic updates to process plans. The novel feature of this system was to be its ability to deal with feedback from the shop floor about failed process plans. As the availability of resources in the factory changes, new plans must be generated in real time to allow work to continue unhindered.
An outline of CAPP research may be found in the papers by Alting and Zhang (1989), Ham and Lu (1988), Eversheim (1985), Lenau and Alting (1990), and Weill et al. (1982). Most of the research has centred around metal cutting processes. Ham and Lu (1988) suggested future directions for research efforts in CAPP. The authors pointed out that process planning is often carried out without consideration of job shop status information, such as resource availability, breakdown of equipment or disruptions caused by stochastic bottlenecks. Replanning is done by improvisation and can result in long throughput times. Eversheim et al. (1990) describe the current situation of order processing in industry as: “detailed information about the order on the one hand and the actual shop floor situation on the other hand are not available; realistic planning of the order processing is still the exception”. The authors propose an Assembly Management System which should provide sufficient information about the actual situation in case of disturbances. It is suggested that during order processing (planning), process alternatives, which mirror the flexibility of the assembly process, be incorporated. However, integration and common definition between CAPP and PPC are not discussed. Another paper dealing with the subject of integration, from a high-level CIM perspective, is a model presented by Harhalakis et al. (1990). The model discusses integration at the facility level and is presented along with the rules of interaction between the constituent modules. The authors used this approach to automatically update the various databases used by CAD, CAPP, and MRP. Törnshoff and Detand (1990) proposed, as part of the ESPRIT Project 2457 FLEXPLAN, a “process description concept” which can be used by planning, scheduling and control systems throughout a manufacturing environment. A Petri-net graph-based representation is generated during process planning.
It provides information structures which can be continuously enhanced as manufacturing progresses. For example, the PPC system calculates order due dates, the scheduling system determines planned start and termination dates as well as resource allocation data, and the monitoring system updates the actual process history. However, the issues of events and the related feedback from PPC to CAPP in response to shop floor “disturbances”, and the way for CAPP to replan, are not discussed. This approach also assumes that the systems to be integrated (e.g. the CAPP and PPC systems) use Petri nets.
An approach for replanning “on-line” is presented by Ruf and Jablonski (1990). In this approach it is proposed that a static process planning system be used which identifies all combinations of manufacturing resources that are suited to produce a part. A dynamic resource allocation system decides on-line which of the possible resources have to be used in order to execute a manufacturing order. The paper deals with a feature-based part description, and does not consider issues of integration with a PPC system.
In summary, traditional CAPP systems are static, linear (strictly sequential), and they assume unlimited factory resources. To achieve an optimal schedule, process plans should take into consideration the actual workshop status as well as any capacity constraints. The reviewed research shows the necessity of breaking away from the process plan as a static and linear sequence, and the need to have plans that are able to represent parallelism and alternative operations or resources. Similarly PPC systems should use a strategy that can benefit from the non-linear, alternative plans representation. The task of integrating such CAPP and PPC systems, even in the presence of such capabilities, is not a minor one. This paper focuses on this important subject, which has not been much discussed in previously published research. In particular, we wish to discuss the need for common definitions and the use of distributed vs. standard and common databases. The type of events and resulting communication between various modules in a concurrent engineering and parallel heterogeneous processing environment are also considered.
Typical medium size parts manufacturers in Ontario, Canada were surveyed to find out how production planning and process planning are carried out and how closely integrated they are. The outcome of these investigations was revealing. Nearly all process plans are created by humans, and detailed (micro-level) process planning is almost non-existent. What is passed on to production is mostly macro-level, mixed domain plans (including all required processes, e.g. metal sealing, assembly, welding, washing, etc.) known as routing sheets. The sequence of operations and machine selection is based on ideal assumptions regarding availability of resources, and the best resources are always selected. These plans are linear and do not present alternative routes or resources. Once such plans are issued for production, the job of the process planner ends. On the shop floor, however, production disruptions occur due to shortages of resources (tools, material, operators, etc.) and bottlenecks. Foremen change the route sheets to meet production demands; however, this is done locally, without complete knowledge of its effect on the overall operation of the factory, and often leads to higher costs. It also became evident that expediting, with its associated costs, is a fact of life in this environment, and that capacity planning is hardly done.
Although some of the day to day production planning problems can be solved by scheduling techniques, it is apparent that rationalized alternate plans are needed in many cases to cope with the dynamic picture on the shop floor. The lack of communication between process planning and production planning obviously leads to higher costs and is a serious obstacle to achieving effective integrated manufacturing systems.
B&W manufactures steam boilers for power generation. Their production volume is very small and the product is very much made to order. The process plan(s) received from B&W cover only a few components, and no sub-assemblies are involved. Many production steps are involved in these processes, and most of them are either repeated (e.g. inspection) or involve human skills (e.g. welding). It would be desirable to select an example with more components and sub-assemblies, and with a good mixture of human skills and machinery involved in the production steps. Also important are possible alternative operations and alternative resources, which are lacking, or not needed, in their manufacturing steps.
AB provides a better environment to test RPE. Their products are high voltage and low voltage starters: metal cabinets approximately the size of a refrigerator, containing both mechanical and electrical assemblies. Some of the sheet metal work is stamped in-house using presses. This is one area where they have alternate resources, i.e. alternate presses for stamping. Internal components of the cabinet (such as switches, wire harnesses, electrical components, etc.) are assembled at different areas within the plant. Some operations are in precedence (e.g. sheet metal is stamped, welded together, and painted) while others are done in parallel (e.g. preparation of wire harnesses is initiated ~5 to ~10 days before final assembly). Final assembly is when all bought-out and in-house components are put together.
In other words, AB seems to have all the elements needed for prototyping RPE. They utilize both machining and assembly operations, and alternate resources and methods are available. The flow of materials within the plant is known, and process plans are documented.
One of the production planning problems at AB is “revisions”. This is a problem of existing production planning solutions, as discussed earlier in this document. Linear planning is rigid and does not allow for reacting to changes in the production environment. One revision of any kind (engineering or production planning) can create havoc in the production routine.
Their second problem, from the management point of view, is that the decision making resulting from revisions or disruptions is not documented. They seem to depend too much on the experience and judgment of the foreperson and the supervisor. This is a touchy issue, but could also be a potential problem for people at the upper management level.
• The BOM representation of the vehicle includes many sub-assemblies that could be used as test components for the integration system. A wide variety of test components could be established ranging from simple assemblies (e.g. engine hood) to very complex assemblies (e.g. suspension system).
Data structures and definitions may vary between CAPP and PPC. As a result, a set of global data structures was established to support the transfer of information between the formats and media of the various systems. Since the system is expected to respond and perform in real time, it must be at least event driven. Each event should have some effect which either pushes or pulls data through the system, and triggers other functional modules to perform CAPP, PPC, or other functions. Current CAPP and PPC systems have been developed independently, so some functions are missing which must be added to deal with failures. Finally, since the software packages are all separate programs, each dedicated to its own computer, we must deal with the details of communication between the programs. Two solutions were developed for this: one uses shared databases, and the other uses low level network sockets.
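The socket-based variant of inter-program communication might look like the following sketch, in which a socketpair stands in for a real network connection between two of the programs, and the message format (an event type plus a plan reference) is an assumption made for illustration:

```python
# Hedged sketch of socket-based event passing between separate programs.
# A newline-delimited JSON message carries the event type and a reference
# to the data concerned; the field names are illustrative only.
import json
import socket

def send_event(sock, event_type, ref):
    msg = json.dumps({"type": event_type, "ref": ref}).encode() + b"\n"
    sock.sendall(msg)

def recv_event(sock):
    buf = b""
    while not buf.endswith(b"\n"):      # read until end-of-message marker
        buf += sock.recv(1)
    return json.loads(buf)

# socketpair() stands in for a connection between, say, CAPP and the Integrator.
capp_side, integrator_side = socket.socketpair()
send_event(capp_side, "PLAN_READY", "plan-42")
event = recv_event(integrator_side)
```

In the real system each package would hold a long-lived connection to the Integrator, but the framing and parsing would follow this same pattern.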
Some prerequisites are required for a truly successful integrator. The first is a successful CAPP system capable of reactive planning, and replanning. A flexible and responsive PPC system is required to deal with shop floor failures and alternate plans. Unfortunately, most existing CAPP and PPC systems do not meet the prerequisites. They are generally not generic, not responsive, and not reactive. This is further aggravated by the functional gap between CAPP and PPC.
The data gap between functions is apparent, but it is simplified by a common data definition. Through the common data definition a common database may be set up to eliminate multiple copies of data, and the resulting data consistency problems. The previous lack of connection between CAPP and PPC means that all of the communication tools and functions are missing.
CAPP generates a process plan (and/or alternate process plans) for a product, given a set of available resources. A process plan is an ordered sequence of manufacturing operations that produces a product from appropriate raw materials. There is no standard format for process plans. Resources may include tools, machines, people, and materials. The database of CAPP stores both production-specific and production-general information. Examples of production-specific information are the geometry of the product, the relationships between parts, and the availability of resources. Examples of production-general information are the rules for utilizing resources and the rules of manufacturing practice. This database contains all the information needed by CAPP to generate the process plan. For instance, CAPP will suggest a sequence of machining steps to meet a requirement by matching the capabilities of the available machines, and by applying knowledge of good and bad machining practices.
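As a rough illustration of the internal structure such a plan implies, the sketch below models a plan as an ordered sequence of operations, each tied to its resources. All field names are assumptions, since, as noted above, there is no standard format:

```python
# Hypothetical internal representation of a process plan: an ordered
# sequence of operations, each consuming resources (tools, machines,
# people, materials) and carrying setup and run times.
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    resources: list            # e.g. ["mill-3", "operator-A"]
    setup_min: float = 0.0
    run_min: float = 0.0

@dataclass
class ProcessPlan:
    part_id: str
    operations: list = field(default_factory=list)

    def total_time(self):
        """Sum setup and run times over the ordered operation sequence."""
        return sum(op.setup_min + op.run_min for op in self.operations)

plan = ProcessPlan("bracket-7", [
    Operation("rough mill", ["mill-3"], setup_min=10, run_min=25),
    Operation("drill", ["drill-1"], setup_min=5, run_min=8),
    Operation("inspect", ["inspector"], run_min=4),
])
```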
The database of CAPP is typically static throughout planning. The output process plan is then given to a scheduler (either a person or PPC). A process plan may become unusable during on-line production because the planning environment may have become invalid as the shop floor changes. As a result, CAPP must be re-started with new information to re-generate parts of the original process plan, or a whole new process plan.
PPC schedules a number of process plans for the shop floor on either a short-term or a long-term basis. It deals with the flow of materials by optimizing the utilization of resources to meet the target production. Materials refers to raw materials, sub-assemblies, parts, and lots. These materials are continuously consumed and produced at work-sites. A work-site is a location where a step of a manufacturing operation takes place; it could be a single resource or an assembly of resources. Materials flow from work-site to work-site, and each work-site has a setup time and a processing time. The objective of PPC is to maximize the performance of the resources by balancing the rates of consumption and production of materials to meet the target production.
The shop floor changes dynamically, both expectedly and unexpectedly, during actual production. The consumption of inventories and the maintenance of machines are examples of expected changes. The overloading of machines, lack of raw materials, changes in orders, and errors are examples of unexpected changes. PPC reacts immediately to an unexpected occurrence by re-scheduling the process plans to maintain the target production as much as possible. For instance, PPC will re-schedule some work that was originally scheduled for an overloaded machine to a less busy machine. Re-scheduling is a critical function of PPC.
The database of PPC stores the process plans and shop-floor information. The shop-floor information contains the production schedule, the flow patterns of the materials, the utilization of work-sites, and so on. The shop-floor information is dynamically updated to reflect the latest status during on-line production.
CAPP and PPC are two stand-alone modules that are only logically connected by the relation that the output from CAPP is the input to PPC. There is no mechanism for either module to communicate its needs to the other. For instance, PPC cannot invoke CAPP to re-plan parts of a process plan based on current shop-floor conditions. This represents the functional gap that exists in integrating CAPP and PPC.
The databases of these modules are typically different, and not necessarily compatible; they may even reside on separate media. There exists a data gap in the integration: mapping and communicating information between the two databases. For instance, the formats of two related pieces of information on resources, the planning environment (of CAPP) and the production environment (of PPC), are often different.
Process plans can be stored without a great deal of trouble (refer to the appendices). Unfortunately, there is a large discrepancy between definitions of resources. It was found that some systems had specialized assumptions, or had to be customized for each application. Although a generic structure was proposed here, the resource data definition is still loosely defined. It is the opinion of the authors that an adequate data structure would only result from years of trial in various manufacturing institutions.
A secondary issue when storing process plan data is dealing with multiple plans, and versions of those plans. This may be virtually ignored because of the version control capabilities available in all modern DataBase Management Systems (DBMS).
A latent issue arises when multiple systems use the data: in effect, the system may have many sources for the same information. This leads to data synchronization and validation problems. We decided that only one software package would be allowed to update the common data, which eliminates problems of data change notification. With the eventual development of a global database, however, the data could be updated immediately in a global sense.
Previous manual, and partially automated, systems would pass paper, phone calls, messages, and other forms of communication between Process Planning and Scheduling to request, forward and trouble-shoot process plans. In a fully automated system this is not feasible, due to time delays and lack of order. Thus, a formal set of events has been defined to facilitate integration of action.
An event may drive a process plan from CAPP towards PPC. In turn, PPC may drive information about a failure back to CAPP, which will demand replanning. As these events pass through the Integrator, they will also call for movement of data from one source and format to another.
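A minimal sketch of such a formal event set, and of the routing it implies through the Integrator, might look as follows. The event names and routing table are invented for illustration; the actual set is defined elsewhere in this report:

```python
# Illustrative event set for the CAPP <-> Integrator <-> PPC traffic.
# Each event type has one destination module; the Integrator consults
# this table when forwarding a received event.
from enum import Enum

class Event(Enum):
    PLAN_REQUEST = "PPC asks for a process plan"
    PLAN_READY = "CAPP delivers a plan toward PPC"
    RESOURCE_FAILURE = "PPC reports a shop-floor failure"
    REPLAN_DONE = "CAPP/RPE returns a revised plan"

# Hypothetical routing table: event type -> destination module.
ROUTE = {
    Event.PLAN_REQUEST: "CAPP",
    Event.PLAN_READY: "PPC",
    Event.RESOURCE_FAILURE: "CAPP",   # failures flow back to replanning
    Event.REPLAN_DONE: "PPC",
}

def forward(event):
    """Return where the Integrator should send this event."""
    return ROUTE[event]
```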
• Another manipulation is to write the data back out into a MacroTask file (e.g. to transfer resource status). This file is not quite the same as the sample input file, as some fields (such as SETXY) are ignored on input, and thus cannot be re-generated from the common data.
As described in the previous section, there are many forms of data storage to be considered, and the data gap has to be bridged by the Integrator. Each package (CAPP, RPE and PPC) has its own data storage mechanism. For example, RPE uses PDL files and internal data structures, while GRIPPS (for PPC) uses Oracle, a relational database. As a result, we could not assume a global database, although this would have been preferable. Without a global database, the Integrator must have a separate interface to each data source. Each interface transfers data between internal structures and the external data source. This increases the independence of each software package, and simplifies replacement of one package with another.
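The per-source interface idea can be sketched as a small adapter hierarchy. The class names are hypothetical, and dictionaries stand in for the real PDL files and Oracle tables:

```python
# Sketch: one adapter per external data source, all speaking the
# Integrator's common internal form. Swapping a package (say, a different
# PPC database) means writing one new adapter, nothing else.
class SourceAdapter:
    """Reads/writes one external store using the common internal form."""
    def load(self, ref):          # external format -> internal structure
        raise NotImplementedError
    def store(self, data):        # internal structure -> external format
        raise NotImplementedError

class PDLFileAdapter(SourceAdapter):       # RPE side: PDL flat files
    def __init__(self):
        self.files = {}                    # stands in for the file system
    def load(self, ref):
        return self.files[ref]
    def store(self, data):
        self.files[data["ref"]] = data

class RelationalAdapter(SourceAdapter):    # PPC side: relational database
    def __init__(self):
        self.rows = {}                     # stands in for database tables
    def load(self, ref):
        return self.rows[ref]
    def store(self, data):
        self.rows[data["ref"]] = data

def transfer(src, dst, ref):
    """Integrator core: pull from one source, push to another."""
    dst.store(src.load(ref))

pdl, db = PDLFileAdapter(), RelationalAdapter()
pdl.store({"ref": "plan-1", "ops": ["mill", "drill"]})
transfer(pdl, db, "plan-1")
```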
As mentioned before, the global database strategy would allow a superior implementation, but it is not practical until all packages store data in the same database, using the same structures. This problem was made obvious when considering resource descriptions: we found that while CAPP deals with specific resources, PPC deals with resources lumped into a capacity group. The representations would be quite different, even if both were stored in the same database. The actual solution is described in the sections which follow.
There are three possible approaches for integrating CAPP and PPC. The first is a high-level integration of functions and CIM modules which can be called a “global integration scheme”. The work of Harhalakis et al. (1990) is in this category. Each CIM module (CAD, CAPP, and MRPII) is allowed to maintain its own database, and an updating scheme is devised. This method is very data intensive, results in duplication of data, and does not address the need for a non-linear plan representation which considers actual manufacturing resources and constraining resources and events. The manufacturing system's “events” are not considered; instead, their events relate to each individual data record, not to the status of the modules in the system.
The second approach is the opposite extreme, and proposes complete integration of planning and scheduling: CAPP and PPC become one system. The merit of this approach is that planning and control depend on each other and must ultimately use the same data. Moreover, the borderline between planning, scheduling and control is fuzzy. In this approach the system should obviously use a common database management system, and the representation would be common, using Petri nets for instance to model logical and temporal relationships. FLEXPLAN is a system being developed in that direction by Törnshoff and Detand (1990).
However, CAPP systems are essentially time independent, while PPC systems are necessarily time dependent. Today’s CAPP systems do not take these dependencies into account. Even if we overcome this difficulty and merge the planning optimization task and the scheduling optimization task into a single optimization task, it cannot be solved due to complexity reasons as noted by Törnshoff et al. (1989).
The third approach, which is described in this paper, can be considered a realistic intermediate between the first two. The proposed approach to integration is essentially modular: Process Planning and Production Planning and Control need not be one system. However, the CAPP and PPC systems must be able to react to shop floor disturbances (events), and to handle non-linear process plans, resources and constraints. Physically, the database can be common, or a standard distributed database; however, common definitions, structures, interpretations of events, and synchronization issues in a multi-tasking networked/parallel environment must be considered. In this approach a separate module called the “Integrator” is used. It should be recognized, however, that the boundaries between the various modules are in fact arbitrary, and several physical implementations are possible. The modular approach has practical advantages, including flexibility of implementation as well as the possibility of integrating existing CAPP and PPC systems. In the remainder of this paper we describe the functions of the Integrator, and the RPE (Reactive Planning Environment) module which was developed in connection with this project (Stranc 1992). RPE allows for representative evaluation and selection of alternate plans. The PPC system addressed in this project is GRIPPS (Kuhnle 1991).
Process plans for producing components and assembling them into products are used to make routing sheets which are used, in turn, by the PPC system to create a master schedule for the manufacturing facility. Ideally there should not be any deviation from the master schedule. In reality, however, 20-30% of the process plans and routing sheets are modified locally to cope with production bottlenecks, equipment failures, resource shortages and changes in order priorities. These problems cause unforeseen and unacceptable delays in production, and may require a reaction from the PPC system depending on their duration and severity. This will typically call for local rescheduling, which requires shifting work to alternate resources or, in more extreme cases, to different processes. Here we will focus on the reactive process planning aspects only, leaving reactive production planning to the accompanying paper by our collaborators at IPA.
2. Allow the combination and representation of mixed domain operations in a plan. In particular, it deals with product assembly planning as well as other processes which may be required to complete a product, such as welding, soldering, cleaning, inspection, fabrication and machining, at the macro operation level (not detailed task planning).
6. Allow evaluation of alternate plans, according to user defined criteria such as time, scrap rate, load balancing and cost, and selection of the best plan under given conditions, such as the absence or over-utilization of certain resources.
A scheme for representing micro and macro tasks in a process plan and routing sheets using a multi-layered precedence graph has been developed. Resources are modelled and associated with each task. ‘PreConstraints’ define order between macro tasks (operations). ‘AltConstraints’ are used to specify alternative processing methods within a process plan which can achieve a common end result (Figures 7.1, 7.2, 7.3, & 7.4). For example, alternate plans for a product assembly using manual, semi-automatic or fully automated systems may be represented and used as substitutes to deal with bottlenecks. These alternatives are examined and evaluated as needed, using graph search methods, in response to feedback from the PPC system.
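The role of 'AltConstraints' in that scheme can be sketched as follows. Precedence pairs order the macro tasks, an alternative group lists interchangeable methods for one operation, and a simple search selects the best available alternative when PPC reports a bottleneck. The task names and costs below are invented for illustration:

```python
# Sketch of PreConstraints and AltConstraints in a process plan.
# PreConstraints: ordered pairs (before, after) between macro tasks.
pre_constraints = [("stamp", "weld"), ("weld", "paint")]

# AltConstraints: interchangeable methods achieving the same end result,
# each with an illustrative cost (e.g. time in minutes).
alt_groups = {
    "assemble": [("manual", 30), ("semi-auto", 18), ("automated", 9)],
}

def best_alternative(group, unavailable=()):
    """Pick the cheapest method whose resource is still available.

    A stand-in for the graph search RPE performs when PPC feedback
    marks a resource as bottlenecked or absent.
    """
    options = [(m, c) for m, c in alt_groups[group] if m not in unavailable]
    return min(options, key=lambda mc: mc[1])

# Normally the automated line wins; if PPC reports it bottlenecked,
# the next-best alternative is substituted.
choice = best_alternative("assemble", unavailable={"automated"})
```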
RPE uses a feature-based, object-oriented approach (ElMaraghy, 1991) to represent a product structure hierarchically. Bills of material produced by conventional CAD systems may also be used. Current process planners produce detailed (micro) tasks in a single domain (e.g. machining or assembly). The resulting plans are input to RPE and corresponding precedence graphs are generated. These are edited and modified interactively by the user to add operations not considered by the micro process planners. It is also possible for the user to enter the whole plan and its alternatives interactively through an effective graphical interface. The output from RPE is the recommended plan. The precedence graph process plan format would be useful to those PPC systems which are capable of using this powerful representation in rescheduling. Alternatively, the precedence graph is converted to the usual sequential process plan format in a flat file for use by traditional PPC systems. This allows RPE to be interfaced with conventional PPC systems currently in use. The selected plan and operations sequence are also displayed along with the resources layout within the plant.
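The conversion for traditional PPC systems, flattening a precedence graph into one sequential plan, is essentially a topological sort. The three-task graph below is invented for illustration:

```python
# Flattening a precedence graph into a sequential operation list.
# The mapping gives each task's set of prerequisite tasks.
from graphlib import TopologicalSorter

graph = {
    "weld": {"stamp"},       # stamping must precede welding
    "paint": {"weld"},       # welding must precede painting
    "inspect": {"paint"},    # painting must precede inspection
}
sequence = list(TopologicalSorter(graph).static_order())
```

Any linearization consistent with the precedence constraints is valid; for a simple chain like this one the order is unique.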
PPC systems often aggregate individual resources (machines, tools, etc.) into a higher level resource called a capacity group. One of the important integration issues we faced was the development of a clear definition of the resource models used by CAPP and RPE, of the capacity groups used by PPC, and of a mapping between the two.
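The mapping itself can be sketched as a pair of lookups: one direction aggregates CAPP's specific machines into PPC's capacity groups, and the other expands a group back into candidate machines for CAPP/RPE. The group names and members are illustrative only:

```python
# Hypothetical capacity-group definition, as PPC would see it.
capacity_groups = {
    "milling": ["mill-1", "mill-2", "mill-3"],
    "drilling": ["drill-1", "drill-2"],
}

# Invert once so the Integrator can translate in either direction.
resource_to_group = {
    machine: group
    for group, machines in capacity_groups.items()
    for machine in machines
}

def to_ppc(resource):
    """CAPP's specific machine -> PPC's capacity group."""
    return resource_to_group[resource]

def to_capp(group):
    """PPC's capacity group -> candidate machines for CAPP/RPE."""
    return capacity_groups[group]
```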
The Integrator has been developed, along with a number of features which make it useful as a functional bridge. Version and revision control have been added to ensure continuity of plans within the Integrator. The Integrator has been given an interface which allows it to be used with RPE. This adds the benefits of reactive planning, without resorting to the full replanning which would otherwise have to occur in the CAPP system. In both the database and sockets implementations, CAPP, RPE and PPC are very independent. They can operate concurrently, on the same or many machines, without complication. This has the added benefit of making the system robust and fault tolerant.
The data is produced, utilized and updated by the CAPP, RPE, and PPC systems. When data is changed, it results in a data change notification event. If a system wants to declare data invalid, it does this with a request. Therefore, when operating in steady state the interfaced systems pass events and requests to push and pull process plans in production.
The issue of common data may have a profound impact on the event types which the system uses. For example, if a CAPP system is based on its own proprietary database (or files), and the PPC system is based on another database, then:
This problem also occurs when using files, or other data storage mechanisms. Therefore, in lieu of a common database the integrator should use its own internal common data definition to transfer data between CAPP, RPE and PPC. The primary (and novel) function of the Integrator is dealing with events from CAPP, RPE, and PPC. Events are passed to the integrator using messages, and then to another client using messages. Depending upon the message source, and content, the Integrator may send a message to another process. The content of messages will commonly be:
If a common database is used, then a message does not need to contain any data, and only needs to refer to the data which has been changed. If a common database is not used, then the integrator must maintain its own database, which is updated when data changes. This update may come in two forms: either all data is passed as messages, or all data is remotely accessed from files and databases. To summarize, the three types (cases) of event handling features of the CAPP/PPC integrator are:
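The case 3 event described above can be sketched in C. This is a hypothetical illustration, not the project's actual message layout: the names `EventMsg`, `needs_data_fetch`, and the fields are invented for this example. The key point is that the message carries only a reference (a record key) to the changed data, and the Integrator decides whether it must fetch the data itself from an external store.

```c
#include <assert.h>

/* Hypothetical sketch of an Integrator event message (names and
 * fields are illustrative only).  Under case 3 the message carries
 * a reference to the changed data, never the data itself. */
typedef enum { EV_PLAN_REQUEST, EV_DATA_CHANGED, EV_DATA_INVALID } EventType;

typedef struct {
    EventType type;
    char source[16];    /* originating module: "CAPP", "RPE", "PPC" */
    char dest[16];      /* module the Integrator should forward to  */
    char data_ref[32];  /* key of the changed record, not the data  */
} EventMsg;

/* The Integrator must fetch data from an external store before
 * forwarding only when the event is a change notification. */
int needs_data_fetch(const EventMsg *m)
{
    return m->type == EV_DATA_CHANGED;
}
```

A plan request, by contrast, would trigger forwarding without any intermediate data load.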
Passing data as in case 2 is time consuming, and the integrator may be overwhelmed by the volume of data. Using the common database is the simplest solution, except that all applications are tied to the same database software. The final method in case 3 uses the references to changed data to load common data structures in the Integrator. It is commonly agreed that simply passing a reference to changed data is the best mechanism. Cases 1 and 3 above are dependent on direct access to the outside data sources, common or not. The case 3 approach was chosen to accommodate the greatest number of CAPP and PPC systems. Case 1 should be adopted when a global and common database is used.
For our implementation the Integrator uses the same database used by PPC. In this case it is a commercial relational database, and the PPC system is GRIPPS (Kuhnle, 1991). The RPE program operates on PDL files (ElMaraghy, 1991), so the integrator handles reading these files and writing the data to the commercial database. A similar function occurs for the CAPP system.
Two methods for communication between processes have been developed independently, but provide the same functionality. In the first message passing mechanism, a database table is used to store messages which may be picked up or issued by any database client. In the other method, a message server (Jack and ElMaraghy, 1992) connects all modules (CAPP, RPE, PPC, and the Integrator) through the use of TCP/IP sockets (Sechrest, 1986), and messages are routed between groups of clients. This method of communication is suited to client programs which are not registered on the database. The block diagram of the CAPP/PPC Integrator is shown in .
illustrates the basic structure of the software. The message layer deals with interprocess communication between the Integrator and CAPP, PPC and RPE. The Executive routines track message content, and decide how to respond, by directing data transfer and issuing new events. The data structures are used for internal storage of the data when transferring between applications. To load these structures there is a generic data interface layer, which may use various sources of data. These sources are PDL, a standard database and CAPP files. The final features shown are the filtering routines. The filter functions will “screen out” resources which are unavailable or overutilized for planning. This is used when sending resource data to RPE.
In , the basic flow of events is pictured. All events start when a message is issued from CAPP, PPC, or RPE. This message triggers the loading of data into the Integrator. The data is then downloaded to another data store, using filtering if required. A message is then issued to the recipient of the new data.
The definitions of common data are essential to make the CAPP/PPC Integrator work. These are required so that data from either CAPP or PPC can be put in a common format, which can then be translated into another format. This also gives the Integrator the ability to store plans if required. While CAPP and PPC have common requirements for the process plans themselves, there is a significant difference in the representation of resources. The PPC program uses the concept of a Capacity Group, which describes a collection of resources, while CAPP and RPE refer to individual resources. Therefore the common definition of data includes a mapping between resources and the capacity groups they are lumped into.
On the other hand a complete description of resources is required so that the CAPP and RPE programs can pass adequate information so that when a PPC plan fails because of a capacity group, the failure can be mapped back to a particular resource.
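The mapping just described can be sketched as a simple lookup table in C. This is an illustrative sketch only; the field and function names are invented, and the real Integrator keeps this mapping in its generic data structure. Given a capacity group reported as failed by PPC, the routine recovers the individual resources that CAPP and RPE reason about.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical resource-to-capacity-group mapping record; the actual
 * Integrator structures differ.  Each resource records the capacity
 * group it has been lumped into. */
typedef struct {
    char res_id[16];
    char cap_grp_id[16];
} ResMap;

/* Collect (up to max) the resources belonging to a capacity group,
 * so a PPC-level failure can be mapped back to CAPP-level resources.
 * Returns the number of resources found. */
int resources_in_group(const ResMap *map, int n, const char *grp,
                       const char **out, int max)
{
    int found = 0;
    for (int i = 0; i < n; i++)
        if (strcmp(map[i].cap_grp_id, grp) == 0 && found < max)
            out[found++] = map[i].res_id;
    return found;
}
```

Running the lookup in the reverse direction (resource to group) is a single field access, which is why the mapping is stored on the resource side.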
In providing a fully automated integration of the CAPP and PPC modules, the Western team proposed to develop a knowledge-based, automated, and stand-alone integrator module, written in C, that could be used in conjunction with any stand-alone CAPP and PPC modules on both the functional and data levels.
With respect to the bridging of the data gap, the integrator is equipped with specific knowledge of how the databases map onto each other. This permits the integrator to operate as a true liaison between CAPP and PPC. For instance, a process plan generated by CAPP can be translated directly into a format readily recognizable by PPC, and the work-site information collected by PPC can be converted to the resource information readily usable by CAPP.
With respect to the bridging of the functional gap, the integrator is equipped with appropriate routines to coordinate and complement the existing functionality of CAPP and PPC. This permits the integrator to operate as an automated interface between CAPP and PPC. For instance, CAPP will be invoked automatically (indirectly via the integrator) by PPC when PPC needs to have parts of a process plan re-planned.
As mentioned before, the databases of CAPP and PPC are often different (and may not even be compatible). There was an attempt to provide a set of data, residing on a DBMS, mutually accessible by both CAPP and PPC. This set of data can be viewed as information that one module would maintain for another under an integrated setting. This set of data can also be viewed as an explicit union of the two databases. For instance, this union could include the information on machine utilization which is updated by PPC and is required by CAPP in process planning. There are two main drawbacks in this approach. First, it imposes a restriction on the implementation of the modules. Second, it solves only one specific scenario. Nonetheless, this attempt addresses two critical issues in bridging the data gap: data translation and data passing. The essence of the above attempt is that each module translates (parts of) its database to a pre-determined format, and then places this resulting translation at a pre-determined location for the other module to pick up.
In order to bridge this data gap, the Western team has proposed an integrator with the following components (as illustrated in ). First, an internal data structure that generically describes the databases of CAPP and PPC. This data structure functions much like the previously mentioned pre-determined format. Second, a set of routines for the integrator to access its own internal data structure, as well as the databases of CAPP and PPC. Within the scope of these routines, “the databases” refers only to the external databases that CAPP and PPC would use when running in stand-alone mode. Third, a set of routines that translates between the information kept by the internal data structure and the two databases. The coding of these routines is part of the setup of the CAPP/PPC integration. These three components together permit the two separate databases to be reconciled by the integrator.
The functional gap can be bridged in a similar way. The first step is to identify the intended functionality of the integrated system, and to determine how the functionality of CAPP and PPC fits in with the overall framework. The second step is to provide a set of routines that coordinate, as well as complement, the existing functionality of CAPP and PPC to produce the intended overall functionality of the integrated system (as shown in ). Coordinating means interfacing between CAPP and the integrator, and interfacing between PPC and the integrator. Through these interfaces, the internal routines (of CAPP and PPC) can be invoked, and the results can be communicated back to the integrator. Complementing means an automated connection between the functionality of CAPP and PPC. As an illustration, the integrator could provide the following primitive routines to facilitate the request for a process plan:
This chapter describes the mechanism of bridging the data gap for the integration of CAPP and PPC. Specifically, the bridging of the data gap between the internal data structure (of the integrator), the ASCII files, and the Oracle DBMS (within the context of the prototype) will be presented in detail.
In the following sections, the elements in the internal data structure, together with examples, will be presented. The interfacing between the integrator and the external storages will be dealt with in a similar manner. The topic of information translation will also be addressed. In conclusion, a list of improvements for the future implementation of the prototype will also be included.
The generic data structure is a vital part of the integrator. This structure serves two purposes. First, it acts as a standard representation for the information, stored in the databases of CAPP and PPC, that is relevant to the integration. This serves as a standard basis for communicating information between modules. Second, it allows this information to be stored internally to the integrator for future manipulation.
The current version of the generic data structure was initially set up jointly by all three teams. The generic data structure, programmed as C-structures, has subsequently been revised by the Western team. There are eight basic C-structures that handle four types of information: resources, parts, capacity groups, and process plans. There is one array of each of the basic C-structures, and a super-structure of these eight arrays. This super-structure contains all the information the integrator needs to relate one planning application across CAPP, PPC, and the integrator. The eight basic C-structures are listed below.
Resource is an aggregate term that refers to all objects (e.g. machines, tools, materials, and people) involved in production. Two C-structures are used to describe the available resources. The first C-structure “RESRCE” describes the identification, application, cost factor, time factor, and availability of each resource. The elements of RESRCE are:
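A C sketch of a structure like RESRCE, built from the five fields named above, might look as follows. The member names and types are our assumptions for illustration; the project's actual declaration may differ.

```c
#include <assert.h>

/* Hedged sketch of a RESRCE-like structure; member names and types
 * are illustrative, following the fields named in the text:
 * identification, application, cost factor, time factor, and
 * availability. */
typedef struct {
    char   res_id[16];      /* identification of the resource       */
    char   application[32]; /* what the resource is used for        */
    double cost_factor;     /* relative cost of using the resource  */
    double time_factor;     /* relative speed of using the resource */
    int    available;       /* 1 if currently available, else 0     */
} RESRCE;
```

An array of such records, with a count, would form one of the eight arrays held in the super-structure.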
A part refers to a clearly distinguishable material object that exists between operations. It can be a manufactured, finished, or purchased object. Two C-structures are used to describe the parts and their inter-relationships involved in the production. The C-structure “PRT_DAT” describes the identification and characteristics of each part. The elements in PRT_DAT are listed below.
As an example, the table below shows four related parts (A, B, C, and D): 2 units of A and 1 unit of B are needed to produce 1 unit of C, 3 units of A and 1 unit of C are needed to produce 1 unit of D.
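The part relationships above can be encoded as containment records and expanded recursively. The record and function names below are hypothetical, but the quantities come directly from the example: 2 A and 1 B per C, and 3 A and 1 C per D, so one unit of D ultimately consumes 5 units of A.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of part-containment records like those kept in
 * PRT_CNTN: each record says `qty` units of `child` go into one unit
 * of `parent`. */
typedef struct { char parent[8]; char child[8]; int qty; } PartCntn;

/* Total units of `raw` needed for one unit of `part`, expanding
 * sub-assemblies recursively through the containment records. */
int units_needed(const PartCntn *c, int n, const char *part, const char *raw)
{
    if (strcmp(part, raw) == 0) return 1;
    int total = 0;
    for (int i = 0; i < n; i++)
        if (strcmp(c[i].parent, part) == 0)
            total += c[i].qty * units_needed(c, n, c[i].child, raw);
    return total;
}
```

This kind of expansion is exactly what the bills-of-material view of PRT_CNTN (discussed later in this chapter) makes possible without reference to super-tasks.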
A capacity group is a clearly distinguishable work-site on the shop-floor. It is a logical grouping of resources, and it performs a sequence of operations. Often, a capacity group is set up individually to meet a specific production requirement. A capacity group is denoted by the C-structure “CAP_GRP”. CAP_GRP describes the identification, setup, time factor, cost factor, and availability of each capacity group. Below is a list of the elements in CAP_GRP.
A process plan is divided into tasks. The tasks are ordered, and every task is characterized in two ways: by the goal of the task, and by the operations required to achieve this goal. The goal is measured in terms of some clearly distinguishable object within the overall flow of materials. For instance, the goal could be a certain sub-assembly. Typically for a task, the goal stays the same while the operations vary during the complete production. Three C-structures are used to describe the process plan. The first C-structure “SUPER_TASK” denotes a super-task. A super-task describes the result of all operations that happen within a single capacity group, without specifying which capacity group. Below is a list of the elements in SUPER_TASK.
The third C-structure, named “PROC_DSCR”, denotes process description. It describes the operations to be performed for the super-task. It could be either the preferred or alternative set of operations for the super-task. Below, the elements of PROC_DSCR are listed.
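The relationship between SUPER_TASK and PROC_DSCR might be sketched as below. Member names, the `preferred` flag, and the selection routine are our illustrative assumptions; the sketch only captures the two facts stated in the text: each super-task names the part it produces, and each super-task can have a preferred or alternative process description.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketches only; real member names differ. */
typedef struct {
    char task_id[8];
    char part_id[8];   /* part resulting from the super-task */
} SuperTask;

typedef struct {
    char dscr_id[8];
    char task_id[8];   /* super-task this description belongs to */
    int  preferred;    /* 1 = preferred operations, 0 = alternative */
} ProcDscr;

/* Return the preferred process description for a super-task,
 * or NULL when none is recorded. */
const ProcDscr *preferred_dscr(const ProcDscr *d, int n, const char *task)
{
    for (int i = 0; i < n; i++)
        if (strcmp(d[i].task_id, task) == 0 && d[i].preferred)
            return &d[i];
    return 0;
}
```

Falling back to an alternative description when the preferred one is blocked is precisely the replanning hook that RPE exploits.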
The above generic data structure is derived from the structure given in the document “Generic Data for CAPP/PPC Integration”. Both versions share a number of similarities. For instance, they both have eight C-structures. These C-structures provide a relational scheme of representing information similar to that of a relational DBMS. By relational, we mean that information is stored as tables of related records (although the information is not necessarily organized in a normalized form). The structures have similar interpretations but slightly different representations. The main differences are compared below.
The relation between parts and super-tasks is ambiguous in the original version. It is only stated that the super-task is measured in terms of some clearly distinguishable object. There is no mention of whether or not this resulting object is a part, and there is no field for a part identifier in the C-structure SUPER_TASK. The Western team resolved this by providing a field for the part identifier in the current version of SUPER_TASK, for an explicit declaration of the part that results from the super-task.
In the original version of the C-structure PROC_CNTN, there is a field that specifies the quantity relation from supplier (super-task) to consumer (super-task). This again emphasizes the point made in the last paragraph about clarifying the relation between part and super-task. More importantly, this field is redundant because the information has already been kept, more appropriately, in the C-structure PRT_CNTN. This field is excluded from the current version of PROC_CNTN.
There is a similar redundancy between SUPER_TASK and PROC_DSCR. The field for the super-task identifier is removed from the current version of PROC_DSCR because this information can be retrieved from SUPER_TASK.
The original version of the C-structure PRT_CNTN does not provide a field for part identifier when PRT_CNTN is supposed to specify how one part is required for the production of another. The Western team treated this as an oversight, and provided a field for part identifier in the current version of PRT_CNTN.
There are two fields in the original version of the C-structure PRT_CNTN for the supplier and consumer (super-tasks) of the subject part. There is also a field in PRT_CNTN for an identifier of a process connection. The information on the super-tasks has already been kept in the referenced process connection. The important issue here is the usage of super-tasks to specify how one part is required for the production of another. The information about which part is used under which super-task and which process description is readily available from the C-structures SUPER_TASK, PROC_CNTN, and PROC_DSCR. A simpler way of describing the relation between parts is to directly describe the parts that are related. This information can be obtained from non-production-specific sources, such as the bills of materials. The C-structure PRT_CNTN in the current version has been simplified as mentioned above.
This data structure is currently used in the prototype, and is only a trial version. The Western team will continue to revise this data structure to provide a full and detailed representation of the complete planning and production environments.
A simple example will be given here to demonstrate this generic data structure. As mentioned before, the application for the prototype is to communicate process plans between the different information storages. The integrator reads a process plan, from an ASCII file supplied by the McMaster team, into its internal data structure and subsequently into Oracle. The complete file is given in the document “Input Files for RPE”. The plan describes the assembly of an air cylinder. It covers the full production cycle from releasing stocks, assembling parts, and inspecting products, up to shipping. This process plan is listed in Appendix A. The representation of the portion on assembling the air cylinder with the generic data structure is presented below.
Three types of routines were provided for accessing the super-structure. Currently, there are routines that retrieve records from, insert records into, and print the content of the super-structure. In future implementations, there could also be routines that retrieve, update, and delete according to the key identifiers.
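The three kinds of access routine can be sketched over a single array, standing in for one of the eight arrays in the super-structure. The type and routine names here are hypothetical, not the project's actual ones.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal illustrative sketch of insert / retrieve / print routines
 * over one fixed-size table (names are invented for this example). */
#define MAX_REC 64
typedef struct { char id[16]; double cost; } Rec;
typedef struct { Rec rec[MAX_REC]; int count; } Table;

/* Insert a record; returns -1 when the table is full. */
int tbl_insert(Table *t, const Rec *r)
{
    if (t->count >= MAX_REC) return -1;
    t->rec[t->count++] = *r;
    return 0;
}

/* Retrieve the first record matching `id`, or NULL if absent. */
const Rec *tbl_retrieve(const Table *t, const char *id)
{
    for (int i = 0; i < t->count; i++)
        if (strcmp(t->rec[i].id, id) == 0) return &t->rec[i];
    return 0;
}

/* Print every record, one per line. */
void tbl_print(const Table *t)
{
    for (int i = 0; i < t->count; i++)
        printf("%-16s %8.2f\n", t->rec[i].id, t->rec[i].cost);
}
```

Key-based update and delete routines, mentioned as future work, would follow the same pattern as `tbl_retrieve`.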
The Oracle DBMS, a relational DBMS, is used to simulate the database of either CAPP or PPC. The objective is to test the mechanics of data passing between the integrator and an external database. For this prototype, the database is set to be identical to the internal data structures of the integrator. There are eight data-tables (in the Oracle DBMS) that parallel the above-mentioned eight arrays of C-structures. Pro*C routines have been programmed to allow the integrator to access the external Oracle DBMS: connect to the Oracle DBMS, release from the Oracle DBMS, write records (from the generic data structure) into the Oracle DBMS, and read records from the Oracle DBMS (into the generic data structure).
Pro*C is an Oracle-specific language that allows SQL statements to be embedded within C programs. SQL is the standard query language for relational DBMSs. The embedded SQL statements offer the most concise and accurate description of the necessary database operations. The Oracle DBMS has a pre-compiler that translates a Pro*C program into a C program. The main drawback of Pro*C is that it does not have a true block structure; Pro*C uses goto’s and labels. Pro*C also requires all variables that are used in the embedded SQL statements to be declared globally.
The research effort has indicated that it is simple to communicate information between an external DBMS and the integrator. This should not come as a surprise. The Western team has made the Oracle DBMS interface modular. In the event of a DBMS change, only this interface will have to be adjusted accordingly.
The effort has also revealed the significant difference in time performance between accessing internal and external data storages. It is much faster for the integrator to access its internal data structure than any external DBMS. This difference implies that minimizing the actual amount of data access to the external DBMS will improve the efficiency of the integrated system. This supports the importance of representing the information that is relevant to the CAPP/PPC integration internally to the integrator.
The ASCII file is the second form of external storage mentioned above; reading from and writing to ASCII files is a fundamental capability of any program. The objective is to test the mechanics of data translation. Specifically, an ASCII file containing the process plan (of an air cylinder) given in PDL is used in implementing the prototype. This ASCII file is supplied by the McMaster team. PDL is a product description language designed by the McMaster team. Naturally, PDL describes process plans in a format different from the generic data structure. Special routines have been programmed to allow the integrator to translate the process plan of the air cylinder (given in PDL format) into a format that can be stored in the generic data structure.
The research effort has revealed several significant elements of the process of data translation. First, it is crucial to have a definite goal for the translation process, and to have a clear understanding of both the structure and content of the data that is to be translated before beginning any translation. Second, there may not be any compatible translation for certain pieces of information that must be translated (due to the individual makeup of the two formats). This situation requires that the formats be modified, or that the data not be translated. A possible solution is to refine the generic data structure when setting up the integrated environment.
The effort has shown that the current version of the generic data structure does not support the full structure and format of the process plan given in PDL. Since the generic data structure will be revised continuously and PDL is only a test case, the focus here is to translate the pieces of information that can be translated between PDL and the generic data structure.
The Western team has used the UNIX utilities lex (a token analyzer) and yacc (a parser) to extract the necessary information from the ASCII file. The extracted information is then put into the internal data structure by the corresponding access routines.
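The actual extraction is done with lex and yacc; as a much-simplified illustration of the same idea in plain C, the routine below pulls a keyword's value out of a line. The "KEY value" line format here is hypothetical, not actual PDL syntax, and `extract_field` is an invented name.

```c
#include <string.h>

/* Much-simplified C sketch of the lex/yacc extraction step, on a
 * hypothetical "KEY value KEY value ..." line format (not real PDL).
 * Copies the token following `key` into `out`; returns 0 on success,
 * -1 when the key or its value is missing. */
int extract_field(const char *line, const char *key, char *out, int max)
{
    char buf[128];
    char *tok;

    strncpy(buf, line, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    tok = strtok(buf, " \t");
    while (tok) {
        if (strcmp(tok, key) == 0) {
            tok = strtok(0, " \t");     /* the value after the key */
            if (!tok) return -1;
            strncpy(out, tok, max - 1);
            out[max - 1] = '\0';
            return 0;
        }
        tok = strtok(0, " \t");
    }
    return -1;
}
```

A real grammar-driven parser handles nesting and alternatives that this token scan cannot, which is why lex and yacc were chosen for PDL.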
Graphical interfaces were developed for CAPP, PPC, RPE, and the Integrator (some of the modules were simulators). These programs tied into the message board, and sent messages back and forth between each other at timed intervals. All of the programs were able to inject messages manually, as well as automatically in simulation mode. The text window in each display indicated all the messages received, processed or sent by each application.
Implementation using the message board was simple and straightforward. There was only one significant difficulty which occurred: The two asynchronous processes (MPS, and X Windows), were difficult to operate simultaneously. This is seen as a difficulty which is due to the current release of the operating system, and thus could be quickly overcome when debugging for commercial applications.
The research effort has shown that the generic data structure is literally the heart of the integrated system (as the integrated system revolves and operates around it). Although the generic data structure is still at an early stage of development, it was demonstrated to be capable of representing the information that is relevant to the integration of the functionality of CAPP and PPC. Through this generic data structure, the modules can communicate with each other, and the functionality of the integrator can be implemented.
For the generic data structure, information is represented in a relational fashion, and information is structured to reduce redundancy. It is a naive version, and it does not support the full process plan given in PDL (regardless of whether or not it is necessary to support the full PDL files). There are several possible directions for the future development of the generic data structure. First, refining the current version to improve its capability of representing common information relevant to the integration. Second, developing a new generic data structure that is not tied to any specific scheme of structuring information employed by the other two teams. Third, experimenting with alternatives such as the object-oriented representation of information. Fourth, providing more access routines to the generic data structure.
It came as no surprise when the research effort revealed how straightforward it is to connect to an external DBMS, and how much more complicated it is to perform the data translation. Four points can be observed from the research. First, regardless of the factors involved, the example of data translation is probably a typical scenario. Only some pieces of information will have to be translated, and the generic data structure must be able to capture these pieces. This directly implies a loss of the information that is not relevant to the integration when data is translated back and forth. Second, the complexity of the translation depends on the data to be translated, and on the intended functionality of the integrator. Third, the generic data structure can be fine-tuned to suit any potential peculiarity of the data to be translated. Fourth, the most significant part of translation is a thorough understanding of both the content and format of the data that is to be translated.
The research outlined in this chapter is the first attempt to bridge the data gap. The results positively supported the approach taken by the Western team to bridge the data gap during the CAPP/PPC integration. The effort prepared the way for addressing other issues of the integration.
Integration of multiple processes requires the use of sophisticated techniques. If all processes run on a single machine, they may communicate through common memory, files, etc. When the processes are distributed over a network of machines, a more sophisticated approach is required. At Western we already possess a tool which may be applied to the Integrator Project. This tool is referred to as the Message Passing System (MPS). The system is socket based, using OSI standards, which makes it very portable between many operating systems and languages.
The basic design features for the Integrator support coarse grain concurrent processing over a number of machines. The various programs use a generic set of interface subroutines. These interface routines talk to a central server program which handles a number of communication schemes, including asynchronous, concurrent, filtered, grouped, hierarchical, etc. Even more important is the fact that because the source code is available, it is very easy to add features not anticipated at this time.
This system allows programs to be added to and removed from the MPS system dynamically. As a result the system is very fault tolerant and robust. The client structure makes the architecture very modular. This modularity means that new functions may be added to the MPS system on-line, and new applications may be added without difficulty.
Each program has a small library of subroutines which are used to communicate with the MPS server. After a client has been enrolled on the message board, it may send messages, or check to see if any messages are waiting for it. The MPS server is a single program which runs on a single machine, while serving all of the clients on the network. To clarify, the MPS server is a utility program which is always running, and the clients can be any program, such as,
MPS allows clients to enrol in an ad-hoc manner. As a result, some abstract structure was required to allow the clients to identify their function. By using group names and priority numbers, clients are allowed to enrol by function type, and by their order of application to a particular message. The diagram below shows a structure of processes for two hypothetical groups. In these diagrams the messages flow from top to bottom. In the case where there are two or more clients at the same level, the message will be picked up on a first-come, first-served basis (thus giving concurrency). If a message passes through a group, it should be addressed to another group by one of the clients. If a message originates from a client, it will be assigned a destination group. When a message gets to the bottom of a group, it will be passed to the top of the destination group. This scheme allows programs to be configured easily.
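The group-and-priority routing rule can be sketched as a small selection routine. This is an illustrative reconstruction, not MPS source: the `Client` record and `next_client` name are invented, and we assume the smallest priority number marks the top of a group.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical enrolment record: each client declares a group name
 * and a priority number (assumed: smaller number = nearer the top). */
typedef struct {
    char name[16];
    char group[16];
    int  priority;
} Client;

/* Index of the top-priority client enrolled in `group`, or -1 when
 * the group is empty.  Ties at one level would be served
 * first-come, first-served by the real MPS. */
int next_client(const Client *c, int n, const char *group)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (strcmp(c[i].group, group) == 0 &&
            (best < 0 || c[i].priority < c[best].priority))
            best = i;
    return best;
}
```

A message addressed to a group would be handed to `next_client`'s pick; after the bottom of the group is reached, the same routine would be applied to the destination group.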
A simple technical explanation is given below, which illustrates a very basic case of MPS operation. The first program (CAPP) initializes itself and waits for a message. The second program (PPC) sends a message to the first (CAPP) program.
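At the byte level, this exchange rests on ordinary socket I/O. The sketch below uses a POSIX `socketpair` to stand in for the two endpoints; the real MPS instead routes every message through the central server over TCP/IP, so this only illustrates the raw transport, and the function name is invented.

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: one endpoint (standing in for PPC) writes a
 * message, the other (standing in for CAPP) reads it.  The real MPS
 * routes through a central server rather than a direct pair. */
int exchange(const char *msg, char *reply, int max)
{
    int fd[2];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) < 0) return -1;

    /* "PPC" sends on one end ... */
    if (write(fd[0], msg, strlen(msg)) < 0) return -1;

    /* ... and "CAPP" receives on the other. */
    n = read(fd[1], reply, max - 1);
    if (n < 0) n = 0;
    reply[n] = '\0';

    close(fd[0]);
    close(fd[1]);
    return 0;
}
```

In the real system each program would instead call the MPS client library, which hides the socket handling entirely.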
Although developed separately, the Message Passing System (MPS) has a number of features which are applicable to the problems which occur in the Integrator. (For more details see the Technical Report on MPS by Jack et al., 1991). The software uses OSI sockets, which make it portable between a wide variety of software platforms and operating systems. The system uses a central server for message passing, and client routines which are used by the client programs in the system. The key points of interest are,
One of the principal areas of research currently in progress at the Design Automation and Manufacturing Research Laboratory at the University of Western Ontario involves the development and implementation of an integration module which will link Computer Aided Process Planning (CAPP) software with Production Planning and Control (PPC) software. PPC systems are often referred to as “scheduling” systems. The object of the project is to establish a complete process planning and production software package that implements a production cycle from initial planning stages through to shop floor scheduling. This system is being implemented in conjunction with the Flexible Manufacturing Research Center at McMaster University, who will be implementing the CAPP portion of the system, and with IPA in Stuttgart, Germany, whose PPC system, GRIPPS, will be used in the integration.
The “Object-Oriented” version of the project is a parallel implementation of the integration module using object-oriented software tools available at the DA&MRL. The focus of the project is on machining process planning and the use of a central Object-Oriented Database Management System (OODBMS) to serve data to the software modules. The process planner to be used will be MetCAPP. This package will be interfaced to the object-oriented database management system developed by Versant Object Technology. A front-end for the system will also be included which will allow the user to design feature-based parts for process planning. This design module will be implemented using ACIS, an object-oriented solid modeling system.
Computer Integrated Manufacturing (CIM) systems often involve the use of many database intensive applications. Engineering data is becoming increasingly complex and occurs in such quantities that extensive database technology is employed by most applications. Until recently, engineering and manufacturing facilities have followed the trend set by many businesses for database storage by using relational database systems. Systems of this type involve the storage of data in the form of tables of text or numeric values. This strategy is useful for most business applications but has proven to be less than ideal for engineering applications due to the limitations placed on data structures.
OODBMSs have emerged only recently and provide a long awaited alternative to their relational counterparts. Besides being a radical departure from traditional storage and programming strategies, object-oriented databases are well suited to the complex nature of engineering data. These systems represent the current state-of-the-art in engineering computing applications and many manufacturing facilities are converting existing software applications to incorporate object-oriented features.
The development of object-oriented programming and database technology in recent years has been a result of a combination of several established research fields in the computing area. Research in programming languages, artificial intelligence and software engineering has contributed to the development of object-oriented concepts particularly in applications involving database technology [Zdonic and Maier, 1990].
Until recently, most data intensive applications were related to business applications. Much of the research that has occurred in the database field has been centered on tabular, relational systems because of their suitability to business-oriented tasks. The push in the manufacturing field for facilities to produce products at high rates in order to survive, as well as the rapid advancements in computing technology over the past few years, have led to a situation where the fields of engineering science and computer science are becoming very closely related. The inability of relational systems to adequately meet the data storage needs of complex engineering applications has promoted research into alternate storage methods.
Traditional computing applications often maintained their own data, usually in the form of flat files on magnetic storage media. As applications were developed that used the same sets of data, the storage of that data often became redundant. More advanced applications were also limited by the difficulty of altering data structures while maintaining compatibility with older applications. Complex applications such as CAD and manufacturing systems require central sets of persistent data, often used by many applications at the same time [Zdonik and Maier, 1990]. This requirement for data storage and handling has led to the development of database management systems and to a reversal of the traditional role of data in engineering facilities. Most current CIM implementations view the database (the data itself) as the central focus, with applications built around it, as opposed to the traditional view of data as a secondary component of the applications using it.
Process planning represents the basic bridge between design and manufacturing. As such it utilizes both design and manufacturing related data. Due to the complexity of both fields it is necessary that computer aided process planners have access to very complex data representations for products and manufacturing resources. Object-oriented database technology is ideally suited for this application because it has the capability of providing persistent database representations for any user-defined structure, referred to as an “object”.
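As a rough illustration of what such a user-defined persistent "object" might look like, the C++ sketch below defines a nested feature structure of the kind a process planner needs; the names and fields are invented for illustration, and the persistence mechanism itself (the OODBMS class library) is deliberately omitted.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch only: a user-defined structure that an OODBMS could
// store directly as an object. Names and fields are hypothetical.
struct Tolerance {
    double lower;   // mm
    double upper;   // mm
};

struct Hole {
    std::string id;
    double diameter;    // mm
    double depth;       // mm
    Tolerance dia_tol;  // nested structure -- awkward as a flat table row
};

struct Part {
    std::string name;
    std::vector<Hole> holes;  // one-to-many nesting kept in a single object
};
```

The point of the sketch is that the nesting (a part holding many features, each with its own substructure) is stored as one object, whereas a relational representation would scatter it across several joined tables.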
The database management system, as previously mentioned, is a key feature in the facilitation of data flow between the various modules of the system. The modules must also communicate with each other in order to complete the functionality of the system; typical messages include requests for process plans and notifications of shop-floor events such as bottlenecks and resource shortages.
The Versant OODBMS is useful not only for maintaining the data in the system but also for passing messages between the various modules. Its fully distributed architecture makes communications across a network easy to implement in the form of a “message” database which is accessible to all modules. The concurrent usage capabilities of Versant databases also make real-time message passing available to the modules of the system. Figure 2 on the next page shows how a basic “message-board” system is implemented using Versant.
The message class is implemented in C++ and this basic code is made available to all of the client modules. Therefore, each client has full access to the database containing the messages. Each client process is also issued an identification code, which is used to retrieve messages addressed to that particular process. Messages take the form of an address identifier tied to the message contents. To send a message, a module adds the address and the message to the database. To retrieve its messages, a module simply accesses the database and fetches all messages bearing its own address identifier.
This implementation of a message passing system does not require the development of complex communication protocols and unusual hardware arrangements. It simply utilizes the inherent distributed communication capabilities of Versant to pass simple messages between the client modules of the overall software system.
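The pattern can be shown without Versant itself. The sketch below is an in-memory analogue of the shared message database: the class and function names are ours, not Versant's, and an ordinary multimap stands in for the distributed database that all client processes would actually share.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// In-memory analogue of the shared "message-board" database. In the real
// system the board is a Versant database visible to all client processes.
class MessageBoard {
public:
    // A client posts a message addressed to another client's id.
    void post(const std::string& address, const std::string& contents) {
        board_.insert({address, contents});
    }

    // A client retrieves (and removes) all messages bearing its own id.
    std::vector<std::string> retrieve(const std::string& address) {
        std::vector<std::string> msgs;
        auto range = board_.equal_range(address);
        for (auto it = range.first; it != range.second; ++it)
            msgs.push_back(it->second);
        board_.erase(range.first, range.second);
        return msgs;
    }

private:
    std::multimap<std::string, std::string> board_;  // address -> contents
};
```

In the real system the concurrency control of the OODBMS, rather than a single in-process object, is what makes the board safe for several client processes at once.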
MetCAPP is a machining process planning software package which incorporates the extensive manufacturing and machining experience of the Metcut Corporation, which has recently become a division of the Institute of Advanced Manufacturing Sciences in Cincinnati, Ohio. The system is a semi-generative CAPP environment which automatically generates speed and feed parameters for the machining of user defined features and associated tooling and material characteristics.
The CUTPLAN module is used to develop the process plan for an entire part. The user defines all of the features which make up the part. The module suggests the appropriate work station that may be used to produce each of the features and also calculates the times required at that work station to produce each feature. MetCAPP at the present time supports 41 different features which may be chosen from menus. For each feature on the part a separate call is made to the second module: CUTTECH.
The CUTTECH module is used to define all of the operations required to produce an individual feature on the part. A sequence of machining steps (operations) is defined and associated with specific cutting tools. This module determines the required number of cutting passes as well as the time to perform each operation. For each operation a call is made to the CUTDATA module.
CUTDATA is the main Metcut machining database, compiled from over 40 years of machining experience. This database is accessed for each machining operation defined in CUTTECH, and speed and feed information is automatically generated using the operation and tooling information found there.
The MetCAPP API (Application Programming Interface) consists of a set of C functions which may be used to directly access any of the three modules of the MetCAPP software without the use of the supplied user interface. These C functions may be incorporated into any application program.
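As an illustration only: the function and type names below are invented placeholders (the actual MetCAPP API entry points are not reproduced here), and a stub body is supplied so the wrapping pattern can be demonstrated. The sketch shows how an application might wrap such C entry points behind one call per feature.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a MetCAPP API call; the real API supplies its
// own C functions and data types. The stub returns fixed values so the
// wrapper below can be exercised.
struct OperationResult {
    double minutes;  // time per pass
    int passes;      // number of cutting passes
};

static OperationResult cuttech_plan_feature_stub(const std::string& feature) {
    // A real call would consult the CUTTECH module.
    return {feature == "hole" ? 1.5 : 3.0, 2};
}

// Application-side wrapper: hides the API behind one call per feature.
double estimate_feature_time(const std::string& feature) {
    OperationResult r = cuttech_plan_feature_stub(feature);
    return r.minutes * r.passes;
}
```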
CUTTECH is the primary module incorporated into this project because process planning occurs at the feature level. CUTDATA will be used to obtain approximate times for operations. The actual speed and feed data from CUTDATA, however, is not needed within the scope of this project.
The code development for this project is approaching completion. The Design Module has been implemented for simple features and produces both feature lists in Versant and ACIS models for simple parts. Currently all design input is text-based and is entered from the keyboard. Eventually, the system will have a graphical user interface (GUI) in X-Windows which will simplify user input.
The Planning Module currently reads from the feature and resource databases and employs MetCAPP to generate machining process plans for the simple parts designed in the Design Module. The Production Module is very simplistic at present. It reads process plans from the Planning Module and updates the allocation of resources (materials and workstations) to the plans. The availability of resources is randomly set in this module, and appropriate messages are passed to the other modules as random events occur.
The message passing in the system has been implemented in the Communication Module. The messages are the triggers for the operation of the various modules in the system. For example, when the Design Module has successfully stored a feature list to Versant it posts a message to the Planning Module that a part is ready to be planned. The Planning Module sits idle on the system and polls the message board periodically. When the message is found the process planning procedure is triggered.
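The idle-poll-trigger cycle of the Planning Module can be sketched as follows. The board interface and message format are simplified assumptions (a plain queue of part names stands in for the Versant message board), and the planning procedure is reduced to recording the triggered work.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Simplified stand-in for the shared message board polled by the Planning
// Module; each message is assumed to name a part that is ready to plan.
static std::deque<std::string> planner_inbox;

// Record of parts for which planning was triggered (stands in for
// actually invoking the process planning procedure).
static std::vector<std::string> planned_parts;

// One polling cycle: drain pending messages and trigger planning for each.
static int poll_once() {
    int triggered = 0;
    while (!planner_inbox.empty()) {
        std::string part = planner_inbox.front();
        planner_inbox.pop_front();
        planned_parts.push_back(part);  // trigger the planning procedure
        ++triggered;
    }
    return triggered;
}
```

In the running system this cycle repeats periodically; the module stays idle between polls and does work only when a message for it appears on the board.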
The future work on this project involves the addition of functionality to the Production Module and perhaps the simulation of actual scheduling using another commercial package. A GUI will be added to each of the modules of the system.
Relational systems are not ideally suited to the multi-dimensional nature of process plans from RPE. Tabular formats are not efficient for storing hierarchical data (e.g., RPE submits alternate process plans as a hierarchy). Object-oriented database technology is better equipped to handle hierarchical, multi-dimensional data structures. Programming languages supplied with relational database management systems (e.g., SQL*Plus in Oracle) are often proprietary and not portable to other relational systems, and SQL is a primitive query language unsuitable for developing complex engineering applications.
Most object-oriented database management systems support C and C++ application code in a standard form (ANSI). Database functionality is added to application code by including database class libraries. This makes application code portable among database systems with minimal changes.
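For example, the hierarchy of alternate plans that RPE submits maps directly onto nested C++ classes. In an OODBMS these classes would typically derive from the vendor's persistent base class; that detail is omitted below because the exact library interface varies, so the sketch shows only the (hypothetically named) data structure itself.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hierarchical process-plan data that a flat relational table represents
// only awkwardly. Names are illustrative; persistence (inheriting from a
// database class library's base class) is omitted.
struct Operation {
    std::string workstation;
    double minutes;
};

struct PlanAlternative {
    std::vector<Operation> operations;          // sequence for this plan
    std::vector<PlanAlternative> alternatives;  // nested alternate plans
};

// Count every alternative in the hierarchy, including the root plan.
int count_alternatives(const PlanAlternative& p) {
    int n = 1;
    for (const auto& a : p.alternatives)
        n += count_alternatives(a);
    return n;
}
```

A relational storage of the same hierarchy would need self-referencing foreign keys and recursive queries; as an object, the whole tree is stored and retrieved as one unit.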
The standard database agreed to by both teams for use in this project is the ORACLE DBMS. Oracle is a relational database used by many industries and other organizations for their DBMS needs. Considerable effort and progress were achieved by both teams during Year 2 of this project in using the same data structures for the common data. An ultimate objective would be for both CAPP and PPC to use the same physical ORACLE records of the data; it was, however, more practical to use the same data structures in two different physical databases (IPA uses ORACLE on a PC; McMaster/Western use a SUN workstation version of ORACLE). Object-oriented databases offer advantages, but they are a future standard; Western has initiated a parallel project utilizing an OODBMS.
At present the system is not fully implemented and tested; a few bugs, described in the next section, remain to be worked out. Eventually the entire system should run off a single global database.
The justification and need for integrating process planning and production planning and control more closely has been demonstrated. The benefits from this integration are equally valid in manual, automated and computer integrated manufacturing environments. Traditional CAPP systems produce linear sequential plans which do not consider resource availability, and modifications required for localized rescheduling mean complete replanning, with obvious disadvantages. A reactive planning environment (RPE) has been developed to capture plans and resource alternatives and provide an effective means of evaluation and selection of plans based on the dynamically changing shop floor requirements. The integrator module addresses the time-dependent issues related to event handling, communications, database updating and response time (short, medium and long). Both RPE and the Integrator are designed to be compatible with existing CAPP and PPC systems with distributed and/or common databases. The effectiveness of the proposed solution is currently being demonstrated using prototype industrial applications.
All year 1 and year 2 tasks and milestones have been met and exceeded. The synergy and cooperation between the two Ontario universities, and with IPA, Baden-Württemberg, was very beneficial on a project of this magnitude and expected impact. The important CAPP/PPC Integrator issues have been identified and solutions have been formulated and implemented. Standards for the representation of models and data were addressed. The approach used is generic and will allow integration of alternate CAPP and PPC systems with minimal effort. We are now working with industry to address their specific needs and implementation issues. This CAPP/PPC Integration project provided the motivation to enhance ongoing research in generative process planning, reactive planning and concurrent engineering environments.
• The MPS and OpenWindows apparently conflict. This may simply be an operating system bug, as the two should be independent. It becomes a problem when the MPS is used with the simulation programs, which have an OpenWindows interface.
All of the unresolved problems can be dealt with once a manufacturer is located who will give us access to their software and databases. This will allow verification of the data structures and clarify which events are missing. It will also allow the development of the missing functions and debugging of the existing ones.
Jack, H., and ElMaraghy, W. H., 1992, A Manual for Interprocess Communication with the MPS (Message Passing System), DAMRL Report No. 92-08-01, The University of Western Ontario, London, Ontario, Canada.
Sechrest, S., 1986, An Introductory 4.3BSD Interprocess Communication Tutorial, in Unix Programmer’s Manual Supplementary Documents 1, by The Computer Systems Research Group, The University of California.
Tönshoff, H.K., and Detand, J., 1990, A Process Description Concept for Process Planning, Scheduling and Job Shop Control, 22nd CIRP International Seminar on Manufacturing Systems, University of Twente, Enschede, Netherlands.
Tönshoff, H.K., Beckendorff, U., and Anders, N., 1989, FLEXPLAN - A Concept for Intelligent Process Planning and Scheduling, CIRP International Workshop on Computer Aided Process Planning, Hannover University, Sept. 21-22, pp. 87-106.
Zdonik, S., and Maier, D., 1990, Fundamentals of Object-Oriented Databases, in Readings in Object-Oriented Database Systems, S. Zdonik and D. Maier, eds., Morgan Kaufmann Publishers, Inc., San Mateo, CA, pp. 1-32.
The role of the CAPP/PPC integrator is to bridge the functional and data gaps that exist between any generic CAPP and PPC, with a definite emphasis on bridging RPE (from McMaster) and GRIPPS (from IPA). This emphasis has dominated the early development effort at Western. There was not enough understanding and specification of RPE and GRIPPS for the Western team to implement anything that depends on either module. Nonetheless, the team successfully developed the Message Board, software that allows communication between multiple processes; it serves as the mechanism for passing signals (as well as data) between RPE, GRIPPS, and the integrator. The agreement on the definition of the generic common data then provided an opportunity for some actual coding of the integrator.
The implementation was written in both C and Pro*C. Pro*C is an Oracle-specific language, a hybrid of C and SQL, for accessing Oracle. It offers a concise, standard way of accessing the RDBMS through SQL statements embedded in C programs, and it deals in terms of tables, records, and scalar variables. However, Pro*C lacks the block structure and arrays of C, and relies heavily on global variables as well as Goto statements. A Pro*C program must be translated into C by the Oracle Pro*C pre-compiler. My involvement deals mainly with data-related issues and debugging.
The structure definitions and internal data storage of process plans were set up in C based upon the previous agreement on the generic common data. This was straightforward because the eight components of a process plan were given in C form in the agreement; only some meaningful English field names had to be chosen. Below is a brief description of these eight components:
A new structure that houses all eight components together to describe a process plan was also defined. This permits easy handling of multiple process plans in a program. All nine structure definitions are presented in Appendix-A. Along with these structures, utilities that initialize, insert values into, and retrieve values from these structures were also implemented.
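The super-structure pattern and its utilities can be sketched as follows. The eight real component structures are those of Appendix-A and are not reproduced here; the two placeholder components below are invented purely to show the shape of the super-structure and its initialize/insert utilities.

```cpp
#include <cassert>
#include <cstring>

// Illustrative only: the real plan has eight agreed components (see
// Appendix-A). These two placeholder components are invented.
struct ComponentA {
    int operation_count;
};

struct ComponentB {
    char facility[32];
};

// Super-structure gathering the components of one process plan, so that
// multiple plans can be handled as single units in a program.
struct ProcessPlan {
    ComponentA a;
    ComponentB b;
};

// Utility in the style described: initialize a plan to a known state.
void init_plan(ProcessPlan* p) {
    p->a.operation_count = 0;
    p->b.facility[0] = '\0';
}

// Utility: insert a value into one component of the plan.
void set_facility(ProcessPlan* p, const char* name) {
    std::strncpy(p->b.facility, name, sizeof(p->b.facility) - 1);
    p->b.facility[sizeof(p->b.facility) - 1] = '\0';
}
```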
Similar data-related work was done for Oracle: the tables and records corresponding to the previously mentioned structure definitions and internal data storage were defined and set up in Oracle. Along with these tables, utilities that write records into and read records from the Oracle tables were implemented, as were utilities for connecting with and releasing from Oracle. All implementation here was done in Pro*C. Most, but not all, of the naming conventions remain the same as in the C structure definitions and the internal data storage mentioned in the last paragraph. Appendix-B presents the definition and setting up of the tables and records in Oracle.
There was one other area that I worked on but have since abandoned: reading the PDL file into, and writing the PDL file from, the internal data storage. PDL, a product description language, was developed at McMaster and is the format of the input files to RPE. Effort was devoted to constructing a lexical analyzer, using Lex, based on the grammar of a section of PDL. (This particular section describes the geometrical attributes of individual parts, and relations between different parts.) A parser was also implemented using Yacc. The PDL file of an air cylinder (provided by the McMaster team) was successfully read into the internal data storage; the opposite process of writing the information from the internal storage back to a PDL file was also successful. This effort was stopped for two reasons. First, it somewhat duplicates the work of the McMaster team, although RPE has an object-oriented internal storage. Second, it is not certain that the PDL file will be the final format the integrator works with.
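The lexical-analysis step can be illustrated without Lex. The sketch below is a hand-written tokenizer standing in for the generated analyzer; the token forms are invented, since the actual PDL grammar is not reproduced here.

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Hand-written stand-in for the Lex-generated lexical analyzer. The input
// syntax (words and parentheses) is a hypothetical PDL-like fragment.
std::vector<std::string> tokenize(const std::string& line) {
    std::vector<std::string> tokens;
    std::string cur;
    for (char c : line) {
        if (std::isspace(static_cast<unsigned char>(c)) ||
            c == '(' || c == ')') {
            if (!cur.empty()) {          // finish the current word token
                tokens.push_back(cur);
                cur.clear();
            }
            if (c == '(' || c == ')')    // parentheses are tokens themselves
                tokens.push_back(std::string(1, c));
        } else {
            cur += c;                    // accumulate a word token
        }
    }
    if (!cur.empty()) tokens.push_back(cur);
    return tokens;
}
```

In the abandoned implementation this token stream fed a Yacc-generated parser, which built the internal data storage.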
There are bugs in the implementation as a whole (i.e. the software collectively written by the Western team). The demonstration died, not immediately, but after it had run for some time, and it did not always die at the same location; it could die either inside or outside of Oracle. The demonstration tests how process plans are passed between the stub-CAPP, stub-PPC, stub-RPE, and the integrator, through files as well as Oracle. A random number generator was used to determine the action to be taken whenever a process plan is received. Debugging has been difficult because of the multiple-process environment. The approach I have taken is to analyze printed traces of the program execution. Bugs have been found and fixed, but none of them is responsible for the program crash.
The first eight structures defined below are taken from the agreed upon definition of the generic data definition. The last one is the super-structure that contains all the components to describe a single process plan for a specific manufacturing facility.