## CONTROL SYSTEMS

Figure 8.4 An automotive cruise control system shows a transfer function block for a car. The input, or control variable, is the gas pedal angle. The system output, or result, is the velocity of the car. In standard operation the gas pedal angle is controlled by the driver. When a cruise control system is engaged the gas pedal must automatically be adjusted to maintain a desired velocity setpoint. To do this a control system is added; in this figure it is shown inside the dashed line. In this control system the output velocity is subtracted from the setpoint to get a system error. The subtraction occurs in the summation block (the circle on the left hand side). This error is used by the controller function to adjust the control variable in the system. Negative feedback is the term used for this type of controller.

Figure 8.4 An automotive cruise control system

There are two main types of feedback control systems: negative feedback and positive feedback. In a positive feedback control system the setpoint and output values are added. In a negative feedback control system the setpoint and output values are subtracted. As a rule, negative feedback systems are more stable than positive feedback systems. Negative feedback also makes systems more immune to random variations in component values and inputs.

The control function in Figure 8.4 An automotive cruise control system can be defined many ways. A possible set of rules for controlling the system is given in Figure 8.5 Example control rules. Recall that the system error is the difference between the setpoint and actual output. When the system output matches the setpoint the error is zero. Larger differences between the setpoint and output will result in larger errors. For example, if the desired velocity is 50 mph and the actual velocity is 60 mph, the error is -10 mph, and the car should be slowed down. The rules in the figure give a general idea of how a control function might work for a cruise control system.
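The negative feedback loop described above can be sketched numerically. This is a minimal illustration, not the book's model: the car dynamics, the proportional rule, and all gains below are invented for demonstration.

```python
# Hypothetical sketch of the negative-feedback loop in Figure 8.4.
# The car model, gains, and time step are all invented for illustration.

def simulate_cruise(setpoint_mph, v0_mph, k=0.5, dt=0.1, steps=600):
    """Proportional negative-feedback control of a toy car model."""
    v = v0_mph
    for _ in range(steps):
        error = setpoint_mph - v          # summation block: setpoint - output
        pedal = k * error                 # controller: pedal angle from error
        accel = 0.2 * pedal - 0.01 * v    # toy car model: drive minus drag
        v += accel * dt                   # integrate velocity over one step
    return v

# Start above the setpoint: the negative error pushes the velocity down.
final_v = simulate_cruise(setpoint_mph=50.0, v0_mph=60.0)
```

Note that this purely proportional rule settles near, but not exactly at, the 50 mph setpoint; that residual steady-state error is a known limitation of P-only control, discussed in the following sections.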
Figure 8.5 Example control rules

In following sections we will examine mathematical control functions that are easy to implement in actual control systems.

## 8.3.1 PID Control Systems

The Proportional Integral Derivative (PID) control function shown in Figure 8.6 A PID controller equation is the most popular choice in industry. In the equation given, ’e’ is the system error, and there are three separate gain constants for the three terms. The result is a control variable value.

Figure 8.6 A PID controller equation

Figure 8.7 A PID control system shows a basic PID controller in block diagram form. In this case the potentiometer on the left is used as a voltage divider, providing a setpoint voltage. At the output the motor shaft drives a potentiometer, also used as a voltage divider. The voltages from the setpoint and output are subtracted at the summation block to calculate the feedback error. The resulting error is used in the PID function. In the proportional branch the error is multiplied by a constant, to provide a long-term output for the motor (a ballpark guess). If the error remains large, positive or negative, for a while, the integral branch value will become large and push the system towards zero error. When there is a sudden change in the error value the differential branch will give a quick response. The results of all three branches are added together in the second summation block. This result is then amplified to drive the motor. The overall performance of the system can be changed by adjusting the gains in the three branches of the PID function.

Figure 8.7 A PID control system

There are other variations on the basic PID controller, shown in Figure 8.8 Some other control equations. A PI controller results when the derivative gain is set to zero. (Recall the second order response.) This controller is generally good for eliminating long-term errors, but it is prone to overshoot. In a P controller only the proportional gain is non-zero.
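The three-branch structure described above maps directly to a discrete-time implementation. The sketch below follows the general PID form (proportional, integral, and derivative terms on the error); the gains and the first-order plant driven at the bottom are hypothetical choices, not values from the figures.

```python
# A minimal discrete-time PID controller following the general form of
# Figure 8.6: output = Kp*e + Ki*integral(e) + Kd*de/dt.
# The gains and the toy plant below are invented for illustration.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                   # integral branch
        derivative = (error - self.prev_error) / self.dt   # derivative branch
        self.prev_error = error
        return (self.kp * error                            # proportional branch
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple first-order plant (dy/dt = u - y) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):                 # simulate 20 seconds
    u = pid.update(1.0 - y)           # feedback error = setpoint - output
    y += (u - y) * 0.01               # Euler step of the toy plant
```

Setting `kd=0` above gives a PI controller, and setting both `ki` and `kd` to zero gives the P controller discussed next; the integral branch is what drives the long-term error to zero here.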
This controller will generally work, but often cannot eliminate errors. The PD controller does not deal with long-term errors, but is very responsive to system changes.

Figure 8.8 Some other control equations

## 8.3.2 Manipulating Block Diagrams

A block diagram for a system is not unique, meaning that it may be manipulated into new forms. Typically a block diagram will be developed for a system. The diagram will then be simplified through a process that is both graphical and algebraic. For example, equivalent blocks for a negative feedback loop are shown in Figure 8.9 A negative feedback block reduction, along with an algebraic proof.

Figure 8.9 A negative feedback block reduction

Other block diagram equivalencies are shown in Figure 8.10 A positive feedback block reduction to Figure 8.16 Moving summation function past blocks. In all cases these operations are reversible. Proofs are provided, except for the cases where the equivalence is obvious.

Figure 8.10 A positive feedback block reduction

Figure 8.11 Reversal of function blocks

Figure 8.12 Moving branches before blocks

Figure 8.13 Combining sequential function blocks

Figure 8.14 Moving branches after blocks

Figure 8.15 Moving summation functions before blocks

Figure 8.16 Moving summation function past blocks

Recall the example of a cruise control system for an automobile presented in Figure 8.4 An automotive cruise control system. This example is extended in Figure 8.17 An example of simplifying a block diagram to include mathematical models for each of the function blocks. This block diagram is first simplified by multiplying the blocks in sequence. The feedback loop is then reduced to a single block. Notice that the feedback line doesn’t have a function block on it, so by default the function is ’1’ - everything that goes in, comes out.
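Two of the reductions above (sequential blocks multiply; a negative feedback loop with forward path G and feedback path H collapses to G/(1 + G·H)) can be checked numerically by evaluating both forms at an arbitrary complex frequency. The blocks G1 and G2 below are invented first-order examples, not the models from the figures.

```python
# Numerical check of two block-diagram rules: sequential blocks multiply
# (combining sequential function blocks) and a negative feedback loop
# reduces to G/(1 + G*H). The blocks and test frequency are arbitrary.

def reduce_loop(s, blocks, h):
    """Multiply sequential forward blocks, then close a negative feedback loop."""
    g = 1.0
    for block in blocks:
        g *= block(s)                 # sequential blocks combine by multiplication
    return g / (1.0 + g * h(s))      # negative feedback reduction

# Two forward blocks with unity feedback (H = 1), as in the cruise example.
G1 = lambda s: 2.0 / (s + 1.0)
G2 = lambda s: 3.0 / s
s = 1.0 + 2.0j                        # arbitrary complex frequency
reduced = reduce_loop(s, [G1, G2], lambda s: 1.0)

# Hand-simplified equivalent: (6/(s^2+s)) / (1 + 6/(s^2+s)) = 6/(s^2+s+6)
expected = 6.0 / (s * s + s + 6.0)
```

Agreement of `reduced` and `expected` at any sampled frequency reflects that the graphical manipulations are just rearrangements of the same algebra.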
Figure 8.17 An example of simplifying a block diagram

The function block is further simplified in Figure 8.18 An example of simplifying a block diagram (continued) to a final transfer function for the whole system.

Figure 8.18 An example of simplifying a block diagram (continued)

## 8.3.3 A Motor Control System Example

Consider the example of a DC servo motor controlled by a computer. The purpose of the controller is to position the motor. The system in Figure 8.19 A motor feedback control system shows a reasonable control system arrangement. Some elements such as power supplies and commons for voltages are omitted for clarity.

Figure 8.19 A motor feedback control system

The feedback controller can be represented with the block diagram in Figure 8.20 A block diagram for the feedback controller.

Figure 8.20 A block diagram for the feedback controller

The transfer functions for each of the blocks are developed in Figure 8.21 Transfer functions for the power amplifier, potentiometer and motor shaft. Two of the values must be provided by the system user. The op-amp is basically an inverting amplifier with a fixed gain of -2.2 times. The potentiometer is connected as a voltage divider and the equation relates angle to voltage. Finally the velocity of the shaft is integrated to give position.

Figure 8.21 Transfer functions for the power amplifier, potentiometer and motor shaft

The basic equation for the motor is derived in Figure 8.22 Transfer function for the motor using experimental data. In this case the motor was tested with the full inertia on the shaft, so there is no need to calculate ’J’.

Figure 8.22 Transfer function for the motor

The individual transfer functions for the system are put into the system block diagram in Figure 8.23 The system block diagram, and simplification. The block diagram is then simplified for the entire system to a single transfer function relating the desired voltage (setpoint) to the angular position (output).
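The tuning step that follows (selecting ’Kp’ for critical damping) can be sketched for a generic second-order denominator. The coefficients `a` and `b` below are invented placeholders, not the motor constants from Figure 8.22; with the book's negative op-amp gain folded in, the actual Kp comes out negative.

```python
import math

# Hypothetical illustration of the Kp tuning step. Assume block-diagram
# reduction gave a closed-loop denominator of the form
#     s^2 + a*s + b*Kp
# where a and b come from the plant. The values here are invented.
a, b = 10.0, 25.0

# Match against the standard form s^2 + 2*zeta*wn*s + wn^2 with zeta = 1
# (critical damping): 2*wn = a, so wn = a/2, and b*Kp = wn^2.
wn = a / 2.0
kp = wn**2 / b                         # Kp giving critical damping

# Verify the resulting damping factor really is 1.0.
zeta = a / (2.0 * math.sqrt(b * kp))
```

The same extraction of natural frequency and damping factor from the homogeneous part of the transfer function is what Figure 8.24 carries out symbolically.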
The transfer function contains the unknown gain value ’Kp’.

Figure 8.23 The system block diagram, and simplification

The value of ’Kp’ can be selected to ’tune’ the system performance. In Figure 8.24 Calculating a gain Kp the gain value is calculated to give the system an overall damping factor of 1.0, or critically damped. This is done by recognizing that the bottom (homogeneous) part of the transfer function is second-order, and then extracting the damping factor and natural frequency. The final result for ’Kp’ is negative, but this makes sense when the negative gain on the op-amp is considered.

Figure 8.24 Calculating a gain Kp

## 8.3.4 System Error

Error calculations are often used when designing control systems. The two common types of error are system error and feedback error. The equations for calculating these errors are shown in Figure 8.25 Controller errors. If the feedback function ’H’ has a value of ’1’ then these errors will be the same.

An example of calculating these errors is shown in Figure 8.26 System error calculation example for a step input. The system is a simple integrator, with a unity feedback loop. The overall transfer function for the system is calculated and then used to find the system response. The response is then compared to the input to find the system error. In this case the error will go to zero as time approaches infinity.

Figure 8.26 System error calculation example for a step input

Figure 8.27 Drill problem: Calculate the system error for a ramp input

Figure 8.28 Drill problem: Calculate the errors

## 8.3.5 Controller Transfer Functions

The PID controller, and simpler variations, were discussed in earlier sections. A more complete table is given in Figure 8.29 Standard controller types.
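The step-input error example (an integrator with unity feedback) can also be checked numerically. The gain K and time step below are arbitrary choices; with H = 1 the system error and feedback error coincide, and analytically the error for a unit step is e(t) = exp(-K·t), which decays to zero.

```python
# Numerical version of the integrator-with-unity-feedback error example:
# forward path G = K/s, feedback H = 1, unit step input r = 1.
# The gain K and time step are arbitrary illustration values.
K, dt = 4.0, 0.001
y = 0.0
for _ in range(int(5.0 / dt)):        # simulate 5 seconds
    e = 1.0 - y                       # system error (equals feedback error, H = 1)
    y += K * e * dt                   # integrator: dy/dt = K * e

# Analytically e(t) = exp(-K*t), so the error should be near zero by now.
final_error = 1.0 - y
```

A ramp input run through the same loop leaves a constant non-zero error, which is the point of the first drill problem.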
Figure 8.29 Standard controller types

## 8.3.6 Feedforward Controllers

When a model of a system is well known, it can be used to improve the performance of a control system by adding a feedforward function, as pictured in Figure 8.30 A feed forward controller. The feedforward function is basically an inverse model of the process. When this is used together with a more traditional feedback function, the overall system can outperform more traditional controllers, such as the PID controller.

Figure 8.30 A feed forward controller

## 8.3.7 State Equation Based Systems

State variable matrices were introduced before. These can also be used to form a control system, as shown in Figure 8.31 A state variable control system.

Figure 8.31 A state variable control system

An example is shown in Figure 8.32 A second order state variable control system that implements a second order state equation. The system uses two integrators to integrate the angular acceleration, then the angular velocity, to get the position.

Figure 8.32 A second order state variable control system

The previous block diagrams are useful for simulating systems. These can then be used in feedforward control systems to estimate system performance and predict a useful output value.

Figure 8.33 A state based feedforward controller

## 8.3.8 Cascade Controllers

When controlling a multistep process, a cascade controller can allow refined control of sub-loops within the larger control system. Most large processes will have some form of cascade control. For example, the inner loop may be for a heating oven, while the outer loop controls a conveyor feeding parts into the oven.
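The two-integrator structure of the second-order state-equation example (Figure 8.32) can be sketched as a simulation: acceleration is integrated to velocity, then velocity to position. The state-feedback gains, setpoint, and time step below are hypothetical values chosen only so the sketch runs.

```python
# Sketch of the two-integrator arrangement in Figure 8.32: angular
# acceleration -> (integrate) -> angular velocity -> (integrate) -> position.
# The state-feedback gains and setpoint are invented for illustration.
dt = 0.001
theta, omega = 0.0, 0.0                # position and velocity states
setpoint = 1.0
kp_gain, kd_gain = 20.0, 8.0           # hypothetical state-feedback gains

for _ in range(int(10.0 / dt)):        # simulate 10 seconds
    # Acceleration from position error and velocity feedback.
    alpha = kp_gain * (setpoint - theta) - kd_gain * omega
    omega += alpha * dt                # first integrator: acceleration -> velocity
    theta += omega * dt                # second integrator: velocity -> position
```

Running a model like this alongside the real plant is the idea behind the state-based feedforward controller of Figure 8.33: the simulation predicts the response and supplies a useful output value ahead of the feedback correction.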