It is also necessary to warn against a simple-minded application of job analysis or any other method. Imagine this method applied to analyzing what a symphony orchestra does. An analyst might observe that for long periods the second clarinetist had nothing to do, that the tympanist only played repeated notes, and that the winds only repeated what the strings had introduced. Such an analysis would be correct as far as it went, but would simply miss the point of the performance. No method can substitute for judgment or a knowledge of what is being evaluated. Or as one writer put it, some works are like mirrors; if a donkey looks in, no apostle will gaze out.

A brief case study will serve to make these points clearer. One of the authors (Levine) was commissioned by the National Oceanic and Atmospheric Administration (NOAA), an agency within the Department of Commerce, to draft a plan to evaluate prospective contractors who would manage NOAA's supply operations. NOAA is itself an agency made up of other agencies, of which the largest and best known is the National Weather Service. All of these agencies are supported by NOAA warehouses which stock instruments, electronic equipment, common use technical and administrative forms, and NOAA publications, handbooks, and operating manuals. The largest of these warehouses, the NOAA Logistics Supply Center, is located in Kansas City, Missouri. For the moment, we can disregard the agency's intention to contract out the management of its warehouses; the performance criteria would be identical if the system were managed, as in fact it is, by government employees. The question remains: How can NOAA evaluate what is, in effect, a range of support services?

From what has been said, a general approach to evaluating NOAA's supply depot can be easily described. For each service, develop a standard; assign an acceptable quality level for the performance of the service; and design a surveillance method to determine if acceptable quality levels have been met (ref. 161). In practice, the task of drafting a quality assurance plan is a little more complicated. The Logistics Supply Center stocks some 8,600 line items, in addition to sophisticated one-of-a-kind equipment furnished to the National Weather Service; some items are inactive, while there are shortages of others; and in other cases, information on items in stock may not be readily accessible to users. Moreover, a quality assurance plan, to be effective, must be capable of being entered into a data-processing system; otherwise, the supply system will temporarily collapse whenever the one or two persons who carry it in their heads leave. The plan, as approved, allowed for the complexity of the system. Supply operations were broken down into some 65 to 70 discrete activities; to each was assigned a performance standard and an acceptable quality level; finally, one of three surveillance methods (random sampling, 100-percent inspection, or customer complaints) was specified to determine that standards really were being met. (For contractor operations, there was also a category of deductions for failure to meet the acceptable quality level.)

How does this system work? Suppose we have a requirement that the operations manager must check all incoming shipments. We could then specify, as a standard, that the correct number of containers as noted on the carriers' freight documents has been received. To find out if this standard is being met, we could carry out a random sampling of verified items received on randomly selected days. Finally, we would check to see that what was received met our acceptable quality level—say, that no more than 5 percent of incoming items were not properly verified. Table 10 gives three more examples of this kind of job analysis. None of these procedures is novel. The method of job analysis simply means that an agency looks at work as it is being done to see what actually results. This method has long been used by private industry and the Department of Defense, and the Office of Management and Budget now requires Federal agencies to do job analyses before contracting out support services. The important thing to note about the job analysis method is that it can be extended to evaluate any support function, no matter how complicated, and that in this respect it is easily superior to the support ratio method discussed earlier.

Table 10. Job Analysis of NOAA Warehousing Operations

Requirement: Establishes and implements control over NOAA-owned equipment
Standard: Establishes written procedures which prescribe the relationships, operations, and specifics of the NOAA Logistics Supply Center equipment control system
Acceptable Quality Level: 5%
Surveillance Method: 100-percent inspection

Requirement: Identifies all non-expendable and selected items of expendable equipment
Standard: Controlled and identified in the records and on the equipment by a uniquely numbered tag

Requirement: Maintains an accountability system for equipment management
Standard: Establishes register with equipment control numbers listed sequentially and containing, as a minimum: (a) the equipment control number, (b) date assigned or tagged, (c) noun, and (d) acquisition document identification number
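The surveillance step described above—draw a random sample of received items and compare the observed defect rate with the acceptable quality level—reduces to a short check. The sketch below is illustrative only: the function name, the True/False encoding of "properly verified," and the 5-percent default are assumptions for the example, not part of the NOAA plan.

```python
import random

def sample_meets_aql(items, sample_size, aql=0.05, seed=None):
    """Randomly sample received items (True = properly verified)
    and report whether the defect rate in the sample is within
    the acceptable quality level (default: no more than
    5 percent improperly verified)."""
    rng = random.Random(seed)  # seedable so an audit can be reproduced
    sample = rng.sample(items, sample_size)
    defects = sum(1 for ok in sample if not ok)
    return defects / sample_size <= aql
```

On this sketch, a day's receipts in which 4 of 100 sampled items were improperly verified would pass a 5-percent acceptable quality level, while 10 of 100 would fail it.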

The Legal Status of Support Services

We have not yet answered the last of the questions posed earlier: What are the implications of the contracting out of many support functions? The first thing that needs to be said is that the question of who shall provide commercial services to the Federal Government is hotly debated. By now, analyses of the Office of Management and Budget's (OMB) Circular A-76, the controlling document on the subject, have reached Talmudic levels of complexity. Since 1955, there have been two opposing philosophies regarding the use of contractors to provide support services to Federal agencies. The first, as summed up by OMB, is that it is the government's policy not to compete with its citizens, but instead, to "rely on competitive private enterprise to supply the products and services that Government needs." (ref. 162.) The other philosophy is that government shall attempt to get its work done by its own employees, only contracting with the private sector when the nature of the work makes full-time use of government employees impracticable or the skills needed to do the work in-house are unavailable (ref. 163). By the late 1970s, the former philosophy had completely superseded the latter, to the point where some officials and congressmen began to wonder out loud if the government was not losing its ability to evaluate its contractors' work.* To understand what the official policy implies about the government's conduct of research and development, we need to understand what OMB Circular A-76 prescribes. As we shall see, there is nothing self-evident about the procedures for converting a government, commercial, or industrial activity to contract operation.

* Of course, there are circumstances where programs are best evaluated from the outside, on the principle that no one should be judge in his own cause. This is why the Securities and Exchange Commission requires independent audits of publicly held corporations, why Congress in authorizing certain education and social services programs requires evaluations by outside contractors, and why the Defense Department created RAND and the Aerospace Corporation as sources of independent technical evaluation. What is at issue here is a narrower question: How does an agency determine if it is getting value for its money? It can hire an outside evaluator, but once the evaluator submits a final report, what then? There seems to be a danger of an infinite regress: The agency selects a second evaluator to evaluate the first evaluator, followed by a third evaluator, and so on. If government employees have a stake in boosting their own programs, outside evaluators may have a stake in telling their clients what they want to hear. Otherwise, they may not be invited back.

Four principles enunciated in OMB Circular A-76 define how and by whom commercial and industrial work shall be done. First, there are restrictions on how contractors may be used: Contract employees may not supervise government employees, nor may government employees be involved in close, continual supervision of contract employees; and in-house work may not be converted to contract solely to avoid personnel ceilings or salary limitations (ref. 164). Second, there are certain functions which may not be contracted out, functions enumerated by a former director of the Bureau of the Budget (OMB's predecessor) as "the decisions on what work is to be done, what objectives are to be set for the work, what time period and what costs are to be associated with the work, what the results expected are to be... the evaluation and the responsibilities for knowing whether the work has gone as it was supposed to go, and if it has not, what went wrong, and how it can be corrected on subsequent occasions." (ref. 165.) This is as succinct a justification for an in-house staff as one could wish, but its practical application is less clear. As will be shown, there is no longer a firm line dividing research and development from "routine" support services.

Third, Circular A-76 outlines a procedure for agencies to follow in deciding whether to contract out their industrial and commercial activities. To simplify greatly, an agency does a cost-comparison study; that is, it develops an estimate of the cost of government performance of a commercial activity and compares it to the cost of contract performance (ref. 166). If studies warrant contract performance, the agency solicits bids and, in effect, competes against commercial firms to see who can do the work at least cost. If a firm's low bid is at least 10 percent lower than the agency's, the government is required to contract for performance of that service. Finally, OMB authorizes Federal agencies to carry on commercial activities if no satisfactory outside source is found to be available. This is, one might say, an escape clause for the government.
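The cost-comparison rule just described reduces to simple arithmetic. The following is a minimal sketch, assuming the 10-percent differential is measured against the agency's in-house estimate; the actual A-76 cost-comparison forms contain many more cost elements than this.

```python
def contract_out(in_house_cost, low_bid, differential=0.10):
    """Simplified version of the A-76 test described in the text:
    contract for the service if the low bid is at least
    `differential` (10 percent) below the estimated cost of
    in-house performance."""
    return low_bid <= in_house_cost * (1 - differential)
```

On this rule, a $900,000 bid against a $1,000,000 in-house estimate would just meet the 10-percent threshold, while a $950,000 bid would not.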

If the reader suspects that interpreting Circular A-76 is rather like walking through a minefield, our point is made. The circular wavers among the various reasons for justifying contracting out: because a commercial source is available, because of cost, or on general philosophical grounds. These may be good reasons, but they do not account for the main reason agencies have contracted for services: It is often the only way to get the job done. Agencies like NASA and the Defense Department have not let huge service and base support contracts simply because commercial sources were available, or because costs would be lower, and least of all for philosophical reasons. Rather, they have let these contracts because they were subject to continuing civil service manpower reductions, and the extensive and sophisticated use of support service contracting was the only way in which these agencies could continue to perform the functions for which they were responsible.

In this sense, government agencies may be violating the spirit, if not the letter, of the provision that support service contractors should not be used as a substitute for civil service employees. The political climate in the past fifteen years has been such as to make this situation unavoidable. Government employment has not been held in high esteem ("There are too many bureaucrats"), and yet the government has been called upon by the Congress to render ever more complicated services to various client groups. The pattern has been to appropriate the funds to do a given job but then, in the name of government "efficiency," to cut back the number of civil service people necessary to do the work.

We can now examine the role of support services in a research and technology development environment.* There are three basic contractual arrangements that agencies use to manage their programs, ranging from management of entire installations to providing support for specific activities:

1. The agency awards a fixed-term, renewable contract to a commercial firm, a university, some other not-for-profit organization, or a consortium to manage an installation. The contractor "gets no proprietary benefits from laboratory research or facilities (whatever is available to the contractor is also available to other parties on the same terms). Its role is almost entirely administrative." (ref. 167.) The classic example of this relationship is the government-owned, contractor-operated facility, such as the Jet Propulsion Laboratory and the multiprogram laboratories of the Energy Department. Employees at these installations are on the contractor's payroll; thus workers at Sandia Laboratories are employees of Western Electric, not the Energy Department.

2. The agency installation is managed by government employees, but the agency awards a master contract for housekeeping and base support, and separate contracts for more specialized functions. NASA's Kennedy Space Center is an example. Trans World Airlines provides base support, firms with major development contracts provide checkout and launch support services, and the Air Force Eastern Test Range provides joint support for services such as photoprocessing. A comparable arrangement is the use of base support contractors at military installations.

3. Again, the installation is managed by government employees but there is no master contract for base support. Instead separate contracts are let for particular activities such as technical writing, janitorial

* OMB Circular A-76 does not apply to the conduct of research and development. However, "severable" commercial activities in support of research and technology development are subject to the circular.
