Background Prior to CMIA, the timing of federal funds transfers to states was governed by the Intergovernmental Cooperation Act, Public Law 90-577. That law allowed a state to retain for its own purposes any interest earned on federal funds transferred to it “pending its disbursement for program purposes.” The House Committee on Government Operations, when considering the CMIA legislation in 1990, noted that the Intergovernmental Cooperation Act had been “the source of continuing friction between the states and the Federal Government.” The House Committee stated that under the Intergovernmental Cooperation Act, “the States need not account to the Federal Government for interest earned on Federal funds disbursed to the states prior to payment of program beneficiaries.” Several years earlier, in 1988, when the Senate Committee on Governmental Affairs had looked into this matter, it found that as a result, “some administering departments at the state level were drawing down Federal funds too far in advance of need, costing the Federal Government foregone interest.” Both committees pointed out, however, that whenever the federal government complained that states profited unduly from early drawdowns, states would recite “numerous instances where they lose interest opportunities because the Federal Government is slow to reimburse them for moneys the states advance to fund Federal programs.” At the request of the Senate Committee, a Joint State/Federal Cash Management Reform Task Force, comprised of financial management representatives from six states and six federal agencies, including OMB and Treasury, was formed in 1983 to seek fair and equitable solutions to the aforementioned problems relating to the transfer of funds between the federal government and the states. Its work contributed to passage of CMIA in October 1990. 
The House Committee expected that CMIA would “provide a fair and equitable resolution to those differences.” It would do so, according to the committee, by establishing “equitable cash transfer procedures, procedures whereby neither the Federal nor state governments profit or suffer financially due to such transfers.” CMIA, as enacted in 1990, requires the federal government to schedule transfers of funds to states “so as to minimize the time elapsing between transfer of funds from the United States Treasury and the issuance or redemption of checks, warrants, or payments by other means by a state,” and expects states to “minimize the time elapsing between transfer of funds from the United States Treasury and the issuance or redemption of checks, warrants, or payments by other means for program purposes.” To accomplish this goal, CMIA directed the Secretary of the Treasury to negotiate agreements with the individual states to specify procedures for carrying out transfers of funds with that state. It authorized the Secretary to issue regulations establishing such procedures for states with which the Secretary has been unable to reach agreement. The Senate Governmental Affairs Committee explained when considering a 1992 amendment to CMIA that the act is “meant to provide a self-enforcing incentive for both state and Federal agencies to time the transfer of Federal funds as closely as possible to their actual disbursement for program purposes, so that neither... will lose the time value of their funds.” The “self-enforcing incentive” that the Senate Committee refers to is the act’s interest liability provision. States are required to pay interest to the United States on federal funds transferred to the state from the time those funds are deposited to the state’s account until the time the state uses the funds to redeem checks or warrants or make payments by other means for program purposes. 
If a state advances its own funds for program purposes prior to a transfer of federal funds, the state is entitled to interest from the United States from the time the state’s own funds are paid out to redeem checks or warrants, or make payments by other means, until the federal funds are deposited to the state’s bank account. CMIA requires each state to calculate any interest liabilities of the state and federal government and calls for an annual exchange of the net interest owed by either party. Other key requirements of the act and/or Treasury rules and regulations are as follows: The Department of the Treasury must establish rules and regulations for implementing CMIA. States and FMS may enter into Treasury-State Agreements (TSAs) that outline, by program, the funding technique and the clearance pattern states will use to draw down funds from the federal government. If any state and FMS do not enter into such an agreement, FMS will designate the funding technique and the interest calculation method to be used by that state. States may claim reimbursement from Treasury annually for allowable direct costs relating to development and maintenance of clearance patterns and the calculation of interest. States must prepare and submit to FMS an annual report that summarizes by program the results of the interest calculation from drawdowns and may include any claims for reimbursement of allowable direct costs. The federal program agencies are required to (1) schedule transfers of funds to the states so as to minimize the time elapsing between the disbursement of federal funds from the U.S. Treasury and the issuance and redemption of checks, warrants, or payments by other means by a state and (2) upon Treasury’s request, review annual reports submitted by the states for reasonableness and accuracy. 
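The netting mechanism described above can be sketched in a few lines. This is a simplified illustration only: it assumes daily simple interest at a single hypothetical annualized rate, and the dollar amounts and day counts are invented. Actual CMIA interest calculations follow the methods specified in Treasury regulations and each state's TSA.

```python
def interest(amount, days, annual_rate):
    """Simple interest accrued on `amount` held for `days` at `annual_rate`."""
    return amount * annual_rate * days / 365

# State liability: federal funds sat in the state's account for 4 days
# before being disbursed for program purposes (hypothetical figures).
state_owes = interest(10_000_000, 4, 0.05)

# Federal liability: the state advanced its own funds 2 days before the
# corresponding federal transfer was deposited (hypothetical figures).
federal_owes = interest(3_000_000, 2, 0.05)

# The annual exchange settles only the net amount owed by either party.
net = state_owes - federal_owes
print(f"state owes {state_owes:,.2f}, federal owes {federal_owes:,.2f}, net {net:,.2f}")
```

Only the net of the two liabilities changes hands in the annual exchange, which is what makes the incentive "self-enforcing" for both parties.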
During fiscal year 1994 (which, for the majority of states, included 9 months of the states’ first fiscal year under CMIA), the federal government obligated over a reported $150 billion in federal funds to the states for programs covered under the act. (See table 1.) These programs were funded by the Departments of Health and Human Services (HHS), Labor, Education, Agriculture, and Transportation and the Social Security Administration. We did not independently verify the amounts in table 1. Objectives, Scope, and Methodology Our objective was to report, as required under the act, on CMIA’s implementation. Specifically, we determined whether, as required under the act, the Department of the Treasury developed rules and regulations for implementing the act; the Treasury-State Agreements (TSAs) were negotiated in accordance with CMIA provisions and Treasury rules and regulations; the states we visited followed the funding techniques and clearance patterns approved by FMS in requesting and transferring funds; for the states we visited, interest was assessed to the federal government and states in accordance with CMIA and Treasury rules and regulations; claims submitted by the states we visited for reimbursement of allowable direct costs incurred in implementing CMIA were prepared in accordance with Treasury regulations; the states submitted all required annual reports to FMS; and the federal program agencies (1) scheduled transfers of funds to the states so as to minimize the time elapsing between the disbursement of federal funds from the U.S. Treasury and the issuance and redemption of checks, warrants, or payments by other means by a state and (2) upon Treasury’s request, reviewed annual reports submitted by the states for reasonableness and accuracy. 
To accomplish these objectives, we (1) performed walkthroughs of how funds flow from the federal government to the states and how the states distribute the funds for program purposes, (2) interviewed state officials, (3) tested transactions, (4) interviewed state auditors, and (5) reviewed Single Audit Act reports. The Single Audit Act of 1984 requires each state or local government that receives $100,000 or more in federal financial assistance in any given year to have an annual comprehensive single audit of its financial operations, including tests to determine whether the entity complied with laws and regulations that may have a material effect on its financial statements or its major programs, as defined in the Single Audit Act. The Office of Management and Budget (OMB) publishes guidance to assist auditors in planning audits under the Single Audit Act of 1984. We also reviewed Treasury’s regulations, implementation plans, and procedures for reviewing TSAs and annual reports. In addition, we sent a questionnaire to all states to obtain their views on CMIA implementation and summarized the results of the 54 completed and returned questionnaires. To determine if the federal program agencies and the states were properly implementing CMIA, we also documented systems used to process selected transactions of eight major programs (National School Lunch, Unemployment Insurance, Chapter 1-Local Education, Family Support Payments to States, Social Services Block Grant, Medical Assistance, Highway Planning and Construction, and Supplemental Security Income). These programs were selected on the basis of federal funding levels and the amount of interest liabilities incurred during the first year of CMIA implementation. It was not part of our scope to assess the adequacy of the accounting systems states and federal program agencies used to carry out their CMIA requirements. 
The period covered by the audit was the states’ 1994 fiscal year, which, for almost all of the states, was the period from July 1, 1993, through June 30, 1994. The first required annual reports were due by December 31, 1994, and the first interest exchange between the states and the federal government occurred on or about March 1, 1995. The 12 states selected for detailed audit work were chosen primarily because they received relatively large amounts of federal funds, incurred comparatively large federal or state interest liabilities, and, in some cases, were denied interest and direct costs reimbursement claims submitted to FMS. We included states that reported interest liabilities to or from the federal government (California, Colorado, Florida, Indiana, Maryland, New York, Ohio, Pennsylvania, Texas, and Tennessee) and states that reported no state or federal interest liabilities (District of Columbia and Georgia). We also visited the Departments of Health and Human Services, Labor, Education, Agriculture, and Transportation and the Social Security Administration because they process requests for funds for the programs we selected for audit and review federal interest liabilities relating to these programs. We conducted our audit between April and September 1995 at 12 states, 6 federal program agencies, and FMS. We performed our work in accordance with generally accepted government auditing standards. While we performed limited testing of the reasonableness of the calculated interest liability and reimbursement of the direct costs for the 12 states visited, our audit scope did not include an assessment of the accuracy and completeness of the $34 million net interest liability (comprised of $41.6 million of state interest liabilities offset by a $4.7 million federal interest liability and $2.5 million in states’ claims for direct costs reimbursement), nor did we test the accuracy of program disbursements made by the states. 
We provided a draft of this report to Treasury’s FMS for review and comment. FMS agreed with our findings and conclusions. The Three Key Agents of CMIA Have Made Progress in Achieving the Act’s Purpose Our review showed that the Department of the Treasury, federal program agencies, and the states have made substantial progress in achieving the act’s purpose of timely transfers of funds. Most state officials acknowledged that CMIA has helped heighten their awareness of cash management, but several expressed concern over what they viewed as added administrative burden. While the three key agents have made progress in implementing CMIA, three of the states we visited consistently did not comply with certain Treasury rules and regulations. Some of the noncompliance situations resulted in an understatement of the reported state interest liability. However, because it was outside the scope of our audit, we did not attempt to project the total understatements resulting from these noncompliances. We communicated these noncompliances to FMS, and it informed us that it will take appropriate actions to address them. Financial Management Service and Federal Program Agencies As amended, CMIA directed that by July 1, 1993, or the first day of a state’s fiscal year beginning in 1993, whichever is later, the Secretary of the Treasury was to make all reasonable efforts to enter into a written agreement with each state that receives a transfer of federal funds. This agreement was to document the procedures and requirements for the transfer of funds between federal executive branch agencies and the states. In addition, the Secretary was to issue rules and regulations within 3 years relating to the implementation of CMIA. FMS officials have made substantial efforts to enable successful implementation. 
They published final rules and regulations for implementing CMIA; contracted for development of clearance patterns that could be used by states that did not develop their own; developed and issued an Implementation Guide, Federal and State Review Guides, and a Treasury-State Agreement Form Book; negotiated first year TSAs, within the time period specified in the act, with all but two states and second year agreements with all but one state; reviewed the documentation for reimbursement of allowable direct costs over $50,000 submitted by the states; received first-year annual reports from all the states and submitted them to program agencies for review of federal interest liabilities claimed; issued several policy statements intended to clarify regulations; submitted to OMB suggested language on CMIA-related audit objectives and procedures for inclusion in the planned revisions to the Compliance Supplement for Single Audit Act reviews; and developed plans to revise the CMIA regulations to streamline processes to make them more flexible. As part of its revision of the CMIA regulations, FMS plans to allow for greater variation in funding techniques and to delete descriptions and examples of the four current funding techniques from the regulations. Thus, according to FMS, states will be able to choose a technique that meets their needs. FMS also plans to eliminate the prohibition on reimbursable funding to provide states with greater flexibility in funding techniques. In the same regard, we found that the federal program agencies met their responsibilities under the act to transfer funds in a timely manner. This is evidenced by the relatively small (approximately $4.7 million) federal interest liability incurred in the first year of the act’s implementation. States State officials generally credit CMIA with heightening their awareness of cash management matters. 
Even though several of them said that they had been practicing cash management techniques prior to CMIA, they still believed that CMIA was instrumental in focusing attention on when federal funds should be requested. Of the 54 states responding to our questionnaire, 41 stated that CMIA raised their level of awareness regarding cash management. Thirty-two said that CMIA is needed to ensure financial equity in the transfer of funds. The 12 states we visited were generally making a good effort to comply with CMIA requirements. The following sections describe actions states have taken and provide additional details on actions taken by the 12 states we visited and the noncompliance situations we found at 3 of the states. Treasury-State Agreement: All but 2 of the 56 states and all of the 12 states visited signed a first year TSA with FMS. Clearance Pattern Methodology: Nine of the states we visited developed their own clearance patterns based on techniques described in the Treasury regulations. Three chose to accept a clearance pattern provided by FMS based on a study done under contract for the federal government. In an effort to be efficient, a few states are testing clearance patterns on a quarterly basis, even though they are not required by Treasury regulations to recertify their clearance patterns more frequently than every 5 years. Adherence to Agreed-to Drawdown Techniques: For all the programs included in our review, we tested to determine whether states we visited were drawing down federal funds in accordance with the terms contained in their agreements. Generally, we noted that drawdowns complied with agreement terms. However, in one state, the agreed-upon drawdown techniques were consistently not followed for six of the seven programs tested. For example, two programs were consistently drawing funds several days prior to the TSA-specified schedule. 
According to program officials, the agreed-upon funding techniques negotiated by the state treasurer’s office did not reflect the actual timing of when these funds were clearing accounts. Therefore, the program officials drew the funds in what they thought was a more accurate manner. In addition, the state filed an amended annual report with FMS reducing its net state liability from about $500,000 to $60,000. The state informed FMS that it had followed its agreed-upon funding techniques in all its programs and, therefore, was reducing its previously reported interest liability. However, as mentioned above, we found that the state was consistently not following its agreed-upon funding techniques. In another state, our work showed that no attempt was made to draw down in accordance with the funding technique for five of the programs tested. According to program officials, they were unaware of the techniques specified in the agreement because they were not consulted before the agreement was approved, nor had they seen the agreement after it went into effect. In this case, no federal interest liability was created since funds were being transferred to the states in a timely manner whenever they were requested. However, in the transactions we looked at, this did result in the state consistently using its own money to fund programs until it received federal funds. Interest Calculation: Ten of the 12 states we visited computed interest liabilities. Both states that did not make such computations told us they had no interest liabilities to compute. However, our review showed that one of these states should have computed an interest liability on certain refunds it received. Our tests of interest calculations showed some problems. For example, one state claimed a federal interest liability because it did not receive federal funds by the time specified in the TSA. 
FMS denied a significant portion of this claim because it concluded that the state was not requesting funds in time for the federal government to provide them as called for in the agreement. We attempted to determine the reasonableness of the state’s claim, but state officials told us that they no longer had sufficient documentation to support their claim. Direct Cost: The Treasury regulations authorize states to claim reimbursement for direct costs incurred for developing and maintaining clearance patterns and computing interest liabilities. Reimbursable direct costs were claimed by 11 of the 12 states we visited. FMS denied a significant portion of the direct cost claims for two of these states. FMS denied a portion of the claims because the documentation submitted did not support costs allowable under CMIA. One state has appealed the decision and the other is considering an appeal. In those cases where reimbursement was approved, our review of supporting documentation indicated that the states had reasonable support for their claims. Annual Reports: All 56 states submitted an annual report to FMS for the first year’s activities. Some States View Certain Procedures as Burdensome While overall states see benefits from CMIA, such as a heightened awareness of cash management, some expressed concern about what they perceived as an additional burden of the act. In 24 of the 54 responses to our questionnaire and 7 of the 12 states we visited, officials expressed their view that the additional administrative tasks associated with implementing the act are burdensome. In addition, officials at 2 of the states we visited stated that the CMIA regulations were inflexible. Some of the issues cited by the states included: Administrative tasks needed to comply with CMIA, such as preparing TSAs and annual reports, developing clearance patterns, computing interest liabilities, tracking refunds, and compiling direct costs, are burdensome to their operations. 
Three states said that the Treasury was being inflexible by not allowing them to use the reimbursable funding technique, which is a method of transferring federal funds to a state after the state has paid out its own funds for program purposes. After June 30, 1994, Treasury regulations prohibited reimbursable funding, except where mandated by federal law. One state said that it believed that the act itself does not specifically prohibit reimbursable funding and that some federal assistance programs must use it as a necessity. It said that using another funding technique that requires estimating cash needs in advance and reconciling later to actual expenditures creates an unnecessary administrative burden. It also said that the cash needs for some programs cannot be estimated due to fluctuating activities. As we discussed earlier, FMS is planning to revise the CMIA regulations to allow for the use of reimbursable funding. A Treasury policy statement requires that average clearance patterns be calculated out until 99 percent of the funds have cleared through the bank account. Some of the states said that this degree of precision was unnecessary because it requires them to make excessive small-dollar draws. Treasury regulations require states to compute interest on refunds for which the federal share is $10,000 or more. Several of the states said that monitoring all programs covered by CMIA for refunds was burdensome given that most of these refunds relate to one federal program. We determined that over 90 percent of all state interest liabilities from refunds reported by the states in the first year annual reports related to one federal program. Some states said that the Treasury regulations should allow reimbursement for all direct costs related to implementing CMIA and not just those costs related to the three specific categories identified in the regulations. 
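The 99 percent requirement can be illustrated with a hypothetical clearance pattern, which shows why a pattern may have to extend across many days of small residual clearances. The daily percentages below are invented for illustration; actual patterns are developed from a state's check-clearing history under Treasury regulations.

```python
# Hypothetical fraction of issued check dollars clearing the bank on each
# day after issuance (day 0, day 1, ...). Values are illustrative only.
daily_cleared = [0.40, 0.30, 0.15, 0.08, 0.03, 0.02, 0.005, 0.005, 0.005, 0.005]

cumulative = 0.0
for day, fraction in enumerate(daily_cleared):
    cumulative += fraction
    if cumulative >= 0.99:
        # Under the Treasury policy statement, the pattern must be carried
        # out to this day -- the point at which 99% of funds have cleared.
        print(f"99% of funds cleared by day {day}")
        break
```

In this invented pattern, 96 percent of the dollars clear within the first 5 days, but reaching the 99 percent threshold requires tracking several more days of very small clearances, which is the kind of small-dollar precision the states described as burdensome.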
We did not determine the extent of burden created by the added administrative tasks placed on the states as a result of implementing CMIA. However, it should be noted that the states can submit claims for reimbursement for some of the efforts required. Also, some of the tasks, such as preparing TSAs and annual reports, developing clearance patterns, and computing interest liabilities, should be less onerous now that the initial processes for generating this information have been established. First-Year Exchange of Funds Indicates Act Is Working Under CMIA, a state is authorized to draw down funds based on approved funding techniques. If the state requests funds early, interest is due the federal government. Conversely, if the federal government fails to transfer funds on time, the state is due interest. Ideally, under the act, the transfer of funds would be interest neutral, with neither the federal government nor the states incurring any interest liability. The first year of implementation of CMIA resulted in a cumulative net state interest liability due to the federal government of approximately $34 million. Taken in context, this liability is relatively small compared to the over $150 billion reported as obligated in fiscal year 1994 for the programs covered by the act. Table 2 summarizes the components of the $34 million net state interest liability. Interest claims are submitted by program. FMS denied 47 claims by 15 states for interest (approximately $6.4 million). Reasons cited included insufficient documentation and repeated failure to follow the funding technique specified in the TSA. As of October 1995, 8 of the 15 states had appealed those denials to FMS. All but 2 of those appeals have been resolved. 
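The components reported earlier in this report ($41.6 million in state interest liabilities, offset by a $4.7 million federal interest liability and $2.5 million in states' direct cost claims) reconcile to the net figure by simple netting:

```python
# Figures in $ millions, as reported for the first-year exchange.
state_interest = 41.6    # state interest liabilities owed to the federal government
federal_interest = 4.7   # federal interest liabilities owed to the states
direct_costs = 2.5       # states' claims for direct cost reimbursement

net_state_liability = state_interest - federal_interest - direct_costs
print(f"net state liability: ${net_state_liability:.1f} million")
```

The result, $34.4 million, is consistent with the approximately $34 million net state interest liability cited in the report.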
FMS denied a portion of direct cost reimbursement claims submitted by 10 states because the costs were not eligible for reimbursement under Treasury rules and regulations, or the supporting documentation contained both eligible and ineligible costs which could not be separately identified. Three states submitted claims to appeal the denials; two of these states’ appeals were subsequently approved based on additional supporting documentation provided to FMS. As indicated previously, most of the states visited computed interest liabilities in accordance with TSAs, and the majority of the programs reviewed had interest neutral funding techniques, whereby neither the federal government nor the states incur interest. Much of the state interest liability was beyond state agencies’ immediate control and was instead attributed to certain states’ laws which require that they have the federal funds in the bank before they make any associated disbursements, as opposed to when the check clears the bank. Four of the 12 states we visited had a state interest liability totaling $18.5 million which primarily resulted from the states’ adherence to such laws. Single Audit Coverage OMB publishes guidance to assist auditors in planning audits under the Single Audit Act of 1984. The guidance, entitled, Compliance Supplement for Single Audits of States and Local Governments, was last updated in September 1990 and does not address CMIA, which was enacted in October 1990. OMB plans to issue a revised Compliance Supplement during fiscal year 1996 which will address CMIA requirements. We reviewed and generally supported a draft of the proposed revisions to the Compliance Supplement relating to cash management. However, we suggested that the Compliance Supplement also include provisions to determine that clearance patterns were properly established and verified by the appropriate state official. 
The fiscal year 1994 single audit reports for the states we visited lacked consistency and comprehensiveness in checking for compliance with CMIA requirements. Auditors in some of the states we visited said that they obtained knowledge about CMIA by obtaining FMS’ guidelines to state governments and by attending cash management and audit conferences where CMIA was discussed. The auditors also said that they intended to expand work in their next audits to cover other aspects of CMIA requirements, such as clearance pattern establishment and compliance with drawdown techniques contained in the TSA. FMS officials informed us that they do not routinely receive a copy of single audit reports from each state. Under the single audit concept, audited entities are only required to submit single audit reports to federal agencies that directly provide them funds and to the Single Audit Clearinghouse, Governments Division, of the Commerce Department. Since FMS is not a funding agency, entities would not be required to submit reports to FMS. However, FMS may obtain copies of single audit reports from the Federal Audit Clearinghouse. Since some states comply with Single Audit Act requirements by arranging for single audit reports for each state department and agency that receives federal assistance, rather than one single audit for the entire state, FMS would in those cases need to obtain multiple reports for a given state. FMS officials also informed us that they do not routinely review the reports they do receive for CMIA findings. In our June 1994 report on the single audit process, we pointed out that single audit reports are not user friendly. We recommended that the auditors include a summary of their determinations concerning the entity’s financial statements, internal controls, and compliance with laws and regulations. The summary information would be useful because single audit reports generally contain seven or more reports from the auditor. 
We also recommended that the results of all single audits be made more accessible by having the Federal Audit Clearinghouse compile the results in an automated database. We believe that more useful information on compliance with cash management requirements, particularly when summarized in an accessible database, would provide FMS officials with a better basis for reviewing and acting on CMIA issues. Conclusions The Cash Management Improvement Act has heightened awareness of cash management at both the state and federal levels. Treasury, the federal agencies, and the states have made substantial progress in implementing the act. By implementing its plans to begin revising CMIA regulations to streamline the process and placing greater emphasis on using the results of single audits as a means of overseeing state activities and enforcing CMIA requirements, FMS should be able to further improve the act’s effectiveness and help alleviate any concerns about administrative burden. We are also sending this report to the Secretary of the Treasury; the Commissioner of the Financial Management Service, Department of the Treasury; the Director of the Office of Management and Budget; and the Chairmen and Ranking Minority Members of the House Committee on Government Reform and Oversight, Subcommittee on Government Management, Information and Technology and Senate Committee on Governmental Affairs. We will also send copies to others on request. This report was prepared under the direction of Gregory M. Holloway, Director, Governmentwide Audits, who may be reached at (202) 512-9510 if you or your staffs have any questions. Other major contributors to this report were Gary T. Engel, Senior Assistant Director; J. Lawrence Malenich, Assistant Director; and Johnny R. Bowen, Senior Audit Manager. Gene L. Dodaro Assistant Comptroller General The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Abstract Pursuant to a legislative requirement, GAO reviewed the Financial Management Service's (FMS), federal agencies', and states' implementation of the Cash Management Improvement Act (CMIA) during 1994. GAO found that: (1) FMS, federal agencies, and states have complied with CMIA requirements, established processes to implement CMIA, and made progress in achieving the act's goal of timely fund transfers; (2) total state interest liability during the first year of CMIA implementation was about $34 million; (3) states reported that while CMIA has improved their awareness of cash management, they are burdened by added administrative tasks; (4) states have not been able to effectively measure their compliance with CMIA, since the Office of Management and Budget has not published guidance for testing CMIA compliance; (5) FMS is taking action to address instances of state noncompliance in implementing CMIA which resulted in understatements of reported state interest liability; and (6) FMS is planning to revise CMIA regulations to allow states greater flexibility in funding techniques.
Scope and Methodology

To describe the history and nature of the NAOMS project, we researched, reviewed, and analyzed related material posted on several NASA Web sites and provided to us directly by NASA and its contractor for NAOMS. We reviewed relevant documents on the House of Representatives' Committee on Science and Technology Web site. We examined relevant documents produced by the Battelle Memorial Institute (Battelle), National Academies, and others as well as information produced for the National Research Council. In addition, we reviewed a number of relevant reports, articles, correspondence, and fact sheets on the NAOMS project and air safety. Many of the publicly available materials we reviewed are named in the bibliography at the end of this report. To analyze the NAOMS air carrier pilot survey's planning, design, and implementation (including pretest, interview, and data collection methods); interviewer training; development of survey questions, including which safety events to include in the survey; and sampling, we interviewed officials from NASA, the Federal Aviation Administration (FAA), and the National Transportation Safety Board (NTSB) and NAOMS project staff. We also reviewed relevant documents. We discussed the survey with NAOMS team members to obtain their recollections of the work, particularly regarding limitations, gaps, and inconsistencies in the documentation. GAO internal experts in survey research reviewed the Office of Management and Budget's (OMB) Standards and Guidelines for Statistical Surveys and derived a number of survey research principles relevant to assessing the NAOMS survey. We compared the NAOMS survey's design and implementation with these principles. Although OMB's standards as they are used today were not final until 2006, the vast majority of OMB's guidelines represent long-established, generally accepted professional survey practices that preceded the 2006 standards by several decades.
We also examined the potential risk for survey error— that is, “errors inherent in the methodology which inhibit the researchers from obtaining their goals in using surveys” or “deviations of obtained survey results from those that are true reflections of the population.” Survey error could result from issues related to sampling (including noncoverage of the target population and problems with the sampling frame), measurement error, data processing errors, and nonresponse. We asked three external experts to review and assess the NAOMS air carrier pilot survey’s design and implementation as well as considerations for analysis of collected data. These external reviews and assessments were conducted independently of our own review activities. We selected the experts for their overall knowledge and experience in survey research methodology and, specifically, for their expertise in measurement (particularly the aspects of memory and recall), survey administration and management, and sampling and estimation. The experts included Robert F. Belli, Professor, Department of Psychology, University of Nebraska, Lincoln, Nebraska; Chester Bowie, Senior Vice President and Director, Economics, Labor, and Population Studies, National Opinion Research Center, Bethesda, Maryland; and Steve Heeringa, Senior Research Scientist at the Survey Research Center and Director of the Statistical Design Group at the Institute for Social Research, University of Michigan, Ann Arbor, Michigan. To determine what steps or other considerations might improve the quality and usefulness of a survey like NAOMS if one were to be implemented in the future, we identified and described methodological deviations that we found from GAO’s guidance and OMB’s standards. We also obtained the views of internal and external experts on how limitations caused by such deviations might be overcome. We assessed the potential or known effects of design or implementation limitations we identified. 
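The sampling component of survey error described above can be quantified. As a minimal illustration (the sample size, frame size, and event proportion below are hypothetical, not NAOMS figures), the standard error of an estimated proportion under simple random sampling bounds how far an estimate is likely to fall from the true population value:

```python
import math

# Illustrative only: quantifying the sampling component of survey error.
# Suppose a simple random sample of n pilots from a frame of N, where a
# fraction p of respondents report at least one event in the recall
# period. (n, N, and p are hypothetical; NAOMS used its own design.)
def rate_ci(p, n, N, z=1.96):
    """95% confidence interval for a proportion under simple random
    sampling, with a finite population correction."""
    fpc = (N - n) / (N - 1)              # finite population correction
    se = math.sqrt(fpc * p * (1 - p) / n)
    return p - z * se, p + z * se

low, high = rate_ci(p=0.10, n=2000, N=60000)
print(f"estimate 10.0%, 95% CI: {low:.3%} to {high:.3%}")
```

Note that this captures only sampling error; the noncoverage, measurement, processing, and nonresponse errors listed above would widen the true uncertainty beyond this interval.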
We focused our review on the most extensively developed part of the NAOMS effort, the air carrier pilot survey. We discuss the general aviation study as it relates to the air carrier survey and overall project evolution, but we do not focus on its development or implementation. We attempted to distinguish problems that might have prevented the NAOMS survey data from producing meaningful results from limitations that might not materially affect the survey results but could result from accepting the reasonable risks and trade-offs inherent in any survey research project. We note that limitations may not necessarily be weaknesses. We conducted our work from March 2008 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

NAOMS Was Intended to Identify Accident Precursors and Potential Safety Issues

The NAOMS project was conceived and designed in 1997 to provide broad, long-term measures on trends and to measure the effect of new technologies and policies on aviation safety. Following the 1996 formation of the White House Commission on Aviation Safety and Security, and the commission's 1997 report to the President committing the government and industry to “establish a national goal to reduce the aviation fatal accident rate by a factor of five within ten years and conduct safety research to support that goal,” NASA worked with FAA and NTSB to set up the Aviation Safety Investment Strategy Team within NASA. This team organized workshops, examined options, and recommended a strategy for improving aviation safety and security.
One of its recommendations led to NASA's Aviation System Monitoring and Modeling (ASMM) project, a program to identify existing accident precursors in the aviation system and to forecast and identify potential safety issues to guide the development of safety technology. ASMM, within NASA's Aviation Safety and Security Program, was to provide systemwide analytic tools for identifying and correcting the predisposing conditions of accidents and to provide methodologies, computational tools, and infrastructure to help experts make the best possible decisions. ASMM was expected to accomplish this by, among other things,

- intramural monitoring, providing air carriers and air traffic control facilities with tools for monitoring their own performance and safety within their own organizations, and
- extramural monitoring, providing a comprehensive, systemwide, statistically sound survey mechanism for monitoring the performance and safety of the overall National Air Transportation System by seeking the perspectives of flight crews, air traffic controllers, cabin crews, mechanics, and other frontline operators (NAOMS was developed as the primary mechanism for collecting this information).

Agencies, airlines, and other private organizations had realized that the quantitative and anecdotal information they had been collecting could not be used to calculate statistically reliable risk levels. The project team identified eight major aviation safety data sources that were available when NAOMS was created. For example, flight operational quality assurance data could have helped in deriving statistically reliable estimates from digital measurements of flight parameters, but these data do not cover all airlines or include information on human cognition or affect.
Another dataset was from the Aviation Safety Reporting System (ASRS), which for 30 years had been successfully collecting information from pilots, controllers, mechanics, and other operating personnel about human behavior that resulted in unsafe occurrences or hazardous situations. However, because ASRS reports are submitted voluntarily, the resulting data cannot be used to generate reliable rate estimates. Under ASRS, pilots describe events briefly by mail or on NASA's ASRS Web site. NASA reviews each report and enters detailed information about the events into an anonymous database that it maintains. According to the ASRS Director, the system is subject to volatility in reporting: in 2006, for example, reports of wrong-runway use spiked following a fatal accident in Kentucky, where pilots turned onto a runway that was too short for their aircraft to attain lift-off speed. Also, ASRS is not statistically generalizable. Although ASRS does not constrain the types of events that can be reported, reporting is voluntary and unlikely to cover the universe of safety events, and it cannot be used to calculate trends. To complement this system and other safety databases, the NAOMS project was to interview a statistical sample of professionals participating in the air transportation system, including pilots, about their experiences. Data from the interviews were to enable statistically reliable measurements of rates and rate trends for a wide array of safety events, such as fire in the cargo or passenger compartment, severe turbulence encountered in clear air, collisions with birds, airframe icing, and total engine failure. As the project evolved, the NAOMS researchers decided to deemphasize NAOMS's potential to calculate rates in isolation, instead highlighting the project's primary capability to identify trends worthy of investigation, thereby complementing other data sources.
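The distinction between voluntary reports and a probability sample matters because a rate needs both a numerator and a denominator drawn from a known population; voluntary systems supply neither. A minimal sketch of a design-based rate estimate (the numbers and the simple ratio estimator are illustrative assumptions, not NAOMS's actual estimator):

```python
# Illustrative sketch (not NAOMS's actual estimator): a design-based
# event rate per 10,000 flight hours. Each sampled respondent
# contributes an event count, an exposure (hours flown in the recall
# period), and a sampling weight (inverse of selection probability).
respondents = [
    # (events_reported, flight_hours_in_recall_period, sampling_weight)
    (2, 180.0, 30.0),
    (0, 210.0, 30.0),
    (1,  95.0, 45.0),
    (0, 140.0, 45.0),
]

weighted_events = sum(w * e for e, h, w in respondents)
weighted_hours = sum(w * h for e, h, w in respondents)
rate = 10_000 * weighted_events / weighted_hours
print(f"estimated rate: {rate:.1f} events per 10,000 flight hours")
```

A voluntary system such as ASRS cannot support this calculation: the hours flown by non-reporters are unknown, and reporters select themselves, so no valid weights exist.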
The premise of the NAOMS project was that aviation personnel were the best source of information on day-to-day, safety-related events. In measuring the occurrence of safety incidents that might increase the risk of an accident, rather than accidents themselves, the project would serve a monitoring role rather than an investigative role. Instead of directly informing policy interventions, NASA expected that trends seen in the NAOMS data would point aviation safety experts toward what to examine in other data systems. However, to date, the accuracy of rate and trend estimates based on NAOMS data has not been established. NASA appointed two researchers with aviation safety experience to lead a project team in developing surveys for NAOMS as a part of ASMM. The researchers contracted with Battelle to administer the project. Battelle, in turn, subcontracted with experts in survey methodology and aviation safety to help with questionnaire construction and project execution. Project documentation set out four goals for the resulting statistics:

“1) plausibility and understandability of NAOMS statistics (e.g., reasonable and reliable representation of the relative frequencies with which unwanted events occur),
“2) stability and interpretability of NAOMS statistical trends,
“3) sensitivity to industry concerns about data misuse, and
“4) timely and appropriate disclosures of NAOMS findings.”

A primary objective of NAOMS was to demonstrate that surveys of personnel from all aspects of the aviation community could be cost-effectively implemented to help develop a full and reliable view of the national airspace system (NAS). NASA also sought to find a permanent “home” for the surveys, having planned to develop “scientific methodologies to maximize the useful information and minimize the cost, but not . . . provide for permanent service” or funding for NAOMS. That is, NASA intended the NAOMS project to collect data continually from air carrier and general aviation pilots, helicopter pilots, air traffic controllers, flight attendants, and mechanics.
It sought to design a permanent survey data collection operation that, once implemented, could generate ongoing data to track event rates into the future (see fig. 1). NASA was to conduct the research and development steps necessary to demonstrate a survey methodology that would quantitatively measure aviation safety throughout the NAS, but it expected that a different organization, possibly FAA, would permanently implement the surveys NASA developed.

Briefings to Aviation Safety Decision Makers

NASA's project leaders outlined these objectives in briefings, presentations, workshops, and meetings as they explained the project's concept and progress (see table 1). The NAOMS team briefed officials overseeing the ASRS project, for example, on NAOMS's concept as early as 1997. In 2005, the team showed the Commercial Aviation Safety Team (CAST) how the NAOMS air carrier pilot survey could help develop metrics to assess the effectiveness of safety interventions. Another early presentation, in March 1998, demonstrated NAOMS's concept and goals while spelling out the project's first phase in detail. Project staff planned to profile and summarize participant demographics in a technical document, develop a preliminary statistical design, identify high-value survey topics, incorporate these topics into a draft survey instrument, and analyze and validate the survey design to refine the survey instrument. The presentation delineated four distinct project phases: develop the methodology, while engaging stakeholder support; conduct a test survey to prove the concept; implement the full nationwide survey incrementally; and hand off the instrument to an organization interested in operating it over the long term. Project staff were later to describe the first two stages as one “methods development” phase.
Figure 2 outlines the completion of these phases, from the first briefings to aviation safety decision makers in 1997 through the delivery of NAOMS's data collection system to the Air Line Pilots Association, International (ALPA) in January 2007. The figure reflects changes in the NAOMS project resulting from NASA's decision to halt development of the full array of surveys indicated in figure 1. By 2004, the original target date for permanent implementation of the surveys, the team had been able to develop and begin only the pilot surveys (for both air carrier and general aviation pilots), not those for other personnel as initially planned. As shown in figures 1 and 2, NASA originally planned to end funding in 2004 but extended it to 2007 to “properly fund transition of the data” to the larger safety community. A Web-based version of the air carrier pilot survey and related information were handed off to ALPA in January 2007.

The Survey's Development: Feasibility, Methodology, and Field Testing

In 1998, members of the NAOMS team—NASA managers, survey methodologists, experts in survey implementation, aviation safety analysts, and statisticians working with support service contractors from Battelle—began to study long-term surveys that had helped support government policymaking since at least 1948. The team intended for NAOMS to employ the best practices of surveys that provide comparable benefits in other policy areas. The team members reviewed an extensive variety of surveys used for national estimates and for risk monitoring. These surveys included the Centers for Disease Control and Prevention's Behavioral Risk Factor Surveillance System, which provides information on, among other things, rates of smoking, exercise, and seat-belt use, and the Bureau of Labor Statistics' Consumer Expenditure Survey, which provides data used to construct the consumer price index. The team's aim was to learn how the NAOMS survey could measure actual experiences.
Project documents described the value of drawing on the experiences of personnel

“who were watching the operation of the aviation system first-hand and who knew what was happening in the field . . . this use of the survey method was in keeping with many other long-term federally funded survey projects that provide valuable information to monitor public risk, identify sources of risk that could be minimized, identify upward or downward trends in specific risk areas, to call attention to successes, identify areas needing improvement, and thereby save lives . . . .”

They held that

“only the aviation systems operators—its pilots, air traffic controllers, mechanics, flight attendants, and others—[had] the situational awareness and breadth of understanding to measure and track the frequency of unwanted safety events and to provide insights on the dynamics of the safety events they observe. The challenge was to collect these data in a systematic and objective manner.”

In 1999, the team established a plan of action that included a feasibility assessment, with a literature review, to study methodological issues, estimate sample size requirements, and enlist the support of the aviation community. The assessment also planned for research that included a series of focus groups to help determine likely responses to a survey and a study of how pilots recall experiences and events. It also outlined a field trial to begin in fiscal year 1999 and, finally, a staged implementation, beginning with air carrier pilots, progressing to a regular series of surveys, and moving on to other aviation constituencies. For the feasibility assessment, NAOMS researchers consulted with industry and government safety groups, including members of CAST and FAA and analysts with ASRS. They reviewed aviation event databases such as ASRS, the National Airspace Information Monitoring System, and Bureau of Transportation Statistics (BTS) data on air carrier traffic.
The team drew on information from this research, as well as team members' own expertise, to construct and revise a preliminary questionnaire for air carrier pilots. The team still needed to answer several questions:

“What risk-elevating events should we ask the pilots to count?
“How shall we gather the information from pilots—written questionnaires, telephone interviews, or face-to-face interviews?
“How far back in the past can we ask pilots to remember without reducing the accuracy of their recollections?
“In what order should the events be asked about in the questionnaire?”

As a result of the 600 air carrier pilot interviews conducted for the field trial, the researchers decided that telephone interviewing was sufficiently cost-effective and had a high enough response rate to use in the final survey. The field trial had tested question content that derived from previous research and had experimented with the order of different sections of the survey. The field trial gave the team confidence that the NAOMS survey was a viable means of monitoring safety information. However, the field trial did not fully resolve questions about the period of time that would best accommodate pilots' ability to recall their experiences or about the best data collection strategy.

Getting the Survey Under Way

The team had decided before the field trial that the NAOMS questionnaire content and structure were to be governed by (1) measures of respondent risk exposure, such as the numbers of flight hours and flight legs flown; (2) estimates of the numbers of safety incidents and related unwanted events respondents experienced during the recall period; (3) answers to questions on special focus topics stakeholders requested; and (4) feedback on the quality of the questions and the overall survey process.
After the team analyzed the data from the field trial and conducted further extensive research, it decided that the NAOMS survey should address as many safety events identified during its preliminary research as practical, that its questions should be ordered to match clusters from the field trial based on causes and phases of flight, and that a sample size of approximately 8,000 to 9,000 interviews per year would provide sufficient sensitivity to detect changes in rates. The team structured the survey in four sections in accordance with their original expectations of what the survey should cover. NAOMS’s project managers explained the rationale for this structure, shown in figure 3, in a 2004 presentation to FAA’s Air Traffic Organization (ATO). NASA’s contractors began computer-assisted telephone interviewing (CATI) data collection for the full air carrier pilot survey in March 2001. Using a sample that was drawn quarterly from a subset of a publicly available FAA database, interviewers surveyed pilots regularly over approximately 45 months of data collection. The survey methodology changed during the first few months of the survey: that is, researchers settled on which recall period to use and a cross-sectional data collection strategy approximately 1 year after the operational survey began. Interviewing ended in December 2004, by which time more than 25,000 air carrier pilot interviews had been completed. In addition to the air carrier pilot survey, NAOMS researchers explored elements of the original action plan for the project. They conducted focus groups with air traffic controllers and drafted preliminary survey questions. Building on research done for the main air carrier survey, NAOMS staff also developed and implemented a survey for general aviation pilots that ran for approximately 9 months in late 2002 and early 2003. 
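The sensitivity of a sample of roughly 8,000 to 9,000 interviews per year can be illustrated with a back-of-the-envelope power calculation. The sketch below is our own simplification, not the NAOMS team's analysis: it treats event counts as Poisson, ignores design effects and recall error, uses hypothetical exposure figures, and asks how large a year-over-year rate change such a sample could detect.

```python
import math

# Rough power calculation: minimum detectable relative change between
# two annual event rates, assuming Poisson event counts. All inputs
# (rate, interviews, hours per interview) are hypothetical.
def min_detectable_change(rate_per_hour, interviews, hours_per_interview,
                          z_alpha=1.96, z_beta=0.84):
    """Relative change detectable at ~5% significance and ~80% power."""
    exposure = interviews * hours_per_interview   # total hours per year
    se_diff = math.sqrt(2 * rate_per_hour / exposure)
    delta = (z_alpha + z_beta) * se_diff          # detectable difference
    return delta / rate_per_hour                  # as a relative change

# e.g., an event occurring once per 1,000 flight hours, with ~45
# recall-period hours covered per interview
rel = min_detectable_change(0.001, 8500, 45.0)
print(f"detectable relative change: {rel:.1%}")
```

Under these assumptions, only fairly large year-over-year swings in a rare event's rate would be statistically detectable, which is consistent with the team's emphasis on trends across many events rather than precise rates for any single one.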
However, by the end of 2002, NASA realized that it would not be feasible to expand the project to other aviation personnel under its initial plan to hand off the surveys for permanent service at the end of fiscal year 2004. NAOMS staff focused their attention on establishing the NAOMS air carrier pilot survey as a permanent service, noting that the system was still under development and that its benefits had not been fully demonstrated. They suggested that it would be difficult to find an organization willing to commit the financial and developmental resources necessary to manage an uncompleted project.

The Survey's Handoff and Results, and the NASA Inspector General's Review

NASA's documentation had repeatedly shown that the NAOMS project's purpose was “the development of methodologies for collecting aviation safety data,” with their eventual transition “to the larger safety community” for permanent implementation. NAOMS had met its key objectives of demonstrating a survey methodology to quantitatively measure aviation safety and track trends in event rates by the end of 2004, when original funding for the project had been scheduled to end. Seeking to ensure the future of the survey while streamlining the project, project staff tested whether Web-based data collection would be cost-effective. NASA established an agreement with ALPA, which planned to initiate a Web-based version of the air carrier pilot survey on behalf of CAST and its Joint Implementation Measurement Data Analysis Team. NASA extended NAOMS's original funding into 2007 to accommodate the transition to ALPA. NASA conducted training sessions for ALPA staff on the NAOMS Web application in early fiscal year 2007 and conveyed the operational data collection system to ALPA in January 2007. However, ALPA never fully implemented the Web survey. According to an ALPA official in late 2007, the organization was exploring how to modify the survey before implementing it.
Although ALPA never had access to existing NAOMS data, this official also expressed uncertainty about what should be done with the existing data. The project effectively ended at the point of transfer. According to the report of NASA's Office of Inspector General (OIG), the project had

“demonstrated a survey methodology to quantitatively measure aviation safety, tracked trends in event rates over time, identified effects of new procedures introduced into the operating environment, and generated interest and acceptance of NAOMS by some of the aviation community as described in the Project Plans.”

The OIG report also identified several shortcomings of the project, including that (1) the “contracting officers did not adequately specify project requirements” or “hold Battelle responsible for completing the NAOMS Project as designed or proposed”; (2) the “contractor underestimated the level of effort required to design and implement the NAOMS survey”; (3) “NASA had no formal agreement in place for the transfer and permanent service of NAOMS”; and (4) “NAOMS working groups failed to achieve their objectives of validating the survey data and gaining consensus among aviation safety stakeholders about what NAOMS survey data should be released.” An additional deficiency, according to the OIG, was that, as of February 2008, “NASA had not published an analysis of the NAOMS data nor adequately publicized the details of the NAOMS Project and its primary purpose as a contributor to the ASMM Project.”

NAOMS's Planning and Design Were Robust, but Implementation Decisions Complicate Data Analysis

We found that, overall, the NAOMS project followed generally accepted survey design and implementation principles, but decisions made in developing and executing the air carrier pilot survey complicate data analysis. We discuss in this report each of the three major stages of survey development—planning and design, sample design and selection, and implementation—in turn.
While we document the many strengths of the NAOMS survey and its evolution, we also discuss limitations that raise the risk of potential errors in various aspects of the survey's results. We also note where design, sampling, and implementation decisions directly or potentially affect the analysis and interpretation of NAOMS's data. Table 2 outlines the generally accepted survey research principles, derived in part from OMB guidelines, that we used in our assessment. The table is a guide primarily to how we answered our second question on the strengths and limitations of the design, sampling, and implementation of the NAOMS survey. However, we caution that survey development is not a linear process; steps appearing in one section of table 2 may also apply to other aspects of the project. Direct fulfillment of each step, while good practice, is not sufficient to ensure quality. Additional related practices, and the interaction of various steps throughout the course of project development and implementation, are essential to a successful survey effort. Table 2 should be viewed not as a simple checklist of survey requirements, but as guiding principles that underlie the narrative of our report and our overall evaluation of the NAOMS survey.

The Survey's Planning and Design

Early documentation of the NAOMS project shows that the project was planned and developed in accordance with generally accepted principles of survey planning and design. As we have previously discussed, the project team established a clear rationale for the air carrier pilot survey and its use for ongoing data collection at its conception. Team members considered the survey's scope and role in light of other sources of available data, basing the questionnaire on a solid foundation of available data, literature, and information from aviation stakeholders. They devised mechanisms to protect respondent confidentiality.
Researchers collected preliminary information from focus groups and interviews that they used in conducting confirmatory memory experiments and in developing the questionnaire to reduce respondent burden and increase data quality. The team was also concerned with validating the concept of NAOMS and achieving buy-in from members of industry and others to help ensure the relevance and usefulness of the NAOMS data to potential users, although it was not able to fully resolve questions some stakeholders had about the utility of the data. The team's field trial of air carrier pilots allowed it to answer key questions about data collection and response rate. The field trial was followed with supplemental steps to revise the questionnaire before the full air carrier pilot survey. Notwithstanding the survey design's strengths, it exhibited some limitations, such as a failure to use the field trial to fully test questionnaire content and order, and fragmented management plans. We found potential risk of measurement error, though with low implications for error in the survey's data.

Preliminary Research Supported the Survey's Development

In its planning, the NAOMS team extensively researched survey methodology, existing safety databases, and literature on aviation safety and personnel. The team also conducted interviews and focus groups with pilots. To generate publicity and support from aviation stakeholders, the NAOMS team made multiple presentations to and conducted workshops with government officials and aviation stakeholders (see table 1). The preliminary research and feedback from stakeholders helped the team define the scope of data collection. Initial literature reviews focused primarily on the data collection methods that would be most likely to ensure response accuracy, on question wording and ordering that would maximize recall validity, and on preventing respondents from underreporting for fear of being held accountable for mistakes.
A document summarizing several early team memorandums addressed theories and literature on “satisficing”—or the notion that survey respondents seek strategies to minimize respondent burden and cognitive engagement—and the relationship between the data collection method and respondent motivation. This document, which was reprinted, in part, in the contractor’s reference report on NAOMS, also examined literature on social desirability, particularly how confidentiality affects response accuracy. It included reviews of academic literature on how interviewing methods can dampen or enhance tendencies toward socially desirable responses. The summary document discussed the importance of the questionnaire’s accounting for memory organization as a way to minimize response burden and maximize respondent recall using specific cues to take full advantage of how pilots organize events in memory, thus maximizing their ability to recall and report events in the reference period. It outlined specific strategies that have been used to assess memory organization. The document proposed steps the NAOMS researchers could take to assess memory organization; identify optimal recall periods; and construct, validate, pretest, and refine the survey questionnaire. It also outlined a way to implement and evaluate different data collection methods and included initial sample size calculations to compare response rates and potential sampling frames. Another planning document enumerated in detail the populations of interest in addition to pilots, including air traffic controllers, mechanics, dispatchers, and flight attendants. The project team compiled an annotated list of sources on aviation safety and their limitations to indicate how the survey might play a role within an overall system to monitor national airspace safety. The project team supplemented its research with focus groups and one-on-one interviews with pilots to help in deciding which safety events the questionnaire should cover. 
These focus groups and interviews are discussed in more detail in appendix I.

Workshops and Consultations with Stakeholders and Potential Users

After presentations on the NAOMS concept and its relevance to aviation safety in March and November 1998, NAOMS staff held the project's first major workshop on May 11, 1999. A wide range of FAA and NASA officials; representatives from private industry, academia, and labor unions; and methodologists discussed

- the need for NAOMS as a way to fill gaps in safety knowledge and move beyond accident-driven safety policy (often called the “accident du jour” syndrome);
- government's and others' use of survey research, citing specific surveys that are used to measure rates, trends, risks, and safety information in other fields;
- the intent to focus NAOMS questions on individuals' experiences, rather than on their opinions; and
- the need to involve industry and labor stakeholders to ensure high participation rates and relevant safety content.

In addition to introducing the concept of NAOMS and its likely form, the team expressly sought labor and industry participation in developing NAOMS; in ensuring high response rates and the relevance of specific questions; and in applying the survey's output to decision making on policies, procedures, and technology. Several aviation stakeholders participating in the workshop offered feedback on the survey in general and on individual questions raised in focus groups and the early field research. For example, a summary of comments from FAA staff raised questions about response rate, the scope of questions, and strategies for data validation. We found that NAOMS staff clearly thought through many of these issues, including matters of response rate and questionnaire consistency, and worked to address them as the project developed.
However, as we discuss in the following text, while NASA initially expected that FAA would be a primary customer of NAOMS data, it failed to reach consensus with the agency on the project’s merits and on whether NAOMS’s goal of establishing statistically reliable rates, in addition to trends, was possible. Defining the Scope of the Data NAOMS Would Collect The NAOMS team determined that the NAOMS survey would usefully supplement other safety resources whose goals were investigative or were to identify causation. Unlike those resources, NAOMS was to capture not just incidents but also precursors to accidents and “more subtle associations that may precede safety events.” The 2007 ASMM summary report noted that one must know where to look in order to investigate precursors. NAOMS was designed to point toward such research. The project team expected that trends seen in the NAOMS data would point aviation safety experts toward what to examine in other data systems. Researchers and FAA officials told us that many data, such as radar track data and traffic collision avoidance data, do not cover the entire NAS and were not regularly analyzed at the time that NAOMS was being developed. Following the 1999 workshop on the concept of NAOMS and the preliminary air carrier pilot questionnaire, a summary of comments from FAA showed some support for NAOMS. However, the summary expressed concern that much of the data being gathered were too broad to permit the development of appropriate intervention strategies. A later FAA memorandum, following meetings with NAOMS staff in 2003, requested extensive questionnaire revisions and suggested that certain questions were irrelevant, should be dropped, or were covered by other safety systems. FAA also sought more detailed investigatory questions to assess the causes of some events, such as engine shutdowns, and revisions to questions that it saw as too subjective and too broad to provide real safety insight.
To ensure that question consistency over time would enable trend calculations, NASA researchers did not make most of the revisions. Instead, they responded that to the extent that NAOMS might provide “a broad base of understanding about the safety performance of the aviation system” and allow for the computation of general trends over time, its questions could help supplement other safety systems. The project team’s concerns about respondent confidentiality influenced the questionnaire’s design. For example, they expressed some fear that questions that attributed blame to respondents reporting safety events would lead to underreporting. These concerns motivated decisions to exclude from the questionnaire most of the information that could have identified respondents. Pilots were not asked to give dates or identify aircraft associated with events they reported. Additionally, the database that tracked sampling and contact information for individual pilots recorded only the weeks in which interviews took place, not their specific dates. Project Management Plans Were Not Comprehensive The NAOMS team’s project management plans were not comprehensive. From 1998 to 2001, the activities of Battelle and its subcontractors were covered by statements of work to plan and track the survey’s development. These documents enumerated tasks, deliverables, and projected timelines. Similar documents do not exist for the 2002 to 2003 data collection period, when NASA changed priorities for NAOMS. Battelle developed a new implementation plan to address changes in NASA’s priorities in 2004, but plans from 2002 onward were largely subsumed in a series of contract modifications and were not centralized. 
Twenty-four base contracts and modifications contained information to track overall progress, but, according to NASA, the overall ASMM project plan (while in accordance with NASA policy) did not contain sufficient detail to correlate the plan with contract task modifications such as those used for NAOMS. The lack of a central plan makes it difficult to evaluate specific aspects of NAOMS against preestablished benchmarks. Furthermore, the failure to maintain management or work plans during data collection or to adapt the initial work plans to accommodate project changes may have contributed to the gaps in record-keeping regarding sampling, as discussed later in this report. Innovative Memory Experiments Enhanced the Questionnaire Research demonstrates that designing a survey to accommodate the population’s predominant memory structure can reduce respondents’ cognitive burden and increase the likelihood of collecting high-quality data. The NAOMS team conducted innovative experiments to help in developing a survey that would reduce respondent burden and accommodate the air carrier pilots’ memory organization and their ability to recall events, thus increasing the likelihood of accuracy. While researching and testing hypotheses about memory organization to enhance questionnaire design are excellent survey research practices, few researchers have the time or resources to conduct extensive experiments on their target population. The NAOMS survey methodologist ran experiments from 1998 through 1999 to generate and test hypotheses that could be incorporated into the design of the air carrier pilot survey. Several of the project’s experiments to determine pilots’ recall and memory structures were based on relatively few pilots. These were supplemented with other experiments and additional data analysis to validate the researchers’ hypotheses. 
However, these experiments were limited to the core questions on safety in the air carrier pilot survey and did not extend to other sections of the survey or other populations, whether general aviation pilots, mechanics, or flight crew. The memory experiments led researchers to design the core safety events section of the survey according to a hybrid scheme of memory organization—that is, it used groupings and cues related to causes of events as well as phases of flight, such as ground operations and cruising. After the memory experiments, the NAOMS survey methodologist recommended that project staff undertake cognitive interviews to ensure that the questionnaire to be used in a planned field trial could be understood and was complete, recommending also that a final version of the questionnaire be tested with a separate group of pilots. A memorandum indicated that at least five cognitive interviews were held before the field trial, but we could not identify documentation on their effect on the questionnaire’s structure or content. A Large-Scale Field Trial Resolved Many Issues, but Not Others In 1999, following more than 1 year of research, experiments, and questionnaire development, NAOMS researchers conducted a large-scale field trial. It was to help decide the appropriate recall period for the survey questions; major issues of order and content for the questionnaire; and the appropriate method of survey administration to minimize cost, while maximizing response rate and data quality. The field trial also allowed the NAOMS team to assess whether the survey methodology was a viable means of measuring safety events. Although largely in accordance with generally accepted survey principles, the field trial had some limitations and did not resolve important questions about the survey’s methodology. 
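The field trial described in the next paragraph crossed three interviewing methods, six recall periods, and two question orders. The sketch below illustrates how such a factorial random assignment might be implemented; the assignment scheme and pilot counts are our illustration, not NAOMS’s documented procedure.

```python
import itertools
import random

# Factors from the field trial; the assignment scheme below is an
# illustrative sketch, not the documented NAOMS procedure.
MODES = ["self-administered", "CATI", "in-person"]
RECALL = ["1 week", "2 weeks", "1 month", "2 months", "4 months", "6 months"]
ORDER = ["core questions first", "topical focus first"]

def assign(pilot_ids, seed=0):
    """Shuffle pilots, then cycle through the 36 design cells so each
    experimental condition receives a roughly equal share."""
    rng = random.Random(seed)
    cells = list(itertools.product(MODES, RECALL, ORDER))
    ids = list(pilot_ids)
    rng.shuffle(ids)
    return {pid: cells[i % len(cells)] for i, pid in enumerate(ids)}

# 720 hypothetical pilots yield exactly 20 per design cell.
assignments = assign(range(720))
```

Cycling through shuffled identifiers, rather than drawing each condition independently, keeps the cell sizes balanced, which simplifies comparisons across modes and recall periods.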
To administer the trial, team members randomly assigned pilots to various experimental conditions: three different interviewing methods (self-administered questionnaires, CATI interviews, and in-person interviews), six different recall periods, and the presentation of the core safety questions either before or after the topical focus section. Interviewers for the CATI and in-person interviews received group and individual training, and the researchers used widely accepted practices, such as notification and reminder letters, to enhance response rates for the self-administered questionnaire. Their analysis of the data appeared to show that experimental assignments were sufficiently random, and the resulting data sufficiently different in quality across conditions, to allow some decisions about response mode and recall period—showing, for example, that different modes resulted in different completion rates, and that longer recall periods produced higher event counts. Recall Period Research and Testing The NAOMS researchers hoped to reliably measure highly infrequent events—the severest of which pilots were likely to recall quite well—without jeopardizing the measurement of more frequent, less memorable events that had safety implications. Literature on survey research did not point to one specific reference period for events such as those in the NAOMS survey. To evaluate the effect of recall period on a pilot’s ability to accurately remember events, the project’s survey expert asked five pilots to fill out, from memory, a calendar of the dates and places of each of their takeoffs and landings in the past 4 weeks. Then they were asked to fill out an identical calendar at home, using information they had recorded in their logbooks. The survey methodologist used these data to support his recommendation that NAOMS use a 1-week recall period, noting that this would require a substantial increase in sample size to measure events with the precision NAOMS originally intended.
However, because the experiment was designed to measure only takeoffs and landings—routine activities that were unlikely to carry the weight in memory of more severe or infrequent safety events at the heart of the NAOMS project—the survey methodologist added the caveat that the final decision about recall interval would have to be informed by the particular list of events in the final NAOMS questionnaire and the rates at which pilots witnessed them. Following the logbook experiment, NAOMS researchers tested several potential recall periods in the field trial, including 1 and 2 weeks and 1, 2, 4, and 6 months. Data from the field trial show an increase in the number of hours flown and event reporting commensurate with extensions of the recall period and possible overreporting for the 1-week period relative to the others. Aside from the logbook experiment, however, no efforts were made to validate the accuracy of field trial reports of safety events or flight hours and legs flown in survey data collected within different recall periods. The project team also obtained feedback from the pilots participating in the field trial. This feedback indicated that most who commented on recall periods said they were too short; the pilots wanted to report incidents that happened recently, but not within the recall period. The researchers noted that the pilots’ discomfort with a short recall period did not necessarily mean the data collected within that period were inaccurate; it meant only that it was possible that they wanted to report events outside the recall period to avoid giving the impression that certain events never occurred. Researchers also studied pilots’ reported confidence in their responses as an indication of data quality obtained with different recall periods. However, the information from the field trial tests and respondent feedback did not resolve the question of which recall period to use. 
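The overreporting observation rests on a simple exposure normalization: raw event counts are comparable across recall periods only after they are converted to rates per hour flown. A minimal sketch with hypothetical tallies (not actual field trial data) follows.

```python
# Hypothetical tallies in the style of the field trial:
# recall period -> (events reported, hours flown). The numbers are
# illustrative only; overreporting appears as an elevated per-hour
# rate for the shortest recall period.
tallies = {
    "1 week": (30, 900),
    "1 month": (100, 4000),
    "6 months": (520, 24000),
}

def per_1000_hours(tallies):
    """Normalize raw event counts by flight-hour exposure so that
    different recall periods can be compared on a common scale."""
    return {period: 1000 * events / hours
            for period, (events, hours) in tallies.items()}

rates = per_1000_hours(tallies)
# In this illustration the 1-week rate exceeds the longer periods',
# the pattern consistent with short-period overreporting.
```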
Researchers decided to use approximately the first 9 months of NAOMS data collection as an experimental period to resolve questions the field trial could not answer, and they settled on a 60-day recall period several quarters after full data collection began. The contractor administering the field trial randomly assigned pilots to mail questionnaires, face-to-face interviewing, or CATI. Face-to-face data collection was stopped after it proved to be too costly and complicated. The project team then compared the costs and response rates of the two other methods as well as the completeness of responses as a measure of data quality. Completed mail questionnaires cost $67 each and had a response rate of 70 percent, and 4.8 percent of the questions went unanswered. Telephone interviews cost $85 and attained a response rate of 81 percent, and all of the questions were answered. The project team decided that the CATI collection method was preferable, given the response rate, the cost, and a tighter relationship between the numbers of hours flown and aggregated events reported. We found ample information to support this data collection method. In contrast, the field trial did not provide the researchers with an opportunity to validate the sample strategy for data collection—either cross-sectional (drawing each sample anew over time) or panel (surveying the same set of respondents over time). As with the recall period, researchers used the early part of the full survey to experiment with both panel and cross-sectional approaches. They decided on a final data collection approach approximately 9 months after the full survey began. Team members developed different versions of the field trial questionnaire to test whether to survey pilots first about main events—the core safety issues in section B—or about focus events—the issues on specific topics in section C (see fig. 3). 
The researchers’ quantitative analysis of the field trial data suggested that different section orders did not affect data quality. However, we found it unusual that the field trial questionnaire did not fully incorporate the specific question order suggested by experiments or literature in the main events section. While questionnaires contained content areas from the memory experiment that combined the causes of events and the phases of flights, individual topics within the core safety events section of the field trial survey were not ordered from least to most severe as the survey methodologist recommended. NASA later clarified that the NAOMS team incorporated the results of the field trial into the final survey instrument. Additionally, the field trial questionnaire did not contain the “drill-down” questions that appeared in the final questionnaire—that is, questions asking for multiple response levels (see fig. 4). The failure to include these questions appears to violate the generally accepted survey practice of using a field trial to test a questionnaire that has been made as similar as possible to the final questionnaire. While questionnaires almost inevitably change between a field trial and their final form, the results of the experiments, cognitive interviews, and full set of questions should have been incorporated into the test questionnaire before the development of the final survey. In addition to subject matter and survey methodology research, experiments, and field testing, NAOMS staff used other commonly used survey research techniques to develop and revise the air carrier pilot survey questionnaire. For example, we found that at least five cognitive interviews were conducted before the field trial, but we found no documentation that described these interviews or their effect. 
Additional cognitive interviews were conducted after the field trial on nearly final versions of the questionnaire before the survey’s full implementation, resulting in changes to the questionnaire (see app. I). The project team did not record field trial interviews; doing so would have allowed verbal behavioral coding, which is a supplemental means of assessing problems with survey questions for both respondents and interviewers. Besides the changes the team made to the questionnaire from the results of the cognitive interviews, team members reviewed the survey instrument in great detail, adding and deleting questions to make it easier for the interviewers to manage and for the respondents to understand. However, as we have previously mentioned, the questionnaire used in the field trial did not fully incorporate the order of events suggested by the memory experiments. This order appears to have been addressed after the cognitive interviewing that took place just before the final survey began. We found evidence that the NAOMS team made some changes to the questionnaire as a result of respondent comments on the field trial, such as discarding a planned section on minimum equipment lists, seen by many respondents as ambiguous and unclear, in favor of a different set of questions. However, there is no documentation of additional question revisions in response to empirical information from the field trial. Additionally, except for CATI testing involving Battelle managers and interviewers, we could not find evidence of a pretest of the final questionnaire incorporating all order and wording changes before the main survey was implemented. NASA recently told us that the results of the field trial, as well as inputs from other research, were fully incorporated into the final survey instrument. 
The Survey’s Sample Design and Selection We found that for its time, NAOMS’s practices regarding sample frame design and sample selection met generally accepted survey research principles, with some limitations. The project team clearly identified a target population and potential sample sources. To maintain program independence, the team constructed the sampling frame from a publicly available database that was known to exclude a sizable proportion of air carrier pilots, and applied filtering criteria to the frame to increase the likelihood that the pilots NAOMS contacted would be air carrier pilots, rather than general aviation pilots. It is not known for certain whether the approximately 36,000 pilots NAOMS identified for its sample frame were representative of the roughly 100,000 believed to exist. The implications for the risk of error were high; the most significant sources of potential survey error stem from coverage and sampling. In addition to increasing the risk of error, sampling decisions potentially affect the analysis and interpretation of NAOMS data. Sample size calculations may not be sufficient to generate reliable trend estimates because of the infrequency of events that have great safety significance and concerns about operational characteristics and potential bias resulting from the sample filter. Additionally, developing estimates of event counts for air carrier operations in the NAS (which was not a primary objective of NAOMS) from a sample of pilots is complicated by the fact that rates from NAOMS are based on individuals’ reports, rather than on direct measures of safety events. Also, the survey has the potential for multiple individuals to observe the same event. 
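To illustrate the last two points, a minimal sketch of an exposure-based rate computed from individual pilot reports follows; the crew-size divisor is one hypothetical way to adjust for several crew members observing the same event, not a documented NAOMS adjustment.

```python
# Minimal sketch of an exposure-based rate from survey reports. The
# crew_size divisor is a hypothetical illustration of adjusting for
# the possibility that multiple crew members report the same event;
# it is not a documented NAOMS formula.
def events_per_1000_hours(reports, crew_size=1):
    """reports: list of (events_reported, hours_flown) per respondent."""
    total_events = sum(events for events, _ in reports)
    total_hours = sum(hours for _, hours in reports)
    return 1000 * (total_events / crew_size) / total_hours

# Hypothetical respondent reports: (events, hours flown in recall period).
sample = [(2, 180), (0, 150), (1, 210)]
rate = events_per_1000_hours(sample, crew_size=2)
```

The key point the sketch makes concrete is that such a rate describes what individual pilots report per hour of their own exposure; converting it to an event count for the NAS as a whole requires further assumptions about crew size and coverage.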
Potential Problems Related to the Sampling Strategy Require Additional Assessment While NAOMS researchers designed and selected a sample in accordance with generally accepted survey research principles, sampling decisions they made to address complications influenced the nature of the data collected. NAOMS’s sampling strategy for the air carrier pilot survey was complicated by the needs to (1) link a target population to specific analytical goals; (2) identify an appropriate frame from which to draw a sample; and (3) locate commercial pilots, rather than general aviation pilots. Eventually, the team constructed a frame from a publicly available pilot registration database that excluded some pilots and lacked information on where pilots worked, compelling the team to use a filter to increase the likelihood of sampling air carrier pilots. The contractor drew a simple random sample each quarter from the freshly updated, filtered, and cleaned database and divided the sample into random replicates that were released weekly for interviewing. After the first year of the air carrier pilot survey, which adapted sampling to accommodate experiments on recall period and panel approach to data collection, the survey sampled approximately 3,600 air carrier pilots for most quarters of data collection. This sampling strategy resulted in 25,720 completed interviews by the end of the air carrier interviewing. To develop NAOMS’s sampling strategy, the team first needed to identify a target population. Although an ideal target population corresponds directly with a specific unit of analysis of interest, researchers often rely on proxies when they cannot directly sample the unit. With NAOMS’s goal of estimating trends of safety events per air carrier flight hour or flight leg in the NAS, a target population might have been all air carrier flights in the NAS. 
Theoretically, one could draw a sample of all air carrier flights in the NAS, locate the pilots on these flights, and interview them about events specific to a particular flight. Given that such a sample would be prohibitively resource-intensive, the NAOMS team identified an alternative target population—namely, air carrier pilots. Surveying air carrier pilots would provide information on safety events as well as on how many flight hours or flight legs that pilots flew. If the frame fully covered the population of air carrier pilots, the team’s planned simple random sample from the frame would allow an estimation of individual air carrier pilots’ rates of events experienced per hour or leg flown. In isolation, these individual-based estimates would fall short of cleanly characterizing the NAS, which involves other pilots besides air carrier pilots and other personnel, including other crew members on each flight. However, the estimates could address NAOMS’s goal of estimating rates (for individual air carrier pilots) on the basis of risk exposure and trends in safety events over time, to supplement other systems of information about safety. One potential difficulty with this target population was that the number of pilots actively employed as air carrier pilots was not known when the project began. Although the NAOMS team extensively reviewed the size of the pilot population, we found multiple estimates of the target population from the NAOMS documentation. NAOMS’s preliminary research suggested that approximately 90,000 pilots were flying for major national and regional air carriers and air cargo carriers. Other information suggested that the population could have been as large as 120,000 pilots. For example, the 60,000 air carrier pilots in ALPA’s membership represented “roughly one-half to two-thirds” of all air carrier pilots, or, alternatively, up to 80 percent of the target population. 
In light of these different estimates, we assume for purposes of discussion a target population of about 100,000 air carrier pilots. NAOMS researchers next needed to identify a source of information on the target population to provide a sampling frame from which they could sample air carrier pilots. As we have previously mentioned, because there was no central list of air carrier pilots that would ensure coverage of the target population, researchers had to choose an alternative frame. Initially, they considered using ALPA’s membership list of air carrier pilots. However, to maintain the project’s independence and to be as inclusive of pilots as possible, regardless of their employer or union status, they decided against using this or any other industry list, such as personnel information from airlines. The project team also considered using FAA’s Airmen Registration Database. Its information on pilots included certification type and number, ratings, medical certification, and other personal data. When the survey was first being developed, limited information for all pilots in the Airmen Registration Database was publicly available as the Airmen Directory Releasable File. In 2000, after the field trial but before the full air carrier pilot survey was implemented, FAA began allowing pilots to opt out of the publicly releasable database. NASA officials told us that the team had considered asking FAA for the full database but decided against formally pursuing access to it for several reasons. These included ensuring continuing access to a public, updated database; ensuring access to a database that contained contact information for pilots; and maintaining independence from FAA as an aviation regulatory agency. Also, NASA was concerned about using the full data, because it wanted to maintain the privacy of pilots who had removed their names from the list explicitly to avoid contacts from solicitors, purveyors, or the like.
NAOMS staff had access to the full database when it was still publicly available in 2000 for the air carrier pilot survey’s field trial sample. However, NASA officials believed that they could not use it for the full-scale survey from 2001 to 2004 because the nature of the frame—in terms of how well it represented the current air carrier pilot population—would change over time. Instead, the team decided to use as the frame for the full-scale air carrier pilot survey the Airmen Directory Releasable File that excluded pilots who had opted out; this file was regularly updated over the course of the air carrier pilot survey. The choice of frame may have been appropriate, given programmatic constraints, but posed several challenges. First, pilots in the publicly available Airmen Directory Releasable File were not necessarily representative of pilots in FAA’s full Airmen Registration Database. Second, the database lacked information on whether airmen actively flew for a commercial airline. Lastly, only a relatively small portion of the 688,000 pilots in the database at the time of the field trial were air carrier pilots. Potential Effect of the Opt-out Policy NAOMS staff, realizing the potential limitations of using the publicly available data, were concerned about whether the frame provided adequate coverage of the target population or introduced bias into the data—that is, whether pilots in the public, opt-out database were sufficiently representative of air carrier pilots overall. For example, ALPA had provided its membership (which comprises approximately two-thirds of air carrier pilots) with information about the opt-out policy and with a form letter to pilots to facilitate their removal from the list. It is, therefore, possible that ALPA pilots removed their names from public access at a higher rate than non-ALPA pilots.
NAOMS researchers’ analysis suggests that air carrier pilots may have removed their names from the public database at a disproportionately greater rate than did general aviation pilots. One Battelle statistician expressed concern to other NAOMS team members that the sample, therefore, might not represent the population of interest. To help assess potential bias as a result of the opt-out policy (and the filter, discussed in the following text), researchers added a question to the survey—part way through the data collection phase—asking pilots to identify the size category of the aircraft fleet of the air carrier for which they flew. This information would allow for a comparison with air carrier fleet sizes known to exist in the NAS. Identifying Air Carrier Pilots from the Sampling Frame The database from which the project drew its sample of pilots lacked information on where the pilots worked and, therefore, could not be used to identify pilots flying commercial aircraft. The incidence of air carrier pilots in the full Airmen Registration Database was fairly low— approximately one in seven pilots would have been an air carrier pilot. (We could not find documentation on the number or proportion of air carrier pilots in the opt-out database, but we believe it to have had a similarly low incidence.) Therefore, the NAOMS researchers decided to use a filter to increase the likelihood that those contacted for the survey would be air carrier pilots. The filter required that pilots be U.S. residents certified for air transport, with flight engineer certification and a multiengine rating—a rating that sets specific standards for pilot experience and skill in operating a multiengine aircraft. By construction, all pilots in the public (opt-out) Airmen Directory Releasable File who did not fulfill these filtering requirements fell into the sampling frame to be used for the general aviation survey. 
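The four filtering criteria can be expressed as a simple predicate; the record field names below are hypothetical and do not reflect the actual schema of the Airmen Directory Releasable File.

```python
# Hypothetical record fields; the Airmen Directory's actual schema differs.
def passes_air_carrier_filter(rec):
    """Apply the four NAOMS filtering criteria described in the report:
    U.S. residency, air transport certification, flight engineer
    certification, and a multiengine rating."""
    return (rec["us_resident"]
            and rec["air_transport_certified"]
            and rec["flight_engineer_certified"]
            and rec["multiengine_rated"])

# A pilot failing any one criterion falls into the general aviation frame.
rec = {"us_resident": True, "air_transport_certified": True,
       "flight_engineer_certified": True, "multiengine_rated": False}
in_air_carrier_frame = passes_air_carrier_filter(rec)
```

Because the predicate is conjunctive, any air carrier pilot lacking even one of the four credentials is routed to the general aviation frame, which is the mechanism behind the misclassification and bias concerns discussed in the report.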
After the filter was applied, the final frame for air carrier sampling had approximately 37,000 pilots in the first several quarters; records on the size of the frame in later quarters were not maintained. With these filtering criteria, approximately 70 percent to 80 percent of those contacted for the air carrier sample were, in fact, air carrier pilots who had flown within the recall period specified on the questionnaire. Although the contractor collected some information on pilots who were contacted but deemed ineligible for the survey, the data were not analyzed specifically to establish how effective the filter was at identifying air carrier pilots, including those who did not otherwise qualify for the survey. Without data showing which contacted pilots were screened out because they were general aviation, rather than air carrier, pilots, those pilots would be wrongly omitted from the sampling frame for the general aviation survey. As data collection progressed, the NAOMS team realized that the data were biased toward more experienced pilots, pilots flying primarily as captains, and pilots flying widebody aircraft over longer flight times. After extensive analysis of the observed bias, the team attributed the bias primarily to two of the four filtering criteria—that is, that pilots were required to have both air transport and flight engineer certifications. Team researchers explored various strategies for addressing the observed bias and made several recommendations for data collection and analysis. The team considered whether using stratification to select samples according to alternative or additional characteristics would help reduce the observed bias toward more experienced pilots flying larger aircraft, but it eventually decided against changing the sampling strategy midsurvey.
To determine whether the filter systematically excluded certain types of respondents—for example, air carrier pilots flying smaller aircraft or pilots with less experience—the NAOMS team recommended capitalizing on the implementation of NAOMS’s general aviation portion. The sampling frame for the general aviation survey included all pilots not filtered into the air carrier sample. Accordingly, project staff could examine the characteristics of air carrier pilots who fell into the general aviation sample because they did not meet filtering requirements, to establish whether they differed notably from those surveyed using the filtered sample. Preliminary analysis confirmed that pilots surveyed from the filtered sample exhibited systematic differences from air carrier pilots in the general aviation survey. Specifically, pilots surveyed with the air carrier sampling filters overrepresented captains and international flights, underrepresented smaller aircraft and airlines, and overrepresented the largest aircraft and airlines. Following these analyses, the NAOMS team advocated incorporating operating characteristics into all analyses to mitigate potential bias. For the most part, the team recommended using operational size categories— that is, small transport aircraft and medium, large, and widebody aircraft—to stratify and possibly weight analyses, since different types of aircraft face different event risks and since safety issues may be more or less serious, depending on operating characteristics or aircraft make and model. The team’s presentations of preliminary results frequently incorporated such analyses, as shown in figure 5. While other operational stratifications were suggested, such as specific aircraft make and model, it was acknowledged that this kind of analysis would dramatically reduce the effective sample size available for analysis in each category. 
A smaller effective sample size would decrease the precision of estimates from the survey, making it more difficult to detect changes in rates over time, especially for infrequent events. Additionally, to the extent that the data were to be analyzed as rates per flight leg or flight hour, an analysis segregated by operational characteristics would represent a fair description of these rates if it were assumed that the data adequately represented aircraft and pilots experiencing safety events within those operational categories—for example, if the widebodies and their pilots in the sample were fairly representative of air carrier widebody aircraft and pilots in the NAS. Sample Size Calculations May Have Curtailed Statistically Reliable Trend Estimates for All Questions NAOMS aimed to generate statistically reliable rates and trends that would allow analysts to identify a 20 percent yearly change with 95 percent confidence. However, the ability to detect such trends depended not only on the sample size, but also on the frequency of events. One statistician who had worked with the project team reported recently that detecting changes in trends of very rare events, such as complete engine failure, would require a prohibitively large sample of approximately 40,000 pilots. NAOMS’s sample sizes were insufficient to allow analysis of all questions on the air carrier pilot survey or to accommodate analytical strategies that researchers eventually deemed necessary after data collection had begun, such as analysis by aircraft size category. During the field trial, sample sizes were calculated to distinguish response rates between the three data collection methods (face-to-face and telephone interviews and mail questionnaires) to answer questions such as the following: Did an 81 percent completion rate for telephone interviews differ significantly from a 70 percent response rate for mail questionnaires? 
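A standard two-proportion sample size formula illustrates the kind of calculation such a question requires; the 80 percent power target below is our assumption, not a documented NAOMS parameter.

```python
import math

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per mode needed to distinguish two response rates,
    using the standard normal-approximation formula. The power target
    is an assumption for illustration."""
    z_a, z_b = 1.96, 0.8416  # z for alpha/2 = 0.025 and for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Pilots needed per mode to distinguish a 70 percent mail response rate
# from an 81 percent telephone completion rate.
n = n_per_group(0.70, 0.81)
```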
Later sample calculations for the full survey focused more directly on establishing the ability to detect a 20 percent change in event rates over time. Data from the field trial were analyzed to estimate how frequently an air carrier pilot experienced each specific event, enabling the team to assess how reliably different sample sizes could detect increases or decreases of 20 percent. From the field trial data, the contractor estimated that 8,000 interviews would allow detection of changes in rates with 95 percent confidence for approximately one-half of the core safety event questions. The team eventually settled on a sample size of approximately 8,000 cases a year, declaring in its application to OMB that this would be the minimum size required to reliably detect a 20 percent change. The application clarifies that just 5,000 unique pilots would be interviewed in the first year to gather 8,000 completed surveys (4,000 in cross-sectional samples, and 1,000 in four waves of the panel), but sample size calculations submitted to OMB do not expressly consider the impact of the panel’s smaller sample size on the ability of NAOMS data to detect trends. In the 3 years after data collection experiments in recall and method were discontinued, the survey interviewed approximately 7,000 cases a year. At the time the NAOMS OMB application was submitted, project staff did not have adequate data to know for certain how frequently individual safety events would be reported, or to know an exact number of interviews that could actually be attained in a year. The NAOMS OMB application reported that pilots experience certain events quite infrequently, without expressly calculating how well a sample size of 8,000 could generate reliable estimates for such events. The sample size calculations in the application also assumed that the first-year data could be aggregated across recall periods and both the panel and cross-sectional data collection approaches that were used. 
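The arithmetic behind such sample size calculations can be sketched as follows. The two-year rate comparison, the Poisson model for per-pilot event counts, and the 80 percent power target (z = 0.84) are illustrative assumptions, not the contractor's actual method.

```python
from math import ceil

def n_per_year(rate, change=0.20, z_alpha=1.96, z_power=0.84):
    """Approximate pilots needed per year to detect a relative `change`
    in a per-pilot event rate across two years, treating each pilot's
    event count as Poisson and using a normal approximation."""
    r1 = rate                 # year-1 events per pilot
    r2 = rate * (1 - change)  # year-2 rate after a 20 percent drop
    # var(r1_hat - r2_hat) = (r1 + r2) / n under the Poisson model
    return ceil((z_alpha + z_power) ** 2 * (r1 + r2) / (r1 - r2) ** 2)
```

Under these assumptions, an event experienced about once per pilot per recall period needs only a few hundred pilots per year, while an event reported once per 100 pilots demands tens of thousands, consistent with the prohibitively large samples described for very rare events such as complete engine failure.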
NAOMS project staff later told us that further analysis would be essential to establish whether rates and trends generated from different recall periods and data collection approaches were sufficiently similar to allow combining the data. NASA believes that, even without data from the experimental period, the subsequent 3 years of air carrier pilot data were sufficient to demonstrate the survey’s capability of detecting trends reliably. Partway through data collection for the full air carrier pilot survey, NASA’s contractor conducted simulations using early NAOMS data to better establish sample sizes at which 20 percent changes in rates for individual questions could be detected. These data confirmed that a sample of 8,000 cases a year would be sufficient to detect a 20 percent change for roughly one-half the core safety event questions, assuming all cases were analyzed simultaneously. By this point, however, the project team had already established the importance of breaking out NAOMS’s estimates according to the size category of the aircraft flown to compensate for operational differences and the effects of the sampling procedures that we have previously described. Thus, sample size calculations may have overstated the ability of the NAOMS data to reliably detect trends at given significance levels, if segregating answers by operational characteristics is critical. Additional simulations that accounted for likely analytical considerations would be essential to determine whether the NAOMS project could attain its goal of measuring 20 percent changes in rates of different safety events with statistical confidence. Sampling and Design Decisions Bear on NAOMS’s Rate Calculations and Characterization of the National Air Space When analyzing NAOMS’s data, researchers must consider the effect of several design and sampling decisions that the project team made to accommodate pilots’ confidentiality and the infeasibility of directly sampling all flights in the NAS. 
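A simulation of the kind described might look like the following sketch, which estimates the share of simulated year pairs in which a 20 percent rate change is detected at the 5 percent level. The normal approximation to yearly Poisson event totals and all parameter values are hypothetical, intended only to illustrate the approach.

```python
import random
from math import sqrt

def simulated_power(n, rate, change=0.20, trials=4000, z=1.96):
    """Monte Carlo sketch: fraction of simulated year pairs in which a
    `change` drop in a per-pilot event rate is flagged at the 5% level.
    Yearly totals are approximated as normal draws around a Poisson
    mean (reasonable when n * rate is large); illustrative only."""
    random.seed(12345)  # deterministic for the sketch
    lam1 = n * rate                 # expected year-1 total events
    lam2 = n * rate * (1 - change)  # expected year-2 total events
    hits = 0
    for _ in range(trials):
        y1 = random.gauss(lam1, sqrt(lam1))
        y2 = random.gauss(lam2, sqrt(lam2))
        if abs(y1 - y2) / sqrt(y1 + y2) > z:
            hits += 1
    return hits / trials
```

Running the sketch for a subgroup one-quarter the size of the full sample shows how stratifying by aircraft size category erodes the power to detect a 20 percent change.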
For example, the likelihood that a particular event would be reported by a pilot responding to the NAOMS survey increased with the number of crew witnessing the event and the number of aircraft involved. However, in designing a questionnaire to lessen the likelihood of respondent identification, the NAOMS team decided not to link pilots’ reports of specific events to particular aircraft flown during those events or to the dates on which those events happened. Furthermore, the team’s choice of sampling frame and filter resulted in a disproportionate selection of captains relative to other crew members. While sampling and design choices were rational in light of concerns about confidentiality and program independence, such decisions have had implications for how to calculate and interpret rates from NAOMS and for whether analysts can extrapolate the data to characterize the national air space. NAOMS staff did not identify specific analytical strategies to accommodate these issues in advance of data collection. Using NAOMS Data to Calculate Rates and Trends Survey design and sampling decisions affect how rates from NAOMS data can be calculated. For example, the NAOMS survey has the potential to collect multiple reports of safety events if more than one crew member on an aircraft or crew members on different aircraft observed the same safety event. Safety events happening on aircraft with more crew members would also have had a greater likelihood of being reported, since more individuals who experienced the same event could have been subject to selection into the sample. These issues are not a problem unless researchers fail to address them appropriately in an analysis. Analytic goals must determine whether one adjusts for the potential that an event is observed by multiple crew members in the sampled population.
Given that one of NAOMS’s goals was to characterize the rate at which individual air carrier crew members experienced events per flight hour or flight leg, and assuming all crew members in an aircraft were equally likely to be sampled, multiple crew members observing an event involving one aircraft would not pose a problem. However, other considerations bear on whether and how to make adjustments. For example, bias resulting from the sampling frame and filter suggests that captains were more likely to have been selected into the air carrier sample than first officers or other crew members; additionally, many pilots flew in more than one crew capacity during the recall period. Events involving multiple aircraft also complicate estimates, partly because individuals not qualified for the air carrier pilot survey might have flown many of these aircraft. Extrapolating from individually derived rate estimates to system counts would also require making substantial assumptions and adjustments (see the following text). One potential strategy to address the possibility of multiple observations of the same event would be to allocate events according to the number of crew members who might have witnessed them (more details on alternative strategies are in app. I). For example, a report of a bird strike from a pilot flying a widebody aircraft with two additional crew members could be counted as one-third of a bird strike. Appropriate allocation presumes, however, that the analyst can identify the number of crew members present for any given report of a safety event. In general, the NAOMS recall period extended over 60 days, during which some pilots flew two or more types of aircraft of different size categories, implying different numbers of crew. Additionally, the questionnaire did not allow a pilot who flew more than one aircraft to identify which aircraft a reported safety event was associated with or in which role he or she served as crew. 
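A minimal sketch of this allocation idea follows; the report records and crew sizes are hypothetical.

```python
def allocated_events(reports):
    """Down-weight each reported event by the number of crew members
    who could have reported it, so that aggregating across sampled
    pilots does not multiply-count multiply-witnessed events."""
    return sum(1.0 / r["crew_size"] for r in reports)

# Hypothetical reports: a bird strike witnessed by a three-person
# widebody crew counts as one-third of an event.
reports = [
    {"event": "bird strike", "crew_size": 3},
    {"event": "bird strike", "crew_size": 2},
]
```

As the passage notes, applying such weights in practice presupposes that the analyst knows the crew size for each reported event, which the NAOMS questionnaire does not always make possible.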
Analysts seeking to address the potential effect of multiple reports of the same event would have to develop allocation strategies that account for these design issues. Researchers must also develop allocation strategies for other aspects and types of analysis using NAOMS data, such as trends or rate estimates for different aircraft types. We have previously mentioned that the NAOMS team recommended analyzing data by operational size category because of sampling considerations and because the effect and exposure to certain risks varied by class of aircraft. They also noted the importance of seasonal variations in relation to safety events—for example, icing is less likely to be a problem in summer than winter. In its preliminary analysis, the NAOMS team attempted to resolve the issue of seasonal assignment by using nonproportional allocation strategies. The team used a midpoint date of the recall period—for example, October 1 if an interview recall period ran from September 1 to October 30—to determine a seasonal assignment for each interview in the analysis. For pilots flying different aircraft during the recall period, team members assigned an operational size class, based on the aircraft predominantly flown. For pilots who reported flying different operational sizes of aircraft equally over the recall period, project staff used a random number generator to determine the size class for preliminary analysis. Extrapolating to the National Airspace System The NAOMS team disagreed on the survey’s ability to provide information on systemwide event counts versus rates and on trends based on individuals’ risk exposure. In preliminary analysis, the contractors often used BTS data to weight NAOMS data to generate systemwide event counts for air carrier operations in the NAS, and to provide baseline measures to assess potential bias resulting from sampling and filtering procedures. 
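The midpoint assignment can be sketched as follows; the year and the season boundaries (a meteorological mapping) are illustrative assumptions.

```python
from datetime import date, timedelta

def recall_midpoint(start, end):
    """Midpoint of an inclusive recall period, rounded up, so that a
    September 1 - October 30 period is assigned October 1."""
    return start + (end - start + timedelta(days=1)) // 2

def season(d):
    """Hypothetical meteorological-season mapping for illustration."""
    return {1: "winter", 2: "winter", 3: "spring", 4: "spring",
            5: "spring", 6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall", 12: "winter"}[d.month]
```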
Since BTS’s data collection processes changed during the NAOMS data collection period, however, the contractor stopped using these data to weight its estimates. Because of the distinction between NAOMS’s unit of analysis and the sampling frame, as well as other sampling issues we found, it may not be possible to establish systemwide event counts for air carrier flights from the NAOMS data without using an external benchmarking dataset. However, extrapolating to systemwide event counts was not an explicit goal of the project. To the extent that analysts seek to use an external dataset to weight the NAOMS data in estimates of systemwide counts, that dataset’s collection procedures and reliability would require assessment. Additionally, caution should be exercised, since changes in data collection or editing procedures over time could confound actual trends with changes resulting from variations in any external weighting dataset. The Survey’s Implementation We found that NAOMS researchers followed generally accepted survey principles for many aspects of the survey’s implementation, with some limitations. Sample administration, information systems, and confidentiality provisions appear to have been adequate, and telephone interviewers were successful in administering technical questions and attaining high completion rates. However, despite adequate records of data editing and checks, analysis and interpretation of NAOMS data are complicated by first-year experiments in recall period and data collection approaches and CATI programming choices, along with sampling and design decisions. Researchers did not conduct full data validation or nonresponse bias assessments to ensure the quality of the data. We found deficiencies in record-keeping, with moderate implications for the risk of survey error; the potential errors involved processing, sampling, and nonresponse.
Information Systems and Sample Management Maintained Confidentiality, but Data Checks and Record-Keeping Were Limited We found several issues with NAOMS information systems. Sample administration and management, including notification of and informational materials for pilots and release of sample for interviewing, met generally accepted survey principles. Project staff were seriously concerned about pilot confidentiality, and the steps they took to protect it appear to have been adequate. In contrast, CATI programming and data checks, along with record-keeping, had greater limitations. NAOMS drew its sample from the Airmen Directory Releasable File using pilots’ certificate numbers, with a filter designed to target air carrier pilots. After removing duplicate certificate numbers that had entered the sample at some time in the previous year (regardless of whether an interview was completed), the team obtained pilots’ updated addresses from the U.S. Postal Service’s change-of-address file and submitted them to Telematch to obtain telephone numbers for each address. This process matched approximately 60 percent of addresses to telephone numbers, which researchers saw as sufficient because they believed the Airmen Directory included some records for individuals who had retired or were deceased. Each quarterly sample was then divided randomly into 13 parts to be released weekly. On the Friday before each week’s release, project staff sent pilots a notification on NASA letterhead that described the study and its confidentiality provisions and informed them that an interviewer would be calling. To pilots for whom Telematch could not provide a valid telephone number, or who had “bad” numbers from the field trial, project staff sent postcards asking them to call NAOMS interviewers directly or to send in an updated telephone number.
The project team monitored the disposition of the sample on a weekly or quarterly basis, including the proportion of respondents who were ineligible, refused, or could not be located. While between 17 and 29 percent of pilots in each quarterly sample could not be located, and consequently were not interviewed, approximately 5 percent of the completed interviews resulted from cases that had not been matched to a telephone number through Telematch. The NAOMS team aimed initially for a 6-week fielding period, or “call window,” to allow interviewers sufficient time to call back each nonresponding pilot in the sample before assigning the case a final disposition (such as “no-locate” or “refusal”) and removing the pilot from the sample. However, researchers found that a 3-month call window was necessary to attain a sufficient response rate. The team did not indicate having compared the answer patterns of pilots they reached early in the sample with the answer patterns of pilots who were hard to track down, to ensure the patterns were comparable across the full sample field period. Information Systems and Pilot Confidentiality The survey’s management techniques and documentation for interviewers indicate that the NAOMS project team was particularly attentive to confidentiality. The questionnaire did not ask pilots to link safety events to specific flights, airlines, or times. Interviewers were informed that “Battelle will not link data items with individual pilots. All reports will be presented using aggregate information.” Battelle used separate systems to track the sampling and to store the interview data, which ensured that pilots’ answers could not be linked to any identifying information. In the system with sampling information, the specific date of each interview was not recorded, only the week in which it happened.
The NAOMS Reference Report described NAOMS’s responses as “functionally anonymous” and suggested that the promise of confidentiality enhanced the respondents’ rapport with the interviewers. Project materials listed three assurances: “The identity of respondents will not be revealed to anyone outside of the study staff”; “The data presented in reports and publications will be in aggregate form only”; and “The respondent will be assured that participation is completely voluntary and in no way affects their employment.” Among analytical products for the aviation community, researchers planned to release summary reports and “structured, fully de-identified datasets.” According to a presentation at the first NAOMS workshop, NAOMS products would be subject to FOIA after they were in “a finished state.” NASA officials told us that they agreed that there would be little risk of violating pilots’ confidentiality if data were released in aggregate as initially was planned. In meetings with NASA, as well as in the agency’s written comments responding to our draft report, officials expressed serious concern about the importance of protecting pilots’ identity, a concern we share. The officials offered several specific examples of how they felt NAOMS data could be used to identify individual pilots. However, many government agencies that collect sensitive information, such as the Institute for Education Sciences, the Census Bureau, and the National Center for Health Statistics, have successfully allowed individual researchers access to extremely sensitive raw data on individuals. These agencies have effectively addressed the issue of individual privacy by, for example, requiring researchers to obtain clearance to use data that could reveal sensitive information, to sign nondisclosure agreements, and to submit to stiff penalties for noncompliance. Additionally, agencies may restrict the types of analyses that can be performed with the data, where data can be analyzed, and how the data are reported.
For example, the National Center for Health Statistics may prevent researchers from accessing table cells that contain fewer than five observations to lessen the likelihood that an individual respondent can be identified. We realize that, given the evolution of data mining techniques, one could conceive of a full, raw NAOMS dataset being linked to proprietary information from airlines or a host of other safety systems in ways that might enable a dedicated data analyst to identify a particular pilot from the air carrier survey. This breach seems unlikely to happen, however, given the relative absence of identifiable information in the survey data and the lack of connection between the tracking database and the CATI data. If the survey were to be implemented as it was planned and the data released publicly only in aggregate, the confidentiality provisions of the air carrier pilot survey appear to have been adequate. The risk that individual pilots might be identified from the raw data would be greater for the general aviation survey, which involved a wider range of aircraft types, several of which might be linked to very small populations of pilots. NASA officials also expressed concern that pilots might have understood NAOMS’s promises of confidentiality as conferring the kind of legal protection that voluntary reporting to a system like ASRS provides. We found no evidence substantiating or refuting this understanding. To the extent that confidentiality protections in NAOMS were adequate, any fear that pilots would invoke legal protections that did not exist is unfounded. CATI Programming and Data Checks Partly because NASA emphasized the importance of not second-guessing pilots, and partly because project staff wanted to avoid truncating answers unnecessarily, the contractor built only limited edit checks into the CATI data collection system, despite initial plans to the contrary.
The questionnaire used in training interviewers identified one structured prompt for the number of hours a pilot reported having flown during the recall period. It did not include any other instructions to recheck values reported for specific questions if they seemed unreasonable (perhaps indicating mistyping or an interviewer-respondent misunderstanding). Although the contractor documented edits and quality checks that it performed on the collected data, the CATI system may not have included all initially planned edit checks. The final questionnaire for interviewer training suggests that additional edit checks were built into the CATI system, but the contractor’s data editing protocols suggest that the edit checks were not consistently integrated into the program. For example, when pilots were asked to break the time that they flew different aircraft into percentages—such as 50 percent of the time flying a Boeing 737, 25 percent flying a McDonnell Douglas MD-80, and 25 percent flying a Boeing 727—the CATI system was supposed to have forced interviewers to reenter information if the responses did not add to 100 percent. If, for example, the interviewer had mistakenly entered 25 percent for each of the three separate aircraft categories, the total percentage (75 percent) should have triggered the CATI system to force the interviewer to reenter information until it added to 100 percent, but in a handful of cases the system did not do so. Although such anomalies were extremely rare in the air carrier pilot data, multiple managerial reviews and tests of the CATI programming failed to identify the anomalies before the survey was fielded. For many of the questions that pilots were asked, the concern that answers not be truncated unnecessarily by imposing predetermined edit checks seems reasonable, given that the goal was to generate statistically reliable information on aviation safety that was otherwise unavailable.
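A minimal sketch of the intended consistency check follows; the aircraft labels are illustrative, and an actual CATI system would loop back to the entry screen rather than merely return a flag.

```python
def percentages_valid(shares):
    """Edit check of the kind the CATI system was meant to enforce:
    the reported shares of flying time must sum to exactly 100."""
    return sum(shares.values()) == 100
```

In a CATI program, a False result would send the interviewer back to reenter the values instead of silently accepting an inconsistent total such as 75 percent.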
For other questions, such as those on total engine failure and other rare events, input from aviation experts and operational staff would have helped in constructing thresholds for the checks in the CATI system. The additional data would have helped analysts distinguish between true outliers and data entry errors and between interviewer and respondent misunderstandings. Survey completion rates were relatively high, and the NAOMS team reported exceptionally few break-offs partway through the interviews. It is impossible to know for certain whether the high completion rates were because interviewers did not second-guess pilots by asking them to repeat answers that researchers had deemed unlikely. To the extent that interviewer rapport with pilots was enhanced because the pilots were not second-guessed, the decision to limit the number of built-in CATI edit checks may have enhanced the completion rates, at the expense of complicating data cleaning and outlier identification. NAOMS record-keeping was fairly decentralized. While many of the individual steps of the NAOMS project appear to have been documented in some form, the project staff and contractors did not assemble a coordinated, clear history detailing the project’s management that would facilitate evaluation of the overall air carrier pilot survey. Information on the project’s steps is largely dispersed across a series of contracts and modifications between NASA and Battelle and internal NAOMS team documents on individual pieces of the project. The lack of summary documentation for various aspects of the project makes it difficult to (1) distinguish between what was planned at the beginning of the project and what phases were accomplished in later years, following NASA priority changes for NAOMS’s resources, and (2) assess whether aspects of project and budget management raised the potential risk of survey error. 
Regarding the sample, the contractor kept limited information on the size of the frame before and after filtering to identify air carrier pilots. The size information the contractor maintained was not enough to reconstruct the sampling fraction—the percentage of pilots sampled each quarter from the filtered frame—for all quarters of the air carrier pilot survey. Additionally, Battelle’s procedures for maintaining pilot confidentiality aimed to make it extraordinarily difficult to identify which pilots were in the sample frame at any given time. At the time of sampling, Battelle maintained enough information to remove pilots who had already been sampled from future samples for the next four quarters. Battelle did this partly because the population was relatively small and partly because it did not want to interview the same pilot more than once a year. Although the contractor lacked formal records, it estimated that the procedure led to the exclusion of approximately 20 percent of the filtered sampling frame in any given year. Regarding NAOMS data, the lack of sampling records prevents analysts from leveraging sampling information when producing estimates or calculating sampling errors. Furthermore, the lack of these data hinders the kinds of nonresponse bias analysis that the project team originally planned. Without reliable information on the proportion of cases that were removed from the sample in any given quarter, analysts must rely on more conservative variance estimates than might have been necessary, making the detection of changes over time more difficult. Experiments in Data Collection and Recall Period Length May Have Restricted the Utility of the First-Year Data Two main experiments that NAOMS researchers conducted in the initial year of interviewing may have restricted the utility of first-year data.
Because the field trial had not resolved the optimal length of time the survey’s questions should cover, researchers used the final survey to test first two and then three different recall periods for several months. Subject matter experts on the team also advocated a second experiment to determine the relative merits of a panel or cross-sectional data collection approach. NASA officials told us that they viewed the first months of the survey as part of a development phase, rather than full implementation of the survey. Nevertheless, NAOMS project staff have noted that adequate research on the feasibility of combining data from the experimentation has not yet been done. Depending on the results of such research, it may be imprudent to evaluate NAOMS’s first-year responses as if they were similar to the trend data collected in subsequent years. Approximately one-quarter of NAOMS air carrier pilot survey interviews were collected under experimental conditions; the subsequent 3 years of the survey used a cross-sectional data collection approach with a 60-day recall period. Project documentation described how panel interviews would be linked without compromising confidentiality: “We will be asking panel members to give us a code word that we can use to link interviews, but this code word will not be kept in our tracking system. Pilots forgetting the word will not have their data linked.” The NAOMS team decided to begin its first full year of air carrier data collection using both panel and cross-sectional approaches. After analyzing the first half-year of data, the team noted that, among other things, the panel approach may have heightened pilots’ awareness of the timing of safety events but not the number of events recalled.
The project team decided, for the following four reasons, to abandon the panel design in favor of cross-sectional data collection: (1) the panel design resulted in fewer independent observations; (2) the panel design was logistically difficult to administer; (3) NAOMS’s confidentiality procedures made analyzing repeated observations over time impossible (the proportion of pilots who remembered the password and thus could have data linked was not reported); and (4) the cross-sectional design had yielded a sufficiently high response rate to allay worries that pilots would be unwilling to respond unless enlisted as panel members. As we have previously discussed, the lack of literature on pilots’ recall, in particular, and the wide variation in the literature’s recommended recall periods, more generally, made it difficult for the team to decide on the most appropriate recall period. Team members had extensively analyzed data from the field trial to determine any differences among the recall periods tested in that survey. Researchers’ analysis showed that, as expected, respondents with longer recall periods reported having flown more hours and legs than those with shorter recall periods. Researchers’ regression analysis also confirmed a positive relationship between recall period and the total number of events that pilots reported; the magnitude and statistical significance of this relationship was strongest between 2 weeks (14 days) and 2 months (60 days). Additionally, the team examined pilots’ comments on whether their particular recall period had been appropriate. Despite these analyses, the team decided to delay the decision on recall period until they had collected more data in the initial months of the full air carrier survey. After reviewing the field trial results and pilots’ comments, the team was firm only in the belief that a 7-day period was too short, despite a small-scale experiment suggesting this period was optimal for pilots’ memory of routine events. 
(However, a 7-day period would have been too short to capture infrequent risk events.) The team explored various tolerances for error, event periodicity, and cost before testing 30-day and 90-day recall periods in the survey’s first two quarters of sampling. After the first two waves of data collection, team members explored data on the length of the recall period. Then they tested a three-way split design, collecting an additional 2 months of cross-sectional data to assess whether 60 days would be the best compromise between the 30-day and 90-day periods. Using these data, the project team compared the mean event rate over time across all core safety event questions—noting that longer recall periods should result in pilots reporting more events—and the standard deviation associated with these rates, which declined as the recall period increased. However, the team did not analyze the relationship between recall periods and specific events or the correlation of exposure units (flight hours and flight legs) to safety events for the different periods. Eventually, staff chose 60 days as providing a reasonable balance between the recall of events and avoidance of error. According to NASA officials, the selected recall period was seen as a compromise between cost and reliability. Despite the theoretical merits of the analyses justifying this decision, researchers cannot independently confirm the accuracy of reporting under different recall periods without separate data validation efforts as part of the field trial or full survey. However, the practicality of efforts to validate respondent accuracy depends on the nature of the data being collected, the existence of alternative data sources, and the design of the questionnaire. As NAOMS’s survey methodologist has observed, surveys would be unnecessary if a true population value were known. 
Because NASA’s objective in designing and implementing the NAOMS survey was to develop a data collection methodology, the team was warranted in deciding to use the first year of data analysis to resolve questions that had not been fully answered by the field trial. This is particularly true for their decision to test various recall periods that would help them find an appropriate balance between recall period and budget and sampling constraints. As we have previously mentioned, further analysis would be required to establish whether data collected during the experimentation can be combined with later data using only the 60-day recall period and cross-sectional approach. However, NASA officials told us that the subsequent 3 years of cross-sectional data collection with a 60-day recall period was sufficient to demonstrate the capability of the air carrier pilot survey to measure trends. Experienced Professional Interviewers Administered Technical Questions Training materials, questionnaire copies and revisions, specificity in interviewers’ scripts, and cooperation among staff demonstrate that the team selected appropriate interviewers and was sensitive to key issues throughout the questionnaire’s development. The NAOMS project team decided not to use aviation experts as interviewers in the belief that the “lack of expert knowledge can be a benefit since the interviewers are only recording what they hear rather than interpreting it through the lens of their own experiences.” To mitigate issues that might have resulted from using interviewers unfamiliar with the subject matter, the team emphasized the importance of the clarity of the questions and consistency in how the interviewers read them and responded to the respondents’ questions. The project staff emphasized the importance of using professional and experienced interviewers and giving them adequate training to administer the survey. 
NAOMS’s principal investigator told us that the interviewers Battelle used for the NAOMS survey were exceptionally professional and were accustomed to conducting interviews on sensitive topics. Interviewers received a training manual for the project’s first year, which included the following: a background on the rationale for the NAOMS survey, a description of how the survey could shed light on safety systems, the survey’s confidentiality protections, and information on the survey’s sampling and tracking information. They also received a paper copy of the questionnaire with interviewer notes, pronunciation information, and a glossary of aviation terms. The NAOMS team conducted a series of cognitive interviews with pilots to learn whether they would understand the questions and whether the incidents they reported were those that the team sought to measure. These interviews led to questionnaire revisions to address potential ambiguities for both respondents and interviewers. Regardless of efforts to develop clear questions that interviewers could read directly and respondents could easily interpret and answer, the team acknowledged that certain questions turned out to be less reliable than others. For example, in considering a question series on the uncommanded movements of rudders, ailerons, spoilers, and other such equipment (see fig. 6), the team’s concern was that pilots might be unaware of these events or might interpret uncommanded movements as including autopilot adjustments. The survey instrument did not include instructions to interviewers to clarify the intended meaning of this set of questions, and question standardization alone could not overcome the questions’ potential ambiguity, despite interviewers’ skill. In its quality assurance procedures, Battelle monitored and documented approximately 10 percent of the interviews. However, it did not record audio of the interviews. 
Battelle’s documentation states that the monitoring procedure took the form of live supervisory monitoring of interviews in progress, as well as callbacks to respondents to ask about their interviewing experience and to administer key questionnaire items again to see whether answers were reliable. However, NASA officials told us that the callbacks were never performed, in keeping with the project’s concerns about pilot confidentiality. Telephone Interviews Attained High Completion Rates, but Validation Efforts Focused Primarily on Face Validity While interviewers for NAOMS attained high completion rates from pilots in the sample, limited validation efforts hinder confirmation of data quality. Roughly 80 percent of sampled pilots thought to be eligible for the NAOMS air carrier pilot survey completed telephone interviews, and a notable portion of those who were contacted were found to be ineligible. The project team decided against conducting nonresponse bias analysis and did not pursue other formal data validation, focusing instead on the face validity of preliminary NAOMS rates and trends. In public presentations and documents of air carrier pilot survey results, NAOMS staff often discussed the rate of sample cases that were located and the proportion of interviews completed. The completion rate, distinct from a response rate, surpassed 80 percent by the end of the air carrier survey. Throughout the air carrier survey, approximately 23 percent of those contacted were deemed ineligible because they were not commercial air carrier pilots or had not flown in the recall period. Additionally, approximately 24 percent of cases drawn for the air carrier sample were never located and, thus, their eligibility for the sample could not be determined. 
A survey’s response rate, defined, in general, as the number of completed interviews divided by the number of eligible reporting units in the sample, is often used as an indicator of data quality and as a factor in deciding to pursue nonresponse bias analyses or additional survey follow-up. OMB’s guidelines, although not yet formal when the NAOMS survey was implemented, call for a nonresponse bias analysis when survey response rates fall below 80 percent. OMB guidelines cite survey industry standards for response rate calculations; these calculations generally include either unknown sample cases or an estimate of likely eligibles among unknown cases in the denominator. A calculation of response rates that excludes unknown cases rests on the assumption that all of those cases would have proven ineligible. For NAOMS data, a response rate calculation that included cases of indeterminate eligibility in the denominator (because the pilots could not be located) would be closer to 64 percent. If the cases not located fell out of scope at approximately the same rate as the cases that were located and contacted, the NAOMS response rate would be approximately 67 percent. NAOMS staff told us that they decided against pursuing nonresponse bias analyses as initially planned because they thought that air carrier completion rates were quite high for pilots who were located and contacted and because NASA’s priorities had changed, resulting in fewer resources for staff to complete such activities. However, more conservative calculations of response rates might have merited further scrutiny, such as a nonresponse bias analysis or other research into the reasons for the sample rate of unlocated pilots. Comparing sample frame information on respondents’ and unlocated pilots’ characteristics might have provided insight into any systematic differences between the two groups. NAOMS project staff attempted to validate the data in a variety of limited ways.
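To illustrate how these alternative denominators change a response rate, here is a minimal sketch in the spirit of industry-standard (AAPOR-style) calculations. The counts are hypothetical, chosen only to show the direction of the differences; they do not reproduce the NAOMS figures.

```python
# Hypothetical illustration of the response rate calculations described
# above (all counts below are invented, not the actual NAOMS figures).

def completion_rate(completes, eligible_contacted):
    # Completion rate: completed interviews among located, eligible pilots.
    return completes / eligible_contacted

def response_rate_all_unknowns(completes, eligible_contacted, unknown):
    # Most conservative rate: treats every unlocated case as eligible.
    return completes / (eligible_contacted + unknown)

def response_rate_estimated(completes, eligible_contacted, ineligible, unknown):
    # AAPOR RR3-style rate: applies the observed eligibility rate (e)
    # among contacted cases to cases of unknown eligibility.
    e = eligible_contacted / (eligible_contacted + ineligible)
    return completes / (eligible_contacted + e * unknown)

completes, eligible_contacted, ineligible, unknown = 480, 600, 180, 240
print(round(completion_rate(completes, eligible_contacted), 2))                        # 0.8
print(round(response_rate_all_unknowns(completes, eligible_contacted, unknown), 2))    # 0.57
print(round(response_rate_estimated(completes, eligible_contacted, ineligible, unknown), 2))  # 0.61
```

As in the NAOMS case, the rates that account for unknown-eligibility cases fall well below the completion rate, which is why a completion rate alone can overstate data quality.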
Besides the interview monitoring, they made preliminary calculations, such as a comparison of the hourly rate at which pilots left the cockpit to deal with passenger disturbances. They found that, unlike some other events, the rate dropped dramatically after September 11, 2001 (see fig. 7), which demonstrated the importance of enforcing existing rules requiring the cockpit door to be closed during flight. Other validation attempts included checking on the seasonality of events—for example, on whether reports of icing problems increased in winter. The NAOMS staff recommended more formal validation efforts, suggesting the examination of questions that had been included in the survey specifically because they could be benchmarked against other FAA data systems, such as ASRS and the Wildlife Strike Database. Such work would have been complicated, however, by the decision to use NAOMS data to fill in data gaps from other safety systems and not to ask questions that directly overlapped them, even for items included for benchmarking. For example, NAOMS asked pilots about all bird strikes without establishing a threshold for their severity. FAA does not, however, require pilots to report all bird strikes to its Wildlife Strike Database, only those bird strikes that cause “significant” damage. Additionally, aviation researchers have estimated that up to 80 percent of bird strikes with civil aircraft are not reported to FAA’s Wildlife Strike Database. Therefore, it is not surprising that NAOMS data imply a much higher incidence of bird strikes than other systems. In addition to considering examples such as pre- and post-September 11, 2001, rates, NAOMS staff had also examined other issues that had intuitive appeal, such as seasonal fluctuations in reported bird strikes.
Project staff also suggested that the data corresponded well with other data systems, citing as an example both runway incursions—a decline in which the NAOMS team attributed to an FAA policy change—and reserve fuel tank use—an increase in which had reportedly been seen in ASRS. Additionally, for field trial data, project staff examined the strength of the relationship between the number of events reported and the hours flown or the length of the recall period, because pilots flying more hours or recalling events over longer recall periods should report more events than those with fewer hours flown or shorter recall periods. In addition to having face validity, the survey methodologist noted that the relationship between events reported and flight hours and legs is also a measure of construct validity, in that it demonstrated that NAOMS’s measures corresponded well with theoretical expectations. However, the relationship does not confirm whether the events that pilots reported actually happened. No other data validation efforts were undertaken on the full survey. NAOMS project staff reported that several questions in the NAOMS data had face validity, but the data still had to be benchmarked. While such benchmarking is critical for validating NAOMS data, it may not be sufficient to confirm the accuracy of pilot recall for most NAOMS questions or to estimate the potential effect of nonresponse bias. Stakeholders Disagreed on the Utility and Value of the NAOMS Data The effectiveness of NAOMS as a monitoring tool depended on its ability to provide reliable and valid estimates to address customers’ concerns. NAOMS team members promoted the survey’s potential for generating rates and trends but also debated whether the data could be used to establish baseline counts of events for the NAS. NAOMS working groups were started but disbanded before resolving this issue or benchmarking the data against what was known from other safety data.
NAOMS Data and Systemwide Event Counts NAOMS team members agreed that the survey was designed to measure the occurrence of events, rather than their causes. They did not clearly agree on the survey’s ability to provide systemwide counts of events, rather than rates per flight hour or flight leg, or rate trends over time. According to the project’s leaders, NAOMS was never intended to generate an absolute picture of the NAS (i.e., total counts of the number of events in the NAS each year). They told us that its utility was understood to lie in its ability to measure relative frequencies that could be used to generate trends over time. However, NASA’s OIG found “a disparity between the stated goals of NAOMS and the manner in which NAOMS project management initially presented the data to FAA,” a point that FAA also raised. Senior FAA officials told us that NAOMS staff repeatedly indicated that the project would provide “true” estimates of rates of safety events in the NAS at the project’s beginning, a capability that FAA disputed. NAOMS’s emphasis on relative trends, which FAA believed NAOMS could depict, happened only in later stages of the project. Regardless of whether NAOMS data were presented as counts or rates, the data were never designed to serve as a stand-alone system. The survey’s methodologist told us that he believed that NASA staff were always clear about the goal of establishing rates and trends, but that in the absence of a baseline count of how frequently safety events occurred, these rates were insufficient to specifically quantify change from the survey’s beginning. However, in theory, such data could be used to generate trends if the nature of any sampling and nonsampling error in data collection remained constant over time. Additionally, the NAOMS survey methodologist described issues that might jeopardize inferences about trends based on hourly rates. 
For example, because rates per-exposure unit are a per-pilot measure, rather than a system or aircraft measure, one could incorrectly attribute a change in rates to a systemwide shift that might instead have resulted from a change in technology that affected the number of individuals in the cockpit crew. As we have previously mentioned, the sampling frame, the filter, and potential noncoverage and nonresponse issues would make further analysis necessary before one could conclude that NAOMS’s measures of rates per-exposure unit could be generalized to the full population of air carrier pilots. According to NASA’s researchers, when the NAOMS contractors began to work closely with the data, they began to extrapolate and generate systemwide count estimates. NASA reported that one contractor believed it was essential to report system counts: that is, counts were necessary to convey the meaning of the data from a policymaker’s perspective and rates did not convey the significance of a given result. Battelle staff used BTS data to weight NAOMS data according to systemwide numbers of flight hours or flight legs and used these estimates in several presentations of NAOMS preliminary results. The staff reported to us later that they had decided against weighting up to the full population of aircraft types because they did not think that it made sense to combine operational size categories of aircraft. The early presentations of the NAOMS data raised concerns for FAA, because the numbers presented as systemwide estimates did not match FAA’s other information sources. Several FAA and NASA officials with whom we spoke asserted that data from several specific survey items did not correspond with the content of other reporting systems. However, the items cited were not intended to overlap directly with data FAA had already collected. NASA officials conceded that how NAOMS defined the question wording might have contributed to one cited discrepancy. 
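The extrapolation step that raised FAA's concerns can be sketched as follows. The rate and exposure totals below are invented, not actual NAOMS or BTS figures; the sketch shows only the mechanics of weighting a survey rate up to systemwide exposure.

```python
# Minimal sketch of the rate-to-count extrapolation described above,
# using invented numbers (not actual NAOMS or BTS figures).

def event_rate_per_hour(reported_events, reported_flight_hours):
    # Per-exposure event rate estimated from survey reports.
    return reported_events / reported_flight_hours

def systemwide_count(rate_per_hour, total_system_hours):
    # Extrapolation step: multiplies the survey rate by systemwide
    # exposure (e.g., total flight hours from an external source such
    # as BTS). This assumes the sampled pilots' rate generalizes to
    # the full system and that duplicate reports of the same event by
    # multiple crew members have been accounted for.
    return rate_per_hour * total_system_hours

rate = event_rate_per_hour(reported_events=50, reported_flight_hours=200_000)
print(systemwide_count(rate, total_system_hours=18_000_000))  # 4500.0
```

The arithmetic is trivial; the controversy lay entirely in the assumptions the second function embeds, which is why per-pilot rates and systemwide counts can diverge sharply.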
In addition, FAA officials thought NAOMS was unable to accurately measure systemwide rates of safety events and asked for extensive revisions to the survey to address specific questions. Among other things, these officials wanted NAOMS to ask questions that were more investigatory in nature than the broad monitoring concept that NASA had envisioned. NASA did not make the changes that FAA recommended partway through the survey. In correspondence with FAA, NAOMS researchers emphasized that the survey’s ability to measure trends required consistent question wording. FAA officials were also concerned about the quality of NAOMS data because the survey’s questions were based solely on pilots’ perceptions. NAOMS’s Working Groups NASA’s project leaders reported that the working groups were to play a critical role in evaluating the validity of the NAOMS data and in establishing whether the survey’s information seemed reasonable, given what was known about safety from other data sources. The two working groups, established in 2003 and 2004, were distinct from the two workshops conducted in 1999 and 2000, although the groups and workshops were similar in that they both aimed to introduce the NAOMS project to a wide range of stakeholders, including FAA and industry members, and that they solicited input on the survey’s goals and questionnaires. NASA envisioned a wide range of participants in the working groups, including pilots; flight attendants; people familiar with alternative data systems; and other aviation stakeholders, such as academic researchers and industry. Project leaders told us that they did not expect that participants would necessarily attain consensus, except to the extent that the groups thought the NAOMS data appeared to be valid and could publicly present the data in a way that would not be automatically translated into systemwide extrapolation of event counts.
According to a presentation at the first working group meeting, in December 2003, “the release of NAOMS data, and its future directions, will be guided by the Working Group.” NASA and FAA representatives had agreed earlier that year not to release any survey results before the working groups reviewed them and came to a consensus on the timing, content, and level of the release of NAOMS data. Discussing the fate of the 2003 and 2004 working groups, NASA’s OIG concluded in March 2008 that “the NAOMS working groups failed to achieve their objectives of validating the survey data and gaining consensus among aviation safety stakeholders about what NAOMS survey data should be released.” The working groups’ limited effect may have stemmed partly from disagreement over their composition. NASA project leaders suggested that FAA had wanted an existing advisory group to oversee efforts to validate the data, whereas NASA wanted a different combination of academicians, FAA staff, subject matter experts, and industry stakeholders. FAA officials told us that they had serious concerns about some of NASA’s proposed experts, because these experts cited preliminary estimates from NAOMS data that FAA found not to be credible. Additionally, portions of the working group agendas were dedicated to discussing the importance of survey research for reliably measuring trends. These discussions might indicate that some working group members doubted the core foundations of the NAOMS project or the survey’s ability to supplement aviation safety systems. According to an official in NASA’s OIG, he believed that the presentations at the working groups were, in a sense, an attempt to get the working group participants on board with the NAOMS project. NASA’s project team suggested that the two working group meetings took place necessarily late in the NAOMS project to allow for the collection of enough preliminary data and to work through nondisclosure issues.
The team also suggested that the meetings “were largely dedicated to organizational, procedural, and membership issues.” Moreover, presentations at the two working group meetings showed only the contractor’s preliminary aggregate analysis. Because the working group members never had the raw data, they had no opportunity to achieve consensus on the validity of NAOMS data or appropriate uses of these data. NASA’s project leaders have asserted, moreover, that the “Working Group approach” was “terminated prematurely because the NAOMS resources were re-directed to another approach.” According to the project leaders, policy changes resulted in the disbanding of all advisory groups before a more formalized NAOMS group could be assembled after the first two groups failed to reach their objectives. Reestablishing any sort of advisory group would be difficult, because NASA procedures would require prospective participants to undergo a strict nondisclosure procedure. Given that the working group members did not have access to the raw data and did not agree on the groups’ goals or composition, it is not surprising that they were unable to productively pursue consensus on the validity and utility of NAOMS data. Additionally, to the extent that some participants rejected NAOMS’s premise that a survey is a valid and reliable way to generate safety-related data, they are not likely to have believed that the data the project collected could be validated. For example, while acknowledging that NAOMS had the potential to allow reliable estimates of relative trends, FAA officials told us that they disagreed that NAOMS could generate statistically reliable rate estimates because of the subjectivity of NAOMS questions. These officials questioned the ability of NAOMS’s information to generate rates or its capacity for validation by existing databases. 
Additionally, FAA officials noted that they did not believe any potential customers would have confidence in aggregate NAOMS results unless the source data were released to the customers directly, rather than to a working group. FAA also expressed concern that pilots would lack causal knowledge to answer the survey’s questions. However, we have noted in this report that the questionnaire was not designed to collect causal information. Additionally, we believe that knowledge of why an event occurred should not be needed to report whether a pilot witnessed or experienced a specific event. A New Survey Would Require Detailed Planning and Revisiting Sampling Strategies A new survey similar to NAOMS would require more coherent planning and sampling methods linked to specific analytic goals. In addition, the NAOMS survey exhibited some limitations that others might want to avoid. Sufficient survey methodology literature and documentation on NAOMS’s memory experiments are available to conduct another survey of its kind with similarly strong survey development techniques, built on a similarly strong foundation. The sections that follow suggest some elements of a new survey like NAOMS. Conduct a Cost-Benefit Analysis Before undertaking a similar survey, researchers should review developments in aviation safety, as well as the costs of the NAOMS data and their potential to enhance policymakers’ ability to measure trends and the effects of safety interventions.
As NAOMS’s application to OMB observed, managers seek rational and data-driven approaches to aviation safety, which “requires numbers that quantify the safety risks these investments are expected to reduce, numbers that reveal trends portending future safety problems, and still more numbers that measure the effectiveness of past safety investments.” NAOMS air carrier data demonstrate that surveys can be used to generate trend data measuring aspects of aviation safety, and some of the team’s researchers believe that the data’s utility for monitoring the effect of policy interventions has already been demonstrated. A survey like NAOMS could supplement other safety information, but additional analysis must determine whether NAOMS can be sufficiently useful and cost-effective, given more recent events and technological developments. For example, digital flight data could potentially provide monitoring information, but they are not yet comprehensive or regularly and thoroughly analyzed. Additionally, many data sources, such as digital measurements of flight parameters, cannot illuminate behavioral or perceptual information from operators that might bear on aviation safety. Until such capacity exists, a survey like NAOMS may nonetheless cost-effectively supplement other safety information and identify where to look for other sources of safety information. A thorough cost-benefit analysis should include the cost of additional steps to develop the survey, such as further experiments, questionnaire revisions, and pretesting. Such an analysis should also address the potential costs and benefits of the survey in light of resources required to analyze other sources of safety information. For example, the cost of collecting and analyzing NAOMS-like data may be small relative to the cost of thoroughly analyzing digital flight data, but, depending on the questionnaire design, such analysis may not identify causation. 
Capitalize on Experimentation and Testing A future survey should build on the insights gained from NAOMS’s extensive developmental research on pilots’ memory organization and ability to recall events. The survey might undertake additional experiments and testing to accommodate survey revisions resulting from stakeholder interests and lessons learned from the NAOMS air carrier pilot survey. A survey might supplement experiments with additional cognitive interviews, behavioral coding, and reviews. Researchers should consider the resources needed for wide-scale testing during the survey’s development. Whereas research demonstrates the benefits of adapting a survey’s content to the subject matter and population of interest, researchers would want to consider the availability of resources and time to conduct the experiments necessary to reduce respondent burden and increase accuracy. Additionally, researchers should engage in data validation efforts beyond establishing face validity when making important design decisions, such as which recall period to use. Generally accepted survey practice is to use a field trial to test a questionnaire that is as similar as possible to the final questionnaire. Accordingly, a future survey might attempt to incorporate the results of the experiments, cognitive interviews, and full set of questions into a field trial questionnaire. A future survey should also run a monitored CATI pretest on the final version of the questionnaire, to test the automated programming and ensure that interviewers and respondents appear to interpret questions correctly. Collaborate with Customers in the Survey’s Development Beyond soliciting and incorporating feedback from aviation safety stakeholders, staff promoting a new survey like NAOMS should work directly with the survey’s presumed customers to specify the uses of the data. While it is not essential that these data inform policy interventions, policymakers should agree on their potential utility. 
A customer’s rejection of the premises of a data collection system—as happened with FAA’s rejection of the idea that NAOMS would provide a reliable safety monitoring system—should be resolved before full data collection begins, and consensus on the survey’s goals and uses should be formally documented. Otherwise, alternative customers should be identified or the survey’s design and goals should be revisited. Consulting with potential customers on the wording and likely use of specific questions would enhance the utility of the survey’s data. An analysis of the existing NAOMS data by both scientists and customers’ representatives could help demonstrate how specific analytic products might directly or indirectly serve organizational missions. Assess Whether Questionnaire Content Facilitates Planned Analyses In the NAOMS air carrier pilot survey, there is the potential for more than one crew member on the same aircraft or on separate aircraft to have reported the same incident. Proportional allocation or segregated analysis of different types of crew might help address the potential for multiple reports of the same event but can be difficult to implement. Nevertheless, survey designers should consider their analytic goals when designing the questionnaire—that is, are they looking for per-crew member risk estimates or system counts? Certain goals may require researchers to adjust the data, while others may not. Overall, survey designers should be prepared to compare the sensitivity of their estimates with different strategies and under different assumptions. Future efforts to collect safety information from pilots in a survey might also reconsider the potential effect of sampling pilots who fly more than one type of aircraft during the recall period or in more than one crew capacity.
The survey designers might want to consider whether NAOMS’s confidentiality considerations outweigh the potential benefits of allowing pilots to link reported events to particular aircraft, given the perceived link between operational size class and risk exposure. To facilitate estimates, the designers of a future survey should also explore the feasibility of modifying the questionnaire to allow pilots to identify specific aircraft and crew capacities associated with each report of a safety event. They would benefit from establishing an analysis plan in conjunction with the questionnaire. Doing so would help determine the utility of adding and deleting questions and would clarify, at the analysis stage, the effect that doing so would have on data collection. Detail Analytical Goals and Strategies in Advance of Fielding To ensure consensus on the usefulness of the data, a detailed analysis plan should be developed. The plan should include basic information on likely estimation strategies and uses of the data, as well as detailed information on likely adjustments or weights needed to take account of questionnaire design and sampling and of the potential uses of the data. Any adjustments to the analysis plan for operational considerations, preliminary results, policy changes, or unforeseen circumstances should be formalized as data collection progresses. NAOMS was intended to capture precursors to accidents and nonsignificant risks and to supplement other aviation safety information. It was expected that rate trends seen in the NAOMS data would point aviation safety experts toward what to examine in other data systems. Therefore, aviation safety experts and stakeholders would have to conduct more extensive analysis than was conducted in the NAOMS project to establish whether rates and trends could be used for this purpose.
Additionally, for a similar survey, analysis would have to establish whether data generated from different recall periods, interview methods, or operational size categories were sufficiently similar to allow the data to be combined, and whether adjustments to sampling strategies or question wording are necessary to accommodate analytic goals. The NAOMS survey was intended to provide a better understanding of the safety performance of the aviation system, and to allow for the computation of general trends over time, in order to supplement safety systems. A survey with a different goal—one that was investigative or intended to understand the causes of events—would seek information different from that sought by the NAOMS questions. Depending on the customers’ intended use of the data, developers of a future survey might consider writing questions that asked about, for example, the causes of engine failures or details about air crews’ experience of engine shutdowns. Whereas questions such as the latter would be consonant with NAOMS’s goal of describing precursors to safety events, the former would be more investigative. Developing a detailed analysis plan in conjunction with the questionnaire would help ensure that the survey included questions relevant for specific analyses. Revisit Sampling Strategy Given the proportion of out-of-scope cases drawn into NAOMS’s filtered sample, and the cost of finding and contacting them, the designers of a future survey should reevaluate the merits of using a database like the Airmen Registration Database as a sampling frame relative to potential alternatives, to ensure that the database is still the most cost-effective or programmatically viable means of identifying the target population. Other frames, such as industry or union lists, might be considered, or alternative stratification and filtering strategies might be used to identify air carrier pilots.
Sampling strategies must also consider whether the proliferation of cell phones will require adjusting contact methods to target a population as mobile as pilots. Analysis of data such as the NAOMS data might compare different approaches to calculating trends and exposure rates to see if substantive conclusions were similar. Analysts might also want to determine how their estimates relate to the overall NAS. For example, if estimates can address only crew-based risk exposure, they probably do not characterize the NAS, although they may provide other important information for aviation safety monitoring. To the extent that characterizing event levels for the NAS is a goal, a survey like NAOMS might require a different sampling strategy than for a survey designed primarily to monitor trends. Sampling records, including sources used to construct a sample frame and the frame itself, should be maintained for potential use in estimates and nonresponse bias analyses. Write a Detailed Implementation Plan A detailed implementation plan would help ensure the continuity of management and record-keeping for the project and would help ensure that steps like data validation and bias analyses are carried through on a schedule. Given the risks and trade-offs inherent in any survey endeavor, such a plan would also help to ensure that future analysis of the data can accommodate decisions made in the face of changing conditions or for practical considerations. While benchmarking and face validity checks are important aspects of data validation, they may not be sufficient to confirm the accuracy of pilot recall or estimate the potential effect of nonresponse bias. Even so, besides conducting quality checks on the interview process, future survey developers should undertake formal data validation efforts during data collection and questionnaire development. Nonresponse bias analyses should be planned and completed. 
The survey’s sponsors should allocate resources to fully benchmark the data. NAOMS’s confidentiality provisions appear to have been adequate. Nevertheless, researchers interested in implementing a similar survey might find it useful to further delineate the kinds of data that might be released and the techniques that might be used to remove identifiers from datasets before implementing the survey. In light of other agencies’ mechanisms for releasing individual-level data to screened researchers in a controlled fashion, survey documentation should also clarify the conditions under which data could be released to outside researchers, as appropriate. While the NAOMS extended survey sample fielding period may have been necessary to attain a high response rate from a population as mobile as pilots, future researchers should compare the nature of the answers from pilots who were contacted with relative ease with the answers from pilots who it took greater effort to contact. These researchers should also consider an extended field period’s implications for how quarterly statistics are generated in light of potential changes to the sampling frame over time. There is some merit to NASA’s assertion that the working groups could not conduct any data validation, without access to the data. In a future survey, such groups might be constituted earlier, so that data are available for discussions on data validation. A future effort might use such working groups in parallel with data collection, thus soliciting and formalizing the participation of stakeholders. This parallel effort might help the new effort begin validation as soon as sufficient data are collected. It might also help circumvent disputes over the potential uses of the survey data. Finally, researchers pursuing efforts similar to the NAOMS project might usefully delineate in advance exactly how rates will be calculated, how potential issues will be clarified, and how the data will be interpreted. 
A future survey might benefit from tighter coordination between its designers and contractors to ensure that public presentations of preliminary results, when there is still significant debate about the validity of the results, show only the numbers agreed to by project staff. Concluding Observations As a monitoring tool, NAOMS was intended to point air safety experts toward trends, helping FAA and others decide where to look in other datasets for the causes of extremely rare safety events. As a research and development project, NAOMS was a successful proof of concept. However, the data that NASA collected under NAOMS have not been fully analyzed or validated by project staff or aviation safety stakeholders. Depending on the research objective, proper analysis of NAOMS data would require multiple adjustments. Additionally, because of their age, existing NAOMS data would most likely not be useful as indicators of the current status of the NAS. “The NAOMS survey could be very useful in sampling flight crew perceptions of safety, and complementing other databases such as ASRS. The survey data, when properly analyzed, could be used to call attention to low-risk events that could serve as potential indicators for further investigation in conjunction with other data sources.” In this report, we have both described NAOMS’s limitations in enough detail to enable others to address them in a redesign and suggested ways in which a newly undertaken project might successfully go forward. The planners and designers of a new survey might want to supplement it where NAOMS was self-limiting, by incorporating research into investigatory questions of the type that interested FAA, or to more specifically detail its monitoring capacity in conjunction with existing aviation safety systems.
Alternatively, a newly constituted research team might lead operational, survey, and statistical experts in extensively analyzing existing data to validate a new survey’s utility for various purposes or to illuminate future projects of the same type. Agency Comments and Our Evaluation We provided a draft of this report to the National Aeronautics and Space Administration and to the Department of Transportation for their review. Transportation had no comments on the draft report. NASA provided written comments, and appendix II contains a reprint of the agency’s letter. NASA also provided technical clarifications, which we incorporated into the report as appropriate. In response to the draft report’s characterization of NAOMS, NASA emphasized that NAOMS was a research and development initiative. We revised the report to more clearly reflect this aspect of NAOMS. NASA also stated that the draft report inappropriately asserted that NAOMS’s goals changed over time, and noted that the principal goal of the project was always to develop a methodology to assess trends or changes over time. While we recognize that this was a primary goal of the project and have revised the report to clarify this issue, we believe that the project staff were not consistent in how they presented NAOMS’s likely capabilities to other aviation stakeholders over the life of the project. NASA was also concerned about the draft report’s discussion about maintaining pilot confidentiality, citing its own research on the risk of pilot disclosure in the NAOMS data and the inability to determine individuals’ motivation for trying to identify a specific pilot. We agree with NASA’s concern about pilot identification and have revised the report to highlight NASA’s concern; however, we also note that other government agencies have developed mechanisms for releasing, in a controlled manner, extremely sensitive raw data with high risk for the identification of individuals to appropriate researchers. 
We also provided a draft of this report to Battelle (NASA’s contractor for NAOMS) and to Jon A. Krosnick, Professor, Stanford University (the survey methodologist for NAOMS) for their review. Battelle provided no comments on the draft report. Dr. Krosnick reported that he found the draft report to be objective and detailed, and that he believed it would contribute to the public debate on NAOMS. He also provided technical clarifications, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to relevant congressional committees, the Administrator of the National Aeronautics and Space Administration, the Secretary of Transportation, the Administrator of the Federal Aviation Administration, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions concerning this report, please contact Nancy Kingsbury at (202) 512-2700, [email protected], or Gerald Dillingham at (202) 512-2834, [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of the report. GAO staff who made key contributions to this report are acknowledged in appendix III. Nancy R. Kingsbury, Ph.D., Managing Director, Applied Research and Methods Gerald L. Dillingham, Ph.D. Appendix I: Technical Issues Relating to NAOMS’s Development and Data In this appendix, we present in more detail a few topics we discuss in the report. They are (1) the National Aviation Operations Monitoring Service’s (NAOMS) memory experiments; (2) NAOMS’s cognitive interviews with pilots; (3) estimating the effect of the sampling frame, filter, and operational considerations; (4) outlier detection and mitigation; and (5) allocation strategies.
Memory Experiments The recall and memory experiments for the core safety event section began with three focus groups conducted in August and September 1998, consisting of 37 pilots, and one-on-one “autobiography” interviews of 9 pilots. The autobiographies gave the team insight into pilots’ experiences and how they thought about events, enabling the team to develop potential event clusters that matched general categories suggested by the pilots’ responses. The focus groups and autobiographies helped in generating questions about different types of events that would link to the major hypothesized memory structures—flight phases, causes, and severity—and eventually a hybrid type that contained causes and flight phases. The NAOMS team and its subject matter experts then listed 96 events— some based on actual experiences, some purely hypothetical—that covered different permutations of these events. For example, they differentiated between minor, moderate, and major problems during takeoff, cruise, and other phases of flight, involving specific causes and resulting in specific events. Examples were “major, approach, weather, spatial deviation” and “minor, landing, people-problem with a conflict or in-flight encounter.” A sorting experiment used the list derived from this process. Researchers gave 14 pilots 96 randomly sorted cards, each containing an individual event, and asked them to sort these cards into stacks containing events that were similar to one another, and to label the stacks descriptively. This sorting task further confirmed potential clusters in the pilots’ memory structures. A quantitative analysis of four competing hypotheses of organizational schemes (cause, flight phase, combined cause and flight phase, and severity) showed that the scheme that contained both causes and flight phases best explained the results of the sorting experiment. The project team also assessed the order in which pilots recalled events. 
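Clustering in sorting and recall data of this kind is commonly quantified with an index such as the adjusted ratio of clustering (ARC), which scales observed same-category repetitions so that 1.0 indicates perfect clustering and 0.0 indicates chance. The sketch below is a minimal implementation of the standard ARC formula; the category labels are hypothetical stand-ins for the schemes NAOMS tested (cause, flight phase, combined, severity).

```python
# Adjusted ratio of clustering (ARC) for a recalled sequence of category
# labels: ARC = (R - E[R]) / (maxR - E[R]), where R counts adjacent
# same-category pairs, E[R] is its chance expectation, and maxR = N - K.

from collections import Counter

def arc(recalled_categories):
    """Return the ARC index for a sequence of category labels."""
    n = len(recalled_categories)
    counts = Counter(recalled_categories)
    r = sum(1 for a, b in zip(recalled_categories, recalled_categories[1:])
            if a == b)                          # observed repetitions
    expected = sum(c * c for c in counts.values()) / n - 1
    max_r = n - len(counts)
    return (r - expected) / (max_r - expected)

# A pilot who recalls all "weather" events together, then all "mechanical":
print(arc(["weather", "weather", "weather", "mech", "mech", "mech"]))  # 1.0
```

Comparing ARC values across the four hypothesized organizational schemes is what lets an analyst say which scheme best matches the structure of pilots' memories.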
The team transcribed the 96 events onto individual sheets of paper and randomly sorted them before presenting them to 9 pilots to read. The pilots then were asked to solve a set of anagrams completely unrelated to aviation—a “distraction” activity to clear their minds—before recalling specific events from the list of 96. The researchers tape-recorded what the pilots said, transcribed the responses, and analyzed the resulting data, using an index called the “adjusted ratio of clustering” for each of the four hypothesized schemes. The data again indicated that a scheme combining causes and phases of flight best represented pilots’ prevalent memory structures. For a final confirmatory test of the best organizational approach to pilots’ memory structures, the project team randomly assigned 36 pilots to 1 of 4 experimental conditions. This test was similar to the recall study, except that pilots in 3 of the experimental conditions were offered cues to prompt event recall (cause, phase, or a combination of the two). The cues that combined cause and phase appeared to optimize the number of specific events that a pilot could recall. A memorandum summarizing these results added a final caveat on question order: events were to be ordered from the weakest in memory to the strongest, in accordance with literature showing that strong memories can obscure lesser ones in the same memory cluster. The memorandum’s author recommended further research with pilots to develop a ranking of memories from weak to strong. It does not appear that this formal analysis was conducted, although it is likely that some NAOMS researchers tapped into their own flying and other aviation experience to help sort events on the final questionnaire. Cognitive Interviews For the full air carrier pilot survey, researchers interviewed four Aviation Safety Reporting System (ASRS) analysts, all of them retired pilots, plus seven active pilots recruited from personal friends of NAOMS staff.
At least six of the seven active pilots were air carrier pilots who would have been within NAOMS’s target population. The questionnaire was revised between the three separate sets of cognitive interviews, but not between participants within a set of interviews—the four ASRS analysts, the six air carrier pilots, and the seventh pilot. The revisions included changes the survey methodologist recommended to more appropriately match the memory structure that the earlier experiments had revealed, as well as changes to accommodate issues raised in the cognitive interviews. We do not have evidence indicating whether the questionnaire’s final version was cognitively tested before the survey’s implementation. Interviewers and Battelle Memorial Institute (Battelle) managers did conduct a series of interviews to test the flow of the computer-assisted telephone interview (CATI) programming before the survey was implemented. Estimating the Effect of the Sampling Frame, Filter, and Operational Considerations The decisions that decreased the likelihood of identifying the NAOMS survey respondents made it necessary for analysts to adjust their estimates. In making adjustments, analysts generally look to their analytical goals and to the likely effect of an adjustment on the substantive interpretation of an estimate compared with an alternative. Analysts also try to explore whether adjustments made to address specific problems affect adjustments made to address other issues. For example, a series of adjustments to address different features or limitations of the data may render the interpretation of estimates too complicated for practical use. Changes in external datasets used for benchmarking or in creating projections may affect the interpretability of the data over time. In the case of the NAOMS data, sampling, design, and implementation decisions complicate straightforward estimates for either system counts or rates.
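The distinction between counts and rates matters in part because a rate depends on the exposure base chosen. The sketch below, with entirely hypothetical numbers, shows how the same reported events yield different risk pictures when exposure is measured in flight hours (time-based risk) versus flight legs (takeoff-and-landing-based risk).

```python
# Minimal sketch of two exposure bases for the same reported events.
# All numbers are hypothetical.

reports = [
    # (events reported, flight hours, flight legs) per respondent
    (2, 120, 30),   # short-haul pilot: few hours, many legs
    (1, 200, 10),   # long-haul pilot: many hours, few legs
    (0, 150, 20),
]

events = sum(r[0] for r in reports)
hours = sum(r[1] for r in reports)
legs = sum(r[2] for r in reports)

print(f"rate per 1,000 flight hours: {1000 * events / hours:.2f}")
print(f"rate per 1,000 flight legs:  {1000 * events / legs:.2f}")
```

Because short-haul and long-haul flying contribute very differently to the two denominators, an analyst must decide which base matches the research question before comparing rates across groups or over time.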
For a full analysis to account for issues related to questionnaire design, sampling, and implementation, the NAOMS air carrier data would require multiple adjustments and imputation. Additional analyses would be required to determine the nature and effect of these adjustments. Before the project’s end, NAOMS researchers analyzed potential biases that they believed resulted from the filter used to identify air carrier pilots from the sampling frame. These analyses are critical for determining the appropriate uses of the data. We believe that the first priority for further analysis is to estimate the effect of the sampling frame. That is, however appropriate NAOMS’s use of the publicly available Airmen Registration Database may have been for cost and programmatic considerations, it has not yet been established whether the frame sufficiently represented air carrier pilots in general, especially in light of pilots’ ability to opt out of the registry. Potential analytic approaches to assessment include but are not limited to the following: Comparing pilots’ reported airline fleet characteristics in the survey with outside data on the size of air carrier fleets. NAOMS project staff added a question on airline fleet size to the survey expressly to be able to gauge whether the pilots in the Airmen Registration Database flew in fleets similar to the air carrier fleet distribution as a whole. While this analysis might provide compelling information about how representative the frame was, it is insufficient to demonstrate that the frame fully represented air carrier pilots of interest or air carrier pilots covered by the full frame. 
For example, it is conceivable that the distribution of pilots’ airline fleet characteristics corresponds between NAOMS data and data derived from other sources, but that the distribution of pilot characteristics within each fleet size was systematically biased toward more experienced pilots who were better able to foresee and avoid safety-related events. Comparing pilot characteristics from the publicly available frame or the sample (as a random subset of the frame) with the full database that the Federal Aviation Administration (FAA) maintained. Ideally, the comparison would have been made with the files used for survey fielding. However, Battelle has reported that it does not have enough data to make such a comparison. A NAOMS team member suggested that, as an alternative, one could compare the full FAA database with the publicly available registry on a range of characteristics both relevant and external to NAOMS’s concerns. Assuming that the nature of the opt-out registry had not changed over time, this analysis would help determine whether pilot characteristics in the public frame can be generalized to those in the full frame. However, because neither database contains information on pilots’ employment or union membership, this analysis would be insufficient to determine whether the frame used for NAOMS data collection was systematically biased to include or exclude pilots from certain airlines or unions. Thus, this approach would complement, not replace, the analysis comparing fleet characteristics discussed in the previous bullet. Conducting something like a nonresponse bias assessment. Analysts would take random samples of pilots within the filtered frame as it would be constructed from the publicly available database and from the full FAA-maintained database and would use a survey to compare pilot characteristics for the two samples.
Ideally, this would have been done during the survey field trials; however, in the absence of compelling evidence that the nature of the two databases had changed over time, the comparison could still provide insight on whether pilots in the opt-out frame were sufficiently similar to those in the full database to treat the opt-out frame as representative of the population. Depending on its design, a study such as this would allow analysts to focus on characteristics that were most relevant to NAOMS, such as career flying hours or experiences of safety events, and would also provide a means of gauging potential bias in terms of employers, union membership, and other factors that are not expressly collected in the certificate database. In any case, analysts of NAOMS data must pursue additional research to determine the existence and nature of potential biases from using the public database rather than the full database, and must determine whether and which analytic strategies will ensure that the results adequately represent safety events in the population of interest. In addition to adjustments for sampling considerations, other analyses may be useful in generating estimates and necessary adjustments. For example, to mitigate the effect of coverage bias in systemwide event count estimates, the NAOMS team advocated using Bureau of Transportation Statistics data related to operational size categories, carrier size, flight hours, and flight legs as benchmarks for weighting these data. The feasibility of using exogenous information to weight NAOMS data depends heavily on achieving a consensus on the appropriate and inappropriate uses of the survey regarding measuring risk exposure and safety events in the national airspace system (NAS).
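A benchmark-weighting adjustment of the kind described above might be sketched as follows. The strata, the benchmark totals, and the sample totals are all hypothetical (they are not BTS figures); the point is only the mechanics: each stratum's weight is the ratio of its benchmark share to its sample share, so that the weighted sample matches the external distribution.

```python
# Post-stratification-style weighting sketch: scale each operational size
# category so its share of flight hours in the sample matches an external
# benchmark. All categories and totals are hypothetical.

benchmark_hours = {"small": 2_000_000, "medium": 5_000_000, "large": 13_000_000}
sample_hours   = {"small":     4_000, "medium":     5_000, "large":      6_000}

bench_total = sum(benchmark_hours.values())
samp_total = sum(sample_hours.values())

# weight = (benchmark share) / (sample share) for each stratum
weights = {k: (benchmark_hours[k] / bench_total) / (sample_hours[k] / samp_total)
           for k in sample_hours}

for k, w in weights.items():
    print(f"{k}: weight {w:.3f}")
```

By construction, multiplying each stratum's sample share by its weight reproduces the benchmark share, which is what makes the weighted event estimates projectable to the benchmark population, assuming the benchmark itself is appropriate.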
Battelle recommended statistical modeling—in particular, generalized linear modeling—to develop “more refined rate estimates.” Generalized linear models would have allowed estimates of safety event rates, while controlling for the independent effect of factors such as season and operational aircraft size. Battelle conducted preliminary modeling with generalized linear regression models on grouped sets of data. The utility of such models is contingent on the goals of the analysis and the nature of bias or patterns of missing data; adjusting for independent factors may not be appropriate when generating rate estimates to project to the population. One Battelle statistician noted that NAOMS data lacked important explanatory factors, and that statistical models could suffer from omitted variable bias (which is unrelated to whether these data can be projected to the population of interest). This criticism did not account for the fact that NAOMS’s data were not designed to be used for an investigative process or to establish causation. Estimates from NAOMS are further complicated by the need to distinguish between risk based on time exposure and risk related to the number of takeoffs and landings. Analysts using NAOMS data might want to compare various approaches to calculating trends and exposure rates to see if different analyses result in similar substantive conclusions. They should also clarify whether and how estimates relate to the overall system—for example, if they can address only crew-based risk exposure, one might ask whether this is sufficient for characterizing the NAS. Outlier Detection and Mitigation Outliers can greatly influence the interpretation of statistical analyses. Outlier detection and cleaning, which should consider both statistical and operational concerns, require help from subject matter experts who can identify whether a given data point seems “reasonable” in context. 
Researchers may also consider whether data follow statistical distributions, such as binomial or Poisson distributions, in deciding how to identify or exclude outliers. Additionally, researchers should consider whether the unit of analysis (counts or rates) leads to identifying different cases as outliers, and the effect of various methods of outlier detection and cleaning on the substantive interpretation of the analysis. Outliers can be caused by respondents’ mishearing or misinterpreting a question or deciding not to respond truthfully. Outliers may also reflect accurate data that do not correspond with the preponderance of cases. For example, one Battelle researcher cited the “cowboy theory” of aviation safety—the notion that the vast majority of accidents are caused by a small proportion of pilots. Battelle also suggested that some pilots might report events that they had not experienced in order to deliver a message about safety. Survey research data collected by CATI methods are also subject to several types of outliers. An interviewer may mistype a response—for example, entering 3 as 33. CATI systems often use range checks to prevent such errors: if what is typed exceeds a numerical threshold, the interviewer is prompted to ask the question again or to key the data again. Few hard range checks were incorporated into the NAOMS CATI program, because NASA had instructed the contractor not to question the veracity of pilots’ responses by having interviewers re-ask questions if a response seemed unusual. The lack of range checks makes it more difficult to distinguish between outlying answers that were mistyped and those that represent accurate respondent answers. The use of free-text fields to record aircraft type may also have complicated the identification of unreasonable answers for air carrier pilots. For most questions, the contractor developed an outlier cleaning method that was thought to be both appropriate and objective.
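The two kinds of checks discussed above can be sketched in a few lines: an entry-time range check of the sort a CATI system might apply, and an after-the-fact flag for an implausible hours-to-legs ratio like the one used in the contractor's cleaning method. The thresholds here are hypothetical and would need subject-matter review, which is exactly the operational-context judgment the text calls for.

```python
# Illustrative data-quality checks; thresholds are hypothetical.

def range_check(value, low, high):
    """Return True if the entry should be re-asked or re-keyed."""
    return not (low <= value <= high)

def ratio_flag(flight_hours, flight_legs, max_ratio=20):
    """Flag a case whose hours-per-leg ratio is implausibly high."""
    if flight_legs == 0:
        return True
    return flight_hours / flight_legs > max_ratio

print(range_check(33, 0, 30))    # True: 3 mistyped as 33 trips the check
print(ratio_flag(900, 10))       # True: 90 hours per leg is implausible
print(ratio_flag(120, 30))       # False: 4 hours per leg is plausible
```

A range check catches keying errors at the source; a ratio flag catches internally inconsistent cases later. Neither, by itself, distinguishes a mistyped answer from an accurate but extreme one, which is why the text stresses subject-matter review.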
This method was used to identify and remove cases of “doubtful quality” (such as cases in which the ratio of flight hours to flight legs was unreasonable or a pilot gave “unreasonable” values on multiple questions), cases lacking information in the questionnaire’s fields on flight activity, and additional outliers flagged as “not applicable.” Although the method provided a consistent means of approaching outliers for each question, it did not account for whether reported values made sense in an operational context. Furthermore, the method was developed only midway through data collection. Had the method been developed later in data collection, more data might have been available to help clarify whether a distribution-based approach to outlier detection would have been appropriate. To more thoroughly consider statistical and operational concerns, further strategies for data cleaning and outlier detection would benefit from using the full data. Allocation Strategies The NAOMS survey has the potential to collect multiple reports of safety events witnessed by more than one crew member or involving multiple aircraft. Several NAOMS researchers believe that the effect of this issue has been overstated, particularly in light of potential analytical strategies to remedy this problem. Additionally, such concerns do not apply to analyses that determine per-crew member risk exposure (as compared with systemwide projections of event counts), if each individual crew member had an equal chance of being selected. Strategies that researchers have suggested for addressing the potential for multiple reports of the same event include proportionally allocating events by the likely number of crew members on each aircraft.
However, because the number of crew members varies by aircraft size and flight—for example, long international flights require relief crews—this strategy is complicated by the inability to determine for certain which aircraft was involved in a specific incident when a pilot flew more than one aircraft during the recall period. An alternative strategy would be to calculate events reported by pilots who flew as captains separately from those events reported by other pilots—that is, first officers, flight engineers, and relief pilots. However, this approach might also be complicated by the possibility that pilots flew in more than one capacity over the recall period and the questionnaire does not allow pilots to identify whether they were the captain when experiencing a reported safety event. Furthermore, to the extent that sampling techniques resulted in bias related to the likelihood of flying in a given capacity—that is, the so-called “left-seat bias” that resulted in disproportionate sampling of captains thought to have resulted from the sample filter—segregated analysis of different crew members would require adjustments to project event counts systemwide. The inability to link reported safety events for pilots who flew more than one aircraft type to a specific aircraft (and, by implication, to a crew size) or day requires developing allocation strategies for other aspects of the data. Before settling on the nonproportional allocation strategies that we describe in this report, Battelle explored alternatives for allocating aircraft among operational size categories and seasons in its preliminary analyses of NAOMS data. For both size category and season, Battelle first attempted to allocate reported safety events and hours flown proportionally across the number of days in a given season or according to the percentage flown per aircraft. 
Both allocations proved unsatisfactory as it became administratively infeasible for the NAOMS team to maintain either system as data collection continued. Additionally, the allocations resulted in fractional degrees of freedom, in that reports from pilots that were split across seasons or aircraft were treated as less than a full case. Similarly, treating proportionally allocated safety events entails theoretical difficulties—for example, was it legitimate when calculating rates to count one-half or one-third of a bird strike? While proportional allocation or segregated analysis of different types of crews may help to account for potential reports of the same event, these strategies may be difficult to implement because pilots could have flown more than one aircraft type or in multiple crew capacities during the recall period and because of seasonal patterns in the data. As with other weights and adjustments, researchers need to consider their analytical goals—for example, whether they are looking for per-crew member risk estimates or system counts—and should be prepared to compare the sensitivity of their estimates with different strategies and different assumptions. Analysts should also assess whether and how the necessity of multiple adjustments and allocations limits the utility of the data for characterizing trends in air carrier aviation safety. Appendix II: Comments from the National Aeronautics and Space Administration Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the persons named above, H. Brandon Haller, Assistant Director; Teresa Spisak, Assistant Director; Carl Barden; Ron LaDueLake; Maureen Luna-Long; Grant Mallie; Erica Miles; Charlotte Moore; Anna Maria Ortiz; Dae Park; Penny Pickett; Mark Ramage; Carl Ramirez; Mark Ryan; and Richard Scott made key contributions to this report. 
Bibliography Many publicly available documents on the National Aviation Operations Monitoring Service (NAOMS) are at the National Aeronautics and Space Administration’s (NASA) Web site dedicated to the NAOMS project (www.nasa.gov/news/reports/NAOMS.html, last accessed Mar. 1, 2009) or at other NASA Web sites where materials on NAOMS and the Aviation Safety and Security Program are archived and searchable. The Committee on Science and Technology of the House of Representatives maintains additional information related to its October 31, 2007, hearing on NAOMS through its Web site at http://science.house.gov/publications/ (last accessed Mar. 1, 2009). Battelle Memorial Institute. NAOMS Reference Report: Concepts, Methods, and Development Roadmap. Prepared for the National Aeronautics and Space Administration Ames Research Center. November 30, 2007. Connell, Linda. NAOMS Workshop: National Aviation Operations Monitoring Service (NAOMS). Washington, D.C.: National Aeronautics and Space Administration, March 1, 2000. Connell, Linda. Workshop on the Concept of the National Aviation Operational Monitoring Service (NAOMS). Alexandria, Va.: National Aeronautics and Space Administration, May 11, 1999. Connors, Mary, and Linda Connell. “The National Aviation Operations Monitoring Service: A Project Overview of Background, Approach, Development and Current Status.” Presentation to the NAOMS Working Group 1. Seattle, Wash.: National Aeronautics and Space Administration, December 18, 2003. Dodd, Robert S. Statement on the National Aviation Operations Monitoring Service, October 28, 2007. Statement before the Committee on Science and Technology, House of Representatives, U.S. Congress. Washington, D.C.: October 31, 2007. Griffin, Michael D., Administrator, National Aeronautics and Space Administration. Letter to National Aeronautics and Space Administration employees on NAOMS. Washington, D.C.: January 14, 2008.
Griffin, Michael D., Administrator, National Aeronautics and Space Administration. Statement on the National Aviation Operations Monitoring Service. Statement before the Committee on Science and Technology, House of Representatives, U.S. Congress. Washington, D.C.: October 31, 2007. Griffin, Michael, Administrator, and Bryan D. O’Connor, Chief, Safety and Mission Assurance, National Aeronautics and Space Administration. “Release of Aviation Safety Data.” Media briefing moderated by J. D. Harrington, National Aeronautics and Space Administration Office of Public Affairs. Washington, D.C.: December 31, 2007. Krosnick, Jon A. Statement on the National Aviation Operations Monitoring Service, October 30, 2007. Statement before the Committee on Science and Technology, House of Representatives, U.S. Congress. Washington, D.C.: October 31, 2007. McVenes, Terry, Executive Air Safety Chairman, ALPA International. Statement on the National Aviation Operations Monitoring Service. Statement before the Committee on Science and Technology, House of Representatives, U.S. Congress. Washington, D.C.: October 31, 2007. Miller, Brad, Chairman, Subcommittee on Investigations and Oversight, Committee on Science and Technology, House of Representatives, U.S. Congress. Letter to Robert Sturgell, Acting Administrator, Federal Aviation Administration. Washington, D.C.: July 23, 2008. National Aeronautics and Space Administration. National Aviation Operations Monitoring Service Application for OMB Clearance. Moffett Field, Calif.: Ames Research Center, June 12, 2000. National Aeronautics and Space Administration. “National Aviation Operational Monitoring Service (NAOMS): Development and Proof of Concept.” Presentation to the Aviation Safety Reporting System Advisory Subcommittee. Washington, D.C.: November 13, 1998. National Aeronautics and Space Administration. 
“Creation of a National Aviation Operational Monitoring Service (NAOMS): Proposed Phase One Effort.” Presentation to the Flight Safety Foundation Icarus Committee Working Group on Flight Operational Risk Assessment. Washington, D.C.: March 5, 1998. National Aeronautics and Space Administration, Office of Safety and Mission Assurance. “Final Report of the National Aeronautics and Space Administration (NASA) National Aviation Operations Monitoring Service (NAOMS) Information Release Advisory Panel (2008).” Memorandum to the Associate Administrator, National Aeronautics and Space Administration. Washington, D.C.: May 12, 2008. National Aeronautics and Space Administration, Office of Inspector General, Assistant General for Auditing. “Final Memorandum on the Review of the National Aviation Operations Monitoring Service (Report No. IG-08-014; Assignment No. S-08-004-00),” to the Associate Administrator for Aeronautics Research, National Aeronautics and Space Administration. Washington, D.C.: March 31, 2008. Statler, Irving C. Aviation Safety and Security Program (AvSSP): 2.1 Aviation System Monitoring and Modeling (ASMM) Sub-Project Plan, Version 4.0. Washington, D.C.: National Aeronautics and Space Administration, February 2004. Statler, Irving C., ed. The Aviation System Monitoring and Modeling (ASMM) Project: A Documentation of Its History and Accomplishments 1999–2005. Washington, D.C.: National Aeronautics and Space Administration, June 2007. White House Commission on Aviation Safety and Security. Final Report to President Clinton. Washington, D.C.: The White House, February 12, 1997.

The National Aviation Operations Monitoring Service (NAOMS), begun by the National Aeronautics and Space Administration (NASA) in 1997, aimed to develop a methodology that could be used to survey a wide range of aviation personnel to monitor aviation safety.
NASA expected NAOMS surveys to be permanently implemented and to complement existing federal and industry air safety databases by generating ongoing data to track event rates into the future. The project never met these goals and was curtailed in January 2007. GAO was asked to answer these questions: (1) What were the nature and history of NASA's NAOMS project? (2) Was the survey planned, designed, and implemented in accordance with generally accepted survey principles? (3) What steps would make a new survey similar to NAOMS better and more useful? To complete this work, GAO reviewed and analyzed material related to the NAOMS project and interviewed officials from NASA, the Federal Aviation Administration, and the National Transportation Safety Board. GAO also compared the development of the NAOMS survey with guidelines issued from the Office of Management and Budget, and asked external experts to review and assess the survey's design and implementation. NAOMS was intended to demonstrate the feasibility of using surveys to identify accident precursors and potential safety issues. The project was conceived and designed to provide broad, long-term measures on trends and to measure the effects of new technologies and aviation safety policies. Researchers planned to interview a range of aviation personnel to collect data in order to generate statistically reliable estimates of risks and trends. After planning and development, a field trial, and eventual implementation of the air carrier pilot survey and the development of a smaller survey of general aviation pilots, the project effectively ended when NASA transmitted a Web-based version of the air carrier pilot survey to the Air Line Pilots Association. NAOMS's air carrier pilot survey was planned and designed in accordance with generally accepted survey principles, including its research and development, consultation with stakeholders, memory experiments to enhance the questionnaire, and a large-scale field trial. 
The survey's sample design and selection also met generally accepted research principles, but there were some limitations, and the survey data may not adequately represent the target population. Sample frame and design decisions to maintain program independence and pilot privacy complicate analysis of NAOMS data. Certain implementation decisions, including extended methodological experiments and data entry issues, also complicate analytical strategies. Also, working groups of aviation stakeholders were convened as part of NAOMS to assess the validity and utility of the data, but these groups never had access to the raw data and were disbanded before achieving consensus. To date, NAOMS data have not been fully analyzed or benchmarked against other data sources. While NAOMS's limitations are not insurmountable, a new survey would require more coherent planning and sampling methods, a cost-benefit analysis, closer collaboration with potential customers, a detailed analysis plan, a reexamination of the sampling strategy, and a detailed project management plan to accommodate concerns inherent in any survey endeavor. As a research and development project, NAOMS was a successful proof of concept with many strong methodological features, but the air carrier pilot survey could not be reinstated without revisions to address some of its methodological limitations. The designers of a new survey would want to supplement NAOMS where it was self-limiting. Alternatively, a newly constituted research team might lead operational, survey, and statistical experts in extensively analyzing existing data to illuminate future projects. In reviewing a draft of this report, NASA reiterated that NAOMS was a research and development project and provided technical comments, which GAO incorporated as appropriate. NASA also expressed concern about protecting NAOMS respondents' confidentiality, a concern GAO shares. 
However, GAO noted that other agencies have developed mechanisms for releasing sensitive data to appropriate researchers. The Department of Transportation had no comments. |
Background

The United States has a network of about 300,000 miles of gas transmission pipelines that are owned and operated by approximately 900 operators. These pipelines, which are primarily interstate, typically move gas products over long distances from sources to communities, and tend to operate at the highest pressures and have the largest diameters of any type of pipeline. Gas transmission pipelines are critical because they transport nearly all of the natural gas used in the United States, which fuels about a quarter of the nation’s energy needs. Pipelines do not experience many of the safety threats faced by other forms of freight transportation because they are mostly underground. However, they are subject to problems that can occur over time (such as leaks and ruptures resulting from corrosion) or are independent of time (such as damage from excavation, land movement, or incorrect operation). PHMSA administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids (e.g., petroleum or anhydrous ammonia) by pipeline, including developing safety requirements that all pipeline operators regulated by PHMSA must meet. In fiscal year 2012, the agency’s total budget was $201 million, about half of which is for pipeline safety activities. PHMSA’s Office of Pipeline Safety employs over 200 staff, with about 135 of those staff involved in inspections and enforcement. In addition, over 300 state inspectors help oversee pipelines and ensure safety. Pipeline operators are subject to PHMSA’s minimum safety standards for the design, construction, testing, inspection, operation, and maintenance of gas transmission pipelines. However, this approach does not systematically account for differences in the kinds of threats and the degrees of risk that individual pipelines face.
For example, pipelines located in the Pacific Northwest are more susceptible to damage from geologic hazards, such as land movement, than pipelines in some other areas of the country. Federal efforts to incorporate risk-based concepts into pipeline management began in earnest in the mid-1990s. For example, the Accountable Pipeline Safety and Partnership Act of 1996 required the Department of Transportation to establish risk management demonstration projects. The purpose of this effort was “to demonstrate, through the voluntary participation by owners and operators of gas pipeline facilities and hazardous liquid facilities, the application of risk management; and to evaluate the safety and cost-effectiveness of the program.” These projects helped PHMSA establish a more risk-based approach to safety: the integrity management program. Integrity management helps ensure safety by, among other things, using information to identify and assess risks and prioritizing risks so that resources may be allocated to address higher risks first. The integrity management program requires operators to perform a number of activities, such as identifying high consequence areas and pipelines within those areas, as well as identifying the threats facing those pipelines. PHMSA first implemented integrity management requirements for hazardous liquid pipeline operators with 500 or more miles of pipelines in December 2000, followed by hazardous liquid pipeline operators with less than 500 miles in January 2002. The Pipeline Safety Improvement Act of 2002 extended the integrity management program to gas transmission pipelines, which include about 20,000 miles of pipeline segments located in high consequence areas. While subject to PHMSA’s integrity management program, operators must still meet the minimum safety standards noted above.
As part of the integrity management program, operators are required to assess the integrity of their pipelines within high consequence areas on a regular basis using approved methods. Specifically, gas transmission pipeline operators were required to complete a baseline assessment on pipeline segments within high consequence areas by December 17, 2012. According to the 2002 act, operators are then required to complete reassessments of these pipelines at least every 7 years. Gas transmission pipeline operators completed most baseline assessments by December 17, 2012, and reassessments are currently under way. From 2004 through December 2011 (the latest data available), baseline assessments were conducted on over 23,450 miles of gas transmission pipeline in high consequence areas. Over 4,470 miles of gas transmission pipeline in high consequence areas—or about 20 percent of the pipeline miles that had a completed baseline assessment between 2004 and 2011—were reported as reassessed between 2008 and 2011 (see fig. 1). Among other things, PHMSA’s integrity management regulations required operators to (1) prioritize their baseline assessments to assess riskier pipelines first and (2) complete baseline assessments of these riskier pipelines by December 2007, and all pipelines within high consequence areas by December 2012. As a result, a small spike in the mileage assessed occurred in 2007. Under PHMSA’s regulations, gas transmission pipeline operators may use any of three primary approaches to conduct assessments: In-line inspection: In-line inspection involves running a specialized tool—often known as a smart pig—through the pipeline to detect and record anomalies, such as metal loss and damage (see fig. 2). In-line inspection allows operators to determine the nature of any problems without either shutting down the pipeline for extended periods or potentially damaging the pipeline. 
In-line inspection devices can be run only from specific launch and retrieval points, which may extend beyond high consequence areas. Operators using in-line inspection will often gather information along the entire distance between launching and retrieval locations to gain additional safety information. Based on PHMSA’s data, the majority of pipeline miles assessed in 2011 (88 percent) were done using in-line inspection. Direct assessment: Direct assessment is an aboveground assessment method used to identify problem areas on a pipeline. The process includes gathering data on potential risks facing the pipeline, analyzing those data to identify potential problem locations, and then excavating and directly examining those locations. PHMSA regulations require that two or more aboveground detection instruments, such as a close interval survey, be used to constitute a direct assessment. Hydrostatic testing: Hydrostatic testing entails sealing off a portion of the pipeline, removing the gas product and replacing it with water, and increasing the pressure of the water above the rated strength of the pipeline to test its integrity. If the pipeline leaks or ruptures, the pipeline is excavated to determine the cause of the failure. Operators must shut down pipelines to perform hydrostatic testing. Also, this assessment method can weaken the pipeline due to the high pressures involved, making it more susceptible to failure later. Finally, operators must be able to dispose of large quantities of waste water in an environmentally responsible manner. According to the operators we spoke with, the costs associated with performing each of these assessment methods vary greatly. For example, operators told us that the estimated average cost for conducting a direct assessment ranges from $5,000 per mile to $500,000 per mile.
The costs vary due to a number of factors, such as the amount of pipeline mileage to be assessed and the number of digs that must be performed after completing an assessment to confirm the findings. PHMSA’s regulations promulgated pursuant to the Pipeline Safety Improvement Act of 2002 require gas transmission pipeline operators to reassess their pipelines for all safety risks—such as corrosion, excavation, land movement, or incorrect operation—at regular intervals based on industry consensus standards. But the regulations limit the 7-year reassessment requirement in the 2002 act to corrosion damage only because corrosion is the most frequent cause of failures that can occur over time. The industry consensus standards adopted in PHMSA’s regulations require that gas transmission pipeline operators reassess their pipelines for all safety risks at least every 10, 15, or 20 years, depending primarily on the condition and operating pressure of the pipelines, with pressure measured as a percentage of specified minimum yield strength. If an operator elects to establish a reassessment interval for all safety risks based on the industry consensus standards, it must—in order to comply with the 2002 act—perform what is called a “confirmatory direct assessment” by at least the seventh year to assess corrosion damage, and then conduct the reassessment for all safety risks at the interval the operator established. Alternatively, an operator can elect to perform a reassessment for all safety risks (including corrosion damage) at least every 7 years in order to comply with both the 2002 act and PHMSA’s regulations. Figure 3 provides some examples in which an operator can meet its reassessment requirements, either through performing (1) a confirmatory direct assessment at year 7 and a reassessment for all safety risks at a later year that comports with the industry consensus standards, or (2) a reassessment for all safety risks every 7 years.
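The two compliant paths described above can be sketched as simple schedule generators. This is an illustrative sketch only: the function names, the 20-year horizon, and the example 15-year risk-based interval are our assumptions, not regulatory text.

```python
# Illustrative sketch of the two compliant reassessment paths described
# above (per the 2002 act and PHMSA's regulations). Function names and
# the example 15-year risk-based interval are assumptions.

def schedule_with_confirmatory_da(risk_based_interval, horizon):
    """Path 1: a confirmatory direct assessment (corrosion only) at least
    every 7 years, plus a full reassessment for all safety risks at the
    operator-established interval (10, 15, or 20 years under the industry
    consensus standards)."""
    events = []
    next_cda, next_full = 7, risk_based_interval
    for year in range(1, horizon + 1):
        if year == next_full:
            events.append((year, "full reassessment (all safety risks)"))
            next_full = year + risk_based_interval
            next_cda = year + 7  # a full reassessment also resets the 7-year clock
        elif year == next_cda:
            events.append((year, "confirmatory direct assessment (corrosion only)"))
            next_cda = year + 7
    return events

def schedule_full_every_7(horizon):
    """Path 2: a full reassessment for all safety risks every 7 years
    satisfies both the 2002 act and PHMSA's regulations."""
    return [(year, "full reassessment (all safety risks)")
            for year in range(7, horizon + 1, 7)]

# Example: an operator that established a 15-year interval under the
# industry consensus standards.
for event in schedule_with_confirmatory_da(15, horizon=20):
    print(event)
```

In this example the operator performs confirmatory direct assessments at years 7 and 14 and the full reassessment at year 15; an operator on the second path would instead perform full reassessments at years 7 and 14.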
The 7-year reassessment requirement in the Pipeline Safety Improvement Act of 2002 as well as the reassessment intervals noted in PHMSA’s regulations and the industry consensus standards are maximum reassessment intervals: they represent the maximum number of years between reassessments. If pipeline conditions and risks dictate more frequent reassessments, then pipeline operators must do so to comply with PHMSA’s regulations. In addition, between reassessments, operators must—regardless of whether their pipeline mileage is located in a high consequence area—patrol their pipelines, survey for leakage, maintain valves, ensure that corrosion-preventing cathodic protection is working properly, and take measures to prevent excavation damage. In general, PHMSA has full responsibility for inspecting interstate pipelines and enforcing regulations pertaining to them, although some states are designated as “interstate agents” to assist PHMSA. PHMSA also has arrangements with the 48 contiguous states, the District of Columbia, and Puerto Rico to assist with overseeing intrastate pipelines. State pipeline safety offices are allowed to issue regulations supplementing or extending federal regulations for intrastate pipelines, but these state regulations must be at least as stringent as the minimum federal regulations.

Data Show Critical Pipeline Repairs Are Being Made, but Cannot Be Used to Determine an Appropriate Maximum Reassessment Interval for All Pipelines Nationwide

Assessments Have Resulted in Critical Pipeline Repairs

PHMSA’s baseline assessment and reassessment data from 2004 to 2011 show that pipeline operators have identified and are making critical repairs in high consequence areas, specifically for conditions requiring repairs immediately or within one year. For immediate conditions, operators must make a repair as soon as possible and reduce pipeline operating pressure or shut down the pipeline until the repair is completed.
A dent in a pipeline wall that also appears to have cracks would be considered a condition in need of immediate repair. For scheduled conditions, operators must make repairs within one year or observe the condition during subsequent assessments for any changes that would require repair. A dent with a depth of more than two percent of the pipeline’s diameter located near certain sections of the pipeline wall would be considered a scheduled condition that must be repaired within one year. Pipeline operators report annually to PHMSA the number of immediate and scheduled repairs made on their pipelines that were identified through assessments. Miles assessed and repairs are reported in the year they are conducted. PHMSA data show that from 2004 to 2009, pipeline operators reported making 1,080 immediate repairs and 2,261 scheduled repairs (see fig. 4). The data also show that during 2010 and 2011, pipeline operators reported 387 immediate conditions repaired and 2,246 scheduled conditions repaired. A PHMSA official told us that a 2010 change in reporting requirements resulted in the increase in reported conditions repaired beginning in 2010. During this period—2004 through 2011—PHMSA also collected data on the frequency of incidents, failures, and leaks in high consequence areas. The average number of incidents in high consequence areas—the most serious of the three events because they can result in fatalities, injuries, or significant property damage—was 8 per year. When incidents in high consequence areas occur, they can have a significant impact in terms of lives lost, injuries, and property damage, as seen with the incident, noted earlier, in San Bruno, California.
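The repair and reassessment figures cited in this section can be tallied directly. The variable names below are our own; the numbers are the PHMSA data quoted above.

```python
# Totals from the PHMSA data quoted in this section (illustrative
# variable names; figures are as reported above).
immediate_2004_2009, scheduled_2004_2009 = 1080, 2261
immediate_2010_2011, scheduled_2010_2011 = 387, 2246

total_immediate = immediate_2004_2009 + immediate_2010_2011
total_scheduled = scheduled_2004_2009 + scheduled_2010_2011

# Share of baseline-assessed mileage reported as reassessed (2008-2011):
# over 4,470 of the roughly 23,450 baseline-assessed miles.
reassessed_share = 4470 / 23450

print(total_immediate)             # 1467 immediate repairs, 2004-2011
print(total_scheduled)             # 4507 scheduled repairs, 2004-2011
print(round(reassessed_share, 2))  # 0.19, i.e. about 20 percent
```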
PHMSA Data Alone Cannot Be Used to Determine a Maximum Reassessment Interval

Individual pipeline operators can use data collected through baseline assessments and reassessments to determine the appropriate reassessment interval for pipeline segments on their systems, but using these data once they have been aggregated to determine a national maximum reassessment interval is not feasible. Per PHMSA’s regulations, operators use information on risks specific to their pipeline and changes in anomalies previously identified to determine the appropriate reassessment interval for their pipeline segments in high consequence areas. For example, an operator told us that the company calculates reassessment intervals for pipeline segments in high consequence areas using baseline assessment and reassessment data (when available) to determine the remaining strength of an anomaly and a corrosion growth rate. Based on these calculations, corrosion should not grow to unsafe levels before the next reassessment. Pipeline operators report data to PHMSA that include the miles assessed in high consequence areas, conditions repaired within high consequence areas, and the tools used to conduct assessments. These data are reported as a summary of all pipeline miles for that company. Operators with both interstate and intrastate pipelines as well as those transporting different gas products are required to report on each system separately. As a result, the data collected by PHMSA are highly aggregated and do not allow comparison of a single pipeline segment over time, or the determination of a national maximum reassessment interval.
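The operator calculation described above, which combines an anomaly's remaining strength with a corrosion growth rate, can be sketched roughly as follows. All numbers and the half-interval safety factor are illustrative assumptions, not PHMSA or ASME values.

```python
def corrosion_growth_rate(depth_baseline, depth_reassessment, years_between):
    """Estimated anomaly growth (e.g., in mils per year) from two
    successive assessments of the same pipeline segment."""
    return (depth_reassessment - depth_baseline) / years_between

def reassessment_interval_years(current_depth, critical_depth, growth_rate,
                                safety_factor=0.5):
    """Years until the anomaly is projected to reach an unsafe depth,
    scaled down by a safety factor so that corrosion should not grow to
    unsafe levels before the next reassessment."""
    years_to_critical = (critical_depth - current_depth) / growth_rate
    return years_to_critical * safety_factor

# Illustrative numbers only: an anomaly grew from 40 to 52 mils over the
# 6 years between assessments, in a wall where 200 mils is treated as
# the critical depth.
rate = corrosion_growth_rate(40, 52, 6)                # 2.0 mils/year
interval = reassessment_interval_years(52, 200, rate)  # 37.0 years
print(rate, interval)
```

Because this calculation needs the history and characteristics of individual anomalies on a specific segment, it cannot be run on the aggregated, company-level summaries PHMSA collects.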
We were asked to compare the number of anomalies noted in PHMSA’s baseline assessment data with its reassessment data as part of the mandate for this report in the Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011. As described below, the assessment repair data collected by PHMSA lack the detail and completeness to make this comparison: PHMSA’s data do not separate conditions repaired that were identified during baseline assessments from those identified during reassessments. Beginning with the 2010 annual report, PHMSA has used a data collection form that does not require pipeline operators to differentiate conditions repaired based on whether the condition was identified during a baseline assessment or a reassessment. A PHMSA official told us that given the quantity of data pipeline operators are already required to provide to PHMSA, asking them to report conditions as either identified during baseline assessments or reassessments would significantly increase the reporting burden. Also, as pipeline operators begin their second round of reassessments, the data will not identify repairs as coming from the first, second, or later rounds of reassessments. The lack of detail in PHMSA’s data will make it impossible to compare—among all gas transmission pipeline operators or for a specific pipeline system—the number of repairs identified during baseline assessments to those identified during reassessments. Therefore, looking solely at PHMSA’s data, an observer could not tell whether conditions repaired have increased or decreased as operators conduct initial and subsequent reassessments. Even if repair data were separated based on whether the condition repaired was identified during a baseline assessment or reassessment, the first round of reassessments in high consequence areas may not be complete until the end of 2019.
A comparison of the number of critical repairs identified during baseline assessments and reassessments would not be possible until these reassessments are complete. Further, even if the repair data from baseline assessments and reassessments could be compared, these data are not sufficient to determine an appropriate maximum reassessment interval—such as the 7-year reassessment interval established in the Pipeline Safety Improvement Act of 2002—for all operators for several reasons, including those listed below: A decrease or increase in the number of conditions repaired would not necessarily indicate the appropriateness of the 7-year reassessment requirement. For example, according to the American Society of Mechanical Engineers (ASME), a decline in the number of repairs per mile would indicate the effectiveness of a pipeline operator’s integrity management plan and not that of the reassessment interval itself. As mentioned above, the data reported to PHMSA are highly aggregated. As a result, it is not possible to perform the type of analysis at the national level that pipeline operators use to determine reassessment intervals for an individual pipeline segment. For example, calculating a corrosion growth rate using assessment data is one way that pipeline operators can determine the appropriate reassessment interval for a pipeline segment. This calculation requires information about the history, condition, environment, and the characteristics of individual anomalies found on that individual pipeline segment. PHMSA’s assessment data do not have that level of necessary detail, so they cannot be used to determine an appropriate maximum reassessment interval for the entire gas transmission pipeline system in the United States. Instead, PHMSA’s data can provide descriptive information about how much pipeline mileage operators are assessing and how many repairs are being made. 
While the repair data collected by PHMSA are not sufficient to determine an appropriate maximum reassessment interval for all pipelines in the United States, an industry standard-setting organization has developed maximum reassessment intervals of 10, 15, or 20 years that are widely accepted as balanced and transparent. As we reported in 2006, ASME developed an industry consensus standard—subsequently approved by the American National Standards Institute—on maximum reassessment intervals for all safety risks (including corrosion damage) that PHMSA incorporated into its regulations. ASME based this standard on, among other things, (1) the experience and expertise of engineers, consultants, operators, local distribution companies, and pipeline manufacturers; (2) more than 20 technical studies conducted by the Gas Technology Institute, ranging from pipeline design factors to natural gas pipeline risk management; and (3) other industry consensus standards, including the National Association of Corrosion Engineers standards, on topics such as corrosion. In addition, it is federal policy to encourage the use of industry consensus standards: Congress expressed a preference for technical standards developed by consensus bodies over agency-unique standards in the National Technology Transfer and Advancement Act of 1995. The Office of Management and Budget’s Circular A-119 provides guidance to federal agencies on the use of voluntary consensus standards, including the attributes that define such standards.

The 7-Year Reassessment Requirement Provides a Safeguard, but Is Not Fully Consistent with Risk-Based Practices

Maximum Reassessment Intervals Provide a Safeguard

Maximum reassessment intervals—such as the 7-year reassessment requirement—provide a safeguard and allow regulators and operators to identify and address problems on a continual basis.
The 7-year reassessment requirement as well as the reassessment intervals noted in the industry consensus standards represent the maximum number of years between reassessments. If pipeline conditions dictate more frequent reassessments, then pipeline operators must perform reassessments more frequently in order to comply with PHMSA’s integrity management regulations. Both the 7-year reassessment requirement and the maximum reassessment intervals noted in the industry consensus standards are likely to identify problems before they result in leaks or ruptures. For example, according to the industry consensus standards, it typically takes longer than the 10, 15, or 20 years specified in the standards for corrosion problems to result in a leak or rupture. Because the 7-year reassessment requirement is a more frequent interval than those in the industry consensus standards, it provides greater assurance that operators are regularly monitoring their pipelines to identify and address threats before they result in a leak or rupture. Regulators and operators we spoke with indicated that a maximum reassessment interval should exist and saw benefits to conducting periodic assessments of gas transmission pipelines. Regulators—both at the federal and state levels—told us that overseeing a maximum reassessment interval is rather straightforward. For example, an inspector can use operators’ records to verify relatively easily whether the operator completed an assessment on time. Operators we spoke with also support maximum reassessment intervals, telling us that in performing baseline assessments and reassessments they have obtained valuable knowledge of the condition of their pipeline systems, and that a maximum reassessment interval can provide a safeguard to compel poor performing operators to improve the integrity of their pipeline systems. 
The 7-Year Reassessment Requirement Is Not Fully Consistent with Risk-Based Practices

Risk-based management has several key characteristics that help to ensure safety—it (1) uses information to identify and assess risks; (2) prioritizes risks so that resources may be allocated to address higher risks first; (3) promotes the use of regulations, policies, and procedures to provide consistency in decision making; and (4) monitors performance. The gas integrity management program is based on risk-based management practices. For example, it requires operators to integrate information from various sources, such as assessments, to identify the risks specific to their pipelines. To prioritize risks for resource allocation, the gas integrity management program focuses on high consequence areas and required operators to assess the riskiest segments of their pipelines first. Our past work has shown the benefits of risk-based management, including the integrity management program. For instance, we reported in 2006 that the integrity management program benefits public safety by supplementing existing safety requirements with risk-based management principles that focus on safety risks in high consequence areas. However, the 7-year reassessment requirement—which was established by the Pipeline Safety Improvement Act of 2002 and is just one component of the gas integrity management program—is not fully consistent with risk-based management practices. For example, the 7-year reassessment requirement does not permit operators to apply the information that they have collected from their assessments: even though operators must determine an appropriate reassessment interval based on the threats facing their pipelines in high consequence areas, they must reassess those pipelines at least for corrosion threats every 7 years regardless of the risks identified.
While operators can currently use data—such as pipeline conditions and other information learned from previous assessments—to determine that more frequent assessments than every 7 years are required (e.g., every 5 years), operators cannot bypass the 7-year reassessment requirement if they have data showing that reassessment intervals longer than 7 years are justified (e.g., every 10 years). Rather, operators that choose to establish reassessment intervals beyond 7 years must still conduct some type of reassessment at least every 7 years in order to comply with the 2002 act. PHMSA officials told us that the 7-year reassessment requirement does not take into account risk, and while it may be an appropriate interval length for some pipeline systems, it is too short or too long for other systems. In contrast to these regulations, there are no statutory requirements that limit risk-based reassessment intervals for operators of a different type of pipeline, that of hazardous liquids. Under PHMSA’s regulations for the hazardous liquid integrity management program, operators must perform assessments of their pipelines within high consequence areas following a maximum reassessment interval. This reassessment interval—which, unlike the gas transmission pipeline reassessment interval, was established by a PHMSA rulemaking using a data analysis—can be extended if an operator can provide an engineering basis to do so; that is, operators have the ability to use the information learned from prior assessments and other efforts to identify and assess the risks facing their pipelines and determine that a longer reassessment interval is justified. Because the 7-year reassessment requirement for gas transmission pipelines was established by statute and not in a PHMSA rulemaking, PHMSA does not have the authority to modify this requirement without congressional action.
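The contrast drawn above between the gas and hazardous liquid regimes can be expressed as a small decision sketch. The function, its arguments, and the example numbers are illustrative assumptions rather than regulatory text.

```python
def effective_cap_years(pipeline_type, operator_interval, regulatory_cap=None,
                        engineering_basis=False):
    """Illustrative sketch of the interval regimes described above.

    Gas transmission: the 7-year clock is statutory, so even an operator
    with a longer risk-based interval owes some assessment (a confirmatory
    direct assessment qualifies) at least every 7 years.
    Hazardous liquid: the cap comes from a PHMSA rulemaking and can be
    extended when the operator provides an engineering basis.
    """
    if pipeline_type == "gas transmission":
        return min(operator_interval, 7)
    if pipeline_type == "hazardous liquid":
        if engineering_basis:
            return operator_interval
        return min(operator_interval, regulatory_cap)
    raise ValueError("unknown pipeline type")

# A gas operator with a 15-year risk-based interval still faces the
# statutory 7-year clock; a hazardous liquid operator with an engineering
# basis may keep its longer interval (example cap of 5 years is assumed).
print(effective_cap_years("gas transmission", 15))                  # 7
print(effective_cap_years("hazardous liquid", 8, regulatory_cap=5,
                          engineering_basis=True))                  # 8
```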
In 2006, we reported that most of the operators we contacted preferred that reassessment intervals be based on the conditions and characteristics of the pipeline segment. In general, the industry associations we spoke with for this report also preferred risk-based reassessment intervals instead of the current 7-year reassessment requirement. In addition, 21 of the 27 operators we spoke with for this report indicated that they prefer a risk-based reassessment interval requirement. According to some of these operators, complying with the current 7-year reassessment requirement without the ability to use risk-based reassessment intervals beyond 7 years may not be an efficient use of their resources. For example, some operators told us that if they could reassess their pipeline segments less frequently than every 7 years without negatively impacting safety, they could potentially devote more resources to other safety tasks. PHMSA’s gas transmission integrity management regulations include provisions that could make reassessments more risk-based, but these efforts have not seen widespread use, primarily due to the 2002 act’s 7-year reassessment requirement. PHMSA’s regulations permit operators to use confirmatory direct assessment to comply with the 2002 act. Operators that choose to use confirmatory direct assessment are those that have established reassessment intervals greater than 7 years but no more than those noted in the industry consensus standards. These operators perform a confirmatory direct assessment at year 7 to look for corrosion threats only, followed by a reassessment at the interval the operator established for all threats facing the pipeline. According to PHMSA officials, confirmatory direct assessment was included in the regulations to better align with risk management principles. However, of the 27 operators with whom we spoke, only 5 told us that they completed or planned to conduct confirmatory direct assessment.
According to some of the operators we spoke with, confirmatory direct assessment—which looks for corrosion damage only—can be just as costly and time-consuming as performing a reassessment for all safety risks and, therefore, the operators chose to perform a reassessment at the 7-year mark instead. Most of the regulators we spoke with—both at the federal and state levels—also noted that operators are generally not using confirmatory direct assessment. PHMSA’s regulations allow operators with ‘exceptional performance’ to deviate from some of the requirements of the integrity management regulations. These operators must have completed at least two assessments (i.e., a baseline assessment and a reassessment) and have remediated all anomalies found in the most recent assessment. An operator satisfying all of the exceptional performance criteria is generally permitted to deviate from most integrity management regulations. However, in order to comply with the 2002 act and PHMSA’s regulations, the operator must still perform a confirmatory direct assessment at least every 7 years. None of the operators we spoke with are pursuing the exceptional performance option, with most indicating that because they must complete a confirmatory direct assessment to identify corrosion problems every 7 years, this option holds little, if any, benefit. 
Implementing Risk-Based Reassessment Intervals beyond 7 Years Could Exacerbate Current Challenges and Would Benefit from More Information on Resource Requirements 
Changes to the 7-Year Reassessment Requirement Require Congressional Action and Could Exacerbate Current Issues with Reviewing and Justifying Reassessment Intervals 
Although PHMSA generally agreed in the past that risk-based standards would allow operators to better tailor reassessments to pipeline threats, PHMSA cannot change the current 7-year reassessment requirement unless congressional action occurs because the requirement is in statute. 
The 7-year reassessment requirement appears in PHMSA’s regulations pursuant to the Pipeline Safety Improvement Act of 2002. In 2006, we recommended that this statutory requirement be amended to permit operators to reassess at intervals based on risk factors, technical data, and engineering analyses. In addition to requiring a statutory change, PHMSA officials noted that a number of current challenges could potentially be exacerbated by implementing risk-based reassessment intervals. For instance, inspecting and evaluating risk-based reassessment intervals beyond 7 years could create additional workload, staffing, and expertise challenges for regulators, such as PHMSA and state pipeline safety offices. For example, PHMSA officials told us that allowing all operators to participate in risk-based reassessment intervals beyond 7 years could add significantly to the agency’s workload in terms of inspecting operators’ integrity management programs, including review of their calculated reassessment intervals. For instance, these evaluations could require inspectors to spend more time and resources than they currently do, which could affect the number of inspections conducted overall. Moreover, PHMSA has already experienced some workload problems with inspections, which could be worsened by allowing operators to use risk-based reassessment intervals beyond 7 years. For example, in 2012, the Department of Transportation’s Office of the Inspector General reported that PHMSA had recently accumulated a backlog of integrity management inspections for hazardous liquid operators, caused in part by the agency redirecting resources to fulfill other inspection requirements. In response, beginning in 2013, the agency will implement a new approach to inspections, called integrated inspections, in which an inspector may use data and information about a specific operator and pipeline system to custom-build a list of regulatory requirements to evaluate during an inspection. 
For these integrated inspections, integrity management requirements would be one of several regulatory requirements inspectors could choose to focus on. However, the Inspector General’s report noted that PHMSA’s proposed schedule to implement a number of enhancements to its inspection program is ambitious and challenging, and until PHMSA successfully completes the transition, the agency may not be able to ensure sufficient and consistent oversight of all integrity management programs. Officials from state pipeline safety offices we met with noted potential concerns with staffing and training to effectively evaluate risk-based reassessment intervals. For example, some state pipeline safety officials suggested that they would need dedicated staff to evaluate operators’ results and analyses, while other state officials cited the current difficulty with enrolling in PHMSA training courses due to long waiting lists. Also, some operators we interviewed expressed concern that inspectors from state pipeline safety offices may lack sufficient training to review these analyses. For example, although state officials currently inspect operators’ integrity management programs, some operators told us that inspectors do not typically challenge their reassessment interval calculations. Regulating risk-based reassessment intervals beyond 7 years could be particularly challenging for PHMSA and state pipeline safety offices because there is a lack of guidance for operators to perform risk modeling. As a result, operators could use a variety of methodologies to calculate appropriate reassessment intervals for pipeline systems and even individual segments. The level of detail and review required by regulators overseeing these operators would vary depending on the sophistication of the operators’ analyses. 
While current regulations require operators to use engineering and risk analyses to determine the frequency at which reassessments must be conducted, operators could face additional challenges in justifying and calculating risk-based reassessment intervals beyond 7 years. Some operators told us that risk-based reassessment intervals beyond 7 years would likely be more labor-intensive and data-driven than the current regulatory environment. For example, operators would likely have to provide PHMSA more analyses to justify their calculated reassessment intervals than they do currently. Based on our interviews, operators appear to vary in the extent to which they currently calculate reassessment intervals and use the results of the data analyses. For example, some operators we spoke with told us that they perform a less rigorous determination of their reassessment intervals and default to the 7-year interval if they determine that there are no problems with their pipelines. Also, one operator told us that unless evidence of corrosion is found on the pipeline segment, the operator does not perform a comprehensive calculation of the reassessment interval. Some operators we spoke with calculated reassessment intervals resulting in 7 years, but still chose to reassess their pipelines more frequently than their calculations indicated due to identified conditions such as pipeline coating issues. While such a decision prioritizes the safety of the pipeline, it also illustrates some of the potential subjectivity involved in reassessment interval calculations: the operators’ analyses may have accounted for such conditions but did not ultimately produce a shorter interval. For example, some PHMSA officials told us that oftentimes there is more than one correct conclusion based on pipeline data and some operators will choose a more conservative approach than others and vice versa. 
Further, some technical experts told us that risk-based reassessment intervals would require a higher level of skill and analysis beyond some operators’ current capabilities, thus forcing the operator to seek the assistance of contractors. As a result, the challenges operators currently have with justifying and calculating reassessment intervals, partly because of a lack of guidance from PHMSA, could be further affected if operators are to use these types of analyses to justify risk-based reassessment intervals beyond 7 years. Without guidance for operators to use in determining and calculating reassessment intervals, operators may use a range of approaches for determining the relevant risks to their systems, which could then create potential challenges for regulators with reviewing risk-based reassessment intervals beyond 7 years and ensuring oversight of these pipelines. 
PHMSA Has Previously Considered an Approach to Implementing Risk-Based Reassessment Intervals beyond 7 Years, but More Information on Resource Requirements Is Needed 
In 2008, PHMSA provided a detailed statement at the request of Congress to explain how the agency would establish and enforce risk-based criteria for extending the 7-year reassessment interval. According to PHMSA’s proposal, it would retain the current 7-year reassessment requirement, but allow for the use of risk-based reassessment intervals on a case-by-case basis where justified. Congress did not take any action to address this proposal as a result of the 2008 report, and PHMSA has neither reviewed nor updated its 2008 report to determine whether that report’s conclusions remain valid. However, PHMSA’s proposal outlined a number of steps to establish a process permitting the use of risk-based reassessment intervals beyond 7 years. First, PHMSA would establish via rulemaking risk-based criteria that operators must meet to warrant extending their reassessment intervals beyond 7 years. 
Second, interested operators would have to notify PHMSA (or a state pipeline safety office for an intrastate transmission pipeline) one year in advance of the scheduled reassessment and submit information demonstrating their conformance with the criteria before using risk-based reassessment intervals beyond 7 years. As shown in table 1, one potential criterion could require some operators to conduct assessments using in-line inspection or hydrostatic testing (see appendix II for a longer list of draft criteria provided by PHMSA). Third, PHMSA would review all the notifications to determine whether the criteria in the rule have been met. For example, operators would need to demonstrate through analyses and documentation that their pipeline segments meet each criterion or provide substantial justification that any failure to meet a criterion does not increase the risk of corrosion in the segment. PHMSA would also consider in its review the specific location of the pipeline segments, the potential consequences if an accident were to occur at that location, and the compliance and overall performance history of the operator. PHMSA officials expected that operators of some types of pipelines would be more likely to use risk-based reassessment intervals beyond 7 years than others. For example, PHMSA cited operators that have demonstrated that their pipe is sound and that their engineering and risk analyses do not indicate the likelihood of time-dependent integrity problems occurring during a reassessment interval beyond 7 years. Although operators support the idea of using risk-based reassessment intervals beyond 7 years, it is not clear how many operators would be able to meet the potential criteria established by PHMSA, and PHMSA officials could not estimate the number either. For example, not all operators can conduct assessments using in-line inspection or hydrostatic testing, which is one of PHMSA’s proposed criteria for using risk-based reassessment intervals. 
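Because the 2008 proposal would require an operator to meet every criterion (or substantially justify an exception), the screening step can be sketched as a simple checklist. The criterion names and recorded values below are purely illustrative, not PHMSA's actual criteria wording or any operator's data.

```python
# Hypothetical, simplified screening of draft criteria. Under the 2008
# proposal each criterion must be met, so eligibility reduces to checking
# that no criterion in the record is unmet.
draft_criteria = {
    "assessed_by_ili_or_hydrotest": True,
    "pipeline_in_good_condition": True,
    "no_selective_seam_corrosion": True,
    "no_stress_corrosion_cracking": True,
    "cathodic_protection_effective": True,
    "compliance_history_good": False,   # e.g., an open enforcement action
}

def eligible_for_extended_interval(results):
    """Return (eligible, unmet), where unmet lists any failed criteria."""
    unmet = [name for name, met in results.items() if not met]
    return len(unmet) == 0, unmet

ok, unmet = eligible_for_extended_interval(draft_criteria)
print(ok)     # -> False
print(unmet)  # -> ['compliance_history_good']
```

The all-criteria-must-pass structure is why, as discussed below, it is hard to estimate how many operators (and how much pipeline mileage) would actually qualify: a single unmet criterion, such as pipe that cannot accommodate in-line inspection tools, rules a segment out absent a substantial justification.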
According to the proposal, operators would have to meet each of the criteria. As a result, the mileage of pipelines that would be affected by allowing risk-based reassessment intervals beyond 7 years is currently unknown. In light of the uncertain potential effect on resources and expertise for both regulators and operators, an effort to implement risk-based reassessment intervals beyond 7 years may benefit from PHMSA first obtaining additional information regarding the resource requirements needed prior to a rule change, much as the Office of Pipeline Safety did when it initially established the integrity management regulations. For example, the Accountable Pipeline Safety and Partnership Act of 1996 directed the Office of Pipeline Safety to establish a demonstration program to test a risk management approach to pipeline safety. Under the program envisioned by the legislation, the Secretary sought voluntary participation by interstate natural gas and hazardous liquid transmission operators in good standing to demonstrate company-specific risk management plans. The Secretary then completed a rulemaking that outlined the demonstration plan’s elements and provided opportunities for full public participation in the process. As a result, partly on the basis of the agency’s experience with the risk management demonstration program, the agency moved forward with a new regulatory approach, known as integrity management. Similarly, as noted above, PHMSA produced a report at the request of Congress explaining how the agency would establish and enforce risk-based criteria for extending the 7-year reassessment interval. In effect, efforts such as these allowed the agency to obtain preliminary results and information on a proposed rule, such as the potential benefits and impacts under a variety of conditions, before making a change. 
Conclusions 
Gas transmission pipeline assessments and reassessments have resulted in critical repairs being made. 
While the 7-year reassessment requirement has provided a safeguard by helping to identify these problems before they cause leaks or ruptures, its prescriptive nature is not fully consistent with the characteristics of risk-based management promoted by the Pipeline Safety Improvement Act of 2002. PHMSA has generally agreed that risk-based reassessment intervals would allow operators to better tailor reassessments to pipeline threats, and operators support this concept. Risk-based reassessment intervals beyond 7 years would allow operators to use the information they have collected about their pipeline systems to focus resources on areas of greatest importance. PHMSA drafted a process to establish and enforce risk-based criteria for the potential use of risk-based reassessment intervals in 2008. While this process would be more consistent with risk management practices, permitting operators to use risk-based reassessment intervals beyond 7 years would not be without challenges, even if justified using an engineering basis. First, Congress would have to amend the statutory requirement mandating the 7-year reassessment interval. In 2006, we recommended that this statutory requirement be amended to permit operators to reassess at intervals based on risk factors, technical data, and engineering analyses. If Congress were to amend the statute, both federal and state regulators as well as operators anticipate that overseeing and determining risk-based reassessment intervals beyond 7 years may create workload, staffing, and expertise challenges beyond what is currently required. Further, there is a lack of guidance to assist regulators and operators in developing the risk models currently used to calculate reassessment intervals. Without such guidance, operators could use a range of approaches for determining the relevant risk to gas transmission pipelines, potentially creating challenges with reviewing and justifying reassessment intervals. 
Given these potential challenges, more information might help decision-makers better understand the resource requirements needed to allow risk-based reassessment intervals beyond 7 years. In this context, conducting a study or developing a legislative proposal for a pilot program, in consultation with Congress, to examine the impact on regulators and operators from the use of risk-based reassessment intervals beyond 7 years could help stakeholders—including regulators, operators, and decision-makers—determine the resource demands of inspecting and evaluating these efforts. A full evaluation of the challenges to implementing risk-based reassessment intervals beyond 7 years and their associated resource requirements could help to identify the most prudent and effective way to implement risk-based reassessment intervals. Such an evaluation could help to ensure that the challenges regulators and operators claim they may face from this change would not negatively affect safety. Further, a study—similar to the 2008 report PHMSA prepared at the request of Congress and incorporating lessons learned since publication of that report—or a legislative proposal for a pilot program—similar to the one used in developing the integrity management program—could allow regulators to develop guidance on calculating risk-based reassessment intervals as well as determine the impact of these reassessment intervals. As the debate about the use of risk-based reassessment intervals continues, it is clear that more information is needed to further the understanding and discussion about how to address the potential challenges to using risk-based reassessment intervals beyond 7 years before any change occurs. 
Recommendations for Executive Action 
To improve how operators calculate reassessment intervals, we recommend that the Secretary of Transportation direct the Administrator for the Pipeline and Hazardous Materials Safety Administration to develop guidance for operators to use in determining risks and calculating reassessment intervals. To better identify the resource requirements needed to implement risk-based reassessment intervals beyond 7 years for gas transmission pipelines, we recommend that the Secretary of Transportation direct the Administrator for the Pipeline and Hazardous Materials Safety Administration to collect information on the feasibility of addressing the potential challenges of implementing risk-based reassessment intervals beyond 7 years, for example by preparing a report or developing a legislative proposal for a pilot program, in consultation with Congress, that studies the impact on regulators and operators of a potential rule change. 
Agency Comments 
We provided the Department of Transportation with a draft of this report for review and comment. The department did not agree or disagree with the recommendations, but provided technical comments that we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, and other interested parties. In addition, this report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Objectives, Scope, and Methodology 
Our work for this report focused on gas transmission pipelines in high consequence areas and the requirement to assess these pipeline segments at periodic intervals. 
In particular, this report examines: (1) the extent to which the Pipeline and Hazardous Materials Safety Administration’s (PHMSA) assessment data provides information on repairs made and the appropriateness of the 7-year reassessment requirement, (2) the impact of the 7-year reassessment requirement on regulators and operators, and (3) the potential challenges of implementing risk-based reassessment intervals beyond 7 years. To address the extent to which PHMSA’s assessment data provides information on repairs and the appropriateness of the 7-year reassessment requirement, we reviewed PHMSA’s regulations, prior GAO reports, and PHMSA data on gas transmission pipelines. We analyzed the data reported to PHMSA by pipeline operators on, among other things, age and operating pressure of transmission pipelines; pipeline miles assessed; tools used to conduct assessments; immediate and scheduled conditions found during assessments and subsequently repaired; and incidents, leaks, and failures on gas transmission pipelines in high consequence areas. We used two PHMSA data sources in our data analysis: the Gas Integrity Management Semi-Annual Performance Measures Reports from 2004 through 2009 and the Annual Reports on Natural and Other Gas Transmission and Gathering Pipeline Systems from 2010 and 2011. From 2004 to 2009, PHMSA collected information on miles assessed, incidents, leaks, and failures in high consequence areas using the Gas Integrity Management Semi-Annual Performance Measures Report. Through this report, PHMSA collected data on baseline assessments and started collecting data on reassessments in 2008. In 2010, PHMSA discontinued the Gas Integrity Management Semi-Annual Performance Measures Report and merged it with the Annual Report on Natural and Other Gas Transmission and Gathering Pipeline Systems. 
The updated Annual Report on Natural and Other Gas Transmission and Gathering Pipeline Systems added questions on pipeline miles that were baseline assessed and reassessed; tools used to conduct assessments; conditions identified and repaired as a result of assessments; and incidents, leaks, and failures in high consequence areas. One important change in the updated Annual Report was PHMSA’s new approach to documenting conditions and repairs identified by baseline assessments and reassessments. Prior to 2010, pipeline operators reported the number of repairs made on pipelines to fix problematic conditions identified by the assessments—a single repair could mitigate multiple problems. For 2010 and later, pipeline operators were required to report the number of repaired conditions. Since operators have to report the actual number of problems found and repaired, PHMSA expected the number of reported repairs to spike. Due to this reporting change, we cannot compare repair data from 2004 to 2009 to repair data reported in 2010 and later. To assess the reliability of PHMSA’s gas transmission pipeline data, we spoke with agency officials about data quality control procedures and reviewed relevant documentation. We determined that the data were sufficiently reliable for the purposes of this report, specifically to provide background information and to describe repairs made in high consequence areas. To ensure the accuracy of our data analysis, we internally reviewed our calculations and shared preliminary results with PHMSA to ensure that we analyzed its data appropriately. To determine the impact of the 7-year reassessment requirement on regulators and operators, we reviewed relevant legislation and PHMSA regulations on integrity management. We also interviewed federal and state regulators, industry associations, gas transmission pipeline operators, pipeline safety advocacy and environmental groups, research firms, a state regulatory association, and technical experts. 
We selected 27 pipeline operators to interview based on our review of PHMSA data, specifically looking for pipeline operators with gas transmission pipeline miles in high consequence areas. We then divided pipeline operators into six groups based on their mileage in high consequence areas and whether they had conducted reassessments. We chose 3 to 5 operators from each of the six groups, with the goal of ensuring diversity across these and several other characteristics, including the number of recent incidents caused by corrosion and their geographic location. The information obtained in these interviews is not generalizable to the entire population of pipeline operators. We also selected a non-generalizable sample of eight state pipeline safety offices using PHMSA data to, for example, identify states with relatively high pipeline mileage while also achieving geographic diversity. Five of the states we spoke with serve as interstate agents for PHMSA. To learn about the operations of a gas transmission pipeline and the logistics of conducting an assessment, we made two site visits to view a pipeline under construction in Manassas, Virginia, and to view an in-line inspection tool being used on a pipeline in Rockville, Maryland. To determine the potential challenges of implementing risk-based reassessment intervals beyond 7 years, we reviewed PHMSA documents. We also questioned federal and state regulators, industry associations, gas transmission pipeline operators, pipeline safety advocacy and environmental groups, research firms, a state regulatory association, and technical experts on the extent pipeline operators use risk to determine reassessment intervals under the current system, as well as how expanding the use of risk-based reassessment intervals beyond 7 years would impact operators and regulators. We collected additional data from three pipeline operators on their experiences in calculating reassessment intervals and conducting reassessments. 
We selected these three pipeline operators by using PHMSA data to identify gas transmission pipeline operators with different ranges of mileage in high consequence areas. We then selected operators that had completed at least some reassessments and looked for diversity in the following categories: geographic location, number of pipeline repairs, and tools used to complete assessments. 
Organizations Contacted 
Appendix II: Potential Criteria for Risk-Based Reassessment Intervals 
In 2008, Congress requested that the Pipeline and Hazardous Materials Safety Administration (PHMSA) provide a detailed statement to explain how the agency would establish and enforce risk-based criteria for extending the 7-year reassessment interval. As part of that request, PHMSA drafted potential criteria that operators would have to meet in order to use risk-based reassessment intervals beyond 7 years. PHMSA noted that the criteria may be further refined as potential rulemaking proceeds. The draft criteria include: 
If the pipeline operates at pressures that are greater than or equal to 30 percent of specified minimum yield strength, it must have been assessed using in-line inspection or hydrostatic testing. 
Most recent in-line inspection assessment shows pipeline to be in good condition. Few conditions meeting immediate repair criteria were found and the causative corrosion mechanisms have been identified and addressed. 
Most recent pressure test meets integrity management requirements and resulted in few leaks/failures or pressure reversals. 
Few or no significant corrosion repairs have been made in the covered segment since the last integrity assessment. Causes of previously identified significant corrosion defects have been corrected. 
No history of selective seam corrosion (a specialized form of corrosion associated with older pipelines) or microbiologically induced corrosion (a mode of corrosion in which microbes cause or influence the corrosion of metallic materials). 
Pipeline transports tariff quality dry gas (almost pure methane), with limited upsets introducing electrolyte or other contaminants, in which case internal corrosion risk has been managed. 
Pipeline is coated and cathodically protected (a technique to reduce the corrosion of a metal surface). Coating must meet the requirements in 49 C.F.R. § 192.461 and be in good condition. Cathodic protection must be demonstrated generally effective. 
No history of stress corrosion cracking (cracking induced by the combined influence of tensile stress and a corrosive environment). 
Assumed corrosion growth rate is justified and supports the longer reassessment interval. 
Calculations of remaining time frame before pipeline failure are conservative and demonstrate safety for an extended interval. 
Few safety-related conditions, leaks, incidents, or failures have resulted from corrosion, and the causes have been addressed. 
History of compliance with corrosion control, integrity management, operator qualification, and drug and alcohol testing regulations is good. 
Public awareness program meets the requirements in 49 C.F.R. § 192.616. 
No open corrective action orders or significant enforcement actions related to corrosion control program deficiencies affecting the involved pipeline segments. 
Pipeline must have been constructed after 1970 unless demonstration of good condition is provided. 
Environmental conditions in which the affected pipeline segment is located must not be unusually conducive to corrosion. 
Appendix III: GAO Contact and Staff Acknowledgments 
GAO Contact 
Susan A. Fleming, (202) 512-2834, or [email protected]. 
Staff Acknowledgments 
In addition to the contact named above, Sara Vermillion (Assistant Director), Sarah Arnett, Russell Burnett, Leia Dickerson, Colin Fallon, David Hooper, Joshua Ormond, Daniel Paepke, Madhav Panwar, Anne Stevens, and Adam Yu made key contributions to this report. 
About 300,000 miles of gas transmission pipelines cross the United States, carrying natural gas from processing facilities to communities and large-volume users. These pipelines are largely regulated by PHMSA. The Pipeline Safety Improvement Act of 2002 established the gas integrity management program, which required gas transmission pipeline operators to assess the integrity of their pipeline segments in high consequence areas by December 2012 and reassess them at least every 7 years. The Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011 directed GAO to examine the results of these baseline assessments and reassessments and the potential impact of making the current process more risk-based. GAO analyzed (1) PHMSA's assessment data on repairs made and the appropriateness of the 7-year reassessment requirement, (2) the impact of the 7-year reassessment requirement on regulators and operators, and (3) the potential challenges of implementing risk-based reassessment intervals beyond 7 years. GAO analyzed assessment data; reviewed legislation and regulations; and interviewed pipeline operators, federal and state regulators, and other stakeholders. Baseline assessment and reassessment data collected by the Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA) since 2004 show that pipeline operators are making repairs in highly populated or frequented areas ("high consequence areas"). For example, from 2004 to 2009, operators made 1,080 immediate repairs. 
While operators can use assessment data to determine reassessment intervals for specific pipelines, PHMSA's data are aggregated and cannot indicate an appropriate maximum interval for all pipelines nationwide. Such a determination requires, for example, collaboration of subject matter experts and analysis of technical studies. The current 7-year reassessment requirement provides a safeguard by allowing regulators and operators to identify and address problems on a continual basis, but is not fully consistent with risk-based practices. The 7-year reassessment requirement is more frequent than the intervals found in industry consensus standards and provides greater assurance that operators are regularly monitoring their pipelines to address threats before leaks or ruptures occur. However, this requirement--which was established in a 2002 act as part of the gas integrity management program rather than by rulemaking--is not fully consistent with risk-based management practices, which ask operators to, for example, use information to identify, assess, and prioritize risks so that resources may be allocated to address higher risks first. While operators are required to determine an appropriate reassessment interval based on the threats to their pipelines in high consequence areas, they must reassess those pipelines at least every 7 years regardless of the risks identified. Implementing risk-based reassessment intervals beyond 7 years would require a statutory change from Congress and could exacerbate current workload, staffing, and expertise challenges for regulators and operators. For example, PHMSA is facing workload problems with inspections, which could be worsened by allowing operators to use risk-based reassessment intervals beyond 7 years; PHMSA has an initiative under way that could help address this issue. 
Further, some operators told us that extending reassessment intervals beyond 7 years would likely require additional data analyses over what is currently required. Operators GAO met with varied in the extent to which they currently calculate reassessment intervals and use the results of data analyses. Guidance to calculate reassessment intervals is lacking, and as a result, operators may perform a less rigorous determination of their reassessment intervals at this time. At Congress's request, in 2008 PHMSA described how it would establish and enforce risk-based criteria for extending the 7-year reassessment interval. PHMSA proposed retaining the current 7-year reassessment requirement, but establishing a process by which operators could use risk-based reassessment intervals beyond 7 years if they met certain potential criteria, such as demonstrating sound risk analysis. While PHMSA and GAO have supported the concept of risk-based reassessment intervals beyond 7 years, given the breadth of potential challenges with implementation, more information might help decision-makers better understand the resource requirements for this change. For example, PHMSA has used pilot programs to collect such information and study the effects prior to rule changes. |
Background Natural gas is a key feedstock in the manufacturing of nitrogen for which there is no practical substitute. Manufactured nitrogen—also known as anhydrous ammonia—is used as a fertilizer itself and is also the primary building block used to manufacture all other nitrogen-based fertilizers. Some of this nitrogen also is used for industrial purposes such as promoting bacterial growth in waste treatment plants, making plastics, and as a refrigerant. U.S. manufacturers supplied almost 14 million tons of nitrogen during fertilizer year 2002 and an additional 7 million tons were imported. Fifty-six percent of the total nitrogen supply was consumed by U.S. agricultural demands. Since natural gas is the most costly component of nitrogen, the profitability of the U.S. nitrogen fertilizer industry depends, to a large degree, on the price of natural gas in the United States. As we reported in December 2002, natural gas prices can be volatile, and small shifts in the supply of or demand for gas are likely to continue to cause relatively large price fluctuations. In addition to facing a volatile natural gas market, which sometimes leads to price spikes, America’s nitrogen fertilizer producers must also compete in a marketplace where many competitors pay much lower prices for natural gas. For example, industry data show that recently, when the U.S. market price for natural gas was $5 per mmBtu, lower gas prices were available to nitrogen fertilizer producers in other parts of the world. The price of gas in the Middle East was 60 cents per mmBtu; in North Africa, 40 cents; in Russia, 70 cents; and in Venezuela, 50 cents. According to The Fertilizer Institute (TFI), fertilizer products operate in a world market, and U.S. prices are influenced by numerous variables other than the price of natural gas in the United States. U.S. General Accounting Office, Natural Gas: Analysis of Changes in Market Prices, GAO-03-46 (Washington, D.C.: Dec. 18, 2002). 
Higher Natural Gas Prices Have Contributed to Higher Nitrogen Fertilizer Prices and Reduced Domestic Production but Have Not Affected Availability of Fertilizer Because the cost of natural gas accounts for such a large percentage—up to 90 percent—of the total costs of manufacturing nitrogen fertilizer, nitrogen fertilizer prices tend to increase when gas prices increase. When gas prices increased in 2001 and 2003, prices for nitrogen fertilizers increased throughout the marketing chain. The higher natural gas prices in 2001 also led to higher production costs for the U.S. nitrogen fertilizer manufacturing industry and resulted in a significant reduction in the amount of nitrogen produced in this country that year. Despite this decline in the production of nitrogen, supplies of nitrogen fertilizer were adequate to meet farmers’ needs in 2001 primarily because of a significant increase in imported nitrogen. Natural Gas and Nitrogen Fertilizer Prices Are Closely Related Higher natural gas prices have contributed to higher prices for nitrogen fertilizer throughout the marketing chain. When gas prices increased significantly in 2001 and 2003, spot market prices, as well as the prices farmers paid for fertilizer, increased for all three nitrogen-based products included in our analysis—anhydrous ammonia, urea, and UAN. Further, the high prices seen in 2001 could have been even higher, if the volume of fertilizer imports had not increased to compensate for the reduction in domestic production of nitrogen. The relationship between gas prices and fertilizer prices was the strongest for anhydrous ammonia, at least in part, because anhydrous ammonia contains the highest concentration of nitrogen of the three fertilizer products—82 percent—and natural gas is by far the most costly component used in manufacturing nitrogen. Anhydrous ammonia is the nitrogen-based fertilizer used most often in the United States, and is also the primary building block for urea and UAN. 
As shown in figure 1, prices for anhydrous ammonia and natural gas moved closely in relation to each other during the period from January 1998 to March 2003. When gas prices increased or decreased, the spot market price for ammonia tended to follow the same trend. More specifically, both the price of natural gas and the price of ammonia peaked in January 2001 and again in March 2003. Closer review of the data shows that the monthly price of natural gas, $2.52 per mmBtu in January 2000, had risen 1 year later to $10.16 per mmBtu, an increase of 303 percent. Over the same time period, the price of anhydrous ammonia rose from $119 per ton to $290 per ton, an increase of 144 percent. Although there is a strong correlation between natural gas prices and nitrogen fertilizer prices, many other variables influence the supply and demand market forces that ultimately determine fertilizer prices. In addition, U.S. companies that produce nitrogen use various purchasing techniques to manage their natural gas price risks; therefore, they do not purchase all their gas at the prevailing market price. Urea and UAN contain lower concentrations of nitrogen than anhydrous ammonia: 46 percent and 32 percent nitrogen, respectively. Because urea and UAN prices reflect lower nitrogen concentrations, they did not always move in direct relationship with natural gas prices. For example, in May 1998, urea prices increased to $162 per ton, while gas prices remained basically flat. Moreover, the prices of nitrogen fertilizer can differ depending upon how much further along the marketing chain prices are recorded. For example, as shown in figure 3, the price for anhydrous ammonia in the Mid Cornbelt, where this fertilizer is primarily used, was higher than the price in the U.S. Gulf. This difference reflects the cost of transporting the ammonia from the Gulf, where it is produced, to the Mid Cornbelt.
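The percentage increases cited earlier in this section follow the standard change formula, (new price minus old price) divided by old price. As a quick sketch using only the prices quoted in the text:

```python
def pct_change(old, new):
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

# Henry Hub natural gas, dollars per mmBtu: January 2000 vs. January 2001
print(pct_change(2.52, 10.16))  # 303

# Anhydrous ammonia spot price, dollars per ton, over the same period
print(pct_change(119, 290))     # 144
```

The gas price roughly quadrupled while the ammonia price roughly doubled, consistent with natural gas being the largest, but not the only, cost component of nitrogen fertilizer.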
Also, changes in the price of nitrogen fertilizer can lag behind changes in natural gas prices, depending upon where in the marketing chain prices are recorded. For example, as shown in figure 3, the price for anhydrous ammonia in the Mid Cornbelt peaked in February 2001—about 1 month after natural gas prices spiked that year. Other increases and decreases in the price of Mid Cornbelt ammonia lagged behind natural gas price changes on other occasions. We believe these lags reflect the time associated with transporting the fertilizer from its point of origin to the farmers who ultimately use the product. Retail prices for nitrogen fertilizer, or those prices paid by farmers, also tend to rise sharply when natural gas prices increase. As shown in figure 4, the USDA-reported farmer prices for nitrogen fertilizer reflected the natural gas price spikes that occurred in January 2001 and March 2003. However, the 2001 spike in fertilizer prices lagged behind the increase in gas prices by about 1 month. The February 2001 price for nitrogen fertilizer was about 79 percent higher than it was the previous year. Furthermore, according to USDA data, the average U.S. farm-level price for nitrogen fertilizer during the spring, when farmers’ demand for nitrogen fertilizer is the highest, tracked natural gas prices. Specifically, the April monthly price for natural gas increased approximately 84 percent from April 2000 to April 2001. Over the same time period, the April farm-level price for anhydrous ammonia increased 76 percent from $227 to $399 per ton. By April 2002, gas prices had decreased by 39 percent, and ammonia prices had dropped by 37 percent from the previous year’s level. In April 2003, the price of natural gas was again higher, increasing by 48 percent, and the average farm-level price of anhydrous ammonia followed this trend by increasing 49 percent. 
However, it is difficult to determine the extent of financial harm farmers suffered because of increased fertilizer prices in 2001. A USDA study directed at determining how corn farmers responded to higher fertilizer prices in 2001 found that about 34 percent of the responding producers of corn—a crop that requires large quantities of nitrogen fertilizer—purchased a majority of their nitrogen fertilizer at prices that were set prior to January 2001 and, therefore, were not affected by the sharp rise in fertilizer prices that year. Further, these producers were among the largest corn-producing farms and applied the most nitrogen fertilizer per acre. Eleven percent of the corn producers that responded to the USDA survey reported adjusting their nitrogen application rates or practices in response to higher prices, and the remaining 55 percent of respondents—generally smaller corn farms that applied the least amount of nitrogen fertilizer—reported they took no action in response to higher nitrogen fertilizer prices in 2001. Higher Natural Gas Prices Had Financial Consequences for U.S. Nitrogen Fertilizer Producers and Led to Reduced Production The sharp rise in gas prices in 2001 had financial consequences for the U.S. nitrogen fertilizer manufacturing industry because of the sharp increase in its production costs. These higher production costs, which could not be recovered through higher fertilizer prices, led to plant closures and a significant reduction in domestic nitrogen production. According to industry data, several companies that manufacture nitrogen fertilizer reported decreased revenues or financial losses in 2001, and each cited higher natural gas prices as contributing to or causing the financial consequences. For example, one large interregional cooperative that produces nitrogen fertilizer for U.S. farmers and ranchers reported a loss of more than $60 million in 2001.
The company’s 2001 annual report cited high natural gas prices as a primary reason for the financial loss. Industry data obtained from the International Fertilizer Development Center showed that between January 2001 and June 2003, eight U.S. nitrogen fertilizer manufacturers permanently closed their plants, and a ninth plant had not operated since 2001. Industry officials also told us that natural gas prices in 2003 have remained well above historic averages and are continuing to exact a financial toll on the domestic nitrogen fertilizer manufacturing industry. These officials cite the fact that, in June 2003, the U.S. industry was operating at only 50 percent of capacity as evidence of this toll. Further, they said the industry has suffered through several years of extreme financial hardship, caused in part by higher gas prices driving up production costs and by foreign competitors who have access to less expensive natural gas. If gas prices in this country remain relatively high, more U.S. manufacturers are likely to curtail nitrogen production, and some could permanently shut down their plants. The production and consumption of fertilizer are often measured by the amount of nutrient content in the fertilizer applied. For nitrogen fertilizer products, the primary nutrient that is measured is nitrogen. Manufacturers supply nitrogen that is consumed in both the agricultural and industrial sectors. Table 1 below provides estimates of nitrogen supply and demand in the United States over the last 7 years, including the nitrogen nutrient content in fertilizer products consumed by the agricultural sector. As the price of natural gas, the key component in the manufacturing of nitrogen, spiked in 2001, nitrogen production fell. As shown in table 1, U.S. manufacturers produced 25 percent less nitrogen in 2001 than in 2000.
Imports Have Helped Maintain Availability of Nitrogen Fertilizer Despite the significant decline in domestic production of nitrogen in 2001, supplies of nitrogen fertilizer were adequate to meet farmers’ demand that year primarily because of an increase in imports. USDA collected additional survey information from April to June 2001 to determine whether farmers were facing problems in obtaining nitrogen fertilizer. The results of this survey show that the supply of nitrogen fertilizer was adequate to meet farmers’ 2001 demand. As shown in figure 5, while nitrogen fertilizer supplies were below normal in several states in April 2001, they had returned to normal levels in all but one state by June of that year. Nationally, nitrogen fertilizer supplies were at 92 percent of normal levels in early April 2001, while only 12 states reported supplies at less than 90 percent of normal levels. Only two states—Pennsylvania and New Jersey—reported supplies at less than 80 percent of normal levels. However, by early June nitrogen fertilizer supplies were at 97 percent of normal levels nationally, and all but one state reported supplies at 95 percent or more of normal levels. By June 30, 2001, USDA officials concluded that there were sufficient supplies of nitrogen fertilizer, and they stopped the survey. Furthermore, USDA did not conduct a similar survey in 2003, when gas prices and fertilizer prices again increased, because it was unaware of any concerns about the availability of nitrogen fertilizer. The results of USDA’s survey are consistent with our analysis, which found that although domestic production of nitrogen declined 25 percent in 2001, the overall demand was met primarily because imports increased by about 43 percent. As shown in table 1, nitrogen imports increased from 6.3 million tons in 2000 to approximately 9 million tons in 2001. 
Although most nitrogen fertilizer imported into the United States has for the past several years come from Canada, the amount of nitrogen fertilizer imported from Canada decreased by almost 13 percent in 2001. On the other hand, nitrogen fertilizer imports from Trinidad and Tobago, Venezuela, and Ukraine increased by 19 percent, 59 percent, and 469 percent, respectively, in 2001. The price of natural gas in these three countries was considerably lower than the price of gas in the United States; thus, fertilizer producers in these countries were able to produce nitrogen fertilizer at much lower costs than domestic producers. Table 1 also shows that domestic agricultural consumption of nitrogen decreased from 12.3 million tons in 2000 to 11.5 million tons in 2001—or about 7 percent. At least part of this reduction can be attributed to the impact of higher fertilizer prices on the country’s farmers. For example, according to USDA’s survey aimed at determining how corn farmers responded to higher fertilizer prices in 2001, 11 percent of responding farmers reported they adjusted their nitrogen fertilizer rates or practices in response to higher nitrogen fertilizer prices that year. About 80 percent of these farmers reduced their nitrogen fertilizer use by an average of 23 percent. Federal Government Has a Limited Role in Managing the Impact of Natural Gas Prices on the Fertilizer Market The federal government has a limited role in managing the impact of natural gas prices on the domestic fertilizer market. For example, the government does not determine the price of natural gas; however, two federal agencies—the Federal Energy Regulatory Commission (FERC) and the Commodity Futures Trading Commission (CFTC)—play important roles in promoting competitive natural gas markets by deterring anticompetitive actions.
In addition, the Energy Information Administration (EIA) is responsible for obtaining information about and analyzing trends in the natural gas market that are used by industry and government decision makers. As with natural gas, the federal government does not set or control prices for nitrogen fertilizer. However, as part of its overall mission, USDA does monitor developments in the agricultural sector that could affect farmers. Regarding the fertilizer market, USDA collects, analyzes, and disseminates information on fertilizer prices and uses and, in 2001, collected additional information on the supply of nitrogen fertilizer and how higher fertilizer prices affected farmers. Lastly, USDA provides insurance and commodity price support programs to assist America’s farmers in managing risks associated with crop yields and revenues. Federal Role in the Natural Gas Market Is Focused on Ensuring a Competitive Marketplace As we reported in December 2002, in today’s deregulated market the federal government does not control the price of natural gas. However, two federal agencies are responsible for ensuring that natural gas prices are determined in a competitive marketplace. Specifically, FERC plays a major role in overseeing the natural gas marketplace to ensure that prices are just and reasonable and free from fraud and market manipulation. Similarly, CFTC exercises regulatory oversight of natural gas derivatives that are traded on federally regulated exchanges, such as the New York Mercantile Exchange, to protect traders and the public from fraud, manipulation, and abusive practices. Following the price increases that occurred in the natural gas market during 2000–2001, both FERC and CFTC initiated investigations into possible fraud or manipulation. In August 2002, FERC reported that it had found indications that several companies may have manipulated spot prices upward for natural gas delivered to California during 2000–2001.
In March 2003, FERC reported that it had found evidence of manipulation of both electricity and natural gas markets, and that spot market gas prices were not produced by a well-functioning competitive market. FERC staff made several recommendations to FERC commissioners aimed at correcting the deficiencies they found in the electricity as well as the natural gas markets. In a statement before the National Energy Marketers Association on April 4, 2003, the Chairman of CFTC acknowledged that the commission had imposed monetary penalties and filed complaints in federal court against several companies in connection with false reporting, attempts to manipulate natural gas prices, and the operation of an illegal futures exchange. The Chairman also said that CFTC was actively engaged in other energy sector investigations, and further charges might be filed. Following the price spike that occurred in the natural gas market in February 2003, FERC and CFTC again undertook investigations of possible market manipulation. On July 23, 2003, they issued a joint statement saying that neither investigation had identified evidence of market manipulation. FERC concluded that gas prices had risen in apparent response to underlying supply and demand conditions and in a manner consistent with those conditions. CFTC said that it found nothing that suggested manipulative activity in the natural gas futures and options market during the week of February 24, 2003. A third federal agency—EIA—analyzes energy price movements and provides market information that gas industry analysts use as an indicator of both supply and demand. For example, in May 2002, EIA began reporting estimates on the volume of gas in storage, which is a key predictor of future natural gas prices. EIA also provides weekly and monthly updates on the natural gas market and special reports on various issues affecting the gas market.
In its August 2003 energy outlook, EIA reported that gas prices at the Henry Hub, one of the largest gas market centers in the United States, fell below $4.70 per mmBtu during the last week in July 2003. This was considered significant because these prices had been considerably above $5 per mmBtu on a monthly basis since the beginning of the year. However, EIA advised that gas prices remain at risk for volatility, and industrial users who rely on spot market purchases for their gas, such as nitrogen fertilizer producers, face the greatest risk of higher natural gas prices. Federal Role in the Fertilizer Market Is Limited The federal government does not control prices for nitrogen fertilizer, and nitrogen fertilizer products imported from other countries are generally not subject to U.S. trade restrictions, such as quotas and tariffs. However, as part of its overall mission, USDA does monitor and report on developments in the agricultural sector that could affect farmers and offers certain programs to help farmers manage the risks associated with crop yield and revenues. The National Agricultural Statistics Service (NASS) collects information on agricultural acreage, production, stocks, prices, and income, as well as information on fertilizer prices and uses. For example, the annual Agricultural Chemical Usage report provided by NASS includes information for targeted crops by major producing states on how much and what type of fertilizer was applied per acre. NASS also reports monthly price indices for three major fertilizer types—nitrogen, phosphate, and potassium—and actual prices paid by farmers for several fertilizer products in April of each year. In addition to its routine surveys, USDA collected additional information in 2001 about nitrogen fertilizer availability and prices.
According to officials from the Office of the Chief Economist, this information was collected because Congress and others had raised concerns about higher natural gas prices and the possible impact these prices would have on the availability and price of fertilizer. In order to collect this information, questions were added to USDA’s ongoing Crop Progress survey aimed at determining the availability of nitrogen fertilizer in 2001 and to the Agricultural Resource Management Survey to determine how corn growers responded to the higher nitrogen fertilizer prices that occurred in 2001. The results of these additional survey questions are discussed elsewhere in this report. USDA also offers insurance and commodity price support programs to help farmers manage risk associated with crop yields and revenues, but it currently does not offer similar programs to cover the risks associated with farm production costs, such as the cost of fertilizer. For example, in 2002, USDA’s insurance program covered crops valued at $41 billion, and commodity price support payments have averaged more than $10 billion per year since 1996. According to USDA officials, the agency does not offer insurance to cover the risks associated with farm production costs because these risks tend to be small compared with the risks associated with crop prices. Since a farmer’s income per acre from a crop equals the crop price times the yield, changes in either crop price or yield are directly and fully reflected in a farmer’s income. As shown in table 2, crop prices can change significantly over time and from year to year. From 1996 to 2001 the average price of corn declined by $.98 per bushel, or 35 percent, and average corn prices declined by $.61 per bushel—24 percent—from 1997 to 1998. Overall, from 1996 to 2001, average corn yields increased only 11 percent—from 130 bushels per acre to 144 bushels per acre.
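The income relationship described above (income per acre equals crop price times yield) means a price change passes through one-for-one to gross income when yield is unchanged. A minimal sketch with illustrative numbers rather than USDA figures:

```python
def income_per_acre(price_per_bushel, yield_bushels):
    """Gross crop income per acre: price times yield, as described in the text."""
    return price_per_bushel * yield_bushels

# Illustrative numbers (not from table 2): a 35 percent price decline
# reduces per-acre income by the same 35 percent at a fixed yield.
base = income_per_acre(2.80, 130)
after_drop = income_per_acre(2.80 * (1 - 0.35), 130)
print(round(1 - after_drop / base, 2))  # 0.35
```

This one-for-one pass-through is why price and yield risks dominate a farmer's income risk relative to the comparatively stable production costs discussed next.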
In addition, while national average yields are relatively stable from year to year, the actual yields for individual farmers can vary significantly from year to year as a result of natural causes, such as weather conditions and the extent of loss caused by insects and diseases. In contrast, a farmer faces fewer risks with costs of production because these costs tend to remain stable from year to year. As shown in table 3, total production costs per acre for a corn farm remained relatively stable from 1996 through 2001, and changes in different cost categories often offset one another. For example, although average fertilizer costs increased by $8.68 per acre from 2000 to 2001, this large increase was offset by a decrease of $8.24 in fuel, lube, and electricity costs. Other production costs also decreased and, as a result, total production costs decreased by $3.84, or about 2 percent. Similarly, although USDA provides information to farmers through the Cooperative State Research, Education and Extension Service to help them participate in farm commodity futures markets, there is relatively little information regarding farm production costs, such as fertilizer. According to a state extension service official, the extension service has issued several publications that provide information on farm commodity futures markets because farmers are generally familiar with these markets and have access to the information needed to participate successfully in these markets. However, the extension service generally does not encourage farmers to participate in futures markets involving farm production cost items, such as fuels, because farmers are not as familiar with these markets. Instead, farmers generally use various prepayment methods to control the costs of items used in producing crops. Observations Natural gas is the most costly ingredient used in manufacturing nitrogen fertilizer products. 
However, the price of natural gas can vary significantly in different markets throughout the world. Unfortunately for domestic nitrogen fertilizer manufacturers, the price of natural gas in the United States can far exceed its price in other parts of the world. As a result, domestic manufacturers are at a competitive disadvantage when domestic natural gas prices rise. Manufacturers can close plants in response to periodic price spikes and resume production when prices drop again, but higher prices sustained over the long term may result in more permanent curtailment of domestic production. In the past, farmers’ needs for fertilizer have been met by increases in imports when domestic production has been curtailed, as it was in 2001. However, it remains to be seen how well the market will respond to further reductions in the domestic production of nitrogen fertilizer that may be caused by more sustained higher natural gas prices in the future. Earlier this year, increased natural gas prices once again caused higher production costs for the nation’s fertilizer manufacturing industry, which in turn contributed to a reduction in the amount of nitrogen being produced and an increase in nitrogen fertilizer prices. Although it is too early to determine whether these higher gas prices will have the same adverse effect on the fertilizer manufacturing industry as higher gas prices did in 2001, some within the industry contend that continuing higher gas prices are threatening the industry. Agency Comments We provided USDA and TFI with a draft of this report for review and comment. We received oral comments from USDA and TFI officials, who agreed with our facts and observations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the USDA Secretary, The Fertilizer Institute, and other interested parties. 
We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Questions about this report should be directed to me at (202) 512-3841. Key contributors to this report are listed in appendix II. Objectives, Scope, and Methodology In our study of the natural gas and nitrogen fertilizer markets, we determined (1) how the price of natural gas affects the price, production, and availability of nitrogen fertilizer and (2) the federal government’s role in managing the impact of natural gas prices on the U.S. fertilizer market. To address these objectives, we reviewed pertinent documents and obtained information and views from a wide range of officials from both the federal government and the private sector. We interviewed staff and/or obtained information from the Department of Agriculture’s (USDA) Office of the Chief Economist, Economic Research Service, National Agricultural Statistics Service, and Cooperative State Research, Education and Extension Service; the Department of Commerce; the Department of Energy’s Energy Information Administration; the Federal Energy Regulatory Commission; the Commodity Futures Trading Commission; and the International Trade Commission. We also discussed the relationship between the natural gas and nitrogen fertilizer markets with representatives from various industry organizations, including The Fertilizer Institute (TFI); the International Fertilizer Development Center; the American Farm Bureau Federation; Agrium Incorporated; CF Industries, Incorporated; and Terra Industries, Incorporated. To determine how the price of natural gas affects the price of nitrogen fertilizer, we examined industry-supplied natural gas prices and industry, as well as government, price data for nitrogen fertilizer and determined how fertilizer prices behaved when gas prices increased in 2000–2001 and again in 2003. 
We determined the extent to which a correlation existed between the price of natural gas and prices for three nitrogen fertilizers, anhydrous ammonia, urea, and urea ammonium nitrate, which were included in our analysis because they are widely used by American farmers. We compared natural gas and nitrogen fertilizer prices for the period January 1998 through March 2003. More specifically, we obtained industry prices for natural gas at the Henry Hub from Global Insight (USA), Inc. We selected Henry Hub prices because this market center is one of the largest in the country and often serves as a benchmark for wholesale natural gas prices across the country. We obtained monthly spot prices, or the current cash prices at which nitrogen-based fertilizers are sold at various locations, from an industry source—Green Markets: Fertilizer Market Intelligence Weekly. Green Markets, a Pike & Fischer, Inc., publication, collects independent spot price quotes for 19 fertilizer commodities every week. Our analysis of the market data included fertilizer prices at two major market locations: (1) the U.S. Gulf Port, whose prices are considered the benchmark for fertilizer prices in North America, and (2) the Mid Cornbelt, where large quantities of nitrogen fertilizer are used. In addition, we compared the relationship between the prices paid by farmers for nitrogen fertilizer and natural gas prices. To do this, we calculated the monthly prices paid by farmers for nitrogen fertilizer. We used the April prices paid by farmers for anhydrous ammonia, urea ammonium nitrate (32 percent nitrogen solution) and urea (46 percent nitrogen). Since these prices are reported only once a year in April, we applied the monthly prices paid index for nitrogen fertilizer published by USDA to the April prices in order to calculate a monthly price for nitrogen fertilizer. 
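In outline, that index-scaling step can be sketched as follows; the function names, index values, and weights here are illustrative assumptions, not USDA data (only the $227 April price appears in the text):

```python
def monthly_price(april_price, april_index, month_index):
    """Scale a once-a-year April survey price to another month using
    the monthly prices-paid index (a sketch of the method described)."""
    return april_price * (month_index / april_index)

def blended_price(product_prices, weights):
    """Weight the component prices (anhydrous ammonia, UAN, urea) into a
    single nitrogen fertilizer price; weights are assumed to sum to 1."""
    return sum(p * w for p, w in zip(product_prices, weights))

# Illustrative only: a $227/ton April price with the index easing from 110 to 104.5
print(round(monthly_price(227.0, 110.0, 104.5), 2))  # 215.65
```

The same ratio applies to each product's April price, and the weighted average of the scaled component prices yields the monthly nitrogen fertilizer price series.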
We did this by using the appropriate weights, supplied by USDA, for each of the fertilizer components (anhydrous ammonia, urea ammonium nitrate, and urea). The index for nitrogen fertilizer is based on the Producer Price Index series (PPI) and appropriate subcomponents from the Bureau of Labor Statistics. The April fertilizer prices are obtained by survey from establishments selling fertilizers to farmers. To determine the effect of natural gas prices on domestic nitrogen fertilizer production, we examined nitrogen inventory, production, and consumption data obtained from government and industry sources from 1996 through 2002. These data (shown in table 1) reflect the estimated quantity of nitrogen in the United States, including the nitrogen nutrient in several fertilizer products—anhydrous ammonia, ammonium nitrate, ammonium sulfate, aqua, nitrogen solutions, urea, and other nitrogen materials. The estimated nitrogen production, imports, and exports were derived from the Department of Commerce, Bureau of Census, quarterly report Inorganic Fertilizer Materials and Related Products (MQ325B). The inventory data were taken from a TFI report, Fertilizer Record, which reflects the results of a TFI monthly survey of domestic nitrogen fertilizer producers. The agricultural consumption data were derived from reports filed by fertilizer users with state fertilizer control officials. These reports are tabulated by the Association of American Plant Food Control Officials, Inc. (AAPFCO) and TFI and published by TFI in Commercial Fertilizers. Because of the incompleteness of the state fertilizer consumption reports, an unknown but significant amount of missing data, particularly for the most recent year, is imputed based on historical information by AAPFCO and TFI. The estimates described above were used in this report for several reasons.
First, the estimates of total supply and total demand, which reflect the combination of data from several independent sources, differ only slightly. Second, the trends in consumption from the trade source are consistent with those in the related Census Bureau series. Third, these data are widely used by companies that produce nitrogen fertilizer. In addition, we reviewed financial reports and other industry documents that describe how the nitrogen manufacturing industry responded to higher natural gas prices, and we interviewed industry and government officials to obtain their views and comments. In determining the effect of higher natural gas prices on the supply of nitrogen fertilizer, we relied primarily on the results of a USDA survey on fertilizer availability in 2001. According to USDA officials, they added questions concerning nitrogen fertilizer supplies to the ongoing Crop Progress survey because this was the most efficient and reliable survey vehicle available on short notice. USDA asked respondents to report on the adequacy of nitrogen fertilizer supplies available to producers in their area. Although the responses were subjective, the respondents are widely regarded as the most knowledgeable about agricultural conditions in their respective counties. The results of the survey questions used to gather information on the availability of nitrogen fertilizer were presented in the National Agricultural Statistics Service’s Crop Progress report dated June 4, 2001. We also examined data contained in our supply and demand table (table 1) to determine sources, supplies, and consumption of nitrogen fertilizer over the 7-year period ending in June 2002. To determine what role the federal government plays in managing the impact of natural gas prices on the U.S. 
fertilizer market, we reviewed the responsibilities of federal agencies regarding the natural gas and fertilizer markets and their efforts to monitor and collect information on these markets. We reviewed relevant documents provided by agriculture and fertilizer industry representatives and interviewed these officials to obtain their views on what actions, if any, the federal government should take to mitigate the effects of high natural gas prices on the U.S. fertilizer market. We also reviewed relevant documents and interviewed USDA and state extension service officials regarding how farmers manage the risks associated with their production costs and the federal government’s role in assisting farmers in managing these risks. Finally, we reviewed the results of a USDA analysis of the 2001 Agricultural Resource Management Survey, which was used to gather information on how American farmers who grow corn responded to the higher nitrogen fertilizer prices in 2001. The results of this analysis were presented in the USDA, Economic Research Service’s Agricultural Income and Finance Outlook report dated September 26, 2002. We performed our review from February through August 2003 in accordance with generally accepted government auditing standards. While we did not independently verify the accuracy of natural gas and fertilizer prices and other data obtained from industry sources, we did compare these data with other relevant data to ascertain the reasonableness of the data we used. We also interviewed knowledgeable government and industry officials to determine the reasonableness of the data and our use of them. We determined that the data were sufficiently reliable for the purposes of our report. GAO Contacts and Staff Acknowledgments In addition to the individuals named above, Carol Bray, James Cooksey, Nancy Crothers, Paul Pansini, Robert Parker, and Barbara Timmerman made key contributions to this report. 
Natural gas is the most costly component used in manufacturing nitrogen fertilizer. Therefore, when natural gas prices increased in 2000-2001, U.S. companies that produce nitrogen fertilizer reported adverse financial consequences resulting from much higher production costs. 
Concerns also arose that the nation's farmers would face much higher nitrogen fertilizer prices and that there might not be an adequate supply of nitrogen fertilizer to satisfy farmers' demands at any price. Responding to congressional concerns, GAO undertook a study to determine (1) how the price of natural gas affects the price, production, and availability of nitrogen fertilizer and (2) what role the federal government plays in mitigating the impact of natural gas prices on the U.S. fertilizer market. Higher natural gas prices have contributed to higher nitrogen fertilizer prices and reduced domestic production. The report includes a figure showing the relationship between natural gas prices and the farmer price for nitrogen fertilizer. Higher gas prices in 2000-2001 also led to a 25 percent reduction in domestic production of nitrogen, but, despite this decline, the supply of nitrogen fertilizer was adequate to meet farmers' demand in 2001. Demand was met because U.S. nitrogen production was supplemented by a 43 percent increase in nitrogen imports and a 7 percent decrease in agricultural consumption of nitrogen fertilizer. The federal government does not set natural gas prices, and it has a limited role in managing the impact of natural gas prices on the U.S. fertilizer market. Three federal agencies—(1) the Federal Energy Regulatory Commission, (2) the Commodity Futures Trading Commission, and (3) the Energy Information Administration—are responsible for ensuring that natural gas prices are determined in a competitive and informed marketplace. Moreover, the federal government has no role in controlling fertilizer prices, but the U.S. Department of Agriculture (USDA) does monitor developments in the agricultural sector, including fertilizer markets, that could affect farmers. Also, in 2001, USDA collected additional survey information in response to concerns about the price and availability of nitrogen fertilizer. 
Background Ports Are Important and Vulnerable Ports play an important role in the nation’s economy and security. Ports are used to import and export cargo worth hundreds of billions of dollars, generating jobs, both directly and indirectly, for Americans and our trading partners. Ports, which include inland waterways, are used to move cargo containers and bulk agricultural, mineral, petroleum, and paper products. Ports are also important to national security, hosting naval bases and vessels, facilitating the movement of military equipment, and supplying troops deployed overseas. Since the terrorist attacks of September 11, the nation’s 361 seaports have been increasingly viewed as potential targets for future terrorist attacks. Ports are vulnerable because they are sprawling, interwoven with complex transportation networks, close to crowded metropolitan areas, and easily accessible. Ports contain a number of specific facilities that could be targeted by terrorists, including military vessels and bases, cruise ships, passenger ferries, terminals, dams and locks, factories, office buildings, power plants, refineries, sports complexes, and other critical infrastructure. Multiple Jurisdictions Are Involved The responsibility for protecting ports from a terrorist attack is shared across jurisdictional boundaries, with federal, state, and local organizations involved. For example, at the federal level, the Department of Homeland Security (DHS) has overall homeland security responsibility, and the Coast Guard, an agency of the department, has lead responsibility for maritime security. Port authorities provide protection through designated port police forces, private security companies, and coordination with local law enforcement agencies. 
Private sector stakeholders play a major role in identifying and addressing the vulnerabilities in and around their facilities, which may include oil refineries, cargo facilities, and other property adjacent to navigable waterways. Information Sharing Is Important Information sharing among federal, state, and local officials is central to port security activities. The Homeland Security Act of 2002 recognizes that the federal government relies on state and local personnel to help protect against terrorist attacks, and these officials need homeland security information to prevent and prepare for such attacks. Information sharing between federal officials and nonfederal officials can involve information collected by federal intelligence agencies. In order to gain access to classified information, state and local law enforcement officials generally need to apply for and receive approval to have a federal security clearance. As implemented by the Coast Guard, the primary criterion for granting access to classified information is an individual’s need to know, which is defined as the determination made by an authorized holder of classified information that a prospective recipient requires access to specific classified information in order to perform or assist in a lawful and authorized governmental function. To obtain a security clearance, an applicant must complete a detailed questionnaire that asks for information on all previous employment, residences, and foreign travel and contacts that reach back 7 years. After submitting the questionnaire, the applicant then undergoes a variety of screenings and checks. 
Area Maritime Security Committees The Maritime Transportation Security Act, passed in the aftermath of the September 11 attacks and with the recognition that ports contain many potential security targets, provided for area maritime security committees—composed of federal, state, local, and industry members—to be established by the Coast Guard at ports across the country. A primary goal of these committees is to assist the local Captain of the Port—the senior Coast Guard officer who leads the committee—in developing a security plan, called an area maritime security plan, to address the vulnerabilities and risks in that port zone. The committees also serve as a link for communicating threats and disseminating security information to port stakeholders. As of June 2006, the Coast Guard had organized 46 area maritime security committees, covering the nation’s 361 ports. Interagency Operational Centers Another approach to improving information sharing and port security operations involves interagency operational centers—command centers that bring together the intelligence and operational efforts of various federal and nonfederal participants. These centers are to provide intelligence information and real-time operational data from sensors, radars, and cameras at one location to federal and nonfederal participants 24 hours a day. These interagency operational centers represent an effort to improve awareness of incoming vessels, port facilities, and port operations. In general, these centers are jointly operated by federal and nonfederal law enforcement officials. The centers can have command and control capabilities that can be used to communicate information to vessels, aircraft, and other vehicles and stations involved in port security operations. 
Port-Level Information Sharing Supported by National-Level Intelligence While area maritime security committees and interagency operational centers are port-level organizations, they are supported by, and provide support to, a national-level intelligence infrastructure. National-level departments and agencies in the intelligence and law enforcement communities may offer information that ultimately could be useful to members of area maritime security committees or interagency operational centers at the port level. These intelligence and law enforcement agencies conduct maritime threat identification and dissemination efforts in support of tactical and operational maritime and port security efforts, but most have missions broader than maritime activities as well. In addition, some agencies also have regional or field offices involved in information gathering and sharing. Area Maritime Security Committees Have Improved Information Sharing Ports Reviewed Showed Improvements in Timeliness, Completeness, and Usefulness of Shared Information Area maritime security committees have provided a structure to improve the timeliness, completeness, and usefulness of information sharing. A primary function served by the committees was to develop security plans for port areas—called area maritime security plans. The goal of these plans was to identify vulnerabilities to a terrorist attack in and around a port location and to develop strategies for protecting a wide range of facilities and infrastructure. In doing so, the committees established new procedures for sharing information by holding meetings on a regular basis, issuing electronic bulletins on suspicious activities around port facilities, and sharing key documents, including vulnerability assessments and the portwide security plan itself, according to committee participants. 
Also, participants noted that these committees allowed for both formal and informal stakeholder networking, which contributes to improvements in information sharing. Our ongoing work on the Coast Guard and maritime security, while not specifically focused on information sharing, continues to indicate that area maritime security committees are a useful tool for exchanging information. For example, we have done work at eight additional ports and found that stakeholders were still using the committees as a structured means to regularly share information about threat conditions and operational issues. In addition, Coast Guard personnel and port stakeholders are using the area maritime security committees to coordinate security and response training and exercises. Also, in the wake of Hurricane Katrina, Coast Guard officials shared information collaboratively through their area maritime security committees to determine when it was appropriate to close and then reopen a port for commerce. Committees Have Flexibility in Their Structure and in the Way in Which They Share Information While the committees are required to follow the same guidance regarding their structure, purpose, and processes, each committee has the flexibility to assemble and operate in a way that reflects the needs of its port area. Each port is unique in many ways, including the geographic area covered and the type of operations that take place there. These port-specific differences influence the number of members that participate, the types of state and local organizations that members represent, and the way in which information is shared. Interagency Operational Centers Have Also Improved Information Sharing Centers Process and Share Information on Operations Information sharing at interagency operational centers represents a step toward further improving information sharing, according to participants at the centers we visited. 
They said maritime security committees have improved information sharing primarily through a planning process that identifies vulnerabilities and mitigation strategies, as well as through development of two-way communication mechanisms to share threat information on an as-needed basis. In contrast, interagency operational centers can provide a continuous flow of information about maritime activities and involve various agencies directly in operational decisions using this information. Radar, sensors, and cameras offer representations of vessels and facilities. Other data are available from intelligence sources and include data on vessels, cargo, and crew. Greater information sharing among participants at these centers has also enhanced operational collaboration, according to participants. Unlike the area maritime security committees, these centers are operational in nature—that is, they have a unified or joint command structure designed to receive information and act on it. At the centers we visited, representatives from the various agencies work side by side, each having access to databases and other sources of information from their respective agencies. Officials said such centers help leverage the resources and authorities of the respective agencies. For example, if the Coast Guard determines that a vessel should be boarded and inspected, other federal and nonfederal agencies might join in the boarding to assess the vessel or its cargo, crew, or passengers for violations relating to their areas of jurisdiction or responsibility. Variations across Centers Affect Information Sharing The types of information and the way information is shared vary at the centers we visited, depending on their purpose and mission, leadership and organization, membership, technology, and resources, according to officials at the centers. 
In our report of April 2005, we detailed three interagency operational centers at Charleston, South Carolina; Norfolk, Virginia; and San Diego, California. As of June 2006, the Coast Guard has two additional interagency command centers under construction, in Jacksonville, Florida, and Seattle, Washington. Both are being established as Sector Command Centers—joint with the U.S. Navy—and are expected to be operational in 2006. Of the interagency centers we visited, the Charleston center had a port security purpose, so its missions were all security related. It was led by the Department of Justice (DOJ), and its membership included 4 federal agencies and 16 state and local agencies. The San Diego center had a more general purpose, so it had multiple missions covering not just port security but also search and rescue, environmental response, drug interdiction, and other law enforcement activities. It was led by the Coast Guard, and its membership included 2 federal agencies and 1 local agency. The Norfolk center had a port security purpose, but its mission was focused primarily on force protection for the Navy. It was led by the Coast Guard, and its membership included 2 federal agencies and no state or local agencies. As a result, the Charleston center shared information that focused on law enforcement and intelligence related to port security among a very broad group of federal, state, and local agency officials. The San Diego center shared information on a broader scope of activities (beyond security) among a smaller group of federal and local agency officials. The Norfolk center shared the most focused information (security information related to force protection) between two federal agencies. The centers also shared different information because of their technologies and resources. 
The San Diego and Norfolk centers had an array of standard and new Coast Guard technology systems and access to Coast Guard and various national databases, while the Charleston center had these as well as additional systems and databases. For example, the Charleston center had access to and shared information on Customs and Border Protection’s databases on incoming cargo containers from the National Targeting Center. In addition, Charleston had a pilot project with the Department of Energy to test radiation detection technology that provided additional information to share. The Charleston center was funded by a special appropriation that allowed it to use federal funds to pay for state and local agency salaries. This arrangement boosted the participation of state and local agencies, and thus information sharing beyond the federal government, according to port stakeholders in Charleston. While the San Diego center also had 24-hour participation by the local harbor patrol, that agency was paying its own salaries. Coast Guard Continues to Develop Sector Command Centers at Ports In April 2005, we reported that the Coast Guard planned to develop up to 40 of its own operational centers—called sector command centers—at additional ports. These command centers would provide local port activities with a unified command and improve awareness of the maritime domain through a variety of technologies. As of June 2006, the Coast Guard reported to us that 35 sector command centers have been created, and that these centers are the primary conduit for daily collaboration and coordination between the Coast Guard and its port partner agencies. The Coast Guard also reported that it has implemented a maritime monitoring system—known as the Common Operating Picture system—that fuses data from different sources. According to the Coast Guard, this system is the primary tool for Coast Guard commanders in the field to attain maritime domain awareness. 
In April 2005, we also reported that the Coast Guard requested in fiscal year 2006 over $5 million in funding to improve awareness of the maritime domain by continuing to evaluate the potential expansion of sector command centers to other port locations, and requested additional funding to train personnel in Common Operating Picture deployment at command centers and to modify facilities to implement the picture in command centers. In June 2006, the Coast Guard reported to us that no additional funding for this program was requested for fiscal year 2007. Coast Guard Report on Interagency Operational Centers Congress directed the Coast Guard to report on the existing interagency operational centers, covering such matters as the composition and operational characteristics of existing centers and the number, location, and cost of such new centers as may be required to implement maritime transportation security plans and maritime intelligence activities. This report, called for by February 2005, was issued by the Coast Guard in April 2005. While the report addresses the information sought by Congress, the report did not define the relationship between interagency operational centers and the Coast Guard’s own sector command centers. 
Port stakeholders reported to us the following issues as important factors to consider in any expansion of interagency operational centers: (1) purpose and mission—the centers could serve a variety of overall purposes, as well as support a wide number of specific missions; (2) leadership and organization—the centers could be led by several potential departments or agencies and be organized in a variety of ways; (3) membership—the centers could vary in membership in terms of federal, state, local, or private sector participants and their level of involvement; (4) technology deployed—the centers could deploy a variety of technologies in terms of networks, computers, communications, sensors, and databases; and (5) resource requirements—the centers could also vary in terms of resource requirements, which agency funds the resources, and how resources are prioritized. Other Ad Hoc Arrangements for Interagency Information Sharing Our work identified other interagency arrangements that facilitate information sharing and interagency operations in the maritime environment. One example is a predesignated single-mission task force, which becomes operational when needed. DHS established the Homeland Security Task Force, South-East—a working group consisting of federal and nonfederal agencies with appropriate geographic and jurisdictional responsibilities that have the mission to respond to any mass migration of immigrants affecting southeast Florida. When a mass migration event occurs, the task force is activated and becomes a full-time interagency effort to share information and coordinate operations to implement a contingency plan. Another example of an interagency arrangement for information sharing can occur in single-agency operational centers that become interagency to respond to specific events. For example, the Coast Guard has its own command centers for both District Seven and Sector Miami, located in Miami, Florida. 
While these centers normally focus on a variety of Coast Guard missions and are not normally interagency in structure, they have established protocols with other federal agencies, such as U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement, to activate a unified or incident command structure should it be needed. These Coast Guard centers make it possible to host interagency operations because they have extra space and equipment that allow for surge capabilities and virtual connectivity with each partner agency. Interagency Information-Sharing Concerns Go Beyond the Maritime Area While our findings on maritime information sharing are generally positive, we have some concerns regarding interagency information sharing that go far beyond the maritime issue area. In January 2005, we designated information sharing for homeland security as a high-risk area because the federal government still faces formidable challenges in gathering, identifying, analyzing, and disseminating key information within and among federal and nonfederal entities. While we recognize the efforts that some agencies have undertaken to break out of information “silos” and better share information, we reported in 2006 that more than 4 years after September 11, the nation still lacks comprehensive policies and processes to improve the sharing of information that is critical to protecting our homeland. We made several recommendations to the Director of National Intelligence, who is now primarily responsible for this effort, to ensure effective implementation of congressional information sharing mandates. We continue to review agencies and programs that have the goal of improving information sharing among federal, state, and local partners. For example, we have ongoing work assessing DHS’ efforts to enhance coordination and collaboration among interagency operations centers that operate around the clock to provide situational awareness. 
We plan to report on this later this year. Also, we have just begun work on state fusion centers—locations where homeland security-related information can be collected and analyzed—and their links to their relevant federal counterparts, which we plan to report on in 2007. Coast Guard Making Progress Granting Security Clearances Lack of Security Clearances May Limit Ability to Confront Terrorist Threats According to the Coast Guard and state and local officials we contacted for our 2005 report, the shared partnership between the federal government and state and local entities may fall short of its potential to fight terrorism because of the lack of security clearances. If state and local officials lack security clearances, the information they possess may be incomplete. According to Coast Guard and nonfederal officials, the lack of access to classified information may limit these officials’ ability to deter, prevent, and respond to a potential terrorist attack. While security clearances for nonfederal officials who participate in interagency operational centers are sponsored by DOJ and DHS, the Coast Guard sponsors security clearances for members of area maritime security committees. For the purposes of our 2005 report, we examined in more detail the Coast Guard’s efforts to address the lack of security clearances among members of area maritime security committees. Coast Guard Continues to Take Steps to Grant Additional Clearances to State, Local, and Industry Officials In April 2005, we reported that as part of its effort to improve information sharing at ports, the Coast Guard initiated a program in July 2004 to sponsor security clearances for members of area maritime security committees, but nonfederal officials have been slow in submitting their applications for a security clearance. 
We also reported that as of February 2005, only 28 of 359 nonfederal committee members who had a need to know had submitted the application forms for a security clearance. As shown in table 1, as of June 2006, of the 467 nonfederal committee members who had a need to know, 197 had submitted security clearance applications—20 received interim clearances, and 168 were granted a final clearance, which allows access to classified material. Data Are Being Used to More Effectively Manage the Security Clearance Program A key component of a good management system is to have relevant, reliable, and timely information available to assess performance over time and to correct deficiencies as they occur. The Coast Guard has two databases that contain information on the status of security clearances for state, local, and industry officials. The first database is a commercial off-the-shelf system that contains information on the status of all applications that have been submitted to the Coast Guard Security Center, such as whether a security clearance has been issued or whether personnel security investigations have been conducted. We reported in April 2005 that the Coast Guard was testing the database for use by field staff, but had not granted field staff access to the database. As of June 2006, the Coast Guard granted access to this database—named Checkmate—to field staff. The second database—an internally developed spreadsheet on the area maritime committee participants—summarizes information on the status of the security clearance program, such as whether officials have submitted their application forms and whether they have received their clearances. 
We reported in 2005 that these Coast Guard databases could be used to manage the state, local, and industry security clearance program, but that the Coast Guard had not developed formal procedures for using the data as a management tool to follow up on possible problems at the national or local level and to verify the status of clearances. While it is unclear whether the Coast Guard developed formal procedures, as of June 2006, the Coast Guard reported that it has developed guidance for using its data on committee participants. According to the Coast Guard, the guidance released to field commands regarding the state, local, and industry security clearance program clarified the process for nonfederal area maritime security committee members to receive clearances and specifically outlined responsibilities for working with applicants on completing required paperwork, including the application packages. The Coast Guard reported that as a result of this guidance, the number of received and processed security clearance packages for area maritime security committee members has increased. Concluding Observations As we reported in April 2005, and reaffirm today, effective information sharing among members of area maritime security committees and participants in interagency operational centers can enhance the partnership between federal and nonfederal officials, and it can improve the leveraging of resources across jurisdictional boundaries for deterring, preventing, or responding to a possible terrorist attack at the nation’s ports. The Coast Guard has recognized the importance of granting security clearances to nonfederal officials as a means to improve information sharing, and although we reported in 2005 that progress in moving these officials through the application process had been slow, as of June 2006 the Coast Guard’s efforts to process security clearances for nonfederal officials appear to have improved considerably. 
However, continued management attention and guidance about the security clearance process would strengthen the program, and it would reduce the risk that nonfederal officials may have incomplete information as they carry out their law enforcement activities. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. GAO Contacts and Staff Acknowledgments For information about this testimony, please contact Stephen L. Caldwell, Acting Director, Homeland Security and Justice Issues, at (202) 512-9610, or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found at the last page of this statement. Individuals making key contributions to this testimony include Susan Quinlan, David Alexander, Neil Asaba, Juliana Bahus, Christine Davis, Kevin Heinz, Lori Kmetz, Emily Pickrell, Albert Schmidt, Amy Sheller, Stan Stenersen, and April Thompson. Related GAO Products Coast Guard: Observations on Agency Performance, Operations, and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006. Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005. Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005.
Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: April 7, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Posthearing Questions Related to Aviation and Port Security. GAO-04-315R. Washington, D.C.: December 12, 2003. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.
Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments through Domestic Seaports. GAO-02-955TNI. Washington, D.C.: July 23, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Sharing information with nonfederal officials is an important tool in federal efforts to secure the nation's ports against a potential terrorist attack. The Coast Guard has lead responsibility in coordinating maritime information sharing efforts. The Coast Guard has established area maritime security committees--forums that involve federal and nonfederal officials who identify and address risks in a port. The Coast Guard and other agencies have sought to further enhance information sharing and port security operations by establishing interagency operational centers--command centers that tie together the efforts of federal and nonfederal participants. This testimony is a summary and update to our April 2005 report, Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention, GAO-05-394 . It discusses the impact the committees and interagency operational centers have had on improving information sharing and identifies any barriers that have hindered information sharing. Area maritime security committees provide a structure that has improved information sharing among port security stakeholders. 
At the four port locations GAO visited, federal and nonfederal stakeholders said that the newly formed committees were an improvement over previous information-sharing efforts. The types of information shared included assessments of vulnerabilities at port locations and strategies the Coast Guard intends to use in protecting key infrastructure. GAO's ongoing work indicates that these committees continue to be useful forums for information sharing. Interagency operational centers also allow for even greater information sharing because the centers operate on a 24-hour-a-day basis, and they receive real-time information from data sources such as radars and sensors. The Coast Guard has developed its own centers--called sector command centers--at 35 port locations to monitor information and to support its operations planned for the future. As of today, the relationship between the interagency operational centers and the sector command centers remains to be determined. In April 2005 the major barrier hindering information sharing was the lack of federal security clearances for nonfederal members of committees or centers. In April 2005, the Coast Guard issued guidance to field offices that clarified their role in obtaining clearances for nonfederal members of committees or centers. In addition, the Coast Guard did not have formal procedures that called for the use of data to monitor application trends. As of June 2006, guidance was put in place and, according to the Coast Guard, was responsible for an increase in security clearance applications under consideration by the Coast Guard. Specifically, as of June 2006, 188 out of 467 nonfederal members of area maritime security committees with a need to know had received some type of security clearance. This is an improvement from February 2005, when no security clearances had been issued to the 359 nonfederal area maritime security committee members with a need to know security information.
Scope and Methodology As part of our audit of the fiscal years 2009 and 2008 CFS, we evaluated the federal government’s financial reporting procedures and related internal control. Also, we determined the status of corrective actions by Treasury and OMB to address open recommendations relating to the processes used to prepare the CFS detailed in our previous reports. In our audit report on the fiscal year 2009 CFS, which is included in the fiscal year 2009 Financial Report of the United States Government (Financial Report), we discussed the material weaknesses related to the federal government’s processes used to prepare the CFS. These material weaknesses contributed to our disclaimer of opinion on the accrual-based consolidated financial statements and our conclusion that the federal government did not have effective internal control over financial reporting. We performed our audit of the fiscal years 2009 and 2008 CFS in accordance with U.S. generally accepted government auditing standards. We believe that our audit provided a reasonable basis for our conclusions in this report. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are summarized in the Agency Comments section of this report. Treasury’s comments are reprinted in appendix II and are also summarized in the Agency Comments section. New Internal Control Deficiencies Standard Operating Procedures for Preparing the CFS Over the past several years, Treasury has made progress in developing, documenting, and implementing internal control over the process for preparing the CFS through numerous standard operating procedures (SOP). However, we identified areas where SOPs were not developed, not implemented, or not fully documented for fiscal year 2009, or some combination of these.
Specifically, we found that SOPs were missing or inadequate in five key areas: (1) restatements and changes in accounting principles, (2) summary of significant accounting policies, (3) social insurance, (4) legal contingencies, and (5) analytical procedures. In connection with its role as preparer of the CFS, Treasury management is responsible for developing and documenting detailed policies, procedures, and practices for preparing the CFS and ensuring that internal control is built into and is an integral part of the CFS compilation process. Standards for Internal Control in the Federal Government calls for clear documentation of policies and procedures. Missing or inadequate policies and procedures increase the risk that errors in the compilation process could go undetected and result in misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. Restatements and Changes in Accounting Principles Treasury’s SOP entitled “Analyzing Agency Restatements,” which is intended to document Treasury’s procedures regarding prior period adjustments (i.e., restatements and changes in accounting principles) at the governmentwide level, was not adequate to help assure that prior period adjustments are properly identified and reported in the CFS. Specifically, we found that (1) not all procedures Treasury performs to identify, analyze, and report restated closing package data and changes in accounting principles in the CFS were fully documented in the SOP; (2) for some steps listed in the SOP, it was unclear who was responsible for performing the procedures; and (3) the SOP did not require an analysis of the overall impact of entities’ restatements on the CFS or documentation of the analysis and related conclusion. 
Recommendations for Executive Action To help assure that prior period adjustments are properly identified and reported in the CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to enhance the SOP entitled “Analyzing Agency Restatements” to (1) fully document all procedures related to identifying, analyzing, and reporting restated closing package data as well as changes in accounting principles; (2) clarify who is responsible for performing the procedures contained in the SOP; and (3) include procedures for analyzing the overall impact of entities’ restatements on the CFS and documenting the analysis and related conclusion. Significant Accounting Policies Treasury did not have written policies and procedures to help assure that all significant accounting policies and related party transactions were properly identified and disclosed in Note 1 – Summary of Significant Accounting Policies (Note 1) to the CFS. Statement of Federal Financial Accounting Standards (SFFAS) No. 32, Consolidated Financial Report of the United States Government Requirements: Implementing Statement of Federal Financial Accounting Concepts 4 “Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government”, requires that significant accounting policies be disclosed within Note 1 to the financial statements. In accordance with SFFAS No. 32, Note 1 should, among other things, “Summarize the accounting principles and methods of applying those principles that management has concluded are appropriate for presenting fairly the agencies’ assets, liabilities, net cost of operations, and changes in net position. Disclosure of accounting policies should identify and describe the accounting principles followed by the reporting agency and the methods of applying those principles. 
In general, the disclosure should encompass important judgments as to the valuation, recognition, and allocation of assets, liabilities, expenses, revenues and other financing sources.” However, we identified significant accounting policies that were not disclosed in Note 1 to the draft CFS, including accounting policies regarding federal debt securities held by the public, beneficial interest in trust, and certain securities and investments. In addition, Treasury had not disclosed that certain federal entities—primarily Treasury and the Federal Deposit Insurance Corporation along with the Board of Governors of the Federal Reserve System and the Federal Reserve Banks—engaged in a related party transaction involving concurrent actions, coordinated actions, or both to help stabilize the financial system and the housing market. We communicated these omissions to Treasury officials who included these disclosures in Note 1 to the fiscal year 2009 CFS. Recommendation for Executive Action To help assure complete and accurate disclosure of significant accounting policies and related party transactions in Note 1 to the CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop, implement, and document procedures for identifying, analyzing, compiling, and reporting all significant accounting policies and related party transactions at the governmentwide level. Social Insurance Information Treasury did not have adequate procedures to assure the accuracy of staff’s work performed in accordance with Treasury’s SOP entitled “Statement of Social Insurance, Social Insurance Note, and Required Supplementary Information.” The SOP provides detailed procedures for Treasury personnel to perform related to the reporting and disclosure of social insurance information. 
However, the SOP did not include adequate procedures to be performed to assure the accuracy of staff’s work, and we identified several errors made by Treasury personnel in preparing the social insurance sections of the CFS. Specifically, when reviewing the Statement of Social Insurance–related note in the draft fiscal year 2009 CFS, we found instances where certain social insurance–related information was inconsistent with the related information provided by the federal entities through the Governmentwide Financial Report System (GFRS). For example, the Department of Health and Human Services (HHS) and the Social Security Administration (SSA) both submitted demographic data in GFRS showing that the ultimate fertility rate was assumed to be reached in 2033; however, the draft CFS reported 2032. Additionally, the beneficiary-to-worker ratio reported in GFRS by HHS and SSA for Medicare and the Old-Age, Survivors, and Disability Insurance programs did not agree to the amounts reported in the draft CFS prepared by Treasury personnel. These inconsistencies were not identified through Treasury’s process to prepare the social insurance information for the CFS. We communicated these matters to Treasury officials who corrected this information for disclosure in the fiscal year 2009 CFS. Recommendations for Executive Action We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to (1) enhance the SOP entitled “Statement of Social Insurance, Social Insurance Note, and Required Supplementary Information” to include procedures for assuring the accuracy of staff’s work related to preparing the social insurance information for the CFS and (2) implement and document such procedures.
Legal Contingencies Treasury did not have adequate procedures to assure the accuracy of staff’s work performed in accordance with Treasury’s SOP entitled “Federal Agency Legal Letter Analysis.” The SOP has detailed procedures for Treasury staff to perform with regard to reviewing and analyzing entities’ legal representation letters and related management schedules. As part of the legal representation letter process, the SOP requires Treasury staff to compare the entities’ and the Department of Justice (Justice) lawyer’s assessments of the major cases and document any differences on a Schedule of Differences. The schedule is to be used by Treasury to identify, research, and resolve significant inconsistencies and also to make any adjustments to the CFS to help assure complete and accurate reporting of legal contingencies. However, the SOP did not include adequate procedures to be performed to assure the accuracy of staff’s work, and we identified several errors made by Treasury personnel in preparing the Schedule of Differences. Specifically, our review of the Schedule of Differences identified nine additional cases in fiscal year 2009 where there were significant differences between the entities’ and Justice’s assessments, but such differences were not documented on the Schedule of Differences. Treasury did not have adequate procedures to detect that the Treasury staff had not identified these differences. Lack of identification and follow-up on inconsistent assessments by Justice and entity legal counsels impairs Treasury’s ability to determine the proper accounting treatment in the CFS for the related legal cases. We communicated these matters to Treasury officials who subsequently corrected some of these inconsistencies in the fiscal year 2009 Schedule of Differences.
Recommendations for Executive Action We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to (1) enhance the SOP entitled “Federal Agency Legal Letter Analysis” to include procedures for assuring the accuracy of staff’s work related to preparing the Schedule of Differences and (2) implement and document such procedures. Analytical Procedures Treasury’s SOP entitled “Preparing the Financial Report of the U.S. Government” provides that Treasury staff are to perform an overall analysis of the balances reported in the CFS, including a review for reasonableness of changes from the prior year to the current year. The analysis should assist Treasury in identifying any significant changes in balances from year to year that should be disclosed in the Financial Report. The SOP did not include adequate procedures to be performed to assure the accuracy of the overall analysis, and we identified errors in the formulas used in the analysis and inadequacies in the explanations provided by Treasury personnel through performance of the analytical procedures. Our review of the overall line item analysis covering the Balance Sheets, Statements of Net Cost, Statements of Operations and Changes in Net Position, and related notes identified incorrect formulas used to calculate the percentage change from year to year. Specifically, Treasury calculated the change using the current year balances, rather than the prior year balances, as the denominator for the calculation. This error in the formulas resulted in incorrect percentage changes being used for the analysis. For example, the percentage change between years determined for the Investments in Government Sponsored Enterprises line item was calculated as 89 percent, when the actual change was 824 percent. We communicated the formula errors to Treasury officials who took action to correct the errors.
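The denominator error described above can be sketched with hypothetical balances. The figures below are chosen only for illustration because they are roughly consistent with the 89 percent and 824 percent figures cited in this report; they are not the actual line item amounts from the CFS:

```python
# Illustrative check of the denominator error described above.
# Hypothetical balances, in billions of dollars (not actual CFS amounts).
prior_year = 10.0
current_year = 92.4

change = current_year - prior_year

# Correct percentage change: the prior-year balance is the denominator.
correct_pct = change / prior_year * 100      # approximately 824 percent

# Erroneous formula: the current-year balance used as the denominator.
incorrect_pct = change / current_year * 100  # approximately 89 percent

print(round(correct_pct), round(incorrect_pct))
```

Because the denominator in the erroneous formula grows along with the change itself, the computed percentage is capped below 100 percent for any increase, which is why an eightfold increase was understated as an 89 percent change.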
We also noted that Treasury staff’s documented explanations for certain significant fluctuations were inadequate. For example, the Department of Veterans Affairs’ (VA) gross costs in the CFS Statements of Net Cost changed from $435 billion in 2008 to ($39) billion in 2009, a change of over $473 billion and more than 100 percent. The significant decrease in gross cost was primarily attributable to VA’s reestimation of its actuarial liability for, and anticipated cost of, veterans’ compensation benefits. However, the explanation provided for this change by Treasury in its documentation of the analysis included a discussion of VA’s and several other federal entities’ changes in costs and did not specifically explain the primary reasons for the significant decrease in VA’s gross cost. As a result of the incorrect formulas used in the overall analysis and the inadequate explanations of changes, there is an increased risk that Treasury will fail to properly report significant changes in certain balances in the Financial Report. Recommendations for Executive Action We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to (1) enhance the SOP entitled “Preparing the Financial Report of the U.S. Government” to include procedures for assuring the accuracy of staff’s work related to performing analytical procedures and (2) implement and document such procedures. Status of Recommendations from Prior Reports As part of our audit of the fiscal years 2009 and 2008 CFS, we determined the status of corrective actions by Treasury and OMB to address open recommendations detailed in our previous reports. Of the 44 recommendations that are listed in appendix I, 2 were closed and 42 remained open as of February 19, 2010, the date of our report on the audit of the fiscal year 2009 CFS. Appendix I includes the status of recommendations from seven prior reports that were open at the beginning of our fiscal year 2009 audit.
Recommendations from these reports that were closed in prior years are not included in this appendix. Appendix I includes the status according to Treasury and OMB, as well as our own assessments. Explanations are included in the status of recommendations per GAO when Treasury and OMB disagreed with our recommendation or our assessment of the status of a recommendation. We will continue to monitor Treasury’s and OMB’s progress in addressing GAO’s recommendations. Agency Comments OMB Comments In oral comments on a draft of this report, OMB stated that it concurred with the new findings and related recommendations in this report. Treasury Comments In written comments dated July 21, 2010, on a draft of this report, which are reprinted in appendix II, Treasury’s Fiscal Assistant Secretary concurred with our findings and noted that the agency has already made significant progress in improving its policies and procedures for CFS preparation since the issuance of the report. Further, Treasury stated that it expects to implement additional recommendations by the end of fiscal year 2010, and that it will use GAO’s findings to focus its efforts on improving the central accounting and compilation activities associated with the CFS. Also, while noting that they may take several years to provide measurable reductions, Treasury cited plans it has initiated to address long-standing material issues, including the development of an infrastructure for General Fund accounting and plans for automating the interagency agreement process. This report contains recommendations to the Secretary of the Treasury. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days after the date of this report.
A written statement must also be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of that report. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, the Deputy Director for Management and Chief Performance Officer of OMB, and the Controller of OMB’s Office of Federal Financial Management. This report also is available at no charge on GAO’s Web site at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Status of Treasury’s and OMB’s Progress in Addressing GAO’s Prior Year Recommendations for Preparing the CFS Recommendation GAO-04-45 (results of the fiscal year 2002 audit) As the Department of the Treasury (Treasury) is designing its new financial statement compilation process to begin with the fiscal year 2004 CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of the Office of Management and Budget (OMB), to develop reconciliation procedures that will aid in understanding and controlling the net position balance as well as eliminate the plugs previously associated with compiling the CFS.
To eliminate or explain adjustments to net position, Treasury has continued to eliminate, at the consolidated level, intragovernmental activity and balances using formal balanced accounting entries (via Reciprocal Categories) and has continued its analysis of transactions that contribute to the unmatched transactions and balances adjustment. Major contributors to the plug are transactions with the General Fund (Reciprocal Category 29). In fiscal year 2009, a new Treasury Task Group was formed to develop the financial statements for the General Fund, with the goal of preparing them for financial audit. In the interim, Treasury continues to separately identify General Fund transactions to facilitate agencies’ reconciliation of those transactions on a quarterly basis. Also throughout fiscal year 2009, Treasury continued its efforts on particular areas (fiduciary and employee benefits) and increased its analysis and monitoring efforts on agencies’ explanations of material differences with their trading partners. Open. Treasury has continued developing reconciliation procedures to aid in understanding the net position balance but remains unable to eliminate the plugs associated with compiling the CFS. In addition, there are hundreds of billions of dollars of unreconciled differences between the General Fund and federal entities related to appropriation and other intragovernmental transactions. The ability to reconcile these transactions is hampered because only some of the General Fund transactions are reported in Treasury’s financial statements. As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to develop policies and procedures that document how OMB will enforce the business rules provided in OMB Memorandum M-07-03, Business Rules for Intragovernmental Transactions. OMB will continue its efforts to implement this recommendation. Open.
As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to require that significant differences noted between business partners be resolved and the resolution be documented. OMB will continue its efforts to implement this recommendation. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to design procedures that will account for the difference in intragovernmental assets and liabilities throughout the compilation process by means of formal consolidating and elimination accounting entries. Treasury has designed formal consolidating and eliminating procedures to account for these differences and has implemented them. See status of recommendation no. 02-4. Open. Treasury’s formal consolidating and eliminating accounting entries could not fully account for the difference in intragovernmental assets and liabilities during the fiscal year 2009 compilation process. For example, there are hundreds of billions of dollars of unreconciled differences between the General Fund and federal entities related to appropriation and other intragovernmental transactions. These amounts are not eliminated by Treasury’s formal consolidating and elimination accounting entries because Treasury’s ability to reconcile these transactions is hampered because not all General Fund transactions are reported in Treasury’s financial statements. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop solutions for intragovernmental activity and balance issues relating to federal agencies’ accounting, reconciling, and reporting in areas other than those OMB now requires be reconciled, primarily areas relating to appropriations. 
Treasury continues to provide to federal agencies information from its Central Accounting and Reporting System (STAR) related to special and trust fund appropriations and nonexpenditure transfers for the agencies’ use in reconciling with this centrally reported data. The agencies were required to reconcile with this information in fiscal year 2009 on a quarterly basis. In addition, in fiscal year 2010, the agencies will also be required to reconcile their fund balance with Treasury and appropriations received against STAR on a quarterly basis. Open. Although Treasury’s analysis of agencies’ transactions with the General Fund is ongoing, the ability to reconcile these transactions is hampered because only some of the General Fund transactions are reported in Treasury’s financial statements. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to reconcile the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year. Examples of these differences would include capitalized purchases, such as inventory or equipment, and deferred revenue. The current reconciliation of intragovernmental activity does account for differences caused by asset capitalization and agency advances or deferred revenue. Given current intragovernmental differences, further resolution of this activity is contingent on these differences being materially resolved. See status of recommendation no. 02-4. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). 
Treasury should report “net unreconciled differences” included in the net operating results line item as a separate reconciling activity in the reconciliation statement. These unmatched transactions and balances will continue to be reflected in the Statements of Operations and Changes in Net Position until they are materially resolved. However, based on its analyses of these unmatched transactions and balances, Treasury believes that these unmatched transactions and balances are primarily caused by unreconciled transactions that affect only the amounts reported on an accrual basis of accounting (net operating cost) and, therefore, these unmatched transactions and balances should not be included as a separate reconciling item on this statement. Treasury will continue its analysis in fiscal year 2010. Open. Treasury has not implemented a process that demonstrates the amount, if any, of unmatched transactions and balances that should be included as a separate reconciling item in the reconciliation statement. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should develop policies and procedures to ensure completeness of reporting and document how all the applicable components reported in the other consolidated financial statements (and related note disclosures included in the CFS) were properly reflected in the reconciliation statement. Treasury will continue to improve the completeness and consistency of the information in this reconciliation statement and will continue to resolve significant inconsistencies, if any, to the applicable and related components reported in the other basic financial statements, and in the related note disclosures included in the CFS. Open. 
Treasury has not fully developed a process to ensure the completeness of reporting of information on the reconciliation statement and to document how all applicable components reported elsewhere in the CFS are properly reflected in the reconciliation statement. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should establish reporting materiality thresholds for determining which agency financial statement activities to collect and report at the governmentwide level to assist in ensuring that the reconciliation statement is useful and conveys meaningful information. Treasury will continue to update and revise its materiality policy in fiscal year 2010 to address remaining GAO concerns. Open. If Treasury chooses to continue using information from both federal agencies' financial statements and STAR, Treasury should demonstrate how the amounts from STAR reconcile to federal agencies' financial statements. Treasury has elected to continue the use of information from STAR and has identified the material areas where STAR data does not reconcile to federal agencies' financial statements. Treasury intends to continue working on these material areas in fiscal year 2010. Open. If Treasury chooses to continue using information from both federal agencies' financial statements and from STAR, Treasury should identify and document the cause of any significant differences, if any are noted. See status of recommendation no. 02-15. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements.
Treasury should document the consistency of the significant line items on this statement with federal agencies' audited financial statements. Treasury has elected to continue to use information from STAR. Treasury will document the consistency of the significant line items on this statement with federal agencies' audited financial statements to the extent possible during fiscal year 2010. See status of recommendation no. 02-15. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should explain and document the differences between the operating revenue amount reported on the Statement of Operations and Changes in Net Position and unified budget receipts reported on the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Treasury will continue with its efforts to reconcile budgetary receipts to net operating revenue. During fiscal year 2009, Treasury made significant progress with identifying and documenting the larger differences between budgetary receipts and net operating revenue and automated a portion of this reconciliation in the Governmentwide Financial Report System (GFRS). Open. OMB and Treasury continue to work toward establishing effective processes and procedures for identifying, resolving, and explaining material differences in net outlays and other components of the deficit between Treasury's central accounting records and information reported in entity financial statements and underlying entity financial information and records.
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board. Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included but is not included is immaterial. Treasury developed a reporting entity policy in fiscal year 2009. Treasury will continue to revise the policy in fiscal year 2010 to address remaining GAO concerns. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity's assets, liabilities, and revenues. See status of recommendation no. 02-22. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. See status of recommendation no. 02-22. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that federal agencies provide adequate information in their legal representation letters regarding the expected outcomes of the cases.
During fiscal year 2010, Treasury and OMB will continue to work with federal agencies to help better ensure that adequate information is provided in the legal representation letters regarding the expected outcomes of the cases. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department's Treaties in Force). OMB and Treasury undertook a number of corrective actions in fiscal year 2009 that resulted in published policy guidance in OMB Circular A-136 and procedural guidance in the Treasury Financial Manual regarding the proper reporting and disclosure of treaties. OMB and Treasury will continue to work with federal agencies in fiscal year 2010 to help ensure proper and complete recognition and disclosure of contingencies related to treaties and other international agreements. Open. As noted in Note 22, Contingencies, a comprehensive analysis to determine any financial obligation or possible exposure to loss and its related effect on the CFS has not yet been performed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS.
Specifically, these policies and procedures should require that federal agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies. See status of recommendation no. 02-37. Open. See status of recommendation no. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency. See status of recommendation no. 02-37. Open. See status of recommendation no. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations. See status of recommendation no. 02-37. Open. See status of recommendation no. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS.
Specifically, these policies and procedures should require that federal agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. See status of recommendation no. 02-37. Open. See status of recommendation no. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to ensure that the note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs meets the requirements of Statement of Federal Financial Accounting Standards (SFFAS) No. 5, Accounting for Liabilities of the Federal Government, paragraph 106, which requires that when financial information pursuant to Financial Accounting Standards Board standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the "risk assumed" approach. This required information was requested from federal agencies for disclosure in the required supplementary information (risk assumed) section of the fiscal year 2009 CFS. In addition, Treasury completed an analysis of the risk assumed reporting by the agencies to document agency compliance with the applicable reporting requirements. Treasury will continue working with the federal agencies to ensure proper and complete disclosure of this information in fiscal year 2010. Open. Treasury's reporting in this area is not complete. The CFS should include all major federal insurance programs in the risk assumed reporting and analysis.
Also, since future events are uncertain, risk assumed information should include indicators of the range of uncertainty around expected estimates, including indicators of the sensitivity of the estimate to changes in major assumptions. GAO-04-866 (results of the fiscal year 2003 audit) The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop a process that will allow full reporting of the changes in cash balance of the U.S. government. Specifically, the process should provide for reporting on the change in cash reported on the consolidated balance sheet, which should be linked to cash balances reported in federal agencies’ audited financial statements. In fiscal year 2009, Treasury disclosed the change in cash balances as reported on the Balance Sheet on the Statement of Changes in Cash Balance. Open. Treasury has not developed a process that will allow full reporting of the changes in cash balance. For example, Treasury had not identified some of the significant changes in cash that we had noted as needing to be reported in the fiscal year 2009 Statement of Changes in Cash Balance. The Director of OMB should direct the Controller of OMB, in coordination with Treasury’s Fiscal Assistant Secretary, to work with the Department of Justice (Justice) and certain other executive branch federal agencies to ensure that these federal agencies report or disclose relevant criminal debt information in conformity with generally accepted accounting principles (GAAP) in their financial statements and have such information subjected to audit. OMB, working with Treasury, Justice, and certain other agencies, will continue working to address this recommendation. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. Treasury will include criminal debt information in the CFS as it becomes available. 
See status of recommendation no. 03-8. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to modify Treasury's plans for the new closing package to (1) require federal agencies to directly link their audited financial statement notes to the CFS notes and (2) provide the necessary information to demonstrate that all of the five principal consolidated financial statements are consistent with the underlying information in federal agencies' audited financial statements and other financial data. Treasury continues to use its CFS compilation process, GFRS, to provide direct linkage from the agency audited financial statements to most of the CFS principal statements. However, additional work is needed related to the two budgetary principal financial statements. See status of recommendation no. 02-17. With regard to note disclosures, GFRS note references (linkages), along with additional Treasury analysis, are designed to link the CFS and agency note disclosures. Open. Treasury's process for compiling the CFS demonstrated that amounts in the Statement of Social Insurance were consistent with the underlying federal agencies' audited financial statements and that the Balance Sheet and the Statement of Net Cost were also consistent with federal entities' financial statements prior to eliminating intragovernmental activity and balances. However, Treasury's process did not ensure that the information in the remaining three principal financial statements was fully consistent with the underlying information in federal entities' audited financial statements and other financial data. GAO-05-407 (results of the fiscal year 2004 audit) The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require and maintain appropriate supporting documentation for all journal vouchers recorded in the CFS.
During fiscal year 2009, Treasury ensured that journal vouchers included appropriate supporting documentation. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require that Treasury employees contact and document communications with federal agencies before recording journal vouchers to change agency audited closing package data. Treasury will continue its efforts to ensure that all journal vouchers are communicated to the federal agencies before recording them in GFRS. Open. We believe that Treasury should be required to contact federal entities to resolve any discrepancies between federal entities’ audited closing packages and audited financial statements and discuss any other situations that require adjustments to federal entities’ audited closing package data because Treasury could incorrectly adjust federal entities’ audited information. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require and document management reviews of all procedures that result in data changes to the CFS. During fiscal year 2009, Treasury ensured that management reviews of procedures that resulted in data changes to the CFS were required and documented. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to assess the infrastructure associated with the compilation process and modify it as necessary to achieve a sound internal control environment. Treasury continued to make improvements to its internal control infrastructure during fiscal year 2009. Treasury has updated its documentation to help ensure that control procedures are in place at all critical areas of the CFS preparation process and is working to ensure that these controls are adequately monitored and assessed each year. Open. 
Treasury has not completed an assessment to ensure that it has sufficient personnel with specialized financial reporting experience to achieve a sound internal control environment to carry out the compilation process and help ensure reliable financial reporting by the reporting date. GAO-06-415 (results of the fiscal year 2005 audit) The Director of OMB should direct the Controller of the Office of Federal Financial Management to consider, in order to provide audit assurance over federal agencies' closing packages, not waiving the closing package audit requirements for any verifying agency in future years, such as the Tennessee Valley Authority (TVA). OMB will continue working with TVA so that it submits its audited closing package by the required financial reporting deadline. Of note, TVA moved closer to submitting its closing package by the required year-end reporting deadline during fiscal year 2009. Open. OMB has not yet reasonably ensured that audit assurance is provided over all federal agencies' closing package information. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop policies and procedures for monitoring internal control to help ensure that (1) audit findings are promptly evaluated; (2) proper actions are determined in response to audit findings and recommendations, such as a documented plan of action with milestones for short-term and long-range solutions; and (3) all actions that correct or otherwise resolve the audit findings are completed within established time frames. Treasury has designed a process to identify and execute the actions necessary to address GAO audit findings. Efforts will continue in fiscal year 2010 to track and resolve audit findings. Open.
Although Treasury has developed a process, the process did not ensure that corrective action plans were appropriately updated and monitored to help ensure effective resolution of audit findings within established time frames. GAO-07-805 (results of the fiscal year 2006 audit) The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the governmentwide legal representation letter. Treasury, in coordination with OMB, will continue working to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the governmentwide legal representation letter. Open. The federal government was unable to provide us with adequate legal representation regarding the accrual-based consolidated financial statements for fiscal year 2009. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop a process for obtaining sufficient information from federal agencies to enable Treasury and OMB to adequately monitor federal agencies’ efforts to reconcile intragovernmental activity and balances with their trading partners. This information should include (1) the nature and a detailed description of the significant differences that exist between trading partners’ records of intragovernmental activity and balances, (2) detailed reasons why such differences exist, (3) details of steps taken or being taken to work with federal agencies’ trading partners to resolve the differences, and (4) the potential outcome of such steps. 
During fiscal year 2009, Treasury continued a process for obtaining sufficient information from federal agencies to enable Treasury and OMB to adequately monitor federal agencies’ efforts to reconcile intragovernmental activity and balances with their trading partners. This information included (1) the nature and a detailed description of the significant differences that exist between trading partners’ records of intragovernmental activity and balances, (2) detailed reasons why such differences exist, (3) details of steps taken or being taken to work with federal agencies’ trading partners to resolve the differences, (4) the potential outcome of such steps, and (5) additional information related to their intragovernmental differences that would allow Treasury to correct these differences within GFRS. This effort will continue in fiscal year 2010, including following up with any federal agencies that either did not comply or provided incomplete information related to this new requirement. Open. While Treasury continued to take action in this area, we identified instances in which the procedures that Treasury designed to monitor intragovernmental transactions and balances had not been implemented consistently. For example, some review accountants did not follow up for additional explanations as necessary with agencies that either did not respond or had incomplete responses. GAO-08-748 (results of the fiscal year 2007 audit) The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance and fully document all practices referred to in the standard operating procedure (SOP) entitled “Preparing the Financial Report of the U.S. Government” to better ensure that practices are proper and complete and can be consistently applied by staff members. In fiscal year 2009, Treasury updated this SOP by significantly expanding the functions covered by this SOP and increasing the level of detail related to all the key procedures. 
Treasury will work to ensure full compliance with this SOP, as well as all other significant policies, during fiscal year 2010, to address remaining GAO concerns. Open. Although Treasury made improvements to this SOP, key practices and procedures—including those related to preparing the budget statements—were excluded. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance Treasury's checklist or design an alternative and use it to adequately and timely document Treasury's assessment of the relevance, usefulness, or materiality of information reported by the federal agencies for use at the governmentwide level. During fiscal year 2009, Treasury enhanced its analysis procedures to take into account agency-specific disclosures and assess their impact at the governmentwide level. Treasury will update its checklist during fiscal year 2010, as necessary, to comply with federal GAAP. Open. Although Treasury made significant improvements in documenting its assessment of agency information, we found that the assessment was not complete. The Director of OMB should direct the Controller of OMB's Office of Federal Financial Management, in coordination with Treasury's Fiscal Assistant Secretary, to develop formal processes and procedures for identifying and resolving any material differences in distributed offsetting receipt amounts included in the net outlay calculation of federal agencies' Statement of Budgetary Resources and the amounts included in the computation of the budget deficit in the CFS. OMB, working jointly with Treasury, will enhance its efforts to implement this recommendation. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB's Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS.
Treasury is currently revising its internal control procedures to formalize monitoring and assessment of the effectiveness of internal control over the preparation of the CFS. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. Treasury will continue to consider what information can be obtained during the interim period to facilitate the year-end CFS preparation process. Open. GAO-09-387 (results of the fiscal year 2008 audit) The Secretary of the Treasury should direct the Fiscal Assistant Secretary to design, document, and implement policies and procedures to identify and eliminate intragovernmental payroll tax amounts at the governmentwide level when compiling the CFS. During fiscal year 2009, Treasury began documenting its procedures for identifying and eliminating intragovernmental payroll tax amounts at the governmentwide level. Treasury will continue to update and revise the policy in fiscal year 2010 to address remaining GAO concerns. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop, document, and implement processes and procedures for preparing and reviewing the Management’s Discussion and Analysis (MD&A) and “The Federal Government’s Financial Health: A Citizen’s Guide to the Financial Report of the United States Government” sections of the Financial Report of the U.S. Government (Financial Report) to help assure that information reported in these sections is complete, accurate, and consistent with related information reported elsewhere in the Financial Report. 
During fiscal year 2009, Treasury prepared a new SOP pertaining to preparation of the MD&A section of the Financial Report and the Citizen’s Guide (Guide). Treasury noted that new steps and processes were being implemented during fiscal year 2009 to improve the efficiency and integrity of financial analysis in the MD&A and the Guide and that it was likely that the SOP was going to have to evolve as those steps were put into practice and refined. During fiscal year 2010, Treasury will be working with GAO to resolve its comments received in fiscal year 2009. These changes notwithstanding, the SOP will have to further evolve to accommodate the phased, 3-year implementation of SFFAS No. 36, Reporting Comprehensive, Long-Term Fiscal Projections for the U.S. Government. The steps that the government will be taking to comply with SFFAS No. 36, which will ultimately be reflected in the SOP, remain under discussion. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish and document criteria to be used in identifying federal entities as significant to the CFS for purposes of obtaining assurance over the information being submitted by those entities for the CFS. During fiscal year 2009, Treasury documented its criteria for identifying significant entities. Treasury and OMB will revise the criteria to address remaining GAO concerns. Open. The criteria developed by Treasury are in conflict with criteria developed and implemented by OMB for identifying significant entities. In addition, implementation of the policy as currently designed will not result in obtaining, in a timely manner, audit assurance over the information reported by newly identified significant entities for use in the CFS. 
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement policies and procedures for assessing and documenting, on an annual basis, which entities meet the criteria established for identifying federal entities as significant to the CFS. During fiscal year 2009, Treasury documented its procedures for identifying significant entities based on the criteria for these entities. Treasury will document and implement the policies related to significant entities based on the revised criteria from recommendation no. 08-03.

Appendix II: Comments from the Department of the Treasury

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, the following individuals made key contributions to this report: Louise DiBenedetto, Assistant Director; John Ahern; Shawkat Ahmed; William Boutboul; Darryl Chang; Malissa Livingston; Susan Mata; Thanomsri Piyapongroj; Taya Tasse; and Cindy Tsao.

Since GAO's first audit of the fiscal year 1997 consolidated financial statements of the U.S. government (CFS), material weaknesses in internal control and other limitations on the scope of GAO's work have prevented GAO from expressing an opinion on the consolidated financial statements, other than the Statement of Social Insurance (accrual-based consolidated financial statements). The Department of the Treasury (Treasury), in coordination with the Office of Management and Budget (OMB), is responsible for preparing the CFS. As part of the fiscal year 2009 CFS audit, GAO identified material weaknesses and other control deficiencies in Treasury's processes used to prepare the CFS that warrant management's attention and corrective action.
The purpose of this report is to (1) provide details on new control deficiencies GAO identified during its audit of the fiscal year 2009 CFS that related to the preparation of the CFS, (2) recommend improvements, and (3) provide the status of corrective actions taken to address GAO's previous 44 recommendations in this area. During its audit of the fiscal year 2009 CFS, GAO identified continuing and new control deficiencies in the federal government's processes used to prepare the CFS. The control deficiencies GAO identified involved (1) enhancing policies and procedures for identifying and analyzing federal entities' reported restatements and changes in accounting principles; (2) establishing and documenting policies and procedures for disclosing significant accounting policies and related party transactions; (3) establishing and documenting procedures to assure the accuracy of Treasury staff's work in three areas: social insurance, legal contingencies, and analytical procedures; and (4) various other control deficiencies identified in previous years' audits. These control deficiencies contribute to material weaknesses in internal control over the federal government's ability to (1) adequately account for and reconcile intragovernmental activity and balances between federal entities; (2) ensure that the accrual-based consolidated financial statements were consistent with the underlying audited entities' financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles; and (3) identify and either resolve or explain material differences between components of the budget deficit reported in Treasury's records, which are used to prepare the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities, and related amounts reported in federal entities' financial statements and underlying financial information and records.
As a result of these and other material weaknesses, the federal government did not have effective internal control over financial reporting. Of the 44 open recommendations GAO reported in April 2009, 2 were closed and 42 remained open as of February 19, 2010, the date of GAO's report on its audit of the fiscal year 2009 CFS. GAO will continue to monitor the status of corrective actions taken to address the 10 new recommendations as well as the 42 open recommendations from prior years. |
Background

CDBG Program

The Housing and Community Development Act of 1974 created the CDBG program to develop viable urban communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for LMI persons. Program funds can be used for housing, economic development, neighborhood revitalization, and other community development activities. After funds are set aside for special statutory purposes—the Indian Community Development Block Grant program and insular areas—the annual CDBG appropriation is allocated to entitlement communities and states. Entitlement communities generally are principal cities of metropolitan statistical areas, other metropolitan cities with populations of at least 50,000, and qualified urban counties with populations of 200,000 or more (excluding the populations of entitlement cities). States distribute their allocated CDBG funds to nonentitlement communities. After the set-asides, 70 percent of CDBG funds are allocated to entitlement communities, and the remaining 30 percent are allocated to states to distribute to nonentitlement communities. In fiscal year 2015, Congress appropriated about $3.1 billion for the Community Development Fund, $66 million of which was set aside for Native American tribes. Of the remaining amount, roughly $2.1 billion was allocated to entitlement communities, roughly $900 million was allocated to states to distribute to nonentitlement communities, and roughly $7 million was set aside for insular areas. Both entitlement and nonentitlement communities must use CDBG funds toward one of three national objectives: (1) benefitting LMI persons, (2) aiding in the prevention or elimination of slums or blight, or (3) meeting urgent community development needs. At least 70 percent of all funds allocated to entitlement communities and states must be used toward the first objective over a period of 1, 2, or 3 years (as specified by the grantee).
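As a rough illustration of the allocation mechanics above, the 70/30 split can be sketched in a few lines of Python. The dollar figures are the approximate fiscal year 2015 amounts cited in the text, and the statutory set-aside handling is simplified.

```python
def split_cdbg(appropriation, set_asides):
    """Apply set-asides, then split the remainder 70/30 between
    entitlement communities and states (for nonentitlement
    communities), as described above."""
    remainder = appropriation - sum(set_asides.values())
    return {
        "entitlement": remainder * 0.70,
        "states_nonentitlement": remainder * 0.30,
    }

# Approximate fiscal year 2015 figures from the text.
fy2015 = split_cdbg(
    appropriation=3_100_000_000,
    set_asides={"tribal": 66_000_000, "insular": 7_000_000},
)
# Yields roughly $2.1 billion for entitlement communities and roughly
# $900 million for states, consistent with the amounts reported above.
```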
Figure 1 illustrates the distribution of CDBG funds. For the LMI national objective, HUD defines an activity to be principally benefitting LMI persons if at least 51 percent of the population of the community or 51 percent of project beneficiaries meet HUD’s LMI definition, which we discuss in detail later. There are several ways a community can qualify for CDBG funds under the objective of principally benefitting LMI persons:

- If a project is designed to serve the entire community or a smaller area within a community—for example, a waste water project—the community or project area would need to qualify by showing that the entire community or project area is majority LMI persons. HUD refers to these projects as area-benefit activities.

- If a project is designed to serve a smaller area within a community—for example, a sidewalk for a neighborhood—the community may need to conduct a local income survey to demonstrate that the majority of residents to be served by that project are LMI persons. We refer to this type of survey as a local income survey since it is used to collect income data from a community or individuals that reside in a project service area.

- If a project is designed to serve a specific clientele—for example, a senior center or homeless shelter—the community may be able to qualify for funding under the LMI objective by showing that the beneficiaries of the project fall under one of the population categories that HUD presumes to be LMI, such as the elderly or homeless.

- If a project directly benefits LMI persons—for example, rehabilitation of single-family housing that will be occupied by a LMI household or a job creation activity that will create jobs the majority of which will be held by LMI persons—the community can qualify for funding under the LMI objective by showing that the direct beneficiaries are LMI.
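The 51 percent area-benefit test described above can be sketched as a simple check. The function name and inputs are illustrative, not HUD's; actual determinations rest on the LMI summary data or a qualifying survey.

```python
def area_benefit_qualifies(lmi_persons, total_persons, threshold=0.51):
    """Return True if the service area's LMI share meets the
    51 percent threshold for the area-benefit test described above."""
    if total_persons <= 0:
        raise ValueError("service area must have residents")
    return lmi_persons / total_persons >= threshold

print(area_benefit_qualifies(620, 1_000))  # 62 percent LMI: qualifies
print(area_benefit_qualifies(490, 1_000))  # 49 percent LMI: does not
```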
For the portion of funds that are allocated to states to distribute to nonentitlement communities, HUD provides states with flexibility in determining specific requirements, including those related to program oversight and selecting activities to fund. Specifically, HUD’s CDBG regulations state that HUD will give “maximum feasible deference” to the state's interpretation of the statutory and regulatory requirements, provided that these interpretations are not plainly inconsistent with the statute. States formulate community development objectives for their state and determine how to distribute funds among nonentitlement communities, which submit applications for funding to their respective state. States can use a formula, competition, open application, or a combination of methods to distribute funds to nonentitlement communities.

HUD’s Low- and Moderate-Income Summary Data

For projects designed to serve either the entire community or a smaller area within a community, HUD produces the LMI summary data to help states and communities determine whether at least 51 percent of a proposed project’s service area is comprised of LMI persons. The LMI summary data draw income data from ACS and provide estimates of the number and percentage of persons in a proposed project’s service area who can be considered low or moderate income. Specifically, a person is considered to be of low income if he or she is a member of a family whose income is at or below 50 percent of area median income. Similarly, a person is generally considered to be of moderate income if he or she is a member of a family whose income is at or below 80 percent of area median income. Unrelated individuals in a household are considered one-person families for CDBG purposes. Historically, HUD used data collected by the decennial census long form as the basis for the LMI summary data. However, after the 2000 decennial census, the Census Bureau discontinued the long form and began collecting detailed demographic and income information using ACS.
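The income levels just described can be sketched as a small classifier. This is illustrative only: real HUD income limits add family-size adjustments and statutory caps that this sketch omits.

```python
def income_category(family_income, area_median_income):
    """Classify a family against the thresholds described above:
    low income at or below 50 percent of area median income (AMI),
    moderate income generally at or below 80 percent of AMI. Real
    HUD income limits add family-size adjustments and caps omitted
    here."""
    ratio = family_income / area_median_income
    if ratio <= 0.50:
        return "low"
    if ratio <= 0.80:
        return "moderate"
    return "above moderate"

print(income_category(30_000, 70_000))  # low (about 43 percent of AMI)
print(income_category(50_000, 70_000))  # moderate (about 71 percent)
print(income_category(60_000, 70_000))  # above moderate (about 86 percent)
```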
As a result, the current LMI summary data, which were released in July 2014, are based on aggregated 2006–2010 ACS data. In contrast to the decennial census, which was conducted once every 10 years, ACS is an ongoing survey, and the Census Bureau updates its publicly available data annually. In addition, ACS sample sizes (which changed in 2011) and the geographic level at which data are available differ from those of the decennial census long form. See table 1 for a comparison of selected features of the decennial census long form, ACS before 2011, and ACS after 2011. Differences between the two surveys have resulted in differences between the LMI summary data values that are based on each survey, including the following examples:

- Smaller sample size and larger error rates: ACS has a smaller sample size than the decennial census long form, so the LMI estimates are based on a sample of fewer households. The 2000 decennial census long form was mailed to approximately 20.9 million housing unit addresses, all completed within several months of April 2000. The 2006–2010 5-year ACS estimates included 2.9 million housing unit addresses per year, for a total of 14.5 million addresses over the 2006–2010 5-year period, and surveys were mailed to 250,000 addresses each month. Because the ACS sample size is smaller, HUD uses a 5-year ACS average for its LMI summary data, with the current LMI summary data based on 2006–2010 ACS estimates. Even using 5 years of data, the ACS sample currently used by HUD is still smaller, reaching about 12.5 percent of all addresses, compared with the roughly 17.1 percent of addresses that received the long form of the 2000 decennial census. As a result of smaller sample sizes, ACS generally has higher sampling errors, and its estimates are less precise (see app. II for more details on ACS confidence intervals and their potential effects).
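The link between sample size and sampling error follows a familiar rule of thumb: for a fixed survey design, a margin of error scales roughly with one over the square root of the sample size. A short sketch using the address counts cited in this section illustrates this; ACS design effects are ignored, so this is only a rough check, not the Census Bureau's actual variance methodology.

```python
import math

def relative_moe(n_baseline, n_other):
    """How much larger the other sample's margin of error is than the
    baseline's, all else equal (MOE scales roughly as 1/sqrt(n))."""
    return math.sqrt(n_baseline / n_other)

# 2000 long form (~20.9 million addresses) vs. 2006-2010 ACS
# (~14.5 million): ACS margins of error roughly 1.2x larger.
print(round(relative_moe(20.9e6, 14.5e6), 2))

# Raising the annual ACS sample from 2.9 million to 3.54 million
# implies roughly a 9.5 percent reduction in margins of error, close
# to the 9.2 percent improvement the Census Bureau reported.
print(round((1 - math.sqrt(2.9e6 / 3.54e6)) * 100, 1))
```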
HUD and Census Bureau officials noted that unlike the decennial census long form, ACS publishes margins of error with its public data for greater transparency. In 2011, the Census Bureau increased ACS’s overall sample sizes, as well as the sample sizes of small areas. These improvements will be reflected in HUD’s next update of the LMI summary data, which will be based on 2011–2015 data. According to Census Bureau analysis, for a poverty estimate of 10 percent in an average-sized tract and at the 90 percent confidence level, increasing the sample size from 2.9 million to 3.54 million would result in a 9.2 percent improvement in margins of error.

- More frequent LMI summary data updates: Another difference between the decennial census long form and ACS is that the long-form survey was conducted every 10 years; therefore, HUD updated its LMI summary data every 10 years as well. In contrast, the Census Bureau updates ACS’s 5-year estimates annually. Since switching to ACS data, HUD has chosen to update its LMI summary data once every 5 years. HUD officials said this approach allows communities to develop long-term plans and limit uncertainty related to gaining or losing LMI status from year to year, but still allows for more timely information than the decennial census long form did.

- Different geographic areas: HUD was able to produce its LMI summary data at a smaller geographic area with the decennial census long form than with ACS. Specifically, the LMI summary data were available at the smaller split-block group-level with the decennial census long form and are only available at the block group-level with ACS.

HUD and State Policies Allow Local Income Surveys as an Alternative to Census Data

HUD’s and states’ primary policy for nonentitlement communities that disagree with their CDBG eligibility determination based on census data is to allow communities to conduct local income surveys.
In addition, some state and local officials said that because the ACS-based LMI summary data are not available at as small a geographic level as the prior LMI summary data, projects’ service areas more often do not align with HUD’s data. As a result, more communities have had to conduct local income surveys to demonstrate eligibility. State and local officials we interviewed said that conducting a local income survey can be a challenge for small communities due to resource constraints and high costs. Other options include applying for funding under one of the other national objectives or for a project targeted to beneficiaries that are presumed to be low income. However, state and community officials said these options can be difficult to use because they may not allow for projects that meet the needs of their communities.

Communities Have the Option to Conduct a Local Income Survey When They Disagree with Census Data

For nonentitlement communities that disagree with their CDBG eligibility determination based on HUD’s use of ACS data, HUD’s and states’ primary policy is to allow these communities to conduct local income surveys. HUD instructs communities submitting CDBG applications for area-benefit activities—that is, projects designed to serve an entire community or area within a community—to use the LMI summary data to the fullest extent feasible to show that the project area meets the 51 percent LMI threshold. However, if a community disagrees with its eligibility determination based on LMI data, it has the option to conduct a local income survey. According to HUD guidance, in order to conduct a local income survey, a community must develop a set of questions to determine household size and household income, identify the survey population or a random sample of households that would benefit from the activity, then select the type of survey to use (e.g., in-person, telephone, mail).
The survey results must then be tabulated, and if they show that the project’s service area meets the 51 percent LMI threshold, the community may submit the survey results to the state with its CDBG application. We interviewed officials from eight states and representatives of local governments within those states, and most expressed concerns with the use of ACS data for making CDBG eligibility determinations. Several state officials said that nonentitlement communities have needed to conduct local income surveys due to ACS’s small sample sizes and large margins of error. For example, officials from one state attributed 20 nonentitlement communities’ loss of LMI status to a lack of precision in ACS estimates. Similarly, officials from several states and communities we interviewed noted instances where communities did not meet HUD’s LMI threshold based on ACS estimates but were able to demonstrate LMI status with local income surveys. For example, according to officials from one state, nine nonentitlement communities successfully showed that their project service areas met the LMI threshold with local income surveys when the LMI summary data indicated that they did not meet the threshold. State officials may not learn of nonentitlement communities that conducted local income surveys but were unable to show their project service areas met the LMI threshold. Officials from two of the states we interviewed told us that they were not aware of nonentitlement communities that were unable to show that their project service areas met the LMI threshold using a local income survey because the state officials only received local income surveys from communities that successfully showed their project service areas met the LMI threshold. See appendix II for our analysis of the ACS confidence intervals and resulting uncertainty over nonentitlement communities’ LMI status. 
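The tabulation step described above can be sketched in miniature. The income limits here are hypothetical, not HUD's actual limits, and a real survey must also satisfy HUD and state sampling and response-rate requirements.

```python
# Hypothetical income limits by family size -- NOT actual HUD limits.
INCOME_LIMITS = {1: 35_000, 2: 40_000, 3: 45_000, 4: 50_000}

def tabulate_survey(households, threshold=0.51):
    """Tabulate (family_size, income) responses and compare the LMI
    share of persons with the 51 percent threshold."""
    lmi_persons = total_persons = 0
    for size, income in households:
        total_persons += size
        if income <= INCOME_LIMITS[min(size, 4)]:
            lmi_persons += size
    share = lmi_persons / total_persons
    return share, share >= threshold

responses = [(2, 30_000), (4, 52_000), (1, 20_000), (3, 44_000)]
share, qualifies = tabulate_survey(responses)
print(round(share, 2), qualifies)  # 0.6 True: service area qualifies
```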
HUD and States Allow Local Income Surveys When Service Areas Do Not Align with Census Geographic Boundaries

HUD’s CDBG regulations and local income survey guidance also discuss the use of local income surveys in place of the LMI summary data when the service area benefitting from an activity is larger or smaller than the census boundaries. Specifically, HUD’s local income survey guidance states that a local income survey may be the most appropriate way to determine eligibility when (1) a service area comprises only a small portion of a block group, (2) a service area includes all or part of several nonentitlement communities and may also include both incorporated and unincorporated places, or (3) a service area is sparsely populated. HUD’s guidance explains that when a service area for an activity only comprises a small portion of a block group, the LMI summary data may not reflect the characteristics of the households being served. In addition, when a service area covers multiple nonentitlement communities, a local income survey may be necessary to supplement the LMI summary data. HUD does not specify when a service area would be considered too small or too large to use the LMI summary data. In their role as administrators of the state CDBG program, states may specify when a service area should be considered too small or too large and, therefore, when a nonentitlement community would be required to conduct a local income survey. One state we interviewed had specific guidelines on when a service area is considered too small to use the LMI summary data. Specifically, in this state, the LMI summary data may be used only when at least 60 percent of the census geographic area is benefitting from the proposed activity. Other state officials we interviewed said they did not have specific requirements and determined whether project service areas were too large or too small to use the LMI summary data on a case-by-case basis.
Based on our review of CDBG guidance from 49 states and Puerto Rico, 1 state does not allow nonentitlement communities to use the LMI summary data to show they meet the LMI threshold; instead, this state requires all communities applying for funding using the area-benefit criteria to conduct local income surveys. Officials from this state explained that they require surveys because project service areas in their state rarely align with the block groups. In addition, they said that the state has a geographically dispersed population and that it is more efficient to interview a small number of households than it is to determine the proper census area to use. Since HUD’s issuance of state CDBG regulations in 1988, communities have been allowed to conduct local income surveys when project service areas do not align with census areas. However, some state and community officials we interviewed said that the loss of the smaller split-block group-level data with the transition from the decennial census long form to ACS has resulted in challenges. For example, officials from one state told us that the larger geographic areas reported in the ACS-based LMI summary data have made it difficult for communities to show income data for more targeted service areas. According to Census Bureau officials, concerns about confidentiality and the lack of precision around the data at these small geographic levels led the Census Bureau to discontinue publishing the data at the split-block group level. Census Bureau officials noted that margins of error at these small geographic levels would be very high, as they were with the decennial census long form. However, they said that the high margins of error with the decennial census long form would not have been evident to grantees because margins of error were not published. In addition, a few state and local officials attributed the need to conduct local income surveys to changes in the available geographic level of the data.
For example, officials from one state and one nonentitlement community explained that the LMI summary data being reported at the larger geographic level has resulted in two cases where wealthier neighborhoods were included in the area. They said that this resulted in two nonentitlement communities not being able to use the LMI summary data to demonstrate eligibility. HUD does not collect data on the extent to which nonentitlement communities have conducted local income surveys instead of using the LMI summary data either because they disagreed with eligibility determinations based on ACS data or because a project service area did not align with census boundaries. However, officials from most of the eight states we interviewed told us that it is common for local income surveys to be used by nonentitlement communities to show that their project service areas meet the LMI threshold. For example, one state noted that out of 567 applications the state received during its last funding cycle, around 200 of those applications based eligibility on local income surveys. An official from another state said that out of 184 projects that the state funded in 2014, 75 of the eligible projects were supported by local income surveys. Several of the state officials we interviewed discussed nonentitlement communities’ reasons for conducting surveys. For example, one state official said that out of 36 income surveys conducted, 26 were conducted because the activities’ service areas did not align with census geographic areas; the remaining 10 were conducted because the communities did not agree with their LMI percentages based on ACS data. Similarly, officials from another state said that most local income surveys conducted in their state were due to project areas not aligning with census areas.
In addition, another state official said that about half of all surveys conducted in her state were related to geographic areas not aligning and the other half were related to disagreement with ACS-based eligibility determinations. Officials from one state said that they could not determine why nonentitlement communities conducted surveys in their state.

States and Communities Cited Costs and Challenges in Conducting Local Income Surveys, but Some Assistance Is Available

All of the state and local stakeholders we interviewed described a number of challenges nonentitlement communities face when conducting local income surveys. For example, officials from three states we interviewed stated that the administrative burdens associated with conducting local income surveys can be difficult for nonentitlement communities with small staffs and small budgets. Local stakeholders also told us that conducting a local income survey can be time-consuming, with one survey generally taking a few months to complete depending on the resources available to the community. Some stakeholders also noted that conducting a methodologically sound local income survey can be demanding for these small communities. For example, officials from four states said it is challenging for communities to obtain a sufficient number of survey responses to be considered representative of the nonentitlement community’s income. Also, two of these states and two local stakeholders told us that survey respondents are generally unwilling to share information on their income. State and local stakeholders cited cost as a challenge associated with conducting a local income survey. Several of these stakeholders estimated the cost of a survey to be from $5,000 to $10,000. However, one community official said his community only paid the price of postage and the time of two employees for its survey.
Officials from five states and four nonentitlement communities told us that in some cases, communities can obtain free assistance from volunteers, grant preparation firms, and regional development organizations. In addition, according to HUD officials, states could choose to allow the cost of conducting surveys to be considered an administrative or program delivery cost, and reimburse communities for it, but they noted that states have limited funds for administrative expenses. Officials from most of the states we interviewed did not provide financial assistance to nonentitlement communities conducting surveys; however, officials from one state told us that they allowed application preparation as an eligible administrative expense and that some communities in the state may have used some of these funds for conducting surveys. HUD and most states provide guidance that nonentitlement communities can use to develop and conduct their local income surveys. HUD’s guidance states that if a community follows HUD’s recommended survey methodologies, the survey will yield acceptable levels of accuracy. The guidance covers how to select the type of survey to use (e.g., in-person interviews versus interviews conducted by phone), how to develop a questionnaire, how to select a sample, and how to handle nonresponses. In general, state officials we interviewed felt HUD’s guidance was sufficient and did not have any suggestions for additional guidance that might be needed. In addition, almost all states have CDBG guidance that includes discussions of conducting local income surveys. The guidance often includes methodological requirements, such as sampling requirements, and how long nonentitlement communities may use local income survey results. In addition, in some cases, the guidance also includes minimum response rates and sample survey questionnaires. 
Further, state and local stakeholders told us that some nonentitlement communities can receive technical assistance for conducting local income surveys from nonprofit organizations and consultants that offer grant administration services.

Other Options for Communities That Disagree with ACS Are Limited

If a community disagrees with its LMI percentage based on ACS data or if the LMI summary data do not align with a project service area for the purpose of meeting the LMI threshold, options for qualifying for CDBG funds for area benefit projects other than local income surveys are limited. HUD officials noted that communities can still qualify for CDBG funds under one of the other national objectives—reducing or eliminating slums/blight and meeting an urgent need within the community. However, state and local stakeholders we interviewed told us that because the CDBG authorizing statute requires that 70 percent of CDBG funding be used for LMI activities, shifting activities to one of the other national objectives is a challenge. In addition, as noted previously, communities have the option to qualify for the national objective of benefitting LMI persons in other ways that do not require using the LMI summary data or conducting a local income survey, such as funding activities that serve populations presumed to be LMI or activities that directly benefit LMI persons. However, some state and local stakeholders told us that nonentitlement communities’ greatest needs are for area-benefit activities such as infrastructure projects, and that they therefore must rely on the LMI summary data or local income surveys for eligibility determination.
In addition, a provision in the Consolidated Appropriations Act, 2016, states that a limited number of nonentitlement communities, tribal areas, and counties may continue using the 2000 decennial census long form-based LMI summary data to demonstrate their LMI eligibility if they are designated as a Promise Zone or a Distressed County as defined by the Appalachian Regional Commission. HUD officials said that this provision was included because these areas expressed concern to Congress about losing LMI status with the transition to ACS. This provision would only be applicable from fiscal years 2017 through 2020. The number of nonentitlement communities eligible under this provision to continue using the 2000 decennial census long form to demonstrate their LMI eligibility is relatively small. Since 2014, 22 communities across the country were designated as Promise Zones. These locations are generally larger cities and, in some cases, Indian reservations, and therefore generally do not participate in the nonentitlement CDBG program. The Distressed Counties are all located in the Appalachia region, covering 93 counties in 9 states. According to HUD officials, many of these Distressed Counties would qualify as LMI communities using the ACS-based LMI summary data.

Stakeholders Noted That Potential Alternative Data Sources Have Limitations and That Collecting Accurate Income Information Is Generally Challenging

Stakeholders we interviewed noted that alternative data sources that might be used to demonstrate that communities meet the LMI requirements of the CDBG program have limitations, and they cited challenges associated with measuring income generally. With respect to measuring income in general, some stakeholders noted that collecting accurate household income information can be challenging. They noted that income may be measured at a point in time, but household members might move from one household to another during the year.
These stakeholders also noted that income information may be inaccurately reported because survey respondents may have difficulty recalling their income over the past year, particularly if they had fluctuations in income levels. One stakeholder also said that a community’s income level may be difficult to determine if the community has many seasonal workers. In these cases, the community’s income level may vary depending on the time of year. In considering challenges associated with using income as a measure of economic well-being—such as whether members of a household are classified as living in poverty—some stakeholders we interviewed said that determining the appropriate threshold for defining and measuring poverty can be difficult. A few stakeholders noted that a more complete assessment of a household’s income level would include its expenses and benefits, and that the Supplemental Poverty Measure produced by the Census Bureau considers these aspects of a household’s financial circumstances. However, according to the Census Bureau, this measure was designed as an experimental poverty measure and is not used to determine eligibility for government programs. Stakeholders we interviewed cited several types of alternative sources of income information that have not been used by communities that disagree with census data to determine CDBG eligibility:

- Other large-scale Census Bureau surveys that include information on income. For example, the Survey of Income and Program Participation and the Current Population Survey both include questions on income.

- Administrative data that can also provide information on income. These could include federal and state tax data or data on enrollment in income-based programs, such as the National School Lunch Program for free and reduced-price meals, Medicaid, or the Supplemental Nutrition Assistance Program (SNAP).

- Sources of information that use a combination of survey and administrative data.
For example, the Census Bureau’s Small Area Income and Poverty Estimates (SAIPE) supplements ACS income data with administrative data such as federal income tax return data. Sources such as the Department of Health and Human Services’ Social Vulnerability Index and Medically Underserved Areas designations can use survey information, administrative data, or a combination of survey and administrative data to compile indices that may provide an indication of a community’s income level.

However, while these potential sources could provide information on income at the individual or community level, stakeholders also noted that they would likely have one or more of the following limitations:

- Does not fully allow communities to determine if they are LMI. Some sources of income data may not fully measure a community’s income level and therefore may not be a more accurate measure of whether a community should be considered LMI than the LMI summary data. For example, the Internal Revenue Service publishes some information from federal tax returns, but while this is a direct measure of income levels, it may exclude those low-income households that do not file taxes. In addition, data on participation in income-based programs such as the National School Lunch Program, Medicaid, and SNAP may provide indications of a community’s income level, but they also have limitations. For example, not everyone who is eligible for the programs participates in them. Therefore, participation rates may undercount the extent to which a community's population is low income. Proxies for income, such as measures of medically underserved areas, the Department of Health and Human Services’ Social Vulnerability Index, and the U.S. Department of Agriculture’s Food Access Research Atlas, also have limitations. Specifically, the extent to which these indices correlate with income and the level of income with which they correlate is unclear.
Finally, some sources of income information may use a different measure of income, which may not correspond to HUD’s LMI measure. For example, SAIPE estimates median household income, which would not allow communities to determine the percentage of their population that meets HUD’s LMI threshold.

- Is not easily accessible by communities. To be used broadly, an alternative data source may need to be easily accessible by nonentitlement communities. Some data, such as tax data, may have some public availability but would likely require states or other agencies to either provide data to nonentitlement communities that are not typically made available, or calculate LMI percentages for the communities. However, sharing this information may raise confidentiality concerns. It may also raise concerns about whether communities would be able to access the information from states or other agencies that may feel burdened by having to provide the data or LMI calculations based on the data.

- Is not available at small geographic levels. Because nonentitlement communities are small communities, an alternative data source would need to be available at a sufficiently small geographic level. Specifically, the current LMI summary data are available for census block groups and places. However, many other sources of income data or related proxies do not have data available at these levels. For example, Census Bureau officials said this would be the case for Census surveys other than ACS. They noted that SAIPE income data, for example, are available at the county level, which would likely be too large for many nonentitlement CDBG projects. Other data sources, including SNAP participation rates and Bureau of Economic Analysis per capita income data, are also only available at the county level. Census Bureau officials noted that it may be possible to use survey and administrative data to create estimates of income levels for small areas using statistical models.
They noted that SAIPE is based on such models, which can estimate income levels for a specific geographic level based on other characteristics, even if there is no specific income data point for that area. However, they said that such models do not currently exist for geographic areas small enough to be used by nonentitlement communities in the CDBG program.

Is not more precise than ACS. To address the concern that ACS data do not provide sufficiently precise LMI estimates for many nonentitlement communities, an alternative data source would need to be considered reliable for small geographic areas. However, Census Bureau officials said that ACS is the nation’s largest household survey that includes income information, and other Census Bureau surveys that include income information (e.g., the Survey of Income and Program Participation and the Current Population Survey) have smaller sample sizes than ACS and therefore less certainty about the reliability of their data. Administrative data, such as those on program participation, would not have sample-size limitations affecting their precision, but they would face at least one of the other challenges noted previously, such as not allowing a community to fully determine whether it is LMI.

HUD officials said that in circumstances where a nonentitlement community disagreed with what ACS showed for its LMI percentage, HUD would consider allowing states to use an alternative data source beyond an income survey. However, they noted that such a provision would need to be fairly and consistently applied within the state. HUD officials said that they have not received inquiries from states about using alternative data sources to demonstrate LMI status. As a result, HUD has not developed formal guidance on accepting alternative data sources and is still determining how any such requests should be evaluated.
In addition, in July 2015, the Census Bureau announced a plan to evaluate the availability and sustainability of using external data sources, such as Social Security Administration and Internal Revenue Service data, to supplement income information collected by ACS, which they said could improve the data. According to the research plan, the Census Bureau expects to make specific recommendations before March 2017.

Agency Comments

We provided a draft of this report to HUD and the Department of Commerce for their review and comment. They provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of the Department of Housing and Urban Development, the Secretary of the Department of Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

You asked us to review options for nonentitlement communities that disagree with their Community Development Block Grant (CDBG) eligibility determination based on the Department of Housing and Urban Development’s (HUD) use of American Community Survey (ACS) data. This report examines (1) HUD’s and states’ policies for communities that disagree with their eligibility determination based on HUD’s use of ACS data or are not able to use HUD’s Low- and Moderate-Income (LMI) summary data, and the challenges, if any, communities face in using available options; and (2) stakeholders’ views on whether there are possible alternative data sources for determining eligibility under CDBG’s LMI objective.
In addition, appendix II presents analysis of how margins of error around ACS’s LMI estimates can affect whether nonentitlement communities meet HUD’s LMI threshold. Appendix III presents analysis of changes in the LMI status of nonentitlement communities between the 2000 and 2006–2010 LMI summary data.

HUD’s Policies

To identify HUD’s and states’ policies for nonentitlement communities that disagree with their CDBG eligibility based on ACS data, we reviewed relevant portions of the Housing and Community Development Act of 1974 and relevant HUD regulations, policies, and local income survey guidance. We also reviewed state CDBG guidance for 49 states and Puerto Rico. In addition, we interviewed officials from HUD and the U.S. Census Bureau, as well as CDBG administrators from eight states: California, Kansas, Nebraska, Ohio, Texas, Utah, Washington, and Wisconsin. We used multiple methodologies to select this nongeneralizable sample of eight states. Specifically, we selected some of the states based on HUD-reported data that identified states with nonentitlement communities that conducted local income surveys. However, these data could not be used to definitively determine the total number of local income surveys conducted in individual states. Therefore, we selected an additional two states with high numbers of rural communities. We selected these additional states based on the U.S. Department of Agriculture’s Rural-Urban Continuum Codes because rural communities were more likely to have smaller sample sizes and therefore more likely to be affected by larger margins of error around their LMI estimates. As such, communities in these states may have been more likely to have conducted local income surveys due to disagreements with their eligibility determinations based on ACS data. Finally, we identified two additional states based on recommendations from state officials we interviewed. We also interviewed a range of state and local CDBG administrators.
For example, we asked officials from each of the states we selected for interviews to identify a few nonentitlement communities in their state that had conducted or attempted to conduct a local income survey or that disagreed with their LMI percentage based on ACS data. Based on those recommendations, we interviewed representatives from one or two of these communities in each of six states. Findings from the interviews with states and nonentitlement communities cannot be generalized to those with which we did not speak. In addition, in two states, we interviewed representatives of organizations that provide services to nonentitlement communities. We also interviewed representatives of a CDBG advisory council from California. Furthermore, we interviewed representatives from community development groups, including the Council of State Community Development Agencies, the Housing Assistance Council, and the National Association of Housing and Redevelopment Officials.

Alternative Data Sources

To obtain information on stakeholders’ views on alternative data sources that HUD could consider in addition to ACS for determining a nonentitlement community’s LMI eligibility, we consulted knowledgeable stakeholders, including researchers from the Urban Institute, George Washington University, and the National Opinion Research Center at the University of Chicago. We selected a purposive subset of such researchers based on our knowledge of organizations active in conducting research on topics relevant to our inquiry, a review of relevant studies, and a recommendation we received during an interview. We also interviewed HUD and Census Bureau officials, state and local government officials, as well as representatives of the community development associations listed previously, and asked whether they could identify alternative sources of income information that could be used for these purposes.
We also reviewed past GAO reports to identify any potential alternative sources of income information.

Analysis of ACS Data

To illustrate how margins of error around LMI estimates based on ACS data can result in uncertainty about CDBG eligibility determinations for nonentitlement communities, we analyzed 2008–2012 ACS special tabulations that the Census Bureau produced for HUD. Although HUD’s most recent LMI summary data were based on 2006–2010 ACS data, HUD officials said they no longer maintained margin of error information for these data, and they instead provided us with the margins of error for the 2008–2012 data. We also received a list of nonentitlement communities from HUD and used this list to identify nonentitlement communities in the 2008–2012 special tabulation dataset. We used the information on margins of error to create confidence intervals around the LMI estimates for these communities. We then selected a random, representative sample of 100 nonentitlement communities and graphed their LMI point estimates and the confidence intervals around the estimates. To describe changes in the number of nonentitlement communities eligible for CDBG funds between the 2000 and 2006–2010 LMI summary data in eight states, we used HUD’s list of nonentitlement communities to identify these communities in the 2000 and 2006–2010 LMI summary datasets. We limited our analysis to nonentitlement communities at the place level, and we determined the extent to which communities lost or gained LMI status between these two LMI summary datasets. Based on discussions with HUD and Census officials, review of HUD documentation, and electronic testing of the data, we determined the data to be sufficiently reliable for these purposes. We conducted this performance audit from May 2015 to September 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Effects of ACS Confidence Intervals on Nonentitlement Communities’ LMI Status

Both the decennial census long form and the American Community Survey (ACS) are based on samples and are therefore subject to sampling error. While margins of error are available for ACS, margins of error were not published for the decennial census long form, so a direct comparison of the two surveys’ sampling error is difficult. Nonetheless, ACS sample sizes are smaller than those of the decennial census long form and, all other things being equal, smaller sample sizes have larger sampling error. Larger sampling error results in larger margins of error and wider confidence intervals. The confidence level (e.g., 90 percent or 95 percent) indicates the level of certainty that the actual value lies within the confidence interval. For example, if a random sample survey estimated, with a confidence level of 90 percent, that 45 percent of a community’s population were low- and moderate-income (LMI), with a 10 percent margin of error, then one could say with 90 percent confidence that the community’s actual LMI percentage is somewhere between 35 percent and 55 percent. If the margin of error were instead 5 percent, at the same confidence level, then one could say with 90 percent certainty that the community’s actual LMI percentage lies between 40 percent and 50 percent. Figure 2 shows the ACS LMI estimates and the confidence intervals around them, at a 90 percent confidence level, for a random sample of 100 nonentitlement communities. The circles in the figure show the LMI point estimates for each of the 100 randomly selected nonentitlement communities.
The horizontal bars on either side of the point estimates indicate the confidence intervals. The vertical line at the 0.51 mark indicates the 51 percent LMI threshold. A circle is shaded if the community’s confidence interval includes this threshold, meaning there is uncertainty about whether the community’s actual LMI value lies above or below the threshold. Confidence intervals that span the 51 percent threshold result in uncertainty over whether a community actually should be considered LMI. Across all 30,823 nonentitlement communities, roughly 45 percent had confidence intervals that spanned the 51 percent threshold. As shown in figure 3, a higher proportion of communities above the 51 percent threshold had confidence intervals that spanned the threshold than communities below it (76 percent versus 36 percent, respectively). Therefore, there were more nonentitlement communities that were deemed eligible when they may not have been than nonentitlement communities that were deemed ineligible when they may have been eligible. A confidence interval could span the 51 percent threshold because the interval is wide, because the point estimate is close to the 51 percent line, or both. As can be seen in figure 2 above, all three cases were present among nonentitlement communities. The median confidence interval for all 30,823 nonentitlement communities’ LMI estimates was roughly +/- 10 percentage points, meaning half of nonentitlement communities had confidence intervals around their LMI estimates that spanned more than 20 percentage points from the lower to the upper bound.
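The threshold check described above can be sketched in a few lines of code. This is an illustrative sketch only, not GAO’s or HUD’s actual computation; the function names and the simple symmetric-interval arithmetic are our own assumptions.

```python
# Illustrative sketch (not GAO's or HUD's actual computation): given an ACS
# LMI point estimate and its margin of error at a 90 percent confidence
# level, build the confidence interval and check whether it spans HUD's
# 51 percent threshold, in which case eligibility is uncertain.

LMI_THRESHOLD = 0.51  # HUD's low- and moderate-income threshold


def confidence_interval(point_estimate: float, margin_of_error: float) -> tuple:
    """Return the (lower, upper) bounds of the symmetric confidence interval."""
    return (point_estimate - margin_of_error, point_estimate + margin_of_error)


def eligibility_uncertain(point_estimate: float, margin_of_error: float) -> bool:
    """True if the confidence interval includes the 51 percent threshold."""
    lower, upper = confidence_interval(point_estimate, margin_of_error)
    return lower <= LMI_THRESHOLD <= upper


# The worked example from the text: a 45 percent estimate with a 10 percent
# margin of error yields an interval of 35 to 55 percent, which spans the
# threshold; with a 5 percent margin of error the interval is 40 to 50
# percent, which does not.
print(eligibility_uncertain(0.45, 0.10))  # True
print(eligibility_uncertain(0.45, 0.05))  # False
```

Applied to HUD-published margins of error, the same check could be run against each community’s record; the inputs shown here are just the hypothetical values from the example above.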
Appendix III: Changes in the LMI Status of Nonentitlement Communities between the 2000 and 2006–2010 Low- and Moderate-Income Summary Data

Officials from the Department of Housing and Urban Development (HUD) and some states we interviewed noted that each time HUD updates its Low- and Moderate-Income (LMI) summary data, whether from one decennial census to another as in the past, or in the more recent 2014 transition from the 2000 decennial census to the 2006–2010 American Community Survey (ACS) data, some nonentitlement communities see changes in their LMI status. We analyzed the extent to which nonentitlement communities that are Census-designated places in the eight states we interviewed saw changes in their LMI status during the most recent update to HUD’s LMI summary data. We found the following:

2,842 out of 3,803 communities (75 percent) did not see a change in their LMI status. Specifically, 568 (15 percent) had LMI status under both the 2000 LMI summary data and the 2006–2010 LMI summary data, and 2,274 communities (60 percent) did not have LMI status under either.

498 communities (13 percent) lost LMI status.

463 communities (12 percent) gained LMI status.

Communities that did see a change when HUD updated its LMI summary data could have lost or gained LMI status for a number of reasons, including economic or demographic changes from one survey period to the next; the nature of random surveys (different samples taken from the same geographic area may produce different outcomes); differences in the features of the ACS and the decennial long form; and differences in the methodologies used for calculating LMI. We could not determine the extent to which losses and gains in LMI status between the 2000 LMI summary data and the 2006–2010 LMI summary data could be attributed to any of these individual factors.
Appendix IV: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Andrew Pauline (Assistant Director), Winnie Tsen (Analyst-in-Charge), Rachel Batkins, Bethany Benitez, Justin Fisher, Cindy Gilbert, Ty Mitchell, Marc Molino, and Jennifer Schwartz made key contributions to this report.

Administered by HUD, the CDBG program provides funding for housing, community, and economic development programs. After set-asides, HUD must allocate 70 percent of funds to cities and urban counties, known as entitlement communities, and 30 percent to states for distribution to eligible nonentitlement communities. In fiscal year 2015, Congress appropriated $3 billion for the CDBG program, of which HUD allocated $900 million to states. Seventy percent of CDBG funds must principally benefit low- and moderate-income persons, and Census Bureau data are used for this determination. In 2014, HUD transitioned from using decennial census long form income data (which are no longer collected) to ACS income data, which HUD uses to update its income data every 5 years. GAO was asked to review HUD’s policies related to communities that disagree with their CDBG eligibility determination based on HUD’s use of ACS data. This report examines (1) HUD’s and states’ eligibility policies and (2) potential alternative data sources. GAO interviewed CDBG administrators from 8 states and from nonentitlement communities in each of these states, all of which were selected based on available data and CDBG stakeholders’ recommendations; spoke with other CDBG stakeholders; and reviewed CDBG guidance from 49 states and Puerto Rico. GAO also analyzed how use of ACS data can affect community eligibility. GAO makes no recommendations in this report. HUD and the Department of Commerce provided technical comments.
The Department of Housing and Urban Development’s (HUD) and states’ primary method for communities to demonstrate eligibility when they disagree with HUD’s eligibility determination is to allow communities to conduct their own local income surveys to show that they meet the Community Development Block Grant (CDBG) income threshold. HUD instructs small communities, known as nonentitlement communities, to use data based on the Census Bureau’s American Community Survey (ACS) to determine whether at least 51 percent of residents in their proposed project service areas are low- and moderate-income persons and are therefore eligible for CDBG funds. However, communities may disagree with their eligibility determination based on ACS data, or they may be unable to use this method because the project’s service area is larger or smaller than the census boundaries. In these cases, HUD and states allow communities to conduct their own local income surveys to demonstrate eligibility. State officials GAO interviewed said it is common for nonentitlement communities to use local income surveys as an alternative to HUD’s ACS-based data, and HUD and states provide guidance on conducting these surveys. However, stakeholders cited costs and other challenges nonentitlement communities face in conducting local income surveys, including resource constraints, administrative burdens, and difficulty in obtaining a sufficient number of survey responses. Other than local income surveys, alternative methods for showing eligibility for CDBG funds are limited. For example, communities may qualify by funding activities that serve populations HUD presumes to be low- and moderate-income, such as the elderly or homeless. Stakeholders GAO interviewed cited challenges associated with measuring income and limitations associated with alternative data sources that might be used to demonstrate that communities meet the low- and moderate-income requirements.
For example, stakeholders cited general challenges associated with measuring income and poverty, such as fluctuations in an individual’s or community’s income over a year. Some stakeholders cited alternative sources of income information that have not been used by communities that disagree with census data to determine CDBG eligibility. However, they noted that these sources would likely have one or more of the following limitations: does not fully measure a community’s income; is not easily accessible by communities; is not available at small geographic levels; or is not more precise than ACS. For example, some sources of income data, such as income tax data, may have limited public availability—limiting their accessibility by communities—and income tax data would not include low-income earners who are not required to file tax returns. Other sources, such as the Supplemental Nutrition Assistance Program and other income-based programs, would not provide data at a small enough geographic level to be useful for this purpose. The Census Bureau is in the process of exploring ways to use external data sources, such as Social Security Administration and Internal Revenue Service data, to supplement ACS to improve the data and expects to make recommendations by March 2017.
Background

Prior to 1980, federal agencies generally retained title to any inventions resulting from federally funded research, whether the research was conducted by contractors and grantees or by federal scientists in their own laboratories, although specific policies varied among the agencies. Increasingly, this situation was a source of dissatisfaction because of a general belief that technology resulting from federally funded research was not being transferred to U.S. businesses for developing new or improved commercial products. For example, there were concerns that biomedical and other technological advances resulting from federally funded research at universities were not leading to new products because the universities had little incentive to seek uses for inventions to which the government held title. Additionally, the complexity of the rules and regulations and the lack of a uniform policy for these inventions often frustrated those who did seek to use the research. In 1980, the Congress enacted two laws that have fostered the transfer of federal technology to U.S. businesses. The Stevenson-Wydler Technology Innovation Act of 1980 (P.L. 96-480, Oct. 21, 1980) promoted the transfer of technology from federal laboratories to the private sector. The Bayh-Dole Act (P.L. 96-517, Dec. 12, 1980) gave universities, nonprofit organizations, and small businesses the option to retain title to inventions developed with federal funding. It also authorized federal agencies to grant exclusive licenses to patents on federally owned inventions that were made at federal laboratories or that federal agencies patented after a federal funding recipient opted not to retain title.
To protect the public’s interest in commercializing federally funded technology, the Bayh-Dole Act required, among other things, that a contractor or grantee that retains title to a federally funded invention (1) file for patent protection and attempt commercialization and (2) comply with certain reporting requirements. The act also specified that the government would retain “a nonexclusive, nontransferable, irrevocable, paid-up license to practice or have practiced for or on behalf of the United States any subject invention throughout the world.” The Bayh-Dole Act did not give large businesses the right to retain title to their federally funded inventions. Subsequently, in February 1983, President Reagan issued a memorandum on patent policy to executive agency heads stating that, to the extent permitted by law, the government’s policy is to extend the policy enunciated in the Bayh-Dole Act to all federally funded inventions arising under research and development contracts, grants, and cooperative agreements. In April 1987, President Reagan issued Executive Order 12591, which, among other things, requires executive agencies to promote the commercialization of federally funded inventions in accordance with the 1983 memorandum. Our 1999 report noted that federal agencies were not always aware of the government’s licenses and could not tell us the circumstances under which these licenses had been employed. Nevertheless, agency officials said that the government’s license to practice federally funded inventions is important because agency scientists could use these inventions without being concerned that such use would be challenged.

The Government’s License Has Limited Applicability

Federal agencies and their authorized funding recipients have the right to benefit from the use of a federally funded invention without risk of infringing the patents. Government scientists can use these inventions in their research without having to pay royalties.
Federal contractors, grantees, and cooperative agreement funding recipients may use the government’s license if they are authorized to do so. For example, federal agencies can contract with a third party to manufacture products containing such inventions. However, the government’s license to use a federally funded invention does not automatically entitle the government to price discounts when purchasing products that happen to incorporate the invention. The government’s license also does not necessarily extend to later inventions related to or based on the federally funded invention.

The Government’s License Protects Its Right to Practice the Invention

The Bayh-Dole Act gives the government the right to “practice” (that is, use) a federally funded invention without being liable for patent infringement. There are two primary ways in which the government can use its right to practice an invention in which it has retained a license. First, the government can contract with a third party to make a product that incorporates the invention for or on behalf of the government without either the government or the contractor being liable for patent infringement. It is our understanding that this right has never been invoked for biomedical products. Second, the government can use the invention itself without obtaining a license from or paying a royalty to the patent owner. As discussed later in this report, federal research officials say that this is a common occurrence in the research arena, making the license to use federally funded inventions a valuable asset to the government.

The Government’s License Is Available to Federal Agencies and Authorized Funding Recipients

The government’s right to practice an invention is limited to federal agencies and their funding recipients specifically authorized to use the invention for federal government purposes.
The Bayh-Dole Act provides that the license is “nontransferable,” which means that the government may not sell or otherwise authorize another to practice an invention in its stead. This concept is not unique to the Bayh-Dole Act. Such language appears frequently in patent practice, where nonexclusive licensing agreements are typically construed as restricting assignment of the license without the licensor’s consent. In the Bayh-Dole Act, the term “nontransferable” is followed immediately by qualifying text: language that allows the government to authorize others to practice the invention for or on its behalf but that restricts the purposes for which it may do so. Federal agencies typically have authorized contractors to use the government’s license to develop and produce mission-critical hardware, such as a weapon system. This use of the government’s license satisfies a legitimate federal governmental need in support of a congressionally authorized program. Such linkages to an agency’s mission are less prevalent when grants or cooperative agreements are used, as is typically the case with NIH, which sponsors biomedical research to benefit the public health. This research serves the public good through biomedical advances, such as published scientific results and new technology that improves people’s lives. This good may represent a sufficient government need for NIH to authorize its grantees to use the government’s license as a basis for using federally funded inventions in their research. However, according to a senior NIH attorney, NIH does not use this rationale to authorize grantees to exercise the government’s licenses and has not included a clause in its grant agreements authorizing the use of federally funded inventions as part of the research. As a result, NIH’s grantees might be sued for infringement and must negotiate any licensing agreements they believe they need to support their work.
Furthermore, the government’s license to use a federally funded invention generally does not apply to HHS’s purchases of drugs and vaccines because (1) HHS has never contracted for the manufacture of a pharmaceutical made with federal funds for the government’s use and (2) HHS’s funding assistance for acquiring drugs or vaccines for distribution is intended to assist the states’ public health services, rather than to meet a federal agency’s need.

The Government Is Not Automatically Entitled to Price Discounts

The “paid-up license” that the Bayh-Dole Act specifically confers on the federal government is often referred to as a “royalty-free license.” The term “royalty-free” license (and even “paid-up license”) has sometimes been misinterpreted in a way that effectively eliminates the conditions set forth in the statute. The license for which the federal government is “paid up” entitles it to practice an invention itself, or to have others practice the invention on the government’s behalf. The statute does not give the federal government the far broader right to purchase, “off the shelf” and royalty free (i.e., at a discounted price), products that happen to incorporate a federally funded invention when they are not produced under the government’s license.

The Government’s License May Not Extend to Related Inventions

An invention rarely represents a completely new form of technology because the inventor almost always has used “prior art” in developing the ideas that led to an invention. Prior art is the intellectual basis, or knowledge base, upon which the novelty of an invention is established, or the basis that determines whether the “invention” would have been obvious to one skilled in the art. In making an invention, an inventor typically builds on the prior art in the particular technology, and some of this prior art might have been developed by either government scientists or federal funding recipients.
However, an intellectual property interest in prior art does not in and of itself give one an interest in someone else’s subsequent invention. Also, an invention often is part of a family of related inventions. One research project may spawn multiple inventions that, for example, are separate and distinct or are further developments of a basic invention for specific applications. Similarly, the idea on which the original invention is based may trigger new inventions. The question of whether the government has an interest in later inventions also arises in instances involving the same technologies when the patents to these inventions are related in some fashion. Patents may be related because they protect inventions springing from the same essential technologies or because scientists discover additional uses for an invention. For example, while a patent application is pending at USPTO, the applicant may decide to clarify the description of an invention because what initially was viewed as a single invention is found to be two or more inventions or because the USPTO patent examiner determines that patent application claims must be separated and independently supported. Whether the government has the right to practice an invention because it retains a license to use it under the Bayh-Dole Act depends upon whether the invention was developed with federal funding and is, therefore, subject to the act. An invention is a “subject invention” if it is conceived or first actually reduced to practice “in the performance of work under a funding agreement” (contract, grant, or cooperative agreement) to which the act applies. Rights to the parent patent do not automatically generate rights vis-à-vis related subsequent patents. In this regard, the government is not entitled to any different protection than other entities that fund research.
There is one exception to the general rule that inclusion depends upon whether each invention was itself conceived or first actually reduced to practice in performing federally funded research. This exception holds that while the owner of a “dominant patent” can block the unlicensed use of that patent and related patents, the owner may not assert that patent either to deprive its licensee of the right to use a “subservient patent” or, similarly, to block the government’s license to use a subservient patent for a federally funded invention. Thus, if the owner of a dominant patent subsequently makes a new invention in the course of work under a federal contract or other federal assistance, the owner cannot assert the dominant patent to frustrate the government’s exercise of its license to use the second invention.

The Government Appears to Hold Few Licenses to the Biomedical Products It Purchases

Although determining the extent to which the government has licenses in biomedical products is difficult, the number appears to be small. For pharmaceuticals, one of the largest sectors of the biomedical market, we found that the government had an interest (either because of its license under the Bayh-Dole Act or as the owner or “assignee” of the patent) in only 6 brand name drugs associated with the top 100 products, by dollar value, that VA procured in fiscal year 2001 and 4 brand name drugs associated with the top 100 products, by dollar value, that DOD dispensed from July 2001 to June 2002. (See apps. II and III.) All four of the DOD drugs were among the six federally funded pharmaceuticals that VA purchased. As shown in table 1, VA and DOD spent about $120 million on these six drugs in fiscal year 2001.
We could not determine the extent to which the government holds rights to other types of biomedical products because (1) no databases exist showing the underlying patents for most of these products and (2) products such as hospital beds and wheelchairs may incorporate numerous components that might not be covered by identifiable patents. Our examination found no government rights to any of five medical devices for which the VA Medical Center in Milwaukee, Wisconsin, had spent more than $1 million during fiscal year 2002. The medical devices we analyzed included electric hospital beds, closed circuit televisions, blood pressure monitors, low-air-loss and air-pressure mattresses, and wheelchairs. Officials from VA and DOD believe that the government would rarely have patent rights to such products.

The Government Has Used Its Biomedical Licenses Primarily for Research

Officials from VA, DOD, and NIH said that their agencies use the government’s licenses to biomedical inventions primarily in performing research. These officials could not tell us the extent of such usage, however, because researchers generally do not keep records. Instead, government researchers often use the technology and inform the patent owner of the government’s rights only if there is a claim of infringement or other question regarding the government’s use. In fact, government scientists usually do not obtain licenses for any patented technology they may use in research. They told us that using technology for research purposes without obtaining permission is a generally accepted practice among both government and university scientists.
VA and DOD officials said they do not consider the government’s licenses for procurements because they (1) would not be able to determine readily which products incorporate patented technologies or whether the government helped fund the technology’s development, (2) believe they already receive favorable pricing through the Federal Supply Schedule and national contracts, and (3) are not required by law to do so. Similarly, the VA and DOD officials said they had not used the government’s licenses to have a contractor manufacture biomedical products for federal use.

Biomedical Licenses Are Primarily Used for Research

DOD and NIH attorneys told us that the government primarily uses its biomedical licenses for research. According to these officials, the government’s licenses are valuable because they allow researchers to use the inventions without concern about possible challenges alleging that the use was unauthorized. However, no governmentwide database exists to track how often government researchers actually use the licenses, and agencies did not have records showing how often or under what circumstances these licenses have been employed. NIH officials said that their agency does not routinely document its researchers’ use of patented technologies. Thus, they have no way to readily determine which patented technologies have been used or whether the government had an interest in them. However, the NIH officials cited additional reasons why NIH researchers seldom obtain licenses to conduct research: First, NIH researchers may not really need a license because they can work with the underlying principles behind the technology simply by using the information that has been published. Second, there is a prevailing practice not to enforce patent rights among federal agencies and nonprofit organizations that conduct academic research. Third, under 28 U.S.C.
§ 1498, federal agencies cannot be enjoined from using patented technology in conducting research; the patent owner’s only recourse is to sue the government for a reasonable royalty. An Army patent attorney told us that he advises researchers to inform him of any patented technologies they are using in their research. He also said, however, that this does not always happen in practice and that he and the researchers generally are not aware of a potentially infringing use until the patent owner informs them. At that time, he researches the matter and seeks permission, obtains a license, or informs the patent owner of the government’s interest if there is one. Because the attorney does not have records on government licenses, he has to research each case individually. He added that he had invoked the privileges of the licenses for research purposes but could not readily tell us how often this had occurred. A VA official said that, like NIH, VA researchers usually do not know whether the technology they use for research is patented. Furthermore, information about the government’s interest in the development of products is difficult to obtain because extensive research would be required. She said that VA procures some research materials using Material Transfer Agreements with universities. For the most part, however, VA simply goes about its research assuming it has the right to use the technologies of others unless there is a challenge. She was unaware of any patent infringement cases that had been filed against VA.

The “General Research Exception” Is Cited in Using Patented Technologies

VA, DOD, and NIH have each relied, to some extent, on the concept that a researcher could use patented technology for research as long as the research is for purely scientific endeavors.
According to agency officials, such use is a generally accepted practice within the research community on the basis of what some believe is a “general research exception.” However, some agency officials questioned how this exception might be viewed in light of the decision rendered by the Court of Appeals for the Federal Circuit in Madey v. Duke University, 307 F.3d 1351 (Fed. Cir. 2002). Concerning the availability of the experimental use exception to a university, the court ruled that the experimental use exception is very narrow and strictly limited, extending only to experimental uses that are not in furtherance of the infringer’s legitimate business and are solely for the infringer’s amusement, to satisfy idle curiosity, or for strictly philosophical inquiry. The court also stated that the profit or nonprofit status of the user is not determinative of whether the use qualifies for the experimental use exception. Experimental use may infringe a patent when the use furthers the infringer’s business. For example, the business of a research institution includes conducting research. Some patent owners believe that allowing others to use their patented technologies for research purposes may pose no threat and may actually be to their benefit. In fact, representatives from corporations involved in the research and development of products in the biomedical area told us that they welcome additional research that will continue to advance the state of the art as long as such use is not merely an attempt to use the patents for commercial purposes without obtaining a license. They said that there has been an unstated “gentlemen’s agreement” among researchers in this regard that will not be affected by the Madey case. If true, government researchers may, as a practical matter, be able in many cases to continue using the patented technologies of others without obtaining licenses. 
Licenses Have Not Been Used for Biomedical Procurements

VA and DOD procurement officials were unaware of any instances in which a federal agency had used the government’s licenses to have contractors manufacture products that incorporate federally funded inventions. Furthermore, these procurement officials said that, as discussed above, the government’s license does not provide an automatic discount for federal government procurements. They added that even if they wanted to use the license for procurements, they would not know which products incorporate federally funded inventions. The VA and DOD officials also said that the government’s licenses would probably not significantly reduce their procurement costs because they believe they already receive favorable pricing through the Federal Supply Schedule and national contracts. In particular, for a branded pharmaceutical to be listed on the Federal Supply Schedule, the manufacturer must agree to give the government a 24-percent discount off the nonfederal average manufacturer price. Furthermore, the federal government has negotiated national contracts that provide even greater discounts for some pharmaceuticals.

Observations

The government’s license under the Bayh-Dole Act provides protection against claims of patent infringement when federal agencies or their authorized funding recipients use federally funded inventions. Scientists working for federal agencies and their contractors generally are authorized to use federally funded inventions; however, agencies have not necessarily provided similar authorization in their grant agreements for scientists at universities and other institutions. The decision rendered by the Court of Appeals for the Federal Circuit in Madey v. Duke University calls into question the validity of the general research exception that many scientists have cited as a basis for using the patented technology of others in their research.
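The pricing comparison above can be illustrated with a short sketch. The only figure taken from the report is the 24-percent Federal Supply Schedule discount; the nonfederal average manufacturer price and the national-contract discount rate below are hypothetical placeholders, not actual prices.

```python
# Illustrative sketch (hypothetical prices): comparing the 24-percent Federal
# Supply Schedule (FSS) discount off a nonfederal average manufacturer price
# (non-FAMP) with an assumed deeper national-contract discount.

def federal_price(non_famp: float, discount: float) -> float:
    """Price the government pays after the stated fractional discount."""
    return round(non_famp * (1 - discount), 2)

non_famp = 100.00                                # hypothetical non-FAMP
fss_price = federal_price(non_famp, 0.24)        # FSS minimum 24% discount
national_price = federal_price(non_famp, 0.40)   # assumed national-contract rate

print(fss_price, national_price)  # 76.0 60.0
```

The sketch simply shows why, with discounts of this magnitude already in place, procurement officials saw little additional room for savings from asserting the government's license.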
Agency Comments and Our Evaluation

We provided NIH with a draft of this report for its review and comment. NIH stated that because our report ties the exercise of the government’s license rights to the needs of the federal government, we give the impression that the government’s license rights are more limited than they actually are. While we agree with NIH that federal agencies and their funding recipients have unrestricted rights to use a federally funded invention for federal government purposes, it is important to recognize that they can use these rights only to meet needs that are reasonably related to the requirements of federal programs. NIH also provided comments to improve the report’s technical accuracy, which we incorporated as appropriate. (See app. IV for NIH’s written comments and our responses.) We will send copies of this report to interested Members of Congress; the Secretary of Defense; the Secretary of Health and Human Services; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Richard Cheston, Deborah Ortega, Bert Japikse, Frankie Fulton, and Lynne Schoenauer.

Appendix I: Objectives, Scope, and Methodology

We examined the manner in which federal agencies administer, use, and benefit from intellectual property created under federally sponsored research programs related to public health, health care, and medical technology.
Our objectives were to assess (1) who is eligible to use and benefit from the government’s licenses to biomedical inventions created under federally sponsored research, (2) the extent to which the government has licenses to those biomedical inventions it procures or uses most commonly, and (3) the extent to which those eligible have actually used or benefited from these licenses. To determine who is eligible to use and benefit from the government’s licenses, we reviewed the applicable laws, regulations, and procedures, including an examination of relevant case law. We also obtained the views of a senior attorney responsible for handling these cases in the Office of General Counsel of the Department of Health and Human Services. To assess the extent of the government’s licenses to biomedical inventions, we concentrated on pharmaceuticals because (1) pharmaceuticals represent a major component of the federal government’s biomedical procurements—an estimated $3.5 billion annually—and (2) government databases can be used to identify the underlying patents to pharmaceuticals approved by the Food and Drug Administration (FDA). In conducting our work, we first obtained data on the generic product name, total purchases by dollar amount, and number of prescriptions filled for the top 100 pharmaceuticals purchased by the Department of Veterans Affairs (VA) and the Department of Defense (DOD), which procure most of the government’s biomedical products for use by their hospitals and other medical facilities. VA’s data covered procurements for fiscal year 2001. DOD’s data covered the 12-month period from July 1, 2001, to June 30, 2002, because the agency began consolidating its pharmacy program sales data on July 1, 2001. For each of the VA and DOD pharmaceuticals, we used FDA’s Electronic Orange Book to identify the corresponding brand name product(s) and their patents. 
We focused on brand name products rather than generics because the former often utilize technologies with protected active patents and typically generate higher sales, whereas generic drugs often enter the market only after a product’s active patents have expired. We examined possible equivalent brand names to ensure that we identified the government’s licenses to available alternative products. FDA’s Electronic Orange Book included 210 of the 217 brand name products we reviewed. We also obtained patent numbers for three of the seven pharmaceuticals not included by examining their product Web sites. Using the patent numbers, we then accessed the patent records in the U.S. Patent and Trademark Office’s (USPTO) patent database to determine whether the government held any rights to the patented technologies of each brand name pharmaceutical. We identified any cases where the government was the owner or assignee or had a license to use the invention because it sponsored the research. In addition to our own assessment, we examined the National Institutes of Health’s (NIH) July 2001 report entitled NIH Response to the Conference Report Request for a Plan to Ensure Taxpayers’ Interests Are Protected. NIH assessed the return to the taxpayers for therapeutic drugs that use NIH-funded technology and have sales of at least $500 million per year, making them “blockbuster” drugs. From a survey of the pharmaceutical industry, FDA, USPTO, and its own databases, NIH determined that the government had rights to 4 of the 47 blockbuster drugs it identified for 1999—Taxol, Epogen, Procrit, and Neupogen. We found that all 4 of these were among VA’s top 100 pharmaceutical procurements and all but Taxol were among DOD’s top 100. To determine the extent of the government’s ownership of or licenses to use other biomedical products, we explored several methods to locate relevant patent and licensing information for medical devices. 
However, we found that (1) there are no databases showing the underlying patents for most of these products and (2) products such as hospital beds and wheelchairs typically incorporate numerous components that may or may not be covered by identifiable patents. In addition, VA and DOD procurement officials informed us that they do not have agencywide data showing the most frequently purchased items because many devices are purchased at the local level. Because of these limitations, we identified five medical devices for which the VA Hospital in Milwaukee, Wisconsin—a major procurer of medical devices—had spent more than $1 million during fiscal year 2002. This approach also provided only limited information. We examined the government’s rights to each device by identifying it in the General Services Administration’s on-line supply catalog, which includes the items on the Federal Supply Schedule, and reviewing the corresponding item descriptions. However, we found that the catalog does not provide patent or licensing information for any of the products. We also were unable to determine from the USPTO patent database the specific patents used for each medical device. Finally, our examination of product Web sites found that they do not provide information on the products’ patented technologies or address whether the government has license rights to them. To examine how the government has used its licenses to federally funded inventions, we interviewed DOD, NIH, and VA officials who procure biomedical products or who are involved in scientific research. Also, we researched relevant statutes and case law and met with knowledgeable officials in NIH and industry to determine whether a general research exception exists regarding patent infringement that applies to government and other researchers conducting research for purely scientific reasons. We conducted our work from April 2002 through April 2003 in accordance with generally accepted government auditing standards. 
We did not independently verify the data that VA, DOD, or NIH provided or the data obtained from the USPTO and FDA databases. However, agency officials addressed each of our questions regarding their data.

Appendix II: The Top 100 Pharmaceuticals Procured by VA on the Basis of Dollar Value, Fiscal Year 2001

Appendix III: The Top 100 Pharmaceuticals Dispensed by DOD on the Basis of Dollar Value, July 1, 2001–June 30, 2002

Appendix IV: Comments from the National Institutes of Health

The following are GAO’s comments on the National Institutes of Health’s letter dated April 22, 2003.

GAO’s Comments

1. We agree with NIH that federal agencies have unrestricted rights to use a federally funded invention for government purposes. The government has, indeed, a “nontransferable, irrevocable, paid-up license” to practice the invention, or it may authorize someone to practice the invention on its behalf. However, these rights cannot be exercised so as to undermine the rights that the Bayh-Dole Act clearly intends to accord to inventors. Specifically, the government’s license permits it to practice the invention to meet its needs, i.e., to meet needs that are reasonably associated with the requirements of federal programs, not to act outside of those constraints that normally distinguish public- from private-sector activities.

2. We deleted the footnote.

3. We deleted “generally” from the sentence.

4. We disagree. Related issues have been discussed in several court decisions. See, for example, AMP, Inc. v. United States, 389 F.2d 448, 454 (Ct. Cl. 1968), cert. denied, 391 U.S. 964 (1968). Regarding NIH’s concern that adherence to these cases might have a chilling effect on the willingness of private entities to participate as funding recipients, we point out that the parties can negotiate intellectual property rights dealing with these issues on a case-by-case basis. Moreover, the scope of any exception is limited as required to permit use of the government’s license in the subservient patent.
| The Bayh-Dole Act gives federal contractors, grantees, and cooperative agreement funding recipients the option to retain ownership rights to inventions they create as part of a federally sponsored research project and profit from commercializing them. The act also protects the government's interests, in part by requiring that federal agencies and their authorized funding recipients retain a license to practice the invention for government purposes. GAO examined (1) who is eligible to use and benefit from the government's license to federally funded biomedical inventions, (2) the extent to which the federal government has licenses to those biomedical inventions it procures or uses most commonly, and (3) the extent to which federal agencies and authorized federal funding recipients have actually used or benefited from these licenses. GAO focused its work on the Department of Veterans Affairs (VA), the Department of Defense (DOD), and the National Institutes of Health (NIH). NIH commented that the report implies that the government's right to use its license is more limited than it actually is. GAO recognizes that the right of federal agencies and their funding recipients to use a federally funded invention is unrestricted. However, GAO believes that these license rights can be used only to meet needs that are reasonably related to the requirements of federal programs. Federal agencies and their authorized funding recipients are eligible to use the government's licenses to federally funded inventions for the benefit of the government. Government researchers can use the technology without paying a royalty, and federal agencies can authorize their funding recipients to use the government's licenses for specific contracts, grant awards, or cooperative agreements meeting a federal government need. The government is not entitled to automatic price discounts simply because it purchases products that incorporate inventions in which it happens to hold a license. 
Furthermore, the government's rights attach only to the inventions created by federally funded research and do not necessarily extend to later inventions based on them. Thus, the government may have no rights in a next-generation invention that builds on federally funded technology if the new invention were not itself created by federally sponsored research. Few of the biomedical products that federal agencies most commonly buy appear to incorporate federally funded inventions. In 2001 the government had licensing rights in only 6 brand name drugs associated with the top 100 pharmaceuticals that VA procured and in 4 brand name drugs associated with the top 100 pharmaceuticals that DOD dispensed. GAO was unable to determine the extent to which the government had rights to other types of biomedical products because there are no databases showing the underlying patents for most of these products and such products may incorporate numerous components that might not be covered by identifiable patents. The federal government uses its licenses to biomedical inventions primarily for research; however, researchers generally do not document such usage. These licenses are valuable because researchers can use the inventions without concerns about possible challenges for unauthorized use. Neither VA nor DOD has used the government's licenses to procure biomedical products because they cannot readily determine whether products use federally funded technologies and they believe they already receive favorable pricing through the Federal Supply Schedule and national contracts. Furthermore, neither VA nor DOD has used the government's license to manufacture a biomedical product for its use. |
Background

USDA delivers services through its component agencies and through thousands of field offices in states, cities, and counties. These offices acquire and use various types of telecommunications services to accomplish their missions and serve customer needs. USDA reports show that the Department spends about $100 million annually for telecommunications, of which about $37 million was for FTS 2000 services in fiscal year 1994. USDA is required to use FTS 2000 network services for basic long-distance communications; that is, the inter-Local Access and Transport Area (LATA) transport of voice and data communications traffic. Under the FTS 2000 contract, USDA agencies and offices use basic switched service for voice, packet switched service for data, video transmission service, and other types of services to support their communications needs. In addition to FTS 2000, USDA estimates that during fiscal year 1994 it spent another $50 million on local telecommunications and other services obtained from about 1,500 telephone companies. USDA agencies and offices use these services to meet their local telephone and data communications needs within LATAs. Other telecommunications services obtained from commercial carriers that are not available under the FTS 2000 contract, such as satellite communications, are also included in these costs. USDA also estimates that between $10 million and $30 million is spent annually on telecommunications equipment, such as electronic switches and telephone plant wiring, and support services, such as maintenance for acquired telecommunications equipment. The Federal Information Resources Management Regulation and USDA’s Telecommunications Policy (DR-3300-1) require that USDA’s agencies maximize use of all government telecommunications resources to achieve optimum service at the lowest possible cost.
In addition, Section 215 of the Department of Agriculture Reorganization Act of 1994 requires USDA to reduce expenses by jointly using resources at field offices where two or more agencies reside. This includes sharing FTS 2000 telecommunication services. Strategies to reduce the costs associated with the use of FTS 2000 generally involve (1) consolidating separate FTS 2000 Service Delivery Points (SDPs) to increase the volume of communications traffic among fewer points and to obtain associated volume discounts and (2) optimizing services and types of access to SDPs by selecting more cost-effective telecommunications service options based on customers’ particular needs. Because there can be additional equipment and transmission costs associated with implementing such consolidation and optimization alternatives, these costs will offset some of the savings. For example, additional expenditures may be required for telecommunications equipment, such as interface cards and communications software to provide connectivity between systems, and for additional services such as equipment maintenance and purchasing of new leased telecommunications lines. Consequently, a cost-benefit analysis is generally performed to determine whether the alternatives are practical and worthwhile. The senior USDA Information Resources Management (IRM) Official—the Assistant Secretary for Administration—has delegated responsibility for managing all aspects of the Department’s telecommunications program to the Director of OIRM. This includes the responsibility for ensuring that the Department maximizes use of its telecommunications resources at the lowest possible cost. Within OIRM, the Associate Director for Operations is responsible for managing telecommunications services on a departmentwide basis, including services under governmentwide contracts such as FTS 2000.
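The cost-benefit analysis described above weighs recurring savings from consolidation against one-time equipment costs and new recurring charges such as maintenance and leased lines. A minimal sketch of such a test, using entirely hypothetical dollar figures, might look like this:

```python
# Minimal sketch of a consolidation cost-benefit test. All dollar amounts are
# hypothetical placeholders; a real analysis would itemize equipment,
# maintenance, and leased-line costs for each site.

def payback_months(annual_gross_savings, one_time_costs, added_annual_costs):
    """Months to recover one-time costs, or None if net savings are negative."""
    net_monthly = (annual_gross_savings - added_annual_costs) / 12
    if net_monthly <= 0:
        return None  # consolidation never pays for itself at this site
    return one_time_costs / net_monthly

# Hypothetical site: $120,000/yr gross savings, $60,000 one-time equipment
# outlay, $24,000/yr in new leased lines and maintenance.
print(payback_months(120_000, 60_000, 24_000))  # 7.5
```

A site passes the test when the payback period is short relative to the remaining life of the FTS 2000 contract; otherwise the offsetting costs make the alternative impractical.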
Public Law 103-354 authorized the Secretary of Agriculture to reorganize USDA and begin streamlining the Department to achieve greater efficiency, effectiveness, and economies. In this regard, the Secretary reduced the number of component agencies from 43 to 29, and, on December 6, 1994, announced plans to reduce the number of county field offices from about 3,700 to 2,531. These 2,531 offices, called Field Office Service Centers, will house multiple agencies to provide USDA customers one-stop shopping for farm services, natural resources conservation services, and rural housing and community development services. Other USDA agencies with offices throughout the country, such as the Forest Service, are also planning to combine offices and consolidate functions where it is feasible to do so.

Scope and Methodology

To address our objective, we interviewed USDA officials and reviewed USDA reports and other documentation to identify departmentwide consolidation and optimization activities. In addition, we reviewed telecommunications reports and FTS 2000 cost and billing information to determine expected savings associated with these efforts. We also visited USDA sites where consolidation and optimization activities have been performed to assess whether consolidation and optimization solutions were successfully implemented. Appendix I provides further details on our scope and methodology. We conducted our review between March 1994 and March 1995 in accordance with generally accepted government auditing standards. We discussed the facts in our report with USDA officials, including the Assistant Secretary and Deputy Assistant Secretary for Administration and the Director of USDA’s Office of Information Resources Management, and have incorporated their comments where appropriate. We also provided a draft of this report to USDA for comment. USDA’s comments are discussed in the report and are included in full in appendix II.
USDA Loses Millions Annually by Not Consolidating and Optimizing FTS 2000 Telecommunications Services

USDA has not consolidated and optimized FTS 2000 telecommunications services to the maximum extent possible. USDA has hundreds of field office sites where multiple USDA agencies, located within the same building or geographic area, obtain and use separate, and often redundant, telecommunications services. OIRM officials estimate that by consolidating and optimizing FTS 2000 telecommunications services at many of these sites, the Department could save $5 million to $10 million a year. Although OIRM has identified these and other cost-savings opportunities over the past 2.5 years, OIRM senior management has not carried out its responsibility to reduce departmentwide telecommunications costs where possible. Consequently, USDA pays millions more than is necessary for the use of FTS 2000 services. Moreover, although the Secretary has begun restructuring agency field offices to consolidate operations at 2,531 new Field Office Service Centers, USDA has no operational plan or time frame for consolidating and optimizing telecommunications at these centers to ensure the most cost-effective use of FTS 2000 services.

USDA Has Identified Opportunities for Savings

USDA component agencies obtain telecommunications equipment and services independently. This has resulted in the use of redundant telecommunications services within and among USDA field offices. In 1990, when USDA transitioned from its departmental network to FTS 2000, OIRM was aware of consolidation and optimization opportunities. However, OIRM and the agencies jointly agreed not to address this issue during the transition due to the concern that doing so would complicate the transition and increase risks of disrupting service.
Today, as in the past, hundreds of USDA field sites around the country that have multiple agency offices located within the same building or geographic area continue to access and use FTS 2000 telecommunications services separately. As a result, since 1991, when USDA completed its transition to FTS 2000, these offices have been paying for redundant and unnecessary services. A few USDA component agencies have taken the initiative to begin eliminating the redundant use of FTS 2000 services by consolidating and optimizing telecommunications. For example, the Farmers Home Administration (FmHA), which began consolidating and optimizing FTS 2000 services at field office sites in 1991, has achieved savings at several field sites. At just one multiagency office in Columbia, Missouri, USDA documents show that FmHA reduced telecommunications service costs by about $120,000 per year, or about 55 percent. Also, in 1991, at the Siuslaw National Forest in Corvallis, Oregon, the Forest Service consolidated voice and data communications traffic between ranger districts and forest laboratory offices. By taking this action, Forest Service officials at the Siuslaw National Forest told us, they achieved an annual savings of about $150,000. These independent efforts by the Forest Service and FmHA are laudable, and they demonstrate that substantial savings can be realized. These efforts, though, focused primarily on solutions that provide a savings benefit to the individual agency and did not address cross-agency solutions that provide additional savings to the Department as a whole. USDA, recognizing that opportunities to consolidate and optimize FTS 2000 telecommunications services existed throughout the Department, formed the Telecommunications Services Division (TSD) within OIRM in April 1991.
TSD is to assist the Office in carrying out its responsibilities to ensure that the Department’s telecommunications resources are being used in the most cost-effective way. Among other things, TSD was tasked with analyzing telecommunications data to identify departmentwide opportunities to consolidate and optimize FTS 2000 services. By consolidating and optimizing FTS 2000 services at field office sites, service costs could be substantially reduced by eliminating redundant FTS 2000 services between agency offices within the same building or geographic area. Since its creation in 1991, TSD has identified opportunities to consolidate and optimize FTS 2000 services. For example:

- In February 1992, TSD identified 30 USDA field office sites where the Department could achieve savings by consolidating and optimizing FTS 2000 services. TSD’s analysis of the first site showed that agencies’ FTS 2000 costs would be reduced by as much as 60 percent.

- In May 1993, TSD began developing a Network Analysis Model to identify cost-effective options for reducing FTS 2000 telecommunications costs at USDA field office sites. February 1994 TSD estimates showed that use of the model departmentwide to consolidate and optimize FTS 2000 services could reduce costs by as much as $5 million to $10 million each year, or $400,000 to $800,000 per month.

- In February 1994, TSD reported that numerous USDA agencies were paying significantly higher than average charges for their use of FTS 2000 service. According to TSD’s report, costs could be reduced as much as $750,000 to $3.7 million annually by aggregating some FTS 2000 services at these agencies.

- In June 1994, TSD identified opportunities to save as much as $150,000 to $600,000 annually by shifting some component agency data transmissions that are not time-critical outside peak business hours, when transmission costs are lower.
According to FTS 2000 rates, costs to transmit data outside normal business hours can be as much as 50 percent lower than during normal business hours. USDA component agencies could therefore take advantage of these savings by transmitting data such as time and attendance reports, noncritical E-mail messages, and other noncritical data files outside normal business hours.

Telecommunications Savings Not Realized Because OIRM Has Not Discharged Its Management Responsibilities Effectively

OIRM has departmentwide responsibility for managing telecommunications and ensuring that the Department makes maximum use of telecommunications resources at the lowest possible cost. Under authority delegated by USDA's Senior IRM Official, the Director of OIRM is the executive agent responsible for planning, development, acquisition, and use of the Department's telecommunications resources. According to federal regulations, the Director of OIRM is supposed to exercise this authority by, among other things, providing departmentwide leadership and direction for telecommunications activities to ensure effective and economical use of resources; developing and implementing systems, processes, and techniques to improve the operational effectiveness of telecommunications resources; and reviewing the use of these resources to ensure that they conform to all applicable federal and Department policies and procedures. In addition, OIRM must review and approve component agencies' acquisition of telecommunications resources by granting technical approvals. OIRM has not effectively discharged these management responsibilities. Specifically, OIRM has not gone far enough under its authority to implement initiatives to consolidate and optimize FTS 2000 telecommunications services. Consequently, USDA has not been successful in achieving the telecommunications cost savings identified by TSD. 
For example, despite being aware in February 1994 that TSD's Network Analysis Model was an effective tool that USDA could use to reduce overall FTS 2000 service costs by as much as $400,000 to $800,000 per month, OIRM management never discussed these savings opportunities with the Department's senior decisionmakers—USDA's Under Secretaries and Assistant Secretaries—nor did it develop a plan for implementing the actions necessary to achieve departmentwide savings. Consequently, as of December 1994, USDA still had not realized these savings. In addition, OIRM did not act immediately to aggregate services with higher than average FTS 2000 costs and to shift agency data transmissions to off-peak hours. Although TSD advised OIRM management about these opportunities in February and June 1994, respectively, OIRM management did not inform component agencies about the savings opportunities until October and November 1994. At that time, OIRM provided general information on the cost-savings initiatives during meetings with USDA interagency advisory groups, such as the Department's Management Council—an interagency advisory group made up of component agencies' Deputy Administrators for Management. However, OIRM's Director for Operations acknowledged that OIRM did not follow up these general meetings with additional briefings or develop action plans for implementing these cost-savings initiatives. Consequently, as of December 1994, no progress had been made toward achieving these savings. In cases where TSD had initiated some consolidation and optimization efforts, OIRM did not effectively discharge its management responsibility to ensure full implementation of these initiatives. For example, TSD's 1992 plan to consolidate and optimize FTS 2000 services at 30 USDA field sites was not implemented because OIRM management took no steps to resolve interagency disagreements. 
In this case, although OIRM gave TSD responsibility for managing the effort, disagreements between agencies over responsibilities for consolidating services at the first site precluded any further work from getting underway. A December 1992 memorandum shows that TSD advised OIRM management of the problems it was having and asked for assistance in gaining agency cooperation. Nevertheless, OIRM management did not respond to TSD’s request and the matter was left unresolved. TSD continued to try to solicit agency support for the initiative but these efforts were unsuccessful and no savings were achieved. OIRM’s Associate Director for Operations was unable to explain OIRM management’s inaction in this case. In another case, although TSD determined in April 1994 that USDA could save $65,000 annually by consolidating FTS 2000 services among several agencies located in Colorado, about 40 percent of these savings were not achieved because one agency—the Animal and Plant Health Inspection Service (APHIS)—did not implement TSD’s recommendation to consolidate services at an APHIS office in Fort Collins, Colorado. While APHIS officials in Fort Collins agreed to consider TSD’s recommendation, APHIS took no subsequent action to consolidate its FTS 2000 services despite several follow-up discussions by TSD officials. Moreover, APHIS did not provide TSD with a reason for its inaction. In July 1994, TSD’s Chief of Operations told us he informed OIRM management about APHIS’ lack of action. However, OIRM management never attempted to resolve this matter with APHIS senior management. In October 1994, we asked the APHIS Chief for Information Systems and Communications at APHIS headquarters in Hyattsville, Maryland, to explain the agency’s apparent unwillingness to implement TSD’s recommendation for cost-savings in Fort Collins. He stated that he was personally unaware of this savings opportunity and could not explain why APHIS officials in Fort Collins did not implement the initiative. 
Following our meeting, APHIS notified TSD that it would implement TSD’s recommendations and begin consolidating FTS 2000 services at the APHIS office in Fort Collins. Consolidation work is scheduled to begin during 1995. In December 1994, we discussed these cases with the Director of OIRM, who agreed that opportunities for FTS 2000 telecommunications savings have been missed. Although the Director did not fully explain the lack of action by OIRM in each case, he stated that (1) shifts in departmental priorities to activities such as the Info Share program had prevented OIRM from making as much progress consolidating and optimizing FTS 2000 services as he expected to make during 1994 and (2) OIRM’s lack of fiscal authority and control over the agencies’ telecommunications budgets and expenditures had made it difficult to prompt agencies to act on cost-savings initiatives. The Director noted, however, that consolidated voice services have been installed in 35 farm service field sites under USDA’s Info Share program. While the Department’s consolidation of telecommunications services at some field sites under Info Share is a step in the right direction, it is only a fraction of the hundreds of USDA field sites where savings opportunities exist. More progress has not been made because OIRM has not effectively exercised the authority it does have to reduce telecommunications costs. Specifically, OIRM has not (1) met with USDA’s senior decisionmakers to advise them about savings opportunities, (2) developed departmentwide plans and implemented actions to consolidate and optimize FTS 2000 telecommunications services when opportunities for savings have been identified, and (3) overseen and effectively managed cost-savings initiatives to ensure that savings are achieved. In addition, OIRM has not effectively exercised its authority to review and approve the acquisition of telecommunications resources. 
Although OIRM’s Director stated that OIRM does not have control over USDA component agencies’ telecommunications budgets, the Office does have authority to review and approve agencies’ acquisition of telecommunications resources. However, OIRM has not used this authority to ensure that opportunities to consolidate and optimize FTS 2000 services are addressed. In this regard, OIRM reviews and approves component agencies’ requests for procurement of telecommunications resources under the Department’s technical approval process. However, OIRM officials responsible for technical approvals told us that they evaluate proposed procurements individually and do not review them to assess whether or not opportunities to consolidate and optimize FTS 2000 services have been addressed before approving agency telecommunications acquisitions. While OIRM has, for the most part, been passive and not gone far enough to fulfill its management responsibilities, it has sought support for departmentwide telecommunications cost-savings initiatives by discussing them with USDA interagency advisory groups. In this regard, OIRM’s Director told us that OIRM had briefed USDA’s Management Council and other interagency advisory groups on some of the savings opportunities that had been identified. However, OIRM did not have these discussions until October 1994, over 2.5 years after the savings opportunities were first identified. More importantly, as discussed above, officials participating in these groups are not senior decisionmakers. In addition, no interagency plans or actions to consolidate and optimize departmentwide FTS 2000 services were presented at or resulted from these meetings, and OIRM officials involved in the meetings told us they did not follow up with agency officials to solicit cooperation and support for implementing these initiatives. 
Conversely, one recent effort to reduce telecommunications costs at USDA's headquarters in Washington, D.C., demonstrates how savings can be achieved when senior decisionmakers are involved. In this case, in November 1993, USDA began to consolidate and optimize telecommunications services at its headquarters offices after the Secretary of Agriculture announced plans to reduce telecommunications costs by $1 million. In response to the Secretary's direction, OIRM took action to enhance telecommunications service and reduce costs at USDA's headquarters offices by concentrating telecommunications circuits among component agency users, optimizing the use of FTS 2000 services and new technologies such as the Integrated Services Digital Network, and establishing a central process to control the ordering of equipment and services and the certification of billing. To date, OIRM records show that this effort has achieved several hundred thousand dollars in savings. The current reorganization effort underway to combine offices and share resources among agencies further underscores the need for OIRM's close involvement with senior decisionmakers in planning and implementing cost-effective telecommunications. As USDA restructures and streamlines headquarters and field office operations, the Department can take advantage of opportunities to consolidate and optimize departmentwide FTS 2000 telecommunications services. At the time of former Secretary Espy's December 6, 1994, announcement to streamline USDA's field structure, OIRM had not met with USDA senior management or developed a plan or time frame for carrying out this formidable task. On December 20, 1994, OIRM established an agreement with one of the Info Share agencies to lead efforts to consolidate and optimize telecommunications at sites involving only the Info Share agencies. 
However, the agreement, dated January 11, 1995, was signed by the Director of OIRM and the designated lead agency's Senior IRM Official, not USDA's senior decisionmakers. Moreover, the written agreement did not clearly specify how consolidation and optimization activities would be carried out or time frames for their completion.

Agency Officials Cite Factors That Precluded Action on Savings Opportunities

Senior agency officials, including the Assistant Secretary for Administration and the Director of OIRM, acknowledged the need to act more swiftly when savings opportunities are identified. However, they pointed out that changes in some key USDA leadership positions during the transition of administrations in 1993 made it difficult for OIRM to gain the departmentwide attention that was needed. We recognize that a period of leadership transition can affect an organization's progress on departmentwide initiatives. However, as discussed previously, we found no indication that OIRM management had advised senior decisionmakers about departmentwide telecommunications cost-savings opportunities, either before or after the 1993 transition. These officials also noted that it would have been inappropriate for OIRM to have led widespread efforts to consolidate and optimize FTS 2000 services before the Secretary officially announced in December 1994 that 1,170 of USDA's 3,700 county-based field offices would be closed or consolidated. This is because OIRM believed the up-front costs to consolidate and optimize telecommunications services, such as service installation and equipment charges, would be unrecoverable in offices that later closed or moved to another location. We agree that it would be unwise to consolidate and optimize FTS 2000 services at offices where start-up costs cannot be recovered. However, we believe OIRM could have achieved substantial savings by consolidating and optimizing FTS 2000 services at USDA offices unaffected by the reorganization closures. 
Specifically, the closures did not include hundreds of state and district offices for farm service agencies, where USDA has a significant opportunity for FTS 2000 cost savings. Nor did they include hundreds of other USDA agency offices, such as those of the Forest Service and APHIS. We also believe that OIRM wasted valuable time by not beginning to plan consolidation and optimization work at the county-based offices until after the Secretary announced the county-based office closures in December 1994. While we recognize that OIRM was not involved in the reorganization decisions, OIRM did not effectively use information provided by the Secretary in 1993 to begin planning reorganization cost-savings activities. Specifically, in September 1993, the Secretary publicly announced that the reorganization would create USDA Field Office Service Centers by moving stand-alone county-based offices to sites where more than one farm service agency would be collocated within the same building. According to Department records from 1992, USDA had 2,463 county-based office sites where farm service agencies were collocated. On the basis of this information, OIRM could have started collecting and analyzing data at these collocated sites to (1) identify opportunities for significant cost savings, (2) target consolidation and optimization planning at specific sites with the largest payback, and (3) develop implementation solutions for these sites. Had cost-savings solutions been planned prior to the Secretary's announcement, OIRM would have saved considerable time by being positioned to begin implementing them at many of the reorganized field sites.

Conclusions

USDA has identified opportunities to achieve substantial departmentwide savings by more cost-effectively acquiring and using FTS 2000 services throughout the Department. However, while OIRM is responsible for exploiting these opportunities, it has not done so. 
In USDA, where component agencies act independently, implementing the actions necessary to achieve departmentwide cost savings requires effective management leadership. However, OIRM has not demonstrated this leadership. While OIRM has begun to discuss savings opportunities with component agency officials, it has not taken the management steps necessary to carry out its responsibility to reduce departmentwide FTS 2000 costs. To do so, OIRM would need to (1) involve senior decisionmakers, (2) establish implementation plans, (3) oversee actions to ensure that savings are achieved, and (4) ensure that opportunities to consolidate and optimize FTS 2000 services have been addressed prior to granting technical approval of telecommunications acquisitions. Unless these actions are taken immediately, USDA and its component agencies will continue to waste millions of dollars annually on redundant FTS 2000 telecommunications services.

Recommendation

We recommend that the Secretary of Agriculture direct the Assistant Secretary for Administration to take immediate and necessary action to ensure that the Office of Information Resources Management effectively fulfills its management responsibility to reduce the Department's FTS 2000 telecommunications costs. 
At a minimum, the Assistant Secretary should:

- advise appropriate Under Secretaries and Assistant Secretaries immediately about all opportunities identified by the Office of Information Resources Management and the Telecommunications Services Division to reduce telecommunications costs;

- work directly with the Under Secretaries and Assistant Secretaries to develop a plan for (1) consolidating and optimizing FTS 2000 telecommunications at USDA's new Field Office Service Centers and (2) identifying additional USDA headquarters and field office sites where it is cost-effective to consolidate and optimize FTS 2000 telecommunications services;

- establish, in cooperation with the Under Secretaries and Assistant Secretaries, an implementation team consisting of OIRM and agency staff who have the technical capabilities and resources necessary to implement departmentwide FTS 2000 cost-savings solutions based on the established priorities;

- oversee implementation of all telecommunications cost-savings initiatives and report progress to the Secretary periodically as deemed appropriate; and

- preclude USDA component agencies and offices from obtaining and using redundant FTS 2000 telecommunications services by requiring that OIRM technical approvals be made contingent on the component agencies having considered and sufficiently addressed departmentwide consolidation and optimization of FTS 2000 services.

Agency Comments and Our Evaluation

The Department of Agriculture provided written comments on a draft of this report. Its comments are summarized below and reproduced in appendix II. In discussing USDA's comments with us, the Assistant Secretary for Administration stated that the Department plans to fully implement our recommendation. 
Specifically, the Assistant Secretary stated that he will (1) take immediate and necessary action to ensure that OIRM effectively fulfills its management responsibility to reduce FTS 2000 telecommunications costs and (2) require the Director of OIRM to report periodically on the status of the FTS 2000 cost-savings actions that each USDA agency is undertaking. The Assistant Secretary added that USDA has already begun implementing our recommendation. For example, the Assistant Secretary briefed the Under and Assistant Secretaries on the importance of telecommunications management and cost-reduction opportunities, and instructed them to develop action plans to identify and implement telecommunications cost-savings initiatives in their mission areas. In this regard, the Assistant Secretary stated that there is ample evidence, from actual experience at several USDA locations and from cost models, that savings of thousands of dollars per office per year are possible and that “...the potential for savings are so great that the burden of proof is on the agencies to justify why consolidation of telecommunications services is not implemented in collocated offices.” The Assistant Secretary also said that USDA is taking action beyond what we recommended. For example, the Department has begun to investigate consolidating telecommunications services with other federal agencies. OIRM recently signed a memorandum of agreement with the Department of the Interior to provide a framework for consolidating and sharing telecommunications services among agencies of these two departments. Although the Assistant Secretary agreed to take action on our recommendation, he stated that the draft report did not give sufficient weight to the changing management and organizational environment in USDA over the last 2 years and did not adequately recognize OIRM management and staff for their initiative and creativity in developing tools to analyze telecommunications costs. 
The Assistant Secretary also believes that the report ignores the responsibilities of information resources management officials in USDA agencies for cost-effective management of their telecommunications resources. We agree that the period between 1993 and 1995 was one of significant change in the Department and that many USDA officials were deeply involved in planning and beginning to implement the reorganization of the Department and its agencies. However, as discussed in our report, we believe that OIRM could have done more during this time to achieve departmentwide cost savings. We also believe that the report does recognize OIRM and give it credit for progress made in developing analytical tools for analyzing telecommunications costs and identifying cost-savings opportunities. Finally, while we agree that USDA agencies have responsibility for managing telecommunications cost-effectively, it is OIRM, and not the agencies, that has responsibility for identifying and directing departmentwide savings opportunities. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will provide copies of this report to the Secretary of Agriculture; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the Senate and House Committees on Appropriations, the House Committee on Agriculture, and the House Committee on Government Reform and Oversight; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 if you or your staff have any questions concerning the report. Other major contributors are listed in appendix III. 
Scope and Methodology

To address our objective, we reviewed USDA policies addressing the management of FTS 2000 telecommunications services, USDA reports on FTS 2000 usage and costs, documentation related to USDA's transition to the FTS 2000 network, and other materials outlining plans and efforts by OIRM and USDA component agencies to identify opportunities to consolidate and optimize telecommunications and implement cost-savings solutions. To identify the Department's overall FTS 2000 costs, we also reviewed USDA usage and cost information obtained from OIRM and USDA's National Finance Center. To determine USDA's progress in implementing consolidation and optimization initiatives for the cost-effective use of FTS 2000 telecommunications services, we interviewed both OIRM management and field personnel involved in these activities. We also reviewed (1) USDA technical reports and internal correspondence describing the status of initiatives and (2) billing reports to determine savings associated with consolidation and optimization efforts. In addition, we visited locations identified by OIRM and USDA component agencies where FTS 2000 services have been consolidated and optimized and interviewed officials to determine whether the solutions were successfully implemented. We interviewed senior-level representatives from USDA's 12 largest users of FTS 2000 telecommunications services to determine what actions USDA has taken to identify departmentwide opportunities to consolidate and optimize FTS 2000 services involving these agencies. We also observed a demonstration of TSD's Network Analysis Model by a USDA contractor. This demonstration included an overview of the methodology being used and the data being generated. We did not test the validity of the Network Analysis Model. We performed our audit work from March 1994 through March 1995, in accordance with generally accepted government auditing standards. 
Our work was primarily done at USDA headquarters in Washington, D.C.; USDA's National Finance Center in New Orleans, Louisiana; and USDA's Telecommunications Services Division in Fort Collins, Colorado. We also conducted work at various USDA and component agency field offices, including USDA state offices in Lexington, Kentucky; Richmond, Virginia; St. Louis, Missouri; and Columbia, Missouri; Forest Service headquarters in Rosslyn, Virginia; the Service's Northwestern Region in Portland, Oregon; the Service's National Forest offices in Corvallis and Pendleton, Oregon; the Agricultural Research Service office in Greenbelt, Maryland; and APHIS headquarters in Hyattsville, Maryland, and its regional office in Fort Collins, Colorado. Lastly, we visited Booz-Allen & Hamilton in McLean, Virginia, to observe a demonstration of the Network Analysis Model.

Comments From the Department of Agriculture

Major Contributors to This Report

Accounting and Information Management Division, Washington, D.C.
Kansas City Regional Office: Troy G. Hottovy, Senior Evaluator 
Pursuant to a congressional request, GAO reviewed whether the Department of Agriculture (USDA) is consolidating and optimizing Federal Telecommunications Systems (FTS) 2000 services across the department to maximize savings. GAO found that: (1) USDA has identified opportunities to significantly reduce telecommunications costs by consolidating and optimizing its FTS 2000 services, but it has not acted on all the identified opportunities and is wasting millions of dollars each year; (2) USDA field offices are failing to coordinate their FTS 2000 acquisitions and are often purchasing redundant systems; (3) USDA could save up to $10 million a year if its field offices consolidate and optimize FTS 2000 services; (4) the USDA Office of Information Resources Management (OIRM) has not effectively carried out its responsibility to reduce telecommunications costs; (5) some USDA agencies are independently consolidating and optimizing FTS 2000 services and reducing telecommunication costs; (6) senior USDA officials state that leadership transitions have made it difficult for OIRM to implement FTS 2000 consolidation plans; and (7) although USDA is reducing and consolidating its field offices, it has no operational plan or time frame for consolidating and optimizing telecommunications at the centers or to ensure that FTS 2000 is cost-effectively used.
Background

Since beginning operations in March 2003, DHS has assumed operational control of about 209,000 civilian and military positions from 22 agencies and offices specializing in one or more aspects of homeland security. The intent behind DHS's merger and transformation was to, among other things, improve coordination, communication, and information sharing among the multiple federal agencies responsible for carrying out the mission of protecting the homeland.

Overview of DHS Organizational Structure

To accomplish its mission, the department is organized into various components, each of which is responsible for specific homeland security missions and for coordinating related efforts with its sibling components, as well as external entities. Table 1 shows DHS's principal organizations and their missions. An organizational structure is shown in figure 1. Within the Management Directorate is the Office of the CIO, which is expected to leverage the best available technologies and IT management practices, provide shared services, coordinate acquisition strategies, maintain an enterprise architecture that is fully integrated with other management processes, and advocate and enable business transformation. Other DHS entities also are responsible or share responsibility for critical IT management activities. For example, DHS's major organizational components (e.g., directorates, offices, and agencies) have their own CIOs and IT organizations. Control over the department's IT funding is vested primarily with the components' CIOs, who are accountable to the heads of their respective components. The Director of Program Analysis and Evaluation is the sponsor for the department's capital planning and investment control process and serves as the executive agent and coordinator for the process. This Director reports to the Chief Financial Officer (CFO).

IT Is Critical to DHS's Mission Performance

To accomplish its mission, DHS relies extensively on IT. 
For example, for fiscal year 2007 DHS requested about $4.16 billion to support 278 major IT programs. Table 2 shows the fiscal year 2007 IT funding for key DHS components. As mentioned earlier, DHS requested about $4 billion for fiscal year 2008, which is the third largest planned IT expenditure among federal departments.

Prior GAO Reviews of DHS's IT Investment Management Efforts

During the last 3 years, we have reported on steps that DHS has taken to establish its IT investment management activities and the associated challenges it faced. In May 2004, we reported that DHS was in the midst of developing and implementing a strategic approach to IT management. We also reported that DHS's interim efforts to manage IT investments did not provide assurance that those investments were strategically aligned. As a result, we concluded that DHS system investments were at risk of requiring rework in order to properly align with strategic mission goals and outcomes. Accordingly, we recommended that DHS limit its IT investments to those efforts deemed cost-effective against several criteria, taking into account any future system rework that would be needed to later align the systems with the department's emerging systems integration strategy. In August 2004, we reported that DHS had established several key foundational elements for investment management. However, we also reported that DHS was not providing effective departmental oversight of IT investments, with many investments not receiving control reviews, due in large part to the lack of an organized process for conducting the reviews. Accordingly, we recommended that DHS establish milestones for the initiation and completion of major information and technology management activities, such as conducting these control reviews. In March 2006, we testified that DHS had worked to institutionalize IT management controls across the department but still faced challenges. 
We identified actions that DHS reported it was taking, while noting, for example, that the department still needed to define explicit criteria for determining whether investments aligned with the agency's modernization road map (enterprise architecture).

Overview of DHS's Approach to Investment Management

DHS's enterprisewide and component agency IT investments are categorized into one of four “levels” that determine the extent and scope of the required project and program management, the level of reporting requirements, and the review and approval authority. An investment is assigned to a level based on its total acquisition costs and total life cycle costs. Table 3 shows the dollar thresholds that DHS reports it uses in determining investment levels. Several entities and individuals are involved in managing these investments. Table 4 lists the decision-making bodies and personnel involved in DHS's investment management process and describes their key responsibilities and membership. Figure 2 shows the relationship among the key players in DHS's investment management process. DHS's investment management process consists of four phases (which it refers to as Capital Planning Investment Control Steps): (1) the preselect phase supports the initial conception and development of the investment, (2) the select phase supports the selection of the investment from among competing investments, (3) the control phase supports the monitoring of investments for acceptable performance, and (4) the evaluate phase supports the evaluation of investments for progress made against objectives. Each phase of the process is made up of multiple steps that set out requirements that must be met in order for the boards to make decisions about the investments. The investment management phases are aligned with projects' life cycle phases, as illustrated in figure 3. 
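The level-assignment rule described above can be sketched as a simple threshold lookup. The dollar floors below, and the rule that the larger of the two cost figures governs, are hypothetical placeholders for illustration only; DHS's actual thresholds appear in table 3, which is not reproduced in this report excerpt.

```python
# Hypothetical thresholds -- DHS's actual values appear in table 3.
LEVEL_FLOORS = [  # (level, minimum governing cost in dollars), largest first
    (1, 200_000_000),
    (2, 50_000_000),
    (3, 5_000_000),
]

def investment_level(total_acquisition_cost, total_life_cycle_cost):
    """Assign an investment to level 1-4 from its two cost figures,
    assuming the larger figure governs (an illustrative simplification)."""
    governing = max(total_acquisition_cost, total_life_cycle_cost)
    for level, floor in LEVEL_FLOORS:
        if governing >= floor:
            return level
    return 4  # smallest investments

print(investment_level(10_000_000, 60_000_000))  # 2
```

Levels 1 and 2 would then correspond to the "major investments" whose proposals go to the IRB and JRC, respectively, while levels 3 and 4 are decided by component heads.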
According to DHS policy, the boards are to review projects at key decision points or at least annually. Figure 3 shows where these key decision points (see shaded areas) are to occur in a project’s life cycle and in the investment management process. DHS’s preselect phase is to identify the business needs and assess the preliminary costs and benefits needed for the development and support of an investment’s initial concept. During this phase, the component agency is to assign a project manager to develop an investment review request— essentially an investment proposal—and to scope the project. The document is to provide initial information, which is to be used to establish a schedule for the investment’s key milestone reviews and be reviewed by the Integrated Project Review Team (IPRT). For major investments (level 1 and 2 investments), project managers are required to also assemble an interdisciplinary team to assist in the management of the investment. During this phase, the EAB assesses investments for alignment with the enterprise architecture and provides recommendations to the appropriate decision-making authorities (recommendations for level 1 investments are made to the IRB, those for level 2 investments are made to the JRC, and those for level 3 and 4 investments are made to the heads of the components). Project managers present investment proposals to their component-level investment review boards for approval. In the select phase, DHS is to assess investments against a uniform set of evaluation criteria and thresholds to ensure that the department selects the investments that best support its mission. All new and existing investments are to go through this phase in support of DHS’s annual programming and budgeting process. Based on the assessments during the select phase, DHS is to prioritize investments and decide which investments to include in its portfolios. 
The select phase is also intended to help the department justify budget requests by demonstrating the resources required for individual investments. At the end of the selection process, the department is to produce a scored and ranked list of Exhibit 300s for all major investments and an Exhibit 53 for all level 1 through level 4 IT investments for submission to the Office of Management and Budget. Once resources are expended to acquire planned capabilities, the investment is assumed to be in the control phase, and control related activities are to continue throughout the investment’s life cycle. During this phase, project managers are responsible for preparing inputs for periodic reporting in support of investment reviews. The purpose of the reviews is to ensure that investments are performing within acceptable cost, schedule, and performance parameters. The Acquisition Program Baseline is the main control instrument used through predeployment to baseline these parameters for investments. The IPRT reviews the Acquisition Program Baseline and other periodic reporting documents and provides recommendations to the project teams, if needed. Once the project teams have made the recommended changes, the IPRT provides a summary package to the component agency heads and DHS’s review boards (IRB and JRC) to support key milestone decision reviews and other reviews established in the investment’s investment review request during the preselect phase. The evaluate phase begins when an investment is implemented or is deployed and operational. During this phase, project managers are responsible for conducting postimplementation reviews (PIR) to evaluate the impact of the investment on the department’s mission and programs. The PIR focuses on three primary areas: impact to stakeholders and customers, ability to deliver results, and ability to meet baseline goals. 
Major investments that are in the operations and maintenance phases are required to perform an operational analysis to measure performance and cost against the investment’s baseline. If the investment’s performance is deficient, the program manager is required to introduce corrective actions. Any changes to the investment’s original baseline need to be approved by the appropriate IRB. The lessons learned from conducting a PIR are to be reported to the IPRT for use throughout the department. Overview of GAO’s ITIM Maturity Framework The ITIM framework consists of five progressive stages of maturity that an agency can achieve in its investment management capabilities. It was developed on the basis of our research into the IT investment management practices of leading private- and public-sector organizations. The maturity stages are cumulative; that is, in order to attain a higher stage, an agency must institutionalize all of the critical processes at the lower stages, in addition to the higher stage critical processes. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment processes that promote business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in several of our evaluations, and a number of agencies have adopted it. These agencies have used ITIM for purposes ranging from self-assessment to redesign of their IT investment management processes. ITIM’s five maturity stages (see fig. 4) represent steps toward achieving stable and mature processes for managing IT investments. The successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. 
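The cumulative rule just described can be expressed as a simple check: an organization's attained maturity is the highest stage for which every critical process at that stage, and at all stages below it, is institutionalized. The sketch below illustrates this; the per-stage process names are abbreviated placeholders, not the framework's official wording.

```python
# Sketch of ITIM's cumulative maturity rule: a stage counts only if it
# and every stage below it are fully institutionalized.
# NOTE: the process names are shorthand placeholders for illustration.

REQUIRED = {
    2: {"board operations", "business needs", "selection", "project control", "inventory"},
    3: {"portfolio criteria", "portfolio creation", "portfolio evaluation", "pir"},
    4: {"it succession"},
    5: {"breakthrough monitoring"},
}

def maturity_stage(implemented):
    """Return the highest stage whose critical processes (and those of
    all lower stages) are a subset of the `implemented` set."""
    stage = 1  # Stage 1 has no critical processes and is the floor
    for s in sorted(REQUIRED):
        if REQUIRED[s] <= implemented:
            stage = s
        else:
            break  # a gap at any stage caps maturity below it
    return stage
```

Under this rule, an organization performing some Stage 3 practices while missing a Stage 2 critical process still rates Stage 1, which is why the framework stresses completing all lower-stage practices first.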
With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. It is not unusual for an organization to be performing key practices from more than one maturity stage at the same time. However, our research shows that agency efforts to improve investment management capabilities should focus on implementing all lower stage practices before addressing higher stage practices. In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment processes by helping the agency to attain successful, predictable, and repeatable investment control processes at the project level. At Stage 2, the emphasis is on establishing basic capabilities for selecting new IT projects, and on developing the capability to (1) control projects so that they finish predictably within established cost, schedule, and performance expectations and (2) identify and mitigate potential exposures to risk. Stage 3 is where the agency moves from project-centric processes to portfolio-based processes and evaluates potential investments by how well they support the agency’s missions, strategies, and goals. This stage requires that an organization continually assess both proposed and ongoing projects as parts of complete investment portfolios—integrated and competing sets of investment options. It focuses on establishing a consistent, well-defined perspective on IT investment portfolios and maintaining mature, integrated selection (and reselection), control, and evaluation processes, which are to be evaluated during PIRs. 
This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than to focus exclusively on the balance between the costs and benefits of individual investments. Organizations implementing Stage 2 and 3 key practices have in place capabilities that assist in establishing the selection, control, and evaluation processes required by the Clinger-Cohen Act of 1996. Stages 4 and 5 require the use of evaluation techniques to continuously improve both investment processes and portfolios in order to better achieve strategic outcomes. At Stage 4 maturity, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough technologies that will enable it to change and improve its business performance. As mentioned earlier, each ITIM critical process is further broken down into key practices that describe the tasks that an organization should be performing to successfully implement each critical process. Key practices include organizational commitments, which are typically policies and procedures; prerequisites, which are conditions that must exist to implement a critical process successfully; and activities, which address the implementation of policies and procedures. DHS Has Established the Structure Needed to Effectively Manage Its Investments but Has Yet to Fully Define Many of the Related Policies and Procedures Through IT investment management, organizations define and follow a corporate process to help senior leadership make informed decisions on competing IT investment options. Such investments, if managed effectively, can have a dramatic impact on an organization’s performance and accountability. 
If mismanaged, they can result in wasteful spending and lost opportunities for improving delivery of services. Based on our framework, an organization should establish the management structure needed to manage its investments; build the investment foundation by selecting and controlling individual projects (Stage 2 capabilities); and manage projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency (Stage 3 capabilities). DHS has established the management structure to effectively manage its investments. However, the department has yet to fully define 8 of the 11 related policies and procedures defined by our ITIM framework. Specifically, while DHS has documented the policies and related procedures for project-level management, some of these procedures do not include key elements. For example, procedures for selecting investments do not cite either the specific criteria or steps for prioritizing and selecting new IT proposals, and procedures for management oversight of IT projects and systems do not specify the rules that the investment boards are to follow in overseeing investments. In addition, the department has yet to define most of the policies associated with managing its IT projects as investment portfolios. Officials attributed the absence of policies and procedures at the portfolio level to other investment management priorities. Until DHS fully defines and documents its policies and procedures for investment management, it risks selecting investments that will not meet mission needs in the most cost-effective manner. DHS Has Established an Investment Management Structure and Project- Level Policies, but It Has Not Fully Defined Supporting Procedures At ITIM Stage 2, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. 
Through these processes, the organization can identify expectation gaps early and take the appropriate steps to address them. ITIM Stage 2 critical processes include (1) defining IT investment board operations, (2) identifying the business needs for each IT investment, (3) developing a basic process for selecting new IT proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 5 describes the purpose of each of these Stage 2 critical processes. DHS has established a management structure within which to execute investment management processes. As previously mentioned, this management structure consists of two review boards, the IRB and the JRC, which are responsible for defining and implementing DHS’s IT investment management approach. The membership for these boards appropriately consists of senior executives at the department level and from the major business units and the CIO organization. Other entities, including the EAB and IPRT, play a critical role in supporting the boards and performing investment management activities. DHS has also fully documented the policies and certain procedures associated with project-level management. Specifically, the department’s Investment Review Process management directive establishes the framework for department investment management by documenting a high-level investment management process and defining project-level policies, including policies for such key activities as identifying projects or systems that support business needs and selecting among new investment proposals. In addition, other documents specify the procedures associated with these policies. 
For example, the Investment Management Handbook and Business Case Life Cycle Handbook specify procedures for relating projects and systems to DHS’s business needs, and the Capital Planning and Investment Control Guide and Systems Development Lifecycle specify procedures for integrating funding and selection. Nevertheless, some of DHS’s project-level procedures fail to address key elements, as follows:

- Procedures for selecting investments do not cite either the specific criteria or steps for prioritizing and selecting new IT proposals. According to officials, such elements are being used to select new IT proposals. However, unless the criteria and steps for prioritizing and selecting new proposals are documented in procedures, it is unlikely that they will be used consistently.

- Procedures for management oversight of IT projects and systems do not specify the steps and criteria (i.e., rules) for the investment boards to follow in controlling investments. Documenting these rules would provide reasonable assurance that key investment control activities are being performed consistently and would establish transparency and thus promote departmentwide understanding of how decisions are made.

- A methodology, with explicit decision-making criteria, does not exist to guide the EAB in determining an investment’s alignment with the DHS enterprise architecture. DHS has developed Enterprise Architecture Board Process Guidance that the EAB uses in its reviews of investments, and this guidance contains a standard template for projects to use in providing information to the board; however, it does not describe the procedures governing how alignment is to be determined. As a result, the EAB’s assessments are based on subjective and unverifiable judgments. This is a significant weakness given the importance of architecture alignment in ensuring that programs will be defined, designed, and developed in a way that avoids duplication and promotes interoperability and integration.
DHS officials stated that they are aware of the absence of documented procedures in certain areas of project-level management, but said that they are nevertheless carrying out the activities that these procedures would address if they were documented. The officials attributed the absence of procedures to resource constraints, stating that, with a full-time staff of six to support departmentwide investment management activities, they are focused more on performing investment management than on documenting it in great detail. While we do not question the importance of actually implementing IT investment management practices, as evidenced by the fact that our ITIM framework provides for such implementation, it is important to recognize that implementation of undefined processes will at best produce ad hoc and inconsistent results. Accordingly, our framework provides both for documenting how IT investment management is to be performed through policies and procedures and for actually implementing these policies and procedures. Unless DHS’s IT investment process guidance specifies procedures for Stage 2 activities that cover all the elements of effective project-level investment management, it is unlikely that key activities will be carried out consistently and in a disciplined manner. This means that DHS is at risk of investing in IT assets that will not cost-effectively meet mission needs. Table 6 summarizes our findings relative to DHS’s execution of the seven key policy and procedure practices needed to manage IT investments at the project level (Stage 2). DHS Has Largely Not Documented Policies and Procedures for Portfolio Management Once an agency has attained Stage 2 (i.e., project-level) maturity, it needs to implement the critical processes for managing its investments as a portfolio or set of portfolios (Stage 3).
IT investment portfolios are integrated, agencywide collections of investments that are assessed and managed collectively based on common criteria. Managing investments as portfolios is a conscious, continuous, and proactive approach to allocating limited resources among an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an agencywide perspective enables an organization to consider its investments in a more comprehensive and integrated fashion, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments as portfolios also allows an organization to determine its priorities and make decisions about which projects to begin funding and continue to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—based on, for example, business lines or life cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into enterprise-level portfolios. According to ITIM, Stage 3 maturity involves (1) defining the portfolio criteria; (2) creating the portfolio; (3) evaluating (i.e., overseeing) the portfolio; and (4) conducting PIRs. Table 7 summarizes the purpose of each of these processes. DHS has not yet fully established any of the policies and procedures associated with managing the 22 IT portfolios that it recently established. For example, the department does not have documented policies and procedures for creating and modifying portfolio selection criteria or for creating its portfolios. In addition, DHS does not have documented policies and procedures for evaluating (or controlling) its portfolios. 
Further, while the department has policies and procedures for conducting PIRs, these policies and procedures do not specify several items, including roles and responsibilities for conducting reviews, and how conclusions, lessons learned, and recommended management actions are to be shared with executives and others. DHS officials attributed the lack of portfolio-level policies and procedures to the fact that resources have been assigned to other investment management activities, such as its efforts to establish the 22 portfolios. However, they said that establishing these policies and procedures is important, and thus they are taking steps to begin defining them. Specifically, they said that a portfolio manager for four portfolios—Grants, Case Management, Portal, and Disaster Management—was hired in the fall of 2006, and this manager’s responsibilities include developing the direction, guidance, and procedures for departmental portfolio management. They also said that another portfolio manager is currently being recruited. In addition, DHS officials stated that the PIR procedures defined in the Operational Analysis Guide are being updated to focus more on lessons learned. Not having documented policies and procedures for portfolio management is a significant weakness, particularly since officials told us that they recently began performing control reviews of these portfolios. Until DHS fully establishes the policies and procedures for portfolio-level management, DHS is at risk of not selecting and controlling the mix of investments in a manner that best supports the department’s mission needs. As illustrated in table 10, none of the practices associated with policies and procedures for Stage 3 have been executed. Table 8 summarizes the rating for each critical process required to manage investments as a portfolio and summarizes the evidence that supports these ratings. 
DHS Has Not Fully Executed Key Practices Associated with Effectively Controlling Investments DHS has not fully implemented any of the key practices needed to control investments—either at the project level or at the portfolio level. For example, according to DHS officials and our review of the department’s control review schedule, the investment boards have not conducted regular reviews of investments. Further, while control activities are sometimes performed, they are not performed consistently across projects. In addition, because the policies and procedures for portfolio management have yet to be defined, control of the department’s investment portfolios is ad hoc, according to DHS officials. Officials told us that to strengthen IT investment management, they have recently hired a portfolio manager and are recruiting another one. Until DHS fully implements processes to control its investments, both at the project and portfolio levels, it increases the risk of not meeting cost, schedule, benefit, and risk expectations. DHS Has Not Implemented the Key Practices Associated with Controlling Investments at the Project Level As we have previously reported, an organization should effectively control its IT projects throughout all phases of their life cycles. In particular, its investment board should observe each project’s performance and progress toward predefined cost and schedule expectations, as well as each project’s anticipated benefits and risk exposure. The board should also employ early warning systems that enable it to take corrective actions when cost, schedule, and performance expectations are not met. 
According to our ITIM framework, effective project-level control requires, among other things, (1) providing adequate resources for IT project oversight; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) having regular reviews by each investment board of each project’s performance against stated expectations; and (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. (The key practices are listed in table 9.) Although, as discussed in the previous section, DHS has established some policies and procedures, it has not implemented any of the prerequisites and activities associated with effective project control. For example, DHS officials stated that the department does not have adequate resources, including human capital, for project oversight. In addition, although DHS policies and procedures call for certain control activities to be performed, these have not always taken place. For example, DHS policy and procedures call for cost, schedule, benefit, and risk parameters to be documented in (1) Acquisition Program Baselines (APB) and risk management plans for major projects in the capability development and demonstration or production and deployment phases and (2) operational analysis (OA) documents and Exhibit 300s for projects in operations and support (steady state). However, DHS officials acknowledged that some projects do not have APBs or OAs and stated that a management directive to implement the OA policy is in draft. In addition, although the APBs are supposed to be approved by the appropriate board at the alternative selection milestone decision point, DHS officials stated that this does not always happen.
Instead, these officials said that the Office of Program Analysis and Evaluation is reviewing APBs for “interim approval.” In addition, OAs are currently reviewed by the boards only if a problem arises with the projects. Of the three investments we reviewed, an APB and risk management plan were developed for one (Transportation Worker Identification Credentialing or TWIC). However, these documents are being updated to reflect changes in the project’s scope and have not yet been approved by the IRB. For another investment (Integrated Wireless Network or IWN), an APB was developed, according to officials, but it was not approved by the IRB, as it should have been given the project’s life cycle stage. For the third investment (National Emergency Management Information System or eNEMIS), an OA document specifies the cost, schedule, and benefit expectations for the project. However, the OA has not been reviewed by an investment board because the project has not experienced a problem that would trigger its review. Data on actual performance are also not provided to the appropriate IT investment board on a regular basis. Specifically, according to the Investment Review Process management directive, Periodic Reporting Manual, and Investment Management Handbook, actual cost, schedule, and benefits performance data for projects through the production and deployment phase should be provided to the boards in the APB and the IPRT’s analyses of quarterly reports for key milestone decision reviews and annual reviews. However, our review of the fiscal year 2006 control schedule showed that project reviews did not always occur; therefore, the boards were not provided with data on actual project performance on a regular basis. In addition, a schedule for fiscal year 2007 project reviews has not been developed.
Moreover, officials confirmed that these reviews do not always occur, stating that, for fiscal year 2007, the boards’ reviews have been scheduled reactively, for projects that have legislatively required expenditure plans or have otherwise prompted congressional interest. In addition, while the IPRT is supposed to monitor data on the actual performance of projects in operations and support, these data are provided to the boards only if problems arise. Regarding investment board reviews of the performance of IT projects and systems against expectations, DHS’s policy requires that ongoing project reviews be conducted either annually or at milestone decision points. However, these reviews are not conducted in a timely manner for all level 1 and 2 investments that are not the subject of congressional interest. Officials stated that the Under Secretary for Management would likely be issuing new guidance aimed at making the review schedule more proactive. Finally, DHS officials told us that the investment boards do not effectively track the implementation of corrective actions for underperforming projects, primarily because they do not have a robust tool to support them in this activity. This means that DHS executives do not have the information they need to determine whether investments are meeting expectations, which increases the risk that underperforming projects will not be identified and corrected in a timely manner. Table 9 shows the ratings for each key practice required to control investments (except for the policies and procedures, which were discussed in the previous section) and summarizes the evidence that supports these ratings.
DHS Has Not Implemented Key Practices Needed to Control Its Investment Portfolios The critical process associated with controlling investment portfolios (evaluating the portfolio under Stage 3 of our ITIM framework) builds upon the Stage 2 critical process providing investment oversight by adding the elements of portfolio performance to an organization’s investment control capacity. Compared with less mature organizations, Stage 3 organizations will have the capability to control the risks faced by each investment and to deliver benefits that are linked to mission performance. In addition, a Stage 3 organization will have the benefit of performance data generated by Stage 2 processes. Executive-level oversight of risk management outcomes and incremental benefit accumulation provides the organization with increased assurance that each IT investment will achieve the desired results. Table 10 lists the key practices associated with this critical process, with the exception of the establishment of policies and procedures, which was discussed earlier. Although officials told us that DHS has taken steps to classify its investments into 22 IT portfolios, the department has largely not defined the policies and procedures needed to control these portfolios (see earlier section of this report). As a result, DHS officials stated that they are performing portfolio-level control in an ad hoc manner. To begin addressing this, they stated that an analyst was recently hired to help develop guidance and procedures for the IT portfolios, and another staff member is being recruited. Without documented policies and procedures for controlling its investment portfolios, the department’s efforts to evaluate its portfolios will remain ad hoc, compounding its risk of investing in new and existing IT systems that are not aligned with DHS’s mission and business priorities and do not meet cost, schedule, and performance expectations. 
Conclusions Given the importance of IT to DHS’s mission performance and outcomes, it is vital for the department to adopt and employ an effective institutional approach to IT investment management. To its credit, the department has established aspects of such an approach and thus has a basis for achieving greater maturity. However, its approach is missing key elements of effective investment management, such as procedures for implementing project-specific investment management policies, as well as policies and procedures for portfolio-based investment management. Further, it has yet to fully implement either project- or portfolio-level investment control practices. All told, this means that DHS lacks the complete institutional capability needed to ensure that it is investing in IT projects that best support its strategic mission needs and that ongoing projects will meet cost, schedule, and performance expectations. After almost 4 years in operation, DHS is overdue in having a mature approach to investment management. Without one, DHS is impaired in its ability to optimize mission performance and accountability. Recommendations for Executive Action To strengthen DHS’s investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of Homeland Security direct the Undersecretary for Management, in collaboration with the CFO and CIO, to devote the appropriate attention to development and implementation of effective investment management processes. 
At a minimum, this should include fully defining and documenting project- and portfolio-level policies and procedures that address the following eight areas:

- selecting new investments, including specifying the criteria and steps for prioritizing and selecting these proposals;

- reselecting ongoing IT investments, including specifying the criteria and steps for prioritizing and reselecting these investments;

- overseeing (i.e., controlling) IT projects and systems, including specifying the procedural rules for the investment boards’ operations and decision making during project oversight;

- identifying and collecting information about investments, including assigning responsibility for the process and ownership of the information and defining the locations for information storage;

- creating and modifying IT portfolio selection criteria;

- analyzing, selecting, and maintaining the investment portfolios;

- assessing portfolio performance at regular intervals to reflect current performance expectations; and

- conducting postimplementation reviews of IT investments, including defining roles and responsibilities for doing so, and specifying how conclusions, lessons learned, and recommended management actions are to be shared with executives and others.

In addition, we recommend that the department implement key investment control processes.
At a minimum, this should include these six project-level practices: providing adequate resources, including people, funding, and tools, for IT project oversight; having IT projects and systems, including those in steady state (operations and maintenance), maintain approved project management plans that include expected cost and schedule milestones and measurable benefit and risk expectations; providing data on actual performance (including cost, schedule, benefit, and risk performance) to the appropriate IT investment board; having each investment board use verified data to regularly review the performance of IT projects and systems against stated expectations; taking appropriate actions to correct or terminate each underperforming IT project or system in accordance with defined criteria and the documented policies and procedures for management oversight; and having the investment board regularly track the implementation of corrective actions for each underperforming project until the actions are completed. It should also include the following six portfolio-level practices: providing adequate resources, including people, funding, and tools, for reviewing the investment portfolios and their projects; making board members familiar with the process for evaluating and improving the portfolio’s performance; providing the results of relevant Stage 2 (Providing Investment Oversight) reviews to the investment boards; developing, reviewing, and modifying criteria for assessing portfolio performance at regular intervals to reflect current performance expectations; defining and collecting IT portfolio performance measurement data that are consistent with portfolio performance criteria; and executing adjustments to the IT investment portfolios in response to actual portfolio performance. 
Agency Comments In DHS’s written comments on a draft of this report, signed by the Director, Departmental GAO/Office of Inspector General Liaison, the department stated that it agreed with our findings and recommendations and will use the report to improve its investment management and review processes. The department’s written comments are reprinted in appendix II. The department also provided technical comments that we incorporated in the report where appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees that have authorization and oversight responsibilities for homeland security and other interested congressional committees; the Director of the Office of Management and Budget; and the DHS Secretary, Undersecretary for Management, Chief Financial Officer, and Chief Information Officer. We also will make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-3439 or by e-mail at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs Office may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to (1) determine whether the Department of Homeland Security (DHS) has established the management structure and policies and procedures needed to effectively manage its information technology (IT) investments and (2) determine whether the department is implementing key practices needed to effectively control these investments. 
To address our first objective, we reviewed the results of the department’s self-assessment of practices associated with project-level and portfolio-level policies and procedures and compared them against the relevant practices in Stages 2 and 3 of our IT Investment Management (ITIM) framework. We also validated and updated the results of the self-assessment through document reviews and interviews with officials. We reviewed written policies, procedures, guidance, and other documentation providing evidence of executed practices, including DHS’s Investment Review Process Management Directive, Capital Planning and Investment Control Guide, Investment Management Handbook, Periodic Reporting Manual, and various management memoranda. Our review focused on DHS’s capabilities related to the Stage 2 and 3 policies and procedures because those stages lay the foundation for higher maturity stages and assist organizations in complying with the investment management provisions of the Clinger-Cohen Act. To address our second objective, we reviewed the results of the department’s self-assessment of critical processes within Stages 2 and 3 that are associated with project-level and portfolio-level oversight and compared them against our ITIM framework. We also validated and updated the results of the self-assessment through document reviews and interviews with officials. In addition, we reviewed DHS’s Investment Review Board, Joint Resources Council, and Enterprise Architecture Board investment-related materials, including the investment review boards’ control schedule, status reports, meeting minutes, portfolio-related documents, and records of decisions. 
We also conducted interviews with officials from the Office of the Chief Information Officer, the Office of the Chief Financial Officer, and the Office of Program Analysis and Evaluation whose main responsibilities are to control investments and ensure that DHS’s IT investment management process is implemented and followed. As part of our analysis for the second objective, we selected three investments as case studies to verify that the key practices for investment control were being applied. The investments selected were major systems when we began our review. They also (1) represented a mix of enterprisewide (i.e., headquarters) and component agency investments; and (2) spanned different life cycle phases. The three investments are described below: DHS Integrated Wireless Network (IWN)—This network is to provide a coordinated nationwide approach to reliable, seamless, interoperable wireless communications. It is intended to support federal agents and officers engaged in the conduct of law enforcement, protective services, homeland defense, and disaster response within DHS, the Department of Justice, and the Department of the Treasury. IWN is a major enterprisewide investment and is in the capability development and demonstration phase. It has an estimated life cycle cost of $4.3 billion and is designated as a level 1 investment. Transportation Security Administration’s Transportation Worker Identification Credentialing (TWIC)—This project is intended to improve security by establishing a systemwide common secure credential, used across all transportation modes, for all personnel requiring unescorted physical and/or logical access to secure areas of the transportation system. It is a major component agency investment and is designated as a level 1 investment. The total cost of the program is estimated at approximately $307 million through fiscal year 2012. 
Federal Emergency Management Agency’s National Emergency Management Information System (eNEMIS)—eNEMIS is a mission-critical application and infrastructure that supports the entire life cycle of emergency or disaster (including acts of terrorism) declarations. The project tracks major incidents; supports mission assignments and other predeclaration response activities; processes the governor’s request for assistance; and automates the preliminary damage assessment process, the regional analysis, and the summary. It is a major component agency investment that is in the operations and support phase and is designated as a level 1 investment with an estimated total life cycle cost of $319 million. For these investments, we reviewed project management documentation, such as acquisition program baselines, operational analysis documents, and decision memoranda. For both objectives, we rated the ITIM key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had fully met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review or when we determined that there were significant weaknesses in DHS’s execution of the key practice. We provided DHS an opportunity to produce evidence for the key practices that we rated as “not executed.” We conducted our work at DHS headquarters in Washington, D.C., from February 2006 through March 2007 in accordance with generally accepted government auditing standards. Appendix II: Comments from the U.S. Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Sabine Paul, Assistant Director; Gary Mountjoy, Assistant Director; Mathew Bader; Justin Booth; Barbara Collier; Tomas Ramirez; and Niti Tandon made key contributions to this report. 
The Department of Homeland Security (DHS) relies extensively on information technology (IT) to carry out its mission. For fiscal year 2008, DHS requested about $4 billion--the third largest planned IT expenditure among federal departments. Given the size and significance of DHS's IT investments, GAO's objectives were to determine whether DHS (1) has established the management structure and associated policies and procedures needed to effectively manage these investments and (2) is implementing key practices needed to effectively control them. GAO used its IT Investment Management (ITIM) framework and associated methodology to address these objectives, focusing on the framework's stages related to the investment management provisions of the Clinger-Cohen Act. DHS has established the management structure to effectively manage its investments. However, the department has yet to fully define 8 of the 11 related policies and procedures that GAO's ITIM framework defines. Specifically, while DHS has documented the policies and related procedures for project-level management, some of these procedures do not include key elements. For example, procedures for selecting investments do not cite either the specific criteria or steps for prioritizing and selecting new IT proposals. In addition, the department has yet to define most of the policies associated with managing its IT projects as investment portfolios. Officials attributed the absence of policies and procedures at the portfolio level to other investment management priorities. Until DHS fully defines and documents policies and procedures for investment management, it risks selecting investments that will not meet mission needs in the most cost-effective manner. DHS has also not fully implemented the key practices needed to actually control investments--either at the project level or at the portfolio level. 
For example, according to DHS officials and the department's control review schedule, DHS investment boards have not conducted regular investment reviews. Further, while GAO found that control activities are sometimes performed, they are not performed consistently across projects. In addition, because the policies and procedures for portfolio management have yet to be defined, control of the department's investment portfolios is ad hoc, according to DHS officials. Officials told GAO that they have recently hired a portfolio manager and are recruiting another one to strengthen IT investment management. Until DHS fully implements processes to control its investments, both at the project and portfolio levels, it increases the risk of not meeting cost, schedule, benefit, and risk expectations.
Background Amtrak was created by the Rail Passenger Service Act of 1970 to operate and revitalize intercity passenger rail service. Prior to Amtrak’s creation, intercity passenger rail service was provided by private railroads, which had lost money, especially after World War II. The act, as amended, gave Amtrak a number of goals, including providing modern, efficient intercity passenger rail service; giving Americans an alternative to automobiles and airplanes to meet their transportation needs; and minimizing federal subsidies. Through fiscal year 1998, the federal government has provided Amtrak with over $20 billion in operating and capital subsidies. Amtrak provides intercity passenger rail service to 44 states and the District of Columbia (see fig. 1). In fiscal year 1997, Amtrak served about 20 million intercity rail passengers on 40 routes and had passenger revenues of about $964 million. Amtrak also operates intercity passenger rail service that is financially supported by others—such as a state or a group of states. As illustrated in figure 1, in fiscal year 1997, 11 states paid Amtrak a total of about $70 million for such service to transport about 4.6 million passengers. In addition, Amtrak operates commuter rail service under contract. During fiscal year 1997, Amtrak was the contract operator of seven commuter rail systems serving about 49 million passengers. According to Amtrak, an average of 179,000 passenger trips are made each weekday on the 708 commuter trains it operates; and in fiscal year 1997, Amtrak received about $242 million in revenue to operate commuter rail service. Amtrak also provides train-dispatching, maintenance-of-way, and other services for commuter and freight railroads that use its tracks and facilities. According to Amtrak, four commuter rail systems (with more than 429,000 passengers per day)—mostly on the Northeast Corridor—pay to use its rails or facilities. 
In fiscal year 1997, four freight railroads operated on the Northeast Corridor: the Springfield Terminal Railway Company, the Providence and Worcester Railroad, the Connecticut Southern Railroad, and Conrail. As measured in train-miles—the movement of a train the distance of 1 mile—Conrail is by far the largest freight user of the Corridor. Overall, the freight railroads own about 97 percent of the tracks over which Amtrak operates (about 22,300 miles), and Amtrak directly owns only about 650 miles of tracks. Despite attempts to address growing losses, Amtrak’s financial condition raises the specter of possible bankruptcy. At the end of fiscal year 1996, the gap between Amtrak’s operating deficits and federal operating subsidies had begun to grow; Amtrak was continuing to experience working capital deficits (the difference between current assets and current liabilities); and debt levels had increased significantly. In fiscal year 1997, Amtrak’s net loss was $762 million, and its overall loss was $70 million. Although these losses were less than those for fiscal year 1996, Amtrak’s overall loss was still about $26 million more than planned. In addition, as of September 30, 1997, Amtrak had borrowed $75 million from banks to meet payroll and other operating expenses. Financial prospects for fiscal year 1998 may also be dim. Amtrak’s strategic business plan projects a cash flow deficit of about $100 million by September 1998, even assuming the successful implementation of all of the strategic business plan’s actions. The Congress recently provided about $2.2 billion in the Taxpayer Relief Act of 1997 that may be used to acquire capital improvements. However, because of high operating costs, Amtrak continues to face challenges in improving its financial health. Should Amtrak’s financial condition force it to file for bankruptcy, it must do so under chapter 11 of the Bankruptcy Code. 
This chapter contains provisions regarding the management and reorganization of debtors, including railroads, and specifies the circumstances under which a railroad may be liquidated. Among other things, chapter 11 seeks to protect the public interest in continued rail service. However, a railroad may be liquidated upon the request of an interested party (such as a creditor), if the court determines liquidation to be in the public interest. A railroad must be liquidated if a plan for reorganizing it has not been confirmed within 5 years after filing for bankruptcy. The trustee who is appointed plays a key role and, subject to the court’s review, directs the railroad and its affairs during bankruptcy. In a liquidation, the trustee administers the distribution of the railroad’s assets (called the estate) in accordance with the Bankruptcy Code. Appendix I contains a more detailed description of the bankruptcy process as it might apply to Amtrak. Costs Associated With a Liquidation Are Difficult to Predict In September 1997, Amtrak estimated the net cost to creditors and others of a possible liquidation to be between about $10 billion and $14 billion over a 6-year period. However, the financial impacts associated with a possible liquidation are difficult to estimate because of the uncertainties connected with the financial condition of the Corporation at the time of liquidation. These uncertainties are associated with different types of costs. These costs include, for example, (1) obligations that are due to creditors, such as lenders, vendors, and Amtrak employees; (2) costs that Amtrak currently pays, or might have to pay in the future, that could be assumed by other parties; and (3) costs to administer and close out the estate. Virtually all the costs associated with a liquidation would likely be borne either directly by those who do business with Amtrak or by those who benefit from Amtrak’s existence. 
In this regard, most of these costs would represent Amtrak’s existing financial obligations and the costs of providing future levels of rail service, which would be borne by other parties. One of the uncertainties associated with any estimate of the financial impacts involved in a liquidation is the obligations to creditors. These obligations can vary over time. For example, Amtrak’s debt levels and capital lease obligations have increased significantly in recent years—from $492 million in fiscal year 1993 to about $1.3 billion in fiscal year 1997. This total does not include about $820 million that is expected to be incurred in fiscal year 1998 and beyond to finance high-speed trainsets and locomotives and related maintenance facilities for the Northeast Corridor. Future obligations to creditors may be affected by a variety of factors, such as the Taxpayer Relief Act of 1997. This act provides Amtrak with a total of about $2.2 billion in federal funds in fiscal years 1998 and 1999 that may be used to acquire capital improvements and repay principal and interest on certain debt. In addition, a default on Amtrak’s obligations to creditors primarily represents a transfer to its creditors and/or their insurers to the extent that assets are not sufficient to satisfy Amtrak’s debts, rather than generating an additional cost resulting from liquidation. This is because the responsibility to repay financial obligations existed before any liquidation occurred and did not arise solely because of the liquidation. Also uncertain is Amtrak’s future labor protection obligations to those employees who would lose their jobs as the result of a discontinuance of service. Amtrak has estimated that, if it were liquidated, its labor protection obligations to its employees could amount to about $6 billion over 6 years. Since this estimate was made, the Congress passed the Amtrak Reform and Accountability Act of 1997. 
This act eliminates current labor protection arrangements on May 31, 1998, and requires Amtrak and its unions to negotiate new arrangements for the payment of salaries, wages, and benefits to employees who would be affected if service were discontinued. Amtrak’s obligations, if any, to employees who lose their jobs as a result of a liquidation would depend on the results of these negotiations. Finally, after a liquidation, costs to operate, maintain, and rehabilitate infrastructure, such as tracks and stations, that Amtrak currently pays could be borne by other parties as a result of decisions to provide passenger or other rail service. For example, existing commuter rail agencies might assume some of these costs. How much of these costs might actually be assumed is uncertain because, in part, it would depend on such factors as the extent to which the commuter authorities needed the infrastructure, the price the new owner might charge for use of the facilities, and the level at which the infrastructure would be maintained. Amtrak believes that the Northeast Corridor’s infrastructure costs would not decrease much if intercity passenger service were eliminated. However, several commuter rail agencies disagree, telling us that, without Amtrak, they would not need as much infrastructure as currently exists and would pare it back to reduce costs. Nevertheless, costs might increase if the new owner of the infrastructure charged more for its use than Amtrak currently charges. The amount of infrastructure costs that might be assumed is also uncertain because it would depend on future capital investments. As we reported in May 1997, the Federal Railroad Administration (FRA) and Amtrak estimated that about $2 billion in capital funds would be needed over a 3- to 5-year period to upgrade tracks and other infrastructure on the southern end of the Northeast Corridor and preserve Amtrak’s ability to operate at current service levels. 
Some amount of the $2.2 billion provided by the Taxpayer Relief Act may be used to address these needs. As discussed for default on obligations to creditors, these infrastructure costs might be assumed by others as a result of liquidation but would not arise solely because of a liquidation. Creditors Could Bear a Financial Burden in the Event of a Liquidation In a liquidation, Amtrak’s institutional creditors could sustain losses. As of September 1997, data from Amtrak showed that its combined secured and unsecured debt liability could be about $2.2 billion. The extent to which this liability could be met would depend in large part on the market value of Amtrak’s available assets and liquidation proceeds. With the exception of its interests in the Northeast Corridor and certain other real property, the federal government’s financial interests in the event of a liquidation would generally be subordinate to those of other creditors. Secured and Unsecured Creditors Could Face Losses As of September 1997, secured creditors that have financed Amtrak’s equipment purchases would have had about $1.1 billion in claims if the railroad defaulted on these purchases, according to Amtrak’s data. Generally, these secured creditors would be entitled to recover the equipment, or its value, used to secure Amtrak’s debt. However, to the extent that secured creditors’ claims exceeded the value of the equipment, these creditors would be considered unsecured and payments to them would depend on the proceeds available to satisfy unsecured claims following the sale of Amtrak’s assets. It is difficult to predict the market conditions that Amtrak’s trustee or secured creditors would face in attempting to sell or lease equipment in a liquidation. For example, Amtrak’s locomotives may be readily usable by other railroads, and selling them might generate cash sufficient to allow secured creditors to avoid losing money on their loans. 
(Locomotives represent about 41 percent of the outstanding loan balance.) In contrast, the sale or lease of passenger cars might generate little cash because, according to two rail industry officials we spoke with, these cars might need to be reconfigured to accommodate the needs of a purchasing railroad, either in the United States or abroad. Table 1 shows the outstanding balances of loans secured by rolling stock and the percent of the total loan balance that each type of equipment represents. In a liquidation, unsecured creditors’ positions would be more uncertain than secured creditors’. As of September 30, 1997, Amtrak’s data showed that unsecured liabilities totaled about $1 billion. Unsecured creditors depend entirely on the proceeds from the sale of Amtrak’s available assets for payment—to the extent that these proceeds exceed the amounts required to satisfy secured creditors. As of September 30, 1997, all of Amtrak’s rolling stock was encumbered by liens and would have been unavailable to satisfy unsecured creditors’ claims. However, unsecured creditors could have received payments from the sale of Amtrak’s real property, such as property on the Northeast Corridor. As of September 30, 1997, the value of Amtrak’s Northeast Corridor property was $4.3 billion. Whether the actual sale proceeds would be more or less than this amount is uncertain because the market value of Amtrak’s real property is untested. For example, the Northeast Corridor has commuter and freight rail easements that may affect its market value. In addition, according to FRA, the market value might be affected by the extent to which the property could be used for telecommunications and other utilities. Table 2 shows the categories of unsecured creditors and the amounts they were owed as of September 30, 1997. Unsecured creditors may have other sources of payment. These include such assets as receivables due to Amtrak and the sale of Amtrak’s materials and supplies inventory. 
According to Amtrak’s data, as of September 30, 1997, these other assets totaled about $173 million. Receivables include, for example, amounts due from travel agents and credit card companies that participate in the sale of Amtrak tickets. Materials and supplies consist primarily of items for the maintenance and improvement of property and equipment, such as spare parts, as well as fuel. As of September 30, 1997, Amtrak’s data showed that up to about $82 million, or 100 percent of its receivables, might be recovered in cash. In contrast, the data showed that only about $30 million of the approximately $91 million on its balance sheet for materials and supplies could be recovered, in part due to the unique nature of Amtrak’s spare parts inventory. In addition to the unsecured obligations outlined in table 2, employees who would lose their jobs if Amtrak stopped operating trains would be considered unsecured creditors and could raise claims against Amtrak’s estate. The extent of these claims, if any, is uncertain. Amtrak estimated the maximum 6-year labor protection liability associated with payments to these employees to be about $6 billion. This liability could change substantially, however, as a result of the Amtrak Reform and Accountability Act of 1997, as discussed earlier. As a result, it is not currently possible to quantify the claims, if any, that employees could raise. In our opinion, the United States would not be legally liable for secured or unsecured creditors’ claims in the event of an Amtrak liquidation. Therefore, any losses experienced by Amtrak’s secured and unsecured creditors would be borne in full by the creditors themselves or their insurers. Nevertheless, we recognize that creditors could attempt to recover losses from the United States. 
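The two-tier recovery described above—secured creditors paid first from their collateral, with any shortfall joining the unsecured pool, which is then paid pro rata from unencumbered proceeds—can be sketched in a few lines of Python. All dollar amounts below are hypothetical round numbers for illustration only, not Amtrak's actual balances.

```python
def liquidation_waterfall(secured_claims, collateral_values,
                          unsecured_claims, free_proceeds):
    """Illustrative split of liquidation proceeds among creditors.

    Each secured creditor recovers the lesser of its claim and the value
    of its collateral; any deficiency is treated as an unsecured claim.
    The unsecured pool is then paid pro rata from unencumbered proceeds.
    Returns (total secured recovery, cents-on-the-dollar ratio for the
    unsecured pool).
    """
    secured_recovered = 0.0
    deficiency = 0.0
    for claim, value in zip(secured_claims, collateral_values):
        recovered = min(claim, value)
        secured_recovered += recovered
        deficiency += claim - recovered
    pool = unsecured_claims + deficiency
    ratio = min(1.0, free_proceeds / pool) if pool else 1.0
    return secured_recovered, ratio

# Hypothetical figures: $1.1 billion secured against equipment worth
# $0.8 billion, $1.0 billion unsecured, $0.9 billion of unencumbered
# sale proceeds (e.g., from real property, receivables, and inventory).
secured, ratio = liquidation_waterfall([1.1e9], [0.8e9], 1.0e9, 0.9e9)
```

In this hypothetical, the $0.3 billion secured deficiency swells the unsecured pool to $1.3 billion, so unsecured creditors would receive roughly 69 cents on the dollar—showing why their position depends so heavily on the untested market value of assets such as the Northeast Corridor.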
Federal Government Unlikely to Recover Its Financial Interests The federal government is both a secured creditor and a preferred stockholder in Amtrak; however, because of the nature of its financial interests, the federal government is not likely to recover these interests in the event of Amtrak’s liquidation. In exchange for funds for the purchase of and improvements to property and equipment, Amtrak has issued two promissory notes to the U.S. government. The first note, representing about $1.1 billion in noninterest-bearing debt, matures on November 1, 2082, with successive 99-year renewal terms, and is secured by a lien on Amtrak’s rolling stock. The note would be accelerated and become due in the event of Amtrak’s liquidation. However, according to FRA officials, to assist Amtrak in obtaining financing from the private sector, the federal government subordinated its lien on the equipment acquired by Amtrak after 1983 to the security interests of Amtrak’s equipment creditors. Consequently, in a liquidation, these other creditors would have first claim on this equipment or its value. Furthermore, while the federal government would be entitled to Amtrak’s pre-1983 equipment or its value, this equipment may be of limited value because of its age. The second note, representing about $3.8 billion in noninterest-bearing debt, matures on December 31, 2975, and is secured by a mortgage on Amtrak’s real property, primarily on the Northeast Corridor and in the Midwest. The mortgage on this property has not been subordinated. However, the note does not mature for over 970 years, and no payments are due until then. Furthermore, the note could only be accelerated upon the enactment of a statute requiring immediate payment. According to FRA, the present value of the mortgage—that is, the government’s interest in the property—is nominal. 
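FRA's characterization of the mortgage's present value as nominal follows from simple discounting: a noninterest-bearing note that does not mature for over 970 years is worth essentially nothing today at any positive discount rate. The sketch below assumes a 3 percent annual rate purely for illustration; the report itself cites no rate.

```python
# Present value of the ~$3.8 billion noninterest-bearing note secured by
# Amtrak's real property, due over 970 years from now.
# The 3 percent discount rate is an assumption for illustration only.
face_value = 3.8e9   # dollars
years = 970
rate = 0.03          # assumed annual discount rate

present_value = face_value / (1 + rate) ** years
print(f"${present_value:.4f}")
```

At 3 percent, the present value works out to a fraction of a cent, and even far lower discount rates leave it negligible, which is consistent with the report's conclusion that the federal government would be unlikely to sustain a financial loss on this note in a liquidation.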
In a liquidation, the trustee could pay off the mortgage and sell the property or sell the property to a purchaser who would assume the mortgage. In either case, proceeds from the sale would be available to satisfy creditors’ claims. It is not likely the federal government would sustain a financial loss on such transactions because it has no expectation of payment for over 970 years. While the federal government’s financial interest might not be affected, any interest the federal government might have in continuing intercity passenger rail service could be jeopardized if a purchaser did not use Amtrak’s property for this purpose. The U.S. government also holds all of Amtrak’s preferred stock, about $10.6 billion as of September 30, 1997. While the Amtrak Reform and Accountability Act of 1997 eliminated the liquidation preference attached to such stock as well as the requirement to issue such stock, this stock ownership nonetheless represents a substantial interest in Amtrak. However, the federal government’s claim associated with this stock would be secondary to the payment of the claims of secured and unsecured creditors. Liquidation Could Place Financial Burden on Participants in the Railroad Retirement and Unemployment Systems In contrast to the losses that creditors might suffer, participants in the railroad retirement and unemployment systems would have increased financial obligations in the event of Amtrak’s liquidation. The financial health of some of these participants—especially small freight railroads and commuter passenger railroads—might be adversely affected to the degree that they cannot increase revenues or cut costs to offset increased payroll taxes. The primary source of income for the railroad retirement system is payroll taxes levied on employers and employees. 
Because the retirement system is on a modified pay-as-you-go basis, the financial health of this system is closely related to the size of the railroad workforce and the income to the railroad retirement account derived from this workforce. In 1996, Amtrak paid about $335 million in payroll taxes into the railroad retirement account (about 8 percent of the total receipts for the railroad retirement account in calendar year 1996). A loss of this contribution could have a significant impact. A February 1997 analysis by the Railroad Retirement Board found that, if Amtrak had been liquidated in 1997 and no actions had been taken to increase payroll taxes or reduce benefit levels, the balance in the railroad retirement account would have begun to decline in 2000 and that the account would have been depleted by 2026. For this analysis, the Board assumed that all Amtrak employees were terminated and all Amtrak employees who were eligible for retirement at the time of a liquidation (about 1,300 employees) actually retired. Although the retirement account would not have been depleted until 2026, the Railroad Retirement Board would have had to take action before that time to protect the retirement account’s financial health. According to the Board, if Amtrak had been liquidated in 1997, this would have required, beginning in 1998, one of three actions: (1) a permanent “tier II” payroll tax increase on either employers or employees of other railroads or both, (2) tier II benefit reductions, or (3) a combination of the first and second actions equivalent to 2.3 percent of tier II taxable payroll. If the adjustment had been made totally as a tax increase, it would have resulted in a new combined employer and employee tax rate of 23.3 percent. Because the Board does not have the authority to increase retirement taxes, it would have to seek legislation to change the tax rate. Similarly, participants in the railroad unemployment system would be affected by a liquidation. 
In contrast to the impacts on the retirement account, the financial effects would be more immediate and shorter-term. The Railroad Retirement Board estimated that, if Amtrak had been liquidated in 1997, separated Amtrak employees would have received about $322 million in benefit payments that would not have been paid for by Amtrak. In order to pay these benefits, other railroads would have been required to increase their payroll tax contributions. In particular, the average tax rate would have been increased by a maximum of about 9 percentage points (a fourfold increase)—from 3 percent to 12 percent in 2000. This estimate assumed that terminated employees would have exhausted all their unemployment benefits and that they would have received no labor protection benefits. The Board also assumed that the unemployment account would have had to borrow $288 million from the retirement account, as permitted by statute, over 2 years. Because this borrowing would have been short-term, the Board believes that it would have had little or no overall effect on the retirement account. By taking these actions, the Board projected that the unemployment account would have remained financially solvent and been out of debt by 2001.

An Amtrak Liquidation Could Affect Intercity, Commuter, and Other Rail Service

Liquidating Amtrak could disrupt intercity and other passenger rail service—service that affects over 20 million intercity passengers and over 100 million commuter and other passengers on the Northeast Corridor annually. In particular, for both intercity and commuter rail, issues associated with accessing tracks and stations—and the cost of such access—would largely determine the extent of service, if any, including service on the Northeast Corridor. Commuter railroads that contract for service from Amtrak and freight railroads using the Corridor might also face hardships.
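The payroll tax figures above can be double-checked with some quick arithmetic. The sketch below is illustrative only, not a Railroad Retirement Board model; the 21.0 percent prior tier II rate is an inference from the report's statement that a 2.3-point increase would yield a 23.3 percent combined rate.

```python
# Retirement (tier II): the Board's adjustment was equivalent to 2.3 percent
# of tier II taxable payroll; taken entirely as a tax increase, it would have
# produced a 23.3 percent combined rate, implying a 21.0 percent prior rate.
tier2_new_rate = 23.3
tier2_adjustment = 2.3
tier2_prior_rate = tier2_new_rate - tier2_adjustment

# Unemployment: an average rate rising from 3 percent to 12 percent is an
# increase of 9 percentage points, i.e., the rate roughly quadruples.
unemp_prior_rate, unemp_peak_rate = 3.0, 12.0
increase_points = unemp_peak_rate - unemp_prior_rate
rate_multiple = unemp_peak_rate / unemp_prior_rate

print(tier2_prior_rate, increase_points, rate_multiple)
```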
Continuation of Intercity Passenger Rail Service Could Be Limited

The current level of Amtrak’s intercity passenger rail service in certain states might not continue if Amtrak were liquidated, according to department of transportation officials we talked to in three states—Colorado, Florida, and Louisiana. These states are not on the Northeast Corridor and do not provide financial support for intercity passenger rail service. They had the largest volume of intercity passenger ridership—about 650,000 intercity passengers in fiscal year 1997—of states that do not provide financial support for intercity passenger service and that are not on the Northeast Corridor. Although these officials were interested in continuing intercity service, they doubted service would continue for a number of reasons: the potentially high cost of continuing service, possible difficulties in negotiating access to tracks with freight railroads, and the lack of an incentive to keep such service going if Amtrak’s national route network were ended. Regarding the latter, officials from all three states said their states depend, at least to some degree, on Amtrak’s national route network to bring in tourists and others. Although intercity rail service might face an uncertain future in these states, these officials said they would continue to pursue more localized efforts to initiate or continue passenger rail service. States that financially support intercity passenger rail service, on the other hand, might have a greater interest in continuing this service. Three states that we talked to—California, Illinois, and Wisconsin—provide financial support for intercity passenger rail service and indicated more interest in continuing such service. These states represented about 5.6 million intercity passengers in fiscal year 1997. One state—Illinois—had even begun efforts to take over a portion of state-supported Amtrak service about 2 years ago, when Amtrak requested more money for the service.
Although this effort ended when Amtrak signed a fixed-price contract to continue service, state officials indicated they would continue to be interested in arranging for this service should Amtrak go out of business. As with the states not currently providing financial support, factors cited as potentially hindering these states’ ability to maintain service included cost and uncertain access to freight railroads’ tracks. A California official told us these issues would be critical in his state for continuing intercity passenger rail service. There may be other hindrances as well. For example, a California official said his state might have difficulty arranging insurance because state law prevents the state from indemnifying third parties (such as freight railroads) in the event of accidents. Officials we spoke with in some states were concerned about access to tracks because they felt such access might be lost if Amtrak were liquidated. Amtrak is guaranteed by law access to freight railroads’ tracks to provide intercity passenger rail service. If Amtrak were liquidated and access to these tracks were lost, states and others might have to rely on other means to continue intercity passenger rail service. One means might be compacts between two or more states to provide intercity passenger rail service, as allowed under the Amtrak Reform and Accountability Act of 1997. Although the use of compacts may not guarantee either access to tracks or a specified cost, it could be a means of maintaining intercity passenger rail service. However, successfully implementing such compacts might be difficult. Among the potential problems cited by the states we talked to are reaching agreements on the allocation of costs, establishing train schedules, and determining station stops. Illinois and Florida officials said they had direct experience in trying to work with other states to establish a long-distance intercity passenger rail route.
In both instances, the route was not established because of too many disputes among the participating states over cost and operational matters. In addition, these officials mentioned potential financial and/or operational problems that could be created if one or more states decided not to participate in a route. An Illinois official said interstate compacts might be feasible. However, the route would have to be relatively short—in the range of 3- to 4-hour trips, for example.

Access to Tracks and Stations and Cost Could Also Influence Continuation of Commuter Rail Service

As with intercity service, the extent of commuter rail service provided would depend in part on access to tracks and stations. Such access is a particularly critical issue for the Northeast Corridor. The Corridor serves over 100 million rail passengers per year and is a critical part of the transportation infrastructure for eight states and the District of Columbia. Officials at two commuter railroads operating on the Corridor—New Jersey Transit and the Southeastern Pennsylvania Transportation Authority—told us they would basically shut down if they were unable to use the Corridor to provide service. These railroads carried about 70 million passengers in 1996. A third commuter railroad—the Long Island Rail Road—told us its operations would be “devastated” if it were denied access to Amtrak’s Pennsylvania Station in New York City. According to Long Island Rail Road officials, although the Long Island Rail Road accounts for only about one-third of the track capacity at this station, it accounts for about 70 percent of the passengers—approximately 260,000 passenger trips per day. These officials were concerned about access even though they have easements to operate along the Northeast Corridor. Some commuter authorities expressed concern that these easements might be extinguished in a liquidation. After a liquidation, infrastructure costs would be a factor in maintaining commuter rail service.
Amtrak estimates that current and future infrastructure costs of $5.4 billion might have to be absorbed by states and commuter rail authorities over a 6-year period if it were liquidated. The ability of states and commuter authorities to absorb this level of cost is uncertain. Officials in each of the three Northeast Corridor states we talked to—New Jersey, New York, and Pennsylvania—said they would have a difficult time providing additional money for passenger rail service if Amtrak went out of business. If funds were not available, states and commuter rail authorities might look to the federal government to help pay any additional costs. One state we talked to—New York—told us it would expect the federal government to pay for any costs the states would have to absorb if Amtrak were liquidated. Given the critical role of the Northeast Corridor and the 100 million passengers served annually, the states’ inability to absorb costs could dramatically affect the continuation of service along the Corridor. However, two commuter authorities—New Jersey Transit and Southeastern Pennsylvania Transportation Authority—told us that they would not need all of the tracks and other infrastructure currently in place on the Corridor. In addition, their trains are not as fast as Amtrak’s (traveling about 80 miles per hour compared with Amtrak’s 125 miles per hour on some portions of the Corridor) and would not need an infrastructure that supports high-speed service. Consequently, they believe the physical plant could be pared back to reduce costs. While the commuter authorities’ infrastructure needs might be reduced, they might have additional costs to use the facilities and/or to perform such services as dispatching trains (which Amtrak currently provides) if Amtrak were liquidated. 
Commuter Rail Agencies That Contract With Amtrak and Freight Railroads Could Face Hardships

Amtrak’s liquidation could create some degree of hardship for commuter rail agencies that contract with Amtrak to provide service. In fiscal year 1997, Amtrak was the contract operator for seven commuter rail agencies and was paid about $242 million for its services. These services account for about 179,000 passenger trips, on average, per weekday. If Amtrak were liquidated, the commuter rail agencies that contract their service to Amtrak would have to find new operators. However, these agencies could have difficulty in doing this. According to the American Public Transit Association, currently only a handful of operators manage commuter rail service. These operators are commuter rail agencies that provide the service themselves, contract with Amtrak, or contract with freight railroads to provide the service. As of December 1997, only one nonrailroad, noncommuter rail agency commercial firm (Herzog Transit Services, Inc.) provided commuter rail service under contract. Two of the three commuter rail agencies that we spoke with that have contracted their service to Amtrak—Caltrain and Metrolink—said finding new operators could take time and ultimately be more expensive than their current arrangements. Metrolink estimated that it could take up to 12 months to find a new operator and that costs could be between 10 and 15 percent higher. The third agency—the Maryland Rail Commuter Service—was less concerned about finding a new operator than losing its entire Northeast Corridor service if Amtrak were liquidated. Freight railroads that operate on the Northeast Corridor could also face severe problems if Amtrak were liquidated. In particular, a liquidation would raise questions about whether freight railroads could continue to use the Corridor to provide service.
For the two freight railroads we talked to (Conrail and the Providence and Worcester Railroad), access to the Corridor is integral to their operations. Both said the loss of this access could substantially impair their business. For example, Conrail operates 56 trains a day on the Northeast Corridor, with roughly 35,000 carloads of freight monthly and $37 million in monthly revenues. According to the Providence and Worcester Railroad, the use of the Corridor represents about 40 percent of its business and about 25 percent of its annual revenue. The loss of this business would cause both the railroad and its customers economic damage. Like the commuter railroads, freight railroads operate on the Northeast Corridor under an easement. Officials from both railroads said they would take action as necessary to continue service and to ensure they could continue to exercise their easement to provide freight service.

Agency Comments and Our Evaluation

We provided Amtrak and FRA with a draft of this report for review and comment. We met with Amtrak’s Vice President for Government Affairs and its Vice President and General Counsel. Amtrak agreed with the contents of the draft report and offered several technical and clarifying comments, which we incorporated where appropriate. We also met with FRA’s Chief Counsel, Deputy Chief Counsel, and Associate Administrator for Railroad Development. As with Amtrak, FRA agreed with the contents of the draft report and offered technical comments, which we incorporated where appropriate.

Scope and Methodology

To identify the financial issues associated with a possible liquidation of Amtrak, we reviewed Amtrak’s September 1997 analysis in a draft paper entitled “Budget Implications of a Zero Federal Grant: Why Zero Isn’t Zero.” This analysis identifies Amtrak’s estimate of the various costs associated with a possible liquidation.
To understand how it was prepared, we discussed this analysis, including assumptions, methodology, and data sources, with Amtrak officials. However, we did not verify the estimates in Amtrak’s analysis. To identify other issues associated with a potential liquidation, we met with a variety of officials from federal and state governments, commuter and freight railroads, and individuals with experience in railroad reorganizations and restructurings. We discussed the potential operational, financial, and legal implications of Amtrak’s liquidation with these individuals and organizations. A list of the persons and organizations that we contacted is contained in appendix II. We did not develop an independent estimate of the costs associated with a liquidation nor of the costs and implications associated with other scenarios, such as a reorganization of Amtrak. Finally, we did not attempt to quantify indirect effects, if any, resulting from a possible Amtrak liquidation, such as effects on highway and aviation congestion, air quality, and energy consumption. We performed our work from July 1997 through February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to congressional committees with responsibilities for transportation issues; the Secretary of Transportation; the Administrator, Federal Railroad Administration; and the Director, Office of Management and Budget. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-3650. Major contributors to this report were Helen Desaulniers, Richard Jorgenson, James Ratzenberger, and Carol Ruchala.

Significant Aspects of the Railroad Bankruptcy Process

Chapter 11 of the Bankruptcy Code, which generally sets out procedures for reorganization, would govern an Amtrak bankruptcy.
For the most part, the provisions of chapter 11 applicable to corporate reorganizations would apply to Amtrak, as would several additional provisions applicable only to railroads. Because of the historical importance of railroads to the economy and the public, bankruptcy law seeks, among other things, to protect the public interest in continued rail service. In applying certain sections of the Bankruptcy Code, the court and an appointed trustee of Amtrak’s estate would be required to consider the public interest as well as the interests of Amtrak, its creditors, and its stockholders. A trustee must be appointed in all railroad cases. Amtrak could initiate a bankruptcy proceeding by filing a voluntary petition for bankruptcy when authorized by its board of directors. In addition, three or more of Amtrak’s creditors whose unsecured claims totaled at least $10,000 could file an involuntary petition. After a petition was filed, a trustee would be appointed. This individual would be chosen from a list of five disinterested persons willing and qualified to serve. This list is submitted by the Secretary of Transportation to the U.S. Trustee (an official in the Department of Justice) for the region in which a petition was filed. The trustee becomes the administrator of the debtor’s estate and, with court approval, would likely hire attorneys, accountants, appraisers, and other professionals to assist with the administration of the estate. Once appointed, the trustee, with court oversight, rather than Amtrak’s board of directors, would make decisions about the railroad’s operations and financial commitments. The trustee would have to decide quickly whether Amtrak could continue to maintain adequate staff for operations.
In addition, the trustee would have to decide whether Amtrak would need rolling stock equipment, such as passenger cars and locomotives, subject to creditors’ interests for its operations and, if so, obtain any financing necessary to maintain possession of such equipment. Unless the trustee “cured” any default—that is, made up any missed payments—and agreed to perform obligations associated with Amtrak’s rolling stock equipment within 60 days of the bankruptcy petition, creditors with an interest in the equipment, such as lessors and secured lenders, could repossess it. Furthermore, the trustee would have to decide whether to assume or reject Amtrak’s obligations under executory contracts and unexpired leases. To assume a contract or lease on which Amtrak was in default, the trustee would have to (1) cure the default or provide adequate assurance that it would be cured, (2) compensate the other party or assure the other party of compensation for actual pecuniary losses resulting from the default, and (3) provide adequate assurance of future performance. In this context, a trustee could try to negotiate more favorable terms than under Amtrak’s existing contracts and leases. However, the availability of cash for the costs associated with contracts and leases would again be a critical element in the trustee’s decisionmaking. While payments on assumed contracts or leases would be expenses of the estate, payments due on rejected contracts and leases, as well as any damages and penalties, would give rise to general unsecured claims. In addition, the trustee would have to decide whether to avoid—that is, set aside—certain transactions between Amtrak and its creditors. Generally, the trustee could set aside Amtrak’s transfers of money or property for pre-existing debts made within 90 days of the bankruptcy petition, as long as Amtrak was insolvent at the time of the transfer and the creditor received more as a result of the transfer than it would receive in a bankruptcy proceeding.
However, the trustee would not have unlimited authority in this area. For example, the trustee could not set aside a transfer that was intended by Amtrak and a creditor to be a contemporaneous exchange for new value and that was in fact a substantially contemporaneous exchange. Although the trustee would have considerable authority over Amtrak’s operations and financial commitments, neither the trustee nor the court could unilaterally impose changes in the wages or working conditions of Amtrak’s employees. The employees could voluntarily agree to such changes, perhaps in an effort to avoid or forestall liquidation. Otherwise, the trustee would have to seek changes in wages and working conditions by following procedures specified in the Railway Labor Act, including those for notice, mediation, and binding arbitration with the consent of the parties. Perhaps the trustee’s most significant responsibility would be to develop a plan of reorganization. The provisions of chapter 11 applicable to reorganization plans would, for the most part, apply to Amtrak. Therefore, among other things, a reorganization plan would have to (1) designate classes of claims (other than certain priority claims) and interests; (2) specify the unimpaired classes of claims or interests; (3) explain how the plan would treat impaired classes of claims or interests; and (4) provide adequate means for its implementation. Furthermore, the plan would have to indicate whether and how rail service would be continued or terminated and could provide for the transfer or abandonment of operating lines. Notably, the trustee could propose a plan to liquidate all or substantially all of Amtrak’s assets. Certain unsecured claims would have to be accorded priority in an Amtrak reorganization plan, as in any corporate reorganization plan. 
For example, administrative claims, such as those for post-petition expenses of the estate and reasonable compensation for the trustee and professionals engaged by the trustee, would have to be paid in full on the effective date of the plan, unless the holder of a claim agreed to an alternative arrangement. Other priority unsecured claims, such as those for wages and contributions to employee benefit plans, would also have to be paid in full on the effective date of the plan, unless each class of claimants accepted a plan providing for deferred payments. In addition, under Bankruptcy Code provisions specifically applicable to railroads, claims for personal injury or wrongful death arising out of Amtrak’s operations, either before or after the filing of a bankruptcy petition, would have to be treated as administrative claims. Furthermore, certain trade claims arising no more than 6 months prior to the bankruptcy petition would also have priority. Finally, the court could require the payment of amounts due other railroads for the shared use of lines or cars, known as interline service. After full disclosure of its contents, Amtrak’s creditors and shareholders would vote on the plan of reorganization. Because the United States is a creditor and stockholder of Amtrak, the Secretary of the Treasury would accept or reject the plan on behalf of the United States. According to the Federal Railroad Administration, the Attorney General and the Secretary of Transportation would be consulted. However, a plan of reorganization could not be implemented unless confirmed by the court. To confirm the plan, the court would have to find, among other things, either that each class of impaired claims or interests had accepted it, or that the plan did not discriminate unfairly, and was fair and equitable, with respect to each class of impaired claims or interests that had not accepted it.
In addition, under provisions of the Bankruptcy Code specifically applicable to railroad cases, the court would have to find that each Amtrak creditor or shareholder would receive or retain no less under the plan than it would receive or retain if all of Amtrak’s operating lines were sold and the proceeds of such sale, and other estate property, were distributed under a chapter 7 liquidation. Finally, the court would have to find that Amtrak’s prospective earnings would adequately cover any fixed charges and that the plan was consistent with the public interest. If more than one reorganization plan met these requirements, the court would be required to confirm the plan most likely to maintain adequate rail service in the public interest. Following confirmation of a reorganization plan, Amtrak would be discharged from its debts. If an Amtrak reorganization plan were not confirmed within 5 years of the bankruptcy petition, the court would have to order liquidation. However, the court could order liquidation earlier upon the request of a party in interest, after notice and hearing, if it determined liquidation to be in the public interest. Under such circumstances, the trustee would distribute the assets of the estate as though the case were a liquidation under chapter 7. Because the case would not be converted to a proceeding under chapter 7, relevant provisions of chapter 11 applicable to railroads would continue to apply. In a liquidation, the trustee would turn over collateral or make payments to the proper secured creditors, convert remaining property to cash, and distribute the proceeds to the unsecured creditors in accordance with the distribution scheme contained in chapter 7. 
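The chapter 7 distribution scheme just described, in which priority classes are satisfied in order and claims within an underfunded class are paid pro rata, can be sketched in a few lines of code. This is a hedged illustration of the distribution logic only; the class labels and dollar amounts below are hypothetical, not drawn from Amtrak's books.

```python
def distribute(proceeds, claim_classes):
    """Pay claim classes in priority order (highest first); within a class
    that cannot be paid in full, pay each claim pro rata.

    claim_classes: list of {claimant: amount} dicts, highest priority first.
    Returns a parallel list of {claimant: payout} dicts.
    """
    payouts, remaining = [], proceeds
    for claims in claim_classes:
        total = sum(claims.values())
        if remaining >= total:
            payouts.append(dict(claims))  # class paid in full
            remaining -= total
        else:
            ratio = remaining / total if total else 0.0
            payouts.append({k: round(v * ratio, 2) for k, v in claims.items()})
            remaining = 0.0  # lower-priority classes receive nothing
    return payouts

# Hypothetical example: $100 million of proceeds against $60 million of
# priority claims and $80 million of general unsecured claims. The priority
# class is paid in full; the general class shares the remaining $40 million
# pro rata (50 cents on the dollar).
result = distribute(100.0, [{"priority": 60.0}, {"A": 30.0, "B": 50.0}])
```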
Proceeds would be distributed in the following order: (1) priority unsecured claims, including those discussed above, in specified order; (2) general unsecured claims, timely and tardily filed; (3) fines, penalties, and damages that are not compensation for pecuniary loss; and (4) post-petition interest on claims previously paid. Claims of a higher priority would have to be provided for before claims of a lower priority. In addition, in most cases, if the holders of claims in a class could not be paid in full, claims would have to be paid on a pro rata basis.

Organizations Contacted

Federal Agencies
State Departments of Transportation
Intercity and Commuter Rail Agencies
Freight Railroads
Labor Unions
Amtrak Lenders: Kreditanstalt für Wiederaufbau (Germany), Export Development Corporation (Canada)
Legal and Railroad Reorganization Experts
Auditing Firm
American Bankruptcy Institute
American Public Transit Association
American Short Line and Regional Railroad Association
Pursuant to a legislative requirement, GAO reviewed the financial and other issues associated with a possible Amtrak bankruptcy and liquidation, focusing on: (1) uncertainties in estimating the potential costs associated with a liquidation; (2) possible financial impacts on creditors, including the federal government; (3) possible financial impacts on participants in the railroad retirement and unemployment systems; and (4) possible impacts on intercity, commuter, and other rail service. GAO noted that: (1) Amtrak has estimated that the net cost to creditors and others of a possible liquidation could be as much as $10 billion to $14 billion over a 6-year period; (2) however, the costs associated with a possible liquidation are difficult to predict because they will depend on a few uncertainties; (3) Amtrak's financial obligations, if any, to employees who lose their jobs as a result of a liquidation would depend on the results of negotiations between Amtrak and its unions; (4) in addition, most of the costs identified by Amtrak are not liquidation costs; (5) existing commuter rail agencies and others that operate on Amtrak tracks might assume some of these costs; (6) Amtrak's creditors might face losses in the event of a liquidation; (7) the extent to which these creditors' claims could be paid would depend in large part on the market value of assets available to satisfy such claims; (8) with the exception of its interest in the Northeast Corridor and certain other real property, the federal government's financial interests in the event of liquidation would generally be subordinate to other creditors'; (9) for participants in the railroad retirement and unemployment systems, an Amtrak liquidation would result in higher payroll taxes on employers and employees of other railroads or a reduction in benefits to compensate for the loss of Amtrak's annual contributions; (10) according to the Railroad Retirement Board, which administers these systems, if no actions were
taken to increase payroll taxes or reduce benefit levels, the balance of the railroad retirement account would start to decline by 2000 and would be depleted by 2026; (11) the railroad unemployment account, on the other hand, would experience more immediate financial problems requiring the imposition of surcharges on participants as well as borrowing from the retirement account; (12) according to the Railroad Retirement Board, these measures would be required for 2 to 3 years to maintain financial solvency in the unemployment account; (13) the liquidation of Amtrak could also disrupt intercity and other passenger rail service; (14) a number of factors could affect the continuation of rail service, including access to the tracks and stations that are owned by Amtrak and others, and the ability of states and commuter railroads to absorb the cost of continuing service; and (15) some freight railroads use the Northeast Corridor and may also face the potential loss of millions of dollars of business to the extent that they are unable to retain access to the Corridor.
Background

In 1991/92, drought caused massive crop failure, threatening 18 million people in 10 southern African countries with famine. Because of a similarly reduced maize crop in the 2001/02 crop cycle, several early warning systems predicted an impending food crisis that would run through the beginning of the following harvest in April 2003. (App. II provides a timeline of the crisis period, and app. III provides information on early warning systems.) Regional and national assessments of the crisis conducted by WFP, FAO, and others estimated that 15.3 million people in the region were at risk of starvation. (Fig. 1 shows the population at risk of famine in each of the six affected countries.) In July 2002, WFP initiated the Southern Africa Crisis Response Emergency Operation (EMOP) for providing food aid to the six countries on a regional basis. Prior to this consolidation, WFP had been delivering food to the individual country emergency operating programs. WFP’s objectives in the southern Africa food crisis were to prevent severe food shortages, safeguard the nutritional well-being of vulnerable segments of the population, preserve human assets, and prevent migration out of affected areas. As the major food aid donor in the southern Africa crisis, the U.S. government has a significant role in the relief effort. Through USAID’s Food for Peace Office and its Office of Foreign Disaster Assistance and USDA, the U.S. government has worked to support the EMOP and address the crisis. In February 2002, in an effort to avert famine, the United States began authorizing food aid shipments to the region. As of March 18, 2003, the U.S. government had provided approximately $275 million in food aid and $13 million for bilateral nonfood-related assistance such as agriculture, health, shelter, and sanitation. (See app. IV for additional information on the U.S. contributions.)
WFP, the United States, and other countries partner with nongovernmental organizations to distribute food aid at the regional and village level. In addition, many of these organizations also provide nonfood emergency assistance and long-term development aid. Much of the population in each of the affected countries works in the agricultural sector. The percentage of the labor force engaged in agriculture ranges from 66 percent in Zimbabwe to 86 percent in Lesotho and Malawi. Many of these farmers rely on maize (corn) as the primary staple crop. Unlike root crops such as cassava and sweet potatoes—which are common but less popular staples in the region—maize is relatively fragile, requiring more fertilizer and differing amounts of water during the growing season.

Multiple Factors Contributed to the Food Crisis

The immediate factor contributing to the food crisis was the erratic weather patterns that disrupted the normal growing cycle, causing maize production in southern Africa to drop from a 5-year average of about 7.3 million MT to about 5.2 million MT in 2002. The dramatic reduction in available maize can also be linked to a weak agricultural sector and government actions, such as Malawi’s decision to sell off its strategic grain reserve and Zimbabwe’s fast-tracked land reform. In addition, much of the region’s population had limited access to food because of widespread poverty. The HIV/AIDS epidemic further exacerbated the population’s access to basic commodities by decreasing household food production and income and increasing consumption requirements.

Erratic Weather Patterns Played a Key Role in Reducing Maize Production

Erratic weather patterns between December 2001 and May 2002 reduced the harvests in five of the six affected countries, except Mozambique, when compared with 5-year averages. Drought-like conditions gripped parts of Malawi, southern Mozambique, Swaziland, southern Zambia, and Zimbabwe in the middle of the growing season (see app. II for timeline).
This water deficit at a crucial point in the growing season severely stressed crops and caused many hectares to wilt. In addition, parts of Zambia suffered heavy mid-season rainfall that flooded the still-growing crops. Similarly, in Malawi, after mid-season dry spells wilted some crops, heavy rains hampered the harvesting and drying of the crops that remained and in some cases caused them to rot. Lesotho experienced prolonged rains late in the season, as well as a late-season frost that damaged crops across large parts of the country and drastically reduced production.

Regional Cereal Production Dropped by 29 Percent

Regional food supplies have been limited by poor cereal harvests in five of the six affected countries. (See table 1.) Mozambique was the one exception: Its 2001/02 cereal harvest was actually above average. However, because of transportation constraints, Mozambique’s production surpluses could not be moved to the southern part of the country, where cereal harvests were lower.

Poorly Functioning Agricultural Sector Negatively Affected Food Supply

In addition to poor weather conditions, weaknesses in the agricultural sector contributed to a poor harvest. According to IFAD, these weaknesses included the following:

Declining soil fertility reduced crop yields. In Lesotho, average maize and sorghum yields have declined by more than 60 percent since the mid-1970s. According to FAO, declining soil fertility is a primary cause of this trend and is leading to a crop production catastrophe in that country.

Restricted access to agricultural inputs such as seeds and fertilizer limited harvests. In Zambia, important inputs such as seeds and fertilizer were not available until December 2001 or January 2002, resulting in late plantings. These crops were at a crucial stage of development when the rains ceased in early 2002, causing crop failure.

Incomplete market development impaired farmers’ ability to sell crops.
In Malawi, market reforms of the 1980s and 1990s eliminated price controls and removed government food grain monopolies. While these liberalizing reforms increased the availability of seeds and fertilizer, small farmers still lack access to credit.

Recent Government Actions Further Reduced the Food Supply

The food supply was constrained further by certain government actions, the most damaging of which were the sale of grain reserves in Malawi and fast-tracked land reform in Zimbabwe.

Sale of Malawi’s Grain Reserve Hindered Stable Food Supplies

Between July 2000 and August 2001, the National Food Reserve Agency of Malawi sold the 167,000 MT of maize it had purchased and stored as food reserves for the country. Despite several audits, it is still uncertain where the proceeds of the sale went. While the sale of the reserves did not cause the Malawi food crisis, their absence jeopardized the population’s food security. Under its own policy to ensure adequate food supplies, the government should have retained 60,000 MT of maize or an equivalent amount of currency to purchase new stocks. That reserve could have helped ease food shortages in the early stage of the crisis, when a considerable number of people are reported to have died, and could have filled almost one-quarter of the country’s cereal gap while emergency response operations were ramping up. An investigation by Malawi’s National Audit Office in May 2002 concluded that the National Food Reserve Agency lost money in every area of handling maize because of poor financial management. Another investigation, conducted by Malawi’s Anti-Corruption Bureau in mid-2002, found that poor management of the grain reserve allowed companies and individuals to take advantage of the maize shortage to raise prices beyond the reach of a large sector of the community. The mismanagement cost the Malawian government more than K 2.9 billion (about $40 million).
Zimbabwe’s Land Reform Decimated Production and Strained Region’s Supply

After years of trying to redistribute the country’s arable land, the government of Zimbabwe fast-tracked its land reform and resettlement policy in 2000 with the aim of acquiring all commercial farms no later than August 8, 2002. The campaign was characterized by the forced expulsion of landowners and farm laborers; to date, more than a million farm laborers remain internally displaced. While the government did acquire these farms, it did not maintain them to ensure continued productivity. As a result, the land seizure destabilized the country’s economy, leading to a 75 percent drop in commercial maize production over the past 2 years and turning Zimbabwe from a net exporter of grain into a net importer. Because Zimbabwe now cannot grow enough food to feed its own population, it has strained the cereal supply for the entire region. According to the State Department, between 1998 and 2002, coinciding with fast-tracked land reform, the country’s gross domestic product fell by more than 20 percent and inflation soared to more than 269 percent. At the same time, unemployment rose by more than 25 percent as the dismantling of commercial farms left many rural farm workers without a source of income and, therefore, without a way to purchase food when their subsistence crops failed. In addition, government-imposed price controls on basic commodities have caused shortages of everything from bread, milk, sugar, and wheat flour to fuel and electricity.

Widespread Poverty Contributed to Food Insecurity

The six nations affected by the food crisis are generally low-income countries. The percentage of the population subsisting on less than $1 per day ranges from 36 percent in Zimbabwe to 64 percent in Zambia. This widespread poverty, together with a lack of productive assets (e.g., livestock and farm machinery), contributes to food insecurity in the region.
In addition, the region currently faces serious economic problems that further increase the population’s food insecurity. For example, in recent years, the dramatic collapse of Zimbabwe’s economy and a decline in the mining industry in South Africa and Zambia have removed sources of employment for many individuals in the region. The region’s food insecurity is associated with high rates of chronic malnutrition in the under-5 population, ranging from 30 percent in Swaziland to 59 percent in Zambia.

HIV/AIDS Epidemic Exacerbated Food Shortages

The HIV/AIDS epidemic has strained already-diminished food supplies by decreasing affected households’ food production and increasing their nutritional requirements. In addition, the epidemic limits households’ access to food by decreasing income and increasing household expenses. According to the Joint United Nations Program on HIV/AIDS (UNAIDS), adult HIV/AIDS infection rates in 2001 were approximately 31 percent for Lesotho, 15 percent for Malawi, 13 percent for Mozambique, 33 percent for Swaziland, 22 percent for Zambia, and 34 percent for Zimbabwe. Infection rates are higher among women, who generally account for 70 percent of the agricultural labor force and 80 percent of food production in Africa.

HIV/AIDS Reduces Food Supplies

HIV/AIDS has decreased household food production by attacking people in their most productive working years, thus reducing the labor force. Around three-fourths of HIV/AIDS cases in southern Africa occur among adults between the ages of 20 and 40. By 2000, the share of the agricultural labor force lost to HIV/AIDS deaths was nearly 6 percent in Malawi and 10 percent in Zimbabwe.
Recent studies of specific rural areas show, for example, that each adult death in Zambia was associated with a 16 percent reduction in the amount of land planted by the household, and that 72 percent of households affected by chronic illness in selected rural areas of Malawi experienced a decrease in agricultural production. In addition, a person infected with HIV/AIDS requires up to 50 percent more protein and 15 percent more calories than a noninfected person. These extra needs put a further strain on already limited food supplies.

HIV/AIDS Decreases Access to Food

HIV/AIDS has lowered household incomes, making it more difficult to buy what food is available. Recent studies estimate that GDP growth in southern Africa is currently around 1 percent to 2 percent lower due to HIV/AIDS. For the six affected countries, 1 percent of GDP in 2001 amounted to around $200 million. Recent studies in the region also show large monetary impacts at the household level. For example, in Zambia, HIV/AIDS-affected households reported annual incomes 30 to 35 percent lower because of the disease. In Zimbabwe, households with orphans had 42 percent less income per capita than households without orphans. In addition, medical care and funeral expenses are significant: In Zambia, 42 percent of households with chronically ill members reported unusually high health care expenses, compared with 14 percent of households without chronically ill members, while in Zimbabwe, funeral costs can be as much as twice the annual per capita poverty line.

Food Needs Not Fully Met, but Famine Was Averted

By the end of the April 2002-March 2003 crisis period, approximately 93 percent of the regional cereal gap appeared to have been met. Commercial cereal imports were reported at 1.72 million MT, while the food aid effort achieved at least 0.73 million MT (60 percent of the planned food aid amount).
The commercial cereal imports and food aid prevented large-scale famine and death, but throughout most of the crisis period they did not reach parts of the region early enough to avert widespread hunger. Many people resorted to coping mechanisms such as rationing their food intake, reducing their expenditures on nonfood items, and selling household assets to obtain food. The limited data available on nutritional status generally do not show a significant impact on acute malnutrition in the countries of the region. In addition to problems with timely delivery of food, U.N. agencies were able to fund only about 25 percent of urgent, nonfood emergency humanitarian needs.

Approximately 93 Percent of the Cereal Gap Met during the Crisis Period

The May/June 2002 FAO/WFP crop and food supply assessments (CFSAM) for each of the six countries estimated the cereal gap for the region at 4.1 million MT, or 43 percent of domestic requirements, for the April 1, 2002, through March 31, 2003, period. However, by the end of March 2003, the cereal gap had been revised downward substantially, to 2.6 million MT, or 31 percent of domestic requirements. Based on the plan that evolved from the CFSAMs, the cereal deficit was to be offset by a combination of commercial imports and emergency food aid. The assessments identified an emergency cereals need of 1.2 million MT for the crop year, and this amount was adopted as a goal in the United Nations’ July 2002 emergency appeal for food aid for the region. Although later analyses projected more people at risk of famine, the goal for emergency cereals needs was not increased. As shown in figure 2, if the emergency goal of 1.2 million MT were fully met, the estimated need for commercial cereal imports would be 1.4 million MT. Figure 3 indicates the extent to which food aid and commercial imports helped offset the cereal gap in each country and the region over the April 1, 2002, to March 31, 2003, period.
As the figure shows, the region as a whole met at least 93 percent of its need. In two countries, Malawi and Zambia, food aid and commercial imports combined considerably exceeded the cereal gap, while the other four had unmet gaps ranging from 9 percent to 50 percent. However, the numbers reported by the Vulnerability Assessment Committees (VAC), WFP, and others do not allow us to precisely define total food aid and commercial import levels. The figures are estimates and should be interpreted with caution. Food aid figures probably understate actual values because it was difficult for the VACs and WFP to collect comprehensive food aid data from NGOs; total NGO contributions could therefore be considerably higher. Regarding commercial imports, some countries experienced a considerable amount of informal trade in cereals, but the VACs and WFP did not always have access to reliable figures on such trade. In the case of Zimbabwe, commercial imports may be overstated, since the VAC expressed skepticism about the data that were reported. According to some observers, Zimbabwe’s price controls may have encouraged a substantial outflow of cereals to neighboring countries where controls did not exist. Thus, the gap in Zimbabwe may have been much greater than shown in the figure. The data in figure 3 also do not address the extent to which different parts of a country were served. Although Zambia appears to have offset its cereal gap by a large amount, the January VAC assessment reported serious cereal supply problems at local markets in rural areas. And Malawi, which offset its cereal gap to an even greater extent, reported maize to be available in most markets, but vulnerable households had limited ability to pay for it. (See app. V for additional information on commercial imports.)
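As a rough cross-check, the coverage estimates discussed above can be reproduced from the report’s own rounded figures. This is a back-of-envelope sketch, not an official calculation; because the inputs are rounded, the computed shares differ slightly from the reported 93 and 60 percent.

```python
# Back-of-envelope check of regional cereal gap coverage,
# April 2002 - March 2003. All tonnages in million MT, taken
# from the report's rounded estimates.
revised_gap = 2.6          # revised regional cereal gap
commercial_imports = 1.72  # reported commercial cereal imports
food_aid = 0.73            # minimum food aid delivered
planned_food_aid = 1.2     # emergency food aid goal

# Share of the revised gap met by imports plus food aid
coverage = (commercial_imports + food_aid) / revised_gap
print(f"Share of cereal gap met: {coverage:.0%}")

# Share of the planned emergency food aid actually achieved
food_aid_share = food_aid / planned_food_aid
print(f"Share of planned food aid achieved: {food_aid_share:.0%}")
```

With these rounded inputs the gap coverage comes out near 94 percent and the food aid share near 61 percent, consistent with the report’s approximately 93 and 60 percent.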
Food Aid Did Not Reach the Region Early Enough to Avert Widespread Hunger

The overall commercial cereal imports and food aid averted widespread famine, according to WFP, USAID, and other observers in the region. However, because food supplies to the region fell short of plans during the July through December period, far fewer people received food aid than expected. Many people in vulnerable areas went without meals and resorted to other coping mechanisms as well. The limited data available on nutritional status generally do not show a significant impact on acute malnutrition.

Food Supplies from World Food Program

Between July and December 2002, WFP distributed only 48 percent of the cereal it planned to provide to beneficiaries during that period. While Malawi and Swaziland received 87 percent and 76 percent, respectively, of their planned deliveries, the other four countries fell below the 40 percent mark. In addition to cereal, WFP planned to provide several other foods (principally pulses, vegetable oil, and corn/soya blend) for added nutrition and to meet the special needs of some of its recipients. WFP realized only 17 percent of its planned distribution of these foods for July through December 2002. WFP deliveries in three countries (Mozambique, Zambia, and Zimbabwe) each represented less than 10 percent of its plans, and just 1 percent in the case of Zambia. In Malawi, which had the best performance, WFP achieved 40 percent of its planned distribution. Figure 4 shows WFP’s monthly performance in achieving its plans for delivery of cereals and noncereal commodities in the region. In general, WFP’s performance gradually improved between July and December. It improved substantially in January, reaching 97 percent for cereals and 74 percent for noncereals. Deliveries declined during the next 2 months, to a March low of 81 percent for cereals and 53 percent for noncereals.
Independent of WFP’s program, NGOs were to provide about 402,000 MT of cereals, or one-third of the emergency cereal need for the region. NGOs obtained or financed food for their efforts from donor countries as well as from other voluntary contributions. The United States funded a World Vision program that provided 19,710 MT of cereal food aid to Zimbabwe. In addition, the United States contracted with an NGO consortium, called C-SAFE (Consortium for the Southern Africa Food Security Emergency), to deliver food into the region. According to U.S. officials, the program was part of a longer-term strategy that targeted the most vulnerable populations, whom the WFP program might miss. USAID, which began discussions with C-SAFE members (CARE, Catholic Relief Services, and World Vision) in July 2002, did not approve a program for the consortium until January 15, 2003. However, under a November pre-authorization agreement, C-SAFE began delivering food into the region in late December 2002. As of the end of March 2003, the consortium had delivered about 57,000 MT of cereal food aid to Malawi, Zambia, and Zimbabwe. (See app. IV for additional information on C-SAFE.) Data provided to us by WFP indicate that NGOs provided at least another 16,200 MT of cereal food aid to the region.

Beneficiaries Fewer than Intended

Between July and December 2002, WFP averaged only 3.9 million beneficiaries per month, compared with a planned average of 10.4 million people per month (for both cereal and noncereal food aid). Figure 5 shows how the shortfall in food aid during the July through December 2002 period affected WFP beneficiary levels in each country. In four of the six countries, fewer than 45 percent of planned beneficiaries were served. In addition, many people who did receive food aid did not receive a full ration. For example, WFP officials in Malawi told us that during November they were able to provide only cereal to many of their beneficiaries.
Beans and vegetable oil were unavailable to provide a balanced diet.

Reduced Food Intake, Other Coping Strategies

Studies show that people in vulnerable communities reduced food intake as their major coping strategy, and reliance on this approach increased after the crisis began. For example, as of December 2002, more than 60 percent of the population in all regions of Malawi had reduced the amount of food and the number of meals they ate, according to the VAC. The Southern African Development Community (SADC) identified other coping strategies, including reducing expenditures on nonfood items, selling or trading household assets to get food (e.g., sale of livestock), increasing consumption of wild foods, migrating to find work or food, stealing, and resorting to prostitution. Table 2 shows the extent to which surveyed households in Zambia relied on reduced food consumption and other coping strategies between August and December 2002. Between 1999 and 2001, acute malnutrition rates for children under 5 years of age in countries of the region were between 1.2 percent and 6.4 percent. Some assessments conducted between May and October 2002 found an increase in acute malnutrition rates compared with earlier studies but did not find rates consistent with a severe food crisis, which would be 10 percent to 15 percent. However, these studies did not exclude possible pockets of severe malnutrition or hunger-related deaths in the region. Also, adult malnutrition and malnutrition in urban areas were not surveyed. More recent assessments (December 2002 through January 2003) of acute malnutrition for children under age 5 in selected districts of Malawi, Mozambique, Swaziland, and Zambia found rates generally ranging between 2 percent and 8 percent. However, the rate was 11.2 percent in one province of Mozambique. According to a recent internal U.S.
government report, anecdotal evidence from the field in late 2002 indicates that in certain districts of Zimbabwe, children were being admitted to some health care facilities in increasing numbers for malnutrition. At one facility, three to five children were reported to have died of malnutrition during each month of 2002. More formal nutrition surveys within the country found acute malnutrition rates of 6.4 percent and 7.3 percent in May and August 2002, respectively. Results from a nutrition survey conducted in early 2003 are still pending.

Nonfood Emergency Needs Severely Underfunded

In addition to requesting $507 million for emergency food aid for July 2002 through March 2003, U.N. agencies requested $143.7 million for the July 2002 through June 2003 period to address urgent and related humanitarian needs that increased people’s vulnerability to famine. As of April 9, 2003, less than 25 percent of the total identified requirements had been funded, according to an April 22 U.N. southern Africa humanitarian crisis update. Principal objectives of the request were to:

prevent, contain, and address the outbreak of disease through enhanced health and nutritional surveillance;

address the needs of people living with HIV/AIDS and seek to prevent

ensure an adequate and timely provision of agricultural inputs for the next planting season as well as emergency veterinary inputs;

maintain the capacity for planning recovery efforts in food self-sufficiency, education, and health services; and

prevent marginal populations from falling into a downward spiral that could lead to prolonged dependency in the future.

A longer-term objective was to phase out emergency humanitarian assistance and move toward a development agenda focused on poverty reduction, HIV/AIDS prevention and control, and support for food security by increasing food production and strengthening foreign exchange earnings. (For additional information on nonfood emergency needs, see app. VI.)
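The scale of the nonfood funding shortfall can be sketched from the two figures the report gives: the $143.7 million U.N. request and the fact that less than 25 percent of it had been funded. The derived dollar bounds below are illustrative and are not stated in the report itself.

```python
# Illustrative bounds on the nonfood emergency funding shortfall.
# Inputs from the report: $143.7 million requested; less than
# 25 percent funded as of April 9, 2003.
requested_millions = 143.7

# Upper bound on funding actually received
funded_ceiling = 0.25 * requested_millions

# Corresponding lower bound on the unmet need
shortfall_floor = requested_millions - funded_ceiling

print(f"Funding received: under ${funded_ceiling:.1f} million")
print(f"Unmet need: at least ${shortfall_floor:.1f} million")
```

In other words, less than about $36 million of the request had been met, leaving an unmet need of at least roughly $108 million.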
Slow Donations, Poor Infrastructure, Concerns Associated with Biotech Food Were Major Obstacles to an Effective Response

Major obstacles to the food aid effort’s success were the lack of sufficient, timely food donations; poor infrastructure in recipient countries; and concerns associated with biotech food. Although the United States made substantial, early donations, aggregate commitments from donor countries were 18 percent below what WFP needed for the July through December period. The shortfall was actually higher, given the lag between when food is committed and when it arrives in-country. Poor infrastructure in recipient countries and related logistical constraints impeded efficient delivery of food aid and in some cases prevented food from reaching beneficiaries. Concerns over biotech food led Zambia to reject U.S.-donated maize and other countries to impose costly processing requirements. These actions reduced or delayed food aid, increased costs, and complicated the logistics of the emergency operation.

Lack of Sufficient, Timely Donations Contributed to Food Aid Shortfalls

By the end of June 2002, the United States had delivered more than 41,000 MT of food aid to ocean ports in the southern African region. U.S. deliveries to these ports between July and December 2002 represented approximately 50 percent of the food WFP needed to arrive in-country during that period. (See app. IV for additional information on U.S. food aid donations.) Nonetheless, in aggregate, donors did not make sufficient, timely donations to WFP. WFP needed about 855,000 tons of food (cereals and noncereals) to arrive in the six countries from July through December 2002 to support its planned food distributions. During that period, donors advised WFP that they would contribute about 701,000 tons, a shortfall of 18 percent.
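The 18 percent shortfall figure follows directly from the two tonnages just cited; a minimal check:

```python
# Arithmetic behind the 18 percent donation shortfall,
# using the report's tonnage figures (metric tons).
needed = 855_000     # food WFP needed in-country, July-Dec 2002
committed = 701_000  # donor commitments advised to WFP

shortfall = 1 - committed / needed
print(f"Nominal shortfall: {shortfall:.0%}")
```

This yields a nominal shortfall of about 18 percent, matching the report.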
However, the shortfall was actually greater because of the considerable lag between when WFP was advised in writing that a contribution would be made and when food arrived in a beneficiary country. WFP officials estimate that in-kind contributions take 3 to 5 months from the time donors confirm the contribution to the arrival of food aid at its final distribution sites. However, according to WFP officials, when contributions are made in cash and procurement is done within the region, the process can be reduced to 1 to 3 months. Table 3 shows the countries that gave the most to WFP’s regional emergency food aid operation and when they advised WFP of their intended donations. Some of the major donors, including the United States and the United Kingdom, gave large amounts early in the crisis. Others, including the European Union, South Africa, and Japan, waited several months or longer before confirming what they would contribute. WFP acknowledged that the early months of the regional EMOP would indeed have benefited from more rapid mobilization of resources. At the same time, WFP noted that, as of mid-May 2003, the operation had been 93 percent resourced by 41 institutional donors, which it described as an unusually supportive response.

Poor Infrastructure Hampered Efficient Food Delivery

The flow chart in figure 6 illustrates WFP’s logistics process for delivering food, from the time it is shipped by suppliers to the time it is actually distributed to recipients at the village level. Food aid commodities are either purchased by WFP regionally or shipped to the region through one of five ports of entry: Beira, Nacala, and Maputo in Mozambique; Durban in South Africa; or Dar es Salaam in Tanzania. (See fig. 7 for a map of the transportation network.)
From these points of entry, food is transported by truck or rail to intermediate storage facilities, or transshipment points, which are strategically located in various districts within the country to streamline the flow of deliveries. From these strategic locations, food is then transported to extended delivery points, storage facilities generally located at the district level, from which the food aid allocations for each final distribution site are dispatched. WFP manages this process, including transporting the food to the extended delivery points. Wherever possible, nongovernmental organizations designated as implementing partners are responsible for the secondary transport of food from the extended delivery points to the final distribution points. Long-standing weaknesses in transportation infrastructure across the region hampered timely delivery of food aid where it was needed. Much of the transportation infrastructure (including ports, railways, and roads) had deteriorated since the 1991/92 drought. For example, the port of Maputo, which is ideally situated for moving food commodities to landlocked countries such as Swaziland and Zimbabwe, cannot be used optimally because it lacks adequate warehouse and storage facilities. However, even when ports are full, rail and trucking capacity and other logistical considerations limit the amount of food that can be transported overland to landlocked countries such as Zambia. According to WFP officials, the port of Nacala was in better condition than the port of Maputo, but its rail system, the sole transport link between Malawi and the nearest port in Mozambique and the shortest, cheapest route into Malawi and eastern Zambia, was in such poor condition that it had to be repaired during the crisis. In late 2002, the United Kingdom and Canada gave WFP $6.4 million and $256,000, respectively, to rehabilitate a 48-mile-long stretch of track on the Nacala railway and to lease locomotives and wagons.
While these locomotives and the ongoing repairs to the rail corridor represented a major breakthrough, unexpected setbacks continued to mire operations. For example, in Malawi, heavy rains in January 2003 completely destroyed a bridge on the Nacala rail line, impeding the movement of commodities for at least 10 days. In late summer 2002, a donation of 200 trucks from the government of Norway and the International Federation of Red Cross and Red Crescent Societies helped ease access to places that are particularly hard to reach. However, many village roads in these countries routinely become impassable when the rainy season (September to March) begins, isolating beneficiaries from food deliveries.

Recipient Country Concerns about Biotech Food Compromised Food Pipeline

In the middle of 2002, Zambia and Zimbabwe debated whether to accept U.S.-donated maize based on concerns that it might contain biotech products that could adversely affect (1) the health of food aid recipients, (2) the countries’ agricultural biodiversity, and (3) their ability to export agricultural commodities. Despite some earlier concerns over U.S. biotech food aid, including Zimbabwe’s objections to biotech whole kernel maize dating back to the middle of 2001, the United States and international agencies did not have a ready alternative to biotech food aid in the southern Africa crisis. The United States was only partly successful in its efforts to persuade southern African governments to allow unrestricted import and distribution of food aid, including biotech products, on an emergency basis for the duration of the crisis. These efforts included providing Zambia and the other countries with information about agricultural biotechnology and the safety of biotech food aid. Nevertheless, Zambia rejected all food aid that could have included biotech commodities.
Zimbabwe implemented stringent grain handling procedures, including milling of whole grain maize, that significantly slowed distribution of food aid. Malawi, Mozambique, and Lesotho also debated what to do and eventually imposed milling requirements on whole grain maize that were enforced with varying degrees of rigor. Toward the end of August 2002, FAO, WHO, and WFP issued a common statement on biotech food aid, as did the European Union. Both statements indicated that biotech food aid was unlikely to present a risk to human health and suggested milling the maize as a way to overcome environmental and trade concerns. However, U.S. officials from State, USAID, and USDA believe that, given the severity of the crisis and the existing scientific evidence, U.N. agencies and the European Union did not speak out early or forcefully enough on the issue. The United States rejected the option of donating only milled maize, citing increased costs and limited U.S. milling capacity that would cause delays in getting food aid to needy people. U.S. officials estimate that U.S.-based milling would double the cost of its food aid, thus reducing the amount of aid it could provide. Additionally, according to U.S. officials, agreeing to mill all of the maize could have promoted the idea that unprocessed maize was unsafe. (App. VII provides further discussion of issues related to biotech food.) Despite the United States’ early and large donations, the impasse over biotech food significantly compromised the food pipeline in several ways:

Food aid was reduced and delayed. On September 3, 2002, Zambia’s Agriculture Minister, in a statement to the press, demanded that 19,000 MT of biotech maize that had been delivered to storage facilities inside the country be sent to a country that was willing to accept it. (WFP was officially notified on October 29, 2002.) According to U.S.
officials, by early November, Zambia had rejected an additional 57,000 MT of biotech maize intended for its food aid beneficiaries. The combined 76,000 MT of maize considerably exceeded WFP’s cereal shortfall for Zambia for the July through December period and would have fed 1.5 million Zambians for 3 months. In the case of Zimbabwe, there were delays while the government debated whether to accept whole grain maize and then negotiated, developed, and put in place restrictions it deemed suitable. According to a U.S. official, at one point more than 80,000 MT of U.S. whole kernel maize imports destined for Zimbabwe were delayed in South Africa and Mozambique port warehouses awaiting permits, while the food aid pipeline lacked cereal.

Costs of food aid operations increased. WFP, national governments, and other donors have borne the additional costs associated with requirements to mill some or all of the U.S.-donated maize. These costs include the milling itself, added charges for transporting whole grain maize to mills and for shipping the milled product, added storage costs because of limited milling capacity, and grain losses associated with the milling process. WFP estimates that when it has to mill the product in South Africa, regional distribution costs can total up to $80 per metric ton more than for unmilled U.S. maize.

Logistics of the food aid effort were complicated. Logistics became more complex because of (1) U.S. whole kernel maize piling up in ports as governments debated whether to accept biotech maize and, if so, under what conditions; (2) limited milling capacity; (3) added transportation and storage requirements; and (4) the short shelf life of maize milled regionally (3 months, compared with 12 months for whole maize). Because food is distributed to households on a monthly basis, WFP had to ensure that milled maize would not take more than 2 months to arrive at final distribution sites. U.S.
officials said that recipient countries in southern Africa did not make timely, informed decisions about whether to accept or reject biotech food aid. These officials also said the U.S. government does not have comprehensive data on which recipient countries are likely to accept or reject biotech food aid, nor does the U.S. government have a strategy for providing alternatives to biotech food to countries that may reject it. According to officials from State, USAID, and USDA, these problems are not confined to the southern Africa region but are global in scope.

Declining Support for Agricultural Sector and the HIV/AIDS Epidemic Pose Challenges to Emerging from Crisis into Sustained Recovery

The major challenges to emerging from the current food crisis into sustained recovery include (1) a decline in agriculture sector investments; (2) the limited scope of existing programs in agricultural development; and (3) the negative impact of the HIV/AIDS epidemic. Recognizing the need to address numerous challenges to move out of this crisis into recovery, the U.N. Secretary-General and several other key stakeholders have called for a comprehensive and targeted approach to break the pattern of recurrent food crises in Africa. The food outlook for the next crop year has improved, but without continuing efforts to respond to the region’s problems, recurring food crises may be difficult to avoid.

Agriculture Sector Investments by Donors and Governments Have Declined

Since agriculture accounts for 70 percent of the labor force in Africa, investments that improve productivity in the agricultural sector have significant implications for food security and overall rural development. According to the International Food Policy Research Institute, a 1 percent increase in agricultural productivity would help 6 million more Africans raise their incomes above $1 per day.
However, data show declining investments in the agricultural sector as agricultural lending by the World Bank, the African Development Bank, and the International Fund for Agricultural Development has fallen. Similarly, agricultural spending by national governments and U.S. bilateral assistance for agricultural programs in the affected countries have declined.

Agricultural Lending by Selected International Financing Organizations

Total lending to the agriculture sector by selected international financing organizations declined during the 1990s. For example, measured in 2003 dollars, the African Development Bank approved about $873 million in loans for agriculture in 1990 compared with $236 million in 2000, as shown in figure 8. Similarly, the World Bank approved $4.7 billion in loans for agriculture in 1990 compared with $1.4 billion in 2000. Bank officials noted that the World Bank now approaches the agricultural sector in the context of the Bank’s overall rural development strategy that includes, among other things, lending for rural infrastructure, rural health, and environment and natural resource management. For this reason, starting in 2001, the World Bank began to include agricultural investments as part of its rural development lending. However, this does not negate the overall declining trend in agricultural lending between 1990 and 2000. Our review of World Bank agricultural loans to the six affected countries since 1990 found that 15 had been made—with 9 of them approved between 1990 and 1993. There were no loans recorded for Swaziland. As shown in figure 9, in 2002, the downward trend in World Bank agricultural lending to the affected region reversed with two $50 million emergency drought recovery loans for Zambia and Malawi. These loans included an agricultural component but also included health, social services, and other emergency programs.
In general, national governments have been spending a declining share of their budgets on agriculture, as shown in figure 10. Real spending on agriculture has declined for two countries—Lesotho and Zambia—whereas total government spending has increased for all six affected countries. For the remaining countries, national government spending on agriculture has been stagnant or has grown at a slower rate than total government spending. Although the levels of U.S. bilateral assistance for agriculture by country have been mixed, overall assistance to the region’s agricultural sector has declined from $27 million in 1998 to $20.6 million in 2003. The largest reductions were for Malawi, which went from $10.3 million in 1998 to $3.2 million in 2003, while assistance to Mozambique went from $14.5 million in 1998 to $12.8 million in 2003 (see fig. 11).

Existing Programs Are Helpful but Limited in Scope

To promote agricultural development and work toward achieving food security, FAO, IFAD, and WFP advocate an approach that helps support small farmers, enhances the ability of the poor to access food, and aids recovery efforts (fig. 12 describes examples of some of the current programs). Several U.N. and USAID officials told us that while many of the programs they have funded have demonstrated promising results, the programs are limited in scope due to resource constraints and would need to be implemented on a much wider scale for greater impact and effectiveness.

Negative Impact of HIV/AIDS on Food Security Will Grow

In addition to being a significant factor that contributed to the food crisis, HIV/AIDS will continue to affect food security in the region by decreasing food production, lowering household income, and increasing household expenses, according to numerous experts. These effects will increase as the HIV/AIDS epidemic worsens.
For example, by lowering the productivity of agricultural labor in its food supply model, USDA estimated that HIV/AIDS will cause a 3.3 percent reduction in grain output in sub-Saharan Africa over the next decade relative to the region's baseline projections. As a result, the projected food deficit will grow by 13 percent. According to an IMF study, HIV/AIDS will also lower gross domestic product (GDP). Figure 13 shows the projected decrease in growth rates of GDP per capita attributable to HIV/AIDS in 10 to 15 years: Estimates range from minus 4 percent in Mozambique to about minus 7 percent in Zimbabwe. Projected average per capita GDP growth rates without HIV/AIDS range from 1.5 percent for Lesotho to 3.9 percent for Mozambique, indicating that the HIV/AIDS effect will significantly reduce national income. In fact, for a typical sub-Saharan African country with HIV/AIDS prevalence of 20 percent, national income is estimated to be 67 percent lower at the end of a 20-year period than without the disease.

U.N. Secretary-General and Others Cite Need for Integrated Response

Although the international response was sufficient to avoid famine in past food crises in the region—as well as the current one—food security continues to be a significant development challenge. U.N. and U.S. officials acknowledge that food aid and humanitarian assistance alone will not prevent future crises without a comprehensive recovery strategy that addresses the underlying causes of food insecurity. In our review, we found no evidence of such a strategy. In March 2003, the U.N. Secretary-General noted that the devastating impact of HIV/AIDS requires an integrated response that may include long-term measures even when addressing short-term emergencies and called for a more systematic, targeted approach to break the pattern of recurrent food crises in Africa. Many other authoritative experts and key stakeholders have echoed the U.N. Secretary-General’s call for an integrated response.
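The compounding behind the IMF estimate cited above (national income about 67 percent lower after 20 years for a country with 20 percent HIV/AIDS prevalence) can be checked with a short calculation. This is an illustrative, back-of-the-envelope sketch rather than the IMF model: the annual growth gap of roughly 5.4 percentage points is inferred from the 67 percent figure, not taken from the study.

```python
# Illustrative check of how a sustained drag on annual growth compounds
# over 20 years into a large national income shortfall. The 5.4-point
# growth gap is an inferred, hypothetical value, not an IMF input.
def income_ratio(growth_gap: float, years: int) -> float:
    """Income with the growth drag as a fraction of income without it."""
    return (1.0 - growth_gap) ** years

ratio = income_ratio(growth_gap=0.054, years=20)
shortfall = 1.0 - ratio
print(f"income ratio after 20 years: {ratio:.2f}")  # ~0.33
print(f"shortfall: {shortfall:.0%}")                # ~67%
```

The point of the sketch is that a seemingly modest annual gap, compounded over two decades, is enough to produce the two-thirds income loss the IMF projects.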
For example, in December 2002, SADC—the principal organization for regional cooperation in food and agriculture and related economic and social issues—acknowledged the need for political commitment at all levels within the region and for coordinated support from SADC, national governments, donors, nongovernmental organizations, and civil society to ensure food security in the future. Among those calling for a comprehensive response to address food security—one that integrates agricultural development, HIV/AIDS, natural disaster management, and other appropriate interventions—are the U.N. Office for the Coordination of Humanitarian Affairs, the International Food Policy Research Institute, and the Partnership to Cut Hunger and Poverty in Africa. In recent years, international donors have announced major initiatives related to food security. These include not only plans to enhance food availability by increasing agricultural production but also strategies to increase food accessibility by reducing poverty. For example, the United Nations’ 2001 Millennium Development Goals pledged to help cut hunger in Africa in half by 2015. The World Bank and IMF have worked with countries eligible for debt relief—Malawi, Mozambique, and Zambia—to ensure that food security and agriculture are central themes in these countries’ Poverty Reduction Strategy Papers. Among other measures, these strategy papers emphasize promoting small-scale irrigation, reducing land degradation, and improving access to credit and agricultural inputs. In early 2002, USAID introduced its Agricultural Initiative to Cut Hunger in Africa, which is designed to accelerate agricultural growth and reduce vulnerability to hunger and poverty. However, as of April 2003, of the six affected countries, only Mozambique was proposed for funding ($3.9 million in 2003) under the initiative.
Regional Food Outlook Remains Tenuous; Sustained Efforts Seen as Necessary

Despite improved weather conditions and a better harvest beginning in April 2003, food security conditions in the region are still tenuous. As of the end of March 2003, early warning systems were forecasting that some parts of the region may have better harvests than last year; however, they also note that food insecurity and the need for emergency aid persist—and may worsen—in some areas, particularly in Zimbabwe and parts of southern Mozambique. In February 2003, the U.N. Office for the Coordination of Humanitarian Affairs stated that, because the response to improve agricultural inputs has been inadequate, recovery by the next agricultural season is unlikely and the need for food aid will be prolonged. In fact, the current emergency operations, which were for the crop year that concluded in March 2003, have been extended through June 2003. Beyond that, according to WFP and USAID officials, the need for food aid will likely continue, particularly for many of the poorest and most vulnerable households.

Conclusions

The current food crisis is complicated by disruptive agricultural and governance policies in the affected countries and the HIV/AIDS epidemic. While WFP, the United States, some donor governments, and NGOs provided enough food to prevent a famine, overall donor response was insufficient in terms of food quantities and timeliness to prevent widespread hunger. In addition, other obstacles—including poor infrastructure in the affected countries and concerns associated with biotech food—hampered an effective response. The controversy about biotech food aid, in particular, significantly complicated logistics, increased costs, and delayed food aid reaching beneficiaries. Concerns about agricultural biotechnology may be an obstacle to addressing future emergency food aid needs around the world, partly because the United States accounts for about half of global food aid and because several U.S.
food aid commodities are genetically modified. Action is needed to reduce the likelihood of biotech food aid becoming a serious problem in future crises. Furthermore, in a region where agricultural production is critical to national economies and food security, there is a need for viable agricultural policies and funding by national governments, as well as adequate agricultural assistance and related strategies from multilateral organizations and donors, including the United States. Without a concerted strategy that integrates, among other things, agricultural development, the impact of HIV/AIDS, and natural disaster management, destabilizing food crises are likely to recur.

Recommendations for Executive Action

To maximize the effectiveness of the U.S. response to future food crises in the southern Africa region as well as in other parts of the world, we recommend that the Secretaries of State and Agriculture and the Administrator of USAID initiate a comprehensive review of the issues pertaining to biotech foods in emergency food aid. In anticipation of future food crises, this review could consider measures such as (1) encouraging recipient countries to enhance their capacity to make informed decisions regarding agricultural biotechnology and offering technical assistance in this endeavor; (2) identifying which countries are likely to accept, restrict, or reject biotech food aid; and (3) determining ways that the United States can contribute to emergency food aid needs in countries that decide to restrict or reject biotech food aid.
To further food security in the region, we recommend that the Secretaries of State and Agriculture and the Administrator of USAID work with international organizations, donors, and national governments to develop a comprehensive, targeted strategy to ensure sustained recovery that (1) integrates agricultural development, HIV/AIDS awareness and action, natural disaster management, and other appropriate interventions; (2) estimates costs and resource requirements; and (3) establishes a plan for mobilizing resources, a timetable for achieving results, and indicators for measuring performance.

Agency Comments and Our Evaluation

The Department of State, USDA, USAID, and WFP provided written comments on a draft of our report. These comments are reprinted in appendixes VIII, IX, X, and XI, along with our responses to specific points. In general, the Departments of State and Agriculture, USAID, and WFP agreed with our overall conclusions and recommendations. However, they expressed technical concerns, primarily related to our discussion of biotech food, which we have addressed in the text as appropriate. USDA objected to our use of the term “biotech food,” saying it prefers “foods which may contain the products of agricultural biotechnology.” USDA also said that it opposes the use of the term “genetically modified organisms” and the acronym “GMO” because these terms carry negative connotations. According to the department, modern agricultural biotechnology is simply the next step in plant breeding technology. Notwithstanding USDA’s comments, FAO, WHO, and WFP use the terms genetically modified food, GM food, and biotech food. In fact, after receiving USDA’s comments, we found that USDA was still using the terms “biotech plants” and “biotech crops” in some of its publicly available informational materials. USAID objected to our use of the term “biotech food aid,” noting that the United States provides food aid from the general U.S.
food supply, which may contain biotech crops. In certain places, we now refer to U.S. food aid as aid that may contain biotech crops. In commenting on our recommendation to initiate a comprehensive review of the issues associated with biotech foods in emergency food aid, USDA said that it agrees that all relevant parts of the U.S. government must continue to review and engage other countries regarding their biotech policies, including those related to food aid. USDA said it will continue to support developing countries’ efforts to enhance their capacity for making science-based and transparent decisions regarding products of modern agricultural biotechnology. At the same time, USDA said it is difficult to accurately identify countries that might accept or reject products of modern agricultural biotechnology because many developing countries’ policies depend upon the specific political, economic, and social circumstances at the time. The Department of State said that an interagency review on how to manage the presence of bioengineered foods in food aid might be useful for developing a strategy. However, the department said that such a review should be narrowly focused to ensure better coordination among food aid, development, trade policy and regulatory agencies. USAID said it supports further interagency discussion and coordination on the dimensions of biotechnology in food aid. USAID noted that it is actively engaged in supporting the development of capacity in a number of food aid recipient countries to make informed decisions, but said in practical terms this is a long-term strategy and unlikely to assist in emergency situations such as we saw in southern Africa. 
USAID said it believes that the most practical solution will be to work with recipient governments and partners involved in the delivery of food aid to build confidence in the existing safety evaluations of these products, including evaluations done in the United States and by other countries, scientists, and international organizations such as FAO and WHO. Regarding our recommendation to develop an integrated recovery strategy, the Department of State, USAID, and WFP fully support the need for a comprehensive, coordinated approach to help address the underlying causes of food insecurity. State cited U.S. efforts to work with major donor countries to create a multilateral framework for improving long-term food security throughout the world. USAID said that, along with an interagency working group formed by the sub-policy planning committee to coordinate the U.S. government response to the food crisis, it is in the process of drafting a recovery strategy and action plan, which will guide the development and review of new USAID country strategies in the region. WFP emphasized the need for the international community to remain engaged within the region and to help national governments address medium- to longer-term issues related to food insecurity. We agree that such efforts must be sustained if destabilizing food crises are to be avoided. In addition, we provided FAO, IMF, and the World Bank an opportunity to review parts of a draft of this report for technical accuracy, and we incorporated their comments as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretaries of State and Agriculture, and the USAID Administrator. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-3149. 
Other GAO contacts and staff acknowledgments are listed in appendix XII.

Scope and Methodology

To determine what factors contributed to the current crisis in southern Africa, we met with and analyzed information from government officials at the U.S. Agency for International Development (USAID) and the Departments of Agriculture (USDA) and State in Washington, D.C., and U.S. missions in Botswana, Malawi, Mozambique, Zambia, Zimbabwe, and South Africa. We also met with officials and reviewed information from the World Food Program (WFP), the Food and Agriculture Organization (FAO), and the International Fund for Agricultural Development (IFAD) at their headquarters in Rome and in the southern African countries we visited. In addition, we gathered information from and met with representatives of the World Bank and the International Monetary Fund (IMF), other U.N. agencies, other donor governments, and host government ministries. We also gathered information and met with representatives of early warning systems and interagency food security assessment teams. In addition, we gathered information and met with representatives of nongovernmental organizations (NGO), including Africare, Bread for the World, the Coalition for Food Aid, CARE, Catholic Relief Services, Save the Children, World Vision, and ACDI/VOCA. We also reviewed studies from public and private research institutions on the causes of the current and past food crises in southern Africa. To determine how well the populations’ overall food needs were met during the crisis period, we met with officials and reviewed information from WFP, FAO, host governments, the private sector, donor governments, NGOs, and the Southern African Development Community (SADC). We also met with and gathered information from representatives of interagency famine early warning and vulnerability assessment teams.
In addition, we analyzed country-specific and regional WFP food aid data tracking food aid flows from donors (through WFP and NGOs) to the country level and, for WFP food aid, to the beneficiary level. Because of southern Africa’s infrastructure and technology problems, our collection and analysis of agricultural supply and demand information had inherent limitations. We observed food aid distribution at various stages, including at points of entry, storage facilities at the extended delivery points, and final distribution points. We met with WFP, NGO, donor, national government, and local government officials responsible for managing and monitoring the food aid distribution process. We reviewed WFP’s real-time logistics information system, including its monitoring and loss reports. We also reviewed U.S. and other donor financial contributions to determine similar information on assistance not related to food aid. We verified the accuracy of data and reports, to the extent possible, by tracing the flow of information and obtaining comparable data from multiple government, international organization, NGO, and private sector sources. To determine the major obstacles to the food aid effort, we met with key officials and gathered and examined data from governments and the private sector in recipient countries, WFP, donor governments, NGOs, and SADC on the rate and amount of donor contributions, infrastructure, and biotech food aid and its impact on food aid distribution in the current crisis. We examined U.S. and other donor funding of WFP’s EMOP, reviewing actual country donations against pledges to determine the sufficiency and timeliness of donations. We reviewed and examined data on the transportation network for moving food in the region and identified and confirmed transportation and infrastructure obstacles during our fieldwork in country. We verified and confirmed the general accuracy of these data through multiple private sector and governmental organizations.
With the assistance of the National Academy of Sciences, we had seven U.S. scientists provide an independent perspective on the Zambian scientists’ biotech report. To determine the challenges to emerging from the crisis into sustained recovery, we met with numerous NGOs, the World Bank, the IMF, USAID, and the Departments of Agriculture and State and analyzed information on the decline in agricultural sector investment, the limited scope of existing programs in agricultural development, and the negative impact of the HIV/AIDS epidemic. In addition, we reviewed studies from private and public research institutions, such as the International Food Policy Research Institute, on the challenges to moving from crisis into recovery. We also analyzed agricultural funding data of the six southern African national governments. To analyze World Bank agricultural lending, we reviewed World Bank annual reports and its Web site from 1990 to 2002 to determine the amount and nature of loans made to the six affected southern African countries. After identifying the loans, we calculated an average deflator for each fiscal year (July to June) and calculated the 2003 value of each of these loans. To analyze the percentage of government budgets expended for agriculture, we gathered fiscal data for each of the six countries from the IMF country statistical appendices for 1997 through 2002. We then calculated the current and capital expenditures for agriculture as a share of the total current and capital expenditures in the government budget. We also calculated the real growth rate using least-squares regression methodology for agricultural and total spending. Finally, to determine the impact of HIV/AIDS on food security among the six affected countries, we reviewed USDA and IMF studies that quantified the food security situation and its economic implications.
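The two calculations described in the methodology above, restating loan amounts in constant 2003 dollars with a deflator and fitting a least-squares (log-linear) real growth rate to spending data, can be sketched as follows. The price index and the spending series here are hypothetical placeholders, not the actual GAO or IMF figures.

```python
import math

# Hypothetical price index (2003 = 100); the actual deflators used in
# the GAO analysis are not reproduced here.
deflator = {1990: 62.0, 1995: 76.0, 2000: 91.0, 2003: 100.0}

def to_2003_dollars(nominal: float, year: int) -> float:
    """Restate a nominal amount in constant 2003 dollars."""
    return nominal * deflator[2003] / deflator[year]

def lsq_growth_rate(series: dict[int, float]) -> float:
    """Least-squares annual growth rate: regress log(spending) on year,
    then convert the slope back to a percentage growth rate."""
    years = sorted(series)
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(math.log(series[y]) for y in years) / n
    slope = (sum((y - xbar) * (math.log(series[y]) - ybar) for y in years)
             / sum((y - xbar) ** 2 for y in years))
    return math.exp(slope) - 1.0

# A $540 million (nominal) 1990 loan restated in 2003 dollars:
print(round(to_2003_dollars(540.0, 1990), 1))

# Hypothetical agricultural spending series, 1997-2002:
spending = {1997: 100.0, 1998: 97.0, 1999: 93.0,
            2000: 95.0, 2001: 90.0, 2002: 88.0}
print(f"real growth rate: {lsq_growth_rate(spending):.1%} per year")
```

Fitting the trend to log spending, rather than to the raw dollar amounts, is what makes the result a constant annual growth rate of the kind reported for the six countries' budgets.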
The information on foreign laws in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We conducted our review from August 2002 through May 2003 in accordance with generally accepted government auditing standards.

Timeline of the Southern Africa Food Crisis

Figure 14 is a chronology of key events (political actions, alerts, and emergencies) that occurred in the region and in some of the affected countries during the 2-year period from July 2001 through April 2003.

Early Warning Systems and Vulnerability Assessment Methods

Two types of data collection systems tracked the food crisis in southern Africa: (1) early warning systems, which monitor factors that affect food supply to provide decision makers with notice of potential crises; and (2) assessment systems, which monitor the nutritional needs of vulnerable populations in order to design or assess interventions.

Early Warning Systems

Famine early warning systems are designed to provide decision makers with advance notice of an impending food crisis or famine. These systems compile various indicators of food security at a regional, national, or subnational scale, including information on weather and household purchasing power. While early warning systems are useful, preventing food crises depends on timely responses to the information they disseminate. For this report, we reviewed two early warning systems:

FEWS NET: The Famine Early Warning System (FEWS) is a specialized Information Network (NET) based in 17 African countries contending with chronic food insecurity. FEWS NET is a partnership-based program initiated and funded by USAID. The goal of FEWS NET is to strengthen the ability of African countries and regional organizations to manage risk of food insecurity by providing timely and analytical early warning and vulnerability information.
FEWS NET monitors information from multiple technologies, such as satellites and field observations, and seeks to facilitate timely access to that information; identifies specific, acute food security threats; and provides regular information assessments to decision makers that reflect the best judgment of the food security community.

GIEWS: FAO’s Global Information Early Warning System on Food and Agriculture (GIEWS) is an information system based in Rome that includes 116 governments, 3 regional organizations, and 61 nongovernmental organizations. The goal of GIEWS is to provide the international community with warning of imminent food crises to ensure timely interventions in countries or regions affected by natural or man-made disasters. The system seeks to monitor all aspects of food supply and demand in all countries on a continuous basis at the global, regional/subregional, national, and subnational levels. It reports to the international community through its system of regular and ad hoc reports.

FEWS NET and GIEWS often contribute to the WFP/FAO Crop and Food Supply Assessment Missions. Additionally, FEWS NET supplies GIEWS with the major proportion of its remote-sensing data for monitoring agricultural production. However, according to USAID, these two early warning systems differ in three main areas:

Geographic scope: FEWS NET has a 17-country scope in Africa, whereas GIEWS has a global mandate.

Level of detail: FEWS NET reports on the subnational level based on information gathered by field staff located throughout their scope, whereas GIEWS reports on the national and global levels from their headquarters in Rome with few field staff.

Technical focus on food security: FEWS NET focuses on food availability, access, and utilization, while GIEWS focuses on food production and availability.

Early Warning Systems in the Southern Africa Food Crisis

The famine early warning systems produced relevant information regarding the southern Africa food crisis.
FEWS NET and GIEWS reported on such food security indicators as adverse weather events, current food shortages, the status of cereal imports, the status of strategic grain reserves, the status of grain prices, and forecasts for the 2001/02 harvest. In particular, in late 2001 and early 2002 the early warning systems reported questionable food security in parts of the six countries, accounting for the poor 2000/01 harvest and anticipating future cereal gaps based on a poor outlook for the 2001/02 harvest. In November 2001, FEWS NET reported that the 2000/01 cereal harvest from earlier that year would likely be insufficient to fill food needs in each country except Mozambique. After FEWS NET highlighted these potential maize shortages, the U.S. government began carefully monitoring the situation in southern Africa, according to a Department of State cable. By mid-February 2002, GIEWS warned that impending severe food shortages threatened some 4 million people in the southern African countries, including parts of each of the six countries. That same month, however, FEWS NET reported that although there were localized areas of concern at the national and subnational levels, there was no reason for serious concern over production prospects from a regional perspective at that point in the growing season. In April 2002, closer to the harvest period, GIEWS warned of a looming food crisis in southern Africa, with conditions in several countries set to worsen. By May 2002, FEWS NET also warned of the potential for a food security crisis of regional magnitude if appropriate and timely action were not taken. Nevertheless, the early warning systems did not anticipate the severity of the situation in Malawi. 
Although the systems did report serious maize (corn) production shortfalls in Malawi during the 2000/01 harvest caused by mid-season floods and late-season drought, flawed agricultural statistics provided by the government of Malawi indicated that the production shortfall would be covered by other food crops, especially cassava. According to the IMF, the data error became apparent only in February 2002, when Malawi began experiencing severe food shortages.

Assessment Systems

For this report, we reviewed the following assessments:

VAM: The Vulnerability Analysis and Mapping Unit (VAM) analyzes, maps, and reports on populations and geographic areas experiencing food insecurity to inform WFP food aid operations in 43 countries. VAM uses state-of-the-art mapping techniques to pinpoint the people most vulnerable to hunger and target their needs. VAM and FEWS NET work in close collaboration in the African countries where they are both present. VAM relies on FEWS NET for early warning analyses in most African countries. VAM also has a global mandate in supporting internal WFP operations, whereas FEWS NET attempts to build assessment capacity within the countries.

CFSAM: Crop and Food Supply Assessment Mission (CFSAM) is a rapid assessment of information generated in an affected country used to fill in information gaps and to provide an early forecast of production and the emerging food supply situation of that country. This is done only at the request of the host country government. A mission may also collect information on household food security, vulnerability, coping mechanisms, and social welfare programs. GIEWS/FAO coordinates such missions in conjunction with WFP and other stakeholders such as host country ministerial staff, FEWS NET staff, and the Southern African Development Community’s (SADC) Regional Early Warning Unit (REWU). CFSAM targets a wider audience than WFP’s internally focused VAM unit.
The international community uses the CFSAM information to calculate how much food aid and other relief assistance is needed to assist the most vulnerable people.

VAC: SADC and some of its member states established vulnerability assessment committees (VAC) to better assess and address food security issues. The purpose of VAC assessments is to (1) provide additional information to help adjust response programs to better meet the needs of vulnerable populations; (2) rapidly investigate and characterize or verify suspected crises in local areas; and (3) better understand the causes of the emergency and their implications for a return to food security. The committees use a coordinated, collaborative process that integrates the most influential assessment and crisis response players into the effort to help gain privileged access to national data sets and expert technicians and increase the likelihood of reaching consensus among national governments, implementing partners, and major donors. Key players include the SADC Regional Early Warning Unit and national VACs, VAM, FEWS NET, GIEWS, and several NGOs. The assessment methodology included sample surveys at the district, community, and household levels and incorporated household food economy and nutritional surveys.

Use of Assessments in the Southern Africa Food Crisis

Several of the above assessments have been used to prepare for and monitor the southern Africa food crisis (see app. II for timeline). FAO conducted a series of CFSAMs starting in May 2002. The missions estimated cereal requirements and cereal production for each of the six countries and the extent to which the gap could be offset by commercial imports. (See app. V for more on commercial sector imports and their role in the crisis.) The remaining deficit was identified as requiring emergency food aid. The regional and national VACs built assessment capacity in the region and increased the breadth and depth of food security monitoring during the crisis.
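The CFSAM deficit arithmetic described above (cereal requirements, less domestic production, less what commercial imports can offset, equals the emergency food aid need) can be sketched as follows. The tonnage figures in the example are hypothetical placeholders, not actual CFSAM estimates for any country.

```python
# Sketch of the CFSAM cereal-gap arithmetic described above.
# All tonnage figures here are hypothetical placeholders, not
# actual CFSAM estimates for any of the six countries.

def emergency_food_aid_need(cereal_requirement_mt: float,
                            domestic_production_mt: float,
                            commercial_import_capacity_mt: float) -> float:
    """Deficit remaining after domestic production and feasible
    commercial imports are netted against total cereal requirements."""
    gap = cereal_requirement_mt - domestic_production_mt
    deficit = gap - commercial_import_capacity_mt
    return max(deficit, 0.0)  # a surplus implies no emergency appeal

# Hypothetical example: 2.0 million MT required, 1.2 million MT produced,
# 0.5 million MT judged importable commercially -> 0.3 million MT appeal.
need = emergency_food_aid_need(2_000_000, 1_200_000, 500_000)
```

The same netting is done per country; the resulting deficits were the quantities identified as requiring emergency food aid.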
To this end, the committees conducted assessments and prepared reports on their results in September 2002 and January 2003. A third assessment occurred in May 2003.

U.S. Donations

By early 2002, the United States had recognized the seriousness of the developing food crisis in southern Africa and initiated actions to donate substantial quantities of food aid to the region. For example, in mid-February 2002, USAID arranged a loan of 8,470 metric tons (MT) of maize to southern Africa from stocks held in Tanzania; the commodity began arriving in the region in mid-March. On March 15, 2002, USAID authorized World Vision to provide 14,310 MT of food aid to Zimbabwe (the amount was later increased to 19,710 MT). On March 21, USAID authorized the purchase of 35,330 MT of commodities—from existing stocks in New Orleans—to be shipped to southern African ports; these stocks arrived by the end of June 2002. Overall, between late April 2002 and March 31, 2003, the United States donated and delivered nearly 500,000 MT of food aid to the region valued at about $275 million. The food represented approximately 68 percent of the total food aid delivered into the region between April 1, 2002, and March 31, 2003. U.S. deliveries included:

- 81,950 MT for precursor WFP emergency programs in each of the six countries—Lesotho, Malawi, Mozambique, Swaziland, Zambia, and Zimbabwe—valued at $43 million;
- 19,710 MT for the World Vision program in Zimbabwe, valued at $12.8 million;
- 326,553 MT for the WFP regional operation, valued at $165.4 million; and
- 71,600 MT for the C-SAFE operation, valued at $53.5 million.

Table 4 provides a breakout of the U.S. food aid by country and commodity. As the table shows, Zimbabwe, Malawi, and Zambia received the largest amounts of aid. Maize (corn) and maize meal accounted for more than 70 percent of the donated commodities.
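The delivery components listed above can be cross-checked against the stated totals (nearly 500,000 MT, valued at about $275 million); a minimal sketch:

```python
# Cross-check of the U.S. delivery components cited above against the
# stated totals ("nearly 500,000 MT ... valued at about $275 million").
deliveries_mt = {
    "WFP precursor programs": 81_950,
    "World Vision (Zimbabwe)": 19_710,
    "WFP regional operation": 326_553,
    "C-SAFE operation": 71_600,
}
values_millions_usd = {
    "WFP precursor programs": 43.0,
    "World Vision (Zimbabwe)": 12.8,
    "WFP regional operation": 165.4,
    "C-SAFE operation": 53.5,
}

total_mt = sum(deliveries_mt.values())              # 499,813 MT
total_millions_usd = sum(values_millions_usd.values())  # about $274.7 million
```

The components sum to 499,813 MT and roughly $274.7 million, consistent with the rounded totals in the text.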
About 84 percent of all the donated food represented biotech commodities (i.e., maize, maize meal, oil, and corn soya blend (CSB)/corn soya milk (CSM)).

Commercial Imports

The extent and availability of commercial cereal food supplies during the food crisis are difficult to assess for a number of reasons. Early concerns about the progress of commercial imports stemmed from the large amounts of cereal imports needed, the high regional prices for maize, and the lack of sufficient foreign exchange resources for governments and private sector entities to purchase the required imports. Nonetheless, data available in mid-May 2003 indicated that 1.72 million MT of commercial cereal imports had been received in the six countries between April 1, 2002, and March 31, 2003. However, the regional Vulnerability Assessment Committee’s September 2002, December 2002, and January 2003 reports indicated some serious problems with food availability during the year. (See app. III for a detailed description of the VACs.) Factors contributing to the uncertainty over commercial cereal food supplies included the lack of time frames indicating when imports needed to be purchased, data reliability issues, uneven distribution of food supplies between urban and rural areas, and government policies that provided disincentives to the private sector to import food or that, once imports were received in country, discouraged the efficient supply of those goods to the local market. In contrast to WFP’s food aid effort, the CFSAMs did not identify the monthly rate at which imports needed to occur, making it difficult to judge the timeliness of commercial imports. The September VAC report for Zambia indicated concern that it could be difficult for importers to find maize in regional markets (as several countries in the region had large deficits and would be competing to buy from the same suppliers), which in turn could delay purchases.
According to the December VAC report for Lesotho, maize, wheat, and sorghum were generally not available for purchase at the end of the year in communities across that country. In Swaziland, maize was reported to be readily available in retail outlets nationwide even though there had been no commercial imports between July and November. However, there were fears that a shortfall in maize imports would occur between December and March, causing another round of price hikes. Once data on commercial cereal imports began to be available, data reliability became an issue in some cases. For example, according to the December Zimbabwe VAC report, figures on the combination of commercial imports, food aid imports, and available national production should have resulted in a surplus of 200,000 MT at the national level. However, the report indicated there was something seriously wrong with the numbers, since 41 percent of communities surveyed reported that cereal grains were not or were rarely available from the government’s grain marketing board and the other 59 percent reported that the grains were only occasionally available. U.S. government officials noted that there have been numerous anecdotal reports of the government’s politicizing food aid, which may partly explain some of the discrepancy. While commercial cereal imports may have been provided to urban markets, shortages were reported in many rural areas, indicating problems with the distribution of commercial cereal supplies. The December VAC report for Lesotho indicated that maize meal (maize that has undergone the milling process) was said to be available, though very expensive, in the various urban centers but generally not available in the rural areas, which were characterized as experiencing a serious food crisis. The January VAC report for Zambia said there was no commercial shortfall in urban areas and millers did not expect one through February and beyond.
However, most of the maize that had been imported (officially or via cross-border trade) was reportedly only servicing the urban markets, leaving the rural areas with a severe commercial grain shortfall that drastically pushed up prices. Of 48 villages surveyed, fewer than 10 percent said maize was readily available and fewer than 30 percent said maize was occasionally available. Finally, some national governments implemented subsidies and price controls that raised concern about private sector imports and whether imports received would be supplied to local markets efficiently. For example, in response to the crisis, Malawi implemented a countrywide subsidy on the consumer price of maize to make it more affordable to the public. However, this policy, combined with high interest rates that inhibited the private sector from borrowing to cover its purchases, meant that the private sector could not profit from importing maize and selling it at the subsidized price. In Zambia, the government encouraged a program whereby a private sector group, the Millers Association, would import 300,000 MT of maize without having to pay import duties and the government would import 155,000 MT of maize to begin a strategic reserve. However, the Zambian government reportedly provided conflicting information about the amount of food it was to import for relief versus strategic reserves, thus causing confusion about planned imports, uncertainty over market prices, and conditions favorable for market speculation. In Zimbabwe, the government banned all private sector imports and implemented price controls on maize. This policy reportedly encouraged traders with food supplies to stockpile them or sell them at a much higher price on black markets in country or across borders.

Nonfood Emergency Needs

U.N. agencies requested $143.7 million to address urgent, nonfood humanitarian needs that increased people’s vulnerability to famine for the July 2002 through June 2003 period.
As of April 9, 2003, less than 25 percent of the total identified requirements had been funded. Figure 15 compares the sectors and dollar amounts for which the U.N. agencies requested funding to actual contributions. As the figure shows, the largest amounts were requested for health, agriculture, and economic recovery. Only five of the nine sectors received any funding, and four of these were only partially funded. The four sectors with the highest rates of funding were multi-sector, 583 percent of the requested amount; coordination and support services, 49 percent; agriculture, 35 percent; and health, 21 percent. Funding requests were tied to specific projects. For example, the World Health Organization (WHO) asked for $2.9 million for projects in Malawi to enable earlier detection of epidemics, improve response to disease outbreaks in emergency situations, and strengthen emergency health coordination. FAO requested funding for an $8.5 million project in Zimbabwe to (1) increase agricultural production among 400,000 vulnerable households by providing inputs such as seeds and fertilizer, (2) facilitate tillage, (3) rehabilitate local water sources, and (4) develop opportunities to market agricultural products.

Biotech Food and the Southern Africa Food Crisis

This appendix provides additional information on crops and foods produced with modern agricultural biotechnology, how concerns about agricultural biotechnology developed in southern Africa, and how issues surrounding agricultural biotechnology affected delivery of U.S. food aid during the southern Africa food crisis.

Modern Agricultural Biotechnology

Modern agricultural biotechnology refers to various scientific techniques used to modify plants, animals, or microorganisms by introducing into their genetic makeup genes for specific desired traits, including genes from unrelated species.
Genetic engineering techniques allow development of new crop or livestock varieties, since the genes for a given trait can be introduced into a plant or animal species to produce a new variety incorporating that specific trait. Additionally, genetic engineering increases the range of traits that can be introduced in new varieties by allowing genes from totally unrelated species to be incorporated into a particular plant or animal variety. Crops and foods containing or derived from genetically modified (GM) plants have been characterized by various users as biotech, GM, genetically modified organisms (GMO), and bioengineered crops and foods. Biotech crops currently on the market are mainly aimed at increasing crop protection by introducing resistance against plant diseases caused by insects or viruses or by increasing tolerance to herbicides. Biotech crops have lowered pest management costs and enhanced yields. By the end of 2000, such crops had been planted on nearly 100 million acres worldwide. As of 2000, the United States had 76.7 million acres of biotech crop varieties: 26 percent of all maize planted, 68 percent of cotton, and 69 percent of soybeans. The United States and a number of other countries have established regulatory processes for assessing whether foods derived from agricultural biotechnology are as safe for humans, animals, other plants, and the environment as their traditional counterparts. Safety assessments of GM foods investigate direct health effects (toxicity), tendencies to provoke allergic reaction, nutritional effects, and any unintended effects that could result from gene insertion. Environmental assessments consider the ability of the GMO to escape and potentially introduce the engineered genes into wild populations; susceptibility of nontarget organisms (e.g., insects that are not pests) to the gene product; loss of biodiversity; and increased use of chemicals in agriculture. 
The environmental aspects vary considerably according to local conditions. GM food such as whole kernel maize seed contains living modified organisms (LMO) that are capable of transferring or replicating genetic material. If maize is milled, this is no longer the case. Whole kernel maize seed can be eaten as a food or planted to grow a new crop.

Challenges to U.S. Agricultural Exports Containing Biotech Commodities

U.S. agricultural biotech exports have faced several significant challenges in international markets. First, as the single, major producer of biotech products, the United States has been relatively isolated in its efforts to maintain access for these products. Second, in many parts of the world, consumer concerns about the safety of biotech foods have increased, leading key market countries to implement or consider regulations that may restrict U.S. biotech exports. Third, in the United States, biotech and conventional varieties are typically combined in the grain handling system for more efficient use of crops from multiple sources. Thus, foreign regulations on biotech could affect all U.S. exports of these commodities as well as food products containing or derived from biotech crops. Specifically, regulations limiting or banning the importation of foods containing biotech products present serious challenges to U.S. exporters of corn and soy products, according to Department of State and USDA officials. Several international organizations are involved in developing guidance on biotech food and its regulation. The Codex Alimentarius Commission (Codex), a joint FAO/WHO body responsible for an international food code, has been developing principles for the human health risk analysis of biotech foods. These principles are based on a premarket assessment, performed on a case-by-case basis, that evaluates both direct effects (from the inserted gene) and unintended effects that may arise from inserting the new gene.
The principles are in the final stage of an eight-step international agreement process. Draft language under consideration includes an option for mandatory labeling based on the method of production, even if there is no detectable presence of DNA or protein in the end product resulting from genetic modification. The United States and several other countries have opposed mandatory process-based labeling for foods that may contain the products of agricultural biotechnology. They favor mandatory labeling only with regard to allergic reactions, changes in nutritional content, and changes in handling requirements. Codex has been deadlocked on the labeling issue for several years. The Cartagena Protocol on Biosafety, an international environmental treaty, regulates transboundary movements of LMOs. Biotech foods are within the scope of the protocol only if they contain LMOs. The protocol requires exporters to get consent from importing countries before the first shipment of LMOs intended for release into the environment. This requirement does not apply to LMOs intended for direct use as food, feed, or for processing. The protocol will enter into force 90 days after the 50th country has ratified it, which may be in late 2003, according to a USAID official. The United States is not a signatory to the agreement. However, according to U.S. officials, as a practical matter U.S. exporters will need to observe and comply with local regulations implementing the protocol.

U.S. Versus EU Approval Process

The United States and the European Union (EU) have very different regulatory frameworks for approving new agricultural biotech products. The United States generally applies existing food safety and environmental protection laws and regulations to biotech products and approves them based on their characteristics rather than on whether they are derived from biotechnology. To evaluate new products, U.S. regulators require sufficient evidence to determine their safety or risk.
Under this approach, the United States has approved most new biotech varieties. The EU follows the “precautionary principle,” under which the EU maintains that approval of new biotechnology products should not proceed if there is “insufficient, inconclusive, or uncertain” scientific data regarding potential risks. The EU has not approved any new biotech foods for marketing since 1998. This stance has affected the viability of biotech trade in other parts of the world. For example, given the importance of the EU market, U.S. soybean producers have been reluctant to introduce new biotech varieties not approved for marketing in the EU. Similarly, maize growers in Argentina, who export to the EU, are deferring planting a biotech variety known as “Roundup Ready” corn because the EU has not approved it. They are planting only biotech varieties approved by the EU.

Biotech Issues in Zimbabwe

According to U.S. officials, Zimbabwe raised concerns about the potential adverse environmental and commercial/trade impacts of unmilled biotech products as early as the summer of 2001, a year before the U.N.’s southern Africa appeal. It did not want planting of whole kernel biotech seeds or feeding of livestock on biotech products. In December 2001, the United States offered to provide 14,300 MT of maize to Zimbabwe, but the government refused, since it could not be certified as GMO-free. In January 2002, the United States agreed to provide 8,500 MT of fortified corn meal to Zimbabwe as an initial contribution to a WFP program launched in November 2001. Since this was a milled product that did not contain any living GMOs, the government accepted it. In May 2002, the United States offered an additional shipment of 10,000 MT of whole kernel maize for the WFP program. The government again said it would only accept contributions that included assurances that the food was not derived from GMOs. As a result, the maize was reallocated to Zambia, Malawi, and Mozambique.
Near the end of July 2002, Zimbabwe proposed to accept a U.S. offer of 17,500 MT of maize that might contain biotech commodity. However, the maize would be temporarily stored in silos to be milled and subsequently distributed. In the meantime, the government would distribute its own maize, packed into USAID food bags. This proposal became known as “the swap.” Near the end of August, the United States approved the swap arrangement. However, on September 1, the Agriculture Minister of Zimbabwe was quoted as emphatically rejecting biotech food assistance. Four days later, though, the President approved accepting biotech maize, subject to special shipping, milling, and distribution requirements.

Biotech Issues in Zambia

In February and March 2002, WFP and U.S. officials notified Zambia that U.S. donations to that country would likely include maize containing biotech varieties. In June 2002, Zambia’s Vice President said the country would gladly accept the U.S. maize Zimbabwe refused. However, during June and July a public debate on biotech food began and appeared to be backed strongly by the opposition political party. In August, a 6-hour town meeting on the issue was held, and on August 16, the government decided to suspend all biotech imports and distributions. After this announcement, the USAID Administrator invited seven Zambian scientists to visit the United States on a fact-finding mission regarding the biotech issue. The scientists came to the United States in September and subsequently visited South Africa and several European countries as well. Their report concluded that:

- distributing biotech maize carries a high risk of eroding local maize varieties;
- the safety aspects of biotech foods are not conclusive; and
- there is a potential risk of biotech maize, if planted, affecting the export of baby corn and honey in particular and organic foods in general to the EU.
They recommended that the government continue its policy of not accepting biotech foods. On October 29, the Zambian government agreed. Seven experts in the field of modern agricultural biotechnology reviewed the Zambian scientists’ report for us. With regard to human health and safety issues, two experts found the report to be fair, accurate, and fact-based; two experts disagreed with this assessment; and three did not respond. Concerning environmental issues, three experts said the report was fair, accurate, and fact-based; two experts disagreed; one expert was not certain; and one did not respond. The experts generally agreed that cross-pollination, or gene flow, would occur between biotech and conventional maize plants, but disagreed about whether this warrants a ban on biotech maize. Four experts suggested that milling of biotech maize was a viable option for maintaining safety while simultaneously feeding the hungry; the other three did not comment on this issue. Overall, the experts supported the need for Zambia and other southern African countries to be able to assess GMOs in their environments.

U.S. Views and Approach

During June 2002, the United States planned how it would respond to the biotech issue. It recognized that (1) Zimbabwe’s rejection of whole kernel maize was a problem that had to be addressed, (2) other affected countries’ positions on the import and transport of biotech food needed to be determined, and (3) it was important to provide information about biotech food. By early July, the United States was planning to use private and, if necessary, public diplomacy to get the affected countries to accept the biotech food aid. It would work with and through SADC and its members to remove barriers to biotech food aid and would support WFP in asking for humanitarian exceptions to current and proposed biotech regulations.
When feasible, the United States would attempt to provide alternative food aid to countries that had bans on agricultural biotechnology in place. However, if recipient countries placed special regulations on biotech products (for example, milling or labeling requirements), they themselves would have to pay to implement these requirements. The U.S. government opposed agreeing to provide only milled biotech food aid because the process added costs and delayed shipments. On July 25, 2002, the State Department directed its embassies in the six affected states and Botswana to stress to host governments the importance of addressing the region’s immediate needs rather than engaging in protracted debate on the merits or supposed dangers of biotech food. State warned that recent decisions by some recipient and transit countries not to accept whole kernel biotech maize risked endangering the lives of millions of people. State advised U.S. missions to urge SADC member states to immediately adopt an agreement allowing unrestricted import and distribution of food aid, including biotech produce, on an emergency basis for the duration of the crisis. State’s background and guidance to its overseas posts on biotech food aid included the following:

- Food that is exported from the United States, whether commercially or through food aid, is the same food eaten by Americans in terms of its GMO content.
- To date, there is no scientific evidence to suggest that commercially available biotech commodities and processed foods are any less safe than their conventional counterparts.
- Commercially produced bioengineered plant varieties in the United States have been reviewed under the U.S. regulatory process, which sets rigorous standards for human, animal, and plant health and for environmental safety. These varieties have received safety approval in a number of countries.
- Developing countries are concerned that genetically engineered genes may contaminate other farmers’ fields or wild plants in the centers of origin, but this occurrence would not necessarily be negative or damaging. Genes naturally flow (through cross-pollination) between traditionally developed varieties and modern hybrids.
- Some African countries are concerned that if farmers plant whole grain U.S. food aid, their trade with the EU may be affected. At this point in time, the only whole grain in food aid that might contain biotech varieties is maize. If whole maize is planted, it is possible biotech varieties co-mingled in food aid could cross-pollinate with local varieties. However, it is unlikely that the biotech grain will grow well, as it is made from hybrid seed and not well adapted to conditions in southern Africa.

U.N. Response

In May 2002, WFP’s Executive Director raised the issue of biotech food aid with the U.N. Secretary General and in June briefed him on why biotech food aid was an impediment for operations in southern Africa. By early July 2002, the U.N. Under Secretary-General for Humanitarian Affairs had sought guidance from FAO, WHO, and WFP regarding food aid with biotech components. The FAO Director-General responded with a letter incorrectly citing the Cartagena Protocol as recommending that all food aid that might contain biotech products be subject to an “advance informed agreement” and be milled before distribution to avoid the possibility of germination. However, the Cartagena Protocol expressly states that advance informed agreement does not apply when the shipment is for direct use as food or feed, or for processing, nor does it suggest that grain shipments containing living biotech components be milled. After the United States raised concerns about this misinformation, the Director-General issued a correction letter.
Nonetheless, the FAO representative in Malawi repeated the same erroneous recommendations to Malawi’s Ministry of Agriculture. The United States again cited the error, but in August, the FAO representative in Zimbabwe gave similar inaccurate advice. This time, when the United States alerted FAO, the problem was quickly corrected, according to FAO. On August 23, 2002, FAO, WHO, and WFP issued a joint U.N. statement on the use of biotech foods as food aid in southern Africa. Its key points included the following:

- Although there are no existing international agreements in force regarding trade in biotech food or food aid, WFP policy is that all donated food must meet the safety standards of both the donor and recipient countries and all applicable international standards, guidelines, and recommendations.
- FAO and WHO are confident that the principal country of origin (i.e., the United States) has applied its national food safety risk assessment procedures to its food aid and has fully certified that these foods are safe for human consumption.
- Based on national information from a variety of sources and current scientific knowledge, FAO, WHO, and WFP believe the consumption of biotech food now being provided as food aid in southern Africa is unlikely to present human health risk.
- Any potential risks to biological diversity and sustainable agriculture from inadvertent introduction of GMOs have to be judged and managed by countries on a case-by-case basis. In the case of maize, processing techniques such as milling or heat treatment may be considered to avoid inadvertent introduction of biotech seed. However, U.N. policy does not require that biotech grain used for food, feed, or processing be treated this way.
- Governments must carefully consider the severe and immediate consequences of limiting the food aid available for millions so desperately in need.
European Union Statement

Several of the southern African countries were concerned that if whole kernel biotech maize were planted and used as feed for their cattle, their ability to export grain and cattle to the EU would be hampered. On August 28, 8 days after Zambia announced it would not accept biotech foods, the delegation of the European Commission in Zambia issued a press release to clarify its position, which stated the following:

- The United States, the EU, and others have evaluated several biotech maize varieties, and some have been authorized for use, including planting. Given the serious food shortages in the region, governments may want to use these evaluations rather than wait for them to be repeated locally.
- The fact that a country grows biotech maize has no impact on its ability to export other agricultural products to the EU.
- The importation and use of biotech maize in a form other than grain should eliminate concerns about negative biodiversity effects and trade consequences.
- EU scientists have found no evidence that the biotech maize varieties they have assessed are harmful to human health.

Comments from the Department of State

The following are GAO’s comments on the State Department’s letter dated June 3, 2003.

GAO Comments

1. We recognize that the United States is taking steps to help improve long-term food security in the region that include, among others, the Agricultural Initiative to Cut Hunger in Africa, which was introduced early last year. However, as noted in figure 11, overall assistance to the region's agricultural sector has declined between 1998 and 2003; and as of April 2003, only one out of the six countries (Mozambique) was proposed to receive funding under the new initiative. We also recognize that U.S. bilateral assistance in several of the affected countries has funded a number of programs related to food security over the years, including ongoing programs on agricultural development, HIV/AIDS, and disaster management.
While all these programs do help to promote food security, U.N. and U.S. officials told us that to have broad impact, these programs need to be implemented on a much larger scale. Our recommendation that U.S. agencies work with international organizations, donors, and national governments to develop a comprehensive, targeted strategy for sustained recovery would, if implemented, help coordinate efforts, integrate approaches, and leverage limited resources as necessary to achieve greater effectiveness.

2. We modified the text on pages 3 and 23 to note that the United States anticipated the crisis at the outset. Although the United States acted early so that food would arrive in a timely way, USAID officials advised us that no food was prepositioned in any of the countries. We modified the text on page 18 to reflect the point on the C-SAFE program.

3. We modified the text on pages 10-11 to reflect this information.

4. According to USAID officials, available quantities of sorghum and bulgur were limited.

5. We noted this view on page 46.

6. We modified our discussion of Malawi’s policy on page 31.

7. We modified the text on page 31 to reflect this point.

8. We replaced the definition, cited from a World Health Organization publication, with an alternative. See pages 3 and 65. We modified the discussion of safety issues on pages 64-65.

9. We modified the text on these several issues on page 67.

10. We clarified this footnote, on page 67, to indicate what information would be required in the initial documentation and what information would be required in subsequent exchanges of information.

11. On page 68, we changed our reference to the precautionary principle to clarify that the statements represent the EU’s arguments under the principle.

Comments from the Department of Agriculture

The following are GAO’s comments on USDA’s letter dated June 3, 2003.

GAO Comments

1. We modified the text on page 30 to reflect this point.

2.
We modified the text on page 66 to reflect the first point. Our draft report noted that the biotech and conventional varieties are typically combined in the U.S. grain handling system. Regarding the use of the term biotech, see pages 45-46. Comments from the U.S. Agency for International Development The following are GAO’s comments on USAID’s letter dated June 9, 2003. GAO Comments 1. Currently in draft form, the recovery strategy/action plan USAID outlines in its comments represents a beginning. USAID notes that its draft strategy has already been useful as a planning tool in developing USAID’s country strategies for the region. As such, the recovery strategy/action plan will help target U.S. efforts. However, our recommendation goes beyond U.S. efforts. To ensure sustained recovery in an environment of constrained resources, we believe there is a need for a comprehensive strategy that pulls together the efforts of international organizations, donors, and national governments; integrates approaches; and leverages limited resources. Comments from the World Food Program The following are GAO’s comments on the WFP’s letter dated June 2, 2003. GAO Comments 1. We agree with WFP’s support for further collaboration with the U.S. government and other partners in defining and implementing a sustainable recovery strategy. 2. We further highlighted the problems of extreme poverty and regional economic decline on page 11. 3. We further highlighted Zimbabwe’s economic problems on page 11. 4. We reflected these points on page 25. GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to the persons named above, Joy Labez, Miriam Carroll, Kendall Schaefer, and Janey Cohen made key contributions to this report. Nathan Anderson, Nima Edwards, Etana Finkler, Chase Huntley, Bruce Kutnick, Jeremy Latimer, Barbara Shields, and Eve Weisberg provided technical support. 
The southern Africa food crisis threatened 15.3 million people in six countries (Lesotho, Malawi, Mozambique, Swaziland, Zambia, and Zimbabwe) with famine. GAO was asked to look at (1) factors that contributed to the crisis, (2) how well the populations' needs were met, (3) obstacles to the food aid effort, and (4) challenges to emerging from crisis. Multiple factors contributed to the food crisis. Erratic weather reduced maize (corn) production. 
A poorly functioning agricultural sector caused food supply shortages. Government actions--including the sale of Malawi's grain reserve and Zimbabwe's land reform--further cut available food. Widespread poverty contributed to food insecurity, and the HIV/AIDS epidemic exacerbated food shortages by reducing the labor force. Food aid averted famine, but the overall response did not prevent widespread hunger. About 93 percent of the total cereal gap--the difference between domestic needs and production--was met by the end of the April 2002-March 2003 crisis period. However, food aid deliveries fell short in several countries, and vulnerable households had limited ability to purchase commercial maize. Slow donations, poor infrastructure, and concerns about biotech food were major obstacles to an effective response. Excluding the United States, most donors did not make sufficient, timely donations to the World Food Program. Poor transportation systems and storage facilities hampered efficient food delivery. Zambia rejected food aid because of concerns regarding biotech food; other countries required that maize be milled for the same reason. These concerns compromised the food aid pipeline because the United States was the region's key donor and its aid may contain biotech food. Declining investments in agriculture and the HIV/AIDS epidemic pose challenges to emerging from crisis into sustained recovery. U.N. and U.S. officials cite the need to reverse declining trends in agricultural investments by international financing organizations, national governments, and donors. Without a strategy that integrates, among other things, agricultural development, the impact of HIV/AIDS, and natural disaster management, food crises will recur.
Introduction Background The increased understanding of our environment and the recognition that environmental problems do not stop at national boundaries have resulted in global concern about the future of our planet and an increasing number of international agreements to address those concerns. Since 1972, when over 130 nations took part in the United Nations Conference on the Human Environment, the number of multilateral international environmental agreements has grown from fewer than 50 to more than 170. Developing an international environmental agreement involves achieving a commitment among many nations with various levels of industrial development, technical capabilities, resources, and concern about an environmental problem. The parties to the agreement are then expected to implement it within their countries by establishing the necessary laws, regulations, and administrative systems. Adopting commitments and implementing laws, however, do not necessarily lead to the changes in behavior that help to solve the environmental problem the agreement is attempting to address. Resources must also be provided to enforce the laws enacted and to evaluate the progress made, making adjustments over time as necessary. The United Nations Framework Convention on Climate Change (Framework Convention) was signed by 154 nations, including the United States, in 1992. The Framework Convention’s objective was to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic (manmade) interference with the climate system. Under the Framework Convention, both developed and developing countries agreed, for example, to develop and submit reports on their greenhouse gas emissions. In addition to the general provisions agreed to by all countries, developed countries agreed to report on their policies and measures with the aim of returning their greenhouse gas emissions to 1990 levels by the year 2000. 
However, this goal was not binding on the developed countries. The Framework Convention entered into force in 1994, and the United States was one of the first nations to ratify it. However, by 1995, the parties to the convention realized that insufficient progress was being made toward its goals and thus decided to begin negotiations on a legally binding protocol. In December 1997, the parties reconvened in Kyoto, Japan, to finalize binding measures to reduce greenhouse gas emissions. The resulting Kyoto Protocol to the Framework Convention established binding emissions reductions for the period 2008 through 2012 for developed countries and laid the groundwork for additional measures aimed at decreasing greenhouse gas emissions. A number of important issues were not addressed at Kyoto: the role of developing nations, the specifics of an emissions-trading program (agreed to in principle), and procedures for determining noncompliance and the consequences of it. Negotiations are continuing on these issues, including provisions that might specify data-reporting requirements, monitoring mechanisms, and enforcement procedures. The Kyoto Protocol, initially adopted by 38 nations, was open for signature by all nations until mid-March 1999. As of that deadline, 84 nations, including the United States, had signed, thereby affirming their commitment to work to meet the protocol’s ambitious goals. U.S. ratification of the Kyoto Protocol, which requires the advice and consent of the Senate, is uncertain at this time. The official representatives of all the countries that have ratified the Framework Convention constitute its Conference of the Parties. This body held its first session in 1995 and will continue to meet annually unless the parties decide otherwise. The Conference of the Parties is served by a secretariat, which administers the agreement. 
Among other things, the secretariat arranges for conference meetings, drafts official documents, compiles and transmits reports submitted to it, assists the parties in compiling and communicating information, coordinates with the secretariats of other relevant international bodies, and reports on its activities to the Conference of the Parties. The secretariat is operationally independent of the United Nations, but it is linked to the United Nations and its head is appointed by the U.N. Secretary-General in consultation with the parties to the Framework Convention. Objective, Scope, and Methodology The objective of this study was to provide background information on provisions that help ensure compliance with international environmental agreements, namely data reporting, monitoring, and enforcement. For the purposes of this study, we will use the following definitions for those terms with respect to international environmental agreements: Reporting is providing measurable data on activities undertaken in response to international obligations. Monitoring is the review and analysis of the data and other information that allows assessment of the impact or extent of progress being made in meeting the agreement’s stated goal or objective. Enforcement is a strategy adopted by the parties to an agreement that establishes consequences for a party’s noncompliance with its obligations under the agreement. In examining how to improve nations’ compliance with their international environmental obligations, we are taking a “results-oriented” approach. That is, we will explore those aspects of reporting, monitoring, and enforcement that are designed to ensure that signatory nations’ actions result in achieving the Framework Convention’s objectives. We surveyed our past reports and other relevant literature on the subject and summarized the results of our analysis. (See the bibliography for a list of the works we included in this effort.) 
The information presented in this study draws on information provided by a number of authors. We tended to cite those authors who provided specific examples to illustrate the points made. We did not attempt to verify the accuracy of the information presented in the literature. Our expert panelists, including an official from the Department of State, reviewed a draft of this study, and we incorporated their comments where appropriate. Susan R. Fletcher, Senior International Environmental Policy Analyst, Congressional Research Service, also contributed to this study. We performed our work from July 1998 through May 1999. National Data Often Do Not Provide a Basis for Assessing Nations’ Compliance With Agreements Data on the activities that nations are undertaking to meet their international environmental obligations are the basis of determining whether each nation is in compliance with the agreement to which it is a party. Historically, such data have had problems, such as being incomplete or inaccurate. As a result, it has often been difficult to determine whether nations are meeting their obligations. More recently, efforts to improve reporting rates have resulted in more complete data on nations’ compliance activities. However, data quality remains questionable. Currently, the Kyoto Protocol’s requirements for data reporting consist of general requirements and supplemental guidelines. These guidelines provide the parties with considerable flexibility. Data Are Critical to Determining Compliance Data on the activities that nations undertake to respond to their international environmental obligations are the basis of evaluating whether nations have fulfilled those obligations. The data form the first step in the evaluation process by providing measurable information on the results of nations’ activities. 
Once the activities have been measured, the data can be verified for accuracy, compared with the performance criteria, and otherwise examined to conclude whether a nation has achieved the agreed-to results. To be useful, the data must meet a number of criteria, including completeness, accuracy, understandability, uniformity, and timeliness. Problems With Self-Reported Data Exist The data on nations’ activities typically result from a requirement in most international environmental agreements that each nation report on its own behavior. Our studies and those by others have shown that national data reports have many problems. For example, in 1996 we examined the progress of the United States and other signatory nations in meeting the goal of the Framework Convention to reduce greenhouse gas emissions to 1990 levels by 2000. The Framework Convention requires signatory nations to adopt policies and measures to limit greenhouse gases and to submit detailed plans showing how each will help emissions return to 1990 levels. We reported, however, that the nations’ self-reported emissions data were often incomplete, unreliable, and inconsistent. For example, as of February 1996, some data on 1990 emissions levels were available for only 29 of the 36 parties to the Framework Convention. The data were incomplete largely because the Framework Convention’s reporting requirements were not specific and were developed only after some nations had submitted their reports. Consequently, the nations’ progress in meeting the convention’s goals could not be fully assessed. According to experts, ambiguity in the language of international environmental agreements frequently contributes to these types of data problems. 
In its October 1998 report on monitoring and reporting under the Kyoto Protocol, the Organisation for Economic Co-operation and Development reported further that, after two full rounds of national reporting under the Framework Convention, a number of important gaps in reporting were apparent: data were missing, parties submitted their reports late, and information about how the data were prepared was lacking. The Quantity of Reported Data Has Improved, but the Quality of Data Is Questionable Problems with self-reported data have long been recognized, and in response, some international environmental agreements have begun including provisions to improve reporting rates. One way to improve data reporting is through financial assistance to developing nations that lack the administrative capacity to fulfill their reporting requirements. For example, the Montreal Protocol on Substances That Deplete the Ozone Layer (Montreal Protocol) has a multilateral fund designed to boost developing nations’ activities to comply with the protocol’s provisions. The fund pays for projects in developing nations that gather baseline data and build the administrative capacity to report the data. According to experts, this financial assistance to developing nations has resulted in better self-reporting of certain data under the Montreal Protocol. These improvements notwithstanding, the poor quality of self-reported data continues to be a problem under international environmental agreements. Experts familiar with numerous studies of the issue have noted that the data continue to be difficult to compare and their accuracy is often low or unknown. Data Reporting Requirements Under the Kyoto Protocol Are Minimal The Kyoto Protocol incorporated the general reporting requirements and supplemental guidelines of the Framework Convention. 
Parties are required to submit to the secretariat a national inventory of anthropogenic emissions of greenhouse gases, a general description of steps taken or to be taken to implement the protocol, and any other information that the party considers relevant. In addition to these requirements, the parties adopted guidelines that recommend methodologies for the parties to use in gathering their inventory data, the level of detail to include in the reports, and presentation formats to follow. The guidelines were developed to help ensure that the national reports are consistent and comparable; however, they provide considerable flexibility and do not require parties to follow a specific procedure. In addition, the protocol currently does not specify any penalties for not meeting the general requirements or following the guidelines. According to the work plan adopted by the parties in November 1998, such penalties are to be developed by the end of 2000. Monitoring of International Environmental Agreements Has Been Limited Monitoring is necessary to determine whether a nation individually, and all nations collectively, are complying with their international environmental obligations. Until recently, international environmental agreements had few established formal mechanisms for monitoring. Periodic reporting by the parties was the primary monitoring mechanism included in such agreements; however, effective use of the reports for carrying out the monitoring function has been limited. Experts have suggested several characteristics that should be included in a comprehensive monitoring system. The Kyoto Protocol has specified some basic provisions for monitoring. The United States proposed additional provisions that might better ensure the effectiveness of the agreement to limit greenhouse gases, but these provisions have not yet been included in the protocol. 
Monitoring Is Needed to Ensure Compliance The monitoring done under international environmental agreements includes the review and analysis of reported data and other information that allow assessment of the impact or extent of progress being made in meeting a stated goal or objective, such as implementing an agreement’s provisions. Monitoring can also include independent verification that involves determining whether the reported data or other information accurately reflects the existing situation or condition. Verification can be done through performing on-site inspections, obtaining information from another source, or doing an independent analysis and reaching the same conclusion as the original assessment. Monitoring can determine both procedural compliance and effectiveness—that is, whether intended outcomes are being achieved. Historically, most monitoring activities have focused on whether nations have implemented processes to transform their international obligations into acceptable rules within their domestic legal systems. However, the implementation of domestic policy or laws that conform to an agreement, commonly referred to as compliance, does not ensure that the agreement’s goals or objectives will be achieved. Meeting the goals of international environmental agreements generally requires influencing the behavior not only of governments but also of a large number of firms, individuals, agencies, and other entities that do not necessarily change their behavior simply because governments have signed an agreement. Thus, influencing the behavior of these entities often entails a complex process of forming and adjusting domestic policy to conform to the standards contained in an agreement. According to experts, international law is filled with examples of agreements that have had high formal levels of compliance but have had only limited influence on the behavior of the regulated entities. 
For example, from the inception of the International Convention for the Regulation of Whaling in 1946 until the early 1960s, the level of compliance with its catch quotas was nearly perfect. This was because those quotas were set very high and did not require the parties to decrease their catches. Determining whether the goals and objectives are being met requires going beyond implementation to evaluate effectiveness. Thus, effectiveness is the extent to which international agreements lead to changes in behavior that help to solve environmental problems. Recently, more attention has been given to whether performance targets—such as emission targets like those specified in the Kyoto Protocol—have been met. International environmental agreements such as the Kyoto Protocol can involve a substantial economic investment by countries that are serious about implementation. In addition, because of the large number of entities within each country that may have to change their behavior if the objectives of the agreement are to be achieved, extensive monitoring over large geographic areas may be required, making the monitoring function itself costly. Particularly where the costs of implementation are high, parties to international agreements may be reluctant to implement the measures needed to ensure that commitments are met unless they are confident that others will do the same. In these cases, having mechanisms included in the agreements to monitor when and how parties are implementing these measures can help to build confidence that agreements are, in fact, being put into practice. Most International Environmental Agreements Include Only Limited Monitoring Provisions Until recently, international environmental agreements have contained few substantive mechanisms for monitoring and evaluation. Although several agreements have provided for periodic reporting by the parties, these reports have rarely been used to carry out an effective monitoring program. 
Recent studies provide some possible reasons for the limited nature of monitoring. One possible reason is the concept of state sovereignty, which has resulted in nations not being willing to accept external scrutiny. One author pointed out, for example, that nations find it difficult to relinquish some of their sovereign authority to an international organization. For this reason, nations have been allowed to monitor or report on their own compliance and thus avoid any potential sovereignty questions that could result from external monitoring. However, at least partly because of the problems of low reporting rates and poor data quality discussed in the previous chapter, the effectiveness of such self-monitoring provisions is questionable. Another possible reason is that international environmental agreements generally do not provide specific authority or adequate resources to carry out an effective monitoring function. As we pointed out in our 1992 report on the monitoring of international environmental agreements, generally the role of the treaty secretariats established by the parties is to help implement agreements by collecting and distributing information and providing some technical assistance. We further stated, however, that although most of the secretariats had distributed lists of nonreporting parties at various times to generate peer pressure to stimulate compliance with reporting provisions, they had not been given the authority to monitor the agreements through verifying the information parties reported or independently assessing compliance. In addition, of the eight major international environmental agreements reviewed in that report, only one—the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), ratified by 112 countries—granted its secretariat specific authority and established a formal mechanism for assessing compliance. 
Under this agreement, the secretariat analyzes the data it receives and publishes reports detailing violations. In the case of particularly egregious violations, the secretariat may also recommend that parties cease trading with the particular party found to be in noncompliance. With respect to funding, we also stated in our 1992 report that secretariats generally had limited and unstable funding. We showed that the secretariats of the eight agreements we reviewed were small organizations, with staffs of 4 to 20 people and annual budgets of less than $1 million to $3 million in 1990. We pointed out that each secretariat was funded by voluntary contributions from parties and/or by resources apportioned by a related parent organization, which in many cases also operates largely on financial contributions from member nations. For example, CITES had a staff of 18 and funding of about $2.5 million in 1990. The secretariat’s officials told us that parties had never approved a budget with sufficient funds to cover all of the activities needed to implement the agreement. In addition to its administrative duties and assessing the compliance of its 112 member nations, this secretariat also conducted studies to help determine whether certain species should be protected under CITES and provided certain technical assistance. Among other possible reasons for the lack of monitoring provisions is that the nature of the agreement or the environmental problem being addressed does not require detailed monitoring provisions. In some cases, agreements may not require parties to change their behavior, and thus monitoring for compliance is not required. The high catch quotas established by the International Convention for the Regulation of Whaling between 1946 and the early 1960s, cited earlier in this chapter, are an example of such an agreement. 
Recent studies, however, indicate that more frequently now than in the past, international environmental agreements are requiring regular reporting by the parties and reviews of these reports and, in some cases, include mechanisms for verification. Experts Have Identified the Characteristics Needed for Effective Monitoring Experts have identified three factors that should be included in an agreement to establish effective monitoring and verification. First, they have suggested that authority and responsibility for carrying out the monitoring function need to be specified and that adequate resources need to be provided. According to one expert, successive rounds of subjecting data to a monitoring process generally provide an incentive for parties to improve the quality of reported data. Next, some experts have suggested that specific criteria and standard monitoring techniques need to be established to ensure their perceived legitimacy. Many experts believe that ambiguity or vagueness of treaty language, obligations, and requirements makes implementing international commitments and judging compliance difficult. Studies have shown that how an international agreement is constructed—the exact commitment, its scope, clarity, and application—can be critical to its success. Furthermore, as stated by one expert, implementation experiences of nations often vary because of differences in their interpretation of the commitments. As a result, some experts have suggested that agreements should include a review process with specific review and evaluation procedures. One study also suggests that guidance on the nature of the reviews should be clear and the review function should be overseen by the secretariat to ensure its neutrality and consistency. Another study, prepared for EPA’s Office of Policy, stated that monitoring procedures need to be carefully considered. The study stated that the procedures developed must be credible but not overly bureaucratic. 
In addition, according to the study, strict rules lead to complex procedures, increasing the cost of compliance and reducing an agreement’s cost-effectiveness; conversely, if the rules are too loose, then the parties can manipulate the results. Finally, experts have suggested that the monitoring function should be transparent and provide for participation and comments by interested parties. Making both the information and the methodologies that were used to compile that information widely available and permitting participation in the policy process are basic tenets of modern governance. The right to have access to such information on the environment is a recent development in international law. Public dissemination of information about parties’ progress can play a key role in the implementation of environmental agreements. Specifically, the information serves to assure each party that others are sharing the burden of implementation as agreed, which is particularly important in light of the high costs and the effects on international competitiveness that may result from implementing an agreement. Our 1992 report suggested, for example, that when the costs of implementing an agreement are high, nations might be more willing to open up their actions for review to ensure that implementation is equitable and that all parties are honoring their commitments. In addition, the sharing of information allows not only a comparison of the experiences of the nations reviewed but also an assessment of what is working and what opportunities exist to adjust goals or procedures, as needed. According to some experts, participants should include not only environmental and public interest nongovernmental organizations but also the target groups that must change their behavior if an agreement’s goals are going to be met. 
Worker and employer representation is a feature, for example, of the International Labor Organization, a specialized agency of the United Nations that coordinates the development and implementation of more than 160 international labor conventions intended to safeguard workers’ rights and ensure safe workplaces. This organization requires its member nations to regularly submit reports to the worker and employer representatives for comments, which are subsequently reviewed by an independent body appointed by the organization. According to one expert, target group participation would also provide better information on the range of possible policy options, technical feasibility, and costs and benefits. The Kyoto Protocol Contains Some Monitoring Provisions The Kyoto Protocol contains various monitoring provisions. Among other things, expert teams are to review the information in national reports submitted by the parties. Other provisions—suggested by the United States to strengthen monitoring—were not incorporated into the protocol. The monitoring provisions in the protocol state that information contained in the parties’ national reports is to be reviewed by teams of experts nominated by the parties and by intergovernmental organizations. The teams are to conduct the reviews by following guidelines and relevant decisions provided by the Conference of the Parties. According to the protocol, the review process will provide a thorough and comprehensive technical assessment of all aspects of the protocol’s implementation by a party. The protocol further states that the review teams will prepare a report to the Conference of the Parties that assesses the implementation of the commitments and identifies any potential problems in, and factors influencing, the fulfillment of commitments. 
Under the protocol, the secretariat will coordinate the review teams, circulate the teams’ reports to all parties to the Framework Convention, and identify questions about parties’ implementation indicated by the reports for further consideration by the Conference of the Parties. In addition to establishing the guidelines for the reviews, the Conference of the Parties will consider the reports of the expert review teams along with the information submitted by the parties and those implementation questions identified by the secretariat. Prior to the development of the Kyoto Protocol, the United States proposed provisions for the Framework Convention’s monitoring process. Although the protocol included many of these features, among them the use of independent review teams and the establishment of guidelines for the review process by the Conference of the Parties, it omitted several others. First, the U.S. proposal explicitly provided that the review teams would assess both the progress of implementation and the effectiveness of meeting the protocol’s goals. The proposal provided for assessing the effectiveness of the compliance and enforcement programs established as well as the individual measures reported. Second, the U.S. proposal included specific mechanisms that would allow observers and the public to provide comments and supplemental data to facilitate and improve the reviews. Adopting these specific suggestions would increase the transparency of the process and help to provide assurance that the actions being taken will achieve the Kyoto Protocol’s objectives. Although these specific suggestions were not incorporated into the monitoring provisions of the protocol, it is possible that when the guidelines for the review process are established, they will include additional portions of the U.S. proposal. 
International Environmental Agreements Are Rarely Enforced

Enforcement is the final element needed to help ensure that nations comply with their international environmental obligations. Few agreements contain formal provisions for enforcement, however, and the enforcement provisions that do exist are used infrequently or inconsistently. Limited funding and a lack of international jurisdiction are two of the reasons that the enforcement of international environmental agreements has not been effective. International organizations, such as the United Nations Environment Programme (UNEP), often lack the jurisdiction to enforce their decisions. In recent years, ways to build credible enforcement mechanisms into international environmental agreements have been suggested, but no consensus exists on how best to do that. To date, enforcement provisions have not been specified for the Kyoto Protocol, even though the emissions targets are supposed to be binding on the parties. In designing those provisions to be effective, a number of issues, such as the funding of an enforcement authority, international jurisdiction problems, and penalties, should be taken into consideration. At the fourth Conference of the Parties in Buenos Aires in November 1998, the parties agreed to a work plan to complete the enforcement provisions by year-end 2000.

Few International Environmental Agreements Have Enforcement Provisions, and Existing Provisions Are Not Used Effectively

Few international environmental agreements contain enforcement provisions; it is generally thought that if stringent provisions were included, fewer nations would participate and treaty obligations would be weaker. Instead, the enforcement of compliance with treaty obligations generally depends on peer or public pressure on nations. Even when agreements do include enforcement provisions, resource constraints and other factors may limit their effectiveness.
For example, according to one expert, the Commission for the Conservation of Antarctic Marine Living Resources was established to function as the primary conservation organization for the Southern Ocean surrounding Antarctica. However, the secretariat for the commission is limited in its enforcement capacity in two key respects. First, the agreement has no specific enforcement procedures—the only enforcement mechanism at the secretariat’s disposal is its ability to publicize nations’ noncompliance. Second, according to the agreement, the secretariat’s decisions must have the support of a consensus of the members, thus effectively giving any member the right to veto any proposed enforcement measures against it. Although some international environmental agreements contain enforcement provisions, these provisions are used infrequently or ineffectively. For example, the Northwest Atlantic Fisheries Convention, which applies to all waters of the northwest Atlantic Ocean, has the authority to establish and allocate fishing quotas for all convention members. The convention’s Fisheries Commission, which is the body responsible for managing the convention’s resources, can adopt proposals for the enforcement of the convention’s rules. However, the commission has jurisdiction only in the area that is beyond the coastal nations’ 200-mile economic zone; thus, the commission has no jurisdiction over some of the most productive fishing areas. In addition, the convention allows any member of the agreement to exempt itself from any enforcement proposal by the commission by lodging an objection. The convention also allows members to choose not to be bound by the commission’s rules already in force. Finally, although the convention allows members to board and inspect the vessels of other member nations, only the nation under whose flag a vessel is operating can prosecute and sanction a vessel’s owner for violations. Nations are often reluctant to penalize their own vessels.
As one study of the convention’s 1993 records showed, of 49 vessels charged with offenses, only 6 were prosecuted. Finally, as several experts have pointed out, the ambiguity of the language and definitions in international environmental agreements makes enforcement of their provisions problematic because it is difficult to determine whether a nation has met its obligations. Consequently, secretariats spend their time and resources dealing with contested actions by member nations rather than enforcing compliance and bringing pressure on acknowledged violators.

Secretariats Are Insufficiently Funded and Lack International Jurisdiction to Enforce Agreements

According to one expert, the secretariats for international environmental agreements are in the logical position to enforce compliance with treaty obligations. However, most secretariats do not have enforcement authority. Those that do have the authority may be limited in their enforcement ability for two reasons. First, because their funding is limited or unstable, as discussed earlier, they often lack the institutional capacity to fulfill all of their responsibilities. Second, the secretariats lack the international jurisdiction that is needed to carry out enforcement. Therefore, secretariats have no means of forcing member nations to abide by the rules established by the agreements. As a result, secretariats rarely act as enforcers. Secretariat officials stress that they have neither the resources nor the authority to perform enforcement and that they, instead, view themselves as information clearinghouses and facilitators.

International Organizations Lack Jurisdiction to Enforce Agreements

No centralized regulatory body has jurisdiction or enforcement authority for international environmental agreements. As a result, the effectiveness of international agreements depends almost entirely on voluntary compliance.
According to experts, the United Nations General Assembly, in 1972, established UNEP with a governing council and secretariat to promote international cooperation on environmental protection and to coordinate environmental action within the United Nations. However, UNEP is relatively small, limited by personnel and financial constraints. It does not have the ability to create binding international law and must rely on member nations to implement and comply with its enforcement policies. In the assessment of many observers, UNEP has generally not been an effective oversight and enforcement institution because of its limited formal powers. In addition, UNEP’s funding has been criticized as inadequate because its primary source is voluntary contributions to its Environment Fund. In 1993, the UNEP Governing Council acknowledged these limitations when it shifted its focus from environmental monitoring to helping developing countries use environmentally sound technologies. Without an organization to enforce international environmental agreements, compliance depends on the willingness of nations to abide by the provisions and enforce compliance among their citizens. When complying with a particular provision or commitment becomes contrary to a nation’s interests—for either sociopolitical or economic reasons—it is less likely that the nation will enforce compliance. In addition, many countries, particularly developing countries, lack the financial and technological capacity to meaningfully enforce environmental regulations.

International Officials and Legal Scholars Suggest the Need for Credible Enforcement Mechanisms

In recent years, ideas about how to enhance enforcement of international environmental agreements have emerged. Two multilateral documents and one country report have set forth enforcement proposals for protecting the international environment. Academic theories provide additional recommendations.
However, there is little agreement on how to improve the enforcement of agreements, and, currently, the Kyoto Protocol does not include provisions for enforcement. The following are some recent proposals for enhancing enforcement. The World Commission. In 1987, a group of legal experts of the World Commission on Environment and Development proposed the creation of a centralized organizational structure, a Commission for the Environment, to oversee international environmental agreements and to hear nations’ complaints about violations. A United Nations High Commissioner would head the commission, hear complaints about violations, and issue reports on the violations. (This plan for a high commissioner and a commission empowered to hear complaints and issue reports mirrors the strategy used by the United Nations human rights and refugee organizations.) Although this proposal contained a draft convention, as well as General Principles on Environmental Protection and Sustainable Development, the international community has not adopted it. Most likely this is because the document had no binding force and was not issued by an official United Nations organization. The Hague Declaration. The Declaration of the Hague on the environment, issued in 1989 by an international conference of government policy makers, scientists, and environmentalists focused on climate change, called for a “new institutional authority” to combat global warming. The authority would be created within the United Nations system and would have decision-making and enforcement powers. The declaration was not specific on the form that the authority should take, nor did it propose any type of design. Citing the Hague declaration as a step in the right direction, one legal expert has suggested that the 1974 Convention on the Protection of the Environment, which is a general treaty that addresses the environment as a whole, could be used as a model.
This 1974 convention created a right of action against a nation for anyone who is affected by environmentally harmful activities in that nation and requires each party to the agreement to establish a special authority to safeguard general environmental interests. The expert believes that the convention could be used as a model for future conventions that address environmental issues because it would allow nations to protest any activity that has been proven harmful to the common environment. This would eliminate diplomatic, political, and economic pressures against the protesting nations. The Soviet Initiative. A third recommendation, made by the former Soviet Union, would create a cadre of what one author called “green troops”—modeled after the peacekeeping and peacemaking efforts of the United Nations. The proposal would also create and staff centers responsible for collecting and analyzing environmental data, deploying the troops to the scenes of environmental disasters, conducting inspections, verifying treaty compliance through on-site inspections, and assessing damage. Academic Proposals. Academic thinking on how to best incorporate enforcement mechanisms into treaties falls into three schools of thought. One group stresses that there is a need for a central authority to coordinate efforts and maintain a steady flow of information on the global environment. The central authority would also set and enforce rules. A second group stresses a process of interaction and cooperation among the parties involved. They believe that most treaty violations are not premeditated or deliberate but are instead caused by the ambiguity and indeterminacy of the treaty language, the domestic limitations of the parties’ abilities to carry out their responsibilities, and the time constraints imposed by treaties on the participants. 
Therefore, the best way to ensure compliance is not the threat of punishment but a process of interaction and cooperation among the parties involved, including improved dispute resolution, technical and financial assistance, and oversight and public participation. A third group notes that inducing nations to participate in collective deliberation and exposing them to new information could produce a shift in their domestic environmental policies. This group expects that the nations will change their environmental activities as they are exposed to the potential benefits of international environmental cooperation. They believe the nature of the commitments should be as unthreatening as possible and consist of few, if any, specific performance targets or timetables, emphasizing dispute resolution and negotiated compliance management techniques to the exclusion of more coercive enforcement mechanisms.

Bibliography

Alvarez, Jose. “Foreword: Why Nations Behave; A Symposium on Implementation, Compliance, and Effectiveness.” Michigan Journal of International Law, Vol. 19 (Winter 1998), pp. 303-17.
Anderson, J. W., Richard Morgenstern, and Michael Toman. “At Buenos Aires and Beyond.” Resources (Winter 1999), pp. 6-10.
Ardia, David S. “Does the Emperor Have No Clothes? Enforcement of International Laws Protecting the Marine Environment; A Symposium on Implementation, Compliance and Effectiveness.” Michigan Journal of International Law, Vol. 19 (Winter 1998), pp. 497-567.
Bakke, Lila F. “Wrap-Up Plenary: What’s Next on Implementation, Compliance, and Effectiveness?” American Society of International Law Proceedings, Vol. 91 (Apr. 9-12, 1997), pp. 504-18.
Barrett, Scott. “Economic Incentives and Enforcement Are Crucial to a Climate Treaty.” Perspectives on Policy: Do International Environmental Agreements Really Work? Washington, D.C.: Resources for the Future, Dec. 1997.
Bell, Ruth Greenspan. “Developing a Culture of Compliance in the International Environmental Regime.” Environmental Law Reporter, Vol. 27 (Aug. 1997), pp. 10402-12.
—. “Signing a Climate Treaty Is the Easy Part; Implementing and Enforcing Agreed-Upon Actions Pose Many Challenges.” Perspectives on Policy: Do International Environmental Agreements Really Work? Washington, D.C.: Resources for the Future, Dec. 1997.
Benedick, Richard. “The U.N. Approach to Climate Change: Where Has It Gone Wrong?” Perspectives on Policy: Do International Environmental Agreements Really Work? Washington, D.C.: Resources for the Future, Dec. 1997.
Danish, Kyle. “The New Sovereignty: Compliance With International Regulatory Agreements.” Virginia Journal of International Law, Vol. 37 (1997), pp. 789-810.
Downs, George W. “Enforcement and the Evolution of Cooperation; A Symposium on Implementation, Compliance, and Effectiveness.” Michigan Journal of International Law, Vol. 19 (Winter 1998), pp. 319-44.
Gavouneli, Maria. “Compliance With International Environmental Treaties: The Empirical Evidence.” American Society of International Law Proceedings, Vol. 91 (1997), pp. 234-58.
Goldberg, Donald M., Glenn Wiser, Stephen J. Porter, and Nuno LaCastra. “Building a Compliance Regime Under the Kyoto Protocol.” Washington, D.C.: Center for International Environmental Law, Dec. 1998.
Hahn, Robert W., and Robert N. Stavins. “Thoughts on Designing an International Greenhouse Gas Trading System.” Conference presentation at Climate Change Policy: The Road to Buenos Aires, American Enterprise Institute for Public Policy Research, Sept. 14, 1998.
International Institute for Applied Systems Analysis. The Implementation and Effectiveness of International Environmental Commitments: Theory and Practice. Eds., David G. Victor, Kal Raustiala, and Eugene B. Skolnikoff. Cambridge, Mass.: MIT Press, 1998.
Jacobson, Harold K. “Afterword: Conceptual, Methodological, and Substantive Issues Entwined in Studying Compliance; A Symposium on Implementation, Compliance and Effectiveness.” Michigan Journal of International Law, Vol. 19 (Winter 1998), pp. 569-79.
Jones, Timothy T. “Implementation of the Montreal Protocol: Barriers, Constraints and Opportunities.” Environmental Lawyer, Vol. 3 (1997), pp. 813-58.
Kerr, Suzi. “Enforcing Compliance: The Allocation of Liability in International GHG Emissions Trading and the Clean Development Mechanism.” Climate Issue Brief, No. 15. Washington, D.C.: Resources for the Future, Oct. 1998.
La Rovere, Emilio Lebro. “The Key Issue Left Unresolved in Kyoto: Penalties for Non-Compliance.” Perspectives on Policy: How Workable Is the Kyoto Protocol for Developing Countries? Washington, D.C.: Resources for the Future, July 1998.
Moomaw, William R. “International Environmental Policy and the Softening of Sovereignty.” Fletcher Forum of World Affairs, Vol. 21 (Fall 1997), pp. 7-15.
Morlot, Jan Corfee. Monitoring, Reporting and Review of National Performance Under the Kyoto Protocol. OECD Information Paper. Paris, France: Organisation for Economic Co-Operation and Development, 1998.
Narain, Sunita. “Rising Above the World of Post-Kyoto Politics.” Perspectives on Policy: How Workable Is the Kyoto Protocol for Developing Countries? Washington, D.C.: Resources for the Future, July 1998.
O’Connell, Mary Ellen. “Enforcement and the Success of International Environmental Law.” Global Legal Studies Journal, Vol. 3 (1995), pp. 47-64.
Romano, Cesare P. R. “A Proposal to Enhance the Effectiveness of International Environmental Law: The International Environmental Ombudsman.” Earth Summit Watch Programs, New York University Law School, Clinic on International Environmental Law, Spring 1997.
Samaan, Andrew Watson. “Enforcement of International Environmental Treaties: An Analysis.” Fordham University Environmental Law Journal, Vol. 5 (Fall 1993), pp. 261-83.
Scott, Gary L., Geoffrey M. Reynolds, and Anthony D. Lott. “Success and Failure Components of Global Environmental Cooperation: The Making of International Environmental Law.” International Law Students Association Journal of International and Comparative Law, Vol. 2 (Fall 1995), pp. 23-59.
Shogren, Jason. “Benefits and Costs of Kyoto.” Conference presentation at Climate Change Policy: The Road to Buenos Aires, American Enterprise Institute for Public Policy Research, Sept. 14, 1998.
United Nations. Declaration of the Hague. Concluded at the Hague, Mar. 11, 1989. U.N. Doc. A/44/340-E/1989/120 Annex (1989) 28 I.L.M. 1308 (1989).
—. Framework Convention on Climate Change. Concluded at Rio de Janeiro, May 29, 1992. Reprinted in 31 I.L.M. 849 (1992) (also at http://www.unfccc.de/).
—. Kyoto Protocol to the United Nations Framework Convention on Climate Change. Conference of the Parties, Third Session, Concluded at Kyoto, Dec. 10, 1997. FCCC/CP/1997/L.7/Add.1 (also at http://www.unfccc.de/).
U.S. Department of State. The Buenos Aires Climate Change Conference (fact sheet). Bureau of Oceans and International Environmental and Scientific Affairs. Washington, D.C.: Dec. 1998.
—. Fact Sheet on Supplemental United States’ Climate Change Proposals. Bureau of Oceans and International Environmental and Scientific Affairs. Washington, D.C.: June 1997.
—. The Kyoto Protocol on Climate Change (fact sheet). Bureau of Oceans and International Environmental and Scientific Affairs. Washington, D.C.: Jan. 15, 1998.
—. 1997 Submission of the United States of America Under the United Nations Framework Convention on Climate Change. Bureau of Oceans and International Environmental and Scientific Affairs. Washington, D.C.: July 1997.
U.S. General Accounting Office. Climate Change: Information on the U.S. Initiative on Joint Implementation (GAO/RCED-98-154, June 29, 1998).
—. Global Warming: Difficulties Assessing Countries’ Progress Stabilizing Emissions of Greenhouse Gases (GAO/RCED-96-188, Sept. 4, 1996).
—. International Environment: International Agreements Are Not Well Monitored (GAO/RCED-92-43, Jan. 27, 1992).
—. International Environment: Operations of the Montreal Protocol Multilateral Fund (GAO/T-RCED-97-218, July 30, 1997).
—. International Environment: Strengthening the Implementation of Environmental Agreements (GAO/RCED-92-188, Aug. 24, 1992).
Vine, Edward, and Jayant Sathaye. The Monitoring, Evaluation, Reporting, and Verification of Climate Change Mitigation Projects: Discussion of Issues and Methodologies and Review of Existing Protocols and Guidelines. Berkeley, Calif.: Lawrence Berkeley National Laboratory, Dec. 1997.
Visek, Richard C. “Implementation and Enforcement of EC Environmental Law.” Georgetown International Environmental Law Review, Vol. 7 (Fall 1995), pp. 377-419.
Wiener, Jonathan Baert. “Designing Markets for International Greenhouse Gas Control.” Climate Issue Brief, No. 6. Washington, D.C.: Resources for the Future, Oct. 1997.
Youel, Kathryn S. “Theme Plenary Session: Implementation, Compliance and Effectiveness.” American Society of International Law Proceedings, Vol. 91 (Apr. 9-12, 1997), pp. 50-73.
Zhong, Ma, Yoshitaka Nitta, and Michael Toman. “International Cooperation for Reducing Greenhouse Gas Emissions: From Theory to Practice Through Technology Transfer.” Summary of a China/Japan/U.S. Joint Project. Beijing Environment and Development Institute, Central Research Institute for the Electric Power Industry, and Resources for the Future, Dec. 1997.

GAO provided information on the three components needed to ensure compliance with international environmental agreements. GAO noted that: (1) data on the results of nations' activities undertaken to meet their international environmental obligations are the basis of determining whether each nation is in compliance with the agreements to which it is a party; (2) historically, such data have had problems, such as being incomplete or inaccurate; (3) it has often been difficult to determine whether nations are meeting their obligations; (4) as a result of efforts to improve reporting, more complete data are being reported; (5) however, according to experts, data quality generally remains questionable; (6) the Kyoto Protocol to the United Nations Framework Convention on Climate Change contains the same general requirements and supplementary guidelines for data reporting as the Framework Convention itself; (7) the general requirements for the Framework Convention include the requirement to submit annually a national inventory of anthropogenic (manmade) emissions; (8) the details of methodology and the formats to be used to present the data for those inventories--factors that would facilitate analysis, understanding, and comparability of the data reported--are contained in guidelines that provide considerable flexibility and do not require parties to follow a specific procedure; (9) monitoring is the second element necessary to determine whether a nation individually, and all nations collectively, are complying with their international commitments; (10) monitoring includes the review and analysis of data and other information that allow an assessment of the impact or the extent of progress being made in meeting an agreement's stated goal or objective; (11) monitoring can be done to determine both procedural compliance and effectiveness; (12) monitoring activities focused on whether nations implemented processes to transform their international commitments into acceptable rules within their domestic legal systems; (13) however, because enacting domestic laws or implementing policies does not ensure that international commitments will be met, more emphasis is now being placed on mechanisms that monitor effectiveness--that is, whether intended outcomes are being achieved; (14) enforcement is the final element needed to ensure that nations comply with their international environmental obligations; (15) few agreements contain formal provisions for enforcement, however, and the enforcement provisions that do exist are used infrequently or inconsistently; and (16) secretariats and other international organizations are often ineffective at enforcement because they are inadequately funded and are limited in their international jurisdiction.
Background

To carry out its missions, DOE relies on contractors for the management and operation of its facilities. These efforts are normally carried out using cost-reimbursement contracts, which provide for the payment of all costs incurred by the contractor to the extent that these costs are allowable under the specific contract provisions. In addition, DOE’s regulations provide for a fee, or profit, on these contracts. The amount of fee available to the contractor is based on the contract amount and the type of work to be performed and may be allocated among base, award, and/or incentive fees. (App. I provides information on the allocation of fees and total contract amounts for the four sites included in our review.) DOE began using performance-based incentives in fiscal year 1994 in response to one of the recommendations in its February 1994 report on contract reform. In order to implement this recommendation quickly, DOE directed the sites to develop performance-based incentives before it developed key policies and procedures. By fiscal year 1996, most sites had incorporated these incentives into their contracts. However, it was not until May 1997 that DOE provided revised draft guidelines for performance-based incentives. A key to implementing performance-based contracting is having a clear picture of what needs to be accomplished. DOE has laid out its program goals in several documents. DOE defined its mission, its program goals, and its plans for achieving those goals in its first strategic plan, issued in 1994; in September 1997, in response to the Government Performance and Results Act of 1993, DOE issued a revised strategic plan. DOE’s Environmental Management Program also began developing its own strategic plan, entitled Accelerating Cleanup: Paths to Closure (formerly called the 10-year plan), by incorporating each site’s projections for the scope, cost, and schedule of cleanup.
During 1997 and 1998, the OIG reported on problems with the performance-based incentives for fiscal years 1995 and 1996 at four sites, including Hanford, Rocky Flats, and Savannah River. For example, the OIG reported in March 1997 that at Hanford some performance-based incentive fees were paid for work that had been completed before the incentives had been agreed upon and that in one instance the contractor compromised safety in order to earn a fee. In addition, DOE reported during 1997 on its assessment of the implementation of performance-based incentives. While this report identified benefits from using performance-based incentive contracts, it also raised a number of concerns. For example, the report indicated that formal guidance for developing and administering performance-based incentives was limited and did not establish criteria for measuring performance or allocating fees to incentives.

DOE Has Taken Corrective Action by Incorporating Lessons Learned

Recent changes to DOE’s performance-based contracting have generally incorporated the lessons learned from the OIG’s reviews and the Department’s own assessments of performance-based incentives. DOE has issued departmentwide guidance, developed performance-based incentive training, and shared information through workshops and reports on lessons learned. In addition, DOE’s field offices have formalized procedures for the development and administration of incentives and used those procedures, as well as experiences from prior years, to develop the fiscal year 1998 incentives. Although these are steps in the right direction, DOE acknowledges that it still has room for improvement.

DOE Has Incorporated Lessons Learned Into Fiscal Year 1998 Incentives

In developing its fiscal year 1998 incentives, DOE incorporated the lessons learned from the OIG’s reviews, its own assessment, and prior years’ experiences with performance-based incentives.
These lessons included developing fewer and more specific performance-based incentives that were results-oriented, defining key terms in the performance criteria, and ensuring that all key personnel participate in the development of the incentives. DOE’s October 1997 report assessing its implementation of its performance-based incentives noted that using too many incentives made it difficult to focus the contractors’ efforts on key results and created an administrative burden. In response, DOE reduced the overall number of incentives in fiscal year 1998; three of the four sites we visited had fewer incentives. The largest reduction was at Hanford, which went from over 200 incentives in fiscal year 1997 to about 100 for fiscal year 1998. Savannah River and Rocky Flats also reduced the number of incentives in an effort to focus on a few critical measures. The number of incentives for the management and operating contract at the Idaho Falls site has remained at 11 since fiscal year 1995. Furthermore, the incentives that are in place are more results-oriented and better define key terms. For example, at Savannah River, one of the fiscal year 1996 incentives was to “optimize the production” of canisters containing immobilized high-level waste by offering an incentive for each filled canister, starting with the fifty-sixth canister, but the incentive did not include criteria for what constituted an acceptably filled canister or specify the desired number of canisters to be filled. In contrast, the fiscal year 1998 incentive not only requires that at least 150 canisters be filled with processed waste and defines the criteria for an acceptably filled canister (with a minimum level of 96 inches) but also includes a provision to reduce the overall fee available to the contractor if the contractor fills fewer than 100 canisters. Finally, the incentives that are included have been agreed to by key DOE personnel. 
This is in contrast to how incentives were developed by DOE when it first introduced them. At that time, the incentives were generally developed by DOE’s technical personnel working with their contractor counterparts. According to DOE’s assessment of these earlier efforts, the resulting incentives were narrowly focused and did not necessarily contribute to achieving DOE’s goals for the site. To address this concern, the four sites we visited used an interdisciplinary team of technical, financial, and contracting personnel to develop their fiscal year 1998 incentives. So that these proposed incentives would be considered in the context of each site’s activities, they were reviewed and approved by DOE’s senior management at the sites. Furthermore, any proposed changes to individual incentives must go through a formal review process and be approved by senior management at the sites. DOE Has Taken Corrective Action to Improve the Performance-Based Incentive Process DOE’s October 1997 assessment of its performance-based incentives also noted that the guidance on the development and administration of these incentives was limited and generally did not address such issues as establishing baselines and allocating fee amounts to specific incentives. In response, DOE has taken steps to strengthen departmentwide guidance and training for the development and administration of these incentives. In addition, the four sites we visited have issued site-specific guidance concerning the development, administration, and evaluation of performance-based incentives. Although one of DOE’s program areas—Environmental Management—issued draft guidelines on performance-based incentives in May 1997, departmentwide guidance was not issued until December 1997. The December 1997 guidance stressed the importance of results-oriented performance expectations that can be measured by objective criteria. 
It also recommended that each field office institute a structured process to develop performance-based incentives and to identify ways to ensure the adequate monitoring and verification of a contractor’s performance in light of these incentives. Furthermore, DOE created an interdisciplinary training course that provides an overview on developing performance-based incentives and presented the training to headquarters and field office personnel. In addition, DOE plans, by the end of 1998, to develop a more detailed course on writing individual performance-based incentives. DOE has held two workshops for field office personnel to share their experiences with performance-based incentives and to identify both efforts that have worked well and areas for further improvement. In March 1998, DOE issued a lessons learned document for performance-based incentives to better disseminate this information, and it plans to continue this practice semiannually. In addition to the corrective action taken by DOE headquarters, the four sites we visited have formalized their procedures for developing and administering performance-based incentives and have improved the quality of the supporting documentation. For example, the Hanford directive issued in September 1997 requires that each incentive, among other things, define quantifiable performance criteria in terms of cost, schedule, and technical baselines. At Rocky Flats, supporting documentation for each individual performance measure for fiscal year 1998 has a justification and development record that explains the rationale for selecting the activity for an incentive and for assigning the specific amount of fee.
DOE Acknowledges That Further Improvements Are Needed Although DOE has taken corrective action and incorporated lessons learned in the fiscal year 1998 incentives, DOE’s Deputy Assistant Secretary for Procurement and Assistance Management acknowledges that there is room for further improvement both in the process for developing the incentives and in the individual incentives themselves. As we discussed in April 1998, one of these areas is the timeliness of the performance-based incentives. For fiscal year 1998, the performance incentives at some of the sites were not approved until several months after the fiscal year had begun. DOE’s fiscal year 1999 goal is to have the incentives approved and in place by the beginning of the fiscal year. DOE’s Fiscal Year 1998 Incentives and Associated Fees Are Generally Linked to Site Objectives For fiscal year 1998, the linkage between DOE’s strategic plan and the performance-based incentives has improved over that of prior years at the sites we visited. Furthermore, all four sites allocated fees to individual performance incentives on the basis of their relative importance and their contribution towards the site’s mission and objectives. Fiscal Year 1998 Incentives Reflect Linkage to DOE’s Objectives For fiscal year 1998, the linkage among DOE’s strategic plan, site-specific long-term and annual work plans, and performance-based incentives has improved over that of prior years. This linkage is important to ensure that incentives contribute towards achieving the goals and objectives of each site. However, as we reported in April 1998, this linkage has not always existed at DOE’s sites. At three of the four sites we visited, DOE’s performance incentives incorporated the baseline measures in DOE’s 10-year plans for environmental cleanup. The fourth site has not yet developed any incentives in environmental management because it is still validating the baseline information.
For fiscal year 1998, DOE has focused on improving the linkage of its performance-based incentives with the Department’s goals. For example, Hanford developed its integrated performance measurement system to ensure that linkage exists between an individual performance incentive and DOE’s strategic plan. At Hanford, one of the fiscal year 1998 incentives is for the task of deactivating the waste handling facility; this incentive supports the accomplishment of DOE’s strategic goal to reduce operating costs by completing the deactivation of surplus nuclear facilities. The Idaho Falls site has yet to develop any performance incentives for environmental management and therefore its performance incentives do not incorporate the 10-year plan’s measures for cleanup. However, Idaho Falls has a process to link its annual work plans to the 10-year plan and DOE’s strategic plan, and the goals and objectives of the site do incorporate the performance measures from the 10-year plan for environmental cleanup. According to officials at the site, as soon as they have validated the baseline information for the environmental management program, they will develop performance-based incentives for that area. Although there are currently no incentives for this area, which represents 60 percent of the funding at the site, the plan for evaluating the contractor’s performance includes specific objective criteria to determine award fees. Allocation of Fees for Fiscal Year 1998 Generally Based on Relative Importance of Activities Prior to fiscal year 1998, DOE did not assign fee amounts to individual performance-based incentives on the basis of their relative contribution to a site’s overall goals and mission. Instead, fees were generally allocated to incentives on the basis of the funding levels for the projects. 
However, for fiscal year 1998, DOE emphasized allocating fee amounts to incentives on the basis of such criteria as the relative importance of the activity to accomplishing the site’s goals and missions. Our review of the fiscal year 1998 incentives showed improvement in this area. For example, for the project to clean up spent nuclear fuel at Hanford, for fiscal year 1997, all of the fee amounts assigned to individual incentives were the same, or 1 percent of the total fee pool. However, for fiscal year 1998, the fee amounts assigned to incentives ranged from 0.25 percent to 6 percent, with the 6 percent assigned to the incentive for actually removing the spent nuclear fuel and lesser percentages to such steps as completing the “sludge pretreatment process selection.” While DOE allocates fee amounts to individual performance incentives to focus the contractor’s efforts on a few critical measures, not all of the total available fee is allocated to incentives. A portion of the total fee is used to ensure that the contractor does not neglect the overall operation of the site while focusing on a few critical measures to earn incentive fees. In addition, the contracts for the operation of the sites include a conditional fee payment clause that requires the contractor to meet environmental, safety, and health standards in order to earn any incentive fee. Furthermore, the individual performance-based incentives for fiscal year 1998 at the four sites include a requirement that the work must be completed within specified cost and schedule variances in order to earn the incentive fee for the activity. For fiscal year 1998, several sites have also initiated new fee provisions to enhance contractors’ performance. One of these initiatives, at Hanford, Rocky Flats, and Savannah River, incorporated provisions in the fiscal year 1998 incentives to reduce the overall fee if specific performance measures are not accomplished. 
For example, at the Hanford Site, under these provisions, if an activity is not successfully completed in a timely manner, the contractor will not only earn little or no fee for that activity, but the total fee available to the contractor may be reduced. Another initiative, at Rocky Flats, involves the use of “gateway” provisions that require the contractor to complete the prior year’s work for a specific incentive before earning any fee for the current year’s efforts. For example, if the contractor was required to remove 100 barrels of waste in fiscal year 1997 but completed only 50, the remaining 50 barrels would have to be removed in fiscal year 1998 before the contractor would be eligible to earn any fee for the 1998 work. In addition, Rocky Flats, Savannah River, and Hanford have begun using “stretch” provisions, under which the contractor can earn more fee if it is able to accomplish additional work with the same level of funding during the year. The Effect of Incentives on Contractors’ Performance Is Not Clear In October 1997, DOE reported that the use of performance-based incentives “has served the Department well in focusing contractor work efforts on results.” DOE stated that the application of performance-based incentive contracting had a positive impact on DOE’s ability to meet its mission needs and cited several specific examples of successful results. However, DOE and contractor officials stated that these successes may be due to the Department’s increased emphasis on program management rather than the result of performance-based incentives. In addition, DOE’s Deputy Assistant Secretary for Procurement and Assistance Management stated that the most positive impact of the performance-based incentives is the need they create for DOE to focus on results and define the tasks it wants to accomplish. DOE has taken corrective action and incorporated lessons learned in its fiscal year 1998 performance incentives.
However, until all these incentives are evaluated at the end of the fiscal year, the impact of these changes is unknown. To determine if a task has been completed successfully, a DOE interdisciplinary team evaluates the individual performance incentive to learn whether the contractor met the criteria specified in the incentive and whether the work was done within acceptable cost variances. Agency Comments We sent a draft of this report to the Department of Energy for its review and comment. The Department’s only comment related to our presentation of the fees included in appendix I. According to the Department, the fees available to and earned by its major subcontractors at the Rocky Flats Site are negotiated separately from the prime contractor and therefore should not be included in the table. We included these fees because at the Hanford and Savannah River sites, the fees shown included the amounts available to the prime contractor, who shares the fees earned with the major subcontractors. Because the total contract amount for the Rocky Flats Site includes the amounts paid to the major subcontractors, we believe that it does not make any difference in the presentation of the information to show that the fees are separately negotiated. Therefore, for consistency of presentation, we have retained these fees in our table. Scope and Methodology To determine the extent to which DOE has incorporated lessons learned in developing the fiscal year 1998 performance-based incentives, we interviewed DOE’s Deputy Assistant Secretary for Procurement and Assistance Management and officials from planning and procurement organizations at DOE’s Hanford, Idaho Falls, Rocky Flats, and Savannah River sites. We also reviewed the procedures and other documentation provided by these officials. We also reviewed OIG reports on the performance-based incentives at the Hanford, Rocky Flats, and Savannah River sites and DOE’s assessments of contract reform and performance-based incentives. 
To determine whether the proposed corrective actions had been implemented, we reviewed the fiscal year 1998 incentives developed at the four sites and compared them with prior years’ incentives. To determine whether the incentives incorporate DOE’s baseline measures in its 10-year plan for environmental cleanup and how fees are allocated to the incentives, we interviewed DOE’s planning and procurement personnel at the four sites. We also reviewed DOE’s September 1997 strategic plan, the site strategic and management plans, including the plans known as Accelerating Cleanup: Paths to Closure plans (formerly called the 10-year plans) at the four sites. We reviewed documentation provided by DOE to demonstrate the linkage among these various levels of planning documents. To determine how fees are allocated among incentives, we interviewed DOE personnel, reviewed procedures, and reviewed the individual performance incentives and supporting documentation. To determine how DOE evaluates completed incentive measures and determines their effectiveness, we interviewed DOE personnel at the four sites, reviewed procedures and other documentation provided by them, and reviewed supporting documentation for completed incentive measures. In addition, we interviewed DOE’s Deputy Assistant Secretary for Procurement and Assistance Management. We performed our review from November 1997 through July 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the Secretary of Energy. We will also make copies available to others on request. Please call me at (202) 512-7106 if you or your staff have any further questions. Major contributors to this report were Jeffrey E. Heil, Carole J. Blackwell, and Charles A. Sylvis. 
Allocation of Fees The Department of Energy calculates the total available fee for each contract on the basis of the total contract amount and the type of work to be performed. This total available fee is then allocated to a base fee amount, if any, and a performance fee amount. The base fee represents a portion of the contractor’s profit and is paid regardless of the contractor’s performance level. The remaining fee is earned on the basis of the contractor’s performance and may be divided into two parts: the award fee, which covers overall site operations, and the incentive fee, which covers specific activities. Table I.1 shows how these fees were allocated and earned for the four sites; amounts may not be comparable because of the scope of work and contractual arrangements at each site. The performance for fiscal year 1998 has yet to be evaluated, so earned amounts are unknown.
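The fee decomposition described above can be sketched arithmetically. The dollar amounts, percentage splits, and function names below are hypothetical illustrations, not actual DOE contract figures; the sketch only shows how a total available fee divides into base, award, and incentive portions, and how a conditional fee payment clause can zero out the incentive fee when environmental, safety, and health (ES&H) standards are not met.

```python
# Hypothetical sketch of the fee structure described above.
# All amounts and shares are illustrative, not actual DOE contract terms.

def allocate_fee(total_fee, base_share, award_share):
    """Split the total available fee into base, award, and incentive parts."""
    base = total_fee * base_share        # paid regardless of performance
    performance = total_fee - base       # earned on the basis of performance
    award = performance * award_share    # covers overall site operations
    incentive = performance - award      # tied to specific activities
    return base, award, incentive

def earned_incentive(incentive_pool, fraction_earned, meets_esh_standards):
    """Conditional fee clause: no incentive fee unless ES&H standards are met."""
    if not meets_esh_standards:
        return 0.0
    return incentive_pool * fraction_earned

base, award, incentive = allocate_fee(
    total_fee=10_000_000, base_share=0.10, award_share=0.30)
print(base, award, incentive)  # base 1,000,000; award 2,700,000; incentive 6,300,000
print(earned_incentive(incentive, fraction_earned=0.8, meets_esh_standards=True))
```

Under these assumed numbers, a contractor completing 80 percent of its incentivized work while meeting ES&H standards would earn 5,040,000 of the 6,300,000 incentive pool; failing the ES&H condition would forfeit the entire incentive fee regardless of work completed.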
| Pursuant to a congressional request, GAO reviewed the performance-based incentives at the Department of Energy's (DOE) Hanford, Idaho Falls, Rocky Flats, and Savannah River sites to determine: (1) the extent to which DOE has incorporated lessons learned in developing its fiscal year (FY) 1998 performance-based incentives; (2) whether these incentives incorporate the baseline measures in DOE's 10-year plan for environmental cleanup, and how fees are allocated to the incentives; and (3) how DOE evaluates completed incentive measures and determines their effectiveness. GAO noted that: (1) during the past year, DOE has taken steps to correct the problems identified in the Office of the Inspector General reports and its own assessment of performance-based incentives; (2) these steps have included issuing guidance, conducting training, and incorporating lessons learned into the FY 1998 incentives; (3) however, DOE believes that FY 1998 represents a transitional period to better performance-based incentives because it plans to continue to make improvements to the incentives; (4) for FY 1998, at three of the four sites GAO visited, DOE's performance-based incentives incorporated the baseline measures in the Department's 10-year plan for environmental cleanup and were generally linked to both DOE's strategic plan and the site-specific plans; (5) the fourth site, Idaho Falls, has not yet developed performance incentives in environmental management, but its goals and objectives do incorporate the 10-year plan's baseline measures; (6) furthermore, each of the four sites generally allocates fees to individual performance incentives in proportion to their relative importance and on the basis of the site's missions and objectives; (7) DOE evaluates completed actions that were tied to performance-based incentives through reviews by its technical, financial, and contracting personnel to determine whether the contractor satisfied the criteria and earned the amount of fee to be 
paid; (8) overall, DOE maintains that performance-based incentives have been effective in achieving desired end results; (9) however, it is not clear whether the successes reported in the departmentwide assessment have been due to the performance-based incentives or to the accompanying increased emphasis on program management; and (10) furthermore, it is too soon to assess the effectiveness of the FY 1998 incentives because the evaluation of these incentives will not be complete until the end of the fiscal year. |
Introduction The use of safety belts has long been considered an effective way to reduce deaths and injuries on the nation’s highways. The Department of Transportation (DOT) estimates that 10,000 additional lives could be saved annually if all of the occupants of motor vehicles used safety belts. Safety belt technology has existed for more than a century, but belts were not installed in new cars sold in the United States until the mid-1960s. Even after the belts were available, relatively few people used them. In 1984, New York became the first state to enact a law mandating the use of safety belts. Other states soon enacted similar laws. Currently, 48 states and the District of Columbia have some form of law on using belts. DOT’s National Highway Traffic Safety Administration (NHTSA) has estimated that safety belt use increased from 11 percent in 1982 to 67 percent in 1994. High Costs of Traffic Accidents and Nonuse of Safety Belts More than 40,000 people have died in traffic accidents in the United States almost every year since 1960. In 1966, 50,894 fatalities occurred on the highways; in 1994, about 40,700 people died. Although crashes of airplanes and trains receive more attention from the media, the number of highway fatalities far exceeds those that occur in all other modes of transportation combined. NHTSA estimates that annually about 20,000 occupants of motor vehicles die in crashes while not using safety belts and about 600,000 occupants are injured in crashes while not using safety belts; more people are killed or seriously injured in road crashes than are the victims of crimes; and traffic crashes cost society over $130 billion annually. NHTSA estimates that from 1982 through 1994, 65,290 lives were saved by safety belts, and about 1.5 million moderate to critical injuries were prevented. Despite these successes, enormous costs are still generated when people do not use safety belts.
NHTSA reported in June 1994 that not using belts results in 10,000 deaths and 200,000 moderate to critical injuries annually. NHTSA estimates that these deaths and injuries cost society $20 billion annually in medical costs, lost productivity, and other injury-related expenses. History of Safety Belts Safety belts were developed in the 1880s to keep people from bouncing off horse-drawn buggies. However, automobile manufacturers did not offer safety belts in vehicles until the 1950s. In 1961, a few states required that belts be installed in the new cars sold in their states. In 1962, manufacturers began to install safety belt anchorages at the factory, making it easier for car dealers or owners to add safety belts later. In 1964, U.S. manufacturers began making safety belts standard equipment in the front seat of their cars. Various analyses have been conducted to show what happens to belted and unbelted occupants of vehicles involved in crashes. Figure 1.1 shows how a steering wheel, instrument panel, and windshield absorb crash forces affecting an unbelted dummy. In May 1992, we reported the results of various studies on the effectiveness of safety belts, laws on the mandatory use of belts, and the costs of not using belts. These studies showed that using safety belts generally reduced the rates of both fatalities and serious injuries by 50 to 75 percent in crashes involving motor vehicles. The studies also showed that state laws on safety belt use reduced both fatalities and serious injuries by 5 to 20 percent, even though the use of belts was relatively low during the periods in which these studies were performed. Most studies that addressed hospital costs reported that the crash victims who had used belts averaged 60 to 80 percent lower hospital costs than those who had not used belts. 
The studies also found that the occupants not using belts who were injured in crashes paid less than one-half of their hospital costs, since most of the costs were paid through insurance premiums or Medicare and Medicaid. The tax-supported programs paid between 8 and 28 percent of the hospital costs. Federal and State Laws Promote Safety Belts The Congress and federal agencies have encouraged the installation and use of safety belts since the mid-1960s, and the states began enacting laws on safety belt use in the mid-1980s. Under the initial federal efforts, safety belts were required to meet minimum standards. Since few occupants of vehicles voluntarily used manual safety belts, DOT issued a rule in 1984 mandating that passive restraints—automatic safety belts and airbags—be phased in beginning with 1987 model year cars. Under the rule, the installation of passive restraints could be avoided if states representing two-thirds of the U.S. population enacted satisfactory laws mandating safety belt use. This provision focused attention on mandatory use laws and prompted automobile manufacturers and others to provide funding and support for such laws. The first state law mandating safety belt use was enacted in New York in 1984; by 1986, a total of 22 states and the District of Columbia had such laws in effect. Since DOT’s data showed little increase in safety belt use between 1987 and 1990, the Congress acted in 1991 to again focus attention on increasing the use of safety belts. The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) (P.L. 102-240) included financial incentives—grants and penalties—to encourage the states to enact basic safety belt laws and increase belt use. ISTEA provided for grants for up to 3 years to those states that had laws mandating safety belt use and that achieved minimal levels of belt use. The grants totaled $12 million per year for fiscal years 1992 through 1994. 
ISTEA also required those states that did not have basic safety belt laws to transfer up to 3 percent of their federal-aid highway funds to their state highway safety programs. Maine and New Hampshire are the only states that do not have laws on safety belt use. Objectives, Scope, and Methodology This report’s objectives were to determine (1) the nation’s progress in achieving goals for the use of safety belts, (2) the strategies used most successfully by some states to increase safety belt use, and (3) federal strategies that could help increase this use. Our work was requested by the Chairman and Ranking Minority Member, Subcommittee on Transportation and Related Agencies, House Committee on Appropriations. To conduct our work, we visited NHTSA’s headquarters in Washington, D.C., the agencies responsible for highway traffic safety programs in 10 states (California, Colorado, Idaho, Maryland, Mississippi, New Hampshire, New Jersey, New York, North Carolina, and South Carolina), and the seven NHTSA regional offices with responsibility for the 10 states. We judgmentally selected the 10 states to include a cross section of state safety belt programs. In making our selections, we considered whether a state’s survey on safety belt use had been approved by NHTSA, whether the state had a law on safety belt use involving primary or secondary enforcement, the fine the state assessed for noncompliance with the law, the state’s reported rate of safety belt use (so that we selected states with relatively high and low use), and the period in which the state’s last survey on safety belt use had been conducted. At NHTSA and the state agencies, we obtained and reviewed pertinent documents and discussed activities concerning safety belts with officials. 
More specifically, at the various locations, we obtained and reviewed pertinent documents, including NHTSA’s Regional Action Plans and the states’ Highway Safety Plans, which described the state’s strategies for increasing the use of safety belts and provided information on past successes; reviewed materials developed for public information and education campaigns and for community-based traffic safety programs; discussed with state officials what the federal government is currently doing to increase safety belt use, what is and is not working well, and what changes are desirable; reviewed appropriate laws and regulations and other relevant documents; reviewed the methodologies NHTSA used to calculate the rate of seat belt use; analyzed the methodologies used in state surveys to determine whether the states were consistent in how the surveys were planned and conducted; and reviewed NHTSA’s guidelines on the state surveys of safety belt use to determine the extent to which the guidance provides for consistent surveys. Also, as requested, we met with the Canadian officials responsible for implementing safety belt programs to learn what strategies Canada had used to achieve that country’s reported 90-percent rate of safety belt use. We provided DOT with a draft of our report for review and comment. We conducted our review between June 1994 and December 1995 in accordance with generally accepted government auditing standards. Safety Belt Use Has Increased, but National Goals Have Not Been Met NHTSA has reported that safety belt use increased from 11 percent in 1982 to 67 percent in 1994. However, DOT’s recent goals for safety belt use nationwide have not been met. For example, DOT had a goal of 70-percent belt use by the end of 1992 and reported belt use in 1992 to be 62 percent. DOT’s current goal is to reach a rate of 75-percent belt use nationwide by 1997.
Using two different methodologies, NHTSA has estimated the rate of safety belt use nationwide in 1994 to be either 67 or 58 percent. NHTSA recognized that its methodology for estimating the 67-percent nationwide rate of belt use was not precise because it relied on individual state surveys that did not measure belt use consistently. For example, 22 states surveyed only passenger cars, while 20 states surveyed cars, light trucks, and vans. Also, some states counted belt use by drivers only, while others included use by other passengers as well. During October and November 1994, NHTSA conducted a nationwide survey to gather more detailed data on the use of restraints. This survey found a nationwide use rate of 58 percent—63 percent in passenger cars and 50 percent in light trucks. The rate in light trucks is important because these vehicles now constitute about 40 percent of the new vehicles sold. Given NHTSA’s estimates of a 58-percent or 67-percent nationwide rate of belt use in 1994, significant progress must be made to meet DOT’s goal of a nationwide rate of 75-percent belt use by 1997. NHTSA’s Data Indicate Increases in Safety Belt Use NHTSA has used various methodologies for estimating the rates of safety belt use, and all show substantial increases since the early 1980s. NHTSA’s data indicate that the increase has been gradual from one year to the next with two exceptions. First, the largest increase occurred during 1985-86 when the first state safety belt laws went into effect. The second largest increase occurred during 1991-93 when ISTEA provided financial incentives for the states to enact safety belt laws and NHTSA initiated new programs with state enforcement agencies. The estimates indicate relatively small increases in belt use before 1985, from 1987 through 1990, and between 1993 and 1994. Figure 2.1 shows the changes in safety belt use nationwide since the early 1980s relative to the number of state laws on safety belt use. 
Measures of safety belt use over time have been available from a variety of sources, such as data about the occupants of vehicles involved in crashes, telephone surveys, and surveys of belt use that NHTSA performed until 1991 in 19 cities. For reasons discussed below, the various sources show very different rates of belt use. However, as figure 2.2 shows, all of the data sources show a substantial increase in belt use since the early 1980s. In addition, the indicators show larger increases during periods of increased federal and state emphasis on safety belt programs. These other sources show different but not necessarily more reliable use rates than those generally quoted by NHTSA and shown in figure 2.1. From 1982 to 1991, NHTSA used a survey that sampled belt use in 19 cities as an indicator of the nationwide rate of belt use. These surveys were useful for tracking changes in use rates in the particular cities included in the study, but the results from the sample cities could not be statistically extrapolated to metropolitan areas not in the sample or to any nonmetropolitan area. A telephone survey has been conducted almost every year since 1983, and the results show higher belt use than NHTSA has reported. This higher result is understandable because other studies have shown that respondents to telephone surveys tend to report higher use than is actually observed. Data about the occupants of vehicles involved in crashes indicate belt use rates both higher and lower than NHTSA’s two reported estimates, but these different results can be explained logically. NHTSA’s Fatal Accident Reporting System (FARS) contains data only from crashes in which someone died. Belt use by the victims in these crashes tends to be low because people who use belts tend to be injured or uninjured rather than killed, so they are more likely to be reported, not in FARS, but in NHTSA’s General Estimates System (GES) as involved in a crash resulting in injury or property damage only. 
In addition, the belt use reported in the GES data is higher because the data generally come from statements made by the vehicles’ occupants, who tend to tell police officers that they were complying with belt use laws. This tendency is particularly evident in crashes involving property damage only and no apparent injury. While the rates of safety belt use from the federal data on crashes are of limited value in estimating belt use nationwide, they can be useful for NHTSA and the states in evaluating the reasonableness of the use rates shown in the state surveys. The results of the state surveys can be expected to be higher than the FARS results and lower than the GES results for each state for the reasons explained above. NHTSA has developed a model that uses FARS data to predict actual belt use, and these results have been compared with the results from state surveys. While the model does not consider all of the relevant differences among the states, NHTSA officials told us that these comparisons of estimates of safety belt use from the state survey data and FARS generally support the reasonableness of the results of the state surveys. Figure 2.2 also demonstrates the importance of changes in the surveys’ methodology and the effects such changes can have on the results of an analysis. Figure 2.1 shows NHTSA’s analysis of nationwide rates of belt use between 1983 and 1994. According to NHTSA, the sources of the information were the 19-city survey from 1983 through 1990 and the state surveys from 1991 through 1994. Figure 2.2 shows that there were 2 years—1990 and 1991—in which the rates from both the 19-city survey and the state surveys were computed. The state surveys, using a different methodology, showed results 4 percentage points higher in 1990 and 8 percentage points higher in 1991 than the 19-city survey showed. 
As a result, a substantial portion of the 10-percentage point increase shown in figure 2.1 between 1990 and 1991 was caused by the change in the surveys’ methodology. Although NHTSA may have used the best available data for those years, the change in methodology is an important factor to consider when analyzing the trend.

67-Percent Belt Use Rate for 1994 Is Not Reliable

NHTSA’s estimate of a 67-percent nationwide rate of safety belt use for 1994 is not reliable because the rate is based on state surveys that used different methodologies that do not consistently measure belt use. For example, 22 states surveyed only passenger cars, while 20 states surveyed cars, light trucks, and vans; the other states surveyed two of the three vehicle categories. Five states measured belt use by drivers only, and the others measured use by drivers and occupants of the vehicles’ right front seat; no state surveyed belt use by the occupants of the rear seats. The methodologies used for the state surveys also varied in selecting observation locations and in weighting the results. Some states exempted sparsely populated areas from their sampling plans, while others considered all geographic areas eligible for sampling. Also, some states conducted annual surveys, while others did not. NHTSA estimated the 67-percent nationwide rate of safety belt use for 1994 by using 34 state surveys conducted in 1994, 16 surveys conducted before 1994, and information on belt use from Wyoming’s crash data. NHTSA calculated the nationwide use rate by taking each state’s most recent rate and weighting the rate by each state’s population as a proportion of the total U.S. population. In our opinion, this methodology does not provide a reliable estimate of the nationwide rate of safety belt use because it relies on state surveys that use very different methodologies. NHTSA has acknowledged that the state surveys on safety belt use differ in design. 
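NHTSA’s weighting step is simple arithmetic: each state’s most recent rate is weighted by that state’s share of the total U.S. population. A minimal sketch of the calculation follows; the rates and populations shown are hypothetical illustrations, not NHTSA’s actual 1994 survey data.

```python
# Sketch of a population-weighted average of state belt-use rates,
# as described in the text. The figures below are hypothetical.

def weighted_belt_use(states):
    """states: list of (rate_percent, population) tuples."""
    total_pop = sum(pop for _, pop in states)
    return sum(rate * pop for rate, pop in states) / total_pop

example = [
    (70.0, 30_000_000),  # a large state with a high reported rate
    (50.0, 10_000_000),  # a smaller state with a lower rate
    (60.0, 20_000_000),
]

print(round(weighted_belt_use(example), 1))  # → 63.3
```

Note that the weighting corrects for state size but, as the report observes, cannot correct for the underlying surveys measuring different things (different vehicle types, different occupants).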
However, NHTSA pointed out that 28 states—representing over 70 percent of the U.S. population—conducted probability-based observational surveys. Nevertheless, the agency also said that the remaining states conducted surveys in which their observation sites, while usually adequate in number, were not randomly selected. As a result, no confidence intervals can be calculated from these survey results. In our May 1992 report, we found that statewide data on safety belt use were questionable. NHTSA analysts had told us that the statewide rates of safety belt use provided by the states were generally not based on probability sampling techniques that would provide statistically valid estimates. The states had used a variety of methods that differed in reliability. The states’ data on the rate of belt use were particularly important at that time because ISTEA provided for grants to the states on the basis of these rates. Funds were allocated to the states during fiscal years 1992-94 in part on the basis of the rates of safety belt use as measured by the state surveys. To improve the quality of the data in the state surveys, in June 1992 NHTSA finalized guidelines for state observational surveys of belt use. These guidelines allowed the states substantial latitude in designing and carrying out the surveys. Although the guidelines were very flexible and NHTSA helped the states conform with the guidelines, only 28 states received NHTSA’s approval of their survey methodology. NHTSA officials said that some additional states might be performing a survey that either conforms to the guidelines or nearly conforms, but these states did not need NHTSA’s approval of their survey plan. Since the grants are no longer available, there is no financial incentive for the states to have their survey plan conform with NHTSA’s guidelines. A contractor that reviewed the state survey designs concluded: “Available state estimates of safety belt use cannot be used to produce a national estimate. 
Review of the designs of all states that have conducted state-level surveys of occupant restraint systems has confirmed that results across states are not comparable and cannot be used to produce a national estimate.” We agree with the contractor’s comments. However, NHTSA officials said that the lack of consistency among the state surveys does not preclude using the surveys to develop a reasonable annual estimate of belt use nationwide. They also said that it was important for states to continue to perform surveys so that each state can identify trends and specific local problems with belt use.

NHTSA’s Most Recent Survey Reveals 58-Percent Rate of Belt Use

Recognizing that the data from the state surveys were limited in scope, NHTSA in 1994 conducted a special national analysis—the National Occupant Protection Use Survey (NOPUS). Data were collected by observing traffic at about 4,000 randomly selected sites in 25 states during October, November, and December 1994. NOPUS was used to estimate the nationwide rate of belt use and to obtain detailed data on (1) belt use by vehicle type and the occupant’s age and gender and (2) the misuse of belts. The initial results from NOPUS were released by NHTSA in early 1995 and showed an overall nationwide rate of safety belt use of 58 percent for 1994. These results indicate, among other things, that the drivers tend to use safety belts more frequently than the passengers in the right front seat and that belt use is higher in the western United States than in the rest of the country. The NOPUS’ breakout by vehicle type showed an overall rate of 63-percent belt use for the occupants of passenger cars and a 50-percent rate of use for the occupants of light trucks. This breakout for light trucks is particularly important because these vehicles make up about 40 percent of the new vehicles sold. NHTSA recently estimated that annually 3,600 occupants of light trucks die and 54,000 are injured because they do not use safety belts. 
This disparity in belt use rates between the occupants of passenger cars and light trucks indicates that special emphasis and targeted programs may be needed to increase belt use in light trucks. Part of the disparity could relate to the fact, discussed in chapter 4, that several states’ laws on belt use do not cover the occupants of light trucks. NHTSA officials believe that NOPUS’ findings generally support the estimates of the nationwide rate of belt use calculated from the state surveys but agree that comparing the rates in the NOPUS and the state surveys is difficult. NHTSA plans to conduct another NOPUS survey if funds become available, but the agency plans to continue using the state surveys to annually estimate the nationwide rate of belt use. The 67-percent weighted average from the state surveys and the 58-percent rate from NOPUS both fall within the range of estimates of belt use based on other data. Both estimates reveal that substantial progress must be made if DOT’s goal of 75-percent belt use by 1997 is to be achieved.

Conclusions

Safety belt use increased from 11 percent in 1982 to a reported 67 percent in 1994. Much of the increase resulted from the adoption of laws mandating safety belt use by 48 states and the District of Columbia. Increases in belt use can also be noted during the years in which federal funds were provided to the states for improving their safety belt programs. Belt use in light trucks and vans has remained relatively low. These vehicles are not covered by federal law or by the laws of several states. NHTSA has recognized that individual state surveys do not measure belt use consistently. NHTSA could improve the guidelines for the state surveys, but the effect of such improvements could be minimal since the state laws vary significantly and NHTSA does not offer financial incentives to encourage the states to improve their surveys. 
Given NHTSA’s two reported nationwide rates of belt use—67 or 58 percent—significant progress must be made if the nation is to achieve DOT’s goal of a rate of 75-percent use of safety belts by 1997.

Primary Enforcement Laws and Aggressive Enforcement Are Key to Increased Belt Use

The states that are most successful in increasing safety belt use have comprehensive programs that include primary enforcement laws, visible and aggressive enforcement, and vigorous public information and education programs. Primary enforcement laws allow law enforcement officials to stop and ticket a vehicle’s occupants solely for not using their safety belts. Ten states currently have safety belt use laws allowing primary enforcement, while 39 states including the District of Columbia have laws allowing for only secondary enforcement. NHTSA estimated that the rates of belt use in the states with primary enforcement laws were 15 percentage points higher in 1994 than the rates in the states with secondary enforcement laws.

Successful State Safety Belt Programs Contain Several Components

The states’ laws on safety belt use differ widely in enforcement, coverage, and fines, but the most successful programs share several common key components. Appendix I shows the 1994 rates of safety belt use that the states reported to NHTSA, as well as some information about the belt laws in each state. As reported by the states, the rates of belt use in 1994 ranged from a low of 32 percent to a high of 84 percent; four states reported rates of over 80-percent belt use, while five reported rates of less than 50-percent use. To understand the key components of a successful safety belt program and how they work together to increase belt use, we visited 10 states and their respective NHTSA regional offices. As shown in table 3.1, the 10 states we visited included 3 states with primary enforcement laws, 6 states with secondary enforcement laws, and 1 state with no law. 
Officials in each NHTSA regional office and state we visited stressed that primary enforcement laws were the best way to increase safety belt use but that the other components were needed to maintain that rate of increase. They also stated that in the absence of a primary enforcement law, the most effective way to increase safety belt use was a secondary enforcement law combined with active community involvement in law enforcement and public education and information activities aimed at increasing the use of safety belts. Figure 3.1 shows that the 3 states with primary enforcement laws we visited significantly increased belt use after adopting such a law. Of the 10 states we visited, the average belt use of the 3 states with primary enforcement was about 20 percentage points higher than the average belt use of the 6 states with secondary enforcement. Figure 3.2 shows that the six states with secondary enforcement laws we visited experienced increases in safety belt use after adopting such a law. However, the two figures together show that the overall rates of belt use for the states with secondary enforcement are much lower than the rates of the states with primary enforcement.

Primary Enforcement Laws Are Key to Increasing Safety Belt Use

States with primary enforcement laws have been the most successful in increasing safety belt use. This success is the result of law enforcement officers stopping and assessing fines to a vehicle’s occupants solely for not using their safety belts. Officials of state safety belt programs work with law enforcement agencies to encourage enforcement and also to help educate and inform the public about the law and the consequences of noncompliance. According to state officials, one of the most successful ways to reach the public is by involving community groups in programs aimed at increasing safety belt use. 
The ability of primary enforcement laws to increase safety belt use is best illustrated by California’s upgrade of its law mandating safety belt use from a secondary enforcement law to a primary enforcement law. In November 1992, California reported a rate of safety belt use of 70 percent. At that time, California’s secondary enforcement law had been in place for about 7 years. On January 1, 1993, California implemented a primary enforcement law, resulting in an increase in safety belt use of 13 percentage points for a statewide rate of 83 percent in late 1993, according to the results of a state survey. California officials actively publicized this change in the law. A survey of some California drivers conducted during March through September 1993 found that 90 percent of those surveyed knew that they could be stopped for violating a belt law alone and 75 percent felt that the law was being strictly enforced. California increased only slightly the number of citations issued during this period. Therefore, NHTSA officials believe that the change to a primary enforcement law is the primary reason for the significant increase in belt use. Primary enforcement laws increase safety belt use, but sustained and increased safety belt use can be better achieved when these laws are supported with enforcement and public education and information activities. North Carolina provides an example of how these activities, when associated with a primary enforcement law, can dramatically increase safety belt use. Before implementing its primary enforcement law in October 1985, North Carolina had a rate of safety belt use of 24 percent. During a 15-month period when only warnings were issued to violators, the reported rates of safety belt use ranged from 41 to 49 percent. On January 1, 1987, citations began to be issued for not using a belt, and the reported rate of belt use quickly increased to 78 percent. 
However, after a few years, state surveys showed that the rate of belt use had dropped back to 60 percent. In September 1993, North Carolina embarked on a multiyear campaign—“Click It or Ticket”—to further increase safety belt use and reduce related injuries and fatalities. This intensive enforcement and publicity campaign is credited with increasing North Carolina’s reported rate of safety belt use by 15 percentage points in 3 months—65 to 80 percent—and with achieving North Carolina’s current rate of 81 percent. The campaign featured increased and highly visible enforcement through the use of safety belt checkpoints. These activities were publicized locally, and the message provided to the public was that activities to enforce the safety belt law were the major focus of local law enforcement agencies during the first 4 weeks of the program. This highly visible program was also directly endorsed by North Carolina’s governor, who cited the high costs society pays for individuals who do not use their safety belts. State officials report that in its first 6 months, the “Click It or Ticket” campaign saved 45 lives, prevented 320 disabling injuries, and saved more than $51 million in health care and other costs. New York has also used enforcement and public education and information activities to sustain and increase the rate of safety belt use that the state achieved after it passed the nation’s first law mandating safety belt use on December 1, 1984. Before the passage of this law, New York’s rate of safety belt use was estimated to be 16 percent. Within 6 months, the state’s reported rate of belt use increased to 57 percent. New York now reports a belt use rate of 72 percent. This gain was primarily due to the emphasis placed on enforcing the law through police training and an increase in the number of citations issued. New York has also used public information campaigns and special workshops on restraints for children. 
NHTSA officials told us that the state’s ability to continue to positively affect the rate of safety belt use results from its emphasis on establishing and incorporating community-based networks into programs to improve traffic safety.

Secondary Enforcement Laws Can Increase Safety Belt Use

States with secondary enforcement laws are also successful in increasing safety belt use, but their success is limited by the difficulty in effectively enforcing the law. Today, 38 states and the District of Columbia have secondary enforcement laws, which allow a vehicle’s occupants to be ticketed for not using safety belts after they have been stopped for another violation. The success of secondary enforcement laws depends on how well the states work with law enforcement agencies to encourage enforcement and reach out to community members to educate and inform them about the laws and the importance of using safety belts. The states’ efforts to strengthen laws on restraints for children also contribute to increasing adults’ use of safety belts. For the six states with secondary enforcement laws that we visited, the laws contributed greatly to increasing the rates of safety belt use. Also important were aggressive enforcement and public education and information activities. For example, Idaho was able to increase its reported rate of belt use by 24 percentage points—from 35 percent in June 1990 to 59 percent in September 1993—through an increased emphasis on education and enforcement at the local level. Idaho used some of its highway safety funds to provide grants to the local law enforcement agencies that administered these programs. These agencies provided the community with information and education on safety belts and child restraints, and trained law enforcement officers on the use of restraints and the need for increased enforcement. 
To receive these grants, the enforcement agencies were required to have a policy of writing one safety belt citation for every five citations for hazardous violations. This approach greatly increased the number of citations issued for safety belt violations and resulted in a statewide rate of belt use of 61 percent in 1994. New Jersey was also able to increase its rate of safety belt use substantially through increased enforcement activities. From 1990 to 1991, New Jersey doubled the number of safety belt citations issued, resulting in a reported increase of 18 percentage points—from 50 percent to 68 percent—in its rate of safety belt use. New Jersey also was very active in public information and education, including a “101 Days of Summer” publicity campaign that emphasized why it was important to use safety belts and activities connected with “Buckle Up America Week.” New Jersey’s safety officials are attempting to upgrade the state’s safety belt law to a primary enforcement law because they believe this change could immediately increase the state’s rate of safety belt use by up to 12 percentage points. New Jersey reported a current rate of safety belt use of 64 percent. Other states with secondary enforcement laws we visited have not experienced the level of increase in use rates that Idaho and New Jersey have. Colorado, for example, reported an increase in its use rate to 51 percent from 18 percent when it implemented its law on July 1, 1987, but has not been able to make substantial progress since that time. As of January 1995, Colorado reported a use rate of 54 percent. However, the state believes its rate will likely increase as its many activities are implemented. For example, the state is training law enforcement officers to enforce the safety belt law and is conducting a “Drive Smart Colorado” campaign that assists community leaders in developing strategies and programs to ensure the safety of the traveling public. 
Also, Colorado recently amended its law on restraints for children to increase the age of the children covered by the law from 4 to 16. New Hampshire, which has no law on mandatory belt use, shows how having a law on restraints for children and aggressive public information and education about that law can contribute to increased adult use of safety belts. In 1984, New Hampshire found that only 16 percent of drivers were using safety belts. Since then, the largest reported annual increase in New Hampshire’s rate of safety belt use by adults—from 37 percent in 1988 to 50 percent in 1989—coincided with an increase in the age of the children covered by the law on restraints for children from up to age 4 to up to age 12. This change in the law provided New Hampshire with the opportunity to educate and inform the public about the child restraint law and the consequences of not using safety belts and then being involved in a traffic accident. In September 1994, New Hampshire reported a rate of safety belt use by adults of 54 percent. State officials said that this latest increase can be attributed to another change in the child restraint law (effective Jan. 1, 1994), which requires children up to age 4 to be restrained in a proper restraint system—a car seat.

Conclusions

The states that are most successful in increasing their rates of safety belt use have comprehensive programs that include mandatory primary enforcement laws that are visibly and aggressively enforced. These states also actively educate and inform the public about the laws, their benefits, and the consequences of noncompliance. Those states that do not have mandatory safety belt laws involving primary enforcement can also achieve increased safety belt use through increased enforcement of their secondary enforcement laws and through effective efforts to educate and inform the public. 
However, given the benefits in increased use rates that primary enforcement laws provide, the effectiveness of the state programs that are currently based on secondary enforcement laws could be dramatically increased through the implementation of primary enforcement laws, assuming the other program elements are continued.

Federal Strategies for Increasing Safety Belt Use

Stronger state laws on safety belt use could increase the rates of belt use, annually preventing thousands of deaths and serious injuries and saving up to $20 billion. Various studies have shown that the public pays for most of the costs resulting from not using safety belts through higher taxes and insurance premiums. While various federal actions could be taken to increase safety belt use, an effective strategy would encourage the states to have comprehensive programs, including primary enforcement laws with aggressive enforcement, coverage of all occupants in vehicles with belts installed, fines that discourage noncompliance, and public education. The current federal policy, contained in ISTEA, encourages the states to have a law mandating safety belt use that covers occupants of passenger cars’ front seats. ISTEA does not specify a primary or secondary enforcement law and does not require occupants of passenger cars’ rear seats or any occupants of light trucks and vans to use safety belts.

Nonuse of Safety Belts Generates Large Costs to Society

In June 1994, NHTSA reported that the nonuse of safety belts by occupants of passenger cars results in about 6,200 deaths and 150,000 moderate to critical injuries each year. Additionally, 3,600 occupants of light trucks and multipurpose vehicles die and 54,000 are injured unnecessarily because they do not use safety belts. NHTSA estimated that these deaths and injuries cost society $20 billion annually in medical costs, lost productivity, and other injury-related expenses. 
Most of these costs are borne by society in the form of tax-supported programs and insurance premiums. In response to a mandate in ISTEA, NHTSA analyzed data from seven states to determine the costs of medical care for crash victims and who pays for that care. This Crash Outcome Data Evaluation System (CODES) project linked statewide data from police reports on motor vehicle crashes with computerized data from emergency medical services, hospital emergency departments, hospital discharges, and other activities so that the costs of the medical treatment of people injured in traffic crashes could be tracked. CODES obtained data on about 880,000 vehicle drivers for various periods between 1990 and 1992 in the seven states. The final report is expected to be provided to the Congress in February 1996. The preliminary data from CODES indicate that there is a direct relationship between safety belt use and the medical costs resulting from traffic crashes. The average charges for all drivers (including those not hospitalized) in the CODES study who were involved in crashes were $562 for those not using safety belts and $110 for those using belts. Thus, those drivers using safety belts averaged 80 percent lower charges. For crash victims who were actually admitted to hospitals, the average charges were $13,937 for those not using safety belts and $9,004 for those using belts, which indicates a 35-percent reduction in hospital charges when safety belts were used. The data from CODES are consistent with the data from other studies. Our May 1992 report on the effectiveness of safety belts presented the results from eight studies containing data on the effectiveness of safety belts in reducing hospital charges. All the studies showed that hospital costs were lower for the vehicle occupants using safety belts than for the occupants not using belts. 
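The 80-percent and 35-percent figures follow directly from the average charges quoted above. A quick check of the arithmetic, using the CODES averages from the text:

```python
# Verify the percent reductions implied by the CODES average charges
# reported in the text ($562 vs. $110 for all drivers; $13,937 vs.
# $9,004 for hospitalized crash victims).

def pct_lower(unbelted, belted):
    """Percent by which belted occupants' average charges are lower."""
    return 100 * (unbelted - belted) / unbelted

print(round(pct_lower(562, 110)))     # all drivers → 80
print(round(pct_lower(13937, 9004)))  # hospitalized victims → 35
```

The larger relative difference for all drivers reflects that many belted occupants in crashes required little or no medical treatment at all, while the hospitalized group compares only those injured severely enough to be admitted.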
The victims who used belts had average hospital costs that were from 27 to 87 percent lower than those of the victims who did not use belts; most of the studies showed costs between 60 and 80 percent lower. Stated another way, most of the studies showed the hospital costs for the crash victims who did not use belts to be 2-1/2 to 5 times the cost for the victims who used belts. The studies also provided data showing that safety belts reduce other costs related to injuries in traffic crashes, such as ambulance costs or insurance claims costs for personal injury. While the studies discussed in that report indicated a higher rate of temporary and permanent disability for the victims who did not use belts, the data on such long-term effects were generally not available. Unfortunately, none of the studies captured information on the level of income replacement resulting from providing disability or welfare benefits to victims who did or did not use belts. CODES and other studies have shown that society pays a large part of the costs of medical treatment for those injured in traffic crashes. Preliminary data from CODES show that the public paid 16 percent of these costs through such programs as Medicare and Medicaid. About 69 percent was paid by private insurance, which spread the cost to all who pay insurance premiums. At the time the victims were discharged from the hospital, only 15 percent of the charges were classified as paid by others, generally “self payers.” NHTSA pointed out in its draft report that these self payers often are unable to pay their bills, and the cost of providing this care is ultimately passed on through higher charges for those who do pay. CODES data show that the general public may pay a larger portion of the costs than some of the earlier data showed. 
NHTSA published a report in January 1992 that used data from five states to estimate the costs of hospital care for people injured in motor vehicle crashes in 1990 and the sources of payment of those costs. Those data show that 29 percent was paid by government sources, 52 percent by insurance, and 19 percent by others. Five studies of hospital costs that we reviewed for our May 1992 report also collected data on medical payments for crash victims. Among the victims who did not use belts, from 8 to 28 percent were covered by Medicare or Medicaid, from 41 to 55 percent were covered by insurance, and the remaining 22 to 49 percent were considered self payers. Some costs not covered by public programs or insurance ultimately will not be paid by the injured person or the person’s family, so a portion of the costs to self payers will be paid by other sources of funding for the hospitals.

Federal Efforts Have Increased Belt Use, but State Laws Are Not Comprehensive

The federal government has recognized the benefits of safety belts and has been requiring their installation and encouraging their use since the mid-1960s. Federal efforts have been effective in encouraging the states to enact basic laws on mandatory safety belt use. NHTSA has not been successful, however, in encouraging the majority of states to enact a primary enforcement law that covers occupants in all types of motor vehicles that have belts installed. NHTSA has encouraged the states to enact a law mandating safety belt use and has distributed material for the states and others to use in urging the public to use safety belts. NHTSA has also initiated national campaigns for public information and awareness and has assisted in state and local campaigns to increase safety belt use. In addition, the states receive federal funding to help them implement highway safety programs. About $170 million was requested for fiscal year 1996 for assistance to the states under the federal highway safety program. 
DOT encourages the states to use the funds in support of program areas that are national priorities. The Secretary of Transportation has established a goal of a nationwide rate of 75-percent belt use by 1997, in place of an earlier goal of 70-percent use by 1992. NHTSA has worked with the states and local agencies to achieve these goals. NHTSA’s primary focus in increasing safety belt use has been through encouraging states to enact stronger laws and through related efforts in enforcement and public education. Officials in the states we visited told us that NHTSA’s assistance has helped them develop safety belt programs at the state and local levels. They also said that the federal funds have been an important element in state and local activities for education and enforcement. To varying degrees, the states have used NHTSA’s public information materials and have joined in the federal promotional campaigns. NHTSA has encouraged the states to strengthen their safety belt laws. However, most state laws provide for secondary enforcement and minimal fines for violations, cover only occupants of the vehicles’ front seat, and often exempt the occupants of light trucks. Our May 1992 report concluded that stronger and more comprehensive laws were needed and that society could save billions of dollars annually through increased safety belt use. As discussed in chapter 3 of this report, the most effective state laws have strong enforcement provisions and cover all occupants of passenger cars, light trucks, and vans. Since our 1992 report, nine states have enacted new laws on belt use, but these laws are similar to the earlier laws—generally providing for secondary enforcement and relatively low fines. Overall, little progress has been made recently in getting the states to adopt stronger and more comprehensive safety belt laws. As table 4.1 shows, most state laws cover the occupants of the front seat only, and some exempt the occupants of light trucks and/or vans. 
Ten states provide for primary enforcement, and 39 states (including the District of Columbia) provide for secondary enforcement. Only 11 state laws cover the occupants of rear seats, and 7 state laws exempt the occupants of light trucks and vans. Only 4 states assess fines for violations of belt use laws that exceed $25, and 13 states assess fines of $10 or less, including 2 states that do not assess any fine. Although 49 states (including the District of Columbia) now have laws on safety belt use, compared with 42 in 1991, the laws could be stronger and more comprehensive. Ten states have primary enforcement laws—the same number as in 1991. While California and Louisiana have enacted a primary enforcement law since 1991, Mississippi and Wisconsin have changed from primary to secondary enforcement. The states’ fines for violating belt use laws have changed little since 1991, and most are so low that they have little influence on motivating nonusers of belts to buckle up. The House Committee on Appropriations stated that “the Committee believes that more aggressive action needs to be taken to achieve a 75 percent seat belt usage rate by 1997. Specifically, the Committee directs NHTSA to develop and distribute to all states a model seat belt use law as part of its 1996 program.” In addition, the National Transportation Safety Board (NTSB) made the following recommendation to the states: “Enact legislation that provides for primary enforcement of mandatory safety belt use laws. Consider provisions such as adequate fine levels and the imposition of driver license penalty points.” NTSB sent its recommendations to all the states with a secondary enforcement law and those without safety belt laws, asking the states to report any actions taken on its recommendations.
Federal Legislation Could Encourage Stronger State Safety Belt Laws

Our May 1992 report discussed the relevant provisions of ISTEA and stated that “the act’s provisions may do little to encourage states to strengthen their existing laws.” The grants established by ISTEA to encourage belt use were available for a 3-year period—1992-94—and did not require strong state laws as a condition of receiving the grants. ISTEA’s penalty provision, which applies beginning with fiscal year 1995 funds to states that had no safety belt law as of October 1, 1993, transfers up to 3 percent of a state’s federal-aid highway funds to the state’s highway safety programs. Under current law, in 1996 only Maine and New Hampshire will be subject to the safety belt penalty, which is estimated at about $1.6 million for each state. ISTEA requires the states to have laws on mandatory safety belt use to avoid the penalty, but it does not require primary enforcement or state fines for nonuse of belts. Also, the act applies only to the occupants of passenger vehicles’ front seats and defines passenger vehicles to exclude vehicles constructed on a truck chassis. As a result, state laws do not have to include the occupants of passenger cars’ rear seats or any occupants of pickup trucks or many vans, even though over 10,000 occupants of such vehicles die each year in crashes. While the number of deaths resulting from crashes of light trucks and vans might be sufficient reason for focusing greater attention on increasing belt use in these vehicles, other data also point to this need. Recent data on crashes show that occupants killed in light trucks were ejected at twice the rate of occupants of passenger cars. NHTSA officials told us that safety belts are very successful in preventing such ejections. NHTSA estimated that annually 3,600 occupants of light trucks die and 54,000 are injured because they do not use safety belts.
Also, a 1994 national survey of belt use showed an overall rate of 63-percent use for occupants of passenger cars and 50 percent for occupants of light trucks. The disparities between the use rates in cars and light trucks indicate that special emphasis and targeted programs are needed to increase belt use by the occupants of light trucks. NHTSA currently does not have such emphasis or programs.

Federal Role in Encouraging Safety Belt Use

NHTSA officials told us that they have limited authority to encourage the states to enact stronger safety belt laws—primary enforcement, higher fines for nonuse, and coverage for all occupants of vehicles. Additionally, NHTSA officials told us that the current political environment that favors local and state initiatives over federal efforts has further reduced the agency’s ability to influence state and local activities. The state officials we interviewed reflected the attitude that the states welcome federal funds but not federal requirements or advice: they still want federal funds for their programs but do not want any federal influence on how the funds are spent. They generally agreed that federal financial and technical assistance have helped them increase belt use, thereby reducing deaths, injuries, and the related costs to society. Several said that the positive changes might not have occurred without NHTSA’s influence and the conditions under which the states could accept federal funds under ISTEA. While NHTSA’s focus has been on encouraging the states to enact and enforce laws on safety belt use, other federal agencies have required, through federal regulations and an executive order, that certain occupants of vehicles use safety belts. The Federal Aviation Administration requires each occupant over 2 years old in an airplane to use safety belts during takeoff and landing. Likewise, the Federal Highway Administration requires commercial drivers of interstate trucks and buses to use safety belts.
Furthermore, Executive Order 12566, issued in September 1986, requires federal employees to use safety belts when driving on official duty. Federal efforts have been effective in encouraging federal employees to use safety belts in motor vehicles. For example, 48 federal organizations reported a rate of at least 90-percent belt use during 1993 based on observational surveys. Although federal and state officials often disagree on the roles that federal and state agencies should play in traffic safety, several recent polls indicate general public acceptance of laws on mandatory safety belt use. A recent nationwide public opinion poll of 1,000 people by McKeon and Associates found strong support for safety belt laws. A large majority opposed any weakening or repeal of the laws. These results support findings in individual states. For example, California reported widespread public knowledge about and compliance with the state’s recent primary enforcement law. Also, a poll conducted in 1994 for South Carolina found that 88 percent of the state’s residents supported the state’s law on mandatory safety belt use.

Canadian Safety Belt Laws Are Strong and Very Successful

Safety belt use laws and programs in Canada have been very effective in achieving a high rate of belt use. As of mid-1994, Canada reported that its nationwide rate of belt use was about 90 percent in passenger cars and 88 percent in all vehicles, including vans and light trucks. Five of the 12 Canadian jurisdictions reported rates of belt use over 90 percent. Only one jurisdiction reported a use rate lower than 75 percent. In comparison, NHTSA estimates that the rate of safety belt use in the United States in 1994 averaged either 58 percent or 67 percent, depending on the methodology used. Laws mandating safety belt use were enacted in all 12 Canadian jurisdictions between 1976 and 1992; most were enacted during the 1980s.
All the jurisdictions’ laws require primary enforcement (compared with 20 percent of the states in the United States), and all the laws cover occupants of light trucks and vans. Fines for noncompliance are generally higher in Canada than in the states, and five Canadian jurisdictions provide for demerit points against driver’s licenses for violating belt use laws. In contrast, no U.S. state requires demerit points for such violations. Four states, however, provide demerit points for violating laws on restraints for children. Canada’s success with safety belts appears to result in large part from designating increased belt use as a top national priority. Safety belt use in Canada had leveled off at about 75 percent between 1987 and 1989. In 1989, Canadian officials endorsed the recommendation “to have each jurisdiction set itself the goal of reaching a seat belt use rate of 95% for all occupants by 1995.” The Canadian Council of Motor Transport Administrators developed a strategy, known as the National Occupant Restraint Program (NORP), to assist the jurisdictions in reaching the 95-percent goal. NORP involved a 6-year strategy in two phases. Phase I was a short-term strategy during late 1989 and all of 1990 that included centralizing the development of training and briefing materials and the delivery of those materials through coordinating committees in each jurisdiction. Phase II, covering 1991-95, involved coordinating, in each jurisdiction, intensive campaigns for enforcement and awareness as well as efforts to reduce the number of exemptions from the laws on safety belt use. The province of Newfoundland’s experience illustrates how the Canadian strategy has worked. The province enacted its law on mandatory safety belt use in 1982. In 1989, the rate of belt use was observed to be 64 percent. In 1990, Newfoundland adopted demerit points for violations of the law, and belt use increased to 84 percent.
The demerit system assesses 2 points for most driving infractions, including nonuse of belts, and the accumulation of 12 points in a 2-year period results in suspension of the license. As public awareness campaigns and enforcement programs continued in 1991, belt use increased to 91 percent. One of the strategies recommended by NORP was the issuance of at least 4,000 citations for safety belt violations per year per million population; the rate for 1991 in Newfoundland was 12,525. In 1992, Newfoundland removed many of the exemptions in its belt use law, and the rate of use reached almost 95 percent. The rate remained above 95 percent during 1993 and 1994. This high level of belt use was maintained despite a decrease in the number of citations issued per million population from 12,525 in 1991 to 507 in 1993. A Canadian official said the public has been motivated more by the demerit points provided by the law than by the $45 fine.

Conclusions

The Congress faces difficult decisions in balancing the federal and state roles concerning safety belts while reducing deaths, injuries, and the costs to society. Increases in the rate of belt use can still be made in many states through better enforcement of existing laws, but the larger increases are likely to be achieved through stronger and more comprehensive state laws on belt use. Stronger state laws could help reduce the thousands of deaths and serious injuries and save up to $20 billion in costs incurred annually because safety belts are not used. The general public, through higher taxes and insurance premiums, pays most of the medical costs for those who fail to use safety belts. The large number of deaths and injuries and the costs to society for nonuse of safety belts will likely continue unless the states adopt stronger and more comprehensive safety belt laws. Federal strategies can be improved in a variety of ways.
The House Committee on Appropriations recently directed NHTSA to develop and distribute to the states in 1996 a model safety belt law in order to more aggressively encourage nationwide use of safety belts. States could be encouraged to implement comprehensive safety belt programs that provide for primary rather than secondary enforcement; coverage of all of the occupants in all of the vehicles in which belts are installed, including the occupants of passenger cars’ rear seats and the occupants of light trucks and vans; and aggressive enforcement and higher fines/penalties to encourage belt use. Strong federal involvement has the advantage of facilitating the nationwide implementation of comprehensive strategies that have proven successful in the states in increasing belt use and reducing deaths, injuries, and the costs to society. A disadvantage is that the states would have less authority to structure their own programs. NHTSA has reported that the rate of belt use by the occupants of light trucks is only 50 percent. Considering that light trucks now constitute about 40 percent of the new vehicles sold and are increasingly being used to transport passengers, deaths, injuries, and costs could be avoided by giving special attention to increasing belt use by the occupants of these vehicles.

Matter for Congressional Consideration

Increased seat belt use has the potential to prevent thousands of deaths and serious injuries and save billions of dollars in medical costs, lost productivity, and other expenses resulting annually from the nonuse of safety belts. The federal government’s role in encouraging safety belt use is ultimately a policy decision for the U.S. Congress. Current federal legislation provides for both grants and penalties to encourage the states to enact safety belt laws or improve enforcement of existing laws.
Comprehensive programs that include primary enforcement laws, aggressive enforcement, and vigorous public education offer the best opportunity for increasing belt use. If the Congress wants to promote this type of program nationwide, it could encourage the states to adopt a primary enforcement law that covers all occupants in all vehicles in which belts are installed. Those states that do not enact such a comprehensive law could continue to be subject to the provision in the Intermodal Surface Transportation Efficiency Act requiring a transfer of up to 3 percent of their federal-aid highway funds to their state highway safety programs.

Recommendation to the Secretary of Transportation

In view of the large differences in the rates of safety belt use between the occupants of passenger cars and the occupants of light trucks, we recommend that the Department of Transportation provide special emphasis and targeted programs to increase belt use by the occupants of light trucks.

Agency Comments and Our Evaluation

We provided copies of a draft of our report to DOT for its comments. We met with agency officials, including the Director, Office of Occupant Protection, NHTSA, and these officials agreed with the report’s findings, conclusions, matter for congressional consideration, and recommendation. The officials agreed that an effective way to increase the nationwide rate of safety belt use is for the states to have a primary enforcement law that contains fines to discourage noncompliance and is aggressively enforced. They agreed that such a law should also cover all of the occupants of all motor vehicles in which belts are installed. The officials provided a number of editorial and technical comments, which we have incorporated in the report where appropriate.
State Laws on Safety Belt Use

[Table not reproduced: the table listed, for each state, the observed belt usage rate (percent) and the motor vehicles covered by the state’s belt use law (for example, “motor vehicles after model year 1964 designed to carry no more than 10 persons”), along with exemptions such as buses, farm tractors, trailers, medical and emergency vehicles, and other special-use vehicles.]

Major Contributors to This Report

Transportation and Telecommunications Issues: Ronnie E. Wood, Assistant Director; R. Kenneth Schmidt, Evaluator-in-Charge (retired); MeShae Brooks-Rollings; Karlton P. Davis; Susan K. Hoffman; David K. Hooper; Lynne Goldfarb; Paul D. Lacey; Sara Ann Moessbauer; Phyllis F. Scheinberg; Mike Volpe.
Pursuant to a congressional request, GAO reviewed whether federal and state efforts have been successful in increasing the use of safety belts in motor vehicles, focusing on: (1) the progress that has been made in achieving seat belt use goals; (2) state strategies that have been successful in increasing seat belt use; and (3) federal strategies that could increase seat belt use. GAO found that: (1) since 1982, the use of safety belts nationwide has increased significantly; (2) the National Highway Traffic Safety Administration (NHTSA) is unable to report on safety belt use with any accuracy because state surveys use varying methodologies to measure seat belt usage; (3) NHTSA could increase the reliability of usage rates if it developed narrower survey guidelines, but changes are unlikely, since state seat belt laws vary and NHTSA no longer provides financial incentives to encourage states to improve their surveys; (4) states with the highest usage rates generally have primary enforcement laws, which allow law enforcement officers to ticket violators solely for not using seat belts, visible and aggressive enforcement, and active public information programs; (5) states with primary enforcement laws averaged 15 percent higher use rates than states with secondary enforcement laws; (6) financial disincentives in federal transportation law have encouraged many states to adopt primary and secondary enforcement laws; (7) as of 1992, 17 states did not require occupants of light trucks or vans to use safety belts; (8) the lack of laws governing restraint use in light trucks has become an increasing problem, since these vehicles have unfavorable rollover rates and their sales are increasing; (9) the fines assessed for not using seat belts remain low; and (10) the federal government and states could increase the use of safety belts by developing and distributing a model safety law and enacting laws that provide for primary enforcement, coverage of all occupants in all types
of vehicles, aggressive enforcement, and higher fines.
Background

CIESIN (pronounced “season”) was established in 1989 as a private, nonprofit organization chartered by the state of Michigan. It is structured as a consortium of university and nongovernmental research organizations. The current members of the consortium are University of Michigan, Ann Arbor, Michigan; Michigan State University, East Lansing, Michigan; Saginaw Valley State University, University Center, Michigan; University of Maryland, College Park, Maryland; Polytechnic University, Brooklyn, New York; and Environmental Research Institute of Michigan, Ann Arbor, Michigan.

CIESIN’s Mission

According to CIESIN officials, CIESIN’s mission is to provide access to, and enhance the use of, information worldwide on human interactions in the environment and to serve the needs of scientists and public and private decisionmakers. CIESIN uses computer and communications technology to provide tools to search for, obtain, and understand data. Some of its current activities include (1) providing information on the human aspects of global environmental change to the research and policy communities; (2) furnishing computer tools for data access, research, and analysis across academic disciplines; (3) serving as a bridge between the socioeconomic and natural science research communities; (4) operating the Socioeconomic Data and Applications Center (SEDAC); (5) managing the U.S. Global Change Research Information Office; and (6) continuing the development of its Information Cooperative. The Information Cooperative is being developed to enable worldwide cataloging of data archives to be shared over the Internet. It is intended to enable rapid access to information about human activity and its relationship to the environment through a network of U.S. federal entities, including NASA, National Oceanic and Atmospheric Administration, Department of Agriculture, Environmental Protection Agency, and Agency for Toxic Substances and Disease Registry; state and regional environmental information systems; U. N.
agencies, such as the World Health Organization; other multilateral entities, such as the Organization of American States and the World Bank; individual foreign countries, including China and Poland; and selected nongovernmental organizations and international scientific research programs. Members of the Information Cooperative work with CIESIN to make their databases related to the human dimensions of global change (HDGC) compatible with and accessible through CIESIN’s computer network and to maintain those databases. CIESIN plans to continue to work on broadening the database capabilities and data sources supporting its mission. Further information on CIESIN’s Information Cooperative is in appendix I.

CIESIN’s Funding

To date, CIESIN has received most of its funding from federal agencies, including the Departments of Defense and Agriculture, the Environmental Protection Agency, the Office of Science and Technology Policy, and NASA, primarily through earmarked appropriations—portions of lump-sum amounts appropriated to agencies for general purposes. Appropriations for CIESIN through fiscal year 1995 totaled over $89 million, exclusive of over $42 million provided for building a CIESIN headquarters facility, which was subsequently withdrawn. Through June 1995, federal agencies had furnished CIESIN over $82 million, of which CIESIN had used almost $74 million, as shown in table 1. Nonfederal sources of funding for CIESIN totaled about $505,000 through June 1995. Most of this funding has been fees from members of CIESIN. Because CIESIN funding was largely specifically designated, the funding agencies initially had to work with CIESIN to find functions or activities for CIESIN to perform that related to their missions. These agencies used various mechanisms, including grants, cooperative agreements, and contracts, to provide funds to CIESIN for work that generally fell into three main categories: (1) obtaining data sets, (2) integrating data systems, and (3) developing software to support retrieval and analysis of data.
For example, CIESIN’s work for the Department of Agriculture includes five tasks under that agency’s Global Change Data Assessment and Integration Project: (1) data survey, assessment, integration, and access; (2) data rescue; (3) Geographic Information Systems; (4) knowledge transfer; and (5) laboratory support. Officials at the federal agencies providing significant funding for CIESIN told us that, in the absence of actual or anticipated earmarks, they would not have requested funding for CIESIN because budgets were tight and they all had higher-priority, mission-related requirements. Of the federal agencies currently funding CIESIN, only NASA plans to do so after the remaining funds are used. NASA’s continued funding of CIESIN is for developing and operating a SEDAC. The SEDAC is one of nine Distributed Active Archive Centers under NASA’s Earth Observing System Data and Information System. The SEDAC is to develop and implement innovative activities that integrate data from both the social and natural sciences and respond to high-priority information needs of policy decisionmakers. The SEDAC is also to make the HDGC data it holds—and the earth science information held by the eight other centers—easily available to the scientific community. NASA noncompetitively awarded the SEDAC contract to CIESIN in June 1994 for 1 year with up to four 1-year extensions. In June 1995, NASA exercised the first of these extensions, which runs to June 1996. NASA officials told us that, subject to an annual review, they plan to fund the SEDAC at about $5.7 million each year. This amount is less than half of the average annual funding CIESIN received from federal agencies prior to fiscal year 1995. CIESIN had an increasing flow of operating funds through fiscal year 1992. However, federal agencies’ funding has been decreasing in recent years, as shown in table 2.
As a result of decreasing funding and in anticipation of the pending loss of most of its current funding sources, CIESIN has been developing and implementing a strategy to find new customers from domestic, international, governmental, and commercial sources. As of June 1995, CIESIN had submitted 17 proposals and was preparing 12 more. The submitted proposals went to federal agencies, state and local government agencies, and the United Nations. The planned proposals would be funded by a mixture of private corporations, foreign countries, and state or federal agencies. CIESIN has also competed for and won peer recognition. For example, CIESIN won a competition against 22 other nominees for the Computerworld Smithsonian Award in the category of Environment, Energy, and Agriculture for its Gateway software, which is briefly described in appendix I. This award honors creative and innovative uses of technology from throughout the world. In addition, the National Research Council’s Committee on Geophysical and Environmental Data has recommended consideration of CIESIN as a World Data Center to the International Council of Scientific Unions. The recommendation was agreed to at an early 1995 meeting of the International Council of Scientific Unions. Other awards that CIESIN has competed for, and/or won, are identified in CIESIN’s letter found in appendix III.

NASA’s Oversight of CIESIN

In 1994, NASA brought CIESIN under a contract, in place of a grant, to ensure the development and operation of a SEDAC. As the SEDAC operator, CIESIN is primarily responsible for providing users with access to HDGC data and information. It neither conducts nor sponsors basic HDGC research. In early 1994, NASA’s Associate Administrator for Mission to Planet Earth wrote that “by rescoping CIESIN’s mission to include only SEDAC-related activities, NASA now possesses the necessary expertise to manage CIESIN.
Because the context within which SEDAC will operate is data management and integration, NASA is more uniquely qualified for this role than any other federal agency.” To help it oversee CIESIN’s management of the SEDAC, NASA established a SEDAC Users Working Group in November 1994. The working group consists of social scientists and other experts from universities, state and federal agencies, and environmental groups and other private institutions. According to one of the co-chairs of the working group, the working group has significant influence over the SEDAC, and it makes sure the SEDAC serves the needs of both the earth sciences and socioeconomic segments of the global change research community. The working group has thus far offered several recommendations for improvements.

National Science Foundation’s HDGC Centers

In the Conference Report explaining the Veterans Affairs and Independent Agencies Appropriations Act for fiscal year 1995, Congress provided the National Science Foundation with $6 million “. . . for a global climate change initiative for a center or consortium for the human dimensions of global climate change.” In December 1994, the Foundation announced the special funding opportunity to facilitate HDGC research, promote HDGC education, and foster interdisciplinary research collaborations on HDGC issues. The Foundation intends to sponsor a variety of HDGC activities under the funding opportunity. The Foundation received 52 proposals for its funding opportunity competition. One was from CIESIN, in which it proposed to provide a data archive and resource center for the HDGC research community. Thus, as HDGC researchers work under Foundation grants, CIESIN would provide electronic data and software support services.
When the research is completed, CIESIN would archive and provide access to the research data. Consequently, there would be no duplication between the functions to be performed by the Foundation’s HDGC research centers and research teams and the functions CIESIN would perform.

CIESIN’s Building Requirements

Through the early part of fiscal year 1993, Congress had appropriated over $46 million for the proposed CIESIN building but in subsequent years gradually withdrew the funding. In the fiscal year 1993 Veterans Affairs and Independent Agencies Appropriations Act, Congress earmarked $42 million for the CIESIN headquarters facility. Later, in July 1993, Congress reduced the amount of available appropriations to $37 million. In October of that same year, Congress directed that another $10 million not be used until completion of a NASA Inspector General report. The fiscal year 1995 appropriations act specifically rescinded $10 million. The balance of $27 million was recently rescinded by law. The earmarked items in NASA’s fiscal year 1991 and 1992 appropriations included a total of $4.4 million for planning and designing a headquarters facility for CIESIN. In February 1994, as a result of its Inspector General’s report, which questioned the need for the building, NASA issued a stop-work order on the engineering design work, freezing finalization of the building design. In all, over $3 million was spent on planning and designing the facility. Another $75,000 to $150,000 will be spent to terminate the facility contract. The balance of over $1 million will remain with NASA in the appropriate account for expired unobligated balances. Although the question of federal funding of a headquarters facility for CIESIN is no longer applicable, the question of NASA’s support for CIESIN’s facilities infrastructure is still an open issue, primarily because, as previously noted, CIESIN’s support from federal agencies has been declining.
Unless CIESIN is successful in its efforts to generate new business, further reductions will occur with the cessation of the current support CIESIN is receiving from the Departments of Agriculture and Defense and the Environmental Protection Agency. Such prospects raise the issue of the extent of NASA’s future support of CIESIN’s infrastructure under the SEDAC contract, especially under governmentwide guidance for federal agencies’ use in determining the cost of work performed by nonprofit organizations, such as CIESIN. Under the government’s cost principles for nonprofit organizations, the costs of idle capacity are allowable for a reasonable period of time—ordinarily not to exceed 1 year—if the facilities were necessary when acquired but are now idle due to changes in program requirements. CIESIN is currently located in various leased facilities in Washington, D.C., and Ann Arbor and University Center, Michigan. NASA has not yet evaluated the extent to which it should support CIESIN’s infrastructure for SEDAC purposes once other federal agencies’ funding of CIESIN ceases. If NASA must reduce its support of CIESIN’s facilities, it could consider the cost/benefit of various alternatives, including reducing the overall space at currently leased facilities, consolidating activities at fewer existing facilities, and relocating to reasonably accessible vacant federally owned space. NASA officials at the Goddard Space Flight Center told us they would be examining the continuing need for NASA’s support of CIESIN’s current management structure, as well as its facilities, under the SEDAC contract. The Value and Future Use of CIESIN’s Completed Work Have Not Been Determined Department of Defense, Environmental Protection Agency, and Department of Agriculture officials expressed general satisfaction with CIESIN’s performance, including the technical quality and timeliness of its work. 
These agencies will have spent over $15 million by the time they terminate their current relationships with CIESIN. The products they receive from CIESIN’s efforts have not been examined for their potential usefulness to the U.S. Global Change Research Program. Federal agency officials we spoke with said such an examination would be useful in identifying the products relevant to the needs and priorities of the global research community. Noncompetitive SEDAC Contract Requires Rejustification NASA’s award to CIESIN for the SEDAC was not competed. NASA based its justification for other than full and open competition on the belief that the award of a sole-source contract to CIESIN for the SEDAC was statutorily authorized and, therefore, was an appropriate exception to the competitive requirements set forth in the Competition in Contracting Act. However, we believe that the award to CIESIN was not directed by statute. The Comptroller General has held that language in congressional committee reports and other legislative history about how funds are expected to be spent does not impose legal requirements on federal agencies. Only the language of the enacted law imposes such requirements. In this instance, the conference report, rather than the law, called for CIESIN to function as a SEDAC. Thus, the noncompetitive award to CIESIN could not properly be justified on the basis that it was statutorily authorized. The next opportunity NASA will have to determine whether the noncompetitive award to CIESIN can be justified as an exception to the competitive requirements in the Competition in Contracting Act on a basis other than that it was statutorily authorized is prior to exercising the next 1-year option in June 1996.
Recommendations We recommend that the NASA Administrator direct (1) procurement officials at the Goddard Space Flight Center to determine, by the end of fiscal year 1996, the extent of the CIESIN infrastructure that should be supported under the SEDAC contract and, if this determination shows that a reduction in NASA’s support is warranted, to examine the cost/benefit of various alternative actions, including relocating the SEDAC to excess federally owned space that is reasonably accessible to the SEDAC-user community; (2) program officials, in conjunction with the U.S. Global Change Research Program’s Subcommittee on Global Change Research and other appropriate interested parties, to evaluate, and incorporate into the Earth Observing System Data and Information System, any useful CIESIN products developed for the Departments of Agriculture and Defense and the Environmental Protection Agency; and (3) procurement officials at the Goddard Space Flight Center to reexamine the Competition in Contracting Act exemptions to full and open competition and, prior to exercising the next 1-year option on the contract, determine whether an appropriate exemption justifies continuation of the noncompetitive award of the SEDAC contract to CIESIN. Agency Comments We obtained formal written comments from both NASA and CIESIN. NASA agreed with our recommendations. NASA officials stated that they appreciated our effort to review CIESIN. (See app. II for NASA’s comments.) CIESIN generally agreed with the report and elaborated on various points discussed in the report. (See app. III for CIESIN’s comments.) Scope and Methodology Our methodology included examining applicable laws, regulations, and policies; interviewing CIESIN and federal agency officials; reviewing plans, contract files, and financial and program reports; and accessing and testing CIESIN’s databases.
Specifically, we discussed the nature of CIESIN’s mission and its past, present, and potential future activities with CIESIN officials. Also, we obtained documentation of its funding sources from CIESIN and the federal agencies involved. The material reviewed included federal awards audit reports, audited financial statements, Defense Contract Audit Agency reports, and NASA and Department of Agriculture Inspector General reports. In evaluating CIESIN’s future funding level and building requirements, we discussed future funding plans for CIESIN with federal agency officials, obtained information on CIESIN’s ongoing and planned activities, and discussed the funding levels needed to perform the SEDAC mission with NASA headquarters and Goddard Space Flight Center officials. Further, we reviewed documents associated with the establishment and negotiation of the SEDAC contract, visited CIESIN’s main operating facilities, and obtained information on current and planned staffing levels from CIESIN officials. We discussed NASA’s role in overseeing work on the human dimensions of global change with NASA, National Science Foundation, and Office of Science and Technology Policy officials. We also reviewed documents and held discussions with National Science Foundation and CIESIN officials related to (1) the National Science Foundation’s approach to carrying out congressional direction to establish HDGC centers or a consortium and (2) the relationship of such centers to CIESIN activities. 
We conducted our review at CIESIN in University Center and Ann Arbor, Michigan, and Washington, D.C.; NASA headquarters, Washington, D.C.; NASA’s Goddard Space Flight Center, Greenbelt, Maryland; the NASA Inspector General at the Lewis Research Center, Cleveland, Ohio; the Department of Defense, Washington, D.C.; Environmental Protection Agency, Washington, D.C., and Research Triangle Park, North Carolina; Department of Agriculture headquarters, Washington, D.C., and Greenbelt, Maryland; Office of Science and Technology Policy, Washington, D.C.; Office of Management and Budget, Washington, D.C.; National Science Foundation, Arlington, Virginia; and the U.S. Global Change Research Program, Arlington, Virginia. We conducted our review from October 1994 to August 1995 in accordance with generally accepted government auditing standards. Unless you announce its contents earlier, we plan no further distribution of this report for 30 days from its issue date. At that time, we will send copies to the Chairmen of the Senate Committee on Commerce, Science, and Transportation and of the House and Senate Appropriations Committees; the Director of the Office of Management and Budget and the Director of the Office of Science and Technology Policy, Executive Office of the President; the NASA Administrator; the Secretary of Agriculture; the Secretary of Defense; and the Administrator of the Environmental Protection Agency. We will also provide copies to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions. The major contributors to this report are listed in appendix IV. Consortium for International Earth Science Information Network’s Information Cooperative The Information Cooperative has been developed and is being expanded by the Consortium for International Earth Science Information Network (CIESIN) to allow the cataloging of data archives worldwide, which will be shared over the Internet.
The Information Cooperative facilitates CIESIN’s accomplishing its mission of providing access to worldwide information on human interactions in the environment. The data may take the form of the actual data or of information about the data and how best to obtain them from their original source. The Information Cooperative also provides a means for communication and coordination among global change research organizations, fosters common standards for data access, and makes data available to nations with developing and transitional economies. An important part of the Information Cooperative is the Gateway software, which is a single means of entry to a large number of databases by using state-of-the-art search software and Internet access. Because various databases around the world are often incompatible, the Gateway allows users to simultaneously search many different databases and to rapidly identify and obtain data from various database sources without knowing where the data are coming from—seamless searching. Much of the Information Cooperative is still under development. The connection points currently online, in addition to CIESIN, include the SEDAC, the Department of Agriculture, Environmental Protection Agency, National Oceanic and Atmospheric Administration, Agency for Toxic Substances and Disease Registry, U.S. Global Change Master Directory, Inter-University Consortium for Political and Social Research, Great Lakes Regional Environment Information System, Great Lakes Information Management Resource, World Health Organization, Roper Center, World Bank, and the country of Estonia. Further information about CIESIN can be obtained from CIESIN’s World Wide Web site on the Internet (http://www.ciesin.org). Comments From the National Aeronautics and Space Administration Comments From the Consortium for International Earth Science Information Network The following are comments on the CIESIN letter dated September 11, 1995.
GAO Comments 1. Our focus was on the initial time period when CIESIN and the agencies began to have a technical and business relationship. Most agencies, at that time, had structured their ongoing activities without considering a role for CIESIN. When funding for CIESIN was earmarked, they had to adjust their activities to accommodate it. 2. The National Aeronautics and Space Administration (NASA) and our office held numerous discussions about the basis of the justification NASA cited for exemption from the Competition in Contracting Act. However, any decision about the justification for an exemption from full and open competition would be NASA’s. Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Office of Information Management and Communications, Washington, D.C. Elizabeth F. Blevins, Librarian Carol F. Johnson, Librarian William F. Tuceling, Librarian The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the activities of the Consortium for International Earth Science Information Network (CIESIN), focusing on: (1) its mission and funding; (2) the National Aeronautics and Space Administration's (NASA) oversight of CIESIN work on the human dimensions of global change (HDGC); (3) the similarities between CIESIN and the National Science Foundation's (NSF) Centers for HDGC; and (4) CIESIN building requirements. GAO found that: (1) CIESIN enhances scientists' and decisionmakers' use of information on human interactions in the environment through access to HDGC databases worldwide; (2) four federal agencies have provided most of the $82 million in CIESIN funding; (3) although they are satisfied with its performance, three of the agencies will cease CIESIN funding due to budget constraints and higher priority needs; (4) NASA will continue funding CIESIN so that it can develop and operate a Socioeconomic Data and Applications Center (SEDAC) which will incorporate socioeconomic data into its Earth Observing System Data and Information System; (5) federal funding reductions will cause CIESIN to compete for grants and contracts from other sources; (6) NASA believes it can appropriately oversee CIESIN SEDAC activities; (7) there is no duplication of effort between NSF centers for HDGC and CIESIN because CIESIN does not conduct or sponsor basic research; (8) Congress appropriated about $42 million in fiscal year (FY) 1993 to build CIESIN headquarters, but has subsequently withdrawn all but about $3 million; (9) NASA can support only those CIESIN-leased facilities that support SEDAC activities; and (10) to maximize the usefulness of CIESIN work and to justify NASA noncompetitive contracting decisions, CIESIN work needs to be evaluated for its usefulness to federal programs and the noncompetitive SEDAC contract award needs to be justified.
Progress Made In Establishing Federal CIO Positions To reap the full benefits of new technologies, federal agencies must have effective information management leaders who can transform IT dollars into prudent investments that achieve cost savings, increase productivity, and improve the timeliness and quality of service delivery. This was widely recognized by the Congress in the 1990s as it worked in conjunction with the administration to craft several key information management reform laws, notably the Federal Acquisition Streamlining Act of 1994, the revision of the Paperwork Reduction Act (PRA) in 1995, and the Clinger-Cohen Act of 1996. Other than the Computer Security Act of 1987, these were the first major information management reforms instituted in the federal government since 1980. The Clinger-Cohen Act, for example, required major departments and agencies to appoint CIOs and implement IT management reforms largely grounded in successful commercial IT management practices. In particular, the act established CIO positions that report directly to the agency heads and have IM as a primary function. As noted below, the CIOs are responsible for a wide range of strategic and tactical information management activities outlined in the Clinger-Cohen Act, such as developing architectures, managing and measuring the performance of IT investment portfolios, and assisting in work process improvements. This mirrors the evolution of the CIO position in industry, where it has largely moved from solely a technical support focus to a much more executive and strategic level position. Under the act, CIOs are expected to work with the agency head and senior program managers to implement effective information management to achieve the agency’s strategic goals; assist the agency head in establishing a sound investment process to select, control, and evaluate IT spending for costs, risks, and benefits; and promote improvements to the work processes used by the agency to carry out its programs.
CIOs are also expected to increase the value of the agency’s information resources by implementing an integrated agencywide technology architecture and to strengthen the agency’s knowledge, skills, and capabilities to manage information resources effectively. Effective selection and positioning of CIOs can make a real difference in building the institutional capacity and structure needed to implement the management practices embodied in Clinger-Cohen and PRA. But the position is both relatively new and evolving in the federal government, and agency leaders face many challenges from the growing expectations for dramatic improvements in implementing improved IT management practices and demonstrating cost-effective results. Just finding an effective CIO can be a difficult task, since the individual must combine a number of strengths, including leadership ability, technical skills, an understanding of business operations, and good communications and negotiation skills. Also, the individual selected must match the specific needs of the agency, which must be determined by the agency head based on the agency’s mission and strategic plan. The CIO must recognize the need to work as a partner with other business or program executives and to build credibility in order to be accepted as a full participant in the development of new organizational systems and processes and to achieve successful outcomes with IT investments. Even with the right person in place, the agency head must make a commitment to the success of the CIO by assuring that adequate resources are available and a constructive management framework is in place for implementing agencywide IT initiatives. The resolution of problems founded in unsound investment control processes, poor project management, and weak software development and acquisition capabilities requires executive commitment and active support. CIOs’ progress in working with agency executives to meet these challenges has been mixed.
On the positive side, responding to the Year 2000 (Y2K) date conversion challenge helped most agency leaders recognize the importance of consistent and persistent top management attention to information management and technology issues. Progress has been made in strengthening IT management capabilities in order to rectify past failures with costly modernization efforts, e.g., by developing IT architectures, strengthening cost-estimating processes, and improving software acquisition capabilities. In addition, in responding to Y2K, many agencies developed inventories of their information systems, linked those systems to agency core business processes, and jettisoned systems of marginal value. Moreover, more agencies have established much-needed IT policies in areas such as system configuration management, risk management, and software testing. According to officials at the Office of Management and Budget (OMB), the Y2K problem also gave agency CIOs a “crash course” in how to accomplish projects. Many CIOs were relatively new in their positions, and expediting Y2K efforts required many of them to quickly gain an understanding of their agency’s systems, work extensively with agency program managers and chief financial officers (CFOs), and become familiar with budgeting and financial management practices. The Federal CIO Council has also facilitated positive developments. For example, the Council has been working actively with the Office of Personnel Management to develop special pay rates for hard-to-hire IT professionals. It has facilitated the development of a web-based information consolidation tool, which provides a standard IT budget reporting format and should assist agencies in linking their internal planning, budgeting, and management of IT resources.
The Council also assisted administration officials in tracking the progress of Presidential Decision Directive 63, which tasked federal agencies with developing critical infrastructure protection plans, identifying and evaluating information security standards and best practices, and building communication links with the private sector. Further, in addressing the Y2K challenge, the Council participated in governmentwide efforts to develop best practices for Y2K conversion and to address important issues such as acquisition and Y2K product standards, data exchange issues, telecommunications, buildings, biomedical and laboratory equipment, and international issues. Still, agencies face incredible challenges in effectively managing their IT investments and in assuring that these investments make the maximum contribution to mission performance that is possible. Some of our recent reviews have found that fundamental IT investment processes are incomplete and not working consistently to help achieve better project outcomes. For example, IT portfolio selection, control, and evaluation processes and performance metrics have not been developed to gauge the progress of investments or their contribution to program outcomes. Acquisitions may be executed faster, but in many cases the link to program performance is lost, so the real value of the investment cannot be determined. In short, agencies could be clearer about how IT investments are being or will be used to improve performance or help achieve specific agency goals, and could ensure that better data exist to guide informed decisions.
Other common problem areas include inadequate progress in designing and implementing IT architectures before proceeding with massive modernization efforts and immature software development, cost estimation, and acquisition practices. These are areas where the agency heads were assigned specific responsibility in the PRA and in the Clinger-Cohen Act, and for which CIOs were appointed to help rectify poor agency track records. Information security is another widespread and growing problem confronting federal CIOs. A rash of break-ins at federal websites and disruptions caused by the Melissa computer virus and other malicious viruses sent via the Internet recently highlighted this concern. However, our reviews show that this problem runs much deeper. In particular, our October 1999 analysis of our own and inspector general audits found that 22 of the largest federal agencies were not adequately protecting critical federal operations and assets from computer-based attacks. Among other things, we found that agencies lack the strong, centralized leadership needed to protect critical information and assets as well as sound security planning, effective control mechanisms, and speedy response to security breakdowns. These weaknesses pose enormous risks to our computer systems and, more important, to the critical operations and infrastructure they support, such as telecommunications, power distribution, national defense, law enforcement, government services, and emergency services. In the case of computer security, too, the responsibility has been given to the agency heads by the PRA and Clinger-Cohen Act, with CIOs to provide support.
Clearly, more remains to be done to realize the full potential of CIOs as information management leaders; to build CIO organizations that have the credibility needed to be successful; to define the measures necessary to gauge this success and demonstrate results; and to put in place the structure for organizing information management to meet pressing business needs. The CIO executive guide that we are releasing today is designed to help resolve these challenges. Through our research and interviews with CIOs and other executives in case study organizations, we have developed a framework of critical success factors and leading principles. Federal agencies can turn to this guide for pragmatic assistance in leveraging the CIO position. Learning to Maximize the Success of CIO Organizations Mr. Chairman, our research has demonstrated that CIOs of leading organizations use a consistent set of IM principles to execute their responsibilities successfully. These principles, listed below, span a broad range of management imperatives, from executive leadership and change management through organizational design and workforce development. Some principles need to be addressed by top executives across the organization, rather than by the CIO. For example, along with other top executives, the chief executive officer (CEO) must recognize the role of IM in creating value to the business before appointing a CIO. In addition, the CEO must also undertake responsibility for defining and instituting the CIO position. The other principles are squarely within the domain of the CIO. For example, the CIO must take full responsibility for ensuring the credibility of the IM organization. While other leaders can contribute to this principle, the CIO must be seen as the leader of the unit and must consistently raise the visibility and demonstrate the value of the IM organization across the enterprise.
Overall, the principles are strikingly simple and strongly supported by a wide range of other CIO-based research. Nevertheless, consistent attention and commitment often remain elusive, and this marks the notable difference between leading organizations and others. The six principles of CIO management are to (1) recognize the role of IM in creating value, (2) measure success and demonstrate results, (3) organize IM to meet business needs, (4) develop IM human capital, (5) position the CIO for success, and (6) ensure the credibility of the IM organization. Let me also underscore, Mr. Chairman, that the principles are most effective when implemented together in a mutually reinforcing manner. As ad hoc efforts, each principle addresses a single aspect that, while necessary, is not sufficient for success by itself. And the failure to execute a single principle may render others less effective. Nevertheless, organizations may find it more feasible to address one principle before another. The Foundations for Achieving CIO Success: Consistent Critical Success Factors and Key Characteristics The six principles we identified naturally fell into three critical success factors that are useful for understanding issues of implementation and impact. These critical success factors are (1) align IM leadership for value creation, (2) promote organizational credibility, and (3) execute IM responsibilities. These success factors provide focus for the CIO when planning how to address the six principles. As the CIO develops strategies for approaching each of the six principles, he or she must consider who else in the organization must be involved in the leadership and what parts of the organization must be involved in the implementation. Within each critical success factor, a specific level of the organization contributes to the leadership, along with the CIO, and a specific part of the organization is involved in carrying out the activities that lead to the successful execution of the factor.
For example, to align IM leadership for value creation, the CEO and most other senior executives must actively endorse the CIO and demonstrate the CIO’s role in the strategic management of the organization. The second success factor requires the collaboration of the next lower layer of management where IM successes will be observed. Finally, the third factor is where the rubber hits the road, and the IM organization itself must demonstrate its effectiveness. Each principle identified in our guide is also defined by key characteristics. These key characteristics represent the specific approaches we observed that contribute to the success of the CIO. For example, to ensure the credibility of the IM organization, successful organizations ensure that (1) the CIO model complements organizational and business needs, (2) the CIO’s roles, responsibilities, and accountabilities are clearly defined, and (3) the CIO has the right technical and management skills to do the job. To define performance measures, IM managers generally engage both their internal and external partners and customers and continually work at establishing feedback between performance measurement and business processes. As CIOs or senior agency executives use our guide, they may want to compare their organization to these key characteristics to assess the extent to which their organization resembles those we visited in the development of our guide. They may also gain insight into what aspects of their organization they should address as they work to enhance the effectiveness of their CIO position. Our guide also presents case studies illustrating how these key practices are employed within specific organizations. And it suggests specific strategies for implementing both principles and characteristics. 
How Leading Organizations Compare With Federal CIO Management Practices In our discussions with half of the Federal CIO Council members, they agreed that the six primary principles emerging from our study were relevant to the issues and challenges confronting them. However, the specific approaches to executing those principles differed, and for a number of principles, the federal sector seemed not to provide much focus at all. For example, while leading organizations generally define the role and authority of their CIO position carefully given the needs of the enterprise, and then select a CIO with the skills to meet the challenge, senior executives in the federal sector do not seem to go through the same process of linking CIO type and skills to agency needs. In addition, leading organizations work hard to forge partnerships at the top levels of the organization, something seen less frequently in the federal sector. This lack of attention to the CIO as the focal point of IM practice in the agency extends to the failure of agency heads to include their CIOs in executive business decision-making. In the federal government setting, IM is still too often treated as purely a technical support function rather than a strategic asset critical to improving mission performance and achieving more cost-effective results. As a result, the CIO’s role is often further from the strategic planning of the organization than in the organizations we contacted for our guide. Moreover, federal organizations are often less flexible in reassigning IM staff and structuring capabilities across business and technology lines due to the highly decentralized IM responsibilities found in many large agencies. Also, the relative inflexibility of federal pay scales makes it difficult to attract and retain the highly skilled IT professionals required to develop and support the systems being proposed.
I will be discussing these and other constraints further momentarily, but I would like to point out that such challenges tend to slow the progress of implementing other principles. Interestingly, the practices of federal CIOs tended to be most similar to those of the CIOs in our study for those principles over which CIOs could exert the most personal control. That is, federal CIOs tend to use the same approach to building credibility within the enterprise as our case study CIOs did. In addition, both groups of CIOs tend to have similar problems with performance measures and demonstrating results. Our case study CIOs had made more advances in building links between IM and business objectives, but the measures themselves are still evolving. On the federal side, the ties to mission performance are not as strong, perhaps because of a lack of collaboration between the program areas and the IM organization in the development of mission requirements, though provisions of the Clinger-Cohen Act are providing the motivation to improve this process. Additional Constraints on Federal CIOs Warrant Further Attention Our interviews with federal CIOs and agency executives helped to highlight several aspects of the environment in which federal CIOs operate that are, in some respects, not common in private industry. In some cases, analogies do exist outside the federal sector, but it is important to understand these differences as contextual factors affecting the speed, pace, and direction of CIO integration in the federal government. As such, these factors may warrant further dialogue and empirical study. The outcomes of these discussions and reviews can form the basis for a constructive dialogue between the Congress and the executive branch on future revisions to IT management statutes and executive branch policies. First, senior executive management in the federal sector can differ significantly from the private sector.
The agency head and other top executives are political appointees who are often more focused on national policy issues than on building the capabilities essential for achieving the desired strategic and program outcomes. This can deny the CIO the CEO-level support that is so critical for the successful integration of IM into the core business or mission functions. The Clinger-Cohen Act addresses this situation by holding the agency heads accountable for IT and requiring the CIOs to work with other executives in the management of their agencies’ information resources. Second, the federal budget process can create funding challenges for the federal CIO that are not found in the private sector. For example, certain information projects may be mandated or legislated, so the CIO does not have the flexibility to decide whether to pursue them. This ties up IT investment funds that might otherwise have been spent on other priorities. Additionally, the annual budget cycle of the federal government creates a great deal of uncertainty in the funding levels available year to year, particularly when IT dollars are part of overall agency discretionary spending. The multitude of players in the budget process can also lead to unexpected changes in funding and the loss of the connection between the budget and the achievement of agency mission. This can create dynamic decision-making challenges for long-term investment strategies. Further, IT funds are often contained within the appropriations for a specific program, making them less visible. As a result, the CIO may not have control or direct oversight of key parts of the IT funding within the agency. The Clinger-Cohen Act addresses this by requiring fact-based decision-making for project initiation and control. OMB is charged with reviewing the decision support and inspecting the link between budget proposals and expected performance outcomes. 
Third, human capital decisions in the federal sector are often constrained relative to the flexibility found elsewhere. Current federal IM job descriptions do not match the occupations recognized in the IM industry today. Funds for skill refreshment are often among the first to be scaled back in across-the-board budget cuts. The Office of Personnel Management has also found that IM salaries in the federal government are lower than in the private sector and that incentives available in the private sector do not exist in the federal government. Fourth, the federal CIO may direct an organization without the full range of functional responsibilities that would typically be a CIO’s responsibility in the private sector. For example, some federal CIOs are in charge of larger policy and oversight functions with little operational responsibility. While this may be an appropriate model for some agencies, it is critical that any model be matched to the overall needs of the agency, with legislative responsibilities in mind. Fifth, the range of responsibilities, as defined by legislation, that accrues to the CIO is very broad in the federal sector, including areas such as records management, paperwork burden reduction and clearance, and Freedom of Information Act requirements, for which there is little parallel in the private sector. While federal CIOs often may not have operational responsibility for the full range of activities covered in legislation, they are charged with ensuring that these functions are effectively performed. Leadership turnover; shifts in business direction, priorities, and emphasis; changing funding levels; and human capital issues are real issues in all organizations—public and private. As such, these constraints should not be viewed as reasons why the federal CIO cannot be successful. Instead, these constraints should be recognized and anticipated so that effective management approaches can be put in place to mitigate risks and address accountability. 
Concluding Remarks Mr. Chairman, as the federal government moves to fully embrace the digital age and focuses on electronic government initiatives, leadership in the management of the government’s information resources is of paramount importance. Yet, as our study shows, a CIO, as a single individual, cannot ensure the successful implementation of information management reforms. Rather, the CIO must be buttressed by the full support of agency heads, the commitment of line managers, clearly defined roles and responsibilities, effective measures of performance, highly skilled and motivated IT professionals, and a range of other factors. The practices and key characteristics defined in our CIO guide can put agencies on the right path toward incorporating these ingredients. Moreover, they can help agencies and their CIOs to identify and correct underlying IM weaknesses that have undermined their modernization initiatives. They can even help ensure that agencies will be well positioned to take advantage of cutting-edge technologies in order to transform service delivery and performance. However, implementing the practices alone is not enough. To achieve real success, agency executives as well as the Congress must provide sustained support and attention to facilitating CIO effectiveness and addressing any structural challenges facing CIOs. Using this support, CIOs themselves must now focus on results—making sure that IT investments make their agencies more innovative, efficient, and responsive. Mr. Chairman, this completes my statement. I would be happy to answer any questions that you or Members of the Subcommittee may have. Contact and Acknowledgments For future contacts regarding this testimony, please contact David L. McClure at (202) 512-6257. Individuals making key contributions to this testimony included Cristina Chaplain, Lester Diamond, Tamra Goldstein, Sondra McCauley, Tom Noone, and Tomas Ramirez. 
(511704)
Background Distance education is not a new concept, but in recent years, it has assumed markedly new forms and greater prominence. In the past, distance education generally took the form of correspondence courses— home study courses completed by mail. Distance education today can take many forms and is defined by federal law and regulation as education that uses one or more technologies (such as the Internet or audio conferencing) to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor. Instruction provided through the Internet—or online—may be synchronous (simultaneous or “real time”) or asynchronous, whereby students and the instructor need not be present and available at the same time (see fig. 1). In general, for their students to be eligible for federal student aid funds under Title IV programs, schools must be legally authorized by a state, accredited by an agency recognized by Education, and be found eligible and certified by Education. State governments, accrediting agencies, and Education form the program integrity triad established by Title IV of the HEA to oversee postsecondary education. The state authorization role is primarily one of providing consumer protection through the state licensing process, while the accrediting agencies are intended to function as a quality assurance mechanism. In certifying a school for participation, Education is responsible for determining the financial responsibility and administrative capability of schools and is also responsible for monitoring to ensure compliance with Title IV requirements. Accrediting agencies, private educational associations set up to review the qualifications of member schools, are the primary overseers of schools’ academic quality. Accreditation is a peer review process that evaluates a school against the accrediting agency’s established standards. 
An institutional accrediting agency assesses a school in its entirety, including resources, admissions requirements, services offered, and the quality of its degree programs, while a programmatic accrediting agency reviews specific programs or single-purpose schools. A school’s accreditation is re-evaluated every 3 to 10 years, depending on the accrediting agency. If a school makes a substantive change to its educational programs or method of delivery from those that were offered when the agency last evaluated the school, the agency must ensure the change continues to meet standards. Schools may lose accreditation if their accrediting agency determines that they no longer meet the established standards. While Education does not have the authority to dictate the specifics of an agency’s standards, the department recognizes accrediting agencies by reviewing and assessing their standards in various areas required by statute, such as student achievement, curricula, and student support services. Education’s Office of Federal Student Aid (FSA) is responsible for monitoring the over 6,000 postsecondary schools participating in Title IV programs to ensure their compliance with applicable statutory and regulatory provisions and to ensure that only eligible students receive federal student aid. The postsecondary school types include the following: Public schools—schools operated and funded by state or local governments, including state universities and community colleges. Private nonprofit schools—schools owned and operated by nonprofit organizations whose net earnings do not benefit any shareholder or individual. For-profit schools—schools that are privately owned or owned by a publicly traded company and whose net earnings can benefit a shareholder or individual. Education fulfills its school monitoring responsibilities through four main activities. 
First, it determines the initial eligibility of schools to participate in the federal student aid programs, as well as recertifies that eligibility periodically. Second, as part of ensuring compliance, FSA staff conduct program reviews of a select number of schools each year where they examine school records, interview school staff and students, and review relevant student information, among other things. FSA issues reports on these reviews, which include information on areas where a school was found to be in violation of the Title IV requirements. Third, schools are required to employ independent auditors to conduct annual compliance reviews and financial audits, which are then submitted to Education. Finally, Education’s OIG conducts its own audits and investigations of schools to identify and combat fraud, waste, and abuse and makes recommendations to the department. Education may assess liabilities and/or impose fines or other sanctions on schools found in violation of Title IV requirements. Brief History of Statutory Provisions Related to Distance Education The statutory 50 percent rule, which had limited Title IV participation for schools offering the majority of their courses through distance education, was eliminated following Education’s completion of a mandated distance education demonstration project. The project was undertaken to (1) test the quality and viability of expanded distance education programs, (2) provide increased student access to higher education, and (3) determine the specific statutory and regulatory requirements that should be altered to provide greater access to high-quality distance education programs. In 2005, Education reported to Congress that waivers of the 50 percent rule did not lead to increases in fraud and abuse of Title IV funds. 20 U.S.C. § 1099b(c)(2). Recognized accrediting agencies that do not already have distance education within their scope of review may add distance education to their scope by notifying Education in writing. 
Such agencies must monitor the head count enrollment at each school they accredit, and if any school experiences an increase of 50 percent or more within 1 year, the agency must report that information to Education and also submit a report outlining the circumstances of the increased enrollment and how the agency evaluates the capacity of the school. Education submits that report to the National Advisory Committee on Institutional Quality and Integrity (NACIQI) for consideration in reviewing the agency’s change in scope. 20 U.S.C. §§ 1099b(a)(4)(B)(i)(II) and 1099b(q); 34 C.F.R. §§ 602.19(e), 602.31(d), and 602.34(c)(1). Distance Education Has Become Common in All Sectors and Is Offered through a Range of Programs and Courses Online Distance Education Has Grown Dramatically and Is Offered in a Variety of Ways While distance education can use various technologies, it has grown most rapidly online with the use of the Internet to support interaction among users. With the emergence of the Internet and the expansion of Internet-based communication technologies, distance education today is a common phenomenon and widely used throughout higher education. Moreover, the term “distance education” no longer connotes only instruction separated by physical distance, since many distance education courses—specifically online courses—are offered to students living on campus as well as away from a campus and across state lines. School offerings in online learning range from individual classes to complete degree programs. Individual courses as well as degree programs may also be a mix of face-to-face and online instruction—often referred to as “hybrid” or “blended” instruction. Furthermore, an online class may be synchronous (simultaneous, real-time instruction), or asynchronous, where students and the instructor are not present and available at the same time. According to a 2008 study on distance education conducted by Education, postsecondary schools of all types offer a variety of distance education courses. 
Specifically, for the 2006-2007 school year, 61 percent of 2-year and 4-year schools reported offering online courses, 35 percent reported hybrid/blended courses, and 26 percent reported other types of distance courses. The study also suggests that the majority of schools offering distance education used asynchronous Internet technologies. Specifically, 92 percent of the degree-granting postsecondary institutions offering distance education in 2006-2007 reported using asynchronous Internet technologies to a moderate or large extent, compared with 31 percent of schools that reported using synchronous technologies to a moderate or large extent. In our interviews at the schools we selected, officials said that online, asynchronous instruction was also their predominant method for providing distance education and that this type of instruction meets students’ need for flexible schedules. For example, over half of the school officials we interviewed noted that many students taking classes online are working adults or active duty military service members who would otherwise be unable to continue or complete their studies. The use of distance education, particularly online learning, has grown dramatically in recent years. According to a 2010 industry survey, online enrollment in degree-granting postsecondary schools has continued to grow at rates far in excess of the growth for total enrollment in higher education. Survey results indicate that over 5.6 million students were taking at least one online course during the fall 2009 term—an increase of nearly 1 million students over the number reported the previous year and an increase of 21 percent, as compared with the less than 2 percent growth in the overall higher education student population. The survey also suggests that nearly 30 percent of higher education students were taking at least one course online. 
Such remarkable growth may be attributed to institutional efforts to expand access to more students and alleviate constraints on campus capacity, as well as to the desire to capitalize on emerging market opportunities and compete with other schools. According to Education’s 2008 study on distance education, which includes online and other forms of distance education, the top four factors affecting postsecondary schools’ decisions regarding distance education offerings are (1) meeting student demand for flexible schedules; (2) providing access to college for students who otherwise would not have access due to geographic, family, or work-related reasons; (3) making more courses available; and (4) seeking to increase student enrollment. Several of these factors, such as providing access to more students, were also cited by school officials we interviewed. For example, one school we visited had increased access to education by establishing over 20 “cyber-centers,” including one on a National Guard base and another in a shopping mall, where students can access computers with Internet capabilities and participate in online courses as well as complete assignments and take exams. Additionally, officials at two of the schools we interviewed noted that on-campus students were registering for online classes, instead of face-to-face classes that were otherwise full or scheduled for times of day that conflicted with their personal schedules. Furthermore, one school we interviewed provided flexibility to its students by allowing them to begin and complete courses at their own pace. While cost savings might be a factor, none of the school officials we spoke with cited cost savings as the primary reason for providing online distance education courses and programs. Moreover, they said students taking distance education courses, including online courses, are generally charged the same tuition and fees as students taking face-to-face courses. 
These officials cited various costs associated with developing and expanding online distance education offerings, such as the purchase of hardware and software (which includes a learning management system), course development, faculty training and salaries, and the provision of student support services. They also said online instruction is not necessarily less expensive to provide, in part, because schools have to provide similar support services to both online students and classroom students—such as tutoring, library access, and (virtual) faculty office hours. For example, officials at three schools mentioned one of the major expenses associated with online distance education is providing off-hours library access or tutoring. Also, almost all the officials said it is often difficult to isolate the costs of online courses from the costs of providing traditional courses. Professors generally teach both online and face-to-face course sections, and the infrastructure developed for online distance education, such as the online learning management systems, can also be used by students and instructors participating in face-to-face instruction. A Wide Variety of Schools Provide a Range of Distance Education Courses and Programs Schools of all types reported offering distance education, according to data collected by Education through its annual IPEDS survey. Specifically, during the 2009-2010 school year, 46 percent of all Title IV eligible schools reported that they offered distance education opportunities to their students. Figure 2 shows the variation among these schools by sector and program length. As shown in figure 2, public schools, both 2- and 4-year, were more likely to offer distance education opportunities than private nonprofit or for-profit schools. Among public schools, distance education was more likely to be offered at 2-year schools rather than 4-year schools. 
One school official we spoke with attributed this likelihood to the increased number of students at 2-year schools, given the weak economy and limited capacity at 4-year public schools. With regard to minority-serving institutions and institutions with specific high minority concentrations, IPEDS data indicate that these institutions are as likely or more likely to offer some distance education than all schools combined, with the exception of Hispanic-serving institutions. For the 2009-2010 school year, more than 60 percent of Historically Black Colleges and Universities and Tribal Colleges and Universities offered distance education opportunities to their students, compared with about 46 percent of institutions overall. Furthermore, 49 percent of Asian/Pacific Islander/Native Hawaiian-serving institutions offered distance education to their students. Among Hispanic-serving institutions, just over 30 percent of these schools were offering distance education (see fig. 3). With regard to the size of schools that offer some distance education, the IPEDS data suggest that larger schools—as defined by enrollment—are more likely to offer distance education opportunities than smaller schools. Specifically, 23 percent of schools with fewer than 1,000 students offered distance education, while 96 percent of larger schools—those with 20,000 or more students—did so (see fig. 4). The 2008 distance education study by Education provided additional insights on the extent and nature of distance education offerings by school type, sector, and size. In terms of full degree and certificate programs, the study indicated that in the 2006-2007 academic year, about a third of all degree-granting schools offered entire degree programs or certificate programs through distance education. Additionally, public schools were more likely to offer a degree or certificate program entirely through distance education than were private schools. 
Larger schools were also more likely to offer a degree or certificate program entirely through distance education than smaller schools (see table 1). Students in Distance Education Enroll Mostly in Public Schools and Represent a Diverse Population Most Distance Education Students Attend Public Schools and Study a Range of Subjects Our analysis of the NPSAS data for the 2007-2008 academic year showed that of the estimated 5 million postsecondary students who have taken distance education, participation was most common among students attending public schools. These students enrolled in a range of academic fields of study. Most distance education students enroll at public schools. As might be expected, most undergraduate and graduate students taking distance education courses or programs were enrolled at public schools, followed by private nonprofit and private for-profit schools (see fig. 5). Distance education students enroll in a variety of fields of study. Both undergraduate and graduate students taking distance education courses or programs had higher rates of enrollment in the fields of business and health. Undergraduates taking distance education courses and programs also often majored in the humanities (liberal arts), while graduate students often studied education. While Students in Distance Education Tend to Be Older and Female, and Have Family and Work Obligations, They Are Also a Diverse Population According to our analysis of 2007-2008 NPSAS data, distance education students varied somewhat from students who did not enroll in distance education in that they tended to be somewhat older and female, and have family and work obligations. Moreover, students who are participating in distance education represent a diverse population that includes students of all races, current and former members of the military, and students with disabilities. 
Some of these characteristics are consistent with what we reported in our 2002 testimony on distance education and also were corroborated in our recent interviews with selected schools for this report. Distance education students tend to be older. As figure 6 shows, undergraduate and graduate students who took distance education courses or programs were about 3 years older, on average, when compared with students who did not take any distance education courses. Distance education students are more often female. Women represented about 61 percent of undergraduate students who took distance education courses or programs, compared with about 56 percent of undergraduates who took no distance education, and about 57 percent of undergraduates overall. For graduate students, the percentage of students taking distance education courses or programs who were female was about 65 percent, which was higher than those who took no distance education (59 percent) and the overall percentage of graduate students who were female (61 percent). Distance education students more often have family obligations. Figure 7 shows that undergraduate and graduate students who took distance education courses or programs were more often married and had dependents than those taking no distance education courses. Distance education students more often work full time. A higher percentage of students who took distance education courses or programs worked full time when compared with students who did not take any distance education courses. This difference was greatest among graduate students—about 74 percent of the students who took distance education courses or programs worked full time compared with 57 percent of students who did not take any distance education courses. For undergraduates, the figures were 45 percent and 31 percent, respectively. Students of all races and ethnicities participate in distance education to some extent. 
Postsecondary students of various races and ethnicities participated in distance education (see fig. 8). Current and former members of the military enrolled in postsecondary education participate in distance education. Forty-five percent of active duty service members, 29 percent of reservists, and 30 percent of veterans enrolled in postsecondary education took distance education courses or programs. In addition, of those enrolled in postsecondary education, 42 percent of active duty service members with a disability and 29 percent of veterans with a disability took distance education courses or programs. Taken together, active duty service members, reservists, and veterans represented about 7 percent of all students taking distance education courses and programs, compared with 4 percent of students who took no distance education. Students with disabilities participate in distance education. Twenty-one percent of all students with disabilities, including members of the military and civilians, enrolled in distance education courses or programs. Further, 25 percent of students with disabilities affecting their mobility took distance education courses or programs. Students with disabilities represented 10 percent of all students taking distance education courses and programs, while students with mobility disabilities represented about 3 percent. Many of these student characteristics were also noted by school officials we interviewed. These school officials reported that they collect data such as age, gender, and race and ethnicity of their students. The demographic data provided from schools generally showed similar student characteristics as that suggested by the 2007-2008 NPSAS data—that distance education students tend to be older and female, and have work and family obligations. Officials of at least three of the schools we selected indicated that many of their students taking classes online are veterans or students serving in the military. 
While at least three schools reported tracking students who identified themselves as having disabilities, at the time of our interviews, none of these schools indicated that they had determined how many of these students were taking online distance education classes. Officials at one of these schools, however, conducted some analysis after our interview and reported that about 3 percent of their students enrolled in the past year had documented disabilities. These students took, on average, 15 percent of their classes online. While most of the schools where we conducted interviews collected demographic data on their students, including those taking courses online, less than half of these schools have compared the demographics of students taking completely online courses with those taking face-to-face courses. Officials at five schools mentioned that comparing data on students can be difficult, in part, because students can take courses or degrees through a mix of instructional modalities—including completely online, hybrid/blended (mix of online and face-to-face), and completely face-to-face. For example, officials from one private nonprofit 4-year school that offers completely online as well as blended courses and degrees said that it is difficult to collect comparison data because the school’s administrative records do not differentiate online students from those who enroll in both online and campus-based courses. Accreditors and Schools Assess the Academic Quality of Distance Education in Several Ways, but Accreditors Reported Some Oversight Challenges Accrediting Agencies Examine the Quality of Distance Education in Various Ways but Reported Some Challenges Accreditors we interviewed have various procedures to examine schools’ distance education programs, but some accreditors reported they face challenges. 
Federal law and regulations require accrediting agencies to have standards that address student achievement, curricula, faculty, and student support services, among other areas. In addition, accreditors must ensure that schools have a process in place to verify registered students are doing their own work by using methods such as secure logins, passwords, proctored examinations, or other technologies. However, accrediting agencies are not required to have separate standards for distance education. As such, accreditors we spoke with who accredit both distance education and face-to-face programs use the same standards for both, although they differed in the practices they used to examine schools offering distance education. The accreditors we spoke with conduct reviews of schools’ distance education programs according to the accreditors’ own standards. For example, to address the effectiveness of a program, accreditors may review such measures as student retention rates, completion/graduation rates, student satisfaction, placement rates (if applicable), and various measures of student learning. The three regional accreditors we spoke with give schools the responsibility for determining the best way to assess student learning for both face-to-face and distance education programs. However, both national accrediting agencies and the specialized accreditor we spoke with have specific quantitative thresholds as minimum standards on various outcomes. For example, one national accreditor requires that their member schools meet specific thresholds for student retention and placement rates. Officials at this agency said they could sanction schools whose programs fall below these standards. 
The other national accreditor we spoke with also requires its schools to meet thresholds established for outcomes such as course completion rates, program graduation rates, student satisfaction rates, and student learning (as measured by professional licensing exams such as those for physical therapists and lawyers). One regional accreditor said it was exploring including standardized learning outcomes in its accreditation standards. As part of their periodic site visits to schools to assess the quality of academic programs, accreditors have to adapt their approach when reviewing schools with distance education. For example, accreditors are required to employ staff who are well trained and knowledgeable about distance education when performing on-site reviews of schools providing distance education. Officials at all six accrediting agencies we spoke with said they include such experts on their on-site review teams. At one regional accreditor we interviewed, distance education experts are tasked with specifically reviewing the quality of a school’s distance education learning infrastructure, as well as the educational effectiveness of its programs, and receive specific training to do so. To review schools’ student supports, faculty supports, and educational effectiveness, officials at another regional accreditor told us their distance education experts may use video teleconferences or e-mails to communicate with administrative staff, faculty, and students not located on campus. These experts also remotely observe interactions between students and faculty in online classes. In addition to the periodic on-site accreditation reviews to reassess a school’s accreditation status that are required by statute, accreditors are to be notified if schools make substantive changes to their academic programs or institutions.
The main purpose of this substantive change policy is to ensure that when schools make changes, they are maintaining the same level of quality they had when last reviewed. While there are a number of circumstances that can trigger the substantive change requirement, the one most applicable to distance education is the addition of courses or programs that represent a significant departure, including in method of delivery, from those offered when the accreditor last evaluated the school. A shift to distance education courses that constitute more than 50 percent of a program’s offerings was the substantive change threshold used by four of the six accrediting agencies we interviewed. Officials at one regional accrediting agency reported that, in calendar year 2010, the agency turned down 34 percent of initial substantive change requests for new distance education programs because of weak student learning assessments or inadequately trained faculty, among other reasons. However, they said this figure has since come down to about 16 percent because schools have had more training on how to develop a substantive change proposal. To ensure academic integrity, the six accrediting agencies we interviewed require schools to provide evidence that they verify registered students are doing their own work. For example, officials at one regional accreditor we spoke with said they require schools to use a student identification number and password as the minimum for verifying student identity. This accreditor said most institutions also verify student identity through student interaction during the course. In addition, one national accreditor we spoke with said some schools design tests that require a login and password, and may also feature pop-up questions during tests, prompting students to enter verification information such as their address or mother’s maiden name.
While the accreditors we interviewed have a range of activities to assess the quality of distance education, a few accrediting agency officials and industry experts we spoke with also expressed some concerns and reported challenges involved in assessing the quality of distance education. These challenges were mostly related to accreditors’ capacity to keep pace with substantive changes and conduct follow-up quality reviews with schools. Officials at one regional and one national accrediting agency said they have had some difficulty keeping up with the high number of substantive change applications for new online programs. Officials representing the national accreditor said these applications have increased by about 30 percent and that the agency has had to double the number of evaluators on its staff over the last 5 years. According to officials with the regional accreditor, they have increased the number of follow-up reviews to ensure that schools address concerns about meeting quality standards identified during the initial site visits. These officials reported that they withdrew one school’s accreditation for failure to demonstrate that its distance education programs met the same standards as its face-to-face programs, with respect to curriculum, resources, support, and student learning outcomes. Industry experts also acknowledged that some accreditors have limited resources and have had problems training their peer reviewers in distance education.
Schools Use a Range of Course Design Principles and Student Performance Assessments to Hold Distance Education to the Same Quality Standards as Traditional Courses

To assure that their distance education programs are accredited by federally recognized accreditors and that their students qualify for Title IV funding, officials we interviewed at 20 selected schools reported that they generally apply certain course design principles and use student performance assessments to assess the quality of the courses that make up these programs. The accreditors we spoke with require schools to have standards that address the quality of degree programs with respect to such things as student achievement, which could include such measures as course completion, licensing exams, and job placement rates, as well as student support services. A majority of school officials reported that they assess their distance education courses by the same standards they use for their traditional courses. Officials at most of the schools we spoke with said they used instructional teams to design their distance education courses according to the schools’ standards. These teams varied in their composition and activities. Some teams include specialized staff who work with faculty to translate traditional face-to-face courses to the online environment. For example, one school we visited in Florida has a 20-member instructional design team that includes instructional designers, graphic artists, multimedia technicians, and quality control coordinators. Officials at this school said the design team considers which instructional methods are most appropriate for the material delivered in each online course. For example, a psychology course may use mostly text-based storytelling, while an anthropology class may rely more heavily on video clips.
Officials at an online school we spoke with stressed the need to replace face-to-face course instructors’ body language and tone of voice cues with appropriate text and video media. Besides assisting professors with designing online courses, school officials said instructional design teams also train professors in the pedagogical differences of teaching online and on the online technology used by the school. Officials at over half of the 20 schools we interviewed also reported that, to ensure quality in the design of their courses, they had used standards and best practices, some of which were developed by distance education industry experts. For example, 5 schools subscribe to Quality Matters, a nonprofit organization that lays out principles for designing quality online and blended courses. This organization sets specific standards for learning objectives, technology, faculty-student interaction, student supports, and assessment that online courses must meet in order to receive Quality Matters certification. In addition, school officials reported that their schools collect outcome data to help them assess the quality of courses. The types of learning outcomes that the schools reported tracking include end-of-course grades, course completion rates, and results of national professional licensing assessments. Officials at most schools we spoke with said they also used outcome data to make improvements to their courses. Officials at two schools told us they employ staff to analyze these data and make recommendations for course updates. For example, officials at one fully online school we spoke with noticed their students were performing below the national average on a section of a third-party end-of-course criminal justice test. The officials used the results of this test to strengthen the related material. According to these officials, their criminal justice students’ performance improved on that section of the exam subsequent to their course improvements. 
In addition to using outcome data to improve their courses, one school we spoke with in Florida had collected these types of data on their online and hybrid courses over a period of 15 years to determine which factors most influenced student success. To meet accreditors’ requirement to verify the identities of students enrolled in their distance education courses or programs, officials at most of the schools reported using various methods. For example, most of the school officials we interviewed said they issue students a secure login and password and some also use other methods, such as proctored exams. Officials at one school said they are also starting to use audiovisual software that works as a web cam to verify the student taking an exam is the one enrolled in the course and to ensure the student is not receiving assistance. In addition to technological safeguards, officials from one school said the interaction between students and faculty is key to ensuring students are doing their own work. They said instructors become familiar with a student’s writing or communication style through online discussions or the completion of assignments, and the instructor recognizes if that style changes. Officials at one school said they cannot be completely sure that distance education students are doing all of their own work even when using these methods; officials also noted that similar challenges exist for face-to-face courses. A few schools mentioned taking further steps to combat potential fraud in their online programs. Specifically, officials at two of the completely online schools we interviewed said they conduct reviews of or request further documentation from students who register with the same e-mail addresses or telephone numbers. 
Officials at one school we spoke with said they would like more guidance, either from Education or their institutional accrediting agency, on examples of verification and authentication systems for student identity to improve the school’s monitoring of the verification process.

Education Has Increased Its Monitoring of Distance Education but Lacks Sufficient Data to Inform Its Oversight

Education’s Office of Federal Student Aid (FSA) has recently increased its monitoring of distance education by updating its program review procedures and undertaking a risk analysis project. These efforts are in response to the expansion of distance education and the Education OIG’s identification of distance education as a high-risk area for managing student aid dollars. To better monitor distance education, FSA updated and issued new program review procedures. The previous set of FSA’s procedures, issued in 2008, did not provide in-depth guidance for assessing whether a school was approved to offer distance education or if there was regular and substantive interaction between instructors and students. The new procedures on distance education provide staff with expanded guidance for assessing a school’s compliance with these requirements. FSA officials said staff have been trained on the new procedures and, as of June 2011, have been using them for program reviews. All program reviews will include at least routine testing to determine basic program eligibility for schools that offer distance education, according to Education officials. Schools that offer more than half of any of their programs through distance education will also be required to undergo expanded testing for regular and substantive interaction.
Compliance with federal student aid requirements by schools offering distance education programs is difficult to assess because many of the violations Education identifies through its program reviews are not specific to distance education; for those that are, Education does not necessarily identify or code the violations as such in its database, according to an Education official. For example, violations such as a school not appropriately returning Title IV funds when a student withdraws are coded in Education’s database based on the type of violation rather than whether this violation occurred in traditional or distance education. Violations specific to distance education that are tracked by Education are related to a lack of regular and substantive interaction between instructor and students and certain accreditation issues, such as an accrediting agency being ineligible because it does not have distance education in its scope. Education reported that from October 2005 through May 2011, no program reviews or audits identified any lack of regular and substantive interaction or distance education accreditation violations. To select schools for review under its risk analysis project, FSA used several indicators to identify a school’s risk, including a change in school sector (e.g., from proprietary to private nonprofit or from private nonprofit to proprietary), an audit or investigation by the OIG, and the distribution of a high percentage of full student loans, as this may be an indicator that a school is not appropriately monitoring student withdrawals for return of student aid funds. FSA officials said they conducted 25 reviews and the OIG is conducting 2 audits; FSA is conducting this project in conjunction with others in the department. FSA officials were not able to estimate a date when all final project reports will be issued, but said their last program review was conducted in early August 2011.
They said the results of the project, including its methods for identifying high-risk schools and the procedures used, will be evaluated to determine if any changes need to be made to FSA’s annual program reviews. While the objective of the project was to review high-risk distance education schools, Education lacked data to adequately identify schools’ level of risk based on the extent to which they offered distance education and the amount of federal student aid they received for those programs or courses. For example, to identify high-risk schools that may be offering distance education courses and programs, one indicator Education relied on was the Department of Defense’s enrollment information on its military members. Because distance education provides the flexibility needed to fit active duty service members’ duty schedules and location, many military members are enrolled in distance education courses and programs. Therefore, in its risk analysis, Education included schools that had 200 or more military members receiving tuition assistance from the Department of Defense. While Education’s IPEDS database can show which schools offer distance education, it lacks information on the extent of a school’s offerings and enrollment levels. Despite Education’s use of data from multiple sources, one of the 27 schools originally selected for review through the risk analysis did not actually offer distance education. As a result, FSA officials said they had to substitute another school for the study. While the project is not yet complete, officials reported confidence that their study is currently based on an appropriate selection of schools.
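The screening logic described above can be thought of as a simple rules-based filter over school-level data. The sketch below is purely illustrative: it is not Education's actual methodology, and every field name and numeric cutoff other than the 200-member military enrollment threshold named in the report is a hypothetical placeholder.

```python
# Illustrative sketch only: a simplified screen modeled on the risk
# indicators described in the report (sector change, an OIG audit or
# investigation, a high share of full student loans, and 200 or more
# military members receiving tuition assistance). Field names and the
# 0.90 loan-share cutoff are hypothetical, not Education's.

def flag_high_risk(school: dict) -> list[str]:
    """Return the list of risk indicators a school trips, if any."""
    flags = []
    if school.get("changed_sector"):
        flags.append("sector change")
    if school.get("under_oig_review"):
        flags.append("OIG audit/investigation")
    if school.get("full_loan_share", 0.0) >= 0.90:  # hypothetical cutoff
        flags.append("high share of full student loans")
    if school.get("military_ta_enrollment", 0) >= 200:  # threshold from report
        flags.append("200+ military members on tuition assistance")
    return flags

schools = [
    {"name": "School A", "changed_sector": True, "full_loan_share": 0.95},
    {"name": "School B", "military_ta_enrollment": 150},
]
for s in schools:
    hits = flag_high_risk(s)
    if hits:
        print(s["name"], "->", ", ".join(hits))
# School A -> sector change, high share of full student loans
```

Note that a filter like this is only as good as its inputs, which is the report's point: without data on the extent of each school's distance education offerings, even a multi-indicator screen can select a school that offers no distance education at all.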
Nevertheless, they acknowledged that, in selecting their target schools, they lacked sufficient data to help them identify the extent to which a school was offering distance education as well as the amount of federal dollars being spent for distance education at each school, both of which would have been significant in evaluating a school’s risk. The Office of Federal Student Aid has plans to collect more information on distance education, but complete information on all schools may not be available for several years. Under its new Integrated Partner Management (IPM) system, which will consolidate data systems on schools receiving Title IV funds, FSA will collect information about how a school’s programs are offered. Specifically, FSA officials said when schools apply for Title IV initial certification or recertification, they will be asked to indicate whether a program is predominantly (more than 50 percent) delivered via the classroom, distance education, correspondence, or independent study. They said the IPM system is expected to be implemented in November 2012 and would eventually allow them to analyze comprehensive data about a school. For example, they will be able to match the extent to which schools offer distance education with Title IV violations identified during program reviews. However, because schools are generally required to recertify only every 6 years, officials acknowledged that it could be several years before the IPM system will contain information on all schools’ distance education offerings. Therefore, distance education information on all schools may not be available through IPM until 2018. In the meantime, Education’s NCES is expanding its IPEDS survey to provide a more in-depth picture of distance education offerings and enrollment patterns. 
The plan by NCES to expand the IPEDS survey with regard to distance education was the result of a decision by its technical review panel to better describe postsecondary education offered throughout the nation, allow schools to compare their distance education activities with those of their peer schools, and provide valuable information to parents and students on available college programs. This expanded data collection will be conducted in phases. The 2011-2012 survey used the definition of distance education as established in 2008 and collected information about whether schools offer their programs completely through distance education. Additional new distance education questions will be added to the 2012-2013 survey. The new survey questions ask for information such as the range of a school’s offerings in distance education, the number of students enrolled either partially or entirely in distance education, and whether the students are located in or out of state in relation to the school (see fig. 9). An NCES official said the new IPEDS data are expected to be available 1 year after the survey closes but may be available earlier. For example, early release data collected during the 2011-2012 survey may be available as early as February 2012 and available publicly by November 2012. Despite the prospect of more comprehensive data on schools and their distance education offerings being collected through IPEDS, FSA does not yet have specific plans to use these data for monitoring school compliance with federal student aid requirements. According to FSA officials, they intend to wait and see what information the survey yields before deciding how to make use of it. Moreover, FSA indicated it was not aware of NCES’s efforts to expand the IPEDS distance education data collection and, therefore, was not involved in the planning and did not provide input during comment periods. 
According to NCES officials, the NCES technical review panel process engages a number of stakeholders and is open to federal officials who are interested in participating.

Conclusions

Distance education, specifically online education, has been developing for a number of years and has become a part of the mainstream of higher education. This delivery mode of instruction has provided some new opportunities and access, particularly for nontraditional students and working adults who are looking to advance their careers. Moreover, it is likely to continue growing, as schools across all sectors and levels see it as a critical educational tool in meeting student needs and demand. The growth in distance education and the sizable federal investment in higher education will challenge all segments of the triad responsible for the oversight of higher education—the states, accreditation agencies, and the federal government—in their capacity to provide consumer protection, ensure academic quality, and protect the federal investment. In response to this challenge, Education has taken steps to increase its oversight by providing its staff with expanded guidance for assessing a school’s compliance with distance education requirements and participating in the OIG/FSA risk project, which identified potential risk indicators. However, a key factor in Education’s ability to properly focus oversight on the areas of greatest risk will be the availability and use of pertinent, up-to-date data on both the extent to which schools offer distance education and the extent to which students use federal aid to attend those programs. While FSA’s IPM system may eventually be helpful in providing Education with the opportunity to monitor distance education with better information, the expanded IPEDS data would provide relevant information much sooner.
However, without a plan on how to use the new IPEDS data to identify and monitor high-risk schools, FSA may lose the opportunity to strengthen its oversight of distance education in the near term. Moreover, if FSA does not coordinate with NCES going forward, it stands to lose the opportunity to provide input on any additionally needed data that may strengthen oversight and ensure accountability in the long term.

Recommendation for Executive Action

To help Education strengthen its oversight of distance education, the Secretary of Education should direct FSA to develop a plan on how best to use the new IPEDS distance education data and provide input to NCES on future IPEDS survey work with regard to distance education.

Agency Comments and Our Evaluation

We provided a draft of this report to officials at Education for their review and comment. Education provided comments, which are reproduced in appendix III of this report, and technical comments, which we incorporated as appropriate. In its comments, Education agreed with our recommendation and noted that FSA will update its School Participation Team procedures to include consideration of IPEDS data on distance education for monitoring schools. Education also stated that FSA will provide input to NCES on the design and results of any future IPEDS surveys that include distance education. We are sending copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology

This appendix discusses in detail our methodology for addressing the following research objectives: (1) the characteristics of distance education today, (2) the characteristics of students participating in distance education, (3) how the quality of distance education is being assessed, and (4) how Education monitors distance education in its stewardship of federal student aid funds. To address these research questions, we reviewed relevant federal laws and regulations, literature, studies, and reports; interviewed officials from Education, representatives from all types of postsecondary schools, accreditation agencies, and distance education and industry experts; and conducted site visits to Florida, Minnesota, and Puerto Rico to interview state agency and school officials. We selected these sites based on various factors, including the level of state data collected and an industry summary of states’ policies for approving distance education. We also analyzed data from Education’s Integrated Postsecondary Education Data System (IPEDS) and the National Postsecondary Student Aid Study (NPSAS) databases to determine the school and student characteristics involved in distance education. We determined that IPEDS and NPSAS data were sufficiently reliable for the purposes of this report based on prior testing of the data from these systems in 2011. The data were tested for accuracy and completeness, documentation about the data and systems used to produce the data was reviewed, and agency officials were interviewed. To determine the current characteristics of distance education, we analyzed 2009-2010 data from Education’s IPEDS and also from a 2008 report by Education’s National Center for Education Statistics (NCES) to obtain a national perspective on distance education practices and offerings at postsecondary schools.
Specifically, we analyzed IPEDS data to provide information on the size, number, sector, and program length of schools offering distance education courses and programs. We used the 2008 distance education report to describe how schools are providing distance education to students, including the type of technology (Internet, video, audio, etc.) and instructional methods (asynchronous and synchronous) used, and the various types of degrees, certificates, and courses offered, including the percentage of courses offered online. In addition, we analyzed the 2010 Sloan Consortium report on online education to show updated enrollment figures specific to online courses. We supplemented the nationally representative data with information obtained from our interviews with industry experts and representatives at a nongeneralizable sample of postsecondary schools regarding the range of delivery and instructional techniques being used, and the type of programs and coursework offered through distance education. To select our sample of postsecondary schools, we used enrollment data from Education’s 2009-2010 IPEDS to identify schools that were offering distance education and had significant increases in total enrollment, which may be due, in part, to increased enrollment in distance education classes or programs. Based on the schools’ percentage change in enrollment, we then selected schools by size—as defined by enrollment— as well as by sector and program length. 
We also considered the following factors in selecting our sample of schools: geographic dispersion by state, minority serving school status (e.g., Historically Black Colleges and Universities and Hispanic-serving institutions), selectivity in accepting students, industry expert or stakeholder recommendations, extent to which distance education programs and courses are offered (totally online schools versus schools offering both campus-based and online instruction), and whether the schools are regionally or nationally accredited. Based on these considerations, we selected 20 schools representing all sectors and program lengths, for site visits or phone interviews (see app. II for a list of colleges and universities we interviewed). Our selected schools break out as follows:

4 public 2-year schools.
5 public 4-year schools.
6 private nonprofit schools.
5 private for-profit schools.

After our interviews with officials from the selected schools, we conducted a content analysis on the information gathered. Interview responses and comments from officials were categorized to identify common themes. The themes were reviewed by a methodologist before all comments were categorized. One analyst coded the information and a second analyst assessed the accuracy of the coding. Disagreements between coders were resolved through discussion. We used the information gathered from these schools for illustrative purposes only. Because the schools were not selected to be representative of all postsecondary schools, the interview results are not generalizable to other postsecondary schools, including groups of schools in the same sector or program length. To determine the characteristics of students participating in distance education courses and programs, as well as those who do not participate, we analyzed Education’s 2007-2008 NPSAS data, the most current available data.
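The two-coder content-analysis step described above, in which a second analyst checks the first analyst's coding, is commonly quantified with percent agreement or Cohen's kappa. The sketch below illustrates that general technique; the report does not state which agreement measure, if any, was computed, and the theme labels are invented for illustration.

```python
# Hedged illustration of inter-coder agreement for a content analysis
# in which two analysts independently assign theme codes to the same
# interview comments. The theme labels below are hypothetical.
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which the two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = percent_agreement(a, b)                     # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

coder1 = ["quality", "quality", "access", "cost", "access", "quality"]
coder2 = ["quality", "access", "access", "cost", "access", "quality"]
print(round(percent_agreement(coder1, coder2), 2))   # 0.83
print(round(cohens_kappa(coder1, coder2), 2))        # 0.74
```

In practice, items on which the coders disagree would then be reconciled through discussion, as the report describes.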
These data allowed us to compare distance education students to nondistance education students on the following characteristics: age, gender, marital status, dependent status, and employment status. The data also allowed us to describe the characteristics of students enrolled in distance education, in terms of type of school attended, field of study, race, veteran status, and disability status. We supplemented this analysis with information from our interviews with selected postsecondary schools and student demographic data provided by school officials. To determine how the quality of distance education programs is being assessed, we obtained information from accrediting agency and school officials and reviewed and analyzed federal laws and regulations related to accreditation. We interviewed officials from six accrediting agencies (three regional, two national, and one specialized) and reviewed their standards and policies to determine how they are assessing the quality of distance education courses and programs. In addition, we reviewed documents from the Council for Higher Education Accreditation (CHEA) website, to gain a broader understanding of accreditation. We also interviewed officials from schools in our sample to describe the specific quality assurance frameworks and the outcomes they use to assess the performance of students engaged in distance education. In addition, we interviewed an official from Quality Matters and reviewed quality standards documents provided at the interview. To determine the extent to which Education is monitoring distance education programs to ensure the protection of federal student aid funds, we reviewed relevant federal laws and regulations regarding distance education oversight requirements. 
We interviewed officials from Education’s Federal Student Aid office and the Office of Postsecondary Education to determine their roles in the monitoring and governance of Title IV programs, specifically with respect to distance education. In addition, we interviewed officials from NCES to learn about their IPEDS data collection efforts and Education’s Office of the Inspector General to learn about their distance education monitoring activities and findings. Finally, we reviewed agency documents, including plans to add distance education variables to the IPEDS survey, OIG testimonies and reports, and an interim status memorandum issued by the OIG/FSA Risk Project. We conducted this performance audit from November 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: List of Colleges and Universities GAO Interviewed

Appendix III: Comments from the Department of Education

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tranchau Nguyen, Assistant Director; Susan Chin, Analyst-in-Charge; Amy Anderson; Jeffrey G. Miller; and Jodi Munson Rodríguez made significant contributions to this report in all aspects of the work. Susan Bernstein contributed to writing this report. Michael Silver, Christine San, and John Mingus provided technical support, and Jessica Botsford provided legal support. Mimi Nguyen assisted with report graphics.

Related GAO Products

Distance Education: Growth in Distance Education Programs and Implications for Federal Education Policy, GAO-02-1125T. Washington, D.C.: Sept. 26, 2002.
Distance Education: Improved Data on Program Costs and Guidelines on Quality Assessments Needed to Inform Federal Policy, GAO-04-279. Washington, D.C.: Feb. 26, 2004. Higher Education: Institutions’ Reported Data Collection Burden is Higher Than Estimated but Can Be Reduced through Increased Coordination, GAO-10-871. Washington, D.C.: Aug. 13, 2010. DOD Education Benefits: Increased Oversight of Tuition Assistance Program Is Needed, GAO-11-300. Washington, D.C.: March 1, 2011. DOD Education Benefits: Further Actions Needed to Improve Oversight of Tuition Assistance Program, GAO-11-389T. Washington, D.C.: March 2, 2011. VA Education Benefits: Actions Taken, but Outreach and Oversight Could Be Improved, GAO-11-256. Washington, D.C.: Feb. 28, 2011. | As the largest provider of financial aid in higher education, with about $134 billion in Title IV funds provided to students in fiscal year 2010, the Department of Education (Education) has a considerable interest in distance education. Distance education--that is, offering courses by the Internet, video, or other forms outside the classroom--has been a growing force in postsecondary education and there are questions about quality and adequate oversight. GAO was asked to determine (1) the characteristics of distance education today, (2) the characteristics of students participating in distance education, (3) how the quality of distance education is being assessed, and (4) how Education monitors distance education in its stewardship of federal student aid funds. GAO reviewed federal laws and regulations, analyzed Education data and documents, and interviewed Education officials and industry experts. GAO also interviewed officials from accrediting and state agencies, as well as 20 schools--which were selected based on a variety of factors to represent diverse perspectives. While distance education can use a variety of technologies, it has grown most rapidly online with the use of the Internet. 
Online distance education is currently being offered in various ways to students living on campus, away from a campus, and across state lines. School offerings in online learning range from individual classes to complete degree programs. Courses and degree programs may be a mix of face-to-face and online instruction--"hybrid" or "blended" instruction. Online asynchronous instruction--whereby students participate on their own schedule--is most common because it provides students with more convenience and flexibility, according to school officials. In the 2009-2010 academic year, almost half of postsecondary schools offered distance education opportunities to their students. Public 2- and 4-year schools were most likely to offer distance education, followed closely by private for-profit 4-year schools. Students in distance education enroll mostly in public schools, and they represent a diverse population. While they tend to be older and female, and have family and work obligations, they also include students of all races, current and former members of the military, and those with disabilities. According to the most current Education data (2007-2008), students enrolled in distance education studied a range of subjects, such as business and health. Accrediting agencies and schools assess the academic quality of distance education in several ways, but accreditors reported some oversight challenges. Federal law and regulations do not require accrediting agencies to have separate standards for reviewing distance education. As such, accreditors GAO spoke with have not adopted separate review standards, although they differed in the practices they used to examine schools offering distance education. Officials at two accreditors GAO spoke with cited some challenges with assessing quality, including keeping pace with the number of new online programs. 
School officials GAO interviewed reported using a range of design principles and student performance assessments to hold distance education to the same standards as face-to-face education. Some schools reported using specialized staff to translate face-to-face courses to the online environment, as well as standards developed by distance education experts to design their distance education courses. Schools also reported collecting outcome data, including data on student learning, to improve their courses. Education has increased its monitoring of distance education but lacks sufficient data to inform its oversight activities. In 2009, Education began selecting 27 schools for distance education monitoring based on an analysis of risk factors, but it did not have data to identify schools with high enrollments in distance education, which may have impeded its ability to accurately identify high-risk schools. Between 2011 and 2013, Education's National Center for Education Statistics (NCES) will start collecting survey data on the extent to which schools offer distance education, as well as enrollment levels. However, the department's Office of Federal Student Aid (FSA), responsible for monitoring Title IV compliance, was not involved in the process of deciding what distance education information would be collected; therefore, it did not provide input on what types of data could be helpful in oversight. Further, FSA officials said they do not yet have a plan on how they will use the new data in monitoring. To improve its oversight and monitoring of federal student aid funds, Education should develop a plan on how it could best use the new distance education data NCES is collecting and provide input to NCES on future data collections. Education agreed with the recommendation. |
Background The Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics has been designated DOD’s Corrosion Executive. The Corrosion Executive is supported by staff assigned to the Corrosion Office. The Corrosion Office was initially established in 2003 as an independent activity within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, reporting directly to the Corrosion Executive. In 2004, the Corrosion Office was formally assigned to the Defense Systems Directorate. The direct chain of command went through the Defense Systems Directorate, which provided management and administrative support. Following a reorganization of the Acquisition, Technology, and Logistics organization in 2006, the Corrosion Office was moved to the Systems and Software Engineering Directorate. The Corrosion Office no longer reports directly to the Corrosion Executive. Appendix III depicts DOD’s organizational structure to address corrosion. The Corrosion Office is led by the Special Assistant for Corrosion Policy and Oversight and works closely with the Corrosion Prevention and Control Integrated Product Team, which has representatives from the military services and other DOD organizations to accomplish the goals and objectives of the Corrosion Office. Several working teams have also been established to conduct work in the seven areas making up the corrosion strategy: policy and requirements; impact, metrics, and sustainment; science and technology; communications and outreach; facilities; training and doctrine; and specifications, standards, and product qualification. The Defense Acquisition Guidebook contains guidance regarding the defense acquisition system, which exists to manage the nation’s investments in technologies, programs, and product support necessary to achieve the National Security Strategy and support the United States Armed Forces. 
This guidebook contains specific guidance regarding acquisition strategies, which define the approach a program manager will use to achieve program goals. Among other things, an effective strategy minimizes the time and cost required to satisfy approved capability needs. DOD’s directive on the defense acquisition process states that program managers shall consider corrosion prevention and mitigation when making trade-off decisions that involve cost, useful service, and effectiveness. Moreover, on November 12, 2003, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a policy memorandum stating that corrosion prevention should be specifically addressed at the earliest phases of the acquisition process by decision authorities at every level. DOD Continues to Have Problems That Hinder Progress in Implementing Its Corrosion Prevention and Mitigation Strategy DOD has had, and continues to have, long-standing problems in funding, identification of impacts, and development of metrics. DOD’s implementation of its long-term corrosion strategy, as required under 10 U.S.C. § 2228(c), has been hindered by weaknesses in these three critical areas. First, the Corrosion Office does not review the services’ corrosion programs or annual budget requests, even though this is required by 10 U.S.C. § 2228(b)(3). Second, the Corrosion Office has made only minimal progress in identifying corrosion impacts. Third, the Corrosion Office has not developed results-oriented metrics, even though we have previously recommended that it do so. DOD’s Corrosion Office Does Not Review All of the Military Services’ Funding Requests Although 10 U.S.C. § 2228(b)(3) requires the Corrosion Office within OSD to review the annual funding requests for the prevention and mitigation of corrosion for each military service, the Corrosion Office has not done so.
The Corrosion Office does not review comprehensive corrosion data from the services on their programs and funding requests because (1) DOD has not required the services to provide budget information to the Corrosion Office and (2) the services lack an effective mechanism for coordinating with the Corrosion Office with respect to their corrosion funding requests. None of the four services has a designated official or office to oversee and coordinate corrosion activities, including identifying annual servicewide funding requirements. Without a requirement or mechanism for reporting service funding information, Corrosion Office officials said they are unable to review the services’ complete corrosion-related funding information, and thus DOD is hampered in its ability to provide oversight of the services’ funding requests. The Corrosion Office currently has oversight over only a small portion of departmentwide corrosion spending that is provided through a separate appropriations account. The Corrosion Office reviews and selects for funding the projects that are proposed by the services based on a combination of criteria, including whether a project would benefit more than one service; whether it is projected to be completed within 2 years of its initial funding; the availability of matching funds; and the return on investment that it offers. For fiscal year 2006, DOD and the military services funded about $24 million for corrosion strategy efforts. Of this amount, $19 million was spent on 29 corrosion-related projects and about $5 million on contractor support, training, outreach, and other administrative activities. The DOD Corrosion Office projects a combined average return on investment of 42.5 to 1 for the $19 million, or savings of $809 million over the life of the projects. The services frequently bypass the Corrosion Office to obtain their funding for corrosion-related efforts.
We reviewed the President’s budget justification for fiscal year 2006 and identified more than $97 million for service-specific corrosion mitigation-related projects in addition to those reviewed by the Corrosion Office. These projects had not been submitted to the Corrosion Office for review, and Corrosion Office officials told us that they lacked any information about the $97 million and the status of the associated efforts. Because corrosion-related projects may be included under other maintenance projects or budget accounts, it is likely that there is more funding that we have not identified. According to recent corrosion cost studies conducted by DOD, the annual corrosion costs for Army ground vehicles and Navy ships alone were identified to be $2.019 billion and $2.438 billion, respectively. Without comprehensive reviews of the services’ corrosion-related programs and proposed funding requests, the Corrosion Office cannot fulfill its oversight and coordination role for the department. None of the four services has a designated official or office to oversee and coordinate corrosion activities, despite a recommendation by the Defense Science Board that they do so. Currently, multiple offices in the services are responsible for corrosion programs and related budgets. For example, several Air Force offices are responsible for corrosion-related matters: maintenance issues belong to the Air Force Corrosion Prevention and Control Office, corrosion policy for weapon systems is managed by an office within the Air Force Maintenance Directorate, and corrosion policy for infrastructure is handled by the Air Force Civil Engineering Directorate. None of these offices has comprehensive knowledge about corrosion activities throughout the Air Force. Without a designated official or office for corrosion, the services do not have the mechanism or capability to fully identify their annual servicewide corrosion funding requirements. 
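The return-on-investment figure cited earlier in this section can be verified with simple arithmetic. The following Python sketch (the variable names are ours, not DOD's) applies the 42.5-to-1 combined average return on investment to the $19 million in fiscal year 2006 project spending; the small difference from the reported $809 million presumably reflects rounding in the underlying project-level estimates.

```python
# Illustrative check of the DOD Corrosion Office's projected savings:
# 42.5-to-1 average return on investment applied to $19 million in
# fiscal year 2006 corrosion-related project spending.
project_funding = 19_000_000   # fiscal year 2006 project spending, in dollars
avg_roi = 42.5                 # combined average return on investment (42.5 to 1)

projected_savings = project_funding * avg_roi
print(f"projected savings: ${projected_savings / 1e6:.1f} million")
# prints: projected savings: $807.5 million
```

The computed $807.5 million agrees with the reported $809 million to within about 0.2 percent.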
Progress in Identifying Corrosion Impacts Has Been Minimal DOD has acknowledged since 2002 that the identification of cost, readiness, and safety impacts is critical to the implementation of its corrosion strategy. We recommended in 2003 that DOD complete a study to identify these impacts, and further recommended in 2004 that DOD accelerate its efforts in order to complete the baseline prior to its original estimated date of 2011. According to DOD, the purpose of the study is to document where corrosion problems exist, identify their causes, and prioritize them for funding according to their relative severity in terms of their impact on DOD costs, readiness, and safety. In August 2004, after developing a cost-estimating methodology, a DOD contractor began a study to determine the total cost of corrosion for military equipment and facilities across the services. DOD currently plans to complete this cost study by 2009, 2 years earlier than originally planned. The study uses fiscal year 2004 costs as a measurement baseline and consists of several segments, to be completed sequentially. To date, it has made some progress in identifying corrosion cost impacts. For example, in April 2006, DOD completed the Army ground vehicle and Navy ship corrosion segments of this study. Several segments remain to be completed, including Army and Marine Corps aviation. Corrosion Office officials told us that progress has been slower than expected, primarily because of a lack of corrosion data. Table 1 shows the corrosion cost segments included in the study and their planned completion dates. The two completed studies generated data that could be potentially useful for developing initiatives aimed at reducing long-term corrosion costs, but DOD lacks an action plan for using these data. For example, the studies estimate the annual corrosion costs for Army ground vehicles and Navy ships at $2.019 billion and $2.438 billion, respectively. 
Costs are segregated in multiple ways, such as costs incurred at the depot, organizational, and intermediate maintenance levels; costs incurred while addressing a corrosion problem (corrective); costs incurred while addressing a potential problem (preventive); and direct costs incurred on end items or removable parts. However, the Corrosion Office has not developed an action plan on how it will use these data, or the data expected from future cost studies, to develop corrosion prevention and mitigation strategies. Without an action plan, DOD could miss opportunities for achieving long-term corrosion cost savings. Finally, although it acknowledges the importance of identifying corrosion impacts related to readiness and safety, DOD has made virtually no progress in assessing these impacts. DOD officials told us that they decided to identify cost impacts before they identify readiness and safety impacts because more information is available regarding costs, and identifying cost impacts is an important step towards identifying readiness and safety impacts. They said that some of their efforts will shift to readiness and safety as the cost impact study approaches completion. DOD Has Not Yet Developed Results-Oriented Corrosion Metrics In June 2004, we reported that DOD lacked results-oriented metrics in its corrosion strategy and, as a result, could not effectively monitor progress toward achieving the goals of the corrosion strategy. In May 2005, DOD updated its November 2004 long-term corrosion strategy, but the update still does not contain results-oriented metrics for measuring progress toward targeted, quantifiable goals. In the strategy update, DOD has catalogued the aspects of corrosion prevention cost, readiness, and safety impacts that will need to be measured, but it has not quantified them or linked them with targets for improvement.
For example, in a table entitled “Potential Revised Metrics Set,” under the safety impacts column, the “facilities incidents” entry is linked with the description “events over time related to corrosion.” No measurable outcomes are associated with either the designated impact or the description. In addition, DOD officials told us that they cannot establish quantifiable goals regarding corrosion costs until they have completed the corrosion cost baseline, which, as noted earlier, DOD plans to complete sometime in 2009. These officials said that metrics for readiness and safety will likely take several additional years to complete because less information is available regarding readiness and safety impacts than information regarding cost impacts. They told us that the accompanying definitions and procedures will also take several years to complete. Most Major Defense Acquisition Programs We Reviewed Have Not Incorporated Key Elements of Corrosion Prevention Planning The Corrosion Prevention and Control Planning Guidebook encourages the establishment of corrosion prevention and control plans and corrosion prevention advisory teams as early as possible in the acquisition process. However, only 14 of the 51 programs we reviewed actually had both plans and advisory teams. DOD acquisition program officials have taken diverse approaches to corrosion prevention planning. We found that one reason why most programs did not have corrosion prevention plans or corrosion prevention advisory teams is that while they are strongly suggested, these elements are not mandatory. DOD Guidance Encourages Corrosion Prevention Plans and Advisory Teams The guidebook developed by the Corrosion Office is intended to assist acquisition program managers in developing and implementing effective corrosion prevention and control programs for military equipment and infrastructure.
According to the Corrosion Prevention and Control Guidebook, the corrosion prevention and control plan and the corrosion prevention advisory team should be established as early as possible in the acquisition process. DOD officials told us that establishing both a plan and a team is critical to effective corrosion prevention planning, and they strongly recommend that corrosion prevention planning begin at the start of the technology development phase of acquisition (Milestone A), when the effort is made to determine the appropriate set of technologies to be integrated into the weapon system. They said it should certainly occur no later than the system development and demonstration phase (Milestone B), when the first system and long lead procurement for follow-on systems may be authorized. According to the guidebook, a corrosion prevention and control plan should address a number of things, including system design (such as the materials and processes to be used for corrosion prevention and control), and should define the membership and organization of the corrosion prevention advisory team. The team should be actively involved in the review of design considerations, material selections, costs, and any documentation that may affect corrosion prevention and control throughout the life cycle of the system or facility. Members should include representatives from the contractors and DOD. In addition to this DOD guidance, the individual services have issued guidance that also calls for incorporating corrosion prevention planning during acquisition of weapon system programs. Few Programs Have Both Corrosion Plans and Teams Most of the acquisition programs we reviewed did not have both a corrosion prevention and control plan and a corrosion prevention advisory team. We reviewed a nonprobability sample of 51 major defense acquisition programs from the Army, Navy, and Air Force and found that only 14 of them had both corrosion prevention and control plans and corrosion prevention advisory teams.
A total of 20 programs had developed corrosion prevention and control plans, and 18 had established advisory teams. Of the 51 programs, 27 had neither a plan nor an advisory team. Tables 2 and 3 list, by service, the number of programs we reviewed that had developed corrosion prevention and control plans and established corrosion prevention advisory teams. Appendix IV contains information on specific programs that we reviewed. Service Acquisition Officials Cite Diverse Approaches Taken to Corrosion Prevention Planning Service acquisition officials told us that they retain broad discretion in developing individual approaches to corrosion prevention planning. We found that planning is inconsistently performed, and that so many different approaches are taken within and among the services that DOD is unable to maintain the oversight needed to ensure that corrosion prevention is being effectively conducted. For example, the degree to which corrosion prevention planning is performed depends on the initiative of the respective acquisition program offices. The Air Force’s C-17A Globemaster program had a corrosion prevention plan and corrosion prevention team in place early in the acquisition process, several months before it obtained approval to proceed with full-scale development. C-17 officials told us that they took a proactive approach to avoid the corrosion problems experienced by the C-5 and KC-135 programs. In contrast, the Javelin program managed by the Army has not established a corrosion prevention plan or corrosion prevention team, even though the system development and most of its production objectives have been completed. Javelin program officials told us that they have extensive corrosion prevention requirements in the system development specification and have obtained the advice of corrosion prevention experts located at the Aviation and Missile Research and Development Center.
Further, some program officials told us that specific corrosion prevention plans and corrosion advisory teams were not needed because other documents and processes provide the same function. The Navy’s SSN 774 Class submarine program did not have a specific corrosion prevention plan or corrosion prevention advisory team because the program relied heavily on detailed specifications and technical documents and on the experience of similarly designed submarines. Officials from some programs said it was too early in the acquisition process for them to have a plan or team, while those from other programs claimed it was too late. The Air Force KC-135 Replacement program officials told us they do not have a corrosion prevention plan or team because their system is still in the early development phase and they have yet to establish firm dates for their program design reviews. In contrast, Army High Mobility Artillery Rocket System program officials said that it is not sensible to have a corrosion prevention plan or team at this time because their program is currently in full rate production. Some programs we reviewed did not have a corrosion prevention plan or team because program officials told us that upgrades to existing weapon systems may be covered by an existing corrosion prevention plan or team. On the one hand, the Airborne Warning and Control System Block 40/45 upgrade program is a modification to the prime mission equipment of the E-3 aircraft. This program does not have its own corrosion prevention and control plan or corrosion prevention advisory team, but rather is covered by the existing plan and team for the E-3 aircraft. On the other hand, a different Air Force program we reviewed represents an upgrade to the avionics system of the existing C-5 aircraft, and its officials told us that corrosion prevention issues are more appropriately addressed at the C-5 aircraft program level.
These officials told us that while the C-5 program has an existing corrosion prevention advisory team, it does not yet have a corrosion prevention plan, though one is under development and expected to be completed at the end of May 2007. We found that one reason most programs have not prepared corrosion prevention plans or established corrosion prevention advisory teams is that these elements are not mandatory. Major acquisition programs perform corrosion prevention planning at their discretion, and that may or may not include having a corrosion prevention plan, a corrosion prevention advisory team, or both. Further, these programs are not required to provide the Corrosion Office information regarding corrosion prevention planning. As a result, the Corrosion Office cannot effectively monitor DOD acquisition practices to ensure that corrosion prevention technologies and techniques are being fully considered and incorporated when appropriate. Moreover, these programs may be missing opportunities to prevent future corrosion and thereby mitigate the impacts of corrosion on the costs, readiness, and safety of military equipment. Conclusions More than 4 years have passed since Congress enacted legislation requiring DOD to establish a corrosion prevention and mitigation program, yet DOD has not met Congress’s expectations. Since the passage of this legislation, we have issued several reports on corrosion and made numerous recommendations to strengthen DOD’s ability to combat corrosion. Further, the Defense Science Board has called for an increased commitment on the part of DOD to prevent and mitigate corrosion, referring to “the importance of leadership commitment and proper incentives for ensuring corrosion is considered early and often in decisions.” DOD’s progress in implementing its corrosion strategy has been stymied by critical weaknesses.
These include the absence of DOD guidance directing the services to provide the Corrosion Office with comprehensive data about their annual funding requirements for corrosion prevention and mitigation, the absence of a designated corrosion official or corrosion office within each of the services, and the absence of a DOD action plan to guide use of data in the corrosion cost study to achieve long-term cost savings. Furthermore, the lack of a DOD requirement for all major defense acquisition programs to have both a corrosion prevention plan and a corrosion prevention team could lead to inadequate corrosion prevention and, consequently, long-term corrosion problems throughout the life cycle of weapon systems. These and other weaknesses that we have raised in our previous reports severely hinder DOD’s ability to combat corrosion. Without top DOD and service leadership commitment to addressing these issues, corrosion prevention and mitigation will remain an elusive goal and opportunities to reduce costs, enhance readiness, and avoid safety problems will be lost. Recommendations for Executive Action To effectively implement DOD’s corrosion strategy and meet congressional expectations expeditiously, we recommend that the Secretary of Defense and the Under Secretary of Defense for Acquisition, Technology, and Logistics provide the necessary leadership and commitment to take the following four actions. To ensure that DOD’s Corrosion Office provides oversight and coordination of the services’ proposed funding requests for corrosion prevention and mitigation programs, we recommend that the Secretary of Defense: Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to require the military services to provide comprehensive data about their annual funding requirements for corrosion prevention and mitigation efforts to the DOD Corrosion Office, before annual funding requests are sent to Congress. 
Direct the Secretaries of the Army, Navy, and Air Force to designate a corrosion official or a corrosion office within each service that is responsible for corrosion prevention and mitigation, and to ensure that the responsibilities of this official or office include identifying the annual funding requirements for corrosion prevention and mitigation efforts throughout the service. To ensure that DOD does not miss opportunities for achieving long-term corrosion cost savings, we recommend that the Secretary of Defense: Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to develop an action plan for using the information contained in the Army ground vehicle and Navy ship segments of DOD’s cost impact study. This plan should be completed as expeditiously as possible and be updated in time to support the fiscal year 2009 budget request. This plan should include information on corrosion cost areas having the highest priority and a strategy for reducing these costs. DOD should develop comparable action plans for the information to be derived from cost segments completed in the future. To improve DOD’s ability to avoid or limit corrosion problems experienced by weapon systems, we recommend that the Secretary of Defense: Require major defense acquisition programs to prepare a corrosion prevention plan and establish a corrosion prevention advisory team as early as possible in the acquisition process. Agency Comments and Our Evaluation In written comments on a draft of this report, DOD partially concurred with each of our four recommendations. In its response, DOD cited actions it planned to take which are generally responsive to our recommendations. In addition, the department provided several technical comments which we considered and incorporated where appropriate. DOD’s comments are reprinted in appendix V.
DOD partially concurred with our recommendation to require the military services to provide comprehensive data about their annual funding requirements for corrosion prevention and mitigation efforts to the DOD Corrosion Office before annual funding requests are sent to Congress. DOD stated that a draft Corrosion Prevention and Control Department of Defense Instruction will require the military departments, during the annual internal DOD budget process, to submit information on their proposed corrosion programs and funding levels to the DOD Corrosion Executive. We believe this action is long overdue and is a step in the right direction if implemented. However, it remains uncertain when the instruction will be approved and what it will look like when finalized. Although the instruction was expected to be approved in November 2006, according to DOD officials, it is still undergoing revision. In addition, the draft instruction, as it is currently written, does not provide enough detail regarding the identification and submission of comprehensive data for funding associated with all corrosion prevention and mitigation efforts throughout DOD. For example, the draft instruction does not specify the type of funding information that is to be obtained by the services and reported to the DOD Corrosion Office. DOD also commented that corrosion prevention and mitigation activities are funded through many different sources, no program elements exist in the military departments that directly tie to corrosion, and many activities are funded to complete corrosion-related work but are not identified as such in budget documents. However, as we stated in our report, we reviewed the President’s budget justification for fiscal year 2006 and were able to readily identify more than $97 million for service-specific corrosion mitigation-related projects for which the Corrosion Office lacked any information.
DOD partially concurred with our recommendation that the Secretaries of the Army, Navy, and Air Force designate a corrosion official or a corrosion office within each service to be responsible for corrosion prevention and mitigation, and that the responsibilities of this official or office should include identifying the annual funding requirements for corrosion prevention and mitigation efforts throughout the service. DOD stated that the same draft DOD Instruction cited in response to the first recommendation also specifies that the heads of DOD components shall designate a senior individual or office for oversight of corrosion matters, and it directs the Secretaries of the military departments to support this individual or office. DOD stated that the Air Force has already designated such an official. The draft instruction as it pertains to each service having a corrosion executive or a corrosion office responsible for corrosion prevention and mitigation is responsive to our recommendation if implemented. DOD partially concurred with our recommendation to develop an action plan for using the information contained in the Army ground vehicle and Navy ship segments of DOD’s cost impact study. In response, DOD stated that it would be impractical to develop an action plan in time to be used for the 2008 budget cycle. While our recommendation was intended for DOD to develop an action plan as soon as possible to support near-term funding decisions for corrosion prevention and mitigation efforts, we agree that DOD cannot do this in time to be used for the 2008 budget cycle. Therefore, we have modified our recommendation to say that DOD should develop an action plan as expeditiously as possible and revise the plan in time to support the fiscal year 2009 budget request.
DOD also stated that the DOD Corrosion Prevention and Mitigation Strategic Plan already includes a requirement to select and fund corrosion research projects and integrated product team activities to enhance and improve corrosion prevention and mitigation throughout DOD. DOD further stated that the Military Departments assess and set priorities regarding corrosion based, in part, on funding for the “Top Ten” high-cost corrosion-vulnerable systems. While these efforts may have merit, we still believe that an action plan would provide additional benefits, as we recommend. DOD partially concurred with our recommendation to require every major defense acquisition program to prepare a corrosion prevention plan and establish a corrosion prevention advisory team as early as possible in the acquisition process. DOD stated that a corrosion prevention control plan will be developed for all ACAT I programs before preliminary design review and that implementation will be reviewed at each milestone. DOD noted that the establishment of a separate, formal Corrosion Prevention Advisory Team may not be necessary for all program levels, though such a team will be established for all ACAT I programs. DOD’s response is essentially responsive to our recommendation if carried out. In subsequent discussions, DOD officials told us that they partially concurred because the response in some respects goes beyond our recommendation by requiring that all ACAT I programs have a corrosion prevention control plan and corrosion prevention advisory team. In addition to providing comments on our recommendations, DOD commented about our statement that the development of metrics for readiness and safety will likely take several additional years to complete because DOD officials have placed a higher priority on completing the cost impact studies.
DOD commented that this is an inaccurate and dangerous assertion and implies that the department holds safety and readiness, the two linchpins of the operational military mind-set, in lower esteem than cost. In subsequent discussions, DOD officials told us that they decided to identify cost impacts before they identify readiness and safety impacts because more information is available regarding costs, and identifying cost impacts is an important step towards identifying readiness and safety impacts. We have modified our report by incorporating this statement. We are sending copies of this report to the Secretary of Defense and interested congressional committees. We will also make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Defense Science Board Recommendations Divide the responsibilities for the Office of the Secretary of Defense’s corrosion effort between three separate organizations: Defense Systems; Logistics, Materiel, and Readiness; and Installations and Engineering. Appendix II: Scope and Methodology To assess the Department of Defense’s (DOD) efforts to implement its corrosion prevention and mitigation strategy, including the oversight of funding; identification of cost, readiness, and safety impacts; and the development of results-oriented metrics, we reviewed DOD’s funding and progress for corrosion-related projects that it initiated during fiscal years 2005 and 2006.
We reviewed the President’s budget justification for fiscal year 2006 for corrosion-related efforts and met with DOD officials within the Comptroller’s Office regarding their oversight of the Corrosion Policy and Oversight Office’s budget. We also met with DOD officials within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics to assess their oversight of programs and funding levels of the military services during the annual budget reviews as well as their monitoring of the services’ acquisition practices. In particular, we met with officials with the Corrosion Policy and Oversight Office responsible for managing, directing, and reviewing corrosion prevention and mitigation initiatives. We met with DOD officials involved with developing DOD’s long-term strategy to prevent and control corrosion. We obtained their assessments and perspectives on corrosion prevention and mitigation programs and strategies; obtained and reviewed DOD policies, procedures, guidelines, and draft instructions for prevention and mitigation of corrosion on DOD military equipment and infrastructure; and discussed additional actions that could be taken to further prevent and mitigate corrosion. We reviewed DOD’s funding requirements for fiscal years 2005 through 2007 and future year projections. To assess the extent to which the military services have incorporated corrosion prevention planning in the acquisition of major weapon systems, we conducted a review of 51 major defense acquisition programs from the Army, Navy, and Air Force. These 51 programs were selected based on a nonprobability sample of acquisition programs from the Fiscal Year 2006 Major Defense Acquisition Program List approved by the Under Secretary of Defense for Acquisition, Technology, and Logistics. Navy programs were about half of the programs on the list.
A program is designated a major acquisition program either by the Secretary of Defense, or because it is estimated to require a total expenditure of more than $365 million in research, development, test, and evaluation funds or require a total expenditure of more than $2.19 billion in procurement funds. Our program selection represented the functional capability areas for battle space awareness, focused logistics, force application, force protection, and joint training and included air, ground, and sea weapon systems. In particular, we selected and reviewed 13 Army programs, 25 Navy programs, and 13 Air Force programs. We met with officials responsible for managing the acquisition programs and with officials having primary responsibility for overseeing corrosion prevention and mitigation within the respective services. We obtained and reviewed military service policies and instructions that establish corrosion prevention and control program requirements. For the acquisition programs we selected, we obtained and reviewed documents, including the acquisition strategy, acquisition plan, and corrosion prevention and control plans, as well as related information establishing corrosion prevention advisory teams and other reports used for tracking and monitoring corrosion-related design initiatives and corrections. In particular, we discussed the barriers that exist to more effectively employing corrosion control at program initiation and acquisition. We also reviewed the recommendations of the Defense Science Board report on corrosion control issued in October 2004, and obtained DOD’s related responses and actions taken to better address its strategy for corrosion prevention and mitigation. We met with Corrosion Policy and Oversight Office officials regarding their concurrence and the related actions taken to date. We conducted our work from April 2006 through January 2007 in accordance with generally accepted government auditing standards. 
We did not validate the data provided by DOD. However, we reviewed available data for inconsistencies and discussed the data with DOD. We determined that the data used for our review were sufficiently reliable for our purposes. Appendix III: Organizational Structure of DOD’s Corrosion Activities Appendix IV: Corrosion Prevention Planning in Selected Major Defense Acquisition Programs Appendix V: Comments from the Department of Defense Appendix VI: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the individual named above, Harold Reich, Assistant Director; Leslie Bharadwaja; Larry Bridges; Tom Gosling; K. Nicole Harms; Charles Perdue; Cheryl Weissman; and Allen Westheimer made key contributions to this report.

Corrosion can have a deleterious effect on military equipment and infrastructure in terms of cost, readiness, and safety. Recognizing this concern, the Bob Stump National Defense Authorization Act of Fiscal Year 2003 required the Department of Defense (DOD) to designate an official or organization to oversee and coordinate efforts to prevent and mitigate corrosion. Recently, the National Defense Authorization Act of Fiscal Year 2006 directed GAO to examine the effectiveness of DOD's corrosion prevention and mitigation programs. In addition, GAO evaluated the extent to which DOD has incorporated corrosion prevention planning in acquiring weapon systems. GAO reviewed strategy documents, reviewed corrosion prevention planning for 51 recent major weapon system acquisitions, and interviewed DOD and military service officials. DOD continues to have problems that hinder progress in implementing its corrosion prevention and mitigation strategy. While it has created a Corrosion Policy and Oversight Office, that office lacks the ability to oversee and coordinate its efforts throughout DOD, as envisioned by Congress.
For example, DOD's office does not review all of the services' proposed funding requests for corrosion programs, even though it is required to do so, because DOD has not directed the services to provide such information and none of the services has a designated official or office to oversee and coordinate servicewide corrosion activities. Without comprehensive reviews of the services' corrosion-related programs and proposed funding requests, the office cannot fulfill its oversight and coordination role. DOD has made some progress in identifying corrosion cost impacts, but it has not identified readiness and safety impacts. It recently completed corrosion cost impact studies for Army ground vehicles and Navy ships, identifying an estimated $4.5 billion in annual corrosion costs. Although the studies provided potentially useful data for reducing these costs, DOD has not developed an action plan to apply these data to developing corrosion prevention and mitigation strategies. Without an action plan, it could miss opportunities to achieve long-term cost savings. DOD has not yet developed results-oriented metrics, although GAO has previously recommended that it do so. Without top DOD and service leadership commitment to address these issues, corrosion prevention and mitigation will remain elusive goals and opportunities to reduce costs, enhance readiness, and avoid safety problems will be lost. Most of the weapon system acquisition programs GAO reviewed had not incorporated key elements of DOD corrosion prevention guidance. GAO found that only 14 of the 51 programs reviewed had both corrosion prevention plans and advisory teams, as encouraged in the DOD guidance. The primary reason most programs did not have these two elements is that they are not mandatory. As a result, these programs may be missing opportunities to prevent and mitigate corrosion. |
Background The Federal Aviation Administration (FAA) has forecast continued growth for commercial and general aviation over the next decade. Growth over the past few decades brought innovations to improve flight safety that contributed to a dramatic lowering of the accident rate by the mid-1970s. Further reductions in the accident rate have, however, remained elusive. Unless the current accident rate can be reduced, the number of fatal accidents is likely to increase as aviation operations continue to grow. During the 1990s, FAA, the aviation industry, and the Congress all acknowledged and studied this potential danger. They set ambitious targets for reducing the accident rate, made over a thousand recommendations for improving aviation safety, and implemented a number of safety initiatives. In spite of these efforts, the accident rate, which is already low, has remained fairly steady. The FAA Administrator, White House and congressional task forces, and aviation industry groups have concluded that FAA and the aviation industry must coordinate their efforts to prioritize safety recommendations and focus resources on those with the most potential to decrease the accident rate. In 1998, the FAA Administrator announced the Safer Skies initiative, a joint government- industry effort to identify and address the greatest threats to aviation safety in order to reduce the fatal accident rate by 80 percent by the year 2007. FAA Expects Continued Growth in Aviation Over the past several decades, aviation has grown substantially in the United States, and FAA expects this growth to continue into the next century. Commercial aviation has grown consistently since 1982, while growth in general aviation has been less consistent. One key measure of aviation activity shows that the number of flight hours for commercial aircraft more than doubled from 8 million hours in 1982 to nearly 18 million hours in 1999. 
In contrast, general aviation activity dropped fairly steadily from the early 1980s until 1995. While general aviation has grown since 1995, it has not yet returned to 1990 levels. The number of general aviation flight hours decreased by nearly 9 percent from 32.6 million hours in 1982 to 29.9 million hours in 1999. (See fig. 1.) FAA has forecast continued growth for commercial aviation as well as for general aviation into the next century. The number of planes will increase, and these aircraft will fly more miles, spend more hours in the air, and carry more people. For example, FAA estimates that commercial aviation flight hours will grow to 24 million hours in 2007—an increase of 37 percent from 1999. In commercial aviation, FAA projects that the use of large air carriers will grow at an annual rate of 4 percent, while the use of commuter air carriers will grow at 3 percent per year. Although growth has been more erratic in general aviation than in commercial aviation, FAA projects an annual growth rate of 2.2 percent for general aviation into the next century. FAA estimates that general aviation flight hours will increase to about 36 million hours in 2007, a growth of nearly 19 percent over 1999. Fatal Accident Rates Have Decreased for U.S. Aviation Even with the growth in aviation, fatal accidents remain relatively rare, especially in commercial aviation. Fatal accident rates for U.S. aviation are low and have decreased over the past decades for both commercial and general aviation. The fatal accident rate can be calculated as the number of accidents with one or more fatalities divided by a measure of aviation activity, such as the number of aircraft miles flown, aircraft hours flown, or departures. More Fatal Accidents Occur in General Aviation, but Commercial Aviation Accidents Can Be Catastrophic In the 10-year period preceding the initiative, 4,471 fatal aviation accidents occurred in the United States, resulting in a total of 9,802 deaths. 
Table 2 shows the distribution of accidents and deaths for commercial aviation, which includes large and commuter air carriers, and general aviation, which includes on-demand air taxis. General aviation accounted for the largest number of fatal accidents and deaths in 1988-97. The initiative addresses both commercial and general aviation, but increased attention is focused on further improving the safety of commercial aviation because large and commuter air carriers are the primary forms of air transportation for most Americans. While fatal commercial aviation accidents are rare, large airplane accidents can cause more deaths in an instant than most events, other than wars or natural disasters. They consequently raise concerns with both the public and the media, and commercial aviation is held to a higher standard of safety than other forms of transportation. With commercial aviation expected to grow steadily into the next century, aviation accidents will occur with a frequency that will be unacceptable to the public unless steps are taken to decrease the fatal accident rate. While such accidents remain rare, FAA recognizes that the public demands a high standard of safety and expects continued improvement. How Fatal Accident Rates Are Calculated FAA tracks the number of passenger fatalities for various types of aviation operations and calculates accident rates. Basically, the rates are calculated by dividing the number of accidents with one or more fatalities by one of the various measures of aviation activity. For example, the fatal accident rate for commercial aviation for 1988-97 is 0.058 per 100,000 flight hours, which was calculated by dividing the number of fatal accidents (85) by the number of flight hours (151 million). This translates into about one fatal accident for every 2 million hours flown.
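The rate arithmetic described above is simple enough to sketch directly. The following illustrative Python snippet (not from the report) uses the 1988-97 commercial figures cited in the text, 85 fatal accidents over 151 million flight hours; with these rounded inputs the computed rate lands slightly below the published 0.058, which presumably reflects unrounded activity data.

```python
def fatal_accident_rate(fatal_accidents, activity, per=100_000):
    """Fatal accidents per `per` units of activity (hours, departures, or miles)."""
    return fatal_accidents / activity * per

# Commercial aviation, 1988-97: 85 fatal accidents over 151 million flight hours.
rate = fatal_accident_rate(85, 151_000_000)
print(f"{rate:.3f} fatal accidents per 100,000 flight hours")  # 0.056 with these rounded inputs

# Equivalently, roughly how many flight hours elapse per fatal accident:
hours_per_accident = 151_000_000 / 85
print(f"about one fatal accident every {hours_per_accident / 1e6:.1f} million hours")
```

The same function applies to any of the three activity measures the report discusses; only the `activity` argument and the interpretation of `per` change.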
The three activity measures generally used to calculate fatal accident rates are the number of individual flights (referred to as departures), aircraft miles flown, and aircraft hours flown. Each activity measure reflects different exposures to the risks associated with flying. For example, most commercial aviation accidents occur during takeoff or landing, rather than during the cruise phase, which constitutes the largest part of the total mileage and hours flown. For this reason, we believe that departures are usually the best measure of exposure to risk. For large and commuter air carriers, all three fatal accident rates are tracked. But for general aviation, the only measure of exposure is the number of flight hours estimated from survey data. Thus, fatal accident rates for commercial aviation (large and commuter air carriers) are generally expressed in terms of the number of fatal accidents per 100,000 departures, while fatal accident rates for general aviation are expressed as the number of fatal accidents per 100,000 flight hours as estimated by FAA’s annual survey. According to FAA, departure data are not sufficiently reliable for use in calculating a fatal accident rate for general aviation, so the survey-based estimate of flight hours is used instead. Fatal Accident Rates Have Decreased Over the past few decades, the annual rate of fatal aviation accidents has decreased significantly for both commercial and general aviation. While the accident rates are low, they have shown little improvement recently. For large commercial air carriers, the U.S. accident rate was 26 fatal accidents per million departures in 1959. Following the advent of large jet aircraft in the 1960s, the rate fell to one or fewer fatal accidents per million departures and has remained fairly steady for three decades. The fatal accident rate for commuter aircraft has also fallen over the last several decades.
The accident rate for commuter air carriers fell from about 2 fatal accidents per million departures in 1982 to 3 per 10 million departures in 1996. While there were no fatal commuter accidents in 1998, the five fatal accidents in 1999 resulted in a fatal accident rate of nine per million departures. This increase in the fatal accident rate reflects a 1997 narrowing in the definition of commuter air carrier to include only small aircraft with nine or fewer seats. Similarly, the accident rate for general aviation aircraft has dropped since 1960. The fatal accident rate of six per 100,000 flight hours in 1960 fell to less than two by the early 1980s. The fatal accident rate for general aviation continued to decrease fairly steadily through the 1980s, increased slightly in the early 1990s, and has dropped steadily since 1995. In 1999, the fatal accident rate for general aviation was 1.2 fatal accidents per 100,000 flight hours. (See fig. 2.) The reductions in the fatal accident rates resulted from a combination of technological advances that improved safety. In commercial aviation, these advances included the replacement of large, piston-engine aircraft with jet aircraft with far more reliable engines, the development of navigational equipment to warn pilots of impending crashes, better ground navigation aids, improved aircraft instrumentation, and increased air traffic radar coverage. Some of these improvements have also benefited smaller commuter and general aviation aircraft. As commuter air carriers switched from small aircraft to sophisticated turboprop aircraft, the accident rate among the larger commuter aircraft became comparable to that of large air carriers. If Greater Numbers of Fatalities Are to Be Avoided, the Fatal Accident Rate Must Be Reduced If the current fatal accident rate holds steady and aviation activity grows as FAA has projected, the increased air traffic will result in greater numbers of crashes and fatalities.
We estimate that the average of six fatal commercial aviation accidents per year in 1994-96 will likely rise to nine per year by 2007. Similarly, the number of fatal general aviation accidents will probably mount from an average of 380 in 1996-98 to 484 in 2007. Table 3 shows our projections of the number of fatal accidents in 2007 calculated from FAA’s growth estimates and the current fatal accident rate for each type of aviation operation. The prospect of more accidents and deaths is unacceptable to the public, FAA, and the aviation industry. Avoiding that outcome means reducing the fatal accident rate significantly. The final report of the National Civil Aviation Review Commission concluded in 1997 that the “anticipated growth in aviation between now and the first quarter of the next century will almost certainly lead to an occurrence of aviation accidents with a frequency that will be wholly unacceptable to the public.” The Commission called for a joint industry-government effort to reduce the accident rate substantially. FAA and the Aviation Industry Made Previous Efforts to Reduce the Fatal Accident Rate During the 1990s, FAA and aviation industry groups had separate and joint efforts under way to use available data to identify and address the major causes of accidents. A series of fatal crashes and concern that the number of accidents and fatalities will increase as air traffic increases prompted these efforts to reduce the accident rate. Many of the reports that resulted from these efforts set specific goals and included recommendations for decreasing aviation accidents. Although FAA and the aviation industry acted on some of these recommendations, the fatal accident rate has remained fairly stable but low. The effectiveness of previous efforts to reduce the fatal accident rate is believed to have been undercut by their limited scope and a lack of coordination between government and industry groups.
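The projection logic behind such estimates (hold the current fatal accident rate constant and scale it by FAA's forecast activity) can be sketched as follows. This is an illustrative approximation, not the report's actual model: the published figures rest on FAA's detailed per-category forecasts, so a single activity ratio (here general aviation's 29.9 million flight hours in 1999 versus roughly 36 million forecast for 2007) lands near, but not exactly on, the reported 484.

```python
def project_accidents(baseline_accidents, baseline_activity, projected_activity):
    """Projected annual fatal accidents if the per-activity rate holds steady."""
    rate = baseline_accidents / baseline_activity
    return rate * projected_activity

# General aviation: ~380 fatal accidents per year against ~29.9M flight hours,
# scaled to FAA's forecast of ~36M flight hours in 2007.
projected = project_accidents(380, 29.9e6, 36e6)
print(round(projected))  # ~458 with this single-ratio approximation
```

The gap between this rough figure and the report's 484 illustrates why the report computes projections separately for each type of aviation operation rather than from one aggregate growth rate.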
Many of the studies issued during the 1990s were under the leadership of either FAA or a particular segment of the aviation industry. For example, FAA, on its own, studied controlled flight into terrain (CFIT) accidents and runway incursions. Separately, the Flight Safety Foundation brought together participants from many segments of the aviation industry to study CFIT and approach and landing but initially had only limited FAA involvement. The Aerospace Industries Association initiated an extensive study on the causes of safety-related problems in aircraft engines, including uncontained engine failure. (For a list of key aviation studies and our related reports, see app. II.) According to FAA and industry officials we interviewed, efforts to address specific safety issues were generally unsuccessful when one group failed to coordinate its work with that of other groups that had important roles in aviation safety. Many of these reports issued during the 1990s set specific goals for reducing the overall fatal accident rate or for addressing specific aviation safety problems that result most often in fatalities. They also included numerous specific recommendations to FAA and the aviation industry to help meet these goals. Among the key reports were the following: In 1993, the Flight Safety Foundation led an international task force on CFIT, the leading cause of fatal commercial aviation accidents worldwide. The task force provided specific recommendations and training aids aimed at reducing CFIT accidents. The task force set a goal of reducing these accidents 50 percent worldwide by 1998 and other goals targeting improvements in the regions of the world with the highest CFIT rates. In January 1995, over 1,000 government, industry, and union officials attended an FAA-sponsored safety conference. The officials agreed that they shared responsibility for pursuing a goal of zero accidents.
Their report identified 173 high-priority safety initiatives in the areas of crew training, air traffic control and weather, safety data collection and use, applications of emerging technologies, aircraft maintenance procedures and inspections, and development of flight operating procedures. Following the May 1996 ValuJet crash, an FAA task force recommended in September 1996 that FAA target agency resources to safety risks, improve the certification and oversight of new air carriers, and address concerns about inspector guidance and resources. In February 1997, the White House Commission on Aviation Safety and Security recommended that the government and the aviation industry establish a national goal to reduce the aviation fatal accident rate by a factor of five (meaning 80 percent) within 10 years. To achieve that goal, the Commission made specific recommendations for reengineering FAA’s regulatory and certification programs. The Commission did not explicitly state whether the national goal should apply to all types of aviation operations. In December 1997, the National Civil Aviation Review Commission recommended that the government and the aviation industry work together to achieve the White House Commission’s goal of an 80-percent reduction in the accident rate over the next 10 years and recommended specific safety improvements for achieving that goal. While the Commission did not explicitly state whether the 80-percent goal should apply to all types of aviation operations, the Commission specifically discussed the accident rates for large jets, commuter air carriers, general aviation operations, and air taxis. Both the White House Commission on Aviation Safety and Security and the National Civil Aviation Review Commission called for FAA and the aviation industry to work together on aviation safety issues.
The Safer Skies Initiative Continued Ongoing Efforts to Use Data Analysis to Address Safety Problems On April 14, 1998, the Vice President, the Secretary of Transportation, and the FAA Administrator announced the Safer Skies initiative, a new aviation safety program committed to reducing the fatal accident rate by 80 percent by 2007. Under the initiative, experts from FAA, the aviation industry, and other government agencies with responsibility for aviation are to jointly analyze U.S. and global data to identify the most serious threats to aviation and to find the root causes of accidents. They will then determine the best actions to break the chain of events that lead to accidents and direct resources first to those actions. These actions are also referred to as interventions. FAA Invited Members of Ongoing Industry and Government Safety Groups to Join the Safer Skies Initiative When FAA announced the Safer Skies initiative, the agency invited participants from a number of ongoing industry and government safety groups to join in creating a unified safety agenda. In establishing the agenda for the initiative, the commercial and general aviation steering committees joined with and expanded the preexisting efforts. To develop the unified agenda, key government and industry aviation officials are to conduct data analyses to identify the causes of fatal accidents and determine what interventions are needed to prevent them. Several of the preexisting safety groups were already using data-driven approaches to study aviation safety issues. Representatives of air carriers, aircraft and engine manufacturers, and related associations had established a commercial aviation group in January 1997 to analyze fatal commercial aviation accidents and to recommend ways to prevent them. Before joining the initiative, this group had outlined a process for obtaining accident data from U.S. and international sources and for reaching consensus on the safety problems to be addressed. 
Another industry group analyzing data on uncontained engine failure had developed a process for analyzing safety data, using case studies to identify root causes, and evaluating the feasibility of proposed interventions. A third group that represented a cross-section of various general aviation constituencies, such as pilots and small aircraft manufacturers, was addressing the causes of fatal general aviation accidents. A joint government-industry group sponsored by FAA was continuing work on issues pertaining to the safety of passengers and crew members in the aircraft cabin that had been started separately by FAA, industry associations, and unions representing flight attendants. FAA invited members from all four of these groups to participate in the initiative. Steering Committees Selected 16 Safety Problems for the Safer Skies Initiative to Address Safer Skies formed steering committees of safety experts from government and industry to lead the work in each of its three agenda areas: commercial aviation, general aviation, and cabin safety. Each steering committee has co-chairs and participants from both FAA and industry. The commercial and general aviation steering committees used available data to select the safety problems to be addressed in their respective agenda areas. In contrast, the cabin safety steering committee continued the work on safety problems that had already been under way as a joint FAA-industry effort that preceded Safer Skies. The three Safer Skies steering committees ultimately chose to address 16 safety problems: 6 in commercial aviation, 6 in general aviation, and 4 in cabin safety. The commercial aviation and general aviation steering committees selected several of the same safety problems, including weather and loss of control over the aircraft.
Because safety problems can affect large and small aircraft differently, the commercial and general aviation steering committees planned to have separate teams study each safety problem, with one exception. A joint team will study runway incursions because commercial and general aviation aircraft often share the same runways and accidents have occurred involving both types of aircraft. Table 4 lists and briefly explains each safety problem. Objectives, Scope, and Methodology At the request of the Chairman and Ranking Democratic Member of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, we reviewed the design and implementation of the Safer Skies initiative. Specifically, they asked us to determine (1) to what extent addressing the safety problems selected by the Safer Skies initiative will help reduce the fatal accident rate; (2) what progress the initiative has made in identifying and implementing interventions to address each of these safety problems; (3) what progress the Safer Skies initiative has made in assessing the effectiveness of those interventions; and (4) how FAA is coordinating the Safer Skies initiative with other safety activities conducted throughout the agency, in partnership with the aviation industry, and by other federal agencies. Because Safer Skies is a 10-year project that hopes to reach its goals in 2007, we analyzed domestic flight operations and accident data for the decade that preceded the 1998 announcement of Safer Skies and for the decade to come. We examined data on fatal accidents and their causes for all types of aviation operations in the United States from 1988 through 1997. We also examined projected data for aviation operations and accidents through 2007.
To determine whether addressing the safety problems chosen by Safer Skies will help reduce the fatal accident rate, we interviewed FAA officials responsible for overseeing Safer Skies, officials at the Department of Defense and the National Aeronautics and Space Administration involved in aviation safety, and the chairs and many members of the steering committees for commercial aviation, general aviation, and cabin safety. We reviewed documents related to each of these steering committees as well as data used by these groups in choosing the problems on which Safer Skies would focus. We also discussed the problems Safer Skies selected as priorities with staff at the National Transportation Safety Board, the Flight Safety Foundation, and other aviation safety groups. To determine what progress has been made in identifying, developing, and implementing intervention strategies for the Safer Skies initiative, we interviewed the FAA and industry chairs of the teams formed to address each problem under study. We obtained and reviewed team reports completed for each safety problem to understand the analysis process, modifications made to it by successive work groups, and actions planned to improve aviation safety in each problem area. To determine what progress has been made to date in assessing the effectiveness of its actions to improve aviation safety, we reviewed implementation plans to determine whether schedules were being met and whether ways had been chosen to measure the success of such actions. We also reviewed available team reports and relevant data to determine whether sufficient data were available to measure Safer Skies’ progress in improving aviation safety. To determine how FAA coordinated the Safer Skies initiative with safety activities conducted throughout FAA and in partnership with the aviation industry, we reviewed information on related industry and government safety activities. 
Specifically, we sought information on activities under the auspices of FAA, the Department of Defense, the National Aeronautics and Space Administration, the National Transportation Safety Board, selected engine and aircraft manufacturers, several major air carriers, the Air Transport Association, and the Aircraft Owners and Pilots Association. During our interviews with members of the Safer Skies steering committees and teams, we discussed efforts to coordinate their work with other government and industry safety activities. We also reviewed the reports from each Safer Skies team for safety problems where coordination would be appropriate. We discussed the budgetary implications of the Safer Skies initiative and the criteria for prioritizing resources with FAA officials and steering committee members. We conducted our work from August 1999 through June 2000 in accordance with generally accepted government auditing standards. The Safer Skies Initiative Should Help Improve Aviation Safety Addressing the 16 safety problems chosen by the Safer Skies initiative should help reduce the nation’s fatal accident rate. In commercial aviation, eliminating the six safety problems to be addressed by the initiative would approach the 80-percent goal. Other FAA initiatives are addressing additional safety problems in commercial aviation, which should complement efforts under the Safer Skies initiative. In general aviation, the initiative will address six problems that appear to be among the most common causes of fatal accidents for this type of operation, according to available accident data. While the initiative has adopted the 80-percent goal in commercial aviation, which transports most passengers who fly in the United States, the initiative adopted a less aggressive goal for general aviation, which accounted for the vast majority of the fatal aviation accidents.
The goal in general aviation is to reduce the number of fatal accidents to 350 in 2007, which represents about a 20-percent reduction. Finally, the initiative addressed four problems in cabin safety. Improving cabin safety will have little impact on lowering the fatal accident rate because cabin safety accounted for only two U.S. fatalities in commercial aviation in 1988-97. No quantitative goal was set for safety improvements in cabin safety. To date, safety improvement efforts by FAA and the Safer Skies initiative have focused on past accidents and incidents, which may not be entirely predictive of future ones. Studying growth and technological changes in the aviation industry can help anticipate and prevent the safety problems and accidents that are likely to arise from such changes. Preliminary international efforts have been initiated to address future hazards, and coordinating these efforts with Safer Skies work could enhance the initiative’s efforts to reduce the fatal accident rate. The Safer Skies Initiative Addresses Major Safety Problems in Commercial Aviation The Safer Skies initiative plans to address six safety problems that accounted for 79 percent of the fatal accidents in commercial aviation in 1988-97. If past accident causes continue, completely eliminating these six safety problems might approach the 80-percent goal. FAA also has safety initiatives under way to address several of the safety problems in commercial aviation not addressed by the initiative. These include sabotage, fuel tank explosions, and structural problems. In combination with the Safer Skies initiative, FAA’s safety initiatives have potential for reducing the fatal accident rate in commercial aviation. For commercial aviation, the Safer Skies initiative has established a goal of reducing the fatal accident rate by 80 percent in 2007 in accordance with the goal envisioned by the White House and congressional commissions on aviation safety. 
The Safer Skies Initiative Identified Six Major Safety Problems The Safer Skies initiative will address six safety problems that accounted for 79 percent of the fatal commercial aviation accidents in 1988-97. Three of these safety problems were major ones both worldwide and in the United States: pilots’ losing control of their aircraft, pilots’ flying otherwise controllable aircraft into the ground or water (CFIT), and accidents during approach and landing. These three safety problems accounted for 58 of the 85 fatal accidents in U.S. commercial aviation during this period. The commercial aviation teams are examining 34 of these accidents, which involved larger aircraft. The commercial aviation steering committee referred the remaining 24 fatal accidents to the general aviation steering committee for review because they involved small commuter aircraft with nine or fewer seats that operated scheduled commercial service. This was done because (1) the aircraft involved are more similar to general aviation aircraft than to larger commercial aircraft, (2) the types of operating environments and safety problems that caused the accidents more closely resemble those of general aviation than those of commercial aviation, and (3) the interventions to address safety problems in general aviation are more likely to correct these safety problems than interventions designed for large commercial aircraft. We reviewed the National Transportation Safety Board’s (NTSB) reports for the 24 small commuter accidents and found that most of the accidents happened in Alaska when pilots flew into mountains after deteriorating weather reduced visibility. On the basis of our review, we concur with the commercial aviation steering committee’s assessment that these accidents more closely resemble general aviation accidents and would likely benefit from the interventions that emerge to address these safety problems in general aviation aircraft. 
The potential for improving safety in these smaller commuter aircraft exists with a number of the interventions proposed by the general aviation teams working on weather and CFIT. It is unclear whether the initiative or FAA has mechanisms in place to ensure that small commuter operators will benefit from the interventions developed. For example, many of the interventions involve providing additional training to pilots on weather conditions and assessing the risk factors associated with each flight. Because the initiative plans to deliver much of this training jointly with the Aircraft Owners and Pilots Association, it is essential that notification about this training also be provided to small commuter operators and pilots who could benefit from this training but may not be members of that association. Although members of several other organizations participate in the general aviation steering committee and study teams, neither the initiative nor FAA has made specific provisions to ensure that such interventions are also directed at small commuter aircraft operators and pilots, as well as at general aviation pilots. Because small commuter accidents accounted for 28 percent of the 85 fatal accidents in commercial aviation in 1988-97, reducing the fatal accident rate by 80 percent depends on addressing these safety problems in small commuter aircraft, as well as in large commercial aircraft. To further reduce the fatal accident rate for commercial aviation, the initiative will address three additional safety problems that have resulted in fewer fatal accidents in the United States from 1988 through 1997. The steering committee chose runway incursions, uncontained engine failure, and weather, each of which resulted in two to four fatal accidents. These three safety problems accounted for an additional 9 accidents, or 11 percent of the 85 fatal accidents.
The committee selected these problems because they caused past fatal accidents or serious incidents that could have cost many lives. These areas were also included because each occurred with greater frequency in the United States than worldwide and because FAA or the aviation industry had already begun work on these safety problems. (See fig. 3.) Our analysis of aviation data and review of safety reports confirmed that the initiative is addressing three major safety problems that caused fatal accidents in commercial aviation, as well as three other safety problems that have the potential to cause accidents with large numbers of fatalities. Reducing or eliminating safety problems resulting from CFIT, loss of control, approach and landing, runway incursions, weather, and uncontained engine failure should help lower the fatal accident rate. Safer Skies participants, FAA officials, and industry aviation experts whom we interviewed also believe that the initiative is addressing the most important aviation safety problems. Most of these aviation experts indicated strong support for addressing such major safety concerns as CFIT, approach and landing, and loss of control. Furthermore, because of the increasingly global nature of commercial aviation, addressing these safety problems means that many of the interventions recommended by the initiative might have applicability worldwide, as well as in the United States. Many aviation experts we interviewed also supported the inclusion of safety problems with fewer fatalities but with a high potential for fatalities, such as runway incursions and uncontained engine failure. They agreed that reducing or eliminating these safety problems should help reduce the fatal accident rate.
Addressing Additional Safety Problems Could Further Reduce the Fatal Accident Rate Beyond the major safety problems discussed above, addressing additional safety problems could further reduce the fatal accident rate in commercial aviation. FAA has a number of aviation safety initiatives under way that can potentially contribute to improvements in the safety of smaller commuter aircraft sometimes used in commercial aviation. For example, FAA’s Capstone Project focuses on improving general aviation safety in Alaska by providing additional navigational aids but also has potential application for addressing the safety problems of small commuter aircraft used elsewhere. FAA also has ongoing initiatives to address the causes of 4 of the 18 commercial aviation accidents not being addressed by Safer Skies teams. These include programs overseen by the agency’s Office of Civil Aviation Security to reduce the threat of sabotage, hijacking, and the transportation of hazardous cargo. Other FAA initiatives are under way to address the structural problems of aging aircraft and fuel tank explosions. FAA has, for example, published a notice proposing requirements for design reviews and mandatory maintenance actions for fuel tank systems on large transport aircraft. Of the remaining fatal accidents in commercial aviation that the initiative is not addressing, 12 involved on-ground fatalities, and 2 resulted from other causes. The on-ground accidents each involved the death of a single worker or unauthorized individual at the airport. Most of these accidents occurred near the boarding gate or ramp. For example, several employees were fatally injured when struck by an aircraft’s propeller or nose gear during the course of their work.
Of the on-ground fatalities, two resulted from individuals gaining unauthorized access to airport areas that should have been secured, nine involved various airline or airport employees who sustained injuries in the workplace, and one involved a passenger who fell out of an aircraft catering door and onto the ground. Because on-ground accidents accounted for 14 percent of the 85 fatal accidents in commercial aviation in 1988-97, reducing the fatal accident rate by 80 percent by 2007 will be difficult if these safety problems are not addressed. FAA has initiatives to address some of the safety problems that caused on-ground fatalities, but it is unclear how systematically these problems are being addressed. Specifically, FAA’s Office of Civil Aviation Security oversees airline and airport programs to limit access to secure areas to authorized individuals. The status of FAA’s efforts to address workplace safety issues that resulted in on-ground fatalities is less clear. FAA is responsible for regulating the safety and health aspects of the work environment of aircraft crew members when the aircraft is in operation. However, FAA has not promulgated specific regulations that address all employee safety and health issues associated with working conditions on aircraft. FAA held a public meeting in December 1999 to gather information on issues associated with working conditions on and around aircraft and to determine whether additional regulations should be proposed. However, FAA does not currently have a group addressing workplace safety issues and could not identify any regulations, guidance, or other initiatives that have been developed to address the types of workplace safety problems that caused most of the on-ground fatalities.
Improving Commercial Aviation Safety Involves Considering More Than Reducing the Fatal Accident Rate Looking at the number of fatalities associated with various safety problems, as well as their contribution to the fatal accident rate, provides additional perspective on Safer Skies’ commercial aviation agenda. Reductions in the fatal accident rate are closely linked to reductions in the number of fatal accidents. Following this logic, the greatest reductions in the fatal accident rate can be achieved by eliminating the safety problems that caused the greatest number of accidents with one or more fatalities. However, strict adherence to the goal of reducing the fatal accident rate could result in focusing attention and resources on the causes of accidents that resulted in single fatalities, rather than on those causes that result in multiple fatalities, as well as multiple accidents. In choosing which safety problems to address, the commercial aviation steering committee selected safety problems that will help reduce fatalities, as well as the fatal accident rate. The fatal accident rate in commercial aviation can most quickly be reduced by addressing the three safety problems that form the core of the Safer Skies agenda in commercial aviation: CFIT, loss of control, and approach and landing. These three safety problems accounted for 34 fatal accidents involving larger aircraft that commercial aviation teams are handling and 24 additional small commuter accidents that general aviation teams are handling. (See table 5.) If the initiative is successful in developing and implementing interventions to eliminate these three safety problems for both large aircraft and small commuter aircraft, it would make progress toward preventing the kinds of safety problems that caused 68 percent of the fatal accidents in 1988-97. 
If the initiative could successfully eliminate all six safety problems on its agenda for commercial aviation, it would approach the goal of an 80-percent reduction in the fatal accident rate. However, other safety problems actually resulted in more fatal accidents and thus could reduce the fatal accident rate more quickly if eliminated. The initiative could approach the 80-percent goal more quickly by eliminating on-ground accidents, which caused more fatal accidents in commercial aviation than all other safety problems except loss of control and CFIT. On-ground accidents caused 12 fatal accidents in commercial aviation, or 14 percent of the total. While the safety problems that caused on-ground accidents merit addressing, the safety problems that Safer Skies’ commercial aviation team has chosen to address resulted in multiple fatal accidents and many more fatalities. For this reason, the initiative will probably have more impact on improving the safety of air transportation for the majority of the nation’s passengers than it would by addressing other safety problems, such as on-ground accidents, whose elimination could reduce the fatal accident rate more but would save fewer lives. While focusing on reducing the fatal accident rate by addressing the safety problems that caused the most commercial aviation accidents, the approach taken by the initiative also resulted in choices that recognized where the greatest number of fatalities have occurred or could occur. The three major problems addressed by the initiative’s commercial aviation teams (CFIT, loss of control, and approach and landing) accounted for 57 percent of the 1,756 fatalities in 1988-97. This rises to 66 percent when all six safety problems on the commercial aviation agenda are considered. The additional small commuter accidents that are to be addressed by general aviation teams account for another 5 percent of the total fatalities.
The only other safety problems that resulted in hundreds of fatalities were sabotage and fuel tank explosions. The initiative did not focus on these problems for two reasons. First, only one fatal accident resulted from each of these safety problems in 1988-97. Second, FAA already has initiatives under way to address both sabotage and fuel tank explosions. The initiative did include two safety problems on its commercial aviation agenda that each accounted for only about 1 percent of the fatalities in U.S. commercial aviation during this period. However, the initiative recognized the potential of runway incursions, which accounted for 25 U.S. fatalities, to result in hundreds of fatalities. While weather resulted in few commercial aviation accidents and 16 fatalities, the commercial aviation steering committee members felt that the problems of turbulence and icing merited attention. In contrast, the 12 on-ground accidents each resulted in a single fatality; together, these fatalities accounted for fewer than 1 percent of the nation’s commercial aviation fatalities. Eliminating the safety problems that caused on-ground fatalities could reduce the fatal accident rate more quickly than eliminating either CFIT or approach and landing accidents that involved large commercial aircraft. The commercial aviation steering committee has selected safety problems that will help reduce fatalities, as well as the fatal accident rate. (See fig. 4.) The Safer Skies Initiative Has Adopted the 80-Percent Goal for Commercial Aviation The Safer Skies initiative and FAA have adopted the goal of reducing the fatal accident rate for commercial aviation by 80 percent by 2007. Specifically, the goal is to reduce the fatal accident rate for commercial aviation from a 1994-96 baseline of 0.037 fatal accidents per 100,000 flight hours to 0.007 fatal accidents per 100,000 flight hours in 2007.
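The arithmetic behind this rate goal can be checked with a short sketch. This is an illustration only: the report does not contain code, and the implied flight-hour figure below is our back-calculation from the report's numbers, not a figure the report states.

```python
# Check of the Safer Skies commercial aviation goal arithmetic.
# The 0.037 baseline rate, the 80-percent reduction, and the estimate
# of 9 fatal accidents in 2007 absent improvement come from the report.

def fatal_accident_rate(fatal_accidents, flight_hours):
    """Fatal accidents per 100,000 flight hours."""
    return fatal_accidents / flight_hours * 100_000

baseline_rate = 0.037                     # 1994-96 baseline
target_rate = baseline_rate * (1 - 0.80)  # 80-percent reduction
print(round(target_rate, 3))              # 0.007, the published target

# About 16.2 million annual flight hours are implied by six fatal
# accidents per year at the baseline rate (our back-calculation).
print(round(fatal_accident_rate(6, 16_200_000), 3))  # 0.037

# Applying the same 80-percent reduction to the 9 accidents estimated
# for 2007 yields about 2, consistent with the report's table 6.
print(round(9 * (1 - 0.80)))              # 2
```

Expressing the goal in both rates and counts this way makes clear that the 0.007 rate target and the two-accident estimate are the same 80-percent reduction applied to different baselines.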
The meaning of this goal can be more readily understood by considering the current number of fatal commercial aviation accidents and the number of accidents projected for 2007 if further safety improvements are not undertaken. In 1994-96, the United States averaged six fatal commercial aviation accidents each year. Given the projected growth of commercial aviation, we estimate that this number could increase to nine in 2007 if safety is not improved. If the initiative achieves the goal of an 80-percent reduction in the fatal accident rate for commercial aviation, we estimate that the number of fatal accidents expected in 2007 would drop to two. (See table 6.) Accident Data and Other Resources Were Used to Identify Safety Problems That Caused Many Fatal Accidents in General Aviation The general aviation steering committee used available accident data, safety reports, and professional expertise in aviation to identify safety problems that caused many of the fatal accidents in general aviation. The six safety problems chosen were controlled flight into terrain, loss of control, aeronautical decision-making, runway incursions, weather, and survivability. Steering committee members told us that they selected these safety problems after reviewing the available data on general aviation accidents and past industry and government-sponsored safety reports on general aviation. They said that the NTSB accident reports were challenging to analyze because many lacked the detail needed to determine the root causes of accidents. They noted, for example, that most general aviation aircraft are not equipped with such key equipment as flight data recorders that would help identify the safety problems that caused the accidents. To meet these additional challenges, FAA developed a training course tailored to the needs of those responsible for analyzing general aviation accidents. Both FAA and industry members attended this course before starting the analysis phase. 
The general aviation analysis reports on CFIT and weather also included recommendations to address problems with the quality of the data on general aviation accidents. In response to these recommendations, the general aviation steering committee chartered a team in April 2000 to develop strategies to (1) provide increased detail about factors that have contributed to or caused general aviation accidents and incidents and (2) improve the quality and timeliness of estimates of general aviation activity. Members of the steering committee told us that, in cases where the safety problems that caused the fatal accidents were unclear, they used their experience as either pilots or experts in general aviation to determine the possible causal factors involved in the accidents. Members of the steering committee also examined past industry and government reports on the causes of general aviation accidents. One key report was the Nall Report, a report on general aviation accident trends and factors published annually by the Aircraft Owners and Pilots Association’s Air Safety Foundation. According to the 1998 Nall Report, the major causes of fatal general aviation accidents were weather, loss of control or other errors during flights in which the pilot was maneuvering the plane, and accidents on approach to the airport. Another key report was FAA’s study of the causes of general aviation CFIT accidents. FAA’s study concluded that CFIT accidents accounted for 17 percent of the general aviation fatalities and 32 percent of general aviation accidents in weather conditions requiring pilots to have instrument ratings to fly. Steering committee members also told us that several reports indicated growing problems with runway incursions involving general aviation aircraft.
For example, a study by DOT’s Office of Inspector General showed that general aviation pilots caused the majority of runway incursions attributable to pilot error in 1990-96. Members of the steering committee told us that they also decided to address survivability in an effort to decrease the number of fatalities among those who survive the impact of a crash but ultimately die from their injuries. Although the data available on general aviation accidents are less detailed than those available on commercial aviation accidents, the general aviation problems the initiative plans to address represent reasonable choices. Most of the safety problems chosen have been identified in past safety reports and NTSB accident reports as major causes of fatal accidents in general aviation. These include weather, loss of control, CFIT, and runway incursions. Aeronautical decision-making has also been cited repeatedly as a factor in such safety problems as weather, when pilots exercise judgment about whether to depart or turn back when faced with potential danger. In addition, aeronautical decision-making includes decisions relating to aircraft maintenance. Most of the Safer Skies participants, FAA officials, and aviation experts we interviewed concurred that the six general aviation safety problems to be addressed by the initiative are reasonable ones that will help to reduce the fatal accident rate. The Safer Skies Initiative Adopted a Goal of Reducing Fatal Accidents in General Aviation to 350 in 2007 Although both the White House and congressional commissions on aviation safety called for an 80-percent reduction in the nation’s fatal accident rate, FAA and the Safer Skies initiative applied this goal only to commercial aviation and adopted a less aggressive accident reduction goal for general aviation. The goal is to reduce the number of fatal general aviation accidents to 350 in 2007.
This represents a 20-percent reduction in the number of fatal accidents that would likely result from projected growth in general aviation. Because general aviation accounted for 98 percent of U.S. fatal accidents in 1988-97, the goal of an 80-percent reduction in the nation’s fatal accident rate set forth by the two major aviation commissions is unreachable if these fatal accidents are not greatly reduced. The congressionally mandated commission on aviation safety discussed the fatal accident rates for all kinds of aviation operations, including general aviation. Because this commission did not explicitly apply the 80-percent goal to general aviation, it remains unclear whether it intended the goal to apply to general aviation as well as commercial aviation. The goal adopted—350 fatal accidents—contrasts sharply with the 97 fatal accidents that would likely result if the 80-percent goal were achieved. (See table 6.) The steering committee did not adopt the 80-percent goal for general aviation because of strong objections from the general aviation community. Representatives of the general aviation community argued that, given the varied experience levels of its pilots, reducing fatal accidents by 80 percent would be impossible without grounding the fleet. One general aviation representative said that there was a prevailing concern in the general aviation community that any agreement on a solid goal would lead to more regulation and less growth. In addition, these representatives objected to establishing a goal that involved a fatal accident rate. The fatal accident rate for general aviation is calculated by dividing the number of fatal accidents by the number of flight hours. Data on general aviation flight hours are estimated using an annual survey of general aviation operators conducted by FAA. Response to the survey is voluntary. 
Because the flight hours are estimated on the basis of this survey, representatives of the general aviation community questioned the reliability of these data and expressed concern about using flight hours to calculate past and future fatal accident rates. As a result, the Safer Skies steering committee for general aviation agreed not to use the survey data on flight hours to calculate a fatal accident rate until the data are more reliable. Instead, the accident reduction goal for general aviation was expressed in terms of the number of fatal accidents, rather than the fatal accident rate. To set its goal of reducing fatal accidents to 350, the general aviation steering committee reviewed available data on fatal accidents. The steering committee found the number had declined fairly steadily since 1990 in response to past initiatives to improve safety. The data used by the steering committee showed that, in 1996-98, an average of 379 fatal general aviation accidents occurred each year. The steering committee used this average and the 1.6-percent annual growth expected in general aviation to project that 437 accidents would occur in 2007 if additional safety initiatives were not undertaken. They agreed that a reduction to 350 fatal accidents would be achievable. This represents a 20-percent reduction in the number of fatal accidents that they estimated would occur without additional safety initiatives (437). According to projections by the steering committee and the general aviation community, a reduction of this magnitude would prevent 363 accidents from 2000 through 2007. The goal of reducing the number of fatal accidents to 350 in 2007 is probably achievable, but this goal is not likely to push the general aviation community toward more safety improvement as aggressively as it could. We believe that this goal is achievable for two reasons.
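Before turning to those reasons, the projection arithmetic above can be verified with a minimal sketch. The 379-accident average, the 1.6-percent growth rate, and the 437-accident projection come from the report; the use of nine annual compounding steps to 2007 is our assumption, chosen because it reproduces the report's figure.

```python
# Sketch of the general aviation projection behind the 350-accident goal.

baseline = 379   # average annual fatal GA accidents, 1996-98 (from report)
growth = 0.016   # expected annual growth in GA activity (from report)
years = 9        # assumed number of annual compounding steps to 2007

projected_2007 = baseline * (1 + growth) ** years
print(round(projected_2007))   # 437, matching the report's projection

goal = 350
reduction = (projected_2007 - goal) / projected_2007
print(f"{reduction:.0%}")      # 20%, the stated reduction
```

The 20-percent figure follows directly from the two counts: (437 - 350) / 437 is approximately 0.20.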
First, although the level of general aviation activity has increased, the number of fatal accidents decreased to 354 in 1999, a decrease of 17 percent since 1994. Both FAA and industry officials attributed this decrease in part to ongoing safety initiatives. The goal of 379 accidents established for each of the next 3 years represents a 7-percent growth in the current number of fatal accidents. Second, the goal of 350 accidents set for 2007 is only 4 fewer fatal accidents than occurred last year. Hence, the long-term goal is achievable if the general aviation community is able to hold its number of fatal accidents steady as its air traffic grows by an expected 2.2 percent per year in the coming decade. We recognize that an 80-percent reduction in fatal accidents is probably not achievable in general aviation at this time because of the diversity in pilots’ experience levels, aircraft types, and operating environments. However, we believe that a more aggressive goal would encourage greater efforts by general aviation operators, manufacturers, associations, and FAA to make safety improvements in general aviation operations that could save lives. Improving Cabin Safety Is Important but Will Have Little Impact on Lowering the Fatal Accident Rate Improving cabin safety is unlikely to have much impact on reducing the overall fatal accident rate. In contrast to the safety problems addressed by the commercial and general aviation steering committees, the safety problems addressed by the cabin safety steering committee have not resulted in numerous fatalities, and few data are available on any injuries that result from these problems. The Safer Skies initiative identified only two fatalities in U.S. 
commercial aviation in 1988-97 related to cabin safety problems. While passengers and crew have been injured in the cabin environment, few data exist on these incidents because air carriers are not required to report such incidents unless they involve a serious injury or fatality. The study of cabin safety problems thus relies more on information shared by flight attendants and air carriers than on analysis of the limited data available. Because cabin safety resulted in few fatalities and affords few data for analysis, it is arguable whether it was appropriate to include cabin safety issues in an initiative directed at reducing the fatal accident rate through a data-driven analysis of safety problems. Although not appropriate for Safer Skies’ focus on the safety problems that caused fatal aviation accidents, cabin safety issues are an appropriate topic for FAA to address jointly with the aviation industry. NTSB and flight crews have raised concerns about the potential for injuries and fatalities in the cabin. FAA and industry were jointly studying cabin safety problems before the initiative was announced. The safety problems under study included those involving child restraint systems, passenger seatbelt use, passenger interference with crew, and carry-on baggage. Concerns about these safety problems are not new. For example, NTSB has long advocated FAA’s requiring the use of child restraints for passengers under the age of 2. NTSB was concerned enough about the use of child restraints to launch a campaign aimed at making parents aware of the benefit of putting children in approved child restraint systems and to declare 1999 as the “year of child transportation safety.” Similarly, representatives of air carrier crews have expressed concern that the incidents of passengers interfering with crew members are increasing.
Additional Work on Future Hazards Could Help Anticipate and Prevent Fatal Accidents

In December 1997, the congressionally mandated commission on aviation safety recommended that FAA and the aviation industry jointly develop a strategic plan to improve aviation safety and that the process “begin with analysis of both previous and potential failures to meet safety expectations.” These failures include accidents, incidents, insight from flight operational data, and aviation system changes. The analysis of the causes of past accidents provides insights into safety problems that exist within the current aviation system, while the analysis of aviation system changes can help anticipate future hazards that may arise from such changes as growth and technological advances (e.g., vertical takeoff and landing by aircraft). The approaches to the analysis of past safety problems and future hazards are distinct. A data-driven approach is particularly useful for analyzing the safety problems that caused past fatal accidents. Data on nonfatal accidents and incidents can also be used to identify and address safety problems that did not result in fatalities but could have. The data-driven approach is based on the assumption that identifying a problem is possible where historical data are available. While this approach can be used to address the safety problems in the current operating environment, other types of analyses may be more useful for anticipating and preventing the safety problems that could result in new types of fatal accidents. For example, the anticipated growth in air traffic will lead to more congestion around airports, increasing the possibility of runway incursions and midair collisions near airports. Anticipating how changes in the aviation industry may increase existing safety problems or bring about new ones can better position both FAA and the aviation industry to prevent accidents.
While FAA, Safer Skies, and industry groups have made progress in the analysis of the causes of past accidents and incidents, efforts to analyze and anticipate future hazards are more preliminary. The Joint Safety Strategy Initiative in Europe has formed a work group to develop a method for examining future hazards. A number of FAA staff participate in this work group, which should facilitate the cooperative exchange of ideas and information on this topic. As of April 2000, the Safer Skies initiative had not established a process for analyzing future hazards. A systematic analysis of the changes occurring in the aviation industry could enhance Safer Skies’ ongoing efforts to reduce the fatal accident rate. Several of the aviation experts interviewed suggested that the initiative could benefit from going beyond the analysis of data on past accidents to consider safety problems that may arise from rapid changes in the aviation operating environment. Participants on Safer Skies’ commercial aviation steering committee also indicated that while data-driven approaches are helpful, it is also important to consider future hazards. FAA’s Director of the Aircraft Certification Service said that the initiative’s first priority was to understand and eliminate the safety problems that caused past accidents but that the commercial aviation steering committee also plans to address future hazards and recently added this topic to its agenda for consideration. Because work on future hazards could help anticipate and prevent fatal accidents, this topic is important for the Safer Skies steering committees to address, especially as it applies to commercial aviation. Coordinating this effort with the work initiated by European and FAA staff on future hazards should help avoid duplication of effort and foster awareness of and solutions to these potential problems internationally.
Conclusions

The premise of both the White House and congressional commissions on aviation safety was that data on past and possible future causes of accidents could be used to focus resources on substantially reducing the fatal accident rate. While the Safer Skies initiative has made significant strides, it has not yet carried out this mandate as fully as it could. The six safety problems that the initiative is addressing accounted for almost 80 percent of the fatal accidents in commercial aviation in 1988-97. Our review showed that the initiative and FAA have work under way to address these and other safety problems in commercial aviation. However, the initiative has not challenged all sectors of the aviation community to push aggressively for safety improvements. Although the initiative has adopted the challenging goal of reducing the fatal accident rate for commercial aviation by 80 percent by 2007, general aviation is not being asked to set a similarly challenging goal. While an 80-percent reduction in fatal accidents is probably not achievable in general aviation at this time, the goal adopted by the initiative does not push the general aviation community toward implementing the kinds of interventions that could substantially lower the fatal accident rate. A more rigorous goal would encourage greater efforts by general aviation operators, manufacturers, associations, and FAA to make needed safety improvements. In addition, many of the interventions developed to improve general aviation safety could also benefit small commuter operators and pilots, but this benefit will not be realized without a systematic way of ensuring that training and other interventions are also directed at small commercial aviation operations. Finally, the Safer Skies initiative and most aviation safety studies to date have focused on the causes of past accidents.
While analyses of accident data are useful for determining the causes of past accidents, reducing fatal accidents during a period of rapid growth in aviation will probably require the analysis of the changing aviation environment to anticipate future safety problems. Preliminary international efforts have been initiated to consider future hazards, and integrating these efforts with Safer Skies’ work would enhance the initiative’s efforts to reduce the fatal accident rate.

Recommendations

To further reduce the nation’s fatal accident rate and save lives in the type of aviation operation that causes the most fatal accidents and fatalities, we recommend that the Secretary of Transportation direct the FAA Administrator to work with the general aviation community to set a more challenging goal for reducing the number of fatal general aviation accidents by 2007, set interim goals to assess progress toward this new goal, and ensure that training and other interventions that emerge from general aviation teams are communicated to small commuter operators and pilots who may benefit from them.

Agency Comments

DOT and FAA officials concurred with our recommendations aimed at setting a more challenging interim goal and long-term goals for general aviation and said that they planned to do so in the future. However, the officials noted that existing general aviation accident data are too inaccurate to be used as the basis for setting an accident reduction goal. The general aviation steering committee has established a work group to recommend ways to improve the quality of general aviation data. The officials stated that FAA and the general aviation community would review the accident reduction goal when the quality of the data improves.
DOT and FAA officials disagreed with our recommendation aimed at ensuring that training and other interventions emerging from general aviation teams are communicated to small commuter operators because they believe that mechanisms already exist to do this. The officials explained that a number of associations representing smaller commuter aircraft participate on the general aviation steering committee and on its analysis and implementation teams. These organizations provide conduits for transmitting interventions developed by the general aviation teams to small commuter operators. We agree that these organizations may facilitate the transfer of safety interventions developed by the general aviation teams to small commuter air carriers. However, it will be difficult to achieve the mandated 80-percent reduction in commercial aviation fatalities without systematic improvements in the safety record of small commuter air carriers, which accounted for 28 percent of fatal commercial aviation accidents. We believe that Safer Skies would benefit from a systematic plan for ensuring that interventions developed by general aviation teams are communicated to and implemented by small commuter operators. For this reason, we did not modify or delete our recommendation. DOT and FAA officials disagreed with our recommendation calling for an analysis of future safety problems arising from the rapid growth and changes in aviation. The officials noted that efforts involving FAA, Safer Skies, and the European aviation industry are already under way to address future hazards in aviation. On the basis of the information presented by DOT and FAA officials, we withdrew this recommendation.

The Safer Skies Initiative Has Made Progress in Selecting and Implementing Interventions

Joint FAA and industry teams have started work on 13 of the 16 problems being addressed by the initiative.
A two-part process has been developed for use by these teams to first analyze accident and incident data and then use that analysis to identify, select, and implement safety interventions to help prevent accidents in the future. That process is reasonable and has allowed FAA and industry groups to reach consensus on how to address safety problems identified under the initiative. This process was not used to address cabin safety problems because the cabin safety steering committee had already begun its work before the process was developed. The Safer Skies teams have made progress primarily in those areas that had been studied extensively in the past for which widely supported recommendations already existed. The interventions recommended for five problems are now being implemented: uncontained engine failure and CFIT in commercial aviation; and passenger seatbelt use, child restraint systems, and carry-on baggage in cabin safety. The process being used will require more extensive analysis in the future as teams begin to address safety problems that have not been studied previously. Finally, the success of the interventions that the Safer Skies teams have chosen to address these long-standing safety problems depends in part on effective implementation. Our past work has shown that FAA does not consistently follow through on implementing key safety recommendations. Furthermore, FAA and the aviation industry began implementing some of the Safer Skies safety interventions before having a process in place to track their progress. The initiative has developed a process for tracking the implementation of interventions to improve safety in commercial aviation. However, the implementation of Safer Skies’ interventions is not assured because the tracking system for commercial aviation is not sufficiently detailed to assess progress in implementing interventions.
Furthermore, the cabin safety steering committee implemented its interventions without having a tracking process in place, and the general aviation steering committee is working toward the final approval of interventions to address two safety problems without having a tracking process. Without a complete tracking process, FAA and the industry cannot ensure that the initiative will improve aviation safety in each of these areas.

The Safer Skies Methodology Is Based on Previous Efforts to Identify Safety Problems

For the Safer Skies initiative, FAA and the aviation industry jointly developed a two-part process to analyze accident data and then to choose from among the possible interventions. This process grew out of a previous FAA effort that used a data-driven approach to identify threats to aviation safety and develop interventions to address those threats. During the first part of this process, an analysis team reviews accident data to determine what went wrong, why it went wrong, and what interventions might be the most effective in preventing similar accidents in the future. The second part of the process involves another team that assesses the feasibility of each potential intervention, prioritizes the interventions on the basis of their effectiveness and feasibility, and submits plans for implementing projects to the steering committee for approval. However, as we discuss later in this chapter, the steering committee addressing cabin safety problems did not use this process.

The Initiative Uses a Two-Part Process to Analyze Data and Identify Interventions

The initiative uses a two-part process to analyze data and identify interventions to address safety problems in commercial aviation and general aviation. This process is modeled on an analysis of the most significant threats to aviation safety conducted in 1997 by staff from FAA’s Aircraft Certification Service.
The two-part process was developed for use by the teams addressing safety problems in commercial aviation but has also been used by the general aviation teams with some modifications. Under the process, the steering committee forms an analysis team for each aviation safety problem. The team, which includes members from FAA and the aviation industry, reviews accident data, determines accident causes, and identifies possible interventions to prevent future accidents. For selected accidents, the team develops a detailed sequence of events that includes the actions by pilots and air traffic controllers as well as any system or equipment failure. The team determines what went wrong and why and then considers various interventions that could have prevented the accident. In its final report, the analysis team ranks all of the identified interventions by their effectiveness in preventing similar accidents and presents them to the steering committee for further action. Once the analysis team completes its work on a safety problem, the steering committee forms a second team to assess the feasibility of implementing the interventions suggested by the analysis team. The implementation team assesses feasibility in six areas: the cost of the intervention; the time needed to implement it; whether it requires regulatory changes; technical feasibility; the practicality of the project within the operating environment or the nationwide aviation system; and political feasibility. The implementation team prioritizes the interventions by both effectiveness and feasibility and then presents the resulting prioritized list to the steering committee. Once the steering committee initially approves an intervention, the implementation team develops a detailed project plan for implementation that is sent to the steering committee for final approval. Once detailed plans are approved, the interventions are then implemented by the responsible organizations. 
The general aviation teams have made some modifications to the analysis process initially developed for use by the commercial aviation teams. Although the first commercial aviation analysis team considered feasibility as well as effectiveness, the two-part process ultimately approved for commercial aviation teams considers only effectiveness at the analysis stage. Any consideration of such matters as cost and the need for developing new regulations is left to the implementation team. In contrast, the general aviation analysis teams consider both effectiveness and feasibility. Our review of the general aviation analysis team’s reports for CFIT and weather confirmed that such feasibility criteria as cost and the need for new regulations have been considered far earlier in the assessment of general aviation interventions than in the process now used by commercial aviation teams. While other feasibility factors are also considered, cost, the avoidance of interventions that would require new regulations, and acceptability to the general aviation community have weighed heavily in the choice of interventions to address general aviation safety problems. In emphasizing cost and acceptability to the aviation community, the general aviation teams have selected training and other interventions that will be more affordable to general aviation pilots.

The Cabin Safety Team Used a Different Approach

While the initiative is using a systematic, defined approach to consider ways to address safety problems in commercial and general aviation, a different approach was used to address cabin safety problems.
Several months before the announcement of the Safer Skies initiative, FAA established the Partners in Cabin Safety (PICS) team to provide information to the public about four cabin safety problems: passenger interference with flight crews, the safety benefits of greater use of seat belts by passengers, the safety benefits of child safety restraints, and potential safety issues arising from the stowage of carry-on baggage. According to PICS team members, FAA identified these problems before assigning them as tasks to the team in January 1998. Team members discussed such additional issues as in-flight medical emergencies and cabin air quality but settled on the four that were eventually included. Unlike the commercial and general aviation teams, the PICS team limited the possible interventions to ones that did not require that FAA create new regulations, a process that was viewed by some participants as too slow and unlikely to result in consensus among various industry and government participants. Consequently, the PICS team focused on interventions that involved educating passengers. The four cabin safety problems addressed differ in several important ways from those safety problems addressed by the Safer Skies teams in commercial and general aviation. First, the cabin safety problems resulted in only two fatalities in U.S. commercial aviation from 1988 through 1997, both involving passengers not using their seat belts when the aircraft encountered turbulence. In contrast, during the same period of time, there were more than 9,800 fatalities in all commercial and general aviation accidents. Second, air carriers are not required to maintain or submit data on cabin safety incidents unless they involve fatalities or serious injuries. 
Since only limited historical data on cabin safety accidents and injuries were available for analysis, the PICS team did not conduct a causal analysis as has been done by the analysis and implementation teams for both commercial and general aviation. The PICS team disbanded in January 1999 after it completed the development of passenger education materials. As part of its passenger education efforts, the team distributed brochures on child restraint systems from a previous campaign by FAA. In addition, the Luggage and Leather Goods Manufacturers of America, along with FAA, developed a brochure addressing carry-on baggage concerns, which the team members were asked to distribute to airlines, luggage stores, and airports. It was also put on FAA’s World Wide Web site for further distribution by interested parties. Steering committee members and FAA officials also told us that the team worked with air carriers to develop additional cabin announcements for the stowage of carry-on baggage and the importance of seat belt usage. Finally, the PICS team developed a passenger safety checklist for publication on FAA’s Web site, which addressed passenger interference with flight crews, seat belt usage, child restraint systems, and carry-on baggage. This checklist, however, is not currently available on FAA’s Web site. According to an official at FAA’s Flight Standards Service, the passenger safety checklist project is on hold until the agency appoints a new national resource specialist for cabin safety who will review the document before it is made available to the public.

The Safer Skies Initiative Has Made the Most Progress With Problems Studied Previously

Since the FAA Administrator announced the Safer Skies initiative in April 1998, work has started on 9 of the 12 safety problems to be addressed in the commercial and general aviation safety areas.
Teams have made the most progress in selecting interventions for safety problems when they could build on previous studies for which widely supported recommendations exist. The commercial aviation steering committee plans to have work started on all of the identified problems before the end of fiscal year 2000, but the general aviation steering committee has not yet determined when work on three of its six problems will begin. Table 7 shows the status of the work on each of the 12 safety problems to be addressed in commercial and general aviation as of April 1, 2000. The general aviation CFIT and weather reports were presented to us as final reports. However, in responding to our draft report, FAA told us that these reports had not received final approval. The steering committees charged with developing interventions for each of the safety problems first formed analysis teams to work on problems for which major studies had already been done or were under way. The ongoing and completed studies conducted by FAA and the industry provided information the analysis teams could use to identify the causes of accidents and potential interventions. For example, the Flight Safety Foundation had completed an extensive study on CFIT, examining over 250 accidents and incidents worldwide. The foundation had also developed training materials for pilots and made other recommendations to prevent CFIT accidents. In another instance, the team analyzing weather-related accidents involving general aviation aircraft identified 11 safety studies that had preceded its efforts, all of which recommended interventions similar to the ones the team ultimately identified. 
FAA participants on the Safer Skies commercial aviation steering committee told us that beginning with previously studied safety problems helped team members make progress in developing the team’s two-part process for analyzing data and identifying interventions and become comfortable with the analysis and selection process before moving on to more complex issues that may involve original research and analysis. However, this approach meant that work on another area that is important for reducing the fatal accident rate in commercial aviation did not start until September 1999—17 months after the initiative’s announcement. FAA identified loss of control as the single largest cause of fatal commercial aviation accidents involving U.S. operators. To date, the implementation of interventions has concentrated mostly in areas for which analysis and implementation were well under way or complete when the initiative began. The first of the interventions to be implemented addressed uncontained engine failure: FAA issued a series of airworthiness directives requiring enhanced inspections of high-speed rotating parts in certain jet engines. The directives require industry maintenance personnel to perform additional, more detailed inspections to check for cracks and other signs of irregularities whenever an engine is disassembled for overhaul or maintenance. According to staff at FAA’s Engine and Propeller Directorate, these directives affect more than 90 percent of the jet engines that U.S. airlines currently use. FAA and the industry are also taking steps to implement an intervention endorsed by the commercial aviation team examining CFIT. The team has recommended that enhanced navigational equipment be installed on new and existing aircraft to warn pilots of impending crashes.
Air carriers began installing the enhanced navigational equipment to prevent CFIT accidents in their aircraft before FAA issued its final rule in March 2000 requiring that the equipment be installed in the commercial fleet and before the commercial aviation team working on CFIT issued its final report in June 2000. This equipment is now being included on some new aircraft, and airlines had equipped about 4,000 aircraft already in service with the new technology by December 31, 1999. The timetable for analysis and implementation teams addressing the problems included under the initiative has changed since the initiative was announced in April 1998. According to the chairs of the Safer Skies steering committees, some of these schedule changes occurred because the analysis process took longer than anticipated. In other cases, changes to the analysis approach required rescheduling Safer Skies’ efforts. For example, FAA officials explained that the final report date for the commercial aviation CFIT implementation team was rescheduled after the steering committee decided that combining the CFIT and approach and landing teams for the implementation analysis made sense because of overlap in the interventions they had identified. Several high-priority interventions to address CFIT accidents in commercial aviation were, however, forwarded to the steering committee for final approval and implementation without waiting for the implementation team’s final report. An FAA co-chair of the general aviation steering committee told us that they changed the start dates for several of the general aviation teams because general aviation accidents are more numerous than commercial aviation accidents and analyzing them proved more time-consuming than anticipated.
This FAA official also said that some general aviation groups participating in the initiative do not have enough people or resources to serve on multiple teams simultaneously. While we believe that these decisions were justified, they also effectively mean that the interventions to resolve some key safety problems will not be identified or implemented until later than originally anticipated.

Early Experience Indicates That Future Problems Will Require More Analysis

Additional analysis will be needed to identify interventions to address current and future safety problems for which few or no previous studies exist. Safer Skies teams relied initially on a limited number of case study analyses to identify the causes of accidents and incidents, as well as the interventions that could prevent them in the future. The teams compared the results of these case studies with the causes and interventions identified by previous studies to determine whether they are consistent. For example, the team working on CFIT in commercial aviation completed detailed event sequences for 10 accidents and found that the causes identified and the interventions it recommended were similar to those of prior studies. Safer Skies teams working on approach and landing accidents in commercial aviation and weather-related accidents in general aviation also compared the results of their analyses with those of prior studies. Along with other changes as the Safer Skies initiative has evolved, this approach has been modified as teams addressed additional safety problems. For example, the runway incursion analysis team expanded its case studies to include incidents because there were so few fatal accidents involving runway incursions. Similarly, the analysis team now working on loss of control in commercial aviation has selected a larger number of case studies because this safety problem has not been the subject of extensive prior analysis.
Effective Implementation Is Critical Next Step in Making Progress Toward the Goals Set for Reducing Fatal Accidents

The Safer Skies initiative has identified the major safety problems to be addressed, has made progress in identifying their root causes, and has developed interventions to address some of them. Reducing fatal accidents depends in part on the effective implementation of these interventions. As discussed in chapter one, however, many of these safety problems are long-standing ones that have persisted in spite of previous studies and recommendations. In addition, FAA has not consistently followed through on implementing safety recommendations in the past. The Safer Skies initiative does not yet have in place a process to track the implementation of these interventions that is sufficiently detailed and covers interventions chosen to improve safety in commercial aviation, general aviation, and cabin safety.

The Success of Safer Skies Interventions Depends on Effective Implementation

Reducing the fatal accident rate in commercial aviation and the number of general aviation accidents will depend in part on effective implementation of the interventions chosen by the Safer Skies teams. Many of the safety problems that the initiative addresses are long-standing ones that have been studied extensively in the past. Actually resolving these problems has proven difficult in the past and remains very challenging. Similar interventions have been recommended, but the desired reductions in fatal accident rates have not been achieved. For example, extensive prior studies of CFIT and approach and landing accidents in commercial aviation recommended many of the same interventions that are now being implemented by the Safer Skies commercial aviation steering committee.
Furthermore, reaching the 80-percent goal in commercial aviation will depend heavily on the successful implementation of interventions to address the safety problems that caused the most fatal accidents: loss of control, CFIT, and approach and landing. To reach the goal in commercial aviation, interventions must be effectively implemented for both small commuter aircraft and large commercial air carriers. Even after safety interventions have been identified, implementing them has proven challenging. As DOT’s Inspector General and we have reported previously, FAA does not consistently follow through on implementing safety recommendations. Our review showed that FAA usually agreed with the recommendations on aviation safety made by GAO, NTSB, and DOT’s Inspector General. FAA had implemented 64 percent of the 256 recommendations that we reviewed; however, FAA had not completed actions to implement the remaining 36 percent of the recommendations. We found that FAA sometimes did not establish time frames for implementing the recommendations or did not meet established times for implementing them. Similarly, DOT’s Inspector General found that of the 23 near-term actions FAA planned for addressing runway incursions in its 1998 Action Plan, 15 had not been completed on time. We found that even safety recommendations that received specialized attention, intensive follow-up, and heightened awareness among industry, the Congress, and the public have not been fully implemented. For example, NTSB considered runway incursions so serious that it repeatedly placed this safety problem on its lists of critical safety recommendations in the early 1990s.
Although FAA concurred with NTSB’s recommendations, our review found that several of the corrective actions needed had not been implemented, including actions to improve (1) visibility at airports; (2) runway lighting, signage, and surface markings; and (3) radar and related equipment to alert air traffic controllers to impending runway incursions. FAA developed several plans in the 1990s to decrease runway incursions. In spite of these programs, the actual number of runway incursions has increased. DOT’s Inspector General noted in 1999 that the number of runway incursions had increased from 292 in 1997 to 325 in 1998, in part because FAA had not set aside the funds needed to support the initiatives and projects in the runway incursion action plan. As a result, FAA has made limited progress in implementing its plan, and milestones have been missed and extended. DOT’s latest performance report for fiscal year 1999 shows continuing problems in this area. The actual number of runway incursions (321) was 19 percent higher than the goal of 270 established in DOT’s performance plan. Industry participants in the Safer Skies initiative have voiced concern that some interventions may not be implemented promptly or at all. Some of the same Safer Skies participants questioned whether enough resources would be available to complete the implementation of the selected interventions. Without assurance of adequate resources, it is likely that the choice of interventions by Safer Skies teams will be constrained by cost considerations and the implementation of recommended interventions will be incomplete. Effective implementation will also depend on having a process for tracking the implementation of interventions to be carried out by all Safer Skies participants, including FAA; other government agencies; manufacturers; airlines; and other industry participants.
The Steering Committees Have Not Yet Developed Effective Processes for Tracking the Implementation of Interventions

FAA and the aviation industry began implementing some of the Safer Skies safety interventions before developing a systematic way of tracking the progress being made. This occurred in part because the steering committees incorporated some safety initiatives already under way and endorsed the resulting interventions before they developed a systematic tracking process. In addition, Safer Skies teams have recommended that a few high-priority safety initiatives be started before final implementation reports are issued. While moving forward on important safety initiatives makes sense, ensuring their successful implementation depends on effective tracking. Interventions have been implemented in both commercial aviation and cabin safety with no tracking process in place. The general aviation steering committee is moving toward approval of interventions for CFIT and weather but has not yet developed a tracking process. Several of the Safer Skies participants we interviewed voiced some concerns about whether all the interventions being identified would eventually be implemented, given FAA’s past problems in implementing recommended safety improvements.

Tracking Has Been Limited and Not Systematic

In its December 1997 report, the congressionally mandated commission on aviation safety recommended that FAA’s and the industry’s strategic plan include milestones for accomplishing specific tasks. The commission noted that the plan should be detailed enough that milestones for accomplishing specific tasks can be readily recognized by agency management and the industry, as well as the public. In addition, the commission directed FAA to report periodically on where initiatives stand, why any delays are occurring, and whether and why changes are being made to the plan. These recommendations are in accordance with sound internal controls for program management.
The Safer Skies initiative, which was announced in April 1998, implemented a number of interventions without first developing a process for tracking their progress. In some cases, these were interventions that were developed by teams whose work was incorporated into the Safer Skies effort. In commercial aviation, for example, FAA, relying on the work of the uncontained engine failure team, published airworthiness directives beginning in April 1999 to require more extensive inspections of aircraft engines. The commercial aviation team working on CFIT also implemented several interventions in or before September 1999. These included interventions to verify the operational status of radar equipment to provide minimum safe altitude warnings to pilots and to develop a template for standard operating procedures to be used by airlines in training their pilots in techniques to avoid CFIT accidents. In September 1999, the commercial aviation steering committee recognized the need for the systematic tracking of interventions and directed a work group to develop a proposal. At the commercial aviation steering committee’s meeting in January 2000, the work group presented its proposal for a Joint Implementation Measurement Team. The team designed the tracking process to provide a high-level report on whether each intervention is being implemented as planned. Specifically, this team’s responsibilities will include tracking whether the implementation of approved interventions complies with the implementation plans and their milestones; helping to predict the potential effectiveness of the proposed interventions; and identifying ways of measuring whether the intervention is achieving the desired risk reduction. The team will also provide a brief explanation of what is causing noncompliance with the plan and whether a solution has been found to resolve the problem. 
As conceived, the tracking report is to be a high-level progress report that does not intrude on the internal planning of the organizations responsible for carrying out the interventions. The tracking report thus does not provide detailed information on interim and long-term milestones or identify individuals responsible for implementing the plan and preparing progress reports for the tracking committee. Without more detailed information than is currently provided in the proposed tracking report, it may be difficult for the steering committee to assess progress in implementing interventions. For example, the tracking team’s January report notes that FAA has completed a plan for implementing two programs critical to gaining access to safety data and that other industry and government groups have plans in development. However, the tracking report provides no information about the milestones established by FAA’s plan for establishing these key programs, both of which have experienced delays in the past. After we identified concerns about the tracking system, the commercial aviation steering committee agreed that improvements are needed, and it is working on revisions. A draft version provided for our review in June 2000 still lacked key information about major commitments, deliverables, and milestones. Tracking implementation is even more critical for the more complex initiatives whose success depends on coordinated efforts by both FAA and the aviation industry. For example, successful implementation of the highest-priority intervention to prevent CFIT accidents in commercial aircraft—the installation of enhanced aircraft navigational equipment to warn pilots of impending crashes—requires coordination among many parties: FAA must certify that the equipment works, issue technical standards for manufacturers, and issue a final regulation to require that the equipment be installed on new and existing aircraft.
Aircraft manufacturers need to make the equipment standard on new aircraft and retrofit it in older aircraft. Air carriers need to incorporate the appropriate procedures for maintaining and using this equipment into maintenance and flight manuals and to train pilots in its use. FAA needs to update its guidance to its inspectors so that they can ensure that air carriers properly carry out their responsibilities for training, maintenance, and use of the equipment. Without a tracking system that provides more detailed information on the implementation of complex interventions, the commercial aviation steering committee will not have the information needed to ensure that they are fully implemented in accordance with planned milestones. The implementation of interventions to improve cabin safety has also not been adequately tracked. The cabin safety steering committee, which completed the development of passenger education materials before it disbanded in January 1999, carried out most of its interventions with no Safer Skies tracking process in place. However, we found that educational materials related to passenger interference with crew had not been distributed or made available on FAA’s Web page as of April 2000. Furthermore, according to a member of the cabin safety steering committee, the distribution of other cabin safety brochures was, in some instances, never completed. The absence of a systematic process for tracking Safer Skies interventions may have contributed to inaccuracies in reporting on the status of cabin safety interventions. Specifically, the DOT FY 2001 Performance Plan and FY 1999 Performance Report states that all initiatives relating to cabin safety were completed as planned. However, planned actions to include material on passenger interference with crew had not been completed as of April 2000. 
Finally, although the general aviation steering committee is reviewing draft implementation team reports that recommend interventions to address CFIT and weather, it has no process in place to track the implementation of interventions once they are approved. According to the FAA co-chairs of the general aviation steering committee, this group has committed to track the interventions selected but has not yet developed a process for doing that and plans to discuss this issue at a future meeting. Without coordinated, detailed implementation plans that assign responsibilities, FAA and the Safer Skies steering committees will not be able to ensure that all parties complete their portion of the plan and that implementation occurs on time. In addition, as part of the Safer Skies process, FAA and the general aviation community identified efforts that could be accomplished in the short term or were already under way to address the safety areas covered by the initiative. FAA and the industry implemented a number of these short-term initiatives, such as the development and distribution of various safety videos and training aids. However, Safer Skies did not track the implementation of these interventions or evaluate their effectiveness.

Conclusions

The progress made by the initiative to date has resulted in the implementation of interventions for five safety problems—two in commercial aviation and three in cabin safety. However, a coordinated, centralized method of tracking will be necessary to ensure full implementation of these and future interventions. In the past, FAA has developed plans to make safety improvements but has not consistently implemented them successfully. An effective tracking system would provide for identifying the individuals or entities responsible for implementation, setting milestones, establishing resource estimates, and preparing progress reports.
Without a systematic tracking mechanism, there is no assurance that any of the selected interventions will be fully implemented. While the commercial aviation steering committee has developed a system to track the implementation of the interventions it approves, this system is not sufficiently detailed to ensure their implementation. The general aviation steering committee, which is nearing final approval on interventions to address safety problems related to weather and controlled flight into terrain, is only now developing a tracking system modeled after the one used by the commercial aviation steering committee. Finally, nothing comparable has been developed to track interventions recommended by the cabin safety teams.

Recommendations

To ensure that interventions are implemented and that effective and feasible interventions are identified in the future for issues that the initiative has yet to address, we recommend that the Secretary of Transportation direct the FAA Administrator to advise the Safer Skies steering committees to take the following actions: Develop a systematic way of tracking the implementation of interventions approved by all Safer Skies steering committees. This tracking system should include the identification of responsibility for implementation, the establishment of short- and long-term milestones and resource estimates, and the preparation of progress reports. The progress reports should provide information on the detailed steps to be taken by all government and industry participants to ensure the successful implementation of each intervention. Progress reports should highlight and explain any delays in meeting the milestones. This system should be shared with the relevant Safer Skies steering committees and FAA’s focal point for the initiative as well as with the team that recommended the intervention.
Agency Comments

DOT and FAA officials concurred with our recommendation on the need to track the implementation of interventions to achieve results, but they disagreed with the level of detail we recommended. The officials stated that the commercial aviation steering committee’s draft revised tracking system provides better information for tracking the major commitments and deliverables. The expectation is that more detailed implementation plans will be maintained within each implementing organization. The officials do not believe that it is realistic for steering committees to review the details of every organization’s action plan. They also noted that the general aviation steering committee is developing a tracking system similar to that used to track commercial aviation interventions. We agree that the Safer Skies initiative has taken steps to improve its tracking system for commercial aviation and to work toward the development of a similar system for general aviation. However, the revised tracking system provided for our review in June 2000 did not clearly identify and include time frames for major commitments and deliverables for each of the interventions approved by the commercial aviation steering committee. We agree that individual FAA and industry organizations responsible for implementing Safer Skies interventions would logically have far more detailed systems for tracking implementation than the steering committees. However, without a reliable tracking system in place that contains basic information on major deliverables, responsibilities, and time frames, FAA and Safer Skies will not be in a position to ensure that recommended interventions are implemented to improve aviation safety. DOT and FAA officials disagreed with our recommendation that FAA and the Safer Skies steering committees should analyze a sample of safety problems that were not studied previously.
The officials presented information that showed some Safer Skies work groups were using or would be using a sample of previously unexamined safety problems in their work. For this reason, we withdrew the recommendation.

The Safer Skies Initiative Has Not Yet Developed Performance Measures to Evaluate the Effectiveness of Most Interventions

Of the five Safer Skies teams that have begun implementing interventions, only one has developed a performance measure to evaluate whether the interventions it has selected are helping to reduce the safety problems that cause fatal accidents and are worth what they cost. Such evaluations depend on performance measures that serve as the yardsticks for measuring progress toward program goals. The initiative’s ultimate goal is saving lives by reducing fatal accidents. Federal law requires that federal departments evaluate the effectiveness of the program activities for which they request funding. FAA will evaluate progress toward its broad goals for aviation safety using performance measures based on reducing the fatal accident rate for commercial aviation and the number of fatal accidents in general aviation. However, additional performance measures will be needed for evaluating the effectiveness of the interventions selected by the teams working on each of the safety problems. Most teams are still analyzing data on safety problems and selecting safety interventions and thus have not yet determined how to evaluate the effectiveness of interventions selected. Although teams working on 5 of the 16 safety problems have recommended interventions that are being implemented, only one of these teams developed an adequate performance measure before its interventions were implemented.
Federal Law Requires the Development of Performance Measures as Part of the Budget Process

To ensure that programs achieve their objectives and that funds are expended wisely, federal law requires that each department develop performance measures as part of its budget request. Performance measures are the yardsticks used to evaluate the effectiveness of the activities undertaken as part of federal programs. The initiative plans to develop performance measures to evaluate the effectiveness of the interventions it recommends to save lives by addressing the safety problems that cause fatal accidents. However, developing good performance measures can be difficult. While it is useful to establish a baseline of information about past fatal accidents, they occur too rarely to serve as performance measures to evaluate the effectiveness of interventions. Years may elapse between specific types of fatal accidents, such as uncontained engine failure, making it difficult to see trends or evaluate the effectiveness of interventions. Instead, the initiative must develop performance measures based on events that occur more frequently and that can be linked closely to interventions. A congressional mandate exists for the measurement and evaluation of all federal programs. Performance measurement is a central premise of the Government Performance and Results Act of 1993 (Results Act). This act requires annual performance plans to cover each program activity set out in a federal agency’s budget. Among other requirements, performance plans are to (1) establish performance indicators to be used in measuring or assessing the outcomes of each program activity, (2) determine how to compare actual results with the performance goals, and (3) describe the means to verify and validate information used to report on performance. In accordance with this law, DOT develops annual plans that include performance measures for specific programs and activities.
Agencies under DOT, such as FAA, develop more detailed plans and performance measures for each program activity. Because of its impact on FAA’s programmatic and budgeting activities, the Safer Skies initiative falls under the Results Act’s requirement to evaluate program performance. Moreover, it was developed in response to the National Civil Aviation Review Commission’s report, which specifically directed FAA and the aviation industry to establish performance measures and milestones to assess the initiative’s progress in meeting safety goals, to review priorities periodically, and to monitor progress. The Safer Skies initiative incorporates the idea of establishing performance measures to evaluate progress toward safety goals. As a result, the Safer Skies teams that recommend interventions are tasked with developing the performance measures for those interventions approved by the steering committees. For a performance measure to be useful, a baseline must be established against which to measure the effect of the intervention. Good evaluation criteria include (1) definitions of baseline information on the extent of the safety problem over a particular period prior to the implementation of the intervention and (2) time frames for evaluating changes using the performance measure. Goals and time frames must also be established to determine what the program is expected to achieve and by when. For the initiative, appropriate baseline information includes both the total number of fatal accidents and the number of fatal accidents caused by each safety problem within each type of aviation operation (i.e., commercial aviation and general aviation). Good performance measures have several key features: the event to be measured (e.g., a runway incursion) or desired outcome (a reduction in the number of runway incursions) is measurable; data on the event are or could be collected; and the event occurs with sufficient frequency between evaluations for progress to be measurable.
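The three features above amount to a simple screening checklist. The sketch below is illustrative only; the candidate measures, their attributes, and the one-event-per-year threshold are our assumptions for the example, not figures from the report:

```python
# Illustrative screen of candidate performance measures against the three
# features named above: the event is measurable, data on it are (or could
# be) collected, and it occurs often enough between evaluations.
from dataclasses import dataclass

@dataclass
class CandidateMeasure:
    name: str
    measurable: bool        # can the event or outcome be counted?
    data_collected: bool    # is a reporting mechanism in place?
    events_per_year: float  # expected frequency of the event

def is_usable(m: CandidateMeasure, min_events_per_year: float = 1.0) -> bool:
    """A candidate passes only if it has all three features."""
    return m.measurable and m.data_collected and m.events_per_year >= min_events_per_year

# Hypothetical candidates for illustration.
candidates = [
    CandidateMeasure("runway incursions", True, True, 300.0),
    CandidateMeasure("fatal CFIT accidents", True, True, 0.5),   # too rare
    CandidateMeasure("pilot attentiveness", False, False, 0.0),  # not measurable
]

usable = [m.name for m in candidates if is_usable(m)]
print(usable)  # ['runway incursions']
```

The frequency threshold is the criterion most often missed in practice: a measure can be countable and well reported yet, like fatal accidents, occur too rarely to show progress between evaluations.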
The performance measures under development to evaluate Safer Skies’ initiatives can be assessed against these criteria.

Determining the Effectiveness of Safer Skies’ Initiatives Will Require the Development of Additional Performance Measures

Determining the effectiveness of Safer Skies interventions will require the development of performance measures other than the overall goals set for commercial and general aviation. Fatal aviation accidents occur so infrequently that their usefulness is limited as a measure of the success of Safer Skies’ interventions. This is especially true for commercial aviation, which had a total of 85 fatal accidents in the United States from 1988 through 1997. The fact that a particular type of accident has not occurred for several years does not mean that the underlying safety problem has been successfully addressed. Furthermore, for several reasons it may be difficult or impossible to match a specific implementation plan to a numerical reduction in fatal accidents overall or attributable to a specific safety problem. For example, in general aviation the lack of detail in accident reporting makes it difficult to determine specific accident causes; the lack of pilot profiles makes it difficult to evaluate the effectiveness of pilot training strategies; and it is hard to predict how many aircraft owners will install new safety equipment in the future. Thus, to determine to what extent an intervention is reducing fatal accidents attributed to a specific safety problem, teams will need to develop additional performance measures. The commercial aviation steering committee recognized early the need to develop interim measures to evaluate the unique effect of individual interventions. Even if a team identifies suitable performance measures for a specific safety problem, it may be difficult to determine whether a particular intervention, cluster of interventions, or other outside factors influenced changes in the performance being measured.
This is especially true for situations in which teams choose numerous interventions to address a safety problem. While the uncontained engine failure team developed a single primary intervention, the team working on CFIT in commercial aviation has already initiated several interventions and is contemplating dozens more. Similarly, the general aviation team working on weather recommended 17 interventions. Without some way to independently evaluate the effectiveness of individual interventions or clusters of interventions, the initiative will have little way of knowing whether particular interventions save lives and are thus worth the time or money being expended on them. In developing performance measures, one option involves using the precursors to accidents as proxies for the likelihood of fatal accidents. Precursors are events that, although they typically precede a particular type of fatal accident, often occur without culminating in a crash. For example, approach and landing accidents are almost always preceded by unstable approaches to the airport, but many unstable approaches may culminate in a hard or late landing that does not result in injuries or a crash. Performance measures based on precursors have been developed to evaluate initiatives for one of the safety problems the initiative is addressing, uncontained engine failure. The success of this approach depends on identifying appropriate accident precursors that can serve as proxies for the specific safety problem the team is addressing. Precursors are most useful when they follow the criteria for good performance measures: they are measurable, relevant data on them are available, and they occur with sufficient frequency. 
Most Safer Skies’ Interventions Are Being Implemented Without Determining How to Evaluate Their Effectiveness

Of the 16 Safer Skies teams, 8 have recommended safety interventions for implementation, and interventions from 5 of these teams have been or are being implemented, but only one team has developed a performance measure that can show whether its interventions are effective at saving lives. Most Safer Skies teams are still analyzing data on safety problems and selecting interventions and have not yet determined how to evaluate the effectiveness of interventions selected. Of the five teams whose recommendations are being implemented, three have developed some performance measures. Only the uncontained engine failure team has developed two quantifiable performance measures that are based on accident precursors. In contrast, the general aviation teams working on CFIT and weather developed some general performance measures for reducing accidents resulting from these safety problems but did not quantify these measures. No performance measures were developed to evaluate the educational interventions implemented to address the four cabin safety problems. Finally, the team working on CFIT accidents in commercial aviation has implemented one intervention in advance of the team’s final report. While this team has not yet developed a performance measure for this intervention, it is considering using an accident precursor. Performance measures based on accident precursors have potential for use in evaluating the effectiveness of additional interventions being considered to address CFIT and other safety problems. FAA does not presently collect data on some accident precursors that could be used to evaluate the effectiveness of Safer Skies interventions and faces significant barriers to collecting such data.
The Uncontained Engine Failure Team Has Chosen Two Accident Precursors as Performance Measures

The Safer Skies team working on uncontained engine failure chose two accident precursors as performance measures for evaluating the effectiveness of the intervention it recommended: more extensive engine inspections. Because uncontained engine failure caused just two fatal accidents in the United States in 1988-97, fatal accidents are too infrequent to serve as a performance measure. But well-established trend data show that the safety problem occurs much more frequently, resulting not in fatal accidents but in incidents with severe or serious consequences on an average of about 1.5 times a year. The team chose the rate of these incidents as the primary performance measure for its recommended intervention. The team also chose another accident precursor as a second performance measure: the number of cracks detected in engine disks when engines are overhauled. Data analysis identified cracked disks as the primary cause of uncontained engine failure. According to staff at FAA’s Engine and Propeller Directorate, each crack detected during inspections probably avoids an uncontained engine failure that could have had severe or serious consequences. Both accident precursors chosen—the rate of uncontained engine failure with severe or serious consequences and the detection of cracks in engine disks—have some of the attributes of a good performance measure. Both can be counted, and reporting mechanisms are in place for collecting the key data needed for both measures. Hence, it will be possible to evaluate whether the more extensive engine inspections lead to the detection of more cracks and fewer instances of uncontained engine failure with severe or serious consequences. However, good performance measures track events that occur often enough between evaluations to show whether progress is being made.
Uncontained engine failure with severe or serious consequences occurs from one to three times a year, according to data from 1992-98, while cracks in engine disks are likely to be discovered about once in 25,000 inspections, according to staff at FAA’s Engine and Propeller Directorate. Hence, 2 to 5 years may elapse before the effectiveness of the more extensive engine inspections can be judged. Nonetheless, tracking both measures should provide sufficient data for reasonable interim and final performance measures, and the enhanced inspections provide an opportunity to avert potentially catastrophic accidents. The uncontained engine failure team established much of the information needed to use its performance measures to evaluate the effectiveness of enhanced engine inspections. During our review, we worked with FAA staff on the team to develop additional information to provide a more complete context for how that intervention relates to the overall Safer Skies effort and to the fatal accident rate in commercial aviation. We then developed a template for this information that can serve as a model for other Safer Skies implementation teams. (See table 8.) The template displays the data critical for understanding the extent of the safety problem and the baseline for measuring progress in addressing it, including the frequency of the problem’s occurrence in 1988-97 and projections of its occurrence with and without the recommended intervention by 2007, the target year for Safer Skies to achieve an 80-percent reduction in the overall fatal accident rate. The template reflects the team’s goal of reducing the rate and projected number of uncontained engine failures with severe or serious consequences by 50 percent by 2007. 
General Aviation Teams Did Not Develop Quantified, Specific Performance Measures

The general aviation implementation teams for CFIT and weather have completed their draft reports but did not develop quantified, specific performance measures to evaluate the effectiveness of the interventions they recommended. The general aviation CFIT team recommended 5 interventions subdivided into 22 distinct subinterventions. None of the 22 subinterventions included specific, quantified performance measures. For example, the CFIT team recommended developing criteria for standardizing the marking of wires, towers, and support structures to help decrease the number of CFIT accidents that occur when pilots of low-flying aircraft, such as helicopters and small planes, fly into these obstacles. As one measure of effectiveness, the team chose a decrease in the number of CFIT accidents involving wires or towers. However, the team did not provide any baseline information about the number of past CFIT accidents that involved wires or towers or the types of aircraft involved. To the extent that such baseline information is available, it provides a yardstick against which to measure progress in reducing these accidents. Furthermore, the team did not provide any specific interim or long-term accident reduction goals for the number of accidents or the percentage of the fleet affected. Without such information, it will be impossible to determine whether or by how much CFIT accidents involving wires or towers have decreased. Other performance measures for general aviation CFIT initiatives share this lack of quantification and specificity. Without baseline information on the occurrence of the problem prior to the implementation of the intervention and specific quantified goals, it will be impossible to evaluate the effectiveness of the interventions implemented. The general aviation team working on weather experienced similar problems in setting performance measures for its interventions.
The team’s final report recommended 17 interventions subdivided into 49 distinct subinterventions. Of the 49 subinterventions, only 1 included a quantified, specific performance measure. The rest had either no performance measures or performance measures that were not quantified or specific. Some of the interventions for which no performance measures were established involve research that is still ongoing to develop the technology suggested in the intervention. For example, NASA has the lead in developing equipment to sense turbulence and warn flight crews so that they can avoid or reduce the dangers associated with turbulence. Because research on this technology is preliminary, the performance measures are described broadly as reducing fatalities and injuries. It is likely too early to establish performance measures for these interventions. The performance measures included for many other subinterventions were too broad to allow actual evaluation of their effectiveness. The performance measure for most of these was a “decrease in the number of weather-related accidents.” These performance measures are neither quantified nor linked in any specific way to the interventions, which makes it impossible to determine what portion of the reduction, if any, is attributable to individual interventions or clusters of interventions. Of the performance measures developed, several measure progress in implementing training interventions, rather than the effectiveness of the training in reducing safety threats. For example, one intervention involves training Flight Service Station specialists and supervisors on in-depth weather analysis and interpretation to improve the weather briefings given to general aviation pilots. The associated performance measure involves training all of these FAA staff by 2002, rather than measuring the effectiveness of that training. In other cases, the team did not include a performance measure when one could have been developed. 
For example, one intervention involves conducting a refresher clinic for flight instructors to update them about current weather information and provide appropriate training materials for them to use with general aviation pilots. No performance goal was specified for this intervention. To measure how well this intervention has been implemented, it is possible to determine the number of flight instructors, to establish a goal for how many attend this training each year, and to have them provide information on how many pilots they subsequently train using the information. To determine whether the intervention is effective, the pilots who receive the training could later be surveyed to determine whether they had used the weather information provided or their safety records could be compared with the records of pilots who did not have the training. The link between accident reduction and such training is more tenuous than the link between crack detection and the prevention of uncontained engine failure, but it is possible to gain at least some information about the effectiveness of the training. Without such feedback, it is difficult to determine whether the training is effective and should be continued. Without more specific baseline information on these performance measures prior to the implementation of the interventions and interim and long-term goals for progress, the initiative will not be able to evaluate the impact of these interventions. In responding to our draft report, FAA noted that the implementation teams for CFIT and weather relied on the expertise of team members, following analysis of the root causes of accidents, to determine the probable effectiveness of the interventions. Safer Skies analysis and implementation reports described problems with the quantity, quality, and type of data currently available about general aviation. 
These problems include shortcomings in the data for the types and numbers of operations and in the level of detail of the actual accident investigations. FAA concluded that the problems with general aviation data make it difficult to measure the effectiveness of individual intervention strategies by the traditional approach of measuring their effect on accident rates. While we acknowledge the need to improve general aviation data, we also believe that such data can provide some indication of the relative frequency and importance of the causes of fatal accidents. Such information is also important for making decisions about which interventions to fund and expedite, considering their potential effectiveness and the number of fatal accidents that their use might prevent. It may not be possible to develop quantitative performance measures for all interventions proposed by the implementation teams; good performance measures depend on having measurable events, a way to collect data on those events, and an event that occurs with sufficient frequency between evaluations for progress to be measurable. The performance measures for both general aviation weather and CFIT could be improved where possible by identifying and quantifying baseline information, ensuring that a means exists for collecting data on the performance measure, and setting interim and long-term goals against which to measure progress in implementing the intervention.

The Safer Skies Initiative Did Not Develop a Strategy for Evaluating Cabin Safety Interventions

The Safer Skies cabin safety steering committee completed work on four safety problems and implemented most interventions without developing any strategy for evaluating the interventions. Although the steering committee completed its work in January 1999, it did not develop performance measures for the interventions it selected.
While the initiative’s broad goal is reducing the fatal accident rate, the broad goal for cabin safety is educating the flying public about four areas: passenger interference with flight crews, passenger use of seat belts, child restraint systems, and carry-on baggage. The steering committee distributed brochures about carry-on baggage and the importance of child restraint systems and worked with air carriers to develop additional cabin announcements to remind passengers to use their seat belts. The team did not, however, set up any evaluation to show whether the public’s knowledge about these issues improved as a result of these interventions and whether that improved knowledge would result in fewer fatalities. While useful performance measures could be defined in each of the four cabin safety areas, the steering committee did not develop a strategy for evaluating the impact of its educational initiatives. For example, the steering committee did not plan or track the distribution of the flyers it issued about carry-on baggage or child restraint systems, and it developed no performance measures for evaluating the effectiveness of these initiatives to educate the public. Furthermore, FAA does not have a mechanism for consistently collecting data about any of these areas. Airlines are required to report information related to cabin safety only if something happens in the cabin that results in serious injuries or death. As a consequence, the agency does not have baseline data for measuring improvements that may result from its initiatives. Thus, the Safer Skies initiative has no way of measuring the effectiveness of its educational efforts in the cabin safety area.

Precursors of Accidents Have Potential for Use as Performance Measures in Other Safer Skies Areas

Precursors to accidents have the potential for use as performance measures for evaluating interventions to address at least three other Safer Skies safety problems: CFIT, runway incursions, and approach and landing.
Precursors are needed because fatal aviation accidents caused by all three safety problems occur rarely. The precursors for each safety problem have at least some of the attributes of good performance measures.

Navigational Alerts Could Serve as a Performance Measure for One CFIT Intervention

The Safer Skies team working on CFIT accidents in commercial aviation is considering using an accident precursor to evaluate the effectiveness of one of its interventions: the installation of enhanced navigational equipment on aircraft that sounds alerts to warn pilots of impending crashes. The equipment tracks data on the frequency of the alerts and the situations in which they occur. Although these data are not currently collected by FAA, they could be used to develop a performance measure based on the alerts sounded as precursors to CFIT accidents. The performance measure of alerts sounded could indicate the number of dangerous situations avoided. Alerts sounded by this navigational equipment have several features of a good performance measure. First, the alerts can be measured. Second, the equipment itself tracks such warnings. Finally, the alerts are sounded with sufficient frequency to be useful as a performance measure. According to the manufacturer, enhanced navigational equipment was installed in over 4,000 aircraft from March 1996 through December 1999. In 14 instances, the alerts enabled pilots to recover from impending crashes.

Runway Incursion Incidents Could Serve as a Performance Measure

Runway incursion incidents that do not result in accidents provide another useful performance measure and are being used as such by FAA.
From 1988 through 1997, 2,345 runway incursions resulted in five fatal accidents and 59 fatalities in the United States. However, runway incursions have the potential to cause much greater numbers of fatalities; the collision of two large aircraft on the ground in the Canary Islands in 1977 resulted from a runway incursion and took more than 580 lives. Because runway incursion incidents are increasing in the United States and have the potential to lead to fatal accidents, FAA’s Performance Plan for FY 2000 has used these incidents to establish a performance measure for a series of safety recommendations designed to reduce accidents caused by runway incursions. The Safer Skies team addressing runway incursions has not yet identified interventions, but FAA’s ongoing work offers some useful performance measures for measuring progress in addressing this safety problem. Runway incursion incidents have all three features of a good performance measure. First, the incidents can be counted. Second, the data can be collected because FAA already has a mechanism for reporting runway incursions. Moreover, FAA has collected data on them for years, and therefore has historical data that can be used to establish baselines against which the effectiveness of interventions intended to reduce runway incursions can be measured. For example, one intervention now in use by FAA involves deploying action teams to airports that have experienced high numbers of runway incursion incidents to determine the causes and develop action plans to resolve them. Data on runway incursion incidents can be used to determine whether the use of action teams reduces such incidents at the airports in question. Finally, runway incursion incidents occur with sufficient frequency to make it possible to measure progress between evaluations. Several hundred runway incursion incidents have been reported each year this decade.
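Because FAA already holds a decade of incursion counts, a baseline rate follows directly from the figures cited above, and post-intervention counts can then be compared against it. The sketch below shows that arithmetic; the 2,345-incident total and 10-year span come from the report, while the post-intervention count is a hypothetical value for illustration.

```python
# Baseline runway incursion rate from the historical counts cited above.
incursions_1988_1997 = 2345
years = 10
baseline_per_year = incursions_1988_1997 / years
print(baseline_per_year)  # 234.5

# A performance measure can then express an evaluation year's count as a
# percentage change from the baseline. This count is hypothetical.
post_intervention_count = 200
pct_change = (post_intervention_count - baseline_per_year) / baseline_per_year * 100
print(round(pct_change, 1))  # -14.7
```

A negative percentage indicates fewer incidents than the historical baseline; tracked annually, the same calculation yields the interim progress measures the report calls for.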
Unstable Approaches Could Serve as a Performance Measure for Approach and Landing

The Safer Skies team working on approach and landing accidents in commercial aviation is considering using an accident precursor to evaluate the effectiveness of training and other related interventions. The team determined that unstable aircraft approaches to airports were clearly precursors to many approach and landing accidents. Several problems can contribute to unstable approaches, including excess speed on approach, aircraft flaps not in position, and an approach that is too steep or too shallow. Data on each of these key aspects are recorded on an aircraft’s flight data recorder. Thus, the team has an opportunity to develop a performance measure based on reducing the number of unstable approaches. Unstable approaches have some features of good performance measures. First, they are measurable. Second, data on them can be obtained from flight data recorders. However, there are barriers to obtaining these data that must be overcome before unstable approaches can be used as a performance measure for approach and landing interventions. Finally, unstable approaches occur frequently enough to measure progress resulting from interventions.

Potential Barriers Exist to the Use of Some Accident Precursors as Performance Measures

Barriers exist to using some accident precursors as performance measures. For example, the use of unstable approaches as a performance measure depends on access to information from aircraft flight data recorders. While some airlines use data from flight recorders to analyze the causes of safety problems on routine flights, there are barriers to sharing this information with FAA or with other airlines. Logistical barriers include (1) the limited information tracked by older flight data recorders still in use and (2) differences in the ways that air carriers have programmed flight data recorders to track key information.
Because of these differences, the kinds of data items needed to track unstable approaches are not being captured with enough consistency for this measure to be a good indication of performance throughout commercial aviation. Other potential barriers also prevent the use of unstable approaches as a performance measure. Among these barriers are the ongoing debate about how data from flight recorders are to be shared, who should have access to these data, and whether legal enforcement cases can be initiated on the basis of these data. Numerous major aviation safety reports in this decade have advocated a program that would gather and analyze information from flight data recorders about routine flights. FAA has for years promised to establish such a program. However, the inability of FAA, the aviation industry, and other federal agencies to reach consensus on key aspects of this program has delayed its finalization. While shared data can move safety forward, concerns about potential litigation, criminal indictments, and the violation of an air carrier employee’s privacy have served as barriers to the establishment of the program. Such concerns have also delayed the finalization of other programs to enhance the sharing of aviation safety data. For example, safety reports have for years recommended the establishment of Aviation Safety Action Programs to encourage voluntary self-reporting of safety violations by pilots; FAA issued an advisory circular providing guidance for these programs on March 17, 2000.

Conclusions

Most Safer Skies teams have not finished analyzing the causes of the safety problems they are working on and have not yet selected interventions to prevent the problems. Thus, these teams have not developed methods to evaluate the effectiveness of their interventions. But when interventions have been selected, most have been implemented without first determining how to evaluate their effectiveness.
Neither FAA nor the aviation industry will have the information critical to determining whether the interventions have made progress in resolving the safety problems until appropriate performance measures are developed. Evaluating the impact of safety interventions depends on having good baseline data on the extent of the problem prior to the implementation of the intervention, explicit short- and long-term goals against which to measure progress, and performance measures that are clearly linked to the safety problem being addressed. In addition, as Safer Skies teams select interventions to address the safety problems that caused fatal aviation accidents, it would be useful to identify clearly any existing barriers to the development of performance measures. These barriers include differences in aircraft equipment and the absence of needed data. Once such problems are clearly identified, FAA and the aviation industry can work jointly to resolve them.

Recommendations

To improve the ability to determine the effectiveness of Safer Skies interventions, we recommend that the Secretary of Transportation direct the FAA Administrator to work with the Safer Skies steering committees to direct the teams to identify the extent of fatal accidents resulting from the safety problems they are working on. If possible, data should be developed to establish a consistent baseline against which to measure the progress that results from the Safer Skies initiative. If an analysis team has already completed its report, the implementation team working on the same safety problem should develop these baseline data.
More specifically, to better measure progress toward overall safety goals, we recommend that the FAA Administrator work with the Safer Skies steering committees to revise the implementation guidance to (1) develop an overall performance measure or measures to determine progress toward eliminating the safety problem the team is addressing; (2) consider using accident precursors as performance measures for the safety problem in question; and (3) identify any barriers that may impede the implementation of performance measures.

Agency Comments

DOT and FAA officials agree in principle with the need for baseline data on the extent of each safety problem and performance measures to determine progress toward overall safety goals. They concur with the potential of accident precursors as possible performance measures and with the importance of identifying any barriers that may impede the implementation of performance measures.

Coordination Has Been Extensive but Needs Improvement for the Safer Skies Initiative to Succeed

FAA coordinated extensively with numerous representatives from the aviation industry, other federal agencies involved in aviation safety, and its own staff on the identification of safety problems and the selection of interventions. However, efforts to prioritize, fund, and evaluate Safer Skies initiatives could be better coordinated with industry and within FAA and the Department of Transportation (DOT). Joint government-industry efforts to improve safety are not new, but participants noted that the initiative was more inclusive than prior joint efforts. This inclusive approach should help FAA gain consensus on which interventions will best address aviation safety problems. However, our review identified three coordination problems that could undermine the implementation and evaluation of Safer Skies interventions.
First, although FAA officials have repeatedly committed to funding interventions agreed upon by all parties working on the initiative, skepticism still exists among some participants as to whether this commitment can or will be honored. This is particularly true in general aviation. Second, it remains unclear what process will be used, if funding is limited, to reprioritize available resources to ensure funding for interventions that emerge later but have greater potential for reducing the fatal accident rate. Finally, Safer Skies steering committees, FAA, and DOT have not coordinated how they will measure progress in achieving the accident reduction goal for commercial aviation.

The Safer Skies Initiative Involves an Unprecedented Level of Coordination Between Industry and Government

FAA included aviation experts from a wide range of government and industry organizations on the Safer Skies steering committees and the teams working on the 16 safety problems. Many participants represent groups that are directly responsible for the nation’s aviation safety, such as the air carriers and the manufacturers of aircraft and engines. Other participants come from trade associations that represent various aviation groups or from federal agencies that share responsibility for aviation safety, including the Department of Defense and the National Aeronautics and Space Administration. In addition, while giving priority initially to reducing the U.S. accident rate, the initiative recognized the increasingly global nature of aviation. In an effort to address both domestic and worldwide aviation safety problems, the commercial aviation steering committee included representatives from two international aviation authorities, the Joint Aviation Authorities and the International Civil Aviation Organization. Joint efforts between industry and government officials to study aviation safety problems are not new.
In prior years, government and industry convened various joint teams to review aviation safety issues and make recommendations; however, according to Safer Skies participants, those earlier teams did not always include representatives from major organizations who were responsible for aviation safety. As a result, FAA was not always successful in obtaining consensus on the safety interventions that those teams recommended. Safer Skies participants noted that the level of participation and cooperation for this initiative is unprecedented among the major groups responsible for aviation safety and should enhance FAA’s chances of implementing the safety interventions recommended by the various teams. Moreover, the initiative coordinated ongoing aviation safety work that was being conducted independently by FAA, industry, and other federal agencies. For example, aircraft manufacturers had initiated an exhaustive study on ways to prevent uncontained engine failure. FAA eventually joined the aircraft manufacturers in this study, and it subsequently became part of the Safer Skies agenda. In addition, the industry and FAA had been conducting independent studies on runway incursions and CFIT. Under the initiative, representatives from the aircraft manufacturers, airline industry, and government are members of the teams studying 16 safety problems, and together they will decide on the strategies to address them.

The Funding, Prioritization, and Evaluation of Safer Skies Interventions Could Be Better Coordinated

While coordination between government and industry organizations participating in the initiative has been extensive, we identified three areas in which coordination could be improved. First, although FAA has committed to funding interventions approved by the Safer Skies steering committees, uncertainty remains about the agency’s ability to fund these safety interventions.
The steering committees for commercial aviation and general aviation have both sought commitment to the implementation and funding of interventions before giving final approval to move forward. However, FAA’s commitment has come at different points in the approval process for interventions recommended by these steering committees, and FAA’s commitment to the general aviation interventions was still uncertain even after some industry and FAA officials believed the steering committee had given its final approval. As a consequence, general aviation participants were more skeptical about whether FAA would implement or fund their safety interventions. Second, it remains unclear what process will be used to reprioritize available resources if funding is limited. Finally, Safer Skies steering committees, FAA, and DOT have not coordinated how they will measure Safer Skies’ progress in achieving the goal of reducing the fatal accident rate in commercial aviation by 80 percent by 2007.

Skepticism Persists About FAA’s Ability to Fund Safety Interventions

Skepticism persists about whether FAA can or will be able to honor its commitments to fund the interventions approved by the Safer Skies steering committees to reduce the fatal accident rate. This is especially true in the general aviation community. This skepticism results partly because the process for approving and funding Safer Skies interventions has thus far worked differently for general aviation than for commercial aviation, contributing to differing perceptions about whether interventions will be funded and implemented. These perceptions stem in part from the different processes the two steering committees have used to seek approval and funding from participating organizations, from the way interventions have moved forward within the two committees, and from FAA’s handling of the interventions they recommended.
The Process for Final Approval of Interventions Has Worked Differently in the Two Steering Committees

The final approval of recommended safety interventions has worked differently in the commercial aviation and general aviation steering committees. The commercial aviation steering committee has documented its process for approving interventions, which involves members’ gaining the approval of their respective organizations for both implementation and funding. This approval comes in two stages. First, steering committee members brief their respective organizations on the general concept of each intervention under consideration and seek preliminary approval of each intervention. Changes and modifications may be suggested by the organizations. For organizations that will be involved in the implementation of an intervention, the preliminary approval also involves a tentative commitment to fund the cost of implementing any interventions for which they are responsible. Once members grant preliminary approval, the steering committee asks the team to draw up detailed implementation plans for each intervention. These implementation plans are then submitted to the steering committee for the next level of approval. Members subsequently seek final approval of these plans from the organizations they represent, including firm resource and funding commitments if appropriate. When participating organizations concur with the detailed implementation plans, the steering committee grants final approval. To date, most of the commercial aviation teams have forwarded a few interventions at a time for final approval by the steering committee, rather than complete lists of interventions to address multiple aspects of complex safety problems, such as CFIT.
Thus, when the commercial aviation steering committee has given its final approval for an intervention, members interviewed told us they assumed that the intervention had a high priority and that implementation would take place because the organizations responsible for implementation had already committed both the staff and funding needed. In contrast, the general aviation steering committee had not documented its process for approving interventions at the time of our review, although it recently developed draft procedures, according to FAA’s response to our draft report. Furthermore, the commitment to provide resources for the interventions chosen to address CFIT and weather is still pending, although some members of both FAA and industry who serve on the steering committee understood that those interventions had received final approval. Once these two implementation teams submitted their draft reports to the general aviation steering committee, the steering committee asked members to have their organizations review and comment on each intervention. This process resulted in preliminary approval or disapproval of the concept of each intervention, in some cases after the intervention was modified. Organizations responsible for the implementation of interventions also were expected to give a tentative commitment to fund the cost of their implementation. The steering committee then asked the teams to develop detailed implementation plans for each intervention and to submit those for its final approval. These two teams recommended and developed plans for a total of 17 interventions, many of which involve subinterventions and will require substantial resources either in the form of staff or funding from FAA. Because of the number and potential cost of interventions contained in the two general aviation reports, FAA requested that the general aviation steering committee prioritize the interventions.
The general aviation steering committee prioritized the interventions in the letter that transmitted the final CFIT and weather reports to the FAA Administrator in March 2000. Unlike the commercial aviation teams, which have presented one intervention at a time to the steering committee, the general aviation teams have presented their complete series of interventions for each safety problem. As the general aviation CFIT and weather reports moved toward final approval, however, confusion arose. Some industry and FAA participants believed that these reports had received final approval. This perception is supported by a March 22, 2000, letter from the industry and FAA co-chairs of the general aviation steering committee transmitting to the FAA Administrator the final CFIT and weather implementation reports with their detailed implementation plans. The letter and accompanying reports identified high-priority interventions for immediate implementation. These participants were concerned because FAA was still undecided which interventions would actually be implemented and funded. In contrast, FAA’s informal written comments in response to our draft report state that final approval has not been given to either implementation report and depends on the completion of detailed implementation plans by the FAA offices responsible for carrying out the implementation. According to the Director of Aircraft Certification, confusion arose because some members of the steering committee had “misperceptions” about what levels of approval had been agreed to. 
FAA’s Internal Review and Funding Process for Safer Skies Interventions Has Led to Some Uncertainty About Whether Some Interventions Will Be Funded

FAA’s internal review and funding process for Safer Skies interventions has led to uncertainty about whether some interventions will be funded, in part because interventions forwarded by the commercial aviation and general aviation steering committees have been handled somewhat differently thus far. Like the other organizations participating in the initiative, FAA must commit its own resources to the interventions that it is responsible for implementing. In October 1999, FAA formed an executive council to help coordinate the implementation of the agency’s safety agenda, including how to provide funding and staff resources for Safer Skies interventions. The executive council includes the heads of each of FAA’s major program offices, its general counsel, and a regional administrator. The executive council has not yet documented its process for approving and funding interventions, however, and it remains unclear at what point FAA is committing resources to implement Safer Skies interventions. This uncertainty has led to different perceptions on the part of some FAA and industry participants about the likelihood that interventions will be implemented and funded. FAA staff working on the initiative described differences in the way the executive council has handled interventions proposed by the two steering committees. These differences have resulted in a clear indication of funding for commercial aviation interventions before that steering committee’s final approval is given, while the general aviation steering committee’s final approval was given on a series of weather and CFIT interventions that have yet to be approved and funded by FAA.
When proposed Safer Skies interventions are under serious consideration by the steering committees, they are also presented to the executive council for discussion of their possible impact on workload and budget, according to FAA staff who serve as co-chairs of the two steering committees. The executive council provides feedback to the steering committees before interventions are approved. FAA staff serving on Safer Skies committees presented conflicting views, however, of when FAA commits to funding interventions. Several of the FAA staff interviewed said that FAA’s commitment of staffing and funding to commercial aviation interventions occurs before that steering committee gives its final approval to interventions. However, the Director of FAA’s Aircraft Certification Service, who serves as co-chair of the commercial aviation steering committee, described the executive council’s role as leaving more room for interpreting each intervention and subsequently determining whether funding is available. She said that, once the intervention is approved, the executive council again discusses it, determines whether to accept it as stated or to modify it, assigns it to an FAA office for implementation, and determines how it fits in with the office’s existing priorities. The program office then reviews the intervention, can suggest modifications that will achieve the same goal, and determines whether the intervention can be accomplished with existing resources or requires a request for additional funding. She said that the executive council could also request that the steering committee modify or prioritize interventions.
For example, she said that FAA agreed to implement the commercial aviation CFIT team’s recommendation to develop precision-like airport approaches, concluded that the agency’s resources would not permit the completion of approaches for all airports in the time frame envisioned by the intervention, and is now working with the steering committee to identify which airports present the greatest risks and should be completed first. Similarly, she said that the council asked that the general aviation steering committee approve a different way to accomplish one intervention without hiring additional staff and prioritize its list of CFIT and weather interventions according to which ones will have the most impact on improving safety and reducing fatalities. Because the executive council’s role is new and its procedures remain undocumented, confusion persists about when FAA commits its resources to implementing the safety interventions approved by the steering committees. For example, although FAA’s executive safety council had agreed in principle to the highest-priority interventions to address general aviation safety problems caused by weather and CFIT, FAA’s response to our draft report indicated that final approval and funding depend on the completion of detailed implementation plans. As a consequence, several Safer Skies participants from FAA and industry, especially those working on general aviation issues, expressed some concern about whether the recommended interventions would be funded or implemented. These concerns stem partly from FAA’s past record of implementing safety recommendations. FAA’s budget does not specifically identify and commit resources to implementing Safer Skies interventions. For example, FAA has no funds set aside in its budgets for fiscal years 2000 or 2001 for general aviation interventions.
However, FAA’s Deputy Associate Administrator for Regulation and Certification said that the agency’s approach to budgeting is to retain flexibility by not identifying specific budget amounts for such efforts as the Safer Skies initiative. While we do not advocate including specific Safer Skies line items in FAA’s budget, the uncertainty about funding and implementation also exists because FAA has either not fully funded or not implemented some safety recommendations in the past. Several industry participants in the initiative specifically mentioned concerns about FAA’s lack of follow-through on safety recommendations to decrease the number of runway incursions. Although FAA has received many recommendations for reducing runway incursions, continuing problems in this area have been partially attributable to insufficient funding of the safety plans FAA developed, according to DOT’s Inspector General. Additionally, after initially planning to fund the agency’s new inspection system, FAA has still not provided funding to hire analysts to review inspection data on the nation’s 10 major airlines for possible safety concerns. While FAA has implemented many safety recommendations over the years, concerns still persist about the agency’s ability to fund new safety initiatives. Greater assurance about the implementation of Safer Skies interventions could be provided in two ways. First, as mentioned in chapter 3, stronger mechanisms for tracking the implementation of interventions from all three steering committees need to be established. Second, clarifying FAA’s process for committing resources for implementing interventions would provide greater assurance of their implementation. Both of these steps would improve coordination between FAA and other Safer Skies participants. Thus far, the interventions approved by steering committees have not required a major commitment of time and resources by either FAA or industry groups.
But future interventions may require substantial resources not included in FAA’s current budget, and choices may have to be made about which interventions to fund. Furthermore, FAA addresses and funds many issues beyond those on the Safer Skies agenda, including security issues and improvements to the air traffic control and airport infrastructure. FAA’s executive council provides a forum for agency managers to discuss and prioritize program and resource needs. However, without clear priorities and a unified aviation safety agenda that also takes such issues into account, FAA will continue to address aviation piecemeal, rather than as an integrated system. While the Safer Skies initiative represents a major step in the direction of coordinating the nation’s aviation safety agenda, a more far-reaching effort has not yet been undertaken to coordinate the nation’s complete aviation agenda.

The Initiative Does Not Have A Process for Prioritizing Interventions to Ensure the Implementation of Those With the Greatest Potential to Reduce the Fatal Accident Rate

The initiative has not developed a process for prioritizing interventions to ensure the implementation of those with the greatest potential to reduce the fatal accident rate if funding is limited. The initiative has involved prioritization at several points thus far. First, the teams addressing safety problems in commercial aviation and general aviation have prioritized the interventions they considered. For example, the general aviation weather team considered numerous possible safety interventions and eventually developed a list of 17 that it presented in order of priority. The steering committees have also prioritized interventions. For example, the commercial aviation steering committee has moved quickly on several interventions that the CFIT implementation team considered as having a high priority and potential for effectiveness.
At the request of the executive council, the general aviation team created a unified list to prioritize its CFIT and weather interventions. Given the constraints of FAA’s budget, such prioritization is critical to ensuring that funds are expended on the interventions that will be most effective in reducing the fatal accident rate. The ability to reprioritize resources for Safer Skies interventions and other aviation work may also become critical. The Safer Skies team has just begun work on loss of control—the safety problem that caused the greatest number of fatal accidents in commercial aviation in 1988-97. Interventions to address loss of control are thus likely to be critical for reducing the fatal accident rate. If funding is limited, this may mean reprioritizing funding from existing programs and Safer Skies interventions that have already been approved to those with more potential to reduce the fatal accident rate and save lives. The initiative’s success will depend in part on its ability to identify those interventions with the most potential impact and to prioritize their implementation and funding. Safer Skies steering committees and FAA’s executive council have not yet established any process for reprioritizing interventions if funding is limited.

Safer Skies Steering Committees, FAA, and DOT Have Different Ways of Measuring Progress in Reducing Commercial Aviation’s Fatal Accident Rate

A lack of coordination among Safer Skies steering committees, FAA, and DOT has resulted in their having different ways of measuring whether the goal of reducing the fatal accident rate for commercial aviation by 80 percent is achievable by 2007. DOT is responsible for setting safety goals for all modes of transportation under its authority, including aviation. Generally, FAA and other agencies under DOT have established specific goals, in line with those set by DOT, and use measurements to evaluate their progress in meeting those goals.
But currently, DOT and FAA measure progress toward the goal of an 80-percent reduction in the fatal accident rate for commercial aviation in different ways. DOT’s Performance Plan for fiscal year 2001 establishes goals for reducing the fatal accident rate in commercial aviation that rely on the Safer Skies initiatives. To determine the progress made in reducing the rate, DOT’s plan uses aircraft flight hours as the activity measure. In contrast, the commercial aviation steering committee and FAA use aircraft departures as the measure of aviation activity. Because DOT, FAA, and Safer Skies all share a common goal of reducing the fatal accident rate, consistency would be desirable in the aviation activity measure they use to calculate the progress being made toward that goal. Since most commercial aviation accidents occur during takeoff and landing, we believe that using departures would better measure the effectiveness of the Safer Skies interventions for commercial aviation.

Conclusions

Additional steps need to be taken to ensure that those safety interventions most critical to reducing the nation’s fatal accident rate are given top priority and funding. If FAA’s process for prioritizing and funding Safer Skies interventions is not clarified, there is no assurance that the agency will be able to implement these interventions. If funding is limited, a process may well be needed for reprioritizing available staffing and funding to ensure that the interventions with the greatest potential for reducing the nation’s fatal accident rate and saving lives are implemented first. Even if Safer Skies steering committees and FAA agree on the priorities for the nation’s safety agenda, these priorities will continue to compete for resources with other aviation needs until FAA develops a unified aviation agenda.
Finally, FAA, the Safer Skies commercial aviation steering committee, and DOT are not using the same aviation activity measure to calculate the progress of Safer Skies interventions in reducing the fatal accident rate for commercial aviation. Consequently, they may reach different conclusions about the effectiveness of the Safer Skies interventions in achieving the goal of reducing the fatal commercial aviation accident rate by 80 percent by 2007.

Recommendations

To ensure the implementation of the Safer Skies safety interventions, we recommend that the Secretary of Transportation direct the FAA Administrator to clarify the executive council’s process for committing to the funding and implementation of interventions and coordinate with the Safer Skies steering committees about the meaning and timing of this commitment. To ensure that the interventions with the greatest potential for reducing the fatal accident rate and improving aviation safety receive needed resources, we recommend that the Secretary of Transportation direct the FAA Administrator to ensure that the executive council has a process in place for reprioritizing interventions if funding is limited. To ensure that the extent of progress toward reducing the fatal accident rate for commercial aviation is measured consistently, we recommend that the Secretary of Transportation ensure that DOT, FAA, and the Safer Skies commercial aviation steering committee all use departures as the activity measure for calculating the rate.

Agency Comments

DOT and FAA officials concurred with our recommendations to clarify the executive council’s process for committing to the funding and implementation of interventions and to use departures as the activity measure for calculating the fatal accident rate in commercial aviation. They disagreed with our recommendation that FAA’s executive council should develop a process for reprioritizing interventions if funding is limited.
The officials said that such reprioritization falls under the agency’s normal processes for reprogramming funding. However, the role of the executive council is to help coordinate the implementation of the agency’s safety agenda—including how to provide funding and staff resources for Safer Skies interventions. We believe that it would be useful for the executive council to establish some basic criteria and processes for evaluating and comparing the potential impact of existing and emerging safety interventions. For this reason, we did not modify or withdraw our recommendation.

Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) Safer Skies Initiative, focusing on: (1) to what extent addressing the safety problems to be addressed by the initiative will help reduce the fatal accident rate; (2) what progress the initiative has made in identifying and implementing interventions to address each of these safety problems; (3) what progress has been made in assessing the effectiveness of those interventions; and (4) how FAA is coordinating the Safer Skies initiative with other safety activities conducted throughout the agency, in partnership with the aviation industry, and by other federal agencies. 
GAO noted that: (1) the Safer Skies initiative addresses the safety problems that have contributed to fatal accidents in the past, and in conjunction with other safety problems, it can be expected to reduce the fatal accident rate and thus enhance the safety of the nation's air passengers; (2) in commercial aviation, the initiative addresses safety problems that accounted for over three-quarters of the fatal accidents in those operations in 1988-1997; (3) in general aviation, the Safer Skies initiative plans to address safety problems that appear to be the most common causes of fatal accidents; (4) the initiative has adopted a less aggressive goal in general aviation of reducing the number of fatal accidents to 350 in 2007, which represents about a 20-percent reduction; (5) the initiative addressed four safety problems in cabin safety; (6) to date, safety improvement efforts by FAA and the initiative have focused on reducing the causes of past accidents and incidents, which may not be entirely predictive of future ones; (7) as of April 1, 2000, Safer Skies teams had started work on 13 of the 16 safety problems and had begun implementing interventions for 5 of these--2 in commercial aviation and 3 in cabin safety; (8) since most of the interventions developed under the Safer Skies initiative are in early implementation stages, little progress has been made in evaluating their effectiveness; (9) of the five Safer Skies teams that have begun implementing interventions, only one has developed a performance measure to evaluate whether the interventions it has selected are helping to reduce the safety problems that cause fatal accidents and are worth what they cost; (10) FAA has coordinated extensively with aviation experts from industry, other federal government agencies, and its own staff, but GAO's review identified three coordination problems that could undermine the implementation and evaluation of Safer Skies' interventions; (11) although FAA officials have 
repeatedly committed to funding interventions agreed upon by all parties, skepticism still exists among some participants as to whether this commitment can or will be honored; (12) furthermore, if funding is limited, it remains unclear what process will be used to reprioritize available resources to ensure funding for interventions that emerge later but have greater potential for reducing the fatal accident rate; and (13) the Safer Skies initiative, FAA, and the Department of Transportation (DOT) have not agreed on how they will measure progress in achieving the accident reduction goal for commercial aviation.
Background

The Global Supply Chain

Supply chain security is a principal element of CBP’s layered strategy to protect commerce. In the post-9/11 environment, the movement of cargo shipments throughout the global supply chain from foreign manufacturers, suppliers, or vendors to retailers or other end users in the United States is inherently vulnerable to terrorist actions. Every time responsibility for cargo shipments changes hands along the global supply chain there is the potential for a security breach. Thus, vulnerabilities exist that terrorists could exploit by, for example, placing a weapon of mass destruction into a container for shipment to the United States or elsewhere. Figure 1 illustrates key points of transfer involved in the global supply chain—from the time that a shipment is loaded with goods at a foreign factory to its arrival at the U.S. port and ultimately the retail facility or end user.

C-TPAT Program Structure and Membership

CBP initiated the C-TPAT program in November 2001 as part of its layered strategy for overseeing global supply chain security. C-TPAT aims to secure the flow of goods bound for the United States through voluntary antiterrorism partnerships with entities that are stakeholders within the international trade community—see table 1 for information on the types of entities eligible for C-TPAT membership. The SAFE Port Act established a statutory framework for the C-TPAT program. In addition to formally establishing C-TPAT as a voluntary government-private sector partnership program to strengthen and improve the overall security of the global supply chain, the act codified existing membership processes for the C-TPAT program and added new components, such as time frames for certifying, validating, and revalidating members’ security practices.
Through C-TPAT, CBP intends to enhance the security of the global supply chain to the United States through partnership agreements, and by reviewing and periodically validating C-TPAT members’ security practices. As a first step in C-TPAT membership, an entity must sign an agreement with CBP signifying its commitment to enhance its supply chain security practices consistent with C-TPAT minimum security criteria and to work to enhance security throughout its global supply chain to the United States. The partnership agreements that C-TPAT members sign provide CBP with the authority it needs to validate members’ security practices. According to CBP officials, as of September 2016, there were 11,490 C-TPAT members. Importers, representing 37 percent of the C-TPAT members, were the largest C-TPAT member group and, as shown in figure 2, the remaining 63 percent of C-TPAT members were distributed among other trade industry sectors. Since we last reported on the C-TPAT program in April 2008, CBP has expanded C-TPAT membership to include other trade industry sectors, such as third party logistics providers and exporters. According to CBP officials, as of September 2016, CBP employed 146 security specialists (to include supervisory security specialists) who are to certify or validate members’ security practices and provide other services for C-TPAT members, such as serving as points of contact concerning C-TPAT program responsibilities. The security specialists operate from C-TPAT headquarters in Washington, D.C., and six field offices throughout the United States: Los Angeles, California; Miami, Florida; Newark, New Jersey; Buffalo and New York, New York; and Houston, Texas. The C-TPAT program, which resides within CBP’s Office of Field Operations (OFO), was funded at over $36 million in fiscal year 2016.
C-TPAT Membership Requirements and Benefits

CBP employs a multistep process, led by its security specialists, for accepting entities as members in the C-TPAT program, validating their supply chain security practices—to include the practices of their supply chain partners—and providing them benefits. This screening process, which CBP has documented through standard operating procedures, consists of five key steps, as shown in figure 3 and described in greater detail in appendix I. To facilitate the member screening process, C-TPAT staff gather, review, and prepare documentation using an information-sharing and data management system called Portal 2.0. C-TPAT officials use Portal 2.0 to review C-TPAT member-submitted information and record certification and validation results. In addition, C-TPAT member companies use Portal 2.0 to submit program applications, security profiles, and other information to C-TPAT officials. In exchange for allowing C-TPAT staff to review and validate their supply chain security practices, C-TPAT members become eligible to receive benefits—such as reduced likelihood of examinations of their shipments, expedited shipment processing, and greater access to CBP staff and information—once their membership is certified. Upon certification, C-TPAT importers and exporters are granted only Tier I status. Importers and exporters whose supply chain security practices have been validated by C-TPAT security specialists are granted either Tier II (meeting minimum security criteria for their business type) or Tier III (employing innovative supply chain security measures that are considered best practices and exceed minimum security criteria) status. Tier II and Tier III C-TPAT importers and exporters receive increasingly reduced risk scores (i.e., as tier level increases, risk scores are lowered) in CBP’s Automated Targeting System (ATS), thus generally reducing the likelihood that their shipments will be examined upon entering U.S. ports.
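The tiered relationship between validation status and ATS risk scoring can be pictured with a brief sketch. This is an illustration only: the actual ATS scoring rules are not public, and apart from the tier names, everything here (the multiplier model, the numeric values, the function names) is invented for the example.

```python
# Illustrative sketch of tier-based risk-score reduction, assuming a
# simple multiplier model. ATS's real scoring logic is not public;
# the multipliers below are invented for illustration only.

TIER_MULTIPLIERS = {
    None: 1.0,        # non-member: baseline risk score
    "Tier I": 0.9,    # certified only
    "Tier II": 0.6,   # validated; meets minimum security criteria
    "Tier III": 0.3,  # validated; exceeds minimum security criteria
}

def adjusted_risk_score(base_score, tier=None):
    """Lower the baseline targeting score as the member's tier rises."""
    return base_score * TIER_MULTIPLIERS[tier]

# As tier level increases, the adjusted score (and thus, generally,
# the likelihood of examination) decreases.
scores = [adjusted_risk_score(100, t)
          for t in (None, "Tier I", "Tier II", "Tier III")]
```

The point of the sketch is only the monotonic relationship the report describes: each successive tier yields a lower risk score than the one before it.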
Specific benefits offered to C-TPAT members, as listed in CBP’s C-TPAT Program Benefits Reference Guide, can be found in table 2. While C-TPAT membership can reduce the probability of CBP selecting members’ shipments for examinations, holds, or other enforcement actions, CBP maintains the authority to conduct appropriate enforcement actions. In addition, other federal agencies (e.g., the U.S. Department of Agriculture and the Food and Drug Administration) can conduct inspections of arriving cargo shipments based on their own selection criteria.

CBP Is Taking Steps to Resolve Data System Problems, but Could Take Additional Steps to Ensure Staff Meet Security Validation Responsibilities

Data System Problems Have Led to Challenges in Identifying and Completing C-TPAT Member Security Validations

C-TPAT staff have faced challenges in meeting C-TPAT security validation responsibilities in an efficient and timely manner because of problems with the functionality of C-TPAT’s data management system (Portal 2.0). In particular, while the intended purpose of transitioning from Portal 1.0 to Portal 2.0 in August 2015 was to improve functionality and facilitate communication between security specialists and C-TPAT members, a series of problems arose as a result of this transition that have impaired the ability of C-TPAT staff to identify and complete required C-TPAT member certification procedures and security profile reviews in a timely and efficient manner. For example, C-TPAT field office directors, supervisory security specialists, and security specialists we met with identified numerous instances in which the Portal 2.0 system incorrectly altered C-TPAT members’ certification or security profile dates. In particular, when C-TPAT officials transferred responsibility for some C-TPAT members’ accounts from one security specialist to another, those members’ certification dates were sometimes incorrectly changed.
These altered dates, in turn, have interfered with security specialists’ ability to properly identify and track which member companies are due for an annual security profile review. In addition to the impact they have had on the daily responsibilities of security specialists, Portal 2.0 problems also made it more difficult for C-TPAT managers to complete the C-TPAT program’s 2016 annual work plan for assigning responsibilities for member security validations and revalidations to its security specialists. C-TPAT managers have typically relied on data from Report Builder, a reporting module within Portal, to develop annual work plans. However, in November 2015, C-TPAT staff had difficulty using Report Builder to access historical data from Portal 1.0 that were to have migrated to Portal 2.0 regarding C-TPAT members that were due to have security validations or revalidations conducted in 2016. As a result, the initial 2016 work plan did not have accurate data on the number of C-TPAT members due for security validations or revalidations. In addressing this issue, C-TPAT’s Director pointed out that while the Portal is intended to facilitate the ability of security specialists to perform their responsibilities, each security specialist is experienced and is not to rely solely on the Portal to complete his or her job responsibilities. So, to complete the 2016 work plan, C-TPAT managers implemented a requirement for security specialists in each of C-TPAT’s six field offices to manually review documentation, such as prior security validation reports, for each of their assigned members (approximately 80 to 100 members per security specialist) to verify or correct the certification and security validation dates recorded in Portal 2.0 and to ensure that the 2016 work plan identified all members due for security validations or revalidations in 2016.
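The manual verification step just described amounts to a reconciliation between two record sets: dates as recorded in the data system and dates taken from source documents. A minimal sketch of that reconciliation, with entirely hypothetical member IDs, dates, and field layouts:

```python
# Hedged sketch: reconcile certification dates recorded in a data
# system against dates taken from source documents, flagging
# mismatches for correction. All records here are hypothetical.

def find_date_mismatches(portal_records, source_documents):
    """Return member IDs whose recorded certification date disagrees
    with the date found in that member's source documentation."""
    mismatches = {}
    for member_id, portal_date in portal_records.items():
        doc_date = source_documents.get(member_id)
        if doc_date is not None and doc_date != portal_date:
            mismatches[member_id] = (portal_date, doc_date)
    return mismatches

portal = {"M-001": "2014-03-01", "M-002": "2016-07-15"}  # as recorded
docs = {"M-001": "2014-03-01", "M-002": "2013-05-20"}    # per documents
corrections = find_date_mismatches(portal, docs)
# corrections == {"M-002": ("2016-07-15", "2013-05-20")}
```

At 80 to 100 members per specialist, the comparison itself is trivial; the cost lies in retrieving the correct date from paper or archived reports, which is why the work was time consuming.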
C-TPAT officials then used the information gathered through these additional manual steps to update and correct the 2016 work plan. Further, security specialists we met with told us that they had difficulty saving and submitting security validation reports to C-TPAT managers in a timely manner because of Portal 2.0 problems. Instead of submitting the validation reports directly through the Portal, security specialists told us they sometimes had to prepare draft security validation reports offline and copy and paste information section by section in order for Portal 2.0 to accept and save the security validation reports. While these alternate means of verifying validation responsibilities and mitigating Portal 2.0 problems have generally allowed security specialists to complete their required annual security profile reviews and validations, the security specialists stated that these workarounds are time-consuming and have necessitated the use of overtime. The C-TPAT Director acknowledged the problems her staff have experienced with Portal 2.0 and that these problems have led to some inefficient work practices. Problems with Portal 2.0 have also had an adverse impact on C-TPAT members. For example, security specialists told us that some C-TPAT members have experienced difficulties viewing validation report findings because of problems in accessing Portal 2.0. If validation reports contain findings regarding security practices that fail to meet the minimum security criteria that members are to address, the members are to respond to C-TPAT staff within 90 days about their plans for addressing the findings and improving their supply chain security practices to meet CBP’s minimum security criteria. Because Portal 2.0 problems have sometimes prevented C-TPAT members from accessing and responding to validation reports in a timely manner, C-TPAT managers told us they have needed to grant these members additional time to respond to validation report findings.
CBP Staff Are Taking Steps to Address Data System Problems

CBP staff from the C-TPAT program and the Office of Information Technology’s Targeting and Analysis Systems Program Directorate (TASPD) have taken steps to address problems with Portal 2.0. When problems with Portal 2.0 first surfaced in August 2015, Portal users in the C-TPAT field offices began recording problems in spreadsheets. C-TPAT headquarters officials collected these spreadsheets and sent them to TASPD managers to address. In an effort to improve coordination and communication between C-TPAT and TASPD staff, in February 2016, CBP officials developed a more centralized and systematic process for documenting, prioritizing, and addressing Portal 2.0 problems as they arise. In particular, a single C-TPAT point of contact receives a list of Portal problems from field office security specialists, creates a work ticket for each problem, and works with TASPD staff to prioritize those work tickets and group them into batches. TASPD staff then attempt to identify the causes of the batched Portal 2.0 problems and test the proposed fixes with input from the end users (primarily security specialists and C-TPAT field office directors) to ensure the problems are corrected during 2- to 3-week intervals called “sprints.” C-TPAT field office staff we met with told us that while these sprints have generally resolved the originally identified problems, the fixes have, at times, created new problems that affected the accuracy of data and the usability of certain Portal 2.0 features. For example, security specialists have encountered error messages when trying to submit security validation reports for supervisory review and, in fixing that problem, security validation reports became inadvertently archived, which prevented supervisory security specialists from being able to review and edit the reports.
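The triage workflow described above—one work ticket per reported problem, prioritized and grouped into batches for sprint work—can be sketched in a few lines. This is a hypothetical illustration of the workflow's shape, not CBP's actual tooling; all ticket fields, priorities, and descriptions are invented.

```python
# Hedged sketch of the described triage workflow: one ticket per
# reported problem, sorted by priority, chunked into batches for
# sprints. Field names and priority values are invented.

from dataclasses import dataclass

@dataclass
class WorkTicket:
    ticket_id: int
    description: str
    priority: int  # lower number = more urgent

def batch_tickets(tickets, batch_size):
    """Group tickets into priority-ordered batches for sprint work."""
    ordered = sorted(tickets, key=lambda t: t.priority)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

tickets = [
    WorkTicket(1, "certification dates altered on account transfer", 1),
    WorkTicket(2, "validation report fails to save", 2),
    WorkTicket(3, "Report Builder cannot read migrated data", 1),
    WorkTicket(4, "member cannot open validation findings", 3),
]
sprint_batches = batch_tickets(tickets, batch_size=2)
# First batch holds the two priority-1 tickets; the rest follow.
```

Because Python's sort is stable, tickets sharing a priority keep their reporting order, so the batching is reproducible from one sprint-planning run to the next.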
Because of the continued Portal 2.0 problems, C-TPAT and TASPD staff have worked together to identify the root causes of the Portal 2.0 problems and are implementing actions to address them. For example, TASPD staff cited unclear requirements for the Portal 2.0 system and its user needs as a factor that likely contributed to inadequate system testing and continued problems. In response, TASPD and C-TPAT staff have begun efforts to better capture Portal 2.0 system requirements and user needs and have incorporated more consistent end user testing. Additionally, TASPD and C-TPAT headquarters staff began having regular meetings with C-TPAT field office managers to institute a more comprehensive approach for addressing and understanding system requirements and Portal 2.0 functionality problems. As part of the root cause analysis, TASPD staff have outlined recommended actions for addressing the factors that led to the Portal 2.0 problems, along with the associated time frames for completing these actions. While TASPD and C-TPAT staff have already implemented some actions, such as establishing a new team structure, the staff stated that they will continue to work on identifying and addressing potential root causes of the Portal 2.0 problems and noted that this process will likely continue through 2017.

Issuing Standardized Guidance to Field Offices Could Better Assure Data on Security Validations Are More Consistent and Reliable

Despite the Portal 2.0 problems, C-TPAT headquarters officials and field office directors we met with told us that they are assured that required security validations are being identified and completed in a timely manner because the field offices keep records on required and completed security validations apart from data recorded in Portal 2.0.
C-TPAT officials provided us with documentation illustrating the steps taken by security specialists and supervisors at the field offices to identify and complete the required security validations. Field office directors or supervisors then verify that the security validations were completed, as required, by the end of each calendar year. We reviewed this documentation and verified that field offices are tracking completed validations annually. C-TPAT headquarters staff have delegated responsibility to field office directors for ensuring that the required security validations are tracked, completed, and reported to headquarters each year, but headquarters has not issued centralized guidance or standard operating procedures to be used by the field offices to ensure that they are tracking and completing the required security validations in a consistent manner. As a result, the field offices have each developed their own varied approaches for tracking required security validations and recording those that are completed. A C-TPAT headquarters official responsible for reviewing security validation information provided by the field offices stated that there may be value in standardizing the approach field offices use to track and report on completed security validations in order to ensure the data received and reviewed by C-TPAT headquarters are more consistent and reliable. While the current validation tracking processes used by field offices do account for security validations conducted over the year, standardizing the process used by field offices to track required security validations could strengthen C-TPAT management’s assurance that its field offices are identifying and completing the required security validations in a consistent and reliable manner.
Standards for Internal Control in the Federal Government related to the use of quality information state that management is to obtain relevant information from reliable internal sources on a timely basis in order to evaluate the entity's performance. In addition, such quality information is to flow down from management to personnel, as well as up from personnel to management, to help ensure that key program objectives are met. This upward communication also helps provide effective oversight of internal controls. The internal control standards also call for management to document responsibilities for operational processes through policies. Developing standardized guidance for its field offices to use in tracking required security validations could further strengthen C-TPAT management's assurance that its field offices are identifying and completing the required security validations in a consistent and reliable manner.

CBP Cannot Determine the Extent to Which C-TPAT Members Are Receiving Benefits Because of Data Concerns

CBP Staff Have Concerns Regarding the Accuracy of C-TPAT Member Benefit Data

Since 2012, CBP has compiled data on certain events or actions it has taken regarding arriving shipments—such as examinations, holds, and processing times—for both C-TPAT and non-C-TPAT members through its C-TPAT Dashboard. However, based on GAO's preliminary analyses of data contained in the Dashboard, and data accuracy and reliability concerns cited by C-TPAT program officials, we concluded that CBP staff are not able to determine the extent to which C-TPAT members are receiving benefits, such as reduced likelihood of examinations of their shipments and expedited shipment processing, compared to non-members.
We conducted preliminary analyses of C-TPAT program data from the Dashboard to understand, for example, how the examination rates of C-TPAT members' shipments compared with those of non-C-TPAT members across different modes of transportation (air, truck, vessel, and rail) for each year from fiscal year 2011 through fiscal year 2015. The results of our analyses showed that C-TPAT members' shipments did not consistently experience lower examination and hold rates and processing times compared to non-members' shipments across the different modes of transportation. We shared the results from our preliminary analyses with the C-TPAT Director and staff familiar with the C-TPAT Dashboard, and they expressed surprise that the data did not show more consistent benefits for C-TPAT members as compared to non-C-TPAT members, and that Tier II members did not consistently receive the benefit of lower examination and hold rates or processing times as compared to Tier I members. We further discussed that the findings from our analyses ran counter to C-TPAT member benefits information published by CBP. In particular, we noted that in its C-TPAT Program Benefits Reference Guide, CBP asserts that entries filed by Tier III Partners are 9 times less likely to undergo a security-based examination than are entries filed by non-C-TPAT members and that entries filed by Tier II Partners are about 3.5 times less likely to undergo a security examination than are those filed by non-C-TPAT members. Further, CBP's Congressional Budget Justification for Fiscal Year 2016 states that C-TPAT importers are 4 to 6 times less likely to incur a security or compliance examination than are non-C-TPAT members. Subsequent to our discussion with C-TPAT staff on the results of our preliminary analyses and the apparent discrepancy with CBP-reported benefits data, the C-TPAT Director and staff researched the data and calculations within the Dashboard further.
Based on their research, the C-TPAT officials stated that there appear to be errors in the data or formulas used to compute various actions that are uploaded into the Dashboard, such as shipment examinations, holds, and processing times. For example, the C-TPAT Director stated that based on their research, they discovered errors in the data contained in the Dashboard regarding the number of CBP shipment examinations on the southwest border in 2015. C-TPAT officials have not yet determined what accounts for the apparent accuracy and reliability issues of data contained in the Dashboard. The C-TPAT Program Director explained that the Dashboard was developed in response to a request by a former C-TPAT Program Director for increased data on C-TPAT member benefits. The current Director noted that the C-TPAT office has not regularly reviewed the data contained in the Dashboard. In addition, officials from the C-TPAT program, as well as from TASPD, explained that while the Dashboard has been in place since 2012, it has functioned in a limited operational mode, with data from the Dashboard only being used internally by program management. The officials stated that the Dashboard's requirements are dated and that new requirements need to be verified and tested. They further stated that because of competing priorities, CBP staff have not completed verification, user acceptance testing, or periodic data checks. C-TPAT officials noted, though, that C-TPAT and TASPD staff are in the process of analyzing data contained in the Dashboard to finalize an action plan to correct the data concerns. It is too soon for us to assess whether this process will fully address the Dashboard accuracy and reliability issues. Standards for Internal Control in the Federal Government state that program management should use quality information to achieve the entity's objectives.
Specifically, management is to obtain relevant data from reliable internal and external sources based on identified information requirements. These sources should provide data that are reasonably free from error and bias and represent what they purport to represent. Also, management should process data into quality information. Quality information is to be, among other things, appropriate, current, complete, and accurate. In addition, the SAFE Port Act requires CBP to extend benefits, which may include reduced examinations of cargo shipments, to Tier II and Tier III C-TPAT members. As mentioned earlier, the 2014 C-TPAT Program Benefits Reference Guide also cites reduced examination rates for C-TPAT importers compared to non-C-TPAT importers as a benefit of the program. Further, project management criteria established by the Project Management Institute state that it is necessary to establish project objectives and outline a plan of action, via a project schedule with major milestones, to obtain those objectives. Because the data contained in the Dashboard cannot currently be relied upon, CBP is not able to determine the extent to which C-TPAT members have received benefits, such as lower examination or hold rates, or reduced processing times of their shipments when compared to non-C-TPAT members. CBP has likely relied on such questionable data since it developed the Dashboard in 2012, and, thus, cannot be assured that C-TPAT members have consistently received the benefits that CBP has publicized.

Other C-TPAT Member Benefits Are Difficult to Quantify

Beyond shipment examination and hold rates, and processing times, CBP does not gather data on its other stated C-TPAT member benefits.
There are a variety of reasons for this, including the inherent difficulty in quantifying certain benefits that are more qualitative in nature, such as having access to security specialists, or that are difficult to meaningfully quantify across ports because of the many differences in infrastructure from port to port. C-TPAT officials explained that, although these other benefits are difficult to measure, they are of value to C-TPAT members. The C-TPAT officials acknowledged that while the C-TPAT program might be able to gather and track quantifiable data on some additional benefits—such as increased mitigation of monetary penalties for C-TPAT members—given the Portal 2.0 and Dashboard data accuracy and reliability issues (as described earlier), they plan to focus first on identifying and correcting these data issues for those benefits currently tracked rather than trying to quantify and track other current member benefits.

CBP is Considering Additional C-TPAT Member Benefits and a New Metric

In addition to the C-TPAT member benefits listed earlier, CBP staff are working with trade industry partners and the COAC to identify and explore potential new benefits, as well as a metric for quantifying potential cost savings for members. Trade industry officials we met with generally spoke positively of the C-TPAT program and of CBP staff's efforts in sharing information and listening to their concerns and suggestions for enhancing the program. However, some trade industry officials we met with have also expressed the desire for C-TPAT to improve and add member benefits. In response to suggestions from members of the trade community, C-TPAT staff said that they are considering some additional benefits, such as the following: Advanced Qualified Unlading Approval (AQUA) Lane: The AQUA Lane pilot is a joint partnership between the C-TPAT program, sea carriers, and world trade associations at select U.S.
ports with the goal of reducing the amount of time that the carriers must wait for their cargo to be released. At these pilot ports, select C-TPAT member vessel carriers, who qualify under a set of predetermined requirements, are allowed to offload—but not release—cargo containers arriving at one of the pilot ports prior to CBP officials clearing the vessel carrier for release. AQUA Lane was initially piloted at the ports of Oakland, California; Port Everglades, Florida; New Orleans, Louisiana; and Baltimore, Maryland. According to C-TPAT officials, the pilot has been well received by sea carriers, who have expressed interest in seeing the program expanded to other domestic sea ports. In response, CBP announced a phased expansion of the AQUA Lane program, adding six ports in September 2016 and an additional 10 ports in December 2016, with final implementation at all remaining seaports to be completed in early 2017. Trusted Trader Program: The Trusted Trader Program is a CBP-led collaborative effort being tested that aims to enhance information sharing between government agencies regarding importers' efforts to enhance supply chain security and comply with trade requirements. As this program is expanded, one goal is to reduce redundancies in processing steps, as well as in the vetting and validation procedures of the C-TPAT and ISA programs. For example, under the Trusted Trader Program, CBP may conduct C-TPAT supply chain security and ISA trade compliance validations jointly, reducing the time and resources that member companies must invest in these processes. However, the program has received mixed reactions from members of the trade industry with whom we met. While some members of the trade industry spoke favorably of the Trusted Trader Program, other members questioned whether the program's benefits offered sufficient incentives compared to the costs and administrative requirements.
Cost Savings Benefit Metric: CBP is in the process of reviewing a metric regarding the cost savings derived by C-TPAT members as the result of a reduced rate of shipment examinations. This metric was proposed by a C-TPAT member in the summer of 2015 and accepted by DHS consistent with the Government Performance and Results Act. CBP began gathering data for this metric during fiscal year 2016; however, C-TPAT officials noted that, as a result of the concerns we and C-TPAT officials have raised, they need to revisit the integrity of the supporting data that would be used in this metric before CBP can pursue implementing it.

Conclusions

CBP's risk-informed approach to supply chain security focuses on ensuring the expeditious flow of millions of cargo shipments into the United States each year, while also managing security concerns. It is critical that CBP manage the C-TPAT program in a way that enhances the security of members' global supply chains, while also providing benefits that incentivize program membership. A lack of reliable data has challenged CBP's ability to manage the C-TPAT program effectively. In particular, problems with the C-TPAT program's updated Portal 2.0 data system that began in August 2015 have impaired the ability of C-TPAT staff to identify and complete required security validations in a timely and efficient manner. While C-TPAT field offices have implemented procedures for ensuring that required security validations are identified and completed, these procedures are varied because C-TPAT headquarters has not developed standardized guidance for its field offices to follow. Taking steps to standardize C-TPAT field offices' efforts to track required security validations could strengthen C-TPAT management's assurance that its field offices are identifying and completing the required security validations in a consistent and reliable manner.
Further, because the data contained in the Dashboard cannot be relied upon, CBP is not able to determine the extent to which C-TPAT members are receiving benefits, such as lower examination or hold rates, or reduced processing times of their shipments when compared to those for non-C-TPAT members. Finally, because CBP has likely relied on such questionable data since the Dashboard was developed in 2012, it does not have reasonable assurance, consistent with federal internal control standards, that C-TPAT members have consistently received the benefits that CBP has publicized. Accurate and reliable data will also be important as CBP considers adding additional member benefits and developing a cost savings metric.

Recommendations for Executive Action

To ensure that C-TPAT program managers are provided consistent data from the C-TPAT field offices on security validations, we recommend that the Commissioner of U.S. Customs and Border Protection develop standardized guidance for the C-TPAT field offices to use in tracking and reporting information on the number of required and completed security validations. Further, to ensure the availability of complete and accurate data for managing the C-TPAT program and establishing and maintaining reliable indicators on the extent to which C-TPAT members receive benefits, we recommend that the Commissioner of U.S. Customs and Border Protection determine the specific problems that have led to questionable data contained in the Dashboard and develop an action plan, with milestones and completion dates, for correcting the data so that the C-TPAT program can produce accurate and reliable data for measuring C-TPAT member benefits.

Agency Comments and Our Evaluation

In December 2016, we requested comments on a draft of this report from DHS. In January 2017, officials from CBP provided technical comments, which we have incorporated into the report as appropriate.
In addition, DHS provided an official letter for inclusion in the report, which can be seen in appendix II. In its letter, DHS stated that it concurred with our two recommendations and has begun to take actions to address them. In particular, for the first recommendation, DHS noted that the C-TPAT program manager has selected a methodology that will include uniform monthly reporting from C-TPAT field offices to the C-TPAT program manager. DHS anticipates that these efforts will be put into effect by May 2017. We will continue to monitor CBP's efforts in addressing this recommendation. Regarding the second recommendation, DHS noted that the C-TPAT program manager has decided, in conjunction with CBP's Office of Information Technology, to terminate the existing Dashboard reporting tool and, instead, create a new tool for providing accurate data for measuring C-TPAT member benefits by the end of June 2017. We will continue to monitor CBP's efforts in addressing this recommendation. We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Further Details on the Customs-Trade Partnership Against Terrorism (C-TPAT) Member Screening Process

This appendix provides further details on the process used to screen prospective members of the Customs-Trade Partnership Against Terrorism (C-TPAT) program. 1.
Application and eligibility: An entity submits an application for C-TPAT membership that includes corporate information (e.g., company size and location), a supply chain security profile, and an agreement to voluntarily participate in the C-TPAT program. In completing the supply chain security profile, the entity is to conduct a comprehensive self-assessment of its supply chain security procedures or practices using the C-TPAT minimum security criteria for its specific business type, such as importer, highway carrier, or customs broker. The application is assigned to a supply chain security specialist to be reviewed to determine if the applicant meets C-TPAT eligibility requirements for its business type. 2. Vetting: Once a security specialist determines an applicant is eligible for C-TPAT membership, the security specialist is to conduct research as part of the vetting process. Vetting involves a review of the entity's compliance with Customs laws and regulations, as well as any violation history, to identify information that might preclude C-TPAT membership. Once any issues are resolved to U.S. Customs and Border Protection's (CBP) satisfaction, the entity can move on to the certification stage. 3. Certification: After vetting, a security specialist is to conduct a detailed review of the entity's security profile, looking for any weaknesses or gaps in security procedures or practices, to determine whether minimum security criteria for that entity's business type are adequately addressed. This review is to be completed and the application approved or rejected within 90 calendar days from the date the entity submits its security profile. If the security specialist approves the security profile, the entity is certified as a C-TPAT member and is eligible to begin receiving benefits. 4.
Validation: Once certified, a security specialist is to conduct a validation of the security measures outlined in a certified member's security profile to ensure that they are reliable, accurate, effective, and aligned with CBP's minimum security criteria. As provided for in the Security and Accountability for Every Port Act of 2006 (SAFE Port Act), a member's initial security validation is to be completed within 1 year of certification, to the extent practicable. During the validation process, the assigned security specialist is to meet with the member's representatives to verify that the supply chain security measures contained in its security profile are in place as described. If the member is an importer operating a global supply chain, the security specialist is to visit the member's domestic site and at least one foreign supply chain partner's site (e.g., a manufacturer who supplies goods). C-TPAT management and the security specialist assigned to a member are to identify potential sites to visit based on research of the member's business history, import transportation modes, facility locations, and other factors. To initiate the security validation, the assigned security specialist is to provide the member a site visit agenda and documents to help the member prepare for the visit, such as a validation checklist. Upon completion of the security validation process, the security specialist is to prepare a final validation report to present to the member. The report may include recommendations to improve security practices, as well as any required actions the member is to take to conform to CBP's minimum security criteria. The security validation report is also to address whether the member should continue to receive program benefits and, if an importer or exporter, whether additional benefits are warranted. 5.
Annual reviews and revalidations: Once a security specialist validates a C-TPAT member's security practices, the member company is to undergo a review of its eligibility status, vetting, and certification processes on an annual basis. This involves having the member perform an annual self-assessment—essentially an update of its security profile—that provides the member with an opportunity to review, update, or change its security procedures, as needed. Security specialists are to annually certify completion of these member self-assessments. Each C-TPAT member is to undergo a security revalidation not less than once every 4 years after its initial validation, as determined by C-TPAT and in accordance with the SAFE Port Act. A security revalidation calls for a security specialist to conduct updated document reviews and on-site visits to a member and at least one of its foreign supply chain partners to ensure continued alignment with C-TPAT's minimum security criteria.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Christopher Conrad (Assistant Director), Adam Couvillion (Analyst-in-Charge), David Alexander, Christine Broderick, Charles Bausell, Dominick Dale, Dorian Dunbar, Tyler Mountjoy, Heidi Nielson, Nina Thomas-Diggs, and Eric Winter all made key contributions to this report.

The economic well-being of the United States depends on the movement of millions of cargo shipments throughout the global supply chain—the flow of goods from manufacturers to retailers or other end users. However, cargo shipments can present security concerns. CBP is responsible for administering cargo security and facilitating the flow of legitimate commerce. CBP has implemented several programs as part of a risk-based approach to supply chain security.
One such program, C-TPAT, is a voluntary program in which CBP staff validate that members' supply chain security practices meet minimum security criteria. In return, members are eligible to receive benefits, such as a reduced likelihood that their shipments will be examined. This report assesses the extent to which (1) CBP is meeting its security validation responsibilities, and (2) C-TPAT members are receiving benefits. GAO reviewed information on security validations, member benefits, and other program documents. GAO also interviewed officials at CBP headquarters and three C-TPAT field offices chosen for their geographical diversity, as well as select C-TPAT members and trade industry officials.

Staff from U.S. Customs and Border Protection's (CBP) Customs-Trade Partnership Against Terrorism (C-TPAT) program have faced challenges in meeting C-TPAT security validation responsibilities because of problems with the functionality of the program's data management system (Portal 2.0). In particular, since the system was updated in August 2015, C-TPAT staff have identified instances in which the Portal 2.0 system incorrectly altered C-TPAT members' certification or security profile dates, requiring manual verification of member data and impairing the ability of C-TPAT security specialists to identify and complete required security validations in a timely and efficient manner. While the focus of CBP's staff was initially on documenting and addressing Portal 2.0 problems as they arose, the staff have begun to identify root causes that led to the Portal 2.0 problems. For example, CBP staff cited unclear requirements for the system and its users' needs, coupled with inadequate testing, as factors that likely contributed to problems. In response, CBP staff have outlined recommended actions, along with timeframes for completing the actions. The staff stated that they will continue to work on identifying and addressing potential root causes of the Portal problems through 2017.
C-TPAT officials told us that despite the Portal 2.0 problems, they have assurance that required security validations are being tracked and completed as a result of record reviews taking place at field offices. However, these field office reviews were developed in the absence of standardized guidance from C-TPAT headquarters. While the current validation tracking processes used by field offices do account for security validations conducted over the year, standardizing the process used by field offices for tracking required security validations could strengthen C-TPAT management's assurance that its field offices are identifying and completing the required security validations in a consistent and reliable manner. CBP cannot determine the extent to which C-TPAT members are receiving benefits because of data problems. Specifically, since 2012, CBP has compiled data on certain events or actions it has taken regarding arriving shipments—such as examination and hold rates and processing times—for both C-TPAT and non-C-TPAT members through its Dashboard data reporting tool. However, on the basis of GAO's preliminary analyses and subsequent data accuracy concerns cited by C-TPAT program officials, GAO determined that data contained in the Dashboard could not be relied on for accurately measuring C-TPAT member benefits. Also, CBP has likely relied on such questionable data since it developed the Dashboard in 2012, and, thus, cannot be assured that C-TPAT members have consistently received the benefits that CBP has publicized. C-TPAT officials stated that they are analyzing the Dashboard to finalize an action plan to correct the data concerns. It is too soon to tell, though, whether this process will fully address the accuracy and reliability issues. Despite these issues, C-TPAT officials are exploring new member benefits, and industry officials we met with generally spoke positively of the C-TPAT program. |
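The relative-likelihood benefit figures discussed above (e.g., CBP's published claim that Tier III entries are 9 times less likely to undergo a security examination than non-member entries) reduce to a simple ratio of per-group examination rates. The sketch below illustrates that arithmetic; all counts are invented for illustration and are not CBP data.

```python
# Hypothetical illustration of how a relative-likelihood figure such as
# "Tier III entries are 9 times less likely to be examined" might be
# derived from raw counts. Every number below is invented.
def exam_rate(examined, total):
    """Fraction of entries that underwent a security examination."""
    return examined / total

groups = {
    # group: (entries examined, total entries) -- invented counts
    "non_member": (9_000, 100_000),  # 9.0% examination rate
    "tier_ii":    (2_600, 100_000),  # 2.6% examination rate
    "tier_iii":   (1_000, 100_000),  # 1.0% examination rate
}

rates = {g: exam_rate(*counts) for g, counts in groups.items()}

# "N times less likely" = non-member rate divided by the member rate
ratio_tier_iii = rates["non_member"] / rates["tier_iii"]
ratio_tier_ii = rates["non_member"] / rates["tier_ii"]

print(f"Tier III entries are {ratio_tier_iii:.1f}x less likely to be examined")
print(f"Tier II entries are {ratio_tier_ii:.1f}x less likely to be examined")
```

With these invented counts the ratios come out near the 9x and 3.5x figures CBP published; GAO's point is that such ratios are only as trustworthy as the underlying Dashboard counts.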
Agencies Face Challenges in Instilling Aspects of Agency Climate That Contribute to Performance-Based Cultures

High-performing organizations reinforce a focus on results through demonstrated top leadership commitment, through positive recognition to employees for their contributions to organizational goals, and by holding managers accountable for results while giving them the necessary decisionmaking authority to achieve them. Our survey data suggest that across the 28 agencies there are ample opportunities to better instill these key attributes of a performance-based culture.

Demonstrated Top Leadership Commitment to Achieving Results

Successfully addressing the challenges that federal agencies face in becoming high-performing organizations requires agency leaders who are fully committed to achieving results. Top leadership's commitment to achieving results is essential in driving continuous improvement to achieve excellence throughout an agency and in inspiring employees to accomplish challenging goals. Without the clear and demonstrated commitment of agency leadership—both political and career—organizational cultures will not be transformed, and new visions and ways of doing business will not take root. However, the responses of many managers in the 28 agencies did not indicate a strong perception that their agencies' top leadership demonstrated a strong commitment to achieving results. Managers' positive responses across the 28 individual agencies varied widely, from a low at FAA (23 percent) to 3 times that percentage at NSF (69 percent), as shown in figure 1. Specifically, at only four agencies—NSF, the Social Security Administration (SSA), NASA, and NRC—did more than two-thirds of managers perceive such commitment to a great or very great extent. At 11 agencies, less than half of the managers perceived that there was such a degree of commitment.
The clear and demonstrated top leadership commitment needed to sustain high levels of performance is not widely perceived among managers across the government overall, and progress in fostering such leadership has remained stagnant. Governmentwide, our survey results show that in 2000, just over half of managers—53 percent—reported strong top leadership commitment, while 57 percent had this perception in 1997—not a statistically significant change.

Positive Recognition for Helping Accomplish Strategic Goals

Incentives are important in steering an agency's workforce to high levels of performance, and they are critical to establishing a results-oriented management environment. A key element in agencies' efforts to achieve results is their ability to motivate and reward their employees for supporting results through effective incentives, such as positive recognition. However, both our agency-specific and governmentwide survey results suggest that positive recognition has not been an extensively used technique for motivating employees. On an individual agency basis, no agencies stand out at the top of the range for providing positive recognition to employees for helping the agency accomplish its strategic goals. The percentage of managers responding to a great or very great extent at the 28 agencies ranged from 12 percent at FAA to 52 percent at GSA. Even at the top of the range, the percentage of managers who reported that employees received such positive recognition barely exceeded 50 percent at GSA and SBA. At 14 of the 28 agencies surveyed, less than one-third of managers perceived that employees in their agencies were receiving positive recognition to at least a great extent for contributing to the achievement of agency goals. (See fig. 2.)
Governmentwide, few managers (31 percent) reported in 2000 that employees in their agencies received positive recognition to a great or very great extent for helping their agencies accomplish their strategic goals. This was not a statistically significant change from the 26 percent reporting this extent of positive recognition in 1997. Because effective incentive programs can help federal agencies maximize the results they achieve by both reinforcing personal accountability for high performance and motivating and rewarding employees, the results of our survey suggest that in most cases agencies are missing opportunities to positively affect program results through more widespread use of effective positive recognition techniques.

Accountability for Results and Necessary Decisionmaking Authority to Achieve Them

Agencies need to create organizational cultures that involve employees and empower them to improve operational and program performance while ensuring accountability and fairness for those employees. Devolving decisionmaking authority to program managers in combination with holding them accountable for results is one of the most powerful incentives for encouraging results-based management. Additionally, providing managers with such authority gives those who know the most about an agency's programs the power to make those programs work. Responses across individual agencies regarding managers being held accountable for the results of their programs to a great or very great extent ranged from a low of 40 percent at the Forest Service to 79 percent at HUD. At 22 of the 28 agencies included in our survey, more than 50 percent of managers reported such accountability, with 66 percent or more at 10 of these agencies reporting such accountability. (See fig. 3.)
In comparison, for each of the individual agencies included in our survey, the percentage of managers who reported that they had, to a great or very great extent, the decisionmaking authority they needed to help their agencies accomplish their strategic goals ranged from 15 percent at IRS to 58 percent at OPM. At only one agency—OPM—was the percentage of managers reporting that they had the decisionmaking authority they needed above 50 percent. In fact, at 10 of the 28 agencies, only one-third or fewer of managers responded that they had such decisionmaking authority. (See fig. 4.) The differences reflected in managers' responses to our questions on authority and accountability suggest that many agencies can better balance accountability for results with the authority needed to help achieve agency goals. At 27 agencies, the percentage of managers reporting that they were held accountable for results exceeded the percentage reporting that they had the authority they needed. At 16 of these 27 agencies, the percentage reporting being held accountable exceeded the percentage reporting having the needed authority by more than 20 percentage points. At only one agency—the Forest Service—were the percentages approximately equal. However, they were not very high, with the percentage of Forest Service managers responding positively at 41 percent for authority and 40 percent for accountability. Governmentwide, the difference between the level of accountability and the level of authority that managers perceived was great in both our 1997 and 2000 surveys. In 2000, 63 percent of federal managers overall reported that they were held accountable for program results, but only 36 percent reported that they had the decisionmaking authority they needed to help their agencies accomplish their strategic goals.
These percentages were not statistically significantly different from those in our 1997 survey, when 55 percent of managers reported such accountability for results while 31 percent reported such decisionmaking authority. We recently reported that several agencies have begun to use results-oriented performance agreements for their senior political and career executives to define accountability for specific goals, monitor progress during the year, and then contribute to performance evaluations. Although each agency developed and implemented agreements that reflected its specific organizational priorities, structure, and culture, we identified common emerging benefits from each agency’s use of performance agreements. For example, the Veterans Health Administration (VHA) decentralized its management structure from 4 regions to 22 Veterans Integrated Service Networks (VISN). VHA gave each VISN substantial operational autonomy and established performance goals in the agreements to hold network and medical center directors accountable for achieving performance improvements. Senior VHA officials we spoke to as part of that review credited improvements in key organizational goals to the use of performance agreements. Survey Responses Indicate That the Types of Measures Managers Had To Gauge Program Performance Varied A fundamental element in an organization’s efforts to manage for results is its ability to set meaningful goals for performance and to measure performance against those goals. High-performing, results-oriented organizations establish a set of measures to gauge progress over various dimensions of performance. As discussed in our January 2001 Performance and Accountability Series, a major challenge that agencies face in implementing GPRA is articulating and reinforcing a results orientation. Encouragingly, more managers overall reported having performance measures in 2000 than in 1997.
Specifically, 84 percent of federal managers governmentwide said they had performance measures for the programs they were involved with, a statistically significant increase over the 76 percent of managers who responded that way in 1997. The degree to which managers reported having each of the five types of performance measures we asked about—outcome, output, customer service, quality, and efficiency—varied by agency. However, managers’ responses at most federal agencies showed that they may still have room for improvement in this regard. Outcome and Output Measures Output measures, which tell how many items are produced or services delivered, are an essential management tool in managing programs for results, but they represent only one basic dimension in the measurement of program performance. It is outcome measures that demonstrate whether or not program goals are being achieved and that gauge the ultimate success of government programs. Collectively, managers’ responses across the 28 agencies suggest a need for further emphasizing and developing both outcome and output measures to address the multidimensional aspects of performance. For outcome measures specifically, the percentage of managers responding that they had them to a great or very great extent ranged from 17 percent at HCFA to 63 percent at NASA and HUD. At only eight agencies did more than 50 percent of managers report having outcome measures. (See fig. 5.) In comparison, at 17 of the 28 agencies, 50 percent or more of managers reported that they had output measures to a great or very great extent. The percentage of managers responding that they had output measures to that extent ranged from 19 percent at HCFA to 75 percent at SBA and HUD, as shown in figure 6. Governmentwide, 50 percent of managers reported in 2000 that they had output measures to a great or very great extent for their programs, a statistically significant increase over the 38 percent reporting having these measures in 1997.
In comparison, 44 percent of managers governmentwide reported in 2000 that they had outcome measures to a similar extent, significantly more than the 32 percent reporting in this way in 1997. Although more managers overall said they had output measures than outcome measures in 2000, at 7 of the 28 agencies—the Federal Emergency Management Agency, the Department of Health and Human Services, the Department of Energy (Energy), the U.S. Agency for International Development, NSF, NASA, and OPM—slightly more managers said they had outcome measures than output measures. Customer Service Measures Among GPRA’s stated purposes is the improvement of federal program effectiveness and public accountability by promoting a new focus on customer satisfaction. However, our survey results suggest that having the measures to determine whether or not agencies are satisfying their customers is still at an early stage in the federal government and that, as such, there is ample room for improvement. Managers’ responses indicated that the presence of customer service measures for programs in the 28 individual agencies was low. For the 28 individual agencies, the percentage of managers responding that they had such measures to a great or very great extent ranged from 14 percent at NRC to a high of 54 percent at GSA and VA. At only four agencies—GSA, VA, OPM, and NASA—did even slightly over half of the managers report that they had customer service measures to such an extent. In 10 of the agencies, less than one-third of managers reported positively on having these measures. (See fig. 7.) Managers’ responses did not reflect any notable progress in further expanding the presence of customer service measures since our previous survey. Specifically, in 2000, 38 percent of managers reported having customer service measures for their programs to a great or very great extent compared with 32 percent reporting that way in 1997, not a statistically significant increase.
Quality and Efficiency Measures In crafting GPRA, Congress expressed its interest in American taxpayers getting quality results from the programs they pay for, as well as its concern about waste and inefficiency in federal programs. However, managers’ responses indicate that the extent to which agencies have developed measures of either quality or efficiency is not very high. In only three agencies—NASA, VA, and OPM—did more than 50 percent of managers report having quality measures to a great or very great extent. In 14 of the agencies, less than one-third of managers reported having quality measures to a comparable extent. For the 28 individual agencies, the percentage reporting quality measures ranged from 14 percent at HCFA to 61 percent at NASA. (See fig. 8.) Similarly for efficiency measures, at only two agencies—GSA and Energy—did 50 percent or more of managers report having such measures to a great or very great extent. At almost half of the agencies, less than one-third of managers reported having them to this extent. For the 28 agencies included in our survey, this percentage ranged from 9 percent at HCFA to 56 percent at GSA, as shown in figure 9. Governmentwide, 39 percent of federal managers in 2000 reported having quality measures for their programs, not a statistically significant increase from the 31 percent in 1997. In 2000, 35 percent of managers cited that they had measures that gauged the efficiency of program operations, a significant increase from the 26 percent reporting such measures in 1997. Managers’ Responses Across Agencies on Using Performance Information Were Mixed The fundamental reason for collecting information on a program’s performance is to take action in managing the program on the basis of that information.
For five of the management activities we asked about in 1997 and 2000—setting program priorities, allocating resources, adopting new program approaches or changing work processes, coordinating program efforts, and setting individual job expectations—the percentage of managers reporting use to a great or very great extent showed a statistically significant decrease in 2000. Setting Program Priorities, Allocating Resources, and Adopting New Program Approaches or Changing Work Processes In setting program priorities, the information obtained from measuring a program’s performance provides a basis for deciding whether parts of the program or the entire program itself should be given a higher or lower priority. Across the 28 individual agencies, the percentage of managers reporting this use to a great or very great extent ranged from 26 percent at NSF to 64 percent at HUD. At only seven agencies—HUD, SSA, SBA, VA, GSA, OPM, and NASA—did more than 50 percent of managers respond positively regarding this use. (See fig. 10.) When we examined the responses of only those managers who answered on the extent scale, 56 percent of managers overall reported in 2000 that they used performance information when setting program priorities. Although this percentage decreased significantly from 66 percent in 1997, setting program priorities was the activity for which the highest percentage of managers governmentwide reported this use to a great or very great extent in 2000. In addition, performance information allows program managers to compare their programs’ results with goals and thus determine where to target program resources to improve performance. When managers are forced to reduce their resources, the same analysis can help them target the reductions to minimize the impact on program results.
Across the 28 individual agencies, the percentage of managers reporting that they used performance information to a great or very great extent when allocating resources ranged from 24 percent at NSF to 66 percent at OPM, with 50 percent or more of managers reporting such use at only 7 agencies—OPM, SBA, HUD, NASA, GSA, the Department of the Treasury, and VA. (See fig. 11.) Governmentwide, 53 percent of those managers who expressed an opinion on the extent scale reported in 2000 that they used performance information to a great or very great extent when allocating resources, a statistically significant decrease from the 62 percent responding in this way in 1997. Third, by using performance information to assess the way a program is conducted, managers can consider alternative approaches and processes in areas where goals are not being met and enhance the use of program approaches and processes that are working well. Across the 28 individual agencies, the percentage of managers reporting such use to a great or very great extent ranged from 25 percent at the Forest Service to 64 percent at OPM. At only seven of the agencies—OPM, SBA, VA, GSA, NASA, HUD, and SSA—did 50 percent or more of managers report such use. (See fig. 12.) Governmentwide in 2000, 51 percent of those managers who expressed an opinion on the extent scale reported that they used performance information when adopting new program approaches or changing work processes, statistically significantly lower than the 66 percent in 1997. For these three key management activities—setting program priorities, allocating resources, and adopting new program approaches or changing work processes—the percentage of managers governmentwide who reported using performance information to a great or very great extent decreased significantly between 1997 and 2000. Moreover, for each of these activities, at only 7 of the 28 agencies did 50 percent or more of managers report such use.
These data suggest that at the majority of agencies, the managers who are highly engaged in applying one of the most fundamental and clear tenets of results-based management—using program performance information to make government programs work better—are in the minority. Coordinating Program Efforts GPRA’s emphasis on results implies that federal programs contributing to the same or similar outcomes should be closely coordinated to ensure that goals are consistent and complementary and that program efforts are mutually reinforcing. For programs that are related, program managers can use performance information to lay the foundation for improved coordination. The survey data show that such use may not be widespread. At the 28 individual agencies, the percentage of managers reporting such use to a great or very great extent ranged from 17 percent at FAA to 57 percent at HUD. Moreover, one-third or less of managers at more than half of the agencies reported using performance information when coordinating program efforts. At only three agencies—HUD, VA, and GSA—was the percentage of managers reporting such use over 50 percent. (See fig. 13.) Overall, 43 percent of those managers who expressed an opinion on the extent scale reported in 2000 that they used performance information when coordinating program efforts with other internal or external organizations—14 percentage points lower than the 57 percent reporting this use in 1997, a statistically significant change. Setting Individual Job Expectations In high-performing organizations, employees’ performance expectations are aligned with the competencies and performance levels needed to support the organizations’ missions, goals and objectives, and strategies. When federal managers use performance information to set individual job expectations, they both emphasize the role their individual employees should play in accomplishing program goals and reinforce the importance of employee responsibility for achieving results.
However, the results of our survey suggest that many managers are not consistently using performance information in this important way. At the 28 individual agencies, the percentage of managers reporting the use of performance information to a great or very great extent when setting individual job expectations ranged from 16 percent at HCFA to 66 percent at SBA. As indicated by managers’ responses to our survey, less than half of managers in 21 of the 28 agencies are extensively engaged in taking this important step in reinforcing the relationship between employees’ efforts to implement their agencies’ programs and the results those programs realize. Only seven agencies—SBA, HUD, GSA, the Department of Commerce, VA, OPM, and NASA—had 50 percent or more of managers reporting such use. (See fig. 14.) When we examined the responses of only those managers who answered on the extent scale, 51 percent of managers overall reported in 2000 that they used performance information to a great or very great extent when setting individual job expectations, a statistically significant difference from the 61 percent responding in this way in 1997. The executive branch has taken steps to reinforce the connection between employee performance and agency goals. For example, OMB’s latest Circular No. A-11 guidance on preparing fiscal year 2002 annual performance plans states that those plans should set goals to cover human capital management in areas such as linking individual performance appraisals to program performance. Also, on October 13, 2000, OPM published final regulations, effective November 13, 2000, that change the way agencies are to evaluate the performance of members of the SES. Specifically, agencies are to place increased emphasis on appraising executive performance on results and using results as the basis for performance awards and other personnel decisions. 
Concluding Observations For agencies to successfully become high-performing organizations, their leaders need to foster performance-based cultures, find ways to measure performance, and use performance information to make decisions. At a fundamental level, results from our 2000 federal managers survey indicate wide differences among individual agencies’ levels of success in demonstrating a results-based climate. Managers’ responses suggest that while some agencies are clearly showing signs of becoming high-performing organizations, others are not. Transforming organizational cultures, however, is an arduous and long-term task. The survey results provide important information that agency leadership can use to help identify key opportunities to build higher-performing organizations across the federal government. We will continue to work with senior leadership in the individual agencies to identify actions that can be taken to address the issues raised by their managers’ survey responses. Congress has a vital role to play as well. As part of its confirmation, oversight, authorization, and appropriation responsibilities, Congress has the opportunity to use the information from our 2000 managers survey, as well as information from agencies’ performance plans and reports and our January 2001 Performance and Accountability Series and High-Risk Series, to emphasize performance-based management and to underscore Congress’ commitment to addressing long-standing challenges. Agency Comments On April 9, 2001, we provided the Director, Office of Management and Budget, with a draft of this report for his review and comment. In his May 11, 2001 written response, included in appendix XXX, OMB’s Deputy Director acknowledged the importance of the report providing a basis for comparison to our 1997 survey results as well as allowing for individual analysis of 28 agencies.
He said that the report’s findings appeared to be consistent with OMB’s views regarding the extent of agencies’ progress in implementing GPRA, stating that while all agencies are in full compliance with the requirements of the law, most are not yet at a stage where they are truly managing for results. In addition, he outlined the new administration’s planned initiatives to make the federal government more results-oriented, including strengthening the linkage between budget decisionmaking and program performance. As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies of the report to Senator Richard J. Durbin, Ranking Member, Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; Senator Fred Thompson, Chairman, and Senator Joseph Lieberman, Ranking Member, Senate Committee on Governmental Affairs; and Representative Dan Burton, Chairman, and Representative Henry A. Waxman, Ranking Minority Member, House Committee on Government Reform. We will also send copies to the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget, and the heads of the 28 agencies included in our survey. In addition, we will make copies available to others upon request. If you have any questions concerning this report, please contact J. Christopher Mihm or Joyce Corry on (202) 512-6806. Peter Del Toro and Thomas Beall were key contributors to this report. Scope and Methodology A questionnaire on performance and management issues was sent to a stratified random sample of 3,816 out of a population of about 93,000 full- time, mid- and upper-level civilian managers and supervisors working in the 24 executive branch agencies covered by the Chief Financial Officers Act of 1990 (CFO Act). 
These agencies represent about 97 percent of the executive branch full-time workforce, excluding the U.S. Postal Service. In reporting the questionnaire data, when we use the term “governmentwide” and the phrase “across the federal government,” we are referring to these 24 CFO Act executive branch agencies, and when we use the terms “federal managers” and “managers,” we are referring to both managers and supervisors. The sample was drawn from the March 1999 Office of Personnel Management’s Central Personnel Data File (CPDF)—the most recent version of the CPDF available when we began drawing our sample—using file designators indicating performance of managerial and supervisory functions. The questionnaire was designed to obtain the observations and perceptions of respondents on such results-oriented management topics as the presence, use, and usefulness of performance measures; hindrances to measuring and using performance information; agency climate; information technology; program evaluation; and various aspects of the Government Performance and Results Act of 1993 (GPRA). Most of the items on the questionnaire were closed-ended—that is, depending on the particular item, respondents could choose one of two or more response categories or rate the strength of their perception on a 5-point extent scale ranging from “to no extent” to “to a very great extent.” In most cases, respondents also had the option of choosing the response category “no basis to judge/not applicable.” About half of the items on the questionnaire were contained in a previous survey that was conducted between November 1996 and January 1997 as part of the work we did in response to a GPRA requirement that we report on implementation of the act. This previous survey, although done with a smaller sample size of 1,300 managers, covered the same agencies as the 2000 survey, which was sent out between January and August 2000.
Individuals who did not respond to the initial questionnaire were sent up to two follow-up questionnaires. In some cases, we contacted individuals by telephone and faxed the questionnaire to them to expedite completion of the survey. The current survey was designed to update and further elaborate on the results of the previous survey. Similar to the previous survey, the sample was stratified by whether the manager or supervisor was Senior Executive Service (SES) or non-SES. The management levels covered General Schedule (GS), General Management (GM), or equivalent schedules at levels comparable to GS/GM-13 through career SES or equivalent levels of executive service. Stratification was also done by the 24 CFO Act agencies, with an additional breakout of 4 selected agencies from their departments—Forest Service, Health Care Financing Administration, Federal Aviation Administration, and Internal Revenue Service. These four agencies were selected on the basis of our previous work identifying them as facing significant managerial challenges. The sample was also stratified to include special pay plans at some agencies to improve our coverage of managers and supervisors working at those agencies. For example, Senior Foreign Service executives from the State Department and the Agency for International Development were included in the sample. We included these special pay plan strata to ensure at least a 90-percent coverage of all managers and supervisors at or comparable to the GS/GM-13 through career SES level at the 28 departments and agencies we surveyed. Finally, we added strata to include a group of respondents who answered the previous survey and who still worked in the same agency at the same management level at the time of the 2000 survey.
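The stratified design described above can be sketched in a few lines of code. This is an illustrative reconstruction, not GAO's actual sampling procedure; the strata, population counts, and per-stratum sample sizes below are hypothetical, chosen only to show the mechanics of drawing a simple random sample within each stratum.

```python
# Sketch of stratified random sampling (illustrative; strata and sizes are hypothetical).
import random

def stratified_sample(population, strata_key, sizes, seed=1999):
    """Draw a simple random sample of the requested size within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for stratum, members in strata.items():
        n = min(sizes.get(stratum, 0), len(members))  # cap at stratum size
        sample.extend(rng.sample(members, n))
    return sample

# Hypothetical population of (agency, is_SES) managers:
population = ([("HUD", False)] * 400 + [("HUD", True)] * 40 +
              [("SBA", False)] * 300 + [("SBA", True)] * 30)
# Oversampling the small SES strata relative to their population share:
sizes = {("HUD", False): 20, ("HUD", True): 10,
         ("SBA", False): 20, ("SBA", True): 10}
sample = stratified_sample(population, lambda p: p, sizes)  # 60 sampled managers
```

Stratifying this way guarantees that small but analytically important groups, such as SES executives or the four broken-out component agencies, appear in the sample in sufficient numbers to support separate estimates, which proportional sampling alone would not ensure.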
During the course of the survey, we deleted 212 persons from our sample who had retired, separated, died, or otherwise left the agency, or who for some other reason were excluded from the population of interest. We received useable questionnaires from 2,510 sample respondents, or about 70 percent of the remaining eligible sample. The response rate across the 28 agencies ranged from 59 percent to 82 percent. We took several steps to check the quality of our survey data. We reviewed and edited the completed questionnaires, made internal consistency checks on selected items, and checked the accuracy of data entry on a sample of surveys. We also followed up on a sample of nonrespondents to assess whether their views differed from the views of those who returned the survey. We randomly selected a subsample of 136 persons across all strata from the group of individuals who had not returned a completed questionnaire a month or more after the last of three attempts was made to elicit their participation in our survey. We received 67 useable surveys from this group. In addition, there were 41 individuals who, when contacted by telephone, refused to participate in the survey but were willing to answer three key questions from it. We included their answers to those three questions in our analysis of nonrespondents on those questions. We analyzed the responses of these groups on selected items compared with the responses received from all other respondents. Our analyses of selected items did not show a sufficient or consistent degree of difference between survey nonrespondents and respondents, and, thus, we included the responses of our subsample with all other responses. Except where noted, percentages are based on all respondents returning useable questionnaires. The survey results are generalizable to the 28 departments and agencies we surveyed.
All reported percentages are estimates that are based on the sample and are subject to some sampling error as well as nonsampling error. In general, percentage estimates in this report for the entire sample have confidence intervals ranging from about ±2 to ±7 percentage points at the 95 percent confidence level. In other words, if all CFO Act agency managers and supervisors in our population had been surveyed, the chances are 95 out of 100 that the result obtained would not differ from our sample estimate in the more extreme cases by more than ±7 percentage points. In the appendixes of this report comparing each agency to the rest of government, confidence intervals for the reported agency percentages and the rest of government percentages range from ±2 to ±16 percentage points. Because a complex sample design was used and different types of statistical analyses are being done, the magnitude of sampling error will vary across the particular groups or items being compared due to differences in the underlying sample sizes and associated variances. Consequently, in some instances, a difference of a certain magnitude may be statistically significant, while in other instances, depending on the nature of the comparison being made, a difference of equal or even greater magnitude may not achieve statistical significance. We note throughout the report when differences between 1997 and 2000 governmentwide data are significant at the .05 probability level, and in appendixes II through XXIX when an agency’s data differ significantly from data for the rest of government. Figures 1 through 14 in the letter report do not show when individual agencies are statistically significantly different from each other. Department of Agriculture: Selected Survey Results Of all the agencies surveyed, the responses from managers at the Department of Agriculture most closely paralleled those of other managers governmentwide in the aspects of agency climate, performance measurement, and using performance information.
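The arithmetic behind statements like "±2 to ±7 percentage points at the 95 percent confidence level" can be illustrated with the textbook normal-approximation formulas for a proportion. This is a simplified sketch that assumes a simple random sample; it ignores the survey's complex stratified design (and the design effects it implies), and the respondent counts plugged in below are only the overall sample sizes mentioned in this methodology section, not the exact denominators GAO used for any particular comparison.

```python
# Normal-approximation sampling-error sketch (simple random sampling assumed;
# the actual survey used a complex stratified design, which this ignores).
import math

def ci_half_width(p, n, z=1.96):
    """95% confidence half-width for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def two_proportion_z(p1, n1, p2, n2):
    """Pooled z statistic for a difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# A 50% proportion among the 2,510 respondents to the 2000 survey:
half = ci_half_width(0.50, 2510)              # about 0.02, i.e., roughly ±2 points
# Comparing 2000 (50%, n=2,510) with 1997 (38%, n=1,300):
z = two_proportion_z(0.50, 2510, 0.38, 1300)  # |z| well above 1.96: significant at .05
```

The half-width shrinks with larger samples, which is why the agency-level appendixes, with far fewer respondents per agency, carry intervals as wide as ±16 percentage points while governmentwide estimates stay within ±7.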
That is, Agriculture was not significantly different from the rest of the government for any of the survey questions we examine in this appendix. Agriculture is the only agency of the 28 we surveyed for which this is true. Survey results for one component of Agriculture, the Forest Service, are not included here but are reported in a separate appendix. Top Leadership Less than half (47 percent) of Agriculture’s managers expressed the view that their agency’s top leadership was strongly committed to achieving results to a great or very great extent, as shown below. For the rest of the government, 53 percent of managers indicated a similar level of commitment by top leadership to achieving results. Positive Recognition Less than a quarter (24 percent) of managers at Agriculture reported that employees received positive recognition to a great or very great extent for helping their agencies accomplish their strategic goals, as shown below. Agriculture ranked in the lowest quarter of the 28 agencies surveyed. Authority and Accountability Thirty-eight percent of managers at Agriculture reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 59 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63 percent, respectively. Types of Performance Measures When asked about the types of performance measures they had for their programs, the highest percentage of Agriculture managers (51 percent) reported having output measures and the lowest (34 percent) cited customer service measures. Using Performance Information Similar to the rest of the government, less than half of managers at Agriculture reported that they used performance information for each of the management activities shown below.
Agency for International Development: Selected Survey Results Overall, the Agency for International Development (AID) was largely similar to the rest of government except for being lower in aspects of agency climate. AID was statistically significantly lower than the rest of the government in the percentage of managers who reported the following to at least a great extent: top leadership demonstrated a strong commitment to achieving results; managers were held accountable for results; and employees who helped the agency achieve its strategic goals were positively recognized. In addition, the percentage of managers responding to at least a great extent on positive recognition was the second lowest, after the Federal Aviation Administration (FAA), of the 28 agencies surveyed. In all other areas, AID was not statistically significantly different from the rest of the government. Top Leadership Thirty-nine percent of AID managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, and this percentage is 14 points lower than that of the rest of the government (53 percent). This difference is statistically significant. Positive Recognition Fourteen percent of AID managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this percentage is 17 points lower than that of managers who responded this way for the rest of the government (31 percent). This difference is statistically significant. AID was the second lowest ranking agency in this regard, after FAA, of the 28 agencies included in the survey. 
Authority and Accountability Twenty-five percent of AID managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 43 percent indicated that they were held accountable for results to a similar extent, as shown below. AID managers’ response concerning the extent to which managers were held accountable for results (43 percent) was statistically significantly lower than the 63 percent reported for the rest of the government. AID was one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Department of Energy, Department of State, Forest Service, General Services Administration, and Health Care Financing Administration.) Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of AID managers (56 percent) reported having outcome measures and the lowest (28 percent) cited efficiency measures, as shown below. AID was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Department of Energy, Federal Emergency Management Agency, Department of Health and Human Services, National Aeronautics and Space Administration, National Science Foundation, and Office of Personnel Management.) In addition, the percentages of AID managers who reported having quality measures (48 percent) and outcome measures (56 percent) to a great or very great extent were both in the highest quarter of the 28 agencies surveyed. Use of Performance Information Similar to the rest of the government, less than half of AID managers reported that they used performance information for each of the management activities shown below. 
AID ranked in the lowest quarter of the agencies for the percentage of managers who reported that they used performance information when allocating resources (32 percent). Department of Commerce: Selected Survey Results Overall, the Department of Commerce was largely similar to the rest of the government except on two aspects of agency climate and using performance information. Commerce was statistically significantly higher than the rest of the government in the percentage of managers who reported that their agency’s top leadership was strongly committed to achieving results to at least a great extent, and the percentage of managers who indicated that they used performance information when setting individual job expectations for staff to a similar extent. In all other areas, Commerce was not statistically significantly different from the rest of the government. Top Leadership Almost two-thirds (65 percent) of managers at Commerce expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, and this percentage is 12 points higher than that of the rest of the government (53 percent). This difference is statistically significant. Positive Recognition Thirty-nine percent of Commerce managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, whereas 30 percent of managers responded this way for the rest of the government. Authority and Accountability Forty-six percent of managers at Commerce reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 57 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 35 and 63 respectively. 
Commerce managers’ responses concerning the extent of their decisionmaking authority placed the agency in the highest quarter of the agencies surveyed, although the difference between Commerce and the rest of the government was not statistically significant. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of Commerce managers (50 percent) reported having output measures and the lowest (30 percent) cited efficiency measures, as shown below. Forty-seven percent of managers reported having outcome measures to at least a great extent. Use of Performance Information Commerce ranked statistically significantly higher (55 percent) than the rest of the government (41 percent) in the percentage of managers who indicated that they used performance information when setting individual job expectations for staff, as shown below. Department of Defense: Selected Survey Results Overall, the Department of Defense (DOD) was largely similar to the rest of the government, except in aspects of agency climate and performance measurement. DOD was statistically significantly higher than the rest of the government for survey items concerning the percentage of managers who reported that their agency's top leadership was strongly committed to achieving results to at least a great extent, and the percentage of managers who reported having customer service and quality measures to at least a great extent. In all other areas, DOD was not statistically significantly different from the rest of the government. Top Leadership Fifty-nine percent of DOD managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, and this percentage is 9 points higher than that of the rest of the government (50 percent). This difference is statistically significant. 
Positive Recognition Thirty-one percent of DOD managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this percentage is about the same as managers who responded this way for the rest of the government (30 percent). Authority and Accountability Forty percent of managers at DOD reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 66 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 34 and 61 respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of DOD managers (49 percent) reported having output measures and the lowest (36 percent) cited efficiency measures, as shown below. Forty-six percent of managers reported having outcome measures to at least a great extent. In addition, the percentages of DOD managers who reported having customer service and quality measures to a great or very great extent (both 45 percent) were significantly higher than the percentages of managers who responded in this way for the rest of the government (35 and 36 percent, respectively). Use of Performance Information Similar to the rest of the government, less than half of DOD managers reported that they used performance information for each of the management activities shown below. Department of Education: Selected Survey Results The Department of Education was statistically significantly lower than the rest of the government on one aspect of using performance information: the percentage of managers who reported using performance information when setting individual job expectations for staff. 
Among the 28 agencies surveyed, Education had the third lowest percentage of managers, after the Health Care Financing Administration and the National Science Foundation, who reported using performance information in this way to a great or very great extent. In all other areas, Education was not significantly different from the rest of the government. Top Leadership Sixty-two percent of managers at Education expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, compared with 53 percent for the rest of the government, as shown below. Positive Recognition Twenty-three percent of managers at Education reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, compared with 31 percent for the rest of the government. Authority and Accountability Twenty-five percent of managers at Education reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 51 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of Education managers (52 percent) reported having output measures and the lowest (31 percent) cited quality measures, as shown below. Forty-eight percent of managers reported having outcome measures to at least a great extent. Use of Performance Information Similar to the rest of the government, less than half of managers at Education reported that they used performance information for each of the management activities shown below. 
In addition, Education was significantly lower (28 percent) than the rest of the government (41 percent) in the percentage of managers who indicated that they used performance information when setting individual job expectations for staff. Education was the third lowest agency surveyed, after the Health Care Financing Administration and the National Science Foundation, in the percentage of managers who reported using performance information in this way to a great or very great extent. Department of Energy: Selected Survey Results The Department of Energy was largely similar to the rest of the government except for aspects of performance measurement and agency climate. It was statistically significantly higher than the rest of the government in the percentage of managers who reported having outcome and efficiency measures to a great or very great extent. Energy was significantly below the rest of the government in the percentage of managers who reported that managers were held accountable for results to at least a great extent. In all other areas, the agency was not statistically significantly different from the rest of the government. Top Leadership Exactly half (50 percent) of managers at Energy expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, compared with 53 percent for the rest of the government. Positive Recognition Thirty-three percent of managers at Energy reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, which was about the same as managers who responded this way for the rest of the government (31 percent). 
Authority and Accountability Thirty-three percent of managers at Energy reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 49 percent indicated that they were held accountable for results to a similar extent, as shown below. Energy managers' response concerning the extent to which managers were held accountable for results (49 percent) was statistically significantly lower than the 63 percent reported by the rest of the government. Energy was one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Agency for International Development, Department of State, Forest Service, General Services Administration, and Health Care Financing Administration.) Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of managers at Energy (55 percent) reported having outcome measures and the lowest percentage (45 percent) cited quality measures, as shown below. Energy was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Agency for International Development, Federal Emergency Management Agency, Department of Health and Human Services, National Aeronautics and Space Administration, National Science Foundation, and Office of Personnel Management.) In addition, the percentages of managers at Energy who reported having efficiency (50 percent) and outcome measures (55 percent) to a great or very great extent were significantly higher than the percentages of managers who responded in this way for the rest of the government (35 and 44 percent, respectively). 
Use of Performance Information Similar to the rest of the government, less than half of managers at Energy reported that they used performance information for each of the management activities shown below. Environmental Protection Agency: Selected Survey Results In general, the Environmental Protection Agency (EPA) was largely similar to the rest of the government except for two aspects of performance measurement. The agency was statistically significantly lower than the rest of the government in the percentage of managers who reported having efficiency and customer service measures to at least a great extent. In all other areas, EPA was not statistically significantly different from the rest of the government. Top Leadership Slightly more than half (52 percent) of EPA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, and this percentage is about the same as reported by managers in the rest of the government (53 percent). Positive Recognition Twenty-nine percent of EPA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this percentage is about the same as that reported by managers in the rest of the government (31 percent). Authority and Accountability Forty-one percent of EPA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 53 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63 respectively. 
Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of EPA managers reported having output measures (54 percent) and the lowest percentage (21 percent) cited efficiency measures, as shown below. Thirty-nine percent of managers reported having outcome measures to at least a great extent. In addition, the percentages of EPA managers who reported having efficiency measures (21 percent) and customer service measures (28 percent) to a great or very great extent were significantly below the percentages of managers who responded in this way for the rest of the government (36 and 39 percent, respectively). Use of Performance Information Similar to the rest of the government, less than half of EPA managers reported that they used performance information for each of the management activities shown below. Federal Aviation Administration: Selected Survey Results In general, the Federal Aviation Administration (FAA) was worse than the rest of the government on multiple aspects of agency climate, performance measurement, and the use of performance information. The agency was statistically significantly lower than the rest of the government in the percentage of managers who reported that top agency leadership demonstrated a strong commitment to achieving results; that employees who helped the agency achieve its strategic goals received positive recognition; that managers had the decisionmaking authority they needed; that they had outcome, customer service, or quality performance measures; and that they used performance information for all five management activities discussed in this appendix. For other survey items—being held accountable for results and having output and efficiency measures—FAA was not significantly different from the rest of the government. 
Of the 28 agencies surveyed, FAA rated significantly lower than the rest of the government on more of the survey items discussed in this appendix than any other agency. FAA had the lowest percentage of managers who reported to at least a great extent that their agency’s top leadership was strongly committed to achieving results, that employees received positive recognition for helping their agency accomplish its strategic goals, and that they used performance information when coordinating program efforts with other internal or external organizations. Top Leadership Less than a quarter (22 percent) of FAA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 33 points lower than that of the rest of the government (55 percent), and this difference is statistically significant. For this item, FAA ranked last of the 28 agencies included in the survey. Positive Recognition Twelve percent of FAA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 20 points lower than that of managers who responded this way for the rest of the government (32 percent), and this difference is statistically significant. Authority and Accountability Sixteen percent of FAA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 59 percent indicated that they were held accountable for results to a similar extent, as shown below. FAA was among five agencies surveyed where the gap between accountability and authority exceeded 40 percentage points. 
(The others were the Internal Revenue Service (IRS), Social Security Administration, Small Business Administration, and Department of Housing and Urban Development.) FAA managers’ response concerning the extent of their decisionmaking authority was the second lowest, after IRS, among the 28 agencies surveyed. FAA’s 16 percent is significantly lower than the 37 percent reported by the rest of the government. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of FAA managers (46 percent) reported having output measures and the lowest (27 percent) cited customer service measures, as shown below. In addition, the percentages of FAA managers who reported having customer service measures (27 percent), quality measures (29 percent), and outcome measures (34 percent) to a great or very great extent were all significantly below the percentages of managers who responded in this way for the rest of the government (39, 40, and 44 percent, respectively). Use of Performance Information FAA ranked significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information for each of the management activities shown below. In addition, the agency ranked lowest (17 percent) among the 28 agencies surveyed concerning the use of such information when coordinating program efforts with internal or external organizations. Federal Emergency Management Agency: Selected Survey Results The Federal Emergency Management Agency (FEMA) was generally similar to the rest of the government except for being lower on two aspects of performance measurement and one aspect of how managers use performance information. 
The agency was statistically significantly lower than the rest of the government in the percentage of managers who reported having output and outcome measures and who reported using performance information when adopting new program approaches or changing work processes to at least a great extent. In all other areas, FEMA was not statistically significantly different from the rest of the government. Top Leadership Forty-two percent of FEMA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, compared with 53 percent for the rest of the government, as shown below. Positive Recognition Slightly more than a quarter (26 percent) of FEMA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, compared with 31 percent for the rest of the government. Authority and Accountability Forty-two percent of FEMA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 69 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of FEMA managers (42 percent) reported having customer service measures and the lowest percentage (27 percent) cited output measures, as shown below. FEMA was the only agency of the 28 we surveyed where managers identified customer service measures as the most prevalent of the 5 performance measures asked about. 
In addition, the percentages of FEMA managers who reported having output measures (27 percent) and outcome measures (28 percent) to a great or very great extent were significantly below the percentages of managers who responded in this way for the rest of the government (50 and 44 percent respectively). Use of Performance Information Similar to the rest of the government, less than half of FEMA managers reported that they used performance information for each of the management activities shown below. In addition, the percentage of managers who indicated that they used performance information when adopting new program approaches or changing work processes at FEMA was significantly lower (29 percent) than that in the rest of the government (42 percent). Forest Service: Selected Survey Results Overall, the Forest Service was below the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. It was statistically significantly lower than the rest of the government in the percentage of managers who reported that top agency leadership demonstrated a strong commitment to achieving results; managers were held accountable for results; they had outcome, quality, or efficiency performance measures; and they used performance information to set priorities, adopt new program approaches, or coordinate program efforts. Of the 28 agencies surveyed, Forest Service rated significantly lower than the rest of the government on more of the survey items discussed in this appendix than any other agency except for the Federal Aviation Administration (FAA) and Health Care Financing Administration. Forest Service was the lowest among the agencies we surveyed in the percentage of managers who reported that they were held accountable for achieving results to at least a great extent. 
In addition, the agency ranked the lowest among the 28 agencies surveyed in the percentage of managers who reported using performance information when adopting new approaches or changing work processes and the second lowest, after FAA, in the percentage of managers who indicated that they used performance information when coordinating efforts with internal or external organizations. In all other areas, the agency was not statistically significantly different from the rest of the government. Top Leadership Slightly more than a quarter (26 percent) of managers at the Forest Service expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 28 points lower than that of the rest of the government (54 percent), and this difference is statistically significant. Forest Service ranked second from last, just ahead of FAA, among the 28 agencies included in the survey. Positive Recognition Twenty-seven percent of Forest Service managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, compared with 31 percent of managers who responded this way for the rest of the government. Authority and Accountability Forty-one percent of Forest Service managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas almost the same percentage of managers—40 percent—indicated that they were held accountable for results to a similar extent, as shown below. Forest Service managers’ response concerning the extent to which managers were held accountable for results (40 percent) was the lowest among the 28 agencies surveyed and is statistically significantly lower than the 63 percent reported by managers in the rest of the government. 
Forest Service was one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Agency for International Development, Department of Energy, Department of State, General Services Administration, and Health Care Financing Administration.) Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of Forest Service managers (56 percent) reported having output measures and the lowest (24 percent) cited efficiency measures, as shown below. In addition, the percentages of Forest Service managers who reported having outcome measures (29 percent), efficiency measures (24 percent), or quality measures (25 percent) to a great or very great extent were all significantly below the percentages of managers who responded in this way for the rest of the government (44, 36, and 39 percent, respectively). Use of Performance Information Forest Service was statistically significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information when setting program priorities (34 percent), adopting new program approaches or changing work processes (25 percent), and coordinating program efforts with internal or external organizations (21 percent). General Services Administration: Selected Survey Results The General Services Administration (GSA) was above the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. The agency was statistically significantly higher than the rest of the government in the percentage of managers who reported that employees who helped the agency achieve its strategic goals received positive recognition; they had outcome, customer service, or efficiency performance measures; and they used performance information for four different management tasks. 
For the survey items discussed in this appendix, GSA and the Small Business Administration had the greatest number of items for which they were statistically significantly higher than the rest of the government. GSA was also significantly lower than the rest of the government concerning the percentage of managers who reported that they were held accountable for results. In all other areas, the agency was not statistically significantly different from the rest of the government. GSA ranked first among the 28 agencies surveyed in the percentage of managers reporting that employees received positive recognition for helping the agency accomplish its strategic goals to at least a great extent. The agency also had the highest percentage of managers who indicated that they had efficiency measures for their programs to at least a great extent and, along with the Department of Veterans Affairs (VA), GSA had the highest percentage of managers who reported having customer service measures to a similar extent. Top Leadership Almost two-thirds (63 percent) of GSA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, compared with 53 percent for the rest of the government. Positive Recognition Fifty-two percent of GSA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 22 points higher than that of managers who responded in this way for the rest of the government (30 percent) and the difference is statistically significant. GSA was the highest-ranking agency of the 28 agencies included in the survey for this item. 
Authority and Accountability Thirty-six percent of GSA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 49 percent indicated that they were held accountable for results to a similar extent, as shown below. GSA managers’ response concerning the extent to which managers were held accountable for results (49 percent) was statistically significantly lower than the 63 percent reported by the rest of the government. GSA was one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Agency for International Development, Department of Energy, Department of State, Forest Service, and Health Care Financing Administration.) Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of GSA managers (58 percent) reported having output measures and the lowest (42 percent) cited quality measures, as shown below. In addition, the percentages of GSA managers who reported having efficiency measures (56 percent), customer service measures (54 percent), and outcome measures (56 percent) to a great or very great extent were significantly above the percentages of managers who responded in this way for the rest of the government. GSA ranked first among the 28 agencies surveyed in the percentage of managers who reported that they had efficiency measures for their programs to at least a great extent and also ranked first, along with VA, for the percentage reporting customer service measures to a similar extent. Use of Performance Information GSA was statistically significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information for the management activities shown below, except for the allocation of resources. 
Health Care Financing Administration: Selected Survey Results Overall, the Health Care Financing Administration (HCFA) was below the rest of the government in aspects of agency climate, the use of performance information, and, especially, performance measurement. The agency was statistically significantly lower than the rest of the government for survey items concerning the percentages of managers who reported that managers were held accountable for results; reported having five different types of performance measures; and indicated that they used performance information for four management tasks. In all other areas, HCFA was not statistically significantly different from the rest of the government. Of the 28 agencies surveyed, HCFA rated significantly lower than the rest of the government on more of the survey items discussed in this appendix than any other agency except for the Federal Aviation Administration. HCFA had the lowest percentage of managers who reported having four of the five types of performance measures we asked about: output, efficiency, quality, and outcome measures. For the fifth type—customer service measures—the agency ranked second lowest, after the Nuclear Regulatory Commission (NRC). In addition, the agency had the lowest percentage of managers who indicated that they used performance information when setting individual job expectations for staff. HCFA was also second lowest among the agencies we surveyed in the percentage of managers who reported that they were held accountable for results to at least a great extent. Top Leadership Less than half (46 percent) of HCFA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, compared with 53 percent for the rest of the government. 
Positive Recognition Thirty percent of HCFA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this percentage is almost the same as that of managers who responded this way for the rest of the government (31 percent). Authority and Accountability Twenty-eight percent of HCFA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 42 percent indicated that they were held accountable for results to a similar extent, as shown below. HCFA managers’ response concerning the extent to which managers were held accountable for results (42 percent) was significantly lower than the 63 percent reported by the rest of the government. HCFA was second lowest of the agencies we surveyed, after Forest Service, in the percentage of managers who reported that they were held accountable to at least a great extent. The agency was also one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Agency for International Development, Department of Energy, Department of State, Forest Service, and General Services Administration.) Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of HCFA managers (19 percent) reported having output measures and the lowest (9 percent) cited efficiency measures, as shown below. Seventeen percent of managers reported having outcome measures to at least a great extent. The percentages of HCFA managers who reported having each of the five types of performance measures shown below were all statistically significantly below the percentages of managers who responded in this way for the rest of the government. 
In addition, HCFA was the lowest ranking agency of the 28 agencies surveyed for each type of performance measure shown below—except for customer service measures, where it ranked second lowest, above only NRC. Use of Performance Information HCFA was statistically significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information for all of the management activities shown below, except for adopting new program approaches or changing work processes. In addition, the agency ranked lowest among the 28 agencies surveyed concerning the use of such information when setting individual job expectations for staff (16 percent), and second lowest, above only the National Science Foundation, in using performance information to set program priorities (27 percent). Department of Health and Human Services: Selected Survey Results The Department of Health and Human Services (HHS) was largely similar to the rest of the government, except for one aspect of agency climate. HHS was statistically significantly higher than the rest of the government in the percentage of managers who reported that employees received positive recognition to at least a great extent for helping the agency achieve its strategic goals. In all other areas, the agency was not significantly different from the rest of the government. Survey results for one component of HHS, the Health Care Financing Administration, are not included here but are reported in a separate appendix. Top Leadership Sixty percent of HHS managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, compared with 53 percent for the rest of the government. Positive Recognition Forty-six percent of HHS managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below.
This percentage is 16 points higher than that of managers who responded this way for the rest of the government (30 percent), and the difference is statistically significant. Authority and Accountability Forty-three percent of HHS managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 68 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 35 and 62, respectively. HHS managers ranked in the top quarter of the 28 agencies surveyed for managers' perceptions concerning both the extent of decisionmaking authority and the degree to which managers were held accountable for results, although the differences between HHS and the rest of the government on these two items were not statistically significant. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of HHS managers (46 percent) reported having outcome measures and the lowest (37 percent) cited efficiency measures, as shown below. HHS was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Agency for International Development, Department of Energy, Federal Emergency Management Agency, National Aeronautics and Space Administration, National Science Foundation, and Office of Personnel Management.) The agency ranked in the lowest quarter of the agencies surveyed for the percentage of managers reporting that their programs had output measures (44 percent), although the difference between HHS and the rest of the government was not statistically significant. Use of Performance Information Similar to the rest of the government, less than half of HHS managers reported that they used performance information for each of the management activities shown below.
Department of Housing and Urban Development: Selected Survey Results The Department of Housing and Urban Development (HUD) was above the rest of the government in aspects of agency climate, performance measurement, and, particularly, the use of performance information. The agency was statistically significantly higher than the rest of the government in the percentages of managers who reported that employees received positive recognition for helping the agency achieve its strategic goals; that managers were held accountable for results; that they had output and outcome measures; and that they used performance information to set program priorities, allocate resources, coordinate program efforts, and set job expectations. Of the 28 agencies surveyed, only the General Services Administration and the Small Business Administration (SBA), each with one more item, were significantly higher than the rest of the government on more of these items than HUD. In all other areas, HUD was not significantly different from the rest of the government. Top Leadership Almost two-thirds (64 percent) of HUD managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, whereas 53 percent of managers responded this way for the rest of the government, as shown below. HUD managers were in the top quarter of agencies surveyed for this item. Positive Recognition Forty-seven percent of HUD managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 17 points higher than that of managers who responded this way for the rest of the government (30 percent), and the difference is statistically significant.
Authority and Accountability Thirty-six percent of HUD managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help their agency accomplish its strategic goals, whereas 79 percent indicated that they were held accountable for results to a similar extent, as shown below. HUD was among five agencies surveyed where the gap between accountability and authority was wide, exceeding 40 percentage points. (The others were the Federal Aviation Administration, Internal Revenue Service, SBA, and Social Security Administration.) HUD managers' response concerning the extent of their decisionmaking authority (36 percent) was identical to that of the rest of the government. Their response concerning the extent to which managers were held accountable for results (79 percent) was statistically significantly higher than the 63 percent reported by managers in the rest of the government. HUD ranked highest in its response concerning accountability among the 28 agencies included in the survey. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of HUD managers reported having output measures (75 percent), which was statistically significantly higher than the rest of the government (50 percent). In addition, HUD and SBA ranked first among the 28 agencies surveyed in the percentage of managers reporting this type of performance measure. HUD was also significantly higher than the rest of the government in the percentage of its managers who reported having outcome measures. HUD and the National Aeronautics and Space Administration ranked first among the agencies surveyed in the percentage of managers reporting outcome measures (63 percent). Of the five measures we asked about, HUD managers cited customer service measures least frequently (36 percent), as shown below.
Use of Performance Information HUD ranked statistically significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information for each of the management activities shown below, except for adopting new program approaches or changing work processes. In addition, the agency ranked first among the 28 agencies we surveyed concerning the use of performance information when setting program priorities (64 percent) and coordinating program efforts with internal or external organizations (57 percent). HUD also ranked second to SBA in the percentage of managers who reported using performance information when setting individual job expectations for staff (59 percent). Department of the Interior: Selected Survey Results The Department of the Interior was below the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. The agency was statistically significantly lower than the rest of the government in the percentages of managers who expressed the view that the agency's top leadership was strongly committed to achieving results to at least a great extent; reported having efficiency and quality measures; and indicated that they used performance information for setting program priorities, allocating resources, and coordinating program efforts. In all other areas, the agency was not statistically significantly different from the rest of the government. Top Leadership Less than half (44 percent) of managers at Interior expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 10 points lower than that of the rest of the government (54 percent), and the difference is statistically significant. Interior ranked in the bottom quarter of the 28 agencies included in the survey.
Positive Recognition Thirty-three percent of Interior's managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this is about the same as the percentage of managers who responded this way for the rest of the government (30 percent). Authority and Accountability Forty-three percent of managers at Interior reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 60 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 35 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of Interior managers (48 percent) reported having output measures and the lowest (24 percent) cited efficiency measures, as shown below. Thirty-nine percent of managers reported having outcome measures. In addition, the percentages of Interior managers who reported having efficiency measures (24 percent) and quality measures (30 percent) to a great or very great extent were statistically significantly below the percentages of managers reporting these results for the rest of the government. Interior also ranked in the lowest quarter of the agencies surveyed for efficiency measures. Use of Performance Information Interior was statistically significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information for all of the management activities shown below, except for adopting new program approaches or changing work processes and setting individual job expectations for staff.
Internal Revenue Service: Selected Survey Results The Internal Revenue Service (IRS) was below the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. The agency was statistically significantly lower than the rest of the government in the percentage of managers who reported that top leadership at their agency demonstrated a strong commitment to achieving results; that managers had the decisionmaking authority they needed to help their agency accomplish its strategic goals; that their programs had output and outcome performance measures; and that they used performance information when setting program priorities, adopting new program approaches or changing work processes, and coordinating program efforts. In all other areas, the agency was not statistically significantly different from the rest of the government. IRS had the lowest percentage of managers who reported that they had the decisionmaking authority they needed to help their agency accomplish its strategic goals to at least a great extent. The agency also ranked second to last among the agencies surveyed, above only the Health Care Financing Administration, in the percentage of managers who indicated that they had outcome measures for their programs. Top Leadership Forty-two percent of managers at IRS expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This result is 12 percentage points lower than that of the rest of the government (54 percent), and the difference is statistically significant. IRS ranked in the lowest quarter of the agencies surveyed. Positive Recognition Twenty-seven percent of IRS managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, not significantly different from the percentage of managers who responded this way for the rest of the government (31 percent).
Authority and Accountability Fifteen percent of IRS managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 60 percent indicated that they were held accountable for results to a similar extent, as shown below. IRS was among five agencies surveyed where the gap between accountability and authority was wide, exceeding 40 percentage points. (The others were the Federal Aviation Administration, Social Security Administration, Small Business Administration, and Department of Housing and Urban Development.) IRS managers' response concerning the extent of their decisionmaking authority was the lowest among the 28 agencies surveyed. The agency's 15 percent is statistically significantly lower than the 37 percent reported by the rest of the government. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of IRS managers (37 percent) reported having quality measures and the lowest (28 percent) cited outcome measures, as shown below. In addition, the percentages of IRS managers who reported having output measures (32 percent) and outcome measures (28 percent) to a great or very great extent were significantly below the percentages of managers reporting these results for the rest of the government. Use of Performance Information IRS ranked statistically significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information for setting program priorities (36 percent), adopting new program approaches or changing work processes (29 percent), and coordinating program efforts (27 percent), as shown below. Department of Justice: Selected Survey Results The Department of Justice was largely similar to the rest of the government, except for two aspects of performance measurement.
It was statistically significantly lower than the rest of the government in the percentages of managers who reported having customer service and quality performance measures. In all other areas, Justice was not statistically significantly different from the rest of the government. Top Leadership Less than half (49 percent) of managers at Justice expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, compared with 54 percent for the rest of the government. Positive Recognition Twenty-six percent of managers at Justice reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is not significantly different from that of managers who responded this way in the rest of the government (31 percent). Authority and Accountability Thirty-three percent of managers at Justice reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 60 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of managers at Justice (44 percent) reported having output measures and the lowest (20 percent) cited customer service measures, as shown below. Thirty-eight percent of managers reported having outcome measures. In addition, the percentages of managers at Justice who reported having customer service measures (20 percent) and quality measures (25 percent) to a great or very great extent were significantly below those of the rest of the government.
Use of Performance Information Similar to the rest of the government, less than half of managers at Justice reported that they used performance information for each of the management activities shown below. In addition, Justice ranked in the second lowest quarter of the agencies surveyed for the percentage of managers who reported using performance information for each of these activities. Department of Labor: Selected Survey Results The Department of Labor was largely similar to the rest of the government, except in one aspect of agency climate and one aspect of performance measurement. The agency was statistically significantly lower than the rest of the government in the percentage of managers who reported that employees received positive recognition for helping their agency achieve its strategic goals to at least a great extent and in the percentage of managers reporting that they had customer service measures for their programs. In all other areas, Labor was not statistically significantly different from the rest of the government. Top Leadership Over half (56 percent) of managers at Labor expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is about the same as that of the rest of the government (53 percent). Positive Recognition Eighteen percent of managers at Labor reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 13 points lower than that of managers who responded this way for the rest of the government (31 percent), and the difference is statistically significant. For this survey item, Labor was the third lowest ranking agency of the 28 included in the survey, above only the Federal Aviation Administration and the Agency for International Development.
Authority and Accountability Forty percent of managers at Labor reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 62 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of managers at Labor (55 percent) reported having output measures and the lowest (28 percent) cited customer service measures, as shown below. In addition, the percentage of managers at Labor who reported having customer service measures to a great or very great extent was significantly below the 39 percent reported for the rest of the government. Forty percent of managers at Labor reported having outcome measures. Use of Performance Information Similar to the rest of the government, less than half of managers at Labor reported that they used performance information for each of the management activities shown below. In addition, the agency ranked in the lowest quarter of the agencies we surveyed concerning the use of such information when coordinating program efforts with internal or external organizations (27 percent). National Aeronautics and Space Administration: Selected Survey Results The National Aeronautics and Space Administration (NASA) was above the rest of the government in aspects of agency climate, performance measurement, and the use of performance information.
It was statistically significantly higher than the rest of the government in the percentages of managers who reported that the agency's top leadership demonstrated a strong commitment to achieving results; that the agency provided positive recognition of employees who helped the agency achieve its strategic goals; that they had outcome, quality, and customer service measures; and that they used performance information to allocate resources. In all other areas, the agency was not statistically significantly different from the rest of the government. For the items discussed in this appendix, NASA was in the top quarter of the 28 agencies we surveyed when agencies were ranked by the total number of items on which they were statistically significantly higher than the rest of the government. The percentage of NASA managers reporting that the agency's leadership demonstrated a strong commitment to achieving results to at least a great extent was second highest, along with the Social Security Administration (SSA), and just behind the National Science Foundation (NSF), among the agencies we surveyed. The agency also had the highest percentage of managers reporting that their programs had quality measures and was tied for second highest with the Office of Personnel Management (OPM), after the Department of Veterans Affairs (VA) and the General Services Administration (GSA), in the percentage of managers reporting that they had customer service measures. Top Leadership Over two-thirds (68 percent) of NASA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 15 points higher than that of the rest of the government (53 percent), and the difference is statistically significant. NASA ranked second highest, along with SSA and after NSF, of the 28 agencies included in the survey.
Positive Recognition Forty-seven percent of NASA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 17 points higher than that of managers who responded this way for the rest of the government (30 percent), and the difference is statistically significant. Authority and Accountability Forty-one percent of NASA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 68 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of NASA managers (63 percent) reported having outcome measures and the lowest (43 percent) cited efficiency measures, as shown below. NASA was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Agency for International Development, Department of Energy, Federal Emergency Management Agency, Department of Health and Human Services, National Science Foundation, and Office of Personnel Management.) In addition, the percentages of NASA managers who reported having outcome measures (63 percent), quality measures (61 percent), or customer service measures (52 percent) to a great or very great extent were all significantly above the percentages of managers reporting these results for the rest of the government. NASA ranked highest of the 28 agencies in the percentage of managers reporting that their programs had quality measures and second highest, along with OPM and after VA and GSA, in the percentage citing customer service measures.
Use of Performance Information NASA ranked statistically significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information when allocating resources (54 and 43 percent, respectively), as shown below. National Science Foundation: Selected Survey Results The National Science Foundation (NSF) was above the rest of the government in one aspect of agency climate and below the rest of the government in aspects of the use of performance information. It was statistically significantly higher than the rest of the government, and ranked first of the 28 agencies included in the survey, in the percentage of managers who reported that their agency's top leadership was strongly committed to achieving results to at least a great extent. NSF was significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information when carrying out three management tasks: setting program priorities, allocating resources, and setting individual job expectations. For all three of these items, NSF ranked among the lowest of the agencies we surveyed. The agency had the lowest percentage of managers reporting that they used performance information when setting program priorities and when allocating resources. NSF had the second lowest percentage, above only the Health Care Financing Administration, of managers reporting that they used this information when setting individual job expectations for staff. In all other areas, NSF was not statistically significantly different from the rest of the government. Top Leadership More than two-thirds (69 percent) of NSF managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 16 points higher than that of the rest of the government (53 percent), and the difference is statistically significant.
For this survey item, NSF ranked first of the 28 agencies included in the survey. Positive Recognition Thirty-seven percent of NSF managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, compared with 31 percent for the rest of the government. Authority and Accountability Forty-four percent of NSF managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 62 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. NSF managers' response concerning the extent of their decisionmaking authority ranked third highest among the 28 agencies surveyed (after the Office of Personnel Management and the Department of Commerce), although the difference from the rest of the government was not statistically significant. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of NSF managers reported having outcome measures (55 percent), and the agency ranked in the top quarter of the agencies surveyed for the percentage of managers citing this type of measure. The lowest percentage of NSF managers (35 percent) reported having efficiency measures in their programs to at least a great extent, as shown below. NSF was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Agency for International Development, Department of Energy, Federal Emergency Management Agency, Department of Health and Human Services, National Aeronautics and Space Administration, and Office of Personnel Management.)
Use of Performance Information NSF was significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information for each of the management activities shown below, except for adopting new program approaches or changing work processes and coordinating program efforts with internal or external organizations. In addition, the agency ranked last among the 28 agencies we surveyed concerning the use of performance information when setting program priorities (26 percent) and allocating resources (24 percent), and second from last, above only HCFA, when setting individual job expectations for staff (22 percent). Nuclear Regulatory Commission: Selected Survey Results The Nuclear Regulatory Commission (NRC) was above the rest of the government in aspects of agency climate, and the agency was both above and below the rest of the government for different aspects of performance measurement. It was statistically significantly higher than the rest of the government in the percentages of managers who reported that their agency's top leadership demonstrated a strong commitment to achieving results; that the agency provided positive recognition of employees who helped the agency achieve its strategic goals; and that they had output measures. In addition, NRC was significantly lower in the percentage of managers who reported having customer service, quality, and outcome measures. In all other areas, the agency was not statistically significantly different from the rest of the government. NRC ranked fourth, after the National Science Foundation, Social Security Administration, and National Aeronautics and Space Administration, in the percentage of managers who reported that their agency's top leadership was strongly committed to achieving results to at least a great extent and ranked last among the agencies surveyed in the percentage of managers who reported having customer service measures.
Top Leadership More than two-thirds (67 percent) of NRC managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 14 points higher than that of the rest of the government (53 percent), and the difference is statistically significant. Positive Recognition Forty-five percent of NRC managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 14 points higher than that of managers who responded this way for the rest of the government (31 percent), and the difference is statistically significant. Authority and Accountability Thirty-two percent of NRC managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 69 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of NRC managers (68 percent) reported having output measures and the lowest (14 percent) cited customer service measures, as shown below. NRC was statistically significantly higher than the rest of the government in the percentage of its managers who reported having output measures to a great or very great extent (68 percent). The percentages of NRC managers who reported having customer service measures (14 percent), quality measures (27 percent), or outcome measures (30 percent) were all statistically significantly below the percentages of managers for the rest of the federal government.
In addition, NRC ranked last of the 28 agencies included in the survey for the percentage of managers who reported that they had customer service measures. Use of Performance Information Similar to the rest of the government, less than half of NRC managers reported that they used performance information for each of the management activities shown below. Office of Personnel Management: Selected Survey Results The Office of Personnel Management (OPM) was higher than the rest of the government in aspects of agency climate and the use of performance information. The agency was statistically significantly higher than the rest of the government in the percentages of managers who reported that their agency provided positive recognition of employees who helped the agency achieve its strategic goals; that managers had the decisionmaking authority they needed to help their agency accomplish its strategic goals; and that they used performance information when allocating resources and adopting new program approaches or changing work processes. In all other areas, the agency was not statistically significantly different from the rest of the government. Of the 28 agencies surveyed, OPM had the highest percentage of managers who reported that they had the decisionmaking authority they needed to achieve results to at least a great extent. The agency ranked third, after the General Services Administration (GSA) and the Small Business Administration (SBA), in the percentage of managers who indicated that employees received positive recognition for achieving results to a great or very great extent. OPM managers again ranked first among the agencies surveyed in their use of performance information when allocating resources and when adopting new or different program approaches. 
Top Leadership Almost two-thirds (63 percent) of OPM managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, which is not a statistically significant difference from managers in the rest of government (53 percent). Positive Recognition Forty-nine percent of OPM managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. This percentage is 18 points higher than that of managers who responded this way for the rest of the government (31 percent) and this difference is statistically significant. OPM was the third highest-ranking agency, behind GSA and SBA, of the 28 agencies included in the survey. Authority and Accountability Fifty-eight percent of OPM managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 76 percent indicated that they were held accountable for results to a similar extent, as shown below. OPM managers’ response concerning the extent of their decisionmaking authority was the highest among the 28 agencies surveyed. OPM’s 58 percent is statistically significantly higher than the 36 percent reported by the rest of the government. OPM managers’ response concerning the extent to which managers were held accountable for results (76 percent) was the second highest of all agencies surveyed (after HUD), although OPM was not statistically significantly different from the rest of the government (63 percent). Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of OPM managers (58 percent) reported having outcome measures and the lowest (43 percent) cited efficiency measures, as shown below. 
OPM was one of only seven agencies where outcome measures were cited more frequently than output measures. (The others were the Agency for International Development, Department of Energy, Federal Emergency Management Agency, Department of Health and Human Services, National Aeronautics and Space Administration, and National Science Foundation.) In addition, OPM, along with the National Aeronautics and Space Administration (NASA), had the second highest percentage of managers who reported that their programs had customer service measures (52 percent), behind the Department of Veterans Affairs (VA) and GSA; it ranked third for quality measures (54 percent), after NASA and VA, and third for outcome measures (58 percent), after NASA and HUD. OPM was not statistically significantly different from the rest of the government on these items. Use of Performance Information OPM was statistically significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information when allocating resources (66 percent) and when adopting new program approaches or changing work processes (64 percent), as shown below. The agency ranked first among the 28 agencies surveyed for both of these items. In addition, the agency was among the top quarter of agencies concerning the use of performance information when setting program priorities (56 percent) or setting individual job expectations for staff (50 percent). However, OPM was not statistically significantly different from the rest of the government on either of these two items. Small Business Administration: Selected Survey Results The Small Business Administration (SBA) was higher than the rest of the government in aspects of agency climate, performance measurement, and, particularly, the use of performance information. The agency was lower than the rest of the government in one aspect of performance measurement.
It was statistically significantly higher than the rest of the government for survey items concerning the percentage of employees receiving positive recognition, accountability for results, having output measures, and using performance information for all five key activities discussed in this appendix. SBA was significantly below the rest of the government in the percentage of managers who reported having quality measures. Of the survey items discussed in this appendix, SBA and the General Services Administration (GSA) had the greatest number of items for which they were significantly higher than the rest of the government. In all other areas, SBA was not statistically significantly different from the rest of the government. The agency ranked second after GSA among the 28 agencies surveyed in the percentage of managers reporting that employees received positive recognition for helping the agency accomplish its strategic goals to at least a great extent. While generally comparable to the rest of the government for the other types of performance measures we asked about, SBA was ranked first among the agencies surveyed—along with the Department of Housing and Urban Development (HUD)—in the percentage of managers who reported that they had output measures. SBA also ranked first in the percentage of managers who indicated that they used performance information when setting individual job expectations. Top Leadership Slightly more than half (54 percent) of SBA managers expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below, and this percentage is about the same as managers who responded this way for the rest of the government (53 percent). Positive Recognition Fifty-one percent of SBA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below. 
This percentage is 21 points higher than that of managers who responded this way for the rest of the government (30 percent) and this difference is statistically significant. SBA ranked second highest, after GSA, of the 28 agencies included in the survey. Authority and Accountability Twenty-seven percent of SBA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 75 percent indicated that they were held accountable for results to a similar extent, as shown below. SBA was among five agencies surveyed where the gap between accountability and authority was wide and exceeded 40 percentage points. (The others were the Federal Aviation Administration, Internal Revenue Service, Social Security Administration, and Department of Housing and Urban Development.) SBA managers’ response concerning the extent to which managers are held accountable for results (75 percent) was statistically significantly higher than the 63 percent reported by the rest of the government. SBA managers’ response concerning the extent of their decisionmaking authority placed the agency in the bottom quarter of agencies surveyed, although the difference between SBA (27 percent) and the rest of the government (36 percent) was not statistically significant. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of SBA managers (75 percent) reported having output measures and the lowest (24 percent) cited quality measures, as shown below. Forty-four percent of managers reported having outcome measures. SBA was statistically significantly higher than the rest of the government in the percentage of its managers who identified having output measures to a great or very great extent (75 percent). 
However, the percentage of SBA managers who reported having quality measures (24 percent) was significantly below the percentage for the rest of the government (39 percent). In addition, SBA was tied for first with HUD among the 28 agencies surveyed in the percentage of managers who reported that they had output measures. Use of Performance Information SBA was statistically significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information for all five management activities shown below. In addition, the agency ranked first among the 28 agencies we surveyed concerning the use of performance information when setting individual job expectations for staff (66 percent). SBA was second from the top in the percentage of managers reporting that they used such information when allocating resources (61 percent), after the Office of Personnel Management (OPM). SBA also ranked second to OPM in the percentage of managers who cited using this information when adopting new program approaches or changing work processes (61 percent). Finally, it was third, after HUD and the Social Security Administration, in the percentage of managers reporting that they used such information when setting program priorities (61 percent). Social Security Administration: Selected Survey Results The Social Security Administration (SSA) was above the rest of the government in aspects of agency climate, performance measurement, and the use of performance information, and it was below the rest of the government in other aspects of agency climate and performance measurement. The agency was statistically significantly higher than the rest of the government in the percentage of managers reporting that their agency’s top leadership had a strong commitment to achieving results; that they had output measures; and that they used performance information to set program priorities.
SSA was significantly lower in the percentage of managers reporting that they had the decisionmaking authority they needed and that they had quality performance measures. In all other areas, the agency was not statistically significantly different from the rest of the government. SSA and the National Aeronautics and Space Administration (NASA) were second highest among the 28 agencies, after the National Science Foundation (NSF), in the percentage of managers who reported that their agency’s top leadership was strongly committed to achieving results to at least a great extent. Yet SSA was the third lowest agency, ahead of the Federal Aviation Administration (FAA) and the Internal Revenue Service (IRS), in the percentage of managers who believed that they had the decisionmaking authority they needed to achieve results to a similar extent. Top Leadership Over two-thirds (68 percent) of managers at SSA expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 15 points higher than that of the rest of the government (53 percent), and this difference is statistically significant. SSA and NASA were second to NSF for the 28 agencies included in the survey on this item. Positive Recognition Thirty-six percent of SSA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals compared with 30 percent for the rest of the government, as shown below. Authority and Accountability Twenty-three percent of SSA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 68 percent indicated that they were held accountable for results to a similar extent, as shown below.
SSA was among five agencies surveyed where the gap between accountability and authority was wide and exceeded 40 percentage points. (The others were the Internal Revenue Service, Federal Aviation Administration, Small Business Administration, and Department of Housing and Urban Development.) SSA managers’ response concerning the extent of their decisionmaking authority (23 percent) was the third lowest, ahead of FAA and IRS, among the 28 agencies surveyed and is statistically significantly lower than the 36 percent reported by the rest of the government. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of SSA managers (73 percent) reported having output measures and the lowest (29 percent) cited quality measures, as shown below. Forty-eight percent of managers reported having outcome measures. SSA was statistically significantly higher than the rest of the government in the percentage of its managers who identified having output measures to a great or very great extent. The percentage of SSA managers who reported having quality measures (29 percent) was significantly below the percentages of managers for the rest of the government. Use of Performance Information In contrast to the rest of the federal government, 62 percent of managers at SSA reported that they used performance information to a great or very great extent when setting program priorities. This is a statistically significant difference when compared to the 44 percent of managers who responded in this way across the rest of the government, as shown below. Department of State: Selected Survey Results The Department of State was below the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. It was statistically significantly lower than the rest of the government in the percentage of managers who reported that managers were held accountable by their agency for results. 
State also ranked significantly lower in the percentage of managers who reported having customer service and quality measures and using performance information when coordinating program efforts with other organizations. In all other areas, State was not statistically significantly different from the rest of the government. Top Leadership Less than half (46 percent) of managers at the Department of State expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, compared with 53 percent for the rest of the government, as shown below. Positive Recognition Thirty-six percent of State managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, compared with 30 percent for the rest of the government, as shown below. Authority and Accountability Thirty-five percent of managers at State reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 49 percent indicated that they were held accountable for results to a similar extent, as shown below. State managers' response concerning the extent to which managers were held accountable for results (49 percent) was statistically significantly lower than the 63 percent reported by the rest of the government. State was one of six agencies surveyed that had less than half of its managers reporting that they were held accountable to at least a great extent. (The others were the Agency for International Development, Department of Energy, Forest Service, General Services Administration, and Health Care Financing Administration.)
Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of managers at State (43 percent) reported having output measures and the lowest (21 percent) cited customer service measures, as shown below. Thirty-seven percent of managers reported having outcome measures. In addition, the percentages of State managers who reported having customer service measures (21 percent) and quality measures (25 percent) to a great or very great extent were statistically significantly below the percentages of managers reporting these results for the rest of the government. Use of Performance Information State ranked statistically significantly lower than the rest of the government in the percentage of managers who indicated that they used performance information when coordinating program efforts with internal or external organizations (21 percent). Department of Transportation: Selected Survey Results The Department of Transportation (DOT) was below the rest of the government in one aspect of performance measurement. It was statistically significantly lower than the rest of the government in the percentage of managers who reported having outcome measures for their programs. In all other areas, DOT was not significantly different from the rest of the government. Survey results for one component of DOT, the Federal Aviation Administration, are not included here but are reported in a separate appendix. Top Leadership Fifty-nine percent of managers at DOT expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, compared with 53 percent of managers in the rest of the government, as shown below. 
Positive Recognition Thirty-one percent of DOT managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, as shown below, and this percentage is equal to that of managers in the rest of the government. Authority and Accountability Forty-three percent of DOT managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 55 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of DOT managers (42 percent) reported having output measures and the lowest (27 percent) cited efficiency measures, as shown below. In addition, the percentage of DOT managers who reported having outcome measures to a great or very great extent (32 percent) was statistically significantly below the percentage of managers reporting these results for the rest of the government (44 percent). Use of Performance Information Similar to the rest of the government, less than half of managers at DOT reported that they used performance information for each of the management activities shown below. In addition, DOT ranked in the lowest quarter of the agencies surveyed for the percentage of managers who reported using performance information when allocating resources (34 percent) and when setting individual job expectations with staff (31 percent). Department of the Treasury: Selected Survey Results The Department of the Treasury was above the rest of the government in aspects of agency climate, performance measurement, and the use of performance information. 
The agency was statistically significantly higher than the rest of the government in the percentage of managers who expressed the view that their agency’s top leadership was strongly committed to achieving results to at least a great extent; who reported that they had both output and outcome measures for their programs; and who indicated that they used performance information when coordinating program efforts. For the items discussed in this appendix, Treasury was in the top quarter of the 28 agencies we surveyed when ranked by the total number of items they had that were statistically significantly higher than the rest of the government. In all other areas, Treasury was not statistically significantly different from the rest of the government. Survey results for one component of Treasury, the Internal Revenue Service, are not included here but are reported in a separate appendix. Top Leadership Almost two-thirds (64 percent) of managers at Treasury expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, as shown below. This percentage is 11 points higher than the rest of the government (53 percent) and is statistically significantly different. Positive Recognition Thirty-nine percent of Treasury managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, compared with 30 percent for the rest of the government, as shown below. Authority and Accountability Thirty-nine percent of Treasury managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 65 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 63, respectively. 
Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of Treasury managers (66 percent) reported having output measures and the lowest (37 percent) cited customer service measures, as shown below. In addition, the percentages of Treasury managers who reported having output measures (66 percent) and outcome measures (56 percent) to a great or very great extent were significantly above the percentages of managers reporting these results for the rest of the government (50 and 43 percent, respectively). Use of Performance Information Treasury was significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information when coordinating program efforts with internal or external organizations (49 percent). Department of Veterans Affairs: Selected Survey Results The Department of Veterans Affairs (VA) was above the rest of the government in aspects of performance measurement and the use of performance information. It was statistically significantly higher than the rest of the government in the percentage of managers who reported having output, customer service, and quality measures and those who reported using performance information to set program priorities, adopt new program approaches or change work processes, and coordinate program efforts with other organizations. In all other areas, the agency was not statistically significantly different from the rest of the government. For the items discussed in this appendix, VA was in the top quarter of the 28 agencies we surveyed when ranked by the total number of items they had that were statistically significantly higher than the rest of the government. In addition, VA and the General Services Administration (GSA) ranked highest among the 28 agencies surveyed in the percentage of managers who reported having customer service measures for their programs. 
Top Leadership Less than half (47 percent) of managers at VA expressed the view that their top leadership was strongly committed to achieving results to a great or very great extent, compared with 54 percent for the rest of the government, as shown below. Positive Recognition Twenty-three percent of VA managers reported that employees received positive recognition to a great or very great extent for helping their agency accomplish its strategic goals, compared with 31 percent for the rest of the government, as shown below. Authority and Accountability Thirty-seven percent of VA managers reported that they had, to a great or very great extent, the decisionmaking authority they needed to help the agency accomplish its strategic goals, whereas 67 percent indicated that they were held accountable for results to a similar extent, as shown below. For the rest of the government, these percentages were 36 and 62, respectively. Types of Performance Measures When asked about the types of performance measures in their programs, the highest percentage of VA managers (62 percent) reported having output measures and the lowest (44 percent) cited efficiency measures, as shown below. Forty-nine percent of managers reported having outcome measures. In addition, the percentages of VA managers who reported having customer service measures (54 percent), quality measures (57 percent), and output measures (62 percent) to a great or very great extent were significantly above the percentages of managers reporting these results for the rest of the government. VA and GSA were the highest among the agencies surveyed for the percentage of managers who reported having customer service measures for their programs. 
Use of Performance Information VA was significantly higher than the rest of the government in the percentage of managers who indicated that they used performance information when setting program priorities (60 percent), adopting new program approaches or changing work processes (56 percent), and coordinating program efforts with internal or external organizations (53 percent). Comments From the Office of Management and Budget For federal agencies to become high-performing organizations, top management needs to foster performance-based cultures, find ways to measure performance, and use performance information to make decisions. GAO's survey of federal managers found wide differences in how well individual agencies demonstrated a results-based climate. However, transforming organizational cultures is an arduous and long-term effort.
Managers' responses suggest that although some agencies are clearly showing signs of becoming high-performing organizations, others are not. The survey provides important information that agency leadership can use to build higher-performing organizations throughout government. GAO will continue to work with senior leadership in the individual agencies to help address the issues raised by their managers in responding to the survey. Congress has a vital role to play as well. As part of its confirmation, oversight, authorization, and appropriation responsibilities, Congress could use the information from GAO's survey, as well as information from agencies' performance plans and reports and GAO's January 2001 Performance and Accountability Series and High-Risk Series, to emphasize performance-based management and to underscore Congress' commitment to addressing long-standing challenges.
Background VA’s System of Health Care The mission of VA is to serve America’s veterans and their dependents. All VA programs are administered through three major administrations—VHA, the Veterans Benefits Administration, and the National Cemetery Administration. VA provides medical services to various veteran populations—including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. In general, veterans must enroll in VA health care to receive VA’s medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans’ own homes and in other locations in the community. VHA is responsible for overseeing the delivery of care to enrolled veterans, as well as the health care professionals and support staff who deliver that care. VHA is also responsible for managing all VA medical facilities. VA organizes its system of care into regional networks called VISNs. In September 2015, there were 21 VISNs nationwide, but VA is in the process of merging VISNs, which will result in 18 VISNs when completed. Each VISN is responsible for coordination and oversight of all administrative and clinical activities within its specified geographic region. Medical services are provided in inpatient/residential medical facilities and outpatient medical facilities, including the following.

Inpatient/residential care medical facilities as of January 2017:

VA medical centers: A medical facility that provides at least two types of care, such as inpatient, outpatient, residential, or institutional extended care. There are 168 VA medical centers.

Extended care site (community living center): A medical facility that provides institutional care, such as nursing home beds, for extended periods of time. There are 135 community living centers.
Residential care site: A medical facility that provides residential care, such as a domiciliary, for extended periods of time. There are 48 domiciliaries.

Outpatient care medical facilities as of October 1, 2016:

Community-based outpatient clinics (CBOC): A medical facility that provides primary care and mental health services and, in some cases, specialty services such as cardiology or neurology, in an outpatient setting. There are 737 CBOCs.

Health care center: A medical facility that provides the same services as CBOCs, but also ambulatory surgical procedures that may require moderate sedation. There are 22 health care centers.

Other outpatient service: A medical facility that provides care to veterans but is not classified as a CBOC or health care center, such as a mobile treatment facility. There are 305 other outpatient service sites.

In order to meet the needs of the veterans it serves, VA is authorized to pay for veteran health care services from non-VA providers through both the Non-VA Medical Care Program and clinical contracts. In fiscal year 2015, VA obligated about $10.1 billion to purchase care from non-VA providers. The Non-VA Medical Care Program, including the Choice Program and Patient-Centered Community Care, is referred to as “care in the community” by VA and allows VA to offer care to veterans in non-VA facilities, such as physicians’ offices and hospitals in the community, and to pay for this care using a fee-for-service arrangement. Clinical contracts are used by VA to bring non-VA providers—such as physicians, pharmacists, and nurses—into VA facilities to provide services to veterans. Current Efforts to Align Facilities with Veterans’ Needs SCIP Process VA works with the VISNs and medical facilities to manage its real property assets through VA’s capital-planning process.
The SCIP process—established in 2010 to assess and identify long-term capital needs—is VA’s main mechanism for planning and prioritizing capital projects, but it is constrained by VA’s budgetary resources, which determine how many projects can be funded. The goal of SCIP is to identify the full capital need to address VA’s service and infrastructure gaps, and to demonstrate that all project requests are centrally reviewed in an equitable and consistent way throughout VA, including across market areas within VA’s health care system, given competing capital needs. The SCIP process for a given fiscal year’s projects begins approximately 23 months before the start of that fiscal year, when VA provides a set of guidelines to the VISNs and medical facilities. SCIP uses information from models to identify excesses and deficits in services at the local level—called “gaps” within VA—and to justify capital investments. For example, SCIP uses data from the EHCPM, a model for projecting veteran enrollment, utilization of VA health care, and the associated expenditures VA needs to meet the expected demand for most of the health care services it provides. VA officials at the VISNs and medical facilities play a major role in the capital-planning process. Each VISN has a Capital Asset Manager and a planner who are responsible for coordination and oversight of facility alignment activities and who work with individual facility planners and engineering staff. Annually, planners at the medical facilities develop a 10-year action plan for their respective facilities, which includes capital and non-capital improvement projects to address gaps in service identified by the SCIP process. According to VA, these long-range plans allow the department to adapt to changes in demographics and in health care and benefits delivery, while at the same time incorporating infrastructure enhancements.
Medical facility officials then develop more detailed business plans for the capital improvement projects that are expected to take place in the first year of the 10-year action plan. These projects are validated, scored, and ranked centrally based on the extent to which they address the annual VA-approved SCIP criteria, using the assigned weights. (See fig. 1 for VA’s capital decision-making process for evaluating and funding capital projects.) Another tool available for use by some VISNs and medical facilities is the VAIP Process, which was implemented in fiscal year 2011 as a pilot project. Its goal was to identify the best distribution of health care services for veterans, where those services should be located based on veterans’ locations and referral patterns, and where VA should adapt services, facilities, and health care delivery options to better meet those needs. Data from the VAIP Process are designed to inform the VISNs’ and VA medical facilities’ operational decisions—taking into account costs, challenges, and local preferences—and the VAIP Process’s findings can also result in future SCIP projects. The VAIP Process produces a market-level health services delivery plan for the VISN and a facility master plan for each medical facility within the VISN. (See fig. 2 for an overview of the VAIP Process.) After completing the pilot, VA officials began formally implementing the VAIP Process across VISNs and their medical facilities, using multiple contractors. As of January 2017, VA officials told us they had mostly completed the VAIP Process in 6 of the 18 VISNs and had plans to start or complete the remaining VISNs by October 2018. According to officials who oversee the program, the entire cost of the VAIP Process is expected to be about $108 million.
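The centralized scoring and ranking step described above can be illustrated with a short sketch. The criteria names, weights, and ratings below are hypothetical placeholders, not VA’s actual SCIP criteria; the only detail taken from this report is that a narrative portion accounts for roughly one-third of a project’s overall score.

```python
# Illustrative sketch of criteria-weighted project scoring of the kind SCIP
# uses to rank capital projects centrally. Criteria and weights are hypothetical.

CRITERIA_WEIGHTS = {
    "narrative": 0.33,             # written justification (about one-third of the score)
    "safety": 0.25,                # hypothetical priority area
    "access_gap": 0.22,            # hypothetical priority area
    "condition_deficiency": 0.20,  # hypothetical priority area
}

def score_project(ratings: dict) -> float:
    """Combine 0-100 ratings on each criterion into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

def rank_projects(projects: dict) -> list:
    """Return project names ordered from highest to lowest weighted score."""
    return sorted(projects, key=lambda name: score_project(projects[name]), reverse=True)

submissions = {
    "clinic_expansion": {"narrative": 90, "safety": 40, "access_gap": 70, "condition_deficiency": 30},
    "demolition_only":  {"narrative": 50, "safety": 60, "access_gap": 10, "condition_deficiency": 80},
}
print(rank_projects(submissions))  # -> ['clinic_expansion', 'demolition_only']
```

In a weighted scheme like this, a strong rating on the heavily weighted narrative criterion can outrank a project with stronger ratings elsewhere, which is the subjectivity concern facility planners later describe.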
Prior Efforts to Align Facilities with Veterans’ Needs Over time, VA has recognized the need to modernize its facilities and align its real property portfolio to provide accessible, high-quality, and cost-effective services. In addition, VA has been the subject of several assessments focusing on facility alignment and on planning efforts to modernize its facilities. In 1999, VA initiated the CARES process to assess federally owned buildings and land in response to changing veteran demand for inpatient and outpatient care. CARES was the first comprehensive, long-range assessment of VA’s health care capital-asset priorities since 1981 and was designed to assess the appropriate function, size, and location of VA facilities. In May 2004, VA issued the CARES Decision report to Congress and other stakeholders. The decision report listed projects and actions that VA planned to take over the next 20 years, as well as the tools and principles that the agency planned to use to align its infrastructure and upgrade its facilities. VA officials told us that the implementation of CARES recommendations was monitored through fiscal year 2010. In July 2011, VA released an implementation and monitoring report on seven areas highlighted in the CARES report. In this report, VA stated that excess space was reduced through mechanisms such as the demolition of vacant buildings and the realignment of underutilized space. VA also reported that since fiscal year 2009, 509,247 square feet of space had been disposed of. VA officials stated that as of June 2016, some of the CARES recommendations were not fully implemented, and the process was essentially replaced when the SCIP process was implemented for fiscal year 2012. In a more recent effort, the Choice Act in 2014 required VA to contract with a private entity to conduct an independent assessment of 12 areas of VA’s health care delivery system and management processes, including its facilities.
Among the 12 areas, an assessment of facilities examined VA’s processes for facility planning, funding, maintenance, and construction. The Independent Assessment identified four systemic findings:

1. A disconnect in the alignment of demand, resources, and authorities.
2. Varying bureaucratic operations and processes.
3. Non-integrated variations in clinical and business data and tools.
4. Leaders who are not fully empowered, due to a lack of clear authority, priorities, and goals.

The Choice Act also established the Commission on Care to examine veterans’ access to VA health care and to examine and report on how best to organize VA, locate health resources, and deliver health care to veterans during the next 20 years. The Commission on Care assessed the results of the Independent Assessment as part of its work. The Commission on Care’s report included 18 high-level recommendations and was submitted to the President on June 30, 2016. Notably, the one recommendation in the Commission on Care’s report related to facility management was for the enactment of legislation that would authorize a process similar to the Base Realignment and Closure process to facilitate facility realignment decisions. While VA did not fully agree with the specifics of the recommendation, it did agree with the concept of a realignment commission focused solely on VA’s infrastructure needs once mission services were determined. Facility Alignment Is Affected by Shifts in Veteran Population and Care Delivery, and an Aging Infrastructure Long-standing factors, such as shifts in the veteran population and in the delivery of care, as well as an aging infrastructure, affect VA’s efforts to fully align its real property portfolio with the veteran population. Shifts in Veteran Population A decrease in, and a geographic shift of, the veteran population affect the agency’s ability to fully align its real property portfolio with veterans’ needs.
For example, VA’s VetPop2014 projected a 14 percent decrease in the overall veteran population by 2024. It also projected a geographic shift, with veterans continuing to migrate from the Northeast and Midwest areas of the United States to areas in the South and West. Figure 3 shows projected percentage population changes through 2024, by county. These shifts in the veteran population—which also mirror general population trends—may result in a misalignment of services relative to veterans’ needs, with insufficient capacity in some locations and excess capacity in others. As the population continues to shift, VA will need to make decisions on how best to address these capacity issues. For example, planning officials from three medical facilities said that facilities located in areas with a declining veteran population—and thus, most likely, a general population decrease—may experience challenges recruiting and retaining certain types of specialty providers. In addition, a planning official at one medical facility told us that in order for a facility’s providers to maintain clinical proficiency, there needs to be sufficient patient volume. With such shifts, these areas could be left with space that is underutilized because of a lack of veteran demand and health care providers. Although there is a projected decrease in the overall veteran population, VA’s EHCPM projects that nationally the number of enrolled veterans will increase through 2024, after which it will decline. However, this trend varies by region and generally mirrors the overall veteran population trends, with decreases in the Northeast and increases in the South. In addition to a projected enrollment increase in the short term, enrollee demographics and acuity levels are also projected to change—which will affect the amount and type of health care VA is projected to provide.
According to VA officials who oversee the EHCPM, the aging of enrolled veterans and the increasing prevalence of service-connected disabilities (whether because these disabilities appear later in life or because VA has changed its scope for eligibility) are driving significant increases in projected utilization and financial expenditures. For example, Vietnam-era veterans are expected to account for an increased utilization of long-term care services. In addition, these enrollees also tend to have increased rates of transition into the higher acuity priority groups for benefits. Overall, the number of veterans in these higher acuity priority groups is projected to continue to increase and is a key driver of utilization of VA health care services, including in-home and community-based services. As a result of this projected short-term growth in demand, followed by an eventual decline in veteran enrollment, VA must balance the expansion of services to meet near-term demand against potential excess capacity in the long term. Shifts in Care Delivery Shifts in the type of care that VA provides and in where its veterans obtain that care affect VA’s efforts to align its facilities with the changing veteran population. A Shift from Inpatient Care to Outpatient Care Similar to trends in the health care industry overall, VA’s model of care has shifted away from providing care in an inpatient setting toward an outpatient setting, which VA largely houses in converted inpatient space or in a growing number of CBOCs. This reflects, in part, the shift in demand from inpatient to outpatient services. According to the Independent Assessment, between 2007 and 2014, outpatient visits increased 41 percent while inpatient bed days declined 9 percent. Further, it reported that inpatient bed days have dropped as much as 21 percent in some VISNs and, over the next 20 years, are expected to decline an additional 50 percent or more.
This shift in utilization from inpatient to outpatient services will likely result in underutilized space once used for inpatients, as a majority of VA medical facilities were originally designed for the delivery of inpatient care. Officials who oversee SCIP, as well as some of the planners at medical facilities in our review, told us that they can close portions of facilities that are underutilized. However, these planners also told us that the savings are small when compared to closing an entire building. For example, a medical facility in our review temporarily closed one of its inpatient wings due to decreased utilization. Although the unused space was technically closed, the wing still had beds and equipment, and continued to consume electricity and utilities, including lights in the hallways and power to operate computers. (See fig. 4.) In addition to shifts in the veteran population and the type of care provided, changes in VA’s use of care in the community affect facility alignment. Although VA has traditionally provided care primarily through its own facilities, it has used, and continues to use, its statutory authority to purchase care from providers in the community. Purchased care has accounted for a small but growing proportion of VA’s health care budget over the past decade. For example, in fiscal year 2015, VA obligated about $10.1 billion for care in the community for about 1.5 million veterans. Three years earlier, in fiscal year 2012, VA spent about $4.5 billion on care in the community for about 983,000 veterans. VA officials who oversee the EHCPM told us that although, under VA’s care in the community programs, a portion of health care utilization may move from VA facilities to community care, the costs of VA facilities—costs such as staffing, utilities, transportation, and laundry—do not decrease proportionally when this shift occurs.
As a result, VA may be expanding care in the community while simultaneously operating underutilized and vacant space at its medical facilities. According to the Independent Assessment, if purchased care continues to increase, VA will need to realign resources by reducing its facilities. As VA expands its care in the community programs, questions remain regarding the impact on facility alignment. Planning officials at two of the seven medical facilities in our review told us that there is uncertainty surrounding the extent to which care in the community, as it currently exists, will continue in the future. These officials added that this uncertainty affects capital planning because capital projects are planned years in advance. For example, a planning official from one medical facility told us that in planning future renovations to address SCIP utilization gaps, officials are hesitant to send entire clinical service lines to the community because, if the Choice Act and its associated funds are not reauthorized, the facility may be financially responsible for continuing to provide that service through non-VA providers. Aging Infrastructure Aging infrastructure affects facility alignment because many VA facilities are no longer well suited to providing care in the current VA system, and VA will need to make decisions about how it can adapt them to current needs. For example, the average VHA building is approximately 60 years old—five times older than the average building of a not-for-profit hospital. Planning officials at five of the seven medical facilities in our review told us it is often difficult and costly to modernize, renovate, and retrofit older facilities—including converting inpatient facilities into outpatient facilities. These challenges have contributed to the presence of vacant and underutilized buildings.
VA reported in 2016 that its inventory includes 370 buildings that are vacant or less than 50 percent occupied and 770 buildings that are underutilized, requiring it to expend funds designated for patient care to maintain more than 11.5 million square feet of unneeded or underutilized space, at a cost of $26 million annually to operate and maintain. As veterans continue to use more outpatient care and less inpatient care, VA’s need to make decisions about its aging infrastructure and how to adapt it to current needs will continue to grow. Planning officials from two VISNs and four medical facilities in our review told us that outdated building configurations—such as low ceilings and small distances between support columns—could prevent facilities from fully complying with more recent VA health care delivery standards. A planning official from one medical facility told us it is difficult to reconfigure a facility in accordance with these new standards after it is already built; instead, the standards would have to be incorporated in the preliminary design phase. We observed various locations where, despite renovations, VA was unable to fully reconfigure existing spaces to meet newer care standards. See figures 5 and 6 for examples of these challenges to current health care delivery standards. We previously reported that the historic status of certain VA property can add to the complexity of converting or disposing of outdated facilities. In 2014, VA reported having 2,957 historic buildings, structures, or land parcels—the third most in the federal government, after the Department of Defense and the Department of the Interior. In some instances it may be more expensive to renovate than it would be to demolish and rebuild. However, demolition may not always be an option because of restrictions stemming from these buildings’ designation as historic.
For example, planning officials at four of the medical facilities told us that state historic preservation efforts prevented them from demolishing vacant buildings, even though these buildings require upkeep costs and pose potential safety hazards. (See figs. 7-9.) In addition, some VA medical facilities were built as large medical campuses with multiple unattached buildings. This configuration no longer meets modern health care delivery standards, in which services are concentrated in one building or a series of attached buildings. For example, three facilities we visited had large campuses that included portions of vacant land and buildings designated as historic. Figure 10 illustrates the historic Chillicothe, Ohio VA medical center campus, which has numerous vacant buildings that the medical facility would like to dispose of. Limitations in the Capital-Planning Processes Impede VA’s Alignment of Facilities with Veterans’ Needs VA’s SCIP Has Limitations SCIP has several limitations in its scoring and approval process, its time frames, and its access to information that can limit its utility in effectively aligning facilities with veterans’ needs. Limitations with SCIP’s Scoring and Approval Process Planning officials at VA medical facilities submit projects annually to SCIP, where they are centrally scored against a set of department-approved criteria and priority categories. To score high enough to be approved for funding, a project’s narrative portion of the evaluation must demonstrate how the project addresses predetermined VA priorities; this narrative portion represents about one-third of a project’s overall score. Planning officials at two of the VISNs and three of the medical facilities in our review told us that the narrative portion is a limitation of SCIP’s project-scoring and approval process because it relies on facility planning officials’ ability to write an accompanying narrative that addresses as many of the priorities as possible.
This introduces subjectivity into the process: a writer’s ability to demonstrate in the narrative how a project addresses more of the priorities can affect scoring independent of the project’s merits. This can undermine SCIP’s goal of ensuring all project requests are reviewed equitably and consistently. The Independent Assessment also found that some facilities have learned to place considerable emphasis on tailoring a project’s narrative to perceived high-value criteria—often using both in-house staff and consultants to try to maximize the scores. For example, planning officials from one medical facility told us that they needed a SCIP project to expand a women’s health center but did not think that it would score highly. Therefore, they told us, they wrote the narrative carefully so that it linked back to more priority areas than they would originally have considered, such as “increasing patient privacy,” in order for the project to score higher. In addition, another limitation of SCIP is that it allows facility planners to gain credit for closing service gaps by proposing capital projects that they have no intention of implementing. Specifically, planners at VA medical facilities must demonstrate within SCIP that they plan to address all service gaps within the 10-year action plan. However, planners can show that they are addressing a gap by including such a project in future or “out” years of their 10-year plan and then continually pushing the project into later years without implementing it or addressing the service gap. Such actions can undermine the department’s goal of using SCIP to strategically manage its health care facilities. Although the extent to which this is occurring is unclear, facility planners at two of the facilities in our review told us that they routinely enter projects for future years that they have little or no intention of actually pursuing.
For example, planning officials from one medical facility told us that in instances where they did not agree with the gaps that SCIP identified, they would include construction or demolition projects in the later years of their SCIP submission. They said they did this because (1) they could include a general project description without having to be too specific, and (2) demolition projects would most likely not score high enough to obtain funding. Planning officials from another medical facility told us that they continue to move demolition projects to the out years of their SCIP plan as a way of appearing to address an excess space gap in SCIP plans without actually implementing the projects. Even though some facility-level planning officials told us they did not think demolition projects would score high enough to get funding, officials who oversee SCIP told us it is possible if the projects’ narratives link back to several different priority areas, such as “safety” or “reducing facility condition assessment deficiencies.” As a result, they said, facilities will typically submit a demolition project in SCIP only if it is part of a demolition-and-rebuild type project that links back to more priority areas. Figure 11 shows an example of a non-clinical building on a medical facility campus that the facility’s planning officials would like to demolish due to structural issues. In addition, the SCIP scoring and approval process is limited in that it has no mechanism to correctly sequence projects. Specifically, planning officials at three of the seven medical facilities in our review told us that the SCIP approval process does not ensure that projects will be approved in the chronological order the officials determined to be appropriate. For example, planning officials at one of the medical facilities said that they had been submitting projects in SCIP to try to collocate specialty outpatient clinics that were in separate areas of the campus.
On several occasions, a project that needed to start after a predecessor project was finished was approved and funded first because it addressed a higher priority area. Out of fear that they would lose the funding if they waited for the first project to be approved, these planning officials told us they changed the planned location of one clinic to a less desirable location, instead of meeting their initial goal of collocating the clinics. According to OMB guidance, improper funding of segments of a project can lead to poor planning or higher costs. VA officials told us that this problem could be addressed through better training for facility planners, but budget thresholds may also play a role. Specifically, VA officials told us, and the Independent Assessment reported, that VA facility planners often divide a larger project into several smaller projects so that they stay under the statutorily defined threshold for major medical facility projects ($10 million)—a threshold that the Independent Assessment recommended eliminating. Limitations with SCIP Time Frames SCIP’s lengthy project-development and approval time frames can hinder capital project planning. Specifically, the time between when planning officials at VA medical facilities begin developing the narratives for projects that will be scored in SCIP and when they are notified that a project is funded has ranged from 17 to 23 months over the past 6 fiscal-year SCIP submissions. As such, facility planning officials routinely submit their next year’s planned projects before knowing the outcomes of those from the previous year. In one instance, for example, facility planning officials were required to begin working on the narratives for the projects planned for fiscal year 2015 before they learned which projects for fiscal years 2013 or 2014 were approved for funding.
In another example, facility planning officials had to wait about 18 months to officially learn that VA funded only 2 of the 1,403 projects submitted for fiscal year 2017. Officials who oversee the program told us that while they recognize that this is a concern, some information for unfunded projects is automatically loaded into the next year’s SCIP submission, reducing rework. Long time frames can also exacerbate SCIP’s inability to sequence projects in the desired order. For example, under current time frames and SCIP guidelines, a facility planner may have to delay submitting subsequent projects in a sequenced group of projects for up to 2 years each while waiting to ensure the predecessor project was funded. Figure 12 shows the overlapping timelines of the last 6 fiscal-year SCIP submissions. An official from the office that oversees SCIP told us that the timing of the budgeting process, which is outside VA’s control, contributes to these delays. While aspects of the process are outside VA’s control, over the last 6 fiscal years’ SCIP submissions, VA has chosen to wait about 6 to 10 months to report the results of the SCIP scoring process to the medical facilities. This situation makes it difficult for local officials to gauge the likelihood that their projects will receive funding. Federal standards for internal control note that quality information—such as information about approved projects—should be provided on a timely basis. A VA official said that for future SCIP cycles, VA plans to release the scoring results for minor construction and non-recurring maintenance projects to local officials earlier in the process. At the time of our review, however, the official did not have a time frame for when this would be done. Limitations Accessing SCIP Information SCIP is limited in its ability to provide planners with important information they need in the initial steps of planning for capital needs.
VA subdivides each VISN into a number of smaller “market areas.” However, SCIP limits facility planners’ access to the projects proposed by other markets and VISNs. According to federal standards for internal control, agencies should identify quality information and ensure that it is accessible. Planning officials from four of the medical facilities told us that these limitations on access to SCIP information make it difficult to obtain a comprehensive understanding of their needs for capital-planning purposes. For example, planning officials at one of the medical facilities told us that, because of this lack of access to information about nearby projects, a VA medical facility in a neighboring VISN allowed plans for a new CBOC near its VISN boundary to progress farther than they should have before VA officials determined that the new clinic would have been too close to an existing CBOC just over the VISN border. VA Has Done Little to Address Known Limitations with the SCIP Process VA is aware of many of the limitations of the SCIP process—the Independent Assessment found many of the same limitations and made recommendations to address them—but has taken little action. Specifically, in 2015, the Independent Assessment found that SCIP’s scoring and approval processes and time frames, among other things, undermined VA’s capital-planning and prioritization process. In addition, the Independent Assessment made several recommendations to address those limitations, including: (1) refining the SCIP processes to simplify scoring methods; (2) strengthening the business case submission process; and (3) developing mechanisms to ensure projects meet promised objectives. Officials who oversee SCIP told us that they were aware of, and mostly agreed with, the Independent Assessment’s findings in the facilities section.
To address all of the Independent Assessment’s recommendations, including most of the same SCIP limitations we found, VA created a task force called the Integrated Project Team, as we reported in September 2016. According to VA officials, the task force identified several actions to enhance and restructure the department’s infrastructure based on the facility section of the Independent Assessment. However, after 6 months of work, the task force disbanded before it developed an implementation plan for those initiatives. VA officials said that they were instead focused on addressing other priorities, such as those in the Commission on Care’s report—which built upon the Independent Assessment—and proposed legislation that could affect VHA operations. However, the Commission on Care did not address facility management issues at the level of the Independent Assessment’s recommendations. Not addressing important, known limitations runs counter to federal standards for internal control, which note that agencies should evaluate and determine appropriate corrective actions for identified limitations and deficiencies on a timely basis. In addition, managing federal real property is on GAO’s High Risk List, and our High Risk report notes that agencies should have a corrective action plan with steps to implement solutions to recommendations in order to be removed from the list. Without ensuring that recommendations from internal and external reviews are evaluated, decided upon, documented, and promptly acted on, VA does not have reasonable assurance that SCIP can be used to identify the full capital needs to address VA’s service and infrastructure gaps.
VAIP Facility Master Plans Have Limited Usefulness Because They Do Not Adequately Consider Care in the Community, among Other Weaknesses VA’s ongoing VAIP Process (estimated by VA officials to cost $108 million upon completion) was designed to provide a more strategic vision for aligning VA’s medical facilities and services with veterans’ needs. However, the facility master-planning process has several limitations, including that it assumes all future growth in services will be provided directly through VA facilities, without considering alternatives such as the status quo or purchasing care from the community. This assumption runs counter to VA guidance from November 2016, which notes the need to use taxpayer resources wisely by avoiding building facilities that create 100-year commitments when community capacity could be used instead. Nonetheless, facility master plans produced by the VAIP Process make construction recommendations that directly contradict this policy because the plans do not adequately consider care in the community. For example:

Implementing one medical facility’s master plan would require about $762 million to relocate and renovate spaces within a building, acquire adjacent land, demolish inadequate buildings, construct a new medical tower, and provide seven clinical services that are currently provided elsewhere. No analysis was done to determine whether these services could be better or more cost-effectively purchased through care in the community.

Another medical facility’s master plan indicated a need to construct five new structures, estimated to cost about $100 million, to provide clinical services that veterans were already obtaining elsewhere. Similarly, no cost-benefit analyses were done to consider care in the community as an option. This construction was recommended in addition to an unrelated major construction upgrade costing in excess of $366 million.
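The kind of make-versus-buy comparison the master plans omit can be sketched in a few lines. All dollar figures and time horizons below are hypothetical, chosen only to show the mechanics of weighing construction-plus-operations costs against purchasing the same care in the community; they are not drawn from any VA analysis.

```python
# Hypothetical sketch of the cost comparison OMB guidance calls for: build and
# operate new VA space, or purchase the same services from community providers.

def lifecycle_cost_of_construction(construction_cost: float,
                                   annual_om_cost: float,
                                   years: int) -> float:
    """Total cost of building plus operating and maintaining the facility."""
    return construction_cost + annual_om_cost * years

def lifecycle_cost_of_community_care(annual_purchased_care_cost: float,
                                     years: int) -> float:
    """Total cost of purchasing the same services in the community."""
    return annual_purchased_care_cost * years

build = lifecycle_cost_of_construction(100e6, 30e6, 30)  # $100M build, $30M/yr O&M, 30 years
buy = lifecycle_cost_of_community_care(25e6, 30)         # $25M/yr purchased care, 30 years
print(f"build: ${build/1e6:.0f}M, buy: ${buy/1e6:.0f}M")
```

Under these hypothetical inputs the operating costs dominate the construction cost over the planning horizon, which is why a plan that reports only design and construction estimates can materially understate the commitment being made.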
Because these plans do not consider that care could be provided in the community, implementing them increases the risk of spending more than necessary to provide the services. OMB's acquisition guidance notes that investments in major capital assets should be made only if no alternative private sector source can support the function at a lower cost. Long-term costs for capital assets are particularly relevant for VA as its data project that the number of enrolled veterans will begin to fall after 2024. VA officials told us that operations and maintenance represent 85 to 90 percent of the total life-cycle costs for VA health care facilities. Officials who oversee the VAIP Process told us that the facility master plans lack analyses regarding care in the community because the officials were awaiting further guidance from VA on the proportion of care and types of services to obtain from the community versus in VA facilities. VA released this guidance in November 2016. That guidance requires individual analyses at the local level in order to determine the mix of services provided in VA facilities versus those in community care, a requirement that is not in the facility master-planning process. We also identified other weaknesses that limit the utility of the VAIP Process: Lack of standardization: According to VA officials, they have mostly completed the VAIP Process for 6 of the 18 VISNs, but, in part because the process is conducted by several contractors, VA has not fully standardized it across VISNs. As a result, the outputs are not comparable across VISNs. Officials who oversee the program said that they are proposing an enhancement to the Health Service Delivery Plan portion of the process that would allow for standardized analyses, planning, and reporting. But the proposal was still in its early stages, and at the time of our review, there was no timeline for completion.
Lack of accountability for implementing VAIP recommendations: Officials who oversee the program told us that there is no requirement that the facilities or VISNs implement recommendations based on the VAIP Process's Health Services Delivery Plans or the facility master plans—although VA officials said there have been discussions about requiring this accountability in the future. As a result, there is no accountability for evaluating or responding to the VAIP recommendations. Incomplete cost estimates: The VAIP Process's facility master plans include estimates for construction and design, but do not include any long-term estimates for operating costs. OMB's Capital Programming Guide notes that these life-cycle costs, such as operations and maintenance, should be included in a credible life-cycle cost estimate. As previously noted, VA officials said that operating and maintenance costs for VA medical facilities can represent 85 to 90 percent of total facility costs. According to officials who oversee the program, the intent was only to provide costs for completing the identified projects. The limitations of the VAIP Process's facility master plans reduce their utility for VA's planning officials: some local officials said that they do not use VAIP results, and planning officials from five of the seven medical facilities in our review told us that they already contract for their own facility master plans, separate from VAIP. According to federal standards for internal control, agencies should take steps to identify, analyze, and respond to risks related to achieving the defined objectives, an approach that in this case could reduce risks to the VAIP facility master plans' success.
Although officials who oversee the VAIP Process told us that the VAIP-produced master plans uniquely incorporate the Health Care Service Delivery Plan's recommendations from the first step of the process, and would therefore be different from the facilities' master-planning efforts, the potential for duplication exists as separate entities could be undergoing strategic planning for the same facility. The magnitude of these limitations and the potential for planning duplication raise questions about the need for and utility of the VAIP facility master plans as they are currently being developed. Local Approaches to Stakeholder Involvement Vary due to a Lack of VA Guidance VA Has Not Consistently Integrated Stakeholders into Facility Alignment Decisions VA does not always include stakeholders in facility alignment decisions that affect veterans' health care. VA may align its facilities to meet veterans' needs by expanding or consolidating facilities or services. Stakeholders—including veterans; local, state, and federal officials; VSOs; historic preservation groups; VA staff; and Congress—often view changes as working against their interests or those of their constituents when services are eliminated or shifted from one location to another. We have previously identified best practices for stakeholder involvement in facility consolidation actions and recommended that agencies identify relevant stakeholders and develop a two-way communication strategy that begins well in advance of any facility changes, addresses concerns, and uses data to convey the rationale and overarching benefits behind decisions. Failure to effectively engage with stakeholders in these ways can undermine or derail facility alignment. These best practices suggest that VA leadership should engage both external stakeholders, such as veterans groups and local politicians, and internal stakeholders, such as VA employees.
However, we found that two-way communication did not always occur when VA engaged stakeholders. External stakeholders: VA often takes steps to involve external stakeholders, but those efforts often fall short of the best practice of developing a two-way communication strategy. VA requires VISN leadership to hold quarterly town hall meetings with external stakeholders to promote ongoing communications. Planning officials from each of the five VISNs and seven medical facilities in our review told us that they meet regularly with external stakeholder groups, usually through quarterly town hall meetings or roundtables. However, in speaking with external stakeholders, we found that, in large part, these meetings were for VA to communicate information, not necessarily to involve stakeholders in the decision-making process. For example, a local VA official said that the monthly stakeholder meetings were primarily a mechanism for VA officials to announce projects after decisions were made. Officials from two local veterans' organizations agreed with this characterization, and representatives from one stopped attending the meetings as a result. In one of these locations, the breakdown in two-way communication resulted in picketing when veterans' organizations opposed the closure of a facility. We also found that when stakeholders were not engaged in a manner consistent with best practices, VA's facility alignment efforts were challenged by external stakeholders. For example, one area that has a declining number of veteran enrollees also has three medical facilities within 25 miles of each other. According to the CARES report, the veteran population and enrollment in this area did not justify multiple inpatient facilities. Based on the CARES recommendation, VA considered a consolidation of services.
Officials from a local veterans group told us that due to the one-way nature of communication with VA officials, they did not fully trust that VA would follow through on plans to replace the services following the consolidation and feared this VA property would be sold or disposed of and not replaced with a new VA facility. This alignment proposal prompted members of Congress and of the city council and VSOs to conduct a campaign that resulted in all three facilities remaining operational 13 years after the CARES report was issued. We found that local facility alignment efforts in which VA officials better followed best practices—building transparency by providing data-driven information and utilizing two-way communication strategies—with external stakeholders were more successful. For example, planning officials from one medical facility—which successfully implemented a CARES recommendation to consolidate inpatient beds in a neighboring facility—told us that they communicated with external stakeholder groups as far in advance as they could and presented data to support any proposed change. Planning officials from another medical facility were able to close an underutilized inpatient wing and a leased CBOC that had experienced decreased utilization and increased costs, and to relocate a domiciliary from one campus to another. During this process, facility officials developed a communication plan and held meetings with external stakeholders to present their data and explain the reasoning behind the change. Internal stakeholders: Best practices also include engaging internal stakeholders to build consensus for facility alignment actions. Facility alignment can mean job loss, relocation, or changes in the way employees perform their duties. VA officials told us that employees have sometimes challenged the facility alignment process and, in some instances where these best practices were not incorporated, affected the outcomes.
Specifically, effective communication with internal stakeholders can foster trust and an understanding of the planned changes, potentially defusing opposition while strengthening commitment to the effort. For example, as part of a consolidation and closure at one medical facility, planning officials addressed concerns, as well as presented data-driven information that highlighted the benefits and rationale to employees. Facility officials developed a communication plan and held a meeting with employees to present their data and explain the reasoning behind the change. In this meeting, they also addressed employee concerns by reassuring them that no one was going to lose their job. VA Lacks Guidance That Incorporates the Best Practice of Fully Engaging Stakeholders and Does Not Evaluate Communication Efforts VA does not provide officials at VISNs and medical facilities with guidance that incorporates best practices on fully engaging both internal and external stakeholders about facility alignment decisions, or evaluate the effectiveness of local stakeholder engagement efforts. VA provides guidance on communicating changes to stakeholders, but this guidance does not conform to best practices in that it does not provide details about how and when to communicate. Without official guidance VA cannot be assured that the VISNs and medical facilities are consistently applying best practices that integrate stakeholders into the decision-making process in a way that better ensures the success of alignment efforts. Further, existing VA guidance does not instruct VISNs and facilities to involve stakeholders throughout the decision-making process. Some of the guidance cites required notification procedures, but does not address general best-practice strategies for engaging and building consensus with stakeholders. For example, in April 2016, VA provided guidance to VISNs regarding notification procedures for any changes in clinical services. 
This memorandum includes direction to the VISN for a communication plan that includes a congressional notification, patient notification letters, talking points, and a press release 30 days prior to opening a new facility. However, VA's guidance lacks specific directions on timelines, data, and the extent to which external stakeholders should be a part of the decision-making process. A VA official told us that they do not have such guidance because it is implicitly understood that local officials should engage stakeholders. However, as we described earlier in this report, this outcome is not always occurring, due in part to this lack of specificity in the guidance. We found variation both in the ways local officials engaged external and internal stakeholders in facility alignment efforts and in the results of those efforts. In addition, VA officials stated that they do not monitor and evaluate their communication methods for best practices or for the methods' effectiveness in reaching their intended audiences. This runs counter to federal standards for internal control, which note that agencies should monitor and evaluate their activities. We observed variation in the involvement of stakeholders and the impact on facility alignment outcomes. As noted earlier, in some cases, we observed one-way communication that resulted in adversarial relationships that reduced VA's ability to better align facilities to the needs of the veteran population. In other areas, such as with the medical facility that was able to close an underutilized inpatient wing, close a leased CBOC, and relocate a domiciliary, two-way communication with stakeholders resulted in more productive relationships and effective alignment efforts. Evaluating the effectiveness of stakeholder outreach efforts would help VA officials identify and internalize lessons for future activities. However, VA lacks a process for evaluating its stakeholder outreach efforts.
Without guidance that adheres to best practices for fully integrating stakeholders, and without monitoring and evaluation of this process, VA increases the risk that stakeholders will not be appropriately involved in its facility alignment efforts, and it cannot determine the effectiveness of those efforts or learn lessons from previous ones. Conclusions The shifts in veteran demographics and demand for health services combined with antiquated facilities create an imperative for VA to better align its medical facilities and services. However, some of the recommendations from VA's last major alignment effort—CARES—were not fully implemented, and its current efforts to facilitate realignment—the SCIP and VAIP processes—are hindered by key limitations. For example, SCIP is unable to ensure that medical facilities are not adding projects in out years to address gaps that they do not intend to implement. In addition, relying on project narratives for one-third of the project score can introduce subjectivity into the process, a process that was intended to ensure that all projects are reviewed equitably and consistently. If these deficiencies remain, VA's SCIP process for prioritizing capital projects will continue to limit the agency's ability to effectively facilitate decisions to correctly align its medical facilities with veterans' needs and thereby deliver the best and most cost-effective health care to the veteran population. The VAIP facility master plans also have significant limitations as a planning aid. If the plans continue to omit analyses of the benefits of using medical capacity in the community, their recommendations could result in spending more than necessary to provide services. The level of potential overspending on VA medical centers will become an even more significant issue if the number of enrolled veterans begins to decline after the year 2024 as predicted.
Finally, because VA has not consistently followed best practices for effectively engaging stakeholders, stakeholders may not fully support alignment efforts—a situation that poses a risk to success. Also, VA does not have a process for monitoring and evaluating its communication methods, which runs counter to federal standards for internal control. Without this, VA does not know if the local officials are meaningfully or effectively engaging internal and external stakeholders in the capital alignment decisions that affect them. Recommendations for Executive Action and Our Evaluation To improve VA's ability to plan for and facilitate the alignment of its facilities with veteran needs, we recommend that the Secretary of Veterans Affairs direct the appropriate offices and administrations to take the following four actions: 1. Address identified limitations to the SCIP process, including limitations to scoring and approval, and access to information. 2. Assess the value of VAIP's facility master plans as a facility-planning tool. Based on conclusions from the review, either 1) discontinue the development of VAIP's facility master plans or 2) address the limitations of VAIP's facility master plans. 3. Develop and distribute guidance for VISNs and facilities using best practices on how to effectively communicate with stakeholders about alignment changes. 4. Develop and implement a mechanism to evaluate VISN and facility communication efforts with stakeholders to ensure that these communication efforts are working as intended and align with guidance and best practices. Agency Comments We provided a draft of this report to VA for comment. Its written comments are reproduced in appendix II. VA partially concurred with our first recommendation. Specifically, VA said that it generally concurred with the recommendation to address limitations in the SCIP process, but limited its concurrence to addressing the limitations that are within VA's control.
We edited our report to indicate that some parts of the process are outside VA's control and focused our findings on those elements over which VA does have control. VA fully concurred with the other three recommendations and outlined a plan to implement them. VA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact David J. Wise at (202) 512-2834 or [email protected], or Debra A. Draper at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Buildings Operated by the Veterans Health Administration (VHA) and Designated for Disposal, by State Table 1 lists the 168 VHA-operated buildings over 10,000 square feet in size that, according to VA, had been designated for potential disposal at the end of fiscal year 2014, organized by state and medical center. Appendix II: Comments from the Department of Veterans Affairs GAO's Comments 1. VA stated that many of the items noted in the draft report mischaracterized the intended outcomes of the SCIP process, such as the fact that SCIP was not designed as a mechanism to force realignment. We did not intend to characterize the SCIP process as a mechanism to force realignment of VA facilities and have clarified our report accordingly. 2.
VA stated that many of the items we found that needed to be addressed through the SCIP process were outside of the SCIP program and were items that VA had limited ability to influence. We agree and have edited the report to reflect that some elements of the process are outside VA's control and to refocus on the aspects that VA does control. Please see comments 3, 4, and 5 below for our responses. Regarding how we characterized the intention of the SCIP process, see comment 1. 3. In regard to our finding that the SCIP process does not have a mechanism in place for ensuring that future-year projects are implemented, VA stated that this was not an intentional limitation resulting from the SCIP process, but was instead an outcome stemming from VA not having enough capital to meet all of the needs identified in the SCIP plan. We agree and clarified the report. However, in our review, planners at two medical facilities told us that they enter projects for future years that they have little or no intention of actually pursuing—which is different from not having enough capital to pursue the project. As such, we continue to believe that SCIP has a limitation in that it does not have a mechanism in place to prevent facility planners from gaining credit for closing service gaps by proposing capital projects that they have no intention of ever implementing. 4. In regard to our finding that SCIP's development and approval timeframes can hinder capital planning, VA stated that it agrees that the planning process is lengthy, but added that the timeliness of the SCIP process is driven by the government-wide budget process, which is outside of VA's control. We agree that there are elements of the timeframes that are outside of VA's control and clarified our report to address this situation and focus on what is within VA's control.
Specifically, over the last 6 fiscal-year SCIP submissions, VA has chosen to wait about 6 to 10 months to report the results of the SCIP scoring process to the medical facilities. 5. In regard to our finding that the SCIP process's scoring and approval process relies on facility planning officials' ability to write an accompanying narrative that addresses more of the priorities, VA stated that it disagreed that the scoring was highly based on narrative and/or subjective information. We clarified our report to note that the narrative portion represents about one-third of a project's overall score, but as VA states, it relies on the planners' ability to articulate the business need for the project. GAO continues to believe that relying on planners' abilities to articulate the business need for a project introduces subjectivity into the scoring process. Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts David J. Wise at (202) 512-2834 or [email protected], or Debra A. Draper at (202) 512-7114 or [email protected]. Staff Acknowledgments In addition to the contacts named above, Keith Cunningham, Assistant Director; Jeff Mayhew, Analyst-in-Charge; Colleen Taylor; Laurie F. Thurber; and Michelle Weathers made key contributions to this report. Also contributing were Jacquelyn Hamilton, John Mingus, Sara Ann Moessbauer, Malika Rice, and Crystal Wesco.

VA operates one of the largest health care systems in the United States, with 168 VA medical centers and more than 1,000 outpatient facilities. Many of these facilities are underutilized and outdated. A previous effort aimed at modernizing and better aligning facilities was not fully implemented. GAO was asked to review the current alignment of VA medical facilities with veterans' needs.
This report examines: (1) the factors that affect VA facility alignment with veterans' needs, (2) the extent to which VA's capital-planning process facilitates the alignment of facilities with the veteran population, and (3) the extent to which VA has followed best practices by integrating stakeholders in facility alignment decisions. GAO reviewed VA's facility-planning documents and data, and interviewed VA officials in headquarters and at seven medical facilities selected for their geographic location, population, and past alignment efforts. GAO also evaluated VA's actions against federal standards for internal control and best practices for capital planning. Geographic shifts in the veteran population, changes in health care delivery, and an aging infrastructure affect the Department of Veterans Affairs' (VA) efforts to align its services and real property portfolio to meet the needs of veterans. For example, a shift over time from inpatient to outpatient care will likely result in underutilized space once used for inpatient care. In such instances, it is often difficult and costly for VA to modernize, renovate, and retrofit existing facilities given the challenges associated with these older facilities. VA relies on the Strategic Capital Investment Planning (SCIP) process to plan and prioritize capital projects, but SCIP's limitations—including subjective narratives, long time frames, and restricted access to information—undermine VA's ability to achieve its goals. Although VA acknowledges many of these limitations, it has taken little action in response. Federal standards for internal control state that agencies should evaluate and determine appropriate corrective action for identified limitations on a timely basis. Without doing so, VA lacks reasonable assurance that its facility alignment reflects veterans' needs.
A separate planning process—VA Integrated Planning (VAIP)—was designed to supplement SCIP and to provide planners with a more strategic vision for their medical facilities through the creation of facility master plans. However, GAO found limitations with this ongoing effort, which VA estimated to cost $108 million. Specifically, the facility master plans assume that all future growth in services will be provided directly through VA facilities without considering alternatives, such as purchasing care from the community. However, VA's use of care in the community has increased, with $10.1 billion obligated in fiscal year 2015. Federal capital-acquisition guidance identifies inefficient spending as a risk of not considering other options for delivering services. This consideration is particularly relevant as VA's data project that the number of enrolled veterans will begin to fall after 2024. Officials who oversee the VAIP process said that they were awaiting further analyses required by recently released VA guidance on the proportion of care and types of services to obtain from the community. As a result of this and other limitations, some local VA officials said that they make little use of the VAIP facility master plans and contract for their own facility master plans outside the VAIP process. Although VA instructs local VA officials to communicate with stakeholders, its guidance is not detailed enough to conform to best practices. VA has not consistently followed best practices for effectively engaging stakeholders in facility consolidation efforts—such as utilizing two-way communication early in the process and using data to demonstrate the rationale for facility alignment decisions. GAO found that when stakeholders were not engaged in a manner consistent with best practices, VA's efforts to align facilities with veterans' needs were challenged.
Also, VA officials said that they do not monitor or evaluate these communication efforts and, therefore, have little assurance that the methods used effectively disseminate information to stakeholders. This approach runs counter to federal standards for internal control, which instruct agencies to monitor and evaluate activities, such as communication methods.
Background Oversight of nursing homes is a shared federal and state responsibility. CMS is the federal agency that manages Medicare and Medicaid and oversees compliance with federal nursing home quality standards. On the basis of statutory requirements, CMS defines standards that nursing homes must meet to participate in the Medicare and Medicaid programs and contracts with states to certify that homes meet these standards through annual inspections and complaint investigations. The “annual” inspection, called a survey, which must be conducted on average every 12 months and no less than every 15 months at each home, entails a team of state surveyors spending several days in the home to determine whether care and services meet the assessed needs of the residents. CMS establishes specific protocols, or investigative procedures, for state surveyors to use in conducting these comprehensive surveys. In contrast, complaint investigations, also conducted by state surveyors within certain federal guidelines and time frames, typically target a single area in response to a complaint filed against a home by a resident, the resident’s family or friends, or nursing home employees. Quality-of-care problems identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (the number of residents potentially or actually affected) and their severity (potential for or occurrence of harm to residents). Ensuring that documented deficiencies are corrected is likewise a shared responsibility. CMS is responsible for enforcement actions involving homes with Medicare or dual Medicare and Medicaid certification—about 86 percent of all homes. States are responsible for enforcing standards in homes with Medicaid-only certification—about 14 percent of the total. 
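The 12-category scope-and-severity classification described above can be pictured as a grid of four severity levels crossed with three scope levels. The following is a minimal illustrative sketch, not CMS software: the function and variable names are our own, though the letter labels A through L follow CMS's published scope/severity grid for nursing home deficiencies.

```python
# Illustrative sketch of the CMS scope/severity grid for nursing home
# deficiencies: 4 severity levels x 3 scope levels = 12 categories.
# Names here are hypothetical; only the A-L letter layout reflects
# CMS's published grid.

SEVERITY_LEVELS = [
    "potential for minimal harm",            # level 1
    "potential for more than minimal harm",  # level 2
    "actual harm",                           # level 3
    "immediate jeopardy",                    # level 4
]
SCOPE_LEVELS = ["isolated", "pattern", "widespread"]


def classify_deficiency(severity: str, scope: str) -> str:
    """Return the letter (A-L) for a deficiency's scope/severity cell."""
    row = SEVERITY_LEVELS.index(severity)  # 0..3, least to most severe
    col = SCOPE_LEVELS.index(scope)        # 0..2, narrowest to widest scope
    return "ABCDEFGHIJKL"[row * 3 + col]


# A widespread actual-harm deficiency falls in cell "I";
# isolated deficiencies with potential for minimal harm fall in cell "A".
print(classify_deficiency("actual harm", "widespread"))  # -> I
```

Under this layout, the "actual harm or immediate jeopardy" deficiencies discussed in this report correspond to the bottom two rows of the grid (cells G through L).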
Enforcement actions can involve, among other things, requiring corrective action plans, imposing monetary fines, denying the home Medicare and Medicaid payments for new admissions until corrections are in place, and, ultimately, terminating the home from participation in these programs. Sanctions are imposed by CMS on the basis of state referrals. States may also use their state licensure authority to impose state sanctions. CMS is also responsible for overseeing each state survey agency’s performance in ensuring quality of care in its nursing homes. One of its primary oversight tools is the federal monitoring survey, which is required annually for at least 5 percent of all Medicare- and Medicaid-certified nursing homes. Federal monitoring surveys can be either comparative or observational. A comparative survey involves a federal survey team conducting a complete, independent survey of a home within 2 months of the completion of a state’s survey in order to compare and contrast the findings. In an observational survey, one or more federal surveyors accompany a state survey team to a nursing home to observe the team’s performance. Roughly 85 percent of federal surveys are observational. Based on prior work, we have concluded that the comparative survey is the more effective of the two federal monitoring surveys for assessing state agencies’ abilities to identify serious deficiencies in nursing homes and have recommended that more priority be given to them. A new federal oversight tool, state performance reviews, implemented in October 2000, measures state survey agency performance against seven standards, including statutory requirements regarding survey frequency, requirements for documenting deficiencies, and timeliness of complaint investigations. These reviews replaced state self-reporting of their compliance with federal requirements. 
CMS also maintains a central database—the On-Line Survey, Certification, and Reporting (OSCAR) system—that compiles, among other information, the results of every state survey conducted at Medicare- and Medicaid-certified facilities nationwide. Magnitude of Problems Remains Cause for Concern, Even Though Fewer Serious Nursing Home Quality Problems Were Reported State survey data indicate that the proportion of nursing homes with serious quality problems remains unacceptably high, despite a decline in such reported problems since mid-2000. For an 18-month period ending in January 2002, 20 percent of nursing homes (about 3,500) were cited for deficiencies involving actual harm or immediate jeopardy to residents. This share is down from 29 percent (about 5,000 homes) for the previous period. (Appendix I provides trend data on the percentage of nursing homes cited for serious deficiencies for all 50 states and the District of Columbia.) Despite this decline, there is still considerable variation in the proportion of homes cited for such serious deficiencies, ranging from about 7 percent in Wisconsin to about 50 percent in Connecticut. Federal comparative surveys completed during a recent 21-month period found actual harm or higher-level deficiencies in about 10 percent fewer homes where state surveyors found no such deficiencies, compared to an earlier period. Fewer discrepancies between federal and state surveys suggest that state surveyors’ performance in documenting serious deficiencies has improved. However, the magnitude of the state surveyors’ understatement of quality problems remains a serious issue. From June 2000 through February 2002, federal surveyors conducting comparative surveys found examples of actual harm deficiencies in about one fifth of homes that states had judged to be deficiency free. 
For example, federal surveyors found that a home had failed to prevent pressure sores, failed to consistently monitor pressure sores when they did develop, and failed to notify the physician promptly so that proper treatment could be started. These federal surveyors noted that inadequate monitoring of pressure sores was a problem during the state’s survey that should have been found and cited. CMS plans to hire a contractor to perform approximately 170 additional comparative surveys each year, bringing the annual total to 330, including those conducted by CMS surveyors. We continue to believe that comparative surveys are the most effective technique for assessing state agencies’ ability to identify serious deficiencies in nursing homes because they constitute an independent evaluation of the state survey. Beyond the continuing high prevalence of actual harm or immediate jeopardy deficiencies, we found a disturbing understatement of actual harm or higher deficiencies in a sample of surveys that were conducted since July 2000 at homes with a history of harming residents but whose current surveys indicated no actual harm deficiencies. Overall, 39 percent of 76 surveys we reviewed had documented problems that should have been classified as actual harm: serious, avoidable pressure sores; severe weight loss; and multiple falls resulting in broken bones and other injuries. We were unable to assess whether the scope and severity of other deficiencies in our sample of surveys were also understated because of weaknesses in how those deficiencies were documented. Weaknesses Persist in State Survey, Complaint, and Enforcement Activities Despite increased attention in recent years, widespread weaknesses persist in state survey, complaint investigation, and enforcement activities. In our view, this reflects not necessarily a lack of effort but rather the magnitude of the challenge in effecting important and consistent systemic change across all states. 
We identified several factors that contributed to these weaknesses and the understatement of survey deficiencies, including confusion over the definition of actual harm. Moreover, many state complaint investigation systems still have timeliness problems and some states did not comply with HCFA’s policy to refer to the agency for immediate sanction those nursing homes that showed a pattern of harming residents, resulting in hundreds of nursing homes not appropriately referred for action. Confusion about Definition of Harm and Other Factors Contribute to Underreporting of Care Problems We identified several factors at the state level that contributed to the understatement of serious quality-of-care problems. State survey agency officials expressed confusion about the definitions of “actual harm” and “immediate jeopardy,” which may contribute to the variability in identifying deficiencies among states. Several states’ comments on our draft report underscored how the lack of clear and consistent CMS guidance on these definitions may have contributed to such confusion. For example, supplementary guidance provided to one state by its CMS regional office on how to assess the severity of a newly developing pressure sore was inconsistent with CMS’s definition of actual harm. Other factors that have contributed to the understatement of actual harm include lack of adequate state supervisory review of survey findings, large numbers of inexperienced surveyors, and continued survey predictability. While most of the 16 states we contacted had processes for supervisory review of deficiencies cited at the actual harm level and higher, half did not have similar processes to help ensure that the scope and severity of less serious deficiencies were not understated. 
According to state officials, the large number of inexperienced surveyors, which ranged from 25 percent to 70 percent in 27 states and the District of Columbia and is due to high attrition and hiring limitations, has also had a negative impact on the quality of surveys. In addition, our analysis of OSCAR data indicated that the timing of about one-third of the most recent state surveys nationwide remained predictable—a slight reduction from homes’ prior surveys, about 38 percent of which were predictable. Predictable surveys can allow quality-of-care problems to go undetected because homes, if they choose to do so, may conceal certain problems such as understaffing. Many State Complaint Investigation Systems Still Have Timeliness Problems and Other Weaknesses CMS’s 2001 review of a sample of complaints in all states demonstrated that many states were not complying with CMS complaint investigation timeliness requirements. Specifically, 12 states were not investigating all immediate jeopardy complaints within the required 2 workdays, and 42 states were not complying with the new requirement established in 1999 to investigate actual harm complaints within 10 days. Some states attributed the timeliness problem to an increase in the number of complaints and to insufficient staff. CMS also found that the triaging of complaints to determine how quickly to investigate each complaint was inadequate in some states. A CMS-sponsored study of the states’ complaint practices also raised concerns about state approaches to accepting and investigating complaints. For example, 15 states did not provide toll-free hotlines to facilitate the filing of complaints and the majority of states lacked adequate systems for managing complaints. To address the latter problem, CMS planned to implement a new complaint tracking system nationwide in October 2002, but as of today, the system is still being tested and its implementation date is uncertain. 
Substantial Number of Nursing Homes Were Not Referred to CMS for Immediate Sanctions State survey agencies failed to refer to CMS for immediate sanction, as required by CMS policy, a significant number of nursing homes found to have a pattern of harming residents, significantly undermining the policy’s intended deterrent effect. Our earlier work found that nursing homes tended to “yo-yo” in and out of compliance, in part because HCFA rarely imposed sanctions on homes with a pattern of deficiencies that harmed residents. In response, the agency required that, as of January 2000, homes found to have harmed residents on successive standard surveys be referred to it for immediate sanction. Although most states failed to refer at least some cases that met the policy’s criteria, four states accounted for over half of the 700 nursing homes not referred. One of these states did not fully implement the new CMS policy until mid-2002 and another state implemented its own version of the policy through September 2002, resulting in relatively few referrals. In most other states, the failure to refer cases resulted from a misunderstanding of the policy by both some states and CMS regional offices and, in some states, from the lack of an adequate system for tracking a home’s survey history to determine if it met the policy’s criteria. CMS Oversight of State Survey Activities Requires Further Strengthening While CMS has instituted a more systematic oversight process of state survey and complaint activities by initiating annual state performance reviews, CMS officials acknowledged that the effectiveness of the reviews could be improved. Major areas needing improvement as a result of the fiscal year 2001 review include (1) distinguishing between minor and major problems, (2) evaluating how well states document deficiencies, and (3) ensuring consistency in how regions conduct reviews. 
Data limitations, particularly involving complaints, and inconsistent use of periodic monitoring reports also hampered the effectiveness of state performance reviews. For subsequent reviews, CMS plans to more centrally manage the process to improve consistency and to help ensure that future reviews distinguish serious from minor problems. Implementation has been significantly delayed for three federal initiatives that are critical to reducing the subjectivity in the state survey process for identifying deficiencies and determining the seriousness of complaints. These delayed initiatives were intended to strengthen the methodology for conducting surveys, improve surveyor guidance for determining the scope and severity of deficiencies, and increase standardization in state complaint investigation processes. Strengthening the survey methodology. Because surveyors often missed significant care problems due to weaknesses in the survey process, HCFA contracted in 1998 for the development of a revised survey methodology. The agency’s contractor has proposed a two-phase survey process. In the first phase, surveyors would initially identify potential care problems using data generated off-site prior to the start of the survey and additional, standardized information collected on-site. During the second phase, surveyors would conduct an onsite investigation to confirm and document the care deficiencies initially identified. Compared to the current survey process, the revised methodology under development is designed to more systematically target potential problems at a home and give surveyors new tools to more adequately document care outcomes and conduct onsite investigations. In April 2003, a CMS official told us that the agency lacked adequate funding to complete testing and implementation of the revised methodology, which has been under development for almost 5 years. Through September 2003, CMS will have committed about $4.7 million to this effort. 
While CMS did not address the lack of adequate funding in its comments on our draft report, a CMS official subsequently told us that about $508,000 has now been slated for additional field testing. This amount, however, has not yet been approved. Not funding the additional field testing could jeopardize the entire initiative, in which a substantial investment has already been made. Developing clearer guidance for surveyors. Recognizing inconsistencies in how the scope and severity of deficiencies are cited across states, in October 2000, HCFA began developing more structured guidance for surveyors, including survey investigative protocols for assessing specific deficiencies. The intent of this initiative is to enable surveyors to better (1) identify specific deficiencies, (2) investigate whether a deficiency is the result of poor care, and (3) document the level of harm resulting from a home’s identified deficient care practices. Delays have occurred, and the first such guidance to be completed—pressure sores—has not yet been released. Developing additional state guidance for investigating complaints. Despite initiation of a complaint improvement project in 1999, CMS has not yet developed detailed guidance for states to help improve their complaint investigation systems. CMS received its contractor’s report in June 2002, and indicated agreement with the report’s conclusion that reforming the complaint system is urgently needed to achieve a more standardized, consistent, and effective process. CMS told us that it plans to issue new guidance to the states in late fiscal year 2003—about 4 years after the complaint improvement project initiative was launched. Conclusions As we reported in September 2000, continued federal and state attention is required to ensure necessary improvements in the quality of care provided to the nation’s vulnerable nursing home residents. 
The proportion of homes reported to have harmed residents is still unacceptably high, despite the reported decline in the incidence of such problems. This decline is consistent with the concerted congressional, federal, and state attention focused on addressing quality of care problems. Despite these efforts, however, CMS needs to continue its efforts to better ensure consistent compliance with federal quality requirements. Several areas that require CMS’s ongoing attention include: (1) developing more structured guidance for surveyors to address inconsistencies in how the scope and severity of deficiencies are cited across states, (2) finalizing and implementing the survey methodology redesign intended to make the survey process more systematic, (3) implementing a nationwide complaint tracking system and providing states additional complaint investigation guidance, and (4) refining the newly established state agency performance standard reviews to ensure that states are held accountable for ensuring that nursing homes comply with federal nursing home quality standards. Some of these efforts have been underway for several years, with CMS consistently extending their estimated completion and implementation dates. The need to come to closure on these initiatives is clear. The report on which this testimony is based contained several new recommendations for needed CMS actions on these issues; CMS generally concurred with our recommendations. We believe that effective and timely implementation of planned improvements in each of these areas is critical to ensuring better quality care for the nation’s 1.7 million vulnerable nursing home residents. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. Contact and Staff Acknowledgments For further information about this testimony, please contact Kathryn G. Allen at (202) 512-7118 or Walter Ochinko at (202) 512-7157. Jack Brennan, Patricia A. 
Jones, and Dean Mohs also made key contributions to this statement.

Appendix I: Trends in the Proportion of Nursing Homes Cited for Actual Harm or Immediate Jeopardy Deficiencies, 1999-2002

Related GAO Products

Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002.
Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002.
Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002.
Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002.
VA Long-Term Care: Oversight of Community Nursing Homes Needs Strengthening. GAO-01-768. Washington, D.C.: July 27, 2001.
Nursing Homes: Success of Quality Initiatives Requires Sustained Federal and State Commitment. GAO/T-HEHS-00-209. Washington, D.C.: September 28, 2000.
Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000.
Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999.
Nursing Homes: HCFA Should Strengthen Its Oversight of State Agencies to Better Ensure Quality of Care. GAO/T-HEHS-00-27. Washington, D.C.: November 4, 1999.
Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999.
Nursing Homes: HCFA Initiatives to Improve Care Are Under Way but Will Require Continued Commitment. GAO/T-HEHS-99-155. Washington, D.C.: June 30, 1999.
Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999.
Nursing Homes: Complaint Investigation Processes in Maryland. GAO/T-HEHS-99-146. Washington, D.C.: June 15, 1999.
Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999.
Nursing Homes: Stronger Complaint and Enforcement Practices Needed to Better Ensure Adequate Care. GAO/T-HEHS-99-89. Washington, D.C.: March 22, 1999.
Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999.
California Nursing Homes: Federal and State Oversight Inadequate to Protect Residents in Homes with Serious Care Problems. GAO/T-HEHS-98-219. Washington, D.C.: July 28, 1998.
California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since 1998, the Congress and Administration have focused considerable attention on improving the quality of care in the nation's nursing homes, which provide care for about 1.7 million elderly and disabled residents in about 17,000 homes. GAO has earlier reported on serious weaknesses in processes for conducting routine state inspections (surveys) of nursing homes and complaint investigations, ensuring that homes with identified deficiencies correct the problems without recurrence, and providing consistent federal oversight of state survey activities to ensure that nursing homes comply with federal quality standards. 
GAO was asked to update its work on these issues and to testify on its findings, as reported in Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight, GAO-03-561 (July 15, 2003). In commenting on this report, the Centers for Medicare & Medicaid Services (CMS) generally concurred with the recommendations to address survey and oversight weaknesses. In this testimony, GAO addresses (1) the prevalence of serious nursing home quality problems nationwide, (2) factors contributing to continuing weaknesses in states' survey, complaint, and enforcement activities, and (3) the status of key federal efforts to oversee state survey agency performance and improve quality. The magnitude of documented serious deficiencies that harmed nursing home residents remains unacceptably high, despite some decline. For the most recent period reviewed, one in five nursing homes nationwide (about 3,500 homes) had serious deficiencies that caused residents actual harm or placed them in immediate jeopardy. Moreover, GAO found significant understatement of care problems that should have been classified as actual harm or higher--serious avoidable pressure sores, severe weight loss, and multiple falls resulting in broken bones and other injuries--for a sample of homes with a history of harming residents. Several factors contributed to such understatement, including confusion about the definition of harm; inadequate state review of surveys to identify potential understatement; large numbers of inexperienced state surveyors; and a continuing problem with survey timing being predictable to nursing homes. States continue to have difficulty identifying and responding in a timely fashion to public complaints alleging actual harm--delays state officials attributed to an increase in the volume of complaints and to insufficient staff. 
Although federal enforcement policy was strengthened in January 2000 by requiring state survey agencies to refer for immediate sanction homes that had a pattern of harming residents, many states did not fully comply with this new requirement, significantly undermining the policy's intended deterrent effect. While CMS has increased its oversight of state survey and complaint investigation activities, continued attention is required to help ensure compliance with federal requirements. In October 2000, the agency implemented new annual performance reviews to measure state performance in seven areas, including the timeliness of survey and complaint investigations and the proper documentation of survey findings. The first round of results, however, did not produce information enabling the agency to identify and initiate needed improvements. For example, some regional office summary reports provided too little information to determine if a state did not meet a particular standard by a wide or a narrow margin—information that could help CMS to judge the seriousness of problems identified and target remedial interventions. Rather than relying on its regional offices, CMS plans to more centrally manage future state performance reviews to improve consistency and to help ensure that the results of those reviews could be used to more readily identify serious problems. Finally, implementation has been significantly delayed for three federal initiatives that are critical to reducing the variation evident in the state survey process in categorizing the seriousness of deficiencies and investigating complaints. These delayed initiatives were intended to strengthen the methodology for conducting surveys, improve surveyor guidance for determining the scope and severity of deficiencies, and increase standardization in state complaint investigation processes.
Background In 1990, we designated the Medicare program, which is administered by the Centers for Medicare and Medicaid Services (CMS) in HHS, as at high risk for improper payments because of its sheer size and vast range of participants—including about 40 million beneficiaries and nearly 1 million physicians, hospitals, and other providers. The program remains at high risk today. In fiscal year 2001, Medicare outlays totaled over $219 billion, and the HHS/OIG reported that $12.1 billion in fiscal year 2001 Medicare fee-for-service payments did not comply with Medicare laws and regulations. The Congress enacted HIPAA, in part, to respond to the problem of health care fraud and abuse. HIPAA consolidated and strengthened ongoing efforts to combat fraud and abuse in health programs and provided new criminal enforcement tools as well as expanded resources for fighting health care fraud, including $158 million in fiscal year 2000 and $182 million in fiscal year 2001. Under the joint direction of the Attorney General and the Secretary of HHS (acting through the HHS/OIG), the HCFAC program goals are as follows: coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; conduct investigations, audits, and other studies of delivery and payment for health care for the United States; facilitate the enforcement of the civil, criminal, and administrative statutes applicable to health care; provide guidance to the health care industry, including the issuance of advisory opinions, safe harbor notices, and special fraud alerts; and establish a national database of adverse actions against health care providers. Funds for the HCFAC program are appropriated from the trust fund to an expenditure account, referred to as the Health Care Fraud and Abuse Control Account, maintained within the trust fund. 
The Attorney General and the Secretary of HHS jointly certify that the funds transferred to the control account are necessary to finance health care anti–fraud and abuse activities, subject to limits for each fiscal year as specified by HIPAA. HIPAA authorizes annual minimum and maximum amounts earmarked for HHS/OIG activities for the Medicare and Medicaid programs. For example, of the $182 million available in fiscal year 2001, a minimum of $120 million and a maximum of $130 million were earmarked for the HHS/OIG. By earmarking funds specifically for the HHS/OIG, the Congress ensured continued efforts by the HHS/OIG to detect and prevent fraud and abuse in the Medicare and Medicaid programs. CMS performs the accounting for the control account, from which all HCFAC expenditures are made. CMS sets up allotments in its accounting system for each of the HHS and DOJ entities receiving HCFAC funds. The HHS and DOJ entities account for their HCFAC obligations and expenditures in their respective accounting systems and report them to CMS monthly. CMS then records the obligations and expenditures against the appropriate allotments in its accounting system. At DOJ, payroll constituted 78 percent of its total expenditures in fiscal year 2000 and 69 percent in fiscal year 2001. Within DOJ, the Executive Office for the United States Attorneys (EOUSA) receives the largest allotment of HCFAC funds. In EOUSA, each district is allocated a predetermined number of full-time equivalent (FTE) positions based on the historical workload of the district. Specific personnel who ordinarily work on health care activities, such as the Health Care Fraud Coordinator, are designated within the DOJ accounting system to have their payroll costs charged to the HCFAC account. In some districts, one FTE could be shared among several individuals, each contributing a portion of time to HCFAC assignments. 
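The shared-FTE arrangement described above can be illustrated with a small sketch; the staff roles and time fractions below are hypothetical, not figures from the report:

```python
# Hypothetical monthly time fractions charged to the HCFAC account by
# several staff in one district. Together, their fractional charges may
# account for a single shared full-time equivalent (FTE) position.
hcfac_fractions = {"attorney_a": 0.50, "paralegal_b": 0.30, "auditor_c": 0.20}

# Sum the fractions (rounded to avoid floating-point noise) to see how
# much of the district's allocated FTE has been consumed.
fte_used = round(sum(hcfac_fractions.values()), 2)
print(fte_used)  # 1.0 -> one FTE shared across three people
```

A district exceeding its predetermined FTE allocation under such an accounting would be visible as `fte_used` rising above the allotted count.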
EOUSA staff track the portion of time devoted to health care activity and other types of cases and investigations in the Monthly Resource Summary System on a daily or monthly basis. DOJ monitors summary information from the Monthly Resource Summary System to determine how staff members’ time is being used. The HHS/OIG expenditures represented over 96 percent of HHS’s total HCFAC expenditures in fiscal years 2000 and 2001. At HHS/OIG, HCFAC expenditures are allocated based on relative proportions of the HCFAC budget authority and the discretionary funding sources. Table 1 below identifies the relative percentages HHS/OIG used in fiscal years 2000 and 2001. HHS/OIG uses these percentages to compute the amounts of payroll and nonpayroll expenditures to be charged to their two funding sources. HHS/OIG tracks staff time spent on various assignments in separate management information systems (MIS). The information in the MIS is summarized and monitored quarterly to adjust the type of work planned and performed, if necessary, so that the use of the funds is consistent with the funding sources’ intended use. 
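The proportional allocation HHS/OIG applies to its expenditures can be sketched as follows; the funding-source shares and dollar amounts here are illustrative placeholders, not the actual Table 1 percentages:

```python
# Sketch of allocating a pool of expenditures (payroll or nonpayroll)
# across funding sources in proportion to each source's share of the
# combined budget authority, as HHS/OIG does for HCFAC vs. discretionary
# funds. Shares and amounts are hypothetical.

def allocate(total_cost, shares):
    """Split total_cost across funding sources by their relative shares."""
    total_share = sum(shares.values())
    return {source: round(total_cost * share / total_share, 2)
            for source, share in shares.items()}

# Hypothetical relative shares of the two funding sources.
shares = {"HCFAC": 0.80, "discretionary": 0.20}

payroll = allocate(1_000_000, shares)
print(payroll)  # {'HCFAC': 800000.0, 'discretionary': 200000.0}
```

The same split would be applied separately to payroll and nonpayroll pools, and the quarterly MIS review described above provides the check that the resulting charges match each source's intended use.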
HIPAA also requires that amounts equal to the following types of collections be deposited into the trust fund: criminal fines recovered in cases involving a federal health care offense, including collections pursuant to section 1347 of Title 18, United States Code; civil monetary penalties and assessments imposed in health care fraud cases; amounts resulting from the forfeiture of property by reason of a federal health care offense, including collections under section 982(a)(7) of Title 18, United States Code; penalties and damages obtained and otherwise creditable to miscellaneous receipts of the Treasury’s general fund obtained under the False Claims Act (sections 3729 through 3733 of Title 31, United States Code), in cases involving claims related to the provision of health care items and services (other than funds awarded to a relator, for restitution, or otherwise authorized by law); and unconditional gifts and bequests. Criminal fines resulting from health care fraud cases are collected through the Clerks of the Administrative Office of the United States Courts. Criminal fines collections are reported to DOJ’s Financial Litigation Units associated with their districts. Based on cash receipt documentation received from the Clerks, the Financial Litigation Units then post the criminal fines collections to a database. The database generates, at least biannually, a report of the amount of criminal fines collected, which is sent to the Department of the Treasury. Treasury relies on this report to determine the amount to deposit to the trust fund. Civil monetary penalties for federal health care offenses are imposed by CMS regional offices or the HHS/OIG against skilled nursing facilities or long-term care facilities and doctors. CMS collects civil monetary penalty amounts and reports them to the Department of the Treasury for deposit to the trust fund. 
Penalties and multiple damages resulting from health care fraud cases are collected by DOJ’s Civil Division in Washington, D.C., and by Financial Litigation Units in the United States Attorneys’ offices located throughout the country. The Civil Division and United States Attorneys’ offices report collection information to DOJ’s Debt Accounting Operations Group, which reports the amount of penalties and multiple damages to the Department of the Treasury for deposit to the trust fund. HIPAA also allows CMS to accept unconditional gifts and bequests made to the trust fund. Objectives, Scope, and Methodology The objectives of our review were to identify and assess the propriety of amounts for fiscal years 2000 and 2001 reported as (1) deposits to the trust fund, (2) appropriations from the trust fund for HCFAC activities, (3) expenditures at DOJ for HCFAC activities, (4) expenditures at HHS for HCFAC activities, (5) expenditures for non-Medicare anti–fraud and abuse activities, and (6) savings to the trust fund. To identify and assess the propriety of deposits, we reviewed the joint HCFAC reports, interviewed personnel at various HHS and DOJ entities, obtained electronic data and reports from HHS and DOJ for the various types of deposits, and tested selected transactions to determine whether the proper amounts were deposited to the trust fund. To identify and assess the propriety of amounts appropriated from the trust fund, we reviewed the joint HCFAC reports, and reviewed and analyzed documentation to support the allocation and certification of the HCFAC appropriation. To identify and assess the propriety of expenditure amounts at HHS, we interviewed personnel, obtained electronic data and reports supporting nonpayroll transactions, tested selected nonpayroll transactions, reviewed payroll allocation methodologies, and interviewed selected employees to assess the reasonableness of time and attendance charges to the HCFAC appropriation account for payroll expenditures. 
To identify and assess the propriety of expenditure amounts at DOJ, we interviewed personnel, obtained electronic data and reports supporting nonpayroll transactions, tested selected nonpayroll transactions, performed analytical procedures, and interviewed selected employees to assess the reasonableness of time and attendance charges to the HCFAC appropriation account for payroll expenditures. We were unable to identify and assess the propriety of expenditures for non-Medicare anti–fraud activities because HHS/OIG and DOJ do not separately account for or monitor such expenditures. To identify and assess the propriety of savings to the trust fund, as well as any other savings, resulting from expenditures from the trust fund for the HCFAC program, we reviewed the joint reports, interviewed personnel, reviewed recommendations and the resulting cost savings as reported in the HHS/OIG’s fiscal years 2000 and 2001 semiannual reports, and tested selected cost savings. We were unable to directly associate the reported cost savings to HCFAC because HHS and DOJ officials do not track them as such due to the nature of health care anti–fraud and abuse activities. We interviewed and obtained documentation from officials at CMS in Baltimore, Maryland; HHS headquarters—including the Administration on Aging (AOA), the Assistant Secretary for Budget, Technology and Finance (ASBTF), formerly the Assistant Secretary for Management and Budget (ASMB), the OIG, and the Office of General Counsel (OGC)—in Washington, D.C.; HHS’s Program Support Center (PSC) in Rockville, Maryland; and DOJ’s Justice Management Division, EOUSA, Criminal Division, Civil Division, and Civil Rights Division in Washington, D.C. 
We conducted our work in two phases, from April 2001 through June 2001 focusing primarily on fiscal year 2000 HCFAC activity, and from October 2001 through April 2002 focusing primarily on fiscal year 2001 HCFAC activity, in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is contained in appendix I of this report. We requested comments on a draft of this report from the Secretary of HHS and the Attorney General or their designees. We received written comments from the Inspector General of HHS and the Acting Assistant Attorney General for Administration at DOJ. We have reprinted their responses in appendices II and III, respectively. DOJ Made Errors in Reporting Collections; However, the Trust Fund Was Minimally Affected The joint HCFAC reports included deposits of about $210 million in fiscal year 2000 and $464 million in fiscal year 2001, pursuant to HIPAA. As shown in figure 1, the sources of these deposits were primarily penalties and multiple damages. In testing at DOJ, we identified some errors in the recording of HCFAC collections that resulted in an estimated overstatement of $169,765 to the trust fund in fiscal year 2001. These uncorrected errors, which related to criminal fines deposited to the trust fund, were not detected by DOJ officials responsible for submitting collection reports to the Department of the Treasury. Our work did not identify errors in recording collections in any of the other categories for fiscal years 2000 and 2001. We did not identify errors related to fiscal year 2000 criminal fines. Among the 58 statistically sampled criminal fines transactions we tested, 2 fines reported at $8,693 and $50,007 were supported by documentation for only $6,097 and $25,000, respectively, resulting in overstatements to the trust fund totaling over $27,000. 
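The combined overstatement from the two sampled fines can be checked directly from the amounts in the finding above:

```python
# Overstatement = amount recorded to the trust fund minus the amount
# supported by collection documentation, summed over the two sampled fines.
reported = [8_693, 50_007]    # amounts recorded for the two fines
supported = [6_097, 25_000]   # amounts supported by documentation
overstatement = sum(r - s for r, s in zip(reported, supported))
print(overstatement)  # 27603, i.e., "over $27,000"
```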
We estimated that the most likely overstatement of collections of criminal fines deposited to the trust fund as a result of incorrectly recorded transactions was $169,765. In both cases, the errors were not detected by DOJ staff responsible for submitting the criminal fines report to the Department of the Treasury. DOJ officials told us that a programming mistake in generating the criminal fines report resulted in these errors. DOJ officials also told us that the mistake has been corrected to prevent the problem in the future, and that they plan to research the impact of the programming oversight to determine what, if any, adjustments or offsets are needed and will make the necessary corrections next quarter. While the total estimated overstatement is relatively insignificant compared to the total of $464 million in HCFAC collections reported to the trust fund in fiscal year 2001, the control weaknesses that gave rise to these errors could result in more significant misstatements.

HIPAA Appropriations Were Properly Supported

As reported in the joint HCFAC reports for fiscal years 2000 and 2001, the Attorney General and the Secretary of HHS certified the entire $158.2 million and $181.9 million appropriations, respectively, as necessary to carry out the HCFAC program. Based on our review, the requests for fiscal years 2000 and 2001 HCFAC appropriations were properly supported for valid purposes under HIPAA. Figures 2 and 3 present fiscal years 2000 and 2001 allocations for the HCFAC program, respectively. Based on our review, we found that the planned use of HCFAC appropriations was for purposes stated in the HIPAA statute. According to the joint HCFAC reports, HCFAC's increased resources have enabled HHS/OIG to broaden its efforts both to detect fraud and abuse and to help reduce its severity and frequency.
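The report does not describe the statistical projection behind the $169,765 estimate. For readers unfamiliar with dollar unit (monetary-unit) sampling, the sketch below illustrates one common textbook projection, known error for items at or above the sampling interval plus taint times interval for smaller items, using the sample figures reported here and in appendix I. It illustrates only the mechanics and does not reproduce GAO's calculation; the function name and formula are our own illustration.

```python
# Sketch of a standard monetary-unit-sampling (MUS) error projection.
# Sample figures come from the report; the taint-times-interval formula
# is a common textbook method, not necessarily the one GAO applied.

population_value = 2_894_234   # FY 2001 criminal fines population (179 items)
sample_size = 58               # dollar units sampled

# (recorded amount, supported amount) for the two errors found
errors = [(8_693, 6_097), (50_007, 25_000)]

interval = population_value / sample_size  # average dollars per sample unit

def projected_misstatement(errors, interval):
    """Project sample errors to the population.

    Items recorded at or above the sampling interval are effectively
    examined in full, so their error is taken at face value; smaller
    items contribute taint (error / recorded amount) times the interval.
    """
    total = 0.0
    for recorded, supported in errors:
        error = recorded - supported
        if recorded >= interval:
            total += error                          # top-stratum item
        else:
            total += (error / recorded) * interval  # tainted interval
    return total

known_misstatement = sum(r - s for r, s in errors)
print(f"known misstatement:     ${known_misstatement:,}")       # over $27,000
print(f"sampling interval:      ${interval:,.0f}")
print(f"projected misstatement: ${projected_misstatement(errors, interval):,.0f}")
```

The known misstatement of $27,603 matches the "over $27,000" cited above; the projected figure from this simplified formula differs from the report's $169,765 estimate, which may reflect a different projection method or additional allowances.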
The HHS/OIG reported that HCFAC funding allowed it to open 14 new investigative offices and increase its staff by 61 during fiscal year 2000, bringing OIG closer to its goal of extending its investigative and audit staff to cover all geographical areas of the country. As shown in figures 2 and 3, we also found that DOJ and other HHS organizations requested and were granted $38.9 million in fiscal year 2000 and $51.9 million in fiscal year 2001. DOJ's funds were used primarily to continue its efforts to litigate health care fraud cases and provide health care fraud training courses. In fiscal year 2001, $4 million of HHS's HCFAC allocation was approved by designees of the Attorney General and the Secretary of HHS for reallocation to DOJ to support the federal government's tobacco litigation activities for fiscal year 2001. In addition, $12 million of fiscal year 2001 HCFAC funds allocated to DOJ's Civil Division were used to support the federal government's suit against the major tobacco companies, as allowed under HIPAA. Other HHS organizations used their HCFAC allocations for the following purposes in fiscal years 2000 and 2001:

The Office of General Counsel used its funds primarily for litigation activity, both administrative and judicial.

CMS, the agency with primary responsibility for administering the Medicare and Medicaid programs, along with the ASMB, used its HCFAC funds allocated in fiscal year 2000 for contractual consultant services to establish a formal risk management function within each organization. CMS used its HCFAC funds allocated in fiscal year 2001 to assist states in developing Medicaid payment accuracy measurement methodologies and to conduct pilot studies to measure and reduce state Medicaid payment errors.
The AOA was allocated funds to develop and disseminate consumer education information to older Americans and to train staff to recognize and report fraud, waste, and abuse in the Medicare and Medicaid programs.

The ASBTF, formerly the ASMB, used its HCFAC funds for consultant services to help ensure that the new HHS integrated financial management system, of which the CMS Healthcare Integrated General Ledger Accounting System will be a major component, is developed to meet the department's financial management goals, which include helping to prevent waste and abuse in HHS health care programs.

DOJ's Controls over Expenditures Need Reinforcement

At DOJ, we identified problems indicating that oversight of HCFAC expenditure transaction processing needs to be reemphasized. These problems included charging non-HCFAC transactions to the HCFAC appropriation and an inability to provide us with a timely, detailed list of HCFAC expenditure transactions to support summary totals on DOJ's internal financial report. These problems could impede DOJ's ability to adequately account for growing HCFAC expenditures, which totaled over $23.7 million for fiscal year 2000 and $26.6 million for fiscal year 2001, as shown in figure 4. We found that over $480,000 in interest penalties not related to HCFAC activities was miscoded and inadvertently charged to the HCFAC appropriation. The DOJ officials responsible for recording this transaction told us there was an offsetting error of $482,000 in HCFAC-related expenditures that were not recorded to the HCFAC account. Regardless of whether these errors essentially offset, they are indicative of a weakness in DOJ's financial processes for recording HCFAC and other expenditures. DOJ was also unable to provide a complete and timely reconciliation of detailed transactions to summary expenditure amounts reported in its internal reports.
DOJ made several attempts beginning in January 2002 to provide us with an electronic file that reconciled to its internal expenditure report. As of mid-May 2002, we had not received a reconciled file for fiscal year 2001 HCFAC expenditures. We did, however, receive a reconciled file for fiscal year 2000 HCFAC expenditures on April 23, 2002. To their credit, DOJ officials responsible for maintaining DOJ financial systems identified problems associated with earlier attempts to provide this essential information to support its internal reports. While we were ultimately able to obtain this information for fiscal year 2000, we did not receive it in sufficient time to apply statistical sampling techniques for selecting expenditure transactions for review, as we had done at HHS. While we used other procedures to compensate for not obtaining this detailed data file in a timely manner, we cannot project the results of our procedures to the population of DOJ expenditures. Both Office of Management and Budget (OMB) Circular A-127, Financial Management Systems, and the Comptroller General's Standards for Internal Control in the Federal Government require that all transactions be clearly documented and that documentation be readily available for examination. DOJ's financial statement auditor noted several problems related to the department's internal controls over financial reporting, such as (1) untimely recording of financial transactions, (2) weak general and application controls over financial management systems, and (3) inadequate financial statement preparation controls. The financial statement audit report specifically discusses problems related to untimely recording of financial transactions and inadequate financial statement preparation controls at offices, boards, and divisions that process HCFAC transactions. The financial statement auditor recommended that DOJ monitor compliance with its policies and procedures.
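Reconciling a detailed transaction file to the summary totals on a report such as DOJ's internal expenditure report amounts to summing the detail by reporting category and flagging categories whose sums do not tie. The sketch below, using hypothetical component names and amounts rather than actual DOJ data, shows the kind of check involved.

```python
# Sketch of reconciling a detail transaction file to summary report
# totals. Component names and amounts are hypothetical illustrations,
# not figures from DOJ's EA101 report.
from collections import defaultdict

# (component, amount) rows from a hypothetical detail file
detail = [
    ("EOUSA", 1_200.00),
    ("EOUSA", 800.00),
    ("Civil Division", 500.00),
    ("Criminal Division", 300.00),
]

# Summary totals as reported, also hypothetical
summary = {"EOUSA": 2_000.00, "Civil Division": 450.00,
           "Criminal Division": 300.00}

def reconcile(detail, summary):
    """Return {component: difference} for components that do not tie."""
    sums = defaultdict(float)
    for component, amount in detail:
        sums[component] += amount
    diffs = {}
    for component in set(sums) | set(summary):
        diff = round(sums.get(component, 0.0) - summary.get(component, 0.0), 2)
        if diff:
            diffs[component] = diff
    return diffs

print(reconcile(detail, summary))  # Civil Division detail exceeds summary by 50.00
```

A file that reconciles produces an empty result; any nonzero difference identifies the category that needs research before the detail can support the summary report.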
Further, the auditor recommended that DOJ consider centralizing information systems that capture redundant financial data, or consider standardizing the accumulation and recording of financial transactions in accordance with the department's requirements.

HHS Expenditures Were Generally Appropriate

Overall, we generally found adequate documentation to support the $114.9 million in fiscal year 2000 and $129.8 million in fiscal year 2001 HCFAC expenditures shown in figure 5. However, we found that a purchase for an HHS/OIG employee award in fiscal year 2001 was questionable because it did not have adequate documentation to support that it was a valid HCFAC expenditure. We also found that HHS's policies and procedures for employee awards did not include specific guidance on documenting the purchase of such nonmonetary awards. As stated before, the Comptroller General's Standards for Internal Control in the Federal Government calls for appropriate control activities to ensure that transactions and internal control policies and procedures are clearly documented. HHS/OIG has since provided us with documentation to support the award as a valid HCFAC transaction and told us that it is revising its current policies and procedures to include nonmonetary employee awards.

HHS and DOJ Do Not Separately Track Non-Medicare Expenditures

We were not able to identify HCFAC program trust fund expenditures that were unrelated to Medicare because the HHS/OIG and DOJ do not separately account for or monitor such expenditures. Even though HIPAA requires us to report on expenditures related to non-Medicare activities, it does not specifically require HHS or DOJ to separately track Medicare and non-Medicare expenditures. However, HIPAA does restrict the HHS/OIG's use of HCFAC funds to the Medicare and Medicaid programs. According to HHS/OIG officials, they use HCFAC funds only for audits, evaluations, or investigations related to Medicare and Medicaid.
The officials also stated that while some activities may be limited to either Medicare or Medicaid, most activities are generally related to both programs. Because HIPAA does not preclude the HHS/OIG from using HCFAC funds for Medicaid efforts, HHS/OIG officials have stated they do not believe it is necessary or beneficial to account for such expenditures separately. Similarly, DOJ officials told us that it is not practical or beneficial to account separately for non-Medicare expenditures because of the nature of health care fraud cases. HIPAA permits DOJ to use HCFAC funds for health care fraud activities involving other health programs. According to DOJ officials, health care fraud cases usually involve several health care programs, including Medicare and health care programs administered by other federal agencies, such as the Department of Veterans Affairs, the Department of Defense, and the Office of Personnel Management. Consequently, it is difficult to allocate personnel costs and other litigation expenses to specific parties in health care fraud cases. Also, according to DOJ officials, even if Medicare is not a party in a health care fraud case, the case may provide valuable experience in health care fraud matters, allowing auditors, investigators, and attorneys to become more effective in their efforts to combat Medicare fraud. Since there is no requirement to do so, HHS and DOJ continue to assert that they do not plan to identify these expenditures in the future. Nonetheless, attributing HCFAC activity costs to particular programs would be helpful information for the Congress and other decision makers to use in determining how to allocate federal resources, authorize and modify programs, and evaluate program performance. The Congress also saw value in having this information when it tasked us with reporting expenditures for HCFAC activities not related to Medicare. We believe that there is intrinsic value in having this information. 
For example, HCFAC managers face decisions involving alternative actions, such as whether to pursue certain cases. Making these decisions should include a cost awareness along with other available information to assess the case potential. Further, having more refined data on HCFAC expenditures is an essential element to developing effective performance measures to assess the program's effectiveness.

Savings to the Trust Fund Cannot Be Identified

In the joint HCFAC reports, HHS/OIG reported approximately $14.1 billion of cost savings during fiscal year 2000 and over $16 billion of cost savings during fiscal year 2001 from implementation of its recommendations and other initiatives. We were unable to directly associate these savings to HCFAC and other program expenditures from the trust fund, as required by HIPAA, because HHS and DOJ officials do not track them as such due to the nature of health care anti–fraud and abuse activities. HIPAA does not specifically require HHS and DOJ to attribute savings to HCFAC expenditures. Of the reported cost savings, $2.1 billion in fiscal year 2000 and $3.1 billion in fiscal year 2001 were reported as related to the Medicaid program, which is funded through the general fund of the Treasury, not the Medicare trust fund. Our analysis indicated that the vast majority of HHS/OIG work related to the reported cost savings of $14 billion and $16 billion was performed prior to the passage of HCFAC. Based on our review, we found that amounts reported as cost savings were adequately supported. Cost savings represent funds or resources that will be used more efficiently as a result of documented measures taken by the Congress or management in response to HHS/OIG audits, investigations, and inspections. These savings are often changes in program design or control procedures implemented to minimize improper use of program funds.
Cost savings are annualized amounts that are determined based on Congressional Budget Office estimates over a 5-year period. HHS and DOJ officials have stated that audits, evaluations, and investigations can take several years to complete. Once they have been completed, it can take several more years before recommendations or initiatives are implemented. Likewise, it is not uncommon for litigation activities to span many years before a settlement is reached. According to DOJ and HHS officials, any savings resulting from health care anti–fraud and abuse activities funded by the HCFAC program in fiscal years 2000 and 2001 will likely not be realized until subsequent years. Because the HCFAC program has been in existence for over 4 years, information may now be available for agencies to determine the cost savings associated with expenditures from the trust fund pursuant to HIPAA. Associating specific cost savings with related HCFAC expenditures is an important step in helping the Congress and other decision makers evaluate the effectiveness of the HCFAC program.

Conclusions

Our review of fiscal years 2000 and 2001 HCFAC activities found that appropriations, HHS expenditures, and reported cost savings were adequately supported, but we did identify some errors in the recording of collections and expenditures at DOJ. These errors indicate the need to strengthen controls over DOJ's processing of HCFAC collections and expenditures to ensure that (1) moneys collected from fraudulent acts against the Medicare program are accurately recorded and (2) expenditures for health care antifraud activities are justified and accurately recorded. Effective internal control procedures and management oversight are critical to supporting management's fiduciary role and its ability to manage the HCFAC program responsibly.
Further, separately tracking Medicare and non-Medicare expenditures and cost savings and associating them by program could provide valuable information to assist the Congress, management, and others in making difficult programmatic choices.

Recommendations for Executive Action

To improve DOJ's accountability for HCFAC program collections, we recommend that the Attorney General fully implement plans to make all necessary correcting adjustments for collections transferred to the trust fund in error and ensure that subsequent collection reports submitted to the Department of the Treasury are accurate. To improve DOJ's accountability for HCFAC program expenditures, we recommend that the Attorney General make correcting adjustments for expenditures improperly charged and reinforce financial management policies and procedures to minimize errors in recording HCFAC transactions. To facilitate providing the Congress and other decision makers with relevant information on program performance and results, we recommend that the Attorney General and the Secretary of HHS assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected.

Agency Comments and Our Evaluation

A draft of this report was provided to HHS and DOJ for their review and comment. In written comments, HHS concurred with our recommendation to assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected. In its written comments, DOJ agreed with all but one of our recommendations and expressed concern with some of our findings. The following discussion provides highlights of the agencies' comments and our evaluation. Letters from HHS and DOJ are reprinted in appendixes II and III. DOJ acknowledged the two errors we found in fiscal year 2001 criminal fine amounts and attributed them to a programming problem.
As we discussed in the report, DOJ indicated it had already taken action to address our recommendations by correcting the programming error to address future amounts reported for criminal fines. DOJ also stated that an effort is currently under way to research the impact of the programming error and that it plans to determine what, if any, adjustments or offsets are needed to correct amounts previously reported to the Department of the Treasury. DOJ indicated that it had already discovered and fixed the programming error prior to our review. However, as we reported, DOJ was not aware of the errors we identified, nor did it call our attention to the possibility of errors occurring due to this programming problem. In addition, DOJ acknowledged in its comments that errors have occurred in the recording of valid HCFAC expenditure transactions and stated that corrections have been made to address our related recommendation. Additionally, DOJ incorrectly interpreted our statement that the problems identified in our review could impede its ability to account for growing HCFAC expenditures. In its comments, DOJ construed this to mean that we concluded that program managers lack timely access to financial reports or supporting transactions. That was neither our intent nor the focus of our review. As stated in our report, the problems we encountered indicate that additional emphasis should be placed on DOJ's financial management policies and procedures to minimize errors in recording HCFAC transactions. DOJ did state that it will continue its standing practice of continually educating its staff and reinforcing its financial management policies and procedures to minimize errors in recording HCFAC and all other transactions within DOJ. However, based on our findings, this standing practice needs modification in order to bolster its effectiveness. DOJ also stated that our reference to the findings for departmental systems as cited in the Audit Report: U.S.
Department of Justice Annual Financial Statement Fiscal Year 2001, Report No. 02-06, was inapplicable. To address DOJ’s concerns, we clarified the report to cite problems that its financial statement auditors found at entities within DOJ that process HCFAC transactions. Finally, regarding our recommendation to both HHS and DOJ to assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected, HHS/OIG stated in its written comments that it had previously considered alternatives that would allow it to track and attribute cost savings and expenditures but had identified obstacles to doing so. At the same time, HHS/OIG agreed with our recommendation to perform an assessment of tracking cost savings and expenditures by program, which is critical to developing effective performance measures. However, DOJ stated that it is neither practical nor beneficial to track cost savings or non-Medicare expenditures associated with HCFAC enforcement activities. Without capturing such information, the Congress and other decision makers do not have the ability to fully assess the effectiveness of the HCFAC program. Therefore, we continue to believe that, at a minimum, DOJ should study this further, as HHS has agreed to do. We are sending copies of this report to the Secretary of HHS, the Attorney General, and other interested parties. Copies will be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-9508 or by e-mail at [email protected] or Kay L. Daly, Assistant Director, at (202) 512-9312 or by e-mail at [email protected]. Key contributors to this assignment are listed in appendix IV. 
Objectives, Scope, and Methodology

To accomplish the first objective, identifying and assessing the propriety of amounts reported for deposits in fiscal years 2000 and 2001 as (1) penalties and multiple damages, (2) criminal fines, (3) civil monetary penalties, and (4) gifts and bequests, we did the following:

Reviewed the joint HHS and DOJ HCFAC reports for fiscal years 2000 and 2001 to identify amounts deposited to the trust fund.

Interviewed personnel at various HHS and DOJ entities to update our understanding of procedures related to collections/deposits.

Obtained access to databases and reports from HHS and DOJ for the various collections/deposits as of September 30, 2000, and September 30, 2001.

Tested selected transactions to determine whether the proper amounts were deposited to the trust fund. We obtained and recomputed supporting documentation from various sources depending on the type of collection/deposit. We traced amounts reported on the supporting documentation to reports and other records to confirm that proper amounts were appropriately reported. To perform these tests, we did the following:

Drew dollar unit samples of 60 items from a population of 626 penalties and multiple damages (PMD), totaling $454,615,907, from an electronic database for CMS PMDs and from the FMIS Dept Management Transfer of Funds from the U.S. Department of Justice Via OPAC Report for DOJ PMDs for fiscal year 2001, and 60 items from a population of 479 penalties and multiple damages, totaling $147,268,092, from an electronic database for CMS PMDs and from the FMIS Dept Management Detail Report for DOJ PMDs for fiscal year 2000.

Drew dollar unit samples of 58 items from a population of 179 criminal fines, totaling $2,894,234, from the Criminal Fines Report for fiscal year 2001, and 58 items from a population of 178 criminal fines, totaling $57,209,390, from the Criminal Fines Report for fiscal year 2000.
Drew dollar unit samples of 29 items from a population of 2,381 civil monetary penalties, totaling $6,060,481, from an electronic database for fiscal year 2001, and 57 items from a population of 1,221 civil monetary penalties, totaling $5,220,177, from an electronic database for fiscal year 2000.

Reviewed the entire population of four gifts and bequests, totaling $5,501, for fiscal year 2001. We obtained and analyzed supporting documentation, including the letters and checks retained at CMS. There were no gifts and bequests reported for fiscal year 2000; therefore, none were tested.

To accomplish our second objective, identifying and assessing the propriety of amounts reported in fiscal years 2000 and 2001 as appropriations from the trust fund for HCFAC activities, we did the following:

Obtained the funding decision memorandum and reallocation documents to verify the HCFAC funds certified by HHS and DOJ officials.

Analyzed the reasons for requesting HCFAC funds to determine that amounts appropriated from the trust fund met the purposes stated in HIPAA to, among other things, coordinate federal, state, and local law enforcement efforts; conduct investigations, audits, and studies related to health care; and provide guidance to the health care industry regarding fraudulent practices.

Compared allocation amounts reported in the joint HCFAC reports to the approved funding decision memorandum and reallocation documents to verify the accuracy of amounts reported.

To accomplish our third objective, identifying and assessing the propriety of amounts for HCFAC expenditures at DOJ for fiscal years 2000 and 2001, we obtained DOJ's internal financial report, the Expenditure and Allotment Report, EA101, which detailed total expenditure data for each component by subobject class for fiscal year 2000 and fiscal year 2001.
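The dollar unit samples described above select individual dollars rather than documents, so larger transactions are proportionally more likely to be examined, and any item at least as large as the sampling interval is certain to be selected. The following sketch shows a systematic dollar-unit selection; the amounts are illustrative rather than the actual CMS or DOJ populations, and the function and its parameters are our own illustration.

```python
# Sketch of systematic dollar-unit (monetary-unit) sample selection.
# Amounts are illustrative; the actual populations came from the CMS
# and DOJ databases and reports described in the text.
import random

def dollar_unit_sample(amounts, sample_size, seed=0):
    """Select item indices by systematic sampling of dollar units."""
    total = sum(amounts)
    interval = total / sample_size
    start = random.Random(seed).uniform(0, interval)  # random start in first interval
    hits = [start + i * interval for i in range(sample_size)]

    selected = []
    cumulative = 0.0
    hit_iter = iter(hits)
    hit = next(hit_iter)
    for index, amount in enumerate(amounts):
        cumulative += amount
        # An item is selected once for each hit falling inside its dollar
        # range; a large item can absorb several hits but is listed once.
        while hit is not None and hit <= cumulative:
            if not selected or selected[-1] != index:
                selected.append(index)
            hit = next(hit_iter, None)
    return selected

population = [120, 5, 40, 900, 15, 60, 300, 8, 75, 220]
picks = dollar_unit_sample(population, sample_size=4)
print(picks)  # indices of the selected items; large items dominate
```

Because item 3 ($900) exceeds the sampling interval ($1,743 / 4), it is guaranteed to appear in the sample regardless of the random start, which is why monetary-unit sampling concentrates audit effort on the dollars most at risk.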
To test this population, we further requested that DOJ provide us with a complete detailed population of transactions to support the summary totals on the internal financial report. Because the data were not provided to us on time, nor were they fully reconciled, we could not statistically select a sample and project the results to the population as a whole. We modified our methodology and nonstatistically selected 19 transactions, totaling $2,695,211 in fiscal year 2000, and 38 transactions, totaling $1,362,579 in fiscal year 2001, from DOJ, focusing on large dollar amounts, unusual items, and other transactions that would enhance our understanding of the expenditure process. To determine whether these transactions were properly classified as HCFAC transactions, we interviewed DOJ officials to obtain an understanding of the source and processing of transactions and reviewed, analyzed, and recomputed supporting documentation, such as purchase orders, invoices, and receipts, to determine the propriety of the expenditures. We performed analytical procedures and tested DOJ payroll at the largest component, the EOUSA offices. To assess the reasonableness of payroll expenses, we performed a high-level analytical review. To enhance our understanding of how personnel record their work activity in the Monthly Resource Summary System, we nonstatistically selected 20 individuals from 10 districts for fiscal years 2000 and 2001. We interviewed these individuals about their method for charging time to the HCFAC program for fiscal years 2000 and 2001 and to verify whether time charged to the Monthly Resource Summary System was accurate. In the interview, employees were asked whether the time recorded in the system was accurate and how and where they received guidance on charging of time.
To accomplish our fourth objective, identifying and assessing the propriety of amounts for HCFAC expenditures at HHS for fiscal years 2000 and 2001, we obtained internal reports generated from the agency's accounting system to identify HCFAC expenditure amounts, obtained detailed records to support HHS payroll and nonpayroll expenditures, and tested selected payroll and nonpayroll transactions to determine whether they were accurately reported. To evaluate payroll charges to the HCFAC appropriation by HHS/OIG employees during fiscal years 2000 and 2001, we performed analytical procedures. We analyzed the methodology used by the HHS/OIG to verify that expenditures were within the predetermined allocation percentages for HCFAC and non-HCFAC expenditures. We also reviewed 10 HHS/OIG employees' time charges for fiscal years 2000 and 2001. The selected employees were interviewed regarding their time charges for fiscal years 2000 and 2001. In the interview, employees were asked to verify the time that was recorded by the department's management information systems or timecards. We also inquired as to how and where employees received guidance on charging their time and whether they understood the various funding sources used to support OIG activities. We verified that the pay rate listed on each employee's Standard Form 50, Notification of Personnel Action, was the same as the amount charged to the Department of Health and Human Services Regional Core Accounting System Data Flowback Name List (CORE - Central Accounting System). We verified that the summary hours as recorded in the U.S. Department of Health & Human Services Employee Data Report (TAIMS - Time and Attendance application) traced to the management information system or time and attendance records. We interviewed the employees to verify that the time charged to the management information system or time and attendance records was accurate.
We drew dollar unit samples of 44 items from a population of 36,380 nonpayroll expenditures, totaling $34,156,369, from HHS's internal accounting records for fiscal year 2001, and 39 items from a population of 27,884 nonpayroll expenditures, totaling $32,914,328, for fiscal year 2000. To assess the propriety of these transactions, we obtained supporting documentation such as invoices, purchase orders, and receipts. We recomputed the documentation as appropriate to the transaction. We were unable to accomplish our fifth objective, to identify and assess the propriety of amounts reported as fiscal years 2000 and 2001 expenditures for non-Medicare anti–fraud and abuse activities, because HHS/OIG and DOJ do not separately account for or monitor such expenditures. Even though HIPAA requires that we report on expenditures related to non-Medicare activities, it does not specifically require HHS or DOJ to separately track Medicare and non-Medicare expenditures. To accomplish our sixth objective, to identify and assess the propriety of amounts reported as savings to the trust fund, we obtained the fiscal years 2000 and 2001 HHS/OIG semiannual reports to identify cost savings as reported in the joint reports and tested selected cost savings transactions to determine whether the amounts were substantiated. We were unable to attribute the reported cost savings to HCFAC expenditures or to identify any other savings from the trust fund because, according to DOJ and HHS officials, any savings resulting from health care anti–fraud and abuse activities funded by the HCFAC program in fiscal years 2000 and 2001 will likely not be realized until subsequent years.
We interviewed and obtained documentation from officials at CMS in Baltimore, Maryland; HHS headquarters–AOA, ASBTF, OIG, and the OGC–in Washington, D.C.; HHS's Program Support Center in Rockville, Maryland; and DOJ's Justice Management Division, EOUSA, Criminal Division, Civil Division, and Civil Rights Division in Washington, D.C. We conducted our work in two phases, from April 2001 through June 2001, focusing primarily on fiscal year 2000 HCFAC activity, and from October 2001 through April 2002, focusing primarily on fiscal year 2001 HCFAC activity, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of HHS and the Attorney General. We received written comments from the Inspector General of HHS and the Acting Assistant Attorney General for Administration at DOJ. We have reprinted their responses in appendixes II and III, respectively.

Comments from the Department of Health and Human Services

Comments from the Department of Justice

Staff Acknowledgments

Ronald Bergman, Sharon Byrd, Lisa Crye, Jacquelyn Hamilton, Corinne Robertson, Gina Ross, Sabrina Springfield, and Shawnda Wilson made key contributions to this report.

Related GAO Products

Civil Fines and Penalties Debt: Review of OSM's Management and Collection Processes. GAO-02-211. Washington, D.C.: December 31, 2001.

Criminal Debt: Oversight and Actions Needed to Address Deficiencies in Collection Processes. GAO-01-664. Washington, D.C.: July 16, 2001.

GAO's Mission

The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions.
The Medicare program is the nation’s largest health insurer, with almost 40 million beneficiaries and outlays of over $219 billion annually. Because of the program’s susceptibility to fraud and abuse, Congress enacted the Health Care Fraud and Abuse Control (HCFAC) Program as part of the Health Insurance Portability and Accountability Act (HIPAA) of 1996. HCFAC, which is administered by the Department of Health and Human Services’ (HHS) Office of Inspector General (OIG) and the Department of Justice (DOJ), established a national framework to coordinate federal, state, and local law enforcement efforts to detect, prevent, and prosecute health care fraud and abuse in the public and private sectors. HIPAA requires HHS and DOJ to issue a joint annual report to Congress, no later than January 1 of each year, covering the preceding fiscal year. 
The joint HCFAC reports included deposits of $210 million for fiscal year 2000 and $464 million for fiscal year 2001, pursuant to the act. In testing at DOJ, GAO found errors in the recording of criminal fines deposited to the Federal Hospital Insurance Trust Fund in fiscal year 2001 that resulted in an estimated overstatement of the trust fund of $169,765. GAO found that the planned use of HCFAC appropriations was in keeping with the stated purpose of the act. Although GAO found that expenditures from the trust fund were generally appropriate at HHS, at DOJ GAO found $480,000 in interest penalties unrelated to HCFAC activities that were charged to the HCFAC appropriation. GAO was unable to identify expenditures from the HCFAC trust fund for activities unrelated to Medicare because HHS/OIG and DOJ do not separately account for or monitor these activities. Likewise, GAO was unable to identify savings specifically attributable to activities funded by the HCFAC program.
Background IBD refers to Crohn’s disease and ulcerative colitis. Crohn’s disease can involve any area of the gastrointestinal tract but most commonly affects the small intestine, which is responsible for the body’s absorption of most needed nutrients, and the beginning of the large intestine, or colon. The resulting inflammation can cause excessive diarrhea, severe rectal bleeding, anemia, fever, and abdominal pain. In addition, malnutrition and nutritional deficiencies are common among Crohn’s disease patients, particularly if the disease is extensive and of long duration. Two-thirds to three-quarters of patients with Crohn’s disease will require surgery, in most cases to remove the diseased segment of the bowel and any associated abscess. In some cases, removal of the colon, with creation of an ostomy, also may be required. However, surgery is not considered a cure for Crohn’s disease because the disease frequently recurs. Ulcerative colitis affects only the colon. This condition causes diarrhea and bleeding, and can ultimately lead to colon cancer. In one-quarter to one-third of patients with ulcerative colitis, medical therapy is not completely successful or complications arise. Under these circumstances, an ostomy operation may be performed. Because inflammation in ulcerative colitis is confined to the colon, the disease is curable by this operation. IBD may occur at any age, but it most commonly develops between the ages of 10 and 30. One-third of IBD patients develop symptoms before adolescence. In such cases, the disease poses special problems because it can impair children’s ability to absorb nutrients and thus adversely affect their growth and development. 
IBD patients, depending on each individual’s unique circumstances, may rely on one or more of the following key therapies in either home health or outpatient delivery settings to manage their disease: Parenteral nutrition is the intravenous administration of nutrients through a catheter that carries liquid nutrients directly into the bloodstream, where they are absorbed by the body, entirely bypassing the gastrointestinal tract. It is typically used to treat patients with severe cases of IBD. In such instances, patients’ gastrointestinal tracts cannot tolerate nutrition by mouth or a feeding tube. The provision of parenteral nutrition allows the intestines to rest and heal, and may relieve acute attacks and delay or avoid the need for surgery. Supplies used in parenteral nutrition include parenteral nutrition solutions and various products necessary to administer the solutions to the patient, such as infusion pumps and intravenous poles. Parenteral nutrition supply kits include supplies necessary to transfer the solution to the infusion pump, such as tubes and sterilization pads. Parenteral nutrition administration kits include supplies necessary to transfer the solution from the pump to the patient, such as intravenous catheters, dressings, tapes, antiseptics, and sterile gloves. Enteral nutrition is indicated for patients with a functioning gastrointestinal tract but whose oral nutrient intake is insufficient to meet their nutritional needs. Enteral nutrition employs a feeding tube to deliver a liquid nutritional formula to the stomach or small intestine—it is administered either through the nose or directly through the abdominal wall into the gastrointestinal tract. 
For IBD patients, and particularly for Crohn’s disease patients whose inflamed small intestine may not allow them to absorb enough nutrients, this method—either used alone, or in combination with food or liquids taken orally—may restore good nutrition to patients weakened by severe diarrhea and poor nutrition. In addition, according to gastrointestinal disease experts, enteral nutrition may have therapeutic effects as well, by inducing remission. Supplies used in enteral nutrition include enteral formulas and supplies necessary to administer this therapy, such as enteral nutrition infusion pumps, intravenous poles, catheters, and tubes. Enteral feeding supply kits include supplies necessary to administer the formula to the patient, such as syringes, tubing to transfer the formula to the catheter, tube connectors, and sterile gloves. Tubing that goes inside the patient’s body to administer the nutrients—i.e., nasogastric tubing that delivers the formula to the patient’s gastrointestinal system through the nose, or gastrostomy tubing that delivers the formula through a surgically created opening in the stomach— is also necessary. Other supplies needed may include additives, such as fiber, to thicken enteral formulas. Medically necessary food products are products that can be taken orally. They include food supplements, such as the formulas used in enteral nutrition, and prescription strength vitamins. For example, because Crohn’s disease and surgical procedures that remove parts of the small intestine can inhibit absorption of vitamins, fats, and other important nutrients, taking certain supplements, such as fish oil, antioxidants, and mineral supplements, may be beneficial for patients with Crohn’s disease. Medications are often required to treat Crohn’s disease and ulcerative colitis. The FDA has approved both brand name drugs and generic drugs to treat IBD. These drugs are typically self-administered and taken to reduce inflammation in the intestinal wall. 
In addition, there are other medications approved by the FDA—but not specifically to treat IBD—that may be effective in treating the disease. IBD patients who have had an ostomy operation need to use specific supplies for their ostomy care. Ostomy surgery creates an opening in the abdomen. This opening, called a stoma, permits digested food to exit the body. In most cases, this type of surgery results in a permanent opening. Subsequent to the operation, ostomy patients need certain supplies to manage the abdominal opening and the waste. For example, the patient wears a pouch over the opening to collect the waste and then empties the pouch as needed. Other necessary supplies include skin barriers to protect the skin and irrigation and fluid discharge supplies. Medicare pays for beneficiaries’ medically necessary health care needs as long as they fit into one of the broadly defined categories of benefits established in the Social Security Act. Among other things, these categories include commonly used medical services and supplies such as physician visits, inpatient hospital stays, diagnostic tests, durable medical equipment, and prosthetic devices. While the act provides for broad coverage of many medical and health care services, it does not provide an exhaustive list of all services covered. Similarly, the act generally does not specify which medical devices, surgical procedures, or diagnostic services the program covers. In addition, the act states that the program cannot pay for any supplies or services that are not “reasonable and necessary” for the diagnosis and treatment of an illness or injury. With the Social Security Act serving as the primary authority for all coverage provisions, CMS has established coverage policies that specify the procedures, devices, and services that are covered in the broad benefit categories established in the act. 
In addition, CMS has established the criteria used to determine whether these supplies are reasonable and necessary for a beneficiary’s treatment. CMS’s national coverage determinations (NCDs) describe the circumstances for Medicare coverage for a specific medical service, procedure, or device and they outline the conditions for coverage. CMS interpretive manuals further define when and under what circumstances items or services may be covered. Claims administration contractors are required to follow CMS’s national coverage policies. However, if an NCD does not specifically exclude or limit coverage for an item or service, or if the item or service is not mentioned at all in an NCD or CMS manual, it is up to the contractors to determine whether they will cover a particular item or service within their geographic area. This is often done through a local coverage determination (LCD). LCDs specify under what circumstances the item or service is considered to be reasonable and necessary, in accordance with the Social Security Act, and are supplemented by additional instructions from the contractors. LCDs related to durable medical equipment, prosthetic devices, orthotics, and a number of other supplies are made by the DMERCs—the four CMS claims administration contractors that process claims exclusively for these supplies. The DMERCs are required by CMS to coordinate their coverage development process with one another and they publish identical LCDs. Medicaid coverage policies vary by state. While all state Medicaid programs must pay for certain services, such as inpatient and outpatient hospital services, and early and periodic screening, diagnostic, and treatment services for individuals under the age of 21, states have broad discretion in setting up their Medicaid programs. They may set different eligibility standards, scope of services, and payments, and can elect to cover a range of optional populations and benefits. 
Coverage of IBD Therapies Is Subject to Medicare and Medicaid Standards Medicare generally covers parenteral and enteral nutrition and ostomy care in home health and outpatient delivery settings for beneficiaries who meet certain medical standards. These three IBD therapies are included in specific benefit categories established by the Social Security Act—primarily the prosthetic devices benefit category, and in the case of ostomy care provided in a home health care delivery setting, the home health benefit category. Medicare does not cover medically necessary food products or most drugs approved by the FDA that are used to treat IBD. However, in January 2006, Medicare will begin to cover medically necessary drugs when the program’s new prescription drug benefit becomes effective. None of the five therapies we examined for this report are mandatory services under Medicaid. However, our survey of Medicaid programs indicates that most of these programs provided eligible individuals some coverage for all five therapies. We also found that coverage standards that Medicaid recipients must meet to receive these therapies varied by state. Table 1 summarizes the number of states covering each of the five therapies. (See app. II for specific information on each state Medicaid program’s coverage of these therapies.) Medicare and Medicaid Coverage Standards for Parenteral Nutrition Our analysis showed that Medicare and state Medicaid programs will generally cover parenteral nutrition as follows: Medicare: Medicare generally covers parenteral nutrition, as CMS has determined that it falls under the prosthetic devices benefit category, established in the Social Security Act. CMS’s coverage standards for parenteral nutrition therapy are outlined in both an NCD and in local coverage policy. Coverage is provided in both home health and outpatient delivery settings. 
The NCD requires the patient to have a severe pathology of the alimentary tract that does not allow absorption of sufficient nutrients to maintain weight and strength commensurate with the patient’s general condition. A period of hospitalization is required to initiate coverage for parenteral nutrition and to train the patient in how to prepare, manage, and administer the formula and equipment. The NCD also requires a physician’s written order or prescription and sufficient medical documentation to show that the prosthetic device coverage requirements are met and that parenteral nutrition therapy is medically necessary. In addition, before approving coverage, the carrier must agree that a particular condition qualifies for parenteral nutrition therapy. Medicare will approve coverage of parenteral nutrition at periodic intervals of no more than three months. In addition, Medicare will pay for no more than one month’s supply of nutrients at a time. Building upon the coverage standards in the NCD, the DMERCs’ local coverage policy on parenteral nutrition provides significantly more detailed requirements. The policy consists of specific clinical criteria for showing that parenteral nutrition is considered reasonable and necessary. Like the NCD, the local policy specifies that a patient must either have a condition involving the small intestine that significantly impairs the absorption of nutrients, or a disease of the stomach or intestine that impairs the ability of nutrients to be transported through the gastrointestinal system. The local coverage policy also requires that the patient’s inability to maintain proper weight and strength necessitates intravenous nutrition, and that the patient is unable to be treated through either diet modification or with drugs. It also describes specific clinical conditions that meet these criteria. 
For patients who do not meet the standards for these clinical conditions, coverage for parenteral nutrition will be considered on an individual basis if detailed documentation is submitted. However, some patients with moderate abnormalities may not be covered unless they have experienced an unsuccessful trial of enteral nutrition. Medicaid: Our survey responses indicated that all states provide some parenteral nutrition coverage for children and all but one—Georgia— provide such coverage for adults. However, Georgia reported that it would consider coverage for adults under an appeal process to its medical director. Our results showed variation among states in the standards used to determine coverage for parenteral nutrition. Seven states used all six of the coverage standards listed in our survey to determine whether Medicaid would cover parenteral nutrition therapy for adults and children. The remaining 44 states used a variety of the six coverage standards. For example, Arkansas, California, Kentucky, North Carolina, and Oregon require individuals to meet three of the six standards, including pathology and documentation. Forty-five states indicated that before covering parenteral nutrition therapy for individuals, they would require some form of documentation, such as proof of a medical condition. Forty-one of these same states also required individuals to have a severe pathology of the gastrointestinal tract that would not allow absorption of sufficient nutrients to maintain weight and strength. Only one state—Minnesota— provided coverage for parenteral nutrition therapy without listing any specific conditions that individuals must meet to receive therapy. For details on specific coverage standards for parenteral nutrition therapy by state, see app. III. 
Medicare and Medicaid Coverage Standards for Enteral Nutrition Our analysis showed that Medicare and most state Medicaid programs will generally cover enteral nutrition as follows: Medicare: Medicare covers enteral nutrition under the prosthetic devices benefit category. The NCD coverage standards for enteral nutrition are very similar to those for parenteral nutrition, with the primary difference being the requirements involving the patient’s clinical condition. As with parenteral nutrition, coverage for enteral nutrition is provided in both home health and outpatient delivery settings. However, for enteral nutrition, the patient may have a functioning gastrointestinal tract but must be unable to maintain appropriate weight and strength due to pathology to, or the nonfunction of, the structures that normally permit food to reach the digestive tract. The only other differing requirement in the NCD between the two therapies is that there is no hospitalization requirement for a patient seeking Medicare coverage for enteral nutrition. The NCD also requires a physician’s written order or prescription and sufficient medical documentation to show that the prosthetic device coverage requirements are met and that enteral nutrition therapy is medically necessary. The local coverage policy on enteral nutrition is simpler than the local policy for parenteral nutrition. It provides coverage for enteral nutrition so long as adequate nutrition is not possible by either dietary adjustment or oral supplements. Tube feedings of enteral nutrition must be required to provide sufficient nutrients to maintain weight and strength commensurate with the patient’s overall health status due to either one of two conditions: (1) a permanent non-function or disease of the structures that normally permit food to reach the small bowel, or (2) a disease of the small bowel which impairs digestion and absorption of an oral diet. 
However, coverage is possible for patients with partial impairments, such as a Crohn’s disease patient who requires prolonged infusion of enteral nutrients to overcome a problem with absorption. Enteral nutrition products administered orally are not covered. Medicaid: Forty-nine states reported that they provided some coverage for enteral nutrition therapy for both adults and children. One state— Oklahoma—indicated that it provided coverage for children, but not for adults. West Virginia responded that it did not cover this therapy at all. Analysis of survey results also indicated that there was some variation in coverage standards used among the 49 states that covered enteral nutrition therapy for adults and children. Six states reported that they cover enteral nutrition therapy for patients who meet all six coverage standards listed in our survey. The remaining states used a variety of the six coverage standards. For example, Arizona, Colorado, Michigan, New Mexico, and Wisconsin indicated that they use five of the six standards— these states did not require the patient to have a permanent condition in order to be covered for this therapy. Washington reported that, in addition to subjecting individuals to most of the criteria listed in our survey, it also requires prior approval of enteral nutrition therapy based on documentation showing that the therapy is medically necessary and outlining why traditional food is not appropriate. We also found that for both adults and children, 45 of the 49 states that cover enteral nutrition therapy require individuals to have specific documentation in their medical records before the states would render coverage. We also found that 12 states had less restrictive coverage standards for children. See app. IV for more details on enteral nutrition therapy and supplies coverage standards for each state. Medicare and Medicaid Coverage Standards for Ostomy Care Medicare and Medicaid provide at least some coverage of ostomy care. 
In outpatient delivery settings, Medicare covers ostomy care for IBD patients under its benefit category of prosthetic devices—similar to parenteral and enteral nutrition. In home health care delivery settings, Medicare covers this therapy as a home health benefit. While there is no NCD for ostomy care, the four DMERCs have established a local coverage policy for these supplies. According to the policy, the only Medicare coverage standard is that the patient must have had an ostomy. Similarly, all state Medicaid offices, according to our survey responses, provide coverage of ostomy care for adults and children who have had ostomies. Medicare and Medicaid Coverage Standards for Medically Necessary Food Products Medicare does not cover medically necessary food products because such supplies are not included in any of the benefit categories contained in the Social Security Act. On the other hand, according to our survey results, Medicaid provides at least some coverage of medically necessary food products to its recipients in 46 of the states. Nevada, North Carolina, Ohio, Utah, and West Virginia were the five states that did not provide any coverage for medically necessary food products. Of those states reporting that they provided coverage, 14 also noted that they required individuals to receive a certain percentage of their nutrition from oral supplements in order for these supplements to be covered. In some instances, this percentage was as high as 75 to 100 percent. For example, Florida, Georgia, Mississippi, Rhode Island, and South Dakota required some individuals to meet 100 percent of their nutritional requirements from oral supplements; however, these individuals did not have to meet all of the other conditions listed in our survey. 
On the other hand, while North Dakota reported that individuals must receive at least 51 percent of their nutrition from oral supplements, it had the most stringent standards overall because it required that individuals meet all three conditions for coverage listed in our survey. For more information on states’ coverage standards for medically necessary food products, see app. V. Medicare and Medicaid Coverage Standards for Drugs to Treat IBD Medicare does not generally cover medications that are self-administered, including drugs approved by the FDA to treat IBD. Coverage is not provided because such self-administered medications are not included in any of the benefit categories contained in the Social Security Act. However, in 2003, the Social Security Act was amended, establishing a new voluntary prescription drug benefit for Medicare beneficiaries that will become effective in January 2006. At that time, Medicare will begin to cover self-administered drugs approved by the FDA to treat IBD. States generally provide some coverage of drugs approved by the FDA to treat IBD. Generally, before covering a drug, states require that: (1) a physician or licensed practitioner writes the prescription; (2) a licensed pharmacist or licensed authorized practitioner dispenses the prescription; and (3) the drug is dispensed on a written prescription that is recorded and maintained in the pharmacist’s or practitioner’s records. Our survey did not ask state Medicaid programs about the standards used to determine coverage of drugs to treat IBD because state Medicaid programs are not required to cover prescription drugs. Our survey also asked state officials whether their Medicaid programs cover the off-label use of drugs to treat IBD. Responses to this question varied. Nineteen states responded that they had no policy for the use of off-label drugs or that their state did not cover off-label use. 
Many of these respondents wrote that they only covered drugs approved by the FDA to treat IBD. Twenty-four states indicated that they cover off-label drug use. However, 20 of these 24 states responded that they would only cover the drug under certain conditions. Many of these states reported that individuals obtaining such prescriptions must receive prior approval or provide documentation justifying medical necessity. Michigan has the most detailed off-label coverage policy of all the states; it indicated that off-label drugs must receive prior authorization as well as documentation outlining (1) the diagnosis; (2) the medical reason why the individual cannot use another covered drug; (3) the results of therapeutic alternative medications tried; and (4) medical literature citations supporting the off-label usage. The remaining eight states did not respond to this question. Variation in Medicare and Medicaid Programs’ Coverage of Specific Supplies Related to IBD Therapies Once coverage standards are met, Medicare generally covers all medically necessary supplies for the administration of parenteral and enteral nutrition and ostomy care—the three therapies that this program covers. On the other hand, our survey of Medicaid programs showed that although most states provide eligible individuals at least some coverage of each of the five therapies addressed in this report, the specific supplies that states will pay for vary and may be subject to restrictions. According to our survey results, most states will cover necessary supplies related to parenteral and enteral nutrition, with only slight variations in the specific supplies covered. We also found that, while all states provided some coverage of ostomy care, the specific supplies that states cover varied. Our survey also showed that, while most states will cover at least one of the five medically necessary food products listed in our survey, no state covers all of them for both adults and children. 
Finally, we found that most Medicaid programs generally covered many of the brand name drugs and equivalent generic drugs listed in our survey. Parenteral Nutrition Supplies Covered by Medicare and Medicaid Medicare will generally cover parenteral nutrition therapy supplies, such as nutrients and administration supplies, for beneficiaries who have met applicable coverage standards. Specifically, according to the applicable local coverage policy, Medicare will cover necessary parenteral nutrition solutions. In addition, when coverage requirements for parenteral nutrition are met, Medicare will also pay for one supply kit and one administration kit for each day that parenteral nutrition is administered, if such kits are medically necessary and used. Medicare will also cover infusion pumps—only one pump will be covered at any one time. The local coverage policy also outlines several documentation requirements for ensuring that the patient’s medical records—including test reports and records from the physician’s office, home health agency, hospital, nursing home, and other health care professionals—establish the medical necessity for the care provided. These records must be made available to the DMERC upon request. In addition, an order for each item billed and a certificate of medical necessity must be signed and dated by the treating physician, kept on file by the supplier, and be made available to the DMERC. Besides the initial certification, there are also documentation requirements if recertifications or revised certifications are necessary. States’ Medicaid coverage of the five most commonly used parenteral nutrition therapy supplies shows some variation, depending on the item and the delivery setting. As table 2 shows, parenteral nutrition therapy supplies—such as the infusion pump—are covered by more states than the parenteral nutrition solution. 
In addition, more states reported that they cover parenteral nutrition therapy supplies in outpatient delivery settings than in home health delivery settings. There was little difference in the coverage of various supplies between adults and children. Further analysis of survey results revealed that 28 states covered all supplies in both home health and outpatient delivery settings for adults and children. For more specific information on the parenteral nutrition supplies covered by each state, see app. VI. Enteral Nutrition Supplies Covered by Medicare and Medicaid Medicare will generally cover supplies associated with enteral nutrition therapy for beneficiaries who meet coverage standards. According to the enteral nutrition local coverage policy, Medicare will cover all enteral formulas for covered beneficiaries. In addition, Medicare will also cover medically necessary equipment and supplies for this therapy, such as feeding supply kits and pumps that are associated with the specific method of administration used by the patient. However, a few limitations apply. For example, claims for more than one type of kit delivered on the same date will be denied as not medically necessary. Similarly, Medicare will rarely consider the use of more than three nasogastric tubes or one gastrostomy tube over a 3-month period as medically necessary. The local coverage policy also outlines several documentation requirements for coverage of enteral nutrition supplies. Similar to the parenteral nutrition local policy, the enteral nutrition policy requires that the patient’s medical record reflect the need for the care provided. It also has requirements associated with the certification of enteral nutrition. For example, if the physician orders enteral nutrition supplies for a longer period of time than is indicated on the original certificate of medical necessity, the enteral nutrition policy will require recertification. 
However, the enteral nutrition policy generally has fewer documentation requirements than that of parenteral nutrition. Based on our survey, state Medicaid programs’ payment for seven of the most commonly used enteral nutrition therapy supplies varies depending on the type of product, delivery setting, and whether the patient is an adult or a child. Table 3 shows that states reported that their Medicaid programs pay for enteral feeding supply kits and tubing more than other therapy supplies. In addition, more states pay for enteral supplies for children than adults, and more states pay for supplies in outpatient delivery settings than in home health delivery settings. Further analysis revealed that 15 states pay for all seven supplies listed in our survey in both home health and outpatient delivery settings for adults and children. Thirty states pay for five or more enteral nutrition supplies for adults and children in these same settings. We also found that additives for enteral formula, such as fiber, are the least covered supply, with only 21 states covering them in both home health and outpatient delivery settings for adults and children. For specific results on enteral nutrition supplies provided by each state, see app. VII. Ostomy Supplies Covered by Medicare and Medicaid Medicare covers all of the types of ostomy supplies used by IBD patients who require ostomy care. However, there are two restrictions regarding the types of ostomy supplies covered. First, Medicare will only provide a beneficiary with one type of liquid skin barrier if one is needed—either a liquid or spray barrier, or individual wipes. Second, Medicare will only pay for one type of drainage supply—a stoma cap, a stoma plug, or gauze pads—on a given day. 
These restrictions are imposed by the DMERCs in a local coverage policy, which also specifies the "usual maximum quantity" of supplies that typically meet the needs of ostomy patients for a specific time period (generally for either 1 or 6 months) for each of the most commonly used ostomy supplies. However, according to the four DMERC medical directors, these quantities only serve as guidelines. Because the need for ostomy supplies can vary substantially among patients, DMERCs may cover supplies that exceed the usual maximum quantities if the need is justified. Medicare's coverage of ostomy supplies is different for IBD patients who receive care under a home health plan of care than for those who receive it in an outpatient delivery setting. If an IBD patient is being served by a home health agency and is under a home health plan of care, all of the patient's medical supplies, including ostomy supplies, are considered part of the Medicare home health services benefit. This is generally the case even when the IBD is a pre-existing condition unrelated to the immediate reason for home health care, such as hip replacement surgery. Medicare pays a fixed amount determined under a prospective payment system to the home health agency for the cost of all covered home health visits, including ostomy supplies delivered during these visits. The home health agency is obligated to provide the beneficiary with the necessary ostomy supplies, which are bundled with all other necessary home health services. The home health agency selects the type of ostomy products to be used, and if the patient wishes to use different products, the patient must do so at his or her own expense. This practice can be contrasted with the outpatient delivery setting, where the products are generally selected by the patient or the patient's doctor.
All states responded that their Medicaid programs pay for ostomy supplies for adults and children who have had ostomies; however, the range of supplies covered varied. Because of the relatively large number of supplies commonly used by ostomy patients, we grouped these supplies into nine categories, based on input from a representative of the United Ostomy Association. Table 4 shows the median percent of states covering ostomy supplies in home health and outpatient delivery settings, after they have been placed in these categories. For example, for the 14 supplies in the drainable pouch with standard barrier supplies category, half of the supplies are covered by at least 84 percent of states in home health delivery settings and 85 percent of states in outpatient delivery settings. In general, states' coverage of ostomy supplies was greater in outpatient than in home health delivery settings. For more details on the individual ostomy supplies included in each category and the percent of states covering each supply, see app. VIII. Twenty-four states reported covering all of the ostomy supplies listed in our survey in both delivery settings. Nine of the 24 states that covered all supplies imposed no supply limits or dollar caps on individuals. The remaining 15 states reported that they had supply limits or dollar caps; however, five of these states—Arizona, Hawaii, North Dakota, Rhode Island, and Virginia—added that they often allowed individuals to exceed these limits and caps for certain supplies. For example, one state reported that while it has a supply limit of one box of 50 skin barrier wipes and a dollar cap of $9.36 per month, it will often allow individuals to exceed limits and caps. See app. IX for more details on individual states' coverage of supplies, including supply limits and dollar caps, in both home health and outpatient delivery settings.
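The "median percent of states" statistic described for table 4 can be illustrated with a minimal sketch. The coverage percentages below are invented placeholders, not survey data; the point is only how the middle value of a category's per-supply coverage rates is obtained.

```python
# Hypothetical sketch: computing the median percent of states covering
# the supplies in one ostomy supply category. Values are illustrative
# placeholders, not figures from the survey.
from statistics import median

# Percent of states covering each supply in the category (home health)
home_health_pct = [72, 80, 84, 84, 88, 90, 96]

# Half of the supplies in the category are covered by at least this
# percentage of states.
print(median(home_health_pct))  # prints 84
```

Reading the result as the report does: half of the category's supplies are covered by at least 84 percent of states.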
Medically Necessary Food Products Covered by Medicaid Unlike Medicare, which does not pay for any medically necessary food products, most state Medicaid programs pay for some products. These products include prescription strength vitamins, oral nutritional formulas, food thickeners, baby foods, blended grocery products, and other supplies. According to our survey, 46 states reported covering at least one of the five products listed in our survey for either adults or children. As table 5 shows, out of the five food products, state Medicaid programs reported paying for oral nutritional formulas most often. Baby food and other blenderized products were the least commonly covered, with only four states—Missouri, New Jersey, Tennessee, and Texas—reporting that they paid for these products. In addition, more states paid for medically necessary food products for children than for adults. For more details on states' payment of medically necessary food products, see app. X. Drugs Covered by Medicaid to Treat IBD All states reported that their Medicaid programs paid for at least one of the nine brand name drugs or two of the generic drugs that were included in our survey and which were approved by the FDA to treat IBD. Figure 1 shows the number of states covering each drug. The brand name drug Remicade was the most commonly paid for drug, with all states reporting payment. The generic drugs available for Azulfidine and Rowasa were covered by 48 and 46 states, respectively. Further analysis revealed that six states—Colorado, Minnesota, Montana, Nevada, Oklahoma, and Wisconsin—reported that individuals must use generic drugs, if they are available, before obtaining the equivalent but more expensive brand name drugs. Three states—California, Iowa, and Ohio—indicated that they would not cover the brand name drug Remicade without prior authorization. See app. XI for a listing of each state's coverage of drugs listed in our survey to treat IBD for adults and children.
Agency Comments We provided a draft of this report to CMS. In its written comments, CMS said that it determined that we correctly described the Medicare coverage policies for parenteral and enteral nutrition and ostomy supplies. However, CMS suggested that we clarify our description of Medicare's coverage policy for prescription drugs that are not self-administered. We revised our language to address this concern. It also said that, as it proceeds with policy development, it will continue to give consideration to access issues that affect Medicare beneficiaries and Medicaid recipients in their treatment of IBD. We have reprinted CMS's letter in app. XII. We also provided FDA with excerpts of the draft concerning drugs it has approved to treat Crohn's disease and ulcerative colitis. FDA responded by e-mail and provided a list that contained several additional drugs it said it considered valid, labeled treatments for IBD. FDA's revised list was provided after our survey was administered, and these drugs are not discussed in this report. We modified our report to note this. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, the Commissioner of FDA, and other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staffs have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in app. XIII.
Appendix I: Scope and Methodology In this report, we (1) identify the Medicare and Medicaid coverage standards for five therapies—parenteral nutrition, enteral nutrition formula, ostomy care, medically necessary food products, and drugs approved by the Food and Drug Administration (FDA) for inflammatory bowel disease (IBD); and (2) determine what specific supplies used in these therapies Medicare and state Medicaid programs pay for in home health and outpatient delivery settings. In examining Medicare and Medicaid coverage of these therapies and the related supplies, we considered whether each program would cover these items in both home health and outpatient settings. For purposes of this study, we defined these settings as follows: Home health care refers to the situation in which a medical supply is being provided to the individual by a home health aide or others through an arrangement made by a home health agency, in accordance with a plan for furnishing the supply that a physician has established and periodically reviews. The supply is provided through visits made to an individual’s residence. Outpatient care refers to any situation in which a patient receives a medical supply but does not require an overnight hospital stay. This includes a situation in which the supply is provided to the individual during a visit with a physician in an office or hospital. It may include a situation in which the individual obtains and self-administers the supply outside of the office or hospital setting, without the assistance of a home health aide or a home health agency. 
Medicare and Medicaid’s Coverage Standards of IBD Therapies To identify Medicare’s coverage standards for parenteral and enteral nutrition, ostomy care, medically necessary food products, and drugs approved by the FDA for the treatment of IBD in home health and outpatient delivery settings, we reviewed the standards established by the Centers for Medicare & Medicaid Services (CMS) in its national coverage policies. Specifically, we examined CMS’s database of national coverage determinations (NCD) as well as its interpretive manuals, which address coverage policies. We also reviewed local coverage policies established by CMS’s four Durable Medical Equipment Regional Carriers (DMERC). In addition, we reviewed relevant Medicare laws and regulations. To clarify our understanding of these materials, we interviewed CMS officials and the medical directors of the four DMERCs. We also reviewed relevant laws, and other CMS and DMERC documentation to determine if the program covers these therapies in both the home health and outpatient delivery settings. To identify the Medicaid program’s coverage standards in each state for the five therapies addressed by our study in home health and outpatient delivery settings, we sent a survey to Medicaid offices in the 50 states and the District of Columbia. The survey addressed each state’s coverage policies and medical criteria that an individual must meet to receive each of the five therapies as a Medicaid benefit. Specifically, we asked states to indicate whether their program provides coverage of each of the five therapies and the criteria and conditions they have established, if applicable. In general, we used Medicare’s coverage policies as a basis for the survey’s coverage questions, and we provided states the opportunity to describe how their policies varied from Medicare’s policies. We also provided states with the option of describing other pertinent criteria they may have established. 
The survey asked them to indicate whether they had different coverage policies for adults and children for such therapies. Because Medicare does not cover medically necessary food products and self-administered prescription medications, we formulated our survey questions on applicable coverage standards for these two items based on discussions with medical experts and organizations that represent IBD patients, and our review of pertinent literature. Regarding drugs used to treat IBD, we consulted with the FDA, which provided us with a list of nine brand name drugs and two generic drugs that it had approved to treat Crohn's disease and ulcerative colitis. We included these drugs in our survey. We pretested our survey with Medicaid officials in the District of Columbia, Georgia, and Virginia. We selected the District of Columbia and Georgia because of the contrasting sizes of these two Medicaid programs. We selected Virginia to obtain additional input on the structure of our questions related to prescription drug coverage. We received responses from all of the states and reviewed these data for obvious inconsistencies and completeness. For responses that were unclear or incomplete, we contacted survey respondents to obtain clarification before conducting our analyses. We did not verify all the information we received in the survey. When necessary, we compared our electronic data files of survey responses with the actual surveys we obtained from states. We also did several internal verification checks to ensure accuracy. Based on these efforts, we determined that the data were sufficiently reliable for the purposes of this report. Specific Supplies Paid for by Medicare and State Medicaid Programs To identify the specific supplies used in the covered therapies that Medicare will pay for, we reviewed relevant NCDs, local coverage policies, and CMS interpretive manuals.
We interviewed CMS officials and the four DMERC directors about the supplies that Medicare will pay for, and any applicable limitations or restrictions. To improve our understanding of the various supplies used in each therapy, we obtained information from the two medical experts and representatives of organizations that participated in our panel. To determine the specific supplies that state Medicaid programs will pay for, we provided in our survey a list of commonly used supplies for each of the five therapies. To determine the supplies that are most commonly used in the five therapies, we interviewed the directors of the four DMERCs, representatives of some of the organizations that participated in our panel, and the two medical experts, and reviewed relevant literature. States were asked to report whether or not the specific supplies listed were covered for adults and children, and whether their Medicaid program would cover these supplies in both home health and outpatient delivery settings. In the case of parenteral and enteral nutrition, and ostomy supplies, we listed items by name and included their identifying codes as specified in the Health Care Common Procedure Coding System (HCPCS). Because there is no standard definition of what constitutes medically necessary food products, we developed a list of items that members of our panel and the physicians we spoke to generally considered commonly used. To determine whether states covered medications to treat IBD, we asked states to indicate whether they paid for the nine brand name drugs and two generic drugs listed in our survey. With the exception of drugs, we asked states to indicate whether they had established any restrictions, including supply limits and monetary caps, on the provision of covered products. We conducted our work from December 2004 through November 2005, in accordance with generally accepted government auditing standards. 
Appendix II: Reported State Medicaid Program Coverage of Therapies Used by IBD Patients Only total parenteral nutrition is covered. Appendix III: Reported Parenteral Nutrition Therapy Coverage Standards by State Medicaid Program Patient has to have a severe pathology of the gastrointestinal tract that does not allow absorption of sufficient nutrients to maintain weight and strength. For acute care adults receiving total parenteral nutrition, parenteral nutrition therapy must be the sole source of nutrition. Only total parenteral nutrition is covered. Individuals must document the reason enteral feeding cannot be given. The coverage standards related to pathology and clinical conditions are only applicable in home health delivery settings. The recipient must require total parenteral nutrition to sustain life. Adequate nutrition must not be possible by dietary adjustment, oral supplements, or tube enteral nutrition. Appendix IV: Reported Enteral Nutrition Therapy Coverage Standards by State Medicaid Program Applies to both adults and children. State does not cover therapy. Coverage standard or requirement does not apply. Patient has to have a severe pathology or non-function of the structures that normally permit food to reach the small bowel (e.g., inability to swallow), which impairs the ability to maintain weight and strength. For acute care adult patients, enteral therapy must be the sole source of nutrition. For adults, enteral nutrition is covered only if it is the sole source of nutrition. For adults and children, enteral nutrition must provide 51 percent or more of caloric intake. For adults, the tube feeding criterion is only applicable in home health delivery settings. The state does not require documentation for adults. It did not respond to this question for children. Enteral nutrition therapy must be the primary source of nutrition.
The state may cover oral nutritional products for children who have had an early and periodic screening, diagnostic, and treatment screening which results in a diagnosed condition that impairs absorption of specific nutrients. Documentation must indicate that there is a defined pathologic process for which nutritional support is therapeutic. Appendix V: Reported Medically Necessary Food Products Coverage Standards by State Medicaid Program Appendix VI: Reported Parenteral Nutrition Supplies Covered by Medicaid in Home Health and Outpatient Delivery Settings Parenteral nutrition solution includes all types of solutions. Supplies are covered only when administered at home. They are not covered in other outpatient delivery settings. Appendix VII: Reported Enteral Nutrition Supplies Covered by Medicaid in Home Health and Outpatient Delivery Settings State does not cover supply. State does not cover therapy. Enteral formula includes all types. State's coverage is limited to home health delivery settings. The state does not cover enteral nutrition infusion pump – without alarm. The state does not cover blenderized enteral formulas. For adults, the state handles coverage for enteral supplies on a case-by-case basis. The state only covers specific enteral nutrition supplies. Nasogastric tubings with and without stylets along with stomach tubes are only covered for children. Pediatric enteral formula and blenderized enteral formula are only covered for children under the age of 21. The state does not cover all enteral formulas.
Appendix VIII: Reported Percent of States Covering Ostomy Supplies in Home Health and Outpatient Delivery Settings Drainable pouch with extended wear barrier Ostomy pouch, drainable, with extended wear barrier attached Ostomy pouch, drainable, with extended wear barrier attached, with built-in convexity Ostomy pouch, drainable with faceplate attached, plastic Ostomy pouch, drainable with faceplate attached, rubber Ostomy pouch, drainable, for use on faceplate, plastic Ostomy pouch, drainable, for use on faceplate, rubber Ostomy pouch, drainable, high output, for use on a barrier with flange (2 piece system), with filter Ostomy pouch, closed, for use on barrier with locking flange, with filter (2 pieces) Ostomy pouch, drainable, with barrier attached, with filter (1 piece) Ostomy pouch, drainable, for use on barrier with non-locking flange, with filter (2 pieces) Ostomy pouch, drainable, for use on barrier with locking flange (2 pieces) Ostomy pouch, drainable, for use on barrier with locking flange, with filter (2 pieces) Ostomy pouch, drainable, without barrier attached (1 piece) Ostomy pouch, drainable with barrier attached (1 piece) Ostomy pouch, drainable, for use on barrier with flange (2 piece system) Ostomy skin barrier, with flange, extended wear with built-in convexity, larger than 4x4 inches Ostomy skin barrier, with flange, extended wear, without built-in convexity, 4x4 inches or smaller Ostomy skin barrier, with flange, extended wear, without built-in convexity, larger than 4x4 inches Ostomy faceplate equivalent, silicone ring Ostomy skin barrier, non-pectin based, paste Adhesive or non-adhesive, disk or foam pad Ostomy skin barrier, closed, with extended wear barrier attached, with built-in convexity Ostomy pouch, closed, with barrier, with filter Ostomy pouch, closed, with barrier attached, with built-in convexity Ostomy pouch, closed, without barrier, with filter (1 piece) Ostomy pouch, closed, for use on barrier with non-locking flange (2 pieces)
Ostomy pouch, closed, for use on barrier with locking flange (2 pieces) Ostomy supplies were placed in related categories based on discussions with an official from the United Ostomy Association. Appendix IX: Reported Information on Medicaid Coverage of Ostomy Supplies and Related Limits Number of supplies covered Percent of covered supplies with dollar caps and/or supply limits Supplies are only covered if they are used at home. Dollar caps and supply limits only apply to adults. The state has supply limits and dollar caps that can never be exceeded for certain supplies; however, some of the limits and caps are very high. For example, for one item that can never be exceeded—the ostomy belt with peristomal hernia support—the state reported that it will pay for up to 999 belts and $38,571.39 per month. There are no supply limits or dollar caps for home health ostomy supplies. Supply limits or dollar caps are only for home health. Once the accumulated dollar value of all products reaches $300 or more in a year, the state looks at the usage patterns and other information. The state reported that IBD patients often reach or exceed the $300 limit but it often allows individuals to exceed the amount with written justification. Appendix X: Reported Medically Necessary Food Products Covered by State Medicaid Program For prescription strength vitamins, the state covers prenatal vitamins for pregnant women only. Prescription fluoride vitamins are covered for children up to eight years of age. The state only covers prenatal vitamins. Food thickeners are covered for any condition, as long as they are medically necessary. For prescription strength vitamins, the state limits coverage to prenatal vitamins, folic acid, pediatric vitamins with fluoride for children less than 13 years of age, multivitamins for dialysis patients, and iron supplements. The state covers special metabolic formulas for oral administration for children under medically necessary food products.
For prescription strength vitamins, multivitamins can be covered but they must have prior authorization and meet the state's criteria for medical necessity. Coverage for prescription strength vitamins is based on documented vitamin deficiencies in the patient's medical record. Nutritional formulas taken orally must have prior authorization. CMS standard exemptions related to legend vitamins are covered. Pediatric vitamin supplements with fluoride are covered. Other pediatric legend vitamins may be covered with a statement of medical necessity. The state requires a defined/specific pathologic condition for which nutritional support is therapeutic. If the purpose of the supply is simply to provide food, then it is not considered medically necessary. The state covers general nutritional supplements. Other disease-specific products are not covered. For prescription strength vitamins, the state limits coverage for children less than two years of age or for prenatal use. For prescription strength vitamins, the state covers prenatal vitamins for women. The state does not cover nutritional shakes and vitamins. Appendix XI: Summary of Drugs Listed in Our Survey to Treat IBD That Are Covered by Medicaid for Adults and Children The state requires patients to use a generic equivalent drug, if available. The state covers brand name drugs only after documentation of medical necessity is complete. The documentation has to include a summary of benefit versus risk. The state will cover brand name drugs with prior authorization when there are generic equivalent drugs available. The state does not cover Remicade, Colozal, and Entocort for children age 11 or under. The state requires prior authorization for Remicade and Asacol. The state did not indicate whether it covered the generic drug for Azulfidine for children.
The state will pay for brand name drugs after the patient demonstrates failure of generic equivalent drugs. The state will cover brand name drugs with prior authorization when there are generic equivalent drugs available. The state requires prior authorization for Remicade. The state requires patients to use a generic equivalent drug, if available. The state may require prior authorization if a generic equivalent drug or therapeutic alternatives exist. The state requires prior authorization for brand name drugs when there is a generic equivalent drug available. The state will cover brand name drugs with prior authorization when there are generic equivalent drugs available. Appendix XII: Comments from the Centers for Medicare & Medicaid Services Appendix XIII: GAO Contact and Staff Acknowledgments In addition to the contact named above, Geraldine Redican-Bigott, Assistant Director; Shaunessye Curry; Adrienne Griffin; Ba Lin; Janet Rosenblad; and Pauline Seretakis made key contributions to this report.

Inflammatory bowel disease (IBD) affects an estimated one million Americans. IBD patients often have difficulty digesting food. As a result, they may require parenteral nutrition (intravenous feeding) or enteral nutrition (tube feeding), medically necessary food products to supplement their diets, and medications. In addition, some IBD patients must care for their ostomies--surgically created openings for the discharge of digested food. IBD advocates have recently expressed concerns regarding the ability of IBD patients to obtain the health care they need. The Research Review Act of 2004 directed GAO to study the Medicare and Medicaid coverage standards for individuals with IBD, in both home health and outpatient delivery settings.
GAO (1) identified the Medicare and Medicaid coverage standards for five key therapies used for the treatment of IBD and (2) determined what specific supplies used in these therapies Medicare and Medicaid programs will pay for. In this work, GAO examined Medicare's national and local coverage policies and conducted a survey of Medicaid programs in the 50 states and the District of Columbia. Medicare generally provides coverage for parenteral and enteral nutrition and ostomy supplies in both home health and outpatient delivery settings. However, specific standards regarding medical conditions and appropriate documentation must be met for parenteral and enteral nutrition to be covered. Medicare has one coverage standard governing the provision of ostomy supplies--that beneficiaries receiving these items have had an ostomy. Medicare does not cover medically necessary food products and generally does not cover self-administered drugs, which include most drugs taken by IBD patients. However, medically necessary drugs, including those that are self-administered, will be covered by Medicare's voluntary prescription drug benefit, which becomes effective in January 2006. State Medicaid programs reported covering, at least partially, each of the five therapies. The survey indicated that most states' Medicaid coverage standards are generally comparable to Medicare's coverage for parenteral and enteral nutrition and ostomy care. Once Medicare coverage standards are met, the program will generally cover all medically necessary supplies associated with parenteral and enteral nutrition and ostomy care. The survey of state Medicaid programs showed variation in the specific supplies that states will provide. While many states pay for most supplies associated with parenteral and enteral nutrition, the specific ostomy supplies states cover vary. Most states--46--reported covering at least some medically necessary food products. 
GAO also found that states generally cover the drugs listed in the survey. CMS said that GAO correctly described its Medicare coverage policies and suggested that GAO clarify its description of Medicare's coverage policy for prescription drugs that are not self-administered. It also said that it will continue to consider access issues for Medicare and Medicaid IBD patients.
The Budget Deficit The first deficit we face is the federal budget deficit (see fig. 1). In 2005 the unified federal budget deficit was around $318 billion, or 2.6 percent of gross domestic product (GDP). This figure is an approximation of what the federal government absorbs from private saving. Although a single year's federal deficit is not a cause for concern, persistent deficits are. Federal deficits reduce the amount of national saving available for investment. They also lead to growing federal debt, on which net interest payments must be made by current and future generations. The Saving Deficit A budget deficit represents dissaving by the government, but the U.S. suffers from an even broader national saving deficit. National saving is the sum of personal saving, corporate saving, and government saving. Last year, for the first time since 1934, net national saving declined to less than 1 percent of GDP and the personal saving rate was slightly negative (see fig. 2). Remarkably—and unfortunately—the United States has returned to saving levels not seen since the depths of the Great Depression. A negative saving rate means that, in the aggregate, households are spending more than their current income by drawing down past saving, selling existing assets, or borrowing. No one is sure why the personal saving rate has declined. One possible explanation is increases in household wealth, which surged in the late 1990s due to the stock market boom and more recently due to the run-up in housing prices. Household wealth relative to income increased from 4.7 in 1990 to 5.8 in 2005 (see fig. 3). If people feel wealthier, they may feel less need to save. Continued financial liberalization and innovation have made it easier for Americans to borrow, particularly against their real estate wealth, which may have led to greater consumption. Clearly, as the Comptroller General has said, many Americans, like their government, are living beyond their means and are deeply in debt.
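The national saving accounting described above can be made concrete with a minimal sketch. All dollar figures below are hypothetical placeholders chosen only to mirror the magnitudes discussed in the text (a roughly $318 billion budget deficit, slightly negative personal saving, and net national saving under 1 percent of GDP); they are not official statistics.

```python
# Illustrative sketch of the national saving identity: net national
# saving is the sum of personal, corporate, and government saving.
# All figures are hypothetical, in billions of dollars.

def net_national_saving(personal, corporate, government):
    """Net national saving = personal + corporate + government saving."""
    return personal + corporate + government

personal = -40.0      # personal saving (negative: households dissaving)
corporate = 460.0     # corporate saving
government = -318.0   # government saving (unified budget deficit)

saving = net_national_saving(personal, corporate, government)
gdp = 12_400.0        # hypothetical GDP, in billions

print(f"Net national saving: ${saving:.0f}B "
      f"({100 * saving / gdp:.1f} percent of GDP)")
```

With these placeholder figures, strong corporate saving is almost entirely offset by the budget deficit and household dissaving, leaving net national saving below 1 percent of GDP, as the text describes.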
This trend is particularly alarming in an aging society such as our own. Those Americans who choose to save more will certainly live better in retirement. Those Americans who choose to save less are rolling the dice on whether they will have adequate resources for a secure retirement. While Social Security provides a foundation for retirement income, Social Security benefits replace only about 40 percent of preretirement income for the average worker. As a result, Social Security benefits must be supplemented by private pensions, accumulated assets, or other resources in order for individuals to maintain a reasonable standard of living in retirement compared to their final working years. Though the aggregate wealth-to-income ratio remains relatively high, it is a misleading indicator of the financial status of the typical household because wealth is highly concentrated among a few households. While the median net worth of all families was $93,100 in 2004, the top 10 percent of families had a median net worth of over $1.4 million and the bottom quarter of families had a median net worth of about $1,700. Moreover, measures of wealth are largely based on market values, which on occasion can exhibit substantial swings. This is illustrated by the sharp run-up in stock prices in the late 1990s and their subsequent decline beginning in 2000. The only components of national saving that have not shown a long-term decline are corporate and state and local saving. In fact, corporate saving is actually high by historical standards. After declines in corporate profits in 2000-2001, corporate saving has rebounded to almost 4 percent of GDP—a level not seen since the late 1960s. The state and local sector as a whole experienced a deficit from 2002 to 2004 but has since returned to a slight surplus. The Current Account Deficit Now let me turn to the third deficit: our current account deficit.
The current account deficit is the difference between domestic investment and national saving. That is, it is the amount of domestic investment financed by borrowing from abroad. Over most of the last 25 years, the United States has run a current account deficit, but in 2005 the current account deficit hit an all-time record—$782 billion, or over 6 percent of GDP (see fig. 4). That is twice what it was only 6 years earlier. Funds from overseas have been pouring into the United States. One explanation for these inflows is that high productivity in the U.S. raised the perceived return on U.S. assets. Moreover, rising federal budget deficits and declining personal saving rates have necessitated foreign borrowing to help finance domestic investment. Another possible explanation for persistent U.S. current account deficits may be the weakness of foreign demand and the efforts of some countries to support their exports by keeping their own currencies from strengthening. Also, other countries’ populations are aging more rapidly than the U.S. population, and they may be investing in the U.S. in order to build up a stock of assets to prepare for their retirement spending. Whatever the reason for high current account deficits, policymakers should be aware of the implications these financial inflows have for the nation’s economic growth and for future living standards. While current account deficits support domestic investment and productivity growth, they also translate into a rising level of indebtedness to other countries. Figure 5 shows that the net foreign ownership of U.S. assets grew to more than 20 percent of GDP in 2005. The fact that our net indebtedness to other nations is rising more rapidly than our income raises concerns that the U.S. current account balance is on an unsustainable path.
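The accounting identity stated above can be made concrete with a short sketch. The figures are illustrative approximations in billions of 2005 dollars (only the $782 billion current account deficit comes directly from the text; the national saving figure is a stylized stand-in for "under 1 percent of GDP").

```python
# Sketch of the identity described above:
#   current account deficit = net domestic investment - national saving,
# i.e., the portion of investment financed by borrowing from abroad.
# Figures are illustrative approximations (billions of 2005 dollars).
national_saving = 112            # under 1 percent of a roughly $12.4T GDP
current_account_deficit = 782    # the 2005 record cited above

# Rearranged: net domestic investment = national saving + foreign borrowing
net_domestic_investment = national_saving + current_account_deficit
assert net_domestic_investment - national_saving == current_account_deficit
```

The rearranged form makes the testimony's point visible: when national saving is very low, nearly all net investment must be financed by capital inflows from abroad.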
Despite the growth of foreign asset holdings in the United States in recent years, the United States earned more in interest, dividends, and other investment returns from other countries than it paid on U.S. assets held by foreigners. This may seem inconsistent with the notion that U.S. assets, on average, pay a higher return than foreign assets and thus attract a large amount of foreign investment. The positive net income receipts reflect differences in the composition of foreign and U.S. investment and the higher rate of return that U.S. firms earn on their direct investments abroad compared to the earnings of foreign companies from their U.S. subsidiaries. A larger share of foreign-owned assets in the U.S. is held in portfolio investment, such as stocks, bonds, loans, and bank deposits, which pay a lower yield than U.S. direct investments abroad. A recent study by the Congressional Budget Office (CBO) attributed this to three factors. First, U.S. subsidiaries abroad have generally been in business longer than foreign-owned subsidiaries in the U.S., which contributes to greater profitability. Second, investors in U.S. subsidiaries abroad may require higher returns because those subsidiaries face greater political and economic risks than subsidiaries of foreign-owned corporations operating in the United States. Finally, some observers argue that U.S. subsidiaries abroad may overstate their profits for tax reasons, while foreign-owned subsidiaries in the United States understate their profits. However, given the nation’s increasingly negative net international investment position, it is not clear how long the U.S. will continue to earn more on its foreign investment than it pays on foreign investment in the U.S. The effect of large foreign borrowing on our economy also depends in part on how the borrowed funds are used. To the extent that borrowing from abroad finances domestic investment, the foreign borrowing adds to the nation’s capital stock and boosts productive capacity.
Thus, even though some of the income generated by the investment must be paid to foreign lenders, the investment—and hence the borrowing that financed it—augments future income. However, if the borrowing from abroad is used to finance consumption, this is not true: short-term well-being is improved, but the ability to repay the borrowing in the future is not. Both economists and policymakers are concerned about whether the United States can maintain its reliance on foreign capital inflows to sustain domestic investment. Investors generally try to achieve some balance in the allocation of their portfolios, and U.S. assets already represent a growing and significant share of foreign portfolios (see fig. 6). Although the United States accounts for 29 percent of global GDP, it received 70 percent of the net saving exported by countries with current account surpluses in 2004. Observers suggest that the United States’ favorable investment climate, including the potential for high rates of return, may explain why the U.S. absorbs such a large share of the world’s saving. However, it is probably not realistic to expect ever-increasing foreign investment in the United States. Imagine what would happen to the stock and bond markets if these foreign investors began to lose confidence and lowered their rates of accumulation, or worse yet, started to sell off their holdings. We would likely face some adverse effects in the form of higher interest rates, reduced investment, and more expensive imports.

Why Does It Matter?

Economic growth in recent years has been high despite the fact that national saving was low by U.S. historical standards. This is because more and better investments were made. Each dollar saved bought more investment goods, and a greater share of saving was invested in highly productive information technology. Also, as discussed earlier, the United States was able to invest more than it saved by borrowing from abroad.
However, we cannot let our recent good fortune lull us into complacency. While the U.S. has benefited from high levels of foreign investment in recent years, this is not a viable strategy for the long run. Many of the nations currently financing investment in the United States face aging populations and their own retirement financing challenges, which may reduce the foreign saving available for U.S. domestic investment. If the net inflow of foreign investment were to diminish, so too would domestic investment, and potentially economic growth, unless that lost saving were offset by saving here in the U.S. Also, our nation faces daunting fiscal and demographic challenges, which is all the more reason to address our nation’s low saving rates. Saving and economic growth will be key factors in preparing future generations to bear the burden of financing the retirement and health costs of an aging population.

Nation Faces Long-term Fiscal Challenges

Given our nation’s long-term fiscal outlook, acting sooner rather than later to increase national saving is imperative. The federal government’s current financial condition and long-term fiscal outlook present enormous challenges to future generations’ levels of well-being. No one can forecast with any precision what the next 75 years will look like—that would require the ability to predict changes in the economy and future legislation. However, there is a fair amount of certainty in one major driver of our long-term outlook—demographics. As life expectancy rises and the baby boom generation retires, the U.S. population will age, and fewer workers will support each retiree. Over the next few decades, federal spending on retirement and health programs—Social Security, Medicare, Medicaid, and other federal pension, health, and disability programs—will grow dramatically.
Absent policy changes on the spending and/or revenue sides of the budget, a growing imbalance between expected federal spending and tax revenues will mean escalating and eventually unsustainable federal deficits and debt that will threaten our future economy and standard of living. As Comptroller General Walker has said, “Simply put, our nation’s fiscal policy is on an imprudent and unsustainable course.” Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, closing the long-term fiscal gap through taxes alone, assuming no changes to Social Security, Medicare, Medicaid, and the other drivers of that gap, would require at least a doubling of taxes, and that seems highly implausible. GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society. Indeed, the nation’s long-term fiscal outlook is daunting under many different policy scenarios and assumptions. For instance, under a fiscally restrained scenario in which discretionary spending grows only with inflation over the next 10 years and all existing tax cuts expire as scheduled under current law, spending for Social Security and health care programs would grow to consume over 80 percent of federal revenue by 2040 (see fig. 7). On the other hand, if discretionary spending grew at the same rate as the economy in the near term and all tax cuts were extended, by 2040 federal revenues may be adequate to pay only some Social Security benefits and interest on the growing federal debt (see fig. 8). GAO’s long-term simulations show the squeeze on budgetary flexibility that the combination of demographics and health care cost growth will create. The burden on the budget and on the economy means that letting current policy continue will leave few resources for investment in new capital goods and technology and will result in slower income growth.
National Saving Critical for Long-term Economic Growth

There are three key contributors to economic growth—labor force growth, capital input, and total factor productivity (or increased efficiency in the use of capital and labor). Figure 9 shows the slowing in labor force growth (potential hours worked) over the next decade. Indeed, the Social Security and Medicare trustees project labor force growth to slow after 2010 and be negligible after 2020. Without improvements in managerial efficiencies or increases in capital formation, low labor force growth will lead to slower growth in the economy—and to slower growth in federal revenues at a time when the expenditure demands on federal programs for the elderly are increasing. This illustrates the imperative to increase saving and investment and explore other efficiency-enhancing activities, such as education, training, and R&D. Greater economic growth from saving more now would make it easier for future workers to achieve a rising standard of living for themselves while also paying for the government’s commitments to the elderly. While economic growth will help society bear the burden of financing Social Security and Medicare, it alone will not solve the long-term fiscal challenge. Closing the current long-term fiscal gap would require sustained economic growth far beyond that experienced in U.S. economic history since World War II. Tough choices are inevitable, and the sooner we act the better.

The Federal Government’s Role in National Saving

Although there may be ways for the government to affect private saving, the only sure way for the government to increase national saving is to decrease government dissaving (the budget deficit). Each generation is a steward for the economy it bequeaths to future generations, and the nation’s long-term economic future depends in part on today’s decisions about consumption and saving.
To address our nation’s daunting long-term fiscal challenges, we must change the path of programs for the elderly and build the economic capacity to bear the costs of an aging population. From a macroeconomic perspective, it does not matter who does the saving—any mix of increased saving by households, businesses, and government would help to grow the economic pie. Yet, in light of the virtual disappearance of personal saving, concerns about U.S. reliance on borrowing from abroad to finance domestic investment, and the looming fiscal pressures of an aging population, now is an opportune time for the federal government to reduce federal deficits. Higher federal saving—to the extent that the increased government saving is not offset by reduced private saving—would increase national saving and tend to improve the nation’s current account balance, although typically not on a dollar-for-dollar basis.

Reduce Federal Deficits

As the Comptroller General has said, meeting our nation’s large, growing, and structural fiscal imbalance will require a three-pronged approach: restructuring existing entitlement programs; reexamining the base of discretionary and other spending; and reviewing and revising existing tax policy, including tax expenditures, which can operate like mandatory spending programs. Increased government saving and entitlement reform go hand in hand. Over the long term, the federal government cannot avoid massive dissaving unless it reforms retirement and health programs for the elderly. Without change, Social Security and Medicare will constitute a heavy drain on the earnings of future workers. Although saving more yields a bigger pie, policymakers will still face the difficult choice of how to divide the pie between retirees and workers. It is worth remembering that policy debates surrounding Social Security and Medicare reform also have implications for all levels of saving—government, personal, and, ultimately, national.
Restoring Social Security to sustainable solvency and increasing saving are intertwined national goals. Saving for the nation’s retirement costs is analogous to an individual’s retirement planning in that the sooner we increase saving, the greater our benefit from compounding growth. The way in which Social Security is reformed will influence both the magnitude and timing of any increase in national saving. The ultimate effect of Social Security reform on national saving depends on complex interactions between government saving and personal saving—both through pension funds and by individuals on their own behalf. Various proposals would create new individual accounts as part of Social Security reform or in addition to Social Security. The extent to which individual accounts would affect national saving depends on how the accounts are funded, how the account program is structured, and how people adjust their own saving behavior in response to the new accounts. As everyone here knows, health care spending is the major driver of long- term government dissaving. This is due to both demographics and the increasing cost of modern medical technology. The current Medicare program largely lacks incentives to control health care consumption, and the cost of health care decisions is not readily transparent to consumers. In balancing health care spending with other societal priorities, it is important to distinguish between health care wants, needs, affordability, and sustainability at both the individual and aggregate level. Reducing federal health care spending would improve future levels of government saving, but the ultimate effect on national saving depends on how the private sector responds to the reductions and the extent to which overall health care spending is moderated. 
For example, reforms that reduce federal deficits by merely shifting health care spending to state and local governments or the private sector might not increase national saving on a dollar-for-dollar basis. Tax expenditures have represented a substantial federal commitment over the past three decades. Since 1974, the number of tax expenditures has more than doubled, and the sum of tax expenditure revenue loss estimates has tripled in real terms, to nearly $730 billion in 2004. On an outlay-equivalent basis, the sum of tax expenditure estimates exceeded discretionary spending for most years in the last decade. Tax expenditures result in forgone revenue for the federal government due to preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferral of tax liability, and preferential tax rates. These tax expenditures are often aimed at policy goals similar to those of federal spending programs; existing tax expenditures, for example, are intended to encourage economic development in disadvantaged areas, finance postsecondary education, and stimulate research and development. A recent GAO report calls for a more systematic review of tax expenditures to ensure that they are achieving their intended purposes and are designed in the most efficient and effective manner.

Saving Incentives

The federal government has sought to encourage personal saving both to enhance households’ financial security and to boost national saving. However, developing policies that have the desired effect is difficult. Tax incentives may affect how people save for retirement but do not necessarily increase the overall level of personal saving. Even with preferential tax treatment for employer-sponsored retirement saving plans and individual retirement accounts (IRAs), the personal saving rate has steadily declined.
For example, although tax benefits seem to encourage individuals to contribute to these kinds of accounts, the amounts contributed are not always new saving. Some contributions may represent saving that would have occurred even without the tax incentives—and may even be shifted from taxable assets or financed by borrowing. Economists disagree about whether tax incentives have been or could be effective in increasing the overall level of personal saving. The net effect of a tax incentive on national saving depends on whether the tax incentive induces enough additional saving by households to make up for the lower government saving that results from the government’s revenue loss. The bottom line is that we have many saving incentives but very little information on whether they work and how they interact.

Saving Education

A leading obstacle to expanding retirement saving has been that many Americans do not know how to save for retirement, let alone how much to save. The need to improve consumers’ financial literacy—their ability to make informed judgments and effective decisions about the management of money and credit—has become increasingly important. Congress has responded by passing legislation, such as the Savings Are Vital to Everyone’s Retirement Act of 1997 (SAVER Act). In addition, in the Fair and Accurate Credit Transactions Act of 2003, Congress created the Financial Literacy and Education Commission, which is charged with coordinating federal efforts and developing a national strategy to promote financial literacy. Also, GAO has identified financial literacy as a 21st century challenge. In a July 2004 Comptroller General forum, we discussed the federal government’s role in improving financial literacy. Among other things, forum participants suggested that the federal government serve as a leader, using its influence and authority to make financial literacy a national priority. Some federal agencies already play a role in educating the public about saving.
For example, as mandated by the SAVER Act, the Department of Labor maintains an outreach program in concert with other public and private organizations to raise public awareness about the advantages of saving and to help educate workers about how much they need to save for retirement. Also, individualized statements now sent annually by the Social Security Administration to most workers aged 25 and older provide important information for personal retirement planning, but knowing more about Social Security’s financial status would help workers to understand how to view their personal benefit estimates.

Concluding Observations

Increasing the nation’s economic capacity is a long-term process. Acting sooner rather than later could allow the miracle of compounding to turn from enemy to ally. This is why the Comptroller General has called for reimposing budget controls; reforming Social Security, Medicare and Medicaid; and reexamining the base of all major spending programs and tax policies to reflect 21st century challenges. As I said before, every generation is in part responsible for the economy it passes on to the next. Our current saving decisions have profound implications for the nation’s future well-being. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

Scope and Methodology

My remarks are based largely on our previous report National Saving: Answers to Key Questions and other related GAO products. We updated the information from the National Saving report with the most recent published data from OMB, BEA, the Federal Reserve Board, CBO and the IMF. We also reviewed some recently published studies and statements from academic journals, Federal Reserve officials, the IMF, CBO and other sources.

Contacts and Acknowledgments

Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony.
For further information on this testimony, please contact Thomas J. McCool at (202) 512-2700 or [email protected] or Susan J. Irving at (202) 512-9142 or [email protected]. Individuals making key contributions to this testimony include Rick Krashevski, Assistant Director, and Melissa Wolf, Senior Analyst.

Related GAO Products

21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. February 2005.
Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. November 15, 2004.
Federal Debt: Answers to Frequently Asked Questions, An Update. GAO-04-485SP. August 2004.
National Saving: Answers to Key Questions. GAO-01-591SP. June 2001.

See also http://www.gao.gov/special.pubs/longterm/ for information on GAO’s most recent long-term simulations and http://www.gao.gov/special.pubs/longterm/longtermproducts.html for a bibliography of GAO’s issued work on the long-term fiscal outlook.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Our nation faces a number of deficits, including our nation's budget deficit, a saving deficit, and a current account deficit. Unfortunately, America has been heading in the wrong direction on all three deficits in recent years. In 2005 our nation's budget deficit was around $318 billion or 2.6 percent of GDP. For the first time since 1934, net national saving declined to less than 1 percent of GDP and the personal saving rate was slightly negative in 2005. While the United States has run a current account deficit--or borrowed to finance domestic investment--over most of the last 25 years, the current account deficit hit an all time record--$782 billion, or over 6 percent of GDP in 2005. Despite low national saving in recent years, economic growth has been high. However, we cannot let our recent good fortune lull us into complacency. If the net inflow of foreign investment were to diminish, so too would domestic investment and potentially economic growth if that saving is not offset by saving here in the U.S. Also, our nation faces daunting fiscal and demographic challenges, which provide even more of a reason to address our nation's low saving rates. Greater economic growth from saving more now would make it easier for future workers to bear the burden of financing Social Security and Medicare, but economic growth alone will not solve the long-term fiscal challenge. Tough choices are inevitable, and the sooner we act the better in order to allow the miracle of compounding to turn from enemy to ally. |
Background

For many years, the Congress has expressed concerns about the public’s telephone access to SSA. Efforts to improve this access have resulted in a dual system of telephone service (a nationwide 800 number and local office service at more than 800 of SSA’s field offices) and have also led to the current demonstration project.

Telephone Service at SSA

In 1989, SSA established a nationwide toll-free 800 number to replace its local office telephone service. With the implementation of this service, SSA directed its local offices to remove their general inquiry telephone numbers from local phone directories. In their place, the offices listed the new toll-free 800 number. In establishing this toll-free network, SSA intended to provide all of its customers with equal and toll-free access to program services. SSA envisioned that the public would call the 800 number to ask basic questions about the program, to report changes in benefit status, to raise problems or concerns specific to Social Security records, or to make appointments with local field office staff. The public could continue to contact local office staff when necessary by requesting the unpublished telephone number for any office from SSA’s 800 number staff. The establishment of a national toll-free telephone network was planned to facilitate an agencywide 20-percent staff reduction that occurred between 1985 and 1990. By transferring a large workload from its field offices to the 800 number, SSA hoped the downsized offices would be better able to conduct nontelephone business. SSA had start-up problems when the 800 number went on-line nationwide. It had underestimated the volume of calls that would be made to the 800 number and was not able to staff the service adequately, especially when call volumes were heaviest. High busy-signal rates made it difficult for the public to reach SSA, generating complaints to SSA and to the Congress.
In response, SSA took several steps to expand its capacity to handle the volume of 800 number calls. These included increasing the staff devoted to handling calls during the heaviest calling periods, converting additional facilities to 800 number phone centers, and increasing the number of telephone lines devoted to 800 number calls. Even with these actions, busy-signal rates remained high because the number of calls placed to the 800 number continued to grow rapidly. For example, in 1990, callers placed 85 million calls to SSA, and the overall busy-signal rate was 34 percent. In 1994, callers placed almost 117 million calls to the 800 number, and the overall busy-signal rate grew to about 45 percent. During the start-up of the 800 number, these problems concerned the Congress so much that, in 1990, it required SSA to restore telephone access to local offices. As a result, SSA reinstated direct local telephone service to about 830 of its more than 1,300 local offices by publishing their telephone numbers in local directories in addition to the 800 number. However, the Congress did not provide any additional resources for SSA to either purchase telephone equipment or increase staff to handle the reinstated workload. Because it had fewer field office staff due to its downsizing in the late 1980s, SSA chose to implement the local office telephone service with a minimum number of telephone lines and staff. In June 1992, the House Committee on Ways and Means asked us to evaluate the public’s ability to access local offices that offered local phone service. In March 1993, we reported that the busy-signal rate at local offices averaged 47.3 percent during the month tested. In October 1993, SSA advised the Congress about its plans to conduct a demonstration project to enhance local office operations and perhaps improve telephone access to its local offices.
Telephone Demonstration Project: Design and Installation

To improve the public’s telephone access to its local offices, SSA is conducting a demonstration project to test telephone equipment known as automated attendant and voice mail. SSA’s demonstration project involves 30 of its field offices and three different configurations of the automated attendant and voice mail equipment (referred to as methods A, B, and C in this report). SSA wanted local offices from each of its 10 regions involved in the project, and it allowed the regions to select these offices on the basis of the type of telephone equipment they were already using and their willingness to participate in the project. Each method being tested in the demonstration project represents a different configuration of equipment. In method A offices, SSA added automated attendant and question-and-answer mail boxes to its general inquiry lines. In addition, it added voice mail to staff member extensions. A caller to method A offices hears a recorded greeting that identifies the agency, office hours, and address. This basic information answers caller questions in many cases. Callers seeking other types of assistance have other options: Callers may press the extension number of a particular employee with whom they may be working on a claim or other matter. If not already working with an SSA representative, callers may also select an automated service menu for routine matters such as reporting changes in address, making an appointment to file for benefits, or requesting an original or duplicate Social Security card. These services are provided without direct staff intervention through the use of question-and-answer voice mail messages. Finally, if callers wish to speak to an SSA representative, they can choose to hold the line until one becomes available. Method B offices operate the same way as method A offices except that one additional feature is available.
Method B offices have an additional general inquiry telephone line that plays a message advising callers that all available lines are busy. This message also states that the caller should either call at a later time or call SSA’s toll-free 800 number. Callers are connected to this line only when all the other general inquiry lines are already in use. For the demonstration, method C offices do not have any additional telephone lines, automated attendant, or the related question-and-answer mailboxes on their general inquiry lines. They have only voice mail capability at the desks of staff members. The underlying objective of the demonstration project is to improve the public’s access by making more telephone lines available to handle phone calls at local offices. The demonstration project equipment configurations have also extended service hours for method A and B offices because, with automated attendant, after-hours calls can be answered and callers can leave voice mail messages. Most method A and B offices received additional general inquiry telephone lines when SSA installed the new equipment in their offices. Local managers in some participating offices, however, did not want additional lines because they believed that they could not handle additional telephone calls without increased staffing. Table 1 shows each method A and B office and the number of general inquiry lines each had before and after equipment was installed for the demonstration project. As shown, five method A and eight method B offices received at least one additional general inquiry line.

Telephone Access Has Improved, but More Calls Are Being Placed on Hold

We found statistically significant improvement in access under method B, while method A showed no statistically significant change in access. Under method B, busy-signal rates dropped greatly, but more calls were being placed on hold.
Because method C did not involve any change to the general inquiry lines, we did not consider its effect on access to the local lines. When examining how telephone access changed at the individual offices in the demonstration, we found mixed results under both methods A and B. We also found that SSA staff in the demonstration offices strongly believe that the voice mail equipment on their desk phones enhanced efficiency and public service.

To measure changes in access for evaluation purposes, we grouped the call outcomes into two categories: access and no access. We considered access to consist of two call outcomes: calls in which we spoke to an SSA employee without spending any time on hold and calls in which we were on hold for less than 2 minutes before speaking to an SSA employee. We considered no access to consist of five call outcomes: busy signals, no answer after the phone rang 10 times, recorded messages directing us to call at a later time, calls that were disconnected before we had a chance to speak with an SSA representative, and calls in which we were placed on hold for more than 2 minutes. We selected 2 minutes as the time we would wait on hold before hanging up because we considered it a reasonable expectation and because this definition is consistent with information SSA obtained from a survey of its clients: in July 1994, SSA reported that 90 percent of respondents who used the 800 number said that being on hold for no more than 2.3 minutes would be good service.

More Calls Have Reached SSA at Method B Offices

Table 2 compares how telephone access changed with the installation of new equipment at method A and B offices. It shows that method B offices had an improvement of 23 percentage points in calls reaching SSA and that this change was large enough to be statistically significant. The method A configuration did not produce a statistically significant change in access under our test.
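The access/no-access grouping underlying these comparisons reduces each test call to a simple rule on its recorded outcome. A minimal sketch of that rule (the outcome labels below are our own shorthand, not SSA's or GAO's):

```python
# "Access" = spoke to an SSA employee immediately, or after less than
# 2 minutes on hold; every other recorded outcome counts as "no access."
ACCESS = {"answered_no_hold", "answered_hold_under_2min"}
NO_ACCESS = {"busy_signal", "no_answer_after_10_rings", "call_later_message",
             "disconnected", "on_hold_over_2min"}

def is_access(outcome: str) -> bool:
    """Classify one call outcome under the report's two-way grouping."""
    if outcome not in ACCESS and outcome not in NO_ACCESS:
        raise ValueError(f"unknown outcome: {outcome}")
    return outcome in ACCESS

def access_rate(outcomes: list) -> float:
    """Share of test calls that achieved access."""
    return sum(is_access(o) for o in outcomes) / len(outcomes)
```

For example, a wave of calls recorded as one busy signal, one immediate answer, one long hold, and one short hold would yield an access rate of 0.5 under this rule.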
Examining the results of our analysis by call outcome provides a better understanding of the changes occurring under the demonstration project. As shown in table 3, the installation of the new equipment and additional telephone lines resulted in a large drop in busy signals. After installation, busy signals dropped at method B offices by 55.2 percentage points. The large increase in the number of callers receiving the "call later" message after installation of the new equipment probably accounts, in part, for the drop in busy-signal rates.

The other substantial change shown in table 3 relates to calls placed on hold. The table shows two categories for calls placed on hold: on hold less than 2 minutes and on hold more than 2 minutes. The percentage of calls in both categories increased greatly under the demonstration. With newer equipment, more telephone lines, and a constant level of staff assigned to answer these calls, the additional calls reaching SSA are being placed on hold until staff can answer them.

Examining how access changed at each office within methods A and B showed mixed results. For example, tables 4 and 5 show that 3 of the 10 method A offices and 4 of the 9 method B offices showed statistically significant improvement in access. However, five of the method A offices and the five remaining method B offices showed no significant change, and two method A offices showed statistically significant declines in telephone access rates. Local factors such as call volumes, the number of telephone lines available, and staffing issues may account for the wide variation in access rates at the office level.

We recognize that a caller placed on hold (rather than receiving a busy signal) can be considered to have successfully reached SSA.
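The significance determinations behind tables 2 through 5 rest on comparing pre- and postinstallation proportions. The report does not name the exact test GAO applied; a standard two-sample test for a difference in proportions is one plausible reading. A sketch with illustrative figures (the 30 percent and roughly 53 percent rates below are invented to match the 23-point method B improvement on 350 calls per wave; they are not the report's numbers):

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for H0: p1 == p2, using the pooled variance estimate."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (x2 / n2 - x1 / n1) / se

# Hypothetical pre/post counts: 105 of 350 calls (30%) reached SSA before
# installation, 186 of 350 (about 53%) after.
z = two_proportion_z(105, 350, 186, 350)
significant = abs(z) > 1.96  # two-sided test at the 95-percent level
```

With these illustrative counts the z statistic is far above the 1.96 cutoff, so a difference of this size on samples of 350 calls would be statistically significant.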
In fact, SSA considers access to its 800 number successful when a caller is connected to SSA, regardless of whether the caller has spoken with a representative, heard a recorded message, spent a long period of time on hold, or hung up while on hold. Analyzing our data using this broader interpretation of access, we found that statistically significant improvement occurred under both methods A and B. These results are shown in table 6. Using this definition of access at the office level, we noted additional improvements. Among method A offices, significant improvement in access occurred in one more office: 4 of the 10 offices improved instead of 3. Among method B offices, significant improvement in access occurred in three additional offices: 7 of the 9 offices improved instead of 4.

Voice Mail Equipment Has Improved Office Efficiency and Public Service

Staff at all demonstration offices had voice mail installed on their desk telephones. We visited 12 of the 30 demonstration offices and met with office managers and staff using the new equipment. Overall, we heard almost universal praise for how the voice mail feature improved office operations and enhanced customer service. All 12 of the office managers we interviewed were enthusiastic about the new equipment's voice mail feature. Seven of the 12 managers told us that the voice mail equipment increased their claims representatives' efficiency. Other managers told us that the voice mail equipment added flexibility to their offices and improved customer service. Finally, all of these managers told us that the feedback they have received from the public about the new voice mail equipment has been positive.

We also interviewed 71 staff members who use the voice mail equipment. Most of these staff members told us that the new equipment has improved service to the public by making it easier to reach SSA.
They said that when a caller tries to reach a specific SSA representative who is not at his or her desk, the caller can leave a message on the staff person's voice mail. Furthermore, many of the staff members we interviewed told us that voice mail has enabled them to manage their workloads better and has increased their productivity. Some of these staff also told us that they no longer worry about losing messages or receiving inaccurately recorded messages. Others said that with voice mail, callers can leave messages and information needed for processing a claim. This eliminates the need for repeated calls between SSA and the public, speeding up the claims process.

SSA's Internal Evaluations of the Demonstration Project

Two separate SSA organizational entities are evaluating the telephone demonstration project. SSA's Office of Workforce Analysis (OWA) is evaluating the equipment's effect on office productivity and employee reactions. The Office of Program Integrity Reviews (OPIR) is evaluating public reaction to the equipment. Neither SSA study had been finished as of early February 1996.

OWA Study: Objectives and Methodology

OWA's study has two basic objectives: determining the equipment's effect on productivity levels and identifying employee experiences and reactions to using the equipment. To measure the new equipment's effect on productivity, OWA planned to gather and compare certain data. For example, OWA planned to examine how busy-signal rates and call volumes have changed, using data obtained from the telephone companies serving the demonstration offices. OWA also planned to measure the amount of work generated by callers using the automated services option (reporting address changes or missing checks); it has directed local offices to prepare weekly reports on the number of callers using these services. To examine employee reactions, OWA planned to have field office managers and staff who answer the telephones fill out a short questionnaire.
The questionnaire solicits information about how well the system has performed and respondents' views on ease of use and training adequacy.

OPIR Study: Objectives and Methodology

To obtain information about the public's reaction to the new equipment, OPIR planned to install caller ID equipment at 19 of the 30 demonstration offices. Offices with caller ID are to record callers' phone numbers on certain dates. OPIR prepared several different questionnaires for its staff to use when contacting callers. OPIR planned to contact 1,500 callers, 500 for each equipment configuration, but has encountered complications. Its report is to be finished in February 1996.

Conclusions

Overall, the addition of new equipment and telephone lines has demonstrated that access to SSA offices can be improved. Even if SSA does not devote additional staff to answering telephones in local offices, this technology may help improve the efficiency and effectiveness of the agency's service to the public. To fully evaluate whether to install the demonstration phone equipment in other locations, however, SSA will need to consider the public's and SSA employees' views, along with the equipment's relative costs and contributions to meeting SSA's public service goals.

Agency Comments

SSA commented on a draft of this report in a letter dated January 29, 1996 (see app. II). SSA agreed with our findings that enhanced technology has increased the public's telephone access to field offices. It also agreed with our view that a full evaluation of productivity issues, employee acceptance, and public reaction to the new equipment is needed before installing this equipment across the board. SSA noted that its internal studies on these issues will be completed by the end of February 1996. Copies of this report are being sent today to SSA and parties interested in Social Security matters. Copies will be made available to others upon request.
If you have any further questions, please contact me at (202) 512-7215. GAO contacts and staff who prepared this report are listed in appendix III.

Objectives, Scope, and Methodology

The objective of our review was to determine whether the installation of the new telephone equipment has improved the public's access to the participating offices in SSA's demonstration project. To do this, we placed phone calls to offices before and after installation of the new equipment being tested and recorded the outcomes of these calls (busy signal, placed on hold, and the like). From these outcomes, we then calculated access rates. As noted earlier in this report, SSA installed two types of new equipment at 30 field offices: automated attendant and voice mail. The equipment was installed in three different configurations, which we labeled methods A, B, and C. SSA designated 10 offices to test each method. Table I.1 shows these office locations.

Table I.1: SSA Offices Participating in the Demonstration Project by Method
[The table's three method columns did not survive extraction; the offices listed include Attleboro, Mass.; Bangor, Me.; Albany, N.Y.; Geneva, N.Y.; Petersburg, Va.; Reading, Penn.; Asheville, N.C.; Charleston, S.C.; West Indianapolis, Ind.; Cedartown, Ga.; El Dorado, Ark.; Champaign, Ill.; Norfolk, Neb.; Oklahoma City, Okla.; Roswell, N.M.; Stockton, Cal.; Winfield, Kans.; Pocatello, Ida.; and Las Vegas, Nev.]

We conducted the preinstallation phase of the test from mid-January through the end of February 1995, placing our calls on what we believed to be the 8 busiest days during that period. We reasoned that the best way to measure changes in phone access was to test performance on the busiest calling days rather than on average calling days. To identify the busiest calling days, we used information on telephone call volume to the 800 number during the same period in 1994. SSA has information that tracks the busy-signal rate for the 800 number. Using these data, we identified the 8 busiest days from mid-January through the end of February 1994.
We chose this period because SSA began installation of the new equipment at the 30 offices during the last week of February 1995. The busiest days tended to be Mondays, Fridays, the third of the month (when Social Security checks are normally delivered), and the day after a holiday. The exact days we chose for study were January 17 and 30 and February 1, 3, 6, 7, 21, and 27.

SSA had planned to complete installation of the phone service by June 1995. However, it encountered several installation problems, and by late July only one office did not yet have the equipment installed. We decided to give the field offices some time to become acquainted with the equipment. Using the 1994 call log for SSA's 800 number, we selected the following 8 days on which to conduct the postinstallation phase calls: August 22, 29, and 30 and September 5, 6, 8, 11, and 13.

Sampling Procedure

We designed the test using statistical sampling principles so that calls would be randomly distributed throughout the day and across the 30 SSA offices during each of the two 8-day test periods. To provide an adequate level of precision for our estimates of the busy-signal rates, we made 350 preinstallation calls and 350 postinstallation calls for each of the three methods being tested. To determine the time of the calls, we divided the workday into 28 15-minute segments (beginning at 9 a.m. and ending at 4 p.m.). This created 224 time periods over the 8-day test period (8 days times 28 time periods per day). Since 10 locations could be called during each of the 224 time periods, we had a total of 2,240 possible time/location combinations, each representing a possible telephone call. We numbered these combinations 1 through 2,240, with number 1 assigned to the combination of the first location and the first time period (9:00 to 9:15 a.m.) of the first of the 8 days, and number 2,240 assigned to the combination of the tenth location and the last time period (3:45 to 4:00 p.m.) on the eighth day.
We then picked at random 350 of the numbers from 1 to 2,240. For each number picked, we looked up the corresponding time/location combination that had been assigned that number and placed a telephone call at that time to that location. For example, one of the random numbers we picked was 572. We had assigned that number to location number 2 during the 9:15 to 9:30 a.m. period on the third day. As shown in table I.1, location number 2 for method A is the Flatbush office. Therefore, we placed a call to Flatbush during the 9:15 to 9:30 a.m. period on the third day. We also placed calls during the same period on the same day to the Albany and Geneva offices, locations number 2 under methods B and C.

For the postinstallation period, we placed an identical set of calls, in time and location, to those placed to estimate the busy-signal rates before installation of the new equipment. For example, since we had picked the number 572, we again placed calls to the Flatbush, Albany, and Geneva offices during the 9:15 to 9:30 a.m. period on the third day of our postinstallation test. We used the same set of 350 random numbers for both our pre- and postinstallation tests of the equipment to make our comparisons of changes in the three methods' access rates as fair as possible. By placing the preinstallation test calls on the same days and at the same times to each of the three groups of 10 locations, we hoped to minimize the effect on our estimates of variation among locations in the volume of calls received on particular days or during particular hours. Similarly, by placing our postinstallation test calls to the same locations and at the same times as those of our preinstallation test calls, we attempted to minimize the effect of variation among locations in the general call volume between the mid-January through February period and the period of our postinstallation test in August and September.
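The numbering scheme described above can be replayed in a short sketch. The decoding below assumes the numbering cycles through locations fastest, then time periods, then days, which matches the report's example (number 572 maps to location 2, the second time period, on day 3). The random seed is illustrative, not GAO's actual draw:

```python
import random

DAYS, SEGMENTS, LOCATIONS = 8, 28, 10  # 8 days x 28 periods x 10 offices
TOTAL = DAYS * SEGMENTS * LOCATIONS    # 2,240 time/location combinations

def decode(number: int) -> tuple:
    """Map a combination number (1..2,240) to (day, time period, location)."""
    index = number - 1
    location = index % LOCATIONS + 1
    period = (index // LOCATIONS) % SEGMENTS + 1
    day = index // (LOCATIONS * SEGMENTS) + 1
    return day, period, location

# Draw 350 distinct combinations without replacement, as in the test design;
# the identical draw is reused for the postinstallation calls.
rng = random.Random(1995)  # seed chosen only for reproducibility
sample = rng.sample(range(1, TOTAL + 1), k=350)
```

Reusing the same 350 numbers in both waves is what makes the pre/post comparison fair: each postinstallation call is placed to the same office, on the same relative day, in the same 15-minute window as its preinstallation counterpart.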
Adjustments to Our Sampling Plans

Several events arose during our analysis that necessitated adjusting the data for study purposes. Table I.2 summarizes these adjustments.

Table I.2: Adjustments to SSA Offices Participating in the Demonstration Project by Method
[The table's method columns and "affected demonstration office" markers did not survive extraction; the offices listed match table I.1, with Charleston, W.Va. appearing in place of Charleston, S.C.]

Due to unforeseen events, we could not complete our comparison exactly as planned. Some of the field offices had to be dropped from the study or moved to another method. SSA did not install new equipment in the St. Paul or Bangor field offices as had been planned; therefore, we excluded those offices from our study. We also discovered that the phone number we had used in the first phase of the study for the Murray field office was incorrect, so we excluded this office from our analysis. Finally, the Las Vegas field office, which was to receive equipment for method C, instead received the equipment for method A. These adjustments resulted in 10 field offices using method A, 9 using method B, and 8 using method C in our analyses.

For each method, we estimated the proportion of times that the public would have accessed SSA when calling the offices in the test during the 8 days on which we placed calls. Because our estimates apply only to those 8 days and are based on a limited number of phone calls, each estimate has an associated sampling error. At the 95-percent confidence level, sampling errors for our estimates of access rates under each method (both pre- and postinstallation) are about 5 percentage points. Sampling errors for our estimates of changes in access rates under each method are about 7 percentage points.
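The quoted sampling errors follow from the usual large-sample formula for a proportion. A quick check at the conservative worst case p = 0.5, with n = 350 calls per method, reproduces the figures above:

```python
import math

Z95 = 1.96      # two-sided 95-percent confidence multiplier
n, p = 350, 0.5  # 350 calls per method; p = 0.5 maximizes the error

margin_rate = Z95 * math.sqrt(p * (1 - p) / n)        # one access-rate estimate
margin_change = Z95 * math.sqrt(2 * p * (1 - p) / n)  # pre/post change, treating
                                                      # the two waves as independent

print(round(100 * margin_rate, 1))    # about 5 percentage points
print(round(100 * margin_change, 1))  # about 7 percentage points
```

Treating the two waves as independent is itself an assumption; because the same times and locations were reused, the true error for the change could be somewhat smaller, but the 7-point figure is the conservative bound.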
In many instances, sampling errors for estimates of access rates at individual offices are substantially higher.

Questionnaire

We designed a simple computer-assisted telephone interview to collect data on the outcome of each telephone call attempt. The information collected included whether (1) we got a busy signal, (2) the phone rang without being answered (we hung up after 10 rings), (3) a person answered, (4) we were placed on hold (we waited 2 minutes before hanging up), and (5) we were disconnected.

Comments From the Social Security Administration

GAO Contacts and Staff Acknowledgments

In addition to those named above, the following individuals made important contributions to this report: Jim Wright and Jay Smale developed our sample design and the computer-assisted interview instrument used to record telephone call outcomes; Inez Azcona and Jeffrey Bernstein collected the data, visited local SSA offices, and helped prepare this report; and Wayne Turowski and Steve Machlin did the computer programming and analysis of data.
Background

Roles and Responsibilities for DOD Advertising Programs

Under Title 10 of the United States Code, the Secretary of each military department (the Army, the Navy, and the Air Force) has the responsibility to recruit personnel, subject to the authority, direction, and control of the Secretary of Defense. As such, each Secretary has the authority to organize and delegate responsibility for advertising efforts within each military service or its components; as a result, the organizational structure of the advertising programs and the associated recruiting organizations differs among the military departments.

While advertising is carried out by the military services, some roles and responsibilities for advertising reside with the Office of the Under Secretary of Defense for Personnel and Readiness. Within that office, the Accessions Policy office within Manpower and Reserve Affairs has responsibility for (1) policy, planning, and program review of active and reserve personnel, procurement, and processing and (2) development, review, and analysis of policies, resource levels, and budgets for recruiting enlisted personnel and for officer commissioning programs. One function for recruiting enlisted personnel and officer commissioning programs is advertising.

Further, the Joint Advertising, Market Research and Studies (JAMRS) office reports to the Director of DOD's Defense Manpower Data Center and is responsible for joint marketing communications and for market research and studies. JAMRS conducts research about the perceptions, beliefs, and attitudes of American youth as they relate to joining the military, often referred to as the propensity to serve. JAMRS officials stated that understanding these factors helps ensure that recruiting efforts are directed in the most efficient and beneficial manner.
JAMRS has in the past carried out joint advertising aimed at "influencers," the adults who can influence a potential recruit's decisions regarding military service. JAMRS also maintains a database that the military services use to begin their outreach to potential recruits.

Types of Advertising Conducted by the Military Service Components

The military service components conduct advertising in support of their recruitment missions. Consistent with the private sector, the components' advertising programs follow a strategy that considers the phases of an individual's decision-making process, sometimes referred to as the consumer journey. The decision to enlist in the military is a significant commitment and can be affected by numerous factors, such as other employment or educational opportunities available to an individual considering a military career. According to military service officials, the components generally characterize these phases as awareness, engagement, and lead generation, as illustrated in figure 1. The goal of military advertising is to move a potential recruit through each phase and, ultimately, to a decision to enlist. Further, each military service conducts advertising throughout each phase that, according to military service officials, is intended to communicate and reinforce a certain brand or image among potential recruits, leading them to determine that a particular military service is the best fit for their individual interests, beliefs, or goals.

Awareness. The military service components conduct general awareness advertising to inform members of an audience about the opportunity to serve in the military and the distinct characteristics of each military service. The components typically pursue awareness through traditional advertising formats such as television commercials, print advertisements, and banners at events or signs within a community.

Engagement.
Advertising focused on building engagement targets individuals who are aware of the military as a career option and have begun to consider the possibility of enlisting. During this phase, the components seek to provide recruits with additional information to aid their decision-making. This phase of advertising often takes place in the digital environment, as components provide informative social media posts and use banner advertisements to attract individuals to their websites for more information. Figure 2 describes a variety of digital advertising activities conducted by the components.

Lead Generation. Lead generation advertising targets individuals who have considered military service and are ready to discuss the possibility of enlistment. As such, lead generation activities seek to encourage these individuals to provide their contact information in order to schedule an opportunity to meet with a recruiter. Lead generation is often conducted in person, such as through recruiters' presence at events like career fairs or sports games. It may also be conducted through other means, such as direct mail and online or print classified advertisements, as long as the advertisement features a "call to action" intended to prompt viewers to provide their contact information. Further, the military services often employ "mobile assets," such as large trucks and trailers fitted with equipment and activities intended to draw crowds and to encourage and facilitate public interaction with a recruiter at an event in order to generate leads. Figure 3 shows examples of various types of military service advertising used for recruiting purposes, such as mobile assets used at recruiting events to advertise a specific military service, digital advertising on social media, and print brochures.
DOD Has Coordinated Some Advertising Activities among Its Components, but Has Not Developed a Formal Process for Coordination

DOD Has Taken Steps to Coordinate Some Advertising Activities

DOD has taken steps to coordinate some advertising activities among the military service components. Within the military departments, there are seven military service component advertising programs (see app. II for more details) that compete to attract recruits from a relatively small pool of individuals eligible for military service. Private sector advertising industry experts we spoke to emphasized the importance of maintaining a unique brand and strategy for each of the service components when there is competition for a target audience. According to DOD officials, each component works to develop a unique brand that differentiates the military services in order to compete for potential recruits. While industry experts stated that competition is inherent to advertising, they also stated that coordination can sometimes be beneficial in increasing efficiency and effectiveness, and that DOD could pursue greater coordination in some instances to help address any inefficiencies.

Despite the competition among the military components' advertising programs, DOD has coordinated certain advertising activities. For example, DOD established the JAMRS office in 2002 to create a centralized program for joint market research and communication. JAMRS provides the military components with information from surveys of U.S. youth attitudes toward joining the military, which change and evolve considerably over time. Officials across the military components stated that they relied heavily on the market research conducted by JAMRS and reported using the information to tailor their advertising and recruiting activities to address the interests of U.S. youth.
In the past, JAMRS provided joint advertising campaigns directed at influencers, the adults, such as parents or coaches, who might affect an individual's decision to join the military. According to JAMRS officials, joint advertising campaigns of any type have not been carried out in recent years due to budget constraints. Military service component officials stated that JAMRS' joint advertising campaigns had been important for building awareness among influencers and for promoting a positive image of military service to the U.S. public. Further, these officials stated that the influencer advertising provided by JAMRS was beneficial for all the military services and that, given the resources needed to conduct service-specific advertising, it is not feasible for their components to conduct additional advertising focused on influencers.

Additionally, the Office for Accessions Policy within the Office of the Under Secretary of Defense for Personnel and Readiness has taken specific steps to coordinate among the components to increase the effectiveness of DOD's advertising and to address shared advertising challenges that are crosscutting in nature and can affect all of the military service components. For example, the office coordinated a response to media and congressional interest in recent sports advertising activities that were questioned as inappropriate. Further, the Under Secretary responded to this crosscutting challenge in September 2015 by issuing interim guidance for sports marketing that applied to all service components to prevent inappropriate activities from occurring as part of sports advertising.
In addition, DOD officials stated that, in an effort to improve the effectiveness of DOD's advertising, the Office for Accessions Policy in 2008 convened a cross-service working group of military service component advertising officials to develop a set of consistent performance measures that each component could use to assess performance. However, DOD officials stated that consensus was not reached, and the measures were not developed.

Officials from the military service components stated that they meet about quarterly with their counterparts at meetings held by JAMRS to obtain their respective results from JAMRS' joint market research, and that at these meetings they occasionally discuss crosscutting issues affecting the services' advertising programs. These discussions have at times provided an opportunity to exchange effective advertising practices and lessons learned across service programs. Further, military service officials stated that they have long-standing working relationships with their counterparts among the services and do on occasion share some information during these discussions. However, the officials stated that they may not share comprehensive details of best practices or lessons learned because of the competition for recruits among the components and because they are not required to share information or coordinate.

DOD Does Not Have a Formal Process for Coordination among the Services, Which Can Result in Possible Unnecessary Duplication, Overlap, and Fragmentation

The private sector executives and professional association representatives we spoke with stated that while competition is an inherent aspect of advertising, increasing coordination should be considered in order to reduce inefficiencies and leverage resources effectively.
Further, we have found in prior work that mechanisms to coordinate programs addressing crosscutting issues may reduce the instances of potentially duplicative, overlapping, and fragmented efforts. While DOD has taken some steps to coordinate, the department has not established a formal process to ensure that service component advertising officials have a forum to share information and address crosscutting issues systematically, and that these discussions occur consistently, to help ensure an efficient use of resources in the competitive recruiting environment. In the absence of a formal process for coordination, we found examples of possible unnecessary duplication, overlap, and fragmentation within DOD's advertising activities.

Risk of Increased Cost of Advertising Media Purchases. Industry experts and various military service officials stated that as each military service component purchases advertising space in the same media market, such as time to air an advertisement on radio or television, competition among the services could increase the prices of these purchases. Further, they stated that working together could increase buying power for certain activities. As such, from a department-wide perspective, DOD is at risk of making duplicative purchases in the same market with no coordination regarding the cost and effectiveness of those purchases. Officials from one military service component acknowledged that the competition for media purchases most likely results in a more expensive way of doing business for the entire military.

Multiple Contracts for Similar Functions. While each service component's advertising agency works with the component to develop a unique brand that differentiates the military services, the components each contract for some functions that are not brand specific.
For example, service officials reported that each component has advertising contracts that include services such as call centers and website chat-function support that respond to requests for information generated by advertising, and that screen potential recruits against general criteria for military service. As such, there are at least seven call center functions that the department uses to screen potential recruits. Most of these call centers are staffed by contractors, field inquiries from the general public, and do not provide a service-specific function. Thus the department is paying several different companies to provide a similar service. Advertising industry experts pointed to functions that are not brand specific as possible opportunities for obtaining efficiencies through coordination or consolidation. However, military service officials disagreed with this assessment and stated that the benefits that might result from consolidation of these types of functions are unclear and that call centers may provide brand-specific information. For example, officials from the Army National Guard stated that trained individuals, sometimes Army National Guard members, can offer state- or territory-specific guard information when responding to calls from their contracted call center. As discussed below, we also found examples where better coordination within and among the military service components could increase efficiencies or effectiveness by addressing the fragmentation of these advertising programs. Three Advertising Programs within the Air Force. The Air Force has three components that each contract with a different advertising agency to develop and implement three separate advertising programs. Maintaining three separate programs can lead to inefficiencies. For example, officials from the Air Force reserve and guard components stated that they do not have the resources to do marketing mix modeling.
Marketing mix modeling is a best practice employed by agencies to determine the most efficient and effective allocation of a client’s budget toward media buys, including print, television, and digital advertising. However, if the Air Force components coordinated on similar advertising functions that are not part of their unique branding, such as marketing mix modeling, they could potentially afford to jointly contract for such functions. In response to our questions about why they had not coordinated these programs, officials from the components stated that both the reserve component and the Air National Guard need to focus on the geographic location of potential recruits to fill vacancies, whereas this is not a concern for the active duty component. However, they further stated that there had been discussion in the past of consolidating some components, possibly the guard and reserve components, but that this was not pursued. Officials could not provide any further rationale for requiring separate programs or for why further efforts at consolidation were not pursued. Two Army Advertising Programs. There are two Army advertising programs, one for the Army active duty and reserve components and another for the Army National Guard. Better coordination between these two programs could result in more effective use of resources for a common purpose. Army National Guard officials stated that they do not have the resources to fund some needed services that the active Army could support. For example, Army National Guard officials stated that they do not procure warehouse space for the storage of advertising materials for guard units throughout the country or mobile assets to deploy for National Guard campaigns, which the active Army could potentially support.
Army National Guard officials stated that regular coordination does not exist between these two programs and that, to address these shortcomings, they have begun to engage in discussions with the active Army component to further explore coordination for these activities. A January 2016 report from the National Commission on the Future of the Army stated that the separation of the Army’s recruiting programs and associated advertising is inefficient and unproductive. The report recommended the establishment of pilot programs that align the recruiting efforts of the active duty Army, Army National Guard, and Army Reserve, stating that consolidating the administration and budgeting of recruiting, advertising, and branding for all components will yield increased effectiveness and efficiency. Army National and Local Advertising. We also found fragmentation in the Army’s active and reserve advertising programs resulting in coordination and communication challenges. Within the active and reserve Army components’ advertising program, the Army has two organizations that share some degree of responsibility for carrying out advertising activities: its Army Marketing and Research Group advertising office, which directs the Army’s national level advertising, and U.S. Army Recruiting Command, which directs local advertising for recruiting. Senior officials from both organizations stated their organization is responsible for lead generation, and recruiters generate leads at both national and local events. While we acknowledge the importance of maintaining a unique brand for the service components, a formal process for coordination could allow the military service components to more effectively share best practices. As part of this process, a review of DOD’s existing programs could potentially identify opportunities to obtain efficiencies by reducing unnecessary duplication, overlap, and fragmentation that may exist within and among military service components’ advertising programs.
Because numerous department officials cited a lack of needed resources to appropriately carry out advertising in support of the difficult task of recruiting, identifying and reducing unnecessary duplication, overlap, and fragmentation could potentially free up additional resources. In the absence of a formal process for coordination of the department’s several advertising programs, DOD may not be positioned to best leverage its advertising funds. DOD Has Generally Followed Commercial Best Practices for Assessing the Effectiveness of Advertising, but Components Vary in Their Ability to Determine Whether Their Activities Are Generating Recruitment Leads With some exceptions, DOD—through its military service components and together with their contracted advertising agencies—generally follows commercial best practices that we identified for assessing the effectiveness of advertising, shown in table 1. We determined that a well-defined and widely accepted list of best practices had not been established; therefore, we asked a nongeneralizable sample of advertising experts and professional associations to identify best practices. We compiled and condensed the identified best practices into a list that we then validated with those same industry experts. However, when we compared the military service components’ advertising programs against the best practices that we identified, we found variations among the components in establishing measurable goals. Further, differences across the components in their processes for collecting and reviewing performance data have resulted in varying abilities to measure the effectiveness of advertising on generating leads, especially at the local level.
Components Have Structured Their Advertising Programs’ Organization to Safeguard against Bias in Performance Measurement and Coordinate Vendors We found that the military service components have largely established roles and responsibilities to address the need for unbiased performance information and have ensured that coordinator roles were assigned when multiple vendors were used. Industry experts we spoke with stated that advertising decisionmakers should consider whether to distribute responsibilities for various aspects of advertising—including creative development, media buying, and performance analysis—across different vendors, but when doing so, should ensure that a coordinator role is assigned to maintain a cohesive advertising strategy. While the majority of the military service components rely heavily on their contracted advertising agencies to assess the effectiveness of their advertising activities, they also adhere to these best practices, as described in the examples below. Consistent with this commercial best practice that we identified, for example, we found the following: A Marine Corps official responsible for overseeing the component’s advertising program stated that four Marine Corps project officers are designated to work closely with its advertising agency in monitoring and analyzing performance across key areas. Officials from the Army, which has the largest advertising budget of the components, stated that the Army had additionally contracted with third-party research firms to further assess the effectiveness of its advertising. All of the components independently review quantitative performance data, which they may access through their own data systems or through reports provided by the advertising agencies. Military service officials stated that monitoring such data can help them to assess the performance of both their advertising activities and their contracted advertising agencies.
Each of the components also cited research conducted by DOD’s JAMRS program as a source of unbiased performance data. For example, JAMRS conducts a quarterly advertising tracking survey, evaluating the target audience’s recall of and reactions to the components’ television advertising campaigns, which service officials stated can help the components assess the effectiveness of that specific type of advertising. Because each component contracts with a lead advertising agency, the components typically adhere to this best practice, with the agency fulfilling the coordinator role in conjunction with the component. Components Vary in the Extent to Which They Meet Commercial Best Practices on Planning That We Identified While all of the military service components develop evaluation frameworks that identify a target audience in accordance with the commercial best practice, the goals set by the components vary in measurability. In our prior work, we have also found that high-performing organizations have goals that are aligned with performance management. We found that almost all of the service components typically develop an annual advertising plan, including information about their target market and goals to be met by advertising that year. All of the plans we reviewed demonstrated a detailed understanding of the target market. For example, the Marine Corps’ fiscal year 2015 plan recognizes the varying levels of the propensity to join the military among its target market and identifies an opportunity to focus on “the movable middle”—the portion of prospective recruits who are not currently inclined to join the military but might be willing to consider it. Likewise, reflecting its organizational structure of 54 state and territorial units, the Air National Guard’s 2015 annual advertising plan emphasizes targeting advertising to audiences located near units where career opportunities are available.
However, the commercial best practice on planning that we identified also calls for advertising plans to include measurable goals. In our review of the most recent version of each component’s annual advertising plan, we found that the components’ plans varied in the extent to which their goals can be measured. For example, the Army plan we reviewed identifies a series of marketing objectives supported by measurable, numeric goals related to the public perception of the Army. In contrast, the Marine Corps and active duty Air Force plans state goals that relate to emphasizing positive aspects of the components, such as diversity and core values, but these goals are neither numeric nor paired with related performance measures. For example, the most recent annual advertising plan of the Marine Corps includes a goal to weave diversity into all advertising efforts, while the Air Force annual plan contains a goal that all advertising tell the Air Force story in a way that highlights Air Force core values. Neither plan includes information on how these goals would be measured. We also found that the annual plans of the Air National Guard and Air Force Reserve had a mixture of the types of goals identified, some of which could be measured. In addition, Army National Guard officials stated that while the Army National Guard requires each state unit to create an annual plan that includes goals, the Army National Guard headquarters does not establish annual goals at a national level. Army National Guard officials stated that there is not a national level annual advertising plan with associated goals because each state unit has a unique recruiting mission, including the types of positions that need to be filled and the demographic makeup of the target audience.
The absence of measurable goals at the national level may limit Army National Guard headquarters’ ability to determine the success of any national level advertising efforts or to distribute advertising funds strategically and efficiently among the state units. Military service officials stated that the annual advertising campaigns and associated advertising plans for each component can change considerably from year to year. Ensuring that future iterations of each component’s annual advertising plan contain measurable goals could enhance their ability to demonstrate the success of their advertising programs. The components also vary in their use of sophisticated modeling to determine how to distribute available advertising funds across different types of advertising (e.g., television, print, Internet, etc.), which was a key planning best practice cited by industry experts. Specifically, industry experts stated it is a best practice to use some form of modeling, such as marketing mix modeling, to determine the optimum distribution of advertising funds. Officials from the Army, Marine Corps, and Air Force active duty components stated that they use such modeling, provided by their contracted advertising agencies, to determine how to spend their advertising funds. However, the remaining components do not currently leverage marketing mix modeling, and Air Force Reserve and Army National Guard officials cited its high cost as a barrier. Industry experts acknowledged that, while marketing mix modeling is a best practice, it can present a significant expense, and other methods of modeling may be more appropriate in some situations. As such, Army National Guard officials stated that marketing mix modeling might not be well suited when mission requirements fluctuate from year to year or when the geographic locations of vacancies are a primary concern.
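At its core, marketing mix modeling of the kind described above regresses an outcome of interest (such as leads generated) on spend by advertising channel to estimate each channel's contribution, and a planner then shifts budget toward the channels with the highest estimated return. The following is a minimal illustrative sketch only; the spend and lead figures are entirely hypothetical and do not come from any military service component's data:

```python
import numpy as np

# Hypothetical weekly data: spend (in $ thousands) by channel, and leads observed.
# Columns: TV, digital, print.
spend = np.array([
    [100.0, 40.0, 10.0],
    [ 80.0, 60.0, 10.0],
    [120.0, 30.0, 20.0],
    [ 90.0, 50.0, 15.0],
    [110.0, 45.0,  5.0],
    [ 70.0, 70.0, 10.0],
])
leads = np.array([520.0, 540.0, 500.0, 530.0, 505.0, 560.0])

# Add an intercept column to capture baseline leads not driven by advertising.
X = np.column_stack([np.ones(len(spend)), spend])

# Ordinary least squares: each coefficient estimates leads generated per
# $1,000 of spend in that channel (intercept first).
coef, *_ = np.linalg.lstsq(X, leads, rcond=None)
baseline, tv, digital, print_ = coef

print(f"baseline={baseline:.1f}, per-$1k: TV={tv:.2f}, "
      f"digital={digital:.2f}, print={print_:.2f}")
```

Real marketing mix models add considerable machinery (seasonality, lagged "adstock" effects, diminishing returns), which is part of why industry experts cited its expense; this sketch only shows the basic attribution idea.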
Components Have Taken Some Steps to Measure Performance, but Some Have Insufficient Data to Measure Whether Outcomes Are Attributable to Advertising Components Successfully Applied Industry Standard Measures for Two of the Three Purposes of Advertising Activities We found that the military service components successfully applied industry standard measures appropriate for two of the purposes of advertising activities—awareness and engagement—but varied in their ability to assess the effectiveness of advertising related to the third activity, lead generation. For example, to measure the effectiveness of awareness-related advertising, such as television commercials or national print advertisements, the components typically measure the number of times an advertisement is viewed by a member of the audience, a metric known as impressions. To measure engagement, the components use a range of real-time digital analytics, including click-through rates and social media “likes,” among others, when they conduct digital advertising. According to military officials and representatives of their advertising agencies, it is an industry standard to directly link the results demonstrated by these analytics to the purchase of advertising for awareness and engagement. The components rely on their advertising agencies to negotiate these purchases with the intent of achieving a level of performance that is consistent with industry standards for these forms of advertising. In contrast, we found that the components varied in their ability to use industry standard performance measures to assess the effectiveness of advertising activities focused on lead generation. Service officials stated that such activities can include recruiter booths at events, direct mail, and certain types of print or digital advertising, for which performance is measured on the basis of how many leads are generated by the activity.
Whereas military service officials stated that advertising focused on generating awareness and engagement is generally executed at the national level, lead generation activities are often carried out at the local level and can depend on recruiters’ familiarity with and knowledge of their local markets. For example, service officials stated that recruiters may select high school populations to receive direct mailings or interact with potential recruits at local events such as career fairs or sports games and may also be responsible for identifying the number of leads generated by such activities. As a result, the responsibility for measuring the performance of locally executed lead generation advertising activities is carried out, in part, at the local level, and some components do not obtain or measure the performance data needed to assess the effectiveness of these activities. Shortcomings in Measuring Lead Generation Contribute to Difficulties in Understanding How Outcomes Can Be Attributed to Advertising Insufficient data to measure the performance of local lead-generating activities have diminished some components’ ability to understand how to attribute outcomes to specific advertising activities. According to industry experts and our prior work, determining the precise impact of advertising on outcomes, such as recruitment, is inherently challenging, in part due to the concurrent effects of external factors, such as the influence of family support and the availability of other career or educational opportunities. In addition, the length and complexity of a decision to enlist in the military necessitates the use of multiple types of advertising throughout the recruiting process, and an individual may ultimately decide to enlist months or years after first being exposed to advertising by the military.
Although industry experts acknowledged that understanding how outcomes are impacted by advertising is a challenge both for the private sector in general and for the military in particular, they stated it is nonetheless important for advertising managers to develop an understanding of how outcomes can be attributed to advertising. With the exception of certain new requirements specific to sports advertising contracts, DOD does not require that the service components measure the performance of their advertising activities, and the extent to which the components can currently measure the effectiveness of their advertising activities varies. As such, measuring and monitoring performance is at the discretion of the military service components, and there is variation in the components’ processes for collecting and reviewing advertising activity performance data. We found that some components either do not collect local advertising performance data related to lead generation in their systems or have concerns about the reliability of such data. Specifically, in terms of collecting this type of performance data, service component officials stated the following: The Marine Corps’ leads tracking system allows the Marine Corps to link multiple exposures to advertising—such as a direct mailing or interaction with a recruiter at an event—to a prospective recruit’s lead record, enhancing the ability to analyze performance in lead generation. While the Navy requires that performance data for local advertising be entered into its data systems, it currently does not have the capability to analyze performance in the same manner and cannot attribute potential leads to multiple exposures to advertising. Air Force active duty recruiting squadrons are not required to report to their headquarters on the performance of their advertising activities, including the performance of local advertising activities in lead generation.
However, Air Force officials stated they conduct and assess the performance of the majority of their advertising at the headquarters level, and a comparatively small portion of advertising funds is distributed to the squadrons for local advertising activities. Both the Air Force Reserve and Air National Guard require that recruiters submit performance data, such as attendance and leads generated, for any local advertising activity and that headquarters officials review these data in their lead tracking data systems. The Army National Guard does not routinely require state units to provide headquarters with performance data related to advertising, including lead generation. The Army requires in its policy that subheadquarters units submit advertising performance data for headquarters review, but Army advertising officials cited concerns with the reliability of these data for lead generation. Regarding the Army’s concerns about unreliable data, Army officials stated that when leads are collected by recruiters at advertising events, in many cases those leads are coded in their data system as “recruiter generated,” rather than being attributed to the appropriate advertising activity. As a result, Army officials stated that although they believe their locally executed advertising activities are a good investment, they do not have sufficient evidence to demonstrate their effectiveness. For example, while Army Recruiting Command officials cited “register to win” giveaways for promotional items at local events as an effective lead generator, Army Marketing and Research Group officials stated they do not have data to support the effectiveness of this advertising activity and questioned the quality of the leads produced. Army officials stated that they are currently working to address this issue to ensure leads are properly coded and thus improve the reliability of these data.
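The attribution problem described above can be thought of as a lookup: a lead captured at a known advertising event should receive that event's code, with "recruiter generated" reserved as a fallback for leads that truly have no event to attribute. A minimal sketch of that logic, with hypothetical event records, codes, and field names (nothing here reflects any component's actual system):

```python
from datetime import datetime

# Hypothetical schedule of advertising events:
# (location identifier, start, end, event code).
EVENTS = [
    ("county_fair_A", datetime(2016, 6, 4, 9), datetime(2016, 6, 4, 18), "EVT-1001"),
    ("career_expo_B", datetime(2016, 6, 4, 10), datetime(2016, 6, 4, 16), "EVT-1002"),
]

def assign_event_code(location: str, captured_at: datetime) -> str:
    """Return the code of the event at which a lead was captured.

    Falls back to the generic 'RECRUITER-GENERATED' code when no scheduled
    event matches; overuse of this fallback is exactly the miscoding
    problem that makes lead data unreliable for attribution.
    """
    for loc, start, end, code in EVENTS:
        if location == loc and start <= captured_at <= end:
            return code
    return "RECRUITER-GENERATED"

print(assign_event_code("county_fair_A", datetime(2016, 6, 4, 12)))
print(assign_event_code("mall_kiosk", datetime(2016, 6, 4, 12)))
```

The first call matches the scheduled fair and returns its event code; the second has no matching event and falls back to the generic code.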
We did not find the proper coding of lead generation to be an issue for other service components; for example, according to service officials, recruiters from the Marine Corps and active duty Air Force who attend an event typically record a potential lead’s information into an electronic tablet that automatically assigns the proper event code for the lead based on the recruiter’s location and the date and time of the lead’s entry. Federal internal control standards state that program managers need appropriate data to determine whether they are meeting stated goals and achieving an effective and efficient use of resources. Without access to the necessary performance data, such as reliable leads collection and attribution data, a military service component may be limited in its ability to measure the performance of its advertising activities against stated goals. Without processes in place to facilitate the measurement and monitoring of advertising performance across all levels—especially at the local level—the military service components may be unable to ensure that advertising dollars are used efficiently to help meet stated recruiting goals. In the Absence of Policy, DOD Does Not Have Comprehensive Oversight of Its Components’ Advertising Activities DOD does not have comprehensive oversight of the military service components’ advertising activities, as it does not have a policy that defines its oversight role as well as procedures to guide the components’ respective advertising activities. Federal internal control standards for an agency’s organizational structure call for planning, directing, and controlling activities to ensure goals and objectives can be achieved. Further, these standards require that an agency’s activities be directed by policies and that management provide appropriate oversight of activities.
When DOD issues policy, such as directives and instructions, the department requires that the policy establish roles and responsibilities and define the procedures that are to be followed by all defense offices and organizations involved with the activity or program the policy directs. DOD’s Components Oversee Their Own Programs, Which May Have Led to Negative Effects in Some Instances As there is no department-wide policy that defines DOD’s role in overseeing advertising activities or the procedures that should be followed when the components carry out their advertising activities, the department’s advertising activities are overseen at the military service level and, in some cases, within a service’s individual active, reserve, or guard components. Each military service component has component-level policy and guidance that defines the overall objectives of its advertising program and sets forth the roles and responsibilities for the component’s advertising program. The high-level policy is in many cases supplemented by yearly guidance that communicates the recruiting goals and the priorities of the commanding officer or senior leader responsible for a component’s advertising. However, the high-level policies issued by each service component for its advertising program vary considerably in the level of oversight and direction provided, as the procedures the components are to follow as they carry out their respective advertising activities have not been defined by DOD policy. For example, the Navy’s guidance includes financial thresholds for headquarters-level review of certain expenditures and the types of cost-effectiveness reviews that must be performed, while other components’ guidance does not address the review of expenditures. The Marine Corps guidance specifies some advertising activities that are prohibited, whereas other components’ guidance—including the Air Force’s—does not include this type of detail.
The variation in the oversight and direction provided to each service component’s advertising program has allowed for an inconsistent understanding of digital advertising rules and regulations and for negatively perceived activities to occur in some instances. Digital Advertising. We found differences in the understanding of rules and regulations that apply to digital advertising activities. The government is restricted in tracking the digital behavior of those who visit government websites. However, we observed differing opinions and understanding among military service officials regarding what types of digital tracking were permissible. Officials from the Air Force active duty component’s advertising program and representatives from their contracted advertising agency stated that persistent tracking would be used in their new website development and that these plans adhere to relevant DOD guidance. Further, they stated that subsequent advertising could be sent to individuals after they visit the Air Force website. Officials from the Office of the Under Secretary of Defense for Personnel and Readiness stated that they had reviewed the plans of the Air Force regarding digital tracking and subsequent advertising and agreed that these actions would be permissible and in accordance with DOD guidance. However, officials from other military services we spoke with described these same digital advertising strategies as not allowed per DOD guidance and stated that they could not pursue digital tracking similar to that of the private sector. The other services could be at a disadvantage as a result of these different interpretations of regulations. Sports-Related Advertising.
While contracting with sports teams and events for advertising is widely practiced among the military service components, media reports from 2015 and congressional attention revealed that some components’ contracts with professional sports teams included provisions to conduct ceremonies that honored servicemembers and to provide items such as tickets to games, which were perceived to be inappropriate and came at a cost to the federal government. In our review of contracts from fiscal years 2013 and 2014, we found several contracts with a marketing firm that connects brands to collegiate sports teams that included costs for honorary or swearing-in ceremonies and items that are personal in nature. For example, we identified a contract spanning fiscal years 2013 and 2014 in which a major public university received $20,000 from a state Army National Guard unit for two swearing-in ceremonies for recruits conducted at sporting events, and another state Army National Guard unit contracted in fiscal year 2014 for a “VIP experience” at university sporting events costing approximately $8,700. DOD Has Taken Some Steps to Improve Oversight but Does Not Have a Department-Wide Policy to Guide the Components’ Advertising Programs Following the negative media reports in 2015 and resulting DOD reviews, in September 2015 the acting Under Secretary for Personnel and Readiness issued interim guidance that provided more focused direction and strengthened oversight of sports marketing and advertising contracts. The interim guidance acknowledged the inappropriateness of paying for recognition or swearing-in ceremonies during sports events or of including items that are personal in nature, typically sports tickets or parking, for which the receipt of those items is not clear or controlled.
Specifically, this guidance (1) requires that a senior military component reviewing official approve sports marketing contracts, (2) prohibits paying for recognition ceremonies, (3) restricts items that are personal in nature, and (4) requires reporting and analysis on the returns generated by larger sports partnerships. While the issuance of the interim DOD-level sports marketing guidance is a positive step, a senior DOD official acknowledged that the interim guidance was vague regarding how some of these new requirements are to be implemented, did not include some monetary thresholds that it could have identified, and applied only to sports marketing and advertising contracts. As such, in our discussions with service officials responsible for carrying out local level advertising, we observed discrepancies in how the guidance was being interpreted. For example, there were differences of opinion among local level advertising officials about the inclusion of honorary or swearing-in ceremonies in contracts with sports teams. One local level official stated that the recent interim guidance now restricts any mention of an honorary ceremony in a contract, while other officials stated that such activities could still be included in a contract as long as they were listed with a cost of $0.00 and considered “added value.” Further, as the interim guidance applies only to sports advertising, it does not address the procedures that the components should follow when carrying out other types of advertising. For example, the components may contract to carry out advertising at music concerts or festivals. These events are similar in nature to sporting events in that they may also allow for the inclusion of tickets or other premium items in advertising contracts, but currently there is no DOD-wide advertising guidance that prevents the inclusion of such items in contracts.
DOD officials stated that they are currently in the beginning stages of developing more comprehensive advertising guidance to replace and build upon the interim guidance, which was focused solely on sports-related advertising, but the details of what this proposed expanded guidance will cover are not clear. These officials stated that the department’s process for development, coordination, approval, and publication of new DOD guidance can be a lengthy one. Further, they stated that they planned to review the implementation of the interim guidance and incorporate any lessons learned from the issuance of the September 2015 guidance, as well as incorporate the findings of our report, before moving forward to create a DOD-wide policy. Because DOD officials were not able to provide us a draft of this proposed expanded guidance, and they stated that they are in the earliest stage of a lengthy process, it is unclear when DOD will issue department-wide guidance for advertising. Further, it is unclear whether the guidance will clearly define DOD’s role in oversight, clarify all remaining issues related to sports advertising, and provide direction for other types of advertising, such as digital advertising and advertising at concerts or other events. Without a policy that clearly defines DOD’s role in overseeing the advertising activities of the military service components and outlines the procedures the components should follow for all types of advertising activities, DOD may not be able to ensure that the components are carrying out other types of advertising in a manner that it considers appropriate.
More broadly, it may be appropriate for the service components to have variations in advertising policies, given that they operate separate programs in some instances, and we are encouraged that DOD officials stated that they will address current weaknesses such as those we identify in this report. Nevertheless, without a policy that defines procedures for all types of advertising activities, DOD risks abuses or inappropriate activities in other advertising in the future. Conclusions Advertising activities provide information and seek to influence the beliefs and understanding of potential recruits about each military service, and the services conduct advertising to help meet their recruitment goals. The unique branding of each service plays a role in the decision of an individual to become a soldier, sailor, airman, or marine and, as such, the department relies on the military services to carry out their own advertising programs. However, despite the competition for potential recruits, a formal process for coordination of advertising activities among the military service components could improve the department’s ability to leverage resources and thus improve the efficiency of DOD’s advertising activities. Further, while the components generally follow commercial best practices we identified for evaluating advertising, DOD has not addressed variations in measurable goals among the components or the insufficient data that have prevented some components from being able to assess the effectiveness of their advertising activities in generating leads. Lastly, the absence of a department-wide policy that clearly defines DOD oversight of and procedures to guide the advertising activities has allowed for activities of questionable appropriateness to occur in some instances. Without such a policy, DOD cannot ensure—through comprehensive oversight—that each service or component is carrying out advertising that meets departmental standards and that appropriately invests taxpayer dollars. 
Recommendations for Executive Action We recommend that the Secretary of Defense take the following three actions: Direct the Under Secretary of Defense for Personnel and Readiness, in consultation with officials from the military service components and the JAMRS office, to develop a formal process for coordination on crosscutting issues to facilitate better leveraging of resources. As part of this process, DOD could review existing advertising programs to identify opportunities to reduce unnecessary duplication, overlap, and fragmentation and obtain potential efficiencies. Direct the Secretaries of the Departments of the Army, the Navy, and the Air Force to ensure that each military service component fully measure advertising performance. This should include both the identification of measurable goals in future versions of the service components’ advertising plans and assurance that the service components have access to the necessary performance data to determine the effectiveness of their advertising activities for lead generation activities. Direct the Under Secretary of Defense for Personnel and Readiness to ensure, as the department undertakes its effort to issue a department-wide policy for advertising, that this policy (1) clearly defines DOD’s role in overseeing the advertising activities of military service components; (2) clarifies issues related to sports-related advertising; and (3) outlines procedures that should guide the components’ advertising activities for other types of advertising, such as music concerts, other event advertising, and digital advertising. Agency Comments and Our Evaluation We provided a draft of this report to DOD for comment. In written comments, DOD generally concurred with our three recommendations. Specifically, DOD concurred with our recommendations aimed at improving coordination and providing more direction to military service advertising. 
DOD partially concurred with our recommendation regarding the need for better performance measurement. DOD stated in its written comments that it will work with the military services to develop guidance that addresses our recommendations, which will be provided in the form of a DOD issuance. DOD’s comments are reprinted in their entirety in appendix IV. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD concurred with our first recommendation that the department develop a formal process for coordination on crosscutting issues to facilitate better leveraging of resources. In its written comments, DOD stated that it is developing a DOD instruction for marketing and that this guidance will formalize coordination among the military services, which it states should facilitate better leveraging of resources. DOD partially concurred with our second recommendation that the military departments fully measure advertising performance. We recommended that this performance measurement should include both the identification of measurable goals in future versions of the service components’ advertising plans and assurance that the service components have access to the necessary performance data to determine the effectiveness of their advertising activities for lead generation activities. In its written comments, DOD stated that it agrees with our recommendation in broad terms. DOD highlighted actions already underway, and stated that as part of the development of its instruction for marketing, it will further clarify and codify guidance related to performance measurement. However, DOD further states that not all goals and measures relate to lead generation and that other goals and objectives can be used to measure success. We acknowledge in this report that the goal of some forms of advertising is to improve awareness or engagement, and not solely lead generation. 
Therefore, we do not believe that all measurement and performance data should be tied back to lead generation if that is not the goal of the advertising activity in question. However, as we state in our report, we found that not all military services collect the necessary performance data to determine if activities intended to generate leads are performing as intended, and DOD did not address this issue of data collection in its written comments. We believe that DOD and GAO largely agree on this issue and are encouraged that DOD plans to issue an instruction that will clarify the need for better performance measurement of advertising activities. While its September 2015 interim guidance was an important step that required assessment of performance of sports-related advertising, we reiterate that the draft DOD instruction in development should clarify that performance measurement is important for all types of advertising, not only sports-related advertising activities, and that the appropriate goals, performance data, and performance measures be used to assess the performance of all advertising activities. DOD concurred with our third recommendation that the department issue policy for advertising. In its written comments, DOD stated that it is developing a DOD instruction for marketing and that this instruction will clearly define DOD’s role in overseeing the advertising activities of military service components; clarify issues related to sports-related advertising; and outline procedures that should guide the components’ advertising activities for other types of advertising. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Director of JAMRS; and the Secretaries of the Army, the Navy, and the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact Andrew Von Ah at (213) 830-1011 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to examine the extent to which the Department of Defense (DOD) (1) has coordinated its advertising activities among the military service components, (2) has followed commercial best practices to assess the effectiveness of advertising activities, and (3) has oversight of the components’ advertising activities. To determine the extent to which DOD has coordinated its advertising activities among the military service components, we reviewed department- and service-level guidance pertaining to roles and responsibilities for advertising. We interviewed officials from the Accessions Policy office within the Office of the Under Secretary of Defense for Personnel and Readiness, the Joint Advertising and Marketing Research Service (JAMRS), and each of the military service components’ advertising programs to discuss the ways in which DOD has taken steps to coordinate advertising activities as well as ways in which the military service components coordinate with each other to achieve efficiencies in advertising. Further, we discussed any methods used to share best practices or lessons learned for advertising, either formally or informally, among the military service components. We also conducted interviews in Fort Knox, Kentucky, with officials from the U.S. Army Recruiting Command, the U.S. Army Cadet Command, and the U.S. Army Accessions Support Brigade, organizations that play a role in advertising for the active and reserve Army components. 
We selected these Army organizations for this objective because the Army is the only service component that has an office responsible for advertising functions—the Army Marketing and Research Group—that is separate from the recruiting function, carried out by the U.S. Army Recruiting Command. We compared identified policies and practices that describe any coordination, as well as any instances of coordination described by officials, against a best practice identified by private sector advertising experts during the course of this review and our duplication, overlap, and fragmentation evaluation and management guide. Specifically, the private sector advertising experts we met with during the course of this review stated that effective coordination is a best practice for increasing efficiencies in advertising. As there are seven advertising programs carried out by the military services as well as JAMRS, we reviewed these programs against the evaluation and management guide as it describes how to identify and evaluate instances of fragmentation (more than one agency involved in the same broad area), overlap (multiple agencies or programs with similar goals, activities, or beneficiaries), and duplication (two or more agencies or programs engaged in the same activities or services for the same beneficiaries) among programs. Further, the guide can help identify options to reduce or better manage the negative effects of fragmentation, overlap, and duplication, and evaluate the potential trade-offs and unintended consequences of these options. We took four major steps to determine the extent to which DOD has followed commercial best practices to assess the effectiveness of advertising activities. 
First, we interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness and each of the advertising programs of the military service components to determine the types of advertising they conduct and how its effectiveness is determined. Second, to identify existing best practices for assessing effectiveness of advertising, we conducted a preliminary search, including a literature review, and determined that a well-defined and widely accepted list of best practices had not been established. Therefore, we selected a nongeneralizable sample of advertising companies and professional organizations using a “snowball sampling methodology,” which consisted of interviewing advertising industry experts from an initial set of organizations and requesting those experts to refer additional contacts to participate in the review. Based on this approach, we identified and interviewed the following organizations. Companies: Ad Council, Agent, WideOpen, and Widmeyer. Professional organizations: Association of National Advertisers. An executive from another private sector advertising company preferred that we not include the name of the organization that currently employs the individual, stating that the input provided during the course of our review reflected experience obtained from numerous positions held at various advertising companies. During the interviews, we asked officials to describe their knowledge of industry best practices for assessing the effectiveness of advertising. Based on the interviews, we identified key practices reported by each expert or organization, including organizational structures to safeguard against bias in performance evaluation, processes for planning and goal-setting in advance of advertising, and standard performance measures used to assess advertising. We compiled these practices into a list and provided the list to the organizations for review and comment. 
Although the perspectives of the organizations included in our sample are not generalizable to the advertising industry as a whole, we found a sufficient degree of consensus among the sample for the purpose of developing criteria for this review. Third, following the development of the best practices, we interviewed officials from each of the military service components responsible for advertising, as well as the components’ contracted advertising agencies, to determine the extent to which the components follow the practices we identified. We compared the most recent annual advertising plan of each of the military service components (which ranged from fiscal year 2014 through 2016, depending on the component) and other department- and service-level guidance pertaining to advertising with the Standards for Internal Control in the Federal Government, which requires that an organization have relevant, reliable, and timely information and that there is communication of that information throughout the agency in order to achieve its objectives. We also considered regulatory requirements or other potential barriers to government agencies following the best practices we identified. Finally, to determine how performance data related to advertising activities is collected and reviewed below the headquarters level, we developed and administered a structured questionnaire to lower level recruiting officials with responsibility for local advertising from two lower level units per service component. Based on our discussions with the components’ headquarters-level advertising officials, we determined the appropriate officials to participate in the semistructured interview were located at the brigade level of the Army, district level of the Navy and Marine Corps, squadron level of the Air Force active and reserve components, and state level of the Army and Air National Guards. 
Given the total population of 177 units, we chose to use a judgmental, nonprobability sampling approach to select 2 subordinate units from each of the seven components, for a total of 14 interviews. We selected the units, in consultation with service component headquarters officials, using criteria intended to obtain a variation of perspectives within each of the components. Due to the degree of differences among the components in command structure and methods for allocating advertising funds across subordinate units, we tailored the criteria to the characteristics of each component. For several components, we selected units based on the size of the budget received for local advertising, choosing a unit with a larger-than-average advertising budget and a unit with a smaller-than-average advertising budget. If a component provided roughly equal local advertising funding to all subordinate units, we used another selection criterion to obtain varying perspectives, such as selecting two units that represented regions with different propensities to serve or two units with different recruiting missions such as an enlisted unit and a health professionals unit. We relied on the headquarters officials from each of the service components to identify states or units that matched our selection criteria. Because we interviewed a nongeneralizable sample, the results cannot be used to make inferences about the population as a whole. To determine the extent to which DOD has oversight of its advertising activities, we reviewed (1) department-level guidance on advertising and recruiting and (2) service-level guidance on the review and approval processes related to the service components’ advertising programs. We interviewed officials from the Under Secretary of Defense for Personnel and Readiness about the oversight provided to service advertising programs and the role of JAMRS. 
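The budget-based unit selection criterion described in the sampling approach above can be sketched in code; the unit names and dollar figures below are hypothetical, and this is only an illustration of the criterion, not GAO's actual selection procedure.

```python
# Illustrative sketch of the judgmental selection criterion described in the
# methodology: for a component, pick one subordinate unit with a
# larger-than-average local advertising budget and one with a
# smaller-than-average budget. All unit names and figures are hypothetical.

def select_units(budgets):
    """budgets: dict mapping unit name -> local advertising budget (dollars).
    Returns (largest above-average unit, smallest below-average unit).
    Assumes budgets vary, so both groups are nonempty."""
    average = sum(budgets.values()) / len(budgets)
    above = {unit: b for unit, b in budgets.items() if b > average}
    below = {unit: b for unit, b in budgets.items() if b < average}
    return max(above, key=above.get), min(below, key=below.get)

# Hypothetical subordinate units for one component.
component_budgets = {
    "Unit 1": 250_000,
    "Unit 2": 90_000,
    "Unit 3": 140_000,
    "Unit 4": 60_000,
}
high, low = select_units(component_budgets)
```

As the methodology notes, when a component funds all units roughly equally this criterion is uninformative, which is why a different criterion (such as regional propensity to serve) was substituted in those cases.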
In addition, to determine how the military service components provide direction to and oversight of their respective advertising programs, we interviewed officials from the headquarters of each of the military service components’ advertising programs and we developed and administered a structured questionnaire to lower level recruiting officials with responsibility for local advertising from two lower level units per service component. We met with officials from a judgmental, nonprobability sample of lower level units that represent a geographic region or area of the United States, such as a recruiting district, squadron, or brigade depending on the service. The subordinate level units were selected to obtain distinct views within each component, based on variations in the size of budgets, recruiting goals, or the propensity of youth to serve in the military within the geographical area of the unit. We compared any departmental and service guidance, as well as the information obtained during our interviews, against Standards for Internal Control in the Federal Government, which states that an agency’s organizational structure should feature planning, directing, and controlling activities to ensure that goals and objectives of the agency can be achieved. Further, internal controls require that policies be in place to direct an agency’s activities. Lastly, we reviewed contracting data from fiscal year 2014 and some of fiscal year 2015 to identify variations in the types of activities included in military service component advertising contracts. We obtained contracting data from fiscal year 2014 and some of 2015 because this time frame included information before and after the issuance of interim guidance related to sports advertising, including the most recently available data as of March 2016. 
Through our discussions with relevant experts and our review of past work, we determined the data to be sufficiently reliable for the purpose of corroborating the types of activities included in advertising contracts. We conducted this performance audit from June 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Military Service Component and Joint Advertising Organizations Within the military departments, there are seven military service component advertising programs and the organizational structure of each program differs. Further, there is a joint advertising program—Joint Advertising and Marketing Research Service—that is to provide some advertising functions for the department. In all of the military service components except the Army Active Duty and Reserve Component Advertising Program, the advertising function is part of the recruiting function of the respective military service component. Each of the military services has its own recruiting structures and organizations, which are responsible for the military service’s recruiting mission and functions. The role of a military service’s recruiting command is to provide support to the recruiting force and guidance for the recruitment and enlistment process. In addition, a recruiting command plays a role in developing the recruiting goals. The commands are structured similarly across the military services with some variation in organizational structure. 
The recruiting command is the recruiting headquarters for each military service, with subordinate commands between the headquarters level and recruiting stations or substations where frontline recruiters work to reach out to prospective applicants and discuss the benefits of joining the military. Table 2 describes each of the seven advertising programs within the military departments. Appendix III: Advertising Program Budgets and Recruitment and Retention Goals Each military service component receives an annual appropriation to carry out operations and maintenance activities, including advertising and marketing. Table 3 shows the DOD reported amounts allotted to the military service components’ respective advertising and marketing activities from each component’s annual operations and maintenance appropriations for fiscal years 2015 through 2017. The Army has received the highest allotments for its advertising programs. Specifically, the Army has received at least two to three times the amount that the other active-duty components have received. In addition, each military service component sets annual goals for recruitment and retention, for both enlisted servicemembers and officers, in order to meet defined end strength requirements. Recruits must meet numerous standards before they are accessed into the military, specifically physical, educational, and other standards (e.g., an acceptable record of behavior). For the reserve and guard components, the geographic location of vacancies is also important for understanding and addressing recruitment and retention goals because these components recruit from a geographic area to fill vacancies in that area and recruits typically continue to live and work in that area. Conversely, active duty components recruit from anywhere within the United States and its territories to meet recruitment goals and recruits are assigned subsequently to duty stations. 
Table 4 shows the recruitment goals by fiscal year from fiscal year 2014 through fiscal year 2016, as of March 1, 2016. The Army and the Army National Guard had the largest recruiting missions over this time period.

Appendix IV: Comments from the Department of Defense

Appendix V: GAO Contact and Staff Acknowledgments In addition to the individual named above, key contributors to this report were Margaret Best (Assistant Director), Serena Epstein, Mae Jones, William Lamping, Felicia Lopez, Suzanne Perkins, Carol Petersen, Ophelia Robinson, Andrew Stavisky, and Amie Lesser. 
GAO found examples of possible unnecessary duplication, overlap, and fragmentation that may result from the absence of coordination. For example, the Air Force has three advertising programs that contract with three advertising agencies, but officials could not provide a rationale for requiring separate programs. In the absence of a formal process for coordination, the services may be missing opportunities to effectively leverage advertising resources. While DOD has generally followed commercial best practices GAO identified to assess the effectiveness of advertising, DOD's components vary in their ability to determine whether their activities are generating leads for potential recruits. For example, while the Marine Corps has developed a framework to assess the effectiveness of its advertising including leads generated from advertising activities at the local level, Army officials stated they do not have reliable data to evaluate whether locally executed advertising activities are generating leads, and the Army National Guard does not require state units to report on the performance of their advertising activities. Without fully measuring advertising performance, especially at the local levels, DOD may be unable to ensure advertising dollars are used efficiently to help meet recruiting goals. DOD does not have comprehensive oversight of the components' advertising activities; instead, DOD's components oversee their own programs. However, examples identified by GAO and others of some components paying sports teams to provide recognition ceremonies for servicemembers—a practice later deemed unacceptable by DOD—suggest that the absence of DOD oversight may have contributed to some activities of questionable appropriateness. Further, GAO observed discrepancies in how recent sports advertising guidance was being interpreted and in service officials' understanding of regulations that direct digital advertising. 
Without a department-wide policy that clearly defines its oversight role, DOD lacks reasonable assurance that advertising is carried out in an appropriate manner. |
The Coast Guard Has Made Progress in Improving Its Risk Management In December 2005, we reported that risk management, a strategy for helping policymakers make decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty, had been endorsed by Congress and the President as a way to strengthen the nation against possible terrorist attacks against ports and other infrastructure. Risk management has long been used in such areas as insurance and finance, but at the time its application to domestic terrorism had no precedent. We noted that unlike storms and accidents, terrorism involves an adversary with deliberate intent to destroy, and the probabilities and consequences of a terrorist act are poorly understood and difficult to predict. The size and complexity of homeland security activities and the number of organizations involved—both public and private—add another degree of difficulty to the task. We have examined Coast Guard efforts to implement risk management for a number of years, noting how the Coast Guard’s risk management framework developed and evolved. In 2005 we reported that of the three components GAO reviewed—the Coast Guard, the Office for Domestic Preparedness (this office’s function is now within the Federal Emergency Management Agency), and the Information Analysis and Infrastructure Protection Directorate (now the National Protection and Programs Directorate)—the Coast Guard had made the most progress in establishing a foundation for using a risk management approach. While the Coast Guard had made progress in all five risk management phases, its greatest progress had been made in conducting risk assessments—that is, evaluating individual threats, the degree of vulnerability in maritime facilities, and the consequences of a successful attack. However, we reported that those assessments were limited because they could not compare and prioritize relative risks of various infrastructures across ports. 
At the time the Coast Guard had actions under way to address the challenges it faced in each risk management phase and we did not make recommendations in those areas where the Coast Guard had actions well under way. Several of these actions were based, in part, on briefings GAO held with agency officials. Our recommendations were designed to spotlight those areas in which additional steps were most needed to implement a risk management approach to Coast Guard port security activities. We recommended that the Coast Guard take action to: establish a stronger linkage between local and national risk assessment efforts—an action that could involve, for example, strengthening the ties between local assessment efforts, such as area maritime security plans, and national risk assessment activities; and ensure that procedures for evaluating alternatives and making management decisions consider the most efficient use of resources— actions that could entail, for example, refining the degree to which risk management information is integrated into the annual cycle of program and budget review. Since we made those recommendations, both DHS and the Coast Guard have made progress implementing a risk management approach toward critical infrastructure protection. In 2006, DHS issued the National Infrastructure Protection Plan (NIPP), which is DHS’s base plan that guides how DHS and other relevant stakeholders should use risk management principles to prioritize protection activities within and across each critical infrastructure sector in an integrated and coordinated fashion. In 2009, DHS updated the NIPP to, among other things, increase its emphasis on risk management, including an expanded discussion of risk management methodologies and discussion of a common risk assessment approach that provided core criteria for these analyses. 
For its part, the Coast Guard has made progress assessing risks and integrating the results of its risk management efforts into resource allocation decisions. Regarding risk assessments, the Coast Guard transitioned its risk assessment model from the Port Security Risk Assessment Tool (PS-RAT) to the Maritime Security Risk Assessment Model (MSRAM). In 2005 we reported that the PS-RAT was designed to allow ports to prioritize resource allocations within, not between, ports to address risk most efficiently. In contrast, MSRAM is used by every Coast Guard unit and can assess risk across ports; it assesses the risk—threats, vulnerabilities, and consequences—of a terrorist attack based on different scenarios, that is, by combining potential targets with different means of attack, as recommended by the NIPP. The Coast Guard uses the model to help implement its strategy and concentrate maritime security activities when and where relative risk is believed to be the greatest. According to the Coast Guard, the model’s underlying methodology is designed to capture the security risk facing different types of targets, allowing comparison between different targets and geographic areas at the local, regional, and national levels. We have also reported that the Federal Emergency Management Agency has included MSRAM results in its Port Security Grant Program guidelines as one of the data elements included in determining grant awards to assist in directing grants to the ports of greatest concern or at highest risk. With regard to the integration of risk management results into the consideration of risk mitigation alternatives and the management selection process, Coast Guard officials stated that the Coast Guard uses MSRAM to inform allocation decisions, such as the deployment of local resources and grants. 
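This report does not detail MSRAM's internal computations, but the scenario-based structure it describes (scoring threat, vulnerability, and consequence for each combination of target and attack mode, then comparing results across targets and ports) can be illustrated with a minimal sketch. All scenario names and numeric values below are hypothetical, not actual MSRAM data, and the multiplicative formula is the commonly cited risk = threat x vulnerability x consequence convention, not necessarily MSRAM's exact method.

```python
# Hedged sketch of scenario-based risk scoring: each scenario pairs a target
# with an attack mode; its relative risk is threat x vulnerability x
# consequence. Ranking scores allows comparison across ports. All data below
# is hypothetical.

def scenario_risk(threat, vulnerability, consequence):
    """Relative risk of one scenario. Threat and vulnerability are 0-1
    likelihood scores; consequence is in arbitrary consequence units."""
    return threat * vulnerability * consequence

# (port, target, attack mode) -> (threat, vulnerability, consequence)
scenarios = {
    ("Port A", "ferry terminal", "small-boat IED"):   (0.30, 0.70, 800),
    ("Port A", "oil tanker",     "stand-off attack"): (0.20, 0.40, 900),
    ("Port B", "cruise ship",    "small-boat IED"):   (0.25, 0.60, 950),
}

# Rank scenarios from highest to lowest relative risk.
ranked = sorted(
    ((key, scenario_risk(*factors)) for key, factors in scenarios.items()),
    key=lambda item: item[1],
    reverse=True,
)
for (port, target, mode), score in ranked:
    print(f"{port}: {target} / {mode} -> relative risk {score:.1f}")
```

Scoring every target and attack mode on a common scale is what permits the cross-port comparison described above, which the earlier PS-RAT, designed for prioritization within a single port, could not support.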
We have also reported that at the national level, the Coast Guard uses MSRAM results for (1) long-term strategic resource planning, (2) identifying capabilities needed to combat future terrorist threats, and (3) identifying the highest-risk scenarios and targets in the maritime domain. For example, Coast Guard officials reported that results are used to refine the Coast Guard's requirements for the number of vessel escorts and patrols of port facilities. At the local level, the Captain of the Port can use MSRAM as a tactical planning tool. The model can help identify the highest-risk scenarios, allowing the Captain of the Port to prioritize needs and better deploy security assets. The 2011 Congressional Budget Justification showed that the Coast Guard uses risk or relative risk to direct resources toward mitigating the highest risks. For example, risk management in the allocation of resources specific to port security is reflected in the Ports, Waterways, and Coastal Security program. This program has a performance goal to manage terror-related risk in the U.S. Maritime Domain to an acceptable level, and the Coast Guard uses a program measure to direct resources to the programs that reduce risk the most for the amount invested. Based on the development of the MSRAM assessment process and the use of risk management analysis results in its allocation of resources, we believe that the Coast Guard has addressed the recommendations discussed earlier concerning risk management. DHS and the Coast Guard Have Taken Several Actions to Address the Small-Vessel Threat but Challenges Remain in Mitigating the Risk In recent years, we reported that concerns had arisen about the security risks posed by small vessels.
In its April 2008 Small Vessel Security Strategy, DHS identified the four gravest risk scenarios involving the use of small vessels for terrorist attacks, which include the use of a small vessel as (1) a waterborne improvised explosive device, (2) a means of smuggling weapons into the United States, (3) a means of smuggling humans into the United States, and (4) a platform for conducting a stand-off attack—an attack that uses a rocket or other weapon launched at a sufficient distance to allow the attackers to evade defensive fire. According to the former Commandant of the Coast Guard, small vessels pose a greater threat than shipping containers for nuclear smuggling. Some of these risks have been shown to be real through attacks conducted outside U.S. waters, but—as we reported in December 2009—no small-vessel attacks have taken place in the United States. Many vessels frequently travel among small vessels that operate with little scrutiny or notice, and some have suffered waterborne attacks overseas by terrorists or pirates operating from small vessels. For example, at least three cruise ships have been attacked by pirates in small boats armed with automatic weapons and rocket-propelled grenades, although the three vessels were able to evade the pirates by either maneuvering or fighting back. Oil tankers have also been attacked. For example, in October 2002, a small vessel filled with explosives rammed the side of an oil tanker off the coast of Yemen. The concern about small-vessel attacks is exacerbated by the fact that some vessels, such as cruise ships, sail according to precise schedules and preplanned itineraries that could provide valuable information to terrorists preparing for and carrying out an attack against a vessel. DHS and the Coast Guard have developed a strategy and programs to reduce the risks associated with small vessels; however, they face ongoing challenges related to some of these efforts.
The following discusses some of our key findings with regard to reducing the risks associated with small vessels. Small Vessel Security Strategy. DHS released its Small Vessel Security Strategy in April 2008 as part of its effort to mitigate the vulnerability of vessels to waterside attacks from small vessels, and the implementation plan for the strategy is under review. According to the strategy, its intent is to reduce potential security and safety risks posed by small vessels through operations that balance fundamental freedoms, adequate security, and continued economic stability. After review by DHS, the Coast Guard, and CBP, the draft implementation plan was forwarded to the Office of Management and Budget in April 2010, but its release has not yet been approved. Community Outreach. Consistent with the Small Vessel Security Strategy's goal to develop and leverage strong partnerships with the small-vessel community, the Coast Guard, as well as other agencies—such as the New Jersey State Police—has several outreach efforts to encourage the boating community to share threat information; however, the Coast Guard program faces resource limitations. For example, the Coast Guard's program for enlisting the boating community's help in detecting suspicious activity, America's Waterway Watch, lost the funding it received through a Department of Defense readiness training program for military reservists in fiscal year 2008 and now must depend on the activities of the Coast Guard Auxiliary, a voluntary organization, for most of its outreach efforts. In addition to America's Waterway Watch, the Coast Guard piloted a regional initiative—Operation Focused Lens—to increase public awareness of suspicious activity in and around U.S. ports and to direct additional resources toward gathering information about the most likely points of origin for an attack, such as marinas, landings, and boat ramps.
According to Coast Guard officials, the agency considers Operation Focused Lens a best practice and is weighing plans to expand the program or integrate it into other existing programs. Vessel Tracking. In December 2009, we reported that the Coast Guard was implementing two major unclassified systems to track a broad spectrum of vessels; however, these systems generally could not track small vessels. The Coast Guard and other agencies have other technology systems, though—including cameras and radars—that can track small vessels within ports, but these systems were not installed at all ports or did not always work in bad weather or at night. Even with systems in place to track small vessels, there was widespread agreement among maritime stakeholders that it is very difficult to detect threatening activity by small vessels without prior knowledge of a planned attack. Nuclear Material Detection Efforts. DHS has developed and tested equipment for detecting nuclear material on small vessels; however, efforts to use this equipment in a port area have been limited to pilot programs. DHS is currently conducting 3-year pilot programs to design, field test, and evaluate equipment and is working with CBP, the Coast Guard, state, local, and tribal officials, and others as they develop procedures for screening. These pilot programs are scheduled to end in 2010, when DHS intends to decide the future path for screening small vessels for nuclear and radiological materials. According to DHS officials, initial feedback from federal, state, and local officials involved in the pilot programs has been positive. DHS hopes to sustain the capabilities created through the pilot programs by providing federal grants to state and local authorities under the port security grant program. Security Activities.
The Coast Guard also conducts various activities to provide waterside security, including boarding vessels, escorting vessels into ports, and enforcing fixed security zones, although units are not always able to meet standards related to these activities. Through its Operation Neptune Shield, the Coast Guard sets the standards that local Coast Guard units are to meet for some of these security activities. Although Coast Guard units may receive some assistance from other law enforcement agencies in carrying out these security activities, Coast Guard data indicate that some units are unable to meet these standards due to resource constraints. However, the Coast Guard's guidance allows the Captain of the Port the latitude to shift resources to other priorities when deemed necessary, for example, when resources are not available to fulfill all missions simultaneously. The planned decommissioning of five Maritime Safety and Security Teams—a domestic force for mitigating and responding to terrorist threats or incidents—may further strain Coast Guard resources in meeting security requirements. Although the remaining teams are to maintain readiness to respond to emerging events and are to continue performing routine security activities, such as vessel escorts, their ability to support local units in meeting operational activity goals may be diminished. The Coast Guard Has a Program in Place to Assess the Security of Foreign Ports, but Challenges Remain in Implementing the Program The security of domestic ports also depends upon security at the foreign ports where cargoes bound for the United States originate. To help secure the overseas supply chain, MTSA required the Coast Guard to assess security measures in foreign ports from which vessels depart on voyages to the United States and, among other things, recommend steps necessary to improve security measures in those ports.
In response, the Coast Guard established a program, called the International Port Security Program, in April 2004. Under this program, the Coast Guard and host nations review the implementation of security measures in the host nations' ports against established security standards, such as the International Maritime Organization's International Ship and Port Facility Security (ISPS) Code. Coast Guard teams have been established to conduct country visits, discuss security measures implemented, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. Subsequently, in October 2006, the SAFE Port Act required the Coast Guard to reassess security measures at such foreign ports at least once every 3 years. As we reported in October 2007, Coast Guard officials told us that challenges exist in implementing the International Port Security Program. Reluctance by some countries to allow the Coast Guard to visit their ports due to concerns over sovereignty was a challenge cited by program officials in completing their first round of port visits. According to these officials, before permitting Coast Guard officials to visit their ports, some countries insisted on visiting and assessing a sample of U.S. ports. The Coast Guard was able to accommodate their request through the program's reciprocal visit feature, in which the Coast Guard hosts foreign delegations to visit U.S. ports and observe ISPS Code implementation in the United States. This subsequently helped gain the cooperation of those countries in hosting a Coast Guard visit to their own ports. However, as Coast Guard program officials stated, sovereignty concerns may still be an issue, as some countries may be reluctant to host a comprehensive country visit on a recurring basis because they believe the frequency is too high.
Another challenge program officials cited is having limited ability to help countries build on or enhance their capacity to implement the ISPS Code requirements. Program officials stated that while their visits provide opportunities for them to identify potential areas to improve or help sustain the security measures put in place, other than sharing best practices or providing presentations on security practices, the program does not currently have the resources to directly assist countries, particularly those that are poor, with more in-depth training or technical assistance. To overcome this, program officials have worked with other agencies (e.g., the Departments of Defense and State) and international organizations (e.g., the Organization of American States) to secure funding for training and assistance to countries where port security conferences have been held (e.g., the Dominican Republic and the Bahamas). CBP Has Established a Program to Scan U.S.-Bound Cargo Containers, but Challenges to Expanding the Program Remain Another key concern in maritime security is the effort to secure the supply chain to prevent terrorists from shipping weapons of mass destruction (WMD) in one of the millions of cargo containers that arrive at U.S. ports each year. CBP has developed a layered security strategy to mitigate the risk of an attack using cargo containers. CBP’s strategy is based on a layered approach of related programs that attempt to focus resources on potentially risky cargo shipped in containers while allowing other cargo containers to proceed without unduly disrupting commerce into the United States. The strategy is based on obtaining advanced cargo information to identify high-risk containers, utilizing technology to examine the content of containers, and partnerships with foreign governments and the trade industry. One of the programs in this layered security strategy is the Secure Freight Initiative (SFI). 
In December 2006, in response to SAFE Port Act requirements, DHS and the Department of Energy (DOE) jointly announced the formation of the SFI pilot program to test the feasibility of scanning 100 percent of U.S.-bound container cargo at three foreign ports (Puerto Cortes, Honduras; Qasim, Pakistan; and Southampton, United Kingdom). According to CBP officials, while initiating the SFI program at these ports satisfied the SAFE Port Act requirement, CBP also selected the ports of Busan, South Korea; Hong Kong; Salalah, Oman; and Singapore to more fully demonstrate the capability of the integrated scanning system at larger, more complex ports. As of April 2010, SFI had been operational at five of these seven seaports. In October 2009, we reported that CBP had made some progress in working with the SFI ports to scan U.S.-bound cargo containers, but because of challenges to expanding scanning operations, the feasibility of scanning 100 percent of U.S.-bound cargo containers at over 600 foreign seaports remains largely unproven. CBP and DOE have been successful in integrating images of scanned containers onto a single computer screen that can be reviewed remotely from the United States. They have also been able to use these initial ports as a test bed for new applications of existing technology, such as mobile radiation scanners. However, the SFI ports' level of participation, in some cases, has been limited in duration (e.g., the Port of Hong Kong participated in the program for approximately 16 months) or scope (e.g., the Port of Busan, South Korea, allowed scanning in only one of its eight terminals). In addition, the Port of Singapore withdrew its agreement to participate in the SFI program and, as of April 2010, the port in Salalah, Oman, had not begun scanning operations. Furthermore, since the inception of the SFI program in October 2007, no participating port has been able to achieve 100 percent scanning.
While 54 to 86 percent of the U.S.-bound cargo containers were scanned at three comparatively low-volume ports that are responsible for less than 3 percent of container shipments to the United States, sustained scanning rates above 5 percent have not been achieved at two comparatively larger ports—the type of ports that ship most containers to the United States. Scanning operations at the SFI ports have encountered a number of challenges, including safety concerns, logistical problems with containers transferred from rail or other vessels, scanning equipment breakdowns, and poor-quality scan images. Both we and CBP had previously identified many of these challenges, and CBP officials are concerned that they and the participating ports cannot overcome them. In October 2009, we recommended that DHS conduct a feasibility analysis of implementing the 100 percent scanning requirement in light of the challenges faced. DHS concurred with our recommendation. CBP and DOE spent approximately $100 million through June 2009 on implementing and operating the SFI program, but CBP has not developed a comprehensive estimate of future U.S. program costs or conducted a cost-benefit analysis comparing the costs and benefits of the 100 percent scanning requirement with other alternatives. The SAFE Port Act requires CBP to report on costs for implementing the SFI program at foreign ports, but CBP has not yet estimated total U.S. program costs because of both the lack of a decision by DHS on a clear path forward and the unique set of challenges that each foreign port presents. While uncertainties exist regarding a path forward for the program, a credible cost estimate consistent with cost-estimating best practices could better aid DHS and CBP in determining the most effective way forward for SFI and communicating the magnitude of the costs to Congress for use in annual appropriations.
To address this, in October 2009, we recommended that CBP develop comprehensive and credible estimates of total U.S. program costs. DHS concurred with our recommendation. CBP and DOE have paid the majority of the costs of operating the SFI program. The SAFE Port and 9/11 Commission Acts do not address who is expected to pay the cost of developing, maintaining, and using the infrastructure, equipment, and people needed for the 100 percent scanning requirement, but implementing the requirement would entail costs beyond U.S. government program costs, including those incurred by foreign governments and private terminal operators, and could result in higher prices for American consumers. CBP has not estimated these additional economic costs, though they are relevant in assessing the balance between improving security and maintaining trade capacity and the flow of cargo. To address this, in October 2009, we recommended that DHS conduct a cost-benefit analysis to evaluate the costs and benefits of achieving 100 percent scanning as well as other alternatives for enhancing container security. Such an analysis could provide important information to CBP and to Congress in determining the most effective way forward to enhance container security. DHS agreed in part with our recommendation that it develop a cost-benefit analysis of 100 percent scanning, acknowledging that the recommended analyses would better inform Congress, but stated that the recommendation should be directed to the Congressional Budget Office. While the Congressional Budget Office does prepare cost estimates for pending legislation, we think the recommendation is appropriately directed to CBP. Given its daily interaction with foreign customs services and its direct knowledge of port operations, CBP is in a better position to conduct any cost-benefit analysis and bring the results to Congress for consideration.
Senior DHS and CBP officials acknowledge that most, if not all, foreign ports will not be able to meet the July 2012 target date for scanning all U.S.-bound cargo. Recognizing the challenges to meeting the legislative requirement, DHS expects to grant a blanket extension to all foreign ports pursuant to the statute, thus extending the target date for compliance with this requirement by 2 years, to July 2014. In addition, the Secretary of Homeland Security approved the "strategic trade corridor strategy," an initiative to scan 100 percent of U.S.-bound containers at selected foreign ports where CBP believes it will mitigate the greatest risk of WMD entering the United States. According to CBP, the data gathered from SFI operations will help to inform future deployments to strategic locations. CBP plans to evaluate the usefulness of these deployments and consider whether the continuation of scanning operations adds value in each of these locations, as well as in potential additional locations that would strategically enhance CBP efforts. While the strategic trade corridor strategy may improve container security, it does not achieve the legislative requirement to scan 100 percent of U.S.-bound containers. According to CBP, it does not have a plan for full-scale implementation of the statutory requirement by July 2012 because the challenges encountered thus far in implementing the SFI program indicate that implementing 100 percent scanning worldwide by the 2012 deadline will be difficult. However, CBP has not performed a feasibility analysis of expanding 100 percent scanning, as required by the SAFE Port Act. To address this, in October 2009, we recommended that CBP conduct a feasibility analysis of implementing 100 percent scanning and provide the results, as well as alternatives, to Congress in order to determine the best path forward to strengthen container security. DHS concurred with our recommendation.
In DHS’s Congressional Budget Justification FY 2011, CBP requested to decrease the SFI program’s $19.9 million budget by $16.6 million. According to the budget justification, in fiscal year 2011, SFI operations will be discontinued at three SFI ports—Puerto Cortes, Honduras; Southampton, United Kingdom; Busan, South Korea—and the SFI program will be established at the Port of Karachi, Pakistan. Furthermore, CBP’s budget justification did not request any funds to implement the strategic trade corridor strategy. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. GAO Contacts and Staff Acknowledgments For questions about this statement, please contact Stephen L. Caldwell at 202-512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, John Mortin, Assistant Director, managed this review. Jonathan Bachman, Charles Bausell, Lisa Canini, Frances Cook, Tracey Cross, Andrew Curry, Anthony DeFrank, Geoff Hamilton, Dawn Hoff, Lara Miklozek, Stanley Kostyla, Jan Montgomery, and Kendal Robinson made key contributions to this statement. Related GAO Products Combating Nuclear Smuggling: DHS Has Made Some Progress but Not Yet Completed a Strategic Plan for Its Global Nuclear Detection Efforts or Closed Identified Gaps. GAO-10-883T. Washington, D.C.: June 30, 2010. Maritime Security: Varied Actions Taken to Enhance Cruise Ship Security, but Some Concerns Remain. GAO-10-400. Washington, D.C.: April 9, 2010. Coast Guard: Deployable Operations Group Achieving Organizational Benefits, but Challenges Remain. GAO-10-433R. Washington, D.C.: April 7, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. 
Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010. Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009. Transportation Security: Comprehensive Risk Assessments and Stronger Internal Controls Needed to Help Inform TSA Resource Allocation. GAO-09-492. Washington, D.C.: March 27, 2009. Maritime Security: Vessel Tracking Systems Provide Key Information, but the Need for Duplicate Data Should Be Reviewed. GAO-09-337. Washington, D.C.: March 17, 2009. Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008. Supply Chain Security: Challenges to Scanning 100 Percent of U.S.-Bound Cargo Containers. GAO-08-533T. Washington, D.C.: June 12, 2008. Highlights of a Forum: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-627SP. Washington, D.C.: April 15, 2008. Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. Maritime Security: The SAFE Port Act and Efforts to Secure Our Nation's Ports. GAO-08-86T. Washington, D.C.: October 4, 2007. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Ports, waterways, and vessels handle more than $700 billion in merchandise annually, and an attack on this system could have a widespread impact on global trade and the economy. Within the Department of Homeland Security (DHS), component agencies have responsibility for securing the maritime environment. The U.S. Coast Guard is responsible for protecting, among other things, U.S. economic and security interests in any maritime region. U.S. Customs and Border Protection (CBP) is responsible for keeping terrorists and their weapons out of the United States, securing and facilitating trade, and cargo container security. This testimony discusses DHS and its component agencies' progress, and challenges remaining, regarding (1) strengthening risk management (a strategy to help policymakers make decisions about assessing risks, allocating resources, and acting under conditions of uncertainty), (2) reducing the risk of small-vessel (watercraft less than 300 gross tons used for recreational or commercial purposes) threats, (3) implementing foreign port assessments, and (4) enhancing supply chain security. This statement is based on GAO products issued from December 2005 through June 2010, including selected updates conducted in July 2010. DHS and its component agencies have strengthened risk management through the development of a risk assessment model to help prioritize limited port security resources. In December 2005, GAO reported that while the Coast Guard had made progress in strengthening risk management by conducting risk assessments, those assessments were limited because they could not compare and prioritize relative risks of various infrastructures across ports. 
Since that time, the Coast Guard developed a risk assessment model designed to capture the security risk facing different types of targets, and allowing comparisons among targets and at the local, regional, and national levels. The Coast Guard uses the model to help plan and implement its programs and focus security activities where it believes the risks are greatest. DHS and the Coast Guard have developed a strategy and programs to reduce the risks associated with small vessels but they face ongoing challenges. GAO reported from 2007 through 2010 that DHS and the Coast Guard have (1) developed a strategy to mitigate vulnerabilities associated with waterside attacks by small vessels; (2) conducted community outreach to encourage boaters to share threat information; (3) initiated actions to track small vessels; (4) tested equipment for detecting nuclear material on small vessels; and (5) conducted security activities, such as vessel escorts. However, the Coast Guard faces challenges with some of these efforts. For example, vessel tracking systems generally cannot track small vessels and resource constraints limit the Coast Guard's ability to meet security activity goals. DHS and the Coast Guard developed the International Port Security Program in April 2004 to assess the security of foreign ports, but challenges remain in implementing the program. GAO reported in October 2007 that Coast Guard officials stated that there is reluctance by certain countries to allow the Coast Guard to visit their ports due to concerns over sovereignty. Also, the Coast Guard lacks the resources to assist poorer countries. Thus the Coast Guard is limited in its ability to help countries enhance their established security requirements. To overcome this, officials have worked with other federal agencies and international organizations to secure funding for training and assistance to countries that need to strengthen port security efforts. 
DHS and CBP established the Secure Freight Initiative (SFI) to test the feasibility of scanning 100 percent of U.S.-bound cargo containers, but face challenges expanding the program. In October 2009, GAO reported that CBP has made progress in working with the SFI ports to scan U.S.-bound cargo containers; but because of challenges implementing scanning operations, such as equipment breakdowns, the feasibility of scanning 100 percent of U.S.-bound cargo containers remains largely unproven. At the time, CBP officials expressed concern that they and the participating ports could not overcome the challenges. GAO recommended that DHS conduct a feasibility analysis. DHS concurred with our recommendation, but has not yet implemented it. |
Background Organization for Management and Oversight of Conventional Ammunition DOD has an extensive organizational structure for managing and overseeing conventional ammunition, with the Army having a prominent role. Since 1975, the Secretary of the Army has served as DOD’s Single Manager for Conventional Ammunition (hereafter referred to as the Single Manager). Under DOD guidance, the Single Manager’s mission encompasses all aspects of the life cycle of conventional ammunition, from research and development through acquisition, inventory management, and eventual disposal. The Single Manager, the military services, and U.S. Special Operations Command all have responsibilities pertaining to assigned conventional ammunition items, including logistics management, stock control, and reporting on the status of inventory. In addition, DOD organizations involved in ammunition management have developed joint policies and procedures to guide certain activities. The Secretary of the Army, in executing the Single Manager role, has delegated and designated related functions to a number of Army entities. For example, responsibility for issuing policy and providing oversight of the Single Manager mission is delegated to the Assistant Secretary of the Army for Acquisition, Logistics, and Technology. The Deputy Commanding General of Army Materiel Command serves as the Executive Director for Conventional Ammunition, responsible for monitoring and assessing the overall Single Manager mission and for overseeing the Single Manager’s execution of its mission for joint service activities. Joint Munitions Command, a subordinate command of Army Materiel Command, is assigned as the field operating activity for the Single Manager, responsible for providing logistics and sustainment support, storing and managing wholesale ammunition for all of the military services, and providing information to the military services on ammunition stored at Army depots. 
In this role, the Joint Munitions Command maintains items, performs physical inventory checks, and reports on the status of assets that are stored at its eight depots across the United States. Aviation and Missile Command provides similar functions for tactical missiles that are stored at these sites. The Under Secretary of Defense for Acquisition, Technology, and Logistics has responsibility to provide policy and guidance for the Single Manager's mission and, in collaboration with other DOD component heads, to appraise the overall performance of the Single Manager in accomplishing the mission objectives outlined in guidance and facilitate improvements. Military Services' Systems for Managing Ammunition Inventory All of the military services have automated information systems for managing and maintaining accountability for ammunition inventory. The Air Force has developed the Combat Ammunition System, which contains comprehensive information on ammunition at all levels—depot, ammunition supply points in theaters of operations, and individual units. The Navy uses its Ordnance Information System, which is divided into two subsystems for wholesale and retail stocks. The Marine Corps has developed the Ordnance Information System–Marine Corps, which shares a common system architecture with the Navy's system. The Army has LMP as well as other systems—such as the Worldwide Ammunition Reporting System–New Technology—which contain comprehensive information about wholesale and Army retail ammunition stocks. LMP, among other functions, stores information on wholesale inventory in all classes of supply, including ammunition. LMP contains information on all ammunition inventory stored at Army depots, including inventory that the Army manages for other services. The Army completed final deployment of initial LMP capabilities, referred to as Increment 1, in 2010. In December 2011, the Army began to develop additional capabilities for LMP—referred to as LMP Increment 2.
The Army plans to deploy Increment 2 in multiple stages between December 2013 and September 2016. DOD’s Efforts toward a Single Database for Ammunition Inventory DOD has worked for decades to achieve department-wide visibility of ammunition stocks in a single database. Drawing upon lessons learned from the Gulf War in 1990-91, it sought to develop a Joint Total Asset Visibility system to provide logistics information on all classes of supply. For ammunition, it aimed to integrate multiple service databases to provide a department-wide ammunition capability. DOD also initiated a program to develop a Joint Ammunition Management Standard System. When that program was terminated because it did not provide a single, viable department-wide source of ammunition data, the Army then agreed to support the sustainment of the ammunition portion of the joint asset visibility database, known as the National Level Ammunition Capability, or NLAC. In fiscal year 2012, NLAC’s budget was about $2.4 million. NLAC receives data from a variety of sources, serving as a DOD-wide data repository, and in turn provides data to DOD decision support systems (see fig. 1). As shown in the figure, NLAC receives data from the services’ ammunition systems and other sources. It provides data to two DOD decision support systems—the Defense Readiness Reporting System and the Global Combat Support System-Joint, both of which are used by operational planners. Testing is underway with interfaces between NLAC and the Joint Operation Planning and Execution System and the Global Command and Control System-Joint. In its guidance, DOD has indicated a need for systems to provide logistics information visibility to support the joint warfighter. DOD’s Inventory Reports and Its Annual Redistribution Process DOD guidance directs the military services to stratify their conventional munitions inventory into several categories and prepare annual ammunition reports. 
The stratification separates the inventory into several categories to assess the ability of the inventory to meet stated requirements, ensure that inventories above requirements are kept only if warranted, and optimize the department’s ammunition inventory. The categories are requirement-related munitions stock, including items needed for war reserve, training, and testing; contingency retention munitions stock, which includes items that support requirements other than those already considered in the war reserve requirement and the training and testing requirements; economic retention munitions stock, a category that refers to inventory that is more expensive to dispose of and reacquire in the future than to retain to meet future requirements; and potential reutilization and disposal stocks, meaning inventory that exceeds the total of the other categories. In fiscal year 2013, the Army listed about 3.8 billion ammunition items on its annual report; the Marine Corps listed about 1.5 billion, and the Air Force listed about 730 million. The services prepare their annual stratification reports prior to an annual conference called the Quad Services Cross-Leveling Review. The purpose of the conference is to identify ammunition that is excess to one service’s needs (i.e., stock identified for potential reutilization or disposal) and can be transferred to another service that has identified a requirement for that same item. The Defense Security Cooperation Agency screens any inventory that is not redistributed at this annual meeting for suitability for foreign military sales. According to a 2010 Army Audit Agency report, the Army had significantly underestimated the funding requirements needed to perform its conventional ammunition demilitarization mission and, as a result, the stockpile has grown to over 557,000 tons, representing a $1 billion liability.
GAO’s Prior Work on Ammunition Inventory GAO last reported on DOD’s conventional ammunition management in 1999. We found at that time that the conventional ammunition program continued to be fragmented despite internal recognition of the problem and efforts to identify alternative solutions. We recommended that the Secretary of Defense direct the Secretary of the Army to establish a timeframe for implementing an Army-wide reorganization to integrate the management of conventional ammunition. DOD concurred, commenting that although the Army was working to resolve the inefficiencies we noted and to make the necessary organizational changes, the conventional ammunition program continued to be fragmented. DOD agreed that the three commands then dealing with various aspects of conventional ammunition needed to implement an Army-wide organizational restructuring to integrate the management of conventional ammunition. DOD subsequently took actions to implement GAO’s recommendation. Military Services’ Systems Have Some Limitations That Affect Their Ability to Facilitate Efficient Management of Conventional Ammunition Military Services’ Ammunition Information Systems Use Different Formats and Cannot Directly Exchange Data The military services’ ammunition systems cannot directly exchange data because they use different data exchange formats. Only the Army’s LMP system uses the standard DOD format; the Navy, Marine Corps, and Air Force systems operate with formats that are obsolete and not compatible with the format used by LMP. In December 2003, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a memo to the military services and other components establishing policy for migration of logistics systems to the Defense Logistics Management Standards (DLMS) and elimination of the use of the Military Standard Systems (MILS). MILS is based on standards and computer technology developed more than 50 years ago.
According to DOD, MILS is functionally constraining, technologically obsolete, and unable to support the tracking of an item throughout its life cycle and across the entire supply chain using unique identifier codes. The Under Secretary of Defense for Acquisition, Technology, and Logistics reaffirmed this direction in 2013, calling for compliance by 2019. In addition, DOD guidance indicates that DLMS, rather than MILS, shall be the basis for new, replacement, and major modifications to logistics business processes or systems. LMP uses DLMS for data exchange; however, the other services’ ammunition systems continue to exchange data using MILS. Consequently, LMP cannot exchange data directly with the other services’ ammunition systems. Rather, data must pass through a translation process at Defense Logistics Agency (DLA) Transaction Services. DLA’s process translates data from one format to another, enabling otherwise incompatible systems to exchange data. With respect to the services’ ammunition systems, the translation process occurs for the exchange of ammunition data between LMP and the Navy’s and the Marine Corps’ ammunition systems; however, according to Air Force Combat Ammunition System officials, the translation of data between LMP and the Air Force’s system is not complete. These officials stated that the translated ammunition data that the Air Force system receives from LMP through DLA Transaction Services do not include lot and serial number information. Figure 2 provides a high-level overview of how data flow between service ammunition systems using the translation process. The DLA Logistics Management Standards Office has recommended that the services and other DOD components make DLMS implementation a top priority in order to achieve efficiencies consistent with the direction from the Under Secretary of Defense for Acquisition, Technology, and Logistics.
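The need for the translation step can be illustrated with a minimal sketch: a legacy MILS-style transaction is a fixed-width 80-character record, while a DLMS-style exchange carries tagged, structured fields. The field names, positions, and record content below are hypothetical assumptions for illustration only; they do not reflect the actual MILSTRIP record layout or the software used by DLA Transaction Services.

```python
# Hypothetical field positions (start, end) within a fixed-width record.
FIELDS = {
    "doc_id": (0, 3),       # document identifier code (fabricated)
    "nsn": (7, 20),         # national stock number (fabricated position)
    "quantity": (24, 30),   # quantity requested (fabricated position)
}

def translate_record(record: str) -> dict:
    """Translate a fixed-width legacy record into a tagged structure,
    roughly the kind of work a broker must do so that otherwise
    incompatible systems can exchange data."""
    if len(record) != 80:
        raise ValueError("legacy record must be exactly 80 characters")
    out = {name: record[start:end].strip() for name, (start, end) in FIELDS.items()}
    out["quantity"] = int(out["quantity"])
    return out

# A fabricated 80-character record, padded with spaces.
legacy = ("A0A" + " " * 4 + "1305011555231" + " " * 4 + "000500").ljust(80)
print(translate_record(legacy))
```

A broker of this kind can parse the positional record, but it can only carry forward fields the legacy format defines, which is why data such as lot and serial numbers can be lost in translation.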
According to DLA officials, any system operating under MILS limits its own capability to send and receive data from more advanced systems. MILS is restrictive in that it does not allow for more detailed information to be included when conducting specific transactions. As a result of the use of different data exchange formats, the services rely on e-mail for certain business transactions related to ammunition. For example, Navy, Marine Corps, and Air Force personnel have to type e-mails to submit requisitions for certain ammunition items to an Army Joint Munitions Command item manager for processing through LMP. These requisitions include items managed by the Single Manager that include specific instructions, as well as items the services are transferring to a different account for reutilization. For items that are not managed by the Single Manager and include specific instructions, Navy, Marine Corps, and Air Force personnel have to type e-mails for such requisitions directly to the depots for processing. Figure 3 provides an overview of the requisition processes that the services currently use when requesting ammunition items, whether or not managed by the Single Manager, that include specific instructions. According to Air Force Global Ammunition Control Point officials, using the e-mail procedure for requisitioning ammunition increases processing time by as much as a week and lacks visibility because there is no confirmation either that the requisition was received or that it was completed. In addition, because different data exchange standards are used, an Army Joint Munitions Command official we interviewed stated that instructions had to be issued for standardizing processes with the other services for requisitions that cannot be completed through the services’ systems and LMP.
Although an Under Secretary of Defense for Acquisition, Technology, and Logistics Business Strategy calls for transition to DLMS by 2019, Marine Corps officials stated that they have no plans at present to update their ammunition system to DLMS, and Naval Supply Systems Command officials told us that the Navy’s plan to update its ammunition system to DLMS has not been funded. According to Air Force officials, the Air Force plans to update its ammunition system to the DLMS standard by 2017. The services have lagged in transitioning to DLMS for different reasons, one of them being that funding for this upgrade has not been a priority. According to Naval Supply Systems Command officials we interviewed, they submitted funding requests annually from 2010 through 2013 to update the Navy’s ammunition system to DLMS; these requests were denied. Marine Corps officials stated they are waiting to update the Marine Corps ammunition system until after the Navy completes its DLMS update. However, the Navy and the Marine Corps made significant changes to their respective ammunition systems in 2004 and 2008 without updating to DLMS. According to Naval Supply Systems Command officials, the Navy incorporated wholesale ammunition operations to the current system in 2004. Similarly, Marine Corps officials we interviewed stated that the Marine Corps replaced its legacy system with the current system, Ordnance Information System–Marine Corps, in October 2008. Without upgrades of the Navy, Marine Corps, and Air Force systems to DLMS, the services will continue to devote extra time and resources to ensure the efficient transfer of ammunition data between these systems and LMP. 
LMP Has Some Limitations That Can Affect the Accuracy and Completeness of Data on Ammunition Items Stored at Depots LMP was not specifically designed to track ammunition and has some limitations in its ammunition-related functionality that can affect the accuracy and completeness of data for items stored at Army depots. If ammunition-related functionality in LMP is not corrected, any data problems that exist may be replicated because LMP provides information to other services’ ammunition systems. To address ongoing data quality concerns, the Army and the other services have had to use manual processes to check and, when necessary, make corrections to ammunition data. DOD guidance on supply chain materiel management requires components to implement data administration policies and procedures aggressively in ways that provide clear, concise, consistent, unambiguous, accurate, up-to-date, and easily accessible data DOD-wide, thereby minimizing the cost and time required to transform, translate, or research different-appearing, but otherwise identical data. Further, guidance jointly developed by DOD components involved in ammunition management indicates that the Single Manager Field Operating Activity will provide accurate and timely information to the military services on ammunition stored at Single Manager sites, the Army depots where conventional ammunition is stored. LMP, however, has some limitations in its ammunition-related functionality that can affect the quality of data that it maintains and provides to the other services. For example, we found the following: LMP does not accurately calculate ammunition storage capacity at Army ammunition depots. Depot personnel need accurate information on the storage capacity that is available in buildings in order to plan for storing the ammunition that arrives at the depot.
According to officials at Tooele Army Depot and Letterkenny Munitions Center, LMP overestimates the amount of space available for storage, and depot personnel must calculate storage capacity manually. Tooele officials said this process can often take up to a day and, in the end, still yields only an approximation of available space. Joint Munitions Command assessments conducted in fiscal year 2012 or 2013 found that all the ammunition depots had problems with calculating storage capacity using LMP. The assessments we reviewed do not quantify the extent to which the depots must expend resources to calculate storage capacity manually. However, seven of the eight assessments stated that LMP’s limitation in calculating ammunition storage capacity could have a negative impact on mission performance, and six of the eight assessments indicated that the issue could result in unnecessary costs. Army officials at the Joint Munitions Command told us they expect to improve this functionality in 2014. LMP may not fully account for ammunition items that are shipped from Army depots to other locations. DOD guidance provides that the Single Manager remains accountable for inventory items until the destination receives them. However, as documented in an assessment of the ammunition process by the Army that concluded in November 2012, LMP lacks receipt confirmation for shipped ammunition items. LMP drops the item from record once the item ships from the depot, but there is no confirmation of receipt back to LMP from the receiving location. Without receipts for shipped items, there is a gap in accountability and visibility of ammunition items. LMP does not have a capability for generating certain performance information used for ammunition stockpile management.
According to the fiscal year 2012 annual report by the Executive Director for Conventional Ammunition, LMP was unable to provide inventory accuracy rates, which is a key performance metric used to measure the Single Manager’s ability to perform stockpile management. According to a briefing slide provided by Army Joint Munitions Command, inventory accuracy is the comparison between the physical inventory and the accountable record. Similarly, the Marine Corps noted in its response to a fiscal year 2012 Army survey that inventory accuracy had neither been verified through physical inventory nor reconciled within LMP. According to a Joint Munitions Command official, the Command has been sending LMP-generated inventory accuracy data to the ammunition depots for them to confirm and correct if necessary. As a result, officials expect the data will be used as the basis for the next Executive Director for Conventional Ammunition annual report on the Single Manager. Because the Executive Director for Conventional Ammunition has not released its report for fiscal year 2013, we were unable to determine whether inventory accuracy has been adequately addressed. Officials at the other military services also have cited various concerns about the reliability of LMP data. According to responses provided by Marine Corps officials, data that the Marine Corps receives from LMP sometimes fail to differentiate information about the ammunition’s intended purpose and ownership details. Air Force officials we interviewed stated that LMP assigned a new lot number to an ammunition item that had undergone maintenance, but it still kept the old lot number on record—causing double counting. That problem, according to Air Force officials, required personnel to spend time determining which data were accurate. Further, ammunition officials with the Navy, Marine Corps, and Air Force stated that they spend time verifying the information sent from LMP to their respective systems.
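The inventory accuracy metric described above, a comparison between the physical inventory and the accountable record, can be expressed as a simple match rate. The records, field names, and threshold-free formulation below are fabricated illustrations; the Single Manager's actual computation may differ.

```python
def inventory_accuracy(accountable: dict, physical: dict) -> float:
    """Percentage of stock records whose accountable (book) quantity
    matches the physical count. Illustrative sketch only."""
    items = set(accountable) | set(physical)
    if not items:
        return 100.0
    matches = sum(
        1 for item in items
        if accountable.get(item, 0) == physical.get(item, 0)
    )
    return 100.0 * matches / len(items)

# Fabricated example: one of four records disagrees with the count.
book = {"A": 100, "B": 250, "C": 40, "D": 12}
count = {"A": 100, "B": 248, "C": 40, "D": 12}
print(inventory_accuracy(book, count))  # 75.0
```

A system that cannot produce this rate, as the report states was the case for LMP in fiscal year 2012, leaves managers without a basic measure of how well the accountable record reflects what is actually in storage.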
According to an Army planning document, there are several processes for manual review and corrections between LMP and the other services’ ammunition systems. (Ownership codes are numeric codes used to identify which military service owns the item; purpose codes are alphabetic codes used to identify the purpose for which the item is being held. See GAO, Defense Logistics: Additional Oversight and Reporting for the Army Logistics Modernization Program Are Needed, GAO-11-139 (Washington, D.C.: Nov. 18, 2010).) A system commonly referred to as SmartChain addresses specific functionality to ship, receive, inventory, and perform stock movements for ammunition items for the Army’s Joint Munitions and Lethality Command. However, the Army has recognized other limitations associated with ammunition-related data in LMP that also affect the other services’ ammunition systems. Although the Army had planned several upgrades to LMP’s ammunition-related functionality in Increment 2, the Army has decided not to include a number of these upgrades. Increment 2 is a major enhancement to LMP and is scheduled for deployment in phases through fiscal year 2016. Of five ammunition-related upgrades that had been planned for Increment 2, only one is now included (see table 1). LMP Product Management Office officials said that the cost and schedule for implementing Increment 2 had affected their ability to include all the planned ammunition-related upgrades. As shown in table 1, one of the upgrades that the Army is no longer including in Increment 2 is an upgrade to improve LMP’s capability to provide accurate asset posture reporting and transaction reporting and reconciliation between LMP and the services’ ammunition systems. The upgrade, according to the Army, would eliminate many of the manual processes currently in place.
Joint Munitions Command officials said some requirements originally associated with this upgrade have been or will be addressed outside of Increment 2. The Army, however, has not yet developed a comprehensive plan, with timeframes and costs, for addressing the limitations that exist in LMP ammunition-related functionality, including those that were to be addressed by the planned upgrades in Increment 2. Such a plan could provide DOD reasonable assurance that its efforts to upgrade ammunition-related functionality in LMP are making progress. Further, without addressing these limitations, the Army and the services will continue to rely on manual processes to check and correct LMP ammunition-related data. NLAC Has Some Limitations in Providing Visibility of Conventional Ammunition The Army’s NLAC is a DOD-wide repository of ammunition data; however, it has some limitations in providing visibility of conventional ammunition and is not widely used outside of the Army. The Army does not have reasonable assurance that NLAC collects complete and accurate data from service ammunition systems. In addition to the challenges with LMP data discussed earlier in this report, NLAC also does not have certain checks and controls that could help ensure that data are accurately transferred from source systems to NLAC. Another limitation to NLAC’s ability to provide visibility of assets is that DOD has not determined whether NLAC should be designated as an authoritative source of ammunition data. As noted previously, DOD guidance on supply chain materiel management requires components to implement data administration policies and procedures aggressively in ways that provide clear, concise, consistent, unambiguous, accurate, up-to-date, and easily accessible data DOD-wide, to help minimize the cost and time required to transform, translate, or research different-appearing, but otherwise identical data.
In addition, federal internal control standards state that information systems should have effective internal controls that include application controls, which are designed to help ensure completeness, accuracy, authorization, and validity of all transactions during application processing. Our prior work has shown that controls should be installed at an application’s interfaces with other systems to ensure that all inputs are received and are valid and outputs are correct and properly distributed. An example of the recommended controls is computerized edit checks built into the system to review the format, existence, and reasonableness of data. NLAC was designed to be a repository for all services’ ammunition data by aggregating and distributing information throughout DOD. NLAC collects data at both the wholesale and retail levels, including inventory information such as quantity, location, requirements, and production. Several times a day, the repository receives updated data that are maintained in a web-based application for users across DOD—including headquarters, combatant commands, and ammunition supply points. NLAC data are available for use by service and joint component officials and, as noted earlier, feed into the Defense Readiness Reporting System and the Global Combat Support System-Joint. Although NLAC contains ammunition data from across DOD, NLAC is not widely used outside the Army. Information on NLAC provided by the Army shows that most users are from within that service. For example, NLAC is used as the data source for the Army’s semiannual Total Army Ammunition Authorization and Allocation Conference and the Centralized Ammunition and Missile Management system. Figure 4 depicts NLAC’s user base. In our discussions about NLAC, ammunition officials we interviewed from the Joint Staff, Marine Corps, Air Force, and Navy regarded data from other ammunition systems as more accurate and complete than the data in NLAC.
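The computerized edit checks the standards call for, reviewing the format, existence, and reasonableness of incoming data, can be sketched as below. The field names and validation rules are illustrative assumptions, not NLAC's actual interface rules.

```python
def edit_check(record: dict) -> list:
    """Return a list of problems found in an incoming record, covering
    existence, format, and reasonableness. Hypothetical rules only."""
    problems = []
    # Existence: required fields must be present.
    for field in ("nsn", "condition_code", "quantity"):
        if field not in record:
            problems.append(f"missing field: {field}")
    # Format: NSN is 13 digits; condition code is a single letter.
    nsn = record.get("nsn", "")
    if nsn and not (len(nsn) == 13 and nsn.isdigit()):
        problems.append("nsn must be 13 digits")
    cc = record.get("condition_code", "")
    if cc and not (len(cc) == 1 and cc.isalpha()):
        problems.append("condition code must be one letter")
    # Reasonableness: quantity must be a non-negative integer.
    qty = record.get("quantity")
    if qty is not None and (not isinstance(qty, int) or qty < 0):
        problems.append("quantity must be a non-negative integer")
    return problems

good = {"nsn": "1305011555231", "condition_code": "A", "quantity": 500}
bad = {"nsn": "1305", "condition_code": "7", "quantity": -2}
print(edit_check(good))  # []
print(edit_check(bad))
```

Note that checks like these validate type and format but not whether a value is actually correct, which is the same limitation the report later describes in NLAC's logic checks.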
Non-Army officials who used NLAC said they confirm information with other service systems or by contacting knowledgeable officials. For example, Joint Staff and Marine Corps ammunition officials stated that, rather than relying solely on NLAC, they preferred to take the extra step of phoning or e-mailing their counterparts in the other services to obtain information on specific items. NLAC does not have the checks and controls that federal internal control standards recommend to ensure that data from source systems are reliable. As a result, errors in the originating data also will appear in NLAC. In addition to being subject to inaccuracies from source systems, NLAC may not receive complete data transmissions from source systems. For example, NLAC is not receiving all serial numbers for serialized items, even where these exist in LMP. Joint Munitions Command officials have submitted a request to NLAC for assistance to determine why the transmission from LMP to NLAC is not taking place, but the issue has not yet been addressed and officials have not received a timetable for resolution. Also, the data available in NLAC on the services’ ammunition may differ from data found in the services’ accountable records because of different business processes among the military services. For example, the Air Force—unlike the Army—accounts for ammunition items that have been shipped to another location by retaining the amount to be delivered at the originating location until receipt has been confirmed at the destination. As a result, one Air Force official observed that the Air Force’s Combat Ammunition System will show higher quantities of ammunition items that have amounts designated for shipment than are shown in NLAC. NLAC program personnel have taken some steps to improve accuracy and to address errors and inconsistencies in data received from the services’ systems.
For example, according to program officials, they have monitored incoming data to ensure that updates occurred, such as whether the repository received the expected volume of information. Furthermore, they receive some data elements, such as the weight of the explosive component of the ammunition, directly from other sources even if these exist in LMP, because of known issues with LMP accuracy. They also conduct logic checks of incoming data; for example, they can detect and correct instances in which a data field is supposed to contain an alphabetic character but the incoming file actually has a numeric character. However, they do not check whether the correct alphabetic characters appear. NLAC officials told us that users are their best source for detecting errors, particularly Army officials, as that service accounts for the preponderance of total users. NLAC is also limited in its ability to provide visibility of conventional ammunition because DOD has not determined whether NLAC should be designated as an authoritative source of ammunition data. According to DOD guidance, an authoritative data source is a recognized or official source of data that could have a designated mission statement to publish reliable and accurate data. NLAC began as the asset visibility module of the Joint Ammunition Materiel Management System, but when that program was terminated, the Army took over the effort from its joint predecessor. Although NLAC is an outgrowth of a DOD effort, DOD has not designated an authoritative data source for providing asset visibility of the conventional ammunition inventory DOD-wide. By designating NLAC as an authoritative data source, DOD might be better able to provide visibility of conventional ammunition department-wide.
The Global Combat Support System-Joint is the capstone joint logistics mission application enabling the Global Combat Support System strategy of providing unimpeded access to information regardless of source and fusing information from disparate sources into a cohesive and common operational picture. Services Have a Process for Collecting and Sharing Data on Conventional Ammunition, but the Army Does Not Report Information about All Available and Usable Items The military services have a process for collecting and sharing data in annual reports on conventional ammunition levels and use these reports to identify inventory owned by one service that may be available to meet the requirements of another service. However, the Army’s command that manages certain missiles has not contributed to these annual reports for the missile inventory, including any items that exceed the service’s requirement-related munitions stock. Officials stated that they do not contribute to the annual report because the missile stockpile rarely has items to offer for redistribution. Also, the Army’s annual report does not provide information about all available, usable ammunition items. Specifically, the Army’s report does not include information from prior years about usable ammunition that was unclaimed by another service and stored for potential foreign military sales or slated for potential disposal. Services Have a Process for Collecting and Sharing Data in Annual Reports about Inventory of Conventional Ammunition The military services have a process for collecting and sharing conventional ammunition information through the stratification reports that they prepare annually. They use these reports to identify inventory owned by one service that may be available to meet the requirements of another service.
DOD Regulation 4140.1-R directs the military services to assess the ability of the ammunition inventory to meet stated requirements by stratifying their inventories into various categories and requires them to prepare an annual internal report that lists the current inventory level of all the ammunition. The annual internal report divides the inventory into the categories of requirement-related munitions stock, economic retention munitions stock, contingency retention munitions stock, and potential reutilization and disposal stock. The regulation also directs the services to develop an external report identifying inventory in the long-supply categories of economic retention munitions stock, contingency retention munitions stock, and potential reutilization and disposal stock. The services are to use this report to identify potential opportunities for redistributing potential reutilization and disposal munitions stock, inventory that exceeds the requirements of an individual military service but may not exceed the requirements of DOD. The regulation also directs the services to consider the economic retention and contingency retention stock as potentially available inventory for redistribution if another service has a shortage. In their reports on fiscal year 2012 inventory, the Army, the Marine Corps, and the Air Force reported approximately 6 billion ammunition items in the inventory, of which approximately 224 million items (3.7 percent) were excess to the requirements of one service, categorized as potential reutilization and disposal stock. The share of excess ammunition inventory fluctuated from 5.2 percent to 28.4 percent of total inventory. Prior to the Quad Services Cross-Leveling Review, different organizations within each service may review drafts of the annual reports to verify that information from each ammunition system is accurate. The reports are then distributed to the other services.
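The stratification described above can be sketched as allocating an item's on-hand inventory across the categories in priority order, with anything left over falling into potential reutilization and disposal stock. The allocation order and quantities below are illustrative assumptions; the services' actual stratification is item-by-item and governed by DOD Regulation 4140.1-R.

```python
def stratify(on_hand: int, requirement: int, economic_retention: int,
             contingency_retention: int) -> dict:
    """Allocate an item's on-hand inventory across stratification
    categories in an assumed priority order; remaining stock falls
    into potential reutilization/disposal. Illustrative sketch only."""
    strata = {}
    remaining = on_hand
    for category, limit in (
        ("requirement-related", requirement),
        ("economic retention", economic_retention),
        ("contingency retention", contingency_retention),
    ):
        strata[category] = min(remaining, limit)
        remaining -= strata[category]
    strata["potential reutilization/disposal"] = remaining
    return strata

# Fabricated quantities for a single notional item.
result = stratify(on_hand=1_000_000, requirement=700_000,
                  economic_retention=150_000, contingency_retention=100_000)
print(result)
```

In this fabricated example, 50,000 items end up in the potential reutilization and disposal category, the stock that the Quad Services Cross-Leveling Review would then screen for transfer to another service.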
In addition, the Office of the Executive Director for Conventional Ammunition, which facilitates this process, compares the data in the inventory reports with data on planned procurements of ammunition. After the services share their annual reports on ammunition inventory, including which ammunition could be reutilized, service officials meet to discuss how they will redistribute ammunition that is available for any service’s requirements. As noted previously, specific information pertaining to the active Navy inventory is classified and is not included in this GAO report. Information on ammunition items that were available for redistribution is not classified. The ammunition identified as available for redistribution totaled approximately 44 million items, of which approximately 32 million were small-caliber items such as ammunition for machine guns or pistols, 11 million were demolition materials such as detonation cords, fuses, and pyrotechnic initiators, 1 million were ground defense items such as grenades used for riot control, and the remaining 2 million were a mixture of other various types of ammunition. Army’s Aviation and Missile Command Does Not Contribute to Required Annual Report The Navy, the Marine Corps, and the Air Force share information on the availability of missiles and missile support material that exceed their requirement-related munitions stock, but the Army’s Aviation and Missile Command does not annually report the same information. DOD Regulation 4140.1-R requires that the services include all conventional ammunition, including tactical missiles, in their annual reports. The Logistics Center under the Army Aviation and Missile Command manages the inventory of certain missile items—including Stinger, Javelin, and Hellfire missiles—and the Joint Munitions Command manages all other ammunition items, including small rockets such as shoulder-launched ammunition. The Army’s annual report does not include reference to the tactical missile inventory that is managed by the Aviation and Missile Command.
According to Army officials, the Aviation and Missile Command does not contribute to the annual report on inventory because the missile stockpile rarely has items to offer for redistribution. The Navy, the Marine Corps, and the Air Force all include information in their annual reports on the various types of missiles that they manage. Some of the missiles that are included in their reports are the same kind of missiles that are managed by the Aviation and Missile Command’s Logistics Center. According to Army officials, the Aviation and Missile Command does not contribute to the annual reporting or redistribution process, but the Army has engaged in an internal process that annually reviews the missile inventory separately from the annual ammunition reporting process. In the course of our review, an Army headquarters official indicated that the Army was planning to take steps to include information from the Aviation and Missile Command in future annual ammunition reports. However, the Army had not yet articulated a plan of action for making this change. If the Army and its missile command do not annually report any missiles, including missiles excess to the service’s requirements, it risks having other services spend additional funds to procure missiles when unused, usable missiles already exist in the Army’s stockpile. Also, without such annual reporting, the information DOD obtains lacks full transparency about missiles that could be used to support some of the other services’ requirements. Therefore, it will be important for the Army to ensure that the Aviation and Missile Command implements its direction in fiscal year 2014 and beyond.
Army’s Annual Reports Do Not Include Information on All Usable Ammunition The Army’s annual stratification report includes current ammunition inventory levels but does not include information from prior years about usable ammunition that was unclaimed by another service and stored for potential foreign military sales or slated for potential disposal. DOD Regulation 4140.1-R directs the military services to assess the ability of the ammunition inventory to meet stated requirements by stratifying inventory into categories. It also directs the preparation of annual reports that list the current inventory levels for ammunition items. The annual internal report divides the inventory into requirement-related, retention, and potential reutilization and disposal stocks. The regulation also directs the services to use the annual reporting process to identify potential opportunities for redistributing potential reutilization and disposal stocks—inventory that exceeds the requirements of an individual military service but may not exceed the requirements of DOD. As the Single Manager, the Secretary of the Army disposes of serviceable or unserviceable conventional ammunition that is obsolete—inventory that is no longer needed due to changes in technology, laws, customs, or operations—or in excess of the requirements of the department—inventory that has completed reutilization screening within DOD and is not required for the needs of any DOD activity. According to an Army financial statement in June 2013, the Army had about 39 percent of its total inventory (valued at about $16 billion) in a storage category for ammunition items that were excess to all the services’ requirements in a prior year and could be disassembled or destroyed in the future. However, a service may decide in a subsequent year that it needs additional ammunition of some type and check with the Army for availability before starting a procurement or to meet an emergent need. 
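The stratification described above divides each item's on-hand inventory into requirement-related, retention, and potential reutilization and disposal stocks. The sketch below illustrates that three-way split in Python; the function name, field names, and quantities are illustrative assumptions, not DOD's actual stratification algorithm.

```python
# Illustrative three-way stratification of one item's on-hand quantity into
# the categories named in the annual report. The function name, field names,
# and all quantities are hypothetical; this is not DOD's actual algorithm.

def stratify(on_hand, requirement, retention_level):
    """Split an on-hand quantity into requirement-related, retention,
    and potential reutilization and disposal (PRD) stocks."""
    requirement_related = min(on_hand, requirement)
    remaining = on_hand - requirement_related
    retention = min(remaining, retention_level)
    # Whatever exceeds this service's requirement and retention levels is
    # the stock offered for redistribution to the other services.
    prd = remaining - retention
    return {
        "requirement_related": requirement_related,
        "retention": retention,
        "potential_reutilization_disposal": prd,
    }

# Example: 1,000 rounds on hand, 600 required, 250 held as retention stock.
print(stratify(1000, 600, 250))
# -> {'requirement_related': 600, 'retention': 250, 'potential_reutilization_disposal': 150}
```

In this sketch, only the quantity in the third category would appear in the redistribution discussion; when on-hand stock does not cover the requirement, the retention and PRD categories are simply zero.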
Officials told us that since October 2012 the Army has reclaimed at least 44 missiles from the disposal stockpile to meet its needs—such as fulfilling a testing requirement. Also, in 2012, the Marine Corps reclaimed ammunition storage components to meet a service need. In another example, Navy officials told us that functional ammunition components, called sonobuoys, were reclaimed from the disposal stockpile when a need arose for them. The Army is not sharing information on all usable ammunition that previously was unclaimed by another service and stored as part of the disposal stockpile. This information is not routinely shared with all services in the annual reports on ammunition inventory because DOD guidance does not require reporting this type of inventory as part of the stratification process. Officials told us that prior to the annual redistribution meeting, the Office of the Executive Director for Conventional Ammunition reviews the stockpile of usable ammunition that was previously unclaimed by any other service and stored as part of the disposal stockpile. However, this information is not included in the annual reports and shared with the services as part of the redistribution process. Without guidance to require that the Army’s annual reports or another report used as part of the redistribution process include all information about available and usable inventory—comprehensive information from multiple years—there is a risk that the services may budget funds to procure new supplies of conventional ammunition to meet a requirement when the ammunition items are already available in the DOD inventory but categorized for demilitarization or disposal. Conclusions DOD policy requires the highest possible degree of efficiency and effectiveness in wholesale conventional ammunition logistics functions for the inventory, but DOD’s systems have some limitations that hamper the department’s ability to manage this inventory efficiently. 
The use of outdated data exchange standards by Navy, Marine Corps, and Air Force ammunition systems makes it difficult for them to efficiently share data with LMP, the only system using the updated standards. In addition, while the Army has made progress in improving LMP data overall, ammunition-related functionality continues to have challenges that affect the accuracy and completeness of LMP ammunition data used by the services for ammunition management. The Army is aware of these challenges but has not developed a plan to address them. A comprehensive plan, with time frames and costs, for resolving limitations in LMP ammunition-related functionality could provide DOD reasonable assurance that its efforts to upgrade this functionality in LMP are making progress. Further, efforts to achieve DOD-wide visibility of ammunition assets are hampered because the existing data repository, NLAC, lacks some checks and controls that could improve the reliability of data from source systems. Moreover, DOD has not designated an authoritative source of data on conventional ammunition DOD-wide, whether NLAC or through some other means. By designating an authoritative source, DOD could have a means to provide better visibility of conventional ammunition department-wide. The services use the stratification and redistribution process to better optimize the department’s ammunition inventory by collecting and sharing information on available inventory that could meet the requirement of another service. However, the Army does not provide information on missiles in the annual reports that it prepares as part of this process. Also, the Army does not share information on usable inventory in a storage category for ammunition items that were excess to all the services’ requirements in a prior year and placed into storage in preparation for disassembly or disposal. 
Without such annual reporting, the information DOD obtains may lack full transparency about all available items and may miss opportunities to avoid procurement costs for certain usable items that may already be available in the Army’s stockpile. Recommendations for Executive Action We are making seven recommendations to improve the efficiency of DOD’s systems for managing its conventional ammunition inventory and to improve data sharing among the services. To improve the efficiency of data exchanges between LMP and other service ammunition systems, we recommend that the Secretary of Defense, in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics, take the following two actions: Direct the Secretary of the Navy to (1) take steps to incorporate DLMS into the Ordnance Information System and (2) direct the Commandant of the Marine Corps to take similar steps with regard to the Ordnance Information System–Marine Corps. Direct the Secretary of the Air Force to assess the feasibility of accelerating the 2017 target date for incorporating DLMS into the Combat Ammunition System and, if determined to be feasible, take appropriate implementation actions. To provide greater assurance that LMP is capable of maintaining accurate, timely, and more complete ammunition data in accordance with DOD supply chain materiel management and ammunition guidance, we recommend that the Secretary of Defense direct the Secretary of the Army to establish a plan, with timeframes and costs, for incorporating ammunition-related functionality into LMP, including functionality that is no longer being included in the planned ammunition-related upgrades for Increment 2. 
To improve DOD’s ability to provide total asset visibility over conventional ammunition, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, to take the following two actions: Identify and implement internal controls, consistent with federal internal control standards, that will provide reasonable assurance that NLAC collects comprehensive, accurate data from other service ammunition systems. Designate an authoritative source of data on conventional ammunition DOD-wide—whether NLAC or through some other means—and issue guidance to implement this decision. To enable the military services to make maximum use of ammunition in the inventory, we recommend that the Secretary of Defense take the following two actions: Direct the Secretary of the Army to ensure that annual stratification reports on conventional ammunition include missiles managed by the Army Aviation and Missile Command. Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to revise guidance to require the Secretary of the Army to include in its annual reports, or another report, as appropriate, information on all available ammunition for use during the redistribution process—including ammunition that in a previous year was unclaimed by another service and categorized for disposal. Agency Comments and Our Evaluation In written comments on a draft of this report, DOD concurred with our seven recommendations and provided additional comments describing actions underway or planned to address them. DOD also provided technical comments, which we incorporated as appropriate. The full text of DOD’s comments is reprinted in appendix III. 
With regard to the first recommendation, that the Navy and Marine Corps take steps to incorporate DLMS into their ammunition systems to improve the efficiency of data exchanges with the Army’s LMP, DOD concurred and cited several examples of DOD guidance that underscore the importance of DLMS and use of the standard for logistics systems and data exchanges. Further, DOD stated that recent guidance related to materiel management directs DOD components to use standard logistics data exchanges. Taking actions to implement this guidance, as we recommended, would better position the Navy and Marine Corps ammunition systems to efficiently exchange data with LMP. With regard to the second recommendation, that the Air Force assess the feasibility of accelerating the 2017 target date for incorporating DLMS into the Combat Ammunition System, DOD concurred and stated that incorporation of DLMS is tied to overall development efforts planned for the system. While DOD noted that the DLMS capability cannot be incorporated into the Air Force’s existing ammunition system independently, DOD stated that the Air Force expected to be able to incorporate DLMS by fiscal year 2017 with the possibility of earlier implementation based on contract performance. If fully implemented as planned, this action should help address the intent of the recommendation to ensure that the Air Force incorporates DLMS into the Combat Ammunition System on or before its target fiscal year 2017 timeframe. With regard to the third recommendation, that the Army take steps to establish a plan, with timeframes and costs, for incorporating ammunition-related functionality into LMP, including functionality no longer included in the planned ammunition-related upgrades for Increment 2, DOD concurred and noted that the Army has taken a phased approach to LMP implementation. 
DOD stated that some additional ammunition-related functionality is scheduled for deployment as part of Increment 2 in fiscal year 2016, and additional functionality will be evaluated for potential inclusion in follow-on increments of LMP. Given the schedule delays in incorporating needed ammunition-related functionality in LMP, as discussed in the report, we continue to believe that the Army should establish a plan with timeframes and costs for incorporating this functionality. Such a plan could provide DOD with reasonable assurance that the Army's efforts to upgrade ammunition-related functionality in LMP are making progress and, moreover, provide greater assurance that LMP is capable of maintaining accurate, timely, and more complete ammunition data in accordance with DOD supply chain materiel management and ammunition management guidance. With regard to the fourth recommendation, that DOD identify and implement internal controls, consistent with federal internal control standards, that will provide reasonable assurance that NLAC collects comprehensive, accurate data from other service ammunition systems, DOD concurred and stated that the Army updated the performance work statement for NLAC to include analyzing new data sources to identify improved system interfacing that will improve data accuracy, completeness, quality assurance, and auditability. If implemented as planned, this action should help to address the intent of the recommendation. With regard to the fifth recommendation, that DOD designate an authoritative source of data on conventional ammunition DOD-wide and issue guidance to implement this decision, DOD concurred and stated that it would assess the alternatives and designate the appropriate solution by the fourth quarter of fiscal year 2015. We are encouraged that DOD will seek to identify an authoritative source of data and reiterate that, at that time, DOD should also issue implementing guidance. 
With regard to the sixth recommendation, that annual stratification reports on conventional ammunition include missiles managed by the Army Aviation and Missile Command, DOD concurred and stated that it would clarify direction in its recently issued guidance to ensure that this happens. DOD added that the Army had already begun to provide missile information during the 2014 stratification meeting. We are encouraged by this step and believe that DOD will benefit by ensuring that the Army continues to provide this information. With regard to the seventh recommendation, that the Under Secretary of Defense for Acquisition, Technology, and Logistics revise guidance to require the Secretary of the Army to include information on all available ammunition for use during the redistribution process, including ammunition that in a previous year was unclaimed by another service and categorized for disposal, DOD concurred and noted that the Under Secretary would clarify direction in recently issued guidance that the military departments will use information on all available ammunition categorized for disposal. This is a positive step, but DOD does not state in its response how such information will be reported for use in the redistribution process. Requiring the Army to include this information as part of the redistribution process, as we recommended, would increase transparency about all available items and potentially help DOD avoid procurement costs for certain usable items that may already be available in the Army’s stockpile. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To determine the extent to which Department of Defense (DOD) systems facilitate efficient management of the conventional ammunition inventory, we reviewed DOD guidance on exchanging data, developing data systems, and maintaining department-wide ammunition visibility. We reviewed relevant documents, including memos from the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Defense Logistics Agency, as well as technical guidance from the Defense Logistics Agency Management Standards Office (particularly Defense Logistics Manual 4000.25, Defense Logistics Management System), and we discussed with Air Force officials their plans for updating their ammunition system. At the service level, we identified the services’ systems, particularly the Logistics Modernization Program (LMP), which is the Army’s system of record for Army wholesale inventory and wholesale ammunition belonging to all of the services; we reviewed documents pertaining to capabilities and limitations of these systems. In the case of LMP, these documents included: change requests indicating systems adjustments to improve ammunition management capabilities; requirements, business case, and cost estimate for the update known as Increment 2; and records of inspections at each of the eight depots at which the Army’s Joint Munitions Command stores ammunition for the Army and the other services. At the DOD-wide level, we reviewed minutes of Joint Ordnance Commanders’ Group (JOCG) meetings. 
The JOCG is an inter-service forum, among whose goals is to identify, implement or recommend for implementation joint opportunities to reduce cost, increase effectiveness and ensure interoperability and/or interchangeability of munitions systems. Also, we interviewed DOD officials responsible for inventory records at each of the services’ ammunition system program offices to discuss the capabilities and limitations of each of the services’ ammunition inventory systems of record. These included officials from the Army, Navy, and Air Force headquarters logistics staffs–Army Deputy Chief of Staff for Logistics (G-4); Deputy Chief of Naval Operations for Material Readiness and Logistics (N-4); and Deputy Chief of Staff of the Air Force for Logistics, Installations, and Mission Support (A-4/7). For the Army, we interviewed officials from the Army Materiel Command and two of its subordinate commands (Joint Munitions Command and Aviation and Missile Command) that have responsibilities for ammunition management. We also interviewed officials from the services’ ammunition systems program offices, including the Army’s LMP Product Management Office; the Naval Supply Systems Command and Ordnance Information System program office in Mechanicsburg, Pennsylvania; the Marine Corps Systems Command, Program Manager - Ammunition in Stafford, Virginia; the Air Force Global Ammunition Control Point at Hill Air Force Base, Utah; and the Air Force Combat Ammunition System program office. In order to better understand inventory management challenges at the depot level, we met with logistics specialists at two depots: Tooele Army Depot, Utah; and Letterkenny Ammunition Center, Pennsylvania. We selected these depots primarily based on their proximity to other ammunition management locations. 
We reviewed National Level Ammunition Capability (NLAC) documents, including performance work statements and requirements documents, usage statistics, and interface testing results; and reviewed memorandums of agreement and interface documents to assess how the services and NLAC exchange data. We obtained access to NLAC, and developed and executed queries to assess whether its data were consistent with Army and Air Force source data. Because this report focused on wholesale ammunition stocks, we did not gather information about all systems that provide information to NLAC. For example, we did not attempt to study the extent to which systems containing information about retail or in-transit ammunition stocks are complete and accurate. We conducted telephone interviews with NLAC program management officials at Army Materiel Command headquarters in Huntsville, Alabama, and Joint Munitions Command headquarters in Rock Island, Illinois, and met with NLAC contractor personnel in Chambersburg, Pennsylvania. To learn more about the interface between NLAC and the Global Combat Support System-Joint, we also reviewed the memorandum of agreement between the Program Management Offices for those systems and interviewed an official from the Defense Information Systems Agency, which oversees that system. To determine the extent to which the military services collect and share inventory data to help them meet their stated requirements, we reviewed policies, procedures, and other guidance, as well as reports related to conventional ammunition reporting requirements for the services, including DOD Regulation 4140.1-R and the Joint Conventional Ammunition Policies and Procedures. We examined DOD Regulation 4140.1-R to gain an understanding of the responsibility of the services to report inventory levels for items in long-supply retention categories to the other services. 
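The queries mentioned above compared repository data against service source data. A minimal sketch of that kind of reconciliation follows; the schema, stock numbers, and quantities are hypothetical and do not reflect NLAC's actual data structures or our actual queries.

```python
# Illustrative reconciliation of repository records against a source system's
# records, keyed by stock number. The schema, stock numbers, and quantities
# are hypothetical and do not reflect NLAC's actual data structures.

def find_mismatches(repository, source):
    """Return item IDs that are missing from the repository or whose
    quantities differ from the source system's figures."""
    mismatches = {}
    for item_id, src_qty in source.items():
        repo_qty = repository.get(item_id)
        if repo_qty is None:
            mismatches[item_id] = ("missing", src_qty)
        elif repo_qty != src_qty:
            mismatches[item_id] = ("quantity", repo_qty, src_qty)
    return mismatches

source_system = {"1305-A": 5000, "1310-B": 120, "1376-C": 75}
repository = {"1305-A": 5000, "1310-B": 100}  # 1310-B disagrees; 1376-C absent

print(find_mismatches(repository, source_system))
# -> {'1310-B': ('quantity', 100, 120), '1376-C': ('missing', 75)}
```

A check in this direction (source into repository) flags records the repository dropped or altered; running it in the opposite direction would additionally flag records the repository holds that no source system reports.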
We examined the Joint Conventional Ammunition Policies and Procedures to gain an understanding of the responsibility of the Office of the Executive Director for Conventional Ammunition and its role in the annual redistribution process. Also, we obtained annual stratification reports—DOD’s term for each service’s list of items in the current inventory—from the Army, Navy, Marine Corps, and Air Force. These reports list items in the current inventory (as of the date of the report) and display how much meets or exceeds service requirements. We reviewed these reports to gain an understanding of the size and scale of the inventory and to determine the percentage of items in each category for fiscal years 2009 through 2013. We interviewed officials knowledgeable about the annual ammunition report process and determined that the information included in those reports was sufficiently reliable for the purposes of our report. Also, we attended the March 2013 annual meeting at Picatinny Arsenal, New Jersey, at which service representatives met to discuss these reports and redistribute ammunition excess to service needs to help other services to meet their requirements. We obtained and reviewed the records of results from the redistribution meeting for fiscal years 2009 through 2013. We discussed the process for collecting, reviewing and categorizing the inventory data with officials responsible for compiling these reports. We also met with officials from Army Materiel Command’s Office of the Executive Director for Conventional Ammunition to understand its processes for preparing for the redistribution meeting and for reporting on the results of the meeting. We circulated a standard set of questions to each of the services and analyzed the results, and determined that the information was sufficiently reliable for the purposes for which we used it. 
That is, we determined that the services have established processes for collecting and reporting data into their own systems and for receiving information about stocks that are stored at Army depots. We did not attempt to verify figures about quantities, locations, or other attributes of the data. The standard set of questions we circulated to the services asked detailed and technical questions about the systems. For example, for system architecture we asked how and in what format the Army’s LMP sends data to the other services’ ammunition systems. Similarly, we asked how and in what format the services’ systems send ammunition data to NLAC. We also asked about data quality controls and limitations and the services’ perception of LMP and NLAC’s data quality and limitations. We collected responses from each of the services regarding their ammunition systems and conducted interviews to gain further clarification on their responses. We conducted this performance audit from December 2012 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Single Manager for Conventional Ammunition Organization This chart depicts the roles of various entities with respect to conventional ammunition. The connecting lines do not necessarily represent the complete administrative, operational, or reporting relationships for all purposes and functions. 
The Deputy Commanding General of AMC receives support for the role of Executive Director for Conventional Ammunition from a joint-staffed office of senior service military and civilian ammunition management specialists assigned to PEO Ammunition who report directly to the Executive Director. Although the Joint Ordnance Commanders Group Charter identifies the Commander of the Joint Munitions Command as chair, the Joint Ordnance Commanders Group’s Annual Report for 2012 was signed by Joint Munitions Command and PEO Ammunition as cochairmen. The Joint Munitions and Lethality Life-Cycle Management Command brings together the resources and expertise of three organizations: PEO Ammunition, Joint Munitions Command, and the Armament Research, Development, and Engineering Center. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following individuals made contributions to this report: Thomas W. Gosling, Assistant Director; Darreisha M. Bates; Richard D. Brown; Rebecca Guerrero; Sally Newman; Richard Powelson; Michael Shaughnessy; Michael Silver; and Amie Steele.

Highlights DOD manages nearly $70 billion of conventional ammunition—which includes many types of items other than nuclear and special weapons—at eight Army depots. The military services use automated information systems to manage their inventory. They also compile annual reports that compare ammunition inventory levels against stated requirements. GAO was asked to evaluate DOD's management of conventional ammunition. This report addresses the extent to which (1) the services' information systems facilitate efficient management of the conventional ammunition inventory and (2) the services collect and share inventory data to help them meet their stated requirements. 
GAO reviewed DOD guidance on materiel management and logistics systems, reviewed the services' annual inventory reports for fiscal years 2009 to 2013, and discussed inventory management and related issues with service officials. The military services use automated information systems to manage and maintain accountability for the Department of Defense (DOD) ammunition inventory, but the systems have some limitations that affect their ability to facilitate efficient management of conventional ammunition. The systems cannot directly exchange ammunition data because they use different data exchange formats. Only the Army's Logistics Modernization Program (LMP) system uses the standard DOD format. The other services have not adopted this format, although Air Force officials have said that they plan to by 2017. Without a common format for data exchange, the services will continue to devote extra time and resources to ensure efficient data exchange between their systems and LMP. LMP has some limitations in its ammunition-related functionality that can affect the accuracy and completeness of data for items stored at Army depots and require extra time and resources to confirm data or correct errors. The Army acknowledges there are limitations in LMP; however, it has not yet developed a comprehensive plan, with time frames and costs, for addressing the limitations. Such a plan could provide DOD reasonable assurance that its efforts to upgrade ammunition-related functionality in LMP are making progress. The Army developed the National Level Ammunition Capability (NLAC) as a DOD-wide repository of ammunition data, but NLAC has some limitations in providing ammunition visibility—that is, having complete and accurate information on items wherever they are in the supply system. 
The Army does not have reasonable assurance that NLAC collects complete and accurate data from service systems because it does not have checks and controls that federal internal control standards recommend to ensure source data is reliable. Without steps to ensure the quality of the data that flows into NLAC, DOD officials risk making decisions based on inaccurate and incomplete inventory information, or ammunition offices may have to devote extra staff and time to obtain accurate data on DOD-wide inventory. To identify inventory owned by one service that may be available to meet the requirements of another service, the military services have a process for collecting and sharing ammunition data. Through a stratification and redistribution process, the services assess whether inventory can meet stated requirements and then may transfer available inventory, including inventory in excess of one service's requirement, to another service. This redistribution offsets procurements of ammunition items. To facilitate this process, each service develops and shares ammunition inventory data in annual reports. The Army's annual report, however, does not include information on certain missiles. Also, the Army's report does not include information on all available, usable ammunition that in a prior year was unclaimed by another service and placed in storage for disposal; DOD guidance does not require that such inventory be included in the reports. Without incorporating these items in the Army's report, DOD may lack full transparency about all available items and may miss opportunities to avoid procurement costs for certain usable items that may already be available in the Army's stockpile.
Introduction The general well-being of children and families is a critical national policy goal. Current priorities aimed at protecting children and preserving families include an effective child support enforcement program to meet the needs of millions of parents who annually seek child support for their eligible children. In our report, Child Support Enforcement: Families Could Benefit From Stronger Enforcement Program (GAO/HEHS-95-24, Dec. 27, 1994), we found that the Office of Child Support Enforcement (OCSE) lacked essential management tools, such as programwide planning and goal-setting, to assess and improve program performance. On the basis of these findings, we made several recommendations to strengthen OCSE’s leadership and management of the program. Given the need to improve program management, the Chairman, Senate Committee on Finance, asked us to assess the progress that OCSE has made in implementing our previous recommendations. Child Support Enforcement Program Overview A rise in welfare costs resulting from out-of-wedlock birth rates and parental desertion, coupled with a growing demand to relieve taxpayers of the financial burden of supporting these families, prompted the Congress to create the national child support enforcement program. Created in 1975 under title IV-D of the Social Security Act, the program’s purpose is to strengthen existing state and local efforts to find noncustodial parents, establish paternity, obtain support orders, and collect support payments. The national program incorporated the already existing state programs. Increasingly, the child support enforcement program has faced the growing demands of millions of children and families seeking support payments. In 1995, the program reported an estimated 20.1 million cases, an increase of about 50 percent over the previous 5 years. In that year, states collected about $10.8 billion in child support payments for 3.8 million cases, or 19 percent of the program’s caseload. 
Expenditures to administer the child support enforcement program totaled about $3.0 billion, of which $2.1 billion was paid by the federal government. In response to the growing caseloads and as a way to improve performance, some states have privatized child support enforcement services to supplement their own state-administered programs. The program serves two populations: families receiving Aid to Families With Dependent Children (AFDC) and those who do not. The Congress believed that government welfare expenditures could be reduced and to some extent prevented by recouping AFDC benefits from noncustodial parents’ child support payments. In addition, the Congress believed that earlier enforcement of child support obligations for families not receiving AFDC could help prevent these families from needing support in the form of welfare benefits. Families entering the child support enforcement program require different combinations of services at different times. In some cases, the child’s paternity has not been established and the location of the alleged father is unknown. In these cases, the custodial parent needs help with every step: locating the alleged father, establishing paternity and a child support order, enforcing the order, and collecting the support payment. In other cases, the custodial parent may have a child support order, and child support enforcement agencies must periodically review and, possibly, modify the order as a result of changes in the employment status or other circumstances pertaining to the noncustodial parent. For AFDC recipients, the family receives the first $50 of any current child support payment each month without a decrease in its AFDC payment. Any remainder of the child support payment is retained by the federal and state governments in proportion to their respective AFDC payments. Payments that are collected on behalf of non-AFDC families are sent to the families. 
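The distribution rule described above ($50 of each month's collection passed through to the family, with the remainder retained by the federal and state governments in proportion to their respective AFDC payments) reduces to simple arithmetic. The sketch below illustrates it; the function name and dollar figures are hypothetical.

```python
# Illustrative sketch of the AFDC-era distribution rule described above: the
# family receives the first $50 of a monthly collection, and the remainder is
# retained by the federal and state governments in proportion to their
# respective shares of the AFDC payment. All figures are hypothetical.

def distribute_afdc_collection(collection, federal_share, state_share):
    passthrough = min(collection, 50.0)   # first $50 goes to the family
    remainder = collection - passthrough
    total_share = federal_share + state_share
    return {
        "family": passthrough,
        "federal": remainder * federal_share / total_share,
        "state": remainder * state_share / total_share,
    }

# Example: $250 collected in a month; the AFDC grant was funded 60 percent
# federal and 40 percent state.
print(distribute_afdc_collection(250.0, 60, 40))
# -> {'family': 50.0, 'federal': 120.0, 'state': 80.0}
```

When a month's collection is $50 or less, the family receives the entire payment and nothing is retained; for non-AFDC families, as the text notes, the full collection goes to the family and no such split applies.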
The child support enforcement program is an intergovernmental program involving the federal, state, and local governments. Federal responsibility for the program lies within the Department of Health and Human Services’ (HHS) Administration for Children and Families (ACF). Within ACF, OCSE central office and regional office staff develop policy and oversee the state-administered programs. Figure 1.1 illustrates the partnership arrangements among key players involved in overseeing and administering the child support enforcement program. The child support enforcement program envisions an aggressive federal role in ensuring that states provide effective child support services. Federal law requires OCSE to establish standards for state program effectiveness and to monitor the operation of state programs through periodic audits. To help ensure program effectiveness, OCSE has the authority to assess financial penalties if an audit reveals that a state has failed to meet certain program standards. Among other functions, regional office staff review state child support enforcement plans to ensure consistent adherence to federal requirements. OCSE also is authorized to work with the states to help them plan, develop, design, and establish effective programs. In addition, OCSE is responsible for maintaining effective working relationships with federal, state, and local government officials; national interest groups; and other key stakeholders in the child support field. State child support enforcement agencies are responsible for all activities leading to securing financial support and medical insurance coverage for children from noncustodial parents. The agencies provide four principal services: (1) locating absent parents, (2) establishing paternity, (3) obtaining and enforcing child support orders, and (4) collecting support payments.
To meet federal requirements and receive federal funds, state child support enforcement programs must have HHS-approved plans indicating compliance with federal law and regulations and must operate in accordance with those plans. HHS can levy financial penalties against states found substantially out of compliance with their plan. There are significant differences in the ways state child support enforcement programs are organized, which state organization they report to, what relationships exist between the child support enforcement program and other state agencies, and the policies and procedures that are followed. These characteristics usually vary by the type of service delivery structure, levels of court involvement required by state family law, population distribution, and other variables. For example, some state child support agencies operate their programs with state funds through a network of regional offices, while others share the federal funding with and supervise county and other local jurisdictions’ operations. The child support enforcement funding structure was designed to share program costs between the federal and state governments. The federal government matches 66 percent of states’ administrative and certain management information systems development costs and 90 percent of laboratory costs related to paternity establishment. The federal government also pays incentives to states for collection efficiency. These incentives are calculated separately for AFDC and non-AFDC collections by dividing the amount collected for each category by total program administrative costs. On the basis of these calculations, states with higher ratios of collections to program costs receive more incentive funds than states with lower ratios. Incentive payments for AFDC collections range from amounts equal to 6 to 10 percent of the collections. 
Incentive payments for non-AFDC collections also range from 6 to 10 percent of non-AFDC collections but cannot be greater than 115 percent of the AFDC incentive payments. These incentive payments are funded from the federal portion of recovered AFDC collections. States must share incentives with local governments that bear some of the program’s administrative costs. However, states may use the incentive payments and AFDC recoveries to fund programs other than child support enforcement. GAO-Reported Management Challenges We reported earlier that clear federal management strategies coupled with state management efforts could better position the child support enforcement program to serve the families that depend on it. The increase in children needing support has focused attention on federal and state efforts to enforce parental responsibilities to support their children. However, these efforts have been hampered by management weaknesses, such as the lack of programwide planning and accurate data, that have kept OCSE from developing specific strategies for contributing to improved program performance and judging how well the program is working. We also reported that OCSE had reduced the level of technical assistance it provided to state programs following reductions in federal program resources. Various organization and staffing changes reduced the number of federal staff assigned to the child support enforcement program, thereby creating communication problems between federal and state program officials. OCSE audits and data collection efforts, while satisfying legal requirements for monitoring and tracking the states’ programs, did not provide either OCSE or the states with adequate information on program results. Moreover, we reported that federal incentive funding was not sufficiently aligned with desired program outcomes. 
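The incentive funding arithmetic described in the two paragraphs above can be sketched as follows. The 6-to-10-percent range, the use of total program administrative costs in each category's ratio, and the 115-percent cap on non-AFDC payments all come from the text; the mapping from a collections-to-cost ratio to a specific rate within that range is an illustrative assumption (a linear interpolation), not the statutory schedule.

```python
def incentive_rate(ratio):
    """Map a collections-to-cost ratio to an incentive rate.

    The report states only that rates range from 6 to 10 percent and
    that higher ratios earn more; the breakpoints and interpolation
    here are illustrative, not the schedule in law.
    """
    if ratio < 1.4:
        return 0.06
    if ratio >= 2.8:
        return 0.10
    return 0.06 + 0.04 * (ratio - 1.4) / (2.8 - 1.4)

def incentive_payments(afdc_collections, non_afdc_collections, admin_costs):
    """Compute a state's AFDC and non-AFDC incentive payments.

    Each category's ratio divides that category's collections by
    *total* program administrative costs; the non-AFDC payment
    cannot exceed 115 percent of the AFDC payment.
    """
    afdc_payment = afdc_collections * incentive_rate(afdc_collections / admin_costs)
    non_afdc_payment = non_afdc_collections * incentive_rate(
        non_afdc_collections / admin_costs
    )
    non_afdc_payment = min(non_afdc_payment, 1.15 * afdc_payment)
    return afdc_payment, non_afdc_payment
```

Note how the 6-percent floor guarantees some payment to every state regardless of performance, which is the weakness chapter 3 returns to.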
On the basis of these and other findings, we made several recommendations to the Secretary of HHS to focus management of the child support enforcement program on results. These recommendations address four key program areas for which OCSE has responsibility: (1) strengthening its partnership with state child support programs, (2) developing its own management strategies for how it will contribute to improved program results, (3) reorienting its audit processes to assess state results, and (4) realigning federal incentive funding with state performance. OCSE said it would address our recommendations in the course of its implementation of the Government Performance and Results Act (GPRA) of 1993, legislation that focuses federal departments’ and agencies’ management on program results. GPRA Provides Opportunity for OCSE to Manage for Results GPRA requires federal agencies to reorient program management toward results. Traditionally, federal agencies have used factors such as the amount of program funds, the level of staff deployed, or the number of tasks completed as measures of performance. By only using these kinds of measures, an agency has not considered whether its programs have produced real results. Today’s environment is more results-oriented. The Congress, executive branch, and the public are beginning to hold agencies accountable less for inputs and outputs than for outcomes, such as how programs affect participants’ lives. Under GPRA, federal agencies are faced with reorienting their policies, planning efforts, and operations toward measuring and improving program results. To reorient federal planning and management, GPRA requires federal agencies to (1) define their mission and desired outcomes, (2) measure performance, and (3) report performance information as a basis for making management decisions. 
The first step—defining mission and desired outcomes—requires agencies to develop strategic plans containing mission statements and outcome-related strategic goals. The Environmental Protection Agency, for example, launched the National Environmental Goals Project, a long-range planning initiative under which it involved stakeholders in developing measurable goals, such as managing and cleaning up radioactive waste, for the agency to pursue in improving the quality of the nation’s environment. The second step—measuring performance—requires agencies to develop annual performance plans with annual performance goals and indicators to measure performance. The National Oceanic and Atmospheric Administration, for example, set up a method to measure its performance by measuring changes in the lead time it gives the public before severe weather events. The third step—reporting performance information—requires agencies to prepare annual performance reports with information on the extent to which they have met their annual performance goals. To implement this step, the Department of Veterans Affairs initiated efforts to provide caregivers improved medical outcomes data to use in improving services to veterans. To begin implementing GPRA, the Office of Management and Budget (OMB) designated 68 pilot tests for performance planning and reporting in 26 federal entities. OCSE was one of the federal agencies selected by OMB in 1994 to undertake a pilot test. OMB based its selection of OCSE, in part, on OCSE’s previous efforts to develop a 5-year strategic plan; its ability to quantify program goals, such as child support collections; and the involvement of state and local governments as key program administrators. Scope and Methodology To review OCSE’s progress made toward implementing our previous recommendations, we examined OCSE program management and conducted case studies in seven states (see fig. 1.2). 
We interviewed OCSE central office and regional staff and obtained relevant documentation to discuss and analyze management initiatives undertaken since our previous review. We also interviewed state and local program officials to obtain their perspectives on any recent changes in their interactions with OCSE. Regarding OCSE’s implementation of GPRA, we reviewed GPRA documentation, such as strategic plans, performance reports, memoranda, and studies. Our review also included interviews with officials in HHS’ Office of the Secretary, ACF, and OMB. In addition, we reviewed changes in OCSE’s management policies and practices since our previous report. We did not assess, however, the child support enforcement program results attributable to such changes because of the relatively short period of time they had been in effect. The seven case studies we conducted were designed to obtain information on local program priorities and state interactions with OCSE regional and central office staff. We judgmentally selected states that differed in their fiscal health, geographic location, demographics, program administration, status of any state GPRA pilot projects (see fig. 1.3), and management reform initiatives. On the basis of these selection criteria, we reviewed child support enforcement programs in Alabama, Illinois, Minnesota, New Jersey, Oregon, Texas, and Virginia. Our case studies also included interviews with officials in six regional offices, covering 33 state or local programs, as shown in table 1.1. In addition, we interviewed representatives from five national interest groups—the Center for Law and Social Policy, Children’s Defense Fund, National Council of Child Support Enforcement Administrators, National Institute for Responsible Fatherhood and Family Development, and Association for Children for Enforcement of Support—to obtain their views on implementation of the child support enforcement program. 
Appendix II contains a profile of selected program and demographic data for each state included in our review. We conducted our review from June 1995 through August 1996 in accordance with generally accepted government auditing standards. HHS provided comments on a draft of this report. These comments are presented and evaluated in chapter 4 and included in appendix III. We also obtained comments from states selected for our case studies. Their suggested revisions and technical comments from HHS were included in the report as appropriate. Federal/State Partnership Strengthened; OCSE Needs Its Own Strategies to Manage for Results OCSE has made progress in reorienting its management toward program results by working with the states to develop national goals and objectives for increasing the number of paternities established, support orders obtained, and collections received. Through this joint planning process, OCSE has also strengthened its partnership with state child support enforcement programs. The partnership was further strengthened by OCSE’s designating regional staff to provide technical assistance responsive to local needs. As a next step in its planning process, OCSE needs to develop its own long-term strategies for how it will help achieve the national goals and objectives, in addition to annual performance agreements established for top managers. OCSE and the States Developed National Goals and Objectives and Strengthened Their Partnership In February 1995, OCSE and the states developed and approved a strategic plan with national goals and objectives for the child support enforcement program. In our earlier review, we found that OCSE’s planning efforts had not focused on overall program goals. Except for paternity establishment, the program lacked long-term goals and objectives. 
In addition, OCSE had not sought input from its state partners, leading to uncertainty and frustration among state officials regarding the future direction of the program and their lack of participation in program planning. National Program Goals and Objectives Established Recognizing the need to improve its planning process and working relationships with states, OCSE sought to reorient its management focus toward program outcomes and involve states in the development of program goals and objectives. GPRA provided legislative impetus for OCSE to initiate a new management orientation intended to look beyond traditional management and planning priorities, such as process-oriented tasks and activities. In 1994, as the first step in this long-term process, OCSE specified performance levels that states were expected to achieve in such areas as paternities established and collections received. However, state program officials strongly objected to this mandate, because they did not have an opportunity to participate in this planning process. Following these initial planning efforts, OCSE sought to obtain wider participation from program officials at the federal, state, and local levels of government. In addition, OCSE established task forces consisting of federal, state, and local officials to help focus management of the program on long-term goals. OCSE regional officials also worked with states to help reorient program management toward results. During the planning process, participants agreed that the national goals and objectives would be based on the collective suggestions of the states and that the plan’s final approval would be reached through a consensus. After reaching consensus, OCSE and state program officials for the first time approved mutually acceptable goals and objectives, as shown in table 2.1. For each goal, the participants identified interim objectives that, if achieved, would represent progress toward the stated goal. 
For example, OCSE and the states first agreed to increase the number of paternities established within 1 year of birth to help meet the goal of establishing paternity for all children with child support enforcement cases. At the time of our review, OCSE and the states also were developing performance measures, such as the percentage of children in the child support enforcement caseload with paternity resolved, as statistical tools for identifying state progress toward achieving these goals. In addition, OCSE intends to work with states to develop performance standards against which it will assess the quality of state performance, consistent with GPRA. Performance Agreements With States Attempt to Link National and State Goals In an effort to achieve the program goals established under GPRA, OCSE has encouraged its regional staff to develop performance agreements with states. These agreements are to specify both general working relationships between OCSE regional offices and state program officials and performance goals for each state. In four states that we visited, regional and state officials negotiated mutually acceptable goals for the agreements. OCSE officials said that by working toward the goals in each agreement, states would help meet the desired national increases in the number of paternities established, support orders obtained, and collections received. OCSE officials said, however, that they are limited in using the performance agreements as an effective management tool for fostering improved program performance. They explained that OCSE does not currently have the statutory authority to link federal incentive funding to the achievement of performance goals included in each agreement. OCSE officials also stated that, until legislation making that link is enacted, they must rely on the good will of states to improve program results. The limitations of the current incentive funding structure are discussed in further detail in chapter 3. 
Federal/State Partnership Strengthened Since our previous review, OCSE and the states have worked to strengthen their partnership. Joint program planning conducted by both OCSE and state officials in 1994 and 1995 has increased the states’ influence in developing the national goals and objectives, compared with the level of state involvement we previously reported. During this joint planning, state officials had an opportunity to discuss the challenges that they face as the programs’ principal administrators. Child support program officials in five of seven states we contacted generally believe that OCSE made a commitment to work actively with states as partners. As program partners, state officials had the opportunity to develop, amend, and approve specific program objectives. For example, OCSE and state officials created a Performance Measures Work Group to develop statistical measures for assessing state progress toward achieving the national goals and objectives. The work group, which consists of officials from ACF, OCSE, and state and local child support enforcement programs, met several times in 1995 and 1996 to discuss mutually acceptable performance measures. OCSE also selected 32 local GPRA pilot programs that states and counties believed would strengthen federal/state commitment to improve program results. Appendix I contains a brief description of the five state and county pilot programs operated in the states we reviewed. These pilots cover a broad range of program services and focus state and local program management on goals and objectives similar to those established at the national level. OCSE Technical Assistance Generally Responsive to State Needs, but Could Be Better Targeted in Certain Cases To further strengthen its partnership with states, OCSE improved its technical assistance in response to state program needs. 
In our earlier review of the child support enforcement program, we reported that HHS had experienced workforce reductions in the 1980s, leading to fewer resources in OCSE. As a result, technical assistance and training, which had formed a large part of OCSE efforts to foster improved program results, virtually disappeared. In addition, an HHS-wide reorganization left OCSE with no organizational control over those HHS regional staff serving as contact points for the states on some program matters. Since our previous review, HHS has reorganized staffing assignments in its 10 regional offices to decentralize program decision-making. As a result, OCSE central and regional office staff, often designated as child support enforcement program managers and specialists, are now providing technical assistance more responsively to state needs. Program officials in six of seven states included in our review were generally satisfied with the responsiveness of OCSE regional staff. For example, Oregon officials stated that child support enforcement officials in federal Region X have continually provided technical assistance on regulatory interpretations and have sponsored forums to discuss other issues pertaining to customer service and specialized interstate cases. New Jersey program staff also said that they worked closely with OCSE officials in Region II to identify state GPRA pilot project strategies, such as processing criminal child support enforcement cases, that could be used to improve the New Jersey program. On the whole, OCSE officials believe that they have been responsive to state inquiries. In certain cases, several state officials and national interest groups we contacted believe that OCSE could provide more effective guidance or financial support to improve state programs. 
For example: Alabama child support enforcement officials stated it would be helpful if OCSE developed staffing standards, as currently required by federal law, in cooperation with state child support staff. Such standards could be used by states to assist in caseload distributions and workload management. In Minnesota, child support officials in four counties believed that, through additional funding, OCSE could promote state and local level development of innovative approaches to service delivery. Several national interest groups we contacted believe that OCSE does not actively promote innovative approaches to state program improvement. Representatives from these groups said that OCSE has not fulfilled its role in fostering improved state programs. While the representatives told us that OCSE has assembled relevant program data as a central depository of information, they believe that OCSE should work more closely with states to help foster improved program results. OCSE Needs to Develop Its Own Strategies to Help Achieve National Goals and Objectives While OCSE has made notable progress in developing national goals and objectives for the program as a whole and establishing performance agreements with states, as a next step it now needs to develop its own plan for realizing the long-term program goals. As the federal partner in child support enforcement, OCSE has responsibility to help achieve the national goals developed jointly with states. Further, GPRA requires OCSE to develop such strategies by describing the operational processes, skills, technologies, and resources required to meet the program’s goals. As we reported in December 1994, the scope of OCSE responsibilities has grown with each expansion in legislative requirements, such as provisions contained in the Child Support Enforcement Amendments of 1984 and the Family Support Act of 1988. 
OCSE has undertaken initiatives on issues ranging from developing a standardized form for withholding income from noncustodial parents who owe child support to piloting a system for identifying parents’ Social Security numbers. In response to its growing responsibilities, OCSE recognizes the need to establish its own strategies for how it will help achieve newly established program goals. Beginning in 1995, key managers—including the Assistant Secretary for Children and Families, who is also the Director of Child Support Enforcement, and the Deputy Director of Child Support Enforcement—developed their own annual performance agreements in consultation with selected states. These agreements, similar to personnel contracts for the federal government’s Senior Executive Service, are intended to hold OCSE senior managers and staff accountable for achieving program goals. For example, the 1996 agreement between the Assistant Secretary and Deputy Director cites the national program goals and a mixture of 52 measurable and abstract process goals that the Deputy Director is required to meet, including promoting “effective asset identification and collection techniques” and continuing “a meaningful dialogue with national public interest groups.” While performance agreements have been developed for its top managers, OCSE also needs to develop its own long-term management strategies for helping to achieve the program goals, prioritize its responsibilities, specify intended results from its operations, and identify measures for assessing its own performance. Unlike long-term management strategies for the organization, the performance agreements specify annual program goals for OCSE’s top managers. For example, one such performance agreement indicates that OCSE will promote the review and modification of child support orders to help foster the self-sufficiency of eligible clientele.
However, the agreement does not specify how each manager will promote such a tool, how such promotion will contribute toward achieving the national goals, or any performance measures for assessing progress toward meeting the goals through this particular activity. Without its own long-term management strategies for helping to achieve the national program goals, OCSE will be hindered in establishing its priorities and applying its resources in ways that will effectively contribute to improved program results. OCSE Faces Additional Challenges in Fostering Improved State Program Results While OCSE has established national goals and objectives through a strengthened partnership with state child support enforcement programs, it faces additional challenges in fostering improved state performance. To help move management of the program toward a more results-oriented focus, OCSE undertook efforts to improve its audit processes, the quality of state-reported data, and the federal incentive funding structure. Beyond these initial efforts, more needs to be accomplished in all three areas in order to further OCSE’s reorientation toward managing for results. Despite Improvements, OCSE Audits Remain Compliance Focused We reported earlier that OCSE’s audit role was focused more on assessing state compliance with federal program requirements than on assessing the effectiveness of state programs. Therefore, we recommended that OCSE change its audit function to focus more on state program results. While compliance audits are needed, program results audits, in contrast, would (1) measure state progress toward accomplishing the national goals; (2) investigate barriers to effective child support enforcement programs; (3) recommend program improvements, when appropriate; and (4) ensure that the data states submit on their performance are accurate and comparable across states. 
OCSE Primarily Audits States’ Compliance With Program Requirements Currently, OCSE’s audits, which include a substantial compliance review and several more specialized audits, remain largely focused on state compliance with federal program requirements. While OCSE officials agreed that their audits, as currently constructed, are insufficient for assessing state program results, they identified several reasons why they do not conduct such program results audits. According to the Director of OCSE’s Division of Audit, OCSE cannot use a program results audit until it and the states approve performance measures currently under consideration. He said that once these performance measures are finalized they can then be used as criteria for auditing program results. The Director also indicated that if OCSE was not relieved of its current statutory requirement to conduct the substantial compliance audits, its operations would be strained by having to conduct both compliance and program results audits with limited staff resources. The Director of OCSE’s Division of Audit also believed that a penalty provision similar to that used for its substantial compliance audits would be needed to sanction states for poor performance. He said that without a penalty provision, program results audits would be construed by states as merely advisory. Other OCSE officials said that, given their current emphasis on compliance audits, it may be inappropriate to penalize states for poor performance while finding them in compliance with regulatory requirements. We believe that OCSE can conduct program results audits that would provide states with valuable information to use in improving program results. First, we believe that OCSE could conduct such audits without approved performance measures by using its accumulated knowledge of state practices and results. 
Once approved, however, performance measures could provide OCSE auditors with additional criteria to assess state progress toward achieving the national goals. Also, program results audits could be conducted at the discretion of OCSE’s Director, Division of Audit, considering the history of each state’s program, staff workloads, and other factors. In addition, recent welfare reform legislation—the Personal Responsibility and Work Opportunity Reconciliation Act of 1996—requires that states review and report annually on their compliance with federal program requirements. Instead of conducting compliance audits, OCSE is required under the legislation to review the states’ compliance reports and provide them with comments, recommendations for corrective actions, and technical assistance. This should reduce OCSE’s workload previously associated with compliance audits, thereby making resources available to conduct the program results audits. Finally, we do not believe that penalties are necessary because the intent of such audits would be to help states improve their performance. OCSE Streamlined Audits and Focused Reviews on State Reporting Systems While OCSE has not yet audited state program results, it has undertaken other initiatives to improve its oversight of state programs. Previously, states expressed concern about the scope, complexity, and length of time it took to respond to substantial compliance audits conducted by OCSE. At the time of our previous review, OCSE relied on an audit approach that had over 50 compliance criteria. These criteria included 29 for auditing state compliance with federal requirements and 23 to ensure that states provided child support services in accordance with their approved state plans. For these audits, states had to provide the necessary evidence to demonstrate the extent to which they met the applicable criteria. In addition, audits were untimely—sometimes final reports were not issued until 2 years after the period audited. 
In these cases, the audits were not a useful management tool to states. In December 1994, OCSE issued final regulations to streamline its substantial compliance audits and make them less burdensome to states. Using a materiality test, OCSE decided that if 90 percent or more of all states met a particular criterion, thus demonstrating general proficiency, that criterion would be deleted from the substantial compliance audit. As a result of eliminating several criteria, these audits have been redefined and now focus on state compliance with service-related criteria. In addition to its efforts to streamline its audit processes, OCSE has undertaken efforts to assess the accuracy of state data. In our previous report, we recommended that OCSE reexamine its audit role to support accurate state performance reporting. Since our recommendation, OCSE has placed greater emphasis on its reporting system reviews, which analyze the procedures and systems states use to accumulate, record, and report data. Since 1994, OCSE has conducted reporting system reviews in 20 states, most of which found that the audited state did not have reliable systems for reporting data accurately and that improvements would be needed as OCSE moves to results-oriented management. To date, OCSE has received responses from six states on actions they have taken to address its findings and recommendations. OCSE believes that other states may be taking action to correct identified problems but have not yet provided documentation of these actions.
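The materiality test OCSE applied when streamlining its substantial compliance audits is a simple threshold filter: any criterion that 90 percent or more of states already meet is dropped. The sketch below is illustrative only; the function name and the data shape (criterion name mapped to the fraction of states meeting it) are assumptions for the example.

```python
def streamline_criteria(fraction_meeting_by_criterion, threshold=0.90):
    """Apply the 90-percent materiality test described in the text.

    Keeps only criteria met by fewer than `threshold` of all states;
    criteria at or above the threshold demonstrate general proficiency
    and are deleted from the substantial compliance audit.
    """
    return {
        criterion: fraction
        for criterion, fraction in fraction_meeting_by_criterion.items()
        if fraction < threshold
    }

# Illustrative data: only the criterion most states fail remains.
remaining = streamline_criteria(
    {"criterion_a": 0.95, "criterion_b": 0.80, "criterion_c": 0.90}
)
```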
Of those states that have notified OCSE, typical corrective actions include the following: establishing procedures requiring periodic reconciliations of collections and expenditure data to ensure accuracy, revising states’ automated system programming to generate collections data without the need for manual data entry, and revising states’ reporting format to document the cumulative fees collected from absent parents for the cost of blood testing to determine paternity. The greater emphasis that OCSE has placed on assessing the accuracy of state-reported data corresponds to its audit role contained in the recent welfare reform legislation. This law requires that OCSE, at least once every 3 years, assess the completeness, reliability, and security of data and the accuracy of state reporting systems. While Efforts Attempt to Resolve Data Problems, Discrepancies Among States Magnify Challenges in Assessing Performance In addition to the data accuracy issues surfaced through OCSE’s reporting system reviews, the lack of comparable data across state and local jurisdictions compounds the challenges OCSE faces in measuring state performance. For example, data discrepancies resulting from differences in the way the states define what constitutes a child support case contribute to the current difficulty of uniformly measuring state performance. In OCSE’s move toward results-oriented management under GPRA, quality data that are accurate and comparable will be needed to make performance-based incentive payments to states and management decisions on the future direction of child support enforcement. In addition to the reporting systems reviews, the efforts of OCSE’s Performance Measures Work Group to develop a set of GPRA performance measures may also prove useful in improving data quality by bringing about greater comparability in state reporting. 
Given the numerous entities that can be involved in state child support enforcement programs, such as courts, hospitals, and other state and county agencies, we earlier reported that OCSE needed universally understood definitions and procedures by which states can collect and report data. As early as 1992, OCSE undertook efforts through its Measuring Excellence Through Statistics (METS) initiative to improve the comparability of state-reported data by developing standard data definitions for key child support enforcement terms, including a definition for what constitutes a child support enforcement case. In the process of developing measures to assess state performance, the Performance Measures Work Group has built upon the work of the METS initiative by incorporating the use of standardized definitions for measuring state performance. For example, measures that have been developed to assess state performance in obtaining support orders require that states use the METS definition of a child support enforcement case to report these data. In 1996, OCSE requested that states test the data requirements for performance measures currently under development. It asked that states identify differences in how they currently compile and report data and how they would be compiled and reported using performance measures. While at the time of our review no state had yet provided OCSE any substantive feedback on the pilot, OCSE officials said that data requirements for several of the proposed performance measures would require states to obtain data from sources other than those that currently provide information on program factors such as out-of-wedlock births and the location of noncustodial parents. Federal Incentive Funding Structure Remains Weakly Linked to State Performance In our previous report, we found that the incentive funding structure has yet to achieve its potential. In practice, all states—regardless of performance—received some incentive payments. 
Moreover, the amount of incentive payments depends on a state’s collections and program costs and does not reflect success in achieving each of the three program goals, such as establishing paternities and obtaining support orders. Therefore, we previously recommended that OCSE reexamine the incentive funding structure because of its poor linkage to state program outcomes. Today, the incentive funding structure remains weakly linked with state program performance. A new arrangement that considers progress toward achieving the national program goals will be needed in order to foster improved program results. State child support enforcement programs receive 66 percent of their program costs through federal financial participation and additional funds as a result of the incentive funding policy prescribed by law. In 1995, incentive payments to states were estimated at $400 million. However, the current incentive funding structure has two major limitations. First, while funding is awarded to states on the basis of a collections-to-cost ratio, the current structure does not consider other program results, such as increased paternities established and support orders obtained. Second, states receive incentive funding equal to at least 6 percent of their collections, regardless of how well or poorly they perform. Therefore, as currently constructed, federal funding does not provide a real incentive for states to improve their performance. OCSE officials told us that the current incentive funding structure does not provide them an effective means to foster improved program results at the state level. They said, for example, that the performance agreements OCSE currently has with the states to improve program results are unenforceable. 
Under the existing incentive funding structure, if a state fails to meet or exceed stated goals, OCSE does not have the statutory authority to alter the existing incentive funding scheme to adjust the state’s award consistent with its performance. The state program officials we interviewed also agree that the current incentive funding structure needs improvement. In designing a new structure, state officials believe that the existing pool of incentive funds should not be reduced and that incentive payments should be based on one or more of several standards, such as improving state performance, surpassing an aggregate level of performance, or completing appropriate corrective actions. State officials also believe that OCSE must help states meet the standards under a new system and should be held accountable for states’ successes or failures. In response to these state views, OCSE officials have continued to work closely with the states to include their priorities in development and approval of the measures used to assess performance of the program. In addition, state officials cited the continued need for uniform data definitions, such as those included in METS, and compliance with program requirements to help ensure that the new system is fair to all states. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996, when fully implemented, will establish a new incentive funding structure. It requires the Secretary of HHS, in consultation with the states, to develop a new incentive funding structure that provides payments to states based on performance. The Secretary must report details of the new system to the Congress by March 1, 1997. The system developed will become effective for fiscal year 2000; the current structure will remain effective until then. While the legislation requires HHS and the states to develop a new structure, it does not specify the factors on which incentive payments should be based. 
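The funding floor described above lends itself to a small illustration. The sketch below is a minimal, hypothetical rendering: only the 6 percent minimum and the collections-to-cost ratio basis are taken from this report, while the tier breakpoints and rates are invented for illustration and are not the statutory schedule.

```python
def incentive_payment(collections: int, costs: int) -> float:
    """Illustrative incentive computation under the structure the report
    describes: funding keyed to a collections-to-cost ratio, with a floor
    of 6 percent of collections regardless of performance. The tier
    schedule here is hypothetical, not the statutory one."""
    ratio = collections / costs if costs else 0.0
    if ratio >= 2.0:
        rate_pct = 10   # hypothetical high-performance tier
    elif ratio >= 1.5:
        rate_pct = 8    # hypothetical middle tier
    else:
        rate_pct = 6
    # Every state receives at least 6 percent of its collections.
    return collections * max(rate_pct, 6) / 100

# A state spending more than it collects still receives the 6 percent floor.
print(incentive_payment(collections=10_000_000, costs=12_000_000))  # 600000.0
```

Because the payment never drops below the floor, a state's award is insensitive to poor performance, which mirrors the report's point that the structure does not provide a real incentive to improve.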
Conclusions, Recommendations, and Agency Comments To date, through implementation of GPRA and other undertakings, OCSE has made notable progress toward establishing a results-oriented framework for the child support enforcement program. While OCSE has additional steps to take, the challenges it faces in managing for results can be met. The national child support enforcement program, however, continues to face growing service needs without the benefit of knowing how OCSE plans to help achieve the program’s newly established goals and objectives. We believe that OCSE should develop its own long-term management strategies, as we had previously recommended, to help meet the national goals and objectives. In accordance with GPRA requirements, OCSE’s activities, core processes, and resources should be aligned to support its mission and help it achieve these goals. Through long-term management strategies, OCSE can prioritize its expanding program responsibilities, conduct operations in direct support of the national goals, specify the results anticipated from implementing its strategies, and develop measures for assessing its own performance. By strengthening the linkage between its management strategies and the national goals, we believe that OCSE will be in a better position to foster improved program results. While OCSE has initiated certain management improvements to establish program goals and strengthen its partnership with states, limitations in its audit processes and the federal incentive funding structure continue to constrain improvements in program results. While we recognize that performance measures have yet to be approved, we continue to believe that OCSE should assess state program performance to identify problems states encounter that inhibit their effectiveness and, when appropriate, recommend actions to help states improve their performance. Once approved, performance measures would help define audit criteria for assessing state performance. 
Moreover, program results audits could help OCSE respond to state requests for additional information on how to improve program performance. The incentive funding structure remains weakly linked with state performance. New welfare reform legislation—the Personal Responsibility and Work Opportunity Reconciliation Act of 1996—requires HHS and the states to develop a new incentive funding structure. The act does not specify the factors to be used in assessing state performance. We believe that the structure should be realigned so that incentive payments are earned for progress toward the agreed upon national goals of increasing the number of paternities established, support orders obtained, and collections received. By realigning incentive funding with state performance, OCSE would be better equipped to reward states for progress toward achieving the national goals. Recommendations We recommend that the Secretary of HHS direct OCSE, as part of its GPRA efforts, to do the following: Develop its own long-term management strategies, in conjunction with the states, to help increase paternities established, support orders obtained, and collections received. Such strategies should (1) prioritize OCSE’s roles and responsibilities, (2) specify results that OCSE anticipates from its prioritized operations, and (3) develop performance measures for assessing its own performance. Conduct program results audits of state progress toward achieving the national program goals. These audits should assess the accuracy of state-reported data; investigate barriers to achieving improved program results; and recommend approaches, when appropriate, for states to meet program goals. Include payments in the new incentive system, required by recent welfare reform legislation, that are based on state progress toward increasing paternities established, support orders obtained, and collections received. Agency Comments HHS provided written comments on a draft of this report (see app. III). 
HHS generally concurs with our recommendations. The Department expressed its commitment to moving forward in the direction of our recommendation that OCSE develop its own long-term management strategy. It stated that developing longer-term management strategies and program priorities can be beneficial and cited steps OCSE has taken in this direction, such as creating a series of federal/state work groups to address longer-term issues and planning major enhancements to the Federal Parent Locator Service. We are encouraged by the Department’s commitment to OCSE developing its own long-term management strategy and by these initial efforts. As OCSE proceeds to fully implement our recommendation, it also should ensure that, as the national office for the child support enforcement program, it has strategies to establish its own priorities, specify anticipated results from its program activities, and develop measures to assess its performance. In response to our recommendation on program results auditing, HHS commented that with the enactment of welfare reform, OCSE will be required to conduct program results audits. While welfare reform legislation requires that OCSE verify the accuracy of state-reported data, our recommendation covers several additional steps essential for reorienting OCSE’s audit function toward program results. Specifically, program results audits conducted by OCSE should investigate why states have not met performance targets and make recommendations, when appropriate, to assist states in improving their performance. With regard to our recommendation related to developing a new incentive funding structure, HHS stated that OCSE, through its strategic planning process and the Performance Measures Work Group, has made progress toward revising the basis on which states receive incentive payments. 
While these steps show promise in strengthening the linkage between the incentive funding structure and state performance, the revised structure, when fully implemented, should base payments on state progress made toward achieving all three program goals as we recommend. HHS also provided technical comments that we incorporated in the final report as appropriate. | Pursuant to a congressional request, GAO reviewed the Office of Child Support Enforcement's (OCSE) management of the child support enforcement program, focusing on OCSE progress in: (1) strengthening its partnership with state and local child support enforcement programs; (2) achieving national program goals; (3) improving assessment of state program results; and (4) redesigning the federal incentive funding structure for improved state performance. GAO found that: (1) OCSE is making progress in reorienting its management of the child support enforcement program toward program results; (2) OCSE and the states have approved 5-year national goals and objectives for increasing the number of paternities established, the number of support orders obtained, and the amount of collections received; (3) OCSE has also negotiated voluntary performance agreements with states specifying intended program results; (4) OCSE audits continue to focus on state compliance rather than on state progress in achieving program goals because of a lack of performance measures, the absence of penalties for poor-performing states, and limited staff resources; (5) the OCSE federal incentive funding structure, which is based on maximizing child support collections relative to administrative costs rather than on all program goals, limits its use as an incentive for improved results; and (6) welfare reform legislation enacted in August 1996 presents the Department of Health and Human Services an opportunity to more strongly link incentive funding with demonstrated state performance. |
Background Threats to systems supporting critical infrastructure and federal information systems are evolving and growing. Advanced persistent threats—in which adversaries possessing sophisticated expertise and significant resources pursue their objectives repeatedly over an extended period of time—pose increasing risks. In 2009, the President declared the cyber threat to be “one of the most serious economic and national security challenges we face as a nation” and stated that “America’s economic prosperity in the 21st century will depend on cybersecurity.” The Director of National Intelligence has also warned of the increasing globalization of cyber attacks, including those carried out by foreign militaries or organized international crime. In January 2012, he testified that such threats pose a critical national and economic security concern. To further highlight the importance of the threat, on October 11, 2012, the Secretary of Defense stated that the collective result of attacks on our nation’s critical infrastructure could be “a cyber Pearl Harbor; an attack that would cause physical destruction and the loss of life.” These growing and evolving threats can potentially affect all segments of our society, including individuals, private businesses, government agencies, and other entities. We have identified the protection of federal information systems as a high-risk area for the government since 1997. In 2003, this high-risk area was expanded to include protecting systems supporting our nation’s critical infrastructure. Each year since that time, GAO has issued multiple reports detailing weaknesses in federal information security programs and making recommendations to address them. A list of key GAO products can be found at the end of this report. Sources of Threats and Attack Methods Vary The evolving array of cyber-based threats facing the nation poses risks to national security, commerce and intellectual property, and individuals. 
Threats to national security include those aimed against the systems and networks of the U.S. government, including the U.S. military, as well as private companies that support government activities or control critical infrastructure. These threats may be intended to cause harm for monetary gain or political or military advantage and can result, among other things, in the disclosure of classified information or the disruption of operations supporting critical infrastructure, national defense, or emergency services. Threats to commerce and intellectual property include those aimed at obtaining the confidential intellectual property of private companies, the U.S. government, or individuals with the aim of using that intellectual property for economic gain. For example, product specifications may be stolen to facilitate counterfeiting and piracy or to gain a competitive edge over a commercial rival. In some cases, theft of intellectual property may also have national security repercussions, as when designs for weapon systems are compromised. Threats to individuals include those that lead to the unauthorized disclosure of personally identifiable information, such as taxpayer data, Social Security numbers, credit and debit card information, or medical records. The disclosure of such information could cause harm to individuals, such as identity theft, financial loss, and embarrassment. The sources of these threats vary in terms of the types and capabilities of the actors, their willingness to act, and their motives. Table 1 shows common sources of adversarial cybersecurity threats. These sources of cybersecurity threats make use of various techniques, or attacks, that may compromise information or adversely affect computers, software, a network, an organization’s operation, an industry, or the Internet itself. Table 2 provides descriptions of common types of cyber attacks. 
The unique nature of cyber-based attacks can vastly enhance their reach and impact, resulting in the loss of sensitive information and damage to economic and national security, the loss of privacy, identity theft, or the compromise of proprietary information or intellectual property. The increasing number of incidents reported by federal agencies and the recently reported cyber-based attacks against individuals, businesses, critical infrastructures, and government organizations have further underscored the need to manage and bolster the cybersecurity of our government’s information systems and our nation’s critical infrastructures. Number of Incidents Reported by Federal Agencies Continues to Rise, and Recently Reported Incidents Illustrate Potential Impact Federal agencies have reported increasing numbers of cybersecurity incidents that have placed sensitive information at risk, with potentially serious impacts on federal operations, assets, and people. The increasing risks to federal systems are demonstrated by the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. As shown in figure 1, over the past 6 years, the number of incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team (US-CERT) has increased from 5,503 incidents in fiscal year 2006 to 48,562 in fiscal year 2012, an increase of 782 percent. These incidents include, among others, the installation of malware, improper use of computing resources, and unauthorized access to systems. Of the incidents occurring in 2012 (not including those that were reported as under investigation), improper usage, malicious code, and unauthorized access were the most widely reported types across the federal government. 
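As a quick check on the figures above, the 782 percent growth rate follows directly from the two reported incident counts (a minimal sketch; the counts are the only values taken from this report):

```python
# Incident counts reported to US-CERT, as cited above.
incidents_fy2006 = 5_503
incidents_fy2012 = 48_562

# Percentage increase over the 6-year period.
pct_increase = (incidents_fy2012 - incidents_fy2006) / incidents_fy2006 * 100
print(round(pct_increase))  # 782, matching the reported figure
```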
As indicated in figure 2, which includes a breakout of incidents reported to US-CERT by agencies in fiscal year 2012, improper usage accounted for 20 percent of total incidents reported by agencies. In addition, reports of cyber incidents affecting national security, intellectual property, and individuals have been widespread and involve data loss or theft, economic loss, computer intrusions, and privacy breaches. The following examples from news media and other public sources illustrate that a broad array of information and assets remain at risk. In February 2012, the National Aeronautics and Space Administration (NASA) inspector general testified that computers with Chinese-based Internet protocol addresses had gained full access to key systems at its Jet Propulsion Laboratory, enabling attackers to modify, copy, or delete sensitive files; create user accounts for mission-critical laboratory systems; and upload hacking tools to steal user credentials and compromise other NASA systems. These individuals were also able to modify system logs to conceal their actions. In March 2011, attackers breached the networks of RSA, the Security Division of EMC Corporation, and obtained information about network authentication tokens for a U.S. military contractor. In May 2011, attackers used this information to make duplicate network authentication tokens and breached the contractor’s security systems containing sensitive weapons information and military technology. EMC published information about the breach and the immediate steps customers could take to strengthen the security of their systems. In 2008, the Department of Defense was successfully compromised when an infected flash drive was inserted into a U.S. military laptop at a military base in the Middle East. The flash drive contained malicious computer code, placed there by a foreign intelligence agency, that uploaded itself onto the military network, spreading through classified and unclassified systems. 
According to the then Deputy Secretary of Defense, this incident was the most significant breach of U.S. military computers at that time, and DOD’s subsequent Strategy for Operating in Cyberspace was designed in part to prevent such attacks from recurring in the future. In March 2011, an individual was found guilty of distributing source code stolen from his employer, an American company. The investigation revealed that a Chinese company paid the individual $1.5 million to create control system source code based on the American company’s design. The Chinese company stopped the delivery of the turbines from the American company, resulting in revenue loss for the American company. In February 2011, media reports stated that computer attackers broke into and stole proprietary information worth millions of dollars from networks of six U.S. and European energy companies. In mid-2009, a research chemist with DuPont Corporation downloaded proprietary information to a personal e-mail account and thumb drive with the intention of transferring this information to Peking University in China and also sought Chinese government funding to commercialize research related to the information he had stolen. In May 2012, the Federal Retirement Thrift Investment Board reported a sophisticated cyber attack on the computer of a third party that provided services to the Thrift Savings Plan (TSP). As a result of the attack, approximately 123,000 TSP participants had their personal information accessed. According to the board, the information included 43,587 individuals’ names, addresses, and Social Security numbers; and 79,614 individuals’ Social Security numbers and other TSP-related information. In March 2012, attackers breached a server that held thousands of Medicaid records at the Utah Department of Health. Included in the breach were the names of Medicaid recipients and clients of the Children’s Health Insurance Plan. 
In addition, approximately 280,000 people had their Social Security numbers exposed, and another 350,000 people listed in the eligibility inquiries may have had other sensitive data stolen, including names, birth dates, and addresses. In March 2012, Global Payments, a credit-transaction processor in Atlanta, reported a data breach that exposed credit and debit card account information of as many as 1.5 million accounts in North America. Although Global Payments does not believe any personal information was taken, it provided alerts and planned to pay for credit monitoring for those whose personal information was at risk. These incidents illustrate the serious impact that cyber attacks can have on federal and military operations, critical infrastructure, and the confidentiality, integrity, and availability of sensitive government, private sector, and personal information. Federal Information Security Responsibilities Are Established in Law and Policy Federal law and policy address agency responsibilities for cybersecurity in a variety of ways, reflecting its complexity and the nature of our country’s political and economic structure. Requirements for securing the federal government’s information systems are addressed in federal laws and policies. Beyond high-level critical infrastructure protection responsibilities, the existence of a federal role in securing systems not controlled by the federal government typically relates to the government’s application of regulatory authority and reflects the fact that much of our nation’s economic infrastructure is owned and controlled by the private sector. Certain federal agencies have cybersecurity-related responsibilities within a specific economic sector and may issue standards and guidance. For example, the Federal Energy Regulatory Commission approves cybersecurity standards in carrying out responsibilities for the reliability of the nation’s bulk power system. 
In sectors where the use of federal cybersecurity guidance is not mandatory, entities may voluntarily implement such guidance in response to business incentives, including to mitigate risks, protect intellectual property, ensure interoperability among systems, and encourage the use of leading practices. The Federal Information Security Management Act of 2002 (FISMA) sets forth a comprehensive risk-based framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. In order to ensure the implementation of this framework, FISMA assigns specific responsibilities to agencies, OMB, NIST, and inspectors general. FISMA requires each agency to develop, document, and implement an information security program to include, among other things, periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; and procedures for detecting, reporting, and responding to security incidents. In addition, FISMA requires each agency to report annually to OMB, selected congressional committees, and the U.S. Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. OMB’s responsibilities include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies (except with regard to national security systems). It is also responsible for reviewing, at least annually, and approving or disapproving agency information security programs. NIST’s responsibilities under FISMA include the development of security standards and guidelines for agencies that include standards for categorizing information and information systems according to ranges of risk levels, minimum security requirements for information and information systems in risk categories, guidelines for detection and handling of information security incidents, and guidelines for identifying an information system as a national security system (NIST standards and guidelines, like OMB policies, do not apply to national security systems). NIST also has related responsibilities under the Cyber Security Research and Development Act that include developing a checklist of settings and option selections to minimize security risks associated with computer hardware and software widely used within the federal government. FISMA also requires each agency inspector general to annually evaluate the information security program and practices of the agency. The results of these evaluations are submitted to OMB, and OMB is to summarize the results in its reporting to Congress. In the 10 years since FISMA was enacted into law, executive branch oversight of agency information security has changed. 
As part of its FISMA oversight responsibilities, OMB has issued annual guidance to agencies on implementing FISMA requirements, including instructions for agency and inspector general reporting. However, in July 2010, the Director of OMB and the White House Cybersecurity Coordinator issued a joint memorandum stating that DHS was to exercise primary responsibility within the executive branch for the operational aspects of cybersecurity for federal information systems that fall within the scope of FISMA. The memo stated that DHS activities would include five specific responsibilities of OMB under FISMA: overseeing implementation of and reporting on government cybersecurity policies and guidance; overseeing and assisting government efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing agencies’ compliance with FISMA; overseeing agencies’ cybersecurity operations and incident response; and annually reviewing agencies’ cybersecurity programs. The OMB memo also stated that in carrying out these responsibilities, DHS is to be subject to general OMB oversight in accordance with the provisions of FISMA. In addition, the memo stated that the Cybersecurity Coordinator would lead the interagency process for cybersecurity strategy and policy development. Subsequent to the issuance of M-10-28, DHS began issuing annual reporting instructions to agencies in addition to OMB’s annual guidance. In addition to FISMA’s information security program provisions, federal agencies operating national security systems must also comply with requirements for enhanced protections for those sensitive systems. National Security Directive 42 established the Committee on National Security Systems, an organization chaired by the Department of Defense, to, among other things, issue policy directives and instructions that provide mandatory information security requirements for national security systems. In addition, the defense and intelligence communities develop 
implementing instructions and may add additional requirements where needed. The Department of Defense also has particular responsibilities for cybersecurity issues related to national defense. To address these issues, DOD has undertaken a number of initiatives, including establishing the U.S. Cyber Command. An effort is underway to harmonize policies and guidance for national security and non-national security systems. Representatives from civilian, defense, and intelligence agencies established a joint task force in 2009, led by NIST and including senior leadership and subject matter experts from participating agencies, to publish common guidance for information systems security for national security and non-national security systems. Various laws and directives have also given federal agencies responsibilities relating to the protection of critical infrastructures, which are largely owned by private sector organizations. The Homeland Security Act of 2002 created the Department of Homeland Security. Among other things, DHS was assigned the following critical infrastructure protection responsibilities: (1) developing a comprehensive national plan for securing the critical infrastructures of the United States, (2) recommending measures to protect those critical infrastructures in coordination with other groups, and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of, or response to, terrorist attacks. Sector-specific agencies are federal agencies designated to be focal points for specific critical infrastructure sectors and carry out this responsibility in consultation with DHS. The Federal Information Security Amendments Act of 2012, H.R. 4257, proposed to preserve OMB’s FISMA oversight duties. The Executive Cyberspace Coordination Act of 2011, H.R. 1136, would have given OMB’s role to a newly created National Office for Cyberspace in the Executive Office of the President. While H.R. 
4257 was passed by the House of Representatives, none of these bills were enacted into law during the recently completed 112th Congress.

Strategic Approaches to Cybersecurity Can Help Organizations Focus on Objectives

Implementing a comprehensive strategic approach to cybersecurity requires the development of strategy documents to guide the activities that will support this approach. These strategy documents are starting points that define the problems and risks intended to be addressed by organizations as well as plans for tackling those problems and risks, allocating and managing the appropriate resources, identifying different organizations’ roles and responsibilities, and linking (or integrating) all planned actions. As envisioned by the Government Performance and Results Act (GPRA) of 1993, developing a strategic plan can help clarify organizational priorities and unify employees in the pursuit of shared goals. Such a plan can be of particular value in linking long-term performance goals and objectives horizontally across multiple organizations. In addition, it provides a basis for integrating, rather than merely coordinating, a wide array of activities. If done well, strategic planning is continuous and provides the basis for the important activities an organization does each day, moving it closer to accomplishing its ultimate objectives. By more closely aligning its activities, processes, and resources with its goals, the government can be better positioned to accomplish those goals.

Federal Strategy Has Evolved Over Time but Is Not Fully Defined

Although the federal strategy to address cybersecurity issues has been described in a number of documents, no integrated, overarching strategy has been developed that synthesizes these documents to provide a comprehensive description of the current strategy, including priority actions, responsibilities for performing them, and time frames for their completion.
Existing strategy documents have not always addressed key elements of the desirable characteristics of a strategic approach. Among the items generally not included in cybersecurity strategy documents are mechanisms such as milestones and performance measures, cost and resource allocations, clear delineations of roles and responsibilities, and explanations of how the documents integrate with other national strategies. The items that have generally been missing are key to helping ensure that the vision and priorities outlined in the documents are effectively implemented. Without an overarching strategy that includes such mechanisms, the government is less able to determine the progress it has made in reaching its objectives and to hold key organizations accountable for carrying out planned activities.

Cybersecurity Strategy Documents Have Evolved Over Time

There is no single document that comprehensively defines the nation’s cybersecurity strategy. Instead, various documents developed over the span of more than a decade have contributed to the national strategy, often revising priorities due to changing circumstances or assigning new responsibilities to various organizations. The evolution of the nation’s cybersecurity strategy is summarized in figure 3. The major cybersecurity initiatives and strategy documents that have been developed over the last 12 years are discussed below. In 2000, President Clinton issued the National Plan for Information Systems Protection. The plan was intended as a first major element of a more comprehensive effort to protect the nation’s information systems and critical assets from future attacks. It focused on federal efforts to protect the nation’s critical cyber-based infrastructures.
It identified risks associated with our nation’s dependence on computers and networks for critical services; recognized the need for the federal government to take a lead role in addressing critical infrastructure risks; and outlined key concepts and general initiatives to assist in achieving its goals. The plan identified specific action items and milestones for 10 component programs that were aimed at addressing the need to prepare for and prevent cyber attacks, detect and respond to attacks when they occur, and build strong foundations to support these efforts. In 2003, the National Strategy to Secure Cyberspace was released. It was also intended to provide a framework for organizing and prioritizing efforts to protect cyberspace and was organized according to five national priorities, with major actions and initiatives identified for each. These priorities were a National Cyberspace Security Response System, a National Cyberspace Security Threat and Vulnerability Reduction Program, a National Cyberspace Security Awareness and Training Program, Securing Governments’ Cyberspace, and National Security and International Cyberspace Security Cooperation. In describing the threats to and vulnerabilities of cyberspace, the strategy highlighted the potential for damage to U.S. information systems from attacks by terrorist organizations. Although it is unclear whether the 2003 strategy replaced the 2000 plan or was meant to be a supplemental document, the priorities of the 2003 strategy are similar to those of the 2000 document. For example, the 2003 strategy’s priority of establishing a national cyberspace security threat and vulnerability reduction program aligns with the 2000 plan’s programs related to identifying critical infrastructure assets and shared interdependencies, addressing vulnerabilities, and detecting attacks and unauthorized intrusions.
In addition, the 2003 strategy’s priority of minimizing damage and recovery time from cyber attacks aligns with the 2000 plan’s program related to creating capabilities for response, reconstitution, and recovery. The 2000 plan also included programs addressing awareness and training, cyber-related counterintelligence and law enforcement, international cooperation, and research and development, similar to the 2003 strategy. In 2008, President Bush issued National Security Presidential Directive 54/Homeland Security Presidential Directive 23, establishing the Comprehensive National Cybersecurity Initiative (CNCI), a set of 12 projects aimed at safeguarding executive branch information systems by reducing potential vulnerabilities, protecting against intrusion attempts, and anticipating future threats. The 12 projects were the following: 1. Trusted Internet Connections: Reduce and consolidate external access points with the goal of limiting points of access to the Internet for executive branch civilian agencies. 2. EINSTEIN 2: Deploy passive sensors across executive branch civilian systems that have the ability to scan the content of Internet packets to determine whether they contain malicious code. 3. EINSTEIN 3: Pursue deployment of an intrusion prevention system that will allow for real-time prevention capabilities that will assess and block harmful code. 4. Research and Development Efforts: Coordinate and redirect research and development (R&D) efforts with a focus on coordinating both classified and unclassified R&D for cybersecurity. 5. Connecting the Centers: Connect current cyber centers to enhance cyber situational awareness and lead to greater integration and understanding of the cyber threat. 6. Cyber Counterintelligence Plan: Develop a government-wide cyber counterintelligence plan by improving the security of the physical and electromagnetic integrity of U.S. networks. 7. 
Security of Classified Networks: Increase the security of classified networks to reduce the risk of information they contain being disclosed. 8. Expand Education: Expand education efforts by constructing a comprehensive federal cyber education and training program, with attention to offensive and defensive skills and capabilities. 9. Leap-Ahead Technology: Define and develop enduring leap-ahead technology, strategies, and programs by investing in high-risk, high- reward research and development and by working with both private sector and international partners. 10. Deterrence Strategies and Programs: Define and develop enduring deterrence strategies and programs that focus on reducing vulnerabilities and deter interference and attacks in cyberspace. 11. Global Supply Chain Risk Management: Develop a multipronged approach for global supply chain risk management while seeking to better manage the federal government’s global supply chain. 12. Public and Private Partnerships “Project 12”: Define the federal role for extending cyber security into critical infrastructure domains and seek to define new mechanisms for the federal government and industry to work together to protect the nation’s critical infrastructure. The CNCI’s projects are generally consistent with both the 2000 strategy and the 2003 strategy, while also introducing new priorities. For example, all three strategy documents address counterintelligence, education and awareness, research and development, reducing vulnerabilities, and public-private partnerships. However, the CNCI introduces additional priorities for the security of classified networks and global supply chain risk management, and it does not include programs to address response, reconstitution, and recovery or international cooperation, as in the previous strategies. 
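The EINSTEIN 2 project listed above is described as deploying passive sensors that scan the content of Internet packets for malicious code. The sketch below is illustrative only: the byte patterns and matching logic are hypothetical stand-ins for the general signature-matching idea, since the actual DHS signature sets and detection engine are not public.

```python
# Toy illustration of signature-based payload inspection, as described for
# EINSTEIN 2 above. The signatures here are made up for demonstration.

MALICIOUS_SIGNATURES = [          # hypothetical byte patterns
    b"\x90\x90\x90\x90\xcc",      # e.g., a NOP-sled fragment
    b"cmd.exe /c",                # e.g., a suspicious command string
]

def scan_payload(payload: bytes) -> list[bytes]:
    """Return the known signatures found in a packet payload."""
    return [sig for sig in MALICIOUS_SIGNATURES if sig in payload]

benign = b"GET /index.html HTTP/1.1"
suspect = b"...\x90\x90\x90\x90\xcc..."

print(scan_payload(benign))   # []
print(scan_payload(suspect))  # first signature matches
```

A passive sensor in this style observes copies of traffic and reports matches rather than blocking them; blocking is the intrusion-prevention step described for EINSTEIN 3.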
Shortly after taking office in 2009, President Obama ordered a thorough review of the federal government’s efforts to defend the nation’s information and communications infrastructure as well as the development of a comprehensive approach to cybersecurity. The White House Cyberspace Policy Review, released in May 2009, was the result. It recommended that the President appoint a national cybersecurity coordinator, an appointment that was made in December 2009. It also recommended, among many other things, that a coherent unified policy guidance be developed that clarifies roles, responsibilities, and the application of agency authorities for cybersecurity-related activities across the federal government; a cybersecurity incident response plan be prepared; a national public awareness and education campaign be initiated that promotes cybersecurity; and a framework for research and development strategies be created. According to the policy review, President Obama determined that the CNCI and its associated activities should evolve to become key elements of a broader, updated national strategy. In addition, the CNCI initiatives were to play a key role in supporting the achievement of many of the policy review’s recommendations.

National Strategy for Trusted Identities in Cyberspace

The National Strategy for Trusted Identities in Cyberspace is one of several strategy documents that are subordinate to the government’s overall cybersecurity strategy and focus on specific areas of concern. Specifically, this strategy aims at improving the security of online transactions by strengthening the way identities are established and confirmed. The strategy envisions secure, efficient, easy-to-use, and interoperable identity solutions to access online services in a manner that promotes confidence, privacy, choice, and innovation.
In order to fulfill its vision, the strategy calls for developing a comprehensive Identity Ecosystem; building and implementing interoperable identity solutions; enhancing confidence and willingness to participate in the Identity Ecosystem; and ensuring the long-term success and viability of the Identity Ecosystem. The strategy defines an “Identity Ecosystem” as an online environment where individuals and organizations will be able to trust each other because they follow agreed upon standards to obtain and authenticate their digital identities—and the digital identities of devices. The first two goals focus on designing and building the necessary policy and technology to deliver trusted online services. The third goal encourages adoption, including the use of education and awareness efforts. The fourth goal promotes the continued development and enhancement of the Identity Ecosystem. For each goal, there are objectives that enable the achievement of the goal by addressing barriers in the current environment. The strategy states that these goals will require the active collaboration of all levels of government and the private sector. The private sector is seen as the primary developer, implementer, owner, and operator of the Identity Ecosystem, and the federal government’s role is to “enable” the private sector and lead by example through the early adoption and provision of Identity Ecosystem services. In response to the R&D-related recommendations in the White House Cyberspace Policy Review, the Office of Science and Technology Policy (OSTP) issued the first cybersecurity R&D strategic plan in December 2011, which defines a set of interrelated priorities for government agencies conducting or sponsoring cybersecurity R&D. This document is another of the subordinate strategy documents that address specific areas of concern.
The priorities defined in the plan are organized into four goals—inducing change, developing scientific foundations, maximizing research impact, and accelerating transition to practice—that are aimed at limiting current cyberspace deficiencies, precluding future problems, and expediting the infusion of research accomplishments in the marketplace. Specifically, the plan identifies what research is needed to reduce cyber attacks. It includes the following themes: building a secure software system that is resilient to attacks; supporting security policies and security services for different types of cyberspace interactions; deploying systems that are both diverse and changing, to increase complexity and costs for attackers and to improve system resiliency; and developing cybersecurity incentives to create foundations for cybersecurity markets, establish meaningful metrics, and promote economically sound and secure practices. Like the strategies for trusted cyberspace identities and cyberspace R&D, the International Strategy for Cyberspace, released by the White House in May 2011, is a subordinate strategy document that addresses a specific area of concern. The International Strategy for Cyberspace is intended to be a roadmap for better definition and coordination of U.S. international cyberspace policy. According to the strategy, in order to reach the goal of working internationally to promote an open, interoperable, secure, and reliable information and communications infrastructure, the government is to build and sustain an environment in which norms of responsible behavior guide states’ actions, sustain partnerships, and support the rule of law in cyberspace. The strategy stated that these cyberspace norms should be supported by principles such as upholding fundamental freedoms, respect for property, valuing privacy, protection from crime, and the right of self-defense. The strategy also included seven interdependent focus areas: 1.
Economy: Promoting International Standards and Innovative, Open Markets. 2. Protecting our Networks: Enhancing Security, Reliability, and Resiliency. 3. Law Enforcement: Extending Collaboration and the Rule of Law. 4. Military: Preparing for 21st Century Security Challenges. 5. Internet Governance: Promoting Effective and Inclusive Structures. 6. International Development: Building Capacity, Security, and Prosperity. 7. Internet Freedom: Supporting Fundamental Freedoms and Privacy. In a March 2012 blog post, the White House Cybersecurity Coordinator announced that his office, in coordination with experts from DHS, DOD, NIST, and OMB, had identified three priority areas for improvement within federal cybersecurity: Trusted Internet connections: Consolidate external telecommunication connections and ensure a set of baseline security capabilities for situational awareness and enhanced monitoring. Continuous monitoring of federal information systems: Transform the otherwise static security control assessment and authorization process into a dynamic risk mitigation program that provides essential, near real-time security status and remediation, increasing visibility into system operations and helping security personnel make risk management decisions based on increased situational awareness. Strong authentication: Increase the use of federal smartcard credentials such as Personal Identity Verification and Common Access Cards that provide multifactor authentication and digital signature and encryption capabilities, authorizing users to access federal information systems with a higher level of assurance. According to the post, these priorities were selected to focus federal department and agency cybersecurity efforts on implementing the most cost-effective and efficient cybersecurity controls for federal information system security.
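The continuous monitoring priority described above calls for replacing static, point-in-time assessments with near real-time security status. A minimal sketch of that idea follows; the control checks and status labels are hypothetical and are not an official FISMA metric.

```python
# Illustrative sketch of continuous monitoring: automated control checks are
# gathered each cycle and rolled up into a current security status.
from dataclasses import dataclass

@dataclass
class ControlCheck:
    name: str
    passed: bool

def security_status(checks: list[ControlCheck]) -> str:
    """Summarize automated control-check results into a simple status line."""
    failed = [c.name for c in checks if not c.passed]
    if not failed:
        return "GREEN: all monitored controls passing"
    return "RED: failing controls: " + ", ".join(failed)

# One monitoring cycle; in practice results would come from automated scanners.
checks = [
    ControlCheck("antivirus signatures current", True),
    ControlCheck("critical patches applied", False),
]
print(security_status(checks))  # RED: failing controls: critical patches applied
```

The contrast with the static model is that this summary is recomputed continuously as scanner results arrive, rather than once per assessment-and-authorization cycle.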
To support the implementation of these priorities, cybersecurity was included among a limited number of cross-agency priority goals, as required to be established under the GPRA Modernization Act of 2010. The cybersecurity goal was to achieve 95 percent use of critical cybersecurity capabilities on federal executive branch information systems by the end of 2014, including the three priorities mentioned above. The White House Cybersecurity Coordinator was designated as the goal leader, but according to one White House website, http://www.performance.gov, DHS was tasked with leading the government-wide coordination efforts to implement the goal. The administration’s priorities were included in its fiscal year 2011 FISMA report to Congress. In addition, both OMB and DHS FISMA reporting instructions require federal agencies to report on progress in meeting those priorities in their 2012 FISMA reports. There are a number of implementation plans aimed at executing various aspects of the national strategy. For example, the National Infrastructure Protection Plan (NIPP) describes DHS’s overarching approach for integrating the nation’s critical infrastructure protection initiatives in a single effort. The goal of the NIPP is to prevent, deter, neutralize, or mitigate the effects of terrorist attacks on our nation’s critical infrastructure and to strengthen national preparedness, timely response, and rapid recovery of critical infrastructure in the event of an attack, natural disaster, or other emergency. The NIPP’s objectives include understanding and sharing information about terrorist threats and other hazards with critical infrastructure partners; building partnerships to share information and implement critical infrastructure protection programs; implementing a long-term risk management program; and maximizing the efficient use of resources for critical infrastructure protection, restoration, and recovery.
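The cross-agency goal cited above reduces to simple arithmetic: the share of federal executive branch systems using critical cybersecurity capabilities, measured against a 95 percent target. The sketch below uses made-up agency counts purely for illustration; actual figures come from agencies' FISMA submissions.

```python
# Illustrative arithmetic for the cross-agency priority goal described above.
# Agency counts are hypothetical.

TARGET = 0.95  # 95 percent use of critical cybersecurity capabilities

def adoption_rate(systems_with_capability: int, total_systems: int) -> float:
    """Fraction of systems on which a capability is in use."""
    return systems_with_capability / total_systems

# Hypothetical per-agency counts: (systems using the capability, total systems).
agencies = {"Agency A": (90, 100), "Agency B": (45, 60)}

covered = sum(using for using, _ in agencies.values())
total = sum(t for _, t in agencies.values())
rate = adoption_rate(covered, total)
print(f"Government-wide adoption: {rate:.1%} (target {TARGET:.0%})")
print("Goal met" if rate >= TARGET else "Goal not yet met")
```

Aggregating raw system counts, rather than averaging per-agency percentages, weights large agencies proportionally, which is one plausible way to read a government-wide target.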
No Overarching Cybersecurity Strategy Has Been Developed

While various subordinate strategies and implementation plans focusing on specific cybersecurity issues have been released in the past few years, no overarching national cybersecurity strategy document has been prepared that synthesizes the relevant portions of these documents or provides a comprehensive description of the current strategy. According to officials at the Executive Office of the President, the current national cybersecurity strategy consists of several documents and statements issued at different times, including the 2003 strategy, which is now almost a decade old, the 2009 White House policy review, and subordinate strategies such as the R&D strategy and the international strategy. Also implicitly included in the national strategy are the modifications made when the CNCI was introduced in 2008 and the 2012 statement regarding cross-agency priority goals. Despite the fact that no overarching document has been created, the White House has asserted that the national strategy has in fact been updated. We reported in October 2010 that a committee had been formed to prepare an update to the 2003 strategy in response to the recommendation of the 2009 policy review. However, no updated strategy document has been issued. In May 2011, the White House announced that it had completed all the near-term actions outlined in the 2009 policy review, including the update to the 2003 national strategy. According to the administration’s fact sheet on cybersecurity accomplishments, the 2009 policy review itself serves as the updated strategy. The fact sheet stated that the direction and needs highlighted in the Cyberspace Policy Review and the previous national cybersecurity strategy were still relevant, and it noted that the administration had updated its strategy on two subordinate cyber issues, identity management and international engagement.
However, these actions do not fulfill the recommendation that an updated strategy be prepared for the President’s approval. As a result, no overarching strategy exists to show how the various goals and activities articulated in current documents form an integrated strategic approach.

Useful Strategies Should Include Desirable Characteristics

In 2004 we identified a set of desirable characteristics that can enhance the usefulness of national strategies as guidance for decision makers in allocating resources, defining policies, and helping to ensure accountability. Table 3 provides a summary of the six characteristics. We believe that including all the key elements of these characteristics in a national strategy would provide valuable direction to responsible parties for developing and implementing the strategy, enhance its usefulness as guidance for resource and policy decision makers, and better ensure accountability.

Federal Cybersecurity Strategy Documents Have Not Always Included Key Elements of Desirable Characteristics

The government’s cybersecurity strategy documents have generally addressed several of the desirable characteristics of national strategies, but lacked certain key elements. For example, the 2009 White House Cyberspace Policy Review, the Strategy for Trusted Identities in Cyberspace, and the Strategic Plan for the Federal Cybersecurity Research and Development Program addressed purpose, scope, and methodology. In addition, all the documents included the problem definition aspect of “problem definition and risk assessment.” Likewise, the documents all generally included goals, subordinate objectives, and activities, which are key elements of the “goals, subordinate objectives, activities, and performance measures” characteristic. However, certain elements of the characteristics were missing from most, if not all, of the documents we reviewed.
The key elements that were generally missing from these documents include (1) milestones and performance measures, (2) cost and resources, (3) roles and responsibilities, and (4) linkage with other strategy documents. Milestones and performance measures were generally not included in strategy documents, appearing only in limited circumstances within subordinate strategies and initiatives. For example, the Cross-Agency Priority Goals for Cybersecurity and the National Strategy for Trusted Identities in Cyberspace, which represent only a portion of the national strategy, included milestones for achieving their goals. In addition, the progress in implementing the Cross-Agency Priority Goals for Cybersecurity is tracked through FISMA reports submitted by agencies and their inspectors general. However, in general, the documents and initiatives that currently contribute to the government’s overall cybersecurity strategy do not include milestones or performance measures for tracking progress in accomplishing stated goals and objectives. For example, the 2003 National Strategy to Secure Cyberspace included no milestones or performance measures for addressing the five priority areas it defined. Likewise, other documents generally did not include either milestones for implementation of the strategy or outcome-related performance measures to gauge success. The lack of milestones and performance measures at the strategic level is mirrored in similar shortcomings within key government programs that are part of the government-wide strategy. For example, the DHS inspector general reported in 2011 that the DHS Cybersecurity and Communications (CS&C) office had not yet developed objective, quantifiable performance measures to determine whether it was meeting its mission to secure cyberspace and protect critical infrastructures.
Additionally, the inspector general reported that CS&C was not able to track its progress efficiently and effectively in addressing the actions outlined in the 2003 National Cybersecurity Strategy or achieving the goals outlined in the NIPP. Accordingly, the inspector general recommended that CS&C develop and implement performance measures to be used to track and evaluate the effectiveness of actions defined in its strategic implementation plan. The inspector general also recommended that management use these measures to assess CS&C’s overall progress in attaining its strategic goals and milestones. DHS officials stated that, as of January 2012, CS&C had not yet developed objective performance criteria and measures, and that development of these would begin once the CS&C strategic implementation plan was completed. (See DHS Office of Inspector General, Planning, Management, and Systems Issues Hinder DHS’ Efforts to Protect Cyberspace and the Nation’s Cyber Infrastructure, OIG-11-89 (Washington, D.C.: June 2011).) Many of the experts we consulted cited a lack of accountability as one of the root causes for the slow progress in implementing the nation’s cybersecurity goals and objectives. Specifically, cybersecurity and information management experts stated that the inability of the federal government to make progress in addressing persistent weaknesses within its risk-based security framework can be associated with the lack of performance measures and monitoring to assess whether security objectives are being achieved. Without establishing milestones or performance measures in its national strategy, the government lacks a means to ensure priority goals and objectives are accomplished and responsible parties are held accountable.
Though the 2000 plan and the 2003 strategy linked some investments to the annual budget, the strategy documents generally did not include an analysis of the cost of planned activities or the source and type of resources needed to carry out the strategy’s goals and objectives. The 2000 National Plan for Information Systems Protection identified resources for certain cybersecurity activities, and the 2003 National Strategy to Secure Cyberspace linked some of its investment requests—such as completing a cyber incident warning system—to the fiscal 2003 budget. However, none of the strategies included an analysis of the cost and resources needed to implement the entire strategy. For example, while the cybersecurity R&D strategic plan mentioned specific initiatives, such as a Defense Advanced Research Projects Agency program to fund biologically inspired cyber-attack resilience, it did not describe how decisions were made regarding the amount of resources to be invested in this or any other R&D initiative. The plan also did not outline how the chosen cybersecurity R&D efforts would be funded and sustained in the future. In addition, the strategies did not include a business case for investing in activities to support their goals and objectives based on assessments of the risks and relative costs of mitigating them. Many of the private sector experts we consulted stated that not establishing such a value proposition makes it difficult to mobilize the resources needed to significantly improve security within the government as well as to build support in the private sector for a national commitment to cybersecurity. A convincing assessment of the specific risks and resources needed to mitigate them would help implementing parties allocate resources and investments according to priorities and constraints, track costs and performance, and shift existing investments and resources as needed to align with national priorities.
Most of the strategies lacked clearly defined roles and responsibilities for key agencies, such as DHS, DOD, and OMB, that contribute substantially to the nation’s cybersecurity programs. For example, as already discussed, while the law gives OMB responsibility for oversight of federal government information security, OMB transferred several of its oversight responsibilities to DHS. According to OMB representatives, the oversight responsibilities transferred to DHS represent the operational aspects of its role, in contrast to the general oversight responsibilities stipulated by FISMA, which OMB retained. The representatives further stated that the enlistment of DHS to assist OMB in performing these responsibilities has allowed OMB to have more visibility into the cybersecurity activities of federal agencies because of the additional resources and expertise provided by DHS and that OMB and DHS continue to work closely together. While OMB’s decision to transfer several of its responsibilities to DHS may have had beneficial practical results, such as leveraging the resources of DHS, it is not consistent with FISMA, which assigns all of these responsibilities to OMB. With these responsibilities now divided between the two organizations, it is also unclear how OMB and DHS are to share oversight of individual departments and agencies, which are responsible under FISMA for ensuring the security of their information systems and networks. For example, both DHS and OMB have issued annual FISMA reporting instructions to agencies, which could create confusion among agency officials. Further, the instructions vary in content. In its 2012 instructions, DHS included, among other things, specific actions agencies were required to complete, time frames for completing the actions, and reporting metrics. However, the OMB instructions, although identically titled, included different directions. 
Specifically, the OMB instructions required agencies to submit metrics data for the first quarter of the fiscal year, while the DHS reporting instructions stated that agencies were not required to submit such data. Further, the OMB instructions stated that agency chief information officers would submit monthly data feeds through the FISMA reporting system, while the DHS instructions indicated that inspectors general and senior agency officials for privacy would also submit monthly data feeds. Issuing identically titled reporting instructions with varying content could result in inconsistent reporting. Further, it is unclear which agency currently has the role of ensuring that agencies are held accountable for implementing the provisions of FISMA. Although FISMA requires OMB to approve or disapprove agencies’ information security programs, OMB has not made explicit statements that would indicate whether an agency’s information security program has been approved or disapproved. As a result, a mechanism for establishing accountability and holding agencies accountable for implementing effective programs is not being used. Mirroring these shortcomings, several GAO reports have likewise demonstrated that the roles and responsibilities of key agencies charged with protecting the nation’s cyber assets are inadequately defined. For example, as described in our recent report on gaps in homeland defense and civil support guidance, although DOD has prepared guidance regarding support for civilian agencies in a domestic cyber incident and has an agreement with DHS for preparing for and responding to such incidents, these documents do not clarify all key aspects of how DOD will support a response to a domestic cyber incident. 
For example, the chartering directives for the Offices of the Assistant Secretary of Defense for Global Strategic Affairs and the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs assign overlapping roles and responsibilities for preparing for and responding to domestic cyber incidents. In an October 2012 report, we recommended that DOD update guidance on preparing for and responding to domestic cyber incidents to align with national-level guidance and that such guidance should include a description of DOD’s roles and responsibilities. Further, in a March 2010 report on the CNCI, we stated that federal agencies had overlapping and uncoordinated responsibilities and that it was unclear where overall responsibility for coordination lay. We recommended that the Director of OMB better define roles and responsibilities for all key CNCI participants to ensure that essential government-wide cybersecurity activities are fully coordinated. Many of the experts we consulted agreed that the roles and responsibilities of key agencies are not well defined. Clearly defining roles and responsibilities for agencies charged with implementing key aspects of the national cybersecurity strategies would aid in fostering coordination, particularly where there is overlap, and thus enhance both implementation and accountability. The cybersecurity strategy documents we reviewed did not include any discussion of how they linked to or superseded other documents, nor did they describe how they fit into the overall national cybersecurity strategy. For example, the 2003 National Strategy to Secure Cyberspace does not refer to the 2000 plan, describe progress made since that plan was issued, or indicate whether it was meant to replace or enhance it.
Each of the subsequent documents that have addressed aspects of the federal government’s approach to cybersecurity—such as the Comprehensive National Cybersecurity Initiative, the National Strategy for Trusted Identities in Cyberspace, and the International Strategy for Cyberspace—has established its own set of goals and priority actions, but none of these cybersecurity agendas are linked to each other to explain why planned activities differ or are prioritized differently. For example, in 2012, the administration determined that trusted Internet connections, continuous monitoring, and strong authentication should be cross-agency priorities, but no explanation was given as to how these three relate to priorities established in other strategy documents. Specifying how new documents are linked with the overall national cybersecurity strategy would clarify priorities and better establish roles and responsibilities, thereby fostering effective implementation and accountability. The importance of developing an overarching strategy that links component documents and addresses all key elements was confirmed by our discussions with experts. For example, experts agreed that a strategy should define milestones for achieving specific outcomes and that it should be linked to accountability and execution with performance measures to help in determining whether progress is being made. Without addressing these key elements, the national cybersecurity strategy remains poorly defined and faces many implementation challenges. Until an overarching strategy is developed that addresses these elements, progress in cybersecurity may remain limited and difficult to determine.
The Federal Government Continues to Face Challenges in Implementing Cybersecurity that Could Be Addressed by an Effective Strategy

As demonstrated in our reviews and the reviews of inspectors general, the government continues to face cybersecurity implementation challenges in a number of key areas, including those related to protecting our nation’s critical infrastructure. For example, audits of federal agencies have found that weaknesses in risk-based management and implementation of controls have not substantially improved over the last 4 years. Incident response capabilities, while becoming more sophisticated, also face persistent challenges in sharing information and developing analytical capability. Challenges likewise remain in developing effective initiatives for promoting education and awareness, coordinating research and development, and interacting with foreign governments and other international entities. Until steps are taken to address these persistent challenges, overall progress in improving the nation’s cybersecurity posture is likely to remain limited.

Federal Agencies Face Challenges in Designing and Implementing Risk-based Programs

Developing, implementing, and maintaining security controls is key to preventing successful attacks on computer systems and ensuring that information and systems are not compromised. Ineffective implementation of security controls can result in significant risks, including loss or theft of resources, such as money and intellectual property; inappropriate access to and disclosure, modification, or destruction of sensitive information; use of computer resources for unauthorized purposes or to launch attacks on other computer systems; damage to networks and equipment; loss of business due to lack of customer confidence; and increased costs from remediation.
From a strategic perspective, it is important that effective processes be instituted for determining which controls to apply, ensuring they are properly implemented, and measuring their effectiveness. Such processes are core elements of an effective cybersecurity strategy. Federal strategy documents reflect the risk-based approach to managing information security controls established by FISMA and federal guidance. For example, the 2003 National Strategy to Secure Cyberspace recognizes the importance of managing risk responsibly and enhancing the nation’s ability to minimize the damage that results from successful attacks. It encourages the use of commercially available automated auditing and reporting tools to validate the effectiveness of security controls, and states that these tools are essential to continuously understanding the risks to information systems. While acknowledging the importance of these principles, the 2003 strategy document did not indicate time frames or milestones for accomplishing specific actions or establish measures to determine the progress in achieving those actions. The 2009 White House Cyberspace Policy Review provided more specifics, stating that the federal government, along with state, local, and tribal governments and industry, should develop a set of threat scenarios and metrics that all could use for risk management decisions. DHS’s Blueprint for a Secure Cyber Future, released in November 2011, included reducing exposure to cyber risk as one of its four goals for protecting critical information infrastructure. According to the blueprint, to achieve this goal the department must identify and harden critical information infrastructure through the deployment of appropriate security measures to manage risk to critical systems and assets. As discussed previously, OMB, in July 2010, issued a memorandum expanding DHS’s cybersecurity role in overseeing federal agencies’ implementation of FISMA requirements.
As part of DHS’s responsibilities for FISMA reporting, the Cybersecurity Performance Management Program within DHS annually reviews FISMA data submitted by agencies and inspectors general to, among other things, identify cyber risks across the federal enterprise. This information informs the annual report to Congress. To assist agencies in identifying risks, NIST has released risk management and assessment guides for information systems. These guides provide a foundation for the development of an effective risk management program, and include the guidance necessary for assessing and mitigating risks identified within information technology systems. Agencies are required to use these guidance documents when identifying risks to their systems. NIST’s guide for managing information security risk provides guidance for an integrated, organization-wide program for managing information security risk to organizational operations, organizational assets, individuals, other organizations, and the nation resulting from the operation and use of federal information systems. The guide describes fundamental concepts associated with managing information security risk across an organization, including risk management at various levels, called tiers. According to NIST, risk management is a process that requires organizations to (1) frame risk (i.e., establish the context for risk-based decisions); (2) assess risk; (3) respond to risk once determined; and (4) monitor risk on an ongoing basis. Figure 4 illustrates the risk management process as applied across the tiers—organization, mission/business process, and information system. Our audits and the audits of inspectors general have identified many weaknesses in agencies’ risk management processes. Numerous recommendations were made to agencies in fiscal years 2011 and 2012 to address these security control weaknesses, which include risk assessment weaknesses, inconsistent application of controls, and weak monitoring controls. 
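The assess step in NIST's four-part process (frame, assess, respond, monitor) ultimately rests on combining the likelihood that a threat will exploit a vulnerability with the resulting mission impact. The sketch below illustrates that idea only; the numeric scales, thresholds, and threat entries are hypothetical, not values prescribed by NIST guidance:

```python
# Illustrative risk scoring: score = likelihood x impact on 1-5 scales.
# Scales, thresholds, and findings are hypothetical examples, not NIST values.

def risk_level(likelihood: int, impact: int) -> str:
    """Map a likelihood/impact pair (each rated 1-5) to a qualitative level."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

# Hypothetical threat/vulnerability pairs identified during an assessment.
findings = [
    {"threat": "phishing against staff accounts", "likelihood": 4, "impact": 4},
    {"threat": "SQL injection on public website", "likelihood": 3, "impact": 5},
    {"threat": "theft of encrypted backup tape",  "likelihood": 2, "impact": 2},
]

for f in findings:
    f["level"] = risk_level(f["likelihood"], f["impact"])
    print(f'{f["threat"]}: {f["level"]}')
```

In practice the levels would feed the respond step (accept, transfer, or mitigate each risk) and be revisited during ongoing monitoring.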
According to NIST, risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. These assessments increase an organization’s awareness of risk and can generate support for policies and controls that are adopted in response. Such support can help ensure that policies and controls operate as intended. In addition, identifying and assessing information security risks are essential to determining what controls are required. Agencies’ capabilities for performing risk assessments, as required by FISMA, have declined in recent years. According to OMB’s fiscal year 2011 report to Congress on FISMA implementation, agency compliance with risk management requirements suffered the largest decline of any FISMA metric between fiscal year 2010 and 2011. Inspectors general for 8 of 22 major agencies reported compliance in 2011, while 13 of 24 inspectors general reported compliance the year before. The following deficiencies were cited most frequently: accreditation boundaries for agency systems were not defined (13 of 23 agencies); specific risks were not sufficiently communicated to appropriate levels of the organization (12 of 23 agencies); risks were not addressed from a mission or business process perspective (12 of 23 agencies); and security assessment reports were not prepared in accordance with government policies (11 of 23 agencies). Our own analysis of weaknesses reported by inspectors general shows that the number of weaknesses related to the risk assessment process has greatly increased over the last 4 years. In fiscal year 2008 only 3 of the 24 inspectors general reported weaknesses related to assessing risk. In fiscal year 2011, 18 of 24 reported weaknesses in this area.
For example, according to a November 2011 inspector general report, one agency did not have a risk management framework in place and had not fully developed risk management procedures, due to budget cuts. Around the same time, another agency’s inspector general reported that while risk management procedures at a system-specific level had been implemented, an agency-wide risk management methodology had not been developed. In an October 2011 report on agencies’ efforts to implement information security requirements, we reported that of the 24 major agencies, none had fully or effectively implemented an agency-wide information security program. Of those, 18 had shortcomings in the documentation of their security management programs, which establish the framework and activities for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. Risk management was also a topic that our experts felt was very important to a comprehensive approach to cybersecurity. One expert stated that cybersecurity is not a technical problem, but an enterprise-wide risk management challenge that must be tackled in a far more comprehensive manner than is generally understood at both the enterprise and government level. One expert cited defining the cost of insecurity as one of the most significant challenges in improving the nation’s cybersecurity posture. Another expert suggested that the risk guidance be reviewed and updated due to changes in technology. NIST has developed guidance to assist agencies, once risks have been assessed, in determining which controls are appropriate for their information and systems.
In August 2009, NIST released the third revision of special publication 800-53, Recommended Security Controls for Federal Information Systems and Organizations, which provides a catalog of controls and technical guidelines that federal agencies must use to protect federal information and information systems. Use of NIST guidance for nonfederal information systems, such as those in the nation’s critical infrastructure, is encouraged but not required. Agencies have flexibility in applying NIST guidance, and according to NIST, agencies should apply the security concepts and principles articulated in special publication 800-53 in the context of the agency’s missions, business functions, and environment of operation. In addition, in order to ensure a consistent government-wide baseline, specific guidance has been developed for implementing and configuring controls in certain widely used computing platforms. In fiscal year 2010, DOD, DHS, NIST, and the federal CIO Council worked closely together to develop the United States Government Configuration Baseline (USGCB) for Windows 7 and Internet Explorer 8. As a baseline, USGCB is the core set of default security configurations for all agencies; however, agencies may customize the USGCB baseline to fit their operational needs. In fiscal year 2011, the USGCB was expanded to include RedHat Enterprise Linux 5 Desktop, and multiple updates for Windows 7 and Internet Explorer 8 were released. Although guidance for implementing appropriate cybersecurity controls has been available for many years, we have consistently identified weaknesses in agencies’ implementation of the guidance in control areas such as configuration management. Configuration management is an important process for establishing and maintaining secure information system configurations, and provides important support for managing security risks in information systems.
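The baseline-with-approved-deviations model described above can be sketched as a comparison of a host's actual settings against required values, skipping deviations the agency has formally documented. All setting names and values here are hypothetical illustrations, not actual USGCB parameters:

```python
# Hypothetical sketch: check a host's settings against a required baseline.
# Setting names and values are illustrative, not real USGCB parameters.

baseline = {
    "password_min_length": 12,
    "screen_lock_timeout_minutes": 15,
    "firewall_enabled": True,
}

# Deviations the agency has formally documented for operational needs.
approved_deviations = {"screen_lock_timeout_minutes"}

def compliance_findings(actual: dict) -> list:
    """Return baseline settings the host fails without an approved deviation."""
    findings = []
    for setting, required in baseline.items():
        if actual.get(setting) != required and setting not in approved_deviations:
            findings.append(setting)
    return findings

host = {"password_min_length": 8,
        "screen_lock_timeout_minutes": 30,
        "firewall_enabled": True}
print(compliance_findings(host))  # -> ['password_min_length']
```

Automated scanners apply the same comparison at scale across an agency's fleet, which is what makes baseline compliance measurable for FISMA reporting.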
However, inspectors general have consistently reported weaknesses in agencies’ implementation of such controls. For example, the fiscal year 2011 report to Congress on the implementation of FISMA listed configuration management as one of the 11 cybersecurity program areas that needed the most improvement. According to that report, 18 of 24 agencies’ configuration management programs needed significant improvement. The following deficiencies were found to be the most common: configuration management policy was not fully developed (13 of 23 agencies); configuration management procedures were not fully developed (9 of 23 agencies); standard baseline configurations were not identified for all hardware components (9 of 23 agencies); and USGCB was not fully implemented (8 of 23 agencies). Our own analysis of weaknesses reported by agency inspectors general also shows that the number of weaknesses related to configuration management has increased over the last 4 years. In fiscal year 2008, inspectors general from 15 agencies reported weaknesses related to configuration management, whereas 23 reported weaknesses in 2011. The experts we consulted focused on the need for security controls to be included in systems development, instead of being applied as an afterthought. One expert stated that commercial companies often forgo the extra cost associated with meeting defined cybersecurity specifications, and security is weakened as a result of the lack of built-in controls. Another expert made a similar comment, saying that one of the most significant changes that would improve cybersecurity is building in security instead of “bolting it on” after the fact. He added that this would involve changing the mindset of various stakeholders.
According to NIST, security control effectiveness is measured by correctness of implementation and by how adequately the implemented controls meet organizational needs in accordance with current risk tolerance (i.e., whether the control is implemented in accordance with the security plan to address threats and whether the security plan is adequate). Further, according to NIST, a key element in implementing an effective risk management approach is to establish a continuous monitoring program. Continuous monitoring is the process of maintaining an ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. The objectives are to (1) conduct ongoing monitoring of the security of an organization’s networks, information, and systems; and (2) respond by accepting, transferring, or mitigating risk as situations change. Continuous monitoring is one of the six steps in NIST’s risk management framework and is an important way to assess the security impacts on an information system due to changes in hardware, software, firmware, or environmental operations. As part of its reporting instructions since fiscal year 2010, OMB requested inspectors general to report whether agencies had established continuous monitoring programs. For fiscal year 2011, the administration identified continuous monitoring as one of three FISMA priorities, and therefore the fiscal year 2011 FISMA reporting instructions included expanded metrics related to continuous monitoring. OMB’s fiscal year 2011 report on the implementation of FISMA shows that, according to agency reporting, implementation of automated continuous monitoring capabilities rose from 56 percent of total assets in fiscal year 2010 to 78 percent of total assets in fiscal year 2011. 
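Asset-coverage figures like the 56 and 78 percent reported above are, in essence, the share of inventoried assets feeding automated monitoring. A minimal sketch of such a metric, using a hypothetical inventory:

```python
# Hypothetical sketch: percent of inventoried assets under automated
# continuous monitoring. Inventory entries are illustrative only.

inventory = [
    {"asset": "web-server-01", "monitored": True},
    {"asset": "db-server-01",  "monitored": True},
    {"asset": "hr-laptop-17",  "monitored": False},
    {"asset": "mail-gateway",  "monitored": True},
]

def monitoring_coverage(assets: list) -> float:
    """Share (percent) of assets reporting into automated monitoring."""
    if not assets:
        return 0.0
    monitored = sum(1 for a in assets if a["monitored"])
    return round(100 * monitored / len(assets), 1)

print(monitoring_coverage(inventory))  # -> 75.0
```

A metric like this is only as good as the underlying asset inventory, which is why the reporting instructions pair continuous monitoring with automated asset-inventory capabilities.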
Agencies reported that they had implemented automated capabilities for activities such as inventorying assets, configuration management, and vulnerability management, which contributed to improvements in continuous monitoring capabilities (see fig. 5). However, the report also states that inspectors general cited 4 out of 11 cybersecurity program areas, including continuous monitoring, as needing the most improvement. The weaknesses in continuous monitoring management most frequently reported by agency inspectors general were: continuous monitoring policy was not fully developed (9 of 23 agencies); key security documentation was not provided to the system authorizing official or other key system officials (8 of 23 agencies); and continuous monitoring procedures were not consistently implemented (7 of 23 agencies). Similarly, in October 2011, we reported that most of the 24 major federal agencies had not fully implemented their programs for continuous monitoring of security controls in fiscal year 2010. We and inspectors general identified weaknesses in 17 of 24 agencies’ fiscal year 2010 efforts for continuous monitoring. In addition, in a July 2011 report we stated that while the Department of State is recognized as a leader in federal efforts to develop and implement a continuous risk monitoring capability, this capability’s scope did not include non-Windows operating systems, firewalls, routers, switches, mainframes, databases, and intrusion detection devices. We recommended that State take several steps to improve the implementation of its continuous monitoring capability. Further, 2 inspectors general also reported that their respective agencies had not established a continuous monitoring program. While 15 inspectors general reported that their agencies had programs in place, all cited weaknesses in their agencies’ programs. These weaknesses included, for example, that continuous monitoring procedures were not fully developed or consistently implemented at 11 agencies.
In another example, 10 inspectors general cited weaknesses in ongoing assessments of selected security controls. Experts had mixed views about the importance of continuous monitoring as a tool to improve cybersecurity in the federal government. While one of the experts we consulted stated that moving from a paperwork-intensive process to a continuous monitoring process was the single most important action that could be taken to improve federal information security, another expert cited penetration testing as the single most important action. Two of the CIOs we surveyed also stated that the move to relying on automated tools to continuously monitor government systems is a practical way to contribute to meaningful security. Although federal agencies are making progress in implementing continuous monitoring programs that include automated capabilities for managing agency assets, configuration management, and vulnerability management, much more progress is needed to meet the administration’s goal for continuous monitoring. Until agencies can fully implement their continuous monitoring programs, they may have little assurance that they are aware of the true security impacts on their information and information systems due to changes in hardware, software, firmware, or environmental operations. Given the persistent shortcomings in all three key elements of agency risk management processes—assessment, implementation of controls, and monitoring results—it is important that a clearly defined OMB oversight process be in place to ensure that agencies are held accountable for implementing required risk management processes. Without a means to hold agencies accountable, the pattern of persistent risk management shortcomings is unlikely to improve. DHS and sector-specific agencies have responsibilities for facilitating the adoption of cybersecurity protective measures within critical infrastructure sectors. 
The NIPP states that, in accordance with HSPD-7, DHS is a principal focal point for the security of cyberspace, responsible for coordinating efforts to protect the cyber infrastructure owned and operated by the private sector, providing guidance on effective cyber-protective measures, assisting sector-specific agencies in understanding and mitigating cyber risk, and assisting in developing effective and appropriate protective measures. To accomplish these responsibilities, according to the NIPP, sector-specific agencies are to work with their private sector counterparts to understand and mitigate cyber risk by, among other things, determining whether approaches for critical infrastructure inventory, risk assessment, and protective measures address assets, systems, and networks; require enhancement; or require the use of alternative approaches. Security controls for critical infrastructure are likely to be determined largely by industry benchmarks and standards. In some instances, federal agencies have regulatory authority to require private sector implementation of controls. Some controls have also been recommended by federal agencies. In other areas there is little or no federal regulation of private sector cybersecurity practices. For example, as we reported in December 2011, the information technology, communications, and water critical infrastructure sectors and the oil and natural gas subsector of the energy sector are not subject to direct federal cybersecurity-related regulation. Our December 2011 report stated that although the use of cybersecurity guidance is not mandatory for all sectors, entities may voluntarily implement such guidance in response to business incentives, including the need to mitigate a variety of risks.
Officials familiar with cybersecurity issues from both the communications and information technology sectors stated that the competitive market place, desire to maintain profits, and customer expectation of information security—rather than federal regulation—drive the adoption of best practices. Officials responsible for coordinating the oil and gas sector said that their member companies are not required to follow industry guidelines, but legal repercussions regarding standards of care may motivate the incorporation of such cybersecurity guidance into their operations. Other critical infrastructure entities, such as depository institutions in the banking and finance sector; the bulk power system in the electricity subsector of the energy sector; the health care and public health sector; and the nuclear reactors, materials, and waste sector, are required to meet mandatory cybersecurity standards established by federal regulation. For example, the Federal Energy Regulatory Commission approved eight mandatory cybersecurity standards that address the following topics: critical cyber asset identification, security management controls, personnel and training, electronic security perimeter(s), physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans for critical cyber assets. However, applicability of these standards is limited to the bulk power system—a term that refers to facilities and control systems necessary for operating the electric transmission network and certain generation facilities needed for reliability. Further, regulatory oversight of the electric industry is fragmented among federal, state, and local authorities, thus posing challenges in gaining a system-wide view of the cyber risk to the electric grid in an environment where cyber threats and vulnerabilities of one segment of the grid could affect the entire grid.
DHS’s Office of Cybersecurity and Communications’ Control Systems Security Program has also issued recommended practices to reduce risks to industrial control systems within and across all critical infrastructure sectors. For example, in April 2011, the program issued the Catalog of Control Systems Security: Recommendations for Standards Developers, which is intended to provide a detailed listing of recommended controls from several standards related to control systems. Individual industries and critical infrastructure sectors also have their own specific standards, and some are required to comply with regulations that include cybersecurity. These include standards or guidance developed by regulatory agencies that assist entities within sectors in complying with cybersecurity-related laws and regulations. In our December 2011 report, we recommended that DHS, in collaboration with owners and operators of cyber-reliant critical infrastructure for the associated seven critical infrastructure sectors, determine whether it is appropriate to have key cybersecurity guidance listed in sector plans or annual plans and adjust planning guidance accordingly to suggest the inclusion of such guidance in future plans. The agency concurred with our recommendation. Many of the experts we consulted agreed that private sector companies controlling critical infrastructure had not done enough to protect against cyber threats and that the government had not done enough to engage these companies in efforts to enhance cybersecurity. Experts told us that the limited commitment of private sector companies to implement the government’s cybersecurity strategy was due to the fact that the government had not made a convincing business case, or value proposition, that specific threats affecting these companies merited substantial new investment in enhanced cybersecurity controls.
We continue to believe that DHS, in collaboration with key private sector entities, should implement our recommendation to determine whether it is appropriate to have key cybersecurity guidance listed in sector plans or annual plans and adjust planning guidance accordingly to suggest the inclusion of such guidance in future plans.

Information Sharing and Timely Analysis and Warning Challenge Federal Efforts to Detect, Respond to, and Mitigate Cybersecurity Incidents

FISMA recognizes incident response as a key element in safeguarding agencies’ information systems and in enhancing security and risk management. The White House and DHS have issued strategies for identifying and responding to cyber incidents affecting both federal information systems and the nation’s critical infrastructure; these strategies emphasize sharing information, developing analysis and warning capabilities, and coordinating efforts. However, despite efforts made to improve the coordination of information sharing and the development of a timely analysis and warning capability, agency officials and experts we consulted confirmed that these areas remain challenges. Since 2000, government strategies have identified the need to improve incident response, detection, and mitigation both within the federal government and across the nation. These strategies have consistently emphasized the importance of information sharing, analysis and warning capabilities, and coordinating efforts among relevant entities to minimize the impact of incidents. The 2000 National Plan for Information Systems Protection was largely focused on preparing for and responding to cyber incidents. Two of its three overall objectives were to:

Prepare for and prevent cyber attacks. This objective was aimed at minimizing the possibility of a significant attack and building an infrastructure that would remain effective in the face of such an attack.

Detect and respond to cyber attacks.
This objective focused on identifying and assessing attacks in a timely way, containing the attacks, and quickly recovering from them. The plan established programmatic elements and specific activities to achieve each objective with target completion dates. For example, programmatic elements to meet the “detect and respond” objective included detecting unauthorized intrusions, creating incident response capabilities, and sharing attack warnings in a timely manner. Specific activities to address these programmatic elements included developing a pilot intrusion detection network for civilian federal agencies and mechanisms for the regular sharing of federal threat, vulnerability, and warning data with private sector Information Sharing and Analysis Centers (ISAC). The 2003 National Strategy to Secure Cyberspace assigned DHS the lead responsibility for coordinating incident response and recovery planning as well as conducting incident response exercises. The strategy set three objectives that mirror those of the 2000 plan: prevent cyber attacks against America’s critical infrastructures, reduce national vulnerability to cyber attacks, and minimize damage and recovery time from cyber attacks that do occur. Developing a national cybersecurity response system was identified as one of five national priorities, and activities were identified to achieve this priority. According to the strategy, an effective national cyberspace response system would involve public and private institutions and cyber centers performing analysis, conducting watch and warning activities, enabling information exchange, and facilitating restoration efforts. The strategy recommended, among other things, that DHS create a single point of contact for the federal government’s interaction with industry and other partners, which would include cyberspace analysis, warning, information sharing, incident response, and national-level recovery efforts. 
In response to the strategy’s recommendations, DHS established US-CERT, which is charged with defending against and helping to respond to cyber attacks on executive branch agencies as well as sharing information and collaborating with state and local governments, industry, and international partners. The 2003 strategy also stated that DHS would use exercises to evaluate the impact of cyber attacks on government-wide processes. Such exercises were to include critical infrastructure that could have an impact on government-wide processes. According to DHS, it has conducted several exercises since the strategy was issued, including four national-level exercises through its National Exercise program and four Cyber Storm exercises under DHS’s Office of Cybersecurity and Communications. The 2008 CNCI included several projects designed to limit the government’s susceptibility to attack and improve its ability to detect and respond to cyber incidents. Unlike the previous strategies, the CNCI focused on technical solutions for incident detection and response. The CNCI projects included the trusted Internet connections initiative, which aimed to limit the ways in which attackers could gain access to federal networks by consolidating external access points, and phases 2 and 3 of the National Cybersecurity Protection System (operationally known as EINSTEIN). The EINSTEIN 2 project involved deploying sensors to inspect Internet traffic entering federal systems for unauthorized accesses and malicious content. EINSTEIN 3’s goal was to identify and characterize malicious network traffic to enhance cybersecurity analysis, situational awareness, and security response. The NIPP sets out a strategy for strengthening national preparedness, timely response, and rapid recovery of critical infrastructure from cyber attacks or other emergencies.
According to the NIPP, this goal can be achieved by building partnerships with federal agencies; state, local, tribal, and territorial governments; the private sector; international entities; and non-governmental organizations to share information and implement critical infrastructure protection programs and resilience strategies. Accordingly, the NIPP relies on public-private partnerships to coordinate information-sharing activities related to cybersecurity. It also encourages private sector involvement by establishing sector coordinating councils for each critical infrastructure sector established by HSPD-7. Sectors also utilize ISACs, which provide operational and tactical capabilities for information sharing and, in some cases, support for incident response activities. Through the public-private partnership, the government and private sectors are to work in tandem to create the context, framework, and support for coordination and information-sharing activities required to implement and sustain a specific sector’s critical infrastructure protection efforts. The NIPP also states that government and private sector partners are to work together to ensure that exercises include adequate testing of critical infrastructure protection measures and plans, including information sharing. The 2009 Cyberspace Policy Review subsequently concluded that previous federal responses to cyber incidents were less than fully effective because they had not been fully integrated, thus returning to an emphasis on information sharing and coordination. For example, it stated that while federal cybersecurity centers often shared their information, no single entity combined all information available from these centers and other sources to provide a continuously updated and comprehensive picture of cyber threats and network activity. Such a comprehensive picture could provide indications and warning of incoming incidents and support a coordinated incident response. 
The policy review observed that the government needed a reliable and consistent mechanism for bringing all appropriate incident and vulnerability information together and recommended the development of an information-sharing and incident response framework. The review recommended that the federal government leverage existing resources, such as the Multi-State Information Sharing and Analysis Center and the 58 state and local fusion centers, to develop processes to assist in preventing, detecting, and responding to cyber incidents. Implementation of the recommended framework would require developing reporting thresholds, adaptable response and recovery plans, information sharing, and incident reporting mechanisms. The review also identified and recommended near- and midterm actions, which included preparing a cybersecurity incident response plan, initiating a dialogue to enhance public-private partnerships, and developing a process between the government and the private sector to assist in preventing, detecting, and responding to incidents. In response to the policy review recommendations, DHS drafted the Interim National Cyber Incident Response Plan in 2010, which establishes an incident response framework and designates the National Cybersecurity and Communications Integration Center (NCCIC) as the national point of execution for response activities within the scope of DHS authorities. The NCCIC is the point of integration for sharing information from federal agencies; state, local, tribal, and territorial governments; and the private sector, including international stakeholders. According to the response plan, all stakeholders—public and private sector stakeholders, law enforcement agencies, and the intelligence community—are responsible for assessing lessons learned from previous incidents and exercises and incorporating these lessons into their preparedness activities and plans. 
In addition, organizations are responsible for engaging with the NCCIC, operational organizations like ISACs, and other organizations within the cyber incident response community, among other things, to coordinate incident response activities. Despite repeated emphasis on information sharing, analysis and warning capabilities, and coordination, the federal government continues to face challenges in effectively sharing threat and incident information with the private sector and in developing a timely analysis and warning capability. While DHS has made incremental progress in improving its information sharing and developing timely analysis and warning capabilities, these challenges remain. According to the 2009 Cyberspace Policy Review, sharing of information among entities is key to preventing, detecting, and responding to incidents. Network hardware and software providers, network operators, data owners, security service providers, and in some cases, law enforcement or intelligence organizations may each have information that can contribute to the detection and understanding of sophisticated intrusions or attacks. A full understanding and effective response may only be possible by bringing together information from those various sources for the benefit of all. DHS has taken steps to facilitate information sharing. For example, in 2010, the DHS inspector general reported that US-CERT had established the Joint Agency Cyber Knowledge Exchange (JACKE) and Government Forum of Incident Response and Security Teams to facilitate collaboration on detecting and mitigating threats to the .gov domain and to encourage proactive and preventative security practices. Additionally, in 2010, the DHS inspector general reported that DHS shared cyber incident information through its Government Forum of Incident Response and Security Teams and US-CERT portals. 
In 2008 and 2010, we reported that one of the barriers to information sharing was the lack of individuals with appropriate security clearances to receive classified information related to potential or actual cyber-related incidents, which prevented federal agencies and private sector companies from acting on these incidents in a timely manner. In 2010, we also reported that private sector companies were often unwilling to share incident data because they were concerned about their proprietary data being seen by competitors. We recommended that the Cybersecurity Coordinator and the Secretary of Homeland Security focus their information-sharing efforts on the most desired services, including providing security clearances. Since these reports, DHS stated that it has taken steps to increase the number of individuals in the public and private sector who are granted security clearances and are able to receive classified information related to cyber incidents. According to the DHS inspector general, the department has also coordinated the installation of classified and unclassified information technology systems at fusion centers to support information sharing. In addition, DHS stated that it has established information-sharing agreements between the federal government and the private sector or ISAC, and a program to address private sector partners’ concerns related to protecting their proprietary data. Further, DHS reported that, as of May 2012, there were 16 organizations, including federal agencies and private sector companies, operating and participating within the NCCIC to share information. Finally, according to DHS officials, the NCCIC and its components are also collaborating with industry to develop a set of technical specifications intended to help automate information sharing by establishing a framework for exchanging data. 
To improve government and critical infrastructure collaboration and public-private cybersecurity data sharing, DHS reported that it had established the Critical Infrastructure Information Sharing and Collaboration Program. The program’s goal is to improve sharing among ISACs, information and communications technology service providers, and their respective critical infrastructure owners, operators, and customers. According to DHS, this program facilitated the sharing and distribution of 11,000 indicators of cyber threat activity and over 400 products, including indicator and analysis bulletins. In addition, according to DHS, US-CERT has incorporated a Traffic Light Protocol into its information-sharing products. The Traffic Light Protocol provides a methodology to specify a color on a product to reflect when information should be used and how it may be shared. In addition, according to a DHS official, in October 2012, DHS’s Office of Cybersecurity and Communications was realigned to include all entities reporting to the NCCIC division. This new structure brought all of the department’s operational communications and cybersecurity programs together under a single point of coordination. DHS has not always been able to take action to improve information sharing, however. For example, the Office of the Director of National Intelligence issued a directive on sharing “tear-line” information among intelligence community members and state, local, tribal, and private sector partners. This policy directs the intelligence community to improve tear-line utility for the needs of recipients prior to publication and specifies that tear lines should be extended to the broadest possible readership. However, DHS does not have the authority to declassify information it receives from other entities. 
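The Traffic Light Protocol’s color-based handling can be illustrated with a minimal sketch. The audience categories and the `may_share` function below are hypothetical; only the four color names (RED, AMBER, GREEN, WHITE) follow the standard TLP convention:

```python
# Illustrative sketch of Traffic Light Protocol (TLP) handling rules.
# Colors are ordered from most to least restrictive; a product may be
# shared with an audience only if its marking is no more restrictive
# than the minimum color that audience requires.
TLP_ORDER = ["RED", "AMBER", "GREEN", "WHITE"]

# Hypothetical mapping of audiences to the minimum (least restrictive)
# marking that permits sharing with them.
MIN_COLOR_FOR_AUDIENCE = {
    "named_recipients": "RED",     # named individuals only
    "own_organization": "AMBER",   # recipient's organization
    "community": "GREEN",          # peer and partner organizations
    "public": "WHITE",             # unrestricted disclosure
}

def may_share(color: str, audience: str) -> bool:
    """Return True if a product marked `color` may reach `audience`."""
    required = MIN_COLOR_FOR_AUDIENCE[audience]
    return TLP_ORDER.index(color) >= TLP_ORDER.index(required)
```

Under these rules an AMBER-marked bulletin could circulate within a recipient organization but not be posted publicly, which is the kind of distinction the protocol’s color marking is meant to convey.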
For example, the inspector general reported that DHS was not able to act on the new directive: DHS cannot generate tear-line reports or release any information that may hinder another agency’s ongoing investigation or work in progress, or that would violate applicable classification policies. Difficulties in sharing and accessing classified information and the lack of a centralized information-sharing system continue to hinder DHS’s progress in sharing cyber-related incident data in a timely manner. For example, in December 2011, the DHS inspector general reported that classification of information impedes effective information sharing between officials within fusion centers and emergency operations centers. The inspector general recommended that DHS effectively disseminate and implement a directive to improve policies for safeguarding and governing access to classified information shared by the federal government with state, local, tribal, and private sector entities. DHS concurred with the recommendation. In addition, in July 2012, the former DHS inspector general reported that state and local fusion center personnel had expressed concern with federal information-sharing systems because the systems were not integrated and information could not easily be shared across them, resulting in continued communication and information-sharing challenges. The DHS inspector general also reported that US-CERT collected and posted information from several systems and sources to different portals, all of which had different classification levels, resulting in communication and information-sharing issues. The inspector general recommended that the department establish a consolidated, multiple-classification-level portal that can be accessed by federal partners and includes real-time incident response related information and reports. 
According to DHS officials, a secure environment for sharing cybersecurity information, at all classification levels, intended to address these issues is scheduled to be fully operational in fiscal year 2018. Information sharing presents a challenge not only within the nation, but also with the international community. In August 2012, the DHS inspector general reported that information sharing with foreign partners has been hindered due, in part, to varying classification policies. Foreign governments have developed their own policies for classifying sensitive information, which has resulted in inconsistencies in classifying information among different countries. According to the inspector general, an international team that was surveyed indicated that inconsistent classification requirements hinder foreign countries’ abilities to share cyber threat data in a timely manner, as information shared must be approved by different authorities in various countries before it can be disseminated to international partners and private organizations. The inspector general recommended that DHS conduct information-sharing assessments to identify internal gaps and impediments in order to increase situational awareness and enhance collaboration with foreign nations. DHS concurred with the recommendation. Agency officials, CIOs, and experts we consulted agreed that information sharing remains a significant challenge. According to a DHS official, despite the NCCIC being in operation, there are still challenges with coordinating and sharing information. The official explained that these challenges are due in part to DHS’s lack of authority over agencies’ information-sharing practices and the private sector’s cybersecurity efforts, and that agencies and private sector companies are not always able to identify the benefit of reporting information to DHS. 
Seven out of the 11 CIOs that responded to our survey stated that the most effective way to enhance information sharing would be to develop a streamlined process for declassifying key information and making it available to stakeholders. One CIO also explained that the current process for notifying agencies about incidents lacks specificity, making it unclear what the threat is and how to mitigate it. The CIO added that a declassification process would be helpful. Several CIOs stated the most effective way to enhance information sharing would be to improve the timeliness of incident information reports. Further, 6 of the 11 CIOs indicated that focused information-sharing efforts, including working toward increased private sector engagement and a robust information-sharing framework, are the most important actions that the federal government can take now to improve protection of cyber critical infrastructure. Six CIOs also stated that improving information sharing and coordination is the most important action that the federal government could take to improve the national response to large-scale cyber events. Several experts surveyed agreed that information sharing is a challenge. For example, one expert stated that the most important action that can be taken now to improve federal information security is improving information sharing. The expert explained that real-time information sharing between different branches of government, including the Department of Defense and intelligence community, would be valuable. In addition, experts stated that information sharing is one of the most significant challenges in improving the nation’s cybersecurity posture. Establishing analytical and warning capabilities is essential to thwarting cyber threats and attacks. 
Cyber analysis and warning capabilities include (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to threats. The 2009 Cyberspace Policy Review identified a need for the federal government to improve its ability to provide strategic warning of cyber intrusions. In 2008, we identified 15 key attributes associated with these capabilities, including integrating the results of the analysis of the information into predictive analysis of broader implications or potential future attacks. This type of effort—predictive analysis—should look beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of importance. US-CERT has established a cyber analysis and warning capability that includes many elements of the key attributes we identified in our 2008 report. For example, it obtains internal network operation information via technical tools and EINSTEIN; obtains external information on threats, vulnerabilities, and incidents; and detects anomalous activities based on the information it receives. To help improve the federal government’s analysis and warning capability, DHS has completed several actions. For example, according to DHS, the department has (1) increased its cybersecurity workforce, (2) improved the training available to federal staff, such as periodic training on EINSTEIN capabilities; and (3) launched a loaned executive program to obtain ad hoc, unpaid, short- term expertise through appointment of private sector individuals. According to DHS, to strengthen its analytical capabilities, it is using an analysis tool to enhance its ability to track malicious activity. 
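As an illustration of the anomaly-detection step in the capabilities listed above, a minimal sketch might flag traffic volumes that deviate sharply from the norm. The data, threshold, and function below are invented for illustration; real sensors such as EINSTEIN perform far richer analysis:

```python
import statistics

def detect_anomalies(hourly_volumes, threshold=2.0):
    """Flag indices of hours whose traffic volume deviates from the mean
    by more than `threshold` population standard deviations (z-score test)."""
    mean = statistics.mean(hourly_volumes)
    stdev = statistics.pstdev(hourly_volumes)
    if stdev == 0:
        return []  # no variation in the data, nothing to flag
    return [i for i, v in enumerate(hourly_volumes)
            if abs(v - mean) / stdev > threshold]

# A sudden spike stands out against otherwise steady traffic:
detect_anomalies([100, 102, 98, 101, 99, 500])  # flags index 5
```

An anomaly flagged this way is only the first of the four steps; it would still require the analysis, warning, and response stages described above before it becomes actionable.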
DHS also reported utilizing a cyber indicators analysis platform that acts as a centralized repository for cyber threat network data and facilitates information exchange among US-CERT and its partners to conduct analysis. Also, DHS has established the NCCIC as its 24-hour cyber and communications watch and warning center, with representation from law enforcement and intelligence organizations, computer emergency response teams, and private sector information-sharing and analysis centers. According to DHS, the EINSTEIN capabilities have been expanded, and 53 federal agencies are now using EINSTEIN 2 intrusion detection sensors. DHS staff have also stated that the department is incorporating an EINSTEIN 3 accelerated (EA) strategy allowing for accelerated deployment of intrusion prevention services through an Internet Service Provider-based managed security service. We recommended that the Secretary of Homeland Security expand capabilities to investigate incidents. In response to our report, DHS stated that while it has made progress in developing its predictive capability through the EINSTEIN program, it remained challenged in fully developing this capability. DHS plans to test tools for predictive analysis across federal agencies and private networks and systems by the first quarter of fiscal year 2013. In addition, in 2010, the DHS inspector general reported that the tools US-CERT used did not allow for real-time analyses of network traffic. The inspector general recommended that DHS establish a capability to share real-time EINSTEIN information with federal agency partners to assist them in the analysis and mitigation of incidents. In response to the inspector general report, DHS stated that while it plans to upgrade its capabilities to share real-time information with multiple stakeholders and better analyze cyber incidents, these capabilities are not expected to be fully operational until fiscal year 2018. 
In addition, agency CIOs and experts that responded to our survey indicated that developing a timely analysis and warning capability remains a challenge due in part to personnel changes, a lack of qualified personnel and incentives, and the lack of appropriate tools. For example, one CIO stated that there has been a significant amount of turnover of cyber leadership at DHS and that this is one of the most significant challenges to improving the nation’s cybersecurity posture. Another CIO indicated that increased funding, hiring of more qualified personnel, and more timely notifications would also significantly assist in developing timely warning capabilities. Likewise, a cybersecurity expert we interviewed agreed that DHS may be losing skilled personnel to the private sector because of incentives such as higher salaries. A federal CIO further stated that additional funding was needed for monitoring and intrusion prevention tools. DHS has taken a number of steps to improve information sharing and timely analysis and warning capabilities, including addressing many of our prior recommendations. However, it has not yet fully addressed all of the recommendations made by us and the inspector general. We continue to believe that DHS needs to fully implement these recommendations in order to make better progress in addressing the challenges associated with effectively responding to and mitigating cybersecurity incidents. Until the previous recommendations are addressed, these challenges are likely to persist. 
Addressing Challenges in Promoting Education, Increasing Awareness, and Workforce Planning Is Key to Implementing a Successful Cybersecurity Strategy 

NIST includes education as a key element in its guidance to agencies, noting that establishing and maintaining a robust and relevant information-security training and awareness program is the primary conduit for providing the workforce with the information and tools needed to protect an agency’s vital information resources. Specifically, the ability to secure federal systems is dependent on the knowledge, skills, and abilities of the federal and contractor workforce that uses, implements, secures, and maintains these systems. This includes federal and contractor employees who use IT systems in the course of their work as well as the designers, developers, programmers, and administrators of the programs and systems. Workforce planning addresses education at a strategic, agency-wide level. Our own work and the work of other organizations, such as the Office of Personnel Management (OPM), suggest that there are leading practices that workforce planning should address, including developing workforce plans that link to the agency’s strategic plan; identifying the type and number of staff needed for an agency to achieve its mission and goals; defining roles, responsibilities, skills, and competencies for key positions; developing strategies to address recruiting needs and barriers to filling key positions; ensuring compensation incentives and flexibilities are effectively used to recruit and retain employees for key positions; ensuring compensation systems are designed to help the agency compete for and retain the talent it needs to attain its goals; and establishing a training and development program that supports the competencies the agency needs to accomplish its mission. 
The 2000 National Plan for Information Systems Protection stated that a cadre of trained computer science and information technology specialists was the most urgently needed solution for building a defense of our nation’s cyberspace, but the hardest to acquire. The plan proposed steps to stimulate the higher education market to produce more cybersecurity professionals. Specifically, the plan described five Federal Cyber Services (now CyberCorps) training and education programs intended to help solve the federal IT security personnel problem. These five programs were an occupational study to assess the numbers and qualifications of IT positions in the federal government, the development of Centers for Information Technology Excellence, the creation of a scholarship program to recruit and educate federal IT personnel, the development of a high school recruitment and training initiative, and the development and implementation of a federal information security awareness curriculum. At the time, these programs were targeted for implementation by May 2002. The 2003 National Strategy to Secure Cyberspace also recognized the importance of education, awareness, and training, expanding the focus of the 2000 plan on building a stronger workforce to include a national security awareness and training program as one of its five priority areas. The strategy identified four major actions and initiatives to address this priority. They were to foster adequate training and education programs to support the nation’s cybersecurity needs; promote a comprehensive national awareness program to empower all Americans—businesses, the general workforce, and the general population—to secure their own parts of cyberspace; increase the efficiency of existing federal cybersecurity training programs; and promote private-sector support for well-coordinated, widely recognized professional cybersecurity certifications. 
The 2003 strategy recommended that DHS be the lead agency responsible for implementing programs to address its four major actions and initiatives. To foster adequate training and education programs, DHS was charged with implementing and encouraging establishment of training programs for cybersecurity professionals in coordination with the National Science Foundation, OPM, and the National Security Agency. DHS was also charged with developing a coordination mechanism for federal cybersecurity and computer forensics training programs and encouraging private sector support for professional cybersecurity certifications. To increase public awareness, DHS was asked to facilitate a comprehensive awareness campaign; encourage and support the development of programs and guidelines for primary and secondary school students in cybersecurity; and create a public-private task force to identify ways to make it easier for home users and small businesses to secure their systems. The 2008 CNCI focused again on the cybersecurity workforce and included a training program for cybersecurity professionals among its 12 programs. Specifically, CNCI called for constructing a comprehensive federal cyber education and training program, with attention to offensive and defensive skills and capabilities. The CNCI education and training project was assigned to DHS and DOD as a joint effort, altering the responsibilities defined in the 2003 strategy. The 2009 White House Cyberspace Policy Review also noted the importance of cybersecurity education, awareness, and workforce planning. It stated that the United States needed a technologically advanced workforce and that the general public needed to be well informed about how to use technology safely. 
To do this, it recommended (1) promoting cybersecurity risk awareness for all citizens; (2) building an education system to enhance understanding of cybersecurity and allow the United States to retain and expand upon its scientific, engineering, and market leadership in information technology; (3) expanding and training the workforce to protect the nation’s competitive advantage; and (4) helping organizations and individuals make smart choices as they manage risk. It named the Cybersecurity Coordinator as the lead for the development and implementation of a public awareness strategy and a strategy for better attracting cybersecurity expertise and increasing cybersecurity staff retention within the federal government. It tasked all departments and agencies with expanding support for key education programs and facilitating programs and information sharing on threats, vulnerabilities, and effective practices across all levels of government and industry. Consistent with the federal government’s evolving strategy for education, awareness, and workforce planning, DHS, NIST, and other agencies have initiated a comprehensive cybersecurity education program that includes education, awareness, and workforce planning. In April 2010, the National Initiative for Cybersecurity Education (NICE) was begun as an interagency effort coordinated by NIST to improve cybersecurity education, including efforts directed at training, public awareness, and the federal cybersecurity workforce. To meet NICE objectives, efforts were structured into the following four components: 

1. National Cybersecurity Awareness: This component included public service campaigns to promote cybersecurity and responsible use of the Internet as well as making cybersecurity popular for children. It was also aimed at making cybersecurity a popular educational and career pursuit for older students. 

2. Formal Cybersecurity Education: Education programs encompassing K-12, higher education, and vocational programs related to cybersecurity were included in this component, which focused on the science, technology, engineering, and math disciplines to provide a pipeline of skilled workers for the private sector and government. 

3. Federal Cybersecurity Workforce Structure: This component addressed personnel management functions, including the definition of cybersecurity jobs in the federal government and the skills and competencies they required. Also included were new strategies to ensure federal agencies attract, recruit, and retain skilled employees to accomplish cybersecurity missions. 

4. Cybersecurity Workforce Training and Professional Development: Cybersecurity training and professional development for federal government civilian, military, and contractor personnel were included in this component. 

In March 2010, we reported that CNCI faced a number of key challenges in achieving its objectives, including reaching agreement among stakeholders on the scope of cybersecurity education efforts. Stakeholders could not reach agreement on whether to address cybersecurity education from a much broader perspective as part of the initiative, or remain focused on the federal cyber workforce. A panel of experts stated at the time that the federal government needed to publicize and raise awareness of the seriousness of the cybersecurity problem and to increase the number of professionals with adequate cybersecurity skills. They went on to say that the cybersecurity discipline should be organized into concrete professional tracks through testing and licensing. Such tracks would increase the federal cybersecurity workforce by strengthening the hiring and retention of cybersecurity professionals. 
We recommended that the Director of National Intelligence and the OMB Director reach agreement on the scope of CNCI’s education projects to ensure that an adequate cadre of skilled personnel was developed to protect federal information systems. The scope of the CNCI education projects was subsequently expanded from a federal focus to a larger national focus. In August 2011, NIST released a draft version of the NICE Strategic Plan that included the high-level goals and vision for cybersecurity education. In November 2011, we reported that while the NICE strategic plan described several ambitious outcomes, the departments involved in NICE had not developed details on how they were going to achieve the outcomes. We further reported that specific tasks under and responsibilities for NICE activities were unclear and a formal governance structure was missing. We recommended that Commerce, OMB, OPM, and DHS collaborate through the NICE initiative to clarify the governance structure for NICE to specify responsibilities and processes for planning and monitoring of initiative activities; and develop and finalize detailed plans allowing agency accountability, measurement of progress, and determination of resources to accomplish agreed-upon activities. Since then, DHS has developed a plan for its role in implementing NICE. Although the plan does not contain detailed steps on how the department will achieve the stated goals, it does include a timeline for completion and immediate and long-term recommended calls to action. In addition, in support of the NICE initiative, the National Security Agency established a program in April 2012 for the Academic Centers of Excellence in Cyber Operations to further the goal of broadening the pool of skilled cybersecurity workers. This program provides a particular emphasis on technologies and techniques related to specialized cyber operations to enhance the national security posture of the United States. 
We have also evaluated the extent to which federal agencies have implemented and established workforce planning practices for cybersecurity personnel. In November 2011, we reported on the progress selected agencies had made in developing workforce plans that specifically define cybersecurity needs. Of the eight agencies we reviewed, only two—DOD and the Department of Transportation (DOT)—had developed workforce plans that addressed cybersecurity. DHS and the Department of Justice had plans that, although not specific to cybersecurity, did address cybersecurity personnel. One agency—the Department of Veterans Affairs (VA)—had a guide on implementing competency models that addressed elements of workforce planning. The remaining three agencies—the Department of Commerce, the Department of Health and Human Services (HHS), and the Department of the Treasury—had neither departmental workforce plans nor workforce plans that specifically addressed cybersecurity workforce needs. Additionally, data provided from various sources on these agencies’ cybersecurity workforce numbers were inconsistent due, in part, to the challenge of defining cybersecurity positions. These agencies had generally taken steps to define cybersecurity roles and responsibilities and related skills and competencies; however, the approaches taken by each agency varied considerably. All eight agencies reported challenges with filling cybersecurity positions. Further, only three of the eight agencies had a department-wide training program for their cybersecurity workforce. Two of the three had established certification requirements for cybersecurity positions. 
We recommended that Commerce, HHS, and Treasury develop and implement a department-wide cybersecurity workforce plan or ensure that departmental components are conducting appropriate workforce planning activities; that DOD and DOT update their department-wide cybersecurity workforce plan or ensure that departmental components have plans that appropriately address human capital approaches, critical skills, competencies, and supporting requirements for their cybersecurity workforce strategies; and that VA update its department-wide cybersecurity competency model or establish a cybersecurity workforce plan that fully addresses gaps in human capital approaches and critical skills and competencies, supporting requirements for its cybersecurity workforce strategies, and monitoring and evaluating agency progress. In addition, to help federal agencies better identify their cybersecurity workforce and to improve cybersecurity workforce efforts, we recommended that OPM identify and develop government-wide strategies to address challenges federal agencies face in tracking their cybersecurity workforce; finalize and issue guidance to agencies on how to track the use and effectiveness of incentives for hard-to-fill positions, including cybersecurity positions; and maximize the value of the cybersecurity competency model by (1) developing and implementing a method for ensuring that the competency model accurately reflects the skill set unique to the cybersecurity workforce, (2) developing a method for collecting and tracking data on the use of the competency model, and (3) creating a schedule for revising or updating the model as needed. Five of the agencies concurred with our recommendations, and one agency neither concurred nor nonconcurred with our recommendations. 
In August 2012, NIST published the National Cybersecurity Workforce Framework, which established a common taxonomy and lexicon that is to be used to describe all cybersecurity work and workers regardless of where or for whom the work is performed. The developers of the framework intended it to be used in the public, private, and academic sectors. According to the framework, the inability to truly understand the cybersecurity workforce will persist, and the nation will be unnecessarily vulnerable to risk, unless the framework is adopted verbatim. Of the agency CIOs and experts we surveyed, a substantial number believe education, awareness, and workforce planning are a key challenge. Four of the 11 agency CIOs that responded to our survey, as well as 5 of the 12 experts we surveyed, cited weaknesses in education, awareness, and workforce planning as a root cause hindering progress in improving the nation’s cybersecurity posture. According to these CIOs and experts, executives in both federal and private sector organizations often lack a clear understanding of the cybersecurity threat they face and thus often do not make the necessary commitment to developing and maintaining adequate cybersecurity defenses. Specifically, three CIOs stated that the root cause hindering progress in improving the nation’s cybersecurity posture is the lack of understanding of the threats and risks to cyber assets. One CIO responded that there does not seem to be sufficient understanding or appreciation of the seriousness of the threats. He went on to state that we must find ways to convince the public that immediate, priority actions are necessary. Two of the cybersecurity experts we surveyed agreed that a poor understanding of the threats and risks was a root cause hindering progress in cybersecurity. 
For example, one expert stated that it was commonplace for corporate executives to underestimate cybersecurity threats, believing that Internet-based attacks are “not going to happen to me.” In addition, several CIOs and experts were concerned that the cybersecurity workforce was inadequate, both in numbers and training. One CIO stated that role-based qualification standards are needed for the cybersecurity and general workforce with specific actions and activities that are common across the government. He added that the quality of the workforce is one of the largest contributors to the success or failure of a cybersecurity program. During our panel discussion, one expert cited the difficulties in retaining cyber professionals as a challenge. Another panel participant agreed, adding that the lack of cyber professionals at the local government level was also a problem. He added that another challenge was that not enough effort had been spent on implementing planned education and awareness initiatives. For example, he stated that the NICE initiative had stalled in part because funding was devoted to an additional study of the issues involved in education and workforce development. While DHS and other agencies have taken steps to address our recommendations to clarify the scope of CNCI education initiatives and the governance structure of the NICE initiative, other recommendations have not yet been fully addressed. We continue to believe that OPM and other agencies need to fully implement our recommendations regarding the need to develop and implement department-wide cybersecurity workforce plans or ensure that departmental components are conducting appropriate workforce planning activities. Such actions can contribute to better progress in addressing the challenges associated with enhancing education, awareness, and workforce planning. Until our recommendations are addressed, these challenges are likely to persist. 
A National Strategy for Promoting Research and Development Has Not Been Fully Implemented Investing in R&D in cybersecurity technology is essential to creating a broader range of choices and more robust tools for building secure, networked computer systems. The increasing number of incidents and the greater sophistication of cyber threats highlight the importance of investing in R&D to develop new measures to effectively counter these threats. Over the past two decades, federal law and policy have repeatedly called for enhancements to R&D activities to focus on cybersecurity and accelerate useful results. Several laws and executive directives have called for activities that promote cybersecurity R&D. For example, in 1998, Presidential Decision Directive 63 established a focal point for cybersecurity R&D. It directed OSTP to coordinate research and development agendas and programs for the government through the National Science and Technology Council. The directive stated that R&D should be subject to multiyear planning, take into account private sector research, and be adequately funded to minimize vulnerabilities on a rapid timetable. In November of 2002, the Cyber Security Research and Development Act authorized funding to the National Institute of Standards and Technology and the National Science Foundation to create more secure cyber technologies and expand cybersecurity R&D. The act called for an increase in federal investment in computer and network security R&D to improve vulnerability assessment, technology, and systems solutions. In addition, it called for an expansion and improved pool of researchers and better coordination of information sharing and collaboration among industry, government, and academic research projects. Also, in 2002, the E-Government Act mandated that OMB ensure the development and maintenance of a government-wide repository of information about federally funded R&D, which would include R&D related to cybersecurity. 
HSPD-7, which replaced Presidential Decision Directive 63, also promoted cybersecurity R&D and directed the Department of Commerce to work with private sector, academic, and government organizations to improve technology for cyber systems. It also directed OSTP to coordinate interagency R&D to enhance the protection of critical infrastructure and to assist in preparing an annual federal research and development program. In addition to these laws and directives, the federal government has repeatedly adopted cybersecurity strategies that call for enhancing research and development. For example, in response to Presidential Decision Directive 63, the 2000 National Plan for Information Systems Protection called for a critical infrastructure protection R&D program that would rapidly identify, develop, and facilitate technological solutions to existing and emerging infrastructure threats and vulnerabilities. To achieve this goal, the plan recommended that the process include an awareness of the state of new technological developments; an ability to produce affordable R&D programs in critical infrastructure protection in a timely manner; a functioning, effective two-way interaction with the private sector, academia, and other countries to minimize R&D overlap and ensure that the needs of the private sector and government are met; and an innovative and flexible management structure that is responsive to rapid changes in the environment in terms of technology and threats. Additionally, it tasked an interagency working group with ensuring the proper coordination of individual R&D programs within and across agencies and the rapid transfer of technologies among agencies and with the private sector. The 2003 National Strategy to Secure Cyberspace also noted the importance of R&D. As part of the strategy’s priority to reduce threats and related vulnerabilities, it called for the prioritization of federal cybersecurity research and development agendas. 
To achieve this, the strategy directed OSTP to coordinate development of a federal R&D agenda that included near-term, midterm, and long-term IT security research for fiscal year 2004 and beyond. Like the 2000 plan, it also noted the importance of coordination. The 2003 National Strategy to Secure Cyberspace directed DHS to ensure that adequate mechanisms existed for coordination of research and development among academia, industry, and government. DHS was further tasked with facilitating communication between the public and private research and security communities to ensure that emerging technologies were periodically reviewed by the National Science and Technology Council. The 2008 CNCI included research and development as one of the three overall goals of the initiative and defined specific R&D efforts to achieve those goals. Two of the 12 projects included in the initiative support its R&D goal. Like the 2000 plan and the 2003 strategy, the first project called for OSTP to coordinate and redirect R&D efforts with a focus on better coordinating both classified and unclassified cybersecurity R&D. The second project called for OSTP to define and develop enduring “leap-ahead” technology, strategies, and programs by investing in high-risk, high-reward R&D and by working with both private sector and international partners. The 2009 Cyberspace Policy Review likewise called for the development of a framework for R&D strategies that would focus on “game-changing” technologies with the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure. The policy review asked that the research community be given access to event data to facilitate developing tools, testing theories, and identifying workable solutions. The policy review again focused on the need for coordination. 
According to the review, the government should greatly expand its coordination of R&D work with industry and academic research efforts to avoid duplication, leverage complementary capabilities, and ensure that the technological results of R&D efforts enter the marketplace. The NIPP also identified R&D as a key element in protecting the nation’s critical infrastructure. Like previous strategies, the NIPP identified coordination as a goal for R&D. It stated that federal agencies should work collaboratively to design and execute R&D programs to help develop knowledge and technology to more effectively mitigate the risk to critical infrastructure. The plan described the national critical infrastructure protection R&D plan, which identified three long-term, strategic R&D goals for critical infrastructure protection: a “common operating picture” to continuously monitor the health of critical infrastructure; a next-generation Internet architecture with designed-in security; and resilient, self-diagnosing, self-healing infrastructure systems. According to the plan, these strategic goals were to be used to guide federal R&D investment decisions and coordinate overall federal research. As previously stated, in December 2011 OSTP issued the first cybersecurity R&D strategic plan in response to the R&D-related recommendations in the Cyberspace Policy Review. According to a key Subcommittee on Networking and Information Technology Research and Development (NITRD) official who works closely with OSTP, the federal cybersecurity R&D strategic plan is intended to provide an overall vision or direction for R&D, while specific research priorities and time frames are to be determined at the agency level. As early as 2000, the National Plan for Information Systems Protection acknowledged the challenges of implementing a coordinated R&D program. 
For example, the plan stated that coordinating federal R&D with ongoing private sector programs would be complicated by industry’s desire to guard proprietary programs and trade secrets. Specifically, the plan noted that it was difficult to identify all relevant ongoing R&D programs and that some of them overlapped. In a June 2010 report on research and development, we concluded that despite the continued focus on coordination between federal agencies and the private sector, R&D initiatives were hindered by limited sharing of detailed information about ongoing research. According to federal and private experts we consulted for the 2010 report, key factors existed that reduced the private sector’s and government’s willingness to share information and trust each other with regard to researching and developing new cybersecurity technologies. Specifically, private sector officials stated that they were often unwilling to share details of their R&D with the government because they wanted to protect their intellectual property. On the government side, officials were concerned that the private sector was too focused on making a profit and may not necessarily conduct R&D in areas that require the most attention. Additionally, at the time of our report, government and private sector officials indicated that the government did not have a process in place to communicate the results of completed federal R&D projects. The private and public sectors had shared some cybersecurity R&D information, but such information sharing generally occurred only on a project-by-project basis. For example, we reported that the National Science Foundation’s Industry University Cooperative Research Center initiative established centers to conduct research that is of interest to both industry and academia, and DOD’s Small Business Innovation Research program funded R&D at small technology companies. 
However, according to federal and private sector experts we consulted at that time, widespread and ongoing information sharing generally had not occurred. Further, the 2010 report also stated that no complete and up-to-date repository existed to track all cybersecurity R&D information and associated funding as required by law. At that time, an OSTP official indicated that it was difficult to develop and enforce policies for identifying specific funding as R&D, and that the level of detail to be disclosed was also a factor because national security must be protected. To help facilitate information sharing about ongoing and planned R&D projects, we recommended that OSTP, in conjunction with the Cybersecurity Coordinator, direct NITRD to (1) establish a mechanism, consistent with existing law, to keep track of all ongoing and completed federal cybersecurity R&D projects and associated funding; and (2) utilize the newly established tracking mechanism to develop an ongoing process to make federal R&D information available to federal agencies and the private sector. OSTP concurred with our recommendations. Subsequently, in September 2012, we reported that OMB had not fully established the repository for providing information on R&D funded by the federal government. We found that only 11 of the 24 major agencies in our study reported providing research information to http://www.Science.gov. Moreover, 2 agencies in our study reported not being aware of any R&D repository. OMB officials pointed to an R&D dashboard website being developed by OSTP that was intended to meet the requirement for an R&D repository. However, this website provided information on federal investments in research and development for only 2 agencies. Further, according to OMB, a timeline had not yet been developed for when all agencies were to provide information for the R&D dashboard, and guidance had not been issued for agencies to upload their information to the website. 
We continue to believe that implementing our recommendations to OMB to issue guidance on reporting cybersecurity R&D activities and to OSTP to establish a mechanism to track ongoing and completed federal cybersecurity R&D projects is important for addressing challenges associated with effectively promoting cybersecurity R&D in the federal government. Until our recommendations are addressed, these challenges are likely to persist. The Federal Government Continues to Face International Cybersecurity Challenges Recent intrusions on U.S. corporations and federal agencies by attackers in foreign countries highlight the threats posed by the worldwide connection of our networks and the need to adequately address the global security and governance of cyberspace. The global interconnectivity provided by the Internet allows cyber attackers to easily cross national borders, access vast numbers of victims at the same time, and easily maintain anonymity. Governance over Internet activities is complicated because Internet users may be able to retrieve or post information or perform an activity which is illegal where they are physically located, but not illegal in the country where the computer they are accessing is located. A number of agencies have responsibilities for, and are involved in, international cyberspace security and governance efforts. Specifically, the Departments of Commerce, Defense, Homeland Security, Justice, and State, among others, are involved in efforts to develop international standards, formulate cyber-defense policy, facilitate overseas investigations and law enforcement, and represent U.S. interests in international forums. Agencies also participate in international organizations and collaborative efforts to influence international cyberspace security and governance, including engaging in bilateral and multilateral relationships with foreign countries, providing personnel to foreign agencies, and coordinating U.S. policy among government agencies. 
As threats to cyberspace have persisted and grown and cyberspace has expanded globally, the federal government has developed policies, strategies, and initiatives that recognize the importance of addressing cybersecurity on a global basis. While the 2000 National Plan for Information Systems Protection focused on domestic efforts to protect the nation’s cyber critical infrastructure, it described U.S. law enforcement collaboration with law enforcement counterparts from other nations to enhance international cooperation and develop a common approach to criminalizing intrusions and attacks on information networks and systems. In addition, the plan noted that national security agencies needed programs regarding permissible roles for national security agency involvement in foreign activities. The 2003 National Strategy to Secure Cyberspace went further by establishing international cyberspace security cooperation as a key part of one of its five national priorities. The strategy stated that securing global cyberspace required international cooperation to raise awareness, share information, promote security standards, and investigate and prosecute cybercrime. The strategy identified five key initiatives, led by the Department of State, to strengthen international cooperation, including working through international organizations and with industry to facilitate and promote a global “culture of security”; developing secure networks; promoting North American cyberspace security; fostering the establishment of national and international watch-and- warning networks to detect and prevent cyber attacks as they emerge; and encouraging other nations to accede to the Council of Europe Convention on Cybercrime, or to ensure that their laws and procedures were at least as comprehensive. 
To fulfill the Department of State’s lead responsibility, a number of the department’s entities were given roles, including having the Bureau of Intelligence and Research, Office of Cyber Affairs, coordinate outreach on cybersecurity issues and the Bureau of International Narcotics and Law Enforcement Affairs coordinate policy and programs to combat cybercrime. International cooperation is also identified as a priority for critical infrastructure in HSPD-7, which directed DHS to, among other things, develop a strategy for working with international organizations on critical infrastructure protection. The directive also designated State, in conjunction with Commerce, DOD, DHS, Justice, Treasury, and other appropriate agencies, to work with foreign countries and international organizations to strengthen the protection of U.S. critical infrastructure. The requirements set forth in HSPD-7 were addressed with the creation of the NIPP in 2006, and its update in 2009. The NIPP includes a section on international cooperation to protect critical infrastructure that focuses on, among other things, international cybersecurity and cooperation with international partners through activities such as national cyber exercises. In contrast to the 2003 strategy, the 2008 CNCI did not include international cooperation as one of its 12 component projects. While none of the projects directly addressed international cooperation, one initiative that focused on deterring interference and attacks in cyberspace included a goal of better articulating roles for private sector and international partners. The initiative also recognized the need to develop an approach to better manage the federal government’s global supply chain. The 2009 White House Cyberspace Policy Review adhered more closely to the 2003 strategy, identifying international coordination as part of one of its five key topic areas. 
The review called for the development of an international strategy to foster cooperation on issues such as acceptable legal norms regarding territorial jurisdiction, sovereign responsibility, and the use of force. The review recommended, among other things, that the United States accelerate efforts to help other countries build legal frameworks and capacity to fight cybercrime and continue to promote cybersecurity practices and standards. It also recommended that the Cybersecurity Coordinator work with federal agencies to strengthen and integrate interagency processes to formulate and coordinate international cybersecurity-related positions and to enhance the identification, tracking, and prioritization of international venues, negotiations, and discussions where cybersecurity-related policy-making was taking place. In addition, the review recommended that the federal government work with the private sector to develop a proactive engagement plan for use with international standards bodies, including looking at the policies that already exist and refining them to make sure the full range of cybersecurity interests was taken into account. DOD and DHS have also identified international coordination as a key aspect of their recently released cyberspace strategies. In July 2011, the DOD Strategy for Operating in Cyberspace identified five strategic initiatives, including building relationships with U.S. allies and international partners to strengthen collective cybersecurity. The strategy states that DOD will assist U.S. efforts to develop and promote international cyberspace norms, cooperate with allies to defend U.S. and allied interest in cyberspace, and expand its international cyber cooperation to a wider pool of allied and partner militaries to develop collective self-defense and increase collective deterrence. The November 2011 DHS Blueprint for a Secure Cyber Future makes similar pledges. 
One of the blueprint’s two overarching focus areas—protecting critical information infrastructure—includes international partnerships as a necessary element for success, and many of the capabilities identified within the strategy’s four goals for protecting critical information infrastructure are to be developed and implemented in collaboration with international partners. For example, DHS commits to increasing its capacity to deter, investigate, and prosecute crimes committed through the use of cyberspace by, among other things, developing productive international relationships to safeguard and share evidence to bring cyber criminals to justice. DHS also identified multiple capabilities related to information dissemination to international partners in areas such as adverse incidents and proven practices to decrease the spread and impact of hazards. While progress has been made in identifying the importance of international cooperation and assigning roles and responsibilities related to it, the government’s approach for addressing international aspects of cybersecurity has not yet been completely defined and implemented. We have identified significant challenges within the federal government’s international cybersecurity efforts. In our March 2010 report focused on the CNCI, we observed that the federal government was facing strategic challenges in areas that are not the subject of existing projects within CNCI but that remained key to achieving the initiative’s overall goal of securing federal information systems. One of the strategic challenges we identified was coordinating with international entities. We found that there was no formal strategy for coordinating outreach to international partners for the purposes of standards setting, law enforcement, and information sharing. 
Accordingly, we recommended that the Director of OMB establish a coordinated approach for the federal government in conducting international outreach to address cybersecurity issues strategically. In addition, in a July 2010 report on global cybersecurity and governance (GAO, Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance, GAO-10-606 (Washington, D.C.: July 2, 2010)), we identified a number of challenges: the White House Cybersecurity Coordinator’s authority and capacity to effectively coordinate and forge a coherent national approach to cybersecurity, which were needed to lead near-term international goals and objectives from the President’s Cyberspace Policy Review, were still under development; the U.S. government had not documented a clear vision of how the international efforts of federal entities, taken together, supported overarching national goals; federal agencies had not demonstrated an ability to coordinate their international activities and project clear policies on a consistent basis; some countries had attempted to mandate compliance with their own cybersecurity standards in a manner that risked discriminating against U.S. companies or posed trade barriers to foreign companies that sought to market and sell their products to other countries; the federal government lacked a coherent approach toward participating in a broader international framework for responding to cyber incidents with global impact; the differences among laws of nations could impede U.S. and foreign efforts to enforce domestic criminal and civil laws related to cyberspace; and some federal agencies reported that they participated in efforts that may contribute to developing international norms, but these agencies cited challenges, such as the fact that this was a complicated and long-term process and that the absence of agreed-upon definitions for cyberspace-related terminology could impede efforts to develop international norms. 
We concluded that until these challenges were addressed, the United States would be at a disadvantage in promoting its national interests in the realm of cyberspace. Accordingly, we recommended that the Cybersecurity Coordinator, in collaboration with others, take five actions to address these challenges, which included the following: Develop, with the Departments of Commerce, Defense, Homeland Security, Justice, and State and other relevant federal and nonfederal entities, a comprehensive U.S. global cyberspace strategy that articulates overarching goals, subordinate objectives, specific activities, performance metrics, and reasonable time frames to achieve results; addresses technical standards and policies while taking into consideration U.S. trade; and identifies methods for addressing the enforcement of U.S. civil and criminal law. Enhance the interagency coordination mechanisms by ensuring relevant federal entities are engaged and that their efforts, taken together, support U.S. interests in a coherent and consistent fashion. Determine, in conjunction with the Departments of Defense and State and other relevant federal entities, which, if any, cyberspace norms should be defined to support U.S. interests in cyberspace and methods for fostering such norms internationally. Although the White House developed and released the International Strategy for Cyberspace in May 2011, which addresses several of our recommendations, it does not include all the elements we recommended. To its credit, the strategy included goals for establishing cyberspace norms that should be accepted internationally and methods for fostering such norms internationally, such as developing cybercrime norms in appropriate forums and incorporating existing efforts. However, the strategy does not fully specify outcome-oriented performance metrics or time frames for completing activities. 
For example, the strategy discusses multiple goals and objectives, but does not provide performance metrics to help ensure accountability and gauge results. We continue to believe that the international strategy should specify outcome-oriented performance metrics and time frames for completing activities. Including outcome-oriented performance metrics and time frames for completion would help to ensure that agencies with international responsibilities are taking appropriate actions to implement the strategy and are making progress in improving international cooperation. Until our recommendations are addressed, challenges in defining and implementing an approach for addressing international aspects of cybersecurity are likely to persist. Conclusions Given the range and sophistication of the threats and potential exploits that confront government agencies and the nation’s cyber critical infrastructure, it is critical that the government adopt a comprehensive strategic approach to mitigating the risks of successful cybersecurity attacks. Such an approach would not only define priority problem areas but also set a roadmap for allocating and managing appropriate resources, making a convincing business case to justify expenses, identifying organizations’ roles and responsibilities, linking goals and priorities, and holding participants accountable for achieving results. However, the federal government’s efforts at defining a strategy for cybersecurity have often not fully addressed these key elements, lacking, for example, milestones and performance measures, identified costs and sources of funding, and specific roles and responsibilities. As a result, the government’s cybersecurity strategy remains poorly articulated and incomplete. In fact, no integrated, overarching strategy exists that articulates priority actions, assigns responsibilities for performing them, and sets time frames for their completion. 
In the absence of an integrated strategy, the documents that comprise the government’s current strategic approach are of limited value as a tool for mobilizing actions to mitigate the most serious threats facing the nation. Previous GAO and inspector general reviews as well as federal CIOs and experts have made recommendations to address challenges faced by federal agencies and the private sector in effectively implementing a comprehensive approach to cybersecurity and reducing the risk of successful cybersecurity attacks. Many of these recommendations have not yet been fully addressed, leaving much room for more progress in addressing cybersecurity challenges. In many cases, the causes of these challenges are closely related to the key elements that are missing from the government’s cybersecurity strategy. For example, the persistence of shortcomings in agency cybersecurity risk management processes indicates that agencies have not been held accountable for effectively implementing such processes and that oversight mechanisms have not been clear. It is just such oversight and accountability that is poorly defined in cybersecurity strategy documents. Clarifying oversight responsibilities is a topic that could be effectively addressed through legislation. An overarching strategy that better addresses key desirable characteristics could establish an improved framework to implement national cybersecurity policy and ensure that stated goals and priorities are actively pursued by government agencies and better supported by key private sector entities. To be successful, such a strategy would include a clearer process for OMB oversight of agency risk management processes and a roadmap for improving the cybersecurity challenge areas where previous concerns have not been fully addressed. The development and implementation of such a strategy would likely lead to significant progress in furthering strategic goals and lessening persistent weaknesses. 
Recommendations for Executive Action

In order to institute a more effective framework for implementing cybersecurity activities, and to help ensure such activities will lead to progress in cybersecurity, we recommend that the White House Cybersecurity Coordinator in the Executive Office of the President develop an overarching federal cybersecurity strategy that includes all key elements of the desirable characteristics of a national strategy, including: milestones and performance measures for major activities; the cost, sources, and justification for resources needed to accomplish stated priorities; specific roles and responsibilities of federal organizations related to the strategy's stated priorities; and guidance, where appropriate, regarding how this strategy relates to priorities, goals, and objectives stated in other national strategy documents. This strategy should also better ensure that federal departments and agencies are held accountable for making significant improvements in cybersecurity challenge areas, including designing and implementing risk-based programs; detecting, responding to, and mitigating cyber incidents; promoting education, awareness, and workforce planning; promoting R&D; and addressing international cybersecurity challenges. To address these issues, the strategy should (1) clarify how OMB will oversee agency implementation of requirements for effective risk management processes and (2) establish a roadmap for making significant improvements in cybersecurity challenge areas where previous recommendations have not been fully addressed.

Matter for Congressional Consideration

To address ambiguities in roles and responsibilities that have resulted from recent executive branch actions, Congress should consider legislation to better define roles and responsibilities for implementing and overseeing federal information security programs and for protecting the nation's critical cyber assets.
Agency Comments and Our Evaluation

We provided a draft of this report to the Executive Office of the President, OMB, DHS, DOD, and Commerce. We received comments from the General Counsel of OSTP, who provided comments from both the National Security Staff and OSTP in the Executive Office of the President; the Deputy General Counsel of OMB; the Director of the Departmental GAO-OIG Liaison Office at DHS; and the Special Assistant for Cybersecurity in the Office of the Secretary of Defense. The Executive Office of the President and OMB both commented on our draft recommendations, and the Executive Office of the President concurred with our matter for congressional consideration. The audit liaison officer in the Director's Office of the National Institute of Standards and Technology within the Department of Commerce responded that the department did not have any comments. A summary of the comments we received follows. The General Counsel of OSTP in the Executive Office of the President provided comments via e-mail in which the National Security Staff stated that the administration agrees that more needs to be done to develop a coherent and comprehensive strategy on cybersecurity and noted that a number of strategies and policies had been issued to address specific cybersecurity topics. According to the National Security Staff, remaining flexible and focusing on achieving measurable improvements in cybersecurity would be more beneficial than developing "yet another strategy on top of existing strategies." We agree that flexibility and a focus on achieving measurable improvements in cybersecurity are critically important and that simply preparing another document, if not integrated with previous documents, would not be helpful.
The focus of our recommendation is to develop an overarching strategy that integrates the numerous strategy documents, establishes milestones and performance measures, and better ensures that federal departments and agencies are held accountable for making significant improvements in cybersecurity challenge areas. We do not believe the current approach accomplishes this. The National Security Staff also agreed with our matter for congressional consideration and with the view that comprehensive cybersecurity legislation that addresses information sharing and baseline standards for critical infrastructure, among other things, is necessary to mitigate the threats posed in cyberspace. The General Counsel also provided technical comments from OSTP, which we have incorporated into the final report as appropriate. In comments provided via e-mail, the Deputy General Counsel at OMB responded to our draft recommendation, stating that OMB's responsibility under FISMA is to "oversee" agency implementation of requirements for effective risk management processes. We agree that FISMA gives OMB the responsibility of overseeing agency implementation of cybersecurity risk management requirements and have changed the wording of our recommendation to reflect OMB's role as specified by the act. The Deputy General Counsel also expressed concern about our description of actions OMB took in 2010 with regard to roles and responsibilities under FISMA. According to the Deputy General Counsel, OMB did not delegate or transfer any statutory authorities to DHS. Instead, DHS exercised its own authorities in taking on additional responsibilities. We disagree. FISMA specifies in detail a number of oversight responsibilities that it assigns to OMB. It was several of these specific responsibilities that in 2010 OMB announced DHS would be assuming. We therefore conclude that OMB transferred these responsibilities to DHS.
More importantly, with these responsibilities now divided between the two organizations, it remains unclear how OMB and DHS are to share oversight of individual departments and agencies. The Director of the Departmental GAO-OIG Liaison Office at DHS provided written comments that discussed specific actions the department has taken or plans to take to address challenges we identified, such as sharing information, providing analysis and warning, and expanding the cybersecurity workforce. He added that the department's Blueprint for a Secure Cyber Future aligns with the various national strategies we discuss in this report and addresses the challenge areas we identified. In addition, the audit liaison officer in the Office of the Chief Financial Officer provided technical comments via e-mail, which we have incorporated into the final report as appropriate. DHS's written comments are reprinted in appendix III. The Special Assistant for Cybersecurity in the Office of the Secretary of Defense provided general observations about the draft report as well as technical comments via e-mail. For example, the comments indicated that any update to the national cybersecurity strategy should address ways to make cyberspace more defensible. The Special Assistant for Cybersecurity also acknowledged inconsistencies in departmental guidance but said that DOD officials were not confused about their responsibilities and that future updates to the departmental guidance would clarify cyber policy responsibilities. We agree that clarification of DOD organizations' roles and responsibilities would enhance the department's ability to support DHS during significant domestic cyber incidents.
In addition, the comments indicated that cybersecurity strategies should be evaluated in terms of to whom the strategy is addressed (i.e., the federal government or the private sector), the rapidity of change in cybersecurity issues, and the environment for which the strategy is written (i.e., federal civilian government, the military, or the private sector). We agree that these are important factors to consider in developing comprehensive cybersecurity strategies and believe our report reflects these factors. We also believe that the issues we identified remain of critical importance in developing and implementing an effective national cybersecurity strategy. Finally, the comments identified actions DOD has taken or is taking to address challenges related to sharing information, promoting education, and promoting R&D. We are sending copies of this report to the Special Assistant to the President and Cybersecurity Coordinator, the Acting Director of the Office of Management and Budget, the Director of the Office of Science and Technology Policy, the Secretary of the Department of Homeland Security, the Secretary of Defense, the Acting Secretary of the Department of Commerce, and other interested parties. The report will also be available on the GAO website at no charge at http://www.gao.gov. For any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499, or by e-mail at [email protected] or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) determine the extent to which the national cybersecurity strategy includes key desirable characteristics of effective strategies, and (2) identify challenges faced by the federal government in addressing a strategic approach to cybersecurity, including: (a) establishing a management structure to assess cybersecurity risks, developing and implementing appropriate controls, and measuring results; (b) detecting, responding to, and mitigating the effects of attacks on federal civilian and critical infrastructure; (c) enhancing awareness and promoting education; (d) promoting research and development; and (e) developing partnerships to leverage resources internationally. To determine the extent to which the national cybersecurity strategy includes key desirable characteristics of effective strategies, we assessed the current national cybersecurity strategy and other government-wide strategies against the desirable characteristics of a national strategy. Our assessment determined the extent to which all of the elements of each desirable characteristic were addressed by the strategies. These desirable characteristics were developed by GAO in 2004. At that time, we identified these characteristics by consulting statutory requirements pertaining to certain strategies we reviewed, as well as legislative and executive branch guidance for other national strategies. In addition, we studied the Government Performance and Results Act of 1993 (GPRA), general literature on strategic planning and performance, and guidance from the Office of Management and Budget (OMB) on the President's Management Agenda. We also gathered published recommendations made by national commissions chartered by Congress; past GAO work; and various research organizations that have commented on national strategies.
To determine and assess challenges faced by the federal government in addressing a strategic approach to cybersecurity, we interviewed officials with cybersecurity-related responsibilities at agencies with key roles in protecting federal systems and the nation's cyber infrastructure. These agencies were: the Department of Homeland Security (DHS) (including officials from the Office of Cybersecurity and Communications, the National Cybersecurity and Communications Integration Center, the United States Computer Emergency Readiness Team (US-CERT), the Office of Program Analysis and Evaluation, the Federal Network Security Branch, and the Critical Infrastructure Cyber Protection and Awareness Branch); the Department of Defense (DOD) (including officials from the National Security Agency and the Defense Information Systems Agency); the Executive Office of the President (including officials from OMB, the National Coordination Office, the Office of Science and Technology Policy, and the National Security Staff); and the National Institute of Standards and Technology (NIST). We also obtained the views of private sector cybersecurity and information management experts and federal chief information officers on the key issues and challenges of the current federal strategy for cybersecurity by convening panel discussions and administering surveys. The first of our two panels consisted of information management experts who are members of GAO's Executive Committee for Information Management and Technology; this discussion documented the key issues and challenges they identified. We further surveyed chief information officers from the 24 agencies identified in the Chief Financial Officers Act to determine their key issues and challenges. Eleven of the 24 chief information officers responded to our survey (see app. II). Our second panel and survey involved a selection of private sector cybersecurity experts.
To identify private sector cybersecurity experts, we first obtained a universe of experts by reviewing membership and advisor roles for pertinent cybersecurity boards and commissions (e.g., the Information Security and Privacy Advisory Board and the National Academies' Computer Science and Telecommunications Board), key associations that are leading thinkers on cybersecurity (e.g., the Internet Security Alliance), and witnesses from cybersecurity-related congressional hearings. We then made the initial selections by identifying those individuals or organizations that were listed in multiple independent sources. We also selected the last two White House cybersecurity advisors. Lastly, we reviewed agency inspector general and GAO reports that previously identified challenges related to government-wide cybersecurity strategies and initiatives, and met with staff from the DHS Office of Inspector General to determine the current status of related recommendations in their prior reports. We then assessed progress in overcoming the inspector general- and GAO-identified challenges through interviews with agency officials and reviews of agency documentation and publicly available data. We performed our work on the initiative of the U.S. Comptroller General to evaluate the federal government's cybersecurity strategies and understand the status of federal cybersecurity efforts to address challenges in establishing a strategic cybersecurity approach. We conducted this performance audit from April 2012 to February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: List of Panel and Survey Participants

This appendix lists the names and affiliations of the cybersecurity and information management professionals who participated in the cybersecurity expert panel discussion and the Executive Committee for Information Management and Technology panel discussion, as well as the respondents to our surveys of cybersecurity experts and agency CIOs.

Cybersecurity Expert Panel Discussion Attendees

Executive Committee for Information Management and Technology Panel Discussion Attendees

The names and affiliations of the experts who participated in the panel discussion held September 12, 2012, in Washington, D.C., are as follows:

Lynda Applegate, Harvard Business School
Hank Conrad, CounterPoint Corporation
Mary Culnan, Bentley University
John Flynn, Principal, FK&A Inc.
Peter Neumann, SRI International Computer Science Laboratory
Theresa Pardo, Director, Center for Technology in Government, University at Albany, New York
Douglas Robinson, Executive Director, National Association of State Chief Information Officers (NASCIO)
Paul Rummell, Management Consultant
Dugan Petty, State of Oregon and NASCIO
Eugene H. Spafford, CERIAS, Purdue University
Nancy Stewart, Wal-Mart (retired)

Expert and CIO Survey Participants

Expert Survey Participants

Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individuals named above, key contributions to this report were made by John de Ferrari (Assistant Director), Richard B. Hung (Assistant Director), Melina Asencio, Tina Cheng, Rosanna Guerrero, Nicole Jarvis, Lee McCracken, David F. Plocher, Dana Pon, Kelly Rubin, Andrew Stavisky, and Jeffrey Woodward.

Related GAO Products

Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. Washington, D.C.: September 18, 2012.
Medical Devices: FDA Should Expand Its Consideration of Information Security for Certain Types of Devices. GAO-12-816. Washington, D.C.: August 31, 2012.
Bureau of the Public Debt: Areas for Improvement in Information Systems Controls. GAO-12-616. Washington, D.C.: May 24, 2012.
Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. Washington, D.C.: July 17, 2012.
Electronic Warfare: DOD Actions Needed to Strengthen Management and Oversight. GAO-12-479. Washington, D.C.: July 9, 2012.
Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012.
Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012.
IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012.
Information Security: IRS Needs to Further Enhance Internal Control over Financial Reporting and Taxpayer Data. GAO-12-393. Washington, D.C.: March 16, 2012.
Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T. Washington, D.C.: February 28, 2012.
Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011.
Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011.
Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011.
Information Security: Weaknesses Continue Amid New Federal Efforts to Implement Requirements. GAO-12-137. Washington, D.C.: October 3, 2011.
Personal ID Verification: Agencies Should Set a Higher Priority on Using the Capabilities of Standardized Identification Cards. GAO-11-751. Washington, D.C.: September 20, 2011.
Information Security: FDIC Has Made Progress, but Further Actions Are Needed to Protect Financial Data. GAO-11-708.
Washington, D.C.: August 12, 2011.
Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011.
Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011.
Information Security: State Has Taken Steps to Implement a Continuous Monitoring Application, but Key Challenges Remain. GAO-11-149. Washington, D.C.: July 8, 2011.
Social Media: Federal Agencies Need Policies and Procedures for Managing and Protecting Information They Access and Disseminate. GAO-11-605. Washington, D.C.: June 28, 2011.
Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure and Federal Information Systems. GAO-11-463T. Washington, D.C.: March 16, 2011.
Information Security: IRS Needs to Enhance Internal Control Over Financial Reporting and Taxpayer Data. GAO-11-308. Washington, D.C.: March 15, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to Be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011.
Information Security: National Nuclear Security Administration Needs to Improve Contingency Planning for Its Classified Supercomputing Operations. GAO-11-67. Washington, D.C.: December 9, 2010.
Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010.
Information Security: Federal Deposit Insurance Corporation Needs to Mitigate Control Weaknesses. GAO-11-29. Washington, D.C.: November 30, 2010.
Information Security: National Archives and Records Administration Needs to Implement Key Program Elements and Controls. GAO-11-20. Washington, D.C.: October 21, 2010.
Cyberspace Policy: Executive Branch Is Making Progress Implementing 2009 Policy Review Recommendations, but Sustained Leadership Is Needed. GAO-11-24. Washington, D.C.: October 6, 2010.
Information Security: Progress Made on Harmonizing Policies and Guidance for National Security and Non-National Security Systems. GAO-10-916. Washington, D.C.: September 15, 2010.
Information Management: Challenges in Federal Agencies' Use of Web 2.0 Technologies. GAO-10-872T. Washington, D.C.: July 22, 2010.
Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010.
Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.
Information Security: Governmentwide Guidance Needed to Assist Agencies in Implementing Cloud Computing. GAO-10-855T. Washington, D.C.: July 1, 2010.
Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010.
Cybersecurity: Key Challenges Need to Be Addressed to Improve Research and Development. GAO-10-466. Washington, D.C.: June 3, 2010.
Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010.
Information Security: Opportunities Exist for the Federal Housing Finance Agency to Improve Control. GAO-10-528. Washington, D.C.: April 30, 2010.
Information Security: Concerted Response Needed to Resolve Persistent Weaknesses. GAO-10-536T. Washington, D.C.: March 24, 2010.
Information Security: IRS Needs to Continue to Address Significant Weaknesses. GAO-10-355. Washington, D.C.: March 19, 2010.
Information Security: Concerted Effort Needed to Consolidate and Secure Internet Connections at Federal Agencies. GAO-10-237. Washington, D.C.: March 12, 2010.
Information Security: Agencies Need to Implement Federal Desktop Core Configuration Requirements. GAO-10-202. Washington, D.C.: March 12, 2010.
Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010.
Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010.
Department of Veterans Affairs' Implementation of Information Security Education Assistance Program. GAO-10-170R. Washington, D.C.: December 18, 2009.
Cybersecurity: Continued Efforts Are Needed to Protect Information Systems from Evolving Threats. GAO-10-230T. Washington, D.C.: November 17, 2009.
Information Security: Concerted Effort Needed to Improve Federal Performance Measures. GAO-10-159T. Washington, D.C.: October 29, 2009.
Critical Infrastructure Protection: OMB Leadership Needed to Strengthen Agency Planning Efforts to Protect Federal Cyber Assets. GAO-10-148. Washington, D.C.: October 15, 2009.
Information Security: NASA Needs to Remedy Vulnerabilities in Key Networks. GAO-10-4. Washington, D.C.: October 15, 2009.
Information Security: Actions Needed to Better Manage, Protect, and Sustain Improvements to Los Alamos National Laboratory's Classified Computer Network. GAO-10-28. Washington, D.C.: October 14, 2009.
Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009.
Information Security: Federal Information Security Issues. GAO-09-817R. Washington, D.C.: June 30, 2009.
Information Security: Concerted Effort Needed to Improve Federal Performance Measures. GAO-09-617. Washington, D.C.: September 14, 2009.
Information Security: Agencies Continue to Report Progress, but Need to Mitigate Persistent Weaknesses. GAO-09-546. Washington, D.C.: July 17, 2009.
National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation's Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.
Information Technology: Federal Laws, Regulations, and Mandatory Standards to Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008.
Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008.
Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008.
Privacy: Lessons Learned about Data Breach Notification. GAO-07-657. Washington, D.C.: April 30, 2007.
To address these objectives, GAO analyzed previous reports and updated information obtained from officials at federal agencies with key cybersecurity responsibilities. GAO also obtained the views of experts in information technology management and cybersecurity and conducted a survey of chief information officers at major federal agencies. Threats to systems supporting critical infrastructure and federal operations are evolving and growing. Federal agencies have reported increasing numbers of cybersecurity incidents that have placed sensitive information at risk, with potentially serious impacts on federal and military operations; critical infrastructure; and the confidentiality, integrity, and availability of sensitive government, private sector, and personal information. The increasing risks are demonstrated by the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. The number of incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team has increased 782 percent from 2006 to 2012. GAO and inspector general reports have identified a number of key challenge areas in the federal government's approach to cybersecurity, including those related to protecting the nation's critical infrastructure. While actions have been taken to address aspects of these, issues remain in each of these challenge areas, including: Designing and implementing risk-based federal and critical infrastructure programs. Shortcomings persist in assessing risks, developing and implementing controls, and monitoring results in both the federal government and critical infrastructure. For example, in the federal arena, 8 of 22 major agencies reported compliance with risk management requirements under the Federal Information Security Management Act (FISMA), down from 13 out of 24 the year before.
In the critical infrastructure arena, the Department of Homeland Security (DHS) and the other sector-specific agencies have not yet identified cybersecurity guidance applicable to or widely used in each of the critical sectors. GAO has continued to make numerous recommendations to address weaknesses in risk management processes at individual federal agencies and to further efforts by sector-specific agencies to enhance critical infrastructure protection. Detecting, responding to, and mitigating cyber incidents. DHS has made incremental progress in coordinating the federal response to cyber incidents, but challenges remain in sharing information among federal agencies and key private sector entities, including critical infrastructure owners, as well as in developing a timely analysis and warning capability. Difficulties in sharing and accessing classified information and the lack of a centralized information-sharing system continue to hinder progress. According to DHS, a secure environment for sharing cybersecurity information, at all classification levels, is not expected to be fully operational until fiscal year 2018. Further, although DHS has taken steps to establish timely analysis and warning, GAO previously reported that the department had yet to establish a predictive analysis capability and recommended that DHS expand capabilities to investigate incidents. According to the department, tools for predictive analysis are to be tested in fiscal year 2013. Promoting education, awareness, and workforce planning. In November 2011, GAO reported that agencies leading strategic planning efforts for education and awareness, including Commerce, the Office of Management and Budget (OMB), the Office of Personnel Management, and DHS, had not developed details on how they were going to achieve planned outcomes and that the specific tasks and responsibilities were unclear.
GAO recommended, among other things, that the key federal agencies involved in the initiative collaborate to clarify responsibilities and processes for planning and monitoring their activities. GAO also reported that only 2 of 8 agencies it reviewed developed cyber workforce plans and only 3 of the 8 agencies had a department-wide training program for their cybersecurity workforce. GAO recommended that these agencies take a number of steps to improve agency and government-wide cybersecurity workforce efforts. The agencies generally agreed with the recommendations. Promoting research and development (R&D). The goal of supporting targeted cyber R&D has been impeded by implementation challenges among federal agencies. In June 2010, GAO reported that R&D initiatives were hindered by limited sharing of detailed information about ongoing research, including the lack of a repository to track R&D projects and funding, as required by law. GAO recommended that a mechanism be established for tracking ongoing and completed federal cybersecurity R&D projects and associated funding, and that this mechanism be utilized to develop an ongoing process to make federal R&D information available to federal agencies and the private sector. However, as of September 2012, this mechanism had not yet been fully developed. Addressing international cybersecurity challenges. While progress has been made in identifying the importance of international cooperation and assigning roles and responsibilities related to it, the government's approach to addressing international aspects of cybersecurity has not yet been completely defined and implemented. GAO recommended in July 2010 that the government develop an international strategy that specified outcome-oriented performance metrics and timeframes for completing activities. While an international strategy for cyberspace has been developed, it does not fully specify outcome-oriented performance metrics or timeframes for completing activities.
The government has issued a variety of strategy-related documents over the last decade, many of which address aspects of the above challenge areas. The documents address priorities for enhancing cybersecurity within the federal government as well as for encouraging improvements in the cybersecurity of critical infrastructure within the private sector. However, no overarching cybersecurity strategy has been developed that articulates priority actions, assigns responsibilities for performing them, and sets timeframes for their completion. In 2004, GAO developed a set of desirable characteristics that can enhance the usefulness of national strategies in allocating resources, defining policies, and helping to ensure accountability. Existing cybersecurity strategy documents have included selected elements of these desirable characteristics, such as setting goals and subordinate objectives, but have generally lacked other key elements. The missing elements include the following.

Milestones and performance measures. The government's strategy documents include few milestones or performance measures, making it difficult to track progress in accomplishing stated goals and objectives. The lack of milestones and performance measures at the strategic level is mirrored in similar shortcomings within key government programs that are part of the government-wide strategy. The DHS inspector general, for example, recommended in 2011 that DHS develop and implement performance measures to be used to track and evaluate the effectiveness of actions defined in its strategic implementation plan. As of January 2012, DHS had not yet developed the performance measures but planned to do so.

Cost and resources. While past strategy documents linked certain activities to budget submissions, none have fully addressed cost and resources, including justifying the required investment, which is critical to gaining support for implementation.
In addition, none provided full assessments of anticipated costs and how resources might be allocated to address them.

Roles and responsibilities. Cybersecurity strategy documents have assigned high-level roles and responsibilities but have left important details unclear. Several GAO reports have likewise demonstrated that the roles and responsibilities of key agencies charged with protecting the nation's cyber assets are inadequately defined. For example, the chartering directives for several offices within the Department of Defense assign overlapping roles and responsibilities for preparing for and responding to domestic cyber incidents. In an October 2012 report, GAO recommended that the department update its guidance on preparing for and responding to domestic cyber incidents to include a description of its roles and responsibilities. In addition, it is unclear how OMB and DHS are to share oversight of individual departments and agencies. While the law gives OMB responsibility for oversight of federal government information security, OMB transferred several of its oversight responsibilities to DHS. Both DHS and OMB have issued annual FISMA reporting instructions to agencies, which could create confusion among agency officials because the instructions vary in content. Clarifying oversight responsibilities is a topic that could be effectively addressed through legislation.

Linkage with other key strategy documents. Existing cybersecurity strategy documents vary in terms of priorities and structure, and do not specify how they link to or supersede other documents, nor do they describe how they fit into an overarching national cybersecurity strategy. For example, in 2012, the administration determined that trusted Internet connections, continuous monitoring, and strong authentication should be cross-agency priorities, but no explanation was given as to how these three relate to priorities previously established in other strategy documents.
The many continuing cybersecurity challenges faced by the government highlight the need for a clearly defined oversight process to ensure agencies are held accountable for implementing effective information security programs. Further, until an overarching national cybersecurity strategy is developed that addresses all key elements of desirable characteristics, overall progress in achieving the government's objectives is likely to remain limited. |
Background

PRWORA built upon and expanded state-level welfare reforms to transform federal welfare policy for needy families with children. PRWORA replaced the individual entitlement to benefits under the 61-year-old Aid to Families with Dependent Children (AFDC) program with the TANF block grant, which provides family assistance grants to the states, and emphasizes the transitional nature of assistance and the importance of reducing welfare dependence through employment, among other goals. HHS administers the TANF block grant program, which provided grants to states totaling up to $16.5 billion each year through September 2002. To receive its grant, each state must also spend at least a specified amount of its own funds, referred to as state maintenance of effort (MOE) funds.

State Flexibility on TANF Work Requirements and Time Limits

While states have had flexibility to design programs that meet their own goals and needs, they also have been required to implement federal work requirements and time limits designed to promote employment among those able to work. First, TANF established stronger work requirements for those receiving aid than did the AFDC program. Specifically, to avoid financial penalties, states had to meet federal participation rate requirements, under which states were to ensure that an increasing percentage of adult recipients were participating in federally defined activities each year through fiscal year 2002. Second, states have been required to reduce the cash assistance benefit of an adult who did not participate as required by the state, referred to as a sanction, and could opt to terminate cash aid for the entire family. Third, states also have had to enforce a 60-month limit (or less at state option) on the length of time a family may receive federal TANF assistance.
However, the law also provided states considerable flexibility in how they implemented work requirements and time limits, and some states and localities have used this flexibility to exempt recipients with disabilities from these requirements. For example, in our 2002 report on states’ implementation of work requirements and time limits, we noted that states have generally faced greatly reduced federal participation rate requirements. This resulted from the law’s “caseload reduction credit,” which adjusted downward the federally required rate if a state’s caseload declined, which is exactly what occurred in most states—dramatic caseload declines from 1996 through at least mid-2001. In fiscal year 2000, these caseload reduction credits reduced the required rate from 40 percent to 0 in 31 states. These lower participation rate requirements gave states more flexibility in exempting TANF recipients considered hard to employ from meeting work requirements. We found that while almost all states met or exceeded their adjusted required rate in that year, the federal participation rates that states actually achieved before adjustment ranged from about 6 percent to more than 70 percent. Regarding time limits, we found that states generally excluded from time limits families with a parent or caretaker with a disability or caring for a family member with a disability. States could do this by using the 20-percent federal time limit extension established in the law or by using state maintenance of effort funds, as also allowed by the law. Our work also showed that most families had not yet reached their federal or state-imposed cash assistance time limit as of fall 2001. While recipients with impairments may sometimes be exempted from work requirements and time limits, they may be at risk of having their benefits reduced or terminated through sanctions.
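The arithmetic behind the caseload reduction credit is straightforward. The sketch below illustrates it under the simplifying assumption that the credit equals the percentage-point decline in a state's caseload from a base year; the function and variable names are illustrative and not drawn from the statute or regulations.

```python
def adjusted_participation_rate(required_rate, base_caseload, current_caseload):
    """Illustrative caseload reduction credit arithmetic (not the statutory text).

    Assumes the credit equals the percentage-point decline in the caseload
    from the base year, and that the adjusted rate cannot fall below zero.
    """
    decline_pct = max(0.0, (base_caseload - current_caseload) / base_caseload * 100)
    return max(0.0, required_rate - decline_pct)

# A state whose caseload fell by 40 percentage points or more would face an
# adjusted required rate of 0 -- the situation reported for 31 states in
# fiscal year 2000 (the caseload figures here are hypothetical).
print(adjusted_participation_rate(40.0, 100_000, 55_000))  # caseload fell 45 percent
print(adjusted_participation_rate(40.0, 100_000, 90_000))  # caseload fell 10 percent
```

Under this simplified arithmetic, the dramatic caseload declines of the late 1990s translate directly into much lower, often zero, effective participation requirements.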
A study in four urban areas conducted by the Manpower Demonstration Research Corporation (MDRC) found that recipients with a greater number of health problems were more likely to be sanctioned for noncompliance with program requirements than their healthier counterparts. Over 50 percent of former recipients with at least one health problem left welfare due to sanctions, compared with 39 percent of recipients without health problems. Our earlier report on sanctions under the TANF program found that families who left welfare due to sanctions relied on support from family and friends after TANF payments stopped, rather than on income from employment, to a greater extent than families who left the program for other reasons.

The Relationship between TANF and SSI

TANF often serves, as did AFDC, as a temporary stopping point for low-income individuals with physical or mental impairments that may be considered severe enough to make them eligible for the federal SSI program. SSI, administered by the Social Security Administration (SSA), provides cash assistance to low-income individuals who are aged or who are unable to work because of a severe long-term impairment and who do not have sufficient work history to qualify for SSA’s Disability Insurance (DI) program. To qualify for SSI, an applicant’s impairment must be of such severity that the person is not only unable to do the kind of work that he or she engaged in previously, but is also unable to do any other kind of substantial gainful activity that exists in the national economy. In most states, SSI eligibility also entitles individuals to Medicaid benefits. As distinct from TANF, SSI for adults has federally established eligibility requirements and benefit levels and a nationwide disability determination process. Some individuals who apply for TANF may have impairments severe enough to make them eligible to receive SSI.
Even before welfare reform, states had been actively identifying and referring potentially SSI-eligible welfare recipients to SSI. In these cases, individuals may be on TANF while they are waiting for their SSI eligibility to be determined. In recent years, receiving an initial disability determination took an average of about 4 months from the date of SSI application. For claims that are denied and appealed, it may take over a year to reach a final decision. Generally, except for more temporary conditions, TANF recipients who have impairments but are not eligible for SSI or DI may be expected to work, as their impairments have been deemed not severe enough to preclude substantial employment. Title I of the ADA prohibits discrimination against such persons who have impairments but who are nonetheless able to perform the essential functions of the job they seek or hold. Under Title II of the ADA, no qualified individual with a disability shall be excluded from participation in or be denied the benefits of the services, programs, or activities of a public entity, or be subject to discrimination by such an entity. TANF, as a federal program, is subject to this requirement.

Identifying and Measuring Impairments

Identifying and measuring impairments or disabilities is a complex undertaking, and no single survey instrument has been generally accepted as the preferred method for identifying impairments within a population. Census believes the extensive set of disability questions contained in the SIPP makes it a preferred source for examining most impairment-related issues. Nevertheless, SIPP data should be interpreted with care. For instance, the SIPP relies on self-reports of impairments and, therefore, may not accurately reflect the size of the general or TANF population with impairments. This can result in the overreporting or underreporting of impairments.
For example, although some impairments, such as the inability to walk, missing or impaired limbs, or severely impaired vision, are easy to identify, many impairments are not. Individuals may not report less obvious impairments because of certain stigmas surrounding them or because they may not know of their existence. Some examples of these impairments include learning disabilities, depression, and mental illness. Other surveys use different approaches to measure impairments. The National Household Survey of Drug Abuse and the University of Michigan’s Women’s Employment Survey, for example, use nonclinical in-depth diagnostic questioning to identify certain psychiatric disorders that may be overlooked by other survey techniques.

Impairments Were Relatively Common Among TANF Recipients

Physical and mental impairments were reported to be relatively common among TANF recipients and, to a lesser degree, their children, compared with their prevalence among the non-TANF population. National survey data from the SIPP show that a total of 44 percent of TANF recipients reported in both 1997 and 1999 that they either had one or more physical or mental impairments as defined by Census or that they were caring for a child with such impairments. Specifically, in 29 percent of the TANF cases, only the adult recipient was reported to have impairments; in 7 percent of the cases, only the child was reported to have impairments; and in 8 percent of the cases, both the adult and child were reported to have impairments. The prevalence of impairments among TANF recipients is greater than among the U.S. non-TANF population, among whom a total of 15 percent of individuals reported that they or their children had impairments. (See fig. 1.) Appendix I lists the specific criteria developed by Census that individuals must meet to be considered impaired as applied in the SIPP. We considered individuals to be impaired if they met the Census criteria in both 1997 and 1999.
As shown in figure 2, SIPP data show some demographic differences between TANF recipients aged 18 to 62 who have impairments and those who do not. Two-thirds of adult recipients with impairments were over 35 years old, while fewer than a quarter of adult recipients without impairments were older than 35. Age differences between individuals with and without impairments exist not only among TANF recipients, but among the non-TANF population as well. Among the non-TANF population with impairments, 81 percent were aged 36 to 62, compared with 54 percent of those without impairments. Figure 2 also shows that TANF recipients with and without impairments differed by race. Forty-three percent of adult recipients with impairments were white, compared with 28 percent of adult recipients without impairments. Among the non-TANF population, roughly equal percentages of people with and without impairments were white. Finally, as shown in figure 2, we found that SIPP data indicated no significant differences between recipients with and without impairments in the percentage who were married or the percentage who had no more than a high school education. Regardless of impairment status, about one-quarter of adult recipients were married and two-thirds to three-quarters had no more than a high school education.

Recipients with Impairments Were Less Likely to Exit TANF Than Recipients without Impairments

Impairments, whether they affected adults or children, were associated with a decreased likelihood that a family would exit TANF. In particular, adult recipients with impairments were half as likely to exit TANF as adult recipients without impairments, after controlling for demographic differences, such as age, race, and marital status. Recipients caring for children with impairments were less than half as likely to exit TANF as others, after controlling for demographic differences.
Different types of impairments or impairments of differing severity could have different effects on TANF exits, although we were not able to measure these effects. Furthermore, factors other than impairments may also affect whether recipients exit TANF.

Adult Recipients with Impairments Were Half as Likely to Exit TANF as Adult Recipients without Impairments

Using a statistical model to control for basic demographic factors (gender, race, age, marital status, and education) and state-level differences, we found that adult recipients with impairments were half as likely to exit TANF as recipients without impairments. That is, an individual with an impairment who received TANF at some point between July 1997 and July 1999 was less likely than an individual without an impairment to have exited TANF by July 1999, all else being equal. For example, among whites, those with impairments were less likely to exit TANF than were whites without impairments. Likewise, among nonwhites, those with impairments were less likely to exit TANF than were nonwhites without impairments. If demographic factors are not taken into account, approximately equal proportions (about 3 out of 4) of recipients with and without impairments exited TANF. Among those recipients who did exit TANF, a number returned to the TANF rolls at some point. SIPP data show that among individuals who received TANF and subsequently exited TANF between July 1997 and July 1999, about 1 in 4 had returned to TANF before the end of that period. This was true both of individuals with impairments and those without impairments. Other studies of TANF leavers that have included various time periods, populations, and methodologies have found similar results. For example, a recent study using data from the National Survey of America’s Families found that 21.9 percent of families leaving welfare in 1997 returned within 2 years.
They also found that almost half of those who returned originally left welfare to work and that return rates were higher for former recipients with little education, limited work experience, and poor health.

Recipients Caring for Children with Impairments Were Less Than Half as Likely to Exit TANF as Others

After using a statistical model to control for demographic factors, we found that recipients caring for children with impairments were less than half as likely to exit TANF as their counterparts not caring for children with impairments. A variety of complicating factors related to their children’s impairments may contribute to the decreased likelihood that this population of TANF recipients will pursue and maintain employment. For instance, parents of children with impairments may face demands on their time related to their children’s impairments in the form of special therapies, the administering of medications, regular medical appointments, and hospitalizations. Furthermore, the chronic and unpredictable nature of many impairments, such as severe asthma and seizures, may cause parents to be absent from work frequently and with little or no advance notice to their employers. This may be particularly problematic for TANF leavers, many of whom enter low-skilled or unskilled entry-level jobs that offer limited flexibility and benefits, such as vacation time, sick leave, and health insurance. Finding child care and maintaining adequate health insurance coverage can be particularly challenging for parents caring for children with impairments. Children with impairments may need child care providers with the specialized training and equipment to accommodate their needs. In earlier work, we found that child care providers for children with special needs are sometimes in limited supply, especially in low-income neighborhoods.
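Statements like "half as likely to exit, after controlling for demographic factors" typically come from a logistic regression, where an impairment indicator's coefficient translates into an odds ratio. The sketch below, using entirely hypothetical coefficients and omitting the demographic controls for brevity, shows the mechanics; it is not the actual model estimated for this report.

```python
import math

def exit_probability(has_impairment, intercept=1.1, impairment_coef=-0.69):
    """Hypothetical logistic model of TANF exit.

    The intercept and impairment_coef values are illustrative only;
    exp(-0.69) is about 0.5, i.e., impairment halves the odds of
    exiting, echoing the kind of result reported in the text.
    """
    logit = intercept + impairment_coef * has_impairment
    return 1 / (1 + math.exp(-logit))

p_without = exit_probability(0)
p_with = exit_probability(1)

# The odds ratio recovers exp(impairment_coef) exactly, regardless of
# the intercept or any other controls held fixed.
odds_ratio = (p_with / (1 - p_with)) / (p_without / (1 - p_without))
print(round(odds_ratio, 2))  # roughly 0.5: half the odds of exiting
```

The useful property is that in a logistic model the odds ratio for a binary indicator depends only on that indicator's coefficient, which is what lets the report summarize the effect as "half as likely" with all other factors held equal.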
In addition to the difficulty in obtaining child care, families may be less likely to leave TANF if they are concerned about losing health care coverage. While the Congress established provisions to ensure that adults and children would continue to be eligible for Medicaid after leaving TANF, in our 1999 report we found some evidence to suggest that the reforms of 1996 initially contributed to confusion on the part of both beneficiaries and caseworkers about the criteria for maintaining Medicaid coverage after TANF benefits have been discontinued. Increased awareness of the need to ensure continued Medicaid enrollment for families exiting welfare has given rise to outreach efforts designed to promote awareness and maximize enrollment among eligible families.

Factors Other Than Impairments May Also Affect Whether Recipients Exit TANF

Although recipients with impairments were less likely to exit TANF than recipients without impairments, SIPP data did not provide reliable information on several other factors that may also affect whether recipients exit TANF. For example, there were insufficient data to differentiate among individuals based on the severity, type, or number of their impairments. However, it is possible that these factors might affect whether individuals exit TANF, as evidenced in a study of SIPP data from the early 1990s that suggested that respondents with more severe disabilities were less likely to exit welfare than respondents with less severe limitations. Furthermore, intangible factors such as family support and personal motivation might also lead to very different experiences with TANF for otherwise similar individuals. In our 1997 survey of individuals receiving Social Security Disability Insurance, encouragement from family and friends and high self-motivation were identified as being among a range of factors that enabled these individuals with impairments to return to work.
In addition, local TANF policies, which are not measured by the SIPP questionnaire, may affect whether recipients with impairments exit TANF. For example, local TANF policies regarding screening, assessment, and work requirements may affect whether recipients with impairments receive assistance that could help them move toward employment. In a national survey of county TANF agencies conducted for our October 2001 report, almost all the counties reported that they screened and assessed TANF recipients for impairments, but many used methods that may not accurately identify all impairments. In some cases, this may not be a problem because recipients may find jobs and leave welfare without special assistance. In other cases, recipients may need assistance targeted to their special needs to help them take steps toward employment or to transition to SSI. We also found that many counties reported exempting from state work requirements TANF recipients who had impairments or were caring for a child with an impairment. While exemptions from work requirements may be appropriate in some cases, in other cases they may mean that recipients are not getting the help, direction, or encouragement they need to take steps toward employment and increase their chances of exiting TANF. Exemptions from work requirements could also leave them more at risk of reaching a time limit without getting the assistance they need to find employment or alternative means of support such as SSI. Our previous work and other research make clear that recipients exit TANF for a variety of reasons—increased income, time limits, sanctions, and voluntary exits—and that the reason a family exits TANF could have an effect on the family’s outcomes or circumstances. However, SIPP data did not provide reliable data on the reasons families exited TANF.
After Leaving TANF, People with Impairments Were Less Likely to Be Employed and Were More Likely to Receive Federal Supports Than Were People without Impairments

TANF leavers with impairments were less likely to be employed and more likely to receive federal supports than were leavers without impairments. Although we found, after controlling for certain factors, that leavers with impairments were less likely to be employed than leavers without impairments, many of the leavers with impairments received income from SSI. Leavers with impairments also were more likely to receive Food Stamps and Medicaid.

Leavers with Impairments Were Less Likely to Be Employed, but Many Received SSI

Leavers with impairments were one-third as likely to be employed as leavers without impairments, after controlling for basic demographic factors, state-level differences, and receipt of SSI. In other words, for those not receiving SSI, leavers with impairments were one-third as likely to be employed as leavers without impairments, all else being equal. Leavers caring for children with impairments were as likely to be employed as others, after controlling for demographics and other factors. In addition to estimating the probability of employment, we determined the actual percentages of adults who reported being employed at some point after leaving TANF between July 1997 and July 1999. Thirty-nine percent of adult leavers with impairments were employed at some point after leaving TANF, including 6 percent who also received SSI at some point after leaving TANF. (See fig. 3.) In contrast, 82 percent of leavers without impairments reported being employed at least at some point after leaving TANF between July 1997 and July 1999.
In addition to the 6 percent of adult leavers with impairments who reported both employment and receipt of SSI, 34 percent reported receipt of SSI but not employment, indicating that a number of TANF recipients had impairments severe enough to qualify them for SSI and presumably also severe enough to limit their ability to sustain regular employment. Figure 3 shows that the proportion of leavers with impairments who reported either employment or SSI receipt, or both, is about the same as the proportion of leavers without impairments who reported employment. The fact that many recipients with impairments seem to have impairments severe enough to qualify them for SSI suggests that many recipients are relying on TANF while awaiting determination of their eligibility for SSI. Again, it may take over a year from the time that an individual applies for SSI to the time that a final eligibility decision is made. During this time, individuals on TANF may or may not be exempted from work requirements.

Note: In figure 3, “Employed” and “Receiving SSI” include people who reported being employed or receiving SSI, respectively, in any month after leaving TANF and before the end of July 1999. “Not employed” and “no SSI” include people who reported not being employed or not receiving SSI, respectively, the entire time after leaving TANF and before the end of July 1999.

Leavers with impairments were not only less likely than those without impairments to be employed at any time after leaving TANF, but, not surprisingly, they were also less likely to report having personal earnings from employment or other sources in any single month. In each of the first 6 months after exiting TANF, about 20 percent of leavers with impairments reported having personal earnings, compared with about 60 percent of leavers without impairments.
For those who did report personal earnings, though, the average amount of earnings for members of both groups was essentially equal, at about $1,000 per month. About 35 percent of leavers in both groups also reported household earnings. Regardless of their impairment status, their household earnings averaged about $2,000 per month in addition to any personal earnings they may have had. Leavers with impairments were more likely than those without impairments to report having no income—from personal or household earnings or SSI—in any single month, although they may have received Food Stamps or Medicaid. In their first month after leaving TANF, 36 percent of leavers with impairments reported having no personal or household earnings, or SSI, compared with 23 percent of leavers without impairments. (See fig. 4.) These proportions remained relatively constant in each of the first 6 months after leaving TANF. Over the course of the entire 24-month observation period, 10 percent of all individuals who left during that period reported never having income from personal or household earnings or SSI at any point after leaving TANF. This means that 90 percent of leavers had income from at least one of these sources at some point after leaving TANF. There were insufficient data to examine whether there were any differences between people with and without impairments on this measure.

Leavers with Impairments Were More Likely to Receive Food Stamps and Medicaid

A greater proportion of leavers with impairments reported receiving Food Stamps and Medicaid than did leavers without impairments. Specifically, 77 percent of leavers with impairments received Food Stamps, compared with 62 percent of leavers without impairments. Similarly, 89 percent of leavers with impairments reported receiving Medicaid, in contrast to 71 percent of leavers without impairments. (See fig. 5.)
Concluding Observations

The 1996 welfare reform legislation enacted by the Congress clearly emphasizes the importance of welfare recipients taking steps toward employment and self-support. At the same time, the legislation provides states some flexibility to design programs that meet the needs of families affected by serious physical and mental impairments who may need special attention to facilitate the transition to work or to SSI. As states move beyond the first 5 years of the TANF program, a key challenge will be to ensure that recipients with impairments and those caring for children with impairments receive the supports they need to meet the work-focused goals and requirements of TANF. Our findings underscore the magnitude and complexity of this challenge. Our findings that both adult recipients with impairments and recipients caring for children with impairments are less likely to exit TANF, and that adult leavers with impairments are less likely to be employed, suggest that, in the early years of welfare reform at least, these families were not as successful as those without impairments at leaving welfare through work. Our finding that 40 percent of families with impairments who did leave welfare received SSI after leaving TANF shows that SSI is an important source of support for many of these families. This finding raises the difficult question of how these families can best use their time on TANF while awaiting SSI eligibility determination, such as what work expectations to have for these recipients. These findings also raise the more general question for policymakers about how best to promote work and personal responsibility—through work requirements and time limits—while at the same time taking into consideration the particular needs of recipients with impairments and those caring for children with impairments.
While our analysis provides descriptive information on outcomes for TANF recipients with impairments, much remains unknown about how best to help people with different types of impairments become self-sufficient.

Agency Comments

In commenting on a draft of this report, HHS said that the topic of TANF recipients with impairments is an important one. HHS also noted that our analysis, while possibly the best available approach, has limited application in providing information on the extent to which different types of impairments, impairments of varying severity, or local employment services may affect outcomes for individuals with impairments. We acknowledge that our analysis focuses on describing outcomes rather than identifying explanations for these outcomes, in part because information is not readily available to examine the more complex picture of each individual’s needs and the particular services received. However, our analysis provides important information on what happened in the early years of welfare reform with regard to recipients with impairments as a whole. We added language to our concluding observations to state that much remains unknown about how best to help people with different types of impairments become self-sufficient. HHS also noted that an analysis that excluded recipients who moved onto SSI would be useful. We added language to the report to clarify that our finding that recipients with impairments are one-third as likely to be employed as recipients without impairments refers to recipients who did not receive SSI. HHS’s written comments are included in appendix II. HHS and two welfare experts also provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretary of Health and Human Services, relevant congressional committees, and other interested parties. We will also make copies available to others upon request.
In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215 or Gale Harris at (202) 512-7235. Other contacts and acknowledgments are listed in appendix III. Appendix I: Scope and Methodology To describe the role of physical and mental impairments in the lives of families leaving Temporary Assistance for Needy Families (TANF), we developed estimates of the number of TANF recipients with impairments and investigated the differences between TANF recipients and leavers with and without impairments, using a 2-year cross section of data from the Census Bureau’s Survey of Income and Program Participation (SIPP). The SIPP is a national household survey conducted by the U. S. Census Bureau in which panels of individuals representative of the nation, including those receiving TANF, are interviewed over a period of 2 years or more. At 4-month intervals, panel participants are asked a set of “core” questions involving such subjects as their labor force activity, welfare program participation, and demographic characteristics. Periodically, the survey also asks a detailed set of questions called “topical modules” on a variety of topics not covered in the core section, such as disabilities. For our purposes, we selected panels starting in 1996 and sampled TANF and non-TANF adults between the ages of 18 and 62. Data from the topical modules on disability that we analyzed were from interviews conducted from August 1997 to November 1997, and August 1999 to November 1999, in which respondents were asked about their status in recent months, including July of that year. We included respondents who were in the sample in both July 1997 and July 1999 and analyzed their responses during this time period. 
During these interviews, panel members were asked an extensive set of questions about their physical or mental impairments, including questions on a range of functional or other activity limitations. To be identified as having a disability or impairment in the SIPP, individuals must meet specific disability criteria developed by the U.S. Census Bureau. That is, they must meet any of the following criteria:
1. Had difficulty performing one or more functional activities, including seeing, hearing, speaking, lifting and carrying, using stairs, and walking.
2. Had difficulty with one or more activities of daily living, such as getting around inside the home, getting in or out of a bed or chair, bathing, dressing, and eating.
3. Had difficulty with one or more instrumental activities of daily living, including going outside the home, keeping track of money or bills, preparing meals, doing light housework, and using the telephone.
4. Had one or more specific conditions, including a learning disability, mental retardation or another developmental disability, Alzheimer’s disease, or some other type of mental or emotional condition.
5. Had another mental or emotional condition that seriously interfered with everyday activities, such as being frequently depressed or anxious, having trouble getting along with others, having trouble concentrating, or having trouble coping with day-to-day stress.
6. Had a condition that limited the ability to work, including around the house.
7. Had a condition that made it difficult to work at a job or business.
8. Received federal benefits based on inability to work.
9. Used a wheelchair, a cane, crutches, or a walker.
For our purposes, we considered individuals to have impairments if their survey responses indicated they had impairments at both times that the disability topical module was administered (i.e., in both 1997 and 1999).
We considered individuals not to have impairments if their survey responses indicated they did not have impairments at both times that the disability topical module was administered. Individuals whose impairment status differed between the first and second modules were excluded from the analyses. (We excluded 12.5 percent of respondents for this reason.) We used appropriate techniques to weight the data to make population estimates for 1999 as well as to take into account the complex sampling design when estimating variances. Because the estimates we reported from the SIPP were based on samples, they are subject to sampling error, which varied but did not exceed plus or minus 8 percentage points at the 95-percent confidence level. Therefore, the chances are 95 out of 100 that the actual population percentages are within no more than plus or minus 8 percentage points of our estimates. Logistic Regression Analyses In addition to descriptive statistics, we used logistic regression models to examine the effects of recipients’ having impairments, and of recipients’ caring for children with impairments, on the likelihood of leaving TANF and of being employed after leaving TANF, after controlling for age, gender, marital status, race, and educational attainment. Recognizing that TANF policies may vary across states, we controlled for state in the models as well. The models of post-TANF employment also controlled for receipt of Supplemental Security Income (SSI). The results from the models we used are odds ratios that estimate, in table 1, the relative likelihood of leaving TANF for each factor and, in table 2, the effect of each factor on the likelihood of being employed after leaving TANF. If there were no significant differences between two groups, their odds would be equal, and the ratio of their odds would be 1.00. The more the odds ratio differs from 1.00 in either direction, the larger the effect it represents.
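To illustrate the arithmetic, an odds ratio can be computed directly from two group proportions. The exit rates below are invented for illustration only; they are not estimates from this analysis:

```python
def odds(p):
    """Convert a proportion into odds."""
    return p / (1.0 - p)

# Hypothetical exit rates, for illustration only (not report estimates).
exit_rate_group_a = 0.30  # 30 percent of group A exited TANF
exit_rate_group_b = 0.15  # 15 percent of group B exited TANF

# Odds of exiting: 0.30/0.70 (about 0.43) for group A versus
# 0.15/0.85 (about 0.18) for group B.
odds_ratio = odds(exit_rate_group_a) / odds(exit_rate_group_b)

print(round(odds_ratio, 2))  # about 2.43: group A's odds of exiting
                             # are roughly two and a half times group B's
```

An odds ratio of exactly 1.00 would mean the two groups' odds were equal; the logistic regression models estimate such ratios while holding the control variables constant.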
The odds ratios in each table were computed in relation to a defined reference group. In table 1, an odds ratio greater than 1.00 indicates a greater likelihood of leaving TANF than the reference group, while a ratio under 1.00 indicates a lesser likelihood of leaving than the reference group. In table 2, an odds ratio greater than 1.00 indicates a greater likelihood of being employed after leaving TANF than the reference group, while a ratio under 1.00 indicates a lesser likelihood of being employed after leaving TANF than the reference group. Both tables also show the 95-percent confidence intervals around the odds ratios. If these intervals contain 1.00, the difference is not statistically significant. Definitions of Other Variables
TANF recipient: Respondents who reported receiving TANF in any month during the period (July 1997 through July 1999).
TANF leaver: Respondents who reported receiving TANF in some month during the period and subsequently not receiving TANF at some point for at least 2 consecutive months.
Non–TANF population: Respondents who did not receive TANF benefits in any month during the time period.
Employed (leavers): Respondents who reported employment in any month after leaving TANF during the time period.
Age: Categorized as 18-35 and 36-62 and defined as the respondent’s reported age in July 1997.
Education: Categorized as either having more than a high school education or not. For models of TANF exits, education is defined as the reported level of education in July 1997; for models predicting employment among leavers, education is defined as the reported level of education in the month the respondent reported leaving TANF.
Marital status: Categorized as either married or not. For models of TANF exits, marital status is defined as reported status in July 1997; for models predicting employment among leavers, marital status is defined as reported status in the month the respondent reported leaving TANF.
Received Food Stamps/Medicaid (leavers): Respondents who reported receiving Food Stamps/Medicaid in any month after leaving TANF during the time period.
Received SSI (leavers): Respondents who reported receiving SSI in any month after leaving TANF during the time period.
Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to those named above, Tiffany Boiman, Wendy Ahmed, and Grant Mallie made important contributions to this report. Bibliography Acs, Gregory, and Pamela Loprest. “Do Disabilities Inhibit Exits from AFDC?” Washington, D.C.: The Urban Institute, 1994. Brandon, Peter D., and Dennis P. Hogan. “The Effects of Children with Disabilities on Mothers’ Exit from Welfare.” Paper presented at the Joint Center for Poverty Research, Research Conference. Washington, D.C.: February 2002. Collier-Bolkus, Winifred. “The Impact of the Welfare Reform Law on Families with Disabled Children That Need Child Care.” Ph.D. diss., Widener University, 2000. Danziger, S.K., and others. “Barriers to the Employment of Welfare Recipients.” In Prosperity for All? The Economic Boom and African Americans. Eds. R. Cherry and W. Rodgers. New York: Russell Sage Foundation Press, 2000. Lee, Sunhwa, Melissa Sills, and Gi-Taik Oh. “Disabilities Among Children and Mothers in Low Income Families.” Washington, D.C.: Institute for Women’s Policy Research, June 20, 2002. Meyers, Marcia K., Anna Lukemeyer, and Timothy Smeeding. “Work, Welfare and the Burden of Disability: Caring for Special Needs of Children in Poor Families.” Syracuse University: Center for Policy Research, Maxwell School of Citizenship and Public Affairs, 1996. Polit, Denise F., Andrew S. London, and John M. Martinez. “The Health of Poor Urban Women: Findings from the Project on Devolution and Urban Change.” New York, NY: Manpower Demonstration Research Corporation, 2001. Smith, Lauren A., MD et al.
“Employment Barriers Among Welfare Recipients and Applicants with Chronically Ill Children.” American Journal of Public Health, 92, no. 9 (September 2002): 1453-1457. Wise, Paul H., MD et al. “Chronic Illness Among Poor Children Enrolled in the Temporary Assistance for Needy Families Program.” American Journal of Public Health, 92, no. 9 (September 2002): 1458-1461. Wood, Pamela R., MD et al. “Relationships Between Welfare Status, Health Insurance Status, and Health and Medical Care Among Children with Asthma.” American Journal of Public Health, 92, no. 9 (September 2002): 1446-1452. Related GAO Products Welfare Reform: Tribes Are Using TANF Flexibility to Establish Their Own Programs. GAO-02-695T. Washington, D.C.: May 10, 2002. Welfare Reform: Federal Oversight of State and Local Contracting Can Be Strengthened. GAO-02-661. Washington, D.C.: June 11, 2002. Welfare Reform: States Are Using TANF Flexibility to Adapt Work Requirements and Time Limits to Meet State and Local Needs. GAO-02-501T. Washington, D.C.: March 7, 2002. Welfare Reform: More Coordinated Federal Efforts Could Help States and Localities Move TANF Recipients with Impairments Toward Employment. GAO-02-37. Washington, D.C.: October 31, 2001. Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO-01-368. Washington, D.C.: March 15, 2001. Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs. GAO/HEHS-00-122. Washington, D.C.: July 28, 2000. Welfare Reform: Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: April 27, 2000. Welfare Reform: State Sanction Policies and Number of Families Affected. GAO/HEHS-00-44. Washington, D.C.: March 31, 2000. Welfare Reform: Assessing the Effectiveness of Various Welfare-to-Work Approaches.
GAO/HEHS-99-179. Washington, D.C.: September 7, 1999. Welfare Reform: Information on Former Recipients’ Status. GAO/HEHS-99-48. Washington, D.C.: April 28, 1999. Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients. GAO/HEHS-99-22. Washington, D.C.: February 26, 1999. Welfare Reform: Status of Awards and Selected States’ Use of Welfare-to-Work Grants. GAO/HEHS-99-40. Washington, D.C.: February 5, 1999.

Debates surrounding the reauthorization of welfare reform legislation have involved some discussion regarding outcomes for TANF recipients with physical or mental impairments. To inform this discussion, GAO was asked to report on (1) whether recipients with impairments were as likely to exit TANF as their counterparts without impairments and (2) the sources of income reported by leavers with and without impairments. To obtain this information, GAO analyzed self-reported data for the most recent years available from the Census Bureau's Survey of Income and Program Participation (SIPP)--a national survey of households that includes questions about TANF status and functional impairments. Recipients of Temporary Assistance for Needy Families (TANF) who had impairments were found to be half as likely to exit TANF as recipients without impairments, and recipients caring for children with impairments were found to be less than half as likely to exit TANF as recipients not caring for children with impairments, after controlling for demographic differences such as age, race, and marital status. Although impairments affect exits, other factors, including family support and personal motivation, as well as local TANF policies, may also affect whether recipients exit TANF. After leaving TANF, people with impairments were one-third as likely as people without impairments to be employed, according to a statistical model that controlled for demographic differences, and they were more likely to receive federal supports.
Forty percent of leavers with impairments reported receiving cash assistance from Supplemental Security Income (SSI), a federal program designed to assist low-income individuals who are aged, blind, or disabled. Leavers with impairments were also more likely to receive noncash support in the form of Food Stamps and Medicaid than their counterparts without impairments. These findings underscore the challenge states face in ensuring that recipients with impairments and those caring for children with impairments receive the supports they need to meet the work-focused goals and requirements of TANF.
Background The Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996 required INS and SSA to operate three voluntary pilot programs to test electronic means for employers to verify an employee’s eligibility to work, one of which was the Basic Pilot Program. The Basic Pilot Program was designed to test whether pilot verification procedures could improve the existing employment verification process by reducing (1) false claims of U.S. citizenship and document fraud; (2) discrimination against employees; (3) violations of civil liberties and privacy; and (4) the burden on employers to verify employees’ work eligibility. The Basic Pilot Program provides participating employers with an electronic method to verify their employees’ work eligibility. Employers may participate voluntarily in the Basic Pilot Program, but are still required to complete Forms I-9 for all newly hired employees in accordance with IRCA. After completing the forms, these employers query the pilot program’s automated system by entering employee information provided on the forms, such as name and social security number, into the pilot Web site within 3 days of the employees’ hire date. The pilot program then electronically matches that information against information in SSA and, if necessary, DHS databases to determine whether the employee is eligible to work, as shown in figure 1. The Basic Pilot Program electronically notifies employers whether their employees’ work authorization was confirmed. Those queries that the DHS automated check cannot confirm are referred to DHS immigration status verifiers who check employee information against information in other DHS databases. In cases when the pilot system cannot confirm an employee’s work authorization status either through the automatic check or the check by an immigration status verifier, the system issues the employer a tentative nonconfirmation of the employee’s work authorization status. 
In this case, the employers must notify the affected employees of the finding, and the employees have the right to contest their tentative nonconfirmations by contacting SSA or CIS to resolve any inaccuracies in their records within 8 days. During this time, employers may not take any adverse actions against those employees, such as limiting their work assignments or pay. Employers are required to either immediately terminate the employment, or notify DHS of the continued employment, of workers who do not successfully contest the tentative nonconfirmation and those who the pilot program finds are not work-authorized. Various Weaknesses Have Undermined the Employment Verification Process, but Opportunities Exist to Enhance It Current Employment Verification Process Is Based on Employers’ Review of Documents In 1986, IRCA established the employment verification process based on employers’ review of documents presented by employees to prove identity and work eligibility. On the Form I-9, employees must attest that they are U.S. citizens, lawfully admitted permanent residents, or aliens authorized to work in the United States. Employers must then certify that they have reviewed the documents presented by their employees to establish identity and work eligibility and that the documents appear genuine and relate to the individual presenting them. In making their certifications, employers are expected to judge whether the documents presented are obviously counterfeit or fraudulent. Employers are deemed in compliance with IRCA if they have followed the Form I-9 process, including when an unauthorized alien presents fraudulent documents that appear genuine. Form I-9 Process Is Vulnerable to Document and Identity Fraud Since passage of IRCA in 1986, document and identity fraud have made it difficult for employers who want to comply with the employment verification process to ensure they hire only authorized workers. 
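The tiered flow described above (an automated SSA match, an automated DHS check for noncitizens, and referral of unresolved queries for secondary verification) can be sketched roughly as follows. This is a simplified illustration only; the function names and record fields are hypothetical, not the actual system's interfaces:

```python
def verify_work_eligibility(employee, ssa_records, dhs_records):
    """Simplified sketch of the Basic Pilot tiered checks (illustrative only).

    Returns "confirmed" or "tentative_nonconfirmation". In the real
    program, unresolved DHS queries first go to an immigration status
    verifier before a tentative nonconfirmation is issued.
    """
    # Automated match of Form I-9 data against SSA records.
    ssa_rec = ssa_records.get(employee["ssn"])
    if ssa_rec is None or ssa_rec["name"] != employee["name"]:
        return "tentative_nonconfirmation"
    if ssa_rec.get("us_citizen"):
        return "confirmed"
    # Noncitizens are checked against DHS work-authorization records.
    dhs_rec = dhs_records.get(employee["ssn"])
    if dhs_rec and dhs_rec.get("work_authorized"):
        return "confirmed"
    # Unresolved cases would be referred for manual secondary
    # verification; failing that, the employer receives a tentative
    # nonconfirmation that the employee may contest.
    return "tentative_nonconfirmation"
```

A counterfeit document with an invented name or number fails the first match, which is why the pilot can catch fabricated documents but not a valid document presented by an impostor.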
In its 1997 report to Congress, the Commission on Immigration Reform noted that the widespread availability of false documents made it easy for unauthorized aliens to obtain jobs in the United States. In past work, we reported that large numbers of unauthorized aliens have used false documents or fraudulently used valid documents belonging to others to acquire employment, including at critical infrastructure sites like airports and nuclear power plants. In addition, although studies have shown that the majority of employers comply with IRCA and try to hire only authorized workers, some employers knowingly hire unauthorized workers, often to exploit the workers’ low cost labor. For example, the Commission on Immigration Reform reported that employers who knowingly hired illegal aliens often avoided sanctions by going through the motions of compliance while accepting false documents. The Number and Variety of Acceptable Documents Hinders Employer Verification Efforts The number and variety of documents that are acceptable for proving work eligibility have complicated employer verification efforts under IRCA. Following the passage of IRCA in 1986, employees could present 29 different documents to establish their identity and/or work eligibility. In a 1997 interim rule, INS reduced the number of acceptable work eligibility documents from 29 to 27. The interim rule implemented changes to the list of acceptable work eligibility documents mandated by IIRIRA and was intended to serve as a temporary measure until INS issued final rules on modifications to the Form I-9. Since the passage of IRCA, we and others have reported on the need to reduce the number of acceptable work eligibility documents to make the employment verification process simpler and more secure. In 1998, INS proposed a further reduction in the number of acceptable work eligibility documents to 14, but the proposed rule has not been finalized. 
According to DHS officials, the department is currently assessing possible revisions to the Form I-9 process, including reducing the number of acceptable work eligibility documents, but has not established a target time frame for completing this assessment and issuing regulations on Form I-9 changes. The Basic Pilot Program Shows Promise to Enhance Employment Verification, but Challenges Exist to Increased Use Various immigration experts have noted that the most important step that could be taken to reduce illegal immigration is the development of a more effective system for verifying work authorization. In particular, the Commission on Immigration Reform concluded that the most promising option for verifying work authorization was a computerized registry based on employers’ electronic verification of an employee’s social security number with records on work authorization for aliens. The Basic Pilot Program, which is currently available on a voluntary basis to all employers in the United States, operates in a similar way to the computerized registry recommended by the commission, and shows promise to enhance employment verification and worksite enforcement efforts. Only a small portion—about 2,300 in fiscal year 2004—of the approximately 5.6 million employer firms nationwide actively used the pilot program. The Basic Pilot Program enhances the ability of participating employers to reliably verify their employees’ work eligibility and assists participating employers with identification of false documents used to obtain employment by comparing employees’ Form I-9 information with information in SSA and DHS databases. If newly hired employees present counterfeit documents, the pilot program would not confirm the employees’ work eligibility because their employees’ Form I-9 information, such as the false name or social security number, would not match SSA and DHS database information when queried through the Basic Pilot Program. 
Although ICE has no direct role in monitoring employer use of the Basic Pilot Program and does not have direct access to program information, which is maintained by CIS, ICE officials told us that program data could indicate cases in which employers do not follow program requirements and therefore would help the agency better target its worksite enforcement efforts toward those employers. For example, the Basic Pilot Program’s confirmation of numerous queries of the same social security number could indicate that a social security number is being used fraudulently or that an unscrupulous employer is knowingly hiring unauthorized workers by accepting the same social security number for multiple employees. ICE officials noted that, in a few cases, they have requested and received pilot program data from CIS on specific employers who participate in the program and are under ICE investigation. However, CIS officials told us that they have concerns about providing ICE broader access to Basic Pilot Program information because it could create a disincentive for employers to participate in the program, as employers may believe that they are more likely to be targeted for a worksite enforcement investigation as a result of program participation. According to ICE officials, mandatory employer participation in the Basic Pilot Program would eliminate the concern about sharing data and could help ICE better target its worksite enforcement efforts on employers who try to circumvent program requirements. Moreover, these officials told us that mandatory use of an automated system like the pilot program could limit the ability of employers who knowingly hired unauthorized workers to claim that the workers presented false documents to obtain employment, which could assist ICE agents in proving employer violations of IRCA. The Basic Pilot Program may enhance the employment verification process and a mandatory program could assist ICE in targeting its worksite enforcement efforts. 
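The pattern the officials describe, a single social security number confirmed across many queries, could in principle be surfaced with a simple screen over confirmed-query records. The sketch below is a hypothetical illustration of that idea, not an actual CIS or ICE tool:

```python
from collections import defaultdict

def flag_repeated_ssns(confirmed_queries, threshold=3):
    """Flag SSNs confirmed for several distinct workers or employers.

    `confirmed_queries` is a list of (ssn, employer_id, employee_name)
    tuples; an SSN tied to `threshold` or more distinct employer/employee
    pairs is flagged as potential fraudulent use (illustrative logic only).
    """
    uses_by_ssn = defaultdict(set)
    for ssn, employer_id, employee_name in confirmed_queries:
        uses_by_ssn[ssn].add((employer_id, employee_name))
    return sorted(ssn for ssn, uses in uses_by_ssn.items()
                  if len(uses) >= threshold)
```

A screen like this would only generate leads; distinguishing identity fraud from, say, a worker legitimately holding two jobs would still require investigation.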
However, weaknesses exist in the current program. For example, the current Basic Pilot Program cannot help employers detect identity fraud. If an unauthorized worker presents valid documentation that belongs to another person authorized to work, the Basic Pilot Program would likely find the worker to be work-authorized. Similarly, if an employee presents counterfeit documentation that contains valid information and appears authentic, the pilot program may verify the employee as work-authorized. DHS officials told us that the department is currently considering possible ways to enhance the Basic Pilot Program to help it detect cases of identity fraud, for example, by providing a digitized photograph associated with employment authorization information presented by an employee. Delays in the entry of information on arrivals and employment authorization into CIS databases can lengthen the pilot program verification process for some secondary verifications. Although the majority of pilot program queries entered by employers are confirmed via the automated SSA and DHS verification checks, about 15 percent of queries authorized by DHS required secondary verifications in fiscal year 2004. According to CIS, cases referred for secondary verification are typically resolved within 24 hours, but a small number of cases take longer, sometimes up to 2 weeks, due to, among other things, delays in entry of employment authorization information into CIS databases. Secondary verifications lengthen the time needed to complete the employment verification process and could harm employees because employers might reduce those employees’ pay or restrict training or work assignments, which are prohibited under pilot program requirements, while waiting for verification of their work eligibility. 
DHS has taken steps to increase the timeliness and accuracy of information entered into databases used as part of the Basic Pilot Program and reports, for example, that data on new immigrants are now typically available for verification within 10 to 12 days of an immigrant’s arrival in the United States while, previously, the information was not available for up to 6 to 9 months after arrival. According to CIS officials, current CIS staff may not be able to complete timely secondary verifications if the number of employers using the program significantly increased. In particular, these officials said that if a significant number of new employers registered for the program or if the program were mandatory for all employers, additional staff would be needed to maintain timely secondary verifications. Currently, CIS has approximately 38 Immigration Status Verifiers allocated for completing Basic Pilot Program secondary verifications, and these verifiers reported that they are able to complete the majority of manual verification checks within their target time frame of 24 hours. However, CIS estimated that even a relatively small increase in the number of employers using the program would significantly slow the secondary verification process and strain existing resources allocated for the program. Low Priority and Implementation Challenges Have Hindered Worksite Enforcement Efforts Worksite Enforcement Remains a Low Priority Worksite enforcement was a low priority for INS and continues to be a low priority for ICE. In the 1999 INS Interior Enforcement Strategy, the strategy to block and remove employers’ access to undocumented workers was the fifth of five interior enforcement priorities. 
We have reported that, relative to other enforcement programs in INS, worksite enforcement received a small portion of INS’s staffing and enforcement budget and that the number of employer investigations INS conducted each year covered only a fraction of the number of employers who may have employed unauthorized aliens. Furthermore, INS investigative resources were redirected from worksite enforcement activities to criminal alien cases, which consumed more investigative hours by the late 1990s than any other enforcement activity. After September 11, 2001, INS and ICE focused investigative resources on national security-related investigations. According to ICE, the redirection of resources from other enforcement programs to perform national security-related investigations resulted in fewer resources for traditional program areas, like worksite enforcement and fraud. The resources INS and ICE devoted to worksite enforcement have continued to decline. As shown in figure 2, between fiscal years 1999 and 2003, the most recent fiscal year for which comparable data are available, the percentage of agent workyears spent on worksite enforcement efforts generally decreased from about 9 percent, or 240 full-time equivalents, to about 4 percent, or 90 full-time equivalents. Workyear data for fiscal year 2004 cannot be directly compared with workyear data for previous fiscal years because of changes in the way INS and ICE agents entered and categorized data in their respective case management systems. However, ICE data indicate that the agency allocated about 65 full-time equivalents to worksite enforcement in fiscal year 2004. In addition, the number of notices of intent to fine issued to employers as well as the number of unauthorized workers arrested at worksites have also declined. 
Between fiscal years 1999 and 2004, the number of notices of intent to fine issued to employers for improperly completing Forms I-9 or knowingly hiring unauthorized workers generally decreased from 417 to 3. (See figure 3.) The number of worksite arrests declined by about 84 percent from 2,849 in fiscal year 1999 to 445 in fiscal year 2003. (See figure 4.) Difficulties Proving Employer Violations, Collecting Fines, and Detaining Aliens Have Weakened the Worksite Enforcement Program The difficulties that INS and ICE have experienced in proving that employers knowingly hired unauthorized workers and in setting and collecting fine amounts that meaningfully deter employers from knowingly hiring unauthorized workers have limited the effectiveness of worksite enforcement efforts. In particular, the availability and use of fraudulent documents has not only undermined the employment verification process, but has also made it difficult for ICE agents to prove that employers knowingly hired unauthorized workers. In 1996, the Department of Justice Office of the Inspector General reported that the proliferation of cheap fraudulent documents made it possible for the unscrupulous employer to avoid being held accountable for hiring illegal aliens. In 1999, we reported that the prevalence of document fraud made it difficult for INS to prove that an employer knowingly hired an unauthorized alien. ICE officials told us that employers who they suspect knowingly hire unauthorized workers can claim that they were unaware that their workers presented false documents at the time of hire, making it difficult for agents to prove that the employer violated IRCA. According to ICE officials, when agents can prove that an employer knowingly hired an unauthorized worker, difficulties in setting and collecting meaningful fine amounts have undermined the effectiveness of worksite enforcement efforts and the deterrent effect of employer fines. 
Under IRCA, employers who fail to properly complete, retain, or present for inspection a Form I-9 may be administratively fined from $110 to $1,100 for each employee. Employers who knowingly hire or continue to employ unauthorized aliens may be administratively fined from $275 to $11,000 for each employee, depending on whether the violation is a first or subsequent offense. ICE officials told us that fine amounts recommended by both INS and ICE agents were often negotiated down in value during discussions between agency attorneys and employers. These officials said that the agency mitigates employer fines because doing so may be a more efficient use of government resources than pursuing employers who contest or ignore fines, which could be more costly to the government than the fine amount sought. Furthermore, in the opinion of some ICE officials, mitigated fine amounts are so low that employers view them as a cost of doing business, and the fines do not provide an effective deterrent for employers who attempt to circumvent IRCA. In addition, the Debt Management Center, which is responsible for collecting fines issued against employers for violations of IRCA, has faced difficulties in collecting the full amount of fines from employers. According to ICE, the agency has faced difficulties in collecting fines from employers for a number of reasons, for example, because employers went out of business or declared bankruptcy. In such instances, the agency determines whether to pursue collection of employer fines based on the level of resources needed to pursue the employer and the likelihood of collecting the fine amount. Finally, the Office of Detention and Removal has limited detention space, and unauthorized workers detained during worksite enforcement investigations are a low priority for that space. In 2004, the Under Secretary for Border and Transportation Security sent a memo to the Commissioner of U.S.
Customs and Border Protection and the Assistant Secretary for ICE outlining the priorities for the detention of aliens. According to the memo, aliens who are subjects of national security investigations were among those groups of aliens given the highest priority for detention, while those arrested as a result of worksite enforcement investigations were to be given the lowest priority. According to ICE officials, the lack of sufficient detention space has limited the effectiveness of worksite enforcement efforts. For example, they said that if investigative agents arrest unauthorized aliens at worksites, the aliens would likely be released because Office of Detention and Removal detention centers do not have sufficient space to house them; the released aliens may then re-enter the workforce, in some cases returning to the worksites from which they were originally arrested. Worksite Enforcement Focus Shifted to Critical Infrastructure Protection after September 11, 2001 In keeping with the primary mission of DHS to combat terrorism, after September 11, 2001, INS and then ICE have focused their resources for worksite enforcement on identifying and removing unauthorized workers from critical infrastructure sites, such as airports and nuclear power plants, to help reduce vulnerabilities at those sites. According to ICE officials, the agency shifted its worksite enforcement focus to critical infrastructure protection because unauthorized workers employed at critical infrastructure sites indicate security vulnerabilities at those sites. In conducting critical infrastructure operations, the agency has worked with employers to identify and remove unauthorized workers and, as a result, has not focused on sanctioning employers at critical infrastructure sites.
In 2003, ICE headquarters issued a memo requiring field offices to request approval from ICE headquarters prior to opening any worksite enforcement investigation not related to the protection of critical infrastructure sites, such as investigations of farms and restaurants. ICE officials told us that the purpose of this memo was to help ensure that field offices focused worksite enforcement efforts on critical infrastructure protection operations. Field office representatives reported that non-critical-infrastructure worksite enforcement is one of the few investigative areas for which offices must request approval from ICE headquarters to open an investigation and also reported that worksite enforcement is not a priority unless it is related to critical infrastructure. In addition, some of these representatives, as well as immigration experts we interviewed, noted that the focus on critical infrastructure protection does not address the majority of worksites in industries that have traditionally provided the magnet of jobs attracting illegal aliens to the United States. Concluding Observations Efforts to reduce the employment of unauthorized workers in the United States require a strong employment eligibility verification process and a credible worksite enforcement program to ensure that employers meet verification requirements. The current employment verification process has not fundamentally changed since its establishment in 1986, and ongoing weaknesses have undermined its effectiveness. The Basic Pilot Program shows promise for enhancing the employment verification process and reducing document fraud if implemented on a much larger scale.
However, the weaknesses identified in the current implementation of the Basic Pilot Program, as well as the costs of an expanded program, are considerations that will need to be addressed in deciding whether this program, or a similar automated employment verification process, should be significantly expanded or made mandatory. Even with a strengthened employment verification process, a credible worksite enforcement program would be needed because no verification system is foolproof and not all employers may want to comply with IRCA. We are continuing our work and expect to have several recommendations aimed at improving employment verification and worksite enforcement efforts. This concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee members may have. GAO Contact and Staff Acknowledgments For further information about this testimony, please contact Richard Stana at 202-512-8777. Other key contributors to this statement were Orlando Copeland, Michele Fejfar, Ann H. Finley, Rebecca Gambler, Kathryn Godfrey, Eden C. Savino, and Robert E. White. Related GAO Products Social Security: Better Coordination among Federal Agencies Could Reduce Unidentified Earnings Reports. GAO-05-154. February 4, 2005. Tax Administration: IRS Needs to Consider Options for Revising Regulations to Increase the Accuracy of Social Security Numbers on Wage Statements. GAO-04-712. August 31, 2004. Immigration Enforcement: DHS Has Incorporated Immigration Enforcement Objectives and Is Addressing Future Planning Requirements. GAO-05-66. October 8, 2004. Overstay Tracking: A Key Component of Homeland Security and a Layered Defense. GAO-04-82. May 21, 2004. Social Security Administration: Actions Taken to Strengthen Procedures for Issuing Social Security Numbers to Noncitizens, but Some Weaknesses Remain. GAO-04-12. October 15, 2003. Homeland Security: Challenges to Implementing the Immigration Interior Enforcement Strategy. GAO-03-660T. 
April 10, 2003. Identity Fraud: Prevalence and Links to Alien Illegal Activities. GAO-02-830T. June 25, 2002. Illegal Aliens: Significant Obstacles to Reducing Unauthorized Alien Employment Exist. GAO/GGD-99-33. April 2, 1999. Immigration and Naturalization Service: Overview of Management and Program Challenges. GAO/T-GGD-99-148. July 29, 1999. Immigration Reform: Employer Sanctions and the Question of Discrimination. GAO/GGD-90-62. March 29, 1990. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The opportunity for employment is one of the most important magnets attracting illegal aliens to the United States. The Immigration Reform and Control Act (IRCA) of 1986 established an employment eligibility verification process and a sanctions program for fining employers for noncompliance. Few modifications have been made to the verification process and sanctions program since 1986, and immigration experts state that a more reliable verification process and a strengthened worksite enforcement capacity are needed to help deter illegal immigration. In this testimony, GAO provides preliminary observations from its ongoing assessment of (1) the current employment verification process and (2) U.S. Immigration and Customs Enforcement's (ICE) priorities and resources for the worksite enforcement program and the challenges it faces in implementing that program. The current employment verification (Form I-9) process is based on employers' review of documents presented by new employees to prove their identity and work eligibility.
On the Form I-9, employers certify that they have reviewed documents presented by their employees and that the documents appear genuine and relate to the individual presenting the documents. However, document fraud (use of counterfeit documents) and identity fraud (fraudulent use of valid documents or information belonging to others) have undermined the employment verification process by making it difficult for employers who want to comply with the process to ensure they hire only authorized workers and easier for unscrupulous employers to knowingly hire unauthorized workers. In addition, the number and variety of documents acceptable for proving work eligibility have hindered employers' verification efforts. In 1998, the former Immigration and Naturalization Service (INS), now part of the Department of Homeland Security (DHS), proposed revising the Form I-9 process, particularly to reduce the number of acceptable work eligibility documents, but DHS has not yet finalized the proposal. The Basic Pilot Program, a voluntary program through which participating employers electronically verify employees' work eligibility, shows promise to enhance the current employment verification process, help reduce document fraud, and assist ICE in better targeting its worksite enforcement efforts. Yet, several current weaknesses in the pilot program's implementation, such as its inability to detect identity fraud and DHS delays in entering data into its databases, could adversely affect increased use of the pilot program, if not addressed. The worksite enforcement program has been a low priority under both INS and ICE. For example, in fiscal year 1999 INS devoted about 9 percent of its total investigative agents' time to worksite enforcement, while in fiscal year 2003 it allocated about 4 percent.
ICE officials told us that the agency has experienced difficulties in proving employer violations and setting and collecting fine amounts that meaningfully deter employers from knowingly hiring unauthorized workers. In addition, INS and then ICE shifted its worksite enforcement focus to critical infrastructure protection after September 11, 2001.
What Are Leading Organizations Doing and How Does the Government Compare What we found was that leading organizations have cut administrative costs—some cut expense report processing costs by more than 80 percent—and time—what once took 3 weeks can now be done within 48 hours—as a direct result of reengineering how they arrange and process travel. Their total administrative costs per trip now range from about $10 to $20. They achieved these improvements by consolidating travel management and processing centers, eliminating unnecessary review layers, simplifying the travel process, and streamlining and automating the expense reporting process and integrating it with the financial system. Most federal agencies’ administrative travel costs and processes, on the other hand, lag behind those of leading organizations, although some agencies have begun to close the gap. Many agencies have not determined what their administrative travel costs and processes are, and for those agencies whose costs were determined, administrative costs per trip ranged from about $37 to $123—roughly 1.5 to 12 times more expensive than those of the leading organizations. Many federal agencies use numerous processing centers, require multiple travel documents, and fill out these travel documents manually or maintain travel systems that do not have an agencywide automated interface with the financial system. Part of the problem has been that a primary focus of many federal agencies’ travel management has historically been on maintaining and monitoring compliance, rather than on managing costs and efficiencies. This is not to say that all federal agencies’ processes are poor or that administrative costs are high. Indeed, most agencies’ travel costs and processes lie along a continuum of performance. Some agencies, in fact, have reduced their administrative costs to levels that begin to match those of leading organizations.
The Internal Revenue Service (IRS), for instance, has reported total administrative costs of about $39 per trip, while the Forest Service (region 5) has reported administrative costs of about $37. And there are also several agencies, including GSA and the Departments of State, Transportation, Defense, and Energy, that have initiated pilots that could go a long way to improving operations and reducing administrative travel costs. Figure 1 provides a listing of estimated total administrative costs for the leading organizations and four civilian agencies. The figure also shows administrative cost estimates developed by six DOD agencies, although these estimates may not be fully comparable. In addition, the chart highlights two estimates—current federal agency costs per trip and improved costs per trip—that were developed by a travel improvement task force from the Joint Financial Management Improvement Program (JFMIP). [Figure 1 legend: comparable civilian agency cost estimates include estimates from the Forest Service (region 5), IRS National Office, JMD, and State; DOD cost estimates include estimates from the DOD Language Institute for both the Army and Air Force, DOD Performance Review, the Air Force District of Washington, Naval Post-graduate School, and NSA.] Figure 2 shows where agencies stand in their identification of administrative costs. The chart breaks the agencies into three groups—those that could identify all administrative costs, those that identified some of their administrative costs, and those that could not identify any of their administrative costs. Finally, the following table lists the travel practices of the leading organizations and compares them to the practices of federal agencies, as found in our survey.
How Did the Leading Organizations Get Here When looking at the leading organizations and what they have accomplished, it is important to remember that all of them found themselves, at one point or another, in a situation very similar to where many federal agencies now stand—they had to reduce costs, while at the same time dramatically improving service to the customer. These leading organizations set out to rethink and redesign how their financial management processes, including travel processing, were conducted. In doing so, the leading organizations shared many of the same characteristics: they generally assessed travel management as part of a larger, financial management reengineering effort, they benchmarked themselves against other recognized organizations, and they instituted a common set of best practices. The strategies used by these leading organizations in reengineering and adapting their practices can be grouped into three common areas: consolidation, simplification, and automation and integration. By addressing all three of these areas, the organizations were able to achieve dramatic cost and process improvements. Consolidation A travel manager from one of the leading organizations told us that after assessing current practices and processes, the first thing he would do when embarking on an improvement effort would be to consolidate operations. Before they started reengineering, leading organizations had business units that operated independently. Each unit was responsible for making travel arrangements and for processing their own travel. For instance, one organization was processing expense reports at over 300 separate locations. 
These decentralized operations can (1) lead to duplication of effort because each unit has to be responsible for similar processes, such as reimbursement and expense processing, (2) reduce opportunities to achieve economies of scale, (3) make organizationwide travel policy enforcement more difficult and inconsistent, and (4) hinder the organization from gathering and maintaining organizationwide travel data. Leading organizations realized that they could cut costs and improve service by having central, organizationwide sources for making travel arrangements and for processing expense reports. They also established a central travel management group to oversee organizationwide travel and to establish, monitor, and enforce travel policies. As processing locations were consolidated, the organizations found that they were able to reduce costs and cycle time. They also began to maintain travel data on the organization as a whole. These corporate travel data can be particularly helpful in negotiating rates with vendors. The leading organizations also reduced the number of travel agencies that they were using. There are several benefits that can be gained from this. For instance, this can (1) assist in uniform monitoring and enforcing of the organization’s travel policies, (2) ensure consistency in how services are provided, and (3) provide management reports on the travel patterns of both individual travelers and the organization as a whole. This travel information is particularly helpful in monitoring policy compliance, tracking travel trends for negotiating rates with travel vendors, and for comparing actual to reported expenses. Simplification The second strategy used by leading organizations was to simplify operations, for both the traveler and the organization. Such simplification includes eliminating the need for front-end travel approval and consolidating all expense reporting on one form. 
Two of the organizations even automated the expense reporting process once they had decided on a streamlined reporting structure. A travel manager from one of the leading organizations noted that prior to reengineering, up to seven signatures were required to approve one expense report; now the expense report is automated and the only approval step occurs at the back end of the process after the voucher has been processed. Consolidating information also cuts cycle time, makes it easier to track costs, and provides easier access to data because all information is maintained in one central location. Leading organizations also simplified and streamlined operations by mandating the use of charge cards for all transportation and lodging expenses, as well as for cash advances, cost areas that can account for 80 to 90 percent of all travel expenses. One organization, in fact, requires an explanation for any instance in which the corporate charge card is not used for at least 90 percent of all business travel expenses. A key benefit of using a corporate charge card is eliminating advance processing costs and cycle time. Under the old system, a traveler would have to spend time filling out an advance request form and getting it approved by a supervisor. The organization would also have to keep an amount of petty cash on hand at various cash windows. And the organization had to track and reconcile each cash disbursement that occurred. By mandating charge card usage for cash advances, an organization can eliminate the processing time and costs for getting the advance, no longer has to maintain petty cash at cash windows, and can conduct one reconciliation for all travel expenses. Other benefits of using a corporate charge card are that it provides greater cash management and establishes better information management. 
Automation and Integration Finally, once leading organizations had assessed and consolidated their current processes, they looked to use automation to further simplify and streamline operations. They integrated expense reporting with travel expense processing, built policy conformance checks into the travel system, reimbursed travelers electronically, and developed a management information system to maintain the travel data that were being gathered. Maintaining this information gives an organization the specific information it needs when negotiating rates with travel vendors and setting predetermined travel costs. It also helps to track and enforce policy compliance and provides greater assurance of data integrity. As mentioned previously, two of the leading organizations we talked to developed an automated expense reporting system as part of their consolidation and simplification efforts. The expense reporting systems they developed are user friendly and provide various aids to the traveler, including calculating expense totals and maintaining current per diem rates. One organization’s system builds policy compliance into the traveler’s expense reporting by using a series of prompts and questions to highlight exceptions to policy. The system prompts the traveler to provide reasons whenever a response deviates from policy, such as using a noncontracted vendor or exceeding per diem rates. The system highlights the exceptions to the approving supervisor, who must approve all of them before reimbursement can occur. The system also produces a report that highlights to senior management all approved exceptions. Another benefit of automation is the reduced cycle time provided by electronic reimbursement. For instance, prior to implementing their automated systems, it took two leading organizations over 3 weeks to reimburse travel expenses. Now a traveler can travel one day, submit an expense report the next day, and be reimbursed the following day. 
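The built-in policy conformance checks described above can be sketched as a simple rules pass over an expense report. The sketch below is a hypothetical illustration only: the field names, the contracted-vendor list, and the $150 lodging cap are invented, not taken from any organization's actual system.

```python
# Hypothetical sketch of automated policy-conformance checks for an
# expense report. All field names and policy limits are invented.
CONTRACTED_VENDORS = {"Contract Air", "Contract Hotel"}
LODGING_PER_DIEM = 150.00  # hypothetical nightly cap

def policy_exceptions(expense):
    """Return the list of policy exceptions the traveler must explain;
    the approving supervisor must sign off on each before reimbursement."""
    exceptions = []
    if expense["vendor"] not in CONTRACTED_VENDORS:
        exceptions.append("noncontracted vendor: " + expense["vendor"])
    if expense["lodging_per_night"] > LODGING_PER_DIEM:
        exceptions.append(
            "lodging %.2f exceeds per diem %.2f"
            % (expense["lodging_per_night"], LODGING_PER_DIEM)
        )
    return exceptions

report = {"vendor": "Budget Inn", "lodging_per_night": 175.00}
print(policy_exceptions(report))
# ['noncontracted vendor: Budget Inn', 'lodging 175.00 exceeds per diem 150.00']
```

In the system the testimony describes, exceptions like these would also roll up into a report for senior management rather than blocking the traveler outright.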
A final benefit of automation is that all travel information can be maintained in a central repository. As one travel manager from a leading organization noted, “travel management is really about information management.” The information that is gathered can come from a variety of sources, including the charge card company, booking information from the travel agency, and expense information from the expense reporting system. This information is useful to analyze and compare what was booked, what was charged, and what is claimed. Issues Facing the Government as It Looks to Improve Mr. Chairman, a question you or others may be asking now is whether federal agencies can match what these leading organizations have done. The answer is yes, but there are many factors and issues, ranging from governmentwide policy, technical, and regulatory issues to agency-specific union and culture issues, that have to be taken into account. I would like to highlight a couple of the key issues. The first issue facing agencies is the lack of accurate, up-to-date information related to travel costs and processes. Such baseline information is essential to measure progress and to ensure that the organization is focusing its improvement efforts on the most critical areas. Without accurate baseline information, organizations can waste valuable time and effort investigating technological solutions without truly knowing what process problems they are trying to solve. In addition, if the organization does not know where it is starting from, it is very difficult to measure what progress has been made. One travel manager from a leading organization summed it up by noting that you can’t travel cheaper until you know exactly how you travel. It appears that many federal agencies may be going ahead with improvement projects, including the acquisition of automation, without first assessing what their current situation is. 
For example, in response to our survey, 25 agencies said that they recently revised their travel processes, but only 11 of these agencies reported that they had assessed their current processes. As we have highlighted in previous reports, the risk of automating without analyzing current processes is that hardware and software may be acquired to automate the inefficient processes that are already in place. The Justice Management Division (JMD) within the Department of Justice, for instance, recently acquired a travel system to streamline operations by producing travel authorizations and vouchers and providing for electronic approval of these documents. JMD plans for this system to be fully integrated with the financial system, but that integration has not yet occurred. As a result, the travel system produces a hard-copy version of the voucher, and information from the voucher must then be manually reentered into the financial system. Such duplication is inefficient and introduces a risk of data error during reentry. As agencies look to automate their travel systems they will also have to ensure that they incorporate adequate controls, as noted in Title 2 and Title 7 of GAO’s Policy and Procedures Manual for Guidance of Federal Agencies, to ensure the integrity of the data. We have issued several reports emphasizing that improvements to streamline employee travel payment processes should be made only within a framework of adequate, cost-effective controls that reasonably ensure that payment transactions are properly authorized and sufficient records of these transactions are maintained. One area where this has drawn particular attention is in the approval of authorizations and vouchers through the use of electronic signature.
We have previously reported that to provide adequate safeguards, an electronic signature should be (1) unique to the signer, (2) under the signer’s sole control, (3) capable of being verified, and (4) linked to the data in such a way that if the data are changed, the signature is invalidated. The National Institute of Standards and Technology (NIST) has established procedures for the evaluation and approval of certain electronic signature techniques to ensure the integrity of the data and compliance with the previously mentioned criteria. Several federal pilots, including the Corps of Engineers, the Department of Energy, and DOD, are currently working with us and NIST to address these concerns and develop standardized systems that can be used by other agencies. Another factor that will have to be addressed as agencies look to reengineer travel is the federal travel regulations (FTR), which govern how federal travel is to be conducted. For instance, the FTR say that a traveler must obtain both a travel authorization (pre-trip) and travel voucher (post-trip) and that travel approval for both must be given by an authorized official. In its report on improving governmentwide travel management, the JFMIP travel improvement task force made nine recommendations for improving how TDY travel was processed. Of these nine recommendations, the task force estimated that eight would require some regulatory change, and the final recommendation will require both legislative and regulatory changes. Federal Efforts Show Promise All of this is not to say that improvements have not been made, or that little is being done in the federal government. On the contrary, there is a great deal of momentum for changing how travel is arranged and processed. For instance, 50 agencies in our study said that they planned to implement a revised travel process or that they were planning to revise in the near future. 
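The fourth electronic-signature criterion listed above, that changing the data invalidates the signature, is straightforward to illustrate with a keyed hash. This is a sketch only: NIST-approved electronic signatures use asymmetric techniques rather than the shared-key HMAC shown here, and the voucher encoding is invented.

```python
# Sketch: binding a signature to voucher data so that any change to the
# data invalidates the signature. HMAC is used purely for illustration;
# approved signature techniques are asymmetric, so the verifier does
# not need the signer's private key.
import hashlib
import hmac

def sign(key: bytes, voucher: bytes) -> str:
    return hmac.new(key, voucher, hashlib.sha256).hexdigest()

def verify(key: bytes, voucher: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(key, voucher), signature)

key = b"signer-private-key"          # under the signer's sole control
voucher = b"trip=1234;total=412.50"  # invented voucher encoding
signature = sign(key, voucher)

print(verify(key, voucher, signature))                    # True
print(verify(key, b"trip=1234;total=912.50", signature))  # False
```

Changing even one byte of the voucher (here, the claimed total) produces a different digest, so the stored signature no longer verifies, which is exactly the data-linkage property the criteria require.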
Some federal agencies have already begun to implement many of the best practices and reduce administrative costs. IRS, for instance, processed 83 percent of its fiscal year 1995 travel vouchers using an automated travel system. Travelers enter information into the travel system, and this information is transmitted to a supervisor who approves it electronically. The travel system is integrated with IRS’ financial system, where the travel information is processed once approval has been given. The information is then uploaded into Treasury’s system for reimbursement. There are also several federal agencies that have initiated pilots, some quite aggressive, that demonstrate the improvements that are possible. These efforts include the following: A Forest Service improvement team assessed its processes and found that almost half of its processing steps added no value to the processing of a travel voucher. It has now made several recommendations about how travel processing can be improved. The State Department studied its travel process and found that it could reduce its indirect costs by $18 to $72 per trip. State also received waivers from the FTR and developed one form to be used for both travel authorizations and vouchers. An internal GSA improvement team has proposed, and is beginning to move towards, an even more streamlined approach for GSA in which all paper travel documents would be eliminated. Other agencies that have ongoing pilots include the Departments of Transportation, Defense, and Energy. In addition to these agency-specific efforts, the JFMIP travel improvement task force, made up of representatives from several agencies across government, has assessed both TDY and permanent relocation travel and estimated that hundreds of millions could be saved by implementing a number of key recommendations.
These recommendations mirror many of the best practices of leading organizations, including requiring the use of a corporate charge card and consolidating and automating travel data. What Needs to Be Done In summary, Mr. Chairman, there are many things that can be done to move the government closer to the performance of leading organizations. First and foremost, agencies need to assess their costs and processes and establish a baseline of current performance. As I mentioned earlier, tremendous gains are possible by rethinking and redesigning travel management. However, it will be difficult for agencies to decide where to start and to measure progress until they assess where they are now. Some of the necessary information will be gathered as part of the requirements of the Chief Financial Officers (CFO) Act, which requires that agencies provide complete, reliable, and timely information regarding the management activities of the agency. However, agencies will still need to work to develop and assess other travel-related information. As agencies develop this baseline, they should also look for areas where operations can be streamlined and consolidated. We also strongly urge agencies to study and implement the practices and approaches identified by the JFMIP travel improvement team. Everyone should eventually be at or near the savings levels offered by JFMIP; IRS and the Forest Service have already shown that achieving these levels is possible. However, reaching this goal is only a start. As the travel improvement team noted, the improvements they recommend are just the beginning. Continual assessment and improvement will help agencies move even closer to the results achieved by leading organizations. Finally, agencies should always be looking for new ways to build and learn. Such learning can occur on two levels. First, agencies can learn from the successes and failures of other organizations, both private and public. 
Second, they can pilot projects of their own, build on the lessons that they learn, and then look to share this information with others. In conjunction with agency efforts, GSA, as the government’s primary manager of travel policy, should take the lead to oversee the various travel improvement efforts that are planned or underway. Such oversight may include the establishment of travel data standards, a cross-services directory, and an applications directory. GSA should also form a users group to facilitate the sharing of knowledge and information. Such a group, in coordination with other interested parties, including JFMIP and the CFO Council, will go a long way to speeding the successful application of the practices and guarding against redundant actions. Finally, GSA needs to assess and revise the FTR based on the suggestions of JFMIP and lessons that are learned. In addition, we encourage the ongoing interest, support, and oversight in this area by congressional committees. The progress of agencies and GSA should be monitored to ensure that all are moving towards the improvements listed here and in the JFMIP report—helping to get higher, better value for the public’s dollar by operating more efficiently. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other members of the Subcommittee may have at this time. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. 
General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO discussed efforts for improving governmentwide travel management, focusing on a comparison of civilian federal agencies' and private-sector travel management practices. GAO noted that: (1) private-sector organizations have cut travel processing costs and time by consolidating travel management and processing centers, eliminating unnecessary review layers, simplifying travel processing, and streamlining and automating the expense reporting process; (2) many federal agencies have not identified their administrative travel costs and processes; (3) many federal agencies use numerous processing centers, require multiple travel documents, and lack automated systems that interface with their financial systems; (4) private-sector organizations approach travel cost reduction by assessing travel management as part of the larger financial management system, benchmarking themselves against other organizations, and instituting a common set of best practices; (5) legislative and regulatory changes may be needed for federal agencies to implement some travel management improvements; and (6) some agencies have implemented changes or initiated pilots to improve travel management.
Background

DOD is one of the nation's largest employers—employing approximately 1.4 million active duty military personnel and 1.2 million reservists. In addition, 2 million retirees receive pay and benefits from the department. DOD is also the largest employer and trainer of young adults in the United States, recruiting about 200,000 individuals into active duty in 2004—the majority of them recent high school graduates. Although DOD competes with academia and other employers for these qualified people, the military's distinctive culture and job experience are unlike those of any other government or private sector employer. To maintain national security, DOD must meet its human capital needs by recruiting, retaining, and motivating sufficient numbers of qualified people. The Office of the Secretary of Defense, Personnel and Readiness is principally responsible for establishing active duty compensation policy. However, reservists fall under the active duty compensation policy if they have been mobilized for active duty. The department also sponsors regular studies on military compensation called the Quadrennial Review of Military Compensation—most recently completed in 2002—that typically focus on specific issues like flexibility in compensation. In 2005, the Secretary of Defense formed a committee to study military pay in order to seek ways to maintain a cost-effective, ready force. Although the structure of the military compensation system has been largely unchanged since the end of World War II, the system has been enhanced over time, particularly since the advent of the all-volunteer force in 1973, by the addition of various pays, benefits, and tax preferences. Currently, the system is a complex mix of pays, benefits, and tax preferences—about a third of which are deferred until after the completion of active duty service.
In the 1970s, DOD and Congress adopted the concept of "regular military compensation"—which is defined as the sum of basic pay, allowances for housing and subsistence, and the federal tax advantage—to describe the foundation of servicemembers' cash compensation, which can be used to compare military and civilian pay. Basic pay—which is predicated on rank and tenure of service—is the largest component of regular military compensation. In addition to regular military compensation, there are over 60 authorized special and incentive pays—generally offered as incentives to undertake or continue service in a particular specialty or type of duty assignment—as well as the combat zone tax exclusion, which generally makes income earned while serving in a combat zone nontaxable. Furthermore, DOD offers a wide range of benefits, many of which are directed at members with family obligations. DOD believes benefits are central to morale and readiness as well as important in providing members with a quality of life that helps them cope with the sacrifices they make.

DOD's Compensation System Lacks Transparency to Identify Total Costs and How Compensation Is Allocated

Decision makers in Congress and at DOD do not have adequate transparency over total costs for providing military compensation to active duty servicemembers in terms of how compensation is allocated in the near term, whether compensation investments are cost effective in meeting recruiting and retention goals, how much changes to compensation will cost in the long term, and whether compensation costs are affordable and sustainable in the future. Lack of transparency over costs is due in part to the sheer number of pays and benefits that make up the military compensation system and to the lack of a single source showing the total cost of compensation. Moreover, the lack of principles to guide military compensation policy is a long-standing problem for DOD.
Cost of Compensation Is Scattered Across Many Budgets

A total cost to compensate servicemembers does not exist in a single source for decision makers to view, and transparency is further hindered by the sheer number and types of pays, benefits, and tax preferences in the military compensation system. Good business practice requires adequate transparency over investments of resources, especially in times of fiscal constraint. In a typical civilian firm, managers would know the costs of compensation, among other investments such as capital and technology, in order to make decisions on the most efficient use of resources, because decision makers have to consider both the obvious and implicit costs of their actions. Furthermore, federal accounting standards are aimed at providing relevant and reliable cost information to assist Congress and executives in making decisions about allocating federal resources. Therefore, we believe it is good business practice for decision makers to establish transparency over total compensation costs, including the long-term cost implications of current decisions. Because of the lack of a single source of compensation costs and the number and types of pays and benefits that combine to make up the compensation system, we had to gather information from multiple sources to compile our estimate of the total costs to provide military compensation. Funding for the numerous components of compensation resides in different budgets (see table 1). For example, the funding for cash compensation that servicemembers receive today, such as basic pay and the housing allowance, is in DOD's military personnel budget. Despite its name, this title does not include all of the funding provided for military compensation and benefits. Funding for noncash benefits, such as health care and education assistance, is displayed partially in the department's Operation and Maintenance budget as well as in the VA's budget.
And deferred benefits like retirees' health care, which represent a significant portion of the costs of compensation, have long-term cost implications and are not adequately visible. Funding for military retirement is budgeted for on an accrual basis in the military personnel budget, as is health care for retirees over 65 years of age and their dependents. However, health care funding for retirees less than 65 years of age and their dependents is appropriated annually in the Operation and Maintenance budget as part of the Defense Health Program. In addition to DOD's deferred benefits, some servicemembers are eligible for veterans' benefits after they leave the military. These benefits, such as health care, pensions, and other compensation, are funded annually through the VA's budget. Furthermore, the lost federal tax revenue—the federal tax benefit servicemembers receive because part of their cash compensation is nontaxable—is not displayed in a DOD budget exhibit for decision makers to consider when assessing new proposals or changes to the compensation system. This amount is significant: we estimate that it totaled about $6.4 billion in fiscal year 2004. We should also note that our calculations underestimate the annual impact of compensation costs to the federal government, because we did not include significant outlays for the unfunded liabilities for current retirees' pay and health care benefits that are in the Department of the Treasury's annual budget, which totaled $34.4 billion in fiscal year 2004. These unfunded liabilities are being paid out of current appropriations, because the government has not always set aside monies for future liabilities. In fact, the government did not start setting aside monies for retirement pay until fiscal year 1985 or for health care benefits for retirees until 2003.
Since the costs of compensation are scattered across the federal budget, no one organization has visibility over the total costs of military compensation. The lack of transparency over total costs to compensate servicemembers impacts decision makers' ability to manage the system, including (1) assessing the long-term cost implications, (2) determining how best to allocate resources to ensure an optimum return on investment, and (3) assessing the efficiency of the current compensation system in meeting DOD's recruiting and retention goals. The current compensation system is made up of a number of benefits and over 60 different pays and allowances that have been added piecemeal over the years to address specific needs. The main problem with this piecemeal approach is that it does not consider the system as a whole—and, as a result, new initiatives are considered in isolation, often with little consideration of how they will contribute to the ability of the military compensation system to efficiently meet recruiting and retention goals and of whether resources are being allocated to ensure the optimum return on compensation investments within current and expected resource levels. For example, in 2000 Congress enhanced retirement benefits to include health care for retirees over 65 years old and their dependents. This additional benefit came at significant cost, about $6.5 billion in accrual funding in fiscal year 2004 alone, and with little evidence of whether, and if so how, it contributes to the efficiency and effectiveness of the compensation system in terms of recruiting and retention.

Lack of Principles to Guide Military Compensation Policy Is a Long-standing Problem

As one assessment of the system observed, "the relationships between the individual components of compensation and their systemic interrelationships as a coherent structure remain largely implicit rather than explicit.
Virtually every aspect of military activity has explicit doctrines, principles, and practices embodied in field manuals, technical manuals, and various joint publications. Military compensation is noteworthy in its lack of such an explicit intellectual foundation." Furthermore, the Secretary of Defense tacitly admitted the difficulty of changing military compensation from inside the department when he formed an independent advisory committee to study compensation in 2005. Of particular concern to the department was the growth in entitlement spending for things like health care and the appropriateness of the mix of in-service and post-service compensation. However, Congress has taken certain measures to enhance post-service compensation or benefits that DOD has not requested or in some cases has discouraged, such as the expansion of concurrent receipts for retirees with disabilities. At the time of this report, the Secretary of Defense's committee was just beginning its work; however, the Secretary had requested that the committee provide an interim report by September 2005 and conclude its work by spring 2006. In order to achieve lasting and comprehensive change, organizations need sustained top leadership to make significant, systematic changes a reality and to sustain them over time. We recently suggested that, to achieve such leadership, a Chief Management Official with responsibility for DOD's overall business transformation efforts should be created. We believe the long-standing lack of explicit principles and the difficulty of changing military compensation from inside DOD are another example of how a Chief Management Official could benefit DOD.
Heavy Reliance on Benefits May Not Be Appropriate for Meeting Key Human Capital Goals or Sustainable in the Long Term

The federal government's total costs to compensate its active duty servicemembers have increased significantly in the past 5 years, and given that costs are heavily weighted toward noncash and deferred benefits, the structure of the current compensation system raises questions about the reasonableness, appropriateness, and long-term affordability and sustainability of DOD's approach to compensating its military workforce. Between fiscal years 2000 and 2004, overall compensation costs increased from $123 billion to $158 billion—or about 29 percent, in 2004 dollars. Increases in costs were driven primarily by basic pay, allowances for housing, and health care benefits. Furthermore, over half of the mix of compensation costs is in the form of noncash and deferred benefits, which stands in contrast to private sector and federal civilian organizations that tend to rely more heavily on cash pay and less on benefits. Military analysts have noted that benefits, especially deferred benefits like retirement, are a relatively inefficient way to influence recruiting and retention compared to cash pay.

Compensation Costs Have Significantly Increased

The total cost to the government to provide compensation for active duty members grew about 29 percent, adjusted for inflation, between fiscal years 2000 and 2004, as shown in figure 1. Over this same period, the number of active duty troops remained relatively constant at about 1.4 million people, but military compensation costs grew from $123 billion to $158 billion annually, in fiscal year 2004 dollars. We estimate that the average cost of compensation per servicemember (i.e., both enlisted personnel and officers) in 2004 was about $112,000. Three things are important to understand about our estimate.
First, it is an average of what it cost the government to compensate servicemembers, not what servicemembers "receive in their paycheck." Individual cash compensation will vary significantly based on rank and other factors. Furthermore, the value of benefits also varies significantly depending on individual circumstances. Second, because agencies other than DOD provide compensation to servicemembers, our estimate includes costs appropriated to the Department of Veterans Affairs, the Department of Education, and the Department of Labor, among others, as well as the lost tax revenue resulting from special tax advantages received by military personnel. Third, it does not represent the marginal cost of adding servicemembers, because it does not include significant costs for acquiring and training military personnel. Such costs are substantial: DOD officials told us that the cost for training can be as much as $36,000 per person if a broad range of training costs are included. Recently, other defense analysts have made attempts to estimate the cost of compensation. While these estimates vary based on what costs are included in the analyses, the trends are the same. For example, the Congressional Budget Office (CBO) estimated that in 2002 compensation cost about $99,000 per active duty servicemember. DOD's Office of Program Analysis and Evaluation calculated all military compensation costs to be approximately $117,000 per servicemember in fiscal year 2004. Other DOD officials have done similar work that included compensation costs; for example, officials in the Office of the Secretary of Defense told us they recently estimated DOD's cost to add an additional servicemember at about $109,000.
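The arithmetic behind the per-servicemember average can be reproduced directly from figures stated above; the following Python sketch is our illustration of that calculation (not GAO's underlying methodology), using the rounded FY2004 total of $158 billion and the approximately 1.4 million active duty members:

```python
# Rough arithmetic behind the "about $112,000 per servicemember" estimate,
# using only rounded figures cited in this report (fiscal year 2004).
total_cost_fy2004 = 158e9   # total compensation cost, 2004 dollars
active_duty = 1.4e6         # approximate active duty end strength

avg_cost = total_cost_fy2004 / active_duty
print(f"Average cost per servicemember: ${avg_cost:,.0f}")  # about $112,857

# Inflation-adjusted growth in total costs, FY2000 to FY2004
growth = 158 / 123 - 1
print(f"Real growth, FY2000-FY2004: {growth:.1%}")  # about 28.5% with these rounded inputs
```

With these rounded billion-dollar inputs the growth works out to roughly 28.5 percent, consistent with the "about 29 percent" figure cited in the text, which reflects unrounded budget data.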
Navy officials have done detailed work to estimate the cost of manpower to the Navy and concluded, for example, that the standard programming rate for officers in pay grade O-4 was about $126,000 and for enlisted personnel in pay grade E-7 about $79,000; however, these costs do not include health care accrual costs for retirees.

Military Compensation Components Driving Total Cost

Growth in military compensation is attributed primarily to increases in (1) basic pay, (2) allowances for housing, and (3) health care costs. Basic pay, found in the military personnel budget, increased from $38.4 billion to $47.4 billion from fiscal years 2000 to 2004—an increase of about 23 percent—and is the largest component of the compensation system. DOD has asked for, and Congress has supported, sizable across-the-board raises in basic pay in order to address concerns that military members may be underpaid compared to comparably educated civilian counterparts. From fiscal years 2000 to 2004, the average pay increases for servicemembers exceeded average wage increases for all private sector employees. Allowances for housing, found in the military personnel budget, increased by about 66 percent, from $7.3 billion to $12 billion, between fiscal years 2000 and 2004. Prior to fiscal year 2001, DOD's policy was for members to pay 15 percent of their housing costs out of pocket; however, in fiscal year 2000, DOD introduced the "zero out of pocket" initiative, which increased servicemembers' housing allowances to eliminate their out-of-pocket expenses by fiscal year 2005. This effort was meant to encourage servicemembers to live off base and is consistent with DOD's stated preference that servicemembers live in civilian housing.
Health care costs, including the costs for active duty servicemembers and their dependents as well as accrual costs for retirees and their dependents, increased from about $13.8 billion to $23.3 billion between fiscal years 2000 and 2004, an increase of about 69 percent. This increase is attributable, in part, to the fiscal year 2002 expansion of health care benefits to cover retirees over 65 years of age and their dependents for life. DOD raised concerns about expanding entitlements, such as health care, that do not provide it leverage over readiness. Also contributing is the higher-than-average increase in the cost of medical care. A 2003 CBO study projected that if DOD's medical spending increases at the same rate as per capita medical spending in the United States as a whole, it could reach as much as $52 billion, or about $38,000 per servicemember in 2002 dollars, by 2020. These costs include (1) current appropriations from the Operation and Maintenance, Defense Health Program budget for current servicemembers and their dependents and (2) estimated accrual costs for retirees and their dependents from the DOD actuary. Given CBO's projections of substantial growth in future costs and the 69 percent increase over the past 4 years, serious questions arise about the affordability and sustainability of the current compensation system. Officials within the department told us that they sought increases in basic pay and housing allowances because they think investments in these types of compensation are more efficient in meeting the department's recruiting and retention goals. Furthermore, continued, significant increases in these areas—especially health care costs, which could exceed $50 billion annually by 2020—raise questions about the long-term affordability and sustainability of the current compensation approach.
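The 69 percent increase over 4 years implies a double-digit annual growth rate in health care costs. The sketch below is our own back-of-the-envelope calculation from the two endpoints cited above, not a CBO or DOD projection:

```python
# Implied compound annual growth rate (CAGR) of DOD health care costs,
# from $13.8 billion (FY2000) to $23.3 billion (FY2004) -- endpoints from the report.
start, end, years = 13.8, 23.3, 4

total_growth = end / start - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Total growth, FY2000-FY2004: {total_growth:.0%}")  # about 69%
print(f"Implied annual growth rate: {cagr:.1%}")           # about 14%
```

A cost base compounding at roughly 14 percent a year doubles in about 5 years, which illustrates why CBO's long-range projections raise affordability concerns.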
The summary of military compensation components displayed in table 2 shows the percentage changes in costs between fiscal years 2000 and 2004. In addition, special and incentive pays grew about 30 percent—from $3.3 billion to $4.3 billion—from fiscal years 2000 through 2004. This increase is substantial on a percentage basis, but special and incentive pays are not driving the overall budget trends because the amount is a relatively small portion of the overall compensation cost. By our calculations, these special pays represent only about 6 percent of cash compensation, and about 3 percent of total compensation, on average. DOD has more than 60 different special pays that fall into this category, including reenlistment bonuses and hazardous duty pay, pays for specific duties like aviation and medical service, and incentives for servicemembers to take certain assignments, among others. Because most compensation is determined by factors such as tenure, rank, location, and dependent status, these special pays and allowances are the primary monetary incentives DOD has for servicemembers other than promotions.

Heavy Emphasis on Benefits Reflects DOD Commitment to Servicemembers and Their Families, but Is Unlike Civilian Counterparts and Inefficient for Recruiting and Retention

Noncash and deferred benefits have made up just over half of the total costs of providing military compensation since 2000. DOD has historically viewed noncash benefits as critical to morale, retention, and the quality of life for servicemembers and their families. In April 2002, DOD issued a strategic human capital plan addressing quality-of-life issues and benefits.
According to DOD officials, the plan, entitled A New Social Compact: A Reciprocal Partnership Between the Department of Defense, Service Members and Families, is needed to ameliorate the demands of the military lifestyle, which includes frequent separations and relocations, and to provide better support to servicemembers and their families. It emphasizes the need to maintain programs and services viewed as benefits by servicemembers. Furthermore, we recently reported that DOD has instituted a number of benefits that reflect demographic changes in the active duty force—primarily the increase in servicemembers with family obligations. Compared to civilians in government and in the private sector, the military's compensation costs are much more heavily weighted toward benefits and deferred compensation like retirement and health care for retirees. Efficiency, as defined by DOD, is the amount of military compensation—no higher or lower than necessary—that is required to fulfill the basic objective of attracting, retaining, and motivating the kinds and numbers of active duty servicemembers needed. However, the efficiency of some benefits is difficult to assess because the value that servicemembers place on them differs and is highly individualized. It is generally accepted, and a recent study indicates, that some deferred benefits, such as retirement, are not valued as highly by servicemembers as current cash compensation.

Military Compensation System Is Weighted Toward Noncash and Deferred Benefits

In fiscal year 2004, noncash and deferred benefits made up about 51 percent of total compensation costs, on average. This means that it costs the government more to provide benefits and deferred compensation than current cash compensation. Of this, deferred benefits represented a significant portion of noncash compensation, as figure 2 shows. Since 2000, deferred benefits have made up about one-third of total compensation costs.
These benefits are the promise of future compensation—like retirement pay and health care as well as other benefits—for active duty servicemembers who retire with at least 20 years of service or who leave the force and become eligible for veterans benefits. Deferred benefits affect the current cost of compensation because monies must be set aside today to provide these benefits in the future, over the servicemember's lifetime.

Civilian Compensation Emphasizes Salary and Wages

While it is difficult to make direct comparisons between military and civilian compensation because of differences in the accessibility of some benefits (e.g., health care and retirement) to private sector employees, it seems clear that, in general, DOD compensation is weighted much more heavily toward benefits. Some private sector organizations and the federal government provide benefits similar to those provided by the military, such as retirement, health care, paid time off, and life insurance; however, military benefits in some instances far exceed those offered by the private sector, such as free health care and housing as well as discount shopping. In contrast to the mix of compensation for the military, figure 3 shows that civilian counterparts in the private sector and the federal government receive, in broad terms, most of their compensation in cash salary and wages. Civilians in private industry, on average, received about 82 percent of their compensation in salary and wages, while federal government civilians received about 67 percent. Thus, one-third or less of these workers' compensation is typically in the form of benefits or deferred compensation.

Current Mix Is Highly Inefficient for Recruiting and Retention

The mix of compensation is highly inefficient for meeting near-term recruiting and retention needs. Cash pay today is generally accepted as a far more efficient tool for recruiting and retention than future cash or benefits.
The preference for cash is particularly strong among young adults, so this is especially true for the military, whose active duty workforce is composed mainly of people in their twenties. For example, a recent study offering servicemembers a choice of lump-sum payments or annuities found that a vast majority of servicemembers preferred a lump-sum cash payment to deferred compensation in the form of an annuity. According to the study, more than 50 percent of officers and 90 percent of enlisted servicemembers had discount rates of at least 18 percent; that is, they value $1 received in 20 years at only about 4 cents today. The study also found that the preference for cash today was particularly strong among younger servicemembers. This "personal discount rate" has important implications for military compensation policy, especially when it comes to considering deferred benefits or compensation. Not only do people heavily discount the value of future benefits, but fewer than one in five servicemembers will receive the most lucrative and costly benefits offered by the military, specifically active duty retirement pay and health care benefits. This is because only 17 percent of those who join the military will ultimately serve a 20-year career and thus earn nondisability retirement pay and health care for life. Figure 4 illustrates that, based on current actuarial assumptions, 47 percent of new officers and 15 percent of new enlistees attain 20 years of active duty service. Thus, a significant portion of the compensation budget—about 17 percent—is being allocated to provide future retirement pay and health care for current active duty members, even though a relatively small percentage of the force will ultimately receive these benefits.
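The "4 cents" figure follows from standard present-value arithmetic at the 18 percent discount rate cited in the study; a minimal sketch:

```python
# Present value today of $1 received 20 years from now, discounted at the
# 18 percent personal discount rate cited in the study discussed above.
rate, years = 0.18, 20
pv = 1 / (1 + rate) ** years
print(f"$1 in {years} years is worth about ${pv:.3f} today")  # about $0.037, i.e., roughly 4 cents
```

The same formula shows why deferred benefits are an expensive way to deliver perceived value: the government must fund the full dollar, while a servicemember with this discount rate perceives only a few cents of it.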
Taking the personal discount rate together with the relatively few servicemembers who earn retirement benefits, defense compensation analysts have suggested that this is an inefficient allocation of the overall compensation investment. This insight is not new and is likely a key reason why private sector companies have such a high proportion of cash in their compensation mix. Thus, DOD's current approach to compensation raises serious questions about the reasonableness and appropriateness of continuing to weight compensation toward noncash and deferred benefits. However, DOD officials told us they feel that this efficiency argument about entitlements is often outweighed by the desire in DOD and in Congress to "take care" of servicemembers and their families. This makes adjusting compensation extremely difficult for decision makers, especially amid concerns of eroding benefits, as discussed later in this report. When concerns have arisen, benefits have often been added with little consideration of what they will cost, how they compare with overall market data, whether costs are affordable and sustainable over the long term, or their effectiveness and return in terms of recruiting or retention. The cumulative effect of this approach raises serious questions about the reasonableness, appropriateness, affordability, and sustainability of the current military compensation system in light of 21st century trends and challenges.

DOD's Lack of an Effective Communication and Education Effort on Compensation Has Allowed Servicemembers' Misperceptions and Concerns about Their Compensation to Perpetuate

According to DOD surveys and analysis of our focus group findings and survey data, many servicemembers are dissatisfied and, in some cases, harbor significant misperceptions about their pay and benefits, in part because DOD does not effectively educate them about the competitiveness of their total compensation packages.
This has led to an atmosphere of perpetual dissatisfaction and misunderstanding about compensation among servicemembers. Servicemembers tend to be more satisfied with their total compensation packages than with specific aspects of their pay and benefits, yet they continue to express dissatisfaction with specific aspects of their compensation. In our focus groups, servicemembers had misperceptions about compensation; specifically, (1) they underestimated the costs of their compensation and how it compares to civilian wages, (2) they were unaware of or confused about certain aspects of their compensation, and (3) they were concerned about erosion of benefits.

Servicemembers Have Found DOD's Efforts to Educate Them about Their Compensation Unreliable and Difficult to Access

It is industry best practice for employers to educate employees about the value of the pay and benefit components of their compensation. We also believe that by communicating the value of the compensation investment and ensuring it is understood, DOD would increase the return on its investment, because employees who know the value of their total compensation packages are more likely to be engaged and motivated in their work. In addition, past studies suggest that revealing more information about components of compensation has a greater impact on a component's satisfaction rate than the actual amount itself. Servicemembers, especially enlisted personnel, said they frequently relied on unofficial sources of information, such as word of mouth or service newspapers. Many servicemembers discussed how the official sources of information available are often difficult to access or appear to them to be dishonest or misleading. DOD makes various efforts to educate servicemembers by providing annual earnings statements (see app.
II for a sample of the Personal Statement of Military Compensation), providing online information on compensation, and providing servicemembers access to personnel specialists to answer questions. However, servicemembers stated that the online services frequently are down or that it is difficult to access services such as the "myPay" Web site. DOD officials acknowledge that access to the "myPay" Web site has been a long-standing, recognized problem resulting from efforts to ensure the security of the personal pay information available on the site. Over half of our focus groups commented on how unhelpful official sources are to them in understanding aspects of compensation. Many stated that the annual earnings statement, at times referred to disparagingly in our focus groups as the "lie sheet," was not believable because they do not understand how the amounts identified as their total compensation (noncash and deferred) were calculated. Enlisted members and officers told us that they felt recruiters and personnel specialists often gave misleading information or could not answer servicemembers' questions on compensation. Additionally, members often discussed how the lack of comprehensive communication and education on compensation is a problem, because they often find themselves unaware of certain additions or changes to their pay and benefits. Servicemembers suggested improving education on compensation by consolidating information on all pay and benefit elements into a single location and ensuring it is clearly accessible and easy to understand. DOD officials acknowledge that servicemembers generally do not realize the full value of their compensation and have misperceptions about it. The Defense Finance and Accounting Service is implementing tools to address these problems.
To date, it has developed a newsletter that provides information on changes to compensation or on aspects of compensation that are widely misunderstood or unknown, such as how to get a "myPay" Web site access code, as well as bulletins on specific topics of interest; these are sent to Army personnel, and the agency plans to expand the tools to the other services. The efforts DOD has made to date to explain the value and competitiveness of compensation stand in contrast to the substantial investment the department makes to recruit new members. As of fiscal year 2003, the department was spending over $13,000 per enlisted recruit for advertising, bonuses, incentives, and recruiter pay and support. We do not have comparable data on what the department spends to educate servicemembers on compensation, but DOD officials told us that it has not been a priority departmentwide, and DOD has never mounted a comprehensive campaign to explain the competitiveness of its compensation to servicemembers.

Servicemembers Tend to Be More Satisfied with Their Total Compensation Package than with Specific Aspects of Their Compensation

During the 1990s, the military benefit package was significantly enhanced in response to servicemembers' concerns about eroding benefits; yet servicemembers have continued to express dissatisfaction with many aspects of their compensation. In the 2002 Status of Forces Surveys of Active Duty Servicemembers, participants were asked to rate their satisfaction with specific components of their military compensation. As figure 5 shows, a substantial percentage of servicemembers were dissatisfied with numerous aspects of their compensation, including basic and special pays and housing and subsistence allowances. In more than half of our focus groups, servicemembers cited base pay, the subsistence and housing allowances, and special and incentive pays as sources of dissatisfaction, along with health care and other components.
In general, officers tended to be more satisfied with base pay than were enlisted personnel. Six of the eight focus groups with senior enlisted servicemembers expressed dissatisfaction with their pay, especially compared with junior officers, whom the senior enlisted servicemembers perceive as having less experience and as relying on them for on-the-job training. While members recognized that there have been improvements in the housing allowance with DOD’s recent effort to increase it, they complained that these increases have had little effect because they perceive that landlords raise rents by the same amount. Additionally, 8 of our 40 focus groups discussed how it is unfair that servicemembers with dependents receive a larger housing allowance than single members. Moreover, members had varying perceptions about special pays and incentives. Some were dissatisfied with the amount of special pays and thought they should be increased. Others were dissatisfied because it was unclear to them why everyone does not receive special pays. This is particularly true for senior enlisted pay grades that are ineligible for reenlistment bonuses. Furthermore, health care was most frequently discussed as a source of both dissatisfaction and satisfaction. While servicemembers were satisfied with the minimal or no cost of health care for themselves and their families, 31 of our 40 focus groups commented on their dissatisfaction with the quality of and access to health care. Additional reasons why servicemembers most frequently reported these components and others as sources of dissatisfaction are listed in figure 5. While officers were relatively satisfied with base pay, both officers and enlisted personnel expressed concern about how their pay compares with civilian pay and about the inadequacy of base pay for enlisted members. 
Servicemembers expressed considerable concern about rents rising following the recent increases in their housing allowance and about the inequity of higher housing allowances for members with dependents compared with those without dependents. While servicemembers reported high dissatisfaction with the subsistence allowance, many mistakenly believed the allowance was intended to cover both themselves and their families. Overall, both enlisted personnel and officers were dissatisfied with special pays because they do not understand why some members are eligible to receive them or how the amounts are set. Although servicemembers’ survey dissatisfaction rates for health care are relatively low, almost all 40 focus groups spoke extensively about the poor quality of, or difficulty in accessing, their health care. Enlisted personnel and officers often cited different reasons for dissatisfaction with family medical care, including not being able to maintain a rapport with one doctor and being unfamiliar with the billing processes for TRICARE. While the rate of dissatisfaction is relatively low, servicemembers felt the commissaries and exchanges have outlived their usefulness because the savings are relatively minor and the stores are less convenient for members living off base. (Notes to figure 5: Survey estimates have a margin of error of +/- 2 percent. Our focus group survey results are not generalizable because we did not use random sampling to collect the data. For more information on our focus group methodology, see app. I. We asked servicemembers separate questions about their satisfaction with medical and dental care. The numbers reflected above are servicemembers’ dissatisfaction with medical care. Nineteen percent of enlisted personnel and 12 percent of officers reported they were dissatisfied with their dental care, while 40 percent of enlisted personnel and 32 percent of officers reported they were dissatisfied with their families’ dental care.) 
Although servicemembers expressed dissatisfaction with certain pays and benefits, many were more satisfied when considering their compensation as a whole in the Status of Forces survey. DOD surveys in 2003 and 2004 showed that about 47 percent of servicemembers were satisfied with their overall cash compensation (i.e., base pay, allowances, and bonuses), significantly more than were satisfied with specific aspects of their compensation. This was evident during our focus groups as well: servicemembers were often more satisfied with their compensation overall than with specific aspects of it, such as the housing allowance. Despite their dissatisfaction with many aspects of their compensation, servicemembers expressed a clear preference for cash when asked whether they would make any changes to the compensation system. In 35 of our 40 focus group sessions, servicemembers were willing to give up noncash benefits if those benefits were replaced with cash. For example, servicemembers in our focus groups said that they prefer shopping at off-base discount stores over the commissaries and exchanges. Also, servicemembers in our focus groups said they would prefer the cash equivalent of their medical coverage so that they could obtain their own health care, given their dissatisfaction with the present system. Servicemembers, especially junior officers who said they do not intend to stay in the military for a full 20-year career, told us that they would prefer DOD to give them cash that they could invest toward their retirement. Comments like these, which we heard frequently, seem to support past studies indicating that servicemembers have a strong preference for cash compensation today. However, such personal preferences were offset by other concerns. 
Specifically, during 16 of our 40 focus group sessions, servicemembers expressed concern that an increase in cash would not equal the value of their current noncash or deferred benefits, or that shifting to more cash compensation might not be in the best interest of all members, especially junior enlisted personnel, who might not manage their finances well. Servicemembers in Our Focus Groups Expressed Certain Misperceptions and Concerns about Their Compensation During our focus group discussions, servicemembers (1) underestimated the cost of their compensation and how it compares to civilian wages, (2) were unaware of or confused about certain aspects of their compensation, and (3) were concerned about erosion of benefits. These findings suggest that a culture of dissatisfaction and misunderstanding about compensation exists among servicemembers. Underestimation of Total Compensation Servicemembers consistently underestimated how their pay compares to the private sector. Almost 80 percent of servicemembers participating in our focus groups reported in our survey of focus group participants that they believe they are paid less than their civilian counterparts. In addition, during the focus groups servicemembers frequently discussed their dissatisfaction with their military pay because they believe that they could make more “on the outside” as civilians. Moreover, when asked how much DOD spends on cash pays, retirement, and health care for them, 9 out of 10 servicemembers participating in our focus groups underestimated the cost of providing their compensation. While some specific skill groups could likely make considerably more in civilian jobs, such perceptions of noncompetitive compensation seem to be inaccurate in broad terms. The most recent Quadrennial Review of Military Compensation—a DOD commission that reviews military compensation—found that cash compensation compares favorably overall with civilian wages. 
Specifically, the review compared cash compensation (including the tax advantage but not special pays or benefits) with that of comparably educated civilians. It found that, on average, military pay was at the 70th percentile or higher of civilian wages. It should also be noted that this review of military compensation found that, based on historical data, when DOD pays servicemembers at around the 70th percentile of civilian wages it is competitive in the employment market—that is, DOD has generally not experienced recruiting or retention problems when compensating servicemembers at this level. In sum, this means that DOD seeks to pay servicemembers competitive cash wages compared with civilians and, at the same time, provides increasingly expensive benefits that are in most cases much greater than those provided by the private sector. Lack of Awareness or Understanding of Aspects of Compensation Although most servicemembers were aware that their compensation was a complex mix of cash and benefits, some were unaware of certain aspects of it; for example, some servicemembers did not know which retirement system they fell under or specifics about their retirement benefits. Some servicemembers expressed confusion about the repeal of the REDUX retirement system or did not realize that retired members and their dependents now receive health insurance for life under the military’s health care system, TRICARE. Also, servicemembers did not understand and had misperceptions about many components of their cash compensation. For instance, some enlisted members were unsure how special pays were allocated, while others were unfamiliar with the federal tax advantage they receive. Additionally, servicemembers frequently misperceived the subsistence allowance, which is designed to cover the member and not the family. 
Moreover, servicemembers often complained that they did not know how to access information about health care benefits for their families. In contrast to pay and benefits, focus group participants seldom raised deferred compensation as a reason for dissatisfaction or satisfaction, and few junior enlisted personnel included deferred benefits when describing their compensation. Some servicemembers expressed concern about losing deferred benefits that were implicitly promised to them as part of joining the military. In addition, a majority of the focus groups did not recognize veterans’ benefits as a component of their military compensation. Those servicemembers who did discuss veterans’ benefits focused on the home loan program. Concern about Eroding Benefits During the 1990s, some servicemembers expressed concerns that their benefits were eroding, particularly their health care and retirement benefits. In response to such concerns, the military benefit package has been significantly enhanced. In recent years, for example, Congress restored retirement benefits that had previously been reduced for some servicemembers, significantly expanded their retirement health benefits, and allowed concurrent receipt of disability and retirement pay. However, leaders in both the enlisted and officer ranks in our focus groups were concerned that their benefits have continued to erode despite these recent efforts. They often talked about retirement benefits worsening as well as about decreases in base services provided through the Morale, Welfare, and Recreation organization—such as discounted rentals of outdoor recreational equipment. Conclusions DOD is maintaining an increasingly expensive and complex military compensation system comprising myriad pays and benefits. 
With the costs scattered across the federal budget, decision makers within the administration and Congress have insufficient transparency over the total costs to compensate servicemembers, particularly with respect to deferred costs, such as TRICARE—which is projected to experience explosive growth in the future. Moreover, changes to the compensation system are made in a piecemeal fashion with an imprecise understanding of how the changes will affect the total cost of compensation or what return on investment decision makers should expect in terms of recruiting and retention. This lack of transparency is becoming a more urgent matter today as DOD and all federal agencies face tough choices ahead managing the serious and growing long-term fiscal challenges facing the nation. For DOD, these trade-offs could become as fundamental as investing in people versus investing in hardware—tough choices for a military with aging infrastructure and equipment that could have readiness implications in the future. Compiling comprehensive information about the total cost of compensation—as well as how it is allocated to cash and benefits—would be a crucial first step for the department and Congress to lay a foundation for future decisions. Without such information, decision makers at DOD and Congress do not know what it is costing the government to compensate servicemembers. Furthermore, DOD has not performed the analysis necessary to determine whether its current allocation to cash and benefits is reasonable or appropriate. With dramatic increases in compensation costs and an expanding budget—mostly resulting from supplemental appropriations from Congress to fund activities and operations related to the Global War on Terrorism—it is highly questionable whether the rising costs and current allocations are affordable and sustainable over the long term, especially when supplemental funding recedes. 
Again, because DOD’s compensation system lacks transparency over these issues, decision makers in the administration and Congress cannot adequately assess whether DOD’s current approach to compensation is most efficiently meeting its needs for both today and tomorrow. DOD also faces a sobering marketing challenge—convincing skeptical servicemembers that their compensation is competitive overall. The department’s efforts thus far have been ineffective: military members remain dissatisfied with key aspects of their compensation and harbor misperceptions, as well as concerns that their compensation is eroding or will erode in the future. Pay comparison studies conducted by DOD, however, show that military compensation is quite competitive even without considering benefits. Recent efforts to improve benefits for retirees have done little to address dissatisfaction among current members. This dilemma exists for two reasons: (1) while servicemembers do not want to lose benefits, they value future benefits much less than current cash; and (2) fewer than one in five of those servicemembers who begin military service will ultimately receive those benefits. Without more emphasis on marketing the value of cash pay as well as of the total compensation received overall, DOD will be unable to improve servicemember perceptions, which could have implications for future recruiting and retention efforts. Recommendations for Executive Action To improve transparency over total compensation; to ensure the compensation system is reasonable, appropriate, affordable, and sustainable; and to better educate servicemembers about the competitiveness of their compensation, we recommend that the Secretary of Defense take the following three actions: Compile the total costs to provide military compensation and communicate these costs to decision makers within the administration and Congress—perhaps as an annual exhibit as part of the President’s budget submission to Congress. 
In preparing the annual exhibit, DOD may want to work with the Office of Management and Budget. Assess the affordability and sustainability of the compensation system and its implications for readiness, as well as the reasonableness and appropriateness of the allocation to cash and benefits and whether changes in the allocation are needed to more efficiently achieve recruiting and retention goals in the 21st century. Develop a comprehensive communication and education plan to inform servicemembers of the value of their pay and benefits and the competitiveness of their total compensation package when compared to their civilian counterparts that could be used as a recruiting and retention tool. Matter for Congressional Consideration The Congress should consider the long-term affordability and sustainability of any additional changes to pay and benefits for military personnel and veterans, including the long-term implications for the deficit and military readiness. Furthermore, Congress should consider how best to proceed with any significant potential restructuring of existing military compensation policies and practices, including whether a formal commission may be necessary. Agency Comments and Our Evaluation DOD’s comments are included in this report as appendix III. DOD generally concurred with our recommendations but raised some technical concerns about the way we compiled compensation cost data. DOD partially concurred with our first recommendation to compile the total costs to provide military compensation and communicate these costs to decision makers. In its response, DOD noted that it agrees with the goal of making total compensation costs transparent to decision makers; however, the department noted that this may be a more appropriate issue for the Office of Management and Budget since the costs extend across four departments. 
As we noted in our report, lack of transparency over costs is in part due to the sheer number of pays and benefits that make up the military compensation system and the lack of a single source to show total cost of compensation. By establishing transparency of total military compensation costs, DOD would have a more complete picture of how its military members are being compensated and be in the best position to compile these costs for decision makers. DOD concurred with our second recommendation and stated that it is already engaged in multiple simultaneous efforts to assess the overarching military personnel compensation strategy. In addition, DOD said that it will continue to actively point out the impact of the legislative process to Congress as it did with concurrent receipt, the survivor benefit program, and expanding retiree health care. DOD partially concurred with our third recommendation to develop a comprehensive communication and education plan to inform servicemembers of the value of their pay and benefits and the competitiveness of their total compensation package when compared to their civilian counterparts. The department acknowledged that there is a perception that military compensation is underreported and undervalued, but pointed out that all of the services as well as the Office of the Under Secretary of Defense for Personnel and Readiness currently have multiple resources available for servicemembers, including Web sites, such as the Navy’s electronic pay and compensation calculator, and brochures, such as the Air Force’s compensation fact sheet. We believe, as discussed in our report, that these official sources are not effective, because of the continuing dissatisfaction with compensation. DOD said that it will explore an information/marketing campaign that will improve understanding of the system. 
DOD also raised technical concerns; specifically, it believed that we did not adequately describe the impact of the increase in funding related to the Global War on Terrorism as well as how we converted the costs to constant year dollars. In its comments, DOD stated that the fiscal year 2004 compensation costs included over $17 billion in supplemental funding for the war on terrorism, and much of this funding was used to pay for mobilized reservists. While it is true that our estimates include supplemental funding, we do not believe that the inclusion of this funding changes our findings or conclusions. Supplemental funding represents real costs to the federal government that we believe are appropriate to include when calculating how much the federal government spends on compensating military members. However, we took DOD’s concerns into account and added footnotes in our report to explain our approach. DOD also raised concerns about our use of end strength instead of average strength in our per capita calculations. We believe that end strength, which represents only active population, is an appropriate denominator to calculate per capita active duty costs because it provides a consistent population to spread costs of cash, noncash benefits, and deferred benefits. To use average strength, which includes mobilized reservists, would not have been an accurate representation of active duty per capita costs for noncash and deferred benefits. However, in response to this comment, we added footnotes that indicate that the cash compensation for fiscal year 2004 includes costs for mobilized reservists and including those reservists in our per capita calculations would have decreased cash compensation by about $5,000 per servicemember. Finally, DOD raised concerns about our adjustments for inflation in our data. We used the National Defense Budget Estimates published by the Office of the Under Secretary of Defense (Comptroller). 
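The effect of the denominator choice discussed above can be sketched with a small hypothetical calculation. The dollar amount, end strength, and reservist count below are invented for illustration and are not the report's actual data:

```python
# Hypothetical illustration of why the choice of denominator matters in
# per capita compensation calculations. All figures are invented for
# demonstration and are NOT the report's actual data.

def per_capita(total_cost: float, population: int) -> float:
    """Spread a total compensation cost across a population."""
    return total_cost / population

cash_cost = 60e9             # illustrative total cash compensation, includes mobilized reservists
end_strength = 1_400_000     # illustrative active duty end strength
mobilized_reservists = 150_000

per_end = per_capita(cash_cost, end_strength)                         # end-strength denominator
per_avg = per_capita(cash_cost, end_strength + mobilized_reservists)  # denominator with reservists

# Adding mobilized reservists to the denominator lowers the per-member
# cash figure, which is the direction of the effect described in the
# report's footnotes.
print(round(per_end), round(per_avg))
```

With these invented inputs, the end-strength denominator yields a noticeably higher per capita cash figure than the larger denominator, illustrating why the two bases are not interchangeable when spreading costs of cash, noncash, and deferred benefits over a consistent population.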
According to that document, the deflators we used provide inflation indexes in base years for each DOD appropriation title to be used in converting total obligation authority from current to constant dollars. To address DOD’s concern that the total cost data do not account for the fact that military pay raises have been larger than civilian pay raises, we added footnotes comparing military pay increases, which averaged over 21 percent between fiscal years 2000 and 2004, with the all urban workers Consumer Price Index and the Employment Cost Index for civilian wages and salaries, which over the same period increased 9.7 percent and 13.3 percent, respectively. We are sending copies of this report to the Secretary of Defense. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 ([email protected]). Other staff members who made key contributions to this report are listed in appendix IV. Scope and Methodology Calculation of the Costs of Providing Active Duty Compensation To calculate the cost to the federal government of compensating active duty servicemembers, we interviewed officials from the Department of Defense (DOD), including the Office of the Secretary of Defense; the Office of the Under Secretary of Defense for Personnel and Readiness’ office of compensation; the Office of the Comptroller within the Office of the Secretary of Defense and the services; the Office of the Actuary; and Health Affairs. In addition, we interviewed officials from the Department of Veterans Affairs (VA), the Department of Labor, the Department of Education, the Office of Management and Budget, and the Congressional Budget Office. See table 4 for an overview of our sources of information. 
In comparing the mix of cash and noncash compensation of active duty servicemembers to that of private industry employees and federal civilians, we used the Department of Commerce’s Bureau of Economic Analysis data to determine the typical percentage of compensation that is allocated to salary/wages and benefits in private industry and federal civilian compensation systems. We examined and compiled data for fiscal years 2000-2004 from the Army, Air Force, Marine Corps, and Navy’s military personnel and operations and maintenance budget justification books. Within the operations and maintenance justification books, we reviewed the budgets of the defense health program; the defense commissary agency; the morale, welfare, and recreation activities (OP-34 exhibit); and the DOD dependent education activity. In addition, we reviewed and compiled data from the future years defense planning document. We also reviewed and compiled data from the VA benefits and health care budget justification books. We used deflators to adjust the budget appropriations into constant fiscal year 2004 dollars. To estimate the total federal tax expenditure that results from the tax-exempt housing and subsistence allowances military personnel receive, we grouped servicemembers by earnings, allowances, and tax status for the years 2000 to 2004 and used the National Bureau of Economic Research’s TAXSIM model to simulate tax liabilities under different scenarios. Only military income was considered. Nonmilitary income, such as spousal earnings or investment income, would likely increase marginal tax rates and, thus, increase our tax expenditure estimates. A servicemember’s earnings and tax status were determined by rank, years of service, and number of dependents. Allowances are determined by rank and number of dependents, and we assumed servicemembers living on base received the average housing allowance of similar servicemembers living off base. 
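The aggregation step that follows from this grouping can be sketched schematically. The two tax liabilities for each group are stand-ins for TAXSIM output, and the cell sizes and dollar figures are invented for illustration:

```python
# Schematic sketch of the tax expenditure aggregation. In the actual
# analysis the two tax liabilities per group came from the NBER TAXSIM
# model; here they are stand-in inputs, and all figures are invented.

from dataclasses import dataclass

@dataclass
class Group:
    members: int           # servicemembers in one rank/years-of-service/dependents cell
    tax_if_taxable: float  # simulated liability if allowances were taxable
    tax_actual: float      # simulated liability with allowances tax-exempt

def tax_expenditure(groups):
    """Sum over groups of members * (liability if taxable - actual liability)."""
    return sum(g.members * (g.tax_if_taxable - g.tax_actual) for g in groups)

# Two illustrative cells (invented values):
cells = [
    Group(members=10_000, tax_if_taxable=4_500.0, tax_actual=3_200.0),
    Group(members=2_500, tax_if_taxable=9_800.0, tax_actual=7_900.0),
]
print(tax_expenditure(cells))  # total forgone federal revenue across cells
```

The per-group difference captures the revenue the Treasury forgoes by exempting the allowances; summing across all rank, years-of-service, and dependent cells yields the total tax expenditure estimate.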
The number of servicemembers in each group for each year was provided by DOD’s Selected Military Compensation Tables. The tax expenditure for each group is estimated as the number of servicemembers in the group multiplied by the difference in the tax liabilities of a representative member of the group assuming that the allowances are and are not taxable. To estimate health care accrual costs, we used official estimates of accrual health care costs for all retirees and their dependents provided by DOD’s Office of the Actuary. Since 2003, health care costs for retirees over 65 years of age and their dependents have been accrual-budgeted in the DOD military personnel budget. However, health care costs for retirees under 65 years of age and their dependents are budgeted through an annual appropriation to the Defense Health Program (DHP). Because the DHP annual budget includes health care costs for active duty servicemembers and their dependents as well as retirees under 65 and their families, we had to estimate the share of DHP costs associated with active duty servicemembers and their dependents for fiscal years 2000-2004. Using a methodology similar to that employed by the Congressional Budget Office, we transformed DHP enrollee data (broken out by gender, age category, and whether enrollees were active duty personnel, family of active duty personnel, retirees under 65, or family of retirees under 65) into equivalent demand units, because enrollees do not all have the same underlying demand for health care services—average health care expenditures and reliance upon DHP differ across the groups. The estimated share of DHP dollars due to active duty service is the ratio of equivalent demand units for active duty personnel and their families to the total number of equivalent demand units in DHP. We first used the Medical Expenditure Panel Survey (MEPS) to estimate the average total health care expenditures each year (2000-2004) by gender and age category. 
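The ratio just described can be sketched schematically. The enrollee counts, relative-use weights, and reliance rates below are invented placeholders, not MEPS estimates or DOD survey values:

```python
# Schematic sketch of the equivalent-demand-unit share calculation.
# Relative-use weights (in the actual analysis, derived from MEPS) and
# DHP reliance rates are invented stand-ins, as are the enrollee counts.

def demand_units(enrollees, relative_use, reliance):
    """Weight an enrollee count by its group's relative health care use
    (vs. the comparison group, males ages 18-44) and its reliance on DHP."""
    return enrollees * relative_use * reliance

active        = demand_units(1_200_000, 1.0, 1.00)  # active duty personnel
active_family = demand_units(1_500_000, 1.3, 0.80)  # families of active duty
retirees_u65  = demand_units(900_000, 2.1, 0.55)    # retirees under 65
retiree_fams  = demand_units(1_000_000, 1.6, 0.55)  # families of retirees under 65

total = active + active_family + retirees_u65 + retiree_fams
active_share = (active + active_family) / total  # share of DHP dollars due to active duty
print(round(active_share, 3))
```

Weighting enrollees this way prevents the share calculation from treating, say, a retiree under 65 (with higher average expenditures but partial reliance on DHP) as equivalent to an active duty member who relies on DHP entirely.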
Estimates of relative use were derived by dividing all estimates by the average total health care expenditures for males ages 18 to 44 (the comparison group). Second, reliance rates for DHP were provided by the DOD from the annual Military Health Care Survey for active duty personnel, their families, and retirees under 65 and their families. To calculate the costs of future veterans’ benefits for current active duty servicemembers, including the costs for health care, compensation, pension, and other types of benefits, we used notional costs as a percentage of basic pay of accruing and actuarially funding VA benefits in the DOD budget. The notional cost percentages we used were unofficial Office of Management and Budget estimates. These estimates were based on the most recent official percentages shown in table 12-2 of the 1999 President’s Budget. Determination of Active Duty Servicemembers’ Perceptions of Their Compensation To determine servicemembers’ satisfaction and dissatisfaction with components of their compensation, we reviewed past DOD surveys, including the 2002, 2003, and 2004 Status of Forces Surveys. The 2002 survey administered to over 30,000 servicemembers had a response rate of 32 percent. DOD has conducted and reported on research to assess the impact of nonresponse rate on overall estimates. It found that, among other characteristics, junior enlisted personnel (in pay grades E1 to E4), servicemembers who do not have a college degree, and members in services other than the Air Force were more likely to be nonrespondents. We have no reason to believe that potential nonresponse bias not otherwise accounted for by DOD’s research is substantial for the variables we studied in this report. Therefore, we concluded the data to be sufficiently reliable for the purposes of this report. To determine active duty servicemembers’ perceptions of their compensation, we conducted 40 focus groups at eight military installations across all four services. 
We gathered supplemental information from focus group participants through a survey that asked questions to assess knowledge, individual opinions, and attitudes toward compensation. Focus Groups We conducted 10 focus groups with active duty servicemembers in each of the four services, for a total of 40 focus groups. Focus groups involve structured small group discussions designed to gain in-depth information about specific issues that cannot easily be obtained from single or serial interviews. As with typical focus group data collection, our design involved multiple groups with certain homogeneous factors, such as rank, service, and installation. Each group was designed to involve 8 to 12 participants. Discussions were held in a structured manner, guided by a moderator who used a standardized list of questions. Our overall objective in using a focus group approach was to obtain servicemembers’ views, insights, and feelings about military compensation. Scope of Our Focus Groups To ensure we achieved saturation, the point at which we were no longer hearing new information, we conducted 40 focus groups with active duty servicemembers at eight military installations (see table 4). This design allowed us to identify differences in the perceptions of servicemembers in different branches of the military and in different pay grades. In all focus groups, in order to hear different perspectives, efforts were made to select participants who differed in sex, marital status, whether they lived on or off base, and whether they had recently been deployed. Focus groups were conducted from November 2004 to March 2005. A guide was developed to assist the moderator in leading the discussions. 
The guide helped the moderator address several topics related to servicemembers’ perceptions of their compensation, including their definition of compensation, sources of information on compensation, satisfaction and dissatisfaction with compensation, and any needed changes to compensation. Each focus group started with the moderator describing the purpose of the study and explaining how focus groups work. Participants were assured that all their comments would be anonymous in that their names would not be used in write-ups of the sessions or in the report. The participants were then asked open-ended questions about their perceptions of military compensation. All focus groups were moderated by a GAO analyst, while at least one other GAO analyst observed and took notes. After each focus group the moderator and note taker reviewed the transcript together to verify that all comments were captured. Content Analysis We performed a systematic content analysis of the discussions to categorize and summarize participants’ perceptions of their compensation. Using the primary topics covered in the focus group guide, GAO analysts reviewed responses from several of the focus groups and created a list of subcategories within each of the primary focus group topics. A GAO analyst then reviewed the responses from each focus group and assigned each comment to a corresponding category. To ensure inter-rater reliability, another analyst also reviewed each comment and independently assigned it to a category. Any comments not assigned to the same category were then reconciled by the two analysts and placed into one or more of the resulting categories. Agreement regarding each placement was reached between at least two analysts. The responses in each category were then used in our evaluation of how servicemembers perceive their compensation. 
Limitations Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, focus groups are intended to provide in-depth information about participants’ reasons for the attitudes held toward specific topics and to offer insights into the range of concerns and support for an issue. The projectability of the information produced by our focus groups is limited for several reasons. First, they represent the responses of only the active duty servicemembers in our 40 focus groups. The experiences of other active duty servicemembers who did not participate in our focus groups may have varied. Second, while the composition of the groups was designed to assure a distribution of active duty servicemembers by several characteristics, including sex and marital status, the groups were not randomly sampled. Despite these limitations, we gathered data from a broad range of servicemembers at several strata of the military hierarchy and obtained a better understanding of how servicemembers perceive their compensation and where they obtain information about their pay and benefits. We were also able to obtain information complementary to the Status of Forces Survey, which seeks information on satisfaction and dissatisfaction among military servicemembers but does not address reasons for these perceptions. Use of a Survey to Supplement Focus Group Findings We conducted a survey of focus group participants to provide further information on servicemembers’ perceptions of their compensation. The survey was administered to, and completed by, all 401 focus group participants.
The survey collected additional specific information on servicemembers’ satisfaction and dissatisfaction with their pay and benefits, sources of information on their compensation, recommendations for changing compensation, and demographic information. Since the survey was used to collect supplemental information and administered to focus group participants only, the results cannot be generalized across the population of active duty servicemembers. The results from this data collection effort represent only those who participated in our focus groups. The objectives of our survey were to collect (1) data that could not easily be obtained through focus groups and (2) some of the same data found in past DOD surveys. The practical difficulties of conducting any survey may introduce certain types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into survey results. To reduce nonsampling errors, we conducted five pretests of the survey with active duty servicemembers, both enlisted and officers, and revised the survey based on the pretest results. We also performed statistical analyses to identify inconsistencies and had a second independent reviewer for the data analysis to further minimize such error. The surveys were administered in person directly after each focus group session. To analyze survey results, we ran frequencies for all questions and highlighted those where a significant response occurred in a particular category. We also compared responses by pay grade and service. We conducted our review from August 2004 through May 2005 in accordance with generally accepted government auditing standards.
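The tabulation approach described above, running frequencies for each question and then comparing responses by pay grade and service, can be sketched as follows. The responses shown are hypothetical, not actual survey data.

```python
# Illustrative sketch of the survey tabulation described above: frequencies
# for one question, overall and broken out by pay grade. The (pay_grade,
# response) pairs are hypothetical, not actual survey data.
from collections import Counter, defaultdict

responses = [
    ("E-4", "satisfied"),
    ("E-4", "dissatisfied"),
    ("O-3", "satisfied"),
    ("O-3", "satisfied"),
]

# Frequencies across all respondents for this question.
overall = Counter(answer for _, answer in responses)

# The same frequencies broken out by pay grade.
by_grade = defaultdict(Counter)
for grade, answer in responses:
    by_grade[grade][answer] += 1
```

The `overall` counter supports highlighting questions where a significant share of responses falls in one category, while `by_grade` supports the pay grade comparisons; an analogous breakdown by service would follow the same pattern.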
Sample of a Personal Statement of Military Compensation PERSONAL STATEMENT OF MILITARY COMPENSATION This statement is intended to outline the total value of your military pay, allowances and benefits. By making your compensation more “visible,” this statement could be useful when applying for credit or loans (including home loans) from businesses or lending institutions. Another possible use of this summary is to help determine whether specific civilian employment offers would let you maintain the same standard of living you had while serving in the military. Start with the Total Direct Compensation on page 1, add the Federal Tax advantage from page 2, and then add any additional expense a civilian employer would expect you to pay for health and life insurance, retirement contributions, etc. This will tell you approximately what level of civilian salary you must earn in order to maintain a similar standard of living as that provided by your military take-home pay. Each section of this statement contains an explanation. However, if you have any questions, please contact your local pay office. SUMMARY: A. Basic Military Compensation as of March 2005 .................................................................... $ B. Special Pay and Bonuses ....................................................................................................... $ C. Expense allowances ............................................................................................................... $ TOTAL DIRECT COMPENSATION ........................................................................ $______________ Added value of indirect compensation ....................................................................................... $ Added considerations/programs (Your estimate) ........................................................................ $______________ TOTAL COMPENSATION .......................................................................................
$______________ The following information provides more details on the value of your personal compensation. Adding the indirect compensation and additional considerations to your direct compensation should provide a clearer picture of your total military compensation package. DIRECT COMPENSATION AS OF MARCH 2005 (NOTE 1) A. BASIC COMPENSATION. Describes the basic elements of compensation paid to all military members. It includes Basic Pay, the value of living in government quarters or Basic Allowance for Housing (BAH), and the value of meals furnished or Basic Allowance for Subsistence (BAS). Your basic compensation is: Basic Pay ................................................................................................................................................................ $ BAH or quarters valued at actual BAH for your location, rank and dependency status (see Note 2) ..................... $ BAS ........................................................................................................................................................................ $ TOTAL BASIC COMPENSATION .......................................................................................................... $________ B. SPECIAL PAY AND BONUSES. Paid in addition to Basic Compensation for people in certain skills and assignments. Your bonuses and special and incentive pays are: Special and Incentive Pays ..................................................................................................................................... $ Bonuses .................................................................................................................................................................. $ TOTAL SPECIAL PAY AND BONUSES ............................................................................................... $_______ C. EXPENSE ALLOWANCE. You may receive allowances to help compensate you for extra expenses you incur based on the location of your duty assignment.
These include the overseas housing allowance (OHA), cost of living allowance (COLA) (Note 1), payable only in certain areas; family separation allowance (FSA); and clothing replacement allowance (CRA). Your total expense allowances are: TOTAL EXPENSE ALLOWANCES ...................................................................................................... $_______ Note 1: Pay items are from your March 2005 LES; marital status and dependents are taken from your personnel records. Annual rates for COLA are for 365 days, not 12 times the March rate. If BAH was not in effect in March 2005, we assumed you received quarters or meals worth about as much as BAH. If you received partial BAH, we assumed that the partial BAH and value of quarters together roughly equal full BAH. INDIRECT COMPENSATION. Other programs supplement your direct compensation. These have a cash value to you in terms of spendable income. They are an important part of your compensation and should be considered in adding up your real pay value. A. MEDICAL CARE. As an active duty member, the military provides you and your family with comprehensive medical care. TRICARE is the name of the Defense Department’s regional health care program. Under TRICARE, there are three health plan options: TRICARE Prime (all active duty are automatically in Prime, but family members may choose to enroll in this HMO-type plan); TRICARE Standard (an indemnity plan, formerly called CHAMPUS); TRICARE Extra (a Preferred Provider Organization plan). Under TRICARE Prime, you will have an assigned military or civilian primary care manager who will manage all aspects of your care, including referrals to specialists. Prime has no deductibles, cost-shares, or co-payments except a nominal co-payment for prescriptions filled at a retail pharmacy or through the National Mail Order Pharmacy program.
TRICARE Standard offers more choice of providers, but requires an annual $150 deductible/person or $300/family (E-1 to E-4: $50/person, $100/family) plus a 20% cost-share for outpatient care and a $13.90/day charge for inpatient care. TRICARE Extra offers the same benefit as Standard, but when you elect to use a Prime network provider, the outpatient visit cost-share is only 15%. The average total premium of a civilian plan that would provide similar benefits to TRICARE Prime is conservatively estimated at $374.27/month/individual, $4,491.24/year/individual, $1,022.03/month/family and $12,264.36/year/family – these premiums do not take into consideration cost-shares and deductibles often required in civilian plans like the TRICARE Standard and Extra options. Please contact the Beneficiary Counseling and Assistance Coordinator at the nearest military treatment facility for additional information. The personal costs experienced by you or your family will vary depending on the TRICARE option you select. B. DEATH AND SURVIVOR PROGRAMS. If you die on active duty, your survivors are eligible for life insurance and other payments. You may buy life insurance in $10,000 increments up to $250,000 at a very low cost. Also, your dependents would receive a death gratuity payment of $12,000 and monthly Dependency and Indemnity Compensation (DIC) payments (non-taxable) of $967 for the surviving spouse and an additional $241 for each surviving child. DIC is adjusted annually for inflation. More information can be found at http://www.vba.va.gov/. Also see Survivor Benefit Plan on page 3 of this statement. You are currently paying premiums for SGLI coverage of $_______ on yourself and $_______ on your spouse. C. FEDERAL TAX ADVANTAGE. This represents the amount of additional Federal tax you would have to pay if your quarters (BAH) and meals (BAS) allowances were taxed.
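The Federal tax advantage in section C is, in effect, the member's untaxed allowances multiplied by his or her marginal Federal tax rate. A rough sketch follows, using hypothetical allowance amounts and a hypothetical tax rate; an actual figure depends on the member's individual tax situation.

```python
# Rough sketch of the section C Federal tax advantage: the additional tax
# a member would owe if BAH and BAS were taxable. All dollar amounts and
# the tax rate here are hypothetical.
annual_bah = 12_000.00  # hypothetical annual Basic Allowance for Housing
annual_bas = 3_000.00   # hypothetical annual Basic Allowance for Subsistence
marginal_rate = 0.15    # hypothetical marginal Federal tax rate

tax_advantage = (annual_bah + annual_bas) * marginal_rate  # 2250.0
```

In this example the member avoids roughly $2,250 in annual Federal tax, which is the amount added to Total Direct Compensation when estimating an equivalent civilian salary.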
Your tax advantage is based on your taxable income. TOTAL INDIRECT COMPENSATION (A + C) ............................................................................... $_______ RETIREMENT. One of the most attractive incentives of a military career is the retirement system that provides a monthly retirement income for those who serve a minimum of twenty years. Currently, there are three retirement plans in effect -- Final Basic Pay, High-3, and Choice of High-3 or Redux with $30K Career Status Bonus (CSB). A description of each follows. Information on all three plans is available at: http://www.afpc.randolph.af.mil/ Additional information on the new High-3 and Redux/$30K CSB choice is available at: http://dod.mil/militarypay/. The plans differ in eligibility, determined by DIEUS (Note 1); in how monthly retired pay is computed (Notes 2, 3 & 4); and in their cost-of-living adjustment (COLA) (Note 5). Final Basic Pay (DIEUS prior to 8 Sep 80): retired pay is based on final basic pay, with full inflation protection; COLA is based on the Consumer Price Index (CPI). High-3 (DIEUS on or after 8 Sep 80) (Note 6): retired pay is based on the highest 36 months of basic pay, with full inflation protection; COLA is based on the CPI. Choice of High-3 or Redux with $30K CSB (DIEUS on or after 1 Aug 86) (Note 7): the member may take a $30K “Career Status Bonus” at 15 years of service, in exchange for agreeing to serve to at least 20 years under the less generous Redux plan; retired pay is based on the highest 36 months of basic pay, with partial COLA (CPI minus one percentage point). At age 62, retired pay is recalculated without deducting the one percentage point, to catch up to what it would have been without the Redux penalty, and is adjusted to reflect full COLA since retirement; partial COLA then resumes after age 62. Note 1: Date initially entered uniformed service (DIEUS) refers to the fixed date the member was first enlisted, appointed, or inducted. This includes cadets at the Service Academies, students enrolled in a reserve component as part of the Services’ senior ROTC programs or ROTC financial assistance programs, students in the Uniformed Services University of the Health Sciences, participants in the Armed Forces Health Professions Scholarship program, officer candidates attending Officer Training School, and members in the Delayed Entry Program. Note 2: The maximum multiplier is 75 percent times basic pay.
Note 3: Members should be aware that the Uniformed Services Former Spouses Protection Act allows state courts to consider military retired pay as divisible property in divorce settlements. The law does not direct state courts to divide retired pay; it simply permits them to do so. Note 4: Retired pay stops upon the death of the retiree unless he or she was enrolled in the Survivor Benefit Plan. See “Survivor Benefit Plan (SBP)” on page 3 for additional information on this program. Note 5: COLA is applied annually to retired pay. Note 6: High-3 is a reference to the average of the high three years or, more specifically, the high 36 months of basic pay as used in the formula. Note 7. Effective 28 Dec 01, members may elect one of 5 options to receive the $30K CSB: one lump sum payment of $30k; two annual payments of $15K; three annual payments of $10K; four annual payments of $7.5K; or five annual payments of $6K. (For Retirement-Eligible Personnel) If you were to retire in your present grade, your initial gross monthly retired pay would be ____________ increased annually for inflation. For each year you continue to stay on active duty, you will receive an additional 2.5% of your basic pay up to a maximum of 75%. Your retirement represents a considerable value over your life expectancy. While retired pay stops upon death, you can ensure your survivors receive a portion of it by enrolling in the Survivor Benefit Plan when you retire (see next page). Retired pay calculation is for illustration only. It does not consider any active duty service commitment or time-in-grade requirement, which may preclude your retiring immediately in your present grade. Further, the date used to determine years of service in your actual retired pay computation (the “1405” date) will be determined by the MPF from paper records and could be different than the total active Federal military service used in this example. 
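The accrual rule described above, an additional 2.5 percent of basic pay per year of service capped at 75 percent (Note 2), can be sketched in simple arithmetic. The basic pay figure below is hypothetical.

```python
# Sketch of the retired pay accrual described above: 2.5 percent of basic
# pay per year of service, capped at a 75 percent maximum (Note 2).
def retired_pay_percent(years_of_service):
    return min(2.5 * years_of_service, 75.0)

monthly_basic_pay = 4_000.00  # hypothetical monthly basic pay

# 20 years of service earns 50 percent of basic pay; 30 or more years
# reaches the 75 percent cap.
pay_at_20 = monthly_basic_pay * retired_pay_percent(20) / 100  # 2000.0
pay_at_32 = monthly_basic_pay * retired_pay_percent(32) / 100  # 3000.0
```

This sketch ignores the active duty service commitments, time-in-grade requirements, and "1405" date computation noted above, which govern an actual retired pay calculation.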
When adding up the total worth of your compensation package, you should also consider the many other programs and privileges you have. Their worth will be different for each person depending on use. This page is presented for you to determine the yearly value/savings you estimate each of these programs has been worth to you. PAY GROWTH. Pay raises each year, longevity increases, and competitive promotion opportunities. STATE/LOCAL TAX ADVANTAGE. Besides being exempt from Federal taxes, your BAH, BAS, and overseas allowances and in-kind housing may be exempt from State and Local taxes, depending upon the state you claim as a legal residence. Relative to the tax laws of your legal residence, this can save you hundreds of dollars each year. COMMISSARY. Studies have found that commissary shoppers save an average of 30% or more on their grocery purchases, amounting to about $2,700 annually for a family of four. If you spend the following, your savings will be approximately: $ ____________ Find your nearest commissary through the locations link at www.commissaries.com. Take advantage of the Savings You’ve Earned! ARMY AND AIR FORCE EXCHANGE SERVICE (AAFES). Now in our second century of service, the Army and Air Force Exchange Service (AAFES) remains committed to serving you, the "best customer in the world". Your exchanges provide products and services to authorized customers worldwide and generate reasonable earnings to supplement appropriated funds for Army and Air Force morale, welfare, and recreation (MWR) programs. Earnings fund new and improved stores with most of the profits going to MWR programs - over $230 million last year. AAFES' shelf prices provide you 21.9 percent overall savings compared to off post/base retail operations. While you can enjoy your exchange benefit in many ways, the greatest value is AAFES' pledge to "Go Where You Go." And remember, AAFES offers 24/7 convenience through its website 'www.aafes.com'. SURVIVOR BENEFIT PLAN (SBP).
All pay stops when a member dies. However, if you die on active duty, your surviving spouse and children are automatically protected by the SBP--at no cost to you. The surviving spouse will get an annuity equal to the difference between the dependency and indemnity compensation (DIC) payment and the SBP payment that would be paid if you had been retired on the date of your death. To determine the amount of the SBP, the maximum applicable rate of retired pay that would be due you will be used. The only way retirees can guarantee their survivors receive a share of their retired pay is to enroll in SBP before they retire. The maximum annuity is equal to 55% of retired pay until the spouse attains age 62. At age 62, the annuity is reduced to 35% of retired pay unless the retiree purchases the Supplemental Survivor Benefit Plan (SSBP), which restores the annuity to between 40 and 55%, depending on the amount selected. The FY05 NDAA eliminated the reduction of SBP at age 62, with phase-in rates of 40% in 2005, 45% in 2006, 50% in 2007, and 55% in 2008. The SBP annuity for your survivor is adjusted each year by the same percentage increase given to military retired pay. Additional information can be found at http://www.afpc.randolph.af.mil/ UNIFORMED SERVICE THRIFT SAVINGS PLAN (TSP): You can gain additional tax deferred advantages through participation in the TSP. You are limited in the amount of Base Pay you may contribute (below); however, you may contribute up to 100% of your special, incentive, or bonus pay. There are also annual contribution limits that apply (below). If you perform duty in a designated combat zone, your contributions to TSP will be tax-exempt (versus tax deferred) and will not count against your tax deferred limits. The combination of your tax-exempt and tax deferred contributions is limited to $40,000 for any year.
More information can be found at http://www.tsp.gov/ Total Annual Tax Deferred Limits (unlimited) FEDERAL LONG-TERM CARE INSURANCE PROGRAM (FLTCIP): The FLTCIP is the only long term care insurance program sponsored by the Federal Government. It is managed by the Office of Personnel Management and offered by two insurance leaders--John Hancock and MetLife. It provides comprehensive benefits, including home care, informal care, and inflation options, at competitive group premiums. The FLTCIP helps preserve your retirement savings should a long-term care need arise. Those eligible for the FLTCIP include all Federal Employees (including Uniformed Service members), their spouses, adult children (including natural, adopted & step), parents, parents-in-law, and stepparents. Call 1-800-LTC-FEDS (1-800-582-3337) or visit the web site at: http://www.LTCFEDS.com EDUCATION PROGRAMS. Members in authorized off-duty education programs receive up to 100 percent of tuition costs, up to a maximum of $250.00 per credit hour, $4,500 per fiscal year, paid by the Government. Members who had established an account in the Veterans Educational Assistance Program (VEAP) by contributing $25-$100 each month or by lump sum payment (up to $2700), have a Government $2 for $1 matching contribution for a total of up to $8,100. Members who elected to participate in the Montgomery GI Bill upon entering active duty (after 30 June 1985), and agreed to payroll reduction of $100 per month for a total of 12 months, can receive a benefit of $36,144 with yearly increases as determined by the consumer price index. SERVICES ACTIVITIES. Provide conveniently located, low-cost, professionally managed activities and entertainment. You and your family members receive significant savings when you participate in Services programs such as fitness, libraries, child development and youth programs, skills development, golf, bowling, clubs, outdoor recreation activities, equipment checkout, aero clubs, etc.
COUNSELING AND ASSISTANCE PROGRAMS. Military members can get free personal financial management counseling, relocation services assistance, transition counseling, spousal employment assistance, and assistance from a wide range of other programs available from Air Force Family Support Centers. Air Force Aid Society provides zero interest emergency loans and grants to members who qualify (total loans and grants given in 2004 were $11,300,000, plus $10,000,000 in community outreach and education programs). $ ____________ LEGAL COUNSELING. Military members can get free legal counseling and assistance. Consultations with an attorney: $_____________ SPACE AVAILABLE TRAVEL. Space available travel for Uniformed Services members can provide substantial savings over commercial airline fares. Space available travel is defined by DoD policy as a privilege (not an entitlement), which accrues to Uniformed Services members as an avenue of respite from the rigors of Uniformed Services duty. Under one of the categories of space available travel, members on leave can travel with one dependent on permissive TDY house-hunting trips. For additional information on this special privilege, consult the AMC Space Available web page at http://public.amc.af.mil/Library/SPACEA/spacea.htm TRICARE DENTAL PROGRAM (TDP). TDP eligibility includes spouses and eligible children of active duty members of the Uniformed Services, Selected Reserve and Individual Ready Reserve. Additionally, the Selected Reserve and Individual Ready Reserve members themselves are eligible for the TDP. Enrollees may be treated in both CONUS and OCONUS locations. TDP monthly premiums for Selected Reserve members and family members of active duty are cost-shared by the Department of Defense (DoD) (i.e., the government pays 60% of the premium, sponsor pays 40%). The sponsor’s monthly premium payment is $9.32 for a single enrolled family member and $23.31 for families with two or more members enrolled.
This equates to an annual savings conservatively estimated at $175 for single and $439 for family enrollments. Basic preventive, diagnostic and emergency services are covered at 100%; the plan pays 50%-80% of the cost for certain specialized services such as restorations, orthodontics, and prosthodontics. Moreover, DoD cost-shares other specialty care (periodontic, endodontic, and oral surgery) at a higher percentage for E-1s to E-4s. (add this amount to Summary Total on page 1) Comments from the Department of Defense GAO Contact and Staff Acknowledgments GAO Contact Derek Stewart, (202) 512-5559 ([email protected]) Acknowledgments Lori Atkinson, Alissa Czyz, Natasha Ewing, Alison Martin, David Mayfield, Lindsey Mosson, James Pearce, John Pendleton, Charles Perdue, Terry Richardson, Samuel Scrutchins, and Sonja Ware made key contributions to this report. Related GAO Products Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. Military Personnel: Preliminary Observations on Recruiting and Retention Issues within the U.S. Armed Forces. GAO-05-419T. Washington, D.C.: March 16, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Military Personnel: DOD Needs More Data Before It Can Determine if Costly Changes to the Reserve Retirement System Are Warranted. GAO-04-1005. Washington, D.C.: September 15, 2004. Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004. Military Personnel: DOD Has Not Implemented the High Deployment Allowance That Could Compensate Servicemembers Deployed Frequently for Short Periods. GAO-04-805. Washington, D.C.: June 25, 2004. Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004. 
Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004. Military Personnel: Bankruptcy Filings among Active Duty Service Members. GAO-04-465R. Washington, D.C.: February 27, 2004. Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003. Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003. Military and Veterans’ Benefits: Observations on the Concurrent Receipt of Military Retirement and VA Disability Compensation. GAO-03-575T. Washington, D.C.: March 27, 2003. Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002. Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002. Military Personnel: Higher Allowances Should Increase Use of Civilian Housing, but Not Retention. GAO-01-684. Washington, D.C.: May 31, 2001. Defense Health Care: Observations on Proposed Benefit Expansion and Overcoming TRICARE Obstacles. GAO/T-HEHS/NSIAD-00-129. Washington, D.C.: March 15, 2000. Military Personnel: Preliminary Results of DOD’s 1999 Survey of Active Duty Members. GAO/T-NSIAD-00-110. Washington, D.C.: March 8, 2000. The Congress Should Act to Establish Military Compensation Principles. GAO/FPCD-79-11. Washington, D.C.: May 9, 1979.

Over the years, the Department of Defense's (DOD) military compensation system has become an increasingly complex and piecemeal accretion of pays, allowances, benefits, and special tax preferences.
DOD leaders have expressed concern that rising compensation costs may not be sustainable in the future and could crowd out other important investments needed to recapitalize equipment and infrastructure. Given the looming fiscal challenges facing the nation in the 21st century, GAO believes it is time for a baseline review of all federal programs to ensure that they are efficiently meeting their objectives. Under the Comptroller General's authority, GAO (1) assessed whether DOD's approach to compensation provides adequate transparency over costs; (2) identified recent trends in active duty compensation, and how costs have been allocated to cash and benefits; and (3) reviewed how active duty servicemembers perceive their compensation and whether DOD has effectively explained the value of the military compensation package to its members. DOD's historical piecemeal approach to military compensation has resulted in a lack of transparency that creates an inability to (1) identify the total cost of military compensation to the U.S. government and (2) assess the allocation of total compensation investments to cash and benefits. No single source exists to show the total cost of military compensation, and tallying the full cost requires synthesizing about a dozen information sources from four federal departments and the Office of Management and Budget. Without adequate transparency, decision makers do not have a true picture of what it costs to compensate servicemembers. They also lack sufficient information to identify long-term trends, determine how best to allocate available resources to ensure the optimum return on compensation investments, and better assess the efficiency and effectiveness of DOD's current compensation system in meeting recruiting and retention goals. To address this and other major business transformation challenges in a more strategic and integrated fashion, GAO recently recommended the creation of a chief management official at DOD. 
Transparency over military compensation is critical because costs to provide compensation are substantial and rising, with over half of the costs allocated to noncash and deferred benefits. In fiscal year 2004, it cost the federal government about $112,000, on average, to provide annual compensation to active duty enlisted and officer personnel. Adjusted for inflation, the total cost of providing active duty compensation increased about 29 percent from fiscal year 2000 to fiscal year 2004, from about $123 to $158 billion. During this time, health care was one of the major cost drivers, increasing 69 percent to about $23 billion in fiscal year 2004. In addition, military compensation is weighted more toward benefits compared with other government and private sector civilian compensation systems. Furthermore, less than one in five service members will serve 20 years of active duty service to become eligible for retirement benefits. Increasing compensation costs make the need to address the appropriateness and reasonableness of the compensation mix and the long-term affordability and sustainability of the system more urgent. DOD survey results and analysis of GAO focus groups and survey data have shown that servicemembers are dissatisfied and harbor misperceptions about their pay and benefits in part because DOD does not effectively educate them about the competitiveness of their total compensation packages. About 80 percent of the 400 servicemembers that GAO surveyed believed they would earn more as civilians; in contrast, a 2002 study showed that servicemembers generally earn more cash compensation alone than 70 percent of like-educated civilians. Servicemembers also expressed confusion over aspects of their compensation, like retirement, and many complained that benefits were eroding despite recent efforts by Congress and DOD to enhance pay and benefits. 
By not systematically educating servicemembers about the value of their total compensation, DOD is essentially allowing a culture of dissatisfaction and misunderstanding to persist.
U.S. Assistance Has Had Limited Results; Project Sustainability in Question Despite some positive developments, U.S. rule of law assistance in the new independent states of the former Soviet Union has achieved limited results, and the sustainability of those results is uncertain. Experience has shown that establishing the rule of law in the new independent states is a complex undertaking and is likely to take many years to accomplish. Although the United States has succeeded in exposing these countries to innovative legal concepts and practices that could lead to a stronger rule of law in the future, we could not find evidence that many of these concepts and practices have been widely adopted. At this point, many of the U.S.-assisted reforms in the new independent states are dependent on continued donor funding to be sustained. Rule of Law Remains Elusive in the New Independent States Despite nearly a decade of work to reform the systems of justice in the new independent states of the former Soviet Union, progress in establishing the rule of law in the region has been slow overall, and serious obstacles remain. As shown in table 1, according to Freedom House, a U.S. research organization that tracks political developments around the world, the new independent states score poorly in the development of the rule of law, and, as a whole, are growing worse over time. These data, among others, have been used by USAID and the State Department to measure the results of U.S. development assistance in this region. In the two new independent states where the United States has devoted the largest amount of rule of law funding—Russia and Ukraine—the situation appears to have deteriorated in recent years. The scores have improved in only one of the four countries (Georgia) in which USAID has made development of the rule of law one of its strategic objectives and the United States has devoted a large portion of its rule of law assistance funding.
I want to emphasize that we did not use these aggregate measures alone to reach our conclusions about the impact and sustainability of U.S. assistance. Rather, we reviewed many of the projects in each of the key elements of U.S. assistance. We examined the results of these projects, assessing the impact they have had as well as the likelihood that that impact would continue beyond U.S. involvement in the projects. Five Elements of the U.S. Rule of Law Assistance Program The U.S. government funds a broad range of activities as part of its rule of law assistance. This includes efforts aimed at helping countries develop five elements of a modern legal system (see Fig. 1):

1. a post-communist foundation for the administration of justice,
2. an efficient, effective, and independent judiciary,
3. practical legal education for legal professionals,
4. effective law enforcement that is respectful of human rights, and
5. broad public access to and participation in the legal system.

In general, USAID implements assistance projects primarily aimed at development of the judiciary, legislative reform, legal education, and civil society. The Departments of State, Justice, and the Treasury provide assistance for criminal law reform and law enforcement projects. Legal Foundation: Some Key Reforms Have Been Passed, but Others Remain Unfinished A key focus of the U.S. rule of law assistance program has been the development of a legal foundation for reform of the justice system in the new independent states. U.S. projects in legislative assistance have been fruitful in Russia, Georgia, and Armenia, according to several evaluations of this assistance, which point to progress in passing key new laws. For example, according to a 1996 independent evaluation of the legal reform assistance program, major advances in Russian legal reform occurred in areas that USAID programs had targeted for support, including a new civil code and a series of commercial laws and laws reforming the judiciary. 
Despite considerable progress in a few countries, major gaps persist in the legal foundation for reform. In particular, Ukraine, a major beneficiary of U.S. rule of law assistance, has not yet passed a new law on the judiciary or new criminal, civil, administrative, or procedure codes since a new constitution was passed in 1996. Furthermore, a major assistance project aimed at making the Ukrainian parliament more active, informed, and transparent has not been successful, according to U.S. and foreign officials we interviewed. In Russia, the government has still not adopted a revised criminal procedure code, a key component of the overall judicial reform effort, despite assistance from the Department of Justice in developing legislative proposals. According to a senior Justice official, Russia is still using the autocratic 1963 version of the procedure code that violates fundamental human rights. Judiciary: Greater Independence Achieved in Some Respects, but Continued Reform and Retraining Needed The second element in the U.S. government’s rule of law program has been to foster an independent judiciary with strong judicial institutions and well-trained judges and court officers who administer decisions fairly and efficiently. The United States has contributed to greater independence and integrity of the judiciary by supporting key new judicial institutions and innovations in the administration of justice and by helping to train or retrain many judges and court officials. For example, in Russia, USAID provided training, educational materials, and other technical assistance to strengthen the Judicial Department of the Supreme Court. This new independent institution was created in 1998 to assume the administrative and financial responsibility for court management previously held by the Ministry of Justice. USAID and the Department of Justice have also supported the introduction of jury trials in 9 of Russia’s 89 regions for the first time since 1917. 
Although the jury trial system has not expanded beyond a pilot phase, administration of criminal justice has been transformed in these regions—acquittals, unheard of during the Soviet era, are increasing under this system (up to 16.5 percent of all jury trials by the most recent count). However, U.S. efforts we reviewed to help retool the judiciary have had limited impact so far. USAID assistance efforts aimed at improving training for judges have had relatively little long-term impact. Governments in Russia and Ukraine, for example, have not yet developed judicial training programs with adequate capacity to reach the huge numbers of judges and court officials who operate the judiciaries in these nations. In Russia, the capacity for training judges remains extremely low. The judiciary can train each of its 15,000 judges only about once every 10 years. In Ukraine, the two judicial training centers we visited that had been established with USAID assistance were functioning at far below capacity; in fact, one center had been dismantled entirely. Courts still lack full independence, efficiency, and effectiveness. Throughout the region, much of the former structure that enabled the Soviet government to control judges’ decisions still exists, and citizens remain suspicious of the judiciary. Legal Education: More Practical Methods Introduced but Not Widely Practiced The third element of the U.S. assistance program has been to modernize the system of legal education in the new independent states to make it more practical and relevant. The United States has sponsored a variety of special efforts to introduce new legal educational methods and topics for both law students and existing lawyers. Notably, USAID has introduced legal clinics into several law schools throughout Russia and Ukraine. These clinics allow law students to get practical training in helping clients exercise their legal rights. 
They also provide a service to the community by facilitating access to the legal system by the poor and disadvantaged. With the training, encouragement, and financing provided by USAID, there are about 30 legal clinics in law schools in Russia and about 20 in Ukraine. USAID has also provided a great deal of high-quality continuing education for legal professionals, particularly in the emerging field of commercial law. Traditionally, little training of this type was available to lawyers in the former Soviet Union. However, the impact and sustainability of these initiatives are in doubt, as indigenous institutions have not yet demonstrated the ability or inclination to support the efforts after U.S. and other donor funding ends. For example, in Russia, we could not identify any organizations that were engaged in reprinting legal texts and manuals developed with U.S. assistance. In Ukraine, U.S. assistance has not been successful in stimulating law school reforms, and legal education remains rigidly theoretical and outmoded by western standards. Students are not routinely taught many skills important to the practice of law, such as advocacy, interviewing, case investigation, negotiation techniques and legal writing. The United States has largely been unsuccessful at fostering the development of legal associations, such as bar associations, national judges associations, and law school associations, to carry on this educational work in both Russia and Ukraine. U.S. officials had viewed the development of such associations as key to institutionalizing modern legal principles and practices and professional standards on a national scale as well as serving as conduits for continuing legal education for their members. Law Enforcement: Training, Models, and Research Provided, but Routine Application Is Not Evident The fourth component of the U.S. government’s rule of law program involves introducing modern criminal justice techniques to local law enforcement organizations. 
As part of this effort, the United States has provided many training courses to law enforcement officials throughout the new independent states of the former Soviet Union, shared professional experiences through international exchanges and study tours, implemented several model law enforcement projects, and funded scholarly research into organized crime. These programs have fostered international cooperation among law enforcement officials, according to the Department of Justice. U.S. law enforcement officials we spoke to have reported that, as a result of these training courses, there is a greater appreciation among Russians and Ukrainians of criminal legal issues for international crimes of great concern in the United States, such as organized crime, money laundering, and narcotics and human trafficking. They have also reported a greater willingness of law enforcement officials to work with their U.S. and other foreign counterparts on solving international crimes. However, we found little evidence that the new information disseminated through these activities has been routinely applied in law enforcement in the new independent states. In Russia and Ukraine we could not identify any full-scale effort in local law enforcement training institutions to replicate or adapt the training for routine application. Nor could we find clear evidence that the U.S. techniques have been widely embraced by training participants. Furthermore, though the United States has sponsored significant amounts of research on organized crime in Russia and Ukraine, we could not determine whether the results of this research had been applied by law enforcement agencies. Civil Society: Awareness and Involvement Have Increased, but Many Nongovernmental Organizations’ Activities Depend on Continued International Donor Support The fifth element of the rule of law assistance program is the expansion of access by the general population to the justice system. 
In both Russia and Ukraine, the United States has fostered the development of a number of nongovernmental organizations that have been active in promoting the interests of groups, increasing citizens’ awareness of their legal rights, and helping poor and traditionally disadvantaged people gain access to the courts to resolve their problems. For example, in Russia, USAID has sponsored a project that has helped trade unions and their members gain greater access to the legal system, leading to court decisions that have bolstered the legal rights of millions of workers. In Ukraine, environmental advocacy organizations sponsored by USAID have actively and successfully sued for citizens’ rights and greater environmental protection. Despite their high level of activity in recent years, these nongovernmental organizations still face questionable long-term viability. Most nongovernmental organizations we visited received very little funding from domestic sources and were largely dependent upon foreign donor contributions to operate. The sustainability of even some of the most accomplished organizations we visited remains to be seen. Limits on Impact and Sustainability Stem From Political, Economic, and Program Management Issues At least three factors have constrained the impact and sustainability of U.S. rule of law assistance: (1) a limited political consensus on the need to reform laws and institutions, (2) a shortage of domestic resources to finance many of the reforms on a large scale, and (3) a number of shortcomings in U.S. program management. The first two factors, in particular, have created a very challenging climate for U.S. programs to have major, long-term impact in these states, but have also underscored the importance of effective management of U.S. programs. Political Consensus on Reform Slow in Forming In key areas in need of legal reform, U.S. advocates have met some steep political resistance to change. 
In Ukraine and Russia, lawmakers have not been able to reach consensus on critical new legal codes upon which reform of the judiciary could be based. In particular, Ukrainian government officials are deadlocked on legislation reforming the judiciary, despite a provision in the country’s constitution to do so by June 2001. Numerous versions of this legislation have been drafted by parties in the parliament, the executive branch, and the judiciary with various political and other agendas. Lack of progress on this legislation has stymied reforms throughout the justice system. In Russia’s Duma (parliament), where the civil and the criminal codes were passed in the mid-1990s, the criminal procedure code remains in draft form. According to a senior Department of Justice official, the Russian prosecutor’s office is reluctant to support major reforms, since many would require that institution to relinquish a significant amount of the power it has had in operating the criminal justice system. While U.S. officials help Russian groups to lobby for legislative reforms, adoption of such reforms remains in the sovereign domain of the host country. In the legal education system as well, resistance to institutional reform has thwarted U.S. assistance efforts. USAID officials in Russia told us that Russian law professors and other university officials are often the most conservative in the legal community and the slowest to reform. A USAID-sponsored assessment of legal education in Ukraine found that there was little likelihood for reform in the short term due to entrenched interests among the school administration and faculty who were resisting change. Policymakers have not reached political consensus on how or whether to address the legal impediments to the development of sustainable nongovernmental organizations. Legislation could be adopted that would make it easier for these organizations to raise domestic funds and thus gain independence from foreign donors. 
Weak Economic Conditions Make Funding Reforms Difficult Historically slow economic growth in the new independent states has meant limited government budgets and low wages for legal professionals and thus limited resources available to fund new initiatives. While Russia has enjoyed a recent improvement in its public finances stemming largely from increases in the prices of energy exports, public funds in the new independent states have been constrained. Continuation or expansion of legal programs initially financed by the United States and other donors has not been provided for in government budgets. For example, in Russia, the system of jury trials could not be broadened beyond the 9 initial regions, according to a senior judiciary official, because it was considered too expensive to administer in the other 80 regions. In Ukraine, according to a senior police official we spoke to, police forces often lack funds for vehicles, computers, and communications equipment needed to implement some of the law enforcement techniques that were presented in the U.S.-sponsored training. Program Management Weaknesses Affect Impact and Sustainability of Aid U.S. agencies implementing the rule of law assistance program have not always managed their projects with an explicit focus on achieving sustainable results, that is, (1) developing and implementing strategies to achieve sustainable results and (2) monitoring project results over time to ensure that sustainable impact was being achieved. These are important steps in designing and implementing development assistance projects, according to guidance developed by USAID. We found that, in general, USAID projects were designed with strategies for achieving sustainability, including assistance activities intended to develop indigenous institutions that would adopt the concepts and practices USAID was promoting. 
However, at the Departments of State, Justice, and the Treasury, rule of law projects we reviewed often did not establish specific strategies for achieving sustainable development results. In particular, the law enforcement-related training efforts we reviewed were generally focused on achieving short-term objectives, such as conducting training courses or providing equipment and educational materials; they did not include an explicit approach for longer-term objectives, such as promoting sustainable institutional changes and reform of national law enforcement practices. According to senior U.S. Embassy officials in Russia and Ukraine, these projects rarely included follow-up activities to help ensure that the concepts taught were being institutionalized or having long-term impact after the U.S. trainers left the country. We did not find clear evidence that U.S. agencies systematically monitored and evaluated the impact and sustainability of the projects they implemented under the rule of law assistance program. Developing and monitoring performance indicators is important for making programmatic decisions and learning from past experience, according to USAID. We found that the Departments of State, Justice, and Treasury have not routinely assessed the results of their rule of law projects. In particular, according to U.S. agency and embassy officials we spoke to, there was usually little monitoring or evaluation of the law enforcement training courses after they were conducted to determine their impact. Although USAID has a more extensive process for assessing its programs, we found that the results of its rule of law projects in the new independent states of the former Soviet Union were not always apparent. 
The results of most USAID projects we reviewed were reported in terms of project outputs, such as the number of USAID-sponsored conferences or training courses held, the number and types of publications produced with project funding, or the amount of computer and other equipment provided to courts. Measures of impact and sustainability were rarely used. State has recently recognized the shortcomings of its training-oriented approach to law enforcement reforms. As a result, it has mandated a new approach for implementing agencies to focus more on sustainable projects. Instead of administering discrete training courses, for example, agencies and embassies will be expected to develop longer-term projects. Justice has also developed new guidelines for the planning and evaluation of some of its projects to better ensure that these projects are aimed at achieving concrete and sustainable results. These reform initiatives are still in very early stages of implementation. It remains to be seen whether future projects will be more explicitly designed and carried out to achieve verifiably sustainable results. One factor that may delay the implementation of these new approaches is a significant backlog in training courses that State has already approved under this program. As of February 2001, about $30 million in funding for fiscal years 1995 through 2000 had been obligated for law enforcement training that had not yet been conducted. U.S. law enforcement agencies, principally the Departments of Justice and the Treasury, plan to continue to use these funds for a number of years to pay for their training activities, even though many of these activities have the same management weaknesses as the earlier ones we reviewed. Unless these funds are reprogrammed for other purposes or the projects are redesigned to reflect the program reforms that State and Justice are putting in place, projects may have limited impact and sustainability.

This testimony discusses the U.S. government's rule of law assistance efforts in the new independent states of the former Soviet Union. GAO found that these efforts have had limited impact so far, and results may not be sustainable in many cases. U.S. agencies have had some success in introducing innovative legal concepts and practices in these countries. However, the U.S. assistance has not often had a major, long-term impact on the evolution of the rule of law in these countries. In some cases, countries have not widely adopted the new concepts and practices that the United States has advocated. In other cases, continuation or expansion of the innovations depends on further funding from the U.S. or other donors. In fact, the rule of law appears to have actually deteriorated in recent years in several countries, including Russia and Ukraine, according to the data used to measure the results of U.S. development assistance in the region and a host of U.S. government and foreign officials. This testimony summarizes an April 2001 report (GAO-01-354).
Background A 1987 Department of Defense (DOD) Defense Science Board study on the detection and neutralization of illegal drugs and terrorist devices, such as explosives, concluded, among other things, that better-focused R&D, testing and evaluation, and acquisition efforts were needed at the federal level. To address this issue, the study proposed establishing a permanent Research and Technology Group within the National Drug Policy Board, which was the predecessor to ONDCP. The Anti-Drug Abuse Act of 1988 (P.L. 100-690) created ONDCP to better plan and coordinate federal drug control efforts and assist the federal government in overseeing those efforts. ONDCP is charged with overseeing and coordinating the drug control efforts of over 50 federal agencies and programs, consulting with and assisting state and local governments in their relations with federal agencies involved in the National Drug Control Program, and reviewing and certifying the adequacy of other federal agencies’ drug control-related budget requests. In February 1990, ONDCP created the S&T Committee to perform functions similar to those previously performed by the Research and Technology Group. The National Defense Authorization Act for Fiscal Year 1991 (P.L. 101-510), which amended the Anti-Drug Abuse Act of 1988, established CTAC as the central U.S. counterdrug enforcement R&D organization. The act placed CTAC under the operating authority of the Director of ONDCP and required that CTAC be headed by a Chief Scientist of Counterdrug Technology. Overall, Congress expected CTAC to coordinate the National Counterdrug R&D Program to prevent duplication of efforts and ensure that, whenever possible, those efforts provided capabilities that filled overall existing technology gaps that transcended the needs of any single federal agency and that otherwise might not have been funded. 
Specifically, CTAC was charged with (1) identifying and defining the short-, medium-, and long-term scientific and technological needs of federal, state, and local drug enforcement agencies; (2) making a priority ranking of such needs according to fiscal and technological feasibility as part of a National Counterdrug Enforcement R&D Strategy; (3) in consultation with the National Institute on Drug Abuse (NIDA) and through interagency agreements or grants, examining addiction and rehabilitation research and the application of technology to expanding the effectiveness or availability of drug treatment; (4) overseeing and coordinating counterdrug technology initiatives with the related activities of other federal civilian and military departments; and (5) under the general authority of the ONDCP Director, submitting requests to Congress for the reprogramming or transfer of funds appropriated for counterdrug enforcement R&D. Similar to its authorizing legislation, CTAC’s mission statement sets forth its responsibilities as follows: (1) identify the short-, medium-, and long-term scientific and technological needs of federal, state, and local drug enforcement agencies; (2) develop a national counterdrug R&D strategy that validates technological needs, prioritizes such needs according to technical and fiscal feasibility, and sets forth a plan (including budget) to develop and test the highest priority technology projects; (3) implement a national counterdrug R&D program, including technology development in support of substance abuse, addiction, and rehabilitation research; and (4) coordinate counterdrug R&D activities to identify and remove unnecessary duplication. 
To accomplish its mission, CTAC is to (1) annually publish the Counterdrug Research and Development Blueprint Update, which, among other things, lists the scientific and technological needs of federal agencies with counterdrug missions; (2) use the S&T Committee as the principal mechanism for assisting in its coordination of counterdrug technology R&D efforts and for identifying and prioritizing technology needs and selecting otherwise unfunded R&D projects for CTAC funding; and (3) use an outreach program of regional workshops and technology symposiums to facilitate access to federal, state, and local government organizations, industry and academic scientists and engineers, and other targeted community segments. Federal counterdrug technology R&D spending for fiscal years 1992 (CTAC’s first year of operation) through 1997 totaled $3.2 billion, of which CTAC accounted for $86.5 million, or about 2.7 percent. (See app. II.) According to a CTAC official, for fiscal years 1992 through 1997, CTAC distributed about $61.0 million, or about 71 percent of its total funds, to 72 counterdrug R&D projects. CTAC also spent $17.7 million for operational test-and-evaluation efforts and $4.6 million for technical and contracting agents who are to manage the projects once funded. Table 1 in appendix III shows the distribution of CTAC funding by spending category for fiscal years 1992 through 1997. In fiscal year 1992, CTAC projects were organized into three technical thrust areas: tactical technologies, nonintrusive inspection, and wide-area surveillance. In fiscal year 1993, the area of demand reduction was added as a fourth technical thrust area. As shown in table 2 of appendix III, the majority of CTAC’s R&D funds for fiscal years 1992 through 1997 were spent on projects related to the tactical technology thrust area, followed by the demand reduction, nonintrusive inspection, and wide-area surveillance areas. 
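The funding shares cited above follow directly from the reported totals. As a small illustrative check (not part of the report; all dollar figures are as stated in the text, in millions):

```python
# Illustrative check of the CTAC funding shares reported in the text.
total_rd = 3200.0     # total federal counterdrug R&D, FY1992-97 ($3.2 billion)
ctac_total = 86.5     # CTAC's portion of that total
ctac_projects = 61.0  # CTAC funds distributed to the 72 R&D projects

ctac_share = ctac_total / total_rd * 100          # CTAC's share of federal R&D
project_share = ctac_projects / ctac_total * 100  # projects' share of CTAC funds

print(f"CTAC share of federal counterdrug R&D: {ctac_share:.1f}%")   # 2.7%
print(f"Projects' share of CTAC funds: {project_share:.0f}%")        # 71%
```

These computed shares match the "about 2.7 percent" and "about 71 percent" figures in the text.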
As of September 30, 1997, CTAC’s professional staff in Washington, D.C., was comprised of the Chief Scientist, who is the only ONDCP employee; three civilians employed by and detailed from DOD; and three persons employed by and detailed from a Fort Huachuca contractor, which is one of CTAC’s technical/contracting agents. Fort Huachuca and the Tennessee Valley Authority, CTAC’s other technical/contracting agent, together had two full-time and three part-time employees dedicated to CTAC activities. According to the Chief Scientist, only he and two of the DOD employees were available to perform management-related functions, such as supporting CTAC’s interaction with the S&T Committee. The other DOD detailee served as CTAC’s budget analyst. The three contractor detailees and the five contracting agent personnel at Fort Huachuca and the Tennessee Valley Authority had specific support functions, such as handling the transfer of funds for CTAC-sponsored technology projects, and were not available to perform management-related functions. CTAC’s Coordination Process Had Several Shortcomings We identified several shortcomings in the design and execution of the process CTAC established to carry out its coordination of counterdrug R&D efforts as intended. The S&T Committee’s charter, which has not been revised since before CTAC was created, does not reflect the committee’s current composition, responsibilities, and relationship to CTAC. Moreover, the full S&T Committee met irregularly and often was not included in the decisionmaking about which counterdrug technologies should be funded. Furthermore, CTAC did not regularly reassess the counterdrug technology needs of federal agencies to ensure that its listing was current and reflected the top priority needs of S&T Committee member agencies. 
Also, CTAC did not systematically consider and fund the counterdrug technology needs of state and local agencies as part of its process for selecting and funding projects, and, until recently, state and local agencies were not represented on the S&T Committee. CTAC also approved many R&D projects for funding even though they lacked comprehensive transitional plans, which are intended to help ensure that developed technologies were eventually put to use. In addition, although several agencies told us of cases in which CTAC efforts had helped them to avoid unnecessary duplicative research, CTAC was unaware of these cases because it had no system in place to determine the extent to which unnecessary duplication was identified and avoided due to CTAC’s efforts. CTAC’s Needs Identification and Project Selection Process Since 1992, CTAC has had a process and procedures in place for coordinating with the R&D community to identify and prioritize R&D needs, avoid unnecessary duplication, and select CTAC-funded R&D projects that, among other things, can help fill overall existing technology gaps and transcend the needs of any single federal agency. CTAC’s process included specific steps, criteria, and controls to help ensure that funded projects (1) addressed the needs of the federal law enforcement and demand reduction agencies and (2) provided promising technology that could be used. According to its charter and CTAC’s Chief Scientist, the S&T Committee is to be used as the principal mechanism for assisting CTAC in its coordination of counterdrug technology R&D efforts, identifying and prioritizing R&D needs, and evaluating R&D projects for CTAC to fund. For a detailed description of CTAC’s process for identifying and prioritizing technology needs and selecting projects for CTAC funding, including an overview flowchart of the process, see appendix IV. 
S&T Committee’s Charter Does Not Reflect Its Current Composition and Responsibilities The composition and responsibilities of the S&T Committee, which was established within ONDCP before CTAC’s existence, were set forth in a February 1990 charter. According to the charter, the S&T Committee is to be composed of parallel management-level representatives from federal counterdrug R&D agencies and a representative from the state and local R&D community. The S&T Committee is to be comprised of a 7-member Executive Board, a 16-member committee, and 7 associate committee members. It also is to be organized into several working groups. The S&T Committee’s overall responsibilities are to include identifying, developing, coordinating, and facilitating achievement of the overall goals and objectives of ONDCP’s National Drug Control Strategy in the areas of drug control research, automated data processing, and telecommunications. The charter is intended to establish and clarify the S&T Committee’s role and responsibilities in helping ONDCP accomplish its goals and mission. However, the existing charter does not reflect the S&T Committee’s current composition. Several of the current members of the S&T Committee—the Department of Justice’s (DOJ) National Institute of Justice (NIJ) and NIDA, for example—are not listed as members or listed in their current roles. NIJ is to represent the state and local law enforcement communities, and NIDA is to represent the demand reduction community. Congress has directed CTAC to be responsible for addressing the R&D needs of these communities. Also, some of the organizations identified in the charter as members of the S&T Committee are no longer members. In addition, the listing of designated working groups in the charter was not current. The existing charter also does not address the S&T Committee’s current responsibilities and its relationship to CTAC. Because the charter was created before CTAC existed, CTAC is not mentioned in the charter. 
Yet, the S&T Committee is to be the principal mechanism that CTAC uses to accomplish its responsibilities of overseeing and coordinating counterdrug technology. CTAC focuses its R&D efforts in four areas—tactical technologies, demand reduction, nonintrusive inspection technology, and wide-area surveillance. The area of demand reduction is not addressed by the S&T Committee’s existing charter, and the demand reduction community’s representative, NIDA, only recently began participating on the committee. Also, the charter does not reflect the roles and responsibilities of the S&T Committee and its working groups in developing and monitoring the implementation of the 10-year counterdrug technology development plan and 5-year budget projections. CTAC Has Not Used the S&T Committee Regularly and Consistently CTAC has not regularly and consistently involved the full S&T Committee in key decisions relating to its coordination process. The S&T Committee did not meet as regularly as the Chief Scientist intended, and its involvement in CTAC’s coordination process varied from year to year and was not always documented. Rather, the Chief Scientist generally consulted with individual S&T Committee members and its working groups. By not involving or dealing with the full S&T Committee, CTAC did not take full advantage of the benefits of the interaction and deliberation among the members on key matters relating to the identification and prioritization of counterdrug technology needs and selection and funding of related R&D projects. As a result, CTAC may be making key funding decisions without the coordinated deliberation and input, as intended, of the full S&T Committee. Thus, neither we nor CTAC could determine the extent to which its process was identifying and funding the otherwise unfunded highest priority technology needs. 
According to CTAC’s Chief Scientist, the full S&T Committee meets approximately every 4 months to discuss policy issues, technological needs, and opportunities to advance technologies for improving the achievement of counterdrug missions. However, since CTAC’s creation, the S&T Committee met only twice each in 1992, 1993, and 1995; once each in 1994 and 1996; and not at all in 1997. On the basis of our review of S&T Committee minutes from fiscal years 1992 through 1996 and discussions with some committee members, the S&T Committee’s involvement in CTAC’s coordination process varied from year to year. The S&T Committee performed different tasks each year over the 5-year period we reviewed. For example, the S&T Committee reviewed CTAC’s annual draft R&D program plan only once—in fiscal year 1992. The S&T Committee met only once—in fiscal year 1995—to evaluate and prioritize federal agencies’ proposals for CTAC funding consideration. The S&T Committee performed a variety of other coordination activities at least once during the 5-year period. These activities included presenting project proposals for possible CTAC funding, evaluating proposals, providing progress reports on CTAC-funded projects, and performing technical reviews. CTAC’s project selection process calls for the preparation of an annual R&D program plan that is based on the agencies’ needs and the technical merit and developmental risk of the proposals submitted to meet these needs. According to CTAC’s Chief Scientist, the S&T Committee is to assist CTAC by reviewing and updating the needs listing. However, we did not find any documentation showing that the S&T Committee, as a body, was involved in the review and updating of the needs listing in fiscal years 1992 through 1996. Six of the 11 S&T Committee members we surveyed indicated that the committee provided a valuable and important forum for exchanging information on technology needs.
A couple of members of the S&T Committee also said that the committee was more actively involved in the selection of CTAC-funded projects in the earlier CTAC years. One member stated that more frequent meetings of the S&T Committee were needed to foster additional cooperation and coordination among agencies. In August 1996, the ONDCP Director stated that the S&T Committee and its working groups needed to be revitalized. The Director proposed that the S&T Committee (1) act as a steering body for R&D technology efforts, (2) have senior-level membership to make commitments to R&D policy decisions, and (3) increase the frequency of its meetings to as often as “every three weeks.” The Director remarked that it was important for ONDCP/CTAC to obtain feedback from the S&T Committee and its working groups to be able to provide better funding assistance for valid interagency R&D needs. However, as of November 1997, no significant changes had been made in the S&T Committee and its working groups. Nor, as previously mentioned, had the February 1990 S&T Committee charter been updated since CTAC’s creation to reflect changes in the committee’s composition, roles, and expanded mission and to address the committee’s proposed revitalization. The Chief Scientist told us that the full S&T Committee did not meet between August 1996 and August 1997. However, he said that, between December 1996 and August 1997, he met 10 times with members of the Technology Coordination Working Group, which is an S&T Committee working group comprised of key agency representatives. According to the Chief Scientist, the purpose of the meetings was, among other things, to develop a 10-year counterdrug technology development plan with 5-year budget projections in support of ONDCP’s 10-year National Drug Control Strategy. The 10-year technology development plan is expected to provide a road map for developing counterdrug technologies and upgrading existing agency systems. 
However, at the time of our review, the working group had not completed the plan and budget. Also, it was not clear what role the full S&T Committee, as the principal coordinating mechanism, would play in helping to monitor, implement, and adjust the 10-year plan and 5-year budget from year to year.

Counterdrug Technology Needs Were Not Regularly Reassessed and Updated

According to CTAC’s process for selecting and funding counterdrug technology R&D projects, the full S&T Committee is to annually reassess, update, and prioritize counterdrug technology and scientific needs to help ensure that the projects selected and funded are linked to currently identified priority needs among all relevant agencies. However, CTAC’s Chief Scientist, as well as some of the S&T Committee members, acknowledged that the latest counterdrug technology needs listing had not been recently reassessed and was not always updated annually. Furthermore, although CTAC had developed what it termed priority listings of counterdrug R&D technology needs, there were far more items on these lists than could be funded, and no attempt had been made to rank the listed needs by their relative importance to agency end users. As a result, there is no way for CTAC to ensure that the projects it funds address the most current, highest priority otherwise unfunded counterdrug R&D technology needs of the law enforcement and demand reduction communities. In this regard, 8 of the 10 S&T Committee members we surveyed believed that their agencies’ counterdrug technology needs were not adequately reflected in the CTAC-funded projects. A listing of priority law enforcement-related counterdrug technology needs was included in CTAC’s first Blueprint Update in August 1992. In May 1993, DOD conducted a 2-day workshop with the S&T Committee members and CTAC officials to revisit the S&T needs of the counterdrug enforcement agencies.
The workshop attendees produced an Investment Strategy for DOD Counterdrug S&T Programs. The S&T needs from that effort were added by CTAC to the counterdrug technology needs listing and updated with agency inputs for fiscal year 1994. The needs listing and updated data were included in CTAC’s 1995 Blueprint Update. Since then, CTAC has not substantially changed the counterdrug technology needs listing. S&T Committee members told us that the latest counterdrug technology needs listing did not reflect contemporary agency needs. For example, in a July 1997 memorandum on the subject, an official of one federal law enforcement agency represented on the S&T Committee stated that, although some of the listed technological needs might still be current, the list did not represent current law enforcement needs from his agency’s perspective. CTAC officials told us that they annually requested written updates to the needs listing, but they did not receive responses from most agencies. CTAC said it received responses from 9 of 21 agencies for fiscal year 1995, no agency responses for 1996, and responses from 2 agencies for 1998. For fiscal year 1997, according to a CTAC official, CTAC did not request an update to the S&T Committee needs listing. However, the Chief Scientist said CTAC did not follow up with the agencies to obtain their input or to determine why they did not respond and whether they had any additions or changes. Moreover, CTAC had not used the S&T Committee as a forum to obtain all input and reassess the list to ensure that it reflected the member agencies’ current counterdrug technology requirements. The Chief Scientist told us that he planned to follow up on the agency needs update at the next S&T Committee meeting, which was scheduled to be held in February 1998. 
Regarding demand reduction technology needs, although legislation added the demand reduction area to CTAC’s statutory responsibilities in 1993, CTAC did not begin developing a related needs listing until June 1997. CTAC delayed developing the list even though it had invested over $19 million in such technology research as of September 1997. Moreover, according to the Chief Scientist, NIDA, which represents the demand reduction community, was not represented on the S&T Committee until December 1996 when its representative began attending meetings of the previously mentioned Technology Coordination Working Group.

State and Local Needs Were Not Systematically Identified and Considered

CTAC’s mission includes identifying, defining, and helping to meet the counterdrug technology needs of state and local, as well as federal, law enforcement agencies. But, although CTAC funded some state and local projects, it made no attempt to systematically identify the needs of state and local law enforcement agencies. According to CTAC’s Chief Scientist, CTAC operated on the assumption that state and local counterdrug R&D needs were the same as those of federal agencies; therefore, CTAC focused its process on federal agencies. In addition, CTAC did not consider the counterdrug technology needs of state and local law enforcement agencies as part of its formal process for selecting and funding projects. As a result, state and local projects were selected for funding independently of the process; some of these projects might not have been selected had they been considered in conjunction with federal needs. The Chief Scientist told us that NIJ was CTAC’s link to the state and local law enforcement community. According to an NIJ official, NIJ’s Office of Science and Technology is to work closely with state and local agencies to identify their overall law enforcement R&D technology needs, including their counterdrug needs.
However, according to NIJ’s Director of Science and Technology, CTAC was not responsive to state and local counterdrug technology project proposals and concerns raised by NIJ. Like NIDA, NIJ only became a representative on the S&T Committee in December 1996. The Director of Science and Technology did not agree with CTAC’s assumption that state and local needs were the same as those of federal agencies. Moreover, the President of the International Association of Chiefs of Police (IACP) stated in his July 1997 monthly address to association members that state and local law enforcement practitioners needed to get more involved in the creation, advancement, and development of technology to ensure that their needs are communicated and met. As of October 1997, CTAC was funding six state and local law enforcement projects. Total CTAC funding for these projects was about $14.6 million, or about 24.0 percent of the funds CTAC distributed for R&D projects from fiscal years 1992 to 1997. However, these projects were not selected as part of CTAC’s regular process for selecting and funding counterdrug projects, which as previously discussed focused on federal agencies’ technology needs. Rather, these projects were selected outside of the process through more ad hoc means. Thus, CTAC had no systematic way of ensuring that projects selected and funded with available CTAC resources had the highest priority among state and local, as well as federal, agencies. For example, one state project receiving funding was initiated as a result of a contact at a federal agency; another project receiving funding was initiated as a result of contacts made at a law enforcement conference. In addition, by selecting projects outside of the formal process, CTAC has no assurance that they, to the extent possible, meet the needs of multiple local, state, and federal agencies. 
For example, a state and local project leader told us that two of the six CTAC-funded state and local projects were so specialized that they could not be transferred easily to other jurisdictions. According to the Chief Scientist, CTAC communicated and interacted with state and local law enforcement and demand reduction agencies primarily through regional workshops held principally to share counterdrug technologies in the test and pilot stages. According to the CTAC contractor responsible for managing the workshops, these workshops apparently increased state and local agencies’ awareness of CTAC and its mission. In this regard, over 90 percent of the state and local agencies participating in CTAC’s law enforcement counterdrug technology workshops said that they were not aware of CTAC before receiving notice of the workshops. However, the workshops were generally not used to identify state and local counterdrug technology needs. A CTAC official told us that CTAC representatives attended annual meetings of the IACP, National Sheriffs Association (NSA), and Police Executive Research Forum and participated in NIJ’s technology committee to help identify the needs of the state and local organizations. However, we found no evidence of how information gathered at these meetings was incorporated into CTAC’s needs identification process. The Chief Scientist acknowledged that, although CTAC is tasked with identifying state and local technology needs, it had not formally addressed these needs as it had federal needs. He stated that, in anticipation of receiving additional funds in fiscal year 1998 specifically to transfer technologies to state and local law enforcement agencies, CTAC was planning to form a committee comprised of representatives from various pertinent organizations, including NIJ, NSA, and IACP, to assess and identify the technologies to be transferred and the recipient locations. 
According to the Chief Scientist, this committee would be used to assist CTAC in identifying state and local counterdrug technology needs as well as the technologies ready for transfer.

Comprehensive Transitional Plans Were Not Provided

In its report accompanying ONDCP’s fiscal year 1993 appropriations bill, the House Appropriations Committee stated that before CTAC committed funds to an R&D project, it should have a written commitment from the client agency. This commitment was to specify that funds to purchase the technology, once successfully developed, would be included in future budget requests. Consequently, CTAC recommends that agencies provide CTAC with acquisition or transitional plans for each of their projects receiving CTAC funds. These plans are intended to increase the likelihood that any technology that is successfully developed through R&D efforts will eventually be used. However, most R&D projects that CTAC approved for funding did not have transitional plans, as recommended. A CTAC official told us that, in many instances, CTAC used verbal, good faith agreements with agency representatives, and that such agreements were not documented. From its establishment through April 1997, CTAC funded 72 projects. However, although we found brief references to transition or acquisition in several project proposals, only seven funded projects included transitional plans for deploying the technology under development. CTAC’s Chief Scientist told us that CTAC would like to receive more transitional plans from the agencies. However, other than a reference in the 1992 Blueprint Update to the lack of transitional plans, CTAC did not attempt to follow up on its recommendation that agencies provide transitional plans. Nor did CTAC raise this issue with the S&T Committee.

Extent of Duplication Avoided Was Unknown

As reflected in its mission statement, one of CTAC’s objectives is to prevent duplication of counterdrug R&D efforts.
According to CTAC officials, they look for unnecessary duplication in federal counterdrug R&D projects as part of the process for identifying counterdrug R&D needs and requirements and for selecting projects. A CTAC official also indicated that CTAC checks for duplication as part of its role in ONDCP’s drug budget certification process. CTAC also includes a listing of those projects comprising the National Counterdrug R&D Program in its Blueprint Update. In addition, according to CTAC officials, the S&T Committee meetings and the CTAC-sponsored symposiums, among other things, enable stakeholders to identify and avoid unnecessary, duplicative R&D efforts. CTAC officials were confident that the mechanisms they had in place helped avoid unnecessary duplication. However, they told us that they had not identified any specific examples of potentially duplicative counterdrug R&D projects that had been avoided due to CTAC’s efforts. The officials said they did not systematically attempt to identify or obtain feedback from participating agencies on incidents of duplication that had been avoided due to CTAC. Without a measure of outcome, CTAC has no assurance of how well it is carrying out and achieving this mission. As discussed later in this report, outcome measures are required by GPRA as part of future performance measurement tasks. Some of the S&T Committee members we interviewed told us that their agencies were generally able to avoid duplicative research projects because they learned of each other’s plans as a result of CTAC’s efforts. Moreover, in a computer listing of ongoing National Counterdrug R&D Program projects distributed by CTAC for updating, one agency identified two projects being done by other agencies that would meet its needs; therefore, it dropped its plans to submit proposals for similar projects. 
Also, 3 of the 10 S&T Committee representatives we surveyed responded that they were aware of potentially duplicative efforts that CTAC had helped them to avoid. For example, one agency representative noted that CTAC’s efforts helped avoid duplication in the demand reduction and nonintrusive inspection technology R&D areas. Another agency noted that the CTAC-sponsored Facial Recognition Working Group of the S&T Committee joined together all of the federal sponsors of and major customers for facial recognition R&D, thereby avoiding duplicative R&D efforts.

CTAC Made Some Positive Contributions to Federal Counterdrug Technology R&D Efforts but Had Not Developed Meaningful Performance Measures

CTAC officials cited numerous contributions or accomplishments relating both to 36 of the 72 R&D projects it funded and to the outreach efforts CTAC has sponsored since it was established. However, agency contact persons for individual CTAC-funded projects defined contributions differently, citing only those 10 projects (of the 36 projects identified by CTAC) that had actually resulted in usable technologies that were assisting agencies. Agency officials agreed that the outreach efforts cited by CTAC helped to enhance the exchange of information as well as avoid duplication. However, our task of determining CTAC’s contributions to federal drug control efforts was complicated because CTAC has no meaningful performance measures to enable it to (1) assess the extent to which it is achieving its mission and contributing to the development and deployment of counterdrug technology and (2) identify and implement any needed improvements to better achieve its mission.

CTAC Cited Projects and Outreach Efforts as Contributions

From 1992 until April 10, 1997, CTAC funded 72 projects.
According to CTAC’s Chief Scientist, a project was considered a contribution or accomplishment if any one of the following occurred: (1) a technology was developed and in use, (2) a phase of a project was completed, (3) a prototype was developed, (4) results of testing were completed, or (5) “substantial progress” in an area was achieved. In response to our request, the Chief Scientist developed and provided us with a list of 36 counterdrug projects that CTAC considered to be contributions. Some of these projects are highlighted in CTAC’s annual R&D Blueprint Update, which includes a listing of CTAC’s major accomplishments. CTAC also considered its outreach efforts to be contributions. The outreach program was developed to bring together major stakeholders involved in counterdrug efforts to exchange information on technology. According to a CTAC official, CTAC’s outreach efforts, from its inception through August 1997, included four international symposiums, one drug abuse treatment technology workshop, and six 1-day technology workshops designed to address user needs and technological opportunities.

Views on CTAC’s Contributions Differ

Because agencies are the ultimate customers of counterdrug technology, we contacted the CTAC-identified contact persons from the lead agencies for each of the 36 projects that CTAC considered to be contributions to obtain their views on the contributions. In summary, these agency officials considered only 10 of the 36 projects to be contributions because their criteria differed from CTAC’s. Their criteria were that the technology resulting from those projects (1) had been successfully used and (2) was assisting their agencies in fulfilling their counterdrug missions. These 10 projects are described in appendix V. The remaining 26 projects did not meet these criteria in that they generally either were completed and not implemented or were still in progress.
Specifically, 12 of these projects were categorized as completed, but they were not in use for a variety of reasons (e.g., the technology was not user friendly, was too expensive to use, had operational problems, or needed further development). For example, a project pertaining to narcotics detection in mail packages fell into this category because, although a prototype had been developed, the technology did not effectively detect cocaine. We were told by the agency contact persons that 11 of the 26 other projects were currently in development. For example, a transportable observation platform designed to provide long-range observation capability was still undergoing tests and evaluation. Finally, for the remaining three projects, the designated contact persons were not aware of the status of the projects and thus could not comment on whether they considered them to be contributions. The majority of the S&T Committee members we surveyed commented that CTAC’s unique contribution to the counterdrug effort is that it provides a forum for interagency exchange of information. For example, respondents noted that S&T Committee meetings held to present project proposals facilitated professional communication among agency representatives. They noted that these meetings gave R&D agency representatives an opportunity to informally discuss current research and thereby identify technology gaps and to learn about technology acquisition on non-CTAC-funded projects. Another aspect of CTAC’s coordination function and outreach efforts is CTAC-sponsored symposiums that bring together scientific and technical experts from academia, private industry, and government agencies. One S&T Committee member that we surveyed reported that his agency was able to bring together at a CTAC symposium all of the federal sponsors and most of the major customers of a new technology in the area of wide-area surveillance. 
Also, 97 percent of the state and local law enforcement participants who completed exit evaluations at the six 1-day workshops held to date reported that they found the workshops helpful. In addition, we surveyed S&T Committee members to obtain their overall views on CTAC’s contributions, particularly regarding its coordination of federal counterdrug technology R&D efforts and its support of ONDCP’s National Drug Control Strategy. When asked to determine, from their agencies’ perspectives, how effective or ineffective CTAC has been in coordinating and overseeing federal counterdrug technology R&D activities, representatives from the 10 agencies we surveyed provided mixed responses. Six of the 10 agency respondents stated that CTAC was “sometimes effective, sometimes ineffective,” with 2 agency respondents stating that CTAC was “generally effective,” and 2 responding “generally ineffective.” One of the six respondents explained that CTAC was “somewhat effective” when focusing its efforts on the R&D technology that was needed and not being pursued by other agencies, but was “less effective” in areas where agencies had different technology requirements or needs. Another of the six respondents explained that CTAC had been “generally effective” in its function of coordinating the federal counterdrug technology R&D effort, but had been “generally ineffective” in developing technology to meet needs. When asked to determine, from their agencies’ perspectives, to what extent CTAC’s involvement has had a positive effect on federal counterdrug technology R&D efforts that support the goals of the National Drug Control Strategy, the representatives’ responses ranged from CTAC’s having a “moderate” effect to having “little or no” effect. For example, one respondent noted that before fiscal year 1997, CTAC had focused more on individual agencies’ technology needs than on technology that specifically supported the overall National Strategy.
Meaningful Performance Measures Were Lacking

Determining CTAC’s progress in achieving its mission and its contributions to the development and deployment of counterdrug technology was complicated by CTAC’s lack of meaningful performance indicators or measures. Although CTAC has a specific mission and responsibilities, according to ONDCP and CTAC officials, it had not developed indicators to measure its progress in achieving its mission, that is, the outcome of its efforts. Although scientific research is often considered to be intrinsically valuable to society, there is pressure on all federal agencies, including S&T agencies, to demonstrate that they are making effective use of taxpayers’ dollars. This emphasis is evident in the passage of GPRA. In response to questions about the value and effectiveness of federal programs, the Act seeks to shift federal agencies’ focus away from traditional concerns, such as staffing, activity levels, and tasks completed, toward a focus on program outcomes—that is, the real difference a federal program makes in people’s lives. Within the context of the Act, an “outcome measure” assesses the results of a program activity compared to its intended purpose, while an “output measure” tabulates, calculates, or records the level of activity or effort and can be expressed in a quantitative or qualitative manner. Since CTAC had no formal performance measures, we relied on general criteria provided by the Chief Scientist, as well as our survey results and subsequent discussions with S&T Committee members, to learn about CTAC’s contributions. As previously discussed, we found a lack of agreement between CTAC and S&T Committee members regarding the criteria for CTAC project-related contributions. When we examined the contributions identified by CTAC, we found that they were more output-related than outcome-related.
That is, CTAC focused more on quantifying specific activities and products than on assessing their effectiveness or impact on law enforcement and demand reduction counterdrug efforts. For example, CTAC generally considered completed projects (output) successful whether or not they resulted in the deployment of useful technology by law enforcement or demand reduction agencies. In addition, our review did not find that CTAC obtained periodic feedback from law enforcement or demand reduction agencies on the extent to which the technology resulting from CTAC-funded projects was useful in helping to reduce drug supply or the demand for drugs (outcome). Also, CTAC cited the number of symposiums and workshops it sponsored (output) but did not specifically measure the outcome of those forums in terms of, for example, the unnecessary duplicative R&D avoided and the technology developed and used (outcome). In addition, CTAC is responsible for coordinating counterdrug R&D activities to avoid unnecessary duplication and to support the highest priority projects that would otherwise go unfunded. However, we found no indication that CTAC had developed a means for measuring the results and effectiveness of its coordination (outcome), such as obtaining feedback from the agencies with R&D missions whose activities it is charged with coordinating. Nor, as we previously discussed, has CTAC developed a means for measuring its effectiveness in identifying and avoiding unnecessary duplicative R&D efforts. Without measurable outcome indicators linked to its mission and identifiable goals and objectives, CTAC and others cannot reliably determine CTAC’s impact on reducing the nation’s drug problems through the development and deployment of useful counterdrug technologies. In September 1997, ONDCP and CTAC officials informed us that they were taking steps as part of two separate, but related, initiatives to develop long-term strategic goals.
First, pursuant to statutory provisions requiring the development and submission of the National Drug Control Strategy, ONDCP has been developing a performance measurement system for the National Strategy. As part of this effort, CTAC officials said that CTAC and other federal R&D agencies have been developing performance targets and corresponding measures or indicators for each of the technology-related objectives for the National Strategy’s goals. However, these indicators are intended to measure the administration’s overall progress in achieving the national goals and objectives, which involves the input and efforts of various agencies, and not to measure CTAC’s execution of its mission or its specific achievements and contributions. Second, pursuant to GPRA, ONDCP is developing a separate strategic plan with objectives, targets, and performance indicators specific to the operations of ONDCP and its components. ONDCP officials told us that in response to the Office of Management and Budget’s (OMB) comments on a draft of the plan, they and CTAC officials were developing specific objectives, targets, and performance indicators for CTAC that would be included in the strategic plan. They stated that these indicators or measures would be primarily output-oriented (e.g., number of projects funded, reports generated, or symposiums sponsored). They also stated that they planned to work with CTAC in developing outcome measures for CTAC later, although they did not provide a specific time frame.

Conclusions

CTAC has in place a coordination process for identifying counterdrug technology needs and selecting and funding R&D projects to meet those needs. However, we found that CTAC’s design and execution of the process did not allow CTAC or us to determine the extent to which its process was identifying and funding the otherwise unfunded highest priority technology needs.
The primary reason for this situation appears to be a lack of regular communication between CTAC and counterdrug R&D agencies through the S&T Committee, which is their representative body. S&T Committee meetings have been infrequent, and the committee has not been used regularly and consistently in helping to make key decisions, such as which projects CTAC should fund with its limited available funds. According to S&T Committee members, when the committee has met more frequently, it was effective in enabling members to exchange information, avoid duplication, and foster better cooperation and coordination. By dealing with individual S&T Committee members or working groups, CTAC may not be taking full advantage of the interaction and deliberations among the members on decisions and advisory matters as intended. As a result, CTAC may not be funding the most critically needed counterdrug technologies. Moreover, the charter for the S&T Committee, written in 1990 before CTAC was established, has not been revised to reflect changes in the committee’s composition, responsibilities, and relationship to CTAC. Because of the changes in the S&T Committee’s membership since the charter was originally written, it is important that the document be updated as needed. In addition, federal R&D agencies’ counterdrug needs have not been regularly reassessed and updated; state and local technology needs, although funded in some cases, have not been systematically considered, along with federal needs, as part of CTAC’s needs identification and project selection process; and agencies have often failed to include the transitional plans needed to help ensure that technologies successfully developed with CTAC funds are used. Also, while CTAC has established mechanisms to avoid duplicative R&D efforts, it has not gathered the necessary feedback from its constituent agencies to determine whether these mechanisms are working.
Therefore, CTAC does not know to what extent it is fulfilling its mission objectives of helping the counterdrug R&D community to identify and avoid duplication. Recent efforts by CTAC and the S&T Committee’s Technology Coordination Working Group to develop a 10-year counterdrug technology development plan, with 5-year budget projections, in support of ONDCP’s 10-year National Drug Control Strategy are positive steps toward defining and addressing our nation’s counterdrug technology needs. These efforts are also good examples of how CTAC could more effectively communicate and coordinate with the counterdrug technology R&D community in accomplishing its mission. However, because of the shortcomings we found in its coordination process for annually identifying, selecting, and funding R&D projects to meet identified technology needs and gaps, CTAC may not be able to effectively implement, and adjust as necessary, the National Drug Control Strategy and the technology development plan from year to year. CTAC has made some identifiable contributions to needed counterdrug technology development. However, the extent to which CTAC has achieved its mission of helping to develop and deploy needed counterdrug technology is unclear because it has not yet developed meaningful, measurable performance goals and outcome indicators. This situation is reflected in the varying perspectives on CTAC’s contributions to counterdrug technology efforts held by CTAC and the other agencies involved in those efforts. Although both CTAC and the S&T Committee members we surveyed agreed that CTAC’s outreach efforts had improved information-sharing among members of the counterdrug R&D community, many of the R&D projects that CTAC cited as contributions were not considered as such by the agencies that will ultimately use the technologies.
One reason for this difference of opinion appears to be that, while CTAC counted the attainment of certain milestones in the development process as contributions, the lead agencies were interested primarily in implementing efficient and effective counterdrug technologies in the field. Until CTAC and the agencies it assists—its customers and stakeholders—concur in how CTAC’s contributions to the development and deployment of counterdrug technology should be measured, it will be difficult to determine the extent to which CTAC is achieving its mission. ONDCP/CTAC has an opportunity to address this situation by coordinating closely with its key customers and stakeholders as it develops specific goals and performance measures under GPRA. However, CTAC is currently developing output measures, rather than the outcome measures necessary to determine with any precision the extent to which CTAC is achieving the purpose for which it was created.

Recommendations

For CTAC to more effectively coordinate with federal, state, and local counterdrug R&D agencies in identifying and prioritizing technology needs and selecting projects for CTAC funding, we recommend that the Director, ONDCP, direct the Chief Scientist to work with the S&T Committee to help ensure that:

- The S&T Committee meets regularly to exchange information on federal, state, and local drug supply and demand reduction technology needs; obtain, assess, and prioritize R&D needs; and recommend to the Chief Scientist selection and funding of the otherwise unfunded highest priority projects. In this regard, the S&T Committee’s charter should be updated to reflect the committee’s current composition, responsibilities, and relationship to CTAC.
- Projects selected for CTAC funding have transitional/acquisition plans.
Furthermore, to help ensure that CTAC can adequately measure whether it is achieving its mission, we recommend that the Director, ONDCP, direct the Chief Scientist to develop, within a set period, performance objectives and outcome measures that make it possible to assess the extent to which CTAC is achieving its various mission objectives and contributing to the development and deployment of counterdrug technologies.

Agency Comments and Our Evaluation

ONDCP provided comments on a draft of this report; its comments are reprinted in appendix VI. Overall, ONDCP agreed with our findings and conclusions and is taking action on all of our recommendations. Regarding our first recommendation, ONDCP stated in its written comments that it had directed CTAC to revise the S&T Committee’s 1990 charter. Other than changes in the composition of the committee, ONDCP did not specify how the charter would be revised. If implemented as set forth in our recommendation, however, revising the charter should help ensure that all parties understand their roles, responsibilities, and expectations. ONDCP also indicated in its written comments that the membership of the S&T Committee would include officials of the President’s Cabinet with drug control responsibilities. An ONDCP official subsequently informed us that ONDCP expects that “principal deputy secretaries” of the various agencies will sit as members of the committee. This would represent a change from the current membership, which consists of working-level officials with knowledge of their agencies’ counterdrug technology R&D activities.
However, according to the Chief Scientist, the working-level officials currently on the S&T Committee would continue to serve on the committee’s Technology Coordination Working Group, which he chairs and which would serve as CTAC’s principal mechanism for coordinating counterdrug R&D efforts, identifying and prioritizing technology needs, and selecting otherwise unfunded R&D projects for CTAC funding. The Working Group would then advise the S&T Committee, which would serve as the steering and policymaking body for counterdrug technology R&D efforts. Regarding our second recommendation, ONDCP stated that it had directed CTAC to use the annual budget recertification process to ensure that the lead agencies for CTAC-sponsored projects involving the delivery of prototype systems have written acquisition or transitional plans. This action, if properly implemented, should fulfill the intent of our recommendation. Regarding our third recommendation, ONDCP expressed the intention to verify CTAC’s performance by measuring the contributions of CTAC-sponsored counterdrug technologies to the efficiency and effectiveness of user agencies within the framework of ONDCP’s national drug control goals and objectives. To track and measure CTAC’s performance, ONDCP proposes to use the strategic plan, annual plan, and annual performance report required under GPRA. Depending on the types of indicators that ONDCP and CTAC develop to measure CTAC’s performance and contributions, these proposed actions could go a long way toward helping to clarify CTAC’s impact on the development and deployment of counterdrug technology. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies of the report to the Ranking Minority Member of the Senate Caucus on International Narcotics Control, the appropriate congressional committees, the Director of ONDCP, CTAC’s Chief Scientist, the heads of agencies represented on the S&T Committee, the Director of OMB, and other interested parties. Also, copies will be made available to others upon request. The major contributors to this report are listed in appendix VII. If you have any questions about this report, please call me on (202) 512-8777.

Objectives, Scope, and Methodology

In response to the request of the Chairman of the Senate Caucus on International Narcotics Control that we review the operations and contributions of the Counterdrug Technology Assessment Center (CTAC), our objectives were to determine (1) how CTAC coordinates its counterdrug research and development (R&D) efforts with other federal agencies to address counterdrug R&D needs that are not being met by other agencies and to avoid unnecessary duplication and (2) what contributions CTAC has made to counterdrug R&D efforts since its creation. Our work covered CTAC operations and contributions during fiscal years 1992 through 1997. We conducted our review primarily in the Washington, D.C., area at the headquarters of the Office of National Drug Control Policy (ONDCP)/CTAC. We interviewed officials from CTAC, the Office of Management and Budget (OMB), and the following key federal law enforcement and other agencies involved in the National Counterdrug R&D Program: the U.S. Customs Service, Drug Enforcement Administration (DEA), Department of Defense (DOD), U.S. Coast Guard, Federal Bureau of Investigation (FBI), National Institute of Justice (NIJ), Immigration and Naturalization Service (INS), and National Institute on Drug Abuse (NIDA). We contacted CTAC’s technical and contracting agencies—the U.S.
Army Electronic Proving Ground in Fort Huachuca, AZ, and the Tennessee Valley Authority in Knoxville, TN—to discuss and obtain documentation of CTAC’s project selection process. Furthermore, on the basis of their usage of CTAC-sponsored technology, we judgmentally selected and contacted two Customs Service field offices. To address both of our objectives, we used a structured questionnaire to survey representatives of 10 of the 21 federal agencies on the S&T Committee. We judgmentally selected the 10 agencies on the basis of preliminary discussions with ONDCP, CTAC, and several federal agencies involved in counterdrug technology R&D activities. The 10 agencies varied in size and level of funding but accounted for the majority of the overall budget for the National Counterdrug R&D Program from fiscal years 1992 through 1997. In addition to size and level of funding, we considered such factors as the agencies’ functions (drug supply and demand reduction) and the extent and length of their involvement in R&D activities. We surveyed the agencies in March and April 1997. Using this questionnaire, we asked officials their views about (1) how well CTAC communicated various kinds of information to agency users of counterdrug technology, (2) how effective CTAC was in overseeing and coordinating federal counterdrug technology R&D activities and in avoiding duplication and filling technology gaps, and (3) the extent to which CTAC’s involvement has had a positive effect on federal counterdrug technology R&D efforts that support the goals of the National Drug Control Strategy. We also asked officials their views about (1) CTAC’s general and specific contributions to federal counterdrug technology R&D efforts and (2) the extent to which specific technologies developed and tested with CTAC funds had been fielded and used.
Also, using a similarly structured questionnaire, we judgmentally selected and interviewed nine state and local administrators by telephone or in person at CTAC’s regional workshop in Atlanta, GA. We also analyzed evaluation forms that had been completed by attendees at all six regional workshops sponsored by CTAC. To address the first objective, we also analyzed (1) the involvement of CTAC and key law enforcement and other agencies in counterdrug technology R&D efforts for fiscal years 1992 through 1997, including ONDCP’s National Drug Control Strategies, CTAC’s corresponding annual Counterdrug Research and Development Blueprint Update reports, minutes of Science and Technology (S&T) Committee and working group meetings, and pertinent memorandums and other documents; (2) CTAC’s policies, procedures, and processes for identifying R&D needs and for prioritizing, selecting, and funding R&D projects; (3) CTAC’s communication of guidance and project-related information to counterdrug technology R&D and user agencies; (4) funding appropriated to and allocated by CTAC for the National Counterdrug R&D Program for fiscal years 1992 through 1997; and (5) CTAC’s legislative history. We did not verify the validity of data provided by CTAC. To address the second objective, we also reviewed documentation to identify and analyze contributions or accomplishments cited by CTAC. CTAC provided us with a list of contributions and the names of contact officials in the appropriate federal R&D agencies. We discussed with these officials their views on the contributions and the status of the related projects.

Federal Counterdrug Research and Development Spending, FY 1992-97

Interagency Crime and Drug Enforcement

CTAC Funding by Various Spending and Technology Thrust Categories

Table III.1: CTAC Funding by Various Spending Categories, Fiscal Years 1992-97

CTAC funded demand reduction projects prior to being mandated by Congress to include this area in its mission.
This responsibility was added by the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322). The El Paso Intelligence Center received $600,000 and the Model Drug Law Conference received $1 million from CTAC’s fiscal year 1996 appropriation. In fiscal year 1997, the Law Conference received another $1 million.

Table III.2: Distribution of CTAC R&D Funding by Technology Thrust Area, Fiscal Years 1992-97

Description of CTAC’s Process for Identifying and Prioritizing Counterdrug Technology Needs and Selecting CTAC-Funded Research and Development Projects

Overview

The following is a detailed description, provided by the Chief Scientist and other CTAC officials, of the process to be followed by CTAC and the S&T Committee for identifying and prioritizing counterdrug technology needs and selecting R&D projects for funding with available CTAC funds. The annual selection process for CTAC-funded R&D projects is to begin with the S&T Committee’s update of scientific and technological needs. CTAC generally requests, in writing, the scientific and technological need updates from the counterdrug law enforcement members of the S&T Committee between April and May of each year. To address demand reduction needs, CTAC is to consult with the National Institute on Drug Abuse. These scientific and technological needs are grouped into four areas called thrusts: (1) tactical technology, (2) nonintrusive inspection, (3) wide-area surveillance, and (4) demand reduction. The scientific and technological needs of the drug enforcement agencies are to be placed in priority order according to short-, medium-, and long-term requirements in the thrust areas of tactical technology, nonintrusive inspection, and wide-area surveillance. The demand reduction thrust area is not included in the priority listing. The priority listing of short-, medium-, and long-term needs by thrust area is generally included in an appendix to CTAC’s Blueprint Update.
To address the scientific and technological needs of the drug enforcement agencies, CTAC solicits either white papers or proposals through Broad Agency Announcements (BAA). These submissions come from industry, federal government laboratories, federal agencies, and academia. Furthermore, the members of the S&T Committee can submit proposals at any time. According to the Chief Scientist, CTAC works with the members of the S&T Committee to develop a potential project. If the potential project is accepted by an expert panel composed of government officials who are experts in the area under consideration, CTAC will consider the project for funding. CTAC’s technical and contracting agent at the U.S. Army Electronic Proving Ground handles the evaluations of white papers and proposals received in response to the BAA. The Tennessee Valley Authority, CTAC’s other technical and contracting agent, is primarily responsible for interagency agreements with academic institutions. The appropriate experts evaluate white papers and proposals for technical merit and execution risk as they are received. The evaluation criteria are as follows:

- potential contribution of the effort to the various counterdrug law enforcement agencies’ specific missions, as well as relevance and contribution to the national technology base;
- overall scientific and technical merit of the proposal, including (1) an understanding of the technical problem and its application to counterdrug enforcement and demand reduction, (2) the soundness of the approach, and (3) the probability of success;
- the performer’s capabilities, related experience, facilities, techniques, or unique combinations of these that are integral to achieving the proposed objectives;
- the qualifications, capabilities, and experience of the proposed principal investigator, team members, or key personnel who are critical to achieving the proposed objectives; and
- the realism of the proposed cost and the availability of funds.
On the basis of the evaluation of proposals, CTAC’s technical and contracting agents compile a list of acceptable proposals for CTAC’s consideration. This listing of acceptable R&D proposals is forwarded to CTAC. Before the Chief Scientist makes his final selection of R&D projects for funding, he assesses the proposals against the following criteria: (1) alignment with the National Drug Control Strategy’s goals and objectives, (2) multiagency use, (3) innovativeness and potential payoff, (4) developmental risk, (5) duplication, (6) acquisition and transitional planning, and (7) time horizon (i.e., short-, medium-, or long-term). CTAC identifies a sponsoring agency for each project to provide oversight. CTAC then is to discuss each project with the lead agency to confirm that the project would meet the agency’s counterdrug mission and to negotiate funding for the project. In addition, CTAC is to assess each agency’s counterdrug R&D program plan to identify duplication and gaps in the counterdrug area. The Chief Scientist assesses the continuation of projects on the basis of the following criteria: (1) the progress that has been made, (2) input from the sponsoring agency, and (3) the availability of funds. CTAC then assesses new and existing projects to decide the best balance for spending within budget constraints, according to a CTAC official. On the basis of CTAC’s assessment and consultation with experts, the Chief Scientist annually selects, between June and September, the R&D projects to address the needs of counterdrug efforts. CTAC prepares an R&D counterdrug program plan that lists the selected projects. The members of the S&T Committee review the plan, and the ONDCP Director approves it. CTAC notifies the House and Senate Treasury and Postal Appropriations Committee staffs about the R&D counterdrug program plan between November and December.
Figure IV.1 provides a flowchart of CTAC’s technology needs identification and R&D project selection process as designed:

- CTAC is to direct the S&T Committee to annually identify counterdrug needs.
- CTAC’s technical and contracting agents are to receive proposals addressing counterdrug needs from federal agencies, government labs, industry, and academic institutions.
- Knowledgeable experts from various government agencies and government-related organizations are to evaluate proposals on their technical merit and execution risk.
- CTAC’s technical and contracting agents are to compile a list of acceptable proposals.
- CTAC is to review the list of proposals on the basis of the National Drug Control Strategy’s goals and other criteria.
- CTAC’s Chief Scientist is to select proposals in consultation with knowledgeable experts.
- The S&T Committee is to annually review the draft counterdrug R&D program plan.
- CTAC is to forward the proposed R&D program plan to the ONDCP Director for approval.
- CTAC is to notify the House and Senate Appropriations Committee staffs of the approved counterdrug R&D program plan.

The U.S. Army Electronic Proving Ground at Fort Huachuca, AZ, and the Tennessee Valley Authority, Knoxville, TN, are CTAC’s technical and contracting agents.

Technology-Related Contributions/Accomplishments CTAC Identified and Lead R&D Agencies Confirmed as Completed and Successfully Fielded, FY 1992-97

This project produced a government-owned set of drawings and specifications to be used for the manufacturing of a new sensor design that includes the capability to employ the system tactically or strategically, with sufficient modularity to allow for upgrades in technology and introduction of new sensor types into the system. The initial units for this project have been delivered, and INS considered it successful.

This project provides support to DEA and the Agricultural Research Service to accurately estimate cocaine production.
Funding is provided for continued development of a scientific and statistically valid technique for estimating cocaine production. DEA reported that this system has been operational for 3-1/2 years and has been rated as successful.

This project developed improvements to existing field test kits that detect trace amounts of narcotics residue on hands and surfaces. The proof of concept for these kits was originally funded by the FBI. During the course of this project, field test kits were supplied to federal, state, and local law enforcement agencies for field use. Since this effort, a new commercial product has been developed.

This communications system satisfied an immediate need to provide crucial, dependable communications for law enforcement officers who perform covert surveillance during counterdrug enforcement operations. The project was completed in March 1997, and the systems are now in use by the FBI.

This project produced a Low Probability of Intercept and Low Probability of Detection system, which is worn by agents to support both surveillance and communications requirements. Production units are now being used by the FBI.

This project explored a number of commercially available products that permit the optical scanning of data into a database, thereby providing the capability to retrieve and index data for timely access with minimal human interface. FinCEN rated the technology resulting from this project as helpful and timesaving.

This project developed a miniaturized electronics package with an improved source/detector ratio to reduce the source size and permit a lighter, smaller contraband detector to be produced. This new detector can be mounted on an inspector’s belt and be readily available for examining hard-to-inspect areas and items for illicit drugs. These detectors are being bought and used by Customs.
This project developed a nonintrusive portable or mobile prototype field inspection system to detect contraband in empty containers that transport liquids. A prototype system was developed and was well-accepted by Customs’ field offices.

A community test-and-evaluation center was established in fiscal year 1991. The center continues to be used by Customs for field-testing technology.

This project consisted of a controlled series of field evaluations of existing, commercially available narcotics detection equipment. Four systems were tested during the first testing cycle, which was completed in November 1994. According to Customs, this project is ongoing and has been very useful.

Comments From the Office of National Drug Control Policy

Major Contributors to This Report

General Government Division, Washington, D.C.
Office of the General Counsel, Washington, D.C.
Geoffrey R. Hamilton, Senior Attorney
| Pursuant to a congressional request, GAO reviewed the operations and contributions of the Counterdrug Technology Assessment Center (CTAC), focusing on: (1) how CTAC coordinates its counterdrug research and development (R&D) efforts with other federal agencies to address counterdrug R&D needs that are not being met by other agencies and to avoid unnecessary duplication; and (2) what contributions CTAC has made to counterdrug R&D efforts since its creation. GAO noted that: (1) CTAC has a coordination process in place for identifying counterdrug technology needs and selecting and funding R&D projects to meet those needs; (2) however, GAO identified the following shortcomings in CTAC's design and execution of the process: (a) the Science and Technology (S&T) Committee's charter, which was created before CTAC existed, does not reflect the Committee's current composition, responsibilities, and relationship to CTAC; (b) CTAC did not regularly and consistently involve the S&T Committee in its coordination process; (c) CTAC did not regularly evaluate and prioritize the agencies' counterdrug R&D technology needs to ensure that it funded otherwise unfunded projects with the highest priority; (d) CTAC did not systematically identify and consider the counterdrug technology needs of state and local agencies, in conjunction with federal agencies' needs, as part of its regular process for selecting and funding projects, and state and local agencies were only recently represented on the S&T Committee; (e) agencies generally did not submit transitional or acquisition plans to CTAC; and (f) although a few agencies cited instances where duplication was avoided as a result of CTAC's efforts, CTAC had not developed any means for determining the extent to which unnecessary duplication had been identified and avoided due to its efforts; (3) as a result of these shortcomings, neither GAO nor CTAC could determine the extent to which its coordination process was meeting its mission; (4) 
GAO's task of determining CTAC's contribution to federal drug control efforts was complicated by CTAC's lack of meaningful performance measures to enable it to: (a) assess its progress in achieving its mission and contributing to the development and deployment of counterdrug technology; and (b) identify and implement any needed improvements to better achieve its mission; (5) CTAC's Chief Scientist told GAO that he considered not just technologies that are completed and in use as contributions, but also uncompleted projects that have reached various stages of development; and (6) the contact officials of the lead R&D agencies identified by CTAC told GAO that they considered 10 of the 36 projects cited as contributions by CTAC to be actual contributions. |
FAA Efforts to Develop an Inspector Targeting System

As early as 1987, we identified the need for FAA to develop criteria for targeting safety inspections to airlines with characteristics that may indicate safety problems, and we noted that targeting was important because FAA may never have enough resources to inspect all aircraft, facilities, and pilots. FAA employs about 2,500 aviation safety inspectors to oversee about 7,300 scheduled commercial aircraft, more than 11,100 charter aircraft, about 184,400 active general aviation aircraft, about 4,900 repair stations, slightly more than 600 schools for training pilots, almost 200 maintenance schools, and over 665,000 active pilots. Although FAA has taken steps to better target its inspection resources to areas with the greatest safety risks, these efforts are still not complete. The Safety Performance Analysis System (SPAS), which FAA began developing in 1991, is intended to analyze data from up to 25 existing databases that contain such information as the results of airline inspections and the number and nature of aircraft accidents. The system is then expected to produce indicators of an airline’s safety performance, which FAA will use to identify safety-related risks and to establish priorities for its inspections. FAA completed development and installation of the initial SPAS prototype in 1993. As of April 1996, FAA had installed SPAS in 59 locations but was experiencing some logistical problems in installing SPAS hardware and software. Full deployment of the $32-million SPAS system to all remaining FAA locations nationwide is scheduled to be completed in 1998. In February 1995, we reported that although FAA had done a credible job in analyzing and defining the system’s user requirements, SPAS could potentially misdirect FAA resources away from higher-risk aviation activities if the quality of its source data is not improved.
SPAS program officials have acknowledged that the quality of information in the databases linked to SPAS poses a major risk to the system. To improve the quality of data used in SPAS analyses, we recommended that FAA develop and implement a comprehensive strategy to improve the quality of all data in its source databases. FAA concurred with the need for this comprehensive strategy and planned to complete it by the end of 1995. As of April 1996, the strategy drafted by an FAA contractor had not been approved by agency management. Until FAA completes and implements its strategy, the extent and impact of the problems with the quality of the system’s data will remain unclear. Although we have not determined the full extent of the problems, our recent audit work and that of the DOT IG have identified continuing problems with the quality of data entered into various source databases for SPAS. FAA’s Program Tracking and Reporting Subsystem (PTRS), which contains the results of safety inspections, has had continuing problems with the accuracy and consistency of its data. Several FAA inspectors expressed concerns about the reliability and consistency of data entered into PTRS. According to an inspector who serves on a work group to improve SPAS data inputs, reviews of inspectors’ entries revealed some inaccurate entries and a lack of standardization in the comment section, where inspectors should report any rules, procedures, practices, or regulations that were not followed. He said inspectors continued to comment on things that were not violations while some actual violations went unreported. For example, during our ongoing work we recently found a PTRS entry indicating an inspection that never occurred, on a type of aircraft that the carrier did not use.
The DOT IG also concluded in a November 1995 report that FAA inspectors did not consistently and accurately report their inspection results in PTRS because reporting procedures were not interpreted and applied consistently by FAA inspectors and management oversight did not identify reporting inconsistencies. The DOT IG recommended that FAA clarify PTRS reporting procedures to ensure consistent and accurate reporting of inspections and establish controls to ensure that supervisors review PTRS reports for reporting inconsistencies and errors. Such problems can jeopardize the reliability of SPAS in targeting inspector resources to the airlines and aircraft that warrant more intensive oversight than others.

Adequacy of Inspector Training Continues to Be a Concern

Over the last decade, we, the DOT IG, and internal FAA groups have repeatedly identified problems and concerns related to the technical training FAA has provided to its inspectors. For example, both we and the IG have reported that FAA inspectors were inspecting types of aircraft that they had not been trained to inspect or for which their training was not current. In the wake of these findings, FAA has revised its program to train inspectors by (1) developing a process to assess training needs for its inspector workforce, (2) attempting to identify those inspections that require aircraft-specific training and limiting this training to the number of inspectors needed to perform these inspections, and (3) decreasing the requirements for recurrent flight training for some of its inspectors. However, our interviews with 50 inspectors indicate that some inspectors continue to perform inspections for which they are not fully trained, and some inspectors do not believe they are receiving sufficient training. While we cannot determine the extent of these problems from our limited interviews, the training issues reflect persistent concerns on which we and others have reported for many years.
For example, we reported in 1989 that airworthiness inspectors received about half of the training planned for them in fiscal year 1988. Furthermore, we reported in 1989, and the DOT IG reported again in 1992, that inspectors who did not have appropriate training or current qualifications were conducting flight checks of pilots. The Director of FAA’s Office of Flight Standards Service acknowledged that the adequacy of inspector training remains a major concern of inspectors.

Some Inspectors Still Do Not Receive Needed Technical Training

Recognizing that some of its employees had received expensive training they did not need to do their jobs while others did not receive essential training, FAA in 1992 developed a centralized process to determine, prioritize, and fund its technical training needs. This centralized process is intended to ensure that funds are first allocated for training that is essential to fulfilling FAA’s mission. In accordance with this process, each FAA entity has developed a needs assessment manual tailored to the entity’s activities and training needs. For example, the manual for the Flight Standards Service outlines five categories of training. The highest priority is operationally essential training, which is defined as training required to provide the skills needed to carry out FAA’s mission. The other four categories, which are not considered operationally essential, involve training to enhance FAA’s ability to respond to changes in workload, to use new technologies, to enhance individual skills, or to provide career development. To identify initial course sequences for new hires and time frames for their completion, as well as some continuing development courses that are not aircraft-specific, FAA created profiles for the various types of inspectors.
Although each profile notes that additional specialized training may be required according to an inspector’s assigned responsibilities and prior experience, the centralized process provides no guidance for analyzing individualized needs. Several inspectors we interviewed who had completed initial training said that they were not receiving the specific technical training needed for their assigned responsibilities. The inspectors said that the assessment process does not fully address their advanced training needs and that some inspectors were performing inspections for which they had not received training. For example, one maintenance inspector told us he was responsible for inspecting seven commuter airlines but had never attended maintenance training school for the types of aircraft he inspects. He said that he had requested needed training for 5 years with his supervisor’s approval, but his requests were not ranked high enough in the prioritization process to receive funding. Instead, FAA sent the maintenance inspector to training on Boeing 727s and composite materials, which were not related to the aircraft he was responsible for. He said that he did not request these courses and assumed he was sent to fill available training slots. Another maintenance inspector said that although he was trained on modern, computerized Boeing 767s, he was assigned to carriers that fly 727s, 737s, and DC-9s with older mechanical systems. While the Director of the Flight Standards Service said that inspectors could obtain some aircraft-specific training by attending classes given by the airlines they inspect, inspectors with whom we spoke said that supervisors have not allowed them to take courses offered by airlines or manufacturers because their participation could present a potential conflict of interest if the courses were taken for free.
Some inspectors we interviewed said that when they could not obtain needed training through FAA they have audited an airline’s classes while inspecting its training program. Although the inspectors might acquire some knowledge by auditing an airline’s class, they stressed that learning to oversee the repair of complex mechanical and computerized systems and to detect possible safety-related problems requires concentration and hands-on learning, not merely auditing a class. The inspectors said that extensive familiarity with the aircraft and its repair and maintenance enhances their ability to perform thorough inspections and to detect safety-related problems. While technical training is especially important when inspectors assume new responsibilities, other inspectors we interviewed said that they sometimes do not receive this training when needed. For example, although an operations inspector requested Airbus 320 training when a carrier he inspected began using that aircraft, he said that he did not receive the training until 2 years after that carrier went out of business. Similarly, several inspectors told us that despite their responsibility to approve global positioning system (GPS) receivers, a navigation system increasingly being used in aircraft, they have had no formal training on this equipment. Finally, a maintenance inspector, who was responsible for overseeing air carriers and repair stations that either operate or repair Boeing 737, 757, 767, and McDonnell Douglas MD-80 aircraft, said that the last course he received on maintenance and electronics was 5 years ago for the 737. Although the other three aircraft have replaced mechanical gauges with more sophisticated computer systems and digital displays, the inspector has not received training in these newer technologies. 
While acknowledging the desirability of updating training for new responsibilities, the Director of the Flight Standards Service said that the process of prioritizing limited training resources may have defined essential training so narrowly that specialized training cannot always be funded. The Acting Manager of FAA’s Flight Standards National Field Office, which oversees inspector training, told us that to improve training programs for inspectors, FAA is also providing training through such alternative methods as computer-based instruction, interactive classes televised via satellite, and computer-based training materials obtained from manufacturers. However, the effectiveness of these initiatives depends on how FAA follows through in promoting and using them. For example, while FAA has developed a computer-based course to provide an overview of GPS, the course is not currently listed in the training catalogue for the FAA Academy. We found that several inspectors who had requested GPS training were unaware of this course. According to the Manager of the Regulatory Standards and Compliance Division of the FAA Academy, their lack of awareness may be because the course is sponsored by a different entity of FAA, the Airway Facilities Service. If this GPS course meets inspectors’ needs, they could be informed of its availability through a special notice and by cross-listing it in FAA’s training catalogue. The extent to which inspectors will use distance learning equipment (e.g., computer-based instruction) and course materials depends in great part on their awareness of existing courses and whether the equipment and software are readily available.

FAA Has Limited the Number of Inspectors Who Receive Aircraft-Specific Training

Because of resource constraints, FAA has reduced the number of inspections for which aircraft-specific training is considered essential and has limited such training to inspectors who perform those inspections.
For example, FAA requires inspectors to have pilot credentials (type ratings by aircraft) when they inspect commercial aircraft pilots during flight. FAA has a formula to determine how many inspectors each district office needs to perform inspections requiring aircraft-specific skills. A district office must perform a minimum number of aircraft-specific inspections each year to justify training for that type of aircraft. Offices that perform fewer than the minimum number of inspections that require specialized skills may borrow a “resource inspector” from FAA headquarters or a regional office. According to the Director of the Flight Standards Service, FAA cannot afford to maintain current pilot credentials for all inspectors so they can conduct pilot inspections. However, inspectors we interviewed mentioned problems with using resource inspectors, although we have not determined how pervasive these problems are. Some of the inspectors said that they had difficulties obtaining resource inspectors when needed. Additionally, they said that resource inspectors are sometimes not familiar with the operations and manuals of the airline they are asked to inspect and may therefore miss important safety violations of that airline’s policies or procedures. For example, one inspector, who had primary responsibility for a carrier that was adding a new type of aircraft, had to terminate an inspection because the airline’s crew was not operating in accordance with the carrier’s operations manual; the resource inspector who accompanied him had not detected this problem because he was unfamiliar with that carrier’s specific procedures. In responding to these concerns, the Director of the Flight Standards Service acknowledged that the resource inspector may need to be paired with an inspector familiar with the airline’s manuals.
According to the Director of the Flight Standards Service and the Acting Manager of the Evaluations and Analysis Branch, identifying inspections that require aircraft-specific training and limiting training to those who perform such inspections has reduced the number of inspectors who need expensive aircraft-specific flight training. They said this policy also helps to ensure that inspections requiring a type rating are conducted only by inspectors who hold appropriate, current credentials. As we recommended in 1989, reevaluating the responsibilities of inspectors, identifying the number needed to perform flight checks, and providing them with flight training makes sense in an era of limited resources for technical training. The DOT IG’s ongoing work has found differences of opinion and confusion within FAA about which inspections require aircraft-specific training and type ratings. For example, while the Flight Standards Service training needs assessment manual lists 48 inspection activities for which operations inspectors need aircraft-specific training, during the DOT IG’s ongoing audit the Acting Manager of the Evaluations and Analysis Branch listed only 15 inspection activities requiring current type ratings. Until FAA identifies the specific inspection activities that require aircraft-specific training or type ratings, it will remain unclear whether some inspections are being performed by inspectors without appropriate credentials. The DOT IG’s ongoing study is evaluating this issue in more detail.

FAA Has Reduced Flight Training Requirements for Operations Inspectors

We and the DOT IG have previously reported that FAA inspectors making pilot flight checks either did not have the credentials (type ratings) or were not current in their aircraft qualifications in accordance with FAA requirements. Being current is important because some inspectors may actually have to fly an aircraft in an emergency situation.
In May 1993, FAA decreased the frequency of inspector training and more narrowly defined those inspector activities requiring type ratings. Under FAA’s previous policy, inspectors overseeing air carrier operations received actual flight training (aircraft or simulator flying time) every 6 months to maintain their qualifications to conduct flight checks on pilots. FAA now requires recurrent flight training every 12 months and limits this requirement to those inspectors who might actually have to assume the controls (flight crewmember, safety pilot, or airman certification) in aircraft requiring type ratings. Because inspectors who ride in the jump seat would not be in a position to assume control of an aircraft, they no longer need to remain current in their type ratings, whereas inspectors of smaller general aviation aircraft, who might actually have to assume the controls, are required to receive flight training. However, this annual requirement for general aviation inspectors has since been changed to every 24 months. Inspectors we interviewed opposed the change requiring less frequent flight training. An operations inspector for general aviation aircraft believed training every 2 years was inadequate for inspectors who have to be at the controls every time they conduct a check ride. Another inspector, who is type rated in an advanced transport category aircraft, said he has not received any aircraft flying time and only half the simulator time he needs. According to the Acting Manager of the Evaluations and Analysis Branch, the decision to reduce the requirements for flight training was driven by budget constraints, and FAA has not studied the potential or actual impact of this reduction. Consequently, it is unknown whether the change in inspector flight training frequency is affecting aviation safety.
The Director of the Flight Standards Service said that FAA has been placed in a position of having to meet the safety concerns of the aviation industry and the public at a time when air traffic is projected to continue increasing while resources are decreasing.

Funding for Technical Training Has Decreased Significantly

Between fiscal years 1993 and 1996, decreases in FAA’s overall budget have significantly reduced the funding available for technical training. FAA’s overall training budget has decreased 42 percent, from $147 million to $85 million. FAA has taken a number of steps over the years to make its technical training program more efficient. For example, the prescreening of air traffic controller trainees has improved the percentage of students who successfully complete this training and decreased the number of FAA and contract classes needed. Additionally, in response to our recommendation, FAA has limited expensive flight training to inspectors who require current flight experience. FAA has also realized savings from the increased use of distance learning (e.g., computer-based instruction) and flight simulation in place of more expensive aircraft training time. FAA’s reduced funding for technical training has occurred at a time when it has received congressional direction to hire over 230 additional safety inspectors in fiscal year 1996. To achieve this staffing increase, FAA will have to hire about 400 inspectors to overcome attrition. New staff must be provided initial training at the FAA Academy to prepare them to assume their new duties effectively. The cost of this training, combined with overall training budget reductions, constrains FAA’s ability to provide its existing inspectors with the training essential to effectively carry out FAA’s safety mission. For fiscal year 1996, FAA’s training needs assessment process identified a need for $94 million to fund operationally essential technical training.
However, due to overall budget reductions, FAA was allocated only $74 million for this purpose. For example, the budget for Regulation and Certification is $5.2 million short of the amount identified for operationally essential training. Specific effects of this shortfall include delaying the training of fourth-quarter inspector new hires until fiscal year 1997; canceling 164 flight training, airworthiness, and other classes planned to serve over 1,700 safety inspectors; and delaying recurrent and initial training for test pilots who certify the airworthiness of new aircraft. Based on the fiscal year 1997 request, the gap between FAA’s request and the amount needed to fund operationally essential technical training will be even greater in fiscal year 1997, in part because of training postponed in fiscal year 1996. Regulation and Certification, for example, is projecting an $8.1-million shortfall in operationally essential training. FAA’s Center for Management Development in Palm Coast, Florida, which provides management training in areas such as leadership development, labor-management relations, and facilitator skills, has experienced a 9-percent funding decrease since fiscal year 1993. Although FAA’s overall staffing has decreased from 56,000 in fiscal year 1993 to around 47,600 in fiscal year 1996, this decline has not been reflected in the center’s costs or level of activity. An FAA contractor study completed in April 1995 showed that co-locating the center with the FAA Academy in Oklahoma City would result in cost savings of a half million dollars or more per year. Specifically, the study estimated that FAA could save between $3.4 million and $6.3 million over the next 10 years by transferring the center’s functions to the FAA Academy. The study also identified such intangibles as adverse employment impacts in the Palm Coast area that could be considered in making a relocation decision. FAA management currently supports retention of the center.
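The relocation figures above can be annualized with simple arithmetic to relate the study's 10-year totals to its per-year claim and to the excluded commuting-time savings. This is only a back-of-the-envelope sketch: straight-line division over the 10-year horizon is our assumption (the study may have discounted future savings), and unquantified items such as duplicated staff functions are omitted.

```python
# Annualize the relocation-study savings figures cited in the testimony.
STUDY_PERIOD_YEARS = 10

# Study's estimated total savings from moving the center to the FAA Academy.
study_savings_low_m = 3.4    # $ millions over 10 years
study_savings_high_m = 6.3   # $ millions over 10 years

# Commuting-time savings the study calculated but excluded from its estimate.
commute_savings_m = 2.5      # $ millions over 10 years

# Straight-line annualization (an assumption; no discounting applied).
annual_low = study_savings_low_m / STUDY_PERIOD_YEARS     # 0.34
annual_high = study_savings_high_m / STUDY_PERIOD_YEARS   # 0.63
annual_with_commute = annual_high + commute_savings_m / STUDY_PERIOD_YEARS

print(f"Study alone: ${annual_low:.2f}M-${annual_high:.2f}M per year")
print(f"Including commuting savings: up to ${annual_with_commute:.2f}M per year")
```

Adding the commuting-time savings pushes the upper end to roughly $0.9 million per year, which is consistent with the potential of "as much as $1 million annually" once reductions in duplicated center staff are also counted.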
In reviewing this study, we have identified potential additional savings that could increase the savings from relocating this facility to as much as $1 million annually. For example, the study estimated that easier commuting access to Oklahoma City would save $2.5 million in staff time over the 10-year period, an amount that was not included in the study’s overall savings estimate. The study also did not consider reducing or eliminating center staff who duplicate functions already available at the FAA Academy, such as course registration and evaluation. In an era of constrained budgets, where funding shortfalls for essential technical training have become a reality, FAA must find ways to make the best use of all available training resources. Moving the center’s functions to the FAA Academy should be seriously considered, particularly since FAA’s 10-year lease on the center facility expires in August 1997. Mr. Chairman, this concludes our statement. We would be pleased to respond to questions at this time.

Related GAO Products

Aviation Safety: Data Problems Threaten FAA Strides on Safety Analysis System (GAO/AIMD-95-27, Feb. 8, 1995).
FAA Technical Training (GAO/RCED-94-296R, Sept. 26, 1994).
Aircraft Certification: New FAA Approach Needed to Meet Challenges of Advanced Technology (GAO/RCED-93-155, Sept. 16, 1993).
FAA Budget: Important Challenges Affecting Aviation Safety, Capacity, and Efficiency (GAO/T-RCED-93-33, Apr. 26, 1993).
Aviation Safety: Progress on FAA Safety Indicators Program Slow and Challenges Remain (GAO/IMTEC-92-57, Aug. 31, 1992).
Aviation Safety: Commuter Airline Safety Would Be Enhanced With Better FAA Oversight (GAO/T-RCED-92-40, Mar. 17, 1992).
Aviation Safety: FAA Needs to More Aggressively Manage Its Inspection Program (GAO/T-RCED-92-25, Feb. 6, 1992).
Aviation Safety: Problems Persist in FAA’s Inspection Program (GAO/RCED-92-14, Nov. 20, 1991).
Serious Shortcomings in FAA’s Training Program Must Be Remedied (GAO/T-RCED-90-91, June 21, 1990, and GAO/T-RCED-90-88, June 6, 1990).
Staffing, Training, and Funding Issues for FAA’s Major Work Forces (GAO/T-RCED-90-42, Mar. 14, 1990).
Aviation Safety: FAA’s Safety Inspection Management System Lacks Adequate Oversight (GAO/RCED-90-36, Nov. 13, 1989).
Aviation Training: FAA Aviation Safety Inspectors Are Not Receiving Needed Training (GAO/RCED-89-168, Sept. 14, 1989).
FAA Staffing: Recruitment, Hiring, and Initial Training of Safety-Related Personnel (GAO/RCED-88-189, Sept. 2, 1988).
Aviation Safety: Measuring How Safely Individual Airlines Operate (GAO/RCED-88-61, Mar. 18, 1988).
Aviation Safety: Needed Improvements in FAA’s Airline Inspection Program Are Underway (GAO/RCED-87-62, May 19, 1987).
FAA Work Force Issues (GAO/T-RCED-87-25, May 7, 1987).
Department of Transportation: Enhancing Policy and Program Effectiveness Through Improved Management (GAO/RCED-87-3, Apr. 13, 1987).
| GAO discussed the Federal Aviation Administration's (FAA) safety inspection program. GAO noted that: (1) in 1991, FAA created its Safety Performance Analysis System (SPAS) to focus its inspection resources on the pilots, aircraft, and facilities that pose the greatest risk; (2) poor data quality jeopardizes the success of SPAS; (3) FAA officials have not fully responded to prior recommendations of adopting a strategy to improve data quality by the end of 1995; (4) FAA inspectors have performed inspections without the appropriate or up-to-date credentials; (5) FAA has had trouble training its inspectors because it does not offer the necessary courses and has limited aircraft-specific training and decreased the frequency of inspector flight training; (6) between fiscal year (FY) 1993 and FY 1996, funding for technical training decreased 42 percent; and (7) FAA expects a $20-million shortfall for technical training it identified as essential for FY 1996. |
Background

Fusion Centers

Nationwide, states and major urban areas have established fusion centers to coordinate the gathering, analysis, and dissemination of law enforcement, homeland security, public safety, and terrorism information. After centers had begun to be established around the country, Congress passed the 9/11 Commission Act to require the Secretary of Homeland Security to share information with and support fusion centers. The National Strategy identifies fusion centers as vital assets critical to sharing information related to terrorism because they serve as focal points for the two-way exchange of information between federal agencies and state and local governments. According to DHS, fusion centers are the primary way that DHS shares intelligence and analysis with state and local homeland security agencies. For example, fusion centers typically issue analytical products, such as daily or weekly bulletins on general criminal or intelligence information, as well as intelligence assessments, which generally provide in-depth reporting on an emerging threat, group, or crime. These products are primarily created for law enforcement entities and other community partners, such as members of the critical infrastructure sectors. In recent years, fusion centers have been credited with being influential in disrupting a planned terrorist attack on the New York City subway system, investigating bomb threats against U.S. airlines, and providing intelligence support to several political conventions and summits. Other fusion centers have been instrumental in providing intelligence and analytical support to assist with securing our nation’s borders. For example, the Arizona Counterterrorism Information Center and the New York State Intelligence Center routinely (twice a week and quarterly, respectively) issue border-specific intelligence products to enhance the situational awareness of law enforcement agencies in border communities.
While all fusion centers were generally created by state and local governments to improve information sharing across levels of government and to prevent terrorism or other threats, the missions of fusion centers vary based on the environment in which the center operates. Some fusion centers have adopted an “all-crimes” approach, incorporating information on terrorism and other high-risk threats into their jurisdiction’s existing law enforcement framework to ensure that possible precursor crimes, such as counterfeiting or narcotics smuggling, are screened and analyzed for linkages to terrorist planning or other criminal activity. Other fusion centers have adopted an “all-hazards” approach. In addition to collecting, analyzing, and disseminating information on potential terrorist planning and other crimes, these fusion centers identify and prioritize types of major disasters and emergencies, such as hurricanes or earthquakes, which could occur within their jurisdiction. In doing so, they gather, analyze, and disseminate information to assist relevant responsible agencies—law enforcement, fire, public health, emergency management, critical infrastructure—with the prevention, protection, response, or recovery efforts of those incidents. Fusion centers also vary in their personnel composition and staffing levels. Consistent with the statutory definition of a fusion center, these centers typically bring together in one location representatives from several different state or local agencies, such as state and local law enforcement agencies—state police, county sheriffs, and city police departments— homeland security agencies, emergency management agencies, and the National Guard. 
In addition, as DHS is required, to the maximum extent possible, to assign officers and intelligence analysts to fusion centers, many centers have federal personnel working on-site, such as DHS intelligence operations specialists and Customs and Border Protection agents, along with others such as FBI intelligence analysts and Drug Enforcement Administration agents. In terms of staffing levels, a 2009 joint DHS and PM-ISE survey of fusion centers reported that the number of personnel working at these centers ranged from under 10 employees to over 75 per center, as shown in figure 1.

Federal Role in Relation to Fusion Centers

Recognizing that DHS had already begun to provide support to fusion centers but needed to play a stronger, more constructive role in assisting these centers, Congress passed the 9/11 Commission Act, which required the Secretary of Homeland Security to create the State, Local, and Regional Fusion Center Initiative. The Act also required the Secretary, in coordination with representatives from fusion centers and the states, to take certain actions in support of the initiative. Specifically, the Act requires that the Secretary take a number of steps to support the centers, including supporting efforts to integrate fusion centers into the ISE, assigning personnel to centers, incorporating fusion center intelligence information into DHS information, providing training, and facilitating close communication and coordination between the centers and DHS, among others. The law also required the Secretary to issue guidance that includes standards providing that fusion centers undertake certain activities.
These include, for example, that centers collaboratively develop a mission statement, identify expectations and goals, measure performance, and determine center effectiveness; create a collaborative environment for the sharing of intelligence and information among federal, state, local, and tribal government agencies, the private sector, and the public, consistent with guidance from the President and the PM-ISE; and offer a variety of intelligence and information services and products. DHS has taken steps to organize and establish a management structure to coordinate its support of fusion centers. In June 2006, DHS tasked I&A with the responsibility for managing DHS’s support to fusion centers. I&A established a State and Local Program Office (SLPO) as the focal point for supporting fusion center operations and to maximize state and local capabilities to detect, prevent, and respond to terrorist and homeland security threats. Consistent with the 9/11 Commission Act and Intelligence Reform Act, DHS, in conjunction with DOJ and the PM-ISE, has issued a series of guidance documents to support fusion centers in establishing their operations. In 2006, through the Global Justice Information Sharing Initiative (Global), DHS and DOJ jointly issued the Fusion Center Guidelines, a document that outlines 18 recommended elements for establishing and operating fusion centers consistently across the country, such as establishing and maintaining a center based on funding availability and sustainability; ensuring personnel are properly trained; and developing, publishing, and adhering to a privacy and civil liberties policy. To supplement the Fusion Center Guidelines, in September 2008, DHS, DOJ, and Global jointly published the Baseline Capabilities, which were developed in collaboration with the PM-ISE and other federal, state, and local officials. 
The Baseline Capabilities define the capabilities needed to achieve a national, integrated network of fusion centers and detail the standards necessary for a fusion center to be considered capable of performing basic functions by the fusion center community. For example, the Baseline Capabilities include standards for fusion centers related to information gathering, recognition of indicators and warnings, processing information, intelligence analysis and production, and intelligence and information dissemination. In addition, the Baseline Capabilities include standards for the management and administrative functioning of a fusion center. Among these are standards for ensuring information privacy and civil liberties protections, developing a training plan for personnel, and establishing information technology and communications infrastructure to ensure seamless communication between center personnel and partners. The development of these baseline standards is called for in the National Strategy, which identifies their development as a key step to reaching a national integrated network of fusion centers. By achieving this baseline level of capability, it is intended that a fusion center will have the necessary structures, processes, and tools in place to support the gathering, processing, analysis, and dissemination of terrorism, homeland security, and law enforcement information. In accordance with the 9/11 Commission Act, DHS, DOJ, and the PM-ISE rely on fusion centers as critical nodes in the nation’s homeland security strategy and provide them with a variety of other support. Federal Grant Funding: DHS’s HSGP awards funds to states, territories, and urban areas to enhance their ability to prepare for, prevent, protect against, respond to, and recover from terrorist attacks and other major disasters. 
The fiscal year 2010 HSGP consists of five separate programs, two of which are primarily used by states and local jurisdictions, at their discretion, for fusion-center-related funding. These grant programs are not specifically focused on, or limited to, fusion centers. Thus, fusion centers do not receive direct, dedicated funding from DHS; rather, the amount of grant funding a fusion center receives is determined by a state’s State Administrative Agency (SAA)—the state-level agency responsible for managing all homeland security grants and associated program requirements—or an urban area’s working group, which has similar responsibilities. A fusion center typically contributes to the development of a state’s federal grant application by providing information, called an investment justification, on how it would use the proposed funding. Personnel: DHS and DOJ have deployed, or assigned, part-time or full-time personnel to fusion centers to support their operations and serve as liaisons between the fusion center and federal components. For example, DHS personnel are to assist the center in using ISE information; review information provided by state, local, and tribal personnel; create products derived from this information and other DHS homeland security information; and assist in disseminating these products. As of July 2010, DHS’s I&A had deployed 58 intelligence officers and the FBI had deployed 74 special agents and analysts full time to 38 of the 72 fusion centers. Access to Information and Systems: DHS and DOJ also share classified and unclassified homeland security and terrorism information with fusion centers through several information technology networks and systems. For example, in February 2010, DHS’s I&A reported that it had installed the Homeland Secure Data Network, which supports the sharing of federal secret-level intelligence and information with state, local, and tribal partners, at 33 of 72 fusion centers.
DHS also provides an unclassified network, the Homeland Security Information Network, which allows federal, state, and local homeland security and terrorism-related information sharing. Training and Technical Assistance: DHS has partnered with DOJ, through Global, to offer fusion centers a variety of training and technical assistance programs. These include training on intelligence analysis and privacy and civil liberties protections, as well as technical assistance with technology implementation, security, and the development of liaison programs to coordinate with other state and local agencies. Privacy Requirements for Fusion Centers Fusion centers have a number of privacy-related requirements. Under the 9/11 Commission Act, DHS is required to issue standards providing that fusion centers develop, publish, and adhere to a privacy and civil liberties policy consistent with federal, state, and local law. In addition, the standards must provide that a fusion center give appropriate privacy and civil liberties training to all state, local, tribal, and private sector representatives at the center and have appropriate security measures in place for the facility, data, and personnel. Because fusion centers are within the ISE when they access certain kinds of information, federal law requires that they adhere to ISE privacy standards issued by the President or the PM-ISE under the authority of the Intelligence Reform Act, as amended. Other federal requirements, found in 28 C.F.R. part 23, Criminal Intelligence Systems Operating Policies, apply to federally funded criminal intelligence systems, and fusion centers receiving criminal intelligence information must follow these procedures, which also include privacy requirements. In 2006, the PM-ISE issued the ISE Privacy Guidelines, which establish a framework for sharing information in the ISE in a manner that protects privacy and other legal rights.
The ISE Privacy Guidelines apply to federal departments and agencies and, therefore, do not directly impose obligations on state and local government entities. However, the ISE Privacy Guidelines do require federal agencies and the PM-ISE to work with nonfederal entities, such as fusion centers, seeking to access protected information to ensure that the entities develop and implement appropriate policies and procedures that are at least as comprehensive as those contained in the ISE Privacy Guidelines. Among the primary components of these guidelines, agencies are required to, for example, ensure that protected information is used only for authorized, specific purposes; properly identify any privacy-protected information to be shared; put in place security, accountability, and audit mechanisms; facilitate the prevention and correction of any errors in protected information; and document privacy and civil liberties protections in a privacy and civil liberties policy. Federal Efforts Are Under Way to Assess Centers’ Capabilities, Target Funding to Capability Gaps, and Assess Costs, but Measuring Results Achieved Could Help Show Centers’ Value to the ISE Officials in all 14 fusion centers we contacted cited federal funding as critical to expanding their operations and achieving and maintaining the baseline capabilities needed to sustain the national network of fusion centers. An assessment of fusion centers, led by the PM-ISE, DHS, and DOJ, is under way to obtain data about the current capabilities of centers nationwide, identify the operational gaps that remain, and determine what resources centers may need to close the gaps. DHS is evaluating whether to amend its grant guidance to require fusion centers to use future funding to support efforts to meet and maintain the baseline capabilities. 
DHS also has plans to assess the costs of the fusion center network to help inform decisions about the extent to which the funding mechanisms in place in support of fusion centers are adequate, or whether other funding avenues need to be explored. However, taking steps to implement standard performance measures to track the results of fusion centers’ efforts to support information sharing and assess the impact of their operations could help demonstrate centers’ value to the ISE and enable the federal government to justify and prioritize future resources in support of the national network. Fusion Center Officials We Interviewed Cited Federal Funding as Critical to Sustaining Operations Officials in all 14 fusion centers we contacted stated that without continued federal grant funding, in particular DHS grant funding, their centers would not be able to expand, or in some instances even maintain, operations. States have reported to DHS that they have used about $426 million in grant funding from fiscal year 2004 through 2009 to support fusion-related activities nationwide, as shown in table 1. According to a nationwide survey conducted by DHS and the PM-ISE, the 52 of 72 fusion centers that responded reported that, on average, over half of their 2010 budgets were supported by federal funding. Specifically, as shown in figure 2, these centers reported that federal grant funding accounted for 61 percent of their total current budgets of about $102 million and state or local funds accounted for 39 percent ($40 million). For the 14 centers we contacted, officials in 6 of the centers reported relying on federal grant funding for more than 50 percent of their annual budgets, which ranged from $600,000 to about $16 million.
Officials in all 14 of the centers we contacted stated that federal funding was critical to long-term sustainability and provided varying examples of the impact that not having federal grant funding would have on their fusion centers. Officials in four fusion centers stated that without federal funding, their centers would not be able to continue operations. For example, an official in one of these centers stated that with the state’s economic recession, the fusion center does not expect to expand operations over the next 5 years and is struggling to retain the personnel and funding needed to sustain its current operations, which involve fewer than 10 full-time personnel and an estimated budget of a little over $500,000. Officials in another fusion center stated that while they have a comparatively large budget of about $10 million, they could not maintain their level of operations without the federal grant funding, about $5 million per year, that they receive. Fusion Centers See Federal Funding as Necessary to Achieve and Maintain the Baseline Capabilities; a Nationwide Assessment to Gauge Gaps in Centers’ Capabilities Is Under Way Officials in all 14 fusion centers we contacted stated that without sustained federal funding, centers could not expand operations to close the gaps between their current operations and the baseline capabilities, negatively affecting their ability to function as part of the national network. For example, officials from one fusion center stated that they currently do not have the resources to hire a security officer, which affects the center’s development, implementation, maintenance, and oversight of security measures, including ensuring that security measures are in place to provide the proper information protection in compliance with all applicable laws and the center’s privacy and civil liberties policy.
Officials in another fusion center stated that federal grant funding is essential to expanding their outreach and coordination with other state and local entities—a recommended baseline capability and one of the primary ways that centers maintain partnerships with other entities. Consistent with fusion center views reported at the 2010 National Fusion Center Conference, officials in all 14 fusion centers we contacted stated that achieving and maintaining the baseline capabilities was key to sustaining their centers. By achieving and maintaining these capabilities, fusion centers should have the necessary structures, processes, and tools in place to support the gathering, processing, analysis, and dissemination of terrorism, homeland security, and law enforcement information as part of the national, integrated network. At the 2010 National Fusion Center Conference, fusion center directors reported that achieving the critical operational capabilities at each fusion center was necessary to ensure an effective flow of information throughout the national network of fusion centers. To do so, these directors cited the importance of performing baseline capability self-assessments, identifying gaps between operations and the baseline capabilities, developing plans to address the gaps, and leveraging existing resources more effectively and efficiently to close those gaps. For example, assessing gaps in centers’ current information technology and communication infrastructure and the associated costs of implementing the necessary systems may enable fusion centers to focus resources more efficiently to address these needs and close the identified gaps. Officials in all of the 14 fusion centers we contacted said that, in recognizing the importance of meeting the baseline capabilities, they had taken some steps to review their own operations and identify gaps between their current operations and the recommended baseline capabilities. 
For example, an official in one center said that he had conducted a systematic gap analysis of the center’s current operations against the baseline capabilities and determined that the center still had to achieve an estimated 80 percent of the capabilities, such as developing performance metrics and an outreach program. Gaps identified by officials at the 14 fusion centers included, for example, the need to develop information technology and related tools for analysis; not having a privacy and civil liberties policy in place; not having identified a privacy/civil liberties officer; and not having identified a security officer. To provide data about the baseline capabilities of fusion centers nationwide, the PM-ISE, DHS, and DOJ are conducting an ongoing systematic assessment of centers’ capabilities. The goal of the nationwide assessment, according to DHS senior officials, is to help enable both federal and fusion center representatives to (1) obtain more accurate information on the current status of centers’ abilities to meet the baseline capabilities, (2) help identify gaps between centers’ current operations and the capabilities, and (3) use this information to develop strategies and realign resources to support centers’ efforts to close those gaps going forward. Further, according to both DHS senior officials and fusion center representatives, the results of the assessment are also intended to provide centers with the information needed to develop more accurate and specific investment justifications to their SAAs in competing for DHS HSGP funding. According to DHS and a senior official from the NFCA, personnel from DHS, the PM-ISE, and DOJ coordinated with state and local government representatives and fusion center officials prior to and during the National Fusion Center Conference in February 2010 to jointly identify four critical operational capabilities and four enabling capabilities to be prioritized in developing the national network of fusion centers. 
Among the four enabling capabilities are those that relate to establishing a sustainment strategy and establishing privacy and civil liberties protections, as shown in table 2. The nationwide assessment of fusion centers consists of two phases—a self-report survey followed by on-site validation. First, the PM-ISE sent a self-assessment questionnaire, which was to be completed in May 2010, to all 72 designated fusion centers to use to assess their current operations against all baseline capabilities. Second, starting in June 2010, seven validation teams consisting of federal and fusion center personnel began making site visits to fusion centers to validate centers’ responses to the self-assessment. Specifically, the validation teams are to conduct a review of the four critical operational capabilities that were identified collaboratively by federal officials and fusion center directors as being critical to the functioning of the national network. Validation teams are also to review information on the privacy and civil liberties protections established by these fusion centers and to discuss the centers’ sustainment strategies. Senior DHS officials stated that this review is to involve discussions of each fusion center’s experiences and related issues, challenges, and associated costs of achieving and maintaining the four critical operational capabilities, as well as the privacy and civil rights/civil liberties enabling capability, to provide additional information on why gaps may exist and how to address them. According to DHS senior officials, the site visits were completed in September 2010. The results of the assessment, which are to include the aggregate of both the self-assessment and on-site validation data, are expected to be analyzed and shared in a report with the participating fusion centers by the end of October 2010. Further, according to DHS senior officials, they are planning to conduct the assessment on a recurring basis.
Thus, this initial assessment is expected to serve as a baseline against which to measure the development of the baseline capabilities in individual fusion centers, as well as across the national network. DHS Has Efforts Under Way to Link DHS Grants to Filling Baseline Capabilities Gaps and Plans to Assess Costs of the Fusion Center Network DHS has opportunities to better target federal fusion center funding to fill critical baseline capability gaps and is taking steps to do so. Both the National Strategy and DHS emphasize that federal agencies are to play an active role in addressing the challenge of sustaining fusion centers by ensuring that they are able to achieve and maintain the baseline capabilities. Specifically, the National Strategy states that federal agencies are to assist fusion centers in incorporating the baseline capabilities into their operations by amending and modifying grants and grants guidance, and other applicable funding programs, to ensure that centers are able to meet and sustain the baseline capabilities and operational standards. In its fiscal year 2010 HSGP grant guidance, DHS encourages, but does not require, that fusion centers prioritize the allocation of HSGP funding they receive through their SAAs to meet and maintain the baseline capabilities. Further, senior DHS officials stated, generally, that the results of the nationwide assessment will be used to address future fusion center funding and that the office will determine how it may leverage DHS’s HSGP to ensure that centers have access to grant funds and assist with putting these mechanisms in place for the future. 
Senior officials from DHS as well as all 14 of the fusion centers we contacted stated that linking, or tying, future HSGP grant funding to achieving and maintaining the baseline capabilities may better enable fusion centers to obtain the resources needed to address the gaps in baseline capabilities by allowing them to more specifically detail how grant funding is to be used in their investment justifications. For example, by tying future grant funding to developing fusion centers’ ability to gather information, aggregate it, analyze it, and share it as appropriate, centers may be more likely to obtain the funding necessary to develop the specific information systems and analytical tools needed to enable them to achieve these capabilities. An Acting Director with FEMA’s Office of Counterterrorism and Security Preparedness stated that, as part of developing the Fiscal Year 2011 HSGP guidance, FEMA is currently working with DHS and fusion center stakeholders to evaluate the potential for amending the guidance to accomplish two goals. Specifically, they are working to (1) require, rather than encourage, that fusion centers use 2011 grant funding allocated from SAAs to achieve and maintain all of the baseline capabilities; and (2) focus funding to specifically address gaps in baseline capabilities identified during the assessment process. For example, the official said that they are exploring options such as requiring centers to include in their investment justifications the results of the nationwide assessment and indicating how the center would use funding to fill any identified gaps. Further, this official added that FEMA has also begun collaborating within DHS and with DOJ to discuss current grant programs and possibilities for future interagency coordination on the support specifically for fusion centers. 
Directives such as these could help ensure that capabilities are met by enabling fusion centers to provide specific data about operational gaps and needs in their investment justifications. While DHS could ensure that fusion centers target the federal funding they receive on filling baseline capabilities gaps, fusion centers have called on the federal government to establish a dedicated funding stream for them. DHS’s HSGP is the primary grant program through which fusion centers receive funding, but these grants are not specifically focused on, nor limited to, fusion centers. As such, fusion centers compete with other state homeland security, law enforcement, and emergency management agencies and missions for a portion of the total amount of HSGP funding awarded to the SAA, which decides what portion of the total funding centers will receive. This process has generated long-standing concerns by the fusion center community about the lack of a longer-term, predictable funding source for the centers. For example, we reported in October 2007 that fusion centers reported challenges with funding, that these issues made it difficult to plan for the future, and that fusion centers were concerned about their ability to sustain their capability for the long term. The Congressional Research Service (CRS) similarly reported in January 2008 that the threat of diminished or eliminated federal or state funding, such as a decrease in DHS grant funding programs, poses a risk to the development of fusion centers. The DHS Office of Inspector General subsequently reported in December 2008 that fusion center officials they spoke with remained concerned with sustainability and funding, emphasizing that sustainment planning and funding from the federal government is essential for the success of fusion centers. 
Officials from 13 of the 14 centers we contacted cited a number of challenges with obtaining funding and the lack of a dedicated funding source, which affected their ability to plan long term or expand their operations. For example, officials in 9 of these centers stated that uncertainty around the amount of federal grant funding the fusion center will receive from their states each year made it difficult to plan and expand operations. For instance, an official from a fusion center stated that the center relies on federal funding for 80 percent of its annual operating budget, but has to compete with several other state agencies and about 75 counties for a portion of HSGP funding each year. Officials in another fusion center stated that competition for limited federal grant funding has made getting the necessary funding more difficult and, as a result, they have had to scale back part of their outreach efforts to state and local entities, which is one of the four critical enabling capabilities. In referring to the role fusion centers are to have in the national information sharing network, officials from all 14 fusion centers stated that there should be a federal grant funding stream or program dedicated specifically to support fusion centers. For example, officials from 6 centers stated that, since the National Strategy has identified fusion centers as a key component of the success of the ISE, the federal government should recognize the importance of providing dedicated funding support so that centers with varying missions and resources can continue to close baseline capability gaps and function as key partners in the national network. An official from one of these fusion centers stated that while centers are owned and operated by state and local entities—and should thus be supported by state and local resources—centers are also expected, as members of the ISE, to support a national information sharing, homeland security mission. 
Moreover, this official said that if fusion centers, as the primary focal points of information sharing between state and local and federal governments, are to support this mission, there should be a targeted federal funding source to support centers’ efforts to meet and achieve the baseline capabilities, which have been identified as being essential for centers to function in the national network. Senior I&A and FEMA officials said that they understood the fusion centers’ concerns and recognized the challenges centers faced in competing for funding. However, these FEMA officials stated that they do not have the authority to create a fusion-center-specific grant within the HSGP and that doing so would require congressional action. These FEMA officials said that, in addition to the nationwide assessment that is under way to identify gaps in baseline capabilities, within the HSGP, they have broadened the allowable costs for which fusion centers can use HSGP funding and prioritized funding on achieving the baseline capabilities. However, DHS has not directed that a certain percentage of HSGP funding be used for fusion centers out of concern that other state agencies, such as emergency management agencies, would likewise lobby for such specific funding. These officials added that such a directive would not be possible because they are trying to balance giving SAAs flexibility in administering HSGP funds with ensuring that federal fusion center requirements are supported and met. Further, senior DHS officials stated that DHS has recognized the need to conduct extensive research on funding options for fusion centers, stating that, after the nationwide assessment is completed, the SLPO is to assess key budgetary processes to determine how support to fusion centers can be affected and determine DHS’s ability to identify additional funding options for centers.
In addition, Fiscal Year 2012 implementation guidance for the ISE requires that, by October 29, 2010, DHS should develop and promulgate an annual common reporting process that will document the total operational and sustainment costs of each of the 72 fusion centers in the national network. Senior DHS officials stated that, while not yet completed, the SLPO has begun to develop this reporting process and that it is to be based in part on surveys implemented in previous years at fusion centers. These officials added that the goal of the guidance is to develop annual data on the costs to sustain fusion centers, and that these data are a necessary first step to assessing the adequacy of current funding mechanisms. Taking Steps to Implement Standard Performance Measures to Track the Results of Fusion Centers’ Efforts to Support Information Sharing Could Help Demonstrate Centers’ Value to the ISE If fusion centers are to receive continued financial support, it is important that centers are also able to demonstrate that they are providing critical information that is helping the federal government and state and local agencies protect against terrorist and homeland security threats. We have previously emphasized the importance of performance measures as management tools to track an agency’s progress toward achieving goals and to provide information on which to base organizational and management decisions. Performance data allow agencies to share effective approaches, recognize problems, look for solutions, and develop ways to improve results. The Fusion Center Guidelines recommend that individual fusion centers develop and use performance measures as an ongoing means to measure and track performance and determine and evaluate the effectiveness of their operations to make better decisions and allocate resources. 
The Baseline Capabilities expand on these guidelines and recommend that fusion centers develop measures that allow them to, among other things, track their performance and results against the centers’ individual goals and objectives. Officials from 5 of the 14 centers we contacted stated that one of the gaps they identified between their current operations and the baseline capabilities was development of methods to monitor and evaluate their fusion center’s performance. Officials from these 5 fusion centers stated that it was a challenge to develop performance measures to monitor their operations and demonstrate results because their mission was to prevent crimes, and it is difficult to know how many crimes were averted due to their efforts. Additionally, officials from 3 of these 5 fusion centers stated that their ability to develop performance measures was also affected by the fact that, due to limited personnel, addressing other operational work responsibilities, such as analyzing intelligence information and developing related reports, was the priority. A senior official from NFCA said that these challenges are similarly experienced across the broader network of fusion centers, and that centers would welcome a collaborative process in developing these measures to involve participation from, among others, federal agencies such as DHS, DOJ, and the PM-ISE. According to DHS senior officials, the nationwide assessment currently under way is to gauge whether or not each fusion center has developed methods to monitor and evaluate its own performance. For example, the assessment results are to indicate to what extent a center has developed mechanisms to receive feedback on the value of its products or to determine the effectiveness of its operations in achieving identified goals and objectives. 
DHS senior officials stated that the results will be used to help federal agencies assess to what extent there are gaps in this baseline capability across the national network of fusion centers and to make decisions about where to allocate resources to support centers’ efforts to develop these individual performance measures. However, while federal guidance recommends that individual fusion centers develop and use performance measures as a baseline capability, currently there are no standard measures to track performance across fusion centers and demonstrate the impact of centers’ operations in support of national information sharing goals. According to PM-ISE and DHS senior officials, the results of the nationwide assessment are not intended to provide standard measures for fusion centers to demonstrate the results they are achieving in meeting broader information sharing goals as part of the national network. For example, the assessment results are not intended to provide information about how well centers disseminated federal information to local security partners or how useful federal agencies found the information that centers provided them. The PM-ISE and DHS have recognized the value of implementing standard performance measures across fusion centers. In its 2009 annual report to Congress, the PM-ISE stated that among the activities the office would undertake in 2009 and 2010 would be designing a set of performance measures to demonstrate the value of a national integrated network of fusion centers operating in accordance with the baseline capabilities. However, senior PM-ISE officials stated that the PM-ISE had not begun this effort and is no longer planning to develop these performance measures because DHS, as the lead agency in coordinating federal support of fusion centers, is now responsible for managing their development.
Further, in response to a requirement under the 9/11 Commission Act, DHS stated in its 2008 fusion center Concept of Operations that it will develop qualitative and quantitative measures of performance for the overall network of fusion centers and relevant federal entities, such as DHS and DOJ. According to senior DHS officials, the agency recognizes that developing these measures is important to demonstrate the value of agency efforts in support of the ISE. However, these officials stated that, while DHS has started collecting some information that will help in developing such measures, the agency is currently focusing on completing the nationwide assessment to gauge the capabilities and gaps across fusion centers. As such, these officials said that they have not defined next steps or target timeframes for designing and implementing these measures. Standard practices for program and project management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps and time frames needed to achieve those results. By defining the steps it will take to design and implement a set of standard measures to track the results and performance across fusion centers and committing to a target timeframe for completion, DHS could help ensure that centers and federal agencies demonstrate the value of fusion centers’ operations to national information sharing goals and prioritize limited resources needed to achieve and maintain those functions deemed critical to support the national fusion center network. 
Federal Agencies Are Providing Technical Assistance and Training to Centers to Help Them Develop Privacy and Civil Liberties Policies and Protections, and DHS Is Assessing the Status of These Protections DHS and DOJ are providing technical assistance to assist fusion centers in developing privacy and civil liberties policies, and fusion centers nationwide are in varying stages of completing their policies. Additionally, fusion center officials we interviewed reported taking steps to designate privacy/civil liberties officials and conduct outreach about their policies. Further, DHS and DOJ are providing training to fusion centers on implementing privacy and civil liberties policies and protections that officials in the 14 centers we contacted found helpful and wanted to be continued. DHS also has several efforts underway to assess the status of fusion centers’ privacy and civil liberties protections, including updating the privacy and civil liberties impact assessments to help ensure centers’ protections are implemented. DHS and DOJ Are Providing Technical Assistance to Help Fusion Centers Develop Privacy and Civil Liberties Policies, and Centers Nationwide Are in Varying Stages of Completing Their Policies Because fusion centers are collecting and sharing information on individuals, federal law establishes requirements and federal agencies have issued guidelines for fusion centers to establish policies that address privacy and civil liberties issues. Consistent with the 9/11 Commission Act, the Fusion Center Guidelines call for fusion centers to develop, publish, and adhere to a privacy and civil liberties policy. 
Further, the Baseline Capabilities provide more specific guidance on developing such a policy and contain a set of recommended procedures for fusion centers to include in their policies to ensure that their centers’ operations are conducted in a manner that protects the privacy, civil liberties, and other legal rights of individuals according to applicable federal and state law. According to federal guidance, if centers adhere to the Baseline Capabilities, they in turn will be in adherence with the ISE Privacy Guidelines. Further, DHS’s fiscal year 2010 HSGP funding guidance stipulates that federal funds may not be used to support fusion-center-related initiatives unless a fusion center has developed, within 6 months of the grant award, a privacy and civil liberties policy containing protections that are at least as comprehensive as the ISE Privacy Guidelines. According to senior DHS Privacy officials, the fiscal year 2010 grants were awarded in September 2010, so fusion centers will have until March 2011 to have their policies reviewed and certified by the DHS Privacy Office. If a fusion center does not have a certified privacy and civil liberties policy by March 2011, according to DHS guidance, DHS grant funds may only be used to support the development or completion of the center’s privacy and civil liberties protection requirements. To help fusion centers meet federal requirements for their privacy and civil liberties policies, DHS and DOJ have published a template and established a process to review and certify the policies. The template incorporates the primary components of the ISE Privacy Guidelines and provides sample language for the center to use as a starting point when drafting procedures for a privacy and civil liberties policy. To ensure fusion centers comply with the certification requirements in DHS’s grant guidance, DHS and DOJ have established a joint process to review and certify fusion centers’ privacy and civil liberties policies.
First, a fusion center sends its draft policy to a team of attorneys contracted by DOJ’s Bureau of Justice Assistance to provide a detailed review of the policy and compare its language and provisions against language in the template. After its review, DOJ submits the center’s completed draft policy to the DHS Privacy Office for a final review. This office reviews the policy specifically to determine whether it contains protections that are at least as comprehensive as the ISE Privacy Guidelines. If the policy satisfies the ISE Privacy Guidelines, the DHS Chief Privacy Officer sends written notification to the fusion center director stating that the policy has been certified. Using this guidance and technical assistance, fusion centers nationwide are in varying stages of completing their privacy and civil liberties policies. Specifically, as of August 2010, 21 centers had certified policies; 33 centers had submitted policies; and 18 centers, while they had not yet submitted their policies, were receiving technical assistance. Senior DHS Privacy officials stated that they expect all 72 fusion centers to have submitted their policies, and the federal agencies to be able to review and certify them, by the March 2011 deadline to avoid any limits on grant funding. The 14 centers we contacted were at different stages of the review process and reported that they found the template and technical assistance helpful. Specifically, 7 centers had certified policies, 6 had policies in the review process, and 1 center was drafting its policy. Officials from all 14 of the fusion centers stated that they used or were using the template to write their policies and that the template was a helpful guide to developing them.
In addition, officials in 13 of these centers that had submitted their policies for review stated that the technical assistance and guidance DHS and DOJ provided were integral in helping them draft their policies, especially a tracking sheet the DOJ review team used to document comments, feedback, and recommendations.

Consistent with Recommended Federal Guidance, Fusion Center Officials We Interviewed Have Taken Steps to Designate Privacy/Civil Liberties Officials and Conduct Outreach

The Baseline Capabilities recommend that fusion centers designate a privacy/civil liberties official or a privacy committee to coordinate the development, implementation, maintenance, and oversight of the fusion center’s privacy and civil liberties policies and procedures. Furthermore, the Baseline Capabilities recommend that if the designated privacy/civil liberties official is not an attorney, fusion centers should have access to legal counsel with the appropriate expertise to help clarify related laws, rules, regulations, and statutes to ensure that centers’ operations adhere to privacy and civil liberties protections. Officials from all 14 fusion centers we contacted stated that they have taken steps to designate privacy/civil liberties officials or form privacy committees. For example, officials in 12 of these centers said that they designated a single individual to serve as the privacy/civil liberties official; officials in 1 fusion center selected two officials—attorneys from the state’s bureau of investigation and the state’s department of safety; and officials in 1 center created a privacy committee. For more information on the qualifications of privacy/civil liberties officials and the challenges associated with designating them, see appendix I.
In addition to developing a privacy and civil liberties policy and designating a privacy/civil liberties official, the Baseline Capabilities recommend that fusion centers facilitate public awareness of their policy by making it available to the public. Officials in 7 of 14 fusion centers we contacted described taking steps to make the public aware of their fusion center’s privacy and civil liberties protections. For example, officials in 3 centers said that they met with privacy and civil liberties advocacy groups to elicit feedback about the centers’ policies. For instance, one official said that his fusion center shared its policy with a local chapter of the ACLU, which reviewed it and made suggestions for revisions, some of which the center implemented. Additionally, officials from 6 of 14 fusion centers we interviewed said that they posted their policies on their centers’ Web sites or planned to post them once they are certified. To assist centers with their outreach efforts, DHS and DOJ officials stated that they are developing a communications and outreach guidebook that will include information on how fusion centers can communicate their mission, operations, and privacy and civil liberties protections to state and local governments, privacy advocacy groups, and the general population. These officials added that this guidebook will recommend that fusion centers post their privacy and civil liberties policies online to help centers achieve the baseline capability of promoting transparency and public awareness of their privacy and civil liberties protections. 
Fusion Center Officials We Interviewed Reported That DHS’s and DOJ’s Training on Privacy and Civil Liberties Protections Was Helpful and Would Like It Continued after Their Policies Are Developed

The 9/11 Commission Act requires DHS to establish guidelines for fusion centers that include standards for fusion centers to provide appropriate privacy training for all state, local, tribal, and private sector representatives at the fusion center, in coordination with DHS’s Privacy Office and CRCL. To support fusion centers in this effort, DHS, in partnership with DOJ and Global, has implemented a three-part training and technical assistance program for fusion center personnel consisting of (1) a “Training the Trainers” Program, in which representatives from DHS’s Privacy Office and CRCL provide instruction to fusion center privacy/civil liberties officials with the intent that these officials then implement and teach the material to personnel at their centers; (2) a Web site “Tool Kit,” or Web portal, which provides a single point of access to federal resources on privacy training and contains training material and video resources for state and local personnel on privacy topics; and (3) an On-site Training Program, in which representatives from DHS’s Privacy Office and CRCL travel to fusion centers, upon request, to provide training on privacy, civil rights, and civil liberties issues. Appendix II discusses this training program in greater detail. Officials from all 14 fusion centers we contacted stated that DHS’s and DOJ’s three-part training and technical assistance program was helpful and expressed a need for continued training or guidance as they continue to establish their privacy and civil liberties protections. Fusion center officials cited several reasons why they wanted continued training and updated guidance, including evolving privacy laws and the recognition that some privacy/civil liberties officials may lack privacy-related expertise or backgrounds.
In addition to training, officials from six fusion centers expressed a need for continued privacy guidance, such as briefings on examples of fusion center privacy violations and how they were corrected. For example, an official from one of these centers expressed a need for federal guidance on how centers should deal with certain groups that make threats against state or local governments, as these groups can span multiple states. Recognizing that fusion centers would like continued federal training and guidance on privacy, senior officials from DHS’s Privacy Office and CRCL stated that they plan to continue the DHS-DOJ joint three-part training and technical assistance program over the next several years and to tailor its privacy, civil rights, and civil liberties instruction to the needs of individual centers. Further, senior DHS Privacy officials stated that a goal of the training program is to develop multiyear relationships with privacy/civil liberties officials in each center, helping to establish a professional cadre of trained privacy/civil liberties officials across the national fusion center network.

DHS Has Efforts Under Way to Assess the Status of Fusion Centers’ Privacy and Civil Liberties Protections

Senior DHS Privacy officials stated that the review of fusion centers’ privacy and civil liberties policies is a first step in providing ongoing federal oversight of the development of privacy and civil liberties protections across fusion centers. These officials stated that continued assessment and oversight—by the federal government and by fusion centers themselves—is necessary to ensure that the protections described in centers’ policies are implemented in accordance with all applicable privacy regulations, laws, and constitutional protections.
For example, a Director with DHS’s Privacy Office noted that a fusion center can, in theory, have a model privacy and civil liberties policy but not correctly implement its protections, increasing the risk of potential violations such as the proliferation of inaccurate data. The 9/11 Commission Act requires that the Secretary issue guidelines containing standards under which fusion centers shall not only develop and publish a privacy and civil liberties policy but also adhere to it. Further, the Baseline Capabilities recommend that fusion centers, as part of their privacy and civil liberties protections, identify methods for monitoring the implementation of their privacy and civil liberties policies and procedures so as to incorporate revisions and updates. While the 9/11 Commission Act does not dictate specific oversight mechanisms for fusion center privacy and civil liberties protections, DHS, in coordination with DOJ and the PM-ISE, has two efforts under way to assess the status of these protections across fusion centers and is taking steps to encourage centers to assess their own protections going forward to identify any existing privacy and civil liberties risks and develop strategies to mitigate them. First, the nationwide assessment asks fusion centers to provide information on each of the privacy-related baseline capabilities, including information on the centers’ designated privacy/civil liberties officials, components of their privacy and civil liberties policies and related protections, policy outreach efforts, and training. Following that, validation teams are to review the self-reported information in detail with each fusion center. According to senior DHS officials, this information may help to identify any critical gaps in privacy and civil liberties protections across the national network of fusion centers.
Senior DHS Privacy officials stated that this information will be an important tool in developing a longer-term oversight and assessment strategy to ensure that resources are aligned to address these gaps. Second, the 9/11 Commission Act, enacted in August 2007, requires, among other things, that DHS submit (1) a report within 90 days of the enactment of the Act containing a Concept of Operations for the Fusion Center Initiative that includes a privacy impact assessment (PIA) and a civil liberties impact assessment (CLIA) examining the privacy and civil liberties implications of fusion centers, and (2) another PIA and CLIA within 1 year of enactment. In general, these assessments allow agencies to assess privacy and civil liberties risks in their information sharing initiatives and to identify potential corrective actions to address those risks. DHS published a PIA in December 2008 that identified several risks to privacy presented by fusion centers, explained mitigation strategies for those risks, and made recommendations on how DHS and fusion centers can take additional action to further enhance the privacy interests of the citizens in their jurisdictions. CRCL similarly published a CLIA in December 2008 that evaluated fusion centers’ impact on the civil liberties of particular groups or individuals, outlined procedures for filing a civil liberties complaint with DHS, and highlighted the importance of training fusion center personnel on civil rights and civil liberties. DHS has not completed the second PIA or CLIA, which were to be issued by August 2008. However, according to senior DHS Privacy officials, the DHS Privacy Office is currently beginning to develop the updated PIA. These officials said that they identified two key milestones when determining when to begin work on the updated PIA. 
First, the officials said that they wanted to complete the “training the trainers” program for designated fusion center privacy/civil liberties officials, which they did in July 2010. Second, officials said that they delayed the start of the updated PIA to allow time for fusion centers to develop their privacy and civil liberties policies—which are to be certified by DHS by March 2011. Ensuring that centers had completed, and were beginning to implement, their policies would help in assessing updates to any risks identified in the initial PIA, according to these officials. Senior DHS Privacy officials stated that they have begun planning for the updated PIA, and that the assessment will be published in 2011. These officials stated that the updated PIA will be comprehensive in its scope, and include an assessment of the steps fusion centers have taken to address the recommendations of the 2008 PIA, an analysis of federal and state government involvement in fusion center privacy and civil liberties protections, a description of what federal agencies have done and are doing to assist fusion centers in establishing these protections, and a discussion about related initiatives. These officials added that the updated PIA will be a useful tool in assessing where fusion centers are in implementing protections and addressing the 2008 PIA recommendations, and that the information will be used to inform decisions on where to focus their training and oversight efforts going forward. Further, senior officials from CRCL stated that they have also begun to develop the updated CLIA, and plan to publish the assessment in 2011. According to these officials, the updated CLIA will address topics such as oversight of fusion centers, common issues and challenges that fusion centers face in establishing civil rights and civil liberties protections, examples of civil rights and civil liberties complaints directed at fusion centers, and key issues brought up during fusion center trainings. 
Given the assessments’ proposed scope and content, completing the updates to the PIA and CLIA as required will provide critical information to help ensure that fusion centers are implementing privacy and civil liberties protections and that DHS and other federal agencies are supporting them in their efforts. In addition to the nationwide assessment and updated PIA and CLIA, DHS is also taking steps to encourage fusion centers to conduct their own PIAs, once their privacy and civil liberties policies are reviewed and certified by the DHS Privacy Office, as a means to oversee their own privacy and civil liberties protections going forward. According to senior DHS Privacy officials, individual PIAs are integral to a fusion center’s development and promote transparency by describing fusion center activities and authorities more fully than the policies can alone. To assist fusion centers in developing their own PIAs, DHS and DOJ jointly published a guide to conducting PIAs specific to state, local, and tribal information sharing initiatives, including a template to lead policy developers through appropriate privacy risk assessment questions. In addition to the template itself, according to senior DHS Privacy officials, the importance of conducting a fusion center PIA is conveyed through the three-part training and technical assistance program, where the steps the office took to conduct its own PIA in 2008 are covered.

Conclusions

Fusion centers—as the primary focal points for the two-way exchange of information between federal agencies and state and local governments—play a critical and unique role in national efforts to combat terrorism more effectively. In light of the federal government’s reliance on fusion centers as critical components of the ISE, DHS, in collaboration with DOJ and the PM-ISE, provides fusion centers with a variety of support, including DHS grant funding, personnel, and technical assistance.
However, centers remain concerned about their long-term sustainability and ability to meet and maintain the baseline capabilities given current federal funding sources and fiscally constrained state and local economic environments. DHS’s efforts to require, rather than encourage, centers to target HSGP funding toward achieving and maintaining the baseline capabilities are aimed at enabling fusion centers to close gaps in capabilities and develop more accurate and specific investment justifications when competing for DHS HSGP funding within their respective states. Further, by completing the nationwide assessment and the required cost assessment of the fusion center network, DHS can begin to address long-standing concerns and questions about sustaining the network. If fusion centers are to receive continued financial support, it is important that centers demonstrate, through a set of performance measures, that they are providing critical information that is helping the federal government protect against homeland security and terrorist threats. The PM-ISE and DHS have recognized the value of developing such performance measures, but defining the steps they will take to design and implement them and committing to a target time frame for completion could help ensure that fusion centers and federal agencies track fusion center performance in a manner that demonstrates the value of fusion center operations within the ISE.
Recommendation for Executive Action

To enhance the ability to demonstrate the results fusion centers are achieving in support of national information sharing goals and help prioritize how future resources should be allocated, we recommend that the Secretary of Homeland Security direct the State and Local Program Office, in partnership with fusion center officials, to define the steps it will take to design and implement a set of standard performance measures to show the results and value centers are adding to the Information Sharing Environment and to commit to a target time frame for completing them.

Agency Comments and Our Evaluation

We requested comments on a draft of this report from the Secretary of Homeland Security, the Attorney General, and the Program Manager for the ISE on September 13, 2010. DHS, DOJ, and the PM-ISE did not provide official written comments to include in our report. However, in an email received September 23, 2010, a DHS liaison stated that DHS concurred with our recommendation. DHS and DOJ provided written technical comments, which we incorporated into the report, as appropriate. In its technical comments, DHS stated that the agency has recently started to develop a performance management framework to demonstrate the value and impact of the national network of fusion centers and is using the nationwide assessment data to support the development of specific performance measures. With regard to target time frames, DHS stated that it is planning to (1) collaborate with fusion center directors and interagency partners on the development of these performance measures throughout the remainder of 2010 and (2) provide performance management resources at the next National Fusion Center Conference in March 2011.
If properly implemented and monitored, developing these standard performance measures should enhance the ability to demonstrate the results fusion centers are achieving in support of national information sharing goals and help prioritize how future resources should be allocated. DHS also noted that while the report emphasizes the importance of sustainment funding for fusion centers, it does not recommend that DHS develop a sustainment strategy to assist fusion centers in getting the critical federal support they require. In our 2007 report, we recommended that the federal government articulate such a sustainment strategy for fusion centers—a recommendation with which DHS agreed and that we consider to still be current and applicable. Specifically, we recommended that the federal government define and articulate its role in supporting fusion centers and determine whether it expects to provide resources to centers over the long-term to help ensure their sustainability. During our review, DHS described actions that it plans to take that begin to build this strategy. More specifically, DHS said that it plans to collect and assess cost data from centers—a necessary first step to assessing the adequacy of current funding mechanisms and level of the resources that DHS provides to fusion centers. While a positive start, it will be important for DHS to follow through on these plans and develop a sustainment strategy for fusion centers. This would in turn be responsive to our recommendation. We are sending copies of this report to the Secretary of Homeland Security, the Attorney General, the Program Manager for the ISE, and other interested congressional committees and subcommittees. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report or wish to discuss the matter further, please contact me at (202) 512-8777, or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Qualifications of Fusion Center Privacy/Civil Liberties Officials and Challenges Associated with Designating Them

According to our interviews with officials in 14 fusion centers, the individuals designated to be privacy/civil liberties officials varied in terms of their position and legal experience. For example, in the 13 fusion centers with privacy/civil liberties officials, 3 of these officials were center directors and 6 were analysts. The remaining 4 fusion centers designated attorneys from other bureaus or agencies within their respective state or local governments, such as state attorneys general offices, as their privacy/civil liberties official. These officials stated that because they either did not have the appropriate legal expertise within the fusion center or had an existing working relationship with a state bureau or agency, designating officials outside their center as the privacy/civil liberties official was the best option available for achieving this baseline capability. Among the 9 centers that had designated fusion center personnel as the privacy/civil liberties official, none of these personnel was an attorney; however, officials in 3 of these centers stated that their privacy/civil liberties officials had access to other legal counsel within the state police agency or city police department, for example, to help clarify laws and regulations governing privacy and civil liberties protections and to assist with the development of the centers’ policies.
Fusion center officials we interviewed reported several challenges in designating privacy/civil liberties officials, including concerns that some officials had other operational duties at the fusion center or might not have sufficient legal expertise to ensure implementation of privacy and civil liberties protections. For example, of the nine fusion centers with directors or analysts serving as the privacy/civil liberties official, two had officials whose sole duty was to oversee development of the center’s privacy and civil liberties policy and implementation of privacy and civil liberties protections. The other seven privacy/civil liberties officials had other operational duties at the fusion center. For instance, one fusion center’s privacy/civil liberties official also served as the center’s critical infrastructure and key resources analyst, which, according to center officials, slowed the development of the center’s privacy and civil liberties policy. According to a Director with DHS’s Privacy Office, it is difficult to assess the effect of fusion center privacy/civil liberties officials’ having responsibilities outside of their privacy-related duties because the position is relatively new and this arrangement is common. The official added that, in general, it is better for the designated privacy/civil liberties official to be able to focus exclusively on privacy-related duties. Additionally, officials in two fusion centers were concerned that their privacy/civil liberties officials may not have sufficient legal expertise to effectively monitor privacy and civil liberties protections at the centers. For example, one official stated that it was difficult to identify personnel who, in addition to legal expertise, had experience in both intelligence analysis and standard law enforcement practices, which, in his experience, were necessary skills for a center’s privacy/civil liberties official.
Senior DHS Privacy officials said that, recognizing that fusion center privacy/civil liberties officials have multiple duties and vary in terms of their experience and legal expertise, DHS is committed to training and has taken steps to train centers’ designated officials and tailor DHS’s privacy instruction to the needs of individual fusion centers to help centers achieve this baseline capability.

Appendix II: Privacy/Civil Rights and Civil Liberties Training and Technical Assistance Program

DHS, in partnership with DOJ and Global, has implemented a three-part training and technical assistance program in support of fusion centers’ efforts to provide appropriate privacy, civil rights, and civil liberties training for all state, local, tribal, and private sector representatives at the fusion center:

A “Training the Trainers” Program: In this 2010 program, representatives from the DHS Privacy Office and CRCL provided instruction to fusion center privacy/civil liberties officials at four regional fusion center conferences that are held annually. These 1 1/2-day classes were intended to provide privacy/civil liberties officials with instruction on the requirements for a fusion center in implementing privacy and civil liberties protections, the general privacy law framework of the ISE, and how privacy/civil liberties officials can best teach the material to fusion center personnel at their centers. According to senior officials from CRCL, privacy/civil liberties officials from 68 of 72 fusion centers had received the training as of August 2010. According to directors with the DHS Privacy Office and CRCL, the training delivered at the conferences is specialized and tailored based on feedback the offices receive from fusion center staff on key issues they would like covered.
Officials added that they obtain feedback at each training session to also identify the privacy, civil rights, and civil liberties-related subject areas in which privacy/civil liberties officials may need more training. Participants in this program are asked to teach the material to other fusion center personnel within their centers within 6 months.

A Web site “Tool Kit”: This tool kit, or Web portal, provides a single point of access to federal resources on privacy, civil rights, and civil liberties training. The portal contains training material and video resources for state and local personnel on a broad range of privacy, civil rights, and civil liberties topics. The public Web portal can be found at www.it.ojp.gov/PrivacyLiberty. Furthermore, the Web portal provides access to training resources on the requirements in 28 C.F.R. part 23, which contains guidelines for law enforcement agencies operating federally grant-funded criminal intelligence systems. DHS HSGP guidance states that in fiscal year 2010, all fusion center employees are expected to complete the online 28 C.F.R. part 23 certification training. Officials from all 14 fusion centers we contacted stated that fusion center staff have completed the requisite online certification training and that it was helpful in making their staff aware of the regulations governing their criminal intelligence systems. Furthermore, officials from 5 of these 14 fusion centers stated that they plan to require that fusion center personnel complete the 28 C.F.R. part 23 certification training on an annual basis to ensure that staff are well versed in privacy requirements.
An On-site Training Program: For this program, representatives from the DHS Privacy Office and CRCL travel to fusion centers, upon request, to provide a full day of training on privacy, civil rights, and civil liberties issues in the following core areas: civil rights and civil liberties basics in the ISE, privacy fundamentals, cultural competency, First Amendment issues in the ISE, and “red flags” when reviewing or creating intelligence products. Additionally, fusion centers have the option of selecting topics from a list of available training modules, such as a civil rights and civil liberties case scenario or an intelligence analysis product review exercise, and receiving customized instruction based on the training needs of their fusion center. Prior to the training, representatives from CRCL interview fusion center officials to learn about their specific privacy, civil rights, and civil liberties questions or issues, review the state constitution and relevant state law, and research local media to identify the types of issues related to the work of the fusion center that have raised concerns among citizens in their jurisdictions. According to senior officials from the DHS Privacy Office and CRCL, as of August 2010, 21 of 72 fusion centers had received this on-site training. Officials we contacted in 3 fusion centers stated that they had requested and received on-site training on privacy, civil rights, and civil liberties protections from DHS personnel at their fusion centers and that the training was helpful.

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Mary Catherine Hult, Assistant Director; Hugh Paquette; Kevin Craw; Katherine Davis; John de Ferrari; Matt Grote; David Plocher; Michael Silver; and Janet Temko made key contributions to this report.
Recent terrorist activity, such as the attempted Times Square bombing, underscores the need for terrorism-related information sharing. Since 2001, all 50 states and some local governments have established fusion centers, where homeland security, terrorism, and other intelligence information is shared. The federal government recognizes the importance of fusion centers; however, as GAO reported in October 2007, centers face challenges in sustaining their operations. GAO was asked to assess the extent to which (1) the Department of Homeland Security (DHS) has taken action to support fusion centers’ efforts to maintain and grow their operations, and (2) DHS and the Department of Justice (DOJ) have supported fusion centers in establishing privacy and civil liberties protections. GAO reviewed relevant legislation and federal guidance; conducted interviews with 14 of 72 fusion centers, selected on the basis of location and time in operation, among other factors; and interviewed DHS and DOJ officials. The views of fusion center officials are not generalizable but provided insights. Fusion centers have cited DHS grant funding as critical to achieving the baseline capabilities—the standards the government and fusion centers have defined as necessary for centers to be considered capable of performing basic functions in the national information sharing network, such as standards related to information gathering and intelligence analysis. However, DHS has not set standard performance measures for the centers. Fusion centers nationwide reported that federal funding accounted for about 61 percent of their total fiscal year 2010 budgets, but DHS’s Homeland Security Grant Program (HSGP), the primary grant program through which fusion centers receive funding, is not specifically focused on, or limited to, fusion centers.
Rather, states and local governments determine the amount of HSGP funding they allocate to fusion centers each year from among a number of competing homeland security needs. As a result, fusion centers continue to raise concerns about the lack of a longer-term, predictable federal funding source. DHS, in coordination with the Program Manager for the Information Sharing Environment and DOJ, has a nationwide assessment of centers' baseline capabilities under way. The goal of the assessment, which is to be completed in October 2010, is to provide federal agencies and fusion centers with more accurate information on the status of centers' abilities, help identify gaps between centers' current operations and the baseline capabilities, and use this information to develop strategies and realign resources to close those gaps going forward. Recent federal guidance also requires that, by October 29, 2010, DHS should develop an annual reporting process that will document the total operational and sustainment costs of each of the 72 fusion centers in the national network so as to assess the adequacy of current funding mechanisms. If centers are to receive continued federal financial support, it is important that they are also able to demonstrate their impact and value added to the nation's information sharing goals. However, there are no standard performance measures across all fusion centers to do this. DHS has not started developing such measures because the agency is currently focusing on completing the nationwide assessment and compiling its results and, as such, has not defined next steps or target timeframes for designing and implementing these measures. Defining the steps it will take to design and implement a set of measures and committing to a target timeframe for their completion could better position DHS to demonstrate the value and impact of the national network of fusion centers.
To help fusion centers develop privacy and civil liberties policies and protections, DHS and DOJ have provided technical assistance and training, including a template on which to base a privacy and civil liberties policy, and a joint process for reviewing fusion centers' policies to ensure they are consistent with federal requirements. The 14 centers GAO interviewed were at different stages of the policy review process, with 7 completed as of June 2010. Officials from all 14 of the fusion centers GAO interviewed stated that the guidance DHS and DOJ provided was helpful and integral in assisting them to draft their policies.
Background

GPO’s mission includes both printing government documents and disseminating them to the public. Under the public printing and documents statutes of Title 44 of the U.S. Code, GPO’s mission is to fulfill the printing needs of the federal government and to distribute those printed products to the public. All printing for the Congress, the executive branch, and the judiciary—except for the Supreme Court—is to be done or contracted by GPO except for authorized exemptions. The Superintendent of Documents, who heads GPO’s Information Dissemination division, disseminates these government products to the public through a system of nearly 1,300 depository libraries nationwide (the Federal Depository Library Program), GPO’s Web site (GPO Access), telephone and fax ordering, an on-line ordering site, and its bookstore in Washington, D.C. The Superintendent of Documents is also responsible for classification and bibliographic control of tangible and electronic government publications. Printing and related services. In providing printing and binding services to the government, GPO generally dedicates its in-house printing equipment to congressional printing, contracting out most printing for the executive branch. Table 2 shows the costs of these services in fiscal year 2003, as well as the source of these printing services. Documents printed for the Congress include the Congressional Record, hearing transcripts, bills, resolutions, amendments, and committee reports, among other things. GPO also provides publishing support staff to the Congress. These support staff mainly perform print preparation activities, such as typing, scanning, proofreading, and preparation of electronic data for transmission to GPO. GPO generally provides printing services to federal agencies through an acquisition program that relies on the commercial sector by passing the contractors’ costs on to its government customers.
Prequalified businesses, small to large in size, compete for printing jobs that GPO printing experts oversee to ensure that the contractors meet customer requirements for quality. For this service, GPO attaches a 7 percent surcharge that GPO officials have stated was established partly by what the market will bear and partly by what is needed to cover GPO expenses. GPO procures about 83 percent of printing for federal agencies from private contractors and does the remaining 17 percent at its own plant facilities. Most of the procured printing jobs (85 percent for the period from June 2002 to May 2003) were for under $2,500 each. Besides printing, GPO provides a range of services to agencies including, for example, CD-ROM development and production, archiving/storage, converting products to electronic format, Web hosting, and Web page design and development. Dissemination of government information. The Superintendent of Documents is responsible for the acquisition, classification, dissemination, and bibliographic control of tangible and electronic government publications. Regardless of the printing source, Title 44 requires that federal agencies make all their publications available to the Superintendent of Documents for cataloging and distribution. The Superintendent of Documents manages a number of programs related to distribution, including the Federal Depository Library Program (FDLP), which provides copies of government publications to libraries across the country for public use. Generally, documents distributed to the libraries are those that contain information on U.S. government activities or are important reference publications. GPO evaluates documents to determine whether they should be disseminated to the depository libraries. 
When documents are printed through GPO, it evaluates them at the time of printing; if documents are not printed through GPO, agencies are to notify GPO of these documents, so that it can evaluate them and arrange to receive any copies needed for distribution. A relatively small percentage of the items printed through GPO for the executive branch are designated as depository items. Another distribution program under the Superintendent of Documents is the Sales of Publications Program, which purchases, warehouses, sells, and distributes government documents. Publications are sold by mail, telephone, and fax; through GPO’s on-line bookstore; and at its bookstore in Washington, D.C. In addition, GPO provides electronic copies of the Congressional Record and other documents to the Congress, the public, and the depository libraries in accordance with the Government Printing Office Electronic Information Access Enhancement Act of 1993. The Superintendent of Documents is also responsible for GPO’s Web site, GPO Access, which is one mechanism for electronic dissemination of government documents to the public through links to over 268,000 individual titles on GPO’s servers and other federal Web sites. More than 2 billion documents have been retrieved by the public from GPO Access since August 1994; almost 372 million downloads of government information from GPO Access were made in fiscal year 2002 alone. About two-thirds of new FDLP titles are available online.

GPO Is Funded by Appropriations and by a Revolving Fund

GPO receives funding from two appropriations: (1) the Congressional Printing and Binding Appropriation, which is used for in-house printing of congressional activities and (2) the Salaries and Expenses Appropriation, which is used for certain Superintendent of Documents activities. In addition to these appropriations, GPO has a business-oriented revolving fund, which is used to fund its procured printing, document sales, and other operations.
The revolving fund was designed to financially “break even” by recovering costs through rates, prices, and other charges to customers for goods and services provided by GPO. The revolving fund is supported by the 7 percent service charge levied on agency customers of GPO-procured printing services and also receives funds from sales of publications to the general public.

Trends in Printing and Information Dissemination

Current printing industry trends show that the total volume of printed material has been declining for the past few years, and this trend is expected to continue. A major factor in this declining volume is the use of electronic media options. The move to electronic dissemination is the latest phase in the electronic publishing revolution that has transformed the printing industry in recent decades. This revolution was driven by the development of increasingly sophisticated electronic publishing (“desktop publishing”) software, run on personal computers, that allows users to design documents including both images and text, and the parallel development of electronic laser printer/copier technology with capabilities that approach those of high-end presses. These tools allow users to produce documents that formerly would have required professional printing expertise and large printing systems. These technologies have brought major economic and industrial changes to the printing industry. As electronic publishing software becomes increasingly sophisticated, user-friendly, and reliable, it approaches the ideal of the print customer being able to produce files that can be reproduced on the press with little or no intervention by printing professionals. As the printing process is simplified, the customer can take responsibility for more of the work. Thus, the technologies diminish the value that printing organizations such as GPO add to the printing process, particularly for simpler printing jobs.
Nonetheless, professional expertise remains critical for many aspects of printing, and for many print jobs it is still not possible to bypass the printing professional altogether. The advent of the Internet permits the instantaneous distribution of the electronic documents produced by the new publishing processes, breaking the link between printing and dissemination. With the increasing use of the Web, the electronic dissemination of information becomes not only practical, but also more economical than dissemination on paper. As a result, many organizations are changing from a print to an electronic focus. In the early stages of the electronic publishing revolution, organizations tended to prepare a document for printing and then convert the print layout to electronic form—in other words, focusing on printing rather than dissemination. Increasingly, however, organizations are changing their focus to providing information—not necessarily on paper. Today an organization may employ computers to generate both plates used for printing as well as electronic files for dissemination. Tomorrow, the organization may create only an electronic representation of the information, which can be disseminated through various media, such as Web sites. A printed version would be produced only upon request.

Government Printing and Dissemination Changes Are Forcing GPO’s Transformation

As in private industry, printing and dissemination in the federal government are being heavily affected by the changing technological environment. This new environment presents both financial and management challenges to GPO. Just as the volume of material provided to private firms for printing has decreased over the past few years, so has the volume of material that federal agencies provide to GPO for printing. In addition, federal agencies are publishing more items directly to the Web—without creating paper documents at all—and are able to print and disseminate information without using GPO services.
Similarly, individuals are downloading documents from government Web sites, such as GPO Access, rather than purchasing paper copies of government documents, thus reducing document sales. As a result, GPO’s financial condition has deteriorated, and the relationship between GPO and its federal agency customers has changed.

Changes in Government Printing and Dissemination Result in Reduced Revenues

The reduction in the demand for procured printing and for printed government documents has resulted in reduced revenues to GPO. These diminished revenues, combined with steady expenses and management’s use of retained earnings for GPO-wide needs, have totally depleted the retained earnings from revolving fund activities. These retained earnings have gone from a surplus of $100 million in fiscal year 1998 to a deficit of $19 million in fiscal year 2003. Figure 1 shows the declining trend in retained earnings. Specifically, most of the reductions to revenues for GPO’s revolving fund activities are from two sources: (1) losses to the sales of publications operations and (2) adjustments to actuarial calculations of future liabilities for GPO’s workforce compensation. Additional reductions to retained earnings resulted from GPO’s procured printing operations and regional printing. (See fig. 2.) Also, retained earnings were used to provide the Retirement Separation Incentive Program for reductions to GPO’s workforce. Losses to the sales program account for the largest reductions to GPO’s retained earnings. The sales program has had a net loss of $77 million over the past 5 years, $20 million in fiscal year 2003 alone. According to GPO, these losses are due to a downward trend in customer demand for printed publications that has significantly reduced document sales revenues.
For example, according to the Superintendent of Documents, GPO sold 35,000 subscriptions to the Federal Register 10 years ago and now sells 2,500; at the same time, over 4 million Federal Register documents are downloaded each month from GPO Access. The Superintendent also reported that the overall volume of sales has dropped from 24.3 million copies sold in fiscal year 1993 to 4.4 million copies sold in fiscal year 2002. As a result, revenues have not covered expenses, and the sales program has sustained significant annual operating losses. (See fig. 3.) By comparison, the losses from GPO’s procured printing business are less significant: $15.8 million over the last 5 years. According to GPO, its federal agency print jobs at one time generated close to $1 billion a year. In fiscal year 2003, the amount was just over half that—$570 million.

Changes in Printing and Dissemination Affect How Federal Agencies Use GPO Services

These changes in federal printing and dissemination are also creating challenges for GPO’s long-standing structure for centralized printing and dissemination. As mentioned earlier, agencies are to notify GPO of published documents (if they used other printing sources), which allows GPO to review agency documents to determine whether the documents should be disseminated to the depository libraries. If they should be, GPO can then add a rider to the agency’s print contract to obtain the number of copies that it needs for dissemination. However, if agencies do not notify GPO of their intent to print, these documents become “fugitive documents” and may not be available to the public through the depository library program. In responding to our surveys, executive branch agencies reported that they are producing a significant portion of their total printing volume internally, generally on desktop publishing and reproduction equipment instead of large-scale printing equipment.
In addition, while most agencies (16 of 21) reported that they have established procedures to ensure that documents that should be disseminated through the libraries are forwarded to GPO, 5 of 21 did not have such procedures, thus potentially adding to the fugitive document problem. Responding agencies also reported that although currently more government documents are still being printed than are being published electronically, more and more documents are being published directly to the Web, and their numbers are expected to grow in the future. Most agencies reported that documents published directly to the Web were not of the type that is required to be sent to GPO for dissemination. However, a GPO official, in commenting on this, said that unless there is a specific reason why a document should not be disseminated to the public, such as if it is classified or of administrative interest only, GPO should have the opportunity to evaluate whether that document is suitable for dissemination through its depository library system. Of the five agencies that did publish eligible documents electronically, only one said that it had submitted these documents to GPO. As electronic publishing continues to grow, such conditions may contribute further to the fugitive document problem.

Changes in Printing and Dissemination Affect the Relationship between GPO and Executive Branch Customers

The ongoing agency shift toward electronic publishing is also creating challenges for GPO’s existing relationships with its executive branch customers. In responding to our surveys, executive branch agencies expressed overall satisfaction with GPO’s products and services and expressed a desire to continue to use these services for at least part of their publishing needs. However, these agencies reported a few areas in which GPO could improve—for example, in the presentation of new products and services. (We provide further results from our surveys on agency satisfaction in app. III.)
Further, some agencies indicated that they were less familiar with and less likely to use GPO’s electronic products and services. As shown in table 3, these agencies were hardly or not at all familiar with services such as Web page design and development (8 of 28), Web hosting services (8 of 29), and electronic publishing services (5 of 28). As a consequence, these agencies were also less likely to use these services. With the expected growth in electronic publishing and other services, making customer agencies fully aware of GPO’s capabilities in these areas is important. Table 3 provides agency responses on their familiarity with various GPO products and services. A few of the responding agencies reported less than satisfied ratings for some GPO products and services. Among these services were financial management services (7 of 23) and Web page design/development (3 of 10). Agencies also reported not using some GPO products and services, including Web hosting and Web page design/development services (18 of 28), converting products to electronic format (11 of 28), and electronic publishing services (9 of 28). Table 4 shows the results of our survey on agency satisfaction with GPO services, which includes agencies’ reports of products and services that they do not use.

GPO Is Taking Action to Address Challenges

GPO officials agreed with our assessment of the impact of technological change and said they are taking action to make GPO a more customer-focused organization.
According to these officials, GPO is taking a new direction with its Office of Sales and Marketing, including hiring an outside expert and establishing nine national account managers, who spend most of their time in the field building relationships with key customers, analyzing their business processes, identifying current and future needs, and offering solutions; working with its largest agency customer, the Department of Defense, to determine how to work more closely with large in-house printing operations; evaluating recommendations received from the Depository Library Council; and continuing to implement a Demonstration Print Procurement Project, jointly announced with the Office of Management and Budget on June 6, 2003. The Demonstration Print Procurement Project is to provide a Web-based system that will be a one-stop, integrated print ordering and invoicing system. The system is to allow agencies to order their own printing at reduced rates, with the option of buying additional printing procurement services from GPO. According to GPO, this project is also designed to address many of the issues identified through our executive branch surveys, particularly the depository library fugitive document problem.

Recommended Next Steps

Although executive branch agencies generally expressed satisfaction with GPO products and services, their survey responses indicate some areas for improvement.
Accordingly, we recommend that the Public Printer work with executive branch agencies to examine the nature of their in-house printing and determine whether GPO could provide these services more economically; address the few areas in which executive branch agencies rated GPO’s products, services, and performance as below average; reexamine GPO’s marketing of electronic services to ensure that agencies are aware of them; and use the results of our surveys to work with agencies to establish processes that will ensure that eligible documents (whether printed or electronic) are forwarded to GPO for dissemination to the public, as required by law.

Expert Panel Suggests Strategic Options for GPO’s Future Role

The Public Printer and his leadership team recognize the challenges that they face in the very competitive printing and dissemination marketplace and have embarked upon an ambitious effort to transform the agency. First and foremost, the Public Printer agrees with the need to reexamine the mission of the agency within the context of technological change that underlies GPO’s current situation. To assist in that process, our expert panel developed a series of options for GPO to consider in its planning. Briefly, the panel suggested that GPO develop a business plan to focus its mission on information dissemination as its primary goal, rather than printing; demonstrate to its customers—including agencies and the public—the value it can provide; improve and extend partnerships with agencies to help establish itself as an information disseminator; and ensure that its internal operations—including technology, how it conducts business with its customers, management information systems, and training—are adequate for efficient and effective management of core business functions and for service to its customers. We shared the results of the panel with GPO leadership, who commented that the panel’s suggestions dovetail well with their own assessments.
These leaders stated that they are using the results of the panel as a key part of the agency’s ongoing strategic planning process. The panel members are listed in appendix IV.

Create a New Vision Focusing on Dissemination

In view of the changing federal government printing and dissemination environment, the panel suggested that GPO first needs to create a new vision of itself as a disseminator of information, and not only a printer of documents. As one panel member put it, GPO should end up resembling a bank of information rather than a mint that stamps paper. As a first step in this new vision, according to the panel, GPO needs to develop a business plan that emphasizes direct electronic dissemination methods over distribution of paper documents. The panel identified several elements that could be included in such a business plan: Improving GPO Access. GPO Access should be upgraded, and particular emphasis should be placed on improving the search capabilities. Investigating methods to disseminate information directly. For example, GPO could develop additional services to “push” data and documents into the hands of those who need or want them. To become more active in disseminating data, GPO could provide information to public interest or advocacy groups that are interested in tracking government information on certain subjects. These groups require something like a news clipping service, and the panel suggested that this is one way in which GPO could provide “value-added” service for which it could collect fees. Modernizing production processes. GPO should be moving toward production processes that will allow it to prepare a document once for distribution through various media (print or electronic). In the past, most organizations have focused on printing paper documents that are then turned into electronic ones. According to the panel members, the strategy for the future is to publish electronically and print only when necessary.
Promoting the federal use of metadata. GPO should support the use of metadata—descriptive information about the data provided that is carried along with the data—across the federal government as a requirement for electronic publishing. Providing increased support to the depository libraries. According to the panel, the depository libraries will continue to play an important role in providing access to electronically disseminated government information—through GPO Access and other tools—to that portion of the public that does not have access to the Internet. To support this role, GPO will have to ensure that the depository libraries receive training in electronic search tools, especially in GPO Access. GPO officials stated that its Office of Innovation and New Technologies, established in early 2003, is leading an effort to transform GPO into an agency “at the cutting edge of multichannel information dissemination.” A major goal in this effort is to disseminate information while still addressing the need “to electronically preserve, authenticate, and version the documents of our democracy.” Also, GPO has established an Office of New Business Development that is to develop new products and service ideas that will result in increased revenues. GPO officials stated that they are using the results of the panel discussion to categorize and prioritize their initial compilation of ideas for new products and services and, in this context, plan to assess how these ideas would improve operations and revenue.

Demonstrate Value to Customers and the Public

The panel also agreed that, while GPO appears to provide value to agencies because of its expertise in printing and dissemination, it is not clear that agencies and the general public realize this. Therefore, GPO should focus on demonstrating its value to federal agencies and to the public.
According to the panel, areas that GPO could emphasize include the following: Providing competitively priced printing that meets customer needs. GPO should collect the data to show that it can, in fact, provide the “best value” for the government print dollar. GPO should demonstrate its capabilities by assisting agencies to select optimal alternatives for obtaining their printing. Providing expert assistance in electronic dissemination. Given GPO’s major role in providing information dissemination, one panel member suggested that GPO provide its expert advice on electronic Web site dissemination to agencies. Once again, GPO could develop information that demonstrates how it can add value in this area. Disseminating government information to the public. GPO should focus on demonstrating the usefulness of agencies’ sharing information with GPO for public dissemination. In addition, the depository libraries and GPO Access should be made better known to the public. GPO could demonstrate its value to the public as a trusted source of authentic government information. GPO agreed that demonstrating its value is an important part of its new customer service direction. GPO’s Office of Sales and Marketing is also working to augment customer service, including hiring an outside expert and establishing nine national account managers, as mentioned earlier.

Establish Partnerships with Collaborating and Customer Agencies

According to the panel, GPO should establish partnerships with other agencies and enhance the partnerships it already has. These partnerships can be used to assist GPO in establishing itself as a disseminator and depository of information and to expand agencies’ use of GPO in this role.
Specifically, the panel suggested that GPO establish partnerships with the other information dissemination and preservation agencies (such as the National Library of Medicine, the Office of Scientific and Technical Information, the Library of Congress, and the National Archives and Records Administration) with which it has related responsibilities. Through ongoing dialogue with these agencies, GPO will be able to (1) coordinate standards and best practices for digitizing documents and (2) work with agencies to archive documents in order to keep them permanently available to the public. GPO could be successfully marketed as the source of government information for public use. In addition, the panel suggested that GPO improve and expand its partnerships with other agencies. Most agencies consider GPO a resource for printing documents; however, it now has the capability to assist in the collection and dissemination of electronic information. GPO agreed that partnerships with other agencies, particularly the information dissemination agencies, would be a key item in its transformation. GPO has made efforts to join various working groups within the government working on information dissemination issues. Most recently, the Public Printer has been added to the oversight committee of the National Digital Information Infrastructure and Preservation Program (NDIIPP), a national cooperative effort to archive and preserve digital information, led by the Library of Congress.

Improve Internal Operations

The panel suggested that GPO would need to improve its internal operations to be successful in the very competitive printing and dissemination marketplace. Panel members suggested that GPO consider the following strategies. Emphasize the use of technology to address future needs.
The panel members suggested that GPO hire a chief technical officer (in addition to its chief information officer), who would focus on bringing in new printing and dissemination technologies while maintaining older technologies. Improve how it conducts business with its customers. An electronic means for submitting printing requests would streamline the printing process for GPO customers. One panel member noted that when his organization started an electronic submission system for manuscripts, the number of requests it received increased dramatically because such systems made it easier for the user. (GPO’s demonstration project, currently being piloted at the Department of Labor, includes use of a Web-based tool for submitting printing requests.) Improve management information systems. GPO should overhaul its outdated management information systems and acquire new ones that can provide management with the information it needs to effectively monitor operations and to make good business decisions. Enhance employee training. GPO’s transformation should include significant improvements to employee training. GPO customer service employees should have the knowledge they need to effectively assist customers not only in printing publications and creating electronic documents, but also in advising customers on the best form of dissemination (paper or electronic) for their jobs. GPO agreed that its internal operations need improvement. Among its actions to address the adequacy of its internal functions, GPO has hired a chief technical officer. The chief technical officer serves as a codirector of the Innovation and New Technology Office and provides principal guidance in the creation and development of technology designed to accelerate the transformation of GPO into a 21st century information organization, using state-of-the-art solutions to provide the highest quality government information services to the nation.
GPO Has Made the Case for Change, but Actions to Advance Transformation Needed

Large-scale change management initiatives, such as organizational transformations, are not simple endeavors and require the concentrated efforts of both leadership and employees to realize intended synergies and to accomplish new organizational goals. We have identified a number of key practices and related implementation steps that have consistently been found at the center of successful transformations. Collectively, these key practices and implementation steps can help agencies transform their cultures so that they have the capacity to fulfill their promises, meet current and emerging needs, maximize their performance, and ensure accountability. GPO has applied some key practices as part of its transformation effort, such as involving top leadership and strategically communicating with employees and other stakeholders. However, it has not fully applied key practices that emphasize planning and goal setting. For example, GPO has not developed a plan for its transformation that would include goals and the strategies to achieve them. Such a plan is important to pinpoint performance shortfalls and gaps and suggest midcourse corrections.

GPO's Leadership Has Clearly Articulated the Need to Transform and Taken Steps to Ensure the Continued Delivery of Services

Because transformation of an organization entails fundamental change, strong and inspirational leadership is indispensable. Our work has found that leadership articulating a succinct and compelling reason for change helps employees, customers, and stakeholders understand the expected outcomes of the transformation and engenders not only their cooperation, but also their ownership of these outcomes. In addition, to ensure that the productivity and effectiveness of the organization do not decline, leadership must also balance the continued delivery of services with transformation activities.
Ensure top leadership drives the transformation. Define and articulate a succinct and compelling reason for change. Balance continued delivery of services with transformation activities. On several occasions and to different audiences, the Public Printer has reiterated the need for GPO to move from the 19th century to the 21st century. The Public Printer bases his case for change on three interrelated points that are consistent with our findings discussed above: GPO's printing business and customer base have decreased significantly in recent years due to the government's and public's increased use of and reliance on electronic documents, requiring GPO to establish itself as the leading organization within the federal government for the collection, authentication, and preservation of government documents, rather than a traditional printing operation. GPO has failed to update its technological abilities to keep pace with changes in the information dissemination environment, and as a result must update its technology to address the needs of today's customers and information users and stay alert to future trends and changing needs. GPO's retained earnings, which were normally available to fund technological investment, are virtually depleted, requiring GPO to change the way in which it does business to ensure that it can reverse the trend of financial losses. GPO's precarious financial condition makes it essential that its leaders effectively balance transformation efforts with the continued delivery of services. The Public Printer created and filled eight top leadership positions. The creation of these positions recognized that the demands of transforming while managing an ongoing operation can strain leadership, as well as the importance of organizational structure as a key factor affecting an agency's management control environment.
These positions, which had no counterpart in GPO's former organization, can help ensure that GPO balances its transformation efforts with its day-to-day operations. For example, the Chief Operating Officer (COO) focuses primarily on day-to-day activities, the Chief of Staff focuses on strategic planning, and the Chief Human Capital Officer (CHCO), CIO, and CFO address both types of activities within their respective functional areas. (See fig. 4.)

GPO Has Set Interim Goals for Its Operating Units While It Works on a Strategic Plan

The mission and strategic goals of a transformed organization must become the focus of the transformation, define the culture, and serve as the vehicle for employees to unite and rally around. In successful transformation efforts, developing, communicating, and constantly reinforcing the mission and strategic goals give employees, customers, and stakeholders a sense of what the organization intends to accomplish, as well as helping employees determine how their positions fit in with the new organization and what they need to do differently to help the new organization achieve success. Adopting leading practices for results-oriented strategic planning and reporting, including those mandated for executive agencies in the Government Performance and Results Act (GPRA), can help focus transformation efforts. While GPO is not required to follow GPRA, the act can provide a relevant framework for GPO to follow in developing its strategic plan. GPRA requires that strategic plans include several elements, including a mission statement, goals and objectives, and approaches or strategies to achieve goals and objectives. The framework can help an agency meet management control standards by enabling top management review of actual performance against planned performance. Establish a coherent mission and integrated strategic goals to guide the transformation. Adopt leading practices for results-oriented strategic planning and reporting.
GPO is establishing a mission and strategic goals. Its overall approach is to consider the information gathered in the past year on GPO’s current environment—including the results of our work—and develop its strategic plan by the summer of 2004. Specific responsibilities for drafting a strategic plan have been placed with the Chief of Staff, who, beginning in April 2004, held biweekly meetings with the Public Printer to discuss the direction for the strategic plan. These meetings were meant to provide the Chief of Staff with updates on the Public Printer’s vision, which, according to a GPO official, is being developed as he meets with stakeholders and industry leaders. Over the past year, the Public Printer has spoken with employees, stakeholders, and the Congress to help focus and refine a vision for GPO’s future. On April 28, 2004, the Public Printer made his most clear and direct statement of his vision for GPO thus far, stating that GPO has “begun to develop a new vision for the GPO: an agency whose primary mission will be to capture digitally, organize, maintain, authenticate, distribute, and provide permanent public access to the information products and services of the federal government.” GPO’s strategic plan has the potential to unite employees around the new mission and determine what they need to do to help GPO transform and achieve success in the new environment. Although GPO has not fully developed its mission and strategic goals, GPO’s leadership has started to change GPO’s culture by setting interim goals for major operating units. Managers told us that in the past, GPO’s culture was to not set goals in order to avoid being held accountable for results. More specifically, GPO did not set or track any organizational goals and, therefore, did not develop the capacity to measure performance. 
The COO began to change GPO's culture by leading an initiative in October 2003 to develop goals for its operating units and told us that it was important to begin to focus managers' attention on priority issues and hold them accountable for progress. He said he viewed the interim goals as a necessary step to prepare GPO managers to operate in a results-oriented environment after GPO's strategic plan is completed. The COO met with the heads of each business unit to develop goals that they thought would be consistent with GPO's yet-to-be-developed strategic mission based on discussions with the Public Printer. Once the goals were developed, the COO and business unit managers identified areas where some interdependence with other managers' goals might exist. Each manager is responsible for achieving between 6 and 11 goals that are specific to his or her business unit, and 6 additional goals that are common across GPO. The common goals are as follows: offer training opportunities to all employees in necessary job skills; establish baseline information on customer satisfaction; resolve all reportable conditions from financial audits; establish a line of communication through regular meetings; complete second-level reorganizations; and establish adequate off-site backup to enable continuity of essential operations. GPO's efforts to set goals are a significant step toward strengthening communication and accountability; however, many of the goals do not emphasize outcomes. For example, one of the goals for both the Customer Services and Information Dissemination divisions is to implement the Office of Management and Budget (OMB) compact demonstration program. While this demonstrates that GPO has incorporated crosscutting goals between its operating units, this goal is a statement of a task to be accomplished rather than an outcome to be achieved.
While goals are important for establishing accountability, so too are measures, because they allow leaders to perform their management control responsibilities for monitoring performance and ensuring resolution of identified performance gaps. GPO's COO has stated that he would like to strengthen performance measurement as GPO sets its goals for fiscal year 2005. To this end, GPO has the opportunity to learn from the practices of leading organizations that implemented results-oriented management. Among other things, such leading organizations generally developed measures that were tied to program goals, demonstrated the degree to which the desired results were achieved, and were limited to the vital few that were considered essential to producing data for decision making.

Recommended Next Steps

Consistent with the efforts under way, the Public Printer should ensure that GPO's strategic planning process includes development of a comprehensive agency mission statement to define the basic purpose; agencywide long-term goals and objectives to explain what results are expected from the agency's main functions and when to expect those results; approaches or strategies to achieve goals and objectives to align GPO's activities, core processes, and resources to support achievement of GPO's strategic goals and mission; a description of the relationship between the long-term and annual goals to show expected progress; an identification of key external factors to help determine what actions will be needed to meet the goals; and a description of program evaluations used to establish or revise strategic goals, and a schedule for future program evaluations. The Public Printer should reinforce a focus on results by continuing efforts to set goals, measure performance, and hold managers accountable by adopting leading practices of organizations that have been successful in measuring their performance.
First, the measures that GPO develops should be tied to program goals and demonstrate the degree to which the desired results are achieved; limited to the vital few that are considered essential to producing data; responsive to multiple priorities; and responsibility-linked to establish accountability for results. Second, GPO leadership needs to recognize the cost and effort involved in gathering and analyzing performance data and make sure that the data it collects are sufficiently complete, accurate, and consistent to be useful in decision making.

GPO Can Strengthen Its Transformation by Focusing on a Key Set of Principles and Priorities

Principles are the core values of the new organization; like the mission and strategic goals, they can serve as an anchor that remains valid and enduring while organizations, personnel, programs, and processes may change. Core values define the attributes that are intrinsically important to what the new organization does and how it will do it. They represent the institutional beliefs and boundaries that are essential to building a new culture for the organization. Focus on a key set of principles and priorities at the outset of the transformation. Embed core values in every aspect of the organization to reinforce the new culture. GPO leadership has not adopted a set of agencywide core values to help unify GPO to achieve its transformation, but has created a task team under the direction of the Deputy Chief of Staff to develop them. Although the core values have yet to be developed, they are referenced in draft performance agreements for its senior managers. The experience of a GPO unit demonstrates the benefits of having core values. According to the Director of the Pueblo Document Distribution Center, core values were developed in 1998 that helped change the center's culture and focus employees on improving the center's performance.
The employees at Pueblo had a series of meetings to develop and agree on the core values, thereby taking ownership of them and reinforcing employees' understanding that they were responsible for the success of the Pueblo facility. Figure 5 shows a banner detailing these core values that hangs prominently in the facility. Employees said that the banner is a constant reminder that their individual and organizational success is dependent on how well they employ the core values as they serve their customers. The Pueblo Document Distribution Center Director said that establishing core values has helped employees take ownership for improving customer service, as measured by the center's per-order error rate. He said that the employees understand the importance of these core values because most of their work is done on a reimbursable basis for other federal agency customers, the center's primary source of funding. Efforts to improve customer service are consistent with the recommendation made by the panel of printing and information dissemination experts we convened.

Recommended Next Steps

The Public Printer should articulate to all employees how the core values can guide and anchor GPO's transformation efforts, and should ensure that core values developed by units within GPO are consistent with GPO's agencywide core values.

GPO Does Not Have a Transformation Plan, but Has Taken Steps to Demonstrate Progress

Because a transformation is a substantial commitment that could take years to complete, it must be carefully and closely managed. As a result, it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. Further, research suggests that failure to adequately address, and often even consider, a wide variety of people and cultural issues is at the heart of unsuccessful transformations. Thus, people and cultural issues must be monitored from day one of a transformation.
Set implementation goals and a timeline to build momentum and show progress from day one. Make public implementation goals and timeline. Seek and monitor employee attitudes and take appropriate follow-up actions. Identify cultural features of transforming organizations. Attract and retain key talent. Establish an organizationwide knowledge and skills inventory.

Make public implementation goals and timeline.

GPO has not established a transformation plan with specific time frames and goals for which leadership would be held accountable. Although GPO leadership has stated that a critical phase of its transformation is to develop a strategic plan by the summer of 2004, other more specific goals and timelines for the transformation, which could be linked to those being included in the strategic plan, are under development. GPO has the opportunity to ensure that its transformation remains on track and is ultimately successful by applying project management principles. Project management is a control mechanism that provides some assurance that desired outcomes can be achieved. It involves establishing key goals, tasks, time frames, and responsibilities that guide project accomplishment and ensure accountability. GPO leaders have acknowledged general weaknesses in GPO's project management capabilities and have identified this as a skills gap that is being addressed through training initiatives. By enhancing project management skills at various levels of the organization and applying project management principles to key efforts like the transformation, GPO can have greater assurance that these efforts will produce desired outcomes.

Seek and monitor employee attitudes and take appropriate follow-up actions.

Because people are the drivers of any merger or transformation, monitoring their attitudes is vital.
Top leadership should also take appropriate follow-up actions to avoid creating negative attitudes that may translate into actions that could have a detrimental effect on the transformation. In February 2003, GPO leadership sought employee attitudes by implementing an employee climate survey, an important first step to establish a baseline on employee attitudes and concerns. After the survey was completed, GPO leadership adopted recommendations to address employee concerns. GPO will have the opportunity to take additional follow-up actions based on a second employee survey it plans to administer in the coming months. This survey has the potential to provide GPO leadership with updated information on GPO employee attitudes and views on GPO’s transformation. Identify cultural features of transforming organizations. Because a change of culture is at the heart of a successful transformation, it is important for leadership to gain a better understanding of the organization’s beliefs and values prior to, or early in, the transformation process. By listening to GPO employees and customers, GPO management determined that its culture was not sufficiently customer focused in dealing with agencies’ printing needs. Instead, GPO relied on the requirement in Title 44 that federal agencies use GPO for their printing needs and made little effort to develop customer relationships and anticipate the needs of its customers. As mentioned earlier, to foster a more customer-oriented culture, GPO has created new positions, national account managers, responsible for developing relationships with customer agencies. The national account managers’ role is to develop relationships with agency customers and provide them with information about the products and services that GPO can offer to meet their information dissemination needs. 
Similarly, GPO's former CHCO spoke with GPO employees about their views of the Human Capital Office and identified features of the office's culture that he is trying to change. The recent restructuring of the Human Capital Office is aimed at creating a culture that is more customer focused, breaking down organizational barriers, and enhancing internal and external communication. For example, the Human Capital Office has been reorganized into teams dedicated to support GPO's operating units. These teams will be able to address the full range of human resources activities, from hiring to retirement, as well as worker safety issues. According to GPO, physically locating the human capital team members with the business unit staff ensures that the operating units' human resources needs are more easily met, improves communication between the units and the Human Capital Office, and allows for faster decision making.

Attract and retain key talent.

Success is more likely when the best people are selected for each position based on the competencies needed for the new organization. To help ensure that GPO retained key talent needed for GPO's transformation, the Public Printer appointed experienced GPO employees to fill the top management positions of Superintendent of Documents, Managing Director of Customer Services, and Managing Director of Plant Operations. (One of the three employees has over 40 years of experience at GPO.) According to the Public Printer, each of these individuals is committed to helping GPO transform and successfully meet the needs and demands of GPO's customers in the 21st century. In addition, these appointments ensured that a vast amount of institutional knowledge remained at GPO during the transformation and were meant to give other current GPO employees a clear message that, while GPO is transforming and changing the way it does business, there is a place for current GPO employees at all levels of the new organization.
To ensure that GPO attracts the people it needs to successfully transform and to obtain the next generation of technical skills needed to prepare GPO for the challenges of the 21st century, the Public Printer has increased the recruitment of outstanding college scholars. GPO has implemented a recruiting initiative at universities and colleges that emphasize fields of study that would benefit GPO in meeting its current and emerging needs. For example, the initiative will target graduates in printing and graphic communication; electrical, mechanical, and chemical engineering; and business administration. In response to a request from the General Counsel of GPO, we recently provided an advance decision that GPO may use appropriated funds to provide recruitment and relocation payments and retention allowances to certain GPO employees, but suggested that it consult with the Joint Committee on Printing before doing so. GPO is exploring these and other strategies to enhance its ability to recruit and retain top talent with needed skills and knowledge. We have reported that agencies have successfully used human capital flexibilities, such as recruitment and retention allowances, as important human capital strategies to assist in reaching program goals. Establish an organizationwide knowledge and skills inventory. A knowledge and skills inventory can help a transforming organization identify the skills and competencies of the existing workforce that can help the organization adapt to its new mission. In addition, a transforming organization needs to define the critical skills and competencies that it will require in the future to meet its strategic program goals and identify how it will obtain these requirements, including those that it will need to acquire, develop, and retain (including full- and part-time federal staff and contractors) to meet future needs. 
GPO’s Human Capital Office is planning to complete a knowledge and skills inventory to identify the skills and competencies of the existing workforce. In a memorandum to all GPO employees, the former CHCO explained that the Workforce Development Department will undertake a comprehensive skills assessment involving all employees to strategically determine how GPO will need to retrain the workforce as the transformation proceeds. The skills assessment will include a number of measurement tools and methods, including skills tests, electronic and paper-based surveys, interviews, focus groups, and observations of work. The knowledge and skills inventory could help GPO as it reorganizes and shifts focus to new missions and competencies. Knowledge and skills inventories have been used by agencies to identify related training needs. We have reported that agencies have used a variety of approaches in assessing skills and competencies to identify training needs. For example, agencies used workforce planning models; assessed the workforce in view of organizational, occupational, and unit-based competency standards; and evaluated job performance appraisals and information from individual development plans. While GPO completes the skills assessment of its current employees, it plans to also complete a systematic identification of new skills and competencies that it will need in the future. When GPO leadership completes both of these efforts, it will be able to pinpoint skills gaps within its workforce and develop strategies to ensure that GPO retains, develops, and acquires employees with these skills. These efforts can serve to help employees understand how they can enhance their skills to contribute to GPO’s future and are consistent with the recommendation to strengthen training made by the panel of printing and information dissemination experts that we convened. 
The skills inventory is an important step to ensure that GPO employs people with the skills necessary for its future mission. However, until GPO leadership finalizes the mission and goals of the transformed GPO, it cannot determine fully the skills needed to achieve current and future programmatic results or develop strategies focused on those skills.

Recommended Next Steps

The Public Printer should develop a documented transformation plan that outlines his goals for the transformation and when he expects to meet them, and that identifies critical phases and essential activities that need to be completed; determine, based on the results of the upcoming employee survey, whether any changes are needed to the transformation strategies; and ensure that the development of human capital strategies focuses on the skills gaps identified by GPO leadership.

GPO's Management Team Is in Place, but Attention to Daily Transformational Activities Could Be Strengthened

Dedicating a strong and stable implementation team that will be responsible for the transformation's day-to-day management is important to ensuring that it receives the focused, full-time attention needed to be sustained and successful. Specifically, the implementation team is important to ensuring that various change initiatives are sequenced and implemented in a coherent and integrated way. Top leadership must vest the team with the necessary authority and resources to set priorities, make timely decisions, and move quickly to implement top leadership's decisions about the transformation. Dedicate an implementation team to manage the transformation process. Establish networks to support the implementation team. Select high-performing team members. The Public Printer has put in place a senior management team, referred to as the management council, which can help bring GPO into the future.
This council is composed of the COO, CFO, CHCO, CIO, Superintendent of Documents, Managing Director of Plant Operations, Managing Director of Customer Services, the Chief of Staff, the Deputy Chief of Staff, General Counsel, and the Inspector General. According to GPO officials, the management council does not have regularly scheduled meetings and only meets when convened by the Public Printer. About 80 percent of the management council’s time is devoted to long-term, transformational activities, while 20 percent of the time is devoted to addressing day-to-day operational issues. A second management team, referred to as the operations council, is composed of the CFO, CHCO, CIO, Superintendent of Documents, Managing Director of Plant Operations, and Managing Director of Customer Services; this council meets weekly with the COO. According to GPO officials, this council spends about 80 percent of its time dealing with day-to-day operations and 20 percent with transformation issues. Although the operations council occasionally discusses transformation-related issues, its meetings are not structured around specific transformation tasks or decisions required to make progress on the transformation. Instead, the meetings give each member of the council an opportunity to provide an update on issues affecting his or her unit’s operations, improve communication among GPO’s top managers, and ensure that crosscutting issues in day-to-day operations receive management attention. GPO leadership recognizes the importance of establishing networks to support its transformation efforts, and it is creating a network of task forces to lead the development of various transformational strategies. For example, GPO created a task force to focus on “revenue enhancements and new investments.” This task force will be chaired by the CFO, and will include members of other GPO business units, such as New Business Development and Information Technology and Systems. 
The Public Printer directed the chairpersons of the task teams to select task force members and develop the strategies that will be the basis for GPO's strategic plan by June 17, 2004. Our work on transformations has found that establishing networks, including a senior executive council, functional teams, or crosscutting teams, can help the implementation team conduct the day-to-day activities of the transformation and help ensure that efforts are coordinated and integrated. GPO leaders have acknowledged that creating the support capacity and accountability for daily transformation activities could help ensure that the transformation continues to make progress. These leaders said that responsibilities for day-to-day transformation activities could include setting priorities, proposing milestones, tracking progress, providing analysis to support decision making, and coordinating among teams.

Recommended Next Steps

The Public Printer should establish a transformation team, or augment the management council, to address the day-to-day management of GPO's transformation effort. The team should include high-performing employees who have knowledge and competencies that could help GPO plan its future. Establishing such a team could create the focus needed to stimulate and sustain GPO's transformation efforts.

GPO Is Planning Changes to Strengthen Its Performance Management System

A performance management system can help manage and direct the transformation process and serves as the basis for setting expectations for individuals' roles in the transformation. To be successful, transformation efforts must have leaders, managers, and employees who have the individual competencies to integrate and create synergy among the multiple operating units involved in the transformation effort. Individual performance and contributions are evaluated on competencies such as change management, cultural sensitivity, teamwork and collaboration, and information sharing.
Leaders, managers, and employees who demonstrate these competencies are rewarded for their success in contributing to the achievement of the transformation process. Use the performance management system to define responsibility and assure accountability for change. Adopt leading practices to implement effective performance management systems with adequate safeguards. GPO plans to implement a new performance management system for its executives and will later work on changes for employees at other organizational levels. As part of this effort, GPO is exploring the use of competencies to provide a fuller assessment of performance. For example, GPO has developed performance agreements for its senior managers based upon the executive core qualifications adopted by the Office of Personnel Management for senior executives and included responsibilities such as leading strategic change. Each responsibility will be linked to three or four competencies. Additionally, the draft performance agreements include interim goals that GPO developed for its operating units and other elements that we have identified as important for executive performance. They include, for example, specific levels of performance that GPO plans to link to strategic objectives to help senior executives see how they directly contribute to organizational results. Until GPO’s strategic plan is completed, however, GPO will not be able to fully align individual performance competencies or expectations with organizational goals. The completion of the strategic plan will provide human capital officials with the information needed to develop competencies and expectations for employees that have a direct link to GPO’s goals, providing employees with the information they need to understand how their performance leads to organizational success. 
As part of GPO’s effort to strengthen performance management, GPO plans to pilot a new system that, beginning with its senior executives, will more closely link an individual’s pay with his or her performance. Linking pay to performance is a key practice for effective performance management. We have reported that efforts to link pay to performance require adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and nondiscriminatory implementation of the system. The Human Resources Office has begun developing a pay-for-performance program that will use measures of effectiveness that directly link individual performance with organizational goals and objectives. The newly established Workforce Development, Education and Training Office will be required to develop and deliver training to supervisors and managers on performance management. The objective is to ensure that supervisors and managers are equipped with the necessary skills to effectively manage their employees, help drive change efforts, and achieve results.

Recommended Next Steps

The CHCO should continue developing a performance management system for all GPO employees that creates a line of sight by linking employee performance with agency goals. The CHCO should ensure that GPO’s new performance management system has adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and nondiscriminatory implementation of the system.

GPO Has Improved Communication, but Can Better Address Employee Needs

Communication, an important management control, is most effective when done early, clearly, and often, and when it is downward, upward, and lateral. Successful organizations have comprehensive communication strategies that reach out to employees, customers, and stakeholders and seek to genuinely engage them in the transformation process.
- Establish a communication strategy to create shared expectations and report related progress.
- Communicate early and often to build trust.
- Ensure consistency of message.
- Encourage two-way communication.
- Provide information to meet specific needs of employees.

The Public Printer communicated his intention to transform GPO early and often and to various audiences. For example, in his confirmation hearing in October 2002, the Public Printer told Congress that GPO “must step back and take a new look at the changing and emerging information needs of its customers and develop a deeper understanding of its true strengths so that it can determine how best to build a new business model.” Then again, just 8 days after the Public Printer took office, he publicly stated his intention to transform GPO. In his communications with employees, the Public Printer has also frequently expressed his intention to transform GPO. For example, GPO’s biweekly management newsletter, the GPO Link, often contains articles about GPO’s transformation. Transforming organizations have found that communicating information early and often helps to build an understanding of the purpose of the planned changes and builds trust among employees and stakeholders. GPO leadership has communicated a consistent message about the transformation to employees, customers, and other stakeholders through such methods as sponsoring conferences, attending customers’ meetings, and speaking with relevant trade magazines. For example, in correspondence with the Congress, employees, and the library community, the Public Printer and other senior managers have used similar terms and concepts when discussing GPO’s transformation. This consistency is important in ensuring that GPO’s employees, customers, and other stakeholders understand the current environment under which GPO operates.
A message to employees and others affected by a transformation that is consistent in tone and content can alleviate the uncertainties generated during the unsettled times of large-scale change management initiatives. GPO leadership has encouraged two-way communication by instituting methods for employees and others to provide feedback and ask questions. For example, GPO’s intranet site has a section called “Ask the Public Printer.” On the site, the Public Printer fields questions on issues ranging from training opportunities, to building renovation issues, to contingency planning. In addition, the Public Printer holds periodic town hall meetings that include time for employees in attendance to ask him questions in person. In addition, stakeholders have been asked to communicate with GPO leaders. For example, on January 22, 2004, the Depository Library Council provided GPO with advice on topics that the Public Printer identified as important to the future of GPO. The Public Printer expects to use feedback from stakeholders such as the Depository Library Council as GPO develops its strategic plan. Two-way communication is central to forming the effective internal and external partnerships that are vital to the success of any organization. GPO leadership has also made significant efforts to improve communication between management and employees. For example, GPO established the Employee Communications Office, which was developed with the vision “to have the best informed workforce in the U.S. Government by over-communicating organizational clarity and the mission and vision of the new GPO.” An important initiative undertaken by the Employee Communications Office is the development of GPO Link, a biweekly newsletter that reports on activities of GPO’s top managers. 
Despite these efforts, because GPO’s future mission and strategies have not yet been decided, the Public Printer has been unable to communicate the nature of the change that GPO needs to make in a way that addresses the specific needs of employees. During a communications focus group GPO held in October 2003, employees stated that recent efforts to improve communication were positive, but failed to provide the specific information needed to alleviate job concerns. These concerns were also voiced during town hall meetings led by the Public Printer in January 2004 and were consistent with concerns raised by union representatives in their discussions with us about GPO’s transformation. Employees have indicated they are unsure about their future in the new GPO, and are seeking specific information on the skills they will need to remain useful to GPO. Communicating with employees about their specific concerns can help them understand how they might be affected and how their responsibilities might change with the new organization. GPO managers, union leaders, and employees have indicated that employees are unsure of their role in GPO’s transformation. Union leaders told us that much of the communication has been rhetoric with insufficient detail regarding how the transformation will affect employees. For example, the Public Printer has stated that the transformation will bring GPO into the 21st century, but the specifics of what jobs might be lost or changed have not been discussed because GPO is developing its mission and strategic plan.

Recommended Next Steps

The Public Printer can augment GPO’s communication about the transformation to include additional information that employees can use to understand their role in building the GPO of the 21st century.
As GPO’s strategic planning effort moves forward, communication with employees should include topics such as GPO’s new mission, strategic goals, and in particular, employee concerns about their role in the new environment. As key decisions are made, communication should address how GPO’s transformation will affect employees so that they understand how their jobs may be affected, what their rights and protections might be, and how their responsibilities might change.

GPO Can Expand the Involvement of Employees in the Transformation

Employee involvement strengthens the transformation process by including frontline perspectives and experiences. Further, employee involvement helps to create the opportunity to establish new networks and break down existing organizational silos, increase employees’ understanding and acceptance of organizational goals and objectives, and gain ownership for new policies and procedures.

- Involve employees to obtain their ideas and gain their ownership for the transformation.
- Use employee teams.
- Involve employees in planning and sharing performance information.
- Incorporate employee feedback into new policies and procedures.
- Delegate authority to appropriate organizational levels.

The former and Acting CHCO, CIO, CFO, and Managing Director of Customer Services told us that they are adopting team-based approaches for accomplishing their units’ goals, which include improved customer service. For example, GPO combined the former procurement division with the customer services division to create teams of employees who have a range of skills to address customer needs. Previously, GPO’s customers were shuffled between these two divisions, neither of which was clearly accountable for addressing the customers’ needs.
A GPO official explained that by changing to a team approach, where a group of about five employees is responsible for all work with a customer, accountability for meeting the needs of that customer is clear, which may lead to improved service. A team-based approach to operations can create an environment characterized by open communication, enhanced flexibility in meeting job demands, and a sense of shared responsibility for accomplishing organization goals and objectives. GPO units can expand the involvement of employees and use their feedback in planning and sharing performance information, which can help employees accept and understand the goals of their units and their role in achieving them. For example, GPO officials told us that the CFO has shared goals for his division with his managers, who have, in turn, shared the goals with their employees. Therefore, all employees under the CFO know the goals of the division and how their work and performance help realize the goals. However, not all division managers have shared goals with their employees. The practice of involving employees in planning and sharing performance information can be transferred to other GPO units as GPO’s transformation progresses. Major transformations, like GPO’s, often include redesigning work processes, changing work rules, or making other changes that are of particular concern to employees. GPO has made or plans to make changes to many of its policies and procedures. As we mentioned earlier, for example, GPO is planning to pilot test a new pay-for-performance system, beginning with its senior managers. We have reported on other agencies’ attempts to involve employees and unions in developing aspects of their personnel systems. For example, at the Department of Homeland Security, employees and union representatives played a role in shaping the design of a proposed personnel system.
The design process attempted to include employees by creating multiple opportunities for employees to provide feedback. GPO has taken some actions to delegate authority to employees. Soon after the new Public Printer took office, GPO instituted a time-off awards program, which provides supervisors with a means to recognize employees for their productivity, creativity, dedication, and outstanding contributions to the mission of GPO. Before GPO created this award program, supervisors did not have the authority to recognize and reward outstanding performance. In a transformation, employees are more likely to support changes when they have the necessary authority and flexibility—along with commensurate accountability and incentives—to advance the organization’s goals and improve performance. Delegating certain personnel authorities is important for managers and supervisors who know the most about an organization’s programs and can use those authorities to make those programs work. The former Deputy Public Printer told us that decision making on many day-to-day matters was centralized within his office. For example, his approval was required for all training requests from GPO employees. The current Public Printer has delegated authority to approve training to lower level managers who are more familiar with the employees’ work requirements and, therefore, have a better understanding of the training individual employees need to improve their performance. We have reported that agency managers and employees have important roles in the success of training and development activities. Managers are responsible not only for reinforcing new competencies, skills, and behaviors but also for removing barriers to help employees implement learned behaviors on the job. 
Furthermore, if managers understand and support the objectives of training and development efforts, they can provide opportunities for employees to successfully use new skills and competencies and can model the behavior they expect to see in their employees. Employees also need to understand the goals of agencies’ training and development efforts and accept responsibility for developing their competencies and careers, as well as for improving their organizations’ performance.

Recommended Next Steps

GPO leadership should involve employees more in planning and decision making for the future, allowing employees to gain ownership of the transformation. For example, the CHCO should incorporate employee feedback as part of the process for developing GPO’s pay-for-performance system and in training and development activities.

World-Class Management Practices Can Strengthen GPO’s Transformation

Successful change efforts start with a vision of radically improved performance and the relentless pursuit of that vision. Leaders of successful transformations seek to implement best practices in systems and processes and guard against automatically retaining the approaches used in the past. Rather than developing optimal systems and processes, transforming organizations risk devoting attention to mending less than fully efficient and effective systems and processes merely because those are already in place. Over the longer term, leaders of successful mergers and acquisitions, like leaders of successful organizations generally, seek to learn from best practices and create a set of systems and processes that are tailored to the specific needs and circumstances of the transforming organization. GPO leadership has articulated a vision to transform GPO into a world-class organization and has taken some initial steps toward this objective, most notably with respect to human capital management.
However, because significant change efforts are difficult and take a long time, continued leadership attention is needed. The commitment of the Public Printer, the appointment of a COO, and other key leadership selections are positive steps in this regard. In particular, we have reported that COOs can be part of a broader effort to elevate attention to management and transformation issues, integrate various key management and transformation efforts, and institutionalize accountability for addressing management issues while leading a transformation. By their very nature, the problems and challenges facing agencies are crosscutting and thus require coordinated and integrated solutions. However, the risk is that management responsibilities (including, but not limited to, information technology, financial management, and human capital) will be “stovepiped” and thus will not be carried out in a comprehensive, ongoing, and integrated manner.

GPO Has Taken Numerous Actions to Strengthen Human Capital Management

Having effective human capital policies and procedures is a critical factor in an organization’s management control environment. GPO’s efforts to strengthen human capital management demonstrate a commitment to these management controls. In October 2003, we reported on how GPO leadership could advance its transformation through strategic human capital management and made numerous recommendations to GPO leadership that were based on leading practices in strategic human capital management. Taken as a whole, these recommendations represent a framework for radically improving GPO’s human capital practices. GPO’s Human Capital Office is using our October 2003 report as GPO’s roadmap for transforming its human capital management and is actively implementing the recommendations we made. Much of GPO’s progress in improving its human capital management has been described previously in this report.
Our recommendations focus on four interrelated areas: communicating the role of managers in GPO’s transformation, strengthening the role of the human resources office, developing a strategic workforce plan to ensure GPO has the skills and knowledge it needs for the future, and using a strategic performance management system to drive change. GPO has made clear progress toward adopting the leading practices that we described in our October report, and has shown a continuing interest in improving GPO’s Human Capital Office by identifying management best practices used by other organizations. The experience of transforming organizations, including GAO, has shown that transformation must be based on the best, most up-to-date management practices to reach its full potential. Consistent with this practice, GPO leadership requested our assistance in identifying and describing approaches and strategies used by other organizations to restructure their workforces. In response to this request, on January 20, 2004, we briefed GPO leadership on the workforce restructuring efforts of the Federal Deposit Insurance Corporation, GAO, and the Treasury’s Financial Management Service. The briefing presented the lessons that these agencies learned from their workforce restructuring efforts, with particular emphasis on efforts to assist employees in finding other employment. The approaches and strategies we highlighted were retraining, outplacement assistance, workforce restructuring planning, communication, and employee and union involvement. Our briefing contained specific examples, related agency materials, and contacts that could provide further information and assistance to GPO. 
GPO’s CIO Organization Has Begun to Transform

The Public Printer has stated that the new vision of GPO will be an agency whose primary mission will be to capture digitally, organize, maintain, authenticate, distribute, and provide permanent public access to the information products and services of the federal government. To execute this vision, he states that GPO must deploy the technology needed by federal agencies and the public to gather and produce digital documents in a uniformly structured database in order to authenticate documents disseminated over the Internet and to preserve the information for permanent public access. However, improved information technology (IT) systems such as those contained in the Public Printer’s vision are not simple to develop or acquire. Through our research of best IT management practices and our evaluations of agency IT management performance, we have identified a set of essential and complementary management disciplines that provide a sound foundation for IT management. These include enterprise architecture, investment management, software/system development and acquisition, IT security, and IT human capital. GPO’s CIO understands that his IT organization, like all of GPO, will have to transform to meet current and future needs. More specifically, he acknowledges the need to establish IT management policies, procedures, and practices in each of these key areas. The CIO has taken steps, or plans to take steps, to begin improving IT in each of these areas.

Enterprise Architecture

An enterprise architecture is to an organization’s operations and systems as a set of blueprints is to a building. That is, building blueprints provide those who own, construct, and maintain the building with a clear and understandable picture of the building’s uses, features, functions, and supporting systems, including relevant building standards. Further, the building blueprints capture the relationships among building components and govern the construction process.
Enterprise architectures do nothing less, providing to people at all organizational levels an explicit, common, and meaningful structural frame of reference that allows an agency to understand (1) what the enterprise does; (2) when, where, how, and why it does it; and (3) what it uses to do it. An enterprise architecture provides a clear and comprehensive picture of the structure of an entity, whether an organization or a functional or mission area. This picture consists of snapshots of the enterprise’s current or “as-is” technical and operational environments, its target or “to-be” technical and operational environments, and a capital investment roadmap for transitioning from the current environment to the target environment. An enterprise architecture is an essential tool for effectively and efficiently engineering business practices, implementing and evolving supporting systems, and transforming an organization. Managed properly, it can clarify and help optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. Employed in concert with other important management controls, such as portfolio-based capital planning and investment control processes, architectures can greatly increase the chances that organizations’ operational and IT environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an enterprise architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. The development of an enterprise architecture is an essential part of a successful organizational transformation.
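The as-is/to-be/roadmap structure described above can be sketched as a small data model. This is an illustrative sketch only: the class names, fields, and the example business processes and systems are assumptions for illustration, not GPO's actual architecture artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    """One view of the enterprise at a point in time (as-is or to-be)."""
    business_processes: list[str]
    systems: list[str]

@dataclass
class TransitionStep:
    """One capital investment on the roadmap from as-is to to-be."""
    description: str
    estimated_cost: float  # hypothetical dollar figure

@dataclass
class EnterpriseArchitecture:
    as_is: Snapshot                      # current technical/operational environment
    to_be: Snapshot                      # target technical/operational environment
    roadmap: list[TransitionStep] = field(default_factory=list)

    def gap(self) -> set[str]:
        """Systems in the target environment not yet present today."""
        return set(self.to_be.systems) - set(self.as_is.systems)

# Hypothetical example loosely modeled on the vision described in this report.
ea = EnterpriseArchitecture(
    as_is=Snapshot(["print procurement"], ["legacy composition system"]),
    to_be=Snapshot(["digital dissemination"], ["digital content repository"]),
    roadmap=[TransitionStep("pilot digital content repository", 1_000_000.0)],
)
print(sorted(ea.gap()))  # systems the roadmap must deliver
```

The `gap` method illustrates why both snapshots are needed: the difference between them is what the capital investment roadmap must account for.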
Our research has shown that an organization should ensure that adequate resources are provided for developing the architecture and that responsibility for directing, overseeing, and approving enterprise architecture development is assigned to a committee or group with representation from across the organization. Establishing this organizationwide responsibility and accountability is important in demonstrating the organization’s commitment to building the management foundation and obtaining support for the development and use of the enterprise architecture from across the organization. This group should include executive-level representatives from each line of business, and these representatives should have the authority to commit resources to architecture-related efforts and enforce decisions within their respective organizational units. Our research shows that enterprise architecture efforts also benefit from developing an architecture program management plan that specifies how and when the architecture is to be developed, including a detailed work breakdown structure, resource estimates (e.g., funding, staffing, and training), performance measures, and management controls for developing and maintaining the architecture. The plan demonstrates the organization’s commitment to managing enterprise architecture development and maintenance. Currently, GPO does not have such an enterprise architecture. Its CIO agrees that an enterprise architecture is an important tool and is working to develop one for GPO. As the first step toward developing an enterprise architecture, the CIO organization is in the process of documenting GPO’s current business processes and supporting IT architecture (the “as-is” enterprise architecture). In doing this work, the agency is focusing first on those business items of greater interest to two sets of critical customers: the Congress and users of the Federal Register.
The CIO has also hired a manager with significant experience in the development and institutionalization of enterprise architecture and related processes to lead this effort.

Investment Management

In concert with a properly developed and institutionalized enterprise architecture, an effective and efficient IT investment management process is key to a successful transformation effort. An effective and efficient IT investment process allows agencies to maximize the value of their IT investments and to minimize the risks of IT acquisitions. This is critically important because IT projects, while having the capability to significantly improve an organization’s performance, can become very costly, risky, and unproductive. Federal agency IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. GPO’s transformation may require significant investment in IT and related efforts. Therefore, it is essential that GPO effectively manage such investments. We have developed a guide to effective IT investment management based on a select/control/evaluate model:

- Select. The organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that best support its mission needs. This process should be repeated in each selection cycle, with even ongoing investments reselected, as described below.
- Control. The organization ensures that, as projects develop and investment expenditures continue, the project continues to meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. If mission needs have changed, the organization can adjust its objectives for the project and appropriately modify expected project outcomes.
- Evaluate. The organization compares actual versus expected outcomes after a project is fully implemented.
This is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. To oversee the investment management process, an investment review board made up of managers is established that is responsible and accountable for selecting and monitoring projects based on the agency’s investment management criteria. The IT investment board is a key component in the investment management process. An organizationwide investment board has oversight responsibilities for developing and maintaining the organization’s documented IT investment process. It plays a key role in establishing an appropriate IT investment management structure and processes for selecting, controlling, and evaluating IT investments. The organization may choose to make this board the same board that provides executive guidance and support for the enterprise architecture. Such overlap of responsibilities may enhance the ability of the board to ensure that investment decisions are consistent with the architecture and that the architecture reflects the needs of the organization. This model allows an organization to effectively choose, monitor, and evaluate projects. GPO intends to complete a major transformation of itself within a few years, with most of its transformation based on improved IT capabilities. For an organization like GPO, in the midst of transformation, effective oversight of its IT investments is essential. Currently, GPO does not have an IT investment management process. GPO’s CIO said that his review of projects at GPO indicated that in the past, for example, most projects were selected without documentation such as a cost-benefit analysis, economic justification, alternatives analysis, and fully validated requirements. The CIO’s long-term goal is to implement a standard investment management process requiring such items.
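The select/control/evaluate cycle can be sketched as three small functions. This is a minimal illustration of the model's logic, not the guide's actual methodology: the project attributes, the return-per-unit-risk ranking, and the 10 percent cost-variance threshold are all assumptions chosen for the example.

```python
def select(candidates, budget):
    """Select phase: rank proposals by return per unit risk; fund within budget.
    Repeated each cycle, with ongoing investments reselected alongside new ones."""
    ranked = sorted(candidates, key=lambda p: p["return"] / p["risk"], reverse=True)
    funded, spent = [], 0.0
    for p in ranked:
        if spent + p["cost"] <= budget:
            funded.append(p)
            spent += p["cost"]
    return funded

def control(project, actual_cost):
    """Control phase: flag a project whose spending exceeds expected cost
    (here by more than an assumed 10%) so corrective steps are taken."""
    return "continue" if actual_cost <= project["cost"] * 1.10 else "corrective action"

def evaluate(project, actual_return):
    """Evaluate phase: compare actual versus expected outcomes after
    implementation to feed lessons learned back into the process."""
    return {"project": project["name"], "variance": actual_return - project["return"]}

# Hypothetical proposals; costs/returns/risks are illustrative units.
proposals = [
    {"name": "digital repository", "cost": 5.0, "return": 9.0, "risk": 2.0},
    {"name": "legacy upgrade", "cost": 4.0, "return": 4.0, "risk": 3.0},
]
funded = select(proposals, budget=6.0)
print([p["name"] for p in funded])            # only the best proposal fits the budget
print(control(funded[0], actual_cost=6.5))    # cost overrun triggers corrective action
print(evaluate(funded[0], actual_return=8.0)) # variance feeds lessons learned
```

In practice, an investment review board, not a ranking function, makes these decisions; the sketch only shows how the three phases connect into one repeatable cycle.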
As a beginning, he is working on a list of required documents that each new information technology proposal will have to provide before it can be approved by GPO management. He is holding workshops aimed at introducing the business managers to this documentation and providing training in the meaning and use of these documents in support of project initiation.

Software/System Development and Acquisition Capability

Underlying enterprise architecture management and investment management is the ability to effectively and efficiently develop and acquire systems and software. GPO’s CIO is aware that his organization’s development and acquisition capabilities could be improved and plans to take steps to achieve these improvements. The CIO has tasked one of his new managers to begin improving key process areas for software development and acquisition. On the basis of this manager’s recommendation, GPO has selected the software acquisition models of the Institute of Electrical and Electronics Engineers (IEEE) as its standard for this process. The IEEE defines a nine-step process for the acquisition of software, which GPO plans to implement for future software acquisitions. The CIO also plans to implement a process for software development, but has not determined which model to use.

IT Security

Dramatic increases in computer interconnectivity, especially in the use of the Internet, are revolutionizing the way our government, our nation, and much of the world communicate and do business. The benefits from this have been enormous. However, this widespread interconnectivity poses significant risks to computer systems and, more importantly, to the critical operations and infrastructures they support, such as telecommunications, power distribution, and national defense.
The same factors that benefit operations—speed and accessibility—if not properly controlled, also make it possible for individuals and organizations to inexpensively interfere with or eavesdrop on these operations from remote locations for purposes of fraud or sabotage, or for other malicious or mischievous purposes. In addition, natural disasters and inadvertent errors by authorized computer users can have devastating consequences if information resources are poorly protected. As GPO transforms, its information resources will become increasingly dependent upon correctly functioning IT. Whereas in the past, GPO needed only to have several paper copies of each document available, greater security measures will be required as GPO implements a database for permanent public access to all federal government information. GPO’s current information security could be improved. In fiscal year 2003, an independent audit of GPO’s internal controls, done as part of a review of GPO’s financial statements, found that GPO did not have in place an effective security management structure that provides a framework and continuing cycle of activity for managing risk, developing security policies, and monitoring the accuracy of GPO’s computer security controls. Among the specific findings:

- Security-related policies and procedures had not been documented or had not been kept current and did not reflect GPO’s current environment. These policies also provided no guidance for developing risk assessment programs.
- Local network administrators did not have guidance in developing formal procedures to perform network administration duties, such as creating and maintaining user accounts, periodically reviewing user accounts, and reviewing audit logs.
- GPO had not established a comprehensive business continuity and disaster recovery plan for its mainframe, client-server platforms, and major software applications.
The CIO has ongoing projects aimed at addressing each of the issues outlined by the audit organization. First, he is in the process of issuing new security-related policies and procedures reflecting GPO’s current environment. The CIO’s security organization is also working on policies, procedures, and guidance for local network administrators. Finally, GPO is negotiating with other legislative branch agencies to use their backup computer facility and is developing a business continuity and disaster recovery plan for GPO’s platforms and major software applications.

IT Human Capital

As mentioned earlier, the Public Printer has emphasized the importance of strategically managing GPO’s people in order to successfully transform the organization. The CIO, like other GPO managers, considers human capital a vital part of his organization’s operations as well as critical to the success of GPO’s transformation efforts. The CIO and his managers are reviewing the current human capital situation and taking interim steps to improve it. At the same time, they are working with GPO’s human capital organization to develop a strategy to improve GPO’s human capital management capability for the long term. For example, the CIO has tasked each IT area manager to complete such a review of his or her staff and report the results to the CIO. While a few needed skill sets will be hired from the outside, the emphasis for the CIO organization in the near term will be on finding needed skills inside the organization and retraining individuals with related skills. The CIO is also providing training to his staff on project management and related issues.

Recommended Next Steps

Like efforts in other parts of GPO, the CIO’s actions to improve GPO’s IT capabilities are important first steps.
However, much more needs to be done to establish an effective IT investment process, to establish an enterprise architecture, and to improve the agency’s system development and acquisition, security, and human capital capabilities. Therefore, we recommend that the Public Printer direct the GPO CIO to do the following.

Begin an effort to create and implement a comprehensive plan for the development of an enterprise architecture that addresses completion of GPO’s current or “as-is” architecture, development of a target or “to-be” architecture, and development of a capital investment plan for transitioning from the current to the target architecture. As part of the capital investment plan, designate an architecture review board of agency executives who are responsible and accountable for overseeing and approving architecture development and maintenance, and establish an enterprise architecture program management plan.

Begin an effort to develop and implement an investment management process by (1) developing guidance for the selection, control, and evaluation processes and then (2) establishing an investment review board responsible and accountable for endorsing the guidance, monitoring its implementation, and executing decisions on projects based on the guidance.

Develop and implement a comprehensive plan for software development and acquisition process improvement that specifies measurable goals and time frames, sets priorities for initiatives, estimates resource requirements (for training staff and funding), and defines a process improvement management structure.

Establish the appropriate security and business continuity policies, procedures, and systems to ensure that its information products are adequately protected.

Ensure that GPO’s Human Capital Office, in its efforts to develop and implement a human capital strategy, considers the special needs of IT human capital.
Financial Management’s Role in Supporting Transformation Sound financial management practices that produce reliable and timely financial information for management decision making are a vital part of a strategic plan to achieve transformation. In recent testimony the Public Printer acknowledged that GPO is in a precarious financial position with sustained significant financial losses over the past 5 years, which appear to be structural in nature. Such structural losses point out the clear need for transformation. In response to GPO’s financial condition, the Public Printer has taken positive, immediate steps to stem losses, cut costs, and curtail certain program activities. Given the importance of GPO’s business transformation, it is imperative that transformation efforts be clearly linked to financial management results and receive the sustained leadership needed to improve the economy, efficiency, and effectiveness of GPO’s business operations through its transformation plan. The transformation plan should provide a strategic-level “road map” from the current environment to the planned future environment, including a link to current cost-cutting and other financial improvement initiatives. In addition, management needs reliable and up-to-date information on progress, including financial results. As discussed in our executive guide on best practices in financial management, dramatic changes over the past decade in the business environment have driven finance organizations to reevaluate their role. The role of financial management and reporting will be critical to managing the progress and impact of GPO’s transformation efforts. In the transformation environment, GPO will need to define a vision for its financial management organization such that it is a value-creating, customer-focused partner in business results in order to build a world-class finance organization and to help achieve GPO’s transformation goals. 
As reported and shown in figure 6, certain success factors, goals, and practices are instrumental in achieving financial management excellence. The figure groups 11 practices under four goals:

Make financial management an entitywide priority: (1) build a foundation of control and accountability; (2) provide clear, strong executive leadership; (3) use training to change the culture and engage line managers.

Redefine the role of finance: (4) assess the finance organization’s current role in meeting mission objectives; (5) maximize the efficiency of day-to-day accounting activities; (6) organize finance to add value.

Provide meaningful information to decision makers: (7) develop systems that support the partnership between finance and operations; (8) reengineer processes in conjunction with new technology; (9) translate financial data into meaningful information.

Build a team that delivers results: (10) develop a finance team with the right mix of skills and competencies; (11) build a finance organization that attracts and retains talent.

We compared best practices that would be most applicable to GPO’s financial management operations and transformation efforts to many of the activities and goals planned by GPO. Overall, GPO and its CFO have taken many actions and have plans for efforts that are consistent with many best practices in financial management; however, additional emphasis is needed on other best practices to enhance GPO’s transformation and to ensure that planned efforts are fully supported. In addition, GPO’s strategic planning for transformation should include the actions, plans, and goals to be initiated by its CFO and its financial management team to ensure that GPO’s weakening financial position does not undermine its transformation goals. Making Financial Management an Entitywide Priority Our prior report observes that the chief executive should recognize the important role the finance organization plays in improving overall business performance and involve key business managers in financial management improvement initiatives. This is especially important in a transformation environment.
In order to make financial management an entitywide priority, the organization should (1) build a foundation of control and accountability, (2) provide clear strong executive leadership, and (3) use training to change the culture and engage line managers. GPO has shown that financial reporting and the audit process are important management and oversight tools for building a foundation of control and accountability by routinely receiving “unqualified” audit opinions on its annual financial statements. In addition, GPO receives an opinion on its management assertion on internal controls from its external auditor. Additional accountability is provided through oversight from the GPO Office of Inspector General established by Title 44, U.S. Code, section 3901. Also, GPO has expanded reported financial information beyond audited financial statements to include performance information on revolving fund operations such as printing and binding operations, purchased printing, and procured printing. Our executive guide on best practices in financial management also recognizes that the chief executive officers of leading organizations understand the important role that the CFO and the finance organization play in improving overall business performance of the organization. Consequently, the CFO is a central figure on the top management team and heavily involved in strategic planning and decision making. In this regard, the Public Printer established the CFO position shortly after arriving at GPO and has included the CFO as a member of the management council. The key to successfully managing change and changing organizational culture is gaining the support of line management. To change the organizational culture and enlist the support of line managers, many organizations use training programs. This training may be geared towards providing line managers with a greater appreciation of the financial implications of their business decisions and transformation efforts. 
GPO has engaged its nonfinancial managers with financial-related goals. For example, customer services, which includes purchased printing, has a goal of increasing revenue by identifying potential government work and increasing business to GPO. As discussed earlier in this report, account managers are assigned to increase revenue from federal agencies through regular customer agency visits, presentations at selected agencies to highlight GPO services, and targeting customers for specialized outreach efforts. GPO could place greater emphasis on training its nonfinancial managers on the financial implications of business decisions and the value of financial information. Training on how to fully use the financial information they receive not only produces better managers, but also helps break down functional barriers that can affect productivity and impede improvement efforts, especially in a time of transformation. In addition, training and other tools facilitate and accelerate the pace of the change initiative, which helps to reduce the opposition that could ultimately undermine the effort. Redefine the Role of Finance Today, leading finance organizations are focusing more on internal customer requirements by providing products and services that directly support strategic decision making and ultimately improve overall business performance. Again, this is critical in a transformation environment. Best practices reported by our prior review of leading financial organizations include actions to (1) assess the finance organization’s current role in meeting mission objectives, (2) maximize the efficiency of day-to-day accounting activities, and (3) organize finance to add value.
Consistent with best practices for redefining financial operations, GPO has plans to integrate on-line workflow systems for all major operations including the receipts and processing operations, to streamline the budget formulation process, and to eliminate all paper-based accounting and budget reports. Customer feedback is also useful, both to assess the perceived benefits of transformation-related changes and to serve as a baseline against which future changes can be compared. The CFO plans to establish a baseline of information on customer satisfaction based on our survey of GPO’s major customers. This includes a planned assessment of customer satisfaction with services provided, identification of areas for improvement, implementation of plans to increase the value and efficiency of services provided, and identification of key performance measures. Provide Meaningful Information to Decision Makers Financial information is meaningful when it is useful, relevant, timely, and reliable. Therefore, organizations should have the systems and processes required to produce meaningful financial information needed for management decisions. Financial organizations should (1) develop systems that support the partnership between finance and operations, (2) reengineer processes in conjunction with new technology, and (3) translate financial data into meaningful information. Our executive guide for best practices in financial management suggests that relevant financial information should be presented in an understandable, simple format, with suitable amounts of detail showing the financial impact and results of cost-cutting initiatives and transformation efforts. Leading finance organizations have designed reporting formats around key business drivers to provide executives and managers with relevant, forward-looking information on business unit performance. We believe that such reports can be a key to linking GPO’s financial management efforts to transformation.
GPO provides financial information to its key decision makers that is consistent with best practices for reports that are useful and relevant to key decision makers. The GPO CFO provides monthly summaries for each of GPO’s key operational areas, including plant operations, customer service, sales program, salaries and expenses programs, and administrative support operations, as well as other information on the status of appropriated funds, billings, and contractors. The information includes cumulative year-to-date summaries, profit and loss statements, use of employees and staff levels, and other information specific to each operational area. The CFO organization is developing plans to provide financial, administrative, and analytical support to all of GPO in addition to the monthly information packages provided to the management council. GPO is also developing plans to replace legacy information systems to integrate and streamline internal and external ordering as well as inventory and accounts payable processes. GPO expects to greatly improve the monthly financial processes with information necessary to make calculations regarding time spent on performance or cost analysis and on transaction processing. This information can be useful in gauging office efficiencies as a result of changes and transformation efforts. Build a Team That Delivers Results The finance function has evolved over the past decade from a paper-driven, labor-intensive, clerical role to a more consultative role as advisor, analyst, and business partner. Many leading finance organizations have seen a corresponding shift in the mix of skills and competencies required to perform this new role. GPO has plans and goals that are consistent with best practices for financial organizations. GPO should ensure that these plans are completed, fully supported, and expanded, especially in light of the critical function that finance will play in GPO’s transformation efforts. 
Specifically, the CFO is completing input for training plans that include both skill and education assessments of administrative support staff, budget operations staff, and staff in the Office of Comptroller. The CFO stated that he is directly involved in recruiting talented staff for GPO’s financial operations and is coordinating with the human resource office on developing a career path and opportunities for rotational assignments for financial-related staff. While these planned and developed efforts are consistent with best practices for financial organizations, GPO should keep focused on the need to ensure that its financial professionals are equipped to meet new challenges and support their agency’s mission and goals. This requires GPO to develop a finance team with the right mix of skills and competencies to play the role needed in GPO’s transformation efforts. GPO has taken actions through ongoing efforts and planned goals that are often consistent with best practices for financial management. Nevertheless, unless it includes critical financial management activities in its strategic plans for transformation, GPO risks undermining its ultimate goal of successful transformation. Without the link to transformation, GPO may lack the commitment to sustain sound financial management and lose the benefit of best practices that may be used as tools to assist decision makers during a period of great change.
emphasize training on the usefulness and understanding of financial information to nonfinancial managers who are critical to GPO’s business operations;

ensure that planned GPO and CFO efforts and goals in redefining the role of finance, providing information to decision makers, and building a team that delivers results receive the full and consistent support of GPO’s top management;

ensure that management is receiving the financial information needed to manage day-to-day operations and track progress against transformation goals; and

recognize the importance of financial management and reporting in strategic plans for transformation.

Concluding Observations The Public Printer has taken action to transform GPO in response to changes in the environment for printing and information dissemination. Change is not optional for GPO—it is required, and it is driven by declines in GPO’s printing volumes, printing revenues, and document sales. The panel we convened of printing and information dissemination experts identified options for GPO’s future that focused on GPO’s role in information dissemination rather than printing. GPO leadership is using the panel’s suggestions to inform its strategic plan and set a direction for the agency’s transformation. We have noted that setting a clear direction for the future is vital to GPO’s transformation. GPO’s draft strategic plan is to be completed imminently; however, its transformation efforts are at a critical juncture, and GPO leadership will need to take further actions to strengthen and sustain GPO’s transformation by using the nine key practices that we identified to help agencies successfully transform. One of these practices, related to ensuring that top management drives the transformation, has already been fully applied by GPO’s leadership. Our recommendations, outlined in this report, will assist GPO with the implementation of the eight practices where GPO’s efforts are still under way.
GPO leadership has articulated a vision to transform GPO into a world- class organization and has taken some initial steps toward this objective. GPO is actively implementing our prior recommendations to strengthen strategic human capital management and has also taken steps toward improving information technology and information technology management. GPO could build on this progress by focusing additional leadership attention on adopting best practices in these areas. Agency Comments and Our Evaluation We provided a draft of this report on June 9, 2004, to the Public Printer for review and comment. We received written comments from the Public Printer, which are reprinted in appendix II. The Public Printer agreed with the content, findings, and recommendations of the draft report. In his written comments, the Public Printer stated that this report, together with our October 2003 report on human capital management, will support many future actions that are necessary to bring about a successful transformation of GPO. For example, GPO will use our recommendations, along with the panel’s suggestions, to develop a customer service model that partners with GPO’s agency customers to meet their publishing needs. Further, the Public Printer said that he fully agrees with our assessment of GPO’s human capital environment and will make significant investments in workforce development in order to train existing employees in the skills required for 21st century printing and information processing. In addition, he added that GPO is moving toward becoming a world-class organization in both financial management and information technology management by adopting leading business practices. GPO also provided minor technical clarifications, which we incorporated as appropriate in this report. 
We are sending copies to the Public Printer, as well as the Joint Committee on Printing, the House Appropriations Legislative Subcommittee, the Senate Committee on Rules and Administration, and the House Committee on Administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact J. Christopher Mihm or Steven Lozano on (202) 512-6806 or [email protected] and [email protected]. Questions concerning the expert panel, the survey of executive branch agencies, and information technology issues should be directed to Linda Koontz at (202) 512-6240 or Tonia Johnson at (202) 512-6447 or [email protected] and [email protected]. Questions about GPO’s financial management should be directed to Jeanette Franzel at (202) 512-9471 or Jack Hufnagle at (202) 512-9470 or [email protected] or [email protected]. Other contributors to this report were Barbara Collier, Benjamin Crawford, William Reinsberg, Amy Rosewarne, and Warren Smith. Scope and Methodology To help explore the options for the future for the Government Printing Office (GPO), we contracted with the National Academy of Sciences to convene a panel of experts to discuss (1) trends in printing, publishing, and dissemination and (2) the future role of GPO. In working with the National Academy to develop an agenda for the panel sessions, we consulted with key officials at GPO, representatives of library associations, including the Association of Research Libraries and the American Library Association, and other subject matter experts. The National Academy assembled a panel of experts on printing and publishing technologies, information dissemination technologies, the printing industry, and trends in printing and dissemination. This panel met on December 8 and 9, 2003.
To obtain information on GPO’s printing and dissemination activities— including revenues and costs—we collected and analyzed key documents and data, including laws and regulations; studies of GPO operations; prior audits; historical trends for printing volumes and prices; and financial, budget, and appropriations reports and data. We did not independently verify GPO’s financial information, but did perform limited tests of the work performed by external auditors. We also interviewed appropriate officials from GPO, the Library of Congress, and the Office of Management and Budget. To determine how GPO collects and disseminates government information, we collected and analyzed documents and data on the depository libraries, the cataloging and indexing program, and the International Exchange Service program. We also interviewed appropriate officials from GPO. To determine executive branch agencies’ current reported printing expenditures, equipment inventories, and preferences, familiarity and level of satisfaction with services provided by GPO, and current methods for disseminating information to the public, we developed two surveys of GPO’s customers in the executive branch. We sent our first survey to executive agencies that are major users of GPO’s printing programs and services. It contained questions on the department’s or agency’s (1) familiarity with these programs and services and (2) level of satisfaction with the customer service function. These major users, according to GPO, account for the majority of printing done through GPO. We sent one survey each to 7 independent agencies and 11 departments that manage printing centrally. We also sent one survey each to 15 component agencies within 3 departments that manage printing in a decentralized manner. A total of 33 departments and agencies were surveyed. The response rate for the user survey was 91 percent (30 of 33 departments and agencies). 
We sent our second survey to print officers who manage printing services for departments and agencies. These print officers act as liaisons to GPO and manage in-house printing operations. This survey contained questions concerning the department’s or agency’s (1) level of satisfaction with GPO’s procured printing and information dissemination functions; (2) printing preferences, equipment inventories, and expenditures; and (3) information dissemination processes. These agencies include those that were sent the user survey plus two others that do not use GPO services. We sent this survey to 11 departments that manage printing centrally, 15 component agencies within 3 departments that manage printing in a decentralized manner, and 9 independent agencies. A total of 35 departments and agencies were surveyed. The response rate for the print officer survey was 83 percent (29 of 35 departments and agencies). To develop these survey instruments, we researched executive agencies’ printing and dissemination issues with the assistance of GPO’s Customer Services and Organizational Assistance Offices. We used this research to develop a series of questions designed to obtain and aggregate the information that we needed to answer our objectives. After we developed the questions and created the two survey instruments, we shared them with GPO officials. We received feedback on the survey questions from a number of internal GPO organizations including Printing Procurement, Customer Services, Information Dissemination, and Organizational Assistance. We pretested the executive branch surveys with staff at the Department of Transportation and the Environmental Protection Agency. We chose these agencies because each had a long-term relationship with GPO, experience with agency printing, and familiarity with governmentwide printing and dissemination issues. Finally, we reviewed customer lists to determine the appropriate agencies to receive the executive branch surveys. 
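The reported response rates are simple whole-percentage ratios of returned to distributed surveys. As an illustration only (the helper function name is ours, not GAO's), the figures above can be reproduced with a short calculation:

```python
# Response-rate arithmetic for the two surveys described above.
# Figures come from the report: 30 of 33 user surveys and
# 29 of 35 print-officer surveys were returned.

def response_rate(returned: int, sent: int) -> int:
    """Return the response rate as a whole percentage."""
    return round(returned / sent * 100)

user_rate = response_rate(30, 33)           # user survey: 91 percent
officer_rate = response_rate(29, 35)        # print-officer survey: 83 percent

print(user_rate, officer_rate)
```

Both results match the percentages cited in the report.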
We did not independently verify agencies’ responses to the surveys. To assess GPO’s actions and plans for the transformation, we reviewed statements by the Public Printer, Superintendent of Documents, and other senior leaders; analyzed draft performance agreements, employee surveys, communication plans, and strategic planning documents; GPO policies and procedures; organizational charts; audited financial statements; information from GPO’s intranet; communications with employees from the Employee Communications Office and Public Relations; and other relevant documentation. To obtain additional information and perspectives on GPO’s transformation issues, we interviewed key senior GPO officials, including the Deputy Public Printer; Chief Operating Officer; Chief of Staff; Deputy Chief of Staff; Superintendent of Documents; Deputy Superintendent of Documents; Managing Director of Plant Operations; Managing Director of Customer Services; the former and Acting Chief Human Capital Officer; Chief Financial Officer; Chief Information Officer; and Director, Office of Innovations and New Technology. We also interviewed GPO officials at the next level of management responsible for information dissemination, customer service, and human capital. To get employee perspectives, we spoke with union leaders, attended town hall meetings, and analyzed results of the employee survey and focus groups held by the Human Capital Office. In addition, we visited the Pueblo, Colorado, Document Distribution Center to talk with frontline managers about their views of the transformation. We used the practices presented in our report Results Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations, GAO-03-669, to guide our analysis of the actions taken by GPO to transform. 
We developed the recommended next steps by referring to our other models, guides, reports, and products on transforming organizations, strategic human capital management, and best practices for information technology and financial management, and by identifying additional practices that were associated with and would further complement or support current GPO efforts. We performed our work from March 2003 through June 2004. During this time we worked cooperatively with GPO leaders, meeting regularly with them about the progress of their transformation initiatives and providing them with information that they plan to use to develop GPO’s strategic plan and strengthen management. Because of this collaborative, cooperative approach, we determined that our work in response to the mandate could not be considered an audit subject to generally accepted government auditing standards. However, in our approach to the work, we followed appropriate quality control procedures consistent with the generally accepted standards. For the general management review examining GPO’s transformational efforts, we did follow generally accepted government auditing standards.

Comments from the Government Printing Office

Executive Agency Satisfaction with GPO Services

Agencies responding to our surveys were generally satisfied with the Government Printing Office (GPO) and its services. Many agencies rated certain services favorably: 18 of 19 that use electronic publishing services rated these as average or above; 16 of 17 that use large-format printing services rated these as average or above; and 16 of 17 that use services to convert products to electronic format rated these as average or above.
However, a few of the responding agencies suggested areas in which GPO could improve, as the following examples illustrate: 7 of 23 that use financial management services (such as billings, payments, and automated transfers) rated these as below average or poor; 3 of 10 that use Web page design/development rated it as below average; and 5 of 24 that use the Federal Depository Library Program rated it as below average or poor. Table 5 (repeated from the body of the report) summarizes (1) agency users’ levels of satisfaction with GPO’s services and (2) products and services that they do not use. As the table shows, in responding to questions on customer satisfaction, some agencies indicated that they did not use certain electronic services: 18 of 28 do not use Web hosting and Web page design/development, 11 of 28 do not use services to convert products to electronic format, and 9 of 28 do not use electronic publishing services. Some responding agencies identified other services that they did not use: 19 of 28 do not use reimbursable storage and distribution services, 15 of 28 do not use archiving and storage services, 12 of 28 do not use custom-finishing services, 10 of 27 do not use large-format printing services, and 10 of 28 do not use preflighting services. In addition, we asked agencies about specific GPO services, which are reported in the sections that follow. Level of Satisfaction with GPO Term Contracts Most of the responding agencies’ print officers were generally satisfied with GPO’s Print Procurement Term Contracts organization—the group that awards and manages long-term multiple print contracts. All print officers responding to our survey rated this organization as average or above in the following areas: cost of products and services, and knowledge of products and services.
Among the few less-than-average ratings were presentation of new products and services—4 of 20 rated GPO’s performance below average, timeliness—3 of 23 rated GPO’s performance below average, and responsiveness to customer needs—2 of 24 rated GPO’s performance below average. Table 6 shows the specific responses. Level of Satisfaction with GPO Procurement Purchasing Most of the responding agencies’ print officers also were generally satisfied with GPO’s Print Procurement Purchasing organization—the organization that manages one-time print procurements. Among the areas in which the organization was highly rated were ability to solve problems—all ratings were average or above, accessibility by phone—all ratings were average or above, and communication skills—all ratings were average or above. Among the few less-than-average ratings were presentation of new products and services—4 of 16 rated this below average, responsiveness to customer needs—2 of 19 rated this below average, and timeliness—1 of 19 rated this below average. Table 7 shows the specific responses. Level of Satisfaction with GPO Regional Print Procurement Most of the responding agencies’ print officers were satisfied with GPO’s regional print procurement organizations, which manage print contracting for agency organizations outside of Washington, D.C. Among the areas in which these organizations were favorably rated were ability to solve problems—all ratings were average or above, accessibility by phone—all ratings were average or above, and accuracy of information—all ratings were average or above. Among the few less-than-average ratings were presentation of new products and services—2 of 15 rated this below average, and product and services knowledge—1 of 21 rated this below average. Table 8 shows the specific responses. Level of Satisfaction with GPO Information Dissemination Most of the responding agencies’ print officers were generally satisfied with GPO’s information dissemination.
Among the areas in which this function was favorably rated were courtesy—all rated average or above, product and/or service knowledge—all rated average or above, and professionalism—all rated average or above. Among the few less-than-average ratings were presentation of new products and services—3 of 13 rated this below average, and accessibility by phone—3 of 22 rated this poor. Table 9 shows the specific responses. Level of Satisfaction with GPO Customer Services Most responding agencies were generally satisfied with the Customer Services program. Among the areas rated as average or above were ability to solve problems and professionalism. Among the few less-than-average ratings were presentation of new products and services—9 of 25 rated below average or poor, cost of products and services—4 of 26 rated below average or poor, and timeliness and responsiveness to customer needs—2 of 28 rated below average. Table 10 shows the specific responses. Most of the responding agencies were generally satisfied with their most recent experience with this program. Specifically, 27 of 29 were able to reach a customer service representative, and 27 of 29 felt that the customer service representatives were helpful. Among the few less-than-positive ratings, 7 of 29 strongly agreed or agreed that additional contact was required, 3 of 28 disagreed that their complaint was resolved in a timely manner, and 3 of 29 disagreed that their question was answered in a timely manner. Table 11 shows the specific responses.

Panel of Experts

Prudence S. Adler, Associate Executive Director, Federal Relations and Information Policy, Association of Research Libraries
Jamie Callan, Associate Professor, School of Computer Science, Carnegie Mellon University
Bonnie C. Carroll, President and Founder, Information International Associates, Inc.
Gary Cosimini, Business Development Director, Creative Pro Product Group, Adobe Systems Incorporated
John S. Erickson, Principal Scientist, Digital Media Systems Lab, Hewlett-Packard Laboratories
Michael Jensen, Director of Web Communications, National Academies Press
P. K. Kannan, Associate Professor, Robert H. Smith School of Business, University of Maryland
Nick Kemp, Senior Vice President of Operations, Nature Publishing Group
William C. Lamparter, President and Principal, PrintCom Consulting Group
Craig Nevill-Manning, Senior Staff Research Scientist, Google Inc.

GAO’s Mission The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
| The transformation of the Government Printing Office (GPO) is under way. This report captures the results of our efforts over the past year to assess and help strengthen GPO's transformation and strategic planning efforts. It is the final part of GAO's response to both a mandate requiring GAO to examine the current state of printing and dissemination of public government information and a congressional request that we conduct a general management review of GPO focusing on its transformation and management. Federal government printing and dissemination are changing due to underlying changes in the technological environment. The Public Printer and his leadership team understand the effects of this technological change on GPO and have begun an ambitious effort to transform GPO and reexamine its mission. Federal agencies are publishing more documents directly to the Web and are doing more of their printing and dissemination of information without using GPO services. At the same time, the public is obtaining government information from government Web sites such as GPO Access rather than purchasing paper copies. As a result, GPO has seen declines in its printing volumes, printing revenues, and document sales. To assist in the transformation process under way at GPO, GAO convened a panel of printing and information dissemination experts, who developed a series of options for GPO to consider in its strategic planning.
The panel suggested that GPO (1) develop a business plan to focus its mission on information dissemination as its primary goal, rather than printing; (2) demonstrate to its customers the value it can provide; (3) improve and extend partnerships with agencies to help establish itself as an information disseminator; and (4) ensure that its internal operations are adequate for efficient and effective management of core business functions and for service to its customers. GPO can also use other key practices that GAO identified to help agencies successfully transform, such as involving employees to obtain their ideas and gain their ownership for the transformation. GPO fully applied one of these practices, related to ensuring that top management drives the transformation, and has partially implemented each of the remaining eight practices. To fully implement the remaining practices, GPO needs to take actions including establishing its mission and strategic goals and developing a documented plan for its transformation. GPO has taken some initial steps to adopt the best practices of other public and private sector organizations, most notably with respect to human capital management. GPO is actively implementing the recommendations GAO made in October 2003. For example, GPO reorganized the human capital office into customer-focused teams devoted to meeting the human capital needs of GPO's operating units. Continued leadership attention is needed to build on the initial progress made in information technology and financial management. For example, GPO should implement an information technology investment management process to help management choose, monitor, and evaluate projects, and GPO should train its line managers to effectively use financial data. |
Background Before turning to the results of our work in detail, let me briefly provide some background information and discuss the methodology we used for our study. During their deployment associated with the Persian Gulf War, many of the approximately 700,000 veterans of the Gulf War may have been exposed to a variety of potentially hazardous substances. These substances include compounds used to decontaminate equipment and protect it against chemical agents, fuel used as a sand suppressant in and around encampments, fuel oil used to burn human waste, fuel in shower water, leaded vehicle exhaust used to dry sleeping bags, depleted uranium, parasites, pesticides, drugs to protect against chemical warfare agents (such as pyridostigmine bromide), and smoke from oil well fires. Moreover, DOD acknowledged in June 1996 that some veterans may have been exposed to the nerve agent sarin following the postwar demolition of Iraqi ammunition facilities. Many of these veterans have complained of a wide array of symptoms and disabling conditions since the end of the war in 1991. Some fear that they are suffering from chronic disabling conditions because of exposure to chemicals, pesticides, and other agents used during the war with known or suspected health effects. Accordingly, both DOD and VA established programs through which Gulf War veterans could receive medical examinations and diagnostic services. From 1992 to 1994, VA participants received a regular physical examination with basic laboratory tests. In 1994, VA established a standardized examination to obtain information about exposures and symptoms related to diseases endemic to the Gulf region and to order specific tests to detect the “biochemical fingerprints” of certain diseases. If a diagnosis was not apparent, veterans could receive up to 22 additional tests and additional specialty consultations. 
In addition, if the illness defied diagnosis, the veterans could be referred to one of four VA Persian Gulf referral centers. DOD initiated its Comprehensive Clinical Evaluation Program in June 1994. It was primarily intended to provide diagnostic services similar to those of the VA program and employed a similar clinical protocol. However, the VA program was among the first extensive efforts to gather data from veterans regarding the nature of their problems and the types of hazardous agents to which they might have been exposed. Methodology To address our first evaluation question—the extent of DOD’s clinical follow-up and monitoring of treatment and diagnostic services—we reviewed literature and agency documents and conducted structured interviews with DOD and VA officials. We asked questions designed to identify and contrast their methods for monitoring the quality and outcomes of their treatment and diagnostic programs and the health of the registered veterans. The second objective concerns the coherence of the Persian Gulf Veterans Coordinating Board’s (PGVCB) research strategy. To answer this question, we conducted a systematic review of pertinent literature and agency documents and reports. We also interviewed representatives of PGVCB’s Research Working Group and officials of VA, DOD, and the Central Intelligence Agency. We surveyed primary investigators of ongoing epidemiological studies. Because different methodological standards apply to various types of research and because the overwhelming majority of federally sponsored research is categorized as epidemiological, we limited our survey to those responsible for ongoing epidemiological studies.
With the help of an expert epidemiological consultant, we devised a questionnaire to assess critical elements of these studies (including the quality of exposure measurement, specificity of case definition, and steps to ensure adequate sample size) and to identify specific problems that the primary investigators may have encountered in implementing their studies. We interviewed primary investigators for 31 (72 percent) of the 43 ongoing epidemiological studies identified by PGVCB in the November 1996 plan. We also reviewed and categorized descriptions of all 91 projects identified by April 1997, based on their apparent focus and primary objective. Finally, to review the progress of major ongoing research efforts, we visited the Walter Reed Army Institute of Research, the Naval Health Research Center, and two of VA’s Environmental Hazards Research Centers. To address the third objective, we reviewed major conclusions of the PGVCB and the Presidential Advisory Committee on Gulf War Veterans’ Illnesses to determine the strength of the evidence supporting them. The purpose of this review was not to critique PGVCB’s or the Presidential Advisory Committee’s efforts, per se, but rather to describe the amount of knowledge about Gulf War illnesses that has been generated by research 6 years after the war. We reviewed these conclusions because they are the strongest statements that we have come across on these matters by any official body. The Presidential Advisory Committee’s report was significant because the panel included a number of recognized experts who were assisted by a large staff of scientists and attorneys. In addition, the Committee conducted an extensive review of the research. Thus, we believed that evaluating these conclusions would provide important evidence about how fruitful the federal research has been thus far.
We addressed this objective by reviewing extant scientific literature and consulting experts in the fields of epidemiology, toxicology, and medicine. Because of the scientific and multidisciplinary nature of this issue, we ensured that staff conducting the work had appropriate backgrounds in the fields of epidemiology, psychology, environmental health, toxicology, engineering, weapon design, and program evaluation and methodology. In addition, we used in-house expertise in chemical and biological warfare and military health care systems. Also, medical experts reviewed our work. Moreover, we held extensive discussions with experts in academia in each of the substantive fields relevant to this issue. Finally, we talked to a number of the authors of the studies that we cited in this report to ensure that we correctly interpreted their findings and had independent experts review our draft report. Our work was conducted between October 1996 and April 1997 in accordance with generally accepted government auditing standards. DOD and VA Have No Systematic Approach to Monitoring Gulf War Veterans’ Health After Initial Examination Over 100,000 of the approximately 700,000 Gulf War veterans have participated in DOD and VA health examination programs. Of those veterans examined by DOD and VA, nearly 90 percent have reported a wide array of health complaints and disabling conditions. The most commonly reported symptoms in VA and DOD registries include fatigue, muscle and joint pain, gastrointestinal complaints, headache, skin rash, depression, neurologic and neurocognitive impairments, memory loss, shortness of breath, and sleep disturbances. Officials of both DOD and VA have claimed that regardless of the cause of veterans’ illnesses, veterans are receiving appropriate and effective symptomatic treatment. Both agencies have tried to measure or ensure the quality of veterans’ initial examinations through such mechanisms as training and standards for physician qualification.
However, these mechanisms do not ensure a given level of effectiveness for the care provided or permit identification of the most effective treatments. We found that neither DOD nor VA has mechanisms for monitoring the quality, appropriateness, or effectiveness of these veterans’ care or their clinical progress after the initial examination, and neither agency has plans to establish such mechanisms. VA officials involved in administering the registry program told us that they regarded monitoring the clinical progress of registry participants as a separate research project, and officials of DOD’s Comprehensive Clinical Evaluation Program made similar comments. We believe that such monitoring is important because (1) undiagnosed conditions are not uncommon among ill veterans, (2) treatment for veterans with undiagnosed conditions is based on their symptoms, and (3) veterans with undiagnosed conditions or multiple diagnoses may see multiple providers. Without follow-up of their treatment, DOD and VA cannot say whether these ill veterans are any better or worse today than when they were first examined. Federal Research Strategy Lacks a Coherent Approach Federal research on Gulf War veterans’ illnesses and factors that might have caused their problems has not been pursued proactively. Although these veterans’ health problems began surfacing in the early 1990s, the vast majority of research was not initiated until 1994 or later. Much of this research was associated with legislation or external reviewers’ recommendations. This 3-year delay has complicated the task facing researchers and has limited the amount of completed research currently available. Although at least 91 studies have received federal funding, over 70, or four-fifths, of the studies are not yet complete, and the results of some studies will not be available until after 2000. We found that some hypotheses received early emphasis, while others were not initially pursued.
While research on exposure to stress received early emphasis, research on low-level chemical exposure was not pursued until legislated in 1996. The failure to fund such research cannot be traced to an absence of investigator-initiated submissions. According to DOD officials, three recently funded proposals on low-level chemical exposure had previously been denied funds. We found that additional hypotheses were pursued in the private sector. A substantial body of research suggests that low-level exposure to chemical warfare agents or chemically related compounds, such as certain pesticides, is associated with delayed or long-term health effects. Regarding delayed health effects of organophosphates, the chemical family used in many pesticides and chemical warfare agents, there is evidence from animal experiments, studies of accidental human exposures, and epidemiological studies of humans that low-level exposures to certain organophosphorus compounds, including sarin nerve agents to which some of our troops may have been exposed, can cause delayed, chronic neurotoxic effects. It has been suggested that the ill-defined symptoms experienced by Gulf War veterans may be due in part to organophosphate-induced delayed neurotoxicity. This hypothesis was tested in a privately supported epidemiological study of Gulf War veterans. In addition to clarifying the patterns among veterans’ symptoms by use of statistical factor analysis, this study demonstrated that vague symptoms of the ill veterans are associated with objective brain and nerve damage compatible with the known chronic effects of exposures to low levels of organophosphates. It further linked the veterans’ illnesses to exposure to combinations of chemicals, including nerve agents, pesticides in flea collars, N,N-diethyl-m-toluamide (DEET) in highly concentrated insect repellents, and pyridostigmine bromide tablets. 
Toxicological research indicates that pyridostigmine bromide, which Gulf War veterans took to protect themselves against the immediate, life-threatening effects of nerve agents, may alter the metabolism of organophosphates in ways that activate their delayed, chronic effects on the brain. Moreover, exposure to combinations of organophosphates and related chemicals like pyridostigmine or DEET has been shown in animal studies to be far more likely to cause morbidity and mortality than any of the chemicals acting alone. We found that the bulk of ongoing federal research on Gulf War veterans’ illnesses focuses on the epidemiological study of the prevalence and cause of the illnesses. It is important to note that in order to conduct such studies, investigators must follow a few basic, generally accepted principles. First, they must specify diagnostic criteria to (1) reliably determine who has the disease or condition being studied and who does not and (2) select appropriate controls (people who do not have the disease or condition). Second, the investigators must have valid and reliable methods of collecting data on the past exposure(s) of those in the study to possible factors that may have caused the symptoms. The need for accurate, dose-specific exposure information is particularly critical when low-level or intermittent exposure to drugs, chemicals, or air pollutants is possible. It is important not only to assess the presence or absence of exposure but also to characterize the intensity and duration of exposure. We found that the ongoing epidemiological federal research suffered from two methodological problems: the lack of a case definition and the absence of accurate exposure data. Without valid and reliable data on exposures and the multiplicity of agents to which the veterans were exposed, researchers will likely continue to find it difficult to detect relatively subtle effects and to eliminate alternative explanations for Gulf War veterans’ illnesses.
Prevalence data can be useful, but such data require careful interpretation in the absence of better information on the factors to which veterans were exposed. While multiple federally funded studies of the role of stress in the veterans’ illnesses have been done, basic toxicological questions regarding the substances to which they were exposed remain unanswered. We found that federal researchers studying Gulf War illnesses have faced several methodological challenges and encountered significant problems in linking exposures or potential causes to observed illnesses or symptoms. For example: Researchers have found it extremely difficult to gather information about exposures to such things as oil well fire smoke and insects carrying infection. DOD has acknowledged that records of the use of pyridostigmine bromide and vaccinations to protect against chemical/biological warfare exposures were inadequate. Gulf War veterans were typically exposed to a wide array of agents, making it difficult to isolate and characterize the effects of individual agents or to study their combined effects. Most of the epidemiological studies on Gulf War veterans’ illnesses have relied only on self-reports for measuring most of the agents to which veterans may have been exposed. The information gathered from Gulf War veterans years after the war may be inaccurate or biased. There is often no straightforward way to test the validity of self-reported exposure information, making it impossible to separate bias in recalled information from actual differences in the frequency of exposures. As a result, findings from these studies may be spurious or equivocal. Classifying the symptoms and identifying illnesses of Gulf War veterans have been difficult. From the outset, symptoms reported by veterans have been varied and difficult to classify into one or more distinct illnesses. Moreover, several different diagnoses might provide plausible explanations for some of the specific health complaints.
It has thus been difficult to develop a case definition (that is, a reliable way to identify individuals with a specific disease), which is a criterion for doing effective epidemiological research. In summary, the ongoing epidemiological research will not be able to provide precise, accurate, and conclusive answers regarding the causes of veterans’ illnesses because of these formidable methodological problems. Support for Key Government Conclusions Is Weak or Subject to Alternative Interpretations Six years after the war, little is conclusively known about the causes of Gulf War veterans’ illnesses. In the absence of official conclusions from DOD and VA, we examined conclusions drawn in December 1996 by the Presidential Advisory Committee on Gulf War Veterans’ Illnesses. This Committee was established by the President to review the administration’s activities regarding Gulf War veterans’ illnesses. In January 1997, DOD endorsed the Committee’s conclusions about the likelihood that exposure to 10 commonly cited agents contributed to the explained and unexplained illnesses of these veterans. We found that the evidence to support three of these conclusions is either weak or subject to alternative interpretations. First, the Committee concluded that “stress is likely to be an important contributing factor to the broad range of illnesses currently being reported by Gulf War veterans.” While stress can induce physical illness, the link between stress and these veterans’ physical symptoms has not been firmly established. 
For example, a large-scale, federally funded study concluded that “for those veterans who deployed to the Gulf War and currently report physical symptoms, neither stress nor exposure to combat or its aftermath bear much relationship to their distress.” The Committee has stated that “epidemiological studies to assess the effects of stress invariably have found higher rates of posttraumatic stress disorder (PTSD) in Gulf War veterans than among individuals in nondeployed units or in the general U.S. population of the same age.” Our review indicated that the prevalence of PTSD among Gulf War veterans may be overestimated due to problems in the methods used to identify it. Specifically, the studies on PTSD to which the Committee refers have not excluded other conditions, such as neurological disorders that produce symptoms similar to PTSD and can also elevate scores on key measures of PTSD. Also, the use of broad and heterogeneous groups of diagnoses (e.g., “psychological conditions”—ranging from tension headache to major depression) in data from DOD’s clinical program may contribute to overestimation of the extent of serious psychological illnesses among Gulf War veterans. Second, the Committee concluded that “it is unlikely that infectious diseases endemic to the Gulf region are responsible for long-term health effects in Gulf War veterans, except in a small known number of individuals.” Similarly, PGVCB concluded that because of the small number of reported cases, “the likelihood of leishmania tropica as an important risk factor for widely reported illness has diminished.” While this is the case for observed symptomatic infection with the parasite, the prevalence of asymptomatic infection is unknown, and such infection may reemerge in cases in which the patient’s immune system becomes deficient. As the Committee noted, the infection may remain dormant up to 20 years.
Because of this long latency, the infected population is hidden, and because even classic forms of leishmaniasis are difficult to recognize, we believe that leishmania should be retained as a potential risk factor for individuals who suffer from immune deficiency. Third, the Committee also concluded that it is unlikely that the health effects reported by many Gulf War veterans were the result of (1) biological or chemical warfare agents, (2) depleted uranium, (3) oil well fire smoke, (4) pesticides, (5) petroleum products, and (6) pyridostigmine bromide or vaccines. However, our review of the Committee’s conclusions indicated the following: While the government found no evidence that biological weapons were deployed during the Gulf War, the United States lacked the capability to promptly detect biological agents, and the effects of one agent, aflatoxin, would not be observed for many years. Evidence from various sources indicates that chemical agents were present at Khamisiyah, Iraq, and elsewhere on the battlefield. The magnitude of the exposure to chemical agents has not been fully resolved. As we recently reported, 16 of 21 sites categorized by Gulf War planners as nuclear, biological, and chemical (NBC) facilities were destroyed. However, the United Nations Special Commission found after the war that not all the possible NBC targets had been identified by U.S. planners. The Commission has investigated a large number of the facilities suspected by the U.S. authorities as being NBC related. Regarding those the Commission has not yet inspected, we determined that each was attacked by coalition aircraft during the Gulf War. One of these sites is located within the Kuwait theater of operation in close proximity to the border, where coalition ground forces were located. Exposure to certain pesticides can induce a delayed neurological condition without causing immediate symptoms. 
Available research indicates that exposure to pyridostigmine bromide can alter the metabolism of organophosphates (the chemical family of some pesticides that were used in the Gulf War, as well as certain chemical warfare agents) in ways that enhance chronic effects on the brain. Recommendations to DOD and VA Because of the numbers of Gulf War veterans who continue to experience illnesses that may be related to their service during the Gulf War, we recommended in our report that the Secretary of Defense, with the Secretary of Veterans Affairs, (1) set up a plan for monitoring the clinical progress of Gulf War veterans to help promote effective treatment and better direct the research agenda and (2) give greater priority to research on effective treatment for ill veterans and on low-level exposures to chemicals and their interactive effects and less priority to further epidemiological studies. We also recommended that the Secretaries of Defense and Veterans Affairs refine the current approaches of the clinical and research programs for diagnosing posttraumatic stress disorder consistent with suggestions recently made by the Institute of Medicine. The Institute noted the need for improved documentation of screening procedures and patient histories (including occupational and environmental exposures) and the importance of ruling out alternative causes of impairment. Mr. Chairman, that concludes our prepared remarks. We will be happy to answer any questions you may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. 
NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | GAO discussed the results of its study on the government's clinical care and medical research programs relating to illnesses that members of the armed forces might have contracted as a result of their service in the Persian Gulf War, focusing on the: (1) efforts of the Departments of Defense (DOD) and Veterans Affairs (VA) to assess the quality of treatment and diagnostic services provided to Gulf War veterans and their provisions for follow-up of initial examinations; (2) government's research strategy to study the veterans' illnesses and the methodological problems posed in its studies; and (3) consistency of key official conclusions with available data on the causes of the veterans' illnesses.
GAO noted that: (1) over 100,000 Gulf War veterans have participated in DOD and VA health examination programs; (2) of those veterans examined by DOD and VA, nearly 90 percent reported a wide array of health complaints and disabling conditions; (3) although efforts have been made to diagnose veterans' problems and care has been provided to many eligible veterans, neither DOD nor VA has systematically attempted to determine whether ill Gulf War veterans are any better or worse today than when they were first examined; (4) federal research on Gulf War veterans' illnesses and factors that might have caused their problems has not been pursued proactively; (5) the majority of the research has focused on the epidemiological study of the prevalence and cause of Gulf War illnesses rather than the diagnosis, treatment, and prevention of them; (6) while this epidemiological research will provide descriptive data on veterans' illnesses, methodological problems are likely to prevent researchers from providing precise, accurate, and conclusive answers regarding the causes of veterans' illnesses; (7) the ongoing epidemiological federal research suffered from two methodological problems: the lack of a case definition and the absence of accurate exposure data; (8) without valid and reliable data on exposures and the multiplicity of agents to which the veterans were exposed, researchers will likely continue to find it difficult to detect relatively subtle effects and to eliminate alternative explanations for Gulf War veterans' illnesses; and (9) support for some official conclusions regarding stress, leishmaniasis (a parasitic infection), and exposure to chemical agents was weak or subject to alternative interpretations. |
Background The 1998 terrorist bombings of the U.S. embassies in Kenya and Tanzania highlighted the compelling need for safe and secure overseas facilities. Following the bombings, two high-level independent groups cited problems at U.S. overseas facilities. In January 1999, the chairman of the Accountability Review Boards, formed to investigate the bombings, reported that unless security vulnerabilities associated with U.S. overseas facilities were addressed, “U.S. Government employees and the public in many of our facilities abroad” would be at continued risk from further terrorist bombings. Later that year, the Overseas Presence Advisory Panel (OPAP) concluded that many U.S. overseas facilities are unsafe, overcrowded, deteriorating, and “shockingly shabby” and recommended major capital improvements and more accountability for security. In addition, the panel recommended that the United States consider rightsizing its overseas presence to reduce security vulnerabilities. In January 2001, we recommended that State develop a long-term capital construction plan to guide the multibillion dollar program to build new secure facilities. We also reported in July 2002 on a rightsizing framework we developed to facilitate the use of a common set of criteria for making staff assessments and adjustments at overseas posts, which included consideration of security, mission priorities and requirements, and costs. We recommended that OMB use the framework as a basis for assessing staffing levels at existing overseas posts. Figure 1 illustrates the locations worldwide for which State has received funding for new embassy compound construction in fiscal years 1999 through 2003 and for which it has requested funding for projects in fiscal year 2004. In July 2001, State published its first Long-Range Overseas Building Plan, a planning document that outlines the U.S. 
government’s overseas facilities requirements and guides implementation of State’s expansive and unprecedented overseas construction program. This program aims to provide safe, secure, and cost-effective buildings for the thousands of U.S. employees working overseas. State identified the projects with the most compelling case for replacement and ranked them in the plan, which OBO plans to update annually as compounds are completed and new projects are added to the priority list. The current long-range plan describes building new embassy compounds at more than 70 locations during fiscal years 2002 through 2007. State estimates this will cost more than $6.2 billion. Additional funding will be needed after this time to continue the program. State’s construction program of the late 1980s encountered lengthy delays and cost overruns in part because of a lack of coordinated planning of post requirements prior to approval and budgeting for construction projects. As we reported in 1991, meaningful planning began only after project budgets had been authorized and funded. As real needs were determined, changes in scope and increases in costs followed. OBO now requires that all staffing projections for new embassy compounds be finalized prior to submitting funding requests, which are sent to the Congress as part of State’s annual budget request each February. To accomplish this task, OBO requires that final staffing projections be submitted the previous spring. Figure 2 outlines the major milestones and highlights key dates in the planning and construction process for a new embassy compound scheduled for 2007 funding. As Figure 2 depicts, OBO will receive final staffing projections for fiscal year 2007 projects in spring 2005. Between spring 2005 and February 2006, OBO will develop firmer cost estimates for the project, vet the resulting funding requirements through OMB, and submit the funding request to the Congress.
Appropriations for these fiscal year 2007 projects will not be secured until at least October 2006—18 months after final projections are submitted—and construction may not begin for another 6 months. In total, OBO estimates that, in some cases, it could take 2 to 3 years from the time projections are finalized to actually begin construction of a new compound, which could take another 2 to 3 years to complete. To ensure that projects in the long-range plan proceed on schedule and at cost, OBO will not request additional funding to accommodate changes made after funding requests are submitted to the Congress. Once OBO receives appropriations for construction projects, it moves immediately to complete the design of a new compound and secure a contracting firm for the project. Changes to staffing projections after this point may result in redesign and could lead to lengthy delays and additional costs, according to an OBO official. For example, large changes generally require that materials already purchased for the project be replaced with new materials. According to OBO, there is little room for flexibility after the budget is submitted given budgetary and construction time frames. However, OBO does include a margin of error in the designs for all new embassy compounds, which typically allows for a 5-10 percent increase in building size to accommodate some additional growth. A key component of the planning process outlined in figure 2 is the development of staffing projections for new embassy compounds. Staffing projections present the number of staff likely to work in the facility and the type of work they will perform. These are the two primary drivers of the size and cost of new facilities. Individual embassies and consulates, in consultation with headquarters bureaus and offices, are responsible for developing the staffing projections, which OBO then uses to design the new compounds and prepare funding requests. 
As the government’s overseas real property manager, OBO must rely on the other bureaus in the State Department and other U.S. agencies for policy and staffing decisions. OBO is not in a position to independently validate the projections once the geographic bureaus have given their approval. To help ensure that new compounds are designed as accurately as possible, OBO designed a system for collecting future staffing requirements, as shown in figure 3, that encourages the active participation of embassy personnel, officials in State’s geographic bureaus, and officials from all other relevant federal agencies. This process also calls upon embassy management and geographic bureaus to review and validate all projections before submitting them to OBO. OBO generally gives embassies and geographic bureaus the opportunity to submit staffing projections several times before they are finalized. Finally, it should be noted that while OBO takes the lead in designing and constructing all buildings on new embassy compounds, OBO is not always responsible for securing funding for all compound buildings. Pursuant to an informal agreement between OBO and USAID, USAID will secure funding for a separate annex in a compound when it requires desk space for 50 or more employees. However, if USAID projects it will need fewer than 50 desks, its offices will be in the chancery building in the compound, which State would fund, as it would for all U.S. government agencies in the chancery. According to OBO and USAID headquarters officials, there is some flexibility in the maximum number of USAID desk spaces allowed in a chancery, and this issue is handled on a case-by-case basis.

Systematic Effort to Project Staffing Needs for New Embassies Is Lacking

Although OBO has designed a reasonable approach to developing staffing projections, we found that it was not adopted uniformly across all of the embassies and geographic bureaus that we studied.
While some of the embassies we examined have conducted relatively thorough analyses of their future needs, in other cases the process has been managed poorly, both in the field and at headquarters offices, thus raising concerns about the validity of the projected requirements. For example, with few exceptions, officials at the posts we visited did not appreciate the seriousness of the staffing projection process as it relates to the size and cost of new diplomatic facilities. Moreover, none of the embassies we contacted received formal, detailed guidance on how to develop projections. In addition, they had no systematic approach, such as the one presented in our framework, to conducting rightsizing analyses that would ensure that projected needs are the minimum necessary to support U.S. national security interests. In general, for the embassies we contacted, rightsizing exercises were largely limited to predictions of future workload, priorities, and funding levels, and did not include analyses of other factors, such as operational costs. Moreover, none of the embassies we contacted conducted a rightsizing analysis of existing staffing levels prior to projecting future requirements. We also found that posts did not maintain documentation of the assessments they conducted when completing staffing projections, and that State’s geographic bureaus did not consistently vet posts’ projections prior to submitting them to OBO. Finally, the process was further complicated by other factors, such as frequent personnel turnover and breakdowns in communication among multiple agencies.

Efforts to Develop Staffing Projections Vary Significantly across Embassies and Geographic Bureaus

We found that staffing projection exercises were not consistent across all of the embassies we contacted, and, indeed, State officials acknowledged that efforts to develop and validate projections were informal and undisciplined.
Some embassy management teams were more engaged in the projection process than others. For instance, at several of the U.S. embassies we contacted, chiefs of mission or deputy chiefs of mission led interagency, or country team, meetings to discuss the embassy’s long-term priorities and the staffing implications. In addition, management followed up with agency representatives in one-on-one meetings to discuss each agency’s projected requirements. However, management teams at other embassies we contacted, such as the U.S. embassies in Belgrade, Serbia and Montenegro, and Tbilisi, Georgia, were less engaged and had relied mainly on administrative officers to collect information from each agency informally. In Belgrade, officials acknowledged that the projection exercise was not taken seriously and that projections were not developed using a disciplined approach. In Tbilisi, a failure to document recent growth in current staffing levels led to final projections that were too low. OBO has had to meet immediate additional requirements by using all of the growth space it built into the original compound design and reducing the amount of common space, such as conference rooms, to accommodate additional offices. Therefore, the new facility may be overcrowded upon opening, embassy officials said. Had embassy or headquarters officials communicated to OBO earlier the likelihood of large staffing increases by the time construction was completed, OBO might have been able to better accommodate these needs in its plans. In addition to inconsistencies in the field, we found that officials in the geographic bureaus in Washington, D.C., whose staff are responsible for working most closely with embassies and consulates, also have varied levels of involvement in the projection process. Officials with whom we spoke in State’s geographic bureaus acknowledged that there is no mechanism to ensure the full participation of all relevant parties.
When these officials were more involved, we had more confidence in the accuracy of the projections submitted to OBO. For example, officials from the U.S. Embassy in Beijing, China, said that representatives from their geographic bureau in Washington, D.C., have been very involved in developing their projections. They reported that the geographic bureau contacted all federal agencies that might be tenants at the new embassy—even agencies that currently have no staff in the country—to determine their projected staffing needs. Conversely, officials at Embassy Belgrade said State’s geographic bureau did not request any justifications for or provide any input into the final projections submitted to OBO. Officials in the geographic bureau acknowledged that the bureau does not require formal justification for embassies’ projected staffing requirements for new compounds. Given the weaknesses in how staffing projections were developed at Embassy Belgrade, State has little assurance that the planned compound will be the right size.

Embassies Do Not Receive Consistent, Formal Guidance on Staffing Projection Process and Importance of Rightsizing

Our analysis indicates that the State Department is not providing embassies with sufficient formal guidance on important time lines in the projection process or factors to consider when developing staffing projections for new embassy compounds. Officials from each of the 14 posts we contacted reported that their headquarters bureaus had not provided specific, formal guidance on important factors to consider when developing staffing projections. One geographic bureau provided its embassies with a brief primer on the process by which State determines priorities for new embassy compounds that broadly described the projection process and OBO’s long-range plan.
However, we found that State generally did not advise embassies to consider factors such as (1) anticipated changes in funding levels, (2) the likelihood that policy changes could result in additional or fewer work requirements, (3) linkages between agencies’ annual operating costs and the achievement of embassy goals, (4) costs associated with their presence in a new facility, or (5) alternative ways to consolidate certain positions among neighboring embassies. Absent such guidance from Washington, D.C., we found that factors that embassy officials considered when developing projections varied on a case-by-case basis. Officials at Embassy Sarajevo, for example, conducted a relatively thorough analysis of their future needs, including consulting World Bank indicators for Bosnia-Herzegovina to determine the likelihood of increased U.S. investment in the region and linking future staffing needs accordingly. In addition, a consular affairs officer analyzed the likelihood that new security requirements for consular sections, which may allow only American consular officers to screen visa applicants, would boost that section’s staffing requirements. Other embassies we contacted conducted less thorough analyses of future needs. For example, officials from several of the other embassies we contacted reported that they largely relied on information from annual Mission Performance Plans to justify future staffing needs in a new compound. Although the performance plan links staffing to budgets and performance, and may include goals related to improving diplomatic facilities, it is a near-term tool. For example, performance plans for fiscal year 2004 identify goals and strategies only for that fiscal year. For a project scheduled for 2004 funding, an embassy may go through two or three additional performance planning cycles before embassy staff move onto a new compound.
The performance plan, while a reasonable starting point, is not directly linked to long-term staffing requirements and by itself is not sufficient to justify staffing decisions for new compounds. Indeed, an official from one geographic bureau said that while the bureau works with the embassies in developing staffing projections, it generally does not send out additional or separate formal guidance to all relevant embassies. Although OBO informed the geographic bureaus that final projections for fiscal year 2004 funding would be due in spring 2002, officials at some of the embassies we examined were unaware of this deadline. For example, officials at the U.S. Embassy in Harare, Zimbabwe, said they lacked information on the major time frames in the funding process for their new compound. Officials at Embassy Belgrade said they were unaware that the projections they submitted to OBO in spring 2002 would be their final chance to project future staffing needs, and that the results would be used as the basis for the new compound’s design. In other words, they did not know that additional requirements they might submit would not result in a larger building.

Use of Rightsizing Exercises

According to OBO, individual embassies should have conducted rightsizing exercises before submitting the staffing projections used to develop the July 2001 version of the long-range plan. In addition, in January 2002, OBO advised all geographic bureaus that staffing projections should incorporate formalized rightsizing initiatives early in the process so that building designs would accurately reflect the embassies’ needs. However, OBO is not in a position to know what processes the geographic bureaus use when developing staffing projections. Indeed, OBO officials stated that they cannot hold the geographic bureaus accountable for policy-related decisions and can only assume that rightsizing exercises have been incorporated into the projection process.
The degree to which each geographic bureau stressed the importance of rightsizing staffing projections differed across the embassies we studied. We found that agencies at the posts we examined were not consistently considering the three critical elements of diplomatic operations outlined in our rightsizing framework—physical security of facilities, mission priorities and responsibilities, and operational costs—when determining future staffing requirements. In general, for these posts, rightsizing exercises were largely limited to predictions of future funding levels and likely workloads. For example, officials at each of the seven posts we visited reported that staffing projections were, in large part, linked to anticipated funding levels. In Skopje, for example, USAID officials estimated that funding levels for some programs, such as the democracy and governance program, could decline significantly over the next 5 years and could result in a reduction in staff assigned to these areas. Although these embassies had considered mission requirements as part of the projection process, they did not consistently consider other factors that are mentioned above, such as options for relocating certain positions to regional centers or consolidating other positions among neighboring embassies. Moreover, decision makers at these embassies used current staffing levels as the basis for projecting future requirements. None of the posts we contacted conducted a rightsizing analysis of existing staffing levels prior to projecting future requirements. In addition, we found that most agencies with staff overseas are not consistently considering operational costs when developing their staffing projections. The President’s rightsizing initiative has emphasized cost as a critical factor in determining overseas staffing levels.
However, during our fieldwork, only USAID officials consistently reported that they considered the implications of anticipated program funding on staffing levels and the resulting operational costs. Furthermore, we found only one instance where an agency, the U.S. Commercial Service, reported that as part of its overseas staffing process, it compares operating costs of field offices with the performance of those offices.

Little Documentation of Comprehensive Assessments of Long-term Staffing Needs

At each of the seven posts we visited, we found little or no documentation to show that staff had completed a comprehensive assessment of the number and types of people they would need in the year that their new embassy would be completed. As part of our prior work on rightsizing, we developed examples of key questions that may be useful for embassy managers in making staffing decisions. These include, but are not limited to, the following questions: Is there adequate justification for the number of employees from each agency compared to the agency’s mission? What are the operating costs for each agency at the embassy? To what extent could agency program and/or routine administrative functions (procurement, logistics, and financial management functions) be handled from a regional center or other locations? However, we did not find evidence of these types of analyses at the posts we visited. Officials from several embassies told us they had considered these factors; yet, they did not consistently document their analyses or the rationales for their decisions. Although officials at the embassies we visited said that these types of considerations are included as part of their annual Mission Performance Plan process, there was little evidence of analyses of long-term needs. Moreover, we found little or no documentation explaining how previous projections were developed or the justifications for these decisions.
For example, by the time the new embassy compound is completed in Yerevan, Armenia, the embassy will be four administrative officers removed from the person who developed the original staffing requirements, and current embassy officials had no documentation on previous projection exercises or the decision-making processes. Thus, there was generally no institutional memory of and accountability for previous iterations of staffing projections. As a result, future management teams will not have accurate information on how or why previous decisions were made when they embark on efforts to update and finalize staffing projections.

Staffing Projections Are Not Vetted Consistently by Geographic Bureaus

According to OBO, the relevant geographic bureaus are expected to review and verify the staffing projections developed by individual embassies and confirm these numbers with other agencies’ headquarters before they are submitted to OBO. However, we found that the degree to which the staffing projections were reviewed varied. For example, officials at Embassy Belgrade reported that their geographic bureau was not an active participant in projection exercises. But officials at Embassy Sarajevo reported that officials from the same geographic bureau were involved in the projection process and often requested justifications for some decisions. In addition, we found little evidence to show that staffing projections were consistently vetted with all other agencies’ headquarters to ensure that the projections were as accurate as possible. Indeed, State officials acknowledged that (1) State and other agencies’ headquarters offices are not held accountable for conducting formal vetting exercises once projections are received from the embassies; (2) there is no formal vetting process; and (3) the geographic bureaus expect that officials in the field consult with all relevant agencies; therefore, the bureaus rarely contact agency headquarters officials.
Additional Factors Complicate Staffing Projection Process

We found additional factors that further complicate the staffing projection process. First, frequent turnover of embassy personnel responsible for developing staffing projections results in a lack of continuity in the projection process. This turnover and the lack of formal documentation may prevent subsequent embassy personnel from building upon the work of their predecessors. Second, we found that coordinating the projected needs of all agencies could be problematic. For example, some agencies may decide not to be located in the new compound, while others, such as USAID, may have different requirements in the new compound. However, we found that these issues were not always communicated to embassy management in a timely fashion, early in the projection process.

Lack of Continuity in Projection Process

Frequent turnover in embassy personnel can contribute to problems obtaining accurate staffing projections. Embassy staff may be assigned to a location for only 2 years, and at some locations, the assignment may be shorter. For instance, the U.S. Office in Pristina, Kosovo, and the U.S. Embassy in Beirut, Lebanon, have only a 1-year assignment requirement. Given that personnel responsible for developing the projections could change from year to year, and that posts may go through several updates before the numbers are finalized, the continuity of the projection process is disrupted each year as knowledgeable staff are transferred to new assignments. Officials in Kosovo reported that the frequent turnover of administrative personnel has forced incoming staff to rebuild institutional knowledge of the projection process each year.

Breakdowns in Communication among Multiple Agencies

Part of the complexity of the projection process is the difficulty in coordinating staffing requirements for multiple agencies in a given location.
Agencies’ space needs in the main office building may differ—for instance, some may require classified space, which is more expensive to construct and thus has different implications for the design and cost of a new building than unclassified space. However, agencies requesting office space may not currently be situated in the country in question and, thus, communication between them and embassy managers is difficult. For example, embassy management in Yerevan, Armenia, stated that one agency without personnel currently in Armenia did not notify the ambassador that it planned to request controlled access space in the new embassy. Embassy officials stated they learned of this only when floor plans for the new chancery were first delivered. These kinds of issues should be communicated to embassy managers in the early stages of the projection process so that the final projections are based on the most accurate information available. Embassy officials in Rangoon, Burma, for example, reported that close interaction among agencies at post and OBO during the staff projection process, under the leadership of the deputy chief of mission and the administrative officer, kept OBO apprised of changes to requirements early enough in the process, before the budget proposal was submitted to the Congress and the projections were locked.

Failure to Provide Timely Requests for Co-location Waivers

Following the 1998 embassy bombings, a law was passed requiring that all U.S. agencies working at posts slated for new construction be located on the new embassy compounds unless they are granted a special co-location waiver. However, agencies are not required to submit these waiver requests prior to submitting their final staffing projections to OBO. To ensure that OBO has the most accurate projections, it is imperative that waiver requests be incorporated early in the staffing projection process so that OBO is not designing and funding buildings that are too large or too small.
In Yerevan, for example, the Department of Agriculture office projected the need for 26 desks in the new chancery, yet officials in Yerevan plan to use only 13 of these desks and to house the remaining personnel in their current office space. However, Agriculture has not yet requested a waiver. If Agriculture receives a waiver and proceeds according to current plans, OBO will have designed space and requested funding for 13 extra desks for Agriculture staff. We found other instances where agencies had not requested a waiver before submitting final projections. In Sarajevo, for example, the Departments of Defense, Treasury, and Justice have staff in the host country ministries they advise. However, officials at Embassy Sarajevo, including the regional security officer, were uncertain about which agencies would be requesting a waiver for the new compound. Embassy officials acknowledged that these decisions must be made before the staffing projections are finalized.

Separate Funding for USAID Annexes Could Complicate the Projection Process

In compounds where USAID is likely to require desk space for more than 50 employees, it is required to secure funding in its own appropriations for an annex building on the compound. However, officials from at least two of the embassies we examined had trouble determining where USAID would be located, and this kind of problem could delay planning and disrupt OBO’s overall plan for concurrent construction of the USAID annex with the rest of the compound. For example, at Embassy Yerevan, confusion among USAID officials in Washington and the field over whether USAID would fund a separate annex has caused construction and funding on the annex to fall behind schedule. Therefore, USAID staff will not move to the new site concurrently with the rest of the embassy’s staff. Rather, USAID may be forced to remain at the current, insecure facility—at an additional cost—until completion of its annex, unless alternative arrangements can be made.
We also found a related problem in Sarajevo, Bosnia-Herzegovina, where USAID officials were concerned about having to build a separate annex. Current staffing levels and projections exceed the 50-desk level, which will require USAID to fund the construction of an annex on the compound. However, the assistance program may be declining significantly soon after the completion of the new compound and, as a result, the office may need far fewer staff. Thus, USAID may be constructing an annex that is oversized or unnecessary by the time construction is completed or soon after. USAID officials in Sarajevo acknowledged they would need to coordinate with embassy management and their headquarters offices regarding the decision to build a separate annex so that OBO has the most accurate projections possible. The issue of USAID annex construction is further complicated by difficulty coordinating funding schedules. One of the key assumptions of the long-range plan is that where USAID requires a separate annex, construction will coincide with the State-funded construction projects. However, annual funding levels for USAID construction have been insufficient to keep chancery and USAID annex construction on the same track in some countries. In Tbilisi, Georgia, for example, funding for the USAID annex has fallen behind State Department funding by 2 to 3 fiscal years. According to USAID officials in Washington, D.C., two-track construction could lead to security concerns, work inefficiencies, and additional costs. Because USAID is required to secure funding for its annexes separate from State’s funding for new compounds, it is imperative that decisions regarding the future location of USAID personnel be made early in the staffing projection process to avoid additional security or financial risks.

Government Aims to Distribute Costs of Overseas Facilities among Users

The State Department, which historically has been responsible for funding the construction and maintenance of U.S.
embassies and consulates, recently proposed a capital security cost-sharing plan that would require federal agencies to help fund its embassy construction program. Traditionally, U.S. government agencies other than State have not been required to help fund capital improvements of U.S. embassies and consulates. OMB is examining State’s and other cost-sharing proposals designed to create more discipline in the process for determining overseas staffing requirements. The administration believes that if agencies were required to pay a greater portion of the total costs associated with operating overseas facilities, they would think more carefully before posting personnel overseas. In spring 2003, OMB will lead an interagency committee to develop a cost-sharing mechanism that would be implemented in fiscal year 2005. This new mechanism could require agencies to help fund the construction of new embassies and consulates. While it may be reasonable to expect that agencies should pay full costs associated with their overseas presence, many factors and questions must be addressed prior to implementing an effective and equitable cost-sharing mechanism.

State’s Proposed Capital Security Cost-sharing Plan

The State Department has presented a capital security cost-sharing plan to OMB that would require agencies to help fund State’s capital construction program. State’s proposal calls for each agency to pay a proportion of the total construction program costs based on its total overseas staffing levels. Agencies would be charged different costs based on whether their staff are located in classified or nonclassified access areas. Agencies would be assessed a fee each year, which would be updated annually, until the building program is completed.
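The proportional allocation State describes (each agency pays a share of total program costs based on its overseas staffing, with different rates for staff in classified and nonclassified access areas) can be sketched as follows. All agency names, headcounts, the rate weights, and the program cost below are hypothetical assumptions for illustration, not figures from the report or from State's survey data.

```python
# Illustrative sketch of the proportional cost-sharing idea described above.
# Every number and name here is a hypothetical assumption, not report data.

ANNUAL_PROGRAM_COST = 1_400_000_000  # assumed annual cost of the building program

# (classified-area staff, nonclassified-area staff) per agency -- hypothetical
staffing = {
    "State":   (3000, 9000),
    "Defense": (1500, 1000),
    "USAID":   (200, 1800),
}

# Assume a seat in classified space is weighted twice a nonclassified seat,
# reflecting its higher construction cost.
CLASSIFIED_WEIGHT = 2.0
NONCLASSIFIED_WEIGHT = 1.0

def annual_charges(staffing, total_cost):
    """Allocate total_cost across agencies in proportion to weighted headcount."""
    weights = {
        agency: cls * CLASSIFIED_WEIGHT + non * NONCLASSIFIED_WEIGHT
        for agency, (cls, non) in staffing.items()
    }
    total_weight = sum(weights.values())
    return {agency: total_cost * w / total_weight for agency, w in weights.items()}

charges = annual_charges(staffing, ANNUAL_PROGRAM_COST)
for agency, charge in sorted(charges.items()):
    print(f"{agency}: ${charge:,.0f}")
```

Because the shares are normalized by total weighted headcount, the charges always sum to the program cost, and an agency's bill rises automatically as its overseas staffing grows, which is the self-disciplining effect the proposal is meant to create.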
An added benefit of such a program, State believes, is that it would give agencies an incentive to consider more carefully the total costs associated with their presence abroad, which, in turn, would lead to greater efforts to rightsize overseas presence. Table 1 shows an estimated distribution of costs for each agency once the program is fully implemented, based on State’s May 2001 survey data.

Efforts by OMB to Develop a Cost-sharing Mechanism

As part of the President’s Management Agenda, OMB is leading an effort to develop a cost-sharing mechanism that would require users of U.S. overseas facilities to share the costs associated with those facilities to a greater extent than currently required. OMB and the administration believe that such a requirement would provide agencies with an incentive to scrutinize long-term staffing more thoroughly when assessing their overseas presence. OMB officials also believe greater cost sharing could provide a clear linkage between agencies’ presence and the costs of new facilities that result directly from it. In its November 1999 report, the Overseas Presence Advisory Panel (OPAP) noted a lack of cost sharing among agencies that use overseas facilities, particularly as it related to capital improvements and maintenance of sites. As a result, OPAP recommended that agencies be required to pay rent in government-owned buildings in foreign countries to cover current operating and maintenance costs. In effect, agencies would pay for space in overseas facilities just as they would for domestic office space operated by the General Services Administration. In response to the OPAP recommendation, a working group consisting of staff from the Departments of Commerce, Defense, Justice, State, and Transportation; the Central Intelligence Agency; OMB; and USAID was created to develop a mechanism by which agencies would be charged for the use of overseas facilities.
In summer 2000, the working group recommended to the Interagency Subcommittee on Overseas Facilities that agencies be assessed a surcharge based on the space they actually use in overseas facilities. Like State's more recent proposal, the working group's plan was designed to help fund construction of new embassy compounds, but the plan was not implemented. In January 2003, OMB notified each federal agency with overseas staff how State's capital cost-sharing proposal would affect the agencies' respective budgets in fiscal year 2004. Because the State proposal and OMB assessment were completed after the budget submission deadline, OMB told agencies that they would not be charged in 2004; however, OMB did say that a capital construction surcharge would be phased in over 5 years beginning in 2005. In addition, agencies were invited to participate in an interagency working group charged with developing an equitable cost-sharing program this year. Also during 2003, OMB is requiring that agencies complete a census of all current overseas positions and an assessment of agencies' future staffing plans as part of their budget requests for 2005. The results of this census will become the baseline for how future cost-sharing charges are determined.

Factors to Consider When Developing a Capital Cost-sharing Mechanism

The impact of agencies' staffing levels on the costs associated with maintaining and improving the physical infrastructure of overseas facilities is an important factor agencies should consider when placing staff overseas. However, agency personnel in Washington and in the field, and embassy management teams with whom we spoke, expressed concerns over many factors involved in implementing a new cost-sharing arrangement.
Therefore, as OMB and the interagency committee work to develop a new cost-sharing mechanism, they also need to develop consensus on many issues, including:

- how the cost-sharing mechanism would be structured—for example, as capital reimbursement for new embassy compounds, or as a rent surcharge applied to all embassy occupants worldwide or just those at new embassy compounds;
- the basis for fees—such as full reimbursement of capital costs in a year or amortized over time, or rent based on local market rates, an average of market rates within a region, or one flat rate applied worldwide;
- how charges would be assessed—based on the amount of space an agency uses or on its per capita presence—and whether charges would be applied on a worldwide level, at the post level, or just for posts receiving new facilities;
- whether different rates would be applied to staff requiring controlled access rather than noncontrolled access space;
- whether agencies would be charged for staff not located within facilities operated by the State Department—for example, USAID staff working in USAID-owned facilities outside an embassy compound or staff who work in office space at host country ministries and departments;
- if and how costs associated with staff providing shared services would be offset, and whether costs associated with Marine and other security services would be covered;
- how fees would be paid and who would collect the payments—whether through an interagency transfer of funds or through an existing structure such as ICASS; and
- whether potential legal barriers exist and, if so, what legislation would be necessary to eliminate them.

In addition, the interagency committee must develop consensus on the underlying purpose of capital cost sharing, demonstrate how such a mechanism would benefit users of overseas facilities, and determine how the mechanism can be implemented with the greatest fairness and equity.
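One fee-basis option named above, amortizing capital costs over time rather than recovering them in a single year, follows standard annuity arithmetic. The sketch below is purely illustrative; the capital cost, term, and discount rate are hypothetical and do not come from any of the proposals discussed.

```python
# Illustrative comparison of two fee bases: full reimbursement of a
# capital cost in one year vs. a level annual charge that recovers the
# same cost over several years. All figures are hypothetical.

def level_annual_charge(capital_cost, years, discount_rate):
    """Standard annuity payment that recovers capital_cost over `years`
    at the given discount rate."""
    if discount_rate == 0:
        return capital_cost / years
    r = discount_rate
    return capital_cost * r / (1 - (1 + r) ** -years)

full_reimbursement = 100_000_000  # entire cost assessed in one year
amortized = level_annual_charge(100_000_000, years=10, discount_rate=0.05)
# The amortized basis spreads the same cost into ten smaller, level
# payments, smoothing the budget impact on the agencies being charged.
```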
Finally, the committee must determine how to minimize the management burden of whatever mechanism it develops.

Conclusions

The State Department has embarked on an expansive capital construction program designed to provide safe, secure, and cost-effective buildings for employees working overseas. This program will require a substantial investment of resources. Given that the size and cost of new facilities are directly related to anticipated staffing requirements for these posts, it is imperative that future staffing needs be projected as accurately as possible. Developing staffing projections is a difficult exercise that requires a serious effort by all U.S. agencies to determine their requirements 5 to 7 years in the future. However, we found that efforts to develop these projections at the embassies we studied were undisciplined and did not follow a systematic approach. Therefore, the U.S. government risks building new facilities that are designed for the wrong number of staff. We believe that additional, formal guidance and the consistent involvement of the geographic bureaus would help mitigate the cost and security risks associated with wrong-sized embassies. Although any staffing requirements could be affected by changing world events and circumstances, we believe a systematic process would help ensure that the construction of new embassies is based on the best projections and most accurate information possible. Costs associated with the physical infrastructure of facilities are important factors that agencies need to consider when deciding whether to assign staff to overseas locations. Recent proposals to implement a new cost-sharing mechanism may help add greater discipline to the staffing projection and rightsizing processes. However, in deciding how costs will be shared, decision makers will need to address issues such as fairness and equity, while designing a system that is relatively easy to administer.
Recommendations for Executive Action

To ensure that U.S. agencies are conducting systematic staffing projection exercises, we recommend that the Secretary of State provide embassies with formal, standard, and comprehensive guidance on developing staffing projections for new embassy compounds. This guidance should address factors to consider when developing projections, encourage embassywide discussions, present potential options for rightsizing, and identify important deadlines in the projection process, including planning, funding, and construction timelines. To ensure continuity in the process, we also recommend that the Secretary of State require that chiefs of mission maintain documentation on the decision-making process, including justifications for these staffing projections. Finally, we recommend that the Secretary require all chiefs of mission and geographic bureaus to certify that the projections have been reviewed and vetted before they are submitted to OBO.

Agency Comments and Our Evaluation

State and USAID provided written comments on a draft of this report (see apps. II and III). OMB provided oral comments. State agreed with our conclusion that it is essential that staffing projections for new embassy compounds be as accurate as possible. State also said it plans to implement our recommendations fully by creating a standard and comprehensive guide for developing staffing projections, which it anticipates completing by late April 2003. State said this guide would provide posts and geographic bureaus with a complete set of required steps, the timelines involved, and the factors to consider when developing staffing projections. Moreover, State agreed with our recommendations that posts should retain documentation on the processes they used to develop staffing projections, and that chiefs of mission and geographic bureaus should certify staffing projections.
State provided technical comments related to our cost-sharing discussion, which were incorporated into the text, where appropriate. USAID also agreed that U.S. agencies do not take a consistent approach to determining long-term staffing needs for new embassy compounds. Specifically, USAID supported the recommendation calling for standard and comprehensive guidance to assist posts when developing staffing projections. USAID also expressed deep concerns about the security and cost implications that result from delayed funding for its facilities on new embassy compounds. Indeed, USAID acknowledged that its employees will continue to work in facilities at two overseas locations that do not meet minimal physical security standards even though other agencies have been moved to new embassy compounds. USAID said that the lack of funding has prevented USAID and State from coordinating the construction of new facilities. In oral comments, OMB said it agrees with our conclusions regarding both the staffing projection process and cost sharing, and with our three recommendations to the Secretary of State. In addition, OMB suggested it would be useful to have an independent body review the vetted staffing projections prior to their submission to OBO, to augment the guidance developed by State, and ensure that agencies and embassy management examine rightsizing options. OMB intends to address this issue with the interagency cost-sharing committee. OMB also stated it is concerned about the security and cost implications that can result from funding delays for USAID annexes, and it is studying ways to overcome this problem. OMB also provided technical comments, which we addressed in the text, as appropriate.

Scope and Methodology

To determine how U.S.
agencies are developing staffing projections for new embassy compounds, we analyzed the State Department's Long-Range Overseas Buildings Plan and interviewed State Department officials from OBO, the Office of Management Policy, and the six geographic bureaus. We also interviewed headquarters officials from agencies with overseas personnel, including officials from the Departments of Agriculture, Commerce, Defense, Justice, and the Treasury, and officials from USAID and the Peace Corps. In addition, we reviewed reports on embassy security and overseas staffing issues, including those of the Accountability Review Boards and OPAP, and we met with officials from OMB to discuss how they are implementing the overseas presence initiatives in the President's Management Agenda. To further assess agencies' efforts to develop long-term staffing projections and the extent to which agencies were conducting rightsizing exercises as part of the projection process, we visited seven posts in State's Bureau of European and Eurasian Affairs—Yerevan, Armenia; Baku, Azerbaijan; Sarajevo, Bosnia-Herzegovina; Tbilisi, Georgia; Pristina, Kosovo; Skopje, Macedonia; and Belgrade, Serbia and Montenegro—where State is planning to construct new compounds from fiscal years 2002 through 2007. We selected these two groups of neighboring posts—the Balkans and Caucasus posts—because State is planning to complete a significant number of construction projects in these subregions. By focusing on these subregions within Europe and Eurasia, we were able to assess the extent to which these posts considered combining services or positions when developing staffing projections for their new compounds. At each post, we interviewed management teams (ambassadors/chiefs of mission, deputy chiefs of mission, and administrative officers), representatives of U.S. agencies, and other personnel who participated in the staffing projection process.
To examine the staffing projection process at embassies in other geographic bureaus, we also conducted structured telephone interviews with administrative officers or deputy chiefs of mission from seven other embassies slated for new compounds—Rangoon, Burma; Beijing, China; Quito, Ecuador; Accra, Ghana; Beirut, Lebanon; Panama City, Panama; and Harare, Zimbabwe. These embassies had just recently completed or were about to complete their staffing projection process. In all, the posts we contacted represent about 16 percent of the new embassy compound construction projects in OBO's Long-Range Overseas Buildings Plan for 2002 through 2007, and 23 percent of those construction projects in the plan expected to be funded by fiscal year 2005. We also reviewed planning documents, staffing patterns, staffing projections for the new building, and other documentation provided by the posts. To examine the issue of capital cost sharing for construction of new diplomatic facilities, we solicited the views of agency headquarters staff and the management teams of our case study posts to determine the extent to which cost considerations were factored into the decision-making process, and to identify the potential advantages and disadvantages of different capital cost-sharing programs. In particular, we interviewed OBO officials and reviewed documentation supporting its capital security cost-sharing proposal. We also held discussions with OMB officials on their plans for developing and implementing an equitable cost-sharing program and on potential issues for the planned interagency working group. Finally, we attended meetings of OBO's Industry Advisory Panel where cost sharing was discussed by private sector and industry professionals.
We also interviewed staff from the International Facility Management Association on how cost sharing is implemented within the private sector. We conducted our work between May 2002 and February 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested Members of Congress. We are also providing copies of this report to the Secretary of State and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix IV.

Appendix I: Standard Embassy Compound Design

Figure 4 depicts the elements of a new embassy compound. State's Bureau of Overseas Buildings Operations is purchasing parcels of land large enough to accommodate these elements and the department's security standards, which include the placement of all buildings at least 30 meters from a perimeter wall.

Appendix II: Comments from the Department of State

Appendix III: Comments from the U.S. Agency for International Development

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, David G. Bernet, Janey Cohen, Martin de Alteriis, David Dornisch, Kathryn Hartsburg, Edward Kennedy, and James Strus made key contributions to this report.

The 1998 terrorist attacks on two U.S. embassies in Africa highlighted security deficiencies in diplomatic facilities, leading the Department of State to embark on an estimated $16 billion embassy construction program. The program's key objective is to provide safe, secure, and cost-effective buildings for employees overseas.
Given that the size and cost of new facilities are directly related to agencies' anticipated staffing needs, it is imperative that future requirements be projected as accurately as possible. GAO was asked to (1) assess whether State and other federal agencies have adopted a disciplined process for determining future staffing requirements and (2) review cost-sharing proposals for agencies with overseas staff. U.S. agencies' staffing projections for new embassy compounds are developed without a systematic approach or comprehensive rightsizing analyses. State's headquarters gave embassies little guidance on factors to consider in developing projections, and thus U.S. agencies did not take a consistent or systematic approach to determining long-term staffing needs. Officials from each of the 14 posts GAO contacted reported that their headquarters bureaus had not provided specific, formal guidance on important factors to consider when developing staffing projections. The process was further complicated by the frequent turnover of embassy personnel who did not maintain documentation on projection exercises. Finally, staffing projections were not consistently vetted with all other agencies' headquarters. Because of these deficiencies, the government could construct wrong-sized buildings. In fact, officials at two embassies GAO visited said that due to poor projections, their sites may be inadequate almost immediately after staff move onto the new compound. State has proposed a cost-sharing plan that would require federal agencies to help fund new embassy construction. The Office of Management and Budget (OMB) is leading an interagency committee to develop a cost-sharing mechanism that would provide more discipline when determining overseas staffing needs and encourage agencies to think more carefully before posting personnel overseas. 
Numerous issues will need to be resolved for such a program to be successful, including how to structure the program and how payments will be made.
Background

Congress has long recognized that IT has the potential to enable federal agencies to accomplish their missions more quickly, effectively, and economically. However, fully exploiting this potential has presented longstanding challenges to agencies, and despite substantial IT investments, the federal government's management of IT has produced mixed results. The CIO position was established by Congress to serve as a focal point for IT within an agency to address these challenges.

Legislative Evolution of Agency CIO Roles and Responsibilities

Since 1980, federal law has placed the management of IT under the umbrella of information resources management (IRM). Originating in a 1977 recommendation to Congress from the Commission on Federal Paperwork, the IRM approach was first enacted into law in the Paperwork Reduction Act of 1980. This act required OMB to oversee federal agency IRM areas, which combined IT with information management areas, including information collection, records management, and privacy. The law also gave agencies a more general responsibility to carry out their IRM activities in an efficient, effective, and economical manner and to comply with OMB policies and guidelines. To assist in this effort, the law required that each agency head designate a senior official who would report directly to the agency head to carry out the IRM responsibilities of the agency under the law. Amendments to the Paperwork Reduction Act in 1986 and 1995 were designed to strengthen agency and OMB implementation of the law. Most particularly, the act's 1995 amendments provided detailed agency requirements for each IRM area, to match the specific OMB provisions. In addition, these amendments required agencies to develop, for the first time, processes to select, control, and evaluate the results of major information systems initiatives.
Under the Paperwork Reduction Act, as amended through 1995, senior IRM officials were required to carry out the responsibilities of their agencies with respect to IRM and report directly to the head of the agency. In 1996, the Clinger-Cohen Act supplemented the information technology management provisions of the Paperwork Reduction Act with detailed requirements for IT capital planning and investment control and performance and results-based management. The Clinger-Cohen Act also established the position of agency CIO by amending the Paperwork Reduction Act to rename the senior IRM officials "chief information officers" and specifying additional responsibilities for them. Accordingly, agency CIOs are required by law to carry out the responsibilities of their agencies with respect to information collection and control of paperwork; statistical policy and coordination; privacy, including compliance with the Privacy Act; information security, including compliance with the Federal Information Security Management Act (FISMA); information disclosure, including compliance with the Freedom of Information Act (FOIA); and information technology management.
Specifically, with regard to IT management, the CIO is responsible for:

- implementing and enforcing applicable governmentwide and agency IT management policies, principles, standards, and guidelines;
- assuming responsibility and accountability for IT investments;
- assuming responsibility for maximizing the value and assessing and managing the risks of IT acquisitions through a process that, among other things, is integrated with budget, financial, and program management decisions, and provides for the selection, management, and evaluation of IT investments;
- establishing goals for improving the efficiency and effectiveness of agency operations through the effective use of IT;
- developing, maintaining, and facilitating the implementation of a sound, secure, and integrated IT architecture; and
- monitoring the performance of IT programs and advising the agency head whether to continue, modify, or terminate such programs.

Together, these statutory responsibilities require CIOs to be key leaders in managing IT and other information functions in a coordinated fashion in order to improve the efficiency and effectiveness of programs and operations.

Prior Reports on CIOs' Roles and Responsibilities

We have previously reported on the status of agency CIOs, including their roles and responsibilities, reporting relationships, backgrounds, and challenges. We have also reported on private-sector CIO roles and responsibilities and challenges and compared them with those of federal CIOs. In October 1997, we testified on an OMB evaluation of the status of agency CIO appointments at 27 federal agencies shortly after enactment of the Clinger-Cohen Act. In that testimony, we noted that OMB had identified several agencies where the CIO's duties, qualifications, and placement met the requirements of the Clinger-Cohen Act.
According to OMB, these CIOs had experience, both operationally and technically, in leveraging the use of information technology, capital planning, setting and monitoring performance measures, and establishing service levels with technology users. However, OMB had expressed concerns about the number of other agencies that had acting CIOs, and about CIOs whose qualifications did not appear to meet the requirements of the Clinger-Cohen Act or who did not report directly to the head of the agency. We pointed out that OMB had also raised concerns about agencies where the CIOs had other major management responsibilities or where it was unclear whether the CIO's primary duty was the IRM function. Our testimony emphasized the importance of OMB following through on its efforts to assess CIO appointments and resolve outstanding issues. We noted that, despite the urgent need to deal with major challenges, including poor security management, and the need to develop, maintain, and facilitate integrated systems architectures to guide agencies' system development efforts, there were many instances of CIOs who had responsibilities beyond IRM. While some of these CIOs' additional responsibilities were minor, in many cases they included major duties, such as financial operations, human resources, procurement, and grants management. We stressed that asking the CIO to shoulder a heavy load of responsibilities would make it extremely difficult, if not impossible, for that individual to devote full attention to IRM issues. In July 2004, we reported the results of our study, based on a questionnaire and interviews with CIOs at the same 27 major departments and agencies that OMB had previously evaluated. Our study examined 13 major areas of CIO responsibilities—7 areas predominantly in IT management and 6 areas predominantly in information management, as defined by the relevant laws or deemed critical to the effective management of IT.
These areas are described in table 1, along with the relevant source. Our study found that CIOs were not responsible for all of the information and IT management areas. Specifically, all CIOs were responsible for only 5 of the 13 areas, while less than half of the CIOs were assigned responsibility for information disclosure and statistical policy and coordination. Overall, the views of these CIOs were mixed as to whether they could be effective leaders without having responsibility for each individual area. The 2004 study also examined the backgrounds and tenure of CIOs, noting that they had a wide variety of prior experiences, but generally had work or educational backgrounds in IT or IT-related fields, as well as business knowledge related to their agencies. The CIOs and former agency IT executives in the study believed it was necessary for a CIO to stay in office for 3 to 5 years to be effective. However, at the time of our study, the median tenure of permanent CIOs whose time in office had been completed was about 2 years. Based on the study, we also reported on major challenges that the federal CIOs said they faced in fulfilling their duties. In this regard, over 80 percent of the CIOs had cited implementing effective IT management and obtaining sufficient and relevant resources as challenges. We stressed that effectively tackling these reported challenges could improve the likelihood of a CIO's success. Further, we highlighted the opportunity for Congress to consider whether the existing statutory requirements related to CIO responsibilities and reporting to the agency head reflected the most effective assignment of information and technology management responsibilities and reporting relationships. In September 2005, we reported on the results of our study of 20 CIOs of leading private-sector companies. We noted that most of the private-sector CIOs had full or shared responsibility for 9 of 12 functional areas that we had explored.
For the most part, the responsibilities assigned to these private-sector CIOs were similar to those assigned to federal CIOs. In only three areas (information dissemination and disclosure, information collection, and statistical policy) did half or fewer of the CIOs have responsibility. In 4 of the 12 functional areas, the difference between the private-sector CIOs and federal CIOs was greater. Fewer of the private-sector CIOs had these responsibilities in each case. We also reported that private-sector CIOs faced challenges related to increasing IT's contribution to their organization's bottom line—such as controlling IT costs, increasing IT efficiencies, and using technology to improve business processes.

Prior GAO Reports Identified Challenges within IT and Information Management

Although agencies have taken constructive steps to improve IT and information management policies and practices, including through activities of CIOs, we have continued to identify and report on long-standing challenges in the key areas addressed in this report.

IT strategic planning: In January 2004, we reported on the status of agencies' plans for applying information resources to improve the productivity, efficiency, and effectiveness of government programs. At that time, we noted that agencies generally had IT strategic plans that addressed elements such as information security and enterprise architecture, but did not cover key areas specified in the Paperwork Reduction Act. Agencies cited a variety of reasons for not having addressed these areas, including that the CIO position had been vacant, that not including a requirement in guidance was an oversight, or that the process was being revised.
We pointed out that, not only are these practices based on law, executive orders, OMB policies, and our guidance, but they are also important ingredients for ensuring effective strategic planning, performance measurement, and investment management, which, in turn, make it more likely that the billions of dollars in government IT investments will be wisely spent. We made a number of recommendations, including that each agency take action to address IT strategic planning, performance measurement, and investment management practices that were not fully in place.

IT workforce planning: In 1994 and 2001, we reported on the importance that leading organizations placed on making sure they had the right mix of skills in their IT workforce. In our 2004 report on CIOs' roles and responsibilities, about 70 percent of the agency CIOs reported on a number of substantial IT human capital challenges, including, in some cases, the need for additional staff. Other challenges included recruiting, retention, training and development, and succession planning. In February 2011, we identified strategic human capital management as a governmentwide high-risk area after finding that the lack of attention to strategic human capital planning had created a risk to the federal government's ability to serve the American people effectively. As our previous reports have made clear, the widespread lack of attention to strategic human capital management in the past has created a fundamental weakness in the federal government's ability to perform its missions economically and efficiently.

Capital planning and investment management: Since 2002, using our investment management framework, we have reported on the varying extents to which federal agencies have implemented sound practices for managing their IT investments. In this regard, we identified agencies that have made significant improvements by using the framework in implementing capital planning processes.
In contrast, however, we have continued to identify weaknesses at agencies in many areas, including immature management processes to support both the selection and oversight of major IT investments and the measurement of actual versus expected performance in meeting established performance measures. For example, in 2007, we reported that two agencies did not have the processes in place to effectively select and oversee their major investments. In June 2009, we reported that about half of the projects we examined at 24 agencies did not receive selection reviews (to confirm that they support mission needs) or oversight reviews (to ensure that they were meeting expected cost and schedule targets). Specifically, 12 of the 24 reviewed projects that were identified by OMB as being poorly planned did not receive a selection review, and 13 of 28 poorly performing projects we examined had not received an oversight review by a department-level oversight board. Accordingly, we made recommendations to multiple agencies to ensure that the projects identified in the report as not having received oversight reviews received them.

Information security: Our reviews have noted significant information security control deficiencies that place agency operations and assets at risk. In addition, over the last several years, most agencies have not implemented controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented key elements for an agencywide information security program, as required by FISMA.
To address these and other challenges, we have recommended that agencies fully implement comprehensive, agencywide information security programs by correcting shortcomings in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. Due to the persistent nature of information security vulnerabilities and the associated risks, we continue to designate information security as a governmentwide high-risk issue in our most recent biennial report to Congress, a designation we have made in each report since 1997.

Enterprise architecture: We have reported on the status of major federal department and agency enterprise architecture efforts. We found that the state of the enterprise architecture programs at the major federal departments and agencies was mixed, with several having very immature programs, several having more mature programs, and most being somewhere in between. Collectively, agencies faced barriers or challenges in implementing their enterprise architectures, such as overcoming organizational parochialism and cultural resistance, having adequate resources (human capital and funding), and fostering top management understanding. To assist the agencies in addressing these challenges, we have made numerous recommendations aimed at ensuring that their respective enterprise architecture programs develop and implement plans for fully satisfying each of the conditions in our enterprise architecture management maturity framework. In addition, in our most recent high-risk update report we identified possible areas where enterprise architecture could help to alleviate some challenges. For example, we suggested that one agency align its corporate architecture and its component organization architectures to avoid investments that provide similar but duplicative functionality.
Systems acquisition, development, and integration: Our work has shown that applying rigorous practices to the acquisition or development of IT systems or the acquisition of IT services can improve the likelihood of success. In addition, we have identified leading commercial practices for outsourcing IT services that government entities could use to enhance their acquisition of IT systems and services. We have evaluated several agencies’ software development or acquisition processes and reported that agencies are not consistently using rigorous or disciplined system management practices. For example, after reviewing the Department of Homeland Security’s Atlas investment, we recommended that the agency implement effective management controls and capabilities by, among other things, revising and updating its cost-benefit analysis; making the program office operational; developing and implementing rigorous performance program management practices; and ensuring plans fully disclose the system capabilities, schedule, cost, and benefits to be delivered. In addition, ensuring that effective system acquisition management controls are implemented on each agency business system investment remains a formidable challenge, as our recent reports on management weaknesses associated with individual programs have demonstrated. For example, we recently reported that the Department of Defense’s large-scale software-intensive system acquisitions continued to fall short of cost, schedule, and performance expectations. Specifically, our report noted that six of the department’s nine enterprise resource planning systems had experienced schedule delays ranging from 2 to 12 years, and five had incurred cost increases ranging from $530 million to $2.4 billion.
E-government initiatives: In December 2004, we reported the results of our review of the implementation status of major provisions from the E-Government Act of 2002, which required a wide range of activities across the federal government aimed at promoting electronic government, such as providing the public with access to government information and services. We found that, although the government had made progress in implementing the act, the act’s requirements were not always fully addressed. Specifically, OMB had not (1) ensured that a study on using IT to enhance crisis preparedness and response had been conducted that addressed the content specified by the act, (2) established a required program to encourage contractor innovation and excellence in facilitating the development and enhancement of electronic government services and processes, or (3) ensured the development and maintenance of a required repository and website of information about research and development funded by the federal government. We made recommendations to OMB aimed at ensuring more consistent implementation of the act’s requirements. We have also reported on various challenges agencies faced in meeting information management requirements, including in the areas of privacy, information collection, records management, information disclosure, and information dissemination. In 2002 and 2003, we reported on agencies’ handling of the personal information they collect and whether this handling conforms to the Privacy Act and other laws and guidance. In the 2002 report, we made recommendations to selected agencies aimed at strengthening their compliance with privacy requirements. In the 2003 report, we made recommendations to OMB, which included directing agencies to correct compliance deficiencies, monitoring agency compliance, and reassessing OMB guidance. In 2005, we reviewed agency compliance with information collection clearance requirements under the Paperwork Reduction Act.
In an analysis of 12 case studies, we found that while CIOs generally reviewed information collections and certified that they met the standards in the act, in a significant number of instances, agencies did not provide support for the certifications, as the law requires. We recommended that OMB and the agencies take steps to improve review processes and compliance with the act. In 2008, we reviewed the management of e-mail records at four agencies and found agency practices did not always conform to requirements. We recommended that the National Archives and Records Administration develop and implement an oversight approach that provides adequate assurance that agencies are following its guidance, including both regular assessments of agency records and records management programs and reporting on these assessments. Also in 2008, we reported on trends in Freedom of Information Act processing and agencies’ progress in addressing backlogs of overdue FOIA requests. We found weaknesses in agency reporting on FOIA processing and recommended, among other things, that guidance be improved for agencies to track and report on overdue requests and plans to meet future backlog goals. In July 2010, we identified and described current uses of web 2.0 technologies by federal agencies to disseminate information. Specifically, we found that the federal government may face challenges in determining how to appropriately limit collection and use of personal information as agencies utilize these technologies and how and when to extend privacy protections to information collected and used by third-party providers of web 2.0 services. In July 2011, we identified ways agencies are using social media to interact with the public and assessed the extent to which they had policies in place for managing and identifying records, protecting personal information, and ensuring the security of federal information and systems. 
We made recommendations to 21 agencies to improve their development and implementation of social media policies.

OMB Has Several Initiatives Under Way to Improve the Oversight and Management of IT, Including Changing the Role of Federal Agency CIOs

On March 5, 2009, President Obama designated the Administrator of OMB’s Office of Electronic Government and Information Technology as the first Federal Chief Information Officer. The Federal CIO was given responsibility for directing the policy and strategic planning of federal information technology investments as well as for overseeing federal technology spending. Toward this end, in December 2010, the Federal CIO issued a 25 Point Implementation Plan to Reform Federal Information Technology Management. This 18-month plan specified five major goals: strengthening program management, streamlining governance and improving accountability, increasing engagement with industry, aligning the acquisition process with the technology cycle, and applying “light technology” and shared solutions. As part of this plan, OMB has initiatives under way to, among other things, strengthen agencies’ investment review boards and to consolidate federal data centers. The plan stated that OMB will work with Congress to consolidate commodity IT spending (e.g., e-mail, data centers, content management systems, web infrastructure) under agency CIOs. Further, the plan called for the role of federal agency CIOs to focus more on IT portfolio management. In March 2011, we testified on the efforts of OMB and the Federal CIO to improve the oversight and management of IT investments in light of the problems that agencies have continued to experience with establishing IT governance processes to manage such investments.
These initiatives included increasing the accountability of agency CIOs through the use of the IT Dashboard, a public website established in June 2009 that provides detailed information, including performance ratings, for over 800 major IT investments at federal agencies. Each investment’s performance data are updated monthly, which is a major improvement from the quarterly reporting cycle used by OMB’s prior oversight mechanisms. However, in a series of reviews, we have found that the data on the Dashboard were not always accurate. Specifically, we found that the Dashboard ratings were not always consistent with agency performance data. OMB has also initiated efforts to improve the management of IT investments needing attention. In particular, in January 2010, the Federal CIO began leading TechStat sessions, a review of selected IT investments between OMB and agency leadership to increase accountability and transparency and improve performance. We noted that the full implementation of OMB’s 18-month roadmap should result in more effective IT management and delivery of mission-critical systems, as well as further reduction in wasteful spending on poorly managed investments.

Current Agency CIOs Do Not Have Responsibility for All Assigned Areas

Similar to 2004, we found that the CIOs are not consistently responsible for all of the 13 areas assigned by statute or identified as critical to effective IT management; however, they are more focused on IT management than on the management of agency information. The majority of CIOs (between 23 and 27) reported they are responsible for the seven areas of IT management.
In this regard, the CIOs reported being responsible for activities in managing IT that include the following: managing capital planning and investment management processes to ensure that they were successfully implemented and integrated with the agency’s budget, acquisition, and planning processes; developing, maintaining, and facilitating the implementation of sound and integrated enterprise architectures; designating a senior department official who will have responsibility for departmentwide information security; developing IT strategic plans to emphasize the role that IT can play in effectively supporting the department’s operations and goals; developing, maintaining, and improving systems acquisition, development, and integration capabilities; managing e-government initiatives and ensuring compliance with associated requirements; and developing strategies for development of a skilled IT workforce combined with strong succession planning. Fewer CIOs (between 6 and 22) reported being responsible for the six areas predominantly related to information management (information collection/paperwork reduction, records management, privacy, information dissemination, information disclosure, and statistical policy and coordination). Even those CIOs who indicated they had been assigned responsibility for these six information management areas reported they assigned a higher priority to their IT management responsibilities. CIOs who reported they were not responsible for their agencies’ information management functions said they provided input or other assistance to the organizational units within their agencies that were primarily responsible for these areas. The units with which they shared responsibilities varied, as did the roles the CIO played. For example, in the area of records management, one CIO reported working closely with the agency’s data manager and making recommendations regarding records management.
In the privacy area, one CIO reported coordinating with the agency’s Chief Information Security Officer, general counsel, and human resources offices to address any privacy issues. To ensure accuracy of information disseminated, one CIO reported collaborating with the agency’s Office of Public Affairs. The areas for which the fewest CIOs reported responsibility were statistical policy and coordination and information disclosure. In this regard, 21 CIOs stated that statistical policy and coordination was handled by other offices within their agencies, such as a policy or research office. This included components functioning as Principal Statistical Agencies. Eighteen CIOs reported that responsibility for information disclosure rested with another office, such as an agency’s FOIA office. In comparison to 2004, the number of CIOs assigned responsibility for each of the areas remained the same for all but five areas (systems acquisition, development, and integration; IT workforce planning; records management; information dissemination; and statistical policy and coordination). In each of these areas, the number of CIOs assigned responsibility decreased from 2004 to 2011. Figure 1 shows the number of CIOs with responsibility for the 13 areas in 2011 and 2004. The amount of time that CIOs spend in various areas of responsibility reflects their greater emphasis on IT management compared with the management of agency information. Specifically, CIOs reported they devote over two-thirds of their time to the seven IT management areas, which they generally viewed as more important to accomplishing their mission. Moreover, the majority of the CIOs were responsible for each of the areas. By contrast, the CIOs reported spending less than one-fifth of their time in the six information management areas.
Specifically, CIOs reported spending 6 percent or less of their time on average in each of the privacy, e-government initiatives, records management, information dissemination, information collection/paperwork reduction, information disclosure, and statistical policy and coordination areas. As discussed previously, most CIOs reported they were not responsible for all of these areas and indicated they did not always place a high priority on them. This is consistent with the views held by the panel of former federal CIOs, which generally did not place high priority on the information management areas. Table 2 shows the percentage of time CIOs reported allocating to the 13 areas. The CIOs also reported they spend a significant amount of time outside the 13 areas of responsibility. Specifically, CIOs indicated they spend about 14 percent of their time on other responsibilities outside these 13 areas, the same amount of time as they spend on information security, the area where CIOs reported spending the most time. These additional areas of responsibility included addressing infrastructure issues, participating in agencywide boards, or participating in external organizations, such as the federal CIO Council. In addition, CIOs reported they have begun to focus on emerging areas within IT such as cloud computing, data center consolidation, and commodity services. This is consistent with the recent emphasis of the Federal CIO on reforming IT, as reflected in OMB’s IT Reform Plan. As technology continues to evolve, CIOs are likely to be challenged in ensuring that agencies use new technologies efficiently and effectively.

Many CIOs Serve in Multiple Positions

Despite the importance of focusing on their primary duties, the CIOs in our review reported holding a number of official agency job functions in addition to being CIO. Specifically, 14 of 30 CIOs reported serving in another position within their agency besides that of CIO. Of these, 11 CIOs reported holding two or more positions besides CIO, with one holding five positions, including CIO. These positions included Chief Acquisition Officer and Chief Human Capital Officer. Six of the 14 reported that serving as CIO was their primary job function.

CIOs Generally Report Directly to the Agency Head

Federal law calls for agency CIOs to report to the head of their agency. With regard to this requirement, we reported in 2004 that only 19 of 27 CIOs reported to their agency head, and views were mixed on whether such a direct reporting relationship was important. In our current study, even fewer (17 of 30) CIOs indicated that they report to their agency head, although 23 thought it was important to do so. Despite this, the views of agency CIOs and others suggested that a variety of reporting relationships between an agency head and the CIO can be effective. CIOs generally agreed that access to the agency head was important, but that they did not necessarily require a formal reporting relationship. One said that it was important to have a “seat at the table” allowing for direct interaction with the agency head in order to articulate any problems or issues in IT. However, other CIOs stated that it was important for the CIO to report to whoever is in charge of running the daily operations of the agency. One CIO did not believe it was ideal to report directly to the agency head because the agency head has too many other responsibilities. This CIO was able to meet with the agency’s deputy secretary frequently and felt this resulted in more input into decision making. Another CIO, who reported to the agency head, believed there was not one ideal reporting relationship for the entire federal government because of the differences in size and mission among the agencies. Two CIOs in our review indicated they did not have sufficient access to their agency head, even though they thought it was important to have such access.
Accordingly, the CIOs felt they did not have sufficient influence on IT management decisions in their agency. The CIOs stated they had worked to gain greater influence over IT by establishing relationships with peers in their agencies such as the Chief Financial Officer or Chief Operating Officer. Overall, regardless of the reporting relationship between agency heads and agency CIOs, 28 of the CIOs reported they had adequate access to their agency head. Additionally, many of the agency CIOs who did not report directly to the agency head indicated having influence on IT management decisions within their agency because they had relationships with other senior agency officials. These included direct reporting relationships with an assistant secretary or the Chief Operating Officer. Based on their experiences, members of the panel of former CIOs stated that it was important to report to the agency head on key issues, but also to work with other senior officials for day-to-day activities. In this regard, the former CIOs believed it was essential for the CIO to forge relationships with other senior officials in an agency, such as the Chief Financial Officer and members of the Office of General Counsel. Further, in discussing this matter, the Federal CIO stated that reporting relationships should be determined on an agency-by-agency basis, noting that agencies should determine how best to meet this requirement depending on how the agency is structured. Given the varying responsibilities of agency heads and other senior officials, some degree of flexibility in CIOs’ reporting relationships may be appropriate as long as CIO effectiveness is not impeded.
CIOs’ Education and Work Experiences Remain Diverse, although More Have Previously Served as a CIO or Deputy CIO

Although the qualifications of a CIO can help determine whether he or she is likely to be successful, there is no general agreement on the optimal background (e.g., education, experience) that a prospective agency CIO should have. The conference report accompanying the Clinger-Cohen Act stated that CIOs should possess knowledge of and practical experience in the information and IT management practices of business or government. We found that when compared to CIOs in 2004, more current CIOs had served previously as a CIO or deputy CIO. As shown in table 3 below, 18 of the CIOs in our review had experience as either a CIO or deputy CIO, an increase of 6 compared to the CIOs that participated in our 2004 review. Also, 21 current CIOs had previously worked for the federal government, 14 had worked in private industry, 4 had been in academia, and 4 had worked in state and local government. Fifteen CIOs had worked in some combination of two or more of these sectors. Further, all of the current CIOs had work experience in IT or IT-related fields. We asked current and former CIOs what key attributes they had found necessary to be an effective CIO. In response, they noted the need for IT experience and an understanding of how IT can be used to transform agencies and improve mission performance. Of most importance, however, were leadership skills and the ability to communicate effectively. The Federal CIO noted that he valued CIOs who thought about the future of the agency and demonstrated an ability to successfully manage IT programs or projects.

Median CIO Tenure Remains at About 2 Years

We noted previously that one element that influences the likely success of an agency CIO is the length of time the individual in the position has to implement change.
For example, our prior work has noted that it can take 5 to 7 years to fully implement major change initiatives in large public and private sector organizations and to transform related cultures in a sustainable manner. Nonetheless, when we reported on this matter in 2004, the median tenure for permanent CIOs who had completed their time in office was just under 2 years. Tenure at the CIO position has remained almost the same since we last reported. Specifically, the median tenure for permanent federal agency CIOs was about 25 months for those who served between 2004 and 2011. However, the number of CIOs who stayed in office at least 3 years declined from 35 percent in 2004 to 25 percent in 2011. (See table 4 for a comparison of CIO tenures from 1996 to 2004 and 2004 to 2011; see app. V for figures depicting the tenure for each of the CIOs at the agencies in our review between 2004 and 2011 and a table showing various statistical analyses on CIO tenure.) We previously reported on factors that affected the tenure of CIOs, which included the stressful nature of the position and whether or not CIOs were political or career appointees. The panel of former CIOs for our current study agreed that high stress levels can lead to CIOs leaving the position, as can factors such as retirement and the opportunity to serve as a CIO at a larger agency. However, we found that during the period covered by our current review, political appointees stayed only 4 months less than those in career civil service positions, compared to 13 months less in our 2004 review.

Federal Law Provides Adequate Authority, but Limitations Exist in Implementation for IT Management

As previously discussed, a major goal of the Clinger-Cohen Act was to establish CIOs to advise and assist agency heads in managing IT investments.
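The tenure figures cited above (a median of about 25 months, with 25 percent of CIOs serving at least 3 years) are straightforward to compute from a list of tenures. The sample below is invented for illustration; the report's numbers came from agency records, not from this data.

```python
# Invented tenure data for illustration; the report's figures came from agency
# records, not from this sample.
from statistics import median

tenures_months = [8, 14, 20, 25, 25, 30, 31, 40, 48, 60]  # assumed sample

med = median(tenures_months)
share_3yr = sum(1 for t in tenures_months if t >= 36) / len(tenures_months)

print(f"median tenure: {med} months")         # median tenure: 27.5 months
print(f"served >= 3 years: {share_3yr:.0%}")  # served >= 3 years: 30%
```

Against the 5-to-7-year horizon the report cites for major change initiatives, such a calculation makes the gap between typical tenure and the time needed to implement change concrete.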
In this regard, the agency CIO was given the authority to administer a process to ensure that IT investments are selected, controlled, and evaluated in a manner that increases the likelihood they produce business value and reduce investment-related risk. As part of this process, CIOs are responsible for advising the agency head on whether IT programs and projects should be continued, modified, or terminated. In order to carry out these responsibilities, CIOs should be positioned within their agencies to successfully exercise their authority. Specifically, we have previously noted that CIOs should have a key role in IT investment decision making and budget control. In addition, CIOs require visibility into and influence over programs, resources, and decisions related to the management of IT throughout the agency. Our study did not find convincing evidence that specific legislative changes are needed to improve CIOs’ effectiveness. Rather, we found that CIOs’ ability to carry out their roles, as prescribed in law, has been limited by certain factors that have led to challenges. Specifically, CIOs reported they were hindered in exercising their authority over agency IT budgets, component IT spending, and staff, which our prior work has shown can lead to an inefficient use of funds. IT Budget authority: Although assigned by law with the authority to be accountable for IT management, we found that CIOs faced limitations in their ability to influence IT investment decision making at their agencies. For example, only 9 CIOs responded that their approval was required for the inclusion of all IT investments in their agency’s budget. The remaining 21 CIOs indicated that their explicit approval either was not required or it was required for major IT investments only. Ten of those 21 CIOs indicated they would be more effective if their explicit approval for IT investment decisions was sought by their agency head. 
CIOs said having this ability would reduce the number of unknown or “rogue” systems (i.e., systems not vetted by the CIO office), allow the CIO to identify and eliminate duplicative systems, and resolve technology and security issues earlier in an investment’s lifecycle. Further, 13 of the CIOs in our study did not have the power to cancel funding for IT investments. CIOs that did not have this power told us they would be more effective if they were able to cancel funding for investments because they would then be in a better position to consolidate investments and cut wasteful spending on failing projects. In our previous reviews, we have noted limitations in CIOs’ ability to influence IT investments, which have contributed to long-standing challenges in agencies’ management of IT. For instance, we previously reported that one agency did not provide the department’s CIO with the level of IT spending control that our research at leading organizations and past work at federal departments and agencies have shown is important for effective integration of systems across organizational components. We noted that control over the department’s IT budget was vested primarily with the CIO organizations within each of its component organizations. Consequently, there was an increased risk that component agencies’ ongoing investments would need to be reworked to be effectively integrated and maximize departmentwide value. Component-level IT spending: A significant portion of an agency’s IT funding can be allocated and spent at the component level on commodity IT systems—systems used to carry out routine tasks (e.g., e-mail, data centers, web infrastructure)—in addition to mission-specific systems. Multiple CIOs faced limitations in their ability to influence agency decisions on integrating commodity IT systems throughout their agencies because they did not have control over funding for these systems at the component level. 
According to CIOs, more control over component-level IT funding, including commodity IT and mission-specific systems, could help ensure greater visibility into and influence on the effective acquisition and use of IT. Further, the Federal CIO has called for agencies to place all commodity IT purchases under the purview of the agency CIO, while component mission-specific systems should remain with the component CIO. OMB included centralization of commodity funding under agency CIOs as part of its current IT reform initiatives. Consistent with this, we have reported on the importance of agency CIOs having adequate oversight to ensure that funds being spent on component agency investments will fulfill mission needs. Specifically, at one agency, we found a structured mechanism was not in place for ensuring that component agencies defined and implemented investment management processes that were aligned with those of the department. Because such processes, including reviews of component agency IT investments, were not in place, the agency CIO did not have visibility into a majority of the agency’s discretionary investments and could not ensure the agency’s IT investments were maximizing returns. IT workforce: CIOs also face limitations in their ability to provide input into hiring component-level senior IT managers and other IT staff. Many CIOs in our study faced limitations in performing certain workforce planning activities, such as having direct hiring capability for IT staff, providing input into the hiring of component CIOs, and influencing component agency CIOs’ performance ratings. For example, some CIOs indicated they did not have any input into the hiring of their own staff. In addition, CIOs did not always participate in selections for candidate component CIOs. Further, for a majority of the agencies with component CIOs, the agency CIO did not participate in the component CIOs’ performance reviews. 
Without sufficient influence over the hiring of IT staff or component CIOs’ performance, agency CIOs are limited in their ability to ensure appropriate IT staff are being hired to meet mission needs or component accountability for overall agency priorities and objectives. We have also previously reported on CIOs’ challenges related to IT workforce planning, noting there has been a lack of attention in this area, which has created weaknesses in the federal government’s ability to perform its missions economically, efficiently, and effectively. In addition, in our previous review of CIOs’ roles and responsibilities, we found that about 70 percent of CIOs reported IT workforce planning challenges within their agency. Without addressing CIOs’ lack of influence over IT workforce planning, the government will continue to face challenges in this area, risking further inefficiencies. Most CIOs included in our study and the panel of former CIOs agreed that legislative changes were not needed to improve effectiveness in IT management. However, several CIOs told us their agencies have completed or initiated efforts to increase the influence of the CIO. For example, one agency gave its CIO complete control over the entire IT budget and all IT staff. This CIO told us that this has allowed for rapid, effective changes to be made when necessary on IT issues. Another agency began an agencywide consolidation effort so that the CIO’s responsibility will be delegated to one person to centrally manage IT assets instead of multiple agency CIOs. This agency recently implemented a policy that has given one individual the title of CIO and stated that the CIO will assume oversight, management, ownership, and control of all departmental IT infrastructure assets. Another agency was centralizing decision-making authority in the office of the CIO for addressing troubled IT investments. In addition, one agency conducted a reorganization that placed component CIOs under the agency CIO. 
According to the CIO of that agency, the change has been a great asset to the organization, because it allowed the CIO office to work as a unit, created camaraderie among component CIOs, and reduced duplication of IT investments. In April 2011, the Federal CIO told us that agency CIOs should provide input to the component agency CIOs’ performance review. In addition to these agency-specific efforts, OMB has issued guidance to reaffirm and clarify the organizational, functional, and operational governance framework required within the executive branch for managing and optimizing the effective use of IT. More recently, OMB has taken additional steps to increase the effectiveness of agency CIOs by clarifying their roles and authorities under the current law. For example, its 25 Point Implementation Plan to Reform Federal Information Technology Management called for agency CIOs to shift their focus from policy making and maintaining IT infrastructure to IT portfolio management. According to the plan, agency CIOs will be responsible for identifying unmet agency needs to be addressed by new projects, holding TechStat reviews, and improving or terminating poorly performing projects. After we sent a draft of this report to agencies for comment, OMB issued a memorandum outlining the primary areas of responsibility for federal agency CIOs. The guidance outlines four areas in which the CIO should have a lead role: IT governance, program management, commodity services, and information security. It emphasizes the role of the CIO in driving the investment review process and the CIO’s responsibility over the entire IT portfolio for an agency. In a web log post about the memorandum, the Federal CIO stated that, next year, the administration will ask agencies to report through the President’s Management Council and the CIO Council on implementation of the memo. 
In our view, the guidance is a positive step in reaffirming the importance of the role of CIOs in improving agency IT management. Nonetheless, this guidance does not address the implementation weaknesses we have identified in this and our prior reviews: specifically, that CIOs face significant limitations in their ability to influence IT investment decision making at their agencies and to exercise their statutory authority. The guidance generally instructs agency heads regarding the policies and priorities for CIOs in managing IT that we and others have stressed. However, the guidance does not state a specific requirement for agency heads to empower CIOs to carry out these responsibilities. Additionally, it does not require them to measure and report the progress of CIOs in carrying out these responsibilities and achieving the overall objectives of the IT Reform Plan. Such a requirement is essential to agencies empowering their CIOs to fully and effectively exercise their authority, and ultimately, ensuring that the CIOs are best positioned to be effective leaders in IT management. Without additional clarification and specific measures of accountability in OMB’s guidance, agency CIOs are likely to continue to be hindered in carrying out their responsibilities and achieving successful outcomes in IT management, thus increasing the risk that IT spending will continue to produce mixed results, as we have long reported.

A Structured Process Could Improve Sharing of Lessons Learned within Agencies

OMB guidance requires and best practices suggest that agencies document lessons learned, and we have previously reported on the importance of their collection and dissemination. The use of lessons learned is a principal component of an organizational culture committed to continuous improvement. Sharing such information serves to communicate acquired knowledge more effectively and to ensure that beneficial information is factored into planning, work processes, and activities.
Lessons learned can be based on positive experiences or on negative experiences that result in undesirable outcomes. Documenting lessons learned provides a powerful method of sharing successful ideas for improving work processes and increasing cost-effectiveness, and of preserving those ideas for future use. To facilitate the sharing of best practices and lessons learned relating to IT management across the federal government, the CIO Council established the Management Best Practices Committee. The committee works to identify successful information technology best practices being implemented in industry, government, and academia and shares them with agency CIOs. As part of its mission, in April 2011, the committee launched a best practices information-sharing platform in the form of a website to which agencies can contribute case studies of best practices. Federal agencies have begun to contribute by submitting examples depicting best practices relating to a range of topics including vendor communication and contract management; the consolidation of multiple systems into an enterprise solution through the use of cloud services; and program manager development. As of July 2011, the CIO Council website featured 10 case studies submitted by 10 agencies describing best practices. For example, one agency faced challenges with distributing technical support to 27 organizational units. After the agency head directed the consolidation of IT support services under the CIO, the agency gained a better understanding of spending on services and equipment needed to provide IT support. In another example, an agency had been operating under separate e-mail systems, which prevented it from maximizing operational efficiency and productivity.
Specifically, the agency faced high costs for maintaining individual systems; difficulty sending broadcast e-mails across the entire department, thus preventing the e-mails from being received in a timely fashion; difficulty obtaining accurate and complete contact information for all employees in one global address list; and difficulty coordinating calendar appointments. In order to address these challenges, the agency utilized a cloud-based service solution, which the agency explained would result in lower costs per user, an improved security posture, and a unified communication strategy. In addition, agency CIOs told us their agency had implemented changes based upon lessons learned that have improved the effectiveness of the CIO. For example, while several CIOs implemented investment review boards or similar governance mechanisms, three CIOs explained that at their agency, senior-level officials, including deputy secretaries, and in one instance, an undersecretary, chaired these boards, which provided higher visibility over the selection, control, and evaluation of IT investments. Additionally, one CIO explained that implementing an enterprisewide licensing solution to optimize the agency’s buying power resulted in a savings of $200 million. One told us about improved effectiveness in information security through the use of a centralized information security center. Specifically, this CIO stated that all agency information went through this center, which provides real-time monitoring throughout agency systems. This CIO explained that the security center has helped to reduce the impact of intrusions to the agency’s systems. Nonetheless, although the CIO Council has established the management best practices committee and corresponding information-sharing platform to identify lessons learned, 19 CIOs said their agency did not have a process in place for capturing and documenting lessons learned and best practices.
Two CIOs indicated that their agency did not have such a process due to a shortage of resources or because they did not see the development of such a process as being their responsibility. Without structured processes for capturing and documenting these lessons learned, agencies risk both losing the ability to share knowledge acquired with CIOs’ experience and increasing the time required for newly hired CIOs to become effective. Additionally, the lack of internal documented processes for capturing lessons learned within agencies has the potential to inhibit the Management Best Practices Committee’s ability to effectively identify, document, and disseminate individual agencies’ lessons learned and best practices throughout the federal government. By effectively identifying, documenting, and disseminating lessons learned internally and externally, agencies can mitigate risk and track successful ideas for improving work processes and cost-effectiveness that can be utilized in the future. Conclusions As in 2004, federal agency CIOs currently are not consistently responsible for all of the 13 areas assigned by statute or identified as critical to effective IT management. While the majority of CIOs are primarily responsible for key IT management areas, they are less likely to have primary responsibility for information management duties. In this regard, CIOs spend two-thirds or more of their time in the IT management areas and attach greater importance to these areas compared with the information management areas. Notwithstanding the focus on IT management, CIOs have not always been empowered to be successful. Despite the broad authority given to CIOs in federal law, these officials face limitations that hinder their ability to effectively exercise this authority, which has contributed to many of the long-standing IT management challenges we have found in our work. 
These limitations, which include control and influence over IT budgets, commodity IT investments, and staffing decisions, are consistent with issues we have previously identified that prevented CIOs from advising and influencing their agencies in managing IT for successful outcomes. While OMB’s guidance reaffirms CIO authorities and responsibilities to influence IT outcomes, it does not establish measures of accountability. Having actionable measures would help ensure that CIOs are empowered to successfully carry out their responsibilities under both the law and the IT Reform Plan. Finally, while agency CIOs told us they had implemented practices they believed have improved the management of IT, they had not established processes to document agency-specific lessons learned that could be shared within the agency. Not doing so increases the likelihood of new CIOs making the same mistakes as those they are replacing, while establishing such a mechanism could better enable succession planning and knowledge transfer between CIOs. Recommendations for Executive Action To ensure that CIOs are better able to carry out their statutory role as key leaders in managing IT, we recommend the Director of OMB take the following three actions: Issue guidance to agencies requiring that CIOs’ authorities and responsibilities, as defined by law and by OMB, are fully implemented, taking into account the issues raised in this report. Establish deadlines and metrics that require agencies to demonstrate the extent to which their CIOs are exercising the authorities and responsibilities provided by law and OMB’s guidance. Require agencies to identify and document internal lessons learned and best practices for managing information technology. Agency Comments and Our Evaluation We received comments on a draft of this report from OMB and from 5 of the 30 agencies included in our study.
In oral comments, OMB’s Deputy Administrator for e-Gov and its Policy Analyst for e-Gov, within the Office of Electronic Government and Information Technology, generally agreed with our findings and stated that the agency had taken actions that addressed our recommendations. Specifically, with regard to our first recommendation, the officials said they believed OMB’s August 8, 2011, memorandum discussing CIOs’ authorities aligned with, and reflected the beginning of a process that would help address, the concerns noted in our report. Thus, they believed our recommendation had been addressed with OMB’s issuance of the memorandum. With regard to our second recommendation that called for OMB to establish an appropriate reporting mechanism to ensure compliance with the guidance, the officials pointed to a recent web log post about the August memorandum. In the post, the Federal CIO stated that, in 2012, the administration will ask agencies to report through the President’s Management Council and the CIO Council on implementation of the memorandum. We believe the guidance reflected in OMB’s August 2011 memorandum is a positive step in reaffirming the importance of the role of CIOs in improving agency IT management and toward addressing the concerns that are the basis for our first recommendation. It highlights the responsibilities of CIOs in the four areas of IT governance, program management, commodity services, and information security. These responsibilities are consistent with requirements in law and best practices. Further, OMB’s planned use of the councils for agency reporting on implementation of the memorandum could be a useful mechanism for helping to ensure CIOs’ accountability for effectively managing IT. However, neither the guidance nor the planned use of the councils, as referenced, identify requirements that would hold agencies accountable for ensuring effective CIO leadership in the four IT management areas. 
Specifically, as pointed out earlier in this report, the guidance does not articulate a requirement for agencies to measure and report the progress of CIOs in carrying out their responsibilities and authorities. Such a requirement is essential to ensuring that agency CIOs are best positioned to be effective leaders in IT management. As such, we stand by our second recommendation but have revised it to more explicitly highlight the need for OMB to establish deadlines and metrics that require agencies to demonstrate the extent to which CIOs are exercising their authorities and responsibilities. With regard to our third recommendation, that OMB require agencies to establish processes for documenting internal lessons learned and best practices, the officials believed this recommendation was addressed by existing guidance requiring agencies to document lessons learned for post-implementation reviews of IT projects. However, as discussed earlier, most of the agencies in our study reported that they had not established processes for documenting internal lessons learned. Further, the guidance to which OMB’s officials referred is limited to lessons learned for post- implementation reviews of specific IT projects and does not include the broader spectrum of IT management areas, such as program management and information security. As such, we continue to believe that agencies could benefit from having established internal processes for documenting lessons learned across the broader spectrum of IT management areas and, therefore, believe our recommendation is warranted. Although we made no specific recommendations to the 30 agencies included in our review, we sent each agency a draft of the report for comment. Twenty-five of the agencies told us they had no comments on the draft report, while five agencies provided e-mail or written comments on the report, as follows. 
In written comments from the Department of Defense CIO, the department concurred with our recommendations to OMB. However, the CIO also stated that, although our report did not identify needed legislative changes and found that existing law generally provides sufficient authority, the department believes there are legislative opportunities to clarify and strengthen CIO authorities that should be pursued, such as addressing overlap in responsibilities between the CIO and other officials. The department stated that it was taking actions to address this issue internally. As discussed earlier in this report, the effectiveness of agency CIOs depends in large measure on their having clear roles and authorities. As noted, however, we found no evidence indicating that legislative changes are needed to achieve this. Rather, our study results determined that these officials face limitations that hinder their ability to effectively exercise their current authorities. Accordingly, agencies have an important opportunity to address these limitations by empowering the CIOs to fully and effectively exercise their authority and ensuring that the CIOs are best positioned to be effective leaders in managing IT. Our recommendations to OMB are aimed at ensuring that CIOs effectively exercise the authority and responsibilities that they have been given. DOD’s comments are reprinted in appendix VI. The Department of Homeland Security’s Director of Departmental GAO/Office of Inspector General (OIG) Liaison Office provided written comments in which the department indicated agreement with our findings and recommendations. In the comments, the department said it is committed to working with OMB to address the challenges agency CIOs face and increase the effectiveness of its efforts. These comments are reproduced in appendix VII. In written comments from the CIO, the Office of Personnel Management agreed with our recommendations.
The agency included examples of actions the agency has taken to elevate the CIO position and bring it into greater alignment with the Clinger-Cohen Act. The Office of Personnel Management’s written comments are reproduced in appendix VIII. In an e-mail response from the Office of the Chief Information Officer, the United States Agency for International Development said the recommendations were sound and would assist agencies in ensuring that CIOs are better able to carry out their statutory role as key leaders in managing IT. In an e-mail response from the Deputy CIO, the Department of Commerce stated that it had no major issues with the recommendations and conclusions and described the report as an informative assessment of the practices and challenges faced by federal agency CIOs. Beyond the aforementioned comments, two agencies—the Social Security Administration and the Department of Health and Human Services—provided technical comments on the report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees, the Director of the Office of Management and Budget, and the Secretaries of Agriculture, the Air Force, the Army, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, the Navy, State, Transportation, the Treasury, and Veterans Affairs; the Attorney General; the administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and U.S. 
Agency for International Development; the commissioners of the Nuclear Regulatory Commission and the Social Security Administration; the directors of the National Science Foundation and Office of Personnel Management; the Chief Executive Officer of the Corporation for National and Community Service; and the chairmen of the Federal Labor Relations Authority and Commodity Futures Trading Commission. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6304 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. Key contributors to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) determine the current roles and responsibilities of federal agency Chief Information Officers (CIO) in managing information and technology; (2) determine what potential modifications to the Clinger-Cohen Act and related laws could be made to enhance CIOs’ authority and effectiveness; and (3) identify key lessons learned by federal agency CIOs in managing information and technology. To address the objectives of this review, we collected and reviewed previous GAO reports, including our 2004 report on CIOs’ roles and responsibilities, as well as various other reports that discussed the status of agency CIOs’ roles and responsibilities. This included reports from Gartner and Deloitte on the role of federal CIOs and OMB’s 25 Point Implementation Plan to Reform Federal Information Technology Management. We also interviewed the Partnership for Public Service’s Director of the Strategic Advisors to Government Executives Program for mentoring federal executives, including agency CIOs.
We then developed and administered a questionnaire to the CIOs of 27 major departments and agencies in our 2004 review and of three small, independent agencies. We selected the three independent agencies based on whether they had a CIO in place when our review began and the size of the agency’s 2011 budget estimates. Using the questionnaire, we requested information on whether each CIO was responsible for each of 13 information technology (IT) and information management areas that we identified as either required by statute or critical to effective IT management in our 2004 report. In addition, we asked about CIOs’ reporting relationships, professional and educational backgrounds, tenure, and lessons learned in managing information and technology. In addition, we collected and reviewed written position descriptions for each agency’s CIO, deputy CIO, and other key officials responsible for the 13 IT and information management areas; the resumes or curricula vitae of the current CIOs; each agency’s current organization chart(s) depicting the CIO’s position relative to the head of the agency, other senior officials, and component CIOs, if applicable; and functional statements for offices that have responsibilities in IT and information management. We also asked each agency to supply the name, beginning and ending dates in office, and circumstances (e.g., whether they served in an acting or permanent capacity) of each of the individuals who had served as CIO at the agency since 2003. Further, we collected and reviewed any supporting documentation of recent departmental changes. We then interviewed each of the CIOs who were in place at the time of our review (see app.
II for a list of the CIOs) in order to validate responses from the questionnaire and to obtain an understanding of their views on the 13 IT and information management areas, including roles and responsibilities, changes needed to enhance authority and effectiveness, and lessons learned for managing information and technology. From the questionnaire and interview responses, we analyzed CIO responses to determine their current roles and responsibilities and reporting relationships with agency heads. We then compared the responses to those identified in our 2004 report. Additionally, we assessed the CIOs’ reported time spent in the 13 IT and information management areas of responsibility and the importance of each area to them, as well as their views on changes needed to improve their authority and effectiveness. We also reviewed CIOs’ qualifications and current and former CIOs’ tenure. Further, we analyzed CIO responses to questions concerning changes needed to improve their authority and effectiveness and compared them to the authority described in federal IT laws. We supplemented our analysis by reviewing our prior reports related to agency CIOs’ authority and IT management challenges. We also analyzed CIOs’ comments related to lessons learned that they have used to improve IT management at their agency. Further, we analyzed OMB IT management reform efforts, including its August 2011 memorandum on CIO authorities, and status updates related to agency CIOs and lessons learned initiatives. To complement information we obtained from current CIOs, we held a panel discussion with nine former CIOs of federal agencies. The purpose of this discussion was to elicit views regarding the statutory responsibilities given to federal CIOs, lessons learned by CIOs in managing information and technology, and areas in which current legislation could be revised to enhance CIOs’ authority and effectiveness. Appendix III lists these panelists. 
Finally, we met with the Federal CIO to obtain his views on priorities and responsibilities for CIOs and to discuss potential modifications to the Clinger-Cohen Act and related laws that could enhance CIOs’ authority and effectiveness. Appendix II: Chief Information Officers Interviewed Agency/department Commodity Futures Trading Commission (CFTC) Corporation For National and Community Service (CNCS) Department of Health and Human Services (HHS) Michael W. Carleton Department of Homeland Security (DHS) Department of Housing and Urban Development (HUD) Department of Transportation (DOT) Department of Veterans Affairs (VA) Environmental Protection Agency (EPA) Federal Labor Relations Authority (FLRA) General Services Administration (GSA) National Aeronautics and Space Administration (NASA) National Science Foundation (NSF) Nuclear Regulatory Commission (NRC) Office of Personnel Management (OPM) Small Business Administration (SBA) Social Security Administration (SSA) U.S. Agency for International Development (USAID) Appendix III: Former Agency CIO Panel Participants In March 2011, we convened a panel of former federal agency chief information officers, during which we discussed CIOs’ roles and responsibilities, reporting relationships, and any potential changes needed to legislation. Table 5 provides the former and current titles of these officials. Appendix IV: Summary of CIOs’ Information Management and Technology Responsibilities The following summarizes information gathered from CIOs related to responsibilities in the 13 information management and information technology management areas discussed in this report. IT Strategic Planning CIOs are responsible for strategic planning for all information and information technology management functions [Paperwork Reduction Act]. Of the 30 CIOs we surveyed, all indicated they were responsible for ensuring compliance with laws related to IT strategic planning within their agency. In 2004, all 27 CIOs surveyed also indicated responsibility for IT strategic planning. All CIOs reported that the CIO should be responsible for IT strategic planning. Twenty-nine of the 30 CIOs reported that IT strategic planning was important to carrying out their mission. The CIO who reported that IT strategic planning was not important said this area was being executed properly and did not require much attention or guidance. Table 6 provides a summary of CIO responses regarding IT strategic planning. CIOs are responsible for capital planning and investment management and for ensuring accountability and transparency in these activities [Clinger-Cohen Act]. Of the 30 CIOs we surveyed, all of them indicated they were responsible for capital planning and investment management activities at their agency. This is consistent with the results of our 2004 report, which found that all 27 CIOs also indicated responsibility for capital planning and investment management. All 30 of the CIOs reported they thought the CIO should be responsible for capital planning and investment management. All 30 CIOs reported that capital planning and investment management was “very important” or “important” to carrying out their mission. Table 8 provides a summary of CIO responses regarding capital planning and investment management. CIOs are responsible for ensuring agency information security to protect information and systems [Paperwork Reduction Act, Federal Information Security Management Act, and Clinger-Cohen Act]. All 30 CIOs indicated they were responsible for ensuring compliance with information security best practices and related laws at their agency. This is consistent with the results of our 2004 report, which found that all of the 27 CIOs surveyed indicated being responsible for information security. Of the 30 agencies that provided responses, all 30 CIOs reported that they thought the CIO should be responsible by law for information security.
Twenty-nine of the 30 CIOs reported that information security was “very important” to carrying out their mission. Only one CIO ranked information security as “somewhat important” because his goal is to move the agency toward a risk-based approach that uses secure, reliable, and cost-effective technology. Table 9 provides a summary of CIO responses regarding information security. CIOs are responsible for developing and maintaining the business and technology blueprint that links an agency’s strategic plan to IT programs and supporting system implementations [Clinger-Cohen Act]. Of the 30 CIOs we surveyed, all 30 indicated they were responsible for enterprise architecture-related activities at their agency. This is consistent with the results of our 2004 report, which found that 27 of 27 CIOs also indicated responsibility for enterprise architecture. All 30 CIOs interviewed reported that they believed the CIO should be responsible for enterprise architecture. Twenty-eight of the 30 CIOs reported that enterprise architecture was “important” or “very important” to carrying out their mission, with one of the remaining two identifying it as “somewhat important” and the other labeling it “not very important.” For example, one CIO ranked enterprise architecture as very important based on the maturity of the agency’s abilities within the area. The CIO explained that, since their enterprise architecture was not as mature as they would like it to be, they viewed it as currently very important. The CIO who reported that enterprise architecture was somewhat important for his mission clarified that this was because the existing activities related to enterprise architecture were being properly executed and therefore required less focus.
The remaining CIO who responded that enterprise architecture was “not very important” explained that enterprise architecture was not essential to completing the agency’s mission and therefore having a formal enterprise architecture was less important at the agency. Table 10 provides a summary of CIO responses regarding enterprise architecture. Of the 30 CIOs we surveyed, 9 indicated that they were responsible for information disclosure at their agency. This is generally consistent with our 2004 findings, in which 9 of 27 CIOs indicated responsibility for information disclosure. Appendix V: CIO Tenure at Each Agency Appendix VI: Comments from the Department of Defense Appendix VII: Comments from the Department of Homeland Security Appendix VIII: Comments from the Office of Personnel Management Appendix IX: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, key contributions were made to this report by Cynthia J. Scott (Assistant Director); Michael Alexander; Cortland Bradford; Virginia Chanley; James Crimmer, Jr.; Neil Doherty; Ashfaq Huda; Lee McCracken; David Plocher; David A. Powner; Meredith R. Raymond; John M. Resser; Eric Trout; Christy Tyson; Walter Vance; and Merry Woo. The federal government invests billions in information technology (IT) each year to help agencies accomplish their missions. Federal law, particularly the Clinger-Cohen Act of 1996, has defined the role of Chief Information Officer (CIO) as the focal point for IT management within agencies. Given the longstanding challenges the government faces in managing IT and the continued importance of the CIO, GAO was asked to (1) determine the current roles and responsibilities of CIOs, (2) determine what potential modifications to the Clinger-Cohen Act and related laws could be made to enhance CIOs' authority and effectiveness, and (3) identify key lessons learned by CIOs in managing IT.
To do this, GAO administered a questionnaire to 30 CIOs, compared responses to legislative requirements and the results of a 2004 GAO study, interviewed current CIOs, convened a panel of former agency CIOs, and spoke with the Office of Management and Budget's (OMB) Federal CIO. CIOs do not consistently have responsibility for 13 major areas of IT and information management as defined by law or deemed as critical to effective IT management, but they have continued to focus more attention on IT management-related areas. Specifically, most CIOs are responsible for seven key IT management areas: capital planning and investment management; enterprise architecture; information security; IT strategic planning; "e-government" initiatives; systems acquisition, development, and integration; and IT workforce planning. By contrast, CIOs are less frequently responsible for information management duties such as records management and privacy requirements, which they commonly share with other offices or organizations within the agency. In this regard, CIOs report spending over two-thirds of their time on IT management responsibilities, and less than one-third of their time on information management responsibilities. CIOs also report devoting time to other responsibilities such as addressing infrastructure issues and identifying emerging technologies. Further, many CIOs serve in positions in addition to their role as CIO, such as human capital officer. In addition, tenure at the CIO position has remained at about 2 years. Finally, just over half of the CIOs reported directly to the head of their respective agencies, which is required by law. The CIOs and others have stressed that a variety of reporting relationships in an agency can be effective, but that CIOs need to have access to the agency head and form productive working relationships with senior executives across the agency in order to carry out their mission.
Federal law provides CIOs with adequate authority to manage IT for their agencies; however, some limitations exist that impede their ability to exercise this authority. Current and former CIOs, as well as the Federal CIO, did not identify legislative changes needed to enhance CIOs' authority and generally felt that existing law provides sufficient authority. Nevertheless, CIOs do face limitations in exercising their influence in certain IT management areas. Specifically, CIOs do not always have sufficient control over IT investments, and they often have limited influence over the IT workforce, such as in hiring and firing decisions and the performance of component-level CIOs. More consistent implementation of CIOs' authority could enhance their effectiveness in these areas. OMB has taken steps to increase CIOs' effectiveness, but it has not established measures of accountability to ensure that responsibilities are fully implemented. CIOs identified a number of best practices and lessons learned for more effectively managing IT at agencies, and the Federal CIO Council has established a website to share this information among agencies. Agencies have begun to share information in the areas of vendor communication and contract management; the consolidation of multiple systems into an enterprise solution through the use of cloud services; and program manager development. However, CIOs have not implemented structured agency processes for sharing lessons learned. Doing so could help CIOs share ideas across their agencies and with their successors for improving work processes and increasing cost effectiveness. |
Background Responsibility for Secure Flight Operations Several entities located within TSA’s Office of Intelligence and Analysis share responsibility for administering the Secure Flight program. Among these are the Operations Strategy Mission Support Branch, which acts as the program’s lead office; the Secure Flight Operations Branch, which oversees passenger vetting and other operational activities; and the Systems Management and Operations Branch and the Secure Flight Technology Branch, both of which focus on technology-related issues. Collectively, these entities received about $93 million to carry out program operations in fiscal year 2014. Overview of Secure Flight Matching and Screening Processes at Implementation The Secure Flight program, as implemented pursuant to the 2008 Secure Flight Final Rule, requires U.S.- and foreign-based commercial aircraft operators traveling to, from, within, or overflying the United States, as well as U.S. commercial aircraft operators with international point-to-point flights, to collect information from passengers and transmit that information electronically to TSA. This information, known collectively as Secure Flight Passenger Data (SFPD), includes personally identifiable information, such as full name, gender, date of birth, passport information (if available), and certain nonpersonally identifiable information, such as itinerary information and the unique number associated with a travel record (record number locator). Since implementation began in January 2009, the Secure Flight system has identified high-risk passengers by matching SFPD against the No Fly List and the Selectee List, subsets of the Terrorist Screening Database (TSDB), the U.S. government’s consolidated watchlist of known or suspected terrorists maintained by the Terrorist Screening Center, a multiagency organization administered by the Federal Bureau of Investigation (FBI). 
(We discuss screening activities initiated after TSA began implementing Secure Flight in 2009 later in this report.) To carry out this matching, the Secure Flight system conducts automated matching of passenger and watchlist data to identify a pool of passengers who are potential matches to the No Fly and Selectee Lists. Next, the system compares all potential matches against the TSA Cleared List, a list of individuals who have applied to, and been cleared through, the DHS redress process. Passengers included on the TSA Cleared List must submit a redress number when making a reservation, which allows the Secure Flight system to recognize and clear them. After the system performs automated matching, Secure Flight analysts are to conduct manual reviews of potential matches, which may involve consulting other classified and unclassified data sources, to rule out individuals who are not actually those included on the No Fly and Selectee Lists. After the completion of manual reviews, TSA precludes passengers who remain potential matches to certain lists from receiving their boarding passes. These passengers, for whom air carriers receive a “passenger inhibited” message from Secure Flight, must undergo a resolution process at the airport. This process may involve air carriers sending updated passenger information back to Secure Flight for automated rematching or placing a call to Secure Flight for assistance in resolving the match. At the conclusion of automated and manual screening processes (including the airport resolution process), air carriers may not issue a boarding pass to a passenger until they receive from Secure Flight a final screening determination. These determinations include a “cleared” message, for passengers found not to match a watchlist, and a “selectee” message, for matches to the Selectee List who are to be designated by air carriers for enhanced screening.
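The matching-and-resolution sequence just described can be sketched as a decision flow. The sketch below is purely illustrative: the list contents, data shapes, and exact-match rules are hypothetical stand-ins, since the real system relies on fuzzy name matching and classified sources rather than anything this simple.

```python
# Illustrative sketch of the Secure Flight screening flow described above.
# All names, list contents, and matching rules here are hypothetical.

NO_FLY = {("DOE, JOHN", "1980-01-01")}
SELECTEE = {("ROE, JANE", "1975-06-15")}
CLEARED_REDRESS = {"R12345"}  # redress numbers on the TSA Cleared List

def automated_match(passenger, watchlist):
    """Toy stand-in for automated matching: flag same-name records."""
    return {rec for rec in watchlist if rec[0] == passenger["name"]}

def manual_review(passenger, candidates):
    """Toy stand-in for analyst review: confirm on date-of-birth match."""
    return {rec for rec in candidates if rec[1] == passenger["dob"]}

def screen(passenger):
    candidates = automated_match(passenger, NO_FLY | SELECTEE)
    if not candidates:
        return "cleared"                 # standard screening
    if passenger.get("redress") in CLEARED_REDRESS:
        return "cleared"                 # cleared through DHS redress
    confirmed = manual_review(passenger, candidates)
    if not confirmed:
        return "cleared"                 # ruled out by manual review
    if confirmed & NO_FLY:
        return "inhibited"               # final: no boarding pass issued
    return "selectee"                    # enhanced screening

print(screen({"name": "SMITH, ANN", "dob": "1990-02-02"}))  # cleared
print(screen({"name": "DOE, JOHN", "dob": "1980-01-01"}))   # inhibited
print(screen({"name": "ROE, JANE", "dob": "1975-06-15"}))   # selectee
```

The ordering mirrors the report’s description: automated matching first, then the Cleared List check, then manual review, with the “passenger inhibited” outcome final only for No Fly matches.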
For passengers matching the No Fly List, Secure Flight’s initial “passenger inhibited” message is the final determination, and the air carrier may not issue a boarding pass (see fig. 1). Passenger Screening at Airport Security Checkpoints In general, passengers undergo one of three types of screening, based on the Secure Flight determinations shown on boarding passes—standard screening, enhanced screening for selectees, and expedited screening for low-risk passengers. Standard screening typically includes a walk-through metal detector or Advanced Imaging Technology screening, which is to identify objects or anomalies concealed under clothing, and X-ray screening for the passenger’s accessible property. In the event a walk-through metal detector triggers an alarm, the Advanced Imaging Technology identifies an anomaly, or the X-ray machine identifies a suspicious item, additional security measures, such as pat-downs, explosives trace detection searches (which involve a device certified by TSA to detect explosive particles), or additional physical searches may ensue as part of the resolution process. Enhanced screening includes, in addition to the procedures applied during a typical standard screening experience, a pat-down and an explosives trace detection search or physical search of the interior of the passenger’s accessible property, electronics, and footwear. Expedited screening typically includes walk-through metal detector screening and X-ray screening of the passenger’s accessible property, but unlike in standard screening, travelers do not have to, among other things, remove their belts, shoes, or light outerwear. Passengers not designated for enhanced or expedited screening generally receive standard screening unless, for example, identified by TSA for a different type of screening through the application of random and unpredictable security measures at the screening checkpoint.
Secure Flight Initially Identified Passengers on Terrorist Watchlists and Now Also Differentiates Passengers Based on Risk Since January 2009, the Secure Flight program has changed from one that identifies high-risk passengers by matching them against the No Fly and Selectee Lists to one that assigns passengers a risk category: high risk, low risk, or unknown risk. Specifically, Secure Flight now identifies passengers as high risk if they are matched to watchlists of known or suspected terrorists or other lists developed using certain high-risk criteria, as low risk if they are deemed eligible for expedited screening through TSA Pre✓™—a 2011 initiative to preapprove passengers for expedited screening—or through the application of low-risk rules, and as unknown risk if they do not fall within the other two risk categories. To separate passengers into these risk categories, TSA utilizes lists in addition to the No Fly and Selectee Lists, and TSA has adapted the Secure Flight system to perform risk assessments, a new system functionality that is distinct from both watchlist matching and matching against lists of known travelers. At airport checkpoints, passengers identified as high risk receive enhanced screening, passengers identified as low risk are eligible for expedited screening, and passengers identified as unknown risk generally receive standard screening. Secure Flight Is Using New High-Risk Lists for Screening, Including Two Lists of Individuals Who Meet Various Threat Criteria, but Who May Not Be Known or Suspected Terrorists Since January 2009, TSA has been using new high-risk lists for screening, including two lists to identify passengers who may not be known or suspected terrorists, but who—based on TSA’s application of threat criteria—should receive enhanced screening, and an expanded list of known or suspected terrorists in the TSDB.
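The three-way categorization and its checkpoint consequences reduce to a simple precedence rule: a high-risk designation wins, then a low-risk one, and everyone else is unknown risk. A minimal sketch, with invented function and variable names:

```python
# Minimal sketch of the risk-category precedence described above.
# Names are illustrative, not TSA's.

SCREENING_FOR = {"high": "enhanced", "low": "expedited", "unknown": "standard"}

def risk_category(on_high_risk_list, eligible_low_risk):
    """High-risk designations take precedence over low-risk ones."""
    if on_high_risk_list:      # watchlist or rules-based match
        return "high"
    if eligible_low_risk:      # low-risk list match or low-risk rules
        return "low"
    return "unknown"

print(SCREENING_FOR[risk_category(False, True)])   # expedited
print(SCREENING_FOR[risk_category(True, True)])    # enhanced
print(SCREENING_FOR[risk_category(False, False)])  # standard
```

The second call illustrates the precedence: a passenger who somehow matched both a high-risk list and a low-risk list would still receive enhanced screening under this rule.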
As initially implemented under the October 2008 Secure Flight Final Rule, the program matched the names of passengers against the No Fly and Selectee List components of the TSDB. According to the rule, comparing passenger information against the No Fly and Selectee components of the TSDB (versus the entire TSDB) would be generally satisfactory during normal security circumstances to counter the security threat. The rule also provides that TSA may use the larger set of watchlists maintained by the federal government as warranted by security considerations, for example, if TSA learns that flights on a particular route may be subject to an increased security risk. In such cases, TSA may compare passenger information on some or all flights on that route against the full TSDB or other government databases, such as intelligence or law enforcement databases. Rules-Based Watchlists After the December 25, 2009, attempt to detonate a concealed explosive on board a U.S.-bound flight by an individual who was not a known or suspected terrorist in the TSDB, TSA sought to identify ways to mitigate unknown threats—individuals not in the TSDB for whom TSA has determined enhanced screening would be prudent. To that end, TSA worked with CBP to develop new lists for Secure Flight screening, and in April 2010, began using the lists to identify and designate for enhanced screening passengers who may represent unknown threats. To create these lists, TSA leveraged CBP’s access to additional data submitted by passengers traveling internationally and the capabilities of CBP’s Automated Targeting System-Passenger (ATS-P)—a tool originally created and used by CBP that targets passengers arriving at or departing the United States by comparing their information against law enforcement, intelligence, and other enforcement data using risk-based targeting scenarios and assessments.
Specifically, analysts within the Intelligence and Analysis Division of TSA’s Office of Intelligence and Analysis review current intelligence to identify factors that may indicate an elevated risk for a passenger. TSA creates rules based on these factors and provides them to CBP. CBP then uses ATS-P to identify passengers who correspond with the rules and provides TSA information on them in the form of a list. Upon receiving the list, TSA creates another rules-based list—a subset of the larger rules-based list—based on additional criteria. Through Secure Flight screening, TSA designates passengers matching either rules-based list as selectees for enhanced screening. The Expanded Selectee List In addition to the two ATS-P-generated lists, Secure Flight incorporated an additional list derived from the TSDB into its screening activities in order to designate more passengers who are known or suspected terrorists as selectees for enhanced screening. Specifically, in April 2011, TSA began conducting watchlist matching against an Expanded Selectee List that includes all records in the TSDB with a full name (first name and surname) and full date of birth that meet the Terrorist Screening Center’s reasonable suspicion standard to be considered a known or suspected terrorist, but that are not already included on the No Fly or Selectee List. TSA began using the Expanded Selectee List in response to the December 25, 2009, attempted attack, as another measure to secure civil aviation. Collectively, the No Fly, Selectee, and Expanded Selectee Lists are used by Secure Flight to identify passengers from the government’s consolidated database of known or suspected terrorists. 
Secure Flight Is Identifying Low-Risk Passengers by Screening against TSA Pre✓™ Lists and Conducting Passenger Risk Assessments Since October 2011, TSA has also begun using Secure Flight to identify passengers as low risk, and therefore eligible for expedited screening, through the use of new screening lists and by performing passenger risk assessments. According to TSA, identifying more passengers as eligible for expedited screening will permit TSA to reduce screening resources for low-risk travelers, thereby enabling TSA to concentrate screening resources on higher-risk passenger populations. In August 2013, TSA officials stated that this approach would support the agency’s goal of identifying 25 percent of airline passengers as eligible for expedited screening by the end of calendar year 2013. As of May 2014, TSA officials stated the goal had been revised to identify 50 percent of airline passengers as eligible for expedited screening by the end of calendar year 2014. According to officials within TSA’s Office of Chief Counsel, TSA’s efforts to identify low-risk travelers also fulfill a stated goal of the 2008 Secure Flight rule to implement a “known traveler” concept that would allow the federal government to assign a unique number to known travelers for whom the federal government had conducted a threat assessment and determined did not pose a security threat. TSA Pre✓™ Lists of Preapproved Low-Risk Travelers (We expect to issue a report on the TSA Pre✓™ program later this year.) In October 2011, TSA began screening against its first TSA Pre✓™ list, which included members of existing DHS trusted traveler programs (e.g., CBP’s Global Entry). Since then, TSA has established separate TSA Pre✓™ lists for additional low-risk passenger populations, including members of the U.S. armed forces, Congressional Medal of Honor Society members, and members of the Homeland Security Advisory Council (see app. II for a full listing of TSA Pre✓™ lists used by Secure Flight for screening).
To identify these and other low-risk populations, TSA coordinated and entered into agreements with a lead agency or outside entity willing to compile and maintain the associated TSA Pre✓™ list. Members of the list-based, low-risk populations participating in TSA Pre✓™ are provided a unique known traveler number, and their personal identifying information (name and date of birth), along with the known traveler number, is included on lists used by Secure Flight for screening. In addition to TSA Pre✓™ lists sponsored by other agencies or entities, TSA created its own TSA Pre✓™ list composed of individuals who apply to be preapproved as low-risk travelers through the TSA Pre✓™ Application Program, an initiative launched in December 2013. The program is another DHS trusted traveler program, in which DHS collects a fee to conduct a background investigation for applicants. Applicants approved as low risk through the program receive a known traveler number and are included on an associated TSA Pre✓™ Application Program list used by Secure Flight for screening. To be recognized as low risk by the Secure Flight system, individuals on TSA Pre✓™ lists must submit their known traveler numbers when making a flight reservation. As of April 2014, there were about 5.6 million individuals who, through TSA Pre✓™ program lists, were eligible for expedited screening. TSA Pre✓™ Risk Assessments To further increase the number of passengers identified as low risk (and therefore TSA Pre✓™ eligible), TSA adapted the Secure Flight system to begin assigning passengers risk scores to designate them as low risk for a specific flight. Beginning in 2011, TSA piloted a risk-based security program to identify certain members of participating airlines’ frequent flier programs as low risk, and therefore eligible for expedited screening for a specific flight.
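As described above, list-based recognition hinges on the passenger submitting a known traveler number whose list record matches the identifying data in the reservation. A hypothetical lookup makes the dependency concrete; the list format and matching rule here are invented, not TSA’s.

```python
# Hypothetical sketch of known-traveler-number recognition. The real list
# formats and matching criteria are not public; this is illustrative only.

TSA_PRE_LIST = {  # known traveler number -> (name, date of birth)
    "KT123456": ("DOE, JOHN", "1980-01-01"),
}

def recognized_as_low_risk(passenger):
    """A passenger is recognized only if the submitted known traveler
    number exists and its list record matches the reservation data."""
    record = TSA_PRE_LIST.get(passenger.get("ktn"))
    return record == (passenger["name"], passenger["dob"])

# Recognized: valid number, matching name and date of birth.
print(recognized_as_low_risk(
    {"ktn": "KT123456", "name": "DOE, JOHN", "dob": "1980-01-01"}))
# Not recognized: the member forgot to submit the number.
print(recognized_as_low_risk(
    {"name": "DOE, JOHN", "dob": "1980-01-01"}))
```

Note that under this rule a list member who omits or mistypes the number is simply not recognized, which parallels the report’s point that recognition requires the number to be submitted with the reservation.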
Specifically, TSA used the Secure Flight system to assess data submitted by these frequent fliers during the course of travel and assign them scores, which were then used to determine eligibility for expedited screening. In October 2013, TSA expanded the use of these assessments to all passengers, not just frequent fliers, and also began using other travel-related data to assess passengers. These assessments are conducted only if the passenger has not been designated as high risk by other Secure Flight screening activities or matched to one of the TSA Pre✓™ lists. The scores assigned to passengers correspond with a certain likelihood of being designated as eligible to receive expedited screening through TSA Pre✓™. According to officials within TSA’s Office of Chief Counsel, the assessments are not watchlist matching; rather, they are a means to facilitate the secure travel of the public—a purpose of Secure Flight, as stated in the program’s final rule and in accordance with TSA’s statutory responsibilities to ensure the security of civil aviation. As of May 2014, TSA uses Pre✓™ risk assessments to determine a passenger’s low-risk status and resulting eligibility for TSA Pre✓™ expedited screening, but according to TSA officials, TSA also has the capability to use this functionality to identify high-risk passengers for enhanced screening. TSA made adjustments to enable the Secure Flight system to perform TSA Pre✓™ risk assessments to identify high-risk passengers in March 2013. However, TSA officials stated the agency has no immediate plans to use the assessments to identify high-risk passengers beyond those already included on watchlists.
New Secure Flight Screening Activities Allow TSA to Differentiate Passengers by Risk Category Given the changes in the program since implementation, the current Secure Flight system screens passengers and returns one of four screening results to the air carriers for each passenger: TSA Pre✓™ eligible (expedited screening), cleared to fly (standard screening), selectee (enhanced screening), or do not board (see fig. 2). TSA Has Processes in Place to Implement Secure Flight Screening Determinations at Checkpoints, but Could Take Further Action to Address Screening Errors TSA has developed processes to help ensure that individuals and their accessible property receive a level of screening at airport checkpoints that corresponds to the level of risk determined by Secure Flight. However, TSA could take additional actions to prevent TSO errors in implementing these risk determinations at the screening checkpoint. Furthermore, fraudulent identification or boarding passes could enable individuals to evade Secure Flight vetting, creating a potential vulnerability at the screening checkpoint. TSA’s planned technology solutions could reduce the risk posed by fraudulent documents at the screening checkpoint. TSA Has Developed Processes to Implement Secure Flight Determinations at Airport Checkpoints TSA has developed processes to help ensure that individuals and their accessible property receive a level of screening at airport checkpoints that corresponds to the level of risk determined by Secure Flight. Travel document checkers (TDC) are primarily responsible for ensuring that passengers receive the appropriate level of screening because they are to verify passengers’ identities and identify passengers’ screening designations. TSA requires passengers to present photo identification and a boarding pass at the screening checkpoint.
Using lights and magnifiers, which allow the TDC to examine security features on the passenger’s identification documents, the TDC is to examine the identification and boarding pass to confirm that they appear genuine and pertain to the passenger. The TDC is also to confirm that the data included on the boarding pass and in the identity document match one another. According to TSA standard operating procedures, TDCs may accept minor name variations between the passenger’s boarding pass and identification. If the TDC finds that the information on the identification varies significantly from the boarding pass, the TDC is to refer the passenger to another TSA representative for identity verification through TSA’s Identity Verification Call Center (IVCC). If the passenger’s information varies from the SFPD submitted to Secure Flight, the IVCC is to contact Secure Flight to vet the new information. If the identification or boarding pass appears fraudulent, the TDC is to contact law enforcement. The TDC is also required to review the passenger’s boarding pass to identify his or her Secure Flight passenger screening determination—that is, whether the passenger should receive standard, enhanced, or expedited screening. TDCs either examine the boarding pass manually or, where available, scan the boarding pass using an electronic boarding pass scanning system (BPSS). In addition, Secure Flight provides TSA officials in the airports with advance notice of upcoming selectees from the Selectee and Expanded Selectee Lists, as well as those on the No Fly List. Secure Flight provides this information to TSA officials at the passenger’s airport of departure via e-mail beginning 72 hours prior to flight departure for the No Fly and Selectee Lists, and via a shared electronic posting beginning 26 to 29 hours prior to flight departure for the Expanded Selectee List. TSA also has requirements related to TDC performance.
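The TDC’s decision steps can be sketched as a short branch. The “minor variation” rule below (same surname, same first initial) is an invented stand-in; TSA’s actual criteria live in its standard operating procedures, and all names here are hypothetical.

```python
# Hypothetical sketch of the TDC decision steps described above.
# The minor-variation rule is invented for illustration.

def minor_variation(id_name, bp_name):
    """Toy rule: same surname and same first initial count as minor."""
    id_last, id_first = id_name.split(", ")
    bp_last, bp_first = bp_name.split(", ")
    return id_last == bp_last and id_first[:1] == bp_first[:1]

def tdc_action(id_doc, boarding_pass):
    # Fraudulent documents go to law enforcement first.
    if id_doc["fraudulent"] or boarding_pass["fraudulent"]:
        return "contact law enforcement"
    # Exact match or an acceptable minor variation: proceed with the
    # Secure Flight determination printed on the boarding pass.
    if (id_doc["name"] == boarding_pass["name"]
            or minor_variation(id_doc["name"], boarding_pass["name"])):
        return boarding_pass["determination"] + " screening"
    # Significant mismatch: refer for identity verification.
    return "refer for IVCC identity verification"

ok_id = {"name": "DOE, JOHN", "fraudulent": False}
print(tdc_action(ok_id, {"name": "DOE, J", "fraudulent": False,
                         "determination": "standard"}))
print(tdc_action(ok_id, {"name": "ROE, JANE", "fraudulent": False,
                         "determination": "standard"}))
```

The ordering matters: a fraudulent document triggers a law enforcement referral regardless of whether the names happen to match.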
First, according to TSA officials, TSA designated the TDC a qualified position in February 2013, meaning that TSOs must complete training and pass a job knowledge test to qualify as TDCs. Second, TSA has documented processes to govern the screening checkpoint, such as standard operating procedures applicable to the TDC, the screening checkpoint, and expedited screening that specify responsibilities and lines of reporting. In March 2011, TSA also updated its Screening Management standard operating procedures to clarify that supervisory TSOs are required to monitor TSO performance to ensure compliance with all applicable standard operating procedures and correct improper or faulty application of screening procedures to ensure effective, vigilant, and courteous screening. According to officials in TSA’s Office of Inspection, many checkpoint failures resulted from a lack of supervision. These officials stated that when TDCs are not properly supervised, they are more likely to take shortcuts and miss steps in the standard operating procedures and that because working as a TDC can be tedious and repetitive, supervision and regular rotation are particularly important to ensure TDCs’ continued vigilance. TSOs Have Made Errors in Implementing Secure Flight Screening Determinations at the Screening Checkpoint, and Additional Actions Could Reduce the Number of Screening Errors Our analysis of TSA information from May 2012 through February 2014 found that TSOs have made errors in implementing Secure Flight risk determinations at the screening checkpoint. By evaluating the root causes of these errors and implementing corrective measures to address those root causes, TSA could reduce the risk posed by TSO error at the screening checkpoint. TSA officials we spoke with at five of the nine airports conduct after-action reviews of screening errors at the checkpoint and have used these reviews to take action to address the root causes of those errors. 
However, TSA does not have a systematic process for evaluating the root causes of screening errors at the checkpoint across airports, which could allow TSA to identify trends across airports and target nationwide efforts to address these issues. TSA OSO officials told us that evaluating the root causes of screening errors would be helpful and could allow them to better target TSO training efforts. In January 2014, TSA OSO officials stated that they are in the early stages of forming a group to discuss these errors. However, TSA was not able to provide documentation of the group’s membership, purpose, goals, time frames, or methodology. Standards for Internal Control in the Federal Government states that managers should compare actual performance with expected results and analyze significant differences. Accordingly, it will be important for TSA to develop a process for evaluating the root causes of screening errors at the checkpoint and identify and implement corrective measures, as needed, to address these root causes. Uncovering and addressing the root causes of screening errors could help TSA reduce the number of these errors at the checkpoint. Fraudulent Documents Pose Risks at Airport Screening Checkpoints, and TSA’s Planned Technology Solutions Are in Early Stages Fraudulent identification or boarding passes could enable individuals to evade Secure Flight vetting, creating a potential vulnerability at the screening checkpoint. TDCs are responsible for verifying the validity of identification documents and boarding passes presented by passengers. In June 2012, the TSA Assistant Administrator for the Office of Security Capabilities testified before Congress that the wide variety of identifications and boarding passes presented to TDCs poses challenges to effective manual verification of passenger identity, ticketing, and vetting status.
He testified that there are at least 2,470 different variations of identification that could be presented at security checkpoints and stated that it is very difficult for a TSO to have a high level of proficiency for all of those identifications. From May 2012 through July 2013, TSA denied 1,384 individuals access to the sterile area as a result of identity checking procedures. These denials include travelers who did not appear to match the photo on their identification, who presented identification that appeared fraudulent or showed signs of tampering, and who were unwilling or unable to provide identifying information. During this same time period, TDCs also made 852 referrals to airport law enforcement because of travelers who did not appear to match the photo on their identification, presented identification or boarding passes that appeared fraudulent or showed signs of tampering, or exhibited suspicious behaviors. However, TSA would not know how many travelers successfully flew with fraudulent documents unless those individuals came to TSA’s attention for another reason. We have previously reported on security vulnerabilities involving the identity verification process at the screening checkpoint. For example, in our May 2009 report on Secure Flight, we identified a vulnerability involving the Secure Flight system—namely, airline passengers could provide fraudulent information when making a flight reservation to avoid detection. In addition, in June 2012, we reported on several instances when passengers used fraudulent documentation to board flights. For example, we reported that in 2006, a university student created a website that enabled individuals to create fake boarding passes. In addition, in 2011, a man was convicted of stowing away aboard an aircraft after using an expired boarding pass with someone else’s name on it to fly from New York to Los Angeles.
We also reported that news reports have highlighted the apparent ease of ordering high-quality counterfeit driver’s licenses from China. TSA’s planned technology solutions could reduce the risk posed by fraudulent documents at the screening checkpoint. Boarding pass scanners are designed to verify the digital signature on these boarding passes, allowing TDCs to know that the boarding passes are genuine. The scanners are also to notify the TDC when a passenger is a selectee. In September 2013, TSA purchased 1,400 boarding pass scanners, at a cost of $2.6 million, and planned to deploy 1 for every TDC at airport security checkpoints, beginning with TDCs in TSA PreTM lines. According to TSA officials, as of March 2014, TSA had deployed all 1,400 scanners at airport security checkpoints. In December 2013, TSA released a request for proposal for Credential Authentication Technology (CAT), which is a system that is designed to verify passenger identity, ticketing status, and Secure Flight risk determination at the screening checkpoint. CAT could address the risks of fraudulent identifications, as well as TSO error and reliance on air carriers to properly issue boarding passes. CAT is to verify the authenticity of identification documents presented at the screening checkpoint, confirm the passenger’s reservation, and provide the Secure Flight screening result for that traveler. TDCs would no longer need to examine passengers’ boarding passes to identify those who should receive enhanced screening, which could reduce the potential for error. In April 2014, TSA awarded a contract for the CAT technology solution. TSA has faced long-standing challenges in acquiring CAT technology. In May 2009, we found that TSA had begun working to address the vulnerability posed by airline passengers providing fraudulent information when making a flight reservation to avoid detection. 
TSA has issued four previous requests for proposals for CAT/BPSS technology, two of which resulted in no vendors meeting minimum requirements. In 2012, TSA piloted a joint CAT/BPSS technology from three vendors at a cost of $4.4 million. According to TSA’s final report on the pilot, TSA decided not to move forward with these systems because of significant operability and performance difficulties. None of the units tested met TSA’s throughput requirements, creating delays at the screening checkpoint. According to TSA officials, after the joint CAT/BPSS pilot failed, TSA decided to separate CAT technology from BPSS technology and procure each separately. TSA has also faced challenges in estimating the costs associated with the CAT system. In June 2012, we reported that we could not evaluate the credibility of TSA’s life-cycle cost estimate for CAT/BPSS because it did not include an independent cost estimate or an assessment of how changing key assumptions and other factors would affect the estimate. At that time, according to the life-cycle cost estimate for the Passenger Screening Program, of which CAT/BPSS is a part, the estimated 20-year life-cycle cost of CAT/BPSS was approximately $130 million based on a procurement of 4,000 units. As of April 2014, TSA had not approved a new life-cycle cost estimate for the CAT program, so we were unable to evaluate the extent to which TSA has addressed these challenges in its new estimate. 
TSA Lacks Key Information to Determine whether the Secure Flight Program Is Achieving Its Goals Secure Flight Measures Do Not Fully Assess Progress toward Goals Secure Flight has six program goals that are relevant to the results of screening performed by the Secure Flight computer system and the program analysts who review computer-generated matches, including the following: goal 1: prevent individuals on the No Fly List from boarding an aircraft; goal 2: identify individuals on the Selectee List for enhanced screening; goal 3: support TSA’s risk-based security mission by identifying high-risk passengers for appropriate security measures/actions and identifying low-risk passengers for expedited screening; goal 4: minimize misidentification of individuals as potential threats to aviation security; goal 5: incorporate additional risk-based security capabilities to streamline processes and accommodate additional aviation populations; and goal 6: protect passengers’ personal information from unauthorized use and disclosure. To assess progress with respect to these goals, the program has nine performance measures that it reports on externally (see app. III for the nine Secure Flight performance measures and performance results for fiscal years 2012 and 2013). In addition, Secure Flight has measures for a number of other program activities that it reports internally to program managers to keep them apprised of program performance with respect to the goals (such as the number of confirmed matches identified to the No Fly and Selectee Lists). However, Secure Flight’s performance measures do not fully assess progress toward achieving its six program goals. For goals 1 through 4 and goal 6, we found that while TSA measured some aspects of performance related to these goals, it did not measure aspects of performance necessary to determine overall progress toward the goals.
In addition, for goal 5, we could not identify any program measures that represented the type of performance required to make progress toward achieving the goal, in part because the goal itself did not specify how performance toward the goal should be measured. GPRA establishes a framework for strategic planning and performance measurement in the federal government. Part of that framework involves agencies establishing quantifiable performance measures to demonstrate how they intend to achieve their program goals and measure the extent to which they have done so. These measures should adequately indicate progress toward performance goals so that agencies can compare their programs’ actual results with desired results. Our prior body of work has shown that measures adequately indicate progress toward performance goals when they represent the important dimensions of their performance goals and reflect the core functions of their related programs or activities. Further, when performance goals are not self-measuring, performance measures should translate those goals into concrete conditions that determine what data to collect in order to learn whether the program has made progress in achieving its goal. Measures Addressing Accuracy (Goals 1 through 4) With respect to the program’s first four goals, which address the Secure Flight system’s ability to accurately identify passengers on various watchlists for high- and low-risk screening, the program does not measure all aspects of performance that are essential to achieving these goals. To measure performance toward the first three goals, Secure Flight collects various types of data, including the number of passengers TSA identifies as matches to high- and low-risk lists (including the No Fly, Selectee, Expanded Selectee, rules-based, and TSA Pre✓™ lists). However, Secure Flight has no measures to address the extent to which Secure Flight is missing passengers who are actual matches to these lists (see table 1).
TSA Secure Flight officials stated that measuring the extent to which the Secure Flight system may miss passengers on high-risk lists is difficult to perform in real time. However, our prior work and current program documentation show that the Secure Flight program has used proxy methods to assess the extent to which the system is missing passengers on watchlists. For example, we reported in May 2009 that when the Secure Flight system was under development, TSA conducted a series of tests—using a simulated passenger list and a simulated watchlist created by a TSA contractor with expertise in watchlist matching—to measure the extent to which Secure Flight did not identify all simulated watchlist records. In addition, for this review, we examined meeting minutes of the Secure Flight Match Review Board—a multidepartmental board that reviews system performance and recommends changes—for the period May 2010 through August 2013, to determine how the board assesses system performance. The minutes show that Secure Flight, when contemplating a change in the system’s search capabilities, measures the impacts of proposed changes on system performance, including the extent to which the changes result in failures to identify watchlisted individuals. To make these assessments, Secure Flight rematches historical passenger data and watchlist data under proposed system changes, and compares the results with prior Secure Flight screening outcomes to determine whether any previously identified individuals on high-risk lists were missed. While helpful for Match Review Board deliberations, the testing reflected in meeting minutes was performed on an ad hoc basis and therefore is not a substitute for ongoing performance measurement. 
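The Match Review Board’s rematching approach described above amounts to a set comparison: rematch historical passenger data under a proposed configuration and diff the results against prior screening outcomes. The data below are invented purely to illustrate the comparison.

```python
# Sketch of the rematching comparison described above, with invented
# passenger identifiers standing in for historical screening outcomes.

prior_matches = {"P001", "P002", "P003", "P004"}     # identified under current system
proposed_matches = {"P001", "P003", "P004", "P005"}  # identified under proposed change

missed = prior_matches - proposed_matches   # previously caught, now missed
gained = proposed_matches - prior_matches   # newly identified under the change

print(sorted(missed))   # previously identified individuals the change would miss
print(sorted(gained))   # individuals the change would newly identify
print(f"miss rate vs. prior outcomes: {len(missed) / len(prior_matches):.0%}")
```

As the report notes, this kind of comparison was run ad hoc when changes were contemplated; the same set arithmetic run on an ongoing basis is what a recurring miss-rate measure would look like.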
In addition, with respect to low-risk lists, TSA could measure the extent to which the Secure Flight system correctly identifies passengers submitting valid known traveler numbers (i.e., an actual number on a TSA Pre™ list) and designates them for expedited screening. TSA officials have stated that variations in the way passengers enter information when making a reservation with a valid known traveler number can cause the system to fail to identify them as TSA Pre™ eligible. For example, TSA Match Review Board documentation from December 2012 identified that the Secure Flight system had failed to identify participants on one TSA Pre™ list because they used honorific titles (e.g., the Honorable and Senator) when making reservations, and, as a result, they were not eligible for expedited screening. TSA has a process in place to review and resolve inquiries from passengers who believe they should have received TSA Pre™ but did not during a recent travel experience. While helpful for addressing some TSA Pre™-related problems, the process does not provide information on the extent to which TSA is correctly identifying passengers on low-risk lists, because some passengers may not report problems.

TSA’s fourth goal (to minimize the number of passengers misidentified as threats on high-risk lists) also addresses system accuracy. The program’s related performance measure, its false positive rate, accounts for the number of passengers who have been misidentified as matches to some, but not all, high-risk lists and thus does not fully assess performance toward the related goal (as shown above, in table 1). TSA’s false positive rate does not account for all misidentifications because, under the current Secure Flight process, TSA has information on passengers misidentified to the No Fly and Selectee Lists, but does not have information on passengers misidentified to the Expanded Selectee or rules-based lists.
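To illustrate why a false positive rate limited to certain lists understates total misidentifications, consider a toy calculation. The sketch below is purely illustrative: the counts, list names, and formula are assumptions for this example, not TSA data or TSA's actual computation.

```python
# Illustrative sketch (hypothetical data): a false positive rate computed only
# over the No Fly and Selectee Lists misses misidentifications to the
# Expanded Selectee and rules-based lists.

def false_positive_rate(misidentified, screened, lists):
    """Share of screened passengers wrongly matched to the given lists."""
    return sum(misidentified[l] for l in lists) / screened

misidentified = {  # hypothetical counts of passengers wrongly matched, by list
    "no_fly": 40,
    "selectee": 60,
    "expanded_selectee": 150,  # not visible under the current process
    "rules_based": 250,        # not visible under the current process
}
screened = 1_000_000

partial = false_positive_rate(misidentified, screened, ["no_fly", "selectee"])
full = false_positive_rate(misidentified, screened, list(misidentified))

print(partial)  # 0.0001 -> the rate over only two lists
print(full)     # 0.0005 -> the rate once all high-risk lists are counted
```

In this made-up example, the partial rate reports one-fifth of the misidentifications that a measure covering all high-risk lists would capture.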
TSA is currently implementing changes that will allow it to collect more information about passengers misidentified to other high-risk lists. This information, if incorporated into the false positive measure, would allow TSA to more fully assess the program’s ability to minimize the misidentification of individuals as potential threats to aviation security.

Measures Addressing Risk-Based Security Capabilities (Goal 5)

TSA does not have any measures that clearly address its goal of incorporating additional risk-based security capabilities to streamline processes and accommodate additional aviation populations (goal 5). According to TSA officials, the goal addresses the program’s ability to adapt the Secure Flight system for risk-based screening initiatives, such as TSA Pre™ and similar efforts that allow TSA to distinguish high-risk from low-risk passengers. TSA officials identified several measures that address this goal, including program measures for responding to a change in the national threat level, the system false positive rate, and the system availability measure. (The Secure Flight system availability measure—a Key Performance Parameter and an OMB 300 Measure—tracks the total amount of time the Secure Flight system (within Secure Flight bounds) is available for matching activities; Secure Flight’s false positive measure was discussed previously. All Secure Flight measures are defined in appendix III.) However, none of the measures TSA identified clearly relate to the goal of adapting the Secure Flight system for different risk-based screening activities, or specify what data should be collected to measure progress toward the goal.

Measures Addressing Privacy (Goal 6)

With respect to the program’s privacy goal (goal 6), Secure Flight has a related measure that tracks the extent to which passenger records are purged from the Secure Flight system once they are no longer needed. By purging the results of matching and scoring activities from the Secure Flight system, TSA ensures that passenger data do not remain in the system and thus will not be subject to unauthorized use or disclosure.
Nevertheless, the measure does not assess other points in time in which the records could be subject to unauthorized use or disclosure, such as before the records are purged or when other government agencies request the results of Secure Flight screening for various purposes, such as an ongoing investigation. When the Secure Flight program was in development, TSA included among a list of possible measures for the fully implemented program a measure for privacy incident compliance (i.e., percentage of privacy incidents reported in compliance with DHS Privacy Incident Handling Guidance). According to TSA officials, since then, TSA has determined that such a measure is not needed because privacy incidents are tracked and publicly reported on at the department level. Nevertheless, additional measures, such as the percentage of government agencies’ requests for Secure Flight data that are handled consistently with program privacy requirements, would allow Secure Flight to determine the extent to which the program is appropriately handling passenger information before it is purged from the system.

Secure Flight’s performance measures provide program managers some information on its progress with respect to its accuracy-related and privacy-related goals (goals 1 through 4 and 6), but do not measure all aspects of performance critical to achieving these goals. In addition, the measures do not provide information on progress toward the program’s risk-based security capabilities goal (goal 5). Additional measures that address key performance aspects related to program goals, and that clearly identify the activities necessary to achieve goals, would allow the program to more fully assess progress toward its goals.
For example, the extent to which the Secure Flight system is missing individuals on the No Fly, Selectee, and other high- and low-risk lists is an important dimension of performance related to each of the accuracy-related goals and speaks to a core function of the Secure Flight program—namely, to accurately identify passengers on these lists. Without measures that provide a more complete understanding of Secure Flight’s performance, TSA cannot compare actual with desired results to understand how well the system is achieving these goals. Similarly, without a measure that reflects misidentifications to all high-risk lists, TSA cannot appropriately gauge its performance with respect to its goal of limiting such misidentifications. Likewise, with respect to its privacy-related goal, additional measures that address other key points in the Secure Flight process in which passenger records could be inappropriately accessed would allow Secure Flight to more fully assess the extent to which it is meeting its goal of protecting passenger information. Finally, establishing measures that clearly represent the performance necessary to achieve the program’s goal that addresses risk-based security capabilities (goal 5) will allow Secure Flight to determine the extent to which it is meeting its goal of adapting the Secure Flight system for different risk-based screening activities.

TSA Does Not Have Timely and Reliable Information on the Secure Flight System’s Matching Errors

TSA does not have timely and reliable information on past Secure Flight system matching errors. As previously discussed, preventing individuals on the No Fly List from boarding an aircraft and identifying individuals on the Selectee List for enhanced screening are key goals of the Secure Flight program.
Standards for Internal Control in the Federal Government states that agencies must have relevant, reliable, and timely information to determine whether their operations are performing as expected, and that such information can assist agencies in taking any necessary corrective actions to achieve relevant goals. According to TSA officials, when TSA receives information related to matching errors of the Secure Flight system (i.e., the computerized matching and manual reviews conducted to identify matches of passenger and watchlist data), the Match Review Board reviews this information to determine if any actions could be taken to prevent similar errors from happening again. We reviewed meeting minutes and associated documentation for the 51 Match Review Board meetings held from March 2010 through August 2013, and found 16 meetings in which the Match Review Board discussed system matching errors; investigated possible actions to address these errors; and, when possible, implemented changes to strengthen system performance. However, when we asked TSA for complete information on the extent and causes of system matching errors, we found that TSA does not have readily available or complete information. It took TSA over 6 months to compile a list of such errors, a process that, according to TSA officials, required a significant amount of manual investigation and review. Further, we found that the list was not complete because it did not reflect all system errors that were discussed at the Match Review Board meetings. Separately, we identified a discussion of a system error that was not included in the Match Review Board documentation. We also found that, for many incidents on the list, TSA’s description of the cause of the error was not sufficiently detailed to understand whether the Secure Flight system was at fault. Information on the time frames of our request and the number of system matching errors TSA identified is considered sensitive information and cannot be included in a public report.
Without readily available and complete information on system matching errors, TSA is not well positioned to determine all potential causes of these errors, and identify and implement sufficient corrective actions.

Conclusions

The Secure Flight program is one of TSA’s key tools for defending civil aviation against terrorist threats. Since TSA began implementing the program in January 2009, Secure Flight has expanded from a system that matches airline passengers against watchlists of known or suspected terrorists to a system that uses additional high-risk lists and conducts risk-based screening assessments of passengers. Specifically, through the use of new high-risk screening lists, the program now identifies a broader range of high-risk travelers—including ones who may not be on lists of known and suspected terrorists but who nevertheless correspond with known threat criteria. TSA has also begun using Secure Flight to identify low-risk passengers eligible for expedited screening through TSA Pre™. Given Secure Flight’s importance to securing civil aviation and achieving TSA’s risk-based screening goals, the extent to which passengers are being accurately identified by the system (including computerized matching and manual reviews) for standard, expedited, and enhanced screening is critically important. More broadly, to fully realize the security benefits of the Secure Flight program, it is critical that TSA checkpoint personnel correctly identify and appropriately screen travelers according to Secure Flight determinations. Better information on both system and checkpoint performance, therefore, would provide TSA with greater assurance that Secure Flight is achieving its desired purpose of correctly identifying passengers for standard, expedited, and enhanced checkpoint screening. Specifically, by investigating checkpoint errors and taking appropriate corrective action, TSA would have better assurance that all passengers are screened in accordance with their Secure Flight risk determinations.
Evaluating the root causes of screening errors across all airport checkpoints would provide TSA with more complete information on such cases and serve as the basis for policies to ensure that checkpoints correctly process passengers. In addition, implementing corrective measures to address the root causes that TSA identifies through its evaluation process would help strengthen checkpoint operations. Furthermore, establishing measures that cover all activities necessary to achieve Secure Flight program goals would allow TSA to more fully assess progress toward these goals. Finally, when TSA learns of Secure Flight system matching errors, a mechanism to systematically document the number and causes of these errors would help ensure that TSA had timely and reliable information to take any corrective action to strengthen system performance.

Recommendations for Executive Action

We recommend that the Transportation Security Administration’s Administrator take the following four actions:

to further improve the implementation of Secure Flight risk determinations at the screening checkpoint, develop a process for regularly evaluating the root causes of screening errors across airports so that corrective measures can be identified;

to address the root causes of screening errors at the checkpoint, thereby strengthening checkpoint operations, implement the corrective measures TSA identifies through a root cause evaluation process;

to assess the progress of the Secure Flight program toward achieving its goals, develop additional measures to address key performance aspects related to each program goal, and ensure these measures clearly identify the activities necessary to achieve progress toward the goal; and

to provide Secure Flight program managers with timely and reliable information on cases in which TSA learns of Secure Flight system matching errors, develop a mechanism to systematically document the number and causes of such cases, for the purpose of improving program
performance.

Agency Comments and Our Evaluation

We provided a draft of this report to DHS and the Department of Justice for their review and comment. DHS provided written comments on August 25, 2014, which are summarized below and reproduced in full in appendix IV. DHS concurred with all four of our recommendations and described actions under way or planned to address them. In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. DHS concurred with our first recommendation, that TSA develop a process for regularly evaluating the root causes of checkpoint screening errors across airports so that corrective measures can be identified. DHS stated that TSA is collecting data on the root causes of checkpoint screening errors in its Security Incident Reporting Tool (SIRT) and that TSA OSO’s Operations Performance Division will develop a process for regularly evaluating the root causes of checkpoint screening errors across airports and identify corrective measures. DHS estimates that this will be completed by September 30, 2014. These actions, if implemented effectively, should address the intent of our recommendation. Regarding our second recommendation, that TSA implement the corrective measures it identifies through a root cause evaluation process, DHS concurred. DHS stated that TSA OSO’s Operations Performance Division will evaluate the data gathered from airports through SIRT to identify root causes of checkpoint screening errors and, on the basis of the root cause, work with the appropriate TSA program office to implement corrective measures. Such actions could help to reduce the likelihood that TSA will fail to appropriately screen passengers at the screening checkpoint.
Additionally, DHS concurred with our third recommendation, that TSA develop additional measures to address key performance aspects related to each program goal and ensure these measures clearly identify the activities necessary to achieve progress toward the goal. DHS stated that TSA's Office of Intelligence and Analysis will evaluate its current Secure Flight performance goals and measures and develop new performance measures as necessary. DHS further stated that TSA will explore the possibility of implementing analyses to measure match effectiveness through the use of test data sets. Such actions could help TSA better monitor the performance of the Secure Flight program. DHS also concurred with our fourth recommendation, that TSA develop a mechanism to systematically document the number and causes of cases in which TSA learns that the Secure Flight system has made a matching error. DHS stated that TSA's Office of Intelligence and Analysis will develop a more robust process to track all known cases in which the Secure Flight system has made a matching error, and that the Secure Flight Match Review Board will conduct reviews to identify potential system improvement measures on a quarterly basis. TSA plans to implement these efforts by December 31, 2014. These actions, if implemented effectively, should address the intent of our recommendation. We will continue to monitor DHS’s efforts. The Department of Justice did not have formal comments on our draft report, but provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the TSA Administrator, the United States Attorney General, and interested congressional committees as appropriate. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact Jennifer A. Grover at 202-512-7141 or [email protected]. Key contributors to this report are acknowledged in appendix V. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Appendix I: Objectives, Scope, and Methodology

This report addresses the following questions:

1. How, if at all, has Secure Flight changed since implementation began in January 2009?

2. To what extent does the Transportation Security Administration (TSA) ensure that Secure Flight screening determinations for passengers are fully implemented at airport security checkpoints?

3. To what extent do TSA’s performance measures appropriately assess progress toward achieving the Secure Flight program goals?

To identify how the Secure Flight program has changed since implementation began, we analyzed TSA documentation related to new agency initiatives involving Secure Flight screening since January 2009, including the Secure Flight program concept of operations, privacy notices TSA issued from 2008 (in preparation to begin program implementation) through 2013, and TSA memorandums describing the rationale for new agency initiatives involving the Secure Flight system. We also submitted questions on how Secure Flight has changed to TSA’s Office of Chief Counsel and reviewed its responses. To clarify our understanding of a new agency initiative to identify high-risk passengers not already included in the Terrorist Screening Database (TSDB)—the U.S. government’s consolidated list of known or suspected terrorists—we spoke with relevant officials in the Intelligence and Analysis Division of TSA’s Office of Intelligence and Analysis, who are responsible for the initiative, and with officials from U.S.
Customs and Border Protection, who facilitate the generation of one rules-based list. In addition, to understand new initiatives involving Secure Flight screening to identify low-risk travelers, we spoke with Secure Flight program officials and with officials in TSA’s Office of Risk Based Security who oversee TSA Pre™, a 2011 program that allows TSA to designate preapproved passengers as low risk, and TSA Pre✓™ risk assessments, another initiative to identify passengers as low risk for a specific flight. To determine the extent to which TSA ensures that the Secure Flight vetting results are fully implemented at airport security checkpoints, we analyzed TSA documents governing the screening checkpoint, such as standard operating procedures for checkpoint screening operations and Travel Document Checkers (TDC), and reviewed reports by TSA’s Office of Inspections and GAO about the performance of Transportation Security Officers (TSO) at the checkpoint. To determine the extent to which TSA made errors at the screening checkpoint, we analyzed certain TSA data on TSO performance at the screening checkpoint from May 2012, when TSA began tracking these data, through February 2014, when we conducted the analysis. We examined documentation about these data and interviewed knowledgeable officials, and determined that the data were sufficiently reliable for our purposes. In addition, to clarify our understanding of TSA’s checkpoint operations and inform our analysis, we interviewed officials within TSA’s Office of Security Operations, which is responsible for checkpoint operations, and TSA officials at nine airports. We selected these nine airports based on a variety of factors, such as volume of passengers screened and geographic dispersion. The results of these interviews cannot be generalized to all airports, but provide insight into TSA’s challenges to correctly identify and screen passengers at checkpoints.
To better understand how TSA ensures that all passengers have been appropriately screened by Secure Flight, we visited TSA’s Identity Verification Call Center to interview officials and observe their identity verification procedures. We compared TSA’s checkpoint procedures against Standards for Internal Control in the Federal Government. Finally, to determine the extent to which TSA’s planned technology solutions could address checkpoint errors, we analyzed documents, such as requests for proposals, related to TSA’s planned technology solutions and interviewed knowledgeable TSA officials. To determine the extent to which Secure Flight performance measures appropriately assess progress toward achieving the program goals, we reviewed documentation of TSA’s program goals and performance measures for fiscal years 2012 and 2013—including the measures Secure Flight reports externally to the Department of Homeland Security and the Office of Management and Budget (OMB), as well as other internal performance measures Secure Flight officials use for program management purposes—and discussed these measures with Secure Flight officials. We assessed these measures against provisions of the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act of 2010 requiring agencies to compare actual results with performance goals. Although GPRA’s requirements apply at the agency level, in our prior work we have reported that these requirements can serve as leading practices at lower levels within an organization, such as individual programs or initiatives; we identified these leading practices through a review of our related products, OMB guidance, and studies by the National Academy of Public Administration and the Urban Institute. We also interviewed relevant TSA officials about the current performance measures for the Secure Flight program and the adequacy of these measures in assessing TSA’s progress in achieving program goals.
In addition, to understand how TSA uses Secure Flight-related performance data, we reviewed documentation related to all meetings that TSA identified of the Secure Flight Match Review Board—a multidepartmental entity established to, among other things, review performance measures and recommend changes to improve system performance—from the time the board was initiated, in March 2010, through August 2013, a total of 51 meetings. To identify the extent to which TSA monitors and evaluates the reasons for any Secure Flight system matching errors, we analyzed a list of such errors that occurred from November 2010 (the point at which the Secure Flight program was implemented for all covered domestic and foreign air carriers) through July 2013 that TSA compiled at our request. To assess the accuracy and completeness of the list TSA provided, we also checked to see if system matching errors we identified in documentation from the Match Review Board meetings were included in TSA’s list. We evaluated TSA’s efforts to document system matching errors against standards for information and communications identified in GAO’s Standards for Internal Control in the Federal Government. We conducted this performance audit from March 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions.

Appendix II: Secure Flight Screening Lists and Activities

In January 2009, the Transportation Security Administration (TSA) began implementing the Secure Flight program to facilitate the identification of high-risk passengers who may pose security risks to civil aviation, and designate them for additional screening at airport checkpoints.
Since then, TSA has begun using Secure Flight to identify low-risk passengers eligible for more efficient processing at the checkpoint. This appendix presents an overview of the lists and other activities, as of July 2013, that Secure Flight uses to identify passengers as high risk or low risk. The Secure Flight program, as implemented pursuant to the 2008 Secure Flight Final Rule, requires commercial aircraft operators traveling to, from, within, or overflying the United States to collect information from passengers and transmit that information electronically to TSA. The Secure Flight system uses this information to screen passengers by conducting computerized matching against government lists and other risk assessment activities. As a result of this screening, passengers identified as high risk receive enhanced screening, which includes, in addition to the procedures applied during a typical standard screening experience, a pat-down and either an explosive trace detection search involving a device certified by TSA to detect explosive particles or a physical search of the interior of the passenger’s accessible property, electronics, and footwear. Those passengers Secure Flight identifies as low risk are eligible to receive expedited screening, which, unlike standard screening, affords travelers certain conveniences, such as not having to remove their belts, shoes, or light outerwear when screened. Figure 3 provides information on the lists Secure Flight uses to identify high-risk passengers. Figure 4 describes Secure Flight’s activities to identify low-risk passengers, including screening against lists associated with the TSA Pre™ Program, a 2011 initiative that allows TSA to designate preapproved passengers as low risk, and TSA Pre✓™ risk assessments, which assess passengers’ risk using data submitted to Secure Flight for screening.
Figure 5 describes two additional lists Secure Flight uses for passenger screening that, depending on the list, exempt passengers from being designated as low or high risk.

Appendix III: Secure Flight Performance Data for Fiscal Years 2012 and 2013

This appendix presents data on the Secure Flight program’s performance measures and associated performance results that the Transportation Security Administration (TSA) reported externally to the Department of Homeland Security (DHS) and the Office of Management and Budget (OMB). Specifically, table 2 displays data on six Secure Flight Key Performance Parameters—key system capabilities that must be met in order for a system to meet its operational goals—that TSA management reported to DHS for fiscal years 2012 and 2013. Table 3 displays data on five Secure Flight program measures that TSA management reported to OMB for fiscal years 2012 and 2013. The OMB measures are part of the program’s yearly exhibit 300, also called the Capital Asset Plan and Business Case, a document that agencies submit to OMB to justify resource requests for major information technology investments. TSA reports performance data for all these measures on a monthly basis, and for each measure, we have provided the range of the performance measurement results for each fiscal year.

Overview of the Secure Flight Screening Process

The Secure Flight program, as implemented pursuant to the 2008 Secure Flight Final Rule, requires commercial aircraft operators traveling to, from, within, or overflying the United States to collect information from passengers and transmit that information electronically to TSA. This information, known collectively as Secure Flight Passenger Data (SFPD), includes personally identifiable information, such as full name, gender, date of birth, and passport information (if available), and certain nonpersonally identifiable information, such as itinerary information and the unique number associated with a travel record (record locator number).
The Secure Flight program designates passengers for risk-appropriate screening by matching SFPD against various lists composed of individuals who should be identified, for the purpose of checkpoint screening, as either high risk or low risk. With respect to matching passengers against lists, the Secure Flight computer system first conducts automated matching of passenger and watchlist data to identify a pool of passengers who are potential matches to various lists. Next, the system compares all potential matches against the TSA Cleared List, a list of individuals who have applied to, and been cleared through, the DHS redress process. Passengers included on the TSA Cleared List submit a redress number when making a reservation, which allows the Secure Flight system to recognize and clear them. After the system performs automated matching, Secure Flight analysts conduct manual reviews of potential matches to further rule out individuals who are not included on the No Fly and Selectee Lists. After the completion of manual reviews, TSA precludes passengers who remain potential matches to certain lists from receiving their boarding passes. These passengers, for whom air carriers receive a “passenger inhibited” message from Secure Flight, must undergo a resolution process at the airport. This process may involve air carriers sending updated passenger information back to Secure Flight for automated rematching or placing a call to Secure Flight for assistance in resolving the match. At the conclusion of the automated and manual screening processes, Secure Flight provides air carriers with a final screening determination for each passenger. At airport checkpoints, those passengers identified as high risk receive enhanced screening and those identified as low risk are eligible for expedited screening.
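The screening flow described above can be sketched in simplified form. This is an illustrative outline only: the names, data structures, and exact-match rule are assumptions made for the sketch, and the real system's matching algorithms are more sophisticated and not public.

```python
# Minimal sketch of the screening flow described above (hypothetical names,
# simplified exact-name matching; not the actual Secure Flight algorithm).

def screen(passenger, watchlist, cleared_list):
    # 1. Automated matching: flag passengers whose name appears on a watchlist.
    potential_match = passenger["name"].lower() in {w.lower() for w in watchlist}
    if not potential_match:
        return "cleared"
    # 2. Redress check: a redress number on the TSA Cleared List clears the match.
    if passenger.get("redress") in cleared_list:
        return "cleared"
    # 3. Remaining potential matches go to manual review; until resolved, the
    #    passenger is inhibited from receiving a boarding pass.
    return "inhibited_pending_review"

watchlist = ["John Doe"]          # hypothetical watchlist entry
cleared = {"R123"}                # hypothetical redress numbers on the Cleared List

print(screen({"name": "Jane Roe"}, watchlist, cleared))                     # cleared
print(screen({"name": "John Doe", "redress": "R123"}, watchlist, cleared))  # cleared
print(screen({"name": "John Doe"}, watchlist, cleared))                     # inhibited_pending_review
```

The sketch mirrors the ordering in the text: automated matching first, then the Cleared List comparison, then manual review for whatever remains.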
Appendix IV: Comments from the Department of Homeland Security

Appendix V: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Maria Strudwick (Assistant Director), Mona Nichols Blake (Analyst-in-Charge), John de Ferrari, Michele Fejfar, Imoni Hampton, Eric Hauswirth, Susan Hsu, Richard Hung, Justine Lazaro, Benjamin Licht, Tom Lombardi, Linda Miller, David Plocher, and Ashley Vaughan made key contributions to this report.
In 2010, following the December 2009 attempted attack on a U.S.-bound flight, which exposed gaps in how agencies used watchlists to screen individuals, the Transportation Security Administration (TSA) began using risk-based criteria to identify additional high-risk passengers who may not be in the Terrorist Screening Database (TSDB), but who should be designated as selectees for enhanced screening. Further, in 2011, TSA began screening against additional identities in the TSDB that are not already included on the No Fly or Selectee Lists. In addition, as part of TSA Pre™, a 2011 program through which TSA designates passengers as low risk for expedited screening, TSA began screening against several new lists of preapproved low-risk travelers. TSA also began conducting TSA Pre✓™ risk assessments, an activity distinct from matching against lists that uses the Secure Flight system to assign passengers scores based upon travel-related data, for the purpose of identifying them as low risk for a specific flight. TSA has processes in place to implement Secure Flight screening determinations at airport checkpoints, but could take steps to enhance these processes. TSA information from May 2012 through February 2014 indicates that screening personnel have made errors in implementing Secure Flight determinations at the checkpoint. However, TSA does not have a process for systematically evaluating the root causes of these screening errors. GAO's interviews with TSA officials at airports yielded examples of root causes TSA could identify and address. Evaluating the root causes of screening errors, and then implementing corrective measures, in accordance with federal internal control standards, to address those causes could allow TSA to strengthen security screening at airports.
Since 2009, Secure Flight has established program goals that reflect new program functions to identify additional types of high-risk and also low-risk passengers; however, current program performance measures do not allow Secure Flight to fully assess its progress toward achieving all of its goals. For example, Secure Flight does not have measures to assess the extent of system matching errors. Establishing additional performance measures that adequately indicate progress toward goals would allow Secure Flight to more fully assess the extent to which it is meeting program goals. Furthermore, TSA lacks timely and reliable information on all known cases of Secure Flight system matching errors. More systematic documentation of the number and causes of these cases, in accordance with federal internal control standards, would help TSA ensure Secure Flight is functioning as intended. This is a public version of a sensitive report that GAO issued in July 2014. Information that the Department of Homeland Security (DHS) and the Department of Justice deemed sensitive has been removed. |
Introduction Section 6012 of the Internal Revenue Code requires individuals, businesses, and other taxable entities with income over a certain threshold amount to file income tax returns. While most individuals and businesses voluntarily comply with this requirement, millions do not. At the beginning of fiscal year 1993, IRS had an inventory of about 10 million known nonfilers—about 7 million individuals and about 3 million businesses that had not filed one or more required returns. IRS estimated that the amount of unpaid individual income taxes on returns due but not filed for 1992 alone was more than $10 billion. IRS identifies potential nonfilers in several ways. One of the more significant ways to identify potential nonfilers of individual income tax returns is through the document matching program. Under that program, IRS matches taxpayers’ returns with information returns (generally Forms W-2 and 1099) showing income, such as wages and interest, paid by third parties, such as employers and banks. When the match shows income but no corresponding tax return, a potential nonfiler is identified. IRS identifies business nonfilers by computer-matching filed returns with the businesses’ filing requirements. Once it has identified potential nonfilers, and after considering what resources are available, IRS decides what action to take. In 1993, IRS received about 114 million individual income tax returns. Almost all of those returns were for tax year 1992. For that same tax year, IRS identified 59.6 million potential individual nonfilers. Of the 59.6 million, IRS took no enforcement action on 54.1 million (91 percent), primarily because IRS subsequently determined that the individual or business had no legal requirement to file. Collection officials at IRS’ National Office and regional offices evaluated the remaining 5.5 million cases to determine the potential tax due.
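The document matching program described above is, at its core, a set comparison between payer-reported income and filed returns. The sketch below is a minimal illustration of that concept only; all records and identifiers are invented, not drawn from IRS systems.

```python
# Minimal sketch of the document matching concept: compare taxpayer
# identification numbers (TINs) reported on third-party information
# returns (Forms W-2 and 1099) against TINs on filed income tax
# returns. All records here are hypothetical, for illustration.

def find_potential_nonfilers(information_returns, filed_returns):
    """Return TINs that show third-party income but no filed return."""
    tins_with_income = {doc["tin"] for doc in information_returns}
    tins_that_filed = {ret["tin"] for ret in filed_returns}
    return tins_with_income - tins_that_filed

info_docs = [
    {"tin": "111-11-1111", "form": "W-2", "wages": 42_000},
    {"tin": "222-22-2222", "form": "1099-INT", "interest": 350},
    {"tin": "333-33-3333", "form": "W-2", "wages": 18_500},
]
filed = [{"tin": "111-11-1111"}, {"tin": "333-33-3333"}]

# A flagged TIN is only a *potential* nonfiler; as the chapter notes,
# most flagged cases turn out to have no legal requirement to file.
print(sorted(find_potential_nonfilers(info_docs, filed)))  # ['222-22-2222']
```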
Cases that IRS judged to have the least potential, 2.5 million, or 46 percent, received a reminder to file. Cases judged to have medium potential, 0.6 million, or 11 percent, received up to 2 notices. Cases judged to have the highest potential, 2.3 million, or 43 percent, received up to 4 notices. Under IRS procedures, nonfiler cases that are not resolved during the notice process are assigned to either the automated Substitute-for-Return (SFR) program, an Automated Collection System (ACS) call site, or a district office. Generally, cases are assigned to the automated SFR program when (1) IRS has enough income information from other sources, such as information documents filed by employers and banks, to prepare a return for the nonfiler; and (2) the potential tax due meets established criteria. Other cases are assigned, using predetermined criteria, to ACS or a district office, where they are scored to establish working priority. Cases assigned to a district office are put in an automated inventory called the “queue” at the district office. Cases with higher estimated net tax yield are assigned to revenue officers in IRS’ Collection function. Revenue officers attempt to contact nonfilers and obtain delinquent returns through telephone calls, letters, or visits. Nonfiler cases with low estimated yield may remain in the queue indefinitely. Objectives, Scope, and Methodology Our objectives, addressed under our basic legislative authority, were to assess the results of IRS’ Nonfiler Strategy and identify any opportunities for IRS to improve future nonfiler efforts. To accomplish our objectives, we did the following: We interviewed IRS National Office officials responsible for overseeing the Nonfiler Strategy about planning and managing the Strategy and about its results. 
We interviewed officials and personnel at the Central, Mid-Atlantic, and Southeastern Regional Offices; Atlanta, Baltimore, Cincinnati, and Detroit District Offices; and Atlanta and Cincinnati Service Centers about their roles in the Nonfiler Strategy, their procedures for implementing the Strategy, and the results obtained. We chose the Central Region and Cincinnati District Office because of earlier work done at those locations. We selected the other locations because they had large inventories of nonfilers. The four district offices had 10 percent of IRS’ nonfiler inventory as of August 31, 1993. We interviewed Austin Compliance Center officials about their analysis of IRS’ process for identifying nonfilers and selecting nonfiler cases. We reviewed relevant IRS manuals, instructions, reports, and statistics. We reviewed IRS Internal Audit reports and met with Internal Audit personnel doing work in the nonfiler area. Because IRS’ Examination function redirected a significant number of staff to help with nonfiler cases during the Nonfiler Strategy, we took some specific steps directed at that aspect of the Strategy. To help identify the types of nonfiler cases worked by Examination staff, as well as how they were worked, we randomly selected 35 cases worked by Examination in each of the 4 district offices we visited. In each district, we selected 15 cases from the cases closed by Examination in fiscal year 1993, 15 cases from the cases closed by Examination in fiscal year 1994, and 5 cases that had been closed by Examination in fiscal year 1995 but were still physically located at the district offices when we visited them in November and December 1994. These 140 cases involved a total of 464 nonfiled returns. We also reviewed IRS’ account records as of February and May 1995 to determine whether the taxpayers in our sample cases remained compliant by filing returns in subsequent years. Our sample results are not projectable. 
Appendix I contains a profile of the nonfilers in our sample and a profile developed by IRS’ Statistics of Income Division from returns filed in fiscal year 1993 that were 360 days or more late. Much of the statistical data in this report on the results of IRS’ Nonfiler Strategy was taken from the Commissioner’s Nonfiler Report, a statistical report prepared by National Office staff responsible for overseeing the Strategy. After we finished our review and had drafted our report, IRS told us that the Commissioner’s Nonfiler Reports on which we had based our analyses were erroneous. IRS provided revised reports, which showed significant differences from the reports we had relied on. Also, the revised reports covered only 11 months of the fiscal year because data that IRS needed to reconstruct the reports for the full fiscal year were not available. We updated our report and, where appropriate, our analyses to reflect the revised data provided by IRS. We did not assess the data’s accuracy or reliability. We did our audit work from December 1993 through May 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designee. On December 4, 1995, we met with several IRS officials, including the National Director, Service Center Compliance; the National Director, Compliance Specialization; the Acting Director of the Office of Return Delinquency; and the Acting Director for Special Compliance Programs. They provided us with oral comments, which the National Director, Service Center Compliance, reiterated and expanded on in memoranda dated December 11, 1995, and February 12, 1996. Their comments are summarized and evaluated on pages 23 and 32 and are incorporated in this report where appropriate. 
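As a mechanical aside before turning to results: the notice-and-routing rules described earlier in this chapter (notices scaled to judged potential, then assignment of unresolved cases to the SFR program, an ACS call site, or a district office queue) can be sketched as follows. Only the branch structure comes from the report; the dollar threshold standing in for IRS's "established criteria" is hypothetical.

```python
# Hedged sketch of nonfiler case routing as described in this chapter.
# The categories and branches follow the report; the numeric threshold
# is an invented placeholder for IRS's actual criteria.

def max_notices(potential):
    """Notices sent scale with the judged potential tax due."""
    return {"least": 1, "medium": 2, "highest": 4}[potential]

def route_unresolved_case(case):
    """Assign a case left unresolved by the notice process."""
    # SFR: enough third-party income information to prepare a
    # substitute return, and potential tax due meets the criteria.
    if case["has_income_documents"] and case["potential_tax"] >= 5_000:
        return "SFR"
    # Otherwise, predetermined criteria send the case to an ACS call
    # site or to the district office "queue," where it is scored to
    # establish working priority.
    return "ACS" if case["meets_acs_criteria"] else "district queue"

print(route_unresolved_case(
    {"has_income_documents": True, "potential_tax": 12_000,
     "meets_acs_criteria": False}))  # SFR
```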
Results of IRS’ Nonfiler Strategy IRS became increasingly concerned about the nonfiler problem in 1991, when its delinquent return inventory—which had been growing by about 12 percent a year—increased by 30 percent. In October 1992, IRS initiated a Nonfiler Strategy with the basic objective of bringing nonfilers into the system and keeping them there. During the planned 2 years of the Strategy, fiscal years 1993 and 1994, IRS took several positive steps to achieve that objective. Those actions included deployment of staff from the Examination function to work on nonfiler cases, an increased emphasis on nonfiler activities by other IRS functions, elimination of aged cases from inventory, cooperative working arrangements with states and the private sector, and implementation of a refund hold program. IRS considers the Nonfiler Strategy a success because, as a result of the actions noted in the preceding paragraph, IRS, among other things, (1) reduced the size of the nonfiler inventory, (2) eliminated unproductive cases that allowed IRS to focus its enforcement resources more effectively, (3) eliminated backlogs in the automated SFR inventory, and (4) increased the number of returns secured from individual nonfilers. While we acknowledge all of those accomplishments, our comparison of the results IRS achieved during the 2 years of the Strategy (1993 and 1994) with the results achieved in the year before the Strategy (1992) was inconclusive. Some of the data showed improved results compared with 1992, but other data showed the opposite. The results of the Strategy were also inconclusive when compared with IRS’ three goals. IRS achieved its goal of reducing the backlog of nonfiler investigations, but there is insufficient information with which to judge IRS’ success in achieving its other two goals. In that regard, it is unclear how much, if at all, voluntary compliance improved as a result of the Strategy. 
For example, IRS knows the extent to which nonfilers who were brought into compliance during the Strategy became noncompliant again, but it does not know how that rate of recidivism compares to years before the Strategy. Likewise, IRS did not have the comprehensive cost data needed to assess return on investment—a key component of IRS’ third goal. Also affecting an assessment of IRS’ results was the absence of measurable goals for such things as the number of overdue returns IRS expected to secure or the number of nonfilers IRS expected to bring into compliance during the Strategy. In our opinion, these various factors would make it difficult for IRS management to adequately assess its efforts during the Nonfiler Strategy and make informed decisions on the nature and extent of any future efforts. IRS Took Several Positive Steps to Address the Nonfiler Problem The objective of IRS’ Nonfiler Strategy, as described by the Commissioner of Internal Revenue in October 1993 testimony before the Subcommittee on Oversight of the House Committee on Ways and Means, was to bring nonfilers into the system and keep them there. The Commissioner cited three goals that IRS established to help achieve that objective: (1) use a combination of outreach and enforcement to improve taxpayer compliance and the identification of nonfilers, (2) eliminate the backlog in the number of nonfiler investigations by the end of fiscal year 1994 so that IRS can work individual nonfiler cases promptly, and (3) improve the way IRS directs its enforcement resources in working nonfiler cases so that it can employ the most effective techniques on different types of cases to achieve the highest return on its resource investment. A major feature of the Nonfiler Strategy was its crossfunctional approach to a problem that had primarily been the responsibility of one function—Collection. 
This approach increased the involvement of other functions, such as Examination, Underreporter, Taxpayer Service, and Public Affairs. In that regard, two major components of the Nonfiler Strategy involved the deployment of (1) revenue agents and tax auditors from the Examination function to work nonfiler cases and (2) staff from IRS’ Underreporter function to work SFR cases. According to IRS, the Examination and Underreporter functions redirected a total of about 4,000 staff years and 550 staff years, respectively, to those efforts in fiscal years 1993 and 1994. Another major component of the Nonfiler Strategy was to remove unproductive, low-priority cases from the nonfiler inventory. That inventory is the universe of nonfilers known to IRS and selected for some type of enforcement action. Within that universe are those cases that IRS has selected for possible detailed investigation—known as Tax Delinquency Investigations (TDI). According to IRS, at the start of fiscal year 1993, (1) the nonfiler inventory consisted of about 10.2 million individuals and businesses that had not filed at least 1 required tax return and (2) the number of TDI cases stood at 2.3 million. By the end of fiscal year 1994, IRS had reduced the nonfiler inventory to about 6.8 million cases, mostly by purging millions of cases that IRS deemed to have low potential because of their age. IRS plans to continue purging aged nonfiler cases annually. IRS also reduced the number of TDIs to 1.8 million cases through the deployment of additional resources to help with cases and through other efforts like the refund hold program, discussed later. Perhaps the most visible component of the Nonfiler Strategy and another example of its crossfunctional nature was IRS’ effort to encourage and help nonfilers get back into compliance through outreach and assistance (as opposed to enforcement). 
The Taxpayer Service function conducted educational workshops and helped taxpayers meet their return filing requirements while Public Affairs had primary responsibility for the communications and outreach strategy. That strategy generated a considerable amount of positive publicity for IRS. As part of the outreach effort, many districts held “nonfiler days” during which IRS volunteers, sometimes accompanied by volunteers from professional associations, such as the American Institute of Certified Public Accountants and the American Bar Association, were available to answer questions and help taxpayers prepare returns. Many IRS district offices also entered into cooperative working arrangements with state tax agencies. As a result of those joint efforts, IRS obtained tax returns, generated publicity and educational materials, identified market segments to be targeted for outreach efforts and enforcement actions, and gained access to state databases to aid in identifying nonfilers. For example, one state did a comparison that identified a large number of individuals and businesses that had filed state sales tax returns but not federal income tax returns. Also as part of the Strategy, in January 1994 IRS began putting a hold on refunds claimed by some individuals who had a prior year’s return in TDI status. The hold applied to returns involving refund claims above a certain amount filed by persons who were not in bankruptcy or under criminal investigation. IRS instructed the taxpayer by letter to file the delinquent return(s) or explain why there was no filing requirement. IRS’ letter also said that if it did not receive either the delinquent return(s) or an acceptable explanation, IRS could prepare a substitute return based on available information. IRS released the refund in cases where there was no filing requirement or the taxpayer established that a significant hardship existed. 
Otherwise, the refund was applied to the balance due on any delinquent return(s), with any remaining balance sent to the taxpayer. IRS data show that the refund hold program in 1994 resulted in the receipt of about 106,000 delinquent returns and the collection of about $16 million with those returns. IRS expanded the program in 1995 to include any situation where a refund return for more than a certain amount was filed for tax year 1994 and a prior year’s return was more than 1 year overdue, even if the overdue return was not in TDI status. According to IRS data, as of May 1995 IRS had secured about 24,000 returns and collected about $1.8 million in revenue with those returns. Was the Nonfiler Strategy a Success? According to IRS, the Nonfiler Strategy was generally a success. In reaching that conclusion, it pointed to several aspects of the Strategy, some of which were discussed in the preceding section. Among other things, IRS cited (1) a decrease in the nonfiler inventory, (2) creation of the refund hold program, (3) elimination of unproductive cases that allowed IRS to focus its enforcement resources more effectively, (4) elimination of backlogs in the automated SFR inventory, (5) increases in the number of returns secured from and dollars assessed against individual nonfilers during the 2 years of the Strategy (fiscal years 1993 and 1994) compared with the year before the Strategy (fiscal year 1992), and (6) a closer working relationship between IRS and outside stakeholders and professional associations. We assessed the results of the Strategy by looking at the key performance indicators tracked by IRS during the Strategy. We concentrated on indicators that were identified by the Commissioner in her October 1993 testimony—total number of nonfiler returns secured, number of returns filed by unknown nonfilers, and the dollar amount assessed and collected as a result of these filings. 
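The refund hold decision and offset described above reduce to a few conditions. A minimal sketch, assuming a hypothetical dollar threshold (the report does not state the actual cutoff):

```python
# Sketch of the 1994 refund hold rules described above. The conditions
# mirror the report; the threshold amount is a hypothetical placeholder.

HOLD_THRESHOLD = 1_000  # actual cutoff not stated in the report

def should_hold_refund(claim, prior_year_in_tdi, in_bankruptcy,
                       under_criminal_investigation):
    """Hold refunds above the threshold when a prior-year return is in
    TDI status and the taxpayer is not in bankruptcy or under criminal
    investigation."""
    return (claim > HOLD_THRESHOLD and prior_year_in_tdi
            and not in_bankruptcy and not under_criminal_investigation)

def apply_held_refund(claim, delinquent_balance_due):
    """Offset the held refund against the balance due; remit the rest."""
    applied = min(claim, delinquent_balance_due)
    return {"applied_to_balance": applied, "sent_to_taxpayer": claim - applied}

print(should_hold_refund(2_500, True, False, False))  # True
print(apply_held_refund(2_500, 1_800))
# {'applied_to_balance': 1800, 'sent_to_taxpayer': 700}
```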
For those indicators, we compared data for 1993 and 1994 with comparable data for the year preceding the Strategy—1992 (we could not go back before 1992 because, according to IRS, comparable data were not available). Also, because the basic objective of the Strategy was not only to bring nonfilers into the system but also keep them there, we looked at data on recidivism—the extent to which nonfilers who were brought into compliance during the Strategy became nonfilers again. While informative, the above analyses were insufficient for us to determine whether the Nonfiler Strategy was a success. We were unable to assess success because IRS (1) did not have specific goals for any of the measures discussed in the preceding paragraph, such as the number of returns it expected to secure or an acceptable rate of recidivism; and (2) did not compile data on the overall cost of the Strategy. Number of Returns Secured From Nonfilers IRS’ Strategy emphasized bringing individual nonfilers into compliance, and the number of returns secured from individual nonfilers increased steadily during the 2-year period over the number secured in fiscal year 1992. However, IRS also tracked the results of its Strategy on business nonfilers, and the number of returns secured from business nonfilers decreased (see table 2.1). IRS had intended that the redeployment of Examination staff to work nonfiler cases would free Collection staff in district offices to concentrate on collecting delinquent taxes and working business nonfiler cases. However, IRS’ statistics show declining results in both of those areas. The number of returns secured from business nonfilers declined, as noted earlier. IRS said that this decline could be attributable to an increase in timely filings. 
Another contributing factor could be the fact that according to IRS data, the percent of time that Collection staff in district offices spent on nonfiler work dropped from 6.3 percent in fiscal year 1992 to 4.9 percent in fiscal year 1993 and 4.2 percent in fiscal year 1994. Whatever the reason for the decrease in returns secured from business nonfilers, the fact remains that during the Nonfiler Strategy and despite the use of thousands of Examination staff to help work cases, the number of returns secured from nonfilers in total was less than the number secured the year before the Strategy was implemented. In addition, district office collections of delinquent taxes decreased almost 9 percent—from about $7.9 billion in fiscal year 1992 to about $7.2 billion in fiscal year 1994. In constant 1994 dollars, the decline in collections was about 13 percent—from about $8.2 billion in fiscal year 1992 to about $7.2 billion in fiscal year 1994. Table 2.2 shows how many of the returns secured during the Nonfiler Strategy came from unknown nonfilers. Compared with 1992, the average number of returns secured from unknown business nonfilers increased 6.5 percent during the Strategy while the average number of returns secured from unknown individual nonfilers decreased slightly. Net Tax Assessments and Dollars Collected With Returns IRS officials responsible for the Nonfiler Strategy said that IRS’ objective was to bring nonfilers into compliance rather than to generate revenue. Accordingly, collection of additional revenues was not a specific goal of the Strategy. Nevertheless, IRS’ key performance indicators for the Nonfiler Strategy included (1) dollars assessed and (2) dollars collected at the time the return was secured. 
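The constant-dollar comparisons above follow standard deflator arithmetic. In the sketch below the price-index levels are illustrative placeholders chosen so that fiscal year 1992's $7.9 billion restates to roughly the $8.2 billion (1994 dollars) cited in the text; they are not actual deflator values.

```python
# Constant-dollar restatement sketch. Index levels are hypothetical,
# chosen only to approximate the dollar figures quoted in the text.

PRICE_INDEX = {1992: 100.0, 1994: 103.8}  # illustrative, not actual data

def to_constant_dollars(nominal, from_year, to_year):
    """Restate a nominal amount in another year's dollars."""
    return nominal * PRICE_INDEX[to_year] / PRICE_INDEX[from_year]

fy92 = to_constant_dollars(7.9, 1992, 1994)   # FY1992 collections, $B
decline = (fy92 - 7.2) / fy92                 # vs. FY1994 collections
print(round(fy92, 1), f"{decline:.0%}")       # 8.2 12%
# i.e., roughly the ~13 percent constant-dollar decline in the text.
```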
As shown in table 2.3, if constant 1994 dollars are used, (1) net assessments decreased from fiscal year 1992 to fiscal year 1993 and then increased in fiscal year 1994; and (2) fewer dollars were collected with the return, in absolute numbers and as a percent of net assessments, in 1993 and 1994 than in 1992. The “dollars collected with return” indicator does not reflect the total amount eventually collected from the nonfilers; only the amount collected at the time the return was secured. Additional amounts may have been collected later through installment agreements, but IRS did not track that information. Repeat Nonfilers In an internal briefing document prepared for the Commissioner in advance of her October 1993 testimony before the Oversight Subcommittee of the House Committee on Ways and Means, IRS stated that the Nonfiler Strategy would be a success “if the taxpayers who return to the system remain in compliance and we are able to fully pursue compliance from those who don’t.” IRS has since found, and our sample cases corroborated, that many of the people brought into compliance during the Strategy had apparently become nonfilers again. IRS matched computer files to determine whether nonfilers brought into the system in fiscal year 1993 filed tax year 1993 returns in 1994. According to IRS, its match showed that 38 percent had not filed by August 1995—16 months after tax year 1993 returns were due. IRS had no data to show how this rate of recidivism compared with other years and no specific rate-of-recidivism goal for the Nonfiler Strategy. Thus, we had no basis for determining whether a rate of 38 percent was acceptable. Our review of a sample of cases closed by Examination also showed a large rate of recidivism. Of the 60 individuals involved in the sample cases closed in 1993, 29 (48 percent) did not file in 1994. Of those 29, 19 also had not filed in 1995 (as of May 1995), and 10 had extensions to file that had not yet expired. 
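The file match behind these recidivism figures amounts to a set difference and a rate. A toy sketch with invented taxpayers (the actual match covered far larger populations and found 38 percent):

```python
# Toy sketch of the recidivism match: of taxpayers brought back into
# compliance in FY 1993, what share failed to file the next year's
# return? All identifiers below are invented.

def recidivism_rate(brought_into_compliance, filed_next_year):
    """Fraction of restored taxpayers who lapsed again."""
    lapsed = brought_into_compliance - filed_next_year
    return len(lapsed) / len(brought_into_compliance)

compliant_fy93 = {"A", "B", "C", "D", "E"}
filed_ty93 = {"A", "C", "E"}          # "B" and "D" lapsed again
print(f"{recidivism_rate(compliant_fy93, filed_ty93):.0%}")  # 40%
```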
Similarly, of the 60 individuals involved in the sample cases closed in 1994, 31 (52 percent) had not filed in 1995 (as of May 1995); another 12 had extensions to file that had not expired. Nonmeasurable Goals and Lack of Cost Data Hampered Assessment of the Nonfiler Strategy IRS had neither measurable goals for most aspects of the Nonfiler Strategy nor comprehensive cost data against which to compare its results. Measurable program goals and reliable data on costs are important if management is to effectively assess its efforts and make informed decisions about future efforts. Although IRS’ basic objective in implementing the Strategy was to bring nonfilers into the system and keep them there, it had no goals for such things as the number of nonfilers it expected to bring into compliance or the percentage of nonfilers it expected to remain compliant in future years. The only measurable goal associated with the Nonfiler Strategy was one that called for reducing the number of TDI cases to 1.5 million cases by the end of fiscal year 1994. The absence of specific goals made it difficult for IRS officials responsible for carrying out the Strategy to know exactly what was expected of them and to measure the Strategy’s success. Some Examination personnel in the four district offices we visited said that their objective was to redirect a certain number of staff years to the effort and that they believed the Strategy was successful because they did so. However, an input measure, such as staff years, is less likely to produce a desired outcome than an output or outcome measure, such as the number of nonfilers brought into compliance. IRS did not track the overall cost of the Nonfiler Strategy.
Some cost-related data, such as the number of Examination and Collection staff years spent on the Nonfiler Strategy, were available, but (1) data on other costs, such as those incurred by other IRS functions like Taxpayer Service and Public Affairs, were not available; and (2) those data that were available were not compiled in a way that would provide management with information on the Strategy’s overall cost. IRS officials explained that return on investment was not really an important consideration with respect to the Nonfiler Strategy and that IRS never intended to measure the success of the Strategy by cost. As noted earlier, however, one of the goals of the Strategy as described by the Commissioner in her October 1993 testimony was to “improve the way we direct our enforcement resources in working nonfiler cases . . . to achieve the highest return on our resource investment.” Comprehensive cost data are also important if management is to make informed decisions on the nature and extent of future nonfiler efforts. Conclusions IRS initiated its Nonfiler Strategy to counteract a growing nonfiler problem, and it took many positive steps to deal with that problem. Its outreach effort was commendable, as was its recognition that this was an agency problem that required crossfunctional attention. Although IRS considers the Strategy a success, we were not able to reach that same conclusion on the basis of a review of available IRS data. Also, IRS’ assessment of the Strategy was limited by the absence of measurable goals and comprehensive cost data against which to compare results. Recommendations to the Commissioner of Internal Revenue To better assess the results of future nonfiler efforts, if any, and provide a better foundation for deciding about subsequent efforts, we recommend that the Commissioner of Internal Revenue (1) establish measurable goals and (2) develop comprehensive data on program costs.
Agency Comments and Our Evaluation We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designee. On December 4, 1995, we met with several IRS officials, including the National Director, Service Center Compliance; the National Director, Compliance Specialization; the Acting Director of the Office of Return Delinquency; and the Acting Director for Special Compliance Programs. They provided us with oral comments, which the National Director, Service Center Compliance, reiterated and expanded on in memoranda dated December 11, 1995, and February 12, 1996. IRS officials took strong exception to the “extremely negative tone” of our draft report. They said that the draft focused almost exclusively on criticisms of the Strategy without fully acknowledging its accomplishments and that, as a result, an uninformed reader would likely judge the Strategy to have been a failure when, in IRS’ view, it was generally a success. In response to those comments, we revised chapter 2 of the report to give more prominence to the positive aspects of the Strategy and to recognize IRS’ position on the Strategy’s success. We reiterate, however, that although IRS is confident that the Strategy was a success, we could not reach the same conclusion given the statistical data available and the absence of other data. IRS acknowledged that it had only one goal for which a specific target was set, the TDI goal, but pointed out that it did have several key performance indicators (such as the number of returns secured and the net dollars assessed) that were designed to show positive or negative trends in results. We agree that it is useful to track trends, but such an exercise is more meaningful if there are goals against which to compare those trends. For example, speaking hypothetically, a 5-percent increase in the number of returns secured might look good on its face but would not look as good if the goal were a 25-percent increase. 
IRS said that experience and statistical information obtained during the 2 years of the Strategy will permit better planning and goal-setting for any future endeavor. As for cost data—IRS said that it never intended to measure the success of the Strategy by cost and that it is debatable whether all of the goals of the Strategy are amenable to accurate cost/benefit analysis. We are not suggesting that cost should be the sole measure of success, but we think it should be part of any overall assessment. Our draft report also included a recommendation that IRS reconcile conflicting data on the results of the Strategy. However, as discussed in chapter 1, IRS subsequently told us that it had revised some data in the Commissioner’s Nonfiler Report. Because those revisions resolved the data inconsistencies referenced in our draft, we dropped that proposed recommendation. Opportunities to Improve Future IRS Nonfiler Efforts Our review of the Nonfiler Strategy identified several areas where we think opportunities exist for IRS to enhance future efforts directed at nonfilers. Those areas include (1) the length of time that elapses from the time a return becomes delinquent until IRS first attempts to make telephone contact with the nonfiler, (2) the use of higher graded staff to work cases or do tasks that might be effectively done by lower graded staff, and (3) the absence of special procedures for dealing with recidivists—nonfilers who are brought into compliance and then become nonfilers again. IRS has taken some action in two of these areas. It shortened the time that elapses before a first notice is sent to persons who have been identified as potential nonfilers. However, IRS’ procedures still call for sending several notices to a potential nonfiler before IRS attempts to make telephone contact. IRS also developed special procedures for dealing with recidivists.
Those procedures call for, among other things, eliminating some notices but say nothing about revising the language of the remaining notices. IRS Takes a Long Time to Make Telephone Contact With Nonfilers IRS officials have stated that the faster they can act to obtain nonfiled returns and related taxes, the more likely that the action will be successful. However, as described in chapter 1, IRS’ process for identifying and investigating nonfilers is a lengthy one. To identify nonfilers, IRS computer-matches data on information returns with data on income tax returns. In the past, this match was usually not done until December—after IRS had finished processing information returns and those income tax returns that were filed late because of extensions. IRS staff must then review the results of the match to determine what action to take. Only after that review is the nonfiler sent a notice. For example, individuals who did not file tax returns in 1993 would not have received a notice until a year later—April 1994. Subsequent notices would have been issued about 6 to 8 weeks later, with the last notice going out in late August 1994. If the case was still unresolved and met the criteria for referral to ACS, it would not have gone to an ACS site for telephone contact until October 1994—1-1/2 years after the return was due. Those cases unresolved by ACS and meeting certain criteria would then be assigned to a revenue officer who might attempt to visit the taxpayer. The whole process may take years, and, as noted earlier, IRS ends up dropping millions of nonfilers from its inventory—more than 5 million in 1994—whose returns have been in inventory for several years. IRS has a project directed at reducing the time it takes to match data on information returns with data on income tax returns and thus shortening the time before the first notice is issued by several months. 
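The elapsed times in the example above can be reproduced with simple date arithmetic. The specific days below are approximations consistent with the months stated in the text (returns due in April 1993):

```python
# Date arithmetic for the nonfiler pipeline timeline described above.
# Exact days are approximations of the months given in the text.
from datetime import date

due = date(1993, 4, 15)               # prior-year returns due
events = {
    "information-return match run": date(1993, 12, 1),
    "first notice issued": date(1994, 4, 1),
    "last notice issued": date(1994, 8, 25),
    "referred to ACS call site": date(1994, 10, 1),
}
for name, when in events.items():
    years = (when - due).days / 365
    print(f"{name}: {years:.1f} years after the return was due")
# The ACS referral lands about 1.5 years after the due date.
```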
As a result of that project, according to an IRS National Office official responsible for managing the Nonfiler Strategy, IRS plans to move up first contact with certain nonfilers to the November after the tax return is due. More significant changes, according to IRS, depend on successful implementation of IRS’ multibillion-dollar systems modernization effort, known as Tax Systems Modernization. We made a similar point in an earlier report on IRS’ collection of delinquent taxes: “According to private and state collectors, early telephone contact is cost-effective and allows the collector to determine why payment has not been made, establish future payment schedules, and update information on the debtor’s status. Collectors can also discuss with the debtor possible adverse actions that could be taken if payment is not received.” In the same report, we recommended, among other things, that IRS restructure its collection organization to support earlier telephone contact with delinquent taxpayers. Although that quote and recommendation relate to the collection of delinquent taxes, they would seem equally appropriate to the collection of delinquent returns (and any delinquent taxes associated with those returns). In January 1995, IRS implemented an Early Intervention Project nationwide. Although the project focuses on the collection of delinquent taxes from persons and businesses that have filed returns, its goal (shortening the notice process and contacting the taxpayer by telephone sooner) is also relevant to delinquent returns. We were told that the project was not extended to nonfilers because sufficient staff would not have been available to handle the resulting workload. In a similar vein, an IRS business process reengineering team reviewed the collection process and made several recommendations, some of which were directed at reducing the time taken to resolve nonfiler cases by eliminating some notices and moving certain cases more quickly to a call site for attempted telephone contact with the taxpayer.
As of July 1995, those recommendations were under consideration by IRS management.

Some Nonfiler Case Work Could Be Done by Lower Graded Staff

Nonfiler cases that cannot be resolved by ACS and that meet certain criteria are referred for investigation by field personnel—revenue officers in IRS’ Collection function and, during the Nonfiler Strategy, revenue agents and tax auditors in IRS’ Examination function. In 1993 and 1994, IRS’ Examination function had about 18,000 revenue agents and tax auditors. Over that 2-year period, Examination redirected about 4,000 staff years to work nonfiler cases. Of the 140 cases we reviewed that had been closed by Examination in 4 IRS district offices, 92 (66 percent) were worked by GS-11 revenue agents. Of the remaining cases, 40 (29 percent) were worked by staff (generally tax auditors) below grade GS-11, 4 (3 percent) were worked by revenue agents above GS-11, and 4 (3 percent) were worked by staff whose grades could not be determined. Those data are not projectable. However, national data from Examination’s management information system showing the hours charged to nonfiler cases closed in fiscal years 1993 and 1994 also showed that GS-11 revenue agents accounted for most of the time spent by Examination on nonfiler work. Specifically, of the approximately 3.6 million hours charged by revenue agents and tax auditors on those cases, about 2.4 million hours (66 percent) were charged by GS-11 revenue agents. Another 491,000 hours (14 percent) were charged by revenue agents above GS-11, and 155,000 hours (4 percent) were charged by agents in grades 5 through 9. The remaining hours were charged by tax auditors. Generally, higher graded revenue agents audit more complex tax returns. For example, when not working nonfiler cases, GS-11 and above revenue agents generally audit complex returns filed by individuals and returns filed by corporations.
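The case-file shares cited above are simple proportions of the 140 reviewed cases; a quick tally (a hypothetical check using the figures in the text, not IRS data):

```python
# Grades of staff who worked the 140 closed Examination cases reviewed
# in four district offices, per the figures cited above.
cases = {
    "GS-11 revenue agents": 92,
    "staff below GS-11 (generally tax auditors)": 40,
    "revenue agents above GS-11": 4,
    "grade undetermined": 4,
}

total = sum(cases.values())  # 140 cases
for who, n in cases.items():
    print(f"{who}: {n / total:.0%}")
# GS-11 revenue agents: 66%
# staff below GS-11 (generally tax auditors): 29%
# revenue agents above GS-11: 3%
# grade undetermined: 3%
```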
Although it helped IRS to reduce its nonfiler inventory and secure delinquent returns, the use of GS-11 and above Examination staff on nonfiler cases might have also contributed to an increase in IRS’ audit rate for individual returns and a decline in the audit rate for nonindividual returns. For example, the audit rate for individual returns went from 0.92 percent in fiscal year 1993 to 1.08 percent in fiscal year 1994, an increase that IRS has attributed to the Nonfiler Strategy. At the same time, however, the audit rate for corporate returns decreased from 3.05 percent to 2.31 percent. Although other factors may have contributed to that decrease, several of the revenue agents and Examination officials we interviewed in four district offices told us that if the GS-11 and above agents had not been doing nonfiler work, they would have been doing corporate audits. Examination officials in one district, for example, told us that because of the nonfiler work, the number of corporate audits done in that district decreased by about 10 percent. Although Examination officials, revenue agents, and tax auditors we interviewed in the four district offices we visited had several positive things to say about the Nonfiler Strategy and Examination’s role therein, a common theme expressed by many of them was that much of the nonfiler case work done by revenue agents and tax auditors could have been done by lower graded staff. In one district office, for example, that view was expressed by the Chief and Assistant Chief of Examination as well as the two Branch Chiefs, one Group Manager, three revenue agents, and three tax auditors we interviewed. Our review of case files in the four districts led to a similar conclusion—that the nonfiler case work in those districts involved tasks that could be done by lower graded staff. Our case file reviews indicated that with some exceptions, the work done on those cases was not so complex that it required the expertise of higher graded staff. 
That perception was confirmed by several of the agents and auditors we spoke with in the four district offices who said that nonfiler cases were easier to work than audit cases and were not technically challenging. One reason why revenue agents and tax auditors might not have found nonfiler work technically challenging is that audits of returns secured from nonfilers during the Nonfiler Strategy were different from normal audits. As explained in an August 1992 document on the Nonfiler Strategy signed by the then Acting Commissioner, the nonfiler audit process was streamlined so that cases could be worked in a minimal amount of time. As noted in the document, audits of nonfiler returns were to be limited in scope, with the rule of thumb being “if the return makes sense, accept it.” One presumed advantage of using revenue agents on nonfiler cases is that they are accustomed to making field visits to contact taxpayers. However, in only 15 percent of the cases we reviewed was there any evidence of a field visit, and an IRS analysis of 1,000 cases completed by Examination in one district office showed that a field visit was made in only 23 cases (2.3 percent). It is not our intent to second-guess IRS’ staffing decisions for the Nonfiler Strategy. We do not know what options were available to IRS when it implemented the Strategy and, even if we did, second-guessing would serve no useful purpose. Our intent, rather, is to suggest, on the basis of our case reviews and our interviews of persons involved in doing those cases, that different staffing patterns might be appropriate for future nonfiler efforts, if any. Those patterns might involve (1) using lower graded revenue agents instead of GS-11s, (2) using more tax auditors or service center tax examiners instead of revenue agents, and/or (3) making greater use of paraprofessionals or administrative staff. 
The kinds of tasks that could be done by paraprofessionals or administrative staff, in our opinion, include such things as locating nonfilers, contacting them by telephone or letter, scheduling and rescheduling appointments, and preparing SFRs. In many of the cases we reviewed, for example, it was our perception that Examination’s success in securing delinquent returns was due, in large part, to the agents’ and auditors’ persistence in contacting nonfilers by telephone and in following up with nonfilers when they missed an appointment or when returns or information they had promised to mail were not received. Because it did not appear that the person making the phone calls needed any special auditing skills, it seemed that IRS could achieve the same result by using paraprofessionals or other lower graded staff, leaving higher graded staff more time to audit. One of the district offices we visited had some experience using paraprofessionals. The Detroit District Office, in June 1994, trained 15 Accounting Aides, primarily grade 5, to help prepare reports and case files for nonfiler cases. The Detroit office reported such advantages as enhanced productivity, reduced nonfiler workload, and more time for revenue agents and tax auditors to do other duties. The average annual base salary of a GS-5 in 1995 (figured at step 6, the middle of the pay scale) was $21,827, compared with $33,070 for a GS-9 and $40,010 for a GS-11. Although our work focused on the use of Examination staff during the Nonfiler Strategy, it seems logical that our observations may also be pertinent to the use of Collection staff. Revenue officers range in grade between GS-5 and GS-12, any of whom, according to Collection officials, might be asked to perform nonfiler investigations.

IRS Recently Developed Special Procedures to Deal With Nonfiler Recidivism

IRS has three broad business objectives, the first of which is to increase voluntary compliance.
With that in mind, a key indicator of the success of IRS’ nonfiler efforts, in our opinion, is the extent to which nonfilers brought into compliance remain compliant. As noted in chapter 2, our analysis and a broader analysis done by IRS showed that many of the nonfilers brought into compliance in 1993 did not file returns in 1994. IRS spent resources getting these nonfilers to comply only to have many stop filing 1 year later. When they are identified as nonfilers again, IRS must spend additional resources and begin the enforcement cycle again. IRS developed a strategy for dealing with these repeat nonfilers, whom we refer to as recidivists, that was approved by the Deputy Commissioner in July 1995. The strategy calls for such things as expediting cases against certain nonfilers by eliminating some notices, developing a separate scoring system for recidivists, and referring some cases for possible criminal investigation. IRS officials told us in November 1995 that those procedures were being reconsidered since the extent of recidivism (38 percent) was less than what they thought at the time the procedures were prepared. At that time, IRS’ initial analysis of recidivism had indicated a rate of more than 50 percent. While the proposed strategy for dealing with recidivists calls for eliminating some notices, there is no mention of any intent to revise the language of the notices that will be sent. If the intent is to reduce the number of notices from four to two, for example, by simply eliminating the second and third notices and keeping the first and fourth, then the language in the remaining two notices might have to be revised to reflect the truncated process. Because a notice’s content and format may affect the recipient’s ability and willingness to comply, it is important that notices be clear, informative, and comprehensive. The first notice IRS now sends nonfilers, for example, is very low key. 
It notes that IRS has yet to receive a return and asks the person or business to either (1) file a return, (2) notify IRS if a return has already been filed, or (3) explain why the person or business has no filing requirement. Subsequent notices are increasingly urgent in tone. If IRS intends to reduce the number of notices it sends to recidivists, the first notice may have to convey a greater sense of urgency than is now the case while still giving the apparent recidivists the opportunity to explain why they have no filing requirement. An IRS official responsible for the nonfiler program acknowledged that if IRS decides to send fewer notices to recidivists, it may need to revise the wording of those notices. It is important that IRS make that determination in a timely manner because of the lengthy process involved in approving and making the computer programming changes needed to revise a notice.

Conclusions

We believe that opportunities exist for IRS to further enhance its efforts to deal with nonfilers. We believe that the quicker IRS can make telephone contact with a nonfiler, the better its chances of making that nonfiler compliant. IRS is moving in that direction by speeding up issuance of the first notice to potential nonfilers. We believe that IRS could move even further in that direction if, as recommended by an internal study group, it reduced the number of notices sent to nonfilers and moved nonfiler cases more quickly to a telephone call site—similar to its Early Intervention Project for delinquent taxes. IRS should consider extending that project to nonfilers, at least to the extent deemed feasible given the amount of staff available to work on the project. In that regard, IRS might want to consider testing early intervention for nonfilers to see what impact, if any, it has on compliance. Related to our views on telephone contact is our belief that IRS could use its enforcement resources more efficiently in dealing with nonfilers.
We believe that it is to IRS’ benefit to limit as much as possible the extent to which higher graded enforcement staff are doing work that could be done effectively by lower graded enforcement staff or even, in some instances, by paraprofessionals or administrative staff. As we discussed earlier, for example, successful closure of many of the cases we reviewed seemed to be due, in no small part, to the revenue agent’s persistence in calling nonfilers. We see no reason why lower graded staff could not be just as persistent. Keeping nonfilers compliant once they have been brought into compliance is critical if IRS is to increase voluntary compliance and maintain control over its nonfiler workload. IRS’ recently approved strategy for dealing with recidivism, if implemented, would be a big step in the right direction. Part of that strategy calls for reducing the number of notices sent to recidivists. There is no mention, however, of any intent to review the language of the remaining notices to ensure that it is still appropriate.

Recommendations to the Commissioner of Internal Revenue

To enhance any future IRS efforts directed at nonfiling, we recommend that the Commissioner of Internal Revenue do the following:

Revise procedures to provide for more timely telephone contact with nonfilers in line with the reengineering team’s recommendations. In that regard, IRS should consider whether the Early Intervention Project, which includes, among other things, earlier telephone contact with taxpayers whose taxes are delinquent, should be extended to nonfilers.

Consider the feasibility and appropriateness of assigning more nonfiler work to lower graded professional staff, paraprofessionals, and administrative staff. In considering its options, IRS might want to solicit input from district managers and staff who worked on the Nonfiler Strategy.
If IRS decides to send fewer notices to recidivists, it should determine whether the language of the remaining notices should be revised.

Agency Comments and Our Evaluation

We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designee. On December 4, 1995, we met with several IRS officials, including the National Director, Service Center Compliance; the National Director, Compliance Specialization; the Acting Director of the Office of Return Delinquency; and the Acting Director for Special Compliance Programs. They provided us with oral comments, which the National Director, Service Center Compliance, reiterated and expanded on in memoranda dated December 11, 1995, and February 12, 1996. In commenting on our draft, IRS said that it agreed with only one of our three recommendations—the one dealing with the language of notices sent to recidivists. IRS said that our proposed recommendation on timely contact with nonfilers was unnecessary because IRS has been working to accelerate the processing of information returns for several years with the intent of making earlier contacts with nonfilers and filers who have underreported their income. We have revised the body of our report to more clearly acknowledge those efforts. However, our recommendation was intended to go beyond the initial identification of and contact with nonfilers. Our intent was to encourage IRS to make more timely telephone contact with nonfilers. Although the accelerated processing of information returns should speed up the entire process and lead to quicker telephone contact, we believe that there are other steps IRS could take, similar to its Early Intervention Project for delinquent taxes, to help achieve that end. In that regard, we think our recommendation is necessary, and we have reworded it to clarify the focus on earlier telephone contact.
In response to our revised recommendation, IRS said that it (1) has established the framework for expanding the Early Intervention Project to business nonfilers, if sufficient resources become available; and (2) does not anticipate having sufficient staffing to expand the Project to individual nonfilers. IRS said that if circumstances change in the future, it may find it feasible to consider including individual nonfilers in the Project. Although we acknowledge the resource limitations, we wonder whether it might be feasible for IRS to revise the Early Intervention Project to include a mix of delinquent tax and nonfiler cases, even if that means having to exclude some delinquent tax cases, rather than limiting the Project to only delinquent tax cases. That might enable IRS to assess the relative benefits of early intervention on both types of cases. IRS took most exception to our proposed recommendation on assigning nonfiler case work. IRS said that the recommendation was unnecessary and reflected a basic misunderstanding of the purpose of the Nonfiler Strategy. IRS said that the decision to assign nonfiler cases to Examination employees, even those capable of working higher graded, more productive cases, was (1) a management decision based on the view that maintaining the viability of the nonfiler program outweighed possible short-term productivity losses in other areas and (2) a short-term response to stem the growth of the nonfiler inventory that was never intended as an ongoing work assignment practice. IRS also said that a review of the special nonfiler auditing standards makes it clear that techniques needed under the nonfiler initiative required more technical expertise than could be provided by paraprofessionals. As noted earlier, it was not our intent to second-guess IRS’ staffing decisions for the Nonfiler Strategy but rather to suggest that IRS consider other options in staffing any future nonfiler initiatives. 
Our work at four district offices indicated that other options might be more efficient, depending on the availability of staff. In that regard, our review of case files in four district offices indicated that the audit work on nonfiler cases in those districts was often less involved than suggested by the auditing standards referred to by IRS and thus often did not require the expertise of GS-11 revenue agents. That perception was supported by many of the district office Examination staff and managers we interviewed who said that nonfiler work could be done by lower graded staff. Those lower graded staff could be revenue agents below GS-11 or tax auditors or, for some tasks, paraprofessionals or administrative staff. We revised the report and reworded the recommendation to avoid the impression that we are advocating that all nonfiler work be done by paraprofessionals. IRS also questioned how we could draw conclusions about staffing when our review was limited to four districts and our results are not projectable. We believe the scope of our work was sufficient to raise questions about the level of staffing needed to do the kind of nonfiler case work that was done during the Nonfiler Strategy. We agree, however, that it was not sufficient to support a specific recommendation that IRS adopt different staffing patterns for any future nonfiler effort (which is how we had worded the recommendation in our draft report). Thus, we revised our recommendation to (1) give IRS more flexibility in deciding how, if at all, the staffing of future nonfiler efforts should differ; and (2) suggest that IRS, in considering its options, solicit input from managers and staff in district offices that we did not visit. 
After we revised our recommendation, IRS advised us that it will, in the future, “consider using appropriately graded employees, if available.” | GAO reviewed the results of the Internal Revenue Service's (IRS) Nonfiler Strategy and opportunities to improve any similar future efforts. GAO found that: (1) IRS actions to achieve its Nonfiler Strategy's goals included deploying examination staff to work on nonfiler cases, increasing other IRS functions' emphasis on nonfiler activities, eliminating old cases from inventory, establishing cooperative relationships with states and the private sector, and implementing a refund hold program; (2) IRS believes that its Nonfiler Strategy was generally a success, since it reduced its nonfiler inventory, eliminated unproductive cases, increased the number of returns from and dollars assessed against individual nonfilers, and created closer working relationships with outside stakeholders and professional associations; (3) although IRS reduced its nonfiler inventory, there are not enough data to determine voluntary compliance improvement or the program's cost-effectiveness; (4) returns from business nonfilers and collection of delinquent taxes decreased during the two years the strategy was in effect; (5) IRS made measuring the strategy's success more difficult by failing to establish measurable goals; (6) at least 38 percent of nonfilers who eventually filed a return became recidivists in the following year; (7) future IRS nonfiler efforts could be improved by shortening the time before first notices and telephone contacts are made, using lower-grade staff to pursue nonfiler cases, and revising notices sent to recidivists to increase their urgency; and (8) IRS has reduced the time before sending first notices and developed special recidivist procedures, but it continues to send several notices before making telephone contact. |
Scope and Methodology

We reviewed federal legislation, regulations, and processes regarding spectrum management and spectrum sharing, including NTIA’s Manual of Regulations and Procedures for Federal Radio Frequency Management, as well as various FCC plans, notices, orders, and other publications related to spectrum management and sharing. We conducted multiple interviews with FCC, NTIA, and various advisory committees, such as the Commerce Spectrum Management Advisory Committee (CSMAC). We selected 7 of the 19 Interdepartment Radio Advisory Committee (IRAC) agencies—the Departments of Commerce, Defense, Homeland Security, Interior, Justice, Transportation, and Treasury—based on which agencies were most likely to have experience with spectrum sharing. We interviewed the spectrum managers for these departments to better understand their experiences with sharing, including successes and challenges, and analyzed the extent to which spectrum sharing was a part of their spectrum management plans. We also interviewed a variety of stakeholders and experts outside the federal government with knowledge and experience related to spectrum sharing issues. These stakeholders and experts fell into four groups:

Nonfederal spectrum users: We interviewed officials from seven commercial entities, such as Verizon, Sprint, and other wireless and communications companies. We also interviewed local government officials regarding their spectrum sharing experiences. We selected these nonfederal users based on their experiences with sharing spectrum or on their vested interest in spectrum policy.

Companies that create spectrum-sharing solutions: We interviewed two companies that create spectrum sharing technologies. We selected these companies based on recommendations from spectrum experts and federal agency officials about which companies were most active with spectrum sharing technology development.

Industry and academic experts: We interviewed 16 industry and academic experts. We selected these experts based on their published and recognized research credentials for their work on spectrum management, spectrum sharing, and the economic impacts of spectrum-related policies, and on other stakeholders’ and experts’ recommendations.

International spectrum management officials: We interviewed spectrum management officials from Canada, the United Kingdom, and Australia to compare other countries’ spectrum management and spectrum-sharing practices to those of the United States. We chose these three countries based on their level of experience dealing with spectrum-sharing issues. We also interviewed officials from the International Telecommunication Union to understand its role in advising international spectrum management and spectrum sharing policies.

We also completed a literature search and reviewed recent reports and articles related to spectrum sharing, including academic and government reports as well as speeches and articles by the groups of officials and experts we interviewed as described above. A complete list of the departments, agencies, experts, and companies that we interviewed can be found in appendix I. The information and perspectives that we obtained from the interviews may not be generalized to all experts and industry stakeholders that have an interest in spectrum policy. Rather, comments and views were reviewed in context with current literature on spectrum management issues. We conducted this performance audit from September 2011 to October 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Background

In the United States, responsibility for managing spectrum—including allocating, assigning, regulating, and facilitating the sharing of spectrum—is divided between two agencies, NTIA and FCC. NTIA and FCC jointly determine the amount of spectrum allocated for federal and nonfederal use, including both exclusive and shared use. After this allocation occurs, in order to use spectrum, nonfederal users, such as wireless companies and local governments, must follow rules and be authorized by FCC to use specific frequencies. When spectrum is repurposed, FCC may also be authorized to hold an auction to distribute licenses through a bidding process. Federal users, like the military, must follow rules and obtain frequency assignments from NTIA. Both NTIA and FCC have authority to issue rules and regulations on use of spectrum as necessary to ensure effective, efficient, and equitable domestic spectrum use. Federal agencies use spectrum to help meet a variety of missions, including emergency communications, national defense, land management, and law enforcement. More than 60 federal agencies and departments combined have over 240,000 frequency assignments. As of September 2012, 9 departments and agencies held the vast majority of the assignments: the Department of Defense, the Federal Aviation Administration, the Department of Justice, the Department of Homeland Security, the Department of the Interior, the Department of Agriculture, the United States Coast Guard, the Department of Energy, and the Department of Commerce, listed in descending order of assignments, together hold 94 percent of all federally assigned spectrum. (See fig. 1.) Nonfederal entities (which include commercial companies and state and local governments) also use spectrum to provide a variety of services.
For example, state and local police departments, fire departments, and other emergency services agencies use spectrum to transmit and receive critical voice and data communications, while commercial entities use spectrum to provide wireless services, including mobile voice and data, paging, broadcast radio and television, and satellite services. Not all spectrum has equal value. The spectrum most highly valued generally consists of frequencies between 225 and 3700 MHz, as these frequencies have properties well suited to many important wireless technologies, such as mobile devices and radio and television broadcasting. According to NTIA’s Office of Spectrum Management, federal agencies have exclusive use of about 18 percent of this highly valued spectrum, while nonfederal users have exclusive licenses to approximately 33 percent. The remainder of this spectrum is allocated to shared use. The types and degree of sharing between governmental and nongovernmental users vary across the bands included within this shared spectrum. In addition, increasing demands on spectrum mean that federal and nonfederal users increasingly occupy adjacent bands, which in practice necessitates intensive coordination on technical rules. Estimates of the extent of predominant federal use within the spectrum allocated for shared use vary depending on the particular evaluation model and analyses employed. Depending on the estimate used, the total percentage of the most highly valued spectrum exclusively or predominantly used by the federal government ranges from approximately 39 percent to 57 percent. Spectrum sharing can be defined as the cooperative use of common spectrum that allows disparate missions to be achieved. In this way, multiple users agree to access the same spectrum at different times or locations, as well as negotiate other technical parameters, to avoid adversely interfering with one another.
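A back-of-the-envelope tally of the allocation figures above (assuming all percentages are of the same 225-3700 MHz base):

```python
# Shares of the highly valued 225-3700 MHz spectrum, per the NTIA
# figures cited above (assumed to share the same base).
federal_exclusive = 18     # percent, federal exclusive use
nonfederal_exclusive = 33  # percent, nonfederal exclusive licenses
shared = 100 - federal_exclusive - nonfederal_exclusive
print(shared)  # 49: roughly half is allocated to shared use

# The 39-57 percent range for spectrum exclusively or predominantly
# used by the federal government implies estimates of predominant
# federal use within the shared portion of roughly:
low, high = 39 - federal_exclusive, 57 - federal_exclusive
print(low, high)  # 21 39 (percentage points, out of the shared 49)
```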
For sharing to occur, users and regulators must negotiate and resolve where (geographic sharing), when (sharing in time), and how (technical parameters) spectrum will be used. (See fig. 2.) Both FCC and NTIA manage the process that leads to spectrum sharing between federal and nonfederal users. The steps involved in the process include the following:

Prior to authorizing a nonfederal user to share spectrum with federal users, FCC will coordinate with NTIA on the allocation and service rulemakings required that define the technical and operating conditions for shared access to spectrum. NTIA will provide draft findings to IRAC, which provides advice to NTIA regarding federal spectrum. FCC participates in IRAC as a liaison. IRAC determines which agencies would be affected by the nonfederal use of spectrum and acts as the forum where those agencies consider how they may accommodate the nonfederal user. According to NTIA, any nonfederal user may approach NTIA and IRAC to discuss a proposal to use federal or shared bands to determine any obstacles and best ways forward. In some cases, FCC encourages nonfederal applicants to work directly with concerned agencies to try to reach agreement on an arrangement that could then be adapted to FCC rules or licenses.

Similarly, if federal users request frequency assignments in nonfederal or shared bands, these requests must be coordinated with FCC through IRAC. According to NTIA, thus far, all requests by federal entities to change an allocation have gone through NTIA to FCC and have required an FCC rulemaking.

To have access to federal spectrum, the nonfederal entity must also obtain an FCC license. FCC will coordinate the allocation change or license application, including technical and operational conditions, for sharing federal spectrum through the Frequency Assignment Subcommittee of IRAC.
Next, before any spectrum sharing takes place in federal or shared spectrum, NTIA must vet the coordinating parties’ requests, assign frequencies, and ensure that the systems the parties will be using—such as the land mobile radios used by state and local emergency responders that share spectrum with federal users—are compatible. When spectrum sharing occurs solely among federal users within federal exclusive bands, IRAC’s Frequency Assignment Subcommittee, using NTIA’s database of federal frequency assignments, reviews the spectrum requests and identifies any potential interference concerns prior to commencing the shared use. Federal sharing occurs routinely. For example, the Departments of Defense and Transportation and the Department of Commerce’s National Oceanic and Atmospheric Administration share spectrum for directing aircraft and monitoring weather conditions. The Department of Defense also frequently shares spectrum among its own various programs, internal services, and agencies. When sharing occurs solely among nonfederal users, FCC seeks to allow flexibility for license holders to coordinate and negotiate spectrum-sharing agreements among themselves. FCC provides flexibility in a couple of ways. One way is through the expanded issuance of flexible use licenses. As opposed to traditional licenses, where usage is limited to the specific terms of the license (e.g., TV broadcast stations in specific markets), flexible use licenses allow for a wider array of uses without having to seek additional FCC authorization. Flexible use licensing expands the pool of potential entities that would be able to innovate and share the spectrum beyond those that would use the spectrum in a similar manner. However, with both traditional and flexible use licenses, if a proposed shared use is not consistent with the terms of a license, an FCC rulemaking would be required to allow that use.
Another way FCC provides flexibility is with respect to its secondary market policies and rules that permit licensees to share their spectrum resource through spectrum lease arrangements. While FCC tracks these secondary market transactions, users negotiate their own terms, making it difficult to gauge the extent to which sharing occurs among these users, if at all. Spectrum sharing also occurs through unlicensed access by anyone using wireless equipment certified by FCC for those frequencies. Equipment such as wireless microphones, baby monitors, and garage door openers share spectrum with other services on a non-interference basis typically within a limited geographic range and at low power levels to avoid interference with higher priority uses. In contrast with most licensed spectrum use, unlicensed users have no regulatory protection against interference from other licensed or unlicensed users in the band. Unlicensed use is regulated to ensure that devices do not cause interference to other operations in the spectrum. For example, wireless fidelity (Wi-Fi) devices share some band segments in the 5 gigahertz (GHz) range with military radar subject to the condition that the Wi-Fi devices are capable of spectrum sensing via dynamic frequency selection; if the Wi-Fi device detects a radar signal, the device must immediately vacate the channel the radar signal is on. Many technological developments have also increased spectrum efficiency and further enabled sharing. For example, dynamic spectrum access technologies under development could allow equipment to sense and select among available frequencies in an area, efficiently using whatever frequencies might be available. This allows users to share frequencies in the same location in very small increments of time. Software-defined radios also use spectrum more efficiently by accessing different frequencies in one location. 
In addition, these radios use more efficient batteries that allow them to perform more sophisticated tasks while using less spectrum than traditional radios. As another example, small cell technology allows users to share the same frequencies in close proximity to each other. Further, emerging fourth generation (4G) Long Term Evolution (LTE) technologies, as used by some smart phones to access the Internet, promise improvement in data transfer speeds. Research continues on these and other fronts to enable more efficient use of spectrum, including sharing. Dynamic spectrum access technologies are currently able to sense for available frequencies before transmission (listen before talk), but not during transmission (listen while talk). Sensing before transmission involves sensing available frequencies, then jumping and transmitting, which causes lag time. The technology to enable sensing during transmission, which would allow a user to seamlessly continue communication while moving geographically through spectrum bands, is still under development.

Some Users Lack Incentives and Face Barriers to Sharing Spectrum

Some Users Lack Economic Incentives to Share Spectrum

While federal spectrum users often share spectrum among themselves, they may have little economic incentive to otherwise use spectrum efficiently, including sharing it with nonfederal users. From an economic perspective, when a consumer pays the market price for a good or service and thus cannot get more of it without this expense, the consumer has an incentive to get the most value and efficiency out of the good as possible. If no price is attached to a good—which is essentially the case with federal agencies’ use of spectrum—the normal market incentive to use the good efficiently may be muted.
In the case of federal spectrum users, obtaining new spectrum assignments may be difficult, so an agency may have an incentive to conserve and use efficiently the spectrum it currently has assigned to it or currently shares, but the extent of that incentive is likely weaker than if the agency had to pay market price for all of its spectrum needs. Consequently, federal spectrum users do not fully face a market incentive to conserve on their use of spectrum or use it in an efficient manner. The full market value of the spectrum assigned to federal agencies has not been assessed but, according to one industry observer, most likely reaches into the tens of billions of dollars. Similarly, many nonfederal users, such as television broadcasters and public safety entities, did not pay for spectrum when it was assigned to them and do not pay the full market price for their continuing use of spectrum, so, like federal agencies, they may not fully have market-based incentives to use spectrum efficiently. In contrast, licensed, commercial users that purchase spectrum at auction generally have market incentives to use their spectrum holdings efficiently, but these users also have incentives that work against their sharing spectrum. FCC officials and industry stakeholders and experts told us that these users may prefer not to share their unused spectrum because they are concerned about the potential for interference to degrade service quality for their customers. Also, they may not want to give potential competitors access to spectrum. Industry stakeholders and experts also said that companies seeking spectrum may prefer obtaining exclusive spectrum licenses over sharing spectrum that is licensed to another company or federal user, given uncertainties about regulatory approvals, interference, and enforcement if interference occurs.
Several Barriers Can Deter Users from Sharing Spectrum

Federal agencies will not risk mission failure, particularly when there are security and public safety implications. According to the agency officials we contacted, federal agencies will typically not agree to share spectrum if it puts achievement of their mission at risk. The officials stressed that when missions have security and safety implications, sharing spectrum may pose unacceptable risks. For example, the military tests aircraft and trains pilots using test ranges that can stretch hundreds of miles, maintaining constant wireless contact. While there may be times and locations where the frequencies are not in use because aircraft are not in the area, communication frequencies in the test ranges cannot be shared, according to officials in the Department of Defense, because even accidental interference in communications with an aircraft could result in catastrophic mission failure. Further, sharing information about such flights could expose pilots and aircraft, or the military’s larger mission, to increased risk. Federal law enforcement agencies are also concerned about how sharing spectrum could put missions at risk. For example, officials at the Departments of Treasury and Justice explained that interference with communications among agents could put the agents in danger and cause them to miss mission critical information. According to officials from the Department of Justice, the department tested sharing spectrum with a major commercial carrier in a metropolitan area in 2008 and concluded that the department and the carrier could not co-exist on the same spectrum. NTIA also reported that although sharing should be accommodated when appropriate, it is necessary to establish clear regulatory mechanisms for sharing to ensure that federal users are not required to assume responsibility for mitigating interference.
According to FCC officials, concerns about risk of mission failure can drive conservative technical standards for federal agencies’ missions that can make sharing spectrum impractical. In general, the technical analyses and resulting standards federal agencies develop are based on worst-case scenarios and not on assessments of the most likely scenario or a range of scenarios. Moreover, in contrast to FCC’s open rulemaking process, there is little opportunity for public input to the federal agencies’ standards-setting process. Stakeholders may meet or have discussions with NTIA and the relevant federal agencies, but this occurs without any formal public process. Nor do stakeholders have any effective means to appeal other than by asking FCC to reject NTIA’s analysis or standards.

Spectrum sharing can be costly. FCC and NTIA officials, as well as other agency officials and an industry stakeholder, told us that sharing federal spectrum can be costly for both the nonfederal and federal users seeking to share for the following reasons:

Users may find that mitigation of potential interference can be costly in terms of equipment design and operation. For example, according to officials from one agency, sharing spectrum outside a law enforcement environment would require cognitive radios, which could be costly.

Users applying to share federal frequencies may find that those frequencies are being used by more than one federal agency or program. As a result of needing to mitigate interference for multiple users, costs to share spectrum in that band could increase.

Federal users often use and rely on proven older technology that was designed to use spectrum to meet a specific mission and may be less efficient than more modern systems. Limited budgets may also prevent users from being able to invest in newer technology that can facilitate easier sharing.
For example, officials at one agency said they maintain and use systems until the end of the system’s life cycle to assure continuity of operations and security. Spectrum-sharing approval and enforcement processes can be lengthy and unpredictable. FCC and NTIA processes can cause two main problems when nonfederal users seek to share federal spectrum, according to stakeholders: The spectrum-sharing approval process between FCC and NTIA can be lengthy and unpredictable, and the risk associated with it can be costly for new entrants. FCC officials told us that its internal processes can potentially last years if a rulemaking is required to allow shared use of spectrum. In addition to that time, NTIA officials said that IRAC’s investigation of potential harmful interference could also take months. In one example, federal users currently share the spectrum band of 413-457 MHz with a nonprofit medical devices provider. The spectrum is used for transmissions related to implant products for veterans. It took FCC, NTIA and the spectrum users approximately 2 years (from 2009 to 2011) to facilitate this arrangement because an FCC rulemaking was required and all parties agreed to a lengthy evaluation of potential interference. The nonprofit in this case was funded by an endowment and was not dependent on income from the device to sustain itself during this process, but such delays, and the potential for a denial because of findings of harmful interference risks, could discourage for-profit companies from developing and investing in business plans that rely on sharing federal spectrum. However, officials at one agency commented that they have seen the timing of NTIA approval of federal participation drastically reduced over the past several years, from many months to less than a month as a result of additional coordination and negotiation of sharing done prior to the submission of frequency requests. 
Stakeholders we interviewed told us that when federal and nonfederal users share spectrum, both parties worry that harmful interference may affect their missions or operations if the other party overreaches or does not follow the agreement. They also fear that any enforcement actions taken by FCC will happen too slowly to protect their interests and that enforcement outcomes may be unfavorable. According to officials at one agency, there are not many examples of large scale sharing of federal and nonfederal systems, and limited governance and enforcement mechanisms exist to support such efforts. Similar problems can arise when nonfederal users share spectrum with each other. Distrust of each other and of FCC’s decision-making and enforcement processes could discourage sharing. For example, if a proposed shared use does not fall within the terms of the incumbent’s license, FCC may need to engage in rulemaking proceedings, which can be long and unpredictable and can make spectrum-sharing arrangements unattractive to companies that otherwise might consider sharing.

Users May Be Unable to Easily Identify Spectrum Available for Sharing

Besides lacking incentives and facing other barriers, users may also have difficulty identifying spectrum available for sharing because data on available spectrum is incomplete or inaccurate, and information on some federal spectrum usage is not publicly available. According to NTIA officials, coordinating spectrum sharing requires accurate data on users, frequencies, locations, times, power levels, and equipment, among other things. We recently reported that both FCC’s and NTIA’s spectrum databases may contain incomplete and inaccurate data. We reported that a substantial number of surveyed users of FCC’s largest and most accessed license database, the Universal Licensing System, said that inaccurate and missing data hindered their use of the system to a great or moderate extent.
NTIA collects basic, descriptive information on federal spectrum use, such as agency name, frequency, and location, in its Government Master File, and relies on agencies to evaluate and report their own current and future spectrum needs, even though agencies have not always provided accurate information on their spectrum use, which could be useful in coordinating sharing arrangements. Further, federal agency spectrum managers told us that agencies have not been asked to regularly update their spectrum plans, in which they were required to include an accounting of spectrum use. Federal agencies were directed to submit spectrum plans to NTIA and provide updates every 2 years. Since 2008, NTIA has ceased requesting those updates and has put its strategic planning initiatives on hold because of limited resources. NTIA is developing a new data system that officials believe will provide more robust data that will enable more accurate analysis of spectrum usage and potential interference. The new system may in turn identify more sharing opportunities. NTIA officials plan for the new Federal Spectrum Management System (FSMS) to house more detailed data about agencies’ spectrum usage than the current Government Master File, including times of use, power levels, and equipment, among other information not currently collected. FSMS is scheduled to be operational in fiscal year 2014. However, the data will only be available to IRAC members and will not be publicly available. Legislation has been introduced to try to address the lack of publicly available data on spectrum usage broadly. The legislation would require in part that FCC, in consultation with NTIA and the White House Office of Science and Technology Policy, prepare a report for Congress that includes an inventory of each radio spectrum band they manage. The inventory is also to include data on the number of transmitters and receiver terminals in use, if available.
Other technical parameters that allow for more specific evaluation of how spectrum can be shared will also be inventoried, including coverage area, receiver performance, location of transmitters, percentage and time of use, and a list and described use of unlicensed devices authorized to operate in the band. However, experts and federal officials we contacted told us that there may be some limitations to creating such an inventory. For instance, measuring spectrum usage can be difficult because it can only be accomplished on a small scale, and technologies to measure or map widespread spectrum usage are not yet available. Additionally, FCC and NTIA officials told us that information on some federal spectrum bands may never be made publicly available because of the sensitive or classified nature of some federal spectrum use.

Incentives and Opportunities to Share Spectrum Could Be Expanded

Federal advisors and experts we spoke with identified several options that could provide incentives and opportunities for more efficient spectrum use and sharing by federal and nonfederal users, which include, among others: (1) assessing spectrum fees; (2) expanding the availability of unlicensed spectrum; (3) identifying federal spectrum that can be shared and promoting sharing; (4) requiring agencies to give more consideration to sharing and efficiency; (5) improving and expediting the spectrum-sharing process; and (6) increasing the federal focus on research, development, and testing of technologies that can enable sharing and improve spectral efficiency. We have previously reported that to improve spectrum efficiency among federal agencies, Congress may wish to consider evaluating what mechanisms could be adopted to provide better incentives and opportunities for agencies to move toward more efficient use of spectrum, which could free up some spectrum allocated for federal use to be made available for sharing or other purposes.
Assessing Spectrum Fees

Several advisory groups and industry experts, including those we interviewed, have recommended that fees be assessed based on spectrum usage. As previously mentioned, with the exception of fees for frequency assignments, federal users incur no costs for using spectrum and have few requirements for efficient use. As a result, federal users may have little incentive to share spectrum assigned to them with nonfederal users or to identify opportunities to use it more efficiently—except to the extent that sharing or more efficient use helps them achieve their mission requirements. In 2011, the CSMAC Incentives Subcommittee recommended that NTIA and FCC study the implementation of spectrum fees and solicit input from both federal and nonfederal users that might be subject to fees. The National Broadband Plan has also recommended that Congress consider granting FCC and NTIA authority to impose fees on unauctioned spectrum license holders—such as TV broadcasters and public safety entities—as well as government users. Fees could help to free spectrum for new uses, since licensees that use spectrum inefficiently may reduce their holdings or pursue sharing opportunities once they bear the opportunity cost of letting their spectrum remain fallow or underused. FCC officials told us that they have proposed spectrum usage fees at various times, including in FCC’s most recent congressional budget submission, and have requested, but have yet to receive, legislative authority to implement such a program. While noting the benefits of spectrum fees, the CSMAC Incentives Subcommittee report also notes specific concerns about the impact of spectrum fees on government users. For instance, some CSMAC members expressed concern that fees do not fit into the federal annual appropriations process and that new appropriations to cover fees are neither realistic nor warranted in the current budget environment.
Other members suggested that fees will have no effect because agencies will be assured additional funds for their spectrum needs. Similarly, the National Broadband Plan notes that a different approach to setting fees may be appropriate for different spectrum users, and that a fee system must avoid disrupting public safety, national defense, and other essential government services that protect human life, safety, and property. To address some of the concerns regarding agency budgets, the recent PCAST report recommended the use of a “spectrum currency” process to promote spectrum efficiency. Rather than using funds to pay for spectrum, federal agencies would each be given an allocation of synthetic currency that they could use to “buy” their spectrum usage rights. Usage fees would be set based on valuations of comparable private sector uses for which the market has already set a price. Agencies would then have an incentive to use their assignments more efficiently or share spectrum. In the PCAST proposal, agencies would also not bear the costs of making spectrum available to others for sharing, because they could be reimbursed from a proposed Spectrum Efficiency Fund for the investments that made sharing possible. Internationally, some regulatory agencies have moved forward with charging market-based rates for spectrum. Officials in two of the countries we spoke with said that the regulatory agency in their country collects user fees for government-agency spectrum use that reflect the opportunity cost of spectrum and serve as a means to encourage greater efficiency. For instance, the Australian Communications and Media Authority assesses two types of license fees for devices: (1) administrative charges to recover the direct costs of spectrum management and (2) annual license taxes to recover the indirect government costs of spectrum management. Officials suggested that license fees provide incentives for efficient use.
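The two-part fee structure just described can be sketched as a small calculation: a flat administrative charge plus a usage component that scales with how much spectrum is held and over how wide an area. The rates and formula below are invented purely for illustration; actual regulator fee schedules weigh band, geography, and service type in far more detail.

```python
# Illustrative sketch of a two-part spectrum fee: a flat administrative
# charge (direct management costs) plus an annual tax that grows with the
# size of the holding, approximating the opportunity cost of idle spectrum.
# All rates here are invented for the example.
def annual_spectrum_fee(bandwidth_mhz, coverage_km2,
                        admin_charge=500.0,
                        rate_per_mhz_km2=0.02):
    # Larger or wider holdings cost more, so retaining underused
    # bands becomes economically unattractive.
    usage_tax = bandwidth_mhz * coverage_km2 * rate_per_mhz_km2
    return admin_charge + usage_tax

# A 10 MHz holding over a 5,000 km^2 area:
fee = annual_spectrum_fee(10, 5000)  # 500 + 10 * 5000 * 0.02 = 1500.0
```

The design point is that the usage component, not the administrative charge, carries the incentive: a holder that relinquishes or shares half its bandwidth roughly halves the tax.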
Similarly, the Office of Communications in the United Kingdom uses a concept known as Administered Incentive Pricing (AIP) to set charges for spectrum holdings to reflect the value of the spectrum and to promote efficient use. Officials in these countries told us that the fee structure also encourages agencies to seek more opportunities to share spectrum. For example, in response to the United Kingdom’s AIP system, one ministry conducted a study of which spectrum bands could be shared or, if not in full use, released for use by others. The ministry identified at least five bands to share and released additional bands because the cost associated with retaining those rights was not economically feasible for intermittent use. As a result, the ministry relinquished its rights to those underused bands.

Expanding Unlicensed Use

According to stakeholders, unlicensed use is a valuable complement to licensed use, and more spectrum could be made available for unlicensed use. Spectrum for unlicensed use can be used efficiently and for high value applications, like Wi-Fi, for example. While FCC has generally relied on auctions to license spectrum, which over the years have generated billions of dollars in revenue for the United States Treasury, FCC is attempting to make more unlicensed spectrum available in the hope of fueling innovation and economic growth. Increasing the amount of spectrum available for unlicensed use allows more users to share spectrum without going through lengthy negotiations and interference mitigations, and also promotes more experimentation and innovation. To access exclusively licensed spectrum, users must enter into sharing agreements with the license holder and negotiate access each time they wish to use that spectrum. By contrast, when spectrum is available for unlicensed purposes, such negotiation is generally not needed and, according to some experts, may lead to more widespread experimentation and the development of innovative technologies.
More recently, FCC has provided unlicensed access to additional spectrum, known as TV “white spaces,” to help address spectrum demands. The white spaces refer to the buffer zones that FCC provided between television broadcasters to mitigate unwanted interference between adjacent stations. Under the TV white space rules, FCC determined that the buffer zones are no longer needed and approved the previously unused spectrum for unlicensed use. To identify available white space spectrum, devices must access a database that responds with a list of the frequencies that are available for use at the device’s location. As an example, one local official explained that the City of Wilmington, North Carolina, uses TV white space spectrum to provide a network of public Wi-Fi access and public-safety surveillance functions. However, some experts have noted that the use of white space holds more promise for rural areas than for large, dense urban areas, because the sheer number of TV stations and higher usage in urban areas makes use of the white spaces more challenging.

Identifying Federal Spectrum That Can Be Shared and Promoting Sharing

FCC and NTIA have noted the importance of sharing federal spectrum as a means to address spectrum demand. FCC’s Chairman recently said that it has become increasingly harder to find free and clear blocks of spectrum. The Chairman further said that it would be counterproductive to be limited to the choices of reallocation or nothing and that it may be the case that in some bands, sharing could allow access to spectrum that might otherwise take years and be costly to make available to other users. As we previously mentioned, in 2010, the President directed federal agencies to clear 500 MHz of spectrum for nonfederal uses by 2020. In response to this directive, NTIA identified bands to evaluate for repurposing. For example, an interagency group was formed to determine the viability of accommodating commercial wireless broadband in the 1755-1850 MHz band.
However, the evaluation found that clearing this 95 MHz band may take 10 years, cost $18 billion, and cause significant disruption. Furthermore, some federal systems could remain in the band indefinitely. To support NTIA’s effort regarding this band, FCC recently granted special temporary authority for T-Mobile to conduct tests to explore sharing between commercial wireless services and federal systems operating in the 1755-1780 MHz band. NTIA has also noted that the federal government must ensure effective spectrum use and push for sharing and other innovative uses wherever possible. Further, it is critical that agencies participate in identifying strategies for more efficient use of spectrum, including sharing it, while maintaining essential federal missions. For example, NTIA asked CSMAC to advise on what kinds of sharing are workable in the long term. Consequently, CSMAC is reviewing options to analyze the impact federal systems remaining in the band might have on future commercial uses, and the sharing conditions that might be required to protect incumbent systems. Recent PCAST recommendations could also create opportunities for nonfederal users to share 1,000 MHz of spectrum previously occupied only by federal users. Out of concern that additional clearing of federal users from spectrum is not sustainable, PCAST recently recommended that the President issue a new policy memorandum calling for the federal government to immediately identify 1,000 MHz of federal spectrum for sharing with nonfederal users. To facilitate sharing this spectrum, PCAST also recommended that FCC and NTIA implement a federal spectrum access system that includes data on when and where federal users could allow access to fallow spectrum. Such a system could help streamline the regulatory processes involved in sharing that we discussed earlier. However, PCAST acknowledged that implementing the structure they recommended will not be easy and could take a long time. 
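The database-lookup model behind both the TV white space rules and PCAST's proposed federal spectrum access system can be sketched simply: a device reports its location (and implicitly the time), and the database answers with the channels it may use there and then. The table contents, channel numbers, and query shape below are entirely hypothetical.

```python
# Minimal sketch of a spectrum-availability database lookup, in the spirit
# of the TV white space databases and PCAST's proposed access system.
# Entries and coordinates are invented for the example.
AVAILABILITY = [
    # (channel, lat_min, lat_max, lon_min, lon_max, start_hour, end_hour)
    (21, 34.0, 35.0, -119.0, -118.0, 0, 24),  # vacant in this area, all day
    (36, 34.0, 35.0, -119.0, -118.0, 22, 6),  # incumbent off-air overnight
]

def available_channels(lat, lon, hour):
    """Return the channels a device at (lat, lon) may use at this hour."""
    channels = []
    for ch, lat0, lat1, lon0, lon1, start, end in AVAILABILITY:
        if not (lat0 <= lat <= lat1 and lon0 <= lon <= lon1):
            continue  # entry does not cover the device's location
        if start < end:
            in_window = start <= hour < end
        else:  # windows like 22-6 wrap past midnight
            in_window = hour >= start or hour < end
        if in_window:
            channels.append(ch)
    return channels
```

A device inside the covered area late at night would see both channels, while at midday only the fully vacant one; outside the area it would see none. In the real systems, the same query pattern lets incumbents keep protection while secondary users fill the gaps in time and geography.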
Moreover, some experts and industry stakeholders suggest that sharing 1,000 MHz of federal spectrum may be no easier or less costly than previous efforts to vacate half that amount, given the barriers to sharing that exist.

Requiring Agencies to Give More Consideration to Spectrum Sharing and Efficiency

In 2011, the Office of Management and Budget (OMB) updated its guidance to federal agencies on preparing the fiscal year 2013 budget by asking agencies to consider the economic value of spectrum when developing their economic justifications for procuring new equipment. The guidance noted that spectrum should generally not be considered a free resource, but rather should be considered to have value. Therefore, budget requests for systems that require spectrum should include the evaluation of alternative systems or methods that reduce spectrum needs, such as spectrum sharing. In January 2011, CSMAC reported that the focus of this process had been on capital planning. The Committee stated that it believed it would be more useful to focus on ensuring that agencies give more consideration to trade-offs in spectrum use in their management processes. They also said that doing so will likely yield greater improvements in overall spectrum management and use. Toward that end, with respect to the budget for major spectrum-dependent communications systems, the Committee recommended revising the circular so that agencies specify in their spectrum proposals (a) whether the system will share spectrum with other existing systems, (b) the extent to which replacement systems will be more spectrally efficient than the prior system, and (c) whether non-spectrum-dependent or commercial alternatives were considered. The Middle Class Tax Relief and Job Creation Act required that OMB implement these recommendations.
We have also reported that federal agencies generally invest in more spectrally efficient technologies when mission needs demand it, not according to any underlying, systematic consideration of spectrum efficiency. As a result, we recommended that FCC and NTIA jointly develop accepted models and methodologies to assess the impact of new technologies on overall spectrum use and that NTIA determine how to provide incentives to agencies to use spectrum more efficiently.

Improving and Expediting the Spectrum-Sharing Process

FCC and NTIA have taken some actions to potentially reduce the amount of time and even the need for some potential rulemakings associated with spectrum sharing, but stakeholders and experts we interviewed suggested that more could be done to expedite the process. NTIA also encourages communication between federal and nonfederal users regarding sharing plans to deal with potential interference and other technical issues early in the process. These communications are important to provide certainty to nonfederal users about the availability of shared spectrum while also ensuring that critical federal operations are protected. Stakeholders suggested that NTIA and FCC could do more to streamline or automate their processes, and that more complete databases of spectrum use, as discussed earlier, could help potential sharing entities identify opportunities. Some experts argued for FCC to shift from a “command and control” approach for spectrum management to a regulatory approach that is more flexible and adaptable to new technologies. Others argue that the process is further slowed and complicated because two regulatory agencies are involved, as opposed to the single agency found in other countries and other industries. CSMAC also reported that FCC and NTIA could do more to streamline the sharing approval process.
For example, a common frustration is that a nonfederal entity seeking to share federal spectrum is unable to precisely follow the status of its spectrum sharing application once it is filed with the FCC. Further, the experimental licensing process on the FCC website does not make transparent when FCC transmits applications to NTIA, when NTIA responds to FCC, or whether that response contains questions the applicant must answer for the application to progress. On the NTIA side, IRAC’s Frequency Assignment Subcommittee established a review period of approximately nine days to respond with concurrence or concerns regarding an application. NTIA’s website provides some information regarding the status of an application in the IRAC process; however, the information is very generic, and the nonfederal applicant has no means to obtain information as to why its request was tabled or to engage directly with the concerned parties. So that applicants can more proactively engage FCC regarding concerns or other actions, CSMAC recommended a public tracking capability that allows an FCC applicant to readily identify when FCC sent the application to NTIA, when NTIA responded, and whether NTIA had specific questions regarding the merits or technical components of the application. Regardless, any such changes to how spectrum is currently managed and regulated would need to be carefully studied with respect to potential benefits and costs. Increasing the Federal Focus on Research, Development and Testing Several technological advances promise to make sharing easier, but are still at early stages of development and testing. For example, various spectrum users and experts we contacted mentioned the potential of dynamic spectrum access technology. 
If made fully operational, dynamic spectrum access technology will be able to sense available frequencies in an area and jump among frequencies to seamlessly continue communication as the user moves geographically through spectrum bands. According to experts and researchers we spoke with, progress has been made but there is no indication of how long it will be before this technology is fully deployable. Similarly, current fourth generation (4G) Long Term Evolution (LTE) technologies promise the ability to facilitate channel sharing as well as much faster data transfer rates over time, which could also potentially free frequencies more quickly for use by others. However, experts we talked to could not predict how long it will be before data networks reach international 4G transmission standards and thus, maximize spectral efficiency. Such new technologies can obviate or lessen the need for extensive regulatory procedures to enable sharing and can open up new market opportunities for wireless service providers. If a secondary user or sharing entity employs these technologies, the incumbent user or primary user would theoretically not experience interference, and agreements and rulemakings that are currently needed may not be necessary to enable sharing. Although industry participants indicated that extensive testing under realistic conditions is critical to conducting basic research on spectrum efficient technologies, we found that only a few companies are involved in such research and may experience challenges in the testing process. Companies tend to focus technology development on current business objectives as opposed to conducting basic research that may not show an immediate business return. For example, NTIA officials told us that one company that indicated it would participate in NTIA’s dynamic spectrum access-testing project removed its technologist from the testing effort to a project more closely related to its internal business objectives. 
Furthermore, some products are too early in the development stage to even be fully tested. For example, NTIA officials said six companies responded to NTIA’s invitation to participate in the previously mentioned dynamic spectrum access-testing project. However, only three handsets were received for the testing, and one of those did not work as intended. Other companies that responded told NTIA that they only had a concept and were not ready to test an actual prototype. We have previously reported that the federal government has a key role in performing or otherwise encouraging research that private industry would not do on its own. With respect to research and development on spectrum sharing and spectrum efficiency, we found that FCC and NTIA are involved in creating test beds and other opportunities for research and development. For example, when FCC proposed a rulemaking to improve its experimental license program in November 2010, it invited comments on a number of ideas, including the need to identify locations for test beds, where new technologies could be tested before being introduced to the market, and frequency bands where FCC might provide increased flexibility to conduct experiments. Further, FCC is seeking to establish provisions that encourage the exploration of new technologies, including technologies that would facilitate spectrum sharing. To expand testing opportunities, PCAST recommended that real-world test services be provided to test federal and public-safety frequency bands. Similarly, the Wireless Spectrum Research and Development’s Senior Steering Group is conducting workshops regarding the development of a national wireless test environment. However, spectrum users told us that even though they understand the benefits of testing and development, they are reluctant to allow testing in their spectrum because of the potential for harmful interference. 
As previously mentioned, NTIA also has a pilot test bed program to evaluate dynamic spectrum access and technology for spectrum sharing in land mobile radio bands, but the program is in the early stages and requires additional access to spectrum for testing to be fully implemented. The Department of Defense—the federal agency with the largest number of spectrum assignments—is also involved in researching and developing new spectrum technologies, although they are still in the early stages. The Department’s Defense Advanced Research Projects Agency (DARPA) has several such efforts under way. For example, unlike existing databases that only provide limited, descriptive frequency assignment information, DARPA’s Advanced Radio Frequency Mapping program seeks to provide real-time awareness of spectrum use across frequency, geography, and time. With this information, spectrum managers and automatic spectrum allocation and management systems could operate more efficiently through improved interference mitigation. However, agency officials told us that this technology is at the basic research level and years away from market readiness. Also, in the beginning phases, the Communications Under Extreme Radio Frequency Spectrum Conditions program plans to address spectrum use and interference mitigation in a congested communications environment. According to DARPA officials, the program will work to develop interference mitigation technologies (especially for jamming), interference tolerance, and higher spectrum utilization. Recent federal advisory committee recommendations and international examples also emphasize the importance of funding and providing incentives for research and development endeavors. For example, to promote research in efficient technologies, PCAST recommended that (1) funds from the Wireless Innovation Fund be released for research and development purposes and (2) the current Spectrum Relocation Fund be redefined as the Spectrum Efficiency Fund. 
This adjustment would allow federal agencies to be reimbursed for general investments in improving spectrum sharing. PCAST also recently suggested that a partnership between the federal government and the private sector is the best mechanism to ensure optimal use of federal spectrum and related spectrum research and testing. Similarly, CSMAC recommended the creation of a Spectrum Innovation Fund. Unlike the Spectrum Relocation Fund, which is strictly limited to the actual costs incurred in relocating federal systems from auctioned spectrum bands, the Spectrum Innovation Fund could also be used for spectrum sharing and other opportunities to enhance spectrum efficiency. In addition, the Canadian government instituted tax credits for research and development efforts by Canadian wireless companies, and required wireless companies to commit 2 percent of all revenues toward research and development activities related to spectrum. Conclusions As the demand for and use of spectrum continues to increase, federal and nonfederal users will need to be more cognizant of how efficiently spectrum is used. Sharing spectrum is one way to use spectrum more efficiently and make more spectrum available. While a number of barriers exist to sharing spectrum—such as incompatible uses, potentially prohibitive costs, and cumbersome regulatory processes—it is clear that first and foremost, users currently lack incentives to share the spectrum that is assigned or licensed to them. To address the incentive problem, spectrum experts, federal advisory groups, and others have made recommendations but have also identified implementation problems associated with different options. First, we agree with experts that spectrum usage fees should be given further consideration. We previously reported that incentive-based fees are designed to promote the efficient use of spectrum by compelling users to recognize the value to society of the spectrum that they use. 
Yet, designing a fee system is fraught with numerous obstacles and challenges, such as how such fees should be incorporated into agency budgets and the appropriations process in order to create the right incentives. A full evaluation of the potential benefits and impacts of implementing a fee structure would be a useful step toward identifying the most prudent and effective approach. Second, because new technologies that could better facilitate sharing are in some cases years from market readiness, spectrum users could be encouraged to dedicate more resources to research and development. Additionally, users need spectrum access for testing new technologies. If there are continued limitations to accessing spectrum for testing, it may be impossible to validate technologies under realistic conditions, further delaying the availability of these technologies to users and the opening of new market opportunities and economic growth. Third, users have expressed concern about the timeliness of FCC and NTIA spectrum-sharing processes. If these processes continue to be lengthy and unpredictable, federal and nonfederal users may continue to be reluctant to share spectrum. As the debate about these options continues, it is clear that more information is needed to further the understanding and discussion about which incentives and opportunities will be the most feasible and effective toward promoting sharing as a viable solution to address increasing spectrum demand. 
Recommendations for Executive Action To better identify the most feasible incentives to promote spectrum efficiency and sharing, we recommend that the NTIA Administrator and the FCC Chairman jointly take the following three actions: Report their agencies’ views and conclusions regarding spectrum usage fees to the relevant congressional committees, specifically with respect to the merits, potential effects, and implementation challenges of such a fee structure, and what authority, if any, Congress would need to grant for such a structure to be implemented. Based on the findings of current research and development efforts under way, determine how the federal government can best promote federal and nonfederal investment in the research and development of spectrally efficient technologies, and whether additional spectrum is needed for testing new spectrum efficient technologies. Evaluate regulatory changes, if any, that can help improve and expedite the spectrum sharing process. Agency Comments and Our Evaluation We provided a draft of this report to the Department of Commerce and FCC for review and comment. In response to our draft report, Commerce and FCC provided written comments, which are reprinted in appendix II and III, respectively. The agencies also provided technical corrections to the draft report, which we incorporated as appropriate. In summary, Commerce concurred with our findings, but believes that activities completed or under way by NTIA and FCC satisfy the recommendations contained in our draft report. In its written comments, FCC noted that the agency was pursuing the goals outlined in the National Broadband Plan, and highlighted several actions it is taking to promote more shared access to spectrum. In our draft report, we included four recommendations. The first recommendation was that NTIA and FCC jointly examine the merits and challenges associated with implementing spectrum usage fees. 
Commerce noted that the issue was examined by the CSMAC in 2010 and 2011 and that consensus could not be reached regarding the imposition of such fees. Moreover, the agency stated that further study is unlikely to resolve the issues. We agree that implementation of spectrum usage fees or a similar structure that can provide users with greater incentive to efficiently use or share spectrum raises several difficult questions, such as authority to implement a new fee structure and ensuring that federal operations are not disrupted. We also agree that further study may not resolve these issues. Nevertheless, our findings suggest that additional incentives are still needed for users to seek out more efficient ways of using spectrum, such as sharing, and that Congress could benefit from more information to fully understand the implications of a fee structure. Therefore, we altered our recommendation to state that NTIA and FCC, rather than initiate additional study on the issue, should provide Congress with the agencies’ views and conclusions regarding the merits, potential effects, and implementation challenges of such a fee structure, and what authority Congress would need to grant for such a structure to be implemented. We believe such actions would help provide members of Congress with information they could use to evaluate any proposed fee structure or other proposed incentive schemes. Our second recommendation was that FCC and NTIA jointly study whether spectrum should be repurposed and made available for unlicensed use. However, in written comments the agencies identified NTIA’s and FCC’s recent efforts to identify spectrum for repurposing, which have focused on allowing unlicensed users to share the spectrum. Consequently, we removed that recommendation from our final report. 
Our draft report also recommended that the agencies jointly study (1) actions that could help spur research and development and (2) regulatory changes that might improve the spectrum-sharing process. Commerce stated that NTIA and FCC already have efforts under way in these areas that fulfill the goals of these recommendations and that additional study is unnecessary. We acknowledge throughout the report that NTIA and FCC have activities under way in these areas, some of which were initiated during the course of our review. The intent of our recommendations was not to displace these activities with additional study, but rather to support these actions, and to encourage the agencies to take further steps to enable real world testing of spectrum-sharing technologies and to streamline and improve the regulatory processes that enable spectrum sharing. We revised our draft recommendations to clarify that we are encouraging NTIA and FCC to take further actions in these areas, as opposed to further study, and we will continue to monitor the agencies’ efforts in these areas. In addition to Commerce and FCC, we also provided the Departments of Defense, Homeland Security, Interior, Justice, Transportation, and Treasury the opportunity to comment on segments of the report that pertain to the data and information they provided. Except for the Department of Transportation, which did not provide any comment, the agencies verified the key facts we obtained from them and provided technical corrections to the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Chairman of the Federal Communications Commission, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Appendix I: Federal Agencies, Nonfederal Spectrum Users and Experts Interviewed Regarding Spectrum Sharing Selected Interdepartment Radio Advisory Committee Member Agencies Nonfederal Spectrum Users Appendix II: Comments from the Department of Commerce Appendix III: Comments from the Federal Communications Commission Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact person named above, Andrew Von Ah (Assistant Director), Eli Albagli, Amy Abramowitz, Michael Clements, Andy Clinton, David Goldstein, Brian Hartman, Bert Japikse, Elke Kolodinski, Jean McSween, Erica Miles, Sally Moino, Joshua Ormond, Amy Rosewarne, Hai Tran, and Jarrod West made key contributions to this report.

The increasing popularity of wireless devices that use spectrum, combined with federal spectrum needs for national defense and other public safety activities, have created concerns that a "spectrum crunch" is looming. However, there is also evidence that at any given time or place, spectrum lies fallow or is only intermittently used. In an effort to use spectrum as efficiently as possible, advisory groups and others have proposed solutions to share spectrum. This requested report examines (1) what factors prevent users from sharing spectrum more frequently and (2) what actions the Federal Communications Commission (FCC), the National Communications Information Administration (NTIA), and others can take to encourage more sharing and efficient spectrum use. GAO reviewed plans and documents from FCC and NTIA regarding their management of nonfederal and federal spectrum-sharing activities, respectively. GAO also interviewed federal and commercial spectrum users, industry and academic experts, and other stakeholders. 
Some spectrum users may lack incentive to share spectrum or otherwise use it efficiently, and federal agencies and private users currently cannot easily identify spectrum available for sharing. Typically, paying the market price for a good or service helps to inform users of the value of the good and provides an incentive for efficient use. Federal agencies, however, pay only a small fee to the NTIA for spectrum assignments and therefore have little incentive to share spectrum. Federal agencies also face concerns that sharing could risk the success of security or safety missions, or could be costly in terms of upgrades to more spectrally efficient equipment. Nonfederal users, such as private companies, are also reluctant to share spectrum. For instance, license holders may be reluctant to encourage additional competition, and companies may be hesitant to enter into sharing agreements that require potentially lengthy and unpredictable regulatory processes. Sharing can be costly for them, too. For example, nonfederal users may be required to cover all interference mitigation costs to use a federal spectrum band, which might include multiple federal users. Sharing can also be hindered because information on federal spectrum use is lacking and information regarding some federal spectrum use may never be publicly available, a situation that makes it difficult for users to identify potential spectrum for sharing. Federal advisors, agency officials, and experts have identified several options that could provide greater incentives and opportunities for more efficient spectrum use and sharing by federal and nonfederal users. 
These options include, among other things: considering spectrum usage fees to provide economic incentive for more efficient use and sharing; identifying more spectrum that could be made available for unlicensed use, since unlicensed use is inherently shared; encouraging research and development of technologies that can better enable sharing; and improving and expediting regulatory processes related to sharing. However, these options involve implementation challenges. For example, setting spectrum usage fees for federal users may not result in creating the proper incentives, because agency budgets might simply be increased to accommodate their current use. While new technologies that overcome some of the inherent challenges with sharing spectrum are being developed, proving those technologies under real-world conditions can be difficult, and few incentives exist at the federal level to encourage such technology development. Finally, FCC and NTIA have taken some actions to potentially reduce the amount of time and even the need for potential rulemakings sometimes associated with spectrum sharing, but stakeholders and experts suggested that more could be done to expedite the approval process, such as automating some steps and developing better capabilities to track the status of spectrum-sharing applications. However, any changes to federal regulatory processes related to spectrum management and sharing would need to be carefully studied with respect to potential benefits and costs. |
Background Medicaid Program Overview Medicaid is jointly financed by the federal government and the states, with the federal government matching most state Medicaid expenditures using a statutory formula that determines a federal matching rate for each state. Medicaid is a significant component of federal and state budgets, with estimated total outlays of $554 billion in fiscal year 2015, of which $347 billion is expected to be financed by the federal government and $207 billion by the states. An important health care safety net, Medicaid served about 78 million individuals during fiscal year 2014. There are multiple ways to be eligible for Medicaid that relate to an individual’s age, income, and disability status. As a federal-state partnership, both the federal government and the states play important roles in ensuring that Medicaid is fiscally sustainable over time and effective in meeting the needs of the vulnerable populations it serves. States administer their Medicaid programs within broad federal rules and according to individual state plans approved by CMS, the federal agency that oversees Medicaid. States can also seek permission from CMS to provide services under waivers of traditional Medicaid requirements, for example to provide services to a segment of the state’s eligible population. Federal Medicaid Funds and State Medicaid Payments To obtain federal matching funds for Medicaid payments to providers, states submit their estimated Medicaid expenditures—their payments for covered services and costs of administering the program—to CMS each quarter for the upcoming quarter. After CMS has approved the estimated expenditures, it makes federal matching funds available to the state for the purpose of making Medicaid payments during the quarter. States typically finance Medicaid payments to providers with a combination of federal funds advanced to them and nonfederal funds (e.g., funds from state and local government sources). 
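The federal and nonfederal split described above can be illustrated with simple arithmetic. The 60 percent matching rate used in this sketch is hypothetical; actual matching rates are set per state by the statutory formula and differ from state to state.

```python
def split_expenditure(total, federal_matching_rate):
    """Split a Medicaid expenditure into federal and nonfederal shares.

    federal_matching_rate is the fraction of spending the federal
    government reimburses (a hypothetical value here, not any
    state's actual statutory rate).
    """
    federal_share = total * federal_matching_rate
    nonfederal_share = total - federal_share
    return federal_share, nonfederal_share

# Hypothetical example: $100 million in state Medicaid spending
# matched at a 60 percent rate.
fed, nonfed = split_expenditure(100_000_000, 0.60)
print(fed, nonfed)  # 60000000.0 40000000.0
```

The report's fiscal year 2015 estimate ($347 billion federal of $554 billion total) corresponds to an aggregate federal share of roughly 63 percent, though individual state rates vary.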
Federal matching funds are available to states for different types of payments that states make, including payments made directly to providers for services rendered and payments made to managed care organizations. Under a fee-for-service delivery model, states make payments directly to providers; providers render services to beneficiaries and then submit claims to the state to receive payment. States review and process fee-for-service claims and pay providers based on state-established payment rates for the services provided. Under a managed care delivery model, states pay managed care organizations a set amount per beneficiary; providers render services to beneficiaries and then submit claims to the managed care organization to receive payment. Managed care plans are required to report to the states information on services utilized by Medicaid beneficiaries enrolled in their plans—information typically referred to as encounter data. Most states use both fee-for-service and managed care delivery models, although the number of beneficiaries served through managed care has grown. Federal law requires each state to operate a mechanized claims processing system to process and record information about the services provided under both fee-for-service and managed care delivery models. Provider claims and managed care encounters are required to include information about the service provided, including the general type of service; a procedure code that identifies the specific service provided; the location of the service; the date the service was provided; and information about the provider who rendered the service. For services delivered under a fee-for-service delivery model, the claims record must also include the payment amount. Federal law requires states to collect managed care encounter data, but actual payment amounts to individual providers are not required. 
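The required data elements described above can be sketched as a record structure. This is an illustration only: the field names are hypothetical, not the actual claims-system schema, and the procedure code and amounts are examples. The one substantive distinction it captures from the text is that a payment amount is required for fee-for-service claims but not for managed care encounters.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ServiceRecord:
    """Illustrative Medicaid service record (hypothetical field names)."""
    service_type: str          # general type of service
    procedure_code: str        # identifies the specific service
    service_location: str      # where the service was provided
    service_date: date         # date the service was provided
    provider_id: str           # who rendered the service
    # Required for fee-for-service claims; not required for managed
    # care encounters, so it is optional here.
    payment_amount: Optional[float] = None

ffs_claim = ServiceRecord("personal care", "T1019", "home",
                          date(2014, 3, 15), "PRV-001", payment_amount=52.40)
encounter = ServiceRecord("personal care", "T1019", "home",
                          date(2014, 3, 16), "PRV-002")  # no payment amount
```

As the report notes, the absence of a payment amount on encounter records is one reason spending delivered through managed care is harder to trace to individual providers.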
Medicaid Long-term Services and Supports Home- and community-based services, which include personal care services, are a component of a larger class of health and health-related services and nonmedical supports for individuals of all ages who need care for an extended period of time, broadly referred to as long-term services and supports. Long-term services and supports financed by Medicaid are generally provided in two settings: institutional facilities, such as nursing homes and intermediate-care facilities for individuals with intellectual disabilities; and home and community settings, such as individuals’ homes or assisted living facilities. Under Medicaid requirements governing the provision of services, states generally must provide institutional care to Medicaid beneficiaries, while HCBS coverage is generally an optional service. Medicaid spending on long-term services and supports provided in home and community settings has increased dramatically over time—to about $80 billion in 2014—while the share of spending for care in institutions has declined, and HCBS spending now exceeds long-term care spending for individuals in institutions (see fig. 1). All 50 states and the District of Columbia provide long-term care services to some Medicaid beneficiaries in home and community settings. Personal care services are a key component of long-term services and supports and include assistance with activities of daily living, such as bathing and dressing, and in some cases instrumental activities of daily living, such as preparing meals and housekeeping. Personal care services are typically nonmedical services provided by personal care attendants—home-care workers who may or may not have specialized training. Personal care attendants may be employed by a provider agency or self-employed. 
In some cases, they are friends or family members of the beneficiary and, under certain types of Medicaid PCS programs, can be spouses, parents, or other legally responsible relatives. Under what is known as an agency-directed model, a provider agency employs multiple attendants. The provider agency hires, fires, pays, and trains the attendant to provide personal care services to Medicaid beneficiaries. Under a participant-directed model, beneficiaries or their representatives have the authority to manage personal care services by selecting, hiring, firing, and training attendants themselves and have a greater say in the personal care services the beneficiary receives. Overall, the number of personal care attendants employed is projected to increase by 26 percent from 2014 to 2024. As we recently reported, states have considerable flexibility to establish Medicaid personal care services programs under different provisions of federal law that authorize different types of programs. States may provide personal care services under a Medicaid state plan, a state plan amendment, or a waiver, such as the 1915(c) waiver, referred to as an HCBS Waiver—the most common type of program through which states provide personal care services. Section 1915(c) authorizes states with HHS approval to waive certain traditional Medicaid requirements, allowing states to target services to specific groups and limit the number of beneficiaries served. Program options available under a Medicaid state plan or amendment include State Plan Personal Care Services, State Plan HCBS, and Community First Choice. Individual states can and do have multiple programs operating under different authorities. Some requirements, such as cost neutrality and maintenance of expenditures, are applicable only to specific program options. 
In addition, under Community First Choice programs, states receive a 6 percentage point increase in their federal matching rate for personal care and other home- and community-based services. The nature of HCBS can complicate both federal and state oversight, including understanding the time frames in which services were delivered, the types of services delivered, and the providers delivering services billed to and paid for by Medicaid. According to numerous HHS OIG reviews and CMS’s annual review of Medicaid improper payments, the provision of Medicaid personal care services is at high risk for improper payments. For example: In 2012, the OIG issued a report synthesizing results from 23 individual reviews of states’ programs conducted over a 6-year timeframe between 2006 and 2012. Then, in October 2016, it issued an investigative advisory to CMS regarding Medicaid personal care services based on more than 200 investigations opened since the 2012 report. The HHS OIG concluded that existing controls and safeguards intended to prevent improper payments in the Medicaid program and to ensure patient safety and quality of care were often ineffective. Problems the HHS OIG found included: lack of details on personal care services claims, including missing dates for when services were provided and lack of information identifying the provider of services; lack of evidence that services were rendered; and lack of prepayment controls to prevent payments for home-based personal care services when a beneficiary is in an institution. The HHS OIG also noted that federal and state Medicaid investigations have found an increasing volume of fraud involving personal care attendants. Of particular concern were personal care services provided under the Participant-Directed Option, where the beneficiary has direct responsibility over the care they receive and a budget to pay personal care attendants. 
These findings highlight the need to have better information about the identity of the individuals providing personal care services. CMS reported for 2014 that Medicaid personal care services accounted for an estimated $2.2 billion in improper fee-for-service payments and had the third-highest improper payment rate of the major categories of service, estimated at 6.3 percent. Factors that CMS found contributed to improper personal care services payments included a lack of documentation verifying that beneficiaries received services; a lack of documentation of the specific services provided; and missing or incorrect documentation on the amount, or units, of services provided. Two CMS Systems Collect Data on the Provision of and Spending on Personal Care Services, and the Data Suggest Wide Variations among States Two CMS data systems collect Medicaid data from providers’ records of services rendered and total Medicaid expenditures by broad Medicaid categories of service, including for personal care services. Based on our assessment of the data collected from the two data systems, these data can be reliably used to provide a summary description of the provision of and spending on personal care services, including aggregate annual spending and proportion of Medicaid beneficiaries that received these services. These data suggest wide variation among states in the provision of and spending on personal care services. Two CMS Systems Collect Data on the Provision of and Spending on Medicaid Personal Care Services Two CMS data systems collect data related to the provision of and spending on Medicaid personal care services at the state and national level. Both data systems contain data collected by states and submitted to CMS. Each system has a different purpose and the type and scope of the data each collects reflects its purpose. 
The Medicaid Statistical Information System (MSIS) was established to collect detailed information on the services rendered to individual Medicaid beneficiaries. The Medicaid Budget and Expenditure System (MBES) was established for states to report total aggregate expenditures for Medicaid services across broad service categories.

Medicaid Statistical Information System

MSIS is a national data system maintained by CMS that collects data from state records on fee-for-service claims for services rendered to Medicaid beneficiaries and managed care encounter records for services delivered through managed care. Each state transmits digital files to CMS quarterly using MSIS. MSIS was designed to provide CMS with a detailed, national database of Medicaid program information to support a broad range of program management functions, including health care research and evaluation, program utilization and spending forecasting, and analyses of policy alternatives. MSIS collects information on the beneficiary receiving services and on the services provided. Beneficiary information includes a beneficiary's age and basis for Medicaid eligibility. Information on the services provided includes: the date the service was provided; the place where the service was provided; the general type of service provided; a procedure code that identifies the specific service provided; and a provider identification number that identifies the Medicaid provider who rendered services. Payment information is collected for fee-for-service claims, which are paid by the state, but not for managed care encounters, which are paid for by managed care organizations that contract with the state. Federal law requires that all data submitted be consistent with the standardized MSIS format and data elements as a condition of receiving federal reimbursement for mechanized claims processing systems.
CMS reviews these MSIS files for initial quality and proper formatting and returns any files that do not pass its quality tests back to states for correction and resubmission. State MSIS submissions are compiled into calendar year data sets that provide beneficiary-level data on eligibility, service utilization, and payments for every state Medicaid program. CMS can make the data files available for analysis to researchers and others that submit a data use agreement approved by CMS.

Medicaid Budget and Expenditure System

MBES is a national expenditure reporting system that collects each state's total aggregate Medicaid expenditures reported to CMS by broad categories of service for the purpose of states' obtaining the federal share of their payments to providers and for other approved expenditures. States are required to use this web-based system to input and transmit electronically a form referred to as the CMS-64 on a quarterly basis. MBES contains state Medicaid expenditures in over 80 broad categories of services and total expenditures for each state. These data come from CMS-64 reports, which CMS requires states to use to report their Medicaid expenditures through specified, standard categories of service such as inpatient hospital services, nursing facility services, physician services, and HCBS services by type of program. For each category of service, the CMS-64 collects a state's total Medicaid expenditure, the federal share, and the nonfederal share. The CMS-64 does not collect beneficiary-specific payment information or expenditures for specific types of services. For example, all expenditures for regular inpatient hospital services are reported on one category-of-service line. Data from the CMS-64 do, however, represent the most reliable and comprehensive information on aggregate Medicaid spending, including spending on program administration.
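For illustration, the share arithmetic behind a CMS-64 category-of-service line can be sketched as a split of a total expenditure into federal and nonfederal shares by the state's federal matching rate. This is a minimal, hypothetical sketch: the function, the matching rates, and the amounts are our own constructions, with the 6-percentage-point Community First Choice add-on reflecting the increase described earlier in this report.

```python
# Illustrative sketch of the share arithmetic on a CMS-64 category line:
# the total is split into federal and nonfederal shares by the state's
# matching rate. Rates and amounts here are hypothetical.

def shares(total, matching_rate, community_first_choice=False):
    """Split a category-of-service total into (federal, nonfederal) shares."""
    rate = matching_rate + (0.06 if community_first_choice else 0.0)
    federal = round(total * rate, 2)
    return federal, round(total - federal, 2)

print(shares(1_000_000, 0.50))                               # (500000.0, 500000.0)
print(shares(1_000_000, 0.50, community_first_choice=True))  # (560000.0, 440000.0)
```

The second call shows why accurate line-level reporting matters: the same $1 million total draws a larger federal share when reported under Community First Choice.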
In addition to its primary purpose of capturing states' expenditures so that states can obtain federal matching payments, MBES data are used to produce national and state-specific Medicaid expenditure reports by the standard categories of services. CMS compiles the reports by federal fiscal year and makes these yearly expenditure files available to the public on its website.

Data Available from the Two CMS Systems Suggest Wide Variations across Reviewed States in the Provision of and Spending on Personal Care Services

Available information from MSIS and MBES suggests that the provision of and spending on personal care services varies widely across states. Specifically, the most recent and complete MSIS claims data available to us for 35 states from calendar year 2012 suggest variation across these states in the provision of personal care services, including differences in the types of beneficiaries served, the delivery model under which they are served (fee-for-service or managed care), and the average payment per beneficiary. Similarly, MBES expenditure data for all states for calendar years 2012 through 2015 show variation across states in total spending on personal care services and spending by type of program.

Data from the Medicaid Statistical Information System

Analysis of calendar year 2012 MSIS data for 35 states indicates that, overall, nearly 3 percent of all Medicaid beneficiaries in these states—about 1.5 million individuals—received personal care services at least once. However, the percentage of beneficiaries receiving services varied among the states. As illustrated in figure 2, the percentage of each state's Medicaid beneficiaries who used personal care services at least once ranged from less than 1 percent of beneficiaries in 9 states to almost 17 percent in 1 state.
MSIS data also show wide variation in the percentage of beneficiaries who received personal care services across the four main eligibility groups we analyzed (children, adults, aged individuals, and disabled individuals). For example, as illustrated in figure 3, all 35 of the states with available data provided personal care services to aged and disabled beneficiaries, but some more so than others. Across the 35 states, about 13 percent of aged and about 9 percent of disabled beneficiaries received personal care services in 2012. The proportions receiving personal care services in individual states ranged from less than 1 percent to about 32 percent of aged and from less than 1 percent to 36 percent of disabled beneficiaries. The average percentage of adults and children who received personal care services across all 35 states was much smaller than for the aged and disabled groups (less than 1 percent), and the data suggest that a few states did not provide any personal care services to individuals in the adult and children groups. When examining just those beneficiaries who received personal care services in 2012, MSIS data show that most were in the disabled or aged eligibility categories, but that the composition of each state's recipients varied widely. Of the nearly 1.5 million beneficiaries who received personal care services that year in the 35 states, the vast majority—nearly 86 percent—were either aged or disabled. Disabled beneficiaries represented 48 percent and aged beneficiaries represented 37 percent of all those receiving personal care services. Children and adults represented a much smaller share of the beneficiaries that received personal care services in 2012 in these states, at 10 percent and 4 percent, respectively (see figure 4). However, children and adults made up more than 90 percent of the disabled group in the 35 states.
The 2012 MSIS data for the 35 states show that most personal care services are provided under a fee-for-service delivery model, rather than under a managed care delivery model. About 80 percent of personal care services in the 35 states were provided under a fee-for-service model, with 20 percent delivered through a managed care model. Most of the claims under fee-for-service models were for services provided to disabled beneficiaries, while most of the services under managed care models were provided to aged beneficiaries (see figure 4). The majority of the 35 states (20) provided 100 percent of their personal care services under a fee-for-service model, and nearly all of the remaining states provided the majority of their services (i.e., greater than 50 percent) this way; only 3 states provided a majority of personal care services through a managed care model. No states relied exclusively on a managed care model for all beneficiaries, although one state—Tennessee—used a managed care model exclusively for the adult eligibility group. MSIS fee-for-service claims data for 2012 show variation in the average total payments made per beneficiary and by type of beneficiary for personal care services across the 35 states. For beneficiaries who received personal care services under a fee-for-service model that year, the average total payment per beneficiary was $9,785. As illustrated in figure 6, average total payments per beneficiary varied across eligibility groups. For example, average total payments for personal care services per beneficiary ranged from $1,742 for adults to more than $10,786 for disabled beneficiaries. In addition, average total payments varied significantly across states. Across all eligibility groups, average total payments for personal care services ranged from $2,639 in Wyoming to $33,857 in Delaware, a nearly 13-fold difference.
For disabled beneficiaries, the range in average total payment was even greater—from $3,131 to $48,856—a nearly 16-fold difference, also represented by Wyoming and Delaware. Expenditure data contained in the MBES, as reported by all 50 states and the District of Columbia, revealed total personal care spending on a fee-for-service basis of about $15 billion in calendar year 2015. As illustrated in figure 6, however, more than three-quarters of the reported spending was for personal care services provided under two types of programs: State Plan Personal Care Services and Community First Choice. Specifically, based on state-reported data, spending under State Plan Personal Care Services, at nearly $6 billion in 2015, was slightly higher than the $5.7 billion in spending for Community First Choice. These two programs have fewer and less stringent federal oversight requirements than the HCBS Waiver and State Plan HCBS programs. In contrast, spending on personal care services under the HCBS Waiver and State Plan HCBS programs was less overall, totaling $3.2 billion under HCBS Waiver programs and $34 million under State Plan HCBS programs. For HCBS Waiver and State Plan Personal Care Services programs, states reported less than 1 percent of their expenditures for personal care services under the Participant-Directed Option. The state-reported expenditure data contained in the MBES reveal how total spending on personal care services has changed over time, both in total amounts and in the amounts associated with each program type. As illustrated in figure 8, state-reported expenditure data suggest that after a spending increase of over $2 billion from calendar year 2012 to 2013, total fee-for-service spending on personal care services increased more slowly, by about $100 million a year from 2013 through 2015. Moreover, the data show a significant share of spending on personal care services under the Community First Choice program beginning in 2013.
Limitations in Data Hinder CMS Oversight of Personal Care Services, and Planned Improvements May Not Address Data Limitations

CMS's two data systems provide some basic and aggregate information on the provision of and spending on personal care services. However, in order to provide effective oversight, CMS needs detailed data on personal care services that are timely, complete, consistent, and accurate, including data on who provided the service, the type and amount of services provided, when services were provided, and the amount the state paid for services. We found that the detailed data collected by the two systems were not always timely, complete, consistent, or accurate, which limits the usefulness of these data for CMS oversight.

CMS Does Not Collect Sufficiently Complete or Consistent Data from States on Medicaid Personal Care Services Needed to Monitor the Provision of and Spending on These Services

CMS does not collect data that are sufficiently timely, complete, consistent, and accurate to effectively monitor the provision of and spending on Medicaid personal care services.

Medicaid Statistical Information System Data Collected by CMS Are Not Timely and Are Often Incomplete or Inconsistent

Medicaid personal care services claims and encounter data collected by CMS through MSIS are not timely, and available data are often incomplete and inconsistent, based on our analysis of 2012 data from 35 states. States are required by federal law to develop and operate their own claims-processing and information-retrieval systems and submit data to CMS, through MSIS, that include information on the specific services provided, the beneficiaries receiving these services, and the providers delivering these services. Federal law requires states to submit data that are consistent with the standardized MSIS format and data elements as a condition of receiving federal reimbursement for mechanized claims processing systems.
CMS has established specific reporting guidance for some of these data elements but not for others. MSIS was designed to provide CMS with a detailed, national database of Medicaid program data to support a broad range of program management functions, including health care research and evaluation by CMS and other researchers, program utilization and spending forecasting, and analyses of policy alternatives. The information CMS collects through MSIS from states is not timely. Data are typically not available for analysis and reporting by CMS or others for several years after services are provided. This happens for two reasons. First, although states have 6 weeks following the completion of a quarter to report their claims data, their reporting can be delayed as a result of providers and managed care plans not submitting data in a timely manner, according to the CMS contractor responsible for compiling data files of Medicaid claims and encounters. For example, providers may submit claims for fee-for-service payments to the state late and providers may need to resubmit claims to make adjustments or corrections before they can be paid by the state. Second, the contractor analyzes the MSIS data submitted by the states and compiles annual person-level claims files that are in an accessible format. The contractor also conducts quality-control checks and corrects data errors and consolidates multiple records that may exist for one claim. This process, for one year of data, can take several years and, as a result, when information from claims and encounters becomes available for use by CMS for purposes of program management and oversight, it can be several years old. Information CMS collects from states through MSIS is also incomplete in two ways. First, specific data on beneficiaries’ personal care services were not included in the calendar year 2012 MSIS data for 16 states. 
Nevertheless, these 16 states received federal matching funds for the $4.2 billion in total fee-for-service payments for personal care services that year—about 33 percent of total expenditures for personal care services reported by all states (see figure 9). Second, even for the 35 states for which 2012 MSIS claims and encounter data were available, certain data elements collected by CMS were incomplete. For example, for the records we analyzed, 20 percent included no payment information, 15 percent included no provider identification number to identify the provider of service, and 34 percent did not identify the quantity of services provided (see figure 10). Incomplete data limit CMS’s ability to track spending changes and corroborate spending with reported expenditures because they lack important information on a significant amount of Medicaid payments for personal care services. For example, among the 2012 claims for personal care services under a fee-for-service delivery model, claims without a provider identification number accounted for about $4.9 billion in total payments. Similarly, payments for fee-for-service claims with missing information on the quantity of personal care services provided totaled about $5.1 billion. Even when key information was included in claims and encounter data, it was often inconsistent, which limits the effectiveness of the data to identify questionable claims and encounters. For purposes of oversight, a complete record (claims or encounters) should include data for each visit with a provider or caregiver, with specified dates of service, and it should use a clearly specified unit of service (e.g., 15 minutes) along with a standard definition of the type of service provided. These data allow CMS and states to analyze claims to identify potential fraud and abuse. The following examples illustrate inconsistencies in the data from the 35 states: States used hundreds of different procedure codes for personal care services. 
Procedure codes on submitted claims and encounters were inconsistent in three ways: the number of codes used by states; the use of both national and state-specific codes; and the varying definitions of different codes across states. More than 400 unique procedure codes were used by the 35 states. CMS does not require that states use standard procedure codes for personal care services; instead, states have the discretion to use state-based procedure codes of their own choosing or national procedure codes. As a result, the procedure codes used for similar services can differ from state to state, limiting CMS's ability to use these data to compare and track changes in the use of specific personal care services provided to beneficiaries, because similar procedures cannot easily be compared across states using their procedure codes. States used widely varying units of service associated with numerous procedure codes. As a result of the numerous procedure codes used by states, the units of service for personal care services varied widely. Depending on the code used, units of service can be in 15-, 30-, or 60-minute increments, or as per diem codes. The absence of information about the unit of service in the millions of records for personal care services, combined with states' use of hundreds of different codes, makes it difficult to efficiently assess the extent to which the services provided are reasonable. Claims and encounter records generally include the procedure code, but do not identify the unit of service associated with the code. Claims for multiple units of service may be reasonable if the unit represents a 15-minute increment. However, if the unit of service represents an hour, the number of units billed may not be reasonable. For example, for a beneficiary requiring 2 hours of personal care services, a claim containing 8 units in a single day is reasonable if the unit of service is 15 minutes but would not be reasonable if the unit of service is an hour.
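The worked example above can be expressed as a simple check, assuming a lookup table that maps each procedure code to its unit duration. That table is an assumption on our part—it is precisely the information the claims records lack—and the codes shown are hypothetical.

```python
# Illustrative sketch of the reasonableness check described above. The
# code-to-minutes table is assumed: claims identify a procedure code but
# not the unit duration, which is exactly the missing piece at issue.

UNIT_MINUTES = {"PCS15": 15, "PCS60": 60}  # hypothetical codes and durations

def hours_billed(procedure_code, units):
    """Convert billed units to hours using the assumed unit duration."""
    return units * UNIT_MINUTES[procedure_code] / 60

def reasonable_for_day(procedure_code, units, authorized_hours=2):
    """True if the billed time fits within the beneficiary's authorized hours."""
    return hours_billed(procedure_code, units) <= authorized_hours

# 8 units in a single day: 2 hours at 15 minutes per unit, 8 hours at 60.
print(reasonable_for_day("PCS15", 8))  # True
print(reasonable_for_day("PCS60", 8))  # False
```

The same claim of 8 units passes or fails the check depending entirely on the unit definition, which is why a screen like this cannot be run at all without standardized codes and defined units.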
Without consistent procedure codes with defined units of service, the utilization and expenditure analyses done by CMS and others with the data are difficult to complete, including assessing the reasonableness of the amount of services claimed and identifying potentially inappropriate claims. In general, we found that claims for 2012 represented large quantities of services. Among claims with valid quantity of service data—that is, where the claim identified a procedure code and the number of units provided—quantities reported ranged from 1 unit to more than 27,000 units, with an average quantity of 15 units. For the most commonly used procedure, which represents 15 minutes of service by a personal care attendant, the average quantity would translate into nearly 4 hours of personal care services billed as a single claim. For seven states, 100 percent of cases were missing information on the quantity of services delivered. State-reported dates of service were overly broad. In the 35 states whose claims we could review, some claims for personal care services had dates of service (i.e., start and end dates) that spanned multiple days, weeks, and in some cases months. For 12 of the 35 states, 95 percent of their claims were billed for a single day of service. However, in other states, a number of claims were billed over longer time periods. For example, for 10 of the states, 5 percent of claims covered a period of at least 1 month, and 9 states submitted claims that covered 100 or more days. When states report dates of service that are imprecise, it is difficult to determine the specific dates on which services were provided and to identify whether services were claimed during a period when the beneficiary was not eligible to receive personal care services—for example, when hospitalized for acute care services. Others have also found the poor quality of personal care services data submitted to CMS to be a long-standing problem.
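The date-span problems described above lend themselves to straightforward screens: flagging claims whose service period is overly broad, and flagging claims whose period overlaps an institutional stay. The sketch below is illustrative only; the record layout and thresholds are our own assumptions, not MSIS structures.

```python
# Illustrative sketch: flag claims with overly broad service periods and
# claims that overlap an inpatient stay. Record layout is hypothetical.
from datetime import date

def span_days(claim):
    """Number of calendar days a claim covers, inclusive of both dates."""
    return (claim["end"] - claim["start"]).days + 1

def broad_claims(claims, max_days=1):
    """Claims whose service period spans more than max_days."""
    return [c for c in claims if span_days(c) > max_days]

def overlaps_stay(claim, stay_start, stay_end):
    """True if the claim period overlaps an institutional stay."""
    return claim["start"] <= stay_end and claim["end"] >= stay_start

claims = [
    {"id": 1, "start": date(2012, 3, 5), "end": date(2012, 3, 5)},
    {"id": 2, "start": date(2012, 3, 1), "end": date(2012, 6, 30)},
]
print([c["id"] for c in broad_claims(claims)])  # [2]
# A claim spanning March through June can only be compared against an April
# hospital stay by overlap, which flags it for manual review:
print(overlaps_stay(claims[1], date(2012, 4, 10), date(2012, 4, 14)))  # True
```

The overlap test illustrates the limitation the report describes: with a multi-month service period, a reviewer can flag the claim as suspect but cannot determine whether any service was actually billed for the days of the hospital stay.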
Based on numerous reviews of states' personal care services programs, the HHS OIG determined that the limited or missing information on personal care service providers, dates of service, and quantity of services was an impediment to effective program integrity and oversight. The HHS OIG found that the data could not be used to accurately identify overlapping claims because it was common for providers to submit one claim for multiple instances of personal care services provided over days, weeks, or months. The HHS OIG also found that with overlapping dates it is difficult to identify instances when beneficiaries were receiving institutional services and therefore were ineligible for home-based personal care services. Further, the HHS OIG found that claims for personal care services did not include unique identifiers for personal care attendants and that cases of fraud often involved impossible or improbable volumes of service or service patterns, for example, claims for more than 24 hours in a day or claims for services in multiple beneficiary homes during the same day. The HHS OIG concluded that, if the availability and quality of personal care data were improved, investigators could analyze the data to identify and follow up on aberrancies and questionable billing patterns. Based on its findings, the OIG recommended that CMS take steps to reduce variation in how states are documenting claims for personal care services, among other recommendations.

Medicaid Budget and Expenditure System Data Collected by CMS Are Not Always Accurate or Complete

Medicaid personal care services expenditure data collected from states by CMS and contained in the MBES are not always accurate or complete, according to our analysis of states' reported expenditures for calendar years 2012 through 2015. CMS requires states to report expenditures for personal care services on specific lines on the CMS-64.
The required reporting lines correspond with the specific types of programs under which states have received authority to cover personal care services, and can affect the federal matching payment amounts states receive when seeking federal reimbursement. For example, a 6 percentage point increase in the federal matching rate is available for services provided through the Community First Choice program. For personal care services provided under the State Plan Personal Care Services program, CMS requires states to report their expenditures on one of two lines of the CMS-64. For personal care services provided under the three other programs—HCBS Waiver, State Plan HCBS, and Community First Choice—CMS requires states to report their expenditures for personal care services separately from other types of services provided under each program. CMS requires these states to submit expenditure amounts for specific service types on what CMS refers to as feeder forms—that is, expenditure lines on the CMS-64 that feed into the total HCBS spending amount under a state's HCBS Waiver program. The MBES system automatically generates the state's total HCBS Waiver program spending by combining the expenditures reported for each of the various specific services. We found that not all states are reporting their personal care services expenditures accurately, and as a result, personal care services expenditures may be underreported or reported in an incorrect category. We compared personal care services expenditures from all states' CMS-64 reports for calendar years 2012 through 2015 with each state's approved programs during this time period and found that about 17 percent of personal care services expenditure lines were not reported correctly. As illustrated in figure 11, nearly two-thirds of the reporting errors were a result of states not separately identifying and reporting personal care services expenditures using the correct reporting lines, as required by CMS.
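A comparison of this kind can be sketched as a cross-check of reported expenditure lines against each state's approved program authorities. The sketch below is a hypothetical simplification: the state name, program labels, and amounts are invented for illustration and do not correspond to actual CMS-64 line items.

```python
# Illustrative sketch: flag expenditure lines reported without a matching
# approved program authority. States, programs, and amounts are hypothetical.

def reporting_errors(reported, approved):
    """Return (state, program) pairs with spending but no approved authority."""
    errors = []
    for state, lines in reported.items():
        for program, amount in lines.items():
            if amount > 0 and program not in approved.get(state, set()):
                errors.append((state, program))
    return errors

approved = {"State A": {"State Plan PCS", "HCBS Waiver"}}
reported = {"State A": {"State Plan PCS": 5_000_000,
                        "Community First Choice": 250_000}}
print(reporting_errors(reported, approved))  # [('State A', 'Community First Choice')]
```

Note that a check like this catches only one of the two error types described: spending reported under a program the state is not approved to operate. The more common error—spending lumped into the wrong line rather than reported separately—cannot be detected from the reported lines alone.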
Without separate reporting of personal care expenditures as required, CMS is unable to monitor how spending changes over time across the different program types or to accurately estimate the magnitude of potential improper payments for personal care services. The other types of errors involved states erroneously reporting expenditures that did not correspond with approved programs. As a result, CMS is not able to efficiently and effectively identify and prevent states from receiving federal matching funds inappropriately, in part because it does not have accurate fee-for-service claims data that track payments by personal care program type and that are linked with the expenditures reported for purposes of federal reimbursement. These errors demonstrate that CMS is not effectively ensuring that its reporting requirements for personal care expenditures are met. By not ensuring that states are accurately reporting expenditures for personal care services, CMS is unable to accurately identify total expenditures for personal care services, expenditures by program, and changes over time. According to CMS, expenditures that states report through MBES are subject to a variance analysis, which identifies significant changes in reported expenditures from year to year. However, CMS's variance analysis did not identify any of the reporting errors that we found. CMS officials told us that they will continue to review states' quarterly expenditure reports for significant variances and follow up on such variances.

CMS Is Taking Steps to Improve Data Collected from States, but Has Not Yet Fully Addressed Completeness, Consistency, and Accuracy of Personal Care Services Data

CMS has two ongoing actions intended to improve Medicaid claims data collected from states. First, CMS is developing an enhanced Medicaid claims data system—called the Transformed Medicaid Statistical Information System (T-MSIS)—that will replace MSIS.
Enhancements being made under T-MSIS include requiring states to report more timely data and additional claims information, as well as improved CMS checks on the quality of data submitted. Specifically, according to CMS, states will be required to report data more frequently than is now required (monthly rather than quarterly); submit a new data file reporting information on the providers of services, including provider identification numbers; and identify for each claim which expenditure line on the CMS-64 corresponds with the type of service covered by the claim. CMS is also improving the quality of data reported by states by subjecting states' submitted data to thousands of electronic checks to identify obvious errors. Despite the promise of T-MSIS, implementation by all states has been delayed for several years and is not yet complete. The original date for nationwide implementation was January 2014; however, according to CMS officials, as of July 2016, 10 states were submitting T-MSIS data to CMS, but not all of the required data were submitted. The agency expects that all states will be submitting T-MSIS claims data by the end of calendar year 2016. However, reaching this goal depends on the remaining states' timeliness in completing the work needed to successfully transmit the T-MSIS data. It could be a number of years before all states are submitting complete T-MSIS data that include all required data elements, according to officials. Once all states are reporting T-MSIS claims data, including personal care services claims, key data limitations we identified associated with MSIS claims may not be fully addressed. This is because under T-MSIS, CMS has not taken steps to improve the completeness and consistency of personal care services claims data.
For example, CMS has not issued guidance to establish: a uniform set of procedure codes to be used by all states to more consistently document the type and quantity of personal care services rendered; state reporting requirements for provider identification numbers for personal care attendants; or appropriate time periods covered by individual claims—that is, the maximum number of days that a personal care attendant may include in a single claim. In addition, planned improvements in T-MSIS to identify the corresponding expenditure line on the CMS-64 may not be realized. CMS has stated a goal that T-MSIS would identify, for each claim paid in a fee-for-service delivery system, the expenditure line on the CMS-64 that corresponds with the type of service covered by the claim. This goal would allow better accounting for the claims paid and the services for which the claims were made. Further, this linking of the claims with the associated expenditure line could facilitate more accurate state reporting of expenditures on the CMS-64 and allow CMS to effectively reconcile each state's payments for personal care services with its reported expenditures. However, CMS's plans to have states link their T-MSIS claims with CMS-64 expenditure lines will be effective for only one personal care services program. For the three other programs, T-MSIS claims are not required to be associated with a specific service type. Rather, the claims are identified simply as an HCBS service under one of the three programs. Without this information, T-MSIS claims for personal care services cannot be cross-walked with CMS-64 data on the expenditures for those services. CMS's second ongoing action to improve Medicaid claims data collected from the states is the establishment of a new Division of Business and Data Analysis. This division is intended to help the agency ensure the quality of T-MSIS data.
According to CMS officials, MSIS claims data have generally not been used for program monitoring and oversight because issues with data timeliness, completeness, and consistency have limited their usefulness for these purposes. Development of T-MSIS is intended to address these data issues, and the establishment of the new division is intended to facilitate the use of state-collected data by CMS. According to CMS officials, the new division is intended to: work with states to help ensure the completeness and consistency of claims data as states transition to T-MSIS; improve the quality of the data by analyzing the data for anomalies and errors that can be corrected; build the agency's capacity to use the data for program monitoring, oversight, and reporting; and provide data analysis support for different CMS program offices, including the offices that oversee states' personal care services programs. According to CMS, the improved data will be used by CMS for program monitoring, policy implementation, improving beneficiary health care, and lowering costs. CMS's efforts in building the agency's data analysis capacity are underway but in the early stages. Improving the quality of the data is a continuous process that depends on identifying the specific data needed for oversight functions. According to CMS officials responsible for implementing T-MSIS, while T-MSIS has the capability for improving the quality of data submitted by states, policies and guidance are needed regarding how CMS will use it. CMS recognizes it has an important responsibility to support Medicaid agencies and leverage program data to protect the Medicaid program from fraud, waste, and abuse, in part by improving the quality and consistency of Medicaid data reported to CMS and improving the analysis of these data to identify potential risks.
However, as of September 2016, neither the division nor the CMS offices responsible for managing different personal care services programs had identified or developed plans for analyzing and using personal care services data for program management and oversight, such as analytical tools and standard reports. Doing so could facilitate necessary changes to improve the quality of the data, clarify T-MSIS reporting requirements, facilitate the integration of claims and expenditure data, and increase the usefulness of claims data for oversight. Federal agencies should collect data that are reasonably free from error and bias and represent what they purport to represent. Standards for Internal Control in the Federal Government indicate that appropriate data must be collected to enable program oversight and establish a strong internal control environment. Timely, relevant, and reliable data are needed for decision making, external reporting, and monitoring program operations—for example, to conduct management functions such as tracking the growth in use of and spending on specific Medicaid services; to identify trends related to utilization and payments per service, provider, and beneficiary; and to identify areas at higher risk for fraud, waste, and abuse. Without complete and consistent federal data collected from states, CMS is unable to conduct effective oversight and perform key management functions specific to personal care services, such as ensuring that states report personal care services expenditures correctly and that claims for enhanced federal matching funds are accurate; verifying states' historical spending levels for determining maintenance of expenditure requirements; linking payments from claims with reported expenditures; or providing technical assistance to states to identify improper personal care services payments. Conclusions Personal care services are an important Medicaid service for millions of vulnerable Medicaid beneficiaries.
Federal and state spending on Medicaid home- and community-based services, including personal care services, has increased significantly in the last two decades, and this growth is projected to continue. Payments for these services are at high risk for the Medicaid program, and the services have one of the highest improper payment rates of all Medicaid services. In light of these factors, CMS needs complete and consistent information to effectively monitor and oversee these services, which it currently does not collect from states. We found that the data collected from states were often incomplete, inconsistent, or inaccurately reported. CMS's efforts to improve the quality and accuracy of the data collected from states have not resulted in guidance to states on reporting of personal care services data or plans for using the data for oversight purposes. As a result, issues with the completeness, consistency, and accuracy of personal care services data reported by the states are likely to continue. With better data, CMS could more effectively perform key management functions related specifically to personal care services, such as ensuring that states' claims for enhanced federal matching funds are accurate and that maintenance of expenditure and cost neutrality requirements are met.
Recommendations for Executive Action To improve the collection of complete and consistent personal care services data and better ensure CMS can effectively monitor the states’ provision of and spending on Medicaid personal care services, we recommend CMS take the following four steps: Establish standard reporting guidance for personal care services collected through T-MSIS to ensure that key data reported by states, such as procedure codes, provider identification numbers, units of service, and dates of service, are complete and consistent; Better ensure, for all types of personal care services programs, that data on provision of personal care services and other HCBS services collected through T-MSIS claims can be specifically linked to the expenditure lines on the CMS-64 that correspond with those particular types of HCBS services; Better ensure that personal care services data collected from states through T-MSIS and MBES comply with CMS reporting requirements; and Develop plans for analyzing and using personal care services data for program management and oversight. Agency Comments and Our Evaluation We provided a draft of this report to HHS for review and comment. HHS concurred with two of our four recommendations, specifically, that the agency better ensure that states comply with reporting requirements and develop plans for analyzing and using data. HHS did not explicitly agree or disagree with the two other recommendations—that the agency establish standard reporting guidance and improve the linkages between CMS-64 and T-MSIS data on personal care services. However, in its response to these two recommendations, HHS stated that the Department had recently published a request for information in the Federal Register intended to gather input on additional reforms and policy options to strengthen the integrity of service delivery and appropriate reporting standards for personal care services and other HCBS. 
HHS indicated that the information collected will be used to determine the agency's next steps. In light of our findings of inconsistent and incomplete reporting of claims and encounters, errors in reporting expenditures, and the high risk of improper payments associated with personal care services, we believe that action in response to these two recommendations is needed to improve CMS oversight. HHS also provided technical comments, which we incorporated as appropriate. HHS's comments are reprinted in appendix I. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Health and Human Services Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tim Bushfield, Assistant Director; Perry Parsons, Analyst-in-Charge; Anna Bonelli; Christine Davis; Barbara Hansen; Giselle Hicks; Laurie Pachter; Vikki Porter; Bryant Torres; and Jennifer Whitworth made key contributions to this report.

A growing share of long-term care spending under Medicaid, a joint federal-state health care program, is for services provided in home and community settings. Medicaid spending on these services—about $80 billion in 2014—now exceeds spending on institutional long-term care.
Personal care services are key components of long-term, in-home care, providing assistance with basic activities, such as bathing, dressing, and toileting, to millions of individuals seeking to retain their independence and to age in place. However, these services are also at high risk for improper payments, including fraud. Given the expected increase in the demand for and spending on personal care services and risk of improper payments, GAO was asked to examine available data on personal care services and CMS's use of the data. This report: (1) describes the CMS systems that collect data on personal care services and what the data reveal, and (2) examines the extent to which data from these systems can be used for oversight. GAO reviewed information from two CMS data systems, reviewed relevant federal guidance and documents, and interviewed officials and researchers. Two data systems managed by the Centers for Medicare & Medicaid Services (CMS)—the federal agency that oversees Medicaid—collect information from states on the provision of and spending on personal care services: The Medicaid Statistical Information System (MSIS) collects detailed information from provider claims on services rendered to individual Medicaid beneficiaries and state payments for these services. The Medicaid Budget and Expenditure System (MBES) collects states' total aggregate Medicaid expenditures across 80 broad service categories. Information from these two CMS data systems can be used in the aggregate to describe broadly the provision of and spending on Medicaid personal care services. For example, MBES data show that total fee-for-service spending on these services was at least $15 billion in 2015—up $2.3 billion from 2012. However, the usefulness of the data collected from these two systems for CMS oversight is limited because of data gaps and errors. 
To provide effective oversight, including decision making, external reporting, and monitoring program operations, CMS needs timely, relevant and reliable data on personal care services rendered and the amount paid. GAO found that the data collected did not always meet these standards. For example: MSIS data were not timely, complete, or consistent. The most recent data available at the time of GAO's audit were for 2012 and only included data for 35 states. Further, 15 percent of claims lacked provider identification numbers, over 400 different procedure codes were used to identify the services, and the quantity and time periods varied widely. Without good data, CMS is unable to effectively monitor who is providing personal care services or the type, amount, and dates of services. CMS may also face challenges determining whether beneficiaries were eligible for services and assessing the reasonableness of the amount of services claimed. MBES data were not always accurate or complete. From 2012 through 2015, GAO found that 17 percent of expenditure lines were not reported correctly. Nearly two-thirds of these errors were due to states not separately identifying personal care services expenditures, as required by CMS. Inaccurate and incomplete reporting limits CMS's ability to ensure federal matching funds are provided consistent with states' approved programs. CMS is developing a new Medicaid claims system to replace MSIS and recently established a new office to support CMS's use of Medicaid data for program management and monitoring. However, CMS has not issued guidance related to reporting of personal care services that addresses the gaps GAO identified, or developed plans to use the data for oversight purposes. Without improved data and plans for how it can be used for oversight, CMS could continue to lack critical information on personal care service expenditures. 
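The gaps GAO describes above amount to simple completeness and consistency tests on claims records, such as the share of claims missing provider identification numbers and the number of distinct procedure codes in use. A minimal sketch of such tests follows; the field names, records, and procedure codes are invented for illustration and are not the actual MSIS layout:

```python
# Hypothetical sketch of the kinds of completeness checks implied above.
# Field names and records are invented, not the actual MSIS file layout.
def claims_data_summary(claims):
    """Summarize gaps that would limit a claims file's use for oversight."""
    missing_provider = sum(1 for c in claims if not c.get("provider_id"))
    procedure_codes = {c["procedure_code"] for c in claims}
    return {
        "share_missing_provider_id": missing_provider / len(claims),
        "distinct_procedure_codes": len(procedure_codes),
    }

claims = [
    {"provider_id": "P1", "procedure_code": "T1019"},
    {"provider_id": None, "procedure_code": "T1020"},
    {"provider_id": "P2", "procedure_code": "T1019"},
    {"provider_id": "P3", "procedure_code": "S5125"},
]
summary = claims_data_summary(claims)
```

Run over a full claims file, checks of this kind would surface the patterns GAO reported, such as the 15 percent of claims lacking provider identification numbers.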
HHS agreed with two of GAO's recommendations to ensure state compliance with reporting requirements and develop plans to use the data. HHS neither agreed nor disagreed with two others.
Background The CDFI Fund is authorized to allocate tax credit authority to CDEs that manage NMTC investments in low-income community development projects. The CDEs are domestic corporations or partnerships with a primary mission of providing investment capital for low-income communities or low-income persons. Some CDEs are established by other public or private entities, such as local governments or financial institutions. Through the CDFI Fund, Treasury awards tax credit authority to CDEs through a competitive application process. After CDEs are awarded tax credit authority, they use it to attract investments from investors, who then claim the NMTC. CDEs use the money raised to make investments in one or more projects in low-income communities. In the 11 rounds of allocations since 2003, Treasury, through the CDFI Fund, has made allocations to CDEs totaling $40 billion. The base for claiming the credit on a project is called the qualified equity investment (QEI). It equals the credit authority allocated to a project by a CDE and generally covers a portion of the total project costs. The equity in the QEI includes money provided by NMTC investors but may also include money from private lenders or other government entities. The NMTC, which is equal to 39 percent of the QEI, is claimed over 7 years: 5 percent in each of the first 3 years and 6 percent in each of the last 4 years. In recent years, private investors have claimed more than $1 billion in NMTCs annually. When the QEI does not cover the entire project cost, the NMTC financing is supplemented by other financing outside the NMTC allocation. CDEs are required to invest the funds they receive in qualified low-income community investments, which include, but are not limited to, investments in operating businesses and residential, commercial, and industrial projects. Although the range of activities financed by CDEs varies, about half of NMTC investments have been used for commercial real estate projects.
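The 39 percent credit described above is taken on a fixed statutory schedule. The arithmetic can be sketched as follows; the $10 million QEI is a hypothetical figure chosen for illustration, not a value from this report:

```python
# Illustration of the NMTC claim schedule described above:
# 5% of the QEI in each of years 1-3, 6% in each of years 4-7,
# for a total of 39% of the QEI over the 7-year period.
def nmtc_schedule(qei):
    """Return the credit claimed in each of the 7 years for a given QEI."""
    rates = [0.05] * 3 + [0.06] * 4
    return [qei * r for r in rates]

credits = nmtc_schedule(10_000_000)  # hypothetical $10 million QEI
total = sum(credits)                 # 39% of the QEI, i.e., $3.9 million
```

Note that the credits accrue to whoever holds the qualified equity investment, regardless of how the underlying project performs, which is relevant to the risk discussion later in this report.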
The program expired in 2013 but legislation has been proposed to extend it, and the President requested a permanent extension in his fiscal year 2015 budget proposal. The Financial Structures of NMTC Investments Have Become More Complex and Less Transparent While Treasury Guidance Covers Only the Simpler Structures NMTC investors have developed financial structures that increase the amount of other funding from either private or public sources that are used with the NMTC—a process that is called increasing the leverage on the investment. These structures can increase the amount of federal subsidy to a project and can result in projects being undertaken that would not otherwise have been started for lack of sufficient funding. However, they also increase the complexity of the financial structures by adding more parties and more transactions, which in turn reduces transparency and may increase the cost in terms of fees and other related transactions costs. An assessment of the NMTC program as a whole requires complete information on all the costs and benefits attributable to the NMTC, including these administrative and compliance costs. Figures 1-4 are simplified illustrations of these increasingly complex financial structures. In 2003, the Internal Revenue Service (IRS) confirmed the ability to use private funding to leverage a NMTC structure. Also, in 2004, Treasury issued final regulations generally allowing the NMTC to be combined with other tax credits. However, neither Treasury nor IRS has specifically confirmed the ability to use other federal or state funding to increase the leverage on NMTC investments. In Practice, NMTC Financial Structures Are More Complex Many NMTC projects have financial structures that are more complex than the simple examples in figures 1 through 4. According to CDFI Fund data, 21 percent of projects originating in 2010 through 2012 had financial structures involving more than one CDE. 
In addition, 21 percent of projects had four or more transactions involving financial flows (such as loans from the CDEs to the LIC business). The NMTC financial structures have become more complex over time. We estimated that 41 percent of NMTC investments were made using one of the leveraged models in 2006, while more recent industry estimates state that more than 90 percent used such leverage in 2013. The simplified structures in our examples also do not show some of the complexities that are required for the purpose of acquiring the tax benefit. For example, when the public funds are leveraged, as in figure 4, a pass-through entity that is disregarded for tax purposes is established to route the flow of funds back from the low-income community business to the investment fund to claim the NMTC. The complexity of the structures may reduce transparency by making it more difficult to trace the flow of private and public funds and the benefits from the tax subsidies. For example, investors can leverage other tax credits with the NMTCs to gain access to other tax benefits, such as when the tax credit for solar equipment is leveraged to claim additional NMTCs and also claim accelerated depreciation deductions based on the solar equipment. However, as mentioned above, this complexity can have benefits because it may result in projects getting financing they could not get otherwise. For example, combining assistance from other government programs with the NMTC, as in figures 3 and 4, could finance projects that would not be viable if they had to rely on only the NMTC and private lenders for financing, as in figure 2. A Majority of NMTC Projects Used Other Public Funds in 2010-2012 Based on our survey of CDEs with projects originating in 2010-2012, the use of other public sources of funds with the NMTC is widespread (as shown in Table 1).
An estimated 62 percent of all NMTC projects received other public funding (funds from federal, state, or local public sources); 33 percent of all NMTC projects received other federal funding; and 21 percent of all NMTC projects received funding from multiple other government programs. Among the other federal sources most frequently used by NMTC projects in our survey were historic tax credits (HTC) and tax-exempt bonds for private non-profit education facilities. In addition, for a number of NMTC projects in our survey, small business participants obtained loans guaranteed by the Small Business Administration. The other federal funds most often leveraged with the NMTC projects in our survey were the historic tax credits, Recovery Zone bonds, and tax-exempt bonds for private non-profit education facilities. The state and local funds most frequently leveraged with the federal NMTC were state historic tax credits and state new markets tax credits. Complex NMTC Financial Structures May Mask Investments with Rates of Return That Are Higher Than Necessary The competitive or market return should be sufficient to attract enough investors to fund the project because it reflects the return on comparable investments with similar risk. In figures 1 through 4 above, we assumed for illustrative purposes that the competitive market rate of return was 7 percent. We also assumed that NMTC investors provided enough equity to the project (perhaps because of competition for the credits) that the credits claimed over the 7-year compliance period provided a market return. NMTC industry representatives have argued that competition for credits by potential investors and competition between CDEs for credit allocations work to ensure that returns on NMTC investments are kept at market rates commensurate with NMTC investors' risk. The NMTC investor's level of risk depends on the financial structure that the investor uses.
For example, when the NMTC investor uses a leveraged investment model, the investor does not generally share in the riskiness of the project. Unless the investor is also the leveraged lender or has some other financial interest in the project, it is not exposed to any risk from project failure because such an event generally does not stop the investor from claiming the credit. Some evidence suggests that some investors may receive returns that are above-market and therefore more than the necessary subsidy required to attract the funds. In a case study reported by the Urban Institute, an investor appeared to put in about $500,000 of NMTC equity to claim $1.2 million of NMTCs, representing a return of about 24 percent compounded annually. The NMTC was leveraged entirely with $2.5 million of federal and state HTCs, without use of a conventional leveraged loan in the NMTC structure. As a result, 83 percent of the qualified equity investment on which the investor claimed NMTCs was provided by other federal and state tax credit programs. However, the Urban Institute study authors said that, because of the complex financial structure, they could not rule out the possibility that the investor supplied other, non-NMTC funds to the project at a lower rate of return. They noted that the NMTC financing was combined with a conventional loan outside the NMTC structure, owner equity, and a loan from the local municipality. If the NMTC investor supplied some of these additional funds, its overall rate of return may have been lower and more in line with the market return. Guidance and Controls Do Not Exist to Prevent Above-Market Rates of Return or Unnecessary Duplication and Costs Internal controls should provide reasonable assurance that operations and use of resources are efficient. In the context of the NMTC program, the resources administered by Treasury include the tax benefits claimed by the NMTC investors.
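The roughly 24 percent figure in the Urban Institute case described above can be approximated with a simple internal-rate-of-return calculation. The sketch below is ours, not the study's: it assumes $500,000 of equity at time zero and credits received at the end of each year on the statutory 5/5/5/6/6/6/6 schedule; under those assumptions the implied annual return comes out in the mid-20-percent range, and different timing assumptions would move the result.

```python
# Rough IRR check for the Urban Institute case discussed above.
# Assumptions (ours, for illustration): $500,000 of equity at time zero;
# $1.2 million of credits received at the end of each year on the
# statutory schedule (5% of the QEI in years 1-3, 6% in years 4-7).
def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection solve for the rate at which net present value is zero."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:  # NPV falls as the rate rises for this flow pattern
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

qei = 1_200_000 / 0.39                  # QEI implied by $1.2M of credits
credits = [0.05 * qei] * 3 + [0.06 * qei] * 4
rate = irr([-500_000] + credits)        # between 0.2 and 0.3 here
```

The point of the exercise is the one the report makes: a return of this size is well above the 7 percent market rate assumed in figures 1 through 4, which is why the lack of rate-of-return reporting matters.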
One NMTC control has been Treasury and IRS guidance on allowable financial structures. The Secretary of the Treasury has the authority to limit how other federal tax benefits are used with the NMTC, and the Secretary has used this authority to prohibit its use with the Low-Income Housing Tax Credit (LIHTC). While an IRS revenue ruling, issued in 2003, allowed using debt to leverage NMTC equity, the ruling did not explicitly address using other public funds, such as other tax benefits, to leverage the NMTC as in figure 4. Controls do not exist to monitor and prevent unnecessary use of other public funds to supplement the NMTC. As already noted, Treasury, through the CDFI Fund, does not collect information about the specifics of other public funding. Nor does Treasury have controls at the CDFI Fund to limit the risk of cases like the example from the Urban Institute study, in which other public funds were used to expand the NMTC base and apparently generate a 24 percent rate of return for the NMTC investor. We believe that such controls could take a variety of forms and would have to be assessed relative to any added compliance and administrative costs. A control that would provide greater clarity about tax subsidies could be achieved by requiring CDEs to report the NMTC investor's overall rate of return on the NMTC project. As suggested by the Urban Institute example, this information would allow an assessment of whether the NMTC investor is earning a market return commensurate with the risk on its entire investment. With this information, an additional control could be implemented that would require CDEs to justify rates of return above a certain threshold by explaining why a project was so risky that it required a greater-than-market rate of return. Other controls that could be considered include caps on rates of return and mechanisms to ensure competition among NMTC investors sufficient to prevent above-market rates of return.
The decision to adopt any of these controls would require that Treasury compare the benefits of the controls with any compliance costs from added complexity for taxpayers and administrative costs for Treasury from collecting and evaluating the data and monitoring the controls once they are put in place. The complexity of the financial structures creates a lack of transparency for taxpayers and IRS, and can increase both the risk of higher-than-needed NMTC rates of return and investment transaction costs. Combining multiple investment sources may help some NMTC projects to obtain sufficient financing to proceed. Indeed, NMTCs are often referred to as “gap financing” that can be added to other financing. Some projects may not require NMTC funding. The Urban Institute study estimated that about 20 percent of projects in the first 4 years of the program showed no evidence of needing NMTCs to proceed, while about 30-40 percent did, with 30 percent uncertain. However, even in the case where the NMTC funding is necessary, the intricate patterns of investment flows through NMTC structures, where multiple sources may be mingled and later dispersed, make it difficult to determine who is receiving subsidies and whether the return to NMTC investors is higher than necessary. In addition, the network of transactions in the NMTC financial structures increases costs, both in terms of explicit CDE fees and the other resources used to pay the legal and accounting costs necessary to establish the entities that make the transactions.
The costs associated with financial structures could also appear in the form of higher interest rates, especially when the investor and leveraged lenders are related parties. In listening sessions organized by CDFI Fund officials, some CDEs reported that comparisons of the fees that they charged with the fees charged by integrated investors/lenders were inaccurate because of the ability of integrated investors/lenders to receive compensation in other ways. The fees, interest, and other costs that can offset one another (a low fee may be offset by a high interest rate) also reduce transparency by making the net effect on the tax-subsidized equity reaching the low-income community business hard to determine. Officials at the CDFI Fund are attempting to address this issue by requiring CDEs to provide a disclosure statement to low-income community businesses about the size of the tax-subsidized equity and how it is affected by fees and interest rates. The new requirement that CDEs disclose to the low-income community businesses all transaction costs, fees, and compensation could help those businesses understand the final net benefit to the project being financed with NMTCs. However, because Treasury does not require the CDFI Fund to collect the CDE disclosure statements itself, the CDFI Fund database has incomplete information about fees, interest, and other costs. Without such complete information, Treasury is limited in its ability to analyze the final net financial benefit of NMTC investments to low-income community businesses. CDE Fees and Retentions Reduce the NMTC Equity Available to the Low-Income Community Businesses Our analysis shows that fees and retentions by the CDEs reduced the $8.8 billion of NMTC investment available to the businesses in 2011-2012 by about $619 million, or 7.1 percent. The initial reduction occurred as part of the NMTC investment that the CDEs retain to cover administrative costs before investing the remainder in the project.
The CDEs then also charge fees over the course of the 7-year compliance period that further reduce the equity available to the project. These fees can take the form of front-end or origination fees at closing; ongoing or asset management fees during the compliance period; and closing fees at the end of the compliance period. Table 2 shows the fees and retentions, measured as a percentage of NMTC investment, that reduce the equity that is available to the businesses. In addition, the projects may also incur third-party transaction costs for NMTC-related accounting or legal services not provided by the CDEs. The CDEs are not required to report these other third-party transaction costs to the CDFI Fund. Higher CDE Fees and Retentions Are Associated with More Complex Financial Structures Our analysis also shows that the amount of fees and retentions charged is strongly associated with the amount of NMTC investment in the project: the amount of NMTC investment accounts for about 50 percent of the variation in fees across projects. Although the CDFI Fund does not currently collect data that directly measure the complexity of investment structures, it does collect data on the number of investment transactions that occur on an NMTC project. These transactions often represent loans and investments to the business from different entities, such as multiple CDEs, and can therefore be used as a proxy for more direct measures of complexity. Our regression analysis of these data shows that higher fees and retentions are associated with more complex structures, as indicated by the number of transactions (see table 3 for details of this association between fees and retentions and financial structure, and other characteristics of the NMTC project). Fees and retentions are generally described by NMTC participants as reflecting mostly fixed costs, which would imply that fees, as a share of the investment, decline as the total size of the NMTC investment increases.
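The kind of association described above, in which one variable accounts for a share of the variation in another, can be illustrated with a simple ordinary-least-squares calculation. The figures below are invented for illustration; they are not CDFI Fund data or the regression results reported here:

```python
# Illustrative only: a simple OLS fit of project fees on NMTC investment
# size, showing how an R-squared (the share of variation explained) is
# computed. The dollar figures are invented, not CDFI Fund data.
def ols_r_squared(x, y):
    """Fit y = a + b*x by least squares; return (slope, r_squared)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    syy = sum((yi - mean_y) ** 2 for yi in y)
    slope = sxy / sxx
    r_squared = sxy ** 2 / (sxx * syy)  # valid for simple regression
    return slope, r_squared

investment = [5, 10, 15, 20, 25]   # hypothetical NMTC investment, $ millions
fees = [0.5, 0.8, 1.0, 1.5, 1.7]   # hypothetical fees and retentions, $ millions
slope, r2 = ols_r_squared(investment, fees)
```

In the actual analysis, a multiple regression with the number of transactions as a complexity proxy plays the role of this simple fit; the principle of apportioning variation is the same.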
CDEs That Reported Charging No Fees or Retentions Illustrate the Program's Lack of Transparency For about 20 percent of projects originating in 2011-2012, CDEs reported to the CDFI Fund that they charged zero fees or retentions on these projects (see figure 5). However, it seems unlikely that the services provided by the CDEs were uncompensated. While the payment may have come in different forms, a lack of transparency makes it hard to readily determine how much of the NMTC investment is being reduced and by what means. For example, NMTC program participants have suggested that these projects with no fee costs may be more integrated investments, in which banks or other large institutions may be lenders as well as investors. In this case, low fees may be offset by higher interest rates. We found some indirect evidence of this in another regression, in which the projects with no fees or retentions were analyzed relative to all the projects that had fees and retentions. Here, the analysis showed that the projects with higher average interest rates were more likely to charge no fees and retentions. For this subgroup of projects, the positive relationship between fees and interest rates that we found when we analyzed all the projects (as shown in table 3) is reversed. Data on NMTC Equity Remaining in the Low-Income Community Businesses Are Not Sufficiently Complete or Accurate Projects that have considerable equity are more likely to have better loan-to-value ratios and are generally more likely to obtain loans with better terms than projects without their own equity. For this reason, the larger the amount of equity remaining in the project, the greater is the likelihood that the project will continue on its own without any further government subsidies.
However, the data available from the CDFI Fund reflect only the equity left by NMTC investors and may not give a complete picture of the economic viability of the business because they do not include other forms of equity, such as the retained earnings of a successful business. Furthermore, the CDFI Fund data on equity left in the business are not sufficiently reliable because they are incomplete and not accurate enough to capture program performance. According to our standards for measuring program performance, several elements should be considered when examining the quality of agency performance data, including accuracy and completeness. However, our review showed that about 60 percent of projects originating in 2011-2012 had inconsistencies that made these data unreliable. Examples of these inconsistencies are: Incomplete data: One or more CDEs involved in the project did not report any values, making it impossible to calculate an amount for the entire project. Inaccurate data: Equity remaining was projected to equal or exceed 100 percent of all NMTC investments in the project. These amounts are not valid because they exceed the amount of the original equity investment. Zero values: Some CDEs may be reporting a zero value because they do not intend to leave equity in the project. But according to CDFI Fund officials, other CDEs may intend to leave equity in the project but reported a zero value for accounting reasons, because the charge would not be recorded until a later date. Thus, the zero values may be understating the equity available to the low-income community businesses. We determined that one cause of data unreliability was the unclear instructions in the manual for entering data into the CDFI Fund's systems. The manual does not clearly explain the time period for which information should be reported, which may have led to the CDEs reporting according to different accounting rules.
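The three inconsistency types described above can be expressed as simple validation rules applied to each project's record. The field names and records below are hypothetical, not the CDFI Fund's actual schema:

```python
# Hypothetical sketch of the consistency checks described above.
# Field names and records are invented, not the CDFI Fund's actual schema.
def flag_equity_record(record):
    """Return a list of inconsistency flags for one project's equity data."""
    flags = []
    equity = record.get("equity_remaining")
    investment = record["nmtc_investment"]
    if equity is None:
        flags.append("incomplete")   # a CDE reported no value at all
    elif equity >= investment:
        flags.append("inaccurate")   # equity >= 100% of NMTC investment
    elif equity == 0:
        flags.append("zero_value")   # may understate actual equity left
    return flags

projects = [
    {"equity_remaining": None, "nmtc_investment": 5_000_000},
    {"equity_remaining": 6_000_000, "nmtc_investment": 5_000_000},
    {"equity_remaining": 0, "nmtc_investment": 5_000_000},
    {"equity_remaining": 1_000_000, "nmtc_investment": 5_000_000},
]
flagged = [flag_equity_record(p) for p in projects]
```

Rules of this kind could be applied at data entry, which would address the reliability problems earlier than after-the-fact review; the zero-value flag in particular can only prompt follow-up, since a zero may be legitimate.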
Incomplete and inaccurate data result in an inability to use the data to track an important indicator of the likely performance of the NMTC projects after the compliance period ends. As a result, it is not possible to determine from these data the amount of equity to remain in the low-income community businesses after the 7-year credit period. CDFI Fund Data on Distressed Projects Are Not Sufficiently Reliable CDFI Fund data on projects experiencing financial distress, such as the number of days a loan is delinquent, track program performance in that they indicate how likely a project is to continue in business during and after the credit period. These indicators of financial distress must be weighed against other NMTC program goals. Another program goal is to encourage investors to invest in projects that may be more risky because they are located in low-income communities. Regardless, reliable performance data are needed to administer the program. Our review of CDFI Fund data on the current status and performance of loans to NMTC projects showed inconsistencies that made these data not sufficiently reliable to determine the number and extent of projects experiencing financial distress. Examples of these inconsistencies include: Incomplete data: Ninety-nine percent of projects reported current loan status (a mandatory field), but approximately 30 percent of projects omitted additional information for the other potential indicators of distress such as the number of times a loan has been restructured, the number of days the loan is currently delinquent, and the dollar amount of any loan that has been charged off (optional or conditionally-required fields). Inaccurate data: Potential inaccuracies appear when we compared CDEs’ descriptions of troubled projects from their NMTC allocation applications with indicators of distress for those same projects in CDFI Fund databases. 
In their 2012 applications, CDEs described 193 projects with delinquent, defaulted, or impaired loans in sufficient detail that we could identify those projects in the CDFI Fund data. But 49 of these projects showed no indications of distress in the data. The causes of data unreliability were unclear instructions and optional reporting. CDEs enter their data into the CDFI Fund’s electronic database. How they enter the data is determined by instructions provided by the CDFI Fund. We found a lack of clarity in these instructions that prevented us from being confident that the data provided an accurate measure of distress. For example, the instructions did not clearly distinguish restructured loans from refinanced loans. However, refinancing may not be an indicator of distress. It can occur for a variety of economic reasons. In addition, the data were incomplete because reporting some information was optional. Without accurate and complete data, the CDFI Fund does not have sufficient information to track program performance related to the future viability of the NMTC funded projects. CDFI Fund officials told us that they are changing all the loan performance data points to make them mandatory. Difficulties in Determining Whether or Not a Project Has Failed Project failures could significantly affect program performance by limiting its social and economic outcomes in low-income communities. However, determining when a project has failed is difficult. Projects that seem to be in difficulty based on the indicators of financial distress can become financially sound again. Some CDEs in their 2012 applications described successful restructurings of projects experiencing financial difficulty. But other projects are in situations where recovery seems unlikely or the CDEs have in fact written off the projects (see text boxes 1 and 2 for details of examples of both types of outcomes). The CDFI Fund is developing additional tools to collect better information on failed projects. 
As discussed above, its current measures of financial distress are inaccurate or incomplete. However, even if the data are improved, the financial distress measures may not accurately identify project failures because the projects can recover from distress. The CDFI Fund is attempting to rectify this measurement problem with a close-out report, which is intended to collect additional information on the status of the business at the end of the 7-year compliance period, such as whether a business continues to operate or a real estate project has been put into service. Examples Illustrating Difficulties in Determining if a Project Has Failed Text Box 1: Meat processing plant investment that failed. An $8 million NMTC investment in a meat processing company involving two loans and one equity investment from one CDE. According to news accounts, the investments were needed to address liquidity problems created by an expansion of this ongoing business that employed about 300 people. According to the CDE’s 2012 reapplication, the investments became impaired soon after the CDE made the investments when the leveraged lender limited the company’s ability to borrow from its credit line, which forced the company into bankruptcy. The CDE unsuccessfully sought alternative financing for the company but still suffered a loss of $5.2 million. In the CDFI Fund data, one loan and one equity investment are reported as “charged off” for the full dollar amounts. The other loan is reported as “closed” with no amount charged off. Text Box 2: Troubled college construction project that recovered. A college construction project involving three CDEs and several NMTC loans. In their 2012 reapplications for NMTC allocations, two of the CDEs reported that their loans for the project became delinquent for as much as 480 days because of construction delays and the economic recession of 2008.
The two CDEs reported that they worked with the borrower to rebalance the budget and bring in additional sources of financing. At the time of their applications, the two CDEs reported that all the previous interest was paid and all subsequent interest payments had been on time. The loans from the two CDEs show indications of poor performance (periods of delinquency) in CDFI Fund data. The third CDE did not report any delinquent or impaired loans in its 2012 reapplication, and its loans do not show any indications of poor performance in CDFI Fund data. Conclusions The potential impact of the NMTC in promoting economic development in designated low-income communities is diluted if the NMTC provides an above-market rate of return. Similarly, the impact of a combination of assistance from government programs is diluted if in the same cases the combination of assistance is unnecessarily duplicative. Treasury guidance and controls that are designed to limit these risks can help ensure the NMTC program realizes the greatest possible impact on low-income communities. Complete and reliable information is a vital component of assessing program effectiveness. While the complexity of the NMTC financial structures makes gathering information a challenge, there are several aspects of these structures where better information would aid in understanding the effectiveness of the program. These include the extent to which fees, interest rates, and other costs reduce the NMTC equity flowing to low-income community businesses, the amount of equity available to the low-income community businesses at the end of the 7-year compliance period, and the number of projects that failed or are at risk of failing.
Recommendations for Executive Action We recommend that the Secretary of the Treasury take the following actions: Issue guidance on how funding or assistance from other government programs can be combined with the NMTC including the extent to which other government funds can be used to leverage the NMTC by being included in the qualified equity investment. Ensure that controls are in place to limit the risk of unnecessary duplication at the project level in funding or assistance from government programs and to limit above market rates of return, i.e., returns that are not commensurate with the NMTC investor’s risk. Ensure that the CDFI Fund reviews the disclosure sheet that CDEs are required to provide to low-income community businesses to determine whether it contains data that could be useful for the Fund to retain. Ensure that the CDFI Fund clarifies the instructions for reporting the amount of any equity which may be acquired by the low-income community business at the end of the 7-year NMTC compliance period. Ensure that the CDFI Fund clarifies the instructions it provides to CDEs about reporting loan performance and make the reporting of that data mandatory. Agency Comments and Our Evaluation We provided a draft of this product to Treasury for comment on June 11, 2014. In its written comments, reproduced in appendix IV, Treasury concurred with two of our recommendations and reported that the other recommendations are under consideration. The CDFI Fund also provided technical comments that were incorporated, as appropriate. Treasury said that it is considering our recommendations to issue further guidance on how other government programs can be combined with NMTCs, and to ensure that adequate controls are in place to limit the risks of unnecessary duplication and above-market rates of return. 
Treasury reported that our recommendations would be reviewed in consultation with a recently formed working group that includes representatives from the IRS and the CDFI Fund to discuss potential administrative or regulatory changes. Treasury said that it is considering our recommendation about reviewing data presented in the disclosure sheets that CDEs are required to provide low-income community businesses. As our report states, not all information on the disclosure sheets, such as third-party transactions costs, is reported to the CDFI Fund. This additional information on the disclosure sheets could be useful for the CDFI Fund to retain. Treasury agreed with our recommendation to clarify instructions for reporting any equity amounts that may be acquired by the low-income community business at the end of the compliance period. Treasury also agreed with our recommendation to clarify instructions to CDEs about reporting loan performance and make this data reporting mandatory. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the CDFI Fund, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology The objectives of this report were to assess: (1) New Markets Tax Credit (NMTC) financial structures in terms of their complexity, transparency, and effect on the size of the federal subsidies going to NMTC projects as well as controls to ensure that subsidies are not larger than necessary for the investment; (2) what is known about the types and amounts of fees and other costs that reduce the amount of equity reaching low-income community businesses; (3) what is known about the amount of equity left in the low-income community businesses after the 7-year credit period; and (4) what is known about NMTC projects that are at risk of failing by becoming economically nonviable. For our first objective, we reviewed the NMTC literature and interviewed representatives of community development entities (CDEs) and researchers who have evaluated the program to determine how the financial structures have evolved. To assess the complexity and transparency of NMTC investment structures, we applied criteria from our prior work on evaluating tax expenditures. We also applied criteria from federal government internal control standards to assess whether controls are present to ensure that subsidies are not larger than necessary for a NMTC project. To report on the number, types, and funding amounts of other federal programs used on NMTC projects, we designed and implemented a web-based survey to gather information on how projects were financed from the CDEs responsible for the project. Our survey population consisted of randomly selected NMTC projects with all loans and investments closing on or after January 1, 2010. Restricting the survey population to NMTC projects within our study’s time period left us with a total of 1,265 projects in the population. We selected a stratified sample of 305 projects.
From the first stratum, defined as those projects for which there is an indication on the underlying population file that at least one of the funding sources was public dollars, we selected 126 projects. From the second stratum, defined as all other projects, we selected 179 projects. Although some projects in the second stratum could actually have public dollars, this stratification helps ensure that our sample has enough of these cases to produce estimates of that domain. The survey asked the CDEs a combination of open-ended and close-ended questions with regard to federal, state, local, and private funding sources. We pre-tested the content and format of the questionnaire with four knowledgeable CDEs and made changes based on pre-test results. We administered the survey on the web, sending an activation e-mail on March 7, 2014, and closing the survey on April 4, 2014. The practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that were obtained. A second independent analyst checked all the computer programs that processed the data. In instances where multiple CDEs responded about the same project, we manually merged the data into a single representative project following the Community Development Financial Institutions (CDFI) Fund’s practices, such as reporting the highest value as the default estimate. We obtained responses for 214 of the 305 projects in our sample for an overall response rate of about 70 percent.
Population estimates were produced by weighting the sample data from the responding projects to account for differing sampling rates for projects funded with public dollars and those funded without public dollars. We have treated the respondents as a stratified random sample and calculated sampling errors as an estimate of the uncertainty around the survey estimates. All percentage estimates based on this sample have 95 percent confidence intervals of within +/- 7 percentage points of the estimate itself. For other numeric estimates, the 95 percent confidence intervals are presented along with the estimates themselves. We are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. To address the last three objectives, we analyzed data from the CDFI Fund on NMTC investments in low-income community businesses from 2003 through 2012. The CDFI Fund requires all CDEs that have been awarded NMTC allocations to submit an annual report detailing how they invested the qualified equity investment (QEI) proceeds in low-income communities. These reports must be submitted to the CDFI Fund by the CDEs, along with their audited financial statements, within 6 months after the end of their fiscal years. CDEs are required to report their NMTC investments in the CDFI Fund’s Community Investment Impact System (CIIS) for a period of 7 years. Due to a time lag in reporting, NMTC investments reported in CIIS are less than the total amount allocated for the NMTC program. Given that the CDFI Fund requires CDEs to report information on project characteristics and financing once a year, CIIS data may not capture the most current information for all existing projects. However, the CIIS data that we used represent the most current available information as of December 2, 2013, on the status of the program. We interviewed CDFI Fund officials with knowledge of CIIS about the steps they take to ensure its accuracy.
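The stratified weighting and confidence-interval computation described above can be sketched as follows. The stratum population sizes and the yes/no responses below are invented for illustration (and kept small for readability); the report's actual strata sampled 126 and 179 projects from a population of 1,265, and its stratum population sizes are not published here.

```python
# Sketch of stratified estimation of a proportion with a 95 percent
# confidence interval, including a finite population correction (fpc).
# All stratum sizes and responses are hypothetical.
import math

strata = [
    # (population_size, 0/1 responses to a hypothetical yes/no question)
    (400, [1, 0, 1, 1, 0, 1, 0, 1]),   # stratum 1: public-dollar indicator
    (865, [0, 1, 0, 0, 1, 0, 1, 0]),   # stratum 2: all other projects
]

N = sum(n_h for n_h, _ in strata)
estimate = 0.0
variance = 0.0
for n_h, ys in strata:
    n = len(ys)
    p = sum(ys) / n
    estimate += (n_h / N) * p                            # weighted proportion
    s2 = p * (1 - p) * n / (n - 1)                       # sample variance
    variance += (n_h / N) ** 2 * (1 - n / n_h) * s2 / n  # stratum term w/ fpc

half_width = 1.96 * math.sqrt(variance)                  # 95% CI half-width
print(round(estimate, 3), round(half_width, 3))
```

With realistic sample sizes (126 and 179 responses rather than 8), the same formula yields the roughly +/- 7 percentage point half-widths the report describes.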
Based on GAO’s criteria for valid data—that they be sufficiently accurate and complete to capture program performance—we also determined the data on fees and retentions we used in this report were sufficiently reliable for our purposes (see below for discussion of the data we found unreliable for our purposes). Seventy-three projects were excluded because they had loans and investments originating both before and after December 31, 2010, and 49 projects were excluded because one or more CDEs appeared to have reported fees data incorrectly by reporting fees as a percentage of the QEI rather than as basis points as instructed by the CDFI Fund. We used the basis points reported by the CDEs to calculate the total dollar amounts of fees. We compared our final results to other data on fees submitted by CDEs when applying for NMTC allocations in 2010 and 2011. We attempted to compare these costs to costs on non-NMTC investments by reviewing industry data and academic studies on fees, and interviewing industry and tax credit experts. We also reviewed written and oral comments made by CDEs and industry experts to the CDFI Fund in response to a November 7, 2011, Federal Register notice soliciting comments on several possible NMTC program changes. The CDFI Fund specifically requested comments on whether additional rules, restrictions, and requirements should be imposed related to fees and expenses charged by CDEs. The CDFI Fund also held three listening sessions in December 2012 and January and February 2013 with a total of 45 CDE and industry experts. We reviewed the transcripts of these listening sessions and concluded from these reviews that the types of projects funded by the NMTC are so varied that we could not conduct a valid comparison of fee costs with those on non-NMTC projects. We attempted to determine the amount of equity that remains in low-income community businesses by analyzing data reported by CDEs to the CDFI Fund on equity projected to remain in NMTC projects at the end of the 7-year credit period.
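The basis-point reporting error noted above changes computed fee amounts by a factor of 100, since 1 basis point is 0.01 percent of the QEI. A minimal illustration with hypothetical figures:

```python
# Fees are instructed to be reported in basis points (1 bp = 0.01 percent
# of the QEI). A CDE that enters a 2 percent fee as "2" instead of 200
# basis points understates the computed dollar amount 100-fold.
def fee_dollars(qei_dollars, fee_basis_points):
    """Convert a fee reported in basis points into dollars."""
    return qei_dollars * fee_basis_points / 10_000

qei = 10_000_000                 # hypothetical $10 million QEI
print(fee_dollars(qei, 200))     # 2% fee reported correctly as 200 bp
print(fee_dollars(qei, 2))       # same fee misreported as "2"
```

A screening rule built on this arithmetic (for example, flagging fee values that are implausibly small relative to the QEI) is one way such unit errors can be caught before projects are excluded.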
We analyzed these data at the project level for 842 projects with loans and investments originating in 2011 and 2012. Starting with the 2011 reporting period, the CDFI Fund began requiring that, if applicable, CDEs report the projected amount of any equity or debt investment which may be acquired by the low-income community business as the result of a put/call option or other arrangement for loans and investments originating after December 31, 2010. After reviewing all 2,249 transactions for all 842 projects, we determined that data were usable for only 363 projects (about 40 percent), and therefore the data could not be used to give sufficiently reliable descriptions of the equity remaining in the project for our purposes. We found the following problems with the data that made the values unreliable for our report. For 201 projects, no data were reported by one or more of the CDEs involved in those projects. We concluded that in these cases we could not determine: (1) how many of these projects were non-leveraged where the failure to project a residual value might be expected; (2) how many indicated a true intention not to leave any equity in the project; and (3) how many were simply errors or omissions. For another 143 projects, one or more CDEs involved reported a total projected residual value of $0. We concluded in these cases that the implications of zero for equity remaining were ambiguous. According to CDFI Fund officials, some CDEs may in fact not intend to leave any residual equity for the business to obtain at the end of the 7-year period. However, other CDEs may project $0 remaining now when in fact they intend to designate the amount of equity at a later date. These CDEs may be reporting a zero value at this time due to an individual CDE’s internal practices or accounting rules particular to the CDE’s form of incorporation. That is, the CDEs would not report a projected or final residual value until the put/call option was exercised by the business.
Of the remaining 498 projects, 135 projects had data showing that the total projected residual value for these projects appeared to be overstated. Some data showed that one or more CDEs reported projected residual values greater than or equal to the original equity investment values. In most of these cases, the CDEs appeared to have incorrectly reported a projected residual value for both leveraged loans and equity investments. Based on the typical leveraged model structure, the CDEs should have only reported the residual value of the equity investments. In other cases, one or more CDEs involved in a project reported the projected residual value of the equity investment twice—once as a value for the equity investment, and then repeated it as a value for the leveraged loan. As a result of these reporting errors or inconsistencies, the total projected residual value for these projects appeared to be overstated. To determine the NMTC projects at risk of failing by becoming economically nonviable, we analyzed data reported by CDEs to the CDFI Fund that could indicate that a project is experiencing financial distress. We used indicators of financial distress that are available from CDFI Fund data (such as whether a loan on a project is delinquent, charged off, or restructured) that could show increased risk of business failure. These indicators, in most cases, could not be used to conclude that a business has failed in the sense of being economically nonviable, as the CDFI Fund does not currently have data on the ultimate disposition of NMTC projects. We analyzed the number of projects that showed indications of financial distress between 2003 and 2012, and the dollar amounts invested in these financially distressed projects. We tested the reliability of these optional fields by reviewing CDEs’ applications for 2012 NMTC allocations.
In the application instructions, CDEs were asked to discuss any delinquent, defaulted, or impaired loans or equity investments from prior NMTC investments. In the 2012 applications, we counted 281 projects with delinquent, defaulted, or impaired loans or investments. Of these, 193 projects were described in sufficient detail that we could then match those projects to transaction-level data in CIIS. However, 49 of those projects did not show any indications of financial distress in CIIS. For some projects involving multiple CDEs, one or more CDEs may have described the project in their 2012 applications as having delinquent, defaulted, or impaired investments, but only one CDE then reported any of the optional distress indicator data in CIIS. In other cases, CDEs described several projects in the 2012 applications as having delinquent, defaulted, or impaired investments, but then those CDEs did not report any distress indicators for these projects in CIIS. In the end, we concluded that the CDFI Fund CIIS data on indicators of financial distress were insufficient for our purposes, largely due to the fact that nearly all of the distress indicators were optional data fields. We conducted this performance audit from May 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Other Federal Funding Used in New Markets Tax Credit Projects In our survey of New Markets Tax Credit (NMTC) projects, we asked Community Development Entities (CDEs) what other federal sources of assistance were used (in addition to the NMTC) to fund the project. 
CDEs could select from a pre-populated list of federal sources comprised of federal tax credits, public bond financing exempt from federal tax, grants or direct payments from federal agencies, and direct or guaranteed loans from federal agencies. Our survey also permitted CDEs to write in other federal sources not included in the pre-populated list. CDEs were also asked if the NMTC was “twinned” or “enhanced” with any of these federal sources inside the NMTC structure, i.e., did NMTC investors claim NMTCs based on these additional amounts of federal assistance. The NMTC investor community often refers to this practice as “twinning” the NMTC with other tax credits or other public assistance, but for purposes of this report, we define this as leveraging other public sources with the NMTC. The four tables below list the types of other federal assistance (from our pre-populated list of federal assistance and write-in descriptions) that CDEs reported as being used to finance our sample of NMTC projects, and whether these other federal sources were leveraged with the NMTC. Appendix III: Regression Analysis of Total Fees and Retentions Associated with Characteristics of the Project Using CDFI Fund Data, 2011-2012 [Table: independent variables include the dollar amount of the NMTC qualified equity investment (QEI), the number of transactions (financial structure complexity), and whether the poverty rate exceeds 30 percent (distressed community).] The table reports the results of an ordinary least squares regression with the dependent variable equal to the amount of fees and retentions charged. Omitted category variables are: project type equal to “rehabilitation or other” and origination year equal to “2011.” The regression was also estimated using various functional forms including quadratic and log forms of the regression equation.
These specifications either were rejected as statistically insignificant or, in the case of some of the log specifications, resulted in a substantial decrease in the explanatory power of the regression as measured by its R-squared. This regression was also estimated with fees and retentions as separate dependent variables, and, in these cases, the results were different for certain variables of interest. With fees as the dependent variable, neither the interest rate nor the number of transactions was statistically significant at the 95 percent confidence level. However, the size of the qualified equity investment remained positive and significant. With retentions as the dependent variable, the retentions were positively and significantly related to interest rates but had no statistically significant relationship to the number of transactions. Appendix IV: Comments from the Department of the Treasury Appendix V: GAO Contact and Staff Acknowledgments GAO Contact James R. White, (202) 512-9110, or [email protected]. Staff Acknowledgments In addition to the contact named above, Kevin Daly, Assistant Director; Amy Bowser; Cathy Hurley; Mark Kehoe; Jill Lacey; Edward Nannenhorn; Mark Ramage; Wayne Turowski; and Elwood White made key contributions to this report. Related GAO Products 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014. Community Development: Limited Information on the Use and Effectiveness of Tax Expenditures Could Be Mitigated through Congressional Attention. GAO-12-262. Washington, D.C.: February 29, 2012. Efficiency and Effectiveness of Fragmented Economic Development Programs Are Unclear. GAO-11-477R. Washington, D.C.: May 19, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. 
New Markets Tax Credit: The Credit Helps Fund a Variety of Projects in Low-Income Communities, but Could Be Simplified. GAO-10-334. Washington, D.C.: January 29, 2010. HUD and Treasury Programs: More Information on Leverage Measures’ Accuracy and Linkage to Program Goals Is Needed in Assessing Performance. GAO-08-136. Washington, D.C.: January 18, 2008. Tax Policy: New Markets Tax Credit Appears to Increase Investment by Investors in Low-Income Communities, but Opportunities Exist to Better Monitor Compliance. GAO-07-296. Washington, D.C.: January 31, 2007. New Markets Tax Credit Program: Progress Made in Implementation, but Further Actions Needed to Monitor Compliance. GAO-04-326. Washington, D.C.: January 30, 2004. | In recent years, private investors have claimed more than $1 billion in NMTCs annually. The credits are combined with private loans and other public funds to support investments in low-income communities. GAO was asked to review the financial structure of NMTCs. This report assesses: (1) the complexity and transparency of NMTC financial structures and controls over the size of federal subsidies; (2) what is known about the types and amounts of fees and other costs of the financial structures; (3) what is known about the equity remaining in low-income community businesses after the 7-year credit period; and (4) what is known about NMTC project failure rates. GAO reviewed Treasury NMTC data and surveyed CDEs that allocated credits to 305 projects in 2010-2012. The financial structures of New Markets Tax Credit (NMTC) investments have become more complex and less transparent over time. The increased complexity is due, in part, to combining the NMTC with other federal, state, and local government funds. Based on GAO's survey of Community Development Entities (CDEs) an estimated 62 percent of NMTC projects received other federal, state, or local government assistance from 2010 to 2012. 
While combining public financing from multiple sources can fund projects that otherwise would not be viable, it also raises questions about whether the subsidies are unnecessarily duplicative because projects receive funds from multiple federal sources. In addition, in some cases the complexity of the structures may be masking rates of return for NMTC investors that are above market rates. For example, a study done for the Department of the Treasury (Treasury) found an investor apparently earning a 24 percent rate of return, which is significantly above market rates of return. In that case, the investor leveraged the NMTCs by using other public funds to increase the base for claiming the NMTC. Treasury and the Internal Revenue Service issued guidance about allowable financial structures in the early years of the NMTC program, but the guidance has not been updated to reflect the subsequent growth in complexity, such as the use of other public money to leverage the NMTC. Treasury also does not have controls to limit the risk of unnecessary duplication in government subsidies or above-market rates of return. Without such guidance and controls, the impact of the NMTC program on low-income communities could be diluted. The costs of complex NMTC financial structures may not be fully reflected in fees charged by CDEs, and they could be reflected in other costs such as higher interest rates. Treasury has taken steps to ensure businesses are better informed about fees and other costs, but is not collecting these additional data itself. Without these data, Treasury is limited in its ability to analyze NMTC program benefits. GAO also found that the data on equity remaining in businesses after the 7-year credit period were unreliable because, in part, instructions on what to report are unclear. As a result, at this time it is not possible to determine how much equity remains in low-income community businesses after 7 years.
Similarly, data on NMTC project failure rates were unavailable. GAO reviewed data on the performance of loans from CDEs to low-income community businesses as an indicator of whether the businesses will be viable over the long term. However, data on loan performance were also incomplete because some reporting of this information by CDEs is optional. As a result, it is not possible to determine, at this time, the NMTC project failure rate with certainty.
Background

The permanent provisions of the Brady Handgun Violence Prevention Act (Brady Act) took effect on November 30, 1998. Under the Brady Act, before a federally licensed firearms dealer can transfer a firearm to an unlicensed individual, the dealer must request a background check through NICS to determine whether the prospective firearm transfer would violate federal or state law. The Brady Act’s implementing regulations also provide for conducting NICS checks on individuals seeking to obtain permits to possess, acquire, or carry firearms. Under federal law, there are 10 categories of individuals who are prohibited from receiving or possessing a firearm. During a NICS check, descriptive data provided by an individual, such as name and date of birth, are to be used to search three national databases containing criminal history and other relevant records to determine whether or not the person is disqualified by law from receiving or possessing firearms.

Interstate Identification Index (III)—Managed by the FBI, III is a system for the interstate exchange of criminal history records. III records include information on persons who are indicted for, or have been convicted of, a crime punishable by imprisonment for a term exceeding 1 year or have been convicted of a misdemeanor crime of domestic violence.

National Crime Information Center (NCIC)—An automated, nationally accessible database of criminal justice and justice-related records, which contains, among other things, information on wanted persons (fugitives) and persons subject to restraining orders.

NICS Index—Maintained by the FBI, this database was created for presale background checks of firearms purchasers and contains information on persons predetermined to be prohibited from possessing or receiving a firearm.
According to DOJ, approximately 16 million background checks were run through NICS during 2011, of which about half were processed by the FBI’s NICS Section and half by designated state and local criminal justice agencies. States may choose among three options for performing NICS checks: the state conducts all of its own background checks, the state and DOJ share responsibility for background checks, or DOJ conducts all background checks for the state. See appendix II for further discussion of these differences. The Gun Control Act of 1968, as amended, and ATF regulations establish the definitions of the mental health and unlawful drug use prohibiting categories and, therefore, the scope of relevant records to be made available to the FBI by states and territories. As defined in ATF regulations, mental health records that would preclude an individual from possessing or receiving a firearm include (1) persons who have been adjudicated as “a mental defective,” including a finding of insanity by a court in a criminal case, incompetent to stand trial, or not guilty by reason of insanity, and (2) individuals involuntarily committed to a mental institution by a lawful authority. The prohibitor—that is, the condition or factor that prohibits an individual from possessing or receiving firearms—does not cover persons in a mental institution for observation or a voluntary admission to a mental institution. Mental health records are found within two databases checked during a NICS background check: the III and the NICS Index. Federal law prohibits individuals who are unlawful users of or addicted to any controlled substance from possessing or receiving a firearm.
ATF regulations define an unlawful user of or addicted to any controlled substance as a person who uses a controlled substance and has lost the power of self-control with reference to the use of the controlled substance, and any person who is a current user of a controlled substance in a manner other than as prescribed by a licensed physician. In general, under these regulations, use of such substances is not limited to the precise time the person seeks to acquire or receives a firearm; instead, an inference of current use may be drawn from evidence of recent use or a pattern of use through convictions, multiple arrests, and failed drug tests, among other situations. ATF regulations further provide examples upon which an inference of current use may be drawn, including a conviction for use or possession of a controlled substance within the past year, or multiple arrests related to controlled substances within the past 5 years if the most recent arrest occurred within the past year. Unlawful drug use records associated with a criminal arrest or conviction are generally found in the III, and those that are not associated with an arrest or conviction are entered into the NICS Index. FBI officials reported that states submit the vast majority of their unlawful drug use records to the III. The NIAA provides that such disqualification determinations are those made under subsection (g) or (n) of section 922 of title 18, United States Code, or applicable state law. Under the NIAA, DOJ may withhold a percentage of a state’s Justice Assistance Grant (JAG) program grants if the state provides less than 50 percent of the records requested under the NIAA. This discretionary penalty may be increased to 4 percent through 2018 and to a mandatory 5 percent penalty thereafter if a state provides less than 90 percent of records requested under the act. Additionally, the NIAA establishes the NARIP grant program to assist states in providing records to NICS. In order to be eligible for such grants, states must meet two conditions.
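The two regulatory examples above amount to a simple, date-based decision rule. The sketch below is only an illustration of those examples as described in this section, not ATF’s or the FBI’s actual adjudication logic; the function name and inputs are hypothetical.

```python
from datetime import date, timedelta

def infers_current_use(check_date, drug_convictions, drug_arrests):
    """Illustrative only: the two ATF examples of when 'current use'
    of a controlled substance may be inferred from records."""
    one_year_ago = check_date - timedelta(days=365)
    five_years_ago = check_date - timedelta(days=5 * 365)

    # Example 1: a conviction for use or possession within the past year.
    if any(d >= one_year_ago for d in drug_convictions):
        return True

    # Example 2: multiple drug-related arrests within the past 5 years,
    # with the most recent arrest within the past year.
    recent = [d for d in drug_arrests if d >= five_years_ago]
    if len(recent) >= 2 and max(recent) >= one_year_ago:
        return True

    return False

check = date(2012, 1, 1)
print(infers_current_use(check, [], [date(2008, 6, 1), date(2011, 7, 15)]))  # True
print(infers_current_use(check, [], [date(2008, 6, 1)]))                     # False
```

Real determinations involve more situations (for example, failed drug tests) and legal judgment; the point here is only that the published examples reduce to explicit look-back windows.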
First, states are to provide DOJ with estimates, pursuant to a methodology provided by the Attorney General, of their numbers of potentially NICS-applicable records. Second, states must establish a program that allows individuals who have been prohibited from possessing firearms due to a mental health adjudication or commitment to seek relief from the associated federal firearms prohibition (disability). The NIAA refers to such programs as “relief from disabilities” programs. ATF is responsible for determining whether a state’s relief program satisfies the conditions of the NIAA and has developed minimum criteria for certifying a state’s program. For example, the program has to be established by state statute—or administrative regulation or order pursuant to state law—and include due process requirements that allow persons seeking relief the opportunity to submit evidence to the lawful authority considering the relief application.

Most States Have Made Limited Progress in Providing Mental Health Records and Could Benefit from DOJ Sharing Promising Practices

States increased the number of mental health records available for use during NICS background checks from about 126,000 in October 2004 to 1.2 million in October 2011, but this progress largely reflects the efforts of 12 states, and most states have made little or no progress in providing these records. DOJ and state officials identified technological, legal, and other challenges that hinder states’ ability to make these records available. DOJ has made several forms of assistance available to help states provide records—including grants, conferences, and training—and the 6 states we met with generally reported finding these helpful. DOJ has begun to have states share their promising practices during regional meetings, but DOJ has not shared these practices nationally.
Mental Health Records Have Increased since NIAA Enactment, but Progress Largely Reflects Efforts of 12 States

The total number of mental health records that states made available to the NICS Index increased by approximately 800 percent—from about 126,000 records in October 2004 to about 1.2 million records in October 2011—according to FBI data. As shown in figure 1, there was a marked increase in the number of mental health records made available by states since 2008, when the NIAA was enacted. This increase largely reflects the efforts of 12 states that had each made at least 10,000 mental health records available by October 2011. From October 2004 to October 2011, 3 states increased the number of mental health records they made available by over 150,000 each. On the other hand, during this same time period, almost half of the states increased the number of mental health records they made available by less than 100 records. As of October 2011, 17 states and all five U.S. territories had made fewer than 10 mental health records available to the NICS Index. Factors other than the NIAA could have also contributed to the increase in mental health records made available to NICS, including state efforts already under way before the act, changes in state funding or leadership, and increases in the number of individuals with mental health records that would preclude them from receiving or possessing a firearm. In addition, in August 2008, the FBI’s NICS Section requested that states move certain records they had previously submitted to a “denied persons” category in the NICS Index to more specific categories of prohibitors, including the mental health category. According to NICS Section officials, the majority of records in the mental health category were new records submitted by states and not transferred from the denied persons category.
The increase in available mental health records could be a factor in the increasing number of firearm transactions that have been denied based on these records. According to FBI data, the number of firearm transactions that were denied based on mental health records increased from 365 (or 0.5 percent of 75,990 total gun purchase denials) in 2004 to 2,124 (or 1.7 percent of 123,432 total gun purchase denials) in 2011. According to NICS Section officials, the vast majority of these denials were based on mental health records in the NICS Index, but a small number could have been based on prohibiting information contained in other databases checked by NICS (e.g., criminal history records in the III noting a court finding of incompetence to stand trial).

DOJ and States Reported That Technological, Legal, and Coordination Challenges Hindered States’ Ability to Make Mental Health Records Available

Technological Challenges

DOJ and state officials we met with identified technological challenges to making mental health records available to NICS, such as updating aging computer systems and integrating existing record systems. DOJ officials noted that technological challenges are particularly salient for mental health records because these records originate from numerous sources within the state—such as courts, private hospitals, and state offices of mental health—and are not typically captured by any single state agency. For example, records that involve involuntary commitments to a mental institution typically originate in entities located throughout a state and outside the scope of law enforcement, and therefore a state may lack processes to automatically make these records available to the FBI. In addition, 6 of the 16 states that applied for NARIP grant funding in 2011 cited technology barriers as a reason for requesting funding in their grant applications.
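As a quick arithmetic check, the denial shares quoted above follow directly from the counts; the numbers below are simply the FBI figures cited in this section, recomputed.

```python
# FBI figures cited in the text: (mental-health-based denials, total denials).
denials = {
    2004: (365, 75_990),
    2011: (2_124, 123_432),
}
for year, (mh, total) in denials.items():
    print(f"{year}: {mh:,} of {total:,} denials = {mh / total:.1%}")
# 2004: 365 of 75,990 denials = 0.5%
# 2011: 2,124 of 123,432 denials = 1.7%
```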
For example, Virginia received a NARIP grant to, among other things, equip district courts with an automated means to transmit mental health records to the FBI and replace its previous manual and labor-intensive process. Five of the 6 states we reviewed also noted that technological challenges impaired their ability to identify, collect, and provide mental health records to NICS. Minnesota officials said that it is difficult to share historical records involving involuntary commitments to a mental institution since they are paper records that cannot be automatically transmitted. The 1 state in our sample that did not cite technology as a challenge, Texas, already had an automated system in place to facilitate the transmission of mental health records. DOJ officials were aware that states faced technological barriers to making mental health records available and noted that NARIP grants help states address these challenges. Additionally, BJS has made improving the submission of mental health records, including efforts to automate the reporting of such information, a funding priority of the NARIP grant program for 2011 and 2012.

Legal Challenges

Addressing state privacy laws is a legal challenge that some states reported facing in making mental health records available to NICS. Specifically, officials from 3 of the 6 states we reviewed said that the absence of explicit state-level statutory authority to share mental health records was an impediment to making such records available to NICS. For example, Idaho officials reported deferring to the protection of individual privacy until clear state statutory authority was established to allow state agencies to make mental health records available.
Idaho enacted a law in 2010 requiring, among other things, that the state’s Bureau of Criminal Identification obtain and transmit information relating to eligibility to receive or possess a firearm to NICS, and the state was preparing to submit its first set of mental health records to the NICS Index in the first quarter of 2012. Overall, 20 states have been identified by the FBI as having enacted statutes that require or permit agencies to share their mental health records, and some of these states in our sample reported an increase in record availability as a result. For example, Texas enacted a law in 2009 requiring court clerks to prepare and forward certain types of mental health records to the state record repository within 30 days of specified court determinations. Since enactment of this law, Texas officials said that the number of mental health records provided to NICS increased by about 190,000 records. Some state officials also cited the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule (45 C.F.R. Part 164) as an impediment to making such records available to NICS. To help address these types of challenges as they relate to HIPAA, DOJ has asked HHS to consider a potential change to the Privacy Rule that would specifically allow disclosure of mental health records for NICS reporting purposes. According to a senior HHS health information privacy policy specialist, HHS is in the process of reviewing this issue and has not yet made a decision to pursue a proposed change to the Privacy Rule.

Coordination Challenges

DOJ and state officials we met with said that states often faced challenges in getting relevant state agencies to collaborate, particularly because many mental health records reside in entities—such as hospitals and departments of mental health—that are typically not connected to the law enforcement agencies that make the majority of records available to NICS.
For example, according to the State of Illinois’ Office of the Auditor General, approximately 114,000 mental health records were maintained in state nursing homes, private hospitals, state mental health facilities, and circuit courts in 2010. However, because of coordination and other challenges, only about 5,000 records (or 4.4 percent) were made available to the FBI. In addition, 2 of the 6 states in our sample reported that deciding which state agency would act as the liaison to the FBI was challenging because of limited staff resources and technological requirements needed to make records available. New Mexico, for instance, has not yet assigned responsibility to an agency to be the primary entity for making mental health records available to the FBI, despite discussions surrounding this issue over the past 4 years. New Mexico’s Administrative Office of the Courts has recently provided records on approximately 6,000 individuals who were committed to a mental institution directly to the FBI for NICS checks, but state officials have not yet coordinated their efforts and decided collectively on what entity will be responsible for providing such records in the future because of the resources needed to do so. DOJ acknowledged that complete reporting of state records to national databases can best be achieved through the cooperative efforts of all entities that create the records. Underscoring the importance of collaboration, BJS has recommended that NARIP grant recipients use a portion of grant funds to establish NICS Record Improvement Task Forces, to include representatives from the central record repository and other agencies. According to DOJ, task forces with wide representation can provide a forum for exploring possible options for improving the quality, completeness, and availability of NICS records. 
Idaho officials, for example, noted that forming such a multijurisdictional working group was extremely helpful for learning which state entities housed relevant mental health records. DOJ officials also said that several states overcame coordination challenges by conducting outreach to entities involved with providing mental health records and educating them about the importance of making such records available to NICS. Texas Department of Public Safety officials reported collaborating closely with courts by distributing training and guidance documents to ensure that the courts understood the types of mental health records that should be made available to NICS. The guidance materials also include an outline of the importance of mental health records for background checks, the types of cases to report, instructions on how to input relevant records into Texas’s record system, and a frequently asked questions document for reference.

States Generally Found DOJ Assistance with Providing Mental Health Records Helpful

Grants

NARIP grants were established to improve the completeness, automation, and transmittal of records used during NICS background checks. Since its inception 3 years ago, the grant program has awarded approximately $40 million to 14 states. DOJ has placed an emphasis on increasing the submission of mental health records as part of the 2011 and 2012 grant solicitations. Of the 16 NARIP grant applicants in 2011, 11 applicants requested funding for mental health record-related activities. In addition, 6 of the 16 NARIP grant applicants in 2011 requested funds for technology-related improvements to increase the submission of mental health records, including updating information system hardware and automating the record submission process. Officials from 2 of the 3 states in our sample that received NARIP grants reported using a portion of the funds to address technological barriers to submitting mental health records.
For example, Idaho officials reported using NARIP grant funds the state received in 2010 to create a new transmission protocol to provide relevant data related to state mental health records. State officials said these grants were instrumental in funding the programming, testing, and software upgrades needed to create the database. State officials have also reported using NARIP grants to research which state agencies house mental health records that could be used during NICS background checks. Specifically, 6 of the 16 NARIP grant applicants in 2011 requested funding to conduct assessments to identify where relevant mental health records reside within the state in order to improve their efforts to provide such records for a NICS check. In addition to NARIP grants, DOJ administers the National Criminal History Improvement Program and JAG Program, which can also be used to support state efforts to improve mental health records, among other things. For example, in 2008, we reported that from fiscal years 2000 through 2007, almost $940,000 in NCHIP grants were specifically targeted to improve the availability of mental health records for use during NICS background checks. All 6 states in our sample have received NCHIP grants, but officials in all of these states said they did not use the funding to improve the submission of mental health records. Rather, these states used NCHIP funds for activities regarding criminal history records in state repositories. For example, 1 state in our sample used 2011 NCHIP funds to reconcile approximately 60,000 open arrest records with their corresponding dispositions. The JAG Program also supports information-sharing programs in criminal justice entities. For example, in 2009, states spent $89.6 million (7 percent of total funds for that year) on information-sharing projects, such as initiatives to increase records provided to NICS.
An additional $33 million (3 percent) was spent on criminal records management upgrades and other technology for information sharing.

DOJ also offers in-state training sessions to educate state agencies about NIAA-related topics, including issues related to the submission of mental health records. For example, since enactment of NIAA, the NICS Section has reported conducting presentations to law enforcement officials responsible for providing records to NICS. Specifically, DOJ reported conducting in-state trainings and presentations in 7 states. These presentations covered the definition of the mental health prohibitor and how to enhance state plans regarding the submission of mental health records. Further, 3 of the 6 states in our sample reported using these training presentations to provide information about the mental health prohibitor. For instance, at Texas’s request, the NICS Section held presentations for judges, clerks, and other relevant parties to answer questions about the types of mental health records requested under NIAA. Additionally, Washington state officials were complimentary of the NICS Section personnel who travel once a year to eight different locations within the state to provide training on the federal prohibitors to their law enforcement agencies.

DOJ also hosts and sponsors conferences in which relevant DOJ components present information on numerous topics, including those related to making more mental health records available. For example, DOJ’s first conference regarding NIAA provisions—the NIAA Implementation Conference—was held in 2009, and DOJ officials reported that officials from almost every state attended. The NICS Section also sponsors annual Report, Educate, Associate Criminal Histories (R.E.A.C.H.) conferences, which focus on improving information sharing between the NICS Section and external agencies.
The NICS Section also sponsors annual NICS User Conferences for states that conduct their own NICS checks, which cover topics such as the federal firearm prohibitors and how to submit records to the NICS Index. Officials from all 6 states in our sample had attended at least one DOJ-sponsored conference and generally found these events to be helpful for learning about various aspects of the NIAA, such as how to make certain mental health records available to the FBI. Additionally, an official from 1 of the 6 states in our sample said that the documents distributed by DOJ were particularly useful and that officials in the state referenced them regularly. In 2011, DOJ began sponsoring annual regional NIAA conferences, in conjunction with the National Center for State Courts and SEARCH. These events are intended to provide a forum for states to share their experiences in identifying, collecting, automating, and submitting records. For example, at the December 2011 regional NIAA meeting, Oregon officials shared their state’s experience in developing a system to share mental health records, including an explanation of which state agencies collaborated to share such records and how the state used NARIP grant funds to automate and transmit records. Two of the 6 states in our sample also reported benefiting from learning about other states’ experiences in collecting and submitting mental health records. Specifically, officials in Idaho and Washington noted that hearing about other states’ experiences during a regional conference provided them with technical advice on how to create linkages between existing mental health record systems and helped them determine where relevant records resided. In some cases, the sharing of experiences with mental health records led to sustained relationships and networks among states.
For example, following their presentation to several northeastern states at a 2011 NIAA regional conference, New York officials reported sharing best practices and lessons learned with Connecticut and New Jersey officials. According to DOJ officials, five regional conferences have been held, with a total of 38 states attending one of these meetings.

DOJ Has Not Identified and Disseminated Promising Practices of Successful States Nationally

Although hearing about the experiences of other states during regional conferences has been helpful for some states in making mental health records available to NICS, DOJ has not yet identified promising practices employed by all states or shared this information nationally. Officials from all 6 states in our sample noted that the sharing of promising practices among states may be helpful to, among other things, guide future policy decisions and spur ideas on how to improve reporting efforts. Further, BJS officials acknowledged that there are benefits to sharing such practices and said that learning about the experiences of other states can introduce state officials to new ways of approaching challenges, such as how to address technology challenges, legal barriers, and coordination issues. Consistent with internal control standards, sharing such practices nationally could assist states in various phases of their efforts to make mental health records available and address barriers they face in providing these records (GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999)).

States Generally Are Not Sharing Unlawful Drug Use Records That Are Not Associated with an Arrest or Conviction

States’ overall progress in providing unlawful drug use records—which encompasses both criminal and noncriminal records—is generally unknown; however, available data indicate that most states are not providing noncriminal records. DOJ’s overall efforts to improve criminal history records have assisted state efforts to provide unlawful drug use records.
DOJ has issued guidance related to the unlawful drug use records that are noncriminal, but states in our sample raised concerns about providing these kinds of records.

State Progress in Providing Unlawful Drug Use Records Is Generally Unknown; Available Data Suggest Most States Are Not Providing Noncriminal Records

The states’ progress in providing unlawful drug use records—which encompasses both criminal and noncriminal records—is generally unknown, but available data suggest that most states are not providing the noncriminal records. According to NICS Section officials, the majority of unlawful drug use records that states make available for NICS checks are criminal records—such as those containing convictions for use or possession of a controlled substance—and are made available to NICS through the III. The officials noted, however, that these criminal records cannot readily be disaggregated from the over 60 million other criminal history records in the database because there is no automatic process to identify subsets of records within the III in each prohibited category. Three of the 6 states in our sample—Idaho, Washington, and New York—were able to provide data on the number of criminal drug use records they made available to NICS, which showed 14,480 records, 553,433 records, and 1,659,907 records as of January 2012, January 2012, and December 2011, respectively. The states’ progress in sharing unlawful drug use records that are noncriminal is also generally unknown because, per regulation, these records are retained in the NICS Index for only 1 year after the date of the operative event (e.g., the date of the most recent drug-related arrest in the case of an individual with multiple drug-related arrests). According to NICS Section officials, because these records are routinely added and deleted from the NICS Index, the overall trend in the states’ efforts to provide these records is difficult to discern.
Available data suggest, however, that most states are not making these records available. According to FBI data, on May 1, 2012, the NICS Index contained a total of 3,753 unlawful drug use records that are noncriminal, of which about 2,200 came from Connecticut. On the other hand, also on that date, 30 states, the District of Columbia, and all five U.S. territories had not made any of these records available. DOJ officials agreed that most states generally are not making these records available. From 2004 to 2011, an increasing number (but a lower percentage) of firearm transactions were denied based on unlawful drug use records that states make available to NICS. According to FBI data, the number of firearm transactions that were denied based on unlawful drug use records (both criminal and noncriminal) increased from 5,806 in 2004 (7.6 percent of 75,990 total denials) to 7,526 in 2011 (6.1 percent of 123,432 total denials). FBI data do not differentiate between transactions that were denied based on criminal versus noncriminal drug use records, but NICS Section officials noted that the vast majority of denials have been based on criminal records.

DOJ’s Efforts Assist States in Providing Unlawful Drug Use Records

DOJ efforts to help states address challenges in providing criminal drug use records have increased the ability of states to provide such records. Officials from the states in our sample identified several challenges related to criminal drug use records. For example, officials from 3 of the 6 states noted that having drug-related records without fingerprints was a challenge because fingerprints are needed to send these records to state repositories and the III. Additionally, officials from 3 of the 6 states reported that it was difficult to match arrest records from drug offenses to their corresponding dispositions, making it sometimes challenging to determine if an individual should be prohibited under federal law from receiving or possessing a firearm.
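The "increasing number but lower percentage" point above can be verified the same way from the cited counts; this is again just a recomputation of figures quoted in the text, not independent data.

```python
# FBI figures cited in the text: (drug-use-based denials, total denials).
drug_denials = {2004: (5_806, 75_990), 2011: (7_526, 123_432)}
shares = {year: d / t for year, (d, t) in drug_denials.items()}

assert drug_denials[2011][0] > drug_denials[2004][0]  # count rose...
assert shares[2011] < shares[2004]                    # ...while the share of all denials fell

for year, share in shares.items():
    print(f"{year}: {share:.1%}")
# 2004: 7.6%
# 2011: 6.1%
```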
DOJ has engaged in various efforts to address state challenges in providing criminal history records—including unlawful drug use records— and officials from the states in our sample were generally satisfied with the assistance they have received. Using NCHIP grants—which are intended to help states enhance the quality, completeness, and accessibility of criminal history records—states have purchased systems to automate criminal history records, researched arrest records to reconcile them with their corresponding dispositions, and performed audits of local law enforcement agencies’ criminal history record systems. Further, during a NIAA conference, officials from 1 state reported using NARIP grants to develop software that automatically linked arrests to their corresponding dispositions, which allowed the state to move away from paper-based files and ultimately resulted in the state making more criminal records available for NICS checks. Additionally, the National Center for State Courts, under a DOJ grant, is managing a project to nationally disseminate guidance, information, and state best practices to help ensure the completeness of criminal history records located in state repositories. Specifically, the center is designing an online repository of resources for states to improve their reporting of criminal dispositions and arrests, including those that involve unlawful drug use. A center official reported that the project is scheduled to be completed by April 2013. NARIP funding has also been used to address challenges to providing criminal drug use records. For example, Idaho state officials reported using NARIP grants to replace aging fingerprint-scanning technology in order to make fingerprints more readily available for criminal disposition records. 
DOJ Has Issued Guidance on Noncriminal Unlawful Drug Use Records, but States Have Concerns about Providing These Records

DOJ has taken steps to help clarify the regulatory definition of unlawful drug use records that are noncriminal for states. For example, DOJ has made copies of the regulation that identifies the scope of these records for NICS checks available on its websites. DOJ officials have also conducted presentations at NIAA regional conferences that illustrated examples of records that fall within the scope of the definition of noncriminal drug use records. Also, in January 2011, the NICS Section provided written guidance to the 13 states that conduct their own federal firearm background checks. This guidance identifies a variety of scenarios from which an inference of current unlawful drug use may be drawn, thereby constituting a prohibition. Although the majority of the document focuses on inferences that can be drawn from criminal history records, there are records outside a criminal history itself that can support such an inference—for example, a positive drug test for persons on active probation. Despite this guidance, states generally are not making noncriminal drug use records available to NICS. For example, officials from 4 of the 6 states in our sample reported that they were uncomfortable with the amount of judgment law enforcement officials were being asked to make outside of an official court decision regarding an individual's potentially prohibited status. Officials from 2 of these 4 states also noted that making these kinds of judgments could present a legal risk to the state and could result in lawsuits from individuals prohibited from receiving or possessing a firearm who had not been convicted of crimes.
For example, Minnesota officials explained that drug tests and other ways to infer drug use or possession could be inaccurate and individuals could be prohibited from receiving or possessing firearms based on the wrong information and without due process. Officials from 5 of the 6 states we reviewed reported other challenges in making the noncriminal subset of unlawful drug use records available to NICS. For example, officials from New Mexico and Minnesota were unaware of certain types of records that could be made available under the ATF regulatory language regarding making an inference of current drug use, such as records indicating a failed drug test for a controlled substance. Officials from Texas and Minnesota reported that their states did not have centralized databases that would be needed to collect these records. For example, officials from Minnesota noted that failed drug test results for individuals on active probation are kept at each individual's supervisory agency and there is no centralized system to gather these and provide them to NICS. Officials from Texas and Washington noted that new state laws permitting agencies to share these types of records would need to be established in order to overcome conflicts with their state privacy laws. DOJ officials agreed that states generally are not making unlawful drug use records that are noncriminal available to NICS. Pursuant to ATF regulations, these records may be utilized only before their period of potential use "expires"—that is, records relating to an operative event, like an arrest, may be used only if the event occurred within the past year. DOJ officials noted that making records available—particularly those that are removed from the system 1 year after the date of the operative event—is challenging and would require a great deal of effort, time, and resources on the part of both states and the federal government.
The officials added that capturing these records has not been a priority for DOJ or the states because current efforts have focused primarily on collecting mental health records and records on misdemeanor crimes of domestic violence, records that do not expire. DOJ officials also stated that despite the department's efforts to train states and provide guidance, the scope of unlawful drug use records that are noncriminal is difficult for states to interpret.

DOJ Has Not Administered Reward and Penalty Provisions; Sample States Had Mixed Views on whether Provisions Provided an Incentive

DOJ has not administered NIAA reward and penalty provisions because of limitations in state record estimates, which are to serve as the basis for implementing the provisions. Officials from the states in our sample had mixed views on the extent to which the act's reward and penalty provisions—if implemented as currently structured—would provide incentives for the state to make more records available to NICS.

States Face Challenges Developing Record Estimates, Limiting Their Usefulness as a Basis to Implement Rewards and Penalties

Limitations in state record estimates—which are estimates of the number of applicable records states possess that are or could be made available for use during NICS checks—have hindered DOJ's ability to administer the NIAA reward and penalty provisions. These provisions are intended to provide incentives for states to share greater numbers of records by rewarding states that provide most or all of their records and penalizing states that provide few of their records. The act further specifies that the basis for the rewards and penalties should be state record estimates and directs DOJ to develop a methodology for determining the percentages of records states are making available. The National Center for State Courts—with which BJS contracted to review the reasonableness of the state record estimates—identified numerous limitations with the estimates.
For example, the center found that states often lacked technology to query data for the record estimates and could not access many records because they were lost, in a legacy system that was no longer available for making inquiries, or were paper files that were not stored in a manner practical for searching. The center also found that many states lacked the ability to report certain records— such as mental health adjudications—because of state statutory issues, could not distinguish criminal unlawful drug use records from other records, or had deleted relevant records. According to BJS officials, states face challenges in accurately estimating both the total number of unique records that reside at agencies around their states and the total number of these records that are made available electronically to NICS. The officials added that most state data systems were created and operate for the primary purpose of generating an individual’s record of arrests and prosecutions. Therefore, these systems do not have basic file analysis capabilities—such as the ability to search text fields for key terms—which would allow the states to search for and count certain types or categories of records. The officials noted that it is very hard to affect or change the design limitations of existing data systems and that making these kinds of changes is costly. Further, they said that changing state data systems for the purpose of counting or estimating records was not something states would need or want since most of the technical improvements states make to their systems relate to data input—such as increasing the automation of criminal records. BJS officials were not certain the challenges with developing record estimates could be overcome, and the department is not collecting record estimates for 2012. 
Although BJS has not finished analyzing the third year of state record estimates, the officials said they did not know if the state record estimates, as currently collected, would ever reach the level of precision that would be needed to administer the NIAA reward and penalty provisions. The officials noted that estimates in some of the categories—such as felony convictions and mental health—were possibly usable as the basis for rewards and penalties and that these data are more reliable than data collected in other categories. DOJ and officials from 1 of the states in our sample said that there were some benefits to completing the record estimates. For example, based in part on New York’s efforts to estimate the number of records on misdemeanor crimes of domestic violence, New York officials reported that the state passed a statute to recognize such crimes as their own category of misdemeanor, which could allow the courts to distinguish such crimes for submission to NICS. Nonetheless, in its most recent analysis of state record estimates, the National Center for State Courts reported that much remains unknown about whether this data collection exercise actually generated any benefits, such as heightened cooperation or improvements in the number of records states make available to NICS. The officials noted that after BJS finishes reviewing the state record estimates that it collected in 2011, BJS plans to convene focus groups with states and other stakeholders to determine which aspects of the record estimate data collection process have been useful for states and which have not. BJS will also consider what, if any, additional data it will collect from states in the future and whether it can develop a workable estimate methodology. However, until BJS establishes a basis on which rewards and penalties can be implemented, the agency will be limited in its ability to carry out these provisions of the NIAA. 
Sample States Had Mixed Views on whether Rewards and Penalties Provide Incentives to Submit More Records

Officials from the 6 states in our sample provided mixed views on the extent to which the NIAA reward and penalty provisions, if implemented as currently structured, would provide incentives for their states to make more records available for NICS checks. With respect to the NIAA reward provision, officials from 1 state, for example, said that the waiver of the 10 percent matching requirement for NCHIP grants would be helpful and added that there have been years when the state has not applied for NCHIP funds because of the cost match. With respect to the NIAA penalty provision, the officials added that the penalty—which in their state would have been over $100,000 in JAG Program funding in 2011—would also motivate them to make more records available. Officials from another state agreed that the potential impact of the penalty initially was an incentive to share more records, but added that this has become less of a motivator since DOJ has not yet administered the penalty provision. Officials from the remaining 4 states were either generally unaware of the NIAA reward and penalty provisions or how they would affect state efforts to make more records available, or reported that they were a moderate to no incentive. BJS officials reported that they believed the NIAA reward and penalty provisions provided little to some incentive for states to make records available. For instance, BJS officials said the reward provision (i.e., the waiver of the 10 percent NCHIP match) likely provided little incentive for states to make more records available because states could use or apply personnel costs (something they have to pay for regardless) to satisfy the cost match requirement. Based on the amount of the 2011 grant awards, the waiver of NCHIP's 10 percent matching requirement would have resulted in an average savings of $29,000 in matching funds per state.
In terms of penalties, BJS officials said the penalty provision (i.e., a 3 to 4 percent reduction of JAG Program funding) could provide an incentive to states to some extent, but that states faced significant obstacles in making records available. Specifically, the penalty of 3 to 4 percent of JAG Program funding could have resulted in an average grant reduction of up to about $131,000 to up to about $176,000 per state in 2011. Overall, BJS officials believed that public safety interests were what motivated states to make records available, but had not yet determined the extent to which the rewards and penalties, if administered as currently structured, could provide incentives to states. When asked whether different incentives would better motivate states, the officials suggested that relaxing the restrictions on which states are eligible to receive NARIP grant funding could make funds available to more states and in turn encourage more record sharing. The officials said that given the financial condition of most state governments, positive financial incentives (such as increasing the amount of NIAA grant funding) were the best way to encourage states to take action. The NIAA reward and penalty provisions are intended to provide incentives for states to make more records available to NICS, but the provisions—as currently structured—might not provide the incentives that were envisioned by the act. Our prior work shows that having the right incentives in place is crucial for operational success, and modifications to the current provisions might provide better incentives for states to make records available for NICS checks.

Nineteen States Have Programs to Relieve Federal Firearms Prohibitions for People with Precluding Mental Health Adjudications or Commitments

Nineteen states have received ATF certification of programs that allow individuals who have been prohibited from possessing firearms due to a mental health adjudication or commitment to seek relief from the associated federal firearms prohibition (disability).
Grant eligibility was the primary motivation for states to develop these relief programs, but reduced funding may result in fewer new programs.

Nineteen States Allow Individuals to Seek Relief from Their Firearms Prohibition, Making These States Eligible for Grant Funding

From January 2009 through June 2012, ATF certified programs in 19 states that allow individuals with a precluding mental health adjudication or commitment to seek relief from the associated federal firearms prohibition, thus making these states eligible to receive NARIP grant funding. ATF certifies such relief from disabilities programs based on the requirements contained in the NIAA. ATF developed a minimum criteria checklist that specifies nine conditions that a state's relief program must satisfy and certifies states' programs based on these requirements. For example, a state's program must be pursuant to state statute and include due process requirements that allow persons seeking relief the opportunity to submit evidence to the lawful authority considering the relief application. This is to include the circumstances of the original firearms disability (the circumstances that resulted in the individual being prohibited from possessing firearms), the applicant's mental health record and criminal history records, and the applicant's reputation as developed through character witness statements, testimony, or other character evidence. The reviewing authority must find that the applicant will not be likely to act in a manner dangerous to public safety and that granting relief would not be contrary to the public interest. State data collected from September 2011 through May 2012 show that 6 of the 16 states that had a certified relief from disabilities program as of May 2012 reported that they had received applications from individuals seeking relief from their firearms disability. As shown in table 1, these states reported receiving 60 applications, 26 of which were approved.
DOJ officials reported that most states that develop relief from disability programs do so to be eligible for NARIP funding, and officials from 10 of the 16 states that had ATF-approved relief programs as of May 2012 reported that eligibility to receive NARIP funds greatly motivated their state to pursue developing such a program. Officials from 5 of the remaining states said NARIP eligibility provided some or a moderate incentive, and officials from 1 state said it provided no incentive. Given the reduced amount of NARIP funding for fiscal year 2012 (from $16.1 million in 2011 to $5 million in 2012), it is not clear how much of an incentive NARIP funding will be for the remaining states to pursue passing such legislation. Table 2 provides NARIP grant awards by state from fiscal year 2009 to fiscal year 2011. Three of the 6 states we reviewed did not have a certified relief from disabilities program. Officials in 1 of these states (whose relief program did not meet the federal standard for certification) said that NARIP funding was an incentive to establish a relief program but that the smaller amount of NARIP funding available for fiscal year 2012 is one reason why the state was not willing to extend the effort to revise its relief program to meet the federal standard in the future. Officials in the second state, whose program was pending ATF review, said that NARIP grant eligibility provided little incentive to develop a relief from disabilities program. Officials in the third state reported that they were not aware of the NARIP grant program, and accordingly, it did not affect any decisions regarding developing a relief from disabilities program. After the passage of the NIAA, DOJ sent a letter to every state's governor explaining the relief from disabilities program provision and the minimum criteria a state's program would have to meet for ATF certification.
DOJ officials also gave presentations at state conferences and regional meetings where they discussed the relief program criteria, explained that a certified relief from disabilities program is a requirement to be eligible to receive NARIP grant funding, and provided points of contact for states to call if they needed technical assistance with their draft legislation. State officials generally had positive feedback regarding the technical assistance they received from ATF. For example, officials in Arizona said that ATF assisted the state with drafting language to amend a state statute and that this was precisely the assistance the state needed. New Jersey officials added that throughout the development of their draft relief provision legislation, ATF reviewed proposed amendments and ensured that they complied with the NIAA standards prior to the state advancing such legislation through the state legislature. State officials reported various challenges in developing relief from disabilities programs, including managing the concerns of advocacy groups and modifying state judicial processes to meet the federal standard, such as the requirement to provide for de novo judicial review. Officials from a state that had submitted draft relief legislation to ATF and was awaiting a determination said that managing the competing interests of various advocacy groups required a great deal of time and negotiation and was a challenge to their efforts to pass relief legislation. The officials noted that if ATF did not approve their legislation, they were not sure they would propose a new program in a future legislative session. Officials from 2 states that had successfully developed relief programs said that competing pressures came from groups representing the families of victims of gun violence, gun rights advocacy groups, and groups from the mental health community that had privacy and other concerns.
Other officials from a state without a relief from disabilities program did not believe it was politically feasible in their state to have such a program and had therefore not sought to develop one. Officials from 6 of the 16 states that had ATF-approved relief from disabilities programs as of May 2012 noted that managing the competing interests of advocacy groups was a challenge. For instance, officials from 1 state reported that the National Rifle Association, other gun rights advocacy groups, and members of the mental health community were all part of the process of drafting relief legislation, which took considerable time and effort to meet the federal criteria. The officials added that other states seeking to develop relief from disabilities programs should ensure buy-in with the various interested parties before the relief provision gets to the legislative stages.

Conclusions

Sustained federal and state efforts to increase the comprehensiveness, timeliness, and automation of records that support NICS background checks are critical to helping enhance public safety and helping to prevent tragedies such as the Virginia Tech shootings. The national system of criminal background checks relies first and foremost on the efforts of state and local governments to provide complete and accurate records to the FBI. While many states have made little progress providing critical records for gun background checks, the substantial increase in mental health records coming mostly from 12 states serves to demonstrate the great untapped potential within the remaining states and territories. States reported finding DOJ's guidance, grants, and technical assistance useful, but DOJ has opportunities to provide additional support by identifying and sharing information on promising practices on what worked for the states that have made progress sharing mental health records as well as what lessons they have learned.
By identifying and distributing promising practices nationally, DOJ would be better positioned to assist states in the early phases of their efforts to make mental health records available, address barriers, and identify solutions to challenges those states face in this effort. The NIAA reward and penalty provisions are intended to provide incentives for states to make more records available to NICS, but our review suggests that the provisions might not be providing the incentives that were envisioned by the act. Given that record sharing with NICS on the part of states is voluntary, it is important that DOJ devise an effective implementation of the incentives, including a reasonable basis upon which to base those incentives. By obtaining state views, DOJ could determine the extent to which the current NIAA provisions provide incentives to states, whether modifications to the provisions would provide better incentives, or if alternative means for providing incentives could be developed and implemented. Further, DOJ would need to establish a basis on which these provisions or any future rewards and penalties approaches could be administered. Carrying out changes to the state record estimates, as they are defined in the NIAA, may require DOJ to develop and submit a legislative proposal for Congress to consider any alternatives. Nonetheless, an effective system of rewards and penalties could ultimately result in states providing more records for NICS background checks.

Recommendations for Executive Action

To help ensure effective implementation of the NIAA, we recommend that the Attorney General take the following two actions.
To further assist states in their efforts to make mental health records available for use during NICS background checks, work with states to identify and disseminate promising state practices nationally so that states in the early phases of their efforts to make such records available can address barriers and identify solutions to challenges faced in this effort. To help ensure that incentives exist for states to make records available for use during NICS background checks and that DOJ has a sound basis upon which to base incentives, determine (1) whether the NIAA reward and penalty provisions, if they were to be implemented, are likely to act as incentives for states to share more records, and (2) whether, given limitations in current state estimates, DOJ can develop a revised estimate methodology whereby states are able to generate reliable estimates as a basis for DOJ to administer the NIAA reward and penalty provisions. If DOJ determines either (1) that the reward and penalty provisions are not likely to provide incentives for states to share more records or (2) that it is unable to establish a revised methodology upon which to administer the reward and penalty provisions, DOJ should assess if there are other feasible alternatives for providing incentives or administering the provisions and, if so, develop and submit to Congress a legislative proposal to consider these alternatives, as appropriate.

Agency Comments

We provided a draft of this report for review and comment to DOJ. The department provided written comments, which are summarized below and reprinted in appendix V. DOJ agreed with both of our recommendations and identified actions it plans to take to implement them. DOJ also provided us with technical comments, which we incorporated as appropriate. DOJ agreed with our recommendation that the department identify and disseminate the promising practices of states in making mental health records available for use during NICS background checks.
The department noted that BJS is collaborating with other relevant DOJ components to identify state promising practices. DOJ added that once these practices have been identified, BJS will disseminate this information to the states through electronic mailing lists, the BJS website, other partner agency sites, and at relevant meetings and conferences. DOJ also agreed with our recommendation that the department (1) ensure that the NIAA reward and penalty provisions are likely to act as an incentive for states to share more records and (2) develop a methodology upon which to administer the reward and penalty provisions. In its response, DOJ noted that BJS has determined that the current methodology for reporting estimates of available records does not result in sufficiently reliable estimates on which to base rewards and penalties. In light of this conclusion, BJS decided not to collect a fourth year of estimates but instead to focus its efforts on identifying whether there are solutions that would allow BJS to use the estimates in the way the NIAA intended. BJS plans to convene a focus group of states to determine whether a better methodology can be developed and, if so, what attributes the revised methodology would entail. BJS also plans to use this same focus group to explore states' reactions to the reward and penalty provisions and to assess whether those provisions are likely to provide suitable incentives for the states to increase record sharing. We are sending copies of this report to the appropriate congressional committees, the Department of Justice, and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Carol Cha at (202) 512-4456 or [email protected], or Eileen Larence at (202) 512-6510 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

We assessed the progress the Department of Justice (DOJ) and states have made in implementing key provisions of the National Instant Criminal Background Check System (NICS) Improvement Amendments Act of 2007 (NIAA). Namely, we assessed the extent to which (1) states have made progress in making mental health records available for use during NICS background checks and DOJ could take actions to help states overcome challenges in providing these records; (2) states have made progress in making unlawful drug use records available for use during NICS background checks and DOJ could take actions to help states overcome challenges in providing these records; (3) DOJ has administered the reward and penalty provisions provided for in the act and whether selected states report that these provisions provide incentives to make records available to the Federal Bureau of Investigation (FBI); and (4) states are providing a means for individuals with a precluding mental health adjudication or commitment to seek relief from the associated federal firearm prohibition. To determine state progress in providing mental health and unlawful drug use records, we analyzed FBI data from fiscal years 2004 through 2011—about 4 years before and after the enactment of NIAA—on the number of such records that states made available for NICS background checks and on the number of gun purchase denials based on these records. To assess the reliability of these data, we questioned knowledgeable officials about the data and the systems that produced the data, reviewed relevant documentation, examined data for obvious errors, and (when possible) corroborated the data among the different agencies, including the Bureau of Justice Statistics (BJS) and the FBI's NICS Section.
We determined that the data were sufficiently reliable for the purposes of this report. To assess the extent to which DOJ is providing assistance to help states overcome challenges in sharing records, we reviewed guidance DOJ provided to states and attended a DOJ-hosted regional conference on the NIAA held in December 2011 in DuPont, Washington. Additionally, we analyzed all NICS Act Record Improvement Program (NARIP) grant applications to identify any limitations that states reported facing when providing records and the amount of funding states believed was necessary to overcome these limitations. We analyzed grant applications from 2009, 2010, and 2011—funded and unfunded—submitted by states, territories, and tribal entities. From this, we were able to identify areas of need for which 28 states, territories, and tribal entities requested funding. To assess the accuracy of mental health and unlawful drug use records made available for NICS checks, we analyzed the most recent round of triennial audits conducted by the FBI's Criminal Justice Information Services (CJIS) Audit Unit and the most recent set of proactive validation processes completed by the states. The most recent set of proactive validation processes involved 23 states and occurred from October 2010 through September 2011, and the audits were conducted in 42 states from 2008 through 2011. Further, we interviewed officials from a nonprobability sample of 6 states to discuss any challenges they faced in sharing mental health and unlawful drug use records and their experiences with DOJ assistance received to address those challenges. The states selected were Idaho, Minnesota, New Mexico, New York, Texas, and Washington.
We selected these states to reflect a range of factors, including the number of mental health records and unlawful drug use records made available for NICS checks, trends in making mental health records available to NICS over the past 3 years, whether the state received a grant under the NIAA, and whether the state has provided a state record estimate to the Bureau of Justice Statistics. While the results of these interviews cannot be generalized to all states, they provided insight into state challenges and state experience addressing those challenges. We also interviewed officials from various DOJ components with responsibility for managing and maintaining NICS records, which included the Bureau of Justice Statistics and the FBI’s CJIS division and NICS Section. We interviewed these officials to determine, among other things, the progress states made submitting mental health and unlawful drug use records, challenges states face in doing so, and the forms of assistance DOJ is providing to help states address these challenges. To determine the extent to which DOJ has administered the reward and penalty provisions of the act and whether these provisions provide incentives for state efforts to share records, we reviewed copies of state record estimates for 2009, 2010, and 2011 and analyzed two reports from the National Center for State Courts evaluating these estimates. We interviewed center officials about, among other things, the scope, methodology, and findings of the reports. We determined that the scope and methodology were sufficient for us to rely on the results. 
We also interviewed officials from the 6 states in our sample regarding (1) their incentives to make the requested records available to the FBI; (2) the extent to which the reward and penalty provisions of the act have incentivized their efforts; (3) their thoughts on whether the reward and penalty provisions would change their actions if they were carried out by DOJ; (4) if the current reward and penalty provisions do not provide incentives, what would; and (5) the impact, if any, of DOJ not carrying out the reward and penalty provisions. We also interviewed the Bureau of Justice Statistics about its efforts to administer the reward and penalty provisions provided for in the act and the basis for its decisions. Additionally, we discussed the Bureau of Justice Statistics’ position on the process for completing the state record estimates, challenges therein, and the effect of the act’s reward and penalty provisions on record sharing. Further, we interviewed officials with the National Center for State Courts and the National Consortium for Justice Information and Statistics (SEARCH) who were responsible for collecting state record estimates and evaluating their reasonableness. From these interviews, we learned more about the reasonableness of the estimates, changes to the estimate methodology over time, and next steps for the estimates. To determine the extent to which states are providing a means for individuals with a precluding mental health adjudication or commitment to seek relief from the associated federal firearms prohibition, we reviewed documentation on the minimum criteria for certification of a relief from disabilities program and the relief program requirements detailed in the NIAA. We also reviewed examples of state statutes that established relief from disability programs. 
Further, we interviewed officials in each of the 16 states with approved relief from disability programs as of May 2012 to learn about the challenges they faced developing their programs, motivation for developing the program, federal assistance they received, and information on the number of relief applicants to date, including how many applications had been received, approved, denied, or dismissed. We also relied on data collected through interviews with the previously mentioned 6 sample states, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), and other DOJ components to learn about, among other things, the motivation to establish relief programs, barriers to doing so, and DOJ resources available to help states. Three of the 6 states in our sample did not have certified relief from disability programs, and we asked officials in these states why they had not pursued developing a relief program; what barriers there were to establishing a relief program; and what, if any, federal assistance they would like for establishing a relief program. Additionally, we interviewed groups with an interest in relief from disability programs and NICS data more broadly, including the Brady Campaign, Gun Owners of America, Mayors Against Illegal Guns, and the National Rifle Association. We interviewed these groups to learn their positions on relief from disability programs, among other things.

Appendix II: State Options for Conducting Background Checks Using the National Instant Criminal Background Check System

States have three options for conducting NICS checks, referred to as full point of contact (full POC), non-POC, and partial POC.
As detailed in a 2008 report funded by DOJ, in full-POC states, Federal Firearms Licensees first query the NICS databases and related state files through one or more state organizations, such as local or state law enforcement agencies—known as points of contact—and then, if necessary, the staff of the POCs carry out any required follow-up research. In non-POC states, Federal Firearms Licensees contact the NICS Operations Center directly by telephone or via the Internet, and any required follow-up research is performed by the FBI’s NICS staff. In partial-POC states, Federal Firearms Licensees query NICS and state files through a point of contact for handgun purchases or permits but query NICS directly for long gun purchases, such as shotguns or rifles. Figure 2 shows the distribution of full-POC, partial-POC, and non-POC states. According to the DOJ-funded report, states elect POC or non-POC status for various reasons, such as a state’s attitude toward gun ownership, since many POC states have prohibiting legislation that is stricter than federal regulations. For example, Oregon has five statutorily prohibiting categories of misdemeanor convictions in addition to domestic violence—which is the only prohibiting misdemeanor required under federal law. Additionally, there may be an economic incentive for states to elect non-POC status, since implementing and operating a POC may cost a state more money than it can collect in fees charged to Federal Firearms Licensees for conducting background checks. For example, the authors reported that Idaho elected not to become a full-POC state because of the added expense of performing background checks for long gun purchases.
Appendix III: Minimum Criteria for Certification of Qualifying State Relief from Disabilities Programs

ATF provides guidance for states to follow in certifying that they have established a qualifying mental health relief from firearms disabilities program that satisfies certain minimum criteria under the NIAA. ATF officials said that they review states’ programs according to the following minimum criteria.

1. State law: The relief program has been established by state statute, or administrative regulation or order pursuant to state law.

2. Application: The relief program allows a person who has been formally adjudicated as a “mental defective” or committed involuntarily to a mental institution to apply or petition for relief from the Federal firearms prohibitions (disabilities) imposed under 18 U.S.C. § 922(d)(4) and (g)(4).

3. Lawful authority: A state court, board, commission, or other lawful authority (per state law) considers the applicant’s petition for relief. The lawful authority may only consider applications for relief due to mental health adjudications or commitments that occurred in the applicant state.

4. Due process: The petition for relief is considered by the lawful authority in accordance with principles of due process, as follows:
a. The applicant has the opportunity to submit his or her own evidence to the lawful authority considering the relief application.
b. An independent decision maker—someone other than the individual who gathered the evidence for the lawful authority acting on the application—reviews the evidence.
c. A record of the matter is created and maintained for review.

5. Proper record: In determining whether to grant relief, the lawful authority receives evidence concerning and considers the:
a. Circumstances regarding the firearms disabilities imposed by 18 U.S.C. § 922(g)(4);
b. Applicant’s record, which must include, at a minimum, the applicant’s mental health and criminal history records; and
c. Applicant’s reputation, developed, at a minimum, through character witness statements, testimony, or other character evidence.

6. Proper findings: In granting relief, the lawful authority issues findings that:
a. The applicant will not be likely to act in a manner dangerous to public safety; and
b. Granting the relief will not be contrary to the public interest.

7. De novo judicial review of a denial: The state provides for the de novo judicial review of relief application denials that includes the following principles:
a. If relief is denied, the applicant may petition the state court of appropriate jurisdiction to review the denial, including the record of the denying court, board, commission, or other lawful authority.
b. In cases of denial by a lawful authority other than a state court, the reviewing court has the discretion to receive additional evidence necessary to conduct an adequate review.
c. Judicial review is de novo in that the reviewing court may, but is not required to, give deference to the decision of the lawful authority that denied the application for relief.

8. Required updates to state and federal records: Pursuant to § 102(c) of the NIAA, the state, on being made aware that the basis under which the record was made available does not apply, or no longer applies:
a. Updates, corrects, modifies, or removes the record from any database that the federal or state government maintains and makes available to NICS, consistent with the rules pertaining to the database; and
b. Notifies the Attorney General that such basis no longer applies so that the record system in which the record is maintained is kept up to date.

9. Recommended procedure: It is recommended (not required) that the state have a written procedure (e.g., state law, regulation, or administrative order) to address the update requirements.
Appendix IV: Federal and State Efforts to Ensure the Accuracy and Timeliness of Records Used by the National Instant Criminal Background Check System

Accuracy

Under DOJ regulations, the FBI is responsible for validating and maintaining the data integrity of records in NICS—including mental health and unlawful drug use records—and does so through triennial on-site audits and proactive validation processes in each state that uses or contributes to the NICS Index. According to officials from the FBI’s CJIS Division, the CJIS Audit Unit conducts the on-site audits every 3 years, and the validation processes are held between each audit. The CJIS officials explained that the two other databases searched during a NICS background check—the Interstate Identification Index (III) and the National Crime Information Center (NCIC)—are also audited by the CJIS Audit Unit as part of its audit processes. In addition to DOJ’s efforts, several of the states in our sample reported having their own processes in place to ensure the accuracy of records they make available for NICS checks. During on-site triennial audits, the CJIS Audit Unit reports that it examines a random sample of NICS Index records for accuracy, validity, and completeness through a review of the documentation used to make the entry into the database. According to CJIS officials, the accuracy of a record is assessed by identifying errors in biographical information contained within a record (e.g., name or date of birth). Further, validity is ensured by determining if there is proper documentation to support the entry of the record into the NICS Index. CJIS officials said that the Audit Unit uses the completeness review to notify states if there is additional information that could be captured in their records to increase the likelihood of finding records of individuals prohibited from receiving or possessing a firearm within the database.
According to CJIS officials, CJIS Audit Unit auditors make a determination of compliance in the areas of validity and accuracy based on a percentage of total records reviewed. At the close of each audit, the CJIS officials said, the Audit Unit provides recommendations for any findings, as well as follow-up guidance, training, and assistance to the state. According to DOJ, approximately 7,100 NICS Index records from 42 states were reviewed in the most recent round of triennial audits. DOJ officials noted that none of the states were found to be out of compliance, but 6 states were found to have records where CJIS could not determine whether the records were appropriate for entry into the NICS Index. For example, the CJIS auditors could not determine whether some mental health records were from voluntary or involuntary commitments, which is important, since only involuntary commitments would be eligible for submission to the NICS Index. Additionally, several mental health records were found inappropriate for entry into the NICS Index because they belonged to deceased individuals. According to DOJ officials, there were no findings in the most recent round of triennial audits explicitly associated with unlawful drug use records. Officials noted that the unlawful drug use category has very few records overall and is the fourth lowest contributing category of the NICS Index. During a “proactive validation process,” CJIS officials reported that they ask states to validate their records in a manner similar to the way the CJIS Audit Unit conducts the triennial audits. The FBI NICS Section reported that it provides the state with a random sample of NICS Index records to validate and expects the state to examine these records’ documentation for accuracy, completeness, and validity. The NICS Section does not make any assessments of compliance during the proactive validation process, nor does it review any documentation used to validate the sample of records.
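The validity rate reported for a validation cycle is simple arithmetic over the counts of records validated and records found invalid. The following Python sketch is illustrative only (it is not part of the report) and uses the October 2010 through September 2011 counts cited in this appendix:

```python
# Reproduce the validity rate from the report's proactive validation counts.
records_validated = 13418   # total NICS Index records states validated
records_invalid = 1914      # records states reported as invalid

# Validity rate = share of validated records that were not invalid.
validity_rate = 100 * (records_validated - records_invalid) / records_validated
print(f"{validity_rate:.2f}%")  # 85.74%
```

This matches the 85.74 percent validity rate the report cites for that cycle.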
In the most recent set of proactive validation processes (October 2010 through September 2011), 13,418 records were validated by 23 states, and 1,914 records were reported by states to be invalid, resulting in an 85.74 percent validity rate. Of the 23 states that conducted these processes, 16 states examined records made available to the NICS Index’s Mental Health file and 2 states examined records provided to the Controlled Substance file. As with the triennial audits, however, the FBI could not disaggregate the audit findings for each prohibitor. In addition to DOJ’s audits, 2 of the states in our sample reported taking additional steps to ensure the accuracy of the mental health and unlawful drug use records they make available for NICS checks. For example, Texas Department of Public Safety officials reported that the Texas State Auditor’s Office conducts audits of the department’s criminal history records, including unlawful drug use records, every 5 years. Additionally, New Mexico officials cited the use of fingerprinting technology to automate and ensure the quality of fingerprints for criminal unlawful drug use records. None of the states in our sample has similar checks for accuracy in place for mental health records, but Idaho state officials noted that Idaho plans to conduct spot checks for accuracy on mental health records, similar to those currently done on criminal history records, once it uploads its first batch of these records to NICS.

Timeliness

DOJ does not set goals for states regarding the timeliness of when the state makes records, including mental health and unlawful drug use records, available to the NICS Index because officials said it is the responsibility of states to set their own goals in this area. According to the FBI’s NICS Section officials, the NIAA does not specify any timeliness goals for states, and the sharing of state records to the NICS Index is entirely voluntary.
Therefore, the NICS Section neither tracks how quickly states are making records available to the NICS Index nor provides any guidance regarding time frames between a precluding incident (e.g., involuntary commitment or a failed drug test) and when a NICS Index record should be made available. NICS Section officials also explained that some states are focused on older records that, if made available to the FBI, would preclude an individual from purchasing a gun today. Although these records may date back 20 years, the NICS Section officials view these states’ contributions as a positive effort and not a shortcoming. Some states have reported setting goals or statutory requirements for record timeliness. For example, 4 of the 6 sample states noted having state-specific timeliness requirements for the submission of mental health records. Texas and Washington state officials reported having statutory requirements to make mental health records available to the FBI ranging from 3 to 30 days. Idaho and Minnesota cited requirements that county clerks submit prohibiting mental health records to state repositories as quickly as they can upon completion of the hearing or “as soon as is practicable.” With regard to unlawful drug use records, 2 states reported having state-specific time frames for the submission of arrest records to the state repository (10-day time frame in Idaho and 1-day time frame in Minnesota).

Appendix V: Comments from the Department of Justice

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts above, Eric Erdman (Assistant Director), Claudia Becker, Tina Cheng, Katherine Davis, Michele Fejfar, Charlotte Gamble, Geoffrey Hamilton, Lara Miklozek, Christine Ramos, and David Schneider made key contributions to this report.

The 2007 Virginia Tech shootings raised questions about how the gunman was able to obtain firearms given his history of mental illness.
In the wake of this tragedy, the NICS Improvement Amendments Act of 2007 was enacted to, among other things, provide incentives for states to make more records available for use during firearm-related background checks. GAO was asked to assess the extent to which (1) states have made progress in making mental health records available for use during NICS checks and related challenges, (2) states have made progress in making unlawful drug records available and related challenges, (3) DOJ is administering provisions in the act to reward and penalize states based on the amount of records they provide, and (4) states are providing a means for individuals with a precluding mental health adjudication or commitment to seek relief from the associated federal firearms prohibition. GAO reviewed laws and regulations, analyzed Federal Bureau of Investigation data from 2004 to 2011 on mental health and unlawful drug use records, interviewed officials from a nongeneralizable sample of 6 states (selected because they provided varying numbers of records) to obtain insights on challenges, and interviewed officials from all 16 states that had legislation as of May 2012 that allows individuals to seek relief from their federal firearms prohibition. From 2004 to 2011, the total number of mental health records that states made available to the National Instant Criminal Background Check System (NICS) increased by approximately 800 percent—from about 126,000 to 1.2 million records—although a variety of challenges limited states’ ability to share such records. This increase largely reflects the efforts of 12 states. However, almost half of all states increased the number of mental health records they made available by fewer than 100 over this same time period. Technological, legal, and other challenges limited the states’ ability to share mental health records. 
To help address these challenges, the Department of Justice (DOJ) provides assistance to states, such as grants and training, which the 6 states GAO reviewed reported as helpful. DOJ has begun to have states share their promising practices at conferences, but has not distributed such practices nationally. By disseminating practices that states used to overcome barriers to sharing mental health records, DOJ could further assist states’ efforts. The states’ overall progress in making unlawful drug use records available to NICS is generally unknown because of how these records are maintained. The vast majority of records made available are criminal records—such as those containing arrests or convictions for possession of a controlled substance—which cannot readily be disaggregated from other records in the databases checked by NICS. Most states are not providing noncriminal records, such as those related to positive drug test results for persons on probation. On May 1, 2012, DOJ data showed that 30 states were not making any noncriminal records available. Four of the 6 states GAO reviewed raised concerns about providing records outside an official court decision. Two states also noted that they did not have centralized databases that would be needed to collect these records. DOJ has issued guidance for providing noncriminal records to NICS. DOJ has not administered the reward and penalty provisions of the NICS Improvement Amendments Act of 2007 because of limitations in state estimates of the number of records they possess that could be made available to NICS. DOJ officials were unsure if the estimates, as currently collected, could reach the level of precision needed to serve as the basis for implementing the provisions. The 6 states GAO reviewed had mixed views on the extent to which the reward and penalty provisions—if implemented as currently structured—would provide incentives for them to make more records available. DOJ had not obtained the states’ views.
Until DOJ establishes a basis for administering these provisions—which could include revising its current methodology for collecting estimates or developing a new basis—and determines the extent to which the current provisions provide incentives to states, the department cannot provide the incentives to states that were envisioned by the act. Nineteen states have received federal certification of their programs that allow individuals with a precluding mental health adjudication or commitment to seek relief from the associated firearms prohibition. Having such a program is required to receive grants under the 2007 NICS act. Officials from 10 of the 16 states GAO contacted said that grant eligibility was a strong incentive for developing the program. Reductions in grant funding could affect incentives moving forward.
Purpose

In the past decade, Medicare costs have risen at an average rate of over 10 percent per year. This continued growth has prompted stakeholders to seek methods to slow down or reduce the cost of services. Because managed care is viewed as less costly than fee-for-service health care, one proposal put forth is to expand managed care options for Medicare beneficiaries. Many are concerned, however, that cost reductions may result in poor quality of care provided to Medicare beneficiaries. Currently, the Medicare program reimburses only for care provided in health maintenance organizations (HMOs) and by the fee-for-service sector. If managed care options are expanded, however, stakeholders want to ensure that the quality of care furnished to Medicare beneficiaries does not suffer. Concerned about ensuring quality in managed care plans that have not participated in Medicare, the Chairman of the Subcommittee on Health of the House Committee on Ways and Means requested that GAO (1) discuss the present and future strategies of the Health Care Financing Administration (HCFA), which administers the Medicare program, to ensure that Medicare providers furnish quality health care, in both fee-for-service and HMO arrangements and (2) obtain experts’ views on desirable attributes of a quality assurance strategy if more managed care options are made available to Medicare beneficiaries. In meeting these objectives, GAO interviewed health care experts and HCFA officials, reviewed quality-related literature and HCFA documents, and drew on previous GAO work.

Background

HCFA oversees programs established to monitor quality of care in the Medicare program and ensures that corrective action is taken when problems are found. In 1965, passage of Medicare legislation turned the federal government into the nation’s single largest payer for health care and made it responsible for ensuring that beneficiaries receive good-quality care.
This legislation mandated specific programs to help ensure that medical services purchased on behalf of beneficiaries met minimum quality standards. Subsequent legislation created a medical record review program for ensuring that institutional providers meet minimum standards for delivering appropriate and technically correct care. Over time, HCFA’s quality assurance programs have changed in response to shifting utilization patterns created by new Medicare payment methodologies. Quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge. Most quality assurance programs used by regulators and providers use performance indicators to measure whether established standards have been met. Indicators can be classified according to those that measure (1) structure—the capacity of an institution, health system, practitioner, or provider to deliver quality health care; (2) process—physician and other provider activities performed to deliver the care; and (3) outcomes—the results of physician and provider activities. Today’s quality assurance strategies focus on continuous quality improvement, which encourages all providers to perform better. This differs from past strategies, which tended to focus more on individual providers’ substandard efforts.

Principal Findings

HCFA Has Two Main Quality Assurance Strategies

Medicare’s two main quality assurance strategies—certification and medical record review—are intended to help ensure that Medicare beneficiaries receive good-quality care. The first, HCFA’s certification strategy, includes two major programs: the Medicare Provider Certification Program, directed at fee-for-service institutional health care providers, and the Medicare HMO Qualification Program, directed at HCFA’s Medicare HMOs. Both focus on ensuring that providers meet minimum structural and process requirements.
GAO has frequently reported, however, that HCFA has failed to aggressively enforce the requirements of these two programs. HCFA’s medical review strategy uses peer review organizations (PROs) to monitor providers’ actions through reviews of individual medical records to determine patterns of poor or inappropriate care. If problems are identified, PROs work with providers to correct the problems and in extreme cases recommend a monetary penalty or suspension from the Medicare program. GAO concluded in 1991 and again in 1995 that HCFA had failed to systematically incorporate the results of PRO review into its HMO monitoring process.

HCFA’s Quality Assurance Program Generally Consistent With Experts’ Views

The experts GAO interviewed suggested four broad strategies for a federal quality assurance program:

Build on existing federal, state, and private efforts. These could include state initiatives, such as those patterned after the National Association of Insurance Commissioners’ (NAIC) model standards, government certification, private accreditation, and the use of PROs.

Use multiple strategies to evaluate care. In addition to accreditation, experts discussed the use of other performance measures, including outcome measures and patient satisfaction surveys. Until outcome measures are more fully developed, however, the experts suggested continued use of other, more traditional performance measures.

Encourage continuous quality improvement. Experts believe that continuous quality improvement programs can identify previously undetected problems, provide management with constructive feedback, and help providers and plans to improve their health services.

Make information about providers available to beneficiaries and others in a useful and understandable way. Experts stressed that the federal government should share with beneficiaries information gathered about quality of care to help beneficiaries in their health care purchasing decisions.
The experts expressed varying views on how to implement these strategies, particularly regarding the most appropriate types of performance data to collect and who should verify and evaluate the data once collected. Furthermore, they suggested reexamining federal quality assurance strategies for the entire spectrum of Medicare providers—from managed care organizations to fee-for-service providers. HCFA’s new Health Care Quality Improvement Program is generally consistent with the four broad strategies cited by the experts GAO interviewed. HCFA plans to modify its quality assurance strategies to emphasize outcomes and improvement in the quality of care. This program will build on HCFA’s current certification and medical record review quality assurance strategies. For example, HCFA is currently deemphasizing structure and process measures as the bases for its certification decisions and is preparing to implement outcome indicators for hospitals, nursing homes, and other provider types. Additionally, HCFA is reengineering the entire PRO program to incorporate continuous quality improvement concepts. PROs will deemphasize individual case review in favor of cooperative projects with hospitals and HMOs. HCFA officials are planning a beneficiary satisfaction survey designed to collect data from Medicare beneficiaries in HMOs. HCFA officials also have plans to provide Medicare beneficiaries with information to help them choose providers. The timetable for implementation remains unclear, however, because of perceived difficulties in presenting complex comparative data to consumers in an easily understood way.

Agency Comments

HCFA did not agree with GAO’s concerns about how well HCFA will implement its new quality assurance initiative and its plans for providing information to beneficiaries.
On the basis of GAO’s past studies of HCFA’s quality assurance implementation efforts, however, GAO remains concerned about whether HCFA will implement its new comprehensive program so that it detects and corrects poorly performing providers and improves all providers’ performance. In addition, GAO believes that some of the information now being collected by HCFA could be published and disseminated to Medicare beneficiaries. HCFA also provided specific technical comments, which we incorporated as appropriate.

Introduction

In the past decade, Medicare costs have risen at an average rate of over 10 percent per year. Medicare program benefit payments have increased from $69.5 billion in 1985 to an estimated $180 billion in 1995, prompting the Congress and others to search for ways to reduce the program’s rate of growth. One proposal put forth is to increase the managed care choices of Medicare beneficiaries who may be considering enrolling in a managed care plan. Although stakeholders believe that managed care organizations can furnish needed services to beneficiaries at less cost than fee-for-service arrangements, they are concerned about ensuring that those beneficiaries who enroll receive high-quality care.

Defining Quality of Care

According to the Institute of Medicine, quality of care is defined as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.” To evaluate whether quality of care is being provided to those individuals and populations, one or more of the following attributes usually are measured: appropriateness (patients receive the right care at the right time), technical excellence (providers furnish care in the correct way), accessibility (patients obtain care when needed), and acceptability (patients are satisfied with their care).
These attributes can be assessed by regulators, providers, or others using performance indicators that measure organizational structures, provider actions, and the results of care. Structure indicators measure the capacity of an institution, health system, practitioner, or provider to deliver quality health care. Having a safe and clean facility and a quality assurance program in place in an organization are examples of structure indicators. Process indicators measure what a provider does to and for the patient. Identifying and evaluating what diagnostic tests a physician performs when examining a patient with chest pain is an example of a process indicator. Outcome indicators measure the results of providers’ actions and are viewed as the most direct measure of the quality of care furnished because they represent the providers’ success. Examples of outcome indicators are mortality, complications resulting from surgery, patient satisfaction with the care received, and functional status.

Approaches to Developing Quality Assurance Programs

Assessing quality of care involves reaching consensus about standards and developing reliable and valid structure, process, and outcome measures. If the standards are not met, then providers and regulators must develop approaches to make it more likely that health care is furnished in ways that meet the standards. In the past, quality assurance programs focused on the care provided to individual patients. These programs tended to direct improvement activities toward individual providers identified as responsible for mistakes rather than encourage improvement in overall health care delivery. As a result, quality assurance efforts focused on a few providers, and the effects of these efforts were limited to a small percentage of the population. Furthermore, these programs often resulted in adversarial relations between the reviewers and those being reviewed.
In recent years, approaches to quality assurance have begun to focus on continuous quality improvement. Under this approach, attempts are made to identify and establish excellent care by focusing attention on inappropriate variation in the quality of care furnished to identified populations and eliminating the variations. This approach strives to make everyone’s performance better, regardless of prior performance. Other recent approaches to quality assurance have also included initiatives for collecting and disseminating information on performance measures. The Health Plan Employer Data and Information Set (HEDIS) is a major attempt to advance the collection of information on quality of care indicators. HEDIS indicators of health plan activities in five performance areas have been adopted by many large health care purchasers and some regulators to gauge the quality of care provided by health plans. Attempts to advance the dissemination of HEDIS and other information on quality of care include the publication of “report cards” by health plans intended to describe their performance measured against selected performance indicators. Employers are also providing quality of care performance information to their employees about health plans with which they contract. For example, the California Retirement System recently distributed a report containing both performance indicators about quality and member satisfaction survey results.

Federal Government’s Role in Ensuring Quality of Care for Medicare Beneficiaries

HCFA oversees programs established to monitor quality of care in the Medicare program and ensures that corrective action is taken when problems are found. In 1965, passage of federal Medicare legislation turned the federal government into the nation’s single largest payer for health care and made it responsible for ensuring that beneficiaries receive good-quality care.
This legislation mandated that the government establish specific programs to help ensure that medical services purchased on behalf of beneficiaries meet minimum quality standards. Over time, these programs have changed in response to shifting utilization patterns created by new Medicare payment methodologies. Initially, the mandated quality assurance programs focused on setting minimum structural standards for hospitals and other institutional providers to ensure that they could deliver care of acceptable quality. In 1986, in response to changes in hospital care delivery systems, HCFA modified its hospital certification program to include more process measures. Also, when the Medicare program began to contract with HMOs, structural standards to help ensure the capacity of HMOs to deliver care were established. Subsequent legislation created a medical record review program for ensuring that institutional providers meet minimum standards for delivering appropriate and technically correct care. This program, however, tended to focus more on the utilization of medical services than on the quality with which they were delivered. As a result of hospital and HMO reimbursement changes in the early 1980s intended to control rising Medicare costs, hospitals had the perverse incentive to admit patients unnecessarily and discharge them prematurely. Also, hospitals and HMOs had an incentive to skimp on costly care. To counter these incentives, the Congress redesigned the Medicare medical record review program to focus on detecting unnecessary hospital admissions and substandard care and mandated the inclusion of HMOs. In overseeing the quality of care furnished by Medicare providers, HCFA has a range of ways to address providers’ failure to meet established standards. Usually HCFA begins by requiring that providers take timely corrective action to address the identified deficiencies.
Ultimately, the agency has the authority to suspend Medicare payment to substandard providers.

Objectives, Scope, and Methodology

In April 1995, the Chairman of the Subcommittee on Health of the House Committee on Ways and Means asked us to examine ways to best ensure that professional, quality health care would be furnished across a broad spectrum of health plans. Currently, HCFA reimburses for care provided only by the fee-for-service sector or by HMOs. The Chairman requested that we (1) discuss HCFA’s present and future strategies to ensure that Medicare providers furnish quality health care in both fee-for-service and HMO arrangements and (2) obtain experts’ views on desirable attributes of a quality assurance strategy if more managed care options are made available to Medicare beneficiaries. To analyze HCFA’s present and future plans, we reviewed documents on HCFA’s efforts and plans, conducted interviews with HCFA officials, and drew on previous GAO reports. To obtain the views of experts, we conducted over 30 structured interviews with experts selected to represent a wide range of perspectives, including those of health plans, health care researchers, federal and state agencies, major purchasers of health care, and accrediting agencies. (See app. II for a list of the experts we interviewed and their affiliations.) We also reviewed literature about measuring the quality of health care, articles about major health care purchasers’ initiatives, and previous GAO reports on measuring provider performance. We presented initial findings from our work in testimony before the Subcommittee on July 27, 1995. Our work was performed between April and December of 1995 in accordance with generally accepted government auditing standards.

HCFA’s Medicare Quality Assurance Strategy Is Based on Compliance With Standards

Since its inception, Medicare has had two major quality assurance strategies to ensure that beneficiaries receive quality care.
Until recently, these strategies were based on a regulatory approach—setting minimum standards for health care organizations and implementing systems to identify and discipline substandard providers. HCFA’s two strategies cover both fee-for-service providers and HMOs. The first, certification, is intended to ensure that minimum structural requirements, such as appropriate staffing, and minimum process requirements (for example, an infection control system that identifies and corrects problems) exist to allow for quality care. The second, review of beneficiary medical records, is intended to ensure that the processes of care reflect the current best practices in the community. HCFA, however, has not always fully used available information in its monitoring programs or acted effectively when significant problems were found. As a result, HCFA cannot ensure that Medicare beneficiaries are receiving quality care.

HCFA’s Certification Strategy Has Separate Programs for Fee-for-Service and HMOs

HCFA’s certification strategy includes two major programs. The Medicare Provider Certification Program, in existence since Medicare’s inception in 1965, is directed at ensuring that fee-for-service institutional health care providers serving Medicare beneficiaries meet minimum health and safety requirements. The other program, HCFA’s Medicare HMO Qualification Program, dates to the origin of the Medicare HMO contracting program in the Social Security Amendments of 1972. This program was established to ensure that HMOs with contracts to serve Medicare beneficiaries meet minimum financial and structural standards.

The Medicare Provider Certification Program Assesses Fee-for-Service Institutional Providers

Medicare law requires institutional providers of care, such as hospitals and nursing homes receiving direct fee-for-service Medicare payments, to comply with certain physical and organizational requirements. These requirements are usually called conditions of participation.
Conditions of participation identify the minimum standards that policymakers considered necessary for quality health care to occur. In the past, the conditions related almost exclusively to structural quality of care indicators. This remains largely true for hospitals, although a 1986 revision added some process indicators. A full-service community hospital must meet 20 conditions of participation regarding such matters as the hospital’s governing body, physical plant, clinical and emergency services, nursing service, and food service. Each condition is subdivided into multiple standards, most of which must be met if an institution is to comply with the condition. Surveyors who review a hospital to determine its compliance with the conditions have usually determined only whether the institution has established the necessary policies and procedures to meet the conditions of participation. Federal regulations and survey procedures do not require surveyors to determine what actual patient outcomes have been. In the mid-1980s, HCFA officials began to work toward modifying conditions of participation for other types of institutions to focus the conditions more toward beneficiary outcomes. According to the officials, this process began in 1986 with modification of the survey process for nursing homes to emphasize review of patient outcomes and the provision of patient care services. HCFA implemented major revisions of the conditions of participation for home health agencies and nursing homes in 1991 and 1992, respectively. Finally, in April of 1995, HCFA implemented new outcome-oriented survey procedures for renal dialysis facilities. Certification surveys intended to determine whether an institution is in compliance with the conditions are performed by either state agencies or private accrediting organizations. HCFA contracts directly with state agencies to perform certification surveys of some institutional providers.
However, HCFA deems a hospital’s or home health agency’s accreditation by a designated private accrediting organization to be adequate assurance that the provider meets the conditions of participation. If a hospital or home health agency does not request accreditation from such an accrediting organization, the state agency where the institution is located will perform the certification survey. When deciding whether to grant a private accrediting organization deeming status, HCFA reviews the policies of the accrediting organization to determine that the organization, among other things, (1) has accreditation requirements that are at least equivalent to Medicare’s; (2) has survey teams and procedures adequate to detect problems, ensure corrective action, and meet Medicare requirements for the frequency and prior announcement of visits; and (3) is willing to provide HCFA with a copy of the most current accreditation survey and any other information on the survey, including corrective action plans, that HCFA may require. HCFA grants private accrediting agencies deeming authority for a 6-year period. (App. III lists the organizations whose accreditation is deemed equivalent to HCFA certification; it also lists other organizations that accredit institutional health care providers or units within providers.) Regardless of whether HCFA or state agency personnel perform the review, the process used to determine whether an institution meets certification requirements involves an on-site survey by a team of registered nurses and persons trained in other health-related disciplines. This survey may take several days depending on the type and size of provider. The survey includes a thorough review of the provider’s policies, procedures, and systems. At the conclusion of the inspection, the team meets with appropriate provider officials and informs them of its findings. Subsequently, the team prepares a formal written report and sends it to the provider.
If the team finds that the provider does not comply with one or more conditions of participation, it will ask the provider to submit a corrective action plan, including a timetable. At the end of the time period specified in the plan’s timetable, the surveying agency may perform a limited resurvey to ensure that all identified problems have been corrected, or it may require the provider to submit documentation that corrective action has occurred. If the provider does not comply with conditions by the end of the time period in the plan’s timetable, or if the problem was severe enough to seriously endanger Medicare beneficiaries, HCFA may revoke the provider’s certification to receive Medicare payment. In our 1991 review of the Medicare hospital certification program, however, we found that HCFA rarely terminated hospitals from the Medicare program even though they might have been out of compliance with Medicare requirements months longer than anticipated or allowed by regulation. This situation occurred because federal and state officials preferred to work with substandard hospitals to bring them into compliance, political pressures were exerted to keep them open if possible, and quality problems less obvious than gross negligence were difficult to document. This apparent unwillingness to terminate noncompliant hospitals has cast some doubt on HCFA’s willingness to act against any but the very worst hospitals. While terminating hospitals from Medicare is usually undesirable except as a last resort, we reported that HCFA should terminate facilities that are persistently noncompliant with conditions of participation. To ensure that state agencies and private accrediting organizations are performing their surveys adequately, HCFA performs validation surveys. 
HCFA personnel conduct validation surveys on a small percentage of the facilities surveyed by state agencies; in addition, HCFA contracts with state agencies to conduct validation surveys of the facilities surveyed by private accreditors. In 1993, state agency personnel performed 181 validation surveys among the approximately 5,200 hospitals accredited by the Joint Commission. The 1993 HCFA annual report on validation surveys of hospitals accredited by the Joint Commission concluded that a decline over several years in the percentage of hospitals found by the validation surveys to have general health and safety deficiencies provided increased assurance that accredited hospitals met federal standards. However, some problems continued with the Joint Commission’s enforcement of the Life Safety Code. In 1994, HCFA personnel performed 863 validation surveys among 15,493 nursing homes surveyed by state agencies. HCFA officials told us that the results of HCFA’s monitoring program for state survey agencies indicate that state agency performance of nursing home reviews is in some cases uneven. However, they said that they had assessed the problem and were now working with state agencies to help them improve through problem identification, consultation, and training.

The Medicare HMO Qualification Program

HMOs wanting to provide health care services to Medicare beneficiaries on a risk or cost basis must have a contract with the Medicare program. Under HCFA’s Medicare HMO Qualification Program, HCFA personnel visit HMOs with cost or risk contracts at least once every 2 years to monitor their compliance with Medicare requirements. The site visits are similar to those used in the Medicare Provider Certification Program. HCFA personnel spend several days at the HMO comparing the HMO’s policies and procedures with Medicare requirements. The monitoring team informs the HMO of its preliminary findings at the end of the visit and later prepares a formal report.
If the HMO has failed to meet one or more requirements, it must submit a corrective action plan, including a timetable for correcting the deficiency. HCFA may revisit the site to monitor compliance at the end of the time period specified in the plan’s timetable, or it may simply require regular progress reports. If the HMO fails to correct the deficiency in a timely manner, HCFA may terminate the HMO’s Medicare contract or, under some circumstances, impose a civil monetary penalty or suspend Medicare enrollment.

Inadequate Enforcement of Medicare HMO Quality Assurance Requirements

We have criticized HCFA for failing to aggressively enforce Medicare quality assurance requirements for HMOs. In 1988 and again in 1991, we found that HCFA’s efforts to obtain corrective action from a few noncompliant HMOs were largely ineffective even though HCFA repeatedly requested such action. Furthermore, HCFA often found that the same problems existed when it made its next annual monitoring visit. We found the same problems again in an August 1995 report. We concluded that HCFA’s Qualification Program is inadequate to ensure that Medicare HMOs comply with standards for ensuring quality of care. Specifically, this program remains inadequate for four main reasons.

First, HCFA does not determine if HMO quality assurance programs are operating effectively. HCFA’s routine compliance monitoring reviews do not go far enough to verify that HMOs monitor and control quality of care as federal standards require. The reviews check only that HMOs have procedures and staff capable of quality assurance and utilization management—not that these processes operate effectively.

Second, HCFA does not systematically incorporate the results of PRO review of HMOs or use PRO staff expertise in its compliance monitoring. A routine HCFA site visit to an HMO generally involves about three people without specialized clinical or quality assurance training, who spend a week or less focused largely on Medicare requirements for administration, management, and beneficiary services rather than on medical quality assurance. About a third of staff time is typically spent on quality-related matters. PRO staff generally have the specialized clinical training needed to perform quality assurance reviews.

Third, HCFA does not routinely collect utilization data that could most directly indicate potential quality problems. In the fee-for-service sector, claims data are available and can be used to detect potential overutilization of services. Although HCFA has the authority to require HMOs to collect such data, and federal standards require that HMOs have information systems to report utilization data and management systems to monitor utilization of services, no comparable data exist for use in the Medicare HMO Qualification Program to detect potential underutilization. As a result, even such basic information as hospitalization rates; the use of home health care; or the number of people receiving preventive services, such as mammograms, is unknown.

Fourth, HCFA does not evaluate HMO risk-sharing arrangements with providers. The agency does not routinely assess whether HMO risk-sharing arrangements create a significant incentive to underserve, although in the Omnibus Budget Reconciliation Act (OBRA) of 1990, the Congress gave the Department of Health and Human Services (HHS) authority to limit arrangements that it found provided an excessive incentive to underserve. As of March 15, 1996, the Department had not yet issued final regulations on methods for gauging how much risk an HMO can legitimately pass to providers and requirements that providers must meet to accept such risk. However, a HCFA official told us that HCFA expected to publish these regulations shortly.
We also found that enforcement processes remain slow when HCFA does find quality problems or other deficiencies at HMOs that do not comply promptly with federal standards. For example, between 1987 and 1994, HCFA repeatedly found that a Florida HMO did not meet Medicare quality assurance standards and received PRO reports indicating that the HMO was providing substandard care to a significant number of beneficiaries. During this period, HCFA permitted the HMO to operate as freely as a fully compliant HMO. We also found that HCFA does not routinely release its site visit reports to the public. Consequently, when an HMO is found to violate federal standards, Medicare beneficiaries may not know of quality problems that might influence their decision to join or remain enrolled in that HMO.

The Medicare Peer Review Organization (PRO) Program

HCFA’s medical record review strategy, implemented through the Medicare PRO program, was designed to identify providers whose care does not meet recognized medical standards. PROs generally have been required to focus their reviews on care furnished to beneficiaries on a fee-for-service basis in hospitals and outpatient surgical centers and care furnished by HMOs. Although HCFA may use the PRO program to review care provided to beneficiaries in other settings, such as physicians’ offices, it has chosen not to use this authority because reviewing care at all private U.S. physicians’ offices would be overwhelming. Until recently, the PROs’ primary review method was to monitor providers’ actions through reviews of individual medical records. A number of sampling strategies have been used to select records for review. The prevailing strategy in the fee-for-service sector has been to draw a random sample only from Medicare hospital admissions.
However, other samples drawn from hospital admissions have focused on areas perceived to be at high risk, for example, cases in which potentially adverse events such as hospital readmission within 31 days of a discharge have occurred. In the HMO sector, the PROs drew a random sample of enrolled beneficiaries, both living and recently deceased, and asked the HMOs to determine which of these sampled beneficiaries had received either ambulatory or inpatient services during the period in question. For these beneficiaries, the PRO reviewed the medical records for all care furnished by the HMO over a 12-month period in both ambulatory and inpatient settings. PRO medical review usually begins when a reviewer employed by the PRO reviews the selected medical record. If a problem is found, the medical record is referred to a PRO physician. If the PRO physician believes that a quality concern might exist, the PRO writes to the providers responsible for the patient’s care and gives them the opportunity to provide an explanation for the potential concern. Then, if the concern is not resolved, it is referred for further review to a physician who is a specialist in the type of care being questioned. If a provider demonstrates a pattern of confirmed problems, the cases are sent to the PRO’s medical review committee, composed mainly of physicians, which determines whether a corrective action plan is necessary to prevent similar problems from occurring in the future. If the provider will not or cannot correct the identified poor practice, the PRO may recommend that the HHS Office of Inspector General impose a sanction. Possible sanctions include suspension of eligibility to receive reimbursement from the Medicare program for a specified period or monetary penalties. The PRO program has been criticized by providers and other health care experts because of the adversarial role some experts believe the PROs have taken. 
Furthermore, relatively few substandard providers have been identified as a result of this approach. The medical review model used by the PROs focused on the detection and correction of individual aberrant providers. HCFA officials found this particular model to be confrontational, unpopular with the physician community, and of limited effectiveness. In the past, we have also been critical of HCFA’s use of the PRO program to monitor HMOs. In a 1991 report, we cited several problems with the PROs’ ability to monitor care provided by HMOs with risk contracts. First, although HCFA contracted with PROs to perform an initial review of the adequacy of risk HMO quality assurance plans in 1987, HCFA failed to require HMOs to submit their plans for review. Furthermore, when the PROs found deficiencies in HMO quality assurance plans, HCFA did not require HMOs to correct them. As a result, HCFA could not be assured that HMOs were identifying and correcting quality of care problems. In commenting on this report, HCFA stated that in 1987 it did not believe that PROs had the expertise to perform reviews of HMOs’ quality assurance plans. However, HCFA now believes the situation may have changed. HCFA is currently studying the possibility that PROs could play an active role in monitoring Medicare HMOs’ quality assurance systems. Second, HCFA did not require risk HMOs to submit patient encounter data to HCFA. As a result, HCFA lacked adequate HMO utilization data and other patient information that PROs could use as the basis for sampling HMO beneficiaries receiving hospital care or to identify statistical patterns of care that may suggest underutilization or inappropriate care. Finally, HCFA failed to incorporate the results of PRO review into its HMO qualification monitoring process. As a result, HCFA could not be assured that high-quality health care was being provided to Medicare beneficiaries in risk HMOs.
This failure was still an issue when we reviewed HCFA’s oversight of HMOs serving Medicare beneficiaries in 1995.

HCFA’s New Strategies Reflect Experts’ Views on Appropriate Quality Assurance Approaches

HCFA is substantially revising its quality assurance strategy to reflect state-of-the-art quality assurance practices that health care professionals believe will more effectively improve quality of care, such as continuous quality improvement, outcomes measurement, and dissemination of performance results. HCFA’s new strategy, called the Health Care Quality Improvement Program, is founded on the premise that HCFA should try to buy the best care possible for Medicare beneficiaries and is generally consistent with many of the elements of appropriate quality assurance strategies cited by the health care experts we interviewed. As a result, HCFA officials believe that they will be able to improve the overall quality of care for all Medicare beneficiaries. HCFA, however, is just now developing plans to provide additional information to beneficiaries about plans’ performance. We believe that this change is needed as HCFA revises its quality assurance strategy. The experts we interviewed believe that providing information to help beneficiaries make sound purchasing decisions is essential to a good quality assurance program.

Experts’ Views About Appropriate Strategies for Medicare Managed Care Quality Assurance

When we asked the experts about their views on ensuring that quality care is provided to Medicare beneficiaries through a variety of managed care arrangements, they cited the following characteristics for a federal quality assurance strategy:

The strategy should build on existing federal, state, and private efforts.
These efforts could include state initiatives such as those built on National Association of Insurance Commissioners’ (NAIC) quality assurance and other model standards, as well as existing private and federal systems, such as government certification and private accreditation programs, and the long-standing involvement and experience of PROs in collecting and evaluating quality assurance data.

The strategy should use many measures to evaluate care. In addition to the ongoing quality assurance activities already discussed, steps should be taken to develop valid and reliable performance measures, including patient satisfaction surveys, for evaluating health care providers’ performance. The experts stressed the importance of outcome performance measures, recognizing that these measures are not yet fully developed. Therefore, they suggested that other, more traditional, performance measures be used until consensus is reached on appropriate outcome measures. Patient satisfaction surveys are becoming increasingly popular and important as a performance measurement tool. Like large private-sector health care purchasers, the federal government could employ this strategy as one tool to measure provider performance.

The strategy should encourage continuous quality improvement. Experts view encouraging providers’ continuous quality improvement activities as an important role for the federal government. In this regard, they recognized the importance of external oversight programs designed to ensure that providers are continually assessing and improving the care they furnish. Such oversight programs are an important tool for identifying previously undetected problems, providing management with constructive feedback, and assisting providers and plans to improve their health services.

The strategy should make information about providers available to beneficiaries and others in a useful and understandable way.
A common theme expressed by the experts we interviewed was the need to provide understandable and reliable data on managed care organizations to beneficiaries to help them in their health care purchasing decisions. Several told us that this information should be disseminated at the regional or local level because beneficiaries derive little benefit from national data. Although the experts we interviewed agreed on the broad strategies needed for a comprehensive Medicare quality assurance program, there was less consensus on how to implement these strategies. For example, they expressed varying views on the most appropriate performance data to collect, who should verify these data, and who should be responsible for evaluating the data once they are collected and verified. Finally, experts expressed the view that federal quality assurance strategies should be reexamined and enhanced for the entire spectrum of Medicare providers—that is, managed care organizations and fee-for-service providers.

HCFA Is Reinventing Its Certification Program

As part of its Health Care Quality Improvement Program, HCFA intends to reinvent the Medicare Provider Certification Program. According to a HCFA official, as outcome indicators become more valid, reliable, and accepted by providers, they will replace current structure indicators in the certification process. Currently, HCFA is using outcome measures as the basis for its nursing home certification decisions. Furthermore, HCFA is collecting data from home health agencies to construct outcome measures. HCFA is also developing outcome-oriented conditions of participation for hospitals, which may be implemented in 1997.

HCFA’s Nursing Home Outcome Measures

In 1987, the Congress passed legislation that extensively revised the Medicare conditions of participation for skilled nursing facilities.
As implemented by HCFA, these new conditions, renamed “requirements,” call for a resident-centered survey emphasizing review of the outcomes of the care actually furnished. This review is in addition to the review of the nursing home’s performance in relation to specific structure and outcome indicators. The resident-centered survey requirement is based on the selection of a case-mix-stratified sample of residents performed in two phases. During the first phase, about 60 percent of the whole sample is selected. Included are residents who have special needs, such as those requiring considerable assistance with activities of daily living, those who cannot be interviewed, and those who fit into the specific area of focus selected for the survey. In addition, the sample should include some residents who (1) are new admissions; (2) are at high risk of neglect and abuse because they have dementia, have few visitors, or are bedfast; (3) have difficulty communicating; (4) are receiving hospice services; or (5) have other special circumstances. After the survey team has gained enough experience at the facility to identify other areas of special concern, the remaining 40 percent of the survey sample is selected, focusing on residents in these areas. The surveyors interview each of the selected residents and then review their medical records to determine whether the resident’s needs have been properly assessed, appropriate interventions have been implemented, and the resident has been evaluated to determine the interventions’ effect. Also as a result of the 1987 legislation, on July 1, 1995, HCFA implemented the Long Term Care Enforcement Regulation, a new set of intermediate sanctions for the nursing home certification process. These give HCFA and the state agencies a broad range of remedies, short of termination from the program, for noncompliance with requirements.
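The two-phase selection described above amounts to a stratified sampling routine. The following sketch is illustrative only: the field names, the exact 60/40 arithmetic, and the prioritization order are assumptions for the example, not HCFA's actual survey protocol.

```python
import random

def select_survey_sample(residents, sample_size, concern_areas, seed=None):
    """Illustrative two-phase, case-mix-stratified resident sample.

    Phase 1 takes about 60 percent of the sample, prioritizing residents
    flagged for special circumstances; phase 2 fills the remaining 40
    percent from care areas the survey team identified as concerns on site.
    """
    rng = random.Random(seed)
    phase1_target = round(sample_size * 0.6)

    # Hypothetical flags standing in for the special-circumstance groups
    # the survey guidance describes (new admissions, high-risk, etc.).
    special = [r for r in residents
               if r.get("new_admission") or r.get("high_risk")
               or r.get("communication_difficulty") or r.get("hospice")]
    others = [r for r in residents if r not in special]
    rng.shuffle(special)
    rng.shuffle(others)
    phase1 = (special + others)[:phase1_target]

    # Phase 2: draw from the residents not yet sampled, preferring those
    # in the surveyor-identified concern areas.
    remaining = [r for r in residents if r not in phase1]
    focus = [r for r in remaining if r.get("care_area") in concern_areas]
    backup = [r for r in remaining if r not in focus]
    phase2 = (focus + backup)[:sample_size - phase1_target]
    return phase1, phase2
```

In this sketch, the special-circumstance residents always fill phase 1 first, mirroring the requirement that the initial 60 percent include the enumerated groups; a real survey protocol would also enforce case-mix stratification across the strata.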
These remedies range from such measures as enhanced state monitoring and directed in-service training to civil monetary penalties, temporary takeover of the facility’s management, and denial of payment for new admissions or even all residents. HCFA officials told us that they provided extensive training in the new procedures and remedies to state agency personnel. HCFA is also developing a set of nursing home outcome indicators, such as the prevalence of decubitus ulcers and the percentage of patients whose capability for activities of daily living has declined over a 3-month period. These indicators, now being measured in a five-state demonstration project, stem from an expanded version of the minimum data set mandated by law for use in all nursing facilities. HCFA eventually hopes to use the results of these indicators to permit state agencies to focus increased resources on nursing homes showing poor performance by decreasing the frequency of surveys for those nursing homes with good performance. HCFA officials also hope that the nursing homes will use the data for continuous quality improvement activities.

HCFA Is Developing Outcome Indicators for Other Health Care Settings

HCFA is also preparing new, outcome-oriented conditions of participation for home health agencies, hospitals, and dialysis facilities, to be followed by new requirements for hospices. In conjunction with the new home health agency conditions of participation, HCFA is developing indicators that reflect changes in beneficiaries’ functional and health status. Examples of such indicators are (1) the percentage of patients showing improvement in walking and (2) the percentage of patients readmitted to an acute care hospital. As with nursing homes, HCFA officials hope to use these indicators to determine the frequency with which different home health agencies should be reviewed. They also hope that the agencies will use the indicators for continuous quality improvement projects.
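Outcome indicators like those described above are, at bottom, rates computed over paired assessments. A minimal sketch follows; the data layout and the scoring convention (higher score means greater dependence) are assumptions for illustration, not drawn from the actual minimum data set.

```python
def decline_rate(assessments):
    """Percentage of residents whose functional score worsened between a
    baseline assessment and one taken about 3 months later.

    `assessments` maps a resident id to a (baseline, followup) pair of
    scores in which a higher score means greater dependence; an increase
    therefore counts as a decline in capability.
    """
    if not assessments:
        return 0.0
    declined = sum(1 for base, follow in assessments.values() if follow > base)
    return 100.0 * declined / len(assessments)
```

A survey agency could rank facilities on such rates to concentrate resources on poor performers, which is how the report says HCFA hopes to use these indicators.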
Additionally, HCFA is working with the Joint Commission, hospital associations, and others to draft new, outcome-oriented hospital conditions of participation. HCFA officials told us that they hope to publish these new conditions in the Federal Register for public comment in 1996 and implement them during 1997. HCFA is also working with the Joint Commission and the American Osteopathic Association (AOA) to modify its process for validating these organizations’ accreditation surveys. The new process calls for HCFA to conduct a more comprehensive evaluation of these organizations’ hospital accreditation programs, including standard setting, training surveyors, conducting the survey, enforcing actions, and remaining financially viable to ensure they can meet their full responsibilities to protect patients and improve outcomes. Under this new process, state agency surveyors would observe the Joint Commission or AOA surveyors to determine the accreditors’ ability to identify problems and analyze investigation results. HCFA officials told us that they are still working out the methodological problems inherent in conducting simultaneous accreditation and validation surveys. HCFA expects to implement the new hospital survey process in fiscal year 1997. HCFA Is Reengineering the PRO Program Also as part of its Quality Improvement Program, HCFA is reengineering the entire PRO program to incorporate continuous quality improvement concepts. By the end of 1995, random sample case reviews—that until 1993 were the backbone of PRO review—had been completely replaced by cooperative projects between the PROs and providers. Individual case review will continue for seven mandatory categories after implementation of the fifth round of PRO contracts beginning in April 1996. However, only two of these categories appear to be primarily aimed at identifying providers delivering poor care. 
These categories are beneficiary complaints or possible poor care discovered in the course of cooperative projects. Cooperative projects are implemented by mutual agreement between the PROs and hospitals and the PROs and HMOs with Medicare contracts. Provider participation is voluntary. HCFA officials indicated, however, that they believe most hospitals and HMOs will welcome the opportunity to collaborate with the PROs on projects with the potential to improve the quality of care. They do not believe that provider noncooperation will be a significant problem. However, HCFA officials told us that if they have strong indications that a hospital or HMO has significant quality of care problems and the entity refuses to cooperate, HCFA can issue a letter terminating the hospital’s or HMO’s Medicare participation for violating HCFA’s condition of participation to have an effective quality assurance program. PROs will use population, diagnosis, and procedure-specific utilization analysis of claims and clinical data as well as current published scientific studies to identify potential projects in areas that have clear opportunities to improve care. Most projects are to be jointly developed by the PRO and the provider and may involve direct data collection to supplement the use of claims data. HCFA will direct other cooperative projects. For example, the Cooperative Cardiovascular Project requires PROs to work with hospitals to improve care for Medicare beneficiaries hospitalized for heart attacks. HCFA developed a set of 11 process indicators based on an existing clinical guideline and refined through experience in a demonstration project involving collaboration between PROs and hospitals in four states.This demonstration project found that guidelines are often not followed and that significant opportunities for improvement exist. 
Even among patients who were identified as the best candidates for treatment, only 70 percent received thrombolytic drugs, 45 percent received beta blockers at discharge, and 77 to 83 percent received aspirin. Hospitals reported that these data were useful, and many of them committed to improving care. The PROs in the four pilot states are now returning to the hospitals to assess progress and promote further improvement in cardiac care for Medicare beneficiaries. In March 1995, the Cooperative Cardiovascular Project was extended nationwide. Data on inpatient treatment for heart attack are being collected in the remaining 46 states, and all PROs are expected to have collaborative projects with hospitals to improve care for heart attack victims by mid-1996. Although PROs have the authority to review fee-for-service ambulatory care, HCFA has been reluctant to venture into this area because reviewing care at all U.S. private physicians’ offices would be overwhelming. Currently, except for ambulatory surgical procedures, the only fee-for-service ambulatory review conducted is a pilot project begun recently in three states. In this project, PROs and 100 volunteer physicians in each state are cooperating to improve the quality of care provided to patients with diabetes. Concurrently, PROs in five other states are working cooperatively with 23 HMOs on a similar project. Both the fee-for-service and HMO initiatives are based on collecting information from medical records about 22 specific process and outcome performance measures such as the results of important laboratory tests. Data Standardization Is a Recognized Need As part of the new program, HCFA officials are committed to working collaboratively with providers to enhance data requirement standardization by making HCFA requirements consistent with other purchasers’. 
As a result of these efforts, HCFA has already implemented the minimum data set for nursing homes as previously discussed and is developing minimum data sets for use in home health care and managed care plans. It is now focusing efforts on standardizing data collection from managed care plans. HCFA officials have recognized that uniform and consistent plan data are necessary for evaluating any managed care performance. As a result, HCFA is working with NCQA and others to develop a new version of HEDIS that will include information applicable to the health care needs of the Medicare population. HCFA Is Collaborating With Private-Sector Purchasers In June 1995, HCFA announced that it was joining a group of large corporate purchasers of health care to form a new organization called the Foundation for Accountability. Among the many goals of this organization is developing a new generation of quality performance measures for health plans to provide purchasers and consumers with relevant information for health care decisionmaking. These measures will include results of treatment both for a health system’s entire population and for sick individuals. The Foundation also proposes to develop a common set of indicators to enable consumers to compare plans and to understand a plan’s benefit structure and modes of treatment. The Foundation will develop and use standardized, performance-based quality and outcome measures that emphasize patient ability to function normally in activities of daily living and patient satisfaction with the care provided. Because the Foundation represents approximately 80 million insured people, HCFA and the other Foundation members believe that health plans will adopt these measures and supply the results to them, other purchasers, and individual consumers. According to a former HCFA program official, joining this initiative will help to eliminate duplication of quality assurance efforts. 
Beneficiary Satisfaction Information HCFA has acted to increase its knowledge about Medicare beneficiaries and their reaction to its policies. One major initiative to obtain more information about the demographics, health status, access to care, and satisfaction of Medicare beneficiaries is the annual inclusion of specific questions about these issues in the Medicare Current Beneficiary Survey. This survey, begun in 1991, was undertaken primarily to meet the needs of the HCFA Office of the Actuary for comprehensive information on the use of care, costs, and insurance coverage for the Medicare population. It entails conducting a telephone interview every 4 months with a representative sample of 12,000 Medicare beneficiaries. Sample members usually stay in the survey for several years. HCFA officials told us that they are planning a survey to collect similar data from Medicare beneficiaries enrolled in HMOs. They said that they plan to have an outside contractor perform annual surveys of a statistically valid sample of Medicare enrollees in every HMO with a Medicare contract with HCFA. The contractor will use a standard survey and provide a consistent analysis of the information received from the beneficiaries. Data collected in this survey will include information on member satisfaction, quality of care, and access to services. HCFA has not yet begun the contracting process, however. HCFA officials told us that they intend to use the results of this survey to monitor contracting HMOs as well as to translate the resulting data into information that will be meaningful to beneficiaries and others for making informed health care decisions. HCFA also intends to release the results of the surveys to the plans for use in the plans’ continuous quality improvement activities. 
Beneficiary Education Focuses on Personal Health, Not Provider or Plan Information HCFA is conducting promotional campaigns intended to increase Medicare beneficiaries’ use of influenza immunizations and screening mammographies. Educational information about additional topics, such as post-acute care alternatives and end-stage renal disease are being developed. HCFA officials eventually plan to provide Medicare beneficiaries with information that will help them choose providers. Within a few years, they expect to be able to report the characteristics and results of key performance indicator data for nursing homes to facilitate consumer comparison of facilities. Producing these reports is difficult, however, because it requires adjusting nursing home comparisons for resident populations with differing care needs. Presenting the results of such a comparison in a clear enough way to be useful to consumers will also be a complex task. At best, it may be several years before this initiative shows concrete results. HCFA officials also reported that they are planning to produce a “Plan Comparability Chart,” another initiative designed to provide beneficiaries with information to compare Medicare HMOs and HMOs versus fee-for-service arrangements. However, this project appears to be in its early stages. In a recent report, we found that, although HCFA does collect information that could be useful to beneficiaries in discriminating among HMOs, it does not routinely make such information available. HCFA regularly reviews plan performance and routinely collects and analyzes data on Medicare HMO enrollment and disenrollment rates, Medicare appeals, beneficiary complaints, plan financial condition, availability and access to services, and marketing strategies. However, HCFA does not make this information routinely available to beneficiaries, nor does it plan to do so. 
In another recent report, we recommended that HCFA be directed to routinely publish comparative data it collects on Medicare HMOs and the results of its investigations and any findings of noncompliance by HMOs. Conclusions and Agency Comments HCFA’s proposed changes to enhance its quality assurance programs are generally consistent with the strategies expressed by the experts we interviewed and the literature we reviewed on assessing quality in the Medicare program. These changes appear to be steps in the right direction. We have concerns, however, about HCFA’s implementation of its new quality assurance strategy and its plans and timetable for providing information to beneficiaries. Our analysis of HCFA’s previous quality assurance implementation efforts raises concerns about whether HCFA will implement its new comprehensive program to deal effectively with poorly performing health care providers as well as improve all providers’ performance. As the majority of experts we interviewed recommended, HCFA’s Health Care Quality Improvement Program is based on continuous quality improvement. HCFA plans, however—through its targeted medical record review—to continue its efforts to identify providers who do not meet accepted standards of practice. But the number of targeted reviews planned could be minimal. The ability of HCFA’s proposed program to focus on dealing effectively with poorly performing providers is unclear, and this is an area where HCFA has not performed well in the past. HCFA’s plans and timetable for implementing patient satisfaction surveys and distributing comparative performance measurement information lag behind those of some private-sector employers and state agencies because HCFA does not believe it has useful information to give beneficiaries. We agree that HCFA should proceed with due care before implementing programs that might mislead beneficiaries about the quality of care they would receive in different health care systems. 
However, other responsible purchasers have already proceeded with surveying their constituents to determine their feelings about their health care and have published satisfaction data and other performance information to help individuals make purchasing decisions. Those who received the information say they found it useful and requested more data. HCFA Comments and Our Evaluation The Administrator of the Health Care Financing Administration disagrees with our concerns over how well HCFA will be able to implement its new quality assurance initiative and its plans for providing information to beneficiaries. The Administrator also notes that we do not mention HCFA’s Long Term Care Enforcement Regulation and provides detailed technical comments on our report (see app. I). GAO Work on HCFA’s Quality Assurance Activities The Administrator said that our report inaccurately and unfairly concludes that HCFA cannot implement comprehensive programs and deal effectively with poorly performing health care providers. He states that our reports have presented an unbalanced view of HCFA’s quality assurance initiatives over the years, choosing to focus on negative events in the past rather than HCFA’s continuous improvements to its quality monitoring. For example, we have criticized HCFA in the past for failing to enforce HMO quality assurance standards, citing the example of a Florida HMO. The Administrator notes that we do not mention a HCFA investigation of this HMO in 1994 and 1995, the deficiencies HCFA identified, and the corrective actions the plan agreed to implement. In addition, the Administrator disagrees with our conclusion that HCFA should not rely totally on a continuous quality improvement strategy since this could result in deemphasizing the identification and correction of substandard providers. He argues that our report suggests that HCFA’s resources should be devoted to identifying substandard providers. 
Furthermore, the Administrator states that we cite but a few poor performers and indicates that the only way to improve care for Medicare beneficiaries is to terminate participation by these facilities. Our reports, as noted by the Administrator, have consistently documented HCFA’s failure to aggressively enforce HMO-related quality assurance requirements. We believe that the history of our work raises reasonable concerns about how well HCFA will implement its current quality assurance initiative and take action if providers are not adequately improving their performance. In several reports prepared in the past decade covering both the provider certification and HMO qualification programs, we have found that HCFA has often failed to act firmly even when the provider is not making good faith efforts or acceptable progress. In our opinion, the events leading up to and surrounding the 1994 investigation of the HMO mentioned by the Administrator are an excellent example of HCFA’s difficulties in enforcing Medicare requirements for HMOs. In January 1993, HCFA was aware of findings from a 1992 special study performed by the Florida PRO that showed serious quality problems at this HMO. Despite this awareness, HCFA did not begin to investigate the HMO’s quality assurance and utilization management practices until June 1994. HCFA approved a corrective action plan for this HMO in January 1995 and found it in compliance in July 1995—more than 2-1/2 years after the problem first surfaced. Despite the Administrator’s statement, our report does not propose devoting all of the program’s resources to identifying substandard providers. Rather, we are concerned about how HCFA will balance its use of continuous quality improvement with ways to deal effectively with poorly performing providers. Additionally, we do not believe, as HCFA indicates, that the only way to improve care for Medicare beneficiaries is to terminate providers from the program. 
In some instances, however, this may be HCFA’s only recourse if the provider repeatedly fails to take corrective action. We have modified our language in the report to clarify our position on this matter. HCFA’s Consumer Education Effort The Administrator also disagrees with what he characterizes as our conclusion that HCFA has no immediate plans to provide beneficiaries with health plan-specific information to help them in making health care purchasing decisions. Instead, he notes that HCFA recognizes the need to provide information that is truly usable and informative. The Administrator adds that GAO does not go into any detail on the usefulness of information issued by the private sector. He argues that at best such information is very sketchy and cannot be used to make a managed care plan choice. First, we agree that HCFA should publish only useful information; however, we believe that some of the information now being collected by HCFA qualifies as useful and could be published and disseminated to Medicare beneficiaries. This includes information on HMO disenrollment rates and beneficiary complaints. In addition, HCFA could routinely release its HMO site visit reports. These reports contain information that might be useful to beneficiaries, for example, how well the HMO is meeting Medicare requirements such as maintaining an effective quality assurance program and a Medicare appeals system. The reports do not normally contain provider-specific information that HCFA indicates regulations prohibit it from releasing and are currently available to the public only under Freedom of Information Act procedures. We also are convinced that HCFA beneficiaries could benefit from private-sector strategies for collecting and disseminating information about quality and value and have provided an additional reference to support our belief that consumers would use this information. 
Long Term Care Enforcement Regulation The Administrator also notes that our report does not mention the Long Term Care Enforcement Regulation and the training efforts that have occurred to enhance the effectiveness of both the enforcement regulation and the long-term care survey process. We have added a description of HCFA’s Long Term Care Enforcement Regulation to our report. HCFA also made other detailed comments on specific portions of our draft report. We have considered these and modified our report where appropriate. | Pursuant to a congressional request, GAO reviewed the Health Care Financing Administration's (HCFA) efforts to enhance the quality of care for Medicare beneficiaries, focusing on: (1) the strategies to ensure that Medicare providers furnish quality health care, in both fee-for-service providers and health maintenance organizations (HMO); and (2) experts' views on desirable attributes of a quality assurance strategy if more managed care options are made available to Medicare beneficiaries. 
GAO found that: (1) HCFA monitors the quality of care in the Medicare program and has the authority to require corrective action or withhold Medicare payments from substandard providers; (2) Medicare's quality assurance strategies include setting minimum standards for health care organizations and implementing systems to identify and discipline substandard fee-for-service providers and HMO; (3) the Medicare Provider Certification Program ensures that fee-for-service institutional health care providers serving Medicare beneficiaries meet minimum health and safety standards; (4) the Medicare HMO Qualification Program ensures that HMO with contracts to serve Medicare beneficiaries meet minimum financial and structural standards; (5) HCFA has failed to enforce Medicare quality assurance requirements for HMO; (6) the HCFA medical record review strategy, implemented through the Medicare Peer Review Organization (PRO) Program, identifies providers whose care does not meet recognized medical standards; (7) the new HCFA quality assurance strategy, called the Health Care Quality Improvement Program, tries to buy the best care possible for Medicare beneficiaries and reflects state-of-the-art quality assurance practices; (8) experts believe that programs designed to ensure quality care provided to Medicare beneficiaries through a variety of managed care arrangements should build on existing efforts, use many measures to evaluate care, encourage continuous quality improvement, and make information about providers available; and (9) the dubious nature of previous quality assurance implementation efforts raises concern about its ability to implement its new quality assurance strategy. |
Background

Following the 1998 embassy bombings in Nairobi, Kenya, and Dar es Salaam, Tanzania, a number of reviews called for the reassessment of overseas staffing levels and suggested a series of actions to adjust the overseas presence, including relocating some functions to the United States and to regional centers, where feasible. The White House, Congress, the Office of Management and Budget (OMB), and GAO have emphasized rightsizing as vital to ensuring that the overseas presence is at an optimal and efficient level to carry out foreign policy objectives. GAO’s rightsizing framework, which has been adopted by OMB and State, consists of three factors—mission, security, and cost—that should be weighed when making rightsizing decisions. In addition, the President’s Management Agenda (PMA) has identified rightsizing as one of the administration’s priorities. One way to provide efficient administrative support to overseas posts is by consolidating and centralizing service delivery within a geographic area through regional service centers located overseas and within the United States. Two objectives of regional service centers, which address the three factors of the rightsizing framework, are to improve administrative support to overseas posts (mission) and to reduce staffing overseas whenever possible (cost and security). Within the State Department, a number of bureaus and offices are responsible for the administration and oversight of regional operations overseas. The Under Secretary for Management is responsible for implementing the PMA initiatives and, in particular, working with the White House and OMB on the initiative focused on rightsizing the U.S. government’s overseas presence. The congressionally mandated Office of Rightsizing leads State’s efforts to develop mechanisms to better coordinate, rationalize, and manage the deployment of U.S. government personnel overseas. 
In addition, the Office of Global Support Services and Innovation in the Bureau of Administration coordinates State’s efforts to improve the delivery of support services to all overseas posts. This office partners with service providers at posts and State’s various regional and functional bureaus to move support work to safer and lower-cost regional and central locations. The operation of U.S. embassies and consulates overseas requires basic administrative support services for overseas personnel, such as financial management and personnel services. At the post level, the management section, which is normally headed by a management counselor or management officer, is responsible for carrying out the administrative functions at a post. The typical management section of an embassy consists of several U.S. Foreign Service officers who are in charge of financial management, human resources, information management, and general services. They are assisted by locally employed staff who serve as voucher examiners, cashiers, and financial and personnel assistants and specialists. Smaller posts have not historically had full management sections with trained, experienced U.S. citizen officers filling each of the management positions, such as a financial management officer or human resources officer. Therefore, these posts often rely on remote support from the United States or a regional service center for administrative services.

A Number of State Bureaus Provide Embassy Support Remotely, with More Efforts Planned

State has a number of overseas regional bureaus that provide management support remotely in a variety of ways. State’s functional bureaus also provide remote support. As a part of its rightsizing efforts, State developed plans to regionalize support by identifying all nonlocation-specific functions and removing them from overseas posts, starting with critical danger posts, where it is crucial to have as few personnel as possible due to security concerns. 
State’s Regional Bureaus Offer Remote Support in a Variety of Ways

Two regional bureaus provide remote support from a regional service center staffed with a cadre of management staff assigned to various posts. Other regional bureaus assign management staff at larger posts to assist neighboring posts that lack the management staff necessary to carry out all of the post’s administrative functions.

Two Regional Bureaus Have Regional Service Centers

State’s Bureau of Western Hemisphere Affairs and the Bureau of European and Eurasian Affairs offer a variety of personnel and other administrative support services remotely to their posts through regional service centers. Both regional service centers—the Florida Regional Center in Fort Lauderdale, Florida, and the Regional Support Center in Frankfurt, Germany—have a director who oversees operations and reports to the executive director of each respective regional bureau in Washington, D.C. Both centers’ buildings also house various other regional support activities that are managed by the respective functional bureaus, such as a regional procurement office that provides purchasing and contracting services to posts. The Florida Regional Center provides financial management and human resources support to about 16 posts located in Latin America and the Caribbean. The posts that receive remote support in these functions do not have a full-time, American financial management officer or human resources officer; rather, the U.S. post management officer at these posts serves multiple roles and spends a certain percentage of his or her time on various management activities, including the certification of vouchers and some personnel functions, with assistance from locally employed staff. However, the management officers might not be able to provide enough personnel or financial support due to their lack of experience or training in these functions as well as time constraints, according to officials at the posts we visited. 
To compensate for these limitations, a regional human resources or financial management officer, based in Fort Lauderdale, visits each post for which he or she is responsible on an agreed schedule that is outlined in a memorandum of agreement between the post and the Florida center. For example, during a typical visit, a regional human resources officer ensures that the post is in compliance with local labor laws and regulations, evaluates post personnel operations and practices, addresses employee morale issues, conducts salary and benefits surveys, provides guidance on post training needs, and performs a host of other higher-level human resources duties, as necessary. A regional financial management officer’s responsibilities include reviewing post management practices to prevent waste, fraud, and mismanagement; conducting spot reviews of vouchers, purchase orders, and petty cash transactions; and providing assistance in post budget and financial plans. The Florida center also has one regional information management officer involved in a pilot program to provide support to two posts that do not have a permanent information management officer assigned, as well as three information management specialists and two office management specialists who provide additional support to posts throughout the region, when necessary. The Regional Support Center in Frankfurt, Germany, provides management assistance in financial management and human resources to about 40 posts throughout Europe and Eurasia; however, it does this on a more consultative, as-needed basis than the Florida center. The Frankfurt center’s focus is to promote self-reliance in the full range of financial and personnel activities at European and Eurasian posts. It provides management oversight to posts and assists staff in developing various managerial skills through oversight visits and training. 
Many of the posts the center serves do not have full-time human resources officers or financial management officers, and a number of them are staffed by junior or first-tour management officers who need occasional assistance or training in core management functions. Regional support is provided through occasional post visits from regional officers and senior, locally employed staff located at the Frankfurt office, as well as through training provided at the Frankfurt center. Table 1 provides a breakdown of the number of regional management staff, the number of posts they cover, and the types of support they provide from Fort Lauderdale and Frankfurt. Table 2 provides data on the four posts that we visited that receive financial and personnel support from a regional service center in Fort Lauderdale or Frankfurt and the various characteristics of those posts, including the total number of staff, the number of local staff that carry out financial and personnel functions, the posts’ budgets, and the number of annual visits received from a regional manager. Fort Lauderdale and Frankfurt currently provide administrative support remotely to small and medium-sized posts, which in some instances removes the need for an American officer to carry out those support functions at post. State’s other regional bureaus use mechanisms other than regional centers to support posts’ administrative needs remotely. In particular, the Bureaus of African Affairs, East Asian and Pacific Affairs, Near Eastern Affairs, Western Hemisphere Affairs, and South and Central Asian Affairs use partnering arrangements to provide remote support from larger posts or embassies to small or medium-sized posts that do not have resident American human resources or financial management officers. 
For example, because Embassy Phnom Penh does not have a resident human resources officer, the management staff at Embassy Bangkok provides support by reviewing human resources operations and providing ad hoc advisory assistance at least twice per year. In Mexico, the Embassy in Mexico City provides financial management support to about nine consulates throughout the country that do not have resident financial management officers. In addition, the Bureau of African Affairs employs staff in Paris to provide financial support to posts in Africa. Some posts have a support agreement that outlines how many visits will be made and what functions will be carried out under such partnering arrangements. Officials from the Bureau of East Asian and Pacific Affairs in Washington said that posts in Asia use partnering because geographic distances and language and cultural differences between posts in some areas make it difficult to devise a regional service center that, like those in Frankfurt and Fort Lauderdale, meets all posts’ needs. Furthermore, officials said the regional bureau currently lacks the funding to establish a regional service center with a new building and additional management staff. See figure 1 for a map of several remote support partnerships in East Asia and the Pacific. In addition, the Bureau of Near Eastern Affairs has embarked on an effort to make extensive use of remote support provided from the United States due to the extreme security threat faced at new embassies, particularly in Baghdad, Iraq. For example, an official from the Bureau of Near Eastern Affairs told us that State plans to provide increased financial management support to the embassy in Baghdad from centralized operations in Charleston, South Carolina, rather than performing all financial management operations at post. 
However, he pointed out that it would take significant time and money before the bureau could remove all nonlocation-specific functions from critical danger posts, as outlined in State's 2006 operational plan. State's Functional Bureaus Also Provide Remote Support Several functional bureaus within State provide remote support in financial management, information management, procurement, security, courier, medical, and other functions. Some of these operations are offered centrally from locations within the United States and others at overseas locations such as the regional center in Frankfurt. One example of a domestic support operation is the Global Financial Services Center within the Bureau of Resource Management, which has a central location in Charleston, South Carolina, and receives support from offices in Bangkok, Thailand, and Paris, France. The center is responsible for disbursement, payroll, accounting, cashier monitoring and training, customer support, and other financial management support for posts around the world. Additional examples of remote support from functional bureaus include the following: The Bureau of Information Resource Management sponsors Regional Information Management Centers, which provide telecommunications, network, systems, engineering, installation, and maintenance support to overseas posts from a number of locations. The Bureau of Administration operates the Regional Procurement Support Office, which provides contract and procurement services, as well as goods and services, to posts throughout the world for a fee. State's Bureau of Diplomatic Security provides regional engineering support and diplomatic courier operations to posts overseas. State also has various regional medical offices throughout the world that are administered by the Office of Medical Services.
State Developed an Operational Plan for Rightsizing State's fiscal year 2006 operational plan, Organizing for Transformational Diplomacy: Rightsizing and Regionalization, identifies post functions that can be performed remotely. The plan focuses on first removing nonlocation-specific functions—or functions that could potentially be removed from posts and carried out either from the U.S. or a regional center—from critical danger missions, where State officials said it is crucial to have as few personnel at posts as possible due to security concerns. The plan envisions eventually removing those functions from all overseas posts. Officials from the Office of Global Support Services and Innovation identified 78 nonlocation-specific functions and, in December 2005, State selected 16 of these functions that it planned to provide to critical danger posts from remote locations, according to officials. For a list of some of the nonlocation-specific functions that can be provided remotely, see table 3. State's operational plan includes goals and timelines for action. As of April 2006, State indicated that several initiatives to remove nonlocation-specific functions were under way at a number of posts; however, it is too early to assess State's progress in implementing the plan. In December 2005, State's Office of the Inspector General (IG) recognized State's operational plan as a good start and recommended that the Under Secretary for Management produce a Departmentwide master plan for formally accrediting regional centers. This recommended plan would include long-term capital construction requirements for housing and office space, standardized service expectations, and management structures that ensure accountability to serviced bureaus and posts. As of March 2006, officials from the Office of Rightsizing and the Bureau of Administration said they were beginning to address the IG's findings.
While officials from the executive offices of some of the regional bureaus told us that State's operational plan is on the right track, they cautioned that the implementation of the plan must take into consideration the various realities faced by posts in different regions of the world. For example, an official of the Bureau of African Affairs told us that many posts in Africa lack the technological capabilities to utilize remote support, which requires more processes to be done electronically. He cautioned that certain posts would need to obtain better bandwidth connectivity to handle online financial management transactions. In addition, officials from the bureau did not believe that the three African posts identified as critical danger posts would meet the strategy's March 2006 timeline to receive nonlocation-specific services remotely. Officials from the Bureaus of Near Eastern Affairs, South and Central Asian Affairs, Western Hemisphere Affairs, and East Asian and Pacific Affairs agreed that there is no one-size-fits-all approach to providing support remotely. An official from the Bureau of Western Hemisphere Affairs added that if more nonlocation-specific functions are moved from posts to remote locations, regional bureaus would have to release or shift many local staff who currently carry out those functions at posts and hire additional Americans in the United States or staff at regional service centers overseas. Officials in the Bureau of Western Hemisphere Affairs also pointed out that the various administrative bureaus within State, to which the workload related to remote support might be assigned, may not yet have the capacity to handle the additional work. For example, they said that the Bureau of Resource Management had not yet reported that it is ready to provide additional remote support in the area of financial management.
However, according to the officials, the Bureau of Information Resource Management is an example of a functional bureau that is committed to improving the way it provides information technology services to overseas posts and is standardizing its regional information management centers. State Department Faces Challenges in Its Plans to Increase Embassy Support from Remote Locations State is currently looking to move forward with its fiscal year 2006 operational plan for remote support; however, it faces several challenges that could hinder its further expansion of remote support services. In particular, restrictions on which management functions non-American staff may perform could limit the extent to which services can be provided remotely. In addition, one regulation requires original invoices for payment, which could hinder additional remote support provided electronically. Also, current funding arrangements for the various regional bureaus and posts might limit opportunities for remote support to be offered from one region to another. Finally, a reluctance to change further constrains opportunities to expand remote support. Limits on Non-American Staff Responsibilities Might Hinder Remote Support Officials at the posts we visited told us that empowering local staff could play a significant role in expanding remote support; however, such staff are limited in the types of support that they may provide. For example, while several officials stressed that there are certain tasks that, for reasons of national security, must be carried out overseas by security-cleared American citizens, some tasks, such as certifying vouchers, may be done by non-American staff. In fact, according to the Foreign Affairs Handbook (FAH), direct-hire, locally employed staff members who meet certain professional qualification criteria and have proven records of integrity and consistent superior performance may be designated to certify vouchers as Alternate Certifying Officers.
Several officials at the Florida center said that allowing such staff to certify with oversight from a regional officer could remove the need for American officers at some posts. However, we found a lack of clarity regarding this issue at several posts. In particular, several officials whom we spoke with in Washington and overseas either were unaware that non-American staff could certify vouchers or said there were limitations on which types of vouchers or what maximum monetary value those staff may be designated to certify. Additionally, State officials told us that other tasks, such as procurement, could also be carried out by non-American staff with oversight from an American regional officer if current regulations limiting their authority were changed. State is exploring this issue through a pilot program at Embassy Brussels to implement contracting authority for locally engaged staff. If successful and expanded, the program could free up American officers for essential operational and management controls activities, or potentially eliminate some American positions at posts, according to officials in Washington. Officials in Washington and at posts we visited said that State should reexamine its policies and determine, based on a risk-benefit analysis, what additional powers or responsibilities could be given to local, non-American staff, and then communicate that to posts. Existing Regulation Could Hinder Use of Technologies in Providing Remote Support State officials noted that, with the right technological applications, some administrative functions, such as the entire payment process, could be performed from a remote location with minimal involvement from posts. However, State faces challenges in making this transition due to a regulation that requires original invoices in processing payments.
State recognizes that leveraging today’s Web-based technologies and global business practices is essential to carrying out administrative functions remotely, and it reports that it is working aggressively with embassies and agencies to use technology and improved management methods to eliminate the nonessential U.S. government presence overseas. In addition, the Under Secretary for Management asked posts to move ahead with efforts to provide additional support remotely and to identify any legal or regulatory barriers, according to State officials. For example, State has waived the regulation requiring an original invoice in order to allow a pilot post being served by the regional center in Frankfurt to e-mail or fax vouchers, invoices, and other supporting documentation to Frankfurt for certification of payment and submission to the Global Financial Services Center for disbursement. However, this pilot is not yet under way due to resistance from officials who believe that there should be a financial management officer at every post, according to State officials in Washington. In addition, the pilot post—Nicosia, Cyprus—lacked the bandwidth capabilities necessary for the electronic transactions at the time of our study, according to officials. Funding Structures Complicate Remote Support Efforts Current State bureau funding structures might limit the application of remote services. Since regional centers are currently funded primarily by their respective regional bureaus, it is commonly believed that it is difficult for posts to cross bureau lines to obtain regional services, according to officials from the regional bureaus in Washington. This makes it difficult, for example, for the Florida Regional Center to provide services to a post not covered by the Bureau of Western Hemisphere Affairs. Another example is the Bureau of African Affairs’ employment of staff in Paris to provide financial support to posts in Africa. 
The bureau believes these employees are ideally suited for this work because of their financial management expertise, their French-speaking skills that are necessary to serve many African posts, and their access to transportation links to Africa. We asked whether these staff could also serve some North African posts, which are even closer geographically to Paris and where French is also widely spoken. But we were told that this is not currently possible, largely because the posts in North Africa are not within the Africa Bureau, and funding structures to cross regional bureaus have not yet been established. State's IG recently pointed out that a Departmentwide plan clarifying the resources and funding structures for regional centers would add needed coherence to State's rightsizing efforts. Several examples demonstrate that State is trying to address the issues involved with financing remote support. For example, the International Cooperative Administrative Support Services (ICASS) Executive Board approved a proposal to begin charging customer agencies for regional services and to enable posts to utilize regional center services outside their regional bureau. Furthermore, remote services are already beginning to cross regional boundaries. For example, the Florida Regional Center recently added to its portfolio Hamilton, Bermuda, a post that belongs to the Bureau of European and Eurasian Affairs, because the Florida center is geographically closer to Hamilton than is the Regional Support Center in Frankfurt. Under this arrangement, the Bureau of European and Eurasian Affairs currently pays for the regional manager's travel to post. Reluctance to Change Hampers Remote Support Efforts State officials pointed out that management officials at overseas posts might be reluctant to accept support remotely rather than having an American at post to provide the support.
For example, officials at the Florida Regional Center have made two proposals to expand the center's support in financial management and human resources and have identified posts, with similar characteristics to those currently receiving support (see table 2), that would benefit from remote support. One proposal calls for empowering locally employed staff, with oversight from a regional manager at the Florida center, to certify vouchers, which would eliminate the need for a full-time American financial management officer at post. However, officials in Washington and at some posts we visited overseas told us that most posts are reluctant or unwilling to give up their American management officers because they prefer to have direct access to them. Officials told us that post receptivity to such remote support proposals depends on management's willingness to relinquish some of its current positions, as well as the assurance from the regional bureaus in Washington, D.C., that the regional service centers would have the resources to provide additional support. For example, Haiti was recently identified as a post that could utilize financial management support from the Florida center but, according to officials from the Bureau of Western Hemisphere Affairs, senior management at the post would not relinquish the American staff position. In addition, State reports resistance to change from a number of its bureaus. For example, officials from the Bureau of Resource Management (as well as some officials overseas) believe that having fewer management staff at posts overseas could increase internal control vulnerabilities and that there should be an American financial management officer at all overseas posts.
Additionally, in its technical comments on this draft, the Office of Global Support Services and Innovation said that, while developing the pilot programs to remove nonlocation-specific functions from critical danger posts, such as Haiti, the regional bureaus were reluctant to impose this experiment on posts already under such stress. This reluctance, along with State’s desire to expand remote support to the largest possible number of posts, has led State to consider all posts, not just critical danger posts, for implementation of such pilot programs, according to the office. Providing Support Remotely Offers Potential Advantages, but Cost Analyses and Performance Measures Are Needed According to State officials, there are several potential advantages to providing administrative support to posts from remote locations rather than at individual posts, including potential cost savings, enhanced security for American personnel, and improved quality of administrative support. However, at the time of our review, State had not conducted analyses of the cost advantages associated with providing administrative support remotely rather than at posts and had no systematic performance measures and feedback mechanisms in place to assess the quality of support provided. Support Provided from Remote Locations Could Offer Advantages in Mission, Cost, and Security We have identified several examples to demonstrate the potential advantages, in terms of financial benefits, enhanced security for American personnel, and improved quality of administrative support, of posts receiving support remotely. The first example demonstrates the advantages of providing remote support from a regional service center located in the United States. The second example depicts the advantages associated with providing support from a regional service center located overseas. Finally, the third example illustrates the advantages associated with locally employed staff providing remote support to posts. 
There are also several issues of concern relating to remote support, namely the quality of services, though these issues require further analysis. Providing Support from the United States According to officials at the Florida center, assigning certain duties to regional officers based in the United States is one way to save money while retaining the expertise of a foreign service officer. Officials told us there are cost savings associated with having one regional officer perform the duties of several officers who would otherwise be assigned to posts. Officials told us that eliminating the need for American officers overseas could result in cost savings after factoring in offsetting costs, such as costs for travel and technology enhancements, to accommodate the change. For example, each overseas position costs approximately $400,000, according to an average computed by State’s Bureau of Resource Management for fiscal year 2007. This amount includes salary, benefits, and support costs plus a number of costs that apply only to officials overseas, such as housing allowances; educational allowances for their children; and additional pay, such as danger pay, depending on which region of the world the officer is located. It also includes costs for providing a secure building for the officers to work in overseas. By assigning regional officers in the United States, State could avoid such costs, which do not apply to personnel stationed domestically. Although officials have not conducted a formal cost comparison to assess the size of the potential savings, they believe the potential savings could be in the millions of dollars. For example, in 2002, the U.S. Embassy in Nassau, Bahamas, requested a full-time American financial management officer at post to handle its financial management workload, according to the post management officer. 
To avoid the additional costs associated with posting a financial management officer in Nassau, officials from the Bureau of Western Hemisphere Affairs said the bureau instead assigned a regional officer from the Florida Regional Center to assist the Nassau post management officer who handles a variety of financial management responsibilities, such as certifying vouchers. The total cost for the Florida-based regional officer would be his salary and benefits plus travel costs of about $60,000, according to the center's officials, which includes travel to Nassau and three other posts also served by that officer. In addition to cost efficiencies, officials said the Florida Regional Center's model of support would enhance security, while the quality of support would not suffer from the change. Officials told us that U.S. officials, in general, are much safer living and working in the United States than at overseas posts. In addition, staff at both posts we visited said that the support the posts received from the Florida Regional Center was generally satisfactory and met post needs. One management officer said that the regional managers were highly experienced and competent in their functional areas, which led to a high level of quality support. Officials at the Florida Regional Center added that, in cases where a regional center is located within the United States, civil servants or retired employees could also be used as a cost-effective way of providing remote support, when feasible. Another potential advantage of assigning civil service or retired employees to provide remote support would be continuity, as they would not be required to transfer every 2 to 3 years as foreign service officers do. Providing Support from an Overseas Regional Center According to officials in Washington and overseas, potential advantages also could arise from providing support remotely from a regional service center overseas.
For example, approximately 20 posts in Europe and Eurasia have requests in their Mission Performance Plans for an American financial management officer at post, according to the Deputy Assistant Secretary for Global Financial Services. To eventually avoid assigning such new staff to posts overseas, State is piloting a project to determine whether it can remotely certify vouchers in Frankfurt by using scanned rather than original documents. Center officials said that there would be a savings in cost and space and gains in security at those posts where this concept of remote certification removes the need for an American financial management officer position overseas. For example, while some posts in Europe and Eurasia do not have facilities that meet security standards, the Regional Support Center in Frankfurt is located in a safe facility that meets security standards, including a 100-foot setback between office facilities and uncontrolled areas, and controlled access at the perimeter of the compound. Also, officials said that posts could receive highly skilled and experienced financial oversight from the center. Officials acknowledged that it is costly to operate from the Frankfurt facility because of local wage rates and the cost of living allowance for U.S. staff. However, they believe that high operating costs would likely be outweighed by a combination of factors, including the potential efficiencies achieved at posts served by the regional facility and the eventual reduction in staff needed at posts overseas due to the remote support offered from Frankfurt. However, center officials said they had not performed cost analyses to demonstrate whether servicing posts from Frankfurt was cost-effective, and they agreed that such an analysis would be useful. Providing Support Using Non-American Staff Rather than Americans State officials told us that using non-American staff to provide remote support offers several advantages.
For example, State uses these staff in the Foreign Service National Executive Corps and Paris Rovers Programs. The Foreign Service National Executive Corps, one method of providing remote support, is used by the Bureau of European and Eurasian Affairs to leverage in-house resources to benefit smaller missions throughout the world, according to officials at the Frankfurt center. Corps members are locally employed staff, from a variety of posts throughout the various regional bureaus, who are highly experienced in various administrative functions and can assist, train, and mentor staff at posts in areas such as facilities maintenance, financial management, general services (such as procurement), human resources, and information management. State officials told us that, by using the corps members to provide remote support, State has avoided the assignment of additional American officers overseas. The Paris Rovers Program, another means of providing remote support by using non-American staff, is cost-efficient and effective, according to officials from the Bureau of African Affairs. The program operates with six locally employed staff—five of whom are based in Paris—serving as financial management experts for about 44 posts in Africa, many of which either have first-tour financial management officers or no full-time American financial management officers. The rovers are experts in post budget needs and cashier problems and spend much of their time providing on-the-job training to staff at posts, as well as occasionally filling post staffing gaps. According to bureau officials, by educating first-tour officers in the use and management of appropriated funds and reviewing financial management reports, the Paris rovers provide needed financial management internal control oversight, which likely reduces financial losses to the bureau.
In addition, bureau officials said they are committed to not sending an American to post when there is no need to do so, due to the security risk levels of many posts in Africa. Recently, several posts in Africa, including Bangui in the Central African Republic, have requested American financial management officers, according to an official from the Bureau of African Affairs. To avoid hiring a financial management officer for Bangui, the bureau added Embassy Bangui to the Paris Rovers Program. Although the bureau has not determined the full potential of the program, its initial data demonstrate that the operation is cost-effective. According to bureau officials, the total cost of the six-employee rover program in 2005 was about $934,000, including employee salaries and travel costs. The Bureau of African Affairs prepared an estimate, at our request, of what it would cost to provide financial services without the Paris-based rover operation. The bureau estimated that it would have to spend over $1 million to fund three additional U.S. officer positions and three part-time employees, slightly more than the cost of the Paris operation. Officials agreed that a more detailed cost analysis could demonstrate whether the program is clearly cost-effective and therefore should be expanded to cover additional posts. In Addition to Advantages of Remote Support, Several Concerns Exist Despite overall satisfaction with regional support, management officers and locally employed staff at the posts we visited mentioned a few issues of concern relating to the quality of remote support, including timeliness and the distribution of assistance. One management officer said that it once took 4 weeks for his regional financial management officer to respond to him on a certain issue, by which time the issue was no longer relevant.
Another management officer agreed that posts are subject to regional officers' availability, and when an officer is not at a post, an issue may take too long to resolve. Officials at regional centers told us that the quality of partnering support was not as good as the service provided by a regional center. One management officer told us that an officer with regional responsibilities who is located at a post will likely prioritize the home post's issues over the needs of other supported posts. In addition, State's recent IG inspections found substandard regional support at smaller posts in Africa where partnering is used, and often recommended updating the memorandum of understanding to delineate regional support expectations. However, at the time of our review, State did not have performance data for remote support. An official from the Bureau of European and Eurasian Affairs told us that, absent performance measures and feedback tools to ensure customer satisfaction, accountability, and adequate internal controls, customer service could decrease when a service provider is located outside of the post. In addition, an official from the Global Financial Services Center in Charleston and other officials overseas reported concerns that fewer on-the-ground American management staff could increase internal control vulnerabilities. For example, some officials believe that there needs to be an American financial management officer at every overseas post to prevent fraud, waste, and mismanagement of funds. According to GAO's Internal Control Management Evaluation Tool, government agencies should formulate an approach for risk management and decide upon the internal control activities required to mitigate risks that could impede the efficient and effective achievement of objectives.
The approach should be modified to fit the circumstances, conditions, and risks relevant to the situation of each agency and should also consider the type of mission being performed and the cost/benefit aspect of a particular control item. In this example, State would weigh the potential internal control risks of allowing non-American staff to certify vouchers and carry out other financial management activities against the costs of having an American at every post to carry out such functions. Cost Analyses and Performance Measures Are Needed At the time of our review, State had not conducted analyses of the costs associated with providing administrative support at posts versus providing it remotely. In addition, State lacked systematic performance measures and feedback mechanisms to assess the quality of support provided. Further, officials whom we interviewed from several posts were not aware of the types of remote support that could be made available to them and said they would be more willing to use it if the cost and quality of available services were documented. Cost Analysis Would Be Useful in Determining Whether to Provide Support Remotely At the time of our review, State had not conducted cost analyses to show potential cost efficiencies, such as those outlined in the examples described earlier, of providing support to overseas posts remotely. Officials we talked to in Washington, at the regional centers, and at some posts we visited said that cost analyses would be useful in deciding how to provide support remotely. For example, the Deputy Director of the Florida Regional Center told us that there had been no analysis on how much money has been saved by serving posts from the Florida center rather than having management officials at the posts, and he said that such a study would be useful, not only for the Bureau of Western Hemisphere Affairs, but also for other regional bureaus when they consider using regional centers to provide remote support.
Cost analyses were not incorporated into State's 2006 operational plan for rightsizing. The plan recognizes that additional resources, such as facilities and staff, would be needed to implement the plan. However, it does not address any of the cost savings or efficiencies that could be achieved by providing remote support from regional centers or the United States and whether the savings would exceed the cost of additional resources. A cost analysis would include the various costs and alternatives associated with providing remote support through regional service centers in the United States or overseas. Such cost components would include the various direct and other personnel and support costs associated with providing support at a post. It would weigh these costs against costs required to facilitate remote support, such as travel expenses; costs for technology enhancements, such as improved bandwidth connectivity; costs for new or expanded facilities and other related expenses to accommodate increased staff at existing or new regional centers; costs for changes in local staffing or staffing in the United States; and other costs. Performance Measures and Feedback Mechanisms Needed The concerns with remote support described earlier—particularly relating to quality of services—underscore what officials indicated at both regional centers and all four serviced posts that we visited: that performance measures and customer feedback processes would be useful in rating the current level of customer support and oversight. Officials also said that performance measures and customer feedback processes would be essential for making decisions about expanding remote support. Officials from State's Office of Rightsizing said that, before agreeing to any change, posts would first want proof that remote support provides the same level of customer service as support provided at posts.
For example, the Executive Director of the Bureau of Near Eastern Affairs said that the bureau would be willing to use remote support from regional centers, such as the Regional Support Center in Frankfurt, if the cost was reasonable and the quality and reliability of service were demonstrated to be high. He said that, to convince decision-makers about the quality of remote support, all regional centers need to have standards of performance with metrics and data to demonstrate that offering services regionally or centrally, rather than at individual posts, results in adequate service and internal controls. An official from the Bureau of European and Eurasian Affairs said one performance metric could be the amount of time it takes for a voucher to be processed. One post management officer suggested that a performance measure, such as a required weekly telephone call to a serviced post by the regional officer, would be another way support could potentially be improved from the Florida center. State has recognized the need for performance measures and customer feedback mechanisms in its operational plan, but has not yet developed them. However, during our review, one regional bureau developed a customer service survey. Six months after our visit in June 2005, the Florida Regional Center sent customer satisfaction surveys to the posts it provides with regional financial management and human resources support. The survey asked management officers at posts to note the frequency and duration of visits by a regional officer to a post, as well as the frequency of communication between the officer and posts, and to rate the level of guidance and supervision provided by the officer to the local staff. At the time of our review, the Florida center had not yet completed an analysis of the results of the surveys; however, according to officials at the center, the respondents had favorable views of the center’s services.
Lack of Awareness of Remote Support Opportunities Limits Their Use Various initiatives to provide support remotely are occurring within the multiple regional bureaus; however, how they are integrated and communicated at a Statewide level is not clear. Several management staff at the posts we visited and those we interviewed by telephone were not fully aware of all the services they could utilize from a remote location. For example, management officers stationed in Asia and Africa said they lack information on what types of support could be provided remotely and how to access that support. Some officials indicated that it would be helpful for them to know the full extent of remote support available, and whether it results in cost efficiencies and effective service, in order to make an informed decision about whether to utilize it. In addition, we found that regional centers were not always fully communicating the types of services and support available to posts, either within their region or across regional bureaus. The Executive Director of the Bureau of Near Eastern Affairs said he would consider using regional support from Frankfurt if he knew the full range of services that were offered there, the quality of customer service, and the potential costs of services. State officials at the Regional Support Center in Frankfurt agreed that while they do talk to post officials, particularly at management conferences, about the regional services that Frankfurt offers, they could do a more comprehensive job of documenting and marketing the full range of services and expertise provided by the regional support center. State officials in Washington and overseas told us that communication is the key to ensuring that efforts to expand remote support are maximized, and that a dialogue has recently begun.
In particular, the Office of Global Support Services and Innovation and the Office of Rightsizing have set up a Regional Initiatives Council to discuss ongoing efforts to provide remote support in each regional bureau. According to State officials, recent discussions at such meetings have centered on whether or not to set up consolidated administrative service centers, called Centers of Excellence, within the regional bureaus to provide certain management-related functions, such as human resources or travel administration, for posts around the world. For example, a dialogue already has begun regarding how to use existing resources to provide additional remote support from Bangkok for posts in East Asia and the Pacific. Conclusions By providing administrative support remotely, State has the potential to reduce costs and improve customer service. However, State has not conducted cost analyses nor established systematic performance measures and feedback mechanisms to demonstrate the full potential of providing support remotely. Without data depicting the range of implications— relating to cost, efficiency, security, and quality of services—involved with providing and receiving support remotely, decision-makers lack the tools to make informed decisions about investing staff and resources at individual posts or at regional centers overseas and in the United States. Recommendations for Executive Action As State moves forward with its plan for expanding remote support and attempts to overcome institutional resistance to this process, it would be useful to concurrently assess and promote the potential full advantages in providing embassy support from remote locations, including potential cost reductions, improved services, or enhanced security for foreign service officers. 
Therefore, we recommend that the Secretary of State take the following three actions: Identify and analyze the various costs associated with providing support at individual posts versus at regional service centers in the United States or overseas; Develop systematic performance measures and feedback mechanisms to measure the quality and customer satisfaction of support services provided remotely; and Use the cost analyses and feedback on quality and customer satisfaction to inform post management of which services could be offered remotely, the various costs involved, and the quality of services offered; to consider ways to improve the quality of remote support; and to determine whether additional posts, including posts that are requesting new U.S. officer positions in management functions, might be logical candidates for receiving remote support. We also encourage State to continue reviewing challenges to providing support remotely and finding ways to overcome them. Agency Comments and Our Evaluation We provided a draft of this report to the Department of State for comment. State’s comments, along with our responses to them, can be found in appendix II. State generally concurred with the report’s substance and findings and indicated that it is taking steps to implement all of our recommendations. State agreed that a more systematic and rigorous costing model would be beneficial in determining whether or not providing support from regional centers is cost-effective. State also agreed that systematic performance measures and feedback mechanisms are needed to measure the quality of and satisfaction with remote support, and State plans to strengthen its efforts in this area as part of its plans for providing support remotely. State added that the Office of Rightsizing would coordinate the development of a customer-focused service standard for regional centers.
Lastly, State said that it plans to use more consistent and accurate data in making decisions to improve its remote support services. The department also provided a number of technical comments, which have been incorporated throughout the report, where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to other interested Members of Congress, the Library of Congress, and the Secretary of State. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Jess Ford at (202) 512-4268. Other GAO contacts and staff acknowledgments are listed in appendix III. Scope and Methodology To describe the Department of State’s (State) progress in providing administrative support from remote locations, we reviewed documents from the Office of Rightsizing and the Office of Global Support Services and Innovation, including its operational plan for rightsizing and regionalization. We spoke with officials at State’s various regional and functional bureaus in Washington, D.C., to discuss the efforts each bureau has taken to provide administrative support to overseas posts, whether from regional service centers overseas, from the United States, or from other posts through partnering. To assess regional support provided from the United States to overseas posts, we met with senior management and regional staff at the Florida Regional Center in Fort Lauderdale, Florida, and the Global Financial Services Center in Charleston, South Carolina. 
We also met with senior management, regional staff, and locally employed staff at the overseas Regional Support Center in Frankfurt, Germany, to review remote support provided from an overseas regional service center. We focused our efforts on evaluating the various ways in which financial and personnel support are provided by the various regional bureaus. We did not perform an evaluative audit of the regional support provided by functional bureaus, consular affairs, or the Model for Overseas Management initiative because those operations either have been recently inspected by the Office of the Inspector General or did not fit into the scope of our work. To assess some of the regulatory challenges that State faces in expanding regional support, we reviewed foreign affairs regulations for carrying out administrative functions overseas. This included a review of regulations on what functions locally employed staff can carry out in the areas of procurement and payments. We also reviewed the regulations pertaining to the use of original documentation in processing payments and State’s proposal to waive that regulation. To identify the potential advantages of providing support remotely, we met with ambassadors, deputy chiefs of mission, management officers, and other U.S. embassy staff, including locally employed staff at various posts that receive remote support from either the Florida Regional Center or the Frankfurt Regional Support Center. We chose Belize City, Belize, because it is a small post supported by the Florida center and Nassau, Bahamas, because it is the largest post supported by the center, in terms of the number of staff and size of budget, according to an official at the Florida center. We chose Valletta, Malta, because it is a small post supported by the Frankfurt center, and it recently conducted a rightsizing review, which addressed remote support issues.
We chose Helsinki, Finland, because it represents a medium-sized post supported by the Frankfurt center and because it was originally the post chosen for the pilot project to certify vouchers remotely, according to officials in the Bureau of European and Eurasian Affairs. We also visited Mexico City to talk to embassy officials about how the U.S. mission to Mexico has been rightsized and how the embassy provides support to consulates throughout the country. Lastly, in order to explore the advantages of using locally employed staff in providing remote support, we met with officials in Paris, France, to discuss the financial support that locally employed staff provides to posts in Africa. Because our interviews were limited to only a few posts that received regional support, we did not generalize the results of our interviews to the universe of posts receiving regional support. We reviewed the post profiles of the four posts we visited to demonstrate the staffing and other characteristics of posts currently using regional support and verified the data with the post management officers. We also reviewed cost data from the Bureau of Resource Management and the various regional bureaus to estimate the average cost of placing one foreign service officer at an overseas post, including personnel and support costs, and costs that apply only to officers located overseas. For reporting purposes, we rounded the bureau’s estimate of $393,000 to $400,000 for the cost of an American officer overseas. We conducted (1) a data reliability assessment of the data using sample cost data from the posts we visited; (2) interviews with officials from the regional bureaus and the Bureau of Resource Management; and (3) discussions with the Office of Rightsizing at State and the Office of Management and Budget, and we determined the data to be sufficiently reliable for the purposes of this engagement. 
In addition, we developed a structured interview instrument and conducted telephone interviews with management staff at overseas posts that have recently conducted a rightsizing report, which is required by Congress. We administered structured interviews between February and March 2006 by telephone. We primarily spoke with management counselors or management officers at overseas posts. In one case, we spoke with a deputy chief of mission at the post. We conducted interviews with 20 of 22 posts that were tasked to complete the rightsizing review in the fall 2005 cycle: Asuncion, Baku, Bandar Seri Begawan, Bucharest, Bujumbura, Colombo, Harare, Jakarta, Karachi, Kiev, Krakow, Maputo, N’djamena, Pretoria, Reykjavik, Rome, Santo Domingo, St. Petersburg, Taipei, and Tunis. The structured interview contained open- and closed-ended questions about guidance, timing, the review process, rightsizing considerations, headquarters’ involvement and feedback, and the impact of the review on the post. The interview instrument included questions regarding whether or not post management staff were both aware of and using regional support services. We developed the interview questions based on our review of rightsizing documentation and discussions with post officials during field work in Mexico City and Valletta. We provided an early version of the questions to State’s Office of Rightsizing and Office of Global Support Services and Innovation for their review and comment, and we also pretested the interview with three current management officers to ensure that the questions were clear and could be answered. We modified the interview questions on the basis of the pretest results and an internal expert technical review. We provided the management officers and deputy chief of mission with the interview questions in advance to allow them time to gather any data or information necessary for the interview. We also conducted follow-up discussions with posts as needed. 
The responses of the structured interviews are not intended to be representative of all posts. We performed our work from June 2005 until April 2006 in accordance with generally accepted government auditing standards. Comments from the Department of State The following are GAO’s comments on the Department of State’s letter dated April 13, 2006. GAO Comments 1. We are conducting a separate review of the consolidation of State and USAID support activities at overseas posts. We plan to issue a report on our findings later in 2006. 2. We recognized the efforts of the Florida Regional Center to measure customer service satisfaction with a survey and state this in our final report. We also acknowledged that State has recognized the need for performance measures and customer feedback mechanisms in its operational plan but has not yet developed them. We encourage State to develop performance measures and customer feedback mechanisms in its operational plan for all posts providing and receiving remote support, and not only for selected posts, such as those that receive support from the Florida Regional Center. We encourage State to use tools such as the ICASS Service Center annual survey to compare local support with remote support and identify areas where remote support could be improved. 3. We agree that the support the embassy in Mexico City provides to nine consulates throughout Mexico is a good example of providing support remotely, and we added this example in our final report. GAO Contact and Staff Acknowledgments In addition to the person named above, Joseph Carney, Lyric Clark, Martin De Alteriis, Ernie Jackson, Andrea Miller, Deborah Owolabi, José M. Peña III, and Michelle Richman made key contributions to this report.
government staffing overseas by designating the achievement of a rightsized overseas presence as a part of the President's Management Agenda. One of the elements of rightsizing involves relocating certain administrative support functions from overseas posts to the United States or regional centers overseas, which can provide cheaper, safer, or more effective support. This report (1) reviews State's efforts in providing administrative support from remote locations, (2) identifies the challenges it faces in doing so, and (3) outlines the potential advantages and concerns associated with providing support remotely. State has a number of regional and domestic offices that provide some management support remotely to overseas posts in areas such as financial management and human resources. For example, State's Bureau of Western Hemisphere Affairs provides support to posts in its region through staff based in Florida. State announced in October 2005 it would identify and remove additional functions that do not need to be performed at post and could instead be performed domestically or at regional centers overseas. State faces several challenges in trying to expand its use of remote support. For example, restrictions on what management functions non-American staff can perform might limit the extent to which services can be provided remotely. In addition, current funding arrangements for various regional bureaus and posts might limit opportunities for remote support to be offered from one region to another, while posts' reluctance to change is a further constraint. State is assessing whether certain regulations could be waived or changed and how institutional challenges might be overcome. There are several potential advantages to providing administrative support to posts from remote locations, and several concerns. 
For example, one U.S.-based officer provides financial management support to multiple overseas posts, eliminating the need for an American financial management officer at each post served, which, according to State, could result in cost savings. Officials at posts we visited reported they were generally satisfied with the level of support and customer service at a regional or domestic service center, though some noted concerns. However, at the time of our review, State had neither analyzed the potential cost savings associated with providing remote support nor systematically assessed the quality of support provided. In addition, many officials in Washington and overseas were unaware of the full breadth of support offered by regional service centers. |
Background Congress first proposed providing flood insurance in the 1950s, after it became clear that private insurance companies could not profitably provide flood coverage at a price that consumers could afford, primarily because of the catastrophic nature of flooding and the difficulty of determining accurate rates. In 1968, Congress created NFIP to help reduce escalating costs of providing federal flood assistance to repair damaged homes and businesses. According to FEMA, NFIP also was designed to address the policy objectives of identifying flood risk, offering affordable insurance premiums to encourage program participation, and promoting community-based floodplain management. To meet these policy objectives, NFIP has four key elements: identifying and mapping flood risk, floodplain management, flood insurance, and incentivizing risk reduction through grants and premium discounts. NFIP Flood Hazard Mapping and Mitigation Through NFIP, FEMA maps floodplain boundaries and requires participating communities to adopt and enforce floodplain management regulations that mitigate the effects of flooding and reduce overall costs. According to FEMA, floodplain management standards are designed to prevent new development from increasing the flood threat and to protect new and existing buildings from anticipated flooding. FEMA has a division responsible for flood mapping activities and policy and guidance, but stakeholders from all levels of government and the private sector participate in the mapping process. For instance, FEMA relies on local governments to provide notice of changes in communities that can pose new or changed flood hazards and works with localities to collect the information needed to update flood maps. FEMA’s Flood Insurance Rate Maps serve several purposes. They provide the basis for setting insurance rates and identifying properties whose owners are required to purchase flood insurance. 
For example, since the Flood Disaster Protection Act of 1973, as amended, homeowners with federally backed mortgages or mortgages held by federally regulated lenders on property in an SFHA are required to purchase flood insurance. Others may purchase flood insurance voluntarily if they live in a participating community. The maps also provide the basis for establishing floodplain management standards that communities must adopt and enforce as part of their NFIP participation. As of February 2017, 22,235 communities across the United States and its territories voluntarily participated in NFIP by adopting and agreeing to enforce flood-related building codes and floodplain management regulations. FEMA has stated that resilience to flooding is a key objective of NFIP. Broadly speaking, resilience is the ability to prepare and plan for, absorb, recover from, and more successfully adapt to actual or potential adverse events. Resilience is closely linked with flood mitigation activities. For example, FEMA estimated that its floodplain management efforts resulted in avoidance of $1.87 billion in flood losses annually, and FEMA officials said they expect this amount to increase over time as additional new construction is built to increasingly better standards. FEMA supports a variety of flood mitigation activities that are designed to reduce flood risk and thus NFIP’s financial exposure. These activities, which are implemented at the state and local levels, include hazard mitigation planning; the adoption and enforcement of floodplain management regulations and building codes; and the use of hazard control structures such as levees, dams, and floodwalls or natural protective features such as wetlands and dunes. FEMA provides community-level mitigation funding through grant programs. 
At the individual property level, mitigation options include elevating a building, relocating the building to an area with lower flood risk, or purchasing and demolishing a building and turning the property into green space. Another tool FEMA uses to incentivize efforts to reduce flood risk is the Community Rating System. The Community Rating System is a voluntary incentive program that recognizes and encourages community floodplain management activities that exceed the minimum NFIP requirements. As a result, flood insurance premium rates are discounted to reflect the reduced flood risk resulting from community actions that meet the three goals of reducing flood damage to insurable property, strengthening and supporting the insurance aspects of NFIP, and encouraging a comprehensive approach to floodplain management. NFIP Coverage, Premium Rates, and Rate Setting Insurance offered through NFIP includes different coverage levels and premium rates, which are determined by factors that include property characteristics, location, and statutory provisions. NFIP coverage limits vary by program (regular or emergency) and building occupancy (for example, residential or nonresidential). In NFIP’s regular program, the maximum coverage limit for 1-4 family residential policies is $250,000 for buildings and $100,000 for contents. For nonresidential or multifamily policies, the maximum coverage limit is $500,000 per building and $500,000 for the building owner’s contents. Separate coverage is available for contents owned by tenants. To set premium rates, FEMA considers several factors including location in flood zones, elevation of the property relative to the community’s base flood elevation, and characteristics of the property such as building type, number of floors, presence of a basement, and the year a structure was built relative to the year of a community’s original flood map. 
Additionally, FEMA allows policyholders to pay lower premiums if they opt for higher deductible amounts. Most NFIP policies are deemed by FEMA to carry full-risk rates, while some are less than full-risk (subsidized). FEMA defines full-risk rates as those charged to a class of policies that generate premiums sufficient to pay the group’s anticipated losses and expenses. According to FEMA, these rates are based on the probability of a range of possible floods, damage estimates based on that level of flooding, and accepted actuarial principles. FEMA staff noted that approximately 80 percent of FEMA’s policyholders pay full-risk rates. Subsidized rates do not fully reflect the risk of flooding but are intended to provide policyholders with more affordable premiums while encouraging floodplain management in communities and the widespread purchase of flood insurance. Generally, subsidized policies cover properties in high-risk locations that otherwise would have been charged higher premiums and that were built before flood maps became available and their flood risk was clearly understood. FEMA staff said they had begun increasing rates for certain subsidized properties as prescribed under the Biggert-Waters Act and HFIAA. This included increased rates for subsidized policies covering businesses, nonprimary residences, severe repetitive loss properties, and substantially damaged/substantially improved properties as required by the Biggert-Waters Act. In addition, HFIAA required increased rates for subsidized policies covering primary residences. When setting subsidized rates for individual properties, FEMA staff said they also consider flood risk, previous rate increases, and statutory limits on rate increases. FEMA also allows some policyholders to receive grandfathered premium rates, which allow policyholders who have been mapped into higher-risk flood zones to pay the lower premiums associated with their previous lower-risk flood zone.
FEMA officials said that in the aggregate, policy classes that contain grandfathered policies collect enough in premiums to reflect the full risk of loss for that class, but as we have previously reported, FEMA does not yet possess the data necessary to verify this. NFIP Funding and Borrowing Authority FEMA funds NFIP primarily through the insurance premiums paid by policyholders. In addition to covering insurance claims, NFIP premiums also are intended to cover outreach, research, and operating expenses. FEMA also charges a Federal Policy Fee on NFIP policies that helps fund efforts that include mitigating flood risk on properties covered by NFIP policies and developing and maintaining flood maps. In addition, FEMA received appropriations of $190 million in fiscal year 2016 for mapping and other specified statutory requirements. Congress authorized FEMA to borrow from Treasury when needed, up to a preset statutory limit. Originally, Congress authorized a borrowing limit of $1 billion and increased it to $1.5 billion in 1996. Following the catastrophic hurricanes of 2005, Congress amended FEMA’s borrowing authority three more times to more than $20 billion. After Superstorm Sandy in 2012, Congress increased FEMA’s borrowing authority to $30.425 billion. In January 2017, FEMA borrowed an additional $1.6 billion, increasing the total debt to $24.6 billion. Before 2005, NFIP was mostly self-sustaining, only using its borrowing authority intermittently and repaying the loans. Figure 1 shows outstanding debt from 1995 through 2017. Recent Legislative Reforms to NFIP The Biggert-Waters Act affected many aspects of NFIP. 
For example, it required FEMA to increase rates at 25 percent per year until full-risk rates were reached for certain subsidized properties, including secondary residences, businesses, and severe repetitive loss properties; increase rates over a 5-year period to phase out grandfathered policy rates; prohibit subsidized rates for properties purchased after, or not insured, as of July 6, 2012; create a reserve fund that would maintain at least 1 percent of the total annual potential loss exposure faced by NFIP based on outstanding flood insurance policies in force in the prior fiscal year; improve flood risk mapping; and develop new methods related to compensation for companies that sell, write, and service flood insurance policies; that is, Write Your Own (WYO) insurers. However, concern over rapid rate increases led to the passage of HFIAA in 2014, which repealed or altered portions of the Biggert-Waters Act. HFIAA reinstated certain rate subsidies removed by the Biggert-Waters Act, including those for properties purchased after, or not insured, as of July 6, 2012. For these properties, and certain others, rates would rise by at least 5 percent per year. HFIAA also established a new subsidy for properties that are newly mapped into higher-risk zones. The subsidy is phased out for individual properties over time. HFIAA also restored grandfathered rates and generally limited yearly increases in property-specific rates to 18 percent. In addition, HFIAA created a premium surcharge that would be deposited into the reserve fund (generally, $25 for primary residences and $250 for others). As of November 2015, the last time we reviewed the implementation of these acts, FEMA estimated that it had met the requirements for almost two-thirds of the Biggert-Waters Act provisions and about half of the HFIAA provisions and was taking actions on others. Table 1 provides additional detail of some of the selected requirements of the two laws.
Policy Goals for Evaluating Potential Options for Reforming Flood Insurance Using the input of stakeholders and based on our prior work, we identified five policy goals for the flood insurance program: (1) promoting flood risk resilience, (2) minimizing fiscal exposure to the federal government, (3) requiring transparency of the federal fiscal exposure, (4) encouraging consumer participation in the flood insurance market, and (5) minimizing transition and implementation challenges. For each goal we identified several characteristics that illustrate how potential reform proposals might help meet the objective of the goal (see table 2). Potential Comprehensive Reform of NFIP Would Require Actions in Six Key Areas Our review of literature and prior GAO reports and interviews, a questionnaire, and roundtable discussions with industry and nonindustry stakeholders identified a number of potential reform actions that can be considered to improve NFIP’s solvency and enhance the nation’s resilience to flood risk. These potential reform actions fall into the following six areas: (1) outstanding debt, (2) premium rates, (3) affordability, (4) consumer participation, (5) barriers to private-sector involvement, and (6) NFIP flood resilience efforts. However, actions for reform in one area have implications that could affect reform actions in other areas. Therefore, it is necessary to consider flood insurance reform comprehensively. In the following sections, we present information for each of the six areas, including why reform is needed, the potential actions we identified from our review, and the potential implications of each of these reform actions that will need to be considered. Eliminating Outstanding Debt According to industry and nonindustry stakeholders with whom we spoke, potential reform actions will need to address the $24.6 billion debt to Treasury. Servicing the debt puts a strain on NFIP operations and burdens current policyholders. 
If the debt were eliminated, FEMA could reallocate funds used for debt repayment for other purposes such as building a reserve fund and program operations. Any reforms related to the debt also have potential implications for issues such as premium rates and consumer participation. These implications are discussed in the following relevant sections. Outstanding debt. FEMA’s $24.6 billion outstanding debt, as of March 2017, represents a significant financial obligation for the program and making principal and interest payments on that debt has tied up funds that might otherwise have been used for program operations. Since FEMA initially borrowed $17.5 billion to pay losses from the 2005 catastrophic flood events, FEMA has paid about $6.3 billion in principal and interest on its outstanding balance. However, since 2005 FEMA has had to borrow additional funds from Treasury (following Superstorm Sandy in 2012 and a series of floods in 2016). In prior reports, we found that while Congress has directed FEMA to provide subsidized premium rates for policyholders meeting certain requirements, it has not provided FEMA with funds to offset these subsidies, which has contributed to FEMA’s need to borrow. Despite these requirements and the resulting insufficiency in premiums, current law requires FEMA to repay its borrowing from Treasury. However, we reported that FEMA is unlikely to be able to repay this debt, and some industry and nonindustry stakeholders with whom we spoke said that Congress should eliminate it, as Congress has done when FEMA accrued NFIP debt in the past. Eliminating the debt would require Congress to either appropriate funds for FEMA to repay the debt, or change the law to eliminate the requirement that FEMA repay the accumulated debt. Since 2010, NFIP has benefited from low interest rates. For example, interest rates on the debt during fiscal years 2013–2015 ranged from 0.125 percent to 2.5 percent. 
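The sensitivity of the debt-service burden to interest rates can be roughed out with one-line arithmetic. In the sketch below, the $24.6 billion principal and the 0.125-2.5 percent range come from the figures above; the 1.6 percent rate is our back-calculated value implied by annual interest of roughly $400 million on that principal, not a published Treasury rate.

```python
# Back-of-the-envelope sketch (not FEMA's debt model): annual
# interest-only cost on NFIP's outstanding debt at several rates.
DEBT = 24.6e9  # outstanding principal as of March 2017, in dollars

def annual_interest(principal, rate):
    """Interest-only payment for one year at a flat annual rate."""
    return principal * rate

# 0.125% and 2.5% bound the FY2013-2015 range reported above;
# 1.6% is our assumed rate implied by the ~$400 million figure.
for rate in (0.00125, 0.016, 0.025):
    print(f"{rate:.3%} -> ${annual_interest(DEBT, rate) / 1e6:,.0f} million/year")
```

The spread between the low and high ends of the historical range is roughly $31 million versus $615 million a year, which illustrates why rising rates could quickly consume funds that might otherwise retire principal.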
As of March 2017, FEMA estimated that NFIP’s $24.6 billion debt would require annual interest-only payments of nearly $400 million. If interest rates increased, FEMA’s annual interest payments could rise significantly. As such, FEMA may not be able to retire any of its debt, even in low-loss years. Charging current policyholders to repay past debt. In the Biggert-Waters Act, Congress set an expectation that FEMA would repay its debt through funds collected from current and future policyholders. That is, in addition to charging policyholders enough to pay for their current risk of flood losses (provisions subsequently revised under HFIAA), FEMA also must collect a surcharge from all NFIP policyholders to help repay program debt, among other things. This creates a potential inequity because policyholders would not only be charged for the flood losses they are expected to incur, but also losses incurred by past policyholders. Such surcharges could discourage some homeowners from purchasing flood insurance from NFIP. Charging current policyholders to pay for debt incurred in past years is contrary to actuarial principles and insurers’ pricing practices, as described by industry stakeholders with whom we spoke, and could encourage some low-risk policyholders to leave NFIP. According to actuarial principles, a premium rate is based on the risk of future losses and does not include past costs. For example, if in prior years an insurer’s claim payments had exceeded the premiums collected, it would not recoup those payments from current or future policyholders because those claims payments would have resulted from risks faced by past policyholders. According to one industry stakeholder, any shortfall in premiums needed to pay claims would be made up by using funds from the insurance company’s surplus. Potential Reform Actions Congress could eliminate FEMA’s debt to Treasury. 
Implications of Potential Reform Actions Eliminating the debt would allow FEMA to take funds currently used for principal and interest payments and reallocate them for other purposes such as building a reserve fund or financing program operations. It also would be more equitable for current policyholders and consistent with actuarial principles. Eliminating the debt in concert with other actions mentioned in this report would be important because eliminating the debt without addressing an underlying cause of the debt—insufficient premium rates—would keep in place an unsustainable system. That is, NFIP likely would need to rely on new borrowing from Treasury to help pay claims for future flood losses because of its premium structure. Establishing Premium Rates That Reflect the Full Risk of Loss As we previously reported, challenges resulting from NFIP’s current rate structure include fiscal unsustainability, policyholder misperception of flood risk, limits to competition from private-sector insurance, and limited transparency of federal fiscal exposure. Reforming rates so that they reflect the full risk of loss would address several of the policy goals we identified for NFIP because a reformed rate structure would place the program on a more financially sustainable path and policyholders could better understand their flood risk. Additionally, a reformed rate structure would encourage more private insurers to enter the flood insurance market, and Congress and taxpayers would be better informed about federal fiscal exposure. However, it is important to remember that this reform action could affect the implementation of other reform actions such as eliminating the debt, expanding requirements to purchase flood insurance, removing barriers to private-sector involvement, and funding for NFIP mitigation and mapping. These implications are discussed in each of the sections related to the other reform actions. Insufficient premiums. 
As previously discussed—and as we have been reporting since as early as 1983—NFIP’s premium rates do not reflect the full risk of loss because of various legislative requirements, which exacerbates the program’s fiscal exposure. For example, in a December 2014 report we estimated that the legislative requirements for subsidized premium rates left FEMA with a premium shortfall of $11–$17 billion for the period from 2002 to 2013. Subsidized premium rates and several years with catastrophic losses have led to the need for NFIP to borrow from Treasury to pay claims. While actuarially sound premium rates that reflect the full risk of loss would reduce the likelihood of future borrowing, they would not fully eliminate it. Because of the highly variable nature of flood risk, the chance exists that adverse loss experience over a relatively short period could require borrowing if a sufficient reserve had not yet been accumulated. As noted earlier, the Biggert-Waters Act required FEMA to phase out subsidized and grandfathered rates, both of which allowed premium rates that did not reflect the full risk of loss. However, due to concerns that increased premiums created affordability concerns for some policyholders, HFIAA slowed the phase-out of subsidized premium rates and reinstated grandfathered rates for most properties. Elevation certificates. As we previously reported, the Biggert-Waters Act also required FEMA to phase in full-risk rates, but FEMA does not have data that would allow it to determine full-risk rates for currently subsidized policies. Specifically, according to FEMA, it lacked elevation information for 97 percent of subsidized policies as of February 2017. In 2016, FEMA said that obtaining data for the approximately 1 million subsidized policies could take considerable time and cost several hundred million dollars. 
According to FEMA, obtaining an elevation certificate typically would cost a policyholder from $500 to $2,000 or more; however, some nonindustry stakeholders with whom we spoke said that the cost for some certificates could be below this range. According to FEMA, some policyholders already have paid to obtain the certificates because doing so could enable them to receive lower premium rates, and FEMA expected more policyholders to do so as rate increases continued. This property-level information is necessary for FEMA to determine the difference between subsidized and full-risk rates and to determine when full-risk rates have been reached. Thus, the incomplete information on rates prevents Congress and the public from understanding the amount of unfunded subsidization within the program, and therefore the federal fiscal exposure it creates. Reinsuring for catastrophic risk. To reflect the full risk of loss, premium rates need to account for the risk of catastrophic losses—large aggregate losses resulting from relatively infrequent phenomena. Many private insurers purchase reinsurance to mitigate the risk of such large financial losses. In January 2017, FEMA executed a 1-year agreement with a consortium of 25 private reinsurers, transferring more than $1 billion of its flood risk exposure to the private reinsurance market. FEMA officials said that this reinsurance not only will protect the program from some financial risk, but also help FEMA gain experience purchasing reinsurance and the private sector gain experience insuring flood risk. Based on our analysis, reinsurance could be beneficial because it would allow FEMA to recognize some of its flood risk and the associated costs up front through the premiums it must pay to the reinsurers rather than after the fact through borrowing from Treasury. 
However, because reinsurers must charge FEMA premiums to compensate for the risk they assume, reinsurance’s primary benefit would be to transfer and manage risk rather than to reduce NFIP’s expected long-term fiscal exposure. Furthermore, if FEMA did not charge its policyholders for the cost of reinsurance premiums—and more broadly, implement full-risk rates for all policyholders—it could continue to face challenges relating to the transparency of NFIP’s federal fiscal exposure and the sustainability of its program. Effects on private sector and consumer perception of risk. Industry and nonindustry stakeholders with whom we spoke said that it is difficult for private insurers to compete with NFIP premium rates that do not reflect the full risk of loss. To remain solvent, private insurers must charge premium rates that are adequate to cover long-term estimated losses and associated expenses. If NFIP rates were not set similarly, they would be below what private insurers would need to charge, and the private insurers would be unable to compete for these policies based on price. As a result, the private market for flood insurance would continue to be limited. We also have previously concluded, and many industry and nonindustry stakeholders with whom we spoke affirmed, that because NFIP premium rates do not reflect the full risk of loss, consumers may not understand the risk of flood loss associated with a particular property. HFIAA requires FEMA to clearly communicate flood risk to individual property owners regardless of whether their premiums are based on full actuarial rates. FEMA officials said they had begun implementing this requirement by notifying policyholders receiving a subsidy that their premium rates do not reflect the full risk of loss. However, we previously reported that FEMA officials noted that determining a full-risk rate would require elevation information, which FEMA does not have for most subsidized properties. 
Without the appropriate information on a property’s potential for flood damage, consumers may not be discouraged from purchasing homes in risky areas or they may not take actions to mitigate potential flood damage, which would undermine the nation’s resilience to flood risk and also potentially increase NFIP’s fiscal exposure. Basis for surcharges. Similarly, the surcharges used to build the reserve fund are not charged based on the risk of the individual properties. For example, owners of properties used as second homes pay a significantly higher policy surcharge—$250, compared to $25 for primary residences—regardless of the risk of flood loss that each property faces. State regulations regarding rate setting by private insurers generally stipulate that premium rates should reflect the underlying risk insured by the policy and not be excessive, inadequate, or unfairly discriminatory. However, NFIP’s surcharges are flat and not risk-based. Furthermore, NFIP surcharges, particularly the $250 surcharge, can be significant when compared with the annual premium rate and might affect policyholder behavior. For example, according to FEMA officials, as of February 2017, the average annual premium (including all surcharges and fees) for NFIP policies subject to the $250 surcharge was $1,791. Some industry and nonindustry stakeholders told us that the surcharges could cause certain NFIP policyholders to discontinue their NFIP policies, and they might or might not purchase private flood insurance instead. Grandfathering. NFIP allows some property owners to continue to pay grandfathered rates, which do not reflect the most recent reassessments of flood risk (which occur when the properties are remapped into higher-risk flood zones). The grandfathered policies continue to pay premium rates as if they were still located in lower-risk zones. 
FEMA does not categorize policies with grandfathered rates as subsidized because they are within classes of policies that FEMA says are not subsidized as a whole. FEMA officials acknowledged that in such classes of policies, property owners who obtain grandfathered rates are cross-subsidized by other policyholders in the same flood zone. That is, other policyholders pay higher rates to cover the shortfall in premiums from grandfathered policies. As a result, both grandfathered policies and the policies that cross-subsidize them do not pay rates in line with the risk of the individual property and can send inaccurate risk signals to policyholders. Furthermore, as we found in prior reports, FEMA does not know how many of its current policies pay grandfathered rates, which raises questions about its rate-setting process. Before 2010, it did not identify whether newly issued policies were receiving grandfathered rates. As a result, it cannot currently verify whether grandfathered policies result in premium revenue sufficient to pay for the estimated full long-term risk of flood loss. However, FEMA officials said that in April 2016 they had begun a phased effort to collect the information necessary to identify and analyze grandfathered policies, and that they expect to complete the effort by September 2018. Potential Reform Actions As we previously recommended, FEMA needs to obtain the information necessary to determine full-risk rates for subsidized policyholders. As we previously recommended, FEMA needs to collect information on the location, number, and losses associated with grandfathered policies and analyze the financial effect these properties had on NFIP. As we previously suggested, Congress could eliminate subsidized premium rates and require FEMA to charge all policyholders premium rates that reflect the full risk of loss. 
In addition, Congress could ensure that premium rates are more closely linked to the individual property’s flood risk by eliminating flat HFIAA surcharges and requiring FEMA to incorporate necessary reserve fund charges into premium rates based on individual property risk. Implications of Potential Reform Actions Requiring FEMA to obtain elevation certificates for subsidized policyholders and data on grandfathered policies could address the policy goal of making NFIP’s federal fiscal exposure more transparent and facilitate congressional oversight. The cost of obtaining elevation certificates could be burdensome for some policyholders, but could be considered as part of an affordability assistance program (see following section) and also could help some policyholders reduce their premium rates. Eliminating subsidized premium rates and requiring FEMA to charge all policyholders premium rates that reflect the full risk of loss could reduce fiscal exposure to the federal government and promote flood risk resilience, two of the policy goals we identified. Full-risk rates would help ensure that premiums collected were sufficient to pay claims in the long-term, and therefore reduce the likelihood that the program would need to borrow from Treasury. Full-risk rates would provide incentives for mitigation measures that would reduce flood risk and thus premium rates. Full-risk rates also could allow more opportunities for private-sector insurers to enter the flood insurance market, transferring federal fiscal exposure to flood risk to the private sector. Eliminating reserve fund surcharges, and instead charging full-risk premium rates based on individual property risk, which would include funding a reserve for future adverse experience, could address the policy goal of encouraging consumer participation in flood insurance for those whom FEMA might be charging premium rates higher than appropriate for their flood risk. 
Doing so also could encourage private insurers to compete for more flood insurance policies, rather than only for policies in which NFIP premium rates are higher than the associated flood risk. Taking these actions in concert with other actions mentioned in this report would be important because implementing full-risk rates will create affordability concerns for some consumers, highlighting the need for other assistance to help reduce negative consequences on consumer participation. Creating an Affordability Assistance Program That Is Funded with Appropriations, Means-Based, and Prioritized to Mitigate Risk Industry and nonindustry stakeholders with whom we spoke said that rate increases associated with the transition to full-risk premium rates can raise affordability concerns for some policyholders and create a risk that fewer consumers would purchase flood insurance. Some key characteristics for designing an affordability assistance program that addresses the goals of encouraging consumer participation and promoting resilience include providing assistance through appropriations rather than through discounted premiums, making it means-based, and prioritizing it to mitigate risk. The implementation of affordability reforms has implications for other issues such as premium rates, requirements to purchase flood insurance, and barriers to private-sector involvement. Discussions about these implications are included in each section of this report related to those specific areas. Making premium assistance more transparent. As we previously reported, subsidized rates are available regardless of a property owner’s ability to afford a full-risk premium. Because NFIP offers discounted or “subsidized” rates for some policyholders, NFIP collects insufficient revenue to fully pay expected claims over the long term, and these costs generally remain hidden until NFIP must borrow from Treasury to fund a shortfall. 
This lack of transparency in relation to program costs hinders the ability of Congress to oversee the program and the public to scrutinize it. As we previously reported, means-testing premium assistance would help ensure that only those who could not afford full-risk rates would receive assistance and may increase the amount NFIP collects in premiums, thus reducing the program’s federal fiscal exposure. In our February 2016 report, we estimated that 47–74 percent of policyholders could be eligible for the subsidy, when income eligibility was set at 80 percent or 140 percent of area median income, respectively. Ultimately, the change in federal fiscal exposure generated by means-tested premium assistance would depend on how the assistance was structured, as illustrated by the following examples: Higher premiums collected from currently subsidized policyholders who can afford the full-risk premium rate could be offset by premium assistance to policyholders currently paying full-risk rates and deemed eligible for the means-based assistance. Savings resulting from restricting premium assistance to those with a demonstrated need could be offset by increasing the amount of assistance given to each individual eligible recipient. If consumer participation increased, the change in fiscal exposure also would depend on the extent to which new policyholders were eligible for the means-based subsidies. Limiting potential sources of additional costs (for example, by limiting the amount of the subsidy) could help ensure that NFIP’s fiscal exposure would be reduced. Some nonindustry stakeholders with whom we spoke suggested that any premium assistance should be temporary and only used to help policyholders transition to full-risk rates rather than provided indefinitely. However, it is important to note that while making premium assistance temporary could help reduce fiscal exposure, it also could create affordability concerns in future years for some policyholders. 
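A means test of the kind discussed above can be sketched as a simple income threshold. The 80 percent and 140 percent of area median income thresholds come from the estimates in our February 2016 report; the eligibility function itself, and the incomes used in the example, are hypothetical illustrations, not program rules.

```python
# Illustrative means test (thresholds taken from the report's estimates;
# the eligibility logic itself is a hypothetical sketch, not FEMA policy).
def eligible_for_assistance(household_income, area_median_income,
                            threshold_share=0.8):
    """Eligible if income falls at or below a share of area median income.

    The report considered thresholds of 80% (0.8) and 140% (1.4) of
    area median income; the default here is the stricter one.
    """
    return household_income <= area_median_income * threshold_share

# Hypothetical household: $45,000 income in an area with a $60,000 median.
print(eligible_for_assistance(45_000, 60_000))       # 80% test: eligible
print(eligible_for_assistance(45_000, 60_000, 1.4))  # 140% test: eligible
print(eligible_for_assistance(55_000, 60_000))       # 80% test: not eligible
```

The choice of threshold is the main policy lever: moving it from 80 to 140 percent of area median income is what widened the report's estimated eligible population from 47 to 74 percent of policyholders.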
As we previously reported, a premium assistance program could be designed to consider effects on the private flood insurance market. For example, if premium assistance were made available only for NFIP policies and not for private flood insurance policies, private insurers would continue to be at a significant competitive disadvantage, thus hindering the growth of the private market. An option for addressing this concern would be to make private policies eligible for the same affordability assistance. However, during the course of this review, we determined that the federal government would have to overcome implementation challenges; for example, developing and implementing a program to provide assistance for the purchase of private flood insurance policies. Linking affordability to mitigation. Many industry and nonindustry stakeholders with whom we spoke said that instead of premium assistance, it would be preferable to address affordability by providing assistance for mitigation measures that would reduce the flood risk of the property—thus enhancing resilience—and ultimately result in a lower premium rate. Premium assistance does not reduce a property’s flood risk and also reduces incentives for mitigation. Although mitigation assistance would entail a larger up-front cost, it would increase resilience by reducing the risk of loss and reduce the need for premium assistance. Reducing flood risk through mitigation also could reduce the need for federal disaster assistance, further decreasing federal fiscal exposure. A number of recent studies have proposed linking mitigation assistance to premium assistance by requiring mitigation financed through a low-interest loan and providing a means-tested voucher. 
Many industry and nonindustry stakeholders with whom we spoke said that funding mitigation activities through loans would be preferable to funding through grants because loans would be repaid by the consumer and represent a lower cost to the federal government in the long term. For example, one study suggested that the consumer’s total annual cost could be equal to the loan servicing cost (interest plus some repayment of principal) plus the flood insurance premium, with the premium being lower after mitigation efforts were completed and had reduced the risk of flood loss. Under one proposal developed by some academics who have conducted extensive research on flood insurance, the program would determine the annual amount of flood insurance costs the consumer would be able to afford (for example, a percentage of annual household income), and the annual voucher would be equal to any difference between the consumer’s annual cost—post-mitigation premiums and servicing of the mitigation loan—and what they were determined to be able to pay. The voucher would be tied to individuals and their income level but the loan would be attached to the property so that it could be transferred if the property were sold. Under a related study, for properties for which elevation is not cost effective, other mitigation measures, such as modifying the ground floor with wet floodproofing and moving habitable areas to the second floor of multistory homes, could be helpful. As would be the case with premium assistance, Congress would need to provide funding for any mitigation loan program. 
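The voucher formula in the proposal above can be expressed directly: the voucher covers any shortfall between the household's total annual flood cost (post-mitigation premium plus loan servicing) and what the household is deemed able to pay. In the sketch below, the 2-percent-of-income affordability share and the dollar amounts are hypothetical assumptions for illustration, not figures from the proposal.

```python
# Sketch of the voucher formula described in the academic proposal above;
# all parameter values are hypothetical illustrations, not program figures.
def annual_voucher(household_income, premium_after_mitigation,
                   loan_servicing_cost, affordable_share=0.02):
    """Voucher = shortfall between total annual flood cost and what the
    household is deemed able to pay (here, 2% of income, an assumption)."""
    affordable = household_income * affordable_share
    total_cost = premium_after_mitigation + loan_servicing_cost
    return max(0.0, total_cost - affordable)

# A hypothetical $50,000-income household deemed able to pay $1,000/year,
# facing a $900 post-mitigation premium plus $600 in loan servicing:
print(annual_voucher(50_000, 900, 600))   # $1,500 total cost -> $500 voucher
print(annual_voucher(100_000, 900, 600))  # can afford $2,000 -> no voucher
```

Because the voucher shrinks as income rises, the same mitigation loan costs the federal government less for higher-income households, which is the means-tested feature stakeholders favored.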
Potential Reform Actions Building on what we previously recommended, Congress could create an affordability assistance program that (1) is funded through an appropriation rather than through discounted premiums, (2) is means-tested, (3) considers making any premium assistance temporary, (4) considers allowing assistance to be used for private policies, (5) prioritizes investments in mitigation efforts over premium assistance whenever economically feasible, and (6) prioritizes mitigation loans over mitigation grants. Implications of Potential Reform Actions Providing premium assistance through appropriations rather than through discounted premiums would address the policy goal of making the fiscal exposure more transparent because any affordability discounts on premium rates would be explicitly recognized in the budget each year. Because current subsidies are not based on the policyholder’s ability to pay, means-testing assistance would restrict subsidies to those with a demonstrated need, which could lower the number of policyholders receiving a subsidy and therefore reduce fiscal exposure while maintaining consumer participation (two of the policy goals we identified). However, creating and administering a premium assistance or mitigation loan program would entail some administrative costs. Furthermore, any premium assistance (including vouchers) could continue to reduce incentives for mitigation to some extent. Making premium assistance temporary could help address the policy goal of reducing long-term federal fiscal exposure, but could leave some affordability concerns unaddressed, thus potentially reducing consumer participation. Prioritizing mitigation over premium assistance could address the policy goal of enhancing resilience because it would involve taking steps to reduce the risk of the property, thus reducing the likelihood of future flood claims and potentially reducing long-term federal fiscal exposure. 
Creating an affordability program would require determining how to assess eligibility for the assistance, which also would include the collection of consumer data. Some mitigation efforts, such as elevating a house, can be expensive and may require significant up-front costs, increasing federal fiscal exposure in the short term. However, these costs could be recaptured over time through reduced flood exposure in the long term and as policyholders repaid mitigation loans. Increased consumer participation in NFIP (by making insurance more affordable) as well as higher levels of subsidies than currently provided could result in higher fiscal exposure. Taking these actions in concert with other actions mentioned in this report would be important because creating a system for providing means-based assistance without establishing full-risk premium rates for all policyholders, or establishing a source of funding for that assistance, could increase the fiscal exposure NFIP creates for the federal government. Increasing Consumer Participation by Expanding the Mandatory Purchase Requirement and Improving Risk Communication Based on our analysis of stakeholder comments, issues relating to the policy goal of encouraging consumer participation include the effects of the mandatory purchase requirement and disaster assistance on consumer perception of flood risk. If the mandatory purchase requirement were expanded to more (or all) mortgage loans made by federally regulated lending institutions for properties in communities participating in NFIP, consumer participation could increase, more consumers would have some protection from the financial effects of flooding, and private insurers would have a greater incentive to offer flood insurance coverage. Any reforms related to consumer participation will have potential implications for full-risk rates, affordability assistance, and barriers to private-sector involvement. 
Discussions about these implications are included in each section of this report related to those specific areas. Consumer participation and the mandatory purchase requirement. As discussed earlier, owners of properties in participating communities in SFHAs generally are required to purchase flood insurance if their mortgage loans are made by federally regulated lenders (mandatory purchase requirement). This requirement was created to increase the number of consumers who purchase flood insurance coverage. A number of studies have shown that individuals focus on short time horizons and have difficulty fully understanding low-probability, high-severity risks such as flooding. For example, a 1 percent chance of flooding in a single year may seem like a low probability to many—despite being FEMA’s defined threshold for high risk—and lead homeowners to believe they will never experience flooding and that flood insurance coverage is not necessary. In addition, many industry and nonindustry stakeholders with whom we spoke said that the requirement’s current structure discourages some consumers from purchasing coverage. For example, many industry and nonindustry stakeholders with whom we spoke (including FEMA representatives), described the SFHA designation as an “in or out” line that unintentionally gives consumers the false perception that because they are not required to purchase flood coverage, they are not at risk of flooding and do not need the coverage. While FEMA considers areas outside of SFHAs to be at low- to moderate-risk of flooding, it estimated that properties outside of SFHAs accounted for about 20 percent of NFIP claims from 2006 through 2015. But according to a 2006 study, only an estimated 1 percent of consumers outside of SFHAs purchase flood insurance. 
Moreover, in a 2008 report, we discussed areas of the country that appeared to have higher populations and flooding risks relative to their policy volumes, thus indicating the potential for an increase in the number of consumers with flood insurance coverage. Many industry and nonindustry stakeholders with whom we spoke suggested eliminating the SFHA designation and expanding the mandatory purchase requirement to include more (or all) federally regulated mortgages. Some of these stakeholders acknowledged that flood insurance coverage, and therefore a purchase requirement, might not be as necessary for some consumers with properties at an extremely low risk of flooding but noted that in those situations, the premium rate should be extremely low to reflect the low flood risk of the property. Limited information is readily available to help inform consumers who may be inclined to purchase flood insurance coverage voluntarily. For example, mortgage documents inform the borrower if the property is in an SFHA and the flood zone in which the property is located. However, for most properties outside of SFHAs, the flood zone is listed as “X” without any additional information on the property’s risk. As a result, according to industry and nonindustry stakeholders with whom we spoke, the borrower may be unclear if the property is at moderate risk or at a very low risk for flooding. One nonindustry stakeholder with whom we spoke has proposed communicating flood risk to consumers by creating a flood safety score, similar to a credit score, that reinforces the idea that flood risk is on a continuous spectrum rather than undifferentiated high-risk versus not high risk. Industry and nonindustry stakeholders with whom we spoke generally agreed with the potential merit of such a system, and one also highlighted the importance of ensuring that such a system be as public and transparent as possible so that consumers would understand it. 
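One way to make the flood-score idea above concrete is to map annual flood probability onto a continuous scale rather than an in-or-out designation. The log-spaced 0-100 mapping below is entirely our illustrative assumption; the stakeholder proposal did not specify a formula or scale.

```python
import math

# Hypothetical sketch of a continuous flood-score concept (one
# stakeholder's idea; the scale and formula are our own assumptions).
def flood_score(annual_flood_probability, low=1e-4, high=0.1):
    """Map annual flood probability onto a 0-100 scale (log-spaced),
    higher = riskier, instead of a binary in/out-of-SFHA designation.

    Probabilities below `low` score 0; those at or above `high` score 100.
    """
    p = min(max(annual_flood_probability, low), high)
    return round(100 * (math.log10(p) - math.log10(low))
                 / (math.log10(high) - math.log10(low)))

print(flood_score(0.01))   # FEMA's 1%-a-year "high risk" threshold -> 67
print(flood_score(0.002))  # a moderate-risk property outside the SFHA -> 43
```

Under this sketch, an "X zone" property would receive a specific number rather than an undifferentiated label, which is the differentiation stakeholders said current mortgage documents lack.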
As mentioned previously, HFIAA requires FEMA to communicate full flood risk to existing policyholders but does not require FEMA to improve communication of risk to consumers who lack NFIP coverage. While HFIAA does not specifically require a risk score, such a scoring system could help communicate risk to all consumers and potentially improve consumer participation. Enforcement of the mandatory purchase requirement. Many industry and nonindustry stakeholders with whom we spoke expressed concern that the mandatory purchase requirement has not been adequately enforced. However, few specific examples of noncompliance exist, so the extent of compliance remains unknown. For example, a 2006 study estimated that NFIP participation rates were 75–80 percent in SFHAs, where property owners with loans from federally regulated lenders are required to purchase flood insurance. However, FEMA officials said that estimates indicate that as little as one-third of residential properties in SFHAs have flood insurance coverage. A 2012 study found that homeowners both inside and outside SFHAs who obtained flood insurance when purchasing their homes typically kept it 2–4 years before canceling the policies. Furthermore, some industry and nonindustry stakeholders with whom we spoke cited recent flooding in Louisiana (2016) and South Carolina (2015), and the fact that so few affected consumers had flood insurance, as additional evidence that either the enforcement of the mandatory purchase requirement needed to be improved or its scope needed to be expanded. However, one official from the lending industry with whom we spoke noted that federal banking regulators have found few examples of noncompliance in lending institutions. Without better information on the extent of compliance with the mandatory purchase requirement, compliance could be low without corrective actions being taken. Disaster assistance and consumer participation.
Many industry and nonindustry stakeholders with whom we spoke, including FEMA representatives, said that many consumers have the false perception that individual disaster assistance will be sufficient to help them recover and rebuild after a flood, leading them to forgo purchasing flood insurance. Literature we reviewed has noted that individual assistance is limited and means-based, and much of the disaster assistance goes to pay for rebuilding public infrastructure, as the following examples illustrate:

A FEMA official said that homeowners should not rely on potential grant programs as an alternative to flood insurance coverage for recovering after a flood, because such assistance is not designed to repair and rebuild a property. For example, after the floods in Louisiana in 2016, NFIP policyholders received claim payments that averaged approximately $86,500, while assistance payments to individuals averaged approximately $9,150.

A 2012 study found that public perception of federal post-disaster assistance creates a moral hazard that not only discourages consumers from purchasing flood insurance but also discourages flood risk mitigation and encourages people to live in high-risk areas. The study concluded that available disaster assistance “is relatively small and certainly does not make people whole after devastating events.”

According to FEMA officials, FEMA provides post-disaster grants of up to $33,300 to repair and rebuild, but recent payments have averaged around $4,100. The Small Business Administration also offers loans for repair and rebuilding after a disaster, but these are means-tested and, because they must be repaid by the homeowner, do little to protect against the financial risk of flooding.

Nonindustry stakeholders with whom we spoke said that most disaster assistance is provided to state and local governments to repair infrastructure rather than to individuals.
Furthermore, a 2014 study found that when the average individual assistance grant increased by $1,000, average flood insurance coverage per policy in that community dropped by about $6,400. Many industry and nonindustry stakeholders with whom we spoke agreed that misperceptions about disaster assistance can negatively affect consumer participation in flood insurance, resulting in more exposure to financial loss from flooding.

Potential Reform Actions

Congress could expand the mandatory purchase requirement to more (or all) mortgage loans made by federally regulated lending institutions for properties in communities participating in NFIP. Congress could require FEMA to explore ways to improve its communication of risk to all consumers, for example through a risk scoring system.

Implications of Potential Reform Actions

Expanding the mandatory purchase requirement to more (or all) mortgage loans made by federally regulated lending institutions in communities participating in NFIP (rather than only those in an SFHA) could address the policy goal of increasing consumer participation in flood insurance. Expanded purchase requirements likely would face resistance from consumers who do not wish to purchase coverage or who might face affordability issues. Lenders also may object to being responsible for enforcing mandatory purchase requirements on an expanded number of properties. Expanded purchase requirements could affect the values of properties newly subject to the requirement to purchase flood insurance. Increased consumer participation would address the goal of enhancing resilience by providing consumers with some protection from the financial effects of flooding and by reducing the need for disaster assistance, thereby potentially reducing federal fiscal exposure.
Increased consumer participation could increase the size and scope of NFIP and potentially increase federal fiscal exposure, but this exposure could be reduced by implementing full-risk rates and balanced by an increasing number of lower-risk properties. Taking these actions in concert with other actions mentioned in this report would be important to address affordability concerns associated with an expanded mandatory purchase requirement. Furthermore, accurate, property-specific premium rates would be necessary to provide assurance that policyholders newly mandated to purchase flood insurance coverage would be paying rates based on their risk of flood loss.

Removing Other Barriers to Private-Sector Involvement

According to some industry and nonindustry stakeholders with whom we spoke, private insurer interest in selling flood insurance has been increasing. Also according to industry and nonindustry stakeholders with whom we spoke, NFIP’s subsidized premium rates remain the primary barrier to private-sector involvement in flood insurance. As we previously reported, besides NFIP’s subsidized rates, other barriers to private-sector involvement include uncertainty about how private coverage could satisfy the mandatory purchase requirement and FEMA policies on continuous coverage and premium refunds. If such barriers were removed, private-sector involvement in flood insurance could increase, potentially reducing the size and scope of NFIP’s insurance activities and allowing FEMA to focus on other activities such as mitigating flood risk and developing flood maps. As a result, removing these barriers could address the policy goal of reducing federal fiscal exposure while promoting flood resilience. Increased private-sector involvement has implications for other issues such as full-risk premium rates, consumer participation, and NFIP flood resilience efforts. Discussions about these implications are included in each section of this report related to those specific areas.
Private-sector coverage and the mandatory purchase requirement. Industry and nonindustry stakeholders with whom we spoke cited uncertainty among lenders and insurers about regulations specifying how private flood insurance policies could satisfy the mandatory purchase requirement. The Biggert-Waters Act requires regulated lending institutions to accept private flood insurance, but as of March 2017, federal banking regulators had not issued final rules with such directions. As a result, it remains unclear what the final regulations will require and how lenders and private insurers should comply. Furthermore, some industry and nonindustry stakeholders with whom we spoke were concerned that private policies lenders had accepted as satisfying the mandatory purchase requirement might retroactively be deemed noncompliant if they did not meet the requirements of the new regulations. The stakeholders added that issuing final rules on the acceptance and definition of private flood insurance could provide more clarity and could lead to increased private-sector involvement in flood insurance. Continuous coverage, premium refunds, and WYO noncompete clause. Some industry and nonindustry stakeholders with whom we spoke also cited FEMA’s interpretation of the continuous coverage requirement in connection with private flood insurance, and its effect on consumers’ ability to qualify for NFIP discounted rates, as a barrier to private-sector involvement in flood insurance. FEMA prohibits the use of subsidized rates for policies for which there has been a lapse in NFIP coverage of more than 90 days. That is, if an NFIP policyholder who qualified for a subsidized rate switched to a private flood policy and then switched back to an NFIP policy more than 90 days after originally canceling the NFIP policy, the policyholder would no longer qualify for the subsidized rate.
FEMA officials noted that in these cases they did not allow private coverage to count as continuous coverage because of the agency’s interpretation of an HFIAA provision on policy lapses. Some industry and nonindustry stakeholders with whom we spoke also noted that FEMA’s decision to exclude private flood insurance policies in these cases could have financial repercussions for some consumers seeking to reinstate their previously discounted NFIP coverage. Some of these stakeholders said that because of the risk of losing their discounted NFIP rates, consumers might avoid the private market. However, to the extent that reforms result in the elimination of discounted rates, this issue could become less of a concern. Furthermore, some industry and nonindustry stakeholders with whom we spoke said that FEMA’s policy on cancellations could discourage consumers from using private flood insurance. FEMA allows full or partial refunds of paid NFIP premiums for coverage terminated in accordance with its accepted cancellation reasons, but it does not allow policyholders to cancel their NFIP policy and obtain a refund if they obtained a non-NFIP policy (private flood insurance). These industry and nonindustry stakeholders said that private insurers typically allow refunds to policyholders who switch insurers. FEMA officials said that FEMA can allow cancellation of policies and refunds only according to its standard policy terms and conditions (which reference only cancellations of NFIP policies). However, FEMA previously allowed such refunds and could revise its guidance to allow them again. Allowing this type of refund would be in line with industry practice of allowing refunds of paid premiums, as well as with Congress’s interest in transferring some of the federal government’s exposure to flood risk to the private sector.
We previously recommended that FEMA consider reinstating the cancellation reason code allowing policyholders to be eligible for prorated premium refunds if they obtained a private policy and then cancelled their NFIP policy. FEMA agreed with our recommendation and said it planned to implement the policy change effective October 2017. Some industry and nonindustry stakeholders with whom we spoke also told us that certain FEMA restrictions on WYO insurers—private insurers that sell and service policies and adjust claims for NFIP—may be an impediment to increasing the availability of private flood insurance. Specifically, NFIP’s arrangement with the insurers restricts them from selling stand-alone flood insurance coverage outside of NFIP. The stakeholders said that this restriction can limit companies with the most experience in flood insurance from entering the private market. FEMA officials stated that, despite this restriction, a number of companies have found ways to offer flood insurance while remaining compliant with the arrangement. For example, if a subsidiary of a large insurance company were a WYO, the parent company could offer stand-alone flood coverage. Alternatively, the WYO insurer could offer flood coverage as part of a multiperil policy. NFIP claims data. Many industry and nonindustry stakeholders with whom we spoke also noted the lack of access to NFIP data on flood losses and claims as a barrier to more private companies offering flood insurance. In our previous work, industry and nonindustry stakeholders said that access to such data would allow private insurance companies to better estimate losses, price flood insurance premiums, and determine which properties they might be willing to insure. According to FEMA officials, the agency would need to address privacy concerns to provide property-level information to insurers, because the Privacy Act of 1974 prohibits the agency from releasing detailed NFIP policy and claims data. 
FEMA officials said that while the agency could release data in the aggregate, some information could not be provided in detail. For example, in 2017 FEMA publicly released ZIP code-level data but would need to determine how to release property-level information while protecting the privacy of individuals. According to officials from the National Association of Insurance Commissioners, the association is coordinating with FEMA actuaries to determine how FEMA could share specific data with states without disclosing personally identifiable information.

Potential Reform Actions

Congress could amend (or clarify) the statutory definition of private flood insurance as it relates to the mandatory purchase requirement. Congress could direct FEMA to allow private coverage to satisfy NFIP continuous coverage requirements. As we previously recommended, FEMA could reinstate the ability for policyholders replacing their NFIP policies with private policies to be eligible for prorated refunds. Congress could direct FEMA to eliminate the WYO noncompete clause. Congress could determine the appropriateness of amending the privacy law to allow FEMA to enter into confidentiality agreements to share claims data with the insurance industry. Depending on the extent to which a private flood insurance market develops over time, other changes to further encourage the private market and potentially change the role and structure of NFIP could be considered. Such potential actions are discussed in appendix II.

Implications of Potential Reform Actions

Increased private-sector involvement could address the policy goal of reducing the federal fiscal exposure relating to flood risk by reducing the number of properties that NFIP covers with flood insurance.
A reduced size and scope of NFIP’s insurance activities could free up resources and allow FEMA to focus more heavily on other activities such as mitigating high-risk properties and developing and maintaining flood maps, thus potentially addressing the policy goal of enhancing resilience. Increased private-sector involvement could make flood insurance more attractive to consumers by introducing products more tailored to each consumer’s needs. Some industry stakeholders with whom we spoke said that private insurers would be able to price flood risk more accurately for each individual property because they have better flood-loss modeling capabilities (and thus consumers would pay for the risk of their own property and not significantly cross-subsidize other policyholders). Increased private-sector involvement could cause a greater portion of NFIP’s portfolio to be composed of higher-risk policies. Taking these actions in concert with other actions mentioned in this report, such as addressing NFIP’s subsidized rates, would be key to encouraging private-sector involvement because industry and nonindustry stakeholders with whom we spoke cited subsidized rates as a significant barrier to private-sector entrance into the flood insurance market; moreover, it is unclear how effective these actions would be without addressing subsidized rates.

Protecting and Enhancing NFIP Flood Resilience Efforts

NFIP flood resilience efforts include mitigation, mapping, and floodplain management through community participation. Based on our analysis of the policy goals we identified, supporting these activities could address the policy goal of enhancing resilience by ensuring that flood risk is identified through mapping and reduced through mitigation and floodplain management. Any reforms related to NFIP flood resilience efforts will have potential implications for issues such as premium rates, consumer participation, and private-sector involvement in flood insurance.
Discussions about these implications are included in each section of this report related to those specific areas. Mitigation and mapping funding. NFIP’s flood resilience efforts (mitigation, mapping, and floodplain management through community participation) are important and deliver a wide range of benefits. For example, a 2005 report estimated that for every $1 spent on mitigation, losses were reduced by an average of $4. Furthermore, FEMA officials said that mitigation programs have saved the American public an estimated $3.4 billion annually. Mapping is essential to NFIP rate setting and risk identification. If private insurers began to write a significant number of flood insurance policies, and NFIP wrote fewer, there would be less funding for such resilience efforts. As discussed previously, FEMA charges a fee on NFIP policies that helps fund efforts to mitigate flood risk on properties covered by NFIP policies and to develop and maintain flood maps. FEMA also received $190 million in appropriations in fiscal year 2016 to help fund its mapping efforts, and it expects to collect about $197 million in fee revenue in 2017. To the extent that the private flood insurance market grew and policies moved from NFIP to private insurers, FEMA would no longer collect fees on those policies. Nonindustry stakeholders have proposed a number of solutions for addressing this issue. For example, the Association of State Floodplain Managers proposed requiring an equivalency fee (equal to the Federal Policy Fee) on all private flood insurance policies because it would help pay for floodplain management and flood mapping services that also would benefit private insurance companies. For instance, flood maps are an important source of flood risk data that private insurers could use to assess risk, and mitigation helps lower the risk of properties and make them more insurable.
While other industry and nonindustry stakeholders with whom we spoke shared the concern over the effect on fee revenue, some instead preferred to compensate for the diminished fee revenue by funding mitigation and mapping directly through an appropriation in the federal budget because the services benefit all taxpayers. Community participation. Increased private-sector involvement in flood insurance also has the potential to negatively affect flood resilience because communities may have less of an incentive to meet floodplain management standards. Industry and nonindustry stakeholders with whom we spoke said that many of the more than 22,000 communities currently participating in NFIP do so primarily because participation provides community residents with access to NFIP coverage. In exchange, communities must adopt and enforce floodplain management standards that help to reduce flood risk. According to FEMA, structures built to NFIP standards experience 73 percent less damage than structures not built to these standards, resulting in a $1.9 billion annual reduction in flood losses. Some nonindustry stakeholders with whom we spoke said that the availability of private flood insurance coverage could lead some communities to drop out of NFIP and rescind some of the standards and codes they had adopted. However, rescinding these standards could increase the risk of flood damage and therefore the cost of flood insurance premiums, which could be an incentive for keeping the standards in place. The Association of State Floodplain Managers proposed addressing this issue by allowing private flood policies to meet the mandatory purchase requirement only if they were sold in participating NFIP communities.

Potential Reform Actions

Congress could establish a fee on private flood insurance policies to address the loss in NFIP fee revenue used to fund mitigation and mapping activities.
Alternatively, Congress could appropriate funds for mitigation and mapping activities to offset the diminished fee revenue.

Implications of Potential Reform Actions

Proactively addressing potential effects on policy fee revenue could address the policy goal of enhancing resilience by helping ensure that flood risk would be identified through mapping and reduced through mitigation activities and floodplain management. In turn, enhancing resilience could address the policy goal of reducing federal fiscal exposure in the long term because property would be at a lower risk of flood loss and therefore less likely to experience flood claims. Ensuring that these activities continued even as the number of NFIP policies decreased could support the private flood insurance market, which in turn could address the policy goal of reducing federal fiscal exposure (through transfer to the private market). For example, private insurers could use flood maps to assess risk, and mitigation could make more properties insurable. A requirement for a fee on private flood insurance policies could face resistance from insurers, and creating a federal appropriation to pay for mitigation and mapping would be a new cost. There also could be implementation costs and challenges associated with administering a fee on private insurers. Taking these actions in concert with other actions mentioned in this report would be important because doing so could ensure that efforts to increase private-sector involvement in flood insurance would not harm resilience efforts, particularly funding for mitigation and mapping, and community participation in NFIP.

Conclusions

NFIP has experienced significant challenges because FEMA is tasked with pursuing competing programmatic goals—keeping flood insurance affordable while keeping the program fiscally solvent.
Emphasizing affordability has led to premium rates that in many cases do not reflect the full risk of loss and produce insufficient premiums to pay for claims. In turn, this has transferred some of the financial burden of flood risk from individual property owners to taxpayers as a whole and resulted in the program owing $24.6 billion to Treasury. Without reforms, the financial condition of NFIP could continue to worsen. Shifting the emphasis toward fiscal solvency would reduce the burden on the taxpayer but would require increasing premium rates, which could create affordability challenges for many policyholders and discourage consumer participation in flood insurance. Private insurers’ interest in selling flood insurance has been increasing, which could transfer some risk from the federal government. This increased interest, combined with the challenges experienced by the program, creates an opportunity for Congress to consider potential reforms to NFIP as well as the best role for the federal government in relation to flood insurance. Regardless of changes in private-sector involvement or the government’s role, congressional oversight of the program’s federal fiscal exposure will remain important. Actions in six areas could mitigate some of the trade-offs resulting from the competing goals and reform the flood insurance program in ways that advance the policy goals we identified: (1) promoting flood risk resilience, (2) minimizing fiscal exposure to the federal government, (3) requiring transparency of the federal fiscal exposure, (4) encouraging consumer participation in the flood insurance market, and (5) minimizing transition and implementation challenges. However, a piecemeal approach will not address NFIP’s ongoing challenges. Rather, taking actions from a comprehensive perspective—in all six areas—could help balance or mitigate the various trade-offs and challenges. The sequence of actions in these areas is also important.
That is, some actions would be more likely to achieve goals if they followed others, while some could be taken concurrently. For example, when addressing barriers to private-sector involvement, it would be important to protect NFIP’s flood resilience activities at the same time. Other important reforms, such as requiring full-risk rates for all policyholders and expanding the mandatory purchase requirement, would create affordability concerns, so they would warrant having an affordability assistance program already in place. Finally, addressing the outstanding debt would best be accompanied by premium rate reform to help reduce the likelihood of another unpayable debt buildup. Taking these factors into consideration will therefore be important for any reform decisions. We recognize that many of the potential reforms, in and of themselves, involve competing goals, and that taking some actions in isolation could create challenges for some property owners. We also recognize that many reforms can be challenging to start or complete because they could involve new programs, new appropriations, and revisions to current law. As such, they could face resistance because they could create new costs for the federal government, the private sector, or property owners. Nevertheless, taking actions on multiple fronts represents the best opportunity to help address the spectrum of challenges confronting NFIP, advance private-sector participation, reduce federal fiscal exposure, and enhance resilience to flood risk.
Matter for Congressional Consideration

As Congress considers reauthorizing NFIP, it should consider comprehensive reform to improve the program’s solvency and enhance the nation’s resilience to flood risk, which could include actions in six areas: (1) addressing the current debt, (2) removing existing legislative barriers to FEMA’s revising premium rates to reflect the full risk of loss, (3) addressing affordability, (4) increasing consumer participation, (5) removing barriers to private-sector involvement, and (6) protecting NFIP flood resilience efforts. In implementing these reforms, Congress should consider the sequence of the actions and their interaction with each other.

Agency Comments

We provided a draft of this report to the Department of Homeland Security and the Department of the Treasury for review and comment. Both departments provided technical comments, which we incorporated as appropriate. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

In September 2017, the National Flood Insurance Program’s (NFIP) current authorization will expire, and reauthorization of the program provides Congress with an opportunity to consider options for improving the program and changing the federal role in flood insurance. We performed our work under the authority of the Comptroller General in light of congressional interest in flood insurance and NFIP’s impending reauthorization. This report examines potential reform actions Congress and the Federal Emergency Management Agency (FEMA) could take to reduce federal fiscal exposure and improve resilience to flood damage.
To identify these actions, we took an iterative multiphase approach that included reviewing available information on flood insurance reform; obtaining input on potential reform actions from knowledgeable stakeholders through semi-structured individual interviews, a questionnaire, and a series of roundtables; identifying policy goals for flood insurance reform; and evaluating the various actions using these policy goals. The information on flood insurance reform that we reviewed included prior GAO reports, relevant laws, NFIP history, academic papers, and testimonies. We interviewed officials of or representatives from FEMA, the Congressional Budget Office, the Federal Insurance Office, the National Association of Insurance Commissioners, insurance industry associations, catastrophe modelers, state insurance programs, an actuarial association, consumer advocacy groups, think tanks, academics, and others to gather information on flood reform options. We also attended the National Flood Conference in May 2016, which was attended by hundreds of flood insurance stakeholders and included discussion of a number of topics related to flood insurance reform. Furthermore, we developed a questionnaire to gather stakeholder input on options for reforming NFIP, policy goals for evaluating those options, and the roles for the private sector, federal government, and state governments in providing flood insurance coverage and managing flood risk. Specifically, we judgmentally selected a diverse group of 108 stakeholders for the questionnaire based on a review of available literature on flood insurance reform, work conducted for our prior reports on flood insurance, and suggestions from stakeholders we interviewed. 
Questionnaire respondents represented a number of stakeholder categories: insurers, insurance agents, insurance adjuster associations, reinsurers, catastrophe modelers, lender associations, federal agencies, state insurance regulators, state residual insurance programs, consumer advocates, academics, think tanks, mitigation associations, real estate associations, and environmental associations. We conducted the questionnaire in June 2016 and received responses from 82 of the 108 questionnaire recipients. The results from the questionnaire are not generalizable beyond the respondents and represent only the opinions of those individuals, but they provided insights into potential flood insurance reforms. We also conducted four web-based roundtables in August and September 2016 with a variety of stakeholders to obtain their views on flood insurance reform. The 43 roundtable participants represented the same stakeholder categories from which we drew our questionnaire respondents, as well as FEMA. We judgmentally selected a diverse group of stakeholders for the roundtables based on their knowledge of flood insurance reform. We used stakeholders’ responses to the questionnaire to ensure that each roundtable was balanced with a diverse range of perspectives on flood insurance reform. Two of the roundtables focused on reforming the flood insurance marketplace and explored several topics, including the roles of the private sector and the federal government in primary insurance and managing catastrophic risk. The other two roundtables focused on promoting flood risk resilience and explored several topics, including enhancing resilience for existing structures and future development, the roles of mitigation assistance and premium assistance, and strategies for encouraging greater consumer participation in flood insurance.
We also identified five policy goals for evaluating options for flood insurance reform by reviewing prior GAO reports—one of which included policy goals for federal involvement in natural catastrophe insurance—and FEMA’s 2015 report on options for privatizing NFIP. We validated the goals by discussing them during stakeholder interviews and obtaining input on them in the questionnaire, including asking questionnaire respondents to rate the policy goals, provide comments, and suggest revisions. We analyzed and incorporated all input as necessary and then developed the following five policy goals: (1) promoting flood risk resilience, (2) minimizing fiscal exposure to the federal government, (3) requiring transparency of the federal fiscal exposure, (4) encouraging consumer participation in the flood insurance market, and (5) minimizing transition and implementation challenges. Within each of these goals, we identified several characteristics to help illustrate how various reform proposals might meet each goal. We used these policy goals and the information we gathered from the interviews, questionnaire, and roundtables to evaluate potential actions for flood insurance reform. We conducted this performance audit from September 2015 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Additional Reforms That Are Contingent on the Development of the Private Flood Insurance Market Based on our review of the literature and discussions with industry and nonindustry stakeholders, we identified two other frequently cited potential flood insurance reforms: (1) converting the National Flood Insurance Program (NFIP) to the insurer of last resort (residual insurer), and (2) having NFIP act as a reinsurance backstop to the private flood insurance market. However, we determined, based on input from industry and nonindustry stakeholders with whom we spoke, that implementing such reforms now could be premature because they would depend on the extent to which a private market for flood insurance developed. Instead, these additional reforms could be considered and evaluated once the reforms discussed in this report had been implemented and the extent of private flood insurance market development could be determined. Because these options were widely mentioned in our meetings and discussed in the literature we reviewed, we summarized them in this appendix for completeness. Many industry and nonindustry stakeholders with whom we spoke believed that as the private sector enters the flood insurance market, NFIP could naturally become the insurer of last resort, or residual insurer, and that this would be a more appropriate role for the federal government in the long term than the current program. However, because the private flood insurance market remains in the early stages of development, some said that it would be preferable to allow the private market to further develop before considering whether a structured residual insurance program might be needed or how such a program might be structured. 
For example, some state residual insurance programs require their rates to be at or above the highest private-sector rate to ensure they do not compete with private insurers and that they only provide insurance coverage to those unable to find it in the private market. If NFIP’s premium rates continued to be less than full-risk and impeded private-sector involvement, it might become necessary to create a program to transfer policies to the private sector, as has been done with some state residual insurance programs. For example, Florida Citizens Property Insurance Corporation developed a two-pronged effort that (1) provides claims and risk data to private insurers through a confidentiality agreement that allows the insurers to determine which policies they would like to cover, and (2) allows private insurers to submit an application with detailed risk characteristics to a clearinghouse that matches existing policies to those characteristics. Some nonindustry stakeholders with whom we spoke expressed concern that private insurers only would offer coverage to NFIP’s lowest-risk policies and leave NFIP with the higher-risk policies, increasing the risk exposure of the program. Similarly, some nonindustry stakeholders with whom we spoke expressed concern that private insurers only would be able to compete for those policies that had premium rates higher than what the private insurers determine to be necessary to reflect their full risk of loss, thus leaving NFIP with policies that had less than full-risk rates. As a result, the decrease in premium revenue could outpace the decrease in expected flood losses. However, some nonindustry stakeholders with whom we spoke that had direct experience with residual insurance programs said that they discovered that private insurers had been willing to insure much riskier policies than they originally expected. 
One industry stakeholder explained that higher-risk policies can be desirable for private insurers because they have the potential for higher profit than lower-risk policies, and a private insurer’s concern is not so much the risk of the individual properties it insures but rather the correlation of risk among those properties. A nonindustry stakeholder said that insurers were willing to insure higher-risk properties as long as they were geographically diversified and balanced by lower-risk properties. One industry and one nonindustry stakeholder acknowledged that there will always be some properties that are too risky for the private market, and that these likely would fall to NFIP as a residual insurer. While the average risk of residual policies that remain in NFIP would be higher than NFIP’s current average risk level, the aggregate exposure to risk could be much lower because of the smaller number of policies. Some industry and nonindustry stakeholders with whom we spoke also said that if NFIP became a smaller program of the highest-risk properties, it would be much better positioned to target its mitigation efforts to those properties. Furthermore, the concern that a residual flood insurance program would have increased fiscal exposure exists primarily because of the existence of less than full-risk rates. Therefore, if rates accounted for the full risk of loss, the program ought to be collecting sufficient premium revenue to pay for the estimated losses associated with the high-risk policies over the long term, although with significant uncertainty because of the nature of the risk. Many industry and nonindustry stakeholders with whom we spoke agreed that the private reinsurance market had a significant capacity to reinsure flood risk for private insurance companies, but disagreed on whether the federal government also would need to play a role in providing such reinsurance. 
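The full-risk pricing logic described above can be illustrated with a toy calculation: a full-risk premium covers the expected annual loss plus a loading for expenses and risk. All of the figures and the loading factor below are hypothetical assumptions for illustration, not FEMA or NFIP rates.

```python
def full_risk_premium(annual_flood_prob, expected_loss_given_flood, loading=0.3):
    """Premium covering the expected annual loss plus a loading factor.

    All inputs are illustrative assumptions, not actual NFIP figures.
    """
    expected_annual_loss = annual_flood_prob * expected_loss_given_flood
    return expected_annual_loss * (1 + loading)

# A hypothetical high-risk property: a 1-in-50 annual flood chance and
# $100,000 of expected damage given a flood.
premium = full_risk_premium(0.02, 100_000)
print(f"${premium:,.0f}")  # expected annual loss of $2,000 yields a $2,600 premium
```

The sketch also shows why less than full-risk rates create fiscal exposure: any premium below the expected-loss line shifts the difference, on average, onto the program.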
For example, some industry and nonindustry stakeholders said that private reinsurers had an abundance of capital available and saw flood risk as an attractive option for diversifying their portfolios and earning a return on the capital. Some of these industry and nonindustry stakeholders said that the private reinsurance market would be sufficient to fully reinsure a private flood insurance market. However, other industry and nonindustry stakeholders with whom we spoke said that the federal government would need to act as a backstop to reinsure the most catastrophic flood risks (those capable of causing losses at levels above which the private market would be unwilling to reinsure). Because the private flood insurance market is still developing, it is unclear whether such a gap in the market might develop. Thus, it could be premature to create a federal reinsurance program, but the issue could be revisited once it is clear how the private insurance and reinsurance markets develop. Appendix III: GAO Contact and Staff Acknowledgments Contact Alicia Puente Cackley, (202) 512-8678 or [email protected]. Acknowledgments In addition to the contact named above, Patrick Ward (Assistant Director); Christopher Forys (Analyst in Charge); Abby Brown; Pamela Davidson; Eli Harpst; Carol Henn; Elizabeth Jimenez; John Karikari; Marc Molino; Patricia Moye; Carl Ramirez; Oliver Richard; Barbara Roesmann; Jessica Sandler; Joe Silvestri; Andrew Stavisky; and Frank Todisco made key contributions to this report.

Congress created NFIP to reduce the escalating costs of federal disaster assistance for flood damage, but also prioritized keeping flood insurance affordable, which transferred the financial burden of flood risk from property owners to the federal government. In many cases, premium rates have not reflected the full risk of loss, so NFIP has not had sufficient funds to pay claims. As of March 2017, NFIP owed $24.6 billion to Treasury. 
NFIP's current authorization expires in September 2017. In this report, GAO focuses on potential actions that can help reduce federal fiscal exposure and improve resilience to flood risk. GAO reviewed laws, GAO reports, and other studies. GAO interviewed officials from FEMA and other agencies. GAO also solicited input from industry stakeholders (including insurers, reinsurers, and actuaries) and nonindustry stakeholders (including academics, consumer groups, and real estate and environmental associations) through interviews, a nongeneralizable questionnaire, and four roundtable discussions. Based on discussions with stakeholders and GAO's past work, reducing federal exposure and improving resilience to flooding will require comprehensive reform of the National Flood Insurance Program (NFIP) that will need to include potential actions in six key areas (see figure below). Comprehensive reform will be essential to help balance competing programmatic goals, such as keeping flood insurance affordable while keeping the program fiscally solvent. Taking actions in isolation may create challenges for some property owners (for example, by reducing the affordability of NFIP policies) and therefore these consequences also will need to be considered. Some of the potential reform options also could be challenging to start or complete, and could face resistance, because they could create new costs for the federal government, the private sector, or property owners. Nevertheless, GAO's work suggests that taking actions on multiple fronts represents the best opportunity to help address the spectrum of challenges confronting NFIP. Through its work, GAO identified the following interrelationships and potential benefits and challenges associated with potential actions that could be taken to reform NFIP in the six areas: Outstanding debt. 
The Federal Emergency Management Agency (FEMA), which administers NFIP, owed $24.6 billion as of March 2017 to the Department of the Treasury (Treasury) for money borrowed to pay claims and other expenses, including $1.6 billion borrowed following a series of floods in 2016. FEMA is unlikely to collect enough in premiums to repay this debt. Eliminating the debt could reduce the need to raise rates to pay interest and principal on existing debt. However, additional premiums still would be needed to reduce the likelihood of future borrowing in the long term. Raising premium rates could create affordability issues for some property owners and discourage them from purchasing flood insurance, and would require other potential actions to help mitigate these challenges. Premium rates. NFIP premiums do not reflect the full risk of loss, which increases the federal fiscal exposure created by the program, obscures that exposure from Congress and taxpayers, contributes to policyholder misperception of flood risk (they may not fully understand the risk of flooding), and discourages private insurers from selling flood insurance (they cannot compete on rates). Eliminating rate subsidies by requiring all rates to reflect the full risk of loss would address an underlying cause of NFIP's debt and minimize federal fiscal exposure. It also would improve policyholder understanding of flood risk and encourage private-sector involvement. However, raising rates makes policies less affordable and could reduce consumer participation. The decreases in affordability could be offset by other actions such as providing means-based assistance. Affordability. Addressing the affordability issues that some consumers currently face, or might face if premium rates were raised, could help ensure more consumers purchase insurance to protect themselves from flood losses. 
GAO previously recommended that any affordability assistance should be funded with a federal appropriation (rather than through discounted premiums) and should be means-tested. Means-testing the assistance could help control potential costs to the federal government, and funding with an appropriation would increase transparency of the federal fiscal exposure to Congress. Many industry and nonindustry stakeholders with whom GAO spoke said affordability assistance should focus on helping to pay for mitigation—such as elevating buildings—because mitigation permanently reduces flood risk (thus reducing premium rates). Mitigation efforts can have high up-front costs, and may not be feasible in all cases, but many stakeholders suggested that federal loans could be used to spread consumer costs over time. Consumer participation. According to many industry and nonindustry stakeholders with whom GAO spoke, some consumers might not purchase flood insurance because they misperceive their flood risk. For example, consumers located outside of the highest-risk areas, who are not required to purchase flood insurance, may mistakenly perceive they are not at risk of flood loss. Consumers also may choose not to purchase flood insurance because they overestimate the adequacy of federal assistance they would expect to receive after a disaster. Expanding the mandatory purchase requirement beyond properties in the highest-risk areas is one option for encouraging consumer participation in flood insurance. However, doing so could face public resistance and create affordability challenges for some, highlighting the importance of an accompanying affordability assistance program. Increasing consumer participation could help ensure more consumers would be better protected from the financial risk of flooding. Other barriers to private-sector involvement. 
Industry and nonindustry stakeholders with whom GAO spoke cited regulatory uncertainty and lack of data as barriers to their ability to sell flood insurance, in addition to the less than full-risk rates charged by FEMA. For example, some industry and nonindustry stakeholders told GAO that while lenders must enforce requirements that certain mortgages have flood insurance, some lenders are uncertain whether private policies meet the requirements. Clarifying the types of policies and coverage that would do so could reduce this uncertainty and encourage the use of private flood insurance. In addition, some stakeholders said that access to NFIP claims data by the insurance industry could allow private insurers to better estimate losses and price policies. FEMA officials said they would need to address privacy concerns to provide such information but have been exploring ways to facilitate more data sharing. NFIP flood resilience efforts. Some industry and nonindustry stakeholders told GAO that greater involvement by private insurers could reduce funding available for some NFIP flood resilience efforts (mitigation, mapping, and community participation). For example, some of these stakeholders said that as the number of NFIP policies decreased, the policy fees FEMA used to help fund mitigation and flood mapping activities also would decrease. Potential actions to offset such a decrease could include appropriating funds for these activities or adding a fee to private policies. This would allow NFIP flood resilience efforts to continue at their current levels as private-sector involvement increased. |
Background IOM defines an emerging infectious disease as either a newly recognized, clinically distinct infectious disease or a known infectious disease whose reported incidence is increasing in a given place or among a specific population. More than 36 newly emerging infectious diseases were identified between 1973 and 2003, and new emerging infectious diseases continue to be identified. Figure 1 provides information on selected emerging infectious diseases compiled by the World Health Organization (WHO) and CDC. According to CDC, nearly 70 percent of emerging infectious disease episodes during the past 10 years have been zoonotic diseases, which are diseases transmitted from animals to humans. The West Nile virus, which was first diagnosed in the United States in 1999, is an example of a zoonotic disease. The West Nile virus can cause encephalitis, or inflammation of the brain. Mosquitoes become infected with West Nile virus when they feed on infected birds, and infected mosquitoes transmit the virus to humans and animals by biting them. Other zoonotic diseases include SARS, avian influenza, human monkeypox, and variant Creutzfeldt-Jakob disease (vCJD), which scientists believe is linked to eating beef from cattle infected with bovine spongiform encephalopathy (BSE), also known as mad cow disease. Surveillance for zoonotic diseases requires collaboration between animal and human disease specialists. Disease surveillance provides information for action against infectious disease threats. Basic infectious disease surveillance activities include detecting and reporting cases of disease, analyzing and confirming this information to identify possible outbreaks or longer-term trends, and applying the information to inform public health decision-making. 
When effective, surveillance can facilitate (1) timely action to control outbreaks, (2) informed allocation of resources to meet changing disease conditions and other public health threats, and (3) adjustment of disease control programs to make them more effective. Responsibilities for Disease Surveillance In the United States, responsibility for disease surveillance is shared— involving health care providers; more than 3,000 local health departments including county, city, and tribal health departments; 59 state and territorial health departments; more than 180,000 public and private laboratories; and public health officials from four federal departments. Although state health departments have primary responsibility for disease surveillance in the United States, health care providers, local health departments, and certain federal departments and agencies share this responsibility. In addition, the United States is a member of WHO, which is responsible for coordinating international disease surveillance and response efforts. Health Care Providers Health care providers are responsible for the medical diagnosis and treatment of their individual patients, and they also have a responsibility to protect public health—a responsibility that includes helping to identify and prevent the spread of infectious diseases. Because health care providers are typically the first health officials to encounter cases of infectious diseases—and have the opportunity to diagnose them—these professionals play an important role in disease surveillance. Generally, state laws or regulations require health care providers to report confirmed or suspected cases of notifiable diseases to their local and/or state health department. A notifiable disease is an infectious disease for which regular, frequent, and timely information on individual cases is considered necessary for the prevention and control of the disease. 
States publish a list of the diseases they consider notifiable and therefore subject to reporting requirements. According to IOM, most states also require health care providers to report any unusual illnesses or deaths—especially those for which a cause cannot be readily established. State and Local Health Departments States, through the use of their state and local health departments, have principal responsibility for protecting the public’s health and therefore take the lead in conducting disease surveillance and supporting response efforts. Generally, local health departments are responsible for conducting initial investigations into reports of infectious diseases. They employ epidemiologists, physicians, nurses, and other professionals. Local health departments are also responsible for sharing information they obtain from providers or other sources with their state department of health. State health departments are responsible for collecting surveillance information from across their state, coordinating investigations and response efforts, and voluntarily sharing surveillance data with CDC and others. Federal Agencies and Departments Several federal agencies and departments are involved in disease surveillance. For example, CDC, an agency in HHS, is charged with protecting the nation’s public health by directing efforts to prevent and control diseases and responding to public health emergencies. It has primary responsibility for conducting national disease surveillance and developing epidemiological and laboratory tools to enhance disease surveillance. CDC also provides an array of technical and financial support for state infectious disease surveillance efforts. FDA, which is also a part of HHS, is responsible for protecting the public health by ensuring that domestic and imported food products (except meat, poultry, and certain processed egg products) are safe and properly labeled. 
It is also responsible for ensuring that all drugs and feeds used in animals are safe, effective, and properly labeled and produce no health hazards when used in animals that produce foods for humans. FDA enforces food safety laws by inspecting food production establishments and warehouses and collecting and analyzing food samples for microbial contamination that could lead to foodborne illnesses. USDA is responsible for protecting and improving the health and marketability of animals and animal products in the United States by preventing, controlling, and eliminating animal diseases. USDA is also responsible for regulating veterinary vaccines and other similar products. USDA undertakes disease surveillance and response activities to protect U.S. livestock, ensure the safety of international trade, and contribute to the national zoonotic disease surveillance effort. In addition, USDA is responsible for ensuring that meat, poultry, eggs, and certain processed egg products are safe and properly labeled and packaged. USDA establishes quality standards and conducts inspections of processing facilities in order to safeguard certain animal food products against infectious diseases that pose a risk to humans. DOD, while primarily responsible for the health and protection of its service members, contributes to global disease surveillance, training, research, and response to emerging infectious disease threats. DHS’s mission involves, among other things, protecting the United States against terrorist attacks. One activity undertaken by DHS is to coordinate the surveillance activities of federal agencies and departments related to national security. World Health Organization While national governments have primary responsibility for disease surveillance and response within their country, WHO plays a central role in coordinating international surveillance and response efforts. 
An agency of the United Nations, WHO administers the International Health Regulations, which outline WHO’s role and the responsibility of member states in preventing the global spread of infectious diseases. Adopted in 1951 and last modified in 1981, the International Health Regulations require, among other things, that WHO member states report the incidence of three diseases within their borders—cholera, plague, and yellow fever. Proposed revisions to these regulations would expand the scope of reporting beyond the current three diseases to include all events potentially constituting a public health emergency of international concern. WHO serves as the focal point for international information on these and other diseases, and the agency also helps marshal resources from member states to control outbreaks within individual countries or regions. In addition, WHO works with national governments to improve their surveillance capacities through—for example—assessing and redesigning national surveillance strategies, offering training in epidemiologic and laboratory techniques, and emphasizing more efficient communication systems. Disease Surveillance Comprises a Variety of Efforts at the State and Federal Levels Disease surveillance comprises a variety of efforts at the state and federal levels. At the state level, state health departments collect and analyze data on notifiable diseases submitted by health care providers and others, although the diseases considered notifiable and the requirements for reporting them vary by state. State-run laboratories conduct testing of samples for clinical diagnosis and participate in special clinical or epidemiologic studies. State public health departments verify cases of notifiable diseases, monitor disease incidence, and identify possible outbreaks within their state. 
At the federal level, agencies and departments collect and analyze surveillance data gathered from the states and from international sources. Some federal agencies and departments also support their own national surveillance systems and laboratory networks and have several means of sharing surveillance information with local, state, and international public health partners. Finally, some federal agencies and departments support state and international surveillance efforts by providing training and technical expertise. States Collect and Report Data on Notifiable Diseases, Although the Diseases Considered Notifiable and the Reporting Requirements Vary by State To conduct disease surveillance at the state level, state public health officials collect reports on cases of notifiable diseases from health care providers and others. Both the diseases considered notifiable and the requirements for reporting them vary by state. Most states’ lists of notifiable diseases approximate a national list of notifiable diseases maintained and revised by the Council of State and Territorial Epidemiologists (CSTE) in collaboration with CDC. (See table 1 for the 2004 national list of notifiable diseases maintained by CSTE.) This national list is reviewed annually and revised periodically. State lists of notifiable diseases generally include cholera, plague, and yellow fever—consistent with WHO’s International Health Regulations. On the other hand, according to state and federal health officials, states modify their list of notifiable diseases to reflect the public health needs of their region. States may include diseases on their state list that impact their state but do not appear on the national list. For example, one border state includes the gastrointestinal disease amebiasis—a disease most often found in the United States among immigrants from developing countries—in its state list of notifiable diseases. 
However, amebiasis is not included on the current national list of notifiable diseases. Conversely, states may exclude diseases that are on the national list but have little relevance for their state. For example, although Rocky Mountain spotted fever is listed on the national list of notifiable diseases, it was excluded from one state’s list we reviewed because relatively few cases of this disease are reported in that area. Appendix II provides a description of diseases on the national notifiable disease list and other selected emerging infectious diseases. States also vary in their requirements for who should report notifiable diseases, and the deadlines for reporting these diseases after they have been diagnosed vary by disease. Officials from the 11 states we interviewed told us that, in addition to health care providers, they require clinical laboratories to report notifiable diseases. On the other hand, some—but not all—of the 11 states have expanded the responsibility for reporting suspected notifiable diseases. Depending on the state, those required to report suspected notifiable diseases can include veterinarians, day care centers, hotels, and food service establishments. Penalties for not reporting a notifiable disease vary by state. For example, failing to report a notifiable disease in one state is a misdemeanor, and upon conviction, violators may be fined from $50 to $1,000 and/or may be imprisoned for up to 90 days. In another state, the penalty ranges from $25 to $300. Depending on the contagiousness or virulence of the disease, some diseases have to be reported more quickly than others. For example, in one state, botulism must be reported immediately after a case or suspected case is identified, while chronic hepatitis B must be reported within one month of its identification. Similarly, in another state, Q fever must be reported within one working day, while gonorrhea must be reported within one week. 
Health care providers rely on a variety of public and private laboratories to help them diagnose cases of notifiable diseases. In some cases only laboratory results can definitively identify pathogens. Every state has at least one state public health laboratory to support its infectious diseases surveillance activities and other public health programs. State laboratories conduct testing for routine surveillance or as part of clinical or epidemiologic studies. For rare or unusual pathogens, these laboratories provide diagnostic tests that are not always available in commercial laboratories. For more common pathogens, these laboratories provide testing using new technologies that still need controlled evaluation. State public health laboratories also provide specialized testing for low-incidence, high-risk diseases, such as tuberculosis and botulism. Results from state public health laboratories are used by epidemiologists to document trends and identify events that may indicate an emerging problem. Upon diagnosing a case involving a notifiable disease, local health providers and others who report notifiable diseases are required to send the reports to state health departments through a variety of state and local disease reporting systems, which range from paper-based reporting to secure, Internet-based systems. Our interviews of public health officials in 11 states found that about half of these states have systems that allow public health care providers to submit reports of notifiable diseases to their state health department over the Internet. For example, state officials in one state we interviewed said their public health department has supported a state-wide Internet-based electronic communicable disease reporting and outbreak alert system since 1995. Officials in another state told us that since 2002, the state has had a secure statewide Web-based hospital, laboratory, and physician disease-reporting system. 
State health officials conduct their own analysis of disease data to verify cases, monitor the incidence of diseases, and identify possible outbreaks. States voluntarily report their notifiable disease data to CDC, using multiple and sometimes duplicative systems. For example, state officials currently report information on gonorrhea to CDC through two CDC systems: the Sexually Transmitted Disease Management Information System (STD*MIS) and the National Electronic Telecommunications System for Surveillance (NETSS). STD*MIS is a national electronic surveillance system that tracks sexually transmitted diseases, including gonorrhea throughout the United States. NETSS is a computerized public health information system used for tracking notifiable diseases. Although states are not legally required to report information on notifiable diseases to CDC, CDC officials explained the agency makes such reporting from the states a prerequisite for receiving certain types of CDC funding. Appendix III provides additional information on NETSS and other types of systems used for disease surveillance. Federal Agencies and Departments Conduct and Support Disease Surveillance in a Variety of Ways In partnership with states, the federal government also has a key role in disease surveillance. Federal agencies and departments collect and analyze national disease surveillance data and maintain disease surveillance systems. Federal agencies and departments become involved in investigating the causes of infectious diseases and maintain their own laboratory facilities. Federal agencies and departments also share disease surveillance information. In addition, federal agencies and departments provide funding and technical expertise to support disease surveillance efforts at the state, local, and international levels. 
Federal Agencies and Departments Collect and Analyze Surveillance Data Gathered by States One way federal agencies and departments support disease surveillance is by collecting and analyzing surveillance data gathered by the states. CDC, for example, analyzes the reports it receives from state health departments on cases of notifiable diseases in humans. CDC uses the reports from the states to monitor national health trends, formulate and implement prevention strategies, and evaluate state and federal disease prevention efforts. The agency publishes current data on notifiable diseases in its Morbidity and Mortality Weekly Report. Like CDC, USDA also collects surveillance data from the states. Specifically, USDA collects information from participating state veterinarians on the presence of specific confirmed clinical diseases in specific livestock, poultry, and aquaculture species in the United States. State animal health officials obtain this information from multiple sources—including veterinary laboratories, public health laboratories, and veterinarians—and report this information to the National Animal Health Reporting System (NAHRS). Similarly, FDA, often in cooperation with CDC, receives and interprets state data. For example, FDA officials told us they analyze state information from CDC on outbreaks of infectious diseases that originate from foods that FDA regulates. FDA then uses this information to trace the regulated food back to its origin and investigate possible sources of contamination. In addition, FDA and CDC interpret data on emerging infectious diseases to establish safeguards to minimize the risk of infectious disease transmission from regulated biological products, such as blood and vaccines. Federal agencies and departments also collect and analyze information from international sources. For example, CDC and DOD obtain information on potential outbreaks from WHO. 
According to CDC, in many cases the initial alert of potential outbreaks is reported to WHO through the Global Public Health Intelligence Network (GPHIN), a system developed by Canadian health officials and used by WHO since 1997. GPHIN is an Internet-based application that searches more than 950 news feeds and discussion groups around the world, both in the media and elsewhere on the Internet. WHO then verifies the reported outbreak and, if necessary, notifies the global health community. About 40 percent of the approximately 200 outbreaks investigated and reported to WHO each year come from GPHIN. In addition to these formal mechanisms for collecting and analyzing data, federal public health officials stressed the importance of obtaining information through their contacts at state and local health departments, other federal agencies and departments, foreign ministries of health, or other international organizations. For example, according to state public health officials, CDC learned of last year’s monkeypox outbreak in one state through a phone call from state public health department officials. After this initial contact, the state health department officials, in collaboration with officials from CDC, arranged a conference call that included federal officials from CDC and USDA, state and local health department officials, health care providers, and hospital epidemiologists to further share information on the outbreak. Federal Agencies and Departments Operate and Fund Disease Surveillance Systems Some federal agencies and departments conduct disease surveillance using disease surveillance systems they operate or fund. These systems gather data from various locations throughout the country to monitor the incidence of infectious diseases. These systems supplement the data on notifiable diseases collected by states and monitor surveillance information states do not collect.
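The kind of keyword-based media scanning GPHIN performs can be illustrated with a minimal sketch. The keyword list, record schema, and matching logic below are hypothetical simplifications for illustration only, not GPHIN's actual implementation; flagged items would still go to human analysts for verification, mirroring the WHO verification step described above.

```python
# Illustrative keyword scan over news items (hypothetical schema and keywords,
# not GPHIN's actual implementation).

OUTBREAK_KEYWORDS = {"outbreak", "epidemic", "cluster", "hemorrhagic", "influenza"}

def flag_candidate_reports(news_items):
    """Return the items whose text mentions any outbreak-related keyword."""
    flagged = []
    for item in news_items:
        # Normalize each word: strip trailing punctuation, lowercase.
        words = {w.strip(".,;:").lower() for w in item["text"].split()}
        if words & OUTBREAK_KEYWORDS:
            flagged.append(item)
    return flagged

items = [
    {"source": "wire-1", "text": "Officials report an outbreak of fever in the region."},
    {"source": "wire-2", "text": "Local election results were announced today."},
]
print([i["source"] for i in flag_candidate_reports(items)])  # ['wire-1']
```

A production system would of course use multilingual text processing and relevance ranking rather than a fixed keyword set, but the pipeline shape — scan, flag, hand off for verification — is the same.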
In general, these surveillance systems are distinguished from one another by the types of infectious diseases or syndromes they monitor and the sources from which they collect data. Some surveillance systems, known as sentinel surveillance systems, rely on groups of selected health care providers who have agreed to routinely supply information from clinical settings on targeted diseases. Other systems, known as syndromic surveillance systems, monitor the frequency and distribution of health-related symptoms—otherwise known as syndromes—among people within a specific geographic area. Syndromic surveillance systems are designed to detect anomalous increases in certain syndromes, such as skin rashes, that may indicate the beginning of an infectious disease outbreak. Because these systems monitor symptoms and other signs of disease outbreaks instead of waiting for clinically confirmed reports or diagnoses of a disease, some experts believe that syndromic surveillance systems help public health officials increase the speed with which they may identify outbreaks. A number of disease surveillance systems in the United States are operated or funded by federal agencies and departments. Some of these include the following: IDSA-EIN—A Sentinel Disease Surveillance System The Infectious Diseases Society of America Emerging Infections Network (IDSA-EIN) consists of about 900 physicians who specialize in infectious diseases. The network conducts surveillance by contacting the physicians every six to eight weeks to request information about any unusual clinical cases they have encountered. IDSA-EIN members, CDC, and state and territorial epidemiologists all receive summaries of the information obtained by the IDSA-EIN. EIP Sites—Participants Conduct Population-Based Surveillance Participants in CDC’s Emerging Infections Programs (EIPs) conduct population-based surveillance of specific diseases in certain locations throughout the United States.
As of May 2004, there were 11 EIP sites nationwide that involved partnerships among CDC, state and local public health departments, and academic centers. The 11 EIP sites are California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, Tennessee, and Texas. The type of surveillance conducted by EIP sites depends on local priorities and expertise. For example, the Connecticut EIP conducts active surveillance for emerging tick-borne diseases in the state. FoodNet—A National Surveillance System for Monitoring Foodborne Diseases One of the principal systems used for surveillance of foodborne diseases is the Foodborne Disease Active Surveillance Network (FoodNet). FoodNet—a collaborative effort among CDC, USDA, FDA, and nine EIP sites—is a system that collects information about the occurrence and causes of certain types of foodborne outbreaks. FoodNet is used to detect cases or outbreaks of foodborne disease, identify their source, recognize trends, and respond to outbreaks. Public health departments that participate in FoodNet receive funds from CDC, USDA, and FDA to systematically contact laboratories in their geographical areas and solicit incidence data. According to CDC, as a result of this active solicitation, FoodNet provides more accurate estimates of the occurrence of foodborne diseases than are otherwise available. ESSENCE—A DOD Syndromic Surveillance System Similar to CDC, DOD maintains its own surveillance system. DOD’s ESSENCE is a syndromic surveillance system designed to increase the rapid detection of disease outbreaks. DOD’s system collects data on patient symptoms from military treatment facilities and selected civilian populations. ESSENCE then classifies these symptoms into syndrome groups based on presented signs, symptoms, and diagnoses. These syndrome groups include respiratory, fever/malaise/sepsis, gastrointestinal, neurologic, dermatologic, and coma or sudden death.
The frequency of these syndromes can be monitored by DOD and participating state public health officials on a daily basis, and unusual increases can be detected through data analysis. Federal Agencies and Departments Maintain Laboratories and Support Networks of Laboratories Federal agencies and departments also support networks of laboratories that test specimens and develop diagnostic tests for identifying infectious diseases and biological or chemical agents. In some cases, these laboratories provide highly specialized tests—such as tests for anthrax—that are not always available in state public health or commercial laboratories, and they assist states with testing during outbreaks. These laboratories help diagnose life-threatening or unusual infectious diseases for which satisfactory tests are not widely or commercially available, and they confirm public or private laboratory test results. For example, to strengthen the nation’s capacity to rapidly detect biological and chemical agents that could be used as a terrorist weapon, CDC, in partnership with the Federal Bureau of Investigation and the Association of Public Health Laboratories, created the Laboratory Response Network (LRN). According to CDC, the LRN, which was created in 1999, leverages the resources of 126 laboratories to maintain an integrated national and international network of laboratories that are fully equipped to respond quickly to acts of chemical or biological terrorism, emerging infectious diseases, and other public health threats and emergencies. The network includes the following types of laboratories—federal, state and local public health, military, and international laboratories, as well as laboratories that specialize in food, environmental, and veterinary testing. LRN laboratories have been used in several public health emergencies.
For example, in 2001, a Florida LRN laboratory discovered the presence of Bacillus anthracis, the pathogen that causes anthrax, in a clinical specimen it tested. CDC has also developed and operates PulseNet. PulseNet is a national network of public health laboratories that perform DNA “fingerprinting” on bacteria that may be foodborne. The network identifies and labels each “fingerprint” pattern and permits rapid comparison of these patterns through an electronic database at CDC. This network is intended to provide an early warning system for outbreaks of foodborne disease. FDA’s system, the Electronic Laboratory Exchange Network (eLEXNET), is a Web-based system for real-time sharing of food safety laboratory data among federal, state, and local agencies. It is a secure system that allows public health officials at multiple government agencies engaged in food safety activities to compare and coordinate laboratory analysis findings. According to FDA officials, it enables public health officials to assess risks and analyze trends, and it provides the necessary infrastructure for an early warning system that identifies potentially hazardous foods. As of July 2004, FDA officials said there were 113 laboratories representing 50 states that are part of the eLEXNET system. DOD also maintains laboratories that perform and develop diagnostic tests for infectious diseases. For example, the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) has the capability to diagnose infectious diseases that require relatively more advanced testing techniques. During the SARS outbreak, CDC requested assistance from USAMRIID to conduct laboratory testing related to the SARS investigation. USAMRIID is also a member of the LRN. In addition, DOD maintains a network of five overseas medical research laboratories that support worldwide efforts to detect and respond to infectious diseases. 
These five overseas laboratories primarily focus on surveillance for drug-resistant pathogens, unexplained fevers, and influenza. In addition, two of these overseas laboratories are WHO Collaborating Centers. Like DOD and CDC, the USDA has laboratories that test for infectious diseases. USDA’s National Veterinary Services Laboratories is the only federal program in the United States dedicated to testing for domestic and foreign animal diseases. In doing so, it supports surveillance for zoonotic diseases. The National Veterinary Services Laboratories have the ability to test for more than 100 diseases in animals, and some of these—such as rabies, anthrax, and BSE (also known as mad cow disease)—can be transmitted to humans. In addition, the National Animal Health Laboratory Network is a pilot program of diagnostic laboratories that provide animal disease surveillance testing and develop standardized and rapid diagnostic techniques. According to a USDA official, this network has improved zoonotic disease detection due to the use of better technology, improved coordination among the network laboratories, and improved disease reporting. Federal Agencies and Departments Share Disease Surveillance Information As part of their role in national disease surveillance efforts, officials from federal agencies and departments share the surveillance information they collect and analyze with local, state, and international partners. One mechanism federal agencies and departments use to share information is their respective Internet sites. For example, in its annual “Summary of Notifiable Diseases,” CDC posts on its Internet site the data it collects from state health departments. The agency also posts information on foodborne diseases on its FoodNet Internet page. During the SARS outbreak, CDC, USDA, FDA, DOD, and DHS posted information about the disease on their respective Web sites. 
Web site postings included information on clinical evaluation and diagnosis, travel advisories, and assessments of the impact of the outbreak on food consumption in various regions. CDC also operates an early warning and response system, the Health Alert Network (HAN), that is designed to ensure that state and local health departments as well as other federal agencies and departments have timely access to emerging health information. Through HAN, CDC issues health alerts and other public health bulletins to an estimated 1 million public health officials, including physicians, nurses, laboratory staff, and others. During the SARS outbreak, for instance, CDC used HAN to disseminate what the agency knew about the emerging infectious disease. Also, state officials we interviewed reported receiving updates through HAN on the avian influenza outbreak in Asia. According to CDC, as of March 2003, 89 percent of local health departments have high-speed continuous Internet access and the ability to receive broadcast health alerts. CDC also shares information on infectious diseases through a restricted communication system, the Epidemic Information Exchange (Epi-X). Developed by CDC, this system is a secure, Web-based communication system operating in all 50 states. CDC uses this system primarily to share information relevant to disease outbreaks with state and local public health officials and with other federal officials. CDC uses Epi-X to issue emergency alerts, but unlike HAN, Epi-X also serves as a forum for routine professional discussions and non-emergency inquiries. Authorized Epi-X users can post questions and reports, query CDC, and receive feedback on ongoing infectious disease control efforts. 
According to CDC, as of 2004, over 1,200 public health officials at the federal, state, and local levels had used the system to communicate with colleagues and experts, track information for outbreak investigations and response efforts, conduct online discussions, and request assistance. In addition, according to CDC, it has agreements with Canada and Mexico that allow international public health officials to become authorized Epi-X users. These international users include officials from both the Canadian and Mexican Ministries of Health and health officials in Mexican states that border the United States. In addition, CDC staff assigned to WHO and health care providers working internationally for the U.S. Department of State are authorized Epi-X users. Federal Agencies and Departments Provide Training, Technical Assistance, and Funding Federal agencies and departments also provide training, technical assistance, and funding to state and international public health officials. For example, to enhance the U.S. public health infrastructure for disease surveillance and response to infectious diseases, CDC operates several programs, including the Epidemiology and Laboratory Capacity (ELC) program, the Epidemic Intelligence Service (EIS) program, and EIP. The ELC program provides training, technical assistance, and funding to 58 state and local health departments. The program assists state and local health departments in maintaining surveillance for infectious diseases, providing technical support through laboratory services, and investigating outbreaks. Additionally, the EIS is a 2-year postgraduate program intended to increase the number of federally trained epidemiologists working in public health. While the majority of EIS officers train at CDC headquarters, others are trained at state and large local health departments. Graduates of the program are employed in federal government, state health departments and other health care settings. 
Further, the EIP—which is a collaboration among CDC, state health departments, and other public health partners—is a network of sites that acts as a national resource for the surveillance, prevention, and control of emerging infectious diseases. These sites conduct population-based surveillance for selected diseases or syndromes and research that go beyond the routine functions of local health departments to address issues in infectious diseases and public health. CDC provided nearly $20 million in funding to EIPs in fiscal year 2003 in order to support their surveillance and research activities. In selected foreign locations, CDC operates international training programs, such as the Field Epidemiology Training Program (FETP). For more than 20 years, CDC has collaborated with foreign ministries of health around the world to help establish and conduct field epidemiology training programs in those countries. CDC officials said that through FETP, CDC trains approximately 50 to 60 physicians and social scientists each year from these countries. This training in applied public health integrates disease surveillance, applied research, prevention, and control activities. Graduates of the FETP serve in their native countries and provide links between CDC and their respective ministries of health. CDC officials said that trainees from its international programs have frequently provided important information on disease outbreaks. Another international program sponsored by CDC is the International Emerging Infections Program (IEIP). IEIP sites are modeled on the EIP sites in the United States that integrate disease surveillance, applied research, training, and prevention and control activities. According to CDC, the IEIP in Thailand, established in 2001, played a key role in the global response to the SARS and avian influenza outbreaks.
DOD has also taken steps to increase international disease surveillance expertise by providing various types of laboratory and epidemiology training through its overseas laboratories. Some federal agencies and departments also provide technical assistance to foreign countries, both directly and through WHO. For example, CDC officials told us they provide technical assistance and training that support the development of major international networks critical to enhancing global surveillance, such as the WHO Global Influenza Surveillance Network. Additionally, throughout the SARS outbreak, CDC was the foremost participant in WHO’s multilateral efforts to identify and respond to SARS in Asia, with CDC officials constituting about two-thirds of the 115 public health experts deployed to the region. CDC also contributed its expertise and resources by conducting epidemiological studies, laboratory testing, and clinical research on the disease. Specifically, CDC assigned epidemiologists, laboratory scientists, hospital infection control specialists, and environmental engineers to provide technical assistance in Asia. CDC also assigned senior epidemiologists to work locally with a WHO team to investigate the outbreak in China. DOD has also provided technical assistance during investigations of potential outbreaks. For example, DOD established a field laboratory during the Rift Valley fever epidemic in Yemen in 2000 to assist with surveillance during the outbreak. Public Health Officials Have Implemented Initiatives Intended to Enhance Disease Surveillance, but Challenges Remain Public health officials at the state and federal level have undertaken several initiatives that are intended to enhance disease surveillance capabilities.
Public health officials have implemented and expanded syndromic surveillance systems in order to detect outbreaks more quickly, but there are concerns that these systems are costly to run and still largely untested. Public health officials have also implemented initiatives designed to improve public health communications and disease reporting. However, some of these initiatives have not been fully implemented. Federal public health officials have also undertaken initiatives intended to improve the coordination of zoonotic surveillance efforts. Finally, federal officials have also expanded training programs for epidemiologists and other public health experts. Public Health Officials Have Implemented and Expanded Syndromic Surveillance Systems, but There Are Some Concerns about the Value of This Type of Surveillance In an effort to enhance the ability to detect infectious disease outbreaks, particularly in their early stages, states have implemented numerous syndromic surveillance systems. Officials from each of the state public health departments we interviewed reported that at least one syndromic surveillance system was used in their state. These systems collect information on syndromes from a variety of sources. For example, the Real-time Outbreak and Disease Surveillance (RODS) system, used in four of the states in our study, automatically gathers patient data from hospital emergency room visits. This system identifies patients’ chief medical complaints, classifies the complaints according to syndrome, and aggregates that data in order to look for anomalous increases in certain syndromes that may reveal an infectious disease outbreak. Another syndromic surveillance system used by some state public health officials that we interviewed, the National Retail Data Monitor (NRDM), collects data from retail sources instead of hospitals. 
As of February 2004, NRDM collected sales data from about 19,000 stores, including pharmacies, in order to monitor sales patterns in such items as over-the-counter influenza medications for signs of a developing infectious disease outbreak. The system looks for unusual sales patterns—such as a spike in the number of over-the-counter medications purchased in a particular city or county—that might indicate the onset of an infectious disease outbreak. The system monitors the data automatically on a daily basis and generates summaries of sales patterns using timelines and maps. At the federal level, CDC has recently introduced a new syndromic surveillance system called BioSense. BioSense aggregates data from numerous electronic sources to enhance early detection of possible disease outbreaks, bioterrorist threats, or other urgent public health threats. The data are collected and analyzed by CDC and also made available to state and local public health departments. In the first quarter of 2004, BioSense became available for use, gathering data from DOD and the Department of Veterans Affairs medical treatment facilities in the United States and more than 10,000 over-the-counter retail drug stores nationwide. According to CDC, the agency plans to add other data sources, such as data from laboratories, poison control centers, health plan medical records, nursing call centers, emergency medical service dispatches, health care provider billing claims, and pharmacy prescriptions. Since the end of 2001, DOD has made enhancements designed to improve its syndromic surveillance system, ESSENCE. Specifically, DOD expanded ESSENCE to include data from all military treatment facilities worldwide and data from various civilian sources, such as civilian intensive care units, over-the-counter pharmacies, school attendance records, and laboratory test results.
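The detection logic these syndromic systems share—classify incoming records into syndrome groups, aggregate daily counts, and flag counts that depart from a historical baseline—can be sketched in a few lines. The syndrome keyword map, baseline data, and three-standard-deviation threshold below are hypothetical; systems such as RODS and ESSENCE use far more sophisticated classifiers and statistical detectors.

```python
# Minimal sketch of syndromic-surveillance detection logic (hypothetical
# syndrome map and threshold; not the actual RODS or ESSENCE algorithms).
from statistics import mean, stdev

SYNDROME_MAP = {"cough": "respiratory", "fever": "fever/malaise",
                "rash": "dermatologic", "vomiting": "gastrointestinal"}

def classify(chief_complaint):
    """Map a free-text chief complaint to a syndrome group by keyword."""
    for keyword, syndrome in SYNDROME_MAP.items():
        if keyword in chief_complaint.lower():
            return syndrome
    return "other"

def is_anomalous(history, today, k=3.0):
    """Flag today's count if it exceeds the baseline mean by k standard deviations."""
    m, s = mean(history), stdev(history)
    return today > m + k * s

daily_rash_counts = [4, 5, 3, 6, 4, 5, 4]           # hypothetical baseline week
print(classify("patient presents with skin rash"))   # dermatologic
print(is_anomalous(daily_rash_counts, today=15))     # True
```

The sensitivity concern raised later in this section is visible even here: lowering `k` catches outbreaks earlier but multiplies false alarms, which is the tradeoff public health officials weigh when tuning these systems.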
In addition, DOD officials told us they are in the process of improving ESSENCE’s mapping capabilities and developing more advanced statistical algorithms for identifying anomalous increases in syndromes. DOD officials also told us that they are exploring additional data sources for ESSENCE, such as large health maintenance organizations, and working on improving the speed at which the system’s data can be accessed. Although syndromic surveillance systems are used by federal agencies and departments and in all 11 of the states whose officials we interviewed, concerns about this approach to surveillance have been raised. Relative to traditional methods of surveillance, syndromic surveillance systems are costly to maintain and still largely untested. According to a recent IOM report, the resource requirements for automatic reporting of syndromic data from hospitals, clinics, and emergency departments are currently high, but these costs may lessen over time with standardization of software. Syndromic surveillance systems require relatively more resources to operate than other types of surveillance systems, in part, because their sensitivity makes them more likely to issue false alarms, which in turn have the potential to overtax public health systems. Furthermore, some state officials as well as public health experts noted that it has not been demonstrated in a rigorous way that these systems can detect emerging infectious diseases or bioterrorist events more rapidly than they would otherwise be detected through traditional surveillance. According to public health experts, evaluation tools, performance measures and evidence-based standards for syndromic surveillance are needed. 
CDC recently published a “Framework for Evaluating Public Health Surveillance Systems for Early Detection of Outbreaks.” This framework creates a standardized evaluation methodology intended to help public health officials improve decision-making regarding the implementation of syndromic and other surveillance systems for outbreak detection. Public Health Officials Are Implementing Initiatives Designed to Enhance Public Health Communications and Disease Reporting, but Some Initiatives Are Incomplete CDC is taking steps to enhance its two public health communications systems, HAN and Epi-X, which are used in disease surveillance and response efforts. For example, CDC is working to increase the number of HAN participants who receive assistance with their communication capacities. According to CDC, the agency will continue to increase the number of local jurisdictions that have high-speed Internet capability from 90 percent to 100 percent. Similarly, CDC has expanded Epi-X by giving officials at other federal agencies and departments, such as DOD, the ability to use the system. In addition, CDC is also adding users to Epi-X from local health departments, giving access to CDC staff in other countries, and making the system available to FETPs located in 21 countries. Finally, CDC is facilitating Epi-X’s interface with other data sources by allowing users to access GPHIN, the system that searches Web-based media for information on infectious disease outbreaks worldwide. In addition to the efforts to enhance communication systems, public health officials are taking steps to enhance the reporting of notifiable disease data and other surveillance information. Some of the state public health officials we interviewed told us that they have implemented efforts to increase health care providers’ reporting of notifiable diseases to their state health department.
For example, an official from one state we interviewed said that the state health department now uses liaisons who regularly visit health care providers to establish regular communication between the providers and local public health authorities. The liaisons remind the providers of their responsibility for reporting cases of notifiable diseases to the state. Similarly, the Commissioner of Health from another state sent letters to health care providers in the state, reminding the providers of their important role in recognizing an infectious disease outbreak or bioterrorist event. The letter contained information on changes to the state’s notifiable disease list, a listing of references and Internet sites for clinical information on specific pathogens, and information on the Internet-based communication system the state department of health used to disseminate and gather sensitive information regarding disease surveillance. Despite some states’ efforts to increase disease reporting by health care providers, some public health experts believe that underreporting by providers is still a problem. According to the IOM, many health care providers do not fully understand their role in infectious disease surveillance, including the importance of prompt reporting of clinical information to relevant public health authorities. According to the study, few medical or other health science schools’ curricula emphasize the importance of and the requirements for reporting diseases of public health significance; residency programs seldom address the need for health care provider participation in public health surveillance; and little, if any, continuing medical education exists on the topic, nor is it widely integrated into board certification exams. Furthermore, despite the existence of state notifiable disease lists and related laws, some providers may be unaware of basic reporting requirements.
One study noted that health care providers failed to report disease information because they often lacked information about what, when, and how to report. Other efforts by public health officials to enhance notifiable disease reporting target the information technology used in such reporting. For example, public health officials in several states told us that they are enhancing their electronic systems to permit providers in their states to report notifiable diseases to their state health departments. Officials in one state, for instance, told us that they are enhancing their reporting system to permit 20,000 to 30,000 physicians to report 61 notifiable diseases using an integrated, secure, Web-based system. Similarly, some states have also implemented electronic reporting systems that obtain information on notifiable diseases directly from clinical laboratories. When the laboratories conduct tests for health care providers on cases that may involve notifiable diseases, in some states the results of those tests—if positive—are automatically reported to the state health department. Several state public health officials we interviewed told us that they receive electronic laboratory reports from clinical laboratories in their state. Other state officials told us that they were developing or piloting this capability. According to state public health officials and IOM, automated laboratory reporting of notifiable infectious diseases has been shown to improve the timeliness of reporting on these diseases. At the federal level, CDC is deploying a technological initiative known as NEDSS. According to CDC, this initiative is designed to make the electronic reporting from both clinical laboratories and practitioners to state and local health departments and from state and local health departments to CDC more timely, accurate, and complete.
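At its core, the automated laboratory reporting described above—forwarding positive results for notifiable diseases to the health department without manual intervention—is a filter over laboratory result records. The disease list and record schema in this sketch are hypothetical simplifications; real implementations exchange standardized electronic messages rather than plain dictionaries.

```python
# Illustrative sketch of automated electronic laboratory reporting
# (hypothetical notifiable-disease list and record schema).

NOTIFIABLE_DISEASES = {"gonorrhea", "tuberculosis", "anthrax"}

def results_to_report(lab_results):
    """Select positive results for notifiable diseases for automatic
    forwarding to the state health department."""
    return [r for r in lab_results
            if r["result"] == "positive" and r["disease"] in NOTIFIABLE_DISEASES]

batch = [
    {"specimen_id": "A1", "disease": "gonorrhea",  "result": "positive"},
    {"specimen_id": "A2", "disease": "gonorrhea",  "result": "negative"},
    {"specimen_id": "A3", "disease": "rhinovirus", "result": "positive"},
]
print([r["specimen_id"] for r in results_to_report(batch)])  # ['A1']
```

Because the filter runs on every batch of results as they are finalized, reporting happens as soon as a test turns positive, which is why automated laboratory reporting improves timeliness over provider-initiated paper reporting.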
CDC officials said that NEDSS will facilitate reporting by supporting a unified and standardized way of transmitting information to CDC, and result in the integration of 60 to 100 different systems used by state health departments to report disease data to CDC. As part of the NEDSS initiative, CDC is developing an architecture that consists of a set of standards that can be used for creating interoperable systems. These standards comprise (1) data standards, (2) parameters for an Internet-based communications infrastructure, and (3) policy-level agreements on data access and sharing as well as on protections for confidentiality. CDC has also developed ready-to-use software—the NEDSS Base System (NBS)—that operates within these standards. State and local health departments that are updating their reporting systems have the option of either using the NBS software or developing their own systems based on the common NEDSS architecture. According to CDC, when fully implemented, the use of NEDSS-architecture-compliant software or NBS software by local and state public health departments and CDC will allow public health partners to exchange data, merge data from different laboratories, and obtain information on cross-jurisdictional outbreaks. Whereas states currently use multiple and sometimes duplicative systems to report different notifiable diseases to CDC, NEDSS will replace many of these systems with a single system. For example, the National Electronic Telecommunications System for Surveillance (NETSS), STD*MIS (sexually transmitted diseases), TIMMS (tuberculosis), STELLAR (lead poisoning in children), and EHARS (HIV) will be consolidated through NEDSS. Despite the advantages that may be gained from creating interoperable systems, the NEDSS initiative has not been implemented in many states. The NEDSS initiative began in fiscal year 2000, and by May 2004, only 4 states that use the NBS software were able to transfer data to CDC.
According to CDC, 10 states are actively deploying NEDSS-architecture-compliant software or NBS software and 16 states are in the preliminary process of developing their technical and security infrastructure to accommodate NEDSS standards. Some state officials told us that even though they have developed electronic systems that comply with the NEDSS standards, they have not been able to transfer data to CDC using their systems because the systems are still not compatible. CDC officials said that the national industry standards on design, development, and data transport have continued to evolve, and they are working with the states to receive data from those who opted to use the NEDSS architecture to develop their own compliant software. Federal Public Health Officials Have Enhanced Federal Coordination on Zoonotic Disease Surveillance and Expanded Training Programs, but Surveillance Efforts Still Face Challenges CDC, USDA, and FDA have made recent efforts to enhance their coordination of zoonotic disease surveillance. For example, CDC and USDA are working with two national laboratory associations to enhance coordination of zoonotic disease surveillance by adding veterinary diagnostic laboratories to the LRN. As of May 2004, 10 veterinary laboratories have been added to the LRN, and CDC officials told us that they have plans to add more veterinary laboratories in the future. In addition, CDC officials told us the agency has appointed a staff person whose responsibility, in part, is to assist in finding ways to enhance zoonotic disease coordination efforts among federal agencies and departments and with other organizations. This person is helping CDC reconstruct a working group of officials from CDC, USDA, and FDA to coordinate on zoonotic disease surveillance. According to CDC officials, the goal of this working group is to explore ways to link existing surveillance systems to better coordinate and integrate surveillance for wildlife, domestic animal, and human diseases.
CDC officials also said that the feasibility of a pilot project to demonstrate this proposed integrated zoonotic disease surveillance system is being explored. Finally, USDA officials told us that they hired 23 wildlife biologists in the fall of 2003 to coordinate disease surveillance, monitoring, and management activities among USDA, CDC, states, and other agencies. While each of these initiatives is intended to enhance the surveillance of zoonotic diseases, each is still in the planning stage or the very early stages of implementation. Another way CDC has worked to enhance disease surveillance is through its support for epidemiological training programs. In general, these programs are aimed at developing an experienced workforce for state and local public health departments and disease surveillance systems. For example, in recent years, CDC has expanded its EIS program. CDC has increased the number of participants in this program from 148 in 2001 to 167 in 2003. During this time period, CDC has also increased the number of EIS participants assigned to state and local health departments from 25-35 per year to about 50 per year. CDC has also enhanced the type of training the participants receive. All participants now receive training in terrorism preparedness and emergency response. CDC has also expanded its training programs intended to increase the expertise involved in international disease surveillance efforts. For example, CDC is helping to implement a comprehensive system of surveillance and containment of global infectious diseases through the expansion of its IEIP and the creation of the Field Epidemiology and Laboratory Training Program (FELTP). CDC is enhancing a comprehensive global surveillance and response network for infectious diseases by adding two new IEIP sites in China and Kenya and by expanding activities in the existing site in Thailand. 
CDC officials said that the program in Kenya began in June 2004, and they may be able to begin recruitment for the program in China by the end of 2004. CDC is expanding its Field Epidemiology Training Program (FETP) by creating a laboratory training component, known as the FELTP. According to CDC officials, FELTPs are designed to increase laboratory capacity in overseas locations. Currently, there is one FELTP located in Kenya whose students recently began their training program. The efforts to build disease surveillance capacities abroad, which were discussed above, may also help domestic disease surveillance efforts. According to a recent IOM report, surveillance of and response to emerging infectious diseases in other parts of the world can directly benefit the United States as well as the country in which the disease is detected. According to the IOM, some disease outbreaks that have been detected internationally allowed the United States to develop diagnostic tests, prepare for influenza outbreaks, or recognize zoonotic threats like avian influenza. Similarly, the IOM points out that coordination between U.S. and European sentinel surveillance systems has allowed several countries, including the United States, to remove products from the market that were contaminated with pathogens. On the other hand, efforts to enhance international disease surveillance still face challenges. Foremost among these are limitations in the amount of surveillance information that many countries can collect and therefore share with international partners. Many developing countries lack health care infrastructures and the ability to administer simple diagnostic tests for diseases such as tuberculosis. We have previously reported that few developing countries have public health laboratories. Also, many developing countries lack the ability to compile basic health indices, such as death rates, causes of death, or general disease burden. 
Furthermore, even countries with public health infrastructures may lack developed surveillance systems for reporting crucial disease information to authorities. For example, officials in China noted that during the first SARS outbreak, a large number of cases in Beijing were not reported because there was no system to collect this information from hospitals in the city. Concluding Observations The threat posed by infectious diseases has continued to grow as new diseases have emerged and as known diseases have reappeared with increased frequency. In addition, there are concerns about the threat posed by the deployment of infectious disease pathogens as instruments of terror or weapons of war. The U.S. surveillance system is built largely on cooperation among many different individuals and entities at the local level. State and federal initiatives to enhance their ongoing disease surveillance efforts are important to ensure that disease surveillance in the United States can meet the threat posed by infectious diseases. Some of these initiatives, such as improvements to information technology, offer the possibility of increasing the accuracy and timeliness of disease surveillance. As state and federal public health officials develop these initiatives, their ongoing evaluation efforts may help decision-makers address technical issues and allocate resources to the most effective disease surveillance systems. Agency Comments and Our Evaluation HHS, USDA, and DOD reviewed a draft of this report. HHS provided written comments. In its written comments, HHS stated that the draft captures many important issues in surveillance. However, HHS stated that the draft includes a discussion of programs that do not directly pertain to surveillance for emerging infectious diseases. 
In this report, we defined surveillance activities to include detecting and reporting cases of disease, analyzing and confirming this information to identify possible outbreaks or longer-term trends, and applying the information to inform public health decision-making; the programs and surveillance systems discussed in this report fit within that definition. HHS’s written comments also stated that the report should characterize the essential purpose of the NEDSS initiative as an initiative designed to transform surveillance at the local and/or state health department level. It said that the current gap NEDSS seeks to address is primarily between the clinical sector and local and state public health departments. We have added information to indicate that NEDSS is designed to enhance the electronic reporting of information from both clinical laboratories and practitioners to state and local health departments and from state and local health departments to CDC. HHS’s written comments also pointed out that FDA does not collect surveillance reports on foodborne outbreaks as a part of a national surveillance system, but that CDC shares its findings with FDA. We have clarified the report to say that FDA analyzes state information it receives from CDC. HHS’s written comments also suggested that information be added to the draft report. Specifically, it said that the draft report should have described the PulseNet network and should have included information on CDC’s technical advice and training that supports major international networks, such as the WHO Global Influenza Surveillance Network. Although this report only provides examples of selected surveillance systems and we could not describe all systems, we have added some information on these networks. 
Finally, HHS said that we should clarify that CDC is the lead agency for human disease surveillance and that it fulfills this responsibility in close collaboration with states, other federal agencies, WHO, and other partners. As we noted in the draft report, CDC is charged with protecting the nation’s public health by directing efforts to prevent and control diseases and CDC has primary responsibility for conducting national disease surveillance. HHS’s comments are reprinted in appendix IV. In providing oral comments on a draft of this report, DOD said it concurred and did not have any substantive comments. USDA said it had no comments on the draft report. HHS and USDA provided technical comments that we incorporated where appropriate. As agreed with your office, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents. At that time, we will send copies of this report to the Secretaries of Health and Human Services, Agriculture, and Defense; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7119. Other contacts and staff acknowledgments are listed in appendix V. Appendix I: Scope and Methodology To describe how state and federal public health officials conduct disease surveillance, we reviewed state documents—such as state policy manuals, reports, cooperative agreements with the Centers for Disease Control and Prevention (CDC), and various other documents—from 11 states. 
These states—California, Colorado, Indiana, Louisiana, Minnesota, New York, Pennsylvania, Tennessee, Texas, Washington, and Wisconsin—were selected based on their participation in CDC’s Emerging Infections Program, each state’s most recent infectious disease outbreak, and their geographic location. Of these 11 states, California, Colorado, Minnesota, New York, Tennessee, and Texas participate in CDC’s Emerging Infections Program. We also conducted structured interviews of state public health officials from these states. In addition to our structured questions, we asked public health officials from Colorado, Louisiana, and New York questions about their most recent West Nile outbreak. We asked public health officials from Indiana and Wisconsin questions about their monkeypox outbreak, and public health officials from Pennsylvania and Tennessee about their hepatitis A outbreak. We asked public health officials from the remaining states—California, Minnesota, Texas, and Washington—to describe their respective experiences with their most recent infectious disease outbreak, which included outbreaks of wound botulism and severe acute respiratory syndrome (SARS). We also reviewed documents and interviewed officials from the Departments of Agriculture, Defense, and Homeland Security; CDC; and the Food and Drug Administration. In addition, we interviewed representatives from professional associations representing state and local public health officials. These associations included the Association of Public Health Laboratories, the Association of State and Territorial Health Officials, the Council of State and Territorial Epidemiologists, and the National Association of County and City Health Officials. We reviewed related publications by these professional organizations, including studies and position papers written by these associations. 
To identify initiatives intended to enhance disease surveillance, we reviewed information on states’ initiatives designed to enhance infectious disease surveillance, including the use of syndromic surveillance systems and information technology systems, as well as journal articles assessing the value of syndromic surveillance systems. We also interviewed public health officials from the 11 states and representatives from professional associations about their assessments of enhancements and continuing concerns in infectious disease surveillance efforts. To identify federal initiatives to enhance disease surveillance, we reviewed related federal documents, including federal policy directives, agency and departmental strategies, and annual reports. In addition, we interviewed federal health officials involved in disease surveillance, asking them about efforts to enhance existing surveillance programs and activities. We also reviewed reports and recommendations published by the Institute of Medicine related to emerging infectious diseases. We focused our review of initiatives intended to enhance surveillance on those currently underway or implemented since 2001. We conducted our work from October 2003 through July 2004 in accordance with generally accepted government auditing standards. Appendix II: Information on Nationally Notifiable Infectious Diseases and Selected Worldwide Emerging Infectious Diseases This appendix provides descriptions of the diseases contained on the U.S. List of Nationally Notifiable Infectious Diseases for 2004 as well as other selected worldwide emerging infectious diseases. Description of U.S. List of Nationally Notifiable Infectious Diseases, 2004 Acquired immunodeficiency syndrome (AIDS) is caused by the human immunodeficiency virus (HIV), which progressively destroys the body’s immune system. AIDS patients may contract opportunistic infections that usually do not make healthy people sick. 
Symptoms of opportunistic infections common in people with AIDS include coughing and shortness of breath, seizures, difficult or painful swallowing, fever, vision loss, nausea, weight loss and extreme fatigue, severe headaches, and coma. The term AIDS applies to the most advanced stages of HIV infection. Anthrax is an acute infectious disease caused by a bacterium commonly found in the soil. Although anthrax can infect humans, it occurs most commonly in plant-eating animals. Human anthrax infections have usually resulted from occupational exposure to infected animals or contaminated animal products. Anthrax infection can take one of three forms: cutaneous, usually through a cut or an abrasion; gastrointestinal, usually by ingesting undercooked contaminated meat; or inhalation, by breathing airborne anthrax spores into the lungs. The symptoms are different for each form and usually occur within 7 days of exposure. Anthrax can be treated with antibiotics and a vaccine is available. Botulism is a muscle-paralyzing disease caused by a bacterial toxin. Symptoms of botulism include double vision, blurred vision, drooping eyelids, slurred speech, difficulty swallowing, dry mouth, and muscle weakness that always descends through the body. Paralysis of breathing muscles can cause a person to stop breathing and die, unless mechanical assistance is provided. An antitoxin exists that is effective in reducing the severity of symptoms if administered early in the course of the disease. Brucellosis, a disease of animals, is transmitted to humans through contact with infected animals or contaminated milk. Infection produces a wide range of symptoms, including fever, generalized aches and pains, and fatigue, which may last from a few weeks to several months. Brucellosis can be treated with antibiotics. Chancroid is a highly contagious sexually transmitted disease (STD) caused by a bacterial infection. 
Transmission results either through skin-to-skin contact with open sore(s) or when contact is made with the pus-like fluid from the ulcer. Chancroid causes ulcers, usually of the genitals, and if left untreated, may facilitate the transmission of HIV. Chancroid can successfully be treated with antibiotics. Chlamydial infection is an STD resulting from a bacterial infection. One of the most widespread bacterial STDs in the United States, genital chlamydial infection can occur during oral, vaginal, or anal sexual contact with an infected partner. Because chlamydial infection does not make most people sick, infected persons may not know they have it, and symptoms that do develop may be mild. Chlamydial infection is treated with antibiotics. However, if left untreated, it can lead to serious illnesses. Cholera is a bacterial illness that is contracted by ingesting contaminated water or food. Infection results in acute watery diarrhea, leading to extreme dehydration and death if left unaddressed. Known vaccines and antibiotics have only limited impact on the disease—treatment focuses on rehydration. In the United States, cholera has been virtually eliminated by modern sewage and water treatment systems. However, travelers have brought contaminated seafood back to the United States, resulting in foodborne outbreaks. Coccidioidomycosis is a disease caused by a fungus that grows as a mold in the soil. It is transmitted through inhalation after a disturbance of contaminated soil by humans or natural disasters, such as earthquakes, and usually presents as a flu-like illness with symptoms such as fever, cough, headaches, and rash. Although most infections are undetectable, it can cause serious and life-threatening infections, especially among the immunosuppressed. The disease-causing fungus is endemic in soil in semiarid areas, including the Southwestern United States. Various drugs are now available to treat this disease. 
Cryptosporidiosis is caused by a microscopic parasite and can be spread through contaminated water, uncooked contaminated foods, including fruits and vegetables, and any surface that has been in contact with the parasite. Symptoms include diarrhea, stomach cramps or upset stomach, and a slight fever. People with weak immune systems may have more serious reactions. There is currently no consistently effective treatment for this disease. Cyclosporiasis is a foodborne illness caused by a microscopic parasite that infects the small intestine. Humans contract the illness by ingesting contaminated water or food. Cyclosporiasis usually results in watery diarrhea. Other symptoms can include loss of appetite, substantial weight loss, bloating, stomach cramps, nausea, muscle aches, and fatigue. This disease is often treated with a combination of two antibiotics. Diphtheria is a respiratory disease occurring worldwide that is spread through coughing and sneezing. Symptoms range from mild to severe and can be complicated by damage to the heart muscle or peripheral nerves. Treatment for diphtheria consists of immediate administration of diphtheria antitoxin and antibiotics. Ehrlichiosis is the general name used to describe several bacterial diseases that affect humans and animals. In the United States, the disease is transmitted through the bite of an infected tick. Early clinical presentations of ehrlichiosis may resemble nonspecific signs and symptoms of various other infectious and non-infectious diseases, such as fever, headache, and muscle ache. In some cases, patients develop a very mild form of the disease and may not seek medical attention or present any symptoms. Ehrlichiosis may be treated with an antibiotic. The disease occurs primarily in the southeastern and south central regions of the United States. Encephalitis, Arboviral is an inflammation of the brain that may be caused by arthropod-borne viruses, also called arboviruses. 
Six types of arboviral encephalitides are present in the United States—eastern equine encephalitis, western equine encephalitis, St. Louis encephalitis, La Crosse encephalitis, and West Nile encephalitis, all of which are transmitted by mosquitoes, and Powassan encephalitis, which is transmitted by ticks. The majority of human infections are asymptomatic or may result in a nonspecific flu-like syndrome. However, in a small proportion of cases, infections may lead to death or permanent neurologic damage. No effective antiviral drugs have been discovered and there are no commercially available human vaccines for these diseases. Enterohemorrhagic Escherichia coli (E. coli) is a bacterium that includes multiple serotypes, such as E. coli O157:H7, that can cause gastroenteritis in humans. E. coli is normally found in the intestines and serves a useful function in the body. However, a minority of E. coli strains are capable of causing human illness. Transmission occurs by ingesting contaminated food or water. Infections vary in severity and may be characterized by diarrhea (often bloody) and abdominal cramps. The illness is usually self-limited and lasts for an average of 8 days. Giardiasis is a diarrheal illness caused by a one-celled, microscopic parasite in the intestines of humans and animals. It has become recognized as one of the most common causes of waterborne disease in humans in the United States. Humans may contract the disease by accidentally swallowing the parasite, such as through swallowing contaminated water or eating uncooked, contaminated food. Symptoms of giardiasis include diarrhea, loose or watery stool, stomach cramps, and upset stomach. Several drugs are available to treat this disease. Gonorrhea is a bacterial STD that infects the genital tract, the mouth, and the rectum. Gonorrhea is transmitted during sexual intercourse and affects both women and men. 
Symptoms in women include bleeding associated with vaginal intercourse and painful or burning sensations when urinating. Symptoms in men include pus from the penis and pain and burning sensations during urination. Gonorrhea is usually treated with antibiotics. Haemophilus influenzae is a bacterium found in the nose and throat that is transmitted through direct contact with respiratory droplets from a carrier or patient. It causes a variety of illnesses, including meningitis (inflammation of the coverings of the spinal column and brain), bacteremia (infection of the blood), pneumonia (infection of the lungs), and septic arthritis (infection of the joints). Serious infections are treated with specific antibiotics. Hansen’s disease (leprosy) is a chronic bacterial infection for which the exact mode of transmission is not fully understood. However, most investigators think that the bacterium is usually spread from human-to-human through respiratory droplets. Primarily affecting the skin, nerves, and mucous membranes, leprosy causes deformities of the face and extremities after many years, but those receiving antibiotic treatment are considered free of active infection. Hantavirus pulmonary syndrome is caused by several strains of a virus that is transmitted by exposure to infected rodents. Symptoms include fever, fatigue, muscle aches, coughing, and shortness of breath; the onset of respiratory distress often leads to death. There is no specific treatment for the disease, other than appropriate management of respiratory problems. The virus was first identified in the Southwestern United States in 1993. Hemolytic uremic syndrome is one of the most common causes of sudden, short-term kidney failure in children. Most cases occur after an infection of the digestive system by a specific E. coli bacterium. It develops when the bacteria lodged in the digestive system make toxins that enter the bloodstream and start to destroy red blood cells. 
Symptoms may not become apparent until a week after the digestive problems and include paleness, tiredness, and irritability, as well as small, unexplained bruises or bleeding from the nose or mouth. Treatments usually consist of maintaining normal salt and water levels in the body, but may include blood transfusions. Hepatitis A is an acute viral infection of the liver. Human-to-human transmission of hepatitis A often occurs by placing something contaminated in the mouth. Symptoms include jaundice, fatigue, abdominal pain, loss of appetite, nausea, diarrhea, and fever. A vaccine is available for protection against hepatitis A, and once a person has had the disease, it cannot be contracted again. Hepatitis B is a viral infection of the liver that is transmitted by contact with the body fluids of an infected person. The virus may cause an acute illness, as well as a life-long infection that carries a high risk of serious illness or eventual death from liver cancer or cirrhosis. Symptoms include jaundice, fatigue, abdominal pain, loss of appetite, nausea, vomiting, and joint pain. An effective vaccine that has been available for this disease since 1982 is the best protection against hepatitis B. Treatment is also available for chronic hepatitis B. Hepatitis C is a viral infection of the liver that may be either acute or chronic and is transmitted by contact with the body fluids of an infected person. Symptoms of this disease include jaundice, fatigue, dark urine, abdominal pain, loss of appetite, and nausea. There is currently no vaccine available for hepatitis C; however, two drugs are available for treatment. Human immunodeficiency virus (HIV) causes AIDS and is transmitted through contact with the body fluids of an infected person or from mother to baby. Infected adults may be asymptomatic for 10 years or more. Because the immune system is weakened, there is eventually greater susceptibility to opportunistic diseases such as pneumonia and tuberculosis. 
Drugs are available that can prevent transmission from pregnant mothers to their unborn children and can help slow the onset of AIDS. Legionellosis is a bacterial infection that has two distinct forms—Legionnaires’ disease, the more severe form of infection, which includes pneumonia, and Pontiac fever, a milder illness. Legionellosis outbreaks have often occurred after persons have breathed mists that come from a contaminated water source. Symptoms for Legionnaires’ disease usually include fever, chills, and a cough. Chest X-rays often show pneumonia; however, additional tests are needed to confirm diagnosis. Those with Pontiac fever experience fever and muscle aches and do not have pneumonia. Legionnaires’ disease is treated with antibiotics, while those with Pontiac fever generally recover without treatment. Listeriosis is a bacterial foodborne illness. The disease affects primarily pregnant women, newborns, and adults with weakened immune systems and is spread through the consumption of contaminated food. Symptoms of listeriosis include fever, muscle aches, and, at times, gastrointestinal symptoms, such as nausea or diarrhea. Listeriosis is treated with antibiotics. Lyme disease is a bacterial illness transmitted by ticks. The area around the tick bite sometimes develops a “bull’s eye” rash, typically accompanied by fever, headache, and musculoskeletal aches and pains. There is an effective vaccine for adults at high risk. If untreated with antibiotics, arthritis, neurologic abnormalities, and—rarely—cardiac problems may follow. The disease is rarely, if ever, fatal and is endemic in North America and Europe. The pathogen for Lyme disease was first detected in the United States in 1982. Malaria is a parasitic disease transmitted by infected mosquitoes. Symptoms include fever, shivering, joint pain, headache, repeated vomiting, severe anemia, convulsions, coma, and, in severe cases, death. 
Malaria is becoming increasingly resistant to known antimalarial treatments and is now reemerging in countries where it was once under control. Measles is a highly contagious viral disease, transmitted through human-to-human contact, such as by coughing or sneezing. It often strikes children and causes fever, conjunctivitis, congestion, and cough, followed by a rash. Secondary infections often cause further complications. A measles vaccine is available. Meningococcal disease, caused by a particular type of bacteria, is transmitted by human-to-human contact and is characterized by sudden onset of fever, headache, neck stiffness, and altered consciousness. There is a vaccine for this disease, but it loses its effectiveness over time and must be repeated. Mumps is a viral disease of the lymph nodes, transmitted through human-to-human contact, such as by coughing or sneezing. Symptoms include fever, headache, muscle ache, and swelling of the lymph nodes close to the jaw. A vaccine is available to prevent mumps. Pertussis (whooping cough) is a highly contagious bacterial disease transmitted through human-to-human contact, such as by coughing or sneezing. Symptoms include runny nose and sneezing, a mild fever, and a cough that gradually becomes more severe, turning into coughing spasms that end in vomiting and exhaustion. Pertussis is treatable with antibiotics, and a pertussis vaccine is available. Plague, a severe bacterial infection, is usually transmitted to humans by infected rodent fleas (bubonic plague) and uncommonly by human-to-human respiratory exposure (pneumonic plague). Symptoms of bubonic plague include swollen, painful lymph glands, fever, chills, headache, and exhaustion. People with pneumonic plague develop cough, bloody sputum, and breathing difficulty. Plague is treatable with antibiotics if diagnosed early. Poliomyelitis, paralytic (polio) is a viral disease transmitted through human-to-human contact. 
In most cases, there are no symptoms or only mild, flu-like symptoms. However, it may lead to debility of the lower extremities. Although there is no cure, an effective vaccine is available. Psittacosis (parrot fever) is a bacterial infection that is spread from birds to humans. Humans become infected by inhaling aerosolized dried bird droppings and by handling infected birds. Symptoms of psittacosis include fever, headache, rash, chills, and sometimes pneumonia. The disease is treatable with antibiotics. Q fever is a bacterial disease that is spread from livestock or domesticated pets to humans. Infection of humans usually occurs by inhalation of barnyard dust contaminated with animal fluids. Symptoms of Q fever are not specific to this disease, making an accurate diagnosis difficult without appropriate laboratory testing. However, most acute cases begin with a sudden onset of symptoms such as high fevers, severe headache, confusion, sore throat, nausea, vomiting, abdominal pain, and chest pain. Q fever is treated with antibiotics. Rabies is a viral disease transmitted through contact with saliva of infected animals. Symptoms progress from respiratory, gastrointestinal, or central nervous system affliction to hyperactivity to complete paralysis, coma, and death. Once symptoms start to appear, the disease is not treatable. Multiple-dose courses of vaccine and immunoglobulin can be used to prevent onset of the disease if administered immediately after contact with a suspected carrier. Rocky Mountain spotted fever is a bacterial disease spread to humans by ticks. It can be difficult to diagnose in the early stages. Initial signs and symptoms of the disease include sudden onset of fever, headache, and muscle pain, followed by the development of a rash. Without prompt and appropriate treatment with antibiotics, it can be fatal. Rubella is a viral disease that is transmitted through human-to-human contact, such as by coughing and sneezing. 
Symptoms of this disease include a rash, conjunctivitis, low fever, and nausea. Natural rubella infection normally confers lifelong immunity. A number of vaccines for rubella are also available. Congenital rubella syndrome is a form of rubella that is characterized by multiple defects, particularly to the brain, heart, eyes, and ears. This syndrome is an important cause of hearing and visual impairment and mental retardation in areas where the mild form of rubella has not been controlled or eliminated. The primary purpose of the rubella vaccine is to prevent the occurrence of this disease. Salmonellosis (salmonella infection) is a bacterial infection transmitted to humans by eating contaminated foods. Most persons infected with salmonella develop diarrhea, fever, and abdominal cramps. Infections often do not require treatment unless the patient becomes severely dehydrated or the infection spreads from the intestines. In this latter instance, antibiotics are used to treat salmonellosis. Severe acute respiratory syndrome (SARS) is an emerging viral respiratory illness that seems to be transmitted primarily through close human-to-human contact, such as through coughing and sneezing. In general, SARS begins with a high fever. Other symptoms may include headache, an overall feeling of discomfort, and body aches. Some people also have mild respiratory symptoms at the onset and may develop a dry cough, and most patients develop pneumonia. Currently, there is no definitive test to identify SARS during the early phase of the illness, which complicates diagnosis. Furthermore, there is no specific treatment for SARS. SARS was first reported in Asia in February 2003. Shigellosis is a highly contagious, diarrheal disease caused by four strains of bacteria and is transmitted by human-to-human contact and contaminated food and water. One of these strains, an unusually virulent pathogen, causes large-scale, regional outbreaks of dysentery (bloody diarrhea). 
In addition to diarrhea, patients experience fever, abdominal cramps, and rectal pain. The disease is treatable by rehydration and antibiotics. Smallpox is an acute, contagious, and sometimes fatal viral disease transmitted through human-to-human contact. Symptoms usually begin with high fever, head and body aches, and sometimes vomiting. A rash follows that spreads and progresses to raised bumps and pus-filled blisters that eventually fall off, leaving pitted scars. There is no treatment for smallpox. However, it can be prevented through use of the smallpox vaccine. Streptococcal disease (invasive Group A) is a bacterial disease transmitted through direct contact with an infected person’s mucus or through contact with wounds or sores on the skin. Invasive group A streptococcus (GAS) infections occur when bacteria get into parts of the body where they are not usually found, such as the blood, muscle, or lung. GAS infections can be treated with many different antibiotics. Streptococcal toxic shock syndrome (STSS) is one of the most severe, but least common forms of invasive GAS diseases. STSS, which is not spread from human-to-human, causes blood pressure to rapidly drop and organs to fail. Symptoms include fever, dizziness, confusion and a flat red rash over large areas of the body. Early treatment of GAS infections with antibiotics may reduce the risk of death from invasive GAS disease. Streptococcus pneumoniae is a bacterium that includes more than 90 strains and is transmitted through human-to-human contact. It is the cause of multiple diseases, including pneumonia, bacteremia, meningitis, and sinusitis. Some strains of this bacterium are becoming resistant to one or more antibiotics. CDC and several states are currently conducting additional surveillance for the resistant forms of this bacterium. Syphilis is a bacterial STD with signs and symptoms that are indistinguishable from those of other diseases. 
Syphilis is passed from person-to-person through direct contact with a syphilis sore and progresses through three stages. The primary stage is usually marked by the appearance of a single sore. The second stage involves a skin rash and mucous membrane lesions. Finally, the late stage begins when secondary symptoms disappear. Many people infected with syphilis do not have any symptoms for years yet remain at risk for late complications if they are not treated. Syphilis is easy to treat in its early stages, usually with antibiotics. Tetanus (lockjaw) is caused by a bacterium found in the intestines of many animals and in the soil. It is transmitted to humans through open wounds. Symptoms include generalized rigidity and convulsive spasms of the skeletal muscles. Tetanus can be treated with an antitoxin, and there is an effective vaccine. Toxic shock syndrome is a bacterial disease that develops when the disease-causing bacterium colonizes skin and mucous membranes in humans. This disease has been associated with the use of tampons and intravaginal contraceptive devices in women and occurs as a complication of skin abscesses or surgery. Characterized by sudden onset of fever, chills, vomiting, diarrhea, muscle aches, and rash, toxic shock syndrome can rapidly progress to severe and intractable hypotension and multisystem dysfunction. Treatment usually includes the use of antibiotics and supportive treatment to prevent dehydration and organ failure. Trichinosis (trichinellosis) is a food-borne illness caused by eating raw or undercooked pork and wild game products infected with a species of worm larvae. It cannot be spread from human-to-human, but only through consumption of contaminated food. Symptoms include nausea, diarrhea, vomiting, fatigue, fever, and abdominal discomfort, followed by additional symptoms, such as headaches, fevers, chills, aching joints, and muscle pains. Several drugs are available to treat trichinosis.
Tuberculosis is a bacterial disease that is usually transmitted by contact with an infected person. People with healthy immune systems can become infected but not ill. Symptoms of tuberculosis can include a bad cough, coughing up blood, pain in the chest, fatigue, weight loss, fever, and chills. Several drugs can be used to treat tuberculosis, but the disease is becoming increasingly drug resistant. Tularemia is caused by a bacterium often found in animals. Humans can contract tularemia in different ways, including being bitten by an infected tick or other insect, handling infected animal carcasses, ingesting contaminated food or water, or inhaling the bacterium. Symptoms of this disease can include sudden fever, chills, headaches, muscle aches, joint pain, dry cough, and progressive weakness. Tularemia is often treated with antibiotics. Typhoid fever is a bacterial illness transmitted through contaminated food and water. Symptoms include high fever, stomach pains, and in some cases a rash. It is treatable by antibiotics, and there is also a vaccine available, although it is not always effective. Vancomycin-Intermediate/Resistant Staphylococcus aureus are specific bacteria resistant to the antimicrobial agent vancomycin. Persons who develop these infections have certain characteristics, such as several underlying health conditions (such as diabetes and kidney disease), recent hospitalizations, and recent exposure to vancomycin and other antimicrobial agents. Despite their resistance to vancomycin, these infections can be treated with several drugs. Varicella (chickenpox) is a highly infectious viral disease that spreads through human-to-human contact, such as through coughing or sneezing. It results in a blister-like rash that appears first on the trunk and face, but can spread over the entire body. Other symptoms include itching, tiredness, and fever. Multiple drug treatments and a vaccine for varicella are available.
Yellow fever is a mosquito-borne viral disease that occurs in tropical and subtropical areas. The yellow fever virus is transmitted to humans through a specific mosquito. Symptoms include fever, muscle pain, headache, loss of appetite, and nausea. There is no treatment for yellow fever beyond supportive therapies. A vaccine for yellow fever is available. Selected Worldwide Emerging Infectious Diseases Variant Creutzfeldt-Jakob disease (vCJD) is a rare, degenerative, fatal brain disorder in humans. It is believed that vCJD is contracted through the consumption of cattle products contaminated with the agent of bovine spongiform encephalopathy (BSE) or “mad cow disease”—a slowly progressive, degenerative, fatal disease affecting the central nervous system of adult cattle. There is no known treatment for vCJD. Dengue fever is a mosquito-borne infection that results in a severe, flu-like illness with specific symptoms that vary based on the age of the victim. Dengue hemorrhagic fever is a potentially lethal complication that may include convulsions. There is no vaccine for dengue fever, nor is there any treatment beyond supportive therapy. Ebola hemorrhagic fever, a viral disease, is transmitted by direct contact with the body fluids of infected individuals, causing acute fever, diarrhea that can be bloody, vomiting, internal and external bleeding, and other symptoms. There is no known cure, although some measures, including rehydration, can improve the odds of survival. Ebola kills more than half of those it infects. Identified for the first time in 1976, the Ebola virus is still considered rare, but there have been a number of outbreaks in central Africa. Echinococcosis (Alveolar Hydatid disease) is caused by a parasitic tapeworm found mostly in the Northern Hemisphere. The disease is transmitted to humans when they swallow the tapeworm eggs, either on contaminated food or after contact with an animal carrier.
Symptoms are slow to appear, usually involving the liver—and may mimic liver cancer or cirrhosis—and can include abdominal pain, weakness, and weight loss. Surgery is the most common form of treatment, although follow-up medication is often needed. Hendra virus infection occurs in both humans and many species of animals. In humans, it causes a respiratory disease that is often fatal. It was discovered in 1994, and has not been found outside of Australia. Human monkeypox is a rare viral disease caused by a virus related to smallpox. It is transmitted to humans through contact with infected animals as well as through human-to-human contact. In humans, symptoms of monkeypox are similar to smallpox, but usually they are milder. Monkeypox symptoms include fever, muscle ache, swelling of the lymph nodes, and a fluid-filled rash. The first case of monkeypox in the United States occurred in June 2003. There is no specific treatment for monkeypox but the smallpox vaccine may offer protection against the disease. Influenza A, H5N1 (avian influenza) is a type of influenza that infects birds and may be transmitted to humans. Symptoms of avian influenza in humans range from typical influenza-like symptoms to eye infections, pneumonia, acute respiratory distress, and other severe and life- threatening complications. Lassa fever is a viral disease, transmitted through contact with infected rats. Symptoms include deafness, fever, nausea, vomiting, diarrhea, and, in more severe cases, seizures and hemorrhage. This disease is difficult to distinguish from several other diseases. No vaccine is currently available, although ribavirin has been used as a preventive measure as well as to treat the disease. Marburg hemorrhagic fever is a rare and severe viral disease that affects both humans and animals. The mode of transmission from animals to humans is unknown. However, humans who become ill may spread the virus to other people. 
The onset of the disease is sudden and includes fever, chills, and headache. Symptoms progress to include a rash, nausea, vomiting, and chest pain as well as jaundice, inflammation of the pancreas, shock, massive hemorrhaging, and multi-organ dysfunction. Because many of the signs and symptoms of Marburg fever are similar to those of other infectious diseases, it may be difficult to diagnose. A specific treatment for this disease is unknown. Nipah virus is an emerging disease causing encephalitis. It is believed to be transmitted through contact with infected pigs. Symptoms include headache, fever, muscle spasms, coma, and brain damage. There is no treatment beyond alleviation of symptoms. O’nyong-nyong fever is a viral illness spread by mosquitoes. It causes symptoms such as joint pain, rash, high fever, and eye pain. Fatalities are rare. Rift Valley fever is a viral disease that primarily affects animals—including domesticated livestock—but can be transmitted to people by mosquitoes or contact with the body fluids of infected animals. Rift Valley fever usually causes a flu-like illness lasting 4 to 7 days, but can develop into a more severe hemorrhagic fever that can result in death. There is no established course of treatment for infected patients. The disease has occurred in many parts of Africa and, in September 2000, was for the first time reported outside of Africa, in Saudi Arabia and Yemen. Venezuelan equine encephalitis is a mosquito-borne viral disease that can be transmitted to humans from equine hosts. Symptoms in humans include flu-like symptoms of fever and headache. Severe illness and death can occur in the young and the elderly and those with weakened immune systems. The only treatment available is supportive therapy. West Nile virus is a mosquito-borne viral disease that is transmitted to humans through infected mosquitoes. Many people infected with the virus do not become ill or show symptoms.
Symptoms that do appear may be limited to headache, sore throat, backache, or fatigue. There is no vaccine for the West Nile virus, and no specific treatment besides supportive therapies. The disease occurs in Africa, Eastern Europe, West Asia, and the Middle East. This disease appeared for the first time in the United States in 1999. Appendix III: Selected List of Systems and Networks Engaged in Disease Surveillance Below we describe selected electronic systems and networks to support disease surveillance that are discussed in this report. This list encompasses electronic communications and surveillance systems as well as networks of laboratories and public health officials engaged in disease surveillance. BioSense is a syndromic surveillance system operated by CDC. BioSense aggregates syndromic data from a variety of electronic sources to improve early detection of possible disease outbreaks, bioterrorism threats, or other urgent public health threats. The data are collected and analyzed by CDC and also made available to state and local public health agencies. Data sources include patient encounters from the Department of Defense’s medical treatment facilities in the United States, the Department of Veterans Affairs’ medical facilities, national clinical laboratory test orders, and more than 10,000 over-the-counter retailers nationwide. Electronic Laboratory Exchange Network (eLEXNET) eLEXNET is a Web-based system for real-time sharing of food safety laboratory data among federal, state, and local agencies. It is a secure system that allows public health officials at multiple government agencies engaged in food safety activities to compare and coordinate laboratory analysis findings. According to FDA officials, it enables public health officials to assess risks and analyze trends, and it provides the necessary infrastructure for an early warning system that identifies potentially hazardous foods.
As of July 2004, FDA officials said 113 laboratories representing 50 states were part of the eLEXNET system. Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE) ESSENCE is a syndromic surveillance system operated by DOD. ESSENCE is used for the early detection of infectious disease outbreaks and provides epidemiological tools for improved investigation. The system collects data from hospitals and clinics on a daily basis. Epidemiologists can track, in near real-time, the syndromes being reported in a region through a daily feed of reported data. ESSENCE uses the daily data downloads, along with traditional epidemiological analyses using historical data for baseline comparisons and analytic methods such as a geographic information system. A geographic information system, among other things, can be used to identify spatial clustering of abnormal health events as the data are collected. This can assist public health officials in identifying affected areas. DOD is in the process of improving ESSENCE’s mapping capabilities and developing more advanced statistical algorithms for identifying anomalous increases in syndromes. Epidemic Information Exchange (Epi-X) Epi-X is a secure, Web-based communication system operating in all 50 states. CDC uses this system primarily to share information relevant to disease outbreaks with state and local public health officials and with other federal officials. Epi-X also serves as a forum for routine professional discussions and non-emergency inquiries. Authorized Epi-X users can post questions and reports, query CDC, and receive feedback on ongoing infectious disease control efforts. According to CDC, as of 2004, over 1,200 public health officials at the federal, state, and local levels had used the system to communicate with colleagues and experts, track information for outbreak investigations and response efforts, conduct online discussions, and request assistance.
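The baseline comparison that syndromic surveillance systems such as ESSENCE perform can be sketched in minimal form: compare today's syndrome count against the recent historical baseline and flag statistically unusual increases. The function name, threshold, and counts below are hypothetical illustrations, not ESSENCE's actual algorithm.

```python
# Illustrative sketch of baseline-comparison anomaly detection for
# syndromic surveillance; all names and values are hypothetical.
from statistics import mean, stdev

def flag_anomaly(history, today, z_threshold=3.0):
    """Flag today's syndrome count if it exceeds the historical
    baseline by more than z_threshold standard deviations."""
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        # No day-to-day variation in the baseline: flag any increase.
        return today > baseline
    z_score = (today - baseline) / spread
    return z_score > z_threshold

# Daily counts of respiratory-syndrome visits over the prior two weeks.
history = [12, 9, 11, 10, 13, 12, 8, 11, 10, 9, 12, 11, 10, 12]
print(flag_anomaly(history, today=11))  # typical day: False
print(flag_anomaly(history, today=30))  # sharp increase: True
```

A production system would also adjust the baseline for day-of-week and seasonal effects, which this sketch omits.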
Foodborne Disease Active Surveillance Network (FoodNet) FoodNet is a surveillance system that is a collaborative effort among CDC, USDA, and FDA. FoodNet operates in nine states that participate in CDC’s Emerging Infections Program. FoodNet provides a network for responding to new and emerging foodborne diseases of national importance, monitoring foodborne diseases, and identifying the sources of specific foodborne diseases. FoodNet is used to detect cases or outbreaks of foodborne disease, identify their source, recognize trends, and respond to outbreaks. State public health departments that participate in FoodNet receive funds from CDC to systematically contact laboratories in their geographical areas to solicit incidence data. As a result of this active solicitation, FoodNet is intended to provide more accurate estimates of the occurrence of foodborne diseases than are otherwise available. Global Outbreak Alert and Response Network (GOARN) GOARN electronically links WHO member countries, disease experts, agencies, and laboratories in order to keep them informed of disease outbreaks, either rumored or confirmed. GOARN is the primary mechanism by which WHO mobilizes technical resources for the investigation of, and response to, disease outbreaks of international importance. GOARN issues real-time outbreak alerts and gathers global disease information from a number of sources, including media reports, ministries of health, laboratories, academic institutes, and WHO offices in various countries. Global Public Health Intelligence Network (GPHIN) GPHIN is an electronic system developed by Canadian health officials and used by WHO. GPHIN is an Internet-based application that searches in French and English more than 950 news feeds and discussion groups around the world in the media and on the Internet for information on possible outbreaks of infectious diseases. 
CDC officials said that translation capabilities will be expanded in 2004 from French and English to also include Arabic, Chinese, Russian, and Spanish. Health Alert Network (HAN) CDC operates an early warning and response system, the Health Alert Network (HAN), that is designed to ensure that state and local health departments as well as other federal agencies and departments have timely access to emerging health information. Through HAN, CDC issues health alerts and other public health bulletins to an estimated 1 million public health officials, including physicians, nurses, laboratory staff, and others. Infectious Diseases Society of America Emerging Infections Network (IDSA-EIN) IDSA-EIN is a network of over 900 infectious disease practitioners. The network surveys its members regularly on topical issues in clinical infectious diseases. It also enhances communications and health education among its members, collaborates in research projects, and provides assistance during outbreak investigations. Its membership represents a source of infectious disease expertise for CDC and state health departments to draw on during outbreaks or when unusual illnesses occur. Laboratory Response Network (LRN) LRN is an integrated network of public health and clinical laboratories run by CDC to test specimens and develop diagnostic tests for identifying infectious diseases and biological or chemical agents. The network includes the following types of laboratories—federal, state and local public health, military, and international laboratories, as well as laboratories that specialize in food, environmental, and veterinary testing. Some LRN laboratories provide highly specialized tests not always available in state public health or commercial laboratories. National Animal Health Reporting System (NAHRS) NAHRS is a collaborative program among USDA, the U.S. Animal Health Association, the American Association of Veterinary Laboratory Diagnosticians, and participating states.
NAHRS collects data from state veterinarians in participating states on the presence of confirmed clinical diseases of major international significance in livestock, poultry, and aquaculture species in the United States. Individual state reports are submitted monthly to the central collection point at USDA, where they are verified, summarized, and compiled into a report. National Electronic Disease Surveillance System (NEDSS) CDC’s NEDSS is an initiative that is designed to make the electronic reporting of disease surveillance data to CDC by state and local health departments more timely, accurate, and complete. Specifically, NEDSS is intended to replace or enhance the interoperability of CDC’s numerous existing surveillance systems. Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged. As part of the NEDSS initiative, CDC is developing an architecture that consists of a set of standards that can be used for creating interoperability among systems. These standards comprise (1) data standards, (2) parameters for an Internet-based communications infrastructure, and (3) policy-level agreements on data access and sharing as well as on protections for confidentiality. CDC has also developed ready-to-use software—the NEDSS-Base system (NBS)—that operates within these standards. National Electronic Telecommunications System for Surveillance (NETSS) NETSS is a computerized public health surveillance system that provides CDC with weekly data regarding cases of nationally notifiable diseases. Core surveillance data—date, county, age, sex, and race/ethnicity—and some disease-specific epidemiologic information for nationally notifiable diseases and for some nonnotifiable diseases are transmitted electronically by the state public health departments to CDC through NETSS each week. Data from NETSS are published in CDC’s Morbidity and Mortality Weekly Report.
NETSS will be phased out as NEDSS is deployed and implemented. National Retail Data Monitor (NRDM) NRDM is a syndromic surveillance system developed by the University of Pittsburgh in collaboration with CDC and others, and it is used by state public health officials. NRDM collects sales data from 19,000 retail stores, including pharmacies, to monitor sales patterns in such items as over-the-counter medications for signs of a developing infectious disease outbreak. The system looks for unusual sales patterns—such as a spike in the number of over-the-counter medications purchased in a particular city or county—that might indicate the onset of an infectious disease outbreak. The system monitors the data automatically on a daily basis and generates summaries of sales patterns using timelines and maps. National Veterinary Services Laboratories (NVSL) NVSL are veterinary laboratories run by USDA. These laboratories are the only U.S. federal veterinary reference laboratories to provide diagnostics for domestic and foreign animal diseases. NVSL also provides diagnostic support for disease control and eradication programs, testing imported and exported animals, training, and laboratory certification for selected diseases. PulseNet is a national network of public health laboratories that perform DNA “fingerprinting” on bacteria that may be foodborne. The network identifies and labels each “fingerprint” pattern and permits rapid comparison of these patterns through an electronic database at CDC. This network is intended to provide an early warning system for outbreaks of foodborne disease. Real-time Outbreak and Disease Surveillance (RODS) RODS is a syndromic surveillance system developed by the University of Pittsburgh and used by state public health officials.
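The kind of sales-spike check described for NRDM can be illustrated with a simple sketch: flag any county whose sales of a monitored item today run well above its recent average. The county names, sales figures, function name, and 1.5x threshold are hypothetical illustrations, not NRDM's actual method.

```python
# Illustrative sketch of retail sales-spike monitoring for syndromic
# surveillance; all names, data, and the threshold are hypothetical.
def spike_counties(daily_sales, today_sales, ratio_threshold=1.5):
    """Return counties whose sales today exceed ratio_threshold
    times their average over the preceding days."""
    flagged = []
    for county, history in daily_sales.items():
        avg = sum(history) / len(history)
        if today_sales.get(county, 0) > ratio_threshold * avg:
            flagged.append(county)
    return flagged

# Four prior days of over-the-counter cough-medicine sales per county.
history = {"Adams": [40, 42, 38, 41], "Baker": [60, 58, 61, 59]}
today = {"Adams": 44, "Baker": 95}
print(spike_counties(history, today))  # ['Baker']
```

A real monitor would also smooth for weekends, promotions, and store closures, which a fixed ratio threshold ignores.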
RODS automatically gathers data from hospital clinical encounters in order to identify patients’ chief medical complaints, classify them according to syndrome, and aggregate that data in order to look for anomalous increases in certain syndromes that may reveal an infectious disease outbreak. Sexually Transmitted Disease Management Information System (STD*MIS) STD*MIS is an electronic system used by state and local health departments to report sexually transmitted diseases to CDC. Systematic Tracking of Elevated Lead Levels & Remediation (STELLAR) STELLAR is an electronic system used by state and local health departments to report lead poisoning cases to CDC. Appendix IV: Comments from the Department of Health and Human Services Appendix V: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to the persons named above, Louise M. Duhamel, Krister Friday, Gay Hee Lee, and Merrile Sing made key contributions to this report. Related GAO Products Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004. Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004. Infectious Diseases: Gaps Remain in Surveillance Capabilities of State and Local Agencies. GAO-03-1176T. Washington, D.C.: September 24, 2003. Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, But a Large-Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003. Bioterrorism: Information Technology Strategy Could Strengthen Federal Agencies’ Abilities to Respond to Public Health Emergencies. GAO-03-139. Washington, D.C.: May 20, 2003. SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Food Safety: CDC Is Working to Address Limitations in Several of Its Foodborne Disease Surveillance Systems. GAO-01-973. Washington, D.C.: September 7, 2001. Global Health: Challenges in Improving Infectious Disease Surveillance Systems. GAO-01-722. Washington, D.C.: August 31, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Global Health: Framework for Infectious Disease Surveillance. GAO/NSIAD-00-205R. Washington, D.C.: July 20, 2000. West Nile Virus: Preliminary Information on Lessons Learned. GAO/HEHS-00-142R. Washington, D.C.: June 23, 2000. Emerging Infectious Diseases: National Surveillance System Could Be Strengthened. GAO/T-HEHS-99-62. Washington, D.C.: February 25, 1999. Emerging Infectious Diseases: Consensus on Needed Laboratory Capacity Could Strengthen Surveillance. GAO/HEHS-99-26. Washington, D.C.: February 5, 1999. GAO’s Mission The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. 
To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
The threat posed by infectious diseases has grown. New diseases, unknown in the United States just a decade ago, such as West Nile virus and severe acute respiratory syndrome (SARS), have emerged. To detect cases of infectious diseases, especially before they develop into widespread outbreaks, local, state, and federal public health officials as well as international organizations conduct disease surveillance. Disease surveillance is the process of reporting, collecting, analyzing, and exchanging information related to cases of infectious diseases. In this report GAO was asked to examine disease surveillance efforts in the United States. Specifically, GAO described (1) how state and federal public health officials conduct surveillance for infectious diseases and (2) initiatives intended to enhance disease surveillance. GAO reviewed documents, such as policy manuals and reports related to disease surveillance, and interviewed officials from selected federal departments and agencies, including the Departments of Defense (DOD), Agriculture (USDA), and Homeland Security (DHS) as well as the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC). GAO conducted structured interviews of state public health officials from 11 states. Surveillance for infectious diseases in the United States comprises a variety of efforts at the state and federal levels. At the state level, state health departments collect and analyze data on cases of infectious diseases. These data are required to be reported by health care providers and others to the state.
State public health departments verify reported cases of diseases, monitor disease incidence, identify possible outbreaks within their state, and report this information to CDC. At the federal level, agencies and departments collect and analyze disease surveillance data and maintain disease surveillance systems. For example, CDC uses the reports of diseases from the states to monitor national health trends, formulate and implement prevention strategies, and evaluate state and federal disease prevention efforts. FDA analyzes information on outbreaks of infectious diseases that originate from foods that the agency regulates. Some federal agencies and departments also fund and operate their own disease surveillance systems and laboratory networks and have several means of sharing surveillance information with local, state, and international public health partners. State and federal public health officials have implemented a number of initiatives intended to enhance disease surveillance, but challenges remain. For example, officials have implemented and expanded syndromic surveillance systems, which monitor the frequency and distribution of health-related symptoms among people within a specific geographic area. Although syndromic surveillance systems are used by federal agencies and departments and in all of the states whose officials GAO interviewed, concerns have been raised about this approach to surveillance. Specifically, syndromic surveillance systems are relatively costly to maintain compared to other types of surveillance and are still largely untested. Public health officials are also implementing initiatives designed to enhance public health communications and disease reporting. For example, CDC is working to increase the number of participants using its public health communication systems. In addition, state public health departments and CDC are implementing an initiative designed to make electronic disease reporting more timely, accurate, and complete. 
However, the implementation of this initiative is incomplete. Finally, federal public health officials have enhanced federal coordination on disease surveillance and expanded training programs for epidemiologists and other public health experts. In commenting on a draft of this report, the Department of Health and Human Services (HHS) said the report captures many important issues in surveillance. HHS also provided suggestions to clarify the discussion.
Background INS’ overall budget has more than doubled within 5 years, from $1.5 billion in fiscal year 1993 to $3.1 billion in fiscal year 1997. INS has spent about $2.3 billion on border enforcement from fiscal years 1994 through 1997. For fiscal year 1997, the combined budget for INS’ Border Patrol and Inspections programs—the two programs responsible for deterring illegal entry along the border—was nearly $800 million. Through other programs, INS provides additional support for the strategy by allocating funds for computer automation, technology procurement, and construction of barriers. INS’ Border Patrol is responsible for preventing and detecting illegal entry along the border between the nation’s ports of entry. The Border Patrol has 21 sectors, 9 of which are along the southwest border. The Border Patrol’s appropriations were $631 million for fiscal year 1997, a 69 percent increase over its $374 million expenditure for fiscal year 1994. As of July 21, 1997, the Border Patrol had about 6,500 agents nationwide. About 6,000, or 92 percent, were located in the nine sectors along the southwest border. In fiscal year 1996, the Border Patrol apprehended about 1.6 million aliens nationwide, of whom 1.5 million were apprehended in sectors along the southwest border. (Appendix III contains detailed staffing and selected workload data for the Border Patrol.) INS Inspections and the U.S. Customs Service share responsibility for inspecting all applicants for admission at the U.S. ports of entry. The purpose of their inspections is to prevent the entry of inadmissible applicants by detecting fraudulent documents, including those representing false claims to U.S. citizenship or permanent residence status, and to seize conveyances used for illegal entry. Figure 2 depicts INS’ 36 land ports of entry along the southwest border. As of March 30, 1997, INS Inspections had about 1,300 inspectors at ports of entry along the southwest border.
The Inspections’ appropriation was $151 million for fiscal year 1997, a 78 percent increase from its $85 million expenditure for fiscal year 1994. As of March 15, 1997, the U.S. Customs Service had about 7,400 inspectors nationwide. Of these 7,400 inspectors, 2,200, or 30 percent, were located along the southwest border to inspect individuals and cargo. In fiscal year 1996, INS and Customs inspectors along the southwest border inspected about 280 million people, including 84 million, or 30 percent, who were U.S. citizens. (App. III contains detailed staffing and selected workload data for INS Inspections.) Within the Department of Justice, the 94 offices of the U.S. Attorneys are responsible for prosecuting individuals charged with committing offenses under U.S. law, including persons who illegally enter the United States. Because the Justice Department determined that it does not have the resources necessary to prosecute all illegal entrants, the U.S. Attorneys located in districts along the southwest border have instituted a policy to focus criminal prosecutions on alien smugglers, and on those aliens without legal documentation who are linked directly to violence and crime in the community. The policy calls for imposing administrative, rather than criminal, sanctions on first-time violators who do not otherwise have criminal histories. In October 1995, the Attorney General appointed the U.S. Attorney for the Southern District of California as her Special Representative for Southwest Border Issues. This collateral responsibility includes coordinating the border law enforcement activities of various Justice Department agencies, including INS, the Drug Enforcement Administration, and the Federal Bureau of Investigation, with the activities undertaken by the Departments of the Treasury and Defense. The Department of State also has a role in deterring illegal entry along the southwest border. 
Mexican nationals who seek to visit the United States can obtain a border-crossing card, a type of entry document, either from State Department consulates in Mexico or from INS at ports of entry. According to the State Department, insufficient staffing levels overseas, ineffective interagency cooperation over the exchange of data, and the lack of needed computer enhancements all contribute to a weakening of management controls in the visa issuance function. Scope and Methodology To determine what the Attorney General’s strategy to deter illegal entry called for, we reviewed and summarized information on border control strategies, plans, and directives contained in a variety of Justice Department and INS documents related to border control and, because the Attorney General had not published a specific strategy for the southwest border, we prepared a summary of these documents (see app. I). Not all of the documents used were specifically identified as “strategy” documents. Justice Department officials reviewed this summary in May 1997 and agreed that it accurately reflected the Attorney General’s strategy and its various components at that time. To determine what actions had been taken to implement the strategy along the southwest border and whether initial results expected from the strategy’s implementation have occurred, we conducted in-person interviews with officials from six of the nine Border Patrol sectors along the southwest border and telephone interviews with officials from the remaining three sectors. We interviewed INS officials from the five INS district offices responsible for all of the ports of entry along the southwest border, INS Inspections officials from seven ports of entry, and Customs officials from five ports of entry. 
In addition, we analyzed INS’ Border Patrol and Inspections workload and apprehension data, reviewed documents pertaining to INS’ management priorities, and reviewed INS intelligence reports and previous reports by us and the Department of Justice’s Inspector General. We did not verify the validity of INS’ computer-generated data related to workload and apprehension statistics. However, we discussed with INS officials their data validation efforts. These officials were confident that the data could be used to accurately portray trends over time. We met with the U.S. Attorney for the Southern District of California, who is also the Attorney General’s Southwest Border Representative, to discuss aspects of the strategy related to prosecuting those who violate immigration laws. We also discussed with INS and State Department officials the status of various border control efforts to deter illegal entry mandated by the 1996 Act. In conjunction with one of these efforts—improvements in border-crossing identification cards—we visited State Department consulates in Ciudad Juarez and Tijuana, Mexico. To identify indicators that could be used to evaluate the effectiveness of the strategy in deterring illegal entry, we reviewed illegal immigration research studies and interviewed officials from INS and the U.S. Commission on Immigration Reform. We also convened a meeting with a panel of immigration researchers to obtain their views on a range of evaluation issues, such as appropriate indicators of the strategy’s outcome, sources of relevant data in addition to INS, the reliability of existing data, and how data should be analyzed. We did our work between December 1996 and September 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Attorney General, the INS Commissioner, the Acting Customs Commissioner, and the Secretary of State or their designees. 
On September 16, 1997, the INS Executive Associate Commissioner for Policy and Planning provided us with oral comments, which are discussed near the end of this letter. The Customs Service had no comments on our report and the Department of State provided technical corrections only. The Strategy In February 1994, the Attorney General announced a five-part strategy to strengthen enforcement of the nation’s immigration laws. The strategy included strengthening the border, removing criminal aliens, reforming the asylum process, enforcing workplace immigration laws, and promoting citizenship for qualified immigrants. The strategy to strengthen the border called for “prevention through deterrence,” that is, to raise the risk of being apprehended for illegal aliens to a point where they would consider it futile to try to enter the United States illegally. The strategy was to involve concentrating new resources on the front lines at the most active points of illegal activity along the southwest border. To carry out the priority to strengthen the border, the Border Patrol was to, among other things, (1) concentrate personnel and technology resources in a four-phased approach, starting first with the sectors with the highest level of illegal immigration activity (as measured by apprehensions) and moving to the areas with the least activity; (2) make maximum use of physical barriers to deter entry along the border; (3) increase the proportion of time Border Patrol agents spent on border control activities; and (4) identify the appropriate quantity and mix of technology and personnel needed to control the border. Recognizing that increased enforcement by the Border Patrol might force some aliens to try to enter the United States illegally through the ports of entry, the strategy calls for INS’ Inspections program to increase the number of inspectors and the use of technology to both deter and detect illegal entry and improve management of legal traffic and commerce. 
For example, to deter illegal entry the strategy called for increasing the number of illegal aliens referred for prosecution and testing automated fingerprint technology to detect inadmissible aliens. To improve management of legal traffic, the strategy called for providing the public with more information so they would be better prepared for the inspection process. In concert with INS’ efforts to deter illegal entry, the strategy calls for increasing felony prosecutions of alien smugglers and those criminal aliens who have repeatedly reentered the United States after having been removed. In addition, the 1996 Act requires the Attorney General to take additional border control measures to deter illegal entry into the United States. Appendix II discusses the status of these efforts not discussed elsewhere in this report. Implementation of the Strategy INS has made progress in implementing the Attorney General’s strategy to deter illegal entry along the southwest border. In September 1997, Border Patrol officials told us that the Border Patrol had nearly completed phase I of the strategy, which called for allocating Border Patrol resources to the San Diego, California, and El Paso, Texas, sectors. They stated that the Border Patrol was now moving into phase II of the strategy, which called for increasing resources in the Tucson, Arizona, sector and three sectors in south Texas—Del Rio, Laredo, and McAllen. INS officials told us they could not predict when they would complete the remaining phases of the strategy, which call for focusing resources in the three other sectors along the southwest border (phase III) and the sectors along the rest of the U.S. land border and coastal waterways (phase IV). As part of phase I of the strategy, in October 1994, INS launched a major initiative in San Diego called Operation Gatekeeper. 
This multiphase, multiyear operation was designed to reduce illegal immigration into San Diego and to force alien traffic eastward to deter and delay illegal aliens’ attempts to reach urban areas. Resources added to the area as a result of Gatekeeper included new Border Patrol agents and support staff, new inspectors at the San Ysidro port of entry, new computers and technology to maximize efficiency, and new resources for the Office of the U.S. Attorney for the Southern District of California to increase its capability to prosecute criminal aliens. Gatekeeper focused first on the area of the greatest illegal immigration activity—the 5-mile stretch of Imperial Beach, California. The next phase, which began in June 1995, included intensified enforcement at the San Ysidro port of entry and the rural parts of the San Diego sector. In December 1994, we reported that on the basis of initial positive results, the strategy appeared encouraging. In August 1997, INS began the next major phase of its strategy by concentrating resources in the McAllen sector, starting first in Brownsville, Texas. Under Operation Rio Grande, INS plans to add agents and equipment, such as high-powered vision scopes and stadium-type lighting, to the McAllen sector to deter illegal entry. The Border Patrol has generally allocated its additional resources in accordance with the strategy and has made progress in constructing barriers along the southwest border. However, the agents’ percentage of time spent on border enforcement has not increased in most southwest border sectors since 1994, and the Border Patrol has yet to determine the best mix of agents and technology on which to base future staffing allocations. INS’ Inspections program has deployed additional inspectors to the southwest border and is in the process of pilot testing various technology initiatives designed to deter illegal entry and streamline the inspections process. In addition, the U.S. 
Attorneys in the five districts along the southwest border increased the number of prosecutions for certain immigration violations. Border Patrol Has Generally Allocated New Agents and Other Resources in Accordance With Strategy With some exceptions, the Border Patrol was generally able to allocate its additional resources according to the strategy, allocating first to sectors with the highest level of known illegal immigration activity. In fiscal year 1993, the San Diego and El Paso sectors had the highest levels of apprehensions of illegal immigrants, accounting for 68 percent of all apprehensions along the southwest border. In fiscal year 1994, INS received funding for an additional 350 agents and assigned these new agents to San Diego and El Paso, the sectors with the highest priority. Three hundred agents were allocated to the San Diego sector and 50 to the El Paso sector. The strategy noted that the Border Patrol needed to be flexible to respond to changing patterns in illegal alien traffic. According to INS officials, the Border Patrol began to notice “almost immediately” an increase in apprehensions in other sectors, particularly Tucson and those in south Texas (Del Rio, McAllen, and Laredo). INS officials attributed this increase in apprehensions in other sectors to a “shift” in the flow of illegal alien traffic as it became more difficult to cross illegally in San Diego and El Paso. Consequently, in fiscal year 1995, the Border Patrol deployed some of the additional agents funded that year and originally planned for San Diego and El Paso to the Tucson and south Texas sectors, the sectors with the next highest priority after San Diego and El Paso. According to Border Patrol officials, deploying additional agents in a phased manner was a new approach. Prior to the strategy, as additional positions became available, the Border Patrol tried to allocate at least a few additional positions to as many of the 21 sectors as possible. 
However, under the strategy, 98 percent (or 2,792) of the 2,850 new Border Patrol agent positions nationwide authorized from fiscal year 1994 through fiscal year 1997 have gone to 6 of the 21 Border Patrol sectors. INS allocated 1,235 (about 43 percent) of these positions to the San Diego sector and 351 (about 12 percent) to the El Paso sector, sectors with the highest priority. Nearly all of the remaining 1,206 positions went to the Tucson and the south Texas sectors, the sectors with the next highest priority (see fig. 3). As shown in figure 4, the additional allocations have resulted in an increase in on-board staff in most southwest border sectors. Overall, the number of Border Patrol agents on board along the southwest border increased 76 percent between October 1993 and July 1997. To complement the increase in staffing, southwest border sectors have also received additional technology, equipment, and infrastructure improvements in accordance with the Border Patrol strategy. For example, between October 1994 and October 1996, the San Diego sector added almost 5 miles of permanent high-intensity lighting, 8 miles of reinforced steel fencing, 28 infrared scopes (a night-vision device), and 3 helicopters to detect illegal entry. El Paso sector officials told us they received six additional infrared scopes and expected to receive five more soon, and in March 1997, the sector completed a 3.5-mile fencing project. INS purchased a building for a new Border Patrol station in Nogales, Arizona, to house the increased number of new agents. In March 1997, the Border Patrol submitted a 5-year staffing plan to Congress covering fiscal years 1996 through 2000. The plan calls for adding between 1,500 and 2,500 additional Border Patrol agents in fiscal years 1998 through 2000. 
This is fewer than the 3,000 additional agents whom the 1996 Act authorized INS to hire over the 3-year period because, according to INS officials, they were concerned that their staff was growing faster than they could properly manage and that they did not have an adequate infrastructure (facilities, equipment, training, and supervisory capacity) to absorb 3,000 new agents. INS plans to assign two-thirds of the new agents to Arizona and Texas; the remainder will go to states along the northern border and Gulf Coast. Progress in Installing Barriers Between Ports of Entry One of the main efforts outlined in the strategy is the “maximum utilization of lighting, fencing, and other barriers” by all sectors, although the strategy did not outline specific barrier projects or miles of fencing to be built. A 1993 border control study, commissioned by the U.S. Office of National Drug Control Policy (Sandia Study), recommended fencing and in some cases vehicle barriers in every southwest border patrol sector to deter illegal entry. While INS has not formally endorsed all of the Sandia Study recommendations, INS Headquarters Border Patrol officials told us that some recommendations, such as erecting 90 miles of barriers along the southwest border, are valid. These officials told us that, while adding barriers is part of the strategy, INS has left it up to each sector chief to propose where and when to build barriers. Seven of the nine sector plans written to carry out the strategy cite the need for barriers to increase agents’ effectiveness in apprehending illegal aliens and reducing crime. According to INS officials, proposals for barrier projects are reviewed in the context of INS’ budget process; require consultation with Congress; and must be coordinated with the U.S. Department of Defense Joint Task Force Six, the military unit that constructs much of the fencing for INS. Congress has also emphasized the need for additional fencing. 
The 1996 Act requires the Attorney General, in consultation with the Commissioner of INS, to take such actions as may be necessary to install additional physical barriers and roads in the vicinity of the U.S. border to deter illegal crossings in areas of high illegal entry into the United States. In carrying out this provision in the San Diego sector, the 1996 Act requires the Attorney General to provide for second and third fences, in addition to the existing reinforced fence, and for roads between the fences for the 14 miles of the international land border of the United States extending eastward from the Pacific Ocean. INS has allocated $8.6 million of its fiscal year 1996 and 1997 appropriations to complete the first two phases of the triple fencing project in the San Diego sector. Of the $8.6 million allocated, about $4.3 million had been obligated for expenditure as of July 1997. According to a San Diego sector official, INS is in the process of acquiring the property upon which to build the fencing and conducting environmental assessment reports, before construction can begin in certain areas. Figure 5 depicts bollard-type (concrete cylindrical columns set in a staggered manner) fencing which is being constructed in the area immediately inland from the Pacific Ocean in the San Diego sector. Prior to 1994, very little substantial fencing existed—about 14 miles of reinforced steel fencing in the San Diego sector. Since the strategy was announced, INS has built approximately 32 miles of new fencing in five sectors. As of July 1997, nearly 24 miles of additional fencing was under construction. Most of the fencing constructed, under construction, or planned to be built is in the San Diego sector. Currently, no barriers exist or are planned to be built in four Texas sectors (Marfa, Del Rio, Laredo, and McAllen). However, two of these sectors (Laredo and McAllen) had indicated a need for barriers in their sector plans. 
INS officials cited the need to overcome community concerns as a reason why they have not built more barriers. The majority of the fencing construction along the southwest border has been accomplished by the U.S. Department of Defense Joint Task Force Six and, to a lesser degree, the National Guard. The military provides the personnel to construct the fencing and pays for their salaries, transportation, meals, and housing. INS typically pays for building materials (although steel runway landing mats are provided at no cost because they are military surplus) and other costs, such as equipment rental. INS employees, such as repair and maintenance staff, also help with the construction. In a few instances, private contractors have been hired by INS to construct particular types of fencing, such as the bollard fencing project in the San Diego sector. However, INS officials stated that the use of private contractors is much more costly to INS than using military assistance. Proportion of Time Spent on Border Enforcement Did Not Increase in Most Southwest Border Sectors The strategy called for increasing border enforcement activity, as measured by the proportion of time Border Patrol agents spent on border enforcement. According to INS data, agents in the nine sectors along the southwest border collectively spent 59 percent of their total time on border enforcement in the first half of fiscal year 1997, nearly reaching the servicewide goal of 60 percent. However, this proportion of time spent on border enforcement activities was the same as that in fiscal year 1994. Although the proportion of time spent on border enforcement activities did not increase during this period, the total number of hours spent by all Border Patrol agents on border enforcement activities along the southwest border has been increasing since fiscal year 1994, because the overall number of Border Patrol agents assigned to the southwest border sectors increased. 
In five of the nine southwest border sectors, the proportion of time that Border Patrol agents spent on border enforcement in the first half of fiscal year 1997 was within 2 percentage points (plus or minus) of what it was in fiscal year 1994. In two sectors, Tucson and El Paso, the proportion of time spent on border enforcement decreased by 7 and 13 percentage points, respectively. In two other sectors, San Diego and Marfa, it increased by 7 and 10 percentage points, respectively (see fig. 6). INS officials reported that an increase in the amount of time spent on program support activities, such as supervision, training, and processing apprehended aliens, as well as possible reporting errors, may explain why the overall percentage of time spent on border control activities did not increase. For example, during fiscal year 1994, the El Paso sector spent 20 percent of its time on program support compared with 33 percent during the first half of fiscal year 1997, an increase of 13 percentage points. During this same time period, the Tucson sector’s percentage of time on program support increased from 41 percent to 50 percent, an increase of 9 percentage points. Consistent with the strategy, southwest border sectors have redirected Border Patrol agents from general enforcement activities back to border control. For example, the San Diego sector reduced the proportion of time spent on general enforcement from 4 percent in fiscal year 1994 to less than 1 percent for the first half of fiscal year 1997. The Marfa sector reduced the proportion of time spent on general enforcement from 14 percent to 2 percent during this same time period. As a result, Marfa increased its border enforcement percentage although it did not receive additional staff during this period. Servicewide, the proportion of time spent on border enforcement declined from 56 percent in fiscal year 1994 to 50 percent in the first half of fiscal year 1997. 
Thus, the performance of INS as a whole on this measure declined rather than increased as intended. INS has lowered its expectations regarding the percentage of time, servicewide, that it expects its Border Patrol agents to spend on border enforcement. According to its fiscal year 1998 budget submission, INS’ 1998 servicewide goal is for the Border Patrol to spend 56 percent of its time on border enforcement compared with the 60 percent goal for fiscal year 1997. Appropriate Mix of Agents’ Staffing and Materiel Resources Had Not Been Determined The strategy states that it will “seek the best mix of technology and personnel resources to meet the long term goals,” and that “improvements in technology will make border control strategies more effective and less resource intensive.” The Border Patrol has been increasing its supply of equipment and advanced technologies. The conference report for the fiscal year 1997 Department of Justice Appropriations Act includes $27 million for infrared scopes, low-light television systems, sensors, and the replacement of three helicopters, including upgraded forward-looking infrared systems. Since 1994, the San Diego sector alone acquired an additional 28 infrared scopes, about 600 underground sensors, about 500 vehicles, about 600 computers, and several advanced computer systems. The Border Patrol has not identified the most appropriate mix of staffing and other resources needed for its sectors. Headquarters officials told us that sector chiefs may have taken current technological assets into consideration when developing their sector staffing proposals. However, according to these officials, sector staffing proposals generally did not include a discussion of the potential impact on staffing needs of adding barriers and/or technology. 
In addition, when allocating the additional 1,000 Border Patrol agents funded for fiscal year 1997 to the various sectors, the Border Patrol did not formally consider how adding barriers and/or technology would potentially affect staffing needs. In the 5-Year Border Patrol Deployment Plan submitted to Congress in March 1997, the Border Patrol stated that in fiscal year 1998, it would “assess technology improvements in sensors, scopes, biometrics identification systems, etc., and effects on staffing requirements”; and in fiscal year 1999, it would “implement staffing changes based on technology assessments.” With the help of a contractor, the Border Patrol is currently working on developing a computerized staffing model to help it identify the right mix of staffing and technology. According to Border Patrol officials, the model will allow INS to estimate the impact of different levels of materiel resources on sectors’ staffing levels and effectiveness in apprehending illegal aliens. As of June 1997, the model was being tested in the El Paso sector. INS officials plan to have this model operational in El Paso by December 1997 and in all southwest border sectors by the summer of 1998. INS headquarters officials told us that they were also testing new technologies, such as weight-sensitive sensors and satellite global positioning systems, to determine their usefulness in Border Patrol operations. They told us that they have yet to determine how these new technologies might be integrated into border control operations and what their impact on agent needs might be. Resources, Enforcement Initiatives, and Technology Testing Increased at Ports of Entry The strategy postulated that enhanced enforcement efforts between the ports of entry would cause an increase in port-of-entry activity, including increased attempts to enter through fraudulent means. 
To handle the increased activity, Congress authorized an increase of about 800 inspectors for southwest ports of entry since 1994 (see fig. 7), almost doubling the number of authorized positions, from about 865 in 1994 to 1,665 in 1997. As of March 1997, 1,275 inspectors were on board at land ports of entry along the southwest border (see fig. 8). INS has begun testing a number of programs and systems to increase deterrence to illegal entry and improve and streamline the inspection process. These efforts are intended to prevent illegal aliens and criminal violators from entering the United States and facilitate the entry of legal travelers. To accomplish these objectives, INS is using technology to segment people seeking admission by risk category and forming strategic partnerships with others concerned with border management, such as the Customs Service, local communities, and the Mexican government. INS is also planning to measure its effectiveness in detecting illegal entry attempts. According to port-of-entry officials, the additional INS inspectors have enabled INS to staff more inspection booths at peak hours, and allowed both INS and Customs to increase enforcement efforts and undertake some new initiatives to deter illegal entry. For example, inspectors at some ports of entry are spending more time inspecting vehicles before they reach the inspection booth to detect concealed drugs or smuggled aliens. INS believes that increased sanctions will result in deterring illegal aliens from attempting to enter the United States fraudulently. In response to an increase in attempted fraudulent entries at ports of entry in the San Diego area, the Justice Department established the first permanent immigration court facility to be located at a port of entry. The “port court,” located at the Otay Mesa port of entry, began as a pilot project in July 1995 and was made permanent in October 1995. 
This program was intended to eliminate costly and time-consuming transportation of aliens to the immigration court in downtown San Diego and to allow for immediate implementation of the immigration judge’s order of exclusion and deportation. In addition, INS inspectors in San Diego worked with the U.S. Attorney for the Southern District of California to develop a program intended to increase prosecutions of persons attempting to enter the United States fraudulently. However, the ability to prosecute may be hampered by the limited availability of detention space, according to Inspections officials. Provisions in the 1996 Act that took effect April 1, 1997, provide for the expedited removal of certain aliens who attempt fraudulent entry. INS officials believe these provisions may also help deter attempted fraudulent entry. INS has several pilot projects under way that use technology to try to segregate low- and high-risk traffic and streamline the inspections process. In the San Diego area, a dedicated commuter lane is in operation. INS authorizes certain frequent crossers and their vehicles to enter the United States through a preclearance process. Through an automated photo identification and card system, registered vehicles and occupants can pass through the port of entry quickly. INS is testing other technology, including automated vehicle license-plate readers to check vehicles against law enforcement lookout databases and a system that uses palm prints and fingerprints to verify the identity of individuals in order to reduce passenger processing time. In addition, according to INS headquarters and most port officials we interviewed, INS and Customs have increased cooperation between their agencies and are working together to manage the ports in a more efficient manner. In 1993, we reported on the lack of cooperation and coordination of border crossing operations as well as a long history of interagency rivalries between INS and Customs. 
We recommended that the Director of the Office of Management and Budget (OMB), working with the Secretary of the Treasury and the Attorney General, develop and present to Congress a proposal for ending the dual management of border inspections. As a result of our report, INS and Customs formed Port Quality Improvement Committees (PQIC) at selected ports of entry, including some along the southwest border. A 1996 follow-up report on the Vice President’s National Performance Review indicated that the PQIC structure encouraged and strengthened cooperation and communication among officers of all federal inspection service personnel. In addition, Customs and INS reviews have found that better coordination and improved services have been achieved through PQICs. Four of the ports we visited (San Ysidro, El Paso, Nogales, and Brownsville) had PQICs and had implemented various cooperative initiatives to facilitate border crossing (e.g., increased cross-training). We did not independently assess whether the PQICs have resulted in better coordination and cooperation between INS and Customs because such an assessment was beyond the scope of this review. As part of its efforts to measure performance, as required by the Results Act, INS plans full implementation in fiscal year 1998 of a port performance measurement system, which is to include randomly selecting applicants who seek to cross into the United States and processing them through a more rigorous inspection. One of the goals of this system is to project the estimated number of immigration related violations. According to INS headquarters officials, this system will ensure the effectiveness of inspections at the officer level and will allow for evaluation of overall program performance. Prosecutions The strategy called for increasing prosecutions for immigration related violations. As part of the strategy, U.S. 
Attorneys in the five judicial districts contiguous to the southwest border have developed federal prosecution policies to, among other things, target criminal aliens, smugglers, and those who attempt entry by using false documents. The strategy also outlines the expanded use of administrative sanctions through immigration court orders. U.S. Attorneys in the five districts along the southwest border have increased the number of prosecutions since 1994. For example, in fiscal year 1994, the 5 districts filed about 1,000 cases involving about 1,100 defendants related to 3 major immigration violations. The Justice Department projected that these 5 districts would file about 3,800 such cases in fiscal year 1997, involving over 4,000 defendants, more than tripling the number of cases and defendants. The U.S. Attorney for the Southern District of California, which includes San Diego, projected that his office would file about 1,900 such cases in fiscal year 1997, over 6 times the 290 filed in 1994. 
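The growth described above is simple ratio arithmetic; a brief sketch (using only the rounded case counts quoted in the text, not Justice Department source data) confirms the stated multiples:

```python
# Rounded figures quoted in the text; not Justice Department source data.
cases_fy1994, cases_fy1997 = 1_000, 3_800    # five southwest border districts, major immigration cases
sdcal_fy1994, sdcal_fy1997 = 290, 1_900      # Southern District of California cases

print(cases_fy1997 / cases_fy1994)    # 3.8 -> "more than tripling"
print(sdcal_fy1997 / sdcal_fy1994)    # ~6.55 -> "over 6 times"
```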
Some Anticipated Interim Effects of Strategy Are Occurring

As the strategy along the southwest border is carried out, the Attorney General anticipated the following changes in certain indicators would provide evidence of the interim effectiveness of the strategy: (1) an initial increase in the number of illegal aliens apprehended in locations receiving an infusion of Border Patrol resources, followed by a decrease in apprehensions when a “decisive level of resources” has been achieved, indicating that illegal aliens are being deterred from entering; (2) a shift in the flow of illegal alien traffic from sectors that traditionally accounted for most illegal immigration activity to other sectors as well as shifts within sectors from urban areas where the enforcement posture was greater to rural areas; (3) increased attempts by illegal aliens to enter illegally at the ports of entry, as it becomes more difficult to enter between the ports; (4) an increase in fees charged by alien smugglers to assist illegal aliens in crossing the border and more sophisticated smuggling tactics; (5) an eventual decrease in attempted reentries by those who have previously been apprehended (recidivism); and (6) reduced violence at the border. According to the strategy, changes in the predicted direction on these indicators would be evidence that INS enforcement efforts effectively raised the cost and difficulty of entering the United States illegally. Ultimately, the strategy posits that there would be fewer illegal aliens in the United States and reduced use of social services and benefits by illegal aliens. Data on the interim effects of the strategy have been collected and reported primarily by INS, and their interpretation is not clear-cut. The available data suggest that some of the predicted changes have occurred.
For example, INS data indicate that (1) there was a period after additional resources were applied to the San Diego sector in which Border Patrol apprehensions increased in the sector and a subsequent period in which apprehensions decreased; (2) there has been a shift in illegal alien traffic from sectors that traditionally accounted for most illegal immigration activity to other sectors as well as shifts within some sectors; (3) there were increased numbers of illegal aliens attempting to enter illegally at some ports of entry; and (4) alien smuggling fees may have increased, and smuggling tactics may have become more sophisticated. Data were unavailable on whether there has been a decrease in attempted reentries by those who have previously been apprehended, and data on violence at the border were inconclusive.

Changes in Illegal Alien Apprehensions

Apprehension statistics are routinely reported by INS, and they are INS’ primary quantitative indicator of the results of the strategy. Although an effective strategy should affect apprehensions, apprehension data, standing alone, have limited value for determining how many aliens have crossed the border illegally. (A later section discusses the limitations of apprehension data more fully.) According to INS data for the San Diego sector, after an increase in apprehensions, as resources were applied, sector apprehensions eventually began to decrease. According to an INS analysis of seasonally adjusted San Diego sector apprehension data from October 1992 to March 1997 (see fig. 9), monthly sector apprehensions were on a downward trend from February 1993 through December 1994. In January 1995, 3 months after the sector began applying Operation Gatekeeper resources in the western part of the San Diego sector, apprehensions began increasing. Apprehensions continued to increase for about 1 year.
Beginning in January 1996, apprehensions started to decline and continued to do so through March 1997 (the end point of the INS analysis). The last decline in apprehensions coincided with the addition of Border Patrol agents, barriers, and technology to areas of the San Diego sector that were east of the original Gatekeeper effort. It is difficult to determine whether the increase in apprehensions experienced in 1995 is due to increased enforcement or other factors. In December 1994, the Mexican government devalued the peso. According to INS officials and INS reports, apprehensions in the San Diego sector could have increased in part due to the strategy and in part due to an increase in illegal flow resulting from poor economic conditions in Mexico and the associated devaluation of the peso. It is also difficult to determine whether the decline in apprehensions that began in January 1996 was part of the original downward trend predicted by the strategy or a specific result of the spring initiative—INS’ 1996 enhancement to Operation Gatekeeper—in which additional resources were applied to the eastern parts of the San Diego sector. In El Paso, the pattern of apprehensions following implementation of a separate border enforcement initiative, Operation Hold-the-Line, differed from that of San Diego. In this operation, begun in September 1993, the sector redeployed its agents directly to a 20-mile section of the border in the metropolitan El Paso area adjoining Ciudad Juarez in Mexico and maintained a high-profile presence that was intended to deter illegal aliens from attempting to cross the border. According to an INS analysis of seasonally adjusted apprehension data, apprehensions decreased in the El Paso sector immediately after the initiative was launched, and after declining for a period of 1 month, apprehensions began to increase (see fig. 10).
Although the Border Patrol has continued Operation Hold-the-Line and added new agents to the sector between fiscal years 1994 and 1997, apprehension levels began to increase in November 1993, and have generally continued to do so through March 1997, although remaining at levels below those that existed before September 1993.

Shift in Illegal Alien Traffic

The Border Patrol strategy directed additional enforcement resources first to the San Diego and El Paso sectors, where the majority of illegal entries have historically occurred. INS expected that the flow of illegal alien traffic would shift from San Diego and El Paso to other sectors as control was achieved. Our analysis of INS apprehension data indicates that a shift in apprehensions has occurred. As shown in figure 11, in the first 6 months of fiscal year 1993 the San Diego and El Paso sectors accounted for 68 percent of all southwest border apprehensions. However, during the first 6 months of fiscal year 1997, San Diego and El Paso accounted for 33 percent of all southwest border apprehensions. Other sectors now account for a larger share of the apprehensions. For example, in the first 6 months of fiscal year 1993, the Tucson sector accounted for 7 percent of all southwest border apprehensions. During the first half of fiscal year 1997, the sector’s share rose to 19 percent. The proportion of southwest border apprehensions of the three south Texas sectors—McAllen, Laredo, and Del Rio—rose from 19 percent to 37 percent over the same period. The Border Patrol’s enforcement posture aimed to reduce illegal entries into large urban areas, thereby forcing illegal alien traffic to use rural routes where the Border Patrol believes it has a tactical advantage. Our analysis of INS apprehension data shows that within the San Diego, El Paso, and Tucson sectors, apprehensions have decreased in areas that have received greater concentrations of enforcement resources and increased in more remote areas.
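The share computations behind these percentages are straightforward. The sketch below uses hypothetical apprehension counts, chosen only so that the results reproduce the rounded percentages quoted in the text; they are not actual INS figures:

```python
# Hypothetical apprehension counts, chosen only to reproduce the rounded
# percentages quoted in the text; they are not actual INS figures.
def share(sector_count, border_total):
    """A sector group's percentage share of all southwest border apprehensions."""
    return round(100 * sector_count / border_total)

fy1993_total, fy1997_total = 500_000, 500_000   # assumed border-wide totals

assert share(340_000, fy1993_total) == 68   # San Diego + El Paso, first half FY1993
assert share(165_000, fy1997_total) == 33   # San Diego + El Paso, first half FY1997
assert share(35_000, fy1993_total) == 7     # Tucson, first half FY1993
assert share(95_000, fy1997_total) == 19    # Tucson, first half FY1997
```

The point of the sketch is simply that a sector's "share" is its count divided by the border-wide total; a falling share therefore does not by itself reveal whether the sector's own count fell or other sectors' counts rose.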
As shown in figure 12, in the San Diego sector, the stations of Imperial Beach and Chula Vista, which provide the shortest established routes to urban San Diego, accounted for 61 percent of the sector apprehensions in the first half of fiscal year 1993. In the first half of fiscal year 1997, these two stations accounted for 39 percent of the sector’s apprehensions. Conversely, in the other stations, largely in rural sections of the sector, the share of total apprehensions increased from 39 percent to 61 percent over the same time period. In the El Paso sector, the proportion of apprehensions in the urban El Paso station dropped from 79 percent of all sector apprehensions in the first 6 months of fiscal year 1993 to 30 percent in the first 6 months of fiscal year 1997, as shown in figure 13. A 1994 study commissioned by the U.S. Commission on Immigration Reform concluded that Operation Hold-the-Line appeared to have substantially deterred some types of illegal crossers but not others. Using official statistics on apprehensions and crossings at the ports of entry, other official data (e.g., police and crime data, birth and hospital data, education and school attendance statistics, and sales tax and general sales data), and qualitative information from in-depth interviews with government officials and persons at border crossing sites in El Paso and Ciudad Juarez, the study concluded that the operation had been more successful in curtailing illegal immigration among aliens who crossed illegally from Ciudad Juarez to engage in illegal work or criminal activity in El Paso (local crossers) than among aliens who crossed at El Paso but were headed for other U.S. destinations (long-distance labor migrants). According to the study, a substantial amount of long-distance labor migration appeared to have been diverted to other locations along the southwest border.
Border Patrol officials in the El Paso sector told us in March 1997 that the majority of illegal aliens entering the sector were long-distance migrants, so they believed that their enforcement efforts were continuing to deter local crossers. Apprehension data also provide some support that a shift occurred in illegal alien traffic from the urban area of Nogales, Arizona, targeted by the Border Patrol in the first phase of its enforcement strategy in the Tucson sector, to other stations in the sector. As shown in figure 14, in the first 6 months of fiscal year 1993, the Nogales station accounted for 39 percent of all sector apprehensions. In the first 6 months of fiscal year 1997, the station accounted for 28 percent of all sector apprehensions. Over this same period, apprehensions have increased in the city of Douglas, Arizona, from 27 percent to 43 percent of all sector apprehensions. Other information also indicates that there has been a shift in alien traffic. According to an INS report on Operation Gatekeeper, data from INS’ automated fingerprinting system, known as IDENT, showed that illegal aliens were less likely to try to cross in Imperial Beach in January 1995 than in October 1994. According to an April 1996 report summarizing the results from an INS Intelligence conference held in the Del Rio sector, potential illegal aliens had been channeled away from the U.S. urban areas on the southwest border to more inhospitable areas. The report stated that a sizeable number of apprehensions were being made in extremely desolate areas of the border, and this was taken to be an indication that illegal aliens were trying to avoid Border Patrol deployments. However, according to the report, INS lacked the resources to apprehend aliens traversing the remote areas. 
The report suggested that the Border Patrol should not abandon its high-visibility deterrent posture in the urban areas to respond to the remote areas because it would encourage a return to the earlier entry patterns. According to some INS sector officials, shifts in illegal alien traffic have had a negative effect in their sectors. For example, according to a January 1997 Laredo sector report, continued media coverage of Operations Hold-the-Line and Gatekeeper has discouraged entries in those areas and made Laredo vulnerable to would-be crossers. Border Patrol officials in the Del Rio sector told us in May 1997 that because of substantial increases in illegal alien entry attempts and limited resources, the sector had limited success in controlling the two main corridors within the sector. According to INS headquarters officials, such shifts in illegal traffic are not failures of the strategy; rather, they are interim effects. The strategy has also produced some effects that INS did not anticipate. INS officials told us that they were in some cases caught unaware by some of the changes in illegal alien traffic and the tactics of illegal crossers. For example, INS Western Region officials told us that the increases in illegal alien traffic that they predicted would occur in the Tucson, Arizona, and McAllen, Texas, sectors happened earlier than they expected. According to these officials and San Diego sector officials, the mountains in the eastern section of the San Diego sector were expected to serve as a natural barrier to entry. They were surprised when illegal aliens attempted to cross in this difficult terrain as it became more difficult to cross in the urban sections of the sector.

Changes in Illegal Entry Attempts at Ports of Entry

The strategy anticipated increases in the flow of illegal traffic through the ports of entry as it became more difficult to cross between the ports.
According to INS regional and district officials, some of these anticipated increases had begun to occur. Officials from the San Diego district told us that they had seen large increases in attempts to enter the United States using fraudulent documents or making false claims to U.S. citizenship immediately following the increase of resources to the Border Patrol in the San Diego sector. According to San Diego district inspections data, the number of fraudulent documents intercepted increased about 11 percent from fiscal year 1994 to 1995 (from about 42,000 to about 46,500), and the number of false claims to U.S. citizenship increased 26 percent (from about 15,400 to about 19,400). Officials in El Paso told us they uncovered more fraudulent documents after the initiation of Operation Hold-the-Line. El Paso district inspections data show a steady increase in the number of fraudulent documents intercepted from fiscal year 1994 (about 8,200) through fiscal year 1996 (about 11,000). The Del Rio intelligence conference report stated that Border Patrol enforcement efforts had forced more people to attempt entry fraudulently through the ports of entry. Officials from the Phoenix district, however, told us that they had not seen any significant change in the number of fraudulent documents intercepted at Arizona ports of entry, as the Border Patrol resources increased in the Tucson, Arizona sector. INS Western Region and San Diego District officials told us that they had not expected the volume of people trying to enter illegally at the ports of entry in the San Diego areas to be as large as it was after Operation Gatekeeper or the tactics illegal aliens chose to try to get past the inspectors. For example, at various times during 1995, illegal aliens gathered at the entrance to the San Ysidro port of entry and attempted to overwhelm inspectors by running in large groups through the port. 
INS created emergency response teams at the port to deal with these unexpected tactics and added bollards and other physical barriers to make unimpeded passage more difficult. In addition, INS asked Mexican government officials to help prevent large gatherings on the Mexican side of the border. During our visit to San Ysidro in March 1997, INS officials told us that due to these actions, large groups of port-runners were no longer a problem. It is difficult to determine whether the increases in the number of fraudulent documents intercepted and false claims to U.S. citizenship were a result of actual increases in illegal entry attempts at the ports or greater efforts made to detect fraud. The 1994 U.S. Commission on Immigration Reform evaluation of Operation Hold-the-Line noted that INS inspectors at the ports of entry in El Paso began checking documents more closely after the Border Patrol instituted the operation, which may have contributed to the increase in recorded illegal entry attempts. INS headquarters officials disagreed with these findings. They stated that they believed that inspectors in El Paso were exposed to heavier traffic flows after the operation began and, therefore, inspectors could have compensated for the increased workload by doing more cursory reviews of documents. These headquarters officials stated that increases in detected fraud reflected actual increases in fraudulent entry attempts, which were a response to heightened enforcement between the ports of entry caused by the Border Patrol’s implementation of Operation Hold-the-Line.

Alien Smuggling

INS expected that if successful, its enforcement efforts along the southwest border would make it more difficult and costly for illegal aliens and alien smugglers to cross the border. INS postulated that this should be reflected in higher fees charged by alien smugglers and more sophisticated tactics used by smugglers to evade capture by INS.
Fees charged by smugglers and the sophistication of smuggling methods have reportedly increased since fiscal year 1994. On the basis of testimony of the U.S. Attorney for the Southern District of California, INS evaluation reports on enforcement efforts in San Diego, and INS intelligence reports, fees paid by illegal aliens to smugglers have increased substantially. Fees for smuggling illegal aliens across the southwest border have reportedly tripled in some instances and may be as high as several thousand dollars for transportation to the interior of the United States. Intelligence assessments by INS’ Central Region in April and October 1996 concluded that smugglers had become more sophisticated in their methods of operation. The assessments said that smugglers were more organized and were transporting aliens further into the interior of the United States. Similarly, officials from the U.S. Commission on Immigration Reform told us that their examination of alien smuggling along the southwest border indicated a trend toward more organized smuggling. INS sector officials and documents have reported that changes in alien smuggling patterns may be having a negative impact in some sectors. According to a June 1996 Laredo sector intelligence assessment, due to the success of Operation Gatekeeper and Operation Hold-the-Line, as well as increased personnel in the McAllen sector, alien smuggling had increased in the sector. The assessment further stated that limited manpower caused deterrence to be “negligible” and, consequently, alien smugglers crossed at virtually any point along the Rio Grande River beyond the area where the sector focused its enforcement resources. According to the report from the Del Rio intelligence conference, the increased use of organized smuggling may make INS’ mission more difficult. For example, most of the alien smuggling organizations were reportedly “long-haul” groups transporting aliens to interior work sites via interstate highways.
In addition, according to the report, many of the smuggling organizations that used to be headquartered in the United States moved their operations out of the United States, which may make prosecution of the principal leaders of these organizations more difficult.

Recidivism

The strategy anticipated eventual reductions in apprehensions of illegal aliens who had previously been apprehended, as control was gained in particular locations. According to INS, reductions in recidivism in these locations would indicate that some illegal aliens are being deterred from entering. INS has plans to use IDENT, its automated fingerprinting system, to identify recidivists and analyze their crossing patterns. However, due to the length of time that it is taking INS to deploy the IDENT system in field locations, difficulties in getting agents to use IDENT or use it properly, and computer problems, IDENT to date has provided limited information on recidivism. According to INS’ fiscal year 1998 budget request, IDENT systems were to be installed in 158 fixed locations along the southwest border by the end of fiscal year 1997. As of October 1, 1997, however, only 140 of these locations had an IDENT system in place. Of the nine southwest border sectors, only the San Diego and El Centro sectors had an IDENT system installed in every fixed location. Even when IDENT has been installed, according to INS sector officials, it has not always been put in locations where aliens are apprehended. One of the reasons given for not installing IDENT in these locations is that it depends on telephone hookup to a central database, but aliens are not necessarily apprehended where such telephone communications exist. As a result, many apprehended aliens have not been entered into IDENT and whether they are recidivists cannot be readily determined.
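In concept, identifying recidivists from fingerprint-matched apprehension records is a matter of counting repeat matches for the same individual. The sketch below uses a hypothetical log format for illustration only; it is not IDENT's actual schema or matching logic:

```python
from collections import Counter

# Hypothetical apprehension log as (fingerprint_match_id, year-month) pairs.
# Illustrative data model only; not IDENT's actual schema.
apprehensions = [
    ("A1", "1996-02"), ("A2", "1996-02"), ("A1", "1996-03"),
    ("A3", "1996-04"), ("A1", "1996-05"), ("A2", "1996-06"),
]

counts = Counter(pid for pid, _ in apprehensions)
recidivists = sorted(pid for pid, n in counts.items() if n > 1)
recidivism_rate = len(recidivists) / len(counts)   # share of distinct persons re-apprehended

print(recidivists)                  # ['A1', 'A2']
print(round(recidivism_rate, 2))    # 0.67
# Note: poor-quality prints that fail to match the same person would
# undercount recidivists, one of the data-quality problems described in the text.
```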
Further, according to INS headquarters officials, even when agents have access to IDENT, they do not always use all the system capabilities to identify recidivists. This reportedly occurs when agents input information on apprehended aliens into IDENT at the end of their work shift. In such a case, agents may input apprehended aliens into the system but leave without determining whether the alien is a recidivist, because doing so would require additional time for further processing. In addition, if agents do not use the IDENT equipment properly (e.g., they do not clean the platen, which records the fingerprint), they may obtain poor-quality fingerprints from apprehended aliens. This can result in a failure to identify aliens as recidivists. These situations have been acknowledged as problems by INS, but the frequency of their occurrence is not known. Finally, computer problems have affected the usefulness of IDENT data and INS’ ability to track recidivism over several years. The first year of IDENT data, which were collected in fiscal year 1995, is not comparable with data collected in later years because of changes made to the software at the end of 1995. Accordingly, data collected since January 1996 can be used as a baseline for assessing the effects of the southwest border strategy, but data collected prior to then cannot be used. In addition, computer problems arose in fiscal year 1996, affecting the ability of IDENT to accurately read apprehension dates and times recorded by agents and to match poor-quality fingerprints within the system. INS officials told us these problems have been corrected and the data have been reprocessed, validated, verified, and are more consistent. INS officials told us that although IDENT data gathered since January 1996 are reliable and accurate, they have not yet been analyzed to examine trends in recidivism.

Changes in Violence at the Border

The strategy anticipated a reduction in border violence as border control was achieved.
INS officials told us that they anticipated that crime would decline in those sections of the border where INS invested more enforcement resources. The results on this indicator are inconclusive. In November 1996, INS officials reported that crime statistics for the San Diego area showed that property crime rates and violent crime rates dropped between 1994 and 1995, after the infusion of resources in the sector. The decreases exceeded decreases reported for the same time period for the state of California and the nation as a whole. However, property and violent crime rates were decreasing in San Diego prior to the infusion of resources. In addition, according to FBI crime statistics, the crime rate for Imperial Beach, the area that received the greatest infusion of Border Patrol agents in the San Diego sector, was 19 percent lower from January to June of 1996 compared with the same period in 1992. However, this decrease was smaller than the 32 percent decrease for the San Diego region as a whole. According to an official in the Executive Office for the U.S. Attorneys, the Executive Office as well as local law enforcement leaders believe that a more secure border is a material element in the reduction of crime in San Diego. In addition, according to this official and INS officials, there has been a significant drop in violence against aliens crossing the border illegally due to the Mexican government establishing a special group to patrol its side of the border. According to the Executive Office official, enhanced coordination between the group and U.S. authorities has had a profound and positive impact on the level of violence. The U.S. Commission on Immigration Reform, in its 1994 evaluation of Operation Hold-the-Line, examined statistics for a number of different types of crimes in El Paso and the age groups involved in committing the crimes. 
The evaluation found that certain types of petty crime and property crime committed by young adults and juveniles had declined in El Paso after the implementation of the operation. However, the evaluation report said that the declines were neither statistically significant nor greater than drops that had occurred in previous years. Furthermore, linking changes in crime rates to border enforcement efforts is problematic because there are often no data available on whether arrested offenders have entered the country illegally. Without this information, it is difficult to determine what proportion of the reported declines in crime rates may be due to changes in the number of illegal aliens arrested for criminal activity.

Formal Evaluation Based on Multiple Indicators Would Be Needed to Assess Effectiveness of the Attorney General’s Strategy

The Attorney General’s strategy for deterring illegal entry across the southwest border envisions three distinct but related results: fewer aliens will be able to cross the border illegally; fewer aliens will try to illegally immigrate into the United States; and, consequently, the number of illegal aliens in the United States will decrease. However, the indicators presently used for measuring the overall success of the strategy are not sufficiently comprehensive to address these three distinct results, and, in many cases, data are not being gathered systematically. In addition, there is no overall plan describing how these and other indicators could be used to systematically evaluate the strategy to deter illegal entry. To gauge the overall success of the strategy, data would be needed to assess each of the results envisioned by the strategy and an evaluation plan would be needed to describe the interrelationship of those results.
To illustrate this point, the strategy could have a desirable effect on the flow of illegal aliens across the southwest border and, concomitantly, an undesirable effect on the size of the illegal alien population in the United States. This could occur if, for example, in response to the strategy, aliens made fewer illegal trips to the United States, but stayed longer on each trip, or made a legal trip but overstayed the terms of their visas. Information would thus be needed on whether a possibly reduced flow across the border may be an artifact of aliens’ staying in the United States for longer periods of time than when the border was more porous or whether aliens were deterred from making attempts at illegal entry at the border. Our review of the illegal immigration research literature, interviews with agency officials, and suggestions on evaluation strategies made to us by our panel of immigration experts verified the importance of a comprehensive approach for assessing the effectiveness of the strategy. Such an approach would require a formal evaluation plan consisting of indicators that would serve as measures of flow across and to the border as well as the size of the illegal alien population. We drew on our literature review and other consultations to identify indicators, some noted by INS and others not, that might provide useful information. The indicators we identified are discussed in the following pages and summarized in appendix V. It is important to keep several points in mind when contemplating these indicators: First, the indicators we identified address some aspects of each of the border strategy’s intended results dealing with the flow of illegal aliens across the southwest border, whether aliens are being deterred from attempting to illegally migrate into the United States, and the number of illegal aliens residing in the United States. 
Second, each indicator or result by itself would be insufficient to assess the effectiveness of the strategy as a whole, but multiple indicators drawing on a variety of methods and data sources could contribute to a more comprehensive evaluation of the effectiveness of the strategy along the southwest border. Third, these indicators should not be viewed as an exhaustive or all-inclusive list. Our purpose was to illustrate the significance of the multiple-indicator concept; devising an implementable evaluation strategy was beyond the scope of this review. Fourth, the Government Performance and Results Act (Results Act) of 1993 is intended to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. The Results Act requires that agency strategic plans contain a schedule for future program evaluations. Program evaluations can be a potentially critical source of information in ensuring the reasonableness of goals and strategies and explaining program results. The Results Act defines program evaluations as objective and formal assessments of the results, impact, or effects of a program or policy. We recognize that the results envisioned by the strategy are complex and interrelated and that a rigorous and comprehensive approach to evaluating the results would be challenging and potentially costly. However, we believe that the information that would be gained from such an approach would be important to obtain because it could help identify areas where Justice might not be achieving the strategy’s objectives and suggest areas where strategy and policy changes may be necessary. In addition, as Justice states in its September 1997 Strategic Plan, “evaluation identifies and explains the linkages between the activities and strategies undertaken and the results achieved . . . 
enables future planning and resource decisions to be better informed.”

Indicators of Flow Across the Border

Historically, inferences about the flow of illegal aliens across the border have relied heavily on data recorded when aliens are apprehended attempting to make illegal entry between ports of entry or at ports of entry. Although the number of apprehensions is a quantifiable measure of law enforcement activity and INS can collect apprehension information systematically and completely along the length of the border, it is not a very good measure of the effectiveness or results of broad strategies, such as the strategy to deter illegal entry across the southwest border. A major limitation of apprehension data is that they provide information only on illegal aliens who have been captured by INS. Because arrested aliens are the basis for apprehension data, such data provide little information about illegal aliens who eluded capture, possibly by (1) changing their crossing patterns and entering in areas where INS has not been able to detect them, (2) using smugglers to raise the probability of a successful crossing, (3) entering successfully at the ports of entry with false documents, or (4) entering legally with a valid visa or border-crossing card and then overstaying the time limits on these documents or working illegally in the United States. INS has, in various contexts, made note of indicators other than apprehensions to measure the effects of various border-control initiatives. These indicators include smuggling and crime statistics that were mentioned in the previous section as indicators of the interim effects of the strategy. Other indicators on which INS has collected some data in certain sectors or has reported data collected by others include (1) the number of migrants staying in hotels and shelters in Mexican border cities, (2) the number of “gotaways” who are detected by the Border Patrol but not apprehended, (3) complaints from members of U.S.
border communities about suspected illegal aliens trespassing on their property, and (4) interview responses by apprehended aliens on the perceived difficulty of crossing the border. However, INS has collected many of these data on an ad hoc basis rather than systematically and comprehensively in accordance with a formal evaluation plan. As a result, it is often difficult to determine the time frame in which the data were collected, the methodological integrity of the data collection procedures, and the generalizability of the findings from one area of the border to another. We reviewed a number of ongoing and completed research projects to determine what indicators have been identified by others that could potentially be used in a formal evaluation of the effects of the strategy. These research projects contain information gathered by immigration researchers in Mexico and the United States on the crossing patterns and tactics of Mexicans who have crossed or intend to cross the border illegally. In addition, the panel of immigration researchers that we convened for advice on evaluating the effectiveness of the strategy suggested that information on (1) the social costs and effects of illegal immigration in border cities; (2) deaths, injuries, and abuses of illegal aliens; and (3) Mexican efforts to crack down on its border could bolster an understanding of the strategy's potential effects on flow across the border. Several of the research projects we identified reported information obtained from interviews with illegal aliens in the United States, potential illegal aliens in Mexico, Mexicans who had been in the United States illegally, and illegal and legal immigrants preparing to cross the U.S.-Mexico border.
Although we cannot attest to the quality of the data collected in these research studies or to whether the data were properly analyzed and interpreted, the studies have produced quantitative and qualitative information on the crossing experiences of illegal aliens, including the number of times individuals have been apprehended on prior illegal trips, the locations where they have crossed the border, tactics used to cross the border, whether smugglers were used, and the overall costs of making an illegal trip. One ongoing project, the Mexican Migration Project, has gathered interview data from migrants from over 30 communities in Mexico, including detailed histories of border crossing and migratory experience in the United States covering the past 30 years. Two other projects conducted by a Mexican university gathered interview data from migrants who were preparing to cross the border illegally or legally, or who had just returned from the United States to Mexico. The recently completed Binational Study on Migration gathered interview data from Mexican migrants and from alien smugglers on such issues as smuggling costs and tactics and perceptions of United States border-control efforts. (See app. VI for more details on these research projects, and others listed below, including discussion of their strengths and weaknesses.) Members of our expert panel also suggested that evaluations of the strategy should include indicators of the social costs and benefits of border enforcement, such as the social and financial effects of illegal migration on border communities in the United States. As mentioned earlier, the 1994 U.S. Commission on Immigration Reform evaluation of Operation Hold-the-Line examined a variety of social indicators in El Paso to estimate the impact of illegal aliens on these communities and to determine whether there were any changes in these indicators as a result of the operation.
Future evaluation research could examine these kinds of indicators in other cities along the southwest border. Such evaluations would need to take into account differences between border cities in terms of the enforcement strategies employed by INS, social and cultural conditions, and the types of illegal aliens that traditionally cross through these cities. The panel also recommended more systematic study of deaths, injuries, and abuse suffered by illegal aliens in crossing the border. Advocacy groups have raised concerns that the strategy may divert illegal aliens to more dangerous terrain to cross the border, thereby producing an unacceptably high toll in human suffering. A recent study by the University of Houston's Center for Immigration Research tracked deaths of illegal aliens from such causes as automobile-pedestrian accidents, drowning, and environmental and weather hazards (dehydration, hypothermia, accidental falls) in counties on the U.S.-Mexico border during the years 1993 to 1996. The study did not find an increase in the overall number of illegal alien deaths over the time period but did find an increase in the number of deaths in the remote areas where immigrants have traveled in an effort to avoid areas of greater enforcement along the border. These deaths in remote areas were purportedly caused by exposure. The study also noted that alien deaths were not recorded systematically by any centralized agency, and local databases were incomplete and did not use common standards. Immigration researchers and INS officials we spoke with agreed that IDENT data, when more fully available, could be quite useful for examining the flow of illegal aliens across the border. First, IDENT could provide an unduplicated count of the number of persons apprehended.
Second, information on the frequency of apprehension of individuals, the time between multiple apprehensions, and the locations where illegal aliens are apprehended could provide a greater understanding of how border enforcement efforts are affecting illegal crossing patterns. Finally, recidivism data have potential for statistically modeling the flow of illegal aliens across the border and the probability of apprehension. INS officials told us in June 1997 that they were examining different analytic techniques that could be used to model the flow. Indicators of Deterrence The southwest border strategy is ultimately designed to deter illegal entry into the United States. It states that “The overarching goal of the strategy is to make it so difficult and so costly to enter this country illegally that fewer individuals even try.” INS officials told us they had no plans, and there are no plans in the Attorney General’s strategy, to directly examine deterrence—that is, the extent to which potential illegal aliens decide not to make an initial or additional illegal trip to the United States or decide to limit the number of illegal trips they had planned to take. According to some research, decisions to migrate illegally are determined by a complex set of factors; among them are perceptions of border enforcement efforts; economic conditions in the country of origin; demand for labor in the United States; and the extent to which social networks are established that facilitate migration from other countries to the United States. The previously mentioned Mexican Migration Project has collected some survey information on economic conditions in each of the communities selected for study as well as the extent of social networks that facilitate migration from these communities to the United States. 
Although the validity of the results would depend greatly on the quality of the underlying data and the appropriateness of the statistical assumptions made, data from the project may help to (1) estimate annual changes over a 30-year period in migrants’ likelihood to take an illegal trip to the United States and (2) examine the factors contributing to decisions to take an illegal trip. The Binational Study also conducted interviews with Mexicans concerning factors affecting their decisions to migrate illegally. Researchers interviewed residents of several different types of communities in Mexico that differed in the extent to which patterns of migration to the United States had taken hold. In older sending communities, nearly all male residents had made a trip to the United States, and illegal migration was seen as a rite of passage. Study directors from the U.S. Commission on Immigration Reform told us that efforts at deterring illegal migration from these kinds of communities were unlikely to be effective. However, they said that deterrence efforts might work better with migrants from newer sending communities, where social networks were not yet well established. Some people in these communities reportedly were leaving because of difficult economic circumstances, while others were staying because they had heard that the trip was harder, more costly, and more dangerous. Commission officials told us that an evaluation should include interviews with residents of older and newer sending communities in Mexico to ascertain their rationales for undertaking trips to the United States and their perceptions of border-control efforts. If the border becomes harder to cross illegally, it is possible that some migrants who might have crossed illegally may try to enter through legal means by applying for nonimmigrant visas or border-crossing cards. 
Information available from INS and the Department of State on the number of people applying for nonimmigrant visas and border-crossing cards could be used to track whether greater numbers of aliens are applying for legal means of entry. Trends in application rates may be difficult to interpret in the future, however, since aliens already holding border-crossing cards will be required to use new cards containing biometric information, such as the cardholder’s fingerprints, by October 1, 1999. Application rates may then overstate the actual demand for legal entry to the United States. Indicators of the Number of Illegal Aliens in the United States The strategy anticipates that enforcement activities will ultimately reduce the number of illegal aliens in the United States and, thereby, reduce their use of benefits and social services. INS estimated that the number of illegal aliens residing in the United States grew from 3.9 million in October 1992 to 5.0 million in October 1996. These estimates are difficult to use to evaluate the impact of the strategy, however, because (1) they focus on long-term illegal residents, rather than illegal aliens who come to the United States for relatively short periods and return periodically to their country of origin; (2) they do not allow for estimates of the extent to which reductions in the flow of illegal immigration across the southwest border may be offset by increases in aliens using legal nonimmigrant visas but overstaying the terms of their visas; (3) they have so far been produced too infrequently to be useful to evaluate short-term effects of enforcement efforts; and (4) a shortage of information about some components of the estimates makes it difficult to estimate the number of illegal aliens without making questionable assumptions about these components. Some additional data sources are available that could supplement INS data. 
However, considerable effort and research would probably be entailed in actually obtaining adequate estimates of the number of illegal aliens in the United States. There are numerous difficulties in accurately measuring the total number of illegal aliens in the United States and in estimating how many illegal aliens come to this country each year. One of the major difficulties arises because of the heterogeneity of the illegal alien population. Some illegal aliens, referred to as sojourners in the illegal immigration research literature, come to the United States on a temporary basis but consider themselves residents of a foreign country and intend to return. Others, referred to as settlers, come here with the intention of staying on a more permanent basis. In addition, illegal aliens differ in their mode of arrival. Some illegally cross the southwest border; enter illegally along other borders; or enter illegally at the land, sea, or air ports of entry. Other aliens enter legally with one of several types of visas and then fail to leave within the allowed time period. Border enforcement efforts may have different effects on each of these groups, some of them not intended or anticipated in the Attorney General’s strategy. For example, successful border enforcement may result in sojourners limiting the number of trips they make to the United States, but staying for longer periods of time, because of the greater difficulty in crossing the border. In that case, successful border control would effectively reduce the number of illegal border crossings but could increase the average length of time that illegal aliens reside in the United States. Successful border strategies may also discourage sojourners from crossing the border illegally but may encourage them to try to enter legally and stay longer than authorized. 
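The trade-off just described, fewer crossings but longer stays, is simple arithmetic; the following sketch uses entirely hypothetical trip counts and durations (none of these figures come from INS or research data):

```python
# Hypothetical illustration: fewer sojourner trips, each of longer duration,
# can reduce annual border crossings while increasing the total time a
# sojourner spends residing in the United States.
def days_resident(trips_per_year, days_per_trip):
    """Total days per year a sojourner spends in the United States."""
    return trips_per_year * days_per_trip

# Before stepped-up enforcement: four short trips a year (hypothetical).
crossings_before = 4
days_before = days_resident(crossings_before, days_per_trip=60)   # 240 days

# After: a single, much longer trip (hypothetical).
crossings_after = 1
days_after = days_resident(crossings_after, days_per_trip=300)    # 300 days

assert crossings_after < crossings_before  # crossings fall ...
assert days_after > days_before            # ... yet residence time rises
```

The point of the sketch is only that a count of crossings and a count of person-days resident can move in opposite directions, which is why an evaluation would need data on both.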
A comprehensive evaluation that seeks to determine the effect of the strategy on the number of illegal aliens in the United States would need to have reliable and valid data on sojourners and settlers, and on those who enter illegally as well as those who overstay. The evaluation would also have to account for the impact of outside factors, such as economic conditions in countries of origin and policy changes in the United States. In February 1997, INS released estimates of the overall stock of illegal aliens residing in the United States in October 1996 and updated earlier estimates that it had made for October 1992, using data collected by INS and the Census Bureau. According to the latest estimates, 5 million illegal aliens resided in the United States as of October 1996, up from 3.9 million as of October 1992. These estimates are difficult to use to evaluate the impact of enforcement efforts at the border, for a number of reasons. First, individuals covered in these estimates are defined as those who have established residence in the United States by remaining here illegally for more than 12 months. The estimates therefore provide little information about the number of sojourners who come to the United States and stay for periods shorter than 1 year. Other kinds of data are needed to examine the sojourner population. Second, uncertainty about the number of people who enter legally with nonimmigrant visas and stay longer than authorized (overstays) makes it difficult to assess whether a decrease in the estimated number of illegal border crossers from Mexico due to border enforcement is offset by an increase in Mexican overstays. The INS overstay estimates for 1996 contain a projection based on earlier overstay data; thus, the total estimate is subject to considerable uncertainty. In recent testimony to Congress, INS admitted that it has been unable to produce estimates of overstays since 1992.
Indeed, because of problems with the data system on overstays, INS acknowledges that it is unable to estimate the current size of the overstay problem and the magnitude by country of origin. Third, estimates made once every 4 years are not suited to identifying intermediate trends that might signal effects of border enforcement efforts. INS has calculated an average annual growth rate of 275,000 in the illegal resident population between 1992 and 1996, but these estimates result from averaging over 4 years and cannot distinguish yearly changes in the illegal alien population. Fourth, INS acknowledges that limited information about some components of the estimates may increase uncertainty about the size of the illegal population. For example, limited information about the extent to which the Mexican-born population may be undercounted and the number who emigrated from the United States during the period between estimates necessitates assumptions that affect the precision of the estimates of the illegal alien population. Unfortunately, data are limited on the number of illegal alien sojourners who travel back and forth between Mexico and the United States. The Binational Study on Migration cited estimates from the previously mentioned Northern Border study of the number of both legal and illegal sojourners traveling to the United States and returning to Mexico in 1993 and 1995. The study found a decrease between 1993 and 1995 in the overall number of people traveling in each direction. The Binational Study interpreted these findings to mean that sojourners were likely staying longer in the United States in 1995 than in 1993. However, the study was unable to distinguish the size of the illegal alien flow from that of the legal flow.
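The average annual growth rate that INS calculated follows directly from its two stock estimates; a minimal sketch of the arithmetic:

```python
# INS stock estimates of the illegal resident population cited in this report:
#   3.9 million as of October 1992 and 5.0 million as of October 1996.
stock_1992 = 3_900_000
stock_1996 = 5_000_000
years_elapsed = 4

average_annual_growth = (stock_1996 - stock_1992) // years_elapsed
assert average_annual_growth == 275_000  # the figure INS reported
```

As the surrounding text notes, this averaging cannot distinguish year-to-year changes: the same 275,000 average is consistent with very different annual patterns over the 4-year period.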
The Mexican Migration Project data have also been used to estimate changes in net annual illegal migration from Mexico (subtracting estimates of the number of illegal aliens who return to Mexico from estimates of the number of illegal aliens who migrate to the United States). Because neither study was based on data that represented all illegal aliens in the United States, estimates derived from these studies may be most useful for looking at trends, rather than absolute levels, of illegal immigration from Mexico. Finally, a number of existing data sources provide estimates of the number of illegal aliens in United States workplaces. Since 1988, the U.S. Department of Labor has administered its National Agricultural Workers Survey 3 times a year to a random sample of the nation’s crop farm workers and has asked respondents about their immigration status, among other things. Such data may be useful for examining whether border enforcement efforts have reduced the number, or proportion, of illegal aliens in a sector that has traditionally attracted high proportions of illegal migrants, and whether the characteristics of illegal aliens working in agriculture have changed. An April 1997 report prepared for the U.S. Commission on Immigration Reform, based on National Agricultural Workers Survey data from 1988 to 1995, showed an increasing percentage of illegal aliens in agriculture during this time period. Many illegal aliens work in sectors other than agriculture, including urban-sector employment in construction and services. INS collects statistics on the results of targeted and random investigations in U.S. workplaces. Trends in the number of illegal aliens apprehended at the workplace may reflect enforcement efforts at the southwest border. For example, if recent border enforcement efforts are effective, the proportion of apprehended illegal aliens who have recently arrived in the United States might decrease at various workplaces. 
However, changes in the number of illegal aliens apprehended at U.S. workplaces may be a result of other causes, such as greater emphasis by INS staff on investigating whether employers are complying with the law and INS regulations concerning aliens’ right to work in the United States. INS Evaluation Efforts INS recognizes that multiple indicators are necessary, and, as part of its fiscal year 1997 priority to strengthen border facilitation and control, INS stated that it would “review past and current studies and initiatives regarding measurements of success of both enforcement and facilitation along the border.” An objective within this priority was to make recommendations for appropriate measures and complete implementation plans. However, INS was unable to provide us with any documentation that it or any other Justice component was pursuing a broader, formal evaluation of the southwest border strategy. In June 1997, INS officials told us that they had completed a review of indicators used in the past by INS and others, but had not yet made recommendations regarding what measures would be collected, who would be responsible for collecting the data, or how the data would be analyzed. Along these lines, the Justice Department’s September 1997 Strategic Plan stated that a key element in the Department’s strategic planning process is the “systematic evaluation of major programs and initiatives.” Although the strategic plan described program goals, strategies, and performance indicators for INS, it did not contain a program evaluation component to explain how it will assess success in meeting these goals and, in a broader sense, the effectiveness of the southwest border strategy. Justice’s strategic plan acknowledged that there has been relatively little formal evaluation of its programs in recent years. 
According to the plan, Justice intends to examine its current approach to evaluation to determine how to better align evaluation with its strategic planning efforts. Conclusions The Attorney General has a broad strategy for strengthening enforcement of the Nation’s immigration laws that places priority on deterring illegal entry into the United States along the southwest border. Congress has been supporting efforts to gain control of the southwest border by substantially increasing INS’ funding for enforcement activities. As a result, the Department of Justice and INS have made some strides in implementing the strategy. INS’ data on the effects of the implementation of the strategy indicate that a number of the interim results anticipated by the Attorney General are occurring. To comprehensively and systematically assess the effects of the strategy over time, there are a variety of other indicators that may be useful to provide information on some aspects of each of the results envisioned by the strategy. INS has collected data and reported on some, but not all, of these indicators. Although Justice’s strategic plan recognizes the importance of systematic evaluation of major initiatives, Justice has no comprehensive evaluation plan for formally evaluating whether the strategy is achieving its intended range of results—such as reducing flow across the border, reducing flow to the border, and reducing the number of illegal aliens who reside in the United States. We recognize that developing a formal evaluation plan and implementing a rigorous and systematic evaluation of the strategy could require a substantial investment of resources, in part because the needed data may not be presently available, thereby possibly requiring support for new data collection efforts. 
Thus, devising such an evaluation plan should entail determining the most important data needs and the most appropriate and cost-effective data sources and data collection activities as well as carefully analyzing the relationships among various indicators to correctly interpret the results. Furthermore, data obtained on any set of indicators should be interpreted in the context of economic conditions and policy changes in the countries of origin of illegal immigrants and in the United States to help ensure that the results are attributable to the strategy and not to other potential causes. Notwithstanding the challenges in devising such a broad-based evaluation plan, we believe that the substantial investment of billions of dollars being made in the Attorney General’s strategy warrants a cost-effective, comprehensive evaluation to demonstrate whether benefits commensurate to the investment have been realized. Such an evaluation would also be in keeping with the concepts embodied in the Government Performance and Results Act of 1993 as well as the Department’s strategic plan to evaluate major initiatives. In addition, a comprehensive evaluation would also assist Justice in identifying whether INS is implementing the strategy as planned; what aspects of the strategy are most effective; and, if the strategy’s goals are not being achieved, the reasons they are not. Such information would help the agency and Congress identify whether changes are needed in the strategy, in policy, in resource levels, or in program management. Recommendations We recommend that the Attorney General develop and implement a plan for a formal, cost-effective, comprehensive, systematic evaluation of the strategy to deter illegal entry across the southwest border. 
This plan should describe (1) the indicators that would be required for the evaluation, (2) the data that need to be collected, (3) mechanisms for collecting the data, (4) controls intended to ensure accuracy of the data collected, (5) expected relationships among the indicators, and (6) procedures for analyzing the data. Agency Comments We requested comments on a draft of this report from the Attorney General or her designees. On September 16, 1997, we met to obtain oral comments on the draft report from the INS Executive Associate Commissioner (EAC) for Policy and Planning and other officials from INS’ Office of Policy and Planning, Field Operations, Border Patrol, Inspections, Intelligence, Budget, General Counsel, Internal Audit, and Congressional Relations as well as from the Justice Management Division and the Executive Office for United States Attorneys. INS and the Executive Office for United States Attorneys followed up this discussion with point sheets, which reiterated their oral comments suggesting clarifications and technical changes. We also requested comments from the Secretary of State and the Acting Commissioner of Customs. The Customs Service had no comments on our draft report. We received technical comments from the Department of State. We made clarifications and technical changes to the draft report where appropriate. Although INS chose not to provide written comments on the report or the recommendation, INS officials told us that they recognize the need for a comprehensive border-wide evaluation and are in the process of designing and implementing one. The evaluation design was not available when we finalized this report; therefore, we are not in a position to assess whether it contains the types of evaluation factors discussed in our recommendation. We are sending copies of this report to the Attorney General, the Commissioner of the Immigration and Naturalization Service, the Acting Commissioner of the U.S. 
Customs Service, the Secretary of State, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8777. This report was done under the direction of Evi L. Rezmovic, Assistant Director, Administration of Justice Issues. Other major contributors are listed in Appendix VII. Attorney General’s Strategy to Deter Illegal Entry Into the United States Along the Southwest Border In February 1994, the Attorney General and Immigration and Naturalization Service (INS) Commissioner announced a comprehensive five-part strategy to strengthen enforcement of the nation’s immigration laws. The strategy outlined efforts to curb illegal entry and protect legal immigration by focusing on five initiatives: (1) strengthening the border; (2) removing criminal aliens; (3) reforming the asylum process; (4) enforcing workplace immigration laws; and (5) promoting citizenship for qualified immigrants. The first priority, strengthening the border, focused on immigration control efforts along the 2,000-mile U.S.-Mexican border and is summarized here. According to the 1994 strategy announced by the Attorney General and the INS Commissioner, the first priority was to focus on the areas between the ports of entry, which are divided into nine Border Patrol sectors. Because enhanced efforts between the ports of entry were expected to displace illegal traffic, the ports of entry along the southwest border were added to the strategy to provide comprehensive coverage and compensate for that displacement. The new border strategy involved “prevention through deterrence.” This strategy called for concentrating new resources on the front lines at the most active points of illegal entry along the southwest border.
The key objectives of the between-the-ports strategy were to (1) close off the routes most frequently used by smugglers and illegal aliens (generally through urban areas) and (2) shift traffic through the ports of entry or over areas that were more remote and difficult to cross illegally, where INS had the tactical advantage. To carry out this strategy, INS planned to provide the Border Patrol and other INS enforcement divisions with the personnel, equipment, and technology to deter, detect, and apprehend illegal aliens. INS also had related objectives of preventing illegal entry through the ports, increasing prosecutions of smugglers and aliens who repeatedly enter the United States illegally, and improving the intelligence available to border patrol agents and port inspectors. Border Patrol In July 1994, the Border Patrol developed its own plan to carry out the Attorney General’s between-the-ports strategy. The plan, still current in September 1997, was to maximize the risk of apprehension, using increased human and physical barriers to entry to make passage more difficult so that many would consider it futile to continue to attempt illegal entry through traditional routes. Consistent with the Attorney General’s strategy, the Border Patrol sought to bring a decisive number of enforcement resources to bear in each major entry corridor so that illegal aliens would be deterred from using traditional corridors of entry. The plan called for targeting resources to the areas of greatest illegal activity in four phases:

Phase I: Concentrate efforts in San Diego (Operation Gatekeeper) and El Paso (Operation Hold-the-Line), since these two areas historically accounted for 65 percent of all apprehensions along the southwest border, and contain displacement effects in the El Centro and Tucson sectors.

Phase II: Control the Tucson and south Texas corridors.

Phase III: Control the remainder of the southwest border.
Phase IV: Control all other areas outside the southwest border, and concentrate on the northern border and water avenues (for example, Pacific and Gulf coasts).

The Border Patrol strategic plan directed intense enforcement efforts at the areas of greatest illegal activity. This strategy differed from the previous practice of thinly allocating Border Patrol agents along the border as new resources became available. Each southwest sector developed its own tactical plan detailing how it would implement the Border Patrol’s strategic plan. According to the plan, each sector was to focus first on controlling urban areas and increase the percentage of border-control hours on the front lines of the border in such duties as linewatch, patrol, checkpoints, and air operations. Inspections at Ports of Entry To complement the Border Patrol’s strategic plan, which is carried out between the ports of entry, INS Inspections developed Operations 2000+ in September 1996 to enhance operations at the ports of entry. This plan called for increasing the identification and interception of criminals and illegal entrants. To accommodate legal entry while enhancing its efforts to deter illegal entry at the ports of entry, INS planned to increase its use of technology to improve management of legal traffic and commerce. As part of this effort, INS had several ongoing pilot projects to use dedicated commuter lanes and automated entry systems to speed the flow of frequent low-risk travelers through the ports. The use of automated systems could then free inspectors’ time for other enforcement activities. Prosecutions In concert with INS’ efforts to deter illegal entry, the Attorney General seeks to increase prosecutions of alien smugglers, coyotes, and those with criminal records who repeatedly reenter the United States after having been removed. This program involves several aspects of the federal prosecution function and is intended to develop, through the U.S.
Attorneys in the five judicial districts contiguous to the U.S.-Mexico border, federal prosecution policies to (1) support enforcement strategies implemented at the ports of entry by INS and U.S. Customs and between the ports by the Border Patrol, (2) maximize crime reduction in border communities, (3) minimize border-related violence, and (4) engender respect for the rule of law at the border. These policies include (1) targeting criminal aliens for 8 U.S.C. 1326 prosecution, (2) targeting coyotes to disrupt the smuggling infrastructure, (3) increasing alien smuggling prosecutions, (4) increasing false document vendor prosecutions, (5) enforcing civil rights laws, (6) prosecuting assaults against federal officers, and (7) prosecuting border-related corruption cases. The program is also intended to coordinate the use of INS/Executive Office of Immigration Review (EOIR) administrative sanctions and the application of federal criminal sanctions to achieve an integrated and cost-effective approach to the goal of deterrence in border law enforcement. This step includes (1) enforcing EOIR orders as predicates for federal prosecution under section 1326, and (2) using INS/EOIR removal orders and section 1326 provisions to prosecute coyotes. Further, the program is intended to enhance cooperation between federal immigration authorities and state and local law enforcement agencies to (1) leverage INS enforcement efforts through coordination with local law enforcement as allowed by law; (2) identify arrestees’ immigration status to facilitate choice at the outset of official processing among the options of administrative sanctioning, federal criminal prosecution, and state criminal prosecutions; and (3) ensure removal by immigration authorities of criminal aliens from the country, following service of state and local sentences after conviction.
This plan involves (1) instituting county jail programs, (2) employing state and local police to supplement INS personnel, (3) allocating cases effectively between federal and state prosecutors, and (4) integrating information/identification systems among jurisdictions. The Attorney General designated the U.S. Attorney for the Southern District of California as her Southwest border representative and directed him to coordinate the border law enforcement responsibilities of various Justice Department agencies—including INS, the Drug Enforcement Administration, and the Federal Bureau of Investigation—with activities undertaken by the Treasury Department and the Department of Defense.

Intelligence

Another part of the strategy to deter illegal entry across the Southwest border was to reengineer INS' Intelligence program so that Border Patrol agents and port inspectors could better anticipate and respond to changes in illegal entry and smuggling patterns. By the end of fiscal year 1997, INS expected to have an approved strategic plan for collecting and reporting intelligence information, integrating additional assets into the Intelligence program, and assessing field intelligence requirements. In addition, intelligence training and standard operating procedures for district and sector personnel were to be developed.

Indicators of Success

As the strategy is being carried out, the Department anticipates that its success will be measurable using more indicators than the one traditionally used: apprehensions.
The Department anticipates the following measurable results: an initial increase in arrests between the ports, followed by an eventual reduction of arrests and recidivism in traditional corridors; a shift in the flow of illegal alien entries from the most frequent routes (generally through urban areas) to more remote areas; increased port-of-entry activity, including increased entry attempts and increased use of fraudulent documents; increased instances of more sophisticated methods of smuggling at checkpoints; increased fees charged by smugglers; increased numbers of criminal aliens prosecuted for entering the country illegally; increased numbers of alien smugglers, coyotes, and false document vendors prosecuted; increased numbers of deportations of people presenting false documents; and reduced violence at the border (for example, fewer instances of border banditry). Further, the Department believes its enhanced enforcement activities will also have a positive impact on two outcomes that it will not be able to measure and that many complex variables other than border enforcement affect: fewer illegal immigrants in the interior of the United States and reduced use of social services and benefits by illegal aliens in the United States.

Documents Used to Prepare Analysis

The following documents, listed in chronological order, were used to prepare our summary of the Attorney General's strategy to deter illegal entry into the U.S. along the Southwest border:

Department of Justice, Attorney General and INS Commissioner Announce Two-Year Strategy to Curb Illegal Immigration, Press Release, February 3, 1994.
U.S. Government Printing Office, Accepting the Immigration Challenge, The President's Report on Immigration, 1994.
INS, Strategic Plan: Toward INS 2000, Accepting the Challenge, November 2, 1994.
INS, Border Patrol Strategic Plan 1994 and Beyond, National Strategy, U.S. Border Patrol, July 1994.
INS, Priority 2: Strengthen Border Enforcement and Facilitation, Fiscal Year 1996 Priority Implementation Plan.
Department of Justice, Executive Office for United States Attorneys, USA Bulletin, Vol. 44, Number 2, April 1996.
Department of Justice, INS, Building a Comprehensive Southwest Border Enforcement Strategy, June 1996.
Department of Justice, INS, Meeting the Challenge Through Innovation, September 1996.
INS Office of Inspections, Inspections 2000+: A Strategic Framework for the Inspections Program, September 1996.
Illegal Immigration Reform and Immigrant Responsibility Act of 1996.
INS, Operation Gatekeeper, Two Years of Progress, October 1996.
INS, Strengthen Border Facilitation and Control, Fiscal Year 1997 Priorities.
Department of Justice, INS, Immigration Enforcement, Meeting the Challenge, A Record of Progress, January 1997.

In addition, we reviewed the strategies prepared in 1994 by the Border Patrol's nine southwest border sectors.

Status of Selected Border Enforcement Initiatives Mandated by the 1996 Act

Subtitle A, Title I of the 1996 Act mandates further improvements in border-control operations. These include improvement in the border-crossing identification card, new civil penalties for illegal entry or attempted entry, and a new automated entry/exit control system. The status of initiatives to address these provisions is discussed below.

Improvement in the Border-Crossing Identification Card

Section 104 of the 1996 Act provides that documents issued on or after April 1, 1998, bearing the designation "border-crossing identification card" include a machine-readable biometric identifier, such as the fingerprint or handprint of the alien. The 1996 Act further mandates that by October 1, 1999, an alien presenting a border-crossing identification card is not to be permitted to cross over the border into the United States unless the biometric identifier contained on the card matches the appropriate biometric characteristic of the alien.
According to INS Inspections officials, as of September 1997, initiatives to implement this provision were under way, but several issues had yet to be resolved. An INS official told us that INS and the State Department have reached a mutual decision on the biometric characteristics to be used on the card. A digitized color photograph, with potential use for facial recognition as that technology advances, will be collected. Live scan prints of both index fingers will also be collected. These fingerprints are compatible with IDENT, INS' automated fingerprint identification system. However, the system to be used to verify this information at the point of entry into the United States had not been determined. According to the official, resolution of this problem is necessary because no technology currently available can implement what the law requires. INS plans to pilot test a biometric card for pedestrians at the Hidalgo, Texas, port of entry by the end of calendar year 1997. The new system would also change which federal agency adjudicates applications for border-crossing cards, which agency produces the cards, and where the cards are available. Currently, INS Immigration Inspectors at ports of entry and some State Department consular officers adjudicate applications and issue border-crossing cards. The INS Commissioner and the Assistant Secretary of State for Consular Affairs, on September 9 and September 18, 1997, respectively, signed a memorandum of understanding on how the revised border-crossing card process would work. Under the new system, the State Department will take over all adjudication of these cards. Foreign service posts will collect the fees and necessary data, including the biometric, and will send the data electronically to INS. INS will produce the cards.
According to a State Department official, consular officers will check border-crossing card applicants against the State Department's Consular Lookout and Support System (CLASS) to identify those who may be ineligible. The actual details of how the system will operate, however, are still being worked out, e.g., how to handle the volume of applicants. INS officials said that if all goes as expected, the new process will be in place on April 1, 1998.

New Civil Penalties for Illegal Entry

Section 105 of the 1996 Act mandates new civil penalties for illegal entry. Any alien apprehended while entering (or attempting to enter) the United States at a time or place other than as designated by immigration officers is subject to a civil penalty of at least $50 and not more than $250 for each such entry or attempted entry. The penalty is doubled for an alien who has previously been subject to a civil penalty under this subsection. Moreover, such penalties are in addition to, not instead of, any criminal or other civil penalties that may be imposed. This provision applies to illegal entries or attempts to enter occurring on or after April 1, 1997. As of September 1997, INS was still studying this issue. According to an INS official, a working group met in April 1997 and prepared an options paper for INS' Policy Council. Based on this preliminary analysis, the council concluded that administering the program would probably cost more than the fines collected and that the fines would not deter individuals, since few would have the money to pay them. Accordingly, the INS Policy Council requested further study of the potential effects of the program before proceeding with implementation. Such a study was ongoing as of September 1997; no deadline had been set for it. Further development of the rules and regulations required to implement the program will depend on the results of the study.
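The penalty arithmetic that Section 105 specifies is simple enough to sketch. The following illustrative calculation is ours, not part of any INS system; the names and structure are invented for clarity:

```python
# Illustrative sketch of the Section 105 civil-penalty range: $50-$250 per
# illegal entry or attempted entry, with the range doubled for an alien
# previously subject to a penalty under this subsection.
# All names here are ours, not drawn from any INS system.

BASE_MIN, BASE_MAX = 50, 250

def penalty_range(previously_penalized):
    """Return the (minimum, maximum) civil penalty for one entry or attempt."""
    multiplier = 2 if previously_penalized else 1
    return (BASE_MIN * multiplier, BASE_MAX * multiplier)

print(penalty_range(False))  # first offense -> (50, 250)
print(penalty_range(True))   # repeat offense -> (100, 500)
```

Note that, as the act states, such civil penalties would be in addition to, not instead of, any criminal penalties; the sketch covers only the civil range.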
New Automated Entry/Exit Control System

Section 110 of the 1996 Act directs the Attorney General to develop an automated entry and exit control system by October 1, 1998, to (1) collect data on each alien departing the United States and match the record of departure with that of the alien's arrival in the United States and (2) enable the Attorney General to identify, through on-line searching procedures, lawfully admitted nonimmigrants who remain in the United States beyond the time authorized. According to INS Inspections officials, this system is in the early development stage. INS is pilot testing a system for air travelers and believes it will have a system in place for them within the 2-year mandated time period. However, many implementation issues remain regarding travelers who enter and exit across the land borders. For example, how will exit control be done for those driving across the land borders? Will everyone entering the United States, including citizens, be checked? If so, how can everyone be checked? If not, how can aliens be distinguished from nonaliens? INS expects to test a pilot project for an arrival/departure system for pedestrian crossers at the Eagle Pass, Texas, land port of entry. According to INS Inspections officials, INS plans to ask Congress for more time to implement such a system at land ports of entry but had not done so as of September 1997. At that time, INS had at the Office of Management and Budget a technical amendment requesting more time to implement the system for land ports of entry. Officials also indicated that there is some concern regarding different mandates in the 1996 Act. For example, while the act specifies a biometric for the border-crossing card, the provision mandating the automated entry/exit system makes no mention of using a biometric.
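To picture the record matching that Section 110 calls for, the following is a minimal sketch, not INS' actual system: pair each departure record with the corresponding arrival record, and flag nonimmigrant arrivals whose authorized stay has expired with no matching departure. The data layout and names are invented for illustration:

```python
# Hedged sketch of the Section 110 matching logic: arrivals keyed by a
# record id, each with an authorized-until date; departures keyed by the
# same id. An id with no departure past its authorized date is an overstay.
# The record layout and all names are illustrative assumptions.
from datetime import date

arrivals = {  # record id -> (arrival date, authorized-until date)
    "A1": (date(1997, 1, 10), date(1997, 7, 10)),
    "A2": (date(1997, 2, 1), date(1997, 5, 1)),
}
departures = {"A1": date(1997, 6, 30)}  # record id -> departure date

def overstays(today):
    """Return arrival record ids with no matching departure past the authorized date."""
    return [rid for rid, (_, until) in arrivals.items()
            if rid not in departures and today > until]

print(overstays(date(1997, 9, 1)))  # -> ['A2']
```

The hard part the officials describe is not this matching step but collecting the departure records at all, especially for land crossers, which is why the land-border questions above remained open.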
INS Border Patrol and Inspections Staffing and Selected Workload Data

According to INS officials, the number of on-board agents as of September 30, 1993, is considered the fiscal year 1993 authorized Border Patrol staffing level for comparison purposes.

Table III.2: Apprehensions by Southwest Border-Patrol Sector, Fiscal Year 1992 Through First Half of Fiscal Year 1997
Table III.4: Authorized Inspector Positions by Land Ports of Entry and INS District Offices Along the Southwest Border, Fiscal Years 1994 - 1997
Table III.5: INS Inspections, Selected Workload and Enforcement Data by Southwest Border District Offices, Fiscal Year 1994 Through First Half of Fiscal Year 1997
Note: Estimate based on periodic sampling of the number of occupants per vehicle entering the port of entry.

Expert Panel Participants

The following experts in research on illegal immigration issues participated in a panel discussion held at GAO in Washington, D.C., on March 28, 1997. They also reviewed portions of the draft report.

Deborah Cobb-Clark, Professor of Economics, Research School of Social Sciences, The Australian National University, Canberra, Australia
Sherrie Kossoudji, Professor of Economics, The School of Social Work, University of Michigan, Ann Arbor, MI
B. Lindsay Lowell, Director of Policy Research, U.S. Commission on Immigration Reform, Washington, D.C.

Indicators for Measuring the Effectiveness of the Strategy to Deter Illegal Entry Along the Southwest Border

A. Flow across the Southwest border

Border Patrol apprehensions. 1. Displacement of flow in accordance with Border Patrol strategy. 2. Increasing apprehensions at first; eventual reductions as control is gained.
Detection of illegal aliens crossing the border. Fewer aliens detected. Fewer "gotaways."
Number of "gotaways." 1. Sensor hits. 2. Sign cuts. 3. Infrared and low-light television cameras.
Recidivism (repeat apprehensions of aliens). INS IDENT system.
Attempted reentries will decrease over time.
Probability of apprehension. An increase in the probability of apprehension. Difficulty in crossing border. 1. IDENT recidivism data. 2. Mexican Migration Project (MMP) provides annual data from Mexican migrants on the number of times they were captured on trips to the U.S. and places where they were apprehended. 3. Colegio de la Frontera Norte (COLEF) surveys of border crossers (Zapata Canyon Project and Survey of Migration to the Northern Border). 4. Focus group interviews in Mexico (Binational Study on Migration).
Is indicator currently being measured? Are there plans to measure indicator on ongoing basis?
1. Measures event, not individuals. Currently collected along the entire southwest border. 2. Unclear relationship to actual flow of illegal aliens. 3. No information on those aliens who elude capture.
No systematic methodology for measuring entries and gotaways; likely differs across sectors. Yes, in some places. Measures event, not individuals.
1. Limited IDENT implementation along southwest border. Yes, but not completely implemented along southwest border. 2. No usable data preceding AG strategy, precluding a time-series analysis of its effectiveness. 3. Not yet clear how recidivism data will be used to model the flow. Several possible analytic models can be tested.
Household surveys conducted in Mexico (e.g., the MMP) and surveys of migrants at points of entry and exit between the U.S. and Mexico (e.g., the Survey of Migration to the Northern Border) sample different subpopulations of migrants, who may differ in their likelihood of apprehension. MMP data are only collected annually, and retrospective accounts of apprehensions may be biased. There is no consensus on the methodology for estimating probability of apprehension.
Estimated number of illegal entry attempts at U.S. ports of entry (POE). INS INTEX system — random checks of travelers at POEs.
An increase in the number of aliens attempting to enter illegally at POEs.
1. Smuggling usage. 2. Smuggling costs. 3. Tactics of smugglers. 1. INS intelligence data as well as intelligence data from other federal and state agencies. 2. MMP data. 3. COLEF data. 4. Binational study.
An increase in smuggling fees and sophistication of smugglers. Smugglers may try other means of delivering aliens to destinations (e.g., using vans, small trucks, and tractor trailers).
Number of people in hotels and shelters in Mexican border cities. INS intelligence information. An increase in the number of people in hotels and shelters in Mexican border cities. Periodic surveys by interested organizations. Binational study site visits.
Crime in U.S. border cities. Local crime data. Less crime in U.S. border cities.
The causes of illegal POE entries may be difficult to disentangle. For example, increased illegal attempts may reflect more effective Border Patrol strategies between the ports, an increase in the flow of illegal migrants, or simply a more effective inspection strategy at the POE. INTEX methodology is still being tested.
Changes in smuggling fees and tactics may indicate that border crossing is more difficult. However, it does not necessarily indicate that migrants are less successful at entering. Not clear how systematically data are collected across the southwest border.
Number of people in hotels and shelters may be affected by economic opportunities in border cities.
Crime may be dropping overall, regardless of effects of border strategy. Need to control for such effects using time-series analyses. Not clear how systematically data are collected across the southwest border. However, drops in specific types of crime (e.g., property crime, car thefts) may be best indicators of drops in crimes committed by aliens.
Use of public services in U.S. border cities.
State and local sources. A decrease in public service usage in border cities. Hospitals, local school districts, local welfare departments.
Deaths of aliens attempting entry. County death records. Death records in Mexico. University of Houston Center for Immigration Research reports (1996, 1997). Depends on how enforcement resources are allocated. In some cases, deaths may be reduced or prevented (by fencing along highways, for example). In other cases, deaths may increase (as enforcement in urban areas forces aliens to attempt mountain or desert crossings).
Assaults against INS agents. INS statistics. A higher incidence of violence against INS agents as crossing efforts of illegal aliens are frustrated.
Abuses of aliens by INS officers. DOJ. Advocacy groups. May vary, depending on type of enforcement effort.
Need to control for long-term trends in these indicators, using time-series analyses. Not clear how systematically data are collected across the southwest border.
Can be difficult to identify if deceased is an undocumented migrant. Data not collected on systematic basis along southwest border. Problems of undercount—some proportion of deaths in remote areas will go undiscovered.
Violence may increase if INS is able to frustrate drug traffickers. But drug trafficking may be only somewhat related to alien smuggling. Frustrating drug traffickers does not necessarily imply success in deterring illegal alien traffic.
Potential for underreporting of abuse incidents by victims.

B. Flow to Southwest border (deterrence)

Probability of taking a first illegal trip to the U.S. MMP surveys of migrant communities in Mexico. Decreasing probability of first-timers migrating without documents.
Probability of taking additional illegal trips. Results of focus groups conducted in Mexico (Binational Study on Migration).
Decreasing probability that experienced migrants will take additional trips.
Trends in applications for temporary visas and border-crossing cards. INS. U.S. Department of State. An increase in applications for border-crossing cards and nonimmigrant visas.
Departures and arrivals at airports in Mexican cities along the Southwest border. Where available from Border Patrol officials in each sector. Roughly equal numbers of departures and arrivals in border areas where the Border Patrol has successfully deterred entry (more arrivals than departures in areas where the border has not yet been secured).
Changes in traffic through Mexico of illegal aliens from countries other than Mexico. Mexican intelligence sources. Reduced traffic of aliens from other countries through Mexico or longer durations of stays in Mexico by these aliens.
The probability of taking a first or additional trip is influenced by a number of factors besides INS efforts at the border, including economic conditions in Mexico and the U.S., social networks in the U.S., and availability of legal modes of entry. MMP data are not representative of the immigrant population in the U.S. and may underrepresent new sending areas in southern Mexico. Retrospective survey design. Data only collected annually.
Application rates may simply be indicators of greater economic difficulties in Mexico and other sending countries, rather than indicators of deterrence of migrants from attempting illegal modes of entry. The 1996 Act mandates that border-crossing cards be replaced by biometric border-crossing cards within 3 years. This may make trends difficult to interpret.
Data are not consistently available. Not determined.
Data do not distinguish trips made to Northern Mexico in order to cross the border from trips made in order to work (in maquiladora industries, for example) or settle.
Depends on reliability and generalizability of intelligence information. Not determined.

C. Number of undocumented aliens in the U.S.

Estimates of the stock of illegal residents in the United States. Published analyses based on INS and Census Bureau data. An eventual reduction in the overall stock of illegal residents in the U.S.
Published MMP analyses. A decreased flow of illegal sojourners to the U.S. COLEF Survey of Migration to the Northern Border.
Estimates of overstays of legal visas. INS analyses; Nonimmigrant Overstay Method (matches record of arrival forms with departure forms). An increase in the number of aliens, specifically Mexican nationals, who try to enter legally and overstay, in an effort to get around border enforcement. By fiscal year 1999, INS is required to have in place a system to track all entries and exits from the U.S.
Labor supply and wages in sectors of the economy where illegal aliens tend to work. National Agricultural Workers Survey (NAWS). A labor shortage or higher wages in an industry that has traditionally hired undocumented workers.
Number of undocumented aliens found in INS employer inspections. INS. A decrease in the number of undocumented workers who are recent arrivals to the U.S. found in INS employer inspections.
Current surveys generally measure only long-term residents (those who have been in the U.S. for at least 1 year). Yes, periodically. Estimates differ based on assumptions made about undercount and the emigration of illegal residents. Difficult to determine the specific causes of changes in the stock (e.g., better border enforcement, better worksite enforcement, shrinking potential migrant populations, etc.). Unrepresentativeness of samples. Successful deterrence may have perverse results—with aliens spending longer time in the U.S. on any one illegal visit.
Fewer border crossings, but an overall increase in illegal aliens in the U.S. GAO has reported on some problems in estimating visa overstays; estimates are likely to be more accurate for those who enter and leave by air. INS has not had the capability to produce reliable estimates since 1992. Growing proportions in agriculture may reflect replacement of special agricultural workers legalized after passage of the Immigration Reform and Control Act. NAWS data are not generalizable to sectors other than agriculture. The number of undocumented workers discovered is also a function of the amount of emphasis by INS on worksite enforcement.

Description of Selected Immigration Research Projects

Several ongoing and recently completed research projects have collected data on the flow of undocumented migrants across the border, as well as information on migrants' background characteristics, labor market experience in country of origin and the United States, and potential migrants' intentions to take undocumented trips to the United States.

The Mexican Migration Project

The Mexican Migration Project was funded by the National Institute of Child Health and Human Development to create a comprehensive binational dataset on Mexican migration to the United States. The project is directed by Jorge Durand of the University of Guadalajara (Mexico) and Douglas S. Massey of the University of Pennsylvania. Two to five Mexican communities were surveyed each year during December and January of successive years using simple random sampling methods. The sample size was generally 200 households unless the community was under 500 residents, in which case a smaller number of households was interviewed. If initial fieldwork indicated that U.S. migrants returned home in large numbers during months other than December or January, interviewers returned to the community during those months to gather a portion of the 200 interviews.
These representative community surveys yielded information on where migrants went in the United States, and during the months of July and August interviewers traveled to those U.S. destinations to gather nonrandom samples of 10 to 20 out-migrant households from each community. The U.S.-based samples thus contain migrants who have established their households in the United States. The communities were chosen to provide a range of different sizes, regions, ethnic compositions, and economic bases. The sample thus includes isolated rural towns, large farming communities, small cities, and very large metropolitan areas; it covers communities in the states of Guanajuato, Michoacan, Jalisco, Nayarit, Zacatecas, Guerrero, Colima, and San Luis Potosi; and it embraces communities that specialize in mining, fishing, farming, and manufacturing, as well as some that feature very diversified economies. The study’s questionnaire followed the logic of an ethnosurvey, which blends qualitative and quantitative data-gathering techniques. A semi-structured instrument required that identical information be obtained for each person, but question wording and ordering were not fixed. The precise phrasing and timing of each query was left to the judgment of the interviewer, depending on circumstances. The design thereby combined features of ethnography and standard survey research. The ethnosurvey questionnaire proceeded in three phases, with the household head serving as the principal respondent for all persons in the sample. In the first phase, the interviewer gathered basic social and demographic information on the head, spouse, resident and nonresident children, and other household members, including age, birthplace, marital status, education, and occupation. The interviewer then asked which of those enumerated had ever been to the United States. For those individuals with migrant experience, the interviewer recorded the total number of U.S. 
trips as well as information about the first and most recent U.S. trips, including the year, duration, destination, U.S. occupation, legal status, and hourly wage. This exercise was then repeated for first and most recent migrations within Mexico. The second phase of the ethnosurvey questionnaire compiled a year-by-year life history for all household heads, including a childbearing history, a property history, a housing history, a business history, and a labor history. The third and final phase of the questionnaire gathered information about the household head's experiences on his or her most recent trip to the United States, including the mode of border-crossing, the kind and number of accompanying relatives, the kind and number of relatives already present in the United States, the number of social ties that had been formed with U.S. citizens, English language ability, job characteristics, and use of U.S. social services. Data at the community and municipio (county) levels were also collected, using several sources. First, since 1990, interviewers have used a special community questionnaire to collect and compile data from various sources in each community. Next, data were compiled from the Anuario Estadistico of 1993, published in Mexico by INEGI (Instituto Nacional de Estadistica, Geografia e Informatica). Finally, data were compiled from the published volumes of the Mexican censuses for 1930, 1940, 1950, 1960, 1970, 1980, and 1990. These decennial census data were then interpolated for years between the censuses and extrapolated for years after 1990.

Strengths

Strengths include the large number and diversity of households and communities represented in the samples and the breadth of information collected. The retrospective nature of the survey allows for analysis of migration flows over a long period of time.
The survey is a representative sample of communities in states in Western Mexico that have been the traditional sending areas of Mexico; it captures both migrants who are living in the United States and persons who have been in the United States but live in Mexico.

Limitations

One limitation is the representativeness of the sample. The Mexican sample underrepresents those states in Mexico that have only recently become sources of U.S. migration, including states in southern Mexico and states bordering the United States. The U.S. subsample may undersample people with little connection to their community of origin, or those who have moved to nontraditional locations in the United States. In addition, information on other household members collected from the head of the household may be inaccurate, and retrospective data on migration experience may be biased in that respondents may attribute events to the incorrect time period or recall more recent events more accurately than past events.

The Zapata Canyon Project

Since September 1987, the ongoing Zapata Canyon Project has conducted personal interviews with randomly selected undocumented immigrants preparing to cross the border between Mexico and the United States. Interviews are conducted 3 days per week—usually weekends—at habitual crossing points of undocumented immigrants in the Mexican border cities of Tijuana, Mexicali, Ciudad Juarez, Nuevo Laredo, and Matamoros. The survey employs a short, standardized questionnaire specifically designed not to take too much time from someone who is in the process of entering the United States illegally. The project is directed by Jorge Bustamante of El Colegio de la Frontera Norte (COLEF), in collaboration with Jorge Santibanez and Rodolfo Corona.
The surveys include information on the demographic characteristics of migrants, prior labor experience in Mexico and the United States, location of border crossing, cost of the trip to the border, whether a smuggler was used for crossing, how many times migrants were apprehended by the INS, and reasons for making the trip.

Strengths

The survey provides extensive time-series information on illegal border crossing and can be used for examining changes in sociodemographic characteristics of border crossers.

Limitations

The Zapata Canyon data may not come from a representative sample of undocumented migrants. Thus, it is not possible to estimate the volume of the undocumented flow from Mexico. The survey does not focus on the extent of return migration from the United States to Mexico. A limited number of questions can be asked of migrants who are in a hurry to make a clandestine crossing into the United States.

Survey of Migration to the Northern Border

This survey, also known as EMIF (Encuesta sobre Migracion en la Frontera Norte de Mexico), was funded by the World Bank through the Mexican Ministry of Labor and the Ministry of the Interior's National Council for Population. A team of researchers from El Colegio de la Frontera Norte, directed by Jorge Bustamante, designed and administered the survey. The survey was designed to produce direct estimates of the volume of documented and undocumented migration flows from Mexico to the United States as well as return migration from the United States to Mexico. The survey methodology derives from the theoretical concept of circular migration and is based on an adaptation of what biological statisticians call the "sampling of mobile populations." The team identified the "migratory routes" through which circular migration occurred and defined empirical points of observation where migration could be viewed, including bus stations, airports, railroad stations, and customs and immigration inspection places along highways.
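The basic scaling idea behind such "sampling of mobile populations" can be sketched as follows. This is a simplified illustration, not EMIF's actual estimator, and the counts and time windows are invented:

```python
# Hedged sketch of the sampling-of-mobile-populations idea: count migrants
# passing an observation point (a bus station, say) during sampled time
# windows, then scale the observed rate up to the full period of interest.
# EMIF's real design stratifies by route and point; this shows only the core
# scaling step, with invented numbers.

def estimate_flow(counts_per_window, window_hours, total_hours):
    """Scale migrants counted in sampled windows to an estimate for the whole period."""
    sampled_hours = window_hours * len(counts_per_window)
    rate_per_hour = sum(counts_per_window) / sampled_hours
    return rate_per_hour * total_hours

# e.g., three 2-hour counts at one observation point, scaled to a 24-hour day
print(estimate_flow([12, 8, 10], window_hours=2, total_hours=24))  # -> 120.0
```

The accuracy of such an estimate depends on how representative the sampled windows are of the full period, which is why the survey made its counts systematically over specific periods of time.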
In these places, systematic counts of the number of migrants were made over specific periods of time. The survey was conducted continuously from March 28, 1993, to March 27, 1994; from December 14, 1994, to December 13, 1995; and from July 14, 1996, to July 13, 1997. The survey sampled four groups of migrants: (1) illegal migrants voluntarily deported from the United States by the INS; (2) illegal and legal migrants preparing to cross the border from Mexico to the United States; (3) Mexican nationals who were permanent residents of the United States and were returning to Mexico; and (4) permanent residents of Mexico who have been in the United States legally or illegally and were returning to Mexico. The survey includes questions on the demographic characteristics of migrants, prior labor experience in Mexico and the United States, prior legal and illegal border crossing experience (including number of previous crossings, whether documents were used to cross, number of times apprehended by INS, location of border crossings, whether smugglers were used for crossing, and how much was paid to them), and reasons for making the current trip. Strengths The survey provides information on legal and illegal flow in both directions—from Mexico to the United States, and from the United States to Mexico. Limitations The survey focuses only on labor migrants; nonlabor migrants are not interviewed. Data collection has recently been completed; however, the study directors have applied for additional funding to continue the survey. The United States-Mexico Binational Study on Migration After a meeting of the Migration and Consular Affairs Group of the Mexican-United States Binational Commission in March 1995, the governments of Mexico and the United States decided to undertake a joint study of migration between the two countries. The Binational Study was funded by both the United States and Mexican governments in conjunction with private sector funding in both countries. 
The main objective of the Binational Study was to contribute to a better understanding and appreciation of the nature, dimensions, and consequences of migration from Mexico to the United States. National coordinators were designated for each country, with the Commission on Immigration Reform coordinating the work of U.S. researchers. The Binational Study was released on September 2, 1997. The research was conducted by a team of 20 independent researchers, 10 from each country, who reviewed existing research, generated new data and analyses, and undertook site visits and consulted with migrants and local residents to gain a joint understanding of the issues raised in this study. The researchers participated in five teams studying different aspects of migration, including (1) the size of the legal and illegal Mexican-born population in the United States, and the size of the migration streams crossing the border; (2) demographic, educational, and income characteristics of Mexican-born migrants; (3) factors influencing migration from Mexico; (4) economic and social effects of migration on both the United States and Mexico; and (5) societal responses to migration in both the United States and Mexico, including legislation and policy responses, court decisions, advocacy from the private sector, and public opinion. Strengths The study analyzed a broad range of issues relating to illegal and legal immigration. Its use of a combination of data from both the United States and Mexico enhanced understanding of Mexican migration and its impacts on both countries. The study made a number of specific recommendations for needed research. Limitations Data limitations often constrained the study teams’ abilities to draw firm conclusions on many issues. Data collection and analysis have been completed, and there are no mechanisms in place for follow-up. Major Contributors to This Report General Government Division, Washington, D.C. Office of the General Counsel, Washington, D.C.
Los Angeles Field Office Bibliography Bean, Frank D., Roland Chanove, Robert G. Cushing, Rodolfo de la Garza, Gary P. Freeman, Charles W. Haynes, and David Spener. Illegal Mexican Migration and the United States/Mexico Border: The Effects of Operation Hold the Line on El Paso/Juarez. Austin, TX: Population Research Center, University of Texas at Austin and U.S. Commission on Immigration Reform, July 1994. Binational Study on Migration. Binational Study: Migration Between Mexico & The United States. Mexico City and Washington, D.C.: Mexican Foreign Ministry and U.S. Commission on Immigration Reform, September 1997. Bustamante, Jorge A. “Mexico-United States Labor Migration: Some Theoretical and Methodological Innovations and Research Findings.” Paper prepared for the 23rd General Population Conference of the International Union for the Scientific Study of Population (unpublished), October 1997. Bustamante, J.A., Jorge Santibanez, and Rodolfo Corona. “Mexico-United States Labor Migration Flows: Some Theoretical and Methodological Innovations and Research Findings,” Migration and Immigrants: Research and Policies. Mexico’s Report for the Continuous Reporting System on Migration (SOPEMI) of the Organization for Economic Cooperation and Development (OECD), November 1996. Chavez, Leo R., Estevan T. Flores, and Marta Lopez-Garza. “Here Today, Gone Tomorrow? Undocumented Settlers and Immigration Reform.” Human Organization, Vol. XLIX (1990), pp. 193-205. Cornelius, Wayne A., and J.A. Bustamante (eds.). Mexican Migration to the United States: Process, Consequences, and Policy Options. La Jolla, CA: Center for U.S. - Mexican Studies, University of California, San Diego, 1990. Cornelius, W.A. “Impacts of the 1986 U.S. Immigration Law on Emigration from Rural Mexican Sending Communities.” Population and Development Review, Vol. XV, No. 4, (1989). Crane, Keith, Beth J. Asch, Joanna Z. Heilbrunn, and Danielle C. Cullinane. 
The Effect of Employer Sanctions on the Flow of Undocumented Immigrants to the United States. Santa Monica, CA, and Washington, D.C.: The RAND Corporation and The Urban Institute, April 1990. Donato, Katharine M., Jorge Durand, and Douglas S. Massey. “Stemming the Tide? Assessing the Deterrent Effects of the Immigration Reform and Control Act.” Demography, Vol. XXIX, No. 2 (May 1992), pp. 139-157. Durand, Jorge and Douglas S. Massey. “Mexican Migration to the United States: A Critical Review.” Latin American Research Review, Vol. XXVII (1992), pp. 3-42. Eschbach, Karl, Jacqueline Hagan, Nestor Rodriguez, Ruben Hernandez-Leon, and Stanley Bailey. “Death at the Border.” Houston, TX: University of Houston Center for Immigration Research (Working Paper #97-2), June 1997. Espenshade, Thomas J. “Using INS Border Apprehension Data to Measure the Flow of Undocumented Migrants Crossing the U.S.-Mexico Frontier.” International Migration Review, Vol. XXIX, No. 2 (Summer 1995), pp. 545-565. Espenshade, Thomas J., and Dolores Acevedo. “Migrant Cohort Size, Enforcement Effort, and the Apprehension of Undocumented Aliens.” Population Research and Policy Review, Vol. XIV (1995), pp. 145-172. Espenshade, Thomas J. “Does the Threat of Border Apprehension Deter Undocumented U.S. Immigration?” Population and Development Review, Vol. XX, No. 4 (December 1994), pp. 871-891. Espenshade, Thomas J. “Policy Influences on Undocumented Migration to the United States.” Proceedings of the American Philosophical Society, Vol. CXXXVI, No. 2 (1992), pp. 188-207. Espenshade, Thomas J., Michael J. White, and Frank D. Bean. “Patterns of Recent Illegal Migration to the United States.” Future Demographic Trends in Europe and North America: What Can We Assume Today? W. Lutz (ed.). London: Academic Press, 1991, pp. 301-336. Heyman, Josiah McC. “Putting Power in the Anthropology of Bureaucracy: The Immigration and Naturalization Service at the Mexico-United States Border.” Current Anthropology, Vol. 
XXXVI, No. 2 (April 1995), pp. 261-287. Johnson, Hans. Undocumented Immigration to California: 1980-1993. San Francisco: Public Policy Institute of California, September 1996. Kossoudji, Sherrie A. “Playing Cat and Mouse at the U.S.-Mexican Border.” Demography, Vol. XXIX, No. 2 (May 1992), pp. 159-180. Lowell, B. Lindsay (ed.). Temporary Migrants in the United States. Washington, D.C.: U.S. Commission on Immigration Reform, 1996. Lowell, B. Lindsay and Zhongren Jing. “Unauthorized Workers and Immigration Reform: What Can We Ascertain from Employers?” International Migration Review, Vol. XXVIII, No. 3 (1994), pp. 427-429. Martin, Philip L. “Good Intentions Gone Awry: IRCA and U.S. Agriculture.” ANNALS AAPSS, Vol. DXXXIV (July 1994), pp. 44-57. Massey, Douglas S., and Kristin E. Espinosa. “What’s Driving Mexico-U.S. Migration? A Theoretical, Empirical and Policy Analysis.” American Journal of Sociology, Vol. CII, No. 4 (January 1997), pp. 939-999. Massey, Douglas S., and Audrey Singer. “New Estimates of Undocumented Mexican Migration and the Probability of Apprehension.” Demography, Vol. XXXII, No. 2 (May 1995), pp. 203-213. Massey, Douglas S., and Felipe Garcia Espana. “The Social Process of International Migration.” Science, Vol. CCXXXVII (1987), pp. 733-738. Mines, Richard, Susan Gabbard, and Anne Steirman. A Profile of U.S. Farm Workers: Demographics, Household Composition, Income and Use of Services. Washington, D.C.: Prepared for the U.S. Commission on Immigration Reform, April 1997. Reyes, Belinda I. Dynamics of Immigration: Return Migration to Western Mexico. San Francisco: Public Policy Institute of California, January 1997. Singer, Audrey, and Douglas S. Massey. “The Social Process of Undocumented Border Crossing.” Paper presented at the Meetings of The Latin American Studies Association, Guadalajara, Mexico (unpublished), April 19, 1997. U.S. Commission on Agricultural Workers. Report of the Commission on Agricultural Workers. 
Washington, D.C.: Government Printing Office, 1992. U.S. Commission on Immigration Reform. U.S. Immigration Policy: Restoring Credibility. Washington, D.C., September 1994. U.S. Department of Justice Office of the Inspector General. Immigration and Naturalization Service Monitoring of Nonimmigrant Overstays. Washington, D.C.: U.S. Department of Justice, September 1997. U.S. Department of Labor. U.S. Farmworkers in the Post-IRCA Period. Washington, D.C.: U.S. Department of Labor Research Report No. 4, March 1993. U.S. Immigration and Naturalization Service. Operation Gatekeeper: Landmark Progress. Washington, D.C., October 1995. U.S. Immigration and Naturalization Service. Operation Gatekeeper: Two Years of Progress. Washington, D.C., October 1996. Warren, Robert. Estimates of the Unauthorized Immigrant Population Residing in the United States, by Country of Origin and State of Residence: October 1992. Washington, D.C.: U.S. Immigration and Naturalization Service (unpublished), 1994. Related GAO Products Border Patrol: Staffing and Enforcement Activities (GAO/GGD-96-65, Mar. 11, 1996). Illegal Immigration: INS Overstay Estimation Methods Need Improvement (GAO/PEMD-95-20, Sept. 26, 1995). Border Control: Revised Strategy Is Showing Some Positive Results (GAO/T-GGD-95-92, Mar. 10, 1995). Border Control: Revised Strategy Is Showing Some Positive Results (GAO/GGD-95-30, Dec. 29, 1994). Border Management: Dual Management Structure at Entry Ports Should End (GAO/T-GGD-94-34, Dec. 10, 1993). Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25, Aug. 5, 1993). Customs Service and INS: Dual Management Structure for Border Inspections Should Be Ended (GAO/GGD-93-111, June 30, 1993). Border Patrol: Southwest Border Enforcement Affected by Mission Expansion and Budget (GAO/T-GGD-92-66, Aug. 5, 1992). Border Patrol: Southwest Border Enforcement Affected by Mission Expansion and Budget (GAO/GGD-91-72BR, Mar. 28, 1991). 
Pursuant to a legislative requirement, GAO reviewed the Attorney General’s strategy to deter illegal entry into the United States along the southwest border, focusing on: (1) what the strategy calls for; (2) actions taken to implement the strategy along the southwest border; (3) whether available data confirm the strategy’s hypotheses, with respect to expected initial results from the strategy’s implementation along the southwest border; and (4) the types of indicators that would be needed to evaluate the strategy.
GAO noted that: (1) to carry out the priority to strengthen the detection of and deterrence to illegal entry along the border, the Attorney General's strategy called for the Border Patrol to: (a) allocate additional Border Patrol resources in a four-phased approach starting first with the areas of highest known illegal activity; (b) make maximum use of physical barriers; (c) increase the proportion of time Border Patrol agents spend on border enforcement activities; and (d) identify the appropriate mix of technology, equipment, and personnel needed for the Border Patrol; (2) since the strategy was issued in 1994, the Immigration and Naturalization Service (INS) has made progress in implementing some, but not all, aspects of the strategy; (3) at the southwest border land ports of entry, INS has added about 800 inspector positions since fiscal year 1994, increasing its on-board strength to about 1,300; (4) INS and other data indicate that some of the initial results of the strategy's implementation along the southwest border correspond with the expected results stated in the strategy; (5) however, sufficient data were not available for GAO to determine whether other expected results have occurred; (6) INS data indicated that, as a percentage of total apprehensions along the southwest border, apprehensions of illegal aliens have decreased in the two sectors that in 1993 accounted for the most apprehensions and received the first influx of new resources--San Diego and El Paso; (7) the Attorney General's strategy for deterring illegal entry across the southwest border envisions three distinct but related results: (a) fewer aliens will be able to cross the border illegally; (b) fewer aliens will try to illegally immigrate into the United States; and (c) the number of illegal aliens in the United States will decrease; (8) evaluating the overall effectiveness of the strategy for deterring illegal entry would require a formal, rigorous plan for: (a) collecting and analyzing 
consistent and reliable data on several different indicators related to the three expected results from the strategy; and (b) examining their interrelationships; and (9) although developing a formal evaluation plan and implementing a rigorous and comprehensive evaluation of the strategy may prove to be both difficult and potentially costly, without such an evaluation the Attorney General and Congress will have no way of knowing whether the billions of dollars invested in reducing illegal immigration have produced the intended results.
Background FAA’s ATC network is an enormous, complex collection of interrelated systems, including navigation, surveillance, weather, and automated information processing and display systems that reside at, or are associated with, hundreds of ATC facilities. These systems and facilities are interconnected by complex communications networks that separately transmit both voice and digital data. As stated in our 1997 report on high-risk issues, while the use of interconnected systems promises significant benefits in improved government operations, it also increases vulnerability to anonymous intruders who may manipulate data to commit fraud, obtain sensitive information, or severely disrupt operations. Since this interconnectivity is expected to grow as systems are modernized to meet the projected increases in air traffic and to replace aging equipment, the ATC network will become even more vulnerable to such network-related threats. The threat to information systems is also growing because of the increasing availability of strategies and tools for launching planned attacks. For example, in May 1996 we reported that tests at the Department of Defense showed that Defense systems may have experienced as many as 250,000 attacks during 1995, about 65 percent of these succeeded in gaining access, and only about 4 percent were detected. Since intruders can use a variety of techniques to attack computer systems, it is essential that FAA’s approach to computer security be comprehensive and include (1) physical security of the facilities that house ATC systems (e.g., locks, guards, fences, and surveillance equipment), (2) information security of the ATC systems (e.g., safeguards incorporated into computer hardware and software), and (3) telecommunications security of the networks linking ATC systems and facilities (e.g., secure gateways, firewalls, and communication port protection devices). 
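The Defense figures cited above translate into striking absolute numbers. A minimal sketch of the arithmetic, assuming for illustration that the 4 percent detection figure applies to the attacks that succeeded in gaining access (the report does not specify):

```python
# Back-of-the-envelope arithmetic on the 1995 DoD attack figures cited in the text.
# Assumption (illustrative only): the ~4% detection rate applies to the attacks
# that succeeded in gaining access.
attacks = 250_000        # estimated attacks during 1995
success_rate = 0.65      # share that gained access
detection_rate = 0.04    # share that were detected

successful = attacks * success_rate
detected = successful * detection_rate

print(f"successful intrusions: {successful:,.0f}")   # 162,500
print(f"detected (of those):   {detected:,.0f}")     # 6,500
print(f"undetected intrusions: {successful - detected:,.0f}")
```

On these assumptions, well over 150,000 intrusions would have gone unnoticed in a single year, which is the scale of risk the interconnected ATC network must be protected against.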
For years, the need for federal agencies to protect sensitive and critical, but unclassified, federal data has been recognized in various laws, including the Privacy Act of 1974, the Computer Security Act of 1987, and the Paperwork Reduction Act of 1995, and was recently reemphasized in the Clinger-Cohen Act of 1996. The adequacy of controls over computerized data is also addressed indirectly by the Federal Managers’ Financial Integrity Act (FMFIA) of 1982 and the Chief Financial Officers Act of 1990. For example, FMFIA requires agency managers to evaluate their internal control systems annually and report to the President and the Congress any material weaknesses that could lead to fraud, waste, and abuse in government operations. In addition, a considerable body of federal guidance on information security has been developed by both the Office of Management and Budget (OMB) and the National Institute of Standards and Technology (NIST). Objectives, Scope, and Methodology The objectives of our review were to determine (1) whether FAA is effectively managing physical security at ATC facilities and systems security for its current operational systems, (2) whether FAA is effectively managing systems security for future ATC modernization systems, and (3) the effectiveness of FAA’s management structure and implementation of policy for computer security. 
To determine whether FAA is effectively managing physical security at ATC facilities, we reviewed FAA Order 1600.6C, Physical Security Management Program, to determine ATC facility security inspection and accreditation requirements; reviewed data from FAA’s Facility Inspection Reporting System (FIRs) to determine the accreditation status of category I and II towers, terminal radar approach control (TRACON) facilities, and air route traffic control centers (en route centers) and their last inspection date; verified the accuracy of the FIRs accreditation data with each of the nine regional FIRs program managers by requesting accreditation reports for each facility that FIRs reported as being accredited; for those facilities that were not accredited, requested dates of their initial comprehensive physical security inspection and follow-up inspections from each of the nine regional FIRs program managers to determine why ATC facilities were not accredited; verified the initial and follow-up inspection dates by requesting and reviewing documentation for each inspection conducted from April 16, 1993, to July 31, 1997, and then provided our analyses to Office of Civil Aviation Security Operations officials, who in turn verified them with each region; reviewed the Department of Justice’s June 28, 1995, report, Vulnerability Assessment of Federal Facilities, to identify new physical security requirements for federal facilities; reviewed physical security assessments for three locations to determine FAA’s ATC compliance with Department of Justice blast standards and to identify additional physical security weaknesses at key ATC facilities; reviewed the Facility Security Risk Management Mission Need Statement for Staffed Facilities, Number 316, June 23, 1997, to determine physical security deficiencies and FAA’s plans to improve physical security; and interviewed officials from the Offices of Civil Aviation Security, Operations and Policy and Planning, and Airways Facility
Services to determine physical security requirements, to determine whether FAA is in compliance with 1600.6C, to identify reasons for noncompliance, and to identify who develops, implements, and enforces ATC physical security policy. To determine whether FAA is effectively managing systems security for its current operational systems, we reviewed federal computer security requirements specified in the Computer Security Act of 1987 (Public Law 100-235); Paperwork Reduction Act of 1995 (Public Law 104-13), as amended; OMB Circular A-130, appendix III, “Security of Federal Automated Information Resources;” the 1996 Clinger-Cohen Act; and An Introduction to Computer Security: The NIST Handbook to identify federal security requirements; reviewed FAA Order 1600.54B, FAA Automated Information Systems Security Handbook, and FAA Order 1600.66, Telecommunications and Information Systems Security Policy, to determine ATC system risk assessment, certification, and accreditation requirements; reviewed Volpe National Transportation Systems Center NAS AIS Security Review, October 1, 1996, to determine how many ATC operational systems were assessed, certified, and accredited as of October 1, 1996; requested and reviewed accreditation reports, security certification reports, risk assessments, contingency plans, and disaster recovery plans for six operational ATC systems; reviewed the White House Commission on Aviation Safety and Security’s final report to the President, February 12, 1997, to determine recommendations to improve ATC computer security; reviewed the Federal Aviation Administration Air to Ground Communications Vulnerabilities Assessment, June 1993, to determine ATC communication systems vulnerabilities; reviewed the Report to Congress, Air Traffic Control Data and Communications Vulnerabilities and Security, Report of the Federal Aviation Administration Pursuant to House-Senate Report Accompanying the Department of Transportation and Related Agencies Appropriations 
Act, 102-639, June 1, 1993, to determine what ATC security vulnerabilities FAA disclosed to the Congress in 1993; interviewed the telecommunications integrated product team to determine what operational communication systems have been assessed, certified, and accredited and reviewed the team’s 1994 and 1997 strategic plans to determine communication system risks and planned security improvement initiatives; interviewed the Director of Spectrum Policy and Management to determine the extent to which intruders are accessing ATC frequencies; interviewed FAA’s Designated Approving Authority (DAA) to determine FAA’s policy for accrediting ATC systems; and interviewed the Office of Civil Aviation Security Operations officials and Airways Facilities Services officials to determine who develops, implements, and enforces ATC operational systems security policy and to determine whether an incident reporting and handling capability exists. To determine whether FAA is effectively managing systems security for future ATC modernization systems, we requested and reviewed risk assessments and acquisition specifications for six ATC systems that are being developed to determine if security requirements based on detailed assessments existed; interviewed three integrated product teams (IPT) to determine what security policy/guidance each follows in developing ATC systems; reviewed the NAS Information Security Mission Need Statement, April 22, 1997, to determine information security deficiencies, future system vulnerabilities, and FAA’s plans to improve information security; interviewed the NAS Information Security (NIS) group to determine its plans to improve ATC information security and reviewed its NAS Information Security Action Plan; and reviewed the President’s Commission on Critical Infrastructure Protection’s (PCCIP) final report, Critical Foundations, Protecting America’s Infrastructures, October 1997, and its supplemental report, Vulnerability Assessment of the FAA National
Airspace Systems (NAS) Architecture, October 1997, to determine future ATC systems security vulnerabilities. To determine the effectiveness of FAA’s management structure and implementation of policy for computer security, we reviewed FAA Order 1600.6C, Physical Security Management Program (dated April 1993), Order 1600.54B, FAA Automated Information Systems Security Handbook (dated February 1989), and Order 1600.66, Telecommunications and Information Systems Security Policy (dated July 1994), to determine what organizations are assigned responsibility for developing, implementing, and enforcing ATC computer security policy and interviewed officials from the Offices of Civil Aviation Security, Air Traffic Services, and Research and Acquisitions to determine what organizations are responsible for developing, implementing, and enforcing ATC computer security policy. In addition, we interviewed the Associate Administrators for Civil Aviation Security and for Research and Acquisitions and the Director of Airway Facilities under the Associate Administrator for Air Traffic Services to determine why ATC computer security policies have not been adequately implemented and enforced. We performed our work at FAA headquarters in Washington, D.C., from April 1997 through January 1998 in accordance with generally accepted government auditing standards. ATC Physical Security Management and Controls Are Ineffective ATC systems used to control aircraft reside at, or are associated with, a variety of ATC facilities including towers, TRACONs, and en route centers. FAA policy, dated April 1993, required that these facilities be inspected by April 1995 and that annual or triennial follow-up inspections be conducted depending on the type of facility to determine the status of physical security at each facility.
These inspections determine whether the facility meets the physical security standards established in FAA policy and are the basis for accrediting ATC facilities (i.e., concluding that they are secure). FAA is not effectively managing physical security at ATC facilities. Known physical security weaknesses exist at many ATC facilities. For example, an inspection of a facility that controls aircraft disclosed 26 physical security findings including (1) fire protection systems that failed to meet minimum detection and suppression standards and (2) service contract employees that were given unrestricted access to sensitive areas without having appropriate background investigations. FAA recently confirmed its physical security weaknesses when it performed detailed assessments of several key ATC facilities following the Oklahoma City bombing to determine physical security risks and the associated security measures and costs required to reduce these risks to an acceptable level. For example, an assessment of a facility that controls aircraft concluded that access control procedures are weak to nonexistent and that the center is extremely vulnerable to criminal and terrorist attack. In addition, FAA is unaware of physical security weaknesses that may exist at other FAA facilities. For example, FAA has not assessed the physical security controls at 187 facilities since 1993 and therefore does not know how vulnerable they are. Until FAA inspects its remaining facilities, it does not know if they are secure and if the appropriate controls are in place to prevent loss or damage to FAA property, injury to FAA employees, or compromise of FAA’s capability to perform critical air safety functions. ATC Operational System Security Is Ineffective and Systems Are Vulnerable FAA policy requires that all ATC systems be certified and accredited. A risk assessment, which identifies and evaluates vulnerabilities, is a key requirement for certification and accreditation. 
We recently reported that leading information security organizations use risk assessments to identify and manage security risks confronting their organizations. FAA has not assessed, certified, or accredited most operational ATC systems. A review conducted for FAA’s Office of Civil Aviation Security in October 1996 concluded that FAA had not conducted risk assessments on 83 of 90, or over 90 percent, of all operational ATC systems. FAA officials told us that this assessment is an accurate depiction of the agency’s knowledge regarding operational systems security. As a result, FAA does not know how vulnerable these operational ATC systems are and consequently has no basis for determining what protective measures are required. Further, the review concluded that of the 7 systems assessed, only 3 resulted in certifications because 4 systems did not have the proper certification documentation. Accordingly, less than 4 percent of the 90 operational systems are certified. In addition, FAA has not assessed most ATC telecommunication systems. For example, FAA’s officials responsible for maintaining the nine FAA-owned and leased communication networks told us that only one has been assessed. Such poor security management exists despite the fact that FAA’s 1994 Telecommunications Strategic Plan stated that “vulnerabilities that can be exploited in aeronautical telecommunications potentially threaten property and public safety.” FAA’s 1997 Telecommunications Strategic Plan continues to identify security of telecommunication systems as an area in need of improvement. Office of Civil Aviation Security officials told us that they were not aware of a single ATC system that was accredited. We found similar results when we reviewed six operational systems to determine if they were assessed, certified, or accredited. Risk assessments had been conducted and certification reports written for only two of the systems, while none of the systems had been accredited. 
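The shortfall described above follows directly from the review’s own counts. A quick check of the percentages quoted in the text (the counts of 90 systems, 83 unassessed, and 3 certified come from the October 1996 review; the script itself is only illustrative):

```python
# Percentages implied by the October 1996 review of FAA's operational ATC systems.
total_systems = 90
without_risk_assessment = 83   # systems with no risk assessment conducted
certified = 3                  # systems with proper certification documentation

pct_unassessed = 100 * without_risk_assessment / total_systems
pct_certified = 100 * certified / total_systems

print(f"no risk assessment: {pct_unassessed:.1f}%")  # 92.2% -- "over 90 percent"
print(f"certified:          {pct_certified:.1f}%")   # 3.3% -- "less than 4 percent"
```

The arithmetic confirms both figures in the text: more than 9 in 10 operational systems had never been assessed, and barely 1 in 30 was certified.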
The Associate Administrator for Civil Aviation Security, who is responsible for accrediting systems, told us that FAA has decided to spend its limited funds not on securing currently operating systems, but rather on developing new systems and that FAA management is reluctant to acknowledge information security threats. FAA claims that because current ATC systems often utilize custom-built, 20-year-old equipment with special purpose operating systems, proprietary communication interfaces, and custom-built software, the possibilities for unauthorized access are limited. While these configurations may not be commonly understood by external hackers, one cannot conclude that old or obscure systems are, a priori, secure. In addition, the certification reports that FAA has done reveal operational systems vulnerabilities. Furthermore, archaic and proprietary features of the ATC system provide no protection from attack by disgruntled current and former employees who understand them. FAA Is Not Effectively Managing Security for New ATC Systems Essential computer security measures can be provided most effectively and cost efficiently if they are addressed during systems design. Retrofitting security features into an operational system is far more expensive and often less effective. Sound overall security guidance, including a security architecture, security concept of operations, and security standards, is needed to ensure that well formulated security requirements are included in specifications for all new ATC systems. FAA has no security architecture, security concept of operations, or security standards. As a result, implementation of security requirements across ATC development efforts is sporadic and ad hoc. Of the six current ATC system development efforts that we reviewed, four had security requirements, but only two of the four developed their security requirements based on a risk assessment. 
Without security requirements based on sound risk assessments, FAA lacks assurance that future ATC systems will be protected from attack. Further, with no security requirements specified during systems design, any attempts to retrofit security features later will be increasingly costly and technically challenging. An FAA June 1993 report to the Congress on information security states that because FAA lacks a security architecture to guide the development of ATC security measures, technical security requirements will be retrofitted or not implemented at all because the retrofit “could be so costly or technically complex that it would not be feasible.” In April 1996, the Associate Administrator for Research and Acquisitions established the National Airspace Systems (NAS) Information Security (NIS) group to develop, along with other security initiatives, the requisite security architecture, security concept of operations, and security standards. The NIS group has developed a mission need statement that asserts that “information security is the FAA mission area with the greatest need for policy, procedural, and technical improvement. Immediate action is called for, to develop and integrate information security into ATC systems throughout their life cycles.” FAA has estimated that it will cost about $183 million to improve ATC information security. The NIS group has developed an action plan that describes each of its proposed improvement activities. However, over 2 years later it has not developed detailed plans or schedules to accomplish these tasks. As FAA modernizes and increases system interconnectivity, ATC systems will become more vulnerable, placing even more importance on FAA’s ability to develop adequate security measures. These future vulnerabilities are well documented in FAA’s information security mission need statement and also in reports completed by the President’s Commission on Critical Infrastructure Protection. 
The President’s Commission summary report concluded that the future ATC architecture appears to have vulnerabilities and recommended that FAA act immediately to develop, establish, fund, and implement a comprehensive systems security program to protect the modernized ATC system from information-based and other disruptions, intrusions, and attacks. It further recommended that this program be guided by the detailed recommendations made in the NAS vulnerability assessment. FAA’s Management Structure Is Not Effectively Implementing and Enforcing Computer Security Policy FAA’s management structure and implementation of policy for computer security have been ineffective: the Office of Civil Aviation Security has not adequately enforced the security policies it has formulated; the Office of Air Traffic Services has not adequately implemented security policy for operational ATC systems; and the Office of Research and Acquisitions has not adequately implemented policy for new ATC systems development. For example, the Office of Civil Aviation Security has not enforced FAA policies that require the assessment of (1) physical security controls at all ATC facilities and (2) vulnerabilities, threats, and safeguards for all operational ATC computer systems; the Office of Air Traffic Services has not implemented FAA policies that require it to analyze all ATC systems for security vulnerabilities, threats, and safeguards; and the Office of Research and Acquisitions has not implemented the FAA policy that requires it to include, in specifications for all new ATC modernization systems, requirements for security based on risk assessments. 
FAA established a central security focal point, the NIS group, to develop additional security guidance (i.e., a security architecture, a security concept of operations, and security standards), to conduct risk assessments of selected ATC systems, to create a mechanism to respond to security incidents, and to provide security engineering support to ATC system development teams. The NIS group includes members from the Offices of Civil Aviation Security, Air Traffic Services, and Research and Acquisitions. Establishing a central security focal point is a practice employed by leading security organizations. In order to be effective, the security focal point must have the authority to enforce the organization’s security policies or have access to senior executives that are organizationally positioned to take action and effect change across organizational divisions. One approach for ensuring that a central group has such access at FAA would be to place it under a Chief Information Officer (CIO) who reports directly to the FAA Administrator. This approach is consistent with the Clinger-Cohen Act, which requires that major federal departments and agencies establish CIOs who report to the department/agency head and are responsible for implementing effective information management. FAA does not have a CIO reporting to the Administrator. Although the NIS group has access to certain key Associate Administrators (e.g., the Associate Administrator for Civil Aviation Security and the Associate Administrator for Research and Acquisitions), it does not have access to the management level that can effect change across organizational divisions (e.g., FAA’s Administrator or Deputy Administrator). Thus, there is no assurance that the NIS group’s guidance, once issued, will be adequately implemented and enforced, that results of its risk assessments will be acted upon, and that all security breaches will be reported and adequately responded to. 
Until existing ATC computer security policy is effectively implemented and enforced, operational and developmental ATC systems will continue to be vulnerable to compromise of sensitive information and interruption of critical services. In addition, OMB Circular A-130, Appendix III, requires that systems, such as ATC systems, be accredited by the management official who is responsible for the functions supported by the systems and whose mission is adversely affected by any security weaknesses that remain (i.e., the official who owns the operational systems). At FAA, this management official is the Associate Administrator for Air Traffic Services. However, FAA’s ATC systems authorizing official is the Associate Administrator for Civil Aviation Security, who does not own the operational ATC systems. Conclusions Since physical security is the agency’s first line of defense against criminal and terrorist attack, failure to strengthen physical security controls at ATC towers, TRACONs, and en route centers places property and the safety of the flying public at risk. Information system security safeguards, either those now in place or those planned for future ATC systems, cannot be fully effective as long as FAA continues to function with significant physical security vulnerabilities. Also, because FAA has not assessed physical security controls at all facilities since 1993, it does not know how vulnerable they are. Similarly, FAA does not know how vulnerable its operational ATC systems are and cannot adequately protect them until it performs the appropriate system risk assessments and certifies and accredits ATC systems. In addition, FAA is not effectively incorporating security controls into new ATC systems. FAA has taken preliminary steps to develop security guidance by forming the NIS group and estimating the cost to fill this void. 
However, until this group develops the guidance and the ATC development teams apply it, new ATC system development will not effectively address security issues. Until FAA’s three organizations responsible for ATC system security carry out their computer security responsibilities adequately, sensitive information is at risk of being compromised and flight services interrupted. Moreover, central security groups assigned to assist these organizations can only be successful if they have the authority to enforce their actions or a direct line to top management to ensure that needed changes can be implemented across organizational divisions. At FAA this central security group has neither. Finally, FAA’s designated ATC system accrediting authority is inconsistent with federal guidance and sound management practices since this designee is not responsible for the daily operations of ATC systems. Recommendations Given the importance of physical security at the FAA facilities that house ATC systems, we recommend that the Secretary of Transportation direct the FAA Administrator to complete the following tasks: Develop and execute a plan to inspect the 187 ATC facilities that have not been inspected in over 4 years and correct any weaknesses identified so that these ATC facilities can be granted physical security accreditation as expeditiously as possible, but no later than April 30, 1999. Correct identified physical security weaknesses at inspected facilities so that these ATC facilities can be granted physical security accreditation as expeditiously as possible, but no later than April 30, 1999. Ensure that the required annual or triennial follow-up inspections are conducted, deficiencies are promptly corrected, and accreditation is kept current for all ATC facilities, as required by FAA policy. 
Given the importance of operational ATC systems security, we recommend that the Secretary of Transportation direct the FAA Administrator to complete the following tasks: Assess, certify, and accredit all ATC systems, as required by FAA policy, as expeditiously as possible, but no later than April 30, 1999. Ensure that all systems are assessed, certified, and accredited at least every 3 years, as required by federal policy. To improve security for future ATC modernization systems, we recommend that the Secretary of Transportation direct the FAA Administrator to ensure that specifications for all new ATC systems include security requirements based on detailed security assessments by requiring that (1) security requirements be included as a criterion when FAA analyzes new systems for funding under its acquisition management system and (2) the NIS group establish detailed plans and schedules to develop a security architecture, a security concept of operations, and security standards, and that these plans be implemented. We further recommend that the Secretary report FAA physical security controls at its ATC facilities, operational ATC system security, and the lack of information security guidance (e.g., a security architecture, a security concept of operations, and security standards) as material internal control weaknesses in the department’s fiscal year 1998 FMFIA report and in subsequent annual FMFIA reports until these problems are substantially corrected. Finally, we recommend that the Secretary of Transportation direct the FAA Administrator to establish an effective management structure for developing, implementing, and enforcing ATC computer security policy. 
Given the importance and the magnitude of the information technology initiative at FAA, we are expanding on our earlier recommendation that a CIO management structure similar to the department-level CIOs as prescribed in the Clinger-Cohen Act be established for FAA by recommending that FAA’s CIO be responsible for computer security. We further recommend that the NIS group report to the CIO and that the CIO direct the NIS group to implement its plans. In addition, we recommend that the CIO designate a senior manager in Air Traffic Services to be the ATC operational accrediting authority. We made two additional recommendations pertaining to operational ATC systems security in our “Limited Official Use” report. Agency Comments and Our Evaluation The Department of Transportation provided written comments on a draft of our “Limited Official Use” report. In summary, the department recognized that facility, systems, and data security are critical elements in FAA’s management of the nation’s ATC systems and that adequate physical security controls are important to ensure the safety of employees and ATC systems. The department agreed that required FAA inspections should be completed and said that immediate action had been directed to inspect and, where appropriate, accredit the 187 facilities identified in the draft report, that inspections had already been completed for about 100 of these facilities, and that completion of the remaining inspections was expected by June 1998. However, the department did not state what, if any, specific action it would take on the remaining 14 recommendations. Further, while the department did not dispute any of the facts presented, it offered alternative interpretations of some of them. For example, the department did not agree that FAA’s management of computer security has been inappropriate or that ATC systems are vulnerable to the point of jeopardizing flight safety. 
In addition, the department stated that the report does not present a complete picture regarding decisions guiding FAA resource allocation in that it does not recognize the basis for FAA decisions to allocate resources to other concerns facing FAA, rather than to correcting computer security vulnerabilities. We do not agree with these alternative interpretations. As discussed in the report, FAA’s management of facility, systems, and data security is ineffective for the following reasons: Known physical security weaknesses persist at many ATC facilities, and FAA is unaware of weaknesses that may exist at another 187 facilities. FAA has not analyzed the threats and vulnerabilities of, or developed safeguards to protect, 87 of its 90 operational ATC computer systems and 8 of its 9 operational ATC telecommunications networks. FAA does not have a well-defined security architecture, a security concept of operations, or security standards, and does not consistently include well formulated security requirements in specifications for new ATC systems. None of the three organizations responsible for ATC security have discharged their respective security responsibilities effectively: the Office of Civil Aviation Security has not adequately enforced FAA policies that require the assessment of (1) physical security controls at all ATC facilities and (2) vulnerabilities, threats, and safeguards of all operational ATC computer systems; the Office of Air Traffic Services has not implemented FAA policies that require it to analyze all ATC systems for security vulnerabilities, threats, and safeguards; and the Office of Research and Acquisitions has not implemented FAA policy that requires it to formulate requirements for security in specifications for all new ATC modernization systems. FAA has recognized for several years that its vulnerabilities could jeopardize, and have already jeopardized, flight safety. 
In its 1994 Telecommunications Strategic Plan, FAA states that vulnerabilities that can be exploited in aeronautical telecommunications potentially threaten property and public safety. Vulnerabilities that have jeopardized flight safety are discussed in our “Limited Official Use” report. Finally, making judicious decisions regarding resource allocation requires a thorough understanding of relative levels of risk, as well as reliable estimates of costs. As we have reported, FAA has not fully assessed its security vulnerabilities and threats and does not understand its security risks. Further, since it has not formulated countermeasures, it cannot reliably estimate the cost to mitigate the risks. As a result, FAA has no analytical basis for its decisions not to allocate resources to security. In recent years, FAA has invested billions of dollars in failed efforts to modernize its ATC systems while critical security vulnerabilities went uncorrected. The department’s comments and our detailed evaluation of them are presented in our “Limited Official Use” report. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies to the Secretary of Transportation; the Director, Office of Management and Budget; the Administrator, Federal Aviation Administration; and interested congressional committees. Copies will be available to others upon request. If you have any questions about this report, please call me at (202) 512-6253. I can also be reached by e-mail at [email protected]. Major contributors to this report are listed in appendix I. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Dr. Rona B. Stillman, Chief Scientist for Computers and Telecommunications Keith A. Rhodes, Technical Director Randolph C. Hite, Senior Assistant Director Colleen M. Phillips, Assistant Director Hai V. 
Tran, Technical Assistant Director Nabajyoti Barkakati, Technical Assistant Director David A. Powner, Evaluator-in-Charge Barbarol J. James, ADP/Telecommunications Analyst
GAO noted that: (1) FAA is ineffective in all critical areas included in GAO's computer security review--facilities physical security, operational systems information security, future systems modernization security, and management structure and policy implementation; (2) in the physical security area, known weaknesses exist at many ATC facilities; (3) FAA is similarly ineffective in managing systems security for its operational systems and is in violation of its own policy; (4) an October 1996 information systems security assessment concluded that FAA had performed the necessary analysis to determine system threats, vulnerabilities, and safeguards for only 3 of 90 operational ATC computer systems, or less than 4 percent; (5) FAA officials told GAO that this assessment is an accurate depiction of the current state of operational systems security; (6) according to the team that maintains FAA's telecommunications networks, only one of the nine operational ATC telecommunications networks has been analyzed; (7) without knowing the specific vulnerabilities of its ATC systems, FAA cannot adequately protect them; (8) FAA is also not effectively managing systems security for future ATC modernization systems; (9) it does not consistently include well formulated security requirements in specifications for all new ATC modernization systems, as required by FAA policy; (10) it does not have a well-defined security architecture, a concept of operations, or security standards, all of which are needed to define and ensure adequate security throughout the ATC network; (11) FAA's management structure and implementation of policy for ATC computer security is not effective; (12) security responsibilities are distributed among three organizations, all of which have been remiss in their ATC security duties; (13) the Office of Civil Aviation Security is responsible for developing and enforcing security policy, the Office of Air Traffic Services is responsible for implementing security 
policy for operational ATC systems, and the Office of Research and Acquisitions is responsible for implementing policy for ATC systems that are being developed; (14) the Office of Civil Aviation Security has not adequately enforced FAA's policies that require the assessment of physical security controls at all ATC facilities and vulnerabilities, threats, and safeguards for all operational ATC computer systems; and (15) the Office of Research and Acquisitions has not implemented the FAA policy that requires it to formulate requirements for security in specifications for all new ATC modernization systems. |
Background In keeping with its trust responsibility with respect to Indian tribes, the federal government holds title to the Navajo and Hopi tribal land in trust for the benefit of the tribes and their members. In this context, this section provides information on (1) the Navajo Nation and Hopi Tribe; (2) uranium mining and processing on the Navajo reservation and its environmental effects; (3) Navajo people’s exposure to uranium contamination and related health effects; (4) key statutes relevant to addressing uranium contamination; and (5) the roles of federal and tribal agencies and selected actions taken to address uranium contamination on the Navajo and Hopi reservations prior to 2008. Navajo Nation and Hopi Tribe The Navajo reservation consists of more than 24,000 square miles of land—about the size of the state of West Virginia—in the states of Arizona, New Mexico, and Utah, making it the largest reservation, geographically, in the United States. The Hopi reservation consists of approximately 2,500 square miles of land in northeastern Arizona, entirely surrounded by the Navajo reservation. Figure 1 shows the locations of the Navajo and Hopi reservations, as well as the locations of 521 abandoned uranium mines and other key sites. According to the 2010 Census, 174,000 people lived on Navajo land, and, according to Census Bureau estimates, more than 90 percent of the population identified as Navajo. Navajo culture is historically agrarian, and the Navajo people tend to live in small group clusters that are widely dispersed across the reservation. Many Navajo graze sheep and other livestock, which they use for wool and for consumption, among other things. According to Census Bureau estimates, the Navajo reservation’s poverty rate is more than twice as high as the poverty rate in the state of Arizona, with 38 percent of people on the reservation—and 44 percent of all children on the reservation—living in poverty. 
Most homes do not have electricity or telephones; roads are unpaved, and there are no urban centers, although there are large towns generally found near the boundaries of the reservation. Residents living in homes without piped, regulated water sources haul their water from a nearby source, and many of these sources are unregulated, untreated water sources such as livestock wells or natural springs. The Navajo Nation government includes 110 local government subdivisions, known as chapters. There are more than 5,000 Hopi living on Hopi land, according to the 2010 Census. Poverty rates on the Hopi reservation are similar to those among the Navajo. There were no uranium mines on the Hopi reservation. Uranium Mining and Processing on the Navajo Reservation and its Environmental Effects The Navajo reservation is located on the southern end of a stretch of naturally occurring uranium deposits that spans the western United States (see fig. 2). The uranium found on the reservation is primarily located in sandstone formations that range from surface outcrops to deposits more than 4,000 feet deep. On the Navajo reservation, uranium ore was removed from the ground at more than 500 mines, generally through open pit mining for ore deposits located relatively close to the surface, or underground mining for deeper deposits. The mines were often located in mountainous areas and consisted of multiple features, such as portals and vertical shafts. The material left behind after the ore was removed—known as waste rock— was then disposed, often in nearby piles, and contained dangerous materials, such as radium, radon, and heavy metals. Once mining ceased at a site, companies often abandoned the mines, leaving the waste rock piles in place without conducting any cleanup or posting signs warning about the dangers of contamination or physical hazards. The extracted ore was sent to an off-site processing facility called a mill. 
At the mill, the mined uranium ore was crushed, ground, and then fed to a leaching system that produced yellow slurry—called yellowcake—that was further processed for use in nuclear weapons or, as of the mid-1960s, for use in nuclear power plants. The leaching system left a waste product known as mill tailings that retained some toxic contaminants. The tailings were of a sandy consistency and mixtures of tailings and water were placed in unlined evaporation ponds at the mill site. DOE estimates that millions of gallons of water contaminated by mill tailings were released into the groundwater over the life of the sites through the unlined ponds. In addition, on July 16, 1979, the largest release of radioactive materials in the United States occurred when a dam on one of the evaporation ponds broke at a processing site near Church Rock, New Mexico, resulting in the release of 94 million gallons of radioactive waste to the Puerco River, which flowed through nearby communities. Figure 3 depicts the uranium mining and processing that occurred on or near the Navajo reservation. Most of the uranium mining and processing on the Navajo reservation occurred from the late 1940s through the 1960s, and the federal government played a variety of roles during this time. For example, as reported by EPA and the Navajo Nation, beginning in the 1940s, the Secretary of the Interior, along with the Navajo Nation, and later BIA, issued leases and permits to private companies and individuals for uranium mining on the Navajo reservation. In another example, the Atomic Energy Commission (Commission)—a precursor agency to DOE—established a series of financial incentives for the discovery and production of domestic uranium, including guaranteed minimum prices for uranium ore and financial bonuses for uranium ore mined from any previously unidentified site, according to a report prepared by the Commission. 
According to the report, the Commission also provided infrastructure support, such as roads needed to survey mine sites and transport ore. Finally, the federal government was the sole customer of the processed uranium from 1947 to 1965. Beginning in 1970, uranium from the Navajo reservation was sold exclusively to the commercial sector for use in nuclear power plants, but prices fell in the 1980s, and uranium mining operations on the reservation ended in 1986. Because of lingering contamination and its effects, in 2005 the Navajo Nation enacted a law placing a moratorium on uranium mining and processing on any site within the tribe’s territorial jurisdiction. In 2012, the Navajo Nation enacted a law prohibiting transportation of uranium ore or radioactive waste through lands under the tribe’s territorial jurisdiction unless fees, bonding, and other requirements were met. The 2012 law also stated that the Navajo Nation generally opposed transportation of uranium ore or radioactive materials, except for the purpose of disposing of materials from past mining or milling in a long-term facility outside of the tribe’s territorial jurisdiction or a temporary facility within the jurisdiction. Even with these laws, however, increases in the price of uranium during the past 10 years have sparked renewed interest in uranium mining and processing on and near the Navajo reservation, and opinions over new mining appear split, especially given the potential for job creation offered by the industry in economically depressed areas. For example, a committee of the Navajo Nation Council approved a resolution in December 2013 acknowledging a private company’s right-of-way across tribal land near Church Rock, New Mexico, and authorizing its use for a demonstration project that extracts uranium from beneath the surface. The Navajo Nation Department of Justice has concluded that the resolution conflicts with the 2005 and 2012 laws. 
Exposure to Uranium and Its Health Effects Uranium is present naturally in virtually all soil, rock, and water, and is spread throughout the environment by geological processes, as well as by wind and rain. On the Navajo reservation, winds blow during most of the year, exceeding 50 miles per hour at times, and localized, heavy rainstorms occur throughout the summer. When uranium is present in the environment, people may be exposed to it, as well as the radioactive by-products that are created as uranium decays—including radium and radon (a gas)—through a variety of exposure pathways. For example, Navajo people have been exposed to naturally occurring uranium and its by-products by drinking water from unregulated wells that tap into groundwater that comes into contact with underground uranium deposits. People are also exposed to naturally occurring radon when it migrates into their homes from the uranium-bearing soil underneath. Mining and milling processes on the Navajo reservation created new pathways for exposure to uranium, increasing the amount of potential exposure at the surface. When the uranium mines were in operation, uranium miners inhaled radioactive dust and radon in and near the mines where they worked. Miners tracked the dust into their homes, exposing their families. Community members occasionally used materials from the mines and mill sites to build their houses and ceremonial structures, leading to increased radon inside these structures. When waste rock piles were left next to abandoned mines, wind and rain at times spread—and could continue to spread—the hazardous materials, sometimes through intermittent streams, where these materials could come into contact with residents of nearby communities, who could then be exposed by inhalation or ingestion. Residents of some communities located near the mines reported playing as children in and around the open mines. 
Many of the exposure pathways that existed when the mines and processing sites were in operation have been eliminated, and the Navajo Nation has stated that the most significant safety hazards that were present at the abandoned mines have been addressed. However, there are pathways of exposure that remain for Navajo residents today (see fig. 4). Although historic, occupational exposure to uranium has been shown by the Centers for Disease Control and Prevention (CDC) and others to have affected human health, the extent to which the Navajo people have experienced health effects resulting from uranium exposures in other ways has not been thoroughly examined and remains uncertain. For nonoccupational exposures, comprehensive health studies have not been conducted to assess the health effects of uranium contamination on Navajo communities or other communities located near active or abandoned uranium mines and processing sites, but Navajo community members who have lived near these sites have reported a variety of serious health effects, including cancers, according to CDC. EPA reports that exposure to gamma radiation—such as from waste rock located near abandoned mines—can cause a variety of cancers, including lung cancer and leukemia, and that exposure to radon can cause lung cancer. Because of these potential dangers, EPA recommends that people stay away from areas on the Navajo reservation with especially high levels of radiation—more than 10 times above the naturally occurring background radiation—in order to avoid potential health effects. ATSDR and EPA have noted that the abandoned mines pose a risk especially to children, since children tend to put dirt in their mouths, and the dirt at the mines could be contaminated. EPA noted in the 2008 5-year plan that inhabitants of structures constructed with uranium mining waste are at risk of developing lung cancer because of the increased presence of radon in indoor air. 
In addition, given the consumption by Navajo residents of livestock that have grazed on plants located on or near abandoned mine sites, residents and researchers have identified the need to study the potential for exposure to radiation through consuming these animals. Key Statutes Relevant to Addressing Uranium Contamination Two key statutes involved with addressing uranium contamination are (1) CERCLA, also known as Superfund, and (2) UMTRCA. CERCLA established the Superfund program in 1980 to protect human health and the environment from the effects of hazardous substances, including uranium. Under CERCLA, potentially responsible parties—such as current or former owners or operators of a mine site containing hazardous substances—are liable for conducting or paying for cleanup of hazardous substances at contaminated sites. If the federal government is a potentially responsible party at a site, it is liable for cleanup costs even if there are nonfederal potentially responsible parties. For example, one court has held that the federal government was liable as an owner under CERCLA for the cleanup costs at a mine located within an Indian reservation. Under CERCLA, EPA has the authority to compel potentially responsible parties to clean up contaminated sites, or to conduct cleanups itself and then seek reimbursement from the potentially responsible parties. EPA may compel cleanup by bringing an enforcement action against a potentially responsible party or by attempting to reach an administrative agreement—known as an administrative order on consent, or a settlement agreement—requiring the responsible party to perform and pay for site cleanup. Sometimes, however, potentially responsible parties cannot be identified or may be financially unable to perform the cleanup. Under the Superfund program, EPA and potentially responsible parties can undertake two types of cleanups: (1) removal actions and (2) remedial actions. 
Removal actions are generally shorter-term or emergency cleanups to mitigate immediate threats. These include time-critical removals for threats requiring action within 6 months, and non-time-critical removals for threats where action can be delayed to account for a 6-month or longer planning period, which includes a site evaluation to characterize the site and identify and analyze removal alternatives. Removal actions can include, for example, installing a fence around a contaminated site and excavating contaminated soils for disposal, and can be quite complex. Removal actions can be financed by the Hazardous Substance Superfund Trust Fund (Superfund Trust Fund). Remedial actions typically are longer-term actions that involve a more elaborate process to permanently and significantly reduce contamination. Remedial actions are taken instead of, or in addition to, a removal action. Before undertaking a remedial action, a remedial investigation and feasibility study (RI/FS) is conducted in accordance with an approved work plan to (1) characterize site conditions and assess the risks to human health and the environment, among other things, and (2) evaluate various options to address the problems identified. A remedy is selected for addressing the site’s contamination in a record of decision, and the design of the selected remedy is then developed and implemented (see fig. 5). Only sites on the National Priorities List (NPL)—EPA’s list of the nation’s most contaminated sites—are eligible to have remedial actions financed by the Superfund Trust Fund. None of the mine sites on the Navajo reservation are currently listed on the NPL. UMTRCA required DOE to take remedial actions at certain uranium mill sites across the country—and properties in the vicinity that were contaminated with radioactive materials from the mill sites—to stabilize and control the mill tailings in a safe and environmentally sound manner and to minimize or eliminate health hazards, among other things. 
UMTRCA included four sites on the Navajo reservation: Mexican Hat, Monument Valley, Shiprock, and Tuba City, which we refer to as the Rare Metals site in this report. Under the act, DOE is also responsible for ensuring that any residual radioactive minerals entering the groundwater do not exceed specified limits; therefore, DOE maintains groundwater remediation systems at sites where groundwater contamination has persisted. In accordance with the act, the Secretary of Energy has entered into cooperative agreements with the Navajo Nation to perform the remedial actions at the sites located on the tribe’s land; the current agreement lasts until 2017. The act also required DOE to complete the remedial actions at the processing sites and vicinity properties before DOE’s authority under the act, as amended, to perform these actions expired in 1998; DOE’s authority to perform groundwater restoration activities has not expired. UMTRCA did not include provisions for DOE to remediate abandoned uranium mines. Federal and Tribal Agencies’ Roles and Selected Actions Taken to Address Uranium Contamination on or Near the Navajo Reservation Prior to 2008 A variety of federal agencies have specific roles in addressing uranium contamination on or near the Navajo reservation. Table 1 outlines the agencies, their roles, and selected actions they took to begin addressing the contamination prior to 2008. In addition to the federal agencies, tribal agencies play key roles and have been actively addressing the impacts of historical uranium mining and processing on or near the Navajo reservation. Navajo Nation Environmental Protection Agency (NNEPA). NNEPA is the lead Navajo agency for regulating radiological contamination at abandoned uranium mines. 
NNEPA addresses uranium contamination on the Navajo reservation through a variety of programs, including the Navajo Superfund program, which is responsible for assessing hazardous waste sites on the reservation, including abandoned uranium mines. NNEPA partners with EPA in working under the structures, abandoned mines, and unregulated drinking water objectives of the 2008 5-year plan, and provides input on the other objectives as well. Navajo Abandoned Mine Lands Reclamation/Uranium Mill Tailings Remedial Action (UMTRA) Department. This Navajo department consists of two programs, the Abandoned Mine Lands Reclamation Program and the UMTRA program. The Abandoned Mine Lands Reclamation Program reclaims abandoned mines on the Navajo reservation. From the 1990s through 2005, the Abandoned Mine Lands Reclamation Program reclaimed more than 900 abandoned uranium mine features found at the 521 mines located on or near the reservation, primarily addressing surface hazards, including stabilizing steep areas and burying uranium-contaminated soils. According to Navajo Nation officials, this work did not address all associated radiological hazards, and the program continues to conduct maintenance on past reclamation work. The department’s UMTRA program provides assistance to DOE under an UMTRCA cooperative agreement at the four former uranium processing sites on the reservation. Under the 2008 5-year plan, the Navajo Nation chose not to have the Abandoned Mine Lands Reclamation Program coordinate its reclamation work with EPA and NNEPA’s abandoned mine work. The UMTRA program, however, supported DOE’s efforts at the former processing sites under the 5-year plan. Agencies Met the Targets in Six Out of Eight Plan Objectives Primarily Because of Additional Federal Resources Federal agencies met the targets in six of the eight objectives they established in the 2008 5-year plan, but they did not meet the targets in two of the eight objectives.
The agencies met the targets in five objectives primarily because additional federal resources, including funding and staff time, were dedicated to their efforts. DOE met the targets in a sixth objective because the agency set targets that represented a continuation of previously required activities. By contrast, federal agencies did not meet the targets in two objectives in part because of decisions to conduct additional assessment and outreach activities before identifying final cleanup actions. Remaining actions are necessary to meet the targets in these two objectives. Agencies Met the Targets in Six Out of Eight Plan Objectives In the 2008 5-year plan, federal agencies identified targets under each of the eight objectives that they intended to meet, in cooperation with tribal agency partners, by the end of the plan period in 2012. We found that the agencies met the targets in six of the eight objectives. According to the agencies’ January 2013 summary report, the 2008 5-year plan outlined a strategy for gaining a better understanding of the scope of the problem and addressing the greatest risks first. The scope of work required to meet most of the targets did not represent the entirety of work necessary to fully address the issues encompassed by each objective. Table 2 explains the actions taken by the federal agencies—and the tribal agencies with whom they worked—during the period of 2008 through 2012, and our assessment of whether these actions met the targets in the 2008 5-year plan. We also found that federal agencies completed additional actions and produced results beyond the targets in the 2008 5-year plan during the plan period and in 2013. Among other things, EPA, working with the Navajo Nation, conducted a time-critical removal action at the Skyline mine, located within the Oljato Chapter in southern Utah.
The action involved moving 25,000 cubic yards of radioactive mine waste—most of which was located at the bottom of a 700-foot-high mesa—to a repository constructed on-site at the top of the mesa. According to EPA officials, the agency built the repository to be permanent, but the waste could ultimately be removed from the site given the Navajo Nation’s preference that all contamination be removed from the reservation. EPA undertook smaller, interim removal actions at three other sites, including the Quivira mine, which is located near the Northeast Church Rock mine. At two of the sites, EPA built temporary storage repositories to hold the waste on-site until a final disposal option is selected. In addition, EPA and NNEPA identified 43 of the 521 abandoned mines as the highest priority for additional assessment work and cleanup actions; EPA officials said these mines are the highest priority because they pose the greatest exposure risks to the Navajo people since elevated radiation levels are present at the mines and they are located near houses or other potentially inhabited structures. EPA recommends that people stay away from areas on the Navajo reservation with such elevated levels of radiation in order to avoid potential health effects. The 43 highest priority mines include 37 mines where radiation levels measured at or above 10 times the background radiation at the mine and where a potentially inhabited house or structure is within one-quarter mile of the mine. Of these mines, 8 mines measured at or above 50 times the background radiation, and 3 mines measured above 95 times the background radiation. EPA and NNEPA identified 6 additional mines as part of the 43 highest priority mines where radiation levels were lower—from 2 times to 10 times background—but that posed especially high risks because, for example, a potentially inhabited house or structure is within 200 feet.
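The screening criteria described above amount to a simple rule on two measurements: radiation level relative to background and distance to the nearest potentially inhabited structure. The sketch below is illustrative only; the function name and thresholds reflect the examples reported here (the 200-foot condition was one example EPA gave), not EPA and NNEPA's full prioritization methodology.

```python
# Illustrative sketch of the mine-screening criteria described in the report.
# A mine is treated as highest priority if:
#   - radiation is at or above 10 times background AND a potentially inhabited
#     structure is within one-quarter mile (1,320 feet); or
#   - radiation is 2-10 times background AND a structure is within 200 feet
#     (one example of the "especially high risk" condition cited in the report).
# Names and the exact rule structure are our assumptions for illustration.

QUARTER_MILE_FT = 1320

def is_highest_priority(radiation_multiple, structure_distance_ft):
    """Return True if a mine meets either illustrative screening criterion."""
    if radiation_multiple >= 10 and structure_distance_ft <= QUARTER_MILE_FT:
        return True
    if 2 <= radiation_multiple < 10 and structure_distance_ft <= 200:
        return True
    return False
```

Under this sketch, a mine at 50 times background with a home 1,000 feet away would screen in, while a mine at 12 times background with no structure within a quarter mile would not.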
EPA Region 9 officials we spoke with said prioritizing the mines benefits both federal and tribal agencies by providing a common road map for their efforts. Figure 6 shows how the 43 highest priority mines relate to the rest of the 521 abandoned uranium mines in terms of radiation levels and distance to homes or structures. Further, ATSDR worked with the Navajo Nation, the University of New Mexico, and IHS to develop and begin the Navajo Birth Cohort Study, a health study that is intended to improve the understanding of the relationship between uranium exposures and human health—specifically that of mothers and babies—on the Navajo reservation. According to the study proposal, for the Navajo Nation, congenital anomalies remain the leading cause of infant deaths, and the infant mortality rate among the Navajo people is 8.5 deaths per 1,000 live births, compared with 6.9 deaths per 1,000 live births overall in the United States. ATSDR awarded a research cooperative agreement to the university in August 2010, and ATSDR and the university received approval to begin recruiting participants in February 2013; this approval occurred after a lengthy review process that included obtaining multiple, separate approvals, including from the university, the Navajo Nation, and the Office of Management and Budget (OMB). The Navajo Nation and others expressed frustration about the length of time spent developing and approving the study, which took longer than anticipated for a variety of reasons. For example, one reason for the overall amount of time is that OMB did not approve the information collection necessary for the study, or take other actions, within the 60-day regulatory deadline, but rather approved it after more than 300 days. Now that the study is under way, however, ATSDR officials told us that it has already had positive outcomes in Navajo communities. 
For example, according to ATSDR officials, recent observations of increased levels of prenatal care across the reservation may be a result of the outreach and community education that has occurred as part of the study. Moreover, according to ATSDR’s study proposal, the results of the study will answer long-standing questions on whether or not exposures to uranium wastes and other environmental contaminants are associated with adverse birth outcomes or developmental delays on the Navajo reservation. Navajo Nation officials, however, stated that the Navajo Birth Cohort Study is just a small step and that more comprehensive studies are needed to better assess the health effects of uranium contamination on the Navajo people. We found that some of the agencies’ actions during the 2008 5-year plan period yielded additional benefits. For example, outreach to affected communities was an important component of some of the objectives under the 2008 5-year plan, although the plan did not include a strategy for coordinating agencies’ outreach. Regardless, federal agencies began to coordinate these efforts, and, for example, held five joint workshops for stakeholders, including members of Navajo communities affected by uranium contamination, during which the agencies presented information about their efforts and solicited feedback. Federal agencies also partnered with Navajo agencies on some outreach efforts, which was important for their success in some cases. For example, EPA and NNEPA officials told us EPA relied heavily on NNEPA’s outreach staff to communicate with affected community members in identifying and addressing contaminated structures. NNEPA outreach staff’s ability to speak Navajo and their familiarity with Navajo cultural practices allowed them to work more effectively with community members than if EPA had conducted outreach on its own. 
Other benefits from the agencies’ actions included tribal capacity building and career development and education opportunities for Navajos. For example, EPA helped enhance capacity building within NNEPA by training some of its staff to assess potentially contaminated structures, and it also provided job training to 20 Navajo hazardous waste workers through EPA’s Superfund Job Training Initiative program. In another example, DOE continued to sponsor a summer internship program to give assistance to American Indian college students—including Navajo students—who are pursuing degrees in science, engineering, and technology. Agencies Met the Targets in Six Objectives Primarily Because of Additional Federal Resources or Because Targets Continued Previously Established Efforts We found that a key reason agencies met the targets for five objectives in the 2008 5-year plan was that additional resources, mostly federal but also private, were dedicated to their efforts. For their work on the objectives addressing contaminated houses, abandoned mines, unregulated drinking water sources, the Highway 160 site, and treatment of health conditions, the agencies either dedicated more funds and staff resources than during the previous 5-year period or received additional appropriations for work related to Navajo uranium contamination. DOE was able to meet the targets for a sixth objective, regarding former uranium processing sites, primarily because its targets largely represented a continuation of previously required activities. Additional Resources In accomplishing the targets under five of the objectives outlined in the 2008 5-year plan, according to agency officials, agencies benefited from dedicating additional resources from their existing budgets, receiving additional appropriations to conduct the work, or leveraging funds from private parties. Examples are as follows: Additional funding and staff time from agencies’ existing budgets.
EPA prioritized its work under three objectives of the 2008 5-year plan by dedicating additional resources from its existing budget for addressing contaminated houses, assessing abandoned uranium mines, and addressing unregulated drinking water sources. EPA provided from $1.8 million to $7.8 million annually to the Region 9 Superfund Removal program to fund the program’s Navajo uranium work during the 5-year plan period—a significant increase over the previous 5-year period. For example, from fiscal years 2008 through 2012, EPA reported that it expended $22 million on efforts to identify and address contaminated houses and other structures, compared with the $1.5 million it expended on similar efforts in the preceding 5 years. Throughout the 2008 5-year plan period, the additional Superfund Removal program funds allowed EPA Region 9 to increase the amount of money it spent on the Navajo work even as the national Superfund budget decreased, according to a senior EPA Region 9 official. Further, EPA officials told us that they conducted work that went beyond the 5-year plan targets because of the increased funding the agency dedicated to Navajo uranium work. Specifically, these officials said they could not have completed the removal action at the Skyline mine without the increased funding since the Region 9 Superfund removal program’s prior budget would have been insufficient, and there was no potentially responsible party to contribute funds. In addition to increased funding, EPA Region 9 also increased the number of full-time equivalent employees that it dedicated to its Navajo uranium work from approximately 3.68 in 2008 to 6.95 in 2012. Similarly, IHS reported that its Navajo Area identified nearly $1 million from within its existing budget that it used to support the creation of a uranium-related health screening program, which was established in 2010. Additional funding from the American Recovery and Reinvestment Act of 2009 (Recovery Act). 
EPA and IHS used Recovery Act funds for some of the water infrastructure projects that were selected to serve Navajo communities in which contaminated, unregulated water sources had been identified. For example, in fiscal year 2009, EPA contributed $3 million in Drinking Water Infrastructure Grants Tribal Set Aside funds, and IHS contributed about $2 million from the IHS Recovery Act Sanitation Facilities Construction Fund toward a nearly $10 million, 50-mile extension of a water main to the communities around Sweetwater, Arizona. An EPA official familiar with the project told us that it will supply water to homes within the vicinity of four contaminated, unregulated wells, including a well that had the highest uranium levels of all unregulated water sources tested during the 5-year plan period. Additional appropriations. In fiscal year 2009, DOE received a $5 million appropriation to carry out a remedial action of the Highway 160 site. The 2008 5-year plan included a target for assessing the site and identifying the best path forward, but not for completing cleanup at the site. According to DOE and NNEPA officials involved with the project, having the resources available to fund assessment and cleanup work allowed the agencies to move forward and complete the cleanup more quickly than they had anticipated. Moreover, NNEPA and DOE also used the appropriated funds to begin to address recently discovered, contaminated structures in the area of the Highway 160 site and the nearby Rare Metals processing site. In addition, ATSDR officials we spoke with said they would not have been able to fund the Navajo Birth Cohort Study without additional appropriations for such research. These officials said ATSDR received an increase of $2 million in funding for fiscal year 2010 to begin the study, and that the agency has subsequently put that amount toward the study. Leveraging funding from private, potentially responsible parties and other federal agencies. 
According to EPA officials, the agency was able to complete some of the work that went beyond the targets in the 2008 5-year plan, including conducting the interim time-critical removal actions at the Quivira mine and others, because of funding that came from private, potentially responsible parties. Specifically, EPA issued an administrative order to one of the former operators of the Quivira mine to conduct and pay for the interim removal action. In addition, EPA and NNEPA used funds from a bankruptcy settlement with another potentially responsible party to pay for the interim actions at three other mines or mine-related sites. Without funds from the private, potentially responsible parties, EPA officials said they would not have been able to conduct these actions during the 5-year plan period. Further, EPA officials said that funds from the bankruptcy settlement were instrumental in providing the initial funding for the agency’s efforts to pursue potentially responsible parties at other abandoned uranium mines on the Navajo reservation. Federal agencies’ ability to share resources was also an important factor in meeting the targets in at least one objective. Specifically, IHS officials told us the agency would not have been able to contribute funding to all 13 drinking water infrastructure projects funded during the 5-year plan period without combining its funds with funds from EPA and the Department of Housing and Urban Development. IHS officials said the agency’s ability to fund these drinking water projects would have been limited because IHS’s share of some of the projects’ costs would have exceeded the agency’s limit for economic feasibility. For example, if the agency had to solely fund a $4.75 million project in Dennehotso, in the northern part of the reservation, it would have cost the agency about $44,000 per home, an amount that would have been considered economically infeasible for IHS in fiscal year 2009, the year the project was funded. 
However, contributions of $2 million from EPA and $1 million from the Department of Housing and Urban Development reduced IHS’s per-home cost so that the agency was able to participate in funding the project that provided piped drinking water to 107 homes that did not previously have piped water. Overall, the federal agencies reported spending $121 million on work performed under the 2008 5-year plan. These amounts do not include the approximately $17 million in private funds spent during the 5-year plan period, including by potentially responsible parties at the Northeast Church Rock mine, the Quivira mine, and from the bankruptcy settlement, according to the federal agencies’ January 2013 summary report. In contrast, agencies reported spending approximately $42 million during the prior 5 years, and more than half of that amount was spent by DOE at the four processing sites. Figure 7 compares the amount of funds spent by the federal agencies under each objective in the 5-year plan period with the previous 5 years. Because the 2008 5-year plan did not include an overall cost estimate for conducting the work, we cannot determine whether the total amount spent by the agencies was in keeping with their expected costs. The 5-year plan included estimated costs of varying specificity for the first 2 years of the plan—2008 and 2009—since agency budgets for the first 2 years were already in place at the time of the plan’s development but not for the final 3 years of the plan. For one objective of the 2008 5-year plan—addressing contamination at former uranium processing sites—DOE set targets that largely continued previously authorized activities that the agency was already undertaking, which according to DOE officials, helped the agency accomplish the targets. 
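The Dennehotso example above is simple cost-share arithmetic: dividing the project cost across the homes served, with and without partner contributions. A back-of-the-envelope sketch using the figures reported here (the variable names are ours, and IHS's actual economic-feasibility threshold is not specified in this report):

```python
# Sketch of the per-home cost-share arithmetic from the Dennehotso example.
# Dollar figures and home count come from the report; everything else is
# illustrative.

project_cost = 4_750_000   # total drinking water project cost, fiscal year 2009
homes_served = 107         # homes receiving piped water

# If IHS had funded the project alone, its per-home cost would have been
# roughly $44,000 -- above what the agency considered economically feasible.
solo_cost_per_home = project_cost / homes_served

# Contributions from EPA and HUD reduce the share IHS must cover.
epa_contribution = 2_000_000
hud_contribution = 1_000_000
ihs_share = project_cost - epa_contribution - hud_contribution
ihs_cost_per_home = ihs_share / homes_served

print(f"Solo: ${solo_cost_per_home:,.0f}/home; "
      f"with partners: ${ihs_cost_per_home:,.0f}/home")
```

This works out to roughly $44,000 per home without partners versus roughly $16,000 per home for IHS with the EPA and HUD contributions, consistent with the report's figures.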
DOE set targets to continue to address groundwater contamination at three of the sites and to continue long-term surveillance and maintenance at all four sites—actions the agency was required to undertake under UMTRCA. DOE officials told us that continuing to carry out the already approved groundwater remediation strategies at the sites was the most appropriate action for the agency during the 2008 5-year plan period since those were the actions they were explicitly authorized to conduct. Navajo Nation officials told us that they were disappointed that DOE did not increase its level of effort at the sites. They also told us they were concerned that the remediation efforts that DOE is implementing are not achieving sufficient results, and that it appears that the agency is not expected to complete its efforts to treat contaminated groundwater in the foreseeable future. Agencies Did Not Meet the Targets in Two Objectives for a Variety of Reasons, Including Optimistic Schedules and Decisions to Perform Additional Work That Extended Time Frames Federal agencies did not meet the targets for two of the eight objectives in the 2008 5-year plan—cleanup of the Northeast Church Rock mine and the Tuba City Dump—for a variety of reasons, including that the schedules were optimistic and ambitious, and EPA decided to increase outreach work at the Northeast Church Rock mine and assessment work at the Tuba City Dump before identifying final cleanup actions for the sites. EPA and BIA officials told us they estimated the schedules based on the information they had at the time, but neither agency anticipated the need for additional steps in the assessment process and therefore did not include these steps in their schedules. Officials from both agencies said they deliberately created ambitious schedules for these sites, in part, to acknowledge the threats they posed and to make it clear that the agencies were committed to cleaning them up. 
Work remains for both agencies to complete the cleanups at the two sites, and the agencies expect that time frames will likely extend beyond the agencies’ 2014 5-year plan and that federal costs will be in the tens of millions of dollars at each site. Northeast Church Rock Mine EPA did not meet its target to complete cleanup of the Northeast Church Rock mine in part because its schedule was optimistic and ambitious. According to the 2008 5-year plan, EPA expected to select the removal action for the site in December 2008; however, this selection did not occur until September 2011. According to EPA officials familiar with the project, selecting the removal action took longer than anticipated for a number of reasons. First, completing the cleanup assessment for the site took 8 months longer than planned. Second, after EPA issued the cleanup assessment, the agency postponed selecting the removal action by 2 years so that agency officials could better understand and attempt to address community concerns. Over this 2-year period, EPA conducted 10 public meetings and brought in a Navajo peacemaker to facilitate discussions and improve communication between the agency and the community. According to EPA officials we spoke with, in order to further respond to community concerns, EPA also began work on some predesign analyses that are normally conducted at a later stage in the cleanup process. EPA officials told us they felt the meetings were valuable and that they have conducted more outreach at this site than at most other sites, but community members we spoke with said they remain frustrated with the decision process and disappointed with the outcome. Third, when estimating the schedule under the 2008 5-year plan, EPA Region 9 officials did not anticipate that additional approval processes would be necessary to implement the removal action for the site that EPA ultimately selected. 
EPA’s selected action involves disposing of approximately 1 million cubic yards of mine waste within an existing disposal cell for mill tailings at a former uranium processing site. The site is located less than 1 mile from the Northeast Church Rock mine and is regulated and managed by NRC and EPA Region 6, respectively. The former operator of the processing site—which is also the former operator and a potentially responsible party for the mine—currently holds a license from NRC for the existing disposal cell at the former processing site. According to EPA Region 9 officials, for EPA to transfer waste from the mine to the disposal cell, EPA headquarters officials determined that EPA Region 6 would need to approve a Record of Decision, which took 18 months to complete. A number of steps remain for EPA to fully meet the target of cleaning up the mine. As of February 2014, EPA was in the removal predesign phase of the cleanup process. NRC and DOE are both participating in an EPA-led design work group since NRC will transfer responsibility for the site to DOE once the processing site is closed for long-term surveillance and maintenance pursuant to UMTRCA. Once the design phase is complete, the former operator of the processing site must submit a license amendment request and receive an amended license from NRC before disposing of the mine waste at the former processing site. This former operator and potentially responsible party for the mine is expected to implement the removal action if and when NRC issues the amended license. In addition, NNEPA officials told us they have concerns regarding groundwater contamination at the site that have yet to be examined. EPA’s current schedule estimate is to complete the removal action in 2020. EPA officials, however, acknowledged that this schedule is also optimistic since it assumes that NRC’s approval process for the license amendment will take 1 year, and that the cleanup itself will take 4 years.
NRC officials also said they felt the schedule was too optimistic, and they told us that NRC’s safety and environmental reviews will take approximately 2 years but, if a public hearing on the license amendment is requested, the approval process could take up to 5 years. An EPA project manager for the mine told us EPA is working with NRC to revise the schedule to provide 2 years for the license amendment approval process in order to better account for NRC’s process. Moreover, although the former operator and potentially responsible party at Northeast Church Rock mine is taking the lead for the cleanup, the government will pay up to 33 percent of future cleanup costs; in 2009, EPA estimated that these total future costs could be $44 million. Tuba City Dump BIA also did not meet its targets in the 2008 5-year plan for the Tuba City Dump, in part, because the schedules were optimistic and ambitious. Under the plan, BIA was to (1) complete a set of studies to assess whether interim actions were warranted to protect Hopi water supplies, including drinking water wells, by mid-2008; (2) create a work plan for, then conduct and complete a RI/FS by late 2009; and (3) complete a remedial action by the end of 2012. Partway through BIA’s implementation of the 5-year plan, in August 2010, BIA entered into a settlement agreement with EPA to conduct the RI/FS. Under the settlement agreement, BIA’s work is subject to EPA’s approval, and EPA will select the remedial action. As of the end of the 5-year plan period in 2012, BIA had completed the interim action studies, which found the dump did not pose an immediate threat to the wells, and it had implemented certain actions recommended by the studies, including installing a fence around the perimeter and conducting a detailed analysis of one location with high levels of uranium. BIA developed the RI/FS work plan but had not completed the plan’s required work. 
EPA had not selected a remedial action and, therefore, BIA had not begun or completed a remedial action. Further, BIA’s actions under the 5-year plan took longer than expected, for various reasons, which also contributed to the agency not meeting the targets. First, BIA spent nearly 1 year longer than expected conducting the interim action studies, and implementing the recommended actions took an additional year that had not been accounted for in the 5-year plan. BIA officials said they underestimated the amount of time needed to complete these efforts. Second, BIA spent more time developing the work plan for the RI/FS than had been anticipated, in part because EPA directed it to significantly revise the work plan. Under the settlement agreement, BIA was responsible for submitting a work plan for EPA approval that specified the activities and deliverables, as well as deadlines, for BIA in the development of the RI/FS. The work plan and its deadlines are legally enforceable once EPA approves the work plan. EPA approved the initial work plan developed by BIA in January 2011, and BIA issued a $2 million task award for its implementation in June 2011. However, in July 2011, 1 month later, EPA notified BIA that it would need to revise the RI/FS work plan, which significantly increased the amount of work to be performed. BIA then spent an additional year working with EPA revising the work plan, which EPA ultimately approved in July 2012. According to EPA correspondence to BIA, although it would delay completion of the RI/FS, the additional investigative work was necessary to resolve conflicting interpretations of data collected over the previous years of assessments and to support a defensible selection of a remedial action for the site. EPA officials said the additional work has yielded valuable information, including determining whether groundwater contamination at the site can reach nearby Hopi drinking water wells. 
Hopi tribal leaders, however, told us that although they appreciate the additional understanding that has been gained through the RI/FS, they are frustrated that the federal agencies have continued to dedicate resources to conducting additional assessments instead of cleanup actions, especially given that, as of 2013, BIA had overseen assessment work at the site for more than 10 years. Third, implementing the RI/FS has taken longer than BIA expected under the 2008 5-year plan, in part because BIA conducted additional work under the work plan at EPA’s direction. In addition to the work EPA directed BIA to add in 2011, EPA subsequently required BIA to conduct additional field investigations. EPA officials explained that the scope of an RI/FS is often changed in response to conditions found on the ground, and that the Tuba City RI/FS has been typical in that respect. According to EPA and BIA documents, conducting this additional fieldwork contributed to BIA missing some of the work plan deadlines. Moreover, project and contract management challenges faced by BIA have also contributed to the length of time spent on the RI/FS. BIA officials told us they had communication problems with the agency’s contractor for the RI/FS and performance problems regarding the quality and timeliness of the contractor’s deliverables. For example, in correspondence with the contractor, BIA noted multiple instances when the contractor was late in providing draft deliverables to BIA, which left BIA insufficient time to review the deliverables before they were due to EPA. In addition, when BIA completed its review of the deliverables, it found they did not all meet the terms and conditions of the contract. BIA also noted in its correspondence with the contractor that the contractor’s performance problems began soon after the contract was signed.
As the problems continued to mount, according to BIA officials, they worked informally through phone calls and e-mails to correct the performance problems; however, BIA did not formally notify the contractor of the problems and require corrective action until about 16 months after the problems began. During that time period, BIA modified the contract four times, each time increasing the work to be performed in accordance with direction from EPA; these modifications totaled nearly $1.6 million, about an 80 percent increase above the value of the original contract. By adding work to the contract without correcting the contractor’s poor performance or adding stronger performance provisions, BIA was effectively rewarding the contractor for its poor performance. In hindsight, BIA officials responsible for managing the contract told us, had they known the problems would not improve, they would have initiated formal action against the contractor sooner; however, they were reluctant to further delay the project. Had they terminated the contractor for default, BIA would have had to award a new contract, which would have taken a minimum of 90 days, plus the additional time needed to bring a new contractor up to speed to perform the contract. The BIA officials said, instead, they prioritized meeting the deadlines in the work plan and avoiding the delay of awarding a new contract. These officials told us that the RI/FS contract, initially valued at approximately $2 million, is not typical for their region and is much larger than any other contract they manage. For example, the next largest environmental contract in BIA’s Western region is worth $300,000. Learning from its challenges in managing the RI/FS contract will become even more important to BIA in the next few years as the agency moves from assessment to cleanup work after a remedial action is selected.
At that time, BIA officials said, their agency will award and manage a new contract, one that is even larger and more complicated and that will significantly increase costs. In August 2011, we reported that incorporating lessons learned from past contracts is an important element of successful acquisition planning when preparing to award a new contract. Through this process, agencies ensure that knowledge gained from prior acquisitions is used to refine requirements and acquisition strategies. Without examining lessons learned from managing the RI/FS contract and considering these lessons as part of the acquisition planning process for the remedial action contract, BIA could face contract management challenges on a larger scale. Further, according to EPA and BIA officials, BIA’s management of the project also contributed to BIA’s missing some legally enforceable deadlines in the work plan within months of EPA approving it in July 2012. Specifically, BIA did not comply with the settlement agreement’s terms for requesting an extension to these deadlines in the work plan. As a result, BIA was subject to stipulated penalties under the settlement agreement for the deadlines it missed. EPA officials told us the agency held the penalties in abeyance; as a result, EPA did not calculate the total amount of the penalties. As EPA noted in correspondence to BIA, the missed deadlines only led to a few weeks of direct delays to the work plan schedule, but they used much of the contingency, or slack, in the schedule, meaning any future delays could not be absorbed without directly lengthening the project. In 2013, according to EPA officials, BIA notified EPA that it was going to miss another work plan deadline; however, BIA again did not submit the extension request before the deadline passed, potentially subjecting it to additional stipulated penalties.
Further complicating its management of the project, we found that the schedule BIA used to manage its responsibilities under the RI/FS was not created based on best practices for effective scheduling. We have reported that a sound schedule is comprehensive, well-constructed, credible, and controlled. The RI/FS schedule generated by BIA’s contractor and approved by EPA minimally met these criteria. For example, we could not verify that the schedule included all the actions needed to complete the RI/FS, which is an essential practice in ensuring that the schedule is comprehensive. If a project schedule does not fully reflect the scope of the project, it can result in unreliable estimated completion dates and delays. In another example, neither BIA nor EPA regularly updated the schedule based on actual progress, an important aspect of a controlled schedule. BIA officials explained that they do not keep a copy of the schedule file that they can update; BIA relies on its contractor to update the schedule, and EPA maintains control of the master schedule file for the RI/FS. Without an updatable version of the schedule, BIA cannot effectively monitor its contractor’s progress and cannot evaluate the quality of changes to the schedule proposed by the contractor, which BIA then proposes to EPA for approval. According to BIA officials, not having information about the basis for the proposed schedule changes contributed to the agency proposing a new RI/FS schedule to EPA in 2012 that contained errors and was not achievable. Appendix II contains additional details about our analysis. A number of steps remain for BIA to meet the target of completing cleanup at the Tuba City Dump. As of February 2014, the full scope of remaining cleanup work—and an estimate of when it may be completed— had not been determined since the RI/FS was ongoing. 
BIA requested two extensions to the deadlines in the work plan in 2013; as a result, the current deadline for completion of the RI/FS is May 2014, more than 4 years after the completion date in the 2008 5-year plan. According to BIA officials, the May 2014 deadline may not be achieved. For example, BIA officials said they are expecting the schedule to change to allow for additional time for stakeholders’ review of a key draft document and for additional analysis requested by EPA in December 2013. In another example, BIA has continued to experience performance problems with its contractor related to timeliness and product quality. These performance problems prompted BIA, in January 2014, to send its contractor a second formal notification to take corrective action. Nevertheless, in the short term, EPA officials said they plan to conduct extensive outreach with local communities as they evaluate the remedial options for the site. Hopi officials we spoke with stressed the tribe’s concern over protecting their water sources in the area and told us that having a contaminated dump located on their land is affecting their ability to expand economic development. Because of these concerns, Hopi officials stated that the only acceptable solution is to remove the contamination from the site. DOE and EPA officials involved at the site told us, however, that the data collected thus far indicate the Hopi drinking water wells will not be affected by the dump, and there are other factors limiting development in the area, including the region’s scarcity of water. Based on two potential remedial actions for the site identified by BIA, the agency has estimated that the range of probable future cleanup costs is from $22 million to $72 million.
BIA created this estimate range in order to contribute to the Department of the Interior’s (Interior) environmental and disposal liability estimate, which is included in Interior’s annual financial statement, but we found the estimate was not generated according to the government and industry cost-estimating best practices identified in our 2009 cost estimating and assessment guide. According to BIA officials, the estimate was created according to Interior’s guidance. Nevertheless, according to best practices, cost estimates should be comprehensive, well-documented, accurate, and credible, which are the four characteristics of a high-quality cost estimate of any type, and BIA’s estimate does not fully reflect these characteristics. For example, the estimate did not completely define the program, an important aspect of a comprehensive estimate. In response to our questions about some aspects of the work scope that were included in the estimate, including whether future groundwater treatment was included, BIA officials stated that such treatment should be included in the estimate. However, after checking with the contractor that created the estimate, one BIA official involved with managing the project discovered that costs for groundwater treatment were not included in the estimate. Without fully accounting for all future costs, management will have difficulty successfully planning program resource requirements. In response, BIA officials said that they directed the contractor to include these costs in a revised estimate. These officials also said they did not apply all of the best practices when developing the estimate, in part, because it would not have been appropriate for BIA to expend significant resources developing a detailed cost estimate before the RI/FS is complete; once EPA selects a final remedial action, they said, BIA will work to apply cost-estimating protocols when it develops a more detailed cost estimate for the site.
Appendix III provides additional details of the results of our analysis of the cost estimate.

Agencies Have Not Estimated the Full Scope of Work, Time Frames, or Costs Needed to Address Uranium Contamination but Recognize That Significant Work Remains

The agencies that implemented the 2008 5-year plan have not identified the full scope of remaining work, time frames, or costs of fully addressing uranium contamination on or near the Navajo reservation, especially at abandoned uranium mines, but have recognized that significant work remains for addressing such contamination beyond the targets in the plan. As a result, decision makers and stakeholders do not have sufficient information about the overall remaining work, time frames, and costs to assess the overall pace of the cleanup efforts. Given that significant work remains to address contamination on or near the Navajo reservation, it is likely that it will take many decades and cost at least hundreds of millions of dollars in additional funding to make significant progress in this area.

Agencies Have Not Identified the Full Scope of Remaining Work, Time Frames, or Costs Needed to Fully Address Uranium Contamination, Especially at Abandoned Mines

As the House Committee on Oversight and Government Reform stated in its critique of the agencies’ initial short-term action plans:

“The draft action plans do not clearly delineate a course of action for fully resolving the problem. Given the extent of the contamination that is already known, it is obvious that the contamination cannot and will not be cleaned up in the 3- to 9-month timeframes covered by the draft plans. We need a 5-year plan from each agency that sets out specific cleanup objectives, specific timeframes for achieving those objectives, and the new authorities and funding, if any, necessary to achieve those objectives. These plans will provide the Congress, the Navajo Nation, and the public with concrete benchmarks against which to measure the progress of the federal agencies in cleaning up the contamination.”
In its critique of the short-term action plans, the committee asked the agencies for additional information to understand the full scope of the cleanup effort. The same critique is also generally applicable to the 2008 5-year plan because it too did not contain information on the full scope of the cleanup and instead provided targets for achieving incremental progress under the plan. For example, it is still unclear what percentage of the overall cleanup effort was expected to be achieved in the 2008 5-year plan or how many additional 5-year plans may be necessary to fully address the contamination. As we discussed above, the agencies stated that the 2008 5-year plan focused on addressing over the 5-year period what they identified as the most urgent uranium-related problems and was not intended to be a long-term plan for dealing with the entirety of the contamination. EPA officials involved with coordinating the development of the agencies’ 2014 5-year plan told us this plan also will not include the full scope of the cleanup work. EPA officials cited a variety of reasons for not having identified estimates of the full scope, time frames, or costs of cleanup, including at the abandoned mines. These officials explained that providing such high-level, general estimates of required work, time frames, or costs is not consistent with how EPA cleans up contaminated sites under CERCLA. The agency typically develops detailed, site-specific information on a site-by-site basis, and then estimates costs and schedules based on that specific information. They said the agency generally does not create even rough estimates if cleanup actions have not been selected or if they do not know the total number of mines that will need cleanup. These officials also said that a number of other uncertainties remain. More specifically:

Incomplete information about the extent of contamination.
According to the January 2013 report summarizing the agencies’ accomplishments under the 2008 5-year plan, EPA and NNEPA’s actions resulted in an improved understanding of the scope of uranium contamination at the mines on the reservation, and the agencies identified and prioritized 43 mines that pose the highest risk to surrounding communities. EPA, however, does not know the full scope of cleanup actions that will be necessary to address these highest priority mines (see app. IV for more information about the status of each of the 43 mines), and EPA officials said they expect that some number of the rest of the 521 abandoned mines will also need cleanup, but they do not know what that number will be. EPA officials said that they need additional information about, for example, the location and volume of waste present at each site before they can identify the scope of cleanup actions. However, EPA officials told us they have begun making assumptions about what work may be needed at the highest priority mines based on the site-specific information they have already collected. These officials stated that they expect that most of the highest priority mines will need removal actions, involving excavating and disposing of from a few thousand to hundreds of thousands of cubic yards of mine waste at each mine, and a few of these mines may warrant longer-term remedial actions where surface water and/or groundwater may be contaminated.

Uncertainty about potentially responsible parties.

According to EPA officials, the total number of abandoned mines that will have a potentially responsible party to lead or contribute funding for assessment and cleanup work is unknown, and this number will affect the scope of work, time frames, and costs necessary to clean up abandoned mines using federal funds.
As of February 2014, EPA had signed agreements with potentially responsible parties regarding 24 mines and received money from a bankruptcy settlement for use at another 49 mines—these actions covered 9 of the 43 highest priority mines. EPA Region 9 officials said they are continuing to pursue potentially responsible parties, but the total number of mines that could ultimately be subject to agreements with such parties may be limited, in part, because of difficulties associated with identifying parties more than 50 years after mines were abandoned. Further, there are other reasons why the government’s ultimate share of the cleanup costs is unknown. If the federal government is a potentially responsible party at a site, it is liable for the cleanup costs even if a viable nonfederal potentially responsible party is also identified. Also, in November 2013, the Navajo Nation formally stated its intent to file a claim against the United States, and DOE in particular, for reimbursement of its cleanup costs at the abandoned mines on the reservation if a cooperative approach is not successful, which could further affect the government’s share of those costs.

Uncertainty about disposal options.

Another uncertainty that affects the scope, time frames, and costs of the remaining abandoned mine work is where the mine waste will ultimately be disposed of. Currently, the Navajo Nation’s position is that all remaining contaminated materials from uranium mines and processing sites should be excavated and disposed of off Navajo lands. As a result, it is unclear where the volumes of mine waste will be disposed of. As of January 2014, the Navajo Nation was working on drafting legislation to create a Uranium Commission under Navajo law that is expected to recommend options for mine waste disposal. However, according to a Navajo Nation official involved with the process, this commission is not expected to make any disposal recommendations until sometime in the next few years.
Even when significant uncertainties regarding the scope of work and available funding remain, however, we have reported that agencies can create high-level estimates of costs and time frames that can be useful for decision makers and stakeholders. For example, EPA can base these estimates on the information it currently has regarding the removal actions that may be necessary at most of the highest priority mines. Specifically, according to our 2009 cost estimating and assessment guide, which compiles government and industry cost-estimating best practices, agencies can create high-level cost estimates—for example, rough order of magnitude estimates—even for efforts with significant uncertainties, and these estimates can inform decision makers as they evaluate resource requirements. These cost estimates are often in the form of a range to correspond with the level of uncertainty associated with the estimate and can be developed in short time frames of weeks or months. Although not budget-quality estimates, these types of estimates can be used in planning and can be created before detailed requirements are known. Typically, according to our 2009 cost estimating and assessment guide, an estimate should be revised and contain more detail as the agency obtains more site-specific information and the effort becomes better defined, and the estimate should become more certain as actual costs begin to replace earlier estimates. For example, as EPA obtains more detailed information about the site-specific characteristics at each of the highest priority mines, it would be able to update the scope of its estimate, bringing more certainty. According to our 2009 cost estimating and assessment guide and our 2012 schedule assessment guide, agencies can also create high-level schedules that are linked to cost estimates, based on stated assumptions, and that incorporate uncertainties regarding future activities through a schedule risk analysis.
The risk analysis provides agencies with a range of dates that correspond with levels of confidence in the ability to meet those dates. As further evidence that it is possible to develop these types of high-level estimates, the National Defense Authorization Act for Fiscal Year 2013 requires the Secretary of Energy, in consultation with the EPA Administrator and the Secretary of the Interior, to undertake a review of and prepare a report on abandoned uranium mines across the United States that previously provided uranium ore for the nation’s nuclear defense activities. According to DOE documents, the agency plans to issue a report in July 2014 that will include information about the potential costs and feasibility of reclaiming or remediating abandoned uranium mines, including the mines on or near Navajo lands. According to a DOE presentation on the draft report, the report is expected to contain cost estimate ranges based on the amount of uranium ore produced at the mines, among other assumptions. A DOE official involved with developing the draft report told us that the cost estimate ranges are not specific to mines on the Navajo reservation but are based on production size categories of mines across the United States that provided ore to the Atomic Energy Commission. This work by DOE could be a good starting point for a high-level cost estimate to clean up the uranium mines on or near the Navajo reservation; however, based on the statutory requirements for the study, we do not anticipate that it will provide information on the full scope, costs, or time frames of the other activities covered in the 2008 5-year plan.
Although EPA, DOE, BIA, IHS, and NRC provided some information on high-priority cleanup issues in their 2008 5-year plan, the agencies did not provide the House Committee on Oversight and Government Reform with overall estimates of the remaining scope of work, time frames, and costs of fully addressing uranium contamination on or near the Navajo reservation as requested. Without an estimate of the remaining scope of work, time frames, and costs to fully address uranium contamination, especially at the abandoned mines, decision makers and stakeholders neither have the information they need to assess the overall pace of the cleanup efforts, nor do they have a basis to put the agencies’ accomplishments under the 2008 5-year plan into perspective. Navajo Nation officials and other stakeholders told us that they want the federal agencies to describe the full scope of work that remains to fully address the contamination.

Agencies Recognize That Significant Work Remains to Address Uranium Contamination

Although the agencies have not identified the full scope of work that remains to address uranium contamination on or near the Navajo reservation, through implementing the 2008 5-year plan, federal and tribal agencies have compiled information that shows that significant work is needed. For some plan objectives, the agencies have developed a significant long-term scope of work, including the objectives of providing regulated, piped drinking water to Navajo residents in uranium-affected areas, treating groundwater contamination at the former processing sites, and addressing contamination at the abandoned mines. For example, to help lower the number of Navajo residents without access to regulated, piped drinking water in their homes and to continue reducing the use of unregulated and potentially contaminated water sources, IHS developed a list of 145 potential water infrastructure projects that would serve approximately 3,300 homes that do not have piped water.
IHS, however, considers just 36 of the 145 projects—serving about 1,000 homes—to be economically feasible to fund, according to IHS documents, so it is unclear how many of the 145 projects will ultimately be undertaken by IHS. In another example, according to DOE officials, the agency will continue its active groundwater remediation work at the Rare Metals and Shiprock processing sites, but the future scope of work at the sites is unclear. This is, in part, because the remediation systems, which were designed to address the millions of gallons of water contaminated by mill tailings that entered the ground at these sites, are not performing as anticipated. As a result, DOE has not made as much progress toward meeting water quality standards as it originally projected. According to DOE officials, they plan to revise the two sites’ groundwater compliance action plans beginning in 2014 and 2015, and these revised plans will dictate the future scope of work at these sites. For EPA’s work at the abandoned uranium mines, although many uncertainties remain about the full scope of work needed to clean up the mines, EPA and Navajo Nation officials said that they recognize that the amount of work will be significant. During the 2014 5-year plan period, EPA officials said, in order to obtain additional information needed to select removal or remedial actions at the highest priority mines, the agency plans to conduct additional assessments at 41 of the 43 highest priority mines, beyond the initial screening information gathered during the 2008 5-year plan period, in cooperation with potentially responsible parties where applicable. EPA and NNEPA officials said these additional assessments range from, at a minimum, scanning the entire site to identify the likely boundaries of contamination and conducting tests to estimate the volume of waste present, to conducting more thorough assessments.
These officials told us they do not anticipate conducting cleanups at any mines without potentially responsible parties that would require full funding from EPA during this time period. Appendix V contains additional information about the future work associated with the other 2008 5-year plan objectives not discussed here.

Full Scope of Additional Work Will Likely Take Many Decades to Complete

Given what is known about the significant scope of work that agencies have recognized as remaining to address uranium contamination on or near the Navajo reservation, it is clear that, at current funding levels, it could take at least many decades to complete. For example, at the abandoned uranium mines, EPA officials said that they are assuming that most of the remaining 42 of the 43 highest priority abandoned mines will need additional removal actions. For one of the highest priority mines—the Skyline mine—EPA has conducted a removal action to clean it up, but the waste remains on the reservation. In the absence of an EPA-estimated time frame, we roughly estimated time frames using information from EPA officials about the number of removal actions that they said they assumed EPA will need to fund and the costs of the agency’s removal action for the Skyline mine. Specifically, assuming viable nonfederal potentially responsible parties can be identified for about half of these highest priority mines, which EPA officials said is a reasonable early assumption, federal funds would be necessary to cover the full cost of removal actions at the other half of these mines, or 21 mines. Over the 2008 5-year plan period (i.e., 2008 through 2012), EPA funded the removal action at the Skyline mine from Region 9’s Superfund removal budget, spending about $7 million. Assuming Region 9’s Superfund removal budget funding levels from the 2008 5-year plan period continued, it would take EPA 105 years to fund the removal actions at 21 of the highest priority mines.
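The arithmetic behind this rough estimate can be sketched in a few lines, using only the figures stated above; this is a back-of-envelope illustration of our assumptions, not an EPA projection.

```python
# Back-of-envelope reproduction of the rough time-frame estimate in the text.
# All inputs are figures and assumptions stated in this report.

mines_needing_federal_funding = 21  # about half of the 43 highest priority mines
skyline_cost_millions = 7           # cost of EPA's removal action at the Skyline mine
plan_period_years = 5               # 2008 through 2012; the budget covered one such removal

# At the 2008-2012 pace (one roughly $7 million removal funded per 5-year period):
years_needed = mines_needing_federal_funding * plan_period_years
minimum_cost_millions = mines_needing_federal_funding * skyline_cost_millions

print(years_needed)           # 105 years
print(minimum_cost_millions)  # 147, i.e., about $150 million at a minimum
```

The same two inputs (21 mines, $7 million per removal) also underlie the "about $150 million" minimum cost figure discussed later in this report.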
According to our rough estimate, it would take even longer to address the unknown number of additional mines without potentially responsible parties that will also need cleanup but have not yet been identified. Moreover, during the decades-long time frames for conducting cleanup at the highest priority abandoned mines, the Navajo people living near these mines could continue to be exposed to elevated radiation levels that pose a high risk, either by visiting the abandoned mines or by inhaling or ingesting contaminated dust that migrates from the mines into communities. For example, as of February 2014, 38 of the 43 highest priority mines remained physically accessible and/or there were no signs communicating the radiation dangers present at the sites. We visited one such mine—the A&B #3 mine near Cameron, Arizona—in July 2013, where EPA measured radiation levels that were 37 times above background (see fig. 8). The mine is not fenced, has no warning signs, and is located within one-quarter mile of nearby homes. According to a local government official and a Navajo agency official, they have seen evidence that people visit the mine site; they told us they have found children’s toys at the site and pointed out vehicle tracks. In addition, the time frames associated with addressing contamination under other plan objectives are lengthy as well. For example, IHS officials estimated that it would take approximately 38 years to complete 36 of the 145 projects it identified as necessary to provide regulated, piped drinking water to residents of areas affected by historic uranium mining, assuming IHS were the sole contributor of funds and based on IHS’s Navajo Area water program budget for fiscal year 2013. During this time frame, residents may continue to be exposed to harmful constituents potentially found in unregulated drinking water sources.
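IHS's 38-year figure can be unpacked the same way. The sketch below uses the roughly $35 million total cost for the 36 feasible projects and the roughly $195 million cost for all 145 projects cited elsewhere in this report; the implied annual funding level and the 145-project extrapolation are derived here purely for illustration and are not published IHS budget figures.

```python
# Illustrative arithmetic around IHS's drinking-water project estimates.
# Project counts and cost figures come from this report; the implied annual
# funding level is a derived illustration, not a published IHS budget number.

feasible_projects = 36      # projects IHS considers economically feasible
feasible_cost_m = 35        # about $35 million total for those 36 projects
estimated_years = 38        # IHS estimate, assuming IHS is the sole funder

implied_annual_funding_m = feasible_cost_m / estimated_years
print(round(implied_annual_funding_m, 2))  # about 0.92 ($ millions per year)

# Extrapolating (for illustration only) to all 145 identified projects,
# estimated at about $195 million, at the same implied pace:
all_projects_cost_m = 195
print(round(all_projects_cost_m / implied_annual_funding_m))  # about 212 years
```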
EPA and NNEPA officials told us that although the official position of the Navajo Nation is that unregulated water sources are not fit for human consumption, they continue to receive reports that Navajo residents use these sources, in part because there is no alternative, as well as for other reasons. In addition to the health dangers posed by drinking uranium-contaminated water, which ATSDR and others have linked to kidney disease, IHS and EPA consider the general lack of regulated drinking water to be a health risk because contaminants often found in unregulated sources, such as E. coli bacteria, can pose an immediate health danger to people who consume them. Appendix V contains additional information about the potential future time frames associated with the other 2008 5-year plan objectives not discussed here.

The Scope of Additional Work Will Most Likely Cost Hundreds of Millions of Dollars

For the significant amount of remaining work to address uranium contamination that federal agencies recognize is needed, associated costs appear likely to exceed hundreds of millions of dollars. For example, in the absence of a cost estimate from EPA for work at the abandoned mines, our rough estimate based on costs EPA previously incurred at the Skyline mine indicates that EPA's costs to fund removal actions at just half of the highest priority mines, or 21 mines, could be a minimum of about $150 million. This is a conservative, low-end estimate for a number of reasons but, most importantly, because it does not include costs to transport and dispose of waste off-site. According to EPA officials, this is one of the most significant factors influencing cleanup costs; disposing of waste off-site is consistent with the Navajo Nation position that such waste be removed from Navajo lands.
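The low-end cost figure follows directly from the same per-mine assumption used in the time-frame estimate. This is a sketch of our rough estimate using the figures in this section, not an agency cost estimate, and it deliberately excludes off-site waste transport and disposal.

```python
# Rough reconstruction of the low-end cost estimate for EPA-funded removal
# actions; excludes off-site waste transport and disposal costs, as noted
# in the text.
epa_funded_mines = 21           # half of the 42 remaining highest priority mines
cost_per_removal = 7_000_000    # ~$7 million, based on the Skyline mine removal

low_end_total = epa_funded_mines * cost_per_removal
print(low_end_total)            # 147000000, i.e., "a minimum of about $150 million"
```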
Other federal agencies have developed cost estimates for addressing contamination under other plan objectives, and the costs for these efforts appear to reach into the hundreds of millions of dollars as well. For example, IHS officials estimated that the 36 economically feasible drinking water projects would cost about $35 million, and that all 145 projects would cost about $195 million to complete. In another example, DOE has estimated that its various actions at the former processing sites will cost about $193 million over the next 75 years, but DOE officials said that this estimate will need revision based on new groundwater remediation plans at the Rare Metals and Shiprock sites. Appendix V contains additional information about the potential future costs associated with the other 2008 5-year plan objectives not discussed here.

Agencies Face Challenges Meeting Funding Needs and Engaging Affected Communities, Though Opportunities Exist for Improving Relationships with the Navajo and Hopi People

Federal agencies face a variety of challenges in continuing to address uranium contamination on and near the Navajo reservation, including securing adequate funding and effectively engaging tribal communities. However, federal and Navajo agency officials and community members we spoke with identified opportunities for improving relationships with the Navajo and Hopi people, which could help the federal agencies more effectively engage these communities. In addition, other opportunities exist to enhance collaboration among federal agencies and with Navajo agencies.

Adequate Funding for Continued Efforts to Address Uranium Contamination Is a Key Challenge

One key challenge that federal agencies face is difficulty meeting funding needs with available federal resources.
Specifically, according to EPA Region 9 officials, funding for two of EPA's efforts, assessing and cleaning up abandoned uranium mines under its Superfund removal program (especially mines without viable private potentially responsible parties) and providing clean and safe drinking water, is expected to decrease from the levels available from 2008 through 2012. EPA officials told us that reducing the human health risks associated with abandoned uranium mines on the Navajo reservation is a priority and that the agency intends to continue providing funding as resources allow. However, these officials stated that declining Superfund removal program resources nationally will likely result in a reduction in funding available to conduct removal actions at Navajo abandoned mines from the level available in previous years. An EPA Region 9 official familiar with funding tribal drinking water projects on the Navajo reservation also told us that federal resources available for drinking water infrastructure projects are expected to continue to decrease. In addition, as federal funding for abandoned mines and water projects becomes more constrained, such projects on the Navajo reservation may be less likely to receive federal funds because the programs that fund them prioritize projects based on risk. For example, under the EPA Superfund removal program, which EPA has used to pay for assessment and cleanup at the Navajo abandoned mines, projects that address emergency situations, such as toxic waste spills, are prioritized for funding over projects that generally are not considered emergencies, such as the abandoned mine projects. EPA officials have prioritized the Navajo abandoned mine projects, but these officials told us that there is already a high demand for Region 9's Superfund removal program funds, with more projects warranting selection than available funding can cover.
Moreover, with few exceptions, federal agencies currently have limited options for other sources of federal funding for uranium-related work on the Navajo reservation. For example, according to EPA Region 9 officials, one possible source of funding for the typically longer-term remedial actions that may be warranted at a few of the highest priority mines without potentially responsible parties is the Superfund Trust Fund. Although the Superfund Trust Fund can be used for removal actions at all sites, it can only be used for remedial actions at sites that are included on the NPL. None of the Navajo abandoned mines are currently listed on the NPL, and, according to EPA Region 9 officials, only a few of the highest priority mines on the reservation may qualify for listing. In general, according to these officials, to score high enough in the Hazard Ranking System to be included on the NPL, an abandoned uranium mine would need to impact a sufficient number of people using a drinking water supply contaminated by the mine, expose a sufficient population to uranium through air or soil contamination, or impact a sensitive environment such as a wetland. However, given the locations and characteristics of the mines on the reservation and the population or environment affected, these officials said they believe most mines do not meet these criteria. For example, EPA officials told us that even the Navajo mines located near communities affect relatively small populations given the dispersed nature of the Navajo population, and the surrounding desert conditions mean that most of the mines do not appear to impact surface water. EPA officials said that community members in rural communities, including Navajo communities, have expressed a deep frustration with the ranking system used to determine sites’ inclusion on the NPL because they feel that the system unfairly discriminates against small communities. 
Nevertheless, EPA officials said they will continue to pursue including some of the mines on the NPL in order to use the Superfund Trust Fund to pay for remedial actions. Another potential source of federal funding is Interior's Central Hazardous Materials Fund (Fund), an appropriation available to pay for Interior's CERCLA response actions. BIA received $162,000 from the Fund for the Tuba City Dump site in 2008 and 2009. According to BIA officials, this money was used, in part, to search for another potentially responsible party at the site, pay for project oversight, and hire a technical consultant. In 2011, Interior issued a memorandum stating that CERCLA response actions on Indian trust lands were no longer eligible to receive money from the Fund. BIA officials told us that, based on this memorandum, they believed they were no longer eligible to obtain money for assessment or cleanup work at the Tuba City Dump. These officials said that they have not requested funding from the Fund to pay for assessment work and do not plan to request funding for the eventual remedial action at the site. However, Interior's memorandum also stated that the Fund would continue to support cleanup-related activities on Indian trust lands at sites that had received such funding in the past, including sites undergoing CERCLA assessments. According to a senior Fund official, because the Tuba City Dump site received Fund money prior to 2011, it is eligible to receive additional funds, including funding for the remedial action work. Pursuing this funding is important for two reasons: (1) once the remedial action has been selected, BIA's funding requirements are likely to increase substantially (BIA estimated the remedial action could cost about 3 to 10 times as much as the RI/FS), and (2) federal standards for internal control encourage agencies to strive for efficiency in their use of resources.
Since the Tuba City Dump is located in BIA's Western region, the agency has paid for its work at the site out of that region's budget. According to BIA officials, BIA has prioritized the Tuba City Dump project over other projects in the region that also need funding, which has resulted in some projects not receiving funding when BIA's costs at the Tuba City site were especially high. In addition, EPA is pursuing two other funding sources to contribute to the work at the abandoned mines. First, EPA and a potentially responsible party have reached an agreement to settle a pending lawsuit that, if approved by the judge presiding over the lawsuit, could result in approximately $1 billion for the agency's and the Navajo Nation's cleanup efforts at 50 mines and other contaminated sites. Second, EPA has sought to involve DOE in assessments and potentially cleanups at mines that do not have potentially responsible parties. In June 2013, EPA Region 9 corresponded with DOE, stating that DOE's financial assistance with developing and implementing an approach to conducting assessments, interim actions, and cleanups at the highest priority mines on the Navajo reservation is essential. Senior DOE officials told us they interpreted the letter as encouraging DOE to play a larger role in addressing the contamination from the mines on the reservation by funding assessment and cleanup at some of the mines. In January 2014, DOE responded to EPA that, although the National Defense Authorization Act for Fiscal Year 2013 required a DOE report on abandoned uranium mines, DOE was not given budget authority in fiscal year 2013 to remediate uranium mines, and its authority to take remedial actions under UMTRCA, which was limited to the former uranium processing sites and vicinity properties, has expired.

Engaging Tribal Communities Continues to Be a Key Challenge

Another key challenge faced by federal agencies is identifying ways of more effectively engaging with tribal communities.
According to outreach plans prepared by DOE and EPA, and other documents prepared by various federal agencies, engaging with communities is important for a number of reasons, including soliciting feedback on the decision-making process, obtaining meaningful input into cleanup decisions, and working with community members to determine how best to limit exposures to uranium contamination. During the 2008 5-year plan period, federal agency officials increasingly recognized the importance of community engagement and began building bridges in the communities where they conducted work, both by developing relationships themselves and by funding Navajo agency officials' outreach work. Even with the federal agencies' increased attention to outreach activities, agency officials and community members we spoke with said that the need for increased and improved outreach is great. Nevertheless, federal agencies face challenges in their ongoing efforts to effectively engage Navajo communities for at least four reasons: (1) building trust may require significant time and effort on the Navajo reservation, (2) the number of outreach staff is small compared with the size of the reservation, (3) commonly used tools for engaging communities may not be effective in Navajo communities, and (4) federal agencies have not coordinated their outreach efforts.

Building Trust

Agency officials and community members we spoke with said that although building trust among the Navajo people is necessary to effectively engage local communities, it will take significant time and effort. One reason for this is that many members of the Navajo community distrust outsiders—especially those representing the federal government—because of historical events related to both uranium mining and the government's treatment of Native people.
The federal government's inconsistent attention to uranium-related issues on Navajo lands in recent decades may also have contributed to a lack of trust among community members. For example, according to EPA Region 9 officials, EPA compiled a list of potentially contaminated houses on the Navajo reservation in the 1970s, but it did not take steps to ensure that all houses on the list were assessed for contamination until it began work under the 2008 5-year plan. Distrust of the federal government is also exacerbated by concerns about ongoing issues, such as fears that federal agencies will issue permits for new uranium mining near the Navajo reservation before contamination from previous mining is fully addressed. Another reason building trust will be a challenge is that Navajo community members are concerned that the federal agencies that worked on the 2008 5-year plan may not have a long-term commitment to addressing uranium contamination, according to Navajo Nation officials and some stakeholders. For example, Navajo community members have expressed disappointment that the 2008 plan encompassed just 5 years' worth of work when, in their view, fully addressing the effects of contamination will take decades of commitment. A long-term commitment—along with completing cleanup work—could help build trust with Navajo communities. One challenge the federal agencies will continue to face in addressing these concerns, however, is that, in some cases, the agencies are limited in the types of long-term commitments they can make. For example, EPA officials explained that the agency cannot commit to cleaning up even the 43 highest priority mines at this time because they do not have dedicated funding for addressing the highest priority mines that do not have potentially responsible parties and are not listed on the NPL.
This is in contrast to the situation at the former uranium processing sites, where DOE must prepare and implement a long-term surveillance plan for disposal sites, in accordance with NRC regulations implementing UMTRCA. In addition, building trust is a challenge because Navajo Nation officials and some stakeholders told us they are frustrated by what they see as examples of environmental injustice. These are instances when uranium contamination on the Navajo reservation appears to be treated differently than contamination on non-Indian lands, such as the community in Moab, Utah, where DOE is excavating a large mill tailings pile and disposing of it elsewhere. In another example, high-level Navajo officials and others have said that the release of radioactive materials from the uranium processing site near Church Rock, New Mexico, has received far less attention nationally than the radioactive release at Three Mile Island—which occurred 4 months earlier—although the amount of radioactive materials released at Church Rock was significantly greater. Finally, some tribal agency officials we spoke with told us they believe the federal agencies had fostered mistrust by sometimes overstating their progress in addressing uranium contamination on the Navajo reservation. Specifically, some high-level Navajo Nation officials have stated that they believe the federal agencies have understated the scope of the uranium contamination problem on the Navajo reservation and have overstated the federal agencies' efforts in addressing the problem. For example, Navajo Nation officials said they were frustrated that the federal agencies' January 2013 summary report, published at the end of the 2008 5-year plan period, highlighted the agencies' accomplishments but did not identify or communicate the larger context: that, overall, significant progress has not been made in addressing uranium contamination on the reservation.
Limited Number of Outreach Staff

Federal and tribal agency officials we spoke with said that the number of federal and tribal agency outreach staff working on engaging Navajo communities about uranium-related issues is very small compared with the size of and conditions on the reservation. Outreach staff are responsible for engaging communities spread across the 24,000 square-mile reservation, which spans three states. Many of these communities are not only remote but also difficult to access because of harsh terrain and rough roads.

Dedicated outreach staff. As of November 2013, EPA Region 9 had the full-time equivalent of 1.5 outreach staff working on Navajo uranium issues, including only one staff member who speaks Navajo. EPA outreach staff are not based on the Navajo reservation, and travel time between the reservation and Region 9 headquarters in San Francisco limits the amount of time that staff are able to spend engaging communities. In addition to EPA outreach staff, NNEPA has one outreach staff member dedicated to uranium issues and one additional staff member who incorporates outreach in her work. NNEPA's dedicated staff person, however, is responsible for activities in addition to community outreach, such as interfacing with the media on uranium-related issues, and this staff person told us that she cannot meet all the outreach needs put before her. For example, she said that coordinating with IHS on outreach events could be a full-time position, but that she is limited by other demands on her time.

Other staff conducting outreach. Other federal and tribal agencies that worked on the 2008 5-year plan conduct outreach as part of their activities, but they do not have dedicated staff working full-time on uranium-related outreach to Navajo communities.
For example, IHS has two staff members who engage with communities on uranium issues, in addition to their other responsibilities, and DOE has one staff member who performs outreach activities in addition to her site management responsibilities. The Navajo Abandoned Mine Lands Reclamation/UMTRA department's public affairs staff, under the UMTRA program's cooperative agreement with DOE, has engaged Navajo communities to discuss concerns about former processing sites located on the reservation. In addition, the Navajo Nation Division of Health has used ATSDR funding to hire staff who conduct outreach activities as part of their responsibilities related to implementing the Navajo Birth Cohort Study.

Limitations of Common Outreach Tools

For a variety of reasons, commonly used tools for communicating with communities—such as disseminating written materials, including brochures or e-mails, and putting up signs and fencing off contaminated areas—may generally be less effective on the Navajo reservation. For example, written materials may be less effective because Navajo and Hopi are traditionally spoken languages—not written languages—and many community members learned English as a second language. Also, many residents do not have Internet or telephone service in their homes. According to EPA and NNEPA officials, the most effective way to communicate with many members of Navajo communities is through face-to-face interactions, which require trusted native speakers and are more time-consuming than written communications. Furthermore, although signs and fences may be used to communicate information about risks from contamination, they may be less effective on the Navajo reservation. In part because of differences in how Navajo people traditionally view the land, it is generally not acceptable to restrict the use of reservation lands, although there are some exceptions, such as those related to grazing uses and home sites.
As a result, knowledgeable EPA officials told us they did not believe that signs and fences would be sufficient to limit access to contaminated areas because they felt signs and fences would be disregarded. As an example, these officials told us that a mining company had erected a fence to restrict access to an area contaminated by uranium, but, rather than staying out of the area, a community member had instead used the fence to contain his livestock, confining them in the very area the fence was intended to keep them out of. Furthermore, according to these officials, physical structures such as signs and fences are difficult to maintain in remote areas of the reservation, where vandalism and theft pose challenges. Moreover, providing information, regardless of delivery method, may itself be a limited tool for changing behavior because, in many cases, no acceptable alternatives are available. For example, according to Navajo agency officials and community members with whom we spoke, some community members—because they do not have a better alternative—continue to get their drinking water from unregulated livestock wells that may be contaminated with uranium and other toxins, even though some of these community members understand that doing so is unsafe. Some community members we spoke with said that they used unregulated water sources for domestic purposes, such as cooking and drinking, and that more education would not be effective in changing this behavior until better alternatives were made available.

Limited Federal Agency Collaboration on Outreach Efforts

Federal agencies that worked on the 2008 5-year plan have not generally coordinated their outreach efforts. Although the agencies began hosting joint workshops for stakeholders in 2008, according to agency officials, the agencies generally have conducted their own public meetings in communities without inviting other agencies to participate.
Agency officials told us that not coordinating outreach poses a challenge to effectively engaging communities because community members often expect these meetings to cover a variety of uranium-related issues, regardless of whether those issues fall within the jurisdiction of the agency present. For example, according to an EPA official we spoke with, when EPA conducted outreach related to abandoned uranium mines, community members often had questions about other uranium-related topics, such as health effects. In some cases, the limited scope of issues covered in community meetings has caused significant frustration among community members, according to EPA and IHS officials. An EPA official told us that this may hamper the efforts of outreach staff to build relationships in tribal communities. Because the costs of attending community meetings can be high for both federal agencies and community members—many of whom travel significant distances to attend the meetings—EPA officials said they realized it is important to maximize each contact that they have with affected communities. Officials from multiple agencies told us they recognize the value in coordinating on outreach and have begun to coordinate their efforts by, for example, holding joint community meetings. For example, in March 2012, IHS and DOE met jointly with community members to discuss the Shiprock former uranium processing site, including concerns about health impacts. In another example, in 2013, EPA and IHS jointly hosted a health screening in one Navajo community for residents who had been living in contaminated homes that were being demolished through EPA's actions to address and replace contaminated structures.
Opportunities Exist to Improve Federal Agencies' Relationships with the Navajo and Hopi People

Federal and Navajo agency officials and community members we spoke with identified a number of opportunities that federal agencies could pursue to improve relationships with the Navajo people, as well as with the Hopi people affected by the Tuba City Dump. Opportunities identified for improving relationships with tribal communities include the following:

Provide information on long-term scope of work. Federal officials and community members identified opportunities for federal agencies to provide a more complete picture of the scope of the uranium contamination problem and their progress toward addressing the problem. For example, some community members, including participants at one of the agencies' stakeholder workshops, told the agencies they would like to see the next interagency plan cover a period longer than just another 5 years because they believe it is clear that the amount of work remaining will take significantly longer than 5 years. Stakeholders said that including information about the long-term scope of work in the next plan would increase their ability to hold the agencies accountable and provide a benchmark against which they can measure the agencies' progress.

Conduct in-person outreach. Federal officials and community members identified opportunities for federal agencies to improve their relationships with Navajo communities—and the Hopi people affected by the Tuba City Dump—by conducting in-person outreach where possible, although such methods are resource intensive. For example, in a community involvement plan created to guide its outreach related to the Tuba City Dump, EPA noted that distributing information in small group or door-to-door settings assists in developing trust and keeping misunderstandings of new materials and information to a minimum.
According to the plan, the Hopi people have regularly requested that federal agencies rely on in-person outreach methods when feasible. The plan stated that although it is not feasible to distribute all information in person, doing so conveys to community members that they are important and are part of the process.

Establish agency offices on or near the reservation. One way agency officials identified to establish a more constant presence on the reservation would be for federal agencies that do not already have offices nearby, including EPA and DOE, to set up offices in the area and assign technical and outreach staff to them. This would increase the amount of time that staff can interact with communities, since EPA and DOE staff typically travel to the reservation from California or Colorado. According to a Navajo Nation official we spoke with, having staff on or near the reservation would increase the federal agencies' ability to connect with communities, especially since it would help increase their cultural awareness and sensitivity. One EPA staff member who spent 2 months working on-site during a mine cleanup said that his consistent presence during that time—as well as the extensive outreach he conducted over a longer period—allowed him to build strong relationships with community members, which in turn increased that community's acceptance of the cleanup remedy.

Partner with community organizations. Some stakeholders we spoke with said that opportunities may exist for federal agencies to more effectively engage Navajo communities by partnering with trusted community organizations.
For example, the president of a nonprofit community organization that works to ensure that the Navajo people—especially those affected by uranium contamination in a remote region of the reservation—have access to safe drinking water and economic development, among other things, told us that he would welcome a partnership with federal agencies to conduct outreach on uranium-related issues. Such a partnership would take full advantage of the organization's existing connection with Navajo communities. Representatives from this organization and others told us that community organizations are often a trusted source of information, and their involvement would lend credibility to federal and tribal agencies' engagement efforts.

Promote job creation and training. Navajo officials we spoke with also told us that the federal agencies could help improve relationships by identifying opportunities to promote job creation and training on the Navajo reservation as part of the efforts to address uranium contamination. Officials we spoke with said the federal government should provide more funding for new positions within the tribal agencies to address uranium contamination. They said that, in 2007, the Navajo Nation had identified the need for 20 new full-time employees within NNEPA to address uranium contamination, but that federal agencies awarded funding for just 2 additional employees from 2008 to 2012. Tribal officials also said that they would like to see the federal agencies provide job training programs similar to the one that EPA offered during the 5-year plan period.

Issue a formal apology. Some stakeholders, including Navajo community members, told us they felt that receiving an official apology from the federal government for failing to ensure that the companies conducting uranium mining to support U.S. nuclear weapons development were protective of the environment and public health would go a long way toward improving relationships.
Additional Opportunities Exist for Federal Agencies to Enhance Interagency Collaboration and Collaboration with Navajo Agencies

Agency officials we spoke with said that additional opportunities exist for the federal agencies to enhance both interagency collaboration at the federal level and collaboration with Navajo agencies. Specifically, EPA, IHS, and DOE officials identified a number of opportunities for increased interagency collaboration on efforts to engage tribal communities. For example, an IHS official involved with conducting uranium-related health screenings told us that the joint health screening event conducted with EPA in 2013 was a success and could be duplicated in other affected communities. This official also said there may be additional opportunities for enhanced interagency collaboration to help ensure that the Navajo people receive health screenings as well as information on how they can most effectively protect themselves from uranium contamination. More specifically, the IHS official told us that there may be opportunities to work with partners, including CDC and EPA, to develop informational videos on the health effects of uranium exposure that could be screened in IHS clinics. In addition, EPA officials told us that they have initiated a pilot effort to provide more coordinated outreach in the Cameron region of the reservation. In that region—where potentially responsible parties will be conducting extensive assessment and some cleanup of abandoned mines in the coming years—EPA and NNEPA plan to work with other partners, including other federal agencies, to provide a more coordinated approach to engaging the communities in discussions about, among other things, steps the federal agencies and community members can take to mitigate exposures to hazardous uranium contamination.
Further, EPA officials involved in coordinating the 2014 5-year plan told us that they plan to engage the other federal agencies in developing and including a coordinated outreach strategy in the plan to better ensure that the agencies maximize each contact that they have with affected communities by, for example, providing the communities with information on a variety of uranium-related issues. Officials from the other agencies agreed that they would engage with EPA to develop or support such a strategy. This is consistent with one of the key practices that, in October 2005, we reported can help enhance and sustain collaboration among federal agencies—establishing mutually reinforcing or joint strategies—which can assist partner agencies in aligning their activities and resources, among other things. We have reported on other key practices to enhance and sustain interagency collaboration, including, for example, for collaborating agencies to define and agree on their respective roles and responsibilities. In doing so, collaborating agencies can identify how their collaborative efforts will be led, clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Federal and Navajo agency officials also identified opportunities for the federal agencies to enhance collaboration with Navajo agencies, some of which also present opportunities to enhance capacity building. For example, an EPA official we spoke with said that the agency could potentially train NNEPA staff to perform the more detailed assessments that EPA has been conducting to determine whether houses are contaminated and warrant replacement. In addition, Navajo Nation officials told us that they would like the federal agencies to work with them to identify as many opportunities as possible for the federal agencies to partner with the Navajo agencies on uranium-related work. 
These officials pointed to the partnership between DOE and NNEPA at the Highway 160 site—where NNEPA led the implementation of the cleanup work—as a particular success that they would like to see replicated in other areas. EPA Region 9 officials also pointed to the partnership between EPA and the Navajo Community Housing and Infrastructure Department, a Navajo agency that is helping to replace some of the contaminated houses on the reservation. According to both federal and tribal agency officials, federal agencies could also enhance their collaboration with Navajo agencies by including additional tribal agencies in future efforts. For example, EPA and DOE identified potential opportunities for enhanced collaboration with the Navajo Abandoned Mine Lands Reclamation Program, which was not involved in the abandoned mine work conducted under the 2008 5-year plan, although the UMTRA program under the same department has been working with DOE at the former uranium processing sites. The Navajo Abandoned Mine Lands Reclamation Program was active in abandoned uranium mine-related efforts during the plan period by, among other things, maintaining reclamation work previously conducted to mitigate physical hazards at the mines. According to Navajo Abandoned Mine Lands Reclamation Program officials, they plan to continue this maintenance work at reclaimed mine sites in the future. According to both federal and tribal agency officials we spoke with, EPA could potentially collaborate with that program and NNEPA to help ensure that, where feasible, any additional maintenance work on reclaimed uranium mines is done in coordination with NNEPA and EPA to help further reduce radiological hazards. EPA officials told us that they have begun talking with Navajo Abandoned Mine Lands Reclamation Program staff to identify ways to work together, whereas in the period of the 2008 5-year plan, the program and NNEPA each generally operated independently from one another. 
According to DOE, EPA and DOE have agreed to invite officials from NNEPA and both programs within the Navajo Abandoned Mine Lands Reclamation/UMTRA Department to participate in activities of either federal agency. In another example, a CDC official told us there may be opportunities for CDC to increase its collaboration with the Navajo Division of Health to improve available data on how cancers impact the Navajo people.

Conclusions

From 2008 through 2012, six federal agencies increased their overall efforts to address the legacy of uranium contamination that remained on the Navajo reservation after uranium mining and processing ceased, spending more than $120 million on various actions, including assessments of abandoned mines and cleanups of contaminated homes and other sites. However, nearly 30 years since the last active uranium mine on the Navajo reservation ceased production, federal agencies do not have comprehensive information about the extent of the contamination or the total scope of work—and associated time frames and costs—required to fully address it, especially the contamination found at the abandoned mines. When requesting the 2008 5-year plan, policymakers were looking for a comprehensive course of action for fully resolving the problem of uranium contamination on or near the Navajo reservation. Given that the scope of the 2008 5-year plan focused on addressing the most urgent problems, and the agencies’ next 5-year plan is not expected to identify the full scope of work that remains, it is unclear how many 5-year plans would be needed at this rate before the remaining scope of work, time frames, and costs for fully addressing the contamination can be estimated. While many uncertainties exist, it is possible to generate useful, high-level estimates of the work, time frames, and costs in a short period of time based on the information the federal agencies currently possess. 
However, absent a statutory requirement to develop such a comprehensive estimate, it appears unlikely that the agencies will undertake such an effort. Without more comprehensive information about the overall remaining scope of work, time frames, and costs needed to address contamination across the reservation, including at the abandoned mines, stakeholders and decision makers have no basis on which to assess the overall pace of the cleanup efforts, cannot put the accomplishments of the 2008 5-year plan—or any future plans—into perspective, and cannot make effective resource allocation decisions. Effectively engaging with tribal communities is a key challenge facing federal agencies in the efforts to address contamination on Navajo and Hopi lands. The 2008 5-year plan did not contain information about how the federal agencies would coordinate their outreach efforts to these communities. While the agencies began to integrate their outreach activities over the course of the 5-year plan period, community members continued to express frustration with the agencies’ efforts. Creating a coordinated outreach strategy is consistent with the key practice of establishing mutually reinforcing or joint strategies that we have reported can help enhance and sustain interagency collaboration and help agencies better align their activities and resources. Such a strategy should also identify how the collaborative effort will be led, clarify who will do what, organize the agencies’ joint and individual efforts, and facilitate decision making. In addition, assessment work conducted by BIA at the Tuba City Dump has yielded information about the contamination at the site that has provided significant value to decision makers. In doing so, however, BIA has experienced a number of challenges, some concerning contract management. BIA has missed enforceable deadlines, subjecting it to stipulated penalties under its settlement agreement with EPA. 
Moreover, BIA continued to increase the value of the contract while the contractor was not performing according to the contract’s terms and conditions. BIA is nearing the end of its management of the current RI/FS contract; if left unaddressed, however, these contract management challenges will become even more pertinent in the next few years as BIA moves into the cleanup phase after a remedial action is selected. At that point, the agency will award and manage an even larger and more complicated contract, and costs will increase significantly. Without examining lessons learned from managing the RI/FS contract and considering these lessons as part of the acquisition planning process for the remedial action contract, BIA could face contract management challenges on a larger scale. Further, BIA did not fully follow best practices in estimating the schedule for the RI/FS; the resulting schedule was not fully comprehensive or controlled. Without control over the schedule, BIA cannot effectively monitor its contractor’s progress and cannot evaluate the quality of changes proposed by the contractor. BIA’s estimate of probable future costs for the cleanup at the Tuba City Dump also did not always reflect the characteristics of a comprehensive, high-quality cost estimate. Without fully accounting for all future costs, management will have difficulty successfully planning program resource requirements. Further, significantly more funds will likely be needed to implement the remedial action that will be selected for the Tuba City Dump site. Given this increased need and other competing interests for BIA’s limited resources, other funding sources for remedial actions, such as Interior’s Central Hazardous Materials Fund, become more important. The Tuba City Dump site is eligible to receive funds from the Fund for the RI/FS, as well as for the selected remedial action, but BIA officials have not applied for such funding and do not plan to do so. 
Without leveraging the Fund, BIA will have difficulty meeting the funding needs for the remedial action cleanup phase of the project.

Matter for Congressional Consideration

To develop an estimate of the scope of work remaining to address uranium contamination on or near the Navajo reservation, Congress should consider requiring that the Environmental Protection Agency take the lead and work with the other federal agencies to develop an overall estimate of the remaining scope of the work, time frames, and costs.

Recommendations for Executive Action

We are making the following four recommendations in this report: To ensure that agencies working on the 2014 5-year plan better align their activities and resources, we recommend that the Administrator of EPA; the Secretaries of Energy, the Interior, and Health and Human Services; and the Chairman of the Nuclear Regulatory Commission, as they develop a coordinated outreach strategy to include in the 2014 5-year plan, take action to incorporate key practices in their collaborative effort, such as defining and agreeing on the agencies’ respective roles and responsibilities. In light of the problems BIA has encountered in managing the cleanup at the Tuba City Dump site, we recommend that the Secretary of the Interior direct the Assistant Secretary for Indian Affairs to take the following three actions: identify and examine any lessons learned from managing the RI/FS contract and consider these lessons as part of the acquisition planning process for the remedial action contract, employ best practices in creating the schedule and cost estimates for the remedial action cleanup phase, and apply for funding from Interior’s Central Hazardous Materials Fund in order to help meet the funding needs for the remedial action cleanup phase of the project. 
Agency and Third Party Comments and Our Evaluation

We provided a draft of this report for review and comment to the Environmental Protection Agency; the Departments of Energy, the Interior, and Health and Human Services; the Nuclear Regulatory Commission; the Office of Management and Budget; and the Navajo Nation and Hopi Tribe. EPA, DOE, BIA (responding on behalf of Interior), the Department of Health and Human Services, and NRC generally agreed with our recommendations, and their written comments are reproduced in appendixes VI, VII, VIII, IX, and X, respectively. Each of these agencies also provided technical comments, which we incorporated as appropriate. The Navajo Nation also provided written comments (reproduced in app. XI) and technical comments, which we incorporated as appropriate. The Office of Management and Budget and the Hopi Tribe did not comment on our report. DOE was the only agency that commented on our Matter for Congressional Consideration. Specifically, DOE acknowledged the need to identify the remaining scope of work, time frames, and costs of fully addressing uranium contamination on the Navajo reservation. However, the agency expressed concern about the difficulty of quantifying the full scope of work at this time, given the number of uncertainties that remain. We agree that attempting to quantify, in a detailed manner, the full scope of remaining work is not possible at this time because of the uncertainties we describe in this report, as well as those identified by DOE in its comments. However, we believe that the agencies’ estimates can be improved and, for the reasons detailed in our report, consider it essential for Congress to have more comprehensive information about the remaining scope of work in order to assess the overall pace of the cleanup and to make informed resource allocation decisions. 
In addition, DOE commented on (1) our use of the term “Rare Metals” site instead of the “Tuba City Mill Site” or the “Tuba City Former Processing Site” and (2) an observation in the report about the cleanup of the former processing sites on the Navajo reservation versus another site near Moab, Utah. Regarding the comparison of the former processing sites on the Navajo reservation to the Moab site, DOE provided comments on the unique nature of each site. Regarding the Rare Metals site, we clarified our report to indicate that our use of the term “Rare Metals site” was solely to prevent it from being confused with other contaminated sites in the Tuba City area, including BIA’s Tuba City Dump site. In agreeing with our recommendation that the federal agencies incorporate key practices for enhancing and sustaining interagency collaborative efforts into their coordinated outreach strategy, EPA, DOE, and the Department of Health and Human Services provided additional details about the contents of the draft outreach strategy that they intend to include in their 2014 5-year plan. We are encouraged by elements of the draft strategy, which appear to include some of the key practices for enhancing and sustaining interagency collaboration. For example, the agencies plan to leverage resources to fund a shared coordinator who will direct a community outreach network composed of representatives from the federal and tribal agencies. In its written comments, BIA asserted that our report disproportionately focuses on the agency’s management of the Tuba City Dump site, a site that BIA described as comprising a very small part of the overall problem of uranium contamination across the Navajo reservation. Although we agree that cleaning up the Tuba City Dump is but one part of the agencies’ broader efforts to address uranium contamination on the Navajo reservation, we believe that our detailed examination of BIA’s and EPA’s management of the assessment work at the site was appropriate. 
As explained in our report, the Tuba City Dump site was one of the two objectives where the federal agencies did not meet the targets in the 2008 5-year plan. In order to identify reasons why BIA did not meet those targets, we analyzed the agency’s management of the Tuba City Dump site, including its project and contract management approaches. In addition, BIA commented that it believes the management of the cleanup has been handled responsibly but that circumstances beyond its control have contributed to delays. We acknowledge in this report that circumstances beyond BIA's control contributed to delays in the Tuba City Dump cleanup; however, BIA's own actions also contributed to these delays. For example, BIA had communication problems with its RI/FS contractor and performance problems regarding the quality and timeliness of the contractor’s deliverables. These problems and others led BIA to miss several legally enforceable deadlines in the RI/FS work plan and resulted in BIA failing to complete the RI/FS for the site, an objective of the 2008 5-year plan and a work plan requirement. In its comments, BIA appears to fault EPA and the contractor for the delays and associated stipulated penalties, but BIA did not explain how, if at all, EPA’s and the contractor’s actions would relieve BIA of its legal obligation to meet the work plan deadlines, or of the consequences of failing to do so. While agreeing with our recommendations, BIA disagreed with our findings regarding the agency’s cost and schedule estimating at the Tuba City Dump. These findings were the basis for our recommendation that BIA employ best practices in creating the schedule and cost estimates for the forthcoming CERCLA remedial action at the Tuba City Dump. 
BIA asserted that its approach to developing the cost and schedule estimates was reasonable, and did not believe that the best practices found in our cost estimating and assessment guide were applicable, at least in part, because the details of the remedial action to be conducted at the site have not been identified. As explained in this report, the best practices in our cost estimating and assessment guide can be used for projects with significant unknowns, such as the Tuba City Dump, and include specific steps for properly taking those unknowns into account. For example, the guide discusses how every cost estimate is uncertain because of the assumptions that must be made about future projections. It also states that because many of the assumptions made at the start of a project turn out to be inaccurate, it is important to assess the risk associated with changes in assumptions. As described in our report, we found that BIA did not assess such risks when creating its cost estimate. In addition, although BIA concurred with our recommendation to use best practices when creating cost and schedule estimates for the forthcoming remedial action, BIA stated that it does not intend to use these best practices when developing future estimates at the site because it does not believe that they are applicable to environmental cleanup projects. To the contrary, these guides offer best practices that are directly relevant to a wide range of government projects, including environmental cleanups, and we have previously assessed such projects using the criteria in the guides. BIA’s refusal to apply cost and schedule estimating best practices is troublesome, especially since, as we note in the report, the remedial action at the site will represent a more significant undertaking than the RI/FS. As such, it is important that BIA have reliable cost and schedule estimates in order to effectively manage the project. 
In written comments, the Navajo Nation stated that, overall, our report represented a good start toward illuminating the nature and extent of the damage uranium mining and processing has caused to Navajo lands and people. However, the Navajo Nation also noted some instances where it felt the report fell short. For example, the tribe noted that our projection of the need for hundreds of millions of dollars to address the remaining scope of work that has been identified by the agencies was a significant underestimation of the total projected costs for future remediation at all uranium-related sites on Navajo lands. The tribe stated that it believes that these total costs will be in the billions of dollars rather than hundreds of millions of dollars. As noted in our report and recognized by the Navajo Nation in its comments, our estimate related to the cleanup of highest priority abandoned mines represents a conservative, low-end estimate. While we cannot comment on the accuracy of the tribe’s characterization of total future costs, the tribe’s estimate further illuminates the need for the federal agencies involved to generate a high-level estimate, as discussed in our report, based on their most current information regarding the remaining scope of work. In addition, the Navajo Nation noted the absence of any discussion of potential groundwater contamination in the draft report, especially at the Northeast Church Rock mine. In response, we have made changes to include information about the status of federal agencies’ groundwater assessment efforts. We are sending copies of this report to the Administrator of the Environmental Protection Agency; the Secretaries of Energy, the Interior, and Health and Human Services; the Chairman of the Nuclear Regulatory Commission; the Director of the Office of Management and Budget; the President of the Navajo Nation; the Chairman of the Hopi Tribe; the appropriate congressional committees; and other interested parties. 
In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix XII.

Appendix I: Objectives, Scope, and Methodology

In this report, we examined: (1) the extent to which federal agencies, including the Environmental Protection Agency (EPA), the Department of Energy (DOE), the Department of the Interior’s Bureau of Indian Affairs (BIA), the Department of Health and Human Services’ Indian Health Service (IHS), and the Nuclear Regulatory Commission (NRC), achieved the targets identified in the 2008 5-year plan, and the reasons why or why not; (2) what is known about the scope of work, time frames, and estimated costs of fully addressing uranium contamination on the Navajo reservation; and (3) the key challenges, if any, faced by federal agencies in completing this work and the opportunities, if any, that may help overcome these challenges. To determine the extent to which federal agencies achieved the targets identified in the 2008 5-year plan, we compared the agencies’ targets as laid out in the 5-year plan with the actions taken by the agencies and their partners over the 5-year plan period from 2008 through 2012. We identified these actions by reviewing key documents, including the summary report issued by the federal agencies in January 2013. 
We corroborated information in the documents by interviewing relevant federal agency officials and Navajo and Hopi tribal officials from relevant tribal government agencies—the Navajo Nation Department of Justice, the Navajo Nation Environmental Protection Agency, the Navajo Nation Division of Natural Resources’ Navajo Abandoned Mine Lands Reclamation/Uranium Mill Tailings Remedial Action Department, and the Hopi Tribe Department of Natural Resources’ Water Resources Program—and by obtaining additional documentation and visiting relevant sites across the Navajo and Hopi reservations where federal and tribal agencies have been conducting their work. In addition to the five federal agencies listed above, other relevant federal agencies we spoke with included the Centers for Disease Control and Prevention (CDC) and the Agency for Toxic Substances and Disease Registry (ATSDR); the Office of Management and Budget; and the Department of the Interior’s Office of Environmental Policy and Compliance, which manages the Central Hazardous Materials Fund, and the Office of Surface Mining Reclamation and Enforcement, which provides funding to the Navajo Abandoned Mine Lands Reclamation Program. In April 2013 and July 2013, we visited key sites, including the Northeast Church Rock, Quivira, and Skyline mines, as well as abandoned uranium mines and mine-related sites near the communities of Cameron, Cove, and Teec Nos Pos, Arizona, and of Haystack and Casamero Lake, New Mexico; the former uranium processing site in Shiprock, New Mexico; the Highway 160 site near Tuba City, Arizona; and the Tuba City Dump, located on both the Hopi and Navajo reservations and near the Hopi Villages of Moenkopi and the Navajo town of Tuba City, Arizona. We selected these sites based on the level of activity that federal and tribal agencies conducted there during the 5-year plan, and in order to see some of the sites that the agencies have identified as needing cleanup work in the near future. 
To identify the reasons why the agencies met or did not meet the targets in the 5-year plan, we reviewed agency documents and interviewed federal and tribal agency officials. We also reviewed federal agency expenditure data for the 2008 5-year plan period (fiscal years 2008 through 2012) and compared them with expenditure data from the previous 5 years (fiscal years 2003 through 2007). These data represent obligations or direct outlays by the agencies, reflecting the agencies’ direct costs, and do not include intramural costs, such as staff salaries. We received data from ATSDR, BIA, CDC, DOE, EPA, and IHS; NRC did not provide expenditure data because it did not incur any direct obligations during the time period, although it did expend resources for staff time. To determine costs in constant 2013 dollars, we adjusted the amounts reported to us for inflation by applying the fiscal year chain-weighted gross domestic product price index, with fiscal year 2013 as the base year. To evaluate the reliability of these data and determine their limitations, we reviewed the data obtained from each agency. For each data source, we analyzed related documentation, examined the data to identify obvious errors or inconsistencies, and compared the data we received with other published data sources, where possible. We also interviewed officials from each agency to obtain information on the internal controls of their data systems. On the basis of our evaluation of these sources, we concluded that the expenditure data we collected and analyzed were sufficiently reliable for our purposes. To identify what is known about the scope of work remaining to fully address uranium contamination on or near the Navajo reservation, we reviewed available documents and interviewed knowledgeable federal agency and tribal officials. 
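The constant-dollar adjustment described above is a straightforward deflator calculation, which can be sketched as follows. The index values below are illustrative placeholders, not actual gross domestic product price index figures.

```python
# Convert nominal fiscal-year expenditures to constant 2013 dollars using a
# chain-weighted GDP price index with fiscal year 2013 as the base year.
# NOTE: the index values here are hypothetical placeholders for illustration.
gdp_price_index = {2008: 99.2, 2009: 100.0, 2010: 101.2,
                   2011: 103.3, 2012: 105.2, 2013: 106.9}

def to_constant_2013_dollars(nominal: float, fiscal_year: int,
                             index: dict = gdp_price_index,
                             base_year: int = 2013) -> float:
    """Rescale a nominal amount so all years share the base year's price level."""
    return nominal * index[base_year] / index[fiscal_year]

# Example: $10 million spent in FY 2008, restated in FY 2013 dollars.
restated = to_constant_2013_dollars(10_000_000, 2008)
```

Because prices rose between fiscal years 2008 and 2013 in this illustration, the restated FY 2008 amount is larger than its nominal value, which is what makes multi-year expenditure comparisons meaningful.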
To identify what is known about time frames and costs, we reviewed documentation containing schedule and/or cost estimates or general information about time frames or costs, where available. To create estimates of time frames and costs to clean up the highest priority abandoned mines, we gathered information about the costs associated with the Skyline mine cleanup, which was the one cleanup EPA conducted during the 5-year plan period with the agency’s funds, the pace of work conducted under the 5-year plan, and the number of mines that would need full funding from EPA. In addition, we analyzed the extent to which the schedule generated by BIA for the remedial investigation and feasibility study at the Tuba City Dump reflected the four general characteristics for sound schedule estimating, as outlined in our schedule assessment guide: comprehensive, well-constructed, credible, and controlled. We selected this schedule to review because it was the most robust of the available schedules, and it represented an entirely federal effort. We also examined the extent to which BIA’s estimate of probable future costs for Tuba City Dump reflected the four characteristics of high-quality cost estimates, as outlined in our cost estimating and assessment guide: comprehensive, well-documented, accurate, and credible. We selected this cost estimate to review since the cleanup will be paid for entirely with federal funds, and it represented a distinct cleanup project rather than an ongoing level of effort. In reviewing BIA’s schedule and cost estimates, we analyzed supporting documentation submitted by BIA and conducted interviews with BIA and EPA project managers and staff. We shared our cost and schedule guides and the criteria against which we would be evaluating the estimates with BIA staff. We then compared BIA’s methods and approaches for preparing the estimates with the best practices contained in the guides. 
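The high-level extrapolation described above—scaling the cost of the one EPA-funded cleanup by the number of mines needing full EPA funding, and the observed pace of work into a duration—reduces to simple arithmetic. All of the input figures below are hypothetical placeholders, not GAO's actual inputs.

```python
# Rough extrapolation of cost and time frame for cleaning up the highest
# priority abandoned mines, following the approach described in the text.
# NOTE: all three inputs are hypothetical values chosen for illustration.
cost_per_mine = 7_500_000      # placeholder: cost of one full mine cleanup
mines_needing_epa_funds = 20   # placeholder: mines with no responsible party
mines_cleaned_per_year = 2     # placeholder: pace observed under the 5-year plan

total_cost = cost_per_mine * mines_needing_epa_funds
years_needed = mines_needing_epa_funds / mines_cleaned_per_year

print(f"Estimated cost: ${total_cost:,}; estimated duration: {years_needed:.0f} years")
# prints: Estimated cost: $150,000,000; estimated duration: 10 years
```

An estimate of this kind is conservative and high-level by design: it ignores per-mine cost variation and potential changes in pace, but it gives decision makers an order-of-magnitude figure quickly, which is the point made in the report's conclusions.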
To ascertain the key challenges faced by federal agencies in completing this work and the opportunities that may be present to help overcome these challenges, we reviewed key documents, including the January 2013 summary report, written materials from the federal agencies’ Navajo uranium stakeholder workshops, agency reports, and Navajo Nation laws and position papers. We corroborated and supplemented information in the documents by interviewing relevant federal agency officials and Navajo and Hopi tribal officials. We also interviewed knowledgeable stakeholders, including community members living in areas affected by uranium mining or contamination. For example, we worked with the federal and tribal agencies and others to hold meetings in seven affected communities, which were attended by 50 local government officials and community members. We also spoke with other knowledgeable stakeholders, such as university researchers and representatives of nonprofit and community organizations active on Navajo uranium issues. We met with most of these stakeholders during our July 2013 site visits and at the April 2013 Navajo Uranium Stakeholder Workshop held in Gallup, New Mexico, and spoke with other stakeholders by telephone. We identified stakeholders by performing an Internet and literature search for individuals and organizations involved in relevant issues, attending the stakeholder workshop and identifying participating stakeholders, and requesting referrals from agency officials and stakeholders with whom we spoke. The views of the stakeholders we spoke with are not representative of and cannot be generalized to all stakeholders. We conducted this performance audit from January 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Detailed Results of GAO Assessment of the Schedule for the Tuba City Dump Remedial Investigation and Feasibility Study Work Plan

Our prior work has identified 10 best practices associated with effective scheduling. These are (1) capturing all activities; (2) sequencing all activities; (3) assigning resources to all activities; (4) establishing the duration of all activities; (5) verifying that the schedule is traceable horizontally and vertically; (6) confirming that the critical path is valid; (7) ensuring reasonable total float; (8) conducting a schedule risk analysis; (9) updating the schedule with actual progress and logic; and (10) maintaining a baseline schedule. These practices are summarized into four characteristics of a reliable schedule—comprehensive, well-constructed, credible, and controlled. We assessed the extent to which the Bureau of Indian Affairs’ (BIA) January 2013 schedule for the remedial investigation and feasibility study (RI/FS) at the Tuba City Dump met each of the 10 best practices, and characterized whether the schedule met each of the four characteristics of a reliable schedule. We found that the schedule minimally met each of the four characteristics of a reliable schedule. As a result, we are concerned about the validity of the dates that were forecasted by the schedule, as well as the identification of the critical path. Without an accurate critical path, management cannot focus on the activities that will be most detrimental to the project’s key milestones and finish date if they slip. Table 3 provides the detailed results of our analysis. 
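The rollup from the 10 practices to the four characteristics can be represented as a simple mapping. The grouping below follows our reading of GAO's Schedule Assessment Guide; the assignment of individual practices to characteristics should be verified against the guide itself before relying on it.

```python
# One way to organize the 10 scheduling best practices under the four
# characteristics of a reliable schedule. The grouping is our reading of the
# GAO Schedule Assessment Guide and should be checked against the guide.
characteristics = {
    "comprehensive": ["capturing all activities",
                      "assigning resources to all activities",
                      "establishing the duration of all activities"],
    "well-constructed": ["sequencing all activities",
                         "confirming that the critical path is valid",
                         "ensuring reasonable total float"],
    "credible": ["verifying horizontal and vertical traceability",
                 "conducting a schedule risk analysis"],
    "controlled": ["updating the schedule with actual progress and logic",
                   "maintaining a baseline schedule"],
}

# Each of the 10 practices should appear exactly once across the rollup.
all_practices = [p for group in characteristics.values() for p in group]
assert len(all_practices) == 10 and len(set(all_practices)) == 10
```

A rollup like this is why a single weak practice (for example, an invalid critical path) can pull an entire characteristic down to "minimally met" even when sibling practices score better.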
Appendix III: Detailed Results of GAO Assessment of the Tuba City Dump 5-Year Cost Estimate

To analyze the Bureau of Indian Affairs’ (BIA) 5-year cost estimate, dated April 2013, for the Tuba City Dump, we determined the extent to which BIA followed the best practices outlined in the GAO Cost Estimating and Assessment Guide. The guide identifies 12 practices that are the basis for effective cost estimation, including cost estimation for annual budget requests. The guide associates these practices with four characteristics: accurate, well-documented, comprehensive, and credible. The Office of Management and Budget endorsed this guidance as being sufficient for meeting most cost-estimating requirements, including for budget formulation. If followed correctly, these practices should result in reliable and valid budgets that (1) can be easily and clearly traced, replicated, and updated; and (2) enable managers to make informed decisions. BIA created this cost estimate to contribute to the Department of the Interior’s (Interior) environmental and disposal liability estimate as part of Interior’s annual financial statement. Since a remedial action has not been selected for the site, BIA estimated two options, a low-cost option and a high-cost option. In accordance with Interior’s guidance, BIA submitted the low-cost option as part of the liability estimate. We assessed the extent to which the Tuba City Dump 5-year cost estimate from April 2013 met each of the four characteristics associated with cost-estimating best practices. As table 4 illustrates, we found that the Tuba City Dump April 2013 cost estimate minimally met each of these four characteristics.

Appendix IV: The 43 Highest Priority Mines, Their Locations, and the Status of Assessment and Cleanup Efforts

Removal actions conducted at these sites were done as time-critical removal actions and are listed as interim because they did not constitute the final cleanup actions at the mines. 
Detailed assessments listed here include remedial action assessments (preliminary assessment and/or site investigation) and removal action assessments (engineering evaluation/cost analysis). The Skyline mine cleanup is included here because the Navajo Nation considers it to be temporary, given its current position regarding removing all mine waste from the reservation. EPA considers the cleanup complete. Appendix V: Additional Information about the Remaining Scope of Work, Time Frames, and Costs to Address Uranium Contamination on or Near the Navajo Reservation The following is a discussion of what is known about the remaining scope of work, time frames, and costs to assess and clean up contaminated structures and to assess and treat health conditions and conduct health research. Assess and Clean Up Contaminated Houses and Other Structures The Environmental Protection Agency (EPA) and the Navajo Nation Environmental Protection Agency (NNEPA) have not identified a full scope of work because there is no comprehensive source of information regarding the number of houses that may be contaminated; the agencies do not have an end date for this work nor an overall cost estimate. Scope of work: EPA Region 9 officials told us they believe living in contaminated homes continues to be the greatest uranium-related health risk to people on the Navajo reservation today. To continue mitigating this risk, EPA and NNEPA officials told us they plan to continue the work they conducted under the 2008 5-year plan, but they do not know how many homes they will ultimately need to assess and replace since there is no comprehensive source for this information. According to NNEPA officials, the agency has a backlog of more than 100 homes where residents have requested testing. A NNEPA official familiar with the work told us the agency expects to address this backlog in the 2014 5-year plan period. 
The official also told us that the number of requests continues to increase significantly as more people become aware of the agencies’ efforts to assess houses and structures, in part, through outreach conducted by NNEPA. NNEPA is also responsible for communicating the results of completed assessments to residents when those results indicate homes are safe. A NNEPA official told us that, as of February 2014, the agency had communicated the results of fewer than half of the completed assessments, in some cases, because the agency was waiting for EPA to provide the results, and that the agency will continue to address this backlog moving forward. Overall, EPA officials said that they expect the total number of homes needing replacement will decrease in the 2014 5-year plan period since they believe the homes most likely to be contaminated have been addressed through earlier efforts. Time frame: According to EPA officials involved, there is no end date for this work. They expect the work of assessing and potentially cleaning up contaminated houses to continue into the future as long as NNEPA continues to receive requests. Cost: Given the unknown number of homes needing assessment and cleanup, EPA has not developed an overall cost estimate. However, EPA officials told us they typically spend $6,000 on a detailed assessment and from $80,000 to $300,000 to demolish a home and either provide financial compensation or a replacement home. Assess and Treat Health Conditions and Conduct Health Research Federal agencies do not have concrete plans or available funds to begin additional health studies on the Navajo reservation, but the Indian Health Service (IHS) will continue its work and the Agency for Toxic Substances and Disease Registry (ATSDR) and its partners will continue the Navajo Birth Cohort Study, spending up to $10 million from 2013 to 2018.
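The per-home figures EPA officials cited for assessments ($6,000) and home demolition/replacement ($80,000 to $300,000) can be combined with the reported 100-home backlog to sketch a rough cost range. This is purely an illustrative calculation using the numbers cited; EPA has not developed an overall estimate, and the replacement rate assumed below is hypothetical.

```python
# Illustrative only: rough cost range for working through the reported
# backlog of home-testing requests, using the per-home figures EPA cited.
# EPA has not produced an official overall estimate.

BACKLOG_HOMES = 100          # NNEPA-reported backlog of testing requests
ASSESSMENT_COST = 6_000      # typical detailed assessment, per EPA officials
REPLACEMENT_LOW = 80_000     # low end to demolish and replace a home
REPLACEMENT_HIGH = 300_000   # high end

def cost_range(homes, replacement_rate):
    """Return (low, high) total cost, assuming `replacement_rate`
    of assessed homes turn out to need demolition and replacement."""
    replaced = int(homes * replacement_rate)
    assess = homes * ASSESSMENT_COST
    return (assess + replaced * REPLACEMENT_LOW,
            assess + replaced * REPLACEMENT_HIGH)

# Hypothetical scenario: 1 in 4 assessed homes needs replacement.
low, high = cost_range(BACKLOG_HOMES, 0.25)
print(f"${low:,} to ${high:,}")  # $2,600,000 to $8,100,000
```

Even under this single hypothetical scenario the range spans several million dollars, which illustrates why the unknown number of contaminated homes prevents the agencies from committing to an overall figure.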
Scope of work: IHS and ATSDR officials told us they intend to include short-term health treatment and assessment efforts in the 2014 5-year plan, and they are developing overall goals for health research related to exposure to uranium on the Navajo reservation as well. In the short term, beyond health care delivery to patients, IHS plans to continue its efforts under the uranium health-screening program that it established under the 2008 5-year plan. With 3.5 staff members, however, this is a small-scale effort. In addition, the Navajo Birth Cohort Study is expected to continue. Regarding other long-term research studies, ATSDR officials told us they do not have specific plans or funding to initiate additional uranium-related health research studies at this time. Members of the Navajo Nation, researchers, and other stakeholders have repeatedly called for a long-term epidemiological study of the effects of nonoccupational exposures to uranium in the communities that have lived closest to former mines and processing sites in order to gain a better understanding of how these communities have been and continue to be affected. IHS officials told us they have been limited in their ability to identify or plan potential studies because conducting research studies is not a part of IHS’s mission, and dedicated funding is not available for such efforts. ATSDR officials told us that their agency infrequently funds such studies; for example, it is currently conducting two long-term epidemiological studies across the country in addition to the Navajo study. Time frame: IHS officials said they expect to extend the uranium health screening program into the future, at least through the 2014 5-year plan period. In 2013, ATSDR extended its agreement with the University of New Mexico and the Navajo Nation for the Navajo Birth Cohort Study by 5 years, until 2018.
Cost: ATSDR has committed to spending up to $10 million on the Navajo Birth Cohort Study from 2013 to 2018, but overall costs for potential future studies are unknown. Appendix VI: Comments from the Environmental Protection Agency Appendix VII: Comments from the Department of Energy Appendix VIII: Comments from the Department of the Interior Appendix IX: Comments from the Department of Health and Human Services Appendix X: Comments from the Nuclear Regulatory Commission Appendix XI: Comments from the Navajo Nation Appendix XII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Jeffery D. Malcolm (Assistant Director), Juaná Collymore, John Delicath, Tisha Derricotte, Emily Hanawalt, John Krump, Leslie Kaas Pollock, Karen Richey, Dan Royer, Kelly Rubin, Jeanette Soares, Kiki Theodoropoulos, and Sarah Veale made significant contributions to this report.

Four million tons of uranium ore were extracted from mines on the Navajo reservation primarily for developing the U.S. nuclear weapons stockpile. For over 30 years, the Navajo people have lived with the environmental and health effects of uranium contamination from this mining. In 2008, five federal agencies adopted a 5-year plan that identified targets for addressing contaminated abandoned mines, structures, water sources, former processing sites, and other sites. Federal agencies also provide funding to Navajo Nation agencies to assist with the cleanup work. GAO was asked to examine the agencies' cleanup efforts. This report examines (1) the extent to which the agencies achieved the targets set in the 5-year plan and reasons why or why not; (2) what is known about the future scope of work, time frames, and costs; and (3) any key challenges faced by the agencies in completing this work and any opportunities to overcome them.
GAO examined agency documents; interviewed agency officials, tribal leaders, and stakeholders; and visited sites on the Navajo and Hopi reservations. Federal agencies implementing the 2008 5-year plan, including the Environmental Protection Agency (EPA), the Department of Energy, the Bureau of Indian Affairs (BIA), and the Indian Health Service, met the targets in six of the plan's eight objectives, working in cooperation with tribal agencies, including the Navajo Nation Environmental Protection Agency. Agencies met these targets primarily because additional federal and other resources were dedicated to these efforts compared with prior years. For example, from 2008 through 2012, EPA spent $22 million to test and replace contaminated houses, compared with $1.5 million spent in the preceding 5 years. In contrast, targets for two objectives—cleanup of the Northeast Church Rock mine and Tuba City Dump—were not met, primarily because EPA's and BIA's estimated schedules were optimistic and EPA added additional work that extended the time frames. BIA experienced project and contract management challenges in conducting work at Tuba City Dump and did not always follow best practices when estimating the schedule for assessment work at the site. These challenges, if not addressed, could affect BIA's ability to meet future targets for cleanup at the site and successfully plan for project resources. Federal agencies have not identified the full scope of remaining work, time frames, or costs to fully address uranium contamination on or near the Navajo reservation, although they recognize that significant work remains. In 2008, congressional decision makers requested the agencies provide an overall estimate of the full scope of work needed to address the contamination. The 5-year plan the agencies developed in response to this request does not provide a comprehensive estimate; instead, it focuses on the highest priorities over 5 years.
EPA officials said that they typically do not provide cost or schedule estimates until a specific cleanup action is selected and that a number of current uncertainties make developing such an estimate difficult. Even with significant uncertainties, GAO has reported that agencies can create high-level estimates of costs and time frames that can be useful for decision makers and stakeholders. The agencies have collected important information that could provide a starting point for such an estimate. However, absent a statutory requirement to develop such a comprehensive estimate, it appears unlikely that the agencies will undertake such an effort. As a result, decision makers and stakeholders will not have the information they need to assess the overall pace of the cleanup efforts or make resource allocation decisions. Federal agencies face a variety of challenges in continuing to address uranium contamination on or near the Navajo reservation. For example, according to EPA officials, funding for EPA's efforts at the Navajo abandoned uranium mines is expected to decrease from funding levels available during the 2008 5-year plan because of overall declining federal resources for cleanup. Further, agencies face challenges in effectively engaging tribal communities, in part, because agencies have not always collaborated on their outreach efforts. These agencies identified opportunities to enhance their collaboration by creating a coordinated outreach strategy for the next 5-year plan. Creating such a strategy is consistent with one of the several key practices that GAO has reported can enhance and sustain interagency collaboration and help ensure that agencies make efficient use of limited resources. |
Background BIE, formerly known as the Office of Indian Education Programs when it was part of the Bureau of Indian Affairs (BIA), was renamed and established as a separate bureau in 2006. Organizationally, BIE is under the Office of the Assistant Secretary-Indian Affairs (Indian Affairs), and its director reports to the Principal Deputy Assistant Secretary-Indian Affairs. The director is responsible for the direction and management of all education functions, including the formation of policies and procedures, supervision of all program activities, and approval of the expenditure of funds for education functions. BIE is comprised of a central office in Washington, D.C.; a major field service center in Albuquerque, New Mexico; 3 associate deputy directors’ offices located regionally (1 in the east and 2 in the west); 22 education line offices located near Indian reservations; and schools in 23 states. Of the 183 schools and dormitories BIE administers, 58 are directly operated by BIE (BIE-operated), and 125 are operated by tribes (tribally-operated) through federal contracts or grants. BIE schools are primarily funded through Interior. Similar to public schools, BIE schools receive formula grants from Education. BIE, like state educational agencies, administers and monitors the operation of these Education programs. Currently, BIE’s administrative functions—human resources, budget management, information technology, and acquisitions—are managed by Indian Affairs’ Deputy Assistant Secretary for Management (DAS-M). The heads of both BIE and DAS-M report to the Principal Deputy Assistant Secretary-Indian Affairs. (See fig. 1.) BIE and its predecessor, the Office of Indian Education Programs, have been through a number of restructuring efforts. Prior to 1999, BIA’s regional offices were responsible for most administrative functions for Indian schools.
In 1999, the National Academy of Public Administration (NAPA) issued a report, commissioned by the Assistant Secretary of Indian Affairs, which identified management challenges within BIA. The report concluded that BIA’s management structure was not adequate to operate an effective and efficient agency. The report recommended centralization of some administrative functions. According to BIE officials, for a brief period from 2002 to 2003, BIE was responsible for its own administrative functions. However, in 2004, in response to the NAPA study, its administrative functions were centralized under the DAS-M. More recently, in 2011, Indian Affairs commissioned another study—known as the Bronner report—to evaluate the administrative support structure for BIE and BIA (Bronner Group, Final Report: Examination, Evaluation, and Recommendations for Support Functions (March 2012)). The report, issued in March 2012, found that organizations within Indian Affairs, including DAS-M, BIA, and BIE, do not coordinate effectively and communication among them is poor. The study recommended that Indian Affairs adopt a more balanced organizational approach to include, among other things, shared responsibility, new policies and procedures, and better communication, with increased decentralization. Indian Affairs officials are in the process of developing a plan to address these recommendations, but they have not yet finalized a proposal for reorganization. Management Challenges Continue to Impede BIE’s Mission Fragmented Administrative Structure Negatively Affects Schools Timely procurement, for instance, is critical to ensure that all supplies and textbooks are delivered before the start of the school year. However, the procurement process used by BIE-operated schools can cause delays in textbook delivery. Likewise, delays in contracting have occasionally affected BIE’s ability to provide timely services for students with disabilities. Communication is especially difficult because of Indian Affairs’ fragmented administrative structure.
For example, school officials we spoke with said that their correspondence is often lost and that there appears to be little coordination between Indian Affairs offices. For instance, the Bronner report found that the responsibility for facilities management is scattered across three divisions within DAS-M. First, the Property Management Division in the Office of the Chief Financial Officer (OCFO) is responsible for maintaining the real property inventory. Second, the Acquisition Office in the OCFO manages the leasing of buildings for BIA and BIE. Finally, maintenance and construction of all Indian Affairs’ buildings is under the purview of the Office of Facilities, Environmental and Cultural Resources, and the Office of Facilities Management and Construction. This fragmented administrative structure directly impacts schools. For instance, the Little Wound School on the Pine Ridge reservation in South Dakota closed for a few days because Indian Affairs initially did not respond to its request for funds to replace a broken boiler. Tribal school officials in Mississippi told us they are unsure whether they should invest in repairs or rent additional modular classrooms, as they have not been told when or if the department will construct new facilities. The Bronner report found that although DAS-M is tasked with supporting both BIE and BIA, its staff is not structured in a way that effectively supports both bureaus. Although the contracting needs of schools are different than those of a federal agency, DAS-M does not have a specific acquisition team assigned to BIE. The report also found that DAS-M’s acquisition services were slow and not customer focused and that there was a disconnect between programs and support. Further, DAS-M staff may not have the requisite expertise needed for working on BIE-related tasks.
The Bronner report found that key staff positions, such as budget analysts, were not assigned responsibilities in a manner that would help them develop expertise on the goals, funding history, and performance of BIE programs. Despite a request from BIE, DAS-M has not conducted a workforce analysis to determine the number and skill set of staff supporting the mission of BIE. According to BIE officials, DAS-M staff’s focus on supporting BIA rather than BIE hinders DAS-M from seeking and acquiring expertise in education issues and from making the needs of BIE schools a priority. We have previously reported that strategic workforce planning, similar to workforce analysis, can identify core competencies for mission-critical occupations and be used to develop targeted training as well as spur planning efforts. In a December 2011 memo to Secretary Salazar, BIE’s former Director expressed frustration with the current organizational structure of Indian Affairs and asserted that the “major challenges facing BIE cannot be overcome . . . until basic structure and governance issues are addressed and resolved.” In addition, according to his memo, “because of this disjointed system, points-of-view concerning the effectiveness of support functions do not necessarily originate from a similar organizational culture, mindset, or most importantly, mission outcomes.” Additionally, he noted that “the outcome of student achievement is often overshadowed and leaves our Bureau fighting to focus attention on education priorities and competing for leftover resources scattered throughout the larger organization.” The challenges outlined above run contrary to our past work on agency collaboration. We have found that different agencies participating in any collaborative mechanism bring diverse organizational cultures to it.
Accordingly, it is important to address these differences and establish ways to operate across agency boundaries. As we have previously reported, agencies can work together to define and agree on roles and responsibilities, which can be set forth in policies, memorandums of understanding, or other arrangements. We will continue to examine these issues and report our final results later this year. BIE Faces Significant Turnover in Leadership Leadership turnover in the Office of the Assistant Secretary for Indian Affairs, DAS-M, and BIE has exacerbated the various challenges created by administrative fragmentation. (See fig. 2.) Since approximately 2000, there have been 12 acting and permanent Assistant Secretaries for Indian Affairs, 6 DAS-M Deputy Assistant Secretaries, and 8 BIE Directors or Acting Directors. The tenure of acting and permanent assistant secretaries has ranged from 16 days to 3 years. Further, from August 2003 through February 2004, the post was unfilled. These are key leadership positions. The assistant secretary provides direction on all issues related to Indian affairs, while DAS-M, as mentioned above, provides essential administrative functions for BIE and its schools. In previous reports, we found that frequent changes in leadership may complicate efforts to improve student achievement, and that lack of leadership negatively affects an organization’s ability to function effectively and to sustain focus on key initiatives. Preliminary results from our work also suggest that lack of consistent leadership within DAS-M and BIE hinders collaboration between the two offices. According to our work on leadership, effective working relationships between agency leaders and their peers are essential to using resources most effectively and ensuring that people and processes are aligned to an agency’s mission. Working relations between BIE’s and DAS-M’s leadership appear informal and sporadic.
Currently, there are no regularly scheduled meetings between BIE and DAS-M leadership to discuss issues, priorities, and goals. Additionally, BIE officials reported having difficulty obtaining timely updates from DAS-M on its responses to requests for services from schools. According to BIE officials, they used to have regularly scheduled meetings with DAS-M leadership to discuss operations, but the meetings were discontinued in September 2012. BIE now depends on ad hoc meetings to discuss issues requiring resolution. As a result, BIE officials stated there is a disjointed approach to serving schools. BIE’s Limited Governance of Schools Affects Reform Efforts Although BIE’s responsibilities to operate Indian schools are in some respects similar to those of state educational agencies (SEAs), BIE’s influence is limited because most schools are tribally operated. Like an SEA, BIE administers, oversees, and provides technical support for a number of programs funded by Education. These include grants for disadvantaged children, students with disabilities, and teacher quality improvement. BIE also acts in the capacity of an SEA by monitoring, overseeing, and providing technical support to BIE schools. Yet, in contrast to states that can impose a range of reforms on schools, in tribally operated schools, which form the majority of BIE schools, tribes retain authority over key policies. This means that BIE must seek cooperation from tribal officials to implement reform. For example, BIE cannot require tribally-operated schools to adopt or develop their own teacher and principal evaluation systems. Also, although BIE could implement a curriculum for the schools it operates, BIE cannot implement a bureau-wide curriculum that would apply to tribally-operated schools. In contrast, some SEAs may be granted this authority through their state’s laws.
According to BIE correspondence submitted to Education in June 2012, the accountability system BIE is required to use, as a condition of receiving funding under Title I-A of the Elementary and Secondary Education Act (ESEA), as amended, is onerous. Like SEAs, BIE is accountable for the academic achievement of students in its schools. However, BIE schools must use the accountability measures of the 23 respective states where the schools are located unless an alternative has been approved. As a result, BIE calculates proficiency—the extent to which schools have made adequate yearly progress in meeting performance goals—using the states’ accountability systems. In 2008, we reported that BIE officials told us that, given the work involved, it was challenging to calculate and report proficiency levels to schools before the start of the subsequent school year. However, under ESEA, if schools do not make adequate yearly progress toward specific proficiency levels set by the states in reading, math, and science, they may be required to pursue reforms that are best implemented at the beginning of the school year. Recently, Education allowed 16 of the 23 states where BIE schools are located to change their assessments and methodology for calculating proficiency. Consequently, this has affected BIE’s ability to calculate proficiency for its schools in a timely manner. Currently, BIE is seeking to revise its regulations that require it to use the 23 states’ accountability systems. Further complicating reform efforts, both BIE and Education consider BIE schools, unlike public schools, to have the responsibilities of both school districts and schools. BIE, unlike an SEA, treats each school as a public school district. According to BIE and Education officials, many of these individual schools are small in size and lack the organizational capacity to function as a school district.
We have previously reported that smaller school districts face challenges acquiring special education services or providers because they lack the same capacity, resources, knowledge, or experience necessary to provide those services as larger-sized school districts. BIE and Education officials acknowledge that this represents a strain on BIE’s capacity to function in this manner. BIE is one of two federal entities that directly oversees the management and operation of schools. The Department of Defense is the only other federal agency that operates elementary and secondary schools, and it does so to meet the educational needs of military dependents and children of some civilian employees. The Department of Defense Education Activity (DODEA) oversees the management and operation of 194 schools in seven states; Puerto Rico and Guam; and 12 foreign countries. Unlike BIE, DODEA has considerable autonomy over its own internal management, budget, and operations. According to the Director of DODEA, the DODEA headquarters office is responsible for setting general policy guidelines, while schools and local DODEA administrative offices are charged with overseeing day-to-day operations. As a result, DODEA retains full operational control over all its schools and is therefore able to establish standardized curricula, testing, and evaluations. Concluding Observations It is critical that Indian students receive a high-quality education in order to ensure their long-term success. While BIE confronts several limitations in its ability to govern schools, its mission remains to provide students quality education opportunities. To this end, officials’ roles and responsibilities must be clear, and sustained leadership is key. Additionally, it is imperative that the offices responsible for education work together more efficiently and effectively to enhance the education of Indian children. 
We will continue to monitor these issues as we complete our ongoing work and consider any recommendations needed to address these issues. Chairman Simpson, Ranking Member Moran, and Members of the Subcommittee, this concludes my prepared statement. I will be pleased to answer any questions that you may have. GAO Contact and Staff Acknowledgments For future contact regarding this testimony, please contact George A. Scott at (202) 512-7215 or [email protected]. Key contributors to this testimony were Beth Sirois, Ramona Burton, Sheranda Campbell, Holly Dye, Alex Galuten, Rachel Miriam Hill, and Jean McSween. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In 2011, the federal government provided over $800 million to BIE schools that serve about 41,000 Indian students living on or near reservations. Within the Department of the Interior, BIE is part of Indian Affairs, and BIE's director is responsible for the management of all education functions. BIE's mission is to provide quality education opportunities to Indian students. However, poor student outcomes raise questions about how well BIE is achieving its mission. This testimony reports on ongoing GAO work about the Department of the Interior's management of BIE schools. A full report will be issued later this year. Based on preliminary findings, today's testimony will focus on: (1) the key management challenges affecting BIE and (2) BIE's governance of schools. For this work, GAO reviewed agency documents and relevant federal laws and regulations; interviewed agency officials; and conducted site visits to public and BIE schools.
Management challenges within the Department of Interior's Office of the Assistant Secretary - Indian Affairs (Indian Affairs), such as fragmented administrative structures and frequent turnover in leadership, continue to hamper efforts to improve Indian education. For example, incompatible procedures and lack of clear roles for the Bureau of Indian Education and the Indian Affairs' Deputy Assistant Secretary for Management (DAS-M), which provides administrative functions to BIE, such as human resources and acquisitions, contribute to delays in schools acquiring needed materials and resources. According to BIE officials, some DAS-M staff are not aware of the necessary procedures and timelines to meet schools' needs. For instance, delays in contracting have occasionally affected BIE's ability to provide services for students with disabilities in a timely manner. A study commissioned by Indian Affairs to evaluate the administrative support structure for BIE and the Bureau of Indian Affairs (BIA)--also under Indian Affairs--concluded that organizations within Indian Affairs, including DAS-M, BIA, and BIE, do not coordinate effectively and communication among them is poor. Similarly, preliminary results from GAO's work suggest that lack of consistent leadership within DAS-M and BIE hinders collaboration between the two offices. Although BIE's responsibilities to operate Indian schools are in some respects similar to those of state educational agencies (SEAs), BIE's influence is limited because most schools are tribally-operated. Like an SEA, BIE administers, monitors, and provides technical support for a number of programs funded by the Department of Education. Yet, in contrast to states that can impose a range of reforms on schools, in most BIE schools tribes retain authority over key policies. For example, BIE cannot require most schools to adopt or develop their own teacher and principal evaluation systems. 
Further complicating reform efforts, many small individual BIE schools function as their own school districts. We have previously reported that smaller school districts may face challenges acquiring special education services or providers because they lack the same capacity, resources, knowledge, or experience necessary to provide those services as larger-sized school districts.
Background In the 21st century, older Americans are expected to make up a larger share of the U.S. population, live longer, and spend more years in retirement than previous generations. The share of the U.S. population age 65 and older is projected to increase from 12.4 percent in 2000 to 19.6 percent in 2030 and continue to grow through 2050. In part, this is due to increases in life expectancy. The average number of years that men who reach age 65 are expected to live is projected to increase from just over 13 in 1970 to 17 by 2020. Women have experienced a similar rise—from 17 years in 1970 to a projected 20 years by 2020. While life expectancy has increased, labor force participation rates of older Americans only began to increase slightly in recent years. As a result, individuals are generally spending more years in retirement. In addition to these factors, fertility rates at about the replacement level are contributing to the increasing share of the elderly population and a slowing in the growth of the labor force. Also contributing to the slowing in the growth of the labor force is the leveling off of women’s labor force participation rate. While women’s share of the labor force increased dramatically between 1950 and 2000—from 30 percent to 47 percent—their share of the labor force is projected to remain at around 48 percent over the next 50 years. By 2025, labor force growth is expected to be less than a fifth of what it is today. The aging of the baby boom generation, increased life expectancy, and fertility rates at about the replacement level are expected to significantly increase the elderly dependency ratio—the estimated number of people aged 65 and over in relation to the number of people aged 15 to 64. In 1950, there was one person age 65 or over for every eight people aged 15 to 64. The ratio increased to one to five in 2000 and is projected to further increase to one person aged 65 and over for every three people aged 15 to 64 by 2050.
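The dependency-ratio arithmetic above can be made explicit with a minimal sketch. The population counts below are hypothetical, chosen only to reproduce the one-to-eight, one-to-five, and one-to-three ratios cited in the text; an actual projection would use census population counts by age group.

```python
# Elderly dependency ratio, as defined in the text: the number of people
# aged 65 and over in relation to the number of people aged 15 to 64.
# Here expressed as working-age people per person aged 65+.
# All population counts below are hypothetical, illustrative values.

def workers_per_elderly(pop_65_plus, pop_15_to_64):
    """Number of people aged 15-64 for each person aged 65 and over."""
    return pop_15_to_64 / pop_65_plus

# 1950: roughly one person 65+ for every eight people aged 15-64
print(workers_per_elderly(1, 8))      # 8.0
# 2000: about one to five
print(workers_per_elderly(20, 100))   # 5.0
# 2050 (projected): about one to three
print(workers_per_elderly(100, 300))  # 3.0
```

The shrinking quotient is the point of the passage: each retiree's benefits are supported by the payroll contributions of progressively fewer workers.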
As a result, there will be fewer younger workers to support a growing number of Social Security and Medicare beneficiaries. The aging of the population also has potential implications for the nation’s economy. If labor force growth continues to slow as projected, fewer workers will be available to produce goods and services. Without a major increase in productivity or higher than projected immigration, low labor force growth will lead to slower growth in the economy and slower growth of federal revenues. These circumstances in turn will accentuate the overall pressure on the federal budget, which will be encumbered with increased claims for benefits for seniors such as Medicare and Social Security, while relatively fewer workers are paying into the benefits systems. As Americans live longer and spend more years in retirement, several factors contribute to the growing insecurity of retirement income. With greater life expectancies, individuals need to finance more years of retirement; however, many workers claim their Social Security benefits prior to reaching the full retirement age, which results in lower monthly payments. Not only do individuals need to make their money last longer; they also bear greater risk and responsibility for their retirement savings than in the past. About half of U.S. workers do not have a pension plan through their employer, and those who do are less likely than in the past to be covered by defined benefit (DB) plans. The shift from traditional DB plans to defined contribution (DC) plans places greater responsibility on workers to make voluntary contributions and make prudent investment decisions. It also increases the importance of workers preserving such savings for retirement. Moreover, rising health care costs have also made health insurance and anticipated medical expenses increasingly important issues for older Americans. 
A long-term decline in the percentage of employers offering retiree health coverage has leveled off in recent years, but retirees face an increasing share of costs, eligibility restrictions, and benefit changes that contribute to an overall erosion in the value and availability of coverage. Finally, it is clear that Social Security, Medicare, and Medicaid are unsustainable in their present form. When the needed reforms to these programs are made, one result will be that millions of individuals will have to assume increased responsibility for their economic security in retirement. These trends suggest that more and more Americans will find they have inadequate resources to finance retirement. For many, continued work past traditional retirement age may be the solution. We, along with others, have highlighted the need to engage and retain older workers to address some of these challenges associated with an aging workforce. In 2001, we recommended that the Secretary of Labor form a broad interagency task force to develop regulatory and legislative proposals addressing the issues raised by the aging of the labor force and to serve as a clearinghouse of information about employer programs to extend the work life of older workers. After strong encouragement from this Committee, this task force, which includes representatives from the Departments of Labor (Labor), Commerce, and Education, along with the Social Security Administration, began meeting in 2006 and plans to focus on three areas: employer response to the aging of the workforce; individual opportunities for employment in later years; and legal and regulatory issues regarding work and retirement. The task force intends to release a report on its findings and strategies in summer 2007. 
In 2003, we recommended that Labor review the Workforce Investment Act performance measure regarding earnings to ensure that this measure does not provide a disincentive for serving employed workers, some of whom might be older workers. Labor has partially addressed this issue, but the potential for existing measures to have unintended consequences remains. In 2005, we held a series of focus groups with workers and retirees to better understand the factors that influence the timing of retirement. We found that health problems and layoffs were common reasons to retire and that few focus group members saw opportunities to gradually or partially retire. Workers also cited what they perceived as their own limited skills and employers’ age discrimination as barriers to continued employment. As part of this work, we also participated in a roundtable discussion with employers to learn what they were doing to hire and retain older workers. While these employers generally agreed that flexibility was the key feature necessary to recruit and retain older workers, few of them had developed programs to put this belief into practice. Building on this body of work, we convened this forum on older workers to address these issues.

Obstacles to Engaging and Retaining Older Workers

According to participants at our forum, some of the key obstacles that hinder continued work at older ages include: first, employer perceptions about the cost of employing older workers; second, employee perceptions about the costs and benefits of continued work; and third, changes in industry and job skill requirements, which may hinder older workers from remaining employed or finding suitable new employment. First, many employers cite both compensation—including the rising cost of health insurance—and training costs as obstacles to hiring and retaining older workers. 
In addition, forum participants reported that many employers have not learned to place a high value on their experienced workers, instead gearing their succession planning toward replacing older workers with younger ones. Forum participants also cited negative stereotypes surrounding older workers that include the belief that such workers produce lower-quality work than their younger counterparts, and less work overall. Also, many employers believe that older workers are resistant to change. Finally, but not least, it was suggested that some employers are hesitant to hire older workers for fear of age discrimination lawsuits. While many employers express an interest in recruiting older workers, our prior work has found that few develop programs to do so. At the same time that there is some resistance among employers to hiring older workers, there are also strong incentives for workers to retire. Participants noted that a “culture of retirement” exists in this country which encourages workers to claim retirement benefits and stop working as early as possible. The availability of Social Security at age 62 and high effective tax rates on earnings between age 62 and Social Security’s full retirement age may discourage some workers from continuing to work once they start claiming benefits. Workers who receive Social Security benefits but have not yet reached the full retirement age will have their benefits reduced by one dollar for every two or three dollars that they earn above a set threshold due to the Social Security earnings test. As a result, workers who have claimed Social Security benefits at 62 may not feel that it is worthwhile to continue working. Also, the structure of traditional DB pension plans may encourage retirement because pension laws have prohibited working for the same employer while receiving benefits. 
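The earnings-test reduction described above reduces to simple arithmetic. The sketch below is illustrative only: the function name is ours, and the dollar figures used in the example are hypothetical placeholders, not actual Social Security Administration exempt amounts, which are set annually.

```python
def earnings_test_withholding(earnings, exempt_amount, reached_fra_year=False):
    """Estimate annual Social Security benefits withheld under the earnings test.

    Before the year a beneficiary reaches full retirement age (FRA), $1 of
    benefits is withheld for every $2 earned above the annual exempt amount;
    in the year FRA is reached, $1 is withheld for every $3 earned above a
    (higher) exempt amount. Exempt amounts are passed in, not hard-coded,
    because SSA adjusts them each year.
    """
    excess = max(0, earnings - exempt_amount)
    divisor = 3 if reached_fra_year else 2
    return excess / divisor

# A 62-year-old earning $10,000 above a hypothetical exempt amount would
# have $5,000 of benefits withheld for the year.
print(earnings_test_withholding(earnings=22_000, exempt_amount=12_000))  # 5000.0
```

Once the beneficiary reaches full retirement age, the test no longer applies, and withheld amounts are credited back through a recomputed (higher) monthly benefit.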
While the Pension Protection Act of 2006 does contain a provision that allows plans the option of providing some benefits to participants who remain in the workforce at age 62 and beyond, it is too soon to determine what the impact of this policy change will be. In addition to these financial incentives, jobs that are physically demanding or have inflexible schedules that compete with family caregiving needs also provide strong disincentives to continued work. For some, the incentive to retire lies in the lack of suitable job opportunities. Some employers are reluctant to offer flexible work arrangements such as part-time work to existing employees. In addition, layoffs due to changes in the economy, along with the lack of skills needed to compete in the global economy, are also challenges facing older workers. Forum participants reported that employers who downsize may lay off older workers sooner than younger workers, in part because older industries tend to have a disproportionate number of older workers in their labor force. Positions of some low-skill older workers also may have been automated, eliminated, or outsourced. At the same time, displaced older workers may lack the necessary training to make a career change. Our past work has found that when older workers lose a job, they are less likely than younger workers to find other employment.

Best Practices and Lessons Learned on Engaging and Retaining Older Workers

We, along with others, have previously reported on the importance of flexibility in recruiting and retaining older workers. In order to effectively engage older workers, forum participants suggested implementing new recruiting approaches, workplace flexibility, the right mix of benefits and incentives, financial literacy education, and consistent performance management systems. Moreover, participants warned against designing a “one-size-fits-all” approach, noting the significant differences among employers and employees. 
New Approaches Being Used to Engage Older Workers

Employers have found innovative recruiting techniques to identify and recruit older workers. For example, some employers have established partnerships with national organizations, such as AARP, to help advertise themselves as employers of older workers. Other employers rehire their own retirees for specific needs, both short-term and long-term. For example, one employer actively retrains its employees for other distinct roles in the organization.

Flexible Schedules and Workplaces Needed to Attract and Retain Older Workers

Labor force decisions of older workers are also influenced by the availability of flexible work arrangements. Full and complete withdrawals from the workforce are no longer as common as they once were; rather, workers are more likely to seek out phased retirement or bridge employment options. Employers who are creative in how they design jobs, and who allow for flexible work locations away from the traditional office, have an advantage in engaging older workers. One employer mentioned three reasons older workers retire: (1) elder care responsibilities, (2) physical constraints, and (3) a desire to pursue other interests. To address these concerns, this employer provides workers 10 days off each year for elder care, and flexible work schedules. Two employers have a “snow bird” program, which allows employees who live in different places during the year to work in both locations. Other employers have adapted job designs to accommodate the physical constraints of older workers. One participant mentioned a hospital that installed hydraulic systems in all of its beds so the beds could fold into a sitting posture, a change that assisted older staff in moving patients. In a second example, an employer modified an assembly line so that cars on the line could be rotated to grant easier access for mechanics who were unable to lie down to work on cars. 
Benefit Packages Help to Attract and Retain Older Workers

Benefit packages that complement some of these new work arrangements are also important in attracting and retaining older workers. Some forum participants’ organizations offer benefits to both full- and part-time workers. One employer offers medical benefits and tuition reimbursement for employees working at least 15 hours per week, while another offers employee discounts. Modifying pension plans can also entice workers to work longer. One participant’s organization offers its employees the opportunity to retire and return to work after 1 month while still collecting pension benefits. Another employer is considering matching a greater percentage of older workers’ DC plan contributions, thereby appealing to older workers who may not have been with the company for a very long time. However, not every employer can offer such a portfolio of benefits for older workers who work part-time, due to the costs.

Improving Employee Financial Literacy and Helping Employees Better Prepare for Retirement

With older Americans living longer and spending more time in retirement, workers will have to ensure they have a realistic plan to provide for retirement security that may include working longer. Increasing financial literacy can help workers better prepare for retirement by giving them the tools to assess whether or not they have sufficient funds to retire at a particular age. With DC pension plans becoming more common, the burden of financial management on employees is growing, thereby increasing the importance of financial literacy. To address this issue, one forum participant’s employer offers a retirement-planning program for employees over 50 years old that includes individual counseling services. 
Another participant mentioned automatic enrollment in retirement savings plans as an effective way to help employees save for retirement, while also noting that employees need ongoing education to ensure their portfolios remain balanced. Such education should not be limited to only pension plans, as participants highlighted the need to plan for future health care costs as well. To limit exposure to age discrimination litigation, one participant said a consistent performance management system is essential for dealing with all workers. Besides saying that all employees should be treated in a fair and consistent manner, participants agreed it is also important to show older workers that they are valued. Finally, when discussing best practices, forum participants cautioned against designing solutions with a “one-size-fits-all” approach due to the variety of employers’ needs and workers’ knowledge, skills, and goals.

Suggested Strategies for Policymakers and Employers

Given the scope and importance of this issue, participants offered a number of strategies to encourage older workers to remain in the labor force and to encourage employers to engage and retain older workers. They generally agreed that a change was needed in the national mind-set about work at older ages and that a national campaign to promote this concept was needed. Such a campaign could highlight the different types of work older people are engaged in, the positive attributes of older workers, and the benefits to employers of engaging and retaining older workers. To change the “culture of retirement” that currently exists, one participant suggested the need for a national discussion to reconsider what “old” is, and how we should think about retirement, or if there should even be a retirement age. Participants also agreed that employers need information about the best practices for engaging and retaining older workers. 
One strategy discussed was the establishment of a national clearinghouse of best practices, such as the different kinds of work structures, recruiting techniques, and workplace flexibilities used by some employers to attract and retain older workers. Participants agreed that strategies that increase financial literacy may help workers better plan for their futures and learn more about the benefits of working longer. Although this is a long-term endeavor, participants suggested that both public and private efforts may be needed to promote financial literacy, including incorporating financial literacy into the grade school curriculum, promoting the discussion of retirement planning much earlier in workers’ careers, and using faith-based organizations as a conduit for financial planning. Finally, participants discussed a number of ways that the federal government could be a leader in encouraging older workers to remain in the workforce. First, as an employer of millions facing the impending retirement of many of its workers, the federal government should “lead by example” and be a role model in how it engages and retains older workers. Second, it can help to foster the kinds of public/private partnerships that would promote the national campaign, begin a national discussion, or contribute to the national clearinghouse discussed above. In addition, the public sector, in cooperation with the private sector, can help displaced older workers who need new skills to remain in the workforce. And third, through specific legislation or regulations that would increase flexibility for employers and employees, the federal government can help create new models of employment for older Americans. For example, some participants discussed the need for safe harbors in the tax code and the Employee Retirement Income Security Act that would make it easier for people to return to work after retirement and still collect their pensions. 
The related provision in the Pension Protection Act, which affords some flexibility in this area, represents a step in the right direction. Another participant suggested that age discrimination laws may have had some unintended consequences, and that these laws should be reevaluated or amended to provide safe harbors that would encourage employers to hire older workers.

Conclusions

Engaging and retaining older workers is critical for promoting economic growth, improving federal finances, and shoring up retirees’ income security. Given the right mix of incentives, programs, and job designs, we have an opportunity today to support those who wish to work later in life, thereby reinventing the traditional concept of retirement, helping to bolster individuals’ retirement security, and fostering economic growth. With the oldest members of the baby boom generation eligible to begin collecting early Social Security benefits next year, time is running out to seize this opportunity. We convened this forum because of the importance of engaging and retaining older workers, and we congratulate this Committee for its sustained leadership on this issue. Given existing trends in the aging of baby boomers, pressures on federal entitlement programs, and threats to individuals’ retirement security, it is in the nation’s interest for people to work longer. Harnessing the benefits of this growing group of potential older workers requires that barriers to continued work be removed sooner rather than later. At the same time, it is important to acknowledge that not everyone can work at older ages, and proper accommodations are needed for such persons as well. Despite evidence indicating the future importance of older Americans to the workforce, barriers and perceptions continue to get in the way of making progress. Forum participants generally agreed that employers do not place a high enough value on experienced workers, and that suitable job opportunities are lacking for older workers. 
These findings echo many of those that we heard in our 2005 focus groups and reiterate findings from our 2001 report. While some progress is being made, in the absence of additional change, we risk a missed opportunity to engage those workers who wish to remain in the workforce longer. At our forum, there was a good deal of enthusiasm among participants to confront this issue, and I hope that by sharing some best practices and suggested strategies today, progress will continue with renewed insight and energy. At the same time, given the national scope of the challenge, addressing it will require not only workers and employers. Clearly, there is also a role for government to play, whether by becoming a model employer of older federal employees, by helping to foster flexible work arrangements in the private sector to meet the needs of older workers, or by considering legislative and regulatory changes, including those that Labor’s interagency task force may propose. Finally, consideration of the current mix of federal policies—including Social Security, Medicare, and pension laws—may be warranted to ensure that their incentives are appropriate given future demographic changes and the benefits that can be gained from work at later years for both individuals and the nation. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions you or the other members of the Committee may have. I am pleased by your continued interest in this area and look forward to working with you on this issue in the future.

Contact and Staff Acknowledgments

For questions regarding this testimony, please call Barbara Bovbjerg, Director, at 202-512-7215. Other individuals making key contributions to this statement included Mindy Bowman, Alicia Puente Cackley, Jennifer Cook, Scott Heacock, and Kevin Kumanga. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

More Americans remaining in the workforce at older ages could lead to benefits at several levels. First, working longer will allow older workers to bolster their retirement savings. Second, hiring and retaining older workers will help employers deal with projected labor shortages. Third, older workers will contribute to economic growth and increase federal revenues, helping to defray some of the anticipated costs associated with increased claims on Social Security and Medicare. Despite all of these gains to be had, there are barriers to continued employment for older workers. In addition, some employers remain reluctant to engage and retain this group. It is in the nation's interest for people to work longer, which requires that barriers to continued work be removed sooner rather than later. This testimony highlights issues discussed at a recent forum GAO convened on engaging and retaining older workers, as well as prior GAO work. Forum participants included experts representing employers, business and union groups, advocates, researchers, actuaries, and federal agencies. These highlights do not necessarily represent the views of any one participant or the organizations that these participants represent, including GAO. Obstacles continue to exist for older workers seeking continued or new employment and for employers who want to attract or retain older workers. The following obstacles, best practices, lessons learned, and strategies to address some of these obstacles and promote work at older ages were discussed at a recent GAO forum on older workers. 
Key Obstacles: (1) Some employers' perceptions about the cost of hiring and retaining older workers are a key obstacle to older workers' continued employment. (2) Workplace age discrimination, the lack of suitable job opportunities, layoffs due to changes in the economy, as well as the need to keep skills up to date, are all challenges facing older workers. (3) Strong financial incentives for workers to retire as soon as possible and some jobs that are physically demanding or have inflexible schedules provide strong disincentives to continued work.

Best Practices and Lessons Learned: (1) Use nontraditional recruiting techniques such as partnerships with national organizations that focus on older Americans. (2) Employ flexible work situations and adapt job designs to meet the preferences and physical constraints of older workers. (3) Offer the right mix of benefits and incentives to attract older workers, such as tuition assistance, time off for elder care, employee discounts, and pension plans that allow retirees to return to work. (4) Provide employees with financial literacy skills to ensure they have a realistic plan to provide for retirement security. (5) Treat all employees in a fair and consistent manner and employ a consistent performance management system to prevent age discrimination complaints.

Strategies: (1) Conduct a national campaign to help change the national mindset about work at older ages. (2) Hold a national discussion about what "old" is to help change the culture of retirement. (3) Create a clearinghouse of best recruiting, hiring, and retention practices for older workers. (4) Strengthen financial literacy education to help workers prepare to retire. (5) Make the federal government a model employer for the nation in how it recruits and retains older workers. (6) Create a key federal role in partnerships to implement these strategies. 
(7) Consider specific legislation or regulations to increase flexibility for employers and employees to create new employment models.
Background

Overview of the Federal Reserve System

The Federal Reserve Act established the Federal Reserve to operate collectively as the country’s central bank. The act established the Federal Reserve as an independent agency with a decentralized structure to ensure that monetary policy decisions would be based on a broad economic perspective from all regions of the country. The Federal Reserve’s monetary policy decisions do not have to be approved by the President, the executive branch of the government, or Congress. However, the Federal Reserve is subject to oversight by Congress and conducts monetary policy so as to promote the long-run objectives of maximum employment, stable prices, and moderate long-term interest rates in the United States, as specified by law. The Federal Reserve operates in a unique public and private structure. It consists of the Board of Governors (a federal agency), the 12 Reserve Banks (federally chartered corporations), and FOMC. Board of Governors. The Board of Governors is an independent regulatory federal agency located in Washington, D.C., and has broad interest in monitoring and promoting the stability of financial markets. The Board of Governors’ authorities include: supervising bank and thrift holding companies, state-chartered banks that are members of the Federal Reserve, and the U.S. operations of foreign banking organizations; reviewing and determining discount rates for lending to depository institutions; conducting monetary policy (in cooperation with FOMC); and providing general supervision over the operations of the Reserve Banks. The top officials of the Board of Governors are the seven members who are appointed by the President and confirmed by the Senate. Moreover, the Federal Reserve Act requires the Board of Governors to submit written reports to Congress twice each year containing discussions of the conduct of monetary policy and economic developments and prospects for the future. 
The act also requires the Chair of the Board of Governors to testify on the conduct of monetary policy twice each year in connection with the monetary policy report, as well as economic development and prospects for the future. Reserve Banks. The Federal Reserve is divided into 12 districts, with each district served by a regional Reserve Bank. In most cases, each regional Reserve Bank also operates one or more branch offices (see fig. 1). The Reserve Banks are not federal agencies; rather, each Reserve Bank is a federally chartered corporation with a board of directors and member banks that are stockholders. Under the Federal Reserve Act, Reserve Banks are subject to the general supervision of the Board of Governors. The Reserve Banks were established by Congress as the operating arms of the Federal Reserve, and they combine both public and private elements in performing a variety of services and operations. These functions include participating in formulating and conducting monetary policy; providing payment services to depository institutions, including transfers of funds, automated clearinghouse services, and check collection; distributing coin and currency; performing fiscal agency functions for Treasury, certain federal agencies, and other entities; providing short-term loans to depository institutions; serving consumers and communities by providing educational materials and information on financial consumer protection rights and laws and information on community development programs and activities; and supervising bank holding companies, state member banks, savings and loan holding companies, U.S. offices of foreign banking organizations, and designated financial market utilities pursuant to authority delegated by the Board of Governors. In addition, certain services are provided to foreign and international monetary authorities, primarily by the Federal Reserve Bank of New York. 
State-chartered member banks are subject to supervision by the state in which they are chartered and by the Board of Governors (through a regional Reserve Bank) as a condition of membership. National banks are chartered and supervised by OCC. State nonmember banks are supervised by the state in which they are chartered and by FDIC. See figure 2 for a chart displaying the number and percentage of commercial banks supervised by each prudential regulator. FOMC. FOMC plays a central role in the execution of the Federal Reserve’s monetary policy mandate to promote price stability, maximum employment, and moderate long-term interest rates in the United States. FOMC is responsible for directing open market operations—the purchase and sale of securities in the open market by a central bank—to influence the total amount of money and credit available in the economy. FOMC has authorized and directed the Federal Reserve Bank of New York to conduct open market operations by engaging in purchases or sales of certain securities, typically U.S. government securities, in the secondary market. FOMC also plays a central role in monetary policy strategy and communication.

Federal Reserve Income, Surplus Account, and Remittances to Treasury

Reserve Banks derive income from various sources, maintain surplus accounts, and remit earnings in excess of expenses to Treasury. The Reserve Banks derive income primarily from the interest on their holdings of U.S. government securities, agency mortgage-backed securities, and agency debt acquired through open market operations. Other sources of income are the interest on foreign currency investments held by the Reserve Banks; interest on loans to depository institutions; reimbursements for services performed as fiscal agent for Treasury and other agencies; and fees received for payment services provided to depository institutions, such as check clearing, funds transfers, and automated clearinghouse operations. 
However, Reserve Banks are not operated for profit. The Reserve Banks use earnings to pay operational expenses and dividends to member banks and to fund their capital surplus accounts. The surplus account is primarily intended to cushion against the possibility that total Reserve Bank capital would be depleted by losses incurred through Federal Reserve operations. Until enactment of the FAST Act, Federal Reserve policy as established in the Financial Accounting Manual for Federal Reserve Banks required the Reserve Banks to retain a surplus balance equal to the 3 percent that commercial banks pay in to purchase Reserve Bank stock. Due to this matching provision, as the value of member banks’ capital and surplus increased over time, so did the value of the Federal Reserve’s surplus account (see fig. 3). The Reserve Banks then transfer earnings in excess of expenses to Treasury. About 95 percent of the Reserve Banks’ net earnings have been transferred to Treasury since the Federal Reserve began operations in 1914. The transfers, known as remittances, have been above historic levels since the 2007–2009 financial crisis (see fig. 4).

Stock Purchase Requirement for Member Banks and Membership Benefits

Under the Federal Reserve Act, a member bank (a national bank or state-chartered bank that applies and is accepted to the Federal Reserve) must subscribe to capital stock of the Reserve Bank of its district in an amount equal to 6 percent of the member bank’s capital and surplus. The member bank will pay for one-half of this subscription upon approval by the Reserve Bank of its application for capital stock (with the remaining half of the subscription subject to call by the Reserve Bank). The capital stock of each Reserve Bank is valued at $100 per share. When a member bank increases its capital stock or surplus, it must subscribe for an additional amount of Reserve Bank stock equal to 6 percent of the increase, with half of the stock paid in. 
Conversely, when a member bank reduces its capital stock or surplus, it must surrender the same amount of stock to its regional Reserve Bank. Shares of the capital stock of Reserve Banks owned by member banks do not carry with them the typical features of control and financial interest conveyed to holders of common stock in for-profit organizations. For example, member banks cannot transfer or sell Reserve Bank stock or pledge it as collateral; voting rights do not change with the number of shares held; and each member bank has only a single vote in those director elections in which it is eligible to vote. Currently, stock ownership provides a dividend payment and the right to vote for two classes of Reserve Bank directors, as discussed later. Under the original Federal Reserve Act, the annual dividend rate was 6 percent on paid-in capital stock and was cumulative. Therefore, member banks would earn a dividend of 0.5 percent per month on the amount of their paid-in capital stock. The Reserve Banks’ long-standing practice is to make dividend payments on the last business days of June and December (that is, a dividend payment of 3 percent twice a year). Provisions in the FAST Act effective January 1, 2016, altered the dividend rate that some member banks receive on paid-in capital. For banks with more than $10 billion in consolidated assets, the dividend rate was reduced from 6 percent per annum to the lesser of 6 percent or the highest accepted yield at the most recent auction of 10-year Treasury notes before the dividend payment date. The high yield at the last 10-year Treasury note auction before the June 30, 2016, dividend payment was 1.702 percent, and before the December 30, 2016, payment it was 2.233 percent. The dividend rate for member banks with less than $10 billion in consolidated assets remains at 6 percent. The Reserve Banks continue to make dividend payments semiannually.
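The subscription and dividend arithmetic described above can be sketched in a few lines. This is a hedged illustration, not Federal Reserve code: the member bank's size and capital figures are hypothetical, while the rates are the statutory 6 percent and the 1.702 percent June 2016 auction yield cited in the text.

```python
PAR_VALUE = 100  # each share of Reserve Bank stock is valued at $100

def paid_in_stock(capital_and_surplus):
    """Subscription equals 6 percent of the member bank's capital and
    surplus; half is paid in, the other half is subject to call."""
    subscription = 0.06 * capital_and_surplus
    return subscription / 2

def semiannual_dividend(paid_in, consolidated_assets, ten_year_high_yield):
    """Post-FAST Act rule: banks with more than $10 billion in
    consolidated assets earn the lesser of 6 percent or the 10-year
    Treasury high yield; smaller banks keep 6 percent. Dividends are
    paid twice a year, so the annual rate is halved."""
    if consolidated_assets > 10e9:
        annual_rate = min(0.06, ten_year_high_yield)
    else:
        annual_rate = 0.06
    return paid_in * annual_rate / 2

# Hypothetical member bank: $50 billion in assets, $4 billion in capital and surplus
paid_in = paid_in_stock(4e9)                         # $120 million paid in
shares = paid_in / PAR_VALUE                         # 1.2 million shares at $100 par
before = semiannual_dividend(paid_in, 50e9, 0.06)    # pre-2016 fixed 6% regime
after = semiannual_dividend(paid_in, 50e9, 0.01702)  # June 2016 auction high yield
```

Under these assumptions the semiannual payment falls from $3.6 million to about $1.02 million, a reduction on the order of the roughly two-thirds drop reported later in this section for larger member banks in the first half of 2016.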
Reserve Bank Boards of Directors

The composition of boards of directors for Reserve Banks is statutorily determined and intended to ensure that each board represents both the public and member banks in its district. The Federal Reserve Act established nine-member boards of directors to govern all 12 Reserve Banks. Each board is split equally into three classes of directors. Class A directors represent the member banks, while Class B and C directors represent the public. For Class B and C directors, the Federal Reserve Act requires “due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumers.” The Federal Reserve Act also requires that member banks elect Class A and Class B directors and that the Board of Governors appoint Class C directors. The Federal Reserve Act provides that the chairman of the board, like all Class C directors, cannot be an officer, director, employee, or stockholder of any bank. The principal functions of Reserve Bank directors are to play a role in the conduct of monetary policy; oversee the general management of the Reserve Bank, including its branches; and act as a link between the Reserve Bank and the community. The boards of directors of Reserve Banks play a role in the conduct of monetary policy in three primary ways: (1) by providing input on economic conditions to the Reserve Bank president (all 12 Reserve Bank presidents attend and participate in deliberations at each FOMC meeting); (2) by participating in establishing discount rate recommendations (the interest rate charged to commercial banks and other depository institutions on loans received from their regional Reserve Bank’s discount window) for Board of Governors’ review and determination; and (3) for the Class B and C directors, by appointing Reserve Bank presidents with the approval of the Board of Governors.
Central Bank Independence

A large amount of research has been produced on the attributes and effects of central bank independence. According to the research, a high level of central bank independence is generally considered to be desirable. The research has generally found that countries with high central bank independence have been able to maintain lower levels of inflation. Central bank independence can be divided into three categories (political, instrument, and financial independence), as described in the following bullets.

- Political independence is based on a central bank’s capacity to define monetary policy strategy (goals) without political interference. Political independence encompasses appointing procedures, relationships with the government, and formal responsibilities.
- Instrument independence is based on a central bank’s capacity to define monetary policy instruments without political interference. Instrument independence for a central bank includes the ability to avoid financing public spending by money creation, autonomy in setting interest rates, and ability to conduct monetary policy without banking sector oversight responsibilities.
- Financial independence is based on a central bank’s capacity to govern its own budget. Financial independence encompasses conditions for capitalization and recapitalization, determination of the central bank budget, and arrangements for profit distribution and loss coverage.

Independence in the implementation of monetary policy can be a function of the degree of independence in all three categories: political, instrument, and financial. Lower degrees of independence in any of these areas can affect monetary policy independence. Existing research shows that the Federal Reserve is relatively independent overall compared to central banks in other advanced economies.
The level of political independence is lower for the Federal Reserve than its instrument or financial independence due in part to existing appointment procedures for the Board of Governors, whose members are appointed by the President and confirmed by the Senate. However, Board of Governors officials stated that Federal Reserve political independence is strengthened by the fact that Reserve Bank presidents are not political appointees. In addition, the instrument independence of the Federal Reserve is high, and the financial independence of the Federal Reserve is also relatively high.

Rationales for Stock Purchase Requirement and Dividend Rates

According to legislative history and historical accounts that we reviewed, the stock purchase requirement in the Federal Reserve Act established an ownership and control arrangement at Reserve Banks to facilitate a balance of power between the Board of Governors and private interests, capitalized the Reserve Banks, and helped support the new national currency created by the act. Based on our interview with a past Federal Reserve historian and historical accounts, the dividend rate of 6 percent was intended to compensate member banks for the requirement to provide funds to the Reserve Banks to begin operations and the risk of the Federal Reserve not succeeding, as well as to attract state-chartered banks to the Federal Reserve.

Stock Purchase Requirement Established an Ownership and Control Structure for Reserve Banks as a Counterbalance in the Federal Reserve

According to the legislative history and historical accounts related to the Federal Reserve Act, debate over the creation of the Federal Reserve focused on the balance of power among economic regions of the United States and between the private sector and government.
The resultant corporate structure of Reserve Banks was intended to help balance the influence of government over the central bank, of different regions, and of small versus large banks, as well as to help fund the Federal Reserve. Resistance among Americans to a central bank dated back to the nation’s founding; thus, early drafts of proposals for a central bank did not include the term “central bank.” However, there was strong recognition that the nation needed a central bank to forestall and mitigate financial panics. There was considerable disagreement about how it should be structured, including considerations about the role of private bankers versus government officials, how centralized the new bank should be, and the extent of its powers. The Federal Reserve Act as proposed in January 1913 by Representative Carter Glass generally was viewed as occupying the middle ground between positions advocating for government control over the Federal Reserve and positions advocating for more control by private commercial banks. The Glass bill proposed creating up to 15 Reserve Banks and the Board of Governors. The Reserve Banks were modeled after clearinghouses or “banker’s banks” and some European central banks in that they would be funded by selling stock shares to commercial banks. In particular, the design adopted for the Federal Reserve was federated, with independent Reserve Banks overseen by the Board of Governors. Under the bill, each Reserve Bank was required to have minimum capital to begin business. According to a past historian of the Federal Reserve, proponents in Congress of a central bank did not want to fund it, but needed to raise cash for capital and gold to back Federal Reserve notes that would serve as the national currency.
The original proposal required member banks to purchase stock equal to 20 percent of their paid-in and unimpaired capital, with one-half paid on joining the Reserve Bank and one-half callable from the member bank. The Senate and conference committees agreed to change the capital of the Federal Reserve to 6 percent of member banks’ capital and surplus rather than 20 percent of capital alone as provided in the Glass bill. This change yielded almost the same total capital but satisfied small banks claiming that the Glass bill discriminated against them. Member banks had to pay for the stock in gold or gold certificates, which concentrated gold deposits in the Reserve Banks to support Federal Reserve notes. The bill also required that each national bank subscribe to the stock of the Reserve Bank in its district. Only national banks were compelled to subscribe because their charters were issued by the federal government. State-chartered banks were not required to purchase Reserve Bank stock but were permitted to join the Federal Reserve if they met certain requirements. State-chartered banks opposed mandatory membership because they did not want to be subject to supervision by a federal regulator. The mandatory nature of national bank membership and stock ownership was controversial when the Federal Reserve Act was under debate. But for Glass, “the compulsory and pro rata capital contribution were ‘means to the achievement of a democratic organization constituted by the democratic representation of the several institutions which are members and stockholders of a reserve bank’” and also “considered necessary for the establishment of corporate entities that would act ‘primarily in the public interest.’” In addition, Senator Robert Owen, primary sponsor of the Federal Reserve Act in the Senate, supported the stock purchase requirement because he believed it would ensure that commercial banks would have an incentive to safeguard the Federal Reserve. 
Dividend Rate of 6 Percent Was Intended to Compensate for Bank Costs and Risks and Attract State Banks

The rationales for paying a 6 percent dividend rate included compensating banks for opportunity costs for providing capital and reserves to the Reserve Banks and attracting state-chartered banks to Federal Reserve membership. According to the legislative history and other information we reviewed, notable proposals for creating a central bank included stock and dividend payments. One of the early proposals for a central bank was written in 1910 by Paul Warburg and included dividends on central bank stock of 4 percent. The original Federal Reserve Act proposed by Representative Glass provided for a dividend rate of 5 percent. Glass stated in his report on the 1913 bill that 5 percent represented the normal rate of return from current bank investments “considering the high character of the security offered.” Debate in the Senate and conference committee resulted in a 6 percent dividend rate on Reserve Bank stock. This rate was comparable to those of European central banks of the time. Based on our interviews with a past Federal Reserve historian, one of the rationales for creation of the 6 percent dividend rate was to compensate member banks for the opportunity costs of the capital they invested in the Reserve Bank stock. National banks and state-chartered banks that chose to join the Federal Reserve were required to purchase the stock and therefore could not invest this capital in other instruments that might earn a higher return. Also, the 6 percent dividend rate included a risk premium associated with the stock of this new institution. While the Reserve Banks are seen as safe today, during the debates over the Federal Reserve Act there was worry that they would fail, particularly smaller Reserve Banks in rural regions of the country that had less initial capital.
However, concerns were raised about making the dividend rate so attractive that member banks would pull too much bank capital away from the local community. Lastly, the dividend was intended to help induce state-chartered banks to join the Federal Reserve. As noted earlier, state-chartered banks were not required to join the Federal Reserve and purchase Reserve Bank stock and therefore would not be subject to supervision by the Federal Reserve. As a result, a low percentage of state-chartered banks initially joined the Federal Reserve and there was a gap in the Board’s knowledge of the safety and soundness of the banking system. Thus, the 6 percent dividend rate was intended as an incentive for state-chartered banks to voluntarily join the Federal Reserve. To examine the comparative value of a dividend rate of 6 percent since the enactment of the Federal Reserve Act, we examined rate of return information on Treasury and certain corporate bonds. (See appendix I for information about our data sources and methodology.) As shown in figure 5, returns on investment-grade and medium-grade corporate bonds and Treasury bonds varied widely from 1900 through 2015. Before enactment of the Federal Reserve Act in 1913, returns for investment-grade corporate bonds and Treasury bonds were around 4 percent and stayed in that range until about 1960, when they began to rise dramatically. Returns for each of the instruments (medium-grade corporate bond data were recorded from the mid-1940s) were consistently above 6 percent from the early 1970s to the early 1990s, and peaked around 1980 at about 16 percent. Returns for each of the bond categories above are now below 6 percent. In addition, we reviewed U.S. stock data and found total returns to average about 6.5 percent over more than a century. However, stocks can pose higher variability in returns than corporate bonds and Treasury securities. 
We also compared the 6 percent Reserve Bank dividend rate to the federal funds rate and 1-year nominal interest rates. We analyzed the federal funds rate—the interest rate at which depository institutions trade federal funds (balances held at Reserve Banks) to other depository institutions overnight—to consider a member bank’s opportunity costs of holding a share of Reserve Bank stock. The federal funds rate represents a market of interbank lending at low risk. We collected data on the federal funds rate since 1954. In addition, we analyzed a nominal interest rate series to understand opportunity costs prior to 1954. As shown in figure 6, nominal interest rates were between 4 percent and 6 percent in 1913, but dipped dramatically during the Great Depression and World War II. Rates reached 6 percent again in the late 1960s and then peaked around 18 percent in the early 1980s. The federal funds rate has been near zero since the 2007–2009 financial crisis.

Potential Implications of Modifying the Capital Surplus Account and Dividend Rate

Based on our interviews with Federal Reserve officials, the cap on the aggregate Reserve Banks’ surplus account had little effect on Federal Reserve operations, and we found that the modification to the Reserve Bank stock dividend rate has had no immediate effect on membership. While it is debatable whether transferring funds from the Federal Reserve to Treasury when the FAST Act also funded specific projects should be viewed any differently than the recurring transfers that occur on a regular basis, some stakeholders raised concerns about future transfers that could ultimately affect, among other things, the Federal Reserve’s financial independence and consequently, autonomy in monetary policy decision making (instrument independence).
Although commercial banks and Federal Reserve officials we interviewed raised a number of concerns about the stock dividend rate change, it appears to have had no effect on Reserve Bank membership as of December 2016.

Surplus Account Cap Has Not Impeded Federal Reserve’s Operations but Raises Other Questions for Some

According to Board of Governors officials, the statutory requirement to cap the surplus account and transfer excess funds has not impeded Federal Reserve operations as of December 2016. However, according to current and former Federal Reserve officials we interviewed, the nature of the transfer of funds, which were added to Treasury’s General Fund and used as an offset to make up a shortfall in the Highway Trust Fund, raises questions about the possibility of future transfers. They also expressed concern that the cap could negatively affect the Federal Reserve’s independence in monetary policy decision making by rendering it dependent on Treasury for recapitalization in the event that total Reserve Bank capital is depleted. The FAST Act, which authorized the Highway Trust Fund for fiscal year 2016 through fiscal year 2020, requires that the aggregate of the Reserve Banks’ surplus funds not exceed $10 billion and directed that amounts in excess of $10 billion be transferred to Treasury’s General Fund. The excess of Reserve Bank surplus over the $10 billion limitation as of the December 4, 2015, enactment date of the FAST Act was $19.3 billion, which was transferred to Treasury on December 28, 2015. The $19.3 billion transferred from the surplus account was part of $117 billion in earnings the Federal Reserve transferred to Treasury in 2015. The FAST Act transferred a total of $70 billion from Treasury’s General Fund to make up a projected shortfall in the Highway Trust Fund through fiscal year 2020.
The Congressional Budget Office estimates that, in addition to its annual remittances, the Federal Reserve’s transfers to Treasury will increase by a total of $53.3 billion from 2016 to 2025 as a result of capping the surplus account balance at $10 billion. As we found in our 2002 report on the surplus account, reducing the Federal Reserve capital surplus account creates a one-time increase in federal receipts, but the transfer by itself will have no significant long-term effect on the federal budget or the economy. Because the Federal Reserve is not included in the federal budget, amounts transferred to Treasury from reducing the capital surplus account are treated as a receipt under federal budget accounting but do not produce new resources for the federal government as a whole. The surplus account cap reduces future Reserve Banks’ earnings because the Reserve Banks would hold a smaller portfolio of securities. As a result, the cap reduces their transfers to Treasury in subsequent periods. Since the one-time transfer from the Federal Reserve also increases Treasury’s cash balance over time, Treasury would sell fewer securities to the public and thus pay less interest to the public. Over time, the lower interest payments to the public approximately offset the lower receipts from Federal Reserve earnings. According to Board of Governors officials, the cap on the surplus account had little effect on Federal Reserve operations as of December 2016, and the chances of the cap impeding operations in the long term appear to be small. This is because Federal Reserve operations are funded before remaining excess funds are transferred to the surplus account. In addition, if Reserve Bank earnings during the year are not sufficient to provide for the costs of operations, payment of dividends, and maintaining the $10 billion surplus account balance, remittances to Treasury are suspended.
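The remittance mechanics described here can be summarized with a simplified sketch: operations and dividends are funded first, the aggregate surplus is held at the $10 billion cap, and the remainder is remitted to Treasury, with any shortfall carried forward (recorded as a deferred asset) to be recouped from future earnings. The figures below are hypothetical, and the actual Reserve Bank accounting is more involved than this illustration.

```python
SURPLUS_CAP = 10e9  # FAST Act cap on the aggregate Reserve Bank surplus

def annual_remittance(earnings, expenses, dividends, surplus):
    """Sketch of the remittance waterfall: fund operations and dividends,
    top the surplus account up to the cap, remit the remainder to
    Treasury. If earnings fall short, remittances are suspended and the
    shortfall is carried forward as a deferred asset."""
    available = earnings - expenses - dividends
    top_up = SURPLUS_CAP - surplus  # amount needed to restore the cap
    remainder = available - top_up
    if remainder >= 0:
        return {"remitted": remainder, "deferred_asset": 0.0}
    return {"remitted": 0.0, "deferred_asset": -remainder}

# Hypothetical year: $100 billion earnings, $7 billion expenses,
# $1 billion dividends, surplus already at the $10 billion cap
result = annual_remittance(100e9, 7e9, 1e9, 10e9)  # $92 billion remitted
```

In the hypothetical shortfall case `annual_remittance(5e9, 7e9, 1e9, 10e9)`, nothing is remitted and a $3 billion deferred asset is carried until future earnings cover it.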
A deferred asset is recorded in the Federal Reserve’s accounts to represent the amount of net earnings a Reserve Bank will need to realize before remittances to Treasury resume. In our September 2002 report, we found no widely accepted, analytically based criteria to show whether a central bank needs capital as a cushion against losses or how the level of such an account should be determined. However, according to Board of Governors officials, if a central bank exhausts its capital cushion or its capital position is negative, realized losses that result from asset sales or draining of monetary liabilities would further exacerbate the capital deficiency. According to Federal Reserve officials and academics we interviewed, transferring Federal Reserve funds to address a budgetary shortfall might lead the public and financial markets to question if the Federal Reserve was independent from the executive and legislative branches. In their view, if these actions set a precedent, the public and financial markets might conclude that the central bank was not conducting monetary policy aimed solely at achieving the monetary policy objectives set forth in the Federal Reserve Act (price stability, maximum employment, and moderate long-term interest rates in the United States). Instead, some might believe that the Federal Reserve had been directed to take policy actions that would help fund government spending. Whether transferring funds from the Federal Reserve to address budgetary shortfalls should be viewed any differently than the annual remittances is debatable. Congress has transferred money from the surplus account to Treasury’s General Fund on other occasions, most recently with the Consolidated Appropriations Act of 2000 that directed the Reserve Banks to transfer to Treasury additional surplus funds of $3.752 billion during fiscal year 2000. 
These transfers are deposited in Treasury’s General Fund and available for appropriation and use for general support of the government. Nevertheless, Federal Reserve officials, an industry association, and some commercial banks we interviewed believed the requirement to transfer funds from the surplus account, which many see as specifically intended to support the Highway Trust Fund, was different and set a worrying precedent. In particular, Board of Governors officials stated that prior transfers from the Reserve Banks to Treasury did not place a cap on the amount of the surplus accounts that could be retained by the Reserve Banks. Several academic experts with whom we spoke noted that countries with independent central banks have strict provisions against transfers of central bank funds by the legislative branch. However, as long as rules regarding the transfer of central bank earnings to the government are clearly defined, such transfers are consistent with best practices associated with central bank financial independence. As we discuss later, concerns may arise if subsequent transfers reduce the capital surplus to zero, which could lead to dependence on Treasury for capital integrity. Since capital integrity is required to support monetary policy autonomy, reliance on Treasury could diminish the independence of the Federal Reserve. As we discuss later in this report, there are ways to preserve Federal Reserve independence under varying capital structures.

Dividend Rate Modification Raises Potential Implications but Has Had No Immediate Effect on System Membership

The FAST Act’s modification of the Reserve Banks’ stock dividend rate for large member banks from 6 percent to a rate pegged at the lesser of 6 percent or the 10-year Treasury rate, which was below 6 percent in June 2016, increased federal receipts and reduced revenues for large member banks, but has had no immediate effect on Federal Reserve membership.
In 2015, the Federal Reserve made dividend payments to member banks totaling more than $1.7 billion. Board of Governors officials told us that dividend payments to member banks in 2016 totaled $711 million. The modified dividend rate for the larger member banks reduced the dividend payment for the first half of 2016 by nearly two-thirds from the payment for the first half of 2015 (from approximately $850 million to approximately $300 million). More specifically, individual larger member banks received between about $185,000 and about $112 million less at June 30, 2016, than they had at June 30, 2015. While the current interest rate environment is historically low, the difference in dividend income earned by large banks due to the dividend rate modification would decline in a higher interest rate environment, because the 10-year Treasury rate could increase over time to 6 percent (the ceiling on the dividend rate for member banks with more than $10 billion in consolidated assets). Commercial banks and Federal Reserve officials we interviewed expressed some concerns about the dividend rate modification. We interviewed 17 member and nonmember commercial banks, including 6 of the 85 Federal Reserve member banks that held more than $10 billion in assets as of December 31, 2015, and 11 smaller member banks. Four of the 6 large member banks stated that they would likely act to recoup this lost revenue. For example, some mentioned employee layoffs and increased fees on consumers as potential options to recoup the lost revenue. Two large member banks noted that the dividend rate modification was made at a time when these institutions were adjusting to changes in the regulatory and financial environment, and incorporating the revenue cut made adjusting to these changes even more challenging. However, these factors also make it difficult to link the dividend rate modification to any specific effects on employees or consumers.
Most of the member and nonmember banks we interviewed argued that the selection of the 10-year Treasury note as a benchmark for the dividend rate does not appropriately compensate member banks. Several commercial banks noted that the decision to use the 10-year Treasury note did not account for the illiquidity of Reserve Bank stock (it cannot be traded, while 10-year Treasury notes can). They added that this illiquidity should be accounted for by the addition of a premium to the rate paid on Reserve Bank stock (an illiquidity premium). Additionally, several commercial banks reported that shifting from a fixed dividend rate to a floating rate determined during the month when dividends are paid increased the uncertainty surrounding their business decisions. Several commercial banks also stated that they would have preferred that the dividend rate modification be considered on its own merits rather than utilized to help pay for transportation projects. The American Bankers Association stated in a comment letter on the interim final rule implementing the dividend rate modification that the change represented a breach of contract between the Federal Reserve and member banks and amounted to “an unconstitutional taking of member banks’ property without compensation.” It further stated that the “Takings Clause of the Fifth Amendment provides that ‘private property’ shall not ‘be taken for public use, without just compensation’” and the dividend rate change was in violation of the Fifth Amendment. On February 9, 2017, the American Bankers Association filed a lawsuit against the United States that included a Fifth Amendment Takings Clause claim. Certain Federal Reserve officials with whom we spoke were concerned about increased membership attrition as a result of the dividend rate modification. However, as of December 2016 there was no evidence that banks had dropped their Federal Reserve membership as a result of lower dividend payments.
According to data provided by the Board of Governors and Reserve Banks, membership in the Reserve Banks dropped by about 2 percent (46 banks) from December 31, 2015, to June 30, 2016. The Reserve Banks generally attributed this drop to normal attrition and consolidation in the industry. This decrease is consistent with the general decline in the number of banks supervised by the Federal Reserve from 2010 through 2015 (as shown in fig. 2). FDIC officials stated in May 2016 that they had seen no impact of the dividend rate modification on state-chartered member and nonmember banks. OCC officials stated that it was too early to determine the impact of the dividend rate modification on national banks. However, OCC officials noted that the costs associated with changing membership can be significant and can be a decision-making factor. For example, industry association officials said that such costs could include those associated with changing the institution’s name. Furthermore, of the 14 member banks with which we spoke, including 6 banks with assets of more than $10 billion, none indicated that they would drop Federal Reserve membership as a result of the dividend rate modification. But several of the banks with less than $10 billion in assets stated that they were worried that the dividend rate modification would set a precedent for future transfers from the Reserve Banks, and that they would reconsider Federal Reserve membership if the dividend rate threshold were reduced to include banks in their asset range.

Modifications to Stock Ownership Requirement Would Have Implications for the Federal Reserve’s Public and Private Balance and Reserve Bank Operations

Modifying the Reserve Bank stock ownership requirement could have a number of wide-ranging policy implications for the structure of the Federal Reserve.
We examined potential implications of three scenarios for modifying the purchase requirement: (1) permanently retiring Reserve Bank stock and eliminating the stock ownership requirement, (2) making ownership of Reserve Bank stock voluntary for member banks, and (3) modifying the capital requirement associated with the stock to allow member banks to hold the entire 6 percent capital contribution as callable capital. In scenario 1, permanently retiring Reserve Bank stock could change the existing corporate structure of the Reserve Banks. In scenario 2, Federal Reserve membership would not require stock ownership; however, Reserve Bank stock would remain available for purchase by member banks. In scenario 3, the full capital contribution would be retained by member banks, could be called at any time by the Reserve Banks, and could be available for use by the member bank. The primary benefit to making any of the changes to the stock purchase requirement is that member banks would gain more control over the capital currently committed to ownership of Reserve Bank stock. Banking associations that we interviewed said that the capital contribution for the stock places a burden on member banks. Specifically, the capital is illiquid and cannot be used as collateral, so it represents a significant opportunity cost to member banks. Despite the cost associated with the capital requirement, 11 of the 17 banks we interviewed indicated that the capital requirement was not an important factor, or only a somewhat important one, in their decision on Federal Reserve membership. More often, familiarity with their Reserve Bank as a supervisor was the more important factor in their decision to join the Federal Reserve. The three scenarios are not an exhaustive representation of possible modifications to the structure of the Federal Reserve, nor does our analysis account for all of the potential consequences of such modifications.
Our discussion of the implications of each scenario should not be interpreted as a judgment on how or whether the Federal Reserve should be restructured. Instead, our intent is to identify policy implications that warrant full consideration and additional research should changes be made to the Federal Reserve stock requirement and, therefore, to the Federal Reserve’s structure. Furthermore, the discussion of the impacts of the three scenarios is limited without identification of the exact replacement structures, which is beyond the scope of this study. As each scenario has a number of potential structures, each structure would have to be evaluated on its own merits to assess its ability to better ensure the benefits Congress seeks to achieve in the central bank, such as price stability and maximum employment. This discussion assumes that the goals reflected in the original construction of the Federal Reserve remain (independence, balance of power, and geographical diversity). Reserve Bank and Board of Governors officials with whom we spoke said that changes to the stock ownership requirement should not be evaluated in isolation because any changes would have ripple effects on the governance structure, financial independence, and Reserve Bank operations that would warrant consideration in any discussion. In the following discussion, we focus on the impacts of modifying the purchase requirement that were of primary concern to regulators, commercial banks, and academics. Many were concerned that such modifications could undermine the governance of a central bank with a combined private and public structure—key attributes of the current structure designed to provide some barriers to political pressures and provide nationwide input for monetary policies. Nevertheless, these governance elements could be maintained through legislation and other mechanisms if the current Federal Reserve structure were altered.
Retiring Reserve Bank Stock and Making Reserve Banks Field Offices

Retiring Reserve Bank stock could have a number of implications, including disrupting the Federal Reserve’s public and private balance, but other mechanisms could be used to preserve the structure’s key attributes. As discussed previously, the stock purchase requirement reflects the desire of the founders of the Federal Reserve to strike a balance between control by commercial banks and government control of the Federal Reserve. Under the Federal Reserve Act, the Reserve Banks were established as corporate entities after national banks subscribed to the minimum amount of Reserve Bank stock. Therefore, a structural change could result if Congress decided to retire the stock and the corporate structure of the Reserve Banks were not preserved. The corporate structure, which includes a board of directors to oversee operations, enables the Reserve Banks to maintain a degree of autonomy from the Board of Governors. Furthermore, the stock ownership requirement enables the Federal Reserve to maintain financial independence from the federal government because it allows the Reserve Banks to maintain a capital base that is not funded at the discretion of the government. Retirement of Reserve Bank stock could have implications for the autonomy of the Reserve Banks, the independence of the Federal Reserve, and the operations of the Reserve Banks, all of which would warrant consideration. Diminished Reserve Bank autonomy. One of the policy goals of the Federal Reserve’s structure is to provide Reserve Banks with a degree of autonomy or regional authority in relation to the Board of Governors. Eliminating Reserve Bank stock would have implications for this goal.
According to Reserve Bank officials, all else being equal, retirement of the stock coupled with elimination of the current corporate structure of the Reserve Banks could result in removal of Reserve Bank boards of directors or limit the benefits currently provided by their participation. The existence of the boards of directors is tied to member banks’ equity ownership in their regional Reserve Bank. Specifically, this action could limit the diversity of views in monetary policy by weakening the link to regional input in FOMC discussions. Reserve Bank officials said that Reserve Bank boards serve an important function in the Federal Reserve, including providing important business advice and perspectives to the Reserve Banks. In our 2011 report on Federal Reserve governance, we found that directors of the Reserve Bank boards provide a link to the regions that the Reserve Banks serve, and give information on economic conditions to the Reserve Bank presidents who may use it to inform FOMC discussions about regional conditions. With the loss of member bank equity ownership and the absence of Reserve Bank boards, advisory boards or advisory councils are mechanisms that could be used to serve the same function. However, according to Reserve Bank officials and directors, this approach might not be as effective as a formal corporate board. They said that, as appointed directors of a Reserve Bank board, they have a fiduciary responsibility to perform their duties and place the interests of the Reserve Bank and the nation ahead of personal interests. They noted that it may be difficult to attract high-caliber members to an advisory council or board in a different, more removed relationship. However, we found in our 2011 report that existing Reserve Bank branch boards and advisory councils are sometimes a source of director candidates for the Reserve Banks. 
Reserve Bank officials and directors also said that the level of commitment and engagement from members of an advisory board or council would be less than that of directors of a formal corporate board. Many different mechanisms could be employed to mitigate the effects of eliminating Reserve Bank boards, but without further analysis on specific mechanisms it is difficult to determine whether those mechanisms would be feasible. Reserve Bank officials, academics, and banks said that another potential consequence of retiring Reserve Bank stock and eliminating the incorporated entities could be diminished Reserve Bank autonomy in relation to the Board of Governors. For example, retirement of Reserve Bank stock could result in eliminating the current corporate structure, and one structural option that we examined was to convert the Reserve Banks into field offices of the Board of Governors—that is, Reserve Banks would become part of a federal agency. Reserve Bank presidents currently are appointed by and accountable to Reserve Bank boards of directors. Some officials we interviewed believed that Reserve Bank presidents might feel less comfortable voicing dissenting opinions in FOMC meetings if they were leading field offices directly accountable to the Board of Governors. Therefore, a loss of autonomy could limit the diversity of views in FOMC meetings. More importantly, it could concentrate power and influence within the Board of Governors—for example, by centralizing FOMC decision making in the hands of the Board of Governors. The diversity of economic views that Reserve Bank presidents bring to FOMC meetings is illustrated by dissenting votes at FOMC meetings from July 1996 to July 2016. In that time, Reserve Bank presidents cast 80 dissenting votes while members of the Board of Governors cast 2 dissenting votes. 
Some academics with whom we spoke pointed out that eliminating the Reserve Bank stock purchase requirement could remove the perception of undue influence from member banks. For example, such perceptions might be removed if member banks (shareholders) no longer vote on Class A and B directors of Reserve Bank boards. We previously reported that the requirement to have representatives of member banks on the Federal Reserve Bank boards creates an appearance of a conflict of interest because the Federal Reserve has supervisory authority over state-chartered member banks and bank holding companies. Conflicts of interest involving directors historically have been addressed through both federal law and Federal Reserve policies and procedures, such as by defining roles and responsibilities and implementing codes of conduct to identify, manage, and mitigate potential conflicts. Federal Reserve officials said that the Board of Governors already restricts Reserve Bank directors’ participation in banking supervision and, therefore, a field-office structure would address perception, not practice. For example, Reserve Bank directors cannot access member banks’ confidential supervisory information. Any application of a Class A director’s financial institution that requires Federal Reserve approval may not be approved by the director’s Reserve Bank, but instead is acted on by the Secretary of the Board of Governors. Class A directors cannot be involved in the selection, appointment, or compensation of Reserve Bank officers whose primary duties involve banking supervision. And Class B directors with certain financial company affiliations are subject to the same prohibition. Class A directors are also not involved in the selection of the Reserve Bank President or First Vice President. 
To the extent that Congress values the benefits conferred by the current structure characterized by the balance of power and Reserve Bank autonomy, mechanisms would need to be devised to provide assurance that these benefits would remain if the Reserve Bank stock were retired. Eight of the 14 member banks that we interviewed said that Reserve Bank autonomy is either important or very important. For example, one bank stated that Reserve Bank autonomy is “hyper-important” because it creates a system of checks and balances, limits politicization of monetary policy, and ensures that viewpoints from across the nation are considered. Five of the member banks that we interviewed said that the structural option of converting the Reserve Banks to field offices would diminish the Reserve Banks’ autonomy, and some said that the change would harm connections to the local communities. But only 1 of the 14 member banks with which we spoke said that they would be likely or very likely to drop membership if the Reserve Banks became field offices of the Board of Governors. Diminished Federal Reserve financial independence. One of the policy goals of the Federal Reserve System’s structure was to provide it with independence within the federal government. As noted earlier, financial independence supports monetary policy autonomy, which research has shown is important to low levels of inflation. Eliminating Reserve Bank stock, without a mechanism to re-establish financial autonomy, would have implications for this goal. The Reserve Banks’ income is generated primarily through interest on their investments and loans and through fees received for services provided to depository institutions. Reserve Bank officials said that historically the Federal Reserve has received enough income to fund its operations and therefore would be able to capitalize itself.
According to the Federal Reserve, if losses were incurred, remittances to Treasury would be suspended and a deferred asset would be recorded that represents the amount of net earnings a Reserve Bank would need to realize before remittances to Treasury could resume. Therefore, Reserve Banks do not need capital to fund operations. However, operating without a capital base could exacerbate negative perceptions that the Federal Reserve is insolvent. Alternatively, Treasury could capitalize the Federal Reserve through Treasury-owned stock, which would allow the Reserve Banks to maintain a corporate structure but would result in a central bank dependent, in part, on government funding. Depending on how it is structured, dependence on Treasury for capitalization could diminish the financial independence of the Federal Reserve. In particular, Federal Reserve independence would be diminished if recapitalization (in the event of capital base depletion) were at the discretion of Treasury. One academic we interviewed said that the $10 billion surplus cap introduced under the FAST Act increased the likelihood of the depletion of the Federal Reserve’s capital. Some academics have written that if Treasury capitalized the Federal Reserve, Congress could include provisions for automatic recapitalization of the Federal Reserve in the event that its capital were depleted and provide stronger capital buffers by increasing the surplus account cap. These provisions would preserve the independence of the Federal Reserve by removing the discretion of Treasury in recapitalizing the Federal Reserve. Moreover, according to research, 8 of 166 central banks are capitalized, in whole or in part, by private shareholders. The remaining 158 central banks, some of which are considered to be highly independent, are capitalized by their governments.
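The suspension-and-deferred-asset mechanism described above can be sketched as a simple state update. This is a stylized illustration of the mechanics as described, not the Reserve Banks' actual accounting; the function name and dollar figures are assumptions for illustration only:

```python
# Stylized sketch of the deferred-asset mechanism: when net earnings are
# negative, remittances to Treasury stop and a deferred asset accrues;
# later net earnings first pay down the deferred asset, and remittances
# resume only once it is extinguished. Figures are hypothetical.

def remit(net_earnings: float, deferred_asset: float) -> tuple[float, float]:
    """Return (remittance_to_treasury, new_deferred_asset)."""
    if net_earnings < 0:
        # A loss suspends remittances and grows the deferred asset.
        return 0.0, deferred_asset - net_earnings
    paydown = min(net_earnings, deferred_asset)
    return net_earnings - paydown, deferred_asset - paydown

r, d = remit(-5.0, 0.0)   # a $5 loss: no remittance, $5 deferred asset
assert (r, d) == (0.0, 5.0)
r, d = remit(3.0, d)      # subsequent earnings first pay down the asset
assert (r, d) == (0.0, 2.0)
r, d = remit(4.0, d)      # remittances resume once it is paid off
assert (r, d) == (2.0, 0.0)
```

The sketch shows why, under this mechanism, losses defer rather than demand recapitalization: future earnings absorb them before any remittance flows to Treasury.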
None of the 17 member and nonmember banks that we interviewed said they would be likely or very likely to change their membership status if Reserve Bank stock were permanently retired. The banks said that the stock ownership is not a major factor in membership considerations. Member banks cited familiarity with and reputation of their regulator, consistency of regulation across the holding company, and their bank structure as the most important factors for making a membership choice. Hindered ability to conduct Reserve Bank operations. The Federal Reserve Act authorized the Federal Reserve Banks to act as depositories and fiscal agents of the United States government, at the direction of the Secretary of the Treasury. Eliminating Reserve Bank stock could have implications for the Reserve Banks’ ability to perform these functions, depending on how the Reserve Banks’ structures and authorities were revised. For example, converting the Reserve Banks to field offices could preclude them from conducting critical banking functions, and the activities they could undertake as fiscal agents for the government if they were to become government entities are unclear. Banking activities conducted by the Reserve Banks, including executing monetary policy through open market operations and providing short-term loans to institutions, are essential to the functioning of the Federal Reserve. Some Reserve Bank officials said that without the stock, the Reserve Banks would no longer be corporations and might not be able to conduct certain banking activities, depending on how the replacement structure and authorities were configured. If the Reserve Banks were to become field offices of the Board of Governors, they would no longer be able to perform certain activities related to their function as Treasury’s fiscal agent because the Board of Governors currently is not authorized to provide these services. 
Some also said that having the Board of Governors act as Treasury’s fiscal agent could present a conflict of interest. However, other Reserve Bank officials said that the current corporate structure could be maintained without the stock, but would at least require legislation amending the Federal Reserve Act to allow continuing conduct of banking activities. Reserve Bank officials noted that Treasury directs the Reserve Banks, as fiscal agents, to conduct auctions on its behalf and it is unclear whether Treasury could direct another federal agency to do so. Reserve Bank officials also pointed out that the Reserve Banks hold accounts for foreign central banks and it is unclear whether the federal government could hold an account for another government. As discussed earlier, capitalization by Treasury would allow the Reserve Banks to maintain their current corporate structure, through Treasury-owned stock. This could preserve the ability of Reserve Banks to conduct banking operations; however, as discussed earlier, this involves many issues that would need to be considered. Eliminating the current corporate structure and converting the Reserve Banks into field offices of the Board of Governors could lead to more centralized functions, which could further improve the net efficiency of Reserve Bank operations. However, Reserve Bank officials said that innovation often comes from having private-sector voices on their boards. 
Moreover, Reserve Bank officials said that despite their autonomous structure they have been able to achieve efficiencies in their operations by consolidating certain activities, such as retail payment (check and Automated Clearing House) processing, which is conducted through the Federal Reserve Bank of Atlanta; wholesale payment operations (Fedwire funds and securities services) and open-market operations, which are primarily conducted through the Federal Reserve Bank of New York; and information technology and payroll services, which are primarily conducted by the Federal Reserve Bank of Richmond. In contrast, we have reported that some efficiencies in Reserve Bank operations were achieved partly because of external factors such as legislation.

Voluntary Stock Ownership

Making stock ownership voluntary could have a number of policy implications. Voluntary ownership likely would not significantly affect Federal Reserve membership, but according to Reserve Bank officials, the implications could include concentration of stock ownership and voting rights and a need for more resources to plan for and manage increased fluctuations in paid-in capital. Voluntary ownership of Reserve Bank stock could take many forms. Currently, only nationally chartered banks and state-chartered banks that opt to join the Federal Reserve are required to purchase stock. Such a scenario could entail no ownership requirement for membership and an option for member banks to purchase (or redeem) stock in their regional Reserve Bank at any time. As with permanent retirement of the stock, we did not find evidence that voluntary stock purchase would have a significant impact on Federal Reserve membership. Member banks that we interviewed suggested that making stock ownership voluntary would not affect their Federal Reserve membership decision, but stock ownership could become volatile in certain interest rate environments, as the following examples illustrate.
Thirteen of the 14 member banks that we interviewed said that they likely would not change their Federal Reserve membership status if the ownership of stock became voluntary for member banks. Of these 13, all 6 member banks with more than $10 billion in assets also said that they likely would not purchase stock if ownership were voluntary for members. Of these 13, 6 of the 7 member banks with assets below $10 billion indicated that they likely would purchase the stock if it were voluntary (responses ranged from somewhat likely to very likely). They added that if they could make a better return than 6 percent on the capital committed to the stock in a higher interest-rate environment, they would redeem the stock. Two of the three nonmember banks that we interviewed said that they likely would not change their Federal Reserve membership status if the ownership of stock became voluntary for member banks. The remaining banks (one member, one nonmember) said that they would be somewhat likely to change their membership status. In a high interest rate environment stock ownership by member banks could be low, because banks could receive a higher return by investing the capital in securities other than the Reserve Bank stock. This would result in a high concentration of voting rights; however, this might not differ much from current practices. Reserve Bank officials stated that if voting rights remained with stock ownership, not membership, and if stock ownership among member banks were low, then the votes to elect board members would be concentrated in just a few banks. Some Reserve Bank officials said that the concentration of votes could lead to undue influence from a few banks. We previously found that, under the current mandatory stock ownership structure, member bank voter turnout was often low during some Reserve Banks’ elections.
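The redemption calculus these banks described is a straightforward opportunity-cost comparison, sketched below. The function name and the sample yields are illustrative assumptions, not figures from the interviews:

```python
# Sketch of the opportunity-cost comparison member banks described:
# hold Reserve Bank stock while its 6 percent dividend beats the yield
# available on alternative investments, and redeem it (under voluntary
# ownership) when market rates rise above that. Illustrative only.

DIVIDEND_RATE = 0.06  # dividend rate on paid-in Reserve Bank stock

def would_redeem(alternative_yield: float) -> bool:
    """True if redeeming and reinvesting beats holding the stock."""
    return alternative_yield > DIVIDEND_RATE

assert not would_redeem(0.02)  # low-rate environment: keep the stock
assert would_redeem(0.08)      # high-rate environment: redeem
```

A real decision would also weigh risk and liquidity differences between Reserve Bank stock and the alternative investment, which this comparison ignores.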
In these cases, assuming current participation rates persist, voting patterns under voluntary stock ownership might not significantly differ from those of the current arrangement. Reserve Bank officials also said that high volatility of stock ownership would require a higher level of management of the stock. Officials said that the processes for issuing, monitoring, and redeeming the stock would become significantly more complex as a result of a likely increase in the volume of transactions and would require additional personnel. While a voluntary stock ownership structure would be more complicated than the current structure, it would involve stock ownership characteristics similar to those of publicly traded stock, and publicly traded companies have systems to manage stock ownership. Reserve Bank officials pointed out that volatility in stock ownership among member banks also would result in fluctuation in the level of paid-in capital held at the Reserve Banks, which could make it more difficult for Reserve Banks to predict and manage their capital. If a large number of member banks chose not to purchase the stock, which member banks suggested would be likely in a high-interest rate environment, then the potential public perception issues associated with having a low capital base, as discussed previously, could apply. However, as we have discussed, the Reserve Banks could operate without capital, or Treasury could capitalize the Reserve Banks.

Callable Stock Purchase Requirement

Allowing member banks to hold the full capital contribution on call could have a number of implications. For instance, allowing member banks to hold the entire capital contribution on call would allow Reserve Banks to maintain their current corporate structure, since the member banks would retain their equity stakes. However, this scenario would eliminate the dividend payment to member banks because there would be no Reserve Bank stock outstanding for which dividend payments would be owed.
Also, it could cause public perception problems and, in theory, exacerbate financial distress in stressful economic times. Currently, member banks are required to purchase stock in their regional Reserve Bank equal to 6 percent of their capital and surplus, with 3 percent paid-in and 3 percent on call by the Reserve Bank. This scenario would make the entire 6 percent purchase requirement callable, so that member banks would not have to contribute any capital to the Reserve Banks on joining the Federal Reserve. This modification would allow the Reserve Banks to keep their current corporate structure and preserve their ability to conduct banking operations. The change would also eliminate the dividend payment to member banks since the capital associated with the Reserve Bank stock would no longer be paid in, so there would no longer be a basis to pay member banks a dividend. Similar to the scenario of retiring the stock or making its purchase voluntary for members, the Reserve Banks’ capital base would be reduced—in this case, to the amount of capital held in each Reserve Bank’s surplus account. Reserve Bank officials and some academics said that Reserve Banks can operate without a capital base but, as discussed previously, this could cause a public perception problem. Specifically, Reserve Bank officials said that if Reserve Banks incurred losses and called in capital from member banks, the call could send a signal to the broader markets that the Reserve Banks were insolvent. In turn, this perception could lead to negative ripple effects throughout the economy. That is, Reserve Bank officials said that situations in which Reserve Banks would incur losses and need to call capital likely would be situations of economic stress for banks. If banks could not quickly raise sufficient funds to meet the Reserve Bank’s capital call, their lending capacity could fall and a credit crunch could follow.
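The arithmetic of the current requirement and the fully callable alternative can be illustrated as follows (a minimal sketch; the bank's capital figure and function name are hypothetical, not from the report):

```python
# Illustrative arithmetic for the stock subscription requirement:
# 6 percent of a member bank's capital and surplus, currently split
# half paid-in and half on call; under the callable scenario the
# entire contribution would be on call. Figures are hypothetical.

def subscription(capital_and_surplus: int, fully_callable: bool = False):
    """Return (paid_in, on_call) portions of the 6 percent requirement."""
    total = capital_and_surplus * 6 // 100
    if fully_callable:
        return 0, total            # scenario 3: nothing paid in up front
    return total // 2, total // 2  # current rule: 3 percent each

# A hypothetical member bank with $500 million in capital and surplus:
assert subscription(500_000_000) == (15_000_000, 15_000_000)
assert subscription(500_000_000, fully_callable=True) == (0, 30_000_000)
```

Under the fully callable scenario the bank keeps the entire $30 million working on its own balance sheet, which is the freed-capital benefit described above, while the Reserve Bank retains the right to call it.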
Calling in capital from member banks at such a time could have a procyclical effect; that is, the call would exacerbate financial distress experienced by the member banks. Reserve Bank officials added that, because of the potentially severe systemic effects, such a capital call would be highly unlikely. Officials pointed out that the Reserve Banks have never called in the 3 percent capital at member banks and that Reserve Banks currently do not have procedures for calling the 3 percent capital held at member banks. As discussed earlier, if losses were incurred, remittances to Treasury would be suspended. If the Reserve Banks incurred losses over multiple periods and their capital base were depleted, then the method for recapitalization would need to be addressed (which, as discussed earlier, involves many issues that would need to be considered). Based on our interview responses, most banks would be unlikely to change their membership status as a result of making the entire capital contribution callable. All three of the nonmember banks that we interviewed said that they likely would not become members or would be only somewhat likely to become members in response to this change. Member banks likely would not drop membership as a result of this modification because, as some banks noted, it removes a potential barrier to membership (paying in 3 percent of capital).

Agency Comments

We provided a draft of this report to FDIC, the Federal Reserve, OCC, and Treasury for review and comment. None of the agencies provided written comments on the draft report. FDIC and the Federal Reserve provided technical comments, which we have incorporated, as appropriate. We are sending copies of this report to FDIC, the Federal Reserve, OCC, and Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Scope and Methodology

In this report, we (1) examine the historical rationale for the Reserve Banks’ stock purchase requirement and 6 percent dividend, (2) assess the potential implications of capping the Reserve Banks’ aggregate surplus account and modifying the Reserve Bank stock dividend rate, and (3) analyze the potential policy implications of modifying the Reserve Bank stock ownership requirement for member banks under three scenarios. To address our first objective, we conducted a literature search on the history of the Federal Reserve System (Federal Reserve), including a review of the legislative history of the Federal Reserve Act. See appendix II for a selected bibliography of literature we reviewed. We interviewed a past Federal Reserve historian and selected academics. We also conducted a literature search on rates of return on selected investment products. We specifically identified the following data sources:

Roger Ibbotson, 2013 Ibbotson SBBI Classic Yearbook: Market Results for Stocks, Bonds, Bills, and Inflation 1926–2012 (Chicago, Ill.: Morningstar, 2013).—We reviewed information describing the rates of return for a number of basic asset classes including large company stocks, small company stocks, long-term corporate bonds, long-term government bonds, intermediate-term government bonds, and Treasury bills. The return rate data include information from 1926 through 2012.

Robert Shiller, Market Volatility (Cambridge, Mass.: MIT Press, 1989).—We reviewed annual data on the U.S. stock market specifically concerning prices, dividends, and earnings from 1871 to the present with associated interest rate, price level, and consumption data.

Frederick R. Macaulay, The Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856 (New York: National Bureau of Economic Research, 1938).—We reviewed commercial paper rates in New York City from January 1857 to January 1936.

Sidney Homer and Richard Sylla, A History of Interest Rates, 4th ed. (Hoboken, N.J.: John Wiley & Sons, 2005).—We reviewed data on interest rates and yields from prime corporate bonds, medium-grade corporate bonds, and long-term government securities from 1899 to 1989.

We determined that these sources were sufficiently reliable for the purposes of our reporting objectives. Our data reliability assessment included reviewing the methodologies employed by the authors of each source and cross-checking certain data from the sources against each other. First, we analyzed return data on investment-grade and medium-grade corporate bonds, and Treasury bonds. We selected these instruments for comparison with Reserve Bank stock because they generally present low risk of default and have relatively long maturity periods. Corporate bonds can be classified according to their credit quality. Medium-grade corporate bonds can indicate a strong capacity to meet financial commitments but also can still be vulnerable to a changing economy. Investment-grade corporate bonds are considered more likely than noninvestment-grade bonds to be paid on time and have lower investment risk. Treasury bonds are obligations of the U.S. government and are considered to have low investment risk. Second, we analyzed return data on interest rates based on commercial paper and certificates of deposit, and the federal funds rate. We selected these return data for analysis because they are common measures of the value of money in the markets. Commercial paper consists of short-term, promissory notes issued primarily by corporations that mature in about 30 days on average, with a range up to 270 days.
A certificate of deposit is a savings account that holds a fixed amount of money for a fixed period of time, such as 6 months, 1 year, or 5 years, and in exchange, the issuing bank pays interest. The federal funds rate is the central interest rate in the U.S. financial market and is the interest rate at which depository institutions trade federal funds with each other overnight. We determined not to include rate of return information on stocks and agency mortgage-backed securities. Stock is a more volatile investment product than Reserve Bank stock, with wide variation in prices from year to year. In addition, stock is a relatively liquid investment product compared to Reserve Bank stock, which cannot be sold or otherwise posted as collateral. Agency mortgage-backed securities are debt obligations that represent claims to the cash flows from pools of mortgage loans, most commonly on residential property. We found that agency mortgage-backed securities generally return higher yields than Treasury bonds, but not as high as corporate bonds, which have higher risk. Therefore, by discussing Treasury and corporate bonds, we are illustrating a complete range of possible returns. To assess the potential implications of capping the aggregate Reserve Banks’ surplus account, we reviewed past GAO, Congressional Research Service, and Congressional Budget Office reports and Federal Reserve financial documents on the status of the surplus account. We interviewed Federal Reserve officials, including from the Board of Governors and the Reserve Banks; former members of the Board of Governors who had written about the changes in the Fixing America’s Surface Transportation Act (FAST Act); academics who had written extensively about the Federal Reserve; other federal bank regulators, including the Federal Deposit Insurance Corporation (FDIC) and the Office of the Comptroller of the Currency (OCC); and, banking industry associations. 
To assess the potential implications of modifying the Reserve Bank stock dividend rate, we reviewed Board of Governors financial documents as of June 30, 2016, for dividend payment information. We conducted structured interviews with 17 commercial banks (including 14 member and 3 nonmember banks) to obtain their perspectives on the dividend rate modification and whether it would affect their membership decisions or status. We selected commercial banks for these interviews to ensure representation across all size categories and primary federal banking regulators, using data from SNL Financial. We assessed the reliability of the data by reviewing information about the data and systems that produced them, and by reviewing assessments we did for previous studies. We determined that the data we used remain sufficiently reliable for the purposes of our reporting objectives. To assess the potential implications of modifying the stock ownership requirement, we reviewed academic literature on the structure and independence of central banks. We also interviewed selected academics and economists who had written extensively on central bank independence; the chairpersons of all the Reserve Banks’ boards of directors, who may not be affiliated with commercial banks; officials from FDIC and OCC; and banking industry associations. In the structured interviews with selected commercial banks described above, we also sought to learn what factors might influence the banks’ choice to become a member of the Federal Reserve, and whether potential modifications to the Reserve Banks’ stock ownership structure would affect their choice. We presented three scenarios (of changes to the stock ownership requirement and therefore the Federal Reserve’s structure) in the interviews to which respondents could react and discuss implications.
The scenarios are illustrative and do not represent all of the ways in which the Federal Reserve structure might be altered, nor does our analysis account for all of the potential consequences of stock ownership modifications. Furthermore, our discussion of the range of consequences is limited to what respondents said in the interviews; it does not consider mechanisms that could be put in place to retain the benefits of the current structure or mitigate any negative effects of the changes. We conducted this performance audit from February 2016 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected Bibliography Alesina, Alberto, and Lawrence H. Summers. “Central Bank Independence and Macroeconomic Performance: Some Comparative Evidence.” Journal of Money, Credit and Banking, vol. 25, no. 2 (May 1993): 151-162. Arnone, Marco, Bernard J. Laurens, and Jean-Francois Segalotto. “The Measurement of Central Bank Autonomy: Survey of Models, Indicators, and Empirical Evidence.” International Monetary Fund Working Paper 06/227 (October 2006). Calomiris, Charles, Matthew Jaremski, Haelim Park, and Gary Richardson. “Liquidity Risk, Bank Networks, and the Value of Joining the Federal Reserve System.” National Bureau of Economic Research Working Paper 21684 (October 2015). Clifford, A. Jerome. The Independence of the Federal Reserve System. Philadelphia, Penn.: University of Pennsylvania Press, 1965. Conti-Brown, Peter. The Power and Independence of the Federal Reserve. Princeton, N.J.: Princeton University Press, 2016. Cukierman, Alex. 
“Central Bank Finances and Independence – How Much Capital Should a Central Bank Have?” in The Capital Needs of Central Banks, Sue Milton and Peter Sinclair, eds. London, England, and New York, N.Y.: Routledge, 2011. Cukierman, Alex, Steven B. Webb, and Bilin Neyapti. “Measuring the Independence of Central Banks and Its Effect on Policy Outcomes.” World Bank Economic Review, vol. 6, no. 3 (September 1992): 353-398. Gorton, Gary. “Clearinghouses and the Origin of Central Banking in the United States.” The Journal of Economic History, vol. 45, no. 2 (June 1985): 277-283. Homer, Sidney, and Richard Sylla. A History of Interest Rates. 4th ed. Hoboken, N.J.: John Wiley & Sons, 2005. Ibbotson, Roger. Ibbotson SBBI Classic Yearbook 2013: Market Results for Stocks, Bonds, Bills, and Inflation 1926-2012. Chicago, Ill.: Morningstar, March 2013. Lowenstein, Roger. America’s Bank: The Epic Struggle to Create the Federal Reserve. New York, N.Y.: Penguin Press, 2015. Masciandaro, Donato. “More Than the Human Appendix: Fed Capital and Central Bank Financial Independence.” BAFFI CAREFIN Centre Research Paper 2016-35 (September 2016). Macaulay, Frederick R. The Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856. New York, N.Y.: National Bureau of Economic Research, 1938. Moen, Jon R., and Ellis W. Tallman. “Lessons from the Panic of 1907.” Federal Reserve Bank of Atlanta, Economic Review, May/June 1990, pp. 2-13. National Monetary Commission. Report of the National Monetary Commission. Washington, D.C.: Government Printing Office, 1912. Rossouw, Jannie, and Adele Breytenbach. “Identifying Central Banks with Shareholding: A Review of Available Literature.” Economic History of Developing Regions, vol. 26, supplement 1 (January 2011): 123-130. Shiller, Robert J. Market Volatility. Cambridge, Mass.: MIT Press, 1989 (as updated). Stella, Peter, and Åke Lönnberg. 
“Issues in Central Bank Finance and Independence.” International Monetary Fund Working Paper 08/37 (February 2008). Timberlake, Richard H., Jr. “The Central Banking Role of Clearinghouse Associations.” Journal of Money, Credit, and Banking, vol. 16, no. 1 (February 1984): 1-15. Todd, Tim. The Balance of Power: The Political Fight for an Independent Central Bank, 1790-Present. 1st ed. Kansas City, Mo.: Federal Reserve Bank of Kansas City, 2009. Warburg, Paul M. The Federal Reserve System: Its Origin and Growth. 2 vols. New York, N.Y.: Macmillan, 1930. Appendix III: GAO Contact and Staff Acknowledgments GAO staff who made major contributions to this report include Karen Tremba (Assistant Director), Philip Curtin (Analyst-in-Charge), Farrah Graham, Cody Knudsen, Risto Laboski, Barbara Roesmann, Christopher Ross, Jessica Sandler, Jena Sinkfield, and Stephen Yoder.
Member banks of the Federal Reserve must purchase stock in their regional Reserve Bank, but historically received a 6 percent dividend annually on paid-in stock. A provision of the 2015 FAST Act modified the dividend rate formula for 85 larger member banks and currently reduces the amount these banks receive. The FAST Act also capped the surplus capital the Reserve Banks could hold and directed that any excess be transferred to Treasury's general fund. Congress offset payments into the Highway Trust Fund by, among other things, instituting the Reserve Bank surplus account cap. GAO was asked to report on the effects of these changes and the policy implications of modifying the stock ownership requirement. Among its objectives, this report (1) examines the effects of capping the Reserve Banks' aggregate surplus account and reducing the Reserve Bank stock dividend rate, and (2) evaluates the potential policy implications of modifying the stock ownership requirement for member banks under three scenarios. 
GAO reviewed legislative history, relevant literature about the Federal Reserve, and prior GAO reports, and interviewed academics and current and former officials of the Board of Governors and the Reserve Banks, other banking regulators, and industry associations. In addition, GAO conducted structured interviews with 17 commercial banks, selected based on bank size and regulator. GAO makes no recommendations in this report. GAO requested comments from the banking regulators and Treasury, but none were provided. According to Federal Reserve System (Federal Reserve) officials, capping the surplus account had little effect on Federal Reserve operations, and GAO found that modifying the stock dividend rate formula had no immediate effect on membership. Reserve Banks fund operations, pay dividends to member banks, and maintain a surplus account before remitting excess funds to the Department of the Treasury (Treasury). Whether the transfers to Treasury's General Fund required by the Fixing America's Surface Transportation Act (FAST Act), an act that also funds specific projects, should be viewed any differently than the recurring transfers of Reserve Bank earnings to Treasury is debatable. Some stakeholders raised concerns about setting a precedent: future transfers could affect the Federal Reserve's independence and, consequently, autonomy in monetary policymaking. Dividend payments to 85 banks decreased by nearly two-thirds (first half of 2016 over first half of 2015), but GAO found no shifts in Reserve Bank membership as of December 2016. Some member banks affected by the rate change told GAO they had concerns with it, and some said they might try to recoup the lost revenue, but none indicated they would drop membership. 
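The arithmetic behind the "nearly two-thirds" decrease can be sketched as follows. This is an illustrative calculation only: the paid-in stock amount and the 10-year Treasury yield are hypothetical values, and the post-FAST Act formula (the lesser of 6 percent and the 10-year Treasury yield, for larger member banks) is a summary of the act's provision, not a quotation of it.

```python
def annual_dividend(paid_in_stock, ten_year_yield=None):
    """Annual dividend on Reserve Bank paid-in stock.

    Pre-FAST Act, member banks earned a statutory 6 percent.
    Post-FAST Act, larger member banks earn the lesser of 6 percent
    and the 10-year Treasury yield (summarized reading of the act).
    """
    if ten_year_yield is None:  # pre-FAST Act formula
        return paid_in_stock * 0.06
    return paid_in_stock * min(0.06, ten_year_yield)

# Hypothetical example: $100 million of paid-in stock, and a 10-year
# Treasury yield of about 2 percent (roughly where it stood in 2016).
paid_in = 100_000_000
before = annual_dividend(paid_in)                       # about $6 million
after = annual_dividend(paid_in, ten_year_yield=0.02)   # about $2 million
reduction = 1 - after / before                          # about two-thirds
```

With a yield near 2 percent, the dividend falls from 6 percent to 2 percent of paid-in stock, which matches the report's observation that payments to the 85 affected banks fell by nearly two-thirds.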
Assuming that the policy goals—independence, balance of power, and geographical diversity—reflected in the original private-public Federal Reserve structure remain important, the implications of modifying the stock ownership requirement and therefore the Federal Reserve structure could be considerable. The scenarios discussed in this report are illustrative and do not represent all the ways in which the Federal Reserve structure might be altered. Also, the discussion of effects is limited because exact replacement structures are unknown. Retiring the stock could result in changes to the existing corporate structure of the 12 Reserve Banks. These changes could diminish Reserve Bank autonomy in relation to the Board of Governors of the Federal Reserve System (Board of Governors) by removing or changing Reserve Banks' boards of directors, which could limit the diversity of economic viewpoints in monetary policy discussions and centralize monetary policy decision making in the hands of the Board of Governors, eliminate the private corporate characteristics of Reserve Banks and convert them to government entities (such as field offices of the Board of Governors), which could lead to less private sector involvement and reduced financial independence of the Federal Reserve, and remove the authority the Reserve Banks currently have to conduct activities critical to the Federal Reserve, such as executing monetary policy through open market operations and those related to the Reserve Banks' role as fiscal agents for the federal government. Making stock ownership voluntary could increase fluctuations in outstanding shares, affecting Federal Reserve governance and complicating the Reserve Banks' processes for managing their balance sheets. 
While modifying the stock ownership requirement could give member banks greater control of the capital tied to the stock, member and nonmember banks with which GAO spoke indicated that they likely would not change their membership in response to any modifications discussed in this report.
Background The Ranger Training Brigade, located at Fort Benning, Georgia, conducts three phases of Ranger training to develop tactical combat arms and leadership skills in infantry, airborne, air assault, mountaineering, and waterborne operations. The initial training phase is conducted at Fort Benning, the second phase is conducted in the Georgia mountains, and the third phase is conducted in river and swamp terrain in Florida. In February 1995, four Ranger students died of hypothermia while undergoing waterborne training in the Florida swamps. The Army’s investigation of the accident recommended corrective actions to improve Ranger training safety and preserve the lessons learned from the accident. Corrective actions to improve the safety of Ranger training were also prescribed by the Fiscal Year 1996 National Defense Authorization Act. The act required the Army to ensure that the number of officers and the number of enlisted personnel assigned to the Ranger Training Brigade are not less than 90 percent of required levels. The Army defines requirements as the minimum number of personnel needed to perform a unit’s mission effectively. This mandate was to become effective no later than February 1997 and expire 2 years after it is achieved. The act also required the Army to establish at each of the three Ranger training locations an organization known as a “safety cell,” comprising individuals with sufficient continuity and experience in each geographic area to be knowledgeable of local conditions and the potential impact of weather and other conditions on training safety. The act further provided that these individuals shall serve as advisors to the officers in charge of training to assist in making training “go” and “no go” decisions in light of weather and other conditions. 
Our preliminary report assessed the implementation and effectiveness of the corrective actions, the Army’s progress in implementing the mandated staffing levels and safety cell organizations, and the adequacy of Army oversight to ensure that the corrective actions are sustained in the future. We recommended that the Army direct the Ranger Training Brigade to identify critical training safety controls and ensure that the Ranger training chain of command, and organizations outside the chain of command, conduct periodic inspections to determine compliance with the safety controls implemented after the accident. Army Increased Brigade Personnel but Many Factors Have Hindered Meeting Mandated Levels At the time of the 1995 accident, the Ranger Training Brigade had a staffing priority that authorized it to be staffed at about 85 percent of its personnel requirements. In response to the mandated 90-percent level, the Army excepted the Brigade from normal Army staffing priorities and raised the Brigade’s officer distribution and enlisted personnel authorizations to 90 percent of the required numbers. It expected to staff the Brigade at this level in February 1997. Despite these measures, the Army was not able to assign and maintain the numbers of officers and enlisted personnel the act required for most months since that time. The Brigade staffing level has improved since the accident, even though the Army has not maintained staffing at the mandated level. Mandated Officer and Enlisted Personnel Levels Have Not Been Sustained Although in the aggregate, the Brigade was assigned 96 percent of its required personnel in February 1997, it had only 88 percent of the required number of officers. The Brigade’s officer strength has remained below the mandated 90-percent level for most of the time between February 1997 and November 1998 and fell to under 80 percent for 9 months. 
While the Brigade was able to maintain higher enlisted personnel levels because of the Army priority for assigning enlisted Ranger instructors, its enlisted strength overall was also under the mandated level for 14 months from February 1997 through September 1998, as shown in figure 1. At the end of November 1998, when we completed our review, the Brigade was assigned 59 (or 80 percent) of its 74 required officers and 596 (or 93 percent) of its required enlisted personnel. Although the number of assigned officers was below the act’s requirement, it was significantly higher than it was at the time of the accident, when only 38 officers were assigned. Further, although the Brigade was assigned less than the required number of enlisted personnel from October 1997 through September 1998, it did have over 90 percent of its required number of enlisted Ranger instructors. As of November 1998, the Brigade would have needed eight more officers to meet the mandated 90-percent level. Fort Benning officials said that they would be unable to assign any additional officers until captains undergoing advanced infantry officer training become available in December 1998. Data on the Brigade’s numbers of required and assigned officers and enlisted personnel by month are included in appendix I. Many Factors Have Contributed to Shortfalls in Meeting Required Personnel Levels Many factors have contributed to the Army’s shortfalls in meeting the required numbers of officers and enlisted personnel, including unplanned losses of officers, shortages of branch-qualified captains and certain enlisted specialties, unfilled requirements for other service’s instructors, and higher personnel requirements. Army officials at Fort Benning told us that the unplanned loss of personnel was the primary reason for not meeting the mandated officer level. The Brigade lost several officers who resigned their commissions or were injured while conducting Ranger training exercises. 
When these unexpected losses occurred, it was not possible to immediately reassign officers from other Army units to fill them. Fort Benning officials told us that replacing experienced and branch-qualified captains was particularly difficult because they are in short supply throughout the Army. As a result, Fort Benning was unable to immediately replace the officers lost by the Brigade and had to wait for graduates of the Infantry Officer Advanced course at Fort Benning to become available. Some of the shortfall of enlisted personnel was due to unfilled requirements for instructors from the other services. For fiscal year 1998, the Army determined that the Air Force, the Navy, and the Marine Corps were to provide 20 instructors, and for fiscal year 1999, 16 instructors, based on the numbers of students they collectively planned to enroll in the Ranger course. However, the other services have not provided the numbers of instructors required. For example, thus far, in fiscal year 1999, the Marine Corps has provided only 2 of the 13 instructors. If the services had met their instructor requirements, the Army would have achieved the mandated enlisted personnel level in most months since February 1997. Table 1 shows the number of students the Army and other services planned to enroll in the Ranger course in fiscal year 1999 with the required and assigned instructors. Two other factors contributed to personnel shortages in the Brigade. First, the Army had difficulty assigning the required numbers of enlisted training support personnel, such as medics and signal systems specialists, because there were, and still are, relatively small numbers of personnel with these specialties in the force. Second, in October 1997, the Army added 7 additional personnel requirements for officers and 86 additional requirements for enlisted personnel. 
Because the numbers of assigned personnel did not significantly change along with the added requirements, the percentages of assigned to required personnel declined significantly. Although Army officials at Fort Benning thought they could fill these positions within several months, both officer and enlisted personnel levels remained well below the mandated levels throughout fiscal year 1998. Other Assignments and Civilian Personnel Shortages Reduce the Availability of the Brigade’s Personnel The actual number of personnel available is often less than the number of personnel assigned to the Brigade. At any given time, some Brigade personnel are attending Army schools or are assigned to other duties, such as recruiting, thus reducing the actual number of personnel available to conduct and support Ranger training. As in all Army units, Brigade personnel periodically attend Army schools to complete their career training requirements or perform other duties for their units. In November 1998, the Brigade was assigned 59 (or 80 percent) of its 74 required officers. However, 3 of the 59 officers were attending schools or performing other full-time duties. As a result, the Brigade only had 76 percent of its required officers available. In addition, Ranger training battalion commanders must often assign soldiers to fill vacant civilian personnel positions. In November 1998, the Brigade had only 10 (or 20 percent) of its 49 required civilian personnel. To compensate for these shortages, battalion commanders periodically assigned Ranger training personnel to maintenance, supply, administrative, and other jobs—a common practice throughout the Army when civilian personnel requirements cannot be met. Unique Ranger Training and Personnel Requirements Are Not Recognized in Army Personnel Distribution Priorities Both Ranger training and the requirements for the personnel that conduct the training are unique. 
Unlike training at other TRADOC schools, Ranger training is conducted around the clock, under hazardous conditions, at three separate locations in difficult mountainous, river, and swamp terrain. The training is designed to subject students to hot and cold weather temperature extremes and mental and physical stresses, including nutritional and sleep deprivation—conditions that are intended to approach those found in combat. To conduct this type of training, Ranger instructors, battalion and company commanders, and support personnel must be qualified to function effectively under similar conditions. Therefore, many Brigade personnel are required to have special qualifications, including airborne and Ranger qualifications, and some are required to have swimmer and diver qualifications. Personnel with these qualifications are in short supply and in high demand throughout the Army. However, the current Army officer distribution policy gives top priority units, such as special operations forces, 100 percent of their requirements for these kinds of specialties. Without the higher priority the Army implemented to meet the mandated levels, the Brigade would receive only about 85 percent of its officer requirement. The Brigade would therefore compete with higher priority units and other TRADOC schools to obtain personnel with these specialized qualifications. The Army’s enlisted distribution policy, however, does give a higher priority to the Brigade for enlisted instructors because it needs between 60 and 180 days to train and certify personnel to become fully qualified Ranger instructors. Further, assigning personnel is complicated because, unlike other Army training units, the Brigade’s headquarters and three training battalions are located in separate geographic areas. 
While Army commanders usually move personnel between positions within their units to compensate for any losses, the Brigade’s ability to do so is limited because reassigning personnel from one training battalion to another involves permanent changes of station for soldiers and their families. Therefore, when losses occur, the Brigade must wait for available personnel from other Army units rather than move personnel internally between battalions. Army Plans to Staff Safety Cells With Civilians The act specified that safety cell personnel at each location must have sufficient continuity and experience to be knowledgeable of local terrain, weather, and other conditions. Currently, members of the Brigade and the battalions’ chains of command, including the Brigade and battalion commanders, serve in the safety cells and supervise daily training safety decisions. While these people have developed a high degree of experience and knowledge of local conditions, the frequency of their rotations to new units may prevent the safety cells from obtaining individuals with sufficient continuity in the local training areas. Army officers usually rotate to new units every 2 years, enlisted personnel about every 3 years. In contrast, Army civilian employees do not rotate jobs as frequently and thus would appear to provide the continuity envisioned in the act. In 1996, the Infantry Center at Fort Benning and the Brigade considered requesting civilian personnel for the safety cells but decided to adopt the current approach of having Brigade personnel serve in the safety cells. However, in September 1998, TRADOC reconsidered this approach and began work on a plan to authorize hiring four civilians for the safety cells at the Brigade and at each of the three training battalions. Army officials at Fort Benning told us they plan to develop job descriptions, identify candidates, and hire staff for the safety cells by September 1999. 
Corrective Safety Actions Are Incorporated in Standard Operating Procedures The Army’s investigation of the accident recommended corrective actions to improve (1) risk assessments of training conditions, (2) command and control of training exercises, and (3) medical support and evacuation procedures. We reported in our preliminary report that the risk assessments had been improved, command and control procedures had been revised, and evacuation and medical support capabilities had increased. In addition, in September 1997, the Army Inspector General reviewed the corrective actions and waterborne training safety controls at the Florida Ranger camp and concluded that they were in place and functioning as intended. During our review, we found that the corrective actions had been institutionalized in Brigade standard operating procedures and that the safety control measures and medical evacuation procedures remained in place and appeared to be functioning effectively. Specifically, the Brigade continued to apply safety improvements at the Florida Ranger camp, such as command and control systems to better monitor and predict river and swamp conditions, and to conduct waterborne training exercises in designated training lanes outside of high-risk areas. At all three training locations, medical evacuation procedures had been revised, rehearsed, and inspected; and physician assistants had been assigned to the Brigade and training battalions. In addition, the Brigade has improved safety and the supervision of training by requiring that its training companies be commanded by experienced and branch-qualified captains. To better supervise training safety, the Brigade also assigns an officer and an enlisted noncommissioned officer to serve as training liaisons to accompany and monitor each Ranger class through all three phases of training. A complete description and status of all corrective actions are included in appendixes II through V. 
Safety Inspections Do Not Evaluate or Document Compliance With Training Safety Controls Our preliminary report assessing Ranger training safety recommended that TRADOC, the Army Infantry Center, Fort Benning, the Ranger Training Brigade, and organizations outside the chain of command, such as the Army Inspector General, conduct periodic inspections to determine compliance with the safety controls implemented after the 1995 accident. Since 1997, the Army Infantry Center commander has conducted 6 personal safety inspections, and Brigade commanders have conducted 23 personal safety inspections. Also, Fort Benning has conducted two command and staff inspections, and the Brigade has conducted three command and staff inspections. In addition, the Army Inspector General has visited all three phases of Ranger training and, in September 1997, completed an inspection of the safety controls. However, the scope and results of the personal inspections conducted by the Infantry Center and Brigade commanders have not been documented. We were, therefore, unable to determine whether (1) the commanders’ inspections focused on the identified safety control measures or (2) the commanders had determined that safety controls were working effectively. While the scope and results of the Infantry Center’s and the Brigade’s command and staff inspections were documented, these inspections covered a broad range of unit activities, including safety. However, the safety related portion focused entirely on general safety procedures, such as fire prevention measures, not on training safety. Also, although the Ranger training chain of command was briefed on the scope and results of the Army Inspector General’s safety control inspection, a written report was not done. Conclusions and Recommendations Since the mandated staffing goal was instituted, the Ranger Training Brigade staffing level has improved, even though the Army has not maintained staffing at the mandated 90-percent level. 
A key factor in this improvement has been the Army’s decision to give priority to staffing the Brigade. Without sufficient priority, we believe that unplanned losses and other problems that have kept the Brigade’s officer strength below the mandated 90-percent levels would, over time, degrade officer strength to the levels that existed at the time of the accident. In view of the increased personnel levels since the accident, and provided that the Army continues the current staffing priority for the Brigade, we do not believe that it is necessary to maintain mandated personnel levels in law. Additionally, the failure to evaluate specific training safety controls and document the results of such evaluations provides inadequate assurance that safety measures and controls are in place and functioning effectively. Inspections are vital in ensuring that corrective actions instituted after the accident are sustained. We, therefore, recommend that the Secretary of the Army continue the current 90-percent officer distribution planning level for the Brigade and direct that future inspections of the Brigade include evaluations of training safety controls and that the inspections’ results are documented. Agency Comments In written comments on a draft of this report (see app. VI), DOD concurred with the report and its recommendations. DOD stated that the Secretary of the Army has directed that the officer and enlisted strength of the Brigade be sustained at or above the 90-percent distribution level and that the Commander, Total Army Personnel Command, has established procedures to ensure compliance. DOD also stated that the Army has conducted frequent inspections to evaluate training safety controls and has moved to address the documentation of training safety controls inspections. DOD also noted that its goal is to provide safe, tough, and realistic training to Brigade students and that it believes it is meeting this goal. 
DOD also provided technical comments that we incorporated where appropriate. Scope and Methodology To determine the status of the mandated Ranger training manning levels, we reviewed and analyzed personnel requirements and numbers of officers and enlisted personnel assigned to the Ranger Training Brigade from February 1997 through November 1998. We reviewed changes in Army and Fort Benning personnel policies, plans, and distribution priorities to assess the measures taken to increase personnel to the mandated levels. To assess the adequacy of current personnel levels and the need to continue the mandated levels, we analyzed personnel requirements and obtained the views of Department of Army, TRADOC, and Fort Benning officials. We assessed the status of establishing training safety cells by reviewing the duties, qualifications, and experience of safety cell members and interviewing Fort Benning and Ranger officials. To determine the status of the corrective actions and determine whether they are functioning effectively, we received briefings from Brigade officials, observed training exercises, and reviewed safety procedures at each Ranger battalion’s facilities. To determine whether the Army has adequately inspected compliance with the identified safety controls, we interviewed Brigade officials and reviewed Army and Infantry Center inspection regulations, procedures, and records. We conducted our review at Department of Army headquarters, Army Infantry Center, Ranger Training Brigade headquarters, and the Ranger training battalions at Fort Benning, Dahlonega, Georgia, and Eglin Air Force Base, Florida. Our review was conducted from September through November 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen, Senate and House Committees on Appropriations, Senate Committee on Armed Services, and House Committee on Armed Services and to the Secretaries of Defense and the Army. 
Copies will also be made available to others upon request. The major contributors to this report are listed in appendix VI. If you or your staff have questions about this report, please call me on (202) 512-5140. Ranger Training Brigade Personnel Levels, February 1997 Through November 1998 Status of Actions to Improve Safety Management: Risk Assessments Completed Weather, river, and swamp information obtained from local and federal agencies is integrated in training decision-making. Also, three remote weather sensors on the Yellow River provide real-time water depth and temperatures. Risk management assessments have been completed for all training activities. Daily risk assessments capture information on changing weather, water level, temperature, student conditions, and readiness of support systems. The water immersion guide is briefed at the beginning of each day and updated as conditions change. Completed Written standardized briefing formats are used for daily briefings of instructors at all three Ranger training battalions. Medical and other information on selected students and student platoons is forwarded to each training phase’s incoming commander. The Army Corps of Engineers erected 32 water depth markers along the Yellow River and training lanes in the swamps. System reviewed, and it remains a first line of safety defense. When assigned buddy is not available, teams will move to three-person system. The 6th Battalion now assigns a captain or senior noncommissioned officer and a staff sergeant to each class with responsibility for class cohesion, student advocacy, feedback to battalion commanders, and other issues. Lesson added to the Ranger course program of instruction. Completed Weak swimmers are identified during the combat water survival test and marked on their headgear and equipment. Experimental monitoring software was provided to Ranger medical clinics. Due to implementation problems, the Brigade has discontinued its use. 
The Brigade Commander has increased meals provided Ranger students from 1-1/2 to 2 per day based on Army nutritional studies. Experimental monitors were tested in June 1996, but no procurement was made.

Status of Actions to Improve Safety Management: Command and Control, Equipment, and Training

Procedures have been written and included in the Brigade’s and the three training phases’ daily operating procedures. The Florida battalion identified specific lanes from the Yellow River through the swamps. The lanes were narrowed and adjusted to avoid hazardous areas. Students are not allowed to deviate from designated boat drop sites and training lanes. The Brigade developed a standardized instructor certification program. The program focuses on the development of instructor competency, experience, and application of procedures, safety, and risk management. Communications and computer upgrades were installed, and they are functioning effectively at the Florida and mountain phases. The Florida battalion acquired whisper mikes for use with Motorola radios during training exercises. Florida battalion students must demonstrate their ability to properly construct a one-rope bridge in 8 minutes prior to entering the swamp. A Brigade decision paper concluded that global positioning receivers will be used by medical evacuation helicopters and Ranger instructors. The Brigade acquired 66 receivers to track the movement of students. Equipment and supply packing lists for instructors, medics, and aeromedevac crews have been updated. The winter packing list has been reviewed, and minor changes were made. Instructors inspect student rucksacks to ensure they have been tailored, weight distributed, and waterproofed. A waterproofing lesson has been added to the Ranger course program of instruction.

Status of Actions to Improve Safety Management: Medical Support and Evacuation Procedures

Air, water, surface, and ground evacuation procedures have been planned, rehearsed, and inspected.
Joint medical evacuation procedures have been established among the Ranger training battalions and local medical services. Mass casualty procedures have been included in each Ranger training battalion’s standard operating procedure. The former battalion commander concluded that the road is not critical for safe training and that, following an environmental assessment and high construction and environmental mitigation cost estimates, it is not justified. A 2,000-gallon tanker is on hand at the Florida camp and two tankers with about 10,000 gallons fuel capacity are on hand at the Georgia mountain camp. All three Ranger training battalions now have full-time, Ranger-qualified medics. The Florida Ranger camp acquired 21 CO inflatable rafts, which are used by each Ranger instructor team. Six hypothermia bags were issued to each of the Ranger training battalions. All medevac emergency equipment is inspected for accountability and serviceability upon arrival at the training battalions. Fort Benning Medical Command has developed training guidelines for medics and Physician’s Assistants in each camp. Revised standard operating procedures outline cold and hot weather training procedures.

Status of Actions to Preserve Lessons Learned

1977 and 1995 accident summaries have been integrated into instructor certification programs and are required reading for new members of the chain of command. VCR tape summarizing the 1977 and 1995 accidents was produced and is in use in the instructor certification program. Monument to students who died was erected at the site of the accident. Although all battalions have been inspected, the inspections do not focus on training-related safety. The Army Inspector General completed a review of waterborne procedures in September 1997.

Comments From the Department of Defense

Major Contributors to This Report

National Security and International Affairs Division, Washington, D.C.: Carol R. Schuster; Reginald L. Furr, Jr. Atlanta Field Office: Kevin C. Handley; Katherine P. Chenault

Pursuant to a legislative requirement, GAO provided information on the corrective actions taken by the Army following the deaths of four Ranger students in a 1995 training accident, focusing on the status of: (1) Army Ranger training manning levels required by the fiscal year 1996 National Defense Authorization Act; (2) establishing safety cell organizations required by the act; (3) corrective safety actions instituted after the accident; and (4) inspections of identified safety controls.
GAO noted that: (1) even though the Army placed the Ranger Training Brigade on the list of units excepted from normal Army personnel priorities and raised the Brigade's personnel distribution to 90 percent of required numbers, it was not able to meet the act's required personnel levels; (2) in February 1997, when the Army planned to first meet the act's requirement, the Brigade had 97 percent of required enlisted personnel but only 88 percent of the required number of officers; (3) the Brigade's personnel strength was below the mandated 90-percent level for both officers and enlisted personnel from October 1997 through September 1998; (4) while Brigade officer staffing levels were below the mandate, they were significantly higher than they were at the time of the accident; (5) if the Army continues the current 90-percent officer distribution planning level for the brigade, it is not necessary to continue the mandated personnel levels in law; (6) the Army has established safety cells with personnel knowledgeable about local terrain and weather conditions, but the frequency of personnel rotations may make it difficult to provide the continuity that the act requires; (7) specifically, the Brigade and battalion chains of command who serve as the safety cell members and supervise daily training safety decisions generally rotate to new units every 2-3 years; (8) because of the act's requirement that safety cell personnel have sufficient continuity and experience, the Army has recently authorized the addition of four civilian personnel to the safety cells at the Brigade and the three training battalions; (9) the Army plans to fill these positions by September 1999; (10) the Army has completed and institutionalized most of the recommended corrective actions, and they appear to be functioning effectively; (11) the Brigade has improved safety controls at the Florida Ranger camp by developing systems to better monitor and predict river and swamp conditions; (12) it has
moved waterborne training exercises outside high-risk areas and eliminated discretion to deviate from established training lanes; (13) at all three training phases, medical evacuation procedures have been revised, rehearsed, and inspected, physician assistants have been assigned to the Brigade and training battalions, and a Brigade communications officer has been assigned; (14) in addition, the Brigade now requires that its training companies be commanded by branch-qualified captains; and (15) although frequent inspections have been conducted since the accident, they did not evaluate continued compliance with the training safety controls, nor were the results of the inspections adequately documented.
Introduction

A perennial issue for federal and District of Columbia officials has been determining the proper level of federal assistance to the District. Federal assistance historically has helped the District offset costs associated with its unique status and position. However, according to District officials, this assistance is inadequate. Based on the District’s most recent budget analysis, District officials claim that they will be unable to maintain the District’s current level of services into the future under its current revenue policies. District officials also point to a deeper structural imbalance, stating that they do not have sufficient revenue capacity to meet the high cost of providing residents and visitors with adequate public services. In addition, the District has experienced serious and longstanding management problems. In September 2002, we published an interim report that concluded that the District had not provided sufficient data and analysis for us to determine whether, or to what extent, the District is, in fact, facing a fiscal structural imbalance. To help inform this debate about the proper level of federal assistance, this report (1) assesses whether, or to what extent, the District faces a structural imbalance between its revenue capacity and the cost of providing residents and visitors with average levels of public services, (2) identifies significant constraints on the District’s revenue capacity, (3) examines cost conditions and management problems in key program areas, and (4) studies the effects of the District’s fiscal situation on its ability to fund infrastructure projects and repay related debt.

Characteristics of the District

While the District serves as the seat of the federal government, it also serves as home to over a half million people. The District is 61 square miles and had 9,316 residents per square mile in 2000. The District’s primary industry after the federal government is tourism.
Other important industries include trade associations, as the District is home to more associations than any other U.S. city. Table 1 describes some of the demographic characteristics of the District and compares them to national averages in 2000.

The District’s Fiscal Relationship with the Federal Government

The fiscal relationship between the federal government and the District has been a subject of perennial debate. Although the U.S. Constitution gives the Congress exclusive legislative authority and control over the District as the seat of the federal government, the Constitution did not specifically define the fiscal relationship between the District and the federal government. Accordingly, tension has existed between maintaining some degree of federal control over the District and the desire to grant District residents a say in how they are governed. As a result, local autonomy and federal fiscal support for the District have evolved throughout the last 200 years. From the 1870s to the present, the federal government has made financial contributions to the District’s operations. Table 2 briefly describes the evolution of this fiscal relationship by highlighting the important milestones since home rule in 1973.

Reports on the District’s Unique Circumstances, Fiscal Health, and Management Problems

Several recent reports address some of the unique challenges the District faces as the nation’s capital, the status of its fiscal health, and the management inefficiencies that continue to affect its programs, costs, and service delivery. While these studies reach similar conclusions about the District’s unique costs associated with the federal presence, as well as its high demand for services, they also recognize that the District needs continued management improvements. Table 3 highlights the conclusions reached in several recent reports about the District.
The Economic Slowdown and the District’s Finances

After the economic boom of the 1990s, all levels of government are now experiencing serious fiscal challenges and are likely to face even more fundamental ones in the future. The federal budget has moved from unprecedented surpluses in the late 1990s to deficits, with the Congressional Budget Office (CBO) now projecting that the federal government will run deficits of $246 billion in fiscal year 2003 and $200 billion in fiscal year 2004. At the same time, spending demands are on the rise as the federal government deals with funding entitlement programs, such as Medicaid, Medicare, and Social Security, along with rapidly increasing health care costs and recent defense and homeland security needs. Similarly, states are experiencing significant, recurring revenue declines; estimates show state budget shortfalls of about $80 billion by 2004. States are not only facing a major decline in revenues, attributed to the recession, steep stock market declines, and other factors, but also increased spending in areas like Medicaid due to increased enrollment and health care costs. These shortfalls have translated into reductions in aid to local governments; hiring and salary freezes; cuts in infrastructure projects and in discretionary programs aimed at low-income individuals and families; and even across-the-board spending reductions. Many states have also taken other actions, such as tapping “rainy day” funds or tobacco settlement money, or raising “sin” taxes. Like those of other state and local governments, the District’s finances have been adversely affected by the recent economic slowdown. The CFO’s office projects that total local source revenues for fiscal year 2003 will be $53.5 million (or 1.5 percent) lower in inflation-adjusted terms than they were in fiscal year 2000. The principal reason for this decline is a significant deterioration in individual income tax revenue.
In fact, the decline of $214.1 million in the individual income tax far exceeds the decline in overall revenues. The CFO’s office attributes much of this decline to a steep drop-off in capital gains earned by residents, although the office does not have sufficiently detailed data to quantify the decline in this specific source of income. Sales tax and business franchise tax revenues have also declined, but in smaller absolute amounts compared to the individual income tax. In contrast, revenues from property taxes (the District’s second most important revenue source after the income tax), gross receipts, other taxes, and nontax sources have increased since fiscal year 2000. Table 4 shows the change in revenue from each principal source from fiscal year 2000 through fiscal year 2003. The District’s approved fiscal year 2003 budget was $5.6 billion. As of April 2003, District officials projected that over the long term, continuing current spending and tax policies would lead to increasingly large deficits, growing to $325 million annually by 2007.

Scope and Methodology

The Ranking Minority Member of the Subcommittee on the District of Columbia, Committee on Appropriations, United States Senate, and the Honorable Eleanor Holmes Norton, House of Representatives, asked us to study the District’s fiscal position, including whether, or to what extent, the District faces a structural imbalance. To address our requesters’ questions, we used a body-of-evidence approach that combined quantitative and programmatic analyses to identify any possible structural imbalance. Our approach was not intended to provide a definitive point estimate of any imbalance; rather, it was expected to show whether the District’s ability to provide an average level of services with its given revenue capacity is substantially different from that of most jurisdictions.
The approach was also designed to examine cost conditions in key program areas and to identify management problems that could lead to wasted resources. In addition, we attempted to identify the effects of the District’s fiscal situation on deferred infrastructure projects and debt capacity. Our methodology was vetted among key experts, including individuals who designed the underlying methodology and District economists. We revised our methodology based on expert consultation as appropriate. (See apps. I, II, and III for more detail on our overall approach.) Our methodology was based on previous efforts to define an objective measure of a fiscal system’s structural balance. No consensus exists regarding the appropriate level of services and taxation, and this issue has been a matter of perennial debate in every state. For this reason, when public finance analysts have, in the past, compared the underlying or “structural” fiscal position of jurisdictions, they have attempted to estimate objective measures of each jurisdiction’s spending that are independent of that jurisdiction’s particular preferences and policies. Similarly, analysts have estimated measures of revenue capacity that are independent of each jurisdiction’s decisions regarding tax rates and other tax policy choices. As we explain in more detail below, these objective benchmarks for levels of service and for revenue capacity are based on the national average spending and the national average tax rates for state fiscal systems. Consequently, the benchmarks are “representative” of the level of services that a typical fiscal system provides and the tax rates that it imposes on its tax bases. A fiscal system is said to be in structural balance if it is able to finance a representative basket of services by taxing its tax bases at representative rates.
Our use of an average level of services and average tax rates should not be interpreted as an indication that these are the levels of spending and taxation that jurisdictions should seek to provide. Each jurisdiction is an autonomous governmental entity responsible for providing the package of services and level of taxation desired by its citizens. Depending on the preferences of local citizens and their representatives, levels of taxation and the services they support may be higher in some jurisdictions and lower in others. The use of average levels in our analysis should only be thought of as a convenient benchmark against which to gauge relative differences in the cost of providing public services over which local officials have little direct control and as providing an indication of the potential availability of revenue sources from which to finance those costs. Because the District has all the fiscal responsibilities generally shared by state, city, county, and special district governments, we used two baskets of services as benchmarks. The first is a basket of services typically provided by state fiscal systems (the state and all of its local governments), and the second is a basket of services typically provided in more densely populated urban areas. Both baskets include such functions as elementary and secondary education, higher education, public welfare, health and hospitals, surface transportation, public safety, and other public service functions. For the basket of services provided by state fiscal systems, we combined our separate estimates by weighting each spending function by its proportionate share of total spending of the average state fiscal system. For the second basket of services provided by governments serving densely populated urban areas, we combined our separate estimates by weighting each spending function by its proportionate share of total spending of the average urban areas. 
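The weighting step described above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the two-function example data are hypothetical, not GAO's actual expenditure categories or figures.

```python
# Illustrative sketch of the basket-weighting step (hypothetical data).
# Each function's per capita cost estimate is weighted by that
# function's share of total spending in the benchmark fiscal system.

def weighted_basket_cost(per_capita_costs, benchmark_spending):
    """Combine per-function estimates into one per capita basket cost.

    per_capita_costs   -- {function: estimated per capita cost of an
                          average service level in the jurisdiction}
    benchmark_spending -- {function: total spending on that function in
                          the benchmark (average state or urban) system}
    """
    total = sum(benchmark_spending.values())
    return sum((spend / total) * per_capita_costs[func]
               for func, spend in benchmark_spending.items())

# Hypothetical two-function illustration:
costs = {"education": 1580.0, "public_safety": 900.0}
spending = {"education": 300e9, "public_safety": 100e9}
print(weighted_basket_cost(costs, spending))  # 0.75*1580 + 0.25*900 = 1410.0
```

The same helper covers both baskets: the state-system basket and the urban-area basket differ only in the benchmark spending shares passed in as weights.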
To calculate the cost of providing an average, or representative, basket of public services, we used the national average per capita spending for each expenditure function as a benchmark for an average service level. For example, the national average spending for elementary and secondary education was $1,338 per capita. We used this figure as a benchmark indicator of an average level of educational services. However, this benchmark has to be adjusted to account for the fact that an average level of spending does not support the same level of service in each fiscal system. To estimate the cost of an average level of services for each state fiscal system, we adjusted our benchmark by cost drivers that reflect specific demographic, economic, and physical characteristics that are beyond the direct control of government officials. For example, we used the number of school-age children (excluding children attending private schools) rather than actual school enrollments to represent the overall scope of government responsibility for elementary and secondary education, since actual enrollments can be affected by the decisions of policymakers. Similarly, we used the average wage rate in private sector employment to measure the personnel cost of delivering public services rather than using actual government labor compensation rates, since these too are affected by negotiations with public employees and, therefore, reflect government policy choices. Our estimates of the cost of providing an average level of services are likely to understate to some unknown extent the District’s cost of an average service level for a number of reasons. First, by using the average per capita spending of all state fiscal systems as our benchmark of an average service level, by necessity the benchmark excludes any unique public service costs associated with being the nation’s capital.
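The adjustment just described amounts to scaling the national benchmark by cost drivers. In the sketch below, the $1,338 K-12 benchmark comes from the report, but the workload and input-cost indexes are hypothetical placeholders for the drivers GAO actually used (school-age population, private sector wages, and so on).

```python
# Sketch of adjusting a national-average spending benchmark for one
# fiscal system's cost environment (the indexes here are hypothetical).

US_AVG_K12_PER_CAPITA = 1338.0  # national average per capita, per the report

def adjusted_benchmark(benchmark, workload_index, input_cost_index):
    """Scale the benchmark by cost drivers outside officials' control.

    workload_index   -- workload relative to the national average
                        (e.g., share of school-age children); 1.0 = average
    input_cost_index -- relative input costs (e.g., private sector
                        wage level); 1.0 = average
    """
    return benchmark * workload_index * input_cost_index

# Hypothetical jurisdiction: 10 percent fewer school-age children per
# capita, 30 percent higher private sector wages.
print(adjusted_benchmark(US_AVG_K12_PER_CAPITA, 0.90, 1.30))
```

Because the indexes are ratios to the national average, a jurisdiction with average workloads and average input costs keeps the unadjusted $1,338 benchmark.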
Such unique costs would include, for example, above average costs for crowd control for political demonstrations and increased public safety and sanitation costs based on the disproportionate number of visitors. In addition, data for the various cost drivers (e.g., school-age children and low-income residents) are limited and may not fully reflect all relevant cost drivers affecting a jurisdiction’s cost environment. Moreover, a degree of uncertainty exists regarding the relative importance each should have in the overall cost calculation. In these instances, we have generally attempted to choose conservative assumptions so as not to overstate the cost impact of factors used in our analysis. (See app. I for a more detailed discussion of our methodology and examples of instances where conservative assumptions were employed in calculating the cost of providing an average level of public services.) To estimate total revenue capacity, we combined revenue estimates for the two principal sources from which state fiscal systems finance their expenditures: (1) revenues that could be raised from a fiscal system’s own revenue sources and (2) the federal grants that the system would receive if it provided an average basket of services. In the past, two basic approaches have been employed to estimate the own-source revenue capacity of states: (1) those that use income to measure the ability of governments to fund public services and (2) those that attempt to measure the amount of revenue that could be raised in each state if a standardized set of tax rates were applied to a specified set of statutory tax bases typically used to fund public services. Total taxable resources (TTR), developed by the U.S. Department of the Treasury (Treasury), is a leading example of the first type of measure, and the representative tax system (RTS), developed by the Advisory Commission on Intergovernmental Relations, is a leading example of the second.
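The RTS idea just described reduces to applying one set of national-average rates to every jurisdiction's bases. A minimal sketch, with hypothetical bases and rates (the actual RTS covers many more tax types):

```python
# Sketch of a representative tax system (RTS) capacity estimate.
# Bases and rates are hypothetical. A base the jurisdiction is barred
# from taxing (e.g., nonresident income in the District) is simply
# left out of tax_bases, as the report notes.

def rts_capacity(tax_bases, representative_rates):
    """Revenue the jurisdiction could raise at national-average rates."""
    return sum(representative_rates[tax] * base
               for tax, base in tax_bases.items())

bases = {"property": 40e9, "sales": 10e9, "individual_income": 15e9}
rates = {"property": 0.011, "sales": 0.050, "individual_income": 0.025}
print(rts_capacity(bases, rates))  # about 1.315 billion
```

An income-based measure such as TTR would instead start from a single aggregate income figure rather than enumerating statutory bases, which is why the two approaches can rank jurisdictions differently.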
Because experts disagree as to which approach is superior, we present separate results using both methodologies. Both RTS and TTR take into account the restrictions placed on the District’s taxing authority. For example, they do not include tax-exempt property or the income earned by nonresidents who work in the District. However, since other states may tax nonresidents’ incomes, those incomes are included in their tax bases. We generally used the actual amounts that state fiscal systems received from the federal government as proxies for the amounts that each system would receive if it provided an average basket of services. We do so because grant amounts generally are not likely to change significantly in response to changes in state and local spending choices. However, in the case of the Medicaid program, the federal government provides open-ended matching funds to the District and other state fiscal systems that automatically adjust to changing state policy choices regarding the coverage of their Medicaid programs and the benefits that are provided. In this case, we used an estimate of the Medicaid funding amount that state fiscal systems would likely receive if average Medicaid services were provided. We have not attempted to estimate the extent to which the District and state fiscal systems take advantage of all of their opportunities to receive federal grants. As a consequence, our grant estimates may understate the true potential that these fiscal systems have to receive grants. (See app. II for a more detailed description of the methodology we used to estimate the revenue capacity of state fiscal systems.) To obtain information on federally imposed constraints on the District’s revenue authority, we interviewed officials from the office of the District’s CFO and several local experts on the District’s economy and finances.
We also reviewed a number of studies prepared by the District, independent commissions, and other researchers that contained information, evaluations, and estimates relating to these constraints. In addition to the quantitative analysis, we conducted a programmatic analysis of the District’s reported structural imbalance by evaluating the levels of service, costs, management, and financing of three of the District’s highest cost program areas: Medicaid; elementary and secondary education; and public safety, particularly police, fire, and emergency medical services. We also conducted case study work on two similar jurisdictions: San Francisco, California, and Boston, Massachusetts. These jurisdictions were selected based upon a literature search for empirically based comparisons of cities; opinions of experts on District finances; and a cluster analysis using demographic and economic variables such as population, measures of poverty, and number of school-age children. Cluster analysis is a technique that groups units (in this case, cities) into clusters based on their closeness on a set of measures. The case study work was conducted to assess how the District compares to other jurisdictions regarding the types and costs of similar services in Medicaid, education, and public safety, as well as to provide additional context for the quantitative analysis. In conducting the programmatic work, we collected and analyzed program data and interviewed government officials in the District, California, Massachusetts, San Francisco, and Boston governments and in federal agencies responsible for overseeing or providing major funding in these three program areas. Finally, we conducted companion work to identify the effects of the District’s fiscal situation on deferred infrastructure projects and debt structure. To examine the factors involved, we met with officials of the District CFO’s office and Capital Improvement Program (CIP).
We also obtained and reviewed prior-year District budget and financial plans, current year expenditure reports for the capital projects, internal studies, and statistics and financial information on the current expenditures for the District’s CIP. Our approach to analyzing the District’s infrastructure projects differed from the approaches used to address the other objectives in this report. Because of the variety of ways infrastructure projects are owned, managed, and reported by other jurisdictions, comparative information on infrastructure across states and local jurisdictions was not readily available; therefore, we did not do a comparative analysis of the District’s infrastructure with states or other jurisdictions. We reviewed the data that the District had available in its annual budget and financial plans and CAFRs, and other documents. To assess the District’s debt service, we obtained and analyzed information from the District’s CFO on the District’s debt levels and projected infrastructure needs. We also compared selected debt service measures for the District to other state fiscal systems. Our work was performed from August 2002 through May 2003 in accordance with generally accepted government auditing standards.

The District’s Cost of Meeting Its Public Service Responsibilities Exceeds Its Revenue Capacity, Resulting in a Structural Deficit

To determine if a jurisdiction has a structural deficit, we estimated, for the District of Columbia and the 50 state fiscal systems, the spending needed to provide an average level of public services, the revenues that could be raised with average tax rates, and the amount of grant funding the jurisdiction can expect to receive. Our analysis indicated that the District’s cost of delivering an average level of services per capita is the highest in the nation due to factors such as high poverty, crime, and a high cost of living.
Our analysis also indicated that the District’s total revenue capacity (own-source revenues plus grants) is higher than that of all state fiscal systems, but not to the same extent that its costs are higher. The District’s own-source revenue capacity ranked among the top five when compared to those of the 50 state fiscal systems, and its federal grant funding is over two and one half times the national average. To estimate a structural imbalance, we performed several sensitivity analyses to show how our estimates changed as we varied specific judgments and assumptions regarding cost circumstances and the value of specific tax bases. The consistency of our basic result over a broad range of alternative assumptions and approaches led us to conclude that the District does have a substantial structural deficit, even though considerable uncertainty exists regarding its exact size. Using fiscal year 2000 data, our lowest estimate was $470 million and our highest estimate was over $1.1 billion annually. Our analysis did not take into account the unique public service costs associated with being the nation’s capital; however, our analysis did take into account the significant federal restrictions on the District’s taxing authority. The primary reason for the structural deficit is high costs due to conditions beyond District officials’ direct control. To cope with its high cost conditions, the District uses its relatively high revenue capacity to a greater extent than almost all state fiscal systems. However, this relatively high tax burden, in combination with federal grants, is just sufficient to fund an average level of public services if delivered with average efficiency.
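The structural-balance test underlying these estimates reduces to a simple comparison. The sketch below uses hypothetical round numbers, not GAO's estimates of the District's $470 million to $1.1 billion gap.

```python
# Sketch of the structural-balance test: cost of an average service
# basket versus representative-rate revenue capacity plus expected
# federal grants. All figures are hypothetical ($ millions).

def structural_gap(avg_service_cost, own_source_capacity, federal_grants):
    """Positive result indicates a structural deficit; negative, a surplus."""
    return avg_service_cost - (own_source_capacity + federal_grants)

gap = structural_gap(avg_service_cost=5600,
                     own_source_capacity=3900,
                     federal_grants=1000)
print(gap)  # 700 -> a structural deficit of $700 million
```

The sensitivity analyses described above amount to recomputing this gap under alternative cost and capacity assumptions and observing whether its sign and rough magnitude persist.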
The Spending Necessary to Fund an Average Basket of Public Services Exceeds That of All State Fiscal Systems

Using an average of the 50 state fiscal systems as a benchmark, our analysis indicates that the per capita cost of funding an average level of services in the District exceeds that of the average state fiscal system by approximately 75 percent (and is over a third more than the second highest cost fiscal system, New York). In dollar terms, the District would have to spend $2.3 billion more each year to fund an average level of public services compared to what it would have to spend if it faced average cost circumstances. When we adjusted the basket of services to reflect those typically provided in more densely populated urban areas, we estimated that the District would annually have to spend over 85 percent more than the average state fiscal system per capita. As a result, to provide an average level of services the District would have to spend $2.6 billion more than if it faced average cost circumstances. Figure 1 compares the District’s per capita costs of funding an average level of services with those of the five state fiscal systems with the highest costs. We used the U.S. average per capita spending for each specific expenditure function (for example, Medicaid, education, and public safety) as a benchmark for an average service level for that function. We then adjusted this benchmark to account for differing workloads and costs to reflect the fact that an average level of spending does not support the same level of services in each fiscal system because cost conditions differ across locations. For example, adjustments are necessary to reflect the fact that the District must compete with a high-wage private sector in attracting public employees, and high real estate costs push up the cost of government office space, making the provision of public services more expensive than in most states.
The adjustments also reflect the fact that the District faces unusually high workloads per capita, such as large numbers of low-income people and high crime rates that increase the cost of Medicaid and public safety. The public service functions that contribute most to the District’s high cost circumstances are Medical Vendor Payments (Medicaid), health and hospitals, and police and corrections. To provide average Medicaid coverage and benefits to its low-income residents, the District would have to spend about $1,315 per capita, which is more than twice the national average of $551 per capita. (See table 5.) This added Medicaid cost accounts for $437 million of the $2.3 billion difference between what the District would have to spend to meet its high costs and what it would have to spend if it faced only average costs (based on the state basket of services). Similarly, we estimated the per capita cost of providing police services is more than four times that of the average state fiscal system, adding $436 million to the District’s cost of providing an average level of services annually. One area of the budget where costs are not as high is elementary and secondary education, where, due to a comparatively small percentage of school-age children, the estimated per capita cost of an average level of services is 18 percent above that of the average state fiscal system. The only expenditure function in which the District’s per capita cost of an average service level is estimated to be well below the national average is highways, of which the District has comparatively few miles per capita. Table 5 provides information on the District’s costs of funding services for all functions. The cost estimates shown in table 5 are likely to understate to some unknown extent the District’s cost of an average level of services for a number of reasons. 
First, by using the average per capita spending of all state fiscal systems as our benchmark for an average level of public services, the benchmark, by necessity, excludes any unique public service costs associated with the District being the nation’s capital. Such costs would include, for example, crowd control for political demonstrations that occur disproportionately in the nation’s capital and a disproportionate number of tourists and out-of-town visitors that impose public safety and sanitation costs on the District’s budget. In addition, limited data are available for the various indicators of workload used in our analysis and there is a degree of uncertainty regarding their relative importance in our overall cost estimates. In these instances, we generally chose conservative assumptions so as not to overstate the cost impact of factors used in our analysis. For example, in adjusting for differences in the cost of living, we took into account only differences in the cost of housing, but due to data limitations, we were unable to take into account other potential sources of such cost variation. Such conservative assumptions likely result in an underestimate of the number of low-income residents in our analysis and, therefore, of the District’s costs. For more discussion and examples of instances where conservative assumptions were employed in our analysis, see appendix I. The District’s Per Capita Total and Own-Source Revenue Capacities Are High Relative to Those of State Fiscal Systems Our analysis indicated that the District’s per capita total revenue and own-source revenue capacities are higher than those of all but a few state fiscal systems. As noted earlier, the District’s total revenue capacity equals the sum of its own-source revenue capacity (the revenue that it could raise from its own economic base), plus the amount of federal grants that the District would receive if it provided a representative level of services. 
Experts disagree on the best approach for estimating revenue capacity and numerous data limitations exist; thus, in the course of our analyses we made a variety of methodological decisions and assumptions. For this reason, we present a range of estimates for the District’s revenue capacity based on two fundamentally different approaches that have been used in the past. All of the estimates we present include adjustments designed to account for significant constraints on the District’s taxing authority, which are discussed in chapter 3. For one measure of the District’s own-source revenue capacity we used the U.S. Department of the Treasury’s (Treasury) estimates of total taxable resources (TTR). TTR is a comprehensive measure of all income either received by state residents (from state or out-of-state sources) or income produced within the state but received by nonresidents. We also developed a second set of estimates of own-source revenue capacity, using the representative tax system (RTS) methodology. The RTS methodology estimates the amount of revenue that could be raised in each state if a standardized set of tax rates were applied to a set of uniformly defined statutory tax bases typically used to fund public services. Proponents of TTR believe that a measure of revenue capacity should be independent of policy decisions and should avoid judgments about the administrative or political feasibility of taxing particular bases. Proponents of the RTS approach believe that administrative and political constraints should be taken into account, even though it may be subjective to say what is a constraint and what is a choice. In producing our RTS estimates, data limitations compelled us to use a variety of assumptions and, in some cases, several different approaches when estimating individual tax bases. Rather than present results for every possible combination of plausible assumptions, we developed “low” and “high” RTS estimates of own-source revenue capacity. 
The “low” estimate is the result we obtained when we used all of the assumptions that tended to lower our estimate of the District’s capacity relative to those of the states; the reverse holds for our “high” RTS estimate. (See app. II for additional details.) The two fundamentally different estimation approaches yielded the same basic result—the District’s own-source revenue capacity per capita ranked among the top five when compared to those of the 50 state fiscal systems. According to the Treasury’s TTR estimates, the District’s per capita own-source revenue capacity was 34 percent larger than that of the average state fiscal system in fiscal year 2000. According to our RTS estimates for that same year, the District’s per capita own-source revenue capacity was from 19 percent to 29 percent greater than the average. Although we believe it is likely that the District’s actual revenue capacity falls within the range spanned by both Treasury’s and our estimates, we cannot be absolutely certain that it does. The District’s relatively high own-source revenue capacity, combined with the fact that the District has access to much larger federal grants per capita than any of the state fiscal systems, gives the District a higher total revenue capacity than any of the state fiscal systems. We estimated that, if the District had provided an average level of services in fiscal year 2000, its federal grants would have been more than two and one-half times as large as the average per capita federal grants received by state fiscal systems and over 50 percent more than the second largest recipient of federal assistance, Alaska. Adding these grants to the TTR estimate of own-source revenue capacity yields an estimated total revenue capacity for the District that is 60 percent greater than that of the average state fiscal system. The estimated total revenue capacity for the District, based on the grants plus our “low” RTS estimate, is 47 percent above the national average. 
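Mechanically, the RTS computation described above applies a standardized (national average) effective rate to each uniformly defined tax base and sums the results. The bases and rates in this sketch are hypothetical round numbers for illustration only, not the actual values used in the analysis.

```python
# Minimal sketch of the representative tax system (RTS) calculation:
# revenue capacity = sum over taxes of (standardized rate x uniform base).

tax_bases = {              # jurisdiction's tax bases, $ millions (illustrative)
    "individual_income": 18_000,
    "property":          45_000,
    "general_sales":      7_500,
}

representative_rates = {   # national average effective rates (illustrative)
    "individual_income": 0.025,
    "property":          0.032,
    "general_sales":     0.047,
}

def rts_capacity(bases: dict, rates: dict) -> float:
    """Own-source revenue capacity under the RTS approach."""
    return sum(rates[tax] * bases[tax] for tax in bases)

capacity = rts_capacity(tax_bases, representative_rates)
print(f"RTS own-source capacity: ${capacity:,.0f} million")
```

Because the rates are held fixed across jurisdictions, differences in computed capacity reflect only differences in the size of the underlying bases, which is the point of the RTS methodology.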
Figure 2 compares the District’s total revenue capacity to those of the five state fiscal systems with the highest total revenue capacities. The values in the figure show the extent to which each system’s revenue capacity exceeds the national average, which equals 100 percent. Although the District had the highest total revenue capacity of any fiscal system, the margin separating the District from the next highest fiscal systems is not nearly as large as it was for the representative expenditure estimates presented previously in figure 1. The District’s Structural Deficit Results from a High Cost of Funding an Average Level of Services The District has a structural deficit because its costs of providing an average level of services exceed the amount of revenue that it could raise by applying average tax rates. This result holds regardless of which range of estimating approaches and assumptions we used. We obtained our lowest deficit estimate of about $470 million by combining our lowest estimate of the District’s costs (the one based on the state basket of services) with our highest estimate of the District’s total revenue capacity (the one based on the TTR approach). In contrast, we obtained our highest deficit estimate of over $1.1 billion by combining our highest estimate of the District’s costs (the one based on the urban basket of services) with our lowest estimate of the District’s total revenue capacity (the one based on the “low” RTS approach). While we cannot be certain that the actual size of the District’s structural deficit falls within this range of estimates, we believe that the District’s structural deficit is unlikely to be lower than our most conservative estimate of $470 million for the reasons explained earlier. 
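The low and high bounds described above come from pairing opposite extremes of the cost and revenue-capacity estimates. The component dollar figures below are illustrative placeholders chosen only so that the differences reproduce the reported $470 million and $1.1 billion bounds; they are not the actual fiscal year 2000 estimates.

```python
# Sketch of how the deficit range is formed ($ millions, illustrative).

cost_estimates = {
    "state_basket": 5_710,   # placeholder: lower cost estimate
    "urban_basket": 6_040,   # placeholder: higher cost estimate
}
capacity_estimates = {
    "ttr":     5_240,        # placeholder: higher total capacity estimate
    "rts_low": 4_930,        # placeholder: lower total capacity estimate
}

def structural_deficit(cost: float, capacity: float) -> float:
    """Deficit = cost of an average service level minus total revenue capacity."""
    return cost - capacity

# Lowest deficit: lowest cost paired with highest capacity.
low = structural_deficit(cost_estimates["state_basket"], capacity_estimates["ttr"])
# Highest deficit: highest cost paired with lowest capacity.
high = structural_deficit(cost_estimates["urban_basket"], capacity_estimates["rts_low"])
print(low, high)   # 470 1110
```

Pairing extremes this way brackets the deficit: any intermediate combination of cost and capacity estimates yields a figure between the two bounds.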
To better compare the size of the District’s deficit to those of the state fiscal systems, we sought to control for the wide differences in the sizes of the fiscal systems by dividing each system’s deficit (or surplus) by its population and own-source revenues. Table 6 presents the three alternative measures of the deficit and, for each of them, shows how the District ranks against the 50 state fiscal systems. The District’s deficit is larger in per capita terms than that of any state fiscal system for both our higher and lower estimates. The District’s deficit as a percentage of own-source revenue is sixth largest according to our lower estimate, and the largest according to our higher estimate. Figure 3 shows how the District’s structural deficit per capita compares to those of the state systems with the largest structural deficits. The figure shows that, if the District’s actual structural deficit is close to our lower estimate, then it is not much different from the deficits of most of the state fiscal systems in the top 10 in per capita terms. However, if the District’s actual structural deficit is close to our higher estimate, then it is much larger in per capita terms than the deficit of any state fiscal system. The District’s High Tax Burden Yields Revenues That Could Only Support an Average Level of Services The District’s tax burden (actual revenues collected from local resources relative to its own-source revenue capacity) is among the highest of all fiscal systems, but that burden yields revenues that are only sufficient to fund an average level of services. The District’s actual tax burden exceeded that of the average state fiscal system by 33 percent, based on our lower estimate of its own-source revenue capacity, and by 18 percent, based on our higher estimate of that capacity. (See the first two bars of fig. 4.) 
The combination of a high revenue capacity and a high tax burden allows the District to fund a very high level of actual spending—$9,298 per capita in fiscal year 2000 compared to a national average of $5,236. However, when the District’s high cost circumstances are taken into account, this high spending level would only be sufficient to provide an average level of services if those services were delivered with average efficiency. Specifically, for the state basket of services, the District’s actual spending is nearly the same as the cost of an average level of public services; for the urban basket of services, its actual spending is about 5 percent below average. (See the last two bars of fig. 4.) Moreover, as we discuss in chapter 4, the fact that the District’s aggregate spending is approximately equal to the aggregate cost of an average level of services suggests that the level of services it actually provides may be below average due to inefficient service delivery and other management problems. Nevertheless, even if the District were to provide its public services as efficiently as a typical state fiscal system, it would still face a structural deficit of $470 million or more. The District’s Revenue Capacity Would Be Even Higher in the Absence of Several Constraints on Its Taxing Authority Although the District of Columbia’s (District) own-source revenue capacity per capita appears to be large relative to those of most state fiscal systems, it would be even larger in the absence of several existing constraints on the District’s taxing authority. The most significant constraints are (1) the unique prohibition against the taxation of District-source income earned by nonresidents and (2) the relatively large proportion of the District’s property tax base that is not taxable because it is either owned or specifically exempted by the federal government. District officials say that building height restrictions also limit the District’s property tax base. 
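The tax-burden comparison in this section divides actual own-source collections by own-source revenue capacity and indexes the result to the national average. A minimal sketch follows, with illustrative figures chosen only to reproduce the 33-percent-above-average case; none of the inputs are the actual District values.

```python
# Sketch of the tax-burden index: 100 = national average effective burden.

def tax_burden_index(actual_revenue: float,
                     capacity: float,
                     national_avg_burden: float) -> float:
    """Actual collections relative to capacity, indexed to the national
    average burden (a ratio such as 0.877, meaning revenues equal to
    87.7 percent of capacity)."""
    burden = actual_revenue / capacity
    return 100 * burden / national_avg_burden

# Illustrative: a jurisdiction collecting $4.2 billion against a $3.6 billion
# own-source capacity, compared with an average effective burden of 0.877,
# comes out about 33 percent above the national average.
print(round(tax_burden_index(4_200, 3_600, 0.877)))
```

Note that a high index can coexist with only average service levels when the jurisdiction's cost of providing services is also far above average, which is the situation the section describes.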
We are not able to estimate the amount of revenue that the District would gain if these constraints were removed. However, our quantitative analysis indicates that, despite these constraints, the per capita revenue capacities of the District’s income and property taxes are higher than those of all but a few state fiscal systems. In contrast, the District likely has a relatively low sales tax capacity due, in part, to a disproportionate share of sales to the federal government and other exempt purchasers. The fact that the federal government does not pay property or sales taxes to the District does not necessarily mean that the federal presence has a net negative effect on the District’s finances. A significant portion of the private sector activity in the District is linked to the presence of the federal government. The Federal Prohibition against a District Tax on the Income of Nonresidents Is Unique Unlike any state government, the District’s government is prohibited by federal law from taxing the District-source income of nonresidents. The 41 states that have income taxes tax the in-state income earned by residents of at least some other states. Fifteen states participate in reciprocal nontaxation agreements, but no state has an agreement with more than six other states. States that impose income taxes also typically provide tax credits to their residents for income taxes paid to other states. In addition, some cities, such as Philadelphia, Detroit, Cleveland, and several other Ohio cities, tax the incomes of commuters who work within their boundaries. These taxes are typically levied at a low flat rate (most of the ones we identified were between 1 and 2 percent) on city-source earnings. Other cities are not authorized by their state governments to levy commuter taxes. 
However, in those cases the state governments are able, if they choose, to redistribute some of the state tax revenues collected from residents of suburbs to central cities in the form of grants to the city governments or in the form of direct state spending within the cities. Critics of this restriction on the District’s income tax base argue that commuters increase the demand for city services and, therefore, should contribute to defraying the additional costs that they impose. Although no data are collected on the amount of money the District spends on commuters, we have rough indications of some of the impacts based on our own quantitative analysis. For example, we estimated that the cost to the District of providing a representative level of police and fire services, solid waste management, parking facilities, local libraries, and transit subsidies in fiscal year 2000 was from $44 million to $77 million more than it would have been if the daily inflow of commuters to the District had only equaled the daily outflow. We cannot separate the impact of commuters from that of residents on the District’s highway costs. Commuters should not have a large impact on the District’s costs for other services, such as primary and secondary education or Medicaid. Although commuters impose costs, some local economists we interviewed noted that commuters already do contribute to the financing of these services, even without a tax on their income. Again, no data are collected on the amount of taxes paid directly by commuters or the tax revenues attributable to jobs supported by them. Some rough indications of the revenue contributions are available. One recent study estimated that a typical daily commuter to the District pays about $250 per year in sales and excise taxes, parking taxes, and lottery ticket purchases. Another study indicates that spending by commuters supports jobs for District residents who are subject to the District’s income tax. 
It is difficult to estimate the amount of additional revenue that the District would gain if it were allowed to tax the income of nonresidents. The revenue consequences and the distribution of the ultimate burden of a nonresident income tax for the District would depend on how the tax is designed and how nonresidents and neighboring governments respond to it. Particularly important is the nature of the crediting mechanism that would be established under such a tax. For example, if the District’s tax were made fully creditable against the federal income tax liabilities of the commuters, as was proposed in the “District of Columbia Fair Federal Compensation Act of 2002” (H.R. 3923), then the federal government would bear the cost and would have to either reduce spending or make up for this revenue loss by other means. If the states of Maryland and Virginia allowed their residents to fully credit any tax paid to the District against their state income tax liabilities, then those two states would suffer a revenue loss (relative to the current situation). The two states might respond to a District commuter tax by taxing the income of District residents who work within their jurisdictions or increasing the tax rates on all of their residents. If the District’s tax were not fully creditable against either the federal or state taxes, then the commuters themselves would bear some of the tax burden. Those commuters might try to pass the burden of the tax along to their employers by demanding higher compensation, or they might choose to work elsewhere. This, in turn, would reduce the amount of revenue the District would gain from the tax. Conversely, the higher taxes paid by commuters could result in decisions to relocate to the District to avoid paying the commuter tax. The difficulty of predicting the magnitudes of the various potential policy and behavioral responses makes it difficult to estimate the revenue that the District would gain from a typical tax on nonresidents. 
The District’s Property Tax Base Is Relatively Large despite the Disproportionate Presence of Properties Owned by the Federal and Foreign Governments Like all state and local governments, the District is unable to tax property owned by the federal government and foreign governments. As the nation’s capital, the District clearly has a higher percentage of its total property value owned by the federal government and by foreign governments than most jurisdictions and, therefore, would benefit more than most jurisdictions if the federal government and foreign governments paid property taxes or made payments-in-lieu-of-taxes. Nevertheless, our quantitative analysis indicates that the District’s per capita property tax base is already larger than those of all but a few state fiscal systems. (See app. II.) There does not appear to be a strong basis for concluding that the District’s commercial property tax base is negatively affected by the federal presence. Given that a large portion of the private sector activity in the District is linked to the presence of the federal government and other exempt entities, it is unclear whether commercial property would fill the void left if federally owned property were reduced to the hypothetical average level seen in other cities. In fact, a good deal of the commercial property tax base is located in the District because of the federal presence. For example, commercial office buildings in the District are occupied by contractors who provide services to the federal government, lawyers who need to interact with regulatory agencies, and public relations firms that interact with congressional offices, among others. The District of Columbia Tax Revision Commission presented a comparison suggesting that, even with the large concentration of exempt property, the per capita value of the District’s taxable property base is large compared to that of other large cities and comparable to the per capita values in surrounding jurisdictions. 
It is difficult to estimate the net fiscal impact of the presence of the federal government or other tax-exempt entities because of the wide variety of indirect contributions that these entities make to District revenues and the lack of information on the services they use. Tax-exempt entities do generate revenues for the District, even though they do not pay income or property taxes directly. For example, employees of the tax-exempt entities and employees of businesses that provide services to these entities pay sales taxes to the District. We have found no comprehensive estimates of these revenue contributions; however, studies of individual tax-exempt entities suggest that the amounts could be significant. Fully taxable properties also generate these indirect revenues, and a fully taxable property that is similar to a U.S. government property in every respect except ownership would contribute more to the District’s finances than the government-owned property. However, as noted above, it is not clear that the District would have more taxable property than it currently has if the federal presence were reduced to a level typical of other jurisdictions. District Officials Believe That the Federally Imposed Height Restriction on Buildings Also Limits the District’s Property Tax Base District officials cite the congressionally imposed height restrictions on buildings as another factor that constrains the District’s property tax base. Although these restrictions may affect the distribution of commercial and residential buildings within the District, it is difficult to determine whether, or to what extent, these restrictions affect the aggregate amount and value of those buildings. Two factors are likely to mitigate the potential negative impact on the District’s tax base. First, the space available for building within the District has not been completely used. 
At least some of the office or residential space that would have been supplied on higher floors at certain locations, if it were not for the height restrictions, is likely to have been shifted to other locations in the District where building would have been less intensive otherwise. Second, in the face of a given demand for office space, a constraint on the supply of that space will increase its value per square foot. In addition, the restriction could have an effect on the cost of the District’s services by influencing the District’s population density. However, the size of any such effect on service costs is unknown. Other Nationwide Restrictions on Taxing Authority Are Likely to Affect the District Disproportionately In addition to the restrictions discussed above, the District is unable to tax the incomes or most purchases of foreign embassies and diplomats, purchases or sales by the federal government, the personal property of the United States or foreign exempt entities, the income of military personnel who are stationed in the District but claim residence in another jurisdiction, or the income of government-sponsored enterprises (GSE), such as the Federal National Mortgage Association and the Student Loan Marketing Association. All states and localities nationwide are potentially subject to these same restrictions on their taxing authority, even though some of the restrictions may have a disproportionate effect on the District, given the relatively high concentration of these nontaxable entities and persons within its boundaries. In contrast to the case with the income and property taxes, where nontaxable income and property were already excluded from the data we used in our quantitative analysis, the sales data that we used contained some sales to the federal government, embassies, and military personnel that would be exempt. 
Given data limitations, we were required to make a range of assumptions to estimate the amount of sales that would be exempt (see app. II for details). Our lower estimate for the District’s sales tax revenue capacity placed it below those of 49 of the state fiscal systems; our higher estimate placed it below those of 31 of the state fiscal systems. The District Faces High Cost Conditions and Significant Management Problems The District’s high spending on the key program areas of Medicaid, elementary and secondary education, and public safety (particularly police, fire, and emergency medical services) is influenced by several cost factors, including high poverty, large numbers of economically disadvantaged children and elderly residents, and high crime. Our quantitative analysis shows that the District’s spending for Medicaid and elementary and secondary education is slightly above what it would take to provide an average level of services, while police spending may be significantly below what it would take to provide an average level of services if provided with average efficiency. However, this analysis does not account for all special circumstances beyond the control of the District, such as high demand for Medicaid, high demand for special education services, and extra police and fire services associated with political demonstrations. In addition, in each of the three key program areas we identified significant management problems, such as inadequate financial management, billing systems, and internal controls, which result in unnecessary spending and draw scarce resources away from program services. In recognition of the District’s high-cost environment and management challenges, the federal government provides financial and other support to the District, including an enhanced Medicaid match. 
Special Circumstances and Management Problems Influence High Medicaid Costs in the District Medicaid is a large and growing portion of the District’s budget, with the per capita delivery costs of the program being more than twice the national average. Certain population and delivery characteristics largely outside the District’s control influence these high Medicaid costs. These characteristics include a high poverty rate that contributes to the large numbers of citizens who lack private health insurance and who meet existing Medicaid eligibility criteria, a heavy concentration of Medicaid beneficiaries with chronic health conditions that require expensive and ongoing care, and high real estate and personnel costs for health and long-term care providers. When we adjusted for these high-cost characteristics, our analysis revealed that the District spent only slightly more than that needed to fund the national average levels of coverage and services. However, management problems, which are under the District’s control, have further influenced the local share of Medicaid spending. For example, the District has been foregoing millions in available federal matching funds due to claims management and billing problems, requiring it to use more local funds than necessary in support of the program. If the District adequately addressed these problems and continued to actively pursue reforms already in place, it could receive more federal matching funds and free local funds for other purposes. In recognition of the high costs and management challenges, the federal government provides certain supplemental financial and other support to the District, such as an enhanced federal share of the District’s spending on Medicaid. The District’s Spending on Medicaid Is Slightly More Than That Needed to Fund Average Levels of Coverage and Services The District’s per capita costs of providing Medicaid services were more than twice the national average. 
However, when we adjusted for the District’s high-cost environment, it spent only 11 percent more than what it would take to fund the national average Medicaid coverage and services. Our analysis adjusted for several factors that affect costs but are to a large extent beyond the control of District officials, including people in poverty, the elderly poor, the high cost of living, and real estate and personnel costs for providers. Special Population and Service Delivery Characteristics Influence High Medicaid Costs Special population and service delivery characteristics create a high-cost environment in the District, requiring it to spend substantially more than other jurisdictions to fund an average level of Medicaid coverage and services. The District’s high costs for Medicaid are caused by a high demand for Medicaid that, in part, can be attributed to its population’s very high poverty rate and to the high proportion of residents who lack private health insurance because their employers do not offer it or they cannot afford it; thus, a large number of District residents rely on Medicaid for public health care coverage. These factors lead to the District spending disproportionately more to fund an average level of Medicaid coverage and services. Specifically, the District’s poverty level is the second highest among states, and many District residents meet income-based coverage criteria. For example, in 1999 the District had the highest percentage of individuals under age 65 with incomes less than 100 percent of the poverty level covered by Medicaid (based on 1997 through 1999 data). Overall, one in four District residents receives Medicaid, a rate that is high in comparison with that of its neighboring state, Maryland. 
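The 11 percent figure above comes from comparing actual Medicaid spending against the cost-adjusted benchmark for average coverage and services. In the sketch below, the ~$1,315 per capita need estimate is from table 5; the actual-spending figure is an illustrative placeholder chosen to reproduce the reported 11 percent result.

```python
# Sketch of the spending-versus-need comparison for a single program area.

def spending_vs_need(actual_per_capita: float, need_per_capita: float) -> float:
    """Percentage by which actual spending exceeds (positive) or falls
    short of (negative) the cost-adjusted benchmark for an average
    level of services."""
    return 100 * (actual_per_capita / need_per_capita - 1)

# Illustrative: roughly $1,460 per capita actual spending against the
# ~$1,315 per capita cost-adjusted need (from table 5) is about 11 percent
# above the benchmark.
print(round(spending_vs_need(1_460, 1_315)))
```

The key point of the comparison is that a raw spending level more than twice the national average can still be only slightly above need once the cost adjustment is applied.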
However, when the District’s high poverty rate is taken into account, its Medicaid coverage of low-income residents is about average, as the District has not elected to provide optional coverage or services that are far above the national average. An additional factor influencing costs is that District residents—many of whom rely on Medicaid for health care coverage—have a disproportionately high number of chronic health conditions that require expensive, ongoing care. The District ranks near the bottom in many health indicators relative to other states, a situation that affects the types and levels of services the population needs. For example, among states, it has very high rates of low birth weight infants, adult-diagnosed diabetes, lung cancer, and human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) infection, which tend to be found disproportionately among the poor and in urban areas like the District. Further, these chronic health conditions for the most part are costly to treat, often requiring expensive institutional care or ongoing outpatient treatment, such as drug therapy—all at a time when health care costs, particularly prescription drugs, are increasing. The HIV/AIDS epidemic has presented a particular fiscal challenge for the District’s Medicaid program. For example, the Centers for Disease Control and Prevention reported that the District’s 2001 AIDS prevalence rate was 152 per 100,000 people, whereas the rate in the next highest state, New York, was 39 per 100,000 people. The costs of treating Medicaid beneficiaries with HIV/AIDS are very high, and because the District has the highest infection rate in the country and a disproportionately large number of Medicaid beneficiaries, the fiscal burden of the HIV/AIDS epidemic on the District’s Medicaid program is likely disproportionately larger than that of most states. 
Another factor influencing the District’s high Medicaid costs relates to the ways in which health and long-term care services are delivered. Providers generally are located in densely populated urban areas with high real estate and personnel costs, a situation that drives providers’ costs upward. Specifically, many providers have high operating costs in the District, largely due to the high costs of purchasing or renting office space and the necessity of paying higher salaries to medical personnel. Moreover, according to District officials, many of the District’s provider payment rates, particularly for physicians, are below average relative to operating costs. The combined effects of high operating costs and low payment rates may contribute to physicians declining to accept beneficiaries of the District Medicaid program. This could be one reason why many of the District’s Medicaid beneficiaries rely on emergency rooms more heavily than beneficiaries in other jurisdictions do. District Medicaid beneficiaries may also not obtain preventive care when needed, allowing health conditions to worsen, which could lead to hospital stays. Use of these more costly forms of health care is disproportionately high in the District. One report found the District had the highest rate of emergency room visits per 1,000 population in the country, as well as the highest hospital admissions rate.

Management Problems Result in the District Foregoing Significant Federal Matching Funds, but the District Is Taking Steps to Address Them

Billing and claims management problems are forcing the District to forego millions in federal matching funds and, as a result, requiring it to use more local funds than necessary to pay for expenditures already incurred.
Key issues that lead to rejected federal reimbursement claims include incomplete documentation, inadequate computerized billing systems, submission of reimbursement requests past federal deadlines, provision of services to individuals not eligible for Medicaid at the time of delivery, and billing for services not allowable under Medicaid. According to a recent report, these problems resulted in the District receiving $40 million less in federal reimbursement during fiscal year 2002 than it had projected in its budget. District officials and other experts told us it would be difficult to make any precise estimate of how much the District is foregoing in federal funding. These management problems involve weaknesses in the processes and systems that several District agencies use to track and process claims for federal Medicaid reimbursement after services have already been provided. The difference between the costs submitted for reimbursement and the costs actually reimbursed based on federal criteria results in the use of local, rather than federal, money to pay for these costs. While many states have experienced similar financial management problems, the District’s problems appear to be worse than those of most states, according to a federal official we interviewed. The magnitude of the problem is serious: Medicaid financial management was identified as a “material weakness” by independent auditors of the District’s fiscal year 2001 financial statements. These problems have been addressed in several of our reports over the years, as well as in reports by the District Inspector General (IG), the District Auditor, and McKinsey and Company. According to these reports, lower than projected federal reimbursements have amounted to millions of dollars across the various agencies, creating significant, unexpected pressures on the District’s budget.
The management problems rest mostly with the individual District agencies that bill for federal Medicaid reimbursement: the Child & Family Services Agency (CFSA), the Department of Mental Health (DMH), and the District of Columbia Public Schools (DCPS). For example, DMH, which was removed from federal receivership in May 2001, did not have an adequate billing process or information management systems in place. District officials told us that DMH’s billing system contained system edits that permitted unallowable costs to go through undetected; these claims were then forwarded to the Medical Assistance Administration’s (MAA) fiscal agent for reimbursement, which would reject them after the services had already been provided. As a result, Medicaid charges, as well as Medicare charges, were not properly documented and were deemed unreimbursable by the federal government. In fact, officials said the problems were so severe that DMH voluntarily ceased billing for federal Medicaid funds—as well as Medicare—for most of 2001 to resolve these problems and avoid almost certain disallowances from the federal government. DMH did not provide a precise estimate of the federal reimbursement that was lost during this period. The District also does not have an effective centralized monitoring process for Medicaid. Officials of MAA told us they have a limited ability to control and monitor CFSA, DMH, and DCPS—unlike the private third parties that provide services under the regular Medicaid program. Because these public provider agencies are distinct units of the District government, the District’s budget makes it clear that MAA does not have authority over these agencies in terms of financial management, programs, budget, claims submission or billing, or estimation of federal reimbursement. Officials told us that historically individual agencies, such as DMH or DCPS, made their own Medicaid projections for inclusion in the District’s budget, and the projections were almost always highly inflated.
Accordingly, the baseline of the District’s budget would indicate a large influx of federal Medicaid funds that would never materialize due to billing and claims management problems. For example, DCPS’s original estimate of expected federal reimbursement for fiscal year 2002 was $43 million, which was later reduced to $15 million by the District chief financial officer (CFO). In fiscal year 2001, the District wrote off over $78 million representing several years’ worth of such unpaid federal claims, which were still in the baseline of its budget. If District agencies adequately addressed these problems, they could receive more federal matching funds and free local funds for other purposes, such as providing an above average level of Medicaid coverage or optional services. While the District has taken some positive steps to improve management, more improvements are needed.

Steps to Address Management Problems

District officials have acknowledged the severity of the District’s Medicaid management problems and have taken steps to remedy them. Most significantly, improving management could help the District increase its share of federal Medicaid reimbursement. Most of these reforms have been implemented only within the past year, so it is unclear how effective they will be in the long run. Key examples include the following: The Office of Medicaid Public Provider Operations Reform, which was created in June 2002, has become a needed focal point in the Mayor’s office for integrating billing processes across District agencies and helping these agencies modify their processes and management systems to maximize federal Medicaid reimbursement. The District recently created an $87 million Medicaid reserve to compensate for the costs of Medicaid reimbursements that may need to be covered by local funds and to serve as a cushion for any lower than expected reimbursement of federal Medicaid, Medicare, and Title IV-E funds.
District officials told us they expect to use at least a portion of these funds during the current fiscal year. The District CFO is now responsible for analyzing and clearing any Medicaid projections made by CFSA, DCPS, and DMH (and eventually DHS) before they are incorporated into the District’s budget. Officials told us that the District plans to be more conservative in its projections for federal Medicaid funds to avoid the negative effects of lower than expected federal reimbursement. DMH has designed and implemented a new billing process for Medicare and Medicaid, in accordance with the business plan mandated by the court as part of its post-receivership agreement. CFSA is implementing a new computerized billing system, making changes to its data collection process, and working closely with federal Medicaid officials to ensure that any changes meet federal requirements.

The District Receives Enhanced Medicaid Matching Support and Other Assistance from the Federal Government

Recognizing the District’s Medicaid situation, the federal government has provided additional funding, as well as technical assistance and other programmatic flexibilities. Most significantly, in 1997 Congress provided the District with a fixed, enhanced Medicaid federal medical assistance percentage (FMAP) of 70 percent, which has resulted in an influx of millions in additional federal Medicaid funds that the District was not previously eligible to receive. Previously, under the statutory formula that establishes the federal matching share of eligible state Medicaid expenditures, the District received a 50 percent FMAP—the lowest possible under the law. In addition, the District receives programmatic flexibility and technical assistance from the Centers for Medicare & Medicaid Services (CMS), the federal agency within the U.S. Department of Health and Human Services that is responsible for Medicaid.
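The FMAP increase described above translates directly into additional federal funding, since the federal share of eligible expenditures is simply the matching percentage times those expenditures. The sketch below illustrates the arithmetic; the function name and the expenditure figure are hypothetical, not drawn from the report.

```python
# Sketch of the federal medical assistance percentage (FMAP) arithmetic.
# The $1 billion expenditure figure is a hypothetical placeholder.

def federal_share(eligible_expenditures, fmap):
    """Federal matching funds = FMAP x eligible Medicaid expenditures."""
    return eligible_expenditures * fmap

spending = 1_000_000_000  # hypothetical eligible expenditures ($1 billion)
gain = federal_share(spending, 0.70) - federal_share(spending, 0.50)
print(f"${gain:,.0f}")  # → $200,000,000 more federal funding at 70% vs. 50%
```

On this illustrative base, raising the FMAP from 50 to 70 percent shifts a fifth of total eligible spending from local to federal funds.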
CMS officials told us they have more frequent contact with the District than with many other states. For example, they have reviewed the District’s billing processes and computer systems in some cases to ensure they meet federal criteria.

Special Circumstances and Management Problems May Result in Increased Education Costs and Below Average Services

When we adjusted for the District’s service costs and workload factors, our cost analysis suggests that the District spent 18 percent more than what would be necessary to fund an average level of services. However, our analysis was not able to take into account all of the special circumstances facing the District. Specifically, it is likely that significant management problems and disproportionately high special education costs are drawing resources away from elementary and secondary education, suggesting that the District provides less than the national average level of education services. The federal government to some extent has recognized the District’s special circumstances and the extent of its management problems by providing it with special technical and other assistance.

The District’s Education Spending Is Somewhat Higher Than What It Would Take to Fund an Average Level of Services

We estimate that the District’s elementary and secondary education costs were 18 percent above what it would take to fund an average level of services. Our analysis incorporated several workload factors that represent cost conditions largely beyond the control of District officials, including the number of school-age children (excluding those enrolled in private schools) and the specific costs of serving elementary and secondary students and economically disadvantaged children. Our model also took into account the costs of attracting teachers and of maintaining capital facilities, both of which are higher in the District.
When the District’s costs and these workload factors were considered, our analysis showed that the District’s spending is somewhat higher than what it would take to fund a national average level of services. Our analysis, however, probably understated the District’s education costs because we were not able to quantify the District’s significant management problems or its high special education costs, which are due in part to court mandated services. If these factors could be adequately taken into account, they might show that the District is actually spending less than what is needed to fund a national average level of education services.

Significant Management Problems Are Further Drawing Resources Away from Educational Services

We, along with the District IG, the District Auditor, and federal inspectors general, have identified—and District officials have acknowledged—serious management problems throughout DCPS’s programs and divisions in areas such as financial and program management, as well as compliance with the requirements of federal programs, such as Medicaid and the Individuals with Disabilities Education Act (IDEA). These reports estimate that the local costs of management problems could be in the millions of dollars. However, our cost analysis did not take into account the fiscal resources that are wasted due to inefficient management, even though these management problems likely cause significant amounts of DCPS’s fiscal resources to be lost. Many of the management problems at DCPS can be attributed to inadequate financial management, including a lack of effective internal controls and of clearly defined and enforced policies and procedures.
For example, the independent audit of the District’s financial statements for fiscal year 2001 classified DCPS’s accounting and financial reporting as a “material weakness.” The auditors found that DCPS did not ensure timely loading of budget information into its accounting system, which prevented DCPS from monitoring expenditures and producing accurate financial reports. In another instance, DCPS’s procurement procedures were not routinely enforced, as exemplified by capital project purchase orders being processed directly through the DCPS CFO instead of through the procurement office. Recently, DCPS officials acknowledged that they face difficulties in tracking procurement costs and that, as a result, individuals at schools may purchase goods without completing a purchase order. Often, through a process known as a “friendly lawsuit,” vendors will deliver goods without a purchase order and subsequently notify DCPS of the purchase to receive payment. Last year, DCPS set aside $17 million to compensate for such unauthorized purchases and spent $10 million of it. DCPS officials provided us with other examples of the limitations of DCPS’s electronic financial management system. These limitations prevent DCPS from adequately tracking personnel costs, which represent approximately 80 percent of the school district’s budget. The system also does not allow DCPS officials to track either the total number of employees or whether particular positions are still vacant or have been filled. Recently reported problems with managing personnel expenses further highlight DCPS’s financial management problems. In March 2003, DCPS officials announced that the school system had hired about 640 more employees than its budget authorized, resulting in DCPS exceeding its personnel budget by a projected $31.5 million over the entire fiscal year.
Also, in December 2002, DCPS officials announced that the school system had paid $5 million for employee insurance benefits and contributions to tax-free retirement accounts for employees who no longer worked for DCPS. Reports have also identified management problems in particular educational programs, which influence costs and negatively affect the quality and level of service provided to students, particularly in special education. For example, a September 2002 investigation by the District Auditor found that DCPS paid $1.2 million to vendors for providing special education services to individuals whose eligibility could not be determined from information on vendors’ invoices. In November 2000, the District IG reported that DCPS paid more than $175,000 in tuition to nonpublic special education schools that failed to meet the standards for special education programs. The District IG also reported inaccuracies in DCPS’s database for special education students, inadequate oversight of special education tuition payments, and insufficient monitoring of nonpublic special education schools. Finally, the District IG concluded that DCPS lacked adequate management controls to ensure that transportation services were adequately procured, documented, and paid for, and that by implementing certain cost saving measures DCPS could save at least $2.4 million annually. In addition, DCPS has longstanding issues regarding its ability to comply with the laws and regulations of federal education programs, including IDEA and the U.S. Department of Agriculture’s (USDA) food and nutrition programs. The extent of DCPS’s compliance issues with IDEA has been serious, and by 1998 the U.S. Department of Education (Education) had entered into a compliance agreement with DCPS that mandated improvements in DCPS’s special education program. Further, the District has experienced longstanding difficulties in complying with USDA’s requirements for the National School Lunch Program and the School Breakfast Program.
DCPS’s poor management of USDA’s food and nutrition programs resulted in the Mayor and the City Council removing oversight and monitoring responsibilities from DCPS and placing them under a new, independent District State Education Office (SEO). SEO officials told us that while oversight and monitoring have improved, they still face many problems in effectively managing USDA’s food and nutrition programs.

High Special Education Costs May Result in Less Funding Available for All Other Elementary and Secondary Education Services

Our program review revealed that the District has a high demand for special education and related costs that are not adequately captured in our quantitative analysis. The District has a disproportionately large share of special education due process hearings, which often result in it having to provide more expensive services and pay large legal fees; relies heavily on costly nonpublic schools; and operates under an array of court orders springing from class action lawsuits, many of which mandate additional types and levels of services. Accordingly, our cost analysis does not sufficiently consider a major education cost driver for the District because we assumed that the District’s special education costs were typical of the average state system, which we found is not the case. For example, the number of special education students has grown rapidly in recent years. Between the 1998-1999 and 2000-2001 school years, the number of special education students in DCPS grew by over 25 percent, while the total number of nonspecial education students decreased slightly. Over the same period, the percentage of special education students attending Boston Public Schools and the San Francisco Unified School District declined about 9 percent and 4 percent, respectively.
DCPS projects that the number of special education students will continue to grow even as the general student population is expected to continue declining, which will likely cause the special education program to pose an increasingly significant financial burden on DCPS. Overall, the size of the special education population as a percentage of students attending DCPS exceeds the average for the 100 largest urban school districts in the United States. Further, evidence suggests that DCPS may also pay a higher cost per special education student than other urban systems. The high costs associated with the District’s large number of due process hearings divert resources from other critical education services. As required by IDEA, a due process hearing gives parents of special needs children the opportunity to present complaints on any matter relating to the education of their children and to seek remedies for any shortcomings. Shortcomings that frequently spur due process hearings in the District include a lack of sufficient educational programs, older school buildings that are not handicapped accessible, failure to meet deadlines for providing services in accordance with students’ individualized education plans (IEP), and insufficient involvement of parents in the development of IEPs. The number of due process hearings held in the District in 2000 exceeded that of every state except New York, and DCPS estimates that the number of hearings requested will continue to grow as these shortcomings persist. DCPS officials also acknowledged that their special education program suffers from a range of shortcomings, such as a lack of early intervention and prevention and underinvestment in program capacity. For example, DCPS officials noted that many special education teachers are not certified to provide special education.
According to some officials, due process hearings in the District often become forums for parents to advocate moving their child out of public schooling and into a private facility—at the District’s expense. Due process hearings may result in the placement of a child in a much more costly setting, such as the transfer of the student from a public to an out-of-District private facility at the expense of DCPS, or in the mandating of additional types or levels of services. Furthermore, due process hearings impose legal costs on the District because the parents of a student often use a law firm to handle their case, and if the student prevails in the hearing, the District must pay the legal fees. DCPS officials and other key observers told us that many parents in the District want their children moved into private facilities and that lawyers respond to parents’ wishes and DCPS’s deficiencies, thereby realizing financial gains. For example, DCPS staff informed us that one law firm alone represented students in over 900 due process cases between September 2002 and January 2003 and earned approximately $1 million in fees from the District in 1 year. Even though Congress implemented a cap on legal fees related to special education, District officials told us the cap does not appear to have affected the incidence of due process hearings; we did not independently verify these claims. DCPS officials indicated that DCPS has also incurred additional costs to comply with court orders and settlements resulting from class action lawsuits. For example, DCPS officials said that the costs of transporting special education students doubled after implementing service improvements required by the court in the Petties case. However, DCPS could not verify that some of the costs attributed to the Petties case were court ordered.
DCPS officials stated that even with increased spending and greater services, they do not think they will be able to meet all of the court ordered service improvements. According to DCPS officials, another significant case was the Nelson case, which required DCPS to develop emergency evacuation plans for students with mobility impairments. DCPS officials said that complying with the court order required DCPS to make significant capital expenditures. DCPS also reported that it has a high percentage of special education students attending nonpublic special education schools because it lacks the staff and facilities to adequately serve these students. DCPS officials acknowledged that the school system historically has relied on contracting with nonpublic education facilities and has never built up the capacity to deliver sufficient special education services within DCPS. Services provided in nonpublic special education facilities are much more costly to DCPS than services provided in public institutions, as nonpublic schools charge much more for their services. For example, a special education student attending a nonpublic institution costs about twice as much as one receiving special education within DCPS. Spending on these services draws resources away from other public education services, as well as from building up the capacity to deliver more special education services within DCPS.

The Federal Government Provides Technical Assistance to the District, Recognizing Its Challenges

In recognition of the District’s special circumstances and management problems, the federal government, to some extent, has provided technical assistance to the District. Specifically, Education has provided substantial assistance to DCPS, including, since 1996, a dedicated liaison to DCPS to help identify opportunities for providing technical assistance.
According to an Education official, no other school district in the country has such a departmentwide liaison. In addition, Education officials told us they have provided extensive technical assistance to DCPS, including guidance for developing an education plan, as well as help in improving its special education program, establishing performance standards for students, and developing a new database to track student data to increase DCPS’s capacity to comply with the future data requirements of the No Child Left Behind Act. Education staff have also hosted conferences to help the DCPS leadership better understand their oversight responsibilities for federal funding programs.

The District Faces Significant Public Safety Demands due to the Federal Presence, but Related Costs Are Not Adequately Tracked

The District’s costs for the key public safety functions of police and fire protection were far above average, according to our analysis. In fact, the District’s costs were higher for police than for any other category. However, our analysis showed that when we adjusted for the District’s high-cost environment, the District spent far less on both police and fire protection than it would take to fund a national average level of services in these areas. The factors considered for both police and fire, however, do not adequately capture the demands the District faces. Most significantly, our factors do not include any measures of the various public safety demands and costs associated with the federal presence and the District’s status as the nation’s capital, such as extra protection for federal officials, including the President and Vice President, as well as for diplomatic personnel and foreign dignitaries who visit the city; nor do they capture the police and fire costs associated with the multitude of regular special events and political demonstrations that often draw thousands of people.
As a result, the District’s spending on traditional public safety services for residents, such as policing neighborhoods, traffic control, and fire and emergency medical services, is likely even further below average than our analysis would suggest—indicating the District is providing fewer traditional police and fire services to its citizens. In addition, the District’s current cost tracking processes do not adequately capture the true total costs of providing police and fire services to support the federal presence, putting the District at a disadvantage in recovering costs related to protection, special events, or demonstrations. Finally, while the District has received some special federal funding in recognition of the services it provides to support the federal presence, it is unlikely this funding fully compensates for all related costs—indicating that local dollars are being used in support of federal activities.

Our Analysis Shows the District’s Police and Fire Spending Is Below Average When Its High-Cost Environment Is Considered

According to our analysis, the District’s costs of providing police services were very high—at four and one-half times the national average—as were the costs of providing fire protection services, which were nearly double the national average. However, our analysis indicated that when we adjusted for the District’s high-cost environment for both police and fire, it was spending below what it would take to fund an average basket of services typically associated with police and fire departments. Specifically, the District’s spending on police was 66 percent below what would be necessary to fund a national average level of services based on the urban basket of services and 40 percent below using the state basket of services. Fire protection spending was 28 percent below using the urban basket of services.
Our analysis for police was based on only three factors—murder rates, the 18- to 24-year-old population, and the general population. The District’s murder rate, which served as an indicator of the prevalence of violent behavior, was extremely high at more than seven times the national average. Further, we found that the percentage of residents in the 18-24 age range—a group prone to commit more crimes than any other age group—was disproportionately large in the District. Similarly, our workload factors for fire protection—multifamily housing units and older housing units built prior to 1939—indicated that the District faced high costs related to providing fire protection services. Specifically, the District had disproportionately high numbers of older housing units, which are more prone to fires, and disproportionately high numbers of dense living conditions in multifamily units, another indication of the extent of fire services a jurisdiction must provide. The workload factors for police and fire protection suggested that the District’s costs of providing typical services in these areas were disproportionately higher than in most other jurisdictions. For several reasons, our analysis may understate what the District spends on police and fire services for residents. First, our factors may not fully capture the extent of police and fire demands or related costs in the District; a great deal of uncertainty exists as to whether some of our factors adequately measure demand for services or cost burdens. In addition, we believe these factors understated the District’s expenditure demands because they did not capture any costs related to services provided to the federal government.
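One way to picture how workload factors like those above feed a cost benchmark is as a weighted index of each factor's ratio to its national average. The sketch below is purely illustrative: the function name, factor values, and weights are all hypothetical assumptions, not the report's actual model.

```python
# Hypothetical sketch of a workload-based cost index for police services.
# Factor values and weights are illustrative assumptions only.

def workload_index(factors, national, weights):
    """Weighted average of each workload factor's ratio to its national
    average. An index of 1.0 means average cost conditions."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[k] * factors[k] / national[k] for k in factors)

# Illustrative inputs: murder rate 7x the national average, a modestly
# larger 18-24 population share, an average general population.
district = {"murder_rate": 7.0, "age_18_24_share": 1.3, "population": 1.0}
national = {"murder_rate": 1.0, "age_18_24_share": 1.0, "population": 1.0}
weights  = {"murder_rate": 0.5, "age_18_24_share": 0.25, "population": 0.25}

print(round(workload_index(district, national, weights), 3))  # → 4.075
```

With a heavily weighted murder-rate factor at seven times the national average, even this toy index lands well above 1.0, mirroring how such factors signal far-above-average cost conditions.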
For example, the factors do not adequately reflect increases in the District’s daily population due to tourists, college students, and other commuters, or the services related to the federal presence for which the District does not receive full reimbursement, such as protection for federal officials and dignitaries, special events, or demonstrations. Because these costs were not taken into account in our analysis, we believe the District is likely providing fewer police and fire protection services to residents.

The District Provides Significant Public Safety Services to the Federal Government, Likely Resulting in Less Spending on Services for Residents

As the nation’s capital, the District continually pays for expenses to support the federal government's presence, such as extra services for federal officials, including the President and Vice President, and for diplomatic personnel and foreign dignitaries who visit the city. It is also responsible for paying for services related to an array of special events and political demonstrations that often draw thousands of people, sometimes with short notice. The federal government routinely provides the District with special funding and other forms of assistance; however, it is unlikely that the federal government fully compensates the District for all expenses associated with the federal presence, meaning many related services provided by the District are funded with local money.

Assistance in Protection of Federal Officials and Dignitaries

Although the 1973 Home Rule Act requires the District, including the Metropolitan Police Department (MPD), to support federal agencies in providing protection to the President and Vice President as well as to foreign missions and embassies, the federal government does not routinely reimburse the District for these expenditures. District officials say this places a financial strain on their budget and could negatively affect the operations of public safety agencies.
The District’s Fire and Emergency Medical Services (FEMS) department also provides similar support to the federal government. Although the police and fire departments typically receive advance notification of federal protection needs from the U.S. Secret Service, they are sometimes notified the day of, or only hours prior to, an event, which adds costs by necessitating the shifting of employees, the calling up of employees to backfill positions, and the payment of overtime. It also makes it difficult to plan or budget for federally related expenses. For example, MPD reported to us that in fiscal year 2002 it incurred 3,240 police officer overtime hours related to providing protection to federal officials and dignitaries, at a cost of over $101,000. MPD operates a special dignitary protection unit that is solely responsible for assisting federal law enforcement agencies, such as the Secret Service, by providing police escort and protection for federal officials, such as the President and Vice President, as well as for key foreign dignitaries. For example, when the motorcade of a federal official, such as the President, or a key dignitary travels anywhere in the District, MPD is responsible for closing off streets, sending out scout cars in advance of the motorcade, and placing motorcycles beside and in front of the official cars; for the President, as many as 100 traffic posts are sometimes needed. MPD officials noted that they have no choice but to provide these services: because the District controls its streets, MPD must assist the federal agencies in providing protective services for motorcades that travel upon them, as would be the case in whatever jurisdiction these officials visited. Often the magnitude of the required duties exceeds the capacity of the dedicated unit; as a result, other MPD officers must be pulled from their regular duties, including policing District neighborhoods.
According to MPD, the key difference between the District and other jurisdictions is the extent of the protective duties. For example, District officials told us the President often leaves the White House several times a day, necessitating police and fire support, whereas he visits other jurisdictions, such as San Francisco, far less frequently. Similarly, FEMS regularly uses its resources to provide services to federal officials and dignitaries. For example, officials told us that a District emergency medical technician (EMT) unit is required to accompany the President whenever he travels within a 50-mile radius of the White House, as well as to the presidential retreat, Camp David, in Maryland. Further, FEMS is required to pre-inspect any District building where the President, Vice President, or a key dignitary is scheduled to appear.

Special Events and Demonstrations

District officials told us that special events and demonstrations also result in the District incurring costs funded with local dollars. Special events also affect police operations by diverting officers from their normal duties and by requiring costly overtime payments to officers called upon during their scheduled time off. In addition, MPD staff said that the department often does not have enough officers in its special events unit to provide all the necessary security for large events, so it must call up officers on leave or contract with officers from other jurisdictions. As the nation’s capital, the District is an attractive and preferred venue for demonstrations, protest rallies, and other special events, offering a high-profile setting, the potential for media coverage, and potential access to legislators and other government officials for individuals and organizations seeking national publicity.
Thus, the District frequently hosts numerous planned and unplanned special events whose costs often are not fully reimbursed by event organizers or the federal government. Although the District receives positive economic benefits from an influx of visiting demonstrators, protestors, and dignitaries, such as revenue from sales taxes in restaurants, hotels, and stores, it must also bear the financial burden of providing unbudgeted public safety services related to these events. A comparison of the District to our case study sites of San Francisco and Boston, both major international cities, suggested that the District’s expenses related to protection, special events, and demonstrations are disproportionately higher than those of the San Francisco and Boston police departments. For example, we collected data on overtime hours from several recurring special events in the District, Boston, and San Francisco and found that the District’s expenditures were roughly four to six times greater than those of the other cities. According to MPD officials, expenditures for the demonstrations resulting from the International Monetary Fund (IMF)/World Bank conferences represent the largest unreimbursed expenditures. An IMF/World Bank conference—and the resulting demonstrations—occurs at those organizations’ headquarters in the District twice in a fiscal year, usually during the spring and fall. District officials noted that the conference occurs in the District only because it is home to IMF and World Bank offices. MPD reported incurring over 116,800 police officer overtime hours at a cost of more than $5.7 million during the fall of 2002, and this figure did not include the costs of purchasing new equipment, such as security fencing, or wear and tear on existing equipment and automobiles. MPD officials told us they also had to contract for officers from other jurisdictions to provide added security.
MPD officials told us they estimated that the total costs of IMF/World Bank conferences could be as high as $14.8 million, but they did not provide documentation for this figure. The national Independence Day celebration on the National Mall serves as a key example of a large-scale, federally related special event that results in significant employee overtime expenses for the District. MPD officials told us that the U.S. Park Police (USPP)—which has jurisdiction over the National Mall, where the event is held—could not handle an event of this magnitude on its own. Because the National Mall is within the District’s boundaries, the District must assist in providing security and absorb the associated costs. On July 4, 2002, MPD activated 1,500 officers to work overtime to supplement USPP, and MPD brought in officers from other jurisdictions as well. MPD paid the officers from other jurisdictions for their services, but MPD officials told us the department received no reimbursement from the federal government. FEMS officials also provided extensive services during the Independence Day celebrations, including emergency medical technicians. A final example of the federal presence’s impact on the District involves MPD’s newly constructed state-of-the-art command center, which is intended to coordinate the law enforcement aspects of special events or emergencies, such as the IMF/World Bank conferences. MPD officials told us that their previous facilities were not sufficient to effectively manage such events, so they felt it necessary to construct a new facility at a total cost of nearly $7 million—all out of the District’s capital budget. The federal government has not provided financial support for constructing or maintaining the command center, but federal law enforcement agencies (e.g., the U.S. Secret Service, the Federal Bureau of Investigation, the U.S.
Capitol Police, and the USPP) nonetheless rely on the facility to coordinate and manage law enforcement responses to emergencies or large-scale special events within District boundaries. In the past, however, the federal government has provided some funding to MPD for other capital improvements to MPD facilities.

Effects of Increased Terrorist Threats

District public safety officials told us that in recent years the number of special events and demonstrations, along with the potential for violence and security threats during them, has increased, as have the security needs of federal officials and key dignitaries. Accordingly, District officials told us that unanticipated and unreimbursed expenditures have escalated. In addition, District officials told us that after the September 11, 2001, terrorist attacks—and the resulting national focus on enhanced homeland security preparedness and increased threats of additional terrorist attacks—their ongoing costs have escalated even more. Police and fire officials told us that since September 11 they have provided permanently higher levels of security and additional services to the federal government. The events of September 11 have also affected the security needs of special events and demonstrations, leading to increased costs for the District. For example, officials told us that, in 2002, expenses to ensure security were even higher for the national Independence Day celebration than in past years because of concerns about terrorist attacks on the National Mall. However, specific data are not available for this event and others.
Better Tracking of Costs Could Strengthen the District’s Case for Federal Reimbursement

The District’s current cost tracking processes do not provide officials in MPD or FEMS, or the District CFO, with reliable financial information to allow them to better estimate and budget for federally related expenditures, control overtime costs, or strengthen their cases for reimbursement from the federal government. In particular, the District is not collecting data and tracking all expenditures to determine the true total costs associated with its public safety programs and activities, putting the District at a disadvantage in capturing and recovering costs related to protection, special events, or demonstrations. MPD and FEMS do some tracking of personnel costs associated with large events, such as the IMF/World Bank conferences, as well as ongoing protection, but neither agency routinely tracks data regarding supplies, equipment, training, vehicle maintenance, and repair costs, so both are likely underestimating the full extent of expenditures related to federal protection, special events, and demonstrations. The absence of a rigorous cost tracking process in MPD and FEMS appears to have hindered their ability to determine the true costs of providing public safety and other services in support of the federal presence. For example, MPD data on special event overtime paid for federal holiday activities, such as Independence Day, are aggregated with all other holiday overtime. The quality, accuracy, and completeness of these data are also lacking. Recently, MPD and FEMS have attempted to improve tracking of costs associated with special events in response to direction from the District CFO’s Budget Office. For example, MPD reported that it now tracks special event overtime hours and associated costs by the respective police unit, and the District CFO’s Budget Office recently established a separate account to track actual expenditures for these events.
The Federal Government Has Provided Some Financial Assistance

Although it is unlikely that the federal government fully compensates the District for all related expenses, it has provided the District with special funding and other forms of assistance in recognition of the magnitude of public safety demands related to the federal presence. For example, the District recently received $16 million to compensate for expenses related to the demonstrations resulting from the IMF/World Bank conferences. However, District officials told us this level of funding would not be sufficient to cover many costs incurred by District agencies. Specifically, District officials claimed that each IMF/World Bank event might result in total costs, including personnel and equipment, of as much as $15 million, with two events scheduled to occur within a fiscal year—although the District was unable to provide documentation for this figure. The District received an additional $15 million in fiscal year 2003 for emergency planning and security enhancements. Further, in April 2003, as part of its urban security initiative, the Department of Homeland Security (DHS) awarded the District an additional $18 million; DHS also awarded funding to other major cities. In another key example, the Congress provided over $200 million to the District as part of the Defense Appropriations Act for fiscal year 2002 to improve emergency preparedness and the District’s capacity to deal with any terrorist attacks. This funding, which went to a number of District agencies, including MPD and FEMS, as well as non-District entities, was intended to help the District purchase equipment to respond to chemical or biological weapons, improve its public safety communications systems, improve emergency traffic management, and enhance training, among other things.
The District Continues to Defer Infrastructure Projects While Debt Pressures Remain

When forced to balance the budget while a structural imbalance exists, governments often choose to hold down debt by deferring capital improvements. The District has thus deferred infrastructure maintenance and new capital projects because of constraints within its operating budget. Contributing to the District’s difficulties are its legacy of an aging and deteriorated infrastructure, particularly in the schools, and its responsibility for maintaining a 40 percent share of the funding for the area’s metropolitan transit system. The District’s Chief Financial Officer (CFO) is actively managing the District’s debt, refinancing some bonds to reduce interest costs and issuing bonds backed by funds from the tobacco settlement. Nevertheless, the District cannot take on additional debt without cutting an already low level of services or raising taxes that are already higher than those of other jurisdictions, and so it has chosen to put off needed repairs to streets and schools and to postpone new construction that would improve the city’s infrastructure. In fact, our analysis shows that the District’s debt per capita ranks the highest when compared to combined state and local debt across the 50 states. The District operates with an aged and badly deteriorated infrastructure—antiquated school buildings, health facilities, and police stations; out-of-date and inadequate computer systems; and aging sewer systems—for which the District has been unable to fund the needed improvements. The District is, however, attempting to address its backlog of infrastructure needs, which, as several studies have noted, was long ignored throughout the 1970s, 1980s, and early 1990s. This legacy continues to exacerbate the current situation. The District’s level of spending for infrastructure repairs and improvements has increased steadily since 1995 and 1996, when virtually all major projects were deferred.
The reality, however, is that the District continues to defer major infrastructure repair, development, and capital acquisitions because of its budget and debt pressures, while the legacy of its long-neglected infrastructure persists. Our approach to analyzing the District’s infrastructure projects differed from the approaches used to address the other objectives in this report. Because of the variety of ways infrastructure projects are owned, managed, and reported by other jurisdictions, comparative information on infrastructure across states and local jurisdictions was not readily available; therefore, we did not compare the District’s infrastructure with that of states or other jurisdictions. Instead, we reviewed the data that the District makes available in its annual budget, financial plans, comprehensive annual financial reports, and other documents.

District Infrastructure Continues to Be Deferred

The District is deferring a significant number of capital projects by not funding or taking action on specific repairs and improvements to its infrastructure. For the 6-year period covering fiscal years 2003 through 2008, 115 projects were not approved for funding. These 115 projects represent about 43 percent of the total identified capital cost needs for fiscal years 2003 through 2008. Many of these capital projects affect the safety and health of citizens. Deferred public safety projects include, for example, renovation of the third and sixth police district buildings and a disaster vehicle facility. The District of Columbia Public Schools’ (DCPS) fiscal year 2003 deferred projects included the replacement of electrical systems and heating and cooling plants and the upgrade of fire alarms, intercoms, and master clocks. Deferred public health projects include asbestos abatement and lighting system retrofitting in local facilities.
Deferred transportation projects included rehabilitating bridges, paving alleys and sidewalks, and resurfacing streets. Deferred maintenance project costs for three agencies account for 79 percent of all deferred maintenance project costs for fiscal year 2003—DCPS accounts for about 34 percent, the Department of Transportation for about 30 percent, and the Metropolitan Police Department for about 15 percent. Table 7 lists the agencies and their deferred maintenance project costs for fiscal year 2003 and for the 6-year period fiscal years 2003 through 2008. See appendix IV for a detailed list of agency projects and funding requests that the District has deferred. The District’s Capital Improvement Plan (CIP) funding for fiscal years 2003 through 2008 is currently budgeted at $3.3 billion for a total of 229 projects. For fiscal year 2003, planned funding and expenditures total $881 million for projects such as school modernization, street repairs, roadway reconstruction, Metrobus replacement, equipment acquisition or leases, fire apparatus, and emergency communication systems. See table 8 for an overview of the District’s planned funding and expenditures for fiscal year 2003 and for the period fiscal year 2003 through fiscal year 2008. These amounts do not include the $371 million in deferred maintenance project costs from table 7 or an additional $51 million in other deferred project costs that were not approved in fiscal year 2003 due to budget concerns. In addition, the District estimates that the total amount of deferred projects not included in the plan for fiscal years 2003 through 2008 is approximately $2.5 billion. In many instances, new project requests require more financing than the District could afford to repay in future years. As shown in table 9, a total of 115 capital projects with a cost of about $422 million were deferred in fiscal year 2003.
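The deferred-project figures above can be tied together with a short arithmetic check (a sketch only; the percentage shares are the report's rounded values, so the derived dollar amounts are approximate):

```python
# Deferred capital projects, fiscal year 2003, in $ millions, per the report.
DEFERRED_MAINTENANCE = 371
OTHER_DEFERRED = 51
TOTAL_DEFERRED = DEFERRED_MAINTENANCE + OTHER_DEFERRED  # about $422 million

# Rounded agency shares of FY 2003 deferred maintenance costs (table 7).
shares = {
    "DC Public Schools": 0.34,
    "Department of Transportation": 0.30,
    "Metropolitan Police Department": 0.15,
}

# The three agencies together account for roughly 79 percent of the total.
top_three_share = sum(shares.values())

# Approximate dollar amounts implied by the rounded shares ($ millions).
approx_dollars = {agency: round(DEFERRED_MAINTENANCE * s)
                  for agency, s in shares.items()}
```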
District officials told us that, in an attempt to stay within spending affordability limits, they did not recommend these projects for funding even though some ranked high in priority in the CIP process. Of the $422 million in deferred projects for fiscal year 2003, $371 million was deferred maintenance, and the remaining $51 million represented other deferred projects. These projects will eventually need to be funded, possibly at a higher cost. Table 9 shows the approximate amount of funding that would have been required if all requested infrastructure projects had been approved for fiscal year 2003 and fiscal years 2003 through 2008. The category “other deferred infrastructure and acquisition projects” included 35 projects, at a total cost of about $51 million for fiscal year 2003 and about $345 million over the 6-year period fiscal years 2003 through 2008. As with deferred maintenance, these projects were not approved because they required more financing than the District could afford to repay in future years. (See table 10.)

District Debt Pressures Remain

There has been little change in the District’s outstanding general obligation debt, which totaled $2.67 billion as of September 30, 2002, except for a drop in 2001 attributable to the issuance of bonds backed by funds received from a multistate settlement with tobacco companies. Debt per capita has also remained fairly constant except for a dip as the tobacco bonds were issued. In contrast, with expenditures holding steady, debt service costs as a percentage of expenditures have increased. As a percentage of local general fund revenues, debt service costs, which were 7.3 percent of revenue for fiscal year 2002, are expected to climb to approximately 10 percent by 2006.
The District’s annual debt service for the fiscal year ended September 30, 2002, was $272 million, or approximately 7.3 percent of the local portion of general fund revenues, and the District’s projected debt service for fiscal year 2003 is about $304 million, or 8.3 percent of the local portion of projected general fund revenues. Although this level of debt service is well within the statutory limit of 17 percent of general fund revenues, issuing substantially more debt without a corresponding increase in general fund revenue or cuts in other areas of the budget would adversely affect the District’s debt ratios, its future ability to service its debt, and, consequently, its credit rating. The District funds capital projects primarily through the issuance of tax-exempt bonds. These bonds are issued as general obligations of the District and are backed by its full faith and credit. Several sources of funding for infrastructure and capital projects are presented in the capital budgets for fiscal years 2003 through 2008; however, only general obligation bonds and master equipment lease financing affect the annual operating budget. These funding sources require debt service payments, which include principal and interest and are paid from general fund revenues. General obligation bonds represent about 52 percent of the funding sources for the District’s capital plan for fiscal years 2003 through 2008. (See table 11.) Faced with decreasing revenues and a significant backlog of unfunded capital projects, the District is taking steps to reduce debt service costs.
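The debt service ratios cited above can be reproduced with a short calculation (a sketch only; the implied revenue figures are back-calculated from the reported percentages rather than taken from the District's financial statements):

```python
def debt_service_ratio(debt_service, revenue):
    """Debt service as a percentage of general fund revenues."""
    return 100.0 * debt_service / revenue

# Reported figures, in $ millions: FY 2002 debt service of $272 million was
# about 7.3 percent of local general fund revenues; projected FY 2003 debt
# service of $304 million is about 8.3 percent of projected revenues.
implied_revenue_2002 = 272 / 0.073  # roughly $3.7 billion
implied_revenue_2003 = 304 / 0.083  # roughly $3.7 billion

ratio_2002 = debt_service_ratio(272, implied_revenue_2002)
ratio_2003 = debt_service_ratio(304, implied_revenue_2003)

# Both ratios sit well inside the statutory cap of 17 percent of revenues.
STATUTORY_LIMIT_PCT = 17.0
```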
In February 2003, the District’s CFO testified that in the first quarter of fiscal year 2003, the District issued general obligation bonds to finance capital projects through a complex transaction that produced historically low interest rates and refinanced (refunded) outstanding general obligation bonds and certificates of participation at lower interest rates. According to the Deputy CFO, Office of Finance and Treasury (OFT), the District took advantage of market conditions in October 2002 and used an interest rate swap, resulting in an average interest rate of approximately 4 percent on a portion of the bonds. Another portion of the bonds was issued as variable-rate demand bonds, which the Deputy CFO reported allowed the District to benefit from extremely low interest rates (about 1.25 percent currently). The Deputy CFO also stated that OFT has continued to focus on issuing bonds based on actual capital spending needs (as opposed to its previous approach of planned spending levels), reducing the amount of unspent bond proceeds on hand and thereby reducing debt service expenses. District officials testified that these actions produced substantial debt service savings totaling about $20 million.

Total Outstanding General Obligation Debt

There was little change in the District’s total outstanding general obligation debt for the period 1995 through 2000, as shown in figure 5. The drop in outstanding debt in 2001 was attributable to the issuance of tobacco settlement bonds, the proceeds of which were used to defease approximately $482.5 million of the District’s outstanding general obligation bonds. As of September 30, 2002, the District’s outstanding general obligation bonds totaled $2.67 billion. (See fig. 5.)
Since fiscal year 1991, the District’s outstanding general obligation bonds have included balances related to the $331 million in deficit reduction bonds that the District issued in 1991 to eliminate that year’s operating deficit in its general fund. As a result, the District’s debt has included amounts used to cover operating expenditures. The District has continued paying debt service on those bonds in the intervening years. In fiscal year 2002, $38.9 million of the District’s $272.2 million in debt service expenditures went to cover principal and interest payments on the deficit reduction bonds issued in 1991. The District anticipates making the final payment on these bonds, in the amount of $39.3 million, in fiscal year 2003.

Debt Per Capita

Debt per capita measures the level of debt burden placed on each citizen of a state or city. Because citizens are ultimately responsible for financing the debt through payment of taxes, debt per capita is a good way to measure changes in a city’s debt load or to compare one city’s debt load with another’s. The District’s ratio of general obligation debt per capita was fairly constant from fiscal years 1995 through 1999. (See fig. 6.) General obligation debt per capita declined further in 2001 because of the reduction in outstanding general obligation debt through the issuance of the tobacco settlement bonds. District officials offered the following explanations for the current combination of high debt per capita and significant deferred capital needs: (1) high funding for education and the Washington Metropolitan Area Transit Authority (WMATA), (2) funding projects with lifetimes shorter than the terms of the bonds, (3) funding enterprise fund activities, and (4) funding services that are now being provided by the federal government.
The District’s largest authorization items over the past 18 years have been public schools (16.8 percent of total funding) and WMATA funding (12.0 percent of total funding). District officials also explained that the District had funded projects with lifetimes shorter than the terms of the bonds issued, as well as provided funding for the original convention center, WASA, the Washington Aqueduct, and public assisted housing. These activities now operate outside the District’s general fund. In addition, District officials identified past major events and circumstances that contributed to the present levels of long-term debt and deferred infrastructure projects, including the issuance of bonds in large amounts in fiscal years 1990, 1992, and 2002 for major authorization items such as public assisted housing and public education.

Expenditures Required to Service Outstanding Debt

From 1995 through 1998, the District’s debt service costs as a percentage of total general fund expenditures increased slowly, as shown in figure 7. Most of the increase was attributable to a steady increase in outstanding debt, while expenditures remained somewhat steady. However, from 1999 through 2001, the District’s debt service as a percentage of expenditures decreased substantially, due primarily to the defeasance of approximately $482.5 million in general obligation bonds through the issuance of tobacco settlement bonds. This trend was the result of a unique, one-time, permanent reduction in the District’s outstanding general obligation debt.

Revenue Available to Service Outstanding Debt

The most recent calculations show that, for 2002, the District’s debt service costs amounted to about 7.3 percent of general fund revenues, as shown in figure 8. Based on the District’s projections, debt service costs as a percentage of the local portion of general fund revenues are expected to climb steadily to approximately 10 percent by 2006.
The District’s projections assume that debt service costs will increase at a higher rate than local revenues. Like debt service costs as a percentage of expenditures, the District’s debt service expenditures as a percentage of revenue remained level through 1999, then decreased substantially in 2000 and 2001 (see figure 8). The decrease was due to the issuance of the tobacco settlement bonds discussed above, as well as an increase in general fund revenues over the same period.

Credit Ratings

During fiscal year 1995, the District’s general obligation debt was downgraded by all three rating agencies to “below-investment-grade,” or “junk bond,” levels. Since 1998, with the District’s financial recovery, each rating agency has issued a series of upgrades to the District’s bond rating. The upgrades that occurred in 1999 raised the District’s ratings back to “investment grade” levels. The upgrades by the rating agencies made the District’s bonds more marketable, resulting in a lower cost of capital for the District. The District continues to pursue the goal of having its credit rating raised to the “A” level. In October 2002, the bond rating agency Fitch IBCA, Inc., reviewed its rating for the District and reported that the BBB+ long-term general obligation bond rating reflects the sound financial cushion the District has built up over the last several years and its demonstrated ability to respond quickly and effectively to funding shortfalls and unexpected expenditure needs while still strengthening reserves; however, the District’s debt levels remain high and its capital needs are substantial. While the District has seen significant improvement in its credit ratings over the last couple of years, its Baa1 rating from Moody’s places the District in the lowest tier among 35 U.S. cities. (See fig. 9.)
Selected District Debt Statistics Compared to Other Jurisdictions

Our analysis shows that the District’s debt per capita ranks the highest when compared to combined state and local debt across the 50 states. The District funds many infrastructure projects that in other U.S. cities would be financed either in part or in whole by state governments. For this reason, we analyzed U.S. Census Bureau (Census) data that combine debt issued by a state government with debt issued by all local governments within that state. The resulting debt per capita figure provides a complete picture of the debt burden for a state and all cities and municipalities within it. From the Census data, we analyzed the portion of long-term debt that is backed by the full faith and credit of the government entity issuing the debt; this portion of long-term debt is supported solely by the taxing authority of the issuing entity. Based on the Census data for all 50 states and the District of Columbia, the District shows the highest debt per capita, at $6,501. It is important to note that the Census figure for the District’s “full-faith and credit debt outstanding” as of April 2000 is significantly higher than the District’s audited balance of general obligation debt as of September 30, 2000. Therefore, we also included an “adjusted” level of debt to reflect the lower, audited general obligation debt level. Even using this lower audited debt level, the District still ranked highest in debt per capita when compared to the 50 states. Based on the Census data, debt per capita in the other states ranges from a low of $173 (Oklahoma) to a second-highest figure of $4,348 (for both Hawaii and Connecticut). The median debt per capita is $1,462, and the average is $1,812. (See table 12.)
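The per-capita ranking described above can be illustrated with a few of the reported figures (a sketch only; just the jurisdictions named in the text are included, and the median comes from the full 51-entry Census dataset, which is not reproduced here):

```python
# Census-based combined state and local full-faith-and-credit debt per
# capita, in dollars, for the jurisdictions named in the text.
reported = {
    "District of Columbia": 6501,
    "Hawaii": 4348,
    "Connecticut": 4348,
    "Oklahoma": 173,
}
MEDIAN = 1462  # median across all 50 states plus the District

# The District tops the ranking, at roughly 4.4 times the national median.
highest = max(reported, key=reported.get)
multiple_of_median = reported["District of Columbia"] / MEDIAN
```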
We also compared the District’s outstanding debt burden to that of the 50 state fiscal systems in terms of debt as a percentage of own-source revenue capacity for fiscal year 2000, using our own range of estimates of that capacity. Our results show that the District’s debt is larger relative to the resources it has available to repay it than that of any state fiscal system. (See the last two columns of table 12.) We estimated that the District’s outstanding debt was equal to between 114 percent and 129 percent of the District’s own-source revenue capacity in fiscal year 2000. Both of these percentages were higher than those of any state fiscal system and well above the state median of 38 percent. | District officials have recently reported both a budget gap and a more permanent structural imbalance between costs and revenue raising capacity. They maintain that the structural imbalance largely stems from the federal government's presence and restrictions on the District's tax base. Accordingly, at various times District officials have asked the Congress for additional funds and other measures to enhance revenues. In a preliminary September 2002 report, GAO concluded that the District had not provided sufficient data and analysis to discern whether, or to what extent, it is facing a structural imbalance. At that time, GAO also agreed to perform a more comprehensive analysis and was asked to (1) determine whether, or to what extent, the District faces a structural imbalance between its revenue capacity and its public service responsibilities, (2) identify any significant constraints on the District's revenue capacity, (3) discuss factors beyond the control of District officials that influence the District's spending in key program areas as well as factors within its control, such as management problems, and (4) report on the District's deferred infrastructure projects and outstanding debt service and related expenses that might be affected by a structural imbalance. 
The District concurred with our key findings. GAO used a multifaceted approach to measure structural imbalance, which GAO defines as a fiscal system's inability to fund an average level of public services with the revenues it could raise at an average level of taxation, plus the federal aid it receives. This approach compared the District's circumstances to a benchmark based on the average spending and tax policies of the 50 state fiscal systems (each state and its local governments), adjusted to take into account circumstances that are beyond the control of state and local government officials (e.g., the number of school-age children and the value of tax bases). GAO supplemented this analysis with reviews of the District's key programs to provide insights on factors influencing spending, and reviewed deferred infrastructure projects and outstanding debt. The cost of delivering an average level of services per capita in the District far exceeds that of the average state fiscal system due to factors such as high poverty, high crime, and a high cost of living. The District's per capita total revenue capacity is higher than that of any state fiscal system, but not to the same extent that its costs are higher. In addition, its revenue capacity would be larger without constraints on its taxing authority, such as its inability to tax federal property or the income of nonresidents. The District faces a substantial structural deficit in that the cost of providing an average level of public services exceeds the amount of revenue it could raise by applying average tax rates. Data limitations and uncertainties surrounding key assumptions in our analysis made it difficult to determine the exact size of the District's structural deficit, though it likely exceeds $470 million annually.
Consequently, even though the District's tax burden is among the highest in the nation, the resulting revenues plus federal grants would be sufficient to fund only an average level of public services, and only if those services were delivered with average efficiency. The District's significant management problems in key programs waste resources and make it difficult to provide even an average level of services. Examples include inadequate financial management, billing systems, and internal controls, resulting in tens of millions of dollars being wasted, and hindering its ability to receive federal funding. Addressing management problems would not offset the District's underlying structural imbalance because this imbalance is determined by factors beyond the District's direct control. However, addressing these management problems would help offset its current budget gap or increase service levels. The District continues to defer major infrastructure projects and capital investment because of its structural imbalance and its high debt level. These two factors make it difficult for the District to raise taxes, cut services, or assume additional debt. Although difficult, District officials could address a budget gap by taking actions such as cutting spending, raising taxes, and improving management efficiencies. In contrast, a structural imbalance is largely beyond District officials' direct control. If this imbalance is to be addressed, in the near term, it may be necessary to change federal policies to expand the District's tax base or to provide additional financial support. However, given the existence of structural imbalances in other jurisdictions and the District's significant management problems, federal policymakers face difficult choices regarding what changes, if any, they should make in their financial relationship with the District.
Background For decades, animal dealers have been providing dogs and cats to scientific researchers. Within this broader group, random source Class B dealers are those who provide dogs and cats that they obtain from pounds, shelters, auction sales, or owners who breed the animals on their premises. For certain research, some attributes of random source dogs and cats are considered useful and desirable, such as particular physical or genetic characteristics or the presence of specific diseases or conditions. For example, according to a study conducted by the National Research Council and information from the National Association of Biomedical Research, random source dogs tend to be 2 years or older, tend to weigh from 60 to 80 pounds, and may be of mixed breeds. These attributes make them useful for cardiovascular, pulmonary, orthopedic, and age-related studies. Random source cats are considered useful for neurological and cardiovascular research and studies on respiratory diseases and the immune system. In addition, random source dogs and cats are considered useful for the study of certain naturally occurring infectious diseases, such as Lyme disease and heartworm, or as animal models for human diseases, such as sleep apnea and muscular dystrophy. AWA provisions cover a variety of animals, including any live or dead dog, cat, nonhuman primate, guinea pig, hamster, or rabbit to be used for research, testing, or exhibition, or to be kept as a pet. AWA requires businesses or individuals covered by the law to be licensed or registered and to uphold minimum standards of care set in regulation. Licensing and registration under AWA are based on broad business categories, including animal dealers, animal exhibitors, animal carriers, and research facilities. Animal dealers and exhibitors are required to be licensed, while animal carriers and research facilities are required to be registered.
There are two types of licenses for dealers—Class A and Class B—and one type of license for exhibitors—Class C. Class A licenses are specifically for animal dealers who only deal in the animals they breed and raise, while Class B licenses are for all other types of dealers and include the purchase and resale of any animal covered by AWA. Class C licenses are for businesses or individuals whose business involves displaying animals to the public. According to APHIS information, in fiscal year 2009 there were a total of 9,530 facilities licensed or regulated under AWA, which consisted of 3,898 Class A dealers, 1,031 Class B dealers, 2,732 Class C exhibitors, and 1,257 research facilities, among others. The APHIS Animal Care program administers the requirements of AWA and its implementing regulations. APHIS Animal Care undertakes a variety of AWA regulatory activities, such as the licensing and registration of facilities with animals covered by the act, unannounced compliance inspections of licensed and registered facilities, and investigating public complaints. In addition, Animal Care administers activities related to the Horse Protection Act and has a role in planning and coordinating disaster response efforts for household pets. In fiscal year 2010, the Animal Care program had an annual budget of approximately $22.5 million and a staff of 209, including about 100 field inspectors, who report to either of two APHIS Animal Care regional offices. The APHIS field inspector cadre is about evenly divided between veterinary medical officers, who hold veterinarian degrees, and animal care inspectors, or technicians. Eight field inspectors are assigned to random source Class B dealers, and five of them are veterinary medical officers. Inspecting regulated licensees’ and registrants’ facilities, which include random source dealer facilities, is the primary way APHIS Animal Care ensures compliance with AWA. 
All inspections are unannounced, and generally the owner or manager of a facility accompanies the inspector during the inspection. The time required to conduct inspections varies and is affected by facility size and the number of regulated animals involved, among other things. The inspection process consists primarily of two parts—a physical inspection and a records inspection. During the physical inspection, the inspector observes and documents the condition of the facility and the animals in the facility to ensure the dealer is adhering to AWA. The physical inspection may also involve the inspection of transportation devices, such as vehicles and shipping containers for animals, if necessary. During the records inspection, inspectors review records that dealers are required to maintain to ensure they are accurate and complete for all animals the dealers have obtained or sold. Random source Class B dealers are generally required to comply with the same regulations as other licensed Class A or B dealers. As such, when APHIS inspectors inspect these dealers, they are to ensure that these dealers, like all other dealers, are providing appropriate and adequate veterinary care; properly tagging or identifying animals; maintaining accurate records; and complying with standards of humane care, treatment, handling, and transportation of animals. However, APHIS guidance imposes additional controls on random source Class B dealers, and inspectors are directed to (1) perform quarterly facility inspections, which are more frequent than for any other dealers, and (2) use dealer records to conduct tracebacks by tracing a particular dog or cat back to the source from which a dealer obtained the animal, both to verify the legitimacy of the sale and ensure the dog or cat was not lost or stolen. APHIS determined that more frequent inspections were required for random source Class B dealers because they pose a higher risk than other types of licensees.
APHIS inspectors are to conduct tracebacks within 30 days after each inspection by tracing some of the dogs or cats a random source Class B dealer obtained back to their sources. Specifically, an inspector randomly selects 4 to 10 of the dogs and cats acquired by the dealer since the last quarterly inspection. Inspectors then are to use the dealer’s records on these selected animals to conduct tracebacks by either (1) visiting the seller listed on the dealer records or (2) if the seller is a pound, shelter, another licensed dealer, or an individual already known to the inspector, contacting them by telephone. During the visit or telephone call, inspectors are to obtain specific information from the seller to determine if the sale was from a legitimate source. For example, if an inspector conducted a traceback on a dog sold by an individual to a random source Class B dealer, the inspector would attempt to confirm that the dog was bred and raised on the individual’s premises, as the regulations require. Once the traceback information is obtained, the inspector completes a traceback worksheet form, documents the traceback result, and forwards the completed form to the appropriate APHIS regional office. If, however, an inspector is unable to perform a traceback because the seller is outside of the inspector’s geographic area, the inspector sends the incomplete traceback to the appropriate APHIS regional office. The traceback is then ultimately referred to an inspector who has responsibility for the area in which the seller is located. In these cases, inspectors are directed to complete referred tracebacks within 30 days of receiving the traceback request. Recently, the USDA Office of Inspector General completed an audit of the APHIS Animal Care program and reported several concerns related to APHIS inspections and enforcement.
The Inspector General’s May 2010 report found, among other things, that APHIS was ineffective in dealing with problematic dealers and that some inspectors did not cite or document violations properly. The report primarily focused on Class A dealers who breed and sell dogs. The Inspector General chose these dealers in part for their large facility size and the number of violations, or repeat violations, that they received during fiscal years 2006 through 2008. According to Inspector General officials, no random source Class B dealers were included in this study. APHIS concurred with the findings and recommendations in the Inspector General’s report and has taken several actions to respond to the recommendations. Among them, the agency developed an Enhanced Animal Welfare Act Enforcement Plan in May 2010, which provides details on how the agency plans to focus its enforcement efforts on problematic dealers and improve inspector performance, such as by providing additional training and guidance to inspectors and their supervisors. APHIS also provided a new Inspection Requirements Handbook during the April 2010 national meeting it held with all of its Animal Care inspectors and regulatory staff in anticipation of the Inspector General’s report, along with training on inspection enforcement and consistency. Additionally, APHIS redirected funding in June 2010 to provide an extra $4 million to help implement steps in the enforcement plan and proposes using this funding to, for example, hire additional Animal Care inspectors and supervisors (up to 60 additional personnel total). Though none of these actions were explicitly directed at random source Class B dealers, the new handbook contains some relevant supplemental information, including the previously released July 2009 Standard Operating Procedures for Conducting Tracebacks from Random Source B Dealers, which generally directs that tracebacks be conducted within 30 days of an inspection. 
Nine Class B Dealers Provide Random Source Dogs and Cats for Research, Far Fewer Than in Recent Decades As of July 2010, there were 9 Class B dealers licensed by APHIS to sell random source dogs and cats for research. This number has changed little since the end of fiscal year 2005, when APHIS reported there were 10 active random source dealers. Eight of the 9 active dealers are in the APHIS Eastern Region, and 1 is in the APHIS Western Region. Overall, the number of random source Class B dealers has fallen by over 90 percent since the early 1990s, when there were over 100 such dealers licensed by APHIS. APHIS officials attributed the decline to several factors, although they said the agency has not performed a detailed study of this matter. These factors include (1) the reduced use of random source dogs and cats by research institutions due to new technologies and computer modeling; (2) increased pressure from animal advocacy organizations to use purpose-bred dogs and cats for research; and (3) APHIS’s oversight and issuance of citations for AWA violations, which has led some dealers to leave the business. The use of dogs and cats in research has dropped significantly over the last 30 years. According to academic and industry association information, this general decline may be due to several factors, which include the development of nonanimal research methods, such as computer models. According to APHIS information, the largest number of dogs and cats used in research was in fiscal year 1976—nearly 280,800 dogs and cats total. Since that year, the use of dogs and cats in research has generally declined, to less than 100,000 per year from fiscal years 1999 to 2007. In fiscal year 2008, the total number of dogs and cats used in research was about 101,700 animals—a decrease of nearly 64 percent from 1976. 
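The reported decline in research use can be verified directly from the totals cited above; a minimal arithmetic check, using the APHIS figures as stated in this report:

```python
# APHIS-reported totals of dogs and cats used in research (figures cited above)
peak_fy1976 = 280_800   # largest reported total, fiscal year 1976
total_fy2008 = 101_700  # reported total, fiscal year 2008

decline = (peak_fy1976 - total_fy2008) / peak_fy1976
print(f"Decline, FY1976 to FY2008: {decline:.1%}")  # 63.8%, i.e., "nearly 64 percent"
```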
Moreover, the number of random source dogs and cats used in research is relatively small based on APHIS data collected from November 2007 to November 2008, a period roughly covering fiscal year 2008. These data showed that the total number of dogs and cats sold for that period by random source Class B dealers to research facilities was 3,139 animals (2,863 dogs and 276 cats), which was equivalent to about 3 percent of the total dogs and cats used in research in fiscal year 2008. APHIS Inspections Have Found Numerous Dealer Violations, but APHIS Has Not Completed All Tracebacks or Fully Analyzed Traceback Data APHIS inspection reports documented one or more violations by seven of the nine random source Class B dealers from fiscal years 2007 through 2009. Additionally, about 29 percent of tracebacks APHIS conducted during this period were either unsuccessful or had not been completed as of June 2010, as directed by agency guidance. The agency does not fully use the traceback information it collects, and thus cannot ensure it is detecting problems with the process. During Fiscal Years 2007 to 2009, About One-Third of Inspection Reports Reviewed Cited Violations, and Seven of the Nine Dealers Had One or More Violations Our review of all APHIS inspection reports from fiscal years 2007 through 2009 indicates that the agency has generally inspected, or attempted to inspect, each of the random source Class B dealers at least four times a year, as called for in APHIS guidance, and has documented numerous violations among the dealers. According to APHIS guidance, when conducting an inspection, inspectors are to examine the condition and cleanliness of the dealer facility and the condition of the dogs and cats present, among other things. Inspectors also are to review dealer records pertaining to the acquisition and disposition of animals. 
For example, according to APHIS guidance, inspectors are to determine if a dealer’s records include items required in agency regulations such as (1) the name and address of the person from whom a dog or cat was purchased by the dealer; (2) the vehicle license number and state, and the driver’s license number and state, of any person not licensed or registered under AWA; (3) the official USDA tag number or tattoo assigned to a dog or cat; (4) a description of each dog or cat, which includes certain specific information, such as breed, color, and distinct markings; and (5) certifications from any person not licensed, other than a pound or shelter, that any dogs or cats provided to the dealer were born and raised on that person’s premises. Overall, 54 of the 156 inspection reports from fiscal years 2007 through 2009 cited at least one dealer violation, and seven of the nine dealers had one or more violations during this period. The most common violation involved the dealer being absent when the inspector attempted to perform an inspection during normal business hours. Five dealers were cited for this violation in 23 inspection reports. The second most common violation was for problems with the condition of animal housing, such as excessive rust, peeling paint, or exposed sharp edges. Five dealers were cited for this violation in 14 inspection reports. Other violations included inadequate veterinary care (six dealers cited in 10 reports), poor recordkeeping (five dealers cited in 10 reports), and insufficient cleaning of kennels or cages (three dealers cited in 6 reports). As of July 2010, several of these dealers were under further investigation by APHIS in light of repeated violations and could be subject to fines or even license revocation in the future, depending on the severity or history of violations. 
Some APHIS Tracebacks for Verification Were Unsuccessful or Incomplete in Fiscal Year 2009, and APHIS Has Not Fully Used Its Traceback Data APHIS has performed tracebacks to verify the records of random source Class B dealers since fiscal year 1993, but it only recently started to compile traceback information using electronic spreadsheet logs. Prior to fiscal year 2009, the agency was not compiling traceback data. APHIS officials said that they began this effort in fiscal year 2009 in order to track traceback results more thoroughly, ensure all tracebacks were being completed, and follow up on tracebacks that were unsuccessful. Information in the traceback logs comes from inspectors, who send a form documenting the results of each traceback to the appropriate regional office. We reviewed the information in APHIS’s fiscal year 2009 traceback logs, as well as the individual forms from selected tracebacks. We found that APHIS attempted a total of 326 tracebacks in fiscal year 2009. As of June 2010, the data in APHIS’s traceback logs showed that APHIS was able to successfully trace a dog or cat back to a legitimate source in 231 of the 326 traceback cases, or about 71 percent of the time. Of the remaining tracebacks, 53, or about 16 percent of the total, were unsuccessful, generally meaning that inspectors (1) could not locate the source based on the address information they obtained from dealer records or (2) determined the source was not legitimate (for example, the dealer purchased the dog or cat from an individual who had not bred and raised the animal as required by regulation). The other 42 tracebacks, or about 13 percent of the total, had not been completed. In those instances where an inspector determined a traceback was unsuccessful, APHIS Animal Care forwarded the cases to APHIS’s Investigative and Enforcement Services for further investigation and potential enforcement action. 
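The fiscal year 2009 traceback breakdown above can be cross-checked against the reported total; a small sketch using the counts as stated:

```python
# Fiscal year 2009 traceback outcomes from APHIS's regional logs (counts cited above)
total_tracebacks = 326
outcomes = {"successful": 231, "unsuccessful": 53, "incomplete": 42}

# The three outcome categories should account for every attempted traceback.
assert sum(outcomes.values()) == total_tracebacks

for outcome, count in outcomes.items():
    print(f"{outcome}: {count} ({count / total_tracebacks:.0%})")
# successful ~71%, unsuccessful ~16%, incomplete ~13%
```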
APHIS officials said that they did not find any documented cases of lost or stolen dogs or cats being purchased by random source dealers via APHIS’s traceback efforts in fiscal year 2009. Because APHIS does not analyze the data in its traceback logs, it cannot systematically detect problems with its tracebacks. Although APHIS’s traceback guidance states that tracebacks should generally be completed within 30 days of a random source Class B dealer inspection, as of June 2010, 42 tracebacks from fiscal year 2009 remained incomplete. Furthermore, preliminary fiscal year 2010 APHIS traceback data show that, as of June 2010, 47 tracebacks that were already about 60 days beyond APHIS’s traceback time frames had not been completed. According to APHIS’s guidance, “all tracebacks must be completed within 30 days of the inspection of the random source B dealer, or for referred tracebacks, within 30 days of the time the traceback request is received.” APHIS regional officials noted several factors that can sometimes hinder timely completion of tracebacks, such as competing priorities, limited resources, dealers not obtaining valid addresses from individuals, the logistics of tracking down individuals between APHIS regions, and having to obtain traceback information from more than one dealer. However, APHIS officials are not examining the log information for indications of any root causes of the delays that they could address, such as whether these incomplete tracebacks consistently involved the same sellers or inspectors. Without thoroughly analyzing its traceback data, APHIS cannot consistently detect problems and take all available steps to ensure random source Class B dealers are obtaining dogs and cats from legitimate sources. APHIS regional officials stated that it would be prudent to examine incomplete tracebacks more closely and, for example, obtain quarterly reports on their status to better manage them.
We also found three instances where an inspector traced a dog back to another random source Class B dealer and then concluded all traceback efforts, which is contrary to APHIS’s traceback guidance. According to this guidance, in such instances, an inspector should continue the traceback process using the second random source dealer’s records to trace the dog or cat back to the seller listed on this second dealer’s records. However, in these three instances, each traceback form noted that the inspector only conducted the tracebacks as far as the random source Class B dealer; in these cases, the traceback still needed to continue back to the seller for “full verification.” During our discussions with APHIS regional officials regarding these tracebacks, they agreed that the traceback process should have continued according to APHIS traceback guidance. As with the incomplete tracebacks, APHIS cannot ensure it detects such problems or patterns among dealers or inspections, whether from the traceback forms or the traceback logs, unless it thoroughly analyzes its traceback data. APHIS Does Not Collect Data on the Cost of Its Oversight of Specific Classes of Dealers, or Others It Inspects, Including Random Source Class B Dealers According to APHIS officials, the agency does not collect cost information for its oversight of the specific classes of dealers and exhibitors, or others it inspects, including random source Class B dealers. Furthermore, APHIS officials also told us the agency does not currently have a mechanism in place to determine these costs. For example, APHIS inspectors do not currently record their time by specific oversight activity or class of dealer. As a result, the only current cost information APHIS can provide for any dealers, as well as others it inspects and oversees, is an estimate of the average cost of inspections overall. APHIS estimated that this average cost for fiscal year 2009 was $1,337 per inspection. 
According to APHIS officials, the average inspection cost is estimated by taking the Animal Care program’s annual appropriation, less certain administrative costs, and dividing it by the total number of inspections conducted for the fiscal year. However, the wide variety of inspections APHIS conducts, which includes dealers of various types and sizes, research facilities, zoos, and animal petting farms, limits the usefulness of this information. USDA has reported in previous years on the cost of agency oversight of random source Class B dealers. This information—provided to Congress in April 2006 and June 2009—gave oversight costs related to the regulation of random source Class B dealers for fiscal years 2005 and 2008. In April 2006, at the request of the Senate and House Appropriations Committees, the Secretary of Agriculture reported that the fiscal year 2005 cost of inspections and enforcement for these dealers was an estimated $270,000. This estimate also included $154,400 for two special enforcement and traceback projects that occurred that year. In June 2009, based on a request by a Member of Congress, the Acting APHIS Administrator reported information on APHIS’s regulation of random source Class B dealers. Included in this information, the Acting Administrator reported an estimated fiscal year 2008 oversight cost of approximately $309,000. However, the fiscal year 2008 amount was based on the previously reported fiscal year 2005 oversight cost figure, adjusted for cost-of-living increases, and incorrectly included the two fiscal year 2005 special project costs, which occurred only in fiscal year 2005. APHIS Animal Care officials explained that including the fiscal year 2005 special project costs in the fiscal year 2008 estimate occurred due to a lack of communication between APHIS Animal Care staff and APHIS Budget and Program Analysis staff and that those 2005 costs should not have been included in the 2008 estimate. 
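APHIS's average-cost estimate follows the simple formula described above: annual appropriation, less certain administrative costs, divided by the number of inspections. The sketch below illustrates only the calculation; the appropriation, administrative-cost, and inspection-count inputs are hypothetical assumptions, since the report does not provide APHIS's underlying fiscal year 2009 figures:

```python
def average_inspection_cost(appropriation: float, admin_costs: float,
                            inspections: int) -> float:
    """Annual program appropriation, less certain administrative costs,
    divided by the total inspections conducted that fiscal year."""
    return (appropriation - admin_costs) / inspections

# Hypothetical inputs for illustration only -- not APHIS's actual figures.
cost = average_inspection_cost(appropriation=22_500_000,
                               admin_costs=2_500_000,
                               inspections=15_000)
print(f"Estimated average cost per inspection: ${cost:,.0f}")
```

As the report notes, such a single program-wide average blends very different inspection types, which limits its usefulness for managing any one class of dealer.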
To prevent a recurrence, APHIS officials said they are developing an internal standard operating procedure for reporting and communicating consistent and accurate Animal Care data that will include the key staff involved with this area. APHIS plans to have the procedure in place in early fiscal year 2011. APHIS Animal Care officials said they do not know how the fiscal year 2005 cost estimate for the agency’s oversight of random source Class B dealers was calculated and that they are unable to reconstruct or update this estimate. In addition, these officials said they are unable to develop a current estimate for these costs because they lack the necessary data. Federal internal control standards call for agencies to obtain, maintain, and use relevant, reliable, and timely information for program oversight and decision making, as well as for measuring progress toward meeting agency performance goals. Furthermore, Office of Management and Budget guidance directs agency managers to take timely and effective action to correct internal control deficiencies. APHIS’s lack of an accurate means of collecting and tracking oversight costs by activity and dealer, exhibitor, and any other entity type that APHIS inspects constitutes an internal control weakness and leaves the agency without an important management tool. For example, three inspectors we interviewed suggested that some random source Class B dealers may not require as many as four inspections per year because these dealers either have experienced few, if any, reportable violations over a period of years or handle very few animals. In addition, as discussed, USDA’s Inspector General has reported a number of serious problems with APHIS’s oversight of other types of dealers, and recently APHIS determined that it will put more emphasis on, and provide additional resources for, enforcement oversight.
Considering these and other potential factors, if APHIS had reliable and timely information on its oversight costs by activity and entity type, the agency would be in a better position to develop a business case for making changes to its oversight program that could allow it to use its limited resources more efficiently and effectively. Conclusions The number of random source Class B dealers has declined to 9 from more than 100 in the early 1990s. Tracebacks play an important role in APHIS’s oversight of random source Class B dealers and help the agency ensure that these dealers obtain dogs and cats from legitimate sources. APHIS recently began tracking the results of tracebacks. Our review of APHIS’s data revealed that about 13 percent of the tracebacks in fiscal year 2009 were incomplete, and preliminary APHIS data from fiscal year 2010 confirmed that incomplete tracebacks are continuing. Additionally, we found that by not analyzing traceback data, the agency is not yet making full use of the new traceback information it is collecting. Without analyzing this information—for example, by determining whether the same sellers or inspectors were consistently involved in late or incomplete tracebacks—APHIS cannot ensure it is detecting problems in a timely manner and that tracebacks are conducted according to the agency’s guidance, which would reduce the potential that lost or stolen dogs or cats could be used in research. In addition, having accurate, consistent, and reliable oversight cost data for the APHIS Animal Care program is a key element in managing the program effectively and enforcing AWA. Without such data, APHIS is not employing one of the standards of federal internal control. Currently, APHIS has not determined what data it needs to estimate costs, or how best to collect that information, and thus cannot reasonably know the cost of its oversight of random source Class B dealers, as well as that of the other entities the agency regulates under AWA.
Without this information, APHIS cannot track specific oversight costs and cannot help management identify trends in its operations, including inspections and tracebacks on random source Class B dealers. In addition, not collecting and analyzing accurate and reliable oversight cost data prevents APHIS from developing a business case for changing its oversight program, if needed, and does not provide reasonable assurance that the agency’s resources are being used effectively and efficiently to enforce AWA and its implementing regulations. Recommendations for Executive Action To improve APHIS’s oversight of random source Class B dealers who purchase dogs and cats for research, we recommend that the Secretary of Agriculture direct the Administrator of APHIS to take the following two actions: Improve the agency’s analysis and use of the traceback information it collects, such as by determining whether the same sellers or inspectors were consistently involved in late or incomplete tracebacks, and ensure it is taking all available steps to verify that random source Class B dealers are obtaining dogs and cats from legitimate sources, including making certain that tracebacks are completed in a timely manner and conducted according to APHIS guidance. Develop a methodology to collect and track the oversight costs associated with the specific classes of dealers, and others the agency inspects, including random source Class B dealers, in order to identify potential problems requiring management attention and develop a business case for changing this oversight, if appropriate, to more efficiently use available resources. Agency Comments and Our Evaluation We provided a draft of this report to USDA for review and comment. In written comments, which are included in appendix II, USDA agreed with the report’s recommendations.
Regarding the first recommendation, USDA stated that APHIS will develop (1) a database to help manage and analyze information associated with tracebacks and (2) a process to ensure tracebacks are complete and finished in a timely manner. USDA said it would complete these actions by December 31, 2010. Regarding the second recommendation, USDA stated that APHIS will develop an information management system to assist APHIS Animal Care managers in managing and analyzing information collected from field operations, determining associated costs, and measuring work performance. USDA estimated it would complete this action by June 30, 2011. USDA did not provide any suggested technical corrections. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine (1) the number of Class B dealers that sell random source dogs and cats for research; (2) the extent to which the U.S. Department of Agriculture’s (USDA) Animal and Plant Health Inspection Service (APHIS) conducts inspections of these dealers and verifies the accuracy of their records; and (3) the costs associated with APHIS’s oversight of these dealers compared with its costs for oversight of other types of dealers. 
To determine the number of Class B dealers that sell random source dogs and cats for research, we reviewed APHIS documents, such as prior agency annual reports, and USDA and APHIS information previously reported to the Senate and House Appropriations Committees in April 2006 and to an individual Member of Congress in June 2009. We also interviewed APHIS Animal Care officials at their headquarters in Riverdale, Maryland, and their two regional offices in Raleigh, North Carolina, and Fort Collins, Colorado, regarding how a Class B dealer is designated as a random source dealer, the number of Class B dealers that sell random source dogs and cats, and what accounted for any changes in these dealers’ numbers both historically and since the end of fiscal year 2005. Additionally, we reviewed a 2009 National Research Council report on random source dogs and cats for (1) information on the history of Class B dealers of random source dogs and cats, (2) background regarding the use of these animals in research, and (3) APHIS data in the study on the number of random source dogs and cats sold from November 2007 to November 2008. We also obtained information from APHIS regarding the overall number of dogs and cats used in research as reported to the agency from research facilities. To determine the extent to which APHIS conducts inspections of these dealers and verifies the accuracy of their records, we reviewed the Animal Welfare Act, APHIS regulations, and guidance applicable to random source Class B dealers, such as APHIS’s Standard Operating Procedures for Conducting Tracebacks from Random Source B Dealers and its Dealer Inspection Guide. We reviewed APHIS inspection reports for the nine current random source Class B dealers from fiscal years 2007 through 2009 and examined any violations APHIS inspectors recorded in each of the 156 inspection reports prepared during this period to obtain an understanding of the types of violations cited for these dealers. 
Using the inspection report dates, we also determined whether APHIS followed its guidance and inspected the nine current random source Class B dealers a minimum of four times each year. Additionally, we obtained information on APHIS's fiscal year 2009 traceback efforts—an oversight process unique to random source Class B dealers. Tracebacks involve APHIS inspectors using a dealer's records to trace a particular dog or cat back to the source where that dealer obtained the animal, both to verify the legitimacy of the sale and to ensure the dog or cat was not lost or stolen. To determine if the fiscal year 2009 APHIS traceback information maintained in automated spreadsheets by the APHIS Eastern and Western Regional Offices was reliable for the purposes of our review, we conducted a data reliability assessment of it. Specifically, to ensure the validity and reliability of these data, we reviewed key data elements from (1) all 36 of the tracebacks listed on the Western Regional Office traceback spreadsheet and (2) a stratified random sample of 50 tracebacks, based on random source Class B dealers and inspectors, pulled from the Eastern Regional Office traceback spreadsheet total of 317 tracebacks. The Eastern Region had many more tracebacks because eight of the nine current random source dealers are located in that region. Based on our assessment, we believe these data are sufficiently reliable for reporting APHIS data for informational and contextual purposes. Additionally, we interviewed APHIS Animal Care headquarters and regional office officials—including the eight field inspectors who inspect the nine current random source Class B dealers—as well as the dealers, to obtain an understanding of APHIS oversight as it pertains to these dealers. We also accompanied two APHIS inspectors on three random source Class B dealer inspections in two states to observe how inspections and tracebacks were conducted.
Furthermore, we interviewed and reviewed documents obtained from a cross section of stakeholder entities, including two animal welfare groups (the Animal Welfare Institute and the Humane Society of the United States); medical research associations such as the National Association for Biomedical Research; the National Research Council; the National Institutes of Health; and the USDA Office of Inspector General, to provide us further context for understanding the issues involving both random source dealers and random source dogs and cats. To determine the costs associated with APHIS's oversight of random source dealers compared with its costs for oversight of other types of dealers, we reviewed prior cost information the agency provided to the Senate and House Appropriations Committees in April 2006 and to an individual Member of Congress in June 2009. We discussed this previously reported information with APHIS Animal Care headquarters officials and inquired how the information was prepared. We also interviewed agency officials about APHIS's current efforts to collect oversight cost data for random source Class B dealers, as well as for other entities the agency inspects, such as other types of dealers. Additionally, we obtained and reviewed documentation from APHIS regarding how the agency reports its average cost-of-inspection information. We conducted this performance audit from June 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the U.S.
Department of Agriculture

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, James R. Jones, Jr., Assistant Director; Kevin S. Bray; Barry DeWeese; Kirk D. Menard; Michael S. Pose; David Reed; Terry Richardson; Cynthia Saunders; and Ben Shouse made key contributions to this report.

For decades, the public has been concerned that lost or stolen dogs and cats could be used in research. The U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) is responsible for the licensing and oversight of dealers who provide animals for research. Random source Class B dealers--who generally obtain dogs and cats for research from individuals, pounds, and other dealers--have been the focus of this concern. GAO was asked to determine (1) the number of random source Class B dealers, (2) the extent to which APHIS conducts inspections of these dealers and verifies their records, and (3) the costs associated with APHIS's oversight of these dealers compared to other types of dealers. GAO reviewed the Animal Welfare Act (AWA); APHIS regulations and guidance; inspection reports; agency data, such as "traceback" data used to verify dogs and cats are not lost or stolen; and interviewed and reviewed documents from agency officials and other stakeholders. As of July 2010, nine Class B dealers were licensed by APHIS to sell random source dogs and cats for research. This number has not changed significantly since fiscal year 2005 but declined from over 100 dealers in the early 1990s. Random source dealers sold 3,139 animals to research facilities from November 2007 to November 2008--equivalent to about 3 percent of the dogs and cats used in research in fiscal year 2008.
APHIS inspections have found numerous random source Class B dealer violations, such as the condition of animal housing and inadequate veterinary care, but APHIS has not completed all of its fiscal year 2009 tracebacks related to these dealers or analyzed traceback verification data to detect problems with the process. In reviewing all inspection reports for fiscal years 2007 through 2009, GAO found APHIS generally inspected, or attempted to inspect, each of these dealers at least four times a year, as directed. APHIS guidance directs inspectors to examine the condition of a dealer facility, examine the condition of the dogs and cats present, and review dealer records. Overall, 54 of the 156 inspection reports cited at least one dealer violation, and seven of the nine dealers had one or more violations. As of July 2010, several dealers were under further APHIS investigation due to repeated violations. To verify dealer records and help ensure dealers are not obtaining lost or stolen animals, APHIS attempted a total of 326 tracebacks in fiscal year 2009. Though APHIS has conducted tracebacks since fiscal year 1993, it did not compile traceback data until fiscal year 2009. As of June 2010, data showed APHIS successfully traced a dog or cat back to a legitimate source about 71 percent of the time. About 29 percent of tracebacks APHIS conducted during this period were either unsuccessful or had not been completed as of June 2010, as directed by agency guidance. Because APHIS does not analyze traceback data, it cannot systematically detect problems with tracebacks and take all available steps to ensure random source dealers obtain dogs and cats from legitimate sources. For example, without analyzing data, APHIS cannot know whether the same sellers or inspectors were consistently involved in late or incomplete tracebacks. 
According to APHIS officials, the agency does not collect cost information specific to its oversight of random source Class B dealers, or to any other class of dealer it inspects. Officials also said the agency does not currently have a mechanism to determine these costs. Federal internal control standards call for agencies to obtain such information for program oversight. For example, APHIS inspectors do not record their time by specific oversight activity or class of dealer. Without a methodology to collect and track costs associated with the oversight of these dealers, and others APHIS inspects, APHIS management cannot identify trends or deficiencies requiring its attention. Furthermore, management cannot develop a business case to change its oversight program, if needed, to more effectively and efficiently use available resources. |
Background

Several federal legislative and executive provisions support preparation for and response to emergency situations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act) primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to state, local, and tribal governments, individuals, and qualified private nonprofit organizations. FEMA, within DHS, has responsibility for administering the provisions of the Stafford Act. Besides using these federal resources, states affected by a catastrophic disaster can also turn to other states for assistance in obtaining surge capacity—the ability to draw on additional resources, such as personnel and equipment, needed to respond to and recover from the incident. One way of sharing personnel and equipment across state lines is through the use of the Emergency Management Assistance Compact, an interstate compact that provides a legal and administrative framework for managing such emergency requests. The compact includes 49 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. We have ongoing work examining how the Emergency Management Assistance Compact has been used in disasters and how its effectiveness could be enhanced and expect to report by this summer. As the committee is aware, a number of specific recommendations have been made to improve the nation's ability to effectively prepare for and respond to catastrophic disasters in the aftermath of Hurricane Katrina.
Beginning in February 2006, reports by the House Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina, the Senate Homeland Security and Governmental Affairs Committee, the White House Homeland Security Council, the DHS Inspector General, and DHS and FEMA all identified a variety of failures and some strengths in the preparations for, response to, and initial recovery from Hurricane Katrina. In addition to these reviews, a report from the American National Standards Institute Homeland Security Standards Panel (ANSI-HSSP) contains recommendations aimed at bolstering national preparedness, response, and recovery efforts in the event of a natural disaster. A key resource identified in the document is the American National Standard for Disaster/Emergency Management and Business Continuity Programs (ANSI/NFPA 1600), which was developed by the National Fire Protection Association (NFPA). The standard defines a common set of criteria for preparedness, disaster management, emergency management, and business continuity programs. Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements of preparing for, responding to, and recovering from any catastrophic disaster. Based on our work done during the aftermath of Hurricane Katrina, we previously reported that DHS needs to more effectively coordinate disaster preparedness, response, and recovery efforts, particularly for catastrophic disasters in which the response capabilities of state and local governments are almost immediately overwhelmed. Our analysis showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. 
In line with a recommendation we made following Hurricane Andrew, the nation's most destructive hurricane until Katrina, we recommended that Congress give federal agencies explicit authority to take actions to prepare for all types of catastrophic disasters when there is warning. We also recommended that DHS

1. rigorously retest, train, and exercise its recent clarification of the roles, responsibilities, and lines of authority for all levels of leadership, implementing changes needed to remedy identified coordination problems;

2. direct that the NRP base plan and its supporting Catastrophic Incident Annex be supported by more robust and detailed operational implementation plans;

3. provide guidance and direction for federal, state, and local planning, training, and exercises to ensure such activities fully support preparedness, response, and recovery responsibilities on a jurisdictional and regional basis;

4. take a lead in monitoring federal agencies' efforts to prepare to meet their responsibilities under the NRP and the interim National Preparedness Goal; and

5. use a risk management approach in deciding whether and how to invest finite resources in specific capabilities for a catastrophic disaster.

The Post-Katrina Reform Act responded to the findings and recommendations in the various reports examining the preparation for and response to Hurricane Katrina. While keeping FEMA within DHS, the act enhances FEMA's responsibilities and its autonomy within DHS. FEMA is to lead and support the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. Under the act, the FEMA Administrator reports directly to the Secretary of DHS; FEMA is now a distinct entity within DHS; and the Secretary of DHS can no longer substantially or significantly reduce the authorities, responsibilities, or functions of FEMA or the capability to perform them unless authorized by subsequent legislation.
FEMA has absorbed many of the functions of DHS's Preparedness Directorate (with some exceptions). The statute establishes 10 regional offices with specified responsibilities. The statute also establishes a National Integration Center responsible for the ongoing management and maintenance of the NIMS and NRP. The Post-Katrina Reform Act also included provisions for other areas, such as evacuation plans and exercises and addressing the needs of individuals with disabilities. In addition, the act includes several provisions to strengthen the management and capability of FEMA's workforce. For example, the statute called for a strategic human capital plan to shape and improve FEMA's workforce, authorized recruitment and retention bonuses, and established a Surge Capacity Force. Most of the organizational changes became effective as of March 31, 2007. Others, such as the increase in organizational autonomy for FEMA and the establishment of the National Integration Center, became effective upon enactment of the Post-Katrina Reform Act on October 4, 2006.

FEMA Reviewing Its Responsibilities, Capabilities as It Implements Recommendations and Post-Katrina Reform Act

After FEMA became part of DHS in March 2003, its responsibilities were over time dispersed and redefined. FEMA continues to evolve within DHS as it implements the changes required by the Post-Katrina Reform Act, whose details are discussed later. Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements of preparing for, responding to, and recovering from any catastrophic disaster. According to DHS, the department completed a thorough assessment of FEMA's internal structure to incorporate lessons learned from Hurricane Katrina and systematically integrate new and existing assets and responsibilities within FEMA.
As I stated in March 2007 testimony, the effective implementation of recent recommendations and the Post-Katrina Reform Act's organizational changes and related roles and responsibilities should address many of our emergency management observations and recommendations. In addition, we previously reported that DHS needs to more effectively coordinate disaster preparedness, response, and recovery efforts, particularly for catastrophic disasters in which the response capabilities of state and local governments are almost immediately overwhelmed. Our analysis showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse.

Leadership Is Critical to Prepare for, Respond to, and Recover from Catastrophic Disasters

In preparing for, responding to, and recovering from any catastrophic disaster, the legal authorities, roles and responsibilities, and lines of authority at all levels of government must be clearly defined, effectively communicated, and well understood to facilitate rapid and effective decision making. Hurricane Katrina showed the need to improve leadership at all levels of government to better respond to a catastrophic disaster. As we have previously reported, developing the capabilities needed for catastrophic disasters requires an overall national preparedness effort that is designed to integrate and define what needs to be done, where, and by whom (roles and responsibilities), how it should be done, and how well it should be done—that is, according to what standards. The principal national documents designed to address each of these are, respectively, the NRP, NIMS, and the NPG.
All three documents are undergoing extensive review and revision by federal, state, and local government officials, tribal authorities, and nongovernmental and private sector officials. For example, the review of the NRP is intended to assess the effectiveness of the NRP, identify modifications and improvements, and reissue the document. This review includes all major components of the NRP, including the base plan, Emergency Support Functions (ESF), and annexes such as the Catastrophic Incident Annex and Supplement, as well as the role of the PFO, FCO, and the Joint Field Office structure. Also during the current NRP review period, FEMA has revised the organizational structure of Emergency Support Function 6 (ESF-6), Mass Care, Housing, and Human Services, making FEMA the lead agency for this emergency support function. The Red Cross will remain a supporting agency in the responsibilities and activities of ESF-6. According to a February 2007 letter by the Red Cross, this change will not take place until the NRP review process is complete and all changes are approved. The revised NRP and NIMS were originally scheduled for release in June 2007. In April 2007, however, DHS officials notified stakeholders that some important issues were more complex and require national-level policy decisions, and that additional time was needed to complete a comprehensive draft. DHS noted that the underlying operational principles of the NRP remain intact and that the current document, as revised in May 2006, still applies. FEMA officials have told us that the final version of the National Preparedness Goal and its corresponding documents, like the Target Capabilities List, are currently receiving final reviews by the White House and are expected to be released shortly. A key issue in the response to Hurricane Katrina was the lack of clearly understood roles and responsibilities.
One issue that continues to be a subject of discussion is the respective roles and responsibilities of the FCO, who has the authority to make mission assignments to federal agencies for response and recovery under the Stafford Act, and the PFO, whose role was to provide situational awareness to the Secretary of Homeland Security. The May 2006 revisions to the NRP made changes designed to address this issue. However, as we noted in March 2007, the changes may not have fully resolved the leadership issues regarding the roles of the PFO and the FCO. While the Secretary of Homeland Security may avoid conflicts by appointing a single individual to serve in both positions in non-terrorist incidents, confusion may persist if the Secretary of Homeland Security does not exercise this discretion to do so. Furthermore, this discretion does not exist for terrorist incidents, and the revised NRP does not specifically provide a rationale for this limitation. FEMA has pre-designated five teams of FCOs and PFOs in the Gulf Coast and eastern seaboard states at risk of hurricanes. This includes FCOs and PFOs for the Gulf Coast Region, Northeast Region, and Mid-Atlantic Region, and separate FCOs and PFOs for the states of Florida and Texas. It is critically important that the authorities, roles, and responsibilities of these pre-designated FCOs and PFOs be clear and clearly understood by all. There is still some question among state and local first responders about the need for both positions and how they will work together in disaster response. One potential benefit of naming the FCOs and PFOs in advance is that they have an opportunity to meet and discuss expectations, roles, and responsibilities with state, local, and nongovernmental officials before an actual disaster, possibly laying the groundwork for improved coordination and communication in an actual disaster.
Enhanced Capabilities Are Needed to Adequately Prepare for and Respond to Major Disasters

Numerous reports, including those by the House, Senate, and the White House, and our own work suggest that the substantial resources and capabilities marshaled by state, local, and federal governments and nongovernmental organizations were insufficient to meet the immediate challenges posed by the unprecedented degree of damage and the number of victims caused by Hurricanes Katrina and Rita. Developing the ability to prepare for, respond to, and recover from major and catastrophic disasters requires an overall national preparedness effort that is designed to integrate and define what needs to be done and where, how it should be done, and how well it should be done—that is, according to what standards. As previously discussed, the principal national documents designed to address each of these are, respectively, the NRP, NIMS, and the NPG, and each document is undergoing revision. Overall, capabilities are built upon the appropriate combination of people, skills, processes, and assets. Ensuring that needed capabilities are available requires effective planning and coordination in conjunction with training and exercises in which the capabilities are realistically tested and problems identified and subsequently addressed in partnership with other federal, state, and local stakeholders. In recent work on FEMA's management of day-to-day operations, we found that although shifting resources caused by its transition to DHS created challenges for FEMA, the agency's management of existing resources compounded these problems. FEMA lacks some of the basic management tools that help an agency respond to changing circumstances. Most notably, our January 2007 report found that FEMA lacks a strategic workforce plan and related human capital strategies—such as succession planning or a coordinated training effort.
Such tools are integral to managing resources, as they enable an agency to define staffing levels, identify the critical skills needed to achieve its mission, and eliminate or mitigate gaps between current and future skills and competencies. FEMA officials have said they are beginning to address these and other basic organizational management issues. To this end, FEMA has commissioned studies of 18 areas, whose final reports and recommendations are due later this spring. An important element of effective emergency response is the ability to identify and deploy where needed a variety of resources from a variety of sources—federal, state, local or tribal governments; military assets of the National Guard or active military; nongovernmental entities; and the private sector. One key method of tapping resources in areas not affected by the disaster is the Emergency Management Assistance Compact (EMAC). Through EMAC about 46,000 National Guard and 19,000 civilian responders were deployed to areas directly affected by the 2005 Gulf Coast hurricanes. We have ongoing work examining how EMAC has been used in disasters and how its effectiveness could be enhanced and expect to report by this summer. One of the resources accessed through EMAC is the National Guard. States and governors rely on their National Guard personnel and equipment for disaster response, and National Guard personnel are frequently deployed to disaster areas outside their home states. However, as we reported in January 2007, the types and quantities of equipment the National Guard needs to respond to large-scale disasters have not been fully identified because the multiple federal and state agencies that would have roles in responding to such events have not completed and integrated their plans. As a liaison between the Army, the Air Force, and the states, the National Guard Bureau is well positioned to facilitate state planning for National Guard forces. 
However, until the bureau's charter and its civil support regulation are revised to define its role in facilitating state planning for multistate events, such planning may remain incomplete, and the National Guard may not be prepared to respond as effectively and efficiently as possible. In addition, questions have arisen about the level of resources the National Guard has available for domestic emergency response. DOD does not routinely measure the equipment readiness of nondeployed National Guard forces for domestic civil support missions or report this information to Congress. Thus, although the deployment of National Guard units overseas has decreased the supply of equipment available to nondeployed National Guard units in the U.S., there has been no established, formal method of assessing the impact on the Guard's ability to perform its domestic missions. Although DOD has begun to collect data on units' preparedness, these efforts are not yet fully mature. The nation's experience with Hurricanes Katrina and Rita reinforces some of the questions surrounding the adequacy of capabilities in the context of a catastrophic disaster—particularly in the areas of (1) situational assessment and awareness, (2) emergency communications, (3) evacuations, (4) search and rescue, (5) logistics, and (6) mass care and sheltering. FEMA has described a number of actions it has taken or has underway to address identified deficiencies in each of these areas. Examples include designating national and regional situational awareness teams; acquiring and deploying mobile satellite communications trucks; developing an electronic system for receiving and tracking the status of requests for assistance and supplies; acquiring GPS equipment for tracking the location of supplies en route to areas of need; and working with the Red Cross and others to clarify roles and responsibilities for mass care, housing, and human services.
However, a number of FEMA programs are ongoing, and it is too early to evaluate their effectiveness. In addition, none of these initiatives appear to have been tested on a scale that reasonably simulates the conditions and demand they would face following a major or catastrophic disaster. Thus, it is difficult to assess the probable results of these initiatives in improving response to a major or catastrophic disaster, such as a category 4 or 5 hurricane. The section below briefly discusses actions taken or underway to make improvements in each of these areas. Additional details can be found in appendix I. Situational Awareness. FEMA is developing a concept for rapidly deployable interagency incident management teams, at this time called the National Incident Management Team, to provide a forward federal presence on site within 12 hours of notification to facilitate managing the national response for catastrophic incidents. These teams will support efforts to meet emergent needs during disasters, such as providing initial situational awareness for decision makers and supporting the initial establishment of a unified command. Emergency Communications. Agencies' communications systems during a catastrophic disaster must first be operable, with sufficient communications to meet everyday internal and emergency communication requirements. Once operable, systems should have communications interoperability whereby public safety agencies (e.g., police, fire, emergency medical services, etc.) and service agencies (e.g., public works, transportation, and hospitals) can communicate within and across agencies and jurisdictions in real time as needed.
DHS officials have identified a number of programs and activities they have implemented to improve interoperable communications nationally, and FEMA has taken action to design, staff, and maintain a rapidly deployable, responsive, interoperable, and reliable emergency communications capability, which we discuss further in appendix I. Logistics. FEMA's inability to effectively manage and track requests for and the distribution of water, ice, food, and other supplies came under harsh criticism in the wake of Hurricane Katrina. Within days, FEMA became overwhelmed and essentially asked the military to take over much of the logistics mission. In the Post-Katrina Reform Act, Congress required FEMA to make its logistics system more flexible and responsive. According to FEMA officials, the agency's ongoing improvements to its logistics strategy are designed initially to lean forward and provide immediate support to a disaster site, mainly through FEMA-owned goods and assets, and later to establish sustained supply chains with the private vendors whose resources are needed for ongoing response and recovery activities. In addition, we recently examined FEMA logistics issues, taking a broad approach and identifying five areas necessary for an effective logistics system, which are discussed in appendix I. In short, FEMA is taking action to transition its logistics program to be more proactive, flexible, and responsive. While these and other initiatives hold promise for improving FEMA's logistics capabilities, it will be several years before they are fully implemented and operational. Mass Care and Shelter. In GAO's work examining the nation's ability to evacuate, care for, and shelter disaster victims, we found that FEMA needs to identify and assess the capabilities that exist across the federal government and outside the federal government.
In an April testimony, FEMA's Deputy Administrator for Operations said that emergency evacuation, shelter, and housing constitute FEMA's most pressing priority in planning for recovery from a catastrophic disaster. He said that FEMA is undertaking more detailed mass evacuee support planning; the Department of Justice and the Red Cross are developing methods for more quickly identifying and uniting missing family members; and FEMA and the Red Cross have developed a web-based data system to support shelter management, reporting, and facility identification activities.

Balance Needed between Quick Provision of Assistance and Ensuring Accountability to Protect against Waste, Fraud, and Abuse

Controls and accountability mechanisms help to ensure that resources are used appropriately. Nevertheless, during a catastrophic disaster, decision makers struggle with the tension between implementing controls and accountability mechanisms and the demand for rapid response and recovery assistance. On one hand, our work uncovered many examples where quick action could not occur because procedures required extensive, time-consuming processes, delaying the delivery of vital supplies and other assistance. On the other hand, we also found examples where FEMA's processes for assisting disaster victims left the federal government vulnerable to fraud and the abuse of expedited assistance payments. We estimated that through February 2006, FEMA made about $600 million to $1.4 billion in improper and potentially fraudulent payments to applicants who used invalid information to apply for expedited cash assistance. DHS and FEMA have reported a number of actions that are to be in effect for the 2007 hurricane season so that federal recovery programs will have more capacity to rapidly handle a catastrophic incident while also providing accountability. 
Examples include significantly increasing the quantity of prepositioned supplies, such as food, ice, and water; placing global positioning systems on supply trucks to track their location and better manage the delivery of supplies; creating an enhanced phone system for victim assistance applications that can handle up to 200,000 calls per day; and improving computer systems and processes for verifying the eligibility of those applying for assistance. Effective implementation of these and other planned improvements will be critical to achieving their intended outcomes. Finally, catastrophic disasters not only require a different magnitude of capabilities and resources for effective response, they may also require more flexible policies and operating procedures. In a catastrophe, streamlining, simplifying, and expediting decision making should quickly replace "business as usual" and unquestioned adherence to the long-standing policies and operating procedures used in normal situations for providing relief to disaster victims. At the same time, controls and accountability mechanisms must be sufficient to provide the documentation needed for expense reimbursement and reasonable assurance that resources have been used legally and for the purposes intended. We have recommended that DHS create accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. Doing so would enable DHS to provide assistance quickly following a catastrophe while still confirming the eligibility of victims for disaster assistance and ensuring that contracts for response and recovery services contain provisions for fair and reasonable prices. We also recommended that DHS provide guidance on advance procurement practices and procedures (precontracting) for those federal agencies with roles and responsibilities under the NRP. 
These federal agencies could then better manage disaster-related procurement, and an assessment process could be established to monitor agencies' continuous planning efforts for their disaster-related procurement needs and the maintenance of capabilities. For example, we identified a number of emergency response practices in the public and private sectors that provide insight into how the federal government can better manage its disaster-related procurements. These practices include developing knowledge of contractor capabilities and prices and establishing vendor relationships before a disaster occurs, as well as establishing a scalable operations plan that adjusts the level of capacity to match the response with the need. In my March 2007 testimony, I noted that recent statutory changes have established more controls and accountability mechanisms. For example, the Secretary of DHS is required to promulgate regulations designed to limit the excessive use of subcontractors and subcontracting tiers, and to promulgate regulations that limit certain noncompetitive contracts to 150 days unless exceptional circumstances apply. Oversight funding is specified: FEMA may dedicate up to 1 percent of funding for agency mission assignments as oversight funds. The FEMA Administrator must develop and maintain internal management controls for FEMA disaster assistance programs and develop and implement a training program to prevent fraud, waste, and abuse of federal funds in responding to or recovering from a disaster. Verification measures must be developed to identify eligible recipients of disaster relief assistance.

Several Disaster Management Issues Should Have Continued Congressional Attention

In November 2006, the Comptroller General wrote to the congressional leadership suggesting areas for congressional oversight. 
He suggested that one area needing fundamental reform and oversight was preparing for, responding to, recovering from, and rebuilding after catastrophic events. Recent events—notably Hurricane Katrina and the threat of an influenza pandemic—have illustrated the importance of ensuring a strategic and integrated approach to catastrophic disaster management. Disaster preparation and response that is well planned and coordinated can save lives and mitigate damage, and an effectively functioning insurance market can substantially reduce the government's exposure to post-catastrophe payouts. Lessons learned from past national emergencies provide an opportunity for Congress to look at actions that could mitigate the effects of potential catastrophic events. The Comptroller General also suggested in November 2006 that Congress could consider how the federal government can work with other nations, other levels of government, and nonprofit and private sector organizations, such as the Red Cross and private insurers, to help ensure the nation is well prepared and recovers effectively. Given the billions of dollars dedicated to preparing for, responding to, recovering from, and rebuilding after catastrophic disasters, congressional oversight is critical. A comprehensive and in-depth oversight agenda would require long-term efforts. 
Congress might consider starting with several specific areas for immediate oversight, such as (1) evaluating development and implementation of the National Preparedness System, including preparedness for an influenza pandemic; (2) assessing state and local capabilities and the use of federal grants in building and sustaining those capabilities; (3) examining regional and multistate planning and preparation; (4) determining the status of preparedness exercises; and (5) examining DHS policies regarding oversight assistance.

DHS Has Reorganized Pursuant to the Post-Katrina Reform Act

On January 18, 2007, DHS provided Congress a notice of implementation of the Post-Katrina Reform Act reorganization requirements and additional organizational changes made under the Homeland Security Act of 2002. All of the changes, according to DHS, were to become effective on March 31, 2007. According to DHS, the department completed a thorough assessment of FEMA's internal structure to incorporate lessons learned from Hurricane Katrina and to systematically integrate new and existing assets and responsibilities within FEMA. DHS transferred the following DHS offices and divisions to FEMA: the United States Fire Administration, the Office of Grants and Training, the Chemical Stockpile Emergency Preparedness Division, the Radiological Emergency Preparedness Program, the Office of National Capital Region Coordination, and the Office of State and Local Government Coordination. DHS officials stated that they have established several organizational elements, such as a logistics management division, a disaster assistance division, and a disaster operations division. In addition, FEMA expanded its regional office structure, in part by establishing in each region a Regional Advisory Council and at least one Regional Strike Team. 
With the recent appointment of the director for region III, FEMA officials noted that for the first time in recent memory there will be no acting regional directors, and all 10 FEMA regional offices will be headed by experienced professionals. Further, FEMA will include a new national preparedness directorate intended to consolidate FEMA's strategic preparedness assets from existing FEMA programs and certain legacy Preparedness Directorate programs. The National Preparedness Directorate will contain functions related to preparedness doctrine, policy, and contingency planning. It will also include the National Integration Center, which will maintain the NRP and NIMS and ensure that training and exercise activities reflect these documents.

Effective Implementation of the Post-Katrina Reform Act's Provisions Should Respond to Many Concerns

As I stated in my March 2007 testimony, the effective implementation of the Post-Katrina Reform Act's organizational changes and related roles and responsibilities—in addition to those changes already undertaken by DHS—should address many of our emergency management observations and recommendations. As noted earlier, our analysis in the aftermath of Hurricane Katrina showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. The statute appears to strengthen leadership roles and responsibilities. For example, it clarifies that the FEMA Administrator is to act as the principal emergency management adviser to the President, the Homeland Security Council, and the Secretary of DHS and to provide recommendations directly to Congress after informing the Secretary of DHS. The incident management responsibilities and roles of the National Integration Center are now clear. 
The Secretary of DHS must ensure that the NRP provides for a clear chain of command to lead and coordinate the federal response to any natural disaster, act of terrorism, or other man-made disaster. The law also establishes qualifications that appointees must meet. For example, the FEMA Administrator must have a demonstrated ability in and knowledge of emergency management and homeland security and 5 years of executive leadership and management experience. Many provisions are designed to enhance preparedness and response. For example, the statute requires the President to establish a national preparedness goal and national preparedness system. The national preparedness system includes a broad range of preparedness activities, including utilizing target capabilities and preparedness priorities, training and exercises, comprehensive assessment systems, and reporting requirements. To illustrate, the FEMA Administrator is to carry out a national training program to implement, and a national exercise program to test and evaluate the NPG, NIMS, NRP, and other related plans and strategies. In addition, FEMA is to partner with nonfederal entities to build a national emergency management system. States must develop plans that include catastrophic incident annexes modeled after the NRP annex in order to be eligible for FEMA emergency preparedness grants. The state annexes must be developed in consultation with local officials, including regional commissions. FEMA regional administrators are to foster the development of mutual aid agreements between states. FEMA must enter into a memorandum of understanding with certain non-federal entities to collaborate on developing standards for deployment capabilities, including credentialing of personnel and typing of resources. 
In addition, FEMA must implement several other capabilities, such as (1) developing a logistics system providing real-time visibility of items at each point throughout the logistics system, (2) establishing a prepositioned equipment program, and (3) establishing emergency support and response teams.

The National Preparedness System Is Key to Developing Disaster Capabilities

More immediate congressional attention might focus on evaluating the construction and effectiveness of the National Preparedness System, which is mandated under the Post-Katrina Reform Act. Under Homeland Security Presidential Directive-8, issued in December 2003, DHS was to coordinate the development of a national domestic all-hazards preparedness goal "to establish measurable readiness priorities and targets that appropriately balance the potential threat and magnitude of terrorist attacks and large scale natural or accidental disasters with the resources required to prevent, respond to, and recover from them." The goal was also to include readiness metrics and standards for preparedness assessments and strategies and a system for assessing the nation's overall preparedness to respond to major events. To implement the directive, DHS developed the National Preparedness Goal using 15 emergency event scenarios, 12 of which were terrorist related, with the remaining 3 addressing a major hurricane, major earthquake, and an influenza pandemic. According to DHS's National Preparedness Guidance, the planning scenarios are intended to illustrate the scope and magnitude of large-scale, catastrophic emergency events for which the nation needs to be prepared and to form the basis for identifying the capabilities needed to respond to a wide range of large scale emergency events. The scenarios focused on the consequences that first responders would have to address. 
Some state and local officials and experts have questioned whether the scenarios were appropriate inputs for preparedness planning, particularly in terms of their plausibility and the emphasis on terrorist scenarios. Using the scenarios, and in consultation with federal, state, and local emergency response stakeholders, DHS developed a list of over 1,600 discrete tasks, of which 300 were identified as critical. DHS then identified 36 target capabilities to provide guidance to federal, state, and local first responders on the capabilities they need to develop and maintain. That list has since been refined, and DHS released a revised draft list of 37 capabilities in December 2005. Because no single jurisdiction or agency would be expected to perform every task, possession of a target capability could involve enhancing and maintaining local resources, ensuring access to regional and federal resources, or some combination of the two. However, DHS is still in the process of developing goals, requirements, and metrics for these capabilities and the National Preparedness Goal in light of the Hurricane Katrina experience. Several key components of the National Preparedness System defined in the Post-Katrina Reform Act—the NPG, target capabilities and preparedness priorities, and comprehensive assessment systems—should be closely examined. Prior to Hurricane Katrina, DHS had established seven priorities for enhancing national first responder preparedness, including, for example, implementing the NRP and NIMS; strengthening capabilities in information sharing and collaboration; and strengthening capabilities in medical surge and mass prophylaxis. Those seven priorities were incorporated into DHS’s fiscal year 2006 homeland security grant program (HSGP) guidance, which added an eighth priority that emphasized emergency operations and catastrophic planning. In the fiscal year 2007 HSGP program guidance, DHS set two overarching priorities. 
DHS has focused the bulk of its available grant dollars on risk-based investment. The department has also prioritized regional coordination and investment strategies that institutionalize regional security strategy integration. In addition to the two overarching priorities, the guidance also identified several others. These include (1) measuring progress in achieving the NPG, (2) integrating and synchronizing preparedness programs and activities, (3) developing and sustaining a statewide critical infrastructure/key resource protection program, (4) enabling information/intelligence fusion, (5) enhancing statewide communications interoperability, (6) strengthening preventative radiological/nuclear detection capabilities, and (7) enhancing catastrophic planning to address nationwide plan review results. Under the guidance, all fiscal year 2007 HSGP applicants will be required to submit an investment justification that provides background information, strategic objectives and priorities addressed, their funding/implementation plan, and the impact that each proposed investment (project) is anticipated to have.

The Particular Challenge of Preparing for an Influenza Pandemic

The possibility of an influenza pandemic is a real and significant threat to the nation. There is widespread agreement that it is not a question of if but when such a pandemic will occur. The issues associated with preparing for and responding to an influenza pandemic are similar to those for any other type of disaster: clear leadership roles and responsibilities, authority, and coordination; risk management; realistic planning, training, and exercises; assessing and building the capacity needed to effectively respond and recover; effective information sharing and communication; and accountability for the effective use of resources. However, a pandemic poses some unique challenges. 
Hurricanes, earthquakes, explosions, or bioterrorist incidents occur within a short period of time, perhaps a period of minutes, although such events can have long-term effects, as we have seen in the Gulf region following Hurricane Katrina. The immediate effects of such disasters are likely to affect specific locations or areas within the nation; the immediate damage is not nationwide. In contrast, an influenza pandemic is likely to occur in waves, each lasting 6 to 8 weeks, over a period of months, and to affect wide areas of the nation, perhaps the entire nation. Depending upon the severity of the pandemic, the number of deaths could range from 200,000 to 2 million. By comparison, seasonal influenza in the United States results in about 36,000 deaths annually. Successfully addressing a pandemic is also likely to require international coordination of detection and response. The Department of Health and Human Services estimates that during a severe pandemic, absenteeism may reach as much as 40 percent in an affected community because individuals are ill, caring for family members, or fear infection. Such absenteeism could affect our nation's economy, as businesses and governments face the challenge of continuing to provide essential services with reduced numbers of healthy workers. In addition, our nation's ability to respond effectively to hurricanes or other major disasters during a pandemic may also be diminished as first responders, health care workers, and others are infected or otherwise unable to perform their normal duties. Thus, the consequences of a pandemic are potentially widespread, and effective planning and response for such a disaster will require particularly close cooperation among all levels of government, the private sector, and individuals within the United States, as well as international cooperation. 
We have engagements under way examining such issues as barriers to implementing the Department of Health and Human Services' National Pandemic Influenza Plan, the national strategy and framework for pandemic influenza, the Department of Defense and Department of Agriculture's preparedness efforts and plans, public health and hospital preparedness, and U.S. efforts to improve global disease surveillance. We expect most of these reports to be issued by late summer 2007.

Knowledge of the Effects of State and Local Efforts to Improve Their Capabilities Is Limited

Possible congressional oversight in the short term also might focus on state and local capabilities. As I testified in February on applying risk management principles to guide federal investments, over the past 4 years DHS has provided about $14 billion in federal funding to states, localities, and territories through its HSGP grants. Remarkably, however, we know little about how states and localities finance their efforts in this area, how they have used their federal funds, and how they are assessing the effectiveness with which they spend those funds. Essentially, all levels of government are still struggling to define and act on the answers to basic, but hardly simple, questions about emergency preparedness and response: What is important (that is, what are our priorities)? How do we know what is important (e.g., risk assessments, performance standards)? How do we measure, attain, and sustain success? On what basis do we make necessary trade-offs, given finite resources? There are no simple, easy answers to these questions. The data available for answering them are incomplete and imperfect. We have better information and a better sense of what needs to be done for some types of major emergency events than for others. 
For some natural disasters, such as regional wildfires and flooding, there is more experience and therefore a better basis on which to assess preparation and response efforts and identify gaps that need to be addressed. California has experience with earthquakes; Florida, with hurricanes. However, no one in the nation has experience with such potential catastrophes as a dirty bomb detonated in a major city. Although both the AIDS epidemic and SARS provide some related experience, there have been no recent pandemics that rapidly spread to thousands of people across the nation. A new feature in the fiscal year 2006 DHS homeland security grant guidance for the Urban Area Security Initiative (UASI) grants was that eligible recipients must provide an "investment justification" with their grant application. States were to use this justification to outline the implementation approaches for specific investments that will be used to achieve the initiatives outlined in their state Program and Capability Enhancement Plan. These plans were multiyear global program management plans for the entire state homeland security program that look beyond federal homeland security grant programs and funding. The justifications must cover all funding requested through the DHS homeland security grant program. In the guidance, DHS noted that it would use a peer review process to evaluate grant applications on the basis of the effectiveness of a state's plan to address the priorities it has outlined and thereby reduce its overall risk. For fiscal year 2006, DHS implemented a competitive process to evaluate the anticipated effectiveness of proposed homeland security investments. For fiscal year 2007, DHS will continue to use the risk and effectiveness assessments to inform final funding decisions, although changes have been made to make the grant allocation process more transparent and more easily understood. 
DHS officials have said that they cannot yet assess how effective the actual investments from grant funds are in enhancing preparedness and mitigating risk because they do not yet have the metrics to do so.

Regional and Multistate Planning and Preparation Should Be Robust

Through its grant guidance, DHS has encouraged regional and multistate planning and preparation. Planning and assistance have largely been focused on single jurisdictions and their immediately adjacent neighbors. However, well-documented problems with the abilities of first responders from multiple jurisdictions to communicate at the site of an incident, along with the potential for large-scale natural and terrorist disasters, have generated a debate on the extent to which first responders should be focusing their planning and preparation on a regional and multigovernmental basis. As I mentioned earlier, an overarching national priority for the National Preparedness Goal is embracing regional approaches to building, sustaining, and sharing capabilities at all levels of government. All HSGP applications are to reflect regional coordination and show an investment strategy that institutionalizes regional security strategy integration. However, it is not known to what extent regional and multistate planning has progressed and is effective. Our limited regional work indicated there are challenges in planning. Our early work addressing the Office of National Capital Region Coordination (ONCRC) and National Capital Region (NCR) strategic planning reported that the ONCRC and the NCR faced interrelated challenges in managing federal funds in a way that maximizes the increase in first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. One of these challenges was developing a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and for assessing the benefits of expenditures in enhancing first responder capabilities. 
In subsequent work on National Capital Region strategic planning, we highlighted areas that needed strengthening in the Region's planning, specifically improving the substance of the strategic plan to guide decision makers. For example, additional information could have been provided regarding the type, nature, scope, or timing of planned goals, objectives, and initiatives; performance expectations and measures; designation of priority initiatives to meet regional risk and needed capabilities; lead organizations for initiative implementation; resources and investments; and operational commitment.

Exercises Must Be Carefully Planned and Deployed and Capture Lessons Learned

Our work examining the preparation for and response to Hurricane Katrina highlighted the importance of realistic exercises to test and refine assumptions, capabilities, and operational procedures; build on the strengths; and shore up the limitations revealed by objective assessments of the exercises. The Post-Katrina Reform Act mandates a national exercise program, and training and exercises are also included as a component of the National Preparedness System. With almost any skill and capability, experience and practice enhance proficiency. For first responders, exercises—especially of the type or magnitude of events for which there is little actual experience—are essential for developing skills and identifying what works well and what needs further improvement. Major emergency incidents, particularly catastrophic ones, by definition require the coordinated actions of personnel from many first responder disciplines and all levels of government, nonprofit organizations, and the private sector. It is difficult to overemphasize the importance of effective interdisciplinary, intergovernmental planning, training, and exercises in developing the coordination and skills needed for effective response. 
For exercises to be effective in identifying both strengths and areas needing attention, it is important that they be realistic, designed to test and stress the system, involve all key persons who would be involved in responding to an actual event, and be followed by honest and realistic assessments that result in action plans that are implemented. In addition to relevant first responders, exercise participants should include, depending upon the scope and nature of the exercise, mayors, governors, and state and local emergency managers who would be responsible for such things as determining if and when to declare a mandatory evacuation or ask for federal assistance.

DHS Has Provided Limited Transparency for Its Management or Operational Decisions

Congressional oversight in the short term might include DHS's policies regarding oversight assistance. The Comptroller General has testified that DHS has not been transparent in its efforts to strengthen its management areas and mission functions. While much of its sensitive work needs to be guarded from improper disclosure, DHS has not been receptive toward oversight. Delays in providing Congress and us with access to various documents and officials have impeded our work. We need to be able to independently assure ourselves and Congress that DHS has implemented many of our past recommendations or has taken other corrective actions to address the challenges we identified. However, DHS has not made its management or operational decisions transparent enough so that Congress can be sure it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually, and is providing the levels of security called for in numerous legislative requirements and presidential directives. 
Concluding Observations

Since September 11, 2001, the federal government has awarded billions of dollars in grants and assistance to state and local governments to assist in strengthening emergency management capabilities. DHS has developed several key national policy documents, including the NRP, NIMS, and the NPG, to guide federal, state, and local efforts. The aftermath of the 2005 hurricane season resulted in a reassessment of the federal role in preparing for and responding to catastrophic events. The studies and reports of the past year—by Congress, the White House Homeland Security Council, the DHS IG, DHS and FEMA, GAO, and others—have provided a number of insights into the strengths and limitations of the nation's capacity to respond to catastrophic disasters and have resulted in a number of recommendations for strengthening that capacity. Collectively, these studies and reports paint a complex mosaic of the challenges that the nation—federal, state, local, and tribal governments; nongovernmental entities; the private sector; and individual citizens—faces in preparing for, responding to, and recovering from catastrophic disasters. The Post-Katrina Reform Act directs many organizational, mission, and policy changes to respond to these findings and challenges. Assessing, developing, attaining, and sustaining needed emergency preparedness, response, and recovery capabilities is a difficult task that requires sustained leadership and the coordinated efforts of many stakeholders from a variety of first responder disciplines, levels of government, and nongovernmental entities. There is no "silver bullet," no easy formula. It is also a task that is never done; it requires continuing commitment, leadership, and trade-offs, because circumstances change and we will never have the funds to do everything we might like to do. That concludes my statement, and I would be pleased to respond to any questions you and subcommittee members may have. 
Contacts and Staff Acknowledgments For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or [email protected]. In addition to the contact named above, the following individuals from GAO’s Homeland Security and Justice Team also made major contributions to this testimony: Sharon Caudle, Assistant Director; John Vocino, Analyst-in-Charge; Flavio Martinez, Analyst; and Amy Bernstein, Communications Analyst. Appendix I: Enhanced Capabilities for Catastrophic Response and Recovery Numerous reports and our own work suggest that the substantial resources and capabilities marshaled by state, local, and federal governments and nongovernmental organizations were insufficient to meet the immediate challenges posed by the unprecedented degree of damage and the number of victims caused by Hurricanes Katrina and Rita. Developing the capabilities needed for catastrophic disasters should be part of an overall national preparedness effort that is designed to integrate and define what needs to be done and where, how, and how well it should be done—that is, according to what standards. The principal national documents designed to address each of these are, respectively, the NRP, NIMS, and the NPG. The nation’s experience with Hurricanes Katrina and Rita reinforces some of the questions surrounding the adequacy of capabilities in the context of a catastrophic disaster—particularly in the areas of (1) situational assessment and awareness, (2) emergency communications, (3) evacuations, (4) search and rescue, (5) logistics, and (6) mass care and sheltering. FEMA is taking actions to address identified deficiencies in each of these areas. 
Examples include designating national and regional situational awareness teams; acquiring and deploying mobile satellite communications trucks; developing an electronic system for receiving and tracking the status of requests for assistance and supplies; acquiring GPS equipment for tracking the location of supplies en route to areas of need; and working with the Red Cross and others to clarify roles and responsibilities for mass care, housing, and human services. This appendix provides additional details of FEMA’s actions in each of these areas. FEMA Taking Steps to Improve Situational Assessment Capabilities One of the critical capabilities that FEMA is working to improve is its situational assessment and awareness. FEMA is developing a concept for rapidly deployable interagency incident management teams, at this time called National Incident Management Teams, to provide a forward federal presence to facilitate managing the national response for catastrophic incidents. FEMA is planning to establish three national-level teams and ten regional-level teams, one in each of the ten FEMA regions. These teams will support efforts to meet emergent needs during disasters, such as the capability to provide initial situational awareness for decision-makers and to support the initial establishment of a unified command. According to FEMA’s plans, these teams will have a multi-agency composition to ensure that the multi-disciplinary requirements of emergency management are met. The teams are envisioned to have the capability to establish an effective federal presence within 12 hours of notification, to support the state, to coordinate federal activities, and to be self-sufficient for a minimum of 48 hours so as not to be a drain on potentially scarce local resources. National-level and regional-level teams will be staffed with permanent full-time employees, unlike the ERTs, which are staffed on a collateral duty basis. 
Team composition will include representatives from other DHS components and from interagency and homeland security partners. When not deployed, the teams will train with federal partners and provide a training capability to elevate state and local emergency management capabilities. The teams will also engage in consistent and coordinated operational planning and relationship-building with state, local, tribal, and other stakeholders. According to FEMA officials, these teams are still being designed, and decisions on team assets, equipment, and expected capabilities have not yet been finalized. The new teams are envisioned to eventually subsume the existing FIRST (Federal Incident Response Teams) and ERTs (FEMA’s Emergency Response Teams), and their mission and capabilities will incorporate similar concepts involving leadership, emergency management doctrine, and operational competence in communications. FEMA plans to implement one National Incident Management Team and one Regional Incident Management Team by May 25, 2007. Some Progress Has Been Made on Interoperable Communications As our past work has noted, emergency communications is a critical capability common across all phases of an incident. Agencies’ communications systems during a catastrophic disaster must first be operable, with sufficient communications to meet everyday internal and emergency communication requirements. Once operable, they then should have communications interoperability, whereby public safety agencies (e.g., police, fire, and emergency medical services) and service agencies (e.g., public works, transportation, and hospitals) can communicate within and across agencies and jurisdictions in real time as needed. DHS officials have identified a number of programs and activities they have implemented to improve interoperable communications nationally. 
DHS’s Office for Interoperability and Compatibility (OIC) was established to strengthen and integrate interoperability and compatibility efforts to improve local, tribal, state, and federal emergency preparedness and response. SAFECOM, a program of OIC which is transitioning to the Office of Emergency Communications (OEC)—in response to the Post-Katrina Reform Act—is developing tools, templates, and guidance documents, including field-tested statewide planning methodologies, online collaboration tools, coordinated grant guidance, communications requirements, and a comprehensive online library of lessons learned and best practices to improve interoperability and compatibility across the nation. DHS officials cited the development of the following examples in their efforts to improve interoperable communications: Statement of Requirements (SoR) to define operational and functional requirements for emergency response communications. Public Safety Architecture Framework (PSAF) to help emergency response agencies map interoperable communications system requirements and identify system gaps. Project 25 (P25) suite of standards and a Compliance Assessment Program, developed in conjunction with the National Institute of Standards and Technology (NIST) to support the efforts of the emergency response community and industry. Statewide Communications Interoperability Planning Methodology to offer states a tangible approach as they initiate statewide interoperability planning efforts. SAFECOM also collaborated in DHS grant guidance to help states develop statewide interoperability plans by the end of 2007. According to FEMA officials, the agency is taking actions to design, staff, and maintain a rapidly deployable, responsive, interoperable, and highly reliable emergency communications capability using the latest commercial off-the-shelf voice, video, and data technology. 
FEMA’s Response Division is the designated lead for tactical communications, along with situational awareness information technology enablers that are provided by FEMA’s Chief Information Officer. Mobile Emergency Response Support (MERS) detachments provide robust, deployable command, control, and incident communications capabilities to DHS/FEMA elements for catastrophic Incidents of National Significance. The MERS mission supports Emergency Support Function partners at the federal, state, and local levels of government. The plan is to utilize enhanced MERS capabilities and leverage commercial technology to provide real-time connectivity between communications platforms in a manner consistent with the emergency communication deployment doctrine being developed by DHS and FEMA. According to FEMA officials, emergency managers at the federal, state, and local levels of government will benefit from an integrated interoperable emergency communications architecture that includes the Department of Defense, United States Northern Command, and the National Guard Bureau. Our recent work noted that the $2.15 billion in grant funding awarded to states and localities from fiscal year 2003 through fiscal year 2005 for communications interoperability enhancements helped to fund improvements on a variety of interoperability projects. However, this work noted that the SAFECOM program has made limited progress in improving communications interoperability at all levels of government. For example, the program has not addressed interoperability with federal agencies, a critical element of interoperable communications required by the Intelligence Reform and Terrorism Prevention Act of 2004. The SAFECOM program has focused on helping states and localities improve interoperable communications by developing tools and guidance for their use. 
However, based on our review of four states and selected localities, SAFECOM’s progress in achieving its goals of helping these states and localities improve interoperable communications has been limited. Officials from the states and localities we reviewed often found that the tools and planning assistance provided by the program were not helpful, or they were unaware of what assistance the program had to offer. The program’s limited effectiveness can be linked to poor program management practices, including the lack of a plan for improving interoperability across all levels of government and inadequate performance measures that would provide feedback to better attune tools and assistance with public safety needs. Until SAFECOM adopts these key management practices, its progress is likely to remain limited. Further, little progress had been made in developing Project 25 standards—a suite of national standards that are intended to enable interoperability among the communications products of different vendors. For example, although one of the eight major subsets of standards was defined in the project’s first 4 years (from 1989 to 1993), from 1993 through 2005, no additional standards were completed that could be used by a vendor to develop elements of a Project 25 system. The private-sector coordinating body responsible for Project 25 has defined specifications for three additional subsets of standards. However, ambiguities in the published standards have led to incompatibilities among products made by different vendors, and no compliance testing has been conducted to ensure vendors’ products are interoperable. Nevertheless, DHS has strongly encouraged state and local agencies to use grant funding to purchase Project 25 radios, which are substantially more expensive than non-Project 25 radios. 
As a result, states and local agencies have purchased fewer, more expensive radios, which still may not be interoperable and thus may provide them with minimal additional benefits. Thus, until DHS takes a more strategic approach to improving interoperable communications, progress by states and localities is likely to be impeded. FEMA Taking Steps to Address Logistics Problems In the wake of Hurricane Katrina, FEMA’s performance in the logistics area came under harsh criticism. Within days, FEMA became overwhelmed and essentially asked the military to take over much of the logistics mission. In the Post-Katrina Reform Act, Congress required FEMA to make its logistics system more flexible and responsive. FEMA’s improvements to its logistics strategy and efforts are designed to initially lean forward and provide immediate support to a disaster site mainly through FEMA-owned goods and assets, and later on to establish sustained supply chains with the private vendors whose resources are needed for ongoing response and recovery activities, according to FEMA officials. According to FEMA officials, the agency is building forward-leaning capabilities that include, for example, its MERS resources designed to support a variety of communications requirements—satellite, land mobile radio, computer and telephone systems—with the ability to operate from one or more locations (mobile and stationary) within the response area of operations. FEMA has also developed a Pre-Positioned Disaster Supply (PPDS) program to position containers of life-saving and life-sustaining disaster equipment and supplies as close to a potential disaster site as possible, in order to substantially reduce the initial response time to incidents. 
Further, FEMA is developing a Pre-positioned Equipment Program (PEP) that also consists of standardized containers of equipment to provide state and local governments responding to a range of major disasters with such equipment as personal protective supplies, decontamination, detection, technical search and rescue, law enforcement, medical, interoperable communications, and other emergency response equipment. According to FEMA officials, FEMA has established 8 of the 11 PEP locations mandated by the Post-Katrina Reform Act and is currently conducting an analysis to determine where the additional PEP sites should be located. FEMA has also stated that it has enhanced its relationships with its public-sector disaster logistics partners and has worked to utilize their expertise through interagency agreements with the Defense Logistics Agency, the Department of Transportation, and the Marine Corps Systems Command. According to FEMA officials, another critical component of an effective logistics system is FEMA’s ability to work collaboratively with and leverage the capabilities of its public and private partners. FEMA’s logistics efforts have identified private sector expertise to improve and develop software systems to increase logistics program efficiency and effectiveness. For example, the Logistics Information Management System (LIMS) is FEMA’s formal accountability database system for all property managed within FEMA nationwide or at disaster field locations. At the same time, FEMA is also developing a multi-phased Total Asset Visibility (TAV) program with the assistance of the private sector to leverage the collective resources of the private and public sectors to improve emergency response logistics in the areas of transportation, warehousing, and distribution. 
The current phase of the program, which is operational at two FEMA logistics centers (Atlanta, Georgia, and Fort Worth, Texas), encompasses two software management packages designed to provide FEMA the ability to inventory disaster response commodities upon arrival at a warehouse, place the commodities in storage, and track the commodities while stored in the warehouse. FEMA plans to expand the capabilities of this first phase of the system to all FEMA regions during 2007. This will provide FEMA with sufficient logistics management and tracking capabilities until an expanded phase two can be implemented. For the second phase, FEMA is currently conducting market research to solicit input from the private sector and other sources to facilitate the phase’s final design. According to FEMA officials, initial operational capabilities for this phase are scheduled to be in place by June 2008, with full operational capability in June 2009. According to FEMA, the completed product will provide a more comprehensive approach to producing real-time, reliable reporting and incorporate FEMA’s financial resource tracking requirements. It will also be able to support other federal departments and agencies, nongovernmental organizations, and state, local, and tribal organizations under the guidelines of the NRP. While FEMA has been working to address its logistics capabilities, it is too early to evaluate these efforts. We recently examined FEMA logistics issues, taking a broad approach and identifying five areas necessary for an effective logistics system. Below, we describe these five areas along with FEMA’s ongoing actions to address each. Requirements: FEMA does not yet have operational plans in place to address disaster scenarios, nor does it have detailed information on states’ capabilities and resources. As a result, FEMA does not have the information from these sources needed to define what and how much it needs to stock. 
FEMA is developing a concept of operations to underpin its logistics program and told us that it is working to develop detailed plans and the associated stockage requirements. However, until FEMA has solid requirements based on detailed plans, the agency will be unable to assess its true preparedness. Inventory management: FEMA’s system accounts for the location, quantity, and types of supplies, but the ability to track supplies in transit is limited. FEMA has several efforts under way to improve transportation and tracking of supplies and equipment, such as expanding its new system for in-transit visibility from the two test regions to all FEMA regions. Facilities: FEMA maintains nine logistics centers and dozens of smaller storage facilities across the country. However, it has little assurance that these are the right number of facilities located in the right places. FEMA officials told us they are in the process of determining the number of storage facilities the agency needs and where they should be located. Distribution: Problems persist with FEMA’s distribution system, including poor transportation planning, unreliable contractors, and lack of distribution sites. FEMA officials described initiatives under way that should mitigate some of the problems with contractors, and said the agency has been working with the Department of Defense and the Department of Transportation to improve access to transportation when needed. People: Human capital issues are pervasive in FEMA, including in the logistics area. The agency has a small core of permanent staff, supplemented with contract and temporary disaster assistance staff. However, FEMA’s recent retirements and losses of staff, and its difficulty in hiring permanent staff and contractors, have created staffing shortfalls and a lack of capability. 
According to a January 2007 study commissioned by FEMA, there are significant shortfalls in the staffing and skill sets of full-time employees, particularly in the planning, advanced contracting, and relationship management skills needed to fulfill the disaster logistics mission. FEMA has recently hired a logistics coordinator and is making a concerted effort to hire qualified staff for the entire agency, including logistics. In short, FEMA is taking many actions to transition its logistics program to be more proactive, flexible, and responsive. While these and other initiatives hold promise for improving FEMA’s logistics capabilities, it will be years before they are fully implemented and operational. Revisions Made to Evacuation Planning, Mass Care, Housing and Human Services In an April 2007 testimony, FEMA’s Deputy Administrator for Operations said that emergency evacuation, shelter, and housing is FEMA’s most pressing priority in planning for recovery from a catastrophic disaster. He said that FEMA is undertaking more detailed mass evacuee support planning; the Department of Justice and Red Cross are developing methods for more quickly identifying and uniting missing family members; and FEMA and the Red Cross have developed a web-based data system to support shelter management, reporting, and facility identification activities. Evacuation. Recent GAO work found that actions are needed to clarify responsibilities and increase preparedness for evacuations, especially for transportation-disadvantaged populations. We found that state and local governments are generally not well prepared (in terms of planning, training, and conducting exercises) to evacuate transportation-disadvantaged populations, but some states and localities have begun to address challenges and barriers. 
For example, in June 2006, DHS reported that only about 10 percent of the state and about 12 percent of the urban area emergency plans it reviewed adequately addressed evacuating these populations. Steps being taken by some such governments include collaboration with social service and transportation providers and transportation planning organizations—some of which are Department of Transportation (DOT) grantees and stakeholders—to determine transportation needs and develop agreements for emergency use of drivers and vehicles. The federal government provides evacuation assistance to state and local governments, but gaps in this assistance have hindered many of these governments’ ability to sufficiently prepare for evacuations. These gaps include the lack of any specific requirement to plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations as well as gaps in the usefulness of DHS’s guidance. We recommended that DHS clarify federal agencies’ roles and responsibilities for providing evacuation assistance when state and local governments are overwhelmed, require state and local evacuation preparedness for transportation-disadvantaged populations, and improve information to assist these governments. We also recommended that DOT encourage its grant recipients to share information to assist in evacuation preparedness for these populations. DOT and DHS agreed to consider our recommendations, and DHS stated it has partly implemented some of them. In his April 26, 2007, testimony before the House Transportation and Infrastructure Committee, FEMA’s Deputy Administrator stated that FEMA is undertaking more detailed mass evacuation support planning to help state and local governments plan and prepare for hosting large displaced populations. The project is to include the development of an evacuee registration and tracking capability and implementation plans for federal evacuation support to states. Mass Care and Shelter. 
During the current NRP review period, FEMA has revised the organizational structure of ESF-6, Mass Care, Housing, and Human Services, making FEMA the primary agency responsible for this emergency support function. The Red Cross will remain as a supporting agency in the responsibilities and activities of ESF-6. FEMA continues to maintain a Memorandum of Understanding (MOU) with the Red Cross that articulates agency roles and responsibilities for mass care. The MOU and its addendum were revised in May 2006 and December 2006, respectively. FEMA is currently working with the Red Cross and other support agencies to revise ESF-6 standard operating procedures. According to a February 2007 letter by the Red Cross, this change will not take place until the NRP review process is complete and all changes are approved. According to FEMA’s Deputy Administrator, FEMA and the Red Cross have developed the first phase of a web-based data system to support shelter management, reporting, and facility identification activities. The system is intended for all agencies that provide shelter service during disasters to ensure a comprehensive understanding of shelter populations and available shelter capacity. Temporary housing. Other recent GAO work noted that FEMA needs to identify and assess the capabilities that exist across the federal government and outside the federal government, including for temporary housing. In a recent report on housing assistance, we found that the National Response Plan’s ESF-6 annex covering temporary shelter and housing clearly described the overall responsibilities of the two primary responsible agencies—FEMA and the Red Cross. However, the responsibilities described for the support agencies—the Departments of Agriculture, Defense, Housing and Urban Development (HUD), and Veterans Affairs—did not, and still do not, fully reflect their capabilities. 
Further, these support agencies had not, at the time of our work, developed fact sheets describing their roles and responsibilities, notification and activation procedures, and agency-specific authorities, as called for by ESF-6 operating procedures. Our February 2007 report recommended that the support agencies propose revisions to the NRP that fully reflect each respective support agency’s capabilities for providing temporary housing under ESF-6, develop the needed fact sheets, and develop operational plans that provide details on how their respective agencies will meet their temporary housing responsibilities. The Departments of Defense, HUD, Treasury, Veterans Affairs, and Agriculture concurred with our recommendations. The Red Cross did not comment on our report or recommendations. As part of a housing task force, FEMA is currently exploring ways of incorporating housing assistance offered by private sector organizations. FEMA says it has also developed a housing portal to consolidate available rental resources for evacuees from federal agencies, private organizations, and individuals. Appendix II: Related GAO Products Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. First Responders: Much Work Remains to Improve Communications Interoperability. GAO-07-301. Washington, D.C.: April 2, 2007. Emergency Preparedness: Current Emergency Alert System Has Limitations, and Development of a New Integrated System Will Be Challenging. GAO-07-411. Washington, D.C.: March 30, 2007. Disaster Preparedness: Better Planning Would Improve OSHA’s Efforts to Protect Workers’ Safety and Health in Disasters. GAO-07-193. Washington, D.C.: March 28, 2007. Public Health and Hospital Emergency Preparedness Programs: Evolution of Performance Measurement Systems to Measure Progress. GAO-07-485R. Washington, D.C.: March 23, 2007. 
Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007. Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-300. Washington, D.C.: March 15, 2007. Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007. Hurricane Katrina: Agency Contracting Data Should Be More Complete Regarding Subcontracting Opportunities for Small Businesses. GAO-07-205. Washington, D.C.: March 1, 2007. Hurricane Katrina: Allocation and Use of $2 Billion for Medicaid and Other Health Care Needs. GAO-07-67. Washington, D.C.: February 28, 2007. Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 28, 2007. Highway Emergency Relief: Reexamination Needed to Address Fiscal Imbalance and Long-term Sustainability. GAO-07-245. Washington, D.C.: February 23, 2007. Small Business Administration: Additional Steps Needed to Enhance Agency Preparedness for Future Disasters. GAO-07-114. Washington, D.C.: February 14, 2007. Small Business Administration: Response to the Gulf Coast Hurricanes Highlights Need for Enhanced Disaster Preparedness. GAO-07-484T. Washington, D.C.: February 14, 2007. Hurricanes Katrina and Rita: Federal Actions Could Enhance Preparedness of Certain State-Administered Federal Support Programs. GAO-07-219. Washington, D.C.: February 7, 2007. Homeland Security Grants: Observations on Process DHS Used to Allocate Funds to Selected Urban Areas. GAO-07-381R. Washington, D.C.: February 7, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-452T. Washington, D.C.: February 7, 2007. Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007. 
Hurricanes Katrina and Rita Disaster Relief: Prevention Is the Key to Minimizing Fraud, Waste, and Abuse in Recovery Efforts. GAO-07-418T. Washington, D.C.: January 29, 2007. Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007. Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007. Transportation-Disadvantaged Populations: Actions Needed to Clarify Responsibilities and Increase Preparedness for Evacuations. GAO-07-44. Washington, D.C.: December 22, 2006. Hurricanes Katrina and Rita: Continued Findings of Fraud, Waste, and Abuse. GAO-07-252T. Washington, D.C.: December 6, 2006. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006. Capital Financing: Department Management Improvements Could Enhance Education’s Loan Program for Historically Black Colleges and Universities. GAO-07-64. Washington, D.C.: October 18, 2006. Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in Future. GAO-06-1013. Washington, D.C.: September 27, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. Washington, D.C.: September 6, 2006. Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. Washington, D.C.: June 8, 2006. 
Federal Emergency Management Agency: Factors for Future Success and Issues to Consider for Organizational Placement. GAO-06-746T. Washington, D.C.: May 9, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Homeland Security: DHS’ Efforts to Enhance First Responders’ All- Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. Continuity of Operations: Agency Plans Have Improved, but Better Oversight Could Assist Agencies in Preparing for Emergencies. GAO-05-577. Washington, D.C.: April 28, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As a new hurricane season approaches, the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security (DHS) faces the simultaneous challenges of preparing for the season and implementing the reorganization and other provisions of the Post-Katrina Emergency Management Reform Act of 2006. The Act stipulates major changes to FEMA intended to enhance its preparedness for and response to catastrophic and major disasters. As GAO has reported, FEMA and DHS face continued challenges, including clearly defining leadership roles and responsibilities, developing necessary disaster response capabilities, and establishing accountability systems to provide effective services while protecting against waste, fraud, and abuse. 
This testimony (1) summarizes GAO's findings on these challenges and FEMA's and DHS's efforts to address them; and (2) discusses several disaster management issues for continued congressional attention. Effective disaster preparedness and response require defining what needs to be done, where and by whom, how it needs to be done, and how well it should be done. GAO analysis following Hurricane Katrina showed that improvements were needed in leadership roles and responsibilities, development of the necessary disaster capabilities, and accountability systems that balance the need for fast, flexible response against the need to prevent waste, fraud, and abuse. To facilitate rapid and effective decision making, legal authorities, roles and responsibilities, and lines of authority at all government levels must be clearly defined, effectively communicated, and well understood. Adequate capabilities are needed in the context of a catastrophic or major disaster--particularly in the areas of (1) situational assessment and awareness; (2) emergency communications; (3) evacuations; (4) search and rescue; (5) logistics; and (6) mass care and shelter. Implementing controls and accountability mechanisms helps to ensure the proper use of resources. FEMA has initiated reviews and some actions in each of these areas, but their operational impact in a catastrophic or major disaster has not yet been tested. Some of the targeted improvements, such as a completely revamped logistics system, are multiyear efforts. Others, such as the ability to field mobile communications and registration-assistance vehicles, are expected to be ready for the coming hurricane season. The Comptroller General has suggested that one area for fundamental reform and oversight is ensuring a strategic and integrated approach to prepare for, respond to, recover from, and rebuild after catastrophic events. 
FEMA enters the 2007 hurricane season as an organization in transition, working simultaneously to implement the reorganization required by the Post-Katrina Reform Act and moving forward on initiatives to address the deficiencies identified by the post-Katrina reviews. This is an enormous challenge. In the short term, Congress may wish to consider several specific areas for immediate oversight. These include (1) evaluating the development and implementation of the National Preparedness System, including preparedness for natural disasters, terrorist incidents, and an influenza pandemic; (2) assessing state and local capabilities and the use of federal grants to enhance those capabilities; (3) examining regional and multi-state planning and preparation; (4) determining the status and use of preparedness exercises; and (5) examining DHS policies regarding oversight assistance. |
Background Under the DOD Financial Management Regulation (FMR), all major new systems are to be identified with a unique PE code. PE codes have 10 positions, and in general each position conveys information, as seen in figure 1. The first and second positions of the PE code illustrated above define the Major Force Program (MFP), which contains the resources necessary to achieve a broad objective or plan. In figure 1, for example, the “06” in the first two positions indicates that this is a research and development effort. Because the code begins with “06,” under the FMR the third and fourth positions must define the budget activity. In general, the budget activity codes that fill positions three and four are intended to describe the current nature of the research and development effort for each PE code. For example, budget activities 1 through 3 cover initial development efforts and describe activities that take place in what is often called the science and technology realm. Research and development efforts in these first three budget activities may produce scientific studies and experimentation, develop paper studies of alternative concepts, or test the integration of subsystems and components. Budget activities 4 and 5 cover efforts to fully develop and acquire integrated weapon systems, respectively. Programs in these budget activities may perform efforts necessary to further mature a technology or conduct engineering and manufacturing development tasks. Budget activity 6 funds efforts to sustain or modernize the installations or operations required for general RDT&E. Test ranges, military construction, maintenance support of laboratories, studies and analysis, and operations and maintenance of test aircraft and ships are funded with this budget activity. Budget activity 7 designates R&D efforts for systems that have already been approved for production or that have already been fielded.
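The positional layout described above can be expressed as a small parser. This is an illustrative sketch only, not DOD software; the RDT&E code "0604123A" used below is hypothetical, while "0102419A" is the JLENS code discussed later in this report.

```python
def parse_pe_code(pe_code: str) -> dict:
    """Label the PE code fields that the FMR description assigns by position.

    Illustrative sketch (not DOD software): positions 1-2 give the Major
    Force Program (MFP), where "06" marks a research and development effort;
    positions 3-4 give the budget activity only for RDT&E ("06") codes; the
    final position is a service letter, e.g. "A" for an Army request.
    """
    mfp = pe_code[:2]
    is_rdte = mfp == "06"
    return {
        "mfp": mfp,
        "is_rdte": is_rdte,
        # Budget activity 7 efforts instead carry the fielded system's MFP,
        # so non-"06" codes convey no budget activity at all.
        "budget_activity": pe_code[2:4] if is_rdte else None,
        "service": pe_code[-1],
    }

print(parse_pe_code("0604123A"))  # hypothetical budget activity 4 Army code
print(parse_pe_code("0102419A"))  # JLENS: "01" strategic forces, no RDT&E flag
```

As the second call shows, nothing in a code like 0102419A signals that RDT&E funds are being requested, which is the visibility problem the report describes.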
Unlike budget activities 1 through 6, budget activity 7 is not indicated in a PE code; this information appears only in the accompanying budget exhibits. Despite the fact that these are research and development efforts, their program element codes contain no indication that they are for research and development. The FMR also requires that DOD justify its annual RDT&E budget requests in budget exhibit documents. These accompanying budget exhibits are the primary information source for Congress and analysts throughout the government. Generally, there are six sections in each budget exhibit used to justify each funding request made under an RDT&E PE code: a mission description and budget item justification section; an accomplishments and planned program section; a program change summary section showing total funding, schedule, and technical changes to the program element that have occurred since the previous budget submission; a performance measures section to justify 100 percent of the resources requested; a section that shows connections and dependencies among projects, which should also include information such as the appropriation, budget activity, line item, and program element number of the related efforts; and a section providing a schematic display of major program milestones that reflects engineering milestones, acquisition approvals, test and evaluation events, and other key program events. Program Element Codes and Budget Exhibits Do Not Consistently Provide Key Information The program element code structure and budget exhibits do not consistently provide accurate, clear, and complete information regarding RDT&E budget requests. The PE codes given for many programs do not indicate that they are for an R&D effort at all or do not accurately reflect the reported nature of the development.
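The six budget exhibit sections required by the FMR, listed above, amount to a checklist, and GAO's completeness review can be thought of as flagging the sections an exhibit omits. A minimal sketch of that idea follows; the short section keys and the incomplete example exhibit are our own labels, not FMR terms.

```python
# The six FMR-required budget exhibit sections, keyed with our own labels
# (assumption: these keys are illustrative shorthand, not official FMR names).
REQUIRED_SECTIONS = {
    "mission_description",       # mission description and budget item justification
    "accomplishments_planned",   # accomplishments and planned program
    "program_change_summary",    # funding, schedule, and technical changes
    "performance_measures",      # justifies 100 percent of resources requested
    "connections_dependencies",  # links to related appropriations and PE codes
    "schedule_profile",          # schematic display of major program milestones
}

def missing_sections(exhibit_sections: set) -> set:
    """Return the FMR-required sections absent from a budget exhibit."""
    return REQUIRED_SECTIONS - exhibit_sections

# A hypothetical exhibit lacking cross-references and a schedule profile:
incomplete = {"mission_description", "accomplishments_planned",
              "program_change_summary", "performance_measures"}
print(sorted(missing_sections(incomplete)))
```

Omissions of exactly these kinds, missing connections and dependencies and missing schedule detail, are among the gaps GAO found in the exhibits it reviewed.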
Budget exhibits sometimes omit required information about programs and their links to other programs, and may provide only minimal information. One-Third of the Requested RDT&E Funding Is Not Identified as RDT&E Programs Programs that were requested in budget activity 7—RDT&E efforts for fielded systems and programs approved for production—presented the greatest visibility problem. Under DOD’s current regulation, programs in this budget activity are not required to report in the code itself that the funds are for research and development, nor are they required to report the nature of the development effort. This budget activity was used to request $23.5 billion in fiscal year 2007, or about a third of DOD’s entire RDT&E funding request. Instead, the information available about these funding requests is contained in their budget exhibits. Programs classified as budget activity 7 do not begin their PE code with 06, which would identify them as RDT&E requests. Instead, budget activity 7 RDT&E efforts use PE codes that begin with the major force program code established for the system being modified. For example, PE 0102419A, the Joint Land Attack Cruise Missile Defense Elevated Netted Sensor System (JLENS) program, provides no indication that the effort uses RDT&E funds, nor does it identify the nature of the development. It begins with 01, indicating it is for strategic forces, and ends with A, indicating that it is an Army request. In addition, we found that the nature of the efforts funded in budget activity 7 overlaps with the nature of the efforts undertaken in other budget activities, making it difficult to determine the amounts requested for the different stages of development across the entire RDT&E budget.
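The "about a third" figure can be checked directly from the report's own numbers: $23.5 billion requested under budget activity 7 against DOD's $73.2 billion total fiscal year 2007 RDT&E request.

```python
# Quick arithmetic check of the budget activity 7 share, using the figures
# stated in this report ($ billions, fiscal year 2007 request).
ba7_request = 23.5   # requested under budget activity 7 (fielded/approved systems)
total_rdte = 73.2    # DOD's total FY 2007 RDT&E request

ba7_share = ba7_request / total_rdte
print(f"BA 7 share of the RDT&E request: {ba7_share:.0%}")
```

The result, roughly 32 percent, is the one-third of RDT&E funding whose PE codes carry no indication that research and development is being funded.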
While the definition for budget activity 7 describes efforts that are fielded or approved for production, several defense acquisition efforts covered under budget activity 7 are involved in phases ranging from technology development to production, as shown in figure 2. The JLENS program, for example, is currently developing an aerostat for cruise missile defense, but its code does not indicate that it is in system development and demonstration. There is a separate code, budget activity 5, which also funds efforts in system development and demonstration. Program Element Codes also Misidentified the Specific Nature of R&D Efforts We found that 65 percent of the RDT&E PE codes for budget activities 1 through 6 misidentified the nature or stage of the development in fiscal year 2007. While the early development efforts described by budget activities 1 through 3 generally properly identified the nature of the effort in their codes, budget activities 4 through 6 generally did not, as seen in figure 3. Programs reported in budget activities 4 through 6 requested more than half of the total RDT&E funding. The challenge of clearly identifying the nature of the development efforts is actually much worse when the budget activity 7 programs are considered along with the misidentified programs in budget activities 1 through 6. As a result of these combined problems, the PE code provides only limited visibility into 85 percent of the requested funding, as seen in figure 4. Budget Exhibits Were Sometimes Inconsistent, Incomplete, or Unclear While reviewing a set of 47 RDT&E budget exhibits from fiscal years 2006 and 2007, we observed that some of the exhibits omitted key information. While DOD presents some valuable information in these exhibits, we found in many cases that the justification narratives were not clear or provided little or no additional information from the previous year’s justification. 
In a number of cases the narratives appeared to be “copy and paste” descriptions of activities from prior years, making it difficult to determine recent changes or program progress. We also observed that information on accomplishments from the past year was provided for only a few programs. In addition, few programs provided detailed narratives of planned activities for the current budget year. For the few exhibits that did contain narratives on planned activities, the level of detail was minimal. For example, the Air Force Global Positioning System (GPS) Block III program requested $315 million for fiscal year 2007 and had requested $119 million combined in the previous 2 fiscal years. In its exhibit, the accomplishments were described as “Continue Program Support and Modernization Development for GPS III,” and the planned program was described as “Begin Modernization Segment.” Furthermore, funding changes from year to year were inconsistently reported. In some cases they were provided at the PE code level and in other cases at the project level, when the PE code involved multiple projects. These funding change summaries also routinely provided limited detail on the reasons for the changes. For example, the Navy’s EA-18G and DD(X) programs had funding changes of millions of dollars to their previous and current budgets, but their exhibits failed to provide details of why these changes had occurred. Additionally, the budget exhibits did not always consistently identify the connections and dependencies among related projects, as required by the FMR. In general, these connections can be vital to the successful development of some programs. In some instances, key components of a system under development are being developed in other programs. A delay or failure in one program can mean delay and failure in the related program.
For example: the C-130 Avionics Modernization Program (AMP) exhibit did not identify all of the required information for related programs in the Navy and Special Forces, nor did it identify C-130 Talon II procurement, which included C-130 AMP upgrades. The DDG-1000 (formerly the DD(X)) destroyer program is developing dual-band radar that will be used on the CVN-21 aircraft carrier, but no reference is made to this dependency in the exhibit. The Expeditionary Fire Support System is being developed to be transported by the V-22 aircraft, but its exhibit makes no reference to the V-22 program. The Warrior UAV (unmanned aerial vehicle) is being developed in two PEs, one for the system and one to weaponize it, yet only one program references the other. Our review found that the schedule profiles in the budget exhibits were generally provided but sometimes did not offer a detailed display of major program milestones such as engineering milestones, acquisition approvals, or test and evaluation events. Also, we could not find the standard program milestones in a number of the schedule profiles we reviewed. In some cases we found it difficult to determine the program’s phase of development. For example, in two cases the development schedule showed “continue development” across all fiscal years displayed. In another example, a program simply reported “S/W Development” with a bar covering all fiscal years. DOD Guidance and Practices Contribute to Reduced Visibility We found that the RDT&E program element codes and the budget exhibits are not always accurate, clear, consistent, and complete for two major reasons. First, DOD’s own regulation for constructing program element codes does not require a large part of the RDT&E effort to be reflected in program element codes. Second, the regulation governing the structuring of the coding and the content of the exhibits is vague.
For example, it does not require the coding to be updated from one year to the next to ensure the correct stage of development has been accurately identified. The regulation also does not provide sufficiently detailed guidance to ensure consistency in the format and content of the budget exhibits. This leads to inconsistencies in how the regulation is applied by different organizations and officials within DOD. The FMR requires that once a weapon system is fielded or approved for production, its R&D efforts be identified not under an R&D program element code but under a different code. These PE codes are required to carry the Major Force Program code of the fielded system. As a result, these PE codes, accounting for one-third of the requested RDT&E budget, do not identify the efforts as R&D activities and do not indicate the nature of the R&D effort. In addition, the regulation is unclear on how or when program element coding should change over time as development progresses into a different stage. As a result, even if a program element code is accurate when first assigned, programs that successfully mature will develop inaccurate coding over time unless the code is updated. Several DOD officials said that one of the reasons that the budget exhibits are insufficient as decision-making tools is the lack of clear and consistent guidance for the budget exhibits in the FMR. For example, while the FMR requires a “Program Schedule Profile” exhibit, it is not standardized. The FMR provides examples of the budget exhibits that include an “Accomplishment/Planned Program” section. However, it is unclear whether this section has to contain specific information about both the accomplishments achieved with previous funds and the activities to be achieved with requested funds. Conclusion Congress has the difficult task of choosing which RDT&E efforts to fund from the many competing demands.
These RDT&E efforts are critical to the national interest. However, they must be balanced against the other fiscal pressures facing the government, including the large and growing structural deficit. These factors make it especially important that Congress get the justifications for these R&D efforts in a clear, consistent, and readily usable form. However, the department’s policies and practices are not providing this key information to congressional decision makers. The RDT&E justification material often obscures rather than reveals the nature of the efforts under way and prevents a determination of the specifics regarding why the money is needed. A number of opportunities exist for DOD to provide Congress with clearer justifications for the funds requested for these efforts. Congress needs a structure that will (1) properly identify the development status of projects for which funds are requested, (2) bring complete visibility to all of the activities for which funds are requested, and (3) provide consistent information about how well these projects are progressing in order to make efficient decisions. More specific guidance could improve the ability of the program element codes and budget exhibits to aid Congress in focusing oversight where it is needed, facilitate early corrective action, and improve accountability. Recommendations for Executive Action This report makes two recommendations.
To provide Congress with greater understanding of the nature of developmental activities proposed, as well as to improve the consistency and completeness of the justification material provided for the RDT&E funds requested, the Secretary of Defense should ensure that the DOD Comptroller: Revises the Financial Management Regulation to (1) in the case of programs approved for production or fielded, ensure that the code or the budget exhibit indicates which stage of development—from basic research through system development and demonstration—the effort is undertaking, and (2) ensure that the program element codes reflect the stage of development—that is, from basic research through system development and demonstration—of the requested research and development effort. Develops more specific guidance for budget exhibits to ensure that they are accurate, consistent, clear, and complete, and enforces a disciplined process for ensuring proper reporting of program progress and planned efforts. Matter for Congressional Consideration Congress may wish to have DOD’s Comptroller work with relevant committees to reach agreement on how to revise budget exhibits and the program element code structure to meet congressional oversight needs as well as serve the needs of DOD. In these discussions, consideration could be given to: The value and cost of modifying or replacing the current PE code structure so that it more readily informs Congress as to the nature of the R&D effort for systems in development as well as fielded systems and systems approved for production. The best means to inform Congress of the state of development of the requested effort as it progresses toward production. The changes needed to the format and content of the budget exhibits to more effectively communicate the purpose for which funding is sought, the progress made with prior funding, and other key funding justification information. The time frames and funding needed to develop and implement any changes.
Agency Comments and Our Evaluation DOD partially concurred with both of our recommendations. However, it is unclear from DOD’s response what specific actions the department will take in response to our recommendations other than to put additional emphasis on properly reporting program progress and planned efforts. In partially concurring with our first recommendation, DOD commented that in the case of systems that have been approved for production and fielded, specifically capturing RDT&E funding would be counterproductive to how the departmental leadership makes decisions. We recognize the importance of enabling the department’s leadership to make decisions on the full scope of a program; however, we note that program element codes also have the purpose of providing important oversight information to Congress, and the current practice significantly reduces the clarity of RDT&E funding justifications to Congress. As we reported, RDT&E funds for programs approved for production and fielded systems currently represent one-third of the total RDT&E budget. As a result, we believe this level of investment warrants improved clarity. We have modified the wording in the recommendation to focus on providing clearer information to Congress either through the program element code structure or the budget exhibits. DOD fully concurred with the second part of that recommendation to ensure that the program element codes reflect the state of development of the requested effort as it progresses toward production. DOD noted that the program element structure accommodates this progression. However, our analysis found that a significant amount of funding is misidentified in the coding. DOD has not identified any proposed actions to correct this misidentification. DOD partially concurred with our second recommendation and will place greater emphasis on proper reporting of program progress and planned efforts as reported in its budget exhibits.
However, DOD took issue with developing a template, stating that it is doubtful any single template would be feasible or desirable given the complexity of the RDT&E activity. We believe more specific guidance is needed to ensure that DOD is more effectively communicating the purpose for which funding is sought, the progress made with prior funding, as well as the other key funding justification information. We have modified the wording of the recommendation to remove the term “template.” We still believe that greater standardization is called for, but recognize that templates are but one means among many to achieve that end. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Navy, and Army; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or any of your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were David Best (Assistant Director), Jerry Clark, Chris Deperro, Greg Campell, Anna Russell, Julie Hadley, and Noah Bleicher. Appendix I: Scope & Methodology To assess whether DOD’s RDT&E program element code structure provides Congress consistent, complete, and clear information, we reviewed relevant guidance while analyzing all 696 program elements contained in the fiscal year 2007 RDT&E budget request. We assessed program elements by budget activity, dollar value, number of projects, and phase of development.
We also determined the number of program elements that properly matched their assigned budget activity, to assess whether guidance contained in the Financial Management Regulation was properly followed. To determine the actual phase of development for programs requesting funding in budget activity 7, we reviewed both their budget exhibits and other documents, such as their Selected Acquisition Reports. For this analysis, we reviewed the Global Hawk Unmanned Aircraft System, Aerial Common Sensor, Warrior Unmanned Aircraft System, Mobile User Objective System, Joint Land Attack Cruise Missile Defense Elevated Netted Sensor System, and Navstar Global Positioning System. We assessed the information contained in the RDT&E budget justification documents by reviewing the content, structure, clarity, and completeness of all components of the budget exhibits, including sections related to the program’s description and budget item justification, schedule profile, and funding summaries. We reviewed multiple budget justification documents from 47 program element codes reported in February 2005 and 2006. We ensured that this review encompassed all military services and covered multiple fiscal years. This review focused on budget exhibits for programs in budget activities 4, 5, and 7 to identify any differences in the consistency, completeness, and clarity of information presented. To assess connections and dependency information, we included additional programs that GAO has recently reported on because a more detailed understanding of the programs involved is required to identify these connections. To assess consistency, we compared the level of detail reported from year to year and compared the treatment of funding changes from program to program. To assess completeness, we compared the information in the budget exhibits to the requirements of the FMR. To assess the clarity of the information, we reviewed the details provided in the narrative language in the exhibits.
To determine the factors that contribute to any problems found, we also reviewed the Department of Defense Financial Management Regulation and the Future Years Defense Planning Handbook policies and guidance related to developing program elements and budget justification documents. To better understand the processes involved with developing program elements and budget justification documents, we interviewed officials from the offices of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Director, Defense Research and Engineering; the Under Secretary of Defense (Comptroller); the Principal Deputy Under Secretary of Defense (Comptroller); the Director, Program Analysis and Evaluation; the Assistant Secretary of the Army for Financial Management and Comptroller; the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller; the Office of the Secretary of the Air Force; and the Air Force Office for the Deputy Chief of Staff for Strategic Plans and Programs. We conducted our review from June 2006 to January 2007 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of Defense | The Department of Defense (DOD) asked Congress for $73.2 billion in fiscal year 2007 for research, development, testing, and evaluation (RDT&E). DOD organized this request using program element (PE) codes, which are designed to convey key information about the budget request. DOD also provides documents called budget exhibits detailing the activities for which funds are being requested. The National Defense Authorization Act for Fiscal Year 2006 mandated that GAO examine the program elements and budget exhibits. GAO assessed (1) whether the RDT&E program element code structure and the associated budget exhibits provide accurate, consistent, complete, and clear information, and (2) what factors contribute to any problems found.
In conducting this review GAO analyzed all of the fiscal year 2007 program element codes and 47 budget exhibits. GAO also interviewed key DOD officials. Neither the RDT&E program element code structure nor the budget exhibits consistently provide accurate, clear, and complete information on the nature of DOD's proposed research and development efforts. First, one-third of the requested RDT&E funding is for efforts that are not identified as research and development in their program element codes. In addition, a majority of the remaining funding request misidentifies the budget activity (a classification of the stage of development, ranging from BA 1 for basic research to BA 6 for management support) as it is stated in program element codes. Second, some of the budget exhibits justifying the programs' funding requests do not provide consistent, complete, and clear information with suitable levels of detail needed to understand DOD's research and development efforts. GAO found that DOD budget exhibits were difficult to understand, frequently lacked information about the accomplishments and planned efforts of each project, lacked appropriate cross-references between efforts, and were frequently missing key schedule data. Neither the RDT&E program element codes nor the budget exhibits are always accurate, clear, consistent, and complete, for two major reasons. First, DOD's regulation does not require identification of any RDT&E effort as such in its program element code if it is taking place on a weapon system that is approved for production or already fielded. This affects over a third of all RDT&E funds. Second, the regulation governing the structuring of the coding and the content of the exhibits is vague. For example, the regulation does not require the coding to be updated from one year to the next to ensure the correct stage of development has been accurately identified.
The regulation also does not provide sufficiently detailed guidance to ensure consistency in the format and content of the budget exhibits. This results in budget exhibits being insufficient as decision-making tools, according to DOD officials. |
Background The judiciary pays rent annually to GSA for court-related space. In fiscal year 2010, the judiciary’s rent payments totaled over $1 billion. The judiciary’s rent payments are deposited into GSA’s Federal Buildings Fund, a revolving fund used to finance GSA’s real property services, including the construction and repair of federal facilities under GSA control. Since fiscal year 1996, the judiciary has used a 5-year plan to prioritize new courthouse construction projects, taking into account a court’s projected need for space related to caseload and estimated growth in the number of judges and staff, security concerns, and any operational inefficiencies that may exist. Under current practices, GSA and the judiciary plan new federal courthouses based on the judiciary’s projected 10-year space requirements, which incorporate the judiciary’s projections of how many judges it will need in 10 years. The L.A. Court’s operations are currently split between two buildings—the Spring Street Courthouse built in 1938 and the Roybal Federal Building built in 1992. In 2008, we reported that the Spring Street building consisted of 32 courtrooms—11 of which did not meet the judiciary’s minimum design standards for size—and did not meet the security needs of the judiciary. The Roybal Federal Building consists of 34 courtrooms (10 district, 6 magistrate, and 18 bankruptcy). (See fig. 1.) Since 2000, the construction of a new L.A. courthouse has been a top priority for the judiciary because of problems perceived by the judiciary related to the current buildings’ space, security, and operations. From fiscal year 2001 through fiscal year 2005, Congress made three appropriations for a new L.A. courthouse. Specifically, in fiscal year 2001, Congress provided $35.25 million to acquire a site for and design a 41-courtroom building, and in fiscal year 2004, Congress appropriated $50 million for construction of the new L.A. courthouse.
In fiscal year 2005, Congress appropriated an additional $314.4 million for the construction of a new 41-courtroom building in Los Angeles, which Congress designated to remain available until expended for construction of the previously authorized L.A. courthouse. L.A. Courthouse Project Cancelled After Delays and Increases in Estimated Costs In our 2008 report, we found that GSA had spent $16.3 million designing a new courthouse for the L.A. court and $16.9 million acquiring and preparing a new site for it in downtown Los Angeles. In addition, we reported that about $366.45 million remained appropriated for the construction of a 41-courtroom L.A. courthouse. Subsequent to the initial design and site acquisition, we noted that the project experienced substantial delays. The project was delayed because GSA decided to design a larger courthouse than congressionally authorized, GSA and the judiciary disagreed over the project’s scope, costs escalated unexpectedly, and there was low contractor interest in bidding on the project. We also reported that because of the delays, estimated costs for housing the L.A. Court had nearly tripled to over $1.1 billion, rendering the congressionally authorized 41-courtroom courthouse unachievable with current appropriations. As a result of the delays and the increases in estimated cost, in 2006, GSA cancelled the entire 41-courtroom courthouse project for which Congress had appropriated funds. By 2008, GSA was considering three options for a revised L.A. courthouse project, which would have required balancing needs for courtroom space, congressional approval, and additional estimated appropriations of up to $733 million. These options are summarized in Table 1. The L.A.
Court supported the first of these options—building a 36-courtroom, 45-chamber courthouse to house all district and senior judges and adding 4 more courtrooms in the Roybal building to house all magistrate and bankruptcy judges—but it was the most expensive, pushing the total project costs to $1.1 billion at that time. While we took no position on the three options in 2008, it was clear that the process had become deadlocked. Moreover, none of the options considered in 2008 would have solved the issue of a split court, as all involved using two buildings to house the L.A. Court. GAO Found Judiciary’s Rent Challenge Stems from Courthouses Having Unneeded Space with Higher Associated Costs GAO Found That Increases in the Judiciary’s Rent Costs Were Primarily Due to Increases in Space and That Courthouses Have Significant Unneeded Space In 2004, the judiciary requested a $483 million permanent, annual exemption from rent payments to GSA because it was having difficulty paying for its increasing rent costs. GSA denied this request. GAO found in 2006 that the federal judiciary’s rental obligations to GSA for courthouses had increased 27 percent from fiscal year 2000 through fiscal year 2005, after controlling for inflation, and that these increasing rent costs were primarily due to the judiciary’s simultaneous 19-percent increase in space. Much of the net increase in space was in new courthouses that the judiciary had taken occupancy of since 2000. In 2010, we found that the 33 federal courthouses completed since 2000 include 3.56 million square feet of unneeded space—more than a quarter of the space in courthouses completed since 2000. This extra space consists of space that was constructed as a result of (1) exceeding the congressionally authorized size, (2) overestimating the number of judges the courthouses would have, and (3) not planning for judges to share courtrooms. Overall, this space is equal to the square footage of about 9 average-sized courthouses.
The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the annual cost to rent, operate, and maintain it is $51 million.

Most Federal Courthouses Constructed Since 2000 Exceed Authorized Size, Some by Substantial Amounts

In our 2010 report on federal courthouse construction, we found that 27 of the 33 courthouses completed since 2000 exceeded their congressionally authorized size by a total of 1.7 million square feet. Fifteen exceeded their congressionally authorized size by more than 10 percent, and 12 of these 15 also incurred total project costs that exceeded the estimates provided to congressional committees. However, there is no statutory requirement to notify congressional committees about size overages. According to our analysis, a lack of oversight by GSA, including not ensuring its space measurement policies were understood and followed, and a lack of focus on building courthouses within the congressionally authorized size, contributed to these size overages. For example, all 7 of the courthouses we examined in case studies for this 2010 report included more building common and other space—such as mechanical spaces and atriums—than planned for within the congressionally authorized gross square footage. The increase over the planned space ranged from 19 percent to 102 percent. Regional GSA officials involved in the planning and construction of several courthouses we visited stated that they were unaware until we told them that the courthouses were larger than authorized. Further indicating a lack of oversight in this area, GSA relied on the architect to validate that the courthouse’s design was within the authorized gross square footage without ensuring that the architect followed GSA’s policies on how to measure certain commonly included spaces, such as atriums.
Although GSA officials emphasized that open space for atriums would not cost as much as space completely built out with floors, these officials also agreed that there are costs associated with constructing and operating atrium space. In fact, the 2007 edition of the U.S. Courts Design Guide, which reflects an effort to impose tighter constraints on future space and facilities costs, emphasizes that courthouses should have no more than one atrium.

Because the Judiciary Overestimated the Number of Judges, Courthouses Have Much Extra Space after 10 Years

For 23 of 28 courthouses whose space was planned at least 10 years ago, the judiciary overestimated the number of judges who would be located in them, causing them to be larger and costlier than necessary. Overall, the judiciary has 119, or approximately 26 percent, fewer judges than the 461 it estimated it would have. This leaves the 23 courthouses with extra courtrooms and chamber suites that, together, total approximately 887,000 square feet of extra space. A variety of factors contributed to the judiciary’s overestimates, including inaccurate caseload projections, difficulties in projecting when judges would take senior status, and long-standing difficulties in obtaining new authorizations and filling vacancies. However, we found that the contribution of inaccurate caseload projections to inaccurate estimates of how many judges would be needed cannot be measured because the judiciary did not retain the historic caseload projections used in planning the courthouses.

Low Levels of Use Show That Judges Could Share Courtrooms, Reducing the Need for Future Courtrooms by More than One-Third

According to our analysis of the judiciary’s data, courtrooms are used for case-related proceedings only a quarter of the available time or less, on average. Furthermore, no event (case related or otherwise) was scheduled in courtrooms for half the time or more, on average.
Using the judiciary’s data, we designed a model for courtroom sharing, which shows that there is enough unscheduled time for substantial courtroom sharing. (For more information on our model, see app. I). Specifically, our model shows that under dedicated sharing, in which judges are assigned to share specific courtrooms, three district judges could share two courtrooms, three senior judges could share one courtroom, and two magistrate judges could share one courtroom with time to spare. This level of sharing would reduce the number of courtrooms the judiciary requires by a third for district judges and by more for senior district and magistrate judges. In our 2010 report, we found that dedicated sharing could have reduced the number of courtrooms needed in courthouses built since 2000 by 126 courtrooms—about 40 percent of the total number—accounting for about 946,000 square feet of extra space. Furthermore, we found that another type of courtroom sharing—centralized sharing, in which all courtrooms are available for assignment to any judge based on need—improves efficiency and could reduce the number of courtrooms needed even further. Some judges we consulted raised potential challenges to courtroom sharing, such as uncertainty about courtroom availability, but others with experience in sharing indicated they had overcome those challenges when necessary and no trials were postponed. In 2008 and 2009, the Judicial Conference adopted sharing policies for future courthouses under which senior district and magistrate judges are to share courtrooms at a rate of two judges per courtroom plus one additional duty courtroom for courthouses with more than two magistrate judges. Additionally, the conference recognized the greater efficiencies available in courthouses with many courtrooms and recommended that in courthouses with more than 10 district judges, district judges also share. 
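The dedicated-sharing ratios described above (three district judges per two courtrooms, three senior judges per courtroom, two magistrate judges per courtroom) reduce to simple ceiling arithmetic. The sketch below is illustrative only; the function names and the example judge counts are ours, not GAO's.

```python
import math

# Dedicated-sharing ratios from the 2010 model:
# 3 district judges : 2 courtrooms, 3 senior judges : 1 courtroom,
# 2 magistrate judges : 1 courtroom. Illustrative arithmetic only.
def courtrooms_with_sharing(district, senior, magistrate):
    return (math.ceil(district * 2 / 3)
            + math.ceil(senior / 3)
            + math.ceil(magistrate / 2))

def courtrooms_one_per_judge(district, senior, magistrate):
    return district + senior + magistrate

# A hypothetical courthouse with 12 district, 6 senior, and 4 magistrate judges:
baseline = courtrooms_one_per_judge(12, 6, 4)  # 22 courtrooms, one per judge
shared = courtrooms_with_sharing(12, 6, 4)     # 8 + 2 + 2 = 12 courtrooms
```

The district-judge term (two courtrooms for every three judges) is what produces the one-third reduction for district judges cited above; the senior and magistrate terms cut deeper.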
Our model’s application of the judiciary’s data shows that still more sharing opportunities are available. Specifically, sharing between district judges could be increased by one-third by having three district judges share two courtrooms in courthouses of all sizes. Sharing could also be increased by having three senior judges—instead of two—share one courtroom. We found that, if implemented, these opportunities could further reduce the need for courtrooms, thereby decreasing the size of future courthouses.

GSA and the Judiciary Have an Opportunity to Align Courthouse Planning and Construction with the Judiciary’s Real Need for Space

In 2010, we concluded that, for at least some of the 29 courthouse projects underway at that time and for all future courthouse construction projects not yet begun, GSA and the judiciary have an opportunity to align their courthouse planning and construction with the judiciary’s real need for space. Such changes would reduce construction, operations and maintenance, and rent costs. We recommended, among other things, that GSA ensure that new courthouses are constructed within their authorized size or that congressional committees are notified if authorized sizes are going to be exceeded; that the Judicial Conference of the United States retain caseload projections to improve the accuracy of its 10-year-judge planning; and that the Conference establish and use courtroom sharing policies based on scheduling and use data. GSA and the judiciary agreed with most of the recommendations, but expressed concerns about our methodology and key findings. We continue to believe that our findings were well supported and developed using an appropriate methodology, as explained in the report.

Challenges Related to Costs and Unneeded Space in Courthouses Are All Applicable to the L.A. Courthouse Project

The three causes of extra space—and the associated extra costs—in courthouses that we identified in 2010 are all applicable to the L.A.
courthouse project. These causes, as described above, include (1) exceeding the congressionally authorized size, (2) overestimating the number of judges the courthouses would have, and (3) not planning for courtroom sharing among judges. In 2008, we reported that GSA’s decision to design a larger courthouse in Los Angeles than was congressionally authorized had led to cost increases and delays. The design of a new courthouse in Los Angeles was congressionally authorized in 2000 and later funded based on a 41-courtroom, 1,016,300-square-foot GSA prospectus. GSA decided instead to design a 54-courtroom, 1,279,650-square-foot building to meet the judiciary’s long-term needs. A year and a half later, after conducting the environmental assessments and purchasing the site for the new courthouse, GSA informed Congress that it had designed a 54-courtroom courthouse in a May 2003 proposal. However, the Office of Management and Budget (OMB) rejected this proposal, according to GSA, and did not include it in the President’s budget for fiscal year 2005. GSA then designed a 41-courtroom building, but by the time it completed this effort, the schedule for constructing the building had been delayed by 2 years, according to a senior GSA official involved with the project. With this delay, inflation pushed the project’s cost over budget, and GSA needed to make further reductions to the courthouse in order to procure it within the authorized and appropriated amounts. However, GSA and L.A. Court officials were slow to reduce the project’s scope, which caused additional delays and then necessitated additional reductions. For example, GSA did not simplify the building-high atrium that was initially envisioned for the new courthouse until January 2006, even though the judiciary had repeatedly expressed concerns about the construction and maintenance costs of the atrium since 2002.
In our 2010 report, we found that large atriums contributed to size overages in several courthouses completed since 2000. Moreover, according to GSA officials in 2010, GSA’s current policy on how to count the square footage of atriums and its target for the percentage of space in a building that should be used for tenant space (which does not include atriums) should make it difficult, if not impossible, for a courthouse project to include large atriums spanning many floors—although relatively modest atriums should still be feasible. Second, overestimates of how many judges the L.A. Court would need led to the design of a courthouse with more courtrooms than necessary. Specifically, we reported in 2004 that the proposed L.A. courthouse was designed to include courtrooms for 61 judges (47 current district and magistrate judges and 14 additional judges expected by 2011), but in 2011, the L.A. Court still has 47 district and magistrate judges—and none of the 14 additional judges that were expected. This outcome calls into question the space assumptions that the original proposals were based on. Third, in 2008 we reported that in planning for judges to share courtrooms, the judiciary favored an option proposed by GSA that provided for sharing by senior judges, but our 2010 analysis indicated that further sharing was feasible and could reduce the size and cost of the L.A. courthouse project. Specifically, GSA’s proposal to build a 36-courtroom, 45-chamber building and add 4 courtrooms to Roybal’s existing 34 courtrooms—which GSA estimated at the time would cost $1.1 billion, or $733.6 million more than Congress had already appropriated—would have provided the L.A. Court with 74 courtrooms in total—36 district courtrooms in the new building and 38 courtrooms (20 magistrate and 18 bankruptcy) in Roybal.
The judiciary supported this proposal in part, it said, because, with more chambers than courtrooms included in the plan, it could fulfill its need for a larger building through courtroom sharing among senior judges who would occupy the extra chambers in the new building. In this option, the district and senior judges would be housed in the new courthouse, while the magistrate and bankruptcy judges would be housed in the Roybal building. As described above, our model suggested that additional courtroom sharing would be possible in a courthouse such as the L.A. courthouse, which could reduce the number of courtrooms needed for this project, broadening the potential options for housing the L.A. District Court. Chairman Denham, Ranking Member Norton, and Members of the Subcommittee, this concludes our testimony. We are pleased to answer any questions you might have.

Contact Information

For further information on this testimony, please contact Mark L. Goldstein, (202) 512-2834 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Keith Cunningham, Assistant Director, Susan Michal-Smith, and Alwynne Wilbur.

Appendix I: Additional Information on GAO’s Courtroom Sharing Model

To learn more about the level of courtroom sharing that the judiciary’s data support, we used the judiciary’s 2008 district courtroom scheduling and use data to create a simulation model. The data used to create the simulation model for courtroom usage were collected by the Federal Judicial Center (FJC)—the research arm of the federal judiciary—for its Report on the Usage of Federal District Court Courtrooms, published in 2008.
The data collected by FJC were a stratified random sample of federal court districts to ensure a nationally representative sample of courthouses—that is, FJC sampled from small, medium, and large districts, as well as districts with low, medium, and high weighted filings. Altogether, there were 23 randomly selected districts and 3 case study districts, which included 91 courthouses, 602 courtrooms, and every circuit except that of the District of Columbia. The data sample was taken in 3-month increments over a 6-month period in 2007 for a total of 63 federal workdays, by trained court staff who recorded all courtroom usage, including scheduled but unused time. These data were then verified against three independently recorded sources of data about courtroom use. Specifically, the sample data were compared with JS-10 data routinely recorded for courtroom events conducted by district judges, MJSTAR data routinely recorded for courtroom events conducted by magistrate judges, and data collected by independent observers in a randomly selected subset of districts in the sample. We verified that these methods were reliable and empirically sound for use in simulation modeling. Working with a contractor, we designed this sharing model in conjunction with a specialist in discrete event simulation and the company that designed the simulation software to ensure that the model conformed to generally accepted simulation modeling standards and was reasonable for the federal court system. Simulation is widely used in modeling any system where there is competition for scarce resources. The goal of the model was to determine how many courtrooms are required for courtroom utilization rates similar to that recorded by FJC. This determination is based on data for all courtroom use time collected by FJC, including time when the courtroom was scheduled to be used but the event was cancelled within one week of the scheduled date. 
The completed model allows, for each courthouse, user input of the number and types of judges and courtrooms, and the output indicates whether the utilization of the courtrooms can be accommodated by the available courtrooms in the long run. When using the model to determine the level of sharing possible at each courthouse based on scheduled courtroom availability on weekdays from 8 a.m. to 6 p.m., we established a baseline of one courtroom per judge to the extent that this sharing level exists at the 33 courthouses built since 2000. In selecting the 8 a.m. to 6 p.m. time frame for courtroom scheduling, we used the courtroom scheduling profile that judges currently use, reflecting the many uses and flexibility needed for a courtroom. Judges stated that during trials courtrooms may be needed by attorneys before trial times in order to set up materials. This setup time was captured in the judiciary’s data; other uses of a courtroom captured by the judiciary are time spent on ceremonies, education, training, and maintenance. We differentiated events and time in the model by grouping them as case-related events, nonjudge-related events, and unused scheduled time, and we allotted enough time for each of these events to occur without delay. Then we entered the number of judges from each courthouse and determined the smallest number of courtrooms needed to avoid any backlog in court proceedings.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
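A stylized version of the test the appendix describes can be written as a short simulation: does demanded courtroom time fit within available courtroom time in the long run? This is only an illustration; the exponential demand distribution, the backlog tolerance, and all parameter values below are our assumptions, not features of the FJC-calibrated discrete event model the appendix describes.

```python
import random

def min_courtrooms(n_judges, mean_hours=2.5, day_hours=10.0,
                   n_days=10_000, tolerance=0.05, seed=1):
    """Smallest number of courtrooms for which simulated daily demand
    (one exponential draw of courtroom-hours per judge) exceeds the
    available courtroom-hours on fewer than `tolerance` of the days.
    Illustrative sketch only, under assumed demand parameters."""
    rng = random.Random(seed)
    for rooms in range(1, n_judges + 1):
        capacity = rooms * day_hours  # e.g. an 8 a.m. to 6 p.m. scheduling day
        backlog_days = sum(
            sum(rng.expovariate(1.0 / mean_hours) for _ in range(n_judges)) > capacity
            for _ in range(n_days)
        )
        if backlog_days / n_days < tolerance:
            return rooms
    return n_judges
```

With a mean of 2.5 demanded hours per judge per 10-hour scheduling day (roughly the one-quarter case-related use reported above), the sketch reproduces the headline ratio: `min_courtrooms(3)` returns 2, i.e., three district judges can share two courtrooms with unscheduled time to spare.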
In 2000, as part of a multibillion-dollar courthouse construction initiative, the judiciary requested and the General Services Administration (GSA) proposed building a new courthouse in Los Angeles to increase security, efficiency, and space--but construction never began. About $400 million was appropriated for the L.A. courthouse project. For this testimony, GAO was asked to report on (1) the status of the L.A. courthouse project, (2) challenges GAO has identified affecting federal courthouses nationwide, and (3) the extent to which these challenges are applicable to the L.A. courthouse project. This testimony is based on GAO-10-417 and GAO's other prior work on federal courthouses, during which GAO analyzed courthouse planning and use data, visited courthouses, modeled courtroom sharing scenarios, and interviewed judges, GSA officials, and others. In GAO-10-417, GAO recommended that (1) GSA ensure that new courthouses are constructed within their authorized size or, if not, that congressional committees are notified, (2) the Judicial Conference of the United States retain caseload projections to improve the accuracy of its 10-year-judge planning, and (3) the Conference establish and use courtroom sharing policies based on scheduling and use data. GSA and the judiciary agreed with most of the recommendations, but expressed concerns with GAO's methodology and key findings. GAO continues to believe that its findings were well supported and developed using an appropriate methodology. GAO reported in 2008 that GSA spent about $33 million on design and site preparations for a new 41-courtroom L.A. courthouse, leaving about $366 million available for construction. However, project delays, unforeseen cost escalation, and low contractor interest had caused GSA to cancel the project in 2006 before any construction took place. GSA later identified other options for housing the L.A.
Court, including constructing a smaller new courthouse (36 courtrooms) or using the existing courthouses--the Spring Street Courthouse and the Edward R. Roybal Federal Building and Courthouse. As GAO also reported, the estimated cost of the 36-courtroom option as of 2008 was over $1.1 billion, significantly higher than the current appropriation. The challenges that GAO has identified in recent reports on federal courthouses include increasing rent and extra operating, maintenance, and construction costs stemming from courthouses being built larger than necessary. For example, in 2004, the judiciary requested a $483 million permanent, annual exemption from rent payments to GSA due to difficulties in paying for its increasing rent costs. GAO found in 2006 that these increasing rent costs were primarily due to increases in total courthouse space--and in 2010, GAO reported that more than a quarter of the new space in recently constructed courthouses is unneeded. Specifically, in the 33 federal courthouses completed since 2000, GAO found 3.56 million square feet of excess space. This extra space is a result of (1) courthouses exceeding the congressionally authorized size, (2) the number of judges in the courthouses being overestimated, and (3) not planning for judges to share courtrooms. In total, the extra space GAO identified is equal in square footage to about 9 average-sized courthouses. The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the estimated annual cost to rent, operate, and maintain it is $51 million. Each of the challenges GAO identified related to unnecessary space in courthouses completed since 2000 is applicable to the L.A. courthouse project. First, as GAO reported in 2008, GSA designed the L.A. Courthouse with 13 more courtrooms than congressionally authorized. This increase in size led to cost increases and delays.
Second, in 2004, GAO found that the proposed courthouse was designed to provide courtrooms to accommodate the judiciary's estimate of 61 district and magistrate judges in the L.A. Court by 2011--which, as of October 2011, exceeds the actual number of such judges by 14. This disparity calls into question the space assumptions on which the original proposals were based. Third, the L.A. court was planning for less courtroom sharing than is possible. While in 2008 the judiciary favored an option proposed by GSA that provided for some sharing by senior judges, according to GAO's 2010 analysis, there is enough unscheduled time in courtrooms for three senior judges to share one courtroom, two magistrate judges to share one courtroom, and three district judges to share two courtrooms. In 2011, the judiciary also approved sharing for bankruptcy judges. Additional courtroom sharing could reduce the number of additional courtrooms needed for the L.A. courthouse, thereby increasing the potential options for housing the L.A. Court. |
Background

About 90 percent of the German population obtains its health insurance through one of the more than 900 Statutory Health Insurance Funds, usually called sickness funds. Virtually all working Germans with an income below a statutory threshold—Deutsche Mark (DM) 68,400 (about $44,200) in 1994—are required to join one of these funds, and their nonworking spouses and dependents are also automatically covered. The sickness funds also cover most retirees and persons receiving unemployment or disability payments. Persons with incomes above the threshold may choose to remain in the statutory system, and many do. The German Statutory Health Insurance System is mainly financed through an income-based premium, 50 percent paid by employers and 50 percent by employees, on wages up to the statutory threshold amount mentioned above. At the beginning of 1993, this premium, called a contribution, averaged 13.4 percent of wages up to the income threshold in former West Germany. However, this contribution rate can vary across sickness funds, depending on the income and demographic structure of the fund’s membership. In 1993, contribution rates varied from 8.5 percent to 16.5 percent.

Contribution Rate Increase Triggered 1993 Reforms

Between July 1991 and the end of 1992, the average contribution rate of the statutory sickness funds rose from 12.2 percent to 13.4 percent. Alarmed by the size and speed of this increase, all the major political parties in Germany agreed that action to control health care spending was needed. The result was the Health Care Structure Reform Act of 1993. This act imposed strict budgets beginning January 1, 1993, for periods of up to 3 years on the major sectors of the statutory health insurance system, including hospitals, ambulatory care physicians, prescription pharmaceuticals, and dentists.
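The contribution mechanics described in the Background (an income-based rate applied to wages up to the statutory ceiling, split evenly between employer and employee) reduce to a simple calculation. The sketch below uses the figures from the text; the function name and return shape are our own.

```python
def annual_contribution_dm(annual_wage_dm, rate=0.134, ceiling_dm=68_400):
    """Split a sickness-fund contribution 50/50 between employer and
    employee. rate=0.134 is the early-1993 West German average cited
    in the text; actual rates varied by fund (8.5 to 16.5 percent in
    1993). Illustrative sketch only."""
    base = min(annual_wage_dm, ceiling_dm)  # wages above the ceiling are exempt
    total = base * rate
    return {"employee": total / 2, "employer": total / 2, "total": total}

# A wage above the DM 68,400 threshold is assessed only up to the ceiling:
high_earner = annual_contribution_dm(80_000)  # total of about DM 9,165.60
```

Because the rate applies only up to the ceiling, two workers earning DM 70,000 and DM 700,000 would pay the same contribution, which is why higher earners may opt out of the statutory system.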
These budgets were designed to stabilize the contribution rate by restricting the rate of increase in spending to the rate of increase in the total amount of workers’ wages subject to the contribution. Spending increases in each sector are subject to this restriction. If successful, this would have the effect of stabilizing the contribution rate at its current level. If any sector exceeds its budget, its payment rates may be reduced during the next year to recoup the excess spending. The Health Care Structure Reform Act also provided for a series of major structural reforms to be implemented over the remainder of the decade and intended to build cost control structures and incentives into the Statutory Health Insurance System. Some of these changes are discussed in appendix I.

Scope and Methodology

We interviewed officials of the German Ministry of Health and key German health experts, obtained relevant health care spending data, and reviewed English and German language literature on the results of the first year of implementation of strict sector budgets and on progress toward implementation of structural changes mandated by the German Health Care Structure Reform Act of 1993. This review also incorporates information from our 1993 review of German health care reforms and from current and past work using other international studies. Although the German Statutory Health Insurance System covers unified Germany, this report, like our 1993 report, focuses on the results of changes in the former West Germany because it provides a better basis for comparison with the United States and with earlier conditions in Germany. We conducted this review between June 1993 and June 1994 in accordance with generally accepted government auditing standards.
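The budget rule in the 1993 act, under which sector spending may grow no faster than contributory wages and overruns are recouped through lower payment rates the following year, can be sketched as follows (the function names and the example figures are illustrative, not taken from the act):

```python
def next_budget(prev_budget, wage_growth):
    """A sector's budget may grow only as fast as total contributory wages."""
    return prev_budget * (1.0 + wage_growth)

def recoupment(budget, actual_spending):
    """Overrun to be recovered via reduced payment rates in the next year."""
    return max(0.0, actual_spending - budget)

# Illustrative: a DM 25 billion sector, 3 percent wage growth,
# DM 26.5 billion actually spent.
budget = next_budget(25.0, 0.03)     # DM 25.75 billion allowed
clawback = recoupment(budget, 26.5)  # DM 0.75 billion to recoup next year
```

The recoupment term is the "self-enforcing" part of the scheme: providers who overshoot the cap pay it back through lower rates rather than through an explicit penalty.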
Strict Budgets Restrained Cost Growth in 1993

During 1993, strict budgets for each health care sector initiated under the Health Care Structure Reform Act restrained growth in expenditures and stabilized the contribution rate in the Statutory Health Insurance System. Table 1 shows that the 1993 rate of growth in all major sectors declined sharply compared with the previous year in former West Germany. Total expenditures in the system in former West Germany declined slightly in 1993. The major percentage decreases in outlays were in pharmaceuticals and dentures, which declined 19.6 and 26.9 percent, respectively. However, even if pharmaceuticals and dentures are removed, outlays per member of the Statutory Health Care System rose only about 3 percent, easily meeting the Health Care Structure Reform Act’s goal of restraining growth to the rate of growth of income of members of the system subject to the contribution rate. From a 1992 deficit of DM 9.1 billion (about $5.9 billion) in the area of former West Germany, the Statutory Health Insurance System showed a DM 9.1 billion surplus in 1993. Contribution rates have stabilized and even declined slightly. From a high of 13.42 percent on January 1, 1993, the average general contribution rate for the system had fallen to 13.25 percent by April 1, 1994.

Pharmaceuticals and Dentures

The largest rates of decrease in expenditures were seen in the sectors of pharmaceuticals and dentures, which had negative growth rates of 19.6 and 26.9 percent, respectively. Of these two, by far the largest absolute decrease was in pharmaceuticals. Pharmaceutical outlays fell from DM 27.1 billion in 1992 to DM 21.9 billion in 1993, an absolute decrease of DM 5.2 billion (about $3.4 billion). Several factors contributed to this startling decrease in outlays for pharmaceuticals. First, about 15 percent of the decrease represents a shifting of drug costs to consumers in the form of co-payments for prescription pharmaceuticals.
The Ministry of Health ascribes an additional 20 percent of savings to a combination of three factors: the effects of the reference price system for pharmaceuticals, created by the Health Care Reform Act of 1989; a 5-percent reduction, mandated by the 1993 act, in the price of prescription pharmaceuticals not under the reference price system; and a 2-percent reduction, also mandated by the 1993 act, in the price of over-the-counter pharmaceuticals. The Ministry of Health ascribes the remainder of the decrease—about DM 3 billion—to changes in the behavior of physicians towards prescribing drugs. These changes include a decrease in the number of prescriptions; increased prescribing of less costly but qualitatively similar pharmaceuticals, including generic drugs and pharmaceuticals with prices under the reference price; and reduced prescribing of certain categories of pharmaceuticals, including drugs considered by the Germans to be excessively or inappropriately prescribed, such as vitamins, mineral preparations, and vascularity improving drugs. Statutory system outlays for dentures fell from DM 6.8 to DM 5.0 billion, about DM 1.8 billion ($1.2 billion). This decrease was all the more remarkable in that there was no fixed budget for dentures themselves, although there was a budget for general dental services, including the prescribing and fitting of dental prostheses.

Little Evidence of Impaired Access to Appropriate Care

Despite the introduction of stringent budgeting in most major sectors of the German Statutory Health Insurance System, access of patients to appropriate care was not impaired.
In particular, fears had been raised that

- physicians might not prescribe needed pharmaceuticals to their patients;
- physicians might seek to hospitalize costly patients to transfer these costs to the hospital’s budget rather than treat them on the outpatient budget where they might affect future payments; and
- hospitals might transfer (dump) costly patients to other hospitals, usually tertiary care hospitals, to move these costs from the transferring hospital’s budget to the receiving hospital’s budget, which is usually higher.

Pharmaceutical Prescribing Patterns

Statistics on prescribing patterns of German physicians suggest that fears that physicians would not prescribe needed drugs to patients did not materialize. According to the German Ministry of Health, preliminary prescription statistics suggest that physicians responded to the budgetary constraints in part by prescribing less expensive but qualitatively similar generic drugs instead of brand-name pharmaceuticals and by decreasing prescribing of pharmaceuticals in categories where some drugs are considered by the Germans to be of questionable therapeutic effectiveness or frequently inappropriately prescribed. As shown in figure 2, the number of prescriptions in several pharmaceutical groups, including vein drugs, gallbladder and duct drugs, immunological drugs, vascularity improving drugs, urologic agents, mouth and throat drugs, and antihypotensive agents, declined 20 percent or more. Most of these categories contain a relatively high percentage of doubtful or inappropriately prescribed preparations. In contrast, the number of prescriptions for some pharmaceutical groups containing a high percentage of drugs considered to be both therapeutically effective and usually appropriately prescribed, such as diabetes-related drugs, antibiotics, and angiotensin converting enzyme-inhibitors, remained stable or increased slightly in 1993.
These statistics do not support the view that the global budget for pharmaceuticals caused widespread problems of patient access to appropriate drugs in Germany in 1993. Independent experts and Health Ministry officials with whom we spoke in Germany generally agreed that the pharmaceutical budget had not caused significant access problems in 1993. Experts from the Research Institute of the Local Sickness Funds said that the pharmaceutical budget can be credited with improving quality of care because the amount of inappropriately prescribed pharmaceuticals has decreased. However, one physician pointed out that long-term quality effects may eventually become apparent. For example, the decline in prescription of lipid-lowering drugs might simply reflect past overuse of this class of pharmaceuticals or might result in future increases in the incidence of heart attacks and strokes.

Hospital Admission Patterns

Hospital admission patterns suggest that the fears that physicians and hospitals would unnecessarily hospitalize or transfer costly patients did not materialize. The Ministry of Health found no statistical evidence that would support these allegations, such as significant increases in the numbers of hospitalizations or of transfers among hospitals. Furthermore, even when allegations of patient dumping were investigated, few cases could be confirmed. The Ministry noted that in Bavaria, for example, the number of cases in the university clinics, tertiary care hospitals that often receive transferred patients from lower-level hospitals, fell about 2 percent, while cases in hospitals offering only basic care rose about 2 percent, and cases in hospitals offering intermediate levels of care rose 3 percent. Also, the Ministry found that in the state of Rhineland-Palatinate there were 215,000 fewer billable bed days than had been budgeted for in 1993.
Furthermore, when surveyed by the Hesse Association of Sickness Fund Physicians, 82 percent of hospital-based physicians in that state responded that they had observed “no admissions or transfers because of cost,” and 17 percent responded that they “very seldom observed such transfers.” According to the Ministry of Health, many university clinics did not reach their budgeted level of bed days. For example, the Bonn University Clinic was some 27,000 bed days and the Münster University Clinic about 7,000 bed days below budgeted levels. This means that both clinics will receive more payments per patient than they otherwise would have during 1994, because under the fixed budget, if hospitals do not bill up to their budget, they are paid the difference between their billed amounts and their budgeted amounts during the following year. While most experts we talked to agreed that unnecessary referrals to hospitals by ambulatory care physicians had not been a significant problem, some believed that transfers of costly patients among hospitals had occurred, but the extent was not yet known.

Bed Closures in Münster

One potential reaction of hospitals to the fixed budgets would be to eliminate types of services, especially those serving costly patients. The only reported case of such closures was at the Münster University Clinic, a tertiary care center, which closed some acquired immunodeficiency syndrome (AIDS) and pediatric oncology beds. The clinic management stated that, because of a rise in the number of cases in these areas, the clinic’s budget was too low. The German Federal Ministry of Health was critical of this decision to close beds because the Münster clinic ended the year with fewer bed days than it was budgeted for. A representative of the Local Sickness Funds told us that the AIDS and pediatric oncology patients were probably admitted despite the closed beds, but into other departments.
She viewed this episode as an attempt on the part of the clinic to obtain additional money from the sickness funds.

Will German Reforms Control Cost Growth in 1994?

Data available at the time of our work were too scant to permit any firm predictions regarding the future success of the budgets and reforms in controlling cost growth in 1994 and future years. But the second year of imposed budgets will not likely be as dramatically successful in controlling costs as the first. The decreases in pharmaceutical and denture expenditures probably cannot be sustained at their 1993 rates. One expert told us that the cost of pharmaceuticals and dentures had also fallen dramatically in response to earlier cost control efforts, and that their growth had resumed in 1994. Also, some of the 1993 decrease in expenditures may have been due to increased spending on pharmaceuticals and dentures in December 1992 in anticipation of implementation of the Health Care Structure Reform Act. Moreover, since pharmaceutical expenditures for 1993 were well under the budget limits, the disincentive for drug prescribing by physicians is less threatening and may not have as constraining an effect on physicians. In addition, few of the structural reforms intended for long-term cost control are yet in place, and those that are have not had time to exert much effect. German government data from the first quarter of 1994 suggest that the reforms were still controlling cost growth at that time. Although spending for dentures, and to a lesser degree pharmaceuticals, was well above levels for the first quarter of 1993, overall spending was only 5 percent above 1993 levels. Furthermore, comparison with the first quarter of 1993 may be somewhat misleading because spending in that quarter was depressed due to anticipation of the effects of the Health Care Structure Reform Act.
Compared with the first quarter of 1992, 2 years previously, first quarter spending was up only about 4 percent, and it was down about 3 percent from the last quarter of 1993, the immediately preceding quarter. However, these data are inadequate to permit drawing conclusions for 1994.

Long-Term Cost Control Through Structural Reforms

The Health Care Structure Reform Act of 1993 set up the temporary global budgets to control health care expenditures while structural reforms intended to control costs over the longer term could be worked out and put into place. The act contains structural reform measures for most sectors of the German Statutory Health Care System. These reform measures include risk-adjustment among the sickness funds; broadened choice of sickness fund for members of the statutory system; lowering barriers between the ambulatory and inpatient sectors of the health care system; a complete restructuring of the inpatient hospital reimbursement system; and a system for auditing physicians’ pharmaceutical prescribing practices. Some of these reforms are yet to be implemented. Others have not been in place long enough to have a significant impact. These reforms are discussed in detail in appendix I. We plan to send copies of this report to the appropriate congressional committees and interested parties. We also will make copies available to others on request. This report was prepared under the direction of Mark V. Nadel, Associate Director, and Michael Gutowski, Assistant Director, Health Financing and Policy Issues. If you or your staff have any questions about this report, please contact me at (202) 512-7115. Other major contributors to this report are listed in appendix III.

Major Structural Reforms

The Health Care Structure Reform Act of 1993 contains a series of reform measures intended to control costs in most major sectors of the health care system that will be implemented over the remainder of the decade.
The current status of selected reforms, primarily those that have been or will soon be implemented, is discussed below.

Risk-Structure Equalization and Freedom of Choice

On January 1, 1993, the German Statutory Health Insurance System implemented the first phase of the so-called risk-structure equalization. This risk-adjustment process is intended to compensate for the differing demographic and income compositions of sickness funds, and so reduce the wide differences among the contribution rates of the funds. This is being done partly to increase equity among the funds and partly as a necessary preparation for the extension of the right of blue-collar workers to choose among sickness funds, due to become effective January 1, 1997. The German risk-adjustment process is somewhat different from others because it includes an adjustment for sickness fund income as well as for risk of health care expenditures. This is both possible and necessary because of Germany’s income-based premium structure. If a sickness fund has a disproportionate percentage of low-income members, its income will be low (or its contribution rate high) relative to a fund with a large percentage of high-income members. The adjustment on the expenditure side is relatively simple, covering only age and sex. In this adjustment process, all persons insured by the sickness fund (including co-insured family members, but excluding pensioners for the time being) were divided into 1-year groups by age and sex. A national average expenditure amount for each year and sex group was computed. For example, the average expenditure was computed for 20-year-old females and 60-year-old males. These amounts were then multiplied by the number of persons in each group in each sickness fund and added together to obtain the risk-adjusted financial requirements for each sickness fund. The same calculation was done to develop the risk-adjusted financial requirement for all sickness funds.
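The equalization calculation, combining this expenditure-side requirement with the income-side measure, can be sketched as follows. All figures are invented for illustration, only two age/sex cells are used instead of 1-year groups, and the transfer formula (a fund's requirement minus the uniform rate applied to its contributory income) is our assumption; the report specifies only the direction of payment, not the exact amount.

```python
# Illustrative sketch of the German risk-structure equalization.
# All figures are hypothetical; the real calculation uses 1-year
# age groups and national data.

# National average annual expenditure per insured person, by (age, sex) cell.
avg_expenditure = {(20, "F"): 1200.0, (60, "M"): 3400.0}  # hypothetical DM amounts

funds = {
    "Fund A": {"members": {(20, "F"): 500, (60, "M"): 100},
               "contributory_income": 30_000_000.0},
    "Fund B": {"members": {(20, "F"): 100, (60, "M"): 500},
               "contributory_income": 20_000_000.0},
}

def financial_requirement(members):
    """Risk-adjusted requirement: cell average times cell headcount, summed."""
    return sum(avg_expenditure[cell] * n for cell, n in members.items())

total_requirement = sum(financial_requirement(f["members"]) for f in funds.values())
total_income = sum(f["contributory_income"] for f in funds.values())
uniform_rate = total_requirement / total_income  # uniform equalization rate

for name, f in funds.items():
    req = financial_requirement(f["members"])
    fund_rate = req / f["contributory_income"]  # fund's own equalization rate
    # Assumed transfer formula: positive means the fund receives payment,
    # negative means it pays into the equalization fund.
    transfer = req - uniform_rate * f["contributory_income"]
    direction = "receives" if transfer > 0 else "pays in"
    print(f"{name}: rate {fund_rate:.4f} vs uniform {uniform_rate:.4f}; {direction} DM {abs(transfer):,.0f}")
```

Note that a fund whose own rate is below the uniform rate pays in, one whose rate is above it receives payment, and the transfers net to zero across all funds, consistent with the description above.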
On the income side, the total income subject to the contribution rate was determined for each sickness fund and for all sickness funds together. The ratio between the risk-adjusted financial requirements and the total income subject to the contribution rate of all sickness funds together constitutes the uniform equalization rate. The same ratio is then calculated for each sickness fund individually, and compared with the uniform equalization rate. If the fund’s equalization rate is lower than the uniform equalization rate, it must pay into the equalization fund. If higher, it receives payment from the fund. It is too early to tell how effective the system will be in reducing the variation in contribution rates among sickness funds. However, a ministry official noted that after 4 months’ experience, the range of contribution rates had declined from between about 8 percent and 16 percent to between 9 percent and less than 15 percent. He noted that the intention of this reform was not to make all price differences among the funds disappear. While Germany does not want the funds to compete by excluding sick persons from coverage, he said that the government does want some price competition to force the sickness funds to become more efficient.

Hospital Reforms

The German Health Care Structure Reform Act contains two important structural changes for hospitals. First, it partially lowered the barrier between ambulatory care and hospital physicians by permitting the latter to perform ambulatory surgery and to care for patients for short periods before and after inpatient admissions. Second, it provided for a major reform of hospital reimbursement for inpatient services to be fully implemented by 1996. These are long-term structural reforms intended to give hospitals incentives to reduce lengths of stay and operate more efficiently.

Lowering Barriers

The German health care system has long had a barrier between inpatient hospital care and ambulatory care.
For the most part, hospital physicians have not been allowed to see ambulatory patients, and ambulatory physicians have not been allowed to practice in hospitals. This has created some perverse incentives for hospitals. Hospital physicians often had to admit patients early, because they could not have medical tests done on an outpatient basis, and had to keep their patients in the hospital longer than necessary to oversee their recovery. In addition, patients who would be treated as outpatients in the United States were often admitted to the hospital in Germany because the hospital physicians were not allowed to treat them on an outpatient basis. The Health Care Structure Reform Act began to break down this barrier between the inpatient and ambulatory sectors.

Ambulatory Surgery

The act permits hospitals to open ambulatory surgery departments. The government expects this change to reduce the amount of unnecessary inpatient care and improve cooperation between the ambulatory and hospital sectors, for example, with ambulatory care surgeons using hospital surgical facilities. Despite an implementation agreement of March 22, 1993, among the sickness fund associations, the German Hospital Association, and the Association of Sickness Fund Physicians, the provisions of the Health Care Structure Reform Act of 1993 for ambulatory surgery in hospitals remained largely unused. Thus, these provisions had little effect on German hospital costs in 1993. According to the hospitals, the major reason for the lack of implementation of this agreement is that any income will be counted against the fixed hospital budget. In addition, they feared that increased provision of ambulatory surgery would lead to reduction in the fixed budget because of decreased need for inpatient care resources. New hospital payment regulations, which the hospitals will have to adopt by 1996, provide that the income from ambulatory surgery will no longer be included in the hospital budget.
Rather, the ambulatory surgery area will form an independent income source for the hospitals. The Ministry of Health believes that this change will encourage the hospitals to realize the possibilities for cost reduction related to ambulatory surgery.

Preadmission and Postdischarge Care

Previously, for the most part, hospital physicians could not see patients before admission or after discharge. This frequently led to early admissions for tests and to retaining patients in the hospital after they could be safely discharged so that hospital physicians could oversee their convalescence. The Health Care Structure Reform Act of 1993 set out to change this pattern by permitting hospital-based physicians to see patients for as many as 3 days within the 5-day period before an admission and up to 7 days within a 14-day period after discharge. The act specified that reimbursement was to be agreed upon between the hospitals and the sickness funds on the state level. The government expected that this change would shorten length of stay and, thus, increase the efficiency and lower the costs of hospitals. The National Associations of Sickness Funds and the German Hospital Association developed an advisory agreement on reimbursement, which was made retroactive to July 1, 1993. Under this agreement, preadmission care would be paid a lump sum amount of 1.8 times the hospital’s general daily rate. Postdischarge care would be reimbursed at a rate of 0.6 times the general daily rate per visit. However, these amounts would be payable only if the services were not already covered by other payments to the hospital. Despite this agreement, Ministry and other experts we talked to said that hospitals had not adopted this preadmission and postdischarge care to a significant extent. They generally agreed that the hospitals did not have a sufficient incentive to change their long-standing practices.
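The advisory agreement's arithmetic can be sketched as follows. The 1.8 and 0.6 multipliers come from the report; the daily rate is an invented example, and the sketch ignores the proviso that the amounts are payable only when not already covered by other payments.

```python
# Sketch of the advisory preadmission/postdischarge reimbursement agreement.
GENERAL_DAILY_RATE = 400.0  # hypothetical hospital general daily rate, in DM

def preadmission_payment(daily_rate):
    """Lump sum of 1.8 times the hospital's general daily rate."""
    return 1.8 * daily_rate

def postdischarge_payment(daily_rate, visits):
    """0.6 times the general daily rate per postdischarge visit."""
    return 0.6 * daily_rate * visits

total = preadmission_payment(GENERAL_DAILY_RATE) + postdischarge_payment(GENERAL_DAILY_RATE, 3)
print(f"DM {total:,.2f} for one preadmission episode plus three postdischarge visits")
```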
Hospital Payment Reforms

Under the Health Care Structure Reform Act, the predominant existing German hospital reimbursement system of a single negotiated daily rate for each hospital, supplemented by special payments for a few categories of costly procedures, will be replaced by a system comprising three types of payment. First, approximately 60 procedures (as of Jan. 1, 1995) will be paid using a prospective case payment system similar to the U.S. Medicare diagnosis related group (DRG) payment system. Payment for these 60 procedures will cover all hospital care. Second, another approximately 155 procedures (also as of Jan. 1, 1995) will be paid using a system of special payments. Under this type of payment, the principal medical services for the admission will be paid by a prospectively fixed lump-sum amount. Other costs, such as administrative overhead and room and board, will be covered by the hospital-specific basic daily rate and a reduced departmental daily rate, both discussed below. All other types of cases will be reimbursed by a combination of two hospital-specific daily rates. Medical costs will be covered by a departmental daily rate, which will vary depending on the medical department that admits the patient. That is, a cardiac patient may be reimbursed by a daily rate different from that of a general internal medicine patient. Nonmedical services, including food and housekeeping, will be reimbursed by a basic daily rate common to all departments. Reimbursement rates for both case payments and special payments will be set using a combination of national relative value scales and conversion factors negotiated on a statewide basis. Thus, all hospitals in a German state will receive the same prospective lump sum payment for a given procedure under these two types of payment. If a hospital’s costs for the services covered by the payment are lower than the payment rate, the hospital may keep the difference; if they are higher, the hospital is at risk.
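The rate-setting described above can be sketched in a few lines. The procedure names, relative values, and conversion factor are invented; what the sketch shows is only the structure: a national relative value multiplied by a statewide conversion factor yields the same lump sum for every hospital in the state, and the hospital keeps any positive margin or bears any loss.

```python
# Sketch of prospective lump-sum rate-setting under the case payment system.
# All numbers are hypothetical.

relative_values = {"hip_replacement": 2.5, "appendectomy": 1.1}  # national relative value scale
state_conversion_factor = 1_000.0  # negotiated per state, in DM

def case_payment(procedure):
    """Same lump sum for every hospital in the state for a given procedure."""
    return relative_values[procedure] * state_conversion_factor

payment = case_payment("hip_replacement")
hospital_costs = 2_300.0  # one hospital's actual costs for this case
margin = payment - hospital_costs  # positive: hospital keeps it; negative: hospital at risk
```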
One group of experts told us that the method of determining the special payment rates resulted in more generous rates than those for the case payment system. They noted that over time it is expected that the rates will be made consistent. German hospitals have the option of choosing to be reimbursed under the new system beginning January 1, 1995. All hospitals must be reimbursed using this system beginning January 1, 1996. Hospitals choosing the new reimbursement system for 1995 will be released from the strict budget limits of the Health Care Structure Reform Act. The Ministry of Health expects that this new reimbursement system will give hospitals effective incentives for improved efficiency and for reducing lengths of stay. However, health care experts at the Research Institute of the Local Sickness Funds (Wissenschaftliches Institut der Allgemeine Ortskrankenkassen) believe that the case payments were set too high because of problems with data on length of stay. They believe that correcting for the length-of-stay problem would save about DM 450 million annually.

The New German Case Payment System

The new German case payment system is conceptually similar to the prospective payment system used for most U.S. Medicare hospital payments. However, the categories used to separate patients into payment classes in the German System are not DRGs, as in the U.S. Medicare system, but patient management categories (PMC). This system was developed during the early 1980s by Wanda Young of the Pittsburgh Research Institute, the research institute of Blue Cross of Western Pennsylvania. In contrast to DRGs, which are mainly defined in terms of principal diagnosis and procedure, each PMC has an associated patient management path, which is the expected clinical strategy, defined in terms of a bundle of related tests, procedures, and other interventions, that physicians typically utilize to diagnose and treat that type of case.
The Germans used this bundle of related services associated with each PMC to develop related cost weights for each PMC corresponding to the 60 procedures initially to be covered by the full case payment system. The PMC system also differs from DRGs in two other important respects. First, PMCs are tightly defined around a specific illness, whereas DRGs group patients whose treatments are expected to consume similar levels of hospital resources. As a result, the number of PMCs is nearly twice as large as the number of DRGs (848 vs. 494). Second, the PMC system permits assigning more than one PMC to a patient, based on unrelated comorbid conditions. The DRG system, in contrast, permits assignment of a patient to only one DRG. These two differences may permit the PMC system to better adjust for severity of illness than the DRG system. On the other hand, one group of experts with whom we spoke indicated that they believed that PMCs are easier for providers to manipulate to maximize reimbursement than are DRGs. Experts told us that the ultimate intent of the German government is to bring most hospital inpatient care under the case payment system. However, they indicated that further implementation of the system would probably not take place until Germany had some experience with the new system.

Pharmaceutical Reforms

The Health Care Structure Reform Act provided that the fixed budget for pharmaceuticals would be lifted in 1994 and 1995 if the sickness funds and physicians agreed on a system of auditing physicians’ prescribing practices on the basis of pharmaceutical guidelines. Physicians who exceeded the guidelines by more than 15 percent were to be audited, while payments to physicians exceeding the guidelines by more than 25 percent were to be automatically reduced.
However, this system has not yet been implemented, at least in part because the sickness funds and the Associations of Pharmacists could not agree on prescription reporting requirements necessary for setting and administering the guidelines. Thus, the strict global pharmaceutical budget remains in effect for 1994 and possibly beyond. Meanwhile, the Federal Association of Sickness Fund Physicians and the National Associations of Sickness Funds have reached an advisory agreement that the total 1994 outlays for pharmaceuticals, dressings, and remedies in former West Germany should be set at about DM 27.7 billion ($14.9 billion), which corresponds to the sum of these budgets for 1993.

Outlays of the German Statutory Health Insurance System (1989-93)

Year (no. of members, in thousands): 1989 (37,229); 1990 (37,939); 1991 (38,704); 1992 (39,246); 1993 (39,459)

Major Contributors to This Report

Peter Schmidt, Project Manager, (410) 965-5587
Christopher Hess
Thomas Laetz
James Perez
Background

The Package Delivery Market Is Complex and Changing Rapidly

The package delivery market is growing and changing rapidly. In 2013, U.S. businesses and consumers reportedly spent more than $68 billion to ship packages domestically using the three largest national package delivery providers in the U.S.—United Parcel Service (UPS), Federal Express Corporation (FedEx), and USPS. This spending is driven in large part by recent growth in electronic commerce, which is forecast to grow in the double digits year-over-year to bring electronic commerce’s share of overall U.S. retail sales to almost 9 percent, or $490 billion, by the end of 2018. According to USPS, revenues from its Shipping and Packages services—which include Parcel Select—have grown from about $11.6 billion in fiscal year 2012 to over $13.7 billion in fiscal year 2014, and generate about 20 percent of USPS’s total operating revenues. USPS is placing increased emphasis on growth in these services to partially offset a continued decline in First-Class Mail. Mailers have developed new delivery options to grow their businesses as the package delivery market evolves. For example, Amazon and Google started offering same-day package delivery service in select metropolitan areas. Some mailers have also introduced services that allow consumers to avoid missed package deliveries when they are not at home by picking up packages themselves at alternative delivery locations. This development could reduce the need to provide last-mile delivery to individual residences. For example, Amazon allows customers to retrieve packages at self-service lockers in shopping centers, retail stores, transit stations, and other access points in selected locations. Similarly, in October 2014, UPS announced that consumers in two cities may retrieve packages from nearby locations—primarily neighborhood convenience and grocery stores—and that it plans to expand the service to cover all major U.S. metropolitan markets in 2015.
In addition to large national mailers, smaller regional companies compete for business in the package delivery market. For example, LaserShip is a regional package delivery company that provides last-mile delivery to major East Coast markets. Similarly, OnTrac provides regional overnight package delivery service within several states on the West Coast. As the package delivery market has changed, USPS must manage complex business relationships with its competitors. For example, although UPS and FedEx both pay USPS to deliver packages the last mile under certain circumstances, they also compete with USPS for end-to-end package delivery business—a concept that USPS officials refer to as “coopetition”. Similarly, FedEx is USPS’s largest contractor providing air transportation for Priority Mail Express (formerly Express Mail), Priority Mail, and First-Class Mail. UPS also is one of USPS’s largest contractors providing long-distance mail transportation.

Parcel Select NSAs Provide Increased Volume and Revenue to USPS and Discounted Prices to Mailers

USPS uses Parcel Select NSAs to encourage additional mail volume and revenue by providing mailers with discounted prices in exchange for meeting the contract’s terms and conditions. USPS data show that the Parcel Select product, including Parcel Select NSAs, is an increasingly important source of additional volume and revenue (see fig. 1). While USPS uses Parcel Select NSAs to generate additional revenue, among other benefits, mailers use them to lower delivery costs and to take advantage of USPS’s extensive “last mile” network that serves more than 140 million residential delivery points 6 days a week. Contracts may require the mailer to ship a minimum volume of packages each contract year (minimum volume requirement) to qualify for discounted prices.
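The minimum-volume and discount terms can be sketched as follows. All prices and volumes are invented, and the reading that a payback applies the discount differential to the packages actually shipped is our assumption; actual Parcel Select prices vary by package weight, entry point, and total volume shipped.

```python
# Sketch of the Parcel Select NSA pricing terms described above.
# All figures are hypothetical; real contract terms vary.

PUBLISHED_PRICE = 3.00       # hypothetical published Parcel Select price per package
DISCOUNTED_PRICE = 2.60      # hypothetical NSA discounted price per package
MINIMUM_VOLUME = 1_000_000   # hypothetical annual minimum to qualify for the discount

def contract_year_settlement(volume_shipped):
    """Return (revenue to USPS at the discounted price, payback owed).

    If the mailer misses the minimum volume, some contracts require paying
    USPS the difference between the discounted and published prices; applying
    that differential to the packages actually shipped is our assumption.
    """
    revenue = volume_shipped * DISCOUNTED_PRICE
    payback = 0.0
    if volume_shipped < MINIMUM_VOLUME:
        payback = volume_shipped * (PUBLISHED_PRICE - DISCOUNTED_PRICE)
    return revenue, payback
```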
Some contracts may require the mailer to pay USPS the difference between the discounted price and published price for Parcel Select if the mailer fails to meet this minimum volume requirement. Shipping packages under Parcel Select lowers mailers’ delivery costs by allowing them to enter bulk shipments of packages into USPS’s network—generally at Destination Delivery Units (DDU) close to the final delivery point (see fig. 2). As a result, these packages generally bypass most of USPS’s mail processing and transportation network. Prices vary, depending on package weight, entry point, and the total volume shipped. USPS must ensure that all Parcel Select NSAs comply with statutory requirements under the Postal Accountability and Enhancement Act (PAEA). Notably, each Parcel Select NSA is required to generate enough revenue to cover its attributable costs. For Parcel Select, these costs primarily include USPS labor involved in sorting packages at postal facilities and delivering them to the final delivery point. PRC reviews each Parcel Select NSA to ensure that it is projected to comply with this and other requirements before USPS can implement it. According to PRC, this pre-implementation review is justified to preserve fair competition. PRC also reviews each contract after implementation to ensure compliance with statutory criteria—including the contract’s attributable cost coverage—in its Annual Compliance Determination Report.

USPS’s Recently Established Standard Procedures for Parcel Select NSAs Have Gaps

USPS Established Standard Procedures in June 2014

USPS’s Sales Department (Sales) has lead responsibility for managing Parcel Select NSAs and recently established standard procedures for managing NSA contracts. Specifically, in June 2014, Sales created the Standard Procedure for Managing NSA Contracts (Standard Procedures), which is an internal document that provides guidance that USPS departments should follow to manage all NSAs.
Sales officials told us that they created the guidance in an effort to incorporate best practices and to improve USPS’s NSA contract management procedures by balancing standardization with flexibility. The Standard Procedures address, in part, some leading contract management practices. For example, as we discuss below, the procedures define some contract management responsibilities, such as performance monitoring and evaluation activities. USPS officials told us that, previously, no document or policy outlined USPS’s contract management procedures. Absent such guidance, USPS managers told us that they relied primarily upon their professional expertise and experience to manage the contracts.

Standard Procedures Define Some Contract Management Responsibilities

The Standard Procedures define some responsibilities for USPS departments involved in developing, monitoring, and evaluating the performance of Parcel Select NSAs, as indicated below. USPS officials told us about other responsibilities that are not specifically included in the Standard Procedures.

Contract Development

The Standard Procedures delineate contract development responsibilities for various departments within USPS, including Sales, USPS’s Finance Department (Finance), and USPS’s Legal Department (Legal) (see fig. 3). After PRC issues an order approving a Parcel Select NSA, USPS begins to monitor and evaluate the performance of the contract. These responsibilities include internal performance reporting by Sales and Finance, submission of an annual compliance review to the PRC by Finance, periodic business reviews between USPS and mailers, and following contract termination or renewal procedures, as described below.

Internal performance reporting: The Standard Procedures call for Sales and Finance to use internal reports to monitor and evaluate performance under each Parcel Select NSA.
Thus, Sales or Finance creates monthly, quarterly, and annual reports that analyze data that include the volume of packages shipped and revenue generated under the contract. For example, Finance develops a quarterly performance report that is used internally by Sales. This report communicates the Parcel Select NSA’s contractual requirements, actual and projected volume and revenue data, as well as descriptive information such as the contract’s effective and expiration dates.

Submission of compliance review to the PRC: In addition to internal reporting, USPS reports each NSA’s performance externally to the PRC. Specifically, Finance files USPS’s Annual Compliance Report to the PRC; this report covers the extent to which USPS finds that each NSA has met statutory requirements including whether each contract has covered its attributable costs. According to PRC officials, all Parcel Select NSAs have complied with these statutory requirements each year. However, PRC officials noted that they do not review and are not statutorily required to review the extent to which mailers or USPS have complied with other contractual requirements, such as minimum volume requirements and any payments to USPS for failure to meet these requirements as part of its Annual Compliance Determination Report.

Business reviews: USPS officials said that they use performance reports to conduct business reviews with mailers. Business reviews are meetings where USPS and the mailer discuss business opportunities, contract terms—such as volume requirements and pricing—and contract performance. The Standard Procedures define additional responsibilities USPS should perform if a mailer is at risk of failing to meet contractual conditions. For example, if Parcel Select NSA performance is below contractual terms, Sales should establish a plan and timeline for improved performance.
Furthermore, Sales should schedule additional monthly or bi-monthly business reviews to ensure performance improves in accordance with the established plan.

Contract termination or renewal: According to Sales officials, the final activity for managing a Parcel Select NSA is to terminate, renew, or allow the contract to expire. Each Parcel Select NSA describes how the contract can be terminated. Contract terms and conditions typically allow USPS and mailers to terminate a Parcel Select NSA by mutual agreement in writing for convenience. According to the PRC, USPS should promptly notify the PRC if a Parcel Select NSA is terminated prior to the scheduled expiration date. Sales officials said that they generally contact mailers 3 to 6 months before a Parcel Select NSA expires to explore whether renewing the contract would be mutually beneficial. If both parties agree, Sales officials told us that USPS and the mailer should follow the same procedures used to create the original contract to execute the renewal. If either party does not wish to renew the contract, the Parcel Select NSA expires on its contractually specified date.

Standard Procedures Have Gaps in Contract Monitoring and Evaluation

While the Standard Procedures address some responsibilities involved in managing Parcel Select NSAs, some gaps exist. Specifically, the procedures lack documentation requirements and clearly defined management responsibilities for some contract monitoring and evaluation activities. A senior USPS official acknowledged that the Standard Procedures contained gaps when the document was initially established and that Sales had not reviewed or updated the procedures, which could have provided USPS with an opportunity to identify and address such gaps.
Documenting contract monitoring and evaluation activities: USPS’s Standard Procedures lack documentation requirements for some contract monitoring and evaluation activities such as the occurrence and results of business reviews for active Parcel Select NSAs. Each of the seven contracts that were effective in October 2014 requires USPS to conduct business reviews to discuss contract performance semiannually. However, Sales officials told us they did not know the extent to which they conduct business reviews for each contract, because they do not always document that the meetings actually happen. Sales officials verified—through emails, meeting agendas, and other documents—that staff scheduled or conducted at least one business review for five of these seven contracts. Similarly, the Standard Procedures lack requirements for documenting some key management decisions pertaining to contract monitoring and evaluation. For example, USPS did not document management decisions about an underperforming contract. Specifically, according to Sales officials, when one mailer did not reach its minimum volume requirement, Sales did not document any discussions with the mailer to address the issue. Moreover, USPS did not require the mailer to pay the difference between the discounted price and the published price, as called for under the contract. Sales officials told us that they decided not to require the payment for business reasons, including maintaining this mailer as a source of package volume and allowing the mailer time to adjust to changing market conditions that influenced its performance. Although we recognize that USPS should have flexibility to reach management decisions based upon the facts and circumstances of each contract, Sales officials did not document the reasons for their decision to leave the performance issue unresolved and forgo the additional revenue. 
Documenting such information would help ensure that future decisions are based upon past results and enhance accountability for the effective and efficient use of USPS resources. Internal control standards state that information should be documented in a form and within a time frame that enables individuals to carry out their internal control and other responsibilities and that the documentation should be properly managed and maintained. Reviewing and updating the Standard Procedures to include documentation requirements would provide additional assurance that USPS’s Parcel Select NSAs are effectively managed. Inconsistent management and maintenance of documentation also increases the risk that USPS may not retain important institutional knowledge. As previously mentioned, management responsibility for Parcel Select NSAs has transitioned among three different USPS entities since calendar year 2008 (see fig. 4). USPS officials told us that documentation of key management decisions was lost or discarded when lead responsibility for managing the contracts transitioned from Domestic Products to Sales in 2012. The projected importance of package shipping and Parcel Select NSAs to USPS’s financial future highlights the importance for USPS to document contract management activities to retain institutional knowledge and inform future decision-making. Defining management responsibilities for contract monitoring and evaluation activities: USPS’s Standard Procedures do not define who is responsible for addressing unresolved contract performance issues and how staff should resolve them. Internal control standards state that organizations should clearly define key areas of responsibility and establish procedures to ensure that management promptly resolves performance issues. 
Without clearly defining who is responsible for promptly addressing performance issues and the procedures these individuals should follow, USPS is at risk of leaving performance issues unresolved—such as the example of a mailer’s failure to reach minimum volumes discussed above—issues that could impact future contract performance. In addition, the Standard Procedures do not clearly define who is responsible for reporting contract changes, such as rate adjustments and amendments, to PRC. Internal control standards and leading contract management practices we identified state that organizations should establish appropriate lines of reporting. Without clearly defining who is responsible for reporting information to PRC in the Standard Procedures, USPS is at risk of not reporting contract changes to PRC as required. For example, although PRC requested that USPS file an amendment to a Parcel Select contract in November 2014, USPS did not file the requested amendment as ordered. Similarly, according to PRC notices, on several occasions USPS did not report discretionary rate adjustments for Parcel Select NSAs as required.

USPS Could Improve Its Method to Estimate Attributable Costs for Parcel Select NSAs

Each competitive product, including each Parcel Select NSA, must earn sufficient revenues to cover its attributable costs; however, USPS’s costing method to estimate attributable costs for individual Parcel Select contracts does not account for all key cost factors. USPS compiles attributable costs for the Parcel Select product by using data from various information systems—notably financial accounting systems—that collect data on employee compensation and benefit costs. USPS then collects additional data to estimate how much employee time is spent handling Parcel Select packages, and uses this information to estimate the total attributable costs for the Parcel Select product.
However, USPS officials told us that they do not collect or study some information that could improve these estimates for individual Parcel Select contracts. Specifically, while USPS does collect data on the weight of all Parcel Select packages, it does not collect information on the size of NSA packages and has not studied the impact of either of these factors on USPS’s delivery costs for specific contracts. This limits USPS’s analysis of attributable costs for Parcel Select NSAs. Larger and heavier packages can increase USPS’s delivery costs, for example by requiring carriers to split routes, resulting in decreased operational efficiency and additional operating costs (see fig. 5). USPS uses three basic delivery modes: door, “curbline,” and centralized delivery. Door delivery includes delivery to mail slots in the door as well as mailboxes attached to houses. Curbline delivery includes delivery to curbline mailboxes that are typically unlocked mail receptacles on a post and commonly used on routes serving residential customers. Centralized delivery is provided to centrally located mail receptacles, such as apartment house mailboxes and cluster box units. As we reported in May 2014, USPS estimated that its delivery costs in fiscal year 2012 ranged from about $380 annually for the average door delivery point to about $240 for curbline delivery and about $170 for centralized delivery. See GAO, U.S. Postal Service: Delivery Mode Conversions Could Yield Large Savings, but More Current Data Are Needed, GAO-14-444 (Washington, D.C.: May 12, 2014). Moreover, private sector mailers use package weight and size information to inform their own business decisions.
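As a rough illustration of why delivery mode matters to per-point economics, the fiscal year 2012 GAO estimates cited above imply sizable annual cost differences. The short sketch below is an illustrative calculation only, not a USPS tool; the rounded dollar figures come from the estimates quoted above.

```python
# Approximate fiscal year 2012 annual delivery cost per delivery point,
# rounded, from the GAO-14-444 estimates cited in the text.
annual_cost_per_delivery_point = {
    "door": 380,
    "curbline": 240,
    "centralized": 170,
}

def annual_difference(mode_a: str, mode_b: str) -> int:
    """Annual cost difference between two delivery modes for one delivery point."""
    costs = annual_cost_per_delivery_point
    return costs[mode_a] - costs[mode_b]

# Door delivery costs about $210 more per year per delivery point
# than centralized delivery under these estimates.
print(annual_difference("door", "centralized"))
```

The same arithmetic suggests why packages too large for a centralized receptacle, which must instead be walked to the door, push per-package delivery costs upward.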
For example, UPS and FedEx—both of which pay USPS to deliver packages the last mile under certain circumstances and compete with USPS for end-to-end package delivery business—recently began basing the ground delivery rates they charge on both the size and weight of packages, a concept called “dimensional weighting.” Representatives from another mailer told us that in serving their own customers, they continually assess the characteristics of each package—including weight and size—to determine the costs of handling each package and to make cost-effective decisions regarding the delivery method they choose. If mailers choose to route larger or heavier packages via Parcel Select, collecting and studying information on the weight and size of packages could position USPS to better understand how this affects its delivery costs and the extent to which revenues for individual Parcel Select NSAs cover their attributable costs. In addition to not collecting detailed cost information that accounts for key package characteristics, USPS’s method to estimate attributable costs uses national averages for Parcel Select packages instead of contract-specific cost estimates for each NSA. This use of averages further limits USPS’s analysis of the extent to which each contract covers its attributable costs. Specifically, because USPS does not have contract-specific cost estimates, USPS compares (a) the average cost per piece for Parcel Select packages with (b) the average revenue per piece for each Parcel Select NSA. PRC officials told us that based on currently available information, they had not observed significant variations in the characteristics of packages shipped under individual Parcel Select NSAs that would suggest that using average costs is not reasonable. However, as we describe above, USPS does not collect or study information that could help inform the reasonableness of its method to estimate attributable costs.
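The comparison described above amounts to simple per-piece arithmetic. The sketch below illustrates how an average-cost coverage test works; every figure and contract name in it is hypothetical, invented only to show the mechanics and the limitation that a single product-wide average cost cannot reflect contracts whose packages differ from the average.

```python
# Hypothetical illustration of an average-cost coverage test (all figures invented).
PRODUCT_AVG_COST_PER_PIECE = 1.80  # hypothetical product-wide average attributable cost

nsa_avg_revenue_per_piece = {  # hypothetical average revenue per piece, by contract
    "NSA-A": 2.10,
    "NSA-B": 1.95,
    "NSA-C": 1.75,
}

def covers_attributable_cost(avg_revenue_per_piece: float) -> bool:
    """True if a contract's average revenue per piece meets or exceeds
    the product-wide average attributable cost per piece."""
    return avg_revenue_per_piece >= PRODUCT_AVG_COST_PER_PIECE

coverage = {nsa: covers_attributable_cost(rev)
            for nsa, rev in nsa_avg_revenue_per_piece.items()}
```

Because the same average cost is applied to every contract, this test treats a contract shipping unusually large or heavy packages no differently than one shipping average packages, which is the limitation the report describes.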
Finance officials stated that because the weight of packages shipped under the Parcel Select NSAs did not vary significantly from the average weight, they had no reason to believe that heavier Parcel Select packages delivered under the NSAs would be more costly to process and deliver than the average cost to deliver packages that are not under the NSAs. However, these officials noted that they had not studied the extent to which package characteristics—including package weight and dimension—varied under individual Parcel Select NSAs or whether they deviated from the characteristics of packages that USPS used to estimate the national average. USPS officials questioned whether the time and expense required to develop contract-specific attributable cost estimates that account for package weight and dimension would significantly improve decision-making. Specifically, USPS officials said that for packages that mailers enter at DDUs, developing such estimates would require USPS to determine the costs that each DDU in its network would accrue—including costs associated with package size, package weight, delivery mode, and exact handling and delivery personnel—for each contract. For packages entered higher in USPS’s delivery stream (at sectional center facilities and network distribution centers), USPS officials noted that obtaining contract-specific information would require USPS to fully integrate the entire USPS mail data system and entirely replace its existing cost system, which would cost many hundreds of millions of dollars. We recognize that USPS must carefully balance the costs and benefits associated with developing contract-specific attributable cost estimates for Parcel Select NSAs. However, without taking steps to collect and study key cost information to develop these estimates, limitations on USPS’s analysis of attributable costs will persist. In the past, USPS has used other less intensive methods to improve its cost estimates for other products.
For example, USPS has used “special studies” to improve cost information by examining operations at a sample of USPS facilities over a limited period of time. According to USPS officials, such special studies have generally cost hundreds of thousands of dollars to conduct in the past, with costs varying depending on the study’s scope. For example, in 2008 PRC noted that disaggregated Parcel Select and Parcel Return Service costs were needed. To disaggregate these costs, USPS conducted a field study at sampled DDUs in the summer of 2008 and used the data it collected to isolate transportation and mail processing costs specifically for Parcel Select and Parcel Return Service products. As another example, USPS recently conducted a comprehensive study of city carrier street time activities and costs, which involved collecting data on sampled city routes to help attribute city carrier costs to various types of mail, including parcels.

Conclusions

USPS continues to face significant financial challenges. Parcel Select helps to address these challenges by providing USPS with a revenue source that has increased from $466 million in fiscal year 2009 to over $2.5 billion in fiscal year 2014. Over time, USPS has taken important steps to standardize the procedures it uses to develop, monitor, and evaluate the performance of its Parcel Select NSAs, including developing its Standard Procedures in June 2014. However, the Standard Procedures lack documentation requirements and clearly defined management responsibilities for some contract monitoring and evaluation activities. Reviewing and updating the Standard Procedures to address these gaps would provide additional assurance that USPS’s Parcel Select NSAs are effectively managed.
In addition, collecting and studying information on the size and weight of Parcel Select NSA packages could improve USPS’s analysis of attributable cost coverage for each contract and position USPS to better understand how mailers’ business decisions affect these costs. This may prove to be important, as mailers have developed more discrete costing methods based on both the size and weight of packages, which may influence the characteristics of packages that they provide to USPS for last-mile delivery.

Recommendations for Executive Action

To provide additional assurance that the procedures USPS uses to develop, monitor, and evaluate the performance of its Parcel Select NSAs are effective, the Postmaster General should direct executive leaders in Sales to: review the Standard Procedures to identify gaps in contract management responsibilities, including documentation requirements and assigning clearly defined management responsibilities for contract monitoring and evaluation activities, and update the procedures to address the identified gaps. To better understand attributable costs for individual Parcel Select NSAs, the Postmaster General should direct the appropriate staff to: identify and implement cost-effective methods, such as using a sample, to collect and study information on the costs of delivering Parcel Select packages of varying characteristics in order to develop contract-specific attributable cost estimates.

Agency Comments and Our Evaluation

We provided a draft of this report to USPS and PRC for review and comment. USPS and PRC provided written comments, which are summarized below and reproduced in appendix II and appendix III, respectively. USPS and PRC also provided technical comments, which we incorporated, as appropriate. In its comments, USPS agreed to update its Standard Procedures and concurred in principle to develop contract-specific attributable cost estimates for Parcel Select NSAs.
USPS noted that it did not believe that package size and weight significantly affect the costs of Parcel Select NSAs, because most packages are entered at the DDU and thus bypass much of USPS’s processing and transportation network. However, as the report notes, while USPS does collect data on the weight of all Parcel Select packages, it does not collect information on the size of NSA packages and has not studied the impact of either of these factors on USPS’s delivery costs for specific contracts. This limits USPS’s analysis of attributable costs for Parcel Select NSAs. USPS stated it would explore obtaining additional mailer characteristics and ascertaining their relationships to costs for specific Parcel Select NSAs. USPS noted in its comments that it has already taken steps to obtain contract-specific characteristics for Sunday delivery of NSA packages. Sunday delivery, however, currently constitutes a relatively small percentage of Parcel Select volume. As the report notes, collecting and studying information on the size and weight of Parcel Select packages is important, because larger and heavier packages can increase USPS’s delivery costs. The report also points out that private sector mailers use package weight and size information to inform their own business decisions, such as making cost-effective decisions regarding the delivery method they choose. As the report concludes, collecting and studying information on the size and weight of Parcel Select NSA packages could improve USPS’s analysis of attributable cost coverage for each contract and position USPS to better understand how mailers’ business decisions affect these costs. This understanding will be important given the rapidly changing package delivery market. In its comments, PRC said that it found our draft report well-researched and balanced and agreed with both of our recommendations.
PRC supported the cost-effective development of more accurate attributable cost estimates for Parcel Select NSAs. PRC also clarified that USPS’s method to estimate average costs for Parcel Select packages excludes Parcel Select mail weighing less than one pound. We modified the relevant text to note this clarification. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Postmaster General, Acting Chairman of PRC, USPS Office of Inspector General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix IV.

Appendix I: Scope and Methodology

To examine the procedures that the U.S. Postal Service (USPS) has established to develop, monitor, and evaluate the performance of its Parcel Select negotiated service agreements (NSA), including implementation, we reviewed relevant laws and regulations such as the Postal Accountability and Enhancement Act (PAEA). We also examined USPS documents such as the Standard Procedure for Managing NSA Contracts that describe the procedures. We also reviewed copies of all 13 Parcel Select NSAs executed by USPS along with their supporting file documentation, such as contract performance and business review reports. This includes eight Parcel Select and five Parcel Select & Parcel Return Service contracts. For purposes of our report, we refer to both of these contract categories as Parcel Select NSAs, since each includes the Parcel Select product.
We also examined Postal Regulatory Commission (PRC) documents, such as orders and notices. We did not review the extent to which USPS complied with all of its management procedures because this did not fall within the scope of this review. We also did not report volume, revenue, or mailer-specific information for individual NSAs because USPS and mailers consider the information proprietary, and according to USPS officials, public disclosure would violate contractual confidentiality provisions. In addition, we conducted interviews with USPS officials from the Sales, Finance, and Legal departments, who are responsible for developing, monitoring, and evaluating the performance of Parcel Select NSAs; PRC officials responsible for reviewing the contracts; and representatives from eight mailers that signed 12 of the 13 Parcel Select NSAs to determine how USPS implements its procedures. The number of mailers is less than the number of implemented contracts because some mailers signed more than one contract. We did not interview representatives from one of the mailers that signed a Parcel Select NSA, because the company was sold and knowledgeable representatives were no longer available to meet with us. According to USPS, no packages were ever shipped under that contract. We compared USPS’s procedures for managing the contracts against selected leading contract management practices. To select these practices, we: Identified criteria sources that included leading contract management practices and then confirmed that these sources were appropriate through consultation with internal stakeholders. 
These sources included: (a) our standards for internal control in the federal government; (b) guidance documents and regulations for federal agencies, such as best practices issued by the Office of Federal Procurement Policy; and (c) documents from contract management and administration organizations including the National Contract Management Association and the Institute for Supply Management, which were recognized for their expertise in this area. Reviewed information included in each criteria source to identify practices that are relevant to either (a) buyers and sellers or (b) just sellers. Because the Postal Service is the service provider (or seller) under Parcel Select NSAs, we excluded leading practices that are only pertinent to buyers. In addition, we identified practices that were applicable to USPS and its unique statutory and regulatory requirements as an independent establishment of the executive branch of the federal government. Exercised professional judgment to select practices that we identified as particularly relevant to managing Parcel Select NSAs. Based upon these steps, we selected the following leading contract management practices: defining roles and responsibilities; establishing procedures for the contract management process; planning and negotiation; managing performance; communicating effectively; and documenting management activities and decisions. USPS agreed that these practices were relevant and reasonable. To determine revenue and volume trends for Parcel Select, which we describe in the background of our report, we reviewed USPS Revenue, Pieces, & Weight reports, which present official USPS estimates. We assessed the reliability of these data by reviewing related documentation, such as a December 2007 USPS Office of Inspector General report on USPS’s data system, and by collecting information from knowledgeable USPS officials, and determined that the data were sufficiently reliable for our reporting purpose.
To examine the method that USPS uses to determine whether each Parcel Select NSA covers its attributable costs, we reviewed relevant laws and regulations that establish requirements for product categories and contracts such as PAEA. We also analyzed agency documents that describe or discuss the methodology that USPS uses, such as USPS’s fiscal year 2014 Annual Compliance Report and PRC’s Annual Compliance Determination Reports for fiscal years 2008 through 2013, the most recent reports available. We also reviewed relevant federal agency, academic, and GAO reports that describe USPS’s costing approach and related issues, such as the USPS Office of Inspector General’s audit reports. In addition, we conducted interviews with USPS and PRC officials to obtain their views on USPS’s cost coverage methodology. Finally, we made written requests for information regarding the methodology used to calculate attributable cost to USPS and PRC officials, who provided written responses on the subject. We did not review USPS’s postal costing methodology generally, such as methods to divide costs into attributable and institutional costs and methods to distribute attributable costs to various USPS products and services. We conducted this performance audit from April 2014 through April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the United States Postal Service

Appendix III: Comments from the Postal Regulatory Commission

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, key contributors to this report were Lorelei St.
James (Director); Derrick Collins (Assistant Director); Amy Abramowitz; Teresa Anderson; William Colwell; Aaron Colsher; Caitlin Dardenne; Colin Fallon; Kenneth John; DuEwa Kamara; Kimberly McGatlin; SaraAnn Moessbauer; Josh Ormond; Amelia Shachoy; Crystal Wesco; and Bill Woods.

USPS faces significant financial challenges. Parcel Select is a key USPS package shipping product to help address these challenges, with revenues growing from $466 million to over $2.5 billion from fiscal year 2009 through fiscal year 2014. As of April 3, 2015, USPS has executed 13 Parcel Select NSAs—customized contracts that lower mailers' shipping prices in exchange for meeting volume targets and other requirements. GAO was asked to review how USPS manages these contracts. GAO examined (1) USPS's procedures to manage Parcel Select NSAs and (2) its method to determine attributable cost coverage for each contract. To perform this work, GAO reviewed relevant laws and regulations; analyzed documents for all 13 contracts; compared USPS's procedures against selected leading contract management practices most applicable to USPS's role; and interviewed USPS and PRC officials and representatives from 8 mailers that signed 12 of the 13 contracts. In June 2014, the U.S. Postal Service (USPS) established standard procedures that departments should follow to manage all negotiated service agreement (NSA) contracts, including Parcel Select NSAs. Although the procedures, in part, address some leading contract management practices, such as defining performance management activities, they lack documentation requirements and clearly defined management responsibilities for some activities. For example, the procedures do not require USPS to document some key management decisions, such as USPS's decision to forego additional revenue when a mailer did not ship a minimum volume of packages, as contractually required.
Documenting such information could improve future decision-making and enhance accountability for the effective and efficient use of USPS resources. USPS acknowledged that the procedures contained gaps when they were initially established. Reviewing and updating the standard procedures to include documentation requirements and clearly defined management responsibilities would provide additional assurance that USPS's Parcel Select NSAs are effectively managed. USPS's costing method for Parcel Select NSAs does not account for package size or weight or use contract-specific cost estimates. Each Parcel Select NSA is required to earn sufficient revenues to cover USPS's costs—referred to as “attributable costs” in the postal context. The Postal Regulatory Commission (PRC), which annually reviews compliance, determined that each contract met this requirement. However, USPS's analysis of attributable costs for Parcel Select NSAs is limited, because USPS has not studied the impact of package size or weight on specific contracts or developed contract-specific cost estimates. Package size and weight information: USPS does not collect information on the size of NSA packages and has not studied the impact of package size and weight on USPS's delivery costs for specific contracts. However, larger and heavier packages can increase USPS's costs. For example, carriers must walk packages that are too large for a centralized mailbox to the customer's door, which increases costs. Moreover, mailers use size and weight to inform their own business decisions. For example, one mailer continually assesses package size and weight to make cost-effective decisions about the delivery method it chooses. If mailers route larger or heavier packages via Parcel Select, collecting and studying such information could improve USPS's analysis of attributable costs for Parcel Select NSAs.
Contract-specific cost estimates: USPS's method to determine attributable costs uses average cost estimates for the Parcel Select product instead of contract-specific cost estimates. USPS's use of averages further limits its analysis of the extent to which each Parcel Select NSA covers its attributable costs, because USPS had not studied the extent to which the size and weight of packages shipped under individual contracts deviated from the characteristics of packages USPS used to estimate the average. USPS officials questioned whether the benefits of developing contract-specific cost estimates would exceed the costs; however, USPS has used less intensive methods, such as sampling, to improve estimates for other products in the past.
Background

Several key laws and an executive order directly relate to federal agencies’ use of ESPCs (see table 1). In addition, in December 2011, the President challenged federal agencies to enter into $2 billion in performance-based contracts, including ESPCs and utility energy service contracts through the President’s Performance Contracting Challenge. In May 2014, the President expanded this challenge to total $4 billion in performance-based contracts by the end of 2016. The process that agencies and contractors generally follow for developing and implementing an ESPC project spans five phases, from acquisition planning—during which agencies identify project requirements and assemble their acquisition team—to project performance—during which energy conservation measures are in place and operating, and agencies pay contractors. Figure 1 illustrates the general process for developing and implementing an ESPC project. During the process of developing and implementing an ESPC project, agency officials often work with DOE’s or the U.S. Army Corps of Engineers’ (Corps) federal contracting centers. Both DOE and the Corps have awarded indefinite-delivery, indefinite-quantity ESPC contract vehicles to a set of prequalified energy services contractors. Agencies using these “umbrella” contract vehicles can award an ESPC for an individual project to any of the prequalified contractors. Using one of these contract vehicles allows agencies to develop and implement an ESPC project in less time because the process of competitively selecting qualified contractors has already been completed, and key aspects of contracts have been broadly negotiated. In addition, both DOE and the Corps provide contracting and technical support to agencies that use their contract vehicles. For example, DOE’s FEMP provides facilitation services, where a third party assists the agency and contractor in agreeing on the terms of a contract.
FEMP also issues guidance and offers training for agencies on the various steps of developing and implementing an ESPC project. The Corps provides technical support, cost estimating services, and legal support to agencies using its contract vehicle. As part of revisions that DOE and the Corps made to their contract vehicles in 2008, both DOE and the Corps now require that agencies use a qualified project facilitator when developing and implementing an ESPC, which addresses the recommendation in our June 2005 report to ensure that agencies use appropriate expertise when undertaking an ESPC. Additionally, each of the seven agencies in our review has established a central office to support individual sites with developing and implementing ESPC projects, and several of these agencies have increased the role of their central offices since our last review to provide additional support, such as oversight of ESPCs. An ESPC project’s expected cost and energy savings are established during project development, finalized when the contract is awarded, and measured and verified over the course of a project’s performance period. These savings can include reductions in costs for energy, water, operation and maintenance, and repair and replacement directly related to the energy conservation measures. Agencies must pay contractors from funds appropriated or otherwise made available to pay for such utilities and related operational expenses. Payments to contractors generally cover the costs associated with equipment and installation, contractor-provided operation and maintenance services, financing charges, and other costs. 
ESPC projects generally include two types of expected savings: (1) proposed cost and energy savings, which contractors estimate will result from the energy conservation measures installed, and (2) guaranteed cost savings, which must be achieved for the contractor to be fully paid. Generally, contractors guarantee about 95 percent of a project’s proposed cost savings, which gives them room for some amount of proposed savings to not be achieved without a reduction in their payments. Energy and cost savings are the difference between a projected baseline of energy use without the energy conservation measures and energy use with the measures in place. The process used to determine ESPC savings is referred to as measurement and verification. Most ESPC projects include the following four key documents that outline how cost and energy savings are to be measured and verified:

Measurement and verification plan. During the project development phase, the contractor and agency develop a plan that establishes how to measure and verify that savings are achieved. Measurement and verification methods can include surveys, inspections, direct measurements of energy use, and other activities to ensure that equipment is operating correctly and has the potential to generate expected savings.

Risk and responsibility matrix. During the contractor selection phase, the contractor and agency develop a risk and responsibility matrix that identifies key project risks and their potential effects, and specifies whether the agency or the contractor will be responsible for managing financial risks, such as changing interest rates; operational risks, such as operating hours and weather; and performance risks, such as equipment performance and preventative maintenance.

Postinstallation measurement and verification report. After the energy conservation measures are installed, the contractor conducts measurement and verification activities and presents the results in a postinstallation measurement and verification report.

Annual measurement and verification report. Throughout an ESPC’s performance period, the contractor conducts measurement and verification activities and submits an annual report to the agency to document the cost and energy savings achieved.

According to FEMP guidance, one of the primary purposes of measurement and verification is to reduce the risk that expected savings will not be achieved. FEMP guidance describes risks related to (1) equipment use, which stem from uncertainty in operational factors, such as the number of hours equipment is used or changes in the planned operation of equipment, and (2) equipment performance, which stem from uncertainty in projecting a specified level of performance. Contractors are usually reluctant to assume risks related to equipment use because they often have no control over operational factors. In contrast, according to FEMP guidance, the contractor is ultimately responsible for the selection, design, and installation of equipment and typically assumes responsibility for performance risks. FEMP guidance outlines a range of options that contractors may select to measure and verify the cost and energy savings achieved by each energy conservation measure. If certain factors that affect savings, such as weather conditions, utility prices, and hours of agency operation, are either too complex or costly to measure, agencies and contractors may choose to agree in advance on—or stipulate—the values for those factors regardless of the actual behavior of those factors. For example, because ESPCs can be long contracts, the contractor and agency typically stipulate escalation rates to estimate future utility prices during the performance period.
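A stipulated escalation rate amounts to simple compounding: the stipulated utility price for a given contract year is the base-year price grown at the agreed annual rate. A minimal sketch of that arithmetic follows; the base price and escalation rate are invented for illustration, not values from FEMP guidance or any contract.

```python
def stipulated_price(base_price, escalation_rate, year):
    """Stipulated utility price in a given contract year, assuming the
    agreed annual escalation rate compounds from the base-year price."""
    return base_price * (1 + escalation_rate) ** year

# Hypothetical values: $8.00/MMBtu base price escalated at 2.5% per year.
year_10 = stipulated_price(8.00, 0.025, 10)  # about $10.24/MMBtu
```

Over a 15- to 25-year performance period, even a modest escalation rate compounds substantially, which is why the choice of stipulated rates matters for how reported savings compare with actual utility costs.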
If the savings that are achieved are less than the savings calculated using stipulated values, the agency pays the contractor for the savings calculated using stipulated values. If achieved savings are greater than the savings calculated using stipulated values, the agency retains the additional savings. The measurement and verification options that FEMP guidance outlines vary in their rigor and costs. The option that is generally the least rigorous and costly involves measuring the key factors affecting energy use—such as the number of lighting fixtures or efficiency of a heating unit—before and after installation, but typically does not involve measuring such factors over the term of the contract. In contrast, other options outlined in FEMP guidance generally involve ongoing measurements of energy use, or proxies of energy use, over the contract term. FEMP guidance helps identify when each option should be used and states that the selection of a measurement and verification method is based on project costs and savings, the complexity of the energy conservation measure, and the uncertainty or risk of savings being achieved, among other considerations. According to FEMP guidance, costs for measurement and verification generally increase with the level of accuracy required in energy savings analyses and the number and complexity of variables that are analyzed, among other factors. Moreover, the incremental value of additional measurement and verification will at some point be less than its cost. For instance, the energy consumed by a light fixture does not change appreciably over time, and requiring contractors to measure fixtures annually would increase the cost of measurement and verification for little benefit, according to FEMP officials.
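Taken together, the guarantee level and the stipulation rules described above can be sketched as a simple annual settlement calculation. This is a hypothetical illustration, not the contract vehicles' actual payment formula: the 95 percent guarantee fraction is the approximate level the report cites rather than a fixed contractual constant, and the dollar amounts are invented.

```python
def annual_settlement(proposed, reported_stipulated, achieved_actual):
    """Sketch of how ESPC savings figures relate in a given year.

    proposed: contractor's proposed annual cost savings.
    reported_stipulated: savings calculated per the M&V plan using
        stipulated values; this is the basis for agency payments.
    achieved_actual: the same savings recomputed with actual factor
        values (prices, hours of use), shown only for comparison.
    """
    guaranteed = 0.95 * proposed  # approximate typical guarantee level
    # A shortfall exists only if stipulated-value savings miss the
    # guarantee; the contractor's payment is reduced for a shortfall.
    shortfall = max(guaranteed - reported_stipulated, 0.0)
    # If actual conditions produced extra savings, the agency retains the
    # surplus; if they produced less, the agency still pays on the
    # stipulated-value amount.
    surplus_retained = max(achieved_actual - reported_stipulated, 0.0)
    return guaranteed, shortfall, surplus_retained

guaranteed, shortfall, surplus = annual_settlement(1_000_000, 960_000, 990_000)
# guaranteed is about 950,000; reported savings exceed it, so there is no
# shortfall, and the agency retains the 30,000 of surplus savings.
```

The asymmetry in the last step is the point of the stipulation rules: the contractor's payment tracks stipulated-value savings, while the agency bears or keeps the difference when actual conditions diverge.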
Selected Agencies Have Awarded About $12.1 Billion for ESPCs for a Variety of Projects, and Their Plans for Continued Use Vary

In fiscal years 1995 through 2014, the seven selected agencies in our review awarded approximately $12.1 billion in ESPCs for a variety of projects, such as constructing biomass facilities to heat federal buildings. According to agency officials and documents, agencies plan to continue using ESPCs to meet federal energy directives and initiatives, but some agency officials said they are hesitant to use ESPCs to consolidate data centers.

Selected Agencies Awarded About $12.1 Billion for ESPCs in Fiscal Years 1995 through 2014

In fiscal years 1995 through 2014, the seven selected agencies awarded approximately $12.1 billion for more than 500 ESPC projects to help fund energy conservation measures in federal facilities. The total amount awarded in ESPCs varied by agency, with the Army awarding the most—approximately $3.7 billion. (See fig. 2.) Data on the ESPCs awarded by each agency are included in appendix II. The seven selected agencies awarded approximately 530 ESPCs in fiscal years 1995 through 2014. The length of the contracts for these projects ranged from approximately 2 to 25 years, with an average of about 16 years. Additionally, the projects for ESPCs awarded during this period had a total guaranteed cost savings of roughly $12.4 billion and total proposed energy savings of approximately 563 trillion British thermal units (Btu). (See app. II for energy savings by agency.)

Agencies Have Used ESPCs for a Variety of Projects

The seven agencies have used ESPCs for a variety of projects ranging from smaller-scale projects to install more energy-efficient light bulbs or water flow restrictors in toilets, to larger-scale projects, such as power generation projects.
For example, GSA officials used ESPCs for three projects at its White Oak, Maryland, facility to install infrastructure and equipment with cogeneration capabilities, which involves the simultaneous production of electricity and heat from a single fuel source, such as natural gas. Figure 3 shows some of the White Oak cogeneration project components. Additionally, DOE installed a biomass facility at the National Renewable Energy Laboratory in Golden, Colorado. The biomass facility, the first of its kind for DOE, according to project officials, uses wood chips from forest thinnings and trees killed by either beetles or fire as fuel to generate heat that warms water for buildings at the campus. Figure 3 shows some of the components of DOE’s biomass facility. Some agencies have started to use ESPCs to develop larger and more comprehensive projects to try to achieve greater cost and energy savings. For example, in 2012, GSA began using ESPCs for its National Deep Energy Retrofit program, which, according to an analysis by the Oak Ridge National Laboratory, achieved an average level of savings more than twice that of other federal ESPC projects. Furthermore, to help achieve these cost and energy savings, agencies have increasingly turned to bundling energy conservation measures together under an ESPC, which is more efficient than using separate contracts, according to FEMP officials.

Agencies’ Plans for Using ESPCs in the Future Vary, Particularly for Data Center Consolidation Projects

Selected agencies’ plans to use ESPCs in the future vary. Officials from five of the agencies we spoke with said their plans for continuing to use ESPCs will help their agencies meet goals in federal executive orders and other energy goals, including the President’s Performance Contracting Challenge. For example, in response to federal energy goals, Army officials said they plan to aggressively pursue using ESPCs, among other financing options, to improve energy efficiency.
Justice officials said they plan to extensively use ESPCs at all of their Bureau of Prisons sites to upgrade and repair many buildings that have aging infrastructure. VA officials said ESPCs are one of many tools to meet energy goals and the agency prioritizes ESPCs at all facilities where feasible. With regard to the President’s Performance Contracting Challenge, agencies government-wide had awarded approximately $1.9 billion in performance-based contracts out of the $4 billion goal as of January 2015, as shown in table 3, with the seven selected agencies in our review awarding most of these contracts. If agencies award the contracts they currently have planned, they will meet the Challenge’s goal of awarding $4 billion in performance-based contracts by the end of 2016 (see app. III for federal agencies’ status in achieving their goals under the President’s Performance Contracting Challenge). Some agency officials we interviewed said they are interested in using ESPCs to consolidate data centers, which consume significant amounts of energy and can be costly to operate, but the agency officials are hesitant to move forward with such projects because of concerns OMB staff have raised about using ESPCs for such projects. By law, ESPCs must be used “solely for the purpose of achieving energy savings and benefits ancillary to that purpose.” However, the law does not specify what qualifies as ancillary benefits—also referred to as energy-related savings—or the proportion of an ESPC’s overall savings that can be energy-related. OMB guidance on federal use of performance contracts outlines some general criteria that projects must meet to be scored under OMB’s annual budget scoring process but does not provide specific guidance on energy-related savings. If an ESPC project does not meet OMB’s criteria, then to pursue the project the agency would need to obligate funding for the entire contract “up front” in its first year, rather than annually.
This can be an issue because agencies might not have the funding for the entire contract during its first year, which would leave agencies with the option of either canceling the contract or moving funding from other agency efforts. According to DOE officials, they nearly completed the project development phase of the ESPC development and implementation process in May 2011 for a project to use an ESPC to consolidate data centers. However, they delayed awarding the contract in March 2013 because OMB staff raised concerns about the project. DOE officials said the project, as originally proposed, would consolidate two data centers and replace 5,000 desktop computers with computers that are more energy efficient. The project was expected to save DOE approximately $76 million; 97 percent of the overall cost savings would come from reduced operations and maintenance costs, such as maintaining computer hardware and software (that is, energy-related savings), and the remaining 3 percent from energy savings. According to DOE officials, the concerns that OMB staff raised included (1) whether savings resulting from more efficient information technology equipment qualify as energy-related savings and (2) the project’s high proportion of cost savings resulting from the reduction in operations and maintenance costs, rather than energy cost savings. At the time of our review, DOE had resumed consideration of the project and had not awarded a contract, but said that OMB staff had not clarified their position regarding their concerns. According to Army officials, the Army is also interested in using ESPCs to consolidate data centers, but they are hesitant to move forward with any projects because they have heard about OMB’s concerns and are waiting to learn OMB’s position regarding DOE’s data center consolidation.
Army officials said they have not seen any information on OMB’s position officially released, which they said they need before pursuing the use of ESPCs for data center consolidation projects. Furthermore, DOD officials who oversee Army and other DOD agencies said the agencies need clarification on whether moving data to a more energy efficient off-site storage facility (rather than storing it on servers in DOD facilities) or eliminating help desk support and software licenses would qualify as energy-related savings under OMB guidance. According to federal standards for internal control, information should be communicated to those who need it in a form and within a time frame that enables them to carry out their responsibilities. Because OMB staff have expressed concerns about but have not clarified their position on what qualifies as energy-related savings and the allowable proportion of energy and energy-related cost savings, DOE delayed its data center consolidation project and some agencies, such as the Army, have been hesitant to pursue using ESPCs for such projects. As a result, agencies might be needlessly missing opportunities for potential energy and energy-related cost savings. We solicited OMB staff’s comments during our review regarding their position on DOE’s data center consolidation ESPC project, as well as the use of ESPCs for data center consolidation projects generally, regarding what qualifies as energy-related savings and the proportion of cost savings resulting from operations and maintenance and energy use reduction. In response, OMB staff said, in part, that it is generally not appropriate for them to comment on the merits of specific contracts.
Reported Savings Generally Exceeded Expectations, but Some Savings for Selected ESPC Projects Were Overstated

The cost and energy savings that contractors reported for most ESPCs met or exceeded expected savings, according to studies by DOE’s Oak Ridge National Laboratory, but some of these savings may be overstated. Our review of a nongeneralizable sample of 20 projects found that contractors overstated cost and energy savings for 14 projects by reporting some savings that, due to agency actions, were not achieved. Contractors must calculate and report savings in accordance with plans agreed to in their contracts with agencies. If factors beyond contractors’ control reduce the savings achieved, contractors generally are not required to reduce the amount of savings they report or measure the effects of such factors on savings. Agencies were not always aware of the amount of expected savings that were not achieved among their projects, in part, because contractors generally do not provide this information in measurement and verification reports.

Reported Cost and Energy Savings for Most ESPCs Met or Exceeded Expected Savings

DOE’s Oak Ridge National Laboratory found in its six studies of contractor-reported savings for agencies that awarded ESPCs through DOE’s contract vehicle that the total cost and energy savings reported for these ESPCs exceeded their expected savings. The total cost savings reported in the 6 years of annual measurement and verification reports that Oak Ridge analyzed was about 106 percent of the total guaranteed cost savings for these ESPCs. Moreover, in each of the 6 years, total reported cost savings across all projects were at least 105 percent of total guaranteed savings. Similarly, the Oak Ridge studies found that the total energy savings reported for ESPCs awarded through DOE’s contract vehicle exceeded proposed energy savings.
Specifically, the total energy savings reported in all of the annual measurement and verification reports analyzed over the 6 years was about 102 percent of the total proposed energy savings for these ESPCs. Moreover, in each of the 6 years, total reported energy savings across all projects were at least equal to total proposed savings. Most contractors reported cost savings that exceeded guaranteed savings, but the Oak Ridge studies found that some contractors reported cost savings below guaranteed amounts, also referred to as cost savings shortfalls, in about 6 percent of the reports that Oak Ridge reviewed. The average shortfall in cost savings for the small number of ESPCs with a reported shortfall was 17 percent, meaning reported cost savings were 83 percent of guaranteed amounts for these ESPCs. However, these shortfalls ranged widely, from 0.5 percent to 75 percent of guaranteed cost savings. Appendix V provides additional information from the Oak Ridge studies. Similarly, we found that reported savings for 19 of the 20 projects in the nongeneralizable sample we reviewed met or exceeded their guaranteed cost savings for the year reviewed. The remaining project, at DOE’s National Renewable Energy Laboratory, had a reported cost savings shortfall of about $76,000—about 18 percent of its guaranteed cost savings—in its most recent measurement and verification report. According to the project’s measurement and verification report, the shortfall was primarily due to warmer weather, which reduced the number of days the equipment was used, and to an outage caused by a failed motor. In the measurement and verification report, the contractor identified planned changes to the equipment that are expected to address the performance deficiencies and savings shortfalls.
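The shortfall figures above are straightforward percentages of guaranteed savings. A small sketch of the arithmetic follows; note that the roughly $420,000 guaranteed amount used for the NREL check is back-solved from the report's rounded figures, not a number taken from the contract.

```python
def shortfall_pct(guaranteed, reported):
    """Cost savings shortfall as a percentage of guaranteed savings;
    zero when reported savings meet or exceed the guarantee."""
    return max(guaranteed - reported, 0.0) / guaranteed * 100

# A 17 percent average shortfall means reported savings were 83 percent
# of guaranteed amounts:
assert abs(shortfall_pct(100.0, 83.0) - 17.0) < 1e-9

# NREL example (rough, back-solved figures): a ~$76,000 shortfall against
# roughly $420,000 guaranteed is about 18 percent.
pct = shortfall_pct(420_000, 420_000 - 76_000)  # about 18.1
```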
FEMP tracks cost savings shortfalls monthly for ESPCs that agencies have awarded through DOE’s contract vehicle with its “dashboard report” and has found that contractors are reporting that most ESPC projects are meeting or exceeding their guaranteed savings. For instance, a sample dashboard report from 2014 showed nine projects had reported cost savings shortfalls, ranging from less than 1 percent of guaranteed savings to more than 30 percent of guaranteed savings. The dashboard report includes details on the reasons for shortfalls and actions for FEMP to take to help agencies address shortfalls. According to FEMP officials, the dashboard report, which FEMP developed in 2007, provides FEMP management with a snapshot of key aspects of the ESPC projects and has enabled them to more effectively monitor ESPC projects, which GAO recommended in 2005.

Most Selected Measurement and Verification Reports We Reviewed Overstated Some Cost and Energy Savings

Measurement and verification reports for 14 of the 20 projects in the nongeneralizable sample we reviewed—including projects from each of the seven selected agencies—overstated some cost and energy savings in that they reported savings that were not achieved. Contractors must calculate and report annual savings in accordance with the measurement and verification plans agreed to in their contracts with agencies. These plans include measuring equipment performance. They also include assumptions about factors that are beyond contractors’ control, such as agencies’ use of energy-saving equipment and utility prices, which may change over the life of the contract. If changes in such factors reduce savings, contractors generally are not required to reduce the amount of savings they report or measure the effects of such changes. For example, contractors do not generally reduce the savings they report when an agency alters the agreed upon hours of operation, thus reducing the number of hours that energy-saving equipment is used.
Conversely, if savings increase because of changes in factors beyond contractors’ control, contractors generally do not increase the amount of savings they report, and agencies generally retain any surplus savings and do not increase payments to the contractor. Measurement and verification reports for 14 projects in our sample overstated some cost and energy savings in that they reported savings that were not achieved because of agencies’ actions, including (1) agencies not operating or maintaining equipment as agreed when the ESPC was awarded and (2) agencies’ removal of equipment from or closure of facilities where energy conservation measures had been installed. For projects in our sample, contractors’ reports generally did not quantify or estimate the effects of these factors on savings, although some reports noted that savings were affected in some way. Because of the large number of factors that can result in overstated or understated savings, we did not determine the net effect of all factors on projects’ achieved savings. For example, some energy conservation measures in the projects we reviewed outperformed expectations, which may have offset the lower-than-expected savings of other energy conservation measures in those projects. Table 4 shows the projects we reviewed and the agencies’ actions that affected savings that were reported in the most recent measurement and verification report, as of September 2014. (For further detail on the effects of these factors on savings for these projects, see app. VI.) 
The following are examples from the ESPC projects we reviewed of agency actions that resulted in reported savings that were not achieved:

Agency did not operate or maintain equipment as agreed. The most common factor resulting in overstated savings for the ESPC projects we reviewed was an agency making changes to operating hours and temperature set points on programmable heating, ventilation, and air conditioning (HVAC) equipment, which occurred in 8 of the 20 projects. According to available agency estimates, these changes generally resulted in lower energy and associated cost savings than expected, but contractors did not reflect these effects in reported savings amounts because they were due to agency actions. In other cases, agencies did not fulfill their responsibilities for operating or maintaining equipment. For instance, the contractor for a project at a Justice facility found that steam distribution equipment the contractor installed had been damaged, reducing the savings the equipment achieved. However, the contractor did not reduce reported savings because it stated that the damage resulted from improper operation by Justice staff. In some cases, agencies took actions that reduced savings, such as changing operating hours or temperature set points, to meet changing agency mission needs.

Agency removed or abandoned equipment. Components of energy conservation measures, or entire measures, were removed by the agency during the performance period, but contractors did not reduce reported savings because these changes were due to agency decisions. For instance, the Army closed a section of an installation that had numerous buildings with energy conservation measure equipment installed. As a result, savings were not being generated by this equipment, but the contractor reported the savings that would have been achieved for the year had the equipment continued to operate. In some cases, agencies removed or abandoned equipment to meet changing agency mission needs.
The amount of savings reported but not achieved ranged from negligible to nearly half of an ESPC project’s reported savings for the year, based on information provided by agencies and our analysis of available information from the most recent measurement and verification reports for selected projects. For example, where estimates were available, agency changes to operating hours and temperature set points on programmable HVAC equipment generally resulted in reported-but-unachieved savings that were negligible as a percentage of the total savings reported, according to agency officials. In contrast, the Air Force’s removal of equipment associated with a sewer system upgrade resulted in over $104,000 in annual savings that were reported but not achieved, about 40 percent of the annual savings reported for the project. (For a full list of the projects we reviewed and information on the effects of factors beyond contractors’ control on savings, see app. VI.) Officials from several agencies noted that there are benefits to funding energy conservation projects through ESPCs, as opposed to using up-front appropriations. The officials noted that like ESPC projects, the expected savings for projects funded with up-front appropriations may not be achieved. However, savings that are not achieved are more likely to be identified for ESPC projects because savings must be measured and verified. Unlike changes in agencies’ use of equipment, agencies cannot control changes in utility prices, but changes in utility prices compared with the amounts stipulated in the contracts could affect the savings for ESPC projects. Agencies commonly stipulate annual escalation rates for energy costs based on projected utility prices published by the National Institute of Standards and Technology and developed by DOE’s Energy Information Administration.
DOE has reported that the projected utility prices for ESPCs awarded through its contract vehicle have generally underestimated the actual increase in utility prices, and therefore ESPC projects are generally saving more than expected. Specifically, in 2007, DOE’s Oak Ridge National Laboratory analyzed 22 ESPC projects to calculate savings using actual, rather than projected, utility prices. After adjusting for actual utility prices, savings for 16 of the 22 ESPC projects Oak Ridge examined were greater than the savings contractors reported, while savings for the remaining 6 ESPC projects were lower than reported. Energy markets have changed significantly since 2007, and are likely to change in the future. For example, improvements in horizontal drilling and hydraulic fracturing led to large increases in the production of natural gas from shale formations, which contributed to significant decreases in the price of natural gas. Such changes likely affected the savings that certain ESPC projects achieved, such as those whose savings were based predominantly on reductions in natural gas use. However, it is not clear whether the assumptions that agencies are using for utility prices are reasonable because DOE has not conducted an analysis of ESPC projects awarded under its contract vehicle since its 2007 report. As a result, agencies may not have the information they need to know whether their projects are achieving expected savings, and achieved savings may be significantly different—either higher or lower—than reported savings. There are drawbacks when assumptions about utility prices are consistently higher or lower than actual rates. DOE guidance states that stipulating higher utility rates, which generally results in higher expected savings, will provide better cash-flow for projects. However, the guidance also states that overvaluing savings is a serious concern that can cause budgetary problems for the agency.
This is because contractor payments must come from agency funds used to pay for energy, water, and related expenses. Therefore, contractor payments that exceed achieved energy, water, and related savings will limit the funds agencies can use to cover these expenses. We identified three projects in our nongeneralizable sample for which achieved savings were lower than reported savings because utility prices differed from those stipulated in the contract. Specifically, for one DOE project and one Justice project, natural gas prices were significantly lower than the amounts stipulated in the contracts, which led to achieved savings that were about $147,000 and $477,000 less than the reported cost savings for the year, respectively. These amounts represented about 44 percent of the reported cost savings for the DOE project and about 30 percent of the reported cost savings for the Justice project. Additionally, an Air Force project that involved switching inefficient oil heating units to natural gas units projected rates for natural gas that did not reflect seasonal price increases in winter months. Because actual natural gas prices were substantially higher than projected in the winter, the costs of running the new natural gas units were higher than projected. This resulted in achieved savings that were about $160,000 less than the reported savings for the year, which was about 5 percent of the project’s total reported savings for the year. Utility prices vary from year to year, so it is to be expected that prices will differ from the stipulated values in some years. Without a periodic analysis of utility prices over several years and across projects, agencies may not have the information they need to know whether examples like the three in our sample are typical and indicative of problems with the assumptions or anomalies. 
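In effect, the price-driven differences above come from repricing the same physical energy reduction at actual rather than stipulated utility rates. A minimal sketch follows, with invented quantities and prices chosen only to resemble the proportions of the DOE project example; these are not figures from any contract in the sample.

```python
def repriced_savings(energy_saved_mmbtu, stipulated_price, actual_price):
    """Reported savings value the energy reduction at the stipulated
    price; achieved savings reprice it at the actual utility price."""
    reported = energy_saved_mmbtu * stipulated_price
    achieved = energy_saved_mmbtu * actual_price
    return reported, achieved, reported - achieved

# Hypothetical: 50,000 MMBtu of natural gas saved, stipulated at
# $6.60/MMBtu but actually priced around $3.70/MMBtu.
reported, achieved, overstated = repriced_savings(50_000, 6.60, 3.70)
# about $145,000 overstated, roughly 44 percent of reported savings.
```

The same calculation runs in the other direction when actual prices exceed stipulated prices, as in the Air Force heating example, where winter gas prices above the projection reduced achieved savings below the reported amount.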
Agencies Were Not Always Aware of How Much Expected Savings Were Not Achieved

Agencies were not always aware of the amount of expected savings that were not achieved, in part because contractors generally did not—and were not required to—report this information when savings were not achieved due to agency actions. FEMP guidance states that when reviewing measurement and verification reports, agencies should understand changes in project performance and savings levels from year to year, and what corrective actions should be taken to address deficiencies resulting in savings that are not achieved. In addition, the DOE and Corps contract vehicles provide an outline for contractors to use in writing annual reports, which includes sections detailing performance, operating, and maintenance deficiencies that need to be addressed by the contractor or the agency, and the effect of deficiencies on savings. However, the DOE and Corps contract vehicles do not explicitly require contractors to provide estimates of expected cost and energy savings that have not been achieved due to factors beyond contractors’ control. Most reports we reviewed did not contain information that would allow us to estimate the amount of savings that were not achieved because of agency actions. According to FEMP documentation, most contractors’ measurement and verification reports describe performance issues related to agency actions, without providing information on the magnitude of the effect on cost savings. During the course of our review, FEMP drafted guidance for reporting on cost savings that are affected by factors beyond contractors’ control. Specifically, the guidance includes tables to be added to measurement and verification reports to provide specific information on cost savings that are not achieved due to agency actions and on the net cost savings to agencies from the projects after accounting for the effects of these actions.
However, FEMP had not provided this guidance to agencies or incorporated it into DOE’s contract vehicle as of December 2014. Without revising the reporting requirements in the DOE and Corps contract vehicles to incorporate the updated guidance for future contracts or providing the guidance to agencies, agencies may continue to be unaware of the scale of savings that are not achieved, and may therefore be unable to determine what corrective actions should be taken. In addition, because revised contract requirements would likely not apply to projects that have already been implemented under existing contracts, agencies’ oversight of ongoing projects could be limited unless they work with contractors to determine the best way to obtain such information. DOE and the Corps could make such revisions during the planned process of recompeting the contract vehicles. DOE issued a solicitation for a new contract vehicle in March 2015, and officials said they plan to award the contract vehicle in early 2016. Corps officials said they plan to award the Corps’ new contract vehicle by June 2015.

Agencies’ Oversight and Evaluation of ESPC Projects Is Limited

The seven agencies in our review have conducted limited oversight and evaluation of their ESPC projects. Specifically, none of the agencies fully implemented FEMP guidance regarding observing contractors’ measurement and verification activities or reviewing and certifying contractors’ measurement and verification reports for individual ESPC projects. Moreover, most of the agencies in our review have not systematically evaluated their ESPC portfolios to determine the effects of changing circumstances—such as facility use, utility prices, or interest rates—on project performance because they do not have processes in place to do so. 
Selected Agencies Did Not Fully Implement FEMP Guidance on Project Oversight

Our review of a nongeneralizable sample of 20 ESPC projects across the seven selected agencies found that agencies did not fully implement FEMP’s guidance for observing contractors’ measurement and verification activities or document that the agency had reviewed and certified contractors’ most recent measurement and verification reports. In 2007, FEMP issued guidance that identified practices to assist agencies with overseeing contractors’ measurement and verification activities. The guidance states, among other things, that an agency representative should observe the contractor’s measurement and verification activities, review the contractor’s measurement and verification report, and certify in writing that the report is acceptable to the agency. According to FEMP’s guidance, these activities are designed, in part, to provide the agency assurance that the project is performing as expected and to provide increased confidence that the expected savings are achieved. FEMP has also issued guidance that provides a framework for reviewing postinstallation and annual measurement and verification reports and includes a template that agencies can use to document their review of these reports. These oversight activities are also recommended, and, in some cases, required in agencies’ own guidance. More specifically, five of the seven selected agencies recommend or require that agency representatives observe contractors’ measurement and verification activities, and two agencies require that agency representatives review the measurement and verification report and certify acceptance of the report. 
Agency representatives observed the contractors’ measurement and verification activities for all energy conservation measures for 9 of the 20 projects in our nongeneralizable sample; observed measurement and verification activities for some, but not all, energy conservation measures for 4 projects; and did not observe these activities for any energy conservation measures for 7 projects. Additionally, agency officials had not reviewed the most recent measurement and verification report for 4 of the 20 projects in our sample and did not certify acceptance of the report for 11 projects. According to project officials, review was in process for 3 of the 4 reports that had not been reviewed, and officials were in the process of approving reports for 5 of the 11 projects for which acceptance was not certified at the time of our review. Other audit agencies have also identified problems associated with agency representatives’ observing of contractors’ measurement and verification activities, reviewing reports, or certifying acceptance of the reports. For example, a 2011 Naval Audit Service report found that oversight practices were not sufficiently formalized to ensure that contractors’ measurement and verification reports were reviewed by Navy personnel and made 11 recommendations based on its findings. In 2013, the Naval Audit Service conducted a follow-up audit and found that, among other things, Navy management did not provide sufficient oversight to ensure that Navy personnel fully completed and clearly stated on the standard measurement and verification review template whether Navy personnel observed contractors’ measurement and verification activities. Similar findings regarding insufficient oversight were also included in audit reports from the DOE Inspector General and the Air Force Audit Agency. 
Some project and agency officials told us that agency representatives did not observe some measurement and verification activities or review and approve the contractors’ reports because they were unaware of these duties—or the steps they are supposed to take to perform them—or believed them unnecessary. According to FEMP officials we interviewed in December 2014, FEMP has expanded training related to ESPCs, some of which discusses oversight activities. The officials also stated that there was no specific training course dedicated to performing agency oversight and that there would be benefits to having such a course. In commenting on our report, DOE officials stated that FEMP hosted a webinar in September 2014 that discussed agencies’ responsibilities during the performance period. Additionally, DOE stated in its comments that the webinar included a review of FEMP’s guidance on observing contractors’ measurement and verification activities and reviewing and certifying the measurement and verification reports, among other issues. DOD officials we interviewed in December 2014 suggested having additional training on oversight; however, it is unclear whether they were aware of the webinar. Because DOE provided information on the webinar late in our review, we did not assess the webinar, but we believe, based on the analysis conducted for this review, that issues related to training may have been a factor in agencies’ inconsistent oversight of contractors’ measurement and verification activities that we found in our sample of ESPC projects. According to federal standards for internal control, all personnel need to possess and maintain a level of competence that allows them to accomplish their assigned duties, and management needs to identify appropriate knowledge and skills needed for various jobs and provide needed training. 
Without ensuring that training provides officials with the information needed to understand how to perform their oversight responsibilities, agencies may continue to inconsistently perform these oversight responsibilities. As a result, agencies may not be aware of whether ESPC projects are achieving the expected savings. FEMP officials said they were aware that agency officials are not always observing the contractor’s measurement and verification activities or reviewing and certifying the reports for all projects, but they do not know the extent to which such oversight activities are occurring. The officials said FEMP’s Life of Contract program, established in 2009, was an attempt to ensure that agencies carry out their oversight responsibilities. Under the program, FEMP calls agencies twice a year—once before measurement and verification is supposed to be performed by the contractor and once after measurement and verification has occurred—to ensure that agencies have the assistance they need to perform their oversight responsibilities. However, FEMP officials said they do not know the extent to which agencies have witnessed the contractor’s measurement and verification activities or reviewed and certified the contractors’ measurement and verification reports because they do not monitor whether agencies have carried out these oversight responsibilities. According to the federal standards for internal control (GAO/AIMD-00-21.3.1), internal controls should generally be designed to assure ongoing monitoring of their performance over time, and any identified deficiencies should be communicated and corrected. FEMP officials said they were relying on calls made through the Life of Contract program to ensure that the oversight takes place, but that monitoring whether agencies performed the oversight would be useful in light of ongoing concerns about oversight. 
Because FEMP does not monitor whether agencies are observing contractors’ measurement and verification activities and reviewing and certifying contractors’ measurement and verification reports, FEMP does not know whether its Life of Contract program or its guidance is effective and cannot identify deficiencies, if any, in the program or its guidance that need to be corrected.

Most Agencies Have Not Systematically Evaluated the Effects of Changing Circumstances on the Performance of Their ESPC Projects

Estimating future savings is inherently uncertain, and given the length of ESPCs—those awarded under DOE’s contract vehicle last 17 years on average and can last as long as 25 years—changes are likely to occur in utility prices, agency mission needs, and other factors that affect cost and energy savings. However, most of the seven selected agencies in our review have not systematically evaluated their ESPC portfolios to determine the effects of changing circumstances—such as facility use, utility prices, or interest rates—on project performance, because they do not have processes in place to do so. One agency in our review, the Air Force, evaluated its ESPC portfolio from 2009 through 2011, but has not established a process for agency-wide portfolio evaluations going forward. During this evaluation, the Air Force identified over 50 projects that it determined were not economical due to facility closures, high interest rates, or minimal measurement and verification requirements, among other issues. In addition, during the course of our review, FEMP established a process that would allow it to identify changes in agencies’ use of energy conservation measures and associated facilities and other agency actions that could negatively affect savings for ESPCs awarded under the DOE contract vehicle. This process is intended to help FEMP better advise and oversee agencies implementing ESPCs. 
However, the process does not include comparing expected energy prices to actual prices, or comparing interest rates for ESPC projects to current market rates. If an agency determines that an ESPC project is achieving its expected savings, it could use that information to identify energy conservation measures or other project characteristics that have high potential for savings in future projects. Conversely, if an agency determines that an ESPC project’s achieved savings are less than expected, the agency can use the information to inform decisions about future projects. Officials from some agencies said such reviews could be tied to specific triggers. For example, agencies could conduct reviews after a certain number of years, or in response to specific events, such as changes in utility prices or market interest rates, or appropriations becoming available that could be used for terminations. Officials at some agencies said that staff at project sites are generally aware of performance deficiencies and savings shortfalls of their individual projects. However, most agencies in our review did not have processes in place for agency-wide reviews of ESPCs’ performance. Without systematically reviewing agency-wide ESPC performance, such as by reevaluating baseline assumptions in light of changing energy prices or use of facilities, agency officials cannot make fully-informed decisions about their portfolios of projects. For instance, limited information on ESPC performance could hinder agencies’ ESPC program managers in planning future ESPCs, and it could hinder facility managers in determining how best to utilize facilities and operate and maintain conservation measures. In addition, to the extent that changes beyond contractors’ control cause projects not to achieve their guaranteed savings, agencies’ payments to contractors may be greater than the reductions in agencies’ utility costs, even though FEMP guidance states that agencies must achieve savings that exceed payments to the contractor. 
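An agency-wide review of the kind described above could be as simple as screening each project’s achieved savings against its contractor payments. The sketch below uses hypothetical project names and dollar amounts; it illustrates the screening logic only, not FEMP’s actual process.

```python
# Hypothetical annual figures per project: (name, achieved_savings, contractor_payment).
portfolio = [
    ("Project A", 500_000, 450_000),
    ("Project B", 300_000, 340_000),  # payments exceed achieved savings
    ("Project C", 120_000, 120_000),
]

def flag_underperformers(projects):
    """Return the projects whose annual contractor payment exceeds achieved
    savings, i.e., those failing the expectation that savings exceed payments."""
    return [name for name, achieved, payment in projects if payment > achieved]

print(flag_underperformers(portfolio))  # ['Project B']
```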
Furthermore, because contractor payments must come from agency funds used to pay for energy, water, and related expenses, contractor payments that exceed achieved energy, water, and related savings will limit the funds agencies can use to cover these expenses.

Conclusions

Agencies have used ESPCs in a variety of ways and plan to continue to do so to help meet various energy-related goals. However, some agency officials are hesitant to develop projects to consolidate federal data centers—which consume large amounts of energy—because the law does not specify and OMB has not clarified its position on what qualifies as energy-related savings and the allowable proportion of energy and energy-related cost savings with regard to scoring ESPCs. OMB’s position on these issues is important because it determines whether an agency would need to obligate funding for the entire contract up front in the first year of the contract or annually throughout the life of the contract. Unless OMB clarifies its position on these issues, consistent with federal standards for internal control, agencies may needlessly forego opportunities to reduce energy consumption by developing ESPCs to consolidate data centers. Having access to the information needed to fully understand the cost and energy savings that projects are—or are not—achieving is a key aspect in overseeing ESPCs. Contractors reported some cost and energy savings that were not achieved due to agency actions for 14 of the 20 projects in our sample. Contracts typically do not require contractors to reduce the amount of savings they report in such cases, but FEMP guidance, as well as the DOE and Corps contract vehicles, encourages contractors to identify deficiencies that may lead to savings that are not achieved. 
However, unless DOE and the Corps revise their contract vehicles or provide agencies with updated guidance that requires contractors to provide estimates of cost and energy savings that are not achieved because of agencies’ actions, agencies may not be able to identify the extent to which expected savings are not achieved. In addition, even if DOE and the Corps change the language in the contract vehicles to require contractors to provide estimates of cost and energy savings that are not achieved, the changes would likely not affect the contract requirements for ongoing projects. To obtain this information for ongoing projects, agencies could, for example, work with the contractors for individual projects to determine the best way to obtain this information. Without this information, agencies may not be able to determine what, if any, corrective actions they should take. Further, changes in energy markets in recent years have affected utility prices, but DOE has not updated its analysis of utility prices for projects under its contract vehicle since 2007. Without information on the accuracy of the assumptions about utility rates, agencies may not have the information they need to know whether their projects are achieving expected savings. Agencies have implemented some changes to increase the oversight of ESPC projects, such as establishing or strengthening central offices to help manage ESPC projects. However, for the projects we reviewed, agencies did not always implement practices identified in FEMP guidance for overseeing the contractors’ measurement and verification activities. Specifically, agencies did not consistently observe the contractors’ measurement and verification activities, review the most recent measurement and verification reports, or certify that the reports were acceptable to the agency. 
In some cases, officials did not know they were responsible for this oversight or thought that it was not necessary, in part because they may not have received specific training on this oversight. Without ensuring that training provides officials with the information needed to understand how to perform their oversight responsibilities, agencies may continue to inconsistently perform their oversight responsibilities. As a result, agencies may not be aware of whether ESPC projects are achieving the expected savings. FEMP designed its Life of Contract program to help agency officials carry out oversight called for in ESPC guidance, but FEMP does not monitor whether agency officials are witnessing contractors’ measurement and verification activities or reviewing and certifying the contractors’ measurement and verification report, as called for in guidance. Without such monitoring, FEMP does not have information necessary to identify any deficiencies that need to be corrected in its Life of Contract program or its guidance. Furthermore, most agencies we reviewed have not systematically evaluated the effects of changes to certain circumstances, like facility use, utility prices, or interest rates, on their portfolios of ESPC projects because they do not have processes in place to do so. Estimating future savings is inherently uncertain and, if assumptions about facility use or utility prices are not accurate, then agencies could be paying more for projects than they are saving. There are challenges to and drawbacks of frequent reviews. However, such evaluations could be tied to specific triggers, such as passage of a certain number of years or certain events such as changes in utility prices, market interest rates, or appropriations becoming available that could be used for modifications or terminations. 
If agencies do not systematically review the performance of ESPC projects agency-wide compared with the assumptions developed when the contract was signed, agency officials may be unaware of how changing circumstances have affected the performance of their ESPCs and cannot make fully-informed decisions about how to best strategically manage their ESPC portfolios.

Recommendations for Executive Action

We are making six recommendations to help improve the oversight of agencies’ ESPC projects. To help agencies decide whether to use ESPCs to consolidate federal data centers, we recommend that the Director of OMB document, for the purposes of scoring ESPCs, (1) what qualifies as energy-related savings and (2) the allowable proportion of energy and energy-related cost savings. To help ensure that agencies have sufficient information on ESPC performance to oversee whether future and current contracts are achieving their expected savings, we recommend that the Secretaries of Defense and Energy specify in the scheduled revisions to their ESPC contract vehicles or in guidance to agencies that measurement and verification reports for future projects are to include estimates of cost and energy savings that were not achieved because of agency actions. Additionally, DOE may wish to consider periodically analyzing data on other factors that may affect savings, such as utility prices, to provide information on how savings achieved by ESPCs awarded through its contract vehicle have been affected by changing utility prices since its prior study in 2007. 
To help agencies obtain information on savings that are not achieved under existing contracts, we recommend that the Secretaries of Defense, Energy, and Veterans Affairs; the Attorney General; and the Administrator of the General Services Administration work with contractors to determine the best way to obtain estimates of cost and energy savings that are not achieved because of agency actions in order to include these estimates in future measurement and verification reports for existing contracts, in accordance with DOE guidance, and where economically feasible. To help agencies more consistently perform their oversight responsibilities and oversee contractors’ measurement and verification activities, we recommend that the Secretary of Energy direct FEMP to evaluate existing training and determine whether additional training is needed on observing contractors’ measurement and verification activities and reviewing and certifying measurement and verification reports, and monitor agencies’ oversight of ESPC projects that agencies have awarded using the DOE contract vehicle, including whether agencies witnessed the contractors’ measurement and verification activities and reviewed and certified acceptance of the measurement and verification report. To help ensure that agencies have sufficient information on the effects of changing circumstances on the performance of their ESPC portfolios, we recommend that the Secretaries of Defense, Energy, and Veterans Affairs; the Attorney General; and the Administrator of the General Services Administration establish a process to systematically evaluate their ESPC projects—including baseline assumptions about facilities’ energy use, utility prices, and interest rates—to determine how their ESPC portfolios are performing and the extent to which they are achieving expected savings. 
Agencies could consider conducting such evaluations either after a certain number of years, or in response to events, such as changes in utility prices or market interest rates, or appropriations becoming available that could be used for modifications or terminations.

Agency Comments and Our Evaluation

We provided a draft of this report to the agencies in our review—DOD, DOE, Justice, VA, and GSA. We also provided a draft to OMB. DOD, DOE, VA, and GSA provided written comments, which are reproduced in appendixes VII through X, respectively. DOD, DOE, Justice, GSA, and OMB provided technical comments, which we incorporated, as appropriate. Justice and GSA concurred with our findings and recommendations, and the other agencies provided specific comments on our findings and recommendations, which we discuss in more detail below. OMB did not comment on our first recommendation, which originally called on OMB to clarify certain information about using ESPCs to consolidate federal data centers. However, DOE commented that it has the authority to administer the ESPC program and issue guidance accordingly and that OMB issues guidance on the budget scoring treatment of ESPCs. We agree with these statements and have clarified our recommendation to specify that it pertains to OMB’s scoring of ESPCs. DOD concurred with our second recommendation, related to requiring that measurement and verification reports for future contracts contain estimates of savings that are not achieved due to various factors beyond contractors’ control. DOE partially concurred with the recommendation. DOE agreed that factors such as physical changes to buildings, which were not contemplated prior to the contract, should be verified by annual measurement and verification activities. DOE stated that FEMP is addressing these issues through a revision of its measurement and verification reporting template. 
Further, DOE said that FEMP will investigate using the revised reporting template in future contracts. We modified the recommendation to allow for use of an alternative mechanism, such as the template, to implement the requirement. DOE also stated that methodologies for dealing with risks, such as changes in utility prices, are incorporated in the measurement and verification plan that is part of the contract for each project. DOE stated that further action is not warranted for factors, such as changes in utility prices, that are beyond contractor and agency control because their variability is accounted for at the time of contract formation. According to DOE, attempting to evaluate the impact of such factors on savings would be potentially costly and burdensome to agencies and contractors and would have little benefit. Furthermore, DOE stated that there is evidence, based on a 2007 Oak Ridge National Laboratory study, that ESPC projects have underestimated utility prices and have achieved greater overall savings than contractors reported. We recognize DOE’s concerns and have modified our report and recommendation to focus on estimating savings that were not achieved due to agency actions. Additionally, we have modified the recommendation to include that DOE consider periodically analyzing the impacts of utility prices on ESPC savings, given significant changes in energy markets since Oak Ridge’s 2007 study. DOD and DOE partially concurred, and VA did not concur, with our third recommendation about working with contractors to determine the best way to obtain estimates of savings that are not achieved for existing contracts. DOD and VA suggested changes to the wording of the recommendation, which we have incorporated. In its comments, DOE reiterated that agencies would benefit from verifying factors, such as physical changes to buildings that were not contemplated prior to contract implementation, through annual measurement and verification activities. 
DOE also stated that FEMP is addressing these issues through revision of its measurement and verification reporting template and will investigate the use of the revised reporting template for existing contracts. DOE reiterated its concerns about reporting savings that were not achieved due to factors, such as changes in utility prices, that are beyond contractor and agency control. We recognize this concern, as discussed in our response to the second recommendation above. We have modified our report and revised the wording of the recommendation to (1) focus on estimating savings that were not achieved due to agency actions, (2) more clearly indicate that the estimates are to be obtained for inclusion in future measurement and verification reports for existing contracts, and (3) limit its implementation to instances where it is economically feasible. DOE partially concurred with our fourth recommendation about providing training on certain oversight activities. In December 2014, we interviewed DOE officials, including FEMP program managers, about available training pertaining to agencies’ oversight responsibilities. These officials told us that there was no specific course dedicated to performing agency oversight and that such a course would be beneficial. The need for additional training on oversight activities was also suggested by DOD officials during our review. In commenting on our report, however, DOE stated that, in September 2014, FEMP added to its training courses a webinar that addressed agency responsibilities for oversight during the contract performance period. DOE also stated that FEMP would examine available training and resources; make updates, as appropriate; and investigate how to encourage their use among agencies. We have noted this new information in the body of the report. However, because DOE provided the information on the webinar late in our review, we did not assess the webinar cited by DOE. 
We continue to believe, based on the analysis conducted for this review, that issues related to training may have been a factor in agencies’ inconsistent oversight of contractors’ measurement and verification activities that we found in our sample of ESPC projects. We are encouraged by DOE’s plans and have modified our recommendation to include evaluating its existing training and determining whether additional training on oversight is needed. DOE concurred with our fifth recommendation related to FEMP monitoring of agencies’ oversight of ESPC projects awarded using the DOE contract vehicle. DOE stated that FEMP will examine its Life of the Contract program for an improved means of quantifying agencies’ compliance in observing measurement and verification activities and reviewing and certifying the resulting reports. DOD concurred with our sixth recommendation about establishing a process to systematically evaluate its ESPC portfolio. DOE and VA partially concurred with this recommendation. In its comments, DOE stated that FEMP would review its process that addresses performance issues and its process for engaging with agencies to determine whether to modify or terminate a contract. DOE stated that its review process includes evaluating cost savings that are not achieved as a result of agency actions and evaluating interest rates to assist agencies in determining the potential cost savings available through refinancing. In its comments, VA concurred in principle with the recommendation but stated that the agency would be limited in how it could use information obtained from such evaluations and that the evaluations would not provide significant value relative to the time and money required to conduct them. We have modified our recommendation to be less prescriptive about how the information is to be used. 
We continue to believe, as VA stated in its comments, that such evaluations could make agency officials aware of how changing circumstances have affected ESPC performance. Moreover, our recommendation allows for agencies to consider conducting such evaluations after a certain number of years or in response to events, which should decrease the burden of such evaluations. In its technical comments, DOE stated that it disagreed with our use of the term "actual savings," and that actual savings would be better characterized as "the cost and energy savings that contractors measure and verify in accordance with the plan the agency agreed to when developing and awarding the contract." DOE stated doing so is consistent with the ESPC authority, which authorizes a methodology to determine energy savings using models and assumptions that the federal agency and contractor agree on prior to contract formation. DOE also stated that there are factors that could affect savings that cannot be known and that analyzing only known factors will produce a skewed analysis of "actual" savings. We acknowledge these limitations. However, we found that savings that contractors reported, in accordance with the plan the agency agreed to, sometimes included savings that were not achieved because of agency actions, such as physical changes to buildings. DOE agrees that such factors, which were not contemplated prior to contract formation, should be verified by annual measurement and verification activities. We continue to believe that it is important for agencies to obtain information from contractors on savings that are not achieved because of agency actions and have modified our report to discuss “achieved savings” instead of “actual savings.” In DOD’s technical comments, it provided some missing data for the amount the Army awarded in ESPCs for fiscal years 2011, 2012, and 2014. 
DOD also provided updated guaranteed cost savings data for some Army projects after it submitted comments on the draft report. We updated the report with these data. DOD also requested that we list DOD as the agency in tables 2 and 4 and list the Air Force, Army, and Navy as components, but we did not do so because we did not include other DOD components in the scope of our audit, and we wanted to highlight the details specific to the Air Force, Army, and Navy. We recognize that the Air Force, Army, and Navy are components of DOD, and we acknowledge this in the beginning of the report. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense, Energy, and Veterans Affairs; the Attorney General; the Administrator of the General Services Administration; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XI.

Appendix I: Objectives, Scope, and Methodology

We were asked to review federal use of energy savings performance contracts (ESPC) and process changes since 2005. This report examines the extent to which (1) selected agencies have used ESPCs and plan to use them in the future, (2) selected agencies’ ESPC projects have achieved their expected cost and energy savings, and (3) selected agencies have overseen and evaluated their ESPC projects. To determine which federal agencies to include in our review, we selected agencies with the highest energy usage and greatest facility square footage, based on government-wide data collected by the Federal Energy Management Program (FEMP). 
We chose the following seven agencies based on the above criteria: the Departments of Energy (DOE), Justice, and Veterans Affairs (VA); the General Services Administration (GSA); and the Army, Navy, and Air Force within the Department of Defense (DOD). We refer to these agencies as the seven selected agencies. As of fiscal year 2013, DOD, DOE, Justice, VA, and GSA represented 78 percent of the federal government's total floor space and 80 percent of the government's energy use. Findings based on these agencies cannot be generalized to other agencies. To provide information on all of our objectives, we interviewed knowledgeable agency officials, reviewed relevant agency and contractor reports, and conducted site visits to ESPC projects in Golden, Colorado, and White Oak, Maryland. We selected these sites based on whether they were undertaken by federal agencies within our review; innovativeness, such as use of newer technology; and proximity to locations of GAO staff. Findings from these site visits cannot be generalized to other projects. To determine the extent to which selected agencies used ESPCs, we collected and analyzed available data on ESPCs awarded in fiscal years 1995 through 2014. We found that there is no source of comprehensive data on federal agencies' use of ESPCs, either in DOE, the contracting centers, or the agencies. The seven selected agencies started collecting data comprehensively and electronically at different points in time, and they keep some contract data only in project files at the facilities where the contracts are being implemented. We combined agencies' available data into the most consistent format available, deleted duplicate records, performed basic tests to determine the reliability of the data, reviewed existing information about the data and the systems that produced them, and interviewed agency officials knowledgeable about the data.
We found that selected agencies were missing some data, but we found the data used in this report to be sufficiently reliable for our purposes. We compiled data into the following fields: project title, contractor, contract vehicle, award date, agency, implementation price, total contract price, guaranteed savings, contract term length, and annual energy savings. If agencies did not provide data that were defined in the same way as other agency data, we used the most comparable data available. We used FEMP's data as our primary ESPC data source for all seven selected agencies, and we supplemented them with Air Force, Army, GSA, and Navy data. Because data from Justice and VA on ESPCs were not sufficiently reliable for the purposes of this report, we relied on FEMP's data on these agencies. To determine the extent to which agencies plan to use ESPCs and the challenges they face when using ESPCs to consolidate data centers, we reviewed relevant federal laws, executive orders, the President's Performance Contracting Challenge, the seven selected agencies' fiscal year 2014 Strategic Sustainability Performance Plans, and Office of Management and Budget guidance. To determine the extent to which selected agencies' ESPC projects have achieved their expected cost and energy savings, we reviewed six annual studies by DOE's Oak Ridge National Laboratory that analyzed cost and energy savings reported by contractors in annual measurement and verification reports for ESPCs awarded under DOE's contract vehicle. These ESPCs represent about 70 percent of federal ESPCs awarded since 1995 by total contract value. Oak Ridge National Laboratory's first annual study was issued in 2007 and reflected savings reported by contractors in calendar year 2005; its most recent annual study was issued in 2013 and reflected savings reported by contractors in calendar year 2012. Oak Ridge National Laboratory did not issue an annual study for savings reported in 2006 or 2007.
In addition, the years for which savings were reported were approximated based on the average start and end dates of the reporting periods covered by the annual measurement and verification reports included in Oak Ridge National Laboratory’s analysis. For instance, the reports included in the 2012 analysis had an average start date of January 4, 2012, and an average end date of January 5, 2013, for an approximate reporting period of calendar year 2012. Oak Ridge National Laboratory’s studies included ESPCs for projects that were in their performance period and for which the contractor had produced at least one measurement and verification report in the year before the study. Projects in the planning or construction phases, first year of the performance period, or postperformance period were not reflected in a given year’s study. We reviewed Oak Ridge National Laboratory’s methodology for these studies, interviewed the authors of the studies, and determined the findings of the studies were sufficiently reliable for purposes of our report. We did not analyze trends in reported savings for ESPCs awarded through the Corps’ contract vehicle because the Corps had not centrally tracked or analyzed reported savings for these ESPCs. In addition, to provide illustrative examples of the extent to which selected agencies’ ESPC projects have achieved their expected savings, we reviewed annual measurement and verification reports submitted by contractors and other project documentation for a nongeneralizable sample of 20 ESPC projects, with a total contract value of about $824 million. (See app. VI for a list of projects that we selected.) We selected projects from among the 530 projects listed in DOE, Corps, and agency data on ESPCs awarded by the seven selected agencies in fiscal years 1995 through 2014. We selected projects that reflected a range of award dates, contract values, and other characteristics. 
We selected at least one project at each of the seven selected agencies, and more projects at agencies that had awarded more ESPCs. Our review was generally limited to projects that had completed at least 1 year of the performance period for which an annual measurement and verification report was submitted. The one exception was VA's Veterans Integrated Service Network 22, Greater Los Angeles project, which was in the first year of its performance period at the time of our review and did not yet have an annual measurement and verification report. However, because VA did not award any ESPCs in 2004 through 2011, this was the only VA project available for our sample that was in its performance period. Therefore, in order to include VA in our sample, we reviewed the postinstallation measurement and verification report for the project, which included information on projected savings for the first year of the performance period based on measurement and verification activities conducted after project installation. For all 20 projects in our sample, we reviewed measurement and verification reports and other documentation to identify instances where contractors noted changes in the performance or operation of equipment that could have affected the savings they generated. We also reviewed information in the documents on projected utility rates. We contacted agency officials directly involved with the projects to obtain additional information, such as estimates of the savings that are not achieved due to changes in equipment performance or operation, the reasons for those changes, and actual utility rates for the most recent year. The findings from our review of these projects are not generalizable to other projects. To inform our review of the projects, we reviewed FEMP's measurement and verification guidance, which includes information on procedures and guidelines for quantifying the savings resulting from ESPCs and is intended for agency staff and contractors.
We also reviewed supplemental measurement and verification guidance from the seven selected agencies in our review and interviewed officials from these agencies regarding their processes for measuring and verifying ESPC savings. To determine the extent to which selected agencies have overseen and evaluated ESPC projects, we reviewed and analyzed annual measurement and verification reports submitted by contractors and other project documentation for a nongeneralizable sample of 20 ESPCs at the seven selected agencies to determine the extent to which agencies observed the contractors' measurement and verification activities and reviewed and approved the latest measurement and verification report. We conducted follow-up inquiries with agency officials to obtain any missing data in the project files. We also interviewed FEMP and other agency officials about the results associated with the sample projects. We also reviewed agency audit reports on ESPCs issued since 2005. Furthermore, we interviewed agency officials about internal procedures for evaluating agency ESPC projects and analyzed agency documents related to these evaluations. We conducted this performance audit from March 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Value of Annually Awarded ESPCs by Selected Agencies, in Fiscal Years 1995 through 2014

Appendix III: Federal Agencies' Awarded and Planned Contracts for the President's Performance Contracting Challenge

Table 5 shows the amount agencies have awarded in performance-based contracts, including energy savings performance contracts (ESPC) and utility energy service contracts, and the amounts that agencies plan to award by December 2016. Because planned awards have not yet been made, these data are subject to change, and some contracts might not be awarded.

Appendix IV: Budgetary Treatment of ESPCs

In recent years, members of Congress and industry officials have raised questions about how energy savings performance contracts' (ESPC) costs and savings should be reflected in the federal budget. The full amount of the government's financial commitment under an ESPC is not reflected—"scored"—up front in the budget when the contract is signed. Moreover, federal budget agencies disagree about whether this should be the case. The Office of Management and Budget's (OMB) scoring treatment is based on the contingent nature of the contract—payments are contingent on achieving expected cost savings and, therefore, the government is not fully committed to the entire long-term cost of the ESPC at the time it is signed. Under OMB's scoring treatment, an agency must obligate, at the time the contract is executed, sufficient budgetary resources to cover the agency's contract payments for the fiscal year in which the contract is signed. For each subsequent fiscal year during the contract period, the agency must obligate funds to cover the contract payments the agency is required to make for that year. OMB has not changed its approach to scoring ESPCs since it first issued formal guidance in 1998, and OMB staff said they have no plans to do so.
The Congressional Budget Office (CBO), on the other hand, scores the full cost of ESPCs up front in its cost estimates of legislation authorizing agencies to enter into ESPCs. It views this treatment as consistent with government-wide principles that the budget should reflect the government's full commitment at the time decisions are made. In the case of an ESPC, this means a new obligation would be made at the time the ESPC is signed. CBO's cost estimates of legislation authorizing agencies to enter into ESPCs reflect the annual net effects of such obligations in the current fiscal year and 10 subsequent years. CBO has developed several cost estimates for legislation affecting ESPCs and, in a recent estimate, changed how it reflects the cost savings that may result from ESPCs. (See CBO, S. 1321 Energy Savings Act of 2007 (Washington, D.C.: June 11, 2007); CBO, S. 761 Energy Savings and Industrial Competitiveness Act of 2013 (Washington, D.C.: May 21, 2013); and CBO, H.R. 2689 Energy Savings Through Public-Private Partnerships Act of 2014 (Washington, D.C.: Sept. 24, 2014).) In its 2014 cost estimate for legislation that, among other things, expanded the definition of allowable energy conservation measures under an ESPC, CBO showed an increase in spending resulting from increased ESPC use. CBO estimated that contractual commitments to pay vendors for energy conservation measures implemented pursuant to the legislation would amount to $450 million over 10 years. This treatment is consistent with CBO's previous estimates of ESPC-related legislation. However, unlike previous estimates for such legislation, the 2014 cost estimate also factored in reductions in spending due to anticipated reductions in energy costs. CBO estimated that reductions in federal costs attributable to contracts implemented pursuant to the legislation would total $210 million over 10 years, with additional reductions in subsequent years.
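To make the difference between the two scoring treatments concrete, the following sketch uses purely hypothetical figures (a 10-year ESPC with a $1 million annual contractor payment and $1.2 million in annual energy-cost savings; these numbers are assumptions for illustration, not figures from this report):

```python
# Illustrative comparison of OMB-style and CBO-style scoring for a
# hypothetical ESPC. All dollar figures are assumed, in $ millions.

TERM_YEARS = 10
ANNUAL_PAYMENT = 1.0   # contractor payment each year (assumed)
ANNUAL_SAVINGS = 1.2   # energy-cost reduction each year (assumed)

# OMB-style treatment: obligate only the current fiscal year's payment,
# year by year, because payments are contingent on achieved savings.
omb_obligations = [ANNUAL_PAYMENT for _ in range(TERM_YEARS)]
omb_upfront = omb_obligations[0]

# CBO-style treatment: score the full commitment up front when the
# contract is signed, then reflect annual net effects (payments minus
# anticipated energy savings) over the term.
cbo_upfront = ANNUAL_PAYMENT * TERM_YEARS
cbo_net_effects = [ANNUAL_PAYMENT - ANNUAL_SAVINGS for _ in range(TERM_YEARS)]

print(f"OMB-style up-front obligation: ${omb_upfront:.1f}M")
print(f"CBO-style up-front score:      ${cbo_upfront:.1f}M")
print(f"CBO-style 10-year net effect:  ${sum(cbo_net_effects):.1f}M")
```

Under these assumed numbers, the same contract appears as a $1 million first-year obligation under the OMB approach but a $10 million up-front commitment under the CBO approach, with a net 10-year reduction in federal costs of about $2 million once anticipated savings are counted.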
In addition, CBO issued a report in February 2015 with further information on its new scoring treatment of ESPCs, including how it accounts for reductions in agencies' energy costs.

Appendix V: Oak Ridge National Laboratory Analysis of Reported Savings for ESPCs Awarded through DOE's Contract Vehicle

The following tables provide information on the reported energy and cost savings for energy savings performance contracts (ESPC) awarded through the Department of Energy's (DOE) contract vehicle, based on analysis by DOE's Oak Ridge National Laboratory. Oak Ridge National Laboratory has issued six studies of contractor-reported savings for ESPCs awarded through DOE's contract vehicle. Table 6 shows the reported and guaranteed cost savings for these ESPCs in the 6 years analyzed by Oak Ridge. Table 7 shows the reported and proposed energy savings for ESPCs awarded through DOE's contract vehicle in the 6 years analyzed by Oak Ridge National Laboratory. Table 8 provides information on the extent to which contractors reported cost and energy savings below expected amounts—also referred to as savings shortfalls—for ESPCs awarded through DOE's contract vehicle in the 6 years analyzed by Oak Ridge National Laboratory.

Appendix VI: Effects of Agency Actions on Savings for Selected ESPC Projects

Table 9 shows the energy savings performance contract (ESPC) projects we reviewed, and the agency actions that affected savings and resulted in overstatements of reported savings in the most recent measurement and verification report.
Appendix VII: Comments from the Department of Defense

Appendix VIII: Comments from the Department of Energy

Appendix IX: Comments from the Department of Veterans Affairs

Appendix X: Comments from the General Services Administration

Appendix XI: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the individual named above, Hilary Benedict (Assistant Director), Joshua Becker, John Delicath, Keesha Egebrecht, Cindy Gilbert, Carol Henn, Miles Ingram, John Johnson, Brian Lepore, Armetha Liles, Cynthia Norris, Barbara Timmerman, and Bill Woods made significant contributions to this report.

Constrained budgets and increasing energy efficiency goals have led federal agencies to explore innovative ways to fund energy improvements, including ESPCs. An expected increase in the use of ESPCs has raised questions about agencies' ability to ensure that the government's interests are protected. ESPCs can span up to 25 years and be valued at millions of dollars each. GAO was asked to review federal use of ESPCs since 2005. This report examines the extent to which (1) agencies have used ESPCs and plan to use them; (2) projects have achieved their expected cost and energy savings; and (3) agencies have overseen and evaluated such projects. GAO compiled data on awarded ESPCs; reviewed agency guidance and files for a nongeneralizable sample of 20 ESPC projects that reflected a range of contract award dates, contract values, and other characteristics; and interviewed officials from the seven agencies with the highest energy usage and greatest facility square footage—the Air Force, Army, and Navy within the Department of Defense; the Departments of Energy, Justice, and Veterans Affairs; and the General Services Administration.
The seven selected agencies in GAO's review awarded approximately $12.1 billion in energy savings performance contracts (ESPC) in fiscal years 1995 through 2014 and plan to continue using them to help meet federal energy directives and initiatives. Under ESPCs, private contractors finance the up-front costs of energy improvements. Agencies then repay contractors from the savings, such as those resulting from lower utility bills. The seven agencies GAO reviewed have used more than 500 ESPCs for projects such as installing energy-efficient lighting or power generation systems. Agencies' plans to use ESPCs vary, particularly for data center consolidation projects, which could significantly reduce energy use. Cost and energy savings that contractors reported to agencies for most ESPCs met or exceeded expectations, but some of these savings may be overstated. GAO's review of a nongeneralizable sample of 20 projects found that contractors' reports overstated cost and energy savings for 14 projects. Contractors calculate and report savings annually in accordance with plans agreed to in their contracts with agencies. These plans include assumptions about agencies' use of equipment, which may change over the life of the contract. If changes reduce project savings, such as when an agency does not operate or maintain the equipment as agreed, contractors are not required to reduce the amount of savings they report or measure the changes' effects. GAO evaluated the extent to which changes in agency operations or other factors within agencies' control may have reduced energy savings for a nongeneralizable sample of projects. Estimates agencies provided to GAO of savings that were reported but not achieved ranged from negligible to nearly half of a project's reported annual savings.
For example, one agency removed equipment for a sewer system upgrade, which resulted in over $104,000 in annual savings that were reported but not achieved, or about 40 percent of the project's reported savings. Federal guidance states that when reviewing contractor reports, agencies should understand changes in project performance and savings levels, and what actions should be taken to address deficiencies. However, agencies were not always aware of how much savings were not achieved due to agency actions because contractors were not required to report this information. Without clearer reporting of savings that are not achieved, agencies may be unable to determine what, if any, corrective actions should be taken. The seven agencies in GAO's review have conducted limited oversight and evaluation of their ESPC projects. In GAO's sample of 20 projects, agency representatives did not perform some oversight activities included in guidance because they were unaware of these duties or how to perform them, among other reasons. Without ensuring that training provides officials with the information needed to understand how to perform their oversight responsibilities, agencies may continue to inconsistently perform oversight. Moreover, most of the agencies in GAO's review have not systematically evaluated their ESPC portfolios to determine the effects of changing circumstances—such as facility use—on project performance because they do not have processes to do so. Without such oversight and evaluation, agency officials cannot make fully informed decisions about how best to strategically manage their ESPCs. |
Background

The DRC's size, location, and wealth of natural resources contribute to its importance to U.S. interests in the region. With an area of more than 900,000 square miles, the DRC is roughly the size of the United States east of the Mississippi River. Located in the center of Africa, the DRC borders nine nations (see fig. 1). Its abundant natural resources, which constitute its primary export products, include 34 percent of world cobalt reserves; 10 percent of world copper reserves; 64 percent of world coltan reserves; and significant amounts of wood, oil, coffee, diamonds, gold, cassiterite, and other minerals. In addition, rain forests in the DRC provide 8 percent of world carbon reserves. The DRC has a population of 58 million to 65 million people, including members of more than 200 ethnic groups. The DRC has had a turbulent history. In 1965, fewer than 5 years after the nation achieved its independence from Belgium, a military regime seized control of the DRC and ruled, often brutally, for more than three decades. It was toppled in 1997 by a coalition of internal groups and neighboring countries to the east, including Rwanda and Uganda, after dissident Rwandan groups began operating in the DRC. Subsequent efforts by a new DRC government to secure the withdrawal of Rwandan and Ugandan troops prompted a second war in 1998 that eventually drew the armies of three more African nations into the DRC. According to the International Rescue Committee, this second war resulted in an estimated 3.9 million deaths. Beginning in 1999, a United Nations (UN) peacekeeping force was deployed to the DRC. After a series of peace talks, the other nations withdrew all or most of their troops and an interim government was established. Elections held in 2006 with logistical support provided by UN peacekeepers culminated in the December 6, 2006, inauguration of the DRC's first democratically elected president in more than 40 years.
Partially as a result of this turbulent history, the DRC suffers from a wide range of problems, including acute poverty. The DRC is one of the poorest and least developed countries in the world. It was ranked 167th of 177 nations surveyed by the UN Development Program in terms of life expectancy, education, and standard of living, and its ranking on these measures has declined more than 10 percent over the past decade. The current life expectancy is 43 years, in part because the DRC suffers from high rates of tuberculosis, HIV/AIDS, and malaria. According to USAID, more than 2 of every 10 children born in the DRC die before their fifth birthday (owing in part to chronic malnutrition and low vaccination rates), and the maternal death rate is the world’s highest. Congolese women also suffer from the effects of rampant sexual attacks and other forms of gender-based violence against women, particularly in the eastern regions of the country. A UN expert reported in July 2007 that widespread atrocities against women in one eastern DRC province constituted the worst crisis of sexual violence that the expert had yet encountered. An international group of donor nations recently concluded that the DRC’s educational system is failing and in a state of crisis. Most rural children do not attend school at all, in part because their parents cannot afford to pay school fees. As a result of such problems, the Fund for Peace ranked the DRC second on its “failed states” scale, after Sudan. The DRC’s economic prospects are uncertain. It once derived about 75 percent of its export revenues and 25 percent of its gross domestic product from its natural resources, but wars and turmoil have reduced its economy to dependence on subsistence agriculture and informal activities. The International Monetary Fund (IMF) reported that as of 2001, the DRC’s per capita gross domestic product had contracted to $100 from a preindependence level of $400, in constant dollars. 
Although the DRC’s gross domestic product grew at an average rate of 5.5 percent from 2002 through 2005, growth has recently slowed. Also, the DRC’s prospects are encumbered by an external debt load of around $8 billion. The value of this debt—which represents more than 90 percent of the DRC’s gross domestic product, 300 percent of its exports, and 700 percent of its government’s revenues—is three times greater than the level of debt that the World Bank and the IMF consider sustainable. The DRC has not fully qualified for debt relief under the enhanced Heavily Indebted Poor Country (HIPC) initiative. The DRC receives assistance from an array of donor nations and organizations. During 2004 and 2005, the 10 largest donors to the DRC were the World Bank’s International Development Association, the European Commission, Japan, Belgium, the United Kingdom, the United States, France, Germany, the IMF, and the Netherlands. The World Bank is preparing a country assistance strategy to support the DRC’s 2007-2010 poverty-reduction goals. The United States and 16 other donor nations and organizations are contributing to the World Bank’s effort by preparing a country assistance framework document that assesses the major challenges facing the DRC and identifies major areas for donor focus. Some donor nations and organizations have also begun an effort to coordinate assistance for reforming the DRC’s troubled army, police, and judiciary. According to the Department of State, the United States’ goal for its assistance to the DRC is to strengthen the process of internal reconciliation and democratization to promote a stable, developing, and democratic DRC. 
The Department of State has also reported that the United States is seeking to ensure that the DRC professionalizes its security forces and is at peace; develops democratic institutions; supports private-sector economic growth and achieves macroeconomic stability; meets the basic needs of its people; and, with its international partners, provides relief in humanitarian crises. As described by the Assistant Secretary of State for African Affairs, U.S. policy is to support—but not lead—the efforts of the DRC to address its problems. In October 2006, continued violence and armed conflict in the eastern DRC led the President of the United States to issue an executive order blocking the property of certain persons contributing to the conflict in the DRC. In October 2006, the President reiterated the United States’ commitment to the goal of creating a prosperous Congolese democracy. In October 2007, the President, meeting with the newly elected president of the DRC, again cited the importance of democracy and economic growth in the DRC and noted the need for progress on security and health issues. Section 102 of the DRC Relief, Security, and Democracy Promotion Act of 2006 includes 15 U.S. policy objectives for the DRC. Table 1 presents these objectives in five categories of assistance—emergency humanitarian, social development, economic and natural resources, governance, and security. The National Security Council has established an interagency working group to focus attention on issues affecting the Great Lakes region of central Africa, which encompasses the DRC. The group meets bimonthly and includes officials from DOD, State, USAID, and Treasury. Its mission is to establish a coordinated approach, policies, and actions to address issues (such as security) in the DRC and other countries in the region. 
To ensure that foreign assistance, including assistance provided to the DRC, is used as effectively as possible to meet broad foreign policy objectives, the Secretary of State in 2006 appointed a Director of Foreign Assistance (DFA), who also serves as the Administrator of USAID. The DFA is charged with developing a coordinated U.S. government foreign assistance strategy, including multiyear country-specific assistance strategies and annual country-specific assistance operational plans; creating and directing consolidated policy, planning, budget, and implementation mechanisms and staff functions required to provide umbrella leadership to foreign assistance; and providing guidance to foreign assistance delivered through other agencies and entities of the U.S. government, including the Millennium Challenge Corporation and the Office of the Global AIDS Coordinator.

U.S. Programs and Activities Support the Act's Policy Objectives

U.S. programs and activities provide support to the Act's policy objectives. Most recently, in fiscal years 2006 and 2007, U.S. agencies allocated the largest share of their funds for the DRC to programs that supported the Act's humanitarian and social development goals. Although the U.S. government has not acted on the Act's policy objective that it bilaterally urge nations contributing UN peacekeepers to prosecute abusive peacekeeping troops, it has taken other steps to address this objective.

U.S. Funding for the DRC

Recent U.S. funding for the DRC has focused primarily on the Act's humanitarian and development goals. Seven U.S. agencies allocated about $217.9 million and $181.5 million for aid to the DRC in fiscal years 2006 and 2007, respectively, as shown in table 2. As shown in figure 2, most of these funds were allocated by State and USAID. The agencies allocated about 70 percent of these funds for programs that would support the Act's emergency humanitarian and social development objectives (see fig. 3).
They allocated about 30 percent of the funds for programs and activities that would support the Act's economic, governance, and security objectives.

Humanitarian Assistance

USAID and State have provided humanitarian assistance to help the DRC meet the basic needs of its citizens and vulnerable populations. The following examples illustrate these efforts. USAID has provided emergency food assistance to the DRC, primarily through the UN World Food Program and Food for the Hungry International. USAID-funded emergency food assistance included general distribution of food to internally displaced persons who need food aid; vulnerable groups such as people infected with, and orphans and widows affected by, HIV/AIDS; and victims of sexual abuse by soldiers. USAID emergency assistance also supported road rehabilitation and bridge reconstruction projects; schools; and the socioeconomic reintegration of ex-child soldiers, adult combatants, and their families. In addition, USAID provided emergency supplies, health care, nutrition programs, water and sanitation improvements, food, and agriculture assistance to vulnerable populations in the DRC—including malnourished children, war-affected populations, internally displaced people, and formerly displaced households—primarily through NGOs. Recent program activities have focused on road rehabilitation; primary health care and specialized care services to malnourished children in certain eastern regions; medical care, treatment, and confidential counseling to victims of sexual and gender-based violence; and access to water and sanitation at health facilities. State has provided humanitarian assistance to help repatriate, integrate, and resettle refugees in the DRC. It has also helped fund refugees' food needs and supported mental health assistance and market access programs in areas of high refugee return.
In fiscal year 2007, State supported refugee assistance activities in the DRC, which were implemented primarily by the UN High Commissioner on Refugees, other international organizations, and NGOs. In addition, State contributed to overall Africa assistance programs implemented by the UN High Commissioner on Refugees and the International Committee of the Red Cross, which help support refugees and conflict victims in central Africa.

Social Development Assistance

USAID, HHS, and DOL allocated funds to support the Act's social development and rehabilitation objectives. The following examples illustrate these efforts. USAID has worked through NGOs to improve education, health care, and family planning. It has implemented activities to reduce abandonment of children; provide psychosocial support, medical assistance, and reintegration support to survivors of sexual and gender-based violence in the eastern DRC; train teachers; and increase access to education for vulnerable children. USAID also funds efforts to train medical staff and nurses in the management of primary health care, distribute bed nets to prevent the spread of malaria and polio, provide family planning services, and support voluntary counseling and testing centers for HIV/AIDS. HHS has allocated funds for immunization against, and the surveillance and control of, infectious diseases such as polio, measles, and HIV/AIDS. HHS's Centers for Disease Control and Prevention has also sought to strengthen the capacity of public health personnel, promote infrastructure development, and improve the quality of clinical laboratories through grants and cooperative agreements.
The Centers for Disease Control and Prevention have also (1) provided ongoing technical, programmatic, and funding support through the World Health Organization and the UN Children’s Fund for the DRC immunization program with an emphasis on polio eradication and measles mortality reduction, and (2) assisted the World Health Organization with a recent outbreak of Ebola virus. In addition, HHS’s National Institutes of Health has granted funds to U.S. academic institutions to conduct basic and clinical biomedical research, which involves collaboration with research partners in the DRC. DOL has allocated funds to address children’s involvement in mining and related services, small-scale commerce, child soldiering, and other forms of child labor in the DRC. This effort would build on a recently completed project that assisted a small number of former child soldiers by fostering their withdrawal from militias and discouraging their reenlistment. Economic and Natural Resource Management Assistance The Treasury, USAID, State, and USDA have provided support for the Act’s economic objectives. The following examples illustrate these efforts. The Treasury has worked with the World Bank and the IMF to relieve the DRC of some of its foreign debt. The United States provided the DRC with interim debt relief (primarily through reduced interest payments) in fiscal years 2005 through 2007, following the DRC’s admittance into the HIPC debt relief program. Once the DRC qualifies for the completion of its HIPC debt relief, Treasury plans to pay the budgetary costs of full U.S. bilateral debt relief to the DRC ($1.3 billion) with $44.6 million allocated in fiscal year 2006, about $80 million in previously appropriated funds, and about $178 million in fiscal year 2008 funds. USAID has allocated funds to support sustainable natural resource management, forest protection, and biodiversity in the DRC through the Central African Regional Program for the Environment. 
The program is a 20-year regional initiative that aims to reduce deforestation and loss of biological diversity in the DRC and its eastern neighbors. A component of the U.S.-sponsored Congo Basin Forest Partnership, the program also promotes forest-based livelihoods in the DRC. USAID has also allocated funds to encourage productivity in the agricultural, private, and small enterprise sectors and to support agricultural development. In addition, USAID's Global Development Alliance program works with private companies to promote transparent mining practices and reinvestment in DRC mining communities. State has supported efforts to promote transparency in the natural resource sector by serving as the U.S. representative to the Kimberley Process Certification Scheme, which deals with the rough diamond trade, and to the Extractive Industries Transparency Initiative (EITI). USDA has allocated funds to improve agricultural productivity, increase rural market development, provide credit for agribusiness and rural infrastructure, and increase access to potable water and water for irrigation in the DRC. Governance Assistance USAID and State have allocated funds for programs that support the Act's governance objectives. The following examples illustrate such assistance. USAID has allocated funds to organize itinerant court sessions in relatively inaccessible parts of the DRC. These sessions are intended to bring justice institutions closer to citizens, facilitate greater access to justice for vulnerable people, and provide quality legal assistance to the population. It has also supported an NGO's establishment of democracy resource centers to assist political party leaders, civic activists, elected local and national officials, and government institutions in consolidating good governance and democracy. 
To promote judicial independence, USAID has supported an NGO’s efforts to (1) foster the adoption and implementation of priority improvements to the DRC’s legal framework, including laws on sexual violence and the rights of women, and (2) provide legal assistance activities for victims of sexual and gender-based violence. State allocated funds for more than 30 programs by the National Endowment for Democracy during 2006. Several of these programs were aimed at informing women of their rights, addressing issues of abuse and corruption, and promoting political participation. For example, the endowment used State funds to support the political role of women in one eastern province before and after the elections, to call attention to the continued victimization of women in eastern Congo, and to visit detention centers throughout the DRC to facilitate release of illegally detained men and women. Security Assistance State, USAID, and DOD programs and activities have provided support for most of the Act’s security-related policy objectives. The following examples illustrate these efforts. State has facilitated a multinational forum, the Tripartite Plus Commission, to encourage other nations to play a constructive role in the DRC’s security affairs. The commission provides a forum for the DRC and the nations on its troubled eastern border—Uganda, Rwanda, and Burundi—to discuss regional security issues, including militias operating illegally in the eastern DRC. State has also supported a center where these nations can share intelligence regarding militias. USAID has launched programs to promote the reintegration of some former fighters into Congolese society. The programs are intended to provide the former fighters incentives to remain in civilian society. 
State is refurbishing the DRC's military officer training school and training multiple levels of the military, including brigade- and battalion-staff level officers, on military justice reform, civil-military relations, and other issues of concern. According to State officials, State funds will be used for an initial DOD assessment of the military justice sector to identify needs to be addressed with future funds. State may also use these funds to help train DRC personnel to combat armed fighters in the eastern regions of the DRC. Key State senior-level and program officials informed us that they were unaware of any U.S. efforts to bilaterally urge nations contributing UN peacekeeping troops to prosecute, or to take steps to help those nations prosecute, any of their peacekeeping troops who may commit abuses in the DRC. State officials informed us that the United States has encouraged the UN to take actions to guard against further abuses of DRC citizens by UN peacekeepers. The United States also supports the Global Peace Operations Initiative, a 5-year program to train and, as appropriate, equip at least 75,000 peacekeepers worldwide, with a focus on African nations. Major Challenges in the DRC Impede Efforts to Achieve the Act's Policy Objectives U.S., NGO, and other officials and experts identified several major challenges that impede U.S. efforts to achieve the Act's policy objectives. These challenges include (1) the unstable security situation, (2) weak governance and widespread corruption, (3) mismanagement of natural resources, and (4) lack of basic infrastructure. Because these challenges are interrelated, they negatively impact progress in multiple areas. Unstable Security Situation The DRC's weak and abusive security forces have been unable to quell continuing militia activities in the DRC's eastern regions, where security grew worse during 2007. During 2006 and 2007, reports by several organizations described the security challenge in the DRC. 
According to a report by the International Crisis Group, militias control large portions of the eastern regions of the DRC. The report concludes that the DRC’s security forces are poorly disciplined, ill equipped, and the worst abusers of human rights in the DRC. According to a UN report, the DRC army is responsible for 40 percent of recently reported human rights violations—including rapes, mass killings of civilians, and summary executions—and DRC police and other security forces have killed and tortured civilians with total impunity. The report states that the DRC has generally promoted, rather than investigated and prosecuted, army officers suspected of such abuses. According to a report by Amnesty International, women have been raped in large numbers by government and other armed forces throughout the DRC. According to State, government and other armed forces in the DRC have committed a wide range of human rights abuses, including forcing children into the security forces. The DRC’s unstable security situation has worsened the DRC’s humanitarian and social problems and impeded efforts to address these problems, according to NGO representatives, agency officials, and other sources. The renewed conflict has prompted increased NGO and UN assistance programs, including those aimed at addressing basic needs and psychosocial, legal, and socioeconomic support for victims of sexual and gender-based violence. NGOs have noted that active combatants typically commit crimes of sexual violence against women, with 4,500 sexual violence cases reported in the first 6 months of 2007 alone. The lack of security in the DRC has impeded efforts to address humanitarian needs as well as efforts aimed at promoting social development. U.S. 
agency officials informed us that the conflict has forced them to curtail some emergency assistance programs, and NGOs implementing development and humanitarian assistance activities in the DRC have reported that the lack of security has resulted in attacks on their staff or led them to suspend site visits and cancel and reschedule work. The UN has also stated that although access to displaced populations has improved somewhat in a few areas, in general it remains difficult because of the lack of security. The DRC's unstable security situation has negatively affected the country's economic potential by discouraging investment, which in turn could worsen security through renewed conflict. DRC donors and the IMF agree that improved security in the DRC is necessary to strengthen the economy. Research on the security of property rights confirms this view. World Bank research has also found that a lack of economic growth increases a postconflict nation's likelihood of falling back into conflict. Other researchers have estimated that a democratic nation is roughly 10 times more likely to be overthrown if its economy experiences negative growth 2 years in a row. Weak Governance and Corruption By many accounts, corruption in the DRC is widespread, civil liberties are limited, and the DRC's governance institutions have been severely damaged. State has described corruption in the DRC as "pervasive." In 2007, an international donor study concluded that corruption in the DRC "remains widespread and is taking a heavy toll on public service capacity to deliver key services." Transparency International's 2007 Corruption Perceptions Index identifies the DRC as one of the 10 most corrupt countries in the world. Freedom House in 2007 continued to rate the DRC as "Not Free" and scored it near the bottom of its scales for civil liberties and political freedom. USAID has pointed to limited opportunities for Congolese women to participate in the DRC's governance. 
The World Bank has reported that the DRC’s judicial system is one of the world’s six weakest in terms of enforcing commercial contracts. The State Department has described significant failures in the criminal justice system, as well as “harsh and life-threatening” prison conditions. Historically weak governance and corruption in the DRC have hindered efforts to reform the security sector and hold human rights violators accountable. According to U.S. officials, the lack of a DRC government office with clear authority on security issues has impeded efforts to promote security sector reform. The officials informed us that the absence of clear authority over security sector issues has hindered efforts to determine both the DRC government’s priorities for security sector reform and the most effective role for international donors in promoting security sector reform. According to the country assistance framework, the DRC has not established a clear and functioning payroll system for its armed forces. One NGO reported that much of the $8 million the DRC paid in 2005 for its soldiers’ salaries was “diverted” and the remainder rarely reached soldiers in a timely manner. NGOs and media sources have reported that soldiers have committed human rights abuses as a result. The country assistance framework states that the DRC Ministry of Defense controls only a small number of budget items and is not accountable for the defense budget’s use. According to one NGO report, efforts to reform the command structure, size, and control of the security forces have been frustrated by political manipulation, pervasive corruption, and a failure to hold officials accountable. A U.S. State Department official told us that efforts to reform the DRC’s police may be impeded by lack of support from DRC institutions that suffer from corruption and have no interest in reform. 
According to NGO representatives, the lack of an effective judiciary impedes efforts to hold human rights violators accountable for their actions, which in turn promotes a “culture of impunity.” One NGO reported that a severe shortage of DRC judicial personnel—particularly in the eastern portion of the nation—prevents courts from hearing cases, public prosecutor offices from conducting investigations, and prisons from operating. Another NGO stated that the judiciary is subject to corruption and manipulation by both official and unofficial actors. As a result, courts have recently failed to hold individuals accountable for human rights violations, including a massacre of more than 70 people and the reported rape by police of 37 women and girls in a village in a western province. A representative of one NGO told us that local government officials had tortured his organization’s grantees in an effort to stop their democracy and governance training programs. Governance problems have also hindered efforts to implement economic reforms required for debt relief and promote economic growth. According to Treasury officials and IMF documents, the government’s lack of commitment to meet certain requirements has jeopardized the DRC’s ability to receive some interim debt relief, qualify for full debt relief, and improve the country’s overall economic prospects. To receive the estimated $6.3 billion in debt relief for which it may qualify under HIPC, the DRC must meet various conditions that include satisfactory macroeconomic performance under an IMF-supported program, improved public sector management, and implementation of structural reforms. Although donors had expected the DRC to qualify for full debt relief in 2006, the government instead has fallen back into arrears and has failed to implement needed policies; as a result, IMF has suspended its program assistance to the DRC. 
Although IMF has determined that the DRC cannot sustain its current debt levels, donors do not expect the DRC to qualify for full debt relief until mid-2008. The judiciary's ineffective enforcement of commercial contracts in the DRC has likely discouraged private sector investment and hence economic growth. The enforcement of contracts, typically a responsibility of the judicial system, is important to establishing incentives for economic activity. According to the World Bank, the DRC's enforcement of contracts is among the weakest in the world, such that a company might need to expend roughly 150 percent of a typical contract's value to ensure enforcement through court proceedings. Mismanagement of Natural Resources International donors, NGOs, and the DRC government have focused on improving natural resource management through increased transparency and international instruments of enforcement. However, owing in part to governance and capacity challenges, these efforts have made only limited progress. Until recently, the DRC had not met EITI implementation requirements or followed EITI guidelines, according to U.S. officials. These officials informed us that the DRC had excluded civil society representatives and replaced EITI's Permanent Secretary with a new representative. As a result, EITI was reviewing the DRC's signatory status, and key donors were withdrawing technical assistance. U.S. officials informed us that in September 2007, EITI granted the DRC additional time to meet threshold criteria to continue participation in the initiative and that the DRC subsequently made progress in meeting those criteria. The Kimberley Process Certification Scheme has criticized the DRC for weak internal controls, limited customs capacity, and an inability to track diamonds extracted by large numbers of self-employed miners. State and USAID officials reported that the DRC's certification process is failing to capture as much as 50 percent of diamonds mined in the DRC. U.S. 
and NGO officials have expressed concern that the DRC is not enforcing a moratorium on forestry concessions instituted in May 2002. An NGO reported that after the moratorium took effect, the DRC signed 107 of 156 forestry contracts now under review and that a third of the contracts involve areas identified for conservation. Although the DRC government is reviewing mining and forestry concessions signed during the war, U.S. officials told us that the DRC is conducting the mining contract review with limited transparency. U.S. and NGO officials expressed concern that the DRC has not published its terms of reference or all of the contracts or clearly defined the role of representatives of civil society. Mismanagement of the DRC’s natural resources has fueled continued conflict and corruption, according to U.S. officials, the UN, international donors, and NGOs. The DRC’s abundant natural resources are serving as an incentive for conflict between neighboring countries’ militias and armed domestic factions. These groups seek to control specific mining sites and illegal trade networks to finance operations and buy arms. For example, the UN has reported that profits from Congolese coltan have financed a large part of Rwanda’s military budget and that gold smuggled into Uganda continues to finance militias. Such reports are consistent with World Bank research, which commonly finds that countries with valuable natural resources have more conflict than countries without such resources. In addition to fueling conflict, the DRC’s abundance of natural resources continues to foster corruption as government officials use bribery to share in resource profits. For example, NGOs have reported that through extensive bribery and corruption in the mining sector, exports of large quantities of DRC copper and cobalt have been undeclared and that 60 to 80 percent of the DRC’s 2005-2006 customs revenue was embezzled. 
USAID has also reported on the postconflict proliferation of natural resource contracts based on joint ventures between the DRC government and private partners, who are receiving a disproportionate share of profits. Lack of Basic Infrastructure The DRC lacks many key elements of basic infrastructure, such as buildings, equipment, and transportation. The transportation sector is “broken,” according to one recent international assessment. The DRC has fewer than 1,740 miles of paved roads to connect 58 million to 65 million people distributed over more than 900,000 square miles. According to a recent study prepared by 17 donor nations, no roads link 9 of the DRC’s 10 provincial capitals to the national capital, and no roads link the DRC’s northern and southern regions or its eastern and western regions. About 90 percent of DRC airfields lack paved runways. More air crashes have occurred in the DRC since 1945 than in any other African state. International observers have reported that the DRC’s educational and penal infrastructures are dilapidated. An international group of donor nations recently identified major deficiencies in electrification, communications, supplies of clean water, and credit. The DRC’s lack of basic infrastructure has hindered progress in humanitarian, developmental, and governance programs. U.S. officials told us that the lack of an adequate in-country transportation system increases the time required to get supplies to those in need. Such problems limit access to vulnerable groups and cause delays in providing humanitarian assistance such as food aid. NGO and U.S. officials implementing emergency food aid and nonemergency food security programs in the DRC have reported that excessive delays in delivering assistance are common because of the lack of roads linking the DRC’s regions and several of its major cities and ports. 
One NGO has reported that it must compete with commercial contracts for the limited space on the DRC’s troubled rail system and that its commodities and equipment are often given lower priority. U.S. and NGO officials also pointed out that the lack of roads in the DRC has increased the expense or difficulty associated with their programs, in part because they must increase their reliance on air transport. The dearth of accessible roads in the DRC has prompted USAID’s emergency assistance programs to use some of their funds for road rehabilitation programs, to ensure safe and reliable routes to reach those in need. The lack of roads and other adequate infrastructure also affects private companies trying to import and export goods. According to the World Bank’s Cost of Doing Business survey, DRC’s average export costs in 2006, at more than $3,100 per container, were the world’s third highest. State officials told us that the DRC government needs “everything from bricks to paper.” A USAID official told us that any effort to establish new provincial legislatures would be hindered by the lack of buildings to house the legislators or “even chairs for them to sit in.” An NGO has reported that the DRC judicial system is being undermined by destroyed infrastructure, equipment shortages, lack of reference texts, and the dearth of roads, which makes some areas inaccessible to legal authorities. A 2007 UN report noted that at least 429 detainees (including some convicted of human rights violations) had escaped from dilapidated prisons over the last 6 months of 2006. International donor nations and organizations concluded in their assistance framework document that the lack of infrastructure has made economic development almost impossible in many areas and may stifle the potential for economic growth and private sector activity in most DRC provinces. U.S. Government Has Not Assessed Its Overall Progress toward Achieving the Act’s Policy Objectives The U.S. 
government has not established a process to assess agencies’ overall progress toward achieving the Act’s policy objectives in the DRC. Although State and the National Security Council (NSC) have developed mechanisms to coordinate some of the agencies’ activities in the DRC, neither mechanism systematically assesses overall progress. Some of the key agencies involved in the DRC monitor their respective programs. For example, USAID’s Office of Foreign Disaster Assistance (OFDA) has two program officers in the DRC who regularly visit project sites and publish quarterly reports on OFDA activities. Their partner organizations, or implementers, also provide reports and updates on their projects. Similarly, USAID officials told us that USAID’s Central African Regional Program for the Environment program has an extensive and standard set of monitoring and evaluation tools built into all cooperative agreements with implementers, such as use of satellite imagery and remote sensing to analyze change in forest cover, one of the principal “high-level” indicators. DOL informed us that it relies on midterm and final evaluations, financial and programmatic audits, and biannual technical and financial reports to monitor its programs. USDA officials informed us that USDA requires its partner organizations to conduct assessments of their projects. However, the executive branch has not established a governmentwide process to use such information for an assessment of overall U.S. progress in the DRC. Although State and NSC have developed mechanisms aimed at providing some degree of coordination among executive branch agencies active in the DRC, neither mechanism currently provides for the systematic assessment of overall U.S. progress toward its goals. A new State-USAID joint planning process is not yet fully operational and does not include other agencies active in the DRC. 
State’s newly established Director of Foreign Assistance (DFA), who also serves as USAID’s administrator, has been charged with ensuring that foreign assistance is being used as effectively as possible to meet broad U.S. foreign policy objectives. Under DFA’s guidance, State and USAID have begun to develop a joint planning and budgeting process that, according to State officials, may eventually assess all U.S. foreign assistance. However, the Office of the DFA has yet to complete its plan for operations in the DRC during fiscal year 2007, which ended on September 30, 2007. As of February 2007, the draft country operations plan was incomplete and consisted of a listing of individual programs that did not include a systematic assessment of the collective impact of State and USAID efforts during fiscal year 2007. In addition, the DFA draft plan did not address activities funded by other agencies, including DOD, HHS, and the Treasury, although the DFA joint planning process may eventually include other agencies to some degree. Under the DFA process, the U.S. mission to the DRC has prepared a mission strategic plan. However, the mission strategic plan pertains only to currently projected fiscal year 2009 activities and is therefore subject to change before submission of the fiscal year 2009 budget request in 2008. The NSC interagency group, intended to help coordinate certain agencies’ activities, does not systematically assess these activities and does not include several relevant agencies. The NSC group assembles agencies such as the Departments of State, Defense, and the Treasury to discuss policies and approaches to addressing the challenges in the DRC. For example, according to State and NSC officials, these discussions often focus on the eastern DRC’s unstable security. However, NSC and State officials told us that the NSC group has not developed systematic tools for assessing the impact of all U.S. agencies’ efforts to achieve the objectives of the Act. 
Also, the NSC effort has not included key agencies involved in the DRC, such as DOL, HHS, or USDA, in its discussions of policies and approaches. Conclusions The DRC appears to be at a crucial point in its turbulent history. After decades of dictatorship and devastating wars with its neighbors and internal groups, it has inaugurated its first democratically elected government in more than 40 years. However, U.S. and NGO officials agree that several interrelated challenges continue to pose major impediments to achievement of the Act’s policy objectives in the DRC. Failure to make near-term progress in addressing the DRC’s unstable security, rampant corruption, economic mismanagement, and lack of needed infrastructure could result in further war and instability in a region of importance to U.S. national interests. U.S. agencies have initiated a wide range of efforts to help the DRC establish and maintain peace and stability. However, because the U.S. government has not established a process to systematically assess its overall progress in the DRC, it cannot be fully assured that it has allocated these resources in the most effective manner. For example, a systematic process for assessing governmentwide progress would allow the United States to determine whether its allocations, which currently emphasize humanitarian aid, should focus more on the DRC’s unstable security, which worsens the country’s other problems and impedes the delivery of U.S. assistance. Similarly, such a process could give the U.S. government greater assurance that it has identified additional bilateral or multilateral measures that may be needed to achieve the Act’s objectives. Given the DRC’s significance to the stability of Africa, the scope, complexity, and interrelated nature of its urgent problems warrant an effective governmentwide response. Recommendation for Executive Action To provide a basis for informed decisions regarding U.S. 
allocations for assistance in the DRC as well as any needed bilateral or multilateral actions, we recommend that the Secretary of State, through the Director of Foreign Assistance, work with the heads of the other U.S. agencies implementing programs in the DRC to develop a plan for systematically assessing the U.S. government’s overall progress toward achieving the Act’s objectives. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Secretaries of Agriculture, Defense, Labor, Health and Human Services, State, and the Treasury. We also requested comments from the Administrator of USAID and from the Director of Congressional Relations of OPIC. We received written comments from State, which are reprinted in appendix III. In its comments, State endorsed our recommendation. It further noted that it believed that the recommendation would be met as DFA’s joint planning and budgeting processes are extended to include all U.S. agencies engaged in the DRC. State also provided several other comments, for example, expressing concerns regarding the span of years addressed in our report and what it characterized as a lack of historical context. We addressed State’s comments as appropriate in this report. We also received technical comments on our draft report from DOD, HHS, DOL, the Treasury, and USAID. We have incorporated these comments into our report, as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of State, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to identify (1) U.S. programs in the Democratic Republic of the Congo (DRC), (2) major impediments hindering accomplishment of the policy objectives of the DRC Relief, Security, and Democracy Promotion Act of 2006 (the Act), and (3) U.S. government efforts to assess progress toward accomplishing the Act’s policy objectives. Because the Act directed us to review actions taken by U.S. agencies to achieve its objectives, we focused on the fiscal year in which the Act was enacted, and also considered the fiscal year before its enactment to provide context. To identify U.S. programs in the DRC, we interviewed officials from key U.S. agencies who have programs in the DRC. These agencies included the Departments of Agriculture (USDA), Defense (DOD), Labor (DOL), Health and Human Services (HHS), State, and the Treasury (Treasury); the Overseas Private Investment Corporation (OPIC); and the U.S. Agency for International Development (USAID). We also reviewed program documents, budget data, and policy statements. We identified the amount of funding each agency had allocated for its DRC programs in fiscal years 2006 and 2007 by analyzing official agency submissions to Congress and related documents. We did not attempt to determine the extent to which each agency had obligated or expended the funds it had allocated. To determine the major impediments hindering accomplishment of the Act’s policy objectives, we reviewed a range of documents, plans, and assessments provided to us by U.S. agencies with programs in the DRC. We also interviewed officials from each of these agencies. We reviewed economic literature and recent reports, program assessments, studies, and papers written by nongovernmental organizations, international organizations, multilateral banks, and think tanks. 
To discuss key challenges to addressing the Act's policy objectives, we conducted a round-table session with a nonprobability sample of 11 nongovernmental organizations that offer a broad range of experience and expertise implementing programs and projects in the DRC. For example, we included panelists from organizations that focus on humanitarian, democracy, and economic development issues. Additionally, we interviewed representatives from other organizations with experience in the DRC. Based on all of these responses, we compared and contrasted the challenges identified to determine common themes, and we considered all of these views as we finalized our analysis. We defined challenges as factors that are internal to the DRC—that is, they represent impediments to the United States and other donors that are providing assistance intended to improve the situation in that country. To examine U.S. efforts to assess progress toward accomplishing the Act's policy objectives, we identified U.S. interagency assessments, reports, and plans pertaining to programs in the DRC. We also interviewed U.S. agency officials and a cognizant official of the National Security Council. Although we did not travel to the DRC, we conducted several telephone interviews with U.S. embassy and USAID mission staff located in the DRC. We conducted our work from May 2007 to December 2007 in accordance with generally accepted government auditing standards.
Appendix II: Examples of Programs by Policy Objective
The table in this appendix lists each policy objective category along with examples of agencies active in the category and examples of programs pertaining to the category (in millions of dollars). Examples of programs include:
- Provision of the following to vulnerable populations: psychosocial support, medical assistance, and reintegration support to survivors of sexual and gender-based violence
- Immunization against infectious diseases (e.g., polio and measles)
- Efforts to address children's involvement in mining and related services, small-scale commerce, child soldiering, and other forms of child labor
- Establishment of democracy resource centers
- Support for promoting political participation
- Support for the Central African Regional
- Refurbishment of a military officer training school
- Training brigade- and battalion-level staff officers on military justice reform, civil-military relations, and other issues
Appendix III: Comments from the Department of State
Appendix IV: GAO Contact and Staff Acknowledgments
Staff Acknowledgments: In addition to the contact named above, Zina Merritt (Assistant Director), Pierre Toureille, Kristy Kennedy, Kendall Schaefer, Martin De Alteriis, Michael Hoffman, Reid Lowe, and Farhanaz Kermalli made key contributions to this report. Grace Lui provided technical assistance.
In enacting the Democratic Republic of the Congo (DRC) Relief, Security, and Democracy Promotion Act of 2006 (the Act), Congress established 15 U.S. policy objectives to address the DRC's humanitarian, development, economic and natural resource, governance, and security issues and mandated that GAO review actions taken by U.S. agencies to achieve these objectives. In this report, GAO identifies (1) U.S. programs and activities that support the Act's objectives, (2) major challenges hindering the accomplishment of the objectives, and (3) U.S. efforts to assess progress toward the objectives.
GAO obtained and analyzed agencies' program documents and met with officials of agencies and nongovernmental organizations (NGO) active in the DRC. U.S. programs and activities support the Act's policy objectives. In fiscal years 2006 and 2007, respectively, the Departments of Agriculture, Defense, Health and Human Services, State, and the Treasury and the U.S. Agency for International Development (USAID) allocated $217.9 million and $181.5 million for the DRC. About 70 percent of the funds were allocated for programs that support the Act's humanitarian and social development objectives, while the remainder was allocated for programs and activities that support the Act's economic, governance, and security objectives. Although U.S. agencies have not acted on the Act's objective of bilaterally urging nations contributing peacekeeping troops to prosecute abusive peacekeepers, U.S. multilateral actions address this issue. The DRC's unstable security situation, weak governance, mismanagement of its vast natural resources, and lack of infrastructure are major interrelated challenges that impede efforts to achieve the Act's policy objectives. For example, the unstable security situation in the eastern DRC has worsened humanitarian and social problems and forced U.S. and NGO staff to curtail some efforts. The lack of roads has prevented deliveries of needed aid. DRC's weak governance structures prevent the country from meeting the requirements for debt relief and discourage private-sector investment, thus hindering economic growth. The U.S. government has not established a process for systematically assessing its progress toward achieving the Act's policy objectives. While some U.S. agencies collect information about their respective activities in the DRC, no mechanism exists for assessing overall progress. State and USAID are developing a joint planning and budgeting process that may eventually assess all U.S. foreign assistance. 
However, State's Director of Foreign Assistance has yet to complete the fiscal year 2007 DRC operations plan, which does not include a comprehensive assessment of the collective impact of State and USAID programs and does not address activities funded by other agencies. While a National Security Council-sponsored interagency group discusses DRC policies and helps coordinate some activities, it does not include several relevant agencies and, according to key officials, does not systematically assess progress in the DRC.
Background Although every economic downturn reflects varied economic circumstances at the national level and among states, evaluations of prior federal fiscal assistance strategies have identified considerations to guide policymakers in designing future legislative responses to national economic downturns. These include timing assistance so that aid begins to flow as the economy is contracting, and targeting assistance based on the magnitude of the economic downturn's effects on individual states. To be effective at stabilizing state funding of Medicaid programs, assistance should be provided, or at least authorized, close to the beginning of a downturn. Additionally, to be efficient, funds should be targeted to states commensurate with their level of need due to the downturn. States that experience greater stress in their Medicaid programs—due to increased enrollment or decreased revenues—should receive a larger share of aid than states less severely affected. In addition, economists at the Federal Reserve Bank of Chicago have described the ideal countercyclical assistance program as one having an automatically activated, prearranged triggering mechanism that could remove some of the political considerations from the program's design and eliminate delays inherent in the legislative process. Past economic downturns hampered states' ability to fund their Medicaid programs, as Medicaid enrollment increased and tax revenues declined. Medicaid enrollment increases during and after national economic downturns, when the number of people with incomes low enough to qualify for coverage rises as state economies weaken. States also experience declines in tax revenues as a result of declines in wages, salaries, and consumer spending. Most, if not all, states are affected by national recessions, although the timing and duration of state economic downturns can vary.
States have different industry mixes and resources, which can affect when they enter an economic downturn and when they recover. Therefore, some states may enter an economic downturn in the early stages of a national recession, while other states enter long after the recession has set in. The timing and depth of state economic downturns affect states' ability to maintain their Medicaid programs. Under the regular Federal Medical Assistance Percentage (FMAP), the federal government pays a larger portion of Medicaid expenditures in states with low per capita income (PCI) relative to the national average, and a smaller portion for states with higher PCIs. To provide states with fiscal relief and to help states meet additional Medicaid needs during the 2001 and 2007 economic downturns, Congress passed legislation temporarily increasing the FMAP for states. The FMAP is a readily available mechanism for providing temporary assistance to states because assistance can be distributed quickly, with states obtaining funds through Medicaid's existing payment system. The Recovery Act was the second time Congress temporarily increased the FMAP to provide fiscal relief to states during a national economic downturn. Following the 2001 recession, the Jobs and Growth Tax Relief Reconciliation Act of 2003 (Reconciliation Act) provided states $10 billion in assistance through an increased FMAP from April 2003 through June 2004. In August 2010, Congress extended the increased FMAP provided by the Recovery Act by providing states with an additional $16.1 billion in assistance from January through June 2011. In March 2011, we reported that overall the Recovery Act funds were better timed for state Medicaid funding needs than were funds provided following the 2001 recession; assistance began during the recession while nearly all states were experiencing Medicaid enrollment increases and revenue decreases.
Nonetheless, 19 states implemented or proposed eligibility restrictions in response to the economic downturn prior to the passage of the Recovery Act, some 15 months after the beginning of the national recession as identified by the National Bureau of Economic Research (NBER). In order to be eligible for Recovery Act funds, these states had to reverse the restrictions on eligibility to come into compliance with the Recovery Act's maintenance of eligibility requirements. The Recovery Act formula incorporated three components for calculating the increased FMAP: a hold-harmless provision that held each state's regular FMAP at no less than its highest rate since fiscal year 2008; an across-the-board increase of 6.2 percentage points; and an additional increase in each state's FMAP based on a qualifying increase in the state's rate of unemployment. In our March 2011 report, we also found that the unemployment-based component of the Recovery Act formula targeted assistance to states with greater Medicaid enrollment growth as indicated by increases in their unemployment rate. However, the across-the-board increase and the hold-harmless components did not distinguish among states that experienced varying degrees of increased unemployment. (See app. III for more information about the Recovery Act's across-the-board and hold-harmless provisions.) Furthermore, none of the Recovery Act provisions distinguished among states with varying degrees of reduced revenue in the allocation of assistance. The prototype formula we outlined in our March 2011 report provides a more targeted approach than the increased FMAP formula used in the Recovery Act. It also improves the responsiveness of assistance provided, in part by having an automatic trigger to begin and end assistance. In particular, we discussed mechanisms that (1) improve the timing for starting assistance, (2) better target assistance to state needs, and (3) taper off the end of assistance.
More responsive federal assistance can aid states in addressing increased Medicaid enrollment resulting from a national economic downturn, as well as addressing reductions in states’ revenues. Improving targeting is essential to meet the goals of providing assistance to states in an efficient and effective manner. Prototype FMAP Formula Offers Automatic, Timely, and Targeted Assistance In response to the mandate, our prototype formula offers an automatic, timely, and targeted option for providing states temporary assistance during national economic downturns. Once a threshold number of states show a sustained decrease in their employment-to-population (EPOP) ratio, temporary increases to states’ FMAPs would be triggered automatically and targeted to each state’s Medicaid program. Our prototype formula uses two targeting components: (1) unemployment, and (2) wages and salaries. The amount of Medicaid assistance states receive would be commensurate with their increases in unemployment and decreases in wages and salaries. The prototype formula would end the temporary assistance once fewer than the threshold number of states shows a decline in their EPOP ratio over 2 consecutive months. Prototype Formula Automatically Triggers Targeted Assistance to States Our prototype formula uses the monthly EPOP ratio and a threshold number of states to identify the start of a national economic downturn, and to automatically trigger the start of the increased FMAP assistance. (See fig. 1.) The automatic trigger would use readily available economic data to begin assistance rather than rely on legislative action at the time of a future national economic downturn. Once the increased FMAP is triggered, targeted state assistance would be calculated based on (1) increases in state unemployment, as a proxy for increased Medicaid enrollment; and (2) reductions in total wages and salaries, as a proxy for decreased revenues for maintaining state Medicaid programs. 
The increased FMAP would end when the EPOP ratio indicated that fewer than the threshold number of states were in an economic downturn. Under our prototype formula, states would have received increased Medicaid funding in response to each of the past three national recessions. For example, in response to the most recent national recession, states would have received up to 15 quarters of assistance that would have begun in January 2008 and extended through September 2011. The total federal cost of this assistance for state Medicaid needs would have been approximately $36 billion. Table 1 provides information on when states would have received assistance in response to the past three national recessions under our prototype formula and the total cost of this assistance for state Medicaid needs. Based on our simulations, the EPOP ratio is a reliable, timely indicator of the start of national economic downturns. At the start of each of the last five national recessions, as defined by NBER, we found a sharp increase in the number of states with declining EPOP ratios. A timely automatic trigger for temporary FMAP assistance would be based on a threshold number of states that show a decrease in their monthly EPOP ratio. We found the beginning of each of these recessions approximately coincided with 26 states having declining EPOP ratios. (See fig. 2.) Therefore, our prototype formula identifies the start of a national economic downturn when 26 states show a decrease in their 3-month average EPOP ratio, compared to the same 3-month period in the previous year, over 2 consecutive months. For the most recent national recession, our prototype would have identified the beginning of the downturn in October 2007 (i.e., the fourth quarter of 2007) and triggered temporary assistance to states beginning in January 2008 (the first quarter of 2008). (See fig. 3.)
The increased FMAP payments to states would begin in the first calendar quarter following the quarter in which the EPOP measure indicated the start of an economic downturn. The period of temporary assistance would end after the 26-state threshold is no longer met. In the case of our prototype, the end would have been triggered in April 2011 and would make the third quarter of 2011 the last quarter of the assistance period. The last quarter of payment would be the first calendar quarter following the quarter in which the EPOP threshold was no longer met for 2 consecutive months. The threshold trigger may need to be adjusted periodically, however, because the EPOP ratio is projected to slowly drift downward over the next 30 years due to the aging of the population. If our EPOP measure had been used to determine the beginning and end of assistance during the most recent national recession, temporary increased FMAP assistance would have been provided for a total of 15 quarters, from the first quarter of 2008 (January-March) through the third quarter of 2011 (July-September). This compares to an 11-quarter assistance period under the Recovery Act (9 quarters) and extension (2 quarters), from the fourth quarter of 2008 (October-December) through the second quarter of 2011 (April-June). Because our prototype formula relies on readily available labor market data to automatically trigger the beginning and end of the increased FMAP, assistance would have begun earlier and extended longer than that provided by the Recovery Act during the most recent national recession. As with the Recovery Act, relying on NBER to obtain sufficient data to identify the beginning of a national recession and then providing fiscal assistance through the legislative process results in a time lag before aid is available; the Recovery Act was passed in February 2009, nearly 5 quarters after the national recession began in December 2007. 
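The trigger mechanism described above can be sketched in a few lines of code. This is an illustrative sketch, not GAO's implementation; the data layout (per-state lists of 3-month average EPOP ratios indexed by month) and the function names are our assumptions:

```python
def states_with_declining_epop(epop_3mo_avg, month):
    """Count states whose 3-month average EPOP ratio for the given month
    is below the average for the same month one year earlier."""
    return sum(
        1 for ratios in epop_3mo_avg.values()
        if ratios[month] < ratios[month - 12]
    )

def assistance_triggered(epop_3mo_avg, month, threshold=26):
    """Assistance is triggered on when at least `threshold` states show a
    year-over-year EPOP decline for 2 consecutive months; once on, it is
    triggered off when fewer than `threshold` states do."""
    return all(
        states_with_declining_epop(epop_3mo_avg, m) >= threshold
        for m in (month - 1, month)
    )
```

Following the timing rule in the text, increased FMAP payments would then begin in the first calendar quarter after the quarter in which the trigger condition is first met.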
Prototype Formula Targets Assistance Based on Increased Enrollment and Losses in Revenue States’ efforts to fund Medicaid during an economic downturn face two main challenges: financing increased enrollment and replacing lost revenue. To assist states in addressing both challenges, our prototype formula includes two components for targeting funding: one for a state’s increase in unemployment as a proxy for increased Medicaid enrollment, and a second for a state’s decrease in total wages and salaries as a proxy for the loss of revenue. The total assistance for a state would be the sum of the employment- and wage-based components. Our prototype formula provides states with a reduction in their financial contribution for Medicaid proportional to their increase in unemployment during the national economic downturn. This component is based on data showing a 1 percentage point increase in a state’s unemployment rate produces approximately a 1 percent increase in state Medicaid spending due to increased enrollment. The unemployment rate change used to calculate assistance for a given quarter is the unemployment rate for that quarter compared to the lowest unemployment rate in the prior 8 calendar quarters. As shown in the formula below, the unemployment-based FMAP increase (FMAP increaseU) for a given quarter is the product of the state share of Medicaid (100-FMAP) and the change in the unemployment rate (UR). FMAP increaseU = (100-FMAP) * ΔUR For example, under our prototype formula, a 10 percentage point increase in the unemployment rate would result in a 10 percent decrease in the state share of Medicaid. If a state had a 60 percent FMAP, and a 40 percent state share, the state share would fall by 4 percentage points (40 percent multiplied by 10 percent) to 36 percent, and commensurately its FMAP would rise to 64 percent. 
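The unemployment-based component and its worked example can be checked with a short calculation. This is a sketch of the published formula; the function and variable names are ours:

```python
def unemployment_based_increase(fmap, delta_ur_points):
    """Unemployment-based FMAP increase in percentage points: the state
    share (100 - FMAP) falls by 1 percent for each 1-point rise in the
    unemployment rate."""
    state_share = 100.0 - fmap
    return state_share * (delta_ur_points / 100.0)

# Worked example from the text: a state with a 60 percent FMAP (40 percent
# state share) and a 10 percentage point unemployment increase.
increase = unemployment_based_increase(fmap=60.0, delta_ur_points=10.0)
new_fmap = 60.0 + increase  # state share falls from 40 to 36; FMAP rises to 64
```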
Figure 4 illustrates the targeting of FMAP increases the formula would have provided to states based on their increases in unemployment during the fourth quarter of 2009 (October-December). It indicates a strong proportional relationship between the FMAP increases and increases in unemployment. During the fourth quarter of 2009, FMAP increases due to changes in unemployment ranged from a low of 0.52 percentage points in North Dakota, which had a 1.4 point increase in the unemployment rate, to a high of 5.03 percentage points in Nevada, which experienced a 10.1 percentage point rise in unemployment. (See table 5 in app. IV for the state-by-state data on which this simulation is based, and the state- by-state results of the simulation.) Our prototype formula provides states with a separate reduction in their financial contribution for Medicaid that is proportional to their decrease in wages and salaries during the economic downturn. This component is based on data showing that a 1 percent decrease in total state wages and salaries corresponds to approximately a 1 percent decrease in state tax revenues. The total state wage and salary level used to calculate assistance for a given quarter is the total wage and salary level for that quarter compared to the highest wage and salary level in the prior eight quarters, expressed as a percent change. As shown in the formula below, the wage-based FMAP increase (FMAP increaseW) for a given quarter is the product of the state share of Medicaid (100-FMAP) and the percent change in total state wages and salaries (%∆W). FMAP increaseW = (100-FMAP) * %ΔW For example, under our prototype formula, a 20 percent decline in state wages and salaries would result in a 20 percent decrease in the state share of Medicaid. If a state had a 60 percent FMAP, and therefore a 40 percent state share, the state share would fall by 8 percentage points (40 percent multiplied by 20 percent) to 32 percent, and its FMAP would rise to 68 percent. 
Figure 5 illustrates the FMAP increases our prototype formula would have provided to states based on their decreases in wages and salaries during the fourth quarter of 2009 (October-December). It indicates a strong proportional relationship between the FMAP increases and decreases in wages and salaries, as the formula was designed to do. During the fourth quarter of 2009, FMAP increases due to declines in state wages and salaries would have ranged from a high of 8.64 percentage points in Nevada, which experienced a 17.3 percent decline in wages and salaries, to a low of 0.0 in three states—Alaska, the District of Columbia, and North Dakota— which experienced no decline in wages and salaries during the period. (See table 5 in app. IV for the state-by-state data on which this simulation is based, and the state-by-state results of the simulation.) The total FMAP increase a state would receive for a given quarter would be the sum of the FMAP increases for the unemployment-based and wage-based components. FMAP increaseTotal = FMAP increaseU + FMAP increaseW For example, during the fourth quarter of 2009, Nevada would have received the largest total FMAP increase of 13.68 percentage points, combining its 5.03 percentage point unemployment-based increase and 8.64 percentage point wage-based increase; its FMAP would have increased from 50.16 to 63.84. North Dakota would have received the smallest total increase of 0.52 percentage points, combining its 0.52 percentage point increase for unemployment and 0.00 percentage point increase for wage declines; its FMAP would have increased from 63.01 to 63.53. As shown in figure 6, the national average increased FMAP for both the unemployment-based and wage-based components combined is less than 1 percentage point during the first quarter of the assistance period. It rises to 5.6 percentage points in the third quarter of 2009 (July- September) and begins to fall beginning in the fourth quarter of 2009 (October-December).
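Summing the two components follows the same pattern. The sketch below combines them for a hypothetical state with a 60 percent FMAP, a 10-point unemployment rise, and a 20 percent wage decline (the round-number examples used earlier in the text); it is illustrative only. Note that the state-by-state figures quoted in the text appear to be computed from unrounded inputs, so rounded components may not sum exactly to the rounded total:

```python
def total_fmap_increase(fmap, delta_ur_points, pct_wage_decline):
    """Total FMAP increase: the sum of the unemployment-based and
    wage-based components, each proportional to the state share
    (100 - FMAP)."""
    state_share = 100.0 - fmap
    increase_u = state_share * (delta_ur_points / 100.0)   # enrollment proxy
    increase_w = state_share * (pct_wage_decline / 100.0)  # revenue proxy
    return increase_u + increase_w

# 40 percent state share: 4.0 points (unemployment) + 8.0 points (wages)
total = total_fmap_increase(fmap=60.0, delta_ur_points=10.0, pct_wage_decline=20.0)
```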
State Medicaid needs resulting from declining revenues exceed state Medicaid needs due to increased Medicaid enrollment through most quarters of the economic downturn. Consequently, the wage-based FMAP increase exceeds the unemployment-based increase through most quarters of the assistance period. Tables 6 and 7 in appendix IV present the results of a simulation of the total temporary increased FMAP provided by our prototype formula by quarter, by state, in response to the most recent national economic downturn. For a given quarter, a state could receive an unemployment-based increase, a wage-based increase, both, or neither, depending on its need. Therefore, not every state would receive an unemployment-based or a wage-based increase in every quarter. As shown in figure 7, for the first quarter of 2008, 40 states would have received assistance based on an increase in unemployment, and 14 states would have received assistance based on a decline in wages. However, the majority of states would have received increases for both components during most quarters of the assistance period. Under our prototype formula, temporary FMAP assistance to states would be triggered off when fewer than 26 states show a decline in their monthly EPOP ratio over 2 consecutive months. Once the program is triggered off, there are a number of ways in which the decrease in FMAP could be introduced to ease states’ transition back to the regular FMAP. Under our prototype formula, states would have a more gradual transition back to their regular FMAP once temporary assistance ended than under the Recovery Act or the Reconciliation Act. First, our prototype formula does not include an across-the-board or hold-harmless provision, as provided by the Recovery Act and the Reconciliation Act. Therefore, states would not be faced with an abrupt loss of a large amount of assistance under our prototype formula. 
Second, because of the way that our formula generates the unemployment-based and wage-based increases, the increased FMAP generally declines toward the end of the assistance period. As a result, for most states, the drop in FMAP once temporary assistance ended would be modest compared to the Recovery Act. For example, under our prototype, had the first quarter of 2011 been the last quarter of assistance during the most recent economic downturn, the average drop in FMAP would have been 0.54 percentage points, ranging from 0.00 in seven states to 4.16 in Nevada. Only seven states would have faced a drop in FMAP of greater than 1 percentage point. In contrast, under the Recovery Act and extension, the average drop in FMAP after the end of assistance in the second quarter of 2011 was 6.2 percentage points, ranging from 10.8 in Hawaii to 4.4 in Kentucky; thus, all states experienced a drop in FMAP of over 4 percentage points. Conclusions Since the program's inception, states' efforts to finance Medicaid have been at odds with the countercyclical nature of its design and operation, particularly during economic downturns. At such times, states typically experience increased Medicaid enrollment while at the same time their own revenues are declining. During the two most recent recessions, Congress acted to provide states with a temporary increase in federal funds through an increased FMAP. However, these efforts to provide states with increased FMAP assistance during national recessions were not as responsive to state Medicaid needs as they could have been. Legislative action at the time of a recession has not been as timely as an automatic response to changing economic conditions would be. Such a mechanism would reduce the time between the start of the economic downturn and the beginning of assistance by, in part, eliminating the lag between recognition of the economic downturn and congressional action to authorize assistance.
By providing this predictability to states, an automatic trigger would facilitate budget planning and provide states with greater fiscal stability. Similarly, targeting assistance based on each state's level of need ensures that federal assistance is aligned with the magnitude of the economic downturn's effects on individual states. The prototype formula we present offers an option for providing automatic, timely, and targeted assistance to states during a national economic downturn. As called for in the mandate, our prototype formula improves the starting and ending of assistance, accounts for variations in state economic conditions, and responds to state Medicaid needs by providing a baseline for full funding of state Medicaid needs during a downturn. However, the level of funding and other design elements, such as the choice of thresholds for starting, ending, and targeting assistance, are variables that policymakers could adjust depending on circumstances such as competing budget demands, macroeconomic conditions, and other state fiscal needs beyond Medicaid. Matter for Congressional Consideration To ensure that federal funding efficiently and effectively responds to the countercyclical nature of the Medicaid program, Congress could consider enacting an FMAP formula that is targeted for variable state Medicaid needs and provides automatic, timely, and temporary increased FMAP assistance in response to national economic downturns. Agency Comments and Our Evaluation We provided a draft of this report for review to the Department of Health and Human Services (HHS). HHS, on behalf of the Centers for Medicare & Medicaid Services (CMS) and the Office of the Assistant Secretary for Planning and Evaluation (ASPE), provided written comments on the draft, which are reprinted in appendix V.
CMS officials stated that they agreed with the analysis and goals of the report, and they emphasized the importance of aligning changes to the FMAP formula as closely as possible to individual state circumstances in order to avoid unintended consequences for beneficiaries and to provide budget planning stability for states. However, they stated that the complexity of the prototype formula we present may be difficult for states and the federal government to implement, and the quarter-to-quarter variability of the increased FMAP could present challenges for state and federal budget planning. We note that the level of complexity and variability in our prototype is comparable to the increased FMAP provided under the Recovery Act. While there are inherent trade-offs between precision and complexity in any model, a certain level of complexity is necessary to achieve the goal of better targeting assistance in order to align the level of funding with individual state circumstances. CMS officials also stated that they do not recommend using the formula to provide general fund relief to states through the Medicaid program. Our prototype formula is designed for state Medicaid needs only. However, since Congress has used the increased FMAP for general fund relief in the past, most recently under the Recovery Act, we present several modifications that would permit increased funding to states for general fund relief. In their comments, ASPE officials noted that the prototype formula is designed for a national recession that impacts many states, but it does not deal with more regional economic declines or slower recoveries that are geographically concentrated. We agree that our prototype formula was not designed for economic downturns limited to an individual state or group of states, and we note this limitation in the report. 
As the mandate specifically called for recommendations to address the needs of states during periods of national economic downturn, such an analysis is beyond the scope of this report. ASPE officials also commented that having an automatic trigger for the temporary increased FMAP was a good idea. Further, ASPE agreed that our use of the employment-to-population (EPOP) ratio is a better measure for beginning assistance than the unemployment rate because the EPOP ratio reflects both unemployed and discouraged workers. ASPE officials also suggested that the EPOP ratio may be a better measure than unemployment for assessing state need and targeting assistance. We relied in part on unemployment for targeting, however, because there is an established relationship between changes in the unemployment rate and Medicaid enrollment. ASPE officials also noted that the relationship between unemployment and Medicaid enrollment may change in the future following full implementation of the Patient Protection and Affordable Care Act of 2010 (PPACA). We agree that the implementation of this act will have implications for the relationship between unemployment and Medicaid enrollment, particularly since an estimated 18 million additional individuals could qualify for Medicaid under PPACA. Such an analysis was beyond the scope of our report, but formula elements could be adjusted to take these changes and effects into account. In their comments, ASPE officials also offered several considerations to guide policy choices regarding appropriate thresholds for timing and targeting of funds. We would note that in the development of our prototype formula and our illustrative simulations, we made a number of choices about specific elements of the formula design, including thresholds for timing and targeting. Alternatives to those design choices, such as those we present in appendix II of our report, involve balancing the advantages of one choice against another.
We are sending copies of this report to the Secretary of HHS, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact Carolyn L. Yocom at (202) 512-7114 or [email protected] or Thomas J. McCool at (202) 512-2642 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VI.

Appendix I: Scope and Methodology

To evaluate the performance of the prototype formula, we examined the timing and targeting of the formula, and we simulated its application over the period of the most recent national economic downturn. We also considered the effects of using alternative design choices, and these are discussed in appendix II. Our prototype formula relies on changes in the employment-to-population (EPOP) ratio to identify the start of a national economic downturn, and to provide a trigger for a targeted temporary increase in states' Federal Medical Assistance Percentage (FMAP). We define the EPOP ratio as the ratio of the number of jobs to the working-age population aged 16 and older. Employment data are represented by the number of jobs by state, and come from the Bureau of Labor Statistics' Current Employment Statistics. To simulate the use of the EPOP ratio to identify the start of a national economic downturn and trigger temporary FMAP assistance, we calculated a 3-month moving average of the EPOP ratio from March 1977 through May 2011 for each state and the District of Columbia. This time period covered the last five national recessions as defined by the National Bureau of Economic Research (NBER).
To calculate the decline in EPOP, we subtracted the EPOP 3-month moving average for a given month from the average for the same month in the preceding year. For example, for each state, the EPOP ratio’s 3-month moving average for May 2011 was subtracted from the average for May 2010. We identified a threshold number of states with declining EPOP ratios that was consistent with the start of each of the last five NBER recessions. Our prototype formula is designed to provide assistance during periods of national economic downturn; it is not designed for economic downturns limited to an individual state or region. To simulate the targeting of temporary increased FMAP assistance to states, we calculated increased FMAPs under our prototype formula during the most recent national economic downturn. (The period of targeted increases in FMAPs is defined by the EPOP “trigger.”) As outlined in our March 2011 report, we used increases in states’ unemployment rates and declines in states’ wages and salaries as indicators of states’ increased funding needs for Medicaid. The increased needs result from (1) increased Medicaid enrollments as people are affected by the economic downturn and become eligible for Medicaid; and (2) states’ revenue losses, which affect their ability to fund their share of Medicaid. To avoid taking into account states’ choices regarding Medicaid policies and procedures, we used increases in state unemployment rates as a proxy to indicate Medicaid enrollment growth attributable to the economic downturn—our unemployment component. Similarly, to avoid taking into account state policy choices (e.g., statutory tax rates), we used decreases in wages and salaries as a proxy to indicate revenue losses attributable to effects of the economic downturn—our wage and salary component. 
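The moving-average comparison and state-count test described above can be sketched in a few lines. This is an illustrative sketch only: the data, series lengths, and function names are hypothetical, not GAO's actual computation; only the logic (a 3-month moving average compared year over year, with a threshold count of states) and the 26-state threshold used later in the prototype come from the report.

```python
# Illustrative sketch of the EPOP trigger test; all data and names are
# hypothetical. The 26-state threshold is the one used in the prototype.
THRESHOLD = 26

def three_month_avg(series, i):
    """3-month moving average of a monthly EPOP series ending at month i."""
    return sum(series[i - 2:i + 1]) / 3

def epop_decline(series, i):
    """Year-over-year decline: the 3-month average for the same month in
    the preceding year minus the 3-month average for the current month,
    so a positive value indicates a declining EPOP ratio."""
    return three_month_avg(series, i - 12) - three_month_avg(series, i)

def trigger_on(all_state_series, i):
    """True when at least THRESHOLD states show a declining EPOP ratio."""
    declining = sum(1 for s in all_state_series if epop_decline(s, i) > 0)
    return declining >= THRESHOLD
```

In practice each series would hold a state's monthly EPOP ratios, and the test would be run each month as new data arrive.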
Our calculation of targeted assistance to every state is done on a quarterly basis, and the total increase in FMAP is the sum of amounts calculated separately under the unemployment component, and the wage and salary component. The overall method involves first selecting for each state a “base quarter” from which increases in unemployment and decreases in wages and salaries are calculated. While individual states may experience many factors or trends that contribute to increases or decreases in their unemployment rate or wages and salaries, such as changes over time in the mix of industries in a state, we determined that accounting for such factors for each state would make the prototype formula too complex. Thus, the formula is based on the assumption that all increases in unemployment or decreases in wages during the period of assistance are attributed to the effects of the economic downturn. For the unemployment component, when the program is triggered on, a base quarter is identified for each state by looking back over 8 quarters from the current quarter and selecting the quarter with the lowest unemployment rate. For the first 8 quarters of the program, the amount of assistance for each quarter is calculated based on the difference between the unemployment rate in this low base quarter and the rate in the current quarter. After the first 8 quarters of assistance, the state’s increase in unemployment is calculated using the unemployment rate in the current quarter minus the lowest unemployment rate in the previous 8 quarters. As shown in figure 8, the start of the look-back period remains fixed for the first 8 quarters of assistance; thereafter, the look-back period is limited to the prior 8 quarters. For example, in the fourth quarter of 2009, the look-back period extends for 15 quarters; however, beginning in the first quarter of 2010, the look-back period is limited to the prior 8 quarters. 
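The look-back rule for the unemployment component can be expressed as a minimal sketch, assuming quarterly rates in a list indexed so that `start` is the quarter in which assistance is triggered on; the function names and data are ours, not the report's.

```python
def base_rate(rates, start, current):
    """Lowest unemployment rate in the look-back window.

    For the first 8 quarters of assistance the window's start stays
    fixed 8 quarters before the trigger quarter (index `start`);
    thereafter the window is limited to the 8 quarters preceding the
    current quarter.
    """
    if current - start < 8:
        window = rates[start - 8:current]    # fixed start, growing window
    else:
        window = rates[current - 8:current]  # rolling 8-quarter window
    return min(window)

def unemployment_increase(rates, start, current):
    """Increase over the base quarter used to compute assistance."""
    return rates[current] - base_rate(rates, start, current)
```

This mirrors the example in the text: once more than 8 quarters of assistance have elapsed, the base quarter can shift forward as earlier, lower rates drop out of the window.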
Under the unemployment component of the prototype, the increase in unemployment provides states with a reduction in their financial contribution for Medicaid that is proportional to their increase in unemployment over the base quarter. A 1.0 percentage point increase in a state’s unemployment rate corresponds to an increase in Medicaid enrollment, which produces a 1 percent increase in state Medicaid spending. As shown in the formula below, the unemployment-based FMAP increase (FMAP increaseU) for a given quarter is the product of the state share of Medicaid (100-FMAP) and the change in the unemployment rate (UR). FMAP increaseU = (100-FMAP) * ΔUR For example, under our prototype formula, a 10 percentage point increase in the unemployment rate would result in a 10 percent decrease in the state share of Medicaid. If a state had a 60 percent FMAP, and a 40 percent state share, the state share would fall by 4 percentage points (40 percent multiplied by 10 percent) to 36 percent, and commensurately its FMAP would rise to 64 percent. For the prototype formula’s wages and salaries component, the base quarter is again found for each state by looking back over eight quarters when the temporary assistance begins. The base quarter selected is the quarter with the peak value in wages and salaries. For the first eight quarters of the program, the amount of assistance for each quarter is calculated based on the difference between the total state wages and salaries in the peak quarter and the total state wages and salaries in the current quarter. After the first eight quarters of assistance, the state’s decrease in wages and salaries is calculated using the peak total wages and salaries in the previous eight quarters minus the wages and salaries in the current quarter. As with the unemployment component, the start of the look-back period remains fixed for the first eight quarters of assistance; thereafter, the look-back period is limited to the prior eight quarters. 
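The report's worked example can be checked directly. In this sketch (the function name is ours), the change in the unemployment rate enters the formula as a fraction, so a 10 percentage point rise is 0.10:

```python
def fmap_increase_unemployment(fmap, delta_ur_points):
    """FMAP increase_U = (100 - FMAP) * dUR, with dUR the change in
    the unemployment rate expressed as a fraction of 100."""
    state_share = 100 - fmap
    return state_share * (delta_ur_points / 100)

# Report example: 60 percent FMAP (40-point state share) and a
# 10 percentage point rise in unemployment.
increase = fmap_increase_unemployment(60, 10)  # 4.0 percentage points
new_fmap = 60 + increase                       # FMAP rises to 64.0
```

As the example shows, the same percentage-point rise in unemployment yields a larger FMAP increase for states with lower regular FMAPs, because their state share is larger.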
Our prototype formula provides states with a separate reduction in their financial contribution for Medicaid that is proportional to their decrease in wages and salaries during the economic downturn. This component is based on evidence showing a 1 percent decrease in total state wages and salaries corresponds to a 1 percent decrease in state tax revenues. For the purposes of our formula we assume a reduction in state tax revenues corresponds to an equal percent reduction in the funds available for funding the state share of Medicaid. The total state wage and salary amount used to calculate assistance for a given quarter is the total wage and salary amount for that quarter compared to the highest wage and salary level in the prior eight quarters, expressed as a percent change. As shown in the formula below, the wage-based FMAP increase (FMAP increaseW) for a given quarter is the product of the state share of Medicaid (100-FMAP) and the percent change in total state wages and salaries (%ΔW). FMAP increaseW = (100-FMAP) * %ΔW For example, under our formula, a 20 percent decline in state wages and salaries would result in a 20 percent decrease in the state share of Medicaid. If a state had a 60 percent FMAP, and therefore a 40 percent state share, the state share would fall by 8 percentage points (40 percent multiplied by 20 percent) to 32 percent, and its FMAP would rise to 68 percent. To simulate the use of the prototype formula to end temporary FMAP assistance, we examined the EPOP ratio to determine when program assistance would stop. If the number of states having declining ratios falls below a threshold level (26) for 2 consecutive months, the FMAP assistance would end in the following quarter. While the EPOP test employed to initiate temporary FMAP assistance is also used to end it, the ending of assistance is not comparable to the NBER dates of the end of a recession. 
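The wage and salary component follows the same pattern, and, as noted earlier, the total FMAP increase is the sum of the two components. A sketch using the report's worked example (the function name and the combined figure are illustrative):

```python
def fmap_increase_wages(fmap, pct_decline_wages):
    """FMAP increase_W = (100 - FMAP) * %dW, with the percent decline
    in total state wages and salaries expressed as a fraction of 100."""
    return (100 - fmap) * (pct_decline_wages / 100)

# Report example: 60 percent FMAP (40-point state share) and a
# 20 percent decline in wages and salaries.
wage_component = fmap_increase_wages(60, 20)  # 8.0 points; FMAP -> 68.0

# The total increase is the sum of both components; for instance,
# combined with a hypothetical 4.0-point unemployment component:
total_increase = 4.0 + wage_component         # 12.0 percentage points
```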
In the five past national recessions we examined, the EPOP test would end assistance after the NBER-designated economic recovery began. This is appropriate because, as indicated in our March 2011 report, state Medicaid needs persist into the early stages of recovery. In the case of the automatic trigger to start and stop the temporary assistance to states, increased FMAP payments to states would begin in the first calendar quarter following the quarter in which the EPOP measure indicated the start of an economic downturn, and the last quarter of payment would be the first calendar quarter following the quarter in which the EPOP threshold was no longer met. In the case of the targeted assistance to states, for the purposes of our simulation we did not build in any delay between the availability of data and the calculation of increased FMAPs. For example, although unemployment and wage and salary data both become available with a delay of up to several months, we used state unemployment and wage data for the first quarter of 2008 to calculate the increased FMAP for the same quarter in our simulation. Given the lag time associated with the availability of unemployment and wage data, it may be advisable to calculate preliminary FMAPs based on the most recent quarterly data available, and then to calculate final FMAPs for that quarter when final data for the quarter are published.

Appendix II: Some Design Elements in the Prototype Formula and Alternatives

In the development of our prototype formula for increased Federal Medical Assistance Percentages (FMAP) during national economic downturns, we made a number of choices about specific elements of the formula design. Alternatives to those design choices involve balancing the advantages of one choice against those of another. For example, a design alternative that would lessen the quarter-to-quarter variation in FMAPs would also lessen the formula's responsiveness to quarterly changes in economic conditions.
Thus, there is a trade-off between establishing an increased FMAP that has relatively greater stability and predictability and FMAPs that are reflective of states' current economic conditions. Table 2 presents a selection of formula design features contained in our prototype formula and presents some alternatives. With each alternative, there is a discussion of key considerations involved in making that choice. Table 3 describes two additional adjustments that were not included in our prototype formula, but could be applied.

Appendix III: The Recovery Act's Across-the-board and Hold-harmless Provisions Were Not Targeted for States' Medicaid Needs

The across-the-board and hold-harmless provisions of the American Recovery and Reinvestment Act of 2009 (Recovery Act) did not provide a needs-based method for targeting Medicaid assistance to states during an economic downturn. Because these provisions did not distinguish among states that experienced varying degrees of increased unemployment or decreased wages and salaries during an economic downturn, they are not included in our prototype formula.

The Recovery Act's Across-the-board Provision Did Not Reflect the Variation among States in the Effect of the Economic Downturn

The largest share of total assistance to states under the Recovery Act was the 6.2 percentage point across-the-board FMAP increase that each state received. However, because states are not equally affected by national economic downturns, an equal FMAP increase does not address variable state Medicaid funding needs. Furthermore, as we discussed in our March report, equal percentage point changes in FMAPs do not result in equal percent reductions in state contributions for Medicaid. States with higher regular FMAPs received a disproportionately large reduction in their state contribution for Medicaid under the across-the-board provision.
For example, during the fourth quarter of assistance under the Recovery Act, Nevada—a low FMAP state—had a 7.1 percentage point increase in unemployment and a 27.9 percent decline in the state share of Medicaid, while Arkansas—a high FMAP state—experienced a 2.1 percentage point increase in unemployment, but a similar 28.1 percent decline in state share.

The Recovery Act's Hold-harmless Provision Was Not Targeted for Variable State Needs

Under the hold-harmless provision of the Recovery Act, each state's regular FMAP rate was held to the state's highest rate since fiscal year 2008, regardless of changes in the state's per capita income (PCI). As a result, the largest FMAP increases due to the hold-harmless provision went to the states with the greatest improvements in their underlying economic condition, as measured by PCI, relative to the national average. Furthermore, states with both higher unemployment and rising unemployment tended to receive the least benefit from the hold-harmless provision. As shown in table 4, many states that benefited from the Recovery Act hold-harmless provision often had relative increases in their PCI compared to the national average, while some states that had little or no increase in PCI received little, if any, benefit from the hold-harmless provision. [Table 4 data omitted: percentage change in 3-year average per capita income (PCI) from 2003-05 to 2006-08 relative to the U.S. average, with unemployment measures shown as 3-month averages ending September 2010 and since January 2006.] During the first quarter of 2011, 9 of the 10 states that received the largest FMAP increases due to the hold-harmless provision had rising PCIs relative to the national average. For example, Hawaii, which had a 4.0 percent increase in its PCI, received a FMAP increase of 4.71 percentage points. Conversely, 11 of the 12 states that received no benefit from the hold-harmless provision in the first quarter of 2011 had declining per capita incomes relative to the national average.
In addition, states with the greatest recession-related needs, such as high and rising unemployment rates, tended to receive the least benefit from the hold-harmless provision. For example, Michigan received no benefit from the hold-harmless provision despite a much worse economic condition relative to other states: a 13.1 percent unemployment rate and a 6.4 percentage point increase in its unemployment rate.

Appendix IV: Temporary Increased FMAP Data by State, GAO Prototype Formula

[Table data omitted: increased FMAP and change in unemployment by state for the fourth quarter of 2009.]

Appendix V: Comments from the Department of Health and Human Services

Appendix VI: GAO Contacts and Staff Acknowledgments

In addition to the contacts named above, major contributors included Robert Copeland, Assistant Director; Eric R. Anderson; Robert Dinkelmeyer; Greg Dybalski; Drew Long; and Max Sawicky.

In response to the recession of 2007, Congress passed the American Recovery and Reinvestment Act of 2009 (Recovery Act). Recovery Act funds provided states with fiscal relief and helped to maintain state Medicaid programs through a temporary increase to the federal share of Medicaid funding, the Federal Medical Assistance Percentage (FMAP), from October 2008 through December 2010. In March 2011, GAO reported that states' ability to fund Medicaid was hampered due to increased Medicaid enrollment and declines in states' revenues that typically occur during a national downturn. The Recovery Act mandated that GAO provide recommendations for modifying the increased FMAP formula to make it more responsive to state Medicaid program needs during future economic downturns. In this report, GAO presents a prototype formula for a temporary increased FMAP and evaluates its effects on the allocation of assistance to states.
To evaluate the three components of the prototype formula--starting assistance, targeting assistance, and ending assistance--GAO uses the 2007 recession. GAO's prototype formula offers a timely and targeted option for providing states temporary Medicaid assistance during a national economic downturn. Once a threshold number of states--26 in GAO's prototype formula--show a sustained decrease in their employment-to-population (EPOP) ratio, temporary increases to states' FMAPs would be triggered automatically. The EPOP ratio compares the number of employed persons in a state to the working-age population aged 16 and older. This assistance would end when fewer than the threshold number of states show a decline in their EPOP ratio. Because the prototype formula relies on labor market data as an automatic trigger rather than legislative action, assistance would have begun earlier and extended longer than the assistance provided by the Recovery Act. The prototype formula would have triggered assistance to begin in January 2008 and end in September 2011, compared with the Recovery Act, which provided an increased FMAP from October 2008 through June 2011. Once the increased FMAP is triggered, targeted state assistance would be calculated based on two components: (1) increases in unemployment, as a proxy for changes in Medicaid enrollment; and (2) reductions in total wages and salaries, as a proxy for changes in states' revenues. GAO's prototype formula provides a baseline of funding for state Medicaid needs during an economic downturn by offering automatic, timely, and targeted assistance to states. Such assistance would facilitate state budget planning, provide states with greater fiscal stability, and better align federal assistance with the magnitude of the economic downturn's effects on individual states.
In commenting on a draft of this report, the Department of Health and Human Services (HHS) agreed with the analysis and goals of the report and emphasized the importance of aligning changes to the FMAP formula with individual state circumstances. HHS noted the complexity of the prototype formula and offered several considerations to guide policy choices regarding appropriate thresholds for timing and targeting of increased FMAP funds. To ensure that federal funding efficiently and effectively responds to the countercyclical nature of the Medicaid program, Congress could consider enacting an increased FMAP formula that targets variable state Medicaid needs and provides automatic, timely, and temporary assistance in response to national economic downturns. |
Background

Federal Institutional Investors

Institutional investors include public and private entities that pool funds on behalf of others and invest the funds in securities and other investment assets. Examples of institutional investors include federal, state, and local government, and private retirement plans, endowments, and foundations. In 2015, the eight entities we reviewed held close to $388 billion in externally managed investment assets used to support their investment objectives, including funding retirement benefits for federal employees (see table 1). They administered or oversaw defined benefit retirement plans, defined contribution retirement plans, an endowment, and an insurance program, each with distinct investment objectives. Three of the eight entities we reviewed manage defined benefit plans that provide participants a retirement benefit amount using a formula based on factors such as years of employment, age at retirement, and salary level. Typically, benefits are paid from a fund made up of assets from annual contributions by employers, employees, or a combination of the two and investment earnings from those contributions. Defined benefit plans typically have investment policies and guidance that outline goals for how the funds are to be invested. One of the eight entities we reviewed sponsors a defined contribution plan in which employees and employers contribute to an account directed by the employee into investment options offered by the plan. Policy objectives for defined contribution plans typically focus on offering a range of prudent investments suitable for participants to direct contributions in ways that meet their personal investment objectives. Two of the eight entities we reviewed both manage defined benefit plans and sponsor defined contribution plans. Two of the eight entities we reviewed hold other types of investments.
The Smithsonian Institution manages an endowment, which comprises trust funds, the majority of which have been permanently restricted by donors for a particular use, such as acquisition of artwork, funding curator positions, or public programs, according to agency officials. The Pension Benefit Guaranty Corporation operates an insurance program designed to partially insure defined benefit pensions sponsored by private employers. While the Pension Benefit Guaranty Corporation insures the pension benefits of nearly 40 million workers and retirees, as a federal guarantor of these plans it takes over the assets of underfunded terminated plans and is responsible for paying benefits to participants who are entitled to receive them.

Investment Decisions and Stakeholders in the Investment Process

For the defined benefit plans, endowment, and insurance program we reviewed, the entities generally make investment decisions within a framework spelled out in investment policy statements approved by boards of directors, trustees, or regents. Their investment policy statements generally define an asset allocation, or mix of asset classes in proportions designed to meet the entity's overarching investment objectives. For defined contribution plans, participants make investment decisions by deciding how to allocate the contributions they and their employer make among investment options offered through the plan. Entities that administer these plans may outline these options in plans' investment policy statements. For three entities in our review, legislative mandates also specify certain investment decisions, such as asset diversification and investment options. The federal entities we reviewed and the National Railroad Retirement Investment Trust are fiduciaries that have responsibilities that are typically similar to those of private sector retirement plans in that they are required to act solely in the interest of participants and beneficiaries in the retirement plans.
These responsibilities may include acting with the exclusive purpose of providing benefits to the participants and beneficiaries and defraying reasonable expenses of plan administration; carrying out their duties prudently; following the plan documents; and diversifying plan investments. A number of stakeholders are typically involved in the investment process. Generally, investment boards of directors or trustees, investment committees, investment officials and staff, investment consultants, asset managers, and brokers work together to invest clients' funds in securities that match clients' financial objectives (see fig. 1). The defined benefit plans, endowment, and insurance program we reviewed generally retained external asset managers to select individual investments in accordance with the asset allocation frameworks in their investment policy statements. The defined contribution plans generally retained external asset managers to provide plan participants with investment options. Asset management firms typically select investments on behalf of the investors that hired them and regularly report on investment performance. Many earn income by charging service fees based on a percentage of assets they manage. These fees generally include the costs of trading charged by brokerage firms.

Asset Classes and Portfolio Management

Asset management firms registered in the United States manage more than $70 trillion. Although thousands are registered to operate in the United States, the 100 largest asset managers account for more than 50 percent of total reported assets under management. Institutional investors we reviewed typically work with asset managers to invest in, or offer participants investments in, four broad asset classes: equity, fixed income, alternative assets, and cash and cash equivalents. Equity indicates ownership in a business in the form of common stock or preferred stock.
The equity asset class includes mutual funds, collective investment trusts, and exchange-traded funds that invest in equity securities. Fixed income refers to any type of investment under which the borrower or issuer is obligated to make payments of a fixed amount on a fixed schedule. The fixed-income asset class includes mutual funds, collective investment trusts, and exchange-traded funds that invest in fixed-income securities. Alternative assets can include hedge funds, private equity, real estate, and commodities. Plans may make such investments in an attempt to diversify their portfolios, achieve higher returns, or for other reasons. In recent years, two of the most common alternative assets that institutional investors held were hedge funds and private equity. Cash and cash equivalents are assets that are cash or can be converted into cash in a very short period of time. They include bank accounts, marketable securities, commercial paper, Treasury securities, short-term government bonds (with maturities of 3 months or less), short-term certificates of deposit, and money-market funds. Institutional investors generally use (or offer to participants) passive or active portfolio management strategies, or a mix of both. Passive management involves buying or creating an investment portfolio that closely tracks the performance of a broad class of assets usually defined by an index, such as the S&P 500. Passive managers attempt to match the performance of an index, typically with lower fees than active managers. Active managers attempt to exceed the performance of an index using their judgment about which individual investments in that asset class will do better than average. In defined contribution plans, the portfolio management is directed by the participants based on the range of options provided by the plan.
Federal Regulatory Requirements

The Federal Acquisition Regulation (FAR) establishes uniform policies and procedures for the acquisition of goods and services by executive agencies. Among other things, the FAR includes requirements agencies must meet, such as full and open competition through competitive procedures. With the exception of the Pension Benefit Guaranty Corporation, the FAR does not apply to the asset manager selection processes used by the federal entities or the National Railroad Retirement Investment Trust, according to representatives we interviewed.

Minority- and Women-Owned Asset Managers Face Challenges, but Found Opportunities at Some Nonfederal Retirement Plans and Foundations

According to many asset managers and industry associations with which we spoke, MWO asset managers face various challenges when competing for investment management opportunities with institutional investors, including retirement plans and foundations. However, nonfederal plans and foundations with which we spoke used various approaches to help address many of these challenges. These plans and foundations cited several factors that led them to take steps to increase opportunities for MWO firms. For example, some noted organizational interest in diversity as the driving force behind more inclusive selection practices for asset managers. Others cited potential benefits of using MWO firms, such as helping to diversify risk in their portfolios. Investor and consultant brand bias. According to most asset managers and industry associations we interviewed, institutional investors and their consultants often prefer to contract with larger asset managers with brand recognition or with whom they are familiar. Furthermore, according to some asset managers and industry associations, unless clients directly ask for MWO firms to be included in asset manager searches, consultants generally will not include them.
For example, one industry association noted that consultants generally exclude MWO asset managers due to an implicit bias that their clients' investment portfolio performance could potentially suffer if they use an MWO firm, despite no information to indicate this would in fact be the case. Having recognized brand bias as a challenge for MWO firms, some nonfederal plans and foundations have asked their investment consultants to maintain an inclusive process for sourcing, evaluating, and recommending investment managers across race, ethnicity, and gender. One of these plans and a local plan in Illinois told us that they asked their investment staff or consultants to ensure that at least one qualified MWO asset manager was invited to present to the institutional investor's decision-making body. In addition, one foundation ensured that its consultant was held accountable to more inclusive processes by requiring its investment consultant to annually report the number of diverse managers evaluated, recommended, and hired across the consultant's client base. Perception of weaker performance. According to most MWO asset managers and industry associations with whom we spoke, MWO firms may face challenges because institutional investors generally have a perception that MWO asset managers do not perform as well as non-MWO firms. However, a May 2017 study on diversity in the asset management industry by an academic institution and a research group found no differences in the performance of funds managed by MWO firms and the performance of those managed by non-MWO firms, among the firms they analyzed. Furthermore, all nonfederal plans and foundations we interviewed told us that all firms managing assets in their respective portfolios, including MWO asset managers, were selected based on track record of performance and evaluated against the same performance standards as other asset managers in their portfolios.
In addition, some nonfederal plans noted that these asset management firms may provide certain benefits in generating profit for their clients. For example, one plan noted that MWO firms offer more differentiated investment strategies than larger firms. Size and infrastructure. The size and limited infrastructure of smaller, newer MWO firms also may pose challenges. For example, according to most asset managers and industry associations with which we spoke, small MWO asset managers are frequently not able to meet threshold requirements set by institutional investors, such as minimum limits established for assets under management, liability insurance, and length of track record. Moreover, an asset manager and some nonfederal plans and foundations noted that back office functions and operational costs, such as for accounting and compliance, are high and make investments in these areas difficult for smaller, newer firms (including many MWO firms). In light of these minimum threshold challenges for MWO firms and smaller firms in general, many nonfederal plans adjusted requirements to allow these firms to compete, while noting that they maintained the same performance requirements for all asset managers in their selection processes. Specifically, most nonfederal plans and two foundations either lowered their minimum requirements for assets under management, length of track record, or amount of liability insurance to help ensure the requirements were proportional to the size of the firms, or did not set any minimum or maximum assets under management threshold levels. As we will discuss later, the Federal Reserve System and Pension Benefit Guaranty Corporation have made similar adjustments to increase opportunities for MWO asset management firms. 
Representatives from most nonfederal plans, foundations, the Federal Reserve System, and Pension Benefit Guaranty Corporation stated that they have not sacrificed performance or fallen short of their fiduciary responsibility in increasing opportunities for MWO firms. Industry trends. In May 2016, we reported that defined contribution plans have replaced defined benefit plans and become the dominant form of retirement plan for U.S. workers over the past three decades. According to some asset management firms and industry associations, MWO firms face challenges due to this industry shift from defined benefit plans toward defined contribution plans, where business opportunities may be too costly for MWO firms to pursue. According to an industry association we interviewed, the marketplace shift from defined benefit to defined contribution plans also will likely drive up costs for asset managers, and branding will become more important for defined contribution plans because asset managers will need to interact more directly with participants. The industry shift from active management to passive management may also be a challenge. According to some asset managers, industry associations, and nonfederal plans we interviewed, MWO firms are less able to compete for defined contribution plan business that uses passive management investment strategies because they lack the size and resources that larger firms have to keep asset management fees low for clients. Furthermore, because asset management fees for passive management strategies are low, an asset management firm must have a volume of business large enough to be profitable. In addition to adjusting minimum size and length of track record requirements, the nonfederal plans and foundations with which we spoke developed other strategies to help increase opportunities for MWO asset management firms. Some nonfederal plans and one foundation allocated a target amount of their investment portfolio to MWO asset managers. 
For example, to help increase opportunities for MWO asset management firms, a local plan in Texas developed a program that sets aside 10 percent of its total assets across all asset classes to be managed by asset management firms with $50 million or less in assets under management and at least 30 percent ownership by minorities or women. In addition, some nonfederal plans said they have developed emerging manager programs. For example, a state retirement plan in California has an emerging manager program—generally defined as a program geared toward newer, smaller asset managers—wherein each of the plan’s asset classes has emerging manager definitions based on assets under management, length of track record, or both. In the plan’s private equity class, emerging managers do not have to meet a minimum track record requirement and must have $1 billion or less in assets under management. Most of the nonfederal plans with whom we spoke used a fund-of-funds structure, in which a larger fund works as an intermediary for multiple, smaller managers. Three of these nonfederal plans noted that working with one firm to recruit, select, and manage underlying firms was an efficient and effective means of working with smaller, newer managers, including MWO firms. For example, a local government retirement plan in Illinois noted that the fund-of-funds manager it hired was able to hire MWO firms much more quickly than the plan could have directly. Similarly, a foundation that sought to increase diversity told us that it specifically hired a fund-of-funds manager to help find MWO firms. In addition, three nonfederal plans noted that another benefit to hiring a fund-of-funds manager was that the firm could serve as a mentor (providing guidance and coaching to the smaller asset management firms and MWO firms), which can help build institutional relationships over the long term. 
Nonfederal plans and foundations also helped increase opportunities for MWO firms by using outreach strategies to identify MWO managers that could meet their investment needs. For example, many nonfederal plans regularly participated in networking events, such as conferences, with MWO asset managers or networked with asset management trade associations that represent MWO firms. Many of these nonfederal plans facilitated one-on-one interactions with MWO firms so prospective asset managers could better understand their selection processes.

Federal Entities We Reviewed Invest in Asset Classes in Which Minority- and Women-Owned Asset Managers Have a Presence, but Use of these Firms Varied

The federal entities we reviewed and the National Railroad Retirement Investment Trust invested primarily in equity and fixed income, while some also invested in alternative assets or maintained some level of cash and cash equivalents. The 2015 allocations of the defined contribution plans we reviewed, shown in table 2, reflect both the investment options the plans offered to participants and participants’ decisions about how to invest among them. The Tennessee Valley Authority Savings and Deferral Retirement Plan also used BlackRock to offer passively managed funds in which participants invested over 60 percent of total 2016 plan assets (see sidebar). Similarly, the Federal Reserve Thrift Plan contracted with six large asset managers to offer the plan’s 12 funds. Eight of these, accounting for nearly three-quarters of participant investments, were passively managed. The defined contribution plans’ use of passively managed funds may limit opportunities for MWO firms. As discussed earlier, the shift toward defined contribution plans and passive management poses a challenge for MWO firms. In particular, some asset managers and industry associations with whom we spoke said MWO firms lack the size and resources to compete with the low fees larger firms can offer through passive management. 
The federal defined benefit plans we reviewed and the Smithsonian Institution endowment generally invested more extensively in alternative assets in addition to equity and fixed income. As shown in table 3, in 2015 the Smithsonian Institution invested more than half the endowment’s assets in alternative assets. Fiduciaries of four of the five defined benefit plans in our review also invested substantially in alternative assets, ranging from about one-quarter to about one-third of plan assets. The Pension Benefit Guaranty Corporation generally does not invest in alternative assets, but does hold a limited amount of these assets in line with its investment policy. We determined that MWO asset managers operate in the asset classes in which federal entities and the National Railroad Retirement Investment Trust invest. To do so, we identified more than 180 asset management firms designated through publicly available sources as having some level of minority or women ownership. These MWO asset managers operate in all four asset classes in which the entities in our review invested or offered investment options to participants. MWO asset managers we identified reported managing assets totaling more than $529 billion. As shown in table 4, most operate in the equity, fixed income, or alternative asset classes. The market presence of these firms equaled less than 1 percent of all regulatory assets under management. Two of the firms, with combined regulatory assets under management of about $42 billion, identified themselves as specializing in passive management investment strategies. Although we identified MWO asset managers operating in each of the four major asset classes, use of MWO asset managers in 2015 and 2016 by the federal entities we reviewed and the National Railroad Retirement Investment Trust varied (see table 5). For example, in 2015, none of the three defined contribution plans we reviewed used MWO asset managers. 
Four of the five defined benefit plans we reviewed reported using at least some MWO asset managers, but one of these plans did not track this information and was unable to provide us with specific data on its use. Finally, the Pension Benefit Guaranty Corporation did not use MWO asset managers in 2015, but did retain four MWO asset managers in 2016. The Federal Reserve System’s, National Railroad Retirement Investment Trust’s, and Smithsonian Institution’s use of MWO asset managers for their defined benefit plans and endowment was proportionately higher than the market presence of such firms. In 2015, 5 of the Federal Reserve System Retirement Plan’s 32 asset management firms (16 percent) were MWO firms. These firms managed 2 percent of the plan’s total assets, totaling $253 million. Four of these asset managers were private equity firms. The fifth handled a portion of the plan’s fixed income portfolio with two other asset managers handling much larger portions of plan assets under a similar investment strategy. The MWO asset manager achieved investment performance for the plan in 2015 comparable to that achieved by the two larger asset managers. In 2016, 10 of the National Railroad Retirement Investment Trust’s approximately 140 asset management firms (about 7 percent) were majority owned by women and minorities. Collectively, these 10 firms managed about 5 percent of total plan assets. Additionally, another 46 of the plan’s asset management firms had some level of ownership by minorities and women. Representatives of the Smithsonian Institution estimated 14 percent of their asset management firms—equal to about 13 of the more than 90 firms the endowment retained as of the end of 2016—were owned by women or minorities. In comparison, we identified a number of nonfederal entities that use MWO asset managers. These include retirement plans in five states, one of which set a goal of having 20 percent of its plan’s assets managed by MWO asset managers. 
We also identified three foundations that increased opportunities for MWO asset managers, for example by amending their investment policy or establishing a program to hire MWO asset managers. Finally, we interviewed officials from two corporations, one of which retains MWO firms for both its defined contribution and defined benefit plans. Representatives of the other corporation told us they retain almost 30 MWO firms to manage nearly $600 million, or about 4 percent, of the corporation’s defined benefit plans and foundation. Despite not having an explicit program in place to hire these firms, the corporation cited two key actions that create opportunities for MWO firms to compete. The first is conducting outreach so MWO asset managers understand how to do business with the corporation. The second is communicating to the plan consultant that the corporation expects to see a diverse array of firms when selecting asset managers.

Some Federal Entities Made Limited Use of Key Practices When Selecting Asset Managers

Asset manager selection processes varied by federal entity, but generally included conducting research to identify potential asset managers and conducting an evaluation of potential asset managers before making final selections. We identified four key practices that institutional investors can use to increase opportunities for MWO asset managers: establishing and maintaining top leadership commitment, removing potential barriers, conducting outreach to MWO firms, and communicating priorities and expectations about inclusive practices to investment staff and consultants. Some of the federal entities we reviewed implemented all these key practices, but others made partial, limited, or no use of the practices.

Asset Manager Selection Processes Varied by Federal Entity and Some Entities Relied More on Consultants Than Others

All the federal entities we reviewed had internal policies related to their selection processes for asset managers. 
The processes generally included conducting research to identify potential asset managers and evaluating candidates before making final selections, but varied in some respects as the examples below show. We did not evaluate the National Railroad Retirement Investment Trust’s asset manager selection processes because it is not a department, agency, or instrumentality of the federal government, and it is not subject to the federal law that governs the financial operations of the federal government and establishes the powers and duties of the GAO. Selection criteria. All the entities had performance requirements and most had minimum requirements related to size (assets under management) and length of track record, but the thresholds asset managers had to meet for these requirements varied. For example, the minimum size requirements were $1 billion in assets under management for two of the defined benefit plans we reviewed, and ranged from $1 billion to $60 billion in assets under management for the defined contribution plans. The minimum size requirement for the Pension Benefit Guaranty Corporation was $250 million in assets under management for applicants in the smaller asset managers pilot program (discussed later in the report), and ranged from $10 billion to $50 billion for managers outside of the program. Minimum length of track record requirements ranged from 3 to 5 years for two of the defined benefit plans we reviewed, and from 3 to 15 years for the defined contribution plans. The minimum length of track record requirement for all managers used by the Pension Benefit Guaranty Corporation was 5 years. In addition to performance, size, and length of track record, the entities we reviewed used other criteria when selecting asset managers, such as investment strategies, diversification of portfolio, organization and resources, and best value. Use of public solicitation process. 
The Pension Benefit Guaranty Corporation and the Federal Retirement Thrift Investment Board use full and open competition when selecting asset managers. Full and open competition allows all prospective asset managers who meet certain requirements to submit bids or competitive proposals. The Pension Benefit Guaranty Corporation and Federal Retirement Thrift Investment Board seek competitive proposals from prospective asset managers by issuing requests for proposals, which are published publicly on the Federal Business Opportunities website. Asset management opportunities for the other entities are not published publicly. Instead, the entities use other avenues such as internal investment staff or consultants to identify potential asset managers, as discussed later in this report. Frequency of asset manager searches. The Federal Retirement Thrift Investment Board solicits asset management services for its four externally managed funds every 5 years. The Federal Reserve System reviews its public markets asset manager arrangements at least every 5 years and, based on the results of the review, may search for new asset managers. Pension Benefit Guaranty Corporation officials told us that their asset manager searches are ongoing, with generally from one to three searches occurring each year for varying asset classes. The rest of the entities we reviewed do not have a set schedule for conducting asset manager searches. Asset manager searches for these entities typically occur when an asset manager underperforms and needs to be replaced, when there is a change in asset allocation that warrants a new asset manager, or when a new investment opportunity arises. Almost all of the entities used consultants to some extent in their selection processes, but some entities relied on consultants more than others. Three entities (Army and Air Force Exchange Service, Navy Exchange Service Command, and Tennessee Valley Authority Retirement System) work closely with consultants. 
While these entities approve final asset manager selections, the consultants primarily drive the search and evaluation process. Specifically, the consultants identify potential asset managers from their proprietary databases based on the entities’ selection criteria. The consultants then produce evaluation reports on candidates for the entities to review and provide recommendations about which asset managers to select. Three entities (Federal Retirement Thrift Investment Board, Pension Benefit Guaranty Corporation, and Smithsonian Institution) told us they use consultants to help identify potential asset managers or provide expertise in other areas, such as specific asset classes or asset manager fees. However, evaluations to select asset managers are conducted internally by the entities. As stated earlier, the Federal Retirement Thrift Investment Board and the Pension Benefit Guaranty Corporation issue requests for proposals, which allow prospective asset managers to submit proposals to be considered for asset management opportunities. The entities evaluate the submissions and then make final selections. The Smithsonian Institution’s investment staff prepare a memorandum after reviewing and vetting asset managers. The memorandum is provided to the endowment’s Investment Committee, which makes final selections. Federal Reserve System representatives told us they primarily rely on internal market searches to identify potential asset managers for both of the system’s retirement plans, but use consultants in limited instances. Specifically, the Federal Reserve System uses a consultant for asset classes in which it may be difficult to find asset managers, such as real estate. After identifying potential asset managers, the Federal Reserve System issues a request for proposal to potential candidates. Candidate submissions are graded using a scoring matrix and the results are used to identify from three to five firms, which are then invited to interview. 
Final selections are made following the interviews.

Some of the Federal Entities We Reviewed Made Limited or No Use of Key Practices

We identified four key practices that can be used as part of investors’ asset manager selection processes to help broaden the processes and ensure that qualified MWO firms are considered (see table 6). We identified the key practices based on a review of industry reports and interviews with a sample of state, local, and private retirement plans and foundations. We then validated the practices by obtaining feedback from experts and industry stakeholders. The key practices are closely related, and improvements or shortfalls in one practice may contribute to improvements or shortfalls in another practice. The practices do not require investors to develop targets or allocations for MWO asset management firms or to change performance standards. In addition, we identified examples of how some institutional investors have implemented the key practices, which may provide insights to other investors as they undertake or attempt to strengthen or improve their own initiatives related to MWO firms. The examples are not exhaustive and each institutional investor may implement the identified practices in its own way. As stated earlier, many nonfederal plans and foundations we interviewed have implemented these practices to increase opportunities for MWO firms and told us that using the practices does not conflict with their fiduciary responsibilities. According to these plans and foundations, MWO firms they used were selected based on track record of performance and may provide certain benefits in generating profit for their clients. Furthermore, according to plan representatives and other industry stakeholders with whom we spoke, diversifying plan investments to manage risk can be accomplished through the diversification of asset managers. 
Industry reports and industry stakeholders, including plan representatives with whom we spoke, also noted that implementing a more inclusive selection process for asset managers can widen the pool of candidates, which can help ensure that retirement plans and foundations are identifying the best asset managers. Additionally, as stated earlier, the federal government has an interest in helping increase opportunities for minority- and women-owned businesses and addressing barriers they face. The key practices we identified, which can broaden an investor’s pool of asset manager candidates to help ensure qualified MWO firms are considered for asset management opportunities and increase opportunities for these firms, are consistent with this interest. We found that three of the federal entities we reviewed used all the practices, but the other four entities made partial, limited, or no use of them (see table 7). Specifically, the Federal Reserve System and Smithsonian Institution have developed inclusive policies and taken other steps to increase opportunities for MWO asset managers, and the Pension Benefit Guaranty Corporation has developed a program to help increase opportunities for smaller asset management firms, including smaller MWO firms. Federal Reserve System. The Federal Reserve System’s Office of Employee Benefits (OEB) has documented top leadership commitment by developing policies that support the inclusion of MWO firms in its asset manager searches. For example, OEB’s procurement guidelines, which describe the process that staff should follow when procuring goods and services, state that the office fully supports equal opportunity in procurement. 
The guidelines also state that prospective lists of vendors (including asset managers) to receive requests for proposals should take into account OEB’s policy that qualified firms interested in doing business with the office, including minority- and women-owned businesses, should be included in the candidate pool as appropriate. Additionally, in 2014 the Federal Reserve System’s Committee on Investment Performance established a policy requiring that each investment mandate for its two retirement plans be reviewed at least every 5 years on a rolling basis, which has provided qualified MWO firms with regular opportunities to compete for all of OEB’s investment mandates. The Federal Reserve System removed a potential barrier to MWO firm participation by lowering its minimum requirement for assets under management. According to Federal Reserve System representatives, about 5 years ago OEB lowered the requirement from $5 billion in assets under management to $1 billion in assets under management for its defined benefit and defined contribution plans. To conduct outreach, the Federal Reserve System has participated in meetings and conferences organized by industry associations that represent MWO firms and met with MWO firms individually, according to Federal Reserve System representatives. Federal Reserve System representatives have communicated their priorities and expectations to investment staff by implementing policies that have opened opportunities for MWO firms, as discussed earlier. The entity has also tracked its use of MWO asset management firms and the proposals it received from prospective MWO asset managers. Federal Reserve System officials told us that their efforts to increase opportunities for MWO asset managers were primarily driven by two factors. First, the decision was driven by a desire to find the best managers in the industry. 
Second, officials stated that they were aware of the issues surrounding the lack of inclusion in the asset management industry and made changes to help address these issues. Federal Reserve System officials noted that actively including MWO asset management firms in their searches was consistent with prudent investor requirements and fiduciary obligations. Officials also noted that some potential benefits could be gained by a more inclusive selection process, such as helping manage risk by diversifying the portfolio and mitigating manager concentration risk through new MWO managers. Pension Benefit Guaranty Corporation. The Pension Benefit Guaranty Corporation demonstrated top leadership commitment by launching a Smaller Asset Managers pilot program in 2016. According to the entity’s 2016 annual report, the program was created to reduce barriers that smaller investment firms face when competing for the agency’s business. As part of this program, the Pension Benefit Guaranty Corporation removed a potential barrier to entry faced by smaller firms by lowering its minimum requirement for assets under management. Specifically, its minimum, which representatives told us typically ranges from $10 billion to $50 billion, was lowered to $250 million for program applicants. All other asset manager selection requirements, including performance, remained the same. The Pension Benefit Guaranty Corporation conducted outreach to smaller asset managers to promote its pilot program. For example, the Pension Benefit Guaranty Corporation held a pre-bidding conference designed to help applicants understand the selection process. The entity also listed the request for proposal for its program on the Federal Business Opportunities website and advertised the program in industry publications. In addition, the Pension Benefit Guaranty Corporation communicated its priorities and expectations regarding its program to its consultant by requesting the consultant help identify prospective MWO firms for the pilot program, according to officials. 
After completing the selection process for its pilot program, the Pension Benefit Guaranty Corporation selected five firms to participate. Of the five firms, four were minority- or women-owned firms or both. Pension Benefit Guaranty Corporation officials stated that, moving forward, they will evaluate the firms on their performance against the portfolio benchmark over a full market cycle. Smithsonian Institution. The Smithsonian Institution has demonstrated top leadership commitment by revising its investment guidelines and asset manager questionnaire to reflect its commitment to diversity. Specifically, in March 2017 the Smithsonian Institution revised its investment guidelines by adding “environmental, social, and corporate governance considerations in the investment process” as one of its selection criteria. According to Smithsonian Institution representatives, this change allows the Smithsonian Institution to incorporate diversity as an additional criterion for asset manager selection. The Smithsonian Institution also added a statement to its annual asset manager questionnaire that its Office of Investments strives to promote diversity and inclusion among its manager hires. Furthermore, representatives stated that the Board of Regents and Investment Committee encourage them to look broadly at portfolio diversification not only in terms of asset classes and investment strategies, but also in terms of the racial and gender diversity of asset managers. In addition, the Smithsonian Institution does not have minimum size or length of track record requirements, which has allowed smaller asset managers, including small MWO firms, to compete for asset management opportunities with the endowment. The Smithsonian Institution has also conducted outreach to MWO firms. According to Smithsonian Institution representatives, investment staff meet with MWO asset managers on an ongoing basis. 
Staff have also participated in conferences focused on women-owned and emerging managers. Finally, the Smithsonian Institution has communicated its priorities and expectations about inclusive selection processes by asking its consultants for a list of diverse managers, according to Smithsonian Institution representatives. Representatives also told us that investment staff have been directed to proactively identify MWO asset managers to add to this list based on their capability and expertise. Smithsonian Institution representatives told us they have often found investing with smaller asset managers (including smaller MWO firms) to be attractive. According to the Smithsonian Institution, small firms tend to be entrepreneurial and entrepreneurial firms are more successful when dealing with changing market dynamics, which can increase the chances of delivering superior investment returns. Representatives also noted that even though the endowment has a small staff and limited resources, they have not found it difficult to identify MWO asset managers for the endowment. Three entities have used one or more of the practices, but have not fully implemented all of them. Army and Air Force Exchange Service. The Army and Air Force Exchange Service has used two of the key practices we identified (outreach and communicating priorities and expectations) and partially used (has started but has not completed actions related to) the other two practices (top leadership commitment and removing potential barriers). Specifically, the Army and Air Force Exchange Service has taken several steps to conduct outreach to MWO asset managers. For example, representatives told us that they have held face-to-face interviews with many MWO firms and gathered information on these firms for potential future investments. 
Army and Air Force Exchange Service representatives have also attended conferences and networking events with MWO asset managers, directed these firms to their consultant to learn more about the Army and Air Force Exchange Service’s selection process, and accepted visitation requests from MWO asset managers, according to representatives. The Army and Air Force Exchange Service also has communicated its priorities and expectations for a more inclusive asset manager selection process. For example, the Army and Air Force Exchange Service has instructed its investment staff to meet with representatives from MWO trade groups. Additionally, the entity has requested that its consultant include MWO asset managers who meet its portfolio needs in asset manager searches. However, the Army and Air Force Exchange Service has not completed actions related to top leadership commitment and removing potential barriers. According to representatives, plan trustees have had initial discussions about taking additional steps to increase opportunities for MWO firms at trustee meetings, but have not taken actions related to these initial discussions. In addition, representatives told us they use some fund-of-funds investments, which have allowed smaller firms, including some MWO firms, to compete for opportunities with the Army and Air Force Exchange Service. Representatives also said they have started to review their policies and practices to encourage greater participation by MWO firms in asset manager searches, but had not completed this review as of June 2017. Army and Air Force Exchange Service representatives told us they consider the Army and Air Force Exchange Service to have used all of the key practices we identified because they have taken at least some action related to each of the practices. However, they did not have an expected completion date for two of these actions. Navy Exchange Service Command. 
The Navy Exchange Service Command has used one key practice (removing potential barriers), but has not used the other three practices (top leadership commitment, outreach, and communicating priorities and expectations). Specifically, the Navy Exchange Service Command does not have minimum size or length of track record requirements for its asset managers. According to officials, the Navy Exchange Service Command does not use these two criteria to select asset managers in part to help ensure asset manager searches are conducted as widely as possible and include smaller asset managers, including smaller MWO firms. However, the Navy Exchange Service Command has not made direct efforts to reach out to MWO firms. Moreover, the entity has not taken any actions to demonstrate top leadership commitment and has not communicated its priorities and expectations to its consultant because it has not specifically requested that its consultant include qualified MWO asset managers in its searches for the Navy Exchange Service Command. According to Navy Exchange Service Command representatives, they have not taken steps to implement all of the key practices because they rely on their consultant to take these actions. Navy Exchange Service Command representatives told us that their consultant is committed to conducting inclusive asset manager searches for all of its clients, performs outreach to MWO asset managers, and regularly reports to the Navy Exchange Service Command on its outreach efforts. Navy Exchange Service Command representatives also noted that it may not be practical for the Navy Exchange Service Command to directly conduct outreach to MWO firms because the entity is small and its consultant has the resources and practices in place for performing outreach to MWO firms. However, Navy Exchange Service Command representatives have not specifically directed their consultant to be more inclusive in its asset manager searches or to conduct outreach. 
Tennessee Valley Authority Retirement System. The Tennessee Valley Authority Retirement System has used one key practice (removing potential barriers). The entity does not have minimum size or length of track record requirements, which can help widen the pool of asset managers that the Tennessee Valley Authority Retirement System considers for selection. However, the Tennessee Valley Authority Retirement System has not used the other three practices (top leadership commitment, outreach, and communicating priorities and expectations). According to Tennessee Valley Authority Retirement System representatives, it is their understanding that their consultants research and evaluate MWO asset managers and that to the extent such firms meet selection criteria and rank high in the evaluation of a particular investment mandate or strategy that the Tennessee Valley Authority Retirement System is pursuing, the firms would be considered for selection. Tennessee Valley Authority Retirement System representatives also told us that limited staff and resources make it difficult for them to conduct outreach to MWO firms. However, the entity has not communicated its priorities or expectations to its consultants because it has not directly requested or ensured that its consultants include qualified MWO asset managers in searches specifically conducted for the Tennessee Valley Authority Retirement System, nor has the entity directed its consultants to conduct outreach to MWO firms. Furthermore, the Tennessee Valley Authority Retirement System has not taken any actions to demonstrate top leadership commitment. Federal Retirement Thrift Investment Board. The Federal Retirement Thrift Investment Board has not used any of the key practices. The board described a variety of reasons that have prevented it from taking steps to increase opportunities for MWO asset managers.
According to Federal Retirement Thrift Investment Board representatives, the Federal Retirement Thrift Investment Board and Executive Director serve as fiduciaries to Thrift Savings Plan participants and as such administer the Thrift Savings Plan in the sole interest of participants and beneficiaries. According to Federal Retirement Thrift Investment Board representatives, the criteria used to select asset managers for the Thrift Savings Plan funds include providing a passive index strategy (as mandated by statute) and best value, which is a combination of technical ability and low price. Representatives stated that while qualified MWO firms are not restricted from responding to their requests for proposals, given the size of Thrift Savings Plan funds and the operational scale needed to manage a significant amount of assets at a low cost, only a relatively small number of firms respond. Further, Federal Retirement Thrift Investment Board representatives told us they plan to implement a mutual fund window platform for the Thrift Savings Plan in 2020, and acknowledged that the platform could provide an opportunity for MWO asset management firms. According to plan representatives, they are seeking a platform that will provide participants with a “broad array” of options, which they define as a platform that includes a large number of mutual funds covering a wide range of investment options. However, plan representatives told us they do not plan to incorporate key practices into their selection process when selecting a mutual fund platform. According to Federal Retirement Thrift Investment Board representatives, they are fiduciaries required to act solely in the best interest of plan participants and beneficiaries and therefore cannot make a decision based on criteria that favor any vendor or potential vendor and cannot require that MWO firms be included in the platform. 
As stated earlier, the key practices we identified can help broaden the pool of qualified asset managers that investors can select from, and do not mandate the hiring of MWO firms or require sacrificing performance standards. Moreover, other entities that have implemented the key practices, including two that administer defined contribution plans, have found that using inclusive selection practices does not conflict with fiduciary obligations. By fully implementing the key practices, the four entities could widen the pool of potential candidates in their asset manager searches and help ensure that they are finding the most qualified firms that meet their investment needs. In keeping with federal interests, the practices could also help address barriers MWO firms face and increase opportunities for these firms. Conclusions Some of the federal entities we reviewed have taken steps to implement more inclusive selection processes for asset managers, including developing a pilot program for smaller asset managers and establishing policies that support the inclusion of MWO firms in asset manager searches. However, opportunities exist for other entities we reviewed to take additional actions by implementing key practices. Specifically, the Army and Air Force Exchange Service, Navy Exchange Service Command, and Tennessee Valley Authority Retirement System have used one or more of the key practices to increase opportunities for MWO firms, but have not fully implemented all of them. The Federal Retirement Thrift Investment Board has not used any of the key practices. Implementing (or fully implementing) the key practices could widen the pool of potential candidates in their asset manager searches and help ensure that they are finding the most qualified firms that meet their or their plan participants’ investment needs.
Additionally, in keeping with federal interests, if implemented, the practices could eliminate or mitigate some of the barriers that MWO firms face and increase opportunities for MWO firms. Recommendations for Executive Action We are making a total of four recommendations to four agencies: The Chief Investment Officer of the Army and Air Force Exchange Service should fully implement key practices to increase opportunities for MWO asset managers as part of its selection processes. Specifically, the Chief Investment Officer should complete actions related to top leadership commitment and removing potential barriers. (Recommendation 1) The Chief Investment Officer of the Federal Retirement Thrift Investment Board should use key practices as appropriate to increase opportunities for MWO asset managers if and when implementing its mutual fund window platform. Specifically, the Chief Investment Officer should take actions to demonstrate top leadership commitment, remove potential barriers, conduct outreach to MWO firms, and communicate its priorities and expectations for an inclusive selection process to its staff and consultants if and when it begins to search for a mutual fund window platform. (Recommendation 2) The Chief Investment Officer of the Navy Exchange Service Command should fully implement key practices to increase opportunities for MWO asset managers as part of its selection processes. Specifically, the Chief Investment Officer should take actions to demonstrate top leadership commitment, and to the extent that staff and resources are a constraint, should direct its consultant to conduct outreach to MWO firms and communicate its priorities and expectations for an inclusive selection process by requesting its consultant conduct more inclusive asset manager searches specifically for the Navy Exchange Service Command. 
(Recommendation 3) The Chief Investment Officer of the Tennessee Valley Authority Retirement System should fully implement key practices to increase opportunities for MWO asset managers as part of its selection processes. Specifically, the Chief Investment Officer should take actions to demonstrate top leadership commitment, and to the extent that staff and resources are a constraint, should direct its consultant to conduct outreach to MWO firms and communicate its priorities and expectations for an inclusive selection process by requesting its consultant conduct more inclusive asset manager searches specifically for the Tennessee Valley Authority Retirement System. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to the Army and Air Force Exchange Service, Federal Reserve System, Federal Retirement Thrift Investment Board, Railroad Retirement Board, Smithsonian Institution, Pension Benefit Guaranty Corporation, Navy Exchange Service Command, National Railroad Retirement Investment Trust, and Tennessee Valley Authority Retirement System for review and comment. We received written comments from the Army and Air Force Exchange Service, Federal Retirement Thrift Investment Board, Navy Exchange Service Command, and Tennessee Valley Authority Retirement System. In their comments, reprinted in appendix IV, the Army and Air Force Exchange Service and Navy Exchange Service Command agreed with our recommendations. The Army and Air Force Exchange Service stated that it would continue to press forward in implementing the key practices to increase opportunities for MWO asset managers in a manner consistent with its investment goals and guidelines. The Navy Exchange Service Command stated that it will demonstrate top leadership commitment by formally directing its consultant in writing to conduct outreach to MWO firms on the entity’s behalf and communicate expectations for an inclusive selection process for asset managers. 
The Navy Exchange Service Command also provided technical comments, which we incorporated as appropriate. In its comments, reprinted in appendix V, the Federal Retirement Thrift Investment Board disagreed with our recommendation. The Federal Retirement Thrift Investment Board said that when developed, the request for proposal for the mutual fund window will cover topics such as information security and compatibility with the Thrift Savings Plan recordkeeping software, and breadth of fund choices offered. The Federal Retirement Thrift Investment Board stated that this will allow the Thrift Savings Plan to offer participants that wish to use the mutual fund window the greatest amount of choice in a secure, seamless, and efficient manner. The Board also noted that fiduciary rules make it difficult to make guarantees about any fund that might be offered in a future mutual fund window. However, as we discuss in our report, implementing a more inclusive selection process could bring in a broad array of asset managers with different investment strategies and products that can help the Federal Retirement Thrift Investment Board provide participants with more investment options. Further, our recommendation would not require the use of MWO asset managers in the mutual fund window, but instead would help broaden the Federal Retirement Thrift Investment Board’s selection processes and help ensure that qualified MWO firms are considered. The Federal Retirement Thrift Investment Board also provided technical comments, which we incorporated as appropriate. In its comments, reprinted in appendix VI, the Tennessee Valley Authority Retirement System agreed with our recommendation. The entity stated that it was committed to a process with its consultants and asset managers that provides equal opportunities for asset managers of all types of ownership, including MWO firms. 
The entity also outlined actions it would take to implement the key practices, such as documenting its commitment to equal opportunity for all asset managers, including MWO firms, in its investment policy statement and working with its consultant to set up a process for providing the Tennessee Valley Authority Retirement System information on potential MWO asset managers researched and evaluated by its consultant. The Federal Reserve System, Pension Benefit Guaranty Corporation, and the Smithsonian Institution provided technical comments, which we incorporated as appropriate. The National Railroad Retirement Investment Trust and the Railroad Retirement Board informed us that they had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the agencies and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact (202) 512-8678 or [email protected], or (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. 
Appendix I: Objectives, Scope, and Methodology In this report we examined (1) the challenges minority- and women-owned (MWO) asset managers may face when competing for investment opportunities, and practices used by selected state, local, and private entities that administer or oversee retirement plans and foundations to increase opportunities for MWO firms; (2) the major asset classes in which selected federal entities invested, their use of MWO firms, and the market presence of MWO firms in these asset classes; and (3) the policies and processes selected federal entities use to identify and select asset management firms, and their use of key practices to increase opportunities for MWO firms. We reviewed federal retirement plans, an endowment, and an insurance program administered or overseen by eight entities (collectively referred to as federal entities): Army and Air Force Exchange Service, Federal Reserve System, Federal Retirement Thrift Investment Board, Railroad Retirement Board, Smithsonian Institution, Pension Benefit Guaranty Corporation, Navy Exchange Service Command, and Tennessee Valley Authority. We selected the federal entities and the National Railroad Retirement Investment Trust based on size (assets of more than $1 billion in 2015) and investment strategies (passive or active management or both). We excluded federal entities that solely invested in Treasury securities that are not traded (and thus do not use external asset managers), such as the Civil Service Retirement System and the Federal Employees Retirement System. These two retirement systems fund the primary defined benefit pension benefits for the vast majority of federal employees. Objective 1: Challenges Faced by MWO Asset Managers To identify challenges that MWO asset management firms face, we analyzed reports by industry stakeholders, such as asset management trade associations, institutional investors, and investment managers.
We also conducted interviews with asset management trade associations (National Association of Investment Companies, National Association of Securities Professionals, Association of Black Foundation Executives, Association of Asian American Investment Managers, New America Alliance, Private Equity Women’s Initiative, and Diverse Asset Managers Initiative), 10 MWO asset managers, and 4 MWO brokerage firms. These firms were selected based on size of the firms (assets under management for asset managers and revenue for brokers), literature reviews, and recommendations from industry stakeholders. Information gathered from these interviews cannot be generalized to all MWO asset managers or MWO brokerage firms. To identify how entities that administer or oversee state, local, and private retirement plans and foundations (nonfederal plans and foundations) increase opportunities for MWO asset management firms, we interviewed 14 entities: California Public Employees’ Retirement System, California State Teachers’ Retirement System, Employees’ Retirement Fund of the City of Dallas, Exelon, Illinois Municipal Retirement Fund, Kellogg Foundation, Knight Foundation, Maryland State Retirement and Pension System, New York State Common Retirement Fund, Prudential’s Strategic Investment Research Group, Silicon Valley Community Foundation, Teachers’ Retirement System of Illinois, Teacher Retirement System of Texas, and Verizon Investment Management Corp. Seven of these entities administered defined benefit plans; three administered defined benefit and defined contribution plans; three were foundations; and one was an investment group. These entities were selected based on size (assets), use of an MWO program or other initiative designed to increase opportunities for MWO asset management, a literature review, and recommendations by industry stakeholders. Information gathered from these interviews cannot be generalized to all nonfederal plans and foundations. 
We also analyzed reports by industry stakeholders on key practices for MWO programs or similar initiatives. When relevant, we also gathered information from these sources on the challenges faced by MWO brokerage firms and practices used to increase their business opportunities. Objective 2: Asset Classes Invested in by Selected Federal Entities and Use of MWO Firms To determine the major asset classes invested in by selected federal entities and the National Railroad Retirement Investment Trust, we obtained data from the entities and reviewed their annual reports and audited financial statements for information on annual allocations made to each asset class. Allocation data from annual reports and audited financial statements reflect in some instances the investment objectives of funds managed by external asset management firms, which may not align precisely with the holdings of each fund. However, we concluded these data were sufficiently reliable to document the asset classes in which the federal entities we reviewed and the National Railroad Retirement Investment Trust invest. To determine the presence of MWO firms in the same asset classes in which the federal entities in our review and the National Railroad Retirement Investment Trust invested, we reviewed publicly available directories and databases of MWO asset managers compiled by industry stakeholders and state retirement plans. We then reviewed the firms’ Form ADV filings in the Investment Adviser Public Disclosure database of the Securities and Exchange Commission to identify the asset classes in which they operate and the assets they reported managing. Finally, we compared the total assets reported by MWO firms we identified to total regulatory assets under management reported by all registered investment advisers in the database as of May 2017. To assess the reliability of data in the Securities and Exchange Commission database, we identified and removed duplicate entries.
We also checked data in the database against the Form ADV filings of a random selection of firms to ensure the database accurately reflects firms’ filings. Finally, we compared total regulatory assets under management for all investment advisers in the database to a July 2016 industry estimate of the global asset management industry and found them to be similar. We also compared the proportion of assets reported by the MWO firms we identified to one calculated by researchers in May 2017 and also found them to be similar. Although we did not independently verify the accuracy of Form ADV filings, we concluded the data we obtained from them were sufficiently reliable to estimate the proportion of regulatory assets under management held by MWO asset managers. To the extent available, we reviewed data on the percentage of each federal entity’s and the National Railroad Retirement Investment Trust’s asset managers that are minority- or women-owned, the percentage of assets managed by these MWO firms, and the asset classes in which the firms invested. To assess the reliability of these data, we interviewed representatives of three federal entities and the National Railroad Retirement Investment Trust with knowledge of the systems and methods used to produce these data. We determined that the data we used were sufficiently reliable for purposes of estimating federal entities’ and the National Railroad Retirement Investment Trust’s use of MWO asset managers in 2015 and 2016. 
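The deduplication and share calculation described in this methodology can be sketched as follows. This is a hypothetical illustration only, not GAO's actual analysis code: the record layout, field names (`crd`, `filed`, `aum`, `mwo`), and figures are invented for the example.

```python
# Illustrative sketch (assumed record format, not GAO's actual code):
# deduplicate Form ADV-style records by firm identifier, keeping the most
# recent filing, then estimate the share of total regulatory assets under
# management (AUM) reported by MWO firms.

def mwo_aum_share(filings):
    """Keep the latest filing per firm, then compute the MWO share of total AUM."""
    latest = {}
    for f in filings:
        key = f["crd"]  # firm identifier; duplicates arise from multiple filings
        if key not in latest or f["filed"] > latest[key]["filed"]:
            latest[key] = f
    total = sum(f["aum"] for f in latest.values())
    mwo = sum(f["aum"] for f in latest.values() if f["mwo"])
    return mwo / total if total else 0.0

# Hypothetical data: firm 1 appears twice, so only its newer filing counts.
filings = [
    {"crd": 1, "filed": "2017-03", "aum": 9_000, "mwo": False},
    {"crd": 1, "filed": "2017-05", "aum": 9_500, "mwo": False},  # duplicate, newer
    {"crd": 2, "filed": "2017-04", "aum": 400, "mwo": True},
    {"crd": 3, "filed": "2017-02", "aum": 100, "mwo": True},
]
print(round(mwo_aum_share(filings), 3))  # 500 / 10,000 = 0.05
```

The same pattern scales to the full database: deduplication first, so that firms filing multiple times are not double-counted in either the numerator or the denominator.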
Objective 3: Asset Manager Selection Processes and Use of Key Practices To determine the policies and processes that selected federal entities and the National Railroad Retirement Investment Trust used to identify and select asset managers and efforts to include MWO firms, we analyzed investment policies; documentation on the processes and criteria used to select asset managers, including internal guidelines and examples of requests for proposals for investment management services; and available documentation on programs, policies, or initiatives related to MWO firms. In addition, we interviewed representatives from the entities we reviewed to learn more about their selection processes and efforts related to MWO asset managers. We assessed the extent to which federal entities used key practices to increase opportunities for MWO firms. Specifically, we identified key practices by examining the practices used by nonfederal plans and foundations and by reviewing industry reports. We then validated the practices by obtaining input from 10 industry stakeholders and experts selected based on factors such as depth of experience working with MWO firms and published research. The stakeholders and experts generally agreed with the practices we identified, but one expressed a concern about implementing two of the practices. We then assessed the extent to which the federal entities we reviewed used each key practice using three categories. “Uses” indicates that the entity completed an action or actions to implement the practice; “partially uses” indicates that the entity has started an action or actions to implement the practice, but has not completed the action(s); and “does not use” indicates that the entity has not started or completed any action(s) to implement the practice. One analyst reviewed the entities’ policies and practices and made the initial assessment. A second analyst then verified these steps to ensure consistent results.
We conducted this performance audit from April 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: National Railroad Retirement Investment Trust Representatives from the National Railroad Retirement Investment Trust (Trust) stated that trustees and investment staff have a fiduciary obligation to invest the Trust’s assets solely in the best interest of the Trust and its beneficiaries. Representatives told us they do not believe their asset manager selection process, described below, has precluded the Trust from hiring smaller or minority- and women-owned (MWO) firms. Selection criteria and process. According to Trust representatives, the Trust does not have minimum size or length of track record requirements for its asset managers, but considers an asset manager’s size, as well as the appropriate size of a possible allocation to the manager, when determining whether to hire the manager. They also stated that it was important to hire managers that have a demonstrable history of successfully investing and that generally, track records that cover longer time periods are more statistically significant than shorter-term track records because there are more data points to analyze. Trust representatives emphasized that considering these factors has not prevented the Trust from hiring smaller firms, and that about half of their asset managers are small to mid-size firms. According to Trust representatives, the Federal Acquisition Regulation does not apply to the Trust’s asset manager selection process. The Trust uses internal investment staff to identify potential asset managers.
Trust representatives told us that searches for asset managers in the public markets (for example, publicly traded stocks and bonds) typically occur when an asset manager needs to be replaced (due to a variety of reasons, such as personnel changes), whereas searches for asset managers in private markets (for example, alternative assets such as private equity and real estate) occur on an ongoing basis. Use of Consultants. Trust representatives told us that they do not use consultants; rather, in-house investment staff conduct the research and evaluation process. Generally, investment staff identify potential asset managers by researching commercial or internally developed databases and then conduct an internal review, which includes reviewing information on prospective firms’ investment strategies and meeting with the firms. Investment staff subsequently develop a memorandum on the prospective asset managers and conduct a final internal analysis before recommending asset managers to trustees, who approve all final selections and the amount of the planned investment. Key Practices. According to Trust representatives, the Trust has taken steps to increase opportunities for MWO firms. Specifically, the Trust’s board has discussed the Trust’s selection practices as they relate to MWO firms on an ongoing basis during its quarterly meetings and has analyzed its use of MWO firms. In addition, Trust representatives told us they conducted outreach by meeting with trade associations representing MWO firms. Further, according to Trust representatives, the Trust does not have minimum size or length of track record requirements for its asset managers, which has removed potential barriers to smaller firms, including smaller MWO firms. Finally, according to Trust representatives, the Trust has tentatively approved revisions to its investment procedures manual that would incorporate statements about valuing diversity and having an inclusive asset manager selection process. 
Trust representatives told us that the Trust’s board intends to formally approve the revisions at its upcoming board meeting. Appendix III: Minority- and Women-Owned Brokerage Firms The federal entities we reviewed and the National Railroad Retirement Investment Trust do not directly hire brokerage firms. However, in conducting our work on nonfederal plans, we identified challenges minority- and women-owned (MWO) brokerage firms face that may limit them from fully competing for opportunities with institutional investors, including retirement plans and foundations, and the asset managers these entities use. Brand bias. According to two MWO brokerage firms with which we spoke, MWO brokers may have difficulty competing with larger brokerage firms that are well known in the industry and have long-standing relationships with asset managers. Size. One brokerage firm with which we spoke noted that some institutional investors and asset managers may have the misconception that smaller broker dealers cannot execute trades as effectively as large brokerage firms, and that using MWO brokers will cost more. Another brokerage firm told us that net capital thresholds were too high for newer, smaller MWO brokers to meet. As a result, MWO brokers may have greater difficulty competing for opportunities. Track record. Shorter track records are also a hindrance, according to brokerage firms we interviewed. For example, one brokerage firm noted that although most brokers may have worked with large corporations before starting their own companies, they may not get the credit for the experience they have because they are a new firm. As a result, they may be overlooked when investors search for brokerage firms. Some state, local, and private plans have taken actions to address these challenges and increase opportunities for MWO brokerage firms.
Communicate expectations. Two nonfederal plans and one foundation we interviewed directed their asset managers to use more inclusive practices for sourcing and hiring qualified MWO brokerage firms. Conduct outreach. A state plan also directly communicated with MWO brokerage firms at networking events, and directed its asset managers to do so as well. Report usage of MWO brokers. Some state and local retirement plans have legislative mandates to promote the use of MWO firms, including brokers, and require that their asset managers report on their use of MWO brokerage firms. For example, a local plan in Illinois sets MWO broker-dealer utilization percentage goals across asset classes for its asset managers, and requires that asset managers report monthly on MWO broker utilization. Appendix IV: Comments from Army and Air Force Exchange Service and Navy Exchange Service Command Appendix V: Comments from Federal Retirement Thrift Investment Board Appendix VI: Comments from Tennessee Valley Authority Retirement System Appendix VII: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Kay Kuhlman (Assistant Director), Kimberley Granger (Assistant Director), Erika Navarro (Analyst in Charge), Farah Angersola, Bethany Benitez, Raheem Hanifa, John Karikari, Risto Laboski, Jill Lacey, Marc Molino, Tom Moscovitch, Barbara Roesmann, Adam Wendel, and Amber Yancey-Carroll made key contributions to this report.
GAO was asked to examine, among other things, (1) competitive challenges MWO firms face and how institutional investors address them, (2) selected federal entities' use of MWO firms, and (3) the entities' asset manager selection processes, including their use of key practices. GAO reviewed investment policies and financial statements of 8 entities that manage or sponsor federal retirement plans, an endowment, and an insurance program. GAO also interviewed 14 state, local, and private retirement plans and foundations and 10 MWO asset managers (selected based on size and other factors). According to asset managers and industry associations with which GAO spoke, minority- and women-owned (MWO) asset managers face challenges when competing for investment management opportunities with institutional investors, such as retirement plans and foundations. For example, institutional investors and their consultants often prefer to contract with large asset managers with brand recognition and with whom they are familiar. Also, small firms, including MWO firms, are often unable to meet minimum requirements set by institutional investors, such as size (assets under management) and past experience (length of track record). State, local, and private retirement plans and foundations GAO interviewed addressed these challenges in a variety of ways, such as asking their consultants to include MWO firms in their searches. Many plans also lowered their minimum threshold requirements so that the requirements were proportional to the size of the firms while maintaining the same performance requirements for all asset managers in their selection processes. Federal retirement plans, the endowment, and the insurance program GAO reviewed invest in asset classes in which MWO asset managers have a market presence, but overall use of MWO firms varied. For example, some retirement plans either did not use any MWO firms or did not track this information. 
The endowment and insurance program reported using some MWO asset managers. GAO identified four key practices institutional investors can use to increase opportunities for MWO asset managers. These practices are consistent with federal interests in increasing opportunities for MWO businesses. Top leadership commitment. Demonstrate commitment to increasing opportunities for MWO asset managers. Remove potential barriers. Review investment policies and practices to remove barriers that limit the participation of smaller, newer firms. Outreach. Conduct outreach to inform MWO asset managers about investment opportunities and selection processes. Communicate priorities and expectations. Explicitly communicate priorities and expectations about inclusive practices to investment staff and consultants and ensure those expectations are met. Some federal entities GAO reviewed, such as the Federal Reserve System, have used all the practices, but others made partial, limited, or no use of the practices. The Federal Retirement Thrift Investment Board does not intend to use the practices in its planned mutual fund window platform. The Navy Exchange Service Command and Tennessee Valley Authority Retirement System used one practice, but have not used the others. The Army and Air Force Exchange Service has used two practices, and partially used two practices. By using the key practices, the entities GAO reviewed could widen the pool of candidates in their asset manager searches and help ensure that they find the most qualified firms. In keeping with federal interests, the practices could also help address barriers MWO firms face and increase opportunities for them.
MTS: Envisioning a Single Claims-Processing System Medicare is a huge program. As the nation’s largest health insurer, it serves some 38 million Americans by providing health insurance to those aged 65 and over and to many of the nation’s disabled. It disburses over $200 billion in health care benefits every year, and by 2000 is expected to be processing 1 billion claims annually. The Medicare program is divided into two components—part A and part B. Part A encompasses facility-based services, with claims paid to hospitals, skilled nursing facilities, hospices, and home health agencies. Part B comprises outpatient services, with claims paid to physicians, laboratories, medical equipment suppliers, and other outpatient providers and practitioners. Claims processing for the Medicare program is handled at some 45 sites throughout the country by about 70 private companies under contract with HCFA. Contractors handling part A services, called intermediaries, had been using three different computer systems to process claims; those handling part B, called carriers, used six different systems. In order to improve the efficiency and effectiveness of Medicare operations and better address fraud and abuse, HCFA planned to develop one unified computer system to replace the existing systems. In January 1994, HCFA awarded a contract to a software developer to design, develop, and implement the MTS automated claims-processing information system. In so doing, MTS was to aid HCFA in identifying fraud and abuse by utilizing an integrated database that would greatly improve HCFA’s ability to profile data by type of service on a national or regional basis. The single system would integrate data from Medicare part A and part B and managed care (a newer, third component), provide a comprehensive view of billing practices, and incorporate new technology to facilitate innovative investigative procedures. The MTS project encountered problems from the very beginning. 
It was plagued with schedule delays, cost overruns, and a lack of effective management and oversight. We repeatedly reported that HCFA was not applying effective investment management practices in its planning and management and, as a result, had no assurance that the project would be cost-effective, delivered within estimated time frames, or even improve the processing of Medicare claims. MTS costs had also escalated dramatically. As we testified in May, total estimated project costs jumped sevenfold in 5 years, from $151 million in 1992 to about $1 billion in 1997. I should point out that the $1 billion figure included costs for transitioning from the three part A and six part B systems to a single part A and a single part B system prior to implementing MTS, and for acquiring MTS operating sites. To justify the continuation of MTS, we recommended in May 1997 that HHS require HCFA to prepare a valid cost-benefit and alternatives analysis. Further, we recommended at that time that HHS withhold funding for proposed MTS operating sites until these sites were justified. We likewise identified critical areas in which HCFA was not using sound systems-development practices in managing its MTS software development contractor. HCFA had not developed the kinds of plans that are critical to systems success. This included missing or inadequate plans for three important components of systems development: requirements management, configuration management, and systems integration. Finally, HCFA had not adequately monitored its contractor's activities using measures of software development quality. These problems decreased HCFA's chances of controlling the development of systems requirements and software.

MTS Contract Is Terminated

Given the magnitude of problems that surfaced with MTS, along with runaway costs, HCFA further assessed the project's viability.
Faced with the prospect of spending hundreds of millions of dollars to acquire MTS operating sites along with additional millions of dollars for the software development effort, HCFA decided to terminate both the request for proposals for the sites and the entire software-development contract as well. On August 15, 1997, HCFA terminated the MTS contract on which it had spent about 3 and a half years and about $80 million to date—about $50 million for software development and about another $30 million for internal HCFA costs. What has that money purchased? A huge learning experience about the difficulty of acquiring such a large system under a single contract and a better understanding of the requirements for developing a Medicare claims processing system, but no integrated claims processing software to aid HCFA in preventing fraud and abuse. Still to be delivered to HCFA, at additional cost under the original contract, is a set of application requirements for what was to have been the managed care module. The agency is considering awarding another contract for the development and implementation of managed care software using these requirements. In addition, it is now beginning to reconsider its approach for identifying requirements and developing software for two features that were planned as part of MTS: a beneficiary insurance file and a financial management component.

Ongoing HCFA Technology Initiatives to Combat Fraud and Abuse

While the MTS termination delays one means of possibly combatting fraud and abuse, HCFA has two other independent information technology initiatives in this area that are continuing. These separate initiatives are analyzing the potential for using existing commercial software and exploring the possibilities for developing antifraud software. In May 1995, we reported on the potential benefits of HCFA's use of commercial software to help detect inappropriate medical coding, a common form of billing abuse.
We concluded that HCFA had not kept pace with private industry’s use of such software, and that HCFA’s internal efforts to develop the capability to detect such code manipulation were limited and unlikely to fully stem the losses being suffered from these abuses. We recommended that HCFA require Medicare carriers to use a commercial system to detect code manipulation when processing Medicare claims for physicians’ services and supplies. Although senior HCFA officials voiced their support for our recommendation to use modern information technology to strengthen payment controls, they did not begin to test the feasibility of using commercial code manipulation-detection software to process Medicare claims until about a year after we reported on its potential. Furthermore, any positive results from this testing are not expected to be implemented nationally for at least several years. In the meantime, hundreds of millions of dollars continue to be lost annually, some of which could have possibly been saved with timely implementation of this software. In addition to our report on opportunities to use commercial software to detect billing abuse, we reported in 1995 that new antifraud systems were available and being used by private insurers, some of whom were also Medicare carriers. Concluding that this technology could possibly complement existing Medicare systems, we recommended that HHS direct HCFA to develop a plan for implementing antifraud technology. However, HHS expressed three reservations about implementing new technology for identifying fraudulent patterns of behavior in the Medicare program. First, it said, the technology might not be applicable in a health insurance setting; second, that it might require substantial modification; and third, that more testing would be needed to assess its usefulness in detecting fraud in Medicare claims data. 
Rather than trying to adopt the commercially available software, HCFA chose to enter into an agreement that allowed it to explore the possibility of developing such software. Specifically, HCFA signed a 2-year, $6-million interagency agreement with the Los Alamos National Laboratory to assess the potential for identifying patterns of fraud. This agreement was recently extended for 3 additional months, until December of this year. As part of this agreement, Los Alamos has developed prototype approaches to detecting some suspicious part B claims. These approaches are currently being tested. To bring its work to fruition, the laboratory has submitted a 4-year, $13-million follow-up proposal to HCFA to use these approaches to design a system that will detect those and other suspicious claims. According to HCFA officials, they have agreed to a 4-year follow-up commitment and have approved $2.7 million for the fiscal year 1998 work. Usable results from this effort appear to be years away because, once the system's design is complete, HCFA would have to award another contract to a software developer to create software from the Los Alamos design. Further, according to laboratory officials, HCFA will have to acquire separate computers to implement any Los Alamos-based fraud detection system because its approaches, which were originally to become part of MTS, are not designed to be integrated with the standard part A and part B Medicare claims-processing systems to which HCFA is now transitioning.

Strong Management Essential Regardless of Project Direction

HCFA's negative experience with its automation projects represents a pattern we see throughout the federal sector: it is weaknesses in management, not technology itself, that stymie effective systems development and implementation. Managing information technology is not easy.
But the payoffs of success—and the significant cost of failure, in time and money—demand that agencies implement sound information technology practices. How can agency officials begin implementing such practices? A good place to start is with the Clinger-Cohen Act of 1996. Fueled by a decade of poor information technology planning and program management across government, the act sought to strengthen executive leadership in information management and institute sound investment decision-making to maximize the return on costly technology investments. It is important to note that just as technology is most effective when it supports defined business needs and objectives, Clinger-Cohen will be more powerful if it can be integrated with the objectives of broader governmentwide management reform legislation that HHS, HCFA’s parent department, is also required to implement. One such reform is the Paperwork Reduction Act of 1995, which emphasizes the need for an overall information resources management strategic planning framework, with information technology decisions linked directly to mission needs. Another reform is the Chief Financial Officers Act of 1990, which requires, among other things, that sound financial management practices and systems essential for tracking program costs and expenditures be in place. Still another reform is the 1993 Government Performance and Results Act, which focuses on defining mission goals and objectives, measuring and evaluating performance, and reporting results. Together, Clinger-Cohen and these other laws provide a powerful framework under which federal agencies have the best opportunity to improve the management and acquisition of information technology. We believe that if properly and fully implemented, the requirements of Clinger-Cohen and the Paperwork Reduction Act should help HHS and HCFA make real change and improve the way they acquire information technology and manage these investments. 
These acts emphasize establishing senior-level chief information officers (CIO), involving senior executives in information management decisions, and tightening controls over technology spending. HCFA has recognized the need to more effectively manage its information technology acquisitions and has taken several important steps. For example, late last year it established a CIO position and is now reportedly in the final stages of selecting an individual for the position. Such a position is essential to ensuring the success of the agency’s information technology initiatives. HCFA has also established an information technology investment review board involving senior executives. HCFA sees these actions as providing an integrated process for planning, budget development, performance-based management, and evaluation of information technology investments. We endorse these positive steps. However, much remains to be done to ensure that HCFA’s initiatives—or those of any agency—are cost-effective and serve its mission. HCFA has not yet implemented our recommendations in establishing investment processes that will allow it to maximize the value and manage the risks of its information technology acquisitions, and tightly control spending. In HCFA’s case, officials state that establishing investment management practices to support its recent changes will be an “iterative process” that will take time. To effectively manage as an investment any information technology it seeks to acquire, an agency—including HCFA—must be structured organizationally in a way that allows—even promotes—such an approach. This means providing a qualified top official with the authority and accountability to make critical management decisions on the basis of sound information. This structure should provide such information through systematic analyses that predict the kind of return on investment envisioned, in both a fiscal and technical sense. 
The agency then is obliged to use sound systems-development practices in managing its automation projects. Where such management has not been the norm, both HHS and the Office of Management and Budget should provide close oversight to ensure swift implementation of sound information technology management. Continuing congressional oversight would further assist in accomplishing this. This concludes my statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

GAO discussed: (1) the Health Care Financing Administration's (HCFA) Medicare Transaction System (MTS) and the recommendations GAO made to correct serious weaknesses in its management identified as part of a GAO review; (2) two continuing HCFA initiatives to combat fraud and abuse; and (3) underlying information technology management issues.
GAO noted that: (1) to improve the efficiency and effectiveness of Medicare operations and better address fraud and abuse, HCFA planned to develop one unified computer system to replace the existing system, but the project encountered problems from the beginning; (2) HCFA assessed the project's viability and decided to terminate both the request for proposals for the MTS sites and the entire software-development contract; (3) HCFA has two other independent information technology initiatives in the areas of fraud and abuse that are continuing--analyzing the potential for using existing commercial software and exploring the possibilities for developing antifraud software; (4) HCFA did not begin to test the feasibility of using commercial code-manipulation software to process Medicare claims until about 1 year after GAO reported on its potential; (5) any positive results from this testing are not expected to be implemented nationally for at least several years; (6) HCFA chose to enter an agreement with Los Alamos National Laboratory that allowed it to explore developing such software; (7) results from this program appear to be years away because once the system's design is complete, HCFA would have to award another contract to a software developer to create from the Los Alamos design; (8) HCFA would have to acquire new computers to implement any Los Alamos-based fraud detection system because its approaches, which were originally to become part of MTS, are not designed to be integrated with the Medicare claims processing systems to which HCFA is transitioning; (9) HCFA's negative experience with its automation projects represents a pattern of weaknesses GAO sees throughout the federal sector; weaknesses in management that stymie effective systems development and implementation; (10) the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, the Chief Financial Officers Act of 1990, and the 1993 Government Performance and Results Act, provide a powerful 
framework under which federal agencies have the best opportunity to improve the management and acquisition of information technology; (11) HCFA has recognized the need to more effectively manage its information technology acquisitions, and has taken several important steps, but much remains to be done to ensure that HCFA's initiatives are cost-effective and serve its mission; and (12) HCFA has not yet implemented GAO's recommendations in establishing investment processes that will allow it to maximize the value, manage the risks of its information technology acquisitions, and tightly control spending.
Background

Engaged employees are more than simply satisfied with their jobs. Instead, engaged employees are passionate about and energized by what they do; take pride in their work; are committed to the organization, the mission, and their job; and are more likely to put forth extra effort to get the job done. The Merit Systems Protection Board (MSPB) found that higher levels of employee engagement in federal agencies led to improved agency performance, less absenteeism, and fewer equal employment opportunity complaints. Similarly, a number of studies of private- and public-sector organizations have found that increased levels of engagement result in improved individual and organizational performance. In addition, studies of the private sector have established that firms with higher levels of employee engagement exhibit increased individual employee performance and productivity and have higher customer service ratings, while also having fewer safety incidents and less absenteeism and turnover. OPM has conducted the FEVS—a survey that measures employees' perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies—every year since 2010. The EEI was started in 2010 when FEVS became an annual survey and is composed of 15 FEVS questions covering the following areas:

Leaders lead, which surveys employees' perceptions of the integrity of leadership, as well as employees' perception of leadership behaviors such as communication and workforce motivation.

Supervisors, which surveys employees' perceptions of the interpersonal relationship between worker and supervisor, including trust, respect, and support.

Intrinsic work experience, which surveys employees' feelings of motivation and competency relating to their role in the workplace.

According to OPM, the EEI does not directly measure employee engagement, but it does cover most of the conditions likely to lead to employee engagement.
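To make the index concrete, the sketch below computes an EEI-style score under stated assumptions: each question is scored as the percent of positive responses (4 or 5 on a 5-point scale), each component averages its questions, and the overall index averages the three components. OPM's precise scoring method is not described here, so treat this purely as a hypothetical illustration with toy data.

```python
# Hypothetical sketch of an EEI-style score. Assumptions (not OPM's
# documented method): a question's score is the percent of positive
# responses (4 "agree" or 5 "strongly agree" on a 5-point scale); a
# component averages its questions; the index averages the components.

def percent_positive(responses):
    """Share of responses that are 4 or 5 on a 5-point scale, as a percent."""
    return 100.0 * sum(1 for r in responses if r >= 4) / len(responses)

def engagement_index(components):
    """components: dict mapping component name -> list of per-question
    response lists. Returns (overall index, per-component scores)."""
    scores = {
        name: sum(percent_positive(q) for q in questions) / len(questions)
        for name, questions in components.items()
    }
    return sum(scores.values()) / len(scores), scores

# Toy data: the three EEI components, two questions each, 5-point answers.
toy = {
    "leaders_lead": [[5, 4, 3, 2], [4, 4, 5, 1]],
    "supervisors": [[5, 5, 4, 4], [4, 3, 4, 5]],
    "intrinsic_work": [[3, 4, 4, 4], [5, 4, 2, 4]],
}
index, by_component = engagement_index(toy)
```

With this toy data, the leadership component scores lowest, the pattern the preliminary analysis below describes for the real EEI.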
Sometimes the EEI is discussed in the same context as another workforce metric known as the Best Places to Work rankings. Although the Best Places to Work scores are also derived from the FEVS, they differ from the EEI in that the Partnership for Public Service (Partnership) created the rankings as a way of rating employee satisfaction and commitment across federal agencies. The rankings are calculated using a weighted formula of three different questions from OPM's FEVS: (1) I recommend my organization as a good place to work, (2) considering everything, how satisfied are you with your job, and (3) considering everything, how satisfied are you with your organization.

Most Agencies Defied Government-wide Downward Trend and Maintained or Improved Engagement Levels

Our ongoing work indicates that the recent government-wide average decline in the EEI masks the fact that the majority of federal agencies either sustained or increased employee engagement levels during the same period. From 2006 through 2014, government-wide employee engagement levels initially increased—reaching a high of 67 percent in 2011—and then declined to 63 percent in 2014, as shown in figure 1. However, the decline in engagement is the result of several large agencies bringing down the government-wide average. Specifically, our preliminary work indicates that 13 out of 47 agencies saw a statistically significant decline in their EEI from 2013 to 2014; while this is only 28 percent of agencies, nearly 69 percent of federal employees are at one of those agencies, including the Department of Defense, Department of Homeland Security, and Department of Veterans Affairs. In contrast, the majority of agencies sustained or improved engagement, as shown in figure 2. Between 2013 and 2014, of the 47 agencies included in our analysis of the EEI, three increased their scores; 31 held steady; and 13 declined.
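Whether a year-over-year change like these counts as statistically significant depends on sample size as well as the size of the change. As a hypothetical illustration (not OPM's documented procedure), the sketch below applies a standard two-proportion z-test to percent-positive scores, showing how the same 2-point drop can be significant at a large agency but not at a small one. All respondent counts are invented.

```python
# Hedged sketch: a two-proportion z-test for whether an EEI change is
# statistically significant at the 95 percent level. Treats each year's
# EEI as a proportion of positive responses; counts are illustrative.
import math

def eei_change_significant(p1, n1, p2, n2, z_crit=1.96):
    """p1, p2: EEI scores as proportions (e.g., 0.63); n1, n2: respondent
    counts in each year. Returns (z statistic, significant at 95 percent)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, abs(z) > z_crit

# The same 2-point drop: significant with 40,000 respondents per year...
z_big, sig_big = eei_change_significant(0.67, 40000, 0.65, 40000)
# ...but not with 400 respondents per year.
z_small, sig_small = eei_change_significant(0.67, 400, 0.65, 400)
```

This is the intuition behind the limitation discussed later: in smaller agencies, large absolute differences are less likely to be significant.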
Leadership Component of the EEI Consistently Scores the Lowest

Based on our preliminary analysis, of the three components that comprise the EEI—employees' perceptions of agency leaders, supervisors, and their intrinsic work experience—employees' perceptions of leaders consistently received the lowest score, which at times was about 20 percentage points lower than the other components. Moreover, from a high point in 2011, leadership scores saw the greatest decrease and accounted for much of the government-wide average decline in the EEI, as figure 3 shows. The questions comprising the EEI leadership component focus on integrity of leadership and on leadership behaviors such as communication and workforce motivation. Three of the five questions are specific to senior leaders—department or agency heads and their immediate leadership team, responsible for directing policies and priorities and typically members of the Senior Executive Service or equivalent (career or political). Two are specific to managers—those in management positions who typically supervise one or more supervisors. We have previously reported that leaders are the key to organizational change—they must set the direction, pace, and tone, and provide a clear, consistent rationale that brings everyone together behind a single mission.

Federal Employee Viewpoint Survey Questions That Comprise the EEI

Leaders lead:
In my organization, senior leaders generate high levels of motivation and commitment in the workforce.
My organization's senior leaders maintain high standards of honesty and integrity.
Managers communicate the goals and priorities of the organization.
Overall, how good a job do you feel is being done by the manager directly above your immediate supervisor?
I have a high level of respect for my organization's senior leaders.

Supervisors:
Supervisors in my work unit support employee development.
My supervisor listens to what I have to say.
My supervisor treats me with respect.
I have trust and confidence in my supervisor.
Overall, how good a job do you feel is being done by your immediate supervisor?

Intrinsic work experience:
I feel encouraged to come up with new and better ways of doing things.
My work gives me a feeling of personal accomplishment.
I know what is expected of me on the job.
My talents are used well in the workplace.
I know how my work relates to the agency's goals and priorities.

The strength of the EEI supervisors component suggests that the employee-supervisor relationship is an important aspect of employee engagement. These questions focus on the interpersonal relationship between worker and supervisor and concern supervisors' support for employee development, employees' respect, trust, and confidence in their supervisor, and employee perceptions of an immediate supervisor's performance. Intrinsic work experience was the strongest EEI component prior to 2011, but fell during the period of government-wide decline in engagement levels. These questions reflect employees' feelings of motivation and competency related to their role in the workplace, such as their sense of accomplishment and their perception of utilization of their skills.

Pay Category and Supervisory Status Had the Widest Range of Engagement Levels

Our ongoing work has found that government-wide, the demographic groups with the widest gap between most engaged and least engaged were pay category and supervisory status. For example, respondents in progressively lower General Schedule (GS) pay categories had progressively lower levels of engagement government-wide. In contrast, employees in the SES pay category reported consistently higher engagement levels—at least 10 percent more than any lower pay category. According to our preliminary analysis, while there was less difference between the engagement levels of other pay categories, employees in the GS 13-15 categories were consistently higher than all other lower GS pay categories. Employees in the Federal Wage System consistently reported the lowest levels of engagement.
Similarly, respondents with fewer supervisory responsibilities had progressively lower levels of engagement government-wide. Generally, employees with higher supervisory status have more autonomy in how they do their work. Employees in higher pay categories are likely to have more supervisory responsibilities, so it is not surprising that the trends for each are similar. Variations in engagement by supervisory status are shown in figure 4. With respect to other demographic cohorts, our preliminary analysis shows that engagement levels tended to be similar, regardless of the respondents' gender, ethnicity (Hispanic or non-Hispanic), or work location (agency headquarters or field).

Key Practices Found to Strengthen Employee Engagement

Performance Conversations Are the Strongest Driver of Employee Engagement Levels

For our ongoing work we used regression analysis to test which selected FEVS questions best predicted levels of employee engagement as measured by our index, after controlling for other factors such as demographic characteristics and agency. Of the various topics covered by the FEVS that we analyzed, we identified six that had the strongest association with higher EEI levels compared to others, including (1) having constructive performance conversations, (2) career development and training, (3) work-life balance, (4) inclusive work environment, (5) employee involvement, and (6) communication from management (see table 1). In many ways, these and similar practices are not simply steps to better engage employees; they are also consistent with the key attributes of high performing organizations. Our preliminary results show that having constructive performance conversations was the strongest driver of employee engagement.
For the question "My supervisor provides me with constructive suggestions to improve my job performance," we found that, controlling for other factors, someone who answered "strongly agree" on that FEVS question would have on average a 20 percentage point higher engagement score, compared to someone who answered "strongly disagree" on the 5-point response scale. As we found in our March 2003 report on performance management, candid and constructive feedback helps individuals maximize their contribution and potential for understanding and realizing the goals and objectives of the organization. Our preliminary results also show that after constructive performance conversations, career development and training was the next strongest driver. For the question, "I am given a real opportunity to improve my skills in my organization," we found that someone who answered strongly agree to that question would have on average a 16 percentage point higher engagement score, controlling for other factors, compared to someone who answered strongly disagree. As we found in our earlier work on this topic, the essential aim of training and development programs is to assist the agency in achieving its mission and goals by improving individual and, ultimately, organizational performance. For the remaining four drivers, our preliminary results indicate that someone who answered strongly agree to those questions would have on average a 12 percentage point higher engagement score, controlling for other factors, compared to someone who answered strongly disagree. Importantly, our ongoing work suggests that these six practices were generally the consistent drivers of higher EEI levels when we analyzed them government-wide, by agency, and by selected demographic groups (such as agency tenure and supervisory status). Because these practices are the strongest predictors of engagement, they could be key starting points for all agencies embarking on efforts to improve engagement.
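The driver analysis described above can be illustrated with a simplified, hypothetical stand-in. GAO's actual model is a regression on individual-level FEVS data with demographic and agency controls, which is not reproduced here; this sketch approximates "controlling for agency" by comparing engagement scores between strongly-agree and strongly-disagree respondents within each agency before averaging the gaps, so agency-level baseline differences do not distort the estimated effect. All names and numbers are toy data.

```python
# Simplified, hypothetical illustration of a driver analysis: the average
# within-agency EEI gap between respondents who strongly agreed and those
# who strongly disagreed with a driver question (a crude proxy for a
# regression coefficient with agency controls).
from collections import defaultdict

def driver_gap(records):
    """records: iterable of (agency, strongly_agreed, eei_score) tuples.
    Returns the mean within-agency score gap (agree minus disagree)."""
    by_agency = defaultdict(lambda: {"agree": [], "disagree": []})
    for agency, agreed, score in records:
        by_agency[agency]["agree" if agreed else "disagree"].append(score)
    gaps = []
    for groups in by_agency.values():
        if groups["agree"] and groups["disagree"]:
            gaps.append(sum(groups["agree"]) / len(groups["agree"])
                        - sum(groups["disagree"]) / len(groups["disagree"]))
    return sum(gaps) / len(gaps)

# Toy data: agency B has a higher engagement baseline than agency A, but
# within each agency the strongly-agree gap is about 20 points.
data = [
    ("A", True, 70), ("A", True, 72), ("A", False, 50), ("A", False, 52),
    ("B", True, 85), ("B", True, 87), ("B", False, 65), ("B", False, 67),
]
gap = driver_gap(data)  # 20.0 in this toy example
```

Comparing within each agency first is what prevents agency B's higher baseline from inflating the gap, which is the intuition behind "controlling for other factors" in the regression.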
Agencies Are Taking Specific Steps to Strengthen Engagement During our ongoing work, we have found that agencies that have improved employee engagement, or that already have high levels of engagement, apply the drivers noted above. Their experience with what works can provide practical guidance for other agencies as they attempt to improve their own engagement scores. For example, at GAO—which has consistently placed among the top five agencies on the Partnership for Public Service’s Best Places to Work list since 2005—we have a number of initiatives related to the drivers of engagement. With respect to constructive performance conversations, at GAO, effective performance management is a priority. Performance conversations—including ongoing feedback and coaching—are expected to occur on a regular basis and not just as part of the annual appraisal process. Moreover, at all levels of the agency, supervisors are expected to create a “line of sight” connecting individual performance to organizational results. Likewise, with respect to an inclusive work environment, with involvement and support of top management, our Human Capital Office and our Office of Opportunity and Inclusiveness lead the agency through several continuous efforts, including (1) communicating the importance of diversity and inclusiveness from senior leaders, (2) linking SES/Senior Leader performance expectations to emphasize diversity, and (3) attracting and retaining a diverse workforce by, among other things, recruiting at historically black colleges and universities. Actions taken by other agencies can also provide insights about implementing key engagement drivers. 
For example, during our ongoing work, Education’s Office of the General Counsel (OGC) officials told us that they convened an office-wide meeting with employees at all levels to discuss the FEVS results—both to identify areas in which they could continue to build on positive trends, and also to identify opportunities for taking constructive steps to improve in other specific areas of the EEI scores. The focus of the conversation included steps that they could take to enhance and strengthen communication throughout the office, employee training and professional development, performance evaluation processes, and employee empowerment overall; as a result, Education’s OGC management introduced additional training and professional development opportunities and improved employee on-boarding through a new handbook and mentoring program. Education’s OGC officials said these opportunities—and the permanent, staff-driven Workforce Improvement Team (WIT) that formed as a result—have created feelings of stronger ownership, engagement, and influence in office decision making. Education’s OGC officials said that OGC’s management relies on the WIT for feedback to evaluate the effectiveness of improvement efforts. This strengthens two-way communication, which improves employee engagement and organizational performance. In another example, National Credit Union Administration (NCUA) officials told us that the head of the agency and its senior leaders communicate with line employees (who are mostly in the field) through quarterly webinar meetings. The meetings are scheduled to accommodate the field employees’ frequent travel schedule and generally start with any “hot topics” and continue with discussion of agency efforts to meet mission goals. The agency head takes questions in advance and during the webinar and, when needed, participants research and share responses with agency employees. 
According to NCUA officials, these regular, substantive conversations demonstrate top leadership’s commitment to line workers as valued business partners.

Agencies Need to be Sensitive to Limitations with EEI Data and Use Supplemental Information to Identify and Address Engagement Issues

OPM provides a range of different tools and resources to help agencies use EEI data to strengthen employee engagement. They include, for example, an online mechanism to share OPM-generated survey reports (at government-wide, agency specific, and sub-agency levels) to facilitate data analysis. OPM has also created an online community of practice to help share best practices. Our ongoing work indicates that these resources could provide agencies with needed support. However, when analyzing the information, it is critical that OPM highlight, and that agencies be aware of, various limitations in the EEI data that could affect agencies’ analyses. Our preliminary results found that these limitations include, for example, the following: The EEI Does Not Show Whether Changes Are Statistically Significant. OPM does not report whether changes to an agency’s EEI are statistically significant—that is, whether an up or down change is not due to random chance. As a result, agency officials may be misinterpreting changes to the EEI and acting on data that may not be meaningful. Although OPM provides agencies with absolute changes in the EEI, those increases and decreases are not always statistically significant. Our preliminary analysis of the FEVS showed that only 34 percent (16 of 47) of the absolute changes in agency EEI scores from 2013 to 2014 were statistically significant. In smaller agencies and at component or lower levels within larger agencies, large absolute differences are less likely to be significant. The EEI Calculation Does Not Allow for Analysis of Engagement Drivers.
Research on employee engagement emphasizes the importance of identifying the drivers of an engagement score as an initial step in improving employee engagement. For example, the Partnership for Public Service’s Best Places to Work guidance lists a driver analysis as a key element in determining where agencies should focus their action planning efforts. However, we found that the way OPM calculates the EEI precludes a driver analysis because individual-level data are needed to assess correlates of engagement, controlling for other factors. The Short Cycle Time Between Surveys Presents Analytical Challenges. According to some agency officials we spoke with, the short cycle time between one annual survey and the next and the amount of time it takes for organizational change to take effect could be problematic. For example, because the FEVS survey cycle begins around May and agencies receive results in September or October, it may be late winter or early spring before an agency will have designed an action plan. By this time, the next survey cycle is on the horizon, allowing little time for agencies to analyze, interpret, and implement their action plans. Moreover, the annual survey cycle may not allow enough time for employees’ perceptions to change before the next cycle begins. According to agency officials we interviewed, it can take at least a few years, sometimes more, for a particular organizational change to have an impact on employee engagement. As a result, when examining a particular change in engagement level, it could be unclear whether that change is due to an action implemented the previous year or a different action implemented several years earlier. Thus, determining what works and what does not could be challenging.
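The statistical-significance limitation noted earlier can be illustrated with a two-proportion z-test, one common way to check whether a change in a percent-positive score exceeds what sampling error alone would produce. This is only a simplified, unweighted sketch with hypothetical sample sizes; it is not OPM's actual estimation method, which relies on weighted survey data:

```python
import math

def two_prop_z_test(p1, n1, p2, n2):
    """Two-proportion z-test: is the change from p1 to p2 likely real,
    or within the range expected from sampling error alone?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# The same 2-point drop in a positive-response rate is significant with a
# large sample but not with a small one (hypothetical sample sizes).
_, p_large = two_prop_z_test(0.65, 20000, 0.63, 20000)
_, p_small = two_prop_z_test(0.65, 300, 0.63, 300)
print(f"n=20,000 per year: p = {p_large:.4f}")  # well below 0.05
print(f"n=300 per year:    p = {p_small:.4f}")  # not significant
```

This is why, as noted above, large absolute differences at small agencies or low organizational levels are less likely to be statistically significant than the same differences government-wide.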
While acknowledging the issues with short survey cycle time, OPM stated that agencies are increasingly using the FEVS as a management tool to help them understand issues at all levels of an organization and to take specific action to improve employee engagement and performance. An annual survey such as FEVS can help ensure that newly appointed agency officials (or a new administration) can maintain momentum for change, as the surveys suggest employees are expecting their voices to be heard. Further, OPM noted that if agencies, managers, and supervisors know that their employees will have the opportunity to provide feedback each year, they are more likely to take responsibility for influencing positive change. Given these limitations and agencies’ current uses of FEVS data, our preliminary results suggest that agencies will need to supplement FEVS data with other sources of information. For example, some agencies use facilitated discussions to better understand their EEI scores and to identify and implement strategies for improvement. Other quantitative data—such as turnover rates, equal employment opportunity complaints, and sick leave use—may provide insights as well. In conclusion, research on both private firms and government agencies has demonstrated the linkage between high levels of employee engagement and improved organizational performance. Given the complex and challenging missions agencies face as well as the myriad routine actions and services they perform on a daily basis—all within a constrained fiscal environment—agencies must make strengthening and sustaining employee engagement an integral part of their organizational culture and not simply a set of isolated practices. OPM recognizes this and has taken a variety of actions that, in concept, show promise for improving employee engagement government-wide.
They include (1) focusing agencies’ attention on strengthening engagement by leading efforts to implement the CAP goal; (2) establishing a performance target; (3) providing a variety of tools and resources to help agencies analyze FEVS data and share best practices; and (4) holding agencies and senior leaders accountable for specific efforts and achieving key results. At the same time, our ongoing work has shown that the EEI has limitations and the short time between survey cycles could be problematic. Agencies need to understand and address these limitations so that they properly interpret the information and target corrective actions accordingly. Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have.

A growing body of research on both private- and public-sector organizations has found that increased levels of engagement—generally defined as the sense of purpose and commitment employees feel towards their employer and its mission—can lead to better organizational performance. This testimony is based on GAO’s ongoing work examining the federal government’s efforts to improve employee engagement, including (1) trends in employee engagement from 2006 through 2014; (2) practices that could strengthen engagement levels based on the EEI results and the experiences of selected agencies and GAO; and (3) certain limitations of the EEI that will be important for agency managers and leaders to consider as they use this metric to assess and improve engagement within their own organizations. To identify engagement trends, GAO analyzed responses to FEVS questions from 2006 through 2014 from which the EEI is derived. To identify drivers of the EEI in 2014, GAO conducted a regression analysis.
To identify practices that could strengthen engagement, GAO interviewed officials at OPM and three case study agencies (selected for sustained or increased EEI levels) that were responsible for engagement efforts. GAO’s ongoing work indicates that the recent government-wide decline in engagement, as measured by the Office of Personnel Management’s (OPM) Employee Engagement Index (EEI), masks the fact that the majority of federal agencies either sustained or increased employee engagement levels during the same period. Government-wide, engagement has declined 4 percentage points from an estimated 67 percent in 2011 to an estimated 63 percent in 2014. This decline is attributable to several large agencies—like the Department of Defense and Department of Homeland Security—bringing down the government-wide average. Specifically, 13 out of 47 agencies saw a statistically significant decline in their EEI from 2013 to 2014. While this is 28 percent of agencies, they represent nearly 69 percent of the federal workforce. However, the majority of federal agencies either sustained or increased engagement levels during this period. Specifically, from 2013 to 2014, 31 agencies sustained and 3 agencies increased their engagement level. GAO’s preliminary analysis of selected Federal Employee Viewpoint Survey (FEVS) questions indicates that six practices were key drivers of the EEI: constructive performance conversations, career development and training opportunities, work-life balance, inclusive work environment, employee involvement, and communication from management. Importantly, these practices were generally the consistent drivers of higher EEI levels government-wide, by agency, and by selected employee characteristics (such as federal agency tenure) and therefore could be key starting points for agency efforts to improve engagement. Some agencies that have improved employee engagement, or that already have high levels of engagement, apply these practices.
OPM provides a range of tools and resources to help agencies use EEI data to strengthen employee engagement. They include, for example, an online tool to share OPM-generated survey reports to facilitate agency data analysis. GAO’s ongoing work indicates that these resources could provide agencies with needed support. However, OPM does not report whether changes to an agency’s EEI are statistically significant—that is, whether an up or down change is not due to random chance. As a result, agency officials may be misinterpreting changes to the EEI and acting on data that may not be meaningful. GAO’s preliminary analysis of the FEVS shows that 34 percent of the absolute changes in agency EEI scores from 2013 to 2014 were statistically significant. In smaller agencies and at component or lower levels within larger agencies, large absolute differences are not always significant. GAO’s ongoing work has noted that agency officials need to understand and take this (and other limitations) into account so that they properly interpret the information and target corrective actions accordingly.
Background

USPS’s capital investment process incorporates two main activities: planning and project approval (see table 2). In addition, USPS implemented tollgates in December 2006 to enable IRC awareness, involvement, and decision making for all investments greater than $5 million (see fig. 2). According to USPS, tollgates are updates intended to keep the IRC informed by raising and resolving issues throughout the phases of the capital investment review and approval process. Given its financial position, however, USPS is limited in its ability to finance capital investments; it has spent $17.5 billion on capital investments over the past 10 fiscal years but has sharply decreased its spending since fiscal year 2007 (see fig. 3). Since 2009, USPS has taken actions to address its financial situation that affect its management of capital investments. For example: In 2009, USPS implemented a capital-spending freeze to conserve resources. Any capital initiative seeking funding must receive an exception-approval from USPS’s Finance and Planning unit demonstrating that the initiative (1) is needed for safety, health, or legal requirements; (2) is required to sustain customer service, such as mail delivery; or (3) will have a high return on investment with a short payback period. In 2011, USPS created Delivering Results, Innovation, Value and Efficiency (DRIVE), a portfolio of strategic initiatives, which is intended to improve business strategy development and execution. DRIVE consists of managing a portfolio of 44 initiatives, half of which are active, that are intended to help close USPS’s income gap by 2015. For example, one DRIVE initiative—Build a World Class Package Platform—seeks to build an infrastructure to support and promote USPS’s growing package delivery business. USPS’s executive leaders developed and oversee the DRIVE initiatives.
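The freeze's third exception criterion, a high return on investment with a short payback period, rests on two simple calculations. The sketch below uses hypothetical figures, not actual USPS data, and omits discounting for simplicity:

```python
def payback_period(initial_cost, annual_savings):
    """Years of savings needed to recover the up-front investment."""
    return initial_cost / annual_savings

def simple_roi(initial_cost, annual_savings, horizon_years):
    """Total net benefit over the horizon as a fraction of the cost."""
    return (annual_savings * horizon_years - initial_cost) / initial_cost

# Hypothetical equipment proposal (illustrative only, not USPS figures):
# $10M up front, $4M/year in operating savings, evaluated over 5 years.
cost, savings, horizon = 10_000_000, 4_000_000, 5
print(f"Payback period: {payback_period(cost, savings):.1f} years")  # 2.5
print(f"5-year ROI:     {simple_roi(cost, savings, horizon):.0%}")   # 100%
```

A fuller business case would discount future savings to present value, but the undiscounted version shown here is enough to see how a short payback and a high return would be demonstrated for an exception-approval.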
USPS’s Office of Inspector General (OIG) recently assessed DRIVE and found that the management process compares favorably to 20 best-in-class program management practices identified at 13 private sector companies. In 2012, USPS developed a 5-year plan to restructure and modernize its operations and improve its financial situation. The 5-year plan estimates $2 billion annually for capital outlays for fiscal years 2015 through 2017. Notwithstanding these actions, our prior work has shown that although a successful capital investment depends on a range of factors, following leading practices will more likely result in investments that meet mission needs, are well-managed, and achieve cost, schedule, and performance goals. Figure 4 shows the capital investment phases and the leading practices that we identified for each.

USPS’s Conformance to Leading Practices Varies by Capital Investment Phase

In summary, we determined that USPS’s conformance to leading practices is medium for planning, selecting, and managing capital investment projects, and low for evaluating them, based on our review of USPS policy and the practices employed for the five selected projects.

For Planning Capital Investments, USPS’s Conformance to Leading Practices Is Medium

USPS substantially conformed to three, and partially conformed to two, of five leading practices for planning capital investments (see fig. 6). Specifically, we found that USPS’s process for planning capital investments substantially: Identifies mission needs and gaps in service: Each year, USPS issues a capital commitment budget call to the executive leaders, vice presidents, and budget coordinators to identify mission needs and gaps in services. Executives are to work with their teams to submit their capital investment requests for the next fiscal year and include a brief narrative describing each proposed project, why it is needed, the amount needed for the project, and whether it supports a DRIVE initiative.
Reviews and approves the framework for selecting investments: USPS has several different sources of guidance for capital investments. The most comprehensive guidance is USPS’s General Investment Policies and Procedures. Since those policies and procedures were developed, USPS has created its tollgates (see figure 2). Program managers told us that they also use USPS’s Technology Acquisition Management Process Guidelines as their primary guidance. This guidance, however, does not include the updates made in December 2011. USPS also conducts annual internal investment overviews for selected projects. These various sources comprise USPS’s framework for selecting its investments. USPS officials told us that the current guidance is a slide presentation available on the agency’s internal website. They also said that they are planning to update the guidance, but did not provide a time frame for completing this effort. Thus, while USPS has reviewed and approved a framework for selecting its investments, it does not currently have a clear, single-source, standard set of policies and procedures that reflect the selection framework. A time frame for completing efforts to update its policies and procedures into a single-source guide could better position USPS to hold its managers accountable for completing the effort as intended. Moreover, a single-source guide could enable better transparency for selecting investments. Such transparency would establish crucial accountability for limited resources. Develops a long-term capital investment plan: USPS has a 10-year capital investment plan that is used for internal-planning purposes, even though actual capital investments depend on budgetary resources.
In addition, we found that USPS’s process for planning capital investments partially: Links investments to its strategic plan: USPS has linked capital investments to strategic initiatives listed in its 5-year business plan, which was developed in 2012 to bring USPS to a point of financial viability. The business plan addresses financial challenges facing USPS, actions the agency plans to take to address its financial outlook, and external factors USPS believes could inhibit it from financial viability. USPS’s 5-year business plan also contains seven “strategic initiatives” that are linked to DRIVE initiatives (see app. II for more detail). USPS refers to its 5-year business plan as its strategic plan. However, USPS’s business plan focuses on the agency’s financial condition, while a traditional strategic plan is more comprehensive and is intended to address the agency’s overall mission. The business plan also omits elements traditionally identified in a strategic plan that could affect USPS’s ability to achieve some of the DRIVE initiatives. Specifically, OMB A-11 guidance states that management should assess whether the investment needs to be undertaken by the requesting agency because no alternative private sector or governmental source can better support the function. USPS, however, did not consider the potential for a private sector entity to support part or all of a processing or delivery function. For example, USPS contracted for the development of its PARS software system, but did not consider whether a private sector entity—either in partnership with USPS or independently—could perform part or all of the function of automatically redirecting undeliverable mail. USPS officials told us that a provision in its labor contracts limits its ability to consider external entities for supporting an entire function; however, USPS is not specifically prohibited from doing so.
The officials added that USPS has a Strategic Initiative Action Group that reviews, approves, and monitors proposed outsourcing initiatives to ensure that they meet the requirements of these bargaining unit agreements. However, USPS did not provide evidence that it gave such consideration to its mail-processing or delivery functions. It is also not clear whether USPS considered certain strategies used by foreign postal operators. For example, according to the USPS OIG, contracting with private operators to sell aging vehicles could provide immediate cash, and leasing a new fleet of vehicles could result in the operational benefits of having a modern fleet without assuming fixed costs. Until USPS modifies its policies to require such consideration, USPS may not be placing itself in a position to identify the best option for reducing costs and increasing the quality of its capital investments. USPS officials told us, however, that many foreign postal operations have been privatized and are not subject to the same government oversight as USPS. In February 2011, we also reported on how the strategies of foreign posts can inform USPS modernization.

For Selecting Capital Investments, USPS’s Conformance to Leading Practices Is Medium

USPS substantially conformed to three, and partially conformed to two, of five leading practices for selecting capital investments (see fig. 7). Specifically, we found that USPS’s process for selecting capital investments substantially: Ranks and prioritizes investments based on mission needs and projected return on investment: After USPS makes its initial budget call, the executive leaders rank and prioritize capital requests based on need, the expected return on investment, the impact on customer experience, and the ability to support key initiatives. Submits business cases to an external entity: As part of the IRC review process, USPS submits business cases for all projects with total funding over $5 million to its OIG for review.
For investments $25 million or greater, USPS OIG shares its assessments and conclusions with the USPS program sponsor, USPS headquarters officials, the IRC, and with Congress. Links investments with budget considerations: USPS links capital investments with the overall budget when developing its annual Integrated Financial Plan. Specifically, as described in table 2, during budget planning, the Finance Infrastructure unit recommends projects for inclusion in the capital budget through a multi-step review process, the results of which are then incorporated into the Integrated Financial Plan. In fiscal year 2013, capital budget requests totaled approximately $2.1 billion, while forecasted capital needs totaled $752 million. USPS partially conformed to developing its business cases and allocating resources toward a desired portfolio. Investment portfolios are broad categories of investments that are linked by similar missions to better fulfill that specific mission and minimize overlapping functions. A portfolio perspective enables an organization to focus on projects that best meet its overall goals, rather than on projects that only meet the objectives of specific program areas. Given USPS’s financial condition, a portfolio approach is especially important. However, with the exception of the DRIVE initiative, USPS develops its business cases for approval and allocates resources by project rather than by portfolio. More specifically, USPS officials develop business cases for each specific project that include projected return on investment for approval and funding. As our prior work at federal agencies making substantial capital investments has shown, selecting investments on a project rather than portfolio approach may lead to duplicative functions that do not integrate well together to perform the desired mission. 
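The difference between project-by-project approval and a portfolio approach can be sketched as a budget-constrained selection problem: rather than asking whether each project clears a return threshold on its own, a portfolio view asks which combination of projects yields the most value within the available budget. The project names, costs, and returns below are entirely hypothetical:

```python
from itertools import combinations

# Hypothetical candidate projects: (name, cost $M, expected return $M).
projects = [
    ("sorter upgrade",    40, 70),
    ("fleet telematics",  25, 45),
    ("package platform",  60, 95),
    ("facility retrofit", 35, 48),
]
BUDGET = 100  # $M available for capital outlays

def best_portfolio(projects, budget):
    """Exhaustively pick the set of projects with the highest total
    expected return that fits within the budget (fine at this scale;
    larger portfolios would call for a knapsack solver)."""
    best, best_return = (), 0
    for r in range(1, len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            ret = sum(p[2] for p in combo)
            if cost <= budget and ret > best_return:
                best, best_return = combo, ret
    return best, best_return

chosen, total = best_portfolio(projects, BUDGET)
print([p[0] for p in chosen], f"expected return ${total}M")
```

Approving each project in isolation could fund whichever requests arrive first until the budget runs out; the portfolio view instead weighs all candidates together, which is the kind of trade-off a comprehensive portfolio approach would make visible.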
Furthermore, modifying USPS’s policies to require a comprehensive portfolio approach would enable USPS to consider proposed projects alongside those that have been funded to select the mix of investments that best meets its mission needs.

For Managing Capital Investments, USPS’s Conformance to Leading Practices Is Medium

USPS substantially conformed to two, and partially conformed to two, of four leading practices for managing capital investments (see fig. 8). Specifically, we found that USPS’s process for managing capital investments substantially: Establishes accountability and oversight for prudent use of resources: USPS policy establishes accountability and oversight of resources by assigning a leader to oversee capital investment projects. For example, the executive leaders assign one member as accountable for each DRIVE initiative. USPS assigned a program manager to each of the five selected projects that we reviewed. In addition to oversight, these program managers were responsible for directing and controlling program activities and addressing challenges. For example, with respect to challenges, the APBS program manager told us that he coordinated the logistics and timing of work for three individual contracts on his project—which included managing three requests for proposal, three statements of work, and three suppliers. Tracks cost, schedule, and performance data for investments: USPS policy calls for tracking cost, schedule, and performance data for investments with approved funding greater than $5 million. For DRIVE initiatives, USPS developed a dashboard to monitor the progress of financial and nonfinancial milestones and impacts. Generally, the dashboard flags cost, schedule, and performance milestones that are in danger of being missed or have been missed. For capital investments greater than $25 million, USPS issues an Investment Highlights publication semiannually to provide a detailed, single-source overview.
The Investment Highlights publication also contains an electronic reference that tracks cost, schedule, and performance data for all capital investments over $5 million, including those for each of the five projects we reviewed. Program managers for the selected projects told us that updates might be provided to senior management more frequently—on a weekly, monthly, or as-needed basis. In addition, we found that USPS’s process for managing capital investments partially: Reassesses risk by identifying investments that are over budget, behind schedule, performing poorly, and lacking capability: While USPS policy calls for reassessing risk by identifying investments that are over budget, behind schedule, performing poorly, and lacking capability, this practice was not consistently followed for the five projects we reviewed. Prior to December 2011, USPS policy stated that the IRC was to receive two briefings, at the conversion and execution tollgates, explaining how planned investment, timeline, and performance metrics compared to actual results. The purpose was to reassess and determine whether to continue, amend, or terminate a project. USPS officials told us that they made an internal decision in December 2011 to no longer require in-person briefings and instead have executive leaders work more closely with program managers and staff. The executive leader is then responsible for updating the IRC on the project status. In addition, USPS policy calls for a program sponsor to prepare a modified business case with updated costs, timelines, benefits, and scope if a capital investment is expected to exceed its approved funding or deviate significantly from its approved scope. The modified business case is then presented to the original authorities, who vote whether to approve the modification, which would include any corrective actions, in order to continue the capital investment.
However, USPS officials could only verify that a reassessment decision to continue, amend, or terminate an investment occurred for one of the five projects we reviewed. While a conversion briefing was held for four of the projects, and a business case modification was held for the fifth, USPS officials could only provide evidence of an IRC reassessment for DBCS, which USPS decided to continue. Examining the extent to which managers regularly reassess projects to continue, amend, or stop them would help manage risk, given limited resources. Identifies problems and implements corrective actions as needed: USPS officials told us that executive leaders typically hold regular meetings with the program managers and other team members, meetings that have led to identifying problems and implementing corrective actions for continuing a project. For the five selected projects we reviewed, however, USPS did not provide documentation that such discussions were held. Effective decision making relies on a free exchange of information among a variety of stakeholders—particularly those who might be skeptical about an investment and can provide constructive insight and information; the more open the process, the more likely errors in fact or methodology will be uncovered. The absence of a transparent reassessment of risk to identify projects that need to be amended or terminated inhibits USPS’s ability to implement needed corrective actions for projects that are over budget, behind schedule, or not meeting performance targets. Therefore, examining the extent to which managers identify problems and implement corrective actions can better position USPS to make the best use of its resources.

For Evaluating Capital Investments, USPS’s Conformance to Leading Practices Is Low

USPS partially conformed to the three leading evaluation practices (see fig. 9).
Specifically, we found that USPS partially conformed to the leading practice of: Evaluating cost, schedule, and performance results of implemented investments: USPS policy calls for a comparison of the actual return-on-investment and performance data for completed projects against the expected return-on-investment and performance results in the business case. This comparison is usually included in the detailed capital investment reports—based on compliance reports—that are used to create the Investment Highlights publication. To assist with evaluating an investment, the compliance reports are to be regularly updated. In addition, USPS sometimes performs a final performance study. However, the detailed capital investment reports for four of the five projects we reviewed did not have observed return-on-investment data that could be compared to expected return on investment, and two projects did not have actual performance metrics compared to their expected results. To explain the missing comparisons, USPS officials told us that if a project is on budget and on schedule, they assume that it will achieve the expected return on investment. If performance data are missing, USPS officials told us that the program sponsor can ask the program manager for this information. One project, APBS, had complete return-on-investment and performance data comparable to its business case. Return-on-investment measures provide managers with valuable insight regarding any financial benefit attributable to a project. Performance measures help to identify problems, evaluate underlying factors, and determine needed adjustments. The absence of updated return-on-investment and performance data means that USPS cannot completely (1) assess the investment’s impact on strategic performance, (2) identify modifications that may be needed to improve performance, and (3) revise the investment process based on lessons learned.
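The expected-versus-actual comparison that the compliance reports are meant to support can be sketched as follows. The project names reuse two from the report for readability, but all figures and metric names are hypothetical:

```python
# Hypothetical post-completion records; None marks data that was never
# collected, the kind of gap described above.
projects = {
    "APBS": {"expected_roi": 0.22, "actual_roi": 0.19,
             "expected_throughput": 9500, "actual_throughput": 9800},
    "PARS": {"expected_roi": 0.30, "actual_roi": None,
             "expected_throughput": 12000, "actual_throughput": None},
}

def review(rec):
    """Compare expected vs. actual results; flag metrics with no data."""
    findings = {}
    for metric in ("roi", "throughput"):
        expected = rec[f"expected_{metric}"]
        actual = rec[f"actual_{metric}"]
        if actual is None:
            findings[metric] = "missing actual data"
        else:
            findings[metric] = round(actual - expected, 4)  # actual minus plan
    return findings

print(review(projects["APBS"]))  # {'roi': -0.03, 'throughput': 300}
print(review(projects["PARS"]))  # both metrics flagged as missing
```

A review structured this way makes missing data explicit rather than letting an on-budget, on-schedule project be assumed to have achieved its expected return.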
As a result, regularly reassessing projects by reviewing actual performance results after investment completion can provide USPS with a valuable opportunity to gain the feedback necessary for improving future capital investments. Leveraging external oversight and review of its capital investments: For initiatives with approved capital funding greater than $25 million, USPS semiannually provides its OIG the Investment Highlights publication for informational purposes, but does not seek oversight or feedback from its OIG or other entities, such as a consultant or peer reviewer. Subject matter experts have found that a third party should evaluate a capital investment using a predetermined set of metrics that will result in real data on which to make improvements in the process or to inform future decisions on capital investments. External oversight and review is important and useful to hold an entity accountable for its performance. Incorporating best practices and lessons learned into the investment process: USPS does not require developing or updating best practices after project completion. Nevertheless, one of the program managers for a selected project that we reviewed told us that USPS identified, documented, and followed industry best practices for program implementation—based on experience with various program vendors and contractors. Regarding lessons learned, USPS policy calls for documentation of “unexpected situations” upon project completion but does not require communicating lessons learned to other program managers. One of the program managers for the five selected projects provided us with documented lessons learned, but could not provide evidence of how such lessons were used. Another program manager told us that sometimes, when a project is completed, managers are assigned to new projects without adequate time to document lessons learned on the completed project.
The absence of documented best practices and lessons learned that could be incorporated into the capital investments process limits opportunities for USPS to improve its process in a way that could benefit future investments.

Conclusion

Given that USPS’s financial situation and the limitations of its business model hamper its ability to prevent future losses, it is crucial for USPS to use its scarce resources to prioritize and make wise capital investments, particularly those that reduce costs. USPS has taken positive steps in this direction, and our analysis of a range of projects shows that USPS substantially followed the majority of leading practices for planning and selecting capital investments. USPS’s 5-year business plan, for example, is a positive step toward an agency-wide strategic plan that links capital investments to strategic initiatives. However, substantially following all leading practices—including those for managing and evaluating capital investments—could better ensure that USPS’s investments are well-managed and achieve cost, schedule, and performance goals, an accomplishment that in turn could enhance USPS’s financial viability. Conversely, not substantially following all leading practices may result in inefficient spending of limited resources, thereby putting USPS at even further financial risk.

Recommendations for Executive Action

To strengthen USPS’s capital investment process, we are making three recommendations related to USPS policy and consistent application of leading practices. The Postmaster General and executive leaders should: 1. Establish a time frame for developing a clear, detailed, single-source, standard set of policies and procedures that reflect the capital investment selection phase; 2.
Modify capital investment policies to more closely align with the following leading practices, including: for planning capital investments, consider whether an external entity could better support all or part of a desired function when evaluating alternative capital investment options; for selecting capital investments, use a portfolio approach for developing business cases and finalizing and allocating resources; and for evaluating capital investments, seek and leverage external oversight and review, from a consultant or peer reviewer, and require that best practices and lessons learned be incorporated into the review process; and 3. Regularly examine the extent to which executives and program managers consistently follow all leading practices, particularly for: identifying problems and reassessing risk while managing a project; and evaluating the cost, schedule, and performance results of completed projects. Agency Comments and Our Evaluation We provided a draft of this report to USPS for review and comment. USPS provided comments, which are reprinted in appendix III. USPS concurred or partially concurred with our recommendations and stated that there are always opportunities for improvement and that it can clearly benefit from our recommended actions to strengthen its investment process. USPS partially concurred with our first recommendation that USPS establish a time frame for developing a clear, detailed, single-source set of standard policies and procedures that reflect the capital-investment selection phase. USPS responded that it plans to revise its General Investment Policies and Procedures handbook during the second quarter of fiscal year 2014 to include the capital-investment selection process. However, USPS also stated that an established capital selection process is already in place, although it is not included in the handbook. 
As described in the report, USPS has several sources of guidance and it does not have a clear, single-source, standard set of policies and procedures that reflect the selection framework. We are pleased that USPS has now established a time frame for developing such guidance, which could better position USPS to hold its managers accountable and better enable transparency for selecting investments. USPS partially concurred with our second recommendation to modify its capital investment policies to more closely align with leading practices in the areas of planning, selecting, and evaluating capital investments. With regard to planning capital investments by considering whether an external entity could better support all or part of a desired function when evaluating alternative investment options, USPS concurred. However, USPS stated it currently has procedures in place to consider viable options and provided examples of cases in which it has outsourced non-core functions. USPS further noted that it considers outsourcing work that is currently performed by bargaining unit employees and outsourcing will always be considered if it is in the financial best interest of USPS and meets collective-bargaining requirements. However, as noted in the report, USPS did not provide us with documentation that it considered outsourcing part or all of a mail-processing or delivery function. USPS could better conform to this leading practice if it considered the potential role of an external entity for all capital investments. This could place USPS in a better position to identify the best option for reducing costs and to increase the quality of its investments. 
With regard to selecting capital investments by using a portfolio approach for developing business cases and finalizing and allocating resources, USPS partially concurred and stated that it uses an organization-wide portfolio approach for review during the capital-budget-planning process, and that it will continue to clearly communicate the portfolio approach and incorporate it into USPS’s revised investment policy. However, we found that USPS develops its business cases for approval and allocates resources by project rather than portfolio. As noted in the report, selecting investments on a project rather than a portfolio basis may lead to duplicative functions that do not integrate well to perform the desired mission. Requiring a more comprehensive portfolio approach would enable USPS to consider proposed projects alongside those that have been funded to select the mix of investments that best meets its mission needs. Thus, we continue to believe that USPS should use a portfolio approach for developing business cases to fully address the recommendation. With regard to evaluating capital investments by seeking and leveraging external oversight and review and requiring that best practices and lessons learned be incorporated into the investment process, USPS concurred. However, USPS responded that it has previously contracted with highly respected external entities to review capital investment plans, and that it will continue to evaluate the need to hire consultants when necessary. While this is a positive practice, USPS can extend its external oversight and review when evaluating capital investments in addition to reviewing investment plans. In addition, external oversight and review are leading practices for all investments that could help USPS obtain data on which to make improvements in the process or to inform future investment decisions. 
USPS also stated that it will ensure that lessons learned and investment performance results are shared with its management and will be available for review on USPS’s internal website. With further regard to lessons learned, USPS also stated that it will ensure that each group is aware of the status of major investments and the lessons learned for current and future projects. However, USPS did not state that it would modify its investment policies to require that best practices and lessons learned be incorporated into the review process. We continue to believe that USPS should modify its investment policies as recommended. The absence of consistent documents for all investments limits opportunities to improve the process to benefit future investments. USPS concurred with our third recommendation that it regularly examine the extent to which executives and program managers consistently follow all leading practices, particularly for: identifying problems and reassessing risk while managing a project and evaluating the cost, schedule, and performance results of completed projects. USPS stated that it would require program sponsors to ensure that their presentations appropriately address project cost, schedule, and performance information. This is a positive step toward improving USPS’s ability to gain feedback necessary for improving future capital investments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix IV. Appendix I: Scope and Methodology To describe the extent to which the U.S. Postal Service (USPS) follows leading practices, we compared USPS’s process for planning, selecting, managing, and evaluating capital investments to leading practices. We identified leading practices for the four phases—planning, selecting, managing, and evaluating capital investments—through analysis and review of the Office of Management and Budget (OMB) Capital Programming Guide supplement to Circular A-11, which identified leading practices from government agencies and the private sector, and examples of executive agency implementation of the Capital Planning and Investment Control (CPIC) process. We identified leading practices that are applicable to USPS’s business model, which requires it to fulfill its mission of providing prompt, reliable, and efficient universal service to the public while remaining financially self-sustaining. However, unlike a private corporation, USPS is bound by legal and other restrictions that limit its ability to make certain types of business decisions—such as eliminating particular lines of businesses, cutting back on services, and/or altering its business model in ways that inhibit its universal service provision. External and internal subject matter experts with experience in both the public and private sectors reviewed the leading practices we identified and found them to be reasonable for USPS capital investments as applicable. These experts all provided comments that we incorporated into our capital investment leading practices. We compared our identified leading practices to USPS’s capital investment process. 
To understand USPS’s capital investment process, we reviewed USPS policy, documentation, and testimonial evidence on the capital investment process. USPS officials reviewed our initial description and provided feedback, which we incorporated into our work. We also met with USPS officials responsible for the Delivering Results, Innovation, Value and Efficiency (DRIVE) strategic initiatives to assess the extent to which USPS conformed to the leading practice of linking its capital investments to its strategic plans. We also met with and obtained documentation from USPS’s Office of Inspector General (OIG), to obtain an overall assessment of USPS’s capital investment process. To gather more detailed information about how USPS policies were applied in specific cases, and to determine whether USPS policies were consistently followed for a selection of high-cost capital investment projects, we selected 5 of 28 projects that were approved for over $25 million and were approved for funding after USPS experienced net losses in fiscal year 2007. From these 28 projects, we selected the four that had a positive projected return on investment as determined by USPS, were not specific to a particular geographical area, and were completed by fiscal year 2012. We included a fifth project (that was fully deployed, but not yet complete) due to USPS’s significant investment in the project (see table 3). We met with the program managers for each of the projects and reviewed documentation to assess the process for managing and evaluating these projects, and we conducted site visits to see four of the five projects. The selected investments do not support generalizations about the overall extent to which USPS followed leading practices for its capital investments, but rather illustrate whether and how policies were applied in specific cases. We also met with the USPS OIG to discuss USPS’s management and evaluation of the selected projects. 
Our analysis found that each investment phase—planning, selecting, managing, and evaluating—should consist of a series of leading practices that should be followed while the projects are within that phase. We used two different rating scales to assess the leading practices and capital investment phases. For each leading practice, we assessed USPS’s level of conformance, as follows: Substantial: USPS policy conformed to all or almost all elements. Partial: Either (1) USPS policy conformed to some elements; or (2) USPS policy conformed substantially, but we identified instances in the five projects we reviewed where the policies were not consistently applied. Minimal/none: USPS policy conformed to few or no elements. Then, for each investment phase, we assessed USPS’s level of conformance, as follows: High: USPS substantially conformed to all or almost all of the leading practices. Medium: USPS substantially conformed to multiple leading practices. Low: USPS substantially conformed to one or none of the leading practices. To describe the effects of not substantially conforming to a leading practice, we reviewed prior GAO work and the work of others including the USPS OIG and the National Research Council, and OMB and CPIC guides. To report on USPS expenditures on capital investments for the past 10 years and to identify the five projects that met our selection criteria, we requested data from USPS. We assessed the reliability of these data through review of related documents and interviews with knowledgeable agency officials. We found the data sufficiently reliable for our purposes of reporting on the amount spent on capital investments in the past 10 years and for selecting the five high-cost capital investments we reviewed. We conducted this performance audit from March 2013 to January 2014 in accordance with generally accepted government auditing standards. 
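The two rating scales described above amount to a small classification rule: per-practice conformance ratings roll up into a per-phase rating. As a hedged illustration only (the report’s qualitative thresholds such as “almost all” and “multiple” are not defined numerically, so the cutoffs below are assumptions, and the function names are ours):

```python
# Illustrative sketch of the two-level rating scheme described above.
# ASSUMPTIONS (not defined in the report): "all or almost all" is taken
# to mean all but at most one; "multiple" is taken to mean two or more.

def rate_practice(elements_met, elements_total, consistently_applied=True):
    """Rate conformance with a single leading practice."""
    substantial_policy = elements_met >= max(elements_total - 1, 1)
    if substantial_policy and consistently_applied:
        return "substantial"
    if elements_met > 0:
        # Policy met some elements, or conformed substantially but was
        # not consistently applied in the projects reviewed.
        return "partial"
    return "minimal/none"

def rate_phase(practice_ratings):
    """Rate a whole investment phase from its per-practice ratings."""
    substantial = sum(r == "substantial" for r in practice_ratings)
    if substantial >= max(len(practice_ratings) - 1, 1):
        return "high"    # substantially conformed to all or almost all
    if substantial >= 2:
        return "medium"  # substantially conformed to multiple practices
    return "low"         # substantially conformed to one or none
```

For example, a phase with four practices rated substantial, substantial, partial, and minimal/none would rate “medium” under these assumed cutoffs.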
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: U.S. Postal Service’s Strategic Initiatives and Corresponding Delivering Results, Innovation, Value, and Efficiency (DRIVE) Initiatives Networks Consolidation: A two-phase approach to consolidate approximately 200 mail-processing facilities and relocate equipment from the reduced volume workload for estimated savings of $3.4 billion by 2017. Phase 1 was implemented in spring 2013, and Phase 2 is to begin in spring 2014. Retail Optimization: In high-traffic post offices, increase self-service equipment, and in rural areas, establish Village Post Offices for estimated savings of $1.6 billion by 2017. Delivery Optimization: Increase mail delivery to centralized locations instead of curbside or door-to-door for estimated savings of $1.8 billion by 2017. Legislative initiatives 5-Day Mail including 6-Day Package Delivery: Eliminate Saturday mail delivery and continue Saturday package delivery in support of online purchases and commercial packages for estimated savings of $2 billion annually. Postal Health Plan: Adopt a new USPS-administered health care plan for current employees and new hires, eliminating $5.7 billion of prefunding to the federal health insurance program and transferring retirees into the new health care plan. Estimated savings are $8 billion annually through 2016. Federal Employees Retirement System (FERS) overfunding refund: Reduce FERS obligation and normal cost contribution, based on USPS-specific assumptions and demographics for estimated savings of $0.3 billion annually. 
Workforce and non-personnel: Renegotiate and arbitrate with unions on wages and increase the proportion of non-career employees to 20 percent for estimated savings of $4.3 billion by 2017. Appendix III: Comments from the U.S. Postal Service Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Amelia Shachoy, Assistant Director; Samer Abbas; Amy Abramowitz; Russell Burnett; Jennifer Clayborne; Thanh Lu; Joshua Ormond; Amy Rosewarne; and Crystal Wesco made key contributions to this report.

USPS has reached its statutory borrowing limit and has projected unsustainable losses. GAO's prior work has stated that USPS's financial challenges hinder its ability to make capital investments. GAO was asked to review USPS's capital investment process. This report addresses the extent to which USPS follows leading practices for four phases of capital investments: planning, selecting, managing, and evaluating. GAO identified the phases and leading practices primarily by analyzing the Office of Management and Budget's capital investment guide and compared them with USPS's policies and practices. External stakeholders with both public and private-sector experience reviewed the leading practices and found them to be reasonable for USPS. To examine how USPS policies were applied in specific cases, GAO reviewed 5 of 28 capital investments greater than $25 million that were approved for funding since fiscal year 2007. For each of the four phases of capital investments, USPS's conformance with leading practices varied. There are several practices within each of the phases. GAO assessed conformance as "substantial" if USPS's policy conformed to all or almost all elements of the practice, and as "partial" if USPS's policy conformed to some elements, or GAO identified cases in the five projects reviewed where the policies were not consistently applied. 
For planning capital investments, USPS substantially conformed to most of the leading practices, such as identifying mission needs and gaps in services, reviewing and approving a framework for selecting its investments, and developing a long-term capital investment plan. However, USPS did not substantially conform to other practices such as evaluating alternative investments by considering whether an external entity could perform all or part of a function because USPS's investment policies do not require such evaluations. However, USPS is not precluded from conducting such evaluations. Modifying its policies to require such evaluations could place USPS in a better position to ensure the evaluations are completed and to identify the best option for reducing costs and increasing the quality of investments. For selecting capital investments, USPS substantially conformed to most of the leading practices, such as ranking and prioritizing, and linking its investments with budget considerations. However, consistent with its investment policy, USPS developed business cases for approval by project rather than following leading practices that call for using a portfolio approach of allocating resources based on overall organizational goals linked to the agency's mission. Modifying policies to require a comprehensive portfolio approach would better enable USPS to consider projects alongside those that have been funded to select the mix of investments that best meets its mission needs. For managing capital investments, USPS conformance with leading practices was mixed. For example, consistent with leading practices, USPS established oversight for its capital investments and tracks cost, schedule, and performance data for initiatives. USPS policy requires comparing the planned-investment timeline and performance metrics to actual results to reassess and determine whether to continue, amend, or terminate a project, consistent with leading practices. 
USPS managers, however, could only verify that such a reassessment occurred for one of the five projects GAO reviewed. Examining the extent to which managers regularly reassess projects to continue, amend, or stop a project would help to establish crucial accountability for limited resources. For evaluating capital investments, USPS conformance with leading practices was partial. USPS policy calls for a comparison of actual return-on-investment and performance data for completed projects against expected results, consistent with leading practices. However, four of the five projects GAO reviewed did not have comparable return-on-investment data, thereby limiting the ability of managers to assess the investment's impact, identify modifications to potentially improve performance, and revise the investment process. Finally, USPS policy does not require incorporating best practices or lessons learned after project completion--another leading practice--which limits opportunities for USPS to improve its process in a way that could benefit future investments. |
Background Investment advisers provide a wide range of investment advisory services and help individuals and institutions make financial decisions. From individuals and families seeking to plan for retirement or save for college to large institutions managing billions of dollars, clients seek the services of investment advisers to help them evaluate their investment needs, plan for their future, and develop and implement investment strategies. Advisers can include money managers, investment consultants, and financial planners. They commonly manage the investment portfolios of individuals, businesses, and pooled investment vehicles, such as mutual funds, pension funds, and hedge and other private funds. Many investment advisers also engage in other businesses, such as insurance broker or broker-dealer services. Many investment advisers charge clients fees for investment advisory services based on the percentage of assets under management, but others may charge hourly or fixed rates and, in certain circumstances, performance fees. 15 U.S.C. § 80b-2(a)(11). Unless exempted, investment advisers generally are subject to the Advisers Act’s registration requirement. The Advisers Act imposes a broad fiduciary duty on advisers to act in the best interest of their clients. Most small- and mid-sized advisers are regulated by the states and prohibited from registering with SEC. Large advisers who do not meet an exemption from registration must register with SEC. To register, applicants file a Form ADV with SEC. Once registered, an adviser must update the form at least annually. SEC-registered advisers are subject to five types of requirements: (1) fiduciary duties to clients; (2) substantive prohibitions and requirements, including that advisers with custody of client assets take steps designed to safeguard those client assets; (3) contractual requirements; (4) record-keeping requirements; and (5) oversight by SEC. 
SEC oversees registered investment advisers primarily through its Office of Compliance Inspections and Examinations, Division of Investment Management, and Division of Enforcement. Specifically, the Office of Compliance Inspections and Examinations examines investment advisers to evaluate their compliance with federal securities laws, determines whether these firms are fulfilling their fiduciary duty to clients and operating in accordance with disclosures made to investors and contractual obligations, and assesses the effectiveness of their compliance-control systems. The Division of Investment Management administers the securities laws affecting investment advisers and engages in rule making for consideration by SEC and other policy initiatives that are intended, among other things, to strengthen SEC’s oversight of investment advisers. The Division of Enforcement investigates and prosecutes certain violations of securities laws and regulations. Nearly 10,000 advisers were registered with SEC as of April 1, 2013. Collectively, these advisers managed nearly $54 trillion in assets for about 24 million clients. The majority of these SEC-registered advisers each managed less than $1 billion in assets, and a majority had 100 or fewer clients. Specifically, as shown in figure 1, about 71 percent of the registered advisers (around 7,133 advisers) managed less than $1 billion in assets. Furthermore, the largest 94 registered advisers (about 1 percent of all SEC-registered advisers) managed about 50 percent of the total regulatory assets under management. In addition, as shown in figure 2, about 6,000 registered advisers (nearly 60 percent of all registered advisers) reported having 100 or fewer clients, while approximately 1,200 advisers (around 12 percent of all registered advisers) reported having more than 500 clients. 
Custody Rule Requirements and Compliance Costs Vary across Advisers An adviser has custody of client assets if it holds, directly or indirectly, client funds or securities or has any authority to obtain possession of them, in connection with advisory services provided by the adviser to the client. Custody includes: possession of client funds or securities; any capacity that gives the adviser legal ownership or access to client assets, for example, as a general partner of a limited partnership, managing member of a limited liability company, or a comparable position for another pooled investment vehicle (e.g., hedge fund); or any arrangement, including a general power of attorney, under which the adviser is authorized or permitted to withdraw client funds or securities maintained with a custodian upon its instruction to the custodian. Custody Rule Requirements Are Intended to Safeguard Client Assets SEC’s custody rule regulates the custody practices of investment advisers and contains a number of investor protections. The rule requires advisers that have custody to maintain client assets with a “qualified custodian,” which includes banks and savings associations, registered broker-dealers, registered futures commission merchants, and certain foreign financial institutions. This requirement, along with other parts of the rule, helps prevent client assets from being lost or stolen. Furthermore, qualified custodians are subject to regulation and oversight by federal financial regulators and self-regulatory organizations. Some registered advisers also engage in other businesses, such as broker-dealers that provide custodial services to themselves or related advisers. The rule requires advisers that have custody of client assets to have a reasonable basis, after due inquiry, for believing that the custodian sends periodic statements directly to the clients. 
An adviser can satisfy the due-inquiry requirement in a number of ways, such as by receiving a copy of the account statements sent to the clients or written confirmation from the custodian that account statements were sent to the adviser’s clients. This requirement serves to help assure the integrity of account statements and permit clients to identify any erroneous or unauthorized transactions or withdrawals by an adviser. If an adviser also elects to send its own clients account statements, it must include a note urging its clients to compare the custodian’s and adviser’s account statements. The SEC custody rule requires advisers with custody of client assets to hire an independent public accountant to conduct an annual surprise examination, unless the advisers qualify for an exception. A surprise examination is intended to help deter and detect fraudulent activity by having an independent accountant verify that client assets—of which an adviser has custody—are held by a qualified custodian in an appropriate account and in the correct amount. The accountant determines the time of the examination without prior notice to the adviser, and the accountant is to vary the timing of the examination from year to year. SEC initially required all advisers to undergo surprise examinations when it adopted the custody rule in 1962. Over the following decades of administering the custody rule, SEC staff provided no-action relief from the surprise examination requirement where other substitute client safeguards were implemented. In 2003, SEC amended the custody rule by generally requiring an adviser to maintain client assets with qualified custodians and relieving the adviser from the examination requirement if its qualified custodian sent account statements directly to the adviser’s clients. In its proposed rule at that time, SEC noted that the examination was performed only annually, and many months could pass before the accountant had an opportunity to detect a fraud. 
In its 2009 proposed amendments, SEC revisited the 2003 rule making in light of its significant enforcement actions alleging misappropriation of client assets. In expanding the surprise examination requirement, SEC noted that an independent public accountant may identify misuse that clients have not, which would result in the earlier detection of fraudulent activities and reduce resulting client losses. While SEC expanded the reach of the surprise examination requirement in its final 2009 rule amendments, it provided several exceptions to the requirement. As shown in figure 3, advisers meeting the following conditions may not be required to undergo a surprise examination: an adviser that is deemed to have custody of client assets solely because of its authority to deduct fees from client accounts; an adviser that is deemed to have custody because a related person has custody, and the adviser is “operationally independent” of the related person serving as the custodian; or an adviser to a pooled investment vehicle (e.g., hedge fund) that is subject to an annual financial statement audit by an independent public accountant registered with and subject to regular inspection by the Public Company Accounting Oversight Board (PCAOB) and distributes the audited financial statements prepared in accordance with generally accepted accounting principles to its clients is deemed to have satisfied the surprise examination requirement. Advisers that maintain client assets as the qualified custodian or use a related person qualified custodian rather than maintaining client assets with an independent qualified custodian may present higher risk to clients. In recognition of such risk, SEC also imposed in its 2009 rule amendments a new internal control reporting requirement on advisers that maintain client assets or use related person qualified custodians (see fig. 3 above). 
The internal control report must include an opinion of an independent public accountant as to whether suitable controls are in place and operating effectively to meet control objectives relating to custodial services. This includes the safeguarding of assets held by the adviser or related person. An adviser that directly maintains client assets as a qualified custodian or maintains client assets with a related person qualified custodian must obtain or receive from its related person an internal control report annually from an accountant that is registered with and subject to regular inspection by PCAOB. Advisers qualifying for a surprise examination exception because of their use of a related person but operationally independent custodian still must obtain an internal control report from their related person. In conjunction with the amendments to the custody rule, SEC also amended its record-keeping rule. The revised rule requires advisers to maintain a copy of any internal control report obtained or received pursuant to the SEC custody rule. The rule also requires advisers, if applicable, to maintain a memorandum describing the basis upon which they determined that the presumption that any related person is not operationally independent under the custody rule has been overcome. According to SEC, requiring an adviser to retain a copy of these items provides SEC examiners with important information about the safeguards in place and assists SEC examiners in assessing custody-related risks. Compliance Requirements and Costs Vary Around 4,400 advisers, about 45 percent of all SEC-registered advisers, reported having custody (for reasons other than their authority to deduct fees) of over $14 trillion in client assets as of April 1, 2013. In addition, around 500 advisers, about 11 percent of the 4,400 advisers with custody, reported serving as the qualified custodian or having a related person qualified custodian of client assets. 
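The surprise examination exceptions and the internal control report triggers described above are, in effect, a small decision rule. A minimal sketch follows; the function and parameter names are illustrative rather than SEC terminology, and many conditions of the actual rule (e.g., the audited financial statement distribution requirements) are simplified:

```python
# Simplified sketch of the 2009 custody rule logic described above.
# Names are illustrative; this omits nuances of the actual SEC rule.

def surprise_exam_required(has_custody,
                           custody_solely_from_fee_deduction,
                           custody_via_related_person,
                           operationally_independent,
                           audited_pooled_vehicle):
    """Return True if the adviser must undergo an annual surprise exam."""
    if not has_custody:
        return False
    if custody_solely_from_fee_deduction:
        return False  # exception: custody solely to deduct advisory fees
    if custody_via_related_person and operationally_independent:
        return False  # exception: operationally independent related person
    if audited_pooled_vehicle:
        return False  # pooled vehicle with PCAOB-registered annual audit
        # distributed to clients is deemed to satisfy the requirement
    return True

def internal_control_report_required(self_custodian,
                                     related_person_custodian):
    """Advisers that maintain client assets themselves or use a related
    person qualified custodian must obtain an annual internal control
    report, even if excepted from the surprise examination."""
    return self_custodian or related_person_custodian
```

For instance, an adviser with custody only through fee deduction needs neither review, while a self-custodying adviser needs both a surprise examination and an internal control report under this simplified reading.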
As discussed, the SEC custody rule imposes certain minimum requirements generally on all advisers with custody, but not all of the rule’s requirements apply to all advisers. Instead, the rule generally imposes more stringent requirements on advisers whose custodial arrangements, in SEC’s view, pose greater risk of misappropriation or other misuse of client assets. According to representatives from industry associations and advisers that we interviewed, advisers can incur an array of direct and indirect costs to comply with the SEC custody rule. Direct costs, such as accounting and legal fees paid by advisers, tend to be more easily measured than indirect costs, such as staff hours spent by an adviser to comply with the rule. The representatives told us that compliance costs include the following: Initial costs: After the SEC custody rule was amended in 2009, advisers initially incurred indirect costs (largely management and staff hours) and, in some cases, direct costs (largely consulting or legal fees) to interpret the amendments and comply with the rule’s new or amended requirements. For example, one adviser told us that his firm hired a law firm to help it interpret the amended rule, hired a part-time person for 6 months to review and determine over which accounts the adviser had custody, and utilized staff to reprogram the firm’s information system to code accounts under custody. Another adviser told us that his firm had the necessary in-house expertise to interpret the amended rule but nevertheless expended considerable internal resources for training staff about the surprise examination requirements and searching for and hiring an accountant to conduct the examinations. Recurring costs: On an ongoing basis, advisers incur indirect and, in some cases, direct costs to comply with the custody rule. 
Advisers expend internal staff hours to maintain records and prepare required statements and disclosures, including Form ADV (the form that advisers use to register with SEC and must update annually). Advisers subject to the surprise examination or internal control report requirement expend staff hours to prepare for and facilitate such reviews. For example, an official from an adviser told us that the firm expends considerable staff hours each year educating the accountant about the firm’s operations, generating reports for and providing other support to the accountant, and answering questions from clients related to the examination. In addition, these advisers may incur the direct cost of the examination or audit, and the amount of these fees varies from adviser to adviser (as discussed later in the report). Although advisers to pooled investment vehicles often undergo an annual financial statement audit in lieu of a surprise examination, they incur the indirect and direct costs associated with the audit. According to SEC staff and representatives from three industry associations that we spoke with, surprise examinations and internal control reporting, if applicable, tended to be two of the more costly requirements associated with SEC’s custody rule. In contrast, record-keeping costs were not significant, according to officials from three associations, two securities law attorneys, and seven of the advisers with whom we spoke. According to representatives we interviewed from four accounting firms, their surprise examination fee is based on the number of hours required to conduct the examination, which is a function of a number of factors. One of the most important factors is the number of client accounts under custody, which influences the number of accounts that accountants will need to review to verify custody. 
Other factors affecting examination cost include the amount of client assets under custody, types of securities under custody, and number and location of the custodians. Over 1,300 advisers with custody of client assets, about 30 percent of the 4,400 advisers with custody, reported being subject to the surprise examination requirement as of April 1, 2013. Importantly, these advisers vary widely in terms of the number of their clients under custody—reported by advisers as ranging from 1 client to over 1 million clients—and other factors that affect the cost of surprise examinations. Consequently, the cost of surprise examinations varies widely across the advisers. Although no comprehensive data exist on surprise examination costs, several industry associations and SEC have provided estimates. In response to SEC’s 2009 proposed amendments to the custody rule, industry associations provided SEC with cost estimates. For example, the Investment Advisers Association, representing SEC-registered advisers, estimated that surprise examinations would likely cost each of its members between $20,000 and $300,000. The Securities Industry and Financial Markets Association, representing major asset management firms and custodians, estimated that surprise examination costs would range from $8,000 to $275,000 for each of its members. However, these estimates were based on the then-current SEC guidance to accountants that required verification of 100 percent of client assets under custody. In conjunction with its 2009 final rule amendment, SEC issued a companion release that revised the guidance to allow accountants to verify a sample of client assets. In its 2009 final amendments to the custody rule, SEC estimated the cost of surprise examinations for large, medium, and small advisers in consideration of revisions to its guidance that allowed accountants to verify a sample of client assets. 
In particular, SEC estimated that the average cost of a surprise examination for large, medium, and small advisers would be $125,000, $20,000, and $10,000, respectively. To help determine the range of potential costs of surprise examinations for selected subgroups of advisers, we obtained data on the examination fees for 12 advisers. As shown in figure 4, the fees that the 12 advisers paid to their independent public accountants for recent surprise examinations ranged from $3,500 to $31,000. Figure 4 also shows that fees varied among advisers we selected within each of the subgroups. For example, fees in subgroup 2 ranged from $3,500 to $16,000 for the three advisers we selected. Fewer than 500 advisers, about 11 percent of advisers with custody, reported obtaining internal control reports as of April 1, 2013. Similar to surprise examination costs, the cost of internal control reports varies based on a number of factors, such as the size of and services offered by the qualified custodian. In its final 2009 rule, SEC estimated that an internal control report relating to custody would cost, on average, $250,000. According to officials from four accounting firms we spoke with, internal control reporting costs for their clients may range from $25,000 to $500,000. Unlike with surprise examinations and associated costs, some advisers and their related person qualified custodians may obtain internal control reports for reasons other than the custody rule. For example, representatives from two industry associations told us that institutional investors commonly require their custodians that are related persons to their advisers to obtain internal control reports. 
SEC Views Advisers Using Related but Operationally Independent Custodians as Posing Relatively Low Risk SEC provided certain investment advisers with an exception from the surprise examination requirement because their custodial practices pose relatively lower risk or they adopted other controls to protect client assets, such as annual financial statement audits. The broad range of industry, regulatory, and other parties that we interviewed generally supported or did not have a view on the surprise examination exception provided to advisers using related but operationally independent custodians to hold client assets. SEC Excepted Advisers from Surprise Examination Requirement for Several Reasons Although SEC’s 2009 proposed amendments to the custody rule would have required all registered advisers with custody of client assets to undergo a surprise examination, SEC provided exceptions from the requirement to certain advisers in the final 2009 rule amendments. In the 2009 amendments, SEC expressed that the surprise examination requirement should help deter fraud because advisers will know their client assets are subject to verification at any time and, thus, may be less likely to engage in misconduct. SEC noted that if fraud does occur, the examination will increase the likelihood that the fraud will be detected earlier. As discussed earlier, advisers deemed to have custody solely because of their authority to deduct fees from client accounts are excluded from the surprise examination requirement. In SEC’s view, the magnitude of the risks of client losses from overcharging advisory fees did not warrant the costs of obtaining a surprise examination. Also excluded from the requirement are advisers to a pooled investment vehicle that undergo annual audits of their financial statements by an independent public accountant and distribute the audited statements to investors. 
According to SEC, procedures performed by accountants during the course of a financial statement audit provide meaningful protections to investors, and the surprise examination would not significantly add to these protections (75 Fed. Reg. at 1464). Among the rule’s conditions for operational independence, advisory personnel and personnel of the related person who have access to advisory client assets must not be under common supervision, and advisory personnel must not hold any position with the related person or share premises with the related person. Although an adviser that meets these conditions would not be required to undergo a surprise examination, the adviser still would be required to comply with the rule’s other applicable provisions, including obtaining an internal control report from its related person. SEC emphasized that an adviser that has custody due to reasons in addition to a related person having custody cannot rely on the exception because it is only applicable if an adviser has custody solely because its related person has custody. For example, an adviser that has custody because he or she serves as a trustee with respect to client assets held in an account at a broker-dealer that is a related person could not rely on the exception from the surprise examination on the grounds that the broker-dealer was operationally independent, because the adviser has custody for reasons other than through its operationally independent related person. A Limited Number of Advisers Do Not Undergo Surprise Examinations Because They Use Related but Operationally Independent Custodians As of April 1, 2013, 169 registered advisers reported having custody of client assets, using related but operationally independent custodians, and not undergoing an annual surprise examination for certain clients. These advisers account for around 2 percent of all SEC-registered advisers and about 42 percent of the approximately 400 SEC-registered advisers that have a related person holding client assets. 
These advisers collectively have over $6 trillion in regulatory assets under management and custody of over $1 trillion of client assets. The structure of large institutions with functionally independent subsidiaries tends to lend itself to meeting the operationally independent conditions. More specifically, we identified some advisers using related but operationally independent custodians that are part of large financial institutions with numerous subsidiaries, such as Deutsche Bank, JPMorgan Chase, Morgan Stanley, and Wells Fargo. According to SEC staff, this outcome is to be expected given that the adviser and custodian staff cannot be considered operationally independent while under common supervision and sharing the same premises. If advisers currently qualifying for an exception from the surprise examination requirement were required to undergo such examinations, the costs of the examinations would likely vary considerably across the advisers. Like advisers currently subject to the surprise examination requirement, advisers excepted from the requirement vary considerably in terms of the factors that affect the cost of the examinations. For example, the number of clients these advisers had under custody ranged from 1 client to over 500,000 clients, as of April 1, 2013. Similarly, the amount of client assets under their custody ranged from $680,000 to $320 billion. Views on the Surprise Examination’s Exception and Effectiveness The broad range of industry, accounting firms, and other parties we interviewed largely told us that they either support or do not have a view on the surprise examination exception. However, several of these representatives said that the exception’s operationally independent conditions were too stringent or difficult to meet. None of the investment advisers we interviewed use a related custodian to hold client assets, and most did not have a view on the exception or SEC’s rationale. 
The North American Securities Administrators Association (NASAA) staff told us that when a custodian is a related person of the adviser, ensuring that the adviser meets and complies with the operationally independent conditions would require the firm to conduct a thorough analysis of its operations and any changes that may affect the custodian’s operational independence. The staff further noted that NASAA’s custody model rule for use by state securities regulators, unlike the SEC custody rule, does not include an exception from the surprise examination requirement based on the concept of operational independence between an investment adviser and a custodian that is a related person. An investor advocacy representative told us that he generally opposes allowing advisers to use related custodians, but if that were allowed, he said that, in his opinion, the surprise examination exception would be appropriate for only large, complex entities subject to existing regulation, such as banks and broker-dealers. Many of the industry, regulatory, and other parties we interviewed agreed with SEC’s view that surprise examinations can help to deter fraud. However, some told us that one of the examination’s weaknesses is that accountants must rely on advisers to provide them with a complete list of the client assets under custody to verify. According to some of these representatives, an adviser defrauding a client could omit that client’s account from the list provided to the accountant to avoid detection. Officials from an accounting firm told us that no infallible procedure exists to test the completeness of the client list, given that the list must come from the adviser. According to SEC staff, an adviser with custody and intent on defrauding its clients also may not register with SEC or, if it does, may not report that it has custody of client assets or hire an accountant to conduct a surprise examination. 
SEC staff also noted that the surprise examination requirement, like any regulation, cannot prevent fraud 100 percent of the time but that it helps deter such misconduct. SEC data indicate that surprise examinations have identified compliance issues and helped target higher-risk advisers for examination. SEC staff told us that since the 2009 custody rule amendments became effective in March 2010, auditors conducting surprise examinations have found around 100 advisers with one or more instances of material noncompliance with the rule, such as failing to maintain client securities at a qualified custodian. According to SEC staff, the results of surprise examinations serve as an early warning of potential risks and are used by staff to help assess the risk level of advisers and, in turn, select advisers for SEC examination. For example, in March 2013, SEC’s Office of Compliance Inspections and Examinations issued a “Risk Alert” that noted that about 33 percent (over 140 examinations) of recent SEC examinations found custody-related deficiencies. The deficiencies included failures to comply with the rule’s surprise examination and qualified custodian requirements and resulted in actions ranging from immediate remediation to enforcement referrals and subsequent litigation. Agency Comments We requested comments from SEC. SEC did not provide written comments but provided technical comments, which we incorporated as appropriate. We are sending copies of this report to SEC, interested congressional committees and members, and others. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
Appendix I: Objectives, Scope, and Methodology This report describes (1) the requirements of and costs associated with the Securities and Exchange Commission (SEC) custody rule, including any related record-keeping requirements, for registered investment advisers, and (2) SEC’s rationale for not requiring advisers using related but operationally independent custodians to undergo surprise examinations, and the number and characteristics of such advisers. To address both objectives, we analyzed SEC’s record-keeping and custody rules under the Investment Advisers Act of 1940 to document compliance requirements (e.g., surprise examinations and internal control reports) for SEC-registered investment advisers. Furthermore, we reviewed proposed and final SEC amendments to the custody rule, comment letters, and other information, such as GAO and other studies, to analyze how and why the custody rule requirements have changed, particularly the surprise examination requirement and exceptions, and to obtain information on compliance costs. We analyzed publicly available data in the Investment Adviser Registration Depository (IARD) to identify the number of SEC-registered investment advisers and information advisers reported about their compliance with the SEC custody rule’s requirements. IARD data are submitted by advisers in Form ADV, which advisers use to register with SEC and must update annually. We assessed the reliability of Form ADV data by interviewing SEC staff and testing the data for errors, and we determined the data were sufficiently reliable for our purposes. Specifically, we interviewed SEC staff about the IARD database and Form ADV to understand how the data are collected, what types of edit checks are incorporated into the system, and the staff’s overall views of the system’s data reliability with respect to our purposes. 
We also performed electronic testing to identify potential errors, and we discussed analysis methodology considerations, such as excluding particular records, with SEC staff for any inconsistencies that we identified. For the purposes of our final analysis, we excluded records for advisers that reported zero as the regulatory assets under management or total clients and any record with the latest Form ADV filing date older than January 2012. To obtain data on the costs of complying with the SEC custody rule, particularly its surprise examination and internal control report requirements, and other information, we interviewed a limited number of investment advisers and accounting firms. Based on Form ADV data as of December 3, 2012, we identified approximately 1,300 advisers that reported undergoing surprise examinations. To systematically target advisers and firms, we first divided the total group of advisers that reported undergoing surprise examinations into four subgroups based on whether the amount of their client assets under custody was above or below the group’s median value of approximately $101 million and whether their number of clients under custody was above or below the median of 19 clients, as shown in figure 5. Thus, at one end of the spectrum, subgroup 1 includes advisers whose number of clients and amount of client assets under custody were both below the group’s median values. At the other end of the spectrum, subgroup 3 includes advisers whose number of clients and amount of client assets were both above the group’s median values. Within each subgroup, we then selected advisers whose client assets were around the subgroup’s median value. Of the 12 selected advisers, we interviewed 8 directly and interviewed the 4 accountants who conducted the surprise examinations for the other 4 advisers. 
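The subgrouping described above amounts to a two-by-two partition on the two median values. The sketch below is illustrative only: the report defines subgroups 1 and 3 in the text and refers to figure 5 for the full scheme, so the numbering of the two mixed cells, as well as the function and variable names, are our assumptions rather than GAO's.

```python
# Illustrative sketch of the two-by-two subgrouping described in the
# methodology. The assignment of subgroups 2 and 4 to the mixed cells
# is an assumption; only subgroups 1 and 3 are defined in the text.

MEDIAN_ASSETS = 101_000_000  # ~$101 million in client assets under custody
MEDIAN_CLIENTS = 19          # median number of clients under custody

def assign_subgroup(assets_under_custody: float, clients_under_custody: int) -> int:
    """Place an adviser into one of four subgroups based on whether its
    client assets and number of clients fall above or below the medians."""
    high_assets = assets_under_custody > MEDIAN_ASSETS
    high_clients = clients_under_custody > MEDIAN_CLIENTS
    if not high_assets and not high_clients:
        return 1  # below both medians (defined in the text)
    if high_assets and high_clients:
        return 3  # above both medians (defined in the text)
    if not high_assets and high_clients:
        return 2  # assumed: many clients, smaller asset base
    return 4      # assumed: fewer clients, larger asset base
```

Under this sketch, an adviser with $50 million under custody and 10 clients would fall into subgroup 1, while one with $200 million and 100 clients would fall into subgroup 3.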
To obtain information on the cost of complying with the custody rule and other information, we also interviewed regulators, including SEC staff, officials from the North American Securities Administrators Association, and a representative from a state securities authority, and representatives from investment adviser, accountant, investor advocacy, and other associations, including the American Institute of Certified Public Accountants, American Bankers Association, Financial Services Institute, Fund Democracy, Investment Advisers Association, Managed Futures Association, Private Equity Growth and Capital Council, and Securities Industry and Financial Markets Association. In addition, we interviewed two securities law attorneys. We conducted this performance audit from September 2012 through July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Richard Tsuhara, Assistant Director; Carl Barden; William Chatlos; F. Chase Cook; Kristen Kociolek; Risto Laboski; Grant Mallie; Patricia Moye; Jennifer Schwartz; Jena Sinkfield; and Verginie Tarpinian made major contributions to this report.

Investment advisers provide a wide range of services and collectively manage around $54 trillion in assets for around 24 million clients. Unlike banks and broker-dealers, investment advisers typically do not maintain physical custody of client assets. 
However, under federal securities regulations, advisers may be deemed to have custody because of their authority to access client assets, for example, by deducting advisory fees from a client account. High-profile fraud cases in recent years highlighted the risks faced by investors when an adviser has custody of their assets. In response, SEC amended its custody rule in 2009 to require a broader range of advisers to undergo annual surprise examinations by independent accountants. At the same time, SEC provided relief from this requirement to certain advisers, including those deemed to have custody solely because of their use of related but "operationally independent" custodians. The Dodd-Frank Wall Street Reform and Consumer Protection Act mandates that GAO study the costs associated with the custody rule. This report describes (1) the requirements of and costs associated with the custody rule and (2) SEC's rationale for not requiring advisers using related but operationally independent custodians to undergo surprise examinations. To address the objectives, GAO reviewed federal securities laws and related rules, analyzed data on advisers, and met with SEC, advisers, accounting firms, and industry and other associations. Designed to safeguard client assets, the Securities and Exchange Commission's (SEC) rule governing advisers' custody of client assets (custody rule) imposes various requirements and, in turn, costs on investment advisers. To protect investors, the rule requires advisers that have custody to (1) use qualified custodians (e.g., banks or broker-dealers) to hold client assets and (2) have a reasonable basis for believing that the custodian sends account statements directly to clients. The rule also requires advisers with custody, unless they qualify for an exception, to hire an independent public accountant to conduct annually a surprise examination to verify custody of client assets. 
According to accountants that GAO interviewed, examination cost depends on an adviser's number of clients under custody and other factors. These factors vary widely across advisers that currently report undergoing surprise examinations: for example, their reported number of clients under custody ranged from 1 client to over 1 million clients as of April 2013. Thus, the cost of the examinations varies widely across the advisers. The rule also requires advisers maintaining client assets or using a qualified custodian that is a related person to obtain an internal control report to assess the suitability and effectiveness of controls in place. The cost of these reports varies across custodians based on their size and services. SEC provided an exception from the surprise examination requirement to, among others, advisers deemed to have custody solely because of their use of related but "operationally independent" custodians. According to SEC, an adviser and custodian under common ownership but having operationally independent management pose relatively lower client custodial risks, because the misuse of client assets would tend to require collusion between the firms' employees. To be considered operationally independent, an adviser and its related custodian must not be under common supervision, not share premises, and meet other conditions. About 2 percent of the SEC-registered advisers qualify for this exception for at least some of their clients. If the exception were eliminated, the cost of the surprise examination would vary across the advisers because the factors that affect examination cost vary widely across the advisers. |
Background DOD is a massive and complex organization. In fiscal year 2004, the department reported that its operations involved $1.2 trillion in assets, $1.7 trillion in liabilities, over 3.3 million military and civilian personnel, and over $605 billion in net cost of operations. For fiscal year 2005, the department received appropriations of about $417 billion. Moreover, execution of its operations spans a wide range of defense organizations, including the military services and their respective major commands and functional activities, numerous large defense agencies and field activities, and various combatant and joint operational commands, which are responsible for military operations for specific geographic regions or theaters of operations. In executing these military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. DOD reports that, in order to support these business functions, it currently relies on about 4,200 systems—including accounting, acquisition, finance, logistics, and personnel. As we have previously reported, this systems environment is overly complex and error prone and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) manual data entry into multiple systems. These problems continue despite the department’s spending billions of dollars annually to operate, maintain, and modernize its business systems. DOD received approximately $13.3 billion for fiscal year 2005 to operate, maintain, and modernize this environment. In addition, our reports continue to show that the department’s stovepiped and duplicative systems contribute to fraud, waste, and abuse. 
Of the 25 areas on GAO’s governmentwide “high-risk” list, 8 are DOD program areas, and the department shares responsibility for 6 other high-risk areas that are governmentwide in scope. Because of the department’s size and complexity, modernizing its business systems is a huge management challenge that we first designated as one of the department’s high-risk areas in 1995, and we continue to do so today. To help meet this challenge, DOD established its business systems modernization program in 2001. As we testified in 2003, one of the seven key elements that are necessary to successfully execute this modernization program is to establish and implement an enterprise architecture. Subsequently, in its Fiscal Year 2004 Performance and Accountability Report, DOD acknowledged that deficiencies in its systems and business processes hindered the department’s ability to collect and report financial and performance information that was accurate, reliable, and timely. The DOD report noted that to address its systemic problems and assist in the modernization of its business operations, the department had undertaken the development and implementation of a BEA. Enterprise Architecture Is Critical to Achieving Successful Modernization Effective use of an enterprise architecture, or a modernization blueprint, is a trademark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both business and technological environments. Congress, the Office of Management and Budget (OMB), and the federal Chief Information Officer (CIO) Council have also recognized the importance of an architecture-centric approach to modernization. 
The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an information technology (IT) architecture. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. In addition, OMB has issued guidance that, among other things, requires system investments to be consistent with these architectures. What Is an Enterprise Architecture? An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). This picture consists of snapshots of both the enterprise’s current or “As Is” environment and its target or “To Be” environment, as well as a capital investment road map for transitioning from the current to the target environment. These snapshots consist of “views,” which are one or more architecture products that provide logical or technical representations of the enterprise. The suite of products and their content that form a given entity’s enterprise architecture are largely governed by the framework used to develop the architecture. Since the 1980s, various architecture frameworks have emerged and been applied. See appendix III for a discussion of these various frameworks. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and systems modernization. Managed properly, an enterprise architecture can clarify and help to optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. 
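The structural relationships described above ("As Is" and "To Be" snapshots, views composed of architecture products, and a capital investment road map for the transition) can be pictured as a simple data model. The sketch below is illustrative only; the class and field names are our own and do not come from DOD, the BEA, or any particular architecture framework.

```python
# Illustrative sketch only: a minimal data model for the enterprise
# architecture elements described above. Names are assumptions for
# illustration, not DOD's or any framework's terminology.
from dataclasses import dataclass, field

@dataclass
class ArchitectureView:
    name: str            # e.g., "operational", "systems", "technical"
    products: list       # architecture products (logical/technical representations)

@dataclass
class Snapshot:
    label: str                           # "As Is" or "To Be"
    views: list = field(default_factory=list)

@dataclass
class EnterpriseArchitecture:
    as_is: Snapshot      # current environment
    to_be: Snapshot      # target environment
    transition_plan: list  # capital investment road map steps

ea = EnterpriseArchitecture(
    as_is=Snapshot("As Is", [ArchitectureView("systems", ["current system inventory"])]),
    to_be=Snapshot("To Be", [ArchitectureView("systems", ["target systems"])]),
    transition_plan=["phase out legacy systems", "acquire target systems"],
)
```

The point of the sketch is simply that the architecture pairs two snapshots of the same enterprise, each built from views, with an ordered plan connecting them.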
Employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize its mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. DOD’s Business Management Modernization Program: A Brief Description and Chronology In July 2001, the Secretary of Defense established the Business Management Modernization Program (BMMP) to improve the efficiency and effectiveness of DOD’s business operations and provide the department’s leaders with accurate and timely information through the development and implementation of a BEA. At that time, the Secretary tasked the Under Secretary of Defense (Comptroller), in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) and the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer (ASD(NII)/CIO), with responsibility for overseeing the program. To accomplish this, in October 2001, the Comptroller established governance bodies and assigned them responsibilities associated with developing, maintaining, and implementing the BEA. These entities and their respective roles and responsibilities are shown in table 1. DOD is currently revising its BEA governance structure, including recently eliminating its long-standing governance entities. These revisions are discussed later in this report. 
Also in 2001, the BMMP program office was established to execute the program’s day-to-day activities, including implementing internal management controls and other mechanisms to provide reasonable assurance that the office would develop and implement an effective BEA. The office is led by a program director and composed of seven program divisions, each of which is headed by an assistant deputy director. Figure 1 is a simplified diagram of the organizational structure of the program office, and table 2 shows the roles and responsibilities of the program divisions. Initially, DOD planned to develop the architecture in 1 year. Subsequently, the department stated that it would develop its architecture in three increments, each increment addressing a subset of objectives and consisting of specific architecture releases. Table 3 shows these increments, the corresponding architecture releases, and the planned completion dates for the increments. To develop the architecture, DOD entered into a 5-year, $95 million contract with International Business Machines (IBM) in April 2002, under which the department has issued a series of task orders aimed at developing the architecture. In 2004, DOD increased the contract amount to $250 million; however, the contract did not provide a reason for this increase, and program officials have yet to provide an explanation. As of September 2004, DOD reported that it had obligated approximately $318 million for the program, which is primarily for contractor support. Prior Reviews of DOD’s Architecture Efforts Have Identified Many Weaknesses and Challenges Over the last 4 years, we have identified the need for, and reviewed DOD’s efforts to develop, an enterprise architecture for modernizing its business operations and systems, and we have made a series of recommendations to assist the department in successfully developing the architecture and using it to guide and constrain its ongoing and planned business systems investments. 
In particular, we reported in May 2001 that the department did not have an architecture for its financial and financial-related business operations, nor did it have the management structures, processes, and controls in place to effectively develop and implement one. We concluded that continuing to spend billions of dollars on new and modified systems would result in more processes and systems that were duplicative, not interoperable, unnecessarily costly to maintain and interface, and not optimizing mission performance and accountability. We made eight recommendations to the Secretary of Defense that were aimed at providing the means for effectively developing and implementing an architecture and limiting DOD components’ systems investments until it had a well-defined architecture and the means to enforce it. In July 2001, DOD initiated the BMMP. In February 2003, we reported that the department was following some architecture best practices, such as establishing a program office to be responsible for managing the enterprise architecture development effort. We also reported challenges and weaknesses in DOD’s architecture efforts. For example, we reported that DOD had not yet (1) established a governance structure and the process controls needed to ensure ownership of and accountability for the architecture across the department; (2) clearly communicated to intended stakeholders its purpose, scope, and approach for developing the architecture; and (3) defined and implemented an independent quality assurance process. We reiterated our earlier recommendations and made six new recommendations aimed at enhancing DOD’s ability to develop its architecture and to guide and constrain its business systems modernization investments. 
In March 2003, we reported that DOD’s draft release of its architecture did not include a number of items that were recommended by relevant architectural guidance, and that DOD’s plans would not fully satisfy the requirements of the National Defense Authorization Act for Fiscal Year 2003. For example, the draft architecture did not include a “To Be” security view, which would define the security requirements—including relevant standards to be applied in implementing security policies, procedures, and controls. DOD officials generally agreed, and they stated that subsequent releases of the architecture would provide these missing details. In July and September 2003, we reported that the initial release of the department’s architecture, including the transition plan, did not adequately address statutory requirements and other relevant architectural requirements. For example, the description of the “As Is” environment did not include (1) the current business operations in terms of entities and people who perform the functions, processes, and activities and (2) the locations where the functions, processes, and activities are performed. The description of the “To Be” environment did not include actual systems to be developed or acquired to support future business operations and the physical infrastructure that would be needed to support the business systems. The transition plan did not include time frames for phasing out existing systems within DOD’s then reported inventory of about 2,300 business systems. We concluded that the department’s initial release of the architecture did not contain sufficient scope and detail either to satisfy the act’s requirements or to effectively guide and constrain departmentwide business systems modernization. 
In our September 2003 report, we reiterated the open recommendations that we had made in our May 2001 and February 2003 reports, and we made 10 new recommendations to, among other things, improve DOD’s efforts for developing the next release of the architecture. In March and July 2004, we testified that DOD’s substantial long-standing financial and business management problems adversely affected the economy, effectiveness, and efficiency of its operations and had resulted in a lack of adequate transparency and appropriate accountability across all major business areas. Further, we said that substantial work remained before the BEA would begin to have a tangible impact on improving the department’s overall business operations. We concluded that until DOD completed a number of actions, including developing a well-defined BEA, its business systems modernization efforts would be at a high risk of failure. In May 2004, we reported that after 3 years of effort and over $203 million in obligations, DOD had not made significant change in the content of the BEA or in its approach to investing billions of dollars annually in existing and new systems. We reported that few actions had been taken to address the recommendations we had made in our September 2003 report, which were aimed at improving the department’s plans for developing the next release of the architecture and implementing the institutional means for selecting and controlling both planned and ongoing business systems investments. We also reported that DOD had still not adopted key architecture management best practices that we had recommended, such as assigning accountability and responsibility for directing, overseeing, and approving the architecture and explicitly defining performance metrics to evaluate the architecture’s quality, content, and utility. Further, DOD had not added the scope and detail to its architecture that we had previously identified as missing. 
For example, in the latest release of the BEA— Release 2.0—DOD did not provide sufficient descriptive content related to future business operations and supporting technology to permit effective acquisition and implementation of system solutions and associated operational change. Moreover, the department had not yet explicitly defined program plans, including milestones, detailing how it intended to extend and evolve the architecture to incorporate this missing content. We concluded that the future of DOD’s architecture development and implementation activities was at risk, which in turn placed the department’s business transformation effort in jeopardy of failing. Therefore, we added that it was imperative that the department move swiftly to implement our open recommendations. Because many of our prior recommendations remained open, we did not make any new recommendations in our May 2004 report, but we reiterated the open recommendations that we had made in our May 2001, February 2003, and September 2003 reports. In November 2004 and April 2005, we testified that for DOD to successfully transform its business operations, it would need a comprehensive and integrated business transformation plan; people with the skills, responsibility, and authority to implement the plan; an effective process and related tools, such as a BEA; and results-oriented performance measures that would link institutional, unit, and individual personnel goals and expectations to promote accountability for results. We testified that over the last 3 years, we had made a series of recommendations to DOD and suggested legislative changes that, if implemented, could help the department move forward in establishing the means to successfully address the challenges it faces in transforming its business operations. 
We also testified that, after about 3 years of effort and over $203 million in reported obligations, the architecture’s content and the department’s approach to investing billions of dollars annually in existing and new systems had not changed significantly, and that the program had yielded little, if any, tangible improvement in DOD’s business operations.

DOD Has Yet to Implement Effective Governance and Communications, but Improvements Are Under Way

Long-standing weaknesses in DOD’s BEA governance structure and communications strategy still remain. While DOD has established a new senior committee to oversee its business transformation efforts, including BEA development, much remains to be accomplished before proposed governance and communications concepts are fully defined and implemented. Until the department has made its intended governance and communications concept operational, the success of DOD’s architecture efforts will remain in doubt.

Long-standing Program Governance Weaknesses Remain, Although Recent Proposals Are Intended to Address Weaknesses

An enterprise architecture is a corporate asset that, among other things, is intended to represent the strategic direction of the enterprise. Accordingly, best practices recommend that to demonstrate commitment, organizations should vest accountability and assign responsibility for directing, overseeing, and approving the architecture to a committee or group with representation from across the enterprise. Sustained support and commitment by this committee to the architecture, as well as the committee’s ownership of it, are critical to a successful enterprise architecture development effort. We have previously recommended that DOD establish this kind of architecture responsibility and accountability structure. (See app. II for our prior recommendations and their current status.)
During the last 4 years, DOD has relied on three primary management entities to govern BEA development, maintenance, and implementation— the Executive Committee, the Steering Committee, and the Domain Owners Integration Team (DO/IT). (See table 1 for their roles and responsibilities.) This governance approach, however, does not assign accountability and responsibility for directing, overseeing, and approving the BEA to these entities either singularly or collectively. As we reported in February 2003, the Executive and Steering Committees were advisory in nature, and their responsibilities were limited to providing guidance to the program office and advising the Comptroller and the Executive Committee. Moreover, since they were established, neither the two committees nor the DO/IT have adequately fulfilled their assigned responsibilities, as we discuss below. The Executive Committee was chartered to, among other things, provide strategic direction to the Steering Committee, champion program execution, and hold DOD components—including the military services—responsible for program results. To accomplish these things, the charter states that the committee should establish a meeting schedule. However, no schedule was established, and the committee met only four times in over 3½ years. Moreover, no minutes of the four meetings were prepared, according to the program’s Acting Assistant Deputy Director for Communications, and no other documentation exists to demonstrate the committee’s performance of its chartered functions. In fact, during numerous DO/IT meetings that we attended, participants stated that the Executive Committee was not providing strategic direction, nor was it championing program execution. The Steering Committee was chartered to advise the Executive Committee on program performance, serve as the forum for discussion of component issues, and provide guidance to the program office. 
According to the program’s Acting Assistant Deputy Director for Communications, neither Executive Committee meeting minutes nor any other documentation exists to demonstrate that the Steering Committee advised the Executive Committee. Moreover, during the Steering Committee meetings that we attended over the last 4 years, we saw no evidence that this committee either planned to or actually did advise the Executive Committee on program performance. While we did observe in these meetings that issues were raised and discussed, which is a chartered responsibility of the committee, we also observed that the committee did not provide guidance and direction to the program office during these meetings. The Steering Committee last met about 1 year ago (June 2004). The DO/IT was chartered to provide recommendations to the Steering Committee; provide guidance regarding architecture updates and their effects; and identify, address, and resolve domain and program issues. The charter also states that the DO/IT was to ensure that its representation included the military services. However, during the Steering Committee meetings that we attended, DO/IT representatives did not provide any recommendations to the Steering Committee, nor did they provide guidance on architecture updates and their effects. Moreover, the DO/IT did not include military service representatives, and it did not establish any policies and procedures for how to address and resolve issues. As a result, issues that were identified during meetings were not resolved. For example, in July 2004, there were discussions regarding the lack of involvement from the services, the lack of detail in the architecture content, and the lack of clear understanding of the roles and responsibilities among the domains and the services. 
During this meeting, however, no decisions were made about how these issues were to be resolved, and no actions were taken to provide recommendations to the Steering Committee for resolving the issues. As a result, we observed the same issues being discussed, without resolution, 5 months later. DOD has recently taken steps to improve the program’s governance structure and, according to program officials, further steps are planned. For example, the department has implemented the provisions in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, which are aimed at increasing senior DOD leadership involvement in the program. In particular, these provisions include DOD’s establishment of the Defense Business Systems Management Committee (DBSMC) to replace both the Executive and Steering Committees and to serve as the highest-ranking governance body overseeing business transformation. According to the DBSMC charter, the committee is accountable and responsible for directing, overseeing, and approving all program activities. Specific responsibilities of the committee include establishing strategic direction and plans for the business mission area in coordination with the warfighting and enterprise information environment mission areas; approving business mission area transformation plans and initiatives and coordinating transition planning in a documented program baseline with critical success factors, milestones, metrics, deliverables, and periodic program reviews; establishing key metrics and targets by which to track business transformation; establishing policies and approving the business mission area strategic plan, the transition plan for implementation of business systems modernization, the transformation program baseline, and the BEA; and executing a comprehensive communications strategy. Consistent with recent legislation, the DBSMC is chaired by the Deputy Secretary of Defense, and the USD(AT&L) serves as the vice chair.
Its membership consists of senior leadership from across the department, including the military service secretaries, the defense agency heads, and the principal staff assistants. The department has also moved the program office from the Comptroller to the USD(AT&L), reporting to the Special Assistant for Business Transformation. According to DOD, this transfer of functions and responsibilities will allow USD(AT&L) to establish the level of activity necessary to support and coordinate the activities of the newly established DBSMC. Other entities may be established to support the DBSMC. According to program officials, including the Program Director, the DO/IT has been eliminated and may be replaced by a DOD Enterprise Transformation Integration Group. Further, the DO/IT had identified the need for six additional boards to assist the program office. These boards have not yet been chartered, but potential board members have met to discuss the boards’ roles, responsibilities, and operating procedures, as well as program issues. However, the program director stated that not all of these boards may be established under the new structure. According to the Special Assistant for Business Transformation, a new governance structure will be in place and operational by September 30, 2005. To this end, DOD officials told us that the DBSMC held its first meeting in February 2005 to finalize its charter and member composition, and that to move governance reforms forward, the committee will initially hold monthly meetings. The weaknesses in the BEA governance structure over the last 4 years, according to program officials, are attributable to a lack of authority and accountability in program leadership by the various management entities (e.g., Executive and Steering Committees) and to the limited direction provided by these entities. 
While DOD’s recent actions are intended to address these root causes, almost 4 years and approximately $318 million in obligations have been invested, and the department is still attempting to establish an effective governance structure. Moreover, much remains to be accomplished until DOD’s new governance structure becomes an operational reality. Until it does, it is unlikely that the department will be able to develop an effective BEA.

An Effective Communications Strategy Has Yet to Be Implemented, but Some Activities Are Under Way

Effective communications among architecture stakeholders are closely aligned with effective governance. According to relevant guidance, once initial stakeholder participation in an architecture program is achieved, a communications strategy should be defined and implemented to facilitate the exchange of information among architecture stakeholders about all aspects of the program, such as program purpose, scope, methodologies, progress, results, and key decisions. Such communication is essential to executing an architecture program effectively, including obtaining institutional buy-in for developing and using the architecture for corporate decision making. Accordingly, in 2003 we recommended that DOD provide for ensuring stakeholder commitment and buy-in through proactive architecture marketing and communication activities. (See app. II for our prior recommendation and its current status.) In response, the program office defined and approved a strategic communications plan in March 2004. According to the plan, its purpose is to direct the flow of information to inform, collaborate with, and educate DOD internal and external stakeholders about the program. To accomplish this, the plan identifies categories of stakeholders and includes a five-phase implementation approach. The five phases are as follows:

1. Plan Start-up: Conduct a formal tactical review, including an assessment of current communication tools, communication procedures, available resources, and agency or industry best practices.
2. Discovery and Assessment: Identify and verify all internal and external stakeholders and begin defining the benchmarking and reporting requirements.
3. Branding: Determine key messages to be communicated and select tools for disseminating these messages.
4. Communications Planning and Execution: Execute and assess the application of the strategy and continue to develop the communication tools.
5. Evaluation: Evaluate the progress and overall success of the plan’s implementation, using metrics.

The strategic communications plan states that the five phases were to have been fully implemented by December 2004. Although 5 months have passed since the plan was to be implemented, none of the five phases have been completed, and communications activities that have been performed by those responsible for implementing the plan have been limited in scope. Specifically, communications activities have focused on raising program awareness through the distribution of posters and press releases and the establishment of a Web site that provides information about the program. However, an assessment to evaluate the effectiveness of communication tools and procedures, available resources, and agency or industry best practices in communicating the program’s purpose to the various stakeholders (e.g., the domains and the military services) has not been performed. In addition, the program’s Acting Assistant Deputy Director for Communications stated that a systematic approach, including metrics, to measure the effectiveness of the communications strategy has not been defined. According to this official, communications activities have been ad hoc.
Further, DO/IT representatives told us that the program’s Web site does not contain consistent and current information about the program; as a result, stakeholders’ understanding of the BEA is limited. Moreover, the plan focuses first on internal stakeholders, and it defers efforts to proactively promote understanding and buy-in with external stakeholders, such as Congress. The plan defines the internal stakeholders to exclude key departmental components, such as the military services and the defense agencies. Instead, it defines internal stakeholders to include only the program office and the domains. As a result, both the internal and external DOD stakeholders with whom we spoke said that they did not have a clear understanding of the program, including its purpose, scope, approach, activities, and status, as well as stakeholders’ roles and responsibilities. For example, the Installation and Environment domain representative stated that it was unclear how the program would achieve one of the program’s original goals (i.e., achieving an unqualified audit opinion by fiscal year 2007), and what the role of this representative’s domain would be in achieving this goal. Similarly, the Acquisition domain representative stated that the roles and responsibilities of the domains for the program are confusing, because the domains should be the entities that are developing the BEA, and not the program office. According to this representative, the current approach will result in redundancy and increased program costs. In addition, the program office’s Acting Assistant Deputy Director for Communications acknowledged that stakeholders are confused, stating that they do not have a good understanding of either the BEA’s goals, objectives, and intended outcomes or stakeholders’ roles and responsibilities. 
This official also stated that cross-domain integration issues remain that require strong DOD leadership to move the communications efforts beyond conducting awareness activities to achieving departmentwide buy-in. The Special Assistant for Business Transformation has recently begun to better communicate with key external stakeholders, such as Congress. For example, this official stated that department officials have met with and intend to hold future meetings with congressional staff to brief them on the department’s plans and efforts to date. Without effective communication with BEA stakeholders, the likelihood that DOD will be able to effectively develop and implement its BEA is greatly reduced.

DOD Has Yet to Develop Program Plans and Supporting Workforce Plans, but Intends to Make Improvements

DOD does not have the program plans that it needs to effectively manage the development, maintenance, and implementation of its BEA. In particular, the department has not specified measurable goals and outcomes to be achieved, nor has it defined the tasks to be performed to achieve these goals and outcomes, the resources needed to perform the tasks, or the time frames within which the tasks will be performed. DOD has not assessed, as part of its program planning, the workforce capabilities that it has in place and that it needs to manage its architecture development, maintenance, and implementation efforts, nor does it have a strategy for meeting its human capital needs. The absence of effective program planning, including program workforce planning, has contributed to DOD’s limited progress in developing a well-defined architecture and clearly reporting on its progress to date. Unless its program planning improves, it is unlikely that the department will be successful in its attempt to develop, maintain, and implement its BEA.
Recognizing this long-standing void in planning, DOD stated that it intends to complete, by September 30, 2005, a transition plan that will include a program baseline to be used to oversee and manage program activities.

DOD Has Yet to Develop Effective Program Plans

Architecture management guidance states that organizations should develop and execute program plans, and that these plans should provide an explicit road map for accomplishing architecture development, maintenance, and implementation goals. Among other things, effective plans should specify tasks to be performed, resources needed to perform these tasks (e.g., funding, staffing, tools, and training), program management and contractor roles and responsibilities, time frames for completing tasks, expected outcomes, performance measures, program management controls, and reporting requirements. We have previously reported on the program’s lack of effective planning and have recommended that DOD develop BEA program plans. (See app. II for our prior recommendation and its current status.) Since the program was launched in 2001, DOD has operated without a program plan. Instead, the department has set target dates for producing a series of architecture releases as part of three generally defined architecture increments (see table 3). However, DOD has not clearly defined the purpose of the respective increments, either individually or collectively, and it has not developed near-, mid-, or long-term plans for producing these increments. At a minimum, such plans would identify specific tasks (i.e., provide a detailed work breakdown structure) for producing the architecture releases that possess predefined content and utility. These plans would also contain specific and reliable estimates of the time and resources it will take to perform these tasks.
Also in lieu of program plans, the program office provided us with documents in November 2004 that the deputy program director stated were to address our open recommendations for (1) adding needed content and consistency to the BEA’s “As Is” and “To Be” architectural products and (2) developing a well-defined BEA transition plan. However, the documents that we were provided were plans for developing plans to address our recommendations and, thus, were not documents explaining how and when our recommendations would be addressed. DOD has recognized the need for program planning. According to the Acting Assistant Deputy Director for Strategic Planning, the program office had committed to developing a program baseline by December 2004. According to the program director, this baseline was to include program goals, objectives, and activities as well as performance, cost, and schedule commitments. Further, the baseline was to establish program roles, responsibilities, and accountabilities and was to be used as a managerial and oversight tool to allocate resources, manage risk, and measure and report progress. However, the department has yet to develop this program baseline. Moreover, while this program baseline would be a useful tool for strengthening program management, it nevertheless was not to have included all of the elements of an effective program plan, such as a detailed work breakdown structure and associated resource estimates. According to senior officials, including the Special Assistant for Business Transformation and the Deputy Under Secretary of Defense (Financial Management), the department is drafting a transition plan that is to be approved by the DBSMC and issued by September 30, 2005. According to these officials, this plan will include a program baseline and a BEA development approach. The lack of well-defined program plans has contributed significantly to the limited BEA progress that we have reported over the last several years. 
Moreover, this absence of program plans has created a lack of transparency and understanding about what is occurring on the program and what will occur—both inside and outside the department. It has also inhibited BEA governance entities’ ability to ensure that resources (e.g., program office, domains, and contractors) are being effectively used to achieve worthwhile outcomes and results. Although the contractor’s work statements have provided some additional detail, these task descriptions lack the specificity necessary to use them effectively to monitor the contractor’s progress and performance. For example, the latest work statement includes the task “develop business rules based on the available sources of information,” which has been included in prior work statements. However, the latest work statement does not define the scope of this effort, nor does it define how this latest task will support prior efforts to develop business rules. As a result, the extent to which business rules have been developed and the work remaining to complete the development of these rules are unclear. Further, the relationship of this task to DOD’s ability to satisfy the objectives for increment 1 also has not been defined. These architecture relationships need to be defined before the department can develop explicit plans for effective BEA development. Moreover, because there is no plan linking them together, it is not clear how the contractor’s work statements and other BEA working groups’ efforts relate to or contribute to larger BEA goals and objectives. For example, the program office has continued to task the contractor to develop architecture releases, although the intended use, and thus the explicit content, of the various releases has not been clearly linked to the goals and objectives of increment 1.
Representatives of the DOD business domains raised similar concerns with the contractor’s work statements, telling us that the work statements have been vague and have not been linked to the specific architecture products. According to these representatives, the department does not know if it is investing resources on tasks that are needed and add value to the program. According to the deputy program director, continuous changes in the direction and scope of the program have hindered DOD’s ability to develop effective program plans. Without such plans, DOD has been, and will continue to be, limited in its ability to develop a well-defined architecture on time and within budget.

DOD Has Not Performed Effective Workforce Planning

As we have previously reported, workforce planning is an essential component of program management. Workforce planning enables an entity to be aware of and prepared for its current and future human capital needs, such as workforce size, knowledge and skills, and training. Such planning involves assessing the knowledge and skills needed to execute the program, inventorying existing staff knowledge and skills, identifying any shortfalls, and providing for addressing these shortfalls. Through effective workforce planning, programs and organizations can have the right people with the right skills doing the right jobs in the right place at the right time. Relevant architecture management guidance recognizes the importance of planning for and having adequate human resources in developing, maintaining, and implementing enterprise architectures. DOD has yet to perform workforce planning for its BEA program. Nevertheless, it has established a program office consisting of seven program divisions (see fig. 1) and staffed the office with 60 government employees and approximately 300 contractor staff.
In addition, the department has assigned other staff to support the various program government entities, such as the domains and the DO/IT, and it has established various formal and informal working groups. However, DOD has not taken steps to ensure that the people assigned to the program have the right skill sets or qualifications. In particular, DOD has not defined the requisite workforce skills and abilities that the department needs in order to develop, maintain, and implement the architecture. To illustrate, the program’s Assistant Deputy Director for Administrative Support told us that the position descriptions used to staff the program office were generic and were not tailored to the needs of an enterprise architecture program. This official added that, as a result, a person might satisfy the qualifications contained in the position description, but still not meet the needs of the BEA program. In addition to not defining its workforce needs, the program also does not have an inventory of the capabilities of its currently assigned program workforce. For example, the Assistant Deputy Director for Administrative Support told us that the department did not know the number of individuals assigned to support the various governance entities. In the absence of an inventory of existing workforce knowledge and skills, we requested any available information on the qualifications and training of program staff in key leadership positions (e.g., assistant deputy directors). In response, the Assistant Deputy Director for Administrative Support said that such information was not readily available for all staff, and this official provided us with résumés for 15 program officials; we were told that 4 of these résumés had been created in response to our request for key staff qualifications and training. Exacerbating the program’s lack of workforce planning is the fact that several key program office positions are vacant.
For example, four of the seven program division leadership positions (i.e., assistant deputy directors’ positions) are temporarily filled by persons “acting” in these positions (see fig. 2). In addition, key supporting positions within the program divisions, such as the positions for performance management and independent verification and validation/quality assurance, are vacant. As a result, one program official is currently acting in three positions—strategic planning/organization development (includes risk management), performance management (earned value management system), and independent verification and validation/quality assurance. Further, two of the positions this person occupies are incompatible and do not allow for appropriate separation of duties. Specifically, this individual is responsible for both program performance management and independent oversight of performance management. In addition, significant staff turnover has occurred in key program positions. For example, the program office has had three directors in 4 years, three transition planning directors since March 2004, and four contracting officer technical representatives responsible for the prime contract since January 2003. The Assistant Deputy Director for Administrative Support stated that the program lacks valid and reliable data about its human capital needs and current capabilities. This official told us that plans are being developed to begin addressing this situation. For example, to begin monitoring staff turnover, the program office recently began maintaining a list of program staff with start dates. However, this official also told us that the plans for improving the program’s management of its human capital were not complete, and dates for when the plans would be complete have not been set. The absence of effective workforce planning has contributed significantly to DOD’s limited progress to date in developing its architecture.
Unless the program’s approach to human capital management improves, it is unlikely that the department’s future efforts to develop, maintain, and implement the BEA will be successful.

DOD Is Not Performing Effective Configuration Management

Relevant architecture guidance, including DOD guidance, recognizes the importance of configuration management when developing and maintaining an architecture. The purpose of configuration management is to maintain integrity and traceability, and control modifications or changes to the architecture products throughout their life cycles. Effective configuration management, among other things, enables integration and alignment among related architecture products. As we have previously reported, an effective configuration management process comprises four primary elements, each of which should be described in a configuration management plan and implemented according to the plan. In addition, responsibility, accountability, and authority for configuration management should be assigned to a configuration manager. The four elements are:

Configuration identification: Procedures for identifying, documenting, and assigning unique identifiers (e.g., serial number and name) to product types generated for the architecture program, generally referred to as configuration items.

Configuration control: Procedures for evaluating and deciding whether to approve changes to a product’s baseline configuration, generally accomplished through configuration control boards, which evaluate proposed changes on the basis of costs, benefits, and risks and decide whether to permit a change.

Configuration status accounting: Procedures for documenting and reporting on the status of configuration items as a product evolves. Documentation, such as historical change lists and original architecture products, is generated and kept in a library, thereby allowing organizations to continuously know the state of a product’s configuration and to be in a position to make informed decisions about changing the configuration.

Configuration auditing: Procedures for determining alignment between the actual product and the documentation describing it, thereby ensuring that the documentation used to support the configuration control board’s decision making is complete and correct. Configuration audits, both functional and physical, are performed when a significant product change is introduced, and they help to ensure that only authorized changes are being made.

DOD has a draft configuration management plan and related procedures that address all four of these areas. However, the plan is not being followed. For example, according to the plan and procedures, certain product types should be placed under configuration management and be assigned a unique identifier. However, in one case, the verification and validation contractor reported that DOD had updated one of the BEA products (i.e., All View-1) that was initially published in Release 2.3, but that this updated product was not given a unique identifier and a new release date, and no entry was made in the version history to enable stakeholders to differentiate between the two versions. Configuration naming conventions also have not been consistently followed, resulting in updates to a single document being given different unique identifiers from the original document. For example, the November 2003 configuration management plan had the unique identifier “C0008_05_03_BMMP_Configuration_Management_Plan.doc,” which comprised the call number, the task number, the subtask number, and the name of the document.
However, the department later assigned this document the unique identifier “Configuration_Management_Plan.doc,” which did not include the call and task numbers. Such inconsistencies could permit changes to be made to the wrong version of a product, thereby compromising the integrity and reliability of the information. Consistent with relevant guidance, the procedures require that a configuration manager be assigned and that this individual be responsible for ensuring that the four elements are executed. However, after almost 4 years of architecture product development, a configuration manager has not been assigned. In addition, while the department established a configuration control board and chartered it to evaluate and decide whether to approve proposed product changes, this board is not fully functioning. Specifically, the board’s charter has yet to be approved, and its authority has been limited to a subset of BEA products. For example, its authority does not extend to the BEA transition plan. With respect to configuration status accounting activities conducted to ensure the integrity of product baselines, we were provided with two reports, even though program officials, including the Configuration Control Board Chair, told us that no configuration status accounting reports existed and that no auditing procedures or processes (e.g., audit checklists, agendas, or plans) existed either. However, one of these reports was missing key data, such as the date of the report, the submitter, and the version of the product being reviewed, and, thus, it was of limited use. It was also unclear which version of the product was referenced in the report, and these officials told us that the current baseline of approved configuration items, including the configuration management plan, is unknown. As a result, configuration items can be duplicated.
For example, the Acting Assistant Deputy Director for Communications had a second communications plan prepared, and this official told us that he did not know that a prior draft plan existed. Because of this, the new plan did not leverage any of the work that had previously been done, and duplicative plans exist. Program officials, including the Configuration Control Board Chair, stated that they recognize the importance of effective configuration management. They attributed the absence of effective configuration management to a number of factors, including the lack of a policy or directive requiring it and the lack of a common understanding of effective configuration management practices. The absence of effective configuration management raises questions about whether changes to the BEA and other relevant products have been justified and accounted for in a manner that maintains the integrity of the documentation. Unless this situation is remedied, the department will not have adequate assurance that it has maintained accountability and ensured the consistency of information among the products it is developing. In addition to the governance and planning weaknesses we previously discussed, the department’s lack of effective configuration management has also contributed to the state of the BEA discussed in the next section of this report.

DOD Has Yet to Develop a Well-Defined BEA to Guide Its Modernization Efforts

Despite six BEA releases and two updates, DOD still does not have a version of an enterprise architecture that can be considered well defined, meaning that the architecture, for example, has a clearly defined purpose that can be linked to the department’s goals and objectives and describes both the “As Is” and the “To Be” environments; consists of integrated and consistent architecture products; and has been approved by department leadership.
According to program officials, the state of the BEA reflects the program’s focus on meeting arbitrary milestones rather than architecture content needs. Recognizing the need to develop well-defined architecture products that have utility and provide a foundation upon which to build, program officials have stated the department’s intent to change its BEA development approach, refocusing its efforts on fewer, higher quality products. Until a BEA development approach embodying architecture development and content best practices is clearly defined and implemented, it is not likely that DOD will produce an enterprise architecture that will provide needed content and utility.

“As Is” Description, Transition Plan, and Purpose of BEA Releases Are Missing

As we previously discussed, the various frameworks used to develop architecture products consistently provide for describing a given enterprise in both logical (e.g., business, performance, application, and information) and technical (e.g., hardware, software, and data) terms, and for doing so for the enterprise’s current or “As Is” environment and its target or “To Be” environment; these frameworks also provide for defining a capital investment sequencing plan to transition from the “As Is” to the “To Be” environment. However, the frameworks do not prescribe the degree to which the component parts should be described to be considered correct, complete, understandable, and usable—essential attributes of any architecture. This is because the depth and detail of the descriptive content depend on what the architecture is to be used for (i.e., its intended purpose and scope). Relevant architecture guidance states that a well-defined architecture should have a specific and commonly understood purpose and scope, and that it should be developed in incremental releases. Using this purpose and scope, the necessary content of architecture releases can then be defined.
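To illustrate the relationship the guidance describes, the sketch below shows one way a release's required content could be derived from purpose questions tied to an incremental objective. This is only a notional example: the objective, purpose questions, and question-to-product mappings are hypothetical, not drawn from DOD's actual BEA documentation.

```python
# Notional sketch only: the objective, questions, and mappings below are
# hypothetical examples, not actual BEA content.

# Each incremental objective is decomposed into architectural "purpose questions."
increment_objectives = {
    "Objective A (hypothetical)": [
        "Which business processes must change?",
        "Which system functions support those processes?",
    ],
}

# Each purpose question is mapped to the architecture products whose content answers it.
question_to_products = {
    "Which business processes must change?": {"OV-5", "OV-6c"},
    "Which system functions support those processes?": {"SV-5", "SV-6"},
}

def required_products(objective):
    """Return the products a release must contain for its content to be
    measurable against the stated objective."""
    needed = set()
    for question in increment_objectives[objective]:
        needed |= question_to_products[question]
    return needed

def content_gap(objective, shipped_products):
    """Products still missing from a release that claims to support the objective."""
    return required_products(objective) - set(shipped_products)
```

Under a mapping of this kind, a release's content sufficiency becomes measurable: the products it ships either cover the set required by each objective it claims to support, or the gap is explicit.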
In September 2003, we reported that Release 1.0 of the BEA was missing important content, and we made 62 recommendations to add this content. The latest releases of the BEA (see table 4) do not address the 32 of our 62 recommendations that are related to the “As Is” description and the transition plan. Specifically, the releases do not include products describing the “As Is” environment for those areas of the enterprise that are likely to change, and they do not include a sequencing plan that serves as a road map for transitioning from the “As Is” state to the “To Be” state. For example, the BEA releases do not contain a description of the “As Is” environment that would include current business operations in terms of the entities or people who perform the functions, processes, and activities, and the locations where the functions, processes, and activities are performed. The releases also do not describe the data or information being used by the functions, processes, and activities. As a result, DOD does not have a picture of its current environment to permit development of a meaningful and useful transition plan. The BEA releases also do not contain a transition plan that would include information such as time frames for phasing out existing systems within DOD’s current inventory and resource requirements for implementing the “To Be” architecture. As a result, DOD does not yet have a meaningful and reliable basis for managing the disposition of its existing inventory of systems or for sequencing the introduction of modernized business operations and supporting systems. As we previously reported, not having defined the “As Is” operations and technology at this juncture is risky because it defers the creation of sufficient descriptive content and context for an effective transition plan until too late in the architecture development cycle. (See app. II for our prior recommendations and their current status.)
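The kind of transition-plan content described above can be pictured with a minimal data sketch. Everything in it is a hypothetical placeholder, not DOD data: the system names, phase-out dates, and cost figures are invented for illustration.

```python
from datetime import date

# Hypothetical "As Is" system inventory; every name, date, and cost is a placeholder.
as_is_inventory = [
    {"system": "Legacy Ledger (hypothetical)", "phase_out": date(2006, 3, 31),
     "replaced_by": "Target Ledger", "est_cost_millions": 8.5},
    {"system": "Legacy Pay System (hypothetical)", "phase_out": date(2007, 9, 30),
     "replaced_by": "Target Pay Service", "est_cost_millions": 12.0},
]

def sequencing_plan(inventory):
    """Order system retirements by phase-out date, the core of a transition road map."""
    return sorted(inventory, key=lambda entry: entry["phase_out"])

def total_resource_requirement(inventory):
    """Sum the estimated cost of implementing the 'To Be' replacements."""
    return sum(entry["est_cost_millions"] for entry in inventory)
```

Without an inventory of this kind, neither a retirement sequence nor a total resource requirement can be computed, which is the gap the report describes.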
DOD’s architecture framework (DODAF), which the department is using to develop the BEA, does not require the development of an “As Is” architectural description or a transition plan, and thus neither has been the focus of the program. However, according to program officials, including the Chief Architect, the September 2005 BEA release will include both the “As Is” architectural description and a transition plan. In addition, DOD has not clearly linked the purpose of any of the “To Be” architecture releases produced to date to the goals and objectives of increment 1. Further, these releases also do not have a clearly defined scope with respect to what business processes and supporting systems each release would focus on. According to program officials, the last five versions (i.e., Releases 2.1, 2.2, and 2.3 and the January and March 2005 Updates) support the objectives of increment 1 (see table 3). These objectives are broad-based strategic goals or outcomes that DOD proposed achieving through a series of architecture releases. However, DOD did not define how many releases were needed to achieve each objective and how the purpose of each release is associated with the objectives. Restated, while each incremental objective would describe the mission outcome that DOD intended to achieve via implementation of the series of releases that make up that increment, the purpose of the releases was not specified in terms of architectural questions that were related to the objectives. To illustrate, one objective of increment 1 is to achieve an unqualified audit opinion for consolidated DOD financial statements. Examples of the purpose questions that would support this objective include the following: 1. What changes need to be made to existing business processes and the supporting systems to address material internal control weaknesses affecting significant line items on the financial statements? 2. 
Where are gaps in IT support (systems functions) that need to be addressed in order to provide the business capabilities for ensuring that property, plant, and equipment are appropriately valued and recorded on the financial statements? The department did not include architecture products that would answer these types of questions to support the increment 1 objective. As a result, the context needed to plan and measure content sufficiency was not established. Program officials, including the Special Assistant for Business Transformation and the Chief Architect, agreed, stating that prior releases have not included a specific purpose and scope. Moreover, the Chief Architect told us that the architecture releases do not fully support increment 1’s objectives, nor do they describe the extent to which they address the increment 1 objectives. According to program officials, including the deputy program director and the Chief Architect, the prior releases were developed with the goal of producing as many architecture products as possible within a predefined schedule. The releases were not developed to provide the content needed to meet the defined purpose and scope of the release. Until the department defines the intended purpose and scope of its BEA, including its incremental releases, and ensures that the architecture products include adequate descriptions of the “As Is” and “To Be” environments, as well as a plan for transitioning between the two, it will not have a well-defined architecture to guide and constrain its systems modernization efforts.

BEA Products Are Incomplete, Inconsistent, and Not Integrated

Architecture frameworks provide for multiple products, each describing a particular aspect of the enterprise, such as data or systems. Moreover, these products are interdependent, meaning that they have relationships with one another that must be explicitly defined and maintained to ensure that the products form a coherent whole.
In light of these relationships, it is important to develop the architecture products in a logical sequence that reflects these relationships. DODAF recognizes this need for integration of the products that make up its three “To Be” views—operational view (OV), systems view (SV), and technical standards view (TV). (See app. IV for a brief description of the products that comprise each of these views.) According to the framework, an architecture must be well structured so that the products can be readily assembled or decomposed into higher or lower levels of detail, as needed. It should also be consistent—that is, information elements should be the same throughout the architecture. As noted in the previous section, we reported in September 2003 that Release 1.0 of the BEA was missing important content, and we made 62 recommendations to add this content. The latest releases of the BEA do not adequately address our 30 prior recommendations related to the “To Be” description. For example, these releases do not include descriptions of the actual systems to be developed or acquired to support future business operations and the physical infrastructure (e.g., hardware and software) that will be needed to support the business systems. As a result, the “To Be” environment lacks the detail needed to provide DOD with a common vision and frame of reference for defining a transition plan to guide and constrain capital investments and, thus, to effectively leverage technology to orchestrate logical, systematic change and to optimize enterprisewide mission performance. (See app. II for details on the status of our prior recommendations.) In addition, the respective products of each of the latest BEA releases continue to be inconsistent and not integrated, because key architecture products were either not developed or not updated to reflect changes made in other products.
Examples of where Releases 2.2 and 2.3 are not consistent and integrated follow:

In Release 2.2, the department updated the system data exchange matrix (SV-6), which assigns attributes (e.g., timeliness) to the data to be exchanged (e.g., Performance Metrics) between system functions—“Manage Business Enterprise Reporting” and “Establish and Maintain Performance Information”—to support business operations. However, the OV-3 in Release 2.2, which shows the attributes of the information to be exchanged to support operations, is not consistent with the attributes defined in the SV-6. For example, in the OV-3, the attribute referred to as “timeliness” is defined in terms of “hours,” “minutes,” or “seconds”; however, in the SV-6, the attribute referred to as “timeliness” is defined only in terms of “high,” “medium,” or “basic.”

In Releases 2.2 and 2.3, the department updated the respective operational event-trace description product (OV-6c), which depicts when activities are to occur within operational processes. However, the department did not update, in either release, the operational activity model (OV-5), which shows the operational activities (or tasks) that are to occur and the input and output flows among these activities. For example, the OV-6c shows the sequence of the activities to occur for the process labeled “produce obligation reports”; however, the activities shown in the OV-5 were not associated with this process.

The latest releases also do not provide for traceability among the architecture products by clearly identifying the linkages and dependencies among the products, such as associating processes (e.g., produce obligation reports) with activities (e.g., compare expense to obligation) in the operational views and then associating these same processes to systems (e.g., financial reporting system) in the systems view.
In addition, the linkage between the two functions (i.e., “Manage Business Enterprise Reporting” and “Establish and Maintain Performance Information”) previously discussed cannot be traced to the OV-3 in Release 2.2. This is because Release 2.2 did not include an SV-5, which would provide traceability of system functions back to operational activities. The lack of an updated SV-5 also raises questions about whether all operational requirements are satisfied by the system functions. In addition, the architecture products were not developed in a logical sequence, as called for in relevant guidance and standards (e.g., the OV-6c, which shows the timing or sequencing of activities, was built before the OV-5, which shows the activities that are to occur). Further, according to the verification and validation contractor, the department has yet to address 242 of its 299 outstanding comments since Release 1.0. The verification and validation contractor also cited similar concerns, as previously described, for Releases 2.2 and 2.3. Specifically, the contractor reported that the BEA products were not integrated, and that key products were missing or had not been updated—such as the operational nodes connectivity description (OV-2) and the operational information exchange matrix (OV-3). In its report on the January 2005 Update, the contractor stated that the architecture products were developed in an order different from that recommended in DODAF, and that the dependency relationships between and among BEA products were not clearly depicted. For example, the contractor reported that the logical data model (OV-7) was to have been developed using the OV-3, OV-5, and OV-6c artifacts as inputs. However, this was not the case. Instead, the OV-7 was developed using information that may have been reverse engineered from existing systems and architectures external to the BEA.
The contractor reported that unless these dependencies are clearly documented and depicted, new systems may be implemented without satisfying all operational requirements, with missing functions and interfaces, or based on obsolete data models, recreating many of the problems the modernization is intended to resolve. The contractor also reported that the resulting technical problems in the OV-7 could interfere with the department’s achievement of the increment 1 objectives. The March 2005 Update also did not have fully integrated products. Specifically, while some of the products were integrated, this integration occurred at the highest level only and could not be found at lower levels of decomposition (e.g., subprocesses and subactivities) within the architecture. For example, the level of integration does not enable the user to determine all information inputs for the activities at all levels, nor does it clearly reflect the dissemination or use of the information after it has been processed. In addition, this update did not include key architecture products that are recommended by DODAF—such as the system data exchange artifact (SV-6), which assigns attributes (e.g., timeliness) to the data to be exchanged between system functions, and the system inventory (SV-8). The SV-8 provides a basis for portfolio investment decisions by depicting the evolution of systems, systems integration, and systems improvements over time. We also found that the architecture was not user friendly in that it was difficult to navigate. For example, the linkages among the architecture products did not always work, thereby requiring manual navigation through the architecture to find the linkages. Such manual navigation could take hours, especially because certain artifacts (e.g., diagrams) could not be read online and had to be printed. Relatedly, as shown in table 4, the latter BEA releases have not included all of the recommended DODAF products.
DODAF recommends that the BEA include 23 out of 26 possible architecture products to meet the department’s stated intention to use the BEA as the basis for departmentwide business and systems modernization. However, Release 2.2 of the architecture included only 16 of the 23 recommended architecture products, and 6 of the 16 products (OV-1, OV-2, OV-3, OV-4, OV-5, and SV-9) were actually Release 2.0 products that had not been updated to align with the changes that had been made to the 10 products that were updated in this release. Similarly, Release 2.3 included only 4 products; the January 2005 Update included 6, and the March 2005 Update included 15. According to the Chief Architect, all prior architecture products that were not included in a specific release or update are obsolete. For example, Release 2.2 included 16 architecture products, of which 10 had been updated. The remaining 6 products had not been updated, but they were still considered to be valid because they were included in this release. This means that for all releases and updates, only those products included in the release or update are relevant. For example, DOD updated 15 products and included them in the March 2005 Update; therefore, as of March 2005, only these 15 products were considered to be valid artifacts of the BEA. Program and contractor officials, including the Acting Assistant Deputy Director for Transition Planning, stated that although the department’s first release of its architecture included a fairly consistent and integrated set of architecture products, DOD’s current releases do not because the department did not update all the recommended architecture products. These officials, including the Chief Architect, also stated that, as a result, the utility of the architecture is limited. 
However, according to key program officials, including the Special Assistant for Business Transformation and the Chief Architect, the integration of the architecture products was not the focus; rather, DOD’s primary goal was to produce as many products as it could within a specified time period (see tables 3 and 4). Recognizing these weaknesses, the Special Assistant for Business Transformation stated that the department intends to reduce the scope of the architecture and revise the development approach, which will be reflected in the September 30, 2005, architecture release. However, according to program officials, including the Special Assistant for Business Transformation, the September 2005 BEA release will not be comprehensive (i.e., it will not meet all the act’s requirements). Further, the department has yet to develop plans and a methodology to execute this new focus and vet it through the department. Program officials also stated that as a result of the new focus, they are trying to decide which products from prior releases could be salvaged and used. Nevertheless, the department has spent almost 4 years and approximately $318 million in obligations to develop an architecture that is incomplete, inconsistent, and not integrated and, thus, has limited utility. Until the department develops an approved, well-defined architecture that includes a clear purpose and scope and integrated products, it remains at risk of not achieving its intended business modernization goals and of not having an architecture that the stakeholders can use to guide and constrain ongoing and planned business systems investments to prevent duplicative and noninteroperable systems.

BEA Releases Have Not Been Approved

Relevant architecture guidance states that architecture versions should be approved by the committee overseeing the development and maintenance of the architecture; the CIO; the chief architect; and senior management, including the department head.
Such approval recognizes and endorses the architecture for what it is intended to be—a corporate tool for managing both business and technological change and transformation. Consistent with guidance, DOD has stated its intention to approve all BEA releases. However, Release 1.0 of the BEA is the only release that DOD reports as having been approved. As we previously reported, DOD officials told us that Release 1.0 was approved by the former Executive Committee, the department’s CIO as a member of the Executive Committee, and the DOD Comptroller on behalf of the Secretary of Defense in May 2003, but they also said that documentation to verify these approvals did not exist. Since Release 1.0, DOD has issued five additional releases and two updates. None of these have been approved by any individual or committee in the BEA governance structure. According to program officials, including the Special Assistant for Business Transformation and the Chief Architect, Release 3.0 of the BEA, which will be issued in September 2005, will be the next release of the architecture to be approved by the department. These officials stated that the architecture releases have not been approved because the department did not have a governance structure and process in place for doing so. Without the appropriate approvals, buy-in to and recognition of the BEA as an institutionally endorsed change management and transformation tool is not achievable.

DOD Has Yet to Fully Address Most of Our Other Recommendations

In addition to the governance, planning, and content issues previously discussed, we have made other recommendations relative to DOD’s ability to effectively develop, maintain, and implement an enterprise architecture for its business operations. To date, the department has fully addressed one of our other recommendations, which is to report every 6 months to the congressional committees on the status of the BEA effort, but it has yet to fully address the remaining recommendations.
(See app. II for details on the status of these recommendations.) For example, the department has yet to address our recommendations to develop a position description for the Chief Architect that defines requisite duties and responsibilities, update policies to assign responsibility and accountability for approving BEA releases, update policies to address the issuance of waivers for business systems that are not compliant with the architecture but are nevertheless justified on the basis of documented analysis, and develop and implement a quality assurance plan. According to the program director and deputy director, the current state of the BEA, including progress in addressing our recommendations, reflects the program’s prior focus on producing as many products as it could within a specific time period. The focus had not been on the content and quality of the releases, but rather on the timing of their delivery. In contrast, our recommendations have all focused on establishing the means by which to deliver a well-defined BEA and ensuring that delivered releases of the architecture contain this requisite content. Until DOD adopts the kind of approach embodied in our recommendations, it is unlikely that it will produce a well-defined BEA within reasonable time frames and at an affordable cost.

Conclusions

Having and using a well-defined enterprise architecture are essential for DOD to effectively and efficiently modernize its nonintegrated and duplicative business operations and systems environment. However, the department does not have such an architecture, and the architecture products that it has produced to date do not provide sufficient content and utility to effectively guide and constrain the department’s ongoing and planned business systems investments. This means that despite spending almost 4 years and about $318 million to develop its BEA, the department is not positioned to meet its legislative mandates.
In our view, the state of the architecture is due largely to long-standing architecture management weaknesses that the recommendations we have made over the last 4 years are aimed at correcting, as well as the department’s prior focus on producing as many products as it could within a specific time period. To date, the department has not taken adequate steps to implement most of our recommendations. While recent steps to begin revamping its BEA governance structure and to begin program planning are positive first steps and are consistent with some of the recommendations that we made to lay a foundation for architecture development, maintenance, and implementation, much more remains to be accomplished. Thus, it is imperative for the department to move swiftly to strengthen its BEA program in a manner that incorporates our prior recommendations and recognizes its current architecture management capabilities. Until it does, the department will continue to put billions of dollars at risk of being invested in systems that are duplicative, are not interoperable, cost more to maintain than necessary, and do not optimize mission performance and accountability. 
Recommendations for Executive Action

We recommend that the Secretary of Defense direct the Deputy Secretary of Defense, as the chair of the DBSMC and in collaboration with DBSMC members, to (1) immediately fully disclose the state of its BEA program to DOD’s congressional authorization and appropriations committees, including its limited progress and results to date, as well as specific plans and commitments for strengthening program management and producing measurable results that reflect the department’s capability to do so; (2) ensure that each of our recommendations related to BEA management and content is reflected in the above plans and commitments; and (3) ensure that plans and commitments provide for effective BEA workforce planning, including assessing workforce knowledge and skills needs, determining existing workforce capabilities, identifying gaps, and filling these gaps.

Agency Comments

In written comments on a draft of this report signed by the Special Assistant for Business Transformation in the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) and the Deputy Under Secretary of Defense (Financial Management) (reprinted in app. V), the department concurred with our recommendations and stated its intent to implement them. Specifically, DOD stated that it would (1) disclose plans, progress, and results of its BEA efforts to DOD’s congressional committees; (2) address our recommendations related to BEA management and content; and (3) assess its workforce needs and adjust its workforce to meet requirements.
We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer; the Under Secretary of Defense (Personnel and Readiness); and the Director, Defense Finance and Accounting Service. This report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Objectives, Scope, and Methodology

Our objectives were to determine whether the Department of Defense (DOD) has (1) established an effective governance structure; (2) developed program plans, including supporting workforce plans; (3) performed effective configuration management; (4) developed well-defined business enterprise architecture (BEA) products; and (5) addressed our other recommendations. To determine whether DOD has established an effective governance structure for its efforts, we reviewed program documentation—such as approved charters for the Executive and Steering Committees, the Domain Owners Integration Team (DO/IT), the Transformation Support Office, and the Defense Business Systems Management Committee—and the communications strategy and supporting documents. We compared these documents with the elements in our Enterprise Architecture Management Maturity Framework and federal Chief Information Officer (CIO) Council guidance.
To determine whether DOD has developed program plans, including supporting workforce plans, we interviewed the Director and deputy program director, and the assistant deputy directors for communications, strategic planning, and transition planning. We also reviewed draft plans that showed the department’s intent to address our prior recommendations for the content previously missing from the “As Is” architecture and the transition plan. We also reviewed the department’s March 15, 2005, annual report to Congress, briefing slides on the department’s BEA development approach, and the various statements of work for the contractor responsible for extending and evolving the architecture. For human capital, we reviewed program organization charts and position descriptions for key program officials. In addition, we interviewed key program officials, such as the assistant deputy directors for communications, strategic planning, and enterprise architecture, to discuss their roles and responsibilities. To determine whether effective configuration management was being performed, we reviewed the configuration management plan and associated procedures, and the draft configuration control board charter. We compared these documents with best practices, including the federal CIO Council’s Practical Guide, to determine the extent to which DOD had adopted key management practices. In addition, we reviewed meeting minutes to determine whether the board was operating effectively and performing activities according to best practices. We also interviewed program officials, including the Chief Architect and the Configuration Control Board Chair, to discuss the process and its effect on the department’s ability to develop and maintain the BEA products. 
To determine whether DOD had developed well-defined BEA products, we reviewed the latest BEA releases (i.e., Releases 2.2, 2.2.1, and 2.3 and the March 2005 Update) and the program’s verification and validation contractor’s reports documenting its assessment of Releases 2.2 and 2.3 and the January 2005 Update of the architecture. To determine whether these BEA releases addressed our prior recommendations on missing architecture content and inconsistencies, we requested contractual change requests related to our recommendations. Program officials, including the program director and Chief Architect, stated that change requests to address our recommendations do not exist. We also reviewed the verification and validation contractor’s assessment of DOD’s efforts to address its outstanding comments on prior versions of the BEA and DOD stakeholders’ comments on Release 2.2 of the BEA. Further, we reviewed DOD’s approach to developing the architecture products since Release 1.0 and compared it with relevant guidance, such as DOD’s architecture framework. We also observed architecture walk-through sessions held by program officials to discuss concerns and provide progress updates. In addition, we interviewed program officials, including the Special Assistant for Business Transformation, Deputy Under Secretary of Defense (Financial Management), Chief Architect, the Configuration Control Board Chair, and the verification and validation contractor to discuss the development and maintenance of the BEA products. To determine the status of DOD’s efforts to address our other recommendations related to BEA development and maintenance, we reviewed program documentation, such as the draft quality assurance plan, and compared them with the elements in our Enterprise Architecture Management Maturity Framework. We requested updates to the Information Technology Portfolio Management Policy and the position description for the Chief Architect. 
We also interviewed program and contractor officials, such as the Director and deputy program director, and the assistant deputy directors for quality assurance and communications. To augment our documentation reviews and analyses, we attended regularly scheduled meetings, such as the DO/IT meetings, the program execution status meetings, and configuration control board meetings. We also held monthly teleconferences with the program and deputy program directors to discuss any issues and to obtain explanations or clarification on the results of our audit work. We did not independently validate cost and budget information provided by the department. We conducted our work primarily at DOD headquarters in Arlington, Virginia, and we performed our work from July 2004 through May 2005, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. Written comments from the Special Assistant for Business Transformation in the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) and the Deputy Under Secretary of Defense (Financial Management) are addressed in the “Agency Comments” section of this report and are reprinted in appendix V.

Status of Prior Recommendations on DOD’s Development and Maintenance of Its Business Enterprise Architecture

(The status of each recommendation is shown as implemented, partially implemented, or not implemented, along with DOD comments and our assessment.)

GAO-01-525: Information Technology: Architecture Needed to Guide Modernization of DOD’s Financial Operations. May 17, 2001.

(1) The Secretary immediately issue a Department of Defense (DOD) policy that directs the development, implementation, and maintenance of a business enterprise architecture (BEA). DOD has developed the Information Technology Portfolio Management policy.
While this policy, in conjunction with the overarching Global Information Grid policy, assigns responsibilities for the development, implementation, and maintenance of the BEA, it does not provide for accountability for and approval of updates to the architecture, for processes for architecture oversight and control, or for architecture review and validation, and it does not address the issuance of waivers for business systems that are not compliant with the BEA but are nevertheless justified on the basis of documented analysis. Program officials stated that the department plans to revise this policy, but they did not provide a time frame for doing so. (2) The Secretary immediately modify the Senior Financial Management Oversight Council’s charter to designate the Deputy Secretary of Defense as the Council Chair and the Under Secretary of Defense (Comptroller) as the Council vice-Chair; and empower the council to serve as DOD’s BEA steering committee, giving it the responsibility and authority to ensure that a DOD BEA is developed and maintained in accordance with the DOD Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework. We previously reported that DOD had established the Executive and Steering Committees, which were advisory in nature. The department also had established the Domain Owners Integration Team (DO/IT) and stated that these three bodies were responsible for governing the program. However, these groups had not been assigned responsibilities for directing, overseeing, and approving the BEA. According to key department officials, these three entities will be replaced. Specifically, in February 2005, DOD established the Defense Business Systems Management Committee (DBSMC), which replaced the Executive and Steering Committees. According to its charter, the DBSMC is accountable and responsible for the program.
The department plans to establish an underlying management structure to support the DBSMC in carrying out its roles and responsibilities. In addition, program officials have stated the department’s intention to replace the DO/IT with the DOD Enterprise Transformation Integration Group, whose roles and responsibilities and concept of operations have yet to be defined. (3) The Secretary immediately make the Assistant Secretary of Defense (Command, Control, Communications & Intelligence), in collaboration with the Under Secretary of Defense (Comptroller), accountable to the Senior Financial Management Oversight Council for developing and maintaining a DOD BEA. The Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer (ASD(NII)/CIO) is a member of the recently established DBSMC; however, it is not known how this committee will operate. DOD established a program office in July 2001. DOD also appointed a Chief Architect, and, according to the department, it has adequate program funding and staff for developing and maintaining its architecture. However, DOD has yet to define the roles and responsibilities for the Chief Architect or provide a time frame for doing so. Such responsibilities would include defining the architecture process and approach; developing the baseline and target architectures; and facilitating the use of the architecture to guide business management modernization projects and investments. (4) The ASD(NII)/CIO report at least quarterly to the Senior Financial Management Oversight Council on the Chief Architect’s progress in developing a BEA, including the Chief Architect’s adherence to enterprise architecture policy and guidance from the Office of Management and Budget (OMB), the CIO Council, and DOD. The ASD(NII)/CIO is a member of the recently established DBSMC; however, it is not known how this committee will operate.
The Steering Committee was briefed monthly by the program office on various program activities until June 2004, when it held its last meeting. As a result, the Steering Committee was not updated on the content and status of Releases 2.2 and 2.3 and the January 2005 and March 2005 Updates of the BEA. According to program officials, the DBSMC held an executive session in February 2005 and its second meeting in April 2005, and the committee will initially hold monthly meetings. (5) The Senior Financial Management Oversight Council report to the Secretary of Defense every 6 months on progress in developing and implementing a BEA. The Deputy Chief Financial Officer briefed the Secretary of Defense in November 2003 on behalf of DOD’s Comptroller, who chairs the Executive Committee. According to the program director, the Secretary of Defense has not been briefed since November 2003 on the department’s progress in developing and implementing the BEA. (6) The Secretary report every 6 months to the congressional defense authorizing and appropriating committees on progress in developing and implementing a BEA. Senate Report 107-213 directs that the department report every 6 months on the status of the BEA effort. DOD submitted status reports on January 31 and July 31, 2003; January 31 and July 30, 2004; and March 15, 2005. The 2003 and 2004 reports were submitted by DOD’s Comptroller but were not signed by the members of the Executive or Steering Committees. The 2005 report was signed by the Acting Under Secretary of Defense for Acquisition, Technology, and Logistics, who is the vice-chair of the DBSMC. GAO-03-458: DOD Business Systems Modernization: Improvements to Enterprise Architecture Development and Implementation Efforts Needed. February 28, 2003.
(1) The Secretary of Defense ensure that the enterprise architecture executive committee members are singularly and collectively made explicitly accountable to the Secretary for the delivery of the enterprise architecture, including approval of each release of the architecture. We previously reported that DOD had established the Executive and Steering Committees, which were advisory in nature. The department had also established the DO/IT and stated that these three bodies were responsible for governing the program. However, these groups had not been assigned responsibilities for directing, overseeing, and approving the BEA. According to key department officials, these three entities will be replaced. Specifically, in February 2005, DOD established the DBSMC, which replaced the Executive and Steering Committees. According to its charter, the DBSMC is accountable and responsible for the program. The department plans to establish an underlying management structure to support the DBSMC in carrying out its roles and responsibilities. In addition, program officials have stated the department’s intention to replace the DO/IT with the DOD Enterprise Transformation Integration Group, whose roles and responsibilities and concept of operations have yet to be defined. (2) The Secretary of Defense ensure that the enterprise architecture program is supported by a proactive marketing and communication program. DOD has a strategic communications plan; however, the plan has yet to be implemented. According to the communications team, its activities have been limited to raising awareness because it lacks the authority to fully implement the other components of its plan, such as achieving buy-in. According to program officials, the department intends to revise the governance structure, including the communications strategy, in September 2005.
(3) The Secretary of Defense ensure that the quality assurance function includes the review of adherence to process standards and the reliability of reported program performance, is made independent of program management, and is not performed by subject matter experts involved in the development of key architecture products. DOD has established a quality assurance function; however, this function does not address process standards and program performance, nor is it an independent function. Further, DOD subject matter experts continue to be involved in the quality assurance function. Program officials stated that the department had yet to address our recommendation, and they could not provide a time frame for when they would begin addressing this recommendation. GAO-03-1018: DOD Business Systems Modernization: Important Progress Made to Develop Business Enterprise Architecture, but Much Work Remains. September 19, 2003. (1) The Secretary of Defense or his appropriate designee implement the core elements in our Enterprise Architecture Framework for Assessing and Improving Enterprise Architecture Management that we identify in this report as not satisfied, including ensuring that minutes of the meetings of the executive body charged with directing, overseeing, and approving the architecture are prepared and maintained. DOD has taken some actions, but these actions do not fully address our previous concerns. For example, DOD has begun to revise its governance structure to provide for improved management and oversight, such as establishing the DBSMC and assigning it accountability and responsibility for directing, overseeing, and approving the BEA; and it has developed a configuration management plan and related procedures and established a configuration control board.
However, the department has not established additional governance entities to support the DBSMC and outlined their roles and responsibilities; updated the policy for BEA development, maintenance, and implementation; included the missing scope and detail in the BEA; or finalized, approved, and effectively implemented the plan, procedures, charter, and metrics governing the configuration management process. (2) The Secretary of Defense or his appropriate designee update version 1.0 of the architecture to include the 29 key elements governing the “As Is” architectural content that our report identified as not being fully satisfied. Of the 29 elements, program officials stated that 3 were not applicable and that the department planned to address an additional 11 by January 2005. However, these officials did not provide any documentation to support this statement. Instead, they provided a draft plan that shows the department’s intent to develop a detailed action plan to guide the development of an “As Is” architecture. According to program officials, they plan to update the “As Is” architectural description in September 2005. (3) The Secretary of Defense or his appropriate designee update version 1.0 of the BEA to include the 30 key elements governing the “To Be” architectural content that our report identified as not being fully satisfied. DOD officials have provided no evidence that this recommendation has been addressed or that it intends to implement this recommendation. (4) The Secretary of Defense or his appropriate designee update version 1.0 to ensure that “To Be” architecture artifacts are internally consistent, to include addressing the inconsistencies described in this report, as well as including user instructions or guidance for easier architecture navigation and use. DOD officials have provided no evidence that this recommendation has been addressed or that it intends to implement this recommendation.
(5) The Secretary of Defense or his appropriate designee update version 1.0 of the architecture to include (a) the 3 key elements governing the transition plan content that our report identified as not being fully satisfied and (b) those system investments that will not become part of the “To Be” architecture, including time frames for phasing out those systems. DOD officials provided a draft plan that shows the department’s intent to develop a detailed action plan to guide the development of the transition plan; however, the draft plan does not provide time frames for doing so. According to program officials, the department will issue a revised transition plan in September 2005, but this version will not fully address our recommendation. (6) The Secretary of Defense or his appropriate designee update version 1.0 of the architecture to address comments made by the verification and validation contractor. According to program officials, of the 299 outstanding comments, 137 have been addressed in Release 2.3 and earlier releases, 100 were not applicable, and the remaining 62 will be addressed in future releases. These officials did not provide any documentation supporting their rationale for the 100 that they considered not applicable, nor did they provide plans for addressing the 62 remaining comments. The verification and validation contractor stated that of the 137 comments that program officials stated had been addressed, 35 had been addressed, 22 were not applicable because they were either duplicate or no longer relevant based on updates to prior releases, 22 had yet to be addressed, and 58 were not assessed. The contractor has yet to provide its assessment on the 100 comments that DOD said were not applicable.
(7) The Secretary of Defense or his appropriate designee develop a well-defined, near-term plan for extending and evolving the architecture and ensure that this plan includes addressing our recommendations, defining roles and responsibilities of all stakeholders involved in extending and evolving the architecture, explaining dependencies among planned activities, and defining measures of activity progress. As discussed in this report, DOD has not developed explicit detailed plans to guide day-to-day program activities and to enable it to evaluate its progress. According to program officials, the department will develop a program baseline by September 30, 2005.

Summary of Several Architecture Frameworks

There are various enterprise architecture frameworks that an organization can follow. Although these frameworks differ in their nomenclatures and modeling approaches, they consistently provide for defining an enterprise’s operations in both (1) logical terms, such as interrelated business processes and business rules, information needs and flows, and work locations and users, and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. The frameworks also provide for defining these perspectives for both the enterprise’s current or “As Is” environment and its target or “To Be” environment, as well as a transition plan for moving from the “As Is” to the “To Be” environment. For example, John A. Zachman developed a structure or framework for defining and capturing an architecture. This framework provides for six windows from which to view the enterprise, which Zachman terms “perspectives” on how a given entity operates: those of (1) the strategic planner, (2) the system user, (3) the system designer, (4) the system developer, (5) the subcontractor, and (6) the system itself.
Zachman also proposed six models that are associated with each of these perspectives; these models describe (1) how the entity operates, (2) what the entity uses to operate, (3) where the entity operates, (4) who operates the entity, (5) when entity operations occur, and (6) why the entity operates. Zachman’s framework provides a conceptual schema that can be used to identify and describe an entity’s existing and planned components and their relationships to one another before beginning the costly and time-consuming efforts associated with developing or transforming the entity. Since Zachman introduced his framework, a number of other frameworks have been proposed. In February 1998, DOD directed its components to use its C4ISR Architecture Framework, Version 2.0. In August 2003, the department released Version 1.0 of the DOD Architecture Framework (DODAF)—an evolution of the C4ISR Architecture Framework, which supersedes the C4ISR framework. The DODAF defines the type and content of the architectural artifacts, as well as the relationships among the artifacts that are needed to produce a useful architecture. Briefly, the framework decomposes an architecture into three primary views: operational, systems, and technical standards. See figure 3 for an illustration of these three views. According to DOD, the three interdependent views are needed to ensure that IT systems support operational needs, and that they are developed and implemented in an interoperable and cost-effective manner. In September 1999, the federal CIO Council published the Federal Enterprise Architecture Framework (FEAF), which is intended to provide federal agencies with a common construct on which to base their respective architectures and to facilitate the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies.
FEAF describes an approach, including models and definitions, for developing and documenting architecture descriptions for multiorganizational functional segments of the federal government. Similar to most frameworks, FEAF’s proposed models describe an entity’s business, the data necessary to conduct the business, applications to manage the data, and technology to support the applications. More recently, the Office of Management and Budget (OMB) established the Federal Enterprise Architecture (FEA) Program Management Office to develop a federated enterprise architecture in the context of five “reference models” and a security and privacy profile that overlays the five models. The Business Reference Model is intended to describe the federal government’s businesses, independent of the agencies that perform them. This model consists of four business areas: (1) services for citizens, (2) mode of delivery, (3) support delivery of services, and (4) management of government resources. It serves as the foundation for the FEA. OMB expects agencies to use this model, as part of their capital planning and investment control processes, to help identify opportunities to consolidate information technology (IT) investments across the federal government. Version 2.0 of this model was released in June 2003. The Performance Reference Model is intended to describe a set of performance measures for major IT initiatives and their contribution to program performance. According to OMB, this model will help agencies produce enhanced performance information; improve the alignment and better articulate the contribution of inputs, such as technology, to outputs and outcomes; and identify improvement opportunities that span traditional organizational boundaries. Version 1.0 of this model was released in September 2003.
The Service Component Reference Model is intended to identify and classify IT service (i.e., application) components that support federal agencies and promote the reuse of components across agencies. This model is intended to provide the foundation for the reuse of applications, application capabilities, components (defined as “a self-contained business process or service with predetermined functionality that may be exposed through a business or technology interface”), and business services. According to OMB, this model is a business-driven, functional framework that classifies service components with respect to how they support business and/or performance objectives. Version 1.0 of this model was released in June 2003. The Data Reference Model is intended to describe, at an aggregate level, the types of data and information that support program and business line operations and the relationships among these types. This model is intended to help describe the types of interactions and information exchanges that occur across the federal government. Version 1.0 of this model was released in September 2004. The Technical Reference Model is intended to describe the standards, specifications, and technologies that collectively support the secure delivery, exchange, and construction of service components. Version 1.1 of this model was released in August 2003. The Security and Privacy Profile is intended to provide guidance on designing and deploying measures that ensure the protection of information resources. OMB has released Version 1.0 of the profile.
Description of DOD Architecture Framework Products, Version 1.0

Overview and Summary Information: Executive-level summary information on the scope, purpose, and context of the architecture
Integrated Dictionary: Architecture data repository with definitions of all terms used in all products
High-Level Operational Concept Graphic: High-level graphical/textual description of what the architecture is supposed to do, and how it is supposed to do it
Operational Node Connectivity Description: Graphic depiction of the operational nodes (or organizations) with needlines that indicate a need to exchange information
Operational Information Exchange Matrix: Information exchanged between nodes and the relevant attributes of that exchange
Organizational Relationships Chart: Command structure or relationships among human roles, organizations, or organization types that are the key players in an architecture
Operational Activity Model: Operations that are normally conducted in the course of achieving a mission or a business goal, such as capabilities, operational activities (or tasks), input and output flows between activities, and input and output flows to/from activities that are outside the scope of the architecture
Operational Rules Model: One of three products used to describe operational activity—identifies business rules that constrain operations
Operational State Transition Description: One of three products used to describe operational activity—identifies business process responses to events
Operational Event-Trace Description: One of three products used to describe operational activity—traces actions in a scenario or sequence of events
Logical Data Model: Documentation of the system data requirements and structural business process rules of the operational view
Systems Interface Description: Identification of systems nodes, systems, and systems items and their interconnections, within and between nodes
Systems Communications Description: Specific communications links or communications networks and the details of their configurations through which systems interface
Systems-Systems Matrix: Relationships among systems in a given architecture; can be designed to show relationships of interest (e.g., system-type interfaces, planned versus existing interfaces)
Operational Activity to Systems Function Traceability Matrix: Mapping of relationships between the set of operational activities and the set of system functions applicable to that architecture
Characteristics of the system data exchanged between systems Systems Performance Parameters Matrix Quantitative characteristics of systems and systems hardware/software items, their interfaces, and their functions Planned incremental steps toward migrating a suite of systems to a more efficient suite, or toward evolving a current system to a future implementation Emerging technologies and software/hardware products that are expected to be available in a given set of time frames and that will affect future development of the architecture One of three products used to describe system functionality—identifies constraints that are imposed on systems functionality due to some aspect of systems design or implementation One of three products used to describe system functionality—identifies responses of a system to events One of three products used to describe system functionality—lays out the sequence of system data exchanges that occur between systems (external and internal), system functions, or human role for a given scenario Physical implementation of the Logical Data Model entities (e.g., message formats, file structures, and physical schema) Comments from the Department of Defense GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Cynthia Jackson, Assistant Director; Barbara Collier; Joanne Fiorino; Neelaxi Lakhmani; Anh Le; Freda Paintsil; Randolph Tekeley; and William Wadsworth made key contributions to this report. | The Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 directed the Department of Defense (DOD) to develop, by September 2005, a well-defined business enterprise architecture (BEA) and a transition plan. GAO has made numerous recommendations to assist the department in successfully doing so. 
As part of ongoing monitoring of the architecture, GAO assessed whether the department had (1) established an effective governance structure; (2) developed program plans, including supporting workforce plans; (3) performed effective configuration management; (4) developed well-defined BEA products; and (5) addressed GAO's other recommendations. To effectively and efficiently modernize its nonintegrated and duplicative business operations and systems, it is essential for DOD to develop and use a well-defined BEA. However, it does not have such an architecture, and the products that it has produced do not provide sufficient content and utility to effectively guide and constrain ongoing and planned systems investments. As a result, despite spending almost 4 years and about $318 million, DOD does not have an effective architecture program. The current state of the program is due largely to long-standing architecture management weaknesses that GAO has made recommendations over the last 4 years to correct. In particular, DOD has not done the following: (1) established an effective governance structure, including an effective communications strategy, to achieve stakeholder buy-in. In particular, the structure that has been in place since 2001 lacks the requisite authority and accountability to be effective, and the key entities that made up this structure have not performed according to their respective charters; (2) developed program plans that explicitly identify measurable goals and outcomes to be achieved, nor has it defined the tasks to be performed to achieve the goals and outcomes, the resources needed to perform these tasks, or the time frames within which the tasks will be performed. 
DOD also has not assessed, as part of its program planning, the workforce capabilities that it needs in order to effectively manage its architecture efforts, nor does it have a strategy for doing so; (3) performed effective configuration management, which is a formal approach to controlling product parts to ensure their integrity. The configuration management plan and the charter for the configuration control board are draft; the board has limited authority; and, after 4 years of development, the department has not assigned a configuration manager; (4) developed a well-defined architecture. The latest versions of the architecture do not include products describing the "As Is" business and technology environments and a transition plan for investing in business systems. In addition, the versions that have been produced for the "To Be" environment have not had a clearly defined purpose and scope, are still missing important content, and contain products that are neither consistent nor integrated; and (5) fully addressed other GAO recommendations. DOD recognizes that these weaknesses need to be addressed and has recently assigned a new BEA leadership team. DOD also has either begun steps to or stated its intentions to (1) revise its governance structure; (2) develop a program baseline, by September 30, 2005, that will be used as a managerial and oversight tool to allocate resources, manage risks, and measure and report progress; and (3) revise the scope of the architecture and establish a new approach for developing it. However, much remains to be accomplished to establish an effective architecture program. Until it does, its business systems modernization effort will remain a high-risk program. |
Background

Navy ships are complex defense systems, using advanced designs with state-of-the-art weapons, communications, and navigation technologies and requiring many years to plan, budget, design, and build.

Navy Shipbuilding Contract Types

The Navy uses three primary contract types for shipbuilding programs—firm-fixed-price, FPI, and cost-reimbursement type contracts. Contract type selection is a key factor in determining how risk is apportioned between the Navy and the shipbuilder. According to the Director of Defense Pricing, choosing a contract type is an important way of aligning the incentives between the government and the shipbuilder. No single contract type will work for all shipbuilding programs in all cases. The following is a brief description of each contract type used in Navy shipbuilding programs:

Cost-Reimbursement Contracts—the government pays the shipbuilder's allowable incurred costs to the extent specified in the contract and may include an additional fee (profit). These contracts establish an estimate of total costs and a ceiling that the contract may not exceed without the approval of the government. The shipbuilder must put forth its best efforts to perform the work within the estimated costs. However, the government must reimburse the builder for its allowable costs regardless of whether the work is completed. Generally, this contract type is used when requirements are not well defined or when a lack of knowledge does not permit costs to be estimated with sufficient accuracy to use a fixed-price contract, such as when designing and building lead ships.

Fixed-Price Incentive (FPI) Contracts—the contract specifies several elements, including a profit adjustment formula referred to as a share line. In accordance with the share line, the government and the shipbuilder share responsibility for cost increases, or decreases, compared to the agreed-upon target cost.
The final negotiated cost is subject to a ceiling price, which is the maximum that may be paid to the contractor, except for any adjustment under other contract clauses. Generally, the share line functions to decrease the shipbuilder's profit as actual costs exceed the target cost. Likewise, the shipbuilder's profit increases when actual costs are less than the target cost for the ship. Since the shipbuilder's profit is linked to actual performance, FPI contracts provide an incentive for the shipbuilder to control costs. Incentive arrangements can be designed to achieve specific objectives by motivating contractor efforts that might not otherwise be emphasized and discouraging contractor inefficiency and waste.

Firm-Fixed-Price Contracts—the government agrees to purchase a ship for a set price, and the shipbuilder is required to deliver a ship regardless of its actual costs. The shipbuilder bears the maximum risk and full responsibility for all costs and the resulting profit or loss, and therefore can earn a higher profit if actual costs are below the contract price. This contract type is suitable for situations where the government and shipbuilder have a clear understanding of the scope of work and are confident in the cost of ship construction.

Elements of Fixed-Price Incentive Contracts

FPI contracts are complex, comprising a target cost, target profit, target price, ceiling price, and a profit adjustment formula—which the Navy refers to as a sharing ratio, or share line—that is used to determine the profit earned by the shipbuilder. The target cost, schedule, terms and conditions, and the scope of work influence how the share line and ceiling price are established.
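The interplay of these elements can be illustrated with a short calculation. The sketch below is a simplified model of a firm-target FPI settlement, assuming a single linear share line on each side of the target cost; the function name and dollar figures are hypothetical and are not drawn from any Navy contract. Share ratios are written as the government's share (e.g., 0.5 for a 50/50 line).

```python
def fpi_settlement(actual_cost, target_cost, target_profit, ceiling_price,
                   govt_share_under, govt_share_over):
    """Final contract price and shipbuilder profit under a simplified
    firm-target FPI contract (illustrative sketch, hypothetical names)."""
    delta = target_cost - actual_cost                # positive = underrun
    govt_share = govt_share_under if delta >= 0 else govt_share_over
    shipbuilder_share = 1.0 - govt_share
    profit = target_profit + shipbuilder_share * delta
    price = actual_cost + profit
    if price > ceiling_price:
        # The ceiling price caps the government's liability; costs above
        # it come out of the shipbuilder's profit (or produce a loss).
        price = ceiling_price
        profit = price - actual_cost
    return price, profit

# Hypothetical contract in $M: target cost 100, target profit 10,
# ceiling price 120 (120 percent of target cost), 50/50 share line.
print(fpi_settlement(110, 100, 10, 120, 0.5, 0.5))   # $10M overrun
print(fpi_settlement(90, 100, 10, 120, 0.5, 0.5))    # $10M underrun
print(fpi_settlement(130, 100, 10, 120, 0.5, 0.5))   # overrun past the ceiling
```

In the first case the shipbuilder's profit falls from 10 to 5 and the government pays 115; in the underrun case profit rises to 15 while the price drops to 105; once the ceiling binds, the price stays at 120 and further cost growth reduces profit dollar for dollar. A share line with different under- and over-target ratios is modeled by passing different values for the two share arguments.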
The structure of the share line establishes how cost overruns (over target cost) and cost underruns (below target cost) are shared between the government and shipbuilder, and is used to calculate the final profit earned by the shipbuilder. The ceiling price is the maximum the government can pay under the contract, except for adjustments under other clauses, and is expressed as a percentage of the target cost. The share line is intended to be the primary incentive for the shipbuilder to control costs.

The Navy uses various share line structures under FPI shipbuilding contracts. A commonly used share line applies the same share ratio between the shipbuilder and the government for both under-target and over-target performance. Figure 2 depicts a hypothetical example contract with a 50/50 share line above and below the target cost, with the ceiling price set at 120 percent of target cost. This means that any cost overrun or cost underrun savings would be shared equally between the Navy and shipbuilder. As shown in the figure, the ceiling price represents the government's maximum liability under the contract. The figure also details the elements of an FPI contract that are negotiated at the outset. In instances when cost risk is not apportioned equally between the shipbuilder and the Navy, a share line with different share ratios for under-target and over-target performance—such as an 80/20 underrun and a 70/30 overrun—can be used. In these scenarios, the Navy would receive 80 cents of every dollar of cost savings or, conversely, pay 70 cents of every dollar of cost overrun.

DOD and Navy Guidance on FPI Contracts

In recent years, DOD has pushed for the increased use of FPI contracts in major defense acquisition programs, where appropriate.
Guidance and regulations on the use of FPI contracts include the following:

A series of Better Buying Power initiative memorandums, including a September 2010 memorandum in which DOD's Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)) encouraged the use of FPI contracts for acquisition programs in early production. This memorandum was followed by more detailed guidance that specified actions USD (AT&L) expected to be executed. The September 2010 Better Buying Power initiative encouraged the use of FPI contracts, in part, because in the past DOD had awarded cost-plus-award-fee contracts with subjective award fees not clearly tied to cost control. A November 2010 memorandum stated the expectation that acquisition teams pay particular attention to share lines and ceiling prices, and that FPI contracts with a 50/50 share line and 120 percent ceiling price should be the norm, or starting point. The Director of Defense Pricing stated that he believes DOD's increased use of FPI contracts, coupled with a more professional and cost-conscious workforce, has saved billions of dollars and will ultimately result in savings far in excess of any other initiative associated with Better Buying Power.

In response to DOD's Better Buying Power initiatives, the Office of the Secretary of Defense's Defense Procurement Acquisition Policy Office released guidance for using incentive contracts in April 2016. This guidance stated that the use of FPI contracts by programs leads to better cost and schedule performance outcomes, and therefore suggested employing FPI contracts when appropriate—such as in a program's early production phase or near the end of engineering and manufacturing development.
The Defense Federal Acquisition Regulation Supplement (DFARS), which has implemented the guidance set forth in the 2010 Better Buying Power initiatives, states that the contracting officer shall pay particular attention to share lines and ceiling prices for FPI contracts, with a 120 percent ceiling and a 50/50 share ratio, or sharing arrangement, as the point of departure for establishing the incentive arrangement.

The Federal Acquisition Regulation (FAR) outlines the appropriate use of FPI contracts. The FAR states that an FPI contract should be considered appropriate when a firm-fixed-price contract is not suitable or the nature of the supplies or services being acquired and other circumstances of the acquisition are such that the contractor's assumption of a degree of cost responsibility will provide a positive profit incentive for effective cost control and performance. The FAR goes on to state that, if the contract also includes incentives on technical performance and/or delivery, the performance requirements should provide a reasonable opportunity for the incentives to have a meaningful impact on the contractor's management of the work. Further, the FAR specifies that FPI contracts are appropriate when the parties can negotiate at the outset a firm target cost, target profit, and profit adjustment formula that will provide a fair and reasonable incentive, and a ceiling that provides for the contractor to assume an appropriate share of the risk. When the contractor assumes a considerable or major share of the cost responsibility under the adjustment formula, the target profit should reflect this responsibility.

A series of memorandums released by the Assistant Secretary of the Navy for Research, Development and Acquisition between 2003 and 2010 stressed the importance of structuring incentive contracts in a way that best motivates the shipbuilder to meet requirements and protects the government's cost position.
While DOD has recently promoted the use of FPI contracts through its Better Buying Power initiatives and changes to the DFARS, the Navy has used FPI contracts for shipbuilding programs for over 40 years. Specifically, FPI contracts have been the Navy's primary contract type since the mid-1970s, when the Navy shifted away from fixed-price sealed bid contracting—which began in 1963 at the recommendation of the Secretary of Defense. Moreover, during the 1980s, FPI contracts became the preferred contract type for detail design and construction of lead ships and follow-on ship construction, with many of the contracts using a 50/50 share line. According to DOD officials, FPI contracts fell out of favor after the 1980s, with the Navy awarding more cost-type contracts throughout the 1990s. Beginning in 2000, FPI contracts once again became the primary contract type for Navy ship construction for all but lead and early follow-on ships. We elaborate below on the picture over the past 10 years.

Differences between Navy and Commercial Shipbuilding

We have reported for many years on the long-standing problem of cost growth in shipbuilding programs. In our prior work examining numerous shipbuilding programs, we have found that cost growth was often attributable to the Navy awarding contracts to design and construct its ships before retiring technical risk. Without a full understanding of the effort needed to deliver the ship, the government negotiated contracts in which the Navy assumed all or a large percentage of the risk, and was largely responsible for cost growth. In contrast, in our prior work examining commercial shipbuilding practices, we found that ships are usually delivered on time, at cost, and with expected quality.
Our prior work highlights differences in Navy and commercial shipbuilding contracting practices, which reflect differences in the two environments. In May 2009, we found: (1) for commercial shipbuilders and ship buyers, only firm-fixed-price contracts were used for design and construction activities, and the delivery date of the ship is clearly established in the contract with accompanying penalties for delays; (2) commercial buyers were able to choose from a competitive global base of available shipyards and suppliers without generally needing to consider the long-term health of any individual yard or supplier; and (3) buyers and shipbuilders both made acquisition decisions based on anticipated return on investment.

In November 2013, we reported that, as opposed to commercial buyers, which typically operate in a robust, competitive environment, the Navy has a limited industrial base to build its ships. We noted that one result of the limited industrial base is that the Navy may award sole source contracts in order to sustain workloads and the solvency of the companies involved. This is because the Navy has fewer choices of shipbuilders and has an interest in sustaining these shipbuilders despite shortfalls in performance.

The Navy Primarily Used FPI Contracts for Shipbuilding over the Past 10 Years in an Effort to Share Cost Risk While Sustaining the Industrial Base

From November 2005 to November 2015, the majority—19 of 23—of the Navy's shipbuilding contracts were FPI, with contract obligations of $66.5 billion. According to Navy contracting officials, FPI contracts can enable the Navy and the shipbuilder to share cost risk more equitably than other contract types. However, three of the six selected FPI contracts we reviewed lacked a key document describing the Navy's rationale for selecting an FPI contract.
In addition, business clearance memorandums—which document the basis for approval of the action and for determining that negotiated prices are fair and reasonable—provided varying levels of insight into the rationale for final contract terms. The Navy is awarding these complex FPI contracts in an environment with a limited industrial base and a low volume of ship procurement. These factors constrain the Navy's ability to award shipbuilding contracts competitively. While competition can better position the Navy to influence final contract terms, the Navy is, for the most part, the only customer for the major U.S. shipbuilders and therefore has a desire to sustain shipbuilders despite shortfalls in performance. Many factors—such as the degree of competition and the number of ships expected to be procured under a contract—are considered during contract negotiations to determine FPI contract terms. But the realities of the U.S. shipbuilding environment reduce the Navy's leverage to negotiate favorable contract terms even when shipbuilder performance falters.

The Navy Primarily Used Fixed-Price Incentive Contracts for Shipbuilding over the Past 10 Years

Between November 1, 2005, and November 30, 2015, the Navy awarded 23 detail design and construction contracts for Navy shipbuilding programs. Of the 23 contracts awarded in the 10-year time frame, 19 were FPI contracts, as shown in figure 3. A total of 83 individual ships were included under these 23 contracts, accounting for over $72 billion in contract obligations. A majority of these ships, 79, were awarded on an FPI basis, representing $66.5 billion in contract obligations, or over 92 percent of the Navy's total obligations for detail design and construction during the time frame, as shown in figure 4.
The Navy Prefers FPI Contracts but Did Not Always Document Its Rationale for Determining Contract Type and Elements

According to Navy contracting officials, the Navy prefers FPI contracts for shipbuilding programs over other contract types because cost risk can be shared between the Navy and the shipbuilder more equitably. According to the Director of Defense Pricing, contractors are trained to include the price of all risks in their cost; therefore, FPI contracts can result in lower contract costs than firm-fixed-price contracts because, if risks do not materialize under an FPI contract, the government and the contractor share any cost savings. Under a cost-plus-incentive-fee contract, once costs exceed the target cost, the Navy continues to pay the allowable costs and the shipbuilder only earns the minimum fee (profit). Under an FPI contract, the shipbuilder generally absorbs costs above the ceiling price. In contrast to both of these situations, under a firm-fixed-price contract the shipbuilder assumes full responsibility for all costs and the resulting profit or loss. Navy contracting officials explained that the Navy rarely uses firm-fixed-price contracts for shipbuilding because it would likely result in higher offers. They noted that for ship construction, shipbuilders would likely factor in additional costs to account for their assumption of risk—particularly given the lack of competitive cost pressures due to the limited shipbuilding industrial base. Figure 5 identifies cost risk to the Navy and shipbuilder by contract type.

A required contract document that is intended to convey the rationale for selecting an FPI contract was not present in half of the six contract files we reviewed.
Beginning in October 2009, the FAR required the government to complete a determination and findings document and include it in the contract file for all incentive- and award-fee contracts justifying that the use of this type of contract is in the best interest of the government. Three of our six selected contract files, awarded after the interim FAR rule was published in October 2009, did not contain this required determination and findings document. When we raised this issue, a senior Navy contracting official acknowledged that the NAVSEA Contracts Directorate had not been consistent in completing a determination and findings document for incentive-fee type contracts. Better documentation would help ensure that contract files reflect the rationale for why an FPI contract was determined to be the preferred contract type. From a business perspective, contracting officials who later revisit the file to make modifications or plan for future awards may not have a thorough understanding of why this contract type was selected. According to this same official, although the directorate had not been consistent in completing a determination and findings document for incentive-fee contracts, the general business process in place within the directorate, even prior to the FAR change, was to include discussion of contract type, including any applicable incentive fees, in the business clearance memorandum or acquisition planning documents. Navy contracting officials told us that business clearance memorandums, specifically, are key documents for obtaining insight into decisions regarding FPI elements (e.g., target cost, share line, and ceiling price). 
These memorandums are generally required for each negotiated contract action by the Navy Marine Corps Acquisition Regulation Supplement, which states that the purpose of a pre-negotiation business clearance memorandum (completed prior to negotiations) and a post-negotiation business clearance memorandum (completed prior to a settlement commitment) is to document the basis for approval of the action and the basis for determining that the negotiated prices are fair and reasonable. Additionally, the FAR requires the contracting officer to include this type of document in the contract file, and details specific requirements as to the content. However, the business clearance memorandums we reviewed provided varied levels of insight into decisions surrounding the rationale for final FPI contract terms. For example, the post-negotiation business clearance memorandum for the LPD 22-25 contract was relatively robust in its level of insight; it included a detailed rationale for the share lines, which were structured to reach agreement on two risks—shipbuilder's property insurance and facility closure. Specifically, the post-negotiation business clearance memorandum provided detailed information on the nature of each risk, including a quantification of the potential cost impacts, to support the steps in the overrun share lines. In contrast, we found the following:

The SSN 784-791 contract, which was awarded on a sole source basis, did not have a post-negotiation business clearance memorandum in the contract file. According to a Navy contracting official, the post-negotiation business clearance memorandum was never completed because the staff responsible was reassigned to other, higher priority work.

The combined pre- and post-negotiation business clearance memorandum for the DDG 115 and DDG 116 noted that the contract includes a milestone-based incentive for each ship in order to motivate the shipbuilder on technical and management performance. However, while the memorandum provides a general statement on the purpose of the incentive, it did not explain the rationale for how the contracting officials determined the amount of the incentive or how it might affect the target profit available on the contract. The contracting officials involved believed that the information in the business clearance memorandum was adequate.

The post-negotiation business clearance memorandum for the two LCS block buy contracts did not include any information on the Navy's rationale for selecting the share line and ceiling price for all 10 ships on each contract. These terms had been specified in the RFPs and were ultimately included in the contracts.

Without proper documentation for these complex contracts and the key decisions made about contract selection and FPI contract elements, the Navy does not position its contracting officials to clearly understand how decisions were made or how to negotiate new contract terms in a way that would be most beneficial to the government. This is particularly important for the Navy going forward, as it intends to invest many billions of dollars in these same shipbuilding programs—in addition to others—over the next few decades.

While a Limited Shipbuilding Industrial Base Restricts the Navy's Bargaining Power, Competition Can Better Position the Navy to Influence Final Contract Terms

A limited number of U.S. shipbuilders and a low volume of ship procurement limit the Navy's ability to award shipbuilding contracts competitively. Two companies—General Dynamics Corporation and Huntington Ingalls Industries—own five of the seven major U.S. shipyards that build Navy vessels. Further, several of these shipyards have specialized production capabilities that constrain and dictate the types of vessels each can build, and limit opportunities for competition within the shipbuilding sector.
For instance, of the seven shipyards, only Newport News and Electric Boat have facilities for constructing nuclear submarines. We previously reported that this is in contrast to commercial ship buyers, who have an array of yards and suppliers to choose from. Sustaining the workloads of the major U.S. shipyards is a key concern for the Navy, and the need to preserve the industrial base for future shipbuilding programs further limits the Navy's ability to procure ships using full and open competition. However, even in competitive procurements, the Navy's interest in maintaining the long-term health of U.S. shipbuilders can weaken its leverage at the negotiation table. This is because, unlike commercial buyers, the Navy and its shipbuilders largely operate in a symbiotic relationship: the Navy is, for the most part, the only customer, and the major shipbuilders are the only providers of the desired product—the Navy ship. Of the seven major shipyards, only General Dynamics NASSCO regularly builds commercial ships alongside the Navy's. As a result, the Navy can be driven to sustain the shipbuilders for future programs despite shortfalls in performance.

The impact of the shipbuilding industrial base on the Navy's ability to drive a favorable business deal is particularly pronounced in a sole source environment. Two of the six contracts we reviewed were initially sole source awards. In these cases, generally all contract terms—including contract type—were established through bilateral negotiations between the Navy and the shipbuilder. Therefore, once the Navy selects an FPI contract, the contract elements that are used to determine final contract price—including the share line and ceiling price—are subject to negotiations; the Navy cannot rely on competitive forces to strengthen its negotiation position. The LPD 21 is an example of the reduced leverage the government has in a sole source environment.
During negotiations for detail design and construction of LPD 21, the Navy agreed to the shipbuilder's request to change from an FPI/award-fee contract—as had been specified in the RFP—to a cost-plus-incentive-fee/award-fee contract. Specifically, the shipbuilder informed the Navy that an FPI/award-fee contract would require it to offer an excessively conservative and unaffordable target cost. In light of this position, Navy contracting officials believed that a cost-plus-incentive-fee/award-fee contract would be an acceptable solution. As a result, the program requested and received approval from the Assistant Secretary of the Navy for Research, Development, and Acquisition to proceed with cost-plus-incentive-fee/award-fee negotiations.

Even under the inherent constraints posed by the U.S. shipbuilding industrial base, competition can better position the Navy in negotiations than a sole source environment can. Under a competitive RFP, the contract type and other terms—including the share line and ceiling price—that the Navy specifies generally remain the same through negotiations and award. Under this scenario, competition can help ensure that the shipbuilders put forth their best offer. For example, the Navy's RFP for DDG 114-116, which used limited competition between the two builders of the DDG 51-class destroyers, specified an FPI contract type, 50/50 share line, target profit, and ceiling price. These contract terms remained consistent throughout negotiations.

The Navy and Shipbuilders Weigh Multiple Factors to Determine FPI Contract Elements

During our discussions, Navy contracting officials and shipbuilder representatives emphasized that they consider multiple factors during contract negotiations to determine their negotiating positions regarding the various FPI contract elements.
As shown in figure 6 below, how these factors are balanced in a contract ultimately determines the Navy’s share of cost risk. Each contract is distinct with individual factors that are weighed differently depending on the unique circumstances of the acquisition. For example, if the contract is for a lead ship of a new class, the Navy and shipbuilder may weigh factors differently than if the contract is for a class where numerous ships have been delivered, since as we have reported previously, cost growth and schedule delays can be amplified for lead ships. The factors range from broad considerations— such as degree of competition and the health of the industrial base—to considerations specific to individual shipbuilding programs, such as the number of ships expected to be procured under a contract. Across the six selected contracts we reviewed, several factors shaped the structure of FPI contract elements. For example, one factor that shaped the structure of the FPI contract for the ESD/ESB program was that competition existed to build the ships, which enabled the Navy to issue a competitive RFP for system design with options for ship construction (lead ship and two follow-on ships) specifying an FPI contract type, share line, and ceiling price. Two shipbuilders submitted offers; however, one of the offerors withdrew and the contract was awarded to NASSCO. When the Navy exercised the options for ship construction, NASSCO representatives told us that NASSCO considered several factors in developing its proposal, including the use of a commercial ship design and the other workload NASSCO planned to have in the shipyard, which impacts overhead rates. In response to the shipbuilder agreeing to the Navy’s contract terms, the Navy altered the underrun share line stated in the RFP and agreed to allow the shipbuilder to receive a greater share of the profit in the event actual costs were lower than expected. 
An important factor in determining FPI contract elements for the Navy and shipbuilder is whether the contract will be a multi-year or block buy contract. These special contracting methods, which can only be used if Congress takes certain actions, allow DOD to acquire more than one year’s requirements under a single contract award without having to exercise a contract option for each year after the first. Three of the six contracts we reviewed were awarded as block buy or multi-year procurement contracts. In 2010, the Navy awarded a block buy contract for 10 ships to each of the two LCS shipbuilders, and the SSN 774 class contract was awarded in 2008 as a multi-year contract. These contract methods have the potential to create cost savings compared to a series of annual contracts because the shipbuilder is given an expectation of future workload, and they allow for more economical procurement from suppliers and more efficient production, which can translate into lower ship prices for the government. Under these contracting methods, the Navy and the shipbuilder typically negotiate prices for construction of all the ships on the contract at the same time, as well as the share line and ceiling price. Under such negotiations, the shipbuilder and Navy need to make assumptions regarding the shipbuilder’s efficiencies, learning, and suppliers far into the future. For the three block buy or multi-year contracts we reviewed, we found that the share lines and ceiling prices did not change across the ships on each of these contracts, with target costs generally lower for ships procured later in the build profile, presumably to account for the shipbuilders’ efficiencies and learning.
Structure of FPI Contract Elements Often Resulted in the Navy Absorbing More Cost Risk Than Guidance Advises, and Added Incentives Increased the Shipbuilders’ Potential Profitability Although FPI contracts are intended to motivate the shipbuilder to control costs by requiring the shipbuilder to assume a suitable share of the cost risk, the Navy often structured the FPI contracts we reviewed such that it shouldered more cost risk than guidance suggests as a starting point. While the guidance takes into account that each contract negotiation has its own unique aspects, we found a number of occasions where the Navy had departed from it, suggesting that the Navy may not be reaping the expected benefits of this contract type. Specifically, we found that: (1) for two of the six selected contracts, uncertainties at the time of contract award made establishing a realistic target cost challenging and (2) for most of the six contracts, share lines or ceiling prices were not in line with what guidance suggests as a point of departure. Further, the Navy included over $700 million in additional incentives in the contracts—outside of the share line. These incentives had the potential to increase shipbuilder profitability and—in the event actual costs exceeded the ceiling—cushion the shipbuilders’ losses. When the Navy assumes a greater share of cost overruns above the target cost, accepts a higher ceiling price, or both, the FPI elements may not provide sufficient motivation for the shipbuilders to control costs. Structure of FPI Contract Elements Resulted in the Navy Absorbing a Greater Burden of Cost Risk Than the Shipbuilder in Most of Our Selected Contracts FPI contracts can promote shipbuilder efficiency and reduce overall cost risk to the government when a firm-fixed-price contract is not appropriate.
However, if the structure of the contract elements results in the government bearing too much of the cost risk, the effectiveness of FPI contracts in motivating the shipbuilder to control costs may be weakened. According to the Director of Defense Pricing, DOD’s guidance has been to utilize FPI contract structures to align profitability with contract performance when there are well-founded cost expectations. As previously noted, DOD’s Better Buying Power initiative also encourages use of FPI contracts, when appropriate, as a means to achieve better cost performance. We compared certain FPI contract elements in our sample to DOD and Navy regulations, guidance, and recommended practices for establishing target cost, share line, target profit, and ceiling price, and found that the Navy often ended up bearing more cost risk than these criteria support as a starting point. Significant Cost Uncertainty Made Establishing a Realistic Target Cost Challenging in Two Cases Office of the Secretary of Defense and Navy officials, as well as the shipbuilder representatives we spoke with, agreed that contract negotiations often focus on the target cost, which according to a senior DOD official and guidance should be an achievable—but somewhat challenging—amount. According to April 2016 DOD guidance, the target cost should factor in the costs of known risks. Thus, to determine a target cost that reflects anticipated costs of performance, the Navy and shipbuilders need to evaluate and attempt to determine the cost of the full risk involved in constructing a ship. As we have previously found, the Navy has often proceeded to contract award with significant technical risk, unclear expectations between buyer and builder, and cost uncertainty. This was in part because the Navy had not allocated sufficient time prior to contract award to retire technical risks.
We found that in two of the six contracts we reviewed—the LPD 17 class and LCS ships—significant cost uncertainties made establishing a realistic target cost challenging. While various considerations need to be taken into account in negotiating target cost—and, in the case of the LPD 17 class, Hurricane Katrina was an unusually disruptive factor—the Navy’s desire to sustain the industrial base, as discussed earlier, was a key driver.

LPD 22-25: There was significant uncertainty at the time of contract negotiations for LPD 22-23. During the course of contract negotiations in summer 2005, the lead ship, LPD 17, was delivered incomplete at a cost of $800 million more than planned. Then in August 2005, Hurricane Katrina caused major damage to the Gulf Coast area and the shipbuilder’s facilities, which resulted in the shipbuilder withdrawing all of its proposals until operations resumed. The Navy subsequently amended its solicitation to include options for two more ships (LPD 24 and LPD 25). The shipbuilder increased its proposed vessel labor hours for all four of the ships to account for increased use of inexperienced labor, out-of-sequence work, and additional rework, among other things, resulting from the hurricane. The contract was negotiated even though these were the first LPD 17 class ships beginning construction after Hurricane Katrina and there was still considerable risk surrounding the ships’ likely costs; the unique circumstances posed by the hurricane made it difficult to know whether the vessel labor hour increase would be representative of the actual impact on labor hours. In contrast, the Navy agreed to an increase in the target cost for LPD 26 when compared to LPD 25 because outcomes for LPD 22-25 were better understood at that time and because of unique schedule challenges with LPD 26.
The business clearance memo for LPD 26 states that the Navy considered the target cost aggressive because it was considerably below the average estimates at completion for LPD 22-25.

LCS 5-23 odd only and LCS 6-24 even only: At the time the Navy awarded the two LCS contracts with options for up to 10 ships each, the shipbuilders had only delivered one ship each, both far exceeding the Navy’s original contract value. Further, as we previously reported, these ships were delivered in an incomplete state and had outstanding technical issues. As a result, there was an incomplete understanding of the costs on which to base the target costs when the FPI contracts were awarded. In contrast, on the SSN 784-791 contract, the Navy had greater certainty about the shipbuilder’s ability to achieve its cost targets because the shipbuilder had already delivered five ships at the time of contract award. Navy and Shipbuilders Usually Shared Equally in Cost Overruns on the Share Line, but Navy Shouldered Additional Cost Risk by Setting Higher Ceiling Prices Another critical element of an FPI contract is how the burden of cost overruns or underruns is shared between the Navy and the shipbuilder, which is a function of the share line. The ceiling price, or the maximum amount the government will pay as part of the FPI structure (excluding other contract clauses), is also used to apportion risk between the Navy and shipbuilder. It is the combination of both the share line and ceiling price that determines the amount of cost risk placed on both the Navy and the shipbuilder. We found that, for the six shipbuilding contracts we reviewed, the share lines or ceiling prices, or both, placed more cost risk on the Navy than guidance and regulation recommend as a starting point, as seen in figure 7.
A memorandum issued by the Assistant Secretary of the Navy for Research, Development, and Acquisition in 2003 stated that Navy contracting officials should consider at least an equal sharing arrangement, or “50/50 share line,” for most FPI contracts. This guidance also states that the contracting officer should use “aggressive” sharing arrangements, requiring the shipbuilder to share a substantial portion of both cost underruns and overruns whenever appropriate. The guidance also states that the Navy should consider cost sharing arrangements that increase the shipbuilder’s share as cost overruns increase, rather than sharing them equally. For example, in many cases it may be appropriate to use a 50/50 share line for cost outcomes that are within plus or minus 5 percent of the target cost and 40/60 or 30/70 for other cost outcomes. In addition, since 2011, the DFARS has stated that the contracting officer shall pay particular attention to share lines with a 50/50 share ratio as the point of departure for establishing the incentive arrangement. With the exception of two contracts, the selected contracts we reviewed used a share line with a 50/50 share ratio. The exceptions were (1) the LPD 22-25 contract, which had overrun share lines that varied depending on target cost performance and that held the Navy accountable on the share line for a greater degree of cost growth than the shipbuilder, and (2) the SSN 784-791 contract, which had an overrun share ratio that held the shipbuilder accountable on the share line for a greater degree of cost growth than the Navy. The ceiling price is also used to apportion risk between the Navy and shipbuilder. The ceiling price is often expressed as a percentage of the target cost; however, ceiling prices are dollar values, not percentages.
Since 2011, the DFARS has stated that for FPI contracts, contracting officers should consider a ceiling price of 120 percent of the target cost as a point of departure for establishing the incentive arrangement, meaning that the maximum the government could pay would be target cost plus an additional 20 percent. DOD recently reiterated this point in April 2016 guidance. The Director of Defense Pricing stated that the actual ceiling price percentage to be used is a function of the perception of risk and who should bear that risk on any particular contract. He noted that in most instances, negotiated ceiling prices in contracts for major weapon systems other than shipbuilding have been less than 120 percent. For 38 of the 40 ships on the contracts we reviewed (the exceptions being ESD 1 and ESB 4), the Navy shouldered additional risk by setting higher ceiling prices than guidance suggests as a point of departure. In the case of the LPD 26 contract, the Navy agreed to a higher ceiling percentage than guidance suggests; the Navy believed this was appropriate given, among other things, the increased risk due to a gap in construction with the prior hulls and the shipbuilder assuming a greater degree of risk on the share line. Our analysis found that even an additional 5 percent above what the guidance recommends as a starting point can significantly increase the government’s potential liability, particularly given the high value of shipbuilding contracts. Majority of Contracts Included Additional Incentives That Provided the Potential for Shipbuilders to Earn Profit Outside of the Share Line Although the cost incentive on the share line is intended to be the primary incentive for the shipbuilder to control costs, in 5 of the 6 contracts we selected for review, the Navy included additional incentives, which have the effect of increasing the shipbuilder’s potential to earn profit or cushion its potential loss in the event of cost growth.
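The interaction of share line and ceiling price can be made concrete with simple arithmetic. The Python sketch below uses a flat share ratio and entirely hypothetical dollar figures (actual target costs and ceiling prices for the reviewed contracts are sensitive and not reproduced in this report); real contracts may also use graduated share lines and separate fee provisions.

```python
def government_payment(target_cost, target_profit, actual_cost,
                       gov_share, ceiling_pct):
    """Price the government pays under a simplified FPI arrangement:
    the target price (target cost + target profit) adjusted by the
    government's share of any overrun or underrun, capped at the
    ceiling price (expressed here as a percentage of target cost)."""
    target_price = target_cost + target_profit
    ceiling_price = ceiling_pct * target_cost
    price = target_price + gov_share * (actual_cost - target_cost)
    return min(price, ceiling_price)

# Hypothetical ship: $1,000M target cost, $100M target profit,
# 50/50 share line, actual cost overruns to $1,300M.
pay_120 = government_payment(1000.0, 100.0, 1300.0, 0.5, 1.20)  # $1,200M
pay_125 = government_payment(1000.0, 100.0, 1300.0, 0.5, 1.25)  # $1,250M
```

In this hypothetical, raising the ceiling from 120 percent to 125 percent of target cost increases the government’s payment on a single overrunning ship by $50 million, which is why even a 5 percentage point departure from the DFARS point of departure can significantly raise the government’s potential liability.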
These incentives, which totaled over $700 million available to the shipbuilders, fell into four broad categories—award fee, cost incentives, milestone-based incentives, and shipyard investment incentives—added both at the time of award and post-award, that provided profit in addition to the target profit on the share line:

Award fee: A shipbuilder may earn fee commensurate with overall cost, schedule, and technical performance as measured against contractual requirements in accordance with the criteria stated in the award-fee plan.

Cost incentive (other than the share line): Only available to the shipbuilder if actual costs incurred meet a predetermined threshold.

Milestone-based incentive: Encourages the shipbuilder to meet objectives that may or may not be tied to a date (e.g., achieving specified levels of ship completion at launch such as piping and cable installation, resolving shipbuilder-responsible construction defects, or delivering on or before an agreed upon delivery date).

Shipyard investment incentive: Encourages the shipbuilder to make investments that reduce shipbuilding costs by improving construction, facilities, equipment, and processes.

Only the ESD/ESB contract used the share line as its only incentive mechanism, as shown in table 1. Examples of the additional incentives added include:

LPD 22-25: The Navy added millions of dollars in milestone-based incentives post-award to encourage the shipbuilder to complete work more efficiently, by completing heavy industrial work (pipe installation, hot work, cable pull, etc.) on land as opposed to in the water. Contract file documentation states that the Navy’s analysis of the shipbuilder’s performance on prior hulls indicated that there was a 50 percent premium to complete heavy industrial work in the water; therefore, the milestone-based incentives were structured to incentivize higher levels of ship completion prior to launch.
However, maximizing construction work completed on land is an essential aspect of an efficient build plan—and, presumably, already incentivized through the profit that could be earned through the contract share line.

DDG 115 and DDG 116: The Navy increased the shipyard investment incentive post-award. Contract file documentation states that the additional shipyard investment incentives were added as part of a comprehensive settlement agreement negotiated by the Navy and Bath Iron Works in July 2013 on ship construction efforts for the DDG 51 program.

LCS 5-23 odd only and LCS 6-24 even only: The Navy added millions of dollars in milestone-based incentives to each LCS block buy contract post-award. According to a Navy contracting official, the rationale for adding these incentives was twofold: as consideration in exchange for the shipbuilders agreeing to fiscal year 2010 competitive pricing for the fiscal year 2016 ship that was added to each existing contract, and to motivate the shipbuilder to improve performance given poor cost and schedule outcomes on prior ships. These incentives did not apply to LCS 5 and LCS 6, the first ships on each contract.

SSN 784-791: The Navy added additional incentives to encourage the shipbuilder to deliver the ships at or below an agreed upon percent of each ship’s target cost. This created, in effect, a further incentive for the shipbuilder to minimize target cost overruns: (1) the share line for overruns provided an incentive for the shipbuilder to minimize costs to avoid losing profit, and (2) the additional cost incentive further elevated profit opportunity if the shipbuilder could deliver at a total cost that did not exceed the agreed upon percent of the target. Under this incentive structure, if the shipbuilder delivered at or below the agreed upon percent above target cost, then its share of an overrun cost would be largely or completely—depending on the amount of the overrun—covered by the additional cost incentive.
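This offsetting effect can be sketched numerically. All values below, including the threshold percent (which is not public) and the dollar amounts, are hypothetical; the sketch only illustrates how an added cost incentive can restore profit that the share line takes away.

```python
def shipbuilder_net_profit(target_cost, target_profit, actual_cost,
                           builder_share, threshold_pct, added_incentive):
    """Shipbuilder's net profit when a share-line overrun penalty is
    paired with an additional cost incentive that pays out if actual
    cost stays at or below an assumed threshold above target cost.
    Illustrative only; the actual contract terms are not public."""
    overrun = max(actual_cost - target_cost, 0.0)
    share_line_profit = target_profit - builder_share * overrun
    earns_incentive = actual_cost <= (1.0 + threshold_pct) * target_cost
    return share_line_profit + (added_incentive if earns_incentive else 0.0)

# Hypothetical ship: $1,000M target cost, $80M target profit, 50/50
# share line, $25M additional incentive payable up to 5% over target.
net = shipbuilder_net_profit(1000.0, 80.0, 1040.0, 0.5, 0.05, 25.0)
# Share line reduces profit by $20M; the incentive adds $25M back.
```

In this hypothetical, a $40 million overrun costs the shipbuilder $20 million on the share line, but the $25 million incentive more than offsets that loss, so the government effectively bears the overrun, consistent with the pattern the report describes.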
In essence, this means the Navy would cover any overruns up to the agreed upon percent. Figure 8 illustrates this duplicative incentive structure with a hypothetical example of SSN 791 coming in at the agreed upon percent over target cost. The April 2016 DOD guidance stresses that the contractor should be primarily incentivized by receiving profit through the reduction of costs—a primary function of the share line—and, in certain cases, by exceeding performance thresholds or reducing schedule. While program officials agreed that the share line should be a primary motivator on a contract, they noted that additional incentives could be used to encourage the shipbuilders to make targeted performance changes. Program and contracting officials stated that they must examine these issues on a program-by-program basis to determine whether an additional incentive is appropriate. Fixed-Price Incentive Contracts Did Not Always Lead to Desired Outcomes, and the Navy Has Not Assessed Whether Additional Incentives Improved Shipbuilder Performance Although FPI contracts are used to manage some degree of uncertainty (as compared with firm-fixed-price contracts), contract type alone cannot always ensure that desired outcomes are achieved. DOD guidance states that actual cost outcomes should approximate estimated costs within 2 to 4 percent before moving to a firm-fixed-price contract. We assessed actual costs for 11 delivered ships under the six selected contracts and found that the majority experienced cost growth above 4 percent, with six ships having significant cost growth of at least 15 percent. Due to the structure of the cost sharing arrangements, the Navy paid for the majority of the cost growth. Further, as mentioned previously, for the ships we reviewed, the Navy added over $700 million in additional incentives outside the share line.
While these incentives reduced the shipbuilders’ loss in some cases, the Navy has not undertaken an assessment of the effectiveness of these added incentives in terms of improved contract outcomes. Costs Grew above Target on the Majority of FPI Contracts We Reviewed Of the 11 ships on the contracts we reviewed that had been delivered to the Navy as of December 2015, 8 experienced cost growth, defined as actual costs exceeding the target cost. Six of the delivered ships had actual costs that were over 4 percent above target—with cost growth reaching as high as nearly 45 percent. Table 2 shows the actual cost outcomes for the 11 delivered ships. In addition, ships that had not yet been delivered as of December 2015 were also experiencing cost increases. For example, DDG 115 and 116 had incurred significant cost increases, and cost growth has also occurred on the ships not yet delivered on the LPD 17 class and LCS contracts. While specific reasons for cost growth varied, unanticipated labor hour increases were a key factor identified by shipbuilding and Navy officials. In the case of LPD 22-25, target costs may not have fully accounted for the lingering inefficiencies associated with recovery from Hurricane Katrina. Navy and shipbuilding officials cited increased labor hours associated with an inexperienced labor force and increased rework, which impacted production schedules, along with increased costs due to outsourced work. Additional labor hours were also needed to implement a number of design changes starting on LPD 22 to address numerous defects found on delivered LPD 17 class ships. On the LCS and LPD 17 class contracts, both of which had ships that surpassed target cost, the government shared in at least 50 percent of the cost overrun up until costs reached a specified point, with the government’s maximum liability capped at the ceiling price.
Cost increases above target have required the Navy to request additional funding from Congress, since ship construction budgets are generally funded to the target price. The LPD 17 class and LCS ships in our selected contracts received a total of $711.40 million in additional funding above their original budgets between fiscal years 2007 and 2016, which includes the government’s portion of contract overruns. In addition to the funding that has already been received, the Navy is likely to have a continuing need for additional funding based on the cost growth identified for ships still under construction. The LPD 17 program accounted for $551.77 million of the $711.40 million over the past 10 years, including an additional $45.10 million for LPD 27 in the Navy’s fiscal year 2017 budget request, primarily to cover the government’s portion of the shipbuilding contract overrun. The Navy Paid the Majority of the Cost Overruns for Delivered Ships on Selected Contracts Our analysis also found that, for the ships delivered on the contracts in our review, the Navy paid over $549 million in cost growth and the shipbuilder paid approximately $430 million in cost growth. For the six ships that had been delivered as of December 2015 with significant cost growth (over 15 percent), we determined the government’s and shipbuilder’s share of cost overruns on the share line. Note that as costs increase above target price, the shipbuilder’s profit is reduced. The Navy’s share of the cost growth for LPD 22-25 was hundreds of millions of dollars. The cost to deliver three of these ships far exceeded the ceiling price and the fourth ship came close to exceeding the ceiling price, despite the fact that the contract had some of the highest ceiling prices by percentage of all of the contracts we reviewed. The shipbuilder lost hundreds of millions of dollars in profit and had to absorb hundreds of millions of dollars in cost growth.
For LCS 5 and LCS 6, the Navy was responsible for paying millions of dollars to address cost growth. Both ships were delivered over a year late and significantly overran target price, but did not exceed ceiling price. On LCS 5, the target cost was exceeded and the shipbuilder earned minimal profit. On LCS 6, the target cost was exceeded and the shipbuilder earned no profit and incurred additional costs. In contrast, the shipbuilders’ cost performance on SSN 784-785 and ESD/ESB 1-3 resulted in better overall outcomes for the government and the shipbuilders. In the case of SSN 784 and SSN 785, the Navy and the shipbuilder paid a share of the cost overruns. The shipbuilder also earned hundreds of millions in profit on the ships. In the case of ESD/ESB 1-3, the shipbuilder underran the target cost, resulting in the Navy saving millions of dollars (its share of the cost underrun). Further, all three ships were delivered on time or ahead of schedule. Ultimately, the shipbuilder’s positive cost performance resulted in the shipbuilder earning hundreds of millions of dollars in profit, including tens of millions from the share line incentive for underrunning its cost. The Navy Has Not Assessed Whether Additional Incentives Helped to Achieve Desired Outcomes For ships delivered under selected contracts, we analyzed the amounts that the shipbuilders ended up earning, including the over $700 million in additional incentives that had been added to the contracts. We found that the Navy has paid over $166 million in these additional incentives (beyond the incentive of the share line). However, it is unclear whether these incentives resulted in the outcomes that the Navy desired since, according to a senior Navy official, the Navy has not assessed the effectiveness of these incentives across its shipbuilding portfolio. Our analysis indicates that for the 11 ships delivered in our case study contracts, cost and schedule outcomes were mixed, as shown in table 3.
For example, in the case of SSN 784 and 785, the shipbuilder received millions of dollars in additional incentives, a large portion of which were shipyard investment incentives, and the ships were delivered early and within the agreed upon percent of target cost. In contrast, for LPD 22-25, the shipbuilder received millions of dollars in additional incentives, but the ships were delivered 15 to 20 months behind their initial schedules and three of the four exceeded their ceiling prices. While the shipbuilder did not earn a target profit for LPD 22-24, the additional incentives had the effect of reducing some of the shipbuilder’s loss. Overall for these ships, the Navy paid more than what was expected, added extra incentives, and did not receive the ships on time or near the cost it originally expected. On the DDG 115 and 116 contract, the shipbuilder was eligible to earn a milestone-based incentive on each ship. Despite being over target cost and behind schedule, the shipbuilder received a portion of the available incentive for each ship. This is, in part, because the milestone-based incentives are not tied to specific cost criteria; therefore, the shipbuilder is eligible to earn those incentives regardless of cost performance outcomes. The FAR states that agencies should determine, on a regular basis, the effectiveness that additional incentives have in improving contractor performance and achieving desired program outcomes. This is to be done through the collection of relevant data on incentives paid to contractors, and should include performance measures to evaluate the data. The FAR goes on to state that this information should be considered as part of the acquisition planning process in determining the appropriate type of contract to be utilized for future acquisitions and that proven incentive strategies be shared among contracting and program management officials.
Some contract files we reviewed included general statements on the rationale and perceived benefits to the government for individual incentives at contract award. However, according to a senior Navy contracting official, the Navy has never completed an analysis of the effectiveness of additional incentives on FPI contracts across its shipbuilding programs. Further, this official stated that such an analysis has not been on management’s radar screen, even though the Navy has almost exclusively used FPI shipbuilding contracts for many years, with the exception of the first few ships in a class. Without such analysis, the Navy cannot know whether or not these added incentives have achieved their desired outcomes across its shipbuilding portfolio. The Navy is also missing an opportunity to share information among its contracting and program officials about how incentives may or may not yield their intended benefits—particularly given the inherent complexities associated with the U.S. shipbuilding industrial base. The Navy plans to continue to invest billions of taxpayer dollars in procuring ships over the next 30 years—including more of the ships on our selected contracts as well as the Columbia Class Ballistic Missile Submarine (formerly known as the Ohio Class Replacement, a $95 billion program). As a result, competition for funding among shipbuilding priorities will continue, and it is critical that the Navy have data on the effectiveness of additional incentives in improving performance and outcomes that can help inform future contract award decisions. Conclusions Navy shipbuilding is a long and complicated process, which, coupled with the symbiotic relationship between buyer and builder that characterizes the Navy shipbuilding environment, makes contracting decisions challenging. The Navy has relied heavily on FPI contracts for the last decade, but has not taken some actions that could help ensure the Navy is maximizing the effectiveness of these contracts.
Given the looming funding needs of major shipbuilding programs—including more of the ships included in our case studies—there are opportunities to do so. One way is to ensure that contracting officers document the rationale for using an FPI contract, and that the basis for FPI contract elements is clearly set forth in contract documents—in particular, determination and findings documents and the pre- and post-negotiation business clearance memorandums. Such documentation is required by regulation, but we found that it had not been completed consistently for our selected contracts. From a business perspective, not having a record of these decisions could put future contracting officers and decision makers at a disadvantage when negotiating future contract awards or modifications. A second way is to assess, on a shipbuilding-wide portfolio level, whether the additional incentives added outside of the FPI share lines are achieving desired outcomes and to gather insights from contracting and program officials who have experience with these incentives. We recognize that the additional incentives are but one of many factors the Navy must take into account as it negotiates with shipbuilders within the context of the U.S. industrial base. Nevertheless, for our selected contracts, the Navy had made over $700 million available in additional incentives, but has not taken steps to understand whether this money is resulting in good outcomes for the government. Regulation, while not prescriptive, highlights the benefits of measuring the effectiveness of such incentives. Additionally, these actions make good business sense. Recommendations for Executive Action To help ensure the Navy thoroughly considers the relative benefits of using FPI contracts for shipbuilding versus other contract types, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following two actions.
Issue a memorandum alerting contracting officials to ensure that they are following guidance laid out in the Navy Marine Corps Acquisition Regulation Supplement with regard to completing determination and findings documents that explain the rationale for using an FPI contract and pre- and post-negotiation business clearance memorandums, which clearly explain the rationale for FPI contracts’ incentive fee structures (including the share line, ceiling price, and any additional incentives). Conduct a portfolio-wide assessment of the Navy’s use of additional incentives on FPI contracts across its shipbuilding programs. This assessment should include a mechanism to share proven incentive strategies for achieving intended cost, schedule, and quality outcomes among contracting and program office officials. Agency Comments We provided a draft of the sensitive but unclassified version of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our recommendations and identified dates by which it plans to implement them. Specifically, the Navy plans to implement the recommendation related to issuing a memorandum regarding the completion of determination and findings documents that explain the rationale for using an FPI contract and pre- and post-negotiation business clearance memorandums which clearly explain the rationale for FPI contracts’ incentive fee structures by March 31, 2017. The Navy also plans to complete the recommended portfolio-wide assessment of its use of additional incentives on FPI contracts across shipbuilding programs by December 15, 2017. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Navy. In addition, the report is available on our website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This review assessed (1) the extent to which the Navy has entered into fixed-price incentive (FPI) contracts over the last 10 years and what influences the Navy’s contracting approach when awarding FPI contracts for ship construction, (2) how the Navy apportions risk between the government and the shipbuilder for these contracts, and (3) the extent to which the FPI contract type led to desired outcomes. This report is a public version of a sensitive but unclassified report that was issued on November 9, 2016. DOD regarded some of the material in that report as sensitive but unclassified information, which must be protected from public disclosure and is available for official use only. As a result, this public version of the original report does not contain certain information deemed to be sensitive but unclassified by DOD, including specific share lines, ceiling prices, and target costs of the Navy ships we assessed; our assessment of the share of cost risk between the Navy and shipbuilder on the 40 ships reviewed; specific dollar amounts of added incentives on ships we assessed; and Navy and shipbuilder cost outcomes on six delivered ships with significant cost growth. This report uses data from December 2015 to be consistent with the report issued in November 2016. To determine the Navy’s use of FPI contracts over the last 10 years, we compiled and analyzed Department of Defense (DOD) data on contracts awarded for detail design and construction of new ships from November 2005 through November 2015.
To ensure the reliability of the DOD data, we compared them to data from DOD’s Naval Vessel Registry to confirm the award date of each ship, and for all contracts awarded during our 10-year time frame, we reviewed the documentation available in DOD’s Electronic Document Access System to verify contract number, award date, and contract type. As part of this process, we determined that the data provided by DOD were sufficiently reliable for the purpose of this audit. Using this information, we identified the universe of Navy detail design and construction contracts awarded on an FPI basis during our 10-year time frame. To identify factors that influence the Navy’s contracting approach when awarding FPI contracts for ship construction and how the Navy apportions risk between the government and the shipbuilder for these contracts, we reviewed a non-generalizable sample of six FPI contracts for the detail design and construction of five different shipbuilding programs, selected using the following characteristics: contract award between November 1, 2005, and November 30, 2015; number of ships on the contract; at least one ship on the contract had previously been delivered or would be delivered imminently; and representation of the majority of U.S. shipyards that build Navy vessels, including Austal USA in Mobile, Alabama; General Dynamics Bath Iron Works in Bath, Maine; General Dynamics Electric Boat in Groton, Connecticut; General Dynamics NASSCO in San Diego, California; Huntington Ingalls Industries in Pascagoula, Mississippi; and Marinette Marine Corporation in Marinette, Wisconsin.
As shown in table 4, the five shipbuilding programs executed under the six FPI contracts that met these criteria include 40 ships: one contract with two Arleigh Burke-class guided missile destroyers (DDG 51 class), one contract with two expeditionary transfer dock (ESD) and two expeditionary mobile base (ESB) ships, two contracts each with 10 littoral combat ships (LCS), one contract with six San Antonio-class amphibious transport dock ships (LPD 17 class), and one contract with eight Virginia-class submarines (SSN 774 class). For our six selected contracts, we reviewed contract file documentation including acquisition planning documents, requests for proposals (RFP), business clearance memorandums (key documents that explain the rationale for contract selection and the structure of FPI contract elements, including target cost, target profit, share line, and ceiling price), cost and schedule data, and program briefings, among other documents. To identify changes in FPI contract elements through contract modifications over the life of the contract, we compared information in the base contract at the time of initial award, including the target cost, share line, ceiling price, and incentives, to this same information in the conformed contract—the most up-to-date contract as of December 2015, which reflects any changes made to the contract since initial award. Note that ceiling prices are often expressed as a percentage of the target cost in the contract documentation; however, ceiling prices by definition are dollar values, not percentages. Since target costs and ceiling prices can change through modifications to a contract, the ceiling price and target cost indicated in updated contract documentation may not equate to the previously established ceiling price percentage denoted in the same documentation. For consistency, we used the ceiling price percentage cited for each ship when available to complete our analysis of this contract element.
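The note above about ceiling-price percentages versus dollar values can be made concrete with a small numeric sketch. The figures below are hypothetical, not drawn from any of the reviewed contracts.

```python
# Hypothetical illustration: a ceiling price documented as a percentage of
# target cost is, by definition, a fixed dollar value.
target_cost = 500_000_000                    # initial target cost ($)
ceiling_price = target_cost * 120 // 100     # documented as 120% of target: $600M

# A later contract modification raises the target cost without renegotiating
# the dollar ceiling, so the documented 120% figure no longer holds.
modified_target_cost = 550_000_000
implied_pct = ceiling_price / modified_target_cost   # about 1.09, not 1.20
```

This is why the methodology relies on the ceiling price percentage cited for each ship, when available, rather than recomputing it from updated dollar figures.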
We also reviewed relevant guidance on contract selection and the use of FPI contracts, including the Federal Acquisition Regulation; DOD and Navy guidance on FPI contracts, including the DOD and National Aeronautics and Space Administration Incentive Contracting Guide; and memorandums from DOD and the Navy regarding the implementation of FPI contracts. We supplemented our review of contract file information by interviewing Navy program and contracting officials for each shipbuilding program associated with our selected contracts, senior contracting officials in the Naval Sea Systems Command (NAVSEA) Contracts Directorate, officials from the Navy’s Supervisor of Shipbuilding, Conversion and Repair (SUPSHIP), officials from the Under Secretary of Defense (Acquisition, Technology and Logistics) Office for Defense Procurement and Acquisition Policy, and shipbuilding officials. To determine the extent to which FPI contracts led to desired outcomes, we analyzed contract file information and cost data for ships delivered as of December 2015 on our selected contracts to identify the delta (cost overrun or underrun) between the target cost for each ship in the most up-to-date contract as of December 2015 and SUPSHIP’s estimated construction cost of the ship at completion as of December 2015. SUPSHIP officials agreed that the estimated construction cost of the ship at completion is a reasonable estimate of ship construction cost at delivery. According to SUPSHIP officials, the actual final cost of the ship is determined when the contract is closed out, which typically occurs several years after the ship has been delivered, and none of our six selected contracts had been closed out. We then calculated the price paid by the Navy and shipbuilder profit or loss for ships delivered as of December 2015 in the following manner: 1.
We calculated the delta between the target cost for each ship in the most up-to-date contract as of December 2015 and SUPSHIP’s estimated construction cost of the ship at completion as of December 2015 to identify if the ship was in a cost overrun or underrun scenario. 2. Using the contract share lines, we calculated both the Navy’s and the shipbuilder’s financial responsibility for the cost overrun, or conversely any cost savings. 3. We then calculated profit earned by the shipbuilder, if any, and added this amount to the total cost that the Navy was responsible for to determine price to the Navy (cost and profit earned on the share line) for detail design and construction of the ship. 4. To determine profit or loss for the shipbuilder, we calculated the difference between target profit and the shipbuilder’s responsibility for the cost overrun, or conversely any cost savings.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky (Assistant Director), Jennifer Echard, Nathan Foster, Laura Greifner, Julie Hadley, Kurt Gurka, Julia Kennon, Jean McSween, Roxanna Sun, Abby Volk, Alyssa Weir, and Andrea Yohe.

DOD encourages the use of FPI contracts because they allow for equitable sharing of cost savings and risk with the shipbuilder. Under FPI contracts, the shipbuilder’s ability to earn a profit or a fee is tied to performance. After costs reach the agreed-upon target cost, the shipbuilder’s profit decreases in relation to the increasing costs. A ceiling price fixes the government’s maximum liability. A House Report on the Fiscal Year 2014 National Defense Authorization Act included a provision for GAO to examine the Navy’s use of FPI contracts for shipbuilding.
This report examines (1) the extent to which the Navy has entered into FPI contracts over the past 10 years, (2) how FPI contracts apportion risk between the Navy and the shipbuilder, and (3) the extent to which FPI contracts led to desired cost outcomes. GAO selected a non-generalizable sample of six contracts (for 40 ships) awarded during the past 10 years, analyzed Navy contract documents, and interviewed program, contract, and shipbuilding officials. This is the public version of a sensitive but unclassified report issued in November 2016. Over 80 percent of the Navy’s shipbuilding contracts awarded over the past 10 years were fixed-price incentive (FPI). However, GAO found that half of the six selected contracts it reviewed did not document the Navy’s justification for selecting this contract type. Moreover, key documents that should describe the rationale for selecting contract elements varied across these contracts. Given the Navy’s plans to invest billions of dollars in shipbuilding programs in the future, without adequate documentation on the rationale for use of an FPI contract and key decisions made about FPI contract elements, contracting officers will not have the information they need to make sound decisions at the negotiation table. Department of Defense (DOD) regulation suggests, as a point of departure for contract negotiations, that the government and shipbuilders share the cost risk equally and set a ceiling price 20 percent higher than the negotiated target cost. GAO found that, for most of the 40 ships on the contracts reviewed, these contract terms resulted in the Navy absorbing more cost risk. Many factors inform the Navy’s and shipbuilder’s negotiation positions, including the stability of the supplier base and extent of competition. That said, guidance states that the FPI contract elements should be the primary incentive for motivating the shipbuilder to control costs.
But GAO found that in five of the six contracts, the Navy added over $700 million in incentives. Of the 11 ships delivered as of December 2015 under the six contracts, 8 experienced cost growth. In one case, costs grew nearly 45 percent higher than the negotiated target cost. Further, it is unclear whether the additional incentives achieved intended cost and schedule outcomes, as GAO found a mixed picture among the contracts reviewed. Regulation, while not prescriptive, highlights the benefits of measuring the effectiveness of incentives. According to a senior Navy contracting official, the Navy has not measured incentive outcomes for its shipbuilding portfolio. Without assessing whether adding incentives is effective in improving shipbuilder performance, the Navy is missing an opportunity to better inform decisions about whether to include additional incentives in future awards.
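The FPI share-line arithmetic that underlies these cost outcomes, described in the methodology as a four-step calculation, can be sketched in a few lines of code. This is a simplified model under one common FPI formulation; the 50/50 share ratio and all dollar figures below are hypothetical examples, not values from the contracts GAO reviewed.

```python
# Illustrative FPI (fixed-price incentive) share-line arithmetic, mirroring the
# four-step methodology in appendix I. The share ratio and dollar figures are
# hypothetical.

def fpi_outcome(target_cost, target_profit, ceiling_price,
                actual_cost, contractor_share=0.5):
    """Return (price_to_government, contractor_profit) under a simple FPI model."""
    # Step 1: delta between estimated cost at completion and target cost.
    overrun = actual_cost - target_cost          # negative => underrun

    # Step 2: apportion the overrun (or savings) using the share line.
    contractor_responsibility = contractor_share * overrun

    # Step 4 (computed first for clarity): contractor profit is target profit
    # reduced by the contractor's share of any overrun, or increased by savings.
    profit = target_profit - contractor_responsibility

    # Step 3: price to the government is cost plus profit earned on the share
    # line, but never more than the ceiling price (the government's maximum
    # liability). This simple cap ignores point-of-total-assumption mechanics.
    price = min(actual_cost + profit, ceiling_price)
    return price, profit


# Hypothetical ship: $500M target cost, $50M target profit, $600M ceiling,
# $560M estimated cost at completion (a $60M overrun).
price, profit = fpi_outcome(500e6, 50e6, 600e6, 560e6)
```

Under these hypothetical inputs, the shipbuilder absorbs half of the $60 million overrun out of its target profit, and the Navy pays the remainder, which is the "self-enforcing" cost discipline an FPI structure is intended to provide.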
Background

NRC is an independent agency established by the Energy Reorganization Act of 1974 to regulate the civilian use of nuclear materials. It is headed by a five-member commission, with one commission member designated by the President to serve as chairman and official spokesperson. The commission as a whole formulates policies and regulations governing nuclear reactor and materials safety and security, issues orders to licensees, and adjudicates legal matters brought before it. Security for commercial nuclear power plants is addressed by NRC’s Office of Nuclear Security and Incident Response. This office develops policy on security at nuclear facilities and is the agency’s security interface with the Department of Homeland Security (DHS), the intelligence and law enforcement communities, the Department of Energy (DOE), and other agencies. Within this office, the Threat Assessment Section assesses security threats involving NRC-licensed activities and develops recommendations regarding the DBT for the commission’s consideration. The DBT for radiological sabotage applied to nuclear power plants identifies the terrorist capabilities (or “adversary characteristics”) that sites are required to defend against. The adversary characteristics generally describe the components of a ground assault and include the number of attackers; the size of a vehicle bomb; and the weapons, equipment, and tactics that could be used in an attack. Other threats in the DBT include a waterborne assault and the threat of an insider. The DBT does not include the threat of an airborne attack. Force-on-force inspections are NRC’s performance-based means for testing the effectiveness of nuclear power plant security programs. These inspections are intended to demonstrate how well a nuclear power plant might defend against a real-life threat.
In a force-on-force inspection, a professional team of adversaries attempts to reach specific “target sets” within a nuclear power plant that would allow them to commit radiological sabotage. These target sets represent the minimum pieces of equipment or infrastructure an attacker would need to destroy or disable in order to commit radiological sabotage that results in an elevated release of radioactive material to the environment. NRC also conducts baseline inspections at nuclear power plants. During these inspections, security inspectors examine areas such as officer training, fitness for duty, positioning and operational readiness of multiple physical and technical security components, and the controls the licensee has in place to ensure that unauthorized personnel do not gain access to the protected area. NRC’s policy is to conduct a baseline inspection at each site every year, with the complete range of baseline inspection activities conducted over a 3-year cycle. For both force-on-force and baseline inspections, licensees are responsible for immediately correcting or compensating for any deficiency for which NRC concludes that security is not in accordance with the approved security plans or other security orders.

NRC’s Process for Revising the DBT Was Generally Logical and Well Defined, but Some Changes Were Not Clearly Linked to an Analysis of the Terrorist Threat

The process by which NRC revised the DBT for nuclear power plants was generally logical and well defined in that trained threat assessment staff made recommendations for changes based on an analysis of demonstrated terrorist capabilities. The NRC commissioners evaluated the recommendations and considered whether the proposed changes constituted characteristics representative of an enemy of the United States, or were otherwise not reasonable for a private security force to defend against.
However, while the final version of the revised DBT generally corresponded to the original recommendations of the threat assessment staff, some elements did not, which raised questions about the extent to which the revised DBT represents the terrorist threat.

NRC’s Process for Revising Its DBT Was Generally Logical and Well Defined

NRC made its 2003 revisions to the DBT for nuclear power plants using a process that the agency has had in place since issuing the first DBT in the late 1970s. In this process, NRC staff trained in threat assessment use reports and secure databases provided by the intelligence community to monitor information on terrorist activities worldwide. (NRC does not directly gather intelligence information but rather receives intelligence from other agencies that it uses to formulate the DBT for nuclear power plants.) The staff analyze this information both to identify specific references to nuclear power plants and to determine what capabilities terrorists have acquired and how they might use those capabilities to attack nuclear power plants in the United States. The staff normally summarize applicable intelligence information and any recommendations for changes to the DBT in semiannual reports to the NRC commissioners on the threat environment. In 1999, the NRC staff began developing a set of criteria—the adversary characteristics screening process—to decide whether to recommend particular adversary characteristics for inclusion in the DBT and to enhance the predictability and consistency of their recommendations. The staff use initial screening criteria to exclude from further consideration certain adversary characteristics, such as those that would more likely be used by a foreign military than by a terrorist group. For adversary characteristics that pass the initial round of screening, the threat assessment staff apply additional screening factors, such as the type of terrorist group that demonstrated the characteristic.
For example, the staff consider whether an adversary characteristic has been demonstrated by transnational or terrorist groups operating in the United States, or by terrorist groups that operate only in foreign countries. Finally, on the basis of their analysis and interaction with intelligence and other agencies, the staff decide whether to recommend that the commission include the adversary characteristics in the DBT for nuclear power plants. NRC’s Office of Nuclear Security and Incident Response, which includes the Threat Assessment Section, reviews and endorses the threat assessment staff’s analysis and recommendations. Terrorist attacks have generally occurred outside the United States, and intelligence information specific to nuclear power plants is very limited. As a result, one of the NRC threat assessment staff’s major challenges has been to decide how to apply this limited information to nuclear power plants in the United States. For example, one of the key elements in the revised DBT, the number of attackers, is based on NRC’s analysis of the group size of previous terrorist attacks worldwide. According to NRC threat assessment staff, the number of attackers in the revised DBT falls within the range of most known terrorist cells worldwide. NRC staff recommendations regarding other adversary characteristics also reflected the staff’s interpretation of intelligence information. For example, the staff considered a range of sizes for increasing the vehicle bomb in the revised DBT and ultimately recommended a size that was based on an analysis of previous terrorist attacks using vehicle bombs. Intelligence and law enforcement officials we spoke with did not have information contradicting NRC’s interpretation regarding the number of attackers or other parts of the NRC DBT but did point to the uncertainty regarding the size of potential attacks and the relative lack of intelligence on the terrorist threat to nuclear power plants. 
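The two-stage structure of the screening process described above can be illustrated with a small sketch. The specific criteria, field names, and sample characteristics below are hypothetical placeholders for illustration only; they are not NRC's actual (and largely non-public) screening factors.

```python
# Illustrative two-stage screen, loosely modeling the adversary characteristics
# screening process. Criteria and sample data are hypothetical.

from dataclasses import dataclass

@dataclass
class AdversaryCharacteristic:
    name: str
    military_grade: bool   # stage 1: exclude capabilities typical of a foreign military
    demonstrated_by: str   # stage 2: "domestic", "transnational", or "foreign-only"

def screen(characteristics):
    """Return the characteristics that pass both screening stages."""
    # Stage 1: initial screening criteria exclude, for example, characteristics
    # more likely to be used by a foreign military than by a terrorist group.
    passed_initial = [c for c in characteristics if not c.military_grade]
    # Stage 2: additional factors, such as which kinds of terrorist groups
    # have demonstrated the characteristic.
    return [c for c in passed_initial
            if c.demonstrated_by in ("domestic", "transnational")]

candidates = [
    AdversaryCharacteristic("vehicle bomb", False, "transnational"),
    AdversaryCharacteristic("artillery", True, "foreign-only"),
    AdversaryCharacteristic("small arms", False, "domestic"),
]
recommended = [c.name for c in screen(candidates)]
```

The point of the sketch is only the shape of the process: a hard initial filter followed by weighted secondary factors, after which the surviving characteristics become candidate recommendations to the commission.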
In addition to analyzing intelligence information, NRC monitored and exchanged information with DOE, which also has a DBT for comparable facilities that process or store radiological materials and are, therefore, potential targets for radiological sabotage. However, while certain aspects of the two agencies’ DBTs for radiological sabotage are similar, NRC generally established less rigorous requirements than DOE—for example, with regard to the types of equipment that could be used in an attack. The DOE DBT includes a number of weapons not included in the NRC DBT. Inclusion of such weapons in the NRC DBT for nuclear power plants would have required plants to take substantial additional security measures. Furthermore, DOE included other capabilities in its DBT that are not included in the NRC DBT. Despite these differences, both agencies used similar intelligence information to derive key aspects of their DBTs. For example, both DOE and NRC based the number of attackers on intelligence on the size of terrorist cells, and DOE officials told us they used intelligence similar to NRC’s to derive the number of attackers. Likewise, DOE and NRC officials provided us with similar analyses of intelligence information on previous terrorist attacks using vehicle bombs. DOE and NRC officials also told us that most vehicle bombs used in terrorist attacks are smaller than the size of the vehicle bomb in NRC’s revised DBT.

Changes to the Threat Assessment Staff’s Initial Recommendations Were Not Clearly Linked to an Analysis of the Terrorist Threat

While NRC followed a generally logical and well-defined process to revise the DBT for nuclear power plants, two aspects of the process raised a fundamental question—the extent to which the DBT represents the terrorist threat as indicated by intelligence data compared with the extent to which it represents the threat that NRC considers reasonable for the plants to defend against.
These two aspects were (1) the process NRC used to obtain stakeholder feedback on a draft of the DBT and (2) changes made by the commissioners to the NRC staff’s recommended DBT. With regard to the first aspect, the process NRC used to obtain feedback from stakeholders, including the nuclear industry, created the appearance of industry influence on the threat assessment regarding the characteristics of an attack. NRC staff sent a draft DBT to stakeholders in January 2003, held a series of meetings with them to obtain their comments, and received written comments. NRC specifically sought and received feedback from the nuclear industry on what is reasonable for a private security force to defend against and the cost of and time frame for implementing security measures to defend against specific adversary characteristics. During this same period, the threat assessment staff continued to analyze intelligence information and modify the draft DBT. In its written comments on the January 2003 draft DBT, the Nuclear Energy Institute (NEI), which represents the nuclear power industry, objected to a number of the adversary characteristics the NRC staff had included. Subsequently, the NRC staff made changes to the draft DBT, which they then submitted to the NRC commissioners. The changes made by the NRC staff—in particular, the size of the vehicle bomb and list of weapons that could be used in an attack—reflected some (but not all) of NEI’s objections. For example, NEI wrote that some sites would not be able to protect against the size of the vehicle bomb proposed by NRC because of insufficient land for installation of vehicle barrier systems at a necessary distance. Instead, NEI agreed that it would be reasonable to protect against a smaller vehicle bomb. Similarly, NEI argued against the inclusion of certain weapons because of the cost of protecting against the weapons. 
NEI wrote that such weapons (as well as the vehicle bomb size initially proposed by the NRC staff) would be indicative of an enemy of the United States, which sites are not required to protect against under NRC regulations. In its final recommendations to the commissioners, the NRC staff reduced the size of the vehicle bomb to the amount NEI had proposed and removed a number of weapons NEI had objected to. On the other hand, NRC did not make changes that reflected all of the industry’s objections. For example, NRC staff did not remove one particular weapon NEI had objected to, which, according to NRC’s analysis, has been a staple in the terrorist arsenal since the 1970s and has been used extensively worldwide. With regard to the commissioners’ review and approval of the NRC staff’s recommendations, the commissioners largely supported the staff’s recommendations but also made some significant changes that reflected policy judgments. Specifically, the commissioners considered whether any of the recommended changes to the DBT constituted characteristics representative of an enemy of the United States, which sites are not required to protect against under NRC regulations. In approving the revised DBT, the commission stated that nuclear power plants’ civilian security forces cannot reasonably be expected to defend against all threats, and that defense against certain threats (such as an airborne attack) is the primary responsibility of the federal government, in coordination with state and local law enforcement officials. Based on such considerations, the commission voted to remove two weapons the NRC staff had recommended for inclusion in the revised DBT based on its threat assessment. However, the document summarizing the commission’s decision to approve the revised DBT did not provide a reason for excluding these weapons. 
For example, the commission did not indicate whether its decision was based on criteria, such as the cost for nuclear power plants to defend against an adversary characteristic or the efforts of local, state, and federal agencies to address particular threats. In our view, the lack of such criteria reduced the transparency of the commission’s decisions to make changes to the threat assessment staff’s recommendations.

Nuclear Power Plants Made Substantial Changes to Their Security to Address the Revised DBT, but NRC Inspections Have Uncovered Problems

The four nuclear power plant sites we visited made substantial changes in response to the revised DBT, including measures to detect, delay, and respond to the increased number of attackers and to address the increased vehicle bomb size. These security enhancements were in addition to other measures licensees implemented—such as stricter requirements for obtaining physical access to nuclear power plants—in response to a series of security orders NRC issued after September 11, 2001. According to NEI, as of June 2004, the cost of security enhancements made since September 11, 2001, for all sites amounts to over $1.2 billion. To enhance their detection capabilities, the four sites we visited installed additional cameras throughout different areas of the sites and instituted random patrols in the owner-controlled areas. Furthermore, the sites we visited installed a variety of devices designed to delay attackers and allow security officers more time to respond to their posts and fire upon attackers. The sites generally installed these delay devices throughout the protected areas as well as inside the reactor and other buildings.
Sites also enhanced their ability to respond to an attack by constructing bullet-resistant structures at various locations in the protected area or within buildings, increasing the minimum number of security officers defending the sites at all times, and expanding the amount of training provided to them. (See fig. 1 for an example of a bullet-resistant structure.) According to NRC, other sites took comparable actions to defend against the revised DBT. In addition to adding measures designed to detect, delay, and respond to an attack, the licensees at the four sites we visited installed new vehicle barrier systems to defend against the larger vehicle bomb in the revised DBT. In particular, the licensees designed comprehensive systems that included sturdy barriers to (1) prevent a potential vehicle bomb from approaching the sites and (2) channel vehicles to entrances where security officers could search them for explosives and other prohibited items. The vehicle barrier systems either completely encircled the plants (except for entrances manned by armed security officers) or formed a continuous barrier in combination with natural or manmade terrain features, such as bodies of water or trenches, that would prevent a vehicle from approaching the sites. In general, the four sites we visited all implemented a “defense-in-depth” strategy, with multiple layers of security systems that attackers would have to defeat before reaching vital areas or equipment and destroying or disabling systems sufficient to cause an elevated release of radiation off site. The sites varied in how they implemented these measures, primarily depending on site-specific characteristics such as topography and on the degree to which they planned to interdict attackers within the owner-controlled area and far from the sites’ vital area, as opposed to inside the protected area but before they could reach the vital equipment.
For example, one site with a predominantly external strategy installed an intrusion detection system in the owner-controlled area so that security officers would be able to identify intruders as early as possible. The site was able to install such a system because of the large amount of open, unobstructed space in the owner-controlled area. In contrast, security managers at another site we visited described a protective strategy that combined elements of an external strategy and an internal strategy. For example, the site identified “choke points”—locations attackers would need to pass before reaching their targets—inside the protected area and installed bullet-resistant structures at the choke points where officers would be waiting to interdict the attackers. NRC officials told us that licensees have the freedom to design their protective strategies to accommodate site-specific conditions, so long as the strategies satisfy NRC requirements and prove successful in a force-on-force inspection. In addition to the security enhancements we observed, security managers at each site described ways in which they had exceeded NRC requirements and changes they plan to make as they continue to improve their protective strategies. For example, security managers at three of the sites we visited told us the number of security officers on duty at any one shift exceeded the minimum number of security officers that NRC requires be dedicated to responding to attacks. Similarly, in at least some areas of the sites, the new vehicle barrier systems were farther from the reactors and other vital equipment than necessary to protect the sites against the size of vehicle bomb in the revised DBT. 
Despite the substantial security improvements we observed at the four sites we visited, it is too early to conclude, either from NRC’s force-on-force or baseline inspections, that all nuclear power plant sites are capable of defending against the revised DBT for the following two reasons: First, as of March 30, 2006, NRC had completed force-on-force inspections at 27 of the 65 sites, and it is not planning to complete force-on-force inspections at all sites until 2007, in accordance with its 3-year schedule. NRC officials told us that plants have generally performed well during force-on-force inspections. However, we observed a force-on-force inspection at one site in which the site’s ability to defend against the DBT was at best questionable. The site’s security measures appeared impressive and were similar to those we observed at other sites. Nevertheless, some or all of the attackers were able to enter the protected area in each of the three exercise scenarios. Furthermore, attackers made it to the targets in two of the scenarios, although the outcomes of the two scenarios were called into question by uncertainties regarding whether the attackers had actually been neutralized before reaching the targets. As a result, NRC decided to conduct another force-on-force inspection at the site, which we also observed. The site made substantial additional security improvements—at a cost of $37 million, according to the licensee—and NRC concluded after the second force-on-force inspection that the site had adequately defended against a DBT-style attack. Second, we noted from our review of 18 baseline inspection reports and 9 force-on-force inspection reports that sites have encountered a range of problems in meeting NRC’s security requirements. NRC officials told us that all sites have implemented all of the security measures described in their new plans submitted in response to the revised DBT.
However, 12 of the 18 baseline inspection reports and 4 of the 9 force-on-force inspection reports we reviewed identified problems or items needing correction. For example, during two different baseline inspections, NRC found (1) an intrusion detection system in which multiple alarms were not functioning properly, making the entire intrusion detection system inoperable, according to the site, and (2) three examples of failure to properly search personnel entering the protected area, which NRC concluded could reduce the overall effectiveness of the protective strategy by allowing the uncontrolled introduction of weapons or explosives into the protected area. According to NRC, the licensees at these two sites, as well as at the other sites where NRC inspection reports noted other problems, took immediate corrective actions. 

NRC Has Significantly Improved the Force-on-Force Inspection Program, but Challenges Remain 

NRC has made a number of improvements to the force-on-force inspection program, several of which address recommendations we made in our September 2003 report on NRC’s oversight of security at commercial nuclear power plants. We had made our recommendations when NRC was restructuring the force-on-force program to provide a more rigorous test of security at the sites in accordance with the DBT, which was also under revision. For example, we recommended that NRC conduct the inspections more frequently at each site, use laser equipment to better simulate attackers’ and security officers’ weapons, and require the inspections to make use of the full terrorist capabilities stated in the DBT. Actions NRC has taken that satisfy these recommendations include conducting the exercises more frequently at each site (every 3 years rather than every 8 years), and NRC so far is on track to complete the first round of force-on-force inspections on schedule, by 2007. 
Furthermore, NRC is using laser equipment to simulate weapons, and the attackers in the force-on-force exercise inspections that we observed used key adversary characteristics of the revised DBT, including the number of attackers, a vehicle bomb, a passive insider, and explosives. Nevertheless, we identified issues in the force-on-force inspection program that could affect the quality of the inspections and that continue to warrant NRC’s attention. For example, the level of security expertise and training among controllers—individuals provided by the licensee who observe each security officer and attacker to ensure the safety and effectiveness of the exercise—varied in the force-on-force inspections we observed. One site used personnel with security backgrounds while another site used plant employees who did not have security-related backgrounds but who volunteered to help. In its force-on-force inspection report for this latter site, NRC concluded that the level of controller training contributed to the uncertain outcome of the force-on-force exercises, which resulted in NRC’s conducting a second force-on-force inspection at the site. Furthermore, we noted that the force-on-force exercises end when a site’s security force successfully stops an attack. Consequently, at sites that successfully defeat the mock adversary force early in the exercise scenario, NRC does not have an opportunity to observe the performance of sites’ internal security—that is, the strategies sites would use to defeat attackers inside the vital area. When we raised this issue, NRC officials appeared to recognize the benefit of designing the force-on-force inspections to test sites’ internal security strategies but said that doing so would require further consideration of how to implement changes to the force-on-force inspections. 
Based on our observations of three force-on-force inspections, other areas where NRC may be able to make further improvements included the following: ensuring the proper use of laser equipment; varying the timing of inspection activities, such as the starting times of the mock attacks, in order to minimize the artificiality of the inspections; ensuring the protection of information about the planned scenarios for the mock attacks so that security officers do not obtain knowledge that would allow them to perform better than they otherwise would; and providing complete feedback to licensees on NRC inspectors’ observations on the results of the force-on-force exercises. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Subcommittee may have at this time. 

GAO Contact and Staff Acknowledgments 

For further information about this testimony, please contact me at (202) 512-3841 (or at [email protected]). Raymond H. Smith, Jr. (Assistant Director), Joseph H. Cook, Carol Herrnstadt Shulman, and Michelle K. Treistman made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 

The nation's commercial nuclear power plants are potential targets for terrorists seeking to cause the release of radioactive material. The Nuclear Regulatory Commission (NRC), an independent agency headed by five commissioners, regulates and oversees security at the plants. 
In April 2003, in response to the terrorist attacks of September 11, 2001, NRC revised the design basis threat (DBT), which describes the threat that plants must be prepared to defend against in terms of the number of attackers and their training, weapons, and tactics. NRC also restructured its program for testing security at the plants through force-on-force inspections (mock terrorist attacks). This testimony addresses the following: (1) the process NRC used to develop the April 2003 DBT for nuclear power plants, (2) the actions nuclear power plants have taken to enhance security in response to the revised DBT, and (3) NRC's efforts to strengthen the conduct of its force-on-force inspections. This testimony is based on GAO's report on security at nuclear power plants, issued on March 14, 2006 (GAO-06-388). NRC revised the DBT for nuclear power plants using a process that was generally logical and well-defined. Specifically, trained threat assessment staff made recommendations for changes based on an analysis of demonstrated terrorist capabilities. The resulting DBT requires plants to defend against a larger terrorist threat, including a larger number of attackers, a refined and expanded list of weapons, and an increase in the maximum size of a vehicle bomb. Key elements of the revised DBT, such as the number of attackers, generally correspond to the NRC threat assessment staff's original recommendations, but other important elements do not. For example, the NRC staff made changes to some recommendations after obtaining feedback from stakeholders, including the nuclear industry, which objected to certain proposed changes, such as the inclusion of certain weapons. NRC officials said the changes resulted from further analysis of intelligence information. 
Nevertheless, GAO found that the process used to obtain stakeholder feedback created the appearance that changes were made based on what the industry considered reasonable and feasible to defend against rather than on what an assessment of the terrorist threat called for. Nuclear power plants made substantial security improvements in response to the September 11, 2001, attacks and the revised DBT, including security barriers and detection equipment, new protective strategies, and additional security officers. It is too early, however, to conclude that all sites are capable of defending against the DBT because, as of March 30, 2006, NRC had conducted force-on-force inspections at 27, or less than half, of the 65 nuclear power plant sites. NRC has improved its force-on-force inspections--for example, by conducting inspections more frequently at each site. Nevertheless, in observing three inspections and discussing the program with NRC, GAO noted potential issues in the inspections that warrant NRC's continued attention. For example, a lapse in the protection of information about the planned scenario for a mock attack GAO observed may have given the plant's security officers knowledge that allowed them to perform better than they otherwise would have. A classified version of GAO's report provides additional details about the DBT and security at nuclear power plants. 
Background 

FFRDCs are federally sponsored entities—operated by universities, nonprofit institutions, or industrial firms under contract with the federal government—that provide research, development, systems engineering, and analytical services to federal government agencies. In awarding these contracts, the government need not seek open competition, and it traditionally has undertaken a commitment to provide a sufficient, stable body of work to maintain the essential core of scientific and engineering talent at an FFRDC. The Director of Defense Research and Engineering oversees DOD’s FFRDCs. MITRE, a nonprofit company, operates an FFRDC for DOD under contracts with the Air Force and the Army. It also operates an FFRDC under a contract with the Federal Aviation Administration (FAA). In addition, through its non-FFRDC divisions, MITRE provides services to DOD agencies, federal civilian agencies, and state and foreign governments. Payment of contract fees to organizations that operate FFRDCs is addressed in DOD regulations. The regulations instruct contracting officers to first examine an organization’s retained earnings to determine whether a fee is needed. If a fee is needed, contracting officers must consider the organization’s needs to purchase capital equipment, to rebuild working capital, and to pay certain ordinary and necessary business expenses that are not reimbursable under procurement regulations. The Air Force has expanded on this guidance by instructing contracting officers to consider such items as depreciation charges, investment earnings, and fees earned on non-DOD contracts as sources of funds to offset the need for fees, a process that the Army also follows. Once a fee is awarded, its use is left to an FFRDC’s discretion. Each year, MITRE submits a fee proposal to the Army and the Air Force outlining its anticipated needs to purchase capital equipment, rebuild working capital, and pay nonreimbursable expenses for the coming year. 
The proposal includes estimates of the depreciation charges, investment earnings, and fees on non-DOD contracts that will provide sources of funds to meet these needs as well as a proposed fee for its DOD contracts. Generally, MITRE uses historical trends to estimate its anticipated funding needs and the sources of funds available to meet these needs. For fiscal year 1994, the Army provided MITRE a fixed fee of $7.6 million, representing 4.4 percent of estimated contract costs, and the Air Force provided a fee of $10.2 million, or 4.5 percent of estimated contract costs. These amounts included a traditional 3 percent of estimated contract costs—$5.2 million for the Army and $7.2 million for the Air Force—to support MITRE’s independent research program. Beginning in fiscal year 1995, both services are funding MITRE’s independent research through charges to overhead—comparable to the treatment of independent research for commercial contractors. This change has allowed the Army and the Air Force to significantly reduce fees awarded MITRE. 

Lack of Guidance on Payment of Nonreimbursable Costs Results in Recurring Disputes 

Neither OMB nor DOD has issued guidance that specifies the nonreimbursable costs contracting officers should consider in negotiating contract fees, as we recommended in our 1969 report. In that report, we concluded that some fees were appropriate because some necessary business expenses may not be reimbursed under government procurement regulations but questioned whether some costs paid from fee were necessary. Thus, to assist contracting officers in negotiating fees, we recommended that guidance be developed providing examples of costs that could appropriately be considered in negotiating fees. DOD’s current guidance notes that FFRDCs may incur some necessary but nonreimbursable costs but provides no examples of costs contracting officers may consider as ordinary and necessary. 
In the absence of specific guidelines, the use of a fee for nonreimbursable costs has stimulated continuing controversy. During the late 1980s, the Air Force became concerned that MITRE used its contract fees for excessive and unnecessary expenditures and urged MITRE to reduce these expenses. MITRE agreed to various actions to reduce expenses. For example, MITRE agreed to limit the size of holiday parties and to reduce their costs. Further, MITRE instructed company officers to use first-class air travel only when they needed to perform work during a trip that could not be done in the coach cabin. Recognizing the potential for controversy regarding fee expenditures, Army and Air Force contracting officers said they have strengthened oversight of fee use and have challenged expenditures they considered inappropriate. For example, in fiscal year 1995, the Army and Air Force contracting officers began using detailed quarterly expenditure reports to monitor fee usage. Among the proposed expenses the contracting officers challenged during negotiations on fiscal year 1995 fees were costs for social functions and meals provided at business meetings. DCAA has also raised questions about MITRE’s use of fees. During its review of MITRE’s fiscal year 1993 fee expenditures, DCAA concluded that fees were used to pay for lavish entertainment, personal expenses for company officers, and generous employee benefits. In addition, DCAA concluded that MITRE charged expenses to fees that would ordinarily be considered allowable, thereby avoiding the routine audit oversight normally accorded such costs. DCAA concluded that only 11 percent of the expenditures reviewed were “ordinary and necessary” business expenses. MITRE made similar expenditures during fiscal year 1994, and it plans to continue such expenditures—at a reduced level in some cases—during fiscal year 1995. DCAA reported on numerous instances where MITRE used fees for entertainment expenses. 
For example, during fiscal year 1993, MITRE used fees to pay for a holiday party for company executives held in McLean, Virginia. This party cost $37,719, or about $110 for each of the 342 guests attending. During fiscal year 1994, MITRE held a similar holiday party at the McLean Hilton that cost $33,177. DCAA also cited use of fees to pay for a reception and dinners for the Board of Trustees during May 1993 at a cost of $21,208, or $118 per person, as well as $2,500 for a luncheon and tour of Washington, D.C., for spouses of the Trustees during the spring Trustees meeting. During fiscal year 1994, MITRE used fees to pay for a similar reception and dinners held in connection with the fall Trustees meeting; the cost was $18,778. DCAA also reported that MITRE used fees to pay personal expenses for company officers. For example, MITRE used $5,547 in fees during fiscal year 1993 to install a home security system in the company president’s residence. During fiscal year 1994, MITRE used fees to pay the $22 monthly monitoring fee for the president’s home security system. Similarly, DCAA questioned the practice of paying for personal use of company-furnished automobiles with fees, which totaled $28,605 during fiscal year 1994. DCAA also noted generous benefits for employees during its review. For example, DCAA noted that during fiscal year 1993 MITRE used fees to pay the company president a miscellaneous relocation allowance of $31,292. MITRE continued to use fees for these allowances during fiscal year 1994, charging $689,265 or an average of $5,696 per employee relocated. In response to concerns raised by the services, and direction from the Congress, MITRE has reduced some fee expenditures. For example, MITRE has suspended the holiday party for executives. 
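As an illustrative cross-check (not part of the report itself), the per-person and per-employee figures quoted above follow from simple division of the reported totals. The implied attendance and relocation counts below are inferences from the reported averages, not figures stated in the report:

```python
# Cross-checking the per-person figures DCAA cited (all amounts as reported).

holiday_total, holiday_guests = 37_719, 342
per_guest = holiday_total / holiday_guests  # reported as "about $110" per guest

trustee_total, trustee_per_person = 21_208, 118
implied_attendees = round(trustee_total / trustee_per_person)  # inferred headcount

reloc_total, reloc_avg = 689_265, 5_696
implied_relocations = round(reloc_total / reloc_avg)  # inferred employees relocated

print(f"Holiday party cost per guest: ${per_guest:.0f}")
print(f"Implied Trustees reception attendance: {implied_attendees}")
print(f"Implied employees relocated in FY1994: {implied_relocations}")
```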
Consistent with restrictions in the Fiscal Year 1995 National Defense Authorization Act, MITRE no longer uses fees to match employees’ contributions to educational institutions and is not making corporate contributions to civic and service organizations. On the other hand, MITRE plans to continue using fees for some expenses DCAA criticized, such as providing officers company cars for personal use and generous miscellaneous allowances for employees who are relocated. MITRE maintains that its fee expenditures are comparable to the costs commercial concerns incur and are necessary to attract and retain top-quality technical and management personnel. 

DOD Could Reduce Need to Fund Nonreimbursable Interest Costs 

Because the Army and the Air Force delay providing contract funding at the start of a fiscal year, MITRE needs discretionary funding—provided through fees—to cover estimates of nonreimbursable interest costs. MITRE operates under a series of annual contract options awarded by the Army and the Air Force, and funds allotted to one fiscal year’s contract may not be carried over to a following fiscal year. Further, funding comes from the various program offices for which MITRE does work, rather than from a single Army- or Air Force-wide source. Once the various program offices transfer funds to the contracting officer, the contracting officer issues contract changes to allot the funds. MITRE’s contracts with the two services provide for reimbursement of allowable costs incurred, limited to the amount of funds allotted to the contracts. Thus, MITRE cannot submit bills for the cost of the work it has started until the funds have been allotted. During fiscal year 1994, delays in providing funding for MITRE’s Army and Air Force contracts were significant. The Army, for example, first allotted funds to MITRE’s contract on November 30, 1993—2 months after work on the contract began. 
As of January 1994, allotments amounted to only 16 percent of estimated cost and did not reach 95 percent of estimated costs until August 1994. For several large projects, no funds were allotted until March 1994—almost 6 months after work started. Funding delays affect MITRE’s finances. MITRE records costs incurred for which no billings have been submitted as “unbilled costs.” The level of unbilled costs carried on MITRE’s books varied through fiscal year 1994 and reached $85.6 million at the end of January 1994—about 73 percent of the company’s net worth. Unbilled costs on Army and Air Force contracts accounted for $66.6 million of the $85.6 million total. MITRE had $47.6 million in loans outstanding at the end of January 1994 and incurred $866,000 in interest costs during the year. We estimate that, if unbilled costs on Army and Air Force contracts had been due only to normal bill processing delays, average unbilled costs for the contracts would have been reduced from $38.2 million to $15.1 million during fiscal year 1994. Reducing average unbilled costs by $23.1 million would significantly reduce MITRE’s financing burden. Since MITRE’s average borrowings during fiscal year 1994 were $21.7 million, the need to provide fees to cover nonreimbursable interest costs would have been substantially reduced or eliminated. Several military program management personnel cited a desire to retain funds to deal with contingencies as a reason for having delayed funding MITRE’s work. One said that he was unaware that funding delays adversely affected the company. Another program management official, however, stated that they have a good idea of how much MITRE support they will use during a year; thus, there is no excuse for delaying funds. In fiscal year 1995, the Air Force placed a high priority on obtaining prompt funding of MITRE projects. By January 1995, the Air Force had allotted funds representing about 85 percent of the estimated contract costs. 
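The financing arithmetic above can be sketched as a back-of-envelope check using the dollar figures as reported (in millions). The implied net worth is an inference from the reported 73 percent ratio, not a figure stated in the report:

```python
# Back-of-envelope check of the FY1994 unbilled-cost figures (dollars in millions).

unbilled_peak = 85.6        # all contracts, end of January 1994
unbilled_dod = 66.6         # Army and Air Force portion of the peak
net_worth = unbilled_peak / 0.73  # implied net worth, from the ~73% ratio

avg_unbilled_actual = 38.2  # average DOD unbilled costs during FY1994
avg_unbilled_normal = 15.1  # estimated level with only normal billing delays
reduction = avg_unbilled_actual - avg_unbilled_normal  # the $23.1M cited

avg_borrowings = 21.7       # MITRE's average borrowings during FY1994

print(f"Implied net worth: ${net_worth:.1f}M")
print(f"Reduction in average unbilled costs: ${reduction:.1f}M")
# Because the potential reduction exceeds average borrowings, prompt funding
# could have largely eliminated the borrowing that generated the
# nonreimbursable interest costs discussed above.
print(f"Reduction exceeds average borrowings: {reduction > avg_borrowings}")
```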
Other options for reducing MITRE’s financing requirements include the advance payment pool mechanism that some university-sponsored FFRDCs use and the revolving budget authority account that the Air Force uses to provide advance funding to The Aerospace Corporation. 

Oversight and Negotiation of Fees Could Be Improved 

DOD’s oversight of MITRE’s fee expenditures does not ensure that negotiated fee awards are equitable and consistent. Army and Air Force contracting officers generally analyze past MITRE fee expenditures to estimate fee needs for future years. However, because MITRE has contracts with many other federal agencies and state and foreign governments, we believe it is important that each customer bear an equitable share of fee expenditures. Since contracting officers do not routinely screen fee expenditures for nonrecurring costs, we are concerned that estimates of future fee needs may be distorted. Finally, we noted that lack of clear guidance on using fee to provide financing led the Army to award MITRE a fee for fiscal year 1995 that was higher than the fee the Air Force contracting officer awarded. Army and Air Force contracting officers have not determined if MITRE’s estimates of fees on non-DOD contracts, which reflect anticipated fee rates and volumes of business, are reasonable. As shown in table 1, in 3 of the last 4 years, MITRE underestimated non-DOD fee earnings. In fiscal year 1994, the underestimate amounted to about $1.2 million, or almost 12 percent of estimated non-DOD fee earnings. Contracting officers, in some cases, have analyzed past trends in non-DOD fee earnings but have not reviewed the reasonableness of estimates for future years. Thus, MITRE was able to obtain larger fees from combined DOD and non-DOD sources than contemplated in the Army and the Air Force fee negotiations. Army and Air Force contracting officers have only occasionally determined whether fee expenditures related to DOD and non-DOD work were proportional to the work performed. 
This is partly due to MITRE’s accounting system commingling fee expenditures, making it difficult to identify fee expenses that relate primarily to non-DOD customers. For example, in fiscal year 1994, MITRE recorded $79,181 in costs incurred for first-class airfare and similar nonreimbursable travel expenses in a single company account. One individual, the chief of the MITRE division that does air traffic control work for FAA, incurred $21,000, or about 27 percent, of these charges. In one instance, the chief spent $6,486 for a first-class flight to London, England, to attend an air traffic control conference. MITRE claimed reimbursement for the $2,471 cost of a coach ticket and charged the additional $4,015 cost of a first-class ticket to the commingled fee account even though MITRE acknowledged that these trips were related to MITRE’s air traffic control system work for foreign governments. MITRE has recently changed its accounting system to account for fee expenditures by division—roughly representing major customers—providing an opportunity to more readily determine which customer benefits from particular expenses. In addition, contracting officers have not analyzed the relative needs for working capital related to different customers. Payment cycles on MITRE’s non-DOD contracts are typically longer than those on DOD contracts. Both the Army and the Air Force contracts provide for biweekly billings, and timeliness of payment is routinely discussed during fee negotiations. Many non-DOD contracts, however, provide for monthly, rather than biweekly, billings, and payments on these contracts are generally less prompt. Consequently, non-DOD customers have made proportionately heavier demands on MITRE’s working capital than the Army and the Air Force. During fiscal year 1994, accounts receivable due from non-DOD customers represented an average of 60 days of revenue, compared to 12.7 days for the Army and the Air Force. 
The unbilled costs for both DOD and non-DOD customers each averaged roughly 35 days of revenue. Thus, the total financing burden for non-DOD customers of 94.3 days of revenue was almost twice the financing burden for the Army and the Air Force, which was 48.2 days. Contracting officers did not routinely analyze MITRE fee expenditures to identify nonrecurring costs that would distort projections of future fee needs. Major categories of nonrecurring expenses have been occasionally identified. DCAA, for example, identified several nonrecurring items in its review of 1993 fee expenditures. We noted several expenditures that appeared to be nonrecurring in nature during our review of 1994 fee expenditures, as the following shows: 

- A charge of $507,000 to reconcile MITRE’s accounting records to its property management records. During fiscal year 1994, MITRE undertook a major effort to identify discrepancies between its property and accounting record-keeping systems because independent auditors had criticized it for not reconciling the two systems regularly. 
- A charge of $270,000 to record anticipated costs of providing meals and refreshments at meetings. MITRE accounting staff told us that at the end of fiscal year 1994, costs incurred for meals and refreshments at meetings were substantially less than in previous years. This charge was recorded because the accounting staff anticipated that these costs would eventually equal those of previous years, but the anticipated costs did not materialize. 
- A charge of $310,845 to write off losses on contracts with the German government. These losses were written off as part of an agreement to resolve payment disputes on work performed between 1979 and 1992. 

Lack of sufficient guidance on use of fee to provide financing for FFRDCs led Army and Air Force contracting officers to award significantly different fee rates. 
In 1993, MITRE obtained a 3-year term loan to take advantage of favorable, fixed interest rates rather than the fluctuating rates on its short-term borrowing. In fee negotiations for fiscal year 1995, MITRE proposed that it obtain another term loan during 1995. The Air Force considered the proceeds of this loan as a source of cash, offset by a need to make principal payments on the term loan, and awarded a fee of $2 million, or about 0.9 percent of estimated contract costs. The Army, on the other hand, excluded both loan proceeds and principal payments from its analysis of 1995 fee requirements because it was unwilling to make a commitment to provide fee to cover principal payments in future years. Consequently, the Army awarded a fee of $3.7 million on its somewhat smaller contract, amounting to 2.3 percent of estimated costs. DOD guidance provides no suggestions on how contracting officers should treat financing transactions in analyzing fee needs. 

DOD Recognizes Fee Guidance Should Be Strengthened 

In the Conference Report of the DOD Appropriations Act for Fiscal Year 1995, the Congress directed DOD to review how its FFRDCs have used fees and provide recommendations for revising the DOD FFRDC fee structure. The results of that review, reported in May 1995, are consistent with the findings of our current work and recommendations we made in 1969 regarding fees granted sponsored nonprofit research organizations. DOD’s report identified a need for stronger guidance on FFRDC fees and more consistent fee awards. In its report, DOD concludes that because the Weighted Guidelines Method normally results in a fee greater than demonstrated need, some contracting officers have awarded unneeded fees. DOD recommended that the guidance be revised to (1) make it clear that need will be the criterion for awarding fees to FFRDCs, (2) avoid using undefined and ambiguous terms to describe fee needs, and (3) identify specific costs that are inappropriate to pay from fees. 
We found that Army and Air Force fee analysis procedures are intended to limit MITRE’s fee to demonstrated need. The lack of a clear description of costs that fees can be used to cover, however, has complicated contracting officers’ efforts to ascertain MITRE’s fee needs. DOD’s report also identified a need for greater audit oversight of costs FFRDCs have historically paid from fees, such as the costs for independent research programs, contract termination, and capital equipment. In its report, DOD recommends, as we did in 1969, that independent research be treated as a reimbursable cost so that expenditures will be subject to routine audit oversight. The Army and the Air Force have implemented this treatment of independent research costs at MITRE. DOD’s report also recommended that termination costs should be audited and reimbursed directly when and if an FFRDC’s contract is terminated; fees should not be provided for such costs. We have opposed using fees to build contingency reserves, and MITRE has not requested fees to build termination cost reserves. As to financing capital equipment with fees, DOD recommends that the FFRDCs’ capital acquisition plans be thoroughly audited. We have recommended that FFRDC sponsors fund capital equipment purchases directly through contract charges rather than through fee. We noted that the capital equipment acquisition plan MITRE proposed during fiscal year 1994 fee negotiations differed markedly from MITRE’s actual purchases for the year. 
Recommendations to the Secretary of Defense 

We recommend that the Secretary of Defense 

- issue guidance that, to the extent practicable, specifically identifies the nature and extent of nonreimbursable costs that may be covered by fee and the costs for which fees should not be provided; 
- consider the feasibility of issuing guidance specifying the circumstances in which each of the various funding and payment methods devised by the services should be used; and 
- assign responsibility to the Director of Defense Research and Engineering for routinely surveying the services’ fee-granting processes for FFRDCs, identifying and promoting the use of effective or innovative analytical practices, and recommending needed changes to eliminate inconsistencies in awarding fees. 

DOD and MITRE Comments 

DOD generally concurred with a draft of this report. It stated that the report would be helpful to ongoing DOD efforts to strengthen its procedures for the oversight and use of management fees by DOD-sponsored FFRDCs. However, DOD pointed out that none of the data in the report represented improper activity, as currently defined by contract or regulation, on the part of either the Air Force, the Army, or the MITRE Corporation. DOD also agreed with our recommendations and indicated it would take steps to address them. It added that (1) in fiscal year 1996 DOD will address inappropriate use of fees during the contract negotiation process and (2) beginning in fiscal year 1995, fee expenditures are being associated directly with the cost centers (i.e., contracts) benefiting from the expenses. DOD said that these steps will result in reductions in the amount of fee paid to an FFRDC and help ensure that it pays only its fair share of fee expenses. DOD’s comments are presented in their entirety in appendix I. 
MITRE agreed with our recommendation on the need for strengthened guidance on the nature and extent of nonreimbursable costs that may be covered by fee and the costs for which fees should not be provided. It also agreed with our observation regarding interest costs incurred as a result of delays in funding, as well as in billing and payment cycles, and said it would welcome some form of advance funding/payment mechanism. MITRE’s comments are presented in their entirety in appendix II. 

Scope and Methodology 

We reviewed documentation relating to company organization and management and interviewed MITRE officials at the company’s Bedford, Massachusetts, and McLean, Virginia, locations. We also reviewed accounting records and supporting documentation relating to fee expenditures during fiscal year 1994. We selected fiscal year 1994 because it was the most recently completed fiscal year at the time of our review and because expenditures for fiscal year 1993 had been reviewed by DCAA. We interviewed officials and reviewed documentation maintained at the Director of Defense Research and Engineering, the DOD Inspector General, and DCAA. We also interviewed officials and reviewed documentation relating to contract fee awards at the three agencies that contract with MITRE for FFRDC operations: the U.S. Army Communications-Electronics Command, Fort Monmouth, New Jersey; the Air Force Electronic Systems Center, Hanscom Air Force Base, Massachusetts; and FAA, Washington, D.C. We conducted our review from October 1994 to August 1995 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to other interested congressional committees, the Secretary of Defense, the Director of the Office of Management and Budget, and the President of the MITRE Corporation. 
We will also make copies available to others on request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix III. Comments From the Department of Defense Comments From the MITRE Corporation The following is GAO’s comment on the MITRE Corporation’s letter dated October 26, 1995. GAO Comment 1. During the period covered by our review, MITRE did not have accounting mechanisms in place to track fee expenditures separately for each contract or customer. Consequently, data were not available to perform a structured analysis of whether contracts or customers paid a fair share of fees. As we note in our report, MITRE has changed its accounting system to account for fee expenditures by division or major customer. These data should facilitate analyses to determine whether fees are equitable among customers. Major Contributors to This Report Office of General Counsel, Washington, D.C. Thorton L. Harvey Monty Peters 
| Pursuant to a congressional request, GAO examined the use of Department of Defense (DOD) management fees provided to the Mitre Corporation, focusing on: (1) the adequacy of federal guidance on how fees may be used; (2) ways to reduce contractor management fees and strengthen DOD management fee oversight; and (3) DOD efforts to improve the fee management process for its federally funded research and development centers (FFRDC). GAO found that: (1) neither the Office of Management and Budget nor DOD has prepared sufficient guidance on negotiating FFRDC contract fees; (2) consequently, recurring questions are raised about Mitre's use of fees, as well as the use of fees by other DOD FFRDC; (3) the services have delayed providing contract funding at the start of a fiscal year; (4) consequently, Mitre has needed large amounts of fee to cover interest expenses; and (5) DOD oversight of contract fees has not ensured that fee awards to Mitre are reasonable and consistent. |
Background Some context for my remarks is appropriate. The threat of terrorism was significant throughout the 1990s; a plot to destroy 12 U.S. airliners was discovered and thwarted in 1995, for instance. Yet the task of providing security to the nation’s aviation system is unquestionably daunting, and we must reluctantly acknowledge that no form of travel can ever be made totally secure. The enormous size of U.S. airspace alone defies easy protection. Furthermore, given this country’s hundreds of airports, thousands of planes, tens of thousands of daily flights, and the seemingly limitless ways terrorists or criminals can devise to attack the system, aviation security must be enforced on several fronts. Safeguarding airplanes and passengers requires, at the least, ensuring that perpetrators are kept from breaching security checkpoints and gaining access to aircraft. FAA has developed several mechanisms to prevent criminal acts against aircraft, such as adopting technology to detect explosives and establishing procedures to ensure that passengers are positively identified before boarding a flight. Still, in recent years, we and others have often demonstrated that significant weaknesses continue to plague the nation’s aviation security. The current aviation security structure and its policies, requirements, and practices have evolved since the early 1960s and were heavily influenced by a series of high-profile aviation security incidents. Historically, the federal government has maintained that providing security is the responsibility of air carriers and airports as part of their cost of doing business. Beginning in 1972, air carriers were required to provide screening personnel, and airport operators were required to provide law enforcement support. 
However, with the rise in air piracy and terrorist activities that threatened not only commercial aviation but the national security of the United States, discussions began to emerge as to who should have the responsibility for providing security at our nation’s airports. With the events of the last week, concerns have been raised again as to who should be responsible for security and for screening passengers at our nation’s airports. This issue has evoked numerous discussions through the years and just as many proposals for who should handle security at our nation’s airports and how. But as pointed out in a 1998 FAA study, there had not been a consensus among the various aviation-related entities. To identify options for assigning screening responsibilities, we surveyed aviation stakeholders—security officials at the major air carriers and the largest airports, large screening companies, and industry associations—and aviation and terrorism experts. We asked our respondents to provide their opinions about the current screening program, criteria they believe are important in considering options, the advantages and disadvantages of each option, and their comments on implementing a different screening approach. It is important to understand that we gathered this information prior to September 11, 2001, and some respondents’ views may have changed. Weaknesses in Airport Access Controls Control of access to aircraft, airfields, and certain airport facilities is a critical component of aviation security. Existing access controls include requirements intended to prevent unauthorized individuals from using forged, stolen, or outdated identification or their familiarity with airport procedures to gain access to secured areas. In May 2000, we reported that our special agents, in an undercover capacity, obtained access to secure areas of two airports by using counterfeit law enforcement credentials and badges. 
At these airports, our agents declared themselves as armed law enforcement officers, displayed simulated badges and credentials created from commercially available software packages or downloaded from the Internet, and were issued “law enforcement” boarding passes. They were then waved around the screening checkpoints without being screened. Our agents could thus have carried weapons, explosives, chemical/biological agents, or other dangerous objects onto aircraft. In response to our findings, FAA now requires that each airport’s law enforcement officers examine the badges and credentials of any individual seeking to bypass passenger screening. FAA is also working on a “smart card” computer system that would verify law enforcement officers’ identity and authorization for bypassing passenger screening. The Department of Transportation’s Inspector General has also uncovered problems with access controls at airports. The Inspector General’s staff conducted testing in 1998 and 1999 of the access controls at eight major airports and succeeded in gaining access to secure areas in 68 percent of the tests; they were able to board aircraft 117 times. After the release of its report describing its successes in breaching security, the Inspector General conducted additional testing between December 1999 and March 2000 and found that, although improvements had been made, access to secure areas was still gained more than 30 percent of the time. Inadequate Detection of Dangerous Objects by Screeners Screening checkpoints and the screeners who operate them are a key line of defense against the introduction of dangerous objects into the aviation system. Over 2 million passengers and their baggage must be checked each day for articles that could pose threats to the safety of an aircraft and those aboard it. The air carriers are responsible for screening passengers and their baggage before they are permitted into the secure areas of an airport or onto an aircraft. 
Air carriers can use their own employees to conduct screening activities, but most air carriers hire security companies to do the screening. Currently, multiple carriers and screening companies are responsible for screening at some of the nation’s larger airports. Concerns have long existed about screeners’ ability to detect and prevent dangerous objects from entering secure areas. Each year, weapons have been discovered to have passed through one checkpoint, only to be found during screening for a subsequent flight. FAA monitors the performance of screeners by periodically testing their ability to detect potentially dangerous objects carried by FAA special agents posing as passengers. In 1978, screeners failed to detect 13 percent of the objects during FAA tests. In 1987, screeners missed 20 percent of the objects during the same type of test. Test data for the 1991 to 1999 period show that this declining trend in detection rates continued. Furthermore, the recent tests show that as tests become more realistic and more closely approximate how a terrorist might attempt to penetrate a checkpoint, screeners’ ability to detect dangerous objects declines even further. As we reported last year, there is no single reason why screeners fail to identify dangerous objects. Two conditions—rapid screener turnover and inadequate attention to human factors—are believed to be important causes. Rapid turnover among screeners has been a long-standing problem, having been identified as a concern by FAA and by us in reports dating back to at least 1979. We reported in 1987 that turnover among screeners was about 100 percent a year at some airports, and according to our more recent work, the turnover is considerably higher. From May 1998 through April 1999, screener turnover averaged 126 percent at the nation’s 19 largest airports; 5 of these airports reported turnover of 200 percent or more, and one reported turnover of 416 percent. 
At one airport we visited, of the 993 screeners trained at that airport over about a 1-year period, only 142, or 14 percent, were still employed at the end of that year. Such rapid turnover can seriously limit the level of experience among screeners operating a checkpoint. Both FAA and the aviation industry attribute the rapid turnover to the low wages and minimal benefits screeners receive, along with the daily stress of the job. Generally, screeners are paid at or near the minimum wage. We reported last year that some of the screening companies at 14 of the nation’s 19 largest airports paid screeners a starting salary of $6.00 an hour or less and, at 5 of these airports, the starting salary was the minimum wage—$5.15 an hour. It is common for the starting wages at airport fast-food restaurants to be higher than the wages screeners receive. For instance, at one airport we visited, screeners’ wages started as low as $6.25 an hour, whereas the starting wage at one of the airport’s fast-food restaurants was $7 an hour. The demands of the job also affect performance. Screening duties require repetitive tasks as well as intense monitoring for the very rare event when a dangerous object might be observed. Too little attention has been given to factors such as (1) improving individuals’ aptitudes for effectively performing screener duties, (2) the sufficiency of the training provided to screeners and how well they comprehend it, and (3) the monotony of the job and the distractions that reduce screeners’ vigilance. As a result, screeners who do not have the necessary aptitudes or adequate knowledge to perform the work effectively are being placed on the job, and they then find the duties tedious and dull. We reported in June 2000 that FAA was implementing a number of actions to improve screeners’ performance. 
However, FAA did not have an integrated management plan for these efforts that would identify and prioritize checkpoint and human factors problems that needed to be resolved and identify measures—and related milestone and funding information—for addressing the performance problems. Additionally, FAA did not have adequate goals by which to measure and report its progress in improving screeners’ performance. FAA is implementing our recommendations to develop an integrated management plan. However, two key actions to improve screeners’ performance are still not complete. These actions are the deployment of threat image projection (TIP) systems—which place images of dangerous objects on the monitors of X-ray machines to keep screeners alert and monitor their performance—and a certification program to make screening companies accountable for the training and performance of the screeners they employ. Threat image projection systems are expected to keep screeners alert by periodically imposing the image of a dangerous object on the X-ray screen. They also are used to measure how well screeners perform in detecting these objects. Additionally, the systems serve as a device to train screeners to become more adept at identifying harder-to-spot objects. FAA is currently deploying the threat image projection systems and expects to have them deployed at all airports by 2003. The screening company certification program, required by the Federal Aviation Reauthorization Act of 1996, will establish performance, training, and equipment standards that screening companies will have to meet to earn and retain certification. However, FAA has still not issued its final regulation establishing the certification program. This regulation is particularly significant because it is to include requirements mandated by the Airport Security Improvement Act of 2000 to increase screener training—from 12 hours to 40 hours—as well as to expand background check requirements. 
FAA had been expecting to issue the final regulation this month, 2 ½ years later than it originally planned. According to FAA, it needed the additional time to develop performance standards based on screener performance data. Options for Assigning Screening Responsibility to Other Entities Because of the Subcommittee’s long-standing concerns about the performance of screeners, you asked us to examine options for conducting screening and to outline some advantages and disadvantages associated with these alternatives. Many aviation stakeholders agreed that a stable, highly trained, and professional workforce is critical to improving screening performance. They identified compensation and improved training as the highest priorities in improving performance. Respondents also believed that the implementation of performance standards, team and image building, awards for exemplary work, better supervision, and certification of individual screeners would improve performance. Some respondents believed that a professional workforce could be developed in any organizational context, and that changing the delegation of screening responsibilities would increase the costs of screening. Four Major Alternatives for Screening We identified four principal alternative approaches to screening. Each alternative could be structured and implemented in many different ways; for instance, an entity might use its own employees to screen passengers, or it might use an outside contractor to perform the job. In each alternative, we assumed that FAA would continue to be responsible for regulating screening, overseeing performance, and imposing penalties for poor performance. Table 1 outlines the four options. Criteria for Assessing Screening Alternatives Shifting responsibility for screening would be a step affecting many stakeholders and might demand many resources. Accordingly, a number of criteria must be weighed before changing the status quo. 
We asked aviation stakeholders to identify key criteria that should be used in assessing screening alternatives. These criteria are to establish accountability for screening performance; ensure cooperation among stakeholders, such as airlines, airports, and FAA; efficiently move passengers to flights; and minimize legal and liability issues. We asked airline and airport security officials to assess each option for reassigning screener responsibility against the key criteria. Specifically, we asked them to indicate whether an alternative would be better, the same, or worse than the current situation with regard to each criterion. Table 2 summarizes their responses. Leaving Responsibility to Air Carriers With New Certification Rules At the time of our review, FAA was finalizing a certification rule that would make a number of changes to the screening program, including requiring FAA certification of screening companies and the installation of TIP systems on X-ray machines at screening checkpoints. Our respondents believed that these actions would improve screeners’ performance and accountability. Some respondents approved of the proposed changes since they would result in FAA having a direct regulatory role vis-a-vis the screening companies. Others indicated that the installation of TIP systems nationwide could improve screener awareness and ability to detect potentially threatening objects and result in better screener performance. Respondents did not believe that this option would affect stakeholder cooperation, affect passenger movement through checkpoints, or pose any additional legal issues. Assigning Screening Responsibilities to Airports No consensus existed among aviation stakeholders about how airport control of screening would affect any of the key criteria. Almost half indicated that screener performance would not change if the airport authority were to assume responsibility, particularly if the airport authority were to contract out the screening operation. 
Some commented that screening accountability would likely blur because of the substantial differences among airport management and governance. Many respondents indicated that the airport option would produce the same or worse results than the current situation in terms of accountability, legal/liability issues, cooperation among stakeholders, and passenger movement. Several respondents noted that cooperation between air carriers and airports could suffer because the airports might raise the cost of passenger screening and slow down the flow of passengers through the screening checkpoint—to the detriment of the air carriers’ operations. Others indicated that the legal issue of whether employees of a government-owned airport could conduct searches of passengers might pose a significant barrier to this option. Creating a New Federal Agency Within DOT Screening performance and accountability would improve if a new agency were created in DOT to control screening operations, according to those we interviewed. Some respondents viewed having one entity whose sole focus would be security as advantageous and believed it fitting for the federal government to take a more direct role in ensuring aviation security. Respondents indicated that federal control could lead to better screener performance because a federal entity most likely would offer better pay and benefits, attract a more professional workforce, and reduce employee turnover. There was no consensus among the respondents preferring this option on how federal control might affect stakeholder cooperation, passenger movement, or legal and liability issues. Creating a Federal Corporation For some of the same reasons mentioned above, respondents believed that screening performance and accountability would improve under a government corporation charged with screening. 
The majority of the respondents preferred the government corporation to the DOT agency because they viewed it as more flexible and less bureaucratic than a federal agency. For instance, the corporation would have more autonomy from the funding and budgeting requirements that typically govern the operations of federal agencies. Respondents believed that the speed of passengers through checkpoints was likely to remain unchanged. No consensus existed among respondents preferring the government corporation option about how federal control might affect stakeholder cooperation or legal and liability issues. Potential Lessons About Screening Practices From Other Countries We visited five countries—Belgium, Canada, France, the Netherlands, and the United Kingdom—viewed by FAA and the civil aviation industry as having effective screening operations to identify screening practices that differ from those in the United States. The responsibility for screening in most of these countries is placed with the airport authority or with the government, not with the air carriers as it is in the United States. In Belgium, France, and the United Kingdom, the responsibility for screening has been placed with the airports, which either hire screening companies to conduct the screening operations or, as at some airports in the United Kingdom, hire screeners and manage the checkpoints themselves. In the Netherlands, the government is responsible for passenger screening and hires a screening company to conduct checkpoint operations, which are overseen by a Dutch police force. We note that, worldwide, of 102 other countries with international airports, 100 have placed screening responsibility with the airports or the government; only 2 other countries—Canada and Bermuda—place screening responsibility with air carriers. We also identified differences between the United States and the five countries in three other areas: screening operations, screener qualifications, and screener pay and benefits. 
As we move to improve the screening function in the United States, practices of these countries may provide some useful insights. First, screening operations in some of the countries we visited are more stringent. For example, Belgium, the Netherlands, and the United Kingdom routinely touch or “pat down” passengers in response to metal detector alarms. Additionally, all five countries allow only ticketed passengers through the screening checkpoints, thereby allowing the screeners to more thoroughly check fewer people. Some countries also have a greater police or military presence near checkpoints. In the United Kingdom, for example, security forces—often armed with automatic weapons—patrol at or near checkpoints. At Belgium’s main airport in Brussels, a constant police presence is maintained at one of two glass-enclosed rooms directly behind the checkpoints. Second, screeners’ qualifications are usually more extensive. In contrast to the United States, Belgium requires screeners to be citizens; France requires screeners to be citizens of a European Union country. In the Netherlands, screeners do not have to be citizens, but they must have been residents of the country for 5 years. Training requirements for screeners were also greater in four of the countries we visited than in the United States. While FAA requires that screeners in this country have 12 hours of classroom training before they can begin work, Belgium, Canada, France, and the Netherlands require more. For example, France requires 60 hours of training and Belgium requires at least 40 hours of training with an additional 16 to 24 hours for each activity, such as X-ray machine operations, that the screener will conduct. Finally, screeners receive relatively better pay and benefits in most of these countries. 
Whereas screeners in the United States receive wages that are at or slightly above minimum wage, screeners in some countries receive wages that are viewed as being at the “middle income” level in those countries. In the Netherlands, for example, screeners received at least the equivalent of about $7.50 per hour. This wage was about 30 percent higher than the wages at fast-food restaurants in that country. In Belgium, screeners received the equivalent of about $14 per hour. Not only is pay higher, but the screeners in some countries receive benefits, such as health care or vacations—in large part because these benefits are required under the laws of these countries. These countries also have significantly lower screener turnover than the United States: turnover rates were about 50 percent or lower in these countries. Because each country follows its own unique set of screening practices, and because data on screeners’ performance in each country were not available to us, it is difficult to measure the impact of these different practices on improving screeners’ performance. Nevertheless, there are indications that for at least one country, these practices may help to improve screeners’ performance. This country conducted a screener-testing program jointly with FAA that showed that its screeners detected over twice as many test objects as did screeners in the United States. | A safe and secure civil aviation system is a critical component of the nation's overall security, physical infrastructure, and economic foundation. Billions of dollars and a myriad of programs and policies have been devoted to achieving such a system. 
Although it is not fully known at this time what actually occurred or what all the weaknesses in the nation's aviation security apparatus are that contributed to the horrendous terrorist acts of September 11, 2001, it is clear that serious weaknesses exist in the nation's aviation security system and that their impact can be far more devastating than previously imagined. There are security concerns with (1) airport access controls, (2) passenger and carry-on baggage screening, and (3) alternatives to current screening practices, including practices in selected other countries. Controls for limiting access to secure areas, including aircraft, have not always worked as intended. In May 2000, special agents used counterfeit law enforcement badges and credentials to gain access to secure areas at two airports, bypassing security checkpoints and walking unescorted to aircraft departure gates. In June 2000, testing of screeners showed that significant, long-standing weaknesses--measured by the screeners' abilities to detect threat objects located on passengers or contained in their carry-on luggage--continue to exist. More recent results show that as tests more closely approximate how a terrorist might attempt to penetrate a checkpoint, screeners' performance declines significantly. Weaknesses in screening and controlling access to secure areas have left questions concerning alternative approaches. In assessing alternatives, respondents identified five important criteria: improving screening performance, establishing accountability, ensuring cooperation among stakeholders, moving people efficiently, and minimizing legal and liability issues. |
Background According to 1992 congressional testimony, thieves turn stolen cars into money in three ways. The most common way is for a thief to take a car to a “chop shop,” where the car is dismantled and its parts are sold as replacement parts for other vehicles. The second way is for a thief to obtain an apparently valid title for the car and then sell it to a third party. The third way is for a thief to export the vehicle for sale abroad. The 1992 Act contains several approaches for dealing with these criminal activities. Title I directed the establishment of, among other things, a task force to study problems that may affect motor vehicle theft and created a new federal crime of armed carjacking. The task force was to be made up of representatives of related federal and state agencies and associations. Title II called for establishment of the National Motor Vehicle Title Information System to enable state departments of motor vehicles to check the validity of out-of-state titles before issuing new titles. Title II authorized grants up to 25 percent of a state’s start-up costs, with a limit of $300,000 per state. Title III expanded the parts marking program established in the Theft Act of 1984. The program was intended to reduce the selling of stolen parts. Major component parts of designated passenger motor vehicles are to be marked with identification numbers so that stolen parts can be identified. Title III also required the Attorney General to develop and maintain a national information system, known as the National Stolen Passenger Motor Vehicle Information System (NSPMVIS), that is to contain the identification numbers of stolen passenger motor vehicles and stolen passenger motor vehicle component parts. This system is to be maintained within the Federal Bureau of Investigation’s (FBI) National Crime Information Center (NCIC), unless the Attorney General determines that it should be operated separately. 
The 1992 Act also required that the Departments of Justice and Transportation prepare studies on various sections of the 1992 Act. Scope and Methodology To determine the implementation status of the marking and information systems parts of the 1992 Act, we reviewed the 1992 Act, including its legislative history, and the Theft Act of 1984. We also interviewed officials and reviewed documentation from the Departments of Justice and Transportation, the federal agencies responsible for implementing the 1992 Act’s marking and information systems provisions. Specifically, we obtained information from Justice’s FBI, National Institute of Justice, Criminal Division, and Office of Legislative Affairs and from Transportation’s National Highway Traffic Safety Administration (NHTSA). We also interviewed officials from the American Association of Motor Vehicle Administrators (AAMVA) and the National Insurance Crime Bureau (NICB), which are involved in developing information systems called for in the 1992 Act’s provisions. To identify any issues that may impede the implementation or influence the effectiveness of the marking and information systems parts of the 1992 Act, we developed a list of possible issues affecting the implementation or effectiveness of these parts of the act by reviewing documents and interviewing the same officials from these agencies. We then discussed this list with the officials and revised it on the basis of their comments. We did not determine the validity of these issues or verify the data provided to us. We performed our work in Washington, D.C., from November 1995 to February 1996 in accordance with generally accepted government auditing standards. On February 27, 1996, we requested comments on a draft of this report from the Attorney General, the Secretary of Transportation, the NICB Project Manager, and the AAMVA Director of Vehicle Services. 
We discussed this report, separately, with representatives of these organizations, including NHTSA’s Highway Safety Specialist; AAMVA’s Director of Vehicle Services; the Executive Director of NICB-FACTA, Inc.; and the Director of Justice’s Audit Liaison Office, on March 7, 11, and 14, 1996, respectively. They generally agreed with the factual information in the report. Their comments have been incorporated where appropriate. National Motor Vehicle Title Information System The 1992 Act required Transportation to, among other things, establish a task force by April 25, 1993, to study problems related to motor vehicle titling, registration, and salvage that may affect motor vehicle theft, and to recommend (1) ways to solve these problems, including obtaining any national uniformity that it determines is necessary in these areas and related resources, and (2) other needed legislative or administrative actions; review state systems for motor vehicle titling by January 1, 1994, and determine each state’s costs for providing a titling information system; establish the title information system by January 31, 1996, unless Transportation determines that an existing system meets the statute’s requirement; and report to Congress by January 1, 1997, on those states that elected to participate in the information system and on those states not participating, including the reasons for nonparticipation. The title information system is intended to enable states and other users (e.g., law enforcement officials) to instantly and reliably determine, among other things, (1) the validity of title documents, (2) whether an automobile bearing a known identification number is titled in a particular state, and (3) whether an automobile titled in a particular state is, or has been, junked or salvaged. 
Implementation Status of the 1992 Act's Requirements

The task force, established in April 1993, reported its recommendations in February 1994 on the legislative and administrative actions needed to address problems in the areas of titling, registration, and controls over salvage to deter motor vehicle theft. The task force recommended, among other things, (1) the passage of federal legislation that would require uniform definitions for terms such as salvage vehicles and uniform methods for titling vehicles, (2) possible funding sources to pay for and maintain the titling system, and (3) penalties to enforce compliance by the participating states. The recommendations are detailed in appendix I. According to the task force chairman, the recommendations would have to be implemented to achieve the uniformity needed to ensure that the titling system would operate as envisioned. In October 1994, Transportation accepted most of the task force's recommendations (see app. I regarding Transportation's views on the task force recommendations). NHTSA contracted with AAMVA to identify the states' costs for a titling system. AAMVA surveyed the 50 states and the District of Columbia to obtain their estimated costs for implementing the titling system; some states, for example, would have to modify their existing titling systems. On January 31, 1994, NHTSA's survey report stated that for the 37 states that provided cost estimates, the costs ranged from zero (1 state) to $12.2 million. In March 1996, AAMVA officials estimated that about $19 million in federal grants would be needed to fund states' implementation costs. NHTSA officials said that since 13 states and the District of Columbia did not provide a cost estimate, they did not believe that the total costs to the states could be accurately determined. AAMVA pointed out that about 80 percent of the nation's motor vehicle population is in the states that responded to the survey.
In May 1994, Transportation sent proposed legislation to Congress to allow the Secretary of Transportation to extend the target date (from January 1996 to October 1997) for implementation of the national title information system. According to NHTSA officials, the proposed legislation was not introduced in Congress. Transportation requested the authority to extend the implementation date for the titling system because it understood that AAMVA was planning a pilot study of a titling information system, using only state and private sector funds and resources, and Transportation wanted to evaluate the study results. Subsequently, AAMVA requested funding from NHTSA for the pilot. In December 1994, NHTSA denied AAMVA’s request for funds to conduct a pilot study because, in NHTSA’s view, such a study would have been premature without first having uniformity in state titling laws and regulations. However, Congress provided $890,000 for a pilot study by NHTSA as part of Transportation’s fiscal year 1996 appropriation. NHTSA officials said that AAMVA would have responsibility for the pilot. According to AAMVA officials, as of January 1996, they were in the process of acquiring contractors to conduct the pilot, using AAMVA’s commercial driver’s license information system as the pilot’s model. According to NHTSA, the pilot should assist in determining the feasibility of a national titling system and identifying any needed uniform titling requirements for an efficient and cost-effective system. In addition, NHTSA expects the pilot to assist in determining the estimated costs for full implementation, the time frame to implement a nationwide system, the current status of titling information exchange between states, and possible barriers, in particular the absence of uniform system definitions, that could impede the states from participating in a national system. NHTSA said that the pilot study may not be able to identify all costs associated with a national titling system. 
It also said the complexity of implementing a titling system on a nationwide basis may call for additional resources beyond those identified in the pilot. NHTSA prepared legislation in response to the task force's recommendations. Its Office of Safety Assurance submitted a legislative proposal to NHTSA's Office of Chief Counsel in October 1994, and the NHTSA Administrator approved the draft legislation for review by Transportation in May 1995. According to NHTSA, the draft legislative package contains two bills. One bill would provide (1) uniform definitions for categories of severely damaged passenger cars and their titles and (2) titling requirements for rebuilt salvage passenger vehicles. The other bill would remove the January 1996 implementation date and instead make the system contingent upon uniformity in state laws regarding the titling and control of severely damaged passenger vehicles. As of February 1, 1996, the bills were being reviewed by Transportation officials. Legislation (H.R. 2803, the Anti-Car Theft Improvements Act of 1995), introduced in December 1995 by the Chairman and the Ranking Minority Member of the House Judiciary Subcommittee on Crime and others, would, among other things, (1) transfer Transportation's responsibilities for the titling area to Justice, (2) extend the implementation date of the titling system from January 31, 1996, to October 1, 1997, and (3) provide immunity for those participants (e.g., system operators, insurers, and salvagers) who make good faith efforts to comply with the 1992 Act's titling requirements.
Potential Issues Affecting the 1992 Act's Implementation or Effectiveness

On the basis of discussions with NHTSA and AAMVA officials, the issues that may affect the 1992 Act's implementation or effectiveness are concerns about the size and scope of the pilot study, uniformity, funding for the states, responsibility for the titling system, and other factors, including states' willingness to participate and the complexity of the titling system. NHTSA officials said that the pilot study needs to develop information on the ability to establish and operate a national system. For example, NHTSA and AAMVA officials told us that the congressionally authorized pilot may demonstrate whether the titling system can be implemented without the uniformity recommended by the task force. However, NHTSA officials noted that the size and scope of the pilot study, which are to be determined by the number of participating states and system operators, could limit the amount of information the pilot will be able to provide. Therefore, the study may not enable NHTSA to identify or resolve all barriers or problems that would arise in creating and operating a national system. NHTSA said that it will have to ensure, to the best of its ability, that the lessons learned will enable it to develop a national system that meets the 1992 Act's requirements. NHTSA and AAMVA officials also stated that the pilot study could provide more information on other possible impediments to full implementation of the national title information system. According to NHTSA officials, the task force recommendations have not been implemented. NHTSA officials said that a national titling system should not be implemented until uniformity exists among the states. NHTSA added that the titling system would be inherently defective without uniformity in titling definitions and titling control procedures.
Also, according to NHTSA, uniform definitions and motor vehicle titling procedures need to be adopted by all states before a national titling system could function effectively. AAMVA and NICB, however, said that uniformity among the states is not necessary to implement the titling system. AAMVA officials said that a titling system can be 85 to 90 percent effective without uniform definitions and motor vehicle titling procedures. AAMVA also said that the existence of a titling system would itself cause states to implement uniform definitions and motor vehicle titling procedures. AAMVA officials added that they have experience dealing with systems containing nonuniform data, including the commercial driver's license information system upon which the pilot is to be based. NHTSA and AAMVA officials identified the lack of federal and state funding as an impediment to full implementation of the titling information system. The 1992 Act placed a $300,000 limit on the federal funds that could be granted to each state for start-up costs for the new titling system. H.R. 2803 would eliminate this limit and allow the Attorney General to make "necessary and reasonable" grants to the states that implement the system. However, according to NHTSA officials, no federal funds had been provided to the states for implementing the titling system. NHTSA added that federal resources for system development, start-up, and ongoing operations are harder to find each year. NHTSA officials told us that they are proceeding with the 1992 Act's implementation, even though the responsibility for the titling area may be transferred to Justice. However, they pointed out that the question of responsibility for the 1992 Act could be an emerging issue regarding its implementation. As of January 1996, neither Transportation nor Justice had adopted an official position on the transfer of responsibilities.
Other issues that may affect the 1992 Act's implementation or effectiveness are as follows:

Prosecution Immunity: NHTSA said concern has been raised outside Transportation about providing immunity to those individuals (e.g., system operators, insurers, and salvagers) acting in good faith to comply with the 1992 Act. H.R. 2803 could grant such immunity. AAMVA emphasized that the immunity language was intended for system operators, not participants such as salvagers, and told us that the need for immunity would not be an issue unless it affected a state's decision to participate in the system. NICB officials stated that immunity is needed for all participants in any activities related to the database.

Major Vehicle Damage Disclosure: Consumer groups may not support implementation of the titling system if the system, besides disclosing whether a vehicle had been previously junked or salvaged, does not identify vehicles that have sustained major damage. NHTSA said that the titling task force did not address this issue other than to note that further study was needed.

States' Participation: Presently, the 1992 Act does not mandate the participation of the states. In NHTSA's view, all states need to participate in the system to ensure the 1992 Act's effectiveness in preventing title fraud. NHTSA noted that the uniformity needs of the system would require many states to enact legislation at a time when they have strongly opposed federal "mandates" and "burdens." AAMVA officials said that they do not believe that states will need to pass new legislation to implement a titling system.

Technological Challenges: According to NHTSA officials, the system envisioned by the 1992 Act would be extraordinarily complex. They said that the technology required to implement a large-scale system that provides instantaneous responses to inquiries may take additional time or call for additional resources beyond those currently estimated.
AAMVA officials said they recognize the complexity of the system but said that, by modeling the pilot after the commercial driver's license information system, many potential concerns would be lessened. They said that the pilot will identify the requirements, technology, and costs necessary to process the anticipated larger volume of transactions of the national titling system in a timely manner. NICB officials pointed out that proven technology exists to develop and implement the system; in their view, the challenge is not technical but procedural and philosophical—i.e., states will need to establish policies and procedures to act on identified problems and correct them.

Marking Major Component Parts of Passenger Motor Vehicles

The Theft Act of 1984 identified the parts subject to marking and allowed NHTSA to identify others that were to be marked. NHTSA issued regulations on marking major original and replacement component parts of high-theft lines of passenger motor vehicles. NHTSA could exempt some lines from marking if the vehicles included antitheft devices that NHTSA determined were likely to be as effective as marking in deterring thefts. The 1992 Act broadened and extended the 1984 Act's marking provisions. Specifically, the 1992 Act broadened the definition of the types of passenger motor vehicles to be marked to include any multipurpose vehicle and light duty trucks rated at 6,000 pounds (gross vehicle weight) or less. It extended the marking requirement to designated vehicles, except for light duty trucks, regardless of their theft rate; the trucks, however, could be subject to marking if their major parts were interchangeable with those of high-theft passenger vehicles. No limit was placed on the number of parts that NHTSA could require to be marked, except that the marking costs are not to exceed $15 per vehicle (in 1984 dollars). According to an NHTSA official, local law enforcement officials look for markings when investigating stolen vehicles and parts.
The additional marking of passenger vehicles was to be done in two phases. By October 25, 1994, NHTSA was to issue regulations governing the marking for half of these additional passenger motor vehicles (excluding the light duty trucks), and by October 25, 1997, for the remaining additional vehicles. These regulations were to be issued provided the Attorney General did not determine that further marking would not be effective (i.e., would not substantially inhibit chop shop operations and motor vehicle thefts). Justice's National Institute of Justice will be responsible for conducting the required study upon which the Attorney General will make the determination concerning effectiveness. Like the earlier legislation, the 1992 Act also permitted exemptions from marking. The 1992 Act required a number of additional evaluations. NHTSA was required to report on theft rate-related issues and marking effectiveness by October 25, 1995, and October 25, 1997, respectively. (The 1984 Act contained similar reporting requirements for Transportation.) Furthermore, the Attorney General is to report by December 31, 1999, on the long-range effectiveness of parts marking and on the effectiveness of the antitheft devices permitted as alternatives to marking.

Implementation Status of the 1992 Act's Requirements

NHTSA issued the regulations for the first phase on December 13, 1994. With respect to the study that was due on October 25, 1995, NHTSA was preparing its report for public comment as of January 1996. According to an NHTSA official responsible for the marking requirements, the results will not be made public until about May or June 1996. According to the National Institute of Justice, it was to receive grant proposals to carry out its study on March 29, 1996. The Institute expects work to begin on this study in May 1996.
Potential Issues Affecting the 1992 Act's Effectiveness

A determination of the effectiveness of the marking of major components of passenger motor vehicles is not expected to be made until the Justice and Transportation reports are completed. However, on the basis of a study done in response to the 1984 Act's reporting requirements, NHTSA reported that it was unable to statistically prove that marking reduced motor vehicle thefts. NHTSA noted, however, that there was wide support for parts marking in the law enforcement community. Further, according to NHTSA and FBI officials, marking effectiveness could be adversely affected by confusion within the law enforcement community regarding which vehicles' parts are to be marked. This confusion could occur when law enforcement officials investigate stolen vehicles and parts, for example, at chop shops. The NHTSA official said that during discussions with some federal prosecutors, the prosecutors were not aware of the marking provisions. The official said that NHTSA will provide guidance when requested by law enforcement officials. NHTSA and FBI officials also noted that some of the markings for certain major component parts could be removed from the parts, thus preventing the parts from being checked against NSPMVIS. The NHTSA marking official told us that the manufacturer of the marking stickers involved had agreed to fix the problem.

National Stolen Passenger Motor Vehicle Information System

The 1992 Act required that by July 25, 1993, the Attorney General establish and maintain in NCIC an information system that was to contain vehicle identification numbers and other related data for stolen passenger motor vehicles and parts. If the Attorney General determined that NCIC was not able to perform the required functions, the 1992 Act permitted the Attorney General to enter into an agreement for the operation of the system separate from NCIC.
The Attorney General is to prescribe procedures for the NSPMVIS verification system, under which persons or entities intending to transfer vehicles or parts would check the system to determine whether the vehicle or part had been reported as stolen. These persons or entities include insurance carriers when transferring titles to junk or salvage vehicles and motor vehicle salvagers, dismantlers, recyclers, or repairers when selling, transferring, or installing a major part marked with an identification number. The 1992 Act also required the Attorney General to establish an advisory committee by December 24, 1992, which was to issue a report by April 25, 1993, with recommendations on developing and carrying out NSPMVIS. The effectiveness of this system may also be addressed in the NHTSA studies on parts marking that are to be completed by October 25, 1995, and October 25, 1997, respectively.

Implementation Status of the 1992 Act's Requirements

The Attorney General authorized NICB to operate NSPMVIS on January 18, 1995. The FBI said that the authorization was the result of the Attorney General's approval of the final report and recommendations of the NSPMVIS Federal Advisory Committee. (The advisory committee recommendations are detailed in app. II.) According to FBI officials, all of the advisory committee's recommendations, including system administration activities, system security, theft status determination, and visual sight checks, were addressed during the pilot study described below. However, according to the FBI, several of the recommendations cannot be implemented until regulations are developed to implement the system nationwide. According to the FBI, NICB received approval from the NCIC Advisory Policy Board in June 1993 to receive a copy of the NCIC vehicle file to establish the system.
According to the FBI, the resulting system became operational in June 1994, providing NICB with the capability to process vehicle identification numbers against the NCIC vehicle records. In March 1995, Justice established a 6-month pilot study in Texas to examine the concept and feasibility of implementing NSPMVIS nationwide. In July 1995, the pilot was extended another 6 months and expanded to include another state, Illinois. According to the FBI, the pilot study was completed in December 1995, and as of April 1, 1996, the FBI said that its report was to be issued by mid-to-late April 1996. FBI officials said that the pilot showed that the system is feasible but that many issues, such as funding, will have to be addressed. The FBI also said it will not proceed with implementing the system until further direction is provided by Congress.

Potential Issues Affecting the 1992 Act's Implementation or Effectiveness

On the basis of discussions with FBI officials and a review of the advisory committee report and FBI-provided information, a number of issues were identified regarding NSPMVIS. According to the FBI, these issues are related to the system's feasibility and effectiveness and will be addressed in its pilot study report. The FBI added that the response by law enforcement to NSPMVIS thefts is a state and local issue; it is impossible to predict the level of response because it is likely to vary on a case-by-case basis. However, there is no provision in the 1992 Act to fund NSPMVIS, including parts inspections, salvage vehicle inspections, or law enforcement participation and assistance. NICB officials stated that local law enforcement officials would need more resources to report stolen parts and follow up on possible thefts identified through NSPMVIS. Also, according to FBI officials, the implementation of NSPMVIS might have an adverse economic impact on insurance companies and smaller businesses involved in vehicle parts.
For example, insurance carriers would have to identify the vehicle identification number of each vehicle part that is disposed of. The FBI added that the insurance industry is concerned about the cost of inspecting parts. The insurance industry cooperated with the FBI throughout the pilot study and conducted parts inspections. However, the FBI stated that industry officials have said that it may be too time-consuming and costly for insurance adjusters to inspect vehicle identification numbers on all total-loss, high-theft vehicles. According to FBI officials, the parts inspections are a major concern to all of the affected industries because of the potential costs associated with the process. NICB officials stated that the pilot study should not be the basis for assuming that the entire insurance industry would not support a parts identification process. According to FBI and NICB officials, there is a need to provide immunity from prosecution to participants acting in good faith to comply with the NSPMVIS requirements. H.R. 2803 would grant such immunity. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Departments of Justice and Transportation, AAMVA, and NICB and make copies available to others upon request. The major contributors to this report are listed in appendix III. If you need additional information, please contact me on (202) 512-8777.

Recommendations of the Motor Vehicle Titling, Registration, and Salvage Task Force

The following information is based on the Final Report of the Motor Vehicle Titling, Registration, and Salvage Task Force, dated February 10, 1994. (1) Uniform Definitions: The task force recommended the enactment of federal legislation requiring that the following definitions be used nationwide to describe seriously damaged vehicles and that all states use these definitions.
Salvage Vehicle: Any vehicle that has been wrecked, destroyed, or damaged to the extent that the total estimated or actual cost to rebuild it exceeds 75 percent of the vehicle's retail value, as set forth in a nationally recognized compilation of retail values approved by Transportation.

Salvage Title: Issued by the state to the owner of a salvage vehicle. The title document will be conspicuously labeled with the word "salvage" across its front.

Rebuilt Salvage Title: Issued by the state to the owner of a vehicle that was previously issued a salvage title and that has passed the state's antitheft and safety inspections. The title document will be conspicuously labeled with the words "rebuilt salvage - inspections passed" across its front.

Nonrepairable Vehicle: A vehicle that is incapable of safe operation and has no resale value other than as a source for parts or scrap. Such a vehicle will be issued a nonrepairable vehicle certificate and shall never be titled or registered.

Nonrepairable Vehicle Certificate: Issued for a nonrepairable vehicle. The certificate will be conspicuously labeled with "nonrepairable" across its front.

Flood Vehicle: Any vehicle that has been submerged in water over the door sill. Any subsequent titles will carry the brand "flood."

(2) Titling and Control Methods: The task force recommended the enactment of federal legislation to require the following. If an insurance company is not involved in a damage settlement, the owner must apply for a salvage title or nonrepairable vehicle certificate; if an insurance company is involved, it must apply. State records shall be noted when a nonrepairable vehicle certificate is issued. When a vehicle has been flattened, baled, or shredded, the title or nonrepairable vehicle certificate is to be returned to the state; state records will show the destruction, and no further ownership transactions for the vehicle will be permitted. State records shall also be noted when a salvage title is issued.
The vehicle cannot be titled without a certificate of inspection. After a vehicle with a salvage title has passed antitheft and safety inspections, a decal will be affixed to the left front door, and a certificate will be issued indicating that the inspections were passed. The owner of a vehicle with a salvage title may obtain a rebuilt salvage title by presenting the salvage title and the certificate showing that the inspections were passed.

(3) Duplicate Title Issuance: The task force recommended that the states strengthen, and make uniform, controls on the issuance of duplicate titles as follows. If duplicate titles are issued over the counter, they will be issued only to the vehicle owner and only after proof of ownership and personal identification are presented. Applications for duplicate titles should be multipart forms with sworn statements as to the truth of their contents. When a power of attorney is involved, the duplicate title should be mailed to a street address, not to a post office box; also, states should consider mailing one part of the multipart application form to the owner of record. Fees are to be set to offset the costs of adopting these recommendations. Offenses in this area should carry felony criminal penalties. Duplicate titles should be conspicuously marked as duplicates.

(4) National Uniform Antitheft Inspection for Rebuilt Salvage Vehicles: The task force recommended the following specific steps. Requesters of inspections must provide a declaration of vehicle damages and replacement parts, supported by vehicle titles, etc. Component parts and/or vehicles that are unidentified or have an altered, defaced, or falsified vehicle identification number are to be treated as contraband and destroyed. Minimum selection and training standards are to be provided for certified inspectors employed by the states, and the inspectors should be afforded immunity when acting in good faith. The inspection program should be self-supported by fees.
(5) National Uniform Safety Inspection for Rebuilt Salvage Vehicles: The task force recommended the following. All states should institute a safety inspection for rebuilt salvage vehicles. (The task force recommended criteria that it said should be considered the minimum standards.) If inspections are contracted to a private enterprise, the entity must meet Transportation-established training and equipment standards. Vehicles are to be inspected and certified with respect to individual repairs and inspections, but not with respect to the states' obligation to license and audit the performance of the private enterprises chosen as licensees.

(6) Exportation of Vehicles: The task force recommended the following. No exportation without proof of ownership being provided to the U.S. Customs Service. Customs will provide vehicle identification numbers to the titling information system.

(7) Funding: The task force recommended that the federal, state, and local costs be funded from the following sources:
— federal appropriations and grants,
— state revenues and user fees,
— federally mandated fees, and
— money obtained from enforcement penalties and from the sale of seized contraband.

(8) Enforcement: The task force recommended the following. Investigative authority and sanctions should parallel those contained in Title IV of the Motor Vehicle Information and Cost Savings Act. A portion of federal highway funds should be withheld if a state does not comply with federal legislation implementing the task force's recommendations within 3 years after enactment.

Department of Transportation Position

Transportation agreed with all task force recommendations except the exportation recommendation (recommendation 6) and the highway fund sanctions recommendation (part of recommendation 8). It took no position on the exportation recommendation, saying that it was the responsibility of the U.S. Customs Service. Transportation opposed using the highway fund as an enforcement tool.
Recommendations of the Federal Advisory Committee on the National Stolen Auto Part Information System

The following information was excerpted from the Final Report of the National Stolen Auto Part Information System (NSAPIS) Federal Advisory Committee, dated November 10, 1994.

System Administration and Oversight

(1) The Committee recommends that the National Insurance Crime Bureau (NICB) serve as the System Administrator for NSAPIS, and that the Attorney General enter into an agreement with NICB, at no cost or a nominal cost to the government, for the operation of NSAPIS. The Committee believes that NICB possesses the necessary resources, skills, and infrastructure to successfully maintain and administer NSAPIS. (2) The Committee recommends that a written agreement be developed that clearly defines the role, responsibilities, and requirements for NICB as the NSAPIS Administrator. (3) The Committee recommends that Congress enact legislation establishing an Oversight Committee to work with NICB to develop and maintain NSAPIS, and that the NSAPIS Oversight Committee be formed immediately. In addition, the Committee recommended a list of pre- and post-implementation functions that the NSAPIS Oversight Committee should handle. (4) The Committee strongly recommends that the Oversight Committee have representation from all affected elements of the automobile industry, insurance industry, and law enforcement. Specific industries and organizations the Committee believes should have representation on the Oversight Committee include the NSAPIS Administrator, Justice, NHTSA, a Consumer Affairs Group, and two members each representing the Automobile Recycling Industry, Automobile Repair Industry, Automobile Insurance Industry, Law Enforcement Agencies, and Automobile Parts Rebuilders Industry. (5) The Committee recognizes that NICB may establish a Vehicle Parts History File.
The Committee said that tracking recycled parts data may deter the use of stolen auto parts in repairing vehicles. The information in NICB's Vehicle Parts History File would be supplied to law enforcement for investigative purposes. (6) The Committee recommends that any organization serving as the NSAPIS Administrator be prohibited from engaging in a parts locating service. The Committee wants to ensure that the NSAPIS Administrator does not compete with current parts locating services as a result of its NSAPIS association and activity. (7) The Committee recommends that the FBI, in conjunction with Transportation and affected associations, engage in a comprehensive training and awareness program to educate manufacturers, repairers, insurers, safety inspectors, and law enforcement officials on issues that affect the success of NSAPIS, such as parts marking regulations and enforcement tactics.

Law Enforcement and Notification

(1) The Committee recommends that, on stolen vehicle and vehicle part NSAPIS hits, NSAPIS provide automatic notification to the law enforcement agency having investigative jurisdiction over the locality in which the inquiring NSAPIS user is located. The notification should include a message to the law enforcement agency to "confirm the current theft status through NCIC and conduct a logical investigation." (2) The Committee recommends that, in the case of an NSAPIS hit, the following message be sent to the person attempting to sell, transfer, or install the vehicle part: "THE VEHICLE OR PART QUERIED HAS BEEN REPORTED STOLEN AND THE SALE, TRANSFER, OR INSTALLATION OF THIS VEHICLE OR PART MUST BE TERMINATED. YOUR LOCAL LAW ENFORCEMENT AGENCY HAS BEEN PROVIDED THE DETAILS OF THIS TRANSACTION." (3) The Committee recommends that, in the case where there is no NSAPIS hit, the person or organization attempting to sell, transfer, or install the vehicle or part receive an NSAPIS-generated authorization number.
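The committee's recommended hit/no-hit flow can be summarized in pseudocode form. The following is a hypothetical sketch only: the identification numbers, data structures, and function names are invented for illustration, and no detail of the actual NSAPIS or NCIC implementation is implied.

```python
# Hypothetical sketch of the advisory committee's recommended verification
# flow: a queried identification number either produces a "hit" (stolen
# report on file; the transaction must stop and law enforcement is notified)
# or a "no hit" (a system-generated authorization number is issued).
STOLEN_IDS = {"1XPWD40X1ED215307"}  # stand-in for the NCIC-derived vehicle file

HIT_MESSAGE = (
    "THE VEHICLE OR PART QUERIED HAS BEEN REPORTED STOLEN AND THE SALE, "
    "TRANSFER, OR INSTALLATION OF THIS VEHICLE OR PART MUST BE TERMINATED. "
    "YOUR LOCAL LAW ENFORCEMENT AGENCY HAS BEEN PROVIDED THE DETAILS OF "
    "THIS TRANSACTION."
)

_next_auth = 1000  # stand-in for the authorization-number generator


def check_identification_number(id_number: str) -> dict:
    """Return the committee-recommended response for one inquiry."""
    global _next_auth
    if id_number in STOLEN_IDS:
        # Hit: the seller gets the stop message; in the recommended design,
        # the jurisdictional law enforcement agency would also be notified.
        return {"hit": True, "message": HIT_MESSAGE}
    # No hit: issue a system-generated authorization number to the inquirer.
    _next_auth += 1
    return {"hit": False, "authorization_number": _next_auth}
```

The sketch shows only the decision logic the committee describes; automatic law enforcement notification, NCIC confirmation, and the timely-manner certificate fallback are omitted.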
System Security (1) The Committee recommends that NSAPIS, at minimum, meet the C2 level security requirements as stated in the Department of Defense Trusted Computer System Evaluation Criteria (DOD 5200.28-STD), commonly referred to as the Orange Book. Data Quality (1) The Committee recommends that manufacturers be encouraged to provide updated information to NICB, including component numbering sequences. (2) The Committee recommends that efforts be undertaken to further encourage law enforcement officials to dutifully report and verify data for the NCIC Vehicle file. (3) The Committee recommends that NSAPIS documentation include information that informs inquirers of what occurs following both a positive and a negative hit from NSAPIS. Salvage and Junk Vehicle Definition (1) The Committee suggests that any vehicle that sustains damage equal to or greater than 100 percent of its predamaged actual cash value be declared “unrepairable - parts only.” The NSAPIS Committee said that the number of motor vehicle thefts can be significantly reduced by eliminating the availability of salvage and junk vehicle identification numbers and related paperwork. Theft Status Determination and Verification (1) The Committee recommends that the theft status determination occur through an electronic verification process that provides an NSAPIS-generated authorization number to the inquirer. (2) The Committee recommends that the only exception to electronic verification be in those instances where NSAPIS cannot provide a response within a “timely manner.” (3) The Committee recommends that in those instances where NSAPIS cannot provide insurers a theft status verification in a timely manner, a certificate be provided to the insurer, or a contracting agent for the insurer, which allows for the sale or transfer of the vehicle or part. The certificate shall be generated by the NSAPIS Administrator. 
The Committee listed specific information that at a minimum should be contained on the certificate. (4) The Committee recommends that Congress enact legislation that would provide for limited immunity (e.g., persons or organizations authorized to receive or disseminate information from NSAPIS) to protect NSAPIS participants acting in good faith. Visual Sight Check and Verification (1) The NSAPIS Committee recommends that any person engaged in business as an insurance carrier shall, if such carrier obtains possession of and transfers a junk motor vehicle or a salvage motor vehicle (a) verify, after performing a visual sight check on all applicable major parts, whether any of those major parts are reported stolen. The applicable major parts are those parts that have been designated by NHTSA. (b) provide verification to whomever such carrier transfers or sells any such salvage or junk motor vehicle. (2) The Committee recommends that insurers be allowed to contract out the verification tasks, but the insurer must still be identified on the certificate, when necessary, to the purchaser. (3) The Committee recommends that all self-insured entities be required to perform vehicle and parts verifications in the same manner that insurance companies are required to do. (4) The Committee recommends that salvage and junk vehicles that are impounded and to be sold at government auction be verified through NSAPIS before any sale or transfer takes place. Major Contributors to This Report General Government Division, Washington, D.C. Office of the General Counsel, Washington, D.C. Accounting and Information Management Division, Washington, D.C. Nancy M. Donnellan, Information Systems Analyst
Pursuant to a congressional request, GAO provided information on the implementation of the Anti-Car Theft Act, focusing on the: (1) status of national information systems on motor vehicle titles and stolen passenger cars and parts; (2) marking of major component parts of passenger cars with identification numbers; and (3) issues that may impede the act's implementation.
GAO found that: (1) the Department of Justice (DOJ) and the Department of Transportation (DOT) have begun developing information systems and DOT has issued initial parts-marking regulations; (2) a DOT task force has made recommendations on the legislative and administrative actions needed to address problems in titling, registration, and controls over salvage to deter motor vehicle theft; (3) states need about $19 million in federal grants to implement their part of the titling system; (4) the National Highway Traffic Safety Administration has proposed legislation to implement the task force's recommendations; (5) issues affecting the implementation or effectiveness of the proposed titling information system include prosecution immunity, major vehicle damage disclosure, the system's complexity, and state participation, funding, and responsibility; (6) the association that DOJ authorized to set up the stolen vehicle and parts database and complete a pilot study on the database's concept and maintenance feasibility expects to begin studying parts-marking effectiveness in May 1996; and (7) potential barriers to the implementation or effectiveness of the act's parts marking provisions include state funding for the database, confusion over what vehicles and parts are to be marked, whether local law enforcement agencies have the resources necessary to follow up on identified stolen vehicles and parts, and the potential adverse economic impact on insurance companies and small businesses.
Background Demographic Shifts and Life Expectancy Because Americans are, on average, living longer and having fewer children, the average age of the population is rising and that trend is expected to continue. As of 2015, people age 65 and over accounted for 15 percent of the population, but by 2045 they are expected to comprise more than 20 percent of the population. Life expectancy is the average estimated number of years of life for a particular demographic or group of people at a given age. Life expectancy can be expressed in two different ways: (1) as the average number of years of life remaining for a group, or (2) as the average age at death for a group. Life span for a particular individual within a group may fall above or below this average. Researchers use a variety of statistical methods and assumptions in making their estimates, such as how longevity trends are expected to change in the future. Researchers also may use different data sources to develop life expectancy estimates. For example, some may use death data maintained by SSA, while others may use Centers for Disease Control and Prevention (CDC) mortality data or Census data. As noted, life expectancy can be estimated from different initial ages, such as from birth or from some older age. For a given population, the earlier the starting age, the greater the remaining years of life expectancy, but the lower the average age at death. This is because, when projected from birth, measures of life expectancy reflect the probability of death over one’s entire lifetime, including from childhood infectious diseases. In contrast, life expectancy calculated at older ages, such as age 65, generally predicts that individuals will live to an older age than when life expectancy is calculated at birth, since the averages for older persons do not include those who have died before that age. 
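The effect of the starting age on these averages can be seen in a toy calculation. The sketch below uses invented mortality rates, not SSA, CDC, or Census figures, to show that the average age at death measured from age 65 is higher than the average measured from birth, because the age-65 figure excludes everyone who died earlier.

```python
# Toy illustration: life expectancy computed at age 65 implies an older
# average age at death than life expectancy computed at birth, because
# the age-65 average excludes everyone who died earlier. The mortality
# rates below are invented and are not SSA, CDC, or Census figures.

def average_age_at_death(q, start_age):
    """Average age at death for people alive at start_age.

    q[a] is the probability of dying during year of age a, given
    survival to the start of that year. Deaths are assumed to occur
    mid-year.
    """
    alive = 1.0   # fraction of the start_age cohort still alive
    total = 0.0   # probability-weighted sum of ages at death
    for age in range(start_age, len(q)):
        deaths = alive * q[age]
        total += deaths * (age + 0.5)
        alive -= deaths
    return total  # alive reaches 0 by the table's end, so no renormalizing

# Invented mortality curve: modest childhood mortality, low rates through
# midlife, rising rates at older ages; no one survives past the table.
q = [0.01] * 5 + [0.001] * 60 + [0.02] * 20 + [0.10] * 20 + [1.0] * 5

at_birth = average_age_at_death(q, 0)
at_65 = average_age_at_death(q, 65)
print(f"average age at death, measured from birth:  {at_birth:.1f}")
print(f"average age at death, measured from age 65: {at_65:.1f}")
```

The childhood and midlife deaths pull the from-birth average down, while the age-65 average reflects only the remaining, older-skewed distribution of deaths.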
As a result, the average age that will be reached from birth will be lower than the average age that will be reached by those who have already reached age 65. Studies have found various factors associated with disparities in life expectancy. For example, women tend to live longer than men, although that gap has been getting smaller, according to SSA data. In addition, 65-year-old men could expect to live until age 79.7 in 1915, on average; in 2015, they could expect to live until age 86.1—an increase of about 6.4 years. Meanwhile, 65-year-old women could expect to live until age 83.7 in 1915, on average; in 2015, they could expect to live until age 88.7—an increase of about 5 years. Other factors that have been shown to be associated with differences in life expectancy include income, race, education, and geography. A recent study examined trends in life expectancy at the county level from 1985 to 2010 and found increasing disparities across counties over the 25-year period, especially in certain areas of the country. The lowest life expectancy for both men and women was found in the South, the Mississippi basin, West Virginia, Kentucky, and selected counties in the West and Midwest. In contrast, substantial improvements in life expectancy were found in multiple locations: parts of California, most of Nevada, Colorado, rural Minnesota, Iowa, parts of the Dakotas, some Northeastern states, and parts of Florida. The study found that while income, education, and economic inequality are likely important factors, they are not the only determinants of the increasing disparity across counties. Certain environmental factors, such as lack of access to health care, and behaviors such as smoking, poor diet, and lack of exercise, have also been shown to be associated with shorter life expectancy. The U.S. 
Retirement System In the United States, income in retirement may come from multiple sources, including (1) Social Security retirement benefits, (2) payments from employer-sponsored defined benefit (DB) plans, and (3) retirement savings accounts, including accounts in employer-sponsored defined contribution (DC) plans, such as 401(k) plans; and individual retirement accounts (IRA). Social Security Social Security pays retirement benefits to eligible individuals and family members such as their spouses and their survivors, as well as other benefits to eligible disabled workers and their families. According to SSA, in 2014 about 39 million retired workers received Social Security retirement benefits. Individuals are generally eligible to receive these benefits if they meet requirements for the amount of time they have worked in covered employment—i.e., jobs through which they have paid Social Security taxes. This includes jobs covering about 94 percent of U.S. workers in 2014, according to SSA. Social Security retirement benefits offer two features that offset some key risks people face in retirement: (1) they provide a monthly stream of payments that continue until death, so that there is no risk of outliving a person’s benefits; and (2) they are generally adjusted annually for cost-of-living increases, so there is less risk of inflation eroding the value of a person’s benefits. Social Security retirement benefits are based on a worker’s earnings history in covered employment. The formula for calculating monthly benefits is progressive, which means that Social Security replaces a higher percentage of monthly earnings for lower earners than for higher earners. As we reported in 2015, retired workers with relatively lower average career earnings receive monthly benefits that, on average, equal about half of what they made while working, whereas workers with relatively high career earnings receive benefits that equal about 30 percent of earnings.
In 2013, SSA reported that the program provided at least half of retirement income for 64 percent of beneficiaries age 65 or older in 2011 and that 35 percent of beneficiaries in this age range received 90 percent or more of their income from Social Security. For retired workers, Social Security pays full (unreduced) benefits at the full retirement age, which ranges from 65 to 67 depending on an individual’s birth year. Workers can claim Social Security retirement benefits as early as age 62, resulting in a reduced monthly benefit, or can delay claiming after they reach full retirement age, resulting in an increased monthly benefit until age 70 (i.e., no further increases are provided for delayed claiming after age 70). According to SSA documentation, the Social Security benefit formula adjusts the amount of monthly benefits to reflect the average remaining life expectancy at each claiming age. More specifically, benefits are adjusted up or down based on claiming age so that, on average, the actuarial present value of a beneficiary’s total lifetime benefits is about the same regardless of claiming age. For example, workers currently age 62 who would reach full retirement age at 66 would receive a monthly benefit about 25 percent lower if claiming early, at age 62, compared with the benefit that would be paid at their full retirement age. Those delaying claiming until age 70 would receive about 32 percent more per month than their full retirement age benefit, according to SSA. Despite higher monthly benefits for those who delay claiming, in 2014, age 62 was the most prevalent age to claim Social Security retirement benefits: about 37 percent of all retired worker benefits awarded that year were claimed at age 62. When workers die before reaching age 62, they may not receive any of the Social Security retirement benefits that they would have been entitled to receive had they lived longer.
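The claiming-age trade-off described above can be illustrated with a rough, undiscounted calculation. The adjustment factors below approximate the SSA figures cited in the text (about 25 percent less at 62 and about 32 percent more at 70, for a full retirement age of 66); the $1,000 monthly benefit and the ages at death are hypothetical.

```python
# Back-of-the-envelope sketch of the claiming-age trade-off for a worker
# whose full retirement age (FRA) is 66. Factors approximate the SSA
# figures cited in the text; the $1,000 FRA monthly benefit and the ages
# at death are hypothetical.

FACTORS = {62: 0.75, 66: 1.00, 70: 1.32}
FRA_MONTHLY = 1000.0

def lifetime_benefits(claim_age, age_at_death):
    """Total undiscounted benefits received from claiming until death."""
    months = max(0, age_at_death - claim_age) * 12
    return FACTORS[claim_age] * FRA_MONTHLY * months

for death_age in (78, 84, 90):
    totals = {age: round(lifetime_benefits(age, death_age))
              for age in (62, 66, 70)}
    print(f"dies at {death_age}: {totals}")
# Shorter lives favor early claiming; longer lives favor delay. Averaged
# over the population the adjustments are designed to be roughly
# actuarially neutral, but for any one person the claiming choice and
# the actual life span matter a great deal.
```

This is the mechanism behind the actuarial-neutrality point in the text: the adjustments roughly balance out on average, even though individual outcomes diverge sharply depending on how long a beneficiary lives.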
In cases where a worker dies before or during retirement, there are survivors benefits that provide widows and widowers up to 100 percent of the deceased spouse’s benefit. Defined Benefit Plans Defined benefit (DB) plans are generally tax-advantaged retirement plans that typically provide a specified monthly benefit at retirement, known as an annuity, for the lifetime of the retiree. Qualified private sector DB plans may be single-employer or multiemployer plans. Single-employer plans make up the majority of private sector DB plans (about 94 percent) and cover the majority of private sector DB participants (75 percent of about 41.2 million workers and retirees in 2014). The amount of the annuity provided by a DB plan is determined according to a formula specified by the plan, and is typically based on factors such as salary, years of service, and age at retirement. Plan sponsors generally bear the risks associated with investing the plan’s assets and ensuring that sufficient funds are available to pay the benefits to plan participants as they come due. As indicated in figure 1, over the past several decades employment-based retirement plan coverage, especially in the private sector, has shifted away from DB plans to defined contribution (DC) plans, which generally require participants to bear the risks of managing their assets. Retirement savings accounts can provide individuals with a tax-advantaged way to save for retirement, but, unlike DB plans, they generally require individuals to manage their own assets. There are two primary types of retirement savings vehicles: employer-sponsored DC plans, such as 401(k)s, and individual retirement accounts (IRA). DC plans’ benefits are based on contributions made by workers (and sometimes by their employers) and the performance of the investments in participants’ individual accounts.
Workers are generally responsible for determining their contribution rate, managing their savings and investments, and deciding how to draw down their assets after retirement. There are also tax-advantaged retirement savings accounts that are not employer-sponsored, such as traditional IRAs and Roth IRAs. Eligible individuals may make contributions to traditional IRAs with pre-tax earnings and any savings in traditional IRAs are tax-deferred—that is, taxed at the time of distribution. Eligible individuals’ contributions to Roth IRAs are made with after-tax earnings and are generally not taxed at the time of distribution. Individuals may choose to roll over their employer-sponsored DC plans into an IRA when they leave employment. Increasing Life Expectancy Adds to Challenges for Retirement Planning The projected continuing increase in life expectancy for both men and women in the United States contributes to longevity risk in retirement planning. For the Social Security program and employer-sponsored defined benefit plans, longevity risk is the risk that the program or plan assets may not be sufficient to meet obligations over their beneficiaries’ lifetimes. For individuals, longevity risk is the risk that they may outlive any retirement savings they are responsible for managing, such as in a DC plan. Increasing Life Expectancy Adds to Challenges for Social Security Increasing life expectancy adds to the long-term financial challenges facing Social Security by contributing to the growing gap between annual program costs and revenues. Although life expectancy is only one factor contributing to this gap, as individuals live longer, on average, each year there are more individuals receiving benefits, adding to the upward pressure on program costs.
According to the 2015 report from the Board of Trustees of the Federal Old-Age and Survivors Insurance (OASI) and Federal Disability Insurance (DI) Trust Funds, the Social Security OASI trust fund is projected to have sufficient funds to pay all promised benefits for nearly two decades, but continues to face long-term financial challenges. In 2010, program costs for the combined OASI and DI trust funds began exceeding non-interest revenues and are projected to continue to do so into the future (see fig. 2). The 2015 Trustees Report projected that the OASI trust fund would be depleted in 2035, at which point continuing revenue would be sufficient to cover 77 percent of scheduled benefits. To help address the long-term financial challenges facing the Social Security retirement program, various changes have been made over the years. For example, the Social Security Amendments of 1983 established a phased-in increase in the full retirement age, gradually raising it from age 65 (for workers born in 1937 or earlier) to age 67 (for workers born in 1960 and later). Also, challenges facing the DI trust fund have affected OASI. For example, in late 2015, Congress passed a law that reallocates some tax revenue from the OASI trust fund to the DI trust fund, thus delaying benefit reductions to DI beneficiaries that were projected to occur in 2016 until 2022. In addition, a wide range of options to adjust Social Security further have been proposed. To illustrate this range of options, table 1 provides examples from among the options and summarizes their effect according to SSA’s Office of the Chief Actuary (OCACT). Some options would reduce benefit costs, such as by making adjustments to the retirement age. Other options would increase revenues, such as by making adjustments to payroll tax contributions. 
The table shows the most recent OCACT analysis, which is based on the intermediate assumptions of the 2015 Trustees Report and reflects the impact on both the OASI and DI trust funds combined over the next 75 years. The trustees estimate that, using intermediate projections, the shortfall toward the end of their 75-year projections would reach 4.65 percent of taxable payroll for 2089. The options are based on proposals introduced in Congress or suggested by experts, but are not exhaustive. Each has advantages and disadvantages, and GAO is not recommending or endorsing the adoption of any of the specific options presented. Increasing Life Expectancy Adds to Challenges for Defined Benefit Plans Although there are many factors at play in the decline of defined benefit (DB) plans, increasing life expectancy adds to the challenges these plans face by increasing the financial obligations needed to make promised payments for their beneficiaries’ lifetimes. For example, plan sponsors and industry experts estimate that the Society of Actuaries’ 2014 revised mortality tables, if adopted for DB plans, would increase plan obligations by 3.4 to 10 percent, depending on the characteristics of a plan’s participants. As of 2012, more than 85 percent of single-employer DB plans were underfunded by a total of more than $800 billion, according to the most recent data available from the Pension Benefit Guaranty Corporation (PBGC). DB plan sponsors have increasingly been taking steps, known as “de-risking,” to either reduce risk or shift risk away from sponsors, often to participants. De-risking can be classified as internal or external.
Internal de-risking approaches include reducing risk by (1) shifting plan assets into safer investments that better match certain characteristics of a plan’s benefit liabilities, and (2) restricting growth in the size of the plan by restricting future plan participation or benefit accruals, such as by “freezing” the plan (variations of which include closing the plan to newly hired workers or eliminating the additional accrual of benefits by those already participating in the plan). In 2012, more than 40 percent of single-employer DB plans were frozen in some form, according to the most recent data available from PBGC, and many frozen plans are ultimately terminated, which can shift the risk of ensuring an adequate lifetime retirement income to individuals, as discussed below. External de-risking involves closing the plan completely (referred to as terminating the plan) or reducing the size of the plan by transferring a portion of plan liabilities, plan assets, and their associated risk to external parties—typically either to participants or to an insurance company. For example, an employer may terminate its DB plan, if it can fund all of the benefits owed through the purchase of a group annuity contract from an insurance company (sometimes called a “group annuity buy-out”). Short of termination, an employer can also transfer a portion of plan assets and liabilities to an insurance company for a certain group of plan participants, such as former employees with vested benefits. Alternatively, an employer may, under certain circumstances, terminate its DB plan by paying all the benefits owed in another form, such as by providing a lump sum to each participant or beneficiary of the plan, if the plan permits. The employer could also opt to make a lump sum buy-out offer only to certain plan participants.
When such an offer is made, plan participants have a specified amount of time, known as the lump sum “window,” to choose between keeping their lifetime annuity or taking a lump sum. Participants who accept the lump sum assume all of the risk of managing the funds for the remainder of their lives. Increasing Life Expectancy Adds to Challenges for Individuals’ Retirement Planning A key reason that individuals face challenges in planning for retirement is that many people do not understand their life expectancy, the number of years they will likely spend in retirement, or the amount they should save to support their retirement. For example, a survey conducted by the Society of Actuaries showed that there is a greater tendency for retired respondents to underestimate rather than overestimate their life expectancy. In addition, many individuals will live beyond their life expectancy in any case, since it is an average. Further, as we reported in 2015, older workers tend to retire sooner than they expected. Coupled with increasing life expectancy, this means they will likely spend more years in retirement than anticipated. In 2015, more than a third of workers surveyed by the Employee Benefit Research Institute reported that they expected to retire at age 66 or later and an additional 10 percent expected to never retire; however, only 14 percent of current retirees reported that they retired after age 65. Similarly, 9 percent of workers said they expected to retire before age 60, while 36 percent of current retirees reported they retired earlier. The median age of retirement reported was age 62. Additionally, only 48 percent of those surveyed had calculated how much in savings they would need for retirement. Beyond underestimating life expectancy, individuals preparing for retirement face a number of additional challenges in accumulating retirement savings sufficient to sustain them for their lifetime. 
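A hypothetical drawdown calculation illustrates the longevity risk borne by someone who takes a lump sum, or who holds only a DC account balance, instead of a lifetime annuity. The $200,000 balance, $15,000 annual withdrawal, and return rates below are invented for illustration only.

```python
# Hypothetical sketch of lump-sum / DC-account longevity risk. All
# numbers (balance, withdrawal amount, return rates) are invented.

def years_until_exhausted(balance, annual_withdrawal, annual_return):
    """Years the balance supports withdrawals at a steady annual return."""
    years = 0
    while balance > 0 and years < 120:   # cap guards against non-depletion
        balance = balance * (1 + annual_return) - annual_withdrawal
        years += 1
    return years

LUMP_SUM = 200_000.0    # hypothetical buy-out amount taken at age 65
WITHDRAWAL = 15_000.0   # hypothetical annual income the annuity would have paid

for rate in (0.02, 0.04, 0.06):
    years = years_until_exhausted(LUMP_SUM, WITHDRAWAL, rate)
    print(f"{rate:.0%} return: funds last about {years} years "
          f"(to roughly age {65 + years})")
# A lifetime annuity would have paid the same amount for life regardless
# of life span; with the lump sum, a long life or weak returns can leave
# the retiree with nothing but Social Security.
```

The sketch shows why drawdown decisions are sensitive to both investment returns and life span, the two risks that annuities and DB plans otherwise absorb on the individual's behalf.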
In previous work we found that many households near or in retirement have little or no retirement savings (see table 2). Nearly 30 percent of households headed by individuals age 55 and older have neither retirement savings nor a DB plan. About half of private sector employees do not participate in any employer-sponsored retirement plan. In previous work, we found that among those not participating, 84 percent reported that their employer did not offer a plan or they were not eligible for the program their employer offered. Those that do participate in employer-sponsored retirement plans are increasingly offered access only to DC plans, which—unlike DB plans—do not typically provide a guaranteed monthly benefit for life. For many of these participants, the level of savings accumulated in their DC retirement accounts at the time they leave the workforce will not be sufficient to sustain their retirement. Moreover, employer-sponsored DC plans typically offer only an account balance at retirement, leaving participants to identify longevity risks and manage how they will draw down their funds over the course of their retirement. To help address individuals’ difficulty in estimating their life expectancy and the resources needed to avoid outliving their savings, the federal government, plan sponsors, and others have developed certain tools to aid with retirement planning. For example, benefits calculators assist participants in translating their savings into potential annual retirement income. One such calculator, available on the U.S. Department of Labor’s website, assumes survival to age 95, which is beyond the average life expectancy for individuals currently age 65. To encourage saving among those who lack access to employer-sponsored plans, in November 2015, myRA, a federal government-managed retirement savings program, was opened to individuals below a certain income threshold.
Also, as we reported in 2015, a number of states are exploring strategies to expand private sector coverage for people who otherwise do not have access to a plan. In addition, for those with employer-sponsored plans, the Pension Protection Act of 2006 included provisions that made it easier for certain DC plan sponsors to implement automatic enrollment and automatic escalation so that workers can be defaulted into plan participation with rising contributions over time. Default investment arrangements, including target date funds which invest according to length of time until retirement, can also help participants to maintain a balanced investment portfolio with a level of risk that is appropriate to their retirement dates. Moreover, to provide greater assurance that individuals with DC plans will not outlive their savings, some plan sponsors are adding an annuity option at retirement. In addition, Internal Revenue Service (IRS) regulations that went into effect in July 2014 allow for a Qualified Longevity Annuity Contract whereby participants in 401(k) and other qualified DC plans and traditional IRAs may use a portion of their accounts to purchase annuities that begin payout no later than age 85. In sum, despite the efforts by the federal government, plan sponsors, and others to encourage greater retirement savings, many individuals may not be adequately prepared for retirement. The trend toward increasing life expectancy may mean that more individuals outlive their savings, with only their Social Security benefits to rely on. Life Expectancy Disparities Negatively Affect Retirement Resources for Lower-Income Groups Lower-income individuals have shorter-than-average life expectancy, which means that they can expect to receive Social Security retirement benefits for substantially fewer years than higher-income individuals who have longer-than-average life expectancy.
As a result, when disparities in life expectancy are taken into account, our analysis indicates that, on average, projected lifetime Social Security retirement benefits are reduced for lower-income individuals but are increased for higher-income individuals, relative to what they would have received if they lived the average life expectancy for their cohort. Also, our analysis indicates that one frequently suggested change to address Social Security’s financial challenges, raising the retirement age, would further reduce projected lifetime benefits for lower-income groups proportionally more than for higher-income groups. Lower-Income Groups’ Life Expectancy Has Not Increased as Much as Higher-Income Groups’ Life Expectancy People with lower incomes can expect to live substantially fewer years as they approach retirement than those with higher incomes, on average, according to studies we identified and reviewed. For example, these studies estimate that lower-income men approaching retirement live between 3.6 and 12.7 fewer years than those in higher-income groups, on average, depending on birth year and other factors such as whether income groups were calculated by top or bottom half, quartile, quintile, or decile (see table 3). Similarly, studies we reviewed found that lower-income women also live fewer years than higher-income women, on average, with the differences ranging more widely, from 1.5 years to 13.6 years. However, there are factors that make projecting life expectancy for women by income more difficult than for men. It is not unexpected for life expectancy estimates to vary as they depend, among other things, on the particular data sources, populations, and age ranges analyzed. While the studies we reviewed found a range of life expectancy differences by income, each of them finds that disparities exist. Moreover, disparities in life expectancy by income have grown, according to the studies that examined trends over time (see table 3).
Specifically, all of the six studies we reviewed that examined trends over time found growth in life expectancy differences, ranging from 0.9 to 7.6 years for men, depending on the age, birth years, and measure of income used. For example, a 2007 study by SSA’s Hilary Waldron found that for men age 65 who were born in 1912, there was only a 0.7 year difference in expected years of life remaining between top and bottom earners, but for those born in 1941, the expected difference grew to 5.3 years (see fig. 3). Similarly, for women, the studies we reviewed found differences in life expectancy by income were greater in more recent years, and the range in years was wider than for men. This is perhaps unsurprising, as some analysts have noted that disparities in household income also increased over time. According to a 2014 Congressional Budget Office (CBO) report, between 1979 and 2011, average real after-tax earnings for the top one percent of households grew about four times as fast as those in the lowest fifth. While higher-income groups have experienced significant growth in their life expectancy at older ages, lower-income groups have either experienced less growth or declines in recent decades, according to studies we reviewed. For example, Waldron’s 2007 study projected that 65-year-old men born in 1941 with below-median earnings would live 1.3 years longer than their counterparts born in 1912, while 65-year-old men born in 1941 with above-median earnings would live 6 years longer than their counterparts born in 1912. Some other studies estimate that life expectancy declined for those in the bottom of the income distribution. For instance, a 2015 study by the National Academy of Sciences found that life expectancy at age 50 has declined for both men and women in the bottom income quintile. Specifically, men and women in the bottom income quintile saw life expectancy decreases of 0.5 and 4 years, respectively, when comparing the 1930 and 1960 cohorts. 
While the studies we reviewed do not all agree about whether life expectancy is decreasing or increasing slightly for the lowest earners, they all agree that the higher-income groups are gaining more years than the lower-income groups. Some studies show that there are disparities in life expectancy by other characteristics that have been linked with income, such as race and education. For example, the CDC reported that the life expectancy for 65-year-old black individuals was 1.2 years less than for their white counterparts in 2013. Other studies have also examined links with education and found that individuals with a high school degree or less tend to have shorter lives than those with a college education. However, because the primary focus of our analysis was on life expectancy for adults approaching retirement by income group, we did not conduct a complete review of the studies related to other characteristics.

Differences in Life Expectancy Result in Reduced Projected Lifetime Social Security Benefits for Lower-Income Groups

Lower-income individuals generally rely on Social Security as their primary source of retirement income, so their retirement security is affected most by how that program is structured. We found that when differences in life expectancy by income are factored in, the amount of projected lifetime benefits received by lower-income individuals is reduced, while the amount of projected lifetime benefits received by higher-income individuals is increased. As a result, although the formula for calculating monthly Social Security retirement benefits is progressive—replacing a greater percentage of a lower-income than a higher-income worker’s pre-retirement income on a monthly basis—differential life expectancy reduces the progressivity of Social Security benefits received over a lifetime.
Lower-Income Groups Rely Primarily on Social Security

Social Security is the largest determinant of lower-income individuals’ retirement security because, for most such individuals, it is the main source of their retirement income. In a previous report, we analyzed data from the 2013 Survey of Consumer Finances and estimated that 86 percent of recent retiree households in the lowest income quintile rely on Social Security for the majority of their income. About half of recent retiree households in the lowest income quintile rely on Social Security for more than 90 percent of their income. Overall, in that report we found that those with lower incomes have more limited resources for retirement aside from Social Security. Recent retiree households in the lowest income quintile are much less likely to have retirement savings and DB plans than those in higher quintiles. Specifically, as we previously found, only 9 percent have any retirement savings (compared to 84 percent in the top income quintile) and 19 percent have a DB plan (compared to 65 percent in the top income quintile), which typically provides a monthly stream of retirement income for life. Households without retirement savings have few other resources, we found, which puts them at a high risk of outliving their non-Social Security resources. One reason for the lack of retirement savings among lower-income individuals is their lack of access to employer-sponsored retirement savings plans. As we reported in 2015, coverage by and participation in workplace retirement savings programs are also lower among lower-income workers. Specifically, workers in the lowest income quartile were nearly four times less likely than workers in the highest income quartile to work for an employer that offers a retirement savings program, after controlling for other factors.
Similarly, we found that approximately 14 percent of workers in the lowest income quartile participated in a workplace retirement savings program compared to 76 percent of those in the highest income quartile.

Shorter Life Expectancy Results in Lower Projected Lifetime Benefits

According to our analysis, shorter-than-average life expectancy for lower-income individuals results in a projected reduction in lifetime Social Security benefits received. We calculated the projected lifetime Social Security benefits that would be received for men in various hypothetical scenarios to illustrate the effect of lower-than-average life expectancy on lower-income groups (which we defined as those with individual annual incomes at the 25th percentile, or about $20,000, according to Census data). Our analysis indicates that, on average, the projected lifetime benefits for these lower-income individuals would be reduced by as much as 11 to 14 percent due to their shorter-than-average life expectancy (or “differential” life expectancy) when compared to what they would receive if they had an average life expectancy (see fig. 4). For example, our calculations show that for a hypothetical 62-year-old man in the lower-income group:

If he were to claim Social Security benefits now, and live to age 83 (the average life expectancy for men age 62 in the United States), he would receive an estimated $156,000 over his lifetime, or about 7.8 times his current income.

If he were to claim Social Security benefits now, and live to age 80 (the differential life expectancy for 62-year-old men in his income group), he would receive an estimated $138,000 over his lifetime, or about 6.9 times his current income, a reduction of 11 percent.
However, if a man in the lower-income group delayed claiming Social Security until age 70, the maximum age that will result in increased monthly benefits, and then lived until age 83 (the differential life expectancy for 70-year-old men in his income group), he would receive an estimated $185,000, or a 14 percent reduction in his lifetime benefits when compared to what he would receive if he lived until age 85 (the average life expectancy for 70-year-old men).

We also calculated the projected lifetime Social Security benefits that would be received for men in the same hypothetical scenarios to illustrate the effects of differential life expectancy on higher-income groups (which we defined as those with individual annual incomes at the 75th percentile, or about $80,000, according to Census data). In contrast to lower-income individuals, higher-than-average life expectancy for higher-income individuals results in an increase in lifetime Social Security benefits received when compared to average life expectancy—as much as 16 to 18 percent (see fig. 5). For example, our calculations show that for a hypothetical 62-year-old man in the higher-income group:

If he were to claim Social Security benefits now and live to age 83 (the average life expectancy for men age 62 in the United States), he would receive an estimated $355,000 over his lifetime, or about 4.4 times his current income.

If he were to claim Social Security benefits now and live to age 86 (the differential life expectancy for 62-year-old men in his income group), he would receive an estimated $411,000 over his lifetime, or about 5.1 times his current income, an increase of 16 percent.
However, if a man in the higher-income group delayed claiming Social Security until age 70, the maximum age that will result in increased monthly benefits, and then lived until age 88 (the differential life expectancy for 70-year-old men in his income group), he would receive an estimated $595,000, or an 18 percent increase in his lifetime benefits when compared to what he would receive if he lived until age 85 (the average life expectancy for 70-year-old men).

Rather than claiming benefits when first eligible at age 62, it is often beneficial for individuals to delay claiming Social Security benefits because it results in larger monthly benefits. However, lower-than-average life expectancy may reduce the value of delayed claiming of benefits for lower-income individuals. For example, in our scenarios, shorter life expectancy reduces the added lifetime benefit of delaying claiming until age 70 (compared to early claiming at age 62) by nearly two-thirds of one year’s earnings for a lower-income man. In addition, it may be more difficult for a low-income individual to delay claiming, for example, after a job loss or a depletion of retirement savings. While many factors may influence someone’s decision about when to claim benefits—such as when a spouse claims—deciding when to claim benefits may be particularly important for women, who tend to have lower earnings but longer lives than men.

Lower Projected Lifetime Benefits Result in Reduced Progressivity

Social Security’s formula for calculating monthly benefits is progressive—that is, it provides a proportionally larger monthly earnings replacement for lower-earners than for higher-earners. However, our analysis of SSA data indicates that life expectancy differences reduce the size of this progressivity over a beneficiary’s lifetime (see fig. 6).
Specifically, differential life expectancy results in reduced projected lifetime benefits for lower-income groups and increased projected lifetime benefits for higher-income groups, relative to average life expectancy, thereby decreasing the lifetime progressivity of the program. Moreover, studies we reviewed suggest the gap in life expectancy has grown. If the gap continues to grow, the progressivity in Social Security’s lifetime benefits will likely continue to decrease. Although present value adjustments are an important economic tool to account for the time value of money, we chose to use unadjusted figures in our scenarios for several reasons, but primarily because of our focus on the effects of differential life expectancy, including the importance of benefits at older ages. Our analysis shows that it is at these older ages when life expectancy differences predict that some income groups will receive, on average, more or fewer years of benefits. However, in appendix II, we also provide calculations with adjustments for present value. These present value adjusted figures are consistent with our basic findings—that differential life expectancy reduces the lifetime progressivity of Social Security retirement benefits—though the magnitude of the reduction in progressivity is somewhat smaller because the adjustments discount the value of money received in the future. Six studies we reviewed also examined the impact of life expectancy differences by income group, and they also generally found that differences in life expectancy by income erode the lifetime progressivity of Social Security benefits. For example, the 2015 National Academy of Sciences study found that the lifetime retirement benefits advantage of the top income quintile over the bottom income quintile had grown by $70,000 (on a present value basis) because of increases in life expectancy differences between 1930 and 1960. 
Moreover, when considering lifetime benefits from additional government programs, the study found that the change in life expectancy has made these programs less progressive. Another study, conducted for the National Bureau of Economic Research in 2011, indicated that when differences in life expectancy are taken into account, Social Security retirement benefits may have become regressive for some groups. For example, the study found that men in the 75th income percentile earned a higher rate of return from Social Security (based on benefits received compared to taxes paid) than did men in the 25th income percentile.

Raising the Retirement Age May More Negatively Affect Lower-Income Groups

One frequently cited option to address increasing average life expectancy and Social Security’s long-term financial challenges is increasing the early and full retirement ages. While other options exist, such as changing payroll tax contributions or the structure of benefits, raising the retirement age can be considered a direct response to increasing life expectancy. We adjusted our hypothetical scenario calculations to illustrate the effect of increasing these retirement ages and found that taking such action could more negatively affect lower-income individuals because of their shorter life expectancy. Specifically, we calculated the effect of increasing all retirement ages by 2 years and found that the overall projected lifetime benefit is reduced more for lower-income men than for higher-income men, given their different life expectancies. For example, compared to the amount of benefits received under current program requirements, if retirement ages were increased by 2 years:

A man in the lower-income group retiring at the increased full retirement age would receive lifetime benefits that are reduced by the equivalent of over two-thirds of his current annual income, assuming he lived to the average age expected for his income group.
A man in the higher-income group retiring at the increased full retirement age would receive lifetime benefits that are reduced by the equivalent of nearly half of his current annual income, assuming he lived to the average age expected for his income group.

A 2014 CBO study also examined the effect of raising the full and early retirement ages and found that it would reduce lifetime benefits more for lower-income groups than for higher-income groups, relative to payroll taxes paid. While the CBO study found that raising the early and full retirement ages together resulted in a slight benefit decrease both for lower- and higher-income individuals, it also found that, if life expectancy disparities continue to increase, raising the retirement ages would lead to larger declines in lifetime benefits for lower-income individuals than for higher-income individuals, relative to the Social Security taxes they pay. The 2015 National Academy of Sciences study similarly found that raising the full retirement age would lead to proportionately lower lifetime benefits for lower-income groups because of life expectancy differences. Specifically, the study found that raising the full retirement age to 70 reduced lifetime benefits for men with income in the bottom quintile by 25 percent, while reducing lifetime benefits for men with income in the top quintile by 20 percent. The National Academy of Sciences study further reported that raising the early retirement age together with the full retirement age would lead to similar results (a larger benefit decrease for lower-income groups than for higher-income groups). While researchers sometimes suggest that workers could adjust to an increased retirement age by working longer, our prior work has shown that this may not be feasible for many low-income workers.
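The direction of the scenario results above can be sketched with a simplified calculation. This is not GAO's method (the report's figures come from SSA's quick calculator), and the monthly benefit amounts below are assumed, illustrative values; the sketch only shows why a fixed two-year increase in the retirement age costs a lower earner a larger share of his income.

```python
def lifetime_reduction(monthly_benefit, years_raised=2):
    """Benefits forgone if the retirement age rises by `years_raised`
    while the monthly benefit and death age stay fixed: the worker
    simply misses that many months of payments (flat-payment sketch)."""
    return monthly_benefit * 12 * years_raised

# Assumed, illustrative monthly benefits at the full retirement age.
# Because Social Security's progressive formula replaces a larger share
# of a lower earner's income, the same number of missed months is a
# bigger multiple of his annual income.
lower_monthly, lower_income = 820, 20_000
higher_monthly, higher_income = 2_200, 80_000

lower_share = lifetime_reduction(lower_monthly) / lower_income
higher_share = lifetime_reduction(higher_monthly) / higher_income
assert lower_share > higher_share  # larger proportional hit for the lower earner
```

Under these assumed figures the lower earner's forgone benefits are roughly a year's income while the higher earner's are about two-thirds of a year's income; the report's two-thirds versus nearly-half comparison reflects the quick calculator's more detailed benefit computation.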
In a 2014 report, we concluded that people who claim Social Security benefits early, such as those with physically demanding blue-collar jobs, may have done so because they faced challenges continuing to work at older ages. Similarly, in a 2010 report, we noted that many older workers could face health or physical challenges that would prevent them from working longer. For example, in that report we found that the workers who report more difficulty working longer and postponing retirement due to work-limiting health conditions tend to have less education and lower household income than those who do not report health limitations. For these individuals, raising the early or full retirement age could erode an important safety net. Some policies have been proposed to mitigate the potential adverse effects of raising the early or full retirement age on those with lower incomes and shorter life expectancies. For example, some researchers have suggested making early or full retirement ages lower for those with lower lifetime earnings, though others have suggested that this may be difficult to implement. Some experts we spoke with also suggested that eligible, lower-income individuals could receive Disability Insurance to bridge the gap created by raising the early retirement age, though the Disability Insurance program is also under financial pressure. As we reported in 2009, concerns about vulnerable populations have led to proposals to restructure Social Security benefits to help these groups. For example, we reported on proposals to guarantee a minimum benefit, supplement benefits for low-income single workers, or increase survivors benefits. Another identified proposal would provide an additional Social Security benefit to those over the age of 80 or 85, which may be particularly helpful for low-income women. These proposals could have a negative effect on the projected long-term solvency of Social Security, although compensating revisions could help moderate costs.
In sum, proposals to address Social Security’s financial challenges may affect different groups differently. Lower-income groups, in particular, may be more adversely affected by certain proposed changes because they are more reliant on Social Security retirement benefits and because they have shorter-than-average life expectancy. It is important that any proposals to change the Social Security program take into account how disparities in life expectancy affect the total benefits received by different groups over their lifetimes.

Agency Comments

We provided a draft of this report to the U.S. Department of Labor, the U.S. Department of the Treasury, the Internal Revenue Service, and the Social Security Administration for their review and comment. SSA provided comments, reproduced in appendix IV, agreeing with our finding that it is important to understand how the life expectancy in different income groups may affect retirement income. SSA also provided technical comments, as did each of the other agencies, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretary of Labor, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, the Acting Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: List of Selected Studies on Life Expectancy Differences, by Income

To examine the effect of life expectancy on the retirement resources for different groups, especially those with low incomes, we analyzed 11 studies that estimated life expectancy or mortality for different income groups and 9 studies that described the effect of these differences on Social Security retirement benefits (and, in some cases, on Social Security disability benefits). These studies are listed in table 4 below. We selected these studies based on our review of longevity studies identified through expert referral and an Internet search, focusing on those that were published in the past 10 years and that included an analysis of effects by income groups. We limited our review to those that were published by government agencies, research organizations, or other scholarly publications, used data from accepted sources (such as the University of Michigan Health and Retirement Study or SSA administrative data), and had findings we determined were valid for our purposes.

Appendix II: Scenario Calculation Methodology and Additional Examples

To examine the effect of life expectancy on the retirement resources for different groups, especially those with low incomes, we developed scenarios to illustrate how disparities in average life expectancy by income group affect the average amount of lifetime Social Security retirement benefits received. While our report discusses various forms of retirement resources, for our scenarios we compare only projected lifetime Social Security benefits against current income. We do not factor in other resources that an individual may draw upon in retirement, which could include (but are not limited to) future payments from employer-sponsored defined benefit plans, retirement savings accounts, or housing equity. Scenarios that included these other retirement resources could show different outcomes.
However, we focused on Social Security because it is the primary source of income for most people with lower incomes. Moreover, other retirement resources are much less prevalent in this population. We reviewed relevant studies and U.S. Census Bureau (Census) data to determine our scenario assumptions, such as life expectancy by income group, and we calculated monthly Social Security benefits using the Social Security Administration’s (SSA) quick calculator. We chose to use the quick calculator because it is transparent, publicly available, and produces quick, approximate estimates using a methodology developed by SSA actuaries. These scenarios are illustrative in nature and should not be used to determine future outcomes for a particular individual. In addition to gathering input from internal experts, we sought and incorporated feedback on our methodology from one of the co-chairs for a recent study on life expectancy by income by the National Academy of Sciences. All figures are in 2015 dollars. We assessed the reliability of the data we used by reviewing relevant documentation and interviewing knowledgeable agency officials. We found the data to be reliable for the purposes used in this report.

Life Expectancy Estimates by Income

We made several assumptions in our scenario calculations, the most important of which is the life expectancy estimate for income groups. We identified and reviewed 11 relevant studies published in the past decade (see app. I) and ultimately used the life expectancy estimates from a 2007 study by SSA’s Hilary Waldron. We chose this study primarily because it produced cohort life expectancy estimates at a number of ages at which an individual can claim Social Security retirement benefits.
Other advantages of the 2007 Waldron study are that it relies on comprehensive data on Social Security-covered workers that is not generally available to other researchers, it describes patterns over several decades, and it measures earnings over a multi-year period rather than just over a single year. Moreover, the study’s life expectancy estimates are in line with other estimates, and they are used in two of the nine studies we identified that describe the effects of life expectancy disparities. Although we found it sufficient for our purposes, the 2007 Waldron study had some notable drawbacks. First, it produces estimates only for Social Security-covered men. Despite this drawback, we believe the estimates are appropriate for our purposes because the vast majority (94 percent) of workers are covered by Social Security, according to SSA, and because a number of researchers have raised questions about the reliability of life expectancy estimates by income group for women. Second, the estimates are broken out by those in the top and bottom half of the earnings distribution. While it would have been useful to have a finer breakdown by earnings group, the estimates were sufficient for our purposes to describe the effects on individuals with income at the 25th and 75th percentiles (which we describe as lower- and higher-income, respectively). One final drawback is that the study produces life expectancy estimates for individuals born in 1941, who are now past retirement age. It is possible that this cohort is different than past or future cohorts. In particular, given that most studies we reviewed found increasing disparities in life expectancy, our use of this study may underestimate the effect of life expectancy differences for more recent cohorts.
Hypothetical Individuals’ Characteristics and Analysis

In order to calculate lifetime benefits using SSA’s quick calculator, we assumed a set of characteristics for two hypothetical individuals, both men (given the limitations of life expectancy estimates for women). One individual was assumed to have an income in the bottom half of the individual income distribution, and the other an income in the top half. For the mid-point of each half of the income distribution (i.e., the 25th and 75th percentiles), we estimated income for men approaching retirement age using Census’s 2015 Current Population Survey, the most recent available. Based on this, we used $20,000 annual income for our lower-income group and $80,000 annual income for the higher-income group, using income as a proxy for earnings. The SSA quick calculator assumes a prior earnings history based on current earnings covered by Social Security, which, for transparency, we did not alter. Similarly, we assumed no future change in earnings. For both individuals, we assumed a birthday of 12/1/1953, which makes them 62 years old as of 12/1/2015. Finally, for both individuals, we assumed a retirement date of December for a series of years beginning at age 62, the Social Security early retirement age. In order to further understand the effects of life expectancy differences on income groups, we compared outcomes with and without taking life expectancy differences into account. Specifically, we further calculated lifetime benefits using SSA’s actuarially assumed (average) life expectancy and compared them to the benefits based on the life expectancies by lifetime earnings group from Waldron’s 2007 study. We describe the difference between these two outcomes as the change in benefits due to differential life expectancy.
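The comparison just described can be sketched in a few lines. This is a simplification, not SSA's quick calculator: it assumes a flat monthly benefit with no cost-of-living adjustment, and the monthly amount is an assumed, illustrative figure, so it will not reproduce the report's exact dollar estimates.

```python
def lifetime_benefits(monthly_benefit, claim_age, death_age):
    """Flat-payment sketch of lifetime benefits: a constant monthly
    payment from the claiming age until death, with no COLA and no
    present-value discounting."""
    return monthly_benefit * 12 * (death_age - claim_age)

# Assumed monthly benefit for a lower-income man claiming at age 62.
monthly = 620

average = lifetime_benefits(monthly, claim_age=62, death_age=83)       # average life expectancy
differential = lifetime_benefits(monthly, claim_age=62, death_age=80)  # his income group's

# The change in benefits due to differential life expectancy:
reduction = 1 - differential / average  # 3/21, about 14%, in this flat sketch
```

Under this flat-payment simplification the reduction is simply the share of expected retirement years lost (3 of 21 years, about 14 percent); the report's 11 percent estimate for the same claiming age reflects the quick calculator's more detailed benefit computation.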
Lifetime Benefit Figures Adjusted for Present Value

For simplicity and in order to focus on the effects of life expectancy differences in the report, we did not adjust the lifetime Social Security benefits for present value. Present value calculations reflect the time value of money, based on the assumption that a dollar in the future is worth less than a dollar today because the dollar today can be invested and earn interest. While present value adjustments are an important economic tool, we chose instead to report unadjusted figures for several reasons: for simplicity; so that average lifetime benefits could be viewed as a multiple of current income; and in order to focus on the effects of differential life expectancy, including the importance of benefits at older ages. It is at these older ages when life expectancy differences predict that some income groups will receive, on average, more or fewer years of benefits, which is the crux of our analysis. Further, present value adjustments were not incorporated by all of the studies we reviewed, and one expert we consulted suggested that it would be valuable to show both unadjusted and adjusted figures. However, as a check on our analysis, and in order to provide more complete information, we also performed our calculations with adjustments for present value. The results of these calculations are consistent with our basic finding—that differential life expectancy reduces the progressivity of projected lifetime Social Security retirement benefits. The lifetime benefit figures adjusted for present value are presented below. The present value adjustments are based on an assumed real (i.e., inflation-adjusted) interest rate of 2.9 percent, which is what SSA used as the intermediate long-range assumption for the Social Security trust funds in its 2015 Trustees Report.
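The present value adjustment described above can be sketched by discounting each month's payment at the 2.9 percent real rate. The function below is an illustrative simplification (flat monthly benefit, monthly fractions of an annual real rate), not the method SSA or GAO used, and the monthly amount is an assumed figure.

```python
def present_value(monthly_benefit, claim_age, death_age,
                  base_age=62, real_rate=0.029):
    """Present value at base_age of a flat monthly benefit stream paid
    from claim_age to death_age, discounted at an annual real rate."""
    start = (claim_age - base_age) * 12
    end = (death_age - base_age) * 12
    return sum(monthly_benefit / (1 + real_rate) ** (m / 12)
               for m in range(start, end))

# Discounting shrinks the value of benefits received at older ages, so
# it dampens (but does not reverse) the effect of differential life
# expectancy on lifetime progressivity.
undiscounted = present_value(620, 62, 83, real_rate=0.0)
discounted = present_value(620, 62, 83)  # smaller than undiscounted
```

Setting the real rate to zero recovers the unadjusted flat-payment total, which is why the adjusted figures in this appendix differ from the unadjusted scenario figures only in magnitude, not in direction.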
Appendix III: Trend in the Cap on Social Security Taxable Earnings

Workers pay a payroll tax of 6.2 percent of their covered earnings into the Social Security trust funds. Their employers pay an equal amount, for a combined total rate of 12.4 percent. This tax only applies to workers’ earnings up to an annual limit; for 2016, it is $118,500. This cap is technically known as the “contribution and benefit base” because the same cap is used to limit the amount of earnings subject to the payroll tax, as well as the amount of earnings used in the formula to determine benefit levels. The cap on taxable earnings has changed over time. The maximum annual earnings subject to the payroll tax was $3,000 in 1937, and at that time, 97 percent of all covered workers had total earnings below that level. In recent years, about 94 percent have had total earnings below the taxable maximum. Meanwhile, the percentage of covered earnings that are subject to the payroll tax has fluctuated, generally declining since the mid-1980s, according to the most recent data available. In 1983, this figure was more than 90 percent, but it has declined since then and, in 2013, about 83 percent of earnings fell below the taxable maximum (see fig. 10). This percentage has declined because earnings among higher earners (those earning above the maximum) have grown faster than earnings among the rest of the working population.

Appendix IV: Comments from the Social Security Administration

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Margie K. Shields (Assistant Director), Margaret Weber (Analyst-in-Charge), Laura Hoffrey, and Vincent Lui made key contributions to this report.
Also contributing to this report were Susan Aschoff, Deborah Bland, Mindy Bowman, Alicia Puente Cackley, Sarah Cornetto, John Dicken, Jennifer Gregory, Kathy Leslie, Sheila McCoy, Drew Nelson, Mimi Nguyen, Susan Offutt, Rhiannon Patterson, Oliver Richard, Max Sawicky, Joseph Silvestri, Frank Todisco, and Walter Vance.

Related GAO Products

Retirement Security: Better Information on Income Replacement Rates Needed to Help Workers Plan for Retirement. GAO-16-242. Washington, D.C.: March 1, 2016.

Social Security’s Future: Answers to Key Questions. GAO-16-75SP. Washington, D.C.: October 2015.

Retirement Security: Federal Action Could Help State Efforts to Expand Private Sector Coverage. GAO-15-556. Washington, D.C.: September 10, 2015.

Retirement Security: Most Households Approaching Retirement Have Low Savings. GAO-15-419. Washington, D.C.: May 12, 2015.

Private Pensions: Participants Need Better Information When Offered Lump Sums That Replace Their Lifetime Benefits. GAO-15-74. Washington, D.C.: January 27, 2015.

401(K) Plans: Improvements Can Be Made to Better Protect Participants in Managed Accounts. GAO-14-310. Washington, D.C.: June 25, 2014.

Retirement Security: Challenges for Those Claiming Social Security Benefits Early and New Health Coverage Options. GAO-14-311. Washington, D.C.: April 23, 2014.

Retirement Security: Annuities with Guaranteed Lifetime Withdrawals Have Both Benefits and Risks, but Regulation Varies across States. GAO-13-75. Washington, D.C.: December 10, 2012.

Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012.

Unemployed Older Workers: Many Experience Challenges Regaining Employment and Face Reduced Retirement Security. GAO-12-445. Washington, D.C.: April 25, 2012.

Retirement Income: Ensuring Income throughout Retirement Requires Difficult Choices. GAO-11-400. Washington, D.C.: June 7, 2011.

Private Pensions: Some Key Features Lead to an Uneven Distribution of Benefits. GAO-11-333. Washington, D.C.: March 30, 2011.

Social Security Reform: Raising the Retirement Ages Would Have Implications for Older Workers and SSA Disability Rolls. GAO-11-125. Washington, D.C.: November 18, 2010.

Social Security: Options to Protect Benefits for Vulnerable Groups When Addressing Program Solvency. GAO-10-101R. Washington, D.C.: December 7, 2009.
Social Security retirement benefits and traditional defined benefit (DB) pension plans, both key sources of retirement income that promise lifetime benefits, are now required to make payments to retirees for an increasing number of years. This development, among others, has prompted a wide range of possible actions to help curb the rising future liabilities for the federal government and DB sponsors. For example, to address financial challenges for the Social Security program, various options have been proposed, such as adjusting tax contributions, retirement age, and benefit amounts. Individuals also face challenges resulting from increases in life expectancy because they must save more to provide for the possibility of a longer retirement. Life expectancy varies substantially across different groups with significant effects on retirement resources, especially for those with low incomes. For example, according to studies GAO reviewed, lower-income men approaching retirement live, on average, 3.6 to 12.7 fewer years than higher-income men. GAO developed hypothetical scenarios to calculate the projected amount of lifetime Social Security retirement benefits received, on average, for men with different income levels born in the same year. In these scenarios, GAO compared projected benefits based on each income groups' shorter or longer life expectancy with projected benefits based on average life expectancy, and found that lower-income groups' shorter-than-average life expectancy reduced their projected lifetime benefits by as much as 11 to 14 percent. Effects on Social Security retirement benefits are particularly important to lower-income groups because Social Security is their primary source of retirement income. Social Security's formula for calculating monthly benefits is progressive—that is, it provides a proportionally larger monthly earnings replacement for lower-earners than for higher-earners. 
However, when viewed in terms of benefit received over a lifetime, the disparities in life expectancy across income groups erode the progressive effect of the program. |
Introduction

Between 1987 and 1994, errors by pilots whose backgrounds had not been checked prior to hiring were identified as contributing factors in seven crashes of scheduled carriers involving 111 fatalities. The National Transportation Safety Board (NTSB), which investigated these crashes, found that each of the pilots involved had been hired despite a poor performance history, prior safety violations, or both. In each case, NTSB reported, the carrier had lacked access to, or had failed to obtain, the pilot’s records with previous employers before hiring. Accordingly, on four separate occasions between September 1988 and October 1995, NTSB recommended that carriers obtain information from the Federal Aviation Administration (FAA) and previous employers on a pilot’s training, performance, and safety history before hiring. NTSB later also recommended that information about the pilot’s driving record be checked with the National Driver Register (NDR). In June 1988, we likewise recommended, after surveying carriers’ pilot-hiring practices, that FAA encourage carriers to review a pilot’s safety history before making a hiring decision.

On October 9, 1996, Congress enacted the Pilot Records Improvement Act (PRIA) to help ensure that fatal crashes would not again occur because, in part, carriers had not investigated the backgrounds of the pilots they hired. The act, which took effect on February 6, 1997, requires that air carriers conduct background checks on all pilot applicants. The vast majority of commercial carriers carrying paying passengers or transporting cargo are classified as air carriers because these carriers meet specific statutory requirements that are discussed in more detail later in this chapter. These carriers are, therefore, subject to the act.
Besides requiring that carriers obtain key records from FAA, past or current employers for whom the applicant worked as a pilot, and NDR, PRIA includes provisions to protect pilots’ rights and to protect those furnishing records from liability for providing the information. PRIA also gives FAA responsibility for overseeing compliance with the act by stating that FAA may prescribe regulations as necessary to ensure compliance with the requirements for requesting and receiving pilot records; protect the personal privacy of anyone whose records are requested, as well as the confidentiality of those records; and preclude further dissemination of those records by the person requesting them. Furthermore, as the agency responsible for aviation safety, FAA has a broader responsibility to promote the safe flight of civil aircraft in air commerce by prescribing regulations and minimum standards for the aviation industry. To carry out this responsibility, FAA issues regulations and develops guidance. FAA also performs inspections to ensure compliance with federal statutes and regulations and has the authority to take enforcement actions against violators. Specifically, FAA regulates and monitors the safety of air transportation and air commerce through its safety programs, which provide the initial certification, periodic surveillance, and inspection of airlines, airports, repair stations, and other aviation entities, including pilots. These inspections are intended primarily to detect actual violations of statutes or regulations. When safety inspectors identify violations, FAA guidance requires that such violations be investigated, appropriately addressed, and reported. The Chairman and Ranking Democratic Member of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, asked us to review the status of PRIA since its enactment in October 1996.
Specifically, they asked us to determine the following: whether (1) hiring air carriers have complied with the act by requesting and receiving key documents about pilot applicants before making final hiring decisions and (2) FAA, NDR, and other carriers have complied with the act by providing these documents in a timely manner; whether air carriers are aware of PRIA’s requirements for protecting pilots’ rights; what FAA has done to oversee compliance with the act; and whether air carriers believe PRIA has been helpful to them in making pilot-hiring decisions and is worth the cost. Until recently, pilot hiring was expected to keep pace over the next decade with projected growth in air traffic and anticipated pilot retirements. With the economic downturn in 2001 and the September 11, 2001, terrorist attacks, however, the demand for air travel declined. As a result of the September 11 attacks, concerns about aviation safety and security are likely to remain central, and pilot background checks, such as those required by PRIA, may assume even greater importance.

PRIA Requires Air Carriers to Conduct Background Checks on Pilot Job Applicants

PRIA requires that air carriers conduct background checks on a pilot job applicant. Specifically, PRIA requires them to request and review information from FAA, previous air carrier and other employers, and NDR about the applicant’s qualifications, performance, and training over the past 5 years. This information is to be provided within 30 days, and a reasonable fee may be charged to the requesting carrier for the service. Table 1 identifies the information required from each source.

PRIA Applies to Air Carriers

PRIA’s definition of an air carrier is based on several statutes. An air carrier subject to PRIA is operated by a U.S. citizen who directly or indirectly provides air transportation; provides interstate air transportation—that is, transports passengers or property across state lines by aircraft as a common carrier for compensation, or transports mail by aircraft; and operates as a common carrier—that is, advertises to the public to carry persons, property, or mail for hire. To operate as an air carrier, a carrier must have an air carrier certificate issued by FAA under Part 119 of Title 14 of the Code of Federal Regulations (CFR). FAA issues a number of operating certificates under various parts of the CFR. FAA may require an aviation operator to have several of these certificates, depending on the number of passengers carried, the weight of the aircraft, and whether the aircraft is used to fly out of state or carry mail. For example, Part 121 certificates are generally issued to major carriers who operate turbojet-powered airplanes or airplanes with more than nine passenger seats, excluding crew members’ seats, or airplanes having a payload capacity of more than 7,500 pounds. Part 135 certificates are generally issued to small carriers operating other than turbojet-powered airplanes having no more than nine passenger seats and a payload capacity of 7,500 pounds or less. The criteria for issuing certificates under Parts 121 and 135 have changed since the late 1980s and early 1990s, when the seven fatal accidents that led to PRIA’s enactment occurred. At that time, all seven carriers operated under Part 135 certificates; however, under the new criteria, all of these carriers would operate under Part 121 certificates. As of April 13, 2001, FAA had identified 3,059 operators with active certificates to operate under Parts 121 and 135 or with dual certificates to operate under both. The vast majority of these operators are air carriers and thus are subject to PRIA. (See table 2.)
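The Part 121 versus Part 135 criteria described above can be expressed as a simple decision rule. The sketch below is illustrative only: the actual 14 CFR Part 119 certification rules involve more conditions than the seat and payload thresholds named here, and the function name and inputs are our own.

```python
def certificate_part(turbojet: bool, passenger_seats: int, payload_lbs: int) -> str:
    """Simplified sketch of the Part 121 / Part 135 criteria described above.

    Part 121: turbojet-powered airplanes, airplanes with more than nine
    passenger seats (excluding crew seats), or airplanes with a payload
    capacity of more than 7,500 pounds. Part 135: other small operations.
    """
    if turbojet or passenger_seats > 9 or payload_lbs > 7500:
        return "Part 121"
    return "Part 135"

# A nine-seat, non-turbojet airplane with a modest payload falls under Part 135;
# adding a turbojet engine or exceeding either threshold moves it to Part 121.
print(certificate_part(turbojet=False, passenger_seats=9, payload_lbs=5000))  # Part 135
print(certificate_part(turbojet=True, passenger_seats=9, payload_lbs=5000))   # Part 121
```

Under this rule, the seven accident carriers (all Part 135 at the time) would today exceed one of the thresholds and operate under Part 121, as the text notes.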
PRIA Includes Provisions to Protect Pilots’ Rights

PRIA includes several provisions to protect the privacy of a pilot’s records during the hiring process and indicates how a pilot can obtain and comment on the records contained in PRIA files. First, PRIA specifies that, generally, only information covering a 5-year period preceding the date of the employment application or the date of the request is to be forwarded to the hiring air carrier. Second, to help ensure that a pilot’s records are not requested without permission, a carrier must obtain the pilot’s written consent before requesting the release of records from FAA, NDR, and current or former employers. PRIA also includes provisions to protect air carriers from liability for providing a pilot’s records and requires that they not provide records without first ensuring that the pilot’s consent has been obtained. Finally, PRIA limits access to a pilot’s records to those individuals directly involved in the hiring process and restricts the use of those records to assessing the pilot’s qualifications as part of making a hiring decision. To further protect a pilot’s privacy, a carrier must protect the confidentiality of these records.

PRIA also provides a pilot with access to his/her PRIA records. Whenever a request is received, a carrier, employer, or agency has 20 days to notify the pilot of the request and of the pilot’s right to receive a copy of the PRIA file. If requested in writing by a pilot, a copy of the PRIA file must be provided within 30 days of the pilot’s request. Under PRIA, a pilot has the right to submit written comments to correct inaccuracies in the records before a final hiring decision is made. To further protect the rights of pilots under PRIA, FAA may prescribe such regulations as may be necessary to protect the personal privacy of any pilots whose records are requested, preclude the further dissemination of records received, and ensure prompt compliance with requests for PRIA records.
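The two time windows above run from different triggers: the 20-day notification clock starts when a records request is received, while the 30-day copy clock starts when the pilot's written request arrives. A small date calculation makes the distinction concrete; the dates are hypothetical, and we assume the act's deadlines are counted in calendar days.

```python
from datetime import date, timedelta

# Hypothetical dates illustrating the PRIA windows described above.
carrier_request_received = date(2001, 6, 1)   # employer receives a carrier's records request
pilot_copy_requested = date(2001, 6, 10)      # pilot asks, in writing, for a copy of the file

# The pilot must be notified of the carrier's request within 20 days of its receipt.
notify_pilot_by = carrier_request_received + timedelta(days=20)

# A copy of the PRIA file must be provided within 30 days of the pilot's own request.
provide_copy_by = pilot_copy_requested + timedelta(days=30)

print(notify_pilot_by)   # 2001-06-21
print(provide_copy_by)   # 2001-07-10
```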
PRIA Requires FAA to Provide Periodic Reports and Three Studies

PRIA required FAA to provide a written report to Congress on the act’s implementation 18 months after the act was passed and at least once every 3 years thereafter on proposed changes to FAA’s records, carriers’ records, and other employers’ records. If FAA does not recommend changes to PRIA, the act also requires the agency to give reasons for its position. FAA provided Congress with two reports that were issued in October 2000 and April 2002. Neither report recommends any change to (1) the agency’s current system of collecting and maintaining certificate records on airmen or on legal enforcement actions and (2) air carrier and other records required to be furnished under PRIA. FAA did not recommend any change because it thought that the existing records and system were effective and met PRIA’s requirements. PRIA also required FAA to conduct three studies related to carriers’ procedures for hiring pilots—two jointly with representatives of the aviation industry. The three studies that FAA transmitted to Congress recommended additional research on, rather than changes to, carriers’ pilot-hiring practices, but FAA has not yet begun any of the recommended research. According to officials in the Air Transportation Division, FAA has not pursued any of the proposed research because it has not yet heard whether Congress agrees with the studies’ findings.

Congress Amended PRIA Twice

To clarify some of PRIA’s requirements and to lessen the act’s burden on smaller carriers, Congress passed amendments in December 1997 and April 2000 that narrowed and clarified PRIA’s scope and provided some relief in areas that had proven burdensome to some carriers. The April 2000 amendment also directed FAA to carry out certain actions related to PRIA. Table 3 summarizes the major changes resulting from these amendments.
The December 1997 amendment made a key change that allows a carrier to request and review a PRIA file after hiring an applicant as long as the carrier completes the background check before allowing the pilot to fly an aircraft with passengers or cargo. Initially, the act required all carriers to request and review a pilot’s records from FAA, NDR, and previous employers before hiring the pilot. This requirement caused delays in hiring decisions because, at the time of the amendment’s enactment, FAA and some carriers could not meet the 30-day deadline. As a result, Congress amended the act to permit a carrier to perform these background checks after the pilot was hired as long as they were completed before the carrier used the pilot to fly passengers or cargo—a step often referred to as the final hiring decision. In essence, this amendment gives carriers the option to use PRIA information not as part of a pilot’s initial screening process but as a last check before the pilot is put into the cockpit.

Objectives, Scope, and Methodology

To meet our objectives, we gathered quantitative and qualitative information from a variety of sources for the period from PRIA’s implementation in February 1997 through July 2002. Our primary method for addressing the four objectives was two nationwide, anonymous mail surveys—one for Part 121 carriers and one for Part 135 carriers. Carriers with dual certificates, that is, authorized to operate under both Part 121 and Part 135, received the Part 121 survey, and we include the responses of these carriers with the Part 121 responses throughout this report.
The surveys, conducted from June through September 2001, provided data on the carriers’ compliance with the act, including the timeliness with which they received records from FAA, NDR, and other carriers; their use of the information to hire pilots; the costs they incurred because of the act’s requirements; their awareness of actions to protect pilots’ rights; their views on PRIA’s usefulness; and their opinions on which aspects of PRIA require more clarification, as well as their recommendations for improving PRIA. The survey population included the Part 121 and Part 135 air carriers that had made at least one request to FAA for PRIA information from July 1998, when FAA began tracking such requests electronically, through April 30, 2001. This population includes 124, or 86 percent, of the 144 carriers that operate under Part 121 or have dual certificates to operate under both Parts 121 and 135. All 124 of these carriers received the survey. However, the survey population covers only 1,144, or 39 percent, of the 2,915 Part 135 carriers. Of these 1,144 carriers, we randomly selected 350 to receive the survey. Although we would have preferred to survey a representative sample of all Part 135 carriers, we were unable to do so in a manner that would produce reliable data because we were unable to identify and pretest carriers that were out of compliance with the requirement to request documents from FAA. Thus, we cannot discuss the opinions and experiences of the 1,771 Part 135 carriers that did not submit requests to FAA for PRIA information. The surveys requested historical data from PRIA’s implementation in February 1997 through December 2000 for those questions to which carriers said during pretesting that they could provide more complete information. Where this was not the case, we requested that carriers provide data for calendar year 2000 to offer the most current and reliable perspective on carriers’ compliance with the act. We did not verify the information provided in our surveys. (See app.
II for a more detailed discussion of our methodology.) Besides analyzing our survey responses, we used other methods to address the first objective on the extent to which carriers have complied with the act by obtaining key documents about pilot applicants before making final hiring decisions and whether FAA, NDR, and carriers have provided these documents in a timely manner. Specifically, we interviewed carrier officials who were responsible for requesting and reviewing PRIA information as part of their hiring decisions. We also analyzed FAA and NDR data on carriers’ PRIA requests. We performed limited internal testing of the database that FAA uses to respond to PRIA requests, but we did not independently review the validity of the data it derives from three other FAA databases. We also did not independently review the validity of the NDR database that states use to provide information to carriers about pilots’ driving records. To obtain further information on compliance with PRIA by Part 135 carriers, FAA, at our request, asked its principal operations inspectors in April 2001 for information on which Part 135 carriers should have made requests to FAA for PRIA records in calendar year 2000. Specifically, FAA asked these inspectors to determine how many pilots each of these Part 135 carriers had hired that year and how many of the carriers operated only intrastate and thus might be exempt from PRIA’s requirements. FAA received information on about 842 of the 2,915 Part 135 carriers in operation, 798 of which were interstate carriers subject to PRIA. We analyzed the information about hiring and certification to operate interstate operations and compared it with requests for PRIA information made to FAA by these same carriers in 2000 to determine whether these 798 carriers had requested information from FAA for at least as many pilots as they had hired. 
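The comparison described above, matching each interstate Part 135 carrier's 2000 hires against its PRIA requests to FAA, amounts to flagging any carrier whose request count falls short of its hire count. A minimal sketch of that check follows; the carrier records and figures are made up for illustration, not GAO's actual data.

```python
# Each record pairs an interstate Part 135 carrier with the number of pilots it
# hired in 2000 and the number of PRIA requests it made to FAA that year.
# These values are illustrative placeholders.
carriers = [
    {"carrier": "A", "pilots_hired": 4, "faa_requests": 4},
    {"carrier": "B", "pilots_hired": 6, "faa_requests": 2},
    {"carrier": "C", "pilots_hired": 0, "faa_requests": 0},
]

# A carrier's compliance is questionable if it requested records for fewer
# pilots than it hired.
questionable = [c for c in carriers if c["faa_requests"] < c["pilots_hired"]]
shortfall = sum(c["pilots_hired"] - c["faa_requests"] for c in questionable)

print([c["carrier"] for c in questionable])  # ['B']
print(shortfall)                             # 4 pilots hired without a matching request
```

Applied across the 798 interstate carriers for which inspectors obtained information, this kind of shortfall check is what identified the 227 carriers discussed later in this chapter, which had requested records for 318 pilots but hired 1,078.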
To help determine our second objective of whether FAA, NDR, and carriers were aware of PRIA’s requirements to protect pilots’ rights, we also interviewed officials from these two federal agencies, carrier hiring officials, aviation associations, representatives of a major pilot union, and private aviation attorneys to understand how well PRIA has been working, including the effectiveness of measures to protect pilots’ rights. To understand pilots’ views of PRIA, we interviewed a sample of 20 pilots at hiring fairs and carriers. We also interviewed and reviewed the PRIA files of 27 pilots whom we identified as having reported experiencing problems with their PRIA records because they had contacted FAA, congressional staff, or our office. To determine our third objective of what FAA has done to oversee compliance with PRIA, we reviewed the agency’s policies, guidance, and internal documents about implementation as well as the reports and studies to Congress that FAA generated in response to the act. We also interviewed program officials at the Department of Transportation (DOT), FAA headquarters, and FAA field offices responsible for responding to PRIA requests from carriers, generating the data needed for these responses, and overseeing the program’s implementation. We also reviewed enforcement cases initiated by FAA against carriers that had violated PRIA’s requirements. To address our fourth objective on the extent to which carriers believe the act has helped them make better pilot-hiring decisions, we surveyed Part 121 and Part 135 carriers, as previously stated. We also interviewed officials from aviation associations, representatives from a major pilot union, and private aviation attorneys to understand the impact of PRIA on making better pilot-hiring decisions. 
In addition, to obtain information on the act’s costs to FAA, NDR, carriers, and pilots, we reviewed FAA’s submissions on costs to the Office of Management and Budget, which are required under the Paperwork Reduction Act. We discussed the costs of PRIA with officials from all of these organizations as well as with officials from state motor vehicle agencies that process the vast majority of carriers’ PRIA requests for pilots’ NDR information. We conducted our work from August 2000 through July 2002 in accordance with generally accepted government auditing standards.

Actions to Comply with Background Check Requirements Are Increasing, but Compliance Is Not Always Complete or Timely

Efforts to comply with PRIA have increased since the act took effect in February 1997, but compliance is not always complete or timely. Although available data are not adequate to determine the extent of industrywide compliance, our analyses indicate that hiring carriers have requested and agencies and other carriers have provided background checks on increasing numbers of pilots. However, our analyses also suggest greater compliance with some PRIA requirements than with others. Both FAA and NDR databases and carriers’ responses to our surveys indicate that hiring carriers requested the required records more often from FAA than from NDR, even though the carriers are required to request records from both organizations for all prospective pilots. The survey responses further indicate that hiring carriers requested the required records still less frequently from other carriers. In general, the hiring carriers reported receiving the requested records on time more frequently from FAA and NDR than from other carriers. Delays in receiving these records can negatively affect both pilots and carriers.
Required Requests for Pilot Records Increased, but Available Data Are Not Adequate to Determine the Extent of Compliance

As discussed in chapter 1, PRIA, as amended, requires hiring carriers to request and review information for the past 5 years on a pilot applicant’s qualifications, performance, and training. This information is to be obtained from FAA, NDR, and carriers and other employers, apart from the military, who employed the applicant to fly passengers or cargo. While FAA and NDR maintain data on the requests for PRIA records that they receive, there are no centralized data on requests between carriers. According to our analyses of FAA and NDR databases and carriers’ responses to our surveys, the number of requests for background checks increased steadily from 1997 through 2000. However, requests did fall in 2001, reflecting the downturn in air traffic resulting from the economic recession and the terrorist attacks of September 11. Although this generally steady growth in the number of requests suggests increasing compliance with PRIA’s requirements for requesting records, we could not assess carriers’ compliance because data are not available on how many and which pilots were hired each year by each carrier that is subject to PRIA and whether each subject carrier requested records from all three required sources for each pilot hired. When we began our review, FAA did not know which carriers were subject to PRIA, but following discussions with us, FAA agreed to analyze its Operations Specification database to make this determination. Information regarding carriers’ requests for records from the three required sources is not available because federal laws and regulations do not require that carriers report it to FAA or that FAA maintain it. FAA does not believe that the costs of gathering and maintaining these data would be worth the benefits to aviation safety.
Requests for FAA Background Checks Increased

According to FAA data, the number of requests for background checks nearly doubled from 14,938 in 1997 to 27,104 in 2000. With the recession and the terrorist attacks, the number dropped to 21,047 in 2001. From February 1997, when PRIA was implemented, through December 2001, carriers requested background checks on 111,552 pilots from FAA. (See fig. 1.) The required records include a pilot applicant’s flight certificate, medical certificate, and enforcement history. Although the number of requests to FAA for background checks increased, not all carriers requested records. According to our analysis of FAA’s records, fewer than half of the 3,059 carriers in operation as of April 13, 2001, requested PRIA background checks from FAA on at least one pilot from July 1998, when FAA began tracking PRIA requests by carrier, through April 2001. Without data on how many pilots were hired or on how many carriers hired at least one pilot during the period of our review, we could not determine how many carriers should have requested records. In addition, the Part 135 carriers were less likely than the Part 121 carriers to request records. According to our analysis of the available automated data for July 1998 through April 2001, 39 percent of the Part 135 carriers requested PRIA records from FAA at least once, compared with 86 percent of the Part 121 carriers. (See table 4.) Again, data were not available to determine whether or to what extent the lower percentage for Part 135 carriers was related to compliance. On the one hand, information provided by FAA indicated that over 900 Part 135 carriers have only one pilot, making it unlikely, FAA officials said, that they hired any pilots during this period. Furthermore, 4 percent of the Part 135 carriers we surveyed—all of whom had made at least one request to FAA for PRIA records—reported that they did not hire any pilots from 1997 through 2000.
On the other hand, as discussed later in this chapter, we found evidence of Part 135 carriers that hired pilots but may not have complied fully with PRIA’s background check requirements.

Requests for NDR Information Increased

The number of requests for NDR driver information records, primarily from carriers but occasionally from pilots, also increased, although NDR data, like FAA data, show a drop in 2001 in response to the recession and the September 11 terrorist attacks. Specifically, the number of requests increased from 9,549 in 1997 to 23,104 in 2000, but dropped to 18,175 in 2001. (See fig. 2.) According to NDR data, carriers and pilots made 81,509 requests for driver information from PRIA’s implementation through December 2001. The NDR information required for PRIA purposes includes records of revocations or suspensions of a driver's license for such serious offenses as reckless driving, driving while intoxicated, or drug convictions.

Carriers Reported Increased Requests for Pilot Records

The carriers we surveyed reported receiving increased numbers of requests for pilot records from hiring carriers for each year from PRIA’s implementation through 2000, the last year covered by our survey. These requested records include information about a pilot’s training and performance as well as the results of drug and alcohol testing. For the period from 1997 through 2000, the Part 121 carriers reported that the number of requests received nearly tripled, and the Part 135 carriers reported that the number of requests nearly doubled. (See fig. 3.) The number of Part 121 carriers that reported receiving such requests rose from 67 in 1997 to 91 in 2000, and we estimate that the number of Part 135 carriers receiving such requests increased from 461 to 931 during this period.
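The growth rates cited in this chapter can be checked with simple arithmetic. A sketch using the FAA request counts reported above (14,938 in 1997; 27,104 in 2000; 21,047 in 2001):

```python
# FAA counts of PRIA background-check requests, as cited in this chapter.
faa_requests = {1997: 14_938, 2000: 27_104, 2001: 21_047}

growth = faa_requests[2000] / faa_requests[1997]    # 1997 -> 2000 ratio
drop = 1 - faa_requests[2001] / faa_requests[2000]  # 2000 -> 2001 decline

print(f"1997 to 2000: {growth:.2f}x")     # 1.81x, i.e., "nearly doubled"
print(f"2000 to 2001: {drop:.1%} decline")
```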
Hiring Carriers Requested the Most Records from FAA and Made the Fewest Requests to Other Carriers

Under PRIA, carriers that hire pilots should make the same number of requests to FAA and NDR, and they may be required to make more or fewer requests to other carriers, depending on how many employers their pilot applicants had in the preceding 5 years and whether records are required from those employers. According to our analysis of data from FAA and NDR and from our surveys of carriers, FAA received thousands more requests for records than NDR, and both FAA and NDR received thousands more requests than carriers.

Requests for FAA Records Far Exceeded Requests for NDR Information

According to FAA and NDR data, from 1997 through 2001, carriers requested records for about 30,000 more pilots from FAA than from NDR, and in each of those years, carriers made thousands more requests to FAA than to NDR. (See table 5.) Although disparities of this magnitude would seem to indicate some degree of noncompliance with the requirement to request driver information from NDR, FAA and NDR data cannot readily be compared. First, NDR does not track its data by carrier or by pilot, as FAA does. Therefore, the two agencies’ data cannot be matched to verify that a carrier has requested background checks on a pilot from both federal agencies. Second, NDR data are, to some extent, understated, partly because NDR cannot always identify requests from pilots for NDR information as PRIA requests and partly because NDR did not include known PRIA requests made by pilots under the Privacy Act in its PRIA database until 1999. Although requests from carriers for NDR information are readily identifiable as PRIA requests, carriers sometimes delegate their responsibility for obtaining NDR information to pilot applicants, even though PRIA requires that a carrier receive the records directly from NDR or the state agency.
Pilots’ requests for NDR information—whether made directly to NDR under the Privacy Act or to state motor vehicle agencies—are not identifiable as PRIA requests unless the pilots specify as much. From 1999, when NDR began tracking pilots’ requests separately from other Privacy Act requests, through 2001, pilots made 1,187 (2 percent) of 58,627 requests to NDR for background checks under PRIA. Carriers’ delegation of their responsibility for obtaining NDR information to pilot applicants raises issues beyond how to account accurately for the number of PRIA requests made for NDR information. First, PRIA directs carriers to obtain the NDR information on each pilot with the pilot’s consent. Although PRIA allows carriers to have a pilot applicant request that either NDR or a state motor vehicle agency provide the NDR information directly to the carrier, PRIA does not allow the pilot to obtain the information and then provide it to the carrier. This practice, which gives the pilot custody of the information, potentially compromises the reliability of the information. According to NDR officials, at least one major Part 121 carrier requires pilot applicants to obtain NDR information under the Privacy Act and bring it with them to an interview—a procedure that violates PRIA.

Reported Requests to Other Carriers Fell Short of Requests to FAA and NDR, but Records from Some Carriers Are Not Required

The carriers responding to our surveys reported receiving substantially fewer requests for background checks each year than did FAA and NDR. From 1997 through 2000, the last year covered by our surveys, the carriers reported receiving about 44,000 requests, compared with about 91,000 requests to FAA and about 63,000 requests to NDR. Again, disparities of this magnitude would seem to indicate some degree of noncompliance, but our analysis also identified other possible reasons for differences.
First, carriers estimated the number of requests they received each year, whereas FAA and NDR tracked their requests electronically. Second, as previously explained, our survey covered only those carriers that had made at least one request to FAA for pilot records. Some carriers might have requested records from other carriers or NDR, but not from FAA. Finally, under PRIA, as amended, carriers do not need to request records from the military, from employers for whom pilot applicants worked in jobs unrelated to flying, or from certain types of aviation operators. Eighty-eight percent of the Part 121 carriers and 46 percent of the Part 135 carriers we surveyed reported hiring at least some pilots with military flight experience in 2000. In addition, many smaller Part 135 carriers may hire pilots whose recent experience includes working for an aviation operator that is not required to maintain the kinds of information on training and performance included in PRIA records, such as a private flight school or an operator that provides agricultural crop dusting, banner towing, travel by corporate jet, or aerial surveying. Furthermore, PRIA requires only that carriers make a “good-faith” attempt to obtain records from foreign employers or bankrupt carriers. Even though there are several reasons why carriers may have reported receiving fewer requests than FAA and NDR received, carriers’ survey responses pointed to greater noncompliance with the requirement to request records from other carriers or employers than with the requirements to request records from FAA or NDR. For example, 57 percent of the Part 121 carriers reported requesting records from other carriers for all or almost all pilots they hired in 2000, compared with 97 percent for FAA and 95 percent for NDR. Similar percentages of the Part 135 carriers we surveyed reported requesting records from each of the three sources. (See fig. 4.)
Additional Evidence Suggests That Recent Compliance by Some Carriers Is Not Complete

Two types of evidence we gathered suggest that some carriers, especially some Part 135 carriers, may not have complied fully with requirements to complete background checks on pilots they hired in 2000. First, our analysis of hiring data gathered by FAA inspectors showed that hundreds of pilots were hired by Part 135 carriers that either had not requested PRIA records from FAA or had requested records for fewer pilots than they had hired. To better understand why many Part 135 carriers had not requested records, we asked FAA to have its inspectors determine how many pilots were hired in 2000 by Part 135 carriers that operate interstate and therefore are subject to PRIA’s requirements. Data generated by the inspectors raised questions about the compliance of 227 (28 percent) of the 798 Part 135 carriers for which the inspectors obtained information. These 227 carriers had requested records for 318 pilots but had hired 1,078. While the carriers that hired some of the remaining 760 pilots might have complied with PRIA if the pilots were not placed in service in 2000 or if the pilots’ records were requested at the end of 1999 or at the beginning of 2001, it is unlikely that these circumstances applied to all 760 pilots. According to FAA, its Office of Flight Standards Service has asked the regions responsible for overseeing these 227 carriers to review their compliance with PRIA. Second, to obtain a snapshot of carriers’ compliance with PRIA’s requirements for obtaining background information, we asked our survey respondents about the records they requested for pilots hired in 2000. A few carriers self-reported significant noncompliance with PRIA’s requirements for requesting records.
Of the Part 121 carriers, 1 percent reported requesting FAA records less than half the time for the pilots hired in 2000, 3 percent reported requesting NDR information less than half the time for the pilots hired, and 6 percent reported requesting other employers’ records for half or fewer of the pilots hired. The percentages for Part 135 carriers were generally comparable.

Carriers Said They Generally Receive Records on Time from FAA and NDR but Still Have Some Problems Obtaining Records from Other Carriers

PRIA requires FAA, NDR, and carriers to provide PRIA records to a hiring carrier within 30 days of receiving a written request. The act also requires hiring carriers to receive the records from all three sources for each pilot applicant before making a final hiring decision—that is, before using the pilot to fly passengers or cargo. According to industry representatives and carrier hiring officials we surveyed or interviewed, most carriers received PRIA records from FAA and NDR, and, in the majority of instances, they reported receiving the records within 30 days, as required. However, in a few cases, carriers reported never receiving the required records. Furthermore, carriers said they sometimes needed more time to follow up with state motor vehicle agencies on the initial NDR information they received. Carriers reported more problems in receiving records on time from other carriers and, in a few cases, reported never receiving the required information. Without complete information, a carrier is not allowed to use a pilot to fly passengers or cargo, and delays in receiving the required information can be costly to both the carrier and the pilot.

FAA Has Largely Overcome Initial Delays in Providing PRIA Records to Carriers

During the first 6 months after PRIA was implemented, FAA said it was not always able to respond to requests for background checks within 30 days. FAA said it did not have enough staff to keep up with the volume of requests it received.
In addition, the agency needed to gather the required records—pilot’s flight certificates, medical certificates, and enforcement histories—from three separate databases maintained in three different offices. As a result, FAA’s responses sometimes took months, delaying carriers’ hiring of pilots. To reduce the delays in responding to requests for records, Congress amended PRIA and FAA modified its procedures. Noting that delays in receiving records presented a particular burden to small aviation businesses, Congress amended PRIA in December 1997 to provide relief to the on-demand air carriers by allowing them to use pilots to fly passengers for up to 90 days before receiving their PRIA records. In the summer of 1997, FAA transferred the responsibility for responding to PRIA requests from its Civil Aerospace Medical Institute to its Aviation Data Systems Branch in the Office of Flight Standards, which had more staff available to respond to requests. In addition, in July 1998, FAA developed a centralized database that is automatically updated each night with new information from the three databases that contain FAA’s flight, medical, and enforcement records. We observed FAA staff using this database and saw that they can, within minutes, generate a response letter for a carrier and a copy for a pilot. Our review of FAA’s response times for the 27,104 requests received in 2000 showed that FAA generally provided PRIA information in less than 2 work days after receiving a carrier’s request. According to FAA staff responsible for responding to PRIA requests, delays can occur if information such as a pilot’s name or certificate number is incorrect or illegible. In these instances, the staff said, they usually call the carrier to obtain the correct information so that they can process the requests on time. They further noted that some carriers reduce response times by transmitting requests to FAA by fax or Express Mail. 
In responding to our surveys, carriers also indicated that FAA generally provides records and provides them on time. Ninety percent of the Part 121 carriers reported receiving the required records from FAA for all or almost all pilots hired in 2000. Of these carriers, 71 percent reported receiving almost all records on time, and 4 percent reported receiving the records on time less than half the time. Of the Part 135 carriers we surveyed, 74 percent reported receiving almost all FAA records on time, and 12 percent reported receiving these records on time less than half the time.

Most Carriers Said They Received NDR Information on Time, but Following Up on Information Can Be Time Consuming and Burdensome

Initially, NDR processed carriers’ PRIA requests directly in Washington, D.C., because state motor vehicle agencies’ computer systems were not yet set up to handle the requests electronically. Until December 31, 1997, carriers could submit PRIA requests directly to NDR for processing. Beginning in January 1998, state agencies largely assumed this responsibility, and in 2000, the state agencies processed 92 percent of carriers’ 22,201 requests, while NDR processed 8 percent on an emergency basis. Six states now process over three-quarters of the requests for NDR information, including most of the requests for residents of the four states and the District of Columbia that process no requests themselves because the computer testing needed to ensure the reliability of their NDR search process has not yet been completed. To gain perspective on recent NDR activity in response to carriers’ PRIA requests, we asked carriers whether they had received the NDR information they had requested and whether they had received the information within 30 days. Those responding to our surveys generally reported receiving NDR information from state motor vehicle agencies in response to their requests.
Specifically, 85 percent of the Part 121 carriers reported receiving NDR information for all or almost all pilots hired in 2000, as did 77 percent of the Part 135 carriers. Smaller percentages of carriers reported receiving the NDR information on time: 67 percent of the Part 121 carriers and 69 percent of the Part 135 carriers reported receiving all or almost all records on time. Furthermore, some carriers reported problems with timeliness: 4 percent of the Part 121 carriers and 18 percent of the Part 135 carriers reported receiving the records on time less than half the time. If a driver’s license has been revoked or suspended for violations, the process of following up with the motor vehicle agency in each state where violations occurred can take much longer, particularly if the NDR information provided by a state motor vehicle agency does not include identifiers, such as the driver’s Social Security number, height, weight, and eye color. Without such identifying information, the carrier must take additional steps to determine whether the pilot applicant or someone else committed the violation. Furthermore, when a state motor vehicle agency fails to provide the NDR information required under PRIA, a carrier cannot legally hire a pilot. The good-faith exception that Congress established for instances when carriers cannot obtain PRIA information from foreign carriers or from domestic carriers that have gone out of business does not apply to instances when carriers cannot obtain NDR information.

Smaller Percentages of Hiring Carriers Reported Receiving Records from Other Carriers Than from FAA and NDR

Over two-thirds of the carriers responding to our surveys reported receiving the PRIA records required from other carriers for all or almost all pilots hired in 2000, but the percentages that said they received these records were smaller than the percentages that said they received the records required from FAA and NDR. (See fig. 5.)
Of the carriers that said they did not receive the required records from other carriers for all or almost all pilots hired in 2000, 2 percent of the Part 121 carriers and 10 percent of the Part 135 carriers reported receiving the records from other carriers for few or none of the pilots hired. The responses to our surveys also indicated that carriers had more problems with receiving records on time from other carriers than from FAA or NDR. The Part 121 carriers reported that they were more likely to receive records within 30 days, as required, from major, regional, and commuter carriers than from small cargo or on-demand carriers. The information on timeliness reported by the Part 135 carriers was similar to the information reported by the Part 121 carriers, although the Part 135 carriers reported more problems with the timeliness of records from large cargo carriers than did the Part 121 carriers. Many of the carrier hiring officials, aviation association officials, and pilots we interviewed voiced concerns about problems in obtaining PRIA records on time or at all from other carriers. Several of the carrier officials said they often need to follow up with additional telephone calls and letters when these records are not received within 30 days. Of the 400 comments about PRIA that the Aircraft Owners and Pilots Association (AOPA) has received from its members since PRIA’s implementation in February 1997, the most common ones were that past employers did not send the requested records to the hiring carrier at all or did not send them in a timely fashion and that no one was enforcing the 30-day mandate. Several of the pilots we interviewed also maintained that it took months to get current and former carrier employers to forward PRIA records to prospective employers.

Delays in Providing PRIA Records Can Negatively Affect Carriers and Pilots

Not providing PRIA records to hiring carriers or providing them late can adversely affect both carriers and pilots.
In a few cases, carriers reported that they never received the required information. Hiring carriers are not allowed to use pilots to transport passengers or cargo until all PRIA records have been obtained and reviewed, except in the on-demand air charter industry, where carriers may use pilots for up to 90 days while completing background checks. Hiring officials at two carriers told us they had to let pilots go after providing expensive training. One Part 121 carrier official told us that the carrier had waited over 5 months for one pilot’s records. In the interim, the carrier said it put this pilot through 70 days of training and then sent him back for more simulator training, since it could not use him for transporting passengers or cargo. Until this matter was settled, the carrier said it paid the pilot a weekly training salary of $200 instead of a much higher salary as a crew member. The carrier said it did not want to release the pilot because he was good and because it had spent more than $25,000 on his training. Although PRIA’s good-faith exception covers instances when a hiring carrier cannot obtain records from a bankrupt or foreign carrier, this exception does not apply when an operating U.S. carrier fails to provide records. Not providing a PRIA file or delays in providing it can also cost a pilot an opportunity for career advancement to a larger carrier. With delays, a pilot can lose a job offer or receive a lower seniority number, which in turn limits job security, choice of flights, and pay. For example, according to one pilot, delays by his former employer in forwarding his PRIA file postponed his training by 2 months, caused him to receive a less desirable seniority number, gave him less choice in assignments, and delayed his promotion to captain by 4 months. He said this delay in promotion alone cost him nearly $7,500.
See chapter 4 for our analysis of how FAA could help to improve compliance with PRIA’s requirements for providing pilot records within 30 days.

Air Carriers Have Not Consistently Followed PRIA’s Requirements for Protecting Pilots’ Rights

Some carriers indicated that they are not aware of some of PRIA’s requirements to protect pilots’ rights. For example, many carriers said they were unaware of requirements for notifying pilots and for giving them opportunities to review and submit corrections to their records. In addition, state motor vehicle agencies sometimes provided records to carriers that they should not have provided. As a result, hiring carriers sometimes received PRIA files that contained information that was outdated or was not related to an individual’s performance as a pilot. While PRIA gives pilots an opportunity to submit written comments to correct records before final hiring decisions are made, this opportunity comes too late to prevent hiring carriers from seeing inappropriate information that could potentially jeopardize pilots’ chances of being hired.

Some Carriers Said They Are Unaware of PRIA’s Requirements for Protecting Pilots’ Rights

Carriers’ reported awareness of PRIA’s requirements for protecting pilots’ rights varied considerably. In general, carriers said they were more aware of requirements that are applicable when a carrier is hiring than they were of requirements that are applicable when a carrier is responding to a PRIA request. Nearly all of the Part 121 hiring carriers we surveyed said they were aware of PRIA’s requirements to obtain a pilot’s consent before seeking records from other carriers, NDR, and FAA and to keep those records confidential, as shown in figure 6. The carriers were somewhat less aware of requirements to limit access to those records and to use them only during the hiring process.
They were least aware of the requirement to give the pilot an opportunity to correct inaccuracies in the PRIA records before making a final hiring decision: 31 percent of the Part 121 carriers and 44 percent of the Part 135 carriers said they were unaware of this requirement. When responding to PRIA requests, as figure 7 shows, 88 percent of the Part 121 carriers responding to our survey said they were aware of PRIA’s requirement that, within 30 days of receiving a pilot’s written request, they provide the pilot with a copy of the PRIA records that they sent to a prospective employer. However, 80 percent of the Part 121 carriers were unaware of a requirement that gives a pilot the right to review the PRIA records kept by a current or former employer at any time. In addition, about half of the carriers were unaware of two notification requirements designed to let the pilot know that a prospective employer had requested his or her records and to give the pilot an opportunity to review those records for accuracy and completeness. Under these requirements, a carrier must notify a pilot within 20 days of (1) a prospective employer’s request for PRIA records and (2) the pilot’s right to request a copy of the records that were sent. These notification requirements are especially important if a pilot’s application and signed PRIA consent form have been on file with a carrier in a pool of possible candidates. The Part 135 carriers that we surveyed indicated similar levels of awareness of these provisions. According to AOPA officials and several of the pilots we interviewed, many pilots as well as carriers were unaware of the PRIA provisions giving pilots opportunities to review their records.
AOPA found that one of the most common questions raised by the 400 pilots who inquired about PRIA was how to obtain copies of their records. Several of the pilots who contacted us with concerns about their PRIA records also said they were uncertain about how to obtain copies of their records from their current and former employers. Some added that they had tried unsuccessfully to obtain their records.

Some PRIA Files Contained Inappropriate Information That Should Have Been Excluded

Several of the PRIA files that we reviewed contained information that should have been excluded, and carrier officials and pilots we interviewed also cited examples of inappropriate information in these files. To protect pilots’ rights, PRIA specifies what records should and should not be included in pilot files. Although FAA met these criteria in the records it forwarded, NDR and other carriers did not consistently follow these criteria. Both carrier officials and pilots we interviewed told us of instances in which outdated records were included in pilots’ files. Officials at a Part 121 carrier explained that deleting older records can be difficult, especially if the records are part of computerized training reports that may cover much longer periods of time. How widespread the inclusion of outdated records in PRIA files may be is unknown; however, this practice raises concerns because there is no mechanism for a pilot to have such records deleted until after the hiring carrier has seen them. Outdated driver information may also sometimes be included in pilot files. According to the chief of NDR’s Driver Register and Traffic Records Division, a number of states provide a driver’s complete record rather than limit the information to 5 years. The Vice President of the Aviation Services Department at AOPA also noted that the inclusion of old driving information has been a problem with PRIA files.
One pilot who contacted us complained that his driving record still showed a violation from about 20 years earlier for having unopened beer in his car when he was 18. NDR officials explained that if a pilot requests driver information directly from NDR or through a state motor vehicle agency without specifying that the information is needed for PRIA, the information provided may include information older than the 5 years required by this law. Although an amendment to PRIA in April 2000 limits the records to be provided by a former employer to those directly related to a pilot’s performance as a pilot, we found evidence that at least three carriers had forwarded unrelated records to hiring carriers. In reviewing the PRIA files of the 27 pilots who contacted us, we found one file that contained the pilot’s personal bankruptcy papers. Another file contained copies of documents from a dispute between the pilot and the carrier about unemployment compensation. A third file included court records of a carrier’s suit against a pilot for failing to repay training funds but excluded the judge’s ruling that the pilot did not have to repay those funds. These court documents were unrelated to the pilot’s flying abilities. About 29 percent of the Part 121 carriers we surveyed indicated a great need for clarification of which records are related to an individual’s performance as a pilot and thus should be included in the files forwarded to hiring carriers. The Part 135 carriers we surveyed indicated a similar need for clarification. Information unrelated to a pilot’s driving record is also sometimes included in the files NDR and states send to hiring carriers, according to the Chief of NDR’s Driver Register and Traffic Records Division. For example, he said that some states have included information on nonmoving violations, such as parking tickets, as well as on tax liens, nonpayment of child support, and unpaid library fines.
Such information can appear in PRIA files because NDR gives the states some flexibility in determining what information they submit to its computerized registry. NDR requires the states to submit the names of persons whose driver licenses have been denied, canceled, revoked, or suspended for cause as well as of those who have been convicted of certain serious traffic offenses, such as driving while impaired by alcohol or other drugs. However, NDR also allows the states to submit information on convictions and withdrawals of licenses for other offenses. While NDR limits the information provided in response to a PRIA request to the most recent 5 years, it does not otherwise screen the information provided. In contrast, when FAA reviews a pilot’s fitness to hold a medical certificate to fly, it obtains driver information from NDR that is screened to include only certain serious moving violations and drug- and alcohol-related convictions over a 3-year period. FAA must then obtain detailed information about the pilot’s driving record from any states where violations occurred. Some of the PRIA files we reviewed included records of disciplinary actions that were subsequently overturned. Although PRIA requires carriers to include records of disciplinary actions that were not overturned, carrier officials, aviation attorneys, and one pilot who contacted us raised concerns about including records of overturned disciplinary actions. For example, the file for one pilot showed he was suspended briefly for departing late, but did not show that his suspension was later overturned and he was paid in full after providing copies of air traffic control tapes showing that he had departed on time. But because a copy of the final ruling that cleared him was not included in his file, the prospective employer did not hire him, according to the pilot.
This example shows how any reference in a pilot’s file to a past problem, even a problem that has been resolved in the pilot’s favor, can be damaging to the pilot’s chances of being hired. In another case, one of the pilots who contacted us with concerns about PRIA said that his current employer had provided inconsistent records to carriers to which he had applied. Our review of his records sent to a major carrier indicated that he had tested positive on a required drug or alcohol test, while his records sent to a small Part 135 carrier indicated that he had submitted to and passed all such tests. In such a case, inaccurate information could potentially cause a carrier to unfairly reject a competent pilot whose record is clean or to unknowingly hire a pilot who has tested positive. One aviation attorney we interviewed also told us that carriers have provided inconsistent PRIA records for the same pilot to different hiring carriers.

The Opportunity for Pilots to Correct Their Records May Come Too Late

Though enacted to keep unsafe pilots from being hired to fly commercial flights, PRIA may have some unintended negative effects. While PRIA allows pilots to review their records at any time, it does not require that they have an opportunity to submit written comments to correct their records before a hiring carrier receives their file. Thus, corrections, if submitted, may come too late to prevent the hiring carrier from seeing inaccuracies; information that should not, under PRIA, have been included in the files; or disputed information presented only from the carrier’s perspective. The timing of the opportunity to correct records can create problems for both pilots and carriers. Several of the pilots we interviewed said that after successfully completing interviews, simulator testing, or other screening procedures, carriers declined to hire them because of incorrect information in their PRIA records.
Some of these pilots said they were unaware of the incorrect information in their PRIA records and had no opportunity to correct it before the hiring carrier turned them down. The potential costs to both the pilot and the hiring carrier are even greater if the pilot has already accepted a position with the carrier, begun training, and given up a previous job before the PRIA records are reviewed. Eighty-two percent of the Part 121 carriers and 73 percent of the Part 135 carriers we surveyed reported that they do not have the PRIA records to review until after the pilot has accepted a conditional job offer and/or begun training. In addition to the timing of the opportunity to correct records, PRIA does not indicate, and FAA has provided no guidance on, how to submit corrections to PRIA records. As a result, even when carrier officials concur with a pilot and are willing to remove or correct a record, some said they are unsure whether the act allows them to do so. Several carrier officials said they were unsure how to remove records that they believed were unfair. For example, the president, attorney, and chief pilot for one Part 121 carrier fully concurred with a pilot that the record of a failed check ride should be removed from the pilot’s file because the training manager who had administered this and other check rides had subsequently been fired for being sexist and racist. The carrier consulted with FAA and aviation attorneys but could not determine whether it could remove the questionable training record. Instead, the carrier said it included a cover memorandum explaining why the training manager was fired, affirming the pilot’s skills, and describing an emergency evacuation in which the pilot saved the lives of two other pilots. Moreover, PRIA does not establish any procedures for arbitrating disagreements between a pilot and a current or former employer over PRIA records. 
Several of the pilots who contacted us described problems they had experienced in getting explanations or rebuttals to specific records included in their files. Six of these pilots knew of the incorrect information but said that they had little success in removing or rebutting disputed records. If the carrier has obtained a release of liability, PRIA establishes no remedy for wrong or unjust entries in a pilot’s records or for a failure to provide an opportunity to correct them. According to a law review article, several carrier officials and private aviation attorneys we interviewed, and some of the pilots who contacted us with concerns about the law, another unintended consequence of PRIA is that it can be used in ways that can diminish safety. Specifically, the weight that the act gives to records from current and former employers can be used as leverage in disputes between pilots and carriers over safety issues. According to the previously mentioned law review article, disagreements over “how much safety is enough against the background of economic competition” can pit pilots against managers “who also happen to hold the trump card of the pilot’s job in their hands.” In these instances, the article argues that pilots may feel compelled to subordinate their concerns about safety to their employers’ economic interests to avoid having negative information placed in their PRIA files. In addition, we found some indication that PRIA could be used in ways that reduce aviation safety rather than enhance it, as intended. Several of the pilots who contacted us said they had been threatened with having negative records placed in their PRIA files if they did not take actions that violated FAA’s safety regulations. Although we could not corroborate these allegations, largely because discussing them with the carriers could have further jeopardized the pilots’ careers, we raised them with FAA because they involve potentially serious safety issues.
For example, one pilot said he was pressured to fly an aircraft with serious maintenance problems, including an inoperable radar and autopilot, during bad weather. This pilot and another pilot who flew for the same carrier also said they had been pressured to identify mechanical problems informally on post-it notes instead of recording the problems in the aircraft’s maintenance log, as FAA requires. We provided FAA with information about the carriers involved that were still in business, but FAA was unable to substantiate the allegations, and the carriers denied them. Finally, according to the previously cited law review article, the inclusion of negative information in a PRIA file effectively negates the pilot’s ability to quit and go elsewhere. Several of the pilots who contacted us said they had been threatened with negative records in their PRIA files if they did not repay the costs of their training before leaving. Some carriers require that pilots sign training contracts and remain with the carriers for a prescribed period of months or years, after which time the cost of training is considered paid in full. According to the pilots, these contracts make it difficult to leave a carrier, especially if the carrier does not prorate the cost of training, because if a pilot departs early, he or she will owe all or a disproportionate amount of the total cost. Contract disputes about training costs were included in several PRIA files that we reviewed. For example, one pilot earning an annual salary of $15,000 said he quit after being pressured to commit safety violations. He offered in writing to repay a prorated portion of the $4,000 remaining on his training contract, but the carrier sued him. The judge ruled that the training costs did not have to be repaid. 
Although the broken training contract was not related to the pilot’s professional competence and the carrier lost the case, the carrier included a record of the lawsuit in the pilot’s file without reference to the judge’s ruling in the pilot’s favor. See chapter 4 for our analysis of how FAA could help to improve carriers’ awareness of PRIA’s requirements for protecting pilots’ rights and better inform pilots of their options for obtaining, reviewing, and correcting their records.

FAA Oversight of PRIA Implementation Has Been Limited

FAA has taken limited steps to oversee PRIA implementation and to monitor this program as part of its broader responsibility for aviation safety. To promote compliance with PRIA, FAA developed some guidance for carriers and its own staff, but the advisory circular that it issued for carriers in 1997 was soon outdated. A revised circular, which FAA produced in September 2001, addressed some of the issues on which carriers had sought clarification. FAA’s guidance also includes a new form for carriers that inappropriately (1) requires a pilot applicant to waive the rights provided by PRIA to be notified about requests and (2) changes the party PRIA makes responsible for notifying the pilot of a request for records. Although FAA has periodically provided its own inspectors with updated information on PRIA, it has not revised its handbooks or training for operations inspectors to incorporate PRIA’s requirements. To date, few inspections have identified problems with carriers’ implementation of PRIA, even though FAA’s inspectors, at our request, identified hundreds of Part 135 carriers that had hired pilots but did not request PRIA records from FAA. FAA has seldom initiated enforcement actions against carriers for PRIA violations.
FAA Has Specific Responsibilities under PRIA as Well as Broad Responsibility for Aviation Safety

Congress gave FAA exclusive authority to oversee the implementation of PRIA by authorizing FAA to issue regulations to (1) protect the personal privacy of the pilots whose records are disseminated and the confidentiality of those records, (2) preclude the further dissemination of PRIA records, and (3) ensure prompt compliance with any request for records. Congress gave FAA discretion in deciding whether to use this authority to issue regulations. To date, FAA has not used this authority. According to FAA’s Deputy Associate Administrator for Regulation and Certification, the agency views PRIA as self-implementing because it places the responsibility for collecting pilot records on the carriers, not on the government. FAA also has not issued PRIA regulations because it has allocated its regulatory resources to other priorities. In addition to the specific responsibilities given to FAA by PRIA, FAA has broad oversight responsibility and authority for aviation safety. This broad responsibility and authority apply to PRIA as well as to other aviation safety programs. Specifically, in carrying out its mission to ensure a safe and efficient national airspace system, FAA is responsible for issuing regulations and developing implementation guidance. FAA also performs inspections to ensure compliance with federal statutes such as PRIA and, under its general civil penalty authority in 49 U.S.C. 46301, may take enforcement action against violators.

FAA Has Developed Guidance on PRIA

FAA issued guidance for carriers in the form of an advisory circular. FAA issued this circular in May 1997, 3 months after PRIA went into effect, and revised the circular in September 2001. The revised circular incorporates information on the December 1997 and April 2000 amendments to PRIA, which Congress enacted, in part, to clarify aspects of the law.
For example, the revised circular includes a copy of the law, as amended; provides sample forms that hiring carriers may use in requesting pilot records; and discusses each of the key changes to PRIA made in the 1997 and 2000 amendments. (See table 3.)

Revised Advisory Circular Addresses Some Issues on Which Carriers Sought Clarification

In our survey, which we sent to carriers before FAA issued its revised advisory circular, we asked which aspects of PRIA’s hiring requirements carriers thought needed further clarification. Hiring carriers we contacted had preliminarily identified 17 key issues that they thought warranted further clarification. However, a majority of the Part 121 and Part 135 carriers indicated little or no need for clarification on most of these issues, including whether they were subject to PRIA, which records they were required to maintain, how long they were required to maintain the records, how they should store the records, who should be allowed to see the records, and whether the carriers could charge a fee for supplying the records. In contrast, many carriers indicated a great or very great need for clarification on how to proceed when the hiring carrier cannot obtain requested records and how to handle situations involving disciplinary actions taken against a pilot. (App. V presents complete information on which of the 17 key issues carrier hiring officials view as most in need of clarification.) FAA’s revised advisory circular addresses some, but not all, of carriers’ issues related to the need for clarification. Many carriers said there was a great need for clarification on how to proceed in four situations when they are unable to obtain records from other carriers. These situations and the percentages of Part 121 carriers identifying them as greatly needing clarification are shown in figure 8. Similar percentages of the Part 135 carriers we surveyed expressed a great need to clarify how to proceed in these four situations.
FAA’s revised circular explains how to proceed in two of these situations—when a hiring carrier cannot obtain a pilot’s records from a carrier that has gone out of business or from a foreign entity—both of which were outlined in good-faith exceptions in the 1997 and 2000 amendments, respectively. However, neither PRIA, as amended, nor the revised advisory circular offers guidance on how carriers should proceed when they are unable to obtain records from other carriers that are still in business or driving records from a state. Many carriers also sought clarification on handling situations involving disciplinary actions taken against a pilot by another carrier. Of the Part 121 carriers, 43 percent identified a great or very great need to clarify which disciplinary actions are related to an individual’s performance as a pilot and therefore should be provided to carriers interested in hiring their current and former employees. In addition, 39 percent of the Part 121 carriers identified a great or very great need to clarify how carriers should handle pilot records when a disciplinary action is resolved through a negotiated settlement. Similar percentages of Part 135 carriers sought clarification on the handling of these situations. However, neither PRIA, as amended, nor FAA’s revised advisory circular defines disciplinary actions, specifies which ones should be considered relevant and documented in PRIA records, or discusses how to remove records of disciplinary actions that have been resolved through a negotiated settlement. Resolving carriers’ questions about how to proceed when other carriers do not provide required records and how to determine what information about disciplinary actions should be provided to hiring carriers is important because such questions, if unresolved, can delay or preclude final hiring decisions.
FAA’s Revised Circular Includes a Sample Form That Requires Pilot Applicants to Waive Some Protections and Alters Notification Provisions Required in the Law

FAA’s guidance to carriers on PRIA includes a new sample form that requires a pilot applicant to waive certain rights provided by PRIA. The form also changes the party responsible for notifying the pilot of a request for records. In September 2001, FAA revised its advisory circular on PRIA and included the sample form for hiring carriers to use when requesting records from current and former employers. (App. VI includes a copy of FAA Form 8060-11, Air Carrier and Other Records Request (PRIA)—Pilot Records Improvement Act of 1996.) Part III of the form requires the pilot to waive PRIA’s requirement that the current or former employer receiving the request for records notify the pilot within 20 days of the request and of the pilot’s right to receive a copy of the records. The form does, however, provide information on the pilot’s right to receive a copy of the records within 30 days of requesting them in writing. FAA made these changes in the form to simplify and expedite the hiring process, according to the official in the Air Transportation Division who is responsible for overseeing policy decisions related to PRIA. In addition to violating provisions in the act, part III of form 8060-11 is problematic for several reasons and could reduce a pilot’s chances of knowing when records are actually forwarded to hiring carriers and of receiving a copy of the records. First, the form makes the hiring carrier responsible for notifying a pilot of a request rather than the current or former employer as PRIA specifies. Shifting responsibility for notifying the pilot does not follow the process outlined in the law, which requires the current or former employer to provide this notification.
Second, as we learned in interviewing pilots and hiring officials, forms completed at the time of application sometimes remain on file for months or years before being activated and submitted to current and former employers, particularly when the hiring carrier is a major carrier. Officials in FAA’s Aviation Data Systems Branch confirmed that the pilot often signs these forms months or even years before the hiring carrier submits them. In such cases, a pilot might not know whether and when the hiring carrier actually submits the request to the current and former employer. Furthermore, the revised form no longer includes a place for the pilot’s address, which makes it more difficult for former employers to obtain correct mailing information to notify the pilot of the hiring carrier’s request and to provide a copy of the records to the pilot, if requested.

FAA Has Developed Some Guidance on PRIA for Its Own Staff but Has Not Incorporated the Guidance into Its Handbooks and Training Classes

FAA has developed some additional guidance for its own staff. For example, the agency prepared draft guidance for its staff before PRIA took effect in February 1997, even though it did not issue the original advisory circular until May 1997. In addition, FAA has used E-mails and memorandums to its regional and field offices to further clarify PRIA’s requirements. Finally, FAA has assigned responsibility for responding to PRIA requests from carriers to staff in the Aviation Data Systems Branch and primary responsibility for answering policy questions about PRIA to the Air Transportation Division, both of which are within FAA’s Office of Flight Standards. FAA’s efforts to disseminate guidance on PRIA to its staff have not yet extended to revisions of the handbook that its operations inspectors are to use to monitor carriers’ training and use of pilots. Furthermore, the agency has not yet incorporated information on PRIA into its training classes for operations inspectors.
FAA uses its handbooks and training classes to familiarize inspectors with laws, regulations, and inspection protocols and to enhance their oversight and monitoring of carriers’ compliance with aviation laws and regulations. Without such information, inspectors may be unaware of PRIA and amendments to the law. FAA officials said they believe information on PRIA should be included in the handbooks and training, but they are awaiting the publication of our report to ensure that all relevant information is included. In the meantime, inspectors have been addressing their questions about PRIA to staff in the Aviation Data Systems Branch. On March 22, 2002, FAA activated a new Web site with information about PRIA for carriers and pilots. The site provides brief answers to frequently asked questions about how PRIA works, which records must be provided, and what protections are afforded to pilots under the law. It also includes links to a copy of the law, to FAA’s advisory circular that provides guidance on PRIA, and to forms used by carriers to request records. As of May 1, 2002, FAA had not linked the PRIA Web site to the agency’s home site or to the Web information that FAA maintains for carriers and pilots. Linking these sites would enhance the accessibility of the PRIA information. In the spring of 2000, FAA began drafting guidance on which penalties are appropriate when carriers violate PRIA’s requirements, according to attorneys in FAA’s Office of Chief Counsel. They said that this effort has become part of a larger one to revise penalty guidance in the agency’s enforcement handbook, which is being coordinated with other FAA offices. However, this coordination stopped after September 11, 2001, because of uncertainty about FAA’s future role in aviation and airport security. These officials said the coordination would proceed once this issue is resolved. 
We reviewed the draft guidance that had been completed and determined that it covers most PRIA provisions and should provide inspectors with a clearer basis for identifying and, where appropriate, for taking enforcement actions against carriers for violations of PRIA’s requirements. The draft guidance proposes penalties when a carrier fails to obtain the pilot’s consent to release records, provide the records within 30 days of a request, provide a copy of the records to the pilot, and provide the pilot with an opportunity to correct any inaccuracies in those records before making a final hiring decision.

Several Factors May Hamper FAA’s Operations Inspectors’ Ability to Monitor Compliance with PRIA

Several factors may explain why FAA’s operations inspectors, who conduct many thousands of inspections on carriers each year, have noted few problems with carriers’ compliance with PRIA. First, information on PRIA is not incorporated into the inspection handbooks and training classes; consequently, these inspectors have no reminders to check for compliance with PRIA. Second, FAA lacks the information needed to assess compliance with PRIA’s requirements for requesting records because PRIA does not require that this information be reported (see ch. 2). Third, FAA may lack evidence that carriers have obtained the required records before making final hiring decisions because PRIA does not require carriers to retain the records they have received. As of July 3, 2001, FAA’s Air Transportation Oversight System (ATOS) database, which tracks inspections of the nation’s 10 major passenger carriers, showed no entries related to PRIA. FAA’s older Program Reporting and Tracking Subsystem database, which tracks some limited information on the 10 major carriers, as well as the results of inspections on all other carriers, contained 76 inspection entries related to PRIA since the law’s implementation in 1997.
Two of these entries identified possible noncompliance with PRIA and led to the opening of enforcement cases, while the remaining 74 noted that the inspectors had provided information on PRIA to the carriers but did not identify any noncompliance. One additional entry, dated June 15, 1999, identified noncompliance but did not lead to the opening of an enforcement case. According to the entry, a random inspection of the records of 169 pilots with a large Part 121 carrier revealed noncompliance with PRIA, which the inspector reported to the carrier’s Vice President of Operations and to the responsible Principal Operations Inspector at FAA. Since PRIA’s implementation in February 1997, FAA has initiated 10 enforcement cases against six carriers. In the 3 most serious cases, which resulted in fines ranging from $2,500 to $30,000, the carriers failed to request PRIA background checks for 12 pilots they hired and, in 1 case, the carrier falsified documents related to providing PRIA checks. The remaining 7 cases either resulted in warning letters or were closed with no action. Although the number of inspection findings and enforcement actions could be indicative of widespread compliance with the act, our analyses of carriers’ requests for PRIA records and of carriers’ awareness of PRIA’s requirements for protecting pilots’ rights indicate that carriers are not always requesting the required records, especially from other carriers, and are not always sufficiently aware of the pilots’ rights protections to comply with them (see chs. 2 and 3). Alternatively, FAA inspectors may not be regularly reviewing carriers’ compliance. Without information on PRIA in their inspection handbooks and training classes, these inspectors have no reminders to check for compliance with PRIA.
According to FAA’s Deputy Associate Administrator, Office of Regulation and Certification, FAA’s monitoring focuses on a carrier’s processes and procedures for complying with PRIA, not on checks of records for individual pilots. Although we concur with the importance of checking carriers’ processes and procedures for complying with laws, FAA cannot determine whether a carrier actually follows its processes and procedures without performing at least limited spot checks. This system safety approach with compliance checks is the basis for the new ATOS inspection system that FAA uses to oversee the nation’s 10 major air carriers. Regardless of whether FAA operations inspectors attempt to monitor carriers’ compliance with PRIA, they may not have sufficient evidence to do so. Just as FAA lacks information needed to assess compliance with PRIA’s requirements for requesting records because PRIA does not require that this information be reported (see ch. 2), FAA also may lack evidence that carriers have obtained the required records before making final hiring decisions because PRIA does not require carriers to retain the records they have received. According to an attorney from FAA’s Office of Chief Counsel, nothing in the PRIA statute requires carriers to maintain the pilot records they receive from FAA, NDR, or other carriers. The statute requires carriers to maintain the records they generate on their pilot employees for 5 years, but it does not require them to store or maintain the PRIA records they receive when they hire pilots. Without these records, he noted, it is very difficult for FAA to determine a carrier’s compliance with PRIA. According to another official in FAA’s Air Transportation Division, carriers have an incentive to dispose of these records to avoid any liability resulting from their hiring decisions. 
Nonetheless, he observed that some carriers still keep these records, and he agreed that it would be almost impossible to complete an enforcement action against a carrier without them. Requiring carriers to maintain the PRIA records they receive could, however, be costly, especially for smaller carriers, according to the Deputy Associate Administrator, and these costs would not be warranted by the safety benefits achieved. According to FAA, it has not identified pilot performance during past training events as a high-risk area because of the extensive training, testing, and checking required for pilots. We believe it is important for FAA to be able to enforce the law. As previously discussed, FAA is responsible for overseeing PRIA’s implementation and has the authority to issue regulations or establish procedures for carriers to maintain the records needed for FAA to monitor and enforce compliance with the act. FAA has not issued regulations on PRIA because it believes that carriers, not the government, are responsible for collecting PRIA information. Furthermore, FAA believes that it should focus its regulatory resources on higher aviation-safety priorities. FAA officials agreed, however, that it was important for carriers to maintain records of background checks on pilots they hire to enable both the carriers and FAA to monitor PRIA’s implementation.

Conclusions

By making information about pilots’ qualifications, performance, and training available to hiring carriers, PRIA improves carriers’ ability to screen pilots and may help keep unsafe pilots out of the cockpits of commercial aircraft. However, FAA’s limited oversight of the act’s implementation, together with carriers’ incomplete compliance with the requirements of the law, may have prevented PRIA from being as effective or as protective of pilots’ rights as it could be.
For example:

Unresolved procedural issues—such as how to correct errors in pilot records, especially before hiring carriers see inaccurate information; how to remove inappropriate records; and how to handle disputes between pilots and carriers—effectively limit pilots’ rights. As individuals, pilots have less power than carriers, and without procedures for resolving these issues, they cannot compel carriers to correct or remove inaccurate records or settle disputes. Moreover, even when carriers are willing to make changes, they may not know how to do so. Inaccurate or inappropriate information may jeopardize a pilot’s chances of being hired.

FAA has not taken advantage of its Web site to make information about pilots’ rights readily available. Because the act does not mandate when pilots are to be given an opportunity to correct their records, except that it come before the final hiring decision, many pilots do not seek to correct their records until after the records have been sent to the hiring carrier. It is critical that FAA do what it can to make pilots aware of their rights to review and correct the records maintained by their current employer at any time. With knowledge of their rights, pilots can take responsibility for reviewing the accuracy of their records before the records are sent to hiring carriers.

The sample form that FAA designed for hiring carriers and included in its revised guidance for carriers, though intended to streamline the hiring process, weakens pilots’ rights and inappropriately shifts the responsibility for notifying pilots of requests for their records from current or former employers to hiring carriers. If carriers follow the procedures set forth in the sample form, they will not be in compliance with PRIA’s notification provision, and pilots may not know when records are sent to hiring carriers.

Although FAA has updated its information on PRIA for carriers, it has not yet included this information in key guidance for its own staff.
Until the agency incorporates its guidance on the act into its inspector handbooks and provides its inspectors with appropriate training, the inspectors may not be sufficiently aware of PRIA’s provisions to review carriers’ compliance. We do not know whether the limited number of inspection findings related to PRIA is indicative of widespread compliance, infrequent compliance reviews, or a lack of evidence to determine compliance. However, there is sufficient evidence—from the discrepancies in the number of records requested from FAA, NDR, and carriers; from the reviews of 798 Part 135 carriers conducted by FAA inspectors at our request; and from carriers’ responses to our survey questions about their requests for records in 2000—to suggest that noncompliance is occurring. The number of enforcement actions taken is also difficult to evaluate, given the number of inspection findings. However, FAA has said that it cannot enforce compliance because carriers are not required to retain the records that would demonstrate their compliance. We agree. Unless carriers retain the records they receive on pilots they hire, FAA cannot monitor or enforce their compliance with PRIA’s background check requirements. As the agency with exclusive responsibility for overseeing PRIA’s implementation, FAA has the authority and, we believe, the obligation to ensure that carriers have a system that will allow the carriers and FAA to check compliance with all PRIA requirements, especially whether required pilot background checks have been completed for pilots hired.
Recommendations for Executive Action

To assist FAA in overseeing the implementation of PRIA and to enable FAA to determine whether carriers have conducted the required background checks on pilots before making final hiring decisions, we recommend that the Secretary of Transportation direct the FAA Administrator to

- update FAA’s advisory circular on PRIA to (1) clarify which records to include in PRIA files that are forwarded to hiring carriers and which records to exclude and (2) have carriers put in place a system that will allow the carriers and FAA to check compliance with all PRIA requirements, especially whether required pilot background checks have been completed for pilots hired;

- incorporate information on PRIA’s Web site that informs pilots of their rights, including the right to review and correct their records under PRIA;

- revise the Air Carrier and Other Records Request form (FAA Form 8060-11) to conform with the law’s provisions for notification, review, and correction of records by pilots; and

- incorporate information on PRIA into the handbooks, inspection guidance, and training for FAA’s operations inspectors.

Agency Comments

We provided DOT with a copy of our draft report for review and comment. In our draft report, we recommended that FAA develop a regulation requiring that carriers maintain records of background checks on the pilots they hire for as long as the pilots remain in their employ. While FAA agreed that carriers need to maintain the records for the agency to monitor and enforce their compliance with the law, FAA proposed a change in its administrative guidance rather than a regulation to achieve this goal. We agreed that such a change could accomplish the intent of our initial recommendation and revised the recommendation accordingly. FAA concurred with all other recommendations in our draft report and suggested technical changes that we incorporated in this report where appropriate.
Most Carriers Found PRIA Records Helpful but Were Divided on Whether They Were Worth the Cost

Most carriers found PRIA records at least somewhat helpful, but they were divided on whether the records were worth the cost. The majority of the carriers favored changes that would make additional information available. Nearly three-quarters of the Part 121 carriers and about three-fifths of the Part 135 carriers that had made at least one request to FAA for PRIA information found PRIA records to be helpful in making their hiring decisions. Both groups of carriers found information from other sources, such as the job interview, the carrier’s flight evaluation of the pilot, and the results of the carrier’s training program, more helpful. Since PRIA’s costs are difficult to determine, Part 121 and Part 135 carriers were divided on whether the PRIA information they received in 2000 was worth the cost. Substantial majorities of both Part 121 and Part 135 carriers told us they would support changes to PRIA that would enable them to obtain additional information (1) from FAA on aviation accidents and incidents and on open, pending, and reopened enforcement cases and (2) from the Department of Defense on military pilots’ histories.

Carriers Generally Found PRIA Records Helpful in Making Hiring Decisions but Less Helpful than Information from Other Sources

Seventy-three percent of the Part 121 carriers found PRIA records at least somewhat helpful in making hiring decisions, and 27 percent said these records were not very helpful, as shown in figure 9. Among the Part 121 carriers, those with more than 1,000 pilots were more likely than smaller carriers to say that PRIA records were helpful in making hiring decisions, and 61 percent of these larger carriers rated PRIA as very helpful.
Compared with Part 121 carriers, Part 135 carriers found PRIA less helpful in making hiring decisions: 59 percent of the Part 135 carriers found PRIA at least somewhat helpful, and 41 percent said that PRIA was not very helpful. (See fig. 10.) The Part 121 carriers found PRIA more useful in encouraging pilots to be honest about their background and experiences than did the Part 135 carriers we surveyed. Sixty-eight percent of the Part 121 carriers rated PRIA as moderately or very useful in this regard, compared with 49 percent of the Part 135 carriers. Because PRIA records allow carriers to verify pilots’ statements, receiving the records increases the likelihood that carriers will detect false statements. Nonetheless, 11 percent of the Part 121 carriers and 25 percent of the Part 135 carriers indicated that PRIA was not very useful in encouraging pilots to be honest. According to carriers’ responses to our surveys, PRIA information played a greater role in decisions not to hire pilots for Part 121 carriers than for Part 135 carriers. In 2000, 43 percent of the Part 121 carriers said they decided not to hire pilots because of PRIA information, compared with 9 percent of the Part 135 carriers we surveyed. The Part 121 carriers said they decided not to hire 156 pilots in 2000 because of PRIA information, while we estimate that the far more numerous Part 135 carriers decided not to hire 162 pilots. About two-thirds of the Part 121 carriers that said they did not hire a pilot because of PRIA information indicated that the circumstances surrounding the pilot’s departure from a previous employer and the pilot’s training records were major or moderate factors that influenced their decisions. About half of these carriers identified the pilot’s driving records or enforcement history as a factor, as shown in figure 11. Similar percentages of Part 135 carriers reported being influenced by the circumstances surrounding a pilot’s departure and by a pilot’s training records.
Evidence of falsification, employers’ records of comments and evaluations, and driving records were, however, much less influential for the few Part 135 carriers that did not hire pilots because of PRIA information than they were for the Part 121 carriers, and such evidence played a smaller role in their hiring decisions than did the pilot’s enforcement history. Both the Part 121 carriers and the Part 135 carriers we surveyed reported that they found some PRIA records more helpful in making their hiring decisions than others. Specifically, as shown in figure 12, they found FAA information on closed enforcement actions during the past 5 years the most helpful and FAA’s verification of the pilot’s medical record and the pilot’s NDR information the least helpful. According to the Part 121 carriers, the information not required by PRIA was generally far more helpful than PRIA records in making final hiring decisions. Virtually all of the Part 121 carriers reported that they found the results of job interviews, their own training programs, evaluations of a pilot’s flying skills, and recommendations from other pilots at least somewhat helpful in making these decisions. (See fig. 13.) This seems reasonable, given that most carriers make a conditional job offer to, and begin training, a pilot on the basis of non-PRIA information and have made their hiring decision by the time they receive and review PRIA files. According to our survey results, 82 percent of the Part 121 carriers had PRIA records available for review after a pilot had accepted a conditional job offer or begun training. Moreover, some carrier officials said, in survey comments and interviews, that they view training and performance records from other carriers as subjective. They said that PRIA information rarely changes their hiring decision unless they see multiple problems in a pilot’s file. 
Similarly, in making their final hiring decisions, the Part 135 carriers that had requested PRIA files from FAA generally reported finding information not required by PRIA to be more helpful than PRIA records. At least 94 percent of these Part 135 carriers said they found the results of job interviews, their own evaluations of a pilot’s flying skills, and recommendations from other pilots at least somewhat helpful in making these decisions. About 73 percent of the Part 135 carriers said they had PRIA records available for review late in the hiring process—after a pilot had accepted a conditional job offer or begun training.

Carriers Differed on Whether PRIA Is Worth the Cost

Although the total costs of implementing PRIA are difficult to estimate, carriers bear the largest portion of these costs. According to our survey results, the Part 121 carriers spent substantially more, on average, than the Part 135 carriers to comply with PRIA in 2000, and the Part 121 carriers were more likely to view their costs as justified by the usefulness of the information received.

Costs of Implementing PRIA Are Difficult to Determine

The costs of implementing PRIA—to carriers, pilots, and federal and state agencies—are difficult to determine. In December 2000, FAA estimated $5.3 million in implementation costs, including the costs to carriers, pilots, and the agency itself; however, this estimate did not include a number of costs to these entities and individuals, and it did not include the costs of providing and obtaining NDR information. For carriers, the full costs of implementing PRIA are difficult to determine because they often are not tracked separately from other hiring and record-keeping costs. For FAA and NDR, the costs are also difficult to determine because complete data are not available.
FAA estimated that carriers, in requesting and responding to requests for PRIA records, incurred $4.6 million, or about 86 percent, of the estimated $5.3 million in total implementation costs. This estimate covered the costs of staff time to obtain a pilot’s signatures on release forms, to request the records, and to follow up on records that do not arrive within 30 days; staff time to review and evaluate an applicant’s file once it is received; staff time needed to prepare, copy, review the contents of, and mail PRIA records; and the maintenance of records related to PRIA’s requirements. FAA’s estimate did not include the fees that hiring carriers pay to background investigation companies and to those carrier employers and state motor vehicle agencies that charge a fee for providing records. About 46 percent of the Part 121 carriers and about 26 percent of the Part 135 carriers responding to our survey reported hiring background investigation companies to obtain at least some PRIA records. Most carriers—81 percent of the Part 121 carriers and 94 percent of the Part 135 carriers we surveyed—said they do not charge a fee for providing records. Two of the six state motor vehicle agencies that respond to the vast majority of carriers’ requests for NDR information charge for this service. In responding to our survey, the Part 121 carriers indicated that they spent an average of $7,000 to comply with PRIA in 2000. The Part 135 carriers said they spent an average of $1,000 or less. The total costs for individual Part 121 carriers ranged from $1,000 or less to between $100,001 and $1 million. The total costs for individual Part 135 carriers ranged from $1,000 or less to between $10,001 and $50,000. (For more detailed cost information, see app. VII.)
Carrier officials told us that the costs for PRIA are difficult to distinguish from other hiring costs, partly because most carriers do not have staff dedicated to carrying out PRIA requirements and use the same staff to perform both PRIA and other responsibilities. In addition, the fees that background investigation companies charge to obtain PRIA information may not be billed separately from their fees for performing other services that are not required by PRIA, such as consumer credit or criminal records checks. For 2000, FAA estimated that pilots whose PRIA files were requested incurred total costs of nearly $436,000, or about 8 percent of the estimated $5.3 million total cost. This estimate did not include the pilots’ costs to obtain copies of their employment and training records from carriers for review and to obtain copies of their driving records. Complete data are not available on the costs to FAA, NDR, and state motor vehicle agencies of implementing PRIA. For fiscal year 2000, when FAA provided a more complete estimate of its PRIA costs than it had developed in previous years, the agency estimated its own costs at about $312,000, or about 6 percent of the $5.3 million total estimate. However, according to staff in FAA’s Aviation Data Systems Branch, this estimate did not include about $40,000 that FAA spent in fiscal year 2000 for the initial development and maintenance of its automated system for responding to PRIA requests. NDR officials reported spending $318 to complete 17,000 requests from carriers in 1999, but this figure did not include any portion of the $1 million that NDR spends annually to maintain its computer system or of the costs that NDR incurs to hand-process pilots’ requests for driver information under the Privacy Act. None of the six state motor vehicle agencies that respond to most carriers’ requests for NDR information could identify the costs of providing the information, according to responsible state officials. 
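The cost shares in FAA's estimate can be checked with simple arithmetic. The dollar figures below are the ones FAA reported; the percentage calculation is the only addition:

```python
# FAA's December 2000 estimate of PRIA implementation costs, by bearer.
# Dollar figures are from FAA's estimate; shares are computed against
# the roughly $5.3 million total.
TOTAL_ESTIMATE = 5_300_000

costs = {
    "carriers": 4_600_000,  # requesting and responding to record requests
    "pilots": 436_000,      # signing releases, reviewing files
    "FAA": 312_000,         # the agency's own processing costs
}

for bearer, cost in costs.items():
    share = 100 * cost / TOTAL_ESTIMATE
    print(f"{bearer}: ${cost:,} ({share:.1f}% of the estimated total)")
```

The computed shares line up with the approximate percentages cited in the report, allowing for rounding in FAA's underlying figures.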
Carriers Were Divided on Whether PRIA’s Costs Were Justified

Compared with the Part 135 carriers we surveyed, the Part 121 carriers were more persuaded that the PRIA information they received was worth the cost, but even the Part 121 carriers were split in their views. Specifically, 52 percent of the Part 121 carriers believed that their PRIA costs in 2000 were justified by the usefulness of the information received, while 48 percent did not. Conversely, nearly two-thirds of the Part 135 carriers did not believe that their PRIA costs were justified, whereas about one-third did.

Most Carriers Favored Changes That Would Make Additional Information Available under PRIA

Substantial majorities of both Part 121 and Part 135 carriers told us they would support changes to PRIA that would enable them to obtain additional information from FAA on aviation accidents and incidents and on open, pending, and reopened enforcement cases. Carriers can obtain this information from FAA under a Freedom of Information Act (FOIA) request, but pilots are not informed of FOIA requests and are not provided copies of the FOIA files that are sent to potential employers. FOIA records also have not undergone as much legal review as PRIA records. The majority of Part 121 carriers also told us they would support changes to PRIA that would enable them to obtain flight records from the military. Such records are not available through FOIA requests.

Carriers Can Request More Safety Information on Pilots under FOIA Than under PRIA

Currently, carriers can obtain more extensive information on a pilot’s safety history from FAA under a FOIA request than under a PRIA request. In responding to a FOIA request, FAA can release information on all open, closed, and pending enforcement cases from which the pilot’s identity has not been expunged, even if those cases are more than 5 years old. In responding to a PRIA request, however, FAA is precluded by the act from releasing any records more than 5 years old.
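The difference between the two disclosure paths boils down to filters on age and case status. A minimal sketch of the contrast, assuming a simple dictionary record format (the field names and the 5-years-as-days approximation are illustrative, not FAA's actual data model):

```python
from datetime import date

FIVE_YEARS_IN_DAYS = 5 * 365  # rough approximation of the act's 5-year window

def releasable_under_pria(record: dict, today: date) -> bool:
    """PRIA: only closed legal enforcement actions no more than 5 years old."""
    return (record["kind"] == "enforcement"
            and record["status"] == "closed"
            and not record["expunged"]
            and (today - record["date"]).days <= FIVE_YEARS_IN_DAYS)

def releasable_under_foia(record: dict) -> bool:
    """FOIA: any record from which the pilot's identity has not been expunged,
    including open or pending cases and records more than 5 years old."""
    return not record["expunged"]

# A closed enforcement case from 1995, checked as of January 10, 2002:
old_case = {"kind": "enforcement", "status": "closed",
            "date": date(1995, 6, 1), "expunged": False}
print(releasable_under_pria(old_case, date(2002, 1, 10)))  # too old for PRIA
print(releasable_under_foia(old_case))                     # still FOIA-eligible
```

The same record can therefore be withheld under one statute and released under the other, which is the gap the carriers' survey responses address.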
In addition, as a matter of law, FAA provides information only on those accidents and incidents that have resulted in a legal enforcement action. Under PRIA, FAA is not required to provide other records that it maintains on aviation accidents and incidents as well as on open, pending, and reopened enforcement cases. According to FAA’s analysis, nearly 20,000 more records were available in response to FOIA requests than to PRIA requests as of January 10, 2002. (See table 6.) These include 9,885 records of enforcement cases that have been closed but from which the pilot’s name has not been expunged as well as of open, pending, or reopened enforcement cases. The remaining records that FAA could provide were of accidents and incidents. According to an analysis done by staff in FAA’s Aviation Data Systems Branch, fewer than one-quarter of the 1,726 carriers that requested PRIA records between February 6, 1997, when the law went into effect, and January 11, 2002, requested additional safety information under FOIA. The analysis concluded that the majority of carriers are unaware that they are receiving incomplete safety records from FAA under PRIA. Additionally, the report noted that, under FOIA, there are no requirements to notify a pilot of a request for records, to obtain the pilot’s consent for the release of those records, or to provide the pilot with a copy of the records that were released. For a variety of reasons, FAA does not support a change in the law that would give carriers more complete safety information on pilot applicants by requiring the agency to release its records of accidents; incidents; and open, pending, or reopened enforcement cases. According to FAA’s Office of Chief Counsel, using the reports of accidents and incidents in FAA’s data system to evaluate pilots’ performance could be unfair because these reports may not involve pilot error.
Moreover, even if they do, pilots identified in accident and incident reports do not receive the same due process protections that pilots receive when they are subject to legal enforcement actions. Additionally, enforcement actions that have not been closed have not been fully reviewed by FAA, NTSB, and perhaps a U.S. Court of Appeals. These cases could eventually be dropped or dismissed.

Carriers Favored Receiving Additional Information in Response to PRIA Requests

Over three-quarters of the Part 121 carriers and about two-thirds of the Part 135 carriers we surveyed supported having FAA include additional information on accidents and incidents and on open, pending, and reopened enforcement cases in response to PRIA requests. However, the carriers were much less supportive of including enforcement information over 5 years old. (See table 7.) Although Congress excluded military flight records from the sources of PRIA information in April 2000, 62 percent of the Part 121 carriers and 56 percent of the Part 135 carriers we surveyed supported a change in the law that would enable them to receive these records. Currently, carriers can review a military pilot’s logbook to obtain information on the pilot’s flight hours, types of equipment flown, and rate of progress in mastering new aircraft as well as any flight-related disciplinary actions. While substantial majorities of both Part 121 and Part 135 carriers that had hired military pilots found the military logbook helpful in making their hiring decision, the carriers nevertheless favored receiving flight records directly from the military. Military records are important to carriers because they hire many pilots with military flight experience. Eighty-eight percent of the Part 121 carriers reported hiring at least some pilots with military flight experience in 2000, and the largest Part 121 carriers, with more than 1,000 pilots, reported that about 40 percent of the pilots they hired in 2000 had military experience.
Forty-six percent of the Part 135 carriers that had requested PRIA files from FAA hired pilots with military flying experience in 2000.

The Pilot Records Improvement Act, enacted on October 9, 1996, responded to seven fatal commercial air carrier accidents that were attributed, in part, to errors by pilots who had been hired without background checks. The act, which took effect on February 6, 1997, requires air carriers, before making final hiring decisions, to obtain information for the past 5 years on a pilot applicant's performance, qualifications, and training from the Department of Transportation's Federal Aviation Administration (FAA), employers, and the National Driver Register (NDR). The act also includes provisions to protect pilots' rights. FAA oversees compliance with the act and has broad responsibility for overseeing aviation safety. According to GAO's analyses of FAA and NDR databases and carriers' responses to GAO's surveys, compliance with the act has generally increased since it went into effect, but compliance is not always complete or timely. The available data are not adequate to determine industrywide compliance. According to their responses to GAO's surveys, carriers are not always aware of the act's requirements for protecting pilots' rights. FAA has taken limited steps to oversee compliance with PRIA. Under the act and its broad responsibility for aviation safety, FAA can issue implementing regulations, develop guidance, conduct inspections to monitor carriers' compliance, and initiate enforcement actions when it finds evidence of noncompliance. FAA has not issued regulations because it regards the act as self-implementing and believes that its regulatory resources should be reserved for higher agency priorities. Although FAA provided guidance for carriers, it was slow to update the guidance after the act was amended.
Although they generally found records useful in making hiring decisions, carriers were divided in their opinions on whether the records were worth the cost. However, both groups of carriers found information from other sources, such as the job interview, the carrier's flight evaluation of the pilot, and the results of the carrier's training program, more helpful.
Background

IRS relies on automated information systems to process over 200 million taxpayer returns and collect over $1 trillion in taxes annually. IRS operates 10 facilities throughout the United States to process tax returns and other information supplied by taxpayers. These data are then electronically transmitted to a central computing facility, where master files of taxpayer information are maintained and updated. A second computing facility processes and stores taxpayer data used by IRS in conducting certain compliance functions. There are also hundreds of other IRS facilities (e.g., regional and district offices) that support tax processing. Because of IRS’ heavy reliance on systems, effective security controls are critical to IRS’ ability to maintain the confidentiality of taxpayer data, safeguard assets, and ensure the reliability of financial management information.

Computer Security Requirements

The Computer Security Act requires, among other things, the establishment of standards and guidelines for ensuring the security and privacy of sensitive information in federal computer systems. Similarly, IRS’ Tax Information Security Guidelines require that all computer and communication systems that process, store, or transmit taxpayer data adequately protect these data, and the Internal Revenue Code prohibits the unauthorized disclosure of federal returns and return information outside IRS. To adequately protect the data, IRS must ensure that (1) access to computer data, systems, and facilities is properly restricted and monitored, (2) changes to computer systems software are properly authorized and tested, (3) backup and recovery plans are prepared, tested, and maintained to ensure continuity of operations in the case of a disaster, and (4) data communications are adequately protected from unauthorized intrusion and interception. Also, Treasury requires IRS to have C2-level safeguards to protect the confidentiality of taxpayer data.
The Department of Defense defines a hierarchy of security levels (i.e., A1, B3, B2, B1, C2, C1, and D) with A1 currently being the highest level of protection and D being the minimum level of protection. C2-level safeguards include all the requirements from the D and C1 levels and are required by IRS for all sensitive but unclassified data. These safeguards ensure need-to-know protection and controlled access to data, including a security policy that requires access control; identification and authentication that provide mechanisms to continually maintain accountability; operational and life-cycle assurances that include validations of system integrity and computer systems tests of security mechanisms; and documentation such as a security features user’s guide, test documentation, and design documentation.

Prior GAO Work on IRS Computer Security

Over the past 3 years, we testified and reported numerous times on serious weaknesses with security and other internal controls used to safeguard IRS computer systems and facilities. For instance, in August 1993, we identified weaknesses in IRS’ systems which hampered the Service’s ability to effectively protect and control taxpayer data. In this regard, we found that (1) IRS did not adequately control access given to computer support personnel over taxpayer data and (2) established controls did not provide reasonable assurance that only approved versions of computer programs were implemented. Subsequently, in December 1993, IRS identified taxpayer data security as a material weakness in its Federal Managers’ Financial Integrity Act report. In 1994, we also reported, and IRS acknowledged, that while IRS had made some progress in correcting computer security weaknesses, IRS still faced serious and longstanding control weaknesses over automated taxpayer data.
Moreover, we reported that these longstanding weaknesses were symptomatic of broader computer security management issues, namely, IRS’ failure to (1) clearly delineate responsibility and accountability for the effectiveness of computer security within the agency and (2) establish an ongoing process to assess the effectiveness of the design and implementation of computer controls. To address these issues, we recommended that IRS greatly strengthen its computer security management, and IRS agreed to do so. The unauthorized electronic access of taxpayer data by IRS employees—commonly referred to as browsing—has been a longstanding problem for the Service. In October 1992, IRS’ Internal Audit reported that the Service had limited capability to (1) prevent employees from unauthorized access to taxpayers’ accounts and (2) detect an unauthorized access once it occurred. We reported in September 1993 that IRS did not adequately (1) restrict access by computer support staff to computer programs and data files or (2) monitor the use of these resources by computer support staff and users. As a result, personnel who did not need access to taxpayer data could read and possibly use this information for fraudulent purposes. Also, unauthorized changes could be made to taxpayer data, either inadvertently or deliberately for personal gain, for example, to initiate unauthorized refunds or abatements of tax. In August 1995, we reported that the Service still lacked sufficient safeguards to prevent or detect unauthorized browsing of taxpayer information.

IRS Organizations Responsible for Managing Computer Security

Several organizations within the IRS are responsible for the security of IRS computer resources and the facilities that house them.
For example, the Office of the Chief Information Officer is responsible for formulating policies and issuing guidelines for logical security, data security, risk analysis, security awareness, security management, contingency planning, and telecommunications. The Real Estate division within the Office of the Chief for Management and Administration is responsible for formulating policies and issuing guidelines for physical security. The field offices (e.g., service centers, computing centers, regional offices, district offices) are responsible for implementing these policies and guidelines at their locations. Compliance with the policies and procedures is assessed by both the headquarters and field offices.

Serious System Security Weaknesses Persist

Weaknesses in IRS’ computer systems security continue to place taxpayer data and IRS’ automated information systems at risk to both internal and external threats, which could result in the loss of computer services, or in the unauthorized disclosure, modification, or destruction of taxpayer data. While IRS has made some progress in protecting taxpayer data, serious weaknesses persist. During our five on-site reviews, we found numerous weaknesses in the following eight functional areas: physical security, logical security, data communications management, risk analysis, quality assurance, internal audit and security, security awareness, and contingency planning. Primary weaknesses were in the areas of physical and logical security.

Physical Security

Physical security and access control measures, such as locks, guards, fences, and surveillance equipment, are critical to safeguarding taxpayer data and computer operations from internal and external threats. We found many weaknesses in physical security at the facilities visited.
The following are examples of these weaknesses:
- Collectively, the five facilities could not account for approximately 6,400 units of magnetic storage media, such as tapes and cartridges, which could contain taxpayer data. The number per facility ranged from a low of 41 to a high of 5,946.
- Fire suppression trash cans were not used in several facilities.
- Printouts containing taxpayer data were left unprotected and unattended in open areas of two facilities where they could be compromised.

Logical Security

Logical security controls limit access to computing resources to only those (personnel and programs) with a need to know. Logical security control measures include the use of safeguards incorporated in computer hardware, system and application software, communication hardware and software, and related devices. We found numerous weaknesses in logical security at the facilities visited. Examples of these vulnerabilities include the following:
- Tapes containing taxpayer data were not overwritten prior to reuse.
- Access to system software was not limited to individuals with a need to know. For example, at two facilities, we found that data base administrators had access to system software, although their job functions and responsibilities did not require it.
- Application programmers were allowed to move development software into the production environment without adequate controls. In addition, these programmers were allowed to use taxpayer data for testing purposes, which places these data at unnecessary risk of unauthorized disclosure and modification.

Data Communications Management

Data communications management is the function of monitoring and controlling communications networks to ensure that they operate as intended and transmit timely, accurate, and reliable data securely. Without adequate data communications security, the data being transmitted can be destroyed, altered, or diverted, and the equipment itself can be damaged.
At the five facilities, we found numerous communications management weaknesses.

Risk Analysis

The purpose of risk analysis is to identify security threats, determine their magnitude, and identify areas needing additional safeguards. We found risk analysis weaknesses at the five facilities. For example, none of the facilities visited conducted a complete risk analysis to identify and determine the severity of all the security threats to which they were vulnerable. Without these analyses, systems’ vulnerabilities may not be identified and appropriate controls not implemented to correct them.

Quality Assurance

An effective quality assurance program requires reviewing software products and activities to ensure that they comply with the applicable processes, standards, and procedures and satisfy the control and security requirements of the organization. One aspect of a quality assurance program is validating that software changes are adequately tested and will not introduce vulnerabilities into the system. We found many weaknesses in quality assurance at the five facilities visited, including instances of failing to independently test all software prior to placing it into operation. In addition, when software products were tested, this testing was sometimes incomplete (e.g., did not include integrity or stress testing). Such quality assurance weaknesses can result in systems not functioning properly, putting federal taxpayer data at risk.

Internal Audit and Security

Internal audit and internal security functions are needed to ensure that safeguards are adequate and to alert management to potential security problems. We found many weaknesses in the internal audit or internal security functions at the five facilities visited. For example, two of the facilities had not audited operations within the last 5 years.
Security Awareness

An effective security awareness program is the means through which management communicates to employees the importance of security policies, procedures, and responsibilities for protecting taxpayer data. Three of the five IRS facilities did not have an adequate security awareness program. For example, at one site there was no process in place for ensuring that management was made aware of security violations and security-related issues. We found several security awareness weaknesses at four of the five facilities.

Contingency Planning

A contingency plan specifies emergency response, backup operations, and post-disaster recovery procedures to ensure the availability of critical resources and facilitate the continuity of operations in an emergency situation. It addresses how an organization plans to deal with the full range of contingencies, from electrical power failures to catastrophic events, such as earthquakes, floods, and fires. It also identifies essential business functions and prioritizes resources in order of criticality. To be effective when needed, a contingency plan must be periodically tested, and personnel must be trained in and familiar with its use. None of the five facilities visited had comprehensive disaster recovery plans. Specifically, we found that disaster recovery procedures at two of the five facilities had not been tested, while plans for the remaining locations were incomplete, i.e., they failed to include instructions for restoring all mission-critical applications and reestablishing telecommunications. Further, none had completed business resumption plans, which should specify the disaster recovery goals and milestones required to meet the business needs of their customers. We found many weaknesses in this functional area at the five sites visited.
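The contingency-plan elements the review found missing lend themselves to a simple completeness check. A minimal sketch, with hypothetical field names drawn from the criteria above:

```python
# Required contingency-plan elements, per the criteria discussed above.
# The dictionary keys are hypothetical; the criteria come from the text.
REQUIRED_ELEMENTS = {
    "procedures_tested": "disaster recovery procedures have been tested",
    "restores_mission_critical_apps":
        "instructions for restoring all mission-critical applications",
    "reestablishes_telecom":
        "instructions for reestablishing telecommunications",
    "business_resumption_plan": "a completed business resumption plan",
}

def missing_elements(plan: dict) -> list:
    """Return descriptions of required elements the plan lacks."""
    return [desc for key, desc in REQUIRED_ELEMENTS.items()
            if not plan.get(key, False)]

# A facility whose plan is untested and lacks a business resumption plan:
facility_plan = {
    "procedures_tested": False,
    "restores_mission_critical_apps": True,
    "reestablishes_telecom": True,
    "business_resumption_plan": False,
}
for gap in missing_elements(facility_plan):
    print("missing:", gap)
```

A checklist like this only establishes that a plan addresses each element on paper; as the report notes, a plan is not effective until it has actually been tested and staff are trained in its use.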
Electronic Browsing Is Not Being Addressed Effectively

Taxpayer information can be compromised when IRS employees who do not have a need to know electronically peruse files and records. This practice, which is commonly called browsing, is an area of continuing serious concern. To address this concern, IRS developed an information system—the Electronic Audit Research Log (EARL)—to monitor and detect browsing on the Integrated Data Retrieval System (IDRS), the primary computer system IRS employees use to access and adjust taxpayer accounts. IRS has also taken legal and disciplinary actions against employees caught browsing. However, EARL has shortcomings that limit its ability to detect browsing. In addition, IRS does not know whether the Service is making progress in reducing browsing. Further, IRS facilities inconsistently (1) review and refer incidents of employee browsing, (2) apply penalties for browsing violations, and (3) publicize the outcomes of browsing cases to deter other employees from browsing.

EARL’s Ability to Detect Browsing Is Limited

EARL cannot detect all instances of browsing because it only monitors employees using IDRS. EARL does not monitor the activities of IRS employees using other systems, such as the Distributed Input System, the Integrated Collection System, and the Totally Integrated Examination System, which are also used to create, access, or modify taxpayer data. In addition, information systems personnel responsible for systems development and testing can browse taxpayer information on magnetic tapes, cartridges, and other files using system utility programs, such as the Spool Display and Search Facility, which also are not monitored by EARL. Further, EARL has some weaknesses that limit its ability to identify browsing by IDRS users.
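The core difficulty for any such audit monitor is deciding, from log entries alone, whether an access was work-related. A deliberately naive sketch of that matching problem (all names, fields, and data are hypothetical; this is not EARL's actual logic):

```python
# Flag any audit-log access to a taxpayer account that is not in the
# accessing employee's assigned caseload. All identifiers are made up.
def flag_potential_browsing(accesses, caseloads):
    """accesses: iterable of (employee, account) pairs from an audit log.
    caseloads: mapping of employee -> set of accounts assigned to them.
    Returns the accesses with no matching assignment."""
    return [(employee, account)
            for employee, account in accesses
            if account not in caseloads.get(employee, set())]

audit_log = [("emp-1", "acct-100"),  # assigned, so treated as legitimate
             ("emp-1", "acct-200"),  # not assigned, so flagged
             ("emp-2", "acct-300")]  # no caseload on file, so flagged
caseloads = {"emp-1": {"acct-100"}}
print(flag_potential_browsing(audit_log, caseloads))
```

Any legitimate access that falls outside a recorded caseload (a phone-assist lookup, a reassigned case) is flagged as well, which is one way a monitor of this kind ends up producing far more potential incidents than actual browsing cases.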
For example, because EARL is not effective in distinguishing between browsing activity and legitimate work activity, it identifies so many potential browsing incidents that a subsequent manual review to find incidents of actual browsing is time-consuming and difficult. IRS is evaluating options for developing a newer version of EARL that may better distinguish between legitimate activity and browsing. Because IRS does not monitor the activities of all employees authorized to access taxpayer data and does not monitor the activities of information systems personnel authorized to access taxpayer data for testing purposes, IRS has no assurance that these employees are not browsing taxpayer data and no analytical basis on which to estimate the extent of the browsing problem or any damage being done.

IRS Progress in Reducing and Disciplining Browsing Cases Is Unclear

IRS’ management information systems do not provide sufficient information to describe known browsing incidents precisely or to evaluate their severity consistently. IRS personnel refer potential browsing cases to either the Labor Relations or Internal Security units, each of which records information on these potential cases in its own case tracking system. However, neither system captures sufficient information to report on the total number of unauthorized accesses. For example, neither system contains enough information on each case to determine how many taxpayer accounts were inappropriately accessed or how many times each account was accessed. Consequently, for known incidents of browsing, IRS cannot efficiently determine how many and how often taxpayers’ accounts were inappropriately accessed. Without such information, IRS cannot measure whether it is making progress from year to year in reducing browsing. A recent report by the IRS EARL Executive Steering Committee shows that the number of browsing cases closed has fluctuated from a low of 521 in fiscal year 1991 to a high of 869 in fiscal year 1995.
However, the report concluded that the Service does not consistently count the number of browsing cases and that “. . . it is difficult to assess what the detection programs are producing. . . or our overall effectiveness in identifying IDRS browsing.” Further, the committee reported “the percentages of cases resulting in discipline has remained constant from year to year in spite of the Commissioner’s ’zero tolerance’ policy.” IRS browsing data for fiscal years 1991 to 1995 show that the percentage of browsing cases resulting in IRS’ three most severe categories of penalties (i.e., disciplinary action, separation, and resignation/retirement) has ranged between 23 and 34 percent, with an average of 29 percent.

Incidents of Browsing Are Reviewed and Referred Inconsistently

According to IRS, effectively addressing employee browsing requires consistent review and referral of potential browsing across IRS. However, IRS processing facilities do not consistently review and refer potential browsing cases. The processing facilities responsible for monitoring browsing had different policies and procedures for identifying potential violations and referring them to the appropriate unit within IRS for investigation and action. For example, at one facility, the analysts who identified potential violations referred all of them to Internal Security, while staff at another facility sent some to Internal Security and the remainder to Labor Relations. The analysts handle the review and referral of potential violations differently because IRS policies and procedures do not provide guidance in these areas. In June 1996, IRS’ Internal Audit reported that IRS management had not developed procedures to ensure that potential browsing cases were consistently reviewed and referred to management officials throughout the agency.
Internal Audit further reported that analysts were not given clear guidance on where to refer certain cases, especially those involving potential Internal Security cases, and that procedures had been developed by some facilities but varied from site to site. IRS has acted to improve the consistency of its process. In June 1996, it developed specific criteria for analysts to use when making referral decisions. A recent report by the EARL Executive Steering Committee stated that IRS had implemented these criteria nationwide. Because IRS was in the process of implementing these criteria during our work, we could not validate their implementation or effectiveness.

Penalties for Browsing Are Inconsistent Across IRS

IRS policies and procedures on disciplining employees caught browsing direct IRS management to ensure that decisions are appropriate and consistent agencywide. After several IRS directors raised concern that field offices were not consistent in the types of discipline imposed in similar cases, IRS’ Western Region analyzed fiscal year 1995 browsing cases for all its offices and found inconsistent treatment for similar types of offenses. Examples of inconsistent discipline included the following:
- Temporary employees who attempted to access their own accounts were given letters of reprimand, although historically, IRS terminated temporary employees for this type of infraction.
- One employee who attempted to access his own account was given a written warning, while other employees in similar situations, from the same division, were not counseled at all.

The EARL Executive Steering Committee also reported widespread inconsistencies in the penalties imposed in browsing cases. For example, the committee’s report showed that for fiscal year 1995, the percentage of browsing cases resulting in employee counseling ranged from a low of 0 percent at one facility to 77 percent at another.
Similarly, the report showed that the percentage of cases resulting in removal ranged from 0 percent at one facility to 7 percent at another. For punishments other than counseling or removal (e.g., suspension), the range was between 10 percent and 86 percent.

Punishments Assessed for Browsing Not Consistently Publicized to Deter Violations

IRS facilities did not consistently publicize the penalties assessed in browsing cases to deter such behavior. For example, we found that one facility never reported disciplinary actions. A representative at this facility told us that employees were generally aware of cases involving embezzlement and fraud if the cases received media attention. However, another facility reported the disciplinary outcomes of browsing cases in its monthly newsletter. For example, it cited a management official who accessed a relative’s account and was punished. This facility publicized cases involving employees at all grade levels to emphasize that browsing taxpayer data is a serious offense punishable by adverse administrative actions or legal sanctions, including loss of job and criminal prosecution. By inconsistently and incompletely reporting on penalties assessed for employee browsing, IRS is missing an opportunity to more effectively deter such activity. The EARL Executive Steering Committee noted that during the past 3 years IRS had published numerous documents intended to educate and sensitize employees to the importance of safeguarding taxpayer information. Nonetheless, the committee found that employees do not perceive the Service as aggressively pursuing browsing violations. It recommended that communications be more focused and highlight actual examples of disciplinary actions that have been taken against employees who browse.

Conclusions

IRS’ current approach to computer security is not effective.
Serious weaknesses persist in the security controls intended to safeguard IRS computer systems, data, and facilities; these weaknesses expose tax processing operations to the serious risk of disruption and taxpayer data to the risk of unauthorized use, modification, and destruction. Further, although IRS has taken some action to detect and deter browsing, it is still not effectively addressing this area of continuing concern because (1) it does not know the full extent of browsing and (2) it is inconsistently addressing cases of browsing. Recommendations Because of the serious and persistent security problems cited in our January 30, 1997, "Limited Official Use" version of this report, we recommended that the Commissioner of Internal Revenue, within 3 months of the date of that report, prepare a plan for (1) correcting all the weaknesses identified at the five facilities we visited, as detailed in the January 30, 1997, report, and (2) identifying and correcting security weaknesses at the other IRS facilities. We stated that this plan should be provided to the Chairmen and Ranking Minority Members of the Subcommittees on Treasury, Postal Service, and General Government, Senate and House Committees on Appropriations; Senate Committee on Finance; Senate Committee on Governmental Affairs; House Committee on Ways and Means; and House Committee on Government Reform and Oversight. We also stated that the Commissioner should report on IRS' progress on these plans in its fiscal year 1999 budget submission and should identify the computer security weaknesses discussed in this report as being material in its Fiscal Year 1996 Federal Managers' Financial Integrity Act report and subsequent reports until the weaknesses are corrected. Also, because long-standing computer security problems continue to plague IRS operations, we reiterated our prior recommendation that the Commissioner, through the Deputy Commissioner, strengthen computer security management.
In doing so, we recommended that the Commissioner direct the Deputy Commissioner to (1) reevaluate IRS' current approach to computer security along with plans for improvement, and (2) report the results of this reevaluation by June 1997 to the above-cited congressional committees and subcommittees. Last, in light of the continuing seriousness of IRS employees' electronic browsing of taxpayer records, we recommended that the Commissioner ensure that IRS completely and consistently monitors, records, and reports the full extent of electronic browsing for all systems that can be used to access taxpayer data. We recommended that the Commissioner report the associated disciplinary actions taken and that these statistics, along with an assessment of IRS' progress in eliminating browsing, be included in IRS' annual budget submission. Agency Comments and Our Evaluation In commenting on a draft of this report, IRS agreed with our conclusions and recommendations and stated that it is working to correct security weaknesses and implement our recommendations. However, it did not commit to doing so for all recommendations within the time frames specified. Specifically, we recommended that by April 30, 1997, IRS develop a plan for (1) correcting all the weaknesses identified at the five facilities we visited and (2) identifying and correcting any security weaknesses at the other facilities. We specified this time frame because of the seriousness of the weaknesses we found. In our view, it is essential that IRS implement this recommendation expeditiously, and therefore we reiterate that IRS should complete the above-cited plan by April 30, 1997.
Also concerning the correction of the weaknesses identified at the five facilities visited, IRS stated in its comments that "each facility is taking any corrective actions required by the GAO review." This statement is inconsistent with comments provided by each facility on its own weaknesses and thus raises additional concerns about the need for a more concerted security management effort to ensure a consistent and effective level of security at all IRS facilities. Specifically, while the five facilities agreed with many of our findings and described appropriate corrective actions, they disagreed with many others. In some cases, their comments reflected inconsistent views on the same problems. For example, some facilities acknowledged the need for fire suppression trash cans for disposing of combustible material (including paper) and chemicals in print rooms, while others disagreed. It is imperative that IRS recognize and correct security weaknesses systematically and consistently across all its facilities. IRS also commented that "a recent reevaluation of the weaknesses by GAO's contractor identified that 41% of the weaknesses originally identified in the GAO report have already been corrected and closed, and an additional 12% were being adequately addressed by the facilities." Our contractor's reevaluation assessment is not yet complete. Given the many serious security weaknesses yet to be fully dealt with or even addressed at this point, any preliminary assessment of IRS progress should be viewed with caution. In addition, IRS stated that time did not permit it to report the weaknesses identified in our report as material in its fiscal year 1996 Federal Managers' Financial Integrity Act report.
Instead, IRS has committed to reevaluating the status of material weaknesses that have been and should be reported so that the fiscal year 1997 Federal Managers' Financial Integrity Act report will provide an accurate depiction of the agency's material weaknesses and coincide with its approach and plans for improvement. The full text of IRS' comments on a draft of this report is in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to the Chairman, Senate Committee on Governmental Affairs, and the Chairmen and Ranking Minority Members of the (1) Subcommittees on Treasury, Postal Service, and General Government of the Senate and House Committees on Appropriations, (2) Senate Committee on Finance, (3) House Committee on Ways and Means, and (4) House Committee on Government Reform and Oversight. We will also send copies to the Secretary of the Treasury, Commissioner of Internal Revenue, and Director of the Office of Management and Budget. Copies will be available to others upon request. If you have questions about this report, please contact me at (202) 512-6412. Major contributors are listed in appendix III. Objectives, Scope, and Methodology The objectives of our review were to (1) determine whether IRS is effectively managing computer security and (2) determine whether IRS is effectively addressing employee browsing of electronic taxpayer data. To determine the effectiveness of IRS computer security, we first reviewed the findings from the computer security evaluation conducted by the public accounting firm of Ernst & Young in support of our audit of IRS' fiscal year 1995 financial statements. Ernst & Young's evaluation addressed general controls over such areas as physical security, logical security, communications, risk management, quality assurance, internal security, and contingency planning.
Ernst & Young performed its evaluation at five IRS facilities, as well as IRS headquarters offices where it examined security policies and procedures. Using Ernst & Young's evaluation results as preliminary indicators, we then evaluated and tested general computer security controls at the same five facilities in more depth. The areas we reviewed included physical security, logical security, data communications management, risk analysis, quality assurance, internal security and internal audit, security awareness, and contingency planning. Our evaluations included the review of related IRS policies and procedures; on-site tests and observations of controls in operation over all the systems in use at these locations; and discussions of security controls with Integrated Data Retrieval System users, security representatives, and officials at the locations visited. Our evaluation did not include computer systems penetration testing. We sent a letter reporting our findings to each IRS facility we visited, requesting comments and the outline of a plan for corrective actions. We then analyzed the responses and discussed the results with responsible IRS headquarters officials. We did not verify IRS' statements that certain actions had already been completed, but will do so as part of our audit of IRS' financial statements for fiscal year 1996. To determine the effectiveness of IRS efforts to reduce employee browsing of taxpayer data, we reviewed documentation and discussed issues relating to the development and operation of the Electronic Audit Research Log, the system IRS implemented to identify potential cases of employee browsing. We also reviewed data from the two systems IRS uses to track identified cases of browsing in order to determine the ability of these systems to accurately report the nature and extent of employee browsing.
In addition, we discussed with IRS Internal Security officials the actions they are taking to investigate instances of browsing, and we reviewed the Electronic Audit Research Log (EARL) Executive Steering Committee Report dated September 30, 1996. To evaluate IRS' computer management and security, we assessed information pertaining to computer controls in place at headquarters and field locations and held discussions with headquarters officials. We did not assess the controls that IRS plans to incorporate into its long-term Tax Systems Modernization program. We requested comments on a draft of this report from IRS and have reflected them in the report as appropriate. Our work was performed at IRS headquarters in Washington, D.C., and at five facilities located throughout the United States from May 1996 through November 1996. We performed our work in accordance with generally accepted government auditing standards. Comments From the Internal Revenue Service Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Atlanta Field Office Carl L. Higginbotham, Senior Information Systems Analyst Glenda C. Wright, Senior Information Systems Analyst Teresa F. Tucker, Information Systems Analyst
| Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) computer security, focusing on whether IRS is effectively: (1) managing computer security; and (2) addressing employee browsing of electronic taxpayer data. GAO noted that: (1) over the last 3 years, GAO has reported on a number of computer security problems at IRS and has made recommendations for strengthening IRS' computer security management effectiveness; (2) nevertheless, IRS continues to have serious weaknesses in the controls used to safeguard IRS computer systems, facilities, and taxpayer data; (3) GAO's recent on-site reviews of security at five facilities disclosed many weaknesses in the areas of physical security, logical security, data communications management, risk analysis, quality assurance, internal audit and security, security awareness, and contingency planning; (4) for example, the five facilities could not account collectively for approximately 6,400 missing units of magnetic storage media, such as tapes and cartridges, which could contain taxpayer data; (5) in addition, printouts containing taxpayer data were left unprotected and unattended in open areas of two facilities where they could be compromised; (6) also, none of the facilities visited had comprehensive disaster recovery plans, which threatens the facilities' ability to restore operations following emergencies or natural disasters; (7) one area of unauthorized access that has been the focus of considerable attention is electronic browsing of taxpayer data by IRS employees; (8) despite this attention, IRS is still not effectively addressing the problem via thorough employee monitoring, accurate recording of browsing violations, or consistent application and publication of enforcement actions; (9) 
for example, IRS currently does not monitor all employees with access to automated systems and data for electronic browsing activities; (10) in addition, when instances of browsing are identified, IRS does not consistently investigate them or publicize them to deter others from browsing, and does not consistently punish browsers; (11) until these serious weaknesses are corrected, IRS runs the risk of its tax processing operations being disrupted and taxpayer data being improperly used, modified, or destroyed; and (12) IRS should prepare a plan for correcting the weaknesses at the five facilities GAO visited and for identifying and correcting security weaknesses at other IRS locations. |
Scope and Methodology To address our objectives, we reviewed relevant NNSA and IAEA policy, guidelines, and planning documents. For NNSA, we examined its Protection and Sustainability Criteria Document, which describes the design basis threat (DBT)—the baseline threat for which security measures should be developed at research reactors in the GRRS program. In addition, we reviewed NNSA's strategic plans for the GRRS program and work schedules for conducting and completing security work activities. We also met with NNSA officials responsible for implementing the GRRS program and with Sandia National Laboratories (Sandia) technical experts who provide assistance to NNSA in implementing the program. We also met with IAEA officials from IAEA's Office of Nuclear Security, Division of Nuclear Fuel Cycle and Waste Technology, and IAEA's Department of Safeguards. We reviewed security upgrades at a nonprobability sample of five research reactors in five different countries—the Czech Republic, Hungary, Mexico, Romania, and Serbia. This sample cannot be used to generalize findings from these countries to all countries in the program. We selected these reactors based upon whether the reactors still use or store highly enriched uranium (HEU) fuel and when NNSA had completed physical protection upgrades. Four of the five reactors had already received security upgrades, while work was ongoing at the fifth reactor. In the course of our work, we visited each of these five reactors to tour the facilities and inspect security upgrades that had been made or were in process. During our visits, we interviewed officials managing the reactors, on-site security officials, police, and other law enforcement officials responsible for responding to security incidents, as well as government officials responsible for regulating security at these reactors.
At each of these reactors, we conducted interviews with a standard set of questions concerning the physical protection of the facility, the security upgrades that were being made, and the extent of the facility's coordination with NNSA and IAEA. We also compared the security systems at the facilities with IAEA guidelines—particularly INFCIRC 225, Rev. 4, Physical Protection of Nuclear Material and Nuclear Facilities. We also reviewed NNSA documents about each reactor, including reactor visit reports and vulnerability assessments. We conducted this performance audit from August 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Background Research reactors are generally smaller than nuclear power reactors, ranging in size from less than 1 megawatt to as high as 250 megawatts, compared with the 3,000 megawatts found for a typical commercial nuclear power reactor. In addition, unlike power reactors, many research reactors use HEU fuel instead of low enriched uranium (LEU). Although some research reactors have shut down or converted to LEU fuel and returned their HEU fuel to the United States or Russia, about 165 research reactors throughout the world continue to use HEU. NNSA efforts to convert reactors from HEU to LEU fuel use and return HEU fuel to the United States and Russia have led to the conversion of 57 reactors, the shutdown of 7 reactors, the return of HEU from 59 reactors, and the elimination of all HEU from 46 reactor facilities. NNSA plans to continue converting reactors and returning HEU fuel to its country of origin.
However, because it will take several years to convert reactors to LEU fuel use and return the HEU fuel, security at these reactors needs to be ensured in the interim. Figure 1 shows the interior of a research reactor in an Eastern European country that still uses Russian-supplied HEU. As NNSA and its predecessor agencies recognized the threat posed by the theft or diversion of nuclear materials—including HEU research reactor fuel—for nuclear weapons purposes, they initiated a number of efforts to address this threat. First, since 1974, DOE has supported a program to determine whether nuclear material provided by the United States to other countries for peaceful purposes is adequately protected. Managed by NNSA's Division of Nonproliferation and International Security, this program prioritizes and selects facilities for physical protection assessment visits, leads such visits to determine if the facility meets IAEA guidelines for security, and, in the cases where the visited facility does not meet IAEA guidelines, makes recommendations to improve security. However, unlike the GRRS program, NNSA's Office of Nonproliferation and International Security does not fund or install security upgrades at research reactors overseas. Second, after the collapse of the Soviet Union, DOE established the Material Protection, Control, and Accounting program in 1995 to install improved security systems for nuclear material at civilian nuclear sites (including research reactors), naval fuel sites, and nuclear weapons laboratory sites in Russia and nations in the former Soviet Union. Third, prior to the establishment of NNSA, DOE established the GRRS program in 1993 to improve the security of research reactors that are in countries that NNSA considers in need of assistance, as well as research reactors in countries that are not included in other DOE/NNSA programs.
As shown in Table 1, the GRRS program has identified 22 research reactors in 16 different countries in need of assistance that are not included in other DOE/NNSA programs. Originally managed by NNSA's Office of Nonproliferation and International Security, the GRRS program was transferred to the GTRI in 2005. The GRRS program is also beginning to provide security enhancements at research reactors located at universities in the United States, as requested by the Department of Homeland Security and the NRC. NNSA officials told us that they believe the decision to assist in upgrading the security of these reactors was based partly on our January 2008 report, which found potential security weaknesses at domestic research reactors regulated by NRC. NNSA Has Improved the Security of Research Reactors and Plans to Continue Upgrading the Security of Additional Reactors As of August 2009, NNSA reported that it had upgraded the security at 18 of the 22 foreign research reactors in the GRRS program at a total cost of approximately $8 million. NNSA plans to complete upgrades or remove all HEU prior to making upgrades at the remaining 4 reactors and to make further upgrades at some reactors where initial upgrades have already been made, spending an additional $6 million before ending physical security upgrades in 2010. For example, at one research reactor we visited, NNSA has already spent $760,000 on security upgrades and plans to spend $650,000 to pay for additional security upgrades, which will enable the facility to meet IAEA guidelines for security. NNSA also plans to spend an additional $378,000 for maintenance and sustainability of the security system at this facility over the next several years. NNSA is planning to complete all physical protection upgrades at GRRS reactors by the end of 2010.
NNSA prioritizes its schedule for upgrading the security of research reactors depending on the amount and type of nuclear or radioactive material at the reactor and other threat factors, such as the vulnerability condition of sites, country-level threat, and proximity to strategic assets. To make security upgrades, NNSA works with Sandia security experts to assess security needs at reactor facilities, design security upgrades and systems, assist foreign reactor operators in making improvements, and review security upgrades once they have been made. With NNSA approval, Sandia works with local firms specializing in installing security systems to make security upgrades. Security upgrades we observed during our visits to reactors in the GRRS program included, among other things, construction of new, heavily reinforced vaults to store HEU fuel; installation of motion detector sensors and security cameras to detect unauthorized entry into reactor buildings and provide the ability to remotely monitor activities in those buildings; replacement of glass entry doors with hardened steel doors equipped with magnetic locks and controlled by card readers or keypads; and upgrades or construction of new fortified central alarm stations that allow on-site guards to monitor alarms and security cameras, and communicate with response forces. Figure 2 shows a newly built fortified central alarm station at an HEU research reactor. Figure 3 shows the upgraded alarm display and closed circuit television monitors inside a central alarm station at another HEU reactor. In addition, NNSA works with officials in countries included in the GRRS program to develop emergency plans and training exercises with on-site guard forces as well as local, regional, and national law enforcement agencies.
For example, at one facility we visited, NNSA officials had worked with the reactor managers to develop emergency plans, and the managers routinely test these plans with different elements of the national emergency response system, including the facility guard force, local police, regional police, and national-level law enforcement, including special assault teams. IAEA guidelines state that coordination between facility guards and off-site response forces should be regularly exercised. In addition, NNSA's alert and notify strategy relies on off-site response forces to supplement the on-site guard force to contain, locate, and neutralize adversaries before they can successfully steal nuclear material or sabotage the reactor. The focus of NNSA's program has been on protecting reactors where security does not meet IAEA guidelines and that use or store HEU fuel that could potentially be used in an improvised nuclear device. In addition, some research reactors using LEU fuel—which cannot be used to make a nuclear bomb but are potential targets of sabotage intended to release radioactivity into the area surrounding a reactor—have received security upgrades because of high levels of terrorist activities in regions where the reactors are located or because of their proximity to U.S. installations. Although Reactors We Visited Generally Met IAEA Guidelines, Some Security Weaknesses Remain That Could Undermine NNSA-Funded Upgrades The foreign research reactors we visited that have received NNSA assistance generally met IAEA physical protection guidelines; however, in some cases, critical security weaknesses remained. The focus of the GRRS program is to make physical security upgrades in accordance with IAEA guidelines.
For example, IAEA guidelines recommend that nuclear facilities possessing the highest-risk nuclear materials have intrusion detection equipment and that all intrusion sensors and alarms should be monitored in a central alarm station that is staffed continuously to initiate appropriate responses to alarms. At all four of the research reactors we visited where NNSA upgrades have been completed, NNSA installed intrusion detection sensors on all entrances and infrared motion detectors in areas where nuclear material is stored to detect unauthorized access. In addition, at these reactors NNSA provided assistance to construct fortified central alarm stations that are staffed continuously by on-site security personnel to monitor alarms triggered by these sensors. NNSA is in the process of providing these same upgrades at the fifth reactor we visited. Despite these upgrades, the GRRS program has not focused on whether security planning, procedures, and regulations meet IAEA guidelines at international research reactors. In contrast, in the United States, the GRRS program has assisted research reactors to ensure that security planning, procedures, and regulations meet IAEA guidelines. For example, to meet IAEA's guidelines that emergency plans be regularly exercised, the program has provided emergency first responders with training and conducted tabletop exercises simulating emergency conditions. At four of the five reactors that we visited, we identified the following potential vulnerabilities that can undermine NNSA-funded upgrades. Specifically, IAEA security guidelines state that coordination between on-site guards and off-site response forces should be regularly exercised. At two reactors, however, no emergency response exercises had been conducted between the on-site guard force and off-site response forces, such as the national police, potentially limiting the effectiveness of these forces in an actual emergency.
In addition, one of these reactors lacked any formal plans for emergencies involving attempts to steal HEU fuel or to sabotage reactors. IAEA security guidelines state that all persons entering or leaving reactor inner areas should be subject to a search to prevent the unauthorized removal of nuclear material. However, personnel at one research reactor we visited did not search visitors or their belongings before granting them access to restricted areas where nuclear material is present, thereby potentially compromising the security upgrades made through NNSA assistance. IAEA security guidelines also state that all vehicles entering or leaving the protected areas should be subject to search. However, at another reactor that we visited, personnel did not search vehicles that were allowed onto the site or vehicles exiting the site for potentially stolen nuclear material or other contraband. IAEA security guidelines state that the ceilings, walls, and floors of areas containing vulnerable nuclear material should be constructed to delay potential adversaries from accessing the material. However, at one facility, we discovered that protective covers over storage pools that contain HEU were not being used. These covers, which typically weigh hundreds of kilograms and must be moved using a crane, provide important protection for stored HEU by significantly increasing the time required for a potential adversary to access nuclear material. Although NNSA officials told us that these covers are not part of the security system, the covers would delay potential adversaries from accessing the HEU stored in the pool. Furthermore, the four entrance doors to another research reactor—which still had HEU fuel at the time we visited, but has subsequently returned its HEU fuel—were not upgraded and provided only limited access delay. These doors were made of wood only approximately 1 inch thick.
In addition, the locks on these doors were not designed to withstand a determined attempt to access the research reactor facility. Officials at this facility told us that they had requested NNSA funding to replace the doors with hardened steel doors. However, NNSA did not agree to pay for hardened steel doors because it decided that the HEU fuel was sufficiently secured in a storage pool with heavy concrete covers. NNSA program guidance states that establishing and maintaining a reliable nuclear material inventory and tracking system are important elements for ensuring adequate security for these materials. However, at one reactor we learned that the operators of the reactor did not have an effective system of nuclear material control and accounting for the HEU fuel. For example, the operators of this reactor neither performed routine inventory checks on the HEU fuel nor had an exact accounting of the spent HEU fuel stored at the facility. In this case, NNSA officials told us that the lack of effective nuclear material accounting at this facility is due to the poor condition of the reactor fuel storage pool, which is contaminated with cesium that has leaked from the fuel. These officials told us that an inventory would be conducted as the HEU fuel is prepared for shipment back to its country of origin. IAEA security guidelines state that unescorted access to protected areas should be limited to those persons whose trustworthiness has been determined. However, at another reactor we visited, background checks were not conducted on personnel with access to areas where nuclear materials are present.
At the same reactor, according to foreign government officials, the government agency charged with regulating the operation of the research reactor had not developed safety and security regulations, nor had the country enacted laws ensuring the safe and secure operation of nuclear facilities—including licensing, inspections, and emergency exercise procedures—as called for by IAEA guidelines. NNSA and Sandia officials responsible for making security upgrades at these reactors acknowledged that, even with NNSA-funded upgrades, these continued vulnerabilities potentially compromise security. These officials stressed the importance of NNSA continuing to work with these countries to ensure that research reactors have effective and comprehensive physical protection systems and procedures consistent with IAEA guidelines. Furthermore, they expressed the need to eventually convert these reactors to LEU and return the HEU fuel to its country of origin, as well as to develop national laws and regulations to ensure the safe and secure operation of nuclear facilities. In addition, Sandia officials commented that there is no substitute for NNSA and Sandia visits to reactors that have received physical security upgrades to determine whether the upgrades have been installed, function as designed, and are properly maintained. However, these visits generally have not been used to assist the facilities in developing security policy and procedures that comply with IAEA security guidelines, and there are no specific plans to continue these visits after security upgrades at the remaining reactors are completed in 2010. NNSA Coordinates Security Upgrades With Other Countries and IAEA, but Additional Cooperation Is Needed to Implement Security Procedures Provided for in IAEA Guidelines NNSA coordinates with research reactor operators to design, install, and sustain security upgrades.
However, because the GRRS program is voluntary, NNSA faces challenges in obtaining consistent and timely cooperation from other countries to address remaining security weaknesses. With regard to IAEA, NNSA coordinates with the agency to identify research reactors that are in need of security upgrades and assistance. In addition, NNSA and IAEA have begun coordinating on a sustainability project to help ensure that research reactor operators adequately maintain NNSA-funded upgrades by assisting in the development of equipment testing and maintenance procedures and emergency response plans. NNSA Coordinates with Other Countries to Implement Upgrades but Faces Challenges in Addressing Security Weaknesses at Some Research Reactors NNSA officials and the physical security experts at Sandia coordinate with foreign government research reactor operators to design, install, and sustain physical security upgrades. To design security systems, NNSA and Sandia officials assess a research reactor’s current security condition to identify security weaknesses and verify the amount, type, and location of nuclear material at the facility. The officials then work with foreign research reactor operators to design upgrades and use either the DBT established by the foreign government or a DBT developed by NNSA if the country has not developed its own DBT for nuclear facilities. Security upgrades are generally focused on the electronic elements of the security system used to detect unauthorized access and alert response forces, as well as access delay features such as hardened steel doors and storage vaults, instead of on the development of security policies and procedures provided for in IAEA guidelines. Sandia officials also work with foreign government research reactor operators by overseeing the installation of security upgrades. In general, Sandia works with a security company that is then responsible for procuring and installing the designed security upgrades. 
To help ensure that the security upgrades are being installed properly, Sandia requires the security company and the foreign research reactor operators to periodically submit status reports and equipment lists for Sandia’s review. In some instances, countries will share the cost of installing the upgrades with NNSA. For example, the government of the Czech Republic provided $800,000 to upgrade the security at one of its research reactors. Once the security contractor completes the installation, NNSA and Sandia officials and foreign government research reactor operators inspect the upgrades and determine if they were installed and are functioning as designed. To help ensure that the upgrades are sustained, NNSA and Sandia officials periodically visit research reactors to review the condition of upgrades and to determine if supplemental upgrades are needed. According to NNSA and Sandia officials, these visits are crucial to maintaining a collaborative relationship with foreign research reactor operators to help ensure that security upgrades are sustained over the long term. As a result of recent security assessment visits, NNSA officials said that they are planning additional upgrades at three reactors we visited where security upgrades had already been completed. These additional upgrades are to include, among other things, new closed circuit television cameras, a device used to provide emergency electrical power, and replacement door locks; they do not include assistance in developing security policies and procedures provided for in IAEA guidelines. NNSA officials determined that supplemental upgrades at the fourth reactor were not needed because they planned to return the reactor’s HEU to Russia in the summer of 2009, which was 7 months after the assessment was made. 
NNSA has also been purchasing warranty and maintenance contracts for recently installed upgrades and for certain reactors where upgrades are several years old and foreign government research reactor operators lack sufficient funding for maintenance activities. NNSA requires the countries or reactor operators who receive these warranty and maintenance contracts to provide written assurance that they will continue to sustain the upgrades at their own expense after the contract expires, although NNSA will consider providing additional coverage on a case-by-case basis. In addition, NNSA is working with IAEA and governments in each of the countries that received security upgrades at research reactors to develop a long-term sustainability plan for security systems. Because the GRRS program is voluntary and cooperative, NNSA officials told us that in some cases they face challenges in obtaining foreign governments’ commitment to complete security upgrades in a timely manner. For example, progress to secure a research reactor in one country we visited has been delayed by as many as 4 years for two reasons. First, the country was initially reluctant to accept NNSA assistance and took 2 years to decide whether to accept funding for security improvements. Second, security upgrades were further delayed at this reactor because of the country’s delay in approving the design of the security upgrades and authorizing contractors to work at the reactor site. As a result, a number of security weaknesses at this facility have not yet been addressed—some of which NNSA identified as early as 2002. According to NNSA officials, the agency has been working with the Department of State to overcome these obstacles. NNSA officials also told us that they have experienced situations where a foreign government has refused its assistance to make security upgrades. Specifically, one country has refused NNSA’s multiple offers to upgrade a research reactor facility during the past 9 years. 
NNSA officials said that they have continued to offer this assistance through both direct bilateral negotiations and through IAEA. However, this foreign government has yet to accept NNSA assistance, and NNSA has concerns that known security weaknesses have not been addressed. In addition, NNSA has experienced two situations where the foreign government would not accept security upgrade assistance until agreements were reached with the United States on other issues related to nuclear energy and security. For example, NNSA assistance at one research reactor was delayed until the United States ratified an agreement with the foreign government authorizing and setting the conditions for transfers of U.S. civil nuclear technology and material to that government. These issues have been resolved with both foreign governments. Due to the terrorist threat level in the areas where these reactors are located, NNSA has decided to forgo making security upgrades because it would take too long to design and install new security systems. Instead, NNSA is planning to remove the HEU fuel that is at these two reactors and return it to its country of origin this year. NNSA Coordinates with the IAEA to Identify Research Reactors for the GRRS Program, and Further Cooperation Is Needed to Sustain Upgrades and Implement Security Procedures Provided for in IAEA Guidelines NNSA coordinates with the IAEA to identify research reactors in need of security upgrades that could be included in the GRRS program. Fourteen of the 19 research reactors that received NNSA-funded security upgrades were previously reviewed by an IAEA team, which recommended security improvements. According to IAEA officials, if a nation is unable to make the recommended security improvements itself, IAEA will recommend that it seek assistance from the GRRS program. In addition, NNSA works with IAEA to ensure security upgrades are complementary when both organizations are providing assistance at the same research reactor. 
For example, at one reactor we visited, NNSA upgraded the reactor’s central alarm station and installed new intrusion sensors and cameras. At the same facility, IAEA is planning to install an X-ray machine and metal detector at the reactor’s entrance to monitor personnel and packages entering and leaving the facility. In addition, NNSA officials implementing efforts to secure research reactors interact regularly with IAEA officials by holding quarterly coordination meetings. Furthermore, NNSA makes an annual financial pledge of between $1.6 and $1.9 million to IAEA’s Nuclear Security Fund, which supports IAEA’s Office of Nuclear Security activities, such as security reviews of international research reactors and other nuclear facilities. Further cooperation is needed to sustain NNSA-funded upgrades and implement security procedures provided for in IAEA guidelines. While NNSA is planning to complete all physical protection upgrades at GRRS reactors by the end of 2010, GRRS officials are still concerned about the continued effectiveness of upgrades and any shortcomings related to security procedures and planning. Consequently, NNSA has recently begun working with IAEA’s Office of Nuclear Security to establish a sustainability program. The purpose of the sustainability program is to help ensure that NNSA-funded security upgrades are properly maintained and to help research reactor operators implement security procedures and planning. To date, NNSA has provided IAEA with $550,000 and paid for a security expert from Pacific Northwest National Laboratory to administer the sustainability program. 
Under the sustainability program, IAEA will help research reactor operators develop capabilities for properly maintaining and testing installed security equipment, which will help ensure the future effectiveness of NNSA-funded upgrades; capabilities to ensure that security procedures are designed, implemented, and followed by research reactor management and personnel; and emergency response plans and agreements and procedures with a robust dedicated off-site response force for assistance in responding to emergency situations at the research reactor. In addition, the sustainability program is expected to help foreign governments strengthen their nuclear security laws and regulations, as well as the nuclear security inspection process and procedures. For example, IAEA plans to work with a country to ensure it has an appropriate nuclear regulatory agency with the legal basis, as well as inspection and enforcement capabilities, to establish and oversee security requirements at nuclear facilities. IAEA plans to conduct pilot projects of the sustainability program at three research reactors in 2009, evaluate the results of the pilot projects, and then potentially expand the program in 2010 to all reactors in the GRRS program that still possess HEU. NNSA will continue to support sustainability efforts through the IAEA after the completion of security upgrades at the remaining reactors in 2010. Conclusions Nuclear research reactors throughout the world continue to play an important role in research, education, science, and medicine. However, as long as some of these reactors continue to use HEU fuel or have HEU fuel stored on-site, they must be adequately protected from terrorists targeting them to steal the material or sabotage the reactors. NNSA’s efforts to secure research reactors in the GRRS program have resulted in physical security upgrades such as heavily reinforced vaults to store HEU fuel and new or improved alarms and intrusion detection sensors. 
However, security weaknesses remain at some research reactors in the GRRS program, many of which are the result of weaknesses in security procedures and emergency planning. NNSA’s efforts have, to date, generally not included encouraging the development of effective security procedures or the development of laws and regulations ensuring the safe and secure operation of nuclear facilities. NNSA has taken the first steps toward addressing these security deficiencies and is starting to work with IAEA to implement a comprehensive sustainability program to ensure that new security upgrades installed at these reactors undergo periodic maintenance and repair. These efforts must continue, even after NNSA completes installing physical security upgrades at the remaining reactors and ends the GRRS program in 2010. Because NNSA is working with foreign countries, it is also important that NNSA work cooperatively with these countries’ governments and IAEA to develop rigorous policies and procedures governing security at these sites. Ultimately, the most effective security improvement that can be made at these research reactors is to convert them to use LEU and to return all HEU fuel to the material’s country of origin, thereby eliminating the reactors’ attractiveness to terrorists seeking material to make an improvised nuclear device. We support the effort that NNSA is now taking to accelerate the schedule to convert reactors to LEU fuel use and return HEU fuel to its country of origin. The timely removal of this material from at-risk reactors will be, in the end, the most effective security improvement NNSA can make. 
Recommendations for Executive Action To resolve remaining security weaknesses at foreign research reactors that use HEU fuel, we recommend that the Secretary of Energy direct the Administrator of NNSA to take the following three actions: While continuing to emphasize and accelerate NNSA efforts to convert reactors to LEU fuel use and return HEU fuel to its country of origin, we recommend that NNSA work with foreign government officials and research reactor operators in countries where security upgrades are in progress or have been completed to (1) take immediate action to address any remaining security weaknesses, including those that we identified in this report; and (2) ensure that security policies and procedures, including those for emergency response to security incidents, fully meet IAEA guidelines. In addition, in cooperation with IAEA’s Office of Nuclear Security, we recommend that NNSA work with foreign regulatory agencies to encourage the development, where needed, of national security laws and regulations to ensure the safe and secure operation of research reactors, including licensing, inspection, and emergency exercise procedures, as called for in IAEA guidelines. Agency Comments We provided NNSA with a draft of this report for its review and comment. In its written comments, NNSA states that our report is fair and properly reflects the progress of the GRRS program to make security upgrades at vulnerable, high-risk research reactors worldwide. NNSA also outlined the actions that it plans to take to address the report’s recommendations to further improve research reactor security. The complete text of NNSA’s comments is presented in appendix I. NNSA also provided technical clarifications, which we incorporated into the report as appropriate. 
To address the report’s recommendations, NNSA stated that it plans to assist countries in meeting security obligations by (1) ensuring that its security policies and procedures, including those for emergency response to security incidents, fully meet IAEA guidelines and (2) working in cooperation with IAEA’s Office of Nuclear Security to encourage the development, where needed, of national security laws and regulations to ensure the safe and secure operation of research reactors. We are sending copies of this report to the appropriate congressional committees; the Secretary of Energy; the Administrator of NNSA; and the Director, Office of Management and Budget. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the National Nuclear Security Administration Appendix II: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the individual named above, Ryan T. Coles, Assistant Director; Patrick Bernard; Omari Norman; Tim Persons; Ramon Rodriguez; Peter Ruedel; Rebecca Shea; Carol Herrnstadt Shulman; and Jeanette Soares made key contributions to this report.

Worldwide, about 165 research reactors use highly enriched uranium (HEU) fuel. Because HEU can also be used in nuclear weapons, the National Nuclear Security Administration (NNSA) established the Global Research Reactor Security (GRRS) program to make security upgrades at foreign research reactors whose security did not meet guidelines established by the International Atomic Energy Agency (IAEA). 
GAO was asked to assess (1) the status of NNSA's efforts to secure foreign research reactors, (2) the extent to which selected foreign research reactors with NNSA security upgrades meet IAEA's security guidelines, and (3) the extent to which NNSA coordinates the GRRS program with other countries and the IAEA. GAO reviewed NNSA and IAEA documents and visited five of the 22 research reactors in the GRRS program, which were selected on the basis of when upgrades had been completed and because the reactors still possess HEU. As of August 2009, NNSA reported that it had upgraded the security at 18 of the 22 foreign research reactors in the GRRS program at a total cost of approximately $8 million. NNSA plans to complete physical security upgrades at the remaining reactors by 2010 at an additional cost of $6 million. Security upgrades that GAO observed during its site visits include heavily reinforced vaults to store HEU fuel, motion detector sensors and security cameras to detect unauthorized access, and fortified central alarm stations that allow on-site guards to monitor alarms and security cameras and communicate with response forces. Foreign research reactors that have received NNSA upgrades where GAO conducted site visits generally meet IAEA security guidelines; however, in some cases, critical security weaknesses remain. At four of the five reactors visited, GAO identified security conditions that did not meet IAEA guidelines. For example, (1) at two reactors, no emergency response exercises had been conducted between the on-site guard force and off-site emergency response force, and one of these reactors lacked any formal response plans for emergencies involving attempts to steal HEU fuel; and (2) personnel at one research reactor did not search visitors or their belongings before granting them access to restricted areas where nuclear material is present. 
Furthermore, the government agency charged with regulating the operation of one research reactor has neither developed safety and security regulations, nor has the country enacted laws ensuring the safe and secure operation of nuclear facilities. NNSA and Sandia National Laboratories officials responsible for making security upgrades at these reactors acknowledged that these continued vulnerabilities potentially compromise security at these reactors. Although the officials stressed the importance of NNSA continuing to work with these countries, there are no specific plans to do so after security upgrades at the remaining reactors are completed in 2010. NNSA officials coordinate with foreign government research reactor operators to design, install, and sustain security upgrades. Because the GRRS program is a voluntary and cooperative program, in some cases, NNSA faces challenges obtaining foreign governments' commitment to complete security upgrades in a timely manner. For example, progress to secure a research reactor in one country GAO visited has been delayed by as many as 4 years due to foreign government reluctance to accept NNSA assistance and delays in approving the designed security upgrades. Recently, NNSA has begun working with IAEA's Office of Nuclear Security to establish a sustainability program to help ensure the continued effectiveness of NNSA-funded security upgrades and to help research reactor operators implement security procedures. IAEA plans to conduct pilot programs at three research reactors in 2009 and then expand the program. NNSA will continue to support sustainability efforts through the IAEA after the completion of security upgrades at the remaining reactors in 2010.
Introduction In the National Defense Authorization Act for Fiscal Year 1991, Congress directed the Department of Defense (DOD) to determine intertheater (from one theater of operations to another) and intratheater (within a theater of operations) mobility requirements for the armed forces and develop an integrated plan to meet those requirements. DOD assessed its intertheater requirements in the 1992 Mobility Requirements Study. This study was updated in 1995 based on the results of the 1993 Bottom-Up Review. According to Joint Staff officials, intratheater requirements were addressed in the July 1996 Intratheater Lift Analysis (ILA), which was DOD’s first published intratheater lift requirements study since the 1988 Worldwide Intratheater Mobility Study. The ILA was sponsored by the Joint Staff, with representatives from the Office of the Secretary of Defense for Program Analysis and Evaluation, military services, U.S. Central Command, U.S. Forces Korea, U.S. Pacific Command, and U.S. Transportation Command. At the time of our review, the ILA had not been submitted to Congress. Historically, DOD has focused on intertheater lift requirements because intratheater lift requirements are more difficult to establish. The time-phased force deployment data process, which sets out the mode and timing of transportation of each unit, typically concentrates on the intertheater leg of the deployment. Intratheater lift requirements depend more on the combat situation and the theater concept of operations, which may not be known until the start of hostilities and are always subject to change. Intertheater lift transports troops, equipment, and supplies from U.S. airfields and seaports or prepositioned locations to the airfields and seaports in the theater of operations. The intratheater lift phase transports the troops and equipment from these airfields and seaports to the tactical assembly areas and foxholes in the theater. Figure 1.1 illustrates “fort-to-foxhole” deployment. 
The Secretary of Defense’s 1997 Annual Report to the President and the Congress states that the mobility objectives identified in the updated 1995 Mobility Requirements Study and the 1996 ILA will guide future force structure and investment decisions. Officials from the Joint Staff and the Office of the Secretary of Defense for Program Analysis and Evaluation said that the ILA is the official DOD study that the military services should use in developing their procurement plans for intratheater lift assets. The Army plans to spend about $1.7 billion through fiscal year 2003 to implement the ILA’s recommendations for tactical wheeled vehicles; if the airlift recommendations were implemented as well, another approximately $2.7 billion (fiscal year 1996 dollars) could be spent. According to Joint Staff officials, a new Mobility Requirements Study is expected to be completed in 1999. That study is expected to incorporate the Quadrennial Defense Review scenarios and force structure, include an update to the ILA, and examine intertheater and intratheater lift requirements simultaneously rather than separately. Modes of Intratheater Lift Five alternative modes of transportation—airlift, highway, coastal waterway, rail, and pipeline—present several options for intratheater lift. The availability of these options depends on the combat situation and theater infrastructure. For example, if the 176 bridges and 11 tunnels between Pusan and Seoul, South Korea, are damaged or destroyed or if the bridges cannot accommodate heavy tracked vehicles, such as main battle tanks, alternative modes of transportation must be arranged. Airlift enables immediate positioning and delivery of unit equipment and sustainment, but it is costly and provides limited cargo capacity. The C-130 airlifter is the primary aircraft used for intratheater lift. 
Large intertheater airlifters, such as the C-17 and C-5, can also be used for intratheater lift if a larger payload capacity or the transport of outsize cargo—the largest items in the Army’s inventory—is needed and airfields can accommodate the aircraft. Tactical wheeled vehicles are key to (1) moving units to assembly areas in the theater of operations in preparation for combat and (2) sustaining forces with supplies essential for successful operations. The tactical wheeled vehicles included in the ILA were the Heavy Equipment Transporter System (HETS), the Palletized Load System (PLS), 5,000- and 7,500-gallon fuel tankers, and 22.5- and 34-ton line haulers. The Army’s watercraft fleet consists of 245 craft that transport cargo and combat vehicles from ship to shore or to locations in the theater of operations via intracoastal waterways. The Logistics Support Vessel can accommodate 24 M1 main battle tanks and has the capacity to carry 2,000 tons. Rail, an important mode of transport in Korea, can alleviate some highway transportation requirements, according to the ILA. Appendix I shows some intratheater lift assets. ILA Methodology The war-fighting requirements on which the ILA was based were established in the updated 1995 Mobility Requirements Study. That study, which formed the basis for DOD’s current intertheater lift program, developed a requirement that would accomplish, with moderate risk, U.S. objectives established by the Joint Staff’s Tactical Warfare model (TACWAR) for a nearly simultaneous Korea and Southwest Asia scenario. The study assessed intertheater lift requirements to deliver combat and support forces to airfields and seaports in the two theaters of operations. The 1995 study, however, did not address the intratheater lift requirements needed to transport the units to their final destinations within the theaters. 
The ILA, through the Scenario Unrestricted Mobility Model for Intratheater Simulation (SUMMITS), modeled the intratheater lift—including airlift, tactical wheeled vehicles, Army watercraft, and rail—needed to transport the troops and equipment from the airfields and seaports and prepositioning sites in the theater to destination air bases, staging areas, and tactical assembly areas. SUMMITS considers required delivery date, payload, rate of movement, loading and unloading times, and the available transportation assets and network capabilities; examines every feasible path from origin to destination; and selects the fastest path through the network, subject to user-defined mode selection rules. For example, the mode selection rules direct that airlift, which is more expensive than ground transportation, is to be used only if ground transportation would be late and airlift would result in an improvement of at least 24 hours. Objectives, Scope, and Methodology Because its recommendations are intended to serve as the basis for proposed DOD acquisitions, the Chairman and Ranking Minority Member of the Senate Committee on Armed Services requested that we report on the results of the 1996 ILA. Specifically, we determined whether (1) the analysis and recommendations in the study were appropriately linked, (2) the study considered all options in meeting the requirements for various lift assets, and (3) improvements could enhance the study’s value as a decision-making tool. 
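The SUMMITS mode selection rule described above, which prefers cheaper ground transportation unless it would be late and airlift buys a substantial improvement, can be sketched as a simple decision function. This is an illustrative sketch only, not the actual SUMMITS implementation; the function and parameter names are hypothetical, though the 24-hour improvement threshold is the figure cited in the study.

```python
def select_mode(ground_arrival_hr, air_arrival_hr, required_delivery_hr,
                improvement_threshold_hr=24):
    """Illustrative sketch of the ILA's airlift-versus-ground rule.

    Ground transportation is preferred because it is less expensive; airlift
    is chosen only when ground transport would miss the required delivery
    date AND airlift improves arrival by at least the threshold (24 hours
    in the ILA's mode selection rules).
    """
    ground_is_late = ground_arrival_hr > required_delivery_hr
    airlift_improvement_hr = ground_arrival_hr - air_arrival_hr
    if ground_is_late and airlift_improvement_hr >= improvement_threshold_hr:
        return "airlift"
    return "ground"
```

Under such a rule, ground movement that meets the required delivery date is always retained, and even late ground movement is retained unless airlift improves delivery by at least a full day.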
To determine whether the ILA’s analysis and recommendations were appropriately linked, we reviewed the ILA, its Catalogues of Data and Assumptions, and other information supporting the study; theater command input to the ILA; Army and Air Force doctrine and procurement plans for tactical wheeled vehicles and airlifters; information on intratheater mobility in Operations Desert Shield and Desert Storm and Operation Joint Endeavor; the 1997 and 1998 Air Mobility Master Plans; RAND’s 1997 Documented Briefing, “Should C-17s Be Used To Carry In-Theater Cargo During Major Deployments?”; and other relevant documents. We reviewed our prior reports on Operations Desert Shield and Desert Storm, PLS, HETS, and C-17. We visited the tactical wheeled vehicles training facility at Fort Eustis, Virginia. We discussed the ILA’s assumptions, analysis, and recommendations with officials from the following organizations: Joint Staff, Washington, D.C.; Office of the Assistant Secretary of Defense for Strategy and Requirements, Washington, D.C.; Office of the Under Secretary of Defense for Policy for Program Analysis and Evaluation, Washington, D.C.; U.S. Central Command, MacDill Air Force Base, Florida; U.S. Forces Korea; U.S. Pacific Command, Camp H. M. Smith, Hawaii; U.S. Transportation Command, Scott Air Force Base, Illinois; Air Force Headquarters, Washington, D.C.; Air Combat Command, Langley Air Force Base, Virginia; Air Mobility Command, Scott Air Force Base, Illinois; Army Headquarters, Washington, D.C.; Army Training and Doctrine Command, Fort Monroe, Virginia; Army Combined Arms Support Command, Fort Lee, Virginia; Army Tactical Wheeled Vehicles Requirements Management Office, Fort Eustis, Virginia; and Boeing Corporation (formerly McDonnell Douglas Corporation), Rosslyn, Virginia. 
To determine whether the ILA considered all options in meeting the requirements for various lift assets, including host nation support, the C-5, and Army watercraft, we reviewed the ILA and its supporting Catalogues of Data and Assumptions; DOD’s 1997 Quadrennial Defense Review; theater commands’ operation plans for Southwest Asia and Korea; DOD documents concerning host nation support in Operations Desert Shield and Desert Storm; the DOD Inspector General’s 1997 report on host nation support in Southwest Asia; the Air Mobility Command’s May 1997 Airfield Suitability and Restrictions Report; Air Force and contractor documents concerning C-5 operations and capabilities; the Logistics Management Institute’s November 1996 report, “Joint Logistics Over The Shore Causeway Systems and Support;” the November 1996 Army Watercraft Master Plan; and other documents. We obtained information on the potential contribution of these lift assets from officials at the Joint Staff; theater commands; the Office of the Under Secretary of Defense for Policy for Program Analysis and Evaluation; the U.S. Transportation Command; Army Headquarters; the Army Training and Doctrine Command; the Army Combined Arms Support Command; the Air Combat Command; the Air Mobility Command; and Lockheed Martin Corporation, Crystal City, Virginia. We also toured the Army watercraft docked at Fort Eustis. To determine whether improvements could enhance the study’s value as a decision-making tool, we reviewed the 1988 Worldwide Intratheater Mobility Study, 1992 Mobility Requirements Study, 1995 Mobility Requirements Study Bottom-Up Review Update, and the 1996 Defense Science Board Task Force Report on Strategic Mobility. We also reviewed the theater commands’ input into the ILA, information on tactical wheeled vehicle cost and capability from the Tactical Wheeled Vehicle Requirements Management Office, and our prior reports on the 1992 and 1995 Mobility Requirements Studies. 
We obtained additional information from the Joint Staff; the Office of the Under Secretary of Defense for Policy for Program Analysis and Evaluation; the Office of the Assistant Secretary of Defense for Strategy and Requirements; Air Force Headquarters; Air Force Studies and Analyses Agency; Air Combat Command; Air Mobility Command; Army Headquarters; and Army Combined Arms Support Command. We did not assess the validity of the requirements or objectives identified in the fiscal year 2003 Total Army Analysis and did not independently verify the computer-generated data from the SUMMITS or TACWAR models. Our analysis focused on the decisions that were justified based on the output of these models. Our assessment of whether the outputs were properly used did not require a determination as to the accuracy of the models and the data they produce. We evaluated the links between the ILA’s recommendations and the requirements generated by the models. We performed our review between September 1996 and November 1997 in accordance with generally accepted government auditing standards. ILA Recommendations Are Not Based on the Study’s Analysis The ILA contains several recommendations that are not based on requirements developed by the study’s analysis. In some cases, this disconnect appears to be the result of invalid assumptions. For example, assumptions about how the Army would use HETS were not consistent with Army doctrine. In other cases, the cause of the disconnect is unclear. Further, the recommendations for tactical wheeled vehicles supported the Army’s planned acquisition objectives, but the study’s analysis would have resulted in a different recommendation for most types of tactical wheeled vehicles (e.g., the PLS and the 34-ton line hauler). 
Finally, the ILA found that the current C-130 fleet is more than sufficient to meet airlift requirements but recommended that an additional squadron of C-17s, beyond the planned procurement of 120 aircraft, should be used for intratheater lift, particularly for outsize cargo. This recommendation is not supported by the analysis in the study.

Tactical Wheeled Vehicle Recommendations Are Not Based on the Study's Requirements

The tactical wheeled vehicle acquisition plan recommended by the ILA does not reflect the requirements determined by the study's analysis. The ILA recommended that the Army continue with its tactical wheeled vehicle acquisition objectives based on the biennial Total Army Analysis for fiscal year 2003, even though the ILA requirements differed significantly from that analysis. The ILA recommended that shortfalls in some types of tactical wheeled vehicles be alleviated either by host nation support or tradeoffs with other types of excess vehicles. However, because the ILA requirements differ significantly from the Army's acquisition objectives, the excesses asserted in the ILA may not actually exist. Further, the ILA did not consider tactical wheeled vehicle host nation support (the treatment of host nation support is discussed in ch. 3) or include a cost-effectiveness analysis of the tradeoffs among various types of vehicles. Table 2.1 shows the ILA requirements (number of companies) and recommendations for tactical wheeled vehicles. Appendix I shows the number of assets in each company.

HETS Assumptions Are Not Consistent With Army Doctrine

The ILA's recommendation for the procurement of HETS is not consistent with the requirement identified in the study. The ILA requirement is for 4 HETS companies, but the recommendation supports the Army's plan to buy 18 companies. The ILA did not model the use of HETS according to current Army doctrine and thus derived a much lower HETS requirement than the Army's analysis.
The ILA Catalogue of Data and Assumptions states that HETS were used to transport tracked vehicles only when the vehicles' time to self-deploy would exceed the time required to load them on a HETS, transport them, and unload them. According to Army officials, however, under current Army doctrine, battle tanks do not self-deploy unless the distance to be traversed is 3 miles or less. This transport mission was added in 1991; before then, HETS were used only to evacuate tanks from the battlefield. The Army's objective of 18 HETS companies reflects the Army's plan to procure enough HETS to relocate a heavy brigade and its support in a single lift. Another reason for ILA's lower HETS requirement is that the study assumed a steady, even flow of heavy equipment arrivals by sea, with no surges as a result of weather or chance.

Fuel Tanker Requirements Are Not Accurate

The ILA identified a need for fewer 5,000-gallon fuel tankers than the Army plans to procure, but it recommended that the Army's acquisition program be continued. The ILA acknowledges that its 5,000-gallon fuel tanker requirements are understated. According to Joint Staff and Army officials, one reason for the inaccuracy is that the ILA did not factor in fuel requirements for the tankers or the additional cargo line haulers that the analysis showed were needed to meet requirements. Another reason for the difference between the Army and ILA estimates is that the TACWAR battle on which the ILA was based was fought at a low-to-moderate intensity level. If the level of intensity had been higher, fuel requirements would have been greater.

PLS Recommendation Is Not Supported

The ILA's recommendation to continue the Army's plan to procure 32 PLS companies is not based on the ILA's requirement of 16 companies.
Rather than recommend a reduced number of PLS to reflect the requirements, the ILA recommended that the Army continue toward its acquisition objective and use the surplus PLS to help alleviate the 22.5-ton line hauler shortfalls. However, the cost and operational effectiveness analysis of the PLS established its cost-effectiveness only in an ammunition role. Further analysis has not been done to determine the cost-effectiveness of the PLS in a cargo-carrying role. One PLS costs about $391,000 (1996 dollars) compared with $158,000 (1996 dollars) for one 22.5-ton line hauler, according to an analysis by the Tactical Wheeled Vehicle Requirements Management Office. Because alternative uses for the PLS have not been assessed for cost-effectiveness, the ILA's recommendation for the PLS is premature and not supported by analysis.

Line Hauler Recommendations Are Not Consistent

The ILA identifies a minimum requirement for 54 22.5-ton line hauler companies but also supports the Army's acquisition objective of 33 companies. The ILA states that excess PLS assets can help alleviate this shortfall. However, the PLS mission would have to be changed, and a cost-effectiveness analysis for such a change has not been done. The ILA also identified a large shortfall in the 34-ton line haulers, but the Army believes enough of these assets are already in its inventory and therefore does not plan to procure any more. The ILA recommends 87 of these companies as an objective but supports the Army's plan not to procure any additional trucks, stating that shortfalls can be offset with excess HETS assets, 7,500-gallon fuel tanker tractors (the same tractor used with the 34-ton line hauler), and host nation support. None of these options, however, were modeled.
Recommendation for Additional Airlift Is Not Supported

The number of C-130s in the fleet exceeds the number that the ILA identified as a requirement for the Korea and Southwest Asia scenarios. To determine the number of additional C-130s that would be needed worldwide for contingencies unrelated to these scenarios, the Joint Staff surveyed the theater commanders. Even with this additional requirement, the C-130 fleet still exceeds the number needed for intratheater lift. However, on the basis of analyses by the Air Force Studies and Analyses Agency, the ILA recommended using additional C-17s beyond the planned procurement of 120 (a squadron of 14, according to DOD officials) to augment the C-130s by providing outsize cargo capability. This recommendation has been supported by the Defense Science Board Task Force on Strategic Mobility (in a 1996 report) and by theater commanders. The Air Force analyses, however, do not support this recommendation because they only demonstrated that the C-17 could move cargo more quickly than the C-130 under certain circumstances. No intratheater requirement was established based on the C-17's contribution to meeting TACWAR timelines, and the relative cost-effectiveness of the two aircraft was not taken into account. The Air Force does not currently plan to acquire C-17s beyond the planned 120 so that a squadron could be dedicated to an intratheater role. An Air Force document shows that no C-17s are allocated solely for intratheater lift but that the U.S. Transportation Command would continue to support the intratheater lift needs of war-fighting commanders, as demonstrated in Operation Joint Endeavor in Bosnia. RAND's National Defense Research Institute evaluated intratheater concepts of operations for the planned C-17 fleet of 120 aircraft.
In a 1997 Documented Briefing, RAND identified the advantages of using the C-17 in an intratheater role and concluded that about one squadron of C-17s could be used effectively in each of the two theaters of operation. These C-17s would be part of the planned procurement of 120 aircraft and would be based in the theater, unavailable to fly intertheater missions. RAND acknowledged that deploying these C-17s as intratheater assets would slow the flow of intertheater cargo, but stated that this effect would be offset by the improved intratheater deliveries afforded by the C-17. DOD officials commented that, during the halting phase, a delay in the strategic airlift flow may not be acceptable. RAND also determined that fewer C-17s would need to be dedicated to the theater if some C-17s arriving in the theater were delayed to perform intratheater missions and then re-entered the intertheater airlift flow. According to RAND, this concept could allow nondedicated C-17s to fly most of the missions that would otherwise require theater-assigned C-17s.

Further Analysis of Potential C-17 Role Is Warranted

In support of the ILA, the Air Force Studies and Analyses Agency used its own models, along with SUMMITS, to determine the number of C-130s needed to meet TACWAR requirements after the addition of a squadron of C-17s beyond the 120 planned aircraft. The analysis found that about 50 percent more cargo could be delivered with only two-thirds as many sorties. However, because the C-130 fleet was more than sufficient to deliver the ILA workload, the faster deliveries resulting from the addition of C-17s were not necessary to meet the TACWAR battle requirements. The ILA also stated that, on the basis of its capability to deliver bulk cargo, every additional C-17 could replace three C-130s.
However, the ratio of three C-130s to one C-17 does not take into account either cost or the reduced flexibility that would be provided to a theater commander who may need three C-130s for multiple deliveries rather than one C-17 for a single delivery. Finally, dedicating a squadron of large airlifters, such as the C-17, for intratheater use could be an inefficient use of the asset. Intratheater missions typically involve small loads. In Operations Desert Shield and Desert Storm, for example, the average C-130 load was only 3.2 tons per sortie, although the C-130 can carry 17 tons. During the three peaks in the airlift operation—August and September 1990 and February 1991—the average C-130 load was 3.5 tons per sortie, which is only about 5 percent of the C-17's 65-ton cargo-carrying capacity. The Air Force stated that more analysis is needed before a definitive conclusion on the intratheater contribution of C-17s can be reached. The ILA notes that, due to SUMMITS' limited ability to model airlift, C-17 and C-130 capability tradeoffs warrant further analysis. The Air Force Studies and Analyses Agency had planned to complete a more detailed study of C-17 and C-130 capability in September 1996, but, according to an Air Force official, that study has been delayed indefinitely.

Unit Relocation Analysis Used Questionable Assumptions and Was Not Tied to the TACWAR Requirement

The Air Force also performed an analysis of the advantages and necessity of the C-17 in theater airlift operations by identifying ways the C-17 could augment the C-130 in conducting specific unit relocations in the theater. This analysis was conducted outside of the SUMMITS and TACWAR models because the TACWAR battle plan does not relocate specific units once they have arrived at their target destinations. An ILA working group determined the units that should be relocated to specific airfields based on how the move could improve the theater commander's tactical advantage.
On the basis of these discussions, the Air Force modeled 11 different unit moves, including Patriot batteries, Multiple Launch Rocket System battalions, and the 101st Air Assault Division. The Air Force analysis showed that, if a squadron of C-17s were dedicated to the theater, the selected units could be delivered to their destinations more rapidly than they could by the C-130. However, because the time frames in the analysis were not directly related to a specified requirement in the TACWAR battle plan, the benefit of the units' earlier availability was not measured. For example, even if the analysis showed that a Patriot battery could reach its destination 3 days earlier on the C-17 than it would by other means, the analysis did not assess the effect of this unit's move on the rest of the battle. In addition, the analysis did not assess the ripple effect of earlier delivery of the selected units on other units because the analysis was intended only to examine how the C-17 could speed the arrival of the 11 selected units. Further, the C-17 unit relocation analysis assumed that the aircraft could land on 18 planned fields. However, according to the May 1997 Air Mobility Command Airfield Suitability and Restrictions Report, only 9 of the 18 airfields have been surveyed for airlift operation suitability, and only 7 have been determined to be suitable for use by the C-17. The remaining 2 airfields have not been assessed for C-17 operations.

Conclusions

The ILA does not adequately fulfill congressional direction to develop intratheater lift requirements and establish an integrated plan to meet them because the study's recommendations are not supported by the analysis. The ILA's tactical wheeled vehicle recommendations, even though they support the Army's acquisition objectives, are not consistent with the requirements identified in the ILA. In addition, some ILA assumptions are either not consistent with Army doctrine or are invalid for other reasons.
These discrepancies call into question the basis for the study's recommendations. Due to the inconsistencies between the ILA requirements and the Army's acquisition objectives, for example, excess HETS and fuel tanker tractor assets may not actually exist. In addition, the ILA's recommendation to use another squadron of C-17s, beyond the planned procurement of 120 aircraft, for intratheater lift is not based on sound analysis. The ILA did not establish a relationship between the use of the C-17 in a dedicated intratheater role and the rest of the battle, so the effect of the faster C-17 deliveries was not measured. Furthermore, even though the Air Force's analysis assumed that the C-17 would be able to use all of the airfields identified by the ILA working group, there is no guarantee that they would be accessible to the C-17. The updated ILA, planned as part of the 1999 Mobility Requirements Study, will provide a good opportunity for DOD to reconsider the basis for intratheater lift requirements and ensure that they are linked appropriately to the study's analysis.

Recommendations

We recommend that the Secretary of Defense direct that the 1999 ILA update (1) link the study's recommendations to its analysis and (2) include assumptions that consider current Army doctrine when acquisition plans are based on the doctrine.

Agency Comments

DOD concurred with our recommendations. DOD noted that, although study assumptions are generally based on military service doctrine, DOD must be free to analyze changes to that doctrine in the interest of enhancing joint capability. However, DOD did not agree with our finding that the ILA's recommendations are not supported by the study's analysis. DOD stated that the study used computationally derived data, along with additional analysis and military judgment, to develop its recommendations.
DOD cites service acquisition programs, input from theater commanders, and substitution of one type of intratheater asset for another as examples of the additional analysis considered in developing the requirements and recommendations in the ILA. DOD points to the study’s recommendation to use excess HETS in place of 34-ton line haulers as being based on service acquisition programs and theater commander input. The ILA does not link its requirements to its recommendations. Rather, its recommendations merely support the Army’s acquisition plans with no explanation of the disconnect between those plans and the study’s requirements. In discussing our draft report, agency officials acknowledged that the Joint Staff had difficulty linking the Army’s fiscal year 2003 acquisition program to the ILA requirements and that, for this reason, the study’s reliance on the acquisition program is not clearly explained. Furthermore, Joint Staff officials told us during our review that the decisions reached by ILA working groups concerning tradeoff assessments and the airlift tactical unit moves analysis were not documented. Without an explicit link in the ILA between the study’s requirements and recommendations, and without a means of reviewing the factors or additional analyses that led to the final recommendations in the study, we have no basis on which to concur that a link exists. Moreover, we question the reliability and independence of a DOD requirements study that bases its requirements and recommendations on service acquisition programs without examining the disconnects between those programs and the study’s own findings. Concerning DOD’s example, the number of HETS identified as a requirement in the ILA is less than the number reflected in the study’s recommendation. According to theater commanders’ input to the study and our discussions with Army officials, the HETS is not an effective or economical substitute for 34-ton line haulers. 
We agree with DOD's statement that service acquisition programs were used to support the study's recommendations for HETS. It is this fact that leads us to conclude that the recommendations in the ILA were not based on the requirements identified by the study's analysis, but rather were based on service acquisition programs that had already been established.

ILA Did Not Include the Potential Contribution of Some Lift Assets

The ILA did not incorporate the potential contribution of several lift assets that could assist in meeting intratheater lift requirements. Specifically, the ILA did not include (1) the potential contribution of host nation-provided tactical wheeled vehicles, (2) the ability of the C-5s currently in the inventory and the planned fleet of 120 C-17s to meet outsize intratheater airlift requirements as needed, and (3) the potential for Army watercraft to supplant tactical wheeled vehicle requirements. As a result, the study's requirements and solutions may be overstated.

Host Nation Support Can Significantly Contribute to Intratheater Lift

Host nation support (HNS) is the civil or military assistance provided by a nation to foreign forces within its territory during peacetime, crisis, or war based on agreements mutually concluded between the nations. In Operations Desert Shield and Desert Storm, HNS included commercial cargo line haulers, fuel tankers, personnel transporters, and HETS. Of the 1,404 HETS used in the Persian Gulf conflict, 333 were provided by Saudi Arabia. DOD reported that support from host and other nations during the conflict was critical and that it gave the United States the flexibility to deploy substantial amounts of combat power early in the contingency—when the risks were the greatest—while reducing the number of tactical wheeled vehicles that needed to be deployed from the United States.
The 1995 Mobility Requirements Study Bottom-Up Review Update, based on the same TACWAR battle as the ILA, assumed that HNS in Southwest Asia and Korea would include commercial cargo line haulers, fuel tankers, and HETS. However, the potential HNS tactical wheeled vehicle contribution to intratheater lift was not modeled in the ILA. Thus, ILA assumptions are inconsistent with the assumptions made in the updated 1995 Mobility Requirements Study. According to the ILA, HNS was not modeled because of a lack of signed agreements with some of the host nations. In addition, theater commanders wanted the ILA to model a worst case scenario without any HNS offsetting U.S. force structure. The ILA, however, notes repeatedly that HNS has the potential to reduce some of the reported lift shortfalls in several categories of tactical wheeled vehicles. HNS would also limit the amount of equipment required to be moved into the theater. Because it did not reflect HNS, the ILA depicted the worst case scenario as the only scenario for intratheater lift. The theater commanders’ operation plans portray HNS as very important, if not critical, to the successful outcome of wars in Southwest Asia and Korea. Even the lack of formal HNS agreements in Southwest Asia does not limit the operation plans’ expectations of substantial HNS. The Southwest Asia operation plans assume that HNS will be available in either the amounts received during Operations Desert Shield and Desert Storm or in amounts negotiated and approved bilaterally between the host nations and the United States. The plans note that outsourcing logistical requirements within the theater of operations may completely preclude the need to deploy some logistical assets or units from the United States. The operation plans for a war in Korea state that U.S. Pacific Command forces can expect to receive significant wartime HNS from the Republic of Korea. 
The United States negotiated a wartime HNS agreement with the Republic of Korea in 1991. Cargo transportation was one of the components of this agreement, which also included medical, bulk fuel transport, maintenance, engineering, and ammunition support. The key factors in making HNS successful are availability of the right numbers of assets when and where they would be needed and the commitment of host nation drivers and other equipment operators to perform their assigned missions. Members of the defense community, including the military services, theater commanders, the Joint Staff, and the Office of the Secretary of Defense, are debating the extent to which HNS should offset U.S. force requirements. This debate is not likely to be resolved in the near future.

Intertheater Airlifters Can Help Meet Outsize Intratheater Requirements

The current intertheater airlift fleet includes the C-5 and C-17, which are capable of carrying outsize cargo. The C-5 Galaxy is the Air Mobility Command's largest intertheater airlifter, with the capacity to carry 89 tons of cargo and 36 pallets, and the smaller C-17 is capable of carrying 65 tons of cargo and 18 pallets. Although the C-5 and C-17 are intertheater lift assets, they can also be used for intratheater lift if warranted, assuming that airfields can accommodate them. However, the ILA did not model the potential contribution of the C-5 and considered the planned 120 C-17s as an offset to the C-130 fleet only in Southwest Asia. Use of the existing airlift fleet for intratheater missions as needed could increase flexibility and decrease the need to procure additional outsize airlift capability. Although the C-5 and C-17 are primarily intertheater airlifters, the ability to divert them for intratheater missions is recognized in Air Force operational documents.
The Air Force has highlighted the C-17's ability to deliver outsize cargo to small, austere airfields as a key factor in its dual role as an intertheater and intratheater airlifter. Small, austere airfields usually have a short runway and are limited in one or a combination of the following factors: taxiway systems, ramp space, security, materiel handling equipment, aircraft servicing, navigation aids, weather observing sensors, and communications. If delivering outsize cargo to small, austere airfields is necessary, the C-17 would likely be needed. However, if the airfields could accommodate the C-5, it could accomplish the mission. For example, the C-5 can quickly facilitate unit relocations. A Patriot battalion requires only 9 C-5 sorties compared with 15 sorties for the C-17. Of the 67 airfields in the ILA, 46 have been surveyed by the Air Mobility Command and are listed in its Airfield Suitability and Restrictions Report. Analysis of the 46 airfields common to the ILA and the Airfield Suitability and Restrictions Report showed that 34 airfields, or 74 percent, are suitable for all types of airlifters, including the C-5. In Korea, the C-5 can use 70 percent of the airfields, and in Southwest Asia, the C-5 can use 77 percent of the airfields. Further, the number of airfields available to the C-5 would likely be higher during a contingency, since other airfields that have not been surveyed would be available at that time.

Potential Contribution of Army Watercraft Was Not Determined

Due to their cargo capacity and demonstrated multiple mission capability, Army watercraft could be used for intratheater transportation and could reduce the need for reliance on rail, tactical wheeled vehicles, and HNS. However, the ILA did not identify a requirement for Army watercraft and deferred a recommendation on these assets pending a planned study by the Logistics Management Institute.
That study, issued in November 1996 (4 months after the ILA), found uncertainty among planners at the theater commands about the capability and availability of watercraft for intratheater operations. The Army has developed a long-range fleet management plan that includes an acquisition strategy to procure more watercraft, but the role and capability of watercraft to help meet intratheater requirements have not been addressed at the joint level. At the end of fiscal year 1997, the Army had 245 watercraft in its fleet, according to an Army official. Some of these watercraft, such as the Logistics Support Vessel and the Landing Craft, Utility-2000 (LCU-2000), provide intratheater movement of equipment, cargo, and combat vehicles and transport cargo from ship to shore. The Logistics Support Vessel can self-deploy anywhere in the world to provide intratheater transport of large quantities of cargo, tracked and wheeled vehicles, and equipment. These vessels provided intratheater transport during Operations Desert Shield and Desert Storm. The LCU-2000 can perform tactical resupply missions to remote or underdeveloped coastlines and inland waterways. During Operation Uphold Democracy in Haiti, this vessel transported about 38,548 tons of equipment and supplies to fishing villages that had small piers or ramps. Army watercraft employment is phased to meet the theater commanders’ requirements to offload combat and support forces during major regional contingencies. During the first 3 weeks of a conflict, Army watercraft operations would focus on port operations and offloading combat and support equipment from prepositioned ships and large strategic sealift ships. After the first 3 weeks, watercraft would continue port operations and begin to transition to sustainment operations, which include establishing intracoastal main supply routes and transporting equipment and cargo to forward areas in the theater. 
During Operations Desert Shield and Desert Storm, for example, watercraft delivered main battle tanks, ammunition, and other cargo to several locations on the Persian Gulf coast. Thus, although the port operations are the key mission for Army watercraft during the first part of a contingency, watercraft can contribute significantly to intratheater lift missions during later phases. In several cases, the ILA demonstrated how Army watercraft could be used to offset reliance on rail, tactical wheeled vehicles, and HNS by repositioning forces in the theater of operations and moving tanks prepositioned on land to tactical assembly areas. However, the ILA did not recommend that these potential offsets be implemented, and the contribution of watercraft to intratheater lift was not reflected in the ILA's recommendations for tactical wheeled vehicles as part of a tradeoff analysis. The Logistics Management Institute study evaluated the role of watercraft for logistics-over-the-shore and intracoastal main supply route operations in the Korea and Southwest Asia scenarios. The study did not establish an intracoastal transportation requirement, however, because of a lack of data from theater commanders regarding the types and amounts of cargo and equipment that could be transported on watercraft. The study recommended that the Joint Staff provide theater command planners the analytical tools to match intratheater lift requirements with intracoastal transportation capability.

Conclusions

Because several potentially significant contributions to intratheater lift were not thoroughly considered in the study, the requirements in the ILA may be overstated. Given the experience of Operations Desert Shield and Desert Storm, the inclusion of HNS in the theater commanders' operation plans, and the fact that the 1995 Mobility Requirements Study update assumed HNS would be available, it is unreasonable to exclude HNS from the analysis.
A more flexible mobility study that reflected requirements with and without HNS would better assist decisionmakers in determining the effects of HNS on U.S. mobility requirements. U.S. Central Command officials agree, acknowledging that requirements stated with and without HNS would have added flexibility to the ILA. In addition, the C-5s and the planned fleet of C-17s could be considered as needed if an outsize intratheater airlift requirement is identified. Use of these airlifters would ensure that the potential contributions of DOD's current assets are fully taken into account. Finally, Army watercraft have the potential to reduce reliance on tactical wheeled vehicles and HNS, but a requirement for these assets that reflects their intratheater role has yet to be defined. The potential contributions of tactical wheeled vehicle HNS, the current outsize-capable airlift fleet, and Army watercraft to meeting intratheater lift requirements warrant incorporation into the 1999 ILA update.

Recommendations

We recommend that the Secretary of Defense direct that the 1999 updated ILA (1) consider HNS as a means of accomplishing intratheater lift and ensure that HNS assumptions are consistent with those in intertheater lift studies; (2) include the potential contribution of the C-5 airlifter and planned fleet of 120 C-17s; and (3) reflect the role, capability, and requirements for Army watercraft in an intratheater role, including an analysis of the extent to which these assets can alleviate identified shortfalls in tactical wheeled vehicles.

Agency Comments

DOD concurred with our recommendations and added that, as the potential intratheater roles of the C-17 and C-5 are investigated, an analysis should be done to assess the impact on the warfight of taking these assets out of the intertheater airlift flow. We agree that such an analysis would be an important part of future studies that consider the use of these airlifters in an intratheater role.
Opportunities Exist to Improve Study's Value as a Decision-Making Tool

Intratheater lift requirements depend on the course of the battle and theater infrastructure and thus are difficult to quantify. However, because the ILA requirements and solutions were stated as absolute numbers rather than ranges, the study does not reflect the dynamic and often unpredictable nature of intratheater lift requirements. In addition, the ILA did not include a cost-effectiveness analysis to assess tradeoffs between various lift alternatives. Such an assessment would provide decisionmakers with information needed to make investment decisions in a sensitive budget environment.

Lift Requirements Were Not Stated as Ranges

The 1995 updated Mobility Requirements Study determined lift requirements through an iterative modeling process that examined various war-fighting and mobility schemes. However, the ILA did not use an iterative modeling process to determine requirements, which precluded the ILA from stating lift asset requirements and solutions as ranges. Rather, the ILA stated the requirements and solutions as absolute numbers. Given the dependence of intratheater lift requirements on the course of the battle and the theater infrastructure, requirements stated as ranges would provide a more accurate depiction of the dynamic intratheater situation. Ranges would also allow decisionmakers the flexibility to determine the type and quantity of lift assets needed to meet requirements while accounting for such factors as potential enemy actions to disrupt airfields and seaports, chemical or biological warfare, weather, HNS, and various threat scenarios. DOD's 1988 Worldwide Intratheater Mobility Study noted that intratheater mobility requirement statements are extremely dependent on the theater concept of operations.
The study recommended that all intratheater mobility requirements be expressed as ranges when possible and that those requirements not expressed as ranges be understood as approximations. The 1996 Report of the Defense Science Board Task Force on Strategic Mobility noted that the deployment phase most subject to disruption by the adversary is the intratheater movement of troops and equipment to their final destinations. DOD officials said that the ILA could not express requirements as ranges because SUMMITS would have had to be rerun with a different input requirement. The officials told us that only one concept of operations was available—the TACWAR battle established for the 1995 updated Mobility Requirements Study. They stated that expressing the ILA requirements as ranges could have required amending the TACWAR battle timelines after the 1995 study had been completed and that this option was not seriously considered. The officials said, however, that SUMMITS and TACWAR are capable of interacting and that iterations can be modeled. Cost-Effectiveness Analysis Is Needed to Assess Tradeoffs Between Lift Assets The 1995 updated Mobility Requirements Study, which is the basis for DOD’s procurement strategy for intertheater lift assets, included a cost-effectiveness analysis that assessed tradeoffs between various intertheater lift assets. The study developed a set of options consisting of possible additions to current airlift, sealift, and afloat prepositioning programs. Life-cycle cost estimates were developed for each option, and cost was a factor in the analysis leading to the final recommendations. However, a cost-effectiveness analysis that examined tradeoffs between the assets was not done to support the ILA recommendations. According to Joint Staff officials, limited tradeoff assessments were discussed as part of the ILA, but these assessments did not include cost and were not documented. 
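The iteration the officials describe can be illustrated with a toy model: solving a simple airlift sizing calculation repeatedly under varied inputs yields a requirements range rather than a point estimate. This is only a sketch, not SUMMITS or TACWAR; the 17-ton payload is the C-130 figure cited elsewhere in this report, while the demand and sortie-rate spreads are invented for illustration and stand in for the scenario factors (airfield disruption, weather, HNS, threat variations) named above.

```python
import math
import random

def aircraft_needed(tons_per_day, payload_tons, sorties_per_day):
    """Airframes required to move a daily tonnage at a given payload and sortie rate."""
    return math.ceil(tons_per_day / (payload_tons * sorties_per_day))

# Illustrative excursions only: each draw perturbs demand and sortie rate to
# stand in for the war-fighting and mobility variations a real model run covers.
random.seed(1)
results = [
    aircraft_needed(
        tons_per_day=random.uniform(800, 1400),   # demand shifts with the battle (hypothetical)
        payload_tons=17,                          # C-130 cargo capacity, from this report
        sorties_per_day=random.uniform(1.5, 3.0), # disruption lowers sortie rates (hypothetical)
    )
    for _ in range(1000)
]

print(f"Stated as a range: {min(results)}-{max(results)} aircraft")
```

The point of the sketch is that a single point estimate hides the spread the scenario factors induce; a range communicates it directly.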
The ILA states that the study’s workloads for the 34- and 22.5-ton line haulers can be met by other means, such as excess HETS or PLS. However, tradeoff assessments were not made to determine whether these alternatives would be cost-effective uses of the HETS or PLS. The ILA identified a requirement for fewer HETS and PLS than the Army’s acquisition objectives for fiscal year 2003, but the study did not recommend that the Army procure fewer of these expensive assets. As of January 1996, each HETS cost $414,000 compared with $118,000 for the 34-ton line hauler, according to the Tactical Wheeled Vehicle Requirements Management Office’s Catalog of U.S. Army Tactical Wheeled Vehicles. Army officials noted that the HETS would provide excess capacity in a line-haul role. In addition, even though a PLS company can carry 17 percent more cargo, it costs almost twice as much as the 22.5-ton line hauler. A PLS company costs $18.8 million (1996 dollars) compared with $9.5 million (1996 dollars) for a company of 22.5-ton line haulers, according to an analysis performed by the Requirements Management Office. DOD officials noted, however, that the PLS can self-load and unload containers, thereby requiring fewer personnel than the 22.5-ton line hauler. In addition, a cost-effectiveness analysis was not conducted on the ILA’s proposed use of a squadron of 14 C-17s, beyond the planned procurement of 120 aircraft, for intratheater lift. Since the C-130 fleet is more than sufficient to meet requirements, according to the ILA, and outsize airlift capability exists with the planned procurement of C-17s and the current fleet of C-5s, it is important for a recommendation to procure additional C-17s beyond the currently planned 120 aircraft to be based on an analysis that includes cost-effectiveness as a criterion. In addition, a tradeoff assessment has not been conducted to consider the extent to which C-130s could be retired if additional C-17s were procured for intratheater lift. 
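Using the cost figures above, a back-of-the-envelope tradeoff like the following is the kind of calculation the ILA omitted. It captures only acquisition cost per unit of company cargo capability; it deliberately ignores the PLS's self-loading and personnel advantages that DOD officials noted, which a full cost-effectiveness analysis would also have to weigh.

```python
# Company cost figures cited above from the Requirements Management Office (1996 dollars).
pls_company_cost = 18.8e6        # Palletized Load System company
line_haul_company_cost = 9.5e6   # 22.5-ton line hauler company
pls_capacity_advantage = 1.17    # a PLS company carries 17 percent more cargo

cost_ratio = pls_company_cost / line_haul_company_cost

# Acquisition cost per unit of cargo capability, normalizing the
# line-haul company's capability to 1.0.
pls_cost_per_capability = pls_company_cost / pls_capacity_advantage
line_haul_cost_per_capability = line_haul_company_cost / 1.0

print(f"A PLS company costs {cost_ratio:.2f}x as much for 1.17x the cargo")
print(f"Cost per unit capability: PLS ${pls_cost_per_capability:,.0f} "
      f"vs. line hauler ${line_haul_cost_per_capability:,.0f}")
```

On acquisition cost alone, the line-haul company delivers capability at a lower unit cost; whether the PLS's operational advantages justify the premium is exactly the question a documented tradeoff assessment would answer.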
Conclusions The value of DOD’s future lift studies as decision-making tools can be strengthened if they state intratheater requirements and solutions as ranges rather than as absolute numbers to reflect the uncertainty associated with predicting lift requirements within the theater of operations. An iterative process resulting in requirements ranges may have shown, for example, that allied forces would not lose key objectives or incur additional casualties under a range of intratheater delivery schemes that required fewer lift assets to accomplish. Since the planned 1999 Mobility Requirements Study and ILA update are expected to be conducted simultaneously, concerns about changing the TACWAR battle by establishing a requirements range should be alleviated. Furthermore, if future mobility studies are to be the basis for the services’ acquisition plans, it would be prudent to determine the appropriate type and number of mobility assets to procure based on a tradeoff analysis of the capability and cost-effectiveness of different options. Tradeoff assessments of the lift alternatives considered in the study would provide decisionmakers the flexibility to take into account competing investment options within a constrained budget. The 1999 updated ILA will provide an opportunity to address these concerns so that decisionmakers can have a more substantive basis on which to determine DOD acquisition strategies. Recommendations We recommend that the Secretary of Defense direct that the 1999 updated ILA (1) determine intratheater requirements and solutions as ranges to reflect their dependence on the combat situation and (2) include a cost-effectiveness assessment of the alternatives considered in the study that examines tradeoffs among the lift assets to reflect capability, cost, and requirements. Agency Comments DOD concurred with our recommendation concerning requirements ranges. 
DOD stated that cost-effectiveness analysis would be accomplished if appropriate and that DOD has in place an acquisition process that considers cost-effectiveness when making programmatic decisions. DOD officials explained that detailed cost-effectiveness analyses would significantly expand the time frame and cost of mobility studies. Our recommendation is directed at system tradeoff analyses that would provide decisionmakers information on the relative costs and capabilities of systems in light of identified requirements. The ILA made programmatic recommendations that included, to an extent, tradeoffs among lift assets. We believe that cost-effectiveness should be a part of a requirements study that makes acquisition recommendations. Intratheater Lift Assets Airlifters, tactical wheeled vehicles, and watercraft are all used for intratheater lift. The following sections provide information about these assets. Airlifters The C-130 Hercules is the Air Force’s primary intratheater airlifter. It can carry 6 pallets and 17 tons of cargo and accommodate 90 passengers. The C-17 Globemaster, being produced by the Boeing Corporation, can carry 18 pallets and 65 tons of cargo and accommodate 102 passengers. The C-5 Galaxy can be loaded with 36 pallets and can carry 89 tons of cargo and 73 passengers. Figures I.1 through I.3 show the C-130, C-17, and C-5 airlifters, respectively. Tactical Wheeled Vehicles The primary mission of the Heavy Equipment Transporter System is to (1) deliver main battle tanks to forward assembly areas fully fueled, armed, and ready for combat and (2) evacuate tanks from the battlefield. The tank’s crew rides in the cab of the system. The Palletized Load System consists of a truck, trailer, and removable cargo beds. It is used by artillery, ordnance, and transportation units to move ammunition to and from transfer points. 
The 7,500-gallon fuel tanker and the 34-ton line hauler use the same tractor and transport fuel and cargo, respectively, from ports to corps supply points, which are located farthest from the battle front. The 5,000-gallon fuel tanker and the 22.5-ton line hauler also use the same tractor and operate primarily in the division and brigade areas, which are closer to the battle front where roads are generally less developed. Table I.1 shows the number of tactical wheeled vehicles per company. Figures I.4 through I.6 show the Heavy Equipment Transporter System, the Palletized Load System, and the 34-ton line hauler, respectively. Army Watercraft The Logistics Support Vessel has the capacity to carry 2,000 tons and accommodate 24 M1 main battle tanks or 25 20-foot containers (50 if they are double-stacked). Each Landing Craft, Utility-2000 has the capacity to carry 350 tons and accommodate 5 M1 main battle tanks or 12 20-foot containers (24 if double-stacked). Figures I.7 through I.9 show the Logistics Support Vessel and the Landing Craft, Utility-2000. National Security and International Affairs Division, Washington, D.C. Kansas City Field Office Gregory J. Symons 
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) 1996 Intratheater Lift Analysis (ILA), focusing on whether: (1) the analysis and recommendations in the study were appropriately linked; (2) the study considered all options in meeting the requirements for various lift assets; and (3) improvements could be made to enhance the study's value as a decisionmaking tool. GAO noted that: (1) the ILA does not adequately fulfill the congressional directive to determine lift requirements and develop an integrated plan to meet them; (2) the study contains recommendations that would cost billions of dollars to implement, but the study's analysis generally did not support these recommendations; (3) the disconnect between the analysis and recommendations is especially evident in the information regarding tactical wheeled vehicles and outsize airlift capability; (4) in addition, the study's analysis did not incorporate several assets that can contribute significantly to the intratheater lift mission; as a result, the study's requirements and solutions may be overstated; (5) the analysis did not consider: (a) commercial vehicles provided by host nation support (HNS); (b) the use of the current and planned fleet of outsize-capable intertheater airlifters such as the C-5 and C-17; and (c) the extent to which Army watercraft could reduce the need for alternative sources of lift; (6) furthermore, improvements could enhance the study's value to decisionmakers; (7) these improvements include requirements stated as a range rather than as absolute numbers and tradeoff assessments based on the cost and capability of the various lift assets; (8) a range would have better reflected the dynamic nature of intratheater requirements, and system tradeoff 
assessments would have provided choices based on cost and capability; and (9) the 1999 Mobility Requirements Study and updated ILA will afford DOD a good opportunity to address these issues and provide Congress with a basis for acquisition decisionmaking in future budget cycles.
Background FAA’s Mission and Organizational Structure FAA’s primary mission is to provide a safe, secure, and efficient global airspace system that promotes airspace safety in the United States and contributes to national security. The agency’s roles include regulating civil aviation, developing and operating a system of air traffic control and navigation for civil and military aircraft, and researching and developing the NAS, which consists of more than 19,000 airports, 750 air traffic control facilities, and about 45,000 pieces of equipment. FAA’s mission performance depends on the adequacy and reliability of the nation’s air traffic control system. The air traffic control system, the primary component of the NAS, is a vast network of computer hardware, software, and communications equipment. This system consists of automated information processing and display, communication, navigation, surveillance, and weather resources that permit air traffic controllers to view key information—such as aircraft location, aircraft flight plans, and prevailing weather conditions—and to communicate with pilots. These resources reside at, or are associated with, several air traffic control facilities—towers, terminal radar approach control facilities, air route traffic control centers (en route centers), flight service stations, and the System Command Center. Figure 1 shows a visual summary of the air traffic control system over the continental United States and oceans. FAA’s mission performance also depends on the skills and expertise of its work force, composed of over 50,000 staff who provide aviation services— including air traffic control; maintenance of air traffic control equipment; and certification of aircraft, airline operations, and pilots. In fiscal year 2005, FAA’s budget authority to support its mission was approximately $14 billion. According to FAA officials, approximately 95 percent of the agency’s total spending is in support of the NAS. 
Further, FAA estimates that it will spend $7.6 billion over the next two years to complete key modernization projects. As figure 2 illustrates, FAA has twelve staff offices to accomplish its mission—including the Office of International Aviation and the Office of Information Services/Chief Information Officer—and four lines of business—Air Traffic Organization, Commercial Space Transportation, Airports, and Regulation and Certification. Tables 1 and 2 provide additional information about the responsibilities of these offices and lines of business. An Enterprise Architecture Is Critical to Successful Systems Modernization Effective use of enterprise architectures, or modernization blueprints, is a trademark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both business and technological environments. The Congress, the Office of Management and Budget (OMB), and the federal Chief Information Officer (CIO) Council have also recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an IT architecture. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. Enterprise Architecture: A Brief Description Generally speaking, an enterprise architecture connects an organization’s strategic plan with program and system solution implementations by providing the fundamental business and technology details needed to guide and constrain investments in a consistent, coordinated, and integrated fashion. 
As such, it should provide a clear and comprehensive picture of an entity, whether it is an organization (e.g., federal agency) or a functional or mission area that cuts across more than one organization (e.g., air traffic control). This picture consists of snapshots of both the enterprise’s current or “As Is” environment and its target or “To Be” environment, as well as a capital investment road map for transitioning from the current to the target environment. These snapshots further consist of “views,” which are basically one or more architecture products that provide conceptual or logical representations of the enterprise. The suite of products and their content that form a given entity’s enterprise architecture are largely governed by the framework used to develop the architecture. Since the 1980s, various frameworks have emerged and been applied. For example, John Zachman developed a structure or “framework” for defining and capturing an architecture. This framework provides for six windows from which to view the enterprise, which Zachman terms “perspectives” on how a given entity operates: those of (1) the strategic planner, (2) the system user, (3) the system designer, (4) the system developer, (5) the subcontractor, and (6) the system itself. Zachman also proposed six abstractions or models associated with each of these perspectives: these models cover (1) how the entity operates, (2) what the entity uses to operate, (3) where the entity operates, (4) who operates the entity, (5) when entity operations occur, and (6) why the entity operates. In September 1999, the federal CIO Council published the Federal Enterprise Architecture Framework (FEAF), which is intended to provide federal agencies with a common construct for their respective architectures, to facilitate the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies. 
FEAF describes an approach, including models and definitions, for developing and documenting architecture descriptions for multiorganizational functional segments of the federal government. Similar to most frameworks, FEAF’s proposed models describe an entity’s business, the data necessary to conduct the business, applications to manage the data, and technology to support the applications. More recently, OMB established the Federal Enterprise Architecture (FEA) Program Management Office to develop a federated enterprise architecture according to a collection of five “reference models” and a security and privacy profile overlaying the five models. The Performance Reference Model is intended to describe a set of performance measures for the major IT initiatives and their contribution to program performance. Version 1.0 of the model was released in September 2003. The Business Reference Model is intended to describe the federal government’s businesses, independent of the agencies that perform them. It serves as the foundation for the FEA. Version 2.0 of the model was released in June 2003. The Service Component Reference Model is intended to identify and classify IT service (i.e., application) components that support federal agencies and promote the reuse of components across agencies. Version 1.0 of the model was released in June 2003. The Data Reference Model is intended to describe, at an aggregate level, the types of data and information that support program and business line operations and the relationships among these types. Version 1.0 of the model was released in September 2004. The Technical Reference Model is intended to describe the standards, specifications, and technologies that collectively support the secure delivery, exchange, and construction of service components. Version 1.1 of the model was released in August 2003. 
The Security and Privacy Profile is intended to provide guidance on designing and deploying measures that ensure the protection of information resources. OMB has released Version 1.0 of the profile. Although these various enterprise architecture frameworks differ in their nomenclatures and modeling approaches, they consistently provide for defining an enterprise’s operations in both (1) logical terms, such as interrelated business processes and business rules, information needs and flows, and work locations and users and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. The frameworks also provide for defining these perspectives for both the enterprise’s current or “As Is” environment and its target or “To Be” environment, as well as a transition plan for moving from the “As Is” to the “To Be” environment. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and IT management. Managed properly, an enterprise architecture can clarify and help to optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. Employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize its mission performance. Our experience with federal agencies has shown that making IT investments without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. 
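The frameworks surveyed above share a grid-like structure: Zachman's, for instance, crosses six perspectives with six abstractions, yielding 36 cells that each call for an architecture product. A minimal sketch of that structure follows; the labels are paraphrased from the description above, and the empty-cell representation is purely illustrative.

```python
# The six "perspectives" (who is viewing the enterprise) from Zachman's framework.
perspectives = [
    "strategic planner", "system user", "system designer",
    "system developer", "subcontractor", "system itself",
]

# The six abstractions: the question each model answers about the entity.
abstractions = [
    "how it operates", "what it uses", "where it operates",
    "who operates it", "when operations occur", "why it operates",
]

# Each (perspective, abstraction) cell would eventually hold one or more
# architecture products; None marks a cell not yet populated.
grid = {(p, a): None for p in perspectives for a in abstractions}

print(f"{len(grid)} cells to populate with architecture products")
```

Whatever framework is chosen, the same cells must ultimately be defined for both the "As Is" and "To Be" environments, with a transition plan connecting the two.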
Our Prior Work Has Emphasized the Need for FAA to Establish Architecture Management Capabilities In November 2003, we reported the results of our governmentwide survey of agencies’ progress—including FAA’s—in establishing key enterprise architecture management capabilities as described in Version 1.1 of our architecture management maturity framework. This framework associates specific architecture management capabilities with five hierarchical stages of management maturity, starting with creating enterprise architecture awareness and followed by building the enterprise architecture management foundation, developing the enterprise architecture, completing the enterprise architecture, and leveraging the enterprise architecture to manage change. Table 3 provides a more detailed description of the stages of Version 1.1 of the framework. Based on information provided by FAA, we reported that the agency had not established an architecture management foundation; as a result, we rated the agency to be at stage 1 of our framework. Specifically, we reported that it had not (1) allocated adequate resources and (2) established a framework, methodology, and automated tools to build the enterprise architecture. According to our framework, effective architecture management is generally not achieved until an enterprise has a completed and approved architecture that is being effectively maintained and is being used to leverage organizational change and support investment decision making. An enterprise with these characteristics would need to have satisfied all of the stage 2 and 3 core elements and most of the stage 4 and 5 elements. Our Prior Work Has Also Emphasized the Need for FAA to Institutionalize Other Key IT Management Controls In August 2004, we reported that FAA had established most—about 80 percent—of the basic practices needed to manage its mission-critical investments, including many of the foundational practices for selecting and controlling IT investments. 
However, we reported that weaknesses still existed in the process. For example, FAA had not involved its senior IT investment board in regular reviews of investments that had completed development and become operational, and had not implemented standard practices for managing its mission-support and administrative investments. Because of these weaknesses, we concluded that agency executives could not be assured that they were selecting and managing the mix of investments that best met the agency’s needs and priorities. Accordingly, we made several recommendations, including that the agency develop and implement a plan aimed at addressing the weaknesses identified in our report. FAA generally concurred with our conclusion and recommendations. In addition, in August 2004, we reported that FAA had made progress in improving its capabilities for acquiring software-intensive systems, but that there were still areas that needed improvement. Specifically, we reported that it had recurring weaknesses in the areas of measurement and analysis, quality assurance, and verification. We concluded that these weaknesses prevented FAA from consistently and effectively managing its mission-critical systems and increased the risk of cost overruns, schedule delays, and performance shortfalls. We made several recommendations, including that FAA address these specific weaknesses and institutionalize its process improvement initiatives by establishing a policy and plans for implementing and overseeing process improvement initiatives. FAA generally concurred with our conclusion and recommendations. Our Prior Work Has Identified Problems with the Air Traffic Control Modernization Program FAA has a long and well-documented history of problems with its air traffic control modernization program, including cost overruns, schedule delays, and performance shortfalls. 
We first identified this program as an area at high risk in 1995 because of the modernization’s size, complexity, cost, and problem-plagued past. Over the past decade, we have continued to report on these problems. The program remains on our high-risk list today. In March 1999, we testified that FAA had had some success in deploying new modernization systems over the past two decades, but that the agency had not delivered most of its major air traffic control systems in accordance with its cost, schedule, and performance goals, due largely to its failure to implement established guidelines for acquiring new systems. Specifically, we testified that the agency had not fully implemented an effective process for monitoring the cost, schedule, benefits, performance, and risk of its key projects throughout their life cycles. We also noted that FAA lacked an evaluation process for assessing outcomes after projects had been developed, in order to help improve the selection and monitoring of future projects. Moreover, we testified that the agency’s problems in modernizing its systems resulted from several root causes, including the agency’s attempt to undertake this modernization without the benefit of a complete NAS architecture to guide its efforts. We concluded that the agency would continue to experience problems in deploying new systems until it had fully implemented solutions that addressed these root causes of its modernization problems and strengthened controls over its modernization investments. In February and October 2003, we testified that FAA had taken steps to improve the management of its air traffic control modernization, but that systemic management issues, including inadequate management controls and human capital issues, were contributing to the continued cost overruns, schedule delays, and performance shortfalls that major air traffic projects have consistently experienced. 
We stated that to overcome these problems, FAA would need to, among other things, improve its software capabilities by requiring that all systems achieve a minimum level of progress before they would be funded, and improve its cost estimating and cost accounting practices by incorporating actual costs from related system development efforts in its processes for estimating the costs of new projects. We testified that until these issues had been resolved, resources would not be spent cost-effectively, and improvements in capacity and efficiency would be delayed. FAA’s Enterprise Architecture Program: A Brief Description According to FAA, its enterprise architecture initiative is intended to influence the agency’s ongoing initiatives in E-Government, data management, information systems security, capital planning, investment analysis, and air traffic control and navigation and is to benefit the agency by aligning business processes with IT processes; improving flight safety; reducing the development and maintenance costs of systems; decreasing airline delays; guiding IT investments; and improving the security, interoperability, and data usage of these systems. FAA officials told us that the agency plans by April 2006 to have a comprehensive version of its enterprise architecture to guide and constrain the agency’s investment decisions. The Assistant Administrator for Information Services, who is the agency’s CIO, has been assigned responsibility for developing and maintaining the agency’s enterprise architecture. The CIO has designated a program director to oversee this effort. Two project offices are responsible for developing the NAS and non-NAS segments of the enterprise architecture, respectively, in coordination with the program director. Brief descriptions of the NAS and non-NAS architecture projects are provided below. 
NAS Architecture Project According to FAA, the NAS architecture is intended to be the agency’s comprehensive plan for improving NAS operations through the year 2015 and is to address how FAA will replace aging equipment and introduce new systems, capabilities, and procedures. The NAS architecture, which FAA reports is being developed in collaboration with the aviation community, is intended to achieve several objectives. For example, it is to (1) ensure that the NAS can handle future growth in aviation without disrupting critical aviation services, (2) improve flight safety and the use of airspace, (3) decrease airline delays, and (4) improve systems integration and investment planning. The agency is developing the NAS architecture in a series of incremental versions. It released the first version of the NAS architecture in September 1995. In 1999, FAA released Version 4.0 of this architecture, which, according to the agency, was the first version to include a 15 to 20-year view (a “To Be” view) and support budget forecasts. According to FAA, the current version of the NAS architecture (Version 5.0) shows how the agency intends to achieve the target system described by 2015. The chief operating officer (COO) for the Air Traffic Organization is responsible for developing and implementing the NAS segment of the architecture. The COO has tasked the Operations Planning/Systems Engineering group within FAA’s Air Traffic Organization with the day-to-day activities involved in this effort. This group is headed by the Vice President for Operations Planning, who reports directly to the COO. The COO has also designated a chief architect, who reports to the Director of Systems Engineering, to develop and maintain the NAS architecture and to provide technical leadership and guidance, as necessary, to support investment decision making. 
The Operations Planning/Systems Engineering group receives input from several FAA organizations, but primarily from business units within the Air Traffic Organization. Non-NAS Architecture Project According to FAA, the non-NAS architecture will cover the agency’s administrative services and mission support activities—the process areas, data, systems, and technology that support such functions as budget and finance, as well as all of the other governmental air transportation missions and functions that are unique to the agency (e.g., certification of aircraft). FAA initiated a project to develop the non-NAS architecture in March 2002 and, according to FAA, the agency plans to have, by January 2005, an initial baseline architecture that will describe the “As Is” and “To Be” environments. According to FAA, it plans to incrementally build on this baseline and have a version of the non-NAS architecture by April 2006 that will also include a sequencing plan. According to FAA, the Information Management Division within the Office of Information Services/CIO is responsible for developing and maintaining the non-NAS architecture. FAA has designated a chief architect, who reports to the program director, to oversee the day-to-day program activities for developing and maintaining the non-NAS architecture. To develop the non-NAS architecture, this division will receive input from the agency’s twelve staff offices and four lines of business. FAA Has Yet to Establish Key Architecture Development, Maintenance, and Implementation Processes FAA recognizes the need for and has begun to develop an enterprise architecture; however, it has yet to establish key architecture management capabilities that it will need to effectively develop, maintain, and implement the architecture. 
As previously stated, the agency has set up two separate project offices and tasked each with developing one of the two architecture segments (NAS and non-NAS) that together are to compose FAA’s enterprise architecture. The agency also reports that it has allocated adequate resources to these project offices and that chief architects have been assigned to head the architecture projects. However, FAA has not established other key architecture management capabilities, such as designating a committee or group representing the enterprise to direct, oversee, or approve the architecture effort; having an approved policy for developing, maintaining, and implementing the architecture; and fully developing architecture products that meet contemporary guidance and describe both the “As Is” and “To Be” environments and a sequencing plan for transitioning between the two. According to FAA officials, attention to and oversight of the enterprise architecture program have been limited in the past, and the agency has not documented its architecture management policies, procedures, and processes; but this is changing. For example, by the end of this fiscal year, FAA plans to issue a policy governing its enterprise architecture efforts and to establish a steering committee to guide and direct the program. By April 2005, the agency also plans to approve an architecture project management plan for the non-NAS architecture. In addition, it plans to have a framework for developing the NAS architecture by September 2005. Based on our experience in reviewing other agencies, not having an effective enterprise architecture program is attributable to, among other things, limited senior management understanding and commitment and cultural resistance to having and using an architecture. The result is an inability to implement modernized systems in a way that minimizes overlap and duplication and maximizes integration and mission support. 
FAA Has Yet to Implement Key Best Practices for Managing Its NAS Architecture Project As we first reported in 1997, it is critical that FAA have and use a comprehensive NAS architecture to guide and constrain its air traffic control system investment decisions. To effectively develop, maintain, and implement this architecture, FAA will need to employ rigorous and disciplined architecture management practices. Such practices form the basis of our architecture management maturity framework; the five maturity stages of our Version 1.1 framework are described in table 3. Some of these key practices or core elements associated with each of the stages are summarized below. For additional information on these key practices or core elements, see the framework. For stage 2, our framework specifies nine key practices or core elements that are necessary to provide the management foundation for successfully launching and sustaining an architecture effort. Examples of stage 2 core elements are described below. Establish a committee or group, representing the enterprise, that is responsible for directing, overseeing, or approving the enterprise architecture. This committee should include executive-level representatives from each line of business, and these representatives should have the authority to commit resources and enforce decisions within their respective organizational units. By establishing this enterprisewide responsibility and accountability, the agency demonstrates its commitment to building the management foundation and obtaining buy-in from across the organization. Appoint a chief architect. The chief architect should be responsible and accountable for the enterprise architecture, supported by the architecture program office, and overseen by the architecture steering committee. 
The chief architect, in collaboration with the CIO, the architecture steering committee, and the organizational head, is instrumental in obtaining organizational buy-in for the enterprise architecture, including support from the business units, as well as in securing resources to support architecture management functions such as risk management, configuration management, quality assurance, and security management. Use a framework, methodology, and automated tool to develop the enterprise architecture. These elements are important because they provide the means for developing the architecture in a consistent and efficient manner. The framework provides a formal structure for representing the enterprise architecture, while the methodology is the common set of procedures that the enterprise is to follow in developing the architecture products. The automated tool serves as a repository where architectural products are captured, stored, and maintained. Develop an architecture program management plan. This plan specifies how and when the architecture is to be developed. It includes a detailed work breakdown structure, resource estimates (e.g., funding, staffing, and training), performance measures, and management controls for developing and maintaining the architecture. The plan demonstrates the organization’s commitment to managing architecture development and maintenance as a formal program. Our framework similarly identifies key architecture management practices associated with later stages of architecture management maturity. For example, at stage 3—the stage at which organizations focus on architecture development activities—organizations need to satisfy six core elements. Examples of these core elements are discussed below. Issue a written and approved organization policy for development of the enterprise architecture.
The policy defines the scope of the architecture, including the requirement for a description of the baseline and target architectures, as well as an investment road map or sequencing plan specifying the move between the two. This policy is an important means for ensuring enterprisewide commitment to developing an enterprise architecture and for clearly assigning responsibility for doing so. Ensure that enterprise architecture products are under configuration management. This involves ensuring that changes to products are identified, tracked, monitored, documented, reported, and audited. Configuration management maintains the integrity and consistency of products, which is key to enabling effective integration among related products and for ensuring alignment between architecture artifacts. At stage 4, during which organizations focus on architecture completion activities, organizations need to satisfy eight core elements. Examples of these core elements are described below. Ensure that enterprise architecture products and management processes undergo independent verification and validation. This core element involves having an independent third party—such as an internal audit function or a contractor that is not involved with any of the architecture development activities—verify and validate that the products were developed in accordance with architecture processes and product standards. Doing so provides organizations with needed assurance of the quality of the architecture. Ensure that business, performance, information/data, application/service, and technology descriptions address security. An organization should explicitly and consistently address security in its business, performance, information/data, application/service, and technology architecture products. 
Because security permeates every aspect of an organization’s operations, the nature and substance of institutionalized security requirements, controls, and standards should be captured in the enterprise architecture products. At stage 5, during which the focus is on architecture maintenance and implementation activities, organizations need to satisfy eight core elements. Examples of these core elements are described below. Make the enterprise architecture an integral component of the IT investment management process. Because the road map defines the IT systems that an organization plans to invest in as it transitions from the “As Is” to the “To Be” environment, the enterprise architecture is a critical frame of reference for making IT investment decisions. Using the architecture when making such decisions is important because organizations should approve only those investments that move the organization toward the “To Be” environment, as specified in the road map. Measure and report return on enterprise architecture investment. Like any investment, the enterprise architecture should produce a return on investment (i.e., a set of benefits), and this return should be measured and reported in relation to costs. Measuring return on investment is important in order to ensure that expected benefits from the architecture are realized and to share this information with executive decision makers, who can then take corrective action to address deviations from expectations. Table 4 summarizes our framework’s five stages and all of the associated core elements for each. For its NAS architecture project, FAA is currently at stage 1 of our maturity framework. The NAS project office has satisfied three of the core elements associated with “building the enterprise architecture management foundation”—stage 2 of our framework—and three of the elements associated with “developing enterprise architecture products”—stage 3 of our framework.
It has not satisfied other stage 2 and 3 core elements or any core elements associated with stages 4 and 5. According to the framework, effective architecture management is generally not achieved until an enterprise has a completed and approved architecture that is being effectively maintained and is being used to leverage organizational change and support investment decision making; having these characteristics is equivalent to having satisfied all of the stage 2 and 3 core elements and many of the stage 4 and 5 elements. For the stage 2 core elements, FAA reports that it has allocated adequate resources for developing a NAS architecture. Further, it has established a project office that is responsible for architecture development and maintenance and has assigned a chief architect to the project. However, the agency has not satisfied other core elements for stage 2, such as assigning responsibility for directing, overseeing, or approving the architecture to a committee or group representing the enterprise. Without such an entity to lead and be accountable for the architectural effort, there is increased risk that the architecture will not represent a corporate decision-making tool and will not be viewed and endorsed as an agencywide asset. With respect to stage 3, according to the CIO, FAA plans to build on the current version of the NAS architecture (Version 5.0) to ensure that architecture products are developed that meet contemporary guidance and standards. According to FAA officials, including the CIO and the chief scientist for the NAS project office, the current NAS architecture does not conform to contemporary architecture guidance or standards—including OMB’s FEA reference models and GAO’s enterprise architecture management maturity framework—because it predates them and has not been updated to comply with them. However, the CIO stated that future versions of the architecture will conform to this guidance.
Among other things, this guidance calls for products that describe the “As Is” and “To Be” business, performance, information/data, applications/services, technology, and security environments as well as a sequencing plan for transitioning from the “As Is” to the “To Be” states. However, other stage 3 core elements have not been met, such as having a written and approved architecture development policy. Further, none of the stage 4 and 5 core elements have been met, although the CIO stated that FAA has recently begun to take steps associated with meeting some of these core elements. The detailed results of our assessment of the NAS project office’s progress in implementing the core elements associated with the five maturity stages are provided in appendix II. In addition, FAA’s senior enterprise architecture officials, including the program director, stated that attention to and oversight of the enterprise architecture program have been limited in the past and that the agency has not documented its architecture management policies, procedures, and processes. These officials stated that the agency recognizes the need to establish an effective NAS architecture project and that it intends to do so. To this end, FAA currently plans to have, by September 2005, a framework for developing the architecture and an approved enterprise architecture policy requiring the development, maintenance, and implementation of an enterprise architecture. The CIO also stated that the agency plans to update its NAS architecture to reflect current architecture standards and guidance. Our research of successful organizations and our experience in reviewing other agencies’ enterprise architecture efforts show that not having these controls is, among other things, a function of limited senior management understanding of and commitment to an enterprise architecture and cultural resistance to having and using one.
Until such barriers are addressed and effective architecture management structures and processes are established, it is unlikely that an agency will be able to produce and maintain a complete and enforceable architecture and thus implement modernized systems in a way that minimizes overlap and duplication and maximizes integration and mission support. Given the size and complexity of FAA’s air traffic control systems and their importance to FAA’s ability to achieve its mission, it is critical that FAA develop a well-defined architecture that can be used to guide and constrain system investment decisions. FAA Has Yet to Implement Key Best Practices for Managing Its Non-NAS Architecture Project Similar to its NAS architecture effort, FAA’s attempt to develop, maintain, and implement its non-NAS architecture needs to be grounded in the kind of rigorous and disciplined management practices embodied in Version 1.1 of our architecture management maturity framework. (Tables 3 and 4 provide a description of the framework’s five maturity stages and the key practices or core elements associated with each stage.) For its non-NAS architecture project, FAA is currently at stage 1 of our maturity framework. The non-NAS project office has satisfied three of the core elements associated with “building the enterprise architecture management foundation”—stage 2 of our framework—and four of the core elements associated with stages 3 and 5. According to the framework, effective architecture management is generally not achieved until an enterprise has a completed and approved architecture that is being effectively maintained and is being used to leverage organizational change and support investment decision making; having these characteristics is equivalent to having satisfied all of the stage 2 and 3 core elements and many of the stage 4 and 5 elements. 
For stage 2 core elements, FAA reports that it has allocated adequate resources, and it has established a project office and assigned a chief architect. However, the agency has not satisfied several of the stage 2 core elements that are critical to effective architecture management. For example, the agency has not established a committee or group representing the enterprise to guide, direct, or approve the architecture. Having such a corporate entity is critical to overcoming cultural resistance to using an enterprise architecture. As previously stated, the absence of such an entity increases the risk that the architecture will not represent a corporate decision-making tool and will not be viewed and endorsed as an agencywide asset. Concerning stage 3, FAA has not satisfied three of the six core elements. For example, although the agency is developing architecture products, it does not have a written and approved policy for architecture development. Without such a policy, which, for example, identifies the major players in the development process and provides for architecture guidance, direction, and approval, FAA will be challenged in overcoming cultural resistance to using an enterprise architecture and achieving agencywide commitment and support for an architecture. The agency has not implemented any of the stage 4 core elements and has implemented only one core element—architecture products are periodically updated—associated with stage 5 of our framework. For example, FAA has not (1) documented and approved a policy for architecture implementation, (2) implemented an independent verification and validation function that covers architecture products and architecture management processes, and (3) made the architecture an integral component of its IT investment management process. The detailed results of our assessment of the non-NAS project office’s progress in implementing the core elements associated with the five maturity stages are provided in appendix III.
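The stage-determination logic applied in these assessments (an organization reaches a given maturity stage only when it has satisfied every core element of that stage and of all stages below it, and otherwise remains at stage 1) can be sketched in a few lines. This is an illustrative model only, not part of GAO's framework: the element counts come from the narrative above (nine for stage 2, six for stage 3, eight each for stages 4 and 5), while the function and its name are hypothetical.

```python
# Hypothetical sketch of the maturity-stage logic described in the report.
# Core-element counts per stage, as stated in the report's narrative.
CORE_ELEMENTS_PER_STAGE = {2: 9, 3: 6, 4: 8, 5: 8}

def maturity_stage(satisfied):
    """Return the highest stage N such that all core elements of
    stages 2..N are satisfied; stage 1 if stage 2 is incomplete.

    `satisfied` maps a stage number to the count of its core
    elements the organization has satisfied.
    """
    stage = 1
    for s in sorted(CORE_ELEMENTS_PER_STAGE):
        if satisfied.get(s, 0) == CORE_ELEMENTS_PER_STAGE[s]:
            stage = s
        else:
            break  # a gap at this stage blocks all later stages
    return stage

# FAA's NAS project: three stage 2 and three stage 3 elements
# satisfied, per the report, so it remains at stage 1.
print(maturity_stage({2: 3, 3: 3}))  # -> 1
```

Under this model, partial progress at higher stages (as with FAA's stage 3 elements) does not raise the overall rating, which is why both project offices are assessed at stage 1 despite having satisfied several individual core elements.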
According to FAA’s senior enterprise architecture officials, including the chief architect, the attention to and oversight of the enterprise architecture program have been limited in the past, and the agency has not documented its architecture management policies, procedures, and processes. FAA officials, including the CIO and the chief architect for the non-NAS project, agreed with our assessment of the project office’s current architecture management capabilities. These officials stated that the agency recognizes the need to establish an effective non-NAS architecture project, and it intends to do so. For example, the agency’s strategic plan includes the goal of having an approved enterprise architecture policy requiring the development, maintenance, and implementation of an enterprise architecture by September 2005, and the agency intends to establish a steering committee. In addition, the chief architect stated that FAA plans to have an approved architecture project management plan by April 2005, and a comprehensive version of the non-NAS architecture by April 2006. As previously stated, our research and our experience show that not having these controls is, among other things, attributable to limited senior management understanding of and commitment to an enterprise architecture and cultural resistance to having and using one. Until such barriers are addressed and effective architecture management structures and processes are established, it is unlikely that any agency will be able to develop and maintain a complete and enforceable architecture and thus implement modernized systems in a way that minimizes overlap and duplication and maximizes integration and mission support. Conclusions Having a well-defined and enforced enterprise architecture is critical to FAA’s ability to effectively and efficiently modernize its NAS and non-NAS systems.
To accomplish this, it is important for FAA to establish effective management practices for developing, maintaining, and implementing an architecture. Currently, FAA does not have these practices in place. Establishing them begins with agency top management commitment and support for having and using an architecture to guide and constrain investment decision making. Recommendations for Executive Action To ensure that FAA has the necessary agencywide context within which to make informed decisions about its air traffic control system and other systems modernization efforts, we recommend that the Secretary of the Department of Transportation direct the FAA Administrator to ensure that the following four actions take place. Demonstrate institutional commitment to and support for developing and using an enterprise architecture by issuing a written and approved enterprise architecture policy. Ensure that the CIO, in collaboration with the COO, implements, for the NAS architecture project, the best practices involved in stages 2 through 5 of our enterprise architecture management maturity framework. Ensure that the CIO focuses first on developing and implementing a NAS architecture. Ensure that the CIO implements, for the non-NAS architecture project, the best practices involved in stages 2 through 5 of our enterprise architecture management maturity framework. Agency Comments In commenting on a draft of this report, the Department of Transportation’s Director of Audit Relations stated via e-mail that FAA is continuing its NAS architecture efforts. The Director also provided technical comments, which we have incorporated as appropriate in the report. The Director’s comments did not state whether the department agreed or disagreed with the report’s conclusions and recommendations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to interested congressional committees, the Director of OMB, the Secretary of the Department of Transportation, the FAA Administrator, FAA’s CIO, and FAA’s COO. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Randolph C. Hite at (202) 512-3439 or [email protected], or David A. Powner at (202) 512-9286 or [email protected]. Major contributors to this report are acknowledged in appendix IV. Objective, Scope, and Methodology Our objective was to determine whether the Federal Aviation Administration (FAA) has established effective processes for managing the development and implementation of an enterprise architecture. To address our objective, we used our enterprise architecture management maturity framework, Version 1.1, which organizes architecture management best practices into five stages of management maturity. Specifically, we compared our framework to the ongoing efforts of FAA’s two project offices to develop the National Airspace System (NAS) and non-NAS segments of the architecture. For example, for the NAS architecture, we reviewed program documentation, such as the acquisition management system policy, the Joint Resources Council’s investment management guidance for NAS investments, and FAA’s NAS architecture development process. We reviewed, for the non-NAS architecture, program documentation, such as the methodology FAA is using to develop this architecture, a Systems Research and Applications report on the agency’s efforts to implement management processes and controls over its architecture development activities, and the Department of Transportation’s Enterprise Architecture Subcommittee and Architecture Review Board charters. We then compared these documents with the elements in our framework. 
To augment our documentation reviews of FAA’s architecture management efforts, we interviewed various officials, including the chief information officer, the program director, the chief architects for the NAS and non-NAS architectures, and the chief scientist for the NAS architecture, to determine, among other things, the agency’s plans to develop an enterprise architecture. Specifically, we inquired about (1) the agency’s plans for developing an enterprise architecture, including the key milestones and deliverables for completing the two segments of the architecture, (2) the content of the NAS and non-NAS architecture segments (i.e., architecture products that have been developed to date), and (3) the strategy to be used to align the NAS and non-NAS architectures. We conducted our work at FAA headquarters in Washington, D.C. We performed our work from June 2004 to March 2005, in accordance with generally accepted government auditing standards. Assessment of Architecture Management Efforts for the National Airspace System Agency is aware of EA. The Federal Aviation Administration (FAA) strategic plan includes the goal of having an approved EA policy requiring the development, maintenance, and implementation of an EA by September 2005. Adequate resources exist (funding, people, tools, and technology). According to the chief scientist and the chief architect for the NAS architecture, the agency has adequate project funding. FAA reports that fiscal year 2005 funding for the National Airspace System (NAS) architecture is about $2.6 million. In addition, the agency reports that it has skilled staff, including contractor support, for its NAS architecture project. Furthermore, FAA is using automated tools and technology, such as Rational Rose by Rational Software Corporation/IBM Software Group, CORE by Vitech Corporation, and Dynamic Object Oriented Requirements System by Telelogic. 
Committee or group representing the enterprise is responsible for directing, overseeing, or approving the EA. FAA has not assigned responsibility for directing, overseeing, or approving a NAS architecture to a group or committee representing the enterprise. Program office responsible for EA development and maintenance exists. In 1997, FAA established a project office that is responsible for developing and maintaining a NAS architecture. Chief architect exists. In February 2004, FAA designated the chief architect for the NAS architecture project. EA is being developed using a framework, methodology, and automated tool. According to the chief scientist, the NAS architecture is being developed using a framework that focuses on strategically supporting FAA’s investment management process. The chief scientist stated that, unlike other architecture frameworks, this framework is not fully developed or documented. Further, FAA has yet to provide us with any documentation on this framework and on how it is being implemented to support the agency’s investment management process. According to the chief information officer (CIO), the agency plans to select an architecture framework by September 2005. FAA does not have a methodology that defines the standards, steps, tools, techniques, and measures that it is following to develop, maintain, and validate a NAS architecture. As stated above, FAA is using automated tools to build a NAS architecture. EA plans call for describing both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” FAA has yet to develop architecture project management plans. EA plans call for describing both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. FAA has yet to develop architecture project management plans. 
EA plans call for business, performance, information/data, application/service, and technology descriptions to address security. FAA has yet to develop architecture project management plans. EA plans call for developing metrics for measuring EA progress, quality, compliance, and return on investment. FAA has yet to develop architecture project management plans and metrics for measuring NAS architecture progress, quality, compliance, and return on investment. Written/approved organization policy exists for EA development. FAA has yet to develop a written/approved policy for developing a NAS architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. EA products are under configuration management. FAA has yet to establish a configuration management process. EA products describe or will describe both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” According to the chief information officer (CIO), future versions of the NAS architecture will conform to contemporary guidance. Such guidance describes, among other things, products that describe the “As Is” and “To Be” environments and a sequencing plan. Both the “As Is” and the “To Be” environments are described or will be described in terms of business, performance, information/data, application/service, and technology. According to the CIO, future versions of the NAS architecture will conform to contemporary guidance. Such guidance describes, among other things, products that describe the “As Is” and “To Be” environments in terms of business, performance, information/data, application/service, and technology. Business, performance, information/data, application/service, and technology descriptions address or will address security. According to the CIO, future versions of the NAS architecture will conform to contemporary guidance. 
Such guidance includes, among other things, business, performance, information/data, application/service, and technology descriptions that address security in both the “As Is” and “To Be” environments. Progress against EA plans is measured and reported. FAA has yet to develop architecture project management plans and metrics; therefore, progress against plans is not measured and reported. Written/approved organization policy exists for EA maintenance. FAA has yet to develop a written/approved policy for maintaining a NAS architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. EA products and management processes undergo independent verification and validation. FAA has yet to establish an independent verification and validation process. EA products describe both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” FAA has yet to develop NAS architecture products that describe both the “As Is” and the “To Be” environments and a sequencing plan. Both the “As Is” and the “To Be” environments are described in terms of business, performance, information/data, application/service, and technology. FAA has yet to develop NAS architecture products that describe both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. Business, performance, information/data, application/service, and technology descriptions address security. FAA has yet to develop business, performance, information/data, application/service, and technology descriptions that address security in both the “As Is” and “To Be” environments. Organization CIO has approved current version of EA. FAA has yet to develop a version of the NAS architecture for the CIO to approve that conforms to contemporary guidance and standards.
Committee or group representing the enterprise or the investment review board has approved current version of EA. FAA has yet to develop a version of the NAS architecture that conforms to contemporary guidance and standards for a committee or investment review board to approve. Quality of EA products is measured and reported. FAA has yet to develop NAS architecture product metrics; therefore, product quality is not measured and reported. Written/approved organization policy exists for IT investment compliance with EA. FAA has yet to develop a written/approved policy requiring IT investments to comply with a NAS architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. Process exists to formally manage EA change. FAA has yet to establish a formal process for managing changes to a NAS architecture. EA is integral component of IT investment management process. According to the CIO, FAA has recently begun to consider architecture compliance as part of its Joint Resources Council process and the CIO’s approval of Exhibit 300 budget exhibits for NAS investments, and he anticipates that over the next couple of years the NAS architecture will become integral to the investment process. EA products are periodically updated. FAA has yet to complete development of NAS architecture products. IT investments comply with EA. According to the CIO, FAA has recently begun to consider investment compliance with the architecture, and the CIO expects this compliance determination to expand and evolve over the next couple of years. Organization head has approved current version of EA. FAA has yet to complete development of a NAS architecture for the Administrator to approve. Return on EA investment is measured and reported. FAA has yet to develop metrics and processes for measuring NAS architecture benefits; therefore, return on investment is not measured and reported. 
Compliance with EA is measured and reported. FAA has yet to develop metrics for measuring compliance with the NAS architecture; therefore, compliance with an architecture is not measured and reported. Assessment of Architecture Management Efforts for the Non-National Airspace System Agency is aware of EA. The Federal Aviation Administration (FAA) strategic plan includes the goal of having an approved EA policy requiring the development, maintenance, and implementation of an EA by September 2005. Adequate resources exist (funding, people, tools, and technology). According to the chief architect, the agency has adequate project funding. FAA reports that fiscal year 2005 funding for the non-National Airspace System (NAS) architecture is $1.5 million. In addition, the agency reports that it has skilled staff (two government employees, six full-time contractors, and additional contractor staff as needed) working to develop its non-NAS architecture. FAA is also using automated tools, such as Rational Rose by Rational Software Corporation/IBM Software Group, Microsoft Visio, and an Oracle portal server. Committee or group representing the enterprise is responsible for directing, overseeing, or approving the EA. FAA has not assigned responsibility for directing, overseeing, or approving the non-NAS architecture to any group or committee. According to the chief architect, FAA plans to assign responsibility for directing the non-NAS architecture to its Information Technology Executive Board by April 2005. Program office responsible for EA development and maintenance exists. In January 2003, FAA established a project office that is responsible for developing and maintaining the non-NAS architecture. Chief architect exists. In January 2003, FAA designated a chief architect for the non-NAS architecture. EA is being developed using a framework, methodology, and automated tool.
According to the chief architect, FAA is using the Federal Enterprise Architecture Framework and the Office of Management and Budget’s Federal Enterprise Architecture reference models to develop the non-NAS architecture. FAA also has a methodology for developing the architecture, but the methodology does not define the standards, steps, tools, techniques, and measures that it is following to develop, maintain, and validate the non-NAS architecture. However, according to the chief architect, FAA will update its methodology to describe management activities by March 2005. As stated above, FAA is using automated tools to build the non-NAS architecture. EA plans call for describing both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” FAA has yet to develop architecture project management plans. However, the chief architect stated that the agency intends to have an approved plan by April 2005 and the plan will call for describing both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan. EA plans call for describing both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. FAA has yet to develop architecture project management plans. However, the chief architect stated that the agency intends to have an approved plan by April 2005 and the plan will call for describing both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. EA plans call for business, performance, information/data, application/service, and technology descriptions to address security. FAA has yet to develop architecture project management plans. 
However, the chief architect stated that the agency intends to have an approved plan by April 2005 and the plan will call for the business, performance, information/data, application/service, and technology descriptions to address security for both the “As Is” and “To Be” environments. EA plans call for developing metrics for measuring EA progress, quality, compliance, and return on investment. FAA has yet to develop architecture project management plans and metrics. According to the chief architect, the agency intends to have an approved plan by April 2005 and the plan will include metrics for measuring non-NAS architecture progress, quality, compliance, and return on investment. Written/approved organization policy exists for EA development. FAA has yet to develop a written/approved policy for developing the non-NAS architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. EA products are under configuration management. FAA has yet to develop non-NAS architecture products and a configuration management process has not been established. However, according to the chief architect, FAA plans to update its methodology to address how changes to all architecture products will be documented by March 2005. EA products describe or will describe both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” The chief architect stated that the agency intends to have an approved plan by April 2005 and that the plan will call for describing both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan. The chief architect also stated that the agency will have a comprehensive non-NAS architecture by April 2006. Both the “As Is” and the “To Be” environments are described or will be described in terms of business, performance, information/data, application/service, and technology. 
The chief architect stated that the agency intends to have an approved plan by April 2005 and that the plan will call for describing both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. The chief architect also stated that the agency will have a comprehensive non-NAS architecture by April 2006. Business, performance, information/data, application/service, and technology descriptions address or will address security. The chief architect stated that the agency intends to have an approved plan by April 2005 and that the plan will call for the business, performance, information/data, application/service, and technology descriptions to address security for both the “As Is” and “To Be” environments. The chief architect also stated that the agency will have a comprehensive non-NAS architecture by April 2006. Progress against EA plans is measured and reported. FAA has yet to develop architecture project management plans and metrics; therefore, progress against plans is not measured and reported. However, according to the chief architect, the agency intends to have an approved plan by April 2005 and progress against the plan will be measured and reported. Written/approved organization policy exists for EA maintenance. FAA has yet to develop a written/approved policy for maintaining the non-NAS architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. EA products and management processes undergo independent verification and validation. FAA has yet to establish an independent verification and validation process. However, according to the chief architect, the non-NAS architecture products and architecture management processes will undergo independent verification and validation by December 2005. 
EA products describe both the “As Is” and the “To Be” environments of the enterprise, as well as a sequencing plan for transitioning from the “As Is” to the “To Be.” The current non-NAS architecture products do not yet fully describe both the “As Is” and the “To Be” environments of the enterprise, or a sequencing plan. However, according to the chief architect, FAA will have a comprehensive non-NAS architecture that describes both the “As Is” and the “To Be” environments of the enterprise, as well as the sequencing plan by April 2006. Both the “As Is” and the “To Be” environments are described in terms of business, performance, information/data, application/service, and technology. The current non-NAS architecture products do not yet fully describe both the “As Is” and the “To Be” environments in terms of business, performance, information/data, application/service, and technology. However, according to the chief architect, FAA will have a comprehensive non-NAS architecture that describes the environments in these terms by April 2006. Business, performance, information/data, application/service, and technology descriptions address security. According to the chief architect, the non-NAS architecture does not yet contain complete business, performance, information/data, application/service, and technology descriptions that address security for both the “As Is” and “To Be” environments. However, FAA will have a comprehensive non-NAS architecture that includes these descriptions by April 2006. Organization chief information officer (CIO) has approved current version of EA. FAA has yet to develop a non-NAS architecture for the CIO to approve. According to the chief architect, the first comprehensive version of the non-NAS architecture is scheduled for release in April 2006. Committee or group representing the enterprise or the investment review board has approved current version of EA. FAA has yet to develop a non-NAS architecture for the committee or investment review board to approve.
According to the chief architect, the first comprehensive version of the non-NAS architecture is scheduled for release in April 2006. In addition, FAA has yet to establish a committee or investment review board that will be responsible for approving the non-NAS architecture. Quality of EA products is measured and reported. FAA has yet to develop metrics and assess the quality of the non-NAS architecture products that it is currently developing; therefore, product quality is not measured and reported. However, according to the chief architect, the agency intends to have an approved plan by April 2005 and the plan will include metrics for measuring the quality of the non-NAS architecture products. Written/approved organization policy exists for IT investment compliance with EA. FAA has yet to develop a written/approved policy requiring that IT investments comply with the architecture. However, FAA’s strategic plan includes the goal of developing an approved enterprise architecture policy by September 2005. Process exists to formally manage EA change. FAA has yet to establish a formal process for managing changes to the non-NAS architecture. However, according to the chief architect, the agency intends to have an approved architecture project management plan by April 2005 and the plan will include a formal process for managing architecture changes. EA is integral component of IT investment management process. FAA has yet to complete development of a non-NAS architecture, and it is not an integral component of the IT investment management process. EA products are periodically updated. FAA updates the non-NAS architecture products annually to reflect the agency’s investment decisions. IT investments comply with EA. FAA has yet to complete development of a non-NAS architecture; therefore, IT investments are not evaluated for compliance with the architecture. However, the first version of the non-NAS architecture is scheduled for release in April 2006. 
Organization head has approved current version of EA. FAA has yet to complete development of a non-NAS architecture for the Administrator to approve. However, the first version of the non-NAS architecture is scheduled for release in April 2006. Return on EA investment is measured and reported. FAA has yet to develop metrics and processes for measuring non-NAS architecture benefits; therefore, return on investment is not measured and reported. However, the first version of the non-NAS architecture is scheduled for release in April 2006. Compliance with EA is measured and reported. FAA has yet to develop metrics for measuring compliance with the non-NAS architecture. However, the first version of the non-NAS architecture is scheduled for release in April 2006. According to the chief architect, these core elements will be addressed in the enterprise architecture policy that FAA plans to issue by September 2005. GAO Staff Acknowledgments Staff who made key contributions to this report were Kristina Badali, Joanne Fiorino, Michael Holland, Anh Le, William Wadsworth, and Angela Watson.
FAA has two architecture projects--one for its National Airspace System (NAS) operations and one for its administrative and mission support activities--that together constitute its enterprise architecture program. However, it has established only a few of the management capabilities for effectively developing, maintaining, and implementing an architecture. For example, the agency reports that it has allocated adequate resources to the projects, and it has established project offices to be responsible for developing the architecture, designated a chief architect for each project, and released Version 5.0 of its NAS architecture. But the agency has yet to establish other key architecture management capabilities--such as designating a committee or group that represents the enterprise to direct, oversee, or approve the architecture, and establishing an architecture policy. FAA agreed that the agency needs an effective enterprise architecture program and stated that it plans to improve its management of both projects. For example, the agency intends to establish a steering committee; develop a policy that will govern the development, maintenance, and implementation of the architecture program; and have an approved architecture project management plan for the non-NAS architecture. GAO's experience in reviewing other agencies has shown that not having an effective enterprise architecture program can be attributed to, among other things, an absence of senior management understanding and support and cultural resistance to having and using one. It has also shown that attempting major systems modernization programs like FAA's without having and using an enterprise architecture often results in system implementations that are duplicative, are not well integrated, require costly rework to interface, and do not effectively optimize mission performance. |
Background All the major airlines have some union representation of at least part of their labor force. The various crafts or classes that unions typically represent include pilots, flight attendants, mechanics, and dispatchers. Sometimes unions also represent customer service agents and clerical workers, aircraft and baggage handling personnel, and flight instructors. The extent of unionization among the major carriers varies significantly. At Delta, unions represent the pilots and two small employee groups; at Southwest, on the other hand, unions represent 10 different employee groups. Different unions may represent a given employee craft or class at different airlines. For example, the Air Line Pilots Association (ALPA) represents pilots at United, but the Allied Pilots Association represents American pilots. Table 1 summarizes the representation of different crafts or classes at the major airlines. In general, airline labor contracts include three major elements: wages, benefits, and work rules. Work rules generally refer to those sections of a contract that define issues such as hours to be worked and what work is to be done by what employees. Negotiations between airlines and their labor unions on these contracts are conducted in accordance with the requirements of the Railway Labor Act (RLA). This act was passed in 1926 after the railroads and their unions agreed to set in place a legal framework that would avoid disruptions in rail service. The act was amended in 1936, after discussions with airline labor and management, to include the airline industry and its labor unions. See appendix III for a summary of the history and key provisions of the RLA. Airline labor contracts do not expire; rather, they reach an amendable date—the first day that the parties can be required to negotiate the terms of a new contract. Labor negotiations may begin before or after the amendable date, however. 
While a new contract is being negotiated, the terms of the existing contract remain in effect. Under the RLA, labor negotiations undergo a specific process that must be followed before a union can engage in any kind of work action, including a strike, or before a carrier can change work rules, wages, and benefits. After exchanging proposed changes to contract provisions, the airline and the union engage in direct bargaining. If they cannot come to an agreement, the parties must request mediation assistance from the National Mediation Board (NMB). By statute, if the NMB receives a properly completed application for mediation, it must make its best effort to mediate an amicable settlement. If negotiations are deadlocked after mediation, the NMB must then offer arbitration to both parties. If either party declines arbitration, the NMB releases the parties into a 30-day cooling-off period. While this process is set by law, the decision about when the negotiations are deadlocked is left to the NMB. If the NMB concludes that a labor dispute threatens to interrupt essential transportation service to any part of the country, the act directs the NMB to notify the President of this possibility. The President then can, at his discretion, convene a Presidential Emergency Board (PEB), which issues a nonbinding, fact-finding report. If the President does not call a PEB, after the 30-day cooling-off period ends the union is allowed to strike, and the airline is allowed to alter working conditions unilaterally. These actions are known as self-help. If the President does convene a PEB, it is given 30 days to hold hearings and recommend contract terms for a settlement to the parties. The union and the airline then have an additional 30-day cooling-off period, after the PEB makes its recommendations to the President, before either can engage in self-help. After a PEB, Congress may also intervene in the contract dispute by legislating terms of a contract between a carrier and a union.
Congress, however, has never intervened in airline negotiations since deregulation. Figure 2 summarizes the key steps in the negotiation process under the RLA. Besides negotiations on contracts that are nearing or have passed the amendable dates, airline management and labor may also engage in other negotiations. For example, if an airline introduces a new type of aircraft into its fleet, management and labor will negotiate “side agreements” to the contract that set pay rates and work rules governing the operation of that aircraft. An example of this situation was when Delta and its pilots settled on pay rates for flying Delta’s newly introduced Boeing 777s in 1999. This agreement was an amendment to a contract that was ratified in 1996. Conversely, during financially difficult times, an airline’s management and labor may negotiate concessionary agreements before contracts reach the amendable date. For example, since 2001, several airlines have requested pay cuts from their unions due to the precarious financial condition of the airlines. In April 2003, American employees agreed to $1.8 billion in wage, benefit, and work rules concessions to help the airline avoid bankruptcy. That same month, United employees represented by ALPA, the Association of Flight Attendants (AFA), the International Association of Machinists and Aerospace Workers (IAM), the Transport Workers Union (TWU), and the Professional Airline Flight Control Association (PAFCA) agreed to $2.2 billion in average yearly savings to avoid liquidation or having all labor contracts abrogated by the bankruptcy court. Through January 2003, US Airways employees, including unionized, nonunionized, and management personnel, agreed to over $1 billion in cuts to avoid liquidation.
Length of Negotiations and Number of Nonstrike Work Actions Have Increased, While Number of Strikes Has Declined In the 25 years since deregulation, airline contract negotiation lengths have increased while the frequency of strikes has declined, but the number of nonstrike work actions has increased. For the 236 contracts that the major passenger airlines negotiated since 1978, available data suggest that the median time taken to negotiate contracts has risen substantially since 1990, although this varies among the different carriers. In addition, 75 percent of strikes occurred prior to 1990. By comparison, all presidential interventions and all identified nonstrike work actions (such as sickouts or refusals to work overtime) occurred after 1990. Airline Contract Negotiation Lengths Have Increased Since 1978 The length of time to negotiate airline contracts has increased since deregulation. From 1978 to 1989, the median contract negotiation was 9 months, while the median negotiation length from 1990 to 2002 increased to 15 months. In other words, from 1978 to 1989, half of the contracts were negotiated in more than 9 months, while from 1990 to 2002, half of the contracts took more than 15 months to reach an agreement. However, in 1978–1989, 6 contracts were ratified or settled by the amendable date, whereas from 1990–2002, 9 contracts were ratified or settled by the amendable date. (In all, during the two time periods from 1978–1989 and 1990–2002, the numbers of negotiations that began before the amendable date were 65 and 51, respectively.) Conversely, the number of contracts that required more than 24 months to negotiate more than doubled between the two periods. Figure 3 summarizes changes in the length of time taken for airline labor negotiations from 1978 to 2002. Carriers differed in the degree to which their median negotiation lengths increased—if they increased at all.
Negotiation lengths increased at six carriers that were measured, in some cases more than doubling. On the other hand, negotiation lengths decreased or remained constant at three: Continental, United, and Trans World Airlines (TWA). Figure 4 shows the change in median negotiation lengths at the major U.S. passenger airlines before and after 1990. Contract complexity may play a role in lengthening negotiations. In the 1980s, for example, scope clauses (provisions in labor contracts of the major airlines and their unions that limit the number of routes that can be transferred to smaller, regional jets) could be very short—sometimes only one paragraph. Now, however, such scope clauses can be 60 or more pages. Also, contracts negotiated during the 1980s tended to consist mainly of wages and benefits, while those negotiated in the 1990s included corporate governance issues such as code sharing, regionals, and furloughs. Another factor in the length of negotiations is the relationship between labor and management. According to industry experts who examined labor relations in the industry, the quality of labor relationships is defined by the parties’ level of trust, their level of communication, and their ability to problem solve. Those carriers that industry officials and labor-management experts regard as having positive labor relations tended to have shorter negotiation periods than carriers with acrimonious relationships. Industry officials noted increased tension within labor-management relationships during the 1990s, when the industry recovered from economic hardship to enjoy the biggest boom in its history. An industry official explained that during the recessionary economic period of the early 1990s, unions tended to stall negotiations to avoid making concessions. Conversely, during the peak economic period in the mid to late 1990s, some airlines’ management tried to further improve their profits by prolonging negotiations.
Carriers described by industry officials and labor-management experts as having had positive labor relationships include Continental (following 1993) and Southwest. In the 1990s, their median negotiation periods were 7 and 13 months, respectively. Labor-management experts credit Continental’s current CEO for creating relationships of trust, and re-establishing Continental as a profitable carrier after its bankruptcy in the early 1990s. Industry officials also credit Southwest’s labor relationships to 30 years of profitability while maintaining its original leadership. Both companies have been recognized for extended periods of low conflict in labor negotiations, underpinned by high-trust workplace cultures. Carriers that have been described by labor-management experts as having had contentious relations with their unions include American, Northwest Airlines, TWA, and US Airways. Also, all have a history of strikes and/or court-recognized, nonstrike work actions. Furthermore, in the 1990s, many of these airlines had negotiations that tended to take much longer than Continental’s and Southwest’s. For example, the median length of time to negotiate contracts at US Airways in the 1990s was 34 months. By contrast, the length of time to negotiate contracts at Southwest was 13 months. Strikes Have Decreased and Nonstrike Work Actions Have Increased during the 1990s The incidence of strikes in the airline industry has decreased over time. Of the 16 strikes that occurred since 1978, 12 occurred prior to 1990, and 4 occurred subsequently. These strikes ranged from as short as 24 minutes to more than 2 years. Figure 5 summarizes the incidence of strikes, presidential interventions, and court-recognized, nonstrike work actions between 1978 and 2002. Six presidential interventions have been used to prevent strikes since deregulation. All six occurred since 1990. Not all presidential interventions were PEBs.
In 1993, the President recommended binding interest arbitration for American’s flight attendant negotiation. In 1998, and again in 2001, two PEB warnings occurred; one occurred during Northwest’s pilot strike and the second for American flight attendants. Still, PEBs have been used three times in the airline industry since 1978: during a 1994 American pilot negotiation, a 1996 Northwest mechanic negotiation, and a 2000 United mechanic negotiation. Compared to strikes, the pattern for nonstrike work actions has been the opposite: their incidence has increased over time. In all, 10 court-recognized, nonstrike work actions have occurred, each since 1998. Such actions included various forms of slowdowns such as sickouts, work-to-rule, and refusals to work overtime. According to a labor-management expert, carriers believe there have been many more nonstrike work actions than the 10 recognized by the courts, but their existence is difficult to prove. Airline management has been unable to produce the evidence needed to prove the actions are taking place. Those nonstrike work actions that were not identified by the court include a number of highly publicized labor disruptions. For example, the reported, but unconfirmed, nonstrike work action taken by United’s pilots in the summer of 2000 was widely publicized by the media, yet the airline never brought the issue before a court of law. Additionally, it has been reported that these actions are difficult to detect because a concern for safety often masks their source. Airline Strikes Adversely Affect Communities, but Impacts Have Not Been Fully Analyzed and Vary from Place to Place Airline labor strikes have exerted adverse impacts on communities, but we identified no published studies that systematically and comprehensively analyzed a strike’s net impact at the community level.
For some strikes, we were able to identify evidence of individual impacts, such as reduced air service to and from the community, lost salaries or wages by striking or laid-off airline workers, or lower airport revenues. However, no studies have yet synthesized such information for a thorough picture of a strike’s impact on a community. Our analysis indicates that a strike’s potential impacts would likely vary greatly from community to community, because of differences in factors such as the amount of service available from other airlines. Thus, even if the impact of a strike were to be thoroughly studied at a particular community, it would be difficult to generalize these results to other locations. Airline Strikes Have Had Negative Economic Impacts on Communities With the reduction of air service stemming from an airline strike, communities have experienced economic disruptions from a number of sources. Lost income of airline employees, fewer travelers and less spending in travel-related businesses, and less spending by the airline are just some of the ways that local economies have been affected by a strike. For example, canceled flights have led to the layoff of nonstriking employees, fewer travelers in the airport spending money in concessions, and reduced landing fees for airports. Because passenger traffic dropped, spending at hotels suffered. Local reports illustrated some of a strike’s economic impacts on a community during the 2001 Comair pilot strike. Comair, a regional carrier for Delta, has its main hub at the Cincinnati/Northern Kentucky International Airport. Over the course of the strike, which lasted 89 days, Comair did not operate its 815 daily flights, causing the 25,000 passengers who would normally have been on those flights in an average day to curtail their travel or make arrangements on other airlines.
The airline’s 1,350 striking pilots, many of whom are based in the area, lost an estimated $14 million in salaries, and the airline reported laying off an additional 1,600 nonstriking employees in the greater Cincinnati area as well. A concourse at the Cincinnati/Northern Kentucky International Airport closed during the strike. Reports stated that the concourse’s 16 stores and restaurants lost more than $3 million in sales, and that 152 of 193 workers were laid off. The airport also lost $1.2 million in landing fees from Comair during the strike. Impacts can be felt not only at hub communities like Cincinnati, but also at smaller spoke communities that may be served only by the striking airline. When Northwest Airlines pilots struck in 1998, for example, Mesaba Airlines, a regional affiliate, suspended operations as well. At least 12 of the communities served by Mesaba during the Northwest strike had no other air service. One of these locations was Houghton, Michigan. According to local reports, travelers to and from Houghton had to drive as far as Green Bay (213 miles from Hancock, Michigan, the location of Houghton’s airport) or Wausau, Wisconsin (192 miles away), to find alternative flights. DOT also recognized the possible impacts of halting all airline service. The department ordered Mesaba to restore service to 12 communities served from Minneapolis under the terms of Mesaba’s Essential Air Service contract. However, before the order was implemented, the strike ended, and service was restored to these communities. Full Impacts at the Community Level Are Largely Unknown While the available information indicates that airline strikes can and do have adverse impacts on communities, we identified no published studies that attempt to comprehensively measure these impacts at the community level. The kinds of impacts cited above, for example, may have mitigating factors that need to be taken into account.
In the Comair strike, for example, union strike funds replaced some of the lost income of strikers. ALPA approved payments of $1,400 per month to striking Comair pilots during the strike period, allowing them to spend at a reduced rate in the community. A study that reliably estimated the impact of a strike at the community level would need to take factors such as these into account. No such study has been done. Another reason for uncertainty about the full impacts of a strike on a community is that the impact of a strike on passengers’ travel decisions is often unknown. For example, while more than 100 communities lost Comair service to and from Cincinnati during the strike, all of these communities had service to Cincinnati from another airline. Thus, although hotel occupancy reportedly fell by more than 18 percent in Northern Kentucky in the strike’s first month, the degree to which this drop was attributable solely to the strike is unknown. Apart from community-level analysis of strikes, some studies have examined the overall economic impacts of aviation on regions or states. For example, the Campbell-Hill Aviation Group, on behalf of an industry interest group, published a report examining the state-level impact of a potential loss of aviation service, but this study did not evaluate the impact of any particular strikes on local or regional economies. The study stated, for instance, that in the year ending in March 2002, Delta had 10 percent of the passenger traffic in Texas, and it projected that a 10 percent reduction in aviation benefits would cause a daily reduction of $17.7 million in one measure of the Texas economy, its gross domestic product (GDP). DOT also has on occasion produced wide-ranging assessments of the impacts of potential airline strikes, but these studies have never addressed the impacts of strikes that actually occurred. 
These studies are conducted at the request of the NMB, which uses them in evaluating whether the labor dispute threatens to interrupt essential transportation services in any part of the country. Once the NMB makes this assessment, it notifies the President, who may, at his discretion, empanel a Presidential Emergency Board (PEB). If the NMB believes an airline strike is probable, it may request the department to examine the possible economic consequences of that strike. The department reports the extent of potentially lost air service to hub and spoke cities of the affected carrier, the number of passengers who would have no service if a strike were to occur, possible financial impacts on the carrier, indirect impacts on the national economy, and the factors that could mitigate or aggravate a strike’s impacts. While DOT’s reviews may examine many areas that could be affected by a strike, they examine only potential strikes and are not conducted after actual strikes. Community-Level Impact of Any Future Strike Would Depend Partly on Service Available from Other Airlines While comprehensive studies of community-level impacts of past strikes are not available, one thing that emerges from our analysis is that any future strike’s impact on a given community is likely to be affected by the level of service available from other airlines. If alternative service is greatly limited, travelers may have to take alternative—and less direct—routes offered by other airlines, or, in extreme cases, travel great distances to other airports in order to fly at all. Those impacts on travelers and businesses will vary depending on whether the community is a hub or spoke destination and even among an airline’s hub and spoke destinations. The impact of a future strike at an airline’s hub locations would depend in part on which airline is involved in the strike and its market share at the hub. 
Some airlines dominate air traffic at their hubs to a much greater extent than others, and a strike involving an airline with a dominant position at most of its hubs would likely have more impact than a strike involving an airline whose hubs face greater competition. In 2001, the airlines with the most and least dominated hubs (based on the percentage of total available seats controlled by the hubbing airline) were US Airways and America West. (See fig. 6.) US Airways averaged 81 percent of the seats offered at its hubs, while America West averaged 32 percent. Thus, based on the loss of seating capacity at its hubs, a strike that halted service at US Airways would likely have substantially more impact on its hub communities than a comparable strike at America West. Among a single airline’s hub cities, the impact of a strike would also likely vary depending on service available from alternate carriers at those cities. Again, the impact of a strike at the hubbing carrier or its regional partners would be more substantial at more highly dominated hubs. For example, in 2001, Delta and its regional partners accounted for 91 percent of the seats available in Cincinnati, but only 19 percent of available seats at the Dallas/Fort Worth International Airport, the lowest market share among Delta’s hubs. Consequently, a strike against Delta would likely have caused much greater disruption in Cincinnati than in Dallas. In contrast to the differences among Delta’s hubs, the impact of a strike at Northwest would likely be felt about equally at its Minneapolis/St. Paul, Detroit, and Memphis hubs: at each, Northwest offered between 77 and 80 percent of available seats. As at hubs, the impacts of strikes on available air service at spoke cities would also depend on the amount and type of available alternative service. 
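The dominance measure used above is a simple share-of-seats calculation. As a minimal sketch (the function and the seat totals are illustrative, chosen only to reproduce the percentages cited in the text; they are not the underlying schedule data):

```python
# Hub dominance: the hubbing airline's (and its regional partners')
# share of total scheduled seats at a hub, expressed as a percent.
# Seat counts below are hypothetical round numbers that reproduce the
# 2001 shares cited in the text for Delta's hubs.

def hub_dominance(airline_seats: int, total_seats: int) -> float:
    """Return the hubbing airline's share of available seats, in percent."""
    return 100.0 * airline_seats / total_seats

# Delta and partners: Cincinnati (highly dominated) vs. Dallas/Fort Worth.
cincinnati = hub_dominance(airline_seats=91_000, total_seats=100_000)
dallas = hub_dominance(airline_seats=19_000, total_seats=100_000)

print(round(cincinnati))  # 91 — a strike would disrupt most seats here
print(round(dallas))      # 19 — competing carriers absorb more traffic
```

Comparing these shares across an airline’s hubs is what distinguishes, for instance, US Airways (averaging 81 percent at its hubs) from America West (32 percent).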
Those communities with air service from other carriers have a greater opportunity to mitigate the potential impact of a strike by enabling travelers to access the national air system using competing airlines. For example, figure 7 shows available air service, as of April 2003, at spoke communities served by Delta’s regional partner, Comair, from Cincinnati, and by Northwest’s regional carrier, Mesaba, from Minneapolis-St. Paul. Comair provided nonstop service to a total of 101 U.S. communities from Cincinnati. All but one of these communities had alternative service to Cincinnati from another airline—64 with nonstop service, 36 with one-stop service. Thus, if Comair’s operations were to be disrupted by a strike, passengers at these communities would still have the opportunity for service to and from Cincinnati. The picture at Minneapolis-St. Paul is somewhat different. There, 10 of the 47 spoke cities served by Mesaba would have no alternative service to Minneapolis-St. Paul. Other Factors Also Influence the Total Impact of Airline Strikes Several other factors could also influence the impact of a future strike on a community. The length of the strike is one such factor; longer strikes are more likely to have an adverse impact. Since deregulation, strikes have varied from 24 minutes for an American pilot strike in 1997 to almost 2 years for a Continental mechanics strike (1983–1985). Another likely factor is financial preparation; as already mentioned, the local impact of the Comair strike was likely mitigated somewhat by the union’s payments to striking pilots. Similarly, the ability of airlines to operate through a strike—whether by hiring replacement workers or having union members cross picket lines—could also influence a strike’s impact. For example, during a strike by Continental mechanics lasting almost 2 years, some Continental workers crossed the picket line and continued working. 
This allowed Continental to continue operation after a shutdown of only 3 days. Tactics used by the striking union can also reduce the overall impact. Alaska flight attendants used a technique called “CHAOS” (Creating Havoc Around Our System) that involved intermittent walkouts of certain crews on certain days. This tactic kept certain flights from operating, but did not shut down the entire airline. Nonstrike Work Actions Have Greater Impacts on Passengers than Lengthy Negotiations Our analysis indicates that passenger service has been affected more adversely by nonstrike work actions than by an increase in the length of negotiations. Generally, but not always, as negotiation periods increased, on-time flights declined slightly. However, the impact of these negotiations has been unclear, because the decline may also have been affected by other factors such as poor weather, aircraft maintenance, runway closures, air traffic control system decisions, or equipment failures. By comparison, the 10 court-recognized, nonstrike work actions more clearly resulted in negative impacts on passengers, as shown through such measures as a decrease in the number of on-time flights, an increase in the number of flight problem complaints, and a decrease in passenger traffic. Impact of Negotiation Lengths on Passengers Is Unclear Our analyses found a slight correlation between the length of negotiations and adverse impacts on passengers. We analyzed 23 negotiations between airlines and pilot unions from 1987 to 2002. As negotiations lengthened, the frequency of on-time arrivals declined slightly. However, it is not clear whether the change in on-time flights is attributable solely to negotiation lengths or whether other factors may also have contributed to the on-time performance. DOT’s data on flight arrival and departure timeliness indicate whether a flight is delayed, but not what caused the delay. 
Common factors for delays include severe weather, aircraft maintenance, runway closures, customer service issues (e.g., baggage and accommodating passengers with special needs, such as those in wheelchairs or youths requiring escorts), air traffic control system decisions, and equipment failures. Thus, despite the apparent relation between lengthening negotiations and a deterioration of service quality, other exogenous factors may explain the change in flight delays. Nonstrike Work Actions Have Clearer Adverse Impacts on Passengers Available data indicate that nonstrike work actions have had adverse impacts on passengers. While DOT data do not specifically identify these actions as the causes for the delays or the reasons for the complaints, increases in the number of late flights and passenger complaints, and decreases in passenger traffic, during the period of the actions suggest a clearer relationship than is apparent between these same measures and lengthy negotiations. The periods in which nonstrike work actions occurred show decreases in on-time flights, increases in passenger complaints, and decreases in passenger traffic. Two examples of such actions, the American pilot sickout and the Delta pilot slowdown, are described in the next two sections. American Pilot Sickout American experienced decreases in on-time flights, increases in customer complaints, and drops in passenger traffic during a pilot sickout. (Under FAA regulations, any airline pilot can take himself out of the cockpit if he is sick, overly stressed, or does not feel “fit to fly.” During a sickout, pilots utilize these regulations to excuse themselves from work in order to put economic pressure on the airline during the negotiation.) In December 1998, AMR Corp., the parent company of American, purchased Reno Air, whose pilots were then to be integrated into a single workforce. 
In early 1999, American pilots began a sickout over a dispute involving a side agreement that would integrate Reno Air operations. On February 10, 1999, a federal judge ordered the pilots to return to work. Subsequently, the number of flights cancelled increased. On February 13, 1999, the judge found the pilots’ union in contempt of court. By February 16, the airline reported a return to its normal schedule, but pilots reportedly were still refusing to work overtime and were adhering to work-to-rule practices, meaning that they would follow every regulation stipulated by the FAA in order to slow the airline. Figure 8 illustrates on-time arrival and departure rates at Dallas/Fort Worth International Airport for the period of August 1998 through December 1999 for American and Delta, which also operates a hub at that airport. The on-time flight statistics for the two airlines are relatively equal prior to the sickout period. During the next several months, American’s on-time record fell below that of Delta. Both carriers’ on-time rates declined somewhat, suggesting that other factors such as weather might also influence flight operations. However, the difference between the two airlines during this period is greater than in other periods. In August 1999, when Reno Air’s operations were officially integrated—even though no agreement was made—the two airlines’ records resumed a more closely parallel path. The American sickout also caused increases in passenger flight problem complaints. Figure 9 compares the change in complaints against American and Delta. The complaints began to rise in February 1999 and, generally, continued to increase into the summer, when American reached an agreement with its pilots. A comparison of passenger traffic between American and Delta at Dallas/Fort Worth International Airport indicates that passengers either avoided the carrier experiencing the nonstrike work action or were kept away by grounded flights. 
(American grounded up to 2,250 flights per day during the sickout period.) (See fig. 10.) During the American pilot sickout in February 1999, there was a drop in American’s passenger traffic. Compared to the year before, American’s passenger traffic declined by 15 percent while Delta’s passenger traffic rose by 5 percent. Delta Pilot Slowdown Another example of the impact of nonstrike work actions on passengers is the Delta slowdown in 2000–2001. In September 1999, Delta began negotiations with its pilots and submitted a contract proposal, which sought to tie future raises to the company’s financial performance. As a result, Delta pilots began refusing to fly overtime in the winter of 2000. When compared to Continental’s operations at Atlanta Hartsfield International Airport, Delta experienced substantial declines in on-time flights and increases in flight problem complaints while also experiencing declines in passenger traffic. Delta first went to court on December 5, 2000, and was denied an injunction. The airline appealed to the Eleventh Circuit, which on January 18, 2001, overturned the denial and remanded the case for an injunction. Figure 11 shows the percent of on-time flights for both Delta and Continental at Atlanta’s Hartsfield International Airport for the period of August 2000 to August 2001. During the slowdown period from December to January, there is a decline in Delta’s on-time flights relative to Continental’s. Once the court issued an injunction against the union, the two airlines resumed a more similar pattern. Delta’s pilot slowdown also showed an increase in passenger complaints during this period. Figure 12 compares the change in passenger flight problem complaints about Delta and Continental during Delta’s slowdown. Flight complaints rose sharply in December and January, peaking at 185 in January 2001, and immediately declining after the union was enjoined on January 18, 2001. 
Finally, Delta’s passenger traffic at Atlanta Hartsfield International Airport also declined during the slowdown, but the pattern was less pronounced than for the American sickout discussed earlier. (See fig. 13.) In December 2000, when Delta first pursued an injunction in court, Delta’s and Continental’s passenger traffic dropped by 9 and 4 percent, respectively. Unlike the American sickout (when up to 2,250 flights were grounded per day), Delta pilots’ refusal to fly overtime grounded far fewer flights—about 100 to 125 per day—which means fewer passengers were affected by cancelled flights as compared to American. Agency Comments We provided copies of a draft of this report to NMB for review and comment. NMB indicated it generally agreed with the accuracy of our report, and it provided technical clarifications, which were incorporated into the report as appropriate. The NMB also provided an additional statement, which is included in appendix VIII. We also provided selected portions of a draft of this report to the major airlines and unions to verify the presentation of factual material. We incorporated their technical clarifications as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will provide copies to the Honorable Francis J. Duggan, Chairman of the National Mediation Board; the Honorable Norman Y. Mineta, Secretary of Transportation; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-2834, [email protected], or Steve Martin at (202) 512-2834, [email protected]. Appendix IX lists key contacts and key contributors to this report. 
Appendix I: Additional Questions In addition to the three primary questions, you asked us how many states use a system of binding arbitration and last offer arbitration with their essential service personnel. You also asked how many times in the last 25 years Congress has had to intervene in disputes with railroads and what the outcomes were. As of November 2002, according to information from officials of Harvard University, 23 states—including the District of Columbia—use binding arbitration and/or last offer arbitration as arbitration options. (See table 2.) Of those, none use last offer arbitration as their sole arbitration option. According to information from the National Mediation Board, in the last 25 years Congress intervened in railroad negotiations eight times. These interventions occurred between 1982 and 1992. (See table 3.) None of these congressional interventions involved the airlines. Appendix II: Objectives, Scope, and Methodology This report examines the following three questions: What have been the major trends of labor negotiations in the airline industry since the industry was deregulated in 1978, including the number and length of negotiations and the number of strikes, presidential interventions to avoid or end strikes, and nonstrike work actions? What has been the impact of airline strikes on communities? What have been the impacts of the length of negotiations and the occurrence of nonstrike work actions on passengers? To determine the trends of airline labor negotiations, including the length of negotiations, the number of strikes, the number of presidential interventions, and the number of nonstrike work actions, we analyzed data from multiple sources. We obtained our data from major U.S. airlines and various labor organizations. 
The labor groups included the Air Line Pilots Association (ALPA), the Coalition of Airline Pilots Associations (CAPA), the Association of Flight Attendants (AFA), the International Association of Machinists and Aerospace Workers (IAM), and the International Brotherhood of Teamsters (IBT). We also received substantial negotiation and contract data from the U.S. National Mediation Board (NMB) and the Airline Industrial Relations Conference (AIRCon), a group funded by major U.S. airlines to facilitate the exchange of contract negotiation information and other labor relations matters among carriers. Because data were not available for commuter (regional) and all-cargo carriers, we originally limited our analysis to passenger airlines that are considered majors by the U.S. Department of Transportation (DOT) and that were in operation during 2001. These airlines were Alaska, America West, American, American Eagle, American Trans Air (recently renamed ATA Airlines), Continental, Delta, Northwest, Southwest, TWA, United, and US Airways. We later were not able to include American Eagle or American Trans Air, which met the DOT criteria, in our analysis because we were not able to obtain information on these airlines. Dates listed as negotiation start dates differ among the airlines, AIRCon, and NMB, limiting the accuracy of the data collected. A negotiation’s “start date” can be when the carrier’s management or union exchange a written notice stating that one of the parties desires a change in rates of pay, work rules, or working conditions or when face-to-face negotiations actually begin (i.e., when the two parties sit at a table and verbally negotiate the contract). By contrast, the NMB records a “start date” only when it is called in for mediation. For the purposes of our data collection, we first used dates provided by the airlines to AIRCon at the time the contract was being negotiated. 
If those were not available, we turned to the dates provided directly to us by the airlines from their files, when available. We were supplied different dates, including ratification dates and settlement dates, for the end point of negotiations. We know of at least one union that did not have its members vote to ratify contract changes until after 1982. Again, we first used AIRCon-provided ratification or settlement dates, if possible, and, in cases where these were not available, we used airline-provided dates or dates provided by NMB. We were unable to calculate a negotiation length for 83 of the 236 contracts because we could not identify either a start date or a ratification or settlement date for them. In addition, we did not calculate negotiation lengths for 6 initial contracts, the first contract a union signs after a craft or class becomes recognized at an airline. To obtain information on nonstrike work actions, we examined media sources and reviewed federal court records. Based on the information we were able to review, we defined court-recognized, nonstrike work actions as those work actions for which airlines obtained either temporary restraining orders or injunctions against unions. Officials from the airlines we spoke with stated that there have been many more nonstrike work actions than the 10 recognized by the courts. Even some union officials stated that union members have taken actions that they considered legal under their contract or Federal Aviation Administration (FAA) regulations. These same actions, on other occasions, have been found to be violations of the status quo by the courts. Additional cases of nonstrike work actions, however, have been difficult to prove. Airline management has either been unable to produce the needed evidence in court, or airlines never took unions to court. Union officials also strenuously deny illegal activity on the part of their unions. 
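The date-selection rules described above amount to a simple source-preference cascade: AIRCon dates first, then airline-supplied dates, then (for end dates) NMB dates, with contracts excluded when no usable pair can be found. A minimal sketch of that logic (the field names and the sample record are hypothetical, not the actual dataset):

```python
from datetime import date

# Preference order per the methodology above: AIRCon-reported dates,
# then airline-supplied dates, then NMB dates (end dates only).
# Field names are illustrative.

def pick_date(record, keys):
    """Return the first available date among the preferred sources."""
    for key in keys:
        if record.get(key) is not None:
            return record[key]
    return None

def negotiation_length_days(record):
    start = pick_date(record, ["aircon_start", "airline_start"])
    end = pick_date(record, ["aircon_end", "airline_end", "nmb_end"])
    if start is None or end is None:
        return None  # excluded, as with 83 of the 236 contracts
    return (end - start).days

# Hypothetical contract: no AIRCon start date, so fall back to the airline's.
contract = {
    "aircon_start": None,
    "airline_start": date(1999, 9, 1),
    "aircon_end": date(2001, 6, 20),
}
print(negotiation_length_days(contract))  # 658
```

The cascade makes the stated limitation concrete: because the sources disagree on what a "start date" is, the computed length depends on which source happened to be available for each contract.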
We interviewed officials from airlines, labor unions, the NMB, and industry groups. The airlines we spoke with included American, American Trans Air, Continental, Delta, Northwest, Southwest, Comair, Atlantic Coast Airlines, Federal Express, United Parcel Service, and Airborne Express. We analyzed data only from airlines for which we could obtain complete data. The labor groups we interviewed included ALPA, CAPA, AFA, IAM, and IBT. We also held discussions with officials from NMB, the Air Transport Association (ATA), Communities for Economic Strength Through Aviation (CESTA), and AIRCon. To determine the impact of airline strikes on communities, we searched for studies of these impacts from airlines, industry groups, and academic institutions. Specifically, we talked with United, Delta, Comair, ATA, and CESTA. Based on suggestions from airlines, unions, interest groups, and our own research, we also talked with faculty at Harvard, the Massachusetts Institute of Technology, the University of Cincinnati, and the University of Kentucky. None of these sources knew of any published studies on specific impacts of past strikes on any community. In discussions with NMB, we learned that DOT produces studies, solely at the request of NMB, on the likely impacts of probable airline strikes on the airline and local and national economies. We obtained a copy of one of these studies from DOT. We also analyzed data on airline schedules and market share from Sabre, Inc.; BACK Aviation Solutions; and the Campbell-Hill Aviation Group. We also reviewed local media reports from communities affected by strikes. Due to the lack of published studies or a generally accepted methodology to determine the impact of strikes, we cannot discount other possible causes for these impacts. 
To determine the impact of the length of negotiations and court-recognized, nonstrike work actions on passengers, we analyzed data on airline operational performance from DOT’s Air Travel Consumer Report and passenger traffic information from BACK Aviation Solutions. To determine the impact of negotiation lengths, we compared on-time performance throughout the course of 23 negotiations between airlines and pilot unions. To determine the impact of nonstrike work actions, we compared airlines’ on-time performance and flight complaints before, during, and after the 10 court-recognized, nonstrike work actions. We also analyzed changes in passenger traffic among airlines during these actions. Though our analysis included performing a correlation between on-time arrivals and the length of airline labor contract negotiations, we did not perform any multivariate analysis and, thus, cannot rule out possible alternative causes. We conducted our review between August 2002 and May 2003 in accordance with generally accepted government auditing standards. Appendix III: Additional Background Information on the Railway Labor Act The Railway Labor Act, 45 U.S.C. § 151 et seq. (RLA), was passed by Congress in May 1926 to improve labor-management relations in the railroad industry. In January 1926, a committee of railway executives and union representatives jointly presented a draft bill to Congress that was universally supported by those in the industry. Congress did not make any changes of substance to the bill, and the RLA was signed by the President on May 20, 1926. Congress has not altered the basic structure of the act that labor and management use to resolve what are known as “major disputes,” i.e., disputes over the creation of, or change of, agreements concerning rates of pay, rules, or working conditions. After discussions with airline management and labor, the act was applied to air carriers in 1936. 
As a method to keep labor disputes from interrupting commerce, the new law represented a significant departure from past labor practices by requiring both sides to preserve the status quo during collective bargaining and preventing either side from taking unilateral action. When labor and management representatives drafted the legislation, they agreed that both sides of a labor dispute should negotiate the dispute and not make any change in the working conditions in dispute until all issues were worked out under the deliberate process outlined in the act. Key Provisions of the RLA The RLA is not a detailed statute. The main purposes of the act are threefold. First, Congress intended to establish a system that resolves labor disputes without interrupting commerce in the airline and railroad industries. The statute requires both labor and management “to exert every reasonable effort to make and maintain agreements ... and to settle all disputes ....” The Supreme Court has described that duty as being the “heart” of the act. Second, the act imposes on the parties an obligation to preserve and to maintain unchanged during the collective bargaining process “those actual, objective working conditions and practices, broadly conceived, which were in effect prior to the time the pending dispute arose and which are involved in or related to that dispute.” This is generally known as “maintaining the status quo.” Finally, the act requires that: “Representatives, for the purposes of this Act, shall be designated by the respective parties ... 
without interference, influence, or coercion exercised by either party over the self-organization or designation of representatives by the other.” That obligation was strengthened in 1934 so as to prohibit either party from interfering with, influencing, or coercing “the other in its choice of representatives.” Collective Bargaining Process under the RLA The collective bargaining process established by the RLA is designed to preserve labor relations peace. The carrier is required to maintain the status quo before, during, and for some time after the period of formal negotiations. The union and the employees have the reciprocal obligation to refrain, during the same period, from engaging in actions designed to economically harm the company, such as strikes. Such actions are termed economic self-help in the act. Airline labor and management periodically engage in negotiations to reach a comprehensive collective bargaining agreement that will remain in effect for a defined period, usually 2 or 3 years. The parties are required to submit written notices (“Section 6 notices”) of proposed changes in rates of pay, rules, and working conditions. In some cases, the parties may agree that collective bargaining will proceed according to a particular time schedule. If those direct discussions do not result in an agreement resolving a dispute, either party or the National Mediation Board (NMB) can initiate mediation. The RLA requires both parties to maintain collectively bargained rates of pay, rules, and working conditions while they negotiate amendments to the agreement. This requirement extends the status quo after an existing agreement becomes amendable if no agreement is reached by that time. If mediation proves unsuccessful, the NMB appeals to the parties to submit the dispute to binding interest arbitration. If that is unsuccessful, the statute provides for a 30-day cooling-off period. There can be no lawful self-help by either side during this period. 
Even after the termination of the 30-day period, the self-help option is contingent. If a dispute threatens “substantially to interrupt interstate commerce to a degree such as to deprive any section of the country of essential transportation services,” the President, upon notification by the NMB, is empowered to create an emergency board to investigate the dispute and issue a report that is followed by an additional 30-day period for final negotiations. After this process, the parties are left to self-help and further negotiation to reach a settlement. The only alternative is congressional action, which has never been used in an airline labor dispute. Appendix IV: Contracts Negotiated and Ratified or Settled by the Amendable Date Appendix V: Airline Strikes That Have Occurred Since Deregulation Appendix VI: Court-recognized, Nonstrike Work Actions Since Deregulation Appendix VII: Number of Presidential Interventions Since Deregulation Appendix VIII: Comments from the National Mediation Board Appendix IX: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to those individuals named above, Jonathan Bachman, Brandon Haller, David Hooper, Terence Lam, Dawn Locke, Sara Ann Moessbauer, Stan Stenersen, and Stacey Thompson made key contributions to this report. Highlights Labor negotiations in the airline industry fall under the Railway Labor Act. Under this act, airline labor contracts do not expire, but instead, become amendable. To help labor and management reach agreement before a strike occurs, the act also provides a process—including possible intervention by the President—that is designed to reduce the incidence of strikes. Despite these provisions, negotiations between airlines and their unions have sometimes been contentious, and strikes have occurred. Because air transportation is such a vital link in the nation's economic infrastructure, a strike at a major U.S. 
airline may exert a significant economic impact on affected communities. Additionally, if an airline's labor and management were to engage in contentious and prolonged negotiations, the airline's operations--and customer service--could suffer. GAO was asked to examine trends in airline labor negotiations in the 25 years since the industry was deregulated in 1978, the impact of airline strikes on communities, and the impact of lengthy contract negotiations and nonstrike work actions (such as "sickouts") on passengers. Since the airline industry was deregulated in 1978, the average length of negotiations has increased, strikes have declined, and nonstrike work actions (e.g., sickouts) have increased. After 1990, the median length of time needed for labor and management at U.S. major airlines to reach agreement on contracts increased from 9 to 15 months. Of the 16 strikes that occurred at those airlines since 1978, 12 occurred prior to 1990, and 4 occurred subsequently. All 10 court-recognized, nonstrike work actions and all six presidential interventions occurred since 1993. Airline strikes have had obvious negative impacts on communities, including lost income for striking and laid off workers, disrupted travel plans, and decreased spending by travelers and the struck airline. However, such impacts have yet to be thoroughly and systematically analyzed. The potential net impacts of a strike on a community would depend on a number of factors, such as availability of service from competing (nonstriking) airlines and the length of the strike. For example, of two recent strikes, one lasted 15 days and one lasted 24 minutes. GAO's analysis indicates that passenger service has been affected more adversely by nonstrike work actions than by an increase in the length of negotiations. Generally, but not always, as negotiation periods increased, there has been a slight decline in on-time flights.
However, the impact of these negotiations has been unclear because the decline may also have been affected by other factors such as poor weather. By comparison, the 10 court-recognized, nonstrike work actions more clearly resulted in negative impacts on passengers, as shown through such measures as a decrease in the number of on-time flights, an increase in the number of flight problem complaints, and a decrease in passenger traffic.
Background Employer-provided educational assistance refers to educational expenses, such as courses, tuition, and books, that are paid for either directly by the employer or indirectly through reimbursement to employees. Under section 127 of the IRC, the amount for educational assistance that employees receive from their employers is generally excludable from employees’ gross income. Generally, the assistance may be for any type of course, except those related to sports, games, or hobbies, and covers such expenses as tuition, books, supplies, and equipment. Section 127 Was Enacted in 1978 Section 127 went into effect in 1979 with passage of the Revenue Act of 1978. Three major reasons for the enactment of section 127 were cited in a June 1988 Department of the Treasury report: (1) to reduce the complexity of the tax system, (2) to reduce possible inequities among taxpayers, and (3) to provide opportunities for upward mobility for less educated individuals. This report also explained that prior law required individuals to pay income taxes on the amount paid for training for new jobs or occupations if an employer had paid for it, a situation that was considered inequitable for those least able to pay but most in need of the education. Since section 127 was first authorized, it has expired and has been extended eight times. Several changes were made by these extensions, including setting an annual limit on the amount that can be excluded from employees’ gross income (currently $5,250) and establishing an annual reporting requirement (Form 5500). Based on our 1989 report which noted an absence of data on the use of employer-provided educational assistance, IRS developed Schedule F to supplement Form 5500. This form and schedule require employers to report information on (1) the number of employees eligible for such assistance, (2) the number of employees who received such assistance, and (3) the value of the assistance provided. 
The series of expirations and extensions of section 127 has contributed to problems with the quality of available IRS data. For example, the 1993 extension was not enacted until several months after section 127 had expired and the tax filing year had ended. Companies experienced uncertainty about how and whether to report educational assistance as employee income and withhold taxes from it. Some companies included assistance in employees’ income and withheld taxes; others did not. Ultimately, the extension was passed retroactively, which resulted in those companies that had reported the assistance as employee income having to refile employee wage statements and tax returns. Because of this, IRS has acknowledged that some employers may not have filed the required IRS forms, and thus information about section 127 reported to IRS may not be complete. However, IRS officials told us that they believe most employers complied with the reporting requirements during the last expiration period. Other Tax Treatment of Educational Assistance Aside from section 127, three tax provisions generally determine the tax treatment of educational assistance: Section 117—Qualified Scholarships; Section 132—Certain Fringe Benefits; and Section 62(a)(2)(A)—Reimbursed Expenses of Employees. Under section 117, educational assistance provided by qualified educational institutions to employees and their dependents can be excluded from employees’ gross income. Under section 132, employees can exclude employer-provided educational assistance from their gross income if it qualifies as a working condition fringe benefit. A working condition fringe benefit is any property or service an employer provides to an employee that, had the employee paid for such property or services, the employee could deduct the payment from his/her income as a business expense.
Under section 62, employees can exclude educational assistance from their income if the assistance is provided as part of an employer’s “accountable plan,” which requires that assistance be business-related, expenses be documented, and employees be required to return any excess payments. These three provisions do not have a dollar limit on the amount of assistance that can be excluded. Otherwise, employer-provided educational assistance must be included in employees’ gross income. Under certain conditions, however, employees may deduct educational assistance that has been included in their income. Whether such assistance can be deducted depends on whether the assistance qualifies as a business expense under section 162 of the IRC. Section 162 discusses the circumstances under which an employer’s educational assistance expenses are related to an employee’s job and therefore are deductible. It allows individuals who itemize deductions on their tax returns to deduct qualifying job-related educational expenses and other miscellaneous itemized deductions from gross income. Moreover, qualifying educational expenses and other miscellaneous itemized deductions are deductible only to the extent that they exceed 2 percent of the individual’s adjusted gross income. Figure 1 illustrates an IRS decision aid to assist taxpayers in determining whether educational expenses are deductible. Objectives, Scope, and Methodology Our objectives were to provide information about (1) employer-provided educational assistance, including the characteristics of employers providing educational assistance and employees eligible for and receiving it, and (2) other tax provisions related to employer-provided educational assistance and how they differ from section 127. To achieve the first objective, we identified and obtained relevant data.
We did this by contacting the IRS, searching the Internet to identify relevant organizations and research, and contacting various associations. Because no one source of information provided comprehensive data, we used available data from a variety of sources. We relied primarily on data from Form 5500 and Schedule F, provided to us by IRS, and data from NPSAS to develop our profile of employers and employees. We also used information about the eligibility of employees for educational assistance from BLS reports on employee benefits. The IRS data consisted of information about employers providing educational assistance and employees eligible for and receiving section 127 assistance. Employers reported this information to IRS on Form 5500 and its Schedule F for 1992 through 1994. IRS provided us with a database of these employer returns for 1992 and 1993, and for 1994 returns filed with IRS through May 31, 1996, which IRS officials told us represented 90 percent of the data it expects to receive from employers for 1994. We used these data to develop information about employer characteristics, such as how many programs employers have that offered educational assistance and employer size. We also used these data to identify the number of employees eligible for and receiving this assistance and the dollar amount of the assistance provided. NPSAS data consisted of information about postsecondary student aid, including section 127 assistance, from postsecondary students, their parents, and educational institutions. The Department of Education’s National Center for Education Statistics (NCES) conducted this national study for the academic year 1992 to 1993 and compiled the results in the NPSAS database. We obtained the NPSAS database from the NCES and used the data primarily to develop information on characteristics of employees, such as their level and type of education. The BLS reports contained estimated data about employee eligibility for educational assistance. 
This information was collected by BLS as part of its surveys on employee benefits in small, medium, and large private establishments. These reports were done over a 2-year period: the survey of employee benefits in medium and large private establishments covered 1993, and the survey of employee benefits in small private establishments covered 1994. We did not verify the validity or reliability of the data collected by IRS or the other agency study sponsors. Because of the series of expirations and extensions of section 127, some employers may not have filed Form 5500 and Schedule F with IRS as required. Further, not all employers who filed provided all required data on Form 5500 and Schedule F. For example, of the employer returns that reported providing educational assistance, 65 percent reported the amount of assistance in 1992; 74 percent, in 1993; and 80 percent, in 1994. Appendix II includes a list of employer and employee characteristics and the percentage of returns that reported this information for 1992, 1993, and 1994. In addition, the analyses generated from each of the three different data sources cannot be compared with one another because of variations in data collection methods, definitions, populations covered, and periods covered. Appendix II explains in detail the data sources we used, including populations covered, sampling errors as appropriate, and data limitations. To obtain additional information about the databases and the results of our analyses of them, we interviewed officials from the Department of the Treasury, IRS, the Department of Education, and BLS. We also interviewed officials from selected associations, including the National Association of Independent Colleges and Universities, the American Payroll Association, the American Society for Training and Development, and the Institute for Higher Education Policy.
To achieve the second objective, we reviewed relevant sections of the IRC, and IRS and Department of the Treasury rulings and regulations pertaining to those sections. We also interviewed officials from the Department of the Treasury, IRS, and selected associations. We did not evaluate the effectiveness of section 127 in promoting employee skills as it was beyond the scope of this assignment. We did our work from April through July 1996 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this report from IRS. These comments are discussed at the end of this report. Characteristics of Employers Providing Educational Assistance According to IRS, for each of the 3 years we reviewed, employers filed over 3,200 returns that reported information about educational assistance they provided their employees under section 127. IRS officials believe that this number included most of the employers who provided section 127 assistance but note that some employers may not have filed because of confusion resulting from the series of section 127 expirations and extensions. The employers included auto manufacturers, hospitals and medical centers, utilities, telecommunications and electronics companies, banks, and other types of businesses. The dollar amount of educational assistance employers reported increased steadily for the 3-year period we reviewed. It increased from $525.3 million for 1992 to $691.3 million for 1994. This increase may be related to the increase in the percent of employers who reported the amount of assistance to IRS for the 3-year period. For 1992, the first year IRS required employers to report data on Schedule F of Form 5500, employers filed 3,556 returns, 65 percent of which (2,294) reported the amount of educational assistance they provided employees. The reporting rate increased to 80 percent for the 1994 returns for which we have data. 
Table 1 shows the number of all returns employers filed and the number and percentage of returns that showed the amount of assistance provided, as well as the amount reported, for the 3 years we reviewed. On a per-return basis, the total amount of educational assistance and the total number of employees receiving it varied widely for each of the 3 years we reviewed, according to IRS data. For employer returns that reported the amount of educational assistance provided to all employees, IRS data showed total annual amounts ranging from less than $500 to total amounts of more than $10 million. Of those returns that reported the number of employees receiving educational assistance, the total number of recipients ranged from fewer than 10 to more than 10,000. Overall, employer-provided educational assistance per recipient averaged $1,081 for 1992, $1,046 for 1993, and $1,253 for 1994. Employers Providing Assistance Varied in Size According to IRS data, employers providing section 127 assistance ranged in size from those with fewer than 50 employees to those with more than 100,000. Our analysis showed that large employers, those with 250 or more employees, accounted for over 75 percent of the returns that reported the amount of assistance provided annually for the 3-year period we reviewed. These employers provided 99 percent of the dollar amount of section 127 assistance reported to IRS for 1992 through 1994. Figure 2 shows the distribution of the reported dollar amount of section 127 assistance by employer size. In Total, Medium and Large Employers Had More Full-Time Employees Eligible for Educational Assistance Than Did Small Employers According to BLS data, medium and large employers (100 or more employees) in total reported a higher number of their employees were eligible for educational assistance than small employers reported. 
BLS estimated that 20.8 million full-time employees of medium and large employers and 13.4 million full-time employees of small employers were eligible for job-related educational assistance. It also estimated that 6.3 million full-time employees of medium and large employers were eligible for nonjob-related educational assistance compared to 2 million full-time employees of small employers who were eligible. BLS estimated the number of part-time employees eligible for educational assistance was almost the same, regardless of employer size. About 1.9 million part-time employees of medium and large employers and 1.9 million part-time employees of small employers were estimated to be eligible for job-related education. BLS also estimated that about 440,000 part-time employees of medium and large employers were eligible for nonjob-related educational assistance compared to almost 400,000 part-time employees of small employers who were eligible. (Additional BLS information on employee eligibility is in appendix III.) Characteristics of Employees Eligible for and Receiving Employer-Provided Educational Assistance The number and percentage of employees eligible for employer-provided educational assistance who received it varied slightly for the 3-year period we reviewed, according to IRS data. Employer returns filed with IRS showed that the reported number of employees eligible for assistance under section 127 ranged from 11,208,411 for 1992 to 9,899,354 for 1994 and that about 900,000 employees received assistance in each of the 3 years. The percentage of eligible employees who received educational assistance varied from 8.25 percent for 1992 to 9.11 percent for 1993 to 8.40 percent for 1994. Figure 3 shows the reported number of employees eligible for educational assistance and receiving it. 
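The two per-return statistics discussed above can be sketched as simple ratios. The following is an illustrative sketch, not drawn from the report's data; the function names and the sample figures for a single hypothetical employer return are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical figures, not the report's data):
# the two ratios behind the statistics above -- average section 127
# assistance per recipient, and the share of eligible employees who
# actually received assistance.

def avg_per_recipient(total_assistance, recipients):
    """Average dollars of section 127 assistance per recipient."""
    return total_assistance / recipients

def pct_of_eligible_receiving(recipients, eligible):
    """Percentage of eligible employees who received assistance."""
    return 100.0 * recipients / eligible

# A hypothetical employer return: $250,000 of assistance provided to
# 200 recipients, out of 2,400 eligible employees.
print(round(avg_per_recipient(250_000, 200)))           # 1250
print(round(pct_of_eligible_receiving(200, 2_400), 2))  # 8.33
```

Applied across all returns, these same ratios yield the per-recipient averages and eligibility percentages the report cites.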
Nearly All Section 127 Recipients Worked for Large Employers In each of the 3 years we reviewed, over 95 percent of the reported recipients of section 127 assistance worked for large employers, according to IRS data. For 1992, employers reported that 96 percent of section 127 recipients worked for large employers. For both 1993 and 1994, employers reported that 99 percent of the recipients worked for large employers. Figure 4 shows the distribution of recipients by employer size. Undergraduate Students Received More Than Half of Total Section 127 Assistance According to NPSAS data, 74 percent of full-time employees receiving educational assistance in academic year 1992 to 1993 were undergraduates. Our analysis of the NPSAS data showed that about 337,000 of the estimated 456,000 employees who were enrolled in educational institutions, responded to the NPSAS survey, and reportedly received section 127 assistance were considered undergraduate students. These undergraduates received an estimated $345.9 million in assistance, nearly 60 percent of the estimated $597.3 million provided to all section 127 recipients. Table 2 shows the estimated amount and percentage of section 127 recipients by student level. Most Recipients Worked for Private-Sector Employers Further analysis of the NPSAS data showed that 446,000 (almost 98 percent) of the estimated 456,090 Section 127 recipients identified their employers. The employers of an estimated 329,000 undergraduate employees and 117,000 graduate employees were identified. Almost 75 percent of section 127 undergraduate recipients who identified their employers worked for private-sector employers. An estimated 202,000 recipients who were undergraduates worked for private sector for-profit employers. Additionally, an estimated 43,000 undergraduate recipients worked for private nonprofit employers. Figure 5 shows the estimated number and percentages of undergraduate recipients by employer type. 
Over 65 percent of graduate-level section 127 recipients worked in the private sector. More than 58,000 of an estimated 116,000 graduate students worked for private sector for-profit employers, and an estimated 18,000 others worked for private nonprofit employers. Figure 6 shows the estimated number and percentages of graduate recipients by employer type. Most Undergraduate Recipients Were in Clerical or Technical Occupations, and Most Graduates Were in Professional or Teaching Occupations Our analysis of the NPSAS data also showed that most undergraduates—about 64 percent (125,000)—worked for private, for-profit employers in clerical or technical occupations, whereas most graduates—about 89 percent (64,000)—were in professional, manager/administrative, or teaching occupations. Figure 7 shows the estimated number of section 127 recipients by identified occupations and student level. Tax Provisions Related to Employer-Provided Educational Assistance Four provisions of the IRC describe the circumstances under which the cost of employer-provided educational assistance can be excluded or deducted from employees’ gross income. To the extent the assistance meets requirements of these provisions, the IRC provisions also allow qualifying employees to exclude the amount of the assistance from their federal gross income. However, the requirements and those who can benefit from them vary by provision. Four Tax Provisions Apply to Employer-Provided Educational Assistance The following tax provisions apply to employer-provided educational assistance: Section 127: Educational Assistance Programs, Section 117: Qualified Scholarships, Section 132: Certain Fringe Benefits, and Section 62(a)(2)(A): Reimbursed Expenses of Employees. These provisions generally allow employers to deduct the cost of educational assistance provided to their employees from their taxes as an allowable deduction for necessary business expenses.
Further, such assistance, if excludable from gross income, is not subject to social security or other federal employment taxes. These provisions also allow employees receiving the assistance to exclude the value of qualifying assistance from their federal gross income. Certain Employer-Provided Educational Assistance Is Excludable Section 127. This section allows employers to provide educational assistance to their employees without having to include the value of the assistance as part of the employees’ gross income. Educational assistance under this provision does not have to be job-related; however, the assistance generally cannot be provided for education related to a sport, game, or hobby. Assistance excludable from taxes covers education expenses incurred by employees (tuition reimbursement programs) as well as by employers on their behalf or provided directly by employers to education providers. In addition, there must be written plans for the program and employees must be notified of the program’s existence. The section also has a number of specific restrictions that must be met for the program to be considered valid. Under section 127, taxpayers may exclude up to $5,250 a year of this assistance from their gross income. Employees whose assistance exceeds this cap can then use sections 62 or 132, provided they meet the job-relatedness requirements of those provisions. The recent extension of section 127 includes employer-provided assistance for undergraduate and graduate education, although it ends coverage of graduate assistance as of mid-1996. Qualified Scholarships and Tuition Reductions Are Excludable Section 117. This section allows qualifying employees of educational institutions to exclude the value of “qualified scholarship and tuition reductions” from their gross income.
Qualified scholarship and tuition reductions include tuition and related expenses, e.g., tuition and fees to enroll or attend an educational institution and fees, books, supplies, and equipment required for courses of instruction at an educational institution. Section 117 generally provides that the exclusion applies to scholarship and tuition reductions provided to employees for any education below the graduate level (except in the case of graduate student employees engaged in teaching or research activities for the institution); the exclusion also applies to scholarship and tuition reductions given to the employees’ spouses or dependents; the institution may not discriminate in favor of highly compensated employees in providing tuition reductions; and the exclusion does not apply to amounts received as payment for teaching, research, or other services by the student required as a condition for receiving the reduction. The scholarship or tuition reduction can be for job or nonjob-related education, and there is no limit on the value of assistance that can be provided. Working-Condition Fringe Benefit Exclusion Section 132. This section allows employees to exclude from their gross income employer-provided educational assistance if it is classified as a “working condition fringe benefit.” According to the IRC, a working condition fringe benefit is any property or service provided to an employee to the extent that, if the employee paid for such property or services, that payment would be deductible under section 162 or 167. 
Section 132 generally provides that the educational assistance is allowable as an exclusion only to the extent that the expenses would have been deductible by the employee as a business expense under section 162 had the employee paid for the education; and an employer’s cash payment to an employee requires the employee to (1) use the payment for expenses in connection with a prearranged activity or undertaking (for which a deduction is allowable under section 162), (2) verify that the payment is actually used for such expenses, and (3) return any part not used to the employer; otherwise, the payment will not qualify as a working condition fringe benefit. There is no limit on the value of assistance that can be provided, and the assistance can be for undergraduate or graduate education. The essential requirement is that the education be job-related, as defined by section 162. Employees would likely use this tax provision should the educational assistance they receive otherwise meet this provision’s requirements and exceed the section 127 income exclusion cap of $5,250. Employee Reimbursed Expenses May Be Excludable or Deductible Section 62(a)(2)(A). This section allows employees to exclude from gross income the value of educational assistance provided under an employer “accountable plan” for reimbursed business expenses. An accountable plan has three requirements that must be met: (1) the expenses must have a relationship to the business, i.e., meet the requirements of section 162; (2) the expenses must be documented; and (3) employees must return any excess payments. There is no limit on the value of the assistance provided, and graduates and undergraduates are eligible. Employees would be able to use this tax provision should the educational assistance they receive meet the requirements of this section and exceed the section 127 income exclusion cap of $5,250.
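The interaction between the $5,250 section 127 cap and the job-related provisions can be sketched as follows. This is a simplified illustration, not tax guidance; the constant and function names are assumed for the example.

```python
# Simplified sketch (not tax advice): split an employee's annual
# educational assistance into the portion excludable under section 127
# and the remainder. Amounts above the cap may still be excludable
# under section 62 or 132 if the education is job-related; otherwise
# they are generally includable in gross income.

SECTION_127_CAP = 5_250  # annual exclusion limit, in dollars

def split_under_section_127(amount):
    """Return (excludable under section 127, amount above the cap)."""
    excludable = min(amount, SECTION_127_CAP)
    return excludable, amount - excludable

print(split_under_section_127(7_000))  # (5250, 1750)
print(split_under_section_127(4_000))  # (4000, 0)
```

In the first case, the $1,750 above the cap would be taxable unless it qualifies as job-related under section 62 or 132.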
If employer reimbursements for educational assistance are not made under an accountable plan, the value of the assistance generally cannot be excluded from an employee’s gross income. However, if the value of the assistance is included in the employee’s gross income, the employee can deduct eligible expenses under section 162. Section 162 allows an individual to deduct certain job-related educational expenses along with other miscellaneous itemized deductions from gross income, provided these itemized deductions exceed 2 percent of the individual’s adjusted gross income. Comparison of Application and Restrictions of Different Sections The major differences between section 127 and sections 62, 117, and 132 are related to the type of education being supported, employee eligibility, limits on the value of assistance provided, and the ease of administration. First, section 127 allows employers to provide assistance for job- and nonjob-related education and requires no distinction between them. Section 117 also applies to job and nonjob-related assistance. Sections 62 and 132 are more restrictive, requiring that educational assistance meet the business relationship requirements of section 162, that is, be job-related to be excluded from gross income. Second, the four sections differ regarding who is eligible for educational assistance. Employers determine which employees are eligible under sections 127, 62, and 132, provided that the employers abide by the nondiscriminatory requirements of the provisions. Section 117 applies only to employees of qualified educational institutions and their spouses and dependents. Third, the value of assistance that is provided and excludable from income is limited to $5,250 under section 127. There are no dollar amount limits under sections 62, 117, and 132. Table 3 summarizes major similarities and differences between Section 127 and the three other tax provisions relevant to employer-provided educational assistance.
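The 2-percent-of-AGI floor on section 162 deductions described above can be illustrated with a small worked example; the figures and function name below are hypothetical.

```python
# Hypothetical illustration of the 2-percent floor under section 162:
# qualifying job-related educational expenses (together with other
# miscellaneous itemized deductions) are deductible only to the extent
# they exceed 2 percent of adjusted gross income.

def deductible_amount(misc_deductions, agi):
    """Deductible portion of miscellaneous itemized deductions."""
    floor = 0.02 * agi
    return max(0.0, misc_deductions - floor)

# With $1,500 of qualifying expenses and $50,000 AGI, the floor is
# $1,000, so $500 is deductible; $800 of expenses yields no deduction.
print(deductible_amount(1_500, 50_000))  # 500.0
print(deductible_amount(800, 50_000))    # 0.0
```

Note that this deduction is available only to employees who itemize deductions on their returns.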
Administrative Complexity of Tax Provision Requirements Differs Among Four Provisions Finally, section 127 and the three other provisions differ in terms of the administrative complexity of the tax provisions’ requirements. According to IRS and association representatives we interviewed, section 127 is straightforward and simple relative to the other sections. Section 127 does not require the employer and employee to demonstrate or document that the educational assistance is job-related, as do sections 62 and 132. The job-relatedness requirements of section 162 that apply to sections 62 and 132 are not always clear. This lack of clarity has resulted in numerous rulings by IRS to further define what is meant by job-related education. IRS reporting requirements are also different under section 127. Employers are required to file Form 5500 and its Schedule F with IRS to report information on the assistance. This information consists primarily of the number of employees, the number who are eligible for assistance, the number who received the assistance, and the amount of the assistance provided. Under section 127, employers are not required to document whether or not the assistance is job-related. On the other hand, association and corporate representatives told us that the requirements of sections 62 and 132 are burdensome because Treasury regulations require documentation such as an accountable plan or other job-related information. In addition, according to the IRC, employers are required to report the amount of nonjob-related assistance as part of employees’ gross income on W-2s and to withhold income, unemployment, Social Security, and other employment taxes. Under all three sections, employees have to meet the documentation requirements set by their employers, e.g., provide receipts for course-related expenses.
If employer-provided educational assistance is not excluded from an employee’s income, the employee may still be allowed to deduct it as a business-related expense under section 162. The employee can deduct the value of the assistance if (1) the assistance meets the job-relatedness test of section 162 and (2) the employee itemizes deductions. The employee needs documentation to support any income tax deductions for educational expenses. Employees who do not itemize deductions, however, cannot claim the deduction and are required to pay tax on the nonexcludable assistance received. Therefore, educational assistance that is not excludable or deductible from an employee’s income is to be reported and taxed. Agency Comments On November 21, 1996, we discussed our draft report with IRS officials who are responsible for administering the fringe benefit plans, including employer educational assistance reporting requirements. The officials, including the Acting Chief Compliance Officer, agreed with the IRS material in our report. We are sending copies of this report to Chairmen and Ranking Minority Members of congressional committees that have responsibilities related to these issues, the Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others upon request. The major contributors to this report are listed in appendix IV. Please contact me at (202) 512-9044 if you or your staff have any questions about this report. Examples Illustrating IRS’ Determination of Deductibility of Job-Related Educational Expenses as Business-Related Expenses The following three examples illustrate how the Internal Revenue Service would determine whether job-related educational expenses would be deductible as business-related expenses under section 162 of the Internal Revenue Code (IRC). They are from the Department of the Treasury regulations covering section 162. 
Examples of Determinations of Whether Job-Related Expenses Are Deductible Under IRC Section 162 Example 1: A person holding a bachelor’s degree obtains temporary employment as a university instructor and undertakes courses as a candidate for a graduate degree. The person may become a faculty member only if he or she obtains a graduate degree and may continue to hold a position as instructor only so long as he or she shows satisfactory progress toward obtaining this graduate degree. Determination: The graduate courses taken by the person constitute education required to meet the minimum educational requirements for qualifying in his trade or business, and the expenses for these courses are therefore not deductible. Example 2: An individual employed in a profession other than law, for example engineering, is required by his employer to obtain a bachelor of laws degree. The employee therefore attends law school at night and receives the degree. Determination: Though he intends to continue practicing his nonlegal profession as an employee, the expenditures made by him in attending law school are nondeductible because this course of study qualifies him for a new trade or business. Example 3: A general medicine practitioner takes a 2-week course reviewing new developments in several specialized fields of medicine. Determination: His expenses for the course are deductible because the course maintains or improves skills required by him in his trade or business and does not qualify him for a new trade or business. Technical Appendix: Methodology for Analyzing IRS and NPSAS Data Three Primary Sources Used: IRS, NPSAS, and BLS To achieve our objective to report information on the characteristics of the employers providing educational assistance and the employees receiving it under section 127 of the Internal Revenue Code, we used available data from three primary sources. 
We relied mainly on data from IRS’s Form 5500 and its Schedule F to develop information about employers providing educational assistance. To develop information about employees, we relied primarily on NPSAS data. We supplemented employee information by using BLS reports on employee benefits. Details about each data source and dataset follow. We did not verify the validity of the data collected by IRS or the other agency study sponsors. However, if the reporting rate for a specific variable was less than 50 percent, we did not consider the data to be reliable and therefore did not include them in our analysis. Reporting rates for IRS variables and confidence intervals for NPSAS variables presented in this report are provided in tables II.1 and II.2. IRS Data Employers providing educational assistance under section 127 of the Internal Revenue Code are required to file Form 5500-C/R and its Schedule F (Form 5500). IRS provided GAO with data for 1992, 1993, and 1994 on employer-provided educational assistance that was extracted from these forms. The database included employer returns with data on employer characteristics, including the amount spent on educational assistance. These returns also provided data on employees, specifically the total number of employees and those employees eligible for and receiving educational assistance from the employer. Form 5500 provides employer-specific information, including the employer identification number; the business code for the principal business activity that best describes the nature of the employer’s business; the entity code, which identifies the employer as one of the following: a single-employer plan, a plan of a controlled group of corporations or common-control employers, a multi-employer plan, a multiple-employer collectively bargained plan, or a multiple-employer plan; the company plan number; and the type of benefit plan, including fringe benefit plans, which requires filing Schedule F (Form 5500). 
Schedule F provided us with employer and employee information, including employer identification number, business plan number, whether the employer’s fringe benefit plan included an educational assistance program as well as a cafeteria plan, total number of employees, total number of employees eligible to participate in the educational assistance program, total number of employees participating in the educational assistance program, and total cost of the fringe benefit plan for the plan year, including the costs for section 127 assistance. IRS does not define educational assistance in its instructions for completing the forms. Rather, it relies on employers to determine what educational assistance means based on the broad definition of educational assistance programs in section 127. Our Analysis of IRS Data We analyzed the most recently available IRS data—1992 through 1994. Though Schedule F was developed in 1991, it was not included in the tax packet sent to filers until the 1992 filing year. IRS officials told us that IRS provided us with all the data it had received and processed for 1992 and 1993. It also provided us with the 1994 data it had received and processed through May 31, 1996, which were the most recently available data. IRS officials stated they believed this represented about 90 percent of the returns IRS expected for that year. They considered this sufficient to include in our analysis, and we concurred. We did not verify IRS’s methodology in extracting data from returns. However, we did review the database to ensure that duplicates and non-relevant data were eliminated. Specifically, we developed computer programs to eliminate returns that were not exclusively for educational assistance or that were misreported duplications. In addition, we found that the data reported to IRS and provided to us were incomplete. For example, only 65 percent of the returns showing educational assistance provided in 1992 reported the total dollar amount of the assistance. 
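The screening just described can be sketched in a few lines. The field names used here (ein, plan_number, is_education_only, total_assistance) are hypothetical stand-ins; this report does not describe the actual layout of the extracted Schedule F records:

```python
# Sketch of the screening applied to the extracted Schedule F records.
# Field names are hypothetical; the actual IRS extract layout is not
# described in this report.

def clean_returns(records):
    """Drop returns that are not exclusively for educational assistance
    and collapse duplicate filings for the same employer plan."""
    seen = set()
    cleaned = []
    for rec in records:
        if not rec["is_education_only"]:
            continue  # mixed fringe-benefit returns are excluded
        key = (rec["ein"], rec["plan_number"])
        if key in seen:
            continue  # misreported duplicate of an earlier return
        seen.add(key)
        cleaned.append(rec)
    return cleaned

def reporting_rate(records, field):
    """Share of cleaned returns that actually reported a given item,
    e.g., only 65 percent reported total dollars of assistance in 1992."""
    reported = sum(1 for rec in records if rec.get(field) is not None)
    return reported / len(records) if records else 0.0
```

Under the 50-percent rule described earlier in this appendix, a variable whose reporting rate fell below 0.5 would simply be excluded from the analysis.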
The reporting rates for the IRS data characteristics cited in this report, and their respective tables and figures, are shown in Table II.1. Note 1: References to figures refer to corresponding figures in the letter report. Note 2: The number of employer returns submitted to IRS was 3,556 for 1992, 3,336 for 1993, and 3,234 for 1994. NPSAS Data We used NPSAS data to develop information about employees receiving section 127 assistance. NPSAS is a comprehensive study that examines how students and their families pay for postsecondary education and provides other data on student characteristics. It includes nationally representative samples of undergraduate, graduate, and first-professional students; students attending less-than-2-year, 2-year, 4-year, and doctoral-granting institutions; and students who receive financial aid as well as those who do not. It also includes data taken from a sample of students’ parents and information from educational institutions. NPSAS was conducted for the Department of Education’s National Center for Education Statistics for academic year 1992 to 1993. The NPSAS survey design was both large and complex, according to the NPSAS methodology report. Data on nearly 2,000 data elements were collected from a diverse set of respondents. Over 1,000 postsecondary institutions, 60,000 students, and 11,000 parents participated in the study. The target population was all students enrolled in postsecondary institutions in the United States, the District of Columbia, and Puerto Rico during the financial aid award year 1992-93 (terms beginning July 1, 1992, through June 30, 1993). NPSAS:93 was a stratified multistage probability sample of students enrolled in postsecondary institutions representing an estimated 18.5 million undergraduates and 2.7 million graduate and first-professional students. 
Overview of NPSAS Design and Sampling According to the NPSAS methodology report, the study design used a two-phase sampling selection process for institutions. First, geographic areas based on three-digit postal ZIP codes were selected as primary sampling units. Second, institutions were selected using two survey frames: The institution sampling frame was built from the 1990-91 Integrated Postsecondary Education Data System Institutional Characteristics file (IPEDS-IC), which was supplemented with the Office of Postsecondary Education Data System (OPE-IDS) file of institutions from the Stafford and Pell student aid programs as of April 15, 1992. Twenty-two strata were defined for the selection of institutions across the two sampling frames by classifying institutions on two criteria: (1) type of ownership or control, e.g., public, private/nonprofit, or private/for-profit institutions, and (2) the length of time required to complete the highest degree offered, e.g., 4-year, 2-year, or less than 2-year. A sample of 1,386 institutions was allocated to the 22 strata and 2 sampling frames. Computer-assisted data entry software was used to abstract comprehensive information about a student’s involvement with the institution. A total of 1,098 of the eligible institutions provided lists that could be used for student sample selection. Of these, student records were successfully extracted from 1,079. A total of 82,016 students were selected from enrollment files supplied by the eligible and participating institutions. They were administered a computer-assisted telephone interview. Of the 79,269 students ultimately eligible for the base survey, 66,096 (83.4 percent, unweighted) were classified as respondents. In addition, parents of a subsample of 18,129 students were interviewed by telephone. The NPSAS estimates are subject to sampling and nonsampling errors. Sampling errors exist in all sample-based datasets. 
Estimates based on a sample may differ from those based on a different sample of the same underlying population. Nonsampling errors are due to a number of sources, including, but not limited to, nonresponse, inaccurate coding, misspecification of composite variables, and inaccurate imputations. The Data Analysis System procedures that compute the estimates required 30 or more unweighted cases. Some estimates were not calculated because fewer than 30 cases were in the NPSAS analysis file. According to the NPSAS methodology report, the 1992-1993 data are not comparable with prior NPSAS estimates. Two design changes account for this: (1) the different student enrollment periods covered by the 1987, 1990, and 1993 studies and (2) the inclusion of a small sample of students from Puerto Rico in 1990 and 1993. Our Analysis of NPSAS Data In using NPSAS data, we identified issues that would affect our analysis and reporting. NPSAS data are not comparable to IRS or BLS data primarily because of differences in the populations surveyed. Specifically, NPSAS estimates were based on responses from educational institutions deemed eligible under a set of seven criteria. These criteria included that the institution offer more than just correspondence courses and offer at least one program requiring a minimum of 3 months or 300 clock hours of instruction. This precluded workshops, weekend seminars, and week-long computer courses, for example, that would be included in educational assistance reported to IRS. Further, low response rates and lack of estimates for certain variables also affected our ability to analyze and present employee data. For example, response rates were too low to estimate the number of undergraduates by all occupations identified by NPSAS, as figure 8—Estimated Section 127 Recipients by Occupation and Student Level, Academic Year 1992-1993—indicates. 
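The 30-or-more-unweighted-cases rule mentioned above can be illustrated with a short suppression check. The weights below are made up for illustration and are not actual NPSAS values:

```python
# Minimal sketch of a minimum-cell-size rule: a weighted estimate for a
# cell is published only when the cell holds at least 30 unweighted
# sample cases. Illustrative only; not the actual Data Analysis System.

def weighted_count(weights, min_cases=30):
    """National estimate for a cell (sum of sampling weights), or None
    (suppressed) when the cell holds fewer than min_cases sample members."""
    if len(weights) < min_cases:
        return None  # too few unweighted cases to support the estimate
    return sum(weights)
```

For example, a cell with 29 sampled students would be suppressed regardless of how large its weighted total was.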
The following table provides the data variables, the estimated amount of section 127 assistance, and the relevant estimated number of students we analyzed and presented in this report. The confidence intervals and sampling errors for the individual variables were not available. [Table column headings: estimated number (undergraduate); estimated number (graduate). The table’s entries are not reproduced here.] BLS Reports We used BLS information to supplement data about employees receiving section 127 assistance. BLS conducts surveys to obtain data on the incidence and provision of employee benefits, including educational assistance. The surveys collect data to determine the number of employees eligible for this assistance, not the use of it. We used information contained in the two most recent reports: Employee Benefits in Medium and Large Private Establishments, 1994 and Employee Benefits in Small Private Establishments, 1994. Overview of BLS Survey Design and Sampling The underlying surveys for the two reports we used covered all industries in the United States and the District of Columbia, except for farms and private households, as well as full- and part-time employees. The sample design for the surveys was a two-stage probability sample of detailed occupations. The first stage of sample selection was a probability sample of establishments. The second stage was a probability sample of occupations within those establishments. To ensure that the sample was representative, a sample of newly opened establishments was added to the sample of existing ones. The sample of establishments was selected by first stratifying the sampling frame by industry group, and then by region and establishment employment. The industry groups usually consist of 3-digit Standard Industrial Classification groups, as defined by the Office of Management and Budget. The number of sample establishments allocated to each stratum reflected the ratio of employment in the stratum to employment in all sampling frame establishments. 
Thus, a stratum that contained 1 percent of the total employment within the scope of the survey received approximately 1 percent of the total sample establishments. Each sampled establishment within an industry group had a probability of selection proportional to its employment. The survey design used an estimator that assigned the inverse of each sample unit’s probability of selection as a weight to the unit’s data at each of the two stages of sample selection. Weight-adjustment factors were applied to the establishment data. For the survey of medium and large private establishments, the first factor was introduced to account for establishment nonresponse, and a second poststratification factor was introduced to adjust the estimated employment totals to actual counts of the employment by industry for the survey reference date. For the survey of small private establishments, a third factor for occupational nonresponse was applied. The statistics in these reports are estimates derived from a sample of usable occupation quotes selected from the responding establishments. They are not tabulations based on data from all employees in small and medium and large establishments within the scope of the survey. Consequently, the data were subject to sampling and nonsampling errors. Data for the employee benefits surveys in small private establishments were collected from November 1993 to October 1994 and in medium and large private establishments from November 1992 to October 1993. The 2,135 responding small establishments yielded 6,489 occupational quotes for which data were collected. This resulted in an estimated 35,909,558 full-time workers and 12,716,611 part-time workers in small establishments who were within the scope of the survey. The 2,325 responding medium and large establishments yielded 13,012 occupational quotes for which data were collected. 
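The estimation approach just described, proportional allocation of the sample to strata, inverse-probability base weights, and a nonresponse adjustment factor, can be sketched as follows. All figures in the sketch are made up for illustration and are not BLS values:

```python
# Illustrative sketch of a BLS-style design: each sampled unit receives
# the inverse of its selection probability as a base weight, which is
# then adjusted for nonresponse. Numbers here are invented examples.

def allocate_sample(stratum_employment, total_sample):
    """Allocate sample establishments to strata in proportion to each
    stratum's share of total frame employment."""
    total_emp = sum(stratum_employment.values())
    return {s: round(total_sample * emp / total_emp)
            for s, emp in stratum_employment.items()}

def base_weight(selection_probability):
    """Inverse-probability (Horvitz-Thompson) weight for a sampled unit."""
    return 1.0 / selection_probability

def adjusted_weight(selection_probability, sampled, responded):
    """Base weight multiplied by a nonresponse factor: the ratio of
    sampled units to responding units in the adjustment cell."""
    return base_weight(selection_probability) * (sampled / responded)
```

So a stratum holding one-quarter of frame employment receives about one-quarter of the sample, and a unit sampled with probability 0.01 in a cell where 80 of 100 sampled establishments responded carries a final weight of 125.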
This resulted in an estimated 28,728,207 full-time workers and 5,564,940 part-time workers in medium and large establishments who were within the scope of the survey. BLS used four procedures to adjust for missing data from refusals from small establishments and three procedures to adjust for missing data from refusals from medium and large establishments. Our Use of BLS Data We used BLS data to report on the estimated number of all full-time and part-time employees and those eligible for educational assistance. We developed information on the eligibility of full-time employees in small private and medium and large private establishments for job-related and nonjob-related assistance using table 4 from the 1994 report on small establishments and table 4 from the 1993 report on medium and large establishments. We developed information about the eligibility of part-time employees in small and medium and large establishments for job-related and nonjob-related assistance using tables 80 and A.2 from the 1994 report on small establishments and tables 207 and A.2 from the 1993 report on medium and large establishments. Table II.3 shows the confidence intervals and sampling errors at the 95-percent level for the BLS estimates we used for full-time and part-time employees. Additional Information About Employees Analysis of NPSAS and BLS Data Revealed Additional Information Our analysis of NPSAS data showed that just over half of undergraduates and graduates who received section 127 assistance were female. Of the 336,880 undergraduates, 56 percent were female. Of the 119,210 graduates, 52 percent were female. Figure III.1 shows the estimated number of section 127 recipients by gender and student level. 
Medium and Large Employers Had More Than Twice the Rate of Educational Assistance Eligibility as Small Employers According to BLS data, medium and large employers (100 or more employees) were more likely than small employers to consider employees eligible for educational assistance. Of the estimated 28.7 million full-time employees of medium and large employers, BLS estimated that 72 percent (20.8 million) were eligible for job-related educational assistance; of the 35.9 million full-time employees at small employers, it estimated that 37 percent (13.4 million) were eligible. The percentage difference is more pronounced when looking at eligibility for nonjob-related educational assistance: an estimated 22 percent (6.3 million) of full-time employees at medium and large employers were eligible for nonjob-related educational assistance, compared to 6 percent (2 million) at small employers. (Information on part-time employees is in appendix III.) Similarly, the percentage of part-time employees of medium and large employers eligible for educational assistance was more than twice that of those of small employers. BLS estimated that 35 percent (1.9 million) of the estimated 5.5 million part-time employees of medium and large employers were eligible for job-related education and that 15 percent (1.9 million) of about 12.7 million part-time employees of small employers were eligible. BLS also estimated that about 8 percent (about 440,000) of part-time employees of medium and large employers were eligible for nonjob-related educational assistance, compared to about 3 percent (almost 400,000) of part-time employees of small employers. Major Contributors to This Report General Government Division, Washington, D.C. Office of General Counsel, Washington, D.C. Shirley Jones, Attorney 
Pursuant to a congressional request, GAO reported on employer-provided educational assistance between 1992 and 1994 under section 127 of the Internal Revenue Code (IRC), focusing on: (1) the characteristics of employers providing educational assistance, such as the number of providers and their size; (2) employees eligible for and receiving it, such as the number of recipients and their level of study; and (3) other tax provisions related to employer-provided educational assistance and the differences between them and section 127. 
GAO found that: (1) according to IRS data, employers annually filed over 3,200 returns that reported information about educational assistance they provided their employees during 1992 through 1994; (2) employers filing returns varied in size, type of business, and amount of assistance provided; (3) large employers, those employing 250 or more employees, provided 99 percent of the dollar amount of the reported assistance to 98 percent of the employees who received it; (4) IRS data showed that about 900,000 employees received employer-provided educational assistance annually during 1992 through 1994, but few employees eligible for educational assistance under section 127 actually received it; (5) according to National Postsecondary Student Aid Study data, 74 percent of employees receiving educational assistance in academic year 1992 to 1993 were undergraduates, and about 64 percent of the undergraduates who identified their occupation were in clerical or technical occupations; (6) of employees receiving assistance who were graduate students and identified their occupation, about 89 percent were in professional, managerial, administrative, or teaching occupations; (7) generally, four tax provisions apply to employer-provided educational assistance; and (8) the major differences between section 127 and the three other provisions are related to the type of education supported, employee eligibility, limits on the value of assistance provided, and the ease of administration. 
Background Diminishing Ice Opens Potential for Increased Human Activity in the Arctic Scientific research and projections of the changes taking place in the Arctic vary, but there is a general consensus that Arctic sea ice is diminishing. As recently as September 2011, scientists at the U.S. National Snow and Ice Data Center reported that the annual Arctic minimum sea ice extent for 2011 was the second lowest in the satellite record, and 938,000 square miles below the 1979 to 2000 average annual minimum (see fig. 1). Much of the Arctic Ocean remains ice-covered for a majority of the year, but some scientists have projected that the Arctic will be ice-diminished for periods of time in the summer by as soon as 2040. These environmental changes in the Arctic are making maritime transit more feasible and are increasing the likelihood of further expansion in human activity including tourism, oil and gas extraction, commercial shipping, and fishing in the region. For example, in 2011, northern trans-shipping routes opened during the summer months, which permitted more than 40 vessels to transit between June and October 2011. The Northern Sea Route opened by mid-August, and appeared to remain open through September, while the Northwest Passage opened for periods in the summer for the fifth year in a row. See figure 2 for locations of these shipping routes. Despite these changes, however, several enduring characteristics still provide challenges to surface navigation in the Arctic, including large amounts of winter ice and increased movement of ice from spring to fall. Increased movement of sea ice makes its location less predictable, a situation that is likely to increase the risk for ships to become trapped or damaged by ice impacts. DOD’s report states that scientists currently project transpolar routes will not be reliably open until around 2040 and then only for a limited period during the summer and early fall. 
DOD’s report assessed that most national security missions will likely be limited to those months. National Policies Guide DOD and Other Stakeholders’ Operations in the Arctic Key strategy and policy documents detail the United States’ national security objectives and guide DOD’s and other stakeholders’ operations in the Arctic. The 2009 National Security Presidential Directive 66/Homeland Security Presidential Directive 25, Arctic Region Policy, establishes U.S. policy with respect to the Arctic region and tasks senior officials, including the Secretaries of Defense and Homeland Security, with its implementation. This directive identifies specific U.S. national security and homeland security interests in the Arctic, including missile defense and early warning; deployment of sea and air systems for strategic sealift, maritime presence and security operations; and ensuring freedom of navigation and overflight. Additionally, the 2010 National Security Strategy identifies four enduring national interests that are relevant to the Arctic and states that the U.S. has broad and fundamental interests in the Arctic. The 2010 Quadrennial Defense Review also provides top-level DOD policy guidance on the Arctic, highlighting the need for DOD to work collaboratively with interagency partners such as the Coast Guard to address gaps in Arctic communications, domain awareness, search and rescue, and environmental observation and forecasting. Finally, since the Arctic region is primarily a maritime domain, existing U.S. guidance relating to maritime areas continues to apply, such as the September 2005 National Strategy for Maritime Security and National Security Presidential Directive 41/Homeland Security Presidential Directive 13, the Maritime Security Policy. 
Multiple Federal Stakeholders Have Arctic Responsibilities DOD is responsible in the Arctic and elsewhere for securing the United States from direct attack; securing strategic access and retaining global freedom of action; strengthening existing and emerging alliances and partnerships; and establishing favorable security conditions. Additionally, the Navy has developed an Arctic Roadmap which lists Navy action items, objectives, and desired effects for the Arctic region from fiscal years 2010 to 2014. Focus areas include training, communications, operational investments, and environmental protection. Since the Arctic is primarily a maritime domain, the Coast Guard plays a significant role in Arctic policy implementation and enforcement. The Coast Guard is a multimission, maritime military service within the Department of Homeland Security (DHS) that has responsibilities including maritime safety, security, environmental protection, and national defense, among other missions. Therefore, as more navigable ocean water emerges in the Arctic and human activity increases, the Coast Guard will face expanding responsibilities in the region. For DOD facilities and Coast Guard assets in the Arctic and Alaska, see figure 2. Other federal stakeholders include: The National Science Foundation, which is responsible for funding U.S. Arctic research—including research on the causes and impacts of climate change––and providing associated logistics and infrastructure support to conduct this research. The National Science Foundation and the Coast Guard also coordinate on the use of the Coast Guard’s icebreakers for scientific research. The Department of State, which is responsible for formulating and implementing U.S. policy on international issues concerning the Arctic, leading the domestic interagency Arctic Policy Group, and leading U.S. participation in the Arctic Council. 
The Department of the Interior, which is responsible for oversight and regulation of resource development in U.S. Arctic regions. The department also coordinates with the Coast Guard on safety compliance inspections of offshore energy facilities and in the event of a major oil spill. The Department of Transportation and its component agency, the Maritime Administration, which works on marine transportation and shipping issues in the Arctic and elsewhere, among other things. The Department of Commerce’s National Oceanic and Atmospheric Administration, which provides information on Arctic oceanic and atmospheric conditions and issues weather and ice forecasts, among other responsibilities. DOD’s Arctic Report Addressed or Partially Addressed All Five Specified Reporting Elements DOD’s May 2011 Arctic Report either addressed or partially addressed all of the elements specified in the House Report. Specifically, our analysis showed that, of the five reporting elements, DOD addressed three and partially addressed two. The elements not fully addressed were to have included a timeline to obtain needed Arctic capabilities and an assessment of the minimum and optimal number of icebreakers that may be needed to support Arctic strategic national security objectives. According to DOD officials, these elements were not fully addressed for a number of reasons such as DOD’s assessment that Arctic operations are a challenge but not yet an urgency; the report’s being written prior to initiating the formal DOD capabilities development process, making it difficult to provide a timeline for obtaining Arctic capabilities; and DOD’s assessment that its need for icebreakers is currently limited to one mission per year. 
Furthermore, DOD’s Arctic Report notes that significant uncertainty remains about the extent, rate, and impact of climate change in the Arctic and the pace at which human activity will increase, making it challenging for DOD to plan for possible future conditions in the region and to mobilize public or political support for investments in U.S. Arctic capabilities or infrastructure. Figure 3 below summarizes our assessment of the extent to which DOD’s Arctic Report included each of the specified reporting elements and the reasons DOD officials provided for any elements that were not fully addressed. Appendix II includes our detailed evaluation of each of the specified reporting elements. DOD Has Identified Arctic Capability Gaps, but Lacks a Comprehensive Approach to Addressing Arctic Capabilities DOD has several efforts under way to assess the capabilities needed to support U.S. strategic objectives in the Arctic. However, it has not yet developed a comprehensive approach to addressing Arctic capabilities that would include steps such as developing a risk-based investment strategy and timeline to address near-term needs and establishing a collaborative forum with the Coast Guard to identify long-term Arctic investments. DOD Has Efforts Under Way to Assess Near-term Arctic Capability Gaps but Lacks a Risk-Based Investment Strategy to Address These Gaps While DOD’s Arctic Report assessed a relatively low level of threat in the Arctic region, it noted three capability gaps that have the potential to hamper Arctic operations. These gaps include (1) limited communications, such as degraded high-frequency radio signals in latitudes above 70°N because of magnetic and solar phenomena; (2) degraded global positioning system performance that could affect missions that require precision navigation, such as search and rescue; and (3) limited awareness across all domains in the Arctic because of distances, limited presence, and the harsh environment. 
Other key challenges identified include: shortfalls in ice and weather reporting and forecasting; limitations in command, control, communications, computers, intelligence, surveillance, and reconnaissance because of a lack of assets and harsh environmental conditions; limited inventory of ice-capable vessels; and limited shore-based infrastructure. According to DOD’s Arctic Report, capabilities will need to be reassessed as conditions change, and gaps will need to be addressed to be prepared to operate in a more accessible Arctic. Other stakeholders have also assessed Arctic capability gaps. Examples of these efforts include the following: U.S. Northern Command initiated a commander’s estimate for the Arctic in December 2010 that, according to officials, will establish the commander’s intent and missions in the Arctic and identify capability shortfalls. In addition, Northern Command identified two Arctic-specific capability gaps (communications and maritime domain awareness) in its fiscal years 2013 through 2017 integrated priority list, which defines the combatant command’s highest-priority capability gaps for the near-term, including shortfalls that may adversely affect missions. U.S. European Command completed an Arctic Strategic Assessment in April 2011 that, among other things, identified Arctic capability gaps in the areas of environmental protection, maritime domain awareness, cooperative development of environmental awareness technology, sharing of environmental data, and lessons learned on infrastructure development. In addition, it recommended that the command conduct a more detailed mission analysis for potential Arctic missions, complete a detailed capability estimate for Arctic operations, and work in conjunction with Northern Command and the Departments of the Navy and Air Force to conduct a comprehensive capabilities-based assessment for the Arctic. 
DOD and DHS established the Capabilities Assessment Working Group (working group) in May 2011 to identify shared Arctic capability gaps as well as opportunities and approaches to overcome them, to include making recommendations for near-term investments. The working group was directed by its Terms of Reference to focus on four primary capability areas when identifying potential collaborative efforts to enhance Arctic capabilities, including near-term investments. Those capability areas include maritime domain awareness, communications, infrastructure, and presence. The working group was also directed to identify overlaps and redundancies in established and emerging DOD and DHS Arctic requirements. As the advocate for Arctic capabilities, Northern Command was assigned lead responsibility for DOD in the working group, while the Coast Guard was assigned lead responsibility for DHS. The establishment of the working group—which, among other things, is to identify opportunities for bi-departmental action to close Arctic capability gaps and issue recommendations for near-term investments—helps to ensure that collaboration between the Coast Guard and DOD is taking place to identify near-term capabilities needed to support current planning and operations. Although the working group is developing a paper with its recommendations, officials indicated that additional assessments would be required to address those recommendations. U.S. Navy completed its first Arctic capabilities-based assessment in September 2011 and is developing a second capabilities-based assessment focused on observing, mapping, and environmental prediction capabilities in the Arctic, which officials expect to be completed in the spring of 2012. 
The Navy’s first Arctic capabilities-based assessment identified three critical capability gaps as the highest priorities, including the capabilities to provide environmental information; maneuver safely on the sea surface; and conduct training, exercise, and education. This assessment recommended several near-term actions to address these gaps. DOD’s Arctic Report states that the development of Arctic capabilities requires a deliberate risk-based investment strategy, but DOD has not developed such a strategy. Although DOD and its components have identified current Arctic capability gaps, the department may not be taking appropriate steps to best ensure its future preparedness because DOD lacks a risk-based investment strategy and a timeline for addressing near-term capability needs. According to DOD officials, there had been no Arctic-related submissions to its formal capabilities development process as of September 2011; this process could take two or more years to be approved, followed by additional time for actual capability development. Alternatives for addressing these gaps could include those that would minimize DOD investments by leveraging capabilities of interagency and international partners, or they could include submissions to DOD’s formal capabilities development process. Another alternative could include accepting the risk of potentially being late to develop these needed capabilities in order to provide limited fiscal resources to other priorities. Given that the opening in the Arctic presents a wide range of challenges for DOD, a risk-based investment strategy and timeline can help DOD develop the capabilities needed to meet national security interests in the region.
Without a risk-based investment strategy and timeline for prioritizing and addressing near-term Arctic capability gaps and challenges, periodically updated to reflect evolving needs, DOD could be slow to develop needed capabilities, potentially facing operational risk and higher costs if the need arises to execute plans rapidly. Conversely, DOD could move too early, making premature Arctic investments that take resources from other, more pressing needs or producing capabilities that could be outdated before they are used.

DOD and DHS Have Established a Collaborative Forum to Identify Potential Near-term Investments but Not Long-term Needs

While DOD and DHS have established the working group to identify shared near-term Arctic capability gaps, this collaborative forum is not intended to address long-term Arctic capability gaps or identify opportunities for joint investments over the longer term. DOD acknowledged the importance of collaboration with the Coast Guard over the long term in its 2010 Quadrennial Defense Review, which states that the department must work with the Coast Guard and DHS to develop Arctic capabilities to support both current and future planning and operations. According to DOD and Coast Guard officials, although the working group is primarily focused on near-term investments, it has discussed some mid- to long-term capability needs. However, DOD and Coast Guard officials stated that after the completion of the working group’s paper, expected in January 2012, the working group will have completed the tasks detailed in the Terms of Reference and will be dissolved. Consequently, no forum will exist to further address any mid- to long-term capability needs. Although we have previously reported that there are several existing interagency organizations working on Arctic issues, these organizations do not specifically address Arctic capability needs.
These organizations include the Interagency Policy Committee on the Arctic, the Arctic Policy Group, and the Interagency Arctic Research Policy Committee, among others. DOD and DHS also have long-standing memorandums of agreement related to coordination between DOD and the Coast Guard in both maritime homeland security and maritime homeland defense. The objectives of these interagency organizations range from developing coordinated research policy for the Arctic region to tracking implementation of national Arctic policy to identifying implementation gaps, but do not specifically address capability gaps in the Arctic. According to DOD and Coast Guard officials we spoke with, only the working group is focused specifically on addressing Arctic capabilities. After the working group completes its tasks in January 2012, there will be no DOD and Coast Guard organization focused specifically on reducing overlap and redundancies or collaborating to address Arctic capability gaps in support of future planning and operations, as is directed by the 2010 Quadrennial Defense Review. (A theater campaign plan encompasses the activities of a supported geographic combatant commander that accomplish strategic or operational objectives within a theater of war or theater of operations, and translates national or theater strategy into operational concepts and those concepts into unified action.) According to Northern Command officials, the Arctic Estimate identifies priorities, such as identifying a need for icebreakers. However, the officials stated that the Arctic Estimate does not identify how DOD would acquire those icebreakers or how it would coordinate with the Coast Guard—the operator of the nation’s icebreakers—to reconstruct existing or build new icebreakers. The Coast Guard has a more immediate need to develop Arctic capabilities such as icebreakers and has taken steps to address some long-term capability gaps.
Meanwhile, given that it could take approximately 10 years to develop icebreakers, the process for DOD and the Coast Guard to identify and procure new icebreakers would have to begin within the next year to ensure that U.S. heavy icebreaking capabilities are maintained beyond 2020. Our prior work has shown that collaboration with partners can help avoid wasting scarce resources and increase effectiveness of efforts. Without specific plans for a collaborative forum between DOD and the Coast Guard to address long-term Arctic capability gaps and to identify opportunities for joint investments over the longer term, DOD may miss opportunities to leverage resources with the Coast Guard to enhance future Arctic capabilities.

Conclusions

At this time, significant DOD investments in Arctic capabilities may not be needed, but that does not preclude taking steps to anticipate and prepare for Arctic operations in the future. Addressing near-term gaps is essential for DOD to have the key enabling capabilities it needs to communicate, navigate, and maintain awareness of activity in the region. An investment strategy that identifies and prioritizes near-term Arctic capability needs and identifies a timeline to address them would be useful for decision makers in planning and budgeting. Without taking deliberate steps to analyze risks in the Arctic and prioritize related resource and operational requirements, DOD could later find itself faced with urgent needs, resulting in higher costs that could have been avoided. In addition, unless DOD and DHS continue to collaborate to identify opportunities for interagency action to close Arctic capability gaps, DOD could miss out on opportunities to work with the Coast Guard to leverage resources for shared needs. DOD may choose to create a new collaborative forum or incorporate this collaboration into an existing forum or process.
Given the different missions and associated timelines of DOD and the Coast Guard for developing Arctic capabilities, it is important that the two agencies work together to avoid fragmented efforts and reduce unaffordable overlap and redundancies while addressing Arctic capability gaps in support of future planning and operations.

Recommendations for Executive Action

To more effectively leverage federal investments in Arctic capabilities in a resource-constrained environment and ensure needed capabilities are developed in a timely way, we recommend that the Secretary of Defense, in consultation with the Secretary of the Department of Homeland Security, take the following two actions: develop a risk-based investment strategy that (1) identifies and prioritizes near-term Arctic capability needs, (2) develops a timeline for addressing them, and (3) is updated as appropriate; and establish a collaborative forum with the Coast Guard to fully leverage federal investments and help avoid overlap and redundancies in addressing long-term Arctic capability needs.

Agency Comments and Our Evaluation

In written comments on a draft of this report, DHS concurred with both of our recommendations. For its part, DOD partially concurred with both of our recommendations. It generally agreed that the department needed to take action to address the issues we raised but indicated it is already taking initial steps to address them. DOD and DHS’s comments are reprinted in appendices VI and VII, respectively. Technical comments were provided separately and incorporated as appropriate.
With respect to DOD’s comments on our first recommendation, DOD stated that its existing processes—including prioritizing Arctic capability needs through the Commander’s annual integrated priority lists; balancing those needs against other requirements through the annual planning, programming, budgeting, and execution system process; and addressing Service requirements through program objective memorandum submissions—enable DOD to balance the risk of being late-to-need with the opportunity cost of making premature Arctic investments. However, DOD’s response did not address how it would develop a risk-based investment strategy. As stated in our report, DOD has considered some elements of such a risk-based investment strategy by setting strategic goals and objectives, determining constraints, and assessing risks (such as Northern Command’s inclusion of two Arctic-specific capability needs in its fiscal years 2013 through 2017 integrated priority list). However, DOD has not yet conducted the remaining three phases of a risk-based investment strategy: evaluating alternatives for addressing these risks, selecting the appropriate alternatives, and implementing the alternatives and monitoring the progress made and results achieved. We believe that considering potential alternative solutions, such as leveraging the capabilities of interagency or international partners, could help minimize DOD’s investment in Arctic capabilities. DOD’s Arctic Report also emphasized the need for a risk-based investment strategy, noting that “the long lead time associated with capability development, particularly the procurement of space-based assets and ships, requires a deliberate risk-based investment strategy” and noted that “additional capability analysis will be required.” By developing a risk-based investment strategy to prioritize near-term investment needs and a timeline for addressing them, DOD can be better prepared in its planning and budgeting decisions. 
With respect to our second recommendation, both DOD and DHS cited the importance of collaboration to develop Arctic capabilities and identified some existing forums that include Arctic issues, such as the annual Navy and Coast Guard staff talks and the joint DOD-DHS Capabilities Development Working Group. Our report also identified additional existing interagency organizations working on Arctic issues, and we agree that these forums can help avoid overlap and redundancies in addressing long-term Arctic capability needs. However, these forums do not specifically focus on Arctic capability needs, and no DOD and Coast Guard forum will be focused on reducing overlap and redundancies or collaborating to address Arctic capability gaps following the dissolution of the Arctic Capabilities Assessment Working Group in January 2012. We continue to believe that focusing specifically on long-term Arctic capability needs will enable DOD and the Coast Guard to better leverage resources for shared needs. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at [email protected] or (202) 512-3489. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

The objectives of our work were to determine the extent to which (1) the Department of Defense (DOD) report on the Arctic addresses the reporting elements specified in House Report 111-491 and (2) DOD has efforts under way to identify and prioritize the capabilities needed to meet national security objectives in the Arctic.
To gather information for both objectives we reviewed various DOD and Coast Guard documentation. We interviewed officials from the Office of the Secretary of Defense; Office of the Chairman of the Joint Chiefs of Staff; U.S. Northern Command and the North American Aerospace Defense Command; U.S. European Command; U.S. Pacific Command; U.S. Transportation Command; and U.S. Army, Navy, Air Force, and Marine Corps Arctic offices. We also interviewed Coast Guard officials to determine their contribution to DOD’s efforts to identify and prioritize capabilities. To address the extent to which DOD’s report on the Arctic addresses the reporting elements specified in House Report 111-491, we evaluated the DOD Report to Congress on Arctic Operations and the Northwest Passage (Arctic Report) issued in May 2011. We determined that the extent to which DOD addressed each specified element would be rated as either “addressed,” “partially addressed,” or “not addressed.” These categories were defined as follows:

Addressed: An element is addressed when the Arctic Report explicitly addresses all parts of the element.

Partially addressed: An element is partially addressed when the Arctic Report addresses one or more parts of the element, but not all parts of the element.

Not addressed: An element is not addressed when the Arctic Report does not explicitly address any part of the element.

Specifically, two GAO analysts independently reviewed and compared the Arctic Report with the direction in the House Report; assessed whether each element was addressed, partially addressed, or not addressed; and recorded their assessment and the basis for the assessment. The final assessment reflected the analysts’ consensus based on the individual assessments. In addition, we interviewed DOD officials involved in preparing the Arctic Report to discuss their interpretation of the direction in the House Report and the DOD report’s findings.
To provide context, our assessment also reflected our review of relevant DOD and Coast Guard documents, as well as issues raised in recent GAO reports that specifically relate to some of the specified reporting elements. To address the extent to which DOD has efforts under way to identify and prioritize the capabilities needed to meet national security objectives in the Arctic, we reviewed documentation related to DOD’s Arctic operations, such as the U.S. Navy’s November 2009 Arctic Roadmap, the February 2010 Quadrennial Defense Review, the U.S. European Command’s April 2011 Arctic Strategic Assessment, the U.S. Coast Guard’s July 2011 High Latitude Study, and the Navy’s September 2011 Arctic Capabilities Based Assessment. We also interviewed officials from various DOD and Coast Guard offices. We conducted this performance audit from July 2011 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Extent to which DOD’s Arctic Report Addressed Specified Reporting Elements

Reporting Requirement H. R. Rep. No. 111-491, which accompanied a proposed bill for the National Defense Authorization Act for Fiscal Year 2011 (H.R. 5136), directed DOD to submit a report on Arctic Operations and the Northwest Passage. This report is to include, among other things, an assessment of the strategic national security objectives and restrictions in the Arctic region. Detailed Assessment of This Element We determined that the Department of Defense (DOD) addressed this element because the Report to Congress on Arctic Operations and the Northwest Passage (Arctic Report) includes an assessment of U.S.
strategic national security objectives and restrictions in the Arctic. Specifically, the report states that DOD reviewed national-level policy guidance and concluded that the overarching strategic national security objective for the Arctic is a stable and secure region where U.S. national interests are safeguarded and the U.S. homeland is protected (see fig. 4 for descriptions of the policy guidance documents DOD reviewed). The report further identifies two DOD strategic objectives to achieve the desired end-state for the Arctic: (1) prevent and deter conflict and (2) prepare to respond to a wide range of challenges and contingencies. In addition, the report identifies and examines restrictions in the Arctic. For example, the report states that uncertainty about the extent, impact, and rate of climate change in the Arctic will make it challenging to plan for possible future conditions in the region and to mobilize public or political support for investments in U.S. Arctic capabilities or infrastructure. Related Findings from Previous GAO Reports In 2010, we reported on the difficulties associated with developing capabilities needed to understand the extent, rate, and impact of climate change. Specifically, we found that while agencies have taken steps to plan for some continued climate observations via satellite data in the near-term, they lack a strategy for the long-term provision of such data. For example, we reported that DOD has not established plans to restore the full set of capabilities intended for the National Polar-orbiting Operational Environmental Satellite System over the life of the program. 
We noted that without a comprehensive long-term strategy for continuing environmental measurements over the coming decades and a means for implementing it, agencies will continue to independently pursue their immediate priorities on an ad-hoc basis, the economic benefits of a coordinated approach to investments in earth observation may be lost, and our nation’s ability to understand climate change may be limited. Reporting Requirement H. R. Rep. No. 111-491, which accompanied a proposed bill for the National Defense Authorization Act for Fiscal Year 2011 (H.R. 5136), directed DOD to submit a report on Arctic Operations and the Northwest Passage. This report is to include, among other things, an assessment on mission capabilities required to support the strategic national security objectives and a timeline to obtain such capabilities. Our Assessment: Partially Addressed Based on our assessment, we determined that DOD partially addressed this reporting element. Detailed Assessment of This Element We determined that DOD partially addressed this element because the Arctic Report includes a capability gap assessment in relation to Arctic mission areas but does not provide a timeline to obtain such capabilities. Specifically, the report identifies potential Arctic capability gaps over the near- (2010-2020), mid- (2020-2030), and far-term (beyond 2030) that may affect DOD’s ability to accomplish four of nine mission areas in the region, including maritime domain awareness, maritime security, search and rescue, and sea control. The report notes that three capability gaps in particular have the potential to hamper Arctic operations across all time frames: (1) insufficient communications architecture, (2) degraded Global Positioning System performance, and (3) extremely limited domain awareness.
Other key challenges identified include: shortfalls in ice and weather reporting and forecasting; limitations in command, control, communications, computers, intelligence, surveillance and reconnaissance; and limited shore-based infrastructure and inventory of ice-capable vessels. Although DOD states in the report that capabilities will need to be reassessed as conditions change and gaps addressed in order to be prepared to operate in a more accessible Arctic, it does not provide a timeline for addressing capability gaps or challenges identified. Related Findings from Previous GAO Reports We previously reported on the challenges DOD and Coast Guard face in achieving maritime domain awareness, a capability gap identified in DOD’s Arctic Report. For example, in 2011 we found that DOD lacks a strategic, risk-based approach to manage its maritime domain awareness efforts and to address high priority capability gaps. To improve DOD’s ability to manage the implementation of maritime domain awareness across DOD, we recommended that DOD develop and implement a departmentwide strategy that: identifies objectives and roles and responsibilities for achieving maritime domain awareness; aligns efforts and objectives with DOD’s process for determining requirements and allocating resources; identifies capability resourcing responsibilities; and includes performance measures. We also recommended that DOD, in collaboration with other stakeholders such as the Coast Guard, perform a comprehensive risk-based analysis to prioritize and address DOD’s critical maritime capability gaps and guide future investments. DOD concurred with our recommendations and identified actions it is taking— or plans to take—to address them. 
We also reported in 2010 that the Coast Guard faces challenges in achieving Arctic domain awareness, including inadequate Arctic Ocean and weather data, lack of communication infrastructure, limited intelligence information, and lack of a physical presence in the Arctic, as well as minimal assets and infrastructure for Arctic missions and diminishing fleet expertise for operating in Arctic-type conditions. Reporting Requirement H. R. Rep. No. 111-491, which accompanied a proposed bill for the National Defense Authorization Act for Fiscal Year 2011 (H.R. 5136), directed DOD to submit a report on Arctic Operations and the Northwest Passage. This report is to include, among other things, an assessment of an amended unified command plan that addresses opportunities of obtaining continuity of effort in the Arctic Ocean by a single combatant commander. Our Assessment: Addressed Based on our assessment, we determined that DOD addressed this reporting element. Detailed Assessment of This Element We determined that DOD addressed this element because the Arctic Report includes an assessment of the revised April 2011 Unified Command Plan that addresses the impact of aligning the Arctic Ocean under a single combatant commander. The April 2011 Unified Command Plan shifted areas of responsibility boundaries in the Arctic region. As a result of this realignment, responsibility for the Arctic region is now shared between U.S. Northern and U.S. European Commands—previously, under the 2008 Unified Command Plan, the two commands and U.S. Pacific Command shared responsibility for the region, as shown in figure 5. In addition, the April 2011 Unified Command Plan assigned Northern Command responsibility for advocating for Arctic capabilities.
The Arctic Report states that having two combatant commands responsible for a portion of the Arctic Ocean aligned with adjacent land boundaries is an arrangement best suited to achieve continuity of effort with key regional partners and that aligning the entire Arctic Ocean under a single combatant command would disrupt progress in theater security cooperation achieved over decades of dialogue and confidence building by Northern and European Commands with regional stakeholders. The report also notes that although having multiple combatant commands with responsibility in the Arctic Ocean makes coordination more challenging, having too few would leave out key stakeholders, diminish long-standing relationships, and potentially alienate important partners. Detailed Assessment of This Element We determined that DOD addressed this element because the Arctic Report assesses the existing Arctic infrastructure to be adequate to meet near- (2010-2020) to mid-term (2020-2030) U.S. national security needs, noting that DOD does not currently anticipate a need for the construction of additional bases or a deep-draft port in Alaska before 2020. Specifically, the Arctic Report examines the defense infrastructure such as bases, ports, and airfields needed to support DOD strategic objectives for the Arctic, and it discusses the environmental challenges and higher costs associated with construction and maintenance of Arctic infrastructure. It concludes that with the low potential for armed conflict in the region, existing DOD posture is adequate to meet U.S. defense needs through 2030. In addition, the report states that DOD does not currently anticipate a need for the construction of a deep-draft port in Alaska before 2020. The report does not address the basing infrastructure required to support long-term U.S. national security needs. 
The report notes that given the long lead times for construction of major infrastructure in the region, DOD will periodically reevaluate this assessment as activity in the region gradually increases and the combatant commanders update their regional plans on a regular basis. The report also states that one area for future assessment might be the need for a co-located airport and port facility suitable for deployment of undersea search and rescue assets but does not provide a timeline for completing such an assessment. Related Findings from Previous GAO Reports Our prior work has identified the high costs associated with operating and maintaining installations outside the contiguous United States. In February 2011, we reported that DOD’s posture-planning guidance does not require the combatant commands to compile and report comprehensive cost data associated with posture requirements or to analyze the costs and benefits of posture alternatives when considering changes to posture. As a result, DOD’s posture-planning process will continue to lack critical information that could be used by decision makers as they deliberate posture requirements, and potential opportunities to obtain greater cost efficiencies may not be identified. We recommended that DOD revise its posture-planning guidance to require combatant commands to include the costs associated with initiatives that would alter future posture, and that DOD provide guidance on how the combatant commands should analyze the costs and benefits of alternative courses of action when considering proposed changes to posture. DOD agreed with our recommendations and identified corrective actions, but additional steps are needed to fully address the recommendations. These findings underscore the importance of DOD and Northern Command identifying and analyzing the costs and benefits of alternative courses of action associated with future defense posture in the Arctic.
Reporting Requirement H. R. Rep. No. 111-491, which accompanied a proposed bill for the National Defense Authorization Act for Fiscal Year 2011 (H.R. 5136), directed DOD to submit a report on Arctic Operations and the Northwest Passage. This report is to include, among other things, an assessment of the status of and need for icebreakers to determine whether icebreakers provide important or required mission capabilities to support Arctic strategic national security objectives, and an assessment of the minimum and optimal number of icebreakers that may be needed. Our Assessment: Partially Addressed Based on our assessment, we determined that DOD partially addressed this reporting element.

Appendix IV: Policy Guidance on the Arctic Identified in DOD’s Arctic Report

Appendix V: Arctic Responsibilities under the Unified Command Plan: 2008 and 2011

Appendix VI: Comments from the Department of Defense

Appendix VII: Comments from the Department of Homeland Security

Appendix VIII: GAO Contact and Staff Acknowledgments

In addition to the contact named above, key contributors to this report were Suzanne Wren (Assistant Director), Susan Ditto, Nicole Harms, Timothy Persons, Steven Putansu, Frank Rusco, Jodie Sandel, Amie Steele, and Esther Toledo. Stephen L. Caldwell (Director), Dawn Hoff (Assistant Director), and Elizabeth Kowalewski contributed expertise on the Department of Homeland Security and Coast Guard.

Related GAO Products

Coast Guard: Observations on Arctic Requirements, Icebreakers, and Coordination with Stakeholders. GAO-12-254T. Washington, D.C.: December 1, 2011.

Climate Change Adaptation: Federal Efforts to Provide Information Could Help Government Decision Making. GAO-12-238T. Washington, D.C.: November 16, 2011.

Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-12-101T. Washington, D.C.: October 4, 2011.
Polar Satellites: Agencies Need to Address Potential Gaps in Weather and Climate Data Coverage. GAO-11-945T. Washington, D.C.: September 23, 2011.

Climate Engineering: Technical Status, Future Directions, and Potential Responses. GAO-11-71. Washington, D.C.: July 28, 2011.

Homeland Defense: Actions Needed to Improve DOD Planning and Coordination for Maritime Operations. GAO-11-661. Washington, D.C.: June 23, 2011.

Intelligence, Surveillance, and Reconnaissance: DOD Needs a Strategic, Risk-Based Approach to Enhance Its Maritime Domain Awareness. GAO-11-621. Washington, D.C.: June 20, 2011.

Defense Management: Perspectives on the Involvement of the Combatant Commands in the Development of Joint Requirements. GAO-11-527R. Washington, D.C.: May 20, 2011.

Coast Guard: Observations on Acquisition Management and Efforts to Reassess the Deepwater Program. GAO-11-535T. Washington, D.C.: April 13, 2011.

Defense Management: Additional Cost Information and Stakeholder Input Needed to Assess Military Posture in Europe. GAO-11-131. Washington, D.C.: February 3, 2011.

Coast Guard: Efforts to Identify Arctic Requirements Are Ongoing, but More Communication about Agency Planning Efforts Would Be Beneficial. GAO-10-870. Washington, D.C.: September 15, 2010.

Environmental Satellites: Strategy Needed to Sustain Critical Climate and Space Weather Measurements. GAO-10-456. Washington, D.C.: April 27, 2010.

Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: September 25, 2009.

Defense Acquisitions: DOD’s Requirements Determination Process Has Not Been Effective in Prioritizing Joint Capabilities. GAO-08-1060. Washington, D.C.: September 25, 2008.

Coast Guard: Condition of Some Aids-to-Navigation and Domestic Icebreaking Vessels Has Declined; Effect on Mission Performance Appears Mixed. GAO-06-979. Washington, D.C.: September 22, 2006.
Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.

The gradual retreat of polar sea ice, combined with an expected increase in human activity (shipping traffic, oil and gas exploration, and tourism) in the Arctic region, could eventually increase the need for a U.S. military and homeland security presence in the Arctic. As a result, the Department of Defense (DOD) must begin preparing to access, operate, and protect national interests there. House Report 111-491 directed DOD to prepare a report on Arctic Operations and the Northwest Passage, and specified five reporting elements that should be addressed. House Report 112-78 directed GAO to review DOD’s report. GAO assessed the extent to which (1) DOD’s Report to Congress on Arctic Operations and the Northwest Passage (Arctic Report) addressed the specified reporting elements and (2) DOD has efforts under way to identify and prioritize the capabilities needed to meet national security objectives in the Arctic. GAO analyzed DOD’s Arctic Report and related documents and interviewed DOD and U.S. Coast Guard officials. DOD’s Arctic Report, submitted May 31, 2011, addressed three and partially addressed two of the elements specified in the House Report. While DOD has undertaken some efforts to assess the capabilities needed to meet national security objectives in the Arctic, it is unclear whether DOD will be in a position to provide needed capabilities in a timely and efficient manner because it lacks a risk-based investment strategy for addressing near-term needs and a collaborative forum with the Coast Guard for addressing long-term capability needs. DOD’s Arctic Report acknowledges that it has some near-term gaps in key capabilities needed to communicate, navigate, and maintain awareness of activity in the region.
However, DOD has not yet evaluated, selected, or implemented alternatives for prioritizing and addressing near-term Arctic capability needs. In addition, DOD and the Coast Guard have established a working group to identify potential collaborative efforts to enhance U.S. Arctic capabilities. This working group is focused on identifying potential near-term investments but not longer-term needs, and it is currently expected to be dissolved in January 2012. Uncertainty involving the rate of Arctic climate change necessitates careful planning to ensure efficient use of resources in developing Arctic capabilities such as basing infrastructure and icebreakers, which require long lead times to develop and are expensive to build and maintain. Without taking steps to meet near- and long-term Arctic capability needs, DOD risks making premature Arctic investments, being late in obtaining needed capabilities, or missing opportunities to minimize costs by collaborating on investments with the Coast Guard.
Background Although about 700,000 U.S. military personnel were deployed to the Gulf War in the early 1990s, casualties were relatively light compared with those in previous major conflicts. Some veterans began reporting health problems shortly after the war that they believed might be due to their participation in the conflict. VA, DOD, HHS, and other federal agencies initiated research and investigations into these health concerns and the consequences of possible hazardous exposures. VA is the coordinator for all federal activities on the health consequences of service in the Gulf War. These activities include ensuring that the findings of all federal Gulf War illnesses research are made available to the public and that federal agencies coordinate outreach to Gulf War veterans in order to provide information on potential health risks from service in the Gulf War and corresponding services or benefits. The Secretary of VA is required to submit an annual report on the results, status, and priorities of federal research activities related to the health consequences of military service in the Gulf War to the Senate and House Veterans’ Affairs Committees. VA has provided these reports to Congress since 1995. In May 2004, VA issued its annual report for 2002. VA has carried out its coordinating role through the auspices of interagency committees, which have changed over time in concert with federal research priorities and needs. Specifically, the mission of these interagency committees has evolved to include coordination for research on all hazardous deployments, including but not limited to the Gulf War. (See fig. 1.) Federal research efforts for Gulf War illnesses have been guided by questions established by the interagency Research Working Group (RWG), which was initially established under the Persian Gulf Veterans Coordinating Board (PGVCB) to coordinate federal research efforts. 
Between 1995 and 1996, the RWG identified 19 major research questions related to illnesses in Gulf War veterans. In 1996, the group added 2 more questions regarding cancer risk and mortality rates to create a set of 21 key research questions that have served as an overarching strategy in guiding federal research for Gulf War illnesses. (See app. I for the list of key questions.) The 21 research questions cover the extent of various health problems, exposures among the veteran population, and the difference in health problems between Gulf War veterans and control populations. In 1998, the RWG expanded federal Gulf War illnesses research priorities to include treatment, longitudinal follow-up of illnesses, disease prevention, and improved hazard assessment; however, no new research questions were added to the list of 21 key questions. With regard to veterans’ health status, the research questions cover the prevalence among veterans and control populations of symptoms, symptom complexes, illnesses, altered immune function or host defense, birth defects, reproductive problems, sexual dysfunction, cancer, pulmonary symptoms, neuropsychological or neurological deficits, psychological symptoms or diagnoses, and mortality. With regard to exposure, the research questions cover Leishmania tropica (a type of parasite), petroleum, petroleum combustion products, specific occupational/environmental hazards (such as vaccines and chemical agents), pyridostigmine bromide (given to troops as a defense against nerve agents), and psychophysiological stressors (such as exposure to extremes of human suffering). In 2002, VA established RAC to provide advice to the Secretary of VA on proposed research relating to the health consequences of military service in the Gulf War.
RAC, which is composed of members of the general public, including non-VA researchers and veterans’ advocates, was tasked to assist VA in its research planning by exploring the entire body of Gulf War illnesses research, identifying gaps in the research, and proposing potential areas of future research. VA provides an annual budget of about $400,000 for RAC, which provides salaries for two full-time and one part-time employee and supports committee operating costs. RAC’s employees include a scientific director and support staff who review published scientific literature and federal research updates and collect information from scientists conducting relevant research. RAC’s staff provide research summaries for discussion and analysis to the advisory committee through monthly written reports and at regularly scheduled meetings. RAC holds public meetings several times a year at which scientists present published and unpublished findings from Gulf War illnesses research. In 2002, RAC published a report with recommendations to the Secretary of VA. It expects to publish another report soon. Federal Research on Gulf War Illnesses Has Decreased, and VA Has Not Collectively Analyzed Research Findings to Determine Research Needs As of September 2003, about 80 percent of the 240 federally funded research projects on Gulf War illnesses had been completed. Additionally, funding for Gulf War-specific research has decreased, federal research priorities have been expanded to incorporate the long-term health effects of all hazardous deployments, and interagency coordination of Gulf War illnesses research has diminished. Despite this shift in effort, VA has not collectively reassessed the research findings to determine whether the 21 key research questions have been answered or to identify the most promising directions for future federal research in this area.
Most Federal Gulf War Illnesses Research Projects Are Complete, and Funding Is Decreasing as Research Priorities Evolve Since 1991, 240 federally funded research projects have been initiated by VA, DOD, and HHS to address the health concerns of individuals who served in the Gulf War. As of September 2003, 194 of the 240 federal Gulf War illnesses research projects (81 percent) had been completed; another 46 projects (19 percent) were ongoing. (See fig. 2.) From 1994 to 2003, VA, DOD, and HHS collectively spent a total of $247 million on Gulf War illnesses research. DOD has provided the most funding for Gulf War illnesses research, funding about 74 percent of all federal Gulf War illnesses research within this time frame. Figure 3 shows the comparative percentage of funding by these agencies for each fiscal year since 1994. After fiscal year 2000, overall funding for Gulf War illnesses research decreased. (See fig. 4.) Fiscal year 2003 research funding was about $20 million less than funding provided in fiscal year 2000. This overall decrease in federal funding was paralleled by a change in federal research priorities, which expanded to include all hazardous deployments and shifted away from a specific focus on Gulf War illnesses. VA officials said that although Gulf War illnesses research continues, the agency is expanding the scope of its research to include the potential long-term health effects in troops who served in hazardous deployments other than the Gulf War. In October 2002, VA announced plans to commit up to $20 million for research into Gulf War illnesses and the health effects of other military deployments. Also in October 2002, VA issued a program announcement for research on the long-term health effects in veterans who served in the Gulf War or in other hazardous deployments, such as Afghanistan and Bosnia/Kosovo. As of April 2004, one new Gulf War illnesses research project, for $450,000, was funded under this program announcement.
Although DOD has historically provided the majority of funding for Gulf War illnesses research, DOD officials stated that their agency currently has no plans to continue funding new Gulf War illnesses research projects. Correspondingly, DOD has not funded any new Gulf War illnesses research in fiscal year 2004, except as reflected in modest supplements to complete existing projects and a new award pending for research using funding from a specific appropriation. DOD also did not include Gulf War illnesses research funding in its budget proposals for fiscal years 2005 and 2006. DOD officials stated that because the agency is primarily focused on the needs of the active duty soldier, its interest in funding Gulf War illnesses research was highest when a large number of Gulf War veterans remained on active duty after the war—some of whom might develop unexplained symptoms and syndromes that could affect their active duty status. In addition, since 2000, DOD’s focus has shifted from research solely on Gulf War illnesses to research on medical issues of active duty troops in current or future military deployments. For example, in 2000, VA and DOD collaborated to develop the Millennium Cohort study, which is a prospective study evaluating the health of both deployed and nondeployed military personnel throughout their military careers and after leaving military service. The study began in October 2000 and was awarded $5.25 million through fiscal year 2002, with another $3 million in funding estimated for fiscal year 2003. VA’s Coordination of Federal Gulf War Illnesses Research Has Lapsed, and VA Has Not Determined Whether Key Research Questions Have Been Answered VA’s coordination of federal Gulf War illnesses research has gradually lapsed. Starting in 1993, VA carried out its responsibility for coordinating all Gulf War health-related activities, including research, through interagency committees, which evolved over time to reflect changing needs and priorities. 
(See fig. 1.) In 2000, interagency coordination of Gulf War illnesses research was subsumed under the broader effort of coordination for research on all hazardous deployments. Consequently, Gulf War illnesses research was no longer a primary focus. The most recent interagency research subcommittee, which is under the Deployment Health Working Group (DHWG), has not met since August 2003, and as of April 2004, no additional meetings had been planned. Additionally, VA has not reassessed the extent to which the collective findings of completed Gulf War illnesses research projects have addressed the 21 key research questions developed by the RWG. (See app. I.) The only assessment of progress in answering these research questions was published in 2001, when findings from only about half of all funded Gulf War illnesses research were available. Moreover, the summary did not identify whether there were gaps in existing Gulf War illnesses research or promising areas for future research. No reassessment of these research questions has been undertaken to determine whether they remain valid, even though about 80 percent of federally funded Gulf War illnesses research projects now have been completed. In 2000, we reported that without such an assessment, many underlying questions about causes, course of development, and treatments for Gulf War illnesses may remain unanswered. RAC’s Efforts to Provide Advice May Be Hindered by VA’s Limited Information Sharing and Collaboration, but Several Changes to Address These Issues Have Been Proposed RAC’s efforts to provide advice and make recommendations on Gulf War illnesses research may have been impeded by VA’s limited sharing of information on research initiatives and program planning as well as VA’s limited collaboration with the committee.
However, VA and RAC are exploring ways to improve information sharing, including VA’s hiring of a senior scientist who would both guide the agency’s Gulf War illnesses research and serve as the agency’s liaison to provide routine updates to RAC. VA and RAC are also proposing changes to improve collaboration, including possible commitments from VA to seek input from RAC when developing research program announcements. At the time of our review, most of these proposed changes were in the planning stages. RAC Officials Cite VA’s Poor Information Sharing and Limited Collaboration as Impediments in Meeting Its Mission According to RAC officials, VA senior administrators’ poor information sharing and limited collaboration with the committee about Gulf War illnesses research initiatives and program planning may have hindered RAC’s ability to achieve its mission of providing research advice to the Secretary of VA. RAC is required by its charter to provide advice and make recommendations to the Secretary of VA on proposed research studies, research plans, and research strategies relating to the health consequences of service during the Gulf War. (See app. II for RAC’s charter.) RAC’s chairman and scientific director said that the recommendations and reports that the advisory committee provides to the Secretary of VA are based on its review of research projects and published and unpublished research findings related to Gulf War illnesses. Although RAC and VA established official channels of communication, VA did not always provide RAC with important information related to Gulf War illnesses research initiatives and program planning. In 2002, VA designated a liaison to work with RAC’s liaison in order to facilitate the transfer of information to the advisory committee about the agency’s Gulf War illnesses research strategies and studies. 
However, RAC officials stated that most communication occurred at their request; that is, the VA liaison and other VA staff were generally responsive to requests but did not establish mechanisms to ensure that essential information about research program announcements or initiatives was automatically provided to the advisory committee. For example, according to RAC officials, VA’s liaison did not inform RAC that VA’s Office of Research and Development was preparing a research program announcement until it was published in October 2002. Consequently, RAC officials said that they did not have an opportunity to carry out the committee’s responsibility of providing advice and making recommendations regarding research strategies and plans. In another instance, RAC officials stated that VA did not notify advisory committee members that the Longitudinal Health Study of Gulf War Era Veterans—a study designed to address possible long-term health consequences of service in the Gulf War—had been developed and that the study’s survey was about to be sent to study participants. RAC officials expressed concern that VA did not inform the advisory committee about the survey even after the plans for it were made available for public comment. Information sharing about these types of issues is common practice among advisory committees of the National Institutes of Health (NIH), which has more federal advisory committees than any other executive branch agency. For example, a senior official within NIH’s Office of Federal Advisory Committee Policy said that it is standard practice for NIH advisory committees to participate closely in the development of research program announcements. In addition, NIH’s advisory committee members are routinely asked to make recommendations regarding both research concepts and priorities for research projects, and are kept up-to-date about the course of ongoing research projects. 
VA and RAC Are Exploring Methods to Improve Information Sharing and Collaboration In recognition of RAC’s concerns, VA is proposing several actions to improve information sharing, including VA’s hiring of a senior scientist to lead its Gulf War illnesses research and improving formal channels of communication. In addition, VA and RAC are exploring methods to improve collaboration. These would include possible commitments from VA to seek input from RAC when developing research program announcements and to include RAC members in a portion of the selection process for funding Gulf War illnesses research projects. As of April 2004, most of the proposed changes were in the planning stages. Since the February 2004 RAC meeting, VA and RAC officials said they have had multiple meetings and phone conversations and have corresponded via e-mail in an attempt to improve communication and collaboration. VA officials said they have already instituted efforts to hire a senior scientist to guide the agency’s Gulf War illnesses research efforts and to act as liaison to RAC. According to VA officials, this official will be required to formally contact RAC officials weekly, with informal communications on an as-needed basis. In addition, this official will be responsible for providing periodic information on the latest publications or projects related to Gulf War illnesses research. In an effort to facilitate collaboration with RAC, VA has proposed involving RAC members in developing VA program announcements designed to solicit research proposals, both specifically regarding Gulf War illnesses and in related areas of interest, such as general research into unexplained illnesses. RAC officials stated that throughout March and April 2004, they worked with VA officials to jointly develop a new research program announcement for Gulf War illnesses. In addition, VA has proposed that RAC will be able to recommend scientists for inclusion in the scientific merit review panels. 
VA also plans to involve RAC in review of a project’s relevancy to Gulf War illnesses research goals and priorities after the research projects undergo scientific merit review. This could facilitate RAC’s ability to provide recommendations to VA regarding the projects that the advisory committee has judged are relevant to the Gulf War illnesses research plan. Concluding Observations Although about 80 percent of federally funded Gulf War illnesses research projects have been completed, little effort has been made to assess progress in answering the 21 key research questions or to identify the direction of future research in this area. Additionally, in light of decreasing federal funds and expanding federal research priorities, research specific to Gulf War illnesses is waning. Without a comprehensive reassessment of Gulf War illnesses research, underlying questions about the unexplained illnesses suffered by Gulf War veterans may remain unanswered. Since RAC’s establishment in January 2002, its efforts to provide the Secretary of VA with advice and recommendations may have been hampered by VA’s incomplete disclosure of Gulf War illnesses research activities. By limiting information sharing with RAC, VA will not fully realize the assistance that the scientists and veterans’ advocates who serve on the RAC could provide in developing effective policies and guidance for Gulf War illnesses research. VA and RAC are exploring new approaches to improve information sharing and collaboration. If these approaches are implemented, RAC’s ability to play a pivotal role in helping VA reassess the future direction of Gulf War illnesses research may be enhanced. However, at the time of our review most of these changes had not been formalized. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. 
Contact and Staff Acknowledgments

For further information about this testimony, please contact me at (202) 512-7119 or Bonnie Anderson at (404) 679-1900. Karen Doran, John Oh, Danielle Organek, and Roseanne Price also made key contributions to this testimony.

Appendix I: Key Gulf War Illnesses Research Questions

Between 1995 and 1996, the Research Working Group (RWG) of the interagency Persian Gulf Veterans’ Coordinating Board identified 19 major research questions related to illnesses in Gulf War veterans. The RWG later added 2 more questions to create a set of 21 key research questions that serve as a guide for federal research regarding Gulf War illnesses. (See table 1.)

Appendix II: Charter for the VA Research Advisory Committee on Gulf War Veterans’ Illnesses (RAC)

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

More than a decade after the 1991 Persian Gulf War, there is continued interest in the federal response to the health concerns of Gulf War veterans. Gulf War veterans' reports of illnesses and possible exposures to various health hazards have prompted numerous federal research projects on Gulf War illnesses. This research has been funded primarily by the Department of Veterans Affairs (VA), the Department of Defense (DOD), and the Department of Health and Human Services (HHS). In 1993, the President named the Secretary of VA as the responsible party for coordinating research activities undertaken or funded by the executive branch of the federal government on the health consequences of service in the Gulf War.
In 2002, a congressionally mandated federal advisory committee--the VA Research Advisory Committee on Gulf War Veterans' Illnesses (RAC)--was established to provide advice on federal Gulf War illnesses research needs and priorities to the Secretary of VA. This statement is based on GAO's report entitled Department of Veterans Affairs: Federal Gulf War Illnesses Research Strategy Needs Reassessment (GAO-04-767). The testimony presents findings about the status of research on Gulf War illnesses and VA's communication and collaboration with RAC. The federal focus on Gulf War-specific research has waned, but VA has not yet analyzed the latest research findings to identify whether there were gaps in research or to identify promising areas for future research. As of September 2003, about 80 percent of the 240 federally funded medical research projects for Gulf War illnesses had been completed. In recent years, VA and DOD have decreased their expenditures on Gulf War illnesses research and have expanded the scope of their medical research programs to incorporate the long-term health effects of all hazardous deployments. Interagency committees formed by VA to coordinate federal Gulf War illnesses research have evolved to reflect these changing priorities, but over time these entities have been dissolved or have become inactive. In addition, VA has not reassessed the extent to which the collective findings of completed Gulf War illnesses research projects have addressed key research questions or whether the questions remain relevant. The only assessment of progress in answering these research questions was published in 2001, when findings from only about half of all funded Gulf War illnesses research were available. Moreover, it did not identify whether there were gaps in existing Gulf War illnesses research or promising areas for future research. 
This lack of a comprehensive analysis of research findings leaves VA at greater risk of failing to answer unresolved questions about causes, course of development, and treatments for Gulf War illnesses. RAC's efforts to provide advice and make recommendations to the Secretary of VA on Gulf War illnesses research may have been hampered by VA senior administrators' poor information sharing and limited collaboration on research initiatives and program planning. For example, VA failed to inform RAC about its 2002 major research program announcement that included Gulf War illnesses research. VA and RAC are exploring ways to improve information sharing and collaboration, including VA's hiring of a senior scientist who would both guide VA's Gulf War illnesses research and serve as the agency's liaison for routine updates to the advisory committee. However, most of these changes had not been finalized at the time of GAO's review.
Background DOD’s ranges are used primarily to test weapon systems and train military forces; some ranges are used for both testing and training purposes, while others are limited to one use or the other. These ranges represent important national assets for the development and sustainment of U.S. military forces. This report focuses primarily on ranges used for training purposes. DOD requires ranges for all levels of training to include airspace for air-to-air, air-to-ground, drop zone, and electronic combat training; live-fire ranges for artillery, armor, small arms, and munitions training; ground maneuver ranges to conduct realistic force-on-force and live-fire training at various unit levels; and sea ranges to conduct surface and sub-surface maneuvers for training. In a February 2004 report to the Congress, DOD identified 70 major active-component training ranges in the continental United States—the Army has 35, the Navy 13, the Marine Corps 12, and the Air Force 10. The report also identified several National Guard, Reserve, and smaller training ranges. Readiness Reporting for Defense Infrastructure to Include Training Ranges The Office of the Secretary of Defense for Personnel and Readiness develops policies, plans, and programs to ensure the readiness of military forces and provides oversight on training issues. The Secretaries of the military departments are responsible for training personnel and for maintaining their respective training ranges and facilities. Until recent years, DOD had no readiness reporting system in place for its defense installations and facilities, including training ranges. In fiscal year 2000, DOD reported to the Congress for the first time on the readiness of its defense infrastructure as an integral element of its overall Defense Readiness Reporting System. At the core of the system is a rating classification, typically referred to as a “C” rating. 
The C-rating process is intended to provide an overall assessment for each of nine facility classes (e.g., “operations and training” and “community and housing”) on a military installation. Training ranges fall within the operations and training facility class. While the services provide overall assessments by facility class, they may not always provide detailed separate ratings for installation assets, such as training ranges, within a class. With respect to training ranges, the Army and Marine Corps have data that provide C-ratings for their ranges, but the Navy and Air Force do not. The definitions for C-ratings are as follows:

C-1—only minor facility deficiencies with negligible impact on capability.

C-2—some deficiencies with limited impact on capability to perform missions.

C-3—significant facility deficiencies that prevent performing some missions.

C-4—major facility deficiencies that preclude satisfactory mission accomplishment.

Although we have previously reported concerns about the consistency and quality of the services’ approaches to completing these assessments, their assessments nonetheless have shown a large portion of DOD facilities across all classes of facilities, which include training ranges, being rated either C-3 or C-4. DOD’s Training Transformation Initiative To effectively support the needs of combatant commanders in the new strategic environment of the 21st century, DOD has undertaken a transformation initiative to change the way it conducts training by preparing military forces to learn, improvise, and adapt to constantly changing threats as they execute military doctrine. The joint national training capability is one of three capabilities of this initiative and calls for the development of a live-virtual-constructive training environment.
To meet this effort, defense planning guidance required OSD, in collaboration with the military services, Joint Chiefs of Staff, and Joint Forces Command, to develop a plan to transform military training to, among other things, ensure that training ranges and devices are modernized and sustainable. The Training Transformation Implementation Plan, which identifies DOD’s vision, goals, and milestones, was initially issued in June 2003 and subsequently updated in June 2004. Under the joint national training capability, DOD recognized the need for sustainable and modernized ranges and stated that range capabilities, such as instrumentation for the operating platforms, and modern range infrastructure are necessary to create the training environment, capture realistic ground situations, assess activity and performance, and promptly provide feedback to the training audience and serve as the foundation for the joint national training capability. Prior GAO Reports on Training Ranges In recent years, we have reviewed and reported on constraints, particularly those related to encroachment, on military training ranges. A brief summary on those reports follows: In June 2004, we reported that DOD’s training range report to the Congress, which was mandated by section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, did not provide a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and air space that are available in the United States and overseas for training. 
We also reported that DOD’s training report did not fully identify available training resources, specific training capacities and capabilities, and existing training constraints; fully assess current and future training requirements; fully evaluate the adequacy of current resources to meet current and future training range requirements in the United States and overseas; or include a comprehensive plan with quantifiable goals or milestones to measure progress, or projected funding requirements needed to implement the plan. In response to our recommendation calling for a comprehensive plan to fully address training constraints, DOD stated that the services had initiated a comprehensive planning process, which it considered to be evolutionary, and disagreed with our implication that DOD has not executed a comprehensive program to improve the sustainability of its ranges. In September 2003, we reported that through increased cooperation DOD and other federal land managers could share the responsibility for managing endangered species on training ranges. In February 2003, we also reported that while the amount of money spent on facility maintenance has increased, the amounts have not been sufficient to halt the deterioration of facilities, which include training ranges. In addition, we also reported a lack of consistency in the services’ information on facility conditions, making it difficult for the Congress, DOD, and the services to direct funds to facilities where they are most needed and to accurately gauge facility conditions. In April 2002, we reported that troops stationed outside of the continental United States face a variety of training constraints that have increased over the past decade and are likely to increase further. In June 2002, we reported on the impact of encroachment on military training ranges inside the United States with similar findings to those of the April 2002 report. 
In both reports, we stated that impacts on readiness were not well documented. In addition, we testified before the Congress twice on these issues—in May 2002 and April 2003. See the Related GAO Products section at the end of this report for a more comprehensive list of our products related to the issues discussed in this report.

Degraded Conditions at Military Training Ranges Adversely Affect Training Activities

Our visits to eight training ranges, along with DOD’s own assessments, show that military training ranges have been generally deteriorating over time and lack modernized capabilities. These degraded conditions have adversely affected training, placed the services at risk of not meeting DOD’s transformation goals, and jeopardized the safety of military personnel who use the ranges. Without adequately maintained and modernized ranges, the department not only compromises the opportunity to achieve its training transformation goal of sustainable and capable training ranges but also assumes the risk that its forces will be less prepared for their missions and subjected to safety hazards.

Deficiencies Observed at Training Ranges We Visited

Table 1 shows the wide variety of identified degraded conditions or lack of upgrades to meet current training needs at the ranges that we visited. The degraded conditions comprise both (1) those physical features of a training range that are subject to maintenance (e.g., tank trails and roads) over time and (2) those capabilities that are desirable for a modernized training range (e.g., automated threat emitters, automated targets, urban training facilities). Following the table is a discussion of degraded conditions that we observed.

Fort Hood, Texas

While the overall C-rating of the Fort Hood ranges in 2004 was C-2, 53 percent of the assessed training areas were identified by installation officials as having significant (C-3) or major (C-4) deficiencies that preclude satisfactory mission accomplishment.
According to Army officials, the condition of Fort Hood’s training ranges is understated because the overall C-rating does not include all assessed training areas. In addition, training range officials identified 364 (91 percent) of the 400 miles of their tank trails, which are not rated under training areas, as unusable or hazardous because of deteriorated conditions (see fig. 1). As a result, units typically detoured onto paved, public roads to travel to and from training areas, causing road damage and creating safety hazards for the public who use the roads. In addition, the urban training facilities were outdated, having been designed for Cold War scenarios that are not applicable to current military operations. For example, the facilities at Fort Hood resemble European villages with narrow streets. But in current military operations, tanks and other military vehicles patrol Middle Eastern settings and downtown cities. Also, while entrances to these European-style homes at Fort Hood are immediately off the road and easily accessible, homes in the Middle East are generally protected by tall, gated walls and designed around a courtyard, making soldiers more vulnerable to enemy fire before entering a home.

Fort Stewart, Georgia

While the overall C-rating of the Fort Stewart ranges in 2004 was C-3, 60 percent of the training areas were identified by installation officials as having major (C-4) deficiencies that preclude satisfactory mission accomplishment. In addition, range officials and units told us that the convoy training area limits soldiers to shooting out of only one side of a vehicle during ambush training exercises, although soldiers stated that in actual military operations they could be attacked from multiple directions. The range also lacks urban training facilities that accurately reflect the needs of current military operations, such as Middle Eastern-style building facades.
A range official further stated that most of their ranges lack running water and therefore do not have functioning restrooms or showers, which leads to delays and inefficient use of training time. Similar to Fort Hood, the range also has deteriorated training areas that pose difficulties in maneuvering vehicles during training events (see fig. 2).

Southern California Offshore Range, California

There are numerous identified deficiencies at this range—a primary site for West Coast Navy units to train before deploying—that adversely affect the quantity and quality of training activities. Range and submarine squadron officials told us that a major deficiency is the malfunctioning of the undersea training area’s communications system, which effectively reduces the available training area to the southern portion of the range (see fig. 3). This situation is further exacerbated because the southern portion of the undersea training area overlaps with surface ship training areas, and so concurrent training cannot be conducted. Range officials stated that this and other deficiencies could also impede their ability to meet the increased demand created by the Navy’s revised ship deployment cycle, which requires more carrier groups to be deployable at a given time. Moreover, the range does not have an instrumented shallow-water capability. A recent study on the range’s capabilities for antisubmarine warfare found that current range resources are sufficient to meet 90 percent of the minimally required training tasks. However, the study found that the range does not provide a realistic training environment for 19 (63 percent) of 30 Navy training skills, primarily due to the lack of a shallow-water instrumented training range. The range also lacks adequate support capabilities, such as piers, docks, and mooring buoys. For example, although range officials stated that current fleet requirements necessitate a minimum of eight mooring buoys, only two are in satisfactory condition.
As a result, these buoys are rarely available, which leads to reduced training support and costly workarounds, such as travel to alternate locations for the night. In addition, the lack of mooring or docking capabilities has also resulted in damaged military property and canceled training events. Range officials and users cited other deficiencies, including an inadequate number and types of targets, electronic warfare capabilities, and tracking systems for aircraft, as well as the lack of a dependable secure high-capacity communication system. In commenting on a draft of this report, the Navy stated that it is currently funding efforts to establish dedicated shallow water training ranges on both coasts. However, during our review, Navy officials acknowledged that the west coast range will not be established until the service addresses more restrictive environmental requirements and other anticipated obstacles on the east coast.

Fallon Range Training Complex, Nevada

Pilots and training range officials stated that the Fallon Range Training Complex lacks adequate systems to replicate current threats and targets. It lacks advanced surface-to-air missile threat systems and has an inadequate concentration of electronic warfare systems. As a result, the quality of training is adversely affected. Furthermore, because replacement parts for the current electronic warfare systems are becoming obsolete, the systems are becoming difficult to maintain. In addition, the range has an insufficient number of targets, particularly time-sensitive and moving targets, to reflect the current threat.

Camp Lejeune, North Carolina

While the overall C-rating of the Camp Lejeune ranges in 2004 was C-3, 12 percent of the training areas were identified by installation officials as having major (C-4) deficiencies that preclude satisfactory mission accomplishment.
We observed several training areas with overgrown vegetation that obstructed the visibility of targets and range boundary markers, thereby precluding the use of highly explosive ammunition for safety reasons. This condition also diminished the trainers’ ability to accurately observe the Marines’ shooting proficiency. Some training areas also lack marked firing lanes, and only 5 of the 120 live-fire training areas had automated targets, thereby limiting the amount of training time available since Marines must set up and take down targets as a workaround (see fig. 4). Similar to the conditions found at Fort Hood and Fort Stewart, the urban training facilities were outdated and the range lacks an area to conduct training for soldiers on convoy operations. Consequently, soldiers either have to travel to other ranges to receive such training, which increases training costs and the amount of time soldiers are away from their families, or soldiers remain at their primary ranges and may be less prepared for the conditions they will face in combat.

Camp Pendleton, California

While the overall C-rating of the Camp Pendleton ranges in 2004 was C-2, 24 percent of the training areas were identified by installation officials as having significant (C-3) deficiencies that precluded accomplishment of some missions. Although encroachment is the primary problem for this range, several other deficiencies also affect its training and safety. For example, the range lacks a sufficient number of automated targets to provide feedback for users. In addition, one of the primary training areas is located in a dry riverbed lacking emergency escape routes, where range officials told us one Marine had drowned when it flooded.
The training areas used by Navy special operation units have overgrown vegetation; are inadequately constructed to meet requirements and safety conditions; and lack target maintenance and storage facilities, bullet containment walls, turning and moving targets, and hygiene facilities. A lack of running water also creates a financial burden for the range office, which, as a costly workaround, must consequently rent temporary restroom structures. In addition, helicopter pilots stated that the range lacks needed mountaintop targets for them to train against threats from an elevated position.

Nellis Test and Training Range, Nevada

Although range officials stated that the Nellis Test and Training Range is the most capable in the Air Force, we were told about and observed several deficiencies that affect training, including an insufficient concentration of buildings to replicate an urban environment, inadequate scoring and feedback capabilities, and a lack of specific urban-setting target sets. The range also lacks a sufficient number of opposition forces for training exercises and advanced surface-to-air missile threat systems, which adversaries currently own and operate.

Barry M. Goldwater Range, Arizona

Pilots and training range officials told us that the Barry M. Goldwater Range lacks moving targets, camouflaged or concealed targets, enemy targets embedded within friendly forces and the civilian population, cave entrances, time-sensitive targets, and strafing pits at specific tactical locations, which are necessary to provide users with a more realistic training experience. It also lacks scoring and feedback capability in the live-fire training areas. Without a scoring system and targets, pilots must shoot at barren mounds of dirt, which diminishes their ability to obtain feedback on the proficiency of their attack.
The range lacks the capability to provide remote site feedback, thus diminishing the amount of training and personal time available to pilots who must as a workaround travel to another base to receive this feedback. It lacks an adequate concentration of electronic warfare systems, and the systems it has are becoming difficult to maintain as replacement parts become obsolete. Also, its communication system is inadequate.

Deficiencies Identified by DOD Studies

DOD is aware of training range deficiencies, having issued a number of studies over the past 10 years that identify them. For example, DOD’s 2001 Quadrennial Defense Review Report states that unique American training superiority is eroding from underlying neglect and needs support in sustainment and recapitalization, particularly as evidenced in the aging infrastructure and instrumentation of DOD’s training ranges. The Navy has completed a number of studies over the years that identify deficiencies at specific ranges. For example, in 1995 it issued a tactical training range roadmap identifying deficiencies at each of its ranges. Many of these deficiencies still exist, such as inadequacies of shallow-water ranges and of realistic targets. In September 2001, the Navy assessed its ranges and identified several deficiencies, including inadequate instrumentation at some of its most critical ranges. In September 2000, it completed a range needs assessment on 19 air-to-ground ranges and identified degraded range conditions and a lack of capabilities. A 2003 Air Force assessment of its training ranges found infrastructure deficiencies at 90 percent of its ranges, attributable to age and limited funding. The assessment considered the deficiencies significant at 24 of its 32 training ranges.
While the Army and the Marine Corps have not issued composite studies on the deficiencies of their ranges, they have conducted overall annual range assessments as part of the readiness reporting system and identified deficiencies as well. Further, the Navy and Marine Corps have identified a number of deficiencies at their ranges while developing local range complex management plans.

Various Factors Affect DOD’s Progress in Improving Training Range Conditions

While OSD and the military services have undertaken a number of management actions that could improve the conditions of their training ranges, progress in overall improvements has been limited, due in part to the lack of a comprehensive approach to manage their training ranges. Specifically, a comprehensive approach should include, at a minimum, several key elements, such as well-defined policies that address all factors impacting range sustainability; plans that guide the timely execution of range sustainability actions; and range requirements that are geared to meet both service and joint needs. Further, while the military services lack adequate and easily accessible information that could precisely identify training range maintenance and modernization funding, available information indicates that identified training range requirements have historically not been adequately funded. Additionally, OSD and the services have not fully implemented specific actions identified in their policy, management guidance, reports, and plans for improving training range conditions. Without a fully implemented comprehensive approach, OSD and the services will not be able to ensure the long-term viability of their training ranges, nor their ability to meet transformation goals, nor will the Congress be in a position to fulfill its oversight role.
OSD and the Services Have Taken Limited Range Improvement Actions, but a Comprehensive Approach Is Lacking

OSD and the military services have collectively taken a number of steps that are designed to improve the conditions of training ranges at the service and local range level. For example, to varying extents, the military services have developed policies for training range sustainment, developed service-specific plans, established working groups to coordinate efforts among multiple organizations, defined range requirements, assessed conditions, developed Web-based systems to share information within and among OSD and the services, and developed local range management plans. While these key actions comprise elements of a comprehensive approach to training range sustainment, they have focused primarily on encroachment, or they have not been consistently implemented among the services, or they have not clearly defined the roles and responsibilities of all officials. Our analysis of the status of OSD’s and the services’ management actions taken to improve range conditions is shown in figure 5.

Policy—While OSD promulgated a DOD range sustainment policy in 2003, that policy primarily focuses on external encroachment factors that impact training and does not clearly define the roles and responsibilities of several DOD commands that either provide oversight or are impacted by the conditions of the ranges. Specifically, the policy does not clearly define the maintenance and modernization responsibilities of the Deputy Under Secretary of Defense for Installations and Environment and Special Operations Command. Consequently, these organizations lack appropriate assignment of responsibility and accountability for the military training range improvements they oversee or manage. According to service officials, the Army and Marine Corps are finalizing draft revisions of their range sustainment policy, and the Air Force only recently started revising its policy.
Navy officials stated that the service has not yet developed a policy to implement DOD’s 2003 policy or to clearly define the roles and responsibilities of the multiple Navy organizations responsible for maintaining and modernizing its training ranges.

Range sustainment programs—As shown in table 2, OSD and some of the services have initiated specific range sustainment programs to integrate their individual components and commands. The Army has developed such an integrated program that incorporates the multiple facets of range sustainment, including maintenance and modernization, and includes involvement of all responsible officials. OSD and the Navy have established similar programs, but their programs focus primarily on encroachment issues and not on other factors that impact training, such as the maintenance and modernization of ranges. The Marine Corps has taken multiple sustainment initiatives, but has not designated its efforts as a program.

Strategic or implementation plans—Although DOD has developed strategic plans in other areas, such as the 2004 Defense Installations Strategic Plan and Training Transformation Strategic Plan, to guide the services with goals and milestones, it has not developed a comprehensive strategic plan for the long-term viability of its military training ranges. In June 2004, we reported that DOD’s training range report to the Congress, which was mandated by section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, did not, among other things, provide a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and air space that are available in the United States and overseas for training.
In response to our recommendation calling for a comprehensive plan to fully address training constraints, along with quantifiable goals and milestones for tracking planned actions and measuring progress, DOD stated that the services had initiated a comprehensive planning process, which it considered to be evolutionary, and disagreed with our implication that DOD has not executed a comprehensive program to improve the sustainability of its ranges. Defense planning guidance has directed DOD to develop a plan to ensure that training ranges are sustainable, but the plan addressed only encroachment issues impacting military training ranges. Similarly, the 2004 Defense Installations Strategic Plan identifies and provides goals for addressing encroachment factors impacting DOD’s training ranges, but not for other issues that affect the quality of training, such as range maintenance and modernization. The absence of such a plan could adversely impact DOD-wide initiatives, such as the joint national training capability and the overseas rebasing of forces to the United States. Furthermore, lacking a comprehensive DOD strategic plan, none of the services has developed an implementation plan of its own. The Army and Air Force have developed documents on their sustainable range programs, but these documents do not provide specific goals or milestones that the services can use to measure their progress in meeting their vision and overall goals for ensuring the long-term viability of their ranges. While the Navy has taken several actions under its sustainable range program, it still lacks a plan with specific goals, milestones, funding sources and amounts, defined roles and responsibilities, and other critical components of a strategic plan.
Multilevel integrated working groups—OSD and most of the services have developed formal sustainable range working groups at multiple levels that are intended to address training range constraints, since range viability is dependent on a number of fragmented organizations within OSD and the services. For example, the Deputy Secretary of Defense established a multilevel DOD-wide working group, which includes representatives from the services and some of the other OSD offices. However, the working group does not include a representative from Special Operations Command, although that command is responsible for and impacted by the maintenance and modernization of military training ranges. Also, both the DOD-wide and Navy headquarters-level sustainable range working groups are primarily focused on encroachment issues and not on other issues that impact ranges and training, such as maintenance and modernization. For example, the Navy’s southwest regional range director stated that his primary responsibility is encroachment and munitions cleanup, and that he has not been assigned or been provided the resources to address the maintenance and modernization of ranges in his region. Also, on the basis of our discussion with officials, we noted that only the Marine Corps’ and Air Force’s working groups included all relevant organizations, such as special operations units, which have an interest in having maintained and modernized ranges.

Range requirements—The Navy and Marine Corps have begun to identify or have identified specific requirements or capabilities needed for their ranges, which could be used for budgeting purposes as well as assessing training range deficiencies. In addition, the Navy has linked and the Marine Corps is in the process of linking its training requirements to these range requirements so that the services can identify specific training standards that are impacted by the conditions of a specific training area.
However, only the Navy’s draft range requirements document links its ranges to special operations and joint training requirements to show the potential impact on the special operation units’ or combatant commanders’ needs, which is a key objective of DOD’s training transformation initiative. Also, none of the range requirement documents identify range support facility needs, although facility conditions directly impact the quantity and quality of training provided and the level of safety on the ranges.

Systematic assessment of range conditions and impacts—At the time of our review, we found that none of the services regularly assessed the conditions of their ranges, including whether the ranges are able to meet the specific training requirements of the service and combatant commanders. While the Army and Marine Corps annually assess the physical condition of their training ranges, the services do not assess the capabilities of the ranges or any impacts to training. While the Army’s assessment contained clearly defined criteria, local training range officials stated that because the criteria are revised regularly, comparing assessments across years is impossible. In addition, the overall assessment of Army training ranges does not accurately reflect the condition of all training areas on the range since it does not include the condition of a number of training areas. Also, according to service officials, both the Army’s and Marine Corps’ assessments are conducted by public works officials who do not have the background or specific knowledge of range infrastructure, as opposed to training range officials or training unit representatives. In addition, local officials stated that the Marine Corps’ assessment is highly subjective and does not provide the evaluator with specific criteria.
While the Navy and Air Force do not routinely conduct annual assessments of their training ranges, the Air Force does perform assessments from time to time and the Navy has completed some one-time assessments on their ranges while developing local range complex management plans. We also found that none of the services regularly assess the impacts to training, and none of the services have linked their funding resources to the results of the assessments.

Web-based range information management system—DOD reports and officials have increasingly called for a range information management system that would allow range offices and users to share information within and across the services. Such a Web-based system would include best practices, lessons learned, a scheduling tool, policies, points of contact, funding information, and range conditions and capabilities. Local range offices have undertaken a number of initiatives to ensure that their ranges remain viable while trying to minimize the negative impact on training, but they often lack an effective mechanism for sharing these initiatives with other organizations. For example, the range officials at the Fallon Range Training Complex routinely obtained targets and training structures at no cost from the Defense Reutilization and Marketing Service to enhance their training capability, but other training offices we visited were having difficulty obtaining these items or were paying for the items they were able to obtain. For example, figure 5 shows a mock airfield that was constructed at the Fallon Range out of materials obtained from the Defense Reutilization and Marketing Service. The Marine Corps has an active, centralized training range Web site to provide information to units and ranges across the world, including related service regulations, general and detailed information about each of its ranges, and training range points of contact.
The Web site also allows units from any service to schedule their training events remotely, and provides them with a map of each training range including photographs and, in some instances, video footage to assist them in scheduling and designing their training events. However, to date, the Marine Corps has not used its Web site to exchange information, such as lessons learned and best practices, between and among training range offices and military units. Meanwhile, the Army has developed an initial Web site that provides similar, but more limited, information about its sustainable range program. The Air Force has also established a training range Web site to share information about its training ranges, but it has remained nonfunctional, since the service did not enter information into the site. The Air Force’s Air Combat Command is developing a separate training range information management system. While a cognizant command official stated that the command plans on adding a chat room feature to exchange information, the official stated that the system might not be Web-based, so the information would not be available to other range offices or units within and across the services. In commenting on a draft of this report, DOD stated that the Air National Guard is in the process of developing a Web-based range scheduling system that could meet some of the service’s needs, but additional funding is needed to complete this effort. While Navy reports and officials recognize the need for a servicewide training range management system, the service has not developed such a system. However, the Southern California Offshore Range has its own management system that is used for scheduling, identifying specific training requirements for each training event, documenting reasons why training is modified or canceled, tracking training range utilization rates by specific units, and recording maintenance issues and resolutions.
In addition, the system allows the range office to compute the costs of training each unit using specific training requirements and warfare areas.

Local range complex management plans—The Navy and Marine Corps have started to develop local range complex management plans for their training ranges, which, among other things, provide descriptions of the training ranges, a strategic vision for range operations, and recommendations for environmental planning; identify and analyze required capability shortfalls derived from fleet training needs; and include an investment strategy to address these deficiencies. Although most of the Navy’s and Marine Corps’ local range offices have started to develop plans with investment strategies, these strategies are not linked to any service investment strategies. Also, due to funding expectations, current needs have been pushed out 20 years. Consequently, today’s training requirements are being met with yesterday’s ranges and tomorrow’s training requirements will be met with today’s ranges. Further, six of the Marine Corps’ range complex management plans, including those for two of the service’s most significant training ranges, are currently unfunded. In addition, the Army and Air Force ranges we visited have outdated plans. The Army recently started developing standardized local range plans and the Air Force is creating a management system to develop plans for its ranges. However, the system is not scheduled to be operational until 2007. While these key actions comprise elements of a comprehensive approach to training range sustainment, they have focused primarily on encroachment, have not been consistently implemented among the services, or have not clearly defined the roles and responsibilities of all officials.
Such an approach should include, at a minimum, several key elements, such as an overall comprehensive strategic plan that addresses training range limitations, along with quantifiable goals and milestones for tracking planned actions and progress. Other key elements include well-defined policies that address all factors impacting range sustainability, servicewide plans that guide the timely execution of range sustainability actions, range requirements that are geared to meet both service and joint needs, and a commitment to the implementation of this approach. (See app. II for a more comprehensive list of what we consider to be key managerial elements of a comprehensive approach).

Services Have Not Adequately Funded Training Range Maintenance and Modernization

Various documents and training range officials report that training range requirements have historically not been adequately funded to meet training standards and needs. According to service officials, a variety of factors—such as ranges having a lower funding priority amid competing demands—have contributed to or exacerbated funding limitations. However, the military services lack adequate and easily accessible information that could precisely identify the required funding and track what is allocated to maintain and modernize their ranges.

Available Data Reflect Funding Shortages for Range Requirements

Available data indicate that funding for training ranges has historically been insufficient to meet range requirements. For example, the 2003 Special Operations Command report on training ranges states that ranges are inadequately funded for construction, maintenance, repairs, and upgrades. In addition, a 2001 Navy range study states that both range operation funds and base operation funds, which also support range sustainment, were not adequate, thus adversely impacting utilization of the Navy’s ranges.
A 2004 Naval Audit Service report also found that Navy range accounts were not being adequately funded and thus were dependent on funds from other accounts. Further, funding information provided by training range officials during this review showed that funding has not adequately met their requirements. For example, Fort Stewart training data indicated that the installation’s training range maintenance account was funded at approximately 44 percent of requirements for fiscal years 1998 through 2002. Similarly, Camp Pendleton data revealed that the overall identified range needs were funded at approximately 13 percent for fiscal years 1998 through 2002. DOD reports and officials identified the following as factors in the funding shortages:

Training ranges typically have a lower funding priority than many other installation activities. Specifically, training ranges do not compete well for funding against other installation activities that are more visible or related to quality-of-life issues, such as gymnasiums, child care centers, and barracks, and consequently training funds are often reallocated from the range to support other base operations programs. For example, the 2003 Air Force training range assessment stated that critically needed sustainment funds for ranges were often diverted to fund other base requirements identified as more pressing.

Service officials identified a number of organizational structure issues that exacerbate the extent to which training range requirements are prioritized and funded. While OSD’s and the services’ training range offices are located in an operations directorate, this directorate does not prioritize or fund base programs that provide resources for the sustainment, restoration, and modernization of DOD infrastructure (including ranges). Recognizing this as an issue, the Navy recently hosted a conference to address the fragmented management for budgeting and allocating funds to ranges.
During the meeting, Navy officials agreed to 20 specific actions that could be taken to minimize future funding issues. Also, while local range personnel are responsible for maintaining and modernizing ranges, some of these offices are not directly linked to the command that prioritizes installation resources. For example, the range office at the Southern California Offshore Range, which is an operational unit, is not organizationally aligned with the installation management organization that prioritizes sustainment funds for San Clemente Island. In addition, although the majority of the Southern California Offshore Range's exercises are fleet operations and not air operations, the range office is aligned under a naval air command rather than the fleet command. Moreover, the relative position of training ranges in the organizational framework affects the extent to which training range requirements are prioritized and funded. Specifically, while some local range offices report directly to the senior mission commander that prioritizes funding resources, other range offices report to offices several echelons below the commander. For example, the Air Force's Air Warfare Center commander stated that since the range office for the Nellis Test and Training Range is an Air Force wing, it has the same opportunity to identify its requirements and deficiencies to him as the other wings at Nellis Air Force Base, Nevada. Conversely, although the Fallon Range Training Complex range office used to report directly to the Naval Strike and Air Warfare Center commander, who sets funding priorities and requirements, the range office has since been aligned to a lower echelon, placing it at a disadvantage in having its requirements and deficiencies identified as priorities. A lack of clearly defined roles and responsibilities can also result in overlooked training range requirements.
Specifically, several training range officials stated that the Navy's regional installation support structure lacks clearly defined roles and responsibilities for each of the program directors within the structure, which results in overlooked requirements at its training ranges. For example, because the Southern California Offshore Range is only a portion of San Clemente Island in the Pacific Ocean, multiple officials are responsible for the different operations occurring on the island, including training ranges, the port, the airfield, environmental programs, facilities, information technology, and safety. However, according to training range officials, deficiencies on the island are overlooked because the Navy has not issued guidance providing clearly defined roles and responsibilities for each of these program directors. Specifically, training range officials stated that they are unable to obtain funds to maintain or modernize support facilities on the island, such as the pier and roads, because program managers tend either to view the entire island as a training range, and therefore not their responsibility, or to view it as not one of their top priorities, since the adverse impact on their primary missions is relatively limited. Nevertheless, the condition of these support facilities directly impacts range activities. Various documented reports and testimonies of cognizant officials suggest that range needs are understated to the Congress due to the following factors: (1) installation real property inventories, which are used to calculate the installations' sustainment funding requirements, do not contain complete and accurate information needed to compute requirements; (2) commands typically understate range needs because they have come to expect lower funding amounts; and (3) ranges may receive supplemental funding from units to help maintain conditions.
For example, the 2003 Special Operations Command training range report found that Army installations had incorrectly categorized their range facilities built with operations and maintenance funds as multipurpose ranges, which are considered less costly to maintain than those specially targeted for the command. Therefore, these installations underbudgeted for the maintenance and repair of these facilities. In addition, Marine Corps officials stated that they recently updated their installation real property inventories and discovered numerous discrepancies that had resulted in understatement of their ranges' needs. Also, officials at Fort Hood stated that 30 percent of the installation's tank trails are not included in its real property records because the tank trails do not meet military construction standards. As a result, Fort Hood is unable to obtain sufficient funds to either sustain or improve the tank trails to an acceptable standard and add them to the real property inventory. Further, officials stated that commands understate range funding requirements because they have come to expect lower funding levels. For example, officials at Fort Hood stated that although their range modernization funding requirements totaled at least $8 million, they had programmed and budgeted for only $4 million. Also, the requirements and budget documents at the Southern California Offshore Range office showed that the range's requirements were understated by about 30 percent for fiscal years 2005 through 2007. Consequently, range officials stated that even if this amount were fully funded and not transferred to other accounts, their needs would be unmet. Because the range has a management system that captures the cost to train units on the range, the office was able to report that it would have to cancel operations due to a lack of funds in May of each year or eliminate all command and control and battle group exercises, including 20 significant training events already scheduled.
In addition, a 2004 Naval audit found that the regular transfer of funds from units to training ranges resulted in understated requirements, leaving senior Navy management, DOD officials, and the Congress without important information needed to efficiently and effectively manage and fund Navy programs that the Congress has identified as significant to readiness. The services do not link funding for their training ranges to range conditions, capabilities, impacts on training, or utilization. For example, while the number of training hours on the Southern California Offshore Range increased by 153 percent between fiscal years 1998 and 2001, range funding data reflect that funding increased by less than 10 percent. As a result, range officials told us that training range requirements continued to be underfunded, conditions continued to deteriorate, and capabilities continued to be lacking. Service officials across all commands lack adequate knowledge and training about the various resources available for range maintenance and modernization and about how these resources affect funding levels. For example, very few of the training range officials that we met during our review were aware of sustainment funds that were generated by the range property in the installation's real property inventory systems. The services also lack clearly defined range requirements that distinguish special operations-specific range needs from servicewide range needs, which results in confusion over which organization is responsible for funding range maintenance and modernization. Specifically, the 2003 Special Operations Command training range report stated that when Special Operations Forces are the primary users of a range funded with service dollars, disagreement sometimes arises over responsibility for maintenance costs. Consequently, there needs to be better clarification of what constitutes Special Operations-specific facilities and what constitutes service-common facilities.
Services Lack the Capability to Accurately Capture Training Range Requirements and Funding Levels We found, and DOD recognizes, that the services lack the capability to accurately and easily capture training range funding information. DOD's sustainable range working group officials told us that the services were unable to easily and precisely identify their funding requirements, funding levels, and trends in expenditures on an annual basis. Consequently, the group formed a subcommittee in 2004 to begin addressing this issue. Also, the 2004 Naval audit on range operations funds found that the lack of a range management system resulted in problems related to the visibility of the amount and use of funds being provided. Further, while training range officials for each of the services stated that they could identify some training range requirements or funding amounts, none were able to identify all of the funds that their ranges need and receive. For example, while the Army was able to identify its range operations requirements and funding levels, it was unable to identify its range sustainment requirements and funding levels. Officials in these range offices stated that they should have the ability to accurately identify all funding provided to their ranges if they are going to be effective program sponsors. Local training range officials were also unable to identify all their funding requirements and levels. They noted that a centralized system would provide a mechanism for service headquarters officials to identify funding requirements and at the same time relieve local officials of the burden of responding to constant requests for information. OSD and the Services Have Not Fully Implemented Previously Recommended Actions Although policy, management guidance, reports, and plans have either recommended or required specific actions, OSD and the services have not fully implemented them.
For example, although DOD's sustainable range policy requires OSD to, among other things, provide oversight of training ranges and ensure that DOD-level programs are in place to protect the future ability of DOD components to conduct force training, a cognizant OSD official told us that OSD believes it should be a facilitator rather than a provider of oversight. Without adequate oversight, DOD-level initiatives, such as transformation efforts, could be jeopardized. In addition, OSD has not established a means to assess the readiness benefits of range sustainment initiatives, as required by the policy. In response to DOD guidance stating that DOD was to reverse the erosion of its training range infrastructure and ensure that ranges are sustainable, capable, and available, the Senior Readiness Oversight Council required the services, working with OSD, to prepare a prioritized list of range sustainment and upgrade programs and estimated costs for potential inclusion in the fiscal year 2003 budget. However, the list was never developed and submitted for potential funding opportunities. Defense officials could not explain why no appropriate action was taken. In addition, the 2003 Special Operations Command training range report identified a number of recommendations that could improve the conditions of training ranges that units within the command use. For example, the report stated that all special operations components need to create master range plans that address their current and future range issues and solutions; identify and validate training requirements as well as facilities available and needed; and define acceptable limits of workarounds. However, according to a knowledgeable defense official, these recommendations have not been implemented to date because of resource shortages.
Also, in July 1995, the Navy issued a tactical training range roadmap that, among other things, applied training requirements to training range capabilities and identified deficiencies to produce an investment plan for training range development. Although the plan stated that it should be updated biannually to remain current and accurately reflect fleet training requirements and associated instrumentation needs, the Navy has not updated the plan since that time. Without a commitment to implementation, it is unlikely that OSD and the services will be able to ensure the success of their transformation efforts and the long-term viability of their training ranges. Conclusions DOD training ranges are important national assets that have not been adequately maintained or modernized to meet today's needs. While DOD has undertaken a number of actions in an effort to maintain and modernize its training ranges, it lacks a comprehensive approach to address range issues. We have previously recommended and continue to believe that DOD needs an overall strategic plan that identifies specific goals, actions to be taken, milestones, and a process for measuring progress and ensuring accountability. In turn, each service needs to develop a comprehensive implementation plan if deteriorating conditions are to be abated and overall training capabilities improved to meet today's and tomorrow's requirements. Similarly, OSD and the services have issued policies, conducted studies containing recommendations, identified range officials at various command levels, and developed working groups. However, not all relevant officials are included, their roles and responsibilities are not clearly defined, the policies and recommendations have been ignored or only partially implemented, and several of these actions focus only on external encroachment issues.
DOD needs to ensure that OSD's comprehensive strategic plan, the services' implementation plans, DOD's training transformation plan, DOD policies, and identified recommendations include all relevant officials; clearly define their roles and responsibilities; comprehensively address all sustainability issues, including the maintenance and modernization of military training ranges; and are fully implemented to ensure the long-term viability of these national assets. Although military training ranges are generally in degraded condition, which adversely affects the quantity and quality of training and the safety of users, the military services do not accurately and systematically assess their ranges, including whether the ranges are able to meet the specific training requirements of the service and combatant commanders. Without systematically assessing the conditions of their ranges, the services cannot accurately identify the ranges where conditions negatively impact training and need improvement, the best locations for training, or which training ranges best meet the needs of DOD's training transformation plan and of service and combatant commanders. Although local training range officials have undertaken a number of initiatives to ensure that their ranges remain viable while trying to minimize the negative impact on training, the services have not provided these officials or military units with a Web-based range information management system. Without such a system, the range offices are unable to share best practices and lessons learned within and across the services, and military units are unable to identify which ranges best meet their needs. Various documents and training range officials report that training range requirements have historically not been adequately funded to meet training standards and needs.
Without appropriate attention and adequate funding, the services will be unable to meet DOD's transformation goals and ensure the long-term viability of their ranges. The military services do not have the capability to accurately and easily identify the funding amounts needed or provided for maintaining and modernizing their ranges. Without this capability, the military services are constrained in their ability to accurately plan, program, and budget for the maintenance and modernization of their training ranges; provide complete and accurate information to the Congress for appropriation and legislative decision making; and obtain this information without constant requests to multiple officials at different commands. A variety of factors, such as ranges having a lower priority in funding, contribute to or exacerbate funding limitations. Without addressing these and other factors, training range conditions will continue to degrade. Recommendations for Executive Action We have previously recommended that OSD develop an overall comprehensive strategic plan for its training ranges that addresses training range limitations, along with quantifiable goals and milestones for tracking planned actions and progress. In response to our recommendation, DOD stated that the services had initiated a comprehensive planning process, which it considered to be evolutionary, and disagreed with the implication that DOD has not executed a comprehensive program to improve the sustainability of its ranges. However, our work has shown that this recommendation still has merit and should be addressed because it is fundamental to the comprehensive approach for managing training ranges that we are advocating.
We are making other recommendations to you as follows:

Direct the Under Secretary of Defense for Personnel and Readiness to:

Update DOD Directive 3200.15 to broaden the focus of the policy to clearly address all issues that affect the long-term viability of military training ranges and to clearly define the maintenance and modernization roles and responsibilities of all relevant DOD components, including the Deputy Under Secretary of Defense for Installations and Environment, Joint Forces Command, and Special Operations Command.

Broaden the charter of the DOD-wide working group, the Sustainable Range Integrated Product Team, to address all issues that could affect the long-term viability of military training ranges and to include all DOD components that are impacted by range limitations.

Update DOD's training transformation plan to address all factors that could impact the sustainability of military training ranges, not just external encroachment issues.

Direct the Secretaries of the Military Services to implement a comprehensive approach to managing their training ranges, to include:

A servicewide sustainable range policy that implements the updated DOD Directive 3200.15 and clearly defines the maintenance and modernization roles and responsibilities of relevant service officials at all levels.

A servicewide sustainable range implementation plan that includes goals, specific actions to be taken, milestones, funding sources, and an investment strategy for managing their ranges.

Defined training range requirements and a systematic process to annually assess the conditions of training ranges and their consequent impact on training, including whether the ranges are able to meet the specific training requirements of the service and combatant commanders.
A Web-based range information management system that allows training range officials at all levels to share information, such as range conditions and their impact on training; funding sources, requirements, and expenditures; and local range initiatives.

Regularly developed strategies to address the factors contributing to funding shortages for ranges, including the reassessment of funding priorities for maintaining and modernizing ranges relative to other needs.

Agency Comments and Our Evaluation In commenting on a draft of this report, the Deputy Under Secretary of Defense for Readiness agreed with our recommendations, stating that the department and military services are taking or will take steps to implement them. The Deputy Under Secretary of Defense's comments are included in appendix III of this report. DOD also provided technical clarifications, which we incorporated as appropriate. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to the appropriate congressional committees, and it will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-5581 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Mark A. Little, James R. Reifsnyder, Patricia J. Nichol, Tommy Baril, Steve Boyles, and Cheryl A. Weissman were major contributors to this report.
Appendix I: Scope and Methodology To determine the conditions of military training ranges and their consequent impact, we collected and analyzed training-range-related information from officials within the headquarters and selected major commands of the military services. We also visited eight major active component training ranges situated at various locations in the continental United States—Fort Hood, Texas; Fort Stewart, Georgia; Southern California Offshore Range, California; Fallon Range Training Complex, Nevada; Camp Lejeune, North Carolina; Camp Pendleton, California; Nellis Test and Training Range, Nevada; and the Barry M. Goldwater Range, Arizona—to observe training range conditions and discuss their consequent impacts. These ranges were selected by identifying the major training ranges for each service and seeking input from service range officials as to which ranges could best address our audit objectives. During our visits we met with installation officials, range managers, and units that use the ranges. We also reviewed relevant DOD studies and audit reports identifying the conditions of military training ranges. To assess the progress the department has made in improving training range conditions, we discussed and reviewed information relating to training range initiatives from the Office of the Secretary of Defense, the Joint Forces Command, the Special Operations Command, and the headquarters and selected major commands of the military services. We also examined key documents related to the funding of training ranges, including associated funding requirements and funding allocations. In addition, we reviewed prior GAO reports and internal service audits addressing funding issues for military facilities, including training ranges. We also obtained and reviewed range-related information from range officials at each of the eight installations that we visited.
Further, we toured the training areas or support facilities at each of the ranges we visited to observe initiatives implemented by local range offices to improve the condition or capability of their ranges. Although we found limitations in the availability of certain data, we believe the available data gathered are sufficiently reliable for the purposes of this report, based on our discussions with OSD and military service officials and our review of prior GAO reports and internal service audits.

Organizations and Units Visited or Contacted for This Review

Office of the Secretary of Defense
Office of the Director of Readiness and Training, Office of the Deputy Under Secretary of Defense for Readiness
Office of Installations Requirements and Management, Office of the Deputy Under Secretary of Defense for Installations and Environment
Chief of Staff, Joint Forces Command
Joint National Training Capability Joint Management Office, Joint Forces
Joint Training Policy and Validation Division,
Special Operations Training Directorate,
Training Simulations Division, Office of the Deputy
Office of Assistant Chief of Staff for Installation Management
Installation Management Agency—Headquarters
Installation Management Agency—Southeast Region
Installation Management Agency—Southwest Region
Navy
Fleet Training Branch, Fleet Readiness Division, Fleet Readiness and Logistics, Office of the Deputy Chief of Naval Operations
Operating Forces Support Division, Chief of Naval Installations
Live Training Ranges Office, Fleet Forces Command
Range and Training Area Management Division, Training and Education Command
Office of the Director of Ranges and Airspace, Air and Space Operations
Air Combat Command
Air Education and Training Command
Garrison Commander, Fort Hood
Office of Assistant Chief of Staff, G3, III Corps
Headquarters Company, 4th Infantry Division
8th Infantry Regiment, 2nd Battalion, 4th Infantry Division
16th Field Artillery, 3rd Battalion, 4th Infantry Division
Headquarters Company, 1st Cavalry Division
3rd Air Support Operations Group (U.S. Air Force)
Directorate of Plans, Training and Security
Directorate of Public Works
Range Division, Directorate of Plans, Training and Security
Garrison Resource Management Office
Deputy Garrison Commander, Fort Stewart
64th Armored Regiment, 1st Battalion, 1st Brigade, 3rd Infantry Division
Headquarters Company, 3rd Infantry Division
Training Division, Directorate of Plans, Training, Mobilization and Security
Directorate of Public Works
Garrison Resource Management Office
Southern California Offshore Range, California
Commodore, Submarine Squadron 11, Commander Submarine Force,
Training and Readiness Department, 3rd Fleet
Expeditionary Warfare Training Group, Pacific
Naval Special Warfare Command
Fleet Area Control and Surveillance Facility Detachment Southern
Commander Helicopter Anti-Submarine Light Wing, Pacific
Public Works Office, Naval Base Coronado
Commanding Officer, Naval Air Station Fallon
Program Manager of Ranges, Navy Region Southwest, Chief of Naval
N5 Strike Department, Naval Strike and Air Warfare Center
Training Range Branch, N5 Strike Department, Naval Strike and Air
Comptroller, Naval Strike and Air Warfare Center
Camp Lejeune, North Carolina
Commanding General, Marine Corps Base Camp Lejeune
Office of Assistant Chief of Staff for Training and Operations
Range Development Division
Training Resources Management Division
Modeling and Simulation Division
School of Infantry
Special Operations Training Group
2nd Marine Division, 2nd Marine Expeditionary Force
Weapons and Field Training Battalion
Office of the Comptroller, Marine Corps Base Camp Lejeune
Office of the Deputy Chief of Staff, Installations and Environment
Office of the Assistant Chief of Staff for Training and Operations
Range Operations Division
Training Resources Management Division
School of Infantry
1st Marine Division, 1st Marine Expeditionary Force
Marine Aircraft Group 39, 3rd Marine Aircraft Wing, 1st Marine
Nellis Test and Training Range, Nevada
Commanding Officer, Air Warfare Center
98th Range Wing, Air Warfare Center
414th Combat Training Squadron, 57th Operations Group, 57th Wing,
57th Operations Support Squadron, 57th Operations Group, 57th Wing, Air Warfare Center
56th Fighter Wing
944th Fighter Wing
56th Fighter Wing Range Management Office
56th Operations Group, 56th Fighter Wing
355th Operations Group, 355th Wing
162nd Fighter Wing Operations Group, Arizona Air National Guard
563rd Rescue Group, Air Force Special Operations Command
Western Army National Guard Aviation Training Site

We conducted our work from August 2003 through March 2005 in accordance with generally accepted government auditing standards.

Appendix II: Key Management Elements of a Comprehensive Approach for Managing Training Ranges

The flow chart below depicts what we consider to be the defense organizational roles and responsibilities needed to implement a comprehensive approach for managing training ranges.

Promulgate DOD policy for comprehensive management of military training ranges.
Develop a comprehensive DOD strategic plan.
Ensure accountability of policy, strategic plan, and other identified actions.
Establish a working group that includes all affected officials (including special operations, installation management, operational forces, and combatant commanders) to address all factors that impact military training ranges.
Promulgate servicewide policy for comprehensive management of military training ranges.
Develop a servicewide implementation plan.
Ensure accountability of policy, implementation plan, and other identified actions.
Identify requirements for military training ranges (including cross-service and joint requirements).
Link training ranges to training requirements (including service-specific, special operations, and combatant commander requirements).
Develop a training range investment strategy.
Accurately and easily account for training range funding requirements and funding levels.
Establish a working group that includes all affected officials (including special operations, installation management, and operational forces) to address all factors that impact military training ranges.
Develop a Web-based mechanism to share information and remotely schedule training events within and across the services.
Develop and keep current range management plans with an investment strategy.
Accurately identify funding requirements and funding levels.
Identify requirements of all users, regardless of service.
Accurately capture training constraints, modifications, and cancellations.
Regularly assess the conditions and capabilities of the range and their impact on training.
Share lessons learned and best practices with other training range officials.

These documents should identify, at a minimum, specific actions, quantifiable goals and milestones to measure progress, projected funding requirements and sources, and clear assignment of responsibility.

Related GAO Products

Military Training: DOD Report on Training Ranges Does Not Fully Address Congressional Reporting Requirements. GAO-04-608. Washington, D.C.: June 4, 2004.

DOD Operational Ranges: More Reliable Cleanup Cost Estimates and a Proactive Approach to Identifying Contamination Are Needed. GAO-04-601. Washington, D.C.: May 28, 2004.

Military Munitions: DOD Needs to Develop a Comprehensive Approach for Cleaning Up Contaminated Sites. GAO-04-147. Washington, D.C.: December 19, 2003.

Military Training: Implementation Strategy Needed to Increase Interagency Management for Endangered Species Affecting Training Ranges. GAO-03-976. Washington, D.C.: September 29, 2003.

Defense Infrastructure: Changes in Funding Priorities and Management Processes Needed to Improve Condition and Reduce Costs of Guard and Reserve Facilities. GAO-03-516. Washington, D.C.: May 15, 2003.
Military Training: DOD Approach to Managing Encroachment on Training Ranges Still Evolving. GAO-03-621T. Washington, D.C.: April 2, 2003.

Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.

Defense Infrastructure: Most Recruit Training Barracks Have Significant Deficiencies. GAO-02-786. Washington, D.C.: June 13, 2002.

Military Training: DOD Lacks a Comprehensive Plan to Manage Encroachment on Training Ranges. GAO-02-614. Washington, D.C.: June 11, 2002.

Military Training: DOD Needs a Comprehensive Plan to Manage Encroachment on Training Ranges. GAO-02-727T. Washington, D.C.: May 16, 2002.

Military Training: Limitations Exist Overseas but Are Not Reflected in Readiness Reporting. GAO-02-525. Washington, D.C.: April 30, 2002.

Defense Budget: Analysis of Real Property Maintenance and Base Operations Fund Movements. GAO/NSIAD-00-87. Washington, D.C.: February 29, 2000.

Military Capabilities: Focused Attention Needed to Prepare U.S. Forces for Combat in Urban Areas. GAO/NSIAD-00-63NI. Washington, D.C.: February 25, 2000.

Military training ranges are important national assets and play a critical role in preparing military forces for their wartime mission. The Department of Defense (DOD) has reported for years that it faces increasing difficulties in carrying out realistic training at its ranges due to various constraints. While encroachment issues have had high visibility within DOD and the Congress, much less attention has been given to the overall conditions of training ranges, which can also have an adverse impact on training activities. This report, prepared under the Comptroller General's authority, discusses (1) the condition of military training ranges and their impact on training activities, and (2) what factors are affecting DOD's progress in improving training range conditions.
GAO's visits to eight training ranges, along with DOD's own assessments, show that ranges are deteriorating and lack modernization. This adversely affects training activities and jeopardizes the safety of military personnel. To ensure readiness, servicemembers must have access to capable ranges--a key DOD transformation goal--that enable them to develop and maintain skills for wartime missions. However, GAO observed various degraded conditions at each training range visited, such as malfunctioning communication systems, impassable tank trails, overgrown areas, and outdated training areas and targets. Whenever possible, the services work around these conditions by modifying the timing, tempo, or location of training, but officials have expressed concern that workarounds are becoming increasingly difficult and costly and that they compromise the realism essential to effective training. Without adequate ranges, DOD compromises the opportunity to achieve its transformation goal and assumes the risk that its forces will be less prepared for missions and subjected to hazards. DOD's progress in improving training range conditions has been limited and is partially attributable to the lack of a comprehensive approach to ensure that ranges provide the proper setting for effectively preparing its forces for warfare. First, while the services have individually taken a varying number of key management improvement actions, such as developing range sustainment policies, these actions lack consistency across DOD or focus primarily on encroachment without including commensurate efforts on other issues, such as maintenance and modernization. Second, even though the services cannot precisely identify the funding required and used for their ranges, identified range requirements have historically been inadequately funded, as evidenced by conditions GAO saw, and inadequately addressed.
Service officials identified a variety of factors that have exacerbated funding limitations, such as ranges having a lower priority in funding decisions. Third, although DOD policy, reports, and plans have either recommended or required specific actions, DOD has not fully implemented such actions.
Background SCHIP is a federal-state program that, in general, allows states to provide health insurance to children in families with incomes of up to 200 percent of FPL, or 50 percentage points above states’ Medicaid eligibility limits that were in place as of March 31, 1997. States submit plans for their use of SCHIP funds to CMS. The agency reviews and approves these plans and monitors their implementation. Within broad federal guidelines, states have considerable flexibility in designing their SCHIP programs in terms of establishing eligibility guidelines, the scope of benefits, and administrative procedures. A number of states have made use of this flexibility by expanding SCHIP eligibility to children in families with income levels as high as 350 percent of FPL. States’ choices in structuring their SCHIP programs have important programmatic and financial implications. States have three basic options for structuring their SCHIP programs: (1) a Medicaid expansion program, (2) a separate child health program, or (3) a combination program that includes both a Medicaid expansion and a separate child health program. A Medicaid expansion program allows a state to expand eligibility levels within the state’s existing Medicaid program and requires the state to follow Medicaid rules, including those on eligibility determination, benefits, and cost sharing. States that operate Medicaid expansions continue to receive federal funds at the regular Medicaid matching rate after they have exhausted their SCHIP funds. In contrast, a separate child health program may depart from Medicaid rules, including introducing limited cost sharing and creating waiting lists for enrollment. A separate child health program only receives a defined SCHIP allotment from the federal government, and the state can limit its own annual contribution and/or enrollment once funds for SCHIP are exhausted. Even programs with the same structure do not always operate in the same way.
For example, Medicaid expansion programs can operate under a section 1115 demonstration waiver, which allows states to implement policies that do not follow traditional Medicaid rules. Similarly, separate child health programs can operate as “Medicaid look-alike” programs, generally following Medicaid rules but maintaining limited funding. Federal law requires states to implement policies to minimize the potential for crowd-out. The SCHIP statute defines a “targeted low-income child” as one from a family that does not qualify for Medicaid and is not covered under a group health plan or under other health insurance. To help minimize crowd-out, states are required to implement policies that would discourage a family from dropping other private health insurance, and states must coordinate eligibility screening with other health insurance programs, such as Medicaid. CMS has provided general guidance to the states regarding activities to minimize and monitor crowd-out. In 1998 and 2001, CMS outlined the types of activities states would need to implement to minimize and monitor crowd-out, and gave states flexibility in choosing specific activities. In August 2007, CMS issued additional guidance for states that wished to expand eligibility to children in families with effective income levels at or above 250 percent of FPL. Most research efforts describe crowd-out as the movement of individuals from private to public health insurance. However, among these research efforts there is no universally accepted method to measure the extent of crowd-out, and as a result, estimates vary widely. One reason for this variation is differences in study type. Broadly, researchers have used population-based, enrollee-based, and applicant-based studies to measure crowd-out. Population-based studies measure crowd-out by estimating any decline in private health insurance within a population. 
Enrollee-based studies estimate the number of SCHIP enrollees who had insurance within a specified time frame, accounting for specific losses of private health insurance because of job loss or other circumstances. Applicant-based studies use state application data to identify the number of applicants declined for having current or prior health insurance. (See table 1.) Instead of providing a measure of crowd-out, applicant-based studies estimate the amount of crowd-out averted because of a state’s eligibility determination process. Assessments of the potential for crowd-out must take into account an understanding of the extent to which private health insurance is available and affordable to low-income families that qualify for SCHIP. With regard to the availability of private health insurance, the extent to which private firms offer health insurance to their employees varies by state. According to the 2006 Medical Expenditure Panel Survey (MEPS), 55.8 percent of private sector firms offered insurance—either individual or family health insurance—to their employees. Across the states, the percentage of private sector firms offering individual or family health insurance to their employees ranged from 89.6 percent of firms in Hawaii to 40.1 percent of firms in Montana. Smaller firms were less likely to offer individual or family health insurance to their employees than larger firms. For example, 35.1 percent of firms with fewer than 10 employees offered individual or family health insurance, while 98.4 percent of firms with 1,000 or more employees offered individual or family health insurance to their employees. There is some evidence suggesting that the availability of private health insurance for families is declining. 
For example, a recent study found that from 2003 through 2007, the percentage of low-income employees offered individual insurance through their employers did not change, but the percentage of low-income employees offered family health insurance decreased from 71.1 percent to 63.6 percent. With regard to affordability, MEPS looks at the amount that individuals paid annually in order to obtain private health insurance. The national average annual premium—which includes both employee and employer contributions—for a single employee was $4,118 and for a family was $11,381. Premiums also showed some variation by state, ranging from $3,549 in Hawaii to $4,663 in Maine for an individual, while premiums for family health insurance ranged from $9,426 in Hawaii to $12,686 in New Hampshire. When viewed by household income level, households at higher income levels were more likely to be offered health insurance—either individual or family health insurance—through their employers. For example, employed households with incomes less than 300 percent of FPL were less likely to be offered health insurance than households earning more than 300 percent of FPL. (See fig. 1.) In addition to employers offering health insurance, the costs of such insurance play an important role in the extent to which individuals accept such insurance. Based on an analysis of MEPS, lower-income households paid less for insurance premiums—either individual or family health insurance—than did households with higher incomes (see fig. 2). However, the estimated premiums paid by lower-income households constituted a larger percentage of the total income for these households (see fig. 3). 
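The affordability point above can be made concrete with simple arithmetic. The sketch below uses the MEPS average family premium cited in the text; the household income levels are hypothetical illustrations rather than MEPS figures, and the $21,200 poverty-guideline base for a family of four is an assumption used only to anchor the example:

```python
# Illustrative arithmetic for premium burden as a share of household income.
# The $11,381 family premium is the MEPS national average cited above; the
# income figures are hypothetical (roughly 200% and 400% of an assumed
# $21,200 poverty guideline for a family of four), not MEPS data.

FAMILY_PREMIUM = 11_381  # average annual family premium (employee + employer)

def premium_share(income: float, premium: float = FAMILY_PREMIUM) -> float:
    """Return the premium as a percentage of household income."""
    return round(100 * premium / income, 1)

lower_income = 2 * 21_200   # ~200% of the assumed poverty guideline
higher_income = 4 * 21_200  # ~400% of the assumed poverty guideline

print(premium_share(lower_income))   # larger share of income
print(premium_share(higher_income))  # smaller share of income
```

Holding the premium fixed is a simplification (lower-income households actually paid somewhat less, per MEPS), but it shows why the same dollar premium consumes a far larger fraction of a lower income.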
CMS Provided Guidance to States on Crowd-Out; Information It Collected Was of Limited Use in Assessing the Extent to Which Crowd-Out Should Be a Concern CMS provided guidance to states regarding activities to minimize crowd-out in SCHIP, and the information it collected was of limited use in assessing the extent to which crowd-out should be a concern. In issuing guidance to states, CMS instituted specific requirements for program designs the agency identified as being at greater risk of crowd-out, including programs with higher income eligibility thresholds. CMS officials told us that they reviewed states’ SCHIP annual reports, analyzed national trends in public and private health insurance, and commissioned studies on SCHIP, and on this basis believed crowd-out was occurring. However, each of the approaches CMS used was limited in providing information on the occurrence of crowd-out and the extent to which it should be a concern. In particular, CMS did not collect certain indicators in the SCHIP annual report—such as whether applicants’ employers made private health insurance for families available and at what cost to the applicant—that could help show the potential for crowd-out. Moreover, information CMS did collect was not provided consistently by states. CMS Established Specific Requirements for Certain SCHIP Program Designs CMS established specific requirements for SCHIP program designs it identified as being at a greater risk for crowd-out. In issuing the SCHIP final rule, CMS outlined broad regulatory requirements regarding substitution of private health insurance with SCHIP and stated that it planned to incorporate additional flexibility into its review of state plans in this area. CMS could not apply eligibility-related crowd-out prevention requirements to Medicaid expansion programs except those operating under an 1115 demonstration waiver.
CMS did outline specific requirements for separate child health programs with higher income eligibility levels, explaining that there is a greater likelihood of crowd-out as incomes increase. For example, the agency required separate child health programs at all eligibility thresholds to monitor crowd-out, and it required states with SCHIP eligibility thresholds from 201 to 250 percent of FPL to implement policies to minimize crowd-out should an “unacceptable level” of crowd-out be detected. CMS officials told us that they did not define an “unacceptable level” for all states, but rather negotiated with states individually. CMS viewed waiting periods as a policy to minimize crowd-out; therefore, states with waiting periods were not required to monitor for an “unacceptable level” of crowd-out, and only three states currently do so. CMS also required that states institute a number of requirements for premium assistance programs, including a waiting period. (Table 2 provides examples of program designs for which CMS provided specific guidance.) In a letter issued on August 17, 2007, CMS provided specific guidance on crowd-out for states operating separate child health programs above 250 percent of FPL, but implementation of the requirements in this guidance has been suspended. In this letter, CMS outlined specific activities states with separate child health programs should use to minimize crowd-out if they wish to cover children with effective family incomes above 250 percent of FPL. Among other things, these activities included a 12-month waiting period, a cost-sharing requirement, and efforts to prevent employers from changing health insurance offerings that would encourage a shift to public programs such as SCHIP. The letter caused states to raise a number of concerns, including those about the potential effect on current enrollees, and we have reported concerns regarding the issuance of this guidance. 
In a memorandum dated February 4, 2009, for the Secretary of Health and Human Services, the President directed that the August 17 letter be withdrawn immediately. Approaches CMS Used Did Not Address All Information Important to Assessing the Extent to Which Crowd-Out Should Be a Concern CMS reported using a variety of approaches to assess the occurrence of crowd-out in SCHIP, including reviewing states’ SCHIP annual reports, national estimates, and CMS-commissioned studies. From this information, CMS officials said they believed that crowd-out was occurring and that the potential for crowd-out was greater at higher income levels. However, the approaches CMS used provided limited information on the occurrence of crowd-out and thus the extent to which it should be a concern. The questions on crowd-out that CMS asked states to answer for their annual reports did not collect certain indicators of the potential for crowd-out, such as the extent to which SCHIP applicants were offered private health insurance for their families through their employers. In the annual reports that states must submit, CMS requires states to report the “incidence of substitution” in their state by calculating the percentage of applicants who drop private health insurance to enroll in SCHIP; however, CMS did not specify how states should calculate this incidence. While this question measures the number of SCHIP applicants who were enrolled in private health insurance and decided to take up SCHIP, it does not include SCHIP applicants who had private health insurance available through their employers but never enrolled. It also does not provide information on changes in insurance status, such as the extent to which SCHIP recipients gain access to private health insurance but remain enrolled in SCHIP. In addition, CMS does not specifically require states to provide information in their annual reports on the affordability of private health insurance available to SCHIP applicants.
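Because CMS did not specify a calculation method, states could compute the "incidence of substitution" in different ways. The sketch below shows one plausible reading; the choice of numerator and denominator here is an assumption, which is precisely the ambiguity the report notes:

```python
# One plausible reading of the "incidence of substitution" calculation.
# CMS did not specify a method, so both the numerator (applicants who
# reported dropping private coverage) and the denominator (all applicants)
# are assumptions; a state could reasonably choose others, e.g., counting
# only enrollees rather than all applicants.

def incidence_of_substitution(dropped_private: int, total_applicants: int) -> float:
    """Percentage of applicants who dropped private coverage to enroll."""
    if total_applicants == 0:
        return 0.0
    return round(100 * dropped_private / total_applicants, 2)

# e.g., 120 of 15,000 applicants reported dropping private coverage
print(incidence_of_substitution(120, 15_000))  # 0.8
```

Two states using different denominators would report incomparable percentages, which helps explain the inconsistent responses in states' annual reports.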
CMS has recently updated the requirements for the information that states must provide on crowd-out in their 2008 annual reports. For example, states will be expected to include more information on SCHIP applicants’ insurance status. Whereas states have previously been required to report only the percentage of applicants who have other insurance, CMS plans to require states to provide the percentage of applicants who have Medicaid and the percentage of applicants who have other insurance. In addition, CMS plans to require that states with waiting periods report on the percentage of applicants who meet exemptions to the waiting period. While these questions are useful from an eligibility standpoint, they still do not account for applicants who were offered private health insurance for their families through their employers but did not take it up. These questions also do not address the extent to which available private health insurance is affordable to the SCHIP population. CMS’s information on the occurrence of crowd-out is also limited because states did not consistently provide the information that CMS requested in their annual reports (see table 3). We reviewed SCHIP annual reports from 2007 for state responses on the percentage of applicants who dropped private health insurance to enroll in SCHIP and found that less than half of the 51 states provided a percentage in response to CMS’s question. Of those states providing a percentage, 7 states answered the question directly, with 1 reporting no crowd-out and the remaining 6 reporting an incidence of crowd-out of 1 percent or less. Two of the states reporting an incidence of less than 1 percent also reported this number for the percentage of applicants who had insurance at the time of application. Forty-one states provided a response related to what they knew about the insurance status of applicants, provided descriptive responses, or reported that data were not available. 
Three states did not answer this question in their annual reports. CMS officials told us that in cases where states had missing or incomplete responses to questions in the annual report, the agency allowed states to address these questions in the next year’s report. CMS officials also noted that states used different data sources when analyzing crowd-out. For example, they said that some states used surveys to measure crowd-out, while others relied on data collected through application questions. In addition to reviewing state annual reports, CMS officials told us that they used national data to assess the occurrence of crowd-out. Specifically, they said that they used the Current Population Survey and CMS enrollment data to track changes in public and private health insurance over time. While this type of analysis can account for broad trends in changes in insurance usage, it cannot account for the reasons for these changes. For example, it cannot isolate whether an increase in public insurance resulted from crowd-out or an unrelated decline in the availability of private health insurance. CMS officials told us that they commissioned two studies to look at the issue of crowd-out in SCHIP. However, the results of these studies did not provide a clear sense of whether crowd-out should be a concern. A 2003 study provided information on states’ early experiences with SCHIP based on state annual reports from 2000. The study reported that states’ estimates of crowd-out in SCHIP ranged from 0 to 20 percent, but cautioned that state data must be interpreted carefully because of a number of factors, including the limited experience upon which states based their estimates. A 2007 study assessed the SCHIP program, and included a background paper on crowd-out. 
The background paper synthesized evidence on the occurrence of crowd-out in SCHIP and concluded that while the evidence suggested that crowd-out occurred, the magnitude of the occurrence ranged widely—from 0.7 to 56 percent—depending on how crowd-out was defined and measured. The study commented on the strengths and weaknesses of various ways to define and measure crowd-out, but it did not indicate a preference for any specific methodology. States Used Similar Types of Policies to Minimize Crowd-Out, but Not All Collected Adequate Information to Assess Whether Crowd-Out Should Be a Concern In general, states implemented similar types of policies in their activities to minimize crowd-out, but not all states collected information adequate to assess whether crowd-out should be a concern. States’ policies largely sought to deter individuals from dropping private health insurance. Officials we interviewed in the nine sample states generally believed that their policies were effective in minimizing crowd-out. However, little evidence existed to confirm this belief. Less than half of states investigated whether applicants had access to private health insurance, which is key to understanding the extent to which crowd-out should be a concern. States Used Similar Types of Policies to Minimize Crowd-Out, but Little Is Known about Their Effect Forty-seven states used one or more policies to minimize crowd-out. These policies deter individuals from dropping private health insurance in order to take up SCHIP. Our analysis found that waiting periods—required periods of uninsurance before applicants can enroll in SCHIP—were the most common policy states had in place to minimize crowd-out; premiums and other types of cost sharing were also frequently used (see fig. 4). As expected, these policies were more common in separate child health and combination programs than in Medicaid expansion programs operating with 1115 demonstration waivers.
A waiting period is meant to discourage individuals from dropping private health insurance. Premiums and other types of cost sharing are used in SCHIP to narrow the cost difference between SCHIP and private health insurance, thereby reducing the likelihood that lower out-of-pocket costs for SCHIP will attract individuals who already have other health insurance. Ten states reported having premium assistance programs, where states used SCHIP funds to pay premiums for private health insurance sponsored by employers. In contrast to CMS’s view that premium assistance programs could raise concerns about crowd-out, 5 states reported that such programs were a policy to prevent crowd-out. One state reported instituting a policy directly aimed at affecting employer behavior. This state made it an unfair labor practice for insurance companies or employers to encourage families to enroll in SCHIP, for example, through changing benefit offerings, when the families already have private health insurance. Thirty-nine states had waiting periods, and they varied in length. The most common waiting period length was 3 or 6 months (see fig. 5). At least 2 of the 39 states reduced their waiting periods after concluding that crowd-out was not a significant problem and that the original waiting period length was unnecessarily long. All 39 states with a waiting period included exemptions designed to account for instances where a child involuntarily lost private health insurance. These exemptions were mostly related to the availability rather than the affordability of insurance (see table 4). Exemptions varied among states, but the most common exemptions were for a change in job status that led to a loss of private health insurance, a change in family structure, and exhaustion of health insurance provided through the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA). 
CMS does not require states to consider issues of affordability in developing exemptions, and just over 10 states provided an exemption specifically for economic hardship, that is, if private health insurance makes up too much of a family’s income. SCHIP officials in the nine sample states generally believed that their policies to minimize crowd-out were effective but did not cite any specific data to support their assessment. Officials from these nine states generally reported that waiting periods, outreach efforts to inform applicants about program requirements, and premiums and other cost-sharing mechanisms were successful policies to minimize crowd-out. Officials in four of the states also noted that these policies may have unintended consequences, such as limiting families’ access to insurance. Among the nine states in our sample, seven states reported no immediate plans to change their policies to minimize crowd-out, while two states were considering reducing the requirements in place regarding waiting periods. Officials from one of the two states said that they are considering reducing the state’s waiting period. Officials from the other state wanted to add an economic hardship exemption to the state’s waiting period. Our analysis of previous research found few studies that estimated the effect of different policies on minimizing crowd-out, but those that did primarily focused on waiting periods and cost sharing. These studies concluded that waiting periods and cost sharing can have negative effects on individuals’ participation in SCHIP. These studies also suggested that policies to minimize crowd-out may deter SCHIP enrollment by eligible uninsured children at a faster rate than they deter use by individuals who have private health insurance. However, little is known about how variation in policies, such as the length of a state’s waiting period, affects efforts to minimize crowd-out. 
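As a rough illustration of how a waiting-period screen with involuntary-loss exemptions might operate in practice, here is a hypothetical sketch. The 6-month period, the exemption categories, and all field names are illustrative only and do not represent any particular state's actual rules:

```python
# Hypothetical sketch of a state waiting-period check with involuntary-loss
# exemptions, as described in the report. The exemption list, field names,
# and the 6-month period are illustrative, not any particular state's rules.

from datetime import date, timedelta
from typing import Optional

WAITING_PERIOD_DAYS = 6 * 30  # e.g., a 6-month waiting period

# Common exemptions noted in the report: job loss leading to loss of
# coverage, a change in family structure, and exhaustion of COBRA coverage.
EXEMPT_REASONS = {"job_loss", "family_structure_change", "cobra_exhausted"}

def waiting_period_satisfied(coverage_end: Optional[date],
                             loss_reason: Optional[str],
                             application_date: date) -> bool:
    """Return True if the applicant clears the waiting-period screen."""
    if coverage_end is None:           # never had private coverage
        return True
    if loss_reason in EXEMPT_REASONS:  # involuntary loss is exempt
        return True
    # Otherwise, coverage must have ended at least the waiting period ago.
    return application_date - coverage_end >= timedelta(days=WAITING_PERIOD_DAYS)

print(waiting_period_satisfied(date(2009, 1, 1), "job_loss", date(2009, 2, 1)))  # True
print(waiting_period_satisfied(date(2009, 1, 1), None, date(2009, 2, 1)))        # False
```

Note that an economic-hardship exemption, which the report says just over 10 states provided, would require an additional affordability test not modeled here.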
Not All States Collected Information Adequate to Assess Whether Crowd-Out Should Be a Concern All 51 states monitored crowd-out by asking applicants questions about whether they had private health insurance, but fewer states asked whether applicants were offered health insurance through their employers—a key piece of information in understanding whether crowd-out should be a concern. All 51 states asked applicants if they were currently insured, and 44 of the 51 states asked applicants whether they had been insured in the past. Twenty-four states asked applicants about their access to private insurance. Of these 24 states, 15 states asked applicants if they had access to any private health insurance and 9 states asked applicants if they had access to specific types of private health insurance (such as through state or school employment). The remaining 27 states did not ask questions to determine whether applicants had access to private health insurance through their employers. The majority of states made efforts to verify applicants’ responses to their questions about insurance. Twenty-five states used a database to verify applicants’ enrollment in private health insurance, 14 states verified applicants’ responses with employers, and 7 states used both. States conducted verification of insurance most frequently at the time of application, although about half of the states verified insurance status at eligibility renewal or at other regular intervals. (See table 5.) Twenty-nine states used databases at some point to verify individuals’ enrollment in private health insurance; however, there were differences in the databases used. The database sources states used included (1) databases from private insurance companies, (2) databases run by third-party administrators, and (3) databases that were maintained by the states. The databases also varied in the scope of insurance information they provided.
For example, some databases included information on the majority of the population with private health insurance in a state, while other databases were more limited, such as those for state employees. While all of the databases verify insurance status, they do not determine whether individuals have private health insurance available through their employers but did not make use of it for their families. Another way states often verified insurance status was by checking with individuals’ employers—which could allow a state to determine whether the individuals had private insurance available to them. We identified 18 states that used various approaches to verify insurance status with employers at some point. Generally, the states asked whether (1) the individual was insured, (2) the individual had access to employer insurance, and (3) the individual paid out-of-pocket expenses for private health insurance. Other differences included the number of individuals states verified insurance status for and how employer information was collected. For example, at least 2 states only verified insurance status with employers if they found evidence that an individual might have private health insurance available through an employer, such as a pay stub showing that wages were deducted for health insurance premiums. Another state reported that it verified the availability of private health insurance through (1) an annual survey sent to all employers in the state and (2) an individual questionnaire sent to the employers of specific applicants and enrollees. The nine states whose officials we interviewed varied in the extent to which they measured crowd-out, with five states providing some estimate of crowd-out and four states reporting that they could not estimate it at all. The five states that did measure crowd-out developed their estimates differently. 
Officials from one of the five states based their estimate on the number of applicants who did not meet the eligibility criteria for SCHIP because they were enrolled in other insurance. However, this estimate measures how successful the state was in avoiding the inappropriate use of SCHIP rather than crowd-out. Officials from the remaining four states based their estimates on the number of individuals who dropped insurance for reasons that—by their states’ definitions—constituted crowd-out. For example, one state included in its estimate of crowd-out circumstances in which families had private health insurance available but dropped it because it was unaffordable, while another state explicitly excluded this reason from its estimate. None of the officials in our sample of nine states viewed crowd-out as a concern, with most basing this assessment on a variety of indicators, including the availability and affordability of private health insurance for the SCHIP population in their state. Six states told us that private health insurance was not readily available and three other states believed that where insurance was available it was not affordable. Another state referred to the small number of SCHIP enrollees at its highest eligibility level—from 250 to 300 percent of FPL—as indicating a low potential for crowd-out. Conclusions Understanding the potential for crowd-out is a complex task that begins with understanding the extent to which private health insurance is available to children in families whose incomes make them eligible for SCHIP. SCHIP is designed to offer health insurance to eligible children who would otherwise be uninsured, not to replace private health insurance. For crowd-out to occur, private health insurance must be available to low-income families who qualify for SCHIP, and these families must find such insurance affordable. 
CMS and the states differ in their views on whether crowd-out is a concern for SCHIP, yet the information on which both base their views is limited in providing a basis for assessing the occurrence of crowd-out. CMS’s guidance to and reporting requirements for states provide limited information on the extent to which crowd-out should be a concern. While precise measurements of crowd-out are difficult, certain indicators could help improve assessments of the extent to which crowd-out should—or should not—be a concern. In particular, asking SCHIP applicants who work whether they are offered private health insurance for their families through their employers could provide an initial assessment of the extent to which private health insurance is available to these individuals and thus better assess whether concerns about crowd-out are warranted. For low-income working families, the affordability of available private health insurance is also important in determining whether crowd-out should be considered a concern. However, CMS does not currently have this information, as states are not required to collect and report it. Information about the availability and affordability of private health insurance could help ensure the best use of SCHIP funds and help determine the future funding needs of this important program. Recommendation for Executive Action To improve information on whether crowd-out should be a concern in SCHIP, we recommend that the Acting Administrator of CMS refine CMS policies and guidance to better collect consistent information on the extent to which applicants have access to available and affordable private health insurance for their children eligible for SCHIP. Such actions should include ensuring that states collect and report consistent information on the extent to which SCHIP applicants have private insurance available to them and take appropriate steps to determine whether available private health insurance is affordable for SCHIP applicants.
Agency Comments and Our Evaluation CMS reviewed a draft of this report and provided written comments, which are reprinted in appendix I. In addition to comments on our recommendation, CMS provided us with technical comments that we incorporated where appropriate. Overall, CMS concurred with the report’s findings, conclusions, and recommendation. CMS commented that the issue of crowd-out is complicated and that assessing or measuring it is difficult because of variations in definition and methodologies for assessing its occurrence. CMS agreed that the availability and affordability of health insurance are relevant issues in reviewing crowd-out. They also agreed with our recommendation, stating that information on the availability and affordability of health insurance from states would be extremely helpful in evaluating the potential for crowd-out. CMS also listed a number of concerns in response to our recommendation about the collection and submission of additional information related to the availability and affordability of private health insurance. CMS stated that the reliability of any information states collect from applicants about the availability or affordability of private coverage would be suspect if it were self-reported. However, our work found that much of the information currently collected by states and submitted to CMS on applicants’ insurance status, which is used for eligibility determinations, is also self-reported by applicants. Moreover, our work shows that the majority of states take steps to verify this self-reported information. Thus, the use of self-reported data is not unusual and collecting such data could help CMS make an initial assessment of the extent to which private health insurance is available and affordable. With these data, CMS could better assess whether concerns about crowd-out are warranted.
CMS also noted that there is currently no national definition of affordability and that it does not believe it would be appropriate for the agency to develop one because of the diversity of states’ economies and health insurance markets, as well as the flexibility that is a key tenet of SCHIP. Further, CMS commented that its existing definition of a targeted low-income child does not specify that children only receive SCHIP if private health insurance is not affordable. We agree that CMS should not be responsible for devising a national standard of what is considered affordable within a state. We believe that states are in the best position to make such a determination, and as noted in our report, some states already make such designations in their assessments of an individual’s access to private health insurance. Finally, CMS commented that because of the administrative and reporting burdens that would be associated with additional data reporting, the agency would need to provide states with opportunities to discuss the changes needed to collect and report this information. We agree that such steps may be necessary. We also note that CMS recently updated questions in its SCHIP annual report template to help ensure that states are collecting and reporting necessary information related to crowd-out. Thus, the administrative efforts associated with additional information collection and reporting are well within the scope of CMS’s responsibility. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Administrator of the Centers for Medicare & Medicaid Services, committees, and others. This report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Comments from the Centers for Medicare & Medicaid Services
Appendix II: GAO Contact and Staff Acknowledgments
Acknowledgments
In addition to the contact named above, Carolyn Yocom, Assistant Director; Kristin Bradley; William Crafton; Kevin Milne; Elizabeth T. Morrison; Rachel Moskowitz; and Samantha Poppe made key contributions to this report.
CMS said that among other sources, it used states' SCHIP annual reports to assess the occurrence of crowd-out, and on this basis it believed that crowd-out was occurring. Yet each of the approaches CMS used was limited in providing information about the occurrence of crowd-out and thus the extent to which it should be a concern. CMS did not collect certain indicators of the potential for crowd-out in SCHIP annual reports, such as the extent to which private health insurance was available and affordable to families. States' responses to CMS were inconsistent: GAO's review of annual reports for 2007 found that less than half of the 50 states and the District of Columbia provided a percentage in response to CMS's question on the percentage of applicants who dropped private health insurance to enroll in SCHIP. In general, states implemented similar types of policies in their activities to minimize crowd-out, but not all states collected information adequate to assess whether crowd-out should be a concern. The majority of states used policies such as waiting periods--a required period of uninsurance before an applicant can enroll in SCHIP--to try to reduce incentives for dropping private health insurance. All 39 states with waiting periods offered exemptions for involuntary loss of private health insurance. These exemptions were mostly related to whether insurance was available rather than affordable. Not all states collected information that was adequate to assess whether crowd-out should be a concern. For example, while all 50 states and the District of Columbia asked SCHIP applicants if they were currently insured, 24 states asked applicants if they had access to private health insurance, which is important to understanding the potential for crowd-out. Of the 9 states we interviewed, 5 states measured the occurrence of crowd-out, but they all used different methodologies to develop their estimates; the remaining 4 states did not measure crowd-out.
None of the officials in the 9 states viewed crowd-out as a concern, with most basing this assessment on a variety of factors, including the lack of available and affordable private health insurance for the SCHIP population in their state. Overall, CMS concurred with the report's findings and recommendation, but raised concerns regarding the difficulty of measuring crowd-out, particularly assessing the affordability of private coverage. While GAO agrees that measuring crowd-out is complicated, the actions GAO recommends are an essential first step to better assessing whether concerns about crowd-out are warranted. |
Background Elder financial exploitation, one type of elder abuse, can occur in conjunction with, and might lead to, other types of elder abuse. Financial exploitation of older adults can take many forms and perpetrators can include family members, friends, legal guardians, paid caregivers, and strangers. Table 1 provides some examples. Older adults are particularly attractive targets for financial exploitation by unscrupulous individuals. As a group, older adults tend to possess more wealth than those who are younger because they have had a longer time to acquire it. In addition, the incidence of Alzheimer’s disease and other dementias that undermine judgment increases with age. Moreover, financial capacity—the capacity to manage money and financial assets in ways that meet one’s needs—generally declines with age, and this decline may go unaddressed until it is too late. State and local agencies in the social services, criminal justice, and consumer protection systems in each state are at the forefront of efforts to prevent, detect, and respond to elder financial exploitation. Seven federal agencies whose missions correspond to the state and local social service, criminal justice, and consumer protection systems are positioned to contribute to state and local efforts in this area: AoA, CFPB, Justice, FTC, FinCEN, SEC, and the Postal Inspection Service (see fig. 1). At the state and local level, APS agencies investigate and substantiate reports of suspected elder abuse, including financial exploitation, and, if the client agrees to accept help, can arrange for services to secure their safety and meet their basic needs. APS can also refer cases to law enforcement agencies or district attorneys for criminal investigation and prosecution.
Whether an elder financial exploitation case comes to the attention of criminal justice authorities through referral from APS or some other means, law enforcement agencies and district attorneys can exercise broad discretion when deciding if a case warrants any action on their part. State-level consumer protection agencies—such as banking, securities, and insurance regulators—conduct examinations to ensure that rules to protect consumers are followed and take enforcement actions against institutions that break the rules. State attorneys general may also prosecute cases or respond to consumer protection inquiries. Although combating elder financial abuse is explicitly included in the mission of only one federal agency, CFPB’s Office for the Financial Protection of Older Americans (Office for Older Americans), it is implicit in the mission of others that work to combat elder abuse, protect consumers or investors, or prevent fraud (see fig. 2). Federal legislation has established a foundation for the federal government to assume a leadership role in combating elder abuse, including elder financial exploitation, and a basis for greater coordination across federal agencies in this area.
The Older Americans Act of 1965 (OAA) requires AoA to develop objectives, priorities, policy, and a long-term plan for:
o facilitating the development, implementation, and continuous improvement of a coordinated, multidisciplinary elder justice system in the United States;
o promoting collaborative efforts and diminishing duplicative efforts in the development and carrying out of elder justice programs at the federal, state, and local levels;
o establishing an information clearinghouse to collect, maintain, and disseminate information concerning best practices and resources for training, technical assistance, and other activities to assist states and communities to carry out evidence-based programs to prevent and address elder abuse, neglect, and exploitation;
o working with states, Justice, and other federal agencies to annually collect, maintain, and disseminate data on elder abuse, neglect, and exploitation, to the extent practicable;
o establishing federal guidelines and disseminating best practices for uniform data collection and reporting by states;
o conducting research on elder abuse, neglect, and exploitation; and
o carrying out a study to determine the extent of elder abuse, neglect, and exploitation in all settings.
Pub. L. No. 111-148, tit. VI, subtit. H, 124 Stat. 119, 782-804 (2010) (codified at 42 U.S.C. §§ 1320b-25, 1395i-3a, and 1397j-1397m-5). The EJA was enacted as part of the Patient Protection and Affordable Care Act, which was signed into law on March 23, 2010.
Among other things, the EJA requires HHS to:
o annually collect and disseminate data regarding elder abuse, neglect, and exploitation of elders in coordination with Justice;
o develop and disseminate information on best practices and provide training for carrying out adult protective services;
o conduct research related to the provision of adult protective services;
o provide technical assistance to states and others that provide or fund the provision of adult protective services; and
o establish 10 elder abuse, neglect, and exploitation forensic centers, in consultation with Justice, that would (1) conduct research on forensic markers for elder abuse, neglect, or exploitation, and methodologies for determining when and how health care, emergency, social and protective, and legal service providers should intervene and when these cases should be reported to law enforcement; (2) develop forensic expertise regarding elder abuse, neglect, and exploitation; and (3) use the data they have collected to develop, in coordination with Justice, the capacity of geriatric health care professionals and law enforcement authorities to collect forensic evidence, including evidence needed to determine if elder abuse, neglect, or exploitation has occurred.
§ 2042(a)(1)(C), 124 Stat. 794 (codified at 42 U.S.C. § 1397m-1(a)(1)(C)). § 2042(a)(1)(D), 124 Stat. 794 (codified at 42 U.S.C. § 1397m-1(a)(1)(D)). § 2042(a)(1)(E), 124 Stat. 794 (codified at 42 U.S.C. § 1397m-1(a)(1)(E)).
The EJA also provided for:
o grants to state and local governments for demonstration projects that test methods and training to detect or prevent elder abuse or financial exploitation; and
o an Elder Justice Coordinating Council and an Advisory Board on Elder Abuse, Neglect, and Exploitation to develop priorities for the elder justice field, coordinate federal activities, and provide recommendations to Congress.
Currently, the Elder Justice Coordinating Council consists of the following federal agencies: Consumer Financial Protection Bureau, Corporation for National and Community Service, Department of Health and Human Services, Department of Housing and Urban Development, Department of Justice, Department of Labor, Department of the Treasury, Department of Veterans Affairs, Federal Trade Commission, Postal Inspection Service, and Social Security Administration. Coordination among federal agencies is also a feature of the Dodd-Frank Wall Street Reform and Consumer Protection Act, which established CFPB and requires it to coordinate its consumer protection efforts for older adults with other federal agencies. CFPB’s Office for Older Americans is charged with facilitating the financial literacy of seniors on protection from unfair, deceptive, and abusive practices and on current and future financial choices. States Identified the Need for More Safeguards and Public Awareness Activities to Prevent Elder Financial Exploitation States Cited Need for More Safeguards to Prevent Elder Financial Exploitation According to officials in the four states we visited, financial exploitation of older adults by financial services providers, power of attorney agents, and in-home caregivers is particularly difficult to prevent. Older adults may consult with a variety of financial professionals, such as financial planners, broker-dealers, and insurance agents. However, older adults, similar to other consumers, may lack the information to make sound decisions about choosing a financial services provider and protecting their assets from exploitation. As a result, they may unknowingly put themselves at risk of financial exploitation. Individuals who present themselves as financial planners may adopt a variety of titles and designations.
In some cases, privately conferred designations—such as Certified Financial Planner®—require formal certification procedures, including examinations and continuing professional education credits, while other designations may merely signify that membership dues have been paid. Designations that imply expertise in advising older adults have been a source of particular concern among state securities regulators, according to the North American Securities Administrators Association (NASAA). Older adults may lack information to distinguish among the various senior specific designations. Indeed, in 2011, we reported that there is some confusion about what these titles mean and the level of skill required to obtain them. Exploitation by “Senior Specialist” Calling himself a senior financial advisor, an insurance agent licensed in California met an 89-year-old partially blind, intermittently confused man at a senior center. The agent persuaded him to invest about $250,000 in a flexible premium deferred annuity, warning him not to let anyone talk him out of it. As a result, the man was left with no penalty-free access to his entire life savings for the next 11 years, while the agent earned a commission on this transaction. To earn about $16,000 more in commissions, the agent then convinced the man to move half the amount invested in the annuity into unregistered stock, which cost the man a surrender fee of about $10,000. The stock turned out to be worthless, leaving the man with a fraction of what he had when he met the agent. Attempts by the man’s nephew to retrieve his uncle’s money were unsuccessful. The nephew reported the insurance agent to the California Department of Insurance, which eventually revoked the agent’s license, but local police did not pursue the older adult’s case. While the insurance agent faced no criminal charges in this case, he was later sentenced to 3 years in prison for defrauding another older adult. 
Another concern is that older adults may be fooled by investment professionals who use questionable tactics to market financial products, such as “free lunch seminars” at which financial professionals seek to sell financial products to older adults during a free meal. SEC, the Financial Industry Regulatory Authority (FINRA), and NASAA examined 110 firms that sponsored free lunch seminars from April 2006 to June 2007 offered in seven states and found that 63 seminars used misleading advertising and sales materials, 25 seminars resulted in unsuitable recommendations, and in 14 seminars there were fraudulent practices used, such as selling fictitious investments. Preventing the sale of unsuitable or fraudulent investments to older adults is difficult. An unsuitable investment may be a legitimate product, but one with features that might not provide its intended benefit during the investor’s lifetime. Older adults also can be sold what they believe to be legitimate investments, but are actually completely fraudulent products that hold little or no value. Investment Fraud Using a “Ponzi” Scheme The founder and president of a real estate and financial consulting firm convinced around 200 individuals—about one-third of whom were older adults—to invest in real estate projects that failed to generate any significant revenue. He also convinced them to obtain reverse mortgages on their homes, and to invest the proceeds with his firm. The investments turned out to be a “Ponzi” scheme. Specifically, the perpetrator paid distributions to some investors from others’ deposits; misled investors with false amortization schedules; and used investors’ money to pay for his Porsche, mortgage, and other personal expenses. The scheme was reported to FINRA, investigated by the FBI, and prosecuted by the Eastern District of New York U.S. Attorney’s Office, which sought sentencing enhancements for targeting the elderly. Victims lost over $12 million.
They also reported irreplaceable financial losses, emotional distress, feelings of betrayal and disbelief, and various physical symptoms as a result. SEC has developed some educational materials and SEC and CFPB have conducted research related to investment fraud that targets older adults. For example, SEC has published a guide for older adults that counsels them to check their investment adviser’s disciplinary history, lists warning signs of fraud, and provides information on where to go for help. SEC also provides a link to a FINRA website that provides consumers with the required qualifications, including educational requirements, of the designations used by securities professionals. In August 2012, SEC released a study on financial literacy among investors and stated the agency’s desire to develop a strategy for increasing the financial literacy of certain groups, including older adults. CFPB plans to issue a report in early 2013 to Congress and the SEC that will address providing information to older adults about financial advisors and their credentials. In June 2012, the CFPB issued a public inquiry for information about elder financial exploitation, including a question on what resources older adults have to determine the legitimacy, value, and authenticity of credentials held by investment professionals. CFPB expects to share its results in 2013. Older adults can use a legal document referred to as a financial power of attorney to appoint another person (an agent) to manage their finances should they become incapable of doing so. Having a financial power of attorney enables an older adult (a principal) to choose the person who can legally make these decisions for them, when needed. Powers of attorney are easy for anyone to create, can vary in specificity and format, and do not require legal assistance or a court for execution.
Each of the four states we contacted has a law that helps prevent misuse of powers of attorney by specifying the responsibilities of agents and, in at least one, penalties for misuse. However, powers of attorney can be forged or perhaps otherwise improperly obtained without a principal’s knowledge or consent and an agent can easily use the principal’s money for his or her own benefit. For this reason, many state and local officials we interviewed in the four states were concerned about misuse of these instruments. For example, one Pennsylvania official described power of attorney documents as a “powerful, simple, and dangerous tool.” A month after an elderly man with dementia and his wife agreed to add their daughter’s name to their bank account, the daughter convinced her mother to sign a document providing her financial power of attorney. When the woman signed, she was in the hospital for a broken hip and a stroke and later claimed she was heavily medicated. Over the next 3 months, the daughter placed the deed to her parent’s home in her name, wrote checks on their account totaling nearly $600,000 that were never questioned by the bank, and attempted to withdraw about $500,000 more. When the woman’s son discovered what had been happening, he had the bank stop payment on the $500,000 and asked the local district attorney to investigate. The daughter was charged with numerous counts of theft, pled guilty, and was sentenced to 3 years probation. The deed was transferred back to the woman and although the prosecutor sought restitution, the $600,000 was not recoverable—it had been used to pay off the daughter’s mortgage, country club membership, and other bills. Some APS and criminal justice officials we spoke to indicated that stronger state power of attorney laws could help prevent elder financial exploitation by agents. 
For example, Pennsylvania officials said that current state laws have been ineffective at (1) creating practices to monitor the activities of power of attorney agents and (2) encouraging banks to question power of attorney documents they find questionable. In California, law enforcement officials noted that notaries were not always held accountable for their role in signing power of attorney documents. To help strengthen state laws designed to prevent misuse of financial powers of attorney, the Uniform Law Commission has developed the Uniform Power of Attorney Act, which explicitly defines the duties of the power of attorney agent, including fiduciary duties such as acting in good faith and keeping careful records; allows a third party to refuse to honor a power of attorney agreement if there is a good faith belief that the principal may be subject to abuse, and requires the third party to report to APS; allows co-agents to be appointed for additional third-party oversight; and imposes liability on agents who violate the law. According to the Uniform Law Commission, 13 states have adopted the entire Uniform Power of Attorney Act. Others have enacted various other power of attorney laws. For example, New York requires an agent to provide a full accounting to APS when it is investigating a report that the principal may be in need of protective or other services or the victim of abuse or neglect. If it is not provided within 15 days, APS can commence a special proceeding to compel the production of the information. Illinois has added safeguards for principals to its law and created additional court remedies for violations of the law. However, according to the Uniform Law Commission, a number of states have made no changes to laws governing powers of attorney since the Uniform Power of Attorney Act was published. 
Powers of attorney are generally regulated under state, not federal, law; however, AoA and CFPB are providing some information to states and power of attorney agents to help prevent power of attorney abuse. The AoA-supported National Legal Resource Center co-sponsors trainings for states on the adoption of the Uniform Power of Attorney Act. Furthermore, the CFPB is developing a guide to educate “lay fiduciaries”—including guardians and agents under powers of attorney—about their responsibilities, and is planning to develop several state-specific lay fiduciary guides, scheduled for release in 2013. There are limited safeguards to protect older adults from abuse by guardians, who are granted authority by a state court to make decisions in the best interest of an incapacitated individual concerning his or her person or property. While guardians can play a key role in managing the assets of these older adults, we have noted in past reports that guardians are only subject to limited safeguards that could protect these older adults from financial exploitation. For example, local officials in California noted that it can be hard to determine whether a person applying to be a guardian is doing so to further his ward’s best interests. We have also reported that few states conduct criminal background checks on potential guardians. Moreover, we have noted concerns with weak court oversight of appointed guardians, as well as poor communication between the courts and federal agencies that have enabled guardians to chronically abuse their wards and/or others. Exploitation by in-home caregivers was also cited by local APS officials, police, and district attorneys we spoke to as a type of abuse that is difficult to prevent. These caregivers range from personal care aides who provide non-medical assistance such as helping with laundry and cooking, to home health aides who check an older adult’s vital signs or assist with medical equipment. 
In-home caregivers may be employed by a private company approved to provide services via a state’s OAA program or independently hired by older adults or their families. Caregiver services may also be covered under a state Medicaid program if the individual is eligible for Medicaid. Older adults may rely on and trust in-home caregivers, and some caregivers have used that relationship to exploit their clients. For example, a caregiver may be given access to an older adult’s ATM or credit card to help with banking or grocery shopping and later be found withdrawing money or purchasing items for themselves. As the population ages and public policies encourage older adults to remain in their homes for as long as practical, there will be an increased need for in-home caregivers. OAA Title III-B provides funding for in-home services, such as personal care, chore, and homemaker assistance. However, in-home caregivers may be subject to limited, if any, background checks. A California law enforcement official told us that caregivers suspected of exploiting older adults sometimes have a history of theft. While the Medicaid program requires states to develop and implement home care provider qualification standards, there is no federal Medicaid requirement for criminal background checks. According to the National Conference of State Legislatures, while many states have required agencies to conduct background checks before employing in-home caregivers who are paid by Medicaid or with other state funds, these laws vary greatly in their breadth and scope and the amount of flexibility afforded the agencies when they use the checks to make hiring decisions. Napa County, California recently initiated an innovative paid in-home caregiver screening initiative. Before in-home caregivers can work in that county, they must submit to a background check and obtain a permit annually.
While background checks for in-home caregivers help flag potential abusers, an AARP study has found that states do not always use all available federal, state, and local criminal data systems. For one, the implementation cost may discourage their use. Moreover, their effectiveness in reducing elder abuse, in general, is unproven. As required by the Patient Protection and Affordable Care Act of 2010, the Centers for Medicare and Medicaid Services implemented the National Background Check Program that encourages states to adopt safeguards to protect clients of in-home caregivers. This voluntary program provides grants to states to conduct background checks for employees of long-term care facilities and providers, such as home health agencies and personal care service providers. As of November 2012, 19 states were participating. The results of this program could provide data on the effectiveness of background checks in preventing elder abuse, including elder financial exploitation. State and local authorities in the four states we visited told us current safeguards are not always sufficient to prevent exploitation by those on whom older adults depend for assistance. Although states are generally responsible for laws and regulations regarding these issues, the OAA directs the federal government to disseminate information about best practices to prevent elder abuse, including elder financial exploitation. According to our analysis, there is a role for the federal government to provide more information and guidance to prevent these types of elder financial exploitation. State and Federal Officials Called for Greater Focus on Public Awareness Experts and federal, state, and local officials told us that older adults need more information about what constitutes elder financial exploitation in order to know how to avoid it. However, APS and law enforcement officials told us that it is difficult for them to reach many older adults with this message and that they have little funding to promote public awareness. For example, in one California county officials reported that due to budget cuts, they had lost many positions that involved educating the public about elder financial exploitation. Each of the seven federal agencies we reviewed independently produces and disseminates public information on elder financial exploitation that is tailored to its own mission. For example, SEC produces information to educate investors about fraud prevention, including an investment guide for older adults. FTC publishes information to protect consumers, and AoA disseminates information to help reduce elder abuse, including elder financial exploitation. (See table 2 for examples of the types of information provided by each of these agencies.) These seven agencies have also worked together at times to increase public awareness of elder financial exploitation. For example, each year FTC and the Postal Inspection Service collaborate on community presentations during National Consumer Protection Week. However, although the OAA calls for a coordinated federal elder justice system, which includes educating the public, the seven agencies we reviewed do not conduct these activities as part of a broader coordinated approach. In previous work, we found that agencies can use limited funding more efficiently by coordinating their activities and can strengthen their collaboration by establishing joint strategies. Similar calls for coordination were raised when the EJCC held its first meeting on October 11, 2012, to begin implementing its mandate to coordinate federal elder justice activities and develop national priorities. As EJCC Chairman, the Secretary of HHS stated that combating elder abuse—which includes elder financial exploitation—is an “all-of-government” effort and that federal programs are not organized in a strategic way, which decreases their effectiveness.
One expert noted that there is a clear need for a strategic, multi-faceted public awareness campaign on elder abuse. An official from the Financial Services Roundtable added that many agencies are trying to focus on awareness and education, but their efforts appear unorganized and uncoordinated. Difficulty Gaining Expertise, Sustaining Collaboration, and Obtaining Data Hinders States' Responses to Elder Financial Exploitation Special Knowledge and Skills Are Needed to Respond to Elder Financial Exploitation According to state and local officials we spoke with in four states, effectively investigating and prosecuting elder financial exploitation requires special skills and knowledge, which APS workers, law enforcement officers, and district attorneys sometimes lack. For example, APS officials noted that some case workers have little background or training in investigating financial crimes, and would find it difficult to respond to these cases. Local law enforcement officials also noted that they receive little training on elder financial exploitation and need additional training to build expertise. In addition, we were told that some prosecutors and judges are reluctant to take on cases of suspected elder financial exploitation because of competing priorities and limited resources, a continuing belief that elder financial exploitation is primarily a civil issue, or a view of older adult victims as unreliable witnesses. State and local officials in the four states we reviewed are attempting to increase their expertise. For example, some state and local officials told us they attempt to acquire investigative expertise through formal and on-the-job training, by dedicating units or staff to investigate suspected cases of elder financial exploitation, or by contracting for assistance from certified fraud examiners or other forensic accountants.
However, state and local officials also told us that funding constraints limited their ability to build this additional expertise. Moreover, officials and experts told us that in order to more effectively allocate their limited resources, state and local entities would need more information about which practices have proven to be most effective for investigating, as well as preventing, elder financial exploitation. AoA and Justice have developed some resources that could be used to help state and local agencies build expertise in identifying, investigating, and prosecuting elder financial exploitation (see table 3). Under the EJA, HHS is authorized to develop and disseminate best practices and provide training for APS workers, and AoA-supported resource centers compile information about elder abuse in general for easy access. However, information pertaining specifically to elder financial exploitation topics—such as mass marketing fraud, power of attorney abuse, or investment fraud—may be dated or more difficult to find because it is intermingled with other materials. For example, AoA's National Center on Elder Abuse (NCEA) has compiled a list of elder abuse training materials from a variety of sources, but we could find no quick and clear way to identify which trainings cover financial exploitation. Additionally, Justice officials told us that it would be beneficial for more training to be available to prosecutors of elder abuse. Justice has identified providing training and resources to combat elder abuse as a strategy to achieve its objective of preventing and intervening in crimes against vulnerable populations. Justice officials indicated that they are developing an elder justice prosecution website that could serve as a resource and help build expertise.
The website is expected to consolidate training materials in use across the country, primary litigation materials from local district attorneys, and information from relevant academic centers, such as the University of California at Irvine and Stanford University. However, it is unclear when this project will be completed, as Justice officials are waiting for materials from local district attorneys. As a result, prosecutors and other law enforcement officials currently do not have access to these materials. States Identified Additional Federal Support Needed to Sustain Crucial Collaborations across Systems and Levels of Government Collaboration between APS and Criminal Justice Systems The OAA requires AoA to develop a plan for promoting collaborative efforts to support elder justice programs at all levels. Officials we met from state and local social service and criminal justice agencies in three of the four states we reviewed said that while collaboration between their systems is important for combating elder financial exploitation, collaborating can sometimes be difficult because the two systems differ in the way they respond to exploitation and carry out their work. Specifically, APS focuses on protecting and supporting the victim, and criminal justice focuses on prosecuting and convicting exploiters. However, according to experts, by working together, APS, the criminal justice system, and other partners can more easily accomplish both of these goals. Experts have noted that some type of multidisciplinary response to elder abuse—including elder financial exploitation—is prudent because of the complex nature of the problems faced by victims and the wide variety of responses required to help them and to prosecute exploiters. In each of the four states we reviewed, local initiatives helped bridge the gap between APS and criminal justice agencies.
In some locations APS, criminal justice agencies, and other public and private entities have formed groups that meet periodically to develop awareness activities, foster information sharing, and discuss and resolve individual cases. Some multidisciplinary groups discuss elder abuse broadly, such as elder abuse task forces in some Pennsylvania counties and multidisciplinary groups in New York City. Others concentrate on financial exploitation specifically, such as the Philadelphia Financial Exploitation Task Force, and Financial Abuse Specialist Teams in some California counties. Although multidisciplinary groups responding to elder financial exploitation already exist in each of the four states we visited and elsewhere, forming and sustaining these groups continues to be challenging, according to experts and law enforcement officials in one state we visited. Busy schedules and competing priorities make it difficult for some participants to attend meetings regularly, and a group's focus influences how extensively members are willing to participate. For example, in one location officials told us that when the primary focus of their group shifted from prosecuting cases to providing services, participation by law enforcement officials declined. Collaborative efforts can also be undermined by a history of poor interaction between member organizations, differences in systemic understanding of elder financial exploitation, difficulties communicating across disciplines, different understandings of limits on information sharing, unclear roles, and failure to address the group's long-term survival. However, information on relevant promising practices in this area could help promote creation of such groups—particularly when resources are limited—and ensure their success. Federal agencies have made some efforts to promote and inform collaboration between the APS and criminal justice systems in states.
However, agencies have taken few steps to compile or disseminate promising practices in creating or sustaining multidisciplinary groups responding to elder financial exploitation, even though the OAA requires AoA to develop and disseminate information on best practices for adult protective services. AoA and Justice have offered a small number of grants to states to combat elder abuse or other crimes that require or encourage collaborative efforts such as multidisciplinary teams (see table 4). AoA's Elder Justice Community Collaborations program offered more than 40 grants of $10,000 each, along with technical assistance and training, from 2007 to 2010 for the purpose of setting up elder justice coalitions. These coalitions, which included members across a broad range of disciplines, were required to create an elder justice strategic plan for their community, including plans for continuation beyond the grant period. This program was the only one we identified that was created specifically for the purpose of setting up new coalitions; other grants either allowed funds to be used for that purpose or required a coalition to be in place to implement the grant-funded initiative. Interstate or international mass marketing scams include "grandparent scams," which persuade victims to wire money to bail "grandchildren" out of jail or pay their expenses, and foreign lottery scams that require victims to pay sizeable sums before they can receive their winnings. In 2011, the FBI's Internet Crime Complaint Center received complaints from victims of all ages about online fraud alone, with reported losses of about $485 million. Local law enforcement authorities in the four states we visited indicated that investigating and prosecuting the growing number of cases involving interstate and international mass marketing fraud, which often target older adults, is particularly difficult for them.
For example, coordinating with law enforcement authorities in other jurisdictions is labor intensive, so state and local officials are often unable to pursue these cases themselves. Furthermore, even though various federal agencies have the authority to investigate and prosecute interstate and international scams (see fig. 3), local law enforcement officials told us there is not enough information available on whom they should contact when they need to refer a case to the federal level. They indicated that the lines of communication between local and federal agencies tend to be informal, based on whom local law enforcement officers know in a federal agency. Providing accurate contact information is consistent with Justice's strategic objective for fiscal years 2012-2016 to strengthen its relationships with state and local law enforcement. Justice officials told us they believe that local officials know which federal officials to contact about international and interstate cases, but state and local law enforcement officials told us that it would be helpful to have more specific information. Cases that local officials do not refer to a federal agency due to a lack of correct contact information may not be investigated or prosecuted by either federal or local authorities. In addition to not knowing whom to contact, state and local law enforcement officials in the four states we reviewed told us that they are concerned that federal agencies do not take enough of the cases that are referred to them. For example, a law enforcement official from California described a case of widespread interstate check fraud, expressing frustration with federal agencies that would not provide any support when he requested it. Federal officials, on the other hand, told us that they cannot take all cases referred to them by state and local law enforcement and that they must prioritize their caseload to make the best use of their limited resources.
Justice and FTC officials said they tend to focus on larger cases in which many victims were affected or a significant amount of money was lost, and Justice's U.S. Attorneys also apply regional priorities, such as the vulnerability (including age) of the victim, when determining which cases to take. Even if federal agencies choose not to take a case a state or local agency refers to them, officials told us that consistent referrals of cases by state and local authorities allow them to identify patterns or combine several complaints against the same individual into one case. FTC's Consumer Sentinel Network database (Consumer Sentinel) collects consumer complaint data and aims to be an information-sharing tool to enable state and local law enforcement to become more effective. Justice officials said they encourage individuals and state and local authorities to file a complaint of suspected fraud to either the Consumer Sentinel or the FBI's Internet Crime Complaint Center. However, while some state Attorneys General were familiar with the FTC database, local law enforcement officials we spoke with did not say that they reported cases to it or used its data. One official said he did not find the Consumer Sentinel database useful because law enforcement officials are not familiar with it. FTC officials explained that while they have made attempts to get state-level offices to contribute to the Consumer Sentinel, barriers such as reservations about data sharing, obsolete technological infrastructure, and severe budgetary cutbacks have kept the numbers of contributors low. When state officials do not contribute to the Consumer Sentinel, the information in the database does not give a national picture of the extent of cross-border scams.
As a result of this—in addition to the impact of some law enforcement officials not using the system—it may be more difficult to combat these scams, and officials at all levels may not have the information they need to target their resources appropriately. According to state and local officials, banks are important partners in combating elder financial exploitation because they are well-positioned to recognize, report, and provide evidence in these cases. Indeed, frontline bank staff are able to observe elder financial exploitation firsthand. For example, a bank teller who sees an older adult regularly is likely to notice if that individual is accompanied by someone new and seems pressured to withdraw money or if the older adult suddenly begins to wire large sums of money internationally. There are state efforts and bank policies to help bank employees recognize exploitation. In Illinois, all state-chartered banks are required to train their employees on what constitutes elder financial exploitation. State and local agencies in California and Pennsylvania provide information and training to banks to help them recognize elder financial exploitation. Most of the six banks we spoke with had a policy for periodically training employees on identifying elder financial exploitation. In addition, these banks had a system in place that routinely monitors bank transactions for unusual activity and can help identify exploitation. Banks may also help report suspected elder financial exploitation to local authorities. Training initiatives, such as Illinois’ program, encourage bank employees to report exploitation. Most of the six banks we spoke with had procedures in place for frontline employees to report suspected elder financial exploitation to bank management. Some of these banks also had internal units that are dedicated to receiving staff reports of elder financial exploitation and referring them to the proper authorities. 
Notwithstanding such efforts, APS and criminal justice officials told us elder financial exploitation is generally underreported by banks. Despite the training they receive, bank staff may not be aware of the signs of elder financial exploitation or know how to report it. In addition, in five of the six prosecuted cases we reviewed in depth, there were missed opportunities for banks to raise questions about transactions. For example, in one case, bank officials did not take any action in response to repeated withdrawals of large amounts of money that were not typical for that customer. Bank officials said they do report suspected elder financial exploitation, but also emphasized that banks are not law enforcement agencies. Officials said their primary responsibility is to protect customer assets and privacy and ensure customers have access to their funds. In addition, a banking association representative told us that even though federal privacy laws do not prohibit banks from reporting suspected abuse, banks are concerned that they will be held liable if they report suspected exploitation that is not later substantiated. Three federal agencies—CFPB, AoA, and FinCEN—are positioned to encourage banks to identify and report elder financial exploitation, either due to the agency's mission or via proposed or existing activities. The CFPB is the primary federal consumer protection regulator with respect to a variety of financial institutions, including banks. The Dodd-Frank Act authorizes the CFPB to protect consumers, including older adults, from abusive practices committed in connection with the offering or provision of consumer financial products or services. In November 2011 congressional testimony, the Assistant Director of CFPB's Office for Older Americans said the agency has a unique opportunity to help enhance, coordinate, and promote efforts of a variety of groups, including financial services providers.
While the federal government generally requires banks to train employees on a variety of issues, such as money laundering, physical bank security, and information security, we could find no similar requirements for banks to train employees to recognize and report elder financial exploitation. However, AoA is considering collaborating with one large national bank on a project to encourage bank training on elder financial exploitation. Banks are also required to file Suspicious Activity Reports (SARs) with FinCEN to alert the agency to potentially illegal bank transactions that involve, individually or in the aggregate, at least $5,000, which could include elder financial exploitation. In February 2011, FinCEN issued an advisory to banks that described elder financial exploitation, provided potential indicators of elder financial exploitation, and requested the use of a specific term ("elder financial exploitation") when applicable in SAR narratives related to this activity. Bank records can help investigators track an older adult's use of funds over time and detect irregularities. APS officials in Pennsylvania told us that although Pennsylvania state law grants APS access to bank records, they are often denied access on the basis of federal privacy laws or the bank's policies. APS officials from California, Illinois, and New York also reported that they are denied access to bank records for the same reasons. As a result, investigators are unable to obtain the information necessary to investigate suspected exploitation, identify perpetrators, stop further exploitation from occurring, or obtain restitution for victims. Bank officials told us the federal government could help clarify bank roles and responsibilities related to privacy and financial exploitation of older adults.
There are two federal laws that generally protect the privacy of consumer banking records: the Right to Financial Privacy Act of 1978 (RFPA) and the Gramm-Leach-Bliley Act. Each establishes standards that banks must meet to safeguard customer banking information. The RFPA generally prohibits financial institutions, including banks, from providing any federal governmental authority with access to copies of information in any customer's records without first providing notice to the customer. Because a government authority is defined in RFPA to include only federal agencies and officials, however, it should not prevent banks from reporting possible financial exploitation of older adults to—or providing bank records to—state APS. The Gramm-Leach-Bliley Act generally prohibits financial institutions, including banks, from disclosing nonpublic personal information to third parties including, but not limited to, federal governmental authorities. Nonetheless, the act has a number of general exceptions permitting disclosure, such as: to protect against or prevent actual or potential fraud, unauthorized transactions, claims, or other liability; consistent with the RFPA, for an investigation on a matter related to public safety; or to comply with a properly authorized civil, criminal, or regulatory investigation, subpoena, or summons by federal, state, or local authorities. Incomplete Data Hinder Efforts to Combat Elder Financial Exploitation The NCEA and experts have called for more data on the cost of elder financial exploitation to public programs and for trend data on its extent. According to our analysis, these data could help determine what government resources to allocate and how best to prevent and respond to this problem.
According to one Utah official, quantifying the impact of elder financial exploitation in that state helped that state's legislators understand the importance of combating this problem and convinced them to decrease, rather than eliminate altogether, state APS funding. However, according to our analysis, no other state has undertaken such a study. Similarly, data on the extent of elder financial exploitation over time could help state and local APS, as well as law enforcement agencies, assess the effectiveness of their efforts to combat it. The OAA and EJA both require the federal government to take steps to collect and disseminate data on all types of elder abuse, yet the studies federal agencies have funded in this area have produced little data on its extent over time, as we previously reported, or on its cost. Several federal agencies do collect administrative data on the number of complaints submitted by consumers or criminal cases that sometimes involve elder financial exploitation (see table 5)—data that could help state and local APS and law enforcement authorities determine what resources to allocate and how best to prevent and respond to this problem. Each agency publishes material containing a range of administrative data from its system that is available to the public. FTC, for example, publishes statistics from the Consumer Sentinel on the number and types of complaints, amount of losses, and characteristics of victims. While the number of reported incidents of elder financial exploitation in each agency's system represents only a portion of all cases that actually occur in a given period and geographic area, the number over time could provide an indication of fluctuations in the extent of certain types of elder financial exploitation.
Data from the Consumer Sentinel could be of particular interest to state and local APS and law enforcement authorities, because over half of the consumer complaints reported to this system involve financial exploitation through fraud. Individual complaints can be directly reported to the Consumer Sentinel by victims or others on their behalf. Cases reported to the FBI Internet Crime Complaint Center and non-governmental organizations, such as the Council of Better Business Bureaus, are also added to the complaints in the Consumer Sentinel. Currently, however, the Consumer Sentinel does not receive any of the complaints reported to any of the law enforcement or consumer protection agencies in 38 states. Moreover, less than half the complaints in the Consumer Sentinel contain the age of the victim because FTC does not require complaints to include this information or other indicators of whether the case involved elder financial exploitation. FTC officials told us the agency does not require complaints to include the age of the victim because of concerns regarding privacy and the potential burden this might place on individual complainants. In contrast, SARs in the FinCEN system will soon all be clearly identified when a filing institution reports suspected elder financial exploitation. In 2011, we found that state-level APS data could provide useful information on the extent of elder abuse, including elder financial exploitation, over time. We recommended that AoA work with states to develop a nationwide system to collect and compile these data. AoA officials told us they have initiated discussions with states about establishing such a system, but have been unable to develop a comprehensive plan for implementing one due to a lack of funding. Conclusions Elder financial exploitation is a multi-faceted problem spanning the social service, criminal justice, and consumer protection systems of government.
As a result, combating it is challenging and requires action on the part of not only many state and local agencies, but also multiple agencies at the federal level. Each of the seven federal agencies we reviewed is working to solve this problem in ways that are consistent with its own mission. However, the problem is large and growing. It calls for a more cohesive and deliberate approach governmentwide that, at a minimum, identifies gaps in efforts nationwide, ensures that federal resources are effectively allocated, establishes federal agency responsibilities, and holds agencies accountable for meeting them. The EJCC has recognized that combating elder abuse, including elder financial exploitation, is an effort that requires federal agencies to work together. A clearly articulated national strategy is needed to coordinate and optimize such federal efforts to effectively prevent and respond to elder financial exploitation, and the EJCC can be the vehicle for defining and implementing this strategy. In the current economic climate, state and local APS and law enforcement agencies will find it increasingly difficult to cope with growing numbers of cases without a national strategy attuned to their need for information and guidance on preventing and responding to elder financial exploitation, as well as additional data on its extent and impact. In addition to working together to build a national strategy to combat elder financial exploitation, there are a number of ways individual federal agencies could better support state and local APS and law enforcement agencies. For example, Justice has identified providing training and resources to combat elder abuse as a strategy to achieve its objectives of preventing and intervening in crimes against vulnerable populations. Without easily accessible information and guidance tailored to the needs of prosecutors nationwide, they may continue, given limited resources, to make such cases a low priority. 
Similarly, many cases cross jurisdictions and could involve multiple victims or have perpetrators located in other countries. These cases may not be investigated or prosecuted unless state and local law enforcement have better information on the process for contacting the federal government regarding these cases or the ways in which the federal government could provide support. Without information to correct banks' misconceptions about the impact of federal privacy laws on their ability to release bank records, APS and law enforcement agencies will continue to find it difficult to obtain the information they need from banks to investigate suspected cases of elder financial exploitation. Moreover, without educating bank employees nationwide on how to identify and report suspected elder financial exploitation, many cases will continue to go unreported, uninvestigated, and unprosecuted. The CFPB is positioned to provide additional information to banks, as part of the agency's consumer protection regulatory function and dedication to protecting the financial health of older Americans. Finally, to fulfill its mission of protecting consumers against unfair, deceptive, or fraudulent practices, the FTC established the Consumer Sentinel Network database to enhance information-sharing and support law enforcement at all levels. The Consumer Sentinel could serve as a valuable source of data on the extent of some types of elder financial exploitation nationwide and as an important resource for law enforcement authorities as they identify, investigate, and prosecute cases. The Consumer Sentinel's usefulness in this area, however, will continue to be limited until the number of contributors to it is increased and complaints are required to include the age of the victim or other indicators of whether the case involved elder financial exploitation.
In the absence of the latter, it is difficult to determine the number of financial exploitation complaints that involve older adults, which in turn makes any Consumer Sentinel data contributed less useful to state and local APS and law enforcement agencies. Recommendations for Executive Action To coordinate and optimize federal efforts to prevent and respond to elder financial exploitation, we recommend the Secretary of HHS, as chairman of the Elder Justice Coordinating Council, direct the Council to develop a written national strategy for combating this problem. This strategy should include a clear statement of its purpose and goals and indicate the roles and responsibilities particular federal agencies should have in implementing it. The strategy could address, among other things, the need to identify and disseminate promising practices and other information nationwide that can be used by state and local agencies to prevent exploitation, educate the public, and help state and local agencies collaborate, investigate, and prosecute elder financial exploitation; ensure coordination of public awareness activities across federal agencies; and collect and disseminate better data nationwide to inform federal, state, and local decisions regarding prevention of and response to elder financial exploitation. To develop expertise among prosecutors and other criminal justice officials, we recommend the Attorney General establish timeframes for and take the steps necessary to launch the elder justice prosecution website that Justice has begun to construct. To facilitate investigation and prosecution of interstate and international elder financial exploitation, we recommend the Attorney General conduct outreach to state and local law enforcement agencies to clarify the process for contacting the federal government regarding these cases and the ways in which the federal government could provide support. 
To encourage banks to identify and report suspected elder financial exploitation and to facilitate release of bank records to APS and law enforcement authorities for investigating this activity, we recommend the Director of the Consumer Financial Protection Bureau develop a plan to educate banks nationwide on how to identify and report possible elder financial exploitation; and develop and disseminate information for banks on the circumstances under which they are permitted, under federal privacy laws, to release relevant bank records to law enforcement and APS agencies. Response to Agency Comments We provided a draft of this report to the seven federal agencies that we reviewed for their comments. CFPB concurred with our recommendations and agreed that a collaborative and coordinated effort by federal agencies can help optimize strategies to combat elder financial exploitation (see appendix XVI). CFPB further noted that financial institutions can play a key role in preventing and detecting elder financial exploitation, and that CFPB is collecting information on financial institution training programs and considering how best to help institutions that request this information. HHS indicated in its general comments that our recommendations are consistent with what it heard during the inaugural meeting of the EJCC, and added that it looks forward to working with Congress to continue implementing the EJA (see appendix XVII). In an e-mailed response, FTC’s Bureau of Consumer Protection noted that the Consumer Sentinel database provides law enforcement with access to millions of consumer complaints. FTC added that the database has no required fields, and expressed its belief that if consumers were required to provide detailed personal information as a condition to filing a complaint, they might refuse to do so, thereby decreasing the overall effectiveness of the system. 
FTC explained that almost 48 percent of all fraud complaints in 2011 included the voluntary submission of age, and that nearly half of its non-individual data contributors do not submit age information in the data they provide to FTC. Given the potential for the Consumer Sentinel database to support and enhance state and local law enforcement agencies' response to elder financial exploitation, particularly interstate and international cases, we continue to believe that FTC should study the feasibility of requiring that all complaints to the Consumer Sentinel database include the victim's age or another indicator of whether the complaint involves elder financial exploitation. In doing so, FTC can examine different options, including the use of a check box similar to the one that FinCEN has included in its SARs. We are sending copies of this report to the seven agencies we reviewed, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. The report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XVIII. Mission: To develop a comprehensive, coordinated, and cost-effective system of home- and community-based services that helps elderly individuals maintain their health and independence. How agency prioritizes elder financial exploitation: Strategic goal of ensuring the rights of older people and preventing their abuse, neglect, and exploitation.
Coordination with other agencies: Works with state aging agencies to help them develop statewide plans, conduct research on and develop information on best practices for Adult Protective Services (APS), provide technical assistance, and establish centers to conduct research, develop expertise, and improve law enforcement’s ability to combat elder financial exploitation. Through NCEA, partnered with Treasury on its Go Direct financial campaign; partnered with Treasury and the Financial Services Roundtable on a toolkit for training financial institutions on elder financial exploitation; and worked with the SEC on several Seniors Summits that brought agencies together to discuss elder financial exploitation. Chairs the Elder Justice Coordinating Council, a collaborative body of federal agencies created under the EJA to recommend federal policies to combat elder abuse and ways federal agencies should coordinate to implement these policies. Co-leads an informal interagency workgroup that helps facilitate federal elder justice activities. Contact information: Call (202) 619-0724. On April 16, 2012, AoA became part of the Administration for Community Living, which also includes HHS’s Office on Disability and Administration on Developmental Disabilities.
To obtain this information, GAO interviewed state and local social service, criminal justice, and consumer protection officials in California, Illinois, New York, and Pennsylvania--states with large elderly populations; officials in seven federal agencies; and various elder abuse experts. GAO also analyzed federal strategic plans and other documents and reviewed relevant research, federal laws and regulations, and state laws. Officials in each of the four states GAO contacted identified the need for more safeguards and public awareness activities to help prevent elder financial exploitation. They also noted that it is difficult to prevent exploitation by individuals such as financial services providers, power of attorney agents, guardians, and paid in-home caregivers. Although states have primary responsibility for combating elder financial exploitation, the federal government could disseminate information on model power of attorney legislation, for example, to help states better safeguard against power of attorney abuse--one type of federal activity authorized under the Older Americans Act of 1965. In addition, experts and state and local officials told GAO that many older adults need more information about what constitutes elder financial exploitation in order to report and avoid it. The seven federal agencies GAO reviewed have undertaken activities to increase public awareness of elder financial exploitation. While some experts observed that a nationwide approach to educating the public is needed, federal public awareness activities are not currently conducted as part of a broader coordinated approach, which GAO believes could help ensure the effective use of federal resources. The Elder Justice Coordinating Council, which held its first meeting in 2012, could be the vehicle for developing and implementing a coordinated national strategy. 
The Council is composed of officials from federal agencies and is charged with developing national priorities and coordinating federal elder justice activities. Experts and officials in each state GAO reviewed indicated that difficulty (1) gaining expertise, (2) sustaining collaboration between law enforcement and adult protective services agencies, and (3) obtaining data hinders their response to elder financial exploitation. As with prevention, many federal agencies have individually taken steps to address these challenges that are in line with their own missions. For example, the Department of Justice (Justice) has begun to construct a website that contains training and other materials prosecutors can use to build their expertise in investigating and prosecuting elder abuse, which includes elder financial exploitation. However, there are gaps in federal support in some areas. For example, law enforcement officials in each of the four states GAO reviewed indicated that it is not clear how they should obtain the federal support they need to respond to interstate and international cases. Justice can provide this information, in keeping with its priority to strengthen its relationship with state and local law enforcement. Similarly, the Federal Trade Commission’s (FTC) Consumer Sentinel Network database compiles incidents of financial exploitation reported to it by many sources around the country but receives incidents from state government agencies in only 12 states. The database would be of greater use if FTC obtained incidents from more of the states and if it included an indicator of whether an incident involved an older adult.
Background Under TRIA, Treasury is responsible for reimbursing insurers for a portion of terrorism losses under certain conditions. Payments are triggered when (1) the Secretary of the Treasury certifies that terrorists acting on behalf of foreign interests have carried out an act of terrorism and (2) aggregate insured losses for commercial property and casualty damages exceed $5,000,000 for a single event. TRIA specifies that an insurer is responsible (i.e., will not be reimbursed) for the first dollars of its insured losses—its deductible amount. TRIA sets the deductible amount for each insurer equal to a percentage of its direct earned premiums for the previous year. Beyond the deductible, insurers also are responsible for paying a percentage of insured losses. Specifically, TRIA structures pay-out provisions so that the federal government shares the payment of insured losses with insurers at a 9:1 ratio—the federal government pays 90 percent of insured losses and insurers pay 10 percent—until aggregate insured losses from all insurers reach $100 billion in a calendar year (see fig. 1). Thus, under TRIA’s formula for sharing losses, insurers are reimbursed for portions of the claims they have paid to policyholders. Furthermore, TRIA then releases insurers who have paid their deductibles from any further liability for losses that exceed aggregate insured losses of $100 billion in any one year. Congress is charged with determining how losses in excess of $100 billion will be paid. TRIA also contains provisions and a formula requiring Treasury to recoup part of the federal share if the aggregate sum of all insurers’ deductibles and 10 percent share is less than the amount prescribed in the act—the “insurance marketplace aggregate retention amount.” TRIA also gives the Secretary of the Treasury discretion to recoup more of the federal payment if deemed appropriate. 
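The loss-sharing formula described above lends itself to a short worked example. The sketch below is illustrative only: the deductible percentage and dollar amounts are hypothetical inputs, and it ignores the $100 billion aggregate cap and the recoupment provisions.

```python
# Illustrative sketch of TRIA's loss-sharing arrangement: the insurer
# pays its deductible (a percentage of its prior-year direct earned
# premium) plus 10 percent of insured losses above the deductible; the
# federal government pays the remaining 90 percent. All inputs are
# hypothetical.

def tria_shares(insured_loss, prior_year_premium, deductible_pct):
    """Split one insurer's insured loss between insurer and government."""
    deductible = prior_year_premium * deductible_pct
    if insured_loss <= deductible:
        return insured_loss, 0.0      # insurer absorbs the full loss
    excess = insured_loss - deductible
    insurer_share = deductible + 0.10 * excess
    federal_share = 0.90 * excess
    return insurer_share, federal_share

# Example: $500 million prior-year premium, a 10 percent deductible
# rate, and $300 million of insured losses from a certified event.
# The insurer pays its $50M deductible plus 10% of the $250M excess
# ($25M, for $75M total); the federal share is 90% of the excess ($225M).
insurer, federal = tria_shares(300e6, 500e6, 0.10)
```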
Commercial property-casualty policyholders would pay for the recoupment through a surcharge on premiums for all the property-casualty policies in force after Treasury established the surcharge amount; the insurers would collect the surcharge. TRIA limits the surcharge to a maximum of 3 percent of annual premiums, to be assessed for as many years as necessary to recoup the mandatory amount. TRIA also gives the Secretary of the Treasury discretion to reduce the annual surcharge in consideration of various factors such as the economic impact on urban centers. However, if Treasury makes such adjustments, it has to extend the surcharges for additional years to collect the remainder of the recoupment. Treasury is funding TRIP operations with “no-year money” under a TRIA provision that gives Treasury authority to utilize funds necessary to set up and run the program. The TRIP office had a budget of $8.97 million for fiscal year 2003 (of which TRIP spent $4 million), $9 million for fiscal year 2004, and a projected budget of $10.56 million for fiscal year 2005—a total of $28.53 million over 3 years. The funding levels incorporate the estimated costs of running a claims-processing operation in the aftermath of a terrorist event: $5 million in fiscal years 2003 and 2004 and $6.5 million in fiscal year 2005, representing about 55 to 60 percent of the budget for each fiscal year. If no certified terrorist event occurred, the claims-processing function would be maintained at a standby level, reducing the projected costs to $1.2 million annually, or about 23 percent of the office’s budget in each fiscal year. Any funds ultimately used to pay the federal share after a certified terrorist event would be in addition to these budgeted amounts. 
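The capped surcharge described above implies a simple bound on how long recoupment must run. The sketch below is a hypothetical illustration: the recoupment amount and premium base are invented figures, and it assumes the surcharge is collected at the 3 percent cap every year with no discretionary reductions by the Secretary.

```python
import math

def recoupment_years(amount_to_recoup, annual_premium_base,
                     surcharge_cap=0.03):
    """Minimum number of whole years of surcharges needed when the
    surcharge is assessed at the statutory 3 percent cap each year."""
    max_annual_collection = surcharge_cap * annual_premium_base
    return math.ceil(amount_to_recoup / max_annual_collection)

# Example: recouping a hypothetical $10 billion against a $150 billion
# annual commercial property-casualty premium base collects at most
# $4.5 billion per year, so surcharges would have to run for 3 years.
years = recoupment_years(10e9, 150e9)
```

If Treasury reduced the annual surcharge below the cap (for example, to limit the economic impact on urban centers), the same arithmetic with a smaller rate shows why the collection period must lengthen.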
Treasury and Industry Participants Have Made Progress in Implementing TRIA, but Treasury Has Not Yet Achieved Key Goals More than a year after TRIA’s enactment, Treasury and insurance industry participants have made progress in implementing and complying with its provisions, but Treasury has yet to fully implement the 3-year program. Treasury has issued regulations (final rules) to guide insurance market participants, fully staffed the TRIP office, and started collecting data and performing studies mandated by TRIA. However, Treasury has yet to make the claims payment function operational and decide whether to extend the “make available” requirement through 2005. In its advisory role, NAIC has effectively assisted Treasury in drafting guidance and regulations and planning mandated studies. Insurance companies are also generally complying with TRIA requirements by making changes to their operations, such as revising premiums and policy terms. However, insurers do not yet know whether they will be required to “make available” terrorism insurance for policies issued or renewed in 2005. Additionally, they have voiced concerns about the time Treasury might take to certify an act of terrorism as eligible for reimbursement under TRIA and process and pay claims after an act was certified. Treasury Has Issued Some Regulations, Staffed the TRIP Office, and Begun Studies and Data Collection To implement TRIA and make TRIP functional, Treasury has taken numerous regulatory and administrative actions, which encompass rulemaking, creating a new program office, and collecting and analyzing data. To date, Treasury has issued three final rules and one proposed rule, which provide uniform definitions of TRIA terms, explain disclosure requirements, determine which insurers are subject to TRIA, and establish a basic claims-paying process. Treasury has also created and staffed the TRIP office, which will oversee claims processing, payment, and auditing. 
Finally, Treasury has completed a TRIA-mandated assessment and is working on other reporting and data collection mandates. To Be Ready for Possible Terrorist Events, Treasury Quickly Issued Interim Guidance and Interim Final Rules After TRIA became effective, Treasury officials said they moved quickly to provide immediate guidance to the insurance industry on time-sensitive requirements. Because the process required to issue final regulations would take a few months, Treasury published four sets of interim guidance in the Federal Register between December 2002 and March 2003. The first three sets of interim guidance were in a question-and-answer format to provide quick answers to specific questions, and the fourth interim guidance contained regulatory language. The purpose of the interim guidance was to help insurance companies and other entities determine if they were subject to TRIA and to help insurers quickly modify forms and policies and adjust operations by providing definitions and program parameters. Interim guidance in December 2002 covered requirements for disclosure (e.g., notification to policyholders), the “make available” provision, and which lines of property-casualty insurance were subject to TRIA. For example, the guidance explained that under TRIA insurers are required to send notices to their policyholders containing information about the availability and cost of terrorism insurance and the 90 percent federal share. Subsequent guidance provided information on topics such as how certain insurers should allocate direct earned premiums (which are used to determine what their deductibles would be), alternative methods for complying with TRIA’s disclosure requirement, and the application of TRIA to non-U.S. insurers. The interim guidance remained in force while Treasury drafted final rules. In addition to interim guidance, Treasury also published two interim final rules and a proposed rule. 
The first interim final rule laid the foundation of the program and established key definitions for terms used in TRIA. The second interim final rule covered disclosure and “make available” requirements. The proposed rule addressed “state residual market insurance entities” and “state workers’ compensation funds”—two types of state-created entities that will be discussed below. The interim final rules had the force of law until they were superseded by final rules. As a result, Treasury officials stated, had a terrorist act occurred before final rules took effect, a regulatory structure would have been in place to allow a faster response than would otherwise have been possible. Treasury Also Has Published Final Rules As of March 1, 2004, Treasury’s interim guidance, interim final rules, and proposed rule had been superseded by three final rules. The first final rule was published in the Federal Register on July 11, 2003, and addressed basic definitions of words used in TRIA, such as “insurer” and “property and casualty insurance.” Treasury officials said they completed this regulation first to provide a foundation for subsequent regulations, which would use these terms frequently. Although TRIA provided definitions for these terms, TRIA also specified that state insurance regulations be preserved where possible. According to Treasury officials, Treasury thus devoted much effort to ensure that TRIA’s definitions of property-casualty insurance terms would be consistently applied across jurisdictions—a difficult task because Treasury did not have existing uniform or consistent definitions of the terms used in TRIA. For example, the term “commercial property-casualty insurance” includes slightly different lines of insurance in each state’s definition. Treasury decided to use information that insurers submitted in annual statements to NAIC as the basis for defining property-casualty insurance.
On October 17, 2003, Treasury issued its second final rule on disclosure and “make available” requirements for insurers (see fig. 2). These time-sensitive requirements, which insurers had to meet to be eligible to receive federal reimbursement for terrorist losses, had originally been spelled out in the interim final rule. Among other things, the rule stated that insurers that had used NAIC’s model disclosure forms to notify their policyholders about TRIA and terrorism insurance premiums had complied with TRIA disclosure requirements. The rule also clarified that insurers did not have to make available coverage for certain risks if the insurer’s state regulator permitted the exclusion of those risks and the insurer had made the same exclusion from coverage on all other types of policies. For example, Treasury’s explanations in the rule specifically used policy exclusions for NBC events to illustrate this point. (We discuss these exclusions in more detail later in this report.) The third final rule, also issued on October 17, 2003, instructed two kinds of insurers that are typically created by state governments—“state residual market insurance entities” and “state workers’ compensation funds”—on how TRIA provisions apply to them (see fig. 2). States establish residual market insurance entities to assume risks that are generally unacceptable to the normal insurance market, and state workers’ compensation funds are state funds established to provide workers injured on the job with guaranteed benefits. The other insurance companies operating in the state usually fund these state-created entities. The rule explained how a state residual market insurance entity and its insurance company members should allocate direct earned premiums among themselves for the purposes of calculating deductibles under TRIA, because the size of the TRIA deductible is determined by the size of a company’s direct earned premium. 
Treasury crafted provisions specific to state residual market insurance entities because, depending on the particular state law, both the premiums and the profits and losses of these entities may be shared with their insurance company members. Absent these specific provisions, in those cases where premiums were shared the premiums would be double counted, resulting in an unfair increase in the deductible of the insurance company. The rule also applied TRIA’s disclosure provisions to both types of state-created entities. Treasury also issued a proposed rule on December 1, 2003, which would establish the first stages of a basic claims-paying process (see fig. 2). According to Treasury officials, this proposed regulation sets up an initial framework for the claims process, including instructions to insurers to notify Treasury when they have reached 50 percent of their deductible. This notification provides Treasury with advance notice of possible impending claims. The proposed rule also contains, among other things, requirements for insurers to receive federal reimbursements and provides associated recordkeeping requirements. Treasury intends to supplement the proposed rule with additional, separate guidance that will provide detailed operating procedures for claims filing and processing. According to the officials, Treasury took this phased approach to get the basic rules out to insurers in case a terrorist event occurred. Finally, a Treasury official said that Treasury staff drafted another rule, which is currently under review by the Office of Management and Budget (OMB). The draft, which will be published and available for public comment as a proposed rule after OMB approves it, addresses litigation management (see fig. 2). The draft proposed rule would apply a TRIA provision that establishes that suits arising from certified terrorist events are federal causes of action and establishes litigation management procedures. 
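The premium double counting that these provisions prevent can be shown with a small hypothetical: a residual market entity shares premium back to a member insurer. All figures and the deductible rate below are invented for illustration; the sketch is not Treasury's actual allocation method.

```python
# Hypothetical member insurer of a state residual market entity. The
# member books $100M of its own direct earned premium, and under state
# law the entity shares $20M of premium with the member.

DEDUCTIBLE_PCT = 0.10   # illustrative TRIA deductible rate

own_premium = 100e6
shared_premium = 20e6

# Without an allocation rule, the shared $20M could be counted both in
# the member's own statements and again when the entity's premium is
# attributed to its members.
double_counted_base = own_premium + 2 * shared_premium

# An allocation rule counts the shared premium only once.
allocated_base = own_premium + shared_premium

# Because the deductible is a percentage of direct earned premium,
# double counting would overstate the member's deductible by $2M here.
overstatement = (double_counted_base - allocated_base) * DEDUCTIBLE_PCT
```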
Writing the regulations has been a lengthy and difficult process, not only because of the multiple procedural review requirements of federal rulemaking, but also because TRIA established that state insurance regulations should be preserved where possible. For example, as previously discussed, creating definitions in accord with the statutory definitions of more than 50 jurisdictions (the states, District of Columbia, Puerto Rico, and U.S. territories) required extensive discussions among the state regulators, which in turn required additional time to plan and execute. Treasury Has Fully Staffed the TRIP Office In addition to developing regulations to implement TRIA, Treasury fully staffed the TRIP office by September 2003. The TRIP office develops and oversees the operational aspects of TRIA, which encompass claims management—processing, review, and payment—and auditing functions. The TRIP staff consists of an executive director, a senior advisor, two attorneys, two policy analysts, and two administrative staff. Since becoming operational, TRIP staff have drafted regulations and performed other tasks necessary to make the program functional. For example, staff reviewed and incorporated appropriate public comments to proposed regulations and visited reinsurers to learn more about paying claims submitted by insurers as a prelude to developing criteria for claims payment and processing. Staff also will be issuing contracts for vendors to supply these claims services. (We discuss the claims processing function in more detail later in this report.) Additionally, TRIP staff have ongoing work such as issuing interpretive letters in response to questions submitted by the public and participating in conferences across the United States to inform regulators, industry participants, and the public about TRIA provisions. Treasury Has Begun Mandated Data Collection and Analysis Treasury has completed one TRIA mandate for data collection and a study and has begun work on others. 
Specifically, TRIA mandated that Treasury provide information to Congress in four areas: (1) the effects of terrorism on the availability of group life insurance, (2) the effects of terrorism on the availability of life and other lines of insurance, (3) annual data on premium rates, and (4) the effectiveness of TRIA (see table 1). Treasury’s Office of Economic Policy is responsible for organizing and analyzing information associated with the mandated studies and assessments. Pursuant to TRIA section 103(h)(1), Treasury completed an assessment of the availability of group life insurance and reinsurance for insurers issuing group life policies. Treasury concluded that the terrorism threat had not reduced the availability of group life insurance, but had reduced the availability of reinsurance, finding a general lack of catastrophic reinsurance for group life coverage. After completing the assessment, Treasury issued a press release in August 2003 stating that it had decided not to make group life insurance subject to TRIA because it found that insurers had continued to provide group life coverage. According to life insurance experts, life insurers have done so to maintain customer relations that would be difficult to reestablish if the coverage were discontinued. Additionally, life insurance experts noted that business from other lines of insurance would be lost if insurers were to discontinue group life, which is typically sold as part of a package with disability and medical coverage. Treasury has not yet completed a mandated study on the effects of terrorism on the availability of life and other lines of insurance. The study was to have been completed by August 2003, 9 months after TRIA was enacted. As of March 1, 2004, according to Treasury officials, the report based on this study was in draft form. Because internal Treasury reviews of the draft have not been completed, the draft report has not yet been made public. 
Pursuant to TRIA sections 104(f)(1) and 108(d)(1), Treasury officials said they began collecting data on annual premium rates and working on the study that would assess the effectiveness of TRIA and project the availability and affordability of terrorism insurance for certain groups of policyholders after TRIA expires. Treasury hired a private firm to collect premium data and other information in surveys from policyholders, insurers, and reinsurers. In the surveys, policyholders are asked to provide information such as business size, geographic locations of insured properties, premium data for TRIA-related terrorism insurance, and risk management measures used. Insurers are asked about the types of insurance sold that contain TRIA coverage, number of policies sold, number of policies sold with TRIA coverage, and methods used for estimating risks. Reinsurers will be asked for similar information. The data collected from the survey will provide information for the data collection efforts on annual premium rates and also provide the basis for assessing the effectiveness of TRIA. According to Treasury officials, Treasury began sending surveys to a nationally representative sample of 25,000 policyholders in November 2003 and approximately 700 insurers and insurance groups in January 2004. The first surveys will collect data for 2003, as well as 2002, to establish a baseline for analysis and reporting. The second and third surveys will be sent in 2004 and 2005. Treasury Has Tasks to Complete before TRIA Can Be Fully Implemented Before TRIA can be fully implemented, Treasury has to make certain decisions, develop additional regulations, and make certain TRIP functions operational. More specifically, TRIA gave Treasury until September 1, 2004, to decide if the requirement that insurers offer terrorism coverage on terms that do not differ materially from other coverage should be extended for policies issued or renewed in 2005, the third and final year of the program. 
Treasury did issue a press release on December 23, 2003, clarifying that the “make available” requirement for annual policies issued or renewed in 2004 extends until the policy expiration date, even though the coverage period extends into 2005. As of March 1, 2004, Treasury officials said they had not made a decision on the “make available” extension for policies that will be issued or renewed in 2005. The officials indicated that they would be in a better position to make this decision after they obtained enough preliminary data from their surveys, which they anticipate receiving in spring 2004. The survey data are expected to provide an analytical framework for Treasury’s decisions by collecting information on factors such as premium rates, geographic locations of covered property, policy limits and deductibles, and the extent to which certain terrorism risks are covered. Treasury has yet to develop all the regulations necessary to carry out TRIA provisions and make operational certain functions relating to claims administration, auditing, and oversight. While the implementation of some of these provisions and functions was covered by the proposed rule (see fig. 2), Treasury has not drafted final rules to cover the latter stages of the claims process, which would encompass resolving disputed claims with insurance companies, dealing with insurers that become insolvent, adjusting claims payments for over- and underpayments (netting), and handling claims submitted by insurers after aggregate insured losses have exceeded the $100 billion cap. Treasury officials said they plan to complete these regulations in the spring and summer of 2004, after they have fully addressed the claims-paying process. Treasury also has yet to write regulations addressing recoupment and surcharges and the collection of civil monetary penalties in cases of noncompliance or fraud. 
Treasury also plans to assess the need to develop additional regulations or refine past regulations on captive insurers and self-funded pools—types of self-insurers. Additionally, Treasury has not yet developed processes for auditing claims payments to insurers. However, Treasury plans to issue a request for proposal (RFP) for a postclaims auditing contractor in the third quarter of fiscal year 2004. The contractor will review claims and conduct field audits of insurers after an event to ensure that underlying documents support claims submitted to Treasury. Treasury officials anticipate awarding a contract in the fourth quarter of fiscal year 2004. Moreover, Treasury plans to develop guidance encompassing business procedures and audit parameters that will trigger reviews and audits. Treasury officials also said that other ongoing and completed work associated with the claims-processing function lays the foundation for the claims auditing process. Finally, a Treasury official estimated that by the end of fiscal year 2004, Treasury would implement all of the processes that would have to be in place before an event occurred. After fiscal year 2004, Treasury plans to develop procedures for requirements that will not need to be in place until after an event has occurred—such as recoupment and surcharges. Lastly, a key TRIP function—the actual processing and payment of claims—is not yet operational. From the beginning of its planning efforts, Treasury had envisioned that contractors would handle TRIA claims processing in the aftermath of a terrorist attack. According to TRIP officials, after incorporating a basic regulatory framework, one of the first priorities for the TRIP office was to write and issue an RFP to procure contractors to perform claims services. Treasury issued an RFP for claims-processing and payment services in December 2003, but had not hired any contractors as of March 1, 2004.
Treasury attempted to accelerate the procurement process by reducing the number of days allowed for bidders to respond to the RFP and dedicating all TRIP staff to reviewing the proposals. However, the number of proposals received has pushed the contract award date beyond original estimates of February 2004. Treasury officials now believe they will award a contract by April 2004. Treasury has also continued to develop a proposed rule, related guidance, claims management requirements for the claims contractor, and processes necessary to manage the claims function, and has worked with industry to devise standard forms. Moreover, once the claims processing contract is awarded, Treasury plans to establish electronic interfaces between itself and the contractor, test the contractor’s systems and processes by using “dummy” claims submitted by insurers, and establish an electronic funds transfer process to speed reimbursement of insured losses. NAIC Is Fulfilling Its Advisory Role under TRIA NAIC is working with Treasury on various aspects of implementing TRIA, effectively fulfilling its advisory role. In January 2003, NAIC formed the Terrorism Insurance Implementation Working Group to work with Treasury. The working group consists of representatives from nine states and the District of Columbia, who are led by a state insurance commissioner, and has provided input to Treasury on an ongoing basis. In particular, the working group assisted Treasury each time it issued guidance and rules, according to Treasury and NAIC officials. For example, Treasury officials reported that NAIC aided them in writing a detailed definition for “insurer” for its first interim final rule published in the Federal Register in February 2003. NAIC coordinated meetings between Treasury and state insurance regulators to align or address differences in definitions that exist across the 50 states, the District of Columbia, Puerto Rico, and three U.S. territories.
As noted previously, TRIA directed that state regulations be preserved when possible; thus, the definitions had to be highly consistent with state regulations. An NAIC official also explained that NAIC tried to ensure that the language of its suggestions to Treasury, when implemented, would be enforceable by all state insurance regulators. NAIC also aided Treasury in outreach and education efforts. In the weeks before TRIA was enacted, NAIC issued press releases informing insurers of the impending act and urging them to prepare for its new requirements. Moreover, NAIC applied its expertise in developing a model bulletin, regulations, and forms to help state regulators and insurers expeditiously carry out TRIA responsibilities. For example, NAIC issued a model bulletin, which state regulators could use to communicate key terms and definitions and explain the application of TRIA to losses resulting from foreign sources versus domestic sources of terrorism. NAIC made the model bulletin available on its Web site immediately upon the enactment of TRIA. NAIC also developed model disclosure forms for insurers to use when informing their policyholders about the availability of terrorism insurance under TRIA. As discussed previously, TRIA requires insurers to send disclosure notices to their policyholders about the availability and cost of terrorism insurance and the 90 percent federal share. Insurance Companies Made Changes to Their Operations to Comply with TRIA In order to comply with TRIA requirements, primarily those concerning disclosure to policyholders, insurers generally have made changes to their operations. According to an official of a large insurance company, to develop and disseminate information about TRIA terms and coverage, insurers have changed policies, software, and forms; trained staff; revised actuarial information and underwriting procedures; and expanded outreach and marketing.
For example, insurers had to send revised premium information in disclosure notices to hundreds of thousands of policyholders, as well as submit thousands of new premium rates and the associated policy language to state regulators for approval. Had insurers failed to make these disclosures, they would have lost their eligibility for reimbursement under TRIA. While the disclosure requirements necessitated many revisions to insurer operations, insurers did have the benefit of a “safe harbor.” As previously discussed, Treasury determined that use of NAIC’s model disclosure form constituted compliance with TRIA’s disclosure requirements. Moreover, insurers using NAIC’s model form could obtain coverage decisions from their policyholders without first investing time in devising a disclosure notice—a time-consuming process that would include review by an insurer’s legal staff for compliance with TRIA requirements. Given that TRIA invalidated terrorism exclusions as soon as it was enacted, insurers were exposed to uncompensated risks (i.e., the potential for having to pay for all the losses in a terrorist event without having received a premium) until their existing policyholders received written disclosures and either accepted or rejected the coverage.

Insurers Are Concerned That the Pace of TRIA Implementation Could Affect Business Planning, Reduce Cash Flow, or Result in Insolvency

Insurers have expressed a number of concerns about Treasury’s implementation of TRIA. Insurers are concerned that Treasury has not yet decided whether to extend the “make available” requirement through 2005; they are also concerned about the potential length of time it may take for the Secretary of the Treasury to certify a terrorist event, potential inefficiencies and time lags in processing and paying claims once an event is certified, and the issue of TRIA expiration.
TRIA gives Treasury until September 2004 to decide whether to require insurers to make terrorism insurance available—on terms that do not differ materially from those of other coverage—for policies issued or renewed in 2005, the third year of the program. Insurers have stated that this deadline is too late, because they need to make underwriting, pricing, and coverage decisions for these policies in mid-2004. However, Treasury has yet to make a decision about the “make available” requirement for policies issued or renewed in 2005. If Treasury did not extend the requirement through 2005, insurers would have to evaluate and possibly revise prices and terms for newly issued and renewing policies, according to an insurance official. Moreover, regulatory approval for these changes might take longer than the time it took to approve the changes to policies and procedures that insurers initially made to implement TRIA. TRIA allowed for federal preemption of the states’ authority to approve insurance policy rates and conditions, but the preemption expired on December 31, 2003—returning insurers to the previous regulatory scheme, in which they must obtain approval from each state that imposes such requirements in order to sell insurance. Thus, the timing of Treasury’s announcement on the extension may cost both companies and policyholders money if policy changes cannot be implemented in time to issue or renew policies. Insurers also are concerned that a delay in Treasury’s certification of a terrorist event as eligible for federal reimbursement, in conjunction with state regulations requiring prompt payment of claims, could create cash flow problems or even lead to insolvency for some insurers.
While TRIA does not specify the length of time available for determining whether an event meets the criteria for certification, insurers in most states are bound by law and regulation to pay claims in a timely manner, which means they may have to pay policyholder claims in full without waiting for Treasury to certify an event, an NAIC official said. Because of this requirement to pay claims promptly, insurers face potentially negative financial consequences under two possible scenarios: if Treasury made the certification decision only after an extended period of time, or if Treasury ultimately decided, after an extended period of time, not to certify an event. Under the first scenario, insurance industry observers have said that insurers could experience a cash flow problem while awaiting a certification decision—and thus reimbursement of the 90 percent federal share—because they would have already paid 100 percent of the claimed losses. Insurers cited the anthrax letter incidents as an example of their concerns about certification time frames, because law enforcement officials still have not identified the source, whether foreign or domestic, more than 2 years after the incidents. Under the second scenario, insurers could become insolvent if Treasury decided not to certify an event (i.e., decided the act was not the work of terrorists acting on behalf of foreign interests) after insurers had already paid policyholder claims. Unless the policyholder had paid for coverage of all terrorist events—including those caused by domestic terrorists, which would be excluded from reimbursement under TRIA—insurers would have paid for losses for which they had collected no premium. Insurers would have no way to recover payments already made to policyholders for losses associated with the event other than to seek remedies through the courts, an NAIC official explained.
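The cash flow arithmetic behind these two scenarios can be sketched as follows. The function name, the dollar amounts, and the deductible below are hypothetical illustrations, not figures from this report; the sketch assumes only TRIA's basic mechanism of a 90 percent federal share of insured losses above the insurer's deductible.

```python
# Illustrative sketch (hypothetical figures) of an insurer's net outlay
# while awaiting Treasury's certification decision under TRIA.
# Assumes the 90 percent federal share applies to insured losses above
# the insurer's deductible; all dollar amounts are invented.

def insurer_net_outlay(claims_paid, deductible, certified):
    """Net cash the insurer is out of pocket once Treasury decides.

    State prompt-payment rules mean the insurer pays `claims_paid` in
    full up front. If the event is certified, Treasury reimburses 90%
    of losses above the deductible; if not, the insurer absorbs it all.
    """
    if not certified:
        return claims_paid  # scenario 2: no federal reimbursement at all
    federal_share = 0.90 * max(claims_paid - deductible, 0)
    return claims_paid - federal_share

claims = 500_000_000      # hypothetical claims paid in full up front
deductible = 100_000_000  # hypothetical insurer deductible

# Scenario 1: certified, but only after a long wait -- the insurer
# floats the full $500M until the federal share arrives.
print(insurer_net_outlay(claims, deductible, certified=True))   # 140000000.0

# Scenario 2: Treasury declines to certify -- the insurer keeps the
# whole loss, having collected no terrorism premium.
print(insurer_net_outlay(claims, deductible, certified=False))  # 500000000
```

Even in the favorable first scenario, the insurer carries the full claims amount as a receivable until reimbursement arrives, which is the cash flow exposure insurers describe.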
Treasury has responded that the certification process is complex and could require extensive investigation and correlation of information from many sources, most not under Treasury’s control. As a result, although Treasury officials said that they understood the difficulties facing insurers, they also felt that placing specific time limits on those making the certification decision would impose unworkable constraints on an already complex and difficult process. Treasury has taken some steps to facilitate the certification process by communicating with the Department of Justice and the Department of State. Specifically, Treasury has identified contacts within these agencies and has met with relevant individuals to discuss their roles in the certification process. Insurers are also concerned that the length of time Treasury may take to process and pay claims could affect an insurer’s cash flow. Treasury’s capacity to pay claims relatively quickly will determine how fast insurers receive the 90 percent federal share. According to an insurance company official, because of the long-standing relationships and familiarity that insurers have with reinsurers, it is often possible to receive speedy payment for losses. Insurers are concerned that this might not be possible with the TRIP office, especially since the claims-paying mechanism has yet to be created. Treasury officials explained that without a close preexisting relationship like that between an insurer and a reinsurer, some procedures may, of necessity, differ. As noted previously, Treasury published a proposed rule addressing the claims-paying process. However, the proposed rule does not specify the maximum number of days in which Treasury must pay claims. According to a Treasury official, establishing a time frame for payment would not be appropriate.
However, to address insurer concerns about prompt payment, Treasury has taken into consideration input received from the insurance industry and has been developing mechanisms to expedite the review, approval, and payment of claims. Treasury has also decided to use electronic funds transfers to insurers’ accounts to speed reimbursement to insurers with approved claims. Treasury officials said such a mechanism should reduce the potential for insurers to experience cash flow problems by eliminating the wait for Treasury to issue checks. Finally, insurance industry officials are worried that uncertainty about TRIA’s extension past 2005 will impede their business and planning processes. Although TRIA does not contain any specific extension provisions, Treasury officials have used forums such as NAIC and industry meetings to state that TRIA was designed to provide a program of three years’ duration. However, industry participants continue to believe that an extension is both possible and likely. As a result, they are concerned that a late decision to extend TRIA would create confusion and disarray in the industry because of the lead time needed to tailor business operations and plans to an insurance environment either with TRIA (or some other federal government backstop) or without one.

Despite Availability, Few Are Buying Terrorism Insurance, and the Industry Has Made Little Progress toward Post-TRIA Coverage

While TRIA has improved the availability of terrorism insurance, particularly for high-risk properties in major metropolitan areas, most commercial policyholders are not buying the coverage. Limited industry data suggest that 10 to 30 percent of commercial policyholders are purchasing terrorism insurance, perhaps because most policyholders perceive themselves at relatively low risk for a terrorist event. Some industry experts are concerned that those most at risk from terrorism are generally the ones buying terrorism insurance.
In combination with low purchase rates, these conditions could result in uninsured losses for those businesses without terrorism coverage or cause financial problems for insurers, should a terrorist event occur. Moreover, even policyholders who have purchased terrorism insurance may remain uninsured for significant risks arising from certified terrorist events involving NBC agents, radioactive contamination, or fire following the events. Finally, although insurers and some reinsurers have cautiously reentered the terrorism risk market, insurance industry participants have made little progress toward developing a mechanism that could permit the commercial insurance market to resume providing terrorism coverage without a government backstop.

TRIA Has Improved the Availability of Terrorism Insurance, and Some High-Risk Policyholders Have Bought Coverage

TRIA has improved the availability of terrorism insurance, especially for some high-risk policyholders. According to insurance and risk management experts, these were the policyholders who had difficulty finding coverage before TRIA. Although industry data on policyholder characteristics are limited and cannot be generalized to all policyholders in the United States, risk management and real estate representatives generally agree that after TRIA was passed, policyholders—including borrowers obtaining mortgages for “trophy” properties, owners and developers of high-risk properties in major city centers, and those in or near “trophy” properties—were able to purchase terrorism insurance. Additionally, TRIA contributed to better credit ratings for some commercial mortgage-backed securities. For example, prior to TRIA’s passage, the credit ratings of certain mortgage-backed securities, in which the underlying collateral consisted of a single high-risk commercial property, were downgraded because the property lacked or had inadequate terrorism insurance.
The credit ratings for other types of mortgage-backed securities, in which the underlying assets were pools of many types of commercial properties, were also downgraded but not to the same extent because the number and variety of properties in the pool diversified their risk of terrorism. Because TRIA made terrorism insurance available for the underlying assets, thus reducing the risk of losses from terrorist events, it improved the overall credit ratings of mortgage-backed securities, particularly single-asset mortgage-backed securities. Credit ratings affect investment decisions that revolve around factors such as interest rates because higher credit ratings result in lower costs of capital. According to an industry expert, investors use credit ratings as guidance when evaluating the risk of mortgage-backed securities for investment purposes. Higher credit ratings reflect lower credit risks. The typical investor response to lower credit risks is to accept lower returns, thereby reducing the cost of capital, which translates into lower interest rates for the borrower.

Most Policyholders Have Not Bought Terrorism Insurance

Although TRIA improved the availability of terrorism insurance, relatively few policyholders have purchased terrorism coverage. We testified previously that prior to September 11, 2001, policyholders enjoyed “free” coverage for terrorism risks because insurers believed that this risk was so low that they provided the coverage without additional premiums as part of the policyholder’s general property insurance policy. After September 11, prices for coverage increased rapidly and, in some cases, insurance became very difficult to find at any price. Although a purpose of TRIA is to make terrorism insurance available and affordable, the act does not specify a price structure.
However, experts in the insurance industry generally agree that after the passage of TRIA, low-risk policyholders (e.g., those not in major urban centers) received relatively low-priced offers for terrorism insurance compared to high-risk policyholders, and some policyholders received terrorism coverage without additional premium charges. Yet according to insurance experts, despite low premiums, many businesses (especially those not in “target” localities or industries) did not buy terrorism insurance. Some simply may not have perceived themselves at risk from terrorist events and considered terrorism insurance, even at low premiums (relative to high-risk areas), a bad investment. According to insurance sources, other policyholders may have deferred their decision to buy terrorism insurance until their policy renewal date. Some industry experts have voiced concerns that low purchase rates may indicate adverse selection—where those at the most risk from terrorism are generally the only ones buying terrorism insurance. Although industry surveys are limited in their scope and not appropriate for market-wide projections, the surveys are consistent with each other in finding low “take-up” rates (the percentage of policyholders buying terrorism insurance), ranging from 10 to 30 percent. According to one industry survey, the highest take-up rates have occurred in the Northeast, where premiums were generally higher than in the rest of the country. The combination of low take-up rates and the high concentration of purchases in an area thought to be most at risk raises concerns that, depending on its location, a terrorist event could have additional negative effects.
If a terrorist event took place in a location not thought to be a terrorist “target,” where most businesses had chosen not to purchase terrorism insurance, then businesses would receive little funding from insurance claims for business recovery efforts, with consequent negative effects on owners, employers, suppliers, and customers. Alternatively, if the terrorist event took place in a location deemed to be a “target,” where most businesses had purchased terrorism insurance, then adverse selection could result in significant financial problems for insurers. A small customer base of geographically concentrated, high-risk policyholders could leave insurers unable to cover potential losses and facing possible insolvency. If, however, a higher percentage of business owners had chosen to buy the coverage, the increased number of policyholders would have reduced the chance that losses in any one geographic location would create a significant financial problem for an insurer.

Tighter Exclusions Leave Policyholders Exposed to Significant Perils

Since September 11, 2001, the insurance industry has moved to tighten long-standing exclusions from coverage for losses resulting from NBC attacks and radiation contamination. As a result of these exclusions and the actions of a growing number of state legislatures to exclude losses from fire following a terrorist attack, even those policyholders who choose to buy terrorism insurance may be exposed to potentially significant losses. Although NBC coverage was generally not available before September 11, after that event insurers and reinsurers recognized the enormity of potential losses from terrorist events and introduced new practices and tightened treaty language to limit as much of their loss exposure as possible. (We discuss some of these practices and exclusions in more detail in the next section.)
State regulators and legislatures have approved these exclusions, allowing insurers to restrict the terms and conditions of coverage for these perils. Moreover, because TRIA’s “make available” requirements state that terms for terrorism coverage be similar to those offered for other types of policies, insurers may choose to exclude the perils from terrorism coverage just as they have in other types of coverage. According to Treasury officials, TRIA does not preclude Treasury from providing reimbursement for NBC events, if insurers offered this coverage. However, policyholder losses from perils excluded from coverage, such as NBCs, would not be “insured losses” as defined by TRIA and would not be covered even in the event of a certified terrorist attack. In an increasing number of states, policyholders may not be able to recover losses from fire following a terrorist event if the coverage in those states is not purchased as part of the offered terrorism coverage. We have previously reported that approximately 30 states had laws requiring coverage for fire following an event—known as the standard fire policy (SFP)—irrespective of the fire’s cause. Therefore, in SFP states, fire following a terrorist event is covered whether or not there is insurance coverage for terrorism. After the terrorist attacks of September 11, 2001, some legislatures in SFP states amended their laws to allow the exclusion of fire following a terrorist event from coverage. As of March 1, 2004, 7 of the 30 SFP states had amended their laws to allow for the exclusion of acts of terrorism from statutory coverage requirements. However, as discussed previously, the “make available” provision requires coverage terms offered for terrorist events to be similar to coverage for other events.
Treasury officials explained that in all non-SFP states, and in the 7 states with modified SFPs, insurers must include coverage for fire following a certified terrorist event in their offer of terrorism insurance, because coverage for fire is part of the property coverage for all other risks. Thus, policyholders who have accepted the offer would be covered for fire following a terrorist event, even though their state allows exclusion of the coverage. However, policyholders who have rejected the offer of terrorism coverage would not be covered for fire following a terrorist event. According to insurance experts, losses from fire damage can be a relatively large proportion of the total property loss. As a result, excluding terrorist events from SFP requirements could result in potentially large losses that cannot be recovered if the policyholder did not purchase terrorism coverage. For example, following the 1994 Northridge earthquake in California, total insured losses were $15 billion—$12.5 billion of which were for fire damage. According to an insurance expert, policyholders were able to recover losses from fire damage because California is an SFP state, even though most policies had excluded coverage for earthquakes.

Reinsurers Have Cautiously Returned to the Terrorism Insurance Market, but Many Insurers Have Not Bought Reinsurance

Under TRIA, reinsurers are offering a limited amount of coverage for terrorist events, specifically for the insurer deductibles and 10 percent share, but insurers have not been buying much of this reinsurance. According to insurance industry sources, TRIA’s ceiling on potential losses has enabled reinsurers to return cautiously to the market. That is, reinsurers generally are not offering coverage for terrorism risk beyond the limits of the insurer deductibles and the 10 percent share that insurers may have to pay under TRIA.
In spite of reinsurers’ willingness to offer this coverage, company representatives have said that many insurers have not purchased reinsurance. Insurance experts suggested that the low demand for the reinsurance might reflect, in part, commercial policyholders’ generally low take-up rate for terrorism insurance. Moreover, insurance experts also have suggested that insurers may believe that the price of reinsurance is too high relative to the premiums they are earning from policyholders for terrorism insurance. The relatively high prices charged for the limited amounts of terrorism reinsurance available are probably the result of interrelated factors. First, even before September 11 both insurance and reinsurance markets were beginning to harden; that is, prices were beginning to increase after several years of lower prices. Reinsurance losses resulting from September 11 also depressed reinsurance capacity and accelerated the rise in prices. The resulting hard market for property-casualty insurance affected the price of most lines of insurance and reinsurance. A notable example has been the market for medical malpractice insurance. The hard market is only now showing signs of coming to an end, with a resulting stabilization of prices for most lines of insurance. In addition to the effects of the hard market, reinsurer awareness of the adverse selection that may be occurring in the commercial insurance market could be another factor contributing to higher reinsurance prices. Adverse selection usually represents a larger-than-expected exposure to loss. Reinsurers are likely to react by increasing prices for the terrorism coverage that they do sell. In spite of the reentry of reinsurers into the terrorism market, insurance experts said that without TRIA caps on potential losses, both insurers and reinsurers likely would still be unwilling to sell terrorism coverage because they have not found a reliable way to price their exposure to terrorist losses. 
According to industry representatives, neither insurers nor reinsurers can estimate potential losses from terrorism or determine prices for terrorism insurance without a pricing model that can estimate both the frequency and the severity of terrorist events. Reinsurance experts said that current models of risks for terrorist events do not have enough historical data to dependably estimate the frequency or severity of terrorist events, and therefore cannot be relied upon for pricing terrorism insurance. According to the experts, the models can predict a likely range of insured losses resulting from the damage if specific event parameters such as type and size of weapon and the location are specified. However, the models are unable to predict the probability of such an attack. Even as they are charging high prices, reinsurers are covering less. In response to the losses of September 11, industry sources have said that reinsurers have changed some practices to limit their exposures to acts of terrorism. For example, reinsurers have begun monitoring their exposures by geographic area, requiring more detailed information from insurers, introducing annual aggregate limits and event limits, excluding large insurable values, and requiring stricter measures to safeguard assets and lives where risks are high. And as discussed previously, almost immediately after September 11 reinsurers began broadening NBC exclusions beyond scenarios involving industrial accidents, such as nuclear plant accidents and chemical spills, to encompass intentional destruction from terrorists. For example, post-September 11 exclusions for nuclear risks include losses from radioactive contamination to property and radiation sickness from dirty bombs. 
As of March 1, 2004, industry sources indicated that there has been little development or movement among insurers or reinsurers toward developing a private-sector mechanism that could provide capacity, without government involvement, to absorb losses from terrorist events. Industry officials have said that their level of willingness to participate more fully in the terrorism insurance market in the future will be determined, in part, by whether any more events occur. Industry sources could not predict if reinsurers would return to the terrorism insurance market after TRIA expires, even after several years and even if no more major terrorist attacks were to occur in the United States. They explained that reinsurers are still recovering from the enormous losses of September 11 and still cannot price terrorism coverage. In the long term and without another major terrorist attack, insurance and reinsurance companies might eventually return. However, should another major terrorist attack take place, reinsurers told us that they would not return to this market—with or without TRIA.

Conclusions

TRIA gave Treasury a very challenging task—to develop what is effectively the world’s largest reinsurer. This task was complicated by the very real possibility that Treasury could have been called on to perform at any time, without advance notice. More than a year after TRIA took effect, key pieces of this reinsurance entity are either in place or nearly in place. Perhaps most importantly for Treasury, the U.S. government, and the American people, no further terrorist attack, major or minor, has yet occurred on American soil. In spite of this breathing space and all that Treasury has accomplished, considerable work remains. Key components of the Terrorism Risk Insurance Program defined by TRIA remain uncompleted. At best, all the components will be in place shortly before the second anniversary of the 3-year program.
Given the complexity of the task, it is difficult to be critical, particularly in the absence of a terrorist event. However, had an attack occurred, the incomplete preparation could have added to the plight of the victims. Congress had two major objectives in establishing TRIA. The first was to ensure that business activity did not suffer from the lack of insurance by requiring insurers to continue to provide protection from the financial consequences of another terrorist attack. Since TRIA was enacted in November 2002, terrorism insurance generally has been available to businesses. While most have not purchased this coverage, purchases have been higher in areas considered to be at high risk of another terrorist attack. Quantifiable evidence is lacking on whether having TRIA coverage available has contributed to the economy. However, the current revival of economic activity suggests that the decision of most commercial policyholders to decline terrorism coverage has not resulted in widespread, negative economic effects. As a result, the first objective of TRIA appears largely to have been achieved. Congress’s second objective was to give the insurance industry a transitional period during which it could begin pricing terrorism risks and developing ways to provide such insurance after TRIA expires. The insurance industry has not yet achieved this goal. We observed after September 11 the crucial importance of reinsurers for the survival of the terrorism insurance market and reported that reinsurers’ inability to price terrorism risks was a major factor in their departure from the market. Additionally, most industry experts are tentative about predictions of the level of reinsurer and insurer participation in the terrorism insurance market after TRIA expires.
Unfortunately, insurers and reinsurers still have not found a reliable method for pricing terrorism insurance, and although TRIA has provided reinsurers the opportunity to reenter the market to a limited extent, industry participants have not developed a mechanism to replace TRIA. As a result, reinsurer and, consequently, insurer participation in the terrorism insurance market likely will decline significantly after TRIA expires. Not only has no private-sector mechanism emerged for supplying terrorism insurance after TRIA expires, but to date there also has been little discussion of possible alternatives for ensuring the availability and affordability of terrorism coverage after TRIA expires. Congress may benefit from an informed assessment of possible alternatives—including both wholly private alternatives and alternatives that could involve some government participation or action. Such an assessment could be a part of Treasury’s TRIA-mandated study to “assess…the likely capacity of the property and casualty insurance industry to offer insurance for terrorism risk after termination of the Program.”

Recommendation for Executive Action

As part of its response to the TRIA-mandated study, which requires an assessment of the effectiveness of TRIA and an evaluation of the industry’s capacity to offer terrorism insurance after TRIA expires, we recommend that the Secretary of the Treasury, after consulting with the insurance industry and other interested parties, also identify for Congress an array of alternatives that may exist for expanding the availability and affordability of terrorism insurance after TRIA expires. These alternatives could assist Congress during its deliberations on how best to ensure the availability and affordability of terrorism insurance after December 2005.

Agency Comments

We requested comments on a draft of this report from the head of the Department of the Treasury or his designee.
The Assistant Secretary for Financial Institutions at Treasury provided written comments, which are included in appendix I, stating in general that Treasury believed our report provided a thorough and well-balanced discussion of the impact and implementation of the Terrorism Risk Insurance Act of 2002. These written comments also provided amplification of certain points related to Treasury’s implementation of the act. For example, Treasury commented that its “… implementation of TRIA has been guided by prioritizing the actions that were needed to make the program operational right away.” Treasury also described the emergency procedures in place since “the early days of the program.” Treasury believes these contingency plans would have allowed it to establish and implement a process for receiving, reviewing, and paying claims that would have enabled it to respond quickly to a terrorist event, had that been necessary. Treasury also provided technical comments on the report that were incorporated as appropriate. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Chair and Ranking Minority Member, Senate Committee on Banking, Housing and Urban Affairs; the Ranking Minority Member, Committee on Financial Services, House of Representatives; and other interested congressional members and committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. This report was prepared under the direction of Lawrence D. Cluff, Assistant Director. If you or your staff have any questions regarding this report, please contact the Assistant Director or me at (202) 512-8678. Barry Kirby, Tarek Mahmassani, Angela Pun, and Barbara Roesmann also made key contributions to this report.
Comments from the Department of the Treasury

After the terrorist attacks of September 11, 2001, insurance coverage for terrorism largely disappeared. Congress passed the Terrorism Risk Insurance Act (TRIA) in 2002 to help commercial property-casualty policyholders obtain terrorism insurance and give the insurance industry time to develop mechanisms to provide such insurance after the act expires on December 31, 2005.
Under TRIA, the Department of the Treasury caps insurer liability and would process claims and reimburse insurers for a large share of losses from terrorist acts that Treasury certified as meeting certain criteria. As Treasury and industry participants have operated under TRIA for more than a year, GAO was asked to describe (1) their progress in implementing the act and (2) changes in the terrorism insurance market under TRIA. Treasury and industry participants have made significant progress in implementing TRIA during its first year, but Treasury has important work to complete in order to comply with its responsibilities under the act. For example, Treasury has issued regulations to define program requirements, created and fully staffed the Terrorism Risk Insurance Program office, and begun data collection efforts in support of mandated studies. Insurers also have adjusted their operations and policies to comply with TRIA. However, insurers have expressed concerns that Treasury has not yet decided whether to extend through 2005 the requirement that insurers offer terrorism coverage on terms that do not differ materially from other coverage. Although the act gives Treasury until September 1, 2004, to decide this issue, a more timely decision is needed to avoid hindering underwriting and pricing decisions for policies that are issued or renewed through 2005. In addition, Treasury has not fully established a claims processing and payment structure. Insurers are concerned that a delayed payment of claims by Treasury, whether because of the length of time taken to certify that an act of terrorism met the requirements for federal reimbursement or from inadequate claims processing capability, might seriously impact insurer cash flows or, in certain circumstances, insurer solvency. It appears that Congress's first objective in creating TRIA—to ensure that business activity did not materially suffer from a lack of available terrorism insurance—has been largely achieved. 
Since TRIA was enacted in November 2002, terrorism insurance has been generally available to businesses. But most commercial policyholders are not buying the coverage. According to insurance industry experts, purchases have been higher in areas considered to be at high risk of another terrorist attack. However, many policyholders with businesses or properties not located in perceived high-risk locations are not buying coverage because they view any price for terrorism insurance as high relative to their perceived risk exposure. Further, those who have bought terrorism insurance remain exposed to significant perils. Insurers have broadened long-standing policy exclusions of nuclear, biological, and chemical events. Congress's second objective—to give private industry a transitional period during which it could begin pricing terrorism insurance and develop ways to cover losses after TRIA expired—has not yet been achieved. Industry sources indicated that under TRIA, insurance market participants have made no progress to date toward the development of reliable methods for pricing terrorism risks and made little movement toward developing any mechanism that would enable insurers to provide terrorism insurance to businesses without government involvement.
Background

Federal agencies are generally permitted to contract with any qualified entity for any authorized purpose so long as that entity is not prohibited from receiving government contracts and the contract is not for an inherently governmental function. Agencies are required to use contractors that have a satisfactory record of integrity and business ethics, a record of successful past performance, and the financial and other resources needed to perform the contract. The FAR generally requires agencies to conduct full and open competition for contracts. Under the FAR, a federal agency may terminate a contract either for the government’s convenience or if the agency determines that the contractor is in default. The contractor does not have similar rights to terminate its contract with the government. The FAR also provides agencies with several methods to pay contractors—some of which allow for financial incentives for meeting performance goals. Because of provisions in the Social Security Act, Medicare claims administration contracting had unique features that differed from most other federal contracting. Before Medicare was enacted in 1965, providers had been concerned that the program would give the government too much control over health care. To increase providers’ acceptance of the new program, the Congress ensured that health insurers like Blue Cross and Blue Shield, which already served as payers of health care services to physicians and hospitals, became the contractors paying providers for Medicare services. Medicare’s authorizing legislation specified that contractors called fiscal intermediaries would administer Part A and Part B claims paid to hospitals and other institutions, such as home health agencies. Contractors called carriers would administer the majority of Part B claims for the services of physicians and other providers. 
By law, Medicare was required to choose its fiscal intermediaries from among organizations that were first selected by associations representing providers, a process called provider nomination. Medicare was also required to choose health insurers or similar companies to serve as its carriers and, by statute, did not have to award the contracts through competition. In addition, Medicare contracts were generally renewed each year. As a result, since the inception of the program, most Medicare claims administration contracts have been awarded and renewed on a noncompetitive basis, with limited exceptions. Contractors could not be terminated from the program unless they were first provided with an opportunity for a public hearing—a process not afforded under the FAR. Unlike other federal contractors, claims administration contractors could terminate their contracts. In addition, the contractors were paid on the basis of their allowable costs, generally without financial incentives to encourage superior performance. The MMA required CMS to significantly change its contracting arrangements and follow the FAR, except to the extent inconsistent with a specific requirement of the MMA. The MMA removed the specific procedures for selecting fiscal intermediaries and carriers, and unlike CMS’s existing contracts, MAC contracts must be fully and openly recompeted at least every 5 years. In addition, the new contracts will also contain performance incentives for contractors. Finally, MACs will not be permitted to default on their contracts or terminate their contracts as allowed under current contracting practices. Contract termination will follow the requirements of the FAR, which allow the government to terminate contracts for its convenience or for contractor default. MACs will assume work that is currently performed by 51 claims administration contractors. At present, there are 25 fiscal intermediaries and 18 carriers. 
In addition, four durable medical equipment (DME) regional carriers pay claims submitted by suppliers of DME, prosthetics, orthotics, and supplies, and four regional home health intermediaries process home health and hospice (HH) claims. CMS plans to select 23 MACs to serve specific jurisdictions, including 15 A/B MACs, which will process both Part A and Part B claims; 4 DME MACs, which will process claims for DME, prosthetics, orthotics, and supplies; and 4 HH MACs, which will process claims for HH care. CMS’s current schedule calls for the full fee-for-service contracting workload to be transferred to MACs by July 2009. CMS plans to conduct competitions for existing Medicare contractor workloads beginning with a start-up acquisition and transition cycle for the 4 DME MACs and 1 A/B MAC. The start-up cycle will be followed by two additional acquisition and transition cycles. Appendix VI shows CMS’s schedule and timing for competing all of the MAC contracts. MACs will be responsible for most of the functions currently performed by fiscal intermediaries and carriers. They will process and pay claims, handle first-level appeals of denied claims, and serve as providers’ primary contact with Medicare. In addition, they will coordinate with CMS’s functional Medicare contractors that perform limited Medicare functions on a national or regional basis, such as answering the 1-800-MEDICARE help line, coordinating Medicare and other insurance benefits, and conducting program safeguard activities. For example, the functional Program Safeguard Contractors (PSC) conduct activities to prevent or address improper payments—such as investigating potential fraudulent billing related to the claims paid by the claims administration contractors. MACs’ responsibilities for medical reviews of claims; benefit integrity, which involves the investigation of suspected fraud; and beneficiary inquiries will differ in some respects from those of the current claims administration contractors. 
Currently, three of the four DME regional carriers conduct their own medical reviews and benefit integrity activities for the claims they process. Under contracting reform, PSCs will be responsible for performing all medical reviews and benefit integrity activities related to the claims processed by the DME MACs. These responsibilities will be allocated differently for A/B MACs. All A/B MACs will conduct medical reviews of the Part A and Part B claims they will process, while PSCs will be responsible for conducting benefit integrity activities related to these claims. The current Medicare claims administration contractors respond to beneficiaries’ questions that are specific to their claims, while staff from 1-800-MEDICARE answer general questions on the telephone help line. In the future, staff at beneficiary contact centers (BCC) will answer calls placed to 1-800-MEDICARE and assume the role of responding to general and claims-specific questions. MACs will be responsible for responding to more complex inquiries from beneficiaries that require a more advanced understanding of Medicare claims processing or coverage rules.

CMS’s Plan Does Not Provide an Appropriate Implementation Framework in All Critical Areas

CMS’s plan for contracting reform provides detailed information—and an appropriate framework for implementation—in some, but not all, critical areas. For example, the plan presents detail on the proposed schedule for MAC implementation. Nevertheless, as figure 1 shows, the plan does not provide detailed information on the risks associated with contracting reform, some aspects of CMS’s implementation approach, and the integration of reform activities with other initiatives. CMS has recently taken steps to address areas of the plan where details and complete information were lacking, as part of its ongoing planning efforts. However, key decisions relating to critical areas are yet to be made and incorporated into its plan. 
CMS’s Plan Provides Useful Information about Some Aspects of Implementation

CMS’s plan provides a clear discussion of the reasons for implementing contracting reform, including the restrictions and weaknesses in the current system, as shown in table 1. The plan also recognizes the benefits of improving Medicare contracting for beneficiaries and providers, such as providing a single point of contact for providers’ claims-related inquiries. The plan also provides maps of the current jurisdictions of Medicare contractors and future jurisdictions of MACs. A CMS official told us that the agency took beneficiaries’ patterns of care into account when drawing jurisdictional lines. In addition, according to the plan, CMS designed the new MAC jurisdictions, which were based on state boundaries, to achieve operational efficiencies, promote competition, and better balance the allocation of workloads. For example, one fiscal intermediary and three carriers currently serve New York, while two fiscal intermediaries and one carrier serve Connecticut. Under contracting reform, one A/B MAC will administer Part A and Part B claims for beneficiaries residing in these two states. Currently, different claims administration contractors handle Part A and Part B claims in the majority of states. For example, in Michigan, United Government Services processes Part A claims, and Wisconsin Physicians Service Insurance Company processes Part B claims. In addition, while some current contractors serve one state, others serve several—sometimes noncontiguous—states. For example, Blue Cross Blue Shield of Arizona processes Part A claims exclusively in Arizona, while National Heritage Insurance Company processes Part B claims on the East Coast in Maine, New Hampshire, Vermont, and Massachusetts and, on the West Coast, in California. 
The varying jurisdictions for contractors that process Part A and Part B claims have resulted in what CMS’s plan terms “a patchwork of responsibility and service.” CMS has developed 15 distinct, nonoverlapping geographic jurisdictions for the A/B MACs. Appendixes VII, VIII, IX, X, XI, and XII show the jurisdictional maps for the current fiscal intermediaries, the current carriers, the current regional home health intermediaries, the current DME regional carriers, the 15 new A/B MACs, and the 4 new DME MACs and 4 new HH MACs. A CMS official stated that while the A/B MACs’ jurisdictions continue to vary somewhat in size and workload, they are reasonably balanced in terms of the numbers of fee-for-service beneficiaries and providers served. However, CMS officials have stated that companies might be able to win more than one MAC contract, and, if so, their workloads in multiple jurisdictions would potentially be greater than those of companies that win contracts for a single jurisdiction. In addition to providing information on MACs’ jurisdictions, CMS’s plan provides timelines for implementing MAC contracting, including anticipated contract award dates for the start-up cycle and two subsequent cycles. CMS plans to monitor each cycle, including transitions, and to adjust the implementation schedule if necessary. The start-up cycle, which will result in the award of four contracts to DME MACs and one contract to an A/B MAC, should provide CMS with experience that can be applied to the next two cycles. For example, the start-up cycle will allow new CMS personnel to obtain additional expertise, if needed, on contracting activities. It will also allow CMS to examine its acquisition and transition efforts and apply lessons learned to future cycles. 
Recognizing that open communication with stakeholders is important to the successful implementation of contracting reform, CMS’s plan incorporates a written strategy to provide information and solicit questions, comments, and feedback on Medicare contracting reform from potential MACs, providers, and beneficiaries. This communication strategy includes periodically holding open meetings and establishing a Medicare contracting reform Web site. For example, CMS hosted a series of open meetings in 2004 and 2005 to share information and seek input on aspects of its contracting reform plan, including MAC jurisdictions, draft statements of work, and proposed performance standards. In addition, CMS’s Web site is routinely updated to provide answers to questions about contracting reform and provide access to important documents, such as its Report to Congress. The Web site also provides a link to a federal procurement Web site, where draft and final versions of MAC statements of work can be found. Interested parties, including organizations interested in competing for MAC contracts, provided feedback on these drafts through CMS’s open meetings and its Web site. In developing certain areas of the contracting reform plan, CMS also sought input from its headquarters and regional office staff. For example, CMS teams worked collaboratively to develop the draft statements of work for A/B MACs and DME MACs.

CMS’s Contracting Reform Plan Does Not Fully Address Three Critical Implementation Areas

While CMS’s contracting reform plan provides detailed information in some areas, it does not comprehensively address (1) contracting reform risks and how the agency plans to mitigate them; (2) the intended approach for implementing certain aspects of MAC contracting, including details on how CMS will monitor MACs’ performance; and (3) coordination of contracting reform activities with other complex initiatives that CMS is implementing. 
While a comprehensive contracting reform plan was due in October 2004, we found that the plan was still incomplete as of June 2005. The agency has begun to develop, but has not completed, a more detailed plan in critical implementation areas. Nevertheless, without having all of the critical elements of its plan in place, the agency is undertaking an accelerated schedule and intends to transfer all claims processing work to MACs by July 2009, more than 2 years ahead of the MMA’s time frame.

Plan Does Not Fully Address Implementation Risks

CMS’s plan does not comprehensively address three major risks and indicate the steps that the agency plans to take to mitigate them. These are CMS’s proposed implementation schedule, the volume and complexity of anticipated claims processing workload transitions, and the potential for voluntary contractor withdrawals. Each of these risks has the potential to disrupt claims administration services, resulting in delayed or improper payments to providers. The Report to Congress—one of the documents in CMS’s contracting reform plan—briefly noted that the anticipated implementation schedule “will require substantial risk management and schedule precision to minimize possible operational disruption.” CMS’s proposed implementation schedule calls for all work to be transferred to MACs by July 2009—more than 2 years ahead of the MMA’s time frame. The initial start-up acquisition cycle is taking place in a 27-month period—from April 2005 to July 2007—during which about 9 percent of the national claims processing workload will be transferred to MACs. If CMS chooses current contractors that are administering claims in the MAC jurisdictions, the percentage of the workload transferred would be less. In the first phase of the start-up cycle, CMS will select and transfer workload to 4 DME MACs. In the second phase, CMS will select and transfer workload to 1 A/B MAC. 
Following the initial start-up cycle, CMS is planning two acquisition cycles, which will last from September 2006 to July 2009, during which it will select and transfer the remaining current contractors’ work to 14 A/B MACs and 4 HH MACs. As part of these two cycles, in the 22 months from September 2007 to July 2009, CMS plans to manage transitions of as much as 91 percent of the annual Medicare claims processing workload, which represents an estimated $250 billion in payments to providers. The transition period for cycle one is 1 year, from September 2007 to September 2008, and the transition period for cycle two is 10 months, from September 2008 to July 2009. In 13 of the past 15 years, CMS has transferred at least one contractor’s workload. These transitions took an average of 6 to 9 months, with some lasting as long as a year. CMS decided on a more compressed schedule after initially considering a longer implementation period. In November 2004, CMS officials told us that they were planning to move to MAC contracting using six acquisition cycles to be completed around April 2011. According to the Report to Congress, CMS officials believed that the potential savings from contracting reform suggested that transferring larger portions of the workload to MACs in a shorter time frame would allow savings to accrue more quickly to the Medicare program. Despite the ambitious time frame for implementation, CMS’s plan does not provide detailed information on the risks involved in transferring large segments of Medicare’s claims processing workload on an accelerated schedule or outline contingency plans for the transitions to MACs. CMS’s accelerated schedule for cycles one and two leaves little time for CMS to examine its acquisition and transition efforts, apply lessons learned, and resolve disagreements about the agency’s award process with companies that were not selected. 
Furthermore, due to the accelerated cycle, interested companies—some of which may be among the best qualified to perform as MACs—may decide not to compete to win multiple MAC contracts because developing concurrent proposals or assuming the workload for more than one jurisdiction simultaneously might strain their resources. In addition, it may prove difficult for CMS staff to evaluate proposals, award contracts, and manage concurrent transitions within the proposed time frame. CMS’s plan does not provide details on its strategy for managing these compressed transitions with its anticipated staff resources. CMS officials expressed concerns to us that many of the staff most experienced in handling transitions were, or were close to being, eligible to retire and that CMS might have to manage these transitions with less experienced staff. In addition, CMS staff have never had to manage as many simultaneous transitions, which is likely to add to the challenge of managing them so that they are as smooth as possible for providers. As we reported previously, the lack of sufficient staff resources has hampered other transitions. The volume and complexity of claims workload transitions is a second risk that CMS’s plan does not adequately address. Although CMS has regularly managed the transitions of claims administration contractors’ workloads and functions and has much experience in doing so, recent transitions have affected only about 10 percent of the claims for Part A and Part B in any year. Nevertheless, CMS is planning to transfer 91 percent of current contractors’ workload to MACs in less than 2 years. Furthermore, the MAC transitions will be more complex than past contractor transitions because both Part A and Part B workloads will be transferred from multiple contractors to a single MAC in a new jurisdiction. 
These changes mean, for example, that under the initial A/B MAC contract that is awarded—one that involves less than 3 percent of the national workload in a six-state jurisdiction—CMS will simultaneously transfer as many as nine separate segments of current contractors’ workload to the new MAC. Figure 2 illustrates the transitions that will occur to consolidate the Part A and Part B workload in the first contract to be awarded for an A/B MAC jurisdiction. These transitions will also involve transferring some portions of the work currently being done by the carriers and fiscal intermediaries to functional contractors. For example, CMS will be transferring medical review and benefit integrity work from DME regional carriers to PSCs at the same time that the claims workload transfers to DME MACs. While the start-up cycle transitions are complex, they are planned to affect only 1 A/B MAC and the 4 DME MACs. CMS will be conducting a much greater number of transitions for cycles one and two, as the rest of claims administration work is transferred from current contractors to 14 A/B MACs and 4 HH MACs. Additional factors may add to the complexity of the transitions. For example, if current fiscal intermediaries and carriers choose not to compete or lose competitions for MAC contracts in the jurisdictions where they currently process claims, they may have little incentive to be highly cooperative in the transition activities. In these cases, their knowledgeable staff who would facilitate transitions may seek employment elsewhere. Further, MAC transitions may involve the transfer of workloads to companies new to Medicare operations, which would add complexity to the process. Another risk that CMS’s plan has not fully addressed is the potential impact that voluntary contractor withdrawals may have on the planned transition schedule. CMS has not developed mitigation strategies to deal with these potential withdrawals. 
Several CMS officials told us that they were concerned that some contractors might voluntarily withdraw before the agency’s planned competition for jurisdictions that included their current service areas because the contractors did not intend to compete as MACs. In addition, contractors that lose competitions may opt to leave the Medicare program before transitions to new MACs have been completed. CMS has the option of paying contractors’ staff retention bonuses, so that key contractor staff can work through transitions, but that may not be enough to convince contractors to stay in the program. Voluntary withdrawals could force CMS to conduct competitions and manage transitions for the affected jurisdictions on a different or more accelerated schedule than originally planned. CMS could elect to choose a Medicare claims administration contractor to briefly perform the withdrawing contractor’s work until a MAC is chosen for the affected jurisdiction, but this could be perceived as limiting competition by favoring one company over others. The ultimate risk from transitions that do not proceed smoothly or on schedule is that providers might not receive payment for the items or services they furnished to beneficiaries or could be paid inappropriately. Interrupting providers’ cash flow by failing to pay them can create significant problems in their operations. On the other hand, any increase in improper payments would create a further drain on the Medicare trust funds. In fiscal year 2004, CMS estimated that Medicare claims administration contractors’ net improper payments amounted to $19.9 billion. CMS has not completed a comprehensive risk mitigation plan to address the risks associated with contracting reform, but the agency has taken some initial steps to manage the risks. CMS has developed a procedure for identifying, analyzing, responding to, and monitoring and controlling risks. 
As part of this procedure, CMS has identified certain risks that may have an impact on implementation, including the availability of resources to complete scheduled procurement tasks and the difficulty of developing a clear, complete statement of work that minimizes the need for future contract modifications. The agency is currently working on developing a document that lists proposed actions that could mitigate these and other identified risks. However, CMS’s descriptions of proposed mitigation actions lack specificity. For example, to address the risk that CMS may not have the funding to conduct transition activities as scheduled, the proposed mitigation action is to “monitor federal appropriations,” but the document does not indicate how the agency might redeploy resources or restructure its transitions, should a funding gap occur. Further, CMS has not developed mitigation actions for some serious risks, including the failure to create internal processes for managing MACs. Without such internal processes, CMS may not be able to effectively administer MAC contracts.

Plan Lacks Detailed Information on MAC Contracting Strategy and Management and Oversight Approach

Although CMS has done extensive work toward developing a strategy that outlines how it intends to implement MAC contracting, the agency’s plan lacks important implementation information in some areas. For example, CMS has made final decisions concerning certain elements of the MAC contracting strategy, such as paying performance incentives to encourage contractor innovation, efficiency, and cost effectiveness. However, for A/B MACs, the plan does not provide complete and definitive information on the contract type, performance measures and incentive structure, proposal evaluation criteria, and methods for maintaining a competitive environment and conducting market research to gather information on the number and size of companies that may submit proposals. 
CMS’s MAC acquisition strategy, which will provide information on these areas, is not yet complete. The agency planned to finalize this strategy in July 2005 and to issue the A/B MAC request for proposals in September 2005. Knowing such critical contracting information well in advance of the issuance of the request for proposals would make it easier for interested parties to develop specific plans for competing to win A/B MAC contracts. Having a robust pool of potential contractors with good proposals would make it easier for CMS to choose applicants likely to be effective as MACs. CMS’s contracting reform plan states that some MAC functions will be integrated with those of other types of Medicare contractors, but the agency has not fully developed the details of this integration. For example, the plan states that CMS expects that PSCs will continue to perform activities such as medical reviews and fraud investigations in the future and will coordinate closely with MACs. In addition, the statements of work for DME MACs and A/B MACs require that they sign agreements with PSCs to define respective roles and responsibilities. Among their responsibilities, DME MACs and A/B MACs will be expected to coordinate with PSCs in referring potential fraud cases when, for example, MACs identify claim forms that have been altered to obtain a higher payment or when it appears that a supplier or provider may have attempted to obtain duplicate payments. MACs’ coordination with PSCs is critical because findings of fraud could affect payments to providers. Coordination with PSCs is also discussed in “concept of operations” documents for DME MACs and A/B MACs. These documents provide high-level information on how MACs will be expected to work with PSCs and other contractors that focus on particular Medicare program functions, such as claims appeals. 
However, CMS has yet to develop many details, including information on the specific steps that will be used to facilitate contractor coordination. In addition, CMS’s plan does not fully outline how the agency intends to evaluate and manage MACs. CMS’s plan incorporates a strategy paper on evaluating MACs’ performance, which was developed for the agency by a support services contractor. The strategy paper makes a number of recommendations, including establishing a specific office within CMS to gather, validate, and score contractor performance data and to share this information with agency management. CMS officials are currently considering these and other recommendations that were contained in the strategy paper. However, as of June 2005, they had not decided whether to implement any of these recommendations and had not completed their design of an approach for overseeing and evaluating MAC performance. Contractor oversight is an area of considerable concern, because, in the past, CMS’s failure to monitor Medicare claims administration contractors left the Medicare program vulnerable to fraud, waste, and abuse. For example, CMS did not always detect activities, such as the falsification of reports on contractor performance and the improper screening, processing, and paying of claims, that led to additional costs to the Medicare program. In developing the MAC oversight strategy paper, CMS’s contractor drew on the work of a cross-component work group within CMS that was established in April 2003. The work group reported in June 2004 that CMS lacked an integrated and coordinated framework to guide a wide range of evaluation activities and that the agency had difficulty in compiling a comprehensive view of individual contractor performance. 
The work group also noted that complete and accurate information on contractor performance will be imperative as contracts for MACs are periodically recompeted and determinations about their records of performance become part of the qualification criteria. This information could also be critical in determining the amounts CMS pays to MACs as performance incentives. The plan also lacks detailed information on organizational changes to better realign agency personnel to support the management and oversight of new MAC contracts because CMS has not made final decisions in this area. While CMS currently administers some types of contracts that are governed by the FAR, MAC contracts generally will be larger, more complex, and more challenging to administer. CMS’s past oversight of claims administration contractors was hindered by organizational weaknesses, and at present, multiple central office components and regional offices have responsibilities to help oversee and manage claims administration contractors. Having an organizational structure that is appropriately aligned for CMS to manage and oversee MACs will make it easier for the agency to routinely evaluate its contractors on the basis of a variety of newly established performance measures. CMS has reorganized the central office component that will be responsible for awarding MAC contracts and has been considering ways to use regional offices’ staff expertise to support MAC contracting efforts. However, CMS has not completed its plan for organizational changes, including the division of labor and responsibilities for management and oversight of Medicare contractors among CMS components and between the central and regional offices. 
Plan Does Not Fully Integrate Scheduling of Contracting Reform Activities with Other Initiatives

CMS has not developed an approach that fully integrates the planning and scheduling of Medicare contracting reform with other initiatives that will affect Medicare contractors, beneficiaries, and providers over the next several years. As CMS works toward implementing contracting reform, it is also focusing on several critical initiatives that must be integrated, or implemented concurrently, with Medicare contracting reform. These include the Medicare prescription drug benefit and the expansion of the existing options available to Medicare beneficiaries who enroll in private health plans. According to CMS officials, these initiatives may compete with contracting reform for agency resources. Other key planned initiatives, such as major systems upgrades or replacement, will directly affect Medicare claims administration operations and are anticipated to be fully or partially implemented between now and 2009 in conjunction with contracting reform. Information on these interrelated initiatives is provided in table 2. As we have previously reported, planning for IT system transitions has often been problematic in federal agencies. Coordinating the schedule for implementing these initiatives in conjunction with Medicare contracting reform is crucial to ensuring that claims administration operates smoothly during the transition to MACs. HIGLAS, in particular, provides an example of why effective integration is essential. Most outgoing contractors will not be using HIGLAS at the time their workloads are transferred to MACs. Therefore, CMS will have to coordinate HIGLAS transition activities, including data preparation and data conversion testing, between the MACs that will be using HIGLAS and the outgoing contractors that have been using the existing financial management systems.
Given that the HIGLAS implementation strategy calls for “just in time” data conversion to HIGLAS format by outgoing contractors at the time the work is transferred to the MACs, problems or delays in this conversion could delay MAC transitions. Therefore, the scheduling for HIGLAS will have to be carefully managed to allow sufficient time for the data conversion. Although CMS has begun initial efforts to integrate the planning and scheduling of several major initiatives that will affect contractors, an agency official told us that there are no planning documents to provide a detailed integration framework and that he did not know when such documents would be available. He said that CMS is attempting to determine the appropriate sequencing and interdependencies of the multiple initiatives occurring in the agency. To focus its sequencing efforts, CMS has designated the contracting reform implementation schedule as the anchor around which it will schedule the implementation of HIGLAS and other critical initiatives. For example, since CMS plans call for MACs moving to HIGLAS either before or with MAC claims workload transitions, the MAC implementation plan will be pivotal in determining when the HIGLAS transitions will be accomplished. CMS is also examining each project’s resource requirements to help ensure that the agency is able to fund the initiatives in the sequence planned. Delays in the implementation of MAC-related initiatives could potentially have a significant impact on the timing, scope of work, costs, and ultimate success of MAC implementation. For example, CMS has already begun to experience schedule slippage for its initiative to consolidate the contractors’ data centers. These data centers, which are provided by current carriers and fiscal intermediaries, conduct the physical processing of Medicare claims and as a result, play a crucial role in efficient and accurate claims administration. 
The agency had intended to award contracts for four data centers that would consolidate the work of 14 current centers before awarding the DME MAC contracts, which are anticipated to be awarded in December 2005. However, it suspended the request for proposals on May 3, 2005, because it was unable to consider the large number of comments that were received on its draft request. It reissued its solicitation on June 27, 2005, for two, instead of four, data centers. Delays or other problems in implementing the consolidation of its data centers could affect the efficiency and effectiveness of MACs’ claims processing transitions. CMS’s Report to Congress stated that having the new data centers would be critical to achieving the greatest efficiency from the MAC transitions, in part because some of the information services and support to be provided by MACs would depend on the modernized platform. Currently, CMS expects that one of the new DME MACs will be managing a data center for all of the DME MACs as a stopgap measure and plans to award the contract for the two data centers in February 2006. Implementation of the data centers is planned to coincide with implementation of the first A/B MAC selected. The data center consolidation effort faces some of the same complexities that might occur during the transition of the claims processing contracts. If claims administration contractors operating data centers opt to leave the program before the conclusion of their contracts, CMS will have to find other data center contractors to temporarily take over that workload. Furthermore, because the data center consolidation is planned to occur during the MAC transition period, the two data center contractors will have to support claims administration contractors moving in or out of the program, while also integrating some of the prior data center contract work into their new data center responsibilities. 
CMS generally envisioned multiple data center transitions overlapping, but the schedule for data center consolidation is uncertain. CMS has not indicated how it intends to handle the risks associated with moving data center work at the same time claims processing workload is being transferred to MACs.

Plan’s Cost and Savings Estimates Do Not Provide a Reasonable Basis for Decision Making

CMS’s plan includes estimated costs and savings for Medicare as a result of contracting reform, but the estimates are too uncertain to provide a reasonable basis for making implementation decisions. Because CMS has never undertaken an effort comparable to full-scale contracting reform, the plan’s cost and savings projections were based on questionable evidence and assumptions about a contracting environment that differs considerably from its current one. As a result, the costs to implement contracting reform and the savings generated from it could be significantly greater or less than CMS has anticipated.

Cost Estimates Depend on Uncertain Outcomes

In its plan, CMS estimated that the costs to implement contracting reform from 2006 to 2011 would total about $666 million. The plan’s cost estimate is higher than indicated in the Report to Congress, which included only the fiscal year 2006 budget request of $58.8 million to support a single year of contracting reform implementation costs. CMS opted not to include its estimates for funds that would likely be requested in its budgets for fiscal years 2007 through 2011. The Report to Congress indicated that contracting reform would require “substantial additional investment in subsequent years.” The costs CMS anticipates incurring each year are shown in figure 3. The estimated $666 million in costs is divided into four categories, as noted in table 3. The largest cost component is for the termination and transition of the current Medicare contractors, which CMS has estimated at $331.5 million.
When a Medicare contract is terminated, contractors can have costs for items such as lease termination, equipment depreciation, and severance pay for contractors’ employees. The current Medicare contracts may require CMS to pay many of these termination costs when contractors leave the Medicare program. Similarly, when a Medicare contract workload is transferred from an outgoing contractor to another one, transition costs are incurred. Such transition costs include expenses related to transferring Medicare records and updating records related to Medicare benefit payments, including overpayments and other accounts receivable, so they are ready for the incoming contractor to use. Although CMS’s estimate for termination and transition costs is based on cost data from prior years, it is impossible to predict with certainty the termination or transition costs that will be incurred through implementing contracting reform. CMS’s estimate for termination and transition costs is based on the agency’s experience with both types of costs from 1995 to 2001. The estimate assumes that current contractors will win the majority of the MAC contracts and retain about 60 percent of their current workload. However, CMS officials do not know how many of the existing contractors will win MAC contracts for particular jurisdictions, so this assumption is speculative. Additionally, some of CMS’s prior contractor transitions were “turnkey” operations, in which an incoming contractor simply assumed the prior contractor’s business arrangement and staff without needing to incur some of the usual start-up costs, such as equipment purchases. Likewise, in turnkey transitions, CMS did not have to cover severance pay, because the outgoing contractor’s existing staff could be employed by the incoming contractor. 
As a result, the prior transitions may have cost less than the transitions that will occur during contracting reform because CMS is not requiring MACs to retain outgoing contractors’ work sites or staff. Contracting reform will allow CMS to pay performance incentives that are designed to reward MACs with exceptional performance. However, it is impossible to know the amount of incentive fees contractors will earn in the full-scale MAC environment until the contracts are awarded and CMS has more experience with contractor performance. These performance incentives are projected to cost 5.5 percent of the total estimated costs of the MAC contracts, or $190.6 million for fiscal years 2006 through 2011. CMS based this estimate on its prior experience in managing contractor incentive programs on a much smaller scale. Several IT modernization projects designed to support MACs by facilitating electronic claims processing are included in CMS’s estimate of contracting reform costs. These IT project costs include CMS’s planned consolidation of its current data centers. However, delays in the data consolidation initiative may affect the amount of these costs. The IT modernization costs also include plans to standardize the front end, or the way that electronic claims enter the DME MACs’ automated processing systems. CMS did not include similar costs for A/B MACs, because the agency had not made a decision to standardize the A/B MAC front ends at the time these estimates were made. CMS’s $132.5 million estimate for IT and other costs included $78.7 million in IT costs needed to support Medicare contracting reform, primarily the cost of data center consolidations. The final type of costs CMS estimated was for surveys of providers to assess their opinions about MAC performance. The MMA required that MAC performance be assessed in part based on provider satisfaction. 
CMS plans to begin surveying providers to measure their satisfaction with their MAC’s performance after MACs begin operating. The cost estimate for these surveys is $11.7 million and was based on an internal CMS analysis. A potential operational cost not part of CMS’s implementation estimate is funding for MAC contract modifications. Under the current contract arrangements, CMS is able to develop new tasks for contractors to complete. The agency may pay more for these tasks to be completed, regardless of the initial requirements set for the year, or may direct the contractor to do the work within its existing budget, if CMS’s review of the contractor’s spending pattern indicates that new funding is not needed to complete the new tasks. In the MAC environment, unless they become part of the statement of work, CMS will not be able to add new tasks to the MAC contracts without negotiating payment for them. Because contractors will submit proposals based on the tasks described in the original statement of work, work required after the contract is awarded could require CMS to negotiate with MACs. This could be the case, for example, if new legislation requires CMS to implement a major program change that was not anticipated in the established MAC contract costs. For example, in 2001, we reported that the Department of Defense (DOD) was not including contract adjustments when budgeting for its contracts with the insurers delivering health care to DOD employees. We warned that this approach could become quite costly, because in fiscal year 2001, it led to a $500 million shortfall in the DOD budget. When an agency is negotiating changes with an existing contractor, the competitive aspect of the negotiations is lost. As a result, the federal government may not always receive the best price. If CMS has to negotiate new tasks with the MACs for greater payment, contracting costs could rise above the agency’s estimates. 
To address this concern, CMS has instructed companies interested in becoming DME MACs to assume a level of effort for a specific number of changes. As long as the extra work to implement program changes does not exceed the level of effort in the statement of work, the Medicare program would not incur additional operational expenses.

Savings Estimates Depend on Contractor Performance in Reducing Improper Payments

Based on estimates generated both internally and by a consultant, CMS expects that contracting reform will generate significant savings to Medicare’s administrative budget and to the Medicare trust funds. While it is rational to assume some level of savings, these estimates are highly uncertain because they project the outcome of contracting processes and protocols that CMS has not used before. Furthermore, the consultant’s estimates relied on questionable evidence and were not reviewed by CMS program staff with the expertise to confirm whether the assumptions upon which they are based are realistic. CMS’s estimate of savings from 2006 to 2011 for the administrative budget totals $459.5 million, as shown in table 4. These savings are estimated to come from two sources. First, CMS anticipates that the competed MAC contracts will cost less than the current agreements and encourage more innovative efforts among contractors, which will allow them to operate at lower cost. CMS estimates that the introduction of competition will lower the contractor budget for awarded MAC contracts by 6 percent in the first year and 12 percent in each succeeding year. Second, CMS anticipates that the consolidation of its 14 Medicare data centers will lower operating costs. Both of the savings estimates shown in table 4 will be highly dependent upon contractor performance and the outcome of the competitive process. For instance, any savings CMS achieves from competing the MAC contracts will substantially depend on their final costs.
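The report does not show the arithmetic behind CMS's administrative savings estimate, but the stated assumption (a 6 percent contractor budget reduction in the first year of an awarded MAC contract and 12 percent in each succeeding year) can be sketched with hypothetical figures. The dollar amounts below are invented for illustration only; nothing here reflects CMS's actual contract budgets.

```python
# Illustrative only: dollar amounts are hypothetical, not from CMS's plan;
# only the 6% / 12% reduction assumption comes from the estimate described above.

baseline_budget = 100.0  # hypothetical annual budget for one MAC contract, $ millions

def annual_savings(contract_year: int, budget: float = baseline_budget) -> float:
    """Savings versus baseline in a given contract year (1-indexed),
    using the stated assumption: 6% in year 1, 12% in each succeeding year."""
    rate = 0.06 if contract_year == 1 else 0.12
    return budget * rate

# Cumulative savings over a hypothetical 5-year contract:
total = sum(annual_savings(y) for y in range(1, 6))
# year 1 about 6.0; years 2-5 about 12.0 each; total about 54.0 ($ millions)
```

As the sketch makes plain, the projected savings scale directly with the assumed reduction rates, so even small errors in the 6 and 12 percent assumptions compound over a multiyear contract.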
CMS’s annual estimates of savings for the administrative budget increase significantly from fiscal year 2006 to fiscal year 2011. As shown in figure 4, these estimated savings would begin to outpace CMS’s estimated administrative costs in 2009, and by 2011, they would exceed estimated costs by $100 million. CMS anticipates that the bulk of the savings from Medicare contracting reform will occur through funds it can avoid spending from the Medicare trust funds, but the basis for this estimate is uncertain. CMS’s consultant estimated the total projected savings to the trust funds through fiscal year 2011 to be over $1.4 billion. The savings to the trust funds are expected to come from the three main sources shown in table 5. The consultant who created these savings estimates explained that while it is logical to assume some level of savings to the Medicare program, there are “enormous uncertainties” at this stage of the implementation process, which make it difficult to project the savings with much accuracy. Ultimately, each of these sources of savings assumes that contracting reform will lead to a lower rate of improperly paid claims. Further, while the estimate for each of the three sources is based on a different methodology and formula, the basis for each is similar enough that the savings accrued through each may overlap, resulting in possible double counting. Therefore, whether contracting reform will actually achieve the $1.4 billion savings is highly uncertain. The consultant’s estimate anticipates that MACs could detect a larger amount of improper payments because they will be examining both Part A and Part B claims, but there is little evidence to support the amount of savings assumed. Currently, Part A and Part B medical reviews are generally conducted by different contractors, which lessens their focus on problematic billing that spans both parts. 
The consultant estimated that having MACs conduct joint Part A and Part B medical reviews would lower the amount of improperly paid Medicare claims by 0.08 percent. However, CMS’s senior medical review staff indicated that they had no prior knowledge of this actuarial estimate until we showed it to them. The staff told us that there is no evidence to realistically estimate the amount of savings that may result from consolidating the medical review responsibility for both parts. Furthermore, according to CMS staff, the greatest savings would likely come through computerizing medical reviews to automatically examine and compare Part A and Part B claims before they are paid. However, this capability is not currently possible, because Part A and Part B claims are processed on different payment systems, and developing a combined Part A and Part B claims processing system that could automatically compare Part A and Part B claims before payment would take years to complete. Potential savings from improved fraud detection are also impossible to quantify, based on current information. The PSCs currently conduct fraud detection activities for both Part A and Part B in 40 states, the District of Columbia, Puerto Rico, and the Virgin Islands. As CMS implements contracting reform, the jurisdictions in which PSCs will conduct fraud detection for both parts may change. While CMS considers having a single PSC handling both Part A and Part B fraud detection a way to make its contractors more efficient, a senior official acknowledged that the agency had no evidence with which to determine whether having the PSCs conduct combined fraud reviews has been more effective in detecting fraud than having these reviews conducted by separate contractors for Part A and Part B. The consultant’s estimate also anticipated that MACs will be able to pay claims more accurately than the current Medicare contractors, due to more effective medical review of claims, thus increasing claims denial rates. 
While noting that some elements of claim denials are not associated with contractor performance, the consultant assumed that better contractor performance could be equated with increased claims denial rates. The calculation for this assumption was based on the projection that contractor denial rates would increase and that half of these increased denials would lead to program savings. However, CMS program staff told us that they do not consider denial rates in their evaluation of contractor performance but instead evaluate claims administration contractors’ rates of paying claims properly. The CMS program staff told us that they were not sure of the basis for the consultant’s calculations, and one senior official stated that it was unclear whether more denials would occur with new MACs. Finally, the consultant projected that if CMS awards contracts competitively, contractors will have an incentive to operate more efficiently and to adopt the leading industry innovations that improve performance. The consultant expected these efforts to result in lower levels of improperly paid claims. This projection was based on a 1995 GAO report that estimated that if Medicare contractors adopted the technology and capabilities used by private insurers to detect improper payments through automated claims reviews, Medicare payments for physicians’ services and supplies could be reduced by 1.8 percent. Since that report was issued, CMS has made additional efforts to reduce improper payments. In addition, CMS was not certain that GAO’s assumed savings were achievable. Recognizing this, the consultant reduced this portion of the savings estimate to a 0.09 percent reduction in Medicare fee-for-service payments. In the consultant’s opinion, this adjusted for current error rates and CMS’s opinion that the initial 1995 GAO estimate was too high. 
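Because each savings source is ultimately a percentage reduction in improperly paid claims drawn from the same fee-for-service spending, simply summing the sources can overstate total savings when the reductions overlap. The sketch below is a deliberately simplified illustration of that arithmetic: all dollar figures and the overlap fraction are invented, and only the 0.08 and 0.09 percent reduction rates come from the consultant's estimate as described above (their percentage bases also differ in the report; both are applied to a single hypothetical base here purely to show the overlap effect).

```python
# Illustrative only: figures and the overlap fraction are invented; only the
# 0.08% and 0.09% rates come from the consultant's estimate described above.

ffs_payments = 250_000.0  # hypothetical annual fee-for-service payments, $ millions

joint_review_rate = 0.0008  # 0.08%: joint Part A/B medical review
competition_rate = 0.0009   # 0.09%: competition-driven efficiency

# Naive approach: sum the reductions as if they were fully independent.
naive_total = ffs_payments * (joint_review_rate + competition_rate)

# If some improper claims would be caught by either mechanism, the shared
# portion should be counted only once. The 50% overlap is invented.
overlap_fraction = 0.5
overlap = ffs_payments * min(joint_review_rate, competition_rate) * overlap_fraction
adjusted_total = naive_total - overlap  # lower than the naive sum
```

Under these invented numbers, the naive sum is about $425 million and the overlap-adjusted figure about $325 million; the point is not the magnitudes but that unacknowledged overlap inflates the total.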
However, when we followed up with CMS in April 2005, a senior official stated that while it would be realistic to expect some level of savings in the new competitive contracting environment, she did not know how the amount could be accurately quantified.

Conclusions

The millions of dollars in savings that CMS envisions achieving through contracting reform in the early years of implementation are largely based on questionable estimates. However, these anticipated savings have been the driving force behind the agency’s decision to accelerate its schedule for contracting with MACs. The agency has opted to transfer the entire current contractor workload to MACs 2 years ahead of the MMA time frame, in the hope of garnering savings to Medicare as quickly as possible. The accelerated schedule raises concerns for a number of reasons. First, CMS has never before undertaken a project of this scope and magnitude—one that affects more than 35 million beneficiaries and 1 million health care providers. If transitions do not run smoothly, operational disruptions could lead to delayed payments to providers and increased improper payments by contractors. With Medicare net improper payments estimated to be almost $20 billion annually, any potential increase is cause for concern. Second, while CMS’s plan provides detailed information in some areas, other critical areas of the agency’s plan are still being developed. Although the agency is employing a start-up cycle that will provide an opportunity to gain valuable FAR contracting experience, the ambitious schedule for the subsequent two cycles leaves little time for the agency to learn from the experience and resolve problems that might arise.
Finally, attempting complex transitions of almost all of the claims administration workload in less than 2 years, in conjunction with changes in the data centers and financial management systems, significantly increases the risk that providers’ claims will be paid improperly or not be paid at all. As CMS undertakes this important challenge, it is critical that the agency proceed at a prudent pace in order to apply lessons learned from early implementation experiences to future contracting cycles.

Recommendation for Executive Action

To better ensure the effective implementation of Medicare contracting reform, we recommend that CMS extend its implementation schedule to complete its workload transitions by October 2011, so that the agency can be better prepared to manage this initiative.

Agency Comments and Our Evaluation

In its written comments on a draft of this report, CMS noted that implementing Medicare contracting reform would enable the agency to improve the efficiency of the services delivered to Medicare beneficiaries and providers. CMS agreed that implementing contracting reform was a significant undertaking, but did not concur with our recommendation to extend its implementation schedule. CMS stated that by fully implementing MAC contracting 2 years earlier than required, it would achieve savings to the trust funds and operational efficiencies more quickly. In addition, CMS stated that extending the transition schedule would increase the risk of current contractors leaving the program before MAC contracts are awarded and eliminate the agency’s flexibility to adjust its schedule in response to unforeseen changes and still meet the mandated implementation date. We believe that by accelerating its implementation schedule to transfer the entire Medicare claims processing workload to MACs by July 2009, CMS is assuming an unnecessary risk.
While it is true that lengthening the implementation schedule could increase the possibility that one or more contractors might withdraw from Medicare prematurely, we see greater risk in attempting complex transitions without sufficient time for adequate planning and midcourse adjustments. When the considerable risk associated with accelerated implementation is considered in light of uncertain savings, a more prudent approach would be to use the time frame established in the MMA to fully develop implementation plans, evaluate lessons learned, and apply them to future acquisition cycles. In recommending that CMS extend its implementation schedule, we assume that the agency would allow sufficient time at the end of the final transition to adjust for problems and unforeseen circumstances and still meet the mandated implementation date of October 1, 2011. CMS agreed that it would need sufficient time for this kind of adjustment but has not yet developed plans for all contingencies. For example, CMS responded to a relatively short schedule slippage for its enterprise data center implementation by including in contract language the option for one of the DME MACs to run a data center on an interim basis. However, CMS will still have to develop the details of the contract and choose the most appropriate company to perform this work. This is one example of the many adjustments that will undoubtedly have to be made before all of the transitions are finished. CMS also stated that it disagreed with our conclusion about its readiness to conduct transitions to MACs. Our report did not conclude that CMS would not be ready to conduct transitions according to its proposed schedule. However, having a fully developed plan in place would assist CMS in conducting these transitions as smoothly as possible. As we stated in the report, CMS has recognized that it needs to develop certain critical areas in its plan and is taking steps to address them.
For example, it is clear from its comments that the agency is very concerned about the risks involved in the complex transitions of claims workload and is planning mitigation actions—such as hiring a contractor to help manage the effort. CMS’s comments provide additional information on other steps that it is taking to reduce or mitigate significant risks, coordinate the schedule for MAC implementation with other agency fee-for-service initiatives, develop detailed integrated implementation schedules, and address other GAO concerns. Nevertheless, the additional information provided by the agency generally reinforces our point that the agency’s implementation plan, which was due to the Congress and to us in October 2004, is still a work in progress. For example, as we pointed out in the report, CMS’s comments indicate that it has not completed its integrated implementation schedule and that it is leaving details concerning contractor coordination to MACs and other contractors. In addition, CMS has not finalized important implementation information, such as key performance measures or its MAC evaluation strategy and evaluation criteria for A/B MACs, or completed its proposal for a new organizational structure to oversee and manage the MACs. While CMS does have a risk management process, its current identification of risks and mitigation strategies lacks specificity and the agency has not completed a comprehensive risk mitigation plan. CMS also disagreed with our assessment of the quality of its cost and savings estimates. CMS said that its estimates of implementation costs were well informed by program experience and were the best available predictions of future costs. As we reported, CMS used information from previous transitions of contractor workload to help estimate its administrative costs. This grounded the estimate in the agency’s past experience. 
However, CMS had to make assumptions about the amount of claims workload to be transferred and transition costs to be paid, which might turn out to be inaccurate. Unlike previous workload transitions, CMS is not requiring MACs to maintain staff and facilities from the former contractors. This should allow the MACs to gain efficiencies in operations, but CMS may end up paying more in severance pay or for start-up costs than estimated. Similarly, CMS’s experience informed its estimate of administrative savings, but the estimate depends on assumptions about the efficiencies MACs will achieve that are difficult to predict with certainty. While CMS’s assumptions about administrative costs and savings might appear reasonable, if the assumptions are inaccurate, the estimates will not reflect the real costs and savings over time. In addition, CMS indicated that because the costs of contract modifications were for operations after the transfer of claims workload, they should not be included in the implementation cost estimate. As CMS noted, its DME statement of work includes a provision for implementing a specific number of programmatic changes after the contract is awarded, to reduce the possibility that CMS would have to negotiate contract modifications that incurred additional costs. We modified our draft to clarify our discussion of the potential costs of contract modifications. Our greatest concern relates to CMS’s consultant’s estimate of savings to the trust funds. As we indicated in our report, the estimate of savings to the trust funds is generally based on little evidence and its underlying assumptions may not be reasonable, yet it played a significant role in CMS’s decision to compress its implementation schedule. While CMS suggested that the savings estimate is conservative, the consultant who generated this estimate indicated that there were “enormous uncertainties” in estimating savings at this point in the implementation process. 
In its comments, CMS noted that our report highlighted the lack of direct evidence to support the amount of estimated savings. In response, CMS stated that the savings estimate was the best available, given that the changes proposed are unprecedented. CMS indicated that each of the three elements of the estimate of savings to the trust funds addresses a different aspect of the claims process. However, each of the three elements actually addresses the same aspect—MACs improving their medical and other claims review to increase denials of improper claims. CMS’s comments indicate that its technical staff agree that the assumptions underlying this estimate are reasonable. We discussed these estimates with CMS officials most knowledgeable about medical and other claims review and they did not agree that the assumptions were based on evidence and were reasonable. Further, because each element in the savings estimate assumes improvement in claims review and improper claims denial, we think it is likely that CMS is double counting its potential savings. For example, the consultant estimated that the MACs would have higher claims denial rates, but also separately estimated savings from other aspects of claims review that—if conducted more efficiently—would lead back to higher claims denial rates. We are sending copies of this report to the Secretary of HHS, the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are Sheila K. Avruch, Assistant Director; Sandra D. Gove; Joy L. 
Kraybill; Kenneth Patton; and Craig Winslow. Appendix I: Documents CMS Officials Have Identified as Constituting the Agency's Plan for Implementing Contracting Reform The Centers for Medicare & Medicaid Services (CMS) has designated its report, entitled Report to Congress: Medicare Contracting Reform: A Blueprint for a Better Medicare, and the documents underlying this report as its plan for implementing Medicare fee-for-service contracting reform. Documents include the following:

- maps of jurisdictions for A/B Medicare administrative contractors (MAC), durable medical equipment (DME) MACs, and home health and hospice MACs;
- CMS's estimates for savings to the Medicare trust funds, administrative costs and savings, provider and beneficiary savings, and supporting narrative and information;
- MAC transition timelines;
- DME and A/B MAC project schedules;
- requests for information for A/B MACs and DME MACs, as published on FedBizOpps, including concepts of operations, draft statements of work, draft performance standards, and workload implementation handbooks;
- the DME MAC request for proposals and related documents, including the final statement of work, as published on FedBizOpps;
- materials on beneficiary and provider customer service;
- materials concerning work on reengineering Medicare fee-for-service;
- BearingPoint, Inc., Health Services Research & Management Group, Contractor Evaluation Improvement Project (CEIP) Strategy Paper, Final Report (McLean, Va.: Mar. 18, 2005);
- Centers for Medicare & Medicaid Services, Medicare Contracting Reform: Acquisition Strategy for Medicare Administrative Contractors, draft (Baltimore, Md.: Feb. 28, 2005);
- Centers for Medicare & Medicaid Services, MAC Implementation Project, Risk and Issue Management Process, Schematic (Baltimore, Md.: Jan. 24, 2005);
- Centers for Medicare & Medicaid Services, MCMG Risk Management Plan for the Medicare Administrative Contractor Implementation Project, draft (Baltimore, Md.: Dec. 7, 2004);
- Centers for Medicare & Medicaid Services, Medicare Contracting Reform, Communication Plan (Baltimore, Md.: Nov. 10, 2004);
- LMI Government Consulting, Medicare FFS Contracting Reform: Level Five Work Breakdown Structure and Master Project Plan (McLean, Va.: October 2004);
- LMI Government Consulting, Medicare Fee-for-Service Contracting Reform: Assessment of Planning Needs (McLean, Va.: August 2004);
- Centers for Medicare & Medicaid Services, Report to the Medicare Contractor Oversight Board: Integration Issues in Modernizing Medicare, Final Report, submitted by CMS's fee-for-service project integration team (Baltimore, Md.: July 2, 2004), and related briefing documents; and
- Logistics Management Institute, Sensitive Assessment Center, Medicare Fee-for-Service Contracting Reform: Key Challenges (McLean, Va.: December 2003).

Appendix II: Documents Used by GAO to Develop Criteria for Reviewing CMS's Plan for Contracting Reform

- Selected provisions of section 1874A of the Social Security Act, added by section 911 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), Pub. L. No. 108-173, 117 Stat. 2066, 2378-2386 (to be codified at 42 U.S.C. § 1395kk-1).
- Centers for Medicare & Medicaid Services, Contractor Reform—Update, September 22, 2004.
- Medicare fee-for-service contractor (unnamed), Contractor Suggestions Regarding Implementation of Medicare Administrative Contracts. Comments submitted to CMS by a current Medicare fee-for-service contractor. December 22, 2004.
- GAO, Business Process Reengineering Assessment Guide, GAO/AIMD-10.1.15 (Washington, D.C.: May 1997).
- GAO, An Evaluation Framework for Improving the Procurement Function: A Guide for Assessing Strengths and Weaknesses of Federal Agencies' Procurement (Exposure Draft) (Washington, D.C.: October 2003).
- LMI Government Consulting, Medicare Fee-for-Service Contracting Reform: Assessment of Planning Needs (McLean, Va.: August 2004).
Appendix III: GAO's Criteria for Evaluating CMS's Contracting Reform Plan

- Does the plan explain contracting reform objectives, such as promoting competition and establishing better communication between contractors and beneficiaries?
- Does the plan present an overview of how MAC implementation fits into broader agency plans?
- Does the plan state when major events will take place, including announcing MAC jurisdictions, issuing proposed performance measures for inclusion in A/B MAC contracts and requests for proposals, selecting DME MACs and A/B MACs, and completing the transition to MAC contracting?
- Does the plan provide key information on budget and costs for contracting reform, such as estimated termination costs for current contractors, MAC operational costs, and performance incentives?
- Does the plan provide high-level information on performance measures—that is, what constitutes success in contracting reform and how will progress be measured?
- Does the plan identify a contingency or risk mitigation strategy for potential problems as contracting reform is being implemented?
- Does the plan address transition concerns and contingency planning for the transitions?
- Does the plan explain how CMS staff will be organized and who will have specific roles and responsibilities in managing contracting reform?
- Does the plan provide information on CMS's intended contractor management and oversight structure for MACs?
- Does the plan provide an approach for ensuring that the agency has the right staff in the right numbers with the right skills in the right places to accomplish its mission effectively? This approach requires that an agency devote adequate resources to provide its acquisition workforce with the training and knowledge necessary to perform their jobs. It also requires long-range planning, including succession planning, to ensure the workforce has the necessary skills and qualifications to perform the procurement function into the future.
- Does the plan provide information on MAC jurisdictions, including number and geographic areas?
- Does the plan provide information on the jurisdictional rollout plan, and how this timeline might be affected by voluntary contractor withdrawals?
- Does the plan discuss the strategy for combining Part A and Part B and associated implications or risks?
- Does the plan address the acquisition process for new contracts?
- Does the plan describe the eligibility criteria expected of MACs?
- Does the plan describe the functions that MACs will perform, such as developing local coverage decisions, determining payment amounts, making payments, educating beneficiaries, and communicating with providers?
- Does the plan provide information on functions that will be assigned to non-MAC contracts?
- Is the plan clear in defining the roles of MACs and other contractors that conduct program integrity functions, so that program integrity efforts are not duplicative?
- Does the plan explain how MACs and other contractors will interface and coordinate their different program integrity activities?
- Does the plan provide information on how MACs will deal with chain providers, which is a concern for those with establishments in multiple MAC jurisdictions?
- Does the plan address the establishment and definition of performance measures for MACs?
- Does the plan provide clear, transparent, and consistent policies and processes that provide a basis for the planning, award, administration, and oversight of procurement efforts?

Appendix IV: Scope and Methodology To conduct this evaluation, we consulted CMS to determine the documents included in its plan for contracting reform. Appendix I lists the documents that were identified by agency officials as included in CMS's contracting reform plan, including its Report to Congress, that were provided to us through June 3, 2005.
We developed evaluation criteria to assess the extent to which CMS’s plan provides an appropriate framework to implement Medicare contracting reform. To develop these criteria, we analyzed the statutory provisions added by section 911 of the MMA, documents and related information prepared to help CMS plan for contracting reform, and GAO guidance on assessing federal agencies’ procurement functions. We also reviewed GAO’s guidance on changing the approach through which mission-critical work is accomplished. These documents are listed in appendix II. The evaluation criteria we developed address contracting reform planning and implementation, contracting reform management and oversight, and CMS’s contracting strategy for MACs and are listed in appendix III. We used these criteria to evaluate CMS’s plan. In addition to this assessment, we also conducted interviews with officials at CMS headquarters and regional offices concerning the process for developing the plan, the implementation schedule, the challenges that CMS faces in implementing contracting reform, lessons learned that have prepared CMS for moving to the MAC environment, and the risks and benefits involved in the transition to MAC contracting. We also interviewed officials from four current Medicare contractors to obtain their views on CMS’s contracting reform plan and the challenges, risks, and benefits involved in undertaking this effort. To assess the extent to which the plan’s cost and savings estimates were sound enough to support decision making on implementation, we reviewed CMS’s estimates for administrative costs and savings, savings to the Medicare trust funds, and supporting documentation. We evaluated the assumptions associated with the estimates. We conducted interviews with CMS officials who have been involved in developing estimates for the costs and savings related to Medicare contracting reform in order to understand the rationale upon which the estimates were based. 
We interviewed other CMS officials who work in program areas that will be affected by contracting reform to learn how they expect contracting reform to generate costs or savings in their program areas. We also interviewed a representative of CMS's actuarial contractor, which developed the savings estimates for the Medicare trust funds. We did not verify the reliability of CMS's data that were used to generate financial estimates. We performed our work from November 2004 through July 2005 in accordance with generally accepted government auditing standards. Appendix V: Comments from the Department of Health and Human Services Appendix VI: CMS's MAC Procurement and Transition Schedule Figure 5 shows CMS's procurement and transition schedule for MACs, as of June 2005. During one start-up cycle and two additional transition cycles, CMS will conduct competitions to select a total of 23 MACs. In the first phase of the start-up cycle, CMS will select four MACs that will be administering claims for DME, prosthetics, orthotics, and supplies—called DME MACs. In the second phase of the start-up cycle, CMS will select one of the MACs that will be responsible for paying Part A and Part B claims—called A/B MACs. During cycle one, CMS will select seven A/B MACs. During cycle two, CMS will select seven A/B MACs and four MACs that will be responsible for administering claims for home health and hospice (HH) care, called HH MACs. Appendix VII: Jurisdictional Map of the Current Fiscal Intermediaries (map; labels include BCBS SC (Palmetto), BCBS ND (Noridian), BCBS OK (Group Health), BCBS AL (Cahaba), Anthem (AdminaStar), BCBS TN (Riverbend), and BCBS FL (First Coast)) Appendix VIII: Jurisdictional Map of the Current Carriers (map; labels include BCBS AL (Cahaba), BCBS FL (First Coast), Anthem (AdminaStar), BCBS SC (Palmetto), BCBS ND (Noridian), and Triple S, Inc.)
Appendix IX: Jurisdictional Map of the Current Regional Home Health Intermediaries (map; labels include Cahaba and Palmetto) Appendix X: Jurisdictional Map of the Current Durable Medical Equipment Regional Carriers (map; labels include Connecticut (CIGNA) and Palmetto) Appendix XI: Jurisdictional Map of the 15 New Medicare Administrative Contractors Appendix XII: Jurisdictional Map of the Four DME MACs and the Four HH MACs

The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) significantly reformed contracting for the administration of claims for Part A, Medicare's hospital insurance, and Part B, which covers outpatient services such as physicians' care. The MMA required the Centers for Medicare & Medicaid Services (CMS)--the agency within the Department of Health and Human Services (HHS) that administers Medicare--to conduct full and open competition for all of its claims administration contracts and to transfer the work to Medicare administrative contractors (MAC) by October 2011. The MMA required the Secretary of HHS to submit a report to the Congress and GAO on the plan for implementing Medicare contracting reform and for GAO to evaluate the plan. To address this mandate, GAO reviewed the extent to which (1) the plan provides an appropriate framework for implementing Medicare contracting reform and (2) the plan's cost and savings estimates are sound enough to support decisions on implementation. CMS's plan provides an appropriate framework to implement contracting reform in some critical areas but not in others. For example, the plan indicates the rationale for reform but lacks a detailed schedule to coordinate reform activities with other major initiatives CMS intends to implement at the MACs during the same period.
Further, CMS's plan does not comprehensively detail steps to address potential risks during the transitions of the claims workload from the current contractors, such as failing to pay providers or paying them improperly. These transitions will be complex to manage because they require moving multiple claims workloads from current contractors to a single MAC with new jurisdictional lines. As many as nine separate segments of current contractors' workload will be moved to the first A/B MAC. CMS has accelerated its schedule to transfer the current contractor claims workload to MACs by 2009, more than 2 years ahead of the MMA's time frame. This schedule leaves little time for CMS to adjust for any problems encountered. CMS's estimates of costs and savings are too uncertain to support decisions on contracting reform implementation. First, CMS's internal cost estimate for a 6-year implementation period of about $666 million is based on reasonable data but questionable assumptions about contract awards. Second, its estimate of $1.4 billion in savings from reductions in improper payment by MACs depends on questionable evidence and assumptions that were never validated by knowledgeable CMS staff. However, the $1.4 billion estimate prompted CMS to accelerate its implementation schedule to accrue savings as rapidly as possible. While it is reasonable to assume that contracting reform will result in savings, the actual amount could differ greatly from the estimate. Basing an accelerated implementation schedule on uncertain savings raises concerns that CMS has unnecessarily created additional challenges to effectively managing the risk of these transitions.
Background We previously identified problems with the federal employment tax deposit system. In fiscal year 1988, IRS collected employment tax deposits totaling $627 billion from approximately 5 million employers. About one-third of these employers were penalized a total of $2.6 billion for not making timely deposits. In addition, IRS found employment tax deposit regulations difficult to administer and enforce because employers could be subject to more than one deposit rule during a tax period and because exceptions to the rules could be confusing. The regulations required employers to monitor and accumulate withheld income and Social Security taxes from payday to payday until one of four separate deposit rules (quarterly, monthly, eighth-monthly, or daily) was triggered. IRS developed the revised employment tax deposit regulations after it adopted Compliance 2000, which it began to develop in 1988. Compliance 2000 is an approach IRS uses to improve voluntary taxpayer compliance with the tax law. It is designed to change the behavior of noncompliant taxpayers and to reduce taxpayer burden. Under Compliance 2000, IRS seeks to identify and address the causes of taxpayer noncompliance and taxpayer burden by analyzing its own systems and by obtaining feedback from those who must comply with a tax, i.e., “stakeholders.” IRS began revising the regulations in May 1990, when it first asked the public for suggestions on how to simplify the employment tax deposit regulations. In 1991, Congress considered two bills that were intended to simplify the regulations, and hearings were held on the bills in the House and Senate. A tax bill incorporating employment tax simplification was vetoed in March 1992 for reasons unrelated to its employment tax deposit provisions. Treasury and IRS then tried to simplify the employment tax rules by proposing regulatory changes that were based on the vetoed legislation.
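The four-rule structure described above can be sketched to show why employers found the old regulations hard to follow. The dollar thresholds below are hypothetical placeholders, not the actual amounts in the former regulations; the point is the structure: an employer's accumulated liability from payday to payday determined which rule (and which deadline) applied, so the applicable rule could change mid-quarter.

```python
# Sketch of the pre-1993 deposit-rule structure. Thresholds are
# HYPOTHETICAL, chosen only to illustrate how a growing liability
# could trigger different rules within a single tax period.

def applicable_rule(accumulated_liability):
    """Return the deposit rule triggered by the running liability."""
    if accumulated_liability >= 100_000:      # hypothetical threshold
        return "daily (next banking day)"
    if accumulated_liability >= 3_000:        # hypothetical threshold
        return "eighth-monthly"
    if accumulated_liability >= 500:          # hypothetical threshold
        return "monthly"
    return "quarterly (with the return)"

# An employer whose liability grows across four paydays crosses
# thresholds and falls under different rules in the same quarter --
# the complexity IRS cited as a cause of noncompliance.
running = 0
for payday_liability in (400, 400, 2_500, 98_000):
    running += payday_liability
    print(f"${running:>7,} accumulated -> {applicable_rule(running)}")
```

Because the triggering rule depended on a running total rather than a fixed schedule, an employer had to re-evaluate its deposit obligation every payday, which is the burden the 1992 revisions were meant to remove.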
Proposed regulations were published in May 1992, IRS convened a hearing in August of that year, and final regulations were issued that September. In appendix I, we discuss in further detail the events surrounding the issuance of the final regulations. Consideration of Stakeholders’ Views Resulted in Simpler and Easier to Follow Regulations Treasury and IRS officials reached out to stakeholders between May 1990 and August 1992 to obtain input on how to simplify the employment tax deposit regulations. The officials used a legislatively developed proposal that was reviewed in congressional hearings as the basis of the revised regulations that were proposed in May 1992. Officials published their proposed regulations, obtained significant suggestions for improvements, and rapidly made changes that resulted in final regulations that were considered by most stakeholders to be much simpler than the regulations they replaced. As early as May 1990, IRS recognized that the complexity of its employment tax deposit regulations resulted in noncompliance. It attempted to simplify these complex regulations at the same time it was also undertaking Compliance 2000. IRS began developing the Compliance 2000 approach to tax administration in 1988 to reduce taxpayer noncompliance. Compliance 2000 aims to reduce unintentional noncompliance by increasing education and outreach. Compliance 2000 also entails IRS’ factually determining the causes of noncompliance and correcting them. According to IRS, correcting causes of unintentional noncompliance, such as complex regulations and insufficient explanations, may be more effective than addressing them through traditional means, i.e., enforcement sanctions. In concert with the Compliance 2000 approach, IRS is increasing its role as an advocate for simpler tax laws and regulations. IRS intends to provide taxpayers with an opportunity to participate in the design and evaluation of regulations. 
In discussions with us, 5 of the 10 stakeholders we interviewed said that IRS operated in the spirit of Compliance 2000 during its efforts to revise the employment tax deposit regulations. Small business stakeholders in particular said that IRS actively sought their views. IRS’ efforts to obtain stakeholder input began in May 1990 when IRS asked for public suggestions on how the employment tax deposit regulations might be simplified. Its efforts also included sending IRS officials to an August 1990 meeting of the American Institute of Certified Public Accountants (AICPA) on the deposit regulations and having IRS officials meet with stakeholders in March 1991. In May 1992, Treasury and IRS published proposed revisions to the employment tax deposit regulations to give stakeholders an opportunity to submit written comments. In addition, IRS convened an August 3, 1992, hearing on the proposed regulations. Most witnesses questioned whether the regulations were simple enough to help employers achieve a high compliance rate. Overall, the National Federation of Independent Business and the Small Business Legislative Council considered their access to IRS officials during the regulation development process to be exemplary. The Small Business Legislative Council representative considered the process followed in revising the employment tax deposit regulations to be a model for future regulation projects. Some other stakeholder representatives, notably eight who sent a letter to the Treasury and IRS executives, were less pleased with their communications with Treasury and IRS officials. Nevertheless, virtually all of the stakeholders we interviewed said that with the final regulations, IRS had responded well to their concerns and significantly simplified the employment tax deposit rules.
Employment Deposit Experience Holds Lessons to Improve Future Interactions With Stakeholders Although the final employment tax deposit regulations were well received, certain stakeholders were dissatisfied with various aspects of the process used as Treasury and IRS developed the revisions. Two concerns of these stakeholders were that an adequate dialogue did not occur between Treasury and IRS officials and the stakeholders and that Treasury and IRS officials did not follow statutory or executive branch guidance that either appeared to be applicable or that the stakeholders thought would have been appropriate to follow. The experience gained in developing these revised regulations may help Treasury and IRS officials improve future efforts to communicate meaningfully with stakeholders. Dialogue Between Stakeholders and Regulatory Policymakers After IRS’ August 3, 1992, hearing on the proposed employment tax deposit regulations, eight stakeholder representatives signed letters to the Assistant Secretary (Tax Policy) and the IRS Commissioner on August 25, 1992. The letters stressed the stakeholders’ belief that a two-way dialogue had not occurred between the stakeholders and policymakers on business issues of concern to the stakeholders. The representatives’ belief was based, in part, on a question from the hearing panel during the August hearing. At least two stakeholders interpreted the question to imply that the stakeholders’ motive for participating in the regulation development process was solely profit-driven. Also, in the letter the representatives noted the short deadline set by IRS and Treasury for finalizing the regulations. The representatives took the short deadline, along with the close working relationship that Treasury and IRS officials had with two other stakeholder organizations that strongly supported the proposed approach, to imply that few changes would be made to respond to their concerns. 
Regulatory Flexibility Act and Executive Order 12291 Certain stakeholders also believed that Treasury and IRS should have followed provisions of the Regulatory Flexibility Act of 1980 (RFA) and/or Executive Order 12291 (EO). RFA requires federal agencies to assess the effects of their proposed rules on small entities. As defined in RFA, small entities include small businesses, small governmental units, and small not-for-profit organizations. As a result of their assessments, agencies must either (1) perform a regulatory flexibility analysis describing the impact of the proposed rules on small entities or (2) certify that their rules will not have a “significant economic impact on a substantial number of small entities.” Where applicable, an agency must include an initial regulatory flexibility analysis, or a summary, in the Federal Register notice of proposed rulemaking. A final analysis must be made available to the public, with information on how to obtain copies included in the Federal Register with the issuance of final regulations. The regulatory flexibility analysis is intended to focus attention on the effect of regulations on small entities and to minimize that effect. However, RFA requires such analyses for regulations only when agencies must publish the regulations for notice and comment under the provisions of the Administrative Procedure Act (APA). Because they had classified the employment tax deposit regulations as interpretative, and interpretative regulations do not have to be published for notice and comment, Treasury and IRS officials did not prepare a regulatory flexibility analysis. EO 12291 required that drafters of major regulations (1) develop a regulatory impact analysis that considered the costs and benefits of proposed regulations and (2) determine whether the least burdensome approach was selected.
EO 12291 was intended, among other things, to reduce the burdens of regulations, increase agency accountability for regulatory actions, and ensure well-reasoned regulations. EO 12291 considered a regulation major if it met certain criteria. For example, a regulation was considered major if it was likely to result in an annual effect on the economy of $100 million or more or if it would cause significant adverse effects on competition, employment, investment, productivity, innovation, or the ability of United States-based enterprises to compete with foreign-based enterprises in domestic or export markets. Treasury and IRS officials determined that the proposed employment tax deposit regulations were not major. Although the agencies stated in the notice of proposed rulemaking that the proposed regulations were not major, they were not required to explain why. A memorandum submitted to the Office of Management and Budget (OMB) in compliance with Treasury’s procedures for implementing the EO also did not explain the basis for the officials’ determination that the regulations were not major. However, even if the proposed regulations had been classified as major by Treasury and IRS officials, they would not have been subject to a regulatory impact analysis. Treasury and OMB have a memorandum of agreement that exempts Treasury from following the EO’s processes for IRS’ interpretative regulations. The belief that IRS did not follow RFA or EO 12291 was a source of dissatisfaction among some stakeholders. Several stakeholders told us that even if Treasury and IRS officials did not have to develop a regulatory flexibility analysis under RFA, they nevertheless should have. As the revised regulations were being developed, IRS officials noted that a primary focus of their efforts to simplify the employment tax deposit regulations was to address problems that small employers were having in complying with the existing regulations. 
These stakeholders said that if small employers were the focus of the Treasury and IRS effort to revise the regulations, they did not understand the rationale for IRS’ not following a process specifically intended to help ensure that small employers’ needs were addressed in the rulemaking process. Similarly, several stakeholders believed that the proposed regulations were major regulations subject to the regulatory impact requirements of EO 12291. As interpretative regulations, the employment tax deposit regulations were exempt from the EO’s requirements under a memorandum of agreement between Treasury and OMB. The notice of proposed rulemaking did not explain that this exemption existed because the memorandum of agreement did not require such an explanation. Ultimately, on the basis of satisfaction with the final regulations expressed by virtually all parties involved, we believe that meaningful communication did occur before the employment tax deposit regulations were finalized. However, the dissatisfaction of some stakeholders with their ability to engage Treasury and IRS officials in a dialogue during the process used in developing the regulations suggests that opportunities may exist to improve future communications, which would be consistent with IRS’ Compliance 2000 initiative and would enhance the probability that sound regulations will be adopted. Regulatory Flexibility and Impact Analyses May Have Been Beneficial Although Treasury and IRS officials judged that they were not required to do either the regulatory flexibility or regulatory impact analyses, such analyses may have been beneficial. Regulatory flexibility and regulatory impact analyses direct regulation drafters’ attention to the effect of regulations on stakeholders, increasing the likelihood that effective communications will occur between regulation drafters and stakeholders. 
In general, RFA and EO 12291 reflected policymakers’ judgments that the process used in developing regulations could be improved. Accordingly, RFA and EO 12291 have provided structures that focus regulation drafters’ attention on minimizing the burden of regulations on affected parties in general and on small entities in particular. RFA and EO 12291 established criteria for regulation drafters to apply in judging whether burden reduction goals were being achieved. They also established processes requiring the regulation drafters to document their consideration of how burdens were minimized and make their analyses available to the public when proposed and final regulations are published. More specifically, if RFA applies to a regulation and the agency cannot certify that the regulation will not have a significant economic impact on a substantial number of small entities, RFA requires an initial regulatory flexibility analysis that focuses on small entities and is to contain the following reporting items:

- a description of the reasons why action by the agency is being considered;
- a statement of the objectives of, and legal basis for, the proposed regulation;
- a description of and, where feasible, an estimate of the number of small entities to which the proposed regulation will apply;
- a description of the projected reporting, recordkeeping, and other compliance requirements of the proposed regulation, including an estimate of the classes of small entities that will be subject to the requirement and the type of professional skills necessary for preparation of the report or record; and
- an identification, to the extent practicable, of all relevant federal regulations that may duplicate, overlap, or conflict with the proposed regulation.

The analysis must also describe any significant alternatives to the proposed regulation that accomplish the stated objectives and that minimize any significant economic impact on small entities.
EO 12291 required a regulatory impact analysis for major rules that was to include, but not be limited to, the following:

- a description of the potential benefits of the regulation, including any beneficial effects that cannot be quantified in monetary terms, and the identification of those likely to receive the benefits;
- a description of the potential costs of the regulation, including any adverse effects that cannot be quantified in monetary terms, and the identification of those likely to bear the costs;
- a determination of the potential net benefits of the regulation, including an evaluation of effects that cannot be quantified in monetary terms; and
- a description of alternative approaches that could substantially achieve the same regulatory goal at lower cost, together with an analysis of these potential benefits and costs and a brief explanation of the legal reasons why such alternatives, if proposed, could not be adopted.

In our opinion, the processes required by RFA and EO 12291 reflected principles similar to those of IRS’ Compliance 2000 approach, e.g., that regulations are better when they are based on an analysis of their effect on stakeholders and designed to avoid unwarranted burdens. Treasury officials believe that the efforts of Treasury and IRS officials to reach out and obtain stakeholders’ input during the time they were revising the employment tax deposit regulations effectively satisfied RFA’s and the EO’s requirements. To the extent that they were unsuccessful in obtaining stakeholders’ input or adequately reflecting that input in the proposed regulation, Treasury and IRS officials point out that the notice and comment process they voluntarily followed is intended to permit anyone who has concerns about proposed regulations to raise those concerns.
The officials further noted that they did respond to the concerns that were raised during the notice and comment period and did so in a manner that led to widespread satisfaction with the final regulations. Treasury officials also said that even though RFA did not apply, a copy of the notice of proposed rulemaking was provided to the Chief Counsel for Advocacy of the Small Business Administration for comments, as required by section 7805(f) of the Internal Revenue Code. Furthermore, Treasury and IRS officials sought the views of small businesses by contacting small business associations. Given the considerable stakeholder satisfaction with the final employment tax deposit regulations, it may well be that had Treasury and IRS officials followed RFA and EO 12291 requirements, the final regulations would not have been significantly different. The principal advantage that may have been gained by following RFA and EO processes in this case could have been less contentious communications with some stakeholders. If IRS and Treasury follow these processes in the future, this might better ensure sound communications with all stakeholders. One stakeholder’s view was that Treasury and IRS officials chose to obtain stakeholder input as they worked on revising the employment tax deposit regulations, but obtaining input should be a requirement rather than a choice to be made by regulators. Although following RFA procedures or those of EO 12291 when Treasury and IRS officials are not required to do so may promote sound communications with stakeholders, doing so also could subject the government to future litigation. That is, following the procedures would set a precedent that could provide the basis for future suits seeking to compel Treasury and IRS to adhere to the procedures even though RFA or the EO did not require adherence. To the extent that such litigation was successful, Treasury and IRS could be required to follow RFA and the EO for interpretative regulations. 
Our work was not intended to determine whether such a result would be desirable. Treasury and IRS could avoid such litigation and yet better ensure that regulation drafters consider the principles of RFA and the EO by incorporating RFA- and EO-like requirements into the Treasury regulations handbook. This handbook provides guidance to regulation drafters as they develop or revise regulations. Treasury could require regulation drafters to document their consideration of the factors specified in RFA and the EO. Although this would be internal documentation that would not be available for stakeholders to review, a documentation requirement could provide greater assurance that regulation drafters obtain and consider information analogous to that required by RFA and the EO.

Benefits of Discussing Draft Regulations With Stakeholders

Whether or not Treasury and IRS incorporate RFA and EO requirements into the internal procedures for developing regulations, once regulation drafters basically have fixed on a regulatory scheme, that scheme may provide a valuable focus for obtaining stakeholder reactions. Working through the implementation consequences of a draft regulatory proposal with stakeholders could help promote communications with stakeholders and develop regulation drafters’ knowledge of businesses affected by their regulations. As they began considering how to revise the employment tax deposit regulations, IRS officials invited stakeholders to provide suggestions for how the regulations might be improved. Treasury and IRS officials also met with various stakeholders to discuss ideas for revising the regulations. In addition, congressional hearings, at which many stakeholders testified, were held on two legislative proposals to replace the employment tax deposit regulations.
Despite these opportunities to provide their views, most of the stakeholders commenting on the proposed regulations during IRS’ August 1992 hearing raised concerns about whether the proposed regulations would simplify the process and whether they met the needs of small businesses. It is not clear why some of these concerns surfaced so late in the process. For example, some of those who expressed concerns in their written comments on the proposed regulations had opportunities as early as May 1990 to provide input to IRS. In part, the stakeholders may not have raised their concerns because they did not have a specific proposal to react to during the earliest opportunities they had to meet with or otherwise provide input to IRS or Treasury officials. However, when hearings were held in 1991 on the House and Senate bills addressing how the employment tax deposit regulations should be simplified, stakeholders did have specific proposals to react to. Stakeholders from several organizations testified on the House and Senate bills. In general, they concluded that the proposed legislation, particularly in the Senate bill, would improve existing deposit rules significantly. Given the overall support for the congressional bills and the relatively few suggested modifications—especially to the Senate bill—Treasury and IRS officials adopted the Senate bill’s approach to simplifying the employment tax deposit regulations. On the basis of our discussions with stakeholders and our review of the comments offered on the proposed regulations, it appears that stakeholders did not thoroughly analyze some of the implementation issues associated with the legislative bills until after hearings had been held. For example, the Tuesday/Friday deposit dates specified in the congressional proposals were not raised as a problem by witnesses in the congressional hearings. 
However, witnesses at the IRS hearing did have concerns, and Treasury and IRS responded by changing the deposit dates to Wednesdays and Fridays. Similarly, witnesses at the congressional hearings did not raise concerns about having the data necessary to implement the “look back” rule, but data availability issues were raised at the IRS hearings. In contrast, some witnesses in both the congressional and IRS hearings pressed for higher thresholds for businesses to qualify for monthly depositor status and for retaining the 5 percent safe harbor rule, which enabled employers to avoid penalties for underpayment of taxes if their shortfall was no more than 5 percent of taxes due and the shortfall was deposited by a specified make-up date. Thus, regardless of why stakeholders did not raise concerns about the legislative proposal, the emergence of implementation concerns from stakeholders’ analyses of Treasury’s proposed revisions to the regulations suggests the importance of a specific proposal in obtaining the most useful input from stakeholders. The notice and comment process Treasury and IRS followed by publishing the proposed regulations did provide an opportunity for stakeholders to analyze the implementation consequences associated with the regulatory approach. However, the notice and comment process did not provide the forum for dialogue desired by a significant portion of stakeholders. Important, although somewhat intangible, additional benefits could result if such analyses were done as part of Treasury and IRS officials’ efforts to develop the proposed regulations themselves. Meeting with stakeholders to work through the implementation issues associated with draft regulations before the regulations are published for notice and comment would be a step toward providing the level of dialogue with regulatory policymakers that certain stakeholders perceived was lacking in the development of the employment tax deposit regulations.
To the extent that such meetings facilitated a two-way dialogue, communications between regulatory officials and stakeholders could be more productive and the officials’ understanding of the businesses their regulations affect could be increased. Doing such analyses before regulations are proposed for comment would complement the purposes of RFA and the EO and would be in concert with IRS’ Compliance 2000 approach. Treasury and IRS officials suggested that working through the implementation consequences of draft regulations with stakeholders could present problems. Many tax-related regulations affect a broad spectrum of taxpayers and professions that provide tax services to them. Officials were concerned that they could not meet with representatives of all potentially affected parties and that parties not included likely would object. Deciding which stakeholders to include in meetings is a practical problem. However, regulation drafters informally seek input from various stakeholders now. One of the concerns of the stakeholders who were dissatisfied with the process used to develop the revised employment tax deposit regulations was that this informal communication appeared to favor certain stakeholders over others. Thus, ensuring balanced and fair inclusion of stakeholders would not appear to be a deciding factor in determining how to obtain comments since it would apply to both how officials currently interact with stakeholders and to any future meetings that might be held with stakeholders to work through the implementation consequences of draft regulations.

Measures of Simplicity Could Result in Better Informed Judgments

According to Treasury and IRS officials, to determine whether a regulation has been simplified, they must consider multiple objectives.
Thus, they must judge such things as whether the revised regulations treat stakeholders fairly and whether the burden imposed on the affected parties is minimized without sacrificing acceptable compliance levels. In the specific case of the employment tax deposit regulations, a Treasury official noted that officials also were concerned that the revised regulations neither gain nor lose significant amounts of revenue. Balancing these sometimes conflicting objectives—e.g., fairness may require exceptions for specific unusual cases but such exceptions can add complexity and burden—restricts the ability of regulators to achieve fully any one objective like simplification. However, certain information could be useful for officials to reach an informed judgment about whether a regulation has been simplified and is successful over time. In general, the officials involved in the process of developing and approving a regulation make a judgment as to whether it has been simplified. According to IRS officials, to judge whether a regulation has been simplified, officials consider how closely the final regulation corresponds to the comments of those stakeholders who would be affected by any change. According to this criterion, Treasury and IRS officials and virtually everyone we interviewed agreed that the final employment tax deposit regulations had been simplified, especially given the revenue and compliance constraints that also had to be met. In our opinion, Treasury and IRS officials could make more informed judgments about whether simplification has been achieved if they had information that indicated whether simplification was likely. For example, will fewer steps be required of taxpayers to comply with a regulation? Will the time it takes to comply be reduced? Will the number of records or amount of information that taxpayers must assemble and maintain be reduced? 
For the employment tax deposit regulations, the answers to such questions indicate that simplification was achieved. Fewer steps are now involved in determining when deposits are due. In appendix II, we show the steps involved before and after the simplification effort. For many employers, the time required to comply should decrease since they no longer must continuously monitor their employment tax liabilities to determine when they should make deposits. Employers may need to retain somewhat more information on their past employment tax liabilities to determine under the look back rule what their filing frequency will be for the forthcoming year. However, they can avoid retaining information if they rely on the notification of filing status that IRS will send to employers before each calendar year. Treasury officials said that at least some simplicity measures were considered as the employment tax deposit regulations were being revised. For instance, officials analyzed information to determine how many small employers would move from semiweekly depositor status to monthly depositor status at different thresholds for determining that status. Officials worked to establish a threshold that moved the greatest number of small employers to the monthly depositor status while maintaining revenue neutrality in the regulatory change. On the other hand, officials cautioned against placing emphasis on developing and using measures of simplicity. Their reservations included that (1) developing and using such measures would require more resources or would divert resources from regulatory efforts; (2) it would be very difficult to develop meaningful measures; and (3) simplicity must be balanced with other objectives, such as equity and administrability of regulations. 
While measuring simplicity is difficult, and balancing it against other regulatory objectives requires judgment, in our opinion judgments can be made on a more informed basis if measures of simplicity are used as reference points. To control the number of such measures developed and the associated resources required to collect and maintain the measures, officials may wish to agree on a set of key simplicity measures for any particular regulation. In the case of the employment tax deposit regulations, the number of small businesses qualifying for monthly depositor status was one such measure. Another measure at least implicitly used in determining that the employment tax deposit regulations were too complex was the number of taxpayers subject to penalties each year. Having used such measures in revising the employment tax deposit regulations, officials also have a means for determining the success of the revisions over time. By checking whether the number of penalties assessed falls and remains lower over time, and whether the number of monthly depositors rises to expected levels and remains there, officials would be able to judge on a more informed basis whether the revised regulations should be revisited in the future. In addition, identifying and using simplicity measures would complement IRS’ objective of reducing taxpayer burden, which is one of three objectives in IRS’ fiscal year 1994 strategic business plan. To assess burden, IRS is developing a system to measure the burden of complying with tax law.

Conclusions

The final employment tax deposit regulations published in September 1992 are widely considered to be significantly simpler and easier to apply than earlier versions of the regulations. Treasury and IRS officials developed the regulations by soliciting input from stakeholders—those who would be affected by changes to the regulations. Involving stakeholders in the process is a basic strategy employed under IRS’ Compliance 2000 approach.
Stakeholders we interviewed agreed that their concerns were considered and acted upon by Treasury and IRS officials in the development of the final regulations. One stakeholder even considered the development of this regulation to be a model for how Treasury and IRS officials generally should develop new or revised regulations. Despite the widespread satisfaction with the final employment tax deposit regulations, certain stakeholders were dissatisfied with the process followed by Treasury and IRS officials as they revised the regulations. Concerned stakeholders did not believe that an adequate dialogue had been established with Treasury or IRS officials or, in some cases, believed that officials should have followed the procedures specified in RFA or EO 12291. Given such things as the diversity of interests among the stakeholders who may be affected by tax regulations, the time constraints under which Treasury and IRS officials often must operate, and the sometimes conflicting goals that must be reconciled when tax regulations are written, complete stakeholder satisfaction is unlikely. Nevertheless, the employment tax deposit regulation experience suggests that Treasury and IRS officials could modify their practices to improve communications with stakeholders and provide greater assurance that stakeholders’ views will be obtained and considered. Communications clearly would be impeded when information is not made available. The confusion and frustration that some stakeholders experienced because the regulatory impact analysis requirements of EO 12291 were not followed might have been avoided. Several stakeholders believed that the regulations were major and thus subject to the EO. But the notice publishing the proposed regulations did not explain that even if the regulations were major under the EO’s criteria (which officials did not believe was the case), the EO did not apply pursuant to the existing memorandum of understanding between Treasury and OMB.
The new EO 12866 contains criteria similar to those in the revoked EO 12291 to be used in identifying regulations subject to the new EO’s requirements. However, according to Treasury officials, IRS’ interpretative regulations continue to be exempt from EO 12866’s requirements. Treasury and IRS could help forestall stakeholder confusion by providing this explanation in notices of proposed rulemaking, when applicable. In addition, although Treasury and IRS officials judged that the regulatory flexibility and regulatory impact analyses of RFA or EO 12291 did not have to be done for the employment tax deposit regulations, the principles underlying such analyses were complementary to the intent of officials to simplify employment tax deposit regulations for small businesses. The RFA and EO principles also are similar to those stated in IRS’ corporate objective to reduce the burden on taxpayers and, under Compliance 2000, to make regulations and procedures as simple and fair as possible. If Treasury and IRS adopted internal policies that require drafters of regulations to document, when time constraints permit, their consideration of the factors that are included in RFA and the new EO, these policies could help ensure that the principles will be applied consistently by officials who develop regulations. IRS officials say that they informally communicate with stakeholders while regulations are being developed. This communication should help the regulation drafters understand the effects that differing regulatory schemes could have on stakeholders and whether the regulatory approaches can be effective in achieving their purposes. The fact that stakeholders had concerns once they had analyzed the proposed employment tax deposit regulations suggests that the value of communicating with stakeholders may be related to how complete the regulatory proposal is when informal communications occur. 
It is true that the notice and comment process can, as it did with the employment tax deposit regulations, trigger stakeholder analyses that help identify improvements needed in proposed regulatory approaches. Earlier recognition of those concerns could improve communications between regulators and stakeholders. In the case of the employment tax deposit regulations, such focused informal communications could have lessened the need to rework the proposed regulations and could have forestalled the impression among some stakeholders that Treasury and IRS officials were not giving balanced consideration to the concerns of all parties. A major current IRS objective is to simplify tax laws and regulations because complexity is considered a contributing factor to noncompliance. Whether simplification is achieved in any particular circumstance or overall in the tax system is a somewhat subjective judgment. According to Treasury and IRS officials, although regulatory guidance did not require officials to do so, they used some measures of simplicity as the employment tax deposit regulations were revised to make judgments concerning the balance between achieving simplicity and obtaining other regulatory objectives. By explicitly identifying key simplicity measures, using them while developing regulations, and continuing to use them to gauge whether the final regulations are successful, Treasury and IRS would better ensure that informed judgments are made and that these judgments would be consistent with IRS objectives.

Recommendations to the Secretary of the Treasury

To help forestall stakeholder confusion and frustration regarding the applicability of statutory and executive guidance to tax-related regulations, we recommend that the Secretary of the Treasury direct that when such guidance is not applicable the text accompanying the publication of proposed and final regulations should contain a complete explanation of why this is so.
We also recommend that the Secretary require that regulation drafters document internally, when time constraints permit, their consideration of the factors provided in such statutory and executive guidance to better ensure that tax regulations reflect stakeholders’ needs. To maximize the value of informal communications with stakeholders, we recommend that the Secretary encourage regulation drafters to meet with selected stakeholders to work through implementation issues associated with draft tax regulations before publishing the regulations for notice and comment. To better ensure that a well-informed basis exists for Treasury and IRS officials to make judgments concerning whether simple, yet effective, regulations have been designed, we recommend that the Secretary of the Treasury require regulation drafters to develop key measures of simplicity for tax regulations. Officials should use these measures to help judge whether existing regulations are too complex and whether regulations under development are sufficiently simple.

Agency Comments

In commenting on a draft of this report, Treasury’s Commissioner, Office of Tax Policy, and the IRS Chief Counsel said they were generally very pleased with the conclusions set forth and generally agreed with the recommendations. However, the officials disagreed with certain statements in the report dealing with the issue of how the flexibility analysis of RFA and the regulatory impact analysis of EO 12291 apply to IRS regulations in general. The officials considered some of these statements to be inaccurate. We made appropriate changes to ensure that the report accurately portrays the RFA and EO requirements. In addition, the officials interpreted other statements in the report as strongly suggesting that all IRS regulations should be subject to the analytical requirements of RFA and the EO.
The officials believed that it would be inappropriate to draw such a conclusion from an analysis of, and some stakeholders’ statements concerning, the development of one regulation. On the other hand, the officials did not object to the specific recommendation made in the draft. We revised some of the text in the report to remove any implication that all IRS regulations should be subject to the analytical requirements of RFA and the EO. The pertinent recommendation in the draft report recognized, for example, that time constraints would not always permit regulation drafters to adhere to the analytical requirements of RFA and the EO. We also modified our recommendation to clarify that regulation drafters’ documentation of their consideration of the factors contained in RFA and applicable executive branch guidance would be for internal purposes. In our opinion, by requiring that drafters of regulations internally document their consideration of the factors in the RFA and executive guidance, Treasury and IRS would increase assurance that these factors, which complement IRS goals in developing regulations, will be weighed consistently by officials as they develop regulations. Such internal documentation, however, would not go beyond what Congress or the president may have intended when they designed the procedures applicable to developing regulations. 
Objectives, Scope, and Methodology

Our objectives were to determine (1) whether Treasury and IRS developed the employment tax deposit regulations by applying principles from IRS’ Compliance 2000 approach, which is designed to improve voluntary taxpayer compliance, reduce taxpayer burden, and increase IRS’ attention to the needs of those affected by its actions; (2) whether and, if so, how the process used by Treasury and IRS to develop and revise the regulations could be improved; and (3) how Treasury and IRS officials know when their efforts to develop and revise regulations result in regulations that are sufficiently simple and easy to follow. We discussed all three objectives with IRS officials and obtained written information from IRS. In addition, to determine if IRS followed Compliance 2000 to revise the federal employment tax deposit regulations, we reviewed IRS documents describing Compliance 2000 to obtain an understanding of its principles and requirements. We discussed IRS’ adherence to Compliance 2000 with IRS officials and various stakeholders who participated in the process of developing the revised regulations. We also interviewed IRS officials and stakeholders who commented on the proposed regulations to understand the history of the development of the revised regulations and to determine how the views of all parties were considered. From those who had been involved in the development of the revised regulations, we selected a judgmental sample of 10 stakeholders who represented those in federal and state governments, Congress, and private industry who would be affected by the revised regulations or who were knowledgeable about the process of developing tax-related regulations.
To determine whether Treasury and IRS followed required procedures and whether the process used could be improved, we obtained from the appropriate IRS officials a description of the statutory, regulatory, and internal processes that must be followed when Treasury and IRS develop a regulation. We analyzed the requirements that IRS must adhere to when it develops regulations to determine whether IRS complied with those requirements in developing these interpretative regulations. Further, to determine how well these requirements were followed, we also obtained the opinions of various stakeholders and Treasury and IRS officials. To determine how Treasury and IRS officials knew whether the proposed regulations had been simplified, we interviewed appropriate Treasury and IRS officials. We also asked various stakeholders how Treasury and IRS officials could determine whether regulations were sufficiently simple. We obtained written comments on a draft of this report from Treasury and IRS and incorporated their comments where appropriate. (See app. III for the full text of Treasury’s and IRS’ comments.) We did our work from August 1992 to July 1993 in accordance with generally accepted government auditing standards. We are sending copies of this report to various interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. The major contributors to this report are listed in appendix IV. Please contact me on (202) 512-5407 if you or your staff have any questions concerning this report.

Background

Employers who withhold income and Social Security taxes are required to deposit these employment taxes under the federal employment tax deposit system.
Under section 6302(c) of the Internal Revenue Code, the Secretary of the Treasury has the authority to set the requirements for when employers must deposit employment taxes. The frequency of deposits and when the deposits are due are determined by the amount of taxes withheld and when paydays occur. In July 1990, we reported that the rules for depositing employment taxes were complex and resulted in nearly one-third of all employers being penalized in 1988 for failing to make timely deposits. We recommended that IRS simplify the employment tax deposit rules by making the deposit date more certain and by exempting significant numbers of small employers from frequent deposit requirements. Having earlier reviewed our July 1990 report, IRS solicited suggestions from the public for ways to improve the employment tax deposit rules. It did so as part of a notice (Notice 90-37, May 21, 1990) that provided information for the public on changes Congress had made to a related penalty. IRS received about 30 responses. Later that year, IRS and Treasury officials attended a roundtable discussion sponsored by the American Institute of Certified Public Accountants (AICPA) that focused on changes needed in the employment tax deposit system. IRS developed an internal draft of revised regulations by December 1990, and a meeting was held in March 1991 to consider the input of the payroll community. Approximately 30 outside organizations were represented. However, at about this time, the House Committee on Ways and Means began to consider legislative changes to address the employment tax deposit problems. Therefore, Treasury and IRS officials suspended their efforts to revise the regulations. The Senate Committee on Finance also considered a bill to address the problems. Both Committees held hearings that covered the employment tax issue, and a simplified employment tax deposit rule was incorporated in H.R. 
4210, which was subsequently vetoed for reasons unrelated to the employment tax deposit regulations. During the fall of 1991, IRS drafted regulations to implement the House and Senate bills. After the veto, Treasury and IRS began to revise the draft regulations. On May 18, 1992, IRS published in the Federal Register (57 FR 21045) a notice of proposed changes to the employment tax deposit regulations. The proposed regulations were similar to the provisions contained in the vetoed bill. The existing employment tax deposit process required employers to monitor and accumulate employment taxes from payday to payday until one of four separate deposit rules (quarterly, monthly, eighth-monthly, or daily) was triggered (see app. II for a depiction of the decisionmaking process required under these rules). Under the deposit rules, deposit requirements could change from month to month. Employers had difficulty determining when deposits were due and could inadvertently switch from one rule to another and be penalized for failure to make timely deposits. The eighth-monthly deposit rule was particularly complicated since it divided the month into eight parts of varying lengths. IRS’ proposed changes sought to simplify the employment tax deposit regulations in part by classifying a greater portion of employers as small employers and letting such small employers deposit employment taxes less frequently, generally monthly. The proposed regulations increased the number of employers classified as small employers by raising the threshold for those qualified to make monthly deposits. Previously, anyone with a tax liability of less than $3,000 in a calendar month would have deposited on a monthly basis for that month. The proposed regulations specified that a taxpayer with a quarterly liability of $12,000 or less during the reference period would deposit on a monthly basis for a calendar quarter. 
For those above the small employer threshold, the proposed regulations simplified the deposit schedule by designating specific days of the week, i.e., Tuesdays and Fridays, that deposits would be due. The proposed regulations also enabled an employer to look back on a quarterly basis, examine its deposit history for a 1-year period, and determine whether it would be a monthly or semiweekly depositor for the next quarter. The proposed regulations also modified the IRS “safe harbor” rule, which allowed employers that did not deposit the full amount of taxes due to avoid penalties as long as the shortfall was no more than 5 percent and the shortfall was deposited by a specified make-up date. The proposed regulations decreased the allowable shortfall to 2 percent of the amount due or $100, whichever amount was greater. IRS received written comments responding to the Federal Register notice, and a hearing was held on August 3, 1992. The comments suggested changes to the proposed employment tax regulations, which included modifying the semiweekly deposit rule, increasing the threshold for monthly deposits, changing the look back period (the period for which an employer would review its deposit history and determine its future deposit schedule), altering the safe harbor threshold, and reconsidering the implementation date. On August 19, 1992, Treasury and IRS held a meeting with representatives of Members of Congress and small business. Treasury and IRS held a second meeting on August 20, 1992, with members of the payroll community. Each group was informed of IRS’ most recent proposals and tentative decisions about the regulations. After Treasury and IRS officials considered the written and oral comments on the proposed regulations, the final regulations were issued on September 24, 1992. These regulations replaced the existing employment tax deposit process with a new one that is considered to be significantly simpler and easier to understand and comply with. 
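The modified safe harbor described above is a simple arithmetic test. The following is a minimal sketch in a Python framing of our own; the function name is illustrative, and the requirement that the shortfall be deposited by the make-up date is not modeled:

```python
def within_safe_harbor(taxes_due, amount_deposited):
    """Check an underdeposit against the 2 percent/$100 safe harbor.

    A shortfall is excused if it is no more than the greater of $100
    or 2 percent of the taxes due (the shortfall must also be made up
    by a specified date, which this sketch does not model).
    """
    shortfall = taxes_due - amount_deposited
    if shortfall <= 0:
        return True  # nothing underdeposited
    allowance = max(100.0, 0.02 * taxes_due)
    return shortfall <= allowance

# For a $40,000 liability, 2 percent ($800) governs because it exceeds $100:
print(within_safe_harbor(40_000, 39_300))  # True: $700 shortfall is within the allowance
print(within_safe_harbor(40_000, 39_000))  # False: $1,000 shortfall exceeds it
```

For small liabilities the flat $100 floor governs instead, which is why the rule states the allowance as the greater of the two amounts.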
The new employment tax regulations basically treat an employer as either a monthly depositor or a semiweekly depositor. The semiweekly deposit rule changed so that deposits are made on Wednesdays or Fridays. According to IRS, as long as an employer deposits employment taxes within 3 banking days after a payroll, it will always satisfy the semiweekly rule. In addition, the final regulations incorporate the statutory requirement that employers that accumulate employment taxes of $100,000 or more during any deposit period must deposit those taxes on the first banking day after the $100,000 is reached. This rule applies to both monthly and semiweekly depositors. The final regulations also increased the dollar threshold for determining whether an employer is a monthly depositor or a semiweekly depositor. The threshold increased from $12,000 per quarter of employment taxes to $50,000 per year. An employer that reported $50,000 or less in taxes during the look back period would deposit monthly. Conversely, an employer who reported more than $50,000 would be a semiweekly depositor. Further, under the final regulations, employers can determine their deposit status for an entire calendar year rather than for each quarter. The look back period for each calendar year is the 12-month period that ended the preceding June 30. In its Federal Register announcement issuing the new regulations, IRS also committed to determining an employer’s deposit status and notifying the employer before the beginning of each calendar year. In finalizing the regulations, IRS did not modify its proposed safe harbor; the shortfall amount remained at $100 or 2 percent of the amount of employment taxes required to be deposited, whichever was greater. 
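The final rules described above reduce to three simple tests, sketched below. This is a simplified illustration under stated assumptions (single annual liability figure for the look back period, no banking-day logic, no make-up-date check); the actual regulations contain additional conditions.

```python
# Illustrative sketch of the final deposit rules, as described in the text.

def depositor_status(lookback_liability: float) -> str:
    """$50,000 or less reported during the look back period (the 12 months
    ending the preceding June 30) => monthly depositor; otherwise semiweekly."""
    return "monthly" if lookback_liability <= 50_000 else "semiweekly"

def next_day_deposit_required(accumulated_liability: float) -> bool:
    """Employment taxes of $100,000 or more accumulated during any deposit
    period must be deposited on the first banking day after the threshold
    is reached, regardless of depositor status."""
    return accumulated_liability >= 100_000

def within_safe_harbor(required: float, deposited: float) -> bool:
    """A shortfall of no more than the greater of $100 or 2 percent of the
    required deposit avoids a penalty (assuming it is deposited by the
    make-up date, which this sketch does not model)."""
    shortfall = required - deposited
    return shortfall <= max(100.0, 0.02 * required)

print(depositor_status(48_000))            # monthly
print(depositor_status(60_000))            # semiweekly
print(within_safe_harbor(20_000, 19_650))  # True (shortfall $350 <= $400)
```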
IRS retained the January 1, 1993, date for implementing the new regulations, but it provided a 1-year transition period so that employers had until December 31, 1993, to change to the new process if they needed to take longer to adapt their systems to the new requirements. Old and New Employment Tax Deposit Processes The diagrams in figures II.1 and II.2 show the old and new employment tax deposit processes. Among other changes, the new process reduced the number of rules for determining how often employment taxes are due and substituted two fixed days of the week for the eighth-monthly periods previously used by relatively large employers to determine when their deposits had to be made. Figure II.1: Old Employment Tax Deposit Process Figure II.1 depicts the old process used by employers for determining when employment taxes were due. Under this process, employers were burdened with a series of rules from payday to payday. As shown in the figure, when employee wages were paid at the end of the pay period, employers accumulated their tax liabilities and then determined which of the four deposit rules applied for that pay period. Depending on the deposit requirement, different modes of payment were required. The use of the eighth-monthly deposit period added to the complexity of this process. The eight periods between the dates the deposits would be due varied in length from 3 to 6 days, depending on the specific period and the month involved. The amount of time an employer would have after a payday to make a deposit varied from 3 to 8 days, depending upon the length of the deposit period as well as where in the eighth-monthly period the payday fell. To comply with the changing deposit requirements, employers monitored undeposited employment taxes from payday to payday to determine when changes in employment tax amounts would trigger a different deposit rule that required an earlier deposit, as well as when each eighth-monthly period ended.
Otherwise, the employer could unintentionally make a late deposit and be penalized. Figure II.2 illustrates the new employment tax deposit process. The process is streamlined, and the number of rules employers were required to follow under the old process has been reduced. Under the new rules, an employer's status as either a monthly depositor or semiweekly depositor is determined annually. The look back period for each calendar year is the 12-month period that ended the preceding June 30. IRS will notify employers of their status before the beginning of each calendar year. This notification will provide employers with additional upfront certainty for determining their deposit obligations. For example, an employer that reported $50,000 or less in employment taxes for the period July 1, 1992, through June 30, 1993, generally would be a monthly depositor during calendar year 1994. An employer that reported more than $50,000 in employment taxes for that look back period would be a semiweekly depositor during 1994. The new rules enable employers to identify when their employment taxes will be due throughout a year, eliminating the need for the employer to continuously monitor employment tax liabilities and redetermine deposit due dates. Comments From the Internal Revenue Service Major Contributors to This Report General Government Division, Washington, D.C.: Michael Brostek, Assistant Director; Sharon T. Paris, Evaluator-in-Charge.
Pursuant to a congressional request, GAO reviewed the revised federal employment tax deposit regulations issued by the Department of the Treasury and the Internal Revenue Service (IRS) on September 24, 1992, focusing on: (1) whether the Treasury and IRS developed the regulations by applying principles from the IRS Compliance 2000 initiative; (2) how the revision process could be improved; and (3) how Treasury and IRS officials know when their efforts to develop and revise regulations result in regulations that are sufficiently simple and easy to follow. GAO found that: (1) the new tax deposit regulations are considered to be significantly simpler and easier for stakeholders to understand and comply with than the proposed regulations; (2) the new regulations provide most employers with a fixed deposit rule that they can follow for an entire calendar year; (3) in keeping with Compliance 2000, IRS obtained stakeholders' input throughout the revision process; (4) although stakeholders are satisfied with the final employment tax deposit regulations, certain stakeholders are dissatisfied with various aspects of the regulatory development process; (5) although Treasury and IRS officials are not always able to interact with all stakeholders to the extent the stakeholders desire, Treasury and IRS officials could improve their future communications with stakeholders by directing drafters' attention to stakeholders' concerns; and (6) IRS officials used some measures to gauge the simplicity of the revised regulations and to balance simplicity with other regulatory objectives while revising the employment tax deposit regulations.
Background Federal policy aimed at promoting sustainability in federal facilities sets goals for reducing greenhouse gas emissions and implementing key green building requirements, among other areas. Green building goals established by executive order in 2009 built on previous efforts to establish federal green building policy. Figure 1 shows the timeline of sources of key green building requirements from 2005 through June 2015. In March 2015, the third executive order to require compliance with the Guiding Principles—Executive Order 13693—revoked two prior executive orders and certain other green building policies and extended the time frames for agencies' existing buildings to comply with the Guiding Principles from 2015 to 2025. Key federal green building requirements include dozens of specific requirements related to five Guiding Principles: employ integrated design principles, optimize energy performance, protect and conserve water, enhance indoor environmental quality, and reduce the environmental impact of materials. The requirements range from reducing water consumption to improving indoor environmental quality, including tobacco smoke control and daylighting requirements. See appendix I for the specific requirements included in the Guiding Principles, which are currently undergoing revision. CEQ officials said that the revisions will include consideration of climate change resilience and employee and visitor wellness as called for in Executive Order 13693. The current criteria for determining whether a building complies with the Guiding Principles include either (1) demonstrating a building was compliant with each of the five Guiding Principles or (2) documenting that a commitment to third-party certification for a building was made prior to October 1, 2008, and that the building obtained the certification.
In addition, for leased buildings, a building is considered compliant if either (1) the building was third-party certified at any time or (2) the agency demonstrated compliance with the appropriate set of Guiding Principles (those for new construction and major renovations or for existing buildings). As of June 12, 2015, the revised Guiding Principles were not complete, but CEQ officials told us that they are working toward meeting the August 16, 2015, deadline to complete the revision. OMB's sustainability and energy scorecard assesses federal agency performance in meeting federal sustainability goals. The goal for green building is based on the extent to which agencies meet intermediate goals toward the 2015 goal of implementing the Guiding Principles for all new construction and major renovation and at least 15 percent of existing buildings and leases over 5,000 square feet. In fiscal year 2013, 10 of the 16 agencies that received green building scores had not met intermediate goals, or could not demonstrate compliance with the Guiding Principles for new construction, major renovations, or leases, and received a red score on the scorecard. Of the 5 select agencies we reviewed, 2 received a red score—DOD and DOE—and 3 received a green score—EPA, GSA, and VA. Federal agencies have been using third-party green building certification systems since the late 1990s. The third-party certification systems most commonly used in the United States and by federal agencies are the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED) and the Green Building Initiative's Green Globes. Buildings achieve different rating levels within the certification systems depending on how many points are earned. LEED's rating levels include Certified, Silver, Gold, and Platinum; Green Globes' rating levels include one, two, three, or four Green Globes.
These systems have certifications for the design and construction of new buildings and the operations and maintenance of existing buildings, among others. The Living Building Challenge is another third-party certification system that was reviewed by Pacific Northwest National Laboratory in 2012. However, representatives of the International Living Future Institute, which administers the system, told us the system has not been used by federal agencies. See appendix III for more information on LEED, Green Globes, and the Living Building Challenge. With respect to new construction and major renovations, EISA requires the Secretary of Energy, in consultation with GSA and DOD, to identify a certification system and level that the Secretary determines to be the most likely to encourage a comprehensive and environmentally sound approach to certifying green buildings. GSA is required to evaluate and compare third-party green building certification systems at least once every 5 years to support DOE's recommendation. In 2013, GSA recommended that federal agencies obtain at least a LEED Silver rating or, if using Green Globes, at least two Green Globes for new construction and major renovations. As part of GSA's evaluation of certification systems, it recommended in 2013 that federal agencies continue to use these systems. In addition, in 2013, the National Research Council issued a report that recommended that DOD continue to require that new buildings and major renovations use LEED Silver or an equivalent system. See below for more information on federal reviews of third-party green building certification systems. Section 2830 of the National Defense Authorization Act for Fiscal Year 2012, Pub. L. No. 112-81, 125 Stat. 1298, 1695 (Dec.
31, 2011), required the Secretary of Defense to submit a report to the congressional defense committees with a cost-benefit analysis, return on investment, and long-term payback of specific energy-efficiency and sustainability standards used by DOD for military construction and renovation. DOD requested the National Research Council establish a committee of experts to conduct an evaluation to inform its report to Congress. The National Research Council’s study states that the additional incremental costs to design and construct green buildings are relatively small when compared to the total costs over a building’s life cycle. Specifically, the study found that research studies indicate that the incremental costs to design and construct green buildings typically range from 0 to 8 percent higher than the costs to design and construct conventional buildings, depending on the methodology used in the study and the type of building analyzed. None of the studies focused on the long-term cost-effectiveness attributable to the use of green building certification systems. Summary DOE identified the criteria that a certification system must meet as required in the Energy Independence and Security Act of 2007. 
Specifically, the system under which the building is certified must: (1) allow assessors and auditors to independently verify the criteria and measurement metrics of the system; (2) be developed by a certification organization that: (i) provides an opportunity for public comment on the system; and (ii) provides an opportunity for development and revision of the system through a consensus-based process; (3) be nationally recognized within the building industry; (4) be subject to periodic evaluation and assessment of the environmental and energy benefits that result under the rating system; and (5) include a verification system for postoccupancy assessment of the rated buildings to demonstrate continued energy and water savings at least every 4 years after initial occupancy. The building must be certified to a level that promotes the guidelines referenced in Executive Order 13423 and Executive Order 13514. The objective of GSA's review was to determine the alignment between federal high-performance green building requirements and three LEED v4 systems—the current version of LEED at the time of the review. GSA found that these systems did not fully align with all of the federal requirements. GSA recommended that agencies, among other items, continue using third-party certification systems; select one system at the agency or bureau level, either LEED or Green Globes; and use system credits that align with federal requirements. The National Defense Authorization Act of 2012 required DOD to submit a report to Congress on the impact of specific energy efficiency and sustainability standards used by DOD for military construction and repair. The National Research Council conducted the study on DOD's behalf and recommended that DOD continue to require that new buildings or major renovations be designed to achieve a LEED-Silver or equivalent rating.
It also found that the incremental costs to design and construct high-performance or green-certified buildings are relatively small compared to the total costs over a building's life cycle. Pacific Northwest National Laboratory's review analyzed three systems—LEED, Green Globes, and the Living Building Challenge—against multiple criteria. The review found that none of the systems completely aligned with all of the federal requirements. Federal Efforts to Support Implementation of Key Green Building Requirements Include Oversight, Training, and Other Tools Provided by Several Agencies Several agencies—CEQ, DOE, EPA, GSA, and OMB—provide oversight, training, and other tools to support agencies' implementation of key federal green building requirements. Officials from these supporting agencies told us that when the Guiding Principles are revised later this year, they will need to update some of their efforts. Below are examples of federal efforts to support agencies' implementation of key green building requirements. A more detailed list of federal efforts to support agencies is included in appendix II. OMB and CEQ provide guidance and oversight of agencies' implementation of key green building requirements. CEQ evaluates and OMB approves agency Strategic Sustainability Performance Plans—annual documents that describe an agency's strategy and plans for, and progress toward achieving, green building and other sustainability goals. CEQ provides agencies with a template each year that includes guidance on how to report agency progress toward implementing the Guiding Principles for its buildings, along with other sustainability goals such as agency-wide greenhouse gas reductions and water use efficiency and management. CEQ is required to review and evaluate the plans, and OMB is required to review and approve the plans. According to OMB staff, the review ensures agencies have addressed all relevant sustainability goals, including green building.
As discussed above, OMB’s annual sustainability and energy scorecards score agencies on whether they make progress toward sustainability goals, including the goal for green buildings— implementing the Guiding Principles for all new construction and major renovations and for at least 15 percent of existing buildings over 5,000 square feet. OMB staff told us fiscal year 2015 scorecards will continue to evaluate progress toward the 2015 goal outlined in Executive Order 13514, but it will need to update the metric for fiscal year 2016 to reflect the revised Guiding Principles and revised agency goals as outlined in Executive Order 13693. DOE provides training, benchmarking, and other tools to support agencies’ implementation of key green building requirements. Officials from DOE’s Federal Energy Management Program (FEMP) described the program as being on the front line of providing assistance to other agencies regarding sustainability issues. It provides education, training, guidance, and technical assistance for agencies implementing key green building requirements. Specifically, FEMP provides both web-based and in-person training on implementing the Guiding Principles and also offers web-based training on related topics, such as best practices in operations and maintenance. Several of the agencies we spoke with told us their staff has participated in FEMP training on the Guiding Principles. The web-based, on-demand training provides an overview of each of the five Guiding Principles and covers best practices for measuring and reporting on implementation. FEMP officials told us this training will have to be updated to reflect the revised Guiding Principles. FEMP had not planned to update the training this year since the timing of the revisions was unknown until Executive Order 13693 set a deadline for completion of the revision, and officials told us updating the training may require a reallocation of FEMP’s current budget. 
FEMP also offers customized training for agencies. For example, GSA worked with FEMP to develop training sessions that provided customized information on GSA’s approach to documenting compliance with the Guiding Principles. DOE is also a resource for information for agencies with questions about key green building requirements. For example, a Navy official told us the Navy obtained assistance from FEMP subject matter experts about energy conservation measures and found the assistance it received very helpful. An official from DOE’s Sustainability Performance Office—its internal office that oversees departmental sustainability efforts—told us the official has reached out to DOE’s Pacific Northwest National Laboratory for assistance on technical matters, such as benchmarking water use and energy modeling. In addition, DOE provides support to agencies implementing requirements for buildings to benchmark energy use through its Labs21 energy benchmarking tool. Labs21 is a benchmarking tool designed specifically for laboratories, which are more energy intensive than other building types and, therefore, cannot be compared directly to other building types, such as office buildings. According to DOE, Labs21 enables agencies to compare the performance of their laboratories to similar facilities and thereby help identify potential energy cost savings opportunities. DOE also co-chairs—along with GSA—the Interagency Sustainability Working Group. According to FEMP officials, the working group provides officials from federal agencies a forum for information exchange and collaboration on sustainability issues. Bimonthly meetings include an opportunity for staff from each agency to highlight agency progress in green building, view presentations on a variety of sustainability issues, and network with staff from other federal agencies. 
According to FEMP officials, the working group is also a place for FEMP and GSA to get real-time feedback on agency needs, which they can then share with the Office of Federal Sustainability— formerly the Office of the Federal Environmental Executive—and OMB. EPA provides benchmarking and other tools to support agencies’ implementation of key green building requirements. EPA’s ENERGY STAR Portfolio Manager is a web-based system for federal agencies and other entities to measure and track data on buildings, such as energy and water use. Portfolio Manager has an energy benchmarking feature that agencies can use to implement the benchmarking requirement in the Guiding Principles. Specifically, the feature compares a building’s energy use to that of other, similar buildings and gives the building a score on a scale from 1 to 100—a score of 50 represents median energy performance, while a score of 75 or better indicates the building is a top performer. The Guiding Principles state a preference for agencies to use Portfolio Manager for energy benchmarking, and DOE guidance designates Portfolio Manager as the benchmarking system for federal buildings. According to an EPA official, it is unlikely that the benchmarking feature of Portfolio Manager will need to be substantially updated in response to the revised Guiding Principles. ENERGY STAR Portfolio Manager also includes a Sustainable Buildings Checklist that is designed specifically to assist agencies with assessing their existing buildings against the Guiding Principles. The checklist includes all five Guiding Principles and asks users to check whether the action has been completed, to identify the responsible team member, and to upload relevant supporting documentation. 
For example, to document compliance with the commissioning requirement in the Guiding Principles, a user can upload a commissioning report, or to document compliance with the energy efficiency requirement, a user can upload an ENERGY STAR certification. Agencies can track progress for individual buildings and across their building portfolio. The Sustainable Buildings Checklist may need to be revised when the Guiding Principles are revised, but an EPA official who manages ENERGY STAR could not comment on what resources may be needed to update the system without seeing the revisions. GSA provides educational tools and green leasing language to help agencies implement key green building requirements. GSA's Office of Federal High-Performance Green Buildings provides technical and best practice advice to federal agencies. For example, it developed the Sustainable Facilities Tool (SFTool), a web-based tool for facility managers, leasing specialists, and project managers that provides education on sustainability issues. SFTool allows users to explore a virtual building—including spaces such as a cafeteria, conference room, or reception area—to identify opportunities to incorporate the Guiding Principles and other sustainability requirements into a building project. SFTool also includes an annotated copy of Executive Order 13693 with hotlinks that define key terms or provide links to more detailed information or tools. Officials stated they will revise SFTool when the Guiding Principles are revised, but they do not expect to make major changes. GSA also has green lease policies and procedures and has developed green lease clauses that agency officials told us can be used to ensure a lease aligns with the Guiding Principles. According to GSA officials, they have developed more than 30 green lease clauses that may be appropriate for leases of different sizes and complexity.
GSA officials said they do not know how much time or effort will be required to update green leasing language in response to the revised Guiding Principles without knowing what the content of the revisions will be. However, officials said it could take 6 months or more to undergo the necessary reviews. Agencies Use Third-Party Certification to Help Implement Key Federal Green Building Requirements All five select agencies use third-party certification systems to help implement key federal green building requirements for new construction and major renovation projects. While third-party certification does not ensure that a building meets all of the key requirements, agencies we reviewed have developed various tools to ensure that any remaining federal requirements are implemented at their buildings after third-party certification and noted that there are additional benefits to using these systems beyond helping to implement key requirements. Of the select agencies we reviewed, none require third-party certification for existing buildings, but three of the agencies have developed their own systems for assessing the implementation of key requirements for existing buildings. Table 2 shows the third-party certification requirements for new construction and major renovation projects for each of the five select agencies, including the DOD military services. Officials from all five select agencies (DOE, EPA, GSA, VA, Air Force, and Army) told us that third-party certification helps ensure compliance with key green building requirements by holding contractors and agency project teams accountable for incorporating the requirements. EPA and GSA officials stated that requiring contractors to achieve third-party certification holds them accountable for incorporating sustainable elements into the design of a building.
EPA officials also said that the third party verifies that a contractor is completing the necessary documentation for certification, which can also be used by the agency to demonstrate compliance with key requirements. In addition, we heard from EPA, VA, Air Force, and Army officials that third-party certification can provide assurance that project teams are helping the agency to meet key requirements. Army officials stated that certification drives accountability for project teams. GSA headquarters and building-level officials told us that certification provided external validation that their projects accomplished what the project teams intended. Select agency officials noted that using third-party certification systems does not ensure that all of the key federal green building requirements are met. Pacific Northwest National Laboratory’s review of third-party certification systems found that, of the three systems reviewed, none fulfilled all federal green building requirements. Pacific Northwest National Laboratory evaluated the new construction categories for Green Globes, LEED, and the Living Building Challenge against 27 federal green building requirements and found that 10 of the 27 requirements were fully met using Green Globes, 11 using LEED, and 11 using the Living Building Challenge. Several select agencies (Air Force, Army, EPA, GSA, and VA) have developed crosswalks that align specific credit categories in third-party certification systems with key federal green building requirements. Officials at the National Renewable Energy Laboratory (NREL) stated that they used crosswalks developed by GSA and the Department of the Interior while designing its Research Support Facility, which obtained a LEED Platinum rating and, according to NREL’s 2014 Site Sustainability Management Plan, complies with the Guiding Principles. 
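A crosswalk of the kind the agencies describe is essentially a mapping from each federal requirement to the certification credits that satisfy it. The sketch below is hypothetical: the requirement and credit names are invented for illustration and are not taken from any agency's actual crosswalk.

```python
# Hypothetical crosswalk sketch: flag which federal requirements a
# certified project still needs to address outside of certification.
# Requirement and credit names below are invented for illustration.

CROSSWALK = {
    # federal requirement -> certification credits that satisfy it
    "reduce indoor water use": {"LEED: Indoor Water Use Reduction"},
    "benchmark energy use": {"LEED: Building-Level Energy Metering"},
    "moisture control": set(),  # no aligned credit: verified separately
}

def remaining_requirements(earned_credits: set) -> list:
    """Requirements not covered by any earned, aligned credit."""
    return [req for req, credits in CROSSWALK.items()
            if not credits & earned_credits]

earned = {"LEED: Indoor Water Use Reduction"}
print(remaining_requirements(earned))
# -> ['benchmark energy use', 'moisture control']
```

A real crosswalk would be far larger (Pacific Northwest National Laboratory evaluated 27 requirements), but the mechanics are the same: requirements whose credit set is empty, or whose aligned credits were not earned, must be verified through the agency's own tools.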
Officials from GSA's Office of Federal High Performance Green Buildings stated that once the Guiding Principles are revised, GSA may develop a new crosswalk between the Guiding Principles and third-party certification systems that agencies can use. Officials from several agencies (DOE, EPA, VA, and Air Force) said that such a document would be helpful. VA and Air Force officials noted that while a general crosswalk would be a good starting point, they would need to customize it based on their specific needs. For example, VA officials stated that they use the health care facilities-specific certification for medical centers, which is not very common across the federal government, and they would have to make sure that a general crosswalk made sense for those buildings. Air Force officials stated that the DOD policy and its crosswalk will be updated when the Guiding Principles are revised; in the past when updating DOD policy they used GSA guidance and customized it through the "DOD lens." Officials from agencies we spoke with said that their agencies use different tools to ensure that remaining federal requirements are implemented at their buildings after third-party certification. Several agencies developed guidance for project managers. For example, according to VA officials, its Sustainable Design Manual was developed to be a one-stop shop for new construction and major renovations, including guidance on how to meet requirements that are not covered by obtaining third-party certification. Several agencies (EPA, VA, Navy, Air Force, and Army) have developed a checklist that project managers must submit. The checklists provide guidance on what is needed to meet the requirements through third-party certification and by other means. The Army and Air Force checklists provide the text of the requirement, the statutory or executive source, and specific design elements that can be included to meet the requirement.
Several agencies we spoke with (DOE, EPA, GSA, and VA) require specific language in contracts to ensure that contractors comply with all requirements, even those that do not align with the third-party certification system. In addition to helping agencies implement key federal green building requirements, agency officials and building energy managers (DOE, EPA, GSA, Army, Air Force, Navy, OASD EI&E, and VA) that we spoke with mentioned other benefits of using third-party certification, including the following: Provides a well-established framework. Some third-party certification systems are recognized industry standards and familiar to contractors. An interagency group co-chaired by DOD, DOE, and GSA found that the main benefit of using third-party certification systems is that they have a robust infrastructure that is able to keep up with an evolving marketplace. Furthermore, in its review of third-party certification systems, Pacific Northwest National Laboratory reported that some federal agencies found the systems to be useful tools for documenting and tracking a building's progress toward meeting requirements. In addition, these systems offer frameworks for reducing energy and water use in buildings, compared with design approaches and practices used for conventional buildings, according to the National Research Council's review. The National Research Council's review also found that these systems can help establish explicit and traceable objectives for future building performance and a feedback loop to determine if the objectives were met. VA building-level officials stated that, because of the strict documentation requirements, they use a third-party certification system as a guide even when they do not pursue formal certification. Reduces need for additional staff. DOD officials (Air Force, Army, and Navy) stated that using third-party certification reduces the need for additional staff to conduct certain activities.
Specifically, current staff would have an increased workload or agencies would need additional personnel if they used their own system to validate a building’s compliance with the key requirements. Air Force headquarters and building-level officials stated they do not have sufficient personnel to implement their own system and that using a third party eliminates the need to rely on staff to ensure a building complies with key requirements. A Navy official stated that third-party certification provides a level of subject matter expertise that their staff currently do not have. Army officials also stated that third-party certifiers already have the subject matter expertise and for the government to gain that level of expertise would require significant time and effort. Serves as a communication tool. Officials from some agencies (Army, EPA, and OASD EI&E) and GSA building-level officials said that certification can be used as a tool to communicate their agencies’ sustainability efforts to their own staff, the public, and contractors. According to Army and OASD EI&E officials, third-party certification provides a common language across industry and government to evaluate and measure sustainability features. GSA building-level officials told us that obtaining certification was an important method for them to communicate GSA’s sustainability efforts to the public. Specifically, third-party certification provided a recognizable label to show the public the agency’s use of sustainable practices in the recent renovation of a large federal office building. EPA officials we spoke with stated that because a third-party system is a trusted brand, certification is as if a building has received a seal of approval. According to some agency headquarters and building-level officials (Air Force, Army, EPA, and Navy), although third-party certification can reduce the need for additional staff resources, certification is a resource-intensive process.
Some agency headquarters and building-level officials (Air Force, Army, Navy, and EPA) stated that the current process to complete certification involves some costs. The monetary costs for certification vary from project to project, according to several agency officials (Air Force, EPA, and VA). GSA and DOE building-level officials said that it was difficult to isolate the cost of certifying their buildings because certification fees were paid for by the contractors designing and constructing the building, so these costs are included as part of the overall contract award. Officials from GSA stated that the cost of certifying a new construction or major renovation project is, on average, 0.012 percent of the total project budget. A study completed in 2004 for GSA estimated that the documentation costs associated with obtaining LEED certification ranged from about $22,000 to about $34,000 per project, although GSA officials told us that since 2004 these costs have decreased as the market has changed. According to Green Building Initiative representatives, the typical total agency costs for Green Globes certification are about $12,000 to $30,000 per project. In addition to certification fees, some agencies also allocate staff resources for administrative purposes, such as reviewing the documentation submitted by contractors. Representatives of one third-party certification system stated that, in working with federal agencies, they have found that the biggest element of the cost of certification for the agencies is the agency staff time. A Navy official stated that the time needed to complete all of the documentation was a limitation because staff have other higher-priority responsibilities. According to Army officials, documentation to support certification also could be particularly challenging for less experienced project teams or for small contractors.
Despite the current staff resources needed to oversee third-party certification, Army officials stated that it is still less expensive to use a third-party system than to develop, execute, and oversee their own. The costs for the Army to obtain third-party certification are negligible relative to the costs of the design elements needed to meet key requirements, according to these officials. Officials from several agencies we spoke with are not certain how they will use third-party certification systems in the future. Air Force officials stated that they are currently updating the implementing guidance for its sustainability policy. As part of DOD’s process, OASD EI&E and Air Force officials are determining how the use of third-party certification for new construction projects will be most valuable to help ensure and demonstrate compliance with federal requirements, which could include the use of certification systems aimed specifically at assessing compliance with the Guiding Principles. According to EPA and VA officials, the agencies may reevaluate the use of third-party certification depending on the new version of the Guiding Principles. A DOE official said that it will continue to allow the use of third-party certification but may not require it anymore. While none of the five select agencies require third-party certification of existing buildings, three agencies (EPA, GSA, and VA) developed their own systems for assessing the implementation of key requirements at existing buildings. GSA developed a methodology using a third-party certification system, the LEED Volume Program for Operations and Maintenance, as a framework to identify the type of documentation needed to achieve certification, as well as compliance with key federal requirements. GSA mapped each of the Guiding Principles, federal regulations, and mandates, and the agency’s operational policies against one or more LEED for Existing Buildings credit categories.
It found that, in some cases, GSA’s policies were more restrictive than LEED’s and, in other cases, LEED’s requirements were more restrictive. The methodology GSA developed requires a building to meet the most restrictive category, whether it is based on the third-party certification system or GSA policy. According to GSA, project teams can meet approximately 80 percent of key requirements by obtaining LEED-Certified for Existing Buildings Operations and Maintenance. In addition, on an annual basis, GSA officials said that they use the LEED Volume Program for Operations and Maintenance to pursue certification for approximately one existing building in each of its 11 regions. Agencies Face Challenges Implementing Key Requirements Based on Their Building Inventories, Missions, and Competing Priorities Select agencies face challenges implementing key federal green building requirements because of the characteristics of their building inventories, mission-related concerns, competing priorities, and the criteria used to evaluate compliance with the Guiding Principles, which can be a disincentive to implementing some requirements. Forthcoming revisions to the Guiding Principles may address some of these challenges, and we discuss them under the appropriate challenge. CEQ officials told us they are aware of and plan to consider these challenges as they complete the revisions. Characteristics of Building Inventories The characteristics of building inventories that present a challenge to agencies as they implement key federal green building requirements include the age, number, and other characteristics of existing buildings; special-use buildings (e.g., laboratories, hospitals, and industrial spaces); leased space; and historic preservation status.
Age, Number, and Other Characteristics of Existing Buildings Officials from several agencies (DOD, DOE, EPA, and VA) told us that implementing requirements at existing buildings is more challenging than for new construction or major renovations. According to officials from DOD (Navy and OASD EI&E), this is because many of their buildings are old. Air Force officials told us that the majority of the existing building inventory incorporated the building standards in place at the time they were constructed and, as a result, have mechanical or other systems that do not incorporate current requirements. In addition, VA officials said that existing buildings are more difficult than new construction because certain design features that could help implement requirements such as passive solar—a building design that uses structural elements of a building to heat and cool it without the use of mechanical equipment—in many cases can only be incorporated when constructing a new building, or with greatly increased technical difficulty and cost in existing buildings. These officials said that retrofitting an existing building is also challenging if the building is occupied because occupants may require relocation, which entails moving and other costs. In addition, according to OASD EI&E officials, in some cases, existing buildings may have been inadequately maintained as a result of funding shortfalls. In January 2003, we designated federal real property as a high-risk area, in part, due to the deteriorating condition of some government facilities. We previously reported that the deteriorated conditions were due, in part, to the age of many federal facilities (often over 50 years old) and other factors that resulted in agencies deferring some maintenance and repair of their facilities. We reported that delaying or deferring routine maintenance and repairs can, in the short term, diminish the performance of these systems and, in the long term, shorten service life. 
In addition, we have previously reported on opportunities to concurrently address deferred maintenance and repair backlogs and reduce energy consumption. For example, in January 2009, we concluded that agencies can replace old systems—such as heating and air conditioning, electrical, and plumbing—with new, more efficient systems that would lead to energy savings and reduce or eliminate deferred maintenance and repair associated with the systems. DOD officials (Army and OASD EI&E) said that the sheer number of existing buildings in their portfolios is a challenge. According to DOD’s 2014 Strategic Sustainability Performance Plan, significantly increasing the percentage of DOD buildings that comply with the Guiding Principles is a challenge given the tens of thousands of older, existing buildings. According to Army officials, about 90,000 of the 150,000 existing buildings in the Army’s inventory meet the threshold—buildings greater than 5,000 square feet—that requires compliance with the Guiding Principles. Officials from the Air Force noted that improving existing buildings involves a process that includes assessing the building, determining the work needed to bring the building into compliance, identifying funding, and executing the projects, among other steps. According to these officials, obtaining the funding and executing the project could take multiple fiscal years. In addition, according to DOD’s 2014 Strategic Sustainability Performance Plan, part of the challenge posed by DOD’s existing buildings is that a large fraction of them do not have meters in place to track electricity use, and making investment decisions related to retrofits requires accurate consumption data.
Also, according to DOD and DOE officials, federal buildings are often configured and managed as campuses and, although the Guiding Principles are building-specific, DOD officials said that they are more successful implementing certain requirements, such as on-site renewable energy, at the campus level. Special-Use Buildings (Laboratories, Hospitals, and Industrial Spaces) According to officials from several agencies (DOD, DOE, EPA, and VA), their building inventories include certain building types, such as laboratories, hospitals, and industrial buildings, for which some requirements are difficult to implement. For example, according to DOE’s 2014 Strategic Sustainability Performance Plan, DOE’s building inventory consists of special-use facilities—scientific laboratories, accelerators, light sources, supercomputers and data centers, and industrial facilities—and, as a result of these factors, DOE is challenged with integrating sustainability into aging infrastructure and energy-intensive processes. Hospitals have much higher energy intensities compared with offices and other types of buildings and also have fewer opportunities for reducing energy use, according to VA’s 2014 Strategic Sustainability Performance Plan. According to VA’s plan, future reductions in energy use at VA hospitals will be challenging because of strict medical standards, energy-intensive medical equipment, and the increasing number of patient visits. In addition, VA officials said that its hospitals are already more energy efficient than the average U.S. hospital; it has already implemented the most cost-effective measures for improving energy efficiency; and additional measures would be more costly. Similarly, laboratories use significantly more energy and present greater environmental challenges than offices, according to EPA’s 2014 Strategic Sustainability Performance Plan. EPA officials told us that laboratories have resource-intensive equipment and mechanical systems.
For example, EPA’s laboratory designs include single-pass air cooling systems that use more resources than other systems. However, EPA officials told us that they plan to classify laboratories according to risk and identify those where they can adjust the number of air flows accordingly to conserve resources. Several DOD officials (Air Force, Army, and Navy) told us that many of the buildings in their inventory are industrial, which creates challenges for implementing certain key requirements. For example, the Air Force’s inventory includes aircraft maintenance facilities, ground vehicle maintenance facilities, hangars, and storage warehouses, and implementing certain requirements such as daylighting in these spaces can be challenging. Army officials noted that DOD’s industrial buildings’ energy use differs from more traditional energy use that most energy conservation measures are geared to address. According to officials from several agencies (DOD, DOE, and VA), it is difficult to apply the Guiding Principles to certain buildings or spaces. The Guiding Principles were written for more typical commercial buildings and applying them to different building types can be challenging, according to DOD officials. According to one DOE official, it would be helpful if the revisions provided some flexibility based on building type because DOE has diverse property types including office space, laboratories, and highly-secure industrial facilities such as nuclear sites. Similarly, according to VA officials, ideally the new Guiding Principles would allow specialized buildings such as medical centers a path to compliance that acknowledges their unique mission-based characteristics. Leased Space Officials from several agencies (DOD, GSA, and VA) identified challenges implementing requirements for leased space. 
For example, according to GSA officials, leases are often in buildings where the government only has a partial presence and certain requirements—such as overall water consumption reduction—cannot be met without steps being taken for the whole building. Challenges implementing the requirements for leased space may be affected by the new Executive Order and revisions to the Guiding Principles. Executive Order 13693 differs from Executive Order 13514 with regard to leases. Specifically, Executive Order 13514 required that agencies ensure that at least 15 percent of the agency’s existing buildings (above 5,000 gross square feet) and building leases (above 5,000 gross square feet) meet the Guiding Principles. However, Executive Order 13693 does not call for leased space to meet the Guiding Principles, but rather requires that agencies ensure that all new agency lease solicitations over 10,000 rentable square feet include, among other specifications, (1) criteria for energy efficiency either as a required performance specification or as a source selection evaluation factor and (2) requirements for building lessor disclosure of carbon emission or energy consumption data for that portion of the building occupied by the agency that may be provided by the lessor through submetering or estimation from prorated occupancy data, whichever is more cost-effective. Historic Preservation Status Officials from several agencies (DOD, GSA, and VA) said that implementing key requirements at historic buildings is a challenge because historic preservation requirements limit what can be done to retrofit these buildings. For example, according to Army and Navy officials, implementing new technologies to reduce energy use may be difficult because the exterior appearance or interior features of a building may need to be maintained or replacement of windows may not be allowed. 
Air Force officials noted that meeting both green building and historic preservation requirements often leads to less conventional design and construction solutions, which can significantly impact both cost and the ability to complete the project. According to GSA officials, renovating an historic building to implement key requirements is generally deemed more expensive than moving into a leased building that does not have the same stringent historic preservation requirements. While agencies identified buildings with historic preservation status as posing a challenge to their ability to implement requirements, GSA’s renovation of two historic buildings—50 United Nations Plaza Federal Office Building in San Francisco, California, and the Wayne N. Aspinall Federal Building and U.S. Courthouse in Grand Junction, Colorado—both incorporated green building requirements and received LEED Platinum certification. The renovations to the 50 United Nations Plaza Federal Office Building included new mechanical, electrical, lighting, and plumbing systems; roof replacement and refurbishment of existing historic wood windows; and restoration of the historically significant interiors and central courtyard, as well as redesign of office interiors. GSA estimated that the building at 50 United Nations Plaza would achieve annual energy savings of about 59 percent compared with a comparable building and projected annual energy savings for this project of about $393,958. In addition, according to GSA officials, although GSA could not include a photovoltaic solar array on the roof of the Wayne Aspinall Federal Building in the manner that it originally planned because historic preservation officers said it would violate the integrity of the building, GSA worked with the engineers on the project to come up with an alternative strategy to incorporate a smaller solar array on-site. 
Mission-Related Concerns Officials from all five select agencies (DOD, DOE, EPA, GSA, and VA) told us that mission-related concerns can make implementing certain key requirements challenging. For example, VA must implement new safety requirements in its hospitals and other buildings with overnight stays to help prevent and control health-care-associated Legionella disease (Legionnaires’ disease), and implementing these requirements will increasingly impact the agency’s ability to implement energy and water conservation requirements, according to VA officials. Specifically, the new safety requirements will increase water and energy demand because they require, among other activities, (1) increased flushing of hot and cold water at outlets and (2) maintaining specific water temperature ranges—cold water should be kept at or below 67 degrees to the greatest extent practicable, and hot water should be kept no lower than 124 degrees. Cooling water below 67 degrees in hot environments where cold water is commonly warmer than 67 degrees requires additional energy, and flushing water systems increases water use, according to VA officials. VA officials also said that the goals of reducing energy use and wait times for veterans are in conflict; specifically, VA is extending medical center hours to address a backlog of patients, which will increase its energy use. In addition, Air Force and VA officials told us that implementing daylighting requirements—which call for a minimum amount of daylight exposure in a certain amount of the space—is challenging due to mission-specific requirements. Specifically, Air Force officials told us that daylighting may be contrary to what the space is used for or potentially detrimental to the mission.
For example, daylighting may not be possible because of security concerns in spaces, such as a Sensitive Compartmented Information Facility—an enclosed area within a building that does not have windows and is used to process sensitive information—or it is not practical in a space, such as a command control center where daylight could disrupt the ability to view screens. Competing Priorities Officials from all five select agencies (DOD, DOE, EPA, GSA, and VA) told us that they face challenges because they have multiple priorities that compete for limited resources. In addition, DOD and DOE officials said that there are limited incentives to implement requirements that do not have any economic benefit. Specifically, according to DOD officials, the use of limited resources to implement certain key requirements—such as those that aim to improve indoor air quality—can be difficult to justify because they may not also reduce energy use or operating costs. Also, DOD officials said that green buildings can increase occupant productivity and morale, but there is no way to include these intangible benefits in a life-cycle cost analysis. According to VA officials and its 2014 Strategic Sustainability Performance Plan, retaining green building features in already-designed new construction projects is challenging due to budget constraints and the need to address higher-priority, mission-based needs. Officials told us that it is challenging to ensure that green building elements are retained, and not removed to cut costs, when a project looks like it will go over budget. According to EPA’s 2014 Strategic Sustainability Performance Plan, its laboratory mechanical system upgrades are complex and frequently take several years to design, complete, and commission, and finding ways to fund projects, including sustainable building improvement projects, in a time of reduced resources is challenging.
Criteria for Evaluating Compliance Officials from DOD and DOE told us that the criteria used to evaluate compliance with the Guiding Principles—which require a building to meet all of the dozens of requirements included in the Guiding Principles—can be a disincentive to implementing some requirements at an individual building because agencies receive no credit for implementing one requirement if they do not implement all of them. Air Force officials said that the current criteria encourage agencies to focus on investing in high-performing buildings for which a relatively small investment results in compliance. These officials said that this is in conflict with an approach focused on addressing the worst-performing buildings and systems first and that, as a result, pursuing compliance in isolation would conflict with agency-wide energy and water strategies. Revisions to the Guiding Principles could affect this challenge if, as Air Force officials stated, the criteria used to evaluate implementation are adjusted to allow buildings to demonstrate progress as opposed to being an all-or-nothing standard. CEQ officials could not comment on whether the all-or-nothing approach would be reconsidered as part of the revision, but officials said that they were aware of that issue and want to ensure that they are not providing any disincentives for agencies to meet some of the requirements even if they cannot meet all of them. Agency Comments We provided CEQ, DOD, DOE, EPA, GSA, OMB, and VA with a draft of this report for their review and comment. DOE and VA provided written comments, reproduced in appendixes IV and V, respectively, and also provided technical comments that were incorporated, as appropriate. CEQ, DOD, EPA, GSA, and OMB either had no comments or provided technical comments that were incorporated, as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Chairman of the Council on Environmental Quality; the Administrators of the General Services Administration and the Environmental Protection Agency; the Director of the Office of Management and Budget; and the Secretaries of Defense, Energy, and Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Frank Rusco at (202) 512-3841 or [email protected], Brian J. Lepore at (202) 512-4523 or [email protected], or David J. Wise at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix VI.

Appendix I: 2008 Guiding Principles for Sustainable New Construction, Major Renovations, and Existing Buildings

I. Employ integrated design principles

Integrated design. New construction and major renovations: Use a collaborative, integrated planning and design process that establishes and maintains an integrated project team as described on the Whole Building Design Guide <http://www.wbdg.org/design/engage_process.php> in all stages of a project’s planning and delivery; integrates the use of the Office of Management and Budget’s A-11, Section 7, Exhibit 300: Capital Asset Plan and Business Case Summary; establishes performance goals for siting, energy, water, materials, and indoor environmental quality along with other comprehensive design goals and ensures incorporation of these goals throughout the design and lifecycle of the building; and considers all stages of the building’s lifecycle, including deconstruction. Existing buildings: Use an integrated team to develop and implement policy regarding sustainable operations and maintenance. Incorporate sustainable operations and maintenance practices within the appropriate Environmental Management System.
Existing buildings (continued): Assess existing condition and operational procedures of the building and major building systems and identify areas for improvement. Establish operational performance goals for energy, water, material use and recycling, and indoor environmental quality, and ensure incorporation of these goals throughout the remaining lifecycle of the building. Incorporate a building management plan to ensure that operating decisions and tenant education are carried out with regard to integrated, sustainable building operations and maintenance. Augment building operations and maintenance as needed using occupant feedback on work space satisfaction.

Commissioning. New construction and major renovations: Employ commissioning practices tailored to the size and complexity of the building and its system components in order to verify performance of building components and systems and help ensure that design requirements are met. This should include an experienced commissioning provider, inclusion of commissioning requirements in construction documents, a commissioning plan, verification of the installation and performance of systems to be commissioned, and a commissioning report. Existing buildings: Employ recommissioning, tailored to the size and complexity of the building and its system components, in order to optimize and verify performance of fundamental building systems. Commissioning must be performed by an experienced commissioning provider. When building commissioning has been performed, the commissioning report, summary of actions taken, and schedule for recommissioning must be documented. In addition, meet the requirements of the Energy Independence and Security Act of 2007 (EISA), Section 432, and associated Federal Energy Management Program (FEMP) guidance. Building recommissioning must have been performed within 4 years prior to reporting a building as meeting the Guiding Principles.

II. Optimize energy performance

Energy efficiency. New construction and major renovations: Design to earn the ENERGY STAR® targets for new construction and major renovation where applicable. For new construction, reduce energy use by 30 percent compared to the baseline building performance rating per the American National Standards Institute/American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE)/Illuminating Engineering Society of North America Standard 90.1-2007, Energy Standard for Buildings Except Low-Rise Residential. For major renovations, reduce energy use by 20 percent below the pre-renovation 2003 baseline. Laboratory spaces may use the Labs21 Laboratory Modeling Guidelines. Existing buildings: Three options can be used to measure energy efficiency performance: Option 1: Achieve an ENERGY STAR® rating of 75 or higher or an equivalent Labs21 Benchmarking Tool score for laboratory buildings. Option 2: Reduce measured building energy use by 20 percent compared to building energy use in 2003 or a year thereafter with quality energy use data. Option 3: Reduce energy use by 20 percent compared to the ASHRAE 90.1-2007 baseline building design if design information is available.

Energy efficient products. Use ENERGY STAR® and FEMP-designated energy efficient products, where available (this requirement applies to both new construction and existing buildings).

Measurement and verification. For both new construction and existing buildings: Per the Energy Policy Act of 2005, Section 103, install building-level electricity meters to track and continuously optimize performance. Per EISA Section 434, include equivalent meters for natural gas and steam, where natural gas and steam are used.
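Because the energy efficiency requirement for existing buildings treats its three options as alternatives, a building needs to satisfy only one of them. As a minimal illustrative sketch (the function and parameter names are our own, not part of the Guiding Principles or any agency tool), the any-of logic can be expressed as:

```python
# Hypothetical illustration only: restates the "three options" logic for
# existing buildings' energy performance described in the text above.

def meets_energy_options(energy_star_rating=None,
                         measured_use=None, baseline_2003_use=None,
                         designed_use=None, ashrae_baseline_use=None):
    """Return True if any one of the three alternative options is satisfied."""
    # Option 1: an ENERGY STAR rating of 75 or higher (or an equivalent
    # Labs21 Benchmarking Tool score for laboratory buildings).
    if energy_star_rating is not None and energy_star_rating >= 75:
        return True
    # Option 2: measured energy use at least 20 percent below the 2003
    # (or a later quality-data year) baseline.
    if measured_use is not None and baseline_2003_use:
        if measured_use <= 0.80 * baseline_2003_use:
            return True
    # Option 3: energy use at least 20 percent below the ASHRAE 90.1-2007
    # baseline building design, if design information is available.
    if designed_use is not None and ashrae_baseline_use:
        if designed_use <= 0.80 * ashrae_baseline_use:
            return True
    return False

print(meets_energy_options(energy_star_rating=80))                   # True
print(meets_energy_options(measured_use=90, baseline_2003_use=100))  # False (only a 10 percent cut)
```

Note that, unlike the all-or-nothing criteria for overall Guiding Principles compliance discussed in the body of this report, the options within this single requirement are disjunctive: meeting any one of the three suffices.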
Benchmarking. New construction and major renovations: Compare actual performance data from the first year of operation with the energy design target, preferably by using ENERGY STAR® Portfolio Manager for building and space types covered by ENERGY STAR®. Verify that the building performance meets or exceeds the design target, or that actual energy use is within 10 percent of the design energy budget for all other building types. For other building and space types, use an equivalent benchmarking tool such as the Labs21 benchmarking tool for laboratory buildings. Existing buildings: Compare annual performance data with previous years’ performance data, preferably by entering annual performance data into the ENERGY STAR® Portfolio Manager. For building and space types not available in ENERGY STAR®, use an equivalent benchmarking tool, such as the Labs21 benchmarking tool for laboratory buildings.

On-site renewable energy. Per Executive Order 13423, implement renewable energy generation projects on agency property for agency use, when life-cycle cost-effective (this requirement applies to both new construction and existing buildings). Per the Energy Independence and Security Act (EISA), Section 523, meet at least 30 percent of the hot water demand through the installation of solar hot water heaters, when life-cycle cost-effective.

III. Protect and conserve water

Indoor water. New construction and major renovations: Reduce indoor potable water use; the use of harvested rainwater, treated wastewater, and air conditioner condensate should also be considered and used where feasible for nonpotable use and potable use where allowed. Existing buildings: Two options can be used to measure indoor potable water use performance: Option 1: Reduce building measured potable water use compared to a calculated water baseline; the water baseline for plumbing fixtures older than 1994 is 160 percent of the Uniform Plumbing Codes 2006 or the International Plumbing Codes 2006 fixture performance requirements. Option 2: Reduce building measured potable water use by 20 percent compared to building water use in 2003, or a year thereafter with quality water data.

Outdoor water.
New construction and major renovations: Use water efficient landscape and irrigation strategies, such as water reuse, recycling, and the use of harvested rainwater, to reduce outdoor potable water consumption by a minimum of 50 percent over that consumed by conventional means. The installation of water meters for locations with significant outdoor water use is encouraged. Existing buildings: Three options can be used to measure outdoor potable water use performance: Option 1: Reduce potable irrigation water use by 50 percent compared to conventional methods. Option 2: Reduce building-related potable irrigation water use by 50 percent compared to measured irrigation water use in 2003 or a year thereafter with quality water data. Option 3: Use no potable irrigation water.

Stormwater. New construction and major renovations: Employ design and construction strategies that reduce storm water runoff and discharges of polluted water off-site. Per EISA Section 438, to the maximum extent technically feasible, maintain or restore the predevelopment hydrology of the site with regard to temperature, rate, volume, and duration of flow using site planning, design, construction, and maintenance strategies. Existing buildings: Employ strategies that reduce storm water runoff and discharges of polluted water off-site. Per EISA Section 438, where redevelopment affects site hydrology, use site planning, design, construction, and maintenance strategies to maintain hydrologic conditions during development, or to restore hydrologic conditions following development, to the maximum extent that is technically feasible.

Water metering. The installation of water meters for building sites with significant indoor and outdoor water use is encouraged. If only one meter is installed, reduce potable water use (indoor and outdoor combined) by at least 20 percent compared to building water use in 2003, or a year thereafter with quality water data.

Process water. Per the Energy Policy Act of 2005, Section 109, when potable water is used to improve a building’s energy efficiency, deploy life-cycle cost-effective water conservation measures.
Specify the Environmental Protection Agency’s (EPA) WaterSense-labeled products or other water conserving products, where available. Choose irrigation contractors who are certified through a WaterSense labeled program. IV. Enhance indoor environmental quality. Ventilation/thermal comfort. Meet ASHRAE Standard 55-2004, Thermal Environmental Conditions for Human Occupancy, including continuous humidity control within established ranges per climate zone, and ASHRAE Standard 62.1-2007, Ventilation for Acceptable Indoor Air Quality. For new construction and major renovations, establish and implement a moisture control strategy for controlling moisture flows and condensation to prevent building damage, minimize mold contamination, and reduce health risks related to moisture. For façade renovations, dew point analysis and a plan for cleanup or infiltration of moisture into building materials are required. Achieve a minimum daylight factor of 2% (excluding all direct sunlight penetration) in 75% of all space occupied for critical visual tasks. Provide automatic dimming controls or accessible manual lighting controls, and appropriate glare control. Automated lighting controls (occupancy/vacancy sensors with manual-off capability) are provided for appropriate spaces including restrooms, conference and meeting rooms, employee lunch and break rooms, training classrooms, and offices.
Two options can be used to meet additional daylighting and lighting controls performance expectations: Option 1: Achieve a minimum daylight factor of 2% (excluding all direct sunlight penetration) in 50% of all space occupied for critical visual tasks, or Option 2: Provide occupant controlled lighting, allowing adjustments to suit individual task needs, for 50% of regularly occupied spaces. Specify materials and products with low pollutant emissions, including composite wood products, adhesives, sealants, interior paints and finishes, carpet systems, and furnishings. Use low emitting materials for building modifications, maintenance, and cleaning; in particular, specify the following materials and products to have low pollutant emissions: composite wood products, adhesives, sealants, interior paints and finishes, solvents, carpet systems, janitorial supplies, and furnishings. Prohibit smoking within the building and within 25 feet of all building entrances, operable windows, and building ventilation intakes; implement a policy and post signage indicating the prohibition during building occupancy. Use integrated pest management techniques as appropriate to minimize pesticide usage. Use EPA-registered pesticides only when needed. Follow the recommended approach of the Sheet Metal and Air Conditioning Contractors’ National Association Indoor Air Quality Guidelines for Occupied Buildings under Construction, 2007. After construction and prior to occupancy, conduct a minimum 72-hour flush-out with maximum outdoor air consistent with achieving relative humidity no greater than 60%. After occupancy, continue flush-out as necessary to minimize exposure to contaminants from new building materials. Recycled content. For new construction and major renovations, for EPA-designated products, specify products meeting or exceeding EPA’s recycled content recommendations.
For other products, specify materials with recycled content when practicable. If EPA-designated products meet performance requirements and are available at a reasonable cost, a preference for purchasing them shall be included in all solicitations relevant to construction, operation, maintenance of, or use in the building. EPA’s recycled content product designations and recycled content recommendations are available on EPA’s Comprehensive Procurement Guideline website at <www.epa.gov/cpg>. For building modifications, maintenance, and cleaning, for EPA-designated products, use products meeting or exceeding EPA’s recycled content recommendations. For other products, use materials with recycled content such that the sum of postconsumer recycled content plus one-half of the preconsumer content constitutes at least 10% (based on cost or weight) of the total value of the materials in the project. Biobased content. Per Section 9002 of the Farm Security and Rural Investment Act, for USDA-designated products, specify products with the highest content level per USDA’s biobased content recommendations. For other products, specify biobased products made from rapidly renewable resources and certified sustainable wood products. If these designated products meet performance requirements and are available at a reasonable cost, a preference for purchasing them shall be included in all solicitations relevant to construction, operation, maintenance of, or use in the building.
USDA’s biobased product designations and biobased content recommendations are available on USDA’s BioPreferred website at <www.usda.gov/biopreferred>. Per Section 9002 of the Farm Security and Rural Investment Act, for USDA-designated products, use products with the highest content level per USDA’s biobased content recommendations; for other products, use biobased products made from rapidly renewable resources and certified sustainable wood products. Environmentally preferable products. Use products that have a lesser or reduced effect on human health and the environment over their lifecycle when compared with competing products or services that serve the same purpose. A number of standards and ecolabels are available in the marketplace to assist specifiers in making environmentally preferable decisions. For recommendations, consult the Federal Green Construction Guide for Specifiers at <www.wbdg.org/design/greenspec.php>.
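The recycled-content threshold described earlier (postconsumer content plus one-half of preconsumer content totaling at least 10% of the total material value, measured consistently by cost or weight) is a one-line calculation. A minimal sketch; the function name and dollar figures are illustrative assumptions, not from the report:

```python
# Hypothetical check of the recycled-content rule: postconsumer value
# plus half of preconsumer value must be at least 10% of the total
# material value in the project.

def recycled_content_fraction(postconsumer: float, preconsumer: float,
                              total_value: float) -> float:
    """Fraction of total material value credited as recycled content."""
    return (postconsumer + 0.5 * preconsumer) / total_value

# $8,000 postconsumer plus half of $6,000 preconsumer is $11,000 of a
# $100,000 materials budget: 11%, which meets the 10% threshold.
frac = recycled_content_fraction(8_000, 6_000, 100_000)
print(frac >= 0.10)  # True
```

Note that preconsumer (manufacturing scrap) content is discounted by half relative to postconsumer content, so materials rich in postconsumer content reach the threshold faster.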
For new construction and major renovations, recycle or salvage construction, demolition, and land clearing materials, excluding soil, where markets or on-site recycling opportunities exist. Provide salvage, reuse, and recycling services for waste generated from major renovations, where markets or on-site recycling opportunities exist, and provide recycling services for waste streams such as beverage containers and paper from building occupants, batteries, toner cartridges, outdated computers from an equipment update, and construction materials from a minor renovation. Eliminate the use of ozone depleting compounds during and after construction where alternative environmentally preferable products are available, consistent with either the Montreal Protocol and Title VI of the Clean Air Act Amendments of 1990, or equivalent overall air quality benefits that take into account lifecycle impacts. Appendix II: Select Federal Efforts to Support Implementation of Key Federal Green Building Requirements. Provides instructions to federal agencies on designing, constructing, maintaining, and operating buildings in sustainable locations, as called for in Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance. Provides instructions to federal agencies on implementation of water use efficiency and management goals in Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance. Web-based and in-person training on the Guiding Principles, including training customized to an agency’s needs. Establishes guidelines for agencies to meter their buildings for energy (electricity, natural gas, and steam) and water.
Among other guidance, defines which buildings are appropriate to meter and provides metering prioritization recommendations for those agencies with limited resources. Designates ENERGY STAR Portfolio Manager as the building energy use benchmarking system to use for federal facilities. Describes minimum data inputs and public disclosure requirements, among other things. Training and educational tools that describe types of building commissioning—including recommissioning and continuous commissioning—and when and where each might best be used to ensure that a facility performs according to its design and the needs of its owners and occupants. Training and education on applying lifecycle cost analysis to evaluate the cost-effectiveness of energy and water efficiency investments, with assistance provided by the National Institute of Standards and Technology. Identifies products that are in the upper 25% of their class in energy efficiency. FEMP sets efficiency levels for product categories that have the potential to generate significant federal energy savings. Allows laboratory owners to compare the performance of their laboratories to similar facilities and thereby help identify potential energy cost savings opportunities. Online tool for tracking and assessing energy and water use. Certain property types can receive a 1-100 ENERGY STAR score, which compares a building’s energy performance to similar buildings nationwide. Designed to assist agencies in assessing their existing buildings against the Guiding Principles, including serving as a repository for compliance documents. Offers guidance and tools for purchasing products or services that have a lesser or reduced effect on human health and the environment when compared with competing products or services that serve the same purpose. Gives agencies a framework to help them reduce storm water runoff from development projects and protect water resources.
Aims to provide consumers with easy ways to save water, as both a label for products—such as toilets and sinks—and an information resource to help people use water more efficiently. General Services Administration (GSA) Sustainable Facilities Tool (SFTool): Web-based tool intended for facility managers, leasing specialists, and project managers that provides education on sustainability issues, including on the Guiding Principles. Developed leasing clauses that can be used to demonstrate that a lease complies with the Guiding Principles. DOE and GSA Interagency Sustainability Working Group: Provides sustainability officials from federal agencies a forum for information exchange and feedback on sustainability issues. Describes preaward and postaward procurement actions to verify compliance with a contract’s sustainable requirements, and provides resources for confirming a contractor has provided acceptable documentation to show compliance with sustainable requirements. Scores agencies on whether they are meeting intermediate goals for compliance with sustainability goals, including for the Guiding Principles. Appendix III: Third-Party Certification Systems Reviewed or Required by Select Federal Agencies. Summary: Projects attain a rating through the achievement of all prerequisites and points in different categories related to the eight areas of focus. The total possible points vary based on the version of LEED that is used. LEED is a web-based system and all documentation is submitted online. Green Business Certification Inc. provides the third-party certification service by reviewing the submitted documentation. Projects attain a rating through the achievement of points in different categories related to seven areas of focus. A project can attain a total of 1,000 points. An initial web-based survey is completed, and subsequent documentation is submitted to the third-party assessor or can be submitted online.
An on-site assessment is required for certification. The third-party assessor is contracted by the Green Building Initiative. Summary: Projects attain ‘Living’ status by completing all the imperatives, or categories, related to seven petals, or areas of focus. ‘Living’ status means that a building is regenerative, not just green. A building can receive Petal Certification if it meets the requirements of three or more petals, including water, energy, or materials. A project can complete petals in three typologies, or certification types. A project can attain Net Zero Energy certification by demonstrating through actual performance data that it produces more energy than it consumes. Rating levels: Living Building Challenge Award and Certificate; Petal Recognition; Net Zero Energy Certification. This is not a comprehensive list of categories and subcategories for LEED certification. Examples of other categories include retail, schools, and hospitality. Appendix IV: Comments from the Department of Energy. Appendix V: Comments from the Department of Veterans Affairs. Appendix VI: GAO Contacts and Staff Acknowledgments. In addition to the individuals named above, Karla Springer (Assistant Director), Harold Reich (Assistant Director), Sara Vermillion (Assistant Director), Janice Ceperich, John Delicath, Swati Deo, Debra Draper, Philip Farah, Cindy Gilbert, Geoffrey Hamilton, Armetha Liles, Marietta Mayfield Revesz, and Barbara Timmerman made key contributions to this report.
A March 2015 executive order required CEQ to revise key green building requirements and extended the time frames for implementation in existing buildings. Third-party certification systems are used to assess how well green building elements are incorporated into a building's design and operation. GAO was asked to review federal green building efforts and agencies' use of third-party certification systems. This report examines (1) federal efforts to support agencies' implementation of key green building requirements, (2) select agencies' use of third-party certification systems, and (3) challenges select agencies face in implementing requirements. GAO reviewed federal requirements and agency policies and guidance, and interviewed officials from agencies with supporting roles and agencies with experience implementing the requirements and using different certification systems. GAO also reviewed documentation and interviewed representatives from third-party certification organizations. GAO is not making recommendations. CEQ, DOD, DOE, EPA, GSA, OMB, and VA reviewed a draft report and most provided technical comments that GAO incorporated, as appropriate. The Council on Environmental Quality (CEQ), Department of Energy (DOE), Environmental Protection Agency (EPA), General Services Administration (GSA), and Office of Management and Budget (OMB) provide guidance, oversight, training, and other support to agencies implementing key federal green building requirements. For example, DOE offers training on measuring and reporting on the implementation of requirements, among other things. Also, EPA's Energy Star Portfolio Manager is a web-based tool agencies and other entities can use to measure and track buildings' energy and water use. According to officials, some federal support efforts will need to be updated when the revised requirements are issued, as called for in the March 2015 executive order.
All of the select agencies GAO reviewed—Department of Defense (DOD), DOE, EPA, GSA, and the Department of Veterans Affairs (VA)—use third-party certification systems to help implement key federal green building requirements for new construction and major renovation projects. While certification does not ensure that a building meets all requirements, agencies have developed tools to ensure that any remaining federal requirements are implemented at their buildings, and officials noted that there are additional benefits to using these systems. For example, officials stated that certification provides a well-established framework for documenting and ensuring compliance; serves as a tool to communicate with contractors and the public; and reduces the need for additional staff to verify that a building meets requirements. Of the select agencies GAO reviewed, none require third-party certification for existing buildings, but three have developed their own systems for assessing the implementation of key requirements for existing buildings. Several agencies stated that they are not certain how they will use third-party certification systems in the future after the revisions to key green building requirements are issued. For example, EPA and VA officials stated that they may reevaluate their requirement to certify specific projects after the revised green building requirements are issued. Regardless of whether they use certification systems, the agencies GAO reviewed identified a variety of challenges in implementing current green building requirements, including challenges related to their building inventories, missions, and the criteria for evaluating compliance. For example, DOD officials said that the sheer number of buildings in their inventory proves challenging. 
In addition, according to officials from several agencies, their building inventories include certain building types, such as laboratories, hospitals, and industrial buildings for which some requirements are difficult to implement. VA cited mission concerns, including new safety requirements and extended hours to address patient backlogs, as a challenge to implementing energy and water conservation requirements. Also, some agency officials said that the criteria for evaluating compliance with the requirements can be a disincentive to implementing some requirements because no credit is received unless all of the requirements are implemented. Forthcoming revisions to key green building requirements may address some of these challenges. CEQ officials said that they were aware of the challenges and want to ensure that they are not providing any disincentives for agencies to meet some of the requirements even if they cannot meet all. |